| column | dtype | values |
|---|---|---|
| problem_id | string | lengths 18-22 |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | lengths 13-58 |
| prompt | string | lengths 1.71k-18.9k |
| golden_diff | string | lengths 145-5.13k |
| verification_info | string | lengths 465-23.6k |
| num_tokens_prompt | int64 | 556-4.1k |
| num_tokens_diff | int64 | 47-1.02k |
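A minimal sketch of loading and inspecting these records with the Hugging Face `datasets` library is shown below. The dataset id (`rasdani/github-patches`, taken from the `source` column), the `train` split name, and the treatment of `verification_info` as a JSON-encoded string are assumptions inferred from this page, not confirmed details.

```python
# Minimal sketch, not a confirmed loading recipe:
# - assumes the data is published on the Hugging Face Hub under the id
#   "rasdani/github-patches" (the value shown in the `source` column)
# - assumes a "train" split exists
# - assumes `verification_info` is a JSON-encoded string
import json

from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")

row = ds[0]                      # one record, returned as a dict
print(row["problem_id"])         # e.g. "gh_patches_debug_49673"
print(row["in_source_id"])       # e.g. "kserve__kserve-2835"
print(row["num_tokens_prompt"], row["num_tokens_diff"])

info = json.loads(row["verification_info"])
print(info.keys())               # e.g. golden_diff, issue, before_files
```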
problem_id: gh_patches_debug_49673
source: rasdani/github-patches
task_type: git_diff
in_source_id: kserve__kserve-2835
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No matches for kind \"HorizontalPodAutoscaler\" in version \"autoscaling/v2beta2\
/kind bug
**What steps did you take and what happened:**
Deploy kserve in raw mode on kubernetes 1.26 where autoscaling/v2beta2 is no longer available
**What did you expect to happen:**
Kserve should support v2 of the api
</issue>
<code>
[start of hack/python-sdk/update_release_version_helper.py]
1 #!/usr/bin/env python3
2
3 # Copyright 2023 The KServe Authors.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import tomlkit
18 import argparse
19
20 parser = argparse.ArgumentParser(description="Update release version in python toml files")
21 parser.add_argument("version", type=str, help="release version")
22 args, _ = parser.parse_known_args()
23
24 toml_files = [
25 "python/kserve/pyproject.toml",
26 "python/aiffairness/pyproject.toml",
27 "python/aixexplainer/pyproject.toml",
28 "python/alibiexplainer/pyproject.toml",
29 "python/artexplainer/pyproject.toml",
30 "python/custom_model/pyproject.toml",
31 "python/custom_transformer/pyproject.toml",
32 "python/lgbserver/pyproject.toml",
33 "python/paddleserver/pyproject.toml",
34 "python/pmmlserver/pyproject.toml",
35 "python/sklearnserver/pyproject.toml",
36 "python/xgbserver/pyproject.toml",
37 ]
38
39 for toml_file in toml_files:
40 with open(toml_file, "r") as file:
41 toml_config = tomlkit.load(file)
42 toml_config['tool']['poetry']['version'] = args.version
43
44 with open(toml_file, "w") as file:
45 tomlkit.dump(toml_config, file)
46
[end of hack/python-sdk/update_release_version_helper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/hack/python-sdk/update_release_version_helper.py b/hack/python-sdk/update_release_version_helper.py
--- a/hack/python-sdk/update_release_version_helper.py
+++ b/hack/python-sdk/update_release_version_helper.py
@@ -24,7 +24,6 @@
toml_files = [
"python/kserve/pyproject.toml",
"python/aiffairness/pyproject.toml",
- "python/aixexplainer/pyproject.toml",
"python/alibiexplainer/pyproject.toml",
"python/artexplainer/pyproject.toml",
"python/custom_model/pyproject.toml",
verification_info:
{"golden_diff": "diff --git a/hack/python-sdk/update_release_version_helper.py b/hack/python-sdk/update_release_version_helper.py\n--- a/hack/python-sdk/update_release_version_helper.py\n+++ b/hack/python-sdk/update_release_version_helper.py\n@@ -24,7 +24,6 @@\n toml_files = [\n \"python/kserve/pyproject.toml\",\n \"python/aiffairness/pyproject.toml\",\n- \"python/aixexplainer/pyproject.toml\",\n \"python/alibiexplainer/pyproject.toml\",\n \"python/artexplainer/pyproject.toml\",\n \"python/custom_model/pyproject.toml\",\n", "issue": "No matches for kind \\\"HorizontalPodAutoscaler\\\" in version \\\"autoscaling/v2beta2\\\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nDeploy kserve in raw mode on kubernetes 1.26 where autoscaling/v2beta2 is no longer available\r\n\r\n\r\n**What did you expect to happen:**\r\nKserve should support v2 of the api\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright 2023 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport tomlkit\nimport argparse\n\nparser = argparse.ArgumentParser(description=\"Update release version in python toml files\")\nparser.add_argument(\"version\", type=str, help=\"release version\")\nargs, _ = parser.parse_known_args()\n\ntoml_files = [\n \"python/kserve/pyproject.toml\",\n \"python/aiffairness/pyproject.toml\",\n \"python/aixexplainer/pyproject.toml\",\n \"python/alibiexplainer/pyproject.toml\",\n \"python/artexplainer/pyproject.toml\",\n \"python/custom_model/pyproject.toml\",\n \"python/custom_transformer/pyproject.toml\",\n \"python/lgbserver/pyproject.toml\",\n \"python/paddleserver/pyproject.toml\",\n \"python/pmmlserver/pyproject.toml\",\n \"python/sklearnserver/pyproject.toml\",\n \"python/xgbserver/pyproject.toml\",\n]\n\nfor toml_file in toml_files:\n with open(toml_file, \"r\") as file:\n toml_config = tomlkit.load(file)\n toml_config['tool']['poetry']['version'] = args.version\n\n with open(toml_file, \"w\") as file:\n tomlkit.dump(toml_config, file)\n", "path": "hack/python-sdk/update_release_version_helper.py"}]}
num_tokens_prompt: 1,115
num_tokens_diff: 133
problem_id: gh_patches_debug_30014
source: rasdani/github-patches
task_type: git_diff
in_source_id: DDMAL__CantusDB-1183
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`created_by` and `last_updated_by` fields should be read only on source admin change page
They should be readonly fields instead. I don't recall seeing them at the top of the page like they are now. Maybe previously they were hidden and now they are visible.
</issue>
<code>
[start of django/cantusdb_project/main_app/admin.py]
1 from django.contrib import admin
2 from main_app.models import *
3 from main_app.forms import (
4 AdminCenturyForm,
5 AdminChantForm,
6 AdminFeastForm,
7 AdminGenreForm,
8 AdminNotationForm,
9 AdminOfficeForm,
10 AdminProvenanceForm,
11 AdminRismSiglumForm,
12 AdminSegmentForm,
13 AdminSequenceForm,
14 AdminSourceForm,
15 )
16
17 # these fields should not be editable by all classes
18 EXCLUDE = (
19 "created_by",
20 "last_updated_by",
21 "json_info",
22 )
23
24
25 class BaseModelAdmin(admin.ModelAdmin):
26 exclude = EXCLUDE
27
28 # if an object is created in the admin interface, assign the user to the created_by field
29 # else if an object is updated in the admin interface, assign the user to the last_updated_by field
30 def save_model(self, request, obj, form, change):
31 if change:
32 obj.last_updated_by = request.user
33 else:
34 obj.created_by = request.user
35 super().save_model(request, obj, form, change)
36
37
38 class CenturyAdmin(BaseModelAdmin):
39 search_fields = ("name",)
40 form = AdminCenturyForm
41
42
43 class ChantAdmin(BaseModelAdmin):
44 @admin.display(description="Source Siglum")
45 def get_source_siglum(self, obj):
46 if obj.source:
47 return obj.source.siglum
48
49 list_display = (
50 "incipit",
51 "get_source_siglum",
52 "genre",
53 )
54 search_fields = (
55 "title",
56 "incipit",
57 "cantus_id",
58 "id",
59 )
60
61 readonly_fields = (
62 "date_created",
63 "date_updated",
64 )
65
66 list_filter = (
67 "genre",
68 "office",
69 )
70 exclude = EXCLUDE + (
71 "col1",
72 "col2",
73 "col3",
74 "next_chant",
75 "s_sequence",
76 "is_last_chant_in_feast",
77 "visible_status",
78 "date",
79 "volpiano_notes",
80 "volpiano_intervals",
81 "title",
82 "differentiae_database",
83 )
84 form = AdminChantForm
85 raw_id_fields = (
86 "source",
87 "feast",
88 )
89 ordering = ("source__siglum",)
90
91
92 class DifferentiaAdmin(BaseModelAdmin):
93 search_fields = (
94 "differentia_id",
95 "id",
96 )
97
98
99 class FeastAdmin(BaseModelAdmin):
100 search_fields = (
101 "name",
102 "feast_code",
103 )
104 list_display = (
105 "name",
106 "month",
107 "day",
108 "feast_code",
109 )
110 form = AdminFeastForm
111
112
113 class GenreAdmin(BaseModelAdmin):
114 search_fields = ("name",)
115 form = AdminGenreForm
116
117
118 class NotationAdmin(BaseModelAdmin):
119 search_fields = ("name",)
120 form = AdminNotationForm
121
122
123 class OfficeAdmin(BaseModelAdmin):
124 search_fields = ("name",)
125 form = AdminOfficeForm
126
127
128 class ProvenanceAdmin(BaseModelAdmin):
129 search_fields = ("name",)
130 form = AdminProvenanceForm
131
132
133 class RismSiglumAdmin(BaseModelAdmin):
134 search_fields = ("name",)
135 form = AdminRismSiglumForm
136
137
138 class SegmentAdmin(BaseModelAdmin):
139 search_fields = ("name",)
140 form = AdminSegmentForm
141
142
143 class SequenceAdmin(BaseModelAdmin):
144 @admin.display(description="Source Siglum")
145 def get_source_siglum(self, obj):
146 if obj.source:
147 return obj.source.siglum
148
149 search_fields = (
150 "title",
151 "incipit",
152 "cantus_id",
153 "id",
154 )
155 exclude = EXCLUDE + (
156 "c_sequence",
157 "next_chant",
158 "is_last_chant_in_feast",
159 "visible_status",
160 )
161 list_display = ("incipit", "get_source_siglum", "genre")
162 list_filter = (
163 "genre",
164 "office",
165 )
166 raw_id_fields = (
167 "source",
168 "feast",
169 )
170 ordering = ("source__siglum",)
171 form = AdminSequenceForm
172
173
174 class SourceAdmin(BaseModelAdmin):
175 exclude = ("source_status",)
176
177 # These search fields are also available on the user-source inline relationship in the user admin page
178 search_fields = (
179 "siglum",
180 "title",
181 "id",
182 )
183 readonly_fields = (
184 "number_of_chants",
185 "number_of_melodies",
186 "date_created",
187 "date_updated",
188 )
189 # from the Django docs:
190 # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript “filter” interface
191 # that allows searching within the options. The unselected and selected options appear in two boxes side by side.
192 filter_horizontal = (
193 "century",
194 "notation",
195 "current_editors",
196 "inventoried_by",
197 "full_text_entered_by",
198 "melodies_entered_by",
199 "proofreaders",
200 "other_editors",
201 )
202
203 list_display = (
204 "title",
205 "siglum",
206 "id",
207 )
208
209 list_filter = (
210 "full_source",
211 "segment",
212 "source_status",
213 "published",
214 "century",
215 )
216
217 ordering = ("siglum",)
218
219 form = AdminSourceForm
220
221
222 admin.site.register(Century, CenturyAdmin)
223 admin.site.register(Chant, ChantAdmin)
224 admin.site.register(Differentia, DifferentiaAdmin)
225 admin.site.register(Feast, FeastAdmin)
226 admin.site.register(Genre, GenreAdmin)
227 admin.site.register(Notation, NotationAdmin)
228 admin.site.register(Office, OfficeAdmin)
229 admin.site.register(Provenance, ProvenanceAdmin)
230 admin.site.register(RismSiglum, RismSiglumAdmin)
231 admin.site.register(Segment, SegmentAdmin)
232 admin.site.register(Sequence, SequenceAdmin)
233 admin.site.register(Source, SourceAdmin)
234
[end of django/cantusdb_project/main_app/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py
--- a/django/cantusdb_project/main_app/admin.py
+++ b/django/cantusdb_project/main_app/admin.py
@@ -16,14 +16,17 @@
# these fields should not be editable by all classes
EXCLUDE = (
- "created_by",
- "last_updated_by",
"json_info",
)
+READ_ONLY = (
+ "created_by",
+ "last_updated_by",
+)
class BaseModelAdmin(admin.ModelAdmin):
exclude = EXCLUDE
+ readonly_fields = READ_ONLY
# if an object is created in the admin interface, assign the user to the created_by field
# else if an object is updated in the admin interface, assign the user to the last_updated_by field
@@ -58,7 +61,7 @@
"id",
)
- readonly_fields = (
+ readonly_fields = READ_ONLY + (
"date_created",
"date_updated",
)
@@ -172,7 +175,7 @@
class SourceAdmin(BaseModelAdmin):
- exclude = ("source_status",)
+ exclude = EXCLUDE + ("source_status",)
# These search fields are also available on the user-source inline relationship in the user admin page
search_fields = (
@@ -180,7 +183,7 @@
"title",
"id",
)
- readonly_fields = (
+ readonly_fields = READ_ONLY + (
"number_of_chants",
"number_of_melodies",
"date_created",
verification_info:
{"golden_diff": "diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py\n--- a/django/cantusdb_project/main_app/admin.py\n+++ b/django/cantusdb_project/main_app/admin.py\n@@ -16,14 +16,17 @@\n \n # these fields should not be editable by all classes\n EXCLUDE = (\n- \"created_by\",\n- \"last_updated_by\",\n \"json_info\",\n )\n \n+READ_ONLY = (\n+ \"created_by\",\n+ \"last_updated_by\",\n+)\n \n class BaseModelAdmin(admin.ModelAdmin):\n exclude = EXCLUDE\n+ readonly_fields = READ_ONLY\n \n # if an object is created in the admin interface, assign the user to the created_by field\n # else if an object is updated in the admin interface, assign the user to the last_updated_by field\n@@ -58,7 +61,7 @@\n \"id\",\n )\n \n- readonly_fields = (\n+ readonly_fields = READ_ONLY + (\n \"date_created\",\n \"date_updated\",\n )\n@@ -172,7 +175,7 @@\n \n \n class SourceAdmin(BaseModelAdmin):\n- exclude = (\"source_status\",)\n+ exclude = EXCLUDE + (\"source_status\",)\n \n # These search fields are also available on the user-source inline relationship in the user admin page\n search_fields = (\n@@ -180,7 +183,7 @@\n \"title\",\n \"id\",\n )\n- readonly_fields = (\n+ readonly_fields = READ_ONLY + (\n \"number_of_chants\",\n \"number_of_melodies\",\n \"date_created\",\n", "issue": "`created_by` and `last_updated_by` fields should be read only on source admin change page\nThey should be readonly fields instead. I don't recall seeing them at the top of the page like they are now. Maybe previously they were hidden and now they are visible.\n", "before_files": [{"content": "from django.contrib import admin\nfrom main_app.models import *\nfrom main_app.forms import (\n AdminCenturyForm,\n AdminChantForm,\n AdminFeastForm,\n AdminGenreForm,\n AdminNotationForm,\n AdminOfficeForm,\n AdminProvenanceForm,\n AdminRismSiglumForm,\n AdminSegmentForm,\n AdminSequenceForm,\n AdminSourceForm,\n)\n\n# these fields should not be editable by all classes\nEXCLUDE = (\n \"created_by\",\n \"last_updated_by\",\n \"json_info\",\n)\n\n\nclass BaseModelAdmin(admin.ModelAdmin):\n exclude = EXCLUDE\n\n # if an object is created in the admin interface, assign the user to the created_by field\n # else if an object is updated in the admin interface, assign the user to the last_updated_by field\n def save_model(self, request, obj, form, change):\n if change:\n obj.last_updated_by = request.user\n else:\n obj.created_by = request.user\n super().save_model(request, obj, form, change)\n\n\nclass CenturyAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminCenturyForm\n\n\nclass ChantAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n list_display = (\n \"incipit\",\n \"get_source_siglum\",\n \"genre\",\n )\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n\n readonly_fields = (\n \"date_created\",\n \"date_updated\",\n )\n\n list_filter = (\n \"genre\",\n \"office\",\n )\n exclude = EXCLUDE + (\n \"col1\",\n \"col2\",\n \"col3\",\n \"next_chant\",\n \"s_sequence\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n \"date\",\n \"volpiano_notes\",\n \"volpiano_intervals\",\n \"title\",\n \"differentiae_database\",\n )\n form = AdminChantForm\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n\n\nclass DifferentiaAdmin(BaseModelAdmin):\n search_fields = (\n \"differentia_id\",\n \"id\",\n )\n\n\nclass 
FeastAdmin(BaseModelAdmin):\n search_fields = (\n \"name\",\n \"feast_code\",\n )\n list_display = (\n \"name\",\n \"month\",\n \"day\",\n \"feast_code\",\n )\n form = AdminFeastForm\n\n\nclass GenreAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminGenreForm\n\n\nclass NotationAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminNotationForm\n\n\nclass OfficeAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminOfficeForm\n\n\nclass ProvenanceAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminProvenanceForm\n\n\nclass RismSiglumAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminRismSiglumForm\n\n\nclass SegmentAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminSegmentForm\n\n\nclass SequenceAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n exclude = EXCLUDE + (\n \"c_sequence\",\n \"next_chant\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n )\n list_display = (\"incipit\", \"get_source_siglum\", \"genre\")\n list_filter = (\n \"genre\",\n \"office\",\n )\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n form = AdminSequenceForm\n\n\nclass SourceAdmin(BaseModelAdmin):\n exclude = (\"source_status\",)\n\n # These search fields are also available on the user-source inline relationship in the user admin page\n search_fields = (\n \"siglum\",\n \"title\",\n \"id\",\n )\n readonly_fields = (\n \"number_of_chants\",\n \"number_of_melodies\",\n \"date_created\",\n \"date_updated\",\n )\n # from the Django docs:\n # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript \u201cfilter\u201d interface\n # that allows searching within the options. The unselected and selected options appear in two boxes side by side.\n filter_horizontal = (\n \"century\",\n \"notation\",\n \"current_editors\",\n \"inventoried_by\",\n \"full_text_entered_by\",\n \"melodies_entered_by\",\n \"proofreaders\",\n \"other_editors\",\n )\n\n list_display = (\n \"title\",\n \"siglum\",\n \"id\",\n )\n\n list_filter = (\n \"full_source\",\n \"segment\",\n \"source_status\",\n \"published\",\n \"century\",\n )\n\n ordering = (\"siglum\",)\n\n form = AdminSourceForm\n\n\nadmin.site.register(Century, CenturyAdmin)\nadmin.site.register(Chant, ChantAdmin)\nadmin.site.register(Differentia, DifferentiaAdmin)\nadmin.site.register(Feast, FeastAdmin)\nadmin.site.register(Genre, GenreAdmin)\nadmin.site.register(Notation, NotationAdmin)\nadmin.site.register(Office, OfficeAdmin)\nadmin.site.register(Provenance, ProvenanceAdmin)\nadmin.site.register(RismSiglum, RismSiglumAdmin)\nadmin.site.register(Segment, SegmentAdmin)\nadmin.site.register(Sequence, SequenceAdmin)\nadmin.site.register(Source, SourceAdmin)\n", "path": "django/cantusdb_project/main_app/admin.py"}]}
num_tokens_prompt: 2,512
num_tokens_diff: 365
problem_id: gh_patches_debug_10660
source: rasdani/github-patches
task_type: git_diff
in_source_id: mozilla__pontoon-2816
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't submit pretranslated strings with warnings or errors
If possible, it would be great to have 2 different behaviors:
* Strings with warning should be suggested, instead of submitted, because they might be OK (i.e. the warning could be ignored), or at least it might be still useful to post-edit them instead of starting from scratch.
* String with errors shouldn't be even suggested.
If not possible, I guess storing them both as suggestions would work, since the localizer wouldn't be able to confirm a string with errors (but it will be confusing).
The important piece is not storing them in VCS, because it will create a chain of errors, especially in parts that we don't control (the final product).
</issue>
<code>
[start of pontoon/pretranslation/tasks.py]
1 import logging
2
3 from django.db.models import Q, CharField, Value as V
4 from django.db.models.functions import Concat
5 from django.conf import settings
6 from pontoon.base.models import (
7 Project,
8 Entity,
9 TranslatedResource,
10 Translation,
11 )
12 from pontoon.actionlog.models import ActionLog
13 from pontoon.pretranslation.pretranslate import (
14 get_pretranslations,
15 update_changed_instances,
16 )
17 from pontoon.base.tasks import PontoonTask
18 from pontoon.sync.core import serial_task
19 from pontoon.checks.utils import bulk_run_checks
20
21
22 log = logging.getLogger(__name__)
23
24
25 @serial_task(settings.SYNC_TASK_TIMEOUT, base=PontoonTask, lock_key="project={0}")
26 def pretranslate(self, project_pk, locales=None, entities=None):
27 """
28 Identifies strings without any translations and any suggestions.
29 Engages TheAlgorithm (bug 1552796) to gather pretranslations.
30 Stores pretranslations as suggestions (approved=False) to DB.
31
32 :arg project_pk: the pk of the project to be pretranslated
33 :arg Queryset locales: the locales for the project to be pretranslated
34 :arg Queryset entites: the entities for the project to be pretranslated
35
36 :returns: None
37 """
38 project = Project.objects.get(pk=project_pk)
39
40 if not project.pretranslation_enabled:
41 log.info(f"Pretranslation not enabled for project {project.name}")
42 return
43
44 if locales:
45 locales = project.locales.filter(pk__in=locales)
46 else:
47 locales = project.locales
48
49 locales = locales.filter(
50 project_locale__project=project,
51 project_locale__pretranslation_enabled=True,
52 project_locale__readonly=False,
53 )
54
55 if not locales:
56 log.info(
57 f"Pretranslation not enabled for any locale within project {project.name}"
58 )
59 return
60
61 log.info(f"Fetching pretranslations for project {project.name} started")
62
63 if not entities:
64 entities = Entity.objects.filter(
65 resource__project=project,
66 obsolete=False,
67 )
68
69 entities = entities.prefetch_related("resource")
70
71 # get available TranslatedResource pairs
72 tr_pairs = (
73 TranslatedResource.objects.filter(
74 resource__project=project,
75 locale__in=locales,
76 )
77 .annotate(
78 locale_resource=Concat(
79 "locale_id", V("-"), "resource_id", output_field=CharField()
80 )
81 )
82 .values_list("locale_resource", flat=True)
83 .distinct()
84 )
85
86 # Fetch all distinct locale-entity pairs for which translation exists
87 translated_entities = (
88 Translation.objects.filter(
89 locale__in=locales,
90 entity__in=entities,
91 )
92 .annotate(
93 locale_entity=Concat(
94 "locale_id", V("-"), "entity_id", output_field=CharField()
95 )
96 )
97 .values_list("locale_entity", flat=True)
98 .distinct()
99 )
100
101 translated_entities = list(translated_entities)
102
103 translations = []
104
105 # To keep track of changed TranslatedResources and their latest_translation
106 tr_dict = {}
107
108 tr_filter = []
109 index = -1
110
111 for locale in locales:
112 log.info(f"Fetching pretranslations for locale {locale.code} started")
113 for entity in entities:
114 locale_entity = f"{locale.id}-{entity.id}"
115 locale_resource = f"{locale.id}-{entity.resource.id}"
116 if locale_entity in translated_entities or locale_resource not in tr_pairs:
117 continue
118
119 pretranslations = get_pretranslations(entity, locale)
120
121 if not pretranslations:
122 continue
123
124 for string, plural_form, user in pretranslations:
125 t = Translation(
126 entity=entity,
127 locale=locale,
128 string=string,
129 user=user,
130 approved=False,
131 pretranslated=True,
132 active=True,
133 plural_form=plural_form,
134 )
135
136 index += 1
137 translations.append(t)
138
139 if locale_resource not in tr_dict:
140 tr_dict[locale_resource] = index
141
142 # Add query for fetching respective TranslatedResource.
143 tr_filter.append(
144 Q(locale__id=locale.id) & Q(resource__id=entity.resource.id)
145 )
146
147 # Update the latest translation index
148 tr_dict[locale_resource] = index
149
150 log.info(f"Fetching pretranslations for locale {locale.code} done")
151
152 if len(translations) == 0:
153 return
154
155 translations = Translation.objects.bulk_create(translations)
156
157 # Log creating actions
158 actions_to_log = [
159 ActionLog(
160 action_type=ActionLog.ActionType.TRANSLATION_CREATED,
161 performed_by=t.user,
162 translation=t,
163 )
164 for t in translations
165 ]
166
167 ActionLog.objects.bulk_create(actions_to_log)
168
169 # Run checks on all translations
170 translation_pks = {translation.pk for translation in translations}
171 bulk_run_checks(Translation.objects.for_checks().filter(pk__in=translation_pks))
172
173 # Mark translations as changed
174 changed_translations = Translation.objects.filter(pk__in=translation_pks)
175 changed_translations.bulk_mark_changed()
176
177 # Update latest activity and stats for changed instances.
178 update_changed_instances(tr_filter, tr_dict, translations)
179
180 log.info(f"Fetching pretranslations for project {project.name} done")
181
[end of pontoon/pretranslation/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/pontoon/pretranslation/tasks.py b/pontoon/pretranslation/tasks.py
--- a/pontoon/pretranslation/tasks.py
+++ b/pontoon/pretranslation/tasks.py
@@ -171,7 +171,12 @@
bulk_run_checks(Translation.objects.for_checks().filter(pk__in=translation_pks))
# Mark translations as changed
- changed_translations = Translation.objects.filter(pk__in=translation_pks)
+ changed_translations = Translation.objects.filter(
+ pk__in=translation_pks,
+ # Do not sync translations with errors and warnings
+ errors__isnull=True,
+ warnings__isnull=True,
+ )
changed_translations.bulk_mark_changed()
# Update latest activity and stats for changed instances.
verification_info:
{"golden_diff": "diff --git a/pontoon/pretranslation/tasks.py b/pontoon/pretranslation/tasks.py\n--- a/pontoon/pretranslation/tasks.py\n+++ b/pontoon/pretranslation/tasks.py\n@@ -171,7 +171,12 @@\n bulk_run_checks(Translation.objects.for_checks().filter(pk__in=translation_pks))\n \n # Mark translations as changed\n- changed_translations = Translation.objects.filter(pk__in=translation_pks)\n+ changed_translations = Translation.objects.filter(\n+ pk__in=translation_pks,\n+ # Do not sync translations with errors and warnings\n+ errors__isnull=True,\n+ warnings__isnull=True,\n+ )\n changed_translations.bulk_mark_changed()\n \n # Update latest activity and stats for changed instances.\n", "issue": "Don't submit pretranslated strings with warnings or errors\nIf possible, it would be great to have 2 different behaviors:\r\n* Strings with warning should be suggested, instead of submitted, because they might be OK (i.e. the warning could be ignored), or at least it might be still useful to post-edit them instead of starting from scratch.\r\n* String with errors shouldn't be even suggested.\r\n\r\nIf not possible, I guess storing them both as suggestions would work, since the localizer wouldn't be able to confirm a string with errors (but it will be confusing).\r\n\r\nThe important piece is not storing them in VCS, because it will create a chain of errors, especially in parts that we don't control (the final product).\n", "before_files": [{"content": "import logging\n\nfrom django.db.models import Q, CharField, Value as V\nfrom django.db.models.functions import Concat\nfrom django.conf import settings\nfrom pontoon.base.models import (\n Project,\n Entity,\n TranslatedResource,\n Translation,\n)\nfrom pontoon.actionlog.models import ActionLog\nfrom pontoon.pretranslation.pretranslate import (\n get_pretranslations,\n update_changed_instances,\n)\nfrom pontoon.base.tasks import PontoonTask\nfrom pontoon.sync.core import serial_task\nfrom pontoon.checks.utils import bulk_run_checks\n\n\nlog = logging.getLogger(__name__)\n\n\n@serial_task(settings.SYNC_TASK_TIMEOUT, base=PontoonTask, lock_key=\"project={0}\")\ndef pretranslate(self, project_pk, locales=None, entities=None):\n \"\"\"\n Identifies strings without any translations and any suggestions.\n Engages TheAlgorithm (bug 1552796) to gather pretranslations.\n Stores pretranslations as suggestions (approved=False) to DB.\n\n :arg project_pk: the pk of the project to be pretranslated\n :arg Queryset locales: the locales for the project to be pretranslated\n :arg Queryset entites: the entities for the project to be pretranslated\n\n :returns: None\n \"\"\"\n project = Project.objects.get(pk=project_pk)\n\n if not project.pretranslation_enabled:\n log.info(f\"Pretranslation not enabled for project {project.name}\")\n return\n\n if locales:\n locales = project.locales.filter(pk__in=locales)\n else:\n locales = project.locales\n\n locales = locales.filter(\n project_locale__project=project,\n project_locale__pretranslation_enabled=True,\n project_locale__readonly=False,\n )\n\n if not locales:\n log.info(\n f\"Pretranslation not enabled for any locale within project {project.name}\"\n )\n return\n\n log.info(f\"Fetching pretranslations for project {project.name} started\")\n\n if not entities:\n entities = Entity.objects.filter(\n resource__project=project,\n obsolete=False,\n )\n\n entities = entities.prefetch_related(\"resource\")\n\n # get available TranslatedResource pairs\n tr_pairs = (\n TranslatedResource.objects.filter(\n 
resource__project=project,\n locale__in=locales,\n )\n .annotate(\n locale_resource=Concat(\n \"locale_id\", V(\"-\"), \"resource_id\", output_field=CharField()\n )\n )\n .values_list(\"locale_resource\", flat=True)\n .distinct()\n )\n\n # Fetch all distinct locale-entity pairs for which translation exists\n translated_entities = (\n Translation.objects.filter(\n locale__in=locales,\n entity__in=entities,\n )\n .annotate(\n locale_entity=Concat(\n \"locale_id\", V(\"-\"), \"entity_id\", output_field=CharField()\n )\n )\n .values_list(\"locale_entity\", flat=True)\n .distinct()\n )\n\n translated_entities = list(translated_entities)\n\n translations = []\n\n # To keep track of changed TranslatedResources and their latest_translation\n tr_dict = {}\n\n tr_filter = []\n index = -1\n\n for locale in locales:\n log.info(f\"Fetching pretranslations for locale {locale.code} started\")\n for entity in entities:\n locale_entity = f\"{locale.id}-{entity.id}\"\n locale_resource = f\"{locale.id}-{entity.resource.id}\"\n if locale_entity in translated_entities or locale_resource not in tr_pairs:\n continue\n\n pretranslations = get_pretranslations(entity, locale)\n\n if not pretranslations:\n continue\n\n for string, plural_form, user in pretranslations:\n t = Translation(\n entity=entity,\n locale=locale,\n string=string,\n user=user,\n approved=False,\n pretranslated=True,\n active=True,\n plural_form=plural_form,\n )\n\n index += 1\n translations.append(t)\n\n if locale_resource not in tr_dict:\n tr_dict[locale_resource] = index\n\n # Add query for fetching respective TranslatedResource.\n tr_filter.append(\n Q(locale__id=locale.id) & Q(resource__id=entity.resource.id)\n )\n\n # Update the latest translation index\n tr_dict[locale_resource] = index\n\n log.info(f\"Fetching pretranslations for locale {locale.code} done\")\n\n if len(translations) == 0:\n return\n\n translations = Translation.objects.bulk_create(translations)\n\n # Log creating actions\n actions_to_log = [\n ActionLog(\n action_type=ActionLog.ActionType.TRANSLATION_CREATED,\n performed_by=t.user,\n translation=t,\n )\n for t in translations\n ]\n\n ActionLog.objects.bulk_create(actions_to_log)\n\n # Run checks on all translations\n translation_pks = {translation.pk for translation in translations}\n bulk_run_checks(Translation.objects.for_checks().filter(pk__in=translation_pks))\n\n # Mark translations as changed\n changed_translations = Translation.objects.filter(pk__in=translation_pks)\n changed_translations.bulk_mark_changed()\n\n # Update latest activity and stats for changed instances.\n update_changed_instances(tr_filter, tr_dict, translations)\n\n log.info(f\"Fetching pretranslations for project {project.name} done\")\n", "path": "pontoon/pretranslation/tasks.py"}]}
num_tokens_prompt: 2,257
num_tokens_diff: 170
problem_id: gh_patches_debug_20429
source: rasdani/github-patches
task_type: git_diff
in_source_id: liqd__a4-meinberlin-2905
prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
only module can be removed from published project
**URL:** https://meinberlin-dev.liqd.net/dashboard/projects/lorem-ipsum/basic/
**user:** initiator
**expected behaviour:** when a project is published, at least one modle should always be added to the project, the initiator shouldn't be able to remove the last one
**behaviour:** last added module can be removed, resulting ind strange project detail view ("Die Beteiligung ist aktuell nicht möglich. Sie startet am None. ") with an empty project without any participation.
**important screensize:**
**device & browser:**
**Comment/Question:** @CarolingerSeilchenspringer I am not sure we decided it should be like that (or can you just not delete the last module and only delete it after it has been removed?), but I think this needs to be fixed. Do you agree?
Screenshot?
</issue>
<code>
[start of meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py]
1 import json
2 from urllib.parse import unquote
3
4 from django import template
5
6 register = template.Library()
7
8
9 @register.simple_tag(takes_context=True)
10 def closed_accordeons(context, project_id):
11 request = context['request']
12 cookie = request.COOKIES.get('dashboard_projects_closed_accordeons', '[]')
13 ids = json.loads(unquote(cookie))
14 if project_id in ids:
15 ids.append(-1)
16 return ids
17
18
19 @register.filter
20 def is_publishable(project, project_progress):
21 """Check if project can be published.
22
23 Required project details need to be filled in and at least one module
24 has to be published (added to the project).
25 """
26 return (project_progress['project_is_complete']
27 and project.published_modules.count() >= 1)
28
29
30 @register.filter
31 def has_unpublishable_modules(project):
32 """Check if modules can be removed from project.
33
34 Modules can be removed if the project is not yet published and there is
35 another module published for (added to) the project.
36 """
37 return (project.is_draft
38 and project.published_modules.count() <= 1)
39
40
41 @register.filter
42 def project_nav_is_active(dashboard_menu_project):
43 """Check if the view is in the project dashboard nav."""
44 for item in dashboard_menu_project:
45 if item['is_active']:
46 return True
47 return False
48
49
50 @register.filter
51 def module_nav_is_active(dashboard_menu_modules):
52 """Check if the view is in the project dashboard nav."""
53 for module_menu in dashboard_menu_modules:
54 for item in module_menu['menu']:
55 if item['is_active']:
56 return True
57 return False
58
59
60 @register.filter
61 def has_publishable_module(dashboard_menu_modules):
62 for module_menu in dashboard_menu_modules:
63 if module_menu['is_complete']:
64 return True
65 return False
66
[end of meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py]
[start of meinberlin/apps/dashboard/views.py]
1 import json
2 from urllib import parse
3
4 from django.apps import apps
5 from django.contrib import messages
6 from django.contrib.messages.views import SuccessMessageMixin
7 from django.http import HttpResponseRedirect
8 from django.urls import resolve
9 from django.urls import reverse
10 from django.utils.translation import ugettext_lazy as _
11 from django.views import generic
12 from django.views.generic.detail import SingleObjectMixin
13
14 from adhocracy4.dashboard import mixins
15 from adhocracy4.dashboard import signals
16 from adhocracy4.dashboard import views as a4dashboard_views
17 from adhocracy4.dashboard.blueprints import get_blueprints
18 from adhocracy4.modules import models as module_models
19 from adhocracy4.phases import models as phase_models
20 from adhocracy4.projects import models as project_models
21 from adhocracy4.projects.mixins import ProjectMixin
22 from meinberlin.apps.dashboard.forms import DashboardProjectCreateForm
23
24
25 class ModuleBlueprintListView(ProjectMixin,
26 mixins.DashboardBaseMixin,
27 mixins.BlueprintMixin,
28 generic.DetailView):
29 template_name = 'meinberlin_dashboard/module_blueprint_list_dashboard.html'
30 permission_required = 'a4projects.change_project'
31 model = project_models.Project
32 slug_url_kwarg = 'project_slug'
33 menu_item = 'project'
34
35 @property
36 def blueprints(self):
37 return get_blueprints()
38
39 def get_permission_object(self):
40 return self.project
41
42
43 class ModuleCreateView(ProjectMixin,
44 mixins.DashboardBaseMixin,
45 mixins.BlueprintMixin,
46 SingleObjectMixin,
47 generic.View):
48 permission_required = 'a4projects.change_project'
49 model = project_models.Project
50 slug_url_kwarg = 'project_slug'
51 success_message = _('The module was created')
52
53 def post(self, request, *args, **kwargs):
54 project = self.get_object()
55 weight = 1
56 if project.modules:
57 weight = max(
58 project.modules.values_list('weight', flat=True)
59 ) + 1
60 module = module_models.Module(
61 name=self.blueprint.title,
62 weight=weight,
63 project=project,
64 is_draft=True,
65 )
66 module.save()
67 signals.module_created.send(sender=None,
68 module=module,
69 user=self.request.user)
70
71 self._create_module_settings(module)
72 self._create_phases(module, self.blueprint.content)
73 messages.success(self.request, self.success_message)
74
75 cookie = request.COOKIES.get('dashboard_projects_closed_accordeons',
76 '[]')
77 ids = json.loads(parse.unquote(cookie))
78 if self.project.id not in ids:
79 ids.append(self.project.id)
80
81 cookie = parse.quote(json.dumps(ids))
82
83 response = HttpResponseRedirect(self.get_next(module))
84 response.set_cookie('dashboard_projects_closed_accordeons', cookie)
85 return response
86
87 def _create_module_settings(self, module):
88 if self.blueprint.settings_model:
89 settings_model = apps.get_model(*self.blueprint.settings_model)
90 module_settings = settings_model(module=module)
91 module_settings.save()
92
93 def _create_phases(self, module, blueprint_phases):
94 for index, phase_content in enumerate(blueprint_phases):
95 phase = phase_models.Phase(
96 type=phase_content.identifier,
97 name=phase_content.name,
98 description=phase_content.description,
99 weight=index,
100 module=module,
101 )
102 phase.save()
103
104 def get_next(self, module):
105 return reverse('a4dashboard:dashboard-module_basic-edit', kwargs={
106 'module_slug': module.slug
107 })
108
109 def get_permission_object(self):
110 return self.project
111
112
113 class ModulePublishView(SingleObjectMixin,
114 generic.View):
115 permission_required = 'a4projects.change_project'
116 model = module_models.Module
117 slug_url_kwarg = 'module_slug'
118
119 def get_permission_object(self):
120 return self.get_object().project
121
122 def post(self, request, *args, **kwargs):
123 action = request.POST.get('action', None)
124 if action == 'publish':
125 self.publish_module()
126 elif action == 'unpublish':
127 self.unpublish_module()
128 else:
129 messages.warning(self.request, _('Invalid action'))
130
131 return HttpResponseRedirect(self.get_next())
132
133 def get_next(self):
134 if 'referrer' in self.request.POST:
135 return self.request.POST['referrer']
136 elif 'HTTP_REFERER' in self.request.META:
137 return self.request.META['HTTP_REFERER']
138
139 return reverse('a4dashboard:project-edit', kwargs={
140 'project_slug': self.project.slug
141 })
142
143 def publish_module(self):
144 module = self.get_object()
145 if not module.is_draft:
146 messages.info(self.request, _('Module is already added'))
147 return
148
149 module.is_draft = False
150 module.save()
151
152 signals.module_published.send(sender=None,
153 module=module,
154 user=self.request.user)
155
156 messages.success(self.request,
157 _('The module is displayed in the project.'))
158
159 def unpublish_module(self):
160 module = self.get_object()
161 if module.is_draft:
162 messages.info(self.request, _('Module is already removed'))
163 return
164
165 module.is_draft = True
166 module.save()
167
168 signals.module_unpublished.send(sender=None,
169 module=module,
170 user=self.request.user)
171
172 messages.success(self.request,
173 _('The module is no longer displayed in the project.'
174 ))
175
176
177 class ModuleDeleteView(generic.DeleteView):
178 permission_required = 'a4projects.change_project'
179 model = module_models.Module
180 success_message = _('The module has been deleted')
181
182 def delete(self, request, *args, **kwargs):
183 messages.success(self.request, self.success_message)
184 return super().delete(request, *args, **kwargs)
185
186 def get_permission_object(self):
187 return self.get_object().project
188
189 def get_success_url(self):
190 referrer = self.request.POST.get('referrer', None) \
191 or self.request.META.get('HTTP_REFERER', None)
192 if referrer:
193 view, args, kwargs = resolve(referrer)
194 if 'module_slug' not in kwargs \
195 or not kwargs['module_slug'] == self.get_object().slug:
196 return referrer
197
198 return reverse('a4dashboard:project-edit', kwargs={
199 'project_slug': self.get_object().project.slug
200 })
201
202
203 class DashboardProjectListView(a4dashboard_views.ProjectListView):
204 def get_queryset(self):
205 return super().get_queryset().filter(
206 projectcontainer=None,
207 externalproject=None
208 )
209
210
211 class ProjectCreateView(mixins.DashboardBaseMixin,
212 SuccessMessageMixin,
213 generic.CreateView):
214 model = project_models.Project
215 slug_url_kwarg = 'project_slug'
216 form_class = DashboardProjectCreateForm
217 template_name = 'a4dashboard/project_create_form.html'
218 permission_required = 'a4projects.add_project'
219 menu_item = 'project'
220 success_message = _('Project was created.')
221
222 def get_permission_object(self):
223 return self.organisation
224
225 def get_form_kwargs(self):
226 kwargs = super().get_form_kwargs()
227 kwargs['organisation'] = self.organisation
228 kwargs['creator'] = self.request.user
229 return kwargs
230
231 def get_success_url(self):
232 return reverse('a4dashboard:project-edit',
233 kwargs={'project_slug': self.object.slug})
234
235 def form_valid(self, form):
236 response = super().form_valid(form)
237 signals.project_created.send(sender=None,
238 project=self.object,
239 user=self.request.user)
240
241 return response
242
[end of meinberlin/apps/dashboard/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py b/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py
--- a/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py
+++ b/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py
@@ -35,7 +35,7 @@
another module published for (added to) the project.
"""
return (project.is_draft
- and project.published_modules.count() <= 1)
+ and project.published_modules.count() > 1)
@register.filter
diff --git a/meinberlin/apps/dashboard/views.py b/meinberlin/apps/dashboard/views.py
--- a/meinberlin/apps/dashboard/views.py
+++ b/meinberlin/apps/dashboard/views.py
@@ -161,6 +161,16 @@
if module.is_draft:
messages.info(self.request, _('Module is already removed'))
return
+ if not module.project.is_draft:
+ messages.error(self.request,
+ _('Module cannot be removed '
+ 'from a published project.'))
+ return
+ if module.project.published_modules.count() == 1:
+ messages.error(self.request,
+ _('Module cannot be removed. '
+ 'It is the only module added to the project.'))
+ return
module.is_draft = True
module.save()
verification_info:
{"golden_diff": "diff --git a/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py b/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py\n--- a/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py\n+++ b/meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py\n@@ -35,7 +35,7 @@\n another module published for (added to) the project.\n \"\"\"\n return (project.is_draft\n- and project.published_modules.count() <= 1)\n+ and project.published_modules.count() > 1)\n \n \n @register.filter\ndiff --git a/meinberlin/apps/dashboard/views.py b/meinberlin/apps/dashboard/views.py\n--- a/meinberlin/apps/dashboard/views.py\n+++ b/meinberlin/apps/dashboard/views.py\n@@ -161,6 +161,16 @@\n if module.is_draft:\n messages.info(self.request, _('Module is already removed'))\n return\n+ if not module.project.is_draft:\n+ messages.error(self.request,\n+ _('Module cannot be removed '\n+ 'from a published project.'))\n+ return\n+ if module.project.published_modules.count() == 1:\n+ messages.error(self.request,\n+ _('Module cannot be removed. '\n+ 'It is the only module added to the project.'))\n+ return\n \n module.is_draft = True\n module.save()\n", "issue": "only module can be removed from published project\n**URL:** https://meinberlin-dev.liqd.net/dashboard/projects/lorem-ipsum/basic/\r\n**user:** initiator\r\n**expected behaviour:** when a project is published, at least one modle should always be added to the project, the initiator shouldn't be able to remove the last one \r\n**behaviour:** last added module can be removed, resulting ind strange project detail view (\"Die Beteiligung ist aktuell nicht m\u00f6glich. Sie startet am None. \") with an empty project without any participation.\r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** @CarolingerSeilchenspringer I am not sure we decided it should be like that (or can you just not delete the last module and only delete it after it has been removed?), but I think this needs to be fixed. 
Do you agree?\r\n\r\nScreenshot?\r\n\n", "before_files": [{"content": "import json\nfrom urllib.parse import unquote\n\nfrom django import template\n\nregister = template.Library()\n\n\[email protected]_tag(takes_context=True)\ndef closed_accordeons(context, project_id):\n request = context['request']\n cookie = request.COOKIES.get('dashboard_projects_closed_accordeons', '[]')\n ids = json.loads(unquote(cookie))\n if project_id in ids:\n ids.append(-1)\n return ids\n\n\[email protected]\ndef is_publishable(project, project_progress):\n \"\"\"Check if project can be published.\n\n Required project details need to be filled in and at least one module\n has to be published (added to the project).\n \"\"\"\n return (project_progress['project_is_complete']\n and project.published_modules.count() >= 1)\n\n\[email protected]\ndef has_unpublishable_modules(project):\n \"\"\"Check if modules can be removed from project.\n\n Modules can be removed if the project is not yet published and there is\n another module published for (added to) the project.\n \"\"\"\n return (project.is_draft\n and project.published_modules.count() <= 1)\n\n\[email protected]\ndef project_nav_is_active(dashboard_menu_project):\n \"\"\"Check if the view is in the project dashboard nav.\"\"\"\n for item in dashboard_menu_project:\n if item['is_active']:\n return True\n return False\n\n\[email protected]\ndef module_nav_is_active(dashboard_menu_modules):\n \"\"\"Check if the view is in the project dashboard nav.\"\"\"\n for module_menu in dashboard_menu_modules:\n for item in module_menu['menu']:\n if item['is_active']:\n return True\n return False\n\n\[email protected]\ndef has_publishable_module(dashboard_menu_modules):\n for module_menu in dashboard_menu_modules:\n if module_menu['is_complete']:\n return True\n return False\n", "path": "meinberlin/apps/dashboard/templatetags/meinberlin_dashboard_tags.py"}, {"content": "import json\nfrom urllib import parse\n\nfrom django.apps import apps\nfrom django.contrib import messages\nfrom django.contrib.messages.views import SuccessMessageMixin\nfrom django.http import HttpResponseRedirect\nfrom django.urls import resolve\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views import generic\nfrom django.views.generic.detail import SingleObjectMixin\n\nfrom adhocracy4.dashboard import mixins\nfrom adhocracy4.dashboard import signals\nfrom adhocracy4.dashboard import views as a4dashboard_views\nfrom adhocracy4.dashboard.blueprints import get_blueprints\nfrom adhocracy4.modules import models as module_models\nfrom adhocracy4.phases import models as phase_models\nfrom adhocracy4.projects import models as project_models\nfrom adhocracy4.projects.mixins import ProjectMixin\nfrom meinberlin.apps.dashboard.forms import DashboardProjectCreateForm\n\n\nclass ModuleBlueprintListView(ProjectMixin,\n mixins.DashboardBaseMixin,\n mixins.BlueprintMixin,\n generic.DetailView):\n template_name = 'meinberlin_dashboard/module_blueprint_list_dashboard.html'\n permission_required = 'a4projects.change_project'\n model = project_models.Project\n slug_url_kwarg = 'project_slug'\n menu_item = 'project'\n\n @property\n def blueprints(self):\n return get_blueprints()\n\n def get_permission_object(self):\n return self.project\n\n\nclass ModuleCreateView(ProjectMixin,\n mixins.DashboardBaseMixin,\n mixins.BlueprintMixin,\n SingleObjectMixin,\n generic.View):\n permission_required = 'a4projects.change_project'\n model = project_models.Project\n slug_url_kwarg 
= 'project_slug'\n success_message = _('The module was created')\n\n def post(self, request, *args, **kwargs):\n project = self.get_object()\n weight = 1\n if project.modules:\n weight = max(\n project.modules.values_list('weight', flat=True)\n ) + 1\n module = module_models.Module(\n name=self.blueprint.title,\n weight=weight,\n project=project,\n is_draft=True,\n )\n module.save()\n signals.module_created.send(sender=None,\n module=module,\n user=self.request.user)\n\n self._create_module_settings(module)\n self._create_phases(module, self.blueprint.content)\n messages.success(self.request, self.success_message)\n\n cookie = request.COOKIES.get('dashboard_projects_closed_accordeons',\n '[]')\n ids = json.loads(parse.unquote(cookie))\n if self.project.id not in ids:\n ids.append(self.project.id)\n\n cookie = parse.quote(json.dumps(ids))\n\n response = HttpResponseRedirect(self.get_next(module))\n response.set_cookie('dashboard_projects_closed_accordeons', cookie)\n return response\n\n def _create_module_settings(self, module):\n if self.blueprint.settings_model:\n settings_model = apps.get_model(*self.blueprint.settings_model)\n module_settings = settings_model(module=module)\n module_settings.save()\n\n def _create_phases(self, module, blueprint_phases):\n for index, phase_content in enumerate(blueprint_phases):\n phase = phase_models.Phase(\n type=phase_content.identifier,\n name=phase_content.name,\n description=phase_content.description,\n weight=index,\n module=module,\n )\n phase.save()\n\n def get_next(self, module):\n return reverse('a4dashboard:dashboard-module_basic-edit', kwargs={\n 'module_slug': module.slug\n })\n\n def get_permission_object(self):\n return self.project\n\n\nclass ModulePublishView(SingleObjectMixin,\n generic.View):\n permission_required = 'a4projects.change_project'\n model = module_models.Module\n slug_url_kwarg = 'module_slug'\n\n def get_permission_object(self):\n return self.get_object().project\n\n def post(self, request, *args, **kwargs):\n action = request.POST.get('action', None)\n if action == 'publish':\n self.publish_module()\n elif action == 'unpublish':\n self.unpublish_module()\n else:\n messages.warning(self.request, _('Invalid action'))\n\n return HttpResponseRedirect(self.get_next())\n\n def get_next(self):\n if 'referrer' in self.request.POST:\n return self.request.POST['referrer']\n elif 'HTTP_REFERER' in self.request.META:\n return self.request.META['HTTP_REFERER']\n\n return reverse('a4dashboard:project-edit', kwargs={\n 'project_slug': self.project.slug\n })\n\n def publish_module(self):\n module = self.get_object()\n if not module.is_draft:\n messages.info(self.request, _('Module is already added'))\n return\n\n module.is_draft = False\n module.save()\n\n signals.module_published.send(sender=None,\n module=module,\n user=self.request.user)\n\n messages.success(self.request,\n _('The module is displayed in the project.'))\n\n def unpublish_module(self):\n module = self.get_object()\n if module.is_draft:\n messages.info(self.request, _('Module is already removed'))\n return\n\n module.is_draft = True\n module.save()\n\n signals.module_unpublished.send(sender=None,\n module=module,\n user=self.request.user)\n\n messages.success(self.request,\n _('The module is no longer displayed in the project.'\n ))\n\n\nclass ModuleDeleteView(generic.DeleteView):\n permission_required = 'a4projects.change_project'\n model = module_models.Module\n success_message = _('The module has been deleted')\n\n def delete(self, request, *args, **kwargs):\n 
messages.success(self.request, self.success_message)\n return super().delete(request, *args, **kwargs)\n\n def get_permission_object(self):\n return self.get_object().project\n\n def get_success_url(self):\n referrer = self.request.POST.get('referrer', None) \\\n or self.request.META.get('HTTP_REFERER', None)\n if referrer:\n view, args, kwargs = resolve(referrer)\n if 'module_slug' not in kwargs \\\n or not kwargs['module_slug'] == self.get_object().slug:\n return referrer\n\n return reverse('a4dashboard:project-edit', kwargs={\n 'project_slug': self.get_object().project.slug\n })\n\n\nclass DashboardProjectListView(a4dashboard_views.ProjectListView):\n def get_queryset(self):\n return super().get_queryset().filter(\n projectcontainer=None,\n externalproject=None\n )\n\n\nclass ProjectCreateView(mixins.DashboardBaseMixin,\n SuccessMessageMixin,\n generic.CreateView):\n model = project_models.Project\n slug_url_kwarg = 'project_slug'\n form_class = DashboardProjectCreateForm\n template_name = 'a4dashboard/project_create_form.html'\n permission_required = 'a4projects.add_project'\n menu_item = 'project'\n success_message = _('Project was created.')\n\n def get_permission_object(self):\n return self.organisation\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['organisation'] = self.organisation\n kwargs['creator'] = self.request.user\n return kwargs\n\n def get_success_url(self):\n return reverse('a4dashboard:project-edit',\n kwargs={'project_slug': self.object.slug})\n\n def form_valid(self, form):\n response = super().form_valid(form)\n signals.project_created.send(sender=None,\n project=self.object,\n user=self.request.user)\n\n return response\n", "path": "meinberlin/apps/dashboard/views.py"}]}
| 3,499 | 321 |
gh_patches_debug_24899
|
rasdani/github-patches
|
git_diff
|
sktime__sktime-2279
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] odd failure in `fh._coerce_to_period` that occurs only on windows, in context of hierarchical forecast
When making hierarchical forecasts on the default output of `_make_hierarchical`, `fh._coerce_to_period` fails on a windows run:
https://github.com/alan-turing-institute/sktime/runs/5503723513
This is quite strange, and I cannot reproduce the behaviour locally on my windows machine (python 3.8 or 3.9).
It looks like `freq` is not correctly preserved in subsetting from a hierarchical multi-index ... but only under specific conditions on a windows system???
Expected behaviour: the test does not fail, just like on linux/max and as on my local system.
</issue>
<code>
[start of sktime/datatypes/_vectorize.py]
1 # -*- coding: utf-8 -*-
2 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
3 """Wrapper for easy vectorization/iteration of time series data.
4
5 Contains VectorizedDF class.
6 """
7
8 import pandas as pd
9
10 from sktime.datatypes._check import check_is_scitype, mtype
11 from sktime.datatypes._convert import convert_to
12
13
14 class VectorizedDF:
15 """Wrapper for easy vectorization/iteration over instances.
16
17 VectorizedDF is an iterable that returns pandas.DataFrame
18 in sktime Series or Panel format.
19 Elements are all Series or Panels in X, these are iterated over.
20
21 Parameters
22 ----------
23 X : object in sktime compatible Panel or Hierarchical format
24 the data container to vectorize over
25 y : placeholder argument, not used currently
26 is_scitype : str ("Panel", "Hierarchical") or None, default = "Panel"
27 scitype of X, if known; if None, will be inferred
28 provide to constructor if known to avoid superfluous checks
29 Caution: will not conduct checks if provided, assumes checks done
30 iterate_as : str ("Series", "Panel")
31 scitype of the iteration
32 for instance, if X is Panel and iterate_as is "Series"
33 then the class will iterate over individual Series in X
34 or, if X is Hierarchical and iterate_as is "Panel"
35 then the class will iterate over individual Panels in X
36 (Panel = flat/non-hierarchical collection of Series)
37
38 Methods
39 -------
40 self[i] or self.__getitem__(i)
41 Returns i-th Series/Panel (depending on iterate_as) in X
42 as pandas.DataFrame with Index or MultiIndex (in sktime pandas format)
43 len(self) or self.__len__
44 returns number of Series/Panel in X
45 get_iter_indices()
46 Returns pandas.(Multi)Index that are iterated over
47 reconstruct(self, df_list, convert_back=False)
48 Takes iterable df_list and returns as an object of is_scitype.
49 Used to obtain original format after applying operations to self iterated
50 """
51
52 def __init__(self, X, y=None, iterate_as="Series", is_scitype="Panel"):
53
54 self.X = X
55
56 if is_scitype is None:
57 possible_scitypes = ["Panel", "Hierarchical"]
58 _, _, metadata = check_is_scitype(
59 X, scitype=possible_scitypes, return_metadata=True
60 )
61 is_scitype = metadata["scitype"]
62 X_orig_mtype = metadata["mtype"]
63 else:
64 X_orig_mtype = None
65
66 if is_scitype is not None and is_scitype not in ["Hierarchical", "Panel"]:
67 raise ValueError(
68 'is_scitype must be None, "Hierarchical" or "Panel", ',
69 f"found {is_scitype}",
70 )
71 self.iterate_as = iterate_as
72
73 self.is_scitype = is_scitype
74 self.X_orig_mtype = X_orig_mtype
75
76 if iterate_as not in ["Series", "Panel"]:
77 raise ValueError(
78 f'iterate_as must be "Series" or "Panel", found {iterate_as}'
79 )
80 self.iterate_as = iterate_as
81
82 if iterate_as == "Panel" and is_scitype == "Panel":
83 raise ValueError(
84 'If is_scitype is "Panel", then iterate_as must be "Series"'
85 )
86
87 self.converter_store = dict()
88
89 self.X_multiindex = self._init_conversion(X)
90 self.iter_indices = self._init_iter_indices()
91
92 def _init_conversion(self, X):
93 """Convert X to a pandas multiindex format."""
94 is_scitype = self.is_scitype
95
96 if is_scitype == "Panel":
97 return convert_to(
98 X,
99 to_type="pd-multiindex",
100 as_scitype="Panel",
101 store=self.converter_store,
102 )
103 elif is_scitype == "Hierarchical":
104 return convert_to(
105 X,
106 to_type="pd_multiindex_hier",
107 as_scitype="Hierarchical",
108 store=self.converter_store,
109 )
110 else:
111 raise RuntimeError(
112 f"unexpected value found for attribute self.is_scitype: {is_scitype}"
113 'must be "Panel" or "Hierarchical"'
114 )
115
116 def _init_iter_indices(self):
117 """Initialize indices that are iterated over in vectorization."""
118 iterate_as = self.iterate_as
119 X = self.X_multiindex
120
121 if iterate_as == "Series":
122 ix = X.index.droplevel(-1).unique()
123 return ix
124 elif iterate_as == "Panel":
125 ix = X.index.droplevel([-1, -2]).unique()
126 return ix
127 else:
128 raise RuntimeError(
129 f"unexpected value found for attribute self.iterate_as: {iterate_as}"
130 'must be "Series" or "Panel"'
131 )
132
133 def get_iter_indices(self):
134 """Get indices that are iterated over in vectorization.
135
136 Returns
137 -------
138 pandas.Index or pandas.MultiIndex
139 index with unique indices that are iterated over
140 use to reconstruct data frame after iteration
141 """
142 return self.iter_indices
143
144 def __len__(self):
145 """Return number of indices to iterate over."""
146 return len(self.get_iter_indices())
147
148 def __getitem__(self, i: int):
149 """Return the i-th element iterated over in vectorization."""
150 X = self.X_multiindex
151 ind = self.get_iter_indices()[i]
152 item = X.loc[ind]
153 # pd-multiindex type (Panel case) expects these index names:
154 if self.iterate_as == "Panel":
155 item.index.set_names(["instances", "timepoints"], inplace=True)
156 return item
157
158 def as_list(self):
159 """Shorthand to retrieve self (iterator) as list."""
160 return list(self)
161
162 def reconstruct(self, df_list, convert_back=False, overwrite_index=True):
163 """Reconstruct original format from iterable of vectorization instances.
164
165 Parameters
166 ----------
167 df_list : iterable of objects of same type as __getitem__ returns.
168 can be self, but will in general be another object to be useful.
169 Example: [some_operation(df) for df in self] that leaves types the same
170 convert_back : bool, default = False
171 whether to convert output back to mtype of X in __init__
172 if False, the return will be a pandas.DataFrame with Index or multiIndex
173 if True, the return is converted to the mtype of X in __init__
174 overwrite_index : bool, default = True
175 if True, the resulting return will have index overwritten by that of X
176 only if applies, i.e., overwrite is possible and X had an index
177 if False, no index overwrite will happen
178
179 Returns
180 -------
181 X_reconstructed_orig_format : row-concatenation of df-list,
182 with keys and additional level from self.get_iter_indices
183 if convert_back=False, always a pd.DataFrame in a sktime MultiIndex format
184 (pd-multiindex mtype for Panel, or pd_multiindex_hier for Hierarchical)
185 if convert_back=True, will have same format and mtype as X input to __init__
186 """
187 ix = self.get_iter_indices()
188 X_mi_reconstructed = pd.concat(df_list, keys=ix)
189
190 X_mi_index = X_mi_reconstructed.index
191 X_orig_index = self.X_multiindex.index
192 if overwrite_index and len(X_mi_index.names) == len(X_orig_index.names):
193 X_mi_reconstructed.index.set_names(X_orig_index.names)
194
195 if not convert_back:
196 return X_mi_reconstructed
197 else:
198 X_orig_mtype = self.X_orig_mtype
199 is_scitype = self.is_scitype
200 if X_orig_mtype is None:
201 X_orig_mtype = mtype(self.X, as_scitype=self.is_scitype)
202
203 X_reconstructed_orig_format = convert_to(
204 X_mi_reconstructed,
205 to_type=X_orig_mtype,
206 as_scitype=is_scitype,
207 store=self.converter_store,
208 )
209
210 return X_reconstructed_orig_format
211
[end of sktime/datatypes/_vectorize.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sktime/datatypes/_vectorize.py b/sktime/datatypes/_vectorize.py
--- a/sktime/datatypes/_vectorize.py
+++ b/sktime/datatypes/_vectorize.py
@@ -4,7 +4,6 @@
Contains VectorizedDF class.
"""
-
import pandas as pd
from sktime.datatypes._check import check_is_scitype, mtype
@@ -150,6 +149,7 @@
X = self.X_multiindex
ind = self.get_iter_indices()[i]
item = X.loc[ind]
+ item = _enforce_index_freq(item)
# pd-multiindex type (Panel case) expects these index names:
if self.iterate_as == "Panel":
item.index.set_names(["instances", "timepoints"], inplace=True)
@@ -208,3 +208,20 @@
)
return X_reconstructed_orig_format
+
+
+def _enforce_index_freq(item: pd.Series) -> pd.Series:
+ """Enforce the frequency of a Series index using pd.infer_freq.
+
+ Parameters
+ ----------
+ item : pd.Series
+ Returns
+ -------
+ pd.Series
+ Pandas series with the inferred frequency. If the frequency cannot be inferred
+ it will stay None
+ """
+ if hasattr(item.index, "freq") and item.index.freq is None:
+ item.index.freq = pd.infer_freq(item.index)
+ return item
|
{"golden_diff": "diff --git a/sktime/datatypes/_vectorize.py b/sktime/datatypes/_vectorize.py\n--- a/sktime/datatypes/_vectorize.py\n+++ b/sktime/datatypes/_vectorize.py\n@@ -4,7 +4,6 @@\n \n Contains VectorizedDF class.\n \"\"\"\n-\n import pandas as pd\n \n from sktime.datatypes._check import check_is_scitype, mtype\n@@ -150,6 +149,7 @@\n X = self.X_multiindex\n ind = self.get_iter_indices()[i]\n item = X.loc[ind]\n+ item = _enforce_index_freq(item)\n # pd-multiindex type (Panel case) expects these index names:\n if self.iterate_as == \"Panel\":\n item.index.set_names([\"instances\", \"timepoints\"], inplace=True)\n@@ -208,3 +208,20 @@\n )\n \n return X_reconstructed_orig_format\n+\n+\n+def _enforce_index_freq(item: pd.Series) -> pd.Series:\n+ \"\"\"Enforce the frequency of a Series index using pd.infer_freq.\n+\n+ Parameters\n+ ----------\n+ item : pd.Series\n+ Returns\n+ -------\n+ pd.Series\n+ Pandas series with the inferred frequency. If the frequency cannot be inferred\n+ it will stay None\n+ \"\"\"\n+ if hasattr(item.index, \"freq\") and item.index.freq is None:\n+ item.index.freq = pd.infer_freq(item.index)\n+ return item\n", "issue": "[BUG] odd failure in `fh._coerce_to_period` that occurs only on windows, in context of hierarchical forecast\nWhen making hierarchical forecasts on default output of `_make_hierarchical`, `fh.coerce_to_period` fails, on a windows run:\r\nhttps://github.com/alan-turing-institute/sktime/runs/5503723513\r\n\r\nThis is quite strange, and I cannot reproduce the behaviour locally on my windows machine (python 3.8 or 3.9).\r\n\r\nIt looks like `freq` is not correctly preserved in subsetting from a hierarchical multi-index ... but only under specific conditions on a windows system???\r\n\r\nExpected behaviour: the test does not fail, just like on linux/max and as on my local system.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Wrapper for easy vectorization/iteration of time series data.\n\nContains VectorizedDF class.\n\"\"\"\n\nimport pandas as pd\n\nfrom sktime.datatypes._check import check_is_scitype, mtype\nfrom sktime.datatypes._convert import convert_to\n\n\nclass VectorizedDF:\n \"\"\"Wrapper for easy vectorization/iteration over instances.\n\n VectorizedDF is an iterable that returns pandas.DataFrame\n in sktime Series or Panel format.\n Elements are all Series or Panels in X, these are iterated over.\n\n Parameters\n ----------\n X : object in sktime compatible Panel or Hierarchical format\n the data container to vectorize over\n y : placeholder argument, not used currently\n is_scitype : str (\"Panel\", \"Hierarchical\") or None, default = \"Panel\"\n scitype of X, if known; if None, will be inferred\n provide to constructor if known to avoid superfluous checks\n Caution: will not conduct checks if provided, assumes checks done\n iterate_as : str (\"Series\", \"Panel\")\n scitype of the iteration\n for instance, if X is Panel and iterate_as is \"Series\"\n then the class will iterate over individual Series in X\n or, if X is Hierarchical and iterate_as is \"Panel\"\n then the class will iterate over individual Panels in X\n (Panel = flat/non-hierarchical collection of Series)\n\n Methods\n -------\n self[i] or self.__getitem__(i)\n Returns i-th Series/Panel (depending on iterate_as) in X\n as pandas.DataFrame with Index or MultiIndex (in sktime pandas format)\n len(self) or self.__len__\n returns number of Series/Panel in X\n get_iter_indices()\n Returns 
pandas.(Multi)Index that are iterated over\n reconstruct(self, df_list, convert_back=False)\n Takes iterable df_list and returns as an object of is_scitype.\n Used to obtain original format after applying operations to self iterated\n \"\"\"\n\n def __init__(self, X, y=None, iterate_as=\"Series\", is_scitype=\"Panel\"):\n\n self.X = X\n\n if is_scitype is None:\n possible_scitypes = [\"Panel\", \"Hierarchical\"]\n _, _, metadata = check_is_scitype(\n X, scitype=possible_scitypes, return_metadata=True\n )\n is_scitype = metadata[\"scitype\"]\n X_orig_mtype = metadata[\"mtype\"]\n else:\n X_orig_mtype = None\n\n if is_scitype is not None and is_scitype not in [\"Hierarchical\", \"Panel\"]:\n raise ValueError(\n 'is_scitype must be None, \"Hierarchical\" or \"Panel\", ',\n f\"found {is_scitype}\",\n )\n self.iterate_as = iterate_as\n\n self.is_scitype = is_scitype\n self.X_orig_mtype = X_orig_mtype\n\n if iterate_as not in [\"Series\", \"Panel\"]:\n raise ValueError(\n f'iterate_as must be \"Series\" or \"Panel\", found {iterate_as}'\n )\n self.iterate_as = iterate_as\n\n if iterate_as == \"Panel\" and is_scitype == \"Panel\":\n raise ValueError(\n 'If is_scitype is \"Panel\", then iterate_as must be \"Series\"'\n )\n\n self.converter_store = dict()\n\n self.X_multiindex = self._init_conversion(X)\n self.iter_indices = self._init_iter_indices()\n\n def _init_conversion(self, X):\n \"\"\"Convert X to a pandas multiindex format.\"\"\"\n is_scitype = self.is_scitype\n\n if is_scitype == \"Panel\":\n return convert_to(\n X,\n to_type=\"pd-multiindex\",\n as_scitype=\"Panel\",\n store=self.converter_store,\n )\n elif is_scitype == \"Hierarchical\":\n return convert_to(\n X,\n to_type=\"pd_multiindex_hier\",\n as_scitype=\"Hierarchical\",\n store=self.converter_store,\n )\n else:\n raise RuntimeError(\n f\"unexpected value found for attribute self.is_scitype: {is_scitype}\"\n 'must be \"Panel\" or \"Hierarchical\"'\n )\n\n def _init_iter_indices(self):\n \"\"\"Initialize indices that are iterated over in vectorization.\"\"\"\n iterate_as = self.iterate_as\n X = self.X_multiindex\n\n if iterate_as == \"Series\":\n ix = X.index.droplevel(-1).unique()\n return ix\n elif iterate_as == \"Panel\":\n ix = X.index.droplevel([-1, -2]).unique()\n return ix\n else:\n raise RuntimeError(\n f\"unexpected value found for attribute self.iterate_as: {iterate_as}\"\n 'must be \"Series\" or \"Panel\"'\n )\n\n def get_iter_indices(self):\n \"\"\"Get indices that are iterated over in vectorization.\n\n Returns\n -------\n pandas.Index or pandas.MultiIndex\n index with unique indices that are iterated over\n use to reconstruct data frame after iteration\n \"\"\"\n return self.iter_indices\n\n def __len__(self):\n \"\"\"Return number of indices to iterate over.\"\"\"\n return len(self.get_iter_indices())\n\n def __getitem__(self, i: int):\n \"\"\"Return the i-th element iterated over in vectorization.\"\"\"\n X = self.X_multiindex\n ind = self.get_iter_indices()[i]\n item = X.loc[ind]\n # pd-multiindex type (Panel case) expects these index names:\n if self.iterate_as == \"Panel\":\n item.index.set_names([\"instances\", \"timepoints\"], inplace=True)\n return item\n\n def as_list(self):\n \"\"\"Shorthand to retrieve self (iterator) as list.\"\"\"\n return list(self)\n\n def reconstruct(self, df_list, convert_back=False, overwrite_index=True):\n \"\"\"Reconstruct original format from iterable of vectorization instances.\n\n Parameters\n ----------\n df_list : iterable of objects of same type as __getitem__ returns.\n can 
be self, but will in general be another object to be useful.\n Example: [some_operation(df) for df in self] that leaves types the same\n convert_back : bool, default = False\n whether to convert output back to mtype of X in __init__\n if False, the return will be a pandas.DataFrame with Index or multiIndex\n if True, the return is converted to the mtype of X in __init__\n overwrite_index : bool, default = True\n if True, the resulting return will have index overwritten by that of X\n only if applies, i.e., overwrite is possible and X had an index\n if False, no index overwrite will happen\n\n Returns\n -------\n X_reconstructed_orig_format : row-concatenation of df-list,\n with keys and additional level from self.get_iter_indices\n if convert_back=False, always a pd.DataFrame in a sktime MultiIndex format\n (pd-multiindex mtype for Panel, or pd_multiindex_hier for Hierarchical)\n if convert_back=True, will have same format and mtype as X input to __init__\n \"\"\"\n ix = self.get_iter_indices()\n X_mi_reconstructed = pd.concat(df_list, keys=ix)\n\n X_mi_index = X_mi_reconstructed.index\n X_orig_index = self.X_multiindex.index\n if overwrite_index and len(X_mi_index.names) == len(X_orig_index.names):\n X_mi_reconstructed.index.set_names(X_orig_index.names)\n\n if not convert_back:\n return X_mi_reconstructed\n else:\n X_orig_mtype = self.X_orig_mtype\n is_scitype = self.is_scitype\n if X_orig_mtype is None:\n X_orig_mtype = mtype(self.X, as_scitype=self.is_scitype)\n\n X_reconstructed_orig_format = convert_to(\n X_mi_reconstructed,\n to_type=X_orig_mtype,\n as_scitype=is_scitype,\n store=self.converter_store,\n )\n\n return X_reconstructed_orig_format\n", "path": "sktime/datatypes/_vectorize.py"}]}
| 3,013 | 329 |
gh_patches_debug_11956
|
rasdani/github-patches
|
git_diff
|
intel__dffml-197
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs: Switch HTML theme to sphinx_rtd_theme
https://sphinx-rtd-theme.readthedocs.io/en/stable/installing.html
</issue>
<code>
[start of setup.py]
1 # SPDX-License-Identifier: MIT
2 # Copyright (c) 2019 Intel Corporation
3 import ast
4 from io import open
5 from setuptools import find_packages, setup
6
7 with open("dffml/version.py", "r") as f:
8 for line in f:
9 if line.startswith("VERSION"):
10 VERSION = ast.literal_eval(line.strip().split("=")[-1].strip())
11 break
12
13 with open("README.md", "r", encoding="utf-8") as f:
14 README = f.read()
15
16 setup(
17 name="dffml",
18 version=VERSION,
19 description="Data Flow Facilitator for Machine Learning",
20 long_description=README,
21 long_description_content_type="text/markdown",
22 author="John Andersen",
23 author_email="[email protected]",
24 maintainer="John Andersen",
25 maintainer_email="[email protected]",
26 url="https://github.com/intel/dffml",
27 license="MIT",
28 keywords=[""],
29 classifiers=[
30 "Development Status :: 3 - Alpha",
31 "Intended Audience :: Developers",
32 "License :: OSI Approved :: MIT License",
33 "Natural Language :: English",
34 "Operating System :: OS Independent",
35 "Programming Language :: Python :: 3 :: Only",
36 "Programming Language :: Python :: 3.7",
37 "Programming Language :: Python :: Implementation :: CPython",
38 "Programming Language :: Python :: Implementation :: PyPy",
39 ],
40 packages=find_packages(),
41 include_package_data=True,
42 zip_safe=False,
43 extras_require={
44 "models": ["dffml-model-tensorflow", "dffml-model-scratch"],
45 "git": ["dffml-feature-git"],
46 "dev": [
47 "coverage",
48 "codecov",
49 "sphinx",
50 "sphinxcontrib-asyncio",
51 "black",
52 ],
53 },
54 entry_points={
55 "console_scripts": ["dffml = dffml.cli:CLI.main"],
56 "dffml.source": [
57 "csv = dffml.source.csv:CSVSource",
58 "json = dffml.source.json:JSONSource",
59 "memory = dffml.source.memory:MemorySource",
60 ],
61 "dffml.port": ["json = dffml.port.json:JSON"],
62 "dffml.service.cli": ["dev = dffml.service.dev:Develop"],
63 # Data Flow
64 "dffml.operation": [
65 "group_by = dffml.operation.output:GroupBy.op",
66 "get_single = dffml.operation.output:GetSingle.op",
67 "associate = dffml.operation.output:Associate.op",
68 ],
69 "dffml.operation.implementation": [
70 "group_by = dffml.operation.output:GroupBy.imp",
71 "get_single = dffml.operation.output:GetSingle.imp",
72 "associate = dffml.operation.output:Associate.imp",
73 ],
74 "dffml.kvstore": ["memory = dffml.df.memory:MemoryKeyValueStore"],
75 "dffml.input.network": ["memory = dffml.df.memory:MemoryInputNetwork"],
76 "dffml.operation.network": [
77 "memory = dffml.df.memory:MemoryOperationNetwork"
78 ],
79 "dffml.redundancy.checker": [
80 "memory = dffml.df.memory:MemoryRedundancyChecker"
81 ],
82 "dffml.lock.network": ["memory = dffml.df.memory:MemoryLockNetwork"],
83 "dffml.operation.implementation.network": [
84 "memory = dffml.df.memory:MemoryOperationImplementationNetwork"
85 ],
86 "dffml.orchestrator": ["memory = dffml.df.memory:MemoryOrchestrator"],
87 },
88 )
89
[end of setup.py]
[start of docs/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # http://www.sphinx-doc.org/en/master/config
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import os
14 import sys
15
16 sys.path.insert(0, os.path.abspath("."))
17 from dffml.version import VERSION
18
19 # -- Project information -----------------------------------------------------
20
21 project = "DFFML"
22 copyright = "2019, Intel"
23 author = "John Andersen"
24
25 # The short X.Y version
26 version = VERSION
27
28 # The full version, including alpha/beta/rc tags
29 release = version
30
31
32 # -- General configuration ---------------------------------------------------
33
34 # Add any Sphinx extension module names here, as strings. They can be
35 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
36 # ones.
37 extensions = [
38 "sphinx.ext.intersphinx",
39 "sphinx.ext.autodoc",
40 "sphinx.ext.viewcode",
41 "sphinxcontrib.asyncio",
42 ]
43
44 intersphinx_mapping = {"python": ("https://docs.python.org/3", None)}
45
46 # Add any paths that contain templates here, relative to this directory.
47 templates_path = ["_templates"]
48
49 # List of patterns, relative to source directory, that match files and
50 # directories to ignore when looking for source files.
51 # This pattern also affects html_static_path and html_extra_path.
52 exclude_patterns = []
53
54
55 # -- Options for HTML output -------------------------------------------------
56
57 # The theme to use for HTML and HTML Help pages. See the documentation for
58 # a list of builtin themes.
59 #
60 html_theme = "alabaster"
61
62 html_theme_options = {
63 "description": "The fastest path to machine learning integration",
64 "github_user": "intel",
65 "github_repo": "dffml",
66 "github_button": True,
67 "travis_button": True,
68 "codecov_button": True,
69 }
70
71 # Add any paths that contain custom static files (such as style sheets) here,
72 # relative to this directory. They are copied after the builtin static files,
73 # so a file named "default.css" will overwrite the builtin "default.css".
74 html_static_path = ["_static"]
75
76 # -- Extension configuration -------------------------------------------------
77
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -57,7 +57,7 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
-html_theme = "alabaster"
+html_theme = "sphinx_rtd_theme"
html_theme_options = {
"description": "The fastest path to machine learning integration",
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,6 +49,7 @@
"sphinx",
"sphinxcontrib-asyncio",
"black",
+ "sphinx_rtd_theme",
],
},
entry_points={
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -57,7 +57,7 @@\n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n #\n-html_theme = \"alabaster\"\n+html_theme = \"sphinx_rtd_theme\"\n \n html_theme_options = {\n \"description\": \"The fastest path to machine learning integration\",\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,6 +49,7 @@\n \"sphinx\",\n \"sphinxcontrib-asyncio\",\n \"black\",\n+ \"sphinx_rtd_theme\",\n ],\n },\n entry_points={\n", "issue": "docs: Switch HTML theme to sphinx_rtd_theme\nhttps://sphinx-rtd-theme.readthedocs.io/en/stable/installing.html\n", "before_files": [{"content": "# SPDX-License-Identifier: MIT\n# Copyright (c) 2019 Intel Corporation\nimport ast\nfrom io import open\nfrom setuptools import find_packages, setup\n\nwith open(\"dffml/version.py\", \"r\") as f:\n for line in f:\n if line.startswith(\"VERSION\"):\n VERSION = ast.literal_eval(line.strip().split(\"=\")[-1].strip())\n break\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as f:\n README = f.read()\n\nsetup(\n name=\"dffml\",\n version=VERSION,\n description=\"Data Flow Facilitator for Machine Learning\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n author=\"John Andersen\",\n author_email=\"[email protected]\",\n maintainer=\"John Andersen\",\n maintainer_email=\"[email protected]\",\n url=\"https://github.com/intel/dffml\",\n license=\"MIT\",\n keywords=[\"\"],\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n ],\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n extras_require={\n \"models\": [\"dffml-model-tensorflow\", \"dffml-model-scratch\"],\n \"git\": [\"dffml-feature-git\"],\n \"dev\": [\n \"coverage\",\n \"codecov\",\n \"sphinx\",\n \"sphinxcontrib-asyncio\",\n \"black\",\n ],\n },\n entry_points={\n \"console_scripts\": [\"dffml = dffml.cli:CLI.main\"],\n \"dffml.source\": [\n \"csv = dffml.source.csv:CSVSource\",\n \"json = dffml.source.json:JSONSource\",\n \"memory = dffml.source.memory:MemorySource\",\n ],\n \"dffml.port\": [\"json = dffml.port.json:JSON\"],\n \"dffml.service.cli\": [\"dev = dffml.service.dev:Develop\"],\n # Data Flow\n \"dffml.operation\": [\n \"group_by = dffml.operation.output:GroupBy.op\",\n \"get_single = dffml.operation.output:GetSingle.op\",\n \"associate = dffml.operation.output:Associate.op\",\n ],\n \"dffml.operation.implementation\": [\n \"group_by = dffml.operation.output:GroupBy.imp\",\n \"get_single = dffml.operation.output:GetSingle.imp\",\n \"associate = dffml.operation.output:Associate.imp\",\n ],\n \"dffml.kvstore\": [\"memory = dffml.df.memory:MemoryKeyValueStore\"],\n \"dffml.input.network\": [\"memory = dffml.df.memory:MemoryInputNetwork\"],\n \"dffml.operation.network\": [\n \"memory = dffml.df.memory:MemoryOperationNetwork\"\n ],\n \"dffml.redundancy.checker\": [\n \"memory = dffml.df.memory:MemoryRedundancyChecker\"\n ],\n \"dffml.lock.network\": [\"memory = dffml.df.memory:MemoryLockNetwork\"],\n \"dffml.operation.implementation.network\": [\n 
\"memory = dffml.df.memory:MemoryOperationImplementationNetwork\"\n ],\n \"dffml.orchestrator\": [\"memory = dffml.df.memory:MemoryOrchestrator\"],\n },\n)\n", "path": "setup.py"}, {"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# http://www.sphinx-doc.org/en/master/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\n\nsys.path.insert(0, os.path.abspath(\".\"))\nfrom dffml.version import VERSION\n\n# -- Project information -----------------------------------------------------\n\nproject = \"DFFML\"\ncopyright = \"2019, Intel\"\nauthor = \"John Andersen\"\n\n# The short X.Y version\nversion = VERSION\n\n# The full version, including alpha/beta/rc tags\nrelease = version\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinxcontrib.asyncio\",\n]\n\nintersphinx_mapping = {\"python\": (\"https://docs.python.org/3\", None)}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"alabaster\"\n\nhtml_theme_options = {\n \"description\": \"The fastest path to machine learning integration\",\n \"github_user\": \"intel\",\n \"github_repo\": \"dffml\",\n \"github_button\": True,\n \"travis_button\": True,\n \"codecov_button\": True,\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# -- Extension configuration -------------------------------------------------\n", "path": "docs/conf.py"}]}
| 2,188 | 167 |
gh_patches_debug_17221
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-2980
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Parsing and usage of boolean environment variables doesn't work as expected
We cannot specify "falsy" values via environment variables, as they will be overridden by `default` in `get_env`.
Also some integrations (e.g. `botocore` for setting `distributed_tracing`) don't parse the result of `get_env` correctly, and thus cannot be changed via environment variables.
I would like to set `config.botocore["distributed_tracing"]` to `False` via the `DD_BOTOCORE_DISTRIBUTED_TRACING` environment variable, but because the default is `True`, there is no way to change the semantics of the setting to disable the behaviour.
There are two issues, and if either of them was fixed, the problem would be fixed.
I would be happy to provide a fix for both of them, or be directed to a way to fix it, without having to add extra code to my project.
## Empty environment variables are assumed to be missing
If you do `DD_BOTOCORE_DISTRIBUTED_TRACING=` (an empty string), [the relevant code](https://github.com/DataDog/dd-trace-py/blob/e7a73336ab7c4c1479d504f86730c4dfaeb2e5f7/ddtrace/utils/formats.py#L47-L58) will replace it with `default`, in which case it would be `True`:
```python
def get_env(*parts, **kwargs):
...
# env = "DD_BOTOCORE_DISTRIBUTED_TRACING"
value = os.getenv(env) # value = ""
...
# legacy = None
value = value or legacy # value = None
# default = True
return value if value else default # return True
```
The fix would be to check for `None` instead of for a "falsy" value:
```python
if value is None:
if legacy is not None:
value = legacy
else:
value = default
```
## Usage of `get_env` for booleans should convert text values to boolean
When the `botocore` integration [uses](https://github.com/DataDog/dd-trace-py/blob/e7a73336ab7c4c1479d504f86730c4dfaeb2e5f7/ddtrace/contrib/botocore/patch.py#L39) `get_env` to define a boolean option, it should convert common boolean text values to boolean, and reject others:
```python
config._add(
"botocore",
{
"distributed_tracing": get_bool_env("botocore", "distributed_tracing", default=True),
...,
},
)
```
And
```python
TRUE_STRINGS = [True, "true", "yes", "y", "enable", "enabled", "1"]
FALSE_STRINGS = [False, "false", "no", "n", "disable", "disabled", "0"]
def to_boolean(value: str) -> bool:
if value.lower() in TRUE_STRINGS:
return True
if value.lower() in FALSE_STRINGS:
return False
raise ValueError(f"Unknown boolean value '{value}'")
def get_bool_env(*parts, **kwargs):
return to_boolean(get_env(*parts, **kwargs))
```
### Which version of dd-trace-py are you using?
`ddtrace==0.55.3`
### Which version of pip are you using?
`pip 21.2.4 (python 3.8)`
### Which version of the libraries are you using?
Not relevant
### How can we reproduce your problem?
First issue:
Before running the script:
```shell
DD_BOTOCORE_DISTRIBUTED_TRACING=
```
Run the script:
```python3
from ddtrace import config
from ddtrace.contrib import botocore
print(config.botocore["distributed_tracing"])
print(bool(config.botocore["distributed_tracing"]))
```
Outputs:
```
True
True
```
Second issue:
Before running the script:
```shell
DD_BOTOCORE_DISTRIBUTED_TRACING=False
```
Run the script:
```python3
from ddtrace import config
from ddtrace.contrib import botocore
print(config.botocore["distributed_tracing"])
print(bool(config.botocore["distributed_tracing"]))
```
Outputs:
```
False
True
```
### What is the result that you get?
I cannot meaningfully change the value of `config.botocore["distributed_tracing"]`
### What is the result that you expected?
I want to be able to meaningfully change `config.botocore["distributed_tracing"]` via the environment variable (i.e. `DD_BOTOCORE_DISTRIBUTED_TRACING`)
</issue>
<code>
[start of ddtrace/contrib/botocore/patch.py]
1 """
2 Trace queries to aws api done via botocore client
3 """
4 # 3p
5 import base64
6 import json
7
8 import botocore.client
9
10 from ddtrace import config
11 from ddtrace.vendor import wrapt
12
13 # project
14 from ...constants import ANALYTICS_SAMPLE_RATE_KEY
15 from ...constants import SPAN_MEASURED_KEY
16 from ...ext import SpanTypes
17 from ...ext import aws
18 from ...ext import http
19 from ...internal.logger import get_logger
20 from ...pin import Pin
21 from ...propagation.http import HTTPPropagator
22 from ...utils import get_argument_value
23 from ...utils.formats import deep_getattr
24 from ...utils.formats import get_env
25 from ...utils.wrappers import unwrap
26
27
28 # Original botocore client class
29 _Botocore_client = botocore.client.BaseClient
30
31 ARGS_NAME = ("action", "params", "path", "verb")
32 TRACED_ARGS = {"params", "path", "verb"}
33
34 log = get_logger(__name__)
35
36 # Botocore default settings
37 config._add(
38 "botocore",
39 {
40 "distributed_tracing": get_env("botocore", "distributed_tracing", default=True),
41 "invoke_with_legacy_context": get_env("botocore", "invoke_with_legacy_context", default=False),
42 },
43 )
44
45
46 def inject_trace_data_to_message_attributes(trace_data, entry):
47 if "MessageAttributes" not in entry:
48 entry["MessageAttributes"] = {}
49 # An Amazon SQS message can contain up to 10 metadata attributes.
50 if len(entry["MessageAttributes"]) < 10:
51 entry["MessageAttributes"]["_datadog"] = {"DataType": "String", "StringValue": json.dumps(trace_data)}
52 else:
53 log.debug("skipping trace injection, max number (10) of MessageAttributes exceeded")
54
55
56 def inject_trace_to_sqs_batch_message(args, span):
57 trace_data = {}
58 HTTPPropagator.inject(span.context, trace_data)
59 params = args[1]
60
61 for entry in params["Entries"]:
62 inject_trace_data_to_message_attributes(trace_data, entry)
63
64
65 def inject_trace_to_sqs_message(args, span):
66 trace_data = {}
67 HTTPPropagator.inject(span.context, trace_data)
68 params = args[1]
69
70 inject_trace_data_to_message_attributes(trace_data, params)
71
72
73 def modify_client_context(client_context_object, trace_headers):
74 if config.botocore["invoke_with_legacy_context"]:
75 trace_headers = {"_datadog": trace_headers}
76
77 if "custom" in client_context_object:
78 client_context_object["custom"].update(trace_headers)
79 else:
80 client_context_object["custom"] = trace_headers
81
82
83 def inject_trace_to_client_context(args, span):
84 trace_headers = {}
85 HTTPPropagator.inject(span.context, trace_headers)
86 client_context_object = {}
87 params = args[1]
88 if "ClientContext" in params:
89 try:
90 client_context_json = base64.b64decode(params["ClientContext"]).decode("utf-8")
91 client_context_object = json.loads(client_context_json)
92 except Exception:
93 log.warning("malformed client_context=%s", params["ClientContext"], exc_info=True)
94 return
95 modify_client_context(client_context_object, trace_headers)
96 try:
97 json_context = json.dumps(client_context_object).encode("utf-8")
98 except Exception:
99 log.warning("unable to encode modified client context as json: %s", client_context_object, exc_info=True)
100 return
101 params["ClientContext"] = base64.b64encode(json_context).decode("utf-8")
102
103
104 def patch():
105 if getattr(botocore.client, "_datadog_patch", False):
106 return
107 setattr(botocore.client, "_datadog_patch", True)
108
109 wrapt.wrap_function_wrapper("botocore.client", "BaseClient._make_api_call", patched_api_call)
110 Pin(service="aws", app="aws").onto(botocore.client.BaseClient)
111
112
113 def unpatch():
114 if getattr(botocore.client, "_datadog_patch", False):
115 setattr(botocore.client, "_datadog_patch", False)
116 unwrap(botocore.client.BaseClient, "_make_api_call")
117
118
119 def patched_api_call(original_func, instance, args, kwargs):
120
121 pin = Pin.get_from(instance)
122 if not pin or not pin.enabled():
123 return original_func(*args, **kwargs)
124
125 endpoint_name = deep_getattr(instance, "_endpoint._endpoint_prefix")
126
127 with pin.tracer.trace(
128 "{}.command".format(endpoint_name), service="{}.{}".format(pin.service, endpoint_name), span_type=SpanTypes.HTTP
129 ) as span:
130 span.set_tag(SPAN_MEASURED_KEY)
131 operation = None
132 if args:
133 operation = get_argument_value(args, kwargs, 0, "operation_name")
134 # DEV: join is the fastest way of concatenating strings that is compatible
135 # across Python versions (see
136 # https://stackoverflow.com/questions/1316887/what-is-the-most-efficient-string-concatenation-method-in-python)
137 span.resource = ".".join((endpoint_name, operation.lower()))
138
139 if config.botocore["distributed_tracing"]:
140 if endpoint_name == "lambda" and operation == "Invoke":
141 inject_trace_to_client_context(args, span)
142 if endpoint_name == "sqs" and operation == "SendMessage":
143 inject_trace_to_sqs_message(args, span)
144 if endpoint_name == "sqs" and operation == "SendMessageBatch":
145 inject_trace_to_sqs_batch_message(args, span)
146
147 else:
148 span.resource = endpoint_name
149
150 aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)
151
152 region_name = deep_getattr(instance, "meta.region_name")
153
154 span._set_str_tag("aws.agent", "botocore")
155 if operation is not None:
156 span._set_str_tag("aws.operation", operation)
157 if region_name is not None:
158 span._set_str_tag("aws.region", region_name)
159
160 result = original_func(*args, **kwargs)
161
162 response_meta = result.get("ResponseMetadata")
163 if response_meta:
164 if "HTTPStatusCode" in response_meta:
165 span.set_tag(http.STATUS_CODE, response_meta["HTTPStatusCode"])
166
167 if "RetryAttempts" in response_meta:
168 span.set_tag("retry_attempts", response_meta["RetryAttempts"])
169
170 if "RequestId" in response_meta:
171 span.set_tag("aws.requestid", response_meta["RequestId"])
172
173 # set analytics sample rate
174 span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.botocore.get_analytics_sample_rate())
175
176 return result
177
[end of ddtrace/contrib/botocore/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/contrib/botocore/patch.py b/ddtrace/contrib/botocore/patch.py
--- a/ddtrace/contrib/botocore/patch.py
+++ b/ddtrace/contrib/botocore/patch.py
@@ -20,6 +20,7 @@
from ...pin import Pin
from ...propagation.http import HTTPPropagator
from ...utils import get_argument_value
+from ...utils.formats import asbool
from ...utils.formats import deep_getattr
from ...utils.formats import get_env
from ...utils.wrappers import unwrap
@@ -37,8 +38,8 @@
config._add(
"botocore",
{
- "distributed_tracing": get_env("botocore", "distributed_tracing", default=True),
- "invoke_with_legacy_context": get_env("botocore", "invoke_with_legacy_context", default=False),
+ "distributed_tracing": asbool(get_env("botocore", "distributed_tracing", default=True)),
+ "invoke_with_legacy_context": asbool(get_env("botocore", "invoke_with_legacy_context", default=False)),
},
)
|
{"golden_diff": "diff --git a/ddtrace/contrib/botocore/patch.py b/ddtrace/contrib/botocore/patch.py\n--- a/ddtrace/contrib/botocore/patch.py\n+++ b/ddtrace/contrib/botocore/patch.py\n@@ -20,6 +20,7 @@\n from ...pin import Pin\n from ...propagation.http import HTTPPropagator\n from ...utils import get_argument_value\n+from ...utils.formats import asbool\n from ...utils.formats import deep_getattr\n from ...utils.formats import get_env\n from ...utils.wrappers import unwrap\n@@ -37,8 +38,8 @@\n config._add(\n \"botocore\",\n {\n- \"distributed_tracing\": get_env(\"botocore\", \"distributed_tracing\", default=True),\n- \"invoke_with_legacy_context\": get_env(\"botocore\", \"invoke_with_legacy_context\", default=False),\n+ \"distributed_tracing\": asbool(get_env(\"botocore\", \"distributed_tracing\", default=True)),\n+ \"invoke_with_legacy_context\": asbool(get_env(\"botocore\", \"invoke_with_legacy_context\", default=False)),\n },\n )\n", "issue": "Parsing and usage of boolean environment variables doesn't work as expected\nWe cannot specify \"falsy\" values via environment variables, as they will be overridden by `default` in `get_env`.\r\n\r\nAlso some integrations (e.g. `botocore` for setting `distributed_tracing`) don't parse the result of `get_env` correctly, and thus cannot be changed via environment variables.\r\n\r\nI would like to set `config.botocore[\"distributing_tracing\"]` to `False` via the `DD_BOTOCORE_DISTRIBUTED_TRACING` environment variable, but because the default is `True`, there is no way to change the semantics of the setting to disable the behaviour.\r\n\r\nThere are two issues, and if either of them was fixed, the problem would be fixed.\r\n\r\nI would be happy to provide a fix for both of them, or be directed to a way to fix it, without having to add extra code to my project.\r\n\r\n## Empty environment variables are assumed to be missing\r\n\r\nIf you do `DD_BOTOCORE_DISTRIBUTED_TRACING=` (an empty string), [the relevant code](https://github.com/DataDog/dd-trace-py/blob/e7a73336ab7c4c1479d504f86730c4dfaeb2e5f7/ddtrace/utils/formats.py#L47-L58) will replace it with `default`, in which case it would be `True`:\r\n\r\n```python\r\ndef get_env(*parts, **kwargs):\r\n ...\r\n\r\n # env = \"DD_BOTOCORE_DISTRIBUTED_TRACING\"\r\n value = os.getenv(env) # value = \"\"\r\n ...\r\n\r\n # legacy = None\r\n value = value or legacy # value = None\r\n # default = True\r\n return value if value else default # return True\r\n```\r\n\r\nThe fix would be to check for `None`, instead for \"falsy\":\r\n\r\n```python\r\nif value is None:\r\n if legacy is not None:\r\n value = legacy\r\n else:\r\n value = default\r\n```\r\n\r\n## Usage of `get_env` for booleans should convert text values to boolean\r\n\r\nWhen the `botocore` integration [uses](https://github.com/DataDog/dd-trace-py/blob/e7a73336ab7c4c1479d504f86730c4dfaeb2e5f7/ddtrace/contrib/botocore/patch.py#L39) `get_env` to define a boolean option, it should convert common boolean text values to boolean, and reject others:\r\n\r\n```python\r\nconfig._add(\r\n \"botocore\",\r\n {\r\n \"distributed_tracing\": get_bool_env(\"botocore\", \"distributed_tracing\", default=True),\r\n ...,\r\n },\r\n)\r\n```\r\nAnd\r\n```python\r\nTRUE_STRINGS = [True, \"true\", \"yes\", \"y\", \"enable\", \"enabled\", \"1\"]\r\nFALSE_STRINGS = [False, \"false\", \"no\", \"n\", \"disable\", \"disabled\", \"0\"]\r\n\r\n\r\ndef to_boolean(value: str) -> bool:\r\n if value.lower() in TRUE_STRINGS:\r\n return True\r\n if value.lower() in FALSE_STRINGS:\r\n return 
False\r\n raise ValueError(f\"Unknown boolean value '{value}'\")\r\n\r\n\r\n\r\ndef get_bool_env(*parts, **kwargs):\r\n return to_boolean(get_env(*parts, **kwargs))\r\n```\r\n\r\n### Which version of dd-trace-py are you using?\r\n\r\n`ddtrace==0.55.3`\r\n\r\n### Which version of pip are you using?\r\n\r\n`pip 21.2.4 (python 3.8)`\r\n\r\n### Which version of the libraries are you using?\r\n\r\nNot relevant\r\n\r\n### How can we reproduce your problem?\r\n\r\nFirst issue:\r\n\r\nBefore running the script:\r\n\r\n```shell\r\nDD_BOTOCORE_DISTRIBUTED_TRACING=\r\n```\r\n\r\nRun the script:\r\n\r\n```python3\r\nfrom ddtrace import config\r\nfrom ddtrace.contrib import botocore\r\nprint(config.botocore[\"distributed_tracing\"])\r\nprint(bool(config.botocore[\"distributed_tracing\"]))\r\n```\r\n\r\nOutputs:\r\n\r\n```\r\nTrue\r\nTrue\r\n```\r\n\r\nSecond issue:\r\n\r\nBefore running the script:\r\n\r\n```shell\r\nDD_BOTOCORE_DISTRIBUTED_TRACING=False\r\n```\r\n\r\nRun the script:\r\n\r\n```python3\r\nfrom ddtrace import config\r\nfrom ddtrace.contrib import botocore\r\nprint(config.botocore[\"distributed_tracing\"])\r\nprint(bool(config.botocore[\"distributed_tracing\"]))\r\n```\r\n\r\nOutputs:\r\n\r\n```\r\nFalse\r\nTrue\r\n```\r\n\r\n\r\n### What is the result that you get?\r\n\r\nI cannot meaningfully change the value of `config.botocore[\"distributed_tracing\"]`\r\n\r\n### What is the result that you expected?\r\n\r\nI want to be able to meaningfully change `config.botocore[\"distributed_tracing\"]` via the environment variable (i.e. `DD_BOTOCORE_DISTRIBUTED_TRACING`)\r\n\n", "before_files": [{"content": "\"\"\"\nTrace queries to aws api done via botocore client\n\"\"\"\n# 3p\nimport base64\nimport json\n\nimport botocore.client\n\nfrom ddtrace import config\nfrom ddtrace.vendor import wrapt\n\n# project\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY\nfrom ...constants import SPAN_MEASURED_KEY\nfrom ...ext import SpanTypes\nfrom ...ext import aws\nfrom ...ext import http\nfrom ...internal.logger import get_logger\nfrom ...pin import Pin\nfrom ...propagation.http import HTTPPropagator\nfrom ...utils import get_argument_value\nfrom ...utils.formats import deep_getattr\nfrom ...utils.formats import get_env\nfrom ...utils.wrappers import unwrap\n\n\n# Original botocore client class\n_Botocore_client = botocore.client.BaseClient\n\nARGS_NAME = (\"action\", \"params\", \"path\", \"verb\")\nTRACED_ARGS = {\"params\", \"path\", \"verb\"}\n\nlog = get_logger(__name__)\n\n# Botocore default settings\nconfig._add(\n \"botocore\",\n {\n \"distributed_tracing\": get_env(\"botocore\", \"distributed_tracing\", default=True),\n \"invoke_with_legacy_context\": get_env(\"botocore\", \"invoke_with_legacy_context\", default=False),\n },\n)\n\n\ndef inject_trace_data_to_message_attributes(trace_data, entry):\n if \"MessageAttributes\" not in entry:\n entry[\"MessageAttributes\"] = {}\n # An Amazon SQS message can contain up to 10 metadata attributes.\n if len(entry[\"MessageAttributes\"]) < 10:\n entry[\"MessageAttributes\"][\"_datadog\"] = {\"DataType\": \"String\", \"StringValue\": json.dumps(trace_data)}\n else:\n log.debug(\"skipping trace injection, max number (10) of MessageAttributes exceeded\")\n\n\ndef inject_trace_to_sqs_batch_message(args, span):\n trace_data = {}\n HTTPPropagator.inject(span.context, trace_data)\n params = args[1]\n\n for entry in params[\"Entries\"]:\n inject_trace_data_to_message_attributes(trace_data, entry)\n\n\ndef inject_trace_to_sqs_message(args, span):\n trace_data = {}\n 
HTTPPropagator.inject(span.context, trace_data)\n params = args[1]\n\n inject_trace_data_to_message_attributes(trace_data, params)\n\n\ndef modify_client_context(client_context_object, trace_headers):\n if config.botocore[\"invoke_with_legacy_context\"]:\n trace_headers = {\"_datadog\": trace_headers}\n\n if \"custom\" in client_context_object:\n client_context_object[\"custom\"].update(trace_headers)\n else:\n client_context_object[\"custom\"] = trace_headers\n\n\ndef inject_trace_to_client_context(args, span):\n trace_headers = {}\n HTTPPropagator.inject(span.context, trace_headers)\n client_context_object = {}\n params = args[1]\n if \"ClientContext\" in params:\n try:\n client_context_json = base64.b64decode(params[\"ClientContext\"]).decode(\"utf-8\")\n client_context_object = json.loads(client_context_json)\n except Exception:\n log.warning(\"malformed client_context=%s\", params[\"ClientContext\"], exc_info=True)\n return\n modify_client_context(client_context_object, trace_headers)\n try:\n json_context = json.dumps(client_context_object).encode(\"utf-8\")\n except Exception:\n log.warning(\"unable to encode modified client context as json: %s\", client_context_object, exc_info=True)\n return\n params[\"ClientContext\"] = base64.b64encode(json_context).decode(\"utf-8\")\n\n\ndef patch():\n if getattr(botocore.client, \"_datadog_patch\", False):\n return\n setattr(botocore.client, \"_datadog_patch\", True)\n\n wrapt.wrap_function_wrapper(\"botocore.client\", \"BaseClient._make_api_call\", patched_api_call)\n Pin(service=\"aws\", app=\"aws\").onto(botocore.client.BaseClient)\n\n\ndef unpatch():\n if getattr(botocore.client, \"_datadog_patch\", False):\n setattr(botocore.client, \"_datadog_patch\", False)\n unwrap(botocore.client.BaseClient, \"_make_api_call\")\n\n\ndef patched_api_call(original_func, instance, args, kwargs):\n\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return original_func(*args, **kwargs)\n\n endpoint_name = deep_getattr(instance, \"_endpoint._endpoint_prefix\")\n\n with pin.tracer.trace(\n \"{}.command\".format(endpoint_name), service=\"{}.{}\".format(pin.service, endpoint_name), span_type=SpanTypes.HTTP\n ) as span:\n span.set_tag(SPAN_MEASURED_KEY)\n operation = None\n if args:\n operation = get_argument_value(args, kwargs, 0, \"operation_name\")\n # DEV: join is the fastest way of concatenating strings that is compatible\n # across Python versions (see\n # https://stackoverflow.com/questions/1316887/what-is-the-most-efficient-string-concatenation-method-in-python)\n span.resource = \".\".join((endpoint_name, operation.lower()))\n\n if config.botocore[\"distributed_tracing\"]:\n if endpoint_name == \"lambda\" and operation == \"Invoke\":\n inject_trace_to_client_context(args, span)\n if endpoint_name == \"sqs\" and operation == \"SendMessage\":\n inject_trace_to_sqs_message(args, span)\n if endpoint_name == \"sqs\" and operation == \"SendMessageBatch\":\n inject_trace_to_sqs_batch_message(args, span)\n\n else:\n span.resource = endpoint_name\n\n aws.add_span_arg_tags(span, endpoint_name, args, ARGS_NAME, TRACED_ARGS)\n\n region_name = deep_getattr(instance, \"meta.region_name\")\n\n span._set_str_tag(\"aws.agent\", \"botocore\")\n if operation is not None:\n span._set_str_tag(\"aws.operation\", operation)\n if region_name is not None:\n span._set_str_tag(\"aws.region\", region_name)\n\n result = original_func(*args, **kwargs)\n\n response_meta = result.get(\"ResponseMetadata\")\n if response_meta:\n if \"HTTPStatusCode\" in 
response_meta:\n span.set_tag(http.STATUS_CODE, response_meta[\"HTTPStatusCode\"])\n\n if \"RetryAttempts\" in response_meta:\n span.set_tag(\"retry_attempts\", response_meta[\"RetryAttempts\"])\n\n if \"RequestId\" in response_meta:\n span.set_tag(\"aws.requestid\", response_meta[\"RequestId\"])\n\n # set analytics sample rate\n span.set_tag(ANALYTICS_SAMPLE_RATE_KEY, config.botocore.get_analytics_sample_rate())\n\n return result\n", "path": "ddtrace/contrib/botocore/patch.py"}]}
| 3,466 | 253 |
gh_patches_debug_20822
|
rasdani/github-patches
|
git_diff
|
cognitedata__cognite-sdk-python-1628
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: '<=' not supported between instances of 'int' and 'str' in /cognite/client/utils/_time.py", line 97, in ms_to_datetime in if not (MIN_TIMESTAMP_MS <= ms <= MAX_TIMESTAMP_MS)
**System information (please complete the following information):**
- OS: macOS Sonoma v 14.1
- Python Version: 3.11
- SDK Version: 7.17.3
**Describe the bug**
In cognite/client/utils/_time.py
One needs to make sure that item[k] is an integer. Otherwise it raises a `TypeError` in the ms_to_datetime function
(see error at the end)
```
def _convert_and_isoformat_time_attrs_in_dict(item: dict) -> dict:
for k in TIME_ATTRIBUTES.intersection(item):
try:
item[k] = ms_to_datetime(item[k]).isoformat(sep=" ", timespec="milliseconds")
except ValueError:
pass
return item
```
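For illustration only, a minimal self-contained sketch of the kind of guard described above. The stand-in `ms_to_datetime` and the `isinstance` check are assumptions made for this example, not the SDK's actual implementation or its eventual fix:

```
# Standalone sketch: skip values that are not integer millisecond timestamps,
# so ISO strings such as "2024-02-16 08:55:53.150+00:00" pass through unchanged.
from datetime import datetime, timezone

TIME_ATTRIBUTES = {"created_time", "last_updated_time"}


def ms_to_datetime(ms: int) -> datetime:
    # stand-in for cognite.client.utils._time.ms_to_datetime
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)


def convert_and_isoformat_time_attrs(item: dict) -> dict:
    for k in TIME_ATTRIBUTES.intersection(item):
        if not isinstance(item[k], int):  # guard: leave str values untouched
            continue
        item[k] = ms_to_datetime(item[k]).isoformat(sep=" ", timespec="milliseconds")
    return item


print(convert_and_isoformat_time_attrs(
    {"created_time": 1708003098986, "last_updated_time": "2024-02-16 08:55:53.150+00:00"}
))
```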
**To Reproduce**
Runnable code reproducing the error.
```
"""
Test creating a data model in akerbp-sandbox
"""
# Authenticate with akerbp-sandbox
from cognite.client import CogniteClient, ClientConfig
from cognite.client.credentials import OAuthInteractive
from variables import (
TENANT_ID,
CDF_CLIENT_ID,
CDF_CLIENT_NAME,
CDF_BASE_URL,
)
# Create OAuth credentials
credentials = OAuthInteractive(
authority_url=f"https://login.microsoftonline.com/{TENANT_ID}",
client_id=CDF_CLIENT_ID, # type: ignore
scopes=[f"{CDF_BASE_URL}/.default"],
)
# Create client configurations
configurations = ClientConfig(
client_name=CDF_CLIENT_NAME, # type: ignore
base_url=CDF_BASE_URL,
project="akerbp-sandbox", # type: ignore
credentials=credentials,
)
# Initialize client
client = CogniteClient(configurations)
# Create a data model
from cognite.client.data_classes.data_modeling import DataModelApply
from cognite.client.data_classes.data_modeling import SpaceApply
from cognite.client.exceptions import CogniteAPIError
def apply_space(client: CogniteClient):
space = client.data_modeling.spaces.apply(
SpaceApply(
space="test_sp_aad_cdf_mapping",
name="test space for AAD CDF mapping",
)
)
print(space)
def apply_data_model(client: CogniteClient):
dm = client.data_modeling.data_models.apply(
DataModelApply(
space="test_sp_aad_cdf_mapping",
external_id="test_dm_aad_cdf_mapping",
version="v1",
description="mapping between AAD members and CDF resources",
name="test AAD CDF mapping",
)
)
print(dm)
def apply_dml_to_data_model(client: CogniteClient):
dm = client.data_modeling.graphql.apply_dml(
id=("test_sp_aad_cdf_mapping", "test_dm_aad_cdf_mapping", "v1"),
dml="""
type Group {
id: String!
name: String!
members: [Member]
}
type Member {
id: String!
name: String!
type: String!
groups: [Group]
}
"""
)
print("Done!")
def main() -> None:
try:
apply_data_model(client)
except CogniteAPIError:
apply_space(client)
apply_data_model(client)
apply_dml_to_data_model(client)
if __name__ == "__main__":
main()
```
**Expected behavior**
```
{
"space": "test_sp_aad_cdf_mapping",
"external_id": "test_dm_aad_cdf_mapping",
"version": "v1",
"created_time": "2024-02-15T13:18:18.986Z",
"last_updated_time": "2024-02-16T08:55:53.150Z"
}
```
**Screenshots**
```
Traceback (most recent call last):
File "/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/src/test.py", line 144, in <module>
main()
File "/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/src/test.py", line 140, in main
apply_dml_to_data_model(client)
File "/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/src/test.py", line 130, in apply_dml_to_data_model
print(dm)
File "/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/data_classes/_base.py", line 126, in __str__
item = convert_and_isoformat_time_attrs(self.dump(camel_case=False))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/utils/_time.py", line 209, in convert_and_isoformat_time_attrs
return _convert_and_isoformat_time_attrs_in_dict(item)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/utils/_time.py", line 191, in _convert_and_isoformat_time_attrs_in_dict
item[k] = ms_to_datetime(item[k]).isoformat(sep=" ", timespec="milliseconds")
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/utils/_time.py", line 97, in ms_to_datetime
if not (MIN_TIMESTAMP_MS <= ms <= MAX_TIMESTAMP_MS):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<=' not supported between instances of 'int' and 'str'
```
</issue>
<code>
[start of cognite/client/_version.py]
1 from __future__ import annotations
2
3 __version__ = "7.20.0"
4 __api_subversion__ = "V20220125"
5
[end of cognite/client/_version.py]
[start of cognite/client/data_classes/data_modeling/graphql.py]
1 from __future__ import annotations
2
3 from dataclasses import dataclass
4 from typing import TYPE_CHECKING, Any
5
6 from typing_extensions import Self
7
8 from cognite.client.data_classes._base import CogniteObject
9 from cognite.client.data_classes.data_modeling.ids import DataModelId
10
11 if TYPE_CHECKING:
12 from cognite.client import CogniteClient
13
14
15 @dataclass
16 class DMLApplyResult(CogniteObject):
17 space: str
18 external_id: str
19 version: str
20 name: str | None
21 description: str | None
22 created_time: int
23 last_updated_time: int
24
25 def as_id(self) -> DataModelId:
26 return DataModelId(
27 space=self.space,
28 external_id=self.external_id,
29 version=self.version,
30 )
31
32 @classmethod
33 def _load(cls, resource: dict[str, Any], cognite_client: CogniteClient | None = None) -> Self:
34 return cls(
35 space=resource["space"],
36 external_id=resource["externalId"],
37 version=resource["version"],
38 name=resource["name"],
39 description=resource["description"],
40 created_time=resource["createdTime"],
41 last_updated_time=resource["lastUpdatedTime"],
42 )
43
44
45 @dataclass
46 class GraphQlQueryResult(CogniteObject):
47 items: list[dict[str, Any]]
48
49 @classmethod
50 def _load(cls, resource: dict[str, Any], cognite_client: CogniteClient | None = None) -> Self:
51 return cls(resource["items"])
52
[end of cognite/client/data_classes/data_modeling/graphql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cognite/client/_version.py b/cognite/client/_version.py
--- a/cognite/client/_version.py
+++ b/cognite/client/_version.py
@@ -1,4 +1,4 @@
from __future__ import annotations
-__version__ = "7.20.0"
+__version__ = "7.20.1"
__api_subversion__ = "V20220125"
diff --git a/cognite/client/data_classes/data_modeling/graphql.py b/cognite/client/data_classes/data_modeling/graphql.py
--- a/cognite/client/data_classes/data_modeling/graphql.py
+++ b/cognite/client/data_classes/data_modeling/graphql.py
@@ -7,6 +7,7 @@
from cognite.client.data_classes._base import CogniteObject
from cognite.client.data_classes.data_modeling.ids import DataModelId
+from cognite.client.utils import _json
if TYPE_CHECKING:
from cognite.client import CogniteClient
@@ -19,8 +20,11 @@
version: str
name: str | None
description: str | None
- created_time: int
- last_updated_time: int
+ created_time: str
+ last_updated_time: str
+
+ def __str__(self) -> str:
+ return _json.dumps(self.dump(camel_case=False), indent=4)
def as_id(self) -> DataModelId:
return DataModelId(
|
{"golden_diff": "diff --git a/cognite/client/_version.py b/cognite/client/_version.py\n--- a/cognite/client/_version.py\n+++ b/cognite/client/_version.py\n@@ -1,4 +1,4 @@\n from __future__ import annotations\n \n-__version__ = \"7.20.0\"\n+__version__ = \"7.20.1\"\n __api_subversion__ = \"V20220125\"\ndiff --git a/cognite/client/data_classes/data_modeling/graphql.py b/cognite/client/data_classes/data_modeling/graphql.py\n--- a/cognite/client/data_classes/data_modeling/graphql.py\n+++ b/cognite/client/data_classes/data_modeling/graphql.py\n@@ -7,6 +7,7 @@\n \n from cognite.client.data_classes._base import CogniteObject\n from cognite.client.data_classes.data_modeling.ids import DataModelId\n+from cognite.client.utils import _json\n \n if TYPE_CHECKING:\n from cognite.client import CogniteClient\n@@ -19,8 +20,11 @@\n version: str\n name: str | None\n description: str | None\n- created_time: int\n- last_updated_time: int\n+ created_time: str\n+ last_updated_time: str\n+\n+ def __str__(self) -> str:\n+ return _json.dumps(self.dump(camel_case=False), indent=4)\n \n def as_id(self) -> DataModelId:\n return DataModelId(\n", "issue": "TypeError: '<=' not supported between instances of 'int' and 'str' in /cognite/client/utils/_time.py\", line 97, in ms_to_datetime in if not (MIN_TIMESTAMP_MS <= ms <= MAX_TIMESTAMP_MS)\n**System information (please complete the following information):**\r\n - OS: macOS Sonoma v 14.1\r\n - Python Version: 3.11\r\n - SDK Version: 7.17.3\r\n\r\n**Describe the bug**\r\nIn cognite/client/utils/_time.py\r\nOne needs to make sure that item[k] is integer. Otherwise In a `Type error` in the ms_to_datetime funciton\r\n(see error at the end)\r\n```\r\ndef _convert_and_isoformat_time_attrs_in_dict(item: dict) -> dict:\r\n for k in TIME_ATTRIBUTES.intersection(item):\r\n try:\r\n item[k] = ms_to_datetime(item[k]).isoformat(sep=\" \", timespec=\"milliseconds\")\r\n except ValueError:\r\n pass\r\n return item\r\n```\r\n**To Reproduce**\r\nRunnable code reproducing the error.\r\n```\r\n\"\"\"\r\nTest creating a data model in akerbp-sandbox\r\n\"\"\"\r\n\r\n# Authenticate with akerbp-sandbox\r\nfrom cognite.client import CogniteClient, ClientConfig\r\nfrom cognite.client.credentials import OAuthInteractive\r\nfrom variables import (\r\n TENANT_ID,\r\n CDF_CLIENT_ID,\r\n CDF_CLIENT_NAME,\r\n CDF_BASE_URL,\r\n)\r\n\r\n\r\n# Create OAuth credentials\r\ncredentials = OAuthInteractive(\r\n authority_url=f\"https://login.microsoftonline.com/{TENANT_ID}\",\r\n client_id=CDF_CLIENT_ID, # type: ignore\r\n scopes=[f\"{CDF_BASE_URL}/.default\"],\r\n)\r\n\r\n# Create client configurations\r\nconfigurations = ClientConfig(\r\n client_name=CDF_CLIENT_NAME, # type: ignore\r\n base_url=CDF_BASE_URL,\r\n project=\"akerbp-sandbox\", # type: ignore\r\n credentials=credentials,\r\n)\r\n\r\n# Initialize client\r\nclient = CogniteClient(configurations)\r\n\r\n\r\n\r\n\r\n# Create a data model\r\nfrom cognite.client.data_classes.data_modeling import DataModelApply\r\nfrom cognite.client.data_classes.data_modeling import SpaceApply\r\nfrom cognite.client.exceptions import CogniteAPIError\r\n\r\ndef apply_space(client: CogniteClient):\r\n space = client.data_modeling.spaces.apply(\r\n SpaceApply(\r\n space=\"test_sp_aad_cdf_mapping\",\r\n name=\"test space for AAD CDF mapping\",\r\n )\r\n )\r\n print(space)\r\n\r\n\r\ndef apply_data_model(client: CogniteClient):\r\n dm = client.data_modeling.data_models.apply(\r\n DataModelApply(\r\n space=\"test_sp_aad_cdf_mapping\",\r\n 
external_id=\"test_dm_aad_cdf_mapping\",\r\n version=\"v1\",\r\n description=\"mapping between AAD members and CDF resources\",\r\n name=\"test AAD CDF mapping\",\r\n )\r\n )\r\n print(dm)\r\n\r\n\r\ndef apply_dml_to_data_model(client: CogniteClient):\r\n dm = client.data_modeling.graphql.apply_dml(\r\n id=(\"test_sp_aad_cdf_mapping\", \"test_dm_aad_cdf_mapping\", \"v1\"),\r\n dml=\"\"\"\r\ntype Group {\r\n id: String!\r\n name: String!\r\n members: [Member]\r\n} \r\ntype Member {\r\n id: String!\r\n name: String!\r\n type: String!\r\n groups: [Group]\r\n} \r\n\"\"\"\r\n)\r\n print(\"Done!\")\r\n\r\n\r\ndef main() -> None:\r\n try:\r\n apply_data_model(client)\r\n except CogniteAPIError:\r\n apply_space(client)\r\n apply_data_model(client)\r\n \r\n apply_dml_to_data_model(client)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n**Expected behavior**\r\n```\r\n{\r\n \"space\": \"test_sp_aad_cdf_mapping\",\r\n \"external_id\": \"test_dm_aad_cdf_mapping\",\r\n \"version\": \"v1\",\r\n \"created_time\": \"2024-02-15T13:18:18.986Z\",\r\n \"last_updated_time\": \"2024-02-16T08:55:53.150Z\"\r\n}\r\n```\r\n**Screenshots**\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/src/test.py\", line 144, in <module>\r\n main()\r\n File \"/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/src/test.py\", line 140, in main\r\n apply_dml_to_data_model(client)\r\n File \"/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/src/test.py\", line 130, in apply_dml_to_data_model\r\n print(dm)\r\n File \"/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/data_classes/_base.py\", line 126, in __str__\r\n item = convert_and_isoformat_time_attrs(self.dump(camel_case=False))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/utils/_time.py\", line 209, in convert_and_isoformat_time_attrs\r\n return _convert_and_isoformat_time_attrs_in_dict(item)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/utils/_time.py\", line 191, in _convert_and_isoformat_time_attrs_in_dict\r\n item[k] = ms_to_datetime(item[k]).isoformat(sep=\" \", timespec=\"milliseconds\")\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/[email protected]/Cognite/CDF-Aker-BP-Project/Extract-info-AAD/remote_repo/inso-dataops-incubator/integrating-aad-and-cdf-group-data/.venv/lib/python3.11/site-packages/cognite/client/utils/_time.py\", line 97, in ms_to_datetime\r\n if not (MIN_TIMESTAMP_MS <= ms <= MAX_TIMESTAMP_MS):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: '<=' not supported between instances of 'int' and 'str'\r\n```\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\n__version__ = \"7.20.0\"\n__api_subversion__ = \"V20220125\"\n", 
"path": "cognite/client/_version.py"}, {"content": "from __future__ import annotations\n\nfrom dataclasses import dataclass\nfrom typing import TYPE_CHECKING, Any\n\nfrom typing_extensions import Self\n\nfrom cognite.client.data_classes._base import CogniteObject\nfrom cognite.client.data_classes.data_modeling.ids import DataModelId\n\nif TYPE_CHECKING:\n from cognite.client import CogniteClient\n\n\n@dataclass\nclass DMLApplyResult(CogniteObject):\n space: str\n external_id: str\n version: str\n name: str | None\n description: str | None\n created_time: int\n last_updated_time: int\n\n def as_id(self) -> DataModelId:\n return DataModelId(\n space=self.space,\n external_id=self.external_id,\n version=self.version,\n )\n\n @classmethod\n def _load(cls, resource: dict[str, Any], cognite_client: CogniteClient | None = None) -> Self:\n return cls(\n space=resource[\"space\"],\n external_id=resource[\"externalId\"],\n version=resource[\"version\"],\n name=resource[\"name\"],\n description=resource[\"description\"],\n created_time=resource[\"createdTime\"],\n last_updated_time=resource[\"lastUpdatedTime\"],\n )\n\n\n@dataclass\nclass GraphQlQueryResult(CogniteObject):\n items: list[dict[str, Any]]\n\n @classmethod\n def _load(cls, resource: dict[str, Any], cognite_client: CogniteClient | None = None) -> Self:\n return cls(resource[\"items\"])\n", "path": "cognite/client/data_classes/data_modeling/graphql.py"}]}
| 2,619 | 331 |
gh_patches_debug_14038
|
rasdani/github-patches
|
git_diff
|
microsoft__promptflow-3244
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Fallback to DefaultAzureCredential isn't compatible with AzureDeveloperCliCredential
**Describe the bug**
PromptFlow currently falls back to using DefaultAzureCredential and fetching a token for the OAuth 1.0 audience. That is not compatible with all credentials in the chain; specifically, the AzureDeveloperCliCredential is OAuth 2.0 only. This PR changes the audience value to the correct scope value, and renames variables accordingly.
**How To Reproduce the bug**
Steps to reproduce the behavior, how frequent can you experience the bug:
1. Run `azd auth login` and DON'T log in to the Azure CLI
2. Run ai-rag-chat-evaluator or other sample code that does *not* specify a credential when using the evaluators/evaluate function
PR incoming!
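For context, here is a small sketch of the distinction described above, using the public `azure-identity` package (`pip install azure-identity`); it assumes some credential in the default chain is already logged in:

```
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# OAuth 2.0 scope form ("<resource>/.default"): accepted across the default
# credential chain, including AzureDeveloperCliCredential (azd auth login).
token = credential.get_token("https://cognitiveservices.azure.com/.default")
print(token.expires_on)

# Bare audience form (no "/.default"): per this report, not compatible with
# AzureDeveloperCliCredential, so the fallback can fail here.
# credential.get_token("https://cognitiveservices.azure.com/")
```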
</issue>
<code>
[start of src/promptflow-core/promptflow/_core/token_provider.py]
1 import threading
2 from abc import ABC, abstractmethod
3 from promptflow.exceptions import UserErrorException
4 from promptflow._utils.credential_utils import get_default_azure_credential
5
6
7 # to access azure ai services, we need to get the token with this audience
8 COGNITIVE_AUDIENCE = "https://cognitiveservices.azure.com/"
9
10
11 class TokenProviderABC(ABC):
12 def __init__(self) -> None:
13 super().__init__()
14
15 @abstractmethod
16 def get_token(self) -> str:
17 pass
18
19
20 class StaticTokenProvider(TokenProviderABC):
21 def __init__(self, token: str) -> None:
22 super().__init__()
23 self.token = token
24
25 def get_token(self) -> str:
26 return self.token
27
28
29 class AzureTokenProvider(TokenProviderABC):
30 _instance_lock = threading.Lock()
31 _instance = None
32
33 def __new__(cls, *args, **kwargs):
34 with cls._instance_lock:
35 if not cls._instance:
36 cls._instance = super().__new__(cls)
37 cls._instance._init_instance()
38 return cls._instance
39
40 def _init_instance(self):
41 try:
42 # Initialize a credential instance
43 self.credential = get_default_azure_credential()
44 except ImportError as ex:
45 raise UserErrorException(
46 "Failed to initialize AzureTokenProvider. "
47 + f"Please try 'pip install azure.identity' to install dependency, {ex.msg}."
48 )
49
50 def get_token(self):
51 audience = COGNITIVE_AUDIENCE
52 return self.credential.get_token(audience).token
53
[end of src/promptflow-core/promptflow/_core/token_provider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/promptflow-core/promptflow/_core/token_provider.py b/src/promptflow-core/promptflow/_core/token_provider.py
--- a/src/promptflow-core/promptflow/_core/token_provider.py
+++ b/src/promptflow-core/promptflow/_core/token_provider.py
@@ -4,8 +4,8 @@
from promptflow._utils.credential_utils import get_default_azure_credential
-# to access azure ai services, we need to get the token with this audience
-COGNITIVE_AUDIENCE = "https://cognitiveservices.azure.com/"
+# to access azure ai services, we need to get the token with this scope
+COGNITIVE_SCOPE = "https://cognitiveservices.azure.com/.default"
class TokenProviderABC(ABC):
@@ -48,5 +48,5 @@
)
def get_token(self):
- audience = COGNITIVE_AUDIENCE
- return self.credential.get_token(audience).token
+ scope = COGNITIVE_SCOPE
+ return self.credential.get_token(scope).token
|
{"golden_diff": "diff --git a/src/promptflow-core/promptflow/_core/token_provider.py b/src/promptflow-core/promptflow/_core/token_provider.py\n--- a/src/promptflow-core/promptflow/_core/token_provider.py\n+++ b/src/promptflow-core/promptflow/_core/token_provider.py\n@@ -4,8 +4,8 @@\n from promptflow._utils.credential_utils import get_default_azure_credential\n \n \n-# to access azure ai services, we need to get the token with this audience\n-COGNITIVE_AUDIENCE = \"https://cognitiveservices.azure.com/\"\n+# to access azure ai services, we need to get the token with this scope\n+COGNITIVE_SCOPE = \"https://cognitiveservices.azure.com/.default\"\n \n \n class TokenProviderABC(ABC):\n@@ -48,5 +48,5 @@\n )\n \n def get_token(self):\n- audience = COGNITIVE_AUDIENCE\n- return self.credential.get_token(audience).token\n+ scope = COGNITIVE_SCOPE\n+ return self.credential.get_token(scope).token\n", "issue": "[BUG] Fallback to DefaultAzureCredential isn't compatible with AzureDeveloperCliCredential\n**Describe the bug**\r\n\r\nPromptFlow currently falls back to using DefaultAzureCredential and fetching a token for the Oauth 1.0 audience. That is not compatible with all credentials in the chain, specifically, the AzureDeveloperCliCredential which is Oauth 2.0 only. This PR changes the audience value to the correct scope value, and renames variables accordingly.\r\n\r\n\r\n**How To Reproduce the bug**\r\nSteps to reproduce the behavior, how frequent can you experience the bug:\r\n1. Run `azd auth login` and DONT login to Azure CLI\r\n2. Run ai-rag-chat-evaluator or other sample code that does *not* specify a credential when using the evaluators/evaluate function\r\n\r\n\r\nPR incoming!\n", "before_files": [{"content": "import threading\nfrom abc import ABC, abstractmethod\nfrom promptflow.exceptions import UserErrorException\nfrom promptflow._utils.credential_utils import get_default_azure_credential\n\n\n# to access azure ai services, we need to get the token with this audience\nCOGNITIVE_AUDIENCE = \"https://cognitiveservices.azure.com/\"\n\n\nclass TokenProviderABC(ABC):\n def __init__(self) -> None:\n super().__init__()\n\n @abstractmethod\n def get_token(self) -> str:\n pass\n\n\nclass StaticTokenProvider(TokenProviderABC):\n def __init__(self, token: str) -> None:\n super().__init__()\n self.token = token\n\n def get_token(self) -> str:\n return self.token\n\n\nclass AzureTokenProvider(TokenProviderABC):\n _instance_lock = threading.Lock()\n _instance = None\n\n def __new__(cls, *args, **kwargs):\n with cls._instance_lock:\n if not cls._instance:\n cls._instance = super().__new__(cls)\n cls._instance._init_instance()\n return cls._instance\n\n def _init_instance(self):\n try:\n # Initialize a credential instance\n self.credential = get_default_azure_credential()\n except ImportError as ex:\n raise UserErrorException(\n \"Failed to initialize AzureTokenProvider. \"\n + f\"Please try 'pip install azure.identity' to install dependency, {ex.msg}.\"\n )\n\n def get_token(self):\n audience = COGNITIVE_AUDIENCE\n return self.credential.get_token(audience).token\n", "path": "src/promptflow-core/promptflow/_core/token_provider.py"}]}
| 1,156 | 235 |
gh_patches_debug_1754
|
rasdani/github-patches
|
git_diff
|
python-telegram-bot__python-telegram-bot-237
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bot stopps working after reconnect of system
This error appears after some time of running. For some reason my wifi reconnects after a while, and after this the bot doesn't get back to work. Any ideas to prevent this? I have to restart the bot every time this happens.
```
Exception in thread updater:
Traceback (most recent call last):
File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
self.run()
File "C:\Python27\lib\threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "C:\Python27\lib\site-packages\telegram\ext\updater.py", line 146, in _thread_wrapper
target(*args, **kwargs)
File "C:\Python27\lib\site-packages\telegram\ext\updater.py", line 223, in _start_polling
network_delay=network_delay)
File "C:\Python27\lib\site-packages\telegram\bot.py", line 128, in decorator
result = func(self, *args, **kwargs)
File "C:\Python27\lib\site-packages\telegram\bot.py", line 796, in getUpdates
result = request.post(url, data, network_delay=network_delay)
File "C:\Python27\lib\site-packages\telegram\utils\request.py", line 77, in decorator
return func(*args, **kwargs)
File "C:\Python27\lib\site-packages\telegram\utils\request.py", line 173, in post
result = urlopen(request, **urlopen_kwargs).read()
File "C:\Python27\lib\urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "C:\Python27\lib\urllib2.py", line 431, in open
response = self._open(req, data)
File "C:\Python27\lib\urllib2.py", line 449, in _open
'_open', req)
File "C:\Python27\lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 1240, in https_open
context=self._context)
File "C:\Python27\lib\urllib2.py", line 1200, in do_open
r = h.getresponse(buffering=True)
File "C:\Python27\lib\httplib.py", line 1136, in getresponse
response.begin()
File "C:\Python27\lib\httplib.py", line 453, in begin
version, status, reason = self._read_status()
File "C:\Python27\lib\httplib.py", line 409, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "C:\Python27\lib\socket.py", line 480, in readline
data = self._sock.recv(self._rbufsize)
File "C:\Python27\lib\ssl.py", line 734, in recv
return self.read(buflen)
File "C:\Python27\lib\ssl.py", line 621, in read
v = self._sslobj.read(len or 1024)
error: [Errno 10054] Eine vorhandene Verbindung wurde vom Remotehost geschlossen (an existing connection was forcibly closed by the remote host)
```
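To make the failure mode concrete, here is a small self-contained sketch (not the library's code) of how a raw `socket.error` escapes a handler that only catches `HTTPError`/`URLError`/`SSLError`, and how catching it and re-raising a library-level error keeps the polling loop in control:

```
import functools
import socket


class NetworkError(Exception):
    pass


def catch_socket_errors(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except socket.error as exc:  # covers "connection reset by peer" style failures
            raise NetworkError('socket.error: {0!r}'.format(exc))
    return wrapper


@catch_socket_errors
def flaky_request():
    raise socket.error(10054, "connection reset by peer")


try:
    flaky_request()
except NetworkError as exc:
    print("caught:", exc)
```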
</issue>
<code>
[start of telegram/utils/request.py]
1 #!/usr/bin/env python
2 # pylint: disable=no-name-in-module,unused-import
3 #
4 # A library that provides a Python interface to the Telegram Bot API
5 # Copyright (C) 2015-2016
6 # Leandro Toledo de Souza <[email protected]>
7 #
8 # This program is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU Lesser Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # This program is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU Lesser Public License for more details.
17 #
18 # You should have received a copy of the GNU Lesser Public License
19 # along with this program. If not, see [http://www.gnu.org/licenses/].
20
21 """This module contains methods to make POST and GET requests"""
22
23 import functools
24 import json
25 import socket
26 from ssl import SSLError
27
28 try:
29 # python2
30 from httplib import HTTPException
31 except ImportError:
32 # python3
33 from http.client import HTTPException
34
35 try:
36 # python3
37 from urllib.request import urlopen, urlretrieve, Request
38 from urllib.error import HTTPError, URLError
39 except ImportError:
40 # python2
41 from urllib import urlretrieve
42 from urllib2 import urlopen, Request, URLError
43 from urllib2 import HTTPError
44
45 from telegram import (InputFile, TelegramError)
46 from telegram.error import Unauthorized, NetworkError, TimedOut
47
48
49 def _parse(json_data):
50 """Try and parse the JSON returned from Telegram and return an empty
51 dictionary if there is any error.
52
53 Args:
54 url:
55 urllib.urlopen object
56
57 Returns:
58 A JSON parsed as Python dict with results.
59 """
60 decoded_s = json_data.decode('utf-8')
61 try:
62 data = json.loads(decoded_s)
63 except ValueError:
64 raise TelegramError('Invalid server response')
65
66 if not data.get('ok') and data.get('description'):
67 return data['description']
68
69 return data['result']
70
71
72 def _try_except_req(func):
73 """Decorator for requests to handle known exceptions"""
74 @functools.wraps(func)
75 def decorator(*args, **kwargs):
76 try:
77 return func(*args, **kwargs)
78
79 except HTTPError as error:
80 # `HTTPError` inherits from `URLError` so `HTTPError` handling must
81 # come first.
82 errcode = error.getcode()
83
84 if errcode in (401, 403):
85 raise Unauthorized()
86 elif errcode == 502:
87 raise NetworkError('Bad Gateway')
88
89 try:
90 message = _parse(error.read())
91 except ValueError:
92 message = 'Unknown HTTPError {0}'.format(error.getcode())
93
94 raise NetworkError('{0} ({1})'.format(message, errcode))
95
96 except URLError as error:
97 raise NetworkError('URLError: {0}'.format(error.reason))
98
99 except (SSLError, socket.timeout) as error:
100 err_s = str(error)
101 if 'operation timed out' in err_s:
102 raise TimedOut()
103
104 raise NetworkError(err_s)
105
106 except HTTPException as error:
107 raise NetworkError('HTTPException: {0!r}'.format(error))
108
109 return decorator
110
111
112 @_try_except_req
113 def get(url):
114 """Request an URL.
115 Args:
116 url:
117 The web location we want to retrieve.
118
119 Returns:
120 A JSON object.
121 """
122 result = urlopen(url).read()
123
124 return _parse(result)
125
126
127 @_try_except_req
128 def post(url,
129 data,
130 timeout=None,
131 network_delay=2.):
132 """Request an URL.
133 Args:
134 url:
135 The web location we want to retrieve.
136 data:
137 A dict of (str, unicode) key/value pairs.
138 timeout:
139 float. If this value is specified, use it as the definitive timeout (in
140 seconds) for urlopen() operations. [Optional]
141 network_delay:
142 float. If using the timeout specified in `data` (which is a timeout for
143 the Telegram servers operation), then `network_delay` as an extra delay
144 (in seconds) to compensate for network latency.
145 default: 2 [Optional]
146
147 Notes:
148 If neither `timeout` nor `data['timeout']` is specified. The underlying
149 defaults are used.
150
151 Returns:
152 A JSON object.
153
154 """
155 urlopen_kwargs = {}
156
157 if timeout is not None:
158 urlopen_kwargs['timeout'] = timeout
159 elif 'timeout' in data:
160 urlopen_kwargs['timeout'] = data['timeout'] + network_delay
161
162 if InputFile.is_inputfile(data):
163 data = InputFile(data)
164 request = Request(url,
165 data=data.to_form(),
166 headers=data.headers)
167 else:
168 data = json.dumps(data)
169 request = Request(url,
170 data=data.encode(),
171 headers={'Content-Type': 'application/json'})
172
173 result = urlopen(request, **urlopen_kwargs).read()
174 return _parse(result)
175
176
177 @_try_except_req
178 def download(url,
179 filename):
180 """Download a file by its URL.
181 Args:
182 url:
183 The web location we want to retrieve.
184
185 filename:
186 The filename wihtin the path to download the file.
187 """
188
189 urlretrieve(url, filename)
190
[end of telegram/utils/request.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram/utils/request.py b/telegram/utils/request.py
--- a/telegram/utils/request.py
+++ b/telegram/utils/request.py
@@ -106,6 +106,9 @@
except HTTPException as error:
raise NetworkError('HTTPException: {0!r}'.format(error))
+ except socket.error as error:
+ raise NetworkError('socket.error: {0!r}'.format(error))
+
return decorator
|
{"golden_diff": "diff --git a/telegram/utils/request.py b/telegram/utils/request.py\n--- a/telegram/utils/request.py\n+++ b/telegram/utils/request.py\n@@ -106,6 +106,9 @@\n except HTTPException as error:\n raise NetworkError('HTTPException: {0!r}'.format(error))\n \n+ except socket.error as error:\n+ raise NetworkError('socket.error: {0!r}'.format(error))\n+\n return decorator\n", "issue": "Bot stopps working after reconnect of system\nThis error appears after some time of running. Because of some reason my wifi reconnects after some time and after this the bot dont get back to work. Any ideas to prevent this? I have to restart the bot every time this happens.\n\n```\nException in thread updater:\nTraceback (most recent call last):\n File \"C:\\Python27\\lib\\threading.py\", line 801, in __bootstrap_inner\n self.run()\n File \"C:\\Python27\\lib\\threading.py\", line 754, in run\n self.__target(*self.__args, **self.__kwargs)\n File \"C:\\Python27\\lib\\site-packages\\telegram\\ext\\updater.py\", line 146, in _thread_wrapper\n target(*args, **kwargs)\n File \"C:\\Python27\\lib\\site-packages\\telegram\\ext\\updater.py\", line 223, in _start_polling\n network_delay=network_delay)\n File \"C:\\Python27\\lib\\site-packages\\telegram\\bot.py\", line 128, in decorator\n result = func(self, *args, **kwargs)\n File \"C:\\Python27\\lib\\site-packages\\telegram\\bot.py\", line 796, in getUpdates\n result = request.post(url, data, network_delay=network_delay)\n File \"C:\\Python27\\lib\\site-packages\\telegram\\utils\\request.py\", line 77, in decorator\n return func(*args, **kwargs)\n File \"C:\\Python27\\lib\\site-packages\\telegram\\utils\\request.py\", line 173, in post\n result = urlopen(request, **urlopen_kwargs).read()\n File \"C:\\Python27\\lib\\urllib2.py\", line 154, in urlopen\n return opener.open(url, data, timeout)\n File \"C:\\Python27\\lib\\urllib2.py\", line 431, in open\n response = self._open(req, data)\n File \"C:\\Python27\\lib\\urllib2.py\", line 449, in _open\n '_open', req)\n File \"C:\\Python27\\lib\\urllib2.py\", line 409, in _call_chain\n result = func(*args)\n File \"C:\\Python27\\lib\\urllib2.py\", line 1240, in https_open\n context=self._context)\n File \"C:\\Python27\\lib\\urllib2.py\", line 1200, in do_open\n r = h.getresponse(buffering=True)\n File \"C:\\Python27\\lib\\httplib.py\", line 1136, in getresponse\n response.begin()\n File \"C:\\Python27\\lib\\httplib.py\", line 453, in begin\n version, status, reason = self._read_status()\n File \"C:\\Python27\\lib\\httplib.py\", line 409, in _read_status\n line = self.fp.readline(_MAXLINE + 1)\n File \"C:\\Python27\\lib\\socket.py\", line 480, in readline\n data = self._sock.recv(self._rbufsize)\n File \"C:\\Python27\\lib\\ssl.py\", line 734, in recv\n return self.read(buflen)\n File \"C:\\Python27\\lib\\ssl.py\", line 621, in read\n v = self._sslobj.read(len or 1024)\nerror: [Errno 10054] Eine vorhandene Verbindung wurde vom Remotehost geschlossen\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# pylint: disable=no-name-in-module,unused-import\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2016\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be 
useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\n\"\"\"This module contains methods to make POST and GET requests\"\"\"\n\nimport functools\nimport json\nimport socket\nfrom ssl import SSLError\n\ntry:\n # python2\n from httplib import HTTPException\nexcept ImportError:\n # python3\n from http.client import HTTPException\n\ntry:\n # python3\n from urllib.request import urlopen, urlretrieve, Request\n from urllib.error import HTTPError, URLError\nexcept ImportError:\n # python2\n from urllib import urlretrieve\n from urllib2 import urlopen, Request, URLError\n from urllib2 import HTTPError\n\nfrom telegram import (InputFile, TelegramError)\nfrom telegram.error import Unauthorized, NetworkError, TimedOut\n\n\ndef _parse(json_data):\n \"\"\"Try and parse the JSON returned from Telegram and return an empty\n dictionary if there is any error.\n\n Args:\n url:\n urllib.urlopen object\n\n Returns:\n A JSON parsed as Python dict with results.\n \"\"\"\n decoded_s = json_data.decode('utf-8')\n try:\n data = json.loads(decoded_s)\n except ValueError:\n raise TelegramError('Invalid server response')\n\n if not data.get('ok') and data.get('description'):\n return data['description']\n\n return data['result']\n\n\ndef _try_except_req(func):\n \"\"\"Decorator for requests to handle known exceptions\"\"\"\n @functools.wraps(func)\n def decorator(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n\n except HTTPError as error:\n # `HTTPError` inherits from `URLError` so `HTTPError` handling must\n # come first.\n errcode = error.getcode()\n\n if errcode in (401, 403):\n raise Unauthorized()\n elif errcode == 502:\n raise NetworkError('Bad Gateway')\n\n try:\n message = _parse(error.read())\n except ValueError:\n message = 'Unknown HTTPError {0}'.format(error.getcode())\n\n raise NetworkError('{0} ({1})'.format(message, errcode))\n\n except URLError as error:\n raise NetworkError('URLError: {0}'.format(error.reason))\n\n except (SSLError, socket.timeout) as error:\n err_s = str(error)\n if 'operation timed out' in err_s:\n raise TimedOut()\n\n raise NetworkError(err_s)\n\n except HTTPException as error:\n raise NetworkError('HTTPException: {0!r}'.format(error))\n\n return decorator\n\n\n@_try_except_req\ndef get(url):\n \"\"\"Request an URL.\n Args:\n url:\n The web location we want to retrieve.\n\n Returns:\n A JSON object.\n \"\"\"\n result = urlopen(url).read()\n\n return _parse(result)\n\n\n@_try_except_req\ndef post(url,\n data,\n timeout=None,\n network_delay=2.):\n \"\"\"Request an URL.\n Args:\n url:\n The web location we want to retrieve.\n data:\n A dict of (str, unicode) key/value pairs.\n timeout:\n float. If this value is specified, use it as the definitive timeout (in\n seconds) for urlopen() operations. [Optional]\n network_delay:\n float. If using the timeout specified in `data` (which is a timeout for\n the Telegram servers operation), then `network_delay` as an extra delay\n (in seconds) to compensate for network latency.\n default: 2 [Optional]\n\n Notes:\n If neither `timeout` nor `data['timeout']` is specified. 
The underlying\n defaults are used.\n\n Returns:\n A JSON object.\n\n \"\"\"\n urlopen_kwargs = {}\n\n if timeout is not None:\n urlopen_kwargs['timeout'] = timeout\n elif 'timeout' in data:\n urlopen_kwargs['timeout'] = data['timeout'] + network_delay\n\n if InputFile.is_inputfile(data):\n data = InputFile(data)\n request = Request(url,\n data=data.to_form(),\n headers=data.headers)\n else:\n data = json.dumps(data)\n request = Request(url,\n data=data.encode(),\n headers={'Content-Type': 'application/json'})\n\n result = urlopen(request, **urlopen_kwargs).read()\n return _parse(result)\n\n\n@_try_except_req\ndef download(url,\n filename):\n \"\"\"Download a file by its URL.\n Args:\n url:\n The web location we want to retrieve.\n\n filename:\n The filename wihtin the path to download the file.\n \"\"\"\n\n urlretrieve(url, filename)\n", "path": "telegram/utils/request.py"}]}
| 3,038 | 100 |
gh_patches_debug_19830
|
rasdani/github-patches
|
git_diff
|
yt-project__yt-4519
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DEPR: `VisibleDeprecationWarning` considered noisy
Context:
yt doesn't use built-in `DeprecationWarning`s but instead uses a custom class `VisibleDeprecationWarning`, whose sole intent is to work around Python's default warning filters, which hide `DeprecationWarning`s by default (the idea being that they should only be made visible if users opt in).
The design currently used in yt made sense a while back, because we want deprecation warnings to reach all users, not just maintainers of dependent code with high quality standards. The trade-off being that our warnings will surface to users no matter how many layers of dependencies there are between the code being written and yt; for instance, if `yt_astro_analysis` or `trident` uses deprecated yt API, *their* users don't have much control over it and it can be argued they shouldn't see these "noisy" warnings by default.
Since Python 3.7 ([PEP 565](https://peps.python.org/pep-0565/)), `DeprecationWarnings` attributed to the `__main__` module are again showed by default, meaning they will reach direct users (including in REPLs) but will not surface (by default) when deprecated API calls live in an intermediate dependency layer.
It seems to me that our current solution isn't necessary anymore and that we should use simple `DeprecationWarning`s from now on.
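A short, self-contained illustration of the PEP 565 default filter described above (the function name is made up for the example; no yt dependency):

```
import warnings


def deprecated_api():
    # stacklevel=2 attributes the warning to the caller's module
    warnings.warn("this API is deprecated", DeprecationWarning, stacklevel=2)


if __name__ == "__main__":
    # The call site is the __main__ module, so the default filter
    # "default::DeprecationWarning:__main__" makes the warning visible.
    deprecated_api()
    # The same call buried inside an intermediate package would be hidden by
    # default; users opt in with: python -W default::DeprecationWarning
```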
</issue>
<code>
[start of yt/_maintenance/deprecation.py]
1 import warnings
2 from functools import wraps
3 from types import FunctionType
4 from typing import Dict, Optional
5
6
7 class VisibleDeprecationWarning(UserWarning):
8 """Visible deprecation warning, adapted from NumPy
9
10 The nose runner does not show users DeprecationWarning.
11 This ensures that a deprecation warning is visible to users
12 if that is desired.
13 """
14
15 # this class becomes useless after the tests are migrated from nose to pytest
16
17 pass
18
19
20 def issue_deprecation_warning(
21 msg: str,
22 *,
23 stacklevel: int,
24 since: str,
25 removal: Optional[str] = None,
26 ):
27 """
28 Parameters
29 ----------
30 msg : str
31 A text message explaining that the code surrounding the call to this function is
32 deprecated, and what should be changed on the user side to avoid it.
33
34 stacklevel: int
35 Number of stack frames to be skipped when pointing at caller code, starting from
36 *this* function's frame. In general 3 is a minimum.
37
38 since and removal: str version numbers, indicating the anticipated removal date
39
40 Notes
41 -----
42
43 removal can be left empty if it is not clear how many minor releases are expected to
44 happen before the next major.
45
46 removal and since arguments are keyword-only to forbid accidentally swapping them.
47
48 Examples
49 --------
50 >>> issue_deprecation_warning(
51 ... "This code is deprecated.", stacklevel=3, since="4.0"
52 ... )
53 """
54
55 msg += f"\nDeprecated since yt {since}"
56 if removal is not None:
57 msg += f"\nThis feature is planned for removal in yt {removal}"
58 warnings.warn(msg, VisibleDeprecationWarning, stacklevel=stacklevel)
59
60
61 def future_positional_only(positions2names: Dict[int, str], /, **depr_kwargs):
62 """Warn users when using a future positional-only argument as keyword.
63 Note that positional-only arguments are available from Python 3.8
64 See https://www.python.org/dev/peps/pep-0570/
65 """
66
67 def outer(func: FunctionType):
68 @wraps(func)
69 def inner(*args, **kwargs):
70 for no, name in sorted(positions2names.items()):
71 if name not in kwargs:
72 continue
73 value = kwargs[name]
74 issue_deprecation_warning(
75 f"Using the {name!r} argument as keyword (on position {no}) "
76 "is deprecated. "
77 "Pass the argument as positional to suppress this warning, "
78 f"i.e., use {func.__name__}({value!r}, ...)",
79 stacklevel=3,
80 **depr_kwargs,
81 )
82 return func(*args, **kwargs)
83
84 return inner
85
86 return outer
87
[end of yt/_maintenance/deprecation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt/_maintenance/deprecation.py b/yt/_maintenance/deprecation.py
--- a/yt/_maintenance/deprecation.py
+++ b/yt/_maintenance/deprecation.py
@@ -4,19 +4,6 @@
from typing import Dict, Optional
-class VisibleDeprecationWarning(UserWarning):
- """Visible deprecation warning, adapted from NumPy
-
- The nose runner does not show users DeprecationWarning.
- This ensures that a deprecation warning is visible to users
- if that is desired.
- """
-
- # this class becomes useless after the tests are migrated from nose to pytest
-
- pass
-
-
def issue_deprecation_warning(
msg: str,
*,
@@ -55,7 +42,7 @@
msg += f"\nDeprecated since yt {since}"
if removal is not None:
msg += f"\nThis feature is planned for removal in yt {removal}"
- warnings.warn(msg, VisibleDeprecationWarning, stacklevel=stacklevel)
+ warnings.warn(msg, DeprecationWarning, stacklevel=stacklevel)
def future_positional_only(positions2names: Dict[int, str], /, **depr_kwargs):
|
{"golden_diff": "diff --git a/yt/_maintenance/deprecation.py b/yt/_maintenance/deprecation.py\n--- a/yt/_maintenance/deprecation.py\n+++ b/yt/_maintenance/deprecation.py\n@@ -4,19 +4,6 @@\n from typing import Dict, Optional\n \n \n-class VisibleDeprecationWarning(UserWarning):\n- \"\"\"Visible deprecation warning, adapted from NumPy\n-\n- The nose runner does not show users DeprecationWarning.\n- This ensures that a deprecation warning is visible to users\n- if that is desired.\n- \"\"\"\n-\n- # this class becomes useless after the tests are migrated from nose to pytest\n-\n- pass\n-\n-\n def issue_deprecation_warning(\n msg: str,\n *,\n@@ -55,7 +42,7 @@\n msg += f\"\\nDeprecated since yt {since}\"\n if removal is not None:\n msg += f\"\\nThis feature is planned for removal in yt {removal}\"\n- warnings.warn(msg, VisibleDeprecationWarning, stacklevel=stacklevel)\n+ warnings.warn(msg, DeprecationWarning, stacklevel=stacklevel)\n \n \n def future_positional_only(positions2names: Dict[int, str], /, **depr_kwargs):\n", "issue": "DEPR: `VisibleDeprecationWarning` considered noisy\nContext:\r\nyt doesn't use builtin `DeprecationWarning`s but instead uses a custom class `VisibleDeprecationWarning`, whose sole intent is to work around Python's default warning filters which hide `DeprecationWarning`s by default (the idea being that they should only be made visible if users opt-in).\r\n\r\nThe design currently used in yt made sense a while back, because we want deprecation warnings to reach all users, not just maintainers of dependent code with high quality standards. The trade-off being that our warnings will surface to users no matter how many layers of dependencies there are between the code being written and yt; for instance, if `yt_astro_analysis` or `trident` uses deprecated yt API, *their* users don't have much control over it and it can be argued they shouldn't see these \"noisy\" warnings by default. \r\n\r\nSince Python 3.7 ([PEP 565](https://peps.python.org/pep-0565/)), `DeprecationWarnings` attributed to the `__main__` module are again showed by default, meaning they will reach direct users (including in REPLs) but will not surface (by default) when deprecated API calls live in an intermediate dependency layer.\r\n\r\nIt seems to me that our current solution isn't necessary anymore and that we should use simple `DeprecationWarning`s from now on.\n", "before_files": [{"content": "import warnings\nfrom functools import wraps\nfrom types import FunctionType\nfrom typing import Dict, Optional\n\n\nclass VisibleDeprecationWarning(UserWarning):\n \"\"\"Visible deprecation warning, adapted from NumPy\n\n The nose runner does not show users DeprecationWarning.\n This ensures that a deprecation warning is visible to users\n if that is desired.\n \"\"\"\n\n # this class becomes useless after the tests are migrated from nose to pytest\n\n pass\n\n\ndef issue_deprecation_warning(\n msg: str,\n *,\n stacklevel: int,\n since: str,\n removal: Optional[str] = None,\n):\n \"\"\"\n Parameters\n ----------\n msg : str\n A text message explaining that the code surrounding the call to this function is\n deprecated, and what should be changed on the user side to avoid it.\n\n stacklevel: int\n Number of stack frames to be skipped when pointing at caller code, starting from\n *this* function's frame. 
In general 3 is a minimum.\n\n since and removal: str version numbers, indicating the anticipated removal date\n\n Notes\n -----\n\n removal can be left empty if it is not clear how many minor releases are expected to\n happen before the next major.\n\n removal and since arguments are keyword-only to forbid accidentally swapping them.\n\n Examples\n --------\n >>> issue_deprecation_warning(\n ... \"This code is deprecated.\", stacklevel=3, since=\"4.0\"\n ... )\n \"\"\"\n\n msg += f\"\\nDeprecated since yt {since}\"\n if removal is not None:\n msg += f\"\\nThis feature is planned for removal in yt {removal}\"\n warnings.warn(msg, VisibleDeprecationWarning, stacklevel=stacklevel)\n\n\ndef future_positional_only(positions2names: Dict[int, str], /, **depr_kwargs):\n \"\"\"Warn users when using a future positional-only argument as keyword.\n Note that positional-only arguments are available from Python 3.8\n See https://www.python.org/dev/peps/pep-0570/\n \"\"\"\n\n def outer(func: FunctionType):\n @wraps(func)\n def inner(*args, **kwargs):\n for no, name in sorted(positions2names.items()):\n if name not in kwargs:\n continue\n value = kwargs[name]\n issue_deprecation_warning(\n f\"Using the {name!r} argument as keyword (on position {no}) \"\n \"is deprecated. \"\n \"Pass the argument as positional to suppress this warning, \"\n f\"i.e., use {func.__name__}({value!r}, ...)\",\n stacklevel=3,\n **depr_kwargs,\n )\n return func(*args, **kwargs)\n\n return inner\n\n return outer\n", "path": "yt/_maintenance/deprecation.py"}]}
| 1,606 | 265 |
gh_patches_debug_26317
|
rasdani/github-patches
|
git_diff
|
dask__distributed-5823
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AllProgress plugin not removed after progress_stream: `TypeError: remove_plugin() takes 0 positional arguments but 2 were given`
**What happened**:
```
distributed/diagnostics/tests/test_progress_stream.py::test_progress_quads_too_many PASSED
distributed.worker - WARNING - Compute Failed
Function: div
args: (1, 0)
kwargs: {}
Exception: "ZeroDivisionError('division by zero')"
distributed.utils - ERROR - remove_plugin() takes 0 positional arguments but 2 were given
Traceback (most recent call last):
File "/Users/runner/work/distributed/distributed/distributed/utils.py", line 680, in log_errors
yield
File "/Users/runner/work/distributed/distributed/distributed/scheduler.py", line 7057, in feed
teardown(self, state)
TypeError: remove_plugin() takes 0 positional arguments but 2 were given
distributed.core - ERROR - Exception while handling op feed
Traceback (most recent call last):
File "/Users/runner/work/distributed/distributed/distributed/core.py", line 520, in handle_comm
result = await result
File "/Users/runner/work/distributed/distributed/distributed/scheduler.py", line 7057, in feed
teardown(self, state)
TypeError: remove_plugin() takes 0 positional arguments but 2 were given
tornado.application - ERROR - Exception in callback functools.partial(<function TCPServer._handle_connection.<locals>.<lambda> at 0x13f9a6f70>, <Task finished name='Task-49498' coro=<BaseTCPListener._handle_stream() done, defined at /Users/runner/work/distributed/distributed/distributed/comm/tcp.py:513> exception=TypeError('remove_plugin() takes 0 positional arguments but 2 were given')>)
Traceback (most recent call last):
File "/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/site-packages/tornado/ioloop.py", line 741, in _run_callback
ret = callback()
File "/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/site-packages/tornado/tcpserver.py", line 331, in <lambda>
gen.convert_yielded(future), lambda f: f.result()
File "/Users/runner/work/distributed/distributed/distributed/comm/tcp.py", line 530, in _handle_stream
await self.comm_handler(comm)
File "/Users/runner/work/distributed/distributed/distributed/core.py", line 520, in handle_comm
result = await result
File "/Users/runner/work/distributed/distributed/distributed/scheduler.py", line 7057, in feed
teardown(self, state)
TypeError: remove_plugin() takes 0 positional arguments but 2 were given
```
**What you expected to happen**:
The "all-progress" plugin should be removed when "progress_stream" is torn down.
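A standalone sketch of the signature mismatch behind the traceback; the names mirror `progress_stream.py`, but the "fixed" variant is only one possible shape, not necessarily the change that was shipped:

```
from functools import partial


def remove_plugin(**kwargs):  # keyword-only wrapper, as in progress_stream.py
    print("removing plugin:", kwargs)


teardown = partial(remove_plugin, name="all-progress")

try:
    teardown("scheduler", "state")  # Scheduler.feed calls teardown(self, state) positionally
except TypeError as exc:
    print(exc)  # remove_plugin() takes 0 positional arguments but 2 were given


def remove_plugin_fixed(scheduler=None, state=None, *, name):
    print("removing plugin:", name)


partial(remove_plugin_fixed, name="all-progress")("scheduler", "state")
```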
</issue>
<code>
[start of distributed/diagnostics/progress_stream.py]
1 import logging
2 from functools import partial
3
4 from tlz import merge, valmap
5
6 from ..core import coerce_to_address, connect
7 from ..scheduler import Scheduler
8 from ..utils import color_of, key_split
9 from ..worker import dumps_function
10 from .progress import AllProgress
11
12 logger = logging.getLogger(__name__)
13
14
15 def counts(scheduler, allprogress):
16 return merge(
17 {"all": valmap(len, allprogress.all), "nbytes": allprogress.nbytes},
18 {
19 state: valmap(len, allprogress.state[state])
20 for state in ["memory", "erred", "released", "processing"]
21 },
22 )
23
24
25 def remove_plugin(**kwargs):
26 # Wrapper function around `Scheduler.remove_plugin` to avoid raising a
27 # `PicklingError` when using a cythonized scheduler
28 return Scheduler.remove_plugin(**kwargs)
29
30
31 async def progress_stream(address, interval):
32 """Open a TCP connection to scheduler, receive progress messages
33
34 The messages coming back are dicts containing counts of key groups::
35
36 {'inc': {'all': 5, 'memory': 2, 'erred': 0, 'released': 1},
37 'dec': {'all': 1, 'memory': 0, 'erred': 0, 'released': 0}}
38
39 Parameters
40 ----------
41 address: address of scheduler
42 interval: time between batches, in seconds
43
44 Examples
45 --------
46 >>> stream = await eventstream('127.0.0.1:8786', 0.100) # doctest: +SKIP
47 >>> print(await read(stream)) # doctest: +SKIP
48 """
49 address = coerce_to_address(address)
50 comm = await connect(address)
51 await comm.write(
52 {
53 "op": "feed",
54 "setup": dumps_function(AllProgress),
55 "function": dumps_function(counts),
56 "interval": interval,
57 "teardown": dumps_function(partial(remove_plugin, name=AllProgress.name)),
58 }
59 )
60 return comm
61
62
63 def progress_quads(msg, nrows=8, ncols=3):
64 """
65 >>> msg = {'all': {'inc': 5, 'dec': 1, 'add': 4},
66 ... 'memory': {'inc': 2, 'dec': 0, 'add': 1},
67 ... 'erred': {'inc': 0, 'dec': 1, 'add': 0},
68 ... 'released': {'inc': 1, 'dec': 0, 'add': 1},
69 ... 'processing': {'inc': 1, 'dec': 0, 'add': 2}}
70
71 >>> progress_quads(msg, nrows=2) # doctest: +SKIP
72 {'name': ['inc', 'add', 'dec'],
73 'left': [0, 0, 1],
74 'right': [0.9, 0.9, 1.9],
75 'top': [0, -1, 0],
76 'bottom': [-.8, -1.8, -.8],
77 'released': [1, 1, 0],
78 'memory': [2, 1, 0],
79 'erred': [0, 0, 1],
80 'processing': [1, 0, 2],
81 'done': ['3 / 5', '2 / 4', '1 / 1'],
82 'released-loc': [.2/.9, .25 / 0.9, 1],
83 'memory-loc': [3 / 5 / .9, .5 / 0.9, 1],
84 'erred-loc': [3 / 5 / .9, .5 / 0.9, 1.9],
85 'processing-loc': [4 / 5, 1 / 1, 1]}}
86 """
87 width = 0.9
88 names = sorted(msg["all"], key=msg["all"].get, reverse=True)
89 names = names[: nrows * ncols]
90 n = len(names)
91 d = {k: [v.get(name, 0) for name in names] for k, v in msg.items()}
92
93 d["name"] = names
94 d["show-name"] = [name if len(name) <= 15 else name[:12] + "..." for name in names]
95 d["left"] = [i // nrows for i in range(n)]
96 d["right"] = [i // nrows + width for i in range(n)]
97 d["top"] = [-(i % nrows) for i in range(n)]
98 d["bottom"] = [-(i % nrows) - 0.8 for i in range(n)]
99 d["color"] = [color_of(name) for name in names]
100
101 d["released-loc"] = []
102 d["memory-loc"] = []
103 d["erred-loc"] = []
104 d["processing-loc"] = []
105 d["done"] = []
106 for r, m, e, p, a, l in zip(
107 d["released"], d["memory"], d["erred"], d["processing"], d["all"], d["left"]
108 ):
109 rl = width * r / a + l
110 ml = width * (r + m) / a + l
111 el = width * (r + m + e) / a + l
112 pl = width * (p + r + m + e) / a + l
113 done = "%d / %d" % (r + m + e, a)
114 d["released-loc"].append(rl)
115 d["memory-loc"].append(ml)
116 d["erred-loc"].append(el)
117 d["processing-loc"].append(pl)
118 d["done"].append(done)
119
120 return d
121
122
123 def color_of_message(msg):
124 if msg["status"] == "OK":
125 split = key_split(msg["key"])
126 return color_of(split)
127 else:
128 return "black"
129
130
131 colors = {
132 "transfer": "red",
133 "disk-write": "orange",
134 "disk-read": "orange",
135 "deserialize": "gray",
136 "compute": color_of_message,
137 }
138
139
140 alphas = {
141 "transfer": 0.4,
142 "compute": 1,
143 "deserialize": 0.4,
144 "disk-write": 0.4,
145 "disk-read": 0.4,
146 }
147
148
149 prefix = {
150 "transfer": "transfer-",
151 "disk-write": "disk-write-",
152 "disk-read": "disk-read-",
153 "deserialize": "deserialize-",
154 "compute": "",
155 }
156
157
158 def task_stream_append(lists, msg, workers):
159 key = msg["key"]
160 name = key_split(key)
161 startstops = msg.get("startstops", [])
162
163 for startstop in startstops:
164 color = colors[startstop["action"]]
165 if type(color) is not str:
166 color = color(msg)
167
168 lists["start"].append((startstop["start"] + startstop["stop"]) / 2 * 1000)
169 lists["duration"].append(1000 * (startstop["stop"] - startstop["start"]))
170 lists["key"].append(key)
171 lists["name"].append(prefix[startstop["action"]] + name)
172 lists["color"].append(color)
173 lists["alpha"].append(alphas[startstop["action"]])
174 lists["worker"].append(msg["worker"])
175
176 worker_thread = "%s-%d" % (msg["worker"], msg["thread"])
177 lists["worker_thread"].append(worker_thread)
178 if worker_thread not in workers:
179 workers[worker_thread] = len(workers) / 2
180 lists["y"].append(workers[worker_thread])
181
182 return len(startstops)
183
[end of distributed/diagnostics/progress_stream.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/diagnostics/progress_stream.py b/distributed/diagnostics/progress_stream.py
--- a/distributed/diagnostics/progress_stream.py
+++ b/distributed/diagnostics/progress_stream.py
@@ -1,10 +1,8 @@
import logging
-from functools import partial
from tlz import merge, valmap
from ..core import coerce_to_address, connect
-from ..scheduler import Scheduler
from ..utils import color_of, key_split
from ..worker import dumps_function
from .progress import AllProgress
@@ -22,10 +20,10 @@
)
-def remove_plugin(**kwargs):
+def _remove_all_progress_plugin(self, *args, **kwargs):
# Wrapper function around `Scheduler.remove_plugin` to avoid raising a
# `PicklingError` when using a cythonized scheduler
- return Scheduler.remove_plugin(**kwargs)
+ self.remove_plugin(name=AllProgress.name)
async def progress_stream(address, interval):
@@ -54,7 +52,7 @@
"setup": dumps_function(AllProgress),
"function": dumps_function(counts),
"interval": interval,
- "teardown": dumps_function(partial(remove_plugin, name=AllProgress.name)),
+ "teardown": dumps_function(_remove_all_progress_plugin),
}
)
return comm
|
{"golden_diff": "diff --git a/distributed/diagnostics/progress_stream.py b/distributed/diagnostics/progress_stream.py\n--- a/distributed/diagnostics/progress_stream.py\n+++ b/distributed/diagnostics/progress_stream.py\n@@ -1,10 +1,8 @@\n import logging\n-from functools import partial\n \n from tlz import merge, valmap\n \n from ..core import coerce_to_address, connect\n-from ..scheduler import Scheduler\n from ..utils import color_of, key_split\n from ..worker import dumps_function\n from .progress import AllProgress\n@@ -22,10 +20,10 @@\n )\n \n \n-def remove_plugin(**kwargs):\n+def _remove_all_progress_plugin(self, *args, **kwargs):\n # Wrapper function around `Scheduler.remove_plugin` to avoid raising a\n # `PicklingError` when using a cythonized scheduler\n- return Scheduler.remove_plugin(**kwargs)\n+ self.remove_plugin(name=AllProgress.name)\n \n \n async def progress_stream(address, interval):\n@@ -54,7 +52,7 @@\n \"setup\": dumps_function(AllProgress),\n \"function\": dumps_function(counts),\n \"interval\": interval,\n- \"teardown\": dumps_function(partial(remove_plugin, name=AllProgress.name)),\n+ \"teardown\": dumps_function(_remove_all_progress_plugin),\n }\n )\n return comm\n", "issue": "AllProgress plugin not removed after progress_stream: `TypeError: remove_plugin() takes 0 positional arguments but 2 were given`\n**What happened**:\r\n```\r\ndistributed/diagnostics/tests/test_progress_stream.py::test_progress_quads_too_many PASSED\r\ndistributed.worker - WARNING - Compute Failed\r\nFunction: div\r\nargs: (1, 0)\r\nkwargs: {}\r\nException: \"ZeroDivisionError('division by zero')\"\r\ndistributed.utils - ERROR - remove_plugin() takes 0 positional arguments but 2 were given\r\nTraceback (most recent call last):\r\n File \"/Users/runner/work/distributed/distributed/distributed/utils.py\", line 680, in log_errors\r\n yield\r\n File \"/Users/runner/work/distributed/distributed/distributed/scheduler.py\", line 7057, in feed\r\n teardown(self, state)\r\nTypeError: remove_plugin() takes 0 positional arguments but 2 were given\r\ndistributed.core - ERROR - Exception while handling op feed\r\nTraceback (most recent call last):\r\n File \"/Users/runner/work/distributed/distributed/distributed/core.py\", line 520, in handle_comm\r\n result = await result\r\n File \"/Users/runner/work/distributed/distributed/distributed/scheduler.py\", line 7057, in feed\r\n teardown(self, state)\r\nTypeError: remove_plugin() takes 0 positional arguments but 2 were given\r\ntornado.application - ERROR - Exception in callback functools.partial(<function TCPServer._handle_connection.<locals>.<lambda> at 0x13f9a6f70>, <Task finished name='Task-49498' coro=<BaseTCPListener._handle_stream() done, defined at /Users/runner/work/distributed/distributed/distributed/comm/tcp.py:513> exception=TypeError('remove_plugin() takes 0 positional arguments but 2 were given')>)\r\nTraceback (most recent call last):\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/site-packages/tornado/ioloop.py\", line 741, in _run_callback\r\n ret = callback()\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/site-packages/tornado/tcpserver.py\", line 331, in <lambda>\r\n gen.convert_yielded(future), lambda f: f.result()\r\n File \"/Users/runner/work/distributed/distributed/distributed/comm/tcp.py\", line 530, in _handle_stream\r\n await self.comm_handler(comm)\r\n File \"/Users/runner/work/distributed/distributed/distributed/core.py\", line 520, in handle_comm\r\n result = await result\r\n File 
\"/Users/runner/work/distributed/distributed/distributed/scheduler.py\", line 7057, in feed\r\n teardown(self, state)\r\nTypeError: remove_plugin() takes 0 positional arguments but 2 were given\r\n```\r\n**What you expected to happen**:\r\nThe \"all-progress\" plugin should be removed when \"progress_stream\" is torn down.\n", "before_files": [{"content": "import logging\nfrom functools import partial\n\nfrom tlz import merge, valmap\n\nfrom ..core import coerce_to_address, connect\nfrom ..scheduler import Scheduler\nfrom ..utils import color_of, key_split\nfrom ..worker import dumps_function\nfrom .progress import AllProgress\n\nlogger = logging.getLogger(__name__)\n\n\ndef counts(scheduler, allprogress):\n return merge(\n {\"all\": valmap(len, allprogress.all), \"nbytes\": allprogress.nbytes},\n {\n state: valmap(len, allprogress.state[state])\n for state in [\"memory\", \"erred\", \"released\", \"processing\"]\n },\n )\n\n\ndef remove_plugin(**kwargs):\n # Wrapper function around `Scheduler.remove_plugin` to avoid raising a\n # `PicklingError` when using a cythonized scheduler\n return Scheduler.remove_plugin(**kwargs)\n\n\nasync def progress_stream(address, interval):\n \"\"\"Open a TCP connection to scheduler, receive progress messages\n\n The messages coming back are dicts containing counts of key groups::\n\n {'inc': {'all': 5, 'memory': 2, 'erred': 0, 'released': 1},\n 'dec': {'all': 1, 'memory': 0, 'erred': 0, 'released': 0}}\n\n Parameters\n ----------\n address: address of scheduler\n interval: time between batches, in seconds\n\n Examples\n --------\n >>> stream = await eventstream('127.0.0.1:8786', 0.100) # doctest: +SKIP\n >>> print(await read(stream)) # doctest: +SKIP\n \"\"\"\n address = coerce_to_address(address)\n comm = await connect(address)\n await comm.write(\n {\n \"op\": \"feed\",\n \"setup\": dumps_function(AllProgress),\n \"function\": dumps_function(counts),\n \"interval\": interval,\n \"teardown\": dumps_function(partial(remove_plugin, name=AllProgress.name)),\n }\n )\n return comm\n\n\ndef progress_quads(msg, nrows=8, ncols=3):\n \"\"\"\n >>> msg = {'all': {'inc': 5, 'dec': 1, 'add': 4},\n ... 'memory': {'inc': 2, 'dec': 0, 'add': 1},\n ... 'erred': {'inc': 0, 'dec': 1, 'add': 0},\n ... 'released': {'inc': 1, 'dec': 0, 'add': 1},\n ... 
'processing': {'inc': 1, 'dec': 0, 'add': 2}}\n\n >>> progress_quads(msg, nrows=2) # doctest: +SKIP\n {'name': ['inc', 'add', 'dec'],\n 'left': [0, 0, 1],\n 'right': [0.9, 0.9, 1.9],\n 'top': [0, -1, 0],\n 'bottom': [-.8, -1.8, -.8],\n 'released': [1, 1, 0],\n 'memory': [2, 1, 0],\n 'erred': [0, 0, 1],\n 'processing': [1, 0, 2],\n 'done': ['3 / 5', '2 / 4', '1 / 1'],\n 'released-loc': [.2/.9, .25 / 0.9, 1],\n 'memory-loc': [3 / 5 / .9, .5 / 0.9, 1],\n 'erred-loc': [3 / 5 / .9, .5 / 0.9, 1.9],\n 'processing-loc': [4 / 5, 1 / 1, 1]}}\n \"\"\"\n width = 0.9\n names = sorted(msg[\"all\"], key=msg[\"all\"].get, reverse=True)\n names = names[: nrows * ncols]\n n = len(names)\n d = {k: [v.get(name, 0) for name in names] for k, v in msg.items()}\n\n d[\"name\"] = names\n d[\"show-name\"] = [name if len(name) <= 15 else name[:12] + \"...\" for name in names]\n d[\"left\"] = [i // nrows for i in range(n)]\n d[\"right\"] = [i // nrows + width for i in range(n)]\n d[\"top\"] = [-(i % nrows) for i in range(n)]\n d[\"bottom\"] = [-(i % nrows) - 0.8 for i in range(n)]\n d[\"color\"] = [color_of(name) for name in names]\n\n d[\"released-loc\"] = []\n d[\"memory-loc\"] = []\n d[\"erred-loc\"] = []\n d[\"processing-loc\"] = []\n d[\"done\"] = []\n for r, m, e, p, a, l in zip(\n d[\"released\"], d[\"memory\"], d[\"erred\"], d[\"processing\"], d[\"all\"], d[\"left\"]\n ):\n rl = width * r / a + l\n ml = width * (r + m) / a + l\n el = width * (r + m + e) / a + l\n pl = width * (p + r + m + e) / a + l\n done = \"%d / %d\" % (r + m + e, a)\n d[\"released-loc\"].append(rl)\n d[\"memory-loc\"].append(ml)\n d[\"erred-loc\"].append(el)\n d[\"processing-loc\"].append(pl)\n d[\"done\"].append(done)\n\n return d\n\n\ndef color_of_message(msg):\n if msg[\"status\"] == \"OK\":\n split = key_split(msg[\"key\"])\n return color_of(split)\n else:\n return \"black\"\n\n\ncolors = {\n \"transfer\": \"red\",\n \"disk-write\": \"orange\",\n \"disk-read\": \"orange\",\n \"deserialize\": \"gray\",\n \"compute\": color_of_message,\n}\n\n\nalphas = {\n \"transfer\": 0.4,\n \"compute\": 1,\n \"deserialize\": 0.4,\n \"disk-write\": 0.4,\n \"disk-read\": 0.4,\n}\n\n\nprefix = {\n \"transfer\": \"transfer-\",\n \"disk-write\": \"disk-write-\",\n \"disk-read\": \"disk-read-\",\n \"deserialize\": \"deserialize-\",\n \"compute\": \"\",\n}\n\n\ndef task_stream_append(lists, msg, workers):\n key = msg[\"key\"]\n name = key_split(key)\n startstops = msg.get(\"startstops\", [])\n\n for startstop in startstops:\n color = colors[startstop[\"action\"]]\n if type(color) is not str:\n color = color(msg)\n\n lists[\"start\"].append((startstop[\"start\"] + startstop[\"stop\"]) / 2 * 1000)\n lists[\"duration\"].append(1000 * (startstop[\"stop\"] - startstop[\"start\"]))\n lists[\"key\"].append(key)\n lists[\"name\"].append(prefix[startstop[\"action\"]] + name)\n lists[\"color\"].append(color)\n lists[\"alpha\"].append(alphas[startstop[\"action\"]])\n lists[\"worker\"].append(msg[\"worker\"])\n\n worker_thread = \"%s-%d\" % (msg[\"worker\"], msg[\"thread\"])\n lists[\"worker_thread\"].append(worker_thread)\n if worker_thread not in workers:\n workers[worker_thread] = len(workers) / 2\n lists[\"y\"].append(workers[worker_thread])\n\n return len(startstops)\n", "path": "distributed/diagnostics/progress_stream.py"}]}
| 3,346 | 287 |
gh_patches_debug_34559
|
rasdani/github-patches
|
git_diff
|
nf-core__tools-398
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Download: Also fetch institutional configs
Now that the institute config files are held in their own repo (see #212), they will not be available for users running offline.
Should be an easy enough fix though - the `nf-core download` command needs to be extended to also download the configs repo and set the base path for the import correctly.
</issue>
<code>
[start of nf_core/download.py]
1 #!/usr/bin/env python
2 """Downloads a nf-core pipeline to the local file system."""
3
4 from __future__ import print_function
5
6 import errno
7 from io import BytesIO
8 import logging
9 import hashlib
10 import os
11 import requests
12 import subprocess
13 import sys
14 from zipfile import ZipFile
15
16 import nf_core.list
17 import nf_core.utils
18
19
20 class DownloadWorkflow(object):
21 """Downloads a nf-core workflow from Github to the local file system.
22
23 Can also download its Singularity container image if required.
24
25 Args:
26 pipeline (str): A nf-core pipeline name.
27 release (str): The workflow release version to download, like `1.0`. Defaults to None.
28 singularity (bool): Flag, if the Singularity container should be downloaded as well. Defaults to False.
29 outdir (str): Path to the local download directory. Defaults to None.
30 """
31 def __init__(self, pipeline, release=None, singularity=False, outdir=None):
32 self.pipeline = pipeline
33 self.release = release
34 self.singularity = singularity
35 self.outdir = outdir
36
37 self.wf_name = None
38 self.wf_sha = None
39 self.wf_download_url = None
40 self.config = dict()
41 self.containers = list()
42
43 def download_workflow(self):
44 """Starts a nf-core workflow download."""
45 # Get workflow details
46 try:
47 self.fetch_workflow_details(nf_core.list.Workflows())
48 except LookupError:
49 sys.exit(1)
50
51 # Check that the outdir doesn't already exist
52 if os.path.exists(self.outdir):
53 logging.error("Output directory '{}' already exists".format(self.outdir))
54 sys.exit(1)
55
56 logging.info(
57 "Saving {}".format(self.pipeline) +
58 "\n Pipeline release: {}".format(self.release) +
59 "\n Pull singularity containers: {}".format('Yes' if self.singularity else 'No') +
60 "\n Output directory: {}".format(self.outdir)
61 )
62
63 # Download the pipeline files
64 logging.info("Downloading workflow files from GitHub")
65 self.download_wf_files()
66
67 # Download the singularity images
68 if self.singularity:
69 logging.debug("Fetching container names for workflow")
70 self.find_container_images()
71 if len(self.containers) == 0:
72 logging.info("No container names found in workflow")
73 else:
74 os.mkdir(os.path.join(self.outdir, 'singularity-images'))
75 logging.info("Downloading {} singularity container{}".format(len(self.containers), 's' if len(self.containers) > 1 else ''))
76 for container in self.containers:
77 try:
78 # Download from Dockerhub in all cases
79 self.pull_singularity_image(container)
80 except RuntimeWarning as r:
81 # Raise exception if this is not possible
82 logging.error("Not able to pull image. Service might be down or internet connection is dead.")
83 raise r
84
85
86
87 def fetch_workflow_details(self, wfs):
88 """Fetches details of a nf-core workflow to download.
89
90 Args:
91 wfs (nf_core.list.Workflows): A nf_core.list.Workflows object
92
93 Raises:
94 LockupError, if the pipeline can not be found.
95 """
96 wfs.get_remote_workflows()
97
98 # Get workflow download details
99 for wf in wfs.remote_workflows:
100 if wf.full_name == self.pipeline or wf.name == self.pipeline:
101
102 # Set pipeline name
103 self.wf_name = wf.name
104
105 # Find latest release hash
106 if self.release is None and len(wf.releases) > 0:
107 # Sort list of releases so that most recent is first
108 wf.releases = sorted(wf.releases, key=lambda k: k.get('published_at_timestamp', 0), reverse=True)
109 self.release = wf.releases[0]['tag_name']
110 self.wf_sha = wf.releases[0]['tag_sha']
111 logging.debug("No release specified. Using latest release: {}".format(self.release))
112 # Find specified release hash
113 elif self.release is not None:
114 for r in wf.releases:
115 if r['tag_name'] == self.release.lstrip('v'):
116 self.wf_sha = r['tag_sha']
117 break
118 else:
119 logging.error("Not able to find release '{}' for {}".format(self.release, wf.full_name))
120 logging.info("Available {} releases: {}".format(wf.full_name, ', '.join([r['tag_name'] for r in wf.releases])))
121 raise LookupError("Not able to find release '{}' for {}".format(self.release, wf.full_name))
122
123 # Must be a dev-only pipeline
124 elif not self.release:
125 self.release = 'dev'
126 self.wf_sha = 'master' # Cheating a little, but GitHub download link works
127 logging.warn("Pipeline is in development - downloading current code on master branch.\n" +
128 "This is likely to change soon should not be considered fully reproducible.")
129
130 # Set outdir name if not defined
131 if not self.outdir:
132 self.outdir = 'nf-core-{}'.format(wf.name)
133 if self.release is not None:
134 self.outdir += '-{}'.format(self.release)
135
136 # Set the download URL and return
137 self.wf_download_url = 'https://github.com/{}/archive/{}.zip'.format(wf.full_name, self.wf_sha)
138 return
139
140 # If we got this far, must not be a nf-core pipeline
141 if self.pipeline.count('/') == 1:
142 # Looks like a GitHub address - try working with this repo
143 logging.warn("Pipeline name doesn't match any nf-core workflows")
144 logging.info("Pipeline name looks like a GitHub address - attempting to download anyway")
145 self.wf_name = self.pipeline
146 if not self.release:
147 self.release = 'master'
148 self.wf_sha = self.release
149 if not self.outdir:
150 self.outdir = self.pipeline.replace('/', '-').lower()
151 if self.release is not None:
152 self.outdir += '-{}'.format(self.release)
153 # Set the download URL and return
154 self.wf_download_url = 'https://github.com/{}/archive/{}.zip'.format(self.pipeline, self.release)
155 else:
156 logging.error("Not able to find pipeline '{}'".format(self.pipeline))
157 logging.info("Available pipelines: {}".format(', '.join([w.name for w in wfs.remote_workflows])))
158 raise LookupError("Not able to find pipeline '{}'".format(self.pipeline))
159
160 def download_wf_files(self):
161 """Downloads workflow files from Github to the :attr:`self.outdir`.
162 """
163 logging.debug("Downloading {}".format(self.wf_download_url))
164
165 # Download GitHub zip file into memory and extract
166 url = requests.get(self.wf_download_url)
167 zipfile = ZipFile(BytesIO(url.content))
168 zipfile.extractall(self.outdir)
169
170 # Rename the internal directory name to be more friendly
171 gh_name = '{}-{}'.format(self.wf_name, self.wf_sha).split('/')[-1]
172 os.rename(os.path.join(self.outdir, gh_name), os.path.join(self.outdir, 'workflow'))
173
174 # Make downloaded files executable
175 for dirpath, subdirs, filelist in os.walk(os.path.join(self.outdir, 'workflow')):
176 for fname in filelist:
177 os.chmod(os.path.join(dirpath, fname), 0o775)
178
179 def find_container_images(self):
180 """ Find container image names for workflow """
181
182 # Use linting code to parse the pipeline nextflow config
183 self.config = nf_core.utils.fetch_wf_config(os.path.join(self.outdir, 'workflow'))
184
185 # Find any config variables that look like a container
186 for k,v in self.config.items():
187 if k.startswith('process.') and k.endswith('.container'):
188 self.containers.append(v.strip('"').strip("'"))
189
190
191 def pull_singularity_image(self, container):
192 """Uses a local installation of singularity to pull an image from Docker Hub.
193
194 Args:
195 container (str): A pipeline's container name. Usually it is of similar format
196 to `nfcore/name:dev`.
197
198 Raises:
199 Various exceptions possible from `subprocess` execution of Singularity.
200 """
201 out_name = '{}.simg'.format(container.replace('nfcore', 'nf-core').replace('/','-').replace(':', '-'))
202 out_path = os.path.abspath(os.path.join(self.outdir, 'singularity-images', out_name))
203 address = 'docker://{}'.format(container.replace('docker://', ''))
204 singularity_command = ["singularity", "pull", "--name", out_path, address]
205 logging.info("Building singularity image from dockerhub: {}".format(address))
206 logging.debug("Singularity command: {}".format(' '.join(singularity_command)))
207
208 # Try to use singularity to pull image
209 try:
210 subprocess.call(singularity_command)
211 except OSError as e:
212 if e.errno == errno.ENOENT:
213 # Singularity is not installed
214 logging.error('Singularity is not installed!')
215 else:
216 # Something else went wrong with singularity command
217 raise e
218
219 def validate_md5(self, fname, expected):
220 """Calculates the md5sum for a file on the disk and validate with expected.
221
222 Args:
223 fname (str): Path to a local file.
224 expected (str): The expected md5sum.
225
226 Raises:
227 IOError, if the md5sum does not match the remote sum.
228 """
229 logging.debug("Validating image hash: {}".format(fname))
230
231 # Calculate the md5 for the file on disk
232 hash_md5 = hashlib.md5()
233 with open(fname, "rb") as f:
234 for chunk in iter(lambda: f.read(4096), b""):
235 hash_md5.update(chunk)
236 file_hash = hash_md5.hexdigest()
237
238 if file_hash == expected:
239 logging.debug('md5 sum of image matches expected: {}'.format(expected))
240 else:
241 raise IOError ("{} md5 does not match remote: {} - {}".format(fname, expected, file_hash))
242
[end of nf_core/download.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nf_core/download.py b/nf_core/download.py
--- a/nf_core/download.py
+++ b/nf_core/download.py
@@ -64,6 +64,11 @@
logging.info("Downloading workflow files from GitHub")
self.download_wf_files()
+ # Download the centralised configs
+ logging.info("Downloading centralised configs from GitHub")
+ self.download_configs()
+ self.wf_use_local_configs()
+
# Download the singularity images
if self.singularity:
logging.debug("Fetching container names for workflow")
@@ -176,6 +181,46 @@
for fname in filelist:
os.chmod(os.path.join(dirpath, fname), 0o775)
+ def download_configs(self):
+ """Downloads the centralised config profiles from nf-core/configs to :attr:`self.outdir`.
+ """
+ configs_zip_url = "https://github.com/nf-core/configs/archive/master.zip"
+ configs_local_dir = "configs-master"
+ logging.debug("Downloading {}".format(configs_zip_url))
+
+ # Download GitHub zip file into memory and extract
+ url = requests.get(configs_zip_url)
+ zipfile = ZipFile(BytesIO(url.content))
+ zipfile.extractall(self.outdir)
+
+ # Rename the internal directory name to be more friendly
+ os.rename(os.path.join(self.outdir, configs_local_dir), os.path.join(self.outdir, 'configs'))
+
+ # Make downloaded files executable
+ for dirpath, subdirs, filelist in os.walk(os.path.join(self.outdir, 'configs')):
+ for fname in filelist:
+ os.chmod(os.path.join(dirpath, fname), 0o775)
+
+ def wf_use_local_configs(self):
+ """Edit the downloaded nextflow.config file to use the local config files
+ """
+ nfconfig_fn = os.path.join(self.outdir, 'workflow', 'nextflow.config')
+ find_str = 'https://raw.githubusercontent.com/nf-core/configs/${params.custom_config_version}'
+ repl_str = '../configs/'
+ logging.debug("Editing params.custom_config_base in {}".format(nfconfig_fn))
+
+ # Load the nextflow.config file into memory
+ with open(nfconfig_fn, 'r') as nfconfig_fh:
+ nfconfig = nfconfig_fh.read()
+
+ # Replace the target string
+ nfconfig = nfconfig.replace(find_str, repl_str)
+
+ # Write the file out again
+ with open(nfconfig_fn, 'w') as nfconfig_fh:
+ nfconfig_fh.write(nfconfig)
+
+
def find_container_images(self):
""" Find container image names for workflow """
|
{"golden_diff": "diff --git a/nf_core/download.py b/nf_core/download.py\n--- a/nf_core/download.py\n+++ b/nf_core/download.py\n@@ -64,6 +64,11 @@\n logging.info(\"Downloading workflow files from GitHub\")\n self.download_wf_files()\n \n+ # Download the centralised configs\n+ logging.info(\"Downloading centralised configs from GitHub\")\n+ self.download_configs()\n+ self.wf_use_local_configs()\n+\n # Download the singularity images\n if self.singularity:\n logging.debug(\"Fetching container names for workflow\")\n@@ -176,6 +181,46 @@\n for fname in filelist:\n os.chmod(os.path.join(dirpath, fname), 0o775)\n \n+ def download_configs(self):\n+ \"\"\"Downloads the centralised config profiles from nf-core/configs to :attr:`self.outdir`.\n+ \"\"\"\n+ configs_zip_url = \"https://github.com/nf-core/configs/archive/master.zip\"\n+ configs_local_dir = \"configs-master\"\n+ logging.debug(\"Downloading {}\".format(configs_zip_url))\n+\n+ # Download GitHub zip file into memory and extract\n+ url = requests.get(configs_zip_url)\n+ zipfile = ZipFile(BytesIO(url.content))\n+ zipfile.extractall(self.outdir)\n+\n+ # Rename the internal directory name to be more friendly\n+ os.rename(os.path.join(self.outdir, configs_local_dir), os.path.join(self.outdir, 'configs'))\n+\n+ # Make downloaded files executable\n+ for dirpath, subdirs, filelist in os.walk(os.path.join(self.outdir, 'configs')):\n+ for fname in filelist:\n+ os.chmod(os.path.join(dirpath, fname), 0o775)\n+\n+ def wf_use_local_configs(self):\n+ \"\"\"Edit the downloaded nextflow.config file to use the local config files\n+ \"\"\"\n+ nfconfig_fn = os.path.join(self.outdir, 'workflow', 'nextflow.config')\n+ find_str = 'https://raw.githubusercontent.com/nf-core/configs/${params.custom_config_version}'\n+ repl_str = '../configs/'\n+ logging.debug(\"Editing params.custom_config_base in {}\".format(nfconfig_fn))\n+\n+ # Load the nextflow.config file into memory\n+ with open(nfconfig_fn, 'r') as nfconfig_fh:\n+ nfconfig = nfconfig_fh.read()\n+\n+ # Replace the target string\n+ nfconfig = nfconfig.replace(find_str, repl_str)\n+\n+ # Write the file out again\n+ with open(nfconfig_fn, 'w') as nfconfig_fh:\n+ nfconfig_fh.write(nfconfig)\n+\n+\n def find_container_images(self):\n \"\"\" Find container image names for workflow \"\"\"\n", "issue": "Download: Also fetch institutional configs\nNow that the institute config files are held in their own repo (see #212), they will not be available for users running offline.\r\n\r\nShould be an easy enough fix though - the `nf-core download` command needs to be extended to also download the configs repo and set the base path for the import correctly.\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Downloads a nf-core pipeline to the local file system.\"\"\"\n\nfrom __future__ import print_function\n\nimport errno\nfrom io import BytesIO\nimport logging\nimport hashlib\nimport os\nimport requests\nimport subprocess\nimport sys\nfrom zipfile import ZipFile\n\nimport nf_core.list\nimport nf_core.utils\n\n\nclass DownloadWorkflow(object):\n \"\"\"Downloads a nf-core workflow from Github to the local file system.\n\n Can also download its Singularity container image if required.\n\n Args:\n pipeline (str): A nf-core pipeline name.\n release (str): The workflow release version to download, like `1.0`. Defaults to None.\n singularity (bool): Flag, if the Singularity container should be downloaded as well. Defaults to False.\n outdir (str): Path to the local download directory. 
Defaults to None.\n \"\"\"\n def __init__(self, pipeline, release=None, singularity=False, outdir=None):\n self.pipeline = pipeline\n self.release = release\n self.singularity = singularity\n self.outdir = outdir\n\n self.wf_name = None\n self.wf_sha = None\n self.wf_download_url = None\n self.config = dict()\n self.containers = list()\n\n def download_workflow(self):\n \"\"\"Starts a nf-core workflow download.\"\"\"\n # Get workflow details\n try:\n self.fetch_workflow_details(nf_core.list.Workflows())\n except LookupError:\n sys.exit(1)\n\n # Check that the outdir doesn't already exist\n if os.path.exists(self.outdir):\n logging.error(\"Output directory '{}' already exists\".format(self.outdir))\n sys.exit(1)\n\n logging.info(\n \"Saving {}\".format(self.pipeline) +\n \"\\n Pipeline release: {}\".format(self.release) +\n \"\\n Pull singularity containers: {}\".format('Yes' if self.singularity else 'No') +\n \"\\n Output directory: {}\".format(self.outdir)\n )\n\n # Download the pipeline files\n logging.info(\"Downloading workflow files from GitHub\")\n self.download_wf_files()\n\n # Download the singularity images\n if self.singularity:\n logging.debug(\"Fetching container names for workflow\")\n self.find_container_images()\n if len(self.containers) == 0:\n logging.info(\"No container names found in workflow\")\n else:\n os.mkdir(os.path.join(self.outdir, 'singularity-images'))\n logging.info(\"Downloading {} singularity container{}\".format(len(self.containers), 's' if len(self.containers) > 1 else ''))\n for container in self.containers:\n try:\n # Download from Dockerhub in all cases\n self.pull_singularity_image(container)\n except RuntimeWarning as r:\n # Raise exception if this is not possible\n logging.error(\"Not able to pull image. Service might be down or internet connection is dead.\")\n raise r\n\n\n\n def fetch_workflow_details(self, wfs):\n \"\"\"Fetches details of a nf-core workflow to download.\n\n Args:\n wfs (nf_core.list.Workflows): A nf_core.list.Workflows object\n\n Raises:\n LockupError, if the pipeline can not be found.\n \"\"\"\n wfs.get_remote_workflows()\n\n # Get workflow download details\n for wf in wfs.remote_workflows:\n if wf.full_name == self.pipeline or wf.name == self.pipeline:\n\n # Set pipeline name\n self.wf_name = wf.name\n\n # Find latest release hash\n if self.release is None and len(wf.releases) > 0:\n # Sort list of releases so that most recent is first\n wf.releases = sorted(wf.releases, key=lambda k: k.get('published_at_timestamp', 0), reverse=True)\n self.release = wf.releases[0]['tag_name']\n self.wf_sha = wf.releases[0]['tag_sha']\n logging.debug(\"No release specified. 
Using latest release: {}\".format(self.release))\n # Find specified release hash\n elif self.release is not None:\n for r in wf.releases:\n if r['tag_name'] == self.release.lstrip('v'):\n self.wf_sha = r['tag_sha']\n break\n else:\n logging.error(\"Not able to find release '{}' for {}\".format(self.release, wf.full_name))\n logging.info(\"Available {} releases: {}\".format(wf.full_name, ', '.join([r['tag_name'] for r in wf.releases])))\n raise LookupError(\"Not able to find release '{}' for {}\".format(self.release, wf.full_name))\n\n # Must be a dev-only pipeline\n elif not self.release:\n self.release = 'dev'\n self.wf_sha = 'master' # Cheating a little, but GitHub download link works\n logging.warn(\"Pipeline is in development - downloading current code on master branch.\\n\" +\n \"This is likely to change soon should not be considered fully reproducible.\")\n\n # Set outdir name if not defined\n if not self.outdir:\n self.outdir = 'nf-core-{}'.format(wf.name)\n if self.release is not None:\n self.outdir += '-{}'.format(self.release)\n\n # Set the download URL and return\n self.wf_download_url = 'https://github.com/{}/archive/{}.zip'.format(wf.full_name, self.wf_sha)\n return\n\n # If we got this far, must not be a nf-core pipeline\n if self.pipeline.count('/') == 1:\n # Looks like a GitHub address - try working with this repo\n logging.warn(\"Pipeline name doesn't match any nf-core workflows\")\n logging.info(\"Pipeline name looks like a GitHub address - attempting to download anyway\")\n self.wf_name = self.pipeline\n if not self.release:\n self.release = 'master'\n self.wf_sha = self.release\n if not self.outdir:\n self.outdir = self.pipeline.replace('/', '-').lower()\n if self.release is not None:\n self.outdir += '-{}'.format(self.release)\n # Set the download URL and return\n self.wf_download_url = 'https://github.com/{}/archive/{}.zip'.format(self.pipeline, self.release)\n else:\n logging.error(\"Not able to find pipeline '{}'\".format(self.pipeline))\n logging.info(\"Available pipelines: {}\".format(', '.join([w.name for w in wfs.remote_workflows])))\n raise LookupError(\"Not able to find pipeline '{}'\".format(self.pipeline))\n\n def download_wf_files(self):\n \"\"\"Downloads workflow files from Github to the :attr:`self.outdir`.\n \"\"\"\n logging.debug(\"Downloading {}\".format(self.wf_download_url))\n\n # Download GitHub zip file into memory and extract\n url = requests.get(self.wf_download_url)\n zipfile = ZipFile(BytesIO(url.content))\n zipfile.extractall(self.outdir)\n\n # Rename the internal directory name to be more friendly\n gh_name = '{}-{}'.format(self.wf_name, self.wf_sha).split('/')[-1]\n os.rename(os.path.join(self.outdir, gh_name), os.path.join(self.outdir, 'workflow'))\n\n # Make downloaded files executable\n for dirpath, subdirs, filelist in os.walk(os.path.join(self.outdir, 'workflow')):\n for fname in filelist:\n os.chmod(os.path.join(dirpath, fname), 0o775)\n\n def find_container_images(self):\n \"\"\" Find container image names for workflow \"\"\"\n\n # Use linting code to parse the pipeline nextflow config\n self.config = nf_core.utils.fetch_wf_config(os.path.join(self.outdir, 'workflow'))\n\n # Find any config variables that look like a container\n for k,v in self.config.items():\n if k.startswith('process.') and k.endswith('.container'):\n self.containers.append(v.strip('\"').strip(\"'\"))\n\n\n def pull_singularity_image(self, container):\n \"\"\"Uses a local installation of singularity to pull an image from Docker Hub.\n\n Args:\n container (str): A 
pipeline's container name. Usually it is of similar format\n to `nfcore/name:dev`.\n\n Raises:\n Various exceptions possible from `subprocess` execution of Singularity.\n \"\"\"\n out_name = '{}.simg'.format(container.replace('nfcore', 'nf-core').replace('/','-').replace(':', '-'))\n out_path = os.path.abspath(os.path.join(self.outdir, 'singularity-images', out_name))\n address = 'docker://{}'.format(container.replace('docker://', ''))\n singularity_command = [\"singularity\", \"pull\", \"--name\", out_path, address]\n logging.info(\"Building singularity image from dockerhub: {}\".format(address))\n logging.debug(\"Singularity command: {}\".format(' '.join(singularity_command)))\n\n # Try to use singularity to pull image\n try:\n subprocess.call(singularity_command)\n except OSError as e:\n if e.errno == errno.ENOENT:\n # Singularity is not installed\n logging.error('Singularity is not installed!')\n else:\n # Something else went wrong with singularity command\n raise e\n\n def validate_md5(self, fname, expected):\n \"\"\"Calculates the md5sum for a file on the disk and validate with expected.\n\n Args:\n fname (str): Path to a local file.\n expected (str): The expected md5sum.\n\n Raises:\n IOError, if the md5sum does not match the remote sum.\n \"\"\"\n logging.debug(\"Validating image hash: {}\".format(fname))\n\n # Calculate the md5 for the file on disk\n hash_md5 = hashlib.md5()\n with open(fname, \"rb\") as f:\n for chunk in iter(lambda: f.read(4096), b\"\"):\n hash_md5.update(chunk)\n file_hash = hash_md5.hexdigest()\n\n if file_hash == expected:\n logging.debug('md5 sum of image matches expected: {}'.format(expected))\n else:\n raise IOError (\"{} md5 does not match remote: {} - {}\".format(fname, expected, file_hash))\n", "path": "nf_core/download.py"}]}
| 3,399 | 602 |
gh_patches_debug_8912
|
rasdani/github-patches
|
git_diff
|
yt-project__yt-3847
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOC: private methods used in cookbook recipe
### Bug report
**Bug summary**
in #2567, `doc/source/cookbook/render_two_fields_tf.py` was added to the cookbook
On the two following lines, the private method `_get_field_info` is used
https://github.com/yt-project/yt/blob/ef08989d8222e7172751d17009959102db05dd20/doc/source/cookbook/render_two_fields_tf.py#L15
https://github.com/yt-project/yt/blob/ef08989d8222e7172751d17009959102db05dd20/doc/source/cookbook/render_two_fields_tf.py#L30
This should be refactored to avoid showing private elements
@zingale, do you know how we could do this ?
</issue>
<code>
[start of doc/source/cookbook/render_two_fields_tf.py]
1 import numpy as np
2
3 import yt
4 from yt.visualization.volume_rendering.api import Scene, create_volume_source
5
6 ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
7
8 # create a scene and add volume sources to it
9
10 sc = Scene()
11
12 # Add density
13
14 field = "density"
15 ds._get_field_info(field).take_log = True
16
17 vol = create_volume_source(ds, field=field)
18 vol.use_ghost_zones = True
19
20 tf = yt.ColorTransferFunction([-28, -25])
21 tf.clear()
22 tf.add_layers(4, 0.02, alpha=np.logspace(-3, -1, 4), colormap="winter")
23
24 vol.set_transfer_function(tf)
25 sc.add_source(vol)
26
27 # Add temperature
28
29 field = "temperature"
30 ds._get_field_info(field).take_log = True
31
32 vol2 = create_volume_source(ds, field=field)
33 vol2.use_ghost_zones = True
34
35 tf = yt.ColorTransferFunction([4.5, 7.5])
36 tf.clear()
37 tf.add_layers(4, 0.02, alpha=np.logspace(-0.2, 0, 4), colormap="autumn")
38
39 vol2.set_transfer_function(tf)
40 sc.add_source(vol2)
41
42 # setup the camera
43
44 cam = sc.add_camera(ds, lens_type="perspective")
45 cam.resolution = (1600, 900)
46 cam.zoom(20.0)
47
48 # Render the image.
49
50 sc.render()
51
52 sc.save_annotated(
53 "render_two_fields_tf.png",
54 sigma_clip=6.0,
55 tf_rect=[0.88, 0.15, 0.03, 0.8],
56 render=False,
57 )
58
[end of doc/source/cookbook/render_two_fields_tf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/doc/source/cookbook/render_two_fields_tf.py b/doc/source/cookbook/render_two_fields_tf.py
--- a/doc/source/cookbook/render_two_fields_tf.py
+++ b/doc/source/cookbook/render_two_fields_tf.py
@@ -12,7 +12,6 @@
# Add density
field = "density"
-ds._get_field_info(field).take_log = True
vol = create_volume_source(ds, field=field)
vol.use_ghost_zones = True
@@ -27,7 +26,6 @@
# Add temperature
field = "temperature"
-ds._get_field_info(field).take_log = True
vol2 = create_volume_source(ds, field=field)
vol2.use_ghost_zones = True
|
{"golden_diff": "diff --git a/doc/source/cookbook/render_two_fields_tf.py b/doc/source/cookbook/render_two_fields_tf.py\n--- a/doc/source/cookbook/render_two_fields_tf.py\n+++ b/doc/source/cookbook/render_two_fields_tf.py\n@@ -12,7 +12,6 @@\n # Add density\n \n field = \"density\"\n-ds._get_field_info(field).take_log = True\n \n vol = create_volume_source(ds, field=field)\n vol.use_ghost_zones = True\n@@ -27,7 +26,6 @@\n # Add temperature\n \n field = \"temperature\"\n-ds._get_field_info(field).take_log = True\n \n vol2 = create_volume_source(ds, field=field)\n vol2.use_ghost_zones = True\n", "issue": "DOC: private methods used in cookbook recipe\n### Bug report\r\n\r\n**Bug summary**\r\n\r\nin #2567, `doc/source/cookbook/render_two_fields_tf.py` was added to the cookbook\r\nOn the two following lines, the private method `_get_field_info` is used\r\nhttps://github.com/yt-project/yt/blob/ef08989d8222e7172751d17009959102db05dd20/doc/source/cookbook/render_two_fields_tf.py#L15\r\nhttps://github.com/yt-project/yt/blob/ef08989d8222e7172751d17009959102db05dd20/doc/source/cookbook/render_two_fields_tf.py#L30\r\n\r\nThis should be refactored to avoid showing private elements\r\n@zingale, do you know how we could do this ?\n", "before_files": [{"content": "import numpy as np\n\nimport yt\nfrom yt.visualization.volume_rendering.api import Scene, create_volume_source\n\nds = yt.load(\"IsolatedGalaxy/galaxy0030/galaxy0030\")\n\n# create a scene and add volume sources to it\n\nsc = Scene()\n\n# Add density\n\nfield = \"density\"\nds._get_field_info(field).take_log = True\n\nvol = create_volume_source(ds, field=field)\nvol.use_ghost_zones = True\n\ntf = yt.ColorTransferFunction([-28, -25])\ntf.clear()\ntf.add_layers(4, 0.02, alpha=np.logspace(-3, -1, 4), colormap=\"winter\")\n\nvol.set_transfer_function(tf)\nsc.add_source(vol)\n\n# Add temperature\n\nfield = \"temperature\"\nds._get_field_info(field).take_log = True\n\nvol2 = create_volume_source(ds, field=field)\nvol2.use_ghost_zones = True\n\ntf = yt.ColorTransferFunction([4.5, 7.5])\ntf.clear()\ntf.add_layers(4, 0.02, alpha=np.logspace(-0.2, 0, 4), colormap=\"autumn\")\n\nvol2.set_transfer_function(tf)\nsc.add_source(vol2)\n\n# setup the camera\n\ncam = sc.add_camera(ds, lens_type=\"perspective\")\ncam.resolution = (1600, 900)\ncam.zoom(20.0)\n\n# Render the image.\n\nsc.render()\n\nsc.save_annotated(\n \"render_two_fields_tf.png\",\n sigma_clip=6.0,\n tf_rect=[0.88, 0.15, 0.03, 0.8],\n render=False,\n)\n", "path": "doc/source/cookbook/render_two_fields_tf.py"}]}
| 1,245 | 158 |
gh_patches_debug_27056
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-344
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow location of home directories to be configured
When JupyterHub is run with `create_system_users` it would be nice to be able to configure the locations of the home directories (something other than `/home`). I think this can be done with the `-d /otherhome` flag to the `useradd` command.
</issue>
<code>
[start of jupyterhub/auth.py]
1 """Simple PAM authenticator"""
2
3 # Copyright (c) IPython Development Team.
4 # Distributed under the terms of the Modified BSD License.
5
6 from grp import getgrnam
7 import pwd
8 from subprocess import check_call, check_output, CalledProcessError
9
10 from tornado import gen
11 import pamela
12
13 from traitlets.config import LoggingConfigurable
14 from traitlets import Bool, Set, Unicode, Any
15
16 from .handlers.login import LoginHandler
17 from .utils import url_path_join
18
19 class Authenticator(LoggingConfigurable):
20 """A class for authentication.
21
22 The API is one method, `authenticate`, a tornado gen.coroutine.
23 """
24
25 db = Any()
26 admin_users = Set(config=True,
27 help="""set of usernames of admin users
28
29 If unspecified, only the user that launches the server will be admin.
30 """
31 )
32 whitelist = Set(config=True,
33 help="""Username whitelist.
34
35 Use this to restrict which users can login.
36 If empty, allow any user to attempt login.
37 """
38 )
39 custom_html = Unicode('',
40 help="""HTML login form for custom handlers.
41 Override in form-based custom authenticators
42 that don't use username+password,
43 or need custom branding.
44 """
45 )
46 login_service = Unicode('',
47 help="""Name of the login service for external
48 login services (e.g. 'GitHub').
49 """
50 )
51
52 @gen.coroutine
53 def authenticate(self, handler, data):
54 """Authenticate a user with login form data.
55
56 This must be a tornado gen.coroutine.
57 It must return the username on successful authentication,
58 and return None on failed authentication.
59 """
60
61 def pre_spawn_start(self, user, spawner):
62 """Hook called before spawning a user's server.
63
64 Can be used to do auth-related startup, e.g. opening PAM sessions.
65 """
66
67 def post_spawn_stop(self, user, spawner):
68 """Hook called after stopping a user container.
69
70 Can be used to do auth-related cleanup, e.g. closing PAM sessions.
71 """
72
73 def check_whitelist(self, user):
74 """
75 Return True if the whitelist is empty or user is in the whitelist.
76 """
77 # Parens aren't necessary here, but they make this easier to parse.
78 return (not self.whitelist) or (user in self.whitelist)
79
80 def add_user(self, user):
81 """Add a new user
82
83 By default, this just adds the user to the whitelist.
84
85 Subclasses may do more extensive things,
86 such as adding actual unix users.
87 """
88 if self.whitelist:
89 self.whitelist.add(user.name)
90
91 def delete_user(self, user):
92 """Triggered when a user is deleted.
93
94 Removes the user from the whitelist.
95 """
96 self.whitelist.discard(user.name)
97
98 def login_url(self, base_url):
99 """Override to register a custom login handler"""
100 return url_path_join(base_url, 'login')
101
102 def logout_url(self, base_url):
103 """Override to register a custom logout handler"""
104 return url_path_join(base_url, 'logout')
105
106 def get_handlers(self, app):
107 """Return any custom handlers the authenticator needs to register
108
109 (e.g. for OAuth)
110 """
111 return [
112 ('/login', LoginHandler),
113 ]
114
115 class LocalAuthenticator(Authenticator):
116 """Base class for Authenticators that work with local *ix users
117
118 Checks for local users, and can attempt to create them if they exist.
119 """
120
121 create_system_users = Bool(False, config=True,
122 help="""If a user is added that doesn't exist on the system,
123 should I try to create the system user?
124 """
125 )
126
127 group_whitelist = Set(
128 config=True,
129 help="Automatically whitelist anyone in this group.",
130 )
131
132 def _group_whitelist_changed(self, name, old, new):
133 if self.whitelist:
134 self.log.warn(
135 "Ignoring username whitelist because group whitelist supplied!"
136 )
137
138 def check_whitelist(self, username):
139 if self.group_whitelist:
140 return self.check_group_whitelist(username)
141 else:
142 return super().check_whitelist(username)
143
144 def check_group_whitelist(self, username):
145 if not self.group_whitelist:
146 return False
147 for grnam in self.group_whitelist:
148 try:
149 group = getgrnam(grnam)
150 except KeyError:
151 self.log.error('No such group: [%s]' % grnam)
152 continue
153 if username in group.gr_mem:
154 return True
155 return False
156
157 @gen.coroutine
158 def add_user(self, user):
159 """Add a new user
160
161 By default, this just adds the user to the whitelist.
162
163 Subclasses may do more extensive things,
164 such as adding actual unix users.
165 """
166 user_exists = yield gen.maybe_future(self.system_user_exists(user))
167 if not user_exists:
168 if self.create_system_users:
169 yield gen.maybe_future(self.add_system_user(user))
170 else:
171 raise KeyError("User %s does not exist." % user.name)
172
173 yield gen.maybe_future(super().add_user(user))
174
175 @staticmethod
176 def system_user_exists(user):
177 """Check if the user exists on the system"""
178 try:
179 pwd.getpwnam(user.name)
180 except KeyError:
181 return False
182 else:
183 return True
184
185 @staticmethod
186 def add_system_user(user):
187 """Create a new *ix user on the system. Works on FreeBSD and Linux, at least."""
188 name = user.name
189 for useradd in (
190 ['pw', 'useradd', '-m'],
191 ['useradd', '-m'],
192 ):
193 try:
194 check_output(['which', useradd[0]])
195 except CalledProcessError:
196 continue
197 else:
198 break
199 else:
200 raise RuntimeError("I don't know how to add users on this system.")
201
202 check_call(useradd + [name])
203
204
205 class PAMAuthenticator(LocalAuthenticator):
206 """Authenticate local *ix users with PAM"""
207 encoding = Unicode('utf8', config=True,
208 help="""The encoding to use for PAM"""
209 )
210 service = Unicode('login', config=True,
211 help="""The PAM service to use for authentication."""
212 )
213
214 @gen.coroutine
215 def authenticate(self, handler, data):
216 """Authenticate with PAM, and return the username if login is successful.
217
218 Return None otherwise.
219 """
220 username = data['username']
221 if not self.check_whitelist(username):
222 return
223 try:
224 pamela.authenticate(username, data['password'], service=self.service)
225 except pamela.PAMError as e:
226 self.log.warn("PAM Authentication failed: %s", e)
227 else:
228 return username
229
230 def pre_spawn_start(self, user, spawner):
231 """Open PAM session for user"""
232 try:
233 pamela.open_session(user.name, service=self.service)
234 except pamela.PAMError as e:
235 self.log.warn("Failed to open PAM session for %s: %s", user.name, e)
236
237 def post_spawn_stop(self, user, spawner):
238 """Close PAM session for user"""
239 try:
240 pamela.close_session(user.name, service=self.service)
241 except pamela.PAMError as e:
242 self.log.warn("Failed to close PAM session for %s: %s", user.name, e)
243
244
[end of jupyterhub/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jupyterhub/auth.py b/jupyterhub/auth.py
--- a/jupyterhub/auth.py
+++ b/jupyterhub/auth.py
@@ -123,6 +123,17 @@
should I try to create the system user?
"""
)
+ system_user_home = Unicode('', config=True,
+ help="""Specify root home directory for users if different from system default.
+
+ Passed to `useradd -b`.
+ If specified, when system users are created their home directories will be created in
+
+ system_user_home/username
+
+ Only has an effect when users are created with `create_system_users=True`.
+ """
+ )
group_whitelist = Set(
config=True,
@@ -182,8 +193,7 @@
else:
return True
- @staticmethod
- def add_system_user(user):
+ def add_system_user(self, user):
"""Create a new *ix user on the system. Works on FreeBSD and Linux, at least."""
name = user.name
for useradd in (
@@ -198,8 +208,11 @@
break
else:
raise RuntimeError("I don't know how to add users on this system.")
-
- check_call(useradd + [name])
+
+ cmd = useradd
+ if self.system_user_home:
+ cmd = cmd + ['-b', self.system_user_home]
+ check_call(cmd + [name])
class PAMAuthenticator(LocalAuthenticator):
|
{"golden_diff": "diff --git a/jupyterhub/auth.py b/jupyterhub/auth.py\n--- a/jupyterhub/auth.py\n+++ b/jupyterhub/auth.py\n@@ -123,6 +123,17 @@\n should I try to create the system user?\n \"\"\"\n )\n+ system_user_home = Unicode('', config=True,\n+ help=\"\"\"Specify root home directory for users if different from system default.\n+ \n+ Passed to `useradd -b`.\n+ If specified, when system users are created their home directories will be created in\n+ \n+ system_user_home/username\n+ \n+ Only has an effect when users are created with `create_system_users=True`.\n+ \"\"\"\n+ )\n \n group_whitelist = Set(\n config=True,\n@@ -182,8 +193,7 @@\n else:\n return True\n \n- @staticmethod\n- def add_system_user(user):\n+ def add_system_user(self, user):\n \"\"\"Create a new *ix user on the system. Works on FreeBSD and Linux, at least.\"\"\"\n name = user.name\n for useradd in (\n@@ -198,8 +208,11 @@\n break\n else:\n raise RuntimeError(\"I don't know how to add users on this system.\")\n- \n- check_call(useradd + [name])\n+ \n+ cmd = useradd\n+ if self.system_user_home:\n+ cmd = cmd + ['-b', self.system_user_home]\n+ check_call(cmd + [name])\n \n \n class PAMAuthenticator(LocalAuthenticator):\n", "issue": "Allow location of home directories to be configured\nWhen JupyterHub is run with `create_system_users` it would be nice to be able to configure the locations of the home directories (something other than `/home`). I think this can be done with the `-d /otherhome` flag to the `useradd` command.\n\n", "before_files": [{"content": "\"\"\"Simple PAM authenticator\"\"\"\n\n# Copyright (c) IPython Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nfrom grp import getgrnam\nimport pwd\nfrom subprocess import check_call, check_output, CalledProcessError\n\nfrom tornado import gen\nimport pamela\n\nfrom traitlets.config import LoggingConfigurable\nfrom traitlets import Bool, Set, Unicode, Any\n\nfrom .handlers.login import LoginHandler\nfrom .utils import url_path_join\n\nclass Authenticator(LoggingConfigurable):\n \"\"\"A class for authentication.\n \n The API is one method, `authenticate`, a tornado gen.coroutine.\n \"\"\"\n \n db = Any()\n admin_users = Set(config=True,\n help=\"\"\"set of usernames of admin users\n\n If unspecified, only the user that launches the server will be admin.\n \"\"\"\n )\n whitelist = Set(config=True,\n help=\"\"\"Username whitelist.\n \n Use this to restrict which users can login.\n If empty, allow any user to attempt login.\n \"\"\"\n )\n custom_html = Unicode('',\n help=\"\"\"HTML login form for custom handlers.\n Override in form-based custom authenticators\n that don't use username+password,\n or need custom branding.\n \"\"\"\n )\n login_service = Unicode('',\n help=\"\"\"Name of the login service for external\n login services (e.g. 'GitHub').\n \"\"\"\n )\n \n @gen.coroutine\n def authenticate(self, handler, data):\n \"\"\"Authenticate a user with login form data.\n \n This must be a tornado gen.coroutine.\n It must return the username on successful authentication,\n and return None on failed authentication.\n \"\"\"\n\n def pre_spawn_start(self, user, spawner):\n \"\"\"Hook called before spawning a user's server.\n \n Can be used to do auth-related startup, e.g. opening PAM sessions.\n \"\"\"\n \n def post_spawn_stop(self, user, spawner):\n \"\"\"Hook called after stopping a user container.\n \n Can be used to do auth-related cleanup, e.g. 
closing PAM sessions.\n \"\"\"\n \n def check_whitelist(self, user):\n \"\"\"\n Return True if the whitelist is empty or user is in the whitelist.\n \"\"\"\n # Parens aren't necessary here, but they make this easier to parse.\n return (not self.whitelist) or (user in self.whitelist)\n\n def add_user(self, user):\n \"\"\"Add a new user\n \n By default, this just adds the user to the whitelist.\n \n Subclasses may do more extensive things,\n such as adding actual unix users.\n \"\"\"\n if self.whitelist:\n self.whitelist.add(user.name)\n \n def delete_user(self, user):\n \"\"\"Triggered when a user is deleted.\n \n Removes the user from the whitelist.\n \"\"\"\n self.whitelist.discard(user.name)\n \n def login_url(self, base_url):\n \"\"\"Override to register a custom login handler\"\"\"\n return url_path_join(base_url, 'login')\n \n def logout_url(self, base_url):\n \"\"\"Override to register a custom logout handler\"\"\"\n return url_path_join(base_url, 'logout')\n \n def get_handlers(self, app):\n \"\"\"Return any custom handlers the authenticator needs to register\n \n (e.g. for OAuth)\n \"\"\"\n return [\n ('/login', LoginHandler),\n ]\n\nclass LocalAuthenticator(Authenticator):\n \"\"\"Base class for Authenticators that work with local *ix users\n \n Checks for local users, and can attempt to create them if they exist.\n \"\"\"\n \n create_system_users = Bool(False, config=True,\n help=\"\"\"If a user is added that doesn't exist on the system,\n should I try to create the system user?\n \"\"\"\n )\n\n group_whitelist = Set(\n config=True,\n help=\"Automatically whitelist anyone in this group.\",\n )\n\n def _group_whitelist_changed(self, name, old, new):\n if self.whitelist:\n self.log.warn(\n \"Ignoring username whitelist because group whitelist supplied!\"\n )\n\n def check_whitelist(self, username):\n if self.group_whitelist:\n return self.check_group_whitelist(username)\n else:\n return super().check_whitelist(username)\n\n def check_group_whitelist(self, username):\n if not self.group_whitelist:\n return False\n for grnam in self.group_whitelist:\n try:\n group = getgrnam(grnam)\n except KeyError:\n self.log.error('No such group: [%s]' % grnam)\n continue\n if username in group.gr_mem:\n return True\n return False\n\n @gen.coroutine\n def add_user(self, user):\n \"\"\"Add a new user\n \n By default, this just adds the user to the whitelist.\n \n Subclasses may do more extensive things,\n such as adding actual unix users.\n \"\"\"\n user_exists = yield gen.maybe_future(self.system_user_exists(user))\n if not user_exists:\n if self.create_system_users:\n yield gen.maybe_future(self.add_system_user(user))\n else:\n raise KeyError(\"User %s does not exist.\" % user.name)\n \n yield gen.maybe_future(super().add_user(user))\n \n @staticmethod\n def system_user_exists(user):\n \"\"\"Check if the user exists on the system\"\"\"\n try:\n pwd.getpwnam(user.name)\n except KeyError:\n return False\n else:\n return True\n \n @staticmethod\n def add_system_user(user):\n \"\"\"Create a new *ix user on the system. 
Works on FreeBSD and Linux, at least.\"\"\"\n name = user.name\n for useradd in (\n ['pw', 'useradd', '-m'],\n ['useradd', '-m'],\n ):\n try:\n check_output(['which', useradd[0]])\n except CalledProcessError:\n continue\n else:\n break\n else:\n raise RuntimeError(\"I don't know how to add users on this system.\")\n \n check_call(useradd + [name])\n\n\nclass PAMAuthenticator(LocalAuthenticator):\n \"\"\"Authenticate local *ix users with PAM\"\"\"\n encoding = Unicode('utf8', config=True,\n help=\"\"\"The encoding to use for PAM\"\"\"\n )\n service = Unicode('login', config=True,\n help=\"\"\"The PAM service to use for authentication.\"\"\"\n )\n \n @gen.coroutine\n def authenticate(self, handler, data):\n \"\"\"Authenticate with PAM, and return the username if login is successful.\n \n Return None otherwise.\n \"\"\"\n username = data['username']\n if not self.check_whitelist(username):\n return\n try:\n pamela.authenticate(username, data['password'], service=self.service)\n except pamela.PAMError as e:\n self.log.warn(\"PAM Authentication failed: %s\", e)\n else:\n return username\n \n def pre_spawn_start(self, user, spawner):\n \"\"\"Open PAM session for user\"\"\"\n try:\n pamela.open_session(user.name, service=self.service)\n except pamela.PAMError as e:\n self.log.warn(\"Failed to open PAM session for %s: %s\", user.name, e)\n \n def post_spawn_stop(self, user, spawner):\n \"\"\"Close PAM session for user\"\"\"\n try:\n pamela.close_session(user.name, service=self.service)\n except pamela.PAMError as e:\n self.log.warn(\"Failed to close PAM session for %s: %s\", user.name, e)\n \n", "path": "jupyterhub/auth.py"}]}
| 2,833 | 341 |
gh_patches_debug_27311
|
rasdani/github-patches
|
git_diff
|
conda__conda-7948
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
non-conda-installed python packages can have missing version information
When reading metadata out of site-packages, non-conda-installed Python packages can sometimes be missing version information. When this is the case, we should default to a hard-coded `0.0.0` version. I believe this behavior is consistent with Python's own `python setup.py --version` when no version is actually given in the `setup()` function.
</issue>
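Independent of conda's internals, the fallback described above can be sketched as follows; the helper name and metadata shape are assumptions for illustration, not conda's actual code:

```
# Illustrative sketch only, not conda's implementation.
DEFAULT_PY_VERSION = "0.0.0"

def version_from_metadata(metadata):
    """Return the recorded package version, falling back to a hard-coded
    '0.0.0' when the metadata carries no usable Version field."""
    version = (metadata or {}).get("Version") or ""
    return version.strip() or DEFAULT_PY_VERSION
```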
<code>
[start of conda/core/prefix_data.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (C) 2012 Anaconda, Inc
3 # SPDX-License-Identifier: BSD-3-Clause
4 from __future__ import absolute_import, division, print_function, unicode_literals
5
6 from fnmatch import filter as fnmatch_filter
7 from logging import getLogger
8 from os import listdir
9 from os.path import basename, isdir, isfile, join, lexists
10
11 from ..base.constants import CONDA_TARBALL_EXTENSION, PREFIX_MAGIC_FILE
12 from ..base.context import context
13 from ..common.compat import JSONDecodeError, itervalues, odict, string_types, with_metaclass
14 from ..common.constants import NULL
15 from ..common.path import get_python_site_packages_short_path, win_path_ok
16 from ..common.pkg_formats.python import get_site_packages_anchor_files
17 from ..common.serialize import json_load
18 from ..exceptions import (
19 BasicClobberError, CondaDependencyError, CorruptedEnvironmentError, maybe_raise,
20 )
21 from ..gateways.disk.create import write_as_json_to_file
22 from ..gateways.disk.delete import rm_rf
23 from ..gateways.disk.read import read_python_record
24 from ..gateways.disk.test import file_path_is_writable
25 from ..models.match_spec import MatchSpec
26 from ..models.prefix_graph import PrefixGraph
27 from ..models.records import PackageRecord, PrefixRecord
28
29 log = getLogger(__name__)
30
31
32 class PrefixDataType(type):
33 """Basic caching of PrefixData instance objects."""
34
35 def __call__(cls, prefix_path, pip_interop_enabled=None):
36 if prefix_path in PrefixData._cache_:
37 return PrefixData._cache_[prefix_path]
38 elif isinstance(prefix_path, PrefixData):
39 return prefix_path
40 else:
41 prefix_data_instance = super(PrefixDataType, cls).__call__(prefix_path,
42 pip_interop_enabled)
43 PrefixData._cache_[prefix_path] = prefix_data_instance
44 return prefix_data_instance
45
46
47 @with_metaclass(PrefixDataType)
48 class PrefixData(object):
49 _cache_ = {}
50
51 def __init__(self, prefix_path, pip_interop_enabled=None):
52 # pip_interop_enabled is a temporary paramater; DO NOT USE
53 # TODO: when removing pip_interop_enabled, also remove from meta class
54 self.prefix_path = prefix_path
55 self.__prefix_records = None
56 self.__is_writable = NULL
57 self._pip_interop_enabled = (context.pip_interop_enabled
58 if pip_interop_enabled is None
59 else pip_interop_enabled)
60
61 def load(self):
62 self.__prefix_records = {}
63 _conda_meta_dir = join(self.prefix_path, 'conda-meta')
64 if lexists(_conda_meta_dir):
65 for meta_file in fnmatch_filter(listdir(_conda_meta_dir), '*.json'):
66 self._load_single_record(join(_conda_meta_dir, meta_file))
67 if self._pip_interop_enabled:
68 self._load_site_packages()
69
70 def reload(self):
71 self.load()
72 return self
73
74 def insert(self, prefix_record):
75 assert prefix_record.name not in self._prefix_records
76
77 assert prefix_record.fn.endswith(CONDA_TARBALL_EXTENSION)
78 filename = prefix_record.fn[:-len(CONDA_TARBALL_EXTENSION)] + '.json'
79
80 prefix_record_json_path = join(self.prefix_path, 'conda-meta', filename)
81 if lexists(prefix_record_json_path):
82 maybe_raise(BasicClobberError(
83 source_path=None,
84 target_path=prefix_record_json_path,
85 context=context,
86 ), context)
87 rm_rf(prefix_record_json_path)
88
89 write_as_json_to_file(prefix_record_json_path, prefix_record)
90
91 self._prefix_records[prefix_record.name] = prefix_record
92
93 def remove(self, package_name):
94 assert package_name in self._prefix_records
95
96 prefix_record = self._prefix_records[package_name]
97
98 filename = prefix_record.fn[:-len(CONDA_TARBALL_EXTENSION)] + '.json'
99 conda_meta_full_path = join(self.prefix_path, 'conda-meta', filename)
100 if self.is_writable:
101 rm_rf(conda_meta_full_path)
102
103 del self._prefix_records[package_name]
104
105 def get(self, package_name, default=NULL):
106 try:
107 return self._prefix_records[package_name]
108 except KeyError:
109 if default is not NULL:
110 return default
111 else:
112 raise
113
114 def iter_records(self):
115 return itervalues(self._prefix_records)
116
117 def iter_records_sorted(self):
118 prefix_graph = PrefixGraph(self.iter_records())
119 return iter(prefix_graph.graph)
120
121 def all_subdir_urls(self):
122 subdir_urls = set()
123 for prefix_record in itervalues(self._prefix_records):
124 subdir_url = prefix_record.channel.subdir_url
125 if subdir_url and subdir_url not in subdir_urls:
126 log.debug("adding subdir url %s for %s", subdir_url, prefix_record)
127 subdir_urls.add(subdir_url)
128 return subdir_urls
129
130 def query(self, package_ref_or_match_spec):
131 # returns a generator
132 param = package_ref_or_match_spec
133 if isinstance(param, string_types):
134 param = MatchSpec(param)
135 if isinstance(param, MatchSpec):
136 return (prefix_rec for prefix_rec in self.iter_records()
137 if param.match(prefix_rec))
138 else:
139 assert isinstance(param, PackageRecord)
140 return (prefix_rec for prefix_rec in self.iter_records() if prefix_rec == param)
141
142 @property
143 def _prefix_records(self):
144 return self.__prefix_records or self.load() or self.__prefix_records
145
146 def _load_single_record(self, prefix_record_json_path):
147 log.trace("loading prefix record %s", prefix_record_json_path)
148 with open(prefix_record_json_path) as fh:
149 try:
150 json_data = json_load(fh.read())
151 except JSONDecodeError:
152 raise CorruptedEnvironmentError(self.prefix_path, prefix_record_json_path)
153
154 # TODO: consider, at least in memory, storing prefix_record_json_path as part
155 # of PrefixRecord
156 prefix_record = PrefixRecord(**json_data)
157
158 # check that prefix record json filename conforms to name-version-build
159 # apparently implemented as part of #2638 to resolve #2599
160 try:
161 n, v, b = basename(prefix_record_json_path)[:-5].rsplit('-', 2)
162 if (n, v, b) != (prefix_record.name, prefix_record.version, prefix_record.build):
163 raise ValueError()
164 except ValueError:
165 log.warn("Ignoring malformed prefix record at: %s", prefix_record_json_path)
166 # TODO: consider just deleting here this record file in the future
167 return
168
169 self.__prefix_records[prefix_record.name] = prefix_record
170
171 @property
172 def is_writable(self):
173 if self.__is_writable == NULL:
174 test_path = join(self.prefix_path, PREFIX_MAGIC_FILE)
175 if not isfile(test_path):
176 is_writable = None
177 else:
178 is_writable = file_path_is_writable(test_path)
179 self.__is_writable = is_writable
180 return self.__is_writable
181
182 # # REMOVE: ?
183 def _has_python(self):
184 return 'python' in self._prefix_records
185
186 @property
187 def _python_pkg_record(self):
188 """Return the prefix record for the package python."""
189 return next(
190 (prefix_record for prefix_record in itervalues(self.__prefix_records)
191 if prefix_record.name == 'python'),
192 None
193 )
194
195 def _load_site_packages(self):
196 """
197 Load non-conda-installed python packages in the site-packages of the prefix.
198
199 Python packages not handled by conda are installed via other means,
200 like using pip or using python setup.py develop for local development.
201
202 Packages found that are not handled by conda are converted into a
203 prefix record and handled in memory.
204
205 Packages clobbering conda packages (i.e. the conda-meta record) are
206 removed from the in memory representation.
207 """
208 python_pkg_record = self._python_pkg_record
209
210 if not python_pkg_record:
211 return {}
212
213 site_packages_dir = get_python_site_packages_short_path(python_pkg_record.version)
214 site_packages_path = join(self.prefix_path, win_path_ok(site_packages_dir))
215
216 if not isdir(site_packages_path):
217 return {}
218
219 # Get anchor files for corresponding conda (handled) python packages
220 prefix_graph = PrefixGraph(self.iter_records())
221 python_records = prefix_graph.all_descendants(python_pkg_record)
222 conda_python_packages = get_conda_anchor_files_and_records(python_records)
223
224 # Get all anchor files and compare against conda anchor files to find clobbered conda
225 # packages and python packages installed via other means (not handled by conda)
226 sp_anchor_files = get_site_packages_anchor_files(site_packages_path, site_packages_dir)
227 conda_anchor_files = set(conda_python_packages)
228 clobbered_conda_anchor_files = conda_anchor_files - sp_anchor_files
229 non_conda_anchor_files = sp_anchor_files - conda_anchor_files
230
231 # If there's a mismatch for anchor files between what conda expects for a package
232 # based on conda-meta, and for what is actually in site-packages, then we'll delete
233 # the in-memory record for the conda package. In the future, we should consider
234 # also deleting the record on disk in the conda-meta/ directory.
235 for conda_anchor_file in clobbered_conda_anchor_files:
236 prefix_rec = self._prefix_records.pop(conda_python_packages[conda_anchor_file].name)
237 try:
238 extracted_package_dir = basename(prefix_rec.extracted_package_dir)
239 except AttributeError:
240 extracted_package_dir = "-".join((
241 prefix_rec.name, prefix_rec.version, prefix_rec.build
242 ))
243 prefix_rec_json_path = join(
244 self.prefix_path, "conda-meta", '%s.json' % extracted_package_dir
245 )
246 try:
247 rm_rf(prefix_rec_json_path)
248 except EnvironmentError:
249 log.debug("stale information, but couldn't remove: %s", prefix_rec_json_path)
250 else:
251 log.debug("removed due to stale information: %s", prefix_rec_json_path)
252
253 # Create prefix records for python packages not handled by conda
254 new_packages = {}
255 for af in non_conda_anchor_files:
256 try:
257 python_record = read_python_record(self.prefix_path, af, python_pkg_record.version)
258 except EnvironmentError:
259 continue
260 if not python_record:
261 continue
262 self.__prefix_records[python_record.name] = python_record
263 new_packages[python_record.name] = python_record
264
265 return new_packages
266
267
268 def get_conda_anchor_files_and_records(python_records):
269 """Return the anchor files for the conda records of python packages."""
270 anchor_file_endings = ('.egg-info/PKG-INFO', '.dist-info/RECORD', '.egg-info')
271 conda_python_packages = odict()
272
273 for prefix_record in python_records:
274 for fpath in prefix_record.files:
275 if fpath.endswith(anchor_file_endings) and 'site-packages' in fpath:
276 # Then 'fpath' is an anchor file
277 conda_python_packages[fpath] = prefix_record
278
279 return conda_python_packages
280
281
282 def get_python_version_for_prefix(prefix):
283 # returns a string e.g. "2.7", "3.4", "3.5" or None
284 py_record_iter = (rcrd for rcrd in PrefixData(prefix).iter_records() if rcrd.name == 'python')
285 record = next(py_record_iter, None)
286 if record is None:
287 return None
288 next_record = next(py_record_iter, None)
289 if next_record is not None:
290 raise CondaDependencyError("multiple python records found in prefix %s" % prefix)
291 else:
292 return record.version[:3]
293
294
295 def delete_prefix_from_linked_data(path):
296 '''Here, path may be a complete prefix or a dist inside a prefix'''
297 linked_data_path = next((key for key in sorted(PrefixData._cache_, reverse=True)
298 if path.startswith(key)),
299 None)
300 if linked_data_path:
301 del PrefixData._cache_[linked_data_path]
302 return True
303 return False
304
[end of conda/core/prefix_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda/core/prefix_data.py b/conda/core/prefix_data.py
--- a/conda/core/prefix_data.py
+++ b/conda/core/prefix_data.py
@@ -8,6 +8,7 @@
from os import listdir
from os.path import basename, isdir, isfile, join, lexists
+from .._vendor.auxlib.exceptions import ValidationError
from ..base.constants import CONDA_TARBALL_EXTENSION, PREFIX_MAGIC_FILE
from ..base.context import context
from ..common.compat import JSONDecodeError, itervalues, odict, string_types, with_metaclass
@@ -257,6 +258,16 @@
python_record = read_python_record(self.prefix_path, af, python_pkg_record.version)
except EnvironmentError:
continue
+ except ValidationError:
+ import sys
+ exc_type, exc_value, exc_traceback = sys.exc_info()
+ import traceback
+ tb = traceback.format_exception(exc_type, exc_value, exc_traceback)
+ log.warn("Problem reading non-conda package record at %s. Please verify that you "
+ "still need this, and if so, that this is still installed correctly. "
+ "Reinstalling this package may help.", af)
+ log.debug("ValidationError: \n%s\n", "\n".join(tb))
+ continue
if not python_record:
continue
self.__prefix_records[python_record.name] = python_record
|
{"golden_diff": "diff --git a/conda/core/prefix_data.py b/conda/core/prefix_data.py\n--- a/conda/core/prefix_data.py\n+++ b/conda/core/prefix_data.py\n@@ -8,6 +8,7 @@\n from os import listdir\n from os.path import basename, isdir, isfile, join, lexists\n \n+from .._vendor.auxlib.exceptions import ValidationError\n from ..base.constants import CONDA_TARBALL_EXTENSION, PREFIX_MAGIC_FILE\n from ..base.context import context\n from ..common.compat import JSONDecodeError, itervalues, odict, string_types, with_metaclass\n@@ -257,6 +258,16 @@\n python_record = read_python_record(self.prefix_path, af, python_pkg_record.version)\n except EnvironmentError:\n continue\n+ except ValidationError:\n+ import sys\n+ exc_type, exc_value, exc_traceback = sys.exc_info()\n+ import traceback\n+ tb = traceback.format_exception(exc_type, exc_value, exc_traceback)\n+ log.warn(\"Problem reading non-conda package record at %s. Please verify that you \"\n+ \"still need this, and if so, that this is still installed correctly. \"\n+ \"Reinstalling this package may help.\", af)\n+ log.debug(\"ValidationError: \\n%s\\n\", \"\\n\".join(tb))\n+ continue\n if not python_record:\n continue\n self.__prefix_records[python_record.name] = python_record\n", "issue": "non-conda-installed python packages can have missing version information\nWhen reading up metadata out of site-packages, non-conda-installed python packages can sometime be missing version information. When this is the case, we should default to a hard-codes `0.0.0` version. I believe this behavior is consistent with python\u2019s own `python setup.py \u2014version` when no version is actually given in the `setup()` function. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (C) 2012 Anaconda, Inc\n# SPDX-License-Identifier: BSD-3-Clause\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom fnmatch import filter as fnmatch_filter\nfrom logging import getLogger\nfrom os import listdir\nfrom os.path import basename, isdir, isfile, join, lexists\n\nfrom ..base.constants import CONDA_TARBALL_EXTENSION, PREFIX_MAGIC_FILE\nfrom ..base.context import context\nfrom ..common.compat import JSONDecodeError, itervalues, odict, string_types, with_metaclass\nfrom ..common.constants import NULL\nfrom ..common.path import get_python_site_packages_short_path, win_path_ok\nfrom ..common.pkg_formats.python import get_site_packages_anchor_files\nfrom ..common.serialize import json_load\nfrom ..exceptions import (\n BasicClobberError, CondaDependencyError, CorruptedEnvironmentError, maybe_raise,\n)\nfrom ..gateways.disk.create import write_as_json_to_file\nfrom ..gateways.disk.delete import rm_rf\nfrom ..gateways.disk.read import read_python_record\nfrom ..gateways.disk.test import file_path_is_writable\nfrom ..models.match_spec import MatchSpec\nfrom ..models.prefix_graph import PrefixGraph\nfrom ..models.records import PackageRecord, PrefixRecord\n\nlog = getLogger(__name__)\n\n\nclass PrefixDataType(type):\n \"\"\"Basic caching of PrefixData instance objects.\"\"\"\n\n def __call__(cls, prefix_path, pip_interop_enabled=None):\n if prefix_path in PrefixData._cache_:\n return PrefixData._cache_[prefix_path]\n elif isinstance(prefix_path, PrefixData):\n return prefix_path\n else:\n prefix_data_instance = super(PrefixDataType, cls).__call__(prefix_path,\n pip_interop_enabled)\n PrefixData._cache_[prefix_path] = prefix_data_instance\n return prefix_data_instance\n\n\n@with_metaclass(PrefixDataType)\nclass PrefixData(object):\n 
_cache_ = {}\n\n def __init__(self, prefix_path, pip_interop_enabled=None):\n # pip_interop_enabled is a temporary paramater; DO NOT USE\n # TODO: when removing pip_interop_enabled, also remove from meta class\n self.prefix_path = prefix_path\n self.__prefix_records = None\n self.__is_writable = NULL\n self._pip_interop_enabled = (context.pip_interop_enabled\n if pip_interop_enabled is None\n else pip_interop_enabled)\n\n def load(self):\n self.__prefix_records = {}\n _conda_meta_dir = join(self.prefix_path, 'conda-meta')\n if lexists(_conda_meta_dir):\n for meta_file in fnmatch_filter(listdir(_conda_meta_dir), '*.json'):\n self._load_single_record(join(_conda_meta_dir, meta_file))\n if self._pip_interop_enabled:\n self._load_site_packages()\n\n def reload(self):\n self.load()\n return self\n\n def insert(self, prefix_record):\n assert prefix_record.name not in self._prefix_records\n\n assert prefix_record.fn.endswith(CONDA_TARBALL_EXTENSION)\n filename = prefix_record.fn[:-len(CONDA_TARBALL_EXTENSION)] + '.json'\n\n prefix_record_json_path = join(self.prefix_path, 'conda-meta', filename)\n if lexists(prefix_record_json_path):\n maybe_raise(BasicClobberError(\n source_path=None,\n target_path=prefix_record_json_path,\n context=context,\n ), context)\n rm_rf(prefix_record_json_path)\n\n write_as_json_to_file(prefix_record_json_path, prefix_record)\n\n self._prefix_records[prefix_record.name] = prefix_record\n\n def remove(self, package_name):\n assert package_name in self._prefix_records\n\n prefix_record = self._prefix_records[package_name]\n\n filename = prefix_record.fn[:-len(CONDA_TARBALL_EXTENSION)] + '.json'\n conda_meta_full_path = join(self.prefix_path, 'conda-meta', filename)\n if self.is_writable:\n rm_rf(conda_meta_full_path)\n\n del self._prefix_records[package_name]\n\n def get(self, package_name, default=NULL):\n try:\n return self._prefix_records[package_name]\n except KeyError:\n if default is not NULL:\n return default\n else:\n raise\n\n def iter_records(self):\n return itervalues(self._prefix_records)\n\n def iter_records_sorted(self):\n prefix_graph = PrefixGraph(self.iter_records())\n return iter(prefix_graph.graph)\n\n def all_subdir_urls(self):\n subdir_urls = set()\n for prefix_record in itervalues(self._prefix_records):\n subdir_url = prefix_record.channel.subdir_url\n if subdir_url and subdir_url not in subdir_urls:\n log.debug(\"adding subdir url %s for %s\", subdir_url, prefix_record)\n subdir_urls.add(subdir_url)\n return subdir_urls\n\n def query(self, package_ref_or_match_spec):\n # returns a generator\n param = package_ref_or_match_spec\n if isinstance(param, string_types):\n param = MatchSpec(param)\n if isinstance(param, MatchSpec):\n return (prefix_rec for prefix_rec in self.iter_records()\n if param.match(prefix_rec))\n else:\n assert isinstance(param, PackageRecord)\n return (prefix_rec for prefix_rec in self.iter_records() if prefix_rec == param)\n\n @property\n def _prefix_records(self):\n return self.__prefix_records or self.load() or self.__prefix_records\n\n def _load_single_record(self, prefix_record_json_path):\n log.trace(\"loading prefix record %s\", prefix_record_json_path)\n with open(prefix_record_json_path) as fh:\n try:\n json_data = json_load(fh.read())\n except JSONDecodeError:\n raise CorruptedEnvironmentError(self.prefix_path, prefix_record_json_path)\n\n # TODO: consider, at least in memory, storing prefix_record_json_path as part\n # of PrefixRecord\n prefix_record = PrefixRecord(**json_data)\n\n # check that prefix record json 
filename conforms to name-version-build\n # apparently implemented as part of #2638 to resolve #2599\n try:\n n, v, b = basename(prefix_record_json_path)[:-5].rsplit('-', 2)\n if (n, v, b) != (prefix_record.name, prefix_record.version, prefix_record.build):\n raise ValueError()\n except ValueError:\n log.warn(\"Ignoring malformed prefix record at: %s\", prefix_record_json_path)\n # TODO: consider just deleting here this record file in the future\n return\n\n self.__prefix_records[prefix_record.name] = prefix_record\n\n @property\n def is_writable(self):\n if self.__is_writable == NULL:\n test_path = join(self.prefix_path, PREFIX_MAGIC_FILE)\n if not isfile(test_path):\n is_writable = None\n else:\n is_writable = file_path_is_writable(test_path)\n self.__is_writable = is_writable\n return self.__is_writable\n\n # # REMOVE: ?\n def _has_python(self):\n return 'python' in self._prefix_records\n\n @property\n def _python_pkg_record(self):\n \"\"\"Return the prefix record for the package python.\"\"\"\n return next(\n (prefix_record for prefix_record in itervalues(self.__prefix_records)\n if prefix_record.name == 'python'),\n None\n )\n\n def _load_site_packages(self):\n \"\"\"\n Load non-conda-installed python packages in the site-packages of the prefix.\n\n Python packages not handled by conda are installed via other means,\n like using pip or using python setup.py develop for local development.\n\n Packages found that are not handled by conda are converted into a\n prefix record and handled in memory.\n\n Packages clobbering conda packages (i.e. the conda-meta record) are\n removed from the in memory representation.\n \"\"\"\n python_pkg_record = self._python_pkg_record\n\n if not python_pkg_record:\n return {}\n\n site_packages_dir = get_python_site_packages_short_path(python_pkg_record.version)\n site_packages_path = join(self.prefix_path, win_path_ok(site_packages_dir))\n\n if not isdir(site_packages_path):\n return {}\n\n # Get anchor files for corresponding conda (handled) python packages\n prefix_graph = PrefixGraph(self.iter_records())\n python_records = prefix_graph.all_descendants(python_pkg_record)\n conda_python_packages = get_conda_anchor_files_and_records(python_records)\n\n # Get all anchor files and compare against conda anchor files to find clobbered conda\n # packages and python packages installed via other means (not handled by conda)\n sp_anchor_files = get_site_packages_anchor_files(site_packages_path, site_packages_dir)\n conda_anchor_files = set(conda_python_packages)\n clobbered_conda_anchor_files = conda_anchor_files - sp_anchor_files\n non_conda_anchor_files = sp_anchor_files - conda_anchor_files\n\n # If there's a mismatch for anchor files between what conda expects for a package\n # based on conda-meta, and for what is actually in site-packages, then we'll delete\n # the in-memory record for the conda package. 
In the future, we should consider\n # also deleting the record on disk in the conda-meta/ directory.\n for conda_anchor_file in clobbered_conda_anchor_files:\n prefix_rec = self._prefix_records.pop(conda_python_packages[conda_anchor_file].name)\n try:\n extracted_package_dir = basename(prefix_rec.extracted_package_dir)\n except AttributeError:\n extracted_package_dir = \"-\".join((\n prefix_rec.name, prefix_rec.version, prefix_rec.build\n ))\n prefix_rec_json_path = join(\n self.prefix_path, \"conda-meta\", '%s.json' % extracted_package_dir\n )\n try:\n rm_rf(prefix_rec_json_path)\n except EnvironmentError:\n log.debug(\"stale information, but couldn't remove: %s\", prefix_rec_json_path)\n else:\n log.debug(\"removed due to stale information: %s\", prefix_rec_json_path)\n\n # Create prefix records for python packages not handled by conda\n new_packages = {}\n for af in non_conda_anchor_files:\n try:\n python_record = read_python_record(self.prefix_path, af, python_pkg_record.version)\n except EnvironmentError:\n continue\n if not python_record:\n continue\n self.__prefix_records[python_record.name] = python_record\n new_packages[python_record.name] = python_record\n\n return new_packages\n\n\ndef get_conda_anchor_files_and_records(python_records):\n \"\"\"Return the anchor files for the conda records of python packages.\"\"\"\n anchor_file_endings = ('.egg-info/PKG-INFO', '.dist-info/RECORD', '.egg-info')\n conda_python_packages = odict()\n\n for prefix_record in python_records:\n for fpath in prefix_record.files:\n if fpath.endswith(anchor_file_endings) and 'site-packages' in fpath:\n # Then 'fpath' is an anchor file\n conda_python_packages[fpath] = prefix_record\n\n return conda_python_packages\n\n\ndef get_python_version_for_prefix(prefix):\n # returns a string e.g. \"2.7\", \"3.4\", \"3.5\" or None\n py_record_iter = (rcrd for rcrd in PrefixData(prefix).iter_records() if rcrd.name == 'python')\n record = next(py_record_iter, None)\n if record is None:\n return None\n next_record = next(py_record_iter, None)\n if next_record is not None:\n raise CondaDependencyError(\"multiple python records found in prefix %s\" % prefix)\n else:\n return record.version[:3]\n\n\ndef delete_prefix_from_linked_data(path):\n '''Here, path may be a complete prefix or a dist inside a prefix'''\n linked_data_path = next((key for key in sorted(PrefixData._cache_, reverse=True)\n if path.startswith(key)),\n None)\n if linked_data_path:\n del PrefixData._cache_[linked_data_path]\n return True\n return False\n", "path": "conda/core/prefix_data.py"}]}
| 4,063 | 318 |
gh_patches_debug_29663
|
rasdani/github-patches
|
git_diff
|
Azure__azure-cli-extensions-480
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extension online docs are not reflecting CLI parameter metadata.
The docs for `az network firewall ip-config create` are mismatched.
Docs: `az network firewall ip-config create` lists `--subnet` as a valid argument.
CLI:
```
mike@Azure:~$ az network firewall ip-config create -h
This command is from the following extension: azure-firewall
The extension is in preview
Command
az network firewall ip-config create : Create an Azure Firewall IP configuration.
Arguments
--firewall-name -f [Required] : Azure Firewall name.
--name -n [Required] : Name of the IP configuration.
--public-ip-address [Required] : Name or ID of the public IP to use.
--resource-group -g [Required] : Name of resource group. You can configure the default group
using `az configure --defaults group=<name>`.
--vnet-name [Required] : The virtual network (VNet) name.
--private-ip-address : IP address used by the Firewall ILB as the next hop in User
Defined Routes.
Global Arguments
--debug : Increase logging verbosity to show all debug logs.
--help -h : Show this help message and exit.
--output -o : Output format. Allowed values: json, jsonc, none, table, tsv,
yaml. Default: json.
--query : JMESPath query string. See http://jmespath.org/ for more
information and examples.
--subscription : Name or ID of subscription. You can configure the default
subscription using `az account set -s NAME_OR_ID`.
--verbose : Increase logging verbosity. Use --debug for full debug logs
```
If you try to use the `--subnet` flag, you get this error:
```
mike@Azure:~$ az network firewall ip-config create -f <fw> --name <name> --vnet-name <vnet> --public-ip-address <pip> --private-ip-address 10.0.0.9 -g <rg> --subnet <snet>
Resource /subscriptions/.../resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<snet> referenced by resource /subscriptions/.../resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<fw> was not found. Please make sure that the referenced resource exists, and that both resources are in the same region.
```
Both resources are in the same resource group in the same region.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 4a309da7-b5b4-3aa1-e546-174af6270e4d
* Version Independent ID: f62e6045-1dd9-677f-f4df-5b29338f29dc
* Content: [az network firewall ip-config](https://docs.microsoft.com/en-us/cli/azure/ext/azure-firewall/network/firewall/ip-config?view=azure-cli-latest#ext-azure-firewall-az-network-firewall-ip-config-create)
* Content Source: [latest/docs-ref-autogen/ext/azure-firewall/network/firewall/ip-config.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/live/latest/docs-ref-autogen/ext/azure-firewall/network/firewall/ip-config.yml)
* GitHub Login: @rloutlaw
* Microsoft Alias: **routlaw**
</issue>
<code>
[start of scripts/refdoc/azhelpgen/azhelpgen.py]
1 # --------------------------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for license information.
4 # --------------------------------------------------------------------------------------------
5
6 import argparse
7 import json
8 from os.path import expanduser
9 from docutils import nodes
10 from docutils.statemachine import ViewList
11 from docutils.parsers.rst import Directive
12 from sphinx.util.nodes import nested_parse_with_titles
13
14 from knack.help_files import helps
15
16 from knack.help import GroupHelpFile
17 from azure.cli.core import MainCommandsLoader, AzCli
18 from azure.cli.core.commands import AzCliCommandInvoker, ExtensionCommandSource
19 from azure.cli.core.parser import AzCliCommandParser
20 from azure.cli.core._help import AzCliHelp, CliCommandHelpFile, ArgumentGroupRegistry
21
22 USER_HOME = expanduser('~')
23
24
25 def get_extension_help_files(cli_ctx):
26 invoker = cli_ctx.invocation_cls(cli_ctx=cli_ctx, commands_loader_cls=cli_ctx.commands_loader_cls,
27 parser_cls=cli_ctx.parser_cls, help_cls=cli_ctx.help_cls)
28 cli_ctx.invocation = invoker
29 cmd_table = invoker.commands_loader.load_command_table(None)
30 # Filter the command table to only get commands from extensions
31 cmd_table = {k: v for k, v in cmd_table.items() if isinstance(v.command_source, ExtensionCommandSource)}
32 invoker.commands_loader.command_table = cmd_table
33 print('FOUND {} command(s) from the extension.'.format(len(cmd_table)))
34 for cmd_name in cmd_table:
35 invoker.commands_loader.load_arguments(cmd_name)
36 invoker.parser.load_command_table(invoker.commands_loader)
37
38 parser_keys = []
39 parser_values = []
40 sub_parser_keys = []
41 sub_parser_values = []
42 _store_parsers(invoker.parser, parser_keys, parser_values, sub_parser_keys, sub_parser_values)
43 for cmd, parser in zip(parser_keys, parser_values):
44 if cmd not in sub_parser_keys:
45 sub_parser_keys.append(cmd)
46 sub_parser_values.append(parser)
47 help_ctx = cli_ctx.help_cls(cli_ctx=cli_ctx)
48 help_files = []
49 for cmd, parser in zip(sub_parser_keys, sub_parser_values):
50 try:
51 help_file = GroupHelpFile(help_ctx, cmd, parser) if _is_group(parser) \
52 else CliCommandHelpFile(help_ctx, cmd, parser)
53 help_file.load(parser)
54 help_files.append(help_file)
55 except Exception as ex:
56 print("Skipped '{}' due to '{}'".format(cmd, ex))
57 help_files = sorted(help_files, key=lambda x: x.command)
58 return help_files
59
60 class AzHelpGenDirective(Directive):
61 def make_rst(self):
62 INDENT = ' '
63 DOUBLEINDENT = INDENT * 2
64
65 az_cli = AzCli(cli_name='az',
66 commands_loader_cls=MainCommandsLoader,
67 invocation_cls=AzCliCommandInvoker,
68 parser_cls=AzCliCommandParser,
69 help_cls=AzCliHelp)
70 help_files = get_extension_help_files(az_cli)
71
72 for help_file in help_files:
73 is_command = isinstance(help_file, CliCommandHelpFile)
74 yield '.. cli{}:: {}'.format('command' if is_command else 'group', help_file.command if help_file.command else 'az') #it is top level group az if command is empty
75 yield ''
76 yield '{}:summary: {}'.format(INDENT, help_file.short_summary)
77 yield '{}:description: {}'.format(INDENT, help_file.long_summary)
78 if help_file.deprecate_info:
79 yield '{}:deprecated: {}'.format(INDENT, help_file.deprecate_info._get_message(help_file.deprecate_info))
80 yield ''
81
82 if is_command and help_file.parameters:
83 group_registry = ArgumentGroupRegistry([p.group_name for p in help_file.parameters if p.group_name])
84
85 for arg in sorted(help_file.parameters,
86 key=lambda p: group_registry.get_group_priority(p.group_name)
87 + str(not p.required) + p.name):
88 yield '{}.. cliarg:: {}'.format(INDENT, arg.name)
89 yield ''
90 yield '{}:required: {}'.format(DOUBLEINDENT, arg.required)
91 if arg.deprecate_info:
92 yield '{}:deprecated: {}'.format(DOUBLEINDENT, arg.deprecate_info._get_message(arg.deprecate_info))
93 short_summary = arg.short_summary or ''
94 possible_values_index = short_summary.find(' Possible values include')
95 short_summary = short_summary[0:possible_values_index
96 if possible_values_index >= 0 else len(short_summary)]
97 short_summary = short_summary.strip()
98 yield '{}:summary: {}'.format(DOUBLEINDENT, short_summary)
99 yield '{}:description: {}'.format(DOUBLEINDENT, arg.long_summary)
100 if arg.choices:
101 yield '{}:values: {}'.format(DOUBLEINDENT, ', '.join(sorted([str(x) for x in arg.choices])))
102 if arg.default and arg.default != argparse.SUPPRESS:
103 try:
104 if arg.default.startswith(USER_HOME):
105 arg.default = arg.default.replace(USER_HOME, '~').replace('\\', '/')
106 except Exception:
107 pass
108 try:
109 arg.default = arg.default.replace("\\", "\\\\")
110 except Exception:
111 pass
112 yield '{}:default: {}'.format(DOUBLEINDENT, arg.default)
113 if arg.value_sources:
114 yield '{}:source: {}'.format(DOUBLEINDENT, ', '.join(arg.value_sources))
115 yield ''
116 yield ''
117 if len(help_file.examples) > 0:
118 for e in help_file.examples:
119 yield '{}.. cliexample:: {}'.format(INDENT, e.name)
120 yield ''
121 yield DOUBLEINDENT + e.text.replace("\\", "\\\\")
122 yield ''
123
124 def run(self):
125 node = nodes.section()
126 node.document = self.state.document
127 result = ViewList()
128 for line in self.make_rst():
129 result.append(line, '<azhelpgen>')
130
131 nested_parse_with_titles(self.state, result, node)
132 return node.children
133
134 def setup(app):
135 app.add_directive('azhelpgen', AzHelpGenDirective)
136
137
138 def _store_parsers(parser, parser_keys, parser_values, sub_parser_keys, sub_parser_values):
139 for s in parser.subparsers.values():
140 parser_keys.append(_get_parser_name(s))
141 parser_values.append(s)
142 if _is_group(s):
143 for c in s.choices.values():
144 sub_parser_keys.append(_get_parser_name(c))
145 sub_parser_values.append(c)
146 _store_parsers(c, parser_keys, parser_values, sub_parser_keys, sub_parser_values)
147
148 def _is_group(parser):
149 return getattr(parser, '_subparsers', None) is not None \
150 or getattr(parser, 'choices', None) is not None
151
152 def _get_parser_name(s):
153 return (s._prog_prefix if hasattr(s, '_prog_prefix') else s.prog)[3:]
154
[end of scripts/refdoc/azhelpgen/azhelpgen.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/refdoc/azhelpgen/azhelpgen.py b/scripts/refdoc/azhelpgen/azhelpgen.py
--- a/scripts/refdoc/azhelpgen/azhelpgen.py
+++ b/scripts/refdoc/azhelpgen/azhelpgen.py
@@ -23,18 +23,31 @@
def get_extension_help_files(cli_ctx):
+
+ # 1. Create invoker and load command table and arguments. Remember to turn off applicability check.
invoker = cli_ctx.invocation_cls(cli_ctx=cli_ctx, commands_loader_cls=cli_ctx.commands_loader_cls,
parser_cls=cli_ctx.parser_cls, help_cls=cli_ctx.help_cls)
cli_ctx.invocation = invoker
+
+ invoker.commands_loader.skip_applicability = True
cmd_table = invoker.commands_loader.load_command_table(None)
- # Filter the command table to only get commands from extensions
+
+ # turn off applicability check for all loaders
+ for loaders in invoker.commands_loader.cmd_to_loader_map.values():
+ for loader in loaders:
+ loader.skip_applicability = True
+
+ # filter the command table to only get commands from extensions
cmd_table = {k: v for k, v in cmd_table.items() if isinstance(v.command_source, ExtensionCommandSource)}
invoker.commands_loader.command_table = cmd_table
print('FOUND {} command(s) from the extension.'.format(len(cmd_table)))
+
for cmd_name in cmd_table:
- invoker.commands_loader.load_arguments(cmd_name)
+ invoker.commands_loader.load_arguments(cmd_name)
+
invoker.parser.load_command_table(invoker.commands_loader)
+ # 2. Now load applicable help files
parser_keys = []
parser_values = []
sub_parser_keys = []
|
{"golden_diff": "diff --git a/scripts/refdoc/azhelpgen/azhelpgen.py b/scripts/refdoc/azhelpgen/azhelpgen.py\n--- a/scripts/refdoc/azhelpgen/azhelpgen.py\n+++ b/scripts/refdoc/azhelpgen/azhelpgen.py\n@@ -23,18 +23,31 @@\n \n \n def get_extension_help_files(cli_ctx):\n+\n+ # 1. Create invoker and load command table and arguments. Remember to turn off applicability check.\n invoker = cli_ctx.invocation_cls(cli_ctx=cli_ctx, commands_loader_cls=cli_ctx.commands_loader_cls,\n parser_cls=cli_ctx.parser_cls, help_cls=cli_ctx.help_cls)\n cli_ctx.invocation = invoker\n+\n+ invoker.commands_loader.skip_applicability = True\n cmd_table = invoker.commands_loader.load_command_table(None)\n- # Filter the command table to only get commands from extensions\n+\n+ # turn off applicability check for all loaders\n+ for loaders in invoker.commands_loader.cmd_to_loader_map.values():\n+ for loader in loaders:\n+ loader.skip_applicability = True\n+\n+ # filter the command table to only get commands from extensions\n cmd_table = {k: v for k, v in cmd_table.items() if isinstance(v.command_source, ExtensionCommandSource)}\n invoker.commands_loader.command_table = cmd_table\n print('FOUND {} command(s) from the extension.'.format(len(cmd_table)))\n+\n for cmd_name in cmd_table:\n- invoker.commands_loader.load_arguments(cmd_name)\n+ invoker.commands_loader.load_arguments(cmd_name)\n+\n invoker.parser.load_command_table(invoker.commands_loader)\n \n+ # 2. Now load applicable help files\n parser_keys = []\n parser_values = []\n sub_parser_keys = []\n", "issue": "Extension online docs are not reflecting CLI parameter metadata.\nThe docs for `az network firewall ip-config create` are mismatched.\n\nDocs: `az network firewall ip-config create` list `--subnet` as a valid command.\n\nCLI:\n\n```\nmike@Azure:~$ az network firewall ip-config create -h\nThis command is from the following extension: azure-firewall\nThe extension is in preview\n\nCommand\n az network firewall ip-config create : Create an Azure Firewall IP configuration.\n\nArguments\n --firewall-name -f [Required] : Azure Firewall name.\n --name -n [Required] : Name of the IP configuration.\n --public-ip-address [Required] : Name or ID of the public IP to use.\n --resource-group -g [Required] : Name of resource group. You can configure the default group\n using `az configure --defaults group=<name>`.\n --vnet-name [Required] : The virtual network (VNet) name.\n --private-ip-address : IP address used by the Firewall ILB as the next hop in User\n Defined Routes.\n\nGlobal Arguments\n --debug : Increase logging verbosity to show all debug logs.\n --help -h : Show this help message and exit.\n --output -o : Output format. Allowed values: json, jsonc, none, table, tsv,\n yaml. Default: json.\n --query : JMESPath query string. See http://jmespath.org/ for more\n information and examples.\n --subscription : Name or ID of subscription. You can configure the default\n subscription using `az account set -s NAME_OR_ID`.\n --verbose : Increase logging verbosity. 
Use --debug for full debug logs\n```\n\nIf you try and leverage the `--subnet` flag, you get this error:\n\n```\nmike@Azure:~$ az network firewall ip-config create -f <fw> --name <name> --vnet-name <vnet> --public-ip-address <pip> --private-ip-address 10.0.0.9 -g <rg> --subnet <snet>\nResource /subscriptions/.../resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<snet> referenced by resource /subscriptions/.../resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<fw> was not found. Please make sure that the referenced resource exists, and that both resources are in the same region.\n```\n\nBoth resources are in the same resource group in the same region.\n\n---\n#### Document Details\n\n\u26a0 *Do not edit this section. It is required for docs.microsoft.com \u279f GitHub issue linking.*\n\n* ID: 4a309da7-b5b4-3aa1-e546-174af6270e4d\n* Version Independent ID: f62e6045-1dd9-677f-f4df-5b29338f29dc\n* Content: [az network firewall ip-config](https://docs.microsoft.com/en-us/cli/azure/ext/azure-firewall/network/firewall/ip-config?view=azure-cli-latest#ext-azure-firewall-az-network-firewall-ip-config-create)\n* Content Source: [latest/docs-ref-autogen/ext/azure-firewall/network/firewall/ip-config.yml](https://github.com/MicrosoftDocs/azure-docs-cli/blob/live/latest/docs-ref-autogen/ext/azure-firewall/network/firewall/ip-config.yml)\n* GitHub Login: @rloutlaw\n* Microsoft Alias: **routlaw**\n", "before_files": [{"content": "# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nimport argparse\nimport json\nfrom os.path import expanduser\nfrom docutils import nodes\nfrom docutils.statemachine import ViewList\nfrom docutils.parsers.rst import Directive\nfrom sphinx.util.nodes import nested_parse_with_titles\n\nfrom knack.help_files import helps\n\nfrom knack.help import GroupHelpFile\nfrom azure.cli.core import MainCommandsLoader, AzCli\nfrom azure.cli.core.commands import AzCliCommandInvoker, ExtensionCommandSource\nfrom azure.cli.core.parser import AzCliCommandParser\nfrom azure.cli.core._help import AzCliHelp, CliCommandHelpFile, ArgumentGroupRegistry\n\nUSER_HOME = expanduser('~')\n\n\ndef get_extension_help_files(cli_ctx):\n invoker = cli_ctx.invocation_cls(cli_ctx=cli_ctx, commands_loader_cls=cli_ctx.commands_loader_cls,\n parser_cls=cli_ctx.parser_cls, help_cls=cli_ctx.help_cls)\n cli_ctx.invocation = invoker\n cmd_table = invoker.commands_loader.load_command_table(None)\n # Filter the command table to only get commands from extensions\n cmd_table = {k: v for k, v in cmd_table.items() if isinstance(v.command_source, ExtensionCommandSource)}\n invoker.commands_loader.command_table = cmd_table\n print('FOUND {} command(s) from the extension.'.format(len(cmd_table)))\n for cmd_name in cmd_table:\n invoker.commands_loader.load_arguments(cmd_name)\n invoker.parser.load_command_table(invoker.commands_loader)\n\n parser_keys = []\n parser_values = []\n sub_parser_keys = []\n sub_parser_values = []\n _store_parsers(invoker.parser, parser_keys, parser_values, sub_parser_keys, sub_parser_values)\n for cmd, parser in zip(parser_keys, parser_values):\n if cmd not in sub_parser_keys:\n sub_parser_keys.append(cmd)\n sub_parser_values.append(parser)\n help_ctx = 
cli_ctx.help_cls(cli_ctx=cli_ctx)\n help_files = []\n for cmd, parser in zip(sub_parser_keys, sub_parser_values):\n try:\n help_file = GroupHelpFile(help_ctx, cmd, parser) if _is_group(parser) \\\n else CliCommandHelpFile(help_ctx, cmd, parser)\n help_file.load(parser)\n help_files.append(help_file)\n except Exception as ex:\n print(\"Skipped '{}' due to '{}'\".format(cmd, ex))\n help_files = sorted(help_files, key=lambda x: x.command)\n return help_files\n\nclass AzHelpGenDirective(Directive):\n def make_rst(self):\n INDENT = ' '\n DOUBLEINDENT = INDENT * 2\n\n az_cli = AzCli(cli_name='az',\n commands_loader_cls=MainCommandsLoader,\n invocation_cls=AzCliCommandInvoker,\n parser_cls=AzCliCommandParser,\n help_cls=AzCliHelp)\n help_files = get_extension_help_files(az_cli)\n\n for help_file in help_files:\n is_command = isinstance(help_file, CliCommandHelpFile)\n yield '.. cli{}:: {}'.format('command' if is_command else 'group', help_file.command if help_file.command else 'az') #it is top level group az if command is empty\n yield ''\n yield '{}:summary: {}'.format(INDENT, help_file.short_summary)\n yield '{}:description: {}'.format(INDENT, help_file.long_summary)\n if help_file.deprecate_info:\n yield '{}:deprecated: {}'.format(INDENT, help_file.deprecate_info._get_message(help_file.deprecate_info))\n yield ''\n\n if is_command and help_file.parameters:\n group_registry = ArgumentGroupRegistry([p.group_name for p in help_file.parameters if p.group_name]) \n\n for arg in sorted(help_file.parameters,\n key=lambda p: group_registry.get_group_priority(p.group_name)\n + str(not p.required) + p.name):\n yield '{}.. cliarg:: {}'.format(INDENT, arg.name)\n yield ''\n yield '{}:required: {}'.format(DOUBLEINDENT, arg.required)\n if arg.deprecate_info:\n yield '{}:deprecated: {}'.format(DOUBLEINDENT, arg.deprecate_info._get_message(arg.deprecate_info))\n short_summary = arg.short_summary or ''\n possible_values_index = short_summary.find(' Possible values include')\n short_summary = short_summary[0:possible_values_index\n if possible_values_index >= 0 else len(short_summary)]\n short_summary = short_summary.strip()\n yield '{}:summary: {}'.format(DOUBLEINDENT, short_summary)\n yield '{}:description: {}'.format(DOUBLEINDENT, arg.long_summary)\n if arg.choices:\n yield '{}:values: {}'.format(DOUBLEINDENT, ', '.join(sorted([str(x) for x in arg.choices])))\n if arg.default and arg.default != argparse.SUPPRESS:\n try:\n if arg.default.startswith(USER_HOME):\n arg.default = arg.default.replace(USER_HOME, '~').replace('\\\\', '/')\n except Exception:\n pass\n try:\n arg.default = arg.default.replace(\"\\\\\", \"\\\\\\\\\")\n except Exception:\n pass\n yield '{}:default: {}'.format(DOUBLEINDENT, arg.default)\n if arg.value_sources:\n yield '{}:source: {}'.format(DOUBLEINDENT, ', '.join(arg.value_sources))\n yield ''\n yield ''\n if len(help_file.examples) > 0:\n for e in help_file.examples:\n yield '{}.. 
cliexample:: {}'.format(INDENT, e.name)\n yield ''\n yield DOUBLEINDENT + e.text.replace(\"\\\\\", \"\\\\\\\\\")\n yield ''\n\n def run(self):\n node = nodes.section()\n node.document = self.state.document\n result = ViewList()\n for line in self.make_rst():\n result.append(line, '<azhelpgen>')\n\n nested_parse_with_titles(self.state, result, node)\n return node.children\n\ndef setup(app):\n app.add_directive('azhelpgen', AzHelpGenDirective)\n\n\ndef _store_parsers(parser, parser_keys, parser_values, sub_parser_keys, sub_parser_values):\n for s in parser.subparsers.values():\n parser_keys.append(_get_parser_name(s))\n parser_values.append(s)\n if _is_group(s):\n for c in s.choices.values():\n sub_parser_keys.append(_get_parser_name(c))\n sub_parser_values.append(c)\n _store_parsers(c, parser_keys, parser_values, sub_parser_keys, sub_parser_values)\n\ndef _is_group(parser):\n return getattr(parser, '_subparsers', None) is not None \\\n or getattr(parser, 'choices', None) is not None\n\ndef _get_parser_name(s):\n return (s._prog_prefix if hasattr(s, '_prog_prefix') else s.prog)[3:]\n", "path": "scripts/refdoc/azhelpgen/azhelpgen.py"}]}
| 3,177 | 390 |
gh_patches_debug_37024
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-1936
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Challenge creation notifications
At the moment, staff users receive emails about new challenges. It would be nice to turn those emails into notifications.
**Challenge creation emails**
- create follow: `staff users` --> `gc.localhost Site`
- upon challenge creation: send `action` with challenge `creator` as `actor`, the created `challenge` as `action_object` and the `gc.localhost site` as `target`
- _Notification template:_ print action as is for normal challenges, for external challenges, make sure to link the `update_url` for the challenge in the notification rather than the challenge url
</issue>
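The actor/action_object/target wording in the bullets above matches the django-activity-stream API, so the proposed flow might look roughly like the sketch below; the import paths, the `created` verb, and the helper names are assumptions, and the change that was actually merged (shown in the diff further down) takes a simpler route and mails Django's `MANAGERS` instead.

```
# Rough sketch of the issue's proposal, not the merged change.
from actstream import action
from actstream.actions import follow
from django.contrib.sites.models import Site

def follow_site(staff_user):
    # "create follow: staff users --> gc.localhost Site"
    follow(staff_user, Site.objects.get_current(), send_action=False)

def send_challenge_created_action(challenge):
    # actor = challenge.creator, action_object = challenge, target = site
    action.send(
        challenge.creator,
        verb="created",
        action_object=challenge,
        target=Site.objects.get_current(),
    )
```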
<code>
[start of app/grandchallenge/challenges/emails.py]
1 from django.conf import settings
2 from django.contrib.auth import get_user_model
3 from django.contrib.sites.models import Site
4 from django.core.mail import send_mail
5
6 from grandchallenge.subdomains.utils import reverse
7
8
9 def send_challenge_created_email(challenge):
10 site = Site.objects.get_current()
11 message = (
12 f"Dear {{}},\n\n"
13 f"User {challenge.creator} has just created the challenge "
14 f"{challenge.short_name} at {challenge.get_absolute_url()}.\n\n"
15 f"Regards,\n"
16 f"{site.name}\n\n"
17 f"This is an automated service email from {site.domain}."
18 )
19
20 staff = get_user_model().objects.filter(is_staff=True)
21
22 for s in staff:
23 send_mail(
24 subject=f"[{site.domain.lower()}] New Challenge Created",
25 message=message.format(s.username),
26 from_email=settings.DEFAULT_FROM_EMAIL,
27 recipient_list=[s.email],
28 )
29
30
31 def send_external_challenge_created_email(challenge):
32 site = Site.objects.get_current()
33 update_url = reverse(
34 "challenges:external-update",
35 kwargs={"short_name": challenge.short_name},
36 )
37
38 message = (
39 f"Dear {{}},\n\n"
40 f"User {challenge.creator} has just created the challenge "
41 f"{challenge.short_name}. You need to un-hide it before it is visible "
42 f"on the all challenges page, you can do that here: {update_url}\n\n"
43 f"Regards,\n"
44 f"{site.name}\n\n"
45 f"This is an automated service email from {site.domain}."
46 )
47
48 staff = get_user_model().objects.filter(is_staff=True)
49
50 for s in staff:
51 send_mail(
52 subject=f"[{site.domain.lower()}] New External Challenge",
53 message=message.format(s.username),
54 from_email=settings.DEFAULT_FROM_EMAIL,
55 recipient_list=[s.email],
56 )
57
[end of app/grandchallenge/challenges/emails.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/challenges/emails.py b/app/grandchallenge/challenges/emails.py
--- a/app/grandchallenge/challenges/emails.py
+++ b/app/grandchallenge/challenges/emails.py
@@ -1,7 +1,5 @@
-from django.conf import settings
-from django.contrib.auth import get_user_model
from django.contrib.sites.models import Site
-from django.core.mail import send_mail
+from django.core.mail import mail_managers
from grandchallenge.subdomains.utils import reverse
@@ -9,7 +7,7 @@
def send_challenge_created_email(challenge):
site = Site.objects.get_current()
message = (
- f"Dear {{}},\n\n"
+ f"Dear manager,\n\n"
f"User {challenge.creator} has just created the challenge "
f"{challenge.short_name} at {challenge.get_absolute_url()}.\n\n"
f"Regards,\n"
@@ -17,15 +15,10 @@
f"This is an automated service email from {site.domain}."
)
- staff = get_user_model().objects.filter(is_staff=True)
-
- for s in staff:
- send_mail(
- subject=f"[{site.domain.lower()}] New Challenge Created",
- message=message.format(s.username),
- from_email=settings.DEFAULT_FROM_EMAIL,
- recipient_list=[s.email],
- )
+ mail_managers(
+ subject=f"[{site.domain.lower()}] New Challenge Created",
+ message=message,
+ )
def send_external_challenge_created_email(challenge):
@@ -36,7 +29,7 @@
)
message = (
- f"Dear {{}},\n\n"
+ f"Dear manager,\n\n"
f"User {challenge.creator} has just created the challenge "
f"{challenge.short_name}. You need to un-hide it before it is visible "
f"on the all challenges page, you can do that here: {update_url}\n\n"
@@ -45,12 +38,7 @@
f"This is an automated service email from {site.domain}."
)
- staff = get_user_model().objects.filter(is_staff=True)
-
- for s in staff:
- send_mail(
- subject=f"[{site.domain.lower()}] New External Challenge",
- message=message.format(s.username),
- from_email=settings.DEFAULT_FROM_EMAIL,
- recipient_list=[s.email],
- )
+ mail_managers(
+ subject=f"[{site.domain.lower()}] New External Challenge",
+ message=message,
+ )
|
{"golden_diff": "diff --git a/app/grandchallenge/challenges/emails.py b/app/grandchallenge/challenges/emails.py\n--- a/app/grandchallenge/challenges/emails.py\n+++ b/app/grandchallenge/challenges/emails.py\n@@ -1,7 +1,5 @@\n-from django.conf import settings\n-from django.contrib.auth import get_user_model\n from django.contrib.sites.models import Site\n-from django.core.mail import send_mail\n+from django.core.mail import mail_managers\n \n from grandchallenge.subdomains.utils import reverse\n \n@@ -9,7 +7,7 @@\n def send_challenge_created_email(challenge):\n site = Site.objects.get_current()\n message = (\n- f\"Dear {{}},\\n\\n\"\n+ f\"Dear manager,\\n\\n\"\n f\"User {challenge.creator} has just created the challenge \"\n f\"{challenge.short_name} at {challenge.get_absolute_url()}.\\n\\n\"\n f\"Regards,\\n\"\n@@ -17,15 +15,10 @@\n f\"This is an automated service email from {site.domain}.\"\n )\n \n- staff = get_user_model().objects.filter(is_staff=True)\n-\n- for s in staff:\n- send_mail(\n- subject=f\"[{site.domain.lower()}] New Challenge Created\",\n- message=message.format(s.username),\n- from_email=settings.DEFAULT_FROM_EMAIL,\n- recipient_list=[s.email],\n- )\n+ mail_managers(\n+ subject=f\"[{site.domain.lower()}] New Challenge Created\",\n+ message=message,\n+ )\n \n \n def send_external_challenge_created_email(challenge):\n@@ -36,7 +29,7 @@\n )\n \n message = (\n- f\"Dear {{}},\\n\\n\"\n+ f\"Dear manager,\\n\\n\"\n f\"User {challenge.creator} has just created the challenge \"\n f\"{challenge.short_name}. You need to un-hide it before it is visible \"\n f\"on the all challenges page, you can do that here: {update_url}\\n\\n\"\n@@ -45,12 +38,7 @@\n f\"This is an automated service email from {site.domain}.\"\n )\n \n- staff = get_user_model().objects.filter(is_staff=True)\n-\n- for s in staff:\n- send_mail(\n- subject=f\"[{site.domain.lower()}] New External Challenge\",\n- message=message.format(s.username),\n- from_email=settings.DEFAULT_FROM_EMAIL,\n- recipient_list=[s.email],\n- )\n+ mail_managers(\n+ subject=f\"[{site.domain.lower()}] New External Challenge\",\n+ message=message,\n+ )\n", "issue": "Challenge creation notifications\nAt the moment, staff users receive emails about new challenges. It would be nice to turns those emails into notifications. 
\r\n\r\n**Challenge creation emails**\r\n\r\n- create follow: `staff users` --> `gc.localhost Site`\r\n- upon challenge creation: send `action` with challenge `creator` as `actor`, the created `challenge` as `action_object` and the `gc.localhost site` as `target`\r\n- _Notification template:_ print action as is for normal challenges, for external challenges, make sure to link the `update_url` for the challenge in the notification rather than the challenge url\n", "before_files": [{"content": "from django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.sites.models import Site\nfrom django.core.mail import send_mail\n\nfrom grandchallenge.subdomains.utils import reverse\n\n\ndef send_challenge_created_email(challenge):\n site = Site.objects.get_current()\n message = (\n f\"Dear {{}},\\n\\n\"\n f\"User {challenge.creator} has just created the challenge \"\n f\"{challenge.short_name} at {challenge.get_absolute_url()}.\\n\\n\"\n f\"Regards,\\n\"\n f\"{site.name}\\n\\n\"\n f\"This is an automated service email from {site.domain}.\"\n )\n\n staff = get_user_model().objects.filter(is_staff=True)\n\n for s in staff:\n send_mail(\n subject=f\"[{site.domain.lower()}] New Challenge Created\",\n message=message.format(s.username),\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[s.email],\n )\n\n\ndef send_external_challenge_created_email(challenge):\n site = Site.objects.get_current()\n update_url = reverse(\n \"challenges:external-update\",\n kwargs={\"short_name\": challenge.short_name},\n )\n\n message = (\n f\"Dear {{}},\\n\\n\"\n f\"User {challenge.creator} has just created the challenge \"\n f\"{challenge.short_name}. You need to un-hide it before it is visible \"\n f\"on the all challenges page, you can do that here: {update_url}\\n\\n\"\n f\"Regards,\\n\"\n f\"{site.name}\\n\\n\"\n f\"This is an automated service email from {site.domain}.\"\n )\n\n staff = get_user_model().objects.filter(is_staff=True)\n\n for s in staff:\n send_mail(\n subject=f\"[{site.domain.lower()}] New External Challenge\",\n message=message.format(s.username),\n from_email=settings.DEFAULT_FROM_EMAIL,\n recipient_list=[s.email],\n )\n", "path": "app/grandchallenge/challenges/emails.py"}]}
| 1,185 | 562 |
gh_patches_debug_13377
|
rasdani/github-patches
|
git_diff
|
GeotrekCE__Geotrek-admin-1326
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SEARCH_PATH for Geotrek DB user
Since Geotrek 0.28, tables and functions have been moved to different schemas, which is a very good change (https://github.com/makinacorpus/Geotrek/releases/tag/v0.28.0).
Schemas are not mentioned in triggers, which is fine too, as Django sets them in its connections, so this is not a problem for Geotrek applications.
It becomes a problem when you try to edit or insert data from an external tool (QGIS, Talend...).
You have to change the db_user's search_path so that it can find tables and functions beyond the public schema.
It could be worth doing this during Geotrek installation for the Geotrek DB user mentioned in the settings:
ALTER USER $geotrek_db_user SET
search_path=public,django,geotrek,gestion,rando,zonage,foncier,tourisme;
Of course, if you are using another user to edit data in external tools, you will have to do it manually the first time.
</issue>
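
A minimal sketch of applying the same idea at the database level, so that every role connecting to the database (including the ones used by QGIS or Talend) resolves the schemas; the database name, connection parameters and schema list below are placeholders:

```python
import psycopg2

# Placeholders: adjust to the database and role configured in Geotrek's settings.
DB_NAME = "geotrekdb"
SCHEMAS = "public,django,geotrek,gestion,rando,zonage,foncier,tourisme"

conn = psycopg2.connect(dbname=DB_NAME, user="postgres")
conn.autocommit = True  # apply the setting immediately, outside an explicit transaction
with conn.cursor() as cursor:
    # A database-level search_path is picked up by every role connecting to this
    # database, so external tools find the Geotrek tables and functions.
    cursor.execute(f"ALTER DATABASE {DB_NAME} SET search_path = {SCHEMAS};")
conn.close()
```

The same statement can equally be run from psql; unlike `ALTER USER`, a database-level setting also covers roles created later.
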
<code>
[start of geotrek/common/utils/postgresql.py]
1 import re
2 import os
3 import logging
4 import traceback
5 from functools import wraps
6
7 from django.db import connection, models
8 from django.conf import settings
9 from django.db.models import get_app, get_models
10
11
12 logger = logging.getLogger(__name__)
13
14
15 def debug_pg_notices(f):
16
17 @wraps(f)
18 def wrapped(*args, **kwargs):
19 before = len(connection.connection.notices) if connection.connection else 0
20 try:
21 r = f(*args, **kwargs)
22 finally:
23 # Show triggers output
24 allnotices = []
25 current = ''
26 if connection.connection:
27 notices = []
28 for notice in connection.connection.notices[before:]:
29 try:
30 notice, context = notice.split('CONTEXT:', 1)
31 context = re.sub("\s+", " ", context)
32 except ValueError:
33 context = ''
34 notices.append((context, notice))
35 if context != current:
36 allnotices.append(notices)
37 notices = []
38 current = context
39 allnotices.append(notices)
40 current = ''
41 for notices in allnotices:
42 for context, notice in notices:
43 if context != current:
44 if context != '':
45 logger.debug('Context %s...:' % context.strip()[:80])
46 current = context
47 notice = notice.replace('NOTICE: ', '')
48 prefix = '' if context == '' else ' '
49 logger.debug('%s%s' % (prefix, notice.strip()))
50 return r
51
52 return wrapped
53
54
55 def load_sql_files(app_label):
56 """
57 Look for SQL files in Django app, and load them into database.
58 We remove RAISE NOTICE instructions from SQL outside unit testing
59 since they lead to interpolation errors of '%' character in python.
60 """
61 app_dir = os.path.dirname(models.get_app(app_label).__file__)
62 sql_dir = os.path.normpath(os.path.join(app_dir, 'sql'))
63 if not os.path.exists(sql_dir):
64 logger.debug("No SQL folder for %s" % app_label)
65 return
66
67 r = re.compile(r'^.*\.sql$')
68 sql_files = [os.path.join(sql_dir, f)
69 for f in os.listdir(sql_dir)
70 if r.match(f) is not None]
71 sql_files.sort()
72
73 if len(sql_files) == 0:
74 logger.warning("Empty folder %s" % sql_dir)
75
76 cursor = connection.cursor()
77 for sql_file in sql_files:
78 try:
79 logger.info("Loading initial SQL data from '%s'" % sql_file)
80 f = open(sql_file)
81 sql = f.read()
82 f.close()
83 if not settings.TEST:
84 # Remove RAISE NOTICE (/!\ only one-liners)
85 sql = re.sub(r"\n.*RAISE NOTICE.*\n", "\n", sql)
86 # TODO: this is the ugliest driver hack ever
87 sql = sql.replace('%', '%%')
88
89 # Replace curly braces with settings values
90 pattern = re.compile(r'{{\s*(.*)\s*}}')
91 for m in pattern.finditer(sql):
92 value = getattr(settings, m.group(1))
93 sql = sql.replace(m.group(0), unicode(value))
94 cursor.execute(sql)
95 except Exception as e:
96 logger.critical("Failed to install custom SQL file '%s': %s\n" %
97 (sql_file, e))
98 traceback.print_exc()
99 raise
100
101
102 def move_models_to_schemas(app_label):
103 """
104 Move models tables to PostgreSQL schemas.
105
106 Views, functions and triggers will be moved in Geotrek app SQL files.
107 """
108 app = get_app(app_label)
109 default_schema = settings.DATABASE_SCHEMAS.get('default')
110 app_schema = settings.DATABASE_SCHEMAS.get(app_label, default_schema)
111
112 table_schemas = {}
113 for model in get_models(app):
114 model_name = model._meta.module_name
115 table_name = model._meta.db_table
116 model_schema = settings.DATABASE_SCHEMAS.get(model_name, app_schema)
117 table_schemas.setdefault(model_schema, []).append(table_name)
118
119 for m2m_field in model._meta.many_to_many:
120 table_name = m2m_field.db_table
121 if table_name:
122 table_schemas[model_schema].append(table_name)
123
124 cursor = connection.cursor()
125
126 for schema_name in table_schemas.keys():
127 try:
128 sql = "CREATE SCHEMA %s;" % model_schema
129 cursor.execute(sql)
130 logger.info("Created schema %s" % model_schema)
131 except Exception:
132 logger.debug("Schema %s already exists." % model_schema)
133
134 for schema_name, tables in table_schemas.items():
135 for table_name in tables:
136 try:
137 sql = "ALTER TABLE %s SET SCHEMA %s;" % (table_name, schema_name)
138 cursor.execute(sql)
139 logger.info("Moved %s to schema %s" % (table_name, schema_name))
140 except Exception:
141 logger.debug("Table %s already in schema %s" % (table_name, schema_name))
142
[end of geotrek/common/utils/postgresql.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/geotrek/common/utils/postgresql.py b/geotrek/common/utils/postgresql.py
--- a/geotrek/common/utils/postgresql.py
+++ b/geotrek/common/utils/postgresql.py
@@ -139,3 +139,12 @@
logger.info("Moved %s to schema %s" % (table_name, schema_name))
except Exception:
logger.debug("Table %s already in schema %s" % (table_name, schema_name))
+
+ # For Django, search_path is set in connection options.
+ # But when accessing the database using QGis or ETL, search_path must be
+ # set database level (for all users, and for this database only).
+ if app_label == 'common':
+ dbname = settings.DATABASES['default']['NAME']
+ search_path = 'public,%s' % ','.join(set(settings.DATABASE_SCHEMAS.values()))
+ sql = "ALTER DATABASE %s SET search_path=%s;" % (dbname, search_path)
+ cursor.execute(sql)
|
{"golden_diff": "diff --git a/geotrek/common/utils/postgresql.py b/geotrek/common/utils/postgresql.py\n--- a/geotrek/common/utils/postgresql.py\n+++ b/geotrek/common/utils/postgresql.py\n@@ -139,3 +139,12 @@\n logger.info(\"Moved %s to schema %s\" % (table_name, schema_name))\n except Exception:\n logger.debug(\"Table %s already in schema %s\" % (table_name, schema_name))\n+\n+ # For Django, search_path is set in connection options.\n+ # But when accessing the database using QGis or ETL, search_path must be\n+ # set database level (for all users, and for this database only).\n+ if app_label == 'common':\n+ dbname = settings.DATABASES['default']['NAME']\n+ search_path = 'public,%s' % ','.join(set(settings.DATABASE_SCHEMAS.values()))\n+ sql = \"ALTER DATABASE %s SET search_path=%s;\" % (dbname, search_path)\n+ cursor.execute(sql)\n", "issue": "SEARCH_PATH for Geotrek DB user\nSince Geotrek 0.28, tables and functions have be moved to different schemas, which is a very good point (https://github.com/makinacorpus/Geotrek/releases/tag/v0.28.0).\n\nSchemas are not mentionned in triggers which is OK too, as Django is doing it in his connexions so it is not a problem for GEOTREK applications.\n\nIt gets a problem when you try to edit or insert a data from an external tool (QGIS, Talend...). \nYou have to change the db_user search_path so that he can find tables and functions not only in public schemas.\n\nIt could be interesting to do it during GEOTREK installation for the Geotrek DB user mentionned in settings : \n\nALTER USER $geotrek_db_user SET \nsearch_path=public,django,geotrek,gestion,rando,zonage,foncier,tourisme; \n\nOf course if you are using another user to edit datas in external tools, you will have to do it manually the first time. \n\n", "before_files": [{"content": "import re\nimport os\nimport logging\nimport traceback\nfrom functools import wraps\n\nfrom django.db import connection, models\nfrom django.conf import settings\nfrom django.db.models import get_app, get_models\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef debug_pg_notices(f):\n\n @wraps(f)\n def wrapped(*args, **kwargs):\n before = len(connection.connection.notices) if connection.connection else 0\n try:\n r = f(*args, **kwargs)\n finally:\n # Show triggers output\n allnotices = []\n current = ''\n if connection.connection:\n notices = []\n for notice in connection.connection.notices[before:]:\n try:\n notice, context = notice.split('CONTEXT:', 1)\n context = re.sub(\"\\s+\", \" \", context)\n except ValueError:\n context = ''\n notices.append((context, notice))\n if context != current:\n allnotices.append(notices)\n notices = []\n current = context\n allnotices.append(notices)\n current = ''\n for notices in allnotices:\n for context, notice in notices:\n if context != current:\n if context != '':\n logger.debug('Context %s...:' % context.strip()[:80])\n current = context\n notice = notice.replace('NOTICE: ', '')\n prefix = '' if context == '' else ' '\n logger.debug('%s%s' % (prefix, notice.strip()))\n return r\n\n return wrapped\n\n\ndef load_sql_files(app_label):\n \"\"\"\n Look for SQL files in Django app, and load them into database.\n We remove RAISE NOTICE instructions from SQL outside unit testing\n since they lead to interpolation errors of '%' character in python.\n \"\"\"\n app_dir = os.path.dirname(models.get_app(app_label).__file__)\n sql_dir = os.path.normpath(os.path.join(app_dir, 'sql'))\n if not os.path.exists(sql_dir):\n logger.debug(\"No SQL folder for %s\" % app_label)\n return\n\n r = 
re.compile(r'^.*\\.sql$')\n sql_files = [os.path.join(sql_dir, f)\n for f in os.listdir(sql_dir)\n if r.match(f) is not None]\n sql_files.sort()\n\n if len(sql_files) == 0:\n logger.warning(\"Empty folder %s\" % sql_dir)\n\n cursor = connection.cursor()\n for sql_file in sql_files:\n try:\n logger.info(\"Loading initial SQL data from '%s'\" % sql_file)\n f = open(sql_file)\n sql = f.read()\n f.close()\n if not settings.TEST:\n # Remove RAISE NOTICE (/!\\ only one-liners)\n sql = re.sub(r\"\\n.*RAISE NOTICE.*\\n\", \"\\n\", sql)\n # TODO: this is the ugliest driver hack ever\n sql = sql.replace('%', '%%')\n\n # Replace curly braces with settings values\n pattern = re.compile(r'{{\\s*(.*)\\s*}}')\n for m in pattern.finditer(sql):\n value = getattr(settings, m.group(1))\n sql = sql.replace(m.group(0), unicode(value))\n cursor.execute(sql)\n except Exception as e:\n logger.critical(\"Failed to install custom SQL file '%s': %s\\n\" %\n (sql_file, e))\n traceback.print_exc()\n raise\n\n\ndef move_models_to_schemas(app_label):\n \"\"\"\n Move models tables to PostgreSQL schemas.\n\n Views, functions and triggers will be moved in Geotrek app SQL files.\n \"\"\"\n app = get_app(app_label)\n default_schema = settings.DATABASE_SCHEMAS.get('default')\n app_schema = settings.DATABASE_SCHEMAS.get(app_label, default_schema)\n\n table_schemas = {}\n for model in get_models(app):\n model_name = model._meta.module_name\n table_name = model._meta.db_table\n model_schema = settings.DATABASE_SCHEMAS.get(model_name, app_schema)\n table_schemas.setdefault(model_schema, []).append(table_name)\n\n for m2m_field in model._meta.many_to_many:\n table_name = m2m_field.db_table\n if table_name:\n table_schemas[model_schema].append(table_name)\n\n cursor = connection.cursor()\n\n for schema_name in table_schemas.keys():\n try:\n sql = \"CREATE SCHEMA %s;\" % model_schema\n cursor.execute(sql)\n logger.info(\"Created schema %s\" % model_schema)\n except Exception:\n logger.debug(\"Schema %s already exists.\" % model_schema)\n\n for schema_name, tables in table_schemas.items():\n for table_name in tables:\n try:\n sql = \"ALTER TABLE %s SET SCHEMA %s;\" % (table_name, schema_name)\n cursor.execute(sql)\n logger.info(\"Moved %s to schema %s\" % (table_name, schema_name))\n except Exception:\n logger.debug(\"Table %s already in schema %s\" % (table_name, schema_name))\n", "path": "geotrek/common/utils/postgresql.py"}]}
| 2,172 | 229 |
gh_patches_debug_438
|
rasdani/github-patches
|
git_diff
|
OpenMined__PySyft-155
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set up CI for automated testing and style checks
Now that our codebase is growing (hooray!), we should set up CI for automated testing and style checks (PEP8, PEP257).
Choices include [CircleCI](https://circleci.com) and [TravisCI](https://travis-ci.org). These can be integrated into our repo such that every pull request will be checked before review.
</issue>
<code>
[start of setup.py]
1 import os
2 from setuptools import setup,find_packages
3
4 # Utility function to read the README file.
5 # Used for the long_description. It's nice, because now 1) we have a top level
6 # README file and 2) it's easier to type in the README file than to put a raw
7 # string in below ...
8 def read(fname):
9 return open(os.path.join(os.path.dirname(__file__), fname)).read()
10
11 requirements = read('requirements.txt').split()
12
13 setup(
14 name = "syft",
15 version = "0.1.0",
16 author = "Amber Trask",
17 author_email = "[email protected]",
18 description = ("A library for Homomorphically Encrypted Deep Learning Algorithms"),
19 license = "Apache-2.0",
20 keywords = "deep learning machine artificial intelligence homomorphic encryption",
21 packages=find_packages(exclude=['notebooks', 'test*','dist']),
22 include_package_data=True,
23 long_description=read('README.md'),
24 url='github.com/OpenMined/Syft',
25 classifiers=[
26 "Development Status :: 1 - Alpha",
27 ],
28 scripts=['bin/syft_cmd'],
29 install_requires=requirements,
30 setup_requires=['pytest-runner'],
31 tests_require=['pytest']
32 )
33
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,5 +28,5 @@
scripts=['bin/syft_cmd'],
install_requires=requirements,
setup_requires=['pytest-runner'],
- tests_require=['pytest']
+ tests_require=['pytest', 'pytest-flake8']
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,5 +28,5 @@\n scripts=['bin/syft_cmd'],\n install_requires=requirements,\n setup_requires=['pytest-runner'],\n- tests_require=['pytest']\n+ tests_require=['pytest', 'pytest-flake8']\n )\n", "issue": "Set up CI for automated testing and style checks\nNow that our codebase is growing (hooray!), we should set up CI for automated testing and style checks (PEP8, PEP257). \r\n\r\nChoices include [CircleCI](https://circleci.com) and [TravisCI](https://travis-ci.org). These can be integrated into our repo such that every pull request will be checked before review. \n", "before_files": [{"content": "import os\nfrom setuptools import setup,find_packages\n\n# Utility function to read the README file.\n# Used for the long_description. It's nice, because now 1) we have a top level\n# README file and 2) it's easier to type in the README file than to put a raw\n# string in below ...\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\nrequirements = read('requirements.txt').split()\n\nsetup(\n name = \"syft\",\n version = \"0.1.0\",\n author = \"Amber Trask\",\n author_email = \"[email protected]\",\n description = (\"A library for Homomorphically Encrypted Deep Learning Algorithms\"),\n license = \"Apache-2.0\",\n keywords = \"deep learning machine artificial intelligence homomorphic encryption\",\n packages=find_packages(exclude=['notebooks', 'test*','dist']),\n include_package_data=True,\n long_description=read('README.md'),\n url='github.com/OpenMined/Syft',\n classifiers=[\n \"Development Status :: 1 - Alpha\",\n ],\n scripts=['bin/syft_cmd'],\n install_requires=requirements,\n setup_requires=['pytest-runner'],\n tests_require=['pytest']\n)\n", "path": "setup.py"}]}
| 943 | 75 |
gh_patches_debug_6021
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-6334
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'json' selector type not documented
https://docs.scrapy.org/en/latest/topics/selectors.html
`type defines the selector type, it can be "html", "xml" or None (default).`
But my tests are now failing because it returns the `json` type for actual JSON docs.
</issue>
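
A minimal sketch of the behaviour in question, assuming a parsel/Scrapy version that accepts the `"json"` selector type and ships JMESPath support; the sample document is made up:

```python
from scrapy.selector import Selector

sel = Selector(text='{"title": "Some product", "price": 42}', type="json")
print(sel.type)                     # -> "json"
print(sel.jmespath("title").get())  # -> "Some product"
```

Whether `type=None` auto-detects JSON depends on the parsel version in use, which is exactly the behaviour change the issue reports.
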
<code>
[start of scrapy/selector/unified.py]
1 """
2 XPath selectors based on lxml
3 """
4
5 from typing import Any, Optional, Type, Union
6
7 from parsel import Selector as _ParselSelector
8
9 from scrapy.http import HtmlResponse, TextResponse, XmlResponse
10 from scrapy.utils.python import to_bytes
11 from scrapy.utils.response import get_base_url
12 from scrapy.utils.trackref import object_ref
13
14 __all__ = ["Selector", "SelectorList"]
15
16 _NOT_SET = object()
17
18
19 def _st(response: Optional[TextResponse], st: Optional[str]) -> str:
20 if st is None:
21 return "xml" if isinstance(response, XmlResponse) else "html"
22 return st
23
24
25 def _response_from_text(text: Union[str, bytes], st: Optional[str]) -> TextResponse:
26 rt: Type[TextResponse] = XmlResponse if st == "xml" else HtmlResponse
27 return rt(url="about:blank", encoding="utf-8", body=to_bytes(text, "utf-8"))
28
29
30 class SelectorList(_ParselSelector.selectorlist_cls, object_ref):
31 """
32 The :class:`SelectorList` class is a subclass of the builtin ``list``
33 class, which provides a few additional methods.
34 """
35
36
37 class Selector(_ParselSelector, object_ref):
38 """
39 An instance of :class:`Selector` is a wrapper over response to select
40 certain parts of its content.
41
42 ``response`` is an :class:`~scrapy.http.HtmlResponse` or an
43 :class:`~scrapy.http.XmlResponse` object that will be used for selecting
44 and extracting data.
45
46 ``text`` is a unicode string or utf-8 encoded text for cases when a
47 ``response`` isn't available. Using ``text`` and ``response`` together is
48 undefined behavior.
49
50 ``type`` defines the selector type, it can be ``"html"``, ``"xml"``
51 or ``None`` (default).
52
53 If ``type`` is ``None``, the selector automatically chooses the best type
54 based on ``response`` type (see below), or defaults to ``"html"`` in case it
55 is used together with ``text``.
56
57 If ``type`` is ``None`` and a ``response`` is passed, the selector type is
58 inferred from the response type as follows:
59
60 * ``"html"`` for :class:`~scrapy.http.HtmlResponse` type
61 * ``"xml"`` for :class:`~scrapy.http.XmlResponse` type
62 * ``"html"`` for anything else
63
64 Otherwise, if ``type`` is set, the selector type will be forced and no
65 detection will occur.
66 """
67
68 __slots__ = ["response"]
69 selectorlist_cls = SelectorList
70
71 def __init__(
72 self,
73 response: Optional[TextResponse] = None,
74 text: Optional[str] = None,
75 type: Optional[str] = None,
76 root: Optional[Any] = _NOT_SET,
77 **kwargs: Any,
78 ):
79 if response is not None and text is not None:
80 raise ValueError(
81 f"{self.__class__.__name__}.__init__() received "
82 "both response and text"
83 )
84
85 st = _st(response, type)
86
87 if text is not None:
88 response = _response_from_text(text, st)
89
90 if response is not None:
91 text = response.text
92 kwargs.setdefault("base_url", get_base_url(response))
93
94 self.response = response
95
96 if root is not _NOT_SET:
97 kwargs["root"] = root
98
99 super().__init__(text=text, type=st, **kwargs)
100
[end of scrapy/selector/unified.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/selector/unified.py b/scrapy/selector/unified.py
--- a/scrapy/selector/unified.py
+++ b/scrapy/selector/unified.py
@@ -47,7 +47,7 @@
``response`` isn't available. Using ``text`` and ``response`` together is
undefined behavior.
- ``type`` defines the selector type, it can be ``"html"``, ``"xml"``
+ ``type`` defines the selector type, it can be ``"html"``, ``"xml"``, ``"json"``
or ``None`` (default).
If ``type`` is ``None``, the selector automatically chooses the best type
|
{"golden_diff": "diff --git a/scrapy/selector/unified.py b/scrapy/selector/unified.py\n--- a/scrapy/selector/unified.py\n+++ b/scrapy/selector/unified.py\n@@ -47,7 +47,7 @@\n ``response`` isn't available. Using ``text`` and ``response`` together is\n undefined behavior.\n \n- ``type`` defines the selector type, it can be ``\"html\"``, ``\"xml\"``\n+ ``type`` defines the selector type, it can be ``\"html\"``, ``\"xml\"``, ``\"json\"``\n or ``None`` (default).\n \n If ``type`` is ``None``, the selector automatically chooses the best type\n", "issue": "'json' selector type not documented\nhttps://docs.scrapy.org/en/latest/topics/selectors.html\r\n\r\n`type defines the selector type, it can be \"html\", \"xml\" or None (default).`\r\n\r\nBut my tests are now failing because it's returning `json` type for actual json docs now.\n", "before_files": [{"content": "\"\"\"\nXPath selectors based on lxml\n\"\"\"\n\nfrom typing import Any, Optional, Type, Union\n\nfrom parsel import Selector as _ParselSelector\n\nfrom scrapy.http import HtmlResponse, TextResponse, XmlResponse\nfrom scrapy.utils.python import to_bytes\nfrom scrapy.utils.response import get_base_url\nfrom scrapy.utils.trackref import object_ref\n\n__all__ = [\"Selector\", \"SelectorList\"]\n\n_NOT_SET = object()\n\n\ndef _st(response: Optional[TextResponse], st: Optional[str]) -> str:\n if st is None:\n return \"xml\" if isinstance(response, XmlResponse) else \"html\"\n return st\n\n\ndef _response_from_text(text: Union[str, bytes], st: Optional[str]) -> TextResponse:\n rt: Type[TextResponse] = XmlResponse if st == \"xml\" else HtmlResponse\n return rt(url=\"about:blank\", encoding=\"utf-8\", body=to_bytes(text, \"utf-8\"))\n\n\nclass SelectorList(_ParselSelector.selectorlist_cls, object_ref):\n \"\"\"\n The :class:`SelectorList` class is a subclass of the builtin ``list``\n class, which provides a few additional methods.\n \"\"\"\n\n\nclass Selector(_ParselSelector, object_ref):\n \"\"\"\n An instance of :class:`Selector` is a wrapper over response to select\n certain parts of its content.\n\n ``response`` is an :class:`~scrapy.http.HtmlResponse` or an\n :class:`~scrapy.http.XmlResponse` object that will be used for selecting\n and extracting data.\n\n ``text`` is a unicode string or utf-8 encoded text for cases when a\n ``response`` isn't available. 
Using ``text`` and ``response`` together is\n undefined behavior.\n\n ``type`` defines the selector type, it can be ``\"html\"``, ``\"xml\"``\n or ``None`` (default).\n\n If ``type`` is ``None``, the selector automatically chooses the best type\n based on ``response`` type (see below), or defaults to ``\"html\"`` in case it\n is used together with ``text``.\n\n If ``type`` is ``None`` and a ``response`` is passed, the selector type is\n inferred from the response type as follows:\n\n * ``\"html\"`` for :class:`~scrapy.http.HtmlResponse` type\n * ``\"xml\"`` for :class:`~scrapy.http.XmlResponse` type\n * ``\"html\"`` for anything else\n\n Otherwise, if ``type`` is set, the selector type will be forced and no\n detection will occur.\n \"\"\"\n\n __slots__ = [\"response\"]\n selectorlist_cls = SelectorList\n\n def __init__(\n self,\n response: Optional[TextResponse] = None,\n text: Optional[str] = None,\n type: Optional[str] = None,\n root: Optional[Any] = _NOT_SET,\n **kwargs: Any,\n ):\n if response is not None and text is not None:\n raise ValueError(\n f\"{self.__class__.__name__}.__init__() received \"\n \"both response and text\"\n )\n\n st = _st(response, type)\n\n if text is not None:\n response = _response_from_text(text, st)\n\n if response is not None:\n text = response.text\n kwargs.setdefault(\"base_url\", get_base_url(response))\n\n self.response = response\n\n if root is not _NOT_SET:\n kwargs[\"root\"] = root\n\n super().__init__(text=text, type=st, **kwargs)\n", "path": "scrapy/selector/unified.py"}]}
| 1,574 | 153 |
gh_patches_debug_9764
|
rasdani/github-patches
|
git_diff
|
mampfes__hacs_waste_collection_schedule-895
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fixing abki_de recycling
fix for #891
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py]
1 import requests
2 from waste_collection_schedule import Collection # type: ignore[attr-defined]
3 from waste_collection_schedule.service.ICS import ICS
4
5 from datetime import datetime
6 import logging
7
8 TITLE = "Abfallwirtschaftsbetrieb Kiel (ABK)"
9 DESCRIPTION = "Source for Abfallwirtschaftsbetrieb Kiel (ABK)."
10 URL = "https://abki.de/"
11 TEST_CASES = {
12 "auguste-viktoria-straße, 14": {"street": "auguste-viktoria-straße", "number": 14},
13 "Achterwehrer Straße, 1 A": {"street": "Achterwehrer Straße", "number": "1 a"},
14 "Boltenhagener Straße, 4-8": {"street": "Boltenhagener Straße", "number": "4-8"},
15 }
16
17
18 ICON_MAP = {
19 "Restabfall": "mdi:trash-can",
20 "Glass": "mdi:bottle-soda",
21 "Bioabfall": "mdi:leaf",
22 "Papier": "mdi:package-variant",
23 "Gelbe": "mdi:recycle",
24 }
25
26
27 ICAL_URL = "https://abki.de/abki-services/abki-leerungen-ical"
28 _LOGGER = logging.getLogger(__name__)
29
30
31 class Source:
32 def __init__(self, street: str, number: str | int):
33 self._street: str = street
34 self._number: str = str(number)
35 self._ics = ICS()
36
37 def fetch(self):
38 now = datetime.now()
39 session = requests.Session()
40
41 # get street id
42 params = f'filter[logic]=and&filter[filters][0][value]={self._street}&filter[filters][0][field]=Strasse&filter[filters][0][operator]=startswith&filter[filters][0][ignoreCase]=true'
43 r = session.get(
44 "https://abki.de/abki-services/strassennamen", params=params) # , params=params)
45 r.raise_for_status()
46
47 streets = r.json()
48 if len(streets) > 1:
49 _LOGGER.warning(
50 "Multiple streets found please be more specific, using first one: "+streets[0]["Strasse"])
51 if len(streets) < 1:
52 raise ValueError("No street found", self._street)
53
54 street_id = streets[0]["IDSTREET"]
55
56 # get number id
57 r = session.get("https://abki.de/abki-services/streetnumber",
58 params={"IDSTREET": street_id})
59 r.raise_for_status()
60 numbers = r.json()
61 number_id, standort_id = None, None
62 for number in numbers:
63 if number["NUMBER"].lower().replace(" ", "").replace("-", "") == self._number.lower().replace(" ", "").replace("-", ""):
64 number_id = number["id"]
65 standort_id = number["IDSTANDORT"]
66 break
67
68 if number_id is None:
69 raise ValueError("No number found", self._number)
70
71 # get ics file link
72 r = session.get("https://abki.de/abki-services/leerungen-data", params={
73 "Zeitraum": now.year,
74 "Strasse_input": self._street,
75 "Strasse": street_id,
76 "IDSTANDORT_input": 2,
77 "IDSTANDORT": standort_id,
78 "Hausnummernwahl": number_id
79 })
80 r.raise_for_status()
81 request_data = r.json()["dataFile"]
82
83 # get ICS file
84 r = session.get(ICAL_URL, params={"data": request_data})
85
86 dates = self._ics.convert(r.text)
87
88 # if december, also try to get next year
89 if now.month == 12:
90 try:
91 r = session.get("https://abki.de/abki-services/leerungen-data", params={
92 "Zeitraum": now.year+1,
93 "Strasse_input": "Aarhusstraße",
94 "Strasse": street_id,
95 "IDSTANDORT_input": 2,
96 "IDSTANDORT": standort_id,
97 "Hausnummernwahl": number_id
98 })
99 r.raise_for_status()
100 request_data = r.json()["dataFile"]
101 r = session.get(ICAL_URL, params={"data": request_data})
102 dates += self._ics.convert(r.text)
103 except:
104 pass
105
106 entries = []
107 for d in dates:
108 entries.append(Collection(
109 d[0], d[1], ICON_MAP.get(d[1].split(" ")[0])))
110
111 return entries
112
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py
@@ -90,7 +90,7 @@
try:
r = session.get("https://abki.de/abki-services/leerungen-data", params={
"Zeitraum": now.year+1,
- "Strasse_input": "Aarhusstraße",
+ "Strasse_input": self._street,
"Strasse": street_id,
"IDSTANDORT_input": 2,
"IDSTANDORT": standort_id,
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py\n@@ -90,7 +90,7 @@\n try:\n r = session.get(\"https://abki.de/abki-services/leerungen-data\", params={\n \"Zeitraum\": now.year+1,\n- \"Strasse_input\": \"Aarhusstra\u00dfe\",\n+ \"Strasse_input\": self._street,\n \"Strasse\": street_id,\n \"IDSTANDORT_input\": 2,\n \"IDSTANDORT\": standort_id,\n", "issue": "fixing abki_de recycling\nfix for #891\n", "before_files": [{"content": "import requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nfrom datetime import datetime\nimport logging\n\nTITLE = \"Abfallwirtschaftsbetrieb Kiel (ABK)\"\nDESCRIPTION = \"Source for Abfallwirtschaftsbetrieb Kiel (ABK).\"\nURL = \"https://abki.de/\"\nTEST_CASES = {\n \"auguste-viktoria-stra\u00dfe, 14\": {\"street\": \"auguste-viktoria-stra\u00dfe\", \"number\": 14},\n \"Achterwehrer Stra\u00dfe, 1 A\": {\"street\": \"Achterwehrer Stra\u00dfe\", \"number\": \"1 a\"},\n \"Boltenhagener Stra\u00dfe, 4-8\": {\"street\": \"Boltenhagener Stra\u00dfe\", \"number\": \"4-8\"},\n}\n\n\nICON_MAP = {\n \"Restabfall\": \"mdi:trash-can\",\n \"Glass\": \"mdi:bottle-soda\",\n \"Bioabfall\": \"mdi:leaf\",\n \"Papier\": \"mdi:package-variant\",\n \"Gelbe\": \"mdi:recycle\",\n}\n\n\nICAL_URL = \"https://abki.de/abki-services/abki-leerungen-ical\"\n_LOGGER = logging.getLogger(__name__)\n\n\nclass Source:\n def __init__(self, street: str, number: str | int):\n self._street: str = street\n self._number: str = str(number)\n self._ics = ICS()\n\n def fetch(self):\n now = datetime.now()\n session = requests.Session()\n\n # get street id\n params = f'filter[logic]=and&filter[filters][0][value]={self._street}&filter[filters][0][field]=Strasse&filter[filters][0][operator]=startswith&filter[filters][0][ignoreCase]=true'\n r = session.get(\n \"https://abki.de/abki-services/strassennamen\", params=params) # , params=params)\n r.raise_for_status()\n\n streets = r.json()\n if len(streets) > 1:\n _LOGGER.warning(\n \"Multiple streets found please be more specific, using first one: \"+streets[0][\"Strasse\"])\n if len(streets) < 1:\n raise ValueError(\"No street found\", self._street)\n\n street_id = streets[0][\"IDSTREET\"]\n\n # get number id\n r = session.get(\"https://abki.de/abki-services/streetnumber\",\n params={\"IDSTREET\": street_id})\n r.raise_for_status()\n numbers = r.json()\n number_id, standort_id = None, None\n for number in numbers:\n if number[\"NUMBER\"].lower().replace(\" \", \"\").replace(\"-\", \"\") == self._number.lower().replace(\" \", \"\").replace(\"-\", \"\"):\n number_id = number[\"id\"]\n standort_id = number[\"IDSTANDORT\"]\n break\n \n if number_id is None:\n raise ValueError(\"No number found\", self._number)\n\n # get ics file link\n r = session.get(\"https://abki.de/abki-services/leerungen-data\", params={\n \"Zeitraum\": now.year,\n \"Strasse_input\": self._street,\n \"Strasse\": street_id,\n \"IDSTANDORT_input\": 2,\n \"IDSTANDORT\": standort_id,\n \"Hausnummernwahl\": number_id\n })\n r.raise_for_status()\n request_data = r.json()[\"dataFile\"]\n\n # get ICS file\n r = session.get(ICAL_URL, params={\"data\": request_data})\n\n 
dates = self._ics.convert(r.text)\n\n # if december, also try to get next year\n if now.month == 12:\n try:\n r = session.get(\"https://abki.de/abki-services/leerungen-data\", params={\n \"Zeitraum\": now.year+1,\n \"Strasse_input\": \"Aarhusstra\u00dfe\",\n \"Strasse\": street_id,\n \"IDSTANDORT_input\": 2,\n \"IDSTANDORT\": standort_id,\n \"Hausnummernwahl\": number_id\n })\n r.raise_for_status()\n request_data = r.json()[\"dataFile\"]\n r = session.get(ICAL_URL, params={\"data\": request_data})\n dates += self._ics.convert(r.text)\n except:\n pass\n\n entries = []\n for d in dates:\n entries.append(Collection(\n d[0], d[1], ICON_MAP.get(d[1].split(\" \")[0])))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/abki_de.py"}]}
| 1,826 | 178 |
gh_patches_debug_53281
|
rasdani/github-patches
|
git_diff
|
holoviz__panel-645
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Port not being released after stopping threaded holoviz panel app server
Closing a threaded panel app holds on to the port it started on. This is different behavior than closing an app initialized without threading.
```
usgs_logo = pn.panel('../assets/usgs_logo.png', height=130)
column = pn.Column(usgs_logo)
app = column.show(port=8889)
app.stop()
```
Port 8889 is released.
```
app = row.show(port=8889, threaded=True)
app.stop()
```
Port 8889 is not released.
</issue>
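
One way to observe the difference is to try rebinding the port after calling `stop()`; this check is independent of Panel and only assumes the port number used above:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    # Binding succeeds only if nothing is still listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(8889))  # True once the server has really released the port
```

While the server is still listening, the bind fails with `EADDRINUSE` and the function returns False; once the port is properly released it returns True.
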
<code>
[start of panel/io/server.py]
1 """
2 Utilities for creating bokeh Server instances.
3 """
4 from __future__ import absolute_import, division, unicode_literals
5
6 import signal
7 import threading
8
9 from functools import partial
10
11 from bokeh.server.server import Server
12
13 from .state import state
14
15
16 #---------------------------------------------------------------------
17 # Private API
18 #---------------------------------------------------------------------
19
20 def _origin_url(url):
21 if url.startswith("http"):
22 url = url.split("//")[1]
23 return url
24
25
26 def _server_url(url, port):
27 if url.startswith("http"):
28 return '%s:%d%s' % (url.rsplit(':', 1)[0], port, "/")
29 else:
30 return 'http://%s:%d%s' % (url.split(':')[0], port, "/")
31
32 #---------------------------------------------------------------------
33 # Public API
34 #---------------------------------------------------------------------
35
36 def get_server(panel, port=0, websocket_origin=None, loop=None,
37 show=False, start=False, **kwargs):
38 """
39 Returns a Server instance with this panel attached as the root
40 app.
41
42 Arguments
43 ---------
44 port: int (optional, default=0)
45 Allows specifying a specific port
46 websocket_origin: str or list(str) (optional)
47 A list of hosts that can connect to the websocket.
48
49 This is typically required when embedding a server app in
50 an external web site.
51
52 If None, "localhost" is used.
53 loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())
54 The tornado IOLoop to run the Server on
55 show : boolean (optional, default=False)
56 Whether to open the server in a new browser tab on start
57 start : boolean(optional, default=False)
58 Whether to start the Server
59 kwargs: dict
60 Additional keyword arguments to pass to Server instance
61
62 Returns
63 -------
64 server : bokeh.server.server.Server
65 Bokeh Server instance running this panel
66 """
67 from tornado.ioloop import IOLoop
68 opts = dict(kwargs)
69 if loop:
70 loop.make_current()
71 opts['io_loop'] = loop
72 else:
73 opts['io_loop'] = IOLoop.current()
74
75 if websocket_origin:
76 if not isinstance(websocket_origin, list):
77 websocket_origin = [websocket_origin]
78 opts['allow_websocket_origin'] = websocket_origin
79
80 server_id = kwargs.pop('server_id', None)
81 server = Server({'/': partial(panel._modify_doc, server_id)}, port=port, **opts)
82 if server_id:
83 state._servers[server_id] = (server, panel, [])
84
85 if show:
86 def show_callback():
87 server.show('/')
88 server.io_loop.add_callback(show_callback)
89
90 def sig_exit(*args, **kwargs):
91 server.io_loop.add_callback_from_signal(do_stop)
92
93 def do_stop(*args, **kwargs):
94 server.io_loop.stop()
95
96 try:
97 signal.signal(signal.SIGINT, sig_exit)
98 except ValueError:
99 pass # Can't use signal on a thread
100
101 if start:
102 server.start()
103 try:
104 server.io_loop.start()
105 except RuntimeError:
106 pass
107 return server
108
109
110 class StoppableThread(threading.Thread):
111 """Thread class with a stop() method."""
112
113 def __init__(self, io_loop=None, timeout=1000, **kwargs):
114 from tornado import ioloop
115 super(StoppableThread, self).__init__(**kwargs)
116 self._stop_event = threading.Event()
117 self.io_loop = io_loop
118 self._cb = ioloop.PeriodicCallback(self._check_stopped, timeout)
119 self._cb.start()
120
121 def _check_stopped(self):
122 if self.stopped:
123 self._cb.stop()
124 self.io_loop.stop()
125
126 def stop(self):
127 self._stop_event.set()
128
129 @property
130 def stopped(self):
131 return self._stop_event.is_set()
132
[end of panel/io/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/panel/io/server.py b/panel/io/server.py
--- a/panel/io/server.py
+++ b/panel/io/server.py
@@ -122,6 +122,15 @@
if self.stopped:
self._cb.stop()
self.io_loop.stop()
+
+ def run(self):
+ try:
+ if self._target:
+ bokeh_server = self._target(*self._args, **self._kwargs)
+ finally:
+ if isinstance(bokeh_server, Server):
+ bokeh_server.stop()
+ del self._target, self._args, self._kwargs
def stop(self):
self._stop_event.set()
|
{"golden_diff": "diff --git a/panel/io/server.py b/panel/io/server.py\n--- a/panel/io/server.py\n+++ b/panel/io/server.py\n@@ -122,6 +122,15 @@\n if self.stopped:\n self._cb.stop()\n self.io_loop.stop()\n+ \n+ def run(self):\n+ try:\n+ if self._target:\n+ bokeh_server = self._target(*self._args, **self._kwargs)\n+ finally:\n+ if isinstance(bokeh_server, Server):\n+ bokeh_server.stop()\n+ del self._target, self._args, self._kwargs\n \n def stop(self):\n self._stop_event.set()\n", "issue": "Port not being released after stopping threaded holoviz panel app server\nClosing a threaded panel app holds on to the port it started on. This is different behavior than closing an app initialized without threading.\r\n\r\n```\r\nusgs_logo = pn.panel('../assets/usgs_logo.png', height=130)\r\n\r\ncolumn = pn.Column(usgs_logo)\r\n\r\napp = column.show(port=8889)\r\n\r\napp.stop()\r\n```\r\n\r\nPort 8889 is released.\r\n\r\n```\r\napp = row.show(port=8889, threaded=True)\r\n\r\napp.stop()\r\n```\r\nPort 8889 is not released.\n", "before_files": [{"content": "\"\"\"\nUtilities for creating bokeh Server instances.\n\"\"\"\nfrom __future__ import absolute_import, division, unicode_literals\n\nimport signal\nimport threading\n\nfrom functools import partial\n\nfrom bokeh.server.server import Server\n\nfrom .state import state\n\n\n#---------------------------------------------------------------------\n# Private API\n#---------------------------------------------------------------------\n\ndef _origin_url(url):\n if url.startswith(\"http\"):\n url = url.split(\"//\")[1]\n return url\n\n\ndef _server_url(url, port):\n if url.startswith(\"http\"):\n return '%s:%d%s' % (url.rsplit(':', 1)[0], port, \"/\")\n else:\n return 'http://%s:%d%s' % (url.split(':')[0], port, \"/\")\n\n#---------------------------------------------------------------------\n# Public API\n#---------------------------------------------------------------------\n\ndef get_server(panel, port=0, websocket_origin=None, loop=None,\n show=False, start=False, **kwargs):\n \"\"\"\n Returns a Server instance with this panel attached as the root\n app.\n\n Arguments\n ---------\n port: int (optional, default=0)\n Allows specifying a specific port\n websocket_origin: str or list(str) (optional)\n A list of hosts that can connect to the websocket.\n\n This is typically required when embedding a server app in\n an external web site.\n\n If None, \"localhost\" is used.\n loop : tornado.ioloop.IOLoop (optional, default=IOLoop.current())\n The tornado IOLoop to run the Server on\n show : boolean (optional, default=False)\n Whether to open the server in a new browser tab on start\n start : boolean(optional, default=False)\n Whether to start the Server\n kwargs: dict\n Additional keyword arguments to pass to Server instance\n\n Returns\n -------\n server : bokeh.server.server.Server\n Bokeh Server instance running this panel\n \"\"\"\n from tornado.ioloop import IOLoop\n opts = dict(kwargs)\n if loop:\n loop.make_current()\n opts['io_loop'] = loop\n else:\n opts['io_loop'] = IOLoop.current()\n\n if websocket_origin:\n if not isinstance(websocket_origin, list):\n websocket_origin = [websocket_origin]\n opts['allow_websocket_origin'] = websocket_origin\n\n server_id = kwargs.pop('server_id', None)\n server = Server({'/': partial(panel._modify_doc, server_id)}, port=port, **opts)\n if server_id:\n state._servers[server_id] = (server, panel, [])\n\n if show:\n def show_callback():\n server.show('/')\n server.io_loop.add_callback(show_callback)\n\n def sig_exit(*args, 
**kwargs):\n server.io_loop.add_callback_from_signal(do_stop)\n\n def do_stop(*args, **kwargs):\n server.io_loop.stop()\n\n try:\n signal.signal(signal.SIGINT, sig_exit)\n except ValueError:\n pass # Can't use signal on a thread\n\n if start:\n server.start()\n try:\n server.io_loop.start()\n except RuntimeError:\n pass\n return server\n\n\nclass StoppableThread(threading.Thread):\n \"\"\"Thread class with a stop() method.\"\"\"\n\n def __init__(self, io_loop=None, timeout=1000, **kwargs):\n from tornado import ioloop\n super(StoppableThread, self).__init__(**kwargs)\n self._stop_event = threading.Event()\n self.io_loop = io_loop\n self._cb = ioloop.PeriodicCallback(self._check_stopped, timeout)\n self._cb.start()\n\n def _check_stopped(self):\n if self.stopped:\n self._cb.stop()\n self.io_loop.stop()\n\n def stop(self):\n self._stop_event.set()\n\n @property\n def stopped(self):\n return self._stop_event.is_set()\n", "path": "panel/io/server.py"}]}
| 1,777 | 152 |
gh_patches_debug_5160
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-5342
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of extensions/cpp_extension.py]
1 import importlib
2 import os
3 import time
4 from abc import abstractmethod
5 from pathlib import Path
6 from typing import List
7
8 from .base_extension import _Extension
9
10 __all__ = ["_CppExtension"]
11
12
13 class _CppExtension(_Extension):
14 def __init__(self, name: str, priority: int = 1):
15 super().__init__(name, support_aot=True, support_jit=True, priority=priority)
16
17 # we store the op as an attribute to avoid repeated building and loading
18 self.cached_op = None
19
20 # build-related variables
21 self.prebuilt_module_path = "colossalai._C"
22 self.prebuilt_import_path = f"{self.prebuilt_module_path}.{self.name}"
23 self.version_dependent_macros = ["-DVERSION_GE_1_1", "-DVERSION_GE_1_3", "-DVERSION_GE_1_5"]
24
25 def csrc_abs_path(self, path):
26 return os.path.join(self.relative_to_abs_path("csrc"), path)
27
28 def relative_to_abs_path(self, code_path: str) -> str:
29 """
30 This function takes in a path relative to the colossalai root directory and return the absolute path.
31 """
32
33 # get the current file path
34 # iteratively check the parent directory
35 # if the parent directory is "extensions", then the current file path is the root directory
36 # otherwise, the current file path is inside the root directory
37 current_file_path = Path(__file__)
38 while True:
39 if current_file_path.name == "extensions":
40 break
41 else:
42 current_file_path = current_file_path.parent
43 extension_module_path = current_file_path
44 code_abs_path = extension_module_path.joinpath(code_path)
45 return str(code_abs_path)
46
47 # functions must be overrided over
48 def strip_empty_entries(self, args):
49 """
50 Drop any empty strings from the list of compile and link flags
51 """
52 return [x for x in args if len(x) > 0]
53
54 def import_op(self):
55 """
56 This function will import the op module by its string name.
57 """
58 return importlib.import_module(self.prebuilt_import_path)
59
60 def build_aot(self) -> "CppExtension":
61 from torch.utils.cpp_extension import CppExtension
62
63 return CppExtension(
64 name=self.prebuilt_import_path,
65 sources=self.strip_empty_entries(self.sources_files()),
66 include_dirs=self.strip_empty_entries(self.include_dirs()),
67 extra_compile_args=self.strip_empty_entries(self.cxx_flags()),
68 )
69
70 def build_jit(self) -> None:
71 from torch.utils.cpp_extension import load
72
73 build_directory = _Extension.get_jit_extension_folder_path()
74 build_directory = Path(build_directory)
75 build_directory.mkdir(parents=True, exist_ok=True)
76
77 # check if the kernel has been built
78 compiled_before = False
79 kernel_file_path = build_directory.joinpath(f"{self.name}.o")
80 if kernel_file_path.exists():
81 compiled_before = True
82
83 # load the kernel
84 if compiled_before:
85 print(f"[extension] Loading the JIT-built {self.name} kernel during runtime now")
86 else:
87 print(f"[extension] Compiling the JIT {self.name} kernel during runtime now")
88
89 build_start = time.time()
90 op_kernel = load(
91 name=self.name,
92 sources=self.strip_empty_entries(self.sources_files()),
93 extra_include_paths=self.strip_empty_entries(self.include_dirs()),
94 extra_cflags=self.cxx_flags(),
95 extra_ldflags=[],
96 build_directory=str(build_directory),
97 )
98 build_duration = time.time() - build_start
99
100 if compiled_before:
101 print(f"[extension] Time taken to load {self.name} op: {build_duration} seconds")
102 else:
103 print(f"[extension] Time taken to compile {self.name} op: {build_duration} seconds")
104
105 return op_kernel
106
107 # functions must be overrided begin
108 @abstractmethod
109 def sources_files(self) -> List[str]:
110 """
111 This function should return a list of source files for extensions.
112 """
113
114 @abstractmethod
115 def include_dirs(self) -> List[str]:
116 """
117 This function should return a list of include files for extensions.
118 """
119
120 @abstractmethod
121 def cxx_flags(self) -> List[str]:
122 """
123 This function should return a list of cxx compilation flags for extensions.
124 """
125
126 def load(self):
127 try:
128 op_kernel = self.import_op()
129 except ImportError:
130 # if import error occurs, it means that the kernel is not pre-built
131 # so we build it jit
132 op_kernel = self.build_jit()
133
134 return op_kernel
135
[end of extensions/cpp_extension.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/extensions/cpp_extension.py b/extensions/cpp_extension.py
--- a/extensions/cpp_extension.py
+++ b/extensions/cpp_extension.py
@@ -126,7 +126,7 @@
def load(self):
try:
op_kernel = self.import_op()
- except ImportError:
+ except (ImportError, ModuleNotFoundError):
# if import error occurs, it means that the kernel is not pre-built
# so we build it jit
op_kernel = self.build_jit()
|
{"golden_diff": "diff --git a/extensions/cpp_extension.py b/extensions/cpp_extension.py\n--- a/extensions/cpp_extension.py\n+++ b/extensions/cpp_extension.py\n@@ -126,7 +126,7 @@\n def load(self):\n try:\n op_kernel = self.import_op()\n- except ImportError:\n+ except (ImportError, ModuleNotFoundError):\n # if import error occurs, it means that the kernel is not pre-built\n # so we build it jit\n op_kernel = self.build_jit()\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import importlib\nimport os\nimport time\nfrom abc import abstractmethod\nfrom pathlib import Path\nfrom typing import List\n\nfrom .base_extension import _Extension\n\n__all__ = [\"_CppExtension\"]\n\n\nclass _CppExtension(_Extension):\n def __init__(self, name: str, priority: int = 1):\n super().__init__(name, support_aot=True, support_jit=True, priority=priority)\n\n # we store the op as an attribute to avoid repeated building and loading\n self.cached_op = None\n\n # build-related variables\n self.prebuilt_module_path = \"colossalai._C\"\n self.prebuilt_import_path = f\"{self.prebuilt_module_path}.{self.name}\"\n self.version_dependent_macros = [\"-DVERSION_GE_1_1\", \"-DVERSION_GE_1_3\", \"-DVERSION_GE_1_5\"]\n\n def csrc_abs_path(self, path):\n return os.path.join(self.relative_to_abs_path(\"csrc\"), path)\n\n def relative_to_abs_path(self, code_path: str) -> str:\n \"\"\"\n This function takes in a path relative to the colossalai root directory and return the absolute path.\n \"\"\"\n\n # get the current file path\n # iteratively check the parent directory\n # if the parent directory is \"extensions\", then the current file path is the root directory\n # otherwise, the current file path is inside the root directory\n current_file_path = Path(__file__)\n while True:\n if current_file_path.name == \"extensions\":\n break\n else:\n current_file_path = current_file_path.parent\n extension_module_path = current_file_path\n code_abs_path = extension_module_path.joinpath(code_path)\n return str(code_abs_path)\n\n # functions must be overrided over\n def strip_empty_entries(self, args):\n \"\"\"\n Drop any empty strings from the list of compile and link flags\n \"\"\"\n return [x for x in args if len(x) > 0]\n\n def import_op(self):\n \"\"\"\n This function will import the op module by its string name.\n \"\"\"\n return importlib.import_module(self.prebuilt_import_path)\n\n def build_aot(self) -> \"CppExtension\":\n from torch.utils.cpp_extension import CppExtension\n\n return CppExtension(\n name=self.prebuilt_import_path,\n sources=self.strip_empty_entries(self.sources_files()),\n include_dirs=self.strip_empty_entries(self.include_dirs()),\n extra_compile_args=self.strip_empty_entries(self.cxx_flags()),\n )\n\n def build_jit(self) -> None:\n from torch.utils.cpp_extension import load\n\n build_directory = _Extension.get_jit_extension_folder_path()\n build_directory = Path(build_directory)\n build_directory.mkdir(parents=True, exist_ok=True)\n\n # check if the kernel has been built\n compiled_before = False\n kernel_file_path = build_directory.joinpath(f\"{self.name}.o\")\n if kernel_file_path.exists():\n compiled_before = True\n\n # load the kernel\n if compiled_before:\n print(f\"[extension] Loading the JIT-built {self.name} kernel during runtime now\")\n else:\n print(f\"[extension] Compiling the JIT {self.name} kernel during runtime now\")\n\n build_start = time.time()\n op_kernel = load(\n name=self.name,\n 
sources=self.strip_empty_entries(self.sources_files()),\n extra_include_paths=self.strip_empty_entries(self.include_dirs()),\n extra_cflags=self.cxx_flags(),\n extra_ldflags=[],\n build_directory=str(build_directory),\n )\n build_duration = time.time() - build_start\n\n if compiled_before:\n print(f\"[extension] Time taken to load {self.name} op: {build_duration} seconds\")\n else:\n print(f\"[extension] Time taken to compile {self.name} op: {build_duration} seconds\")\n\n return op_kernel\n\n # functions must be overrided begin\n @abstractmethod\n def sources_files(self) -> List[str]:\n \"\"\"\n This function should return a list of source files for extensions.\n \"\"\"\n\n @abstractmethod\n def include_dirs(self) -> List[str]:\n \"\"\"\n This function should return a list of include files for extensions.\n \"\"\"\n\n @abstractmethod\n def cxx_flags(self) -> List[str]:\n \"\"\"\n This function should return a list of cxx compilation flags for extensions.\n \"\"\"\n\n def load(self):\n try:\n op_kernel = self.import_op()\n except ImportError:\n # if import error occurs, it means that the kernel is not pre-built\n # so we build it jit\n op_kernel = self.build_jit()\n\n return op_kernel\n", "path": "extensions/cpp_extension.py"}]}
| 1,857 | 107 |
gh_patches_debug_29779
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-597
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "public view" on results pages for staff users
For staff users there should be an option on the results pages to switch between "public view" (without comments, like students would see the page) and "complete view" (the current view including all comments).
</issue>
<code>
[start of evap/results/views.py]
1 from django.http import HttpResponse
2 from django.core.exceptions import PermissionDenied
3 from django.shortcuts import get_object_or_404, render
4 from django.utils.translation import get_language
5 from django.contrib.auth.decorators import login_required
6
7 from evap.evaluation.auth import staff_required
8 from evap.evaluation.models import Semester, Degree
9 from evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult
10
11 from evap.results.exporters import ExcelExporter
12
13 from collections import OrderedDict, namedtuple
14
15
16 @login_required
17 def index(request):
18 semesters = Semester.get_all_with_published_courses()
19
20 return render(request, "results_index.html", dict(semesters=semesters))
21
22
23 @login_required
24 def semester_detail(request, semester_id):
25 semester = get_object_or_404(Semester, id=semester_id)
26 courses = list(semester.course_set.filter(state="published").prefetch_related("degrees"))
27
28 # annotate each course object with its grades
29 for course in courses:
30 course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)
31
32 CourseTuple = namedtuple('CourseTuple', ('courses', 'single_results'))
33
34 courses_by_degree = OrderedDict()
35 for degree in Degree.objects.all():
36 courses_by_degree[degree] = CourseTuple([], [])
37 for course in courses:
38 if course.is_single_result():
39 for degree in course.degrees.all():
40 section = calculate_results(course)[0]
41 result = section.results[0]
42 courses_by_degree[degree].single_results.append((course, result))
43 else:
44 for degree in course.degrees.all():
45 courses_by_degree[degree].courses.append(course)
46
47 template_data = dict(semester=semester, courses_by_degree=courses_by_degree, staff=request.user.is_staff)
48 return render(request, "results_semester_detail.html", template_data)
49
50
51 @staff_required
52 def semester_export(request, semester_id):
53 semester = get_object_or_404(Semester, id=semester_id)
54
55 filename = "Evaluation-%s-%s.xls" % (semester.name, get_language())
56
57 response = HttpResponse(content_type="application/vnd.ms-excel")
58 response["Content-Disposition"] = "attachment; filename=\"%s\"" % filename
59
60 ExcelExporter(semester).export(response, 'all' in request.GET)
61
62 return response
63
64
65 @login_required
66 def course_detail(request, semester_id, course_id):
67 semester = get_object_or_404(Semester, id=semester_id)
68 course = get_object_or_404(semester.course_set, id=course_id)
69
70 if not course.can_user_see_results(request.user):
71 raise PermissionDenied
72
73 sections = calculate_results(course)
74
75 for section in sections:
76 results = []
77 for result in section.results:
78 if isinstance(result, TextResult):
79 answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, answer)]
80 if answers:
81 results.append(TextResult(question=result.question, answers=answers))
82 else:
83 results.append(result)
84 section.results[:] = results
85
86 # filter empty sections and group by contributor
87 course_sections = []
88 contributor_sections = OrderedDict()
89 for section in sections:
90 if not section.results:
91 continue
92 if section.contributor is None:
93 course_sections.append(section)
94 else:
95 contributor_sections.setdefault(section.contributor, []).append(section)
96
97 # show a warning if course is still in evaluation (for staff preview)
98 evaluation_warning = course.state != 'published'
99
100 # results for a course might not be visible because there are not enough answers
101 # but it can still be "published" e.g. to show the comment results to contributors.
102 # users who can open the results page see a warning message in this case
103 sufficient_votes_warning = not course.can_publish_grades
104
105 show_grades = request.user.is_staff or course.can_publish_grades
106
107 course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)
108
109 template_data = dict(
110 course=course,
111 course_sections=course_sections,
112 contributor_sections=contributor_sections,
113 evaluation_warning=evaluation_warning,
114 sufficient_votes_warning=sufficient_votes_warning,
115 show_grades=show_grades,
116 staff=request.user.is_staff)
117 return render(request, "results_course_detail.html", template_data)
118
119 def user_can_see_text_answer(user, text_answer):
120 if user.is_staff:
121 return True
122 contributor = text_answer.contribution.contributor
123 if text_answer.is_private:
124 return contributor == user
125 if text_answer.is_published:
126 if contributor == user or contributor in user.represented_users.all():
127 return True
128 if text_answer.contribution.course.is_user_responsible_or_delegate(user):
129 return True
130
131 return False
132
[end of evap/results/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evap/results/views.py b/evap/results/views.py
--- a/evap/results/views.py
+++ b/evap/results/views.py
@@ -72,11 +72,14 @@
sections = calculate_results(course)
+ public_view = request.GET.get('public_view', 'false') # default: show own view
+ public_view = {'true': True, 'false': False}.get(public_view.lower()) # convert parameter to boolean
+
for section in sections:
results = []
for result in section.results:
if isinstance(result, TextResult):
- answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, answer)]
+ answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, answer, public_view)]
if answers:
results.append(TextResult(question=result.question, answers=answers))
else:
@@ -113,10 +116,14 @@
evaluation_warning=evaluation_warning,
sufficient_votes_warning=sufficient_votes_warning,
show_grades=show_grades,
- staff=request.user.is_staff)
+ staff=request.user.is_staff,
+ contributor=course.is_user_contributor_or_delegate(request.user),
+ public_view=public_view)
return render(request, "results_course_detail.html", template_data)
-def user_can_see_text_answer(user, text_answer):
+def user_can_see_text_answer(user, text_answer, public_view=False):
+ if public_view:
+ return False
if user.is_staff:
return True
contributor = text_answer.contribution.contributor
|
{"golden_diff": "diff --git a/evap/results/views.py b/evap/results/views.py\n--- a/evap/results/views.py\n+++ b/evap/results/views.py\n@@ -72,11 +72,14 @@\n \n sections = calculate_results(course)\n \n+ public_view = request.GET.get('public_view', 'false') # default: show own view\n+ public_view = {'true': True, 'false': False}.get(public_view.lower()) # convert parameter to boolean\n+\n for section in sections:\n results = []\n for result in section.results:\n if isinstance(result, TextResult):\n- answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, answer)]\n+ answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, answer, public_view)]\n if answers:\n results.append(TextResult(question=result.question, answers=answers))\n else:\n@@ -113,10 +116,14 @@\n evaluation_warning=evaluation_warning,\n sufficient_votes_warning=sufficient_votes_warning,\n show_grades=show_grades,\n- staff=request.user.is_staff)\n+ staff=request.user.is_staff,\n+ contributor=course.is_user_contributor_or_delegate(request.user),\n+ public_view=public_view)\n return render(request, \"results_course_detail.html\", template_data)\n \n-def user_can_see_text_answer(user, text_answer):\n+def user_can_see_text_answer(user, text_answer, public_view=False):\n+ if public_view:\n+ return False\n if user.is_staff:\n return True\n contributor = text_answer.contribution.contributor\n", "issue": "Add \"public view\" on results pages for staff users\nFor staff users there should be an option on the results pages to switch between \"public view\" (without comments, like students would see the page) and \"complete view\" (the current view including all comments).\n\n", "before_files": [{"content": "from django.http import HttpResponse\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, render\nfrom django.utils.translation import get_language\nfrom django.contrib.auth.decorators import login_required\n\nfrom evap.evaluation.auth import staff_required\nfrom evap.evaluation.models import Semester, Degree\nfrom evap.evaluation.tools import calculate_results, calculate_average_grades_and_deviation, TextResult\n\nfrom evap.results.exporters import ExcelExporter\n\nfrom collections import OrderedDict, namedtuple\n\n\n@login_required\ndef index(request):\n semesters = Semester.get_all_with_published_courses()\n\n return render(request, \"results_index.html\", dict(semesters=semesters))\n\n\n@login_required\ndef semester_detail(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n courses = list(semester.course_set.filter(state=\"published\").prefetch_related(\"degrees\"))\n\n # annotate each course object with its grades\n for course in courses:\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n\n CourseTuple = namedtuple('CourseTuple', ('courses', 'single_results'))\n\n courses_by_degree = OrderedDict()\n for degree in Degree.objects.all():\n courses_by_degree[degree] = CourseTuple([], [])\n for course in courses:\n if course.is_single_result():\n for degree in course.degrees.all():\n section = calculate_results(course)[0]\n result = section.results[0]\n courses_by_degree[degree].single_results.append((course, result))\n else:\n for degree in course.degrees.all():\n courses_by_degree[degree].courses.append(course)\n\n template_data = dict(semester=semester, courses_by_degree=courses_by_degree, staff=request.user.is_staff)\n return render(request, 
\"results_semester_detail.html\", template_data)\n\n\n@staff_required\ndef semester_export(request, semester_id):\n semester = get_object_or_404(Semester, id=semester_id)\n\n filename = \"Evaluation-%s-%s.xls\" % (semester.name, get_language())\n\n response = HttpResponse(content_type=\"application/vnd.ms-excel\")\n response[\"Content-Disposition\"] = \"attachment; filename=\\\"%s\\\"\" % filename\n\n ExcelExporter(semester).export(response, 'all' in request.GET)\n\n return response\n\n\n@login_required\ndef course_detail(request, semester_id, course_id):\n semester = get_object_or_404(Semester, id=semester_id)\n course = get_object_or_404(semester.course_set, id=course_id)\n\n if not course.can_user_see_results(request.user):\n raise PermissionDenied\n\n sections = calculate_results(course)\n\n for section in sections:\n results = []\n for result in section.results:\n if isinstance(result, TextResult):\n answers = [answer for answer in result.answers if user_can_see_text_answer(request.user, answer)]\n if answers:\n results.append(TextResult(question=result.question, answers=answers))\n else:\n results.append(result)\n section.results[:] = results\n\n # filter empty sections and group by contributor\n course_sections = []\n contributor_sections = OrderedDict()\n for section in sections:\n if not section.results:\n continue\n if section.contributor is None:\n course_sections.append(section)\n else:\n contributor_sections.setdefault(section.contributor, []).append(section)\n\n # show a warning if course is still in evaluation (for staff preview)\n evaluation_warning = course.state != 'published'\n\n # results for a course might not be visible because there are not enough answers\n # but it can still be \"published\" e.g. to show the comment results to contributors.\n # users who can open the results page see a warning message in this case\n sufficient_votes_warning = not course.can_publish_grades\n\n show_grades = request.user.is_staff or course.can_publish_grades\n\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n\n template_data = dict(\n course=course,\n course_sections=course_sections,\n contributor_sections=contributor_sections,\n evaluation_warning=evaluation_warning,\n sufficient_votes_warning=sufficient_votes_warning,\n show_grades=show_grades,\n staff=request.user.is_staff)\n return render(request, \"results_course_detail.html\", template_data)\n\ndef user_can_see_text_answer(user, text_answer):\n if user.is_staff:\n return True\n contributor = text_answer.contribution.contributor\n if text_answer.is_private:\n return contributor == user\n if text_answer.is_published:\n if contributor == user or contributor in user.represented_users.all():\n return True\n if text_answer.contribution.course.is_user_responsible_or_delegate(user):\n return True\n\n return False\n", "path": "evap/results/views.py"}]}
| 1,909 | 356 |
gh_patches_debug_11299
|
rasdani/github-patches
|
git_diff
|
huggingface__optimum-341
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MT5 Support
Big Thanks for the awesome Optimum library,
MT5 is a Massive Transformers And Supports Many Languages, but it has a slow inference time
for speedup MT5 inference time, We need some post-training methods, and Optimum will help more!
Please Add MT5 Support for Optimum! Thanks
</issue>
<code>
[start of optimum/onnxruntime/utils.py]
1 # Copyright 2021 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from enum import Enum
15
16 import torch
17 from transformers.onnx import OnnxConfig, OnnxSeq2SeqConfigWithPast
18 from transformers.utils import logging
19
20 import onnx
21 import onnxruntime as ort
22
23 from ..onnx import OnnxConfigWithLoss, OnnxSeq2SeqConfigWithPastAndLoss
24
25
26 logger = logging.get_logger(__name__)
27
28 ONNX_WEIGHTS_NAME = "model.onnx"
29 OPTIMIZED_ONNX_WEIGHTS_NAME = "optimized_model.onnx"
30 QUANTIZED_ONNX_WEIGHTS_NAME = "q8_model.onnx"
31
32 ONNX_ENCODER_NAME = "encoder_model.onnx"
33 ONNX_DECODER_NAME = "decoder_model.onnx"
34 ONNX_DECODER_WITH_PAST_NAME = "decoder_with_past_model.onnx"
35
36
37 def _is_gpu_available():
38 """
39 checks if a gpu is available.
40 """
41 available_providers = ort.get_available_providers()
42 if "CUDAExecutionProvider" in available_providers and torch.cuda.is_available():
43 return True
44 else:
45 return False
46
47
48 class ORTConfigManager:
49 """
50 A class that contains all the information needed by ONNX Runtime optimization for a given model type.
51
52 Attributes:
53 _conf (`Dict[str, tuple]`):
54 A dictionary mapping each supported model type to a tuple containing the number of attention heads
55 and the hidden size model config attribute names as well as the corresponding ONNX Runtime model type.
56 """
57
58 # Contribution note: Please add new models in alphabetical order
59 _conf = {
60 "albert": ("num_attention_heads", "hidden_size", "bert"),
61 "bart": ("encoder_attention_heads", "d_model", "bart"),
62 "bert": ("num_attention_heads", "hidden_size", "bert"),
63 "big_bird": ("num_attention_heads", "hidden_size", "bert"),
64 "camembert": ("num_attention_heads", "hidden_size", "bert"),
65 "codegen": ("n_head", "n_embd", "gpt2"),
66 "deberta": ("num_attention_heads", "hidden_size", "bert"),
67 "deberta-v2": ("num_attention_heads", "hidden_size", "bert"),
68 "distilbert": ("n_heads", "dim", "bert"),
69 "electra": ("num_attention_heads", "hidden_size", "bert"),
70 "gpt2": ("n_head", "n_embd", "gpt2"),
71 "gpt_neo": ("num_heads", "hidden_size", "gpt2"),
72 "marian": ("encoder_attention_heads", "d_model", "bart"),
73 "roberta": ("num_attention_heads", "hidden_size", "bert"),
74 "xlm-roberta": ("num_attention_heads", "hidden_size", "bert"),
75 }
76
77 @classmethod
78 def get_num_heads_name(cls, model_type: str) -> str:
79 num_heads = "num_attention_heads"
80 try:
81 num_heads = cls._conf[model_type][0]
82 except KeyError:
83 logger.warning(
84 f"{model_type} is not supported yet. Only {list(cls._conf.keys())} are supported. The default value to "
85 f"access the number of heads defined in the config is set to `{num_heads}`."
86 )
87 return num_heads
88
89 @classmethod
90 def get_hidden_size_name(cls, model_type: str) -> str:
91 hidden_size = "hidden_size"
92 try:
93 hidden_size = cls._conf[model_type][1]
94 except KeyError:
95 logger.warning(
96 f"{model_type} is not supported yet. Only {list(cls._conf.keys())} are supported. The default value to "
97 f"access the hidden size defined in the config is set to `{hidden_size}`."
98 )
99 return hidden_size
100
101 @classmethod
102 def get_model_ort_type(cls, model_type: str) -> str:
103 try:
104 model_type = cls._conf[model_type][2]
105 except KeyError:
106 logger.warning(f"{model_type} is not supported yet. Only {list(cls._conf.keys())} are supported.")
107 return model_type
108
109 @classmethod
110 def check_supported_model_or_raise(cls, model_type: str) -> bool:
111 if model_type not in cls._conf:
112 raise KeyError(
113 f"{model_type} model type is not supported yet. Only {list(cls._conf.keys())} are supported. "
114 f"If you want to support {model_type} please propose a PR or open up an issue."
115 )
116
117
118 def generate_identified_filename(filename, identifier):
119 return filename.parent.joinpath(filename.stem + identifier).with_suffix(filename.suffix)
120
121
122 def fix_atenops_to_gather(model_path):
123 # Fix broken ATenOp nodes back to Gather nodes.
124 model = onnx.load(model_path)
125 onnx.checker.check_model(model)
126
127 nodes = model.graph.node
128
129 for node in nodes:
130 if node.op_type in ["ATenOp", "ATen"]:
131 logger.info(f"----Start fixing node: {node.name}----")
132 op_num = node.name.split("_")[-1]
133 new_node = onnx.helper.make_node(
134 "Gather",
135 name="Gather_" + op_num,
136 inputs=[node.input[0], node.input[1]],
137 outputs=node.output,
138 )
139
140 model.graph.node.remove(node)
141 model.graph.node.insert(int(op_num), new_node)
142
143 onnx.checker.check_model(model)
144 onnx.save(model, model_path)
145
146
147 def wrap_onnx_config_for_loss(onnx_config: OnnxConfig) -> OnnxConfig:
148 if isinstance(onnx_config, OnnxSeq2SeqConfigWithPast):
149 return OnnxSeq2SeqConfigWithPastAndLoss(onnx_config)
150 else:
151 return OnnxConfigWithLoss(onnx_config)
152
153
154 def get_device_for_provider(provider: str) -> torch.device:
155 """
156 Gets the PyTorch device (CPU/CUDA) associated with an ONNX Runtime provider.
157 """
158 return torch.device("cuda") if provider == "CUDAExecutionProvider" else torch.device("cpu")
159
160
161 def get_provider_for_device(device: torch.device) -> str:
162 """
163 Gets the ONNX Runtime provider associated with the PyTorch device (CPU/CUDA).
164 """
165 return "CUDAExecutionProvider" if device.type.lower() == "cuda" else "CPUExecutionProvider"
166
167
168 class ORTQuantizableOperator(Enum):
169 # Common ops
170 Gather = "Gather"
171 Transpose = "Transpose"
172 EmbedLayerNormalizationQuant = "EmbedLayerNormalization"
173
174 # QLinearOps
175 Conv = "Conv"
176 MatMul = "MatMul"
177 Add = "Add"
178 Mul = "Mul"
179 Relu = "Relu"
180 Clip = "Clip"
181 LeakyRelu = "LeakyRelu"
182 Sigmoid = "Sigmoid"
183 MaxPool = "MaxPool"
184 GlobalAveragePool = "GlobalAveragePool"
185 Split = "Split"
186 Pad = "Pad"
187 Reshape = "Reshape"
188 Squeeze = "Squeeze"
189 Unsqueeze = "Unsqueeze"
190 Resize = "Resize"
191 AveragePool = "AveragePool"
192 Concat = "Concat"
193
[end of optimum/onnxruntime/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/optimum/onnxruntime/utils.py b/optimum/onnxruntime/utils.py
--- a/optimum/onnxruntime/utils.py
+++ b/optimum/onnxruntime/utils.py
@@ -69,6 +69,7 @@
"electra": ("num_attention_heads", "hidden_size", "bert"),
"gpt2": ("n_head", "n_embd", "gpt2"),
"gpt_neo": ("num_heads", "hidden_size", "gpt2"),
+ "mt5": ("num_heads", "d_model", "bart"),
"marian": ("encoder_attention_heads", "d_model", "bart"),
"roberta": ("num_attention_heads", "hidden_size", "bert"),
"xlm-roberta": ("num_attention_heads", "hidden_size", "bert"),
|
{"golden_diff": "diff --git a/optimum/onnxruntime/utils.py b/optimum/onnxruntime/utils.py\n--- a/optimum/onnxruntime/utils.py\n+++ b/optimum/onnxruntime/utils.py\n@@ -69,6 +69,7 @@\n \"electra\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"gpt2\": (\"n_head\", \"n_embd\", \"gpt2\"),\n \"gpt_neo\": (\"num_heads\", \"hidden_size\", \"gpt2\"),\n+ \"mt5\": (\"num_heads\", \"d_model\", \"bart\"),\n \"marian\": (\"encoder_attention_heads\", \"d_model\", \"bart\"),\n \"roberta\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"xlm-roberta\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n", "issue": "MT5 Support \nBig Thanks for the awesome Optimum library, \r\nMT5 is a Massive Transformers And Supports Many Languages, but it has a slow inference time\r\nfor speedup MT5 inference time, We need some post-training methods, and Optimum will help more!\r\n\r\nPlease Add MT5 Support for Optimum! Thanks\n", "before_files": [{"content": "# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom enum import Enum\n\nimport torch\nfrom transformers.onnx import OnnxConfig, OnnxSeq2SeqConfigWithPast\nfrom transformers.utils import logging\n\nimport onnx\nimport onnxruntime as ort\n\nfrom ..onnx import OnnxConfigWithLoss, OnnxSeq2SeqConfigWithPastAndLoss\n\n\nlogger = logging.get_logger(__name__)\n\nONNX_WEIGHTS_NAME = \"model.onnx\"\nOPTIMIZED_ONNX_WEIGHTS_NAME = \"optimized_model.onnx\"\nQUANTIZED_ONNX_WEIGHTS_NAME = \"q8_model.onnx\"\n\nONNX_ENCODER_NAME = \"encoder_model.onnx\"\nONNX_DECODER_NAME = \"decoder_model.onnx\"\nONNX_DECODER_WITH_PAST_NAME = \"decoder_with_past_model.onnx\"\n\n\ndef _is_gpu_available():\n \"\"\"\n checks if a gpu is available.\n \"\"\"\n available_providers = ort.get_available_providers()\n if \"CUDAExecutionProvider\" in available_providers and torch.cuda.is_available():\n return True\n else:\n return False\n\n\nclass ORTConfigManager:\n \"\"\"\n A class that contains all the information needed by ONNX Runtime optimization for a given model type.\n\n Attributes:\n _conf (`Dict[str, tuple]`):\n A dictionary mapping each supported model type to a tuple containing the number of attention heads\n and the hidden size model config attribute names as well as the corresponding ONNX Runtime model type.\n \"\"\"\n\n # Contribution note: Please add new models in alphabetical order\n _conf = {\n \"albert\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"bart\": (\"encoder_attention_heads\", \"d_model\", \"bart\"),\n \"bert\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"big_bird\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"camembert\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"codegen\": (\"n_head\", \"n_embd\", \"gpt2\"),\n \"deberta\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"deberta-v2\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"distilbert\": (\"n_heads\", \"dim\", \"bert\"),\n \"electra\": 
(\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"gpt2\": (\"n_head\", \"n_embd\", \"gpt2\"),\n \"gpt_neo\": (\"num_heads\", \"hidden_size\", \"gpt2\"),\n \"marian\": (\"encoder_attention_heads\", \"d_model\", \"bart\"),\n \"roberta\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n \"xlm-roberta\": (\"num_attention_heads\", \"hidden_size\", \"bert\"),\n }\n\n @classmethod\n def get_num_heads_name(cls, model_type: str) -> str:\n num_heads = \"num_attention_heads\"\n try:\n num_heads = cls._conf[model_type][0]\n except KeyError:\n logger.warning(\n f\"{model_type} is not supported yet. Only {list(cls._conf.keys())} are supported. The default value to \"\n f\"access the number of heads defined in the config is set to `{num_heads}`.\"\n )\n return num_heads\n\n @classmethod\n def get_hidden_size_name(cls, model_type: str) -> str:\n hidden_size = \"hidden_size\"\n try:\n hidden_size = cls._conf[model_type][1]\n except KeyError:\n logger.warning(\n f\"{model_type} is not supported yet. Only {list(cls._conf.keys())} are supported. The default value to \"\n f\"access the hidden size defined in the config is set to `{hidden_size}`.\"\n )\n return hidden_size\n\n @classmethod\n def get_model_ort_type(cls, model_type: str) -> str:\n try:\n model_type = cls._conf[model_type][2]\n except KeyError:\n logger.warning(f\"{model_type} is not supported yet. Only {list(cls._conf.keys())} are supported.\")\n return model_type\n\n @classmethod\n def check_supported_model_or_raise(cls, model_type: str) -> bool:\n if model_type not in cls._conf:\n raise KeyError(\n f\"{model_type} model type is not supported yet. Only {list(cls._conf.keys())} are supported. \"\n f\"If you want to support {model_type} please propose a PR or open up an issue.\"\n )\n\n\ndef generate_identified_filename(filename, identifier):\n return filename.parent.joinpath(filename.stem + identifier).with_suffix(filename.suffix)\n\n\ndef fix_atenops_to_gather(model_path):\n # Fix broken ATenOp nodes back to Gather nodes.\n model = onnx.load(model_path)\n onnx.checker.check_model(model)\n\n nodes = model.graph.node\n\n for node in nodes:\n if node.op_type in [\"ATenOp\", \"ATen\"]:\n logger.info(f\"----Start fixing node: {node.name}----\")\n op_num = node.name.split(\"_\")[-1]\n new_node = onnx.helper.make_node(\n \"Gather\",\n name=\"Gather_\" + op_num,\n inputs=[node.input[0], node.input[1]],\n outputs=node.output,\n )\n\n model.graph.node.remove(node)\n model.graph.node.insert(int(op_num), new_node)\n\n onnx.checker.check_model(model)\n onnx.save(model, model_path)\n\n\ndef wrap_onnx_config_for_loss(onnx_config: OnnxConfig) -> OnnxConfig:\n if isinstance(onnx_config, OnnxSeq2SeqConfigWithPast):\n return OnnxSeq2SeqConfigWithPastAndLoss(onnx_config)\n else:\n return OnnxConfigWithLoss(onnx_config)\n\n\ndef get_device_for_provider(provider: str) -> torch.device:\n \"\"\"\n Gets the PyTorch device (CPU/CUDA) associated with an ONNX Runtime provider.\n \"\"\"\n return torch.device(\"cuda\") if provider == \"CUDAExecutionProvider\" else torch.device(\"cpu\")\n\n\ndef get_provider_for_device(device: torch.device) -> str:\n \"\"\"\n Gets the ONNX Runtime provider associated with the PyTorch device (CPU/CUDA).\n \"\"\"\n return \"CUDAExecutionProvider\" if device.type.lower() == \"cuda\" else \"CPUExecutionProvider\"\n\n\nclass ORTQuantizableOperator(Enum):\n # Common ops\n Gather = \"Gather\"\n Transpose = \"Transpose\"\n EmbedLayerNormalizationQuant = \"EmbedLayerNormalization\"\n\n # QLinearOps\n Conv = \"Conv\"\n MatMul = 
\"MatMul\"\n Add = \"Add\"\n Mul = \"Mul\"\n Relu = \"Relu\"\n Clip = \"Clip\"\n LeakyRelu = \"LeakyRelu\"\n Sigmoid = \"Sigmoid\"\n MaxPool = \"MaxPool\"\n GlobalAveragePool = \"GlobalAveragePool\"\n Split = \"Split\"\n Pad = \"Pad\"\n Reshape = \"Reshape\"\n Squeeze = \"Squeeze\"\n Unsqueeze = \"Unsqueeze\"\n Resize = \"Resize\"\n AveragePool = \"AveragePool\"\n Concat = \"Concat\"\n", "path": "optimum/onnxruntime/utils.py"}]}
| 2,773 | 181 |
gh_patches_debug_26919
|
rasdani/github-patches
|
git_diff
|
cupy__cupy-7302
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use `warpSize` to get the warp size in ROCm environment
CuPy, currently, has a hard-coded warp size of 64 for ROCm environment. However, AMD gpu's warp size is not necessarily 64; for example, Navi21 that ROCm 5.0 supports has the warp size of 32. The following can be a good solution.
https://github.com/pytorch/pytorch/blob/2cefbb71cf65696e28ee1bdfce06d6846285e5fb/c10/macros/Macros.h#L322
Thanks @amathews-amd for pointing this out.
</issue>
<code>
[start of cupy/_core/__init__.py]
1 # mypy: ignore-errors
2
3 from cupy._core import core # NOQA
4 from cupy._core import fusion # NOQA
5 from cupy._core import internal # NOQA
6
7
8 # internal APIs for testing and developement
9 from cupy._core._accelerator import set_elementwise_accelerators # NOQA
10 from cupy._core._accelerator import set_reduction_accelerators # NOQA
11 from cupy._core._accelerator import set_routine_accelerators # NOQA
12 from cupy._core._accelerator import get_elementwise_accelerators # NOQA
13 from cupy._core._accelerator import get_reduction_accelerators # NOQA
14 from cupy._core._accelerator import get_routine_accelerators # NOQA
15
16
17 # import class and function
18 from cupy._core._kernel import create_ufunc # NOQA
19 from cupy._core._kernel import ElementwiseKernel # NOQA
20 from cupy._core._kernel import ufunc # NOQA
21 from cupy._core._reduction import create_reduction_func # NOQA
22 from cupy._core._reduction import ReductionKernel # NOQA
23 from cupy._core._routines_binary import bitwise_and # NOQA
24 from cupy._core._routines_binary import bitwise_or # NOQA
25 from cupy._core._routines_binary import bitwise_xor # NOQA
26 from cupy._core._routines_binary import invert # NOQA
27 from cupy._core._routines_binary import left_shift # NOQA
28 from cupy._core._routines_binary import right_shift # NOQA
29 from cupy._core._routines_linalg import _mat_ptrs # NOQA
30 from cupy._core._routines_linalg import dot # NOQA
31 from cupy._core._routines_linalg import get_compute_type # NOQA
32 from cupy._core._routines_linalg import matmul # NOQA
33 from cupy._core._routines_linalg import set_compute_type # NOQA
34 from cupy._core._routines_linalg import tensordot_core # NOQA
35 from cupy._core._routines_logic import create_comparison # NOQA
36 from cupy._core._routines_logic import equal # NOQA
37 from cupy._core._routines_logic import greater # NOQA
38 from cupy._core._routines_logic import greater_equal # NOQA
39 from cupy._core._routines_logic import less # NOQA
40 from cupy._core._routines_logic import less_equal # NOQA
41 from cupy._core._routines_logic import not_equal # NOQA
42 from cupy._core._routines_manipulation import array_split # NOQA
43 from cupy._core._routines_manipulation import broadcast # NOQA
44 from cupy._core._routines_manipulation import broadcast_to # NOQA
45 from cupy._core._routines_manipulation import concatenate_method # NOQA
46 from cupy._core._routines_manipulation import moveaxis # NOQA
47 from cupy._core._routines_manipulation import rollaxis # NOQA
48 from cupy._core._routines_manipulation import size # NOQA'
49 from cupy._core._routines_math import absolute # NOQA
50 from cupy._core._routines_math import add # NOQA
51 from cupy._core._routines_math import angle, angle_deg # NOQA
52 from cupy._core._routines_math import conjugate # NOQA
53 from cupy._core._routines_math import divide # NOQA
54 from cupy._core._routines_math import floor_divide # NOQA
55 from cupy._core._routines_math import multiply # NOQA
56 from cupy._core._routines_math import negative # NOQA
57 from cupy._core._routines_math import positive # NOQA
58 from cupy._core._routines_math import power # NOQA
59 from cupy._core._routines_math import remainder # NOQA
60 from cupy._core._routines_math import sqrt # NOQA
61 from cupy._core._routines_math import subtract # NOQA
62 from cupy._core._routines_math import true_divide # NOQA
63 from cupy._core._routines_statistics import nanmax # NOQA
64 from cupy._core._routines_statistics import nanmin # NOQA
65 from cupy._core.core import _internal_ascontiguousarray # NOQA
66 from cupy._core.core import _internal_asfortranarray # NOQA
67 from cupy._core.core import array # NOQA
68 from cupy._core.core import ascontiguousarray # NOQA
69 from cupy._core.core import asfortranarray # NOQA
70 from cupy._core.core import divmod # NOQA
71 from cupy._core.core import elementwise_copy # NOQA
72 from cupy._core.core import ndarray # NOQA
73 from cupy._core.dlpack import fromDlpack # NOQA
74 from cupy._core.dlpack import from_dlpack # NOQA
75 from cupy._core.internal import complete_slice # NOQA
76 from cupy._core.internal import get_size # NOQA
77 from cupy._core.raw import RawKernel # NOQA
78 from cupy._core.raw import RawModule # NOQA
79
[end of cupy/_core/__init__.py]
[start of cupyx/jit/_interface.py]
1 import functools
2 import warnings
3
4 import numpy
5
6 from cupy_backends.cuda.api import runtime
7 import cupy
8 from cupy._core import core
9 from cupyx.jit import _compile
10 from cupyx.jit import _cuda_typerules
11 from cupyx.jit import _cuda_types
12 from cupyx.jit import _internal_types
13 from cupyx.jit._cuda_types import Scalar
14
15
16 class _CudaFunction:
17 """JIT cupy function object
18 """
19
20 def __init__(self, func, mode, device=False, inline=False):
21 self.attributes = []
22
23 if device:
24 self.attributes.append('__device__')
25 else:
26 self.attributes.append('__global__')
27
28 if inline:
29 self.attributes.append('inline')
30
31 self.name = getattr(func, 'name', func.__name__)
32 self.func = func
33 self.mode = mode
34
35 def __call__(self, *args, **kwargs):
36 raise NotImplementedError
37
38 def _emit_code_from_types(self, in_types, ret_type=None):
39 return _compile.transpile(
40 self.func, self.attributes, self.mode, in_types, ret_type)
41
42
43 class _JitRawKernel:
44 """JIT CUDA kernel object.
45
46 The decorator :func:``cupyx.jit.rawkernel`` converts the target function
47 to an object of this class. This class is not inteded to be instantiated
48 by users.
49 """
50
51 def __init__(self, func, mode, device):
52 self._func = func
53 self._mode = mode
54 self._device = device
55 self._cache = {}
56 self._cached_codes = {}
57
58 def __call__(
59 self, grid, block, args, shared_mem=0, stream=None):
60 """Calls the CUDA kernel.
61
62 The compilation will be deferred until the first function call.
63 CuPy's JIT compiler infers the types of arguments at the call
64 time, and will cache the compiled kernels for speeding up any
65 subsequent calls.
66
67 Args:
68 grid (tuple of int): Size of grid in blocks.
69 block (tuple of int): Dimensions of each thread block.
70 args (tuple):
71 Arguments of the kernel. The type of all elements must be
72 ``bool``, ``int``, ``float``, ``complex``, NumPy scalar or
73 ``cupy.ndarray``.
74 shared_mem (int):
75 Dynamic shared-memory size per thread block in bytes.
76 stream (cupy.cuda.Stream): CUDA stream.
77
78 .. seealso:: :ref:`jit_kernel_definition`
79 """
80 in_types = []
81 for x in args:
82 if isinstance(x, cupy.ndarray):
83 t = _cuda_types.CArray.from_ndarray(x)
84 elif numpy.isscalar(x):
85 t = _cuda_typerules.get_ctype_from_scalar(self._mode, x)
86 else:
87 raise TypeError(f'{type(x)} is not supported for RawKernel')
88 in_types.append(t)
89 in_types = tuple(in_types)
90 device_id = cupy.cuda.get_device_id()
91
92 kern, enable_cg = self._cache.get((in_types, device_id), (None, None))
93 if kern is None:
94 result = self._cached_codes.get(in_types)
95 if result is None:
96 result = _compile.transpile(
97 self._func,
98 ['extern "C"', '__global__'],
99 self._mode,
100 in_types,
101 _cuda_types.void,
102 )
103 self._cached_codes[in_types] = result
104
105 fname = result.func_name
106 enable_cg = result.enable_cooperative_groups
107 options = ('-DCUPY_JIT_MODE', '--std=c++14')
108 backend = result.backend
109 if backend == 'nvcc':
110 options += ('-DCUPY_JIT_NVCC',)
111 module = core.compile_with_cache(
112 source=result.code,
113 options=options,
114 backend=backend)
115 kern = module.get_function(fname)
116 self._cache[(in_types, device_id)] = (kern, enable_cg)
117
118 new_args = []
119 for a, t in zip(args, in_types):
120 if isinstance(t, Scalar):
121 if t.dtype.char == 'e':
122 a = numpy.float32(a)
123 else:
124 a = t.dtype.type(a)
125 new_args.append(a)
126
127 kern(grid, block, tuple(new_args), shared_mem, stream, enable_cg)
128
129 def __getitem__(self, grid_and_block):
130 """Numba-style kernel call.
131
132 .. seealso:: :ref:`jit_kernel_definition`
133 """
134 grid, block = grid_and_block
135 if not isinstance(grid, tuple):
136 grid = (grid, 1, 1)
137 if not isinstance(block, tuple):
138 block = (block, 1, 1)
139 return lambda *args, **kwargs: self(grid, block, args, **kwargs)
140
141 @property
142 def cached_codes(self):
143 """Returns a dict that has input types as keys and codes values.
144
145 This proprety method is for debugging purpose.
146 The return value is not guaranteed to keep backward compatibility.
147 """
148 if len(self._cached_codes) == 0:
149 warnings.warn(
150 'No codes are cached because compilation is deferred until '
151 'the first function call.')
152 return dict([(k, v.code) for k, v in self._cached_codes.items()])
153
154 @property
155 def cached_code(self):
156 """Returns `next(iter(self.cached_codes.values()))`.
157
158 This proprety method is for debugging purpose.
159 The return value is not guaranteed to keep backward compatibility.
160 """
161 codes = self.cached_codes
162 if len(codes) > 1:
163 warnings.warn(
164 'The input types of the kernel could not be inferred. '
165 'Please use `.cached_codes` instead.')
166 return next(iter(codes.values()))
167
168
169 def rawkernel(*, mode='cuda', device=False):
170 """A decorator compiles a Python function into CUDA kernel.
171 """
172 cupy._util.experimental('cupyx.jit.rawkernel')
173
174 def wrapper(func):
175 return functools.update_wrapper(
176 _JitRawKernel(func, mode, device), func)
177 return wrapper
178
179
180 threadIdx = _internal_types.Data('threadIdx', _cuda_types.dim3)
181 blockDim = _internal_types.Data('blockDim', _cuda_types.dim3)
182 blockIdx = _internal_types.Data('blockIdx', _cuda_types.dim3)
183 gridDim = _internal_types.Data('gridDim', _cuda_types.dim3)
184
185 warpsize = _internal_types.Data(
186 '64' if runtime.is_hip else '32', _cuda_types.uint32)
187 warpsize.__doc__ = r"""Returns the number of threads in a warp.
188
189 In CUDA this is always 32, and in ROCm/HIP always 64.
190
191 .. seealso:: :obj:`numba.cuda.warpsize`
192 """
193
[end of cupyx/jit/_interface.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupy/_core/__init__.py b/cupy/_core/__init__.py
--- a/cupy/_core/__init__.py
+++ b/cupy/_core/__init__.py
@@ -18,6 +18,7 @@
from cupy._core._kernel import create_ufunc # NOQA
from cupy._core._kernel import ElementwiseKernel # NOQA
from cupy._core._kernel import ufunc # NOQA
+from cupy._core._kernel import _get_warpsize # NOQA
from cupy._core._reduction import create_reduction_func # NOQA
from cupy._core._reduction import ReductionKernel # NOQA
from cupy._core._routines_binary import bitwise_and # NOQA
diff --git a/cupyx/jit/_interface.py b/cupyx/jit/_interface.py
--- a/cupyx/jit/_interface.py
+++ b/cupyx/jit/_interface.py
@@ -3,7 +3,6 @@
import numpy
-from cupy_backends.cuda.api import runtime
import cupy
from cupy._core import core
from cupyx.jit import _compile
@@ -182,11 +181,8 @@
blockIdx = _internal_types.Data('blockIdx', _cuda_types.dim3)
gridDim = _internal_types.Data('gridDim', _cuda_types.dim3)
-warpsize = _internal_types.Data(
- '64' if runtime.is_hip else '32', _cuda_types.uint32)
+warpsize = _internal_types.Data('warpSize', _cuda_types.int32)
warpsize.__doc__ = r"""Returns the number of threads in a warp.
-In CUDA this is always 32, and in ROCm/HIP always 64.
-
.. seealso:: :obj:`numba.cuda.warpsize`
"""
|
{"golden_diff": "diff --git a/cupy/_core/__init__.py b/cupy/_core/__init__.py\n--- a/cupy/_core/__init__.py\n+++ b/cupy/_core/__init__.py\n@@ -18,6 +18,7 @@\n from cupy._core._kernel import create_ufunc # NOQA\n from cupy._core._kernel import ElementwiseKernel # NOQA\n from cupy._core._kernel import ufunc # NOQA\n+from cupy._core._kernel import _get_warpsize # NOQA\n from cupy._core._reduction import create_reduction_func # NOQA\n from cupy._core._reduction import ReductionKernel # NOQA\n from cupy._core._routines_binary import bitwise_and # NOQA\ndiff --git a/cupyx/jit/_interface.py b/cupyx/jit/_interface.py\n--- a/cupyx/jit/_interface.py\n+++ b/cupyx/jit/_interface.py\n@@ -3,7 +3,6 @@\n \n import numpy\n \n-from cupy_backends.cuda.api import runtime\n import cupy\n from cupy._core import core\n from cupyx.jit import _compile\n@@ -182,11 +181,8 @@\n blockIdx = _internal_types.Data('blockIdx', _cuda_types.dim3)\n gridDim = _internal_types.Data('gridDim', _cuda_types.dim3)\n \n-warpsize = _internal_types.Data(\n- '64' if runtime.is_hip else '32', _cuda_types.uint32)\n+warpsize = _internal_types.Data('warpSize', _cuda_types.int32)\n warpsize.__doc__ = r\"\"\"Returns the number of threads in a warp.\n \n-In CUDA this is always 32, and in ROCm/HIP always 64.\n-\n .. seealso:: :obj:`numba.cuda.warpsize`\n \"\"\"\n", "issue": "Use `warpSize` to get the warp size in ROCm environment\nCuPy, currently, has a hard-coded warp size of 64 for ROCm environment. However, AMD gpu's warp size is not necessarily 64; for example, Navi21 that ROCm 5.0 supports has the warp size of 32. The following can be a good solution.\r\n\r\nhttps://github.com/pytorch/pytorch/blob/2cefbb71cf65696e28ee1bdfce06d6846285e5fb/c10/macros/Macros.h#L322\r\n\r\nThanks @amathews-amd for pointing this out.\n", "before_files": [{"content": "# mypy: ignore-errors\n\nfrom cupy._core import core # NOQA\nfrom cupy._core import fusion # NOQA\nfrom cupy._core import internal # NOQA\n\n\n# internal APIs for testing and developement\nfrom cupy._core._accelerator import set_elementwise_accelerators # NOQA\nfrom cupy._core._accelerator import set_reduction_accelerators # NOQA\nfrom cupy._core._accelerator import set_routine_accelerators # NOQA\nfrom cupy._core._accelerator import get_elementwise_accelerators # NOQA\nfrom cupy._core._accelerator import get_reduction_accelerators # NOQA\nfrom cupy._core._accelerator import get_routine_accelerators # NOQA\n\n\n# import class and function\nfrom cupy._core._kernel import create_ufunc # NOQA\nfrom cupy._core._kernel import ElementwiseKernel # NOQA\nfrom cupy._core._kernel import ufunc # NOQA\nfrom cupy._core._reduction import create_reduction_func # NOQA\nfrom cupy._core._reduction import ReductionKernel # NOQA\nfrom cupy._core._routines_binary import bitwise_and # NOQA\nfrom cupy._core._routines_binary import bitwise_or # NOQA\nfrom cupy._core._routines_binary import bitwise_xor # NOQA\nfrom cupy._core._routines_binary import invert # NOQA\nfrom cupy._core._routines_binary import left_shift # NOQA\nfrom cupy._core._routines_binary import right_shift # NOQA\nfrom cupy._core._routines_linalg import _mat_ptrs # NOQA\nfrom cupy._core._routines_linalg import dot # NOQA\nfrom cupy._core._routines_linalg import get_compute_type # NOQA\nfrom cupy._core._routines_linalg import matmul # NOQA\nfrom cupy._core._routines_linalg import set_compute_type # NOQA\nfrom cupy._core._routines_linalg import tensordot_core # NOQA\nfrom cupy._core._routines_logic import create_comparison # NOQA\nfrom 
cupy._core._routines_logic import equal # NOQA\nfrom cupy._core._routines_logic import greater # NOQA\nfrom cupy._core._routines_logic import greater_equal # NOQA\nfrom cupy._core._routines_logic import less # NOQA\nfrom cupy._core._routines_logic import less_equal # NOQA\nfrom cupy._core._routines_logic import not_equal # NOQA\nfrom cupy._core._routines_manipulation import array_split # NOQA\nfrom cupy._core._routines_manipulation import broadcast # NOQA\nfrom cupy._core._routines_manipulation import broadcast_to # NOQA\nfrom cupy._core._routines_manipulation import concatenate_method # NOQA\nfrom cupy._core._routines_manipulation import moveaxis # NOQA\nfrom cupy._core._routines_manipulation import rollaxis # NOQA\nfrom cupy._core._routines_manipulation import size # NOQA'\nfrom cupy._core._routines_math import absolute # NOQA\nfrom cupy._core._routines_math import add # NOQA\nfrom cupy._core._routines_math import angle, angle_deg # NOQA\nfrom cupy._core._routines_math import conjugate # NOQA\nfrom cupy._core._routines_math import divide # NOQA\nfrom cupy._core._routines_math import floor_divide # NOQA\nfrom cupy._core._routines_math import multiply # NOQA\nfrom cupy._core._routines_math import negative # NOQA\nfrom cupy._core._routines_math import positive # NOQA\nfrom cupy._core._routines_math import power # NOQA\nfrom cupy._core._routines_math import remainder # NOQA\nfrom cupy._core._routines_math import sqrt # NOQA\nfrom cupy._core._routines_math import subtract # NOQA\nfrom cupy._core._routines_math import true_divide # NOQA\nfrom cupy._core._routines_statistics import nanmax # NOQA\nfrom cupy._core._routines_statistics import nanmin # NOQA\nfrom cupy._core.core import _internal_ascontiguousarray # NOQA\nfrom cupy._core.core import _internal_asfortranarray # NOQA\nfrom cupy._core.core import array # NOQA\nfrom cupy._core.core import ascontiguousarray # NOQA\nfrom cupy._core.core import asfortranarray # NOQA\nfrom cupy._core.core import divmod # NOQA\nfrom cupy._core.core import elementwise_copy # NOQA\nfrom cupy._core.core import ndarray # NOQA\nfrom cupy._core.dlpack import fromDlpack # NOQA\nfrom cupy._core.dlpack import from_dlpack # NOQA\nfrom cupy._core.internal import complete_slice # NOQA\nfrom cupy._core.internal import get_size # NOQA\nfrom cupy._core.raw import RawKernel # NOQA\nfrom cupy._core.raw import RawModule # NOQA\n", "path": "cupy/_core/__init__.py"}, {"content": "import functools\nimport warnings\n\nimport numpy\n\nfrom cupy_backends.cuda.api import runtime\nimport cupy\nfrom cupy._core import core\nfrom cupyx.jit import _compile\nfrom cupyx.jit import _cuda_typerules\nfrom cupyx.jit import _cuda_types\nfrom cupyx.jit import _internal_types\nfrom cupyx.jit._cuda_types import Scalar\n\n\nclass _CudaFunction:\n \"\"\"JIT cupy function object\n \"\"\"\n\n def __init__(self, func, mode, device=False, inline=False):\n self.attributes = []\n\n if device:\n self.attributes.append('__device__')\n else:\n self.attributes.append('__global__')\n\n if inline:\n self.attributes.append('inline')\n\n self.name = getattr(func, 'name', func.__name__)\n self.func = func\n self.mode = mode\n\n def __call__(self, *args, **kwargs):\n raise NotImplementedError\n\n def _emit_code_from_types(self, in_types, ret_type=None):\n return _compile.transpile(\n self.func, self.attributes, self.mode, in_types, ret_type)\n\n\nclass _JitRawKernel:\n \"\"\"JIT CUDA kernel object.\n\n The decorator :func:``cupyx.jit.rawkernel`` converts the target function\n to an object of this class. 
This class is not inteded to be instantiated\n by users.\n \"\"\"\n\n def __init__(self, func, mode, device):\n self._func = func\n self._mode = mode\n self._device = device\n self._cache = {}\n self._cached_codes = {}\n\n def __call__(\n self, grid, block, args, shared_mem=0, stream=None):\n \"\"\"Calls the CUDA kernel.\n\n The compilation will be deferred until the first function call.\n CuPy's JIT compiler infers the types of arguments at the call\n time, and will cache the compiled kernels for speeding up any\n subsequent calls.\n\n Args:\n grid (tuple of int): Size of grid in blocks.\n block (tuple of int): Dimensions of each thread block.\n args (tuple):\n Arguments of the kernel. The type of all elements must be\n ``bool``, ``int``, ``float``, ``complex``, NumPy scalar or\n ``cupy.ndarray``.\n shared_mem (int):\n Dynamic shared-memory size per thread block in bytes.\n stream (cupy.cuda.Stream): CUDA stream.\n\n .. seealso:: :ref:`jit_kernel_definition`\n \"\"\"\n in_types = []\n for x in args:\n if isinstance(x, cupy.ndarray):\n t = _cuda_types.CArray.from_ndarray(x)\n elif numpy.isscalar(x):\n t = _cuda_typerules.get_ctype_from_scalar(self._mode, x)\n else:\n raise TypeError(f'{type(x)} is not supported for RawKernel')\n in_types.append(t)\n in_types = tuple(in_types)\n device_id = cupy.cuda.get_device_id()\n\n kern, enable_cg = self._cache.get((in_types, device_id), (None, None))\n if kern is None:\n result = self._cached_codes.get(in_types)\n if result is None:\n result = _compile.transpile(\n self._func,\n ['extern \"C\"', '__global__'],\n self._mode,\n in_types,\n _cuda_types.void,\n )\n self._cached_codes[in_types] = result\n\n fname = result.func_name\n enable_cg = result.enable_cooperative_groups\n options = ('-DCUPY_JIT_MODE', '--std=c++14')\n backend = result.backend\n if backend == 'nvcc':\n options += ('-DCUPY_JIT_NVCC',)\n module = core.compile_with_cache(\n source=result.code,\n options=options,\n backend=backend)\n kern = module.get_function(fname)\n self._cache[(in_types, device_id)] = (kern, enable_cg)\n\n new_args = []\n for a, t in zip(args, in_types):\n if isinstance(t, Scalar):\n if t.dtype.char == 'e':\n a = numpy.float32(a)\n else:\n a = t.dtype.type(a)\n new_args.append(a)\n\n kern(grid, block, tuple(new_args), shared_mem, stream, enable_cg)\n\n def __getitem__(self, grid_and_block):\n \"\"\"Numba-style kernel call.\n\n .. seealso:: :ref:`jit_kernel_definition`\n \"\"\"\n grid, block = grid_and_block\n if not isinstance(grid, tuple):\n grid = (grid, 1, 1)\n if not isinstance(block, tuple):\n block = (block, 1, 1)\n return lambda *args, **kwargs: self(grid, block, args, **kwargs)\n\n @property\n def cached_codes(self):\n \"\"\"Returns a dict that has input types as keys and codes values.\n\n This proprety method is for debugging purpose.\n The return value is not guaranteed to keep backward compatibility.\n \"\"\"\n if len(self._cached_codes) == 0:\n warnings.warn(\n 'No codes are cached because compilation is deferred until '\n 'the first function call.')\n return dict([(k, v.code) for k, v in self._cached_codes.items()])\n\n @property\n def cached_code(self):\n \"\"\"Returns `next(iter(self.cached_codes.values()))`.\n\n This proprety method is for debugging purpose.\n The return value is not guaranteed to keep backward compatibility.\n \"\"\"\n codes = self.cached_codes\n if len(codes) > 1:\n warnings.warn(\n 'The input types of the kernel could not be inferred. 
'\n 'Please use `.cached_codes` instead.')\n return next(iter(codes.values()))\n\n\ndef rawkernel(*, mode='cuda', device=False):\n \"\"\"A decorator compiles a Python function into CUDA kernel.\n \"\"\"\n cupy._util.experimental('cupyx.jit.rawkernel')\n\n def wrapper(func):\n return functools.update_wrapper(\n _JitRawKernel(func, mode, device), func)\n return wrapper\n\n\nthreadIdx = _internal_types.Data('threadIdx', _cuda_types.dim3)\nblockDim = _internal_types.Data('blockDim', _cuda_types.dim3)\nblockIdx = _internal_types.Data('blockIdx', _cuda_types.dim3)\ngridDim = _internal_types.Data('gridDim', _cuda_types.dim3)\n\nwarpsize = _internal_types.Data(\n '64' if runtime.is_hip else '32', _cuda_types.uint32)\nwarpsize.__doc__ = r\"\"\"Returns the number of threads in a warp.\n\nIn CUDA this is always 32, and in ROCm/HIP always 64.\n\n.. seealso:: :obj:`numba.cuda.warpsize`\n\"\"\"\n", "path": "cupyx/jit/_interface.py"}]}
| 4,019 | 420 |
gh_patches_debug_17589
|
rasdani/github-patches
|
git_diff
|
coala__coala-bears-1661
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JuliaLintBear: Use JuliaRequirement
`REQUIREMENTS` should be a `JuliaRequirement` instead of a `DistributionRequirement`.
</issue>
<code>
[start of bears/julia/JuliaLintBear.py]
1 from coala_utils.string_processing.Core import escape
2 from coalib.bearlib.abstractions.Linter import linter
3 from dependency_management.requirements.DistributionRequirement import (
4 DistributionRequirement)
5 from coalib.results.RESULT_SEVERITY import RESULT_SEVERITY
6
7
8 @linter(executable='julia',
9 output_format='regex',
10 output_regex=r'.+:(?P<line>\d+) (?P<severity>.)\d+ (?P<message>.*)',
11 severity_map={'E': RESULT_SEVERITY.MAJOR,
12 'W': RESULT_SEVERITY.NORMAL,
13 'I': RESULT_SEVERITY.INFO},
14 prerequisite_check_command=('julia', '-e', 'import Lint.lintfile'),
15 prerequisite_check_fail_message='Lint package not installed. Run '
16 '`Pkg.add("Lint")` from Julia to '
17 'install Lint.')
18 class JuliaLintBear:
19 """
20 Provide analysis related to common bugs and potential issues in Julia like
21 dead code, undefined variable usage, duplicate keys in dicts, incorrect
22 ADT usage, wrongfully using ellipsis, and much more.
23
24 See <https://lintjl.readthedocs.org/en/stable/> for more information
25 on the analysis provided.
26 """
27 LANGUAGES = {'Julia'}
28 REQUIREMENTS = {DistributionRequirement(apt_get='julia')}
29 AUTHORS = {'The coala developers'}
30 AUTHORS_EMAILS = {'[email protected]'}
31 LICENSE = 'AGPL-3.0'
32 CAN_DETECT = {'Unused Code', 'Syntax', 'Redundancy', 'Duplication',
33 'Unreachable Code', 'Security', 'Formatting'}
34
35 @staticmethod
36 def create_arguments(filename, file, config_file):
37 lintcode = ('import Lint.lintfile; display(lintfile("' +
38 escape(filename, '"\\') + '"))')
39 return '-e', lintcode
40
[end of bears/julia/JuliaLintBear.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bears/julia/JuliaLintBear.py b/bears/julia/JuliaLintBear.py
--- a/bears/julia/JuliaLintBear.py
+++ b/bears/julia/JuliaLintBear.py
@@ -1,7 +1,7 @@
from coala_utils.string_processing.Core import escape
from coalib.bearlib.abstractions.Linter import linter
-from dependency_management.requirements.DistributionRequirement import (
- DistributionRequirement)
+from dependency_management.requirements.JuliaRequirement import (
+ JuliaRequirement)
from coalib.results.RESULT_SEVERITY import RESULT_SEVERITY
@@ -25,7 +25,7 @@
on the analysis provided.
"""
LANGUAGES = {'Julia'}
- REQUIREMENTS = {DistributionRequirement(apt_get='julia')}
+ REQUIREMENTS = {JuliaRequirement('Lint')}
AUTHORS = {'The coala developers'}
AUTHORS_EMAILS = {'[email protected]'}
LICENSE = 'AGPL-3.0'
|
{"golden_diff": "diff --git a/bears/julia/JuliaLintBear.py b/bears/julia/JuliaLintBear.py\n--- a/bears/julia/JuliaLintBear.py\n+++ b/bears/julia/JuliaLintBear.py\n@@ -1,7 +1,7 @@\n from coala_utils.string_processing.Core import escape\n from coalib.bearlib.abstractions.Linter import linter\n-from dependency_management.requirements.DistributionRequirement import (\n- DistributionRequirement)\n+from dependency_management.requirements.JuliaRequirement import (\n+ JuliaRequirement)\n from coalib.results.RESULT_SEVERITY import RESULT_SEVERITY\n \n \n@@ -25,7 +25,7 @@\n on the analysis provided.\n \"\"\"\n LANGUAGES = {'Julia'}\n- REQUIREMENTS = {DistributionRequirement(apt_get='julia')}\n+ REQUIREMENTS = {JuliaRequirement('Lint')}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n", "issue": "JuliaLintBear: Use JuliaRequirement\n`REQUIREMENTS` should be a `JuliaRequirement` instead of a `DistributionRequirement`.\n", "before_files": [{"content": "from coala_utils.string_processing.Core import escape\nfrom coalib.bearlib.abstractions.Linter import linter\nfrom dependency_management.requirements.DistributionRequirement import (\n DistributionRequirement)\nfrom coalib.results.RESULT_SEVERITY import RESULT_SEVERITY\n\n\n@linter(executable='julia',\n output_format='regex',\n output_regex=r'.+:(?P<line>\\d+) (?P<severity>.)\\d+ (?P<message>.*)',\n severity_map={'E': RESULT_SEVERITY.MAJOR,\n 'W': RESULT_SEVERITY.NORMAL,\n 'I': RESULT_SEVERITY.INFO},\n prerequisite_check_command=('julia', '-e', 'import Lint.lintfile'),\n prerequisite_check_fail_message='Lint package not installed. Run '\n '`Pkg.add(\"Lint\")` from Julia to '\n 'install Lint.')\nclass JuliaLintBear:\n \"\"\"\n Provide analysis related to common bugs and potential issues in Julia like\n dead code, undefined variable usage, duplicate keys in dicts, incorrect\n ADT usage, wrongfully using ellipsis, and much more.\n\n See <https://lintjl.readthedocs.org/en/stable/> for more information\n on the analysis provided.\n \"\"\"\n LANGUAGES = {'Julia'}\n REQUIREMENTS = {DistributionRequirement(apt_get='julia')}\n AUTHORS = {'The coala developers'}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_DETECT = {'Unused Code', 'Syntax', 'Redundancy', 'Duplication',\n 'Unreachable Code', 'Security', 'Formatting'}\n\n @staticmethod\n def create_arguments(filename, file, config_file):\n lintcode = ('import Lint.lintfile; display(lintfile(\"' +\n escape(filename, '\"\\\\') + '\"))')\n return '-e', lintcode\n", "path": "bears/julia/JuliaLintBear.py"}]}
| 1,047 | 223 |
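
The golden diff for this record only swaps the requirement class, but the bear's output parsing is easy to misread in the flattened listing. Below is a small standalone sketch of how its `output_regex` and `severity_map` (copied verbatim from the code above) classify one Lint.jl-style line; the sample string is an invented example, not captured Lint output:

```
import re

# Regex and severity map copied from JuliaLintBear above; the sample line is illustrative only.
OUTPUT_REGEX = r'.+:(?P<line>\d+) (?P<severity>.)\d+ (?P<message>.*)'
SEVERITY_MAP = {'E': 'MAJOR', 'W': 'NORMAL', 'I': 'INFO'}

sample = 'demo.jl:12 E321 use of undeclared symbol'
match = re.match(OUTPUT_REGEX, sample)
if match:
    print(match.group('line'), SEVERITY_MAP[match.group('severity')], match.group('message'))
    # -> 12 MAJOR use of undeclared symbol
```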
gh_patches_debug_37767
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-1567
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update instrumentation of SQS for batch send, delete, and batch delete operations
The SQS API also includes batch send, delete, and batch delete messages. These actions were not described in the original SQS spec so the instrumentation needs to be updated as per the description in this PR https://github.com/elastic/apm/pull/419/files
</issue>
<code>
[start of elasticapm/instrumentation/packages/botocore.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 import urllib.parse
32 from collections import namedtuple
33
34 from elasticapm.conf import constants
35 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
36 from elasticapm.traces import capture_span, execution_context
37 from elasticapm.utils.logging import get_logger
38
39 logger = get_logger("elasticapm.instrument")
40
41 SQS_MAX_ATTRIBUTES = 10
42
43
44 HandlerInfo = namedtuple("HandlerInfo", ("signature", "span_type", "span_subtype", "span_action", "context"))
45
46 # Used for boto3 < 1.7
47 endpoint_to_service_id = {"SNS": "SNS", "S3": "S3", "DYNAMODB": "DynamoDB", "SQS": "SQS"}
48
49
50 class BotocoreInstrumentation(AbstractInstrumentedModule):
51 name = "botocore"
52
53 instrument_list = [("botocore.client", "BaseClient._make_api_call")]
54
55 capture_span_ctx = capture_span
56
57 def _call(self, service, instance, args, kwargs):
58 """
59 This is split out from `call()` so that it can be re-used by the
60 aiobotocore instrumentation without duplicating all of this code.
61 """
62 if "operation_name" in kwargs:
63 operation_name = kwargs["operation_name"]
64 else:
65 operation_name = args[0]
66
67 parsed_url = urllib.parse.urlparse(instance.meta.endpoint_url)
68 context = {
69 "destination": {
70 "address": parsed_url.hostname,
71 "port": parsed_url.port,
72 "cloud": {"region": instance.meta.region_name},
73 }
74 }
75
76 handler_info = None
77 handler = handlers.get(service, False)
78 if handler:
79 handler_info = handler(operation_name, service, instance, args, kwargs, context)
80 if not handler_info:
81 handler_info = handle_default(operation_name, service, instance, args, kwargs, context)
82
83 return self.capture_span_ctx(
84 handler_info.signature,
85 span_type=handler_info.span_type,
86 leaf=True,
87 span_subtype=handler_info.span_subtype,
88 span_action=handler_info.span_action,
89 extra=handler_info.context,
90 )
91
92 def _get_service(self, instance):
93 service_model = instance.meta.service_model
94 if hasattr(service_model, "service_id"): # added in boto3 1.7
95 service = service_model.service_id
96 else:
97 service = service_model.service_name.upper()
98 service = endpoint_to_service_id.get(service, service)
99 return service
100
101 def call(self, module, method, wrapped, instance, args, kwargs):
102 service = self._get_service(instance)
103
104 ctx = self._call(service, instance, args, kwargs)
105 with ctx as span:
106 if service in span_modifiers:
107 span_modifiers[service](span, args, kwargs)
108 return wrapped(*args, **kwargs)
109
110
111 def handle_s3(operation_name, service, instance, args, kwargs, context):
112 span_type = "storage"
113 span_subtype = "s3"
114 span_action = operation_name
115 if len(args) > 1 and "Bucket" in args[1]:
116 bucket = args[1]["Bucket"]
117 else:
118 # TODO handle Access Points
119 bucket = ""
120 signature = f"S3 {operation_name} {bucket}"
121
122 context["destination"]["service"] = {"name": span_subtype, "resource": bucket, "type": span_type}
123
124 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
125
126
127 def handle_dynamodb(operation_name, service, instance, args, kwargs, context):
128 span_type = "db"
129 span_subtype = "dynamodb"
130 span_action = "query"
131 if len(args) > 1 and "TableName" in args[1]:
132 table = args[1]["TableName"]
133 else:
134 table = ""
135 signature = f"DynamoDB {operation_name} {table}".rstrip()
136
137 context["db"] = {"type": "dynamodb", "instance": instance.meta.region_name}
138 if operation_name == "Query" and len(args) > 1 and "KeyConditionExpression" in args[1]:
139 context["db"]["statement"] = args[1]["KeyConditionExpression"]
140
141 context["destination"]["service"] = {"name": span_subtype, "resource": table, "type": span_type}
142 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
143
144
145 def handle_sns(operation_name, service, instance, args, kwargs, context):
146 if operation_name != "Publish":
147 # only "publish" is handled specifically, other endpoints get the default treatment
148 return False
149 span_type = "messaging"
150 span_subtype = "sns"
151 span_action = "send"
152 topic_name = ""
153 if len(args) > 1:
154 if "Name" in args[1]:
155 topic_name = args[1]["Name"]
156 if "TopicArn" in args[1]:
157 topic_name = args[1]["TopicArn"].rsplit(":", maxsplit=1)[-1]
158 signature = f"SNS {operation_name} {topic_name}".rstrip()
159 context["destination"]["service"] = {
160 "name": span_subtype,
161 "resource": f"{span_subtype}/{topic_name}" if topic_name else span_subtype,
162 "type": span_type,
163 }
164 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
165
166
167 def handle_sqs(operation_name, service, instance, args, kwargs, context):
168 if operation_name not in ("SendMessage", "SendMessageBatch", "ReceiveMessage"):
169 # only "publish" is handled specifically, other endpoints get the default treatment
170 return False
171 span_type = "messaging"
172 span_subtype = "sqs"
173 span_action = "send" if operation_name in ("SendMessage", "SendMessageBatch") else "receive"
174 topic_name = ""
175 batch = "_BATCH" if operation_name == "SendMessageBatch" else ""
176 signature_type = "RECEIVE from" if span_action == "receive" else f"SEND{batch} to"
177
178 if len(args) > 1:
179 topic_name = args[1]["QueueUrl"].rsplit("/", maxsplit=1)[-1]
180 signature = f"SQS {signature_type} {topic_name}".rstrip() if topic_name else f"SQS {signature_type}"
181 context["destination"]["service"] = {
182 "name": span_subtype,
183 "resource": f"{span_subtype}/{topic_name}" if topic_name else span_subtype,
184 "type": span_type,
185 }
186 return HandlerInfo(signature, span_type, span_subtype, span_action, context)
187
188
189 def modify_span_sqs(span, args, kwargs):
190 if span.id:
191 trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)
192 else:
193 # this is a dropped span, use transaction id instead
194 transaction = execution_context.get_transaction()
195 trace_parent = transaction.trace_parent.copy_from(span_id=transaction.id)
196 attributes = {constants.TRACEPARENT_HEADER_NAME: {"DataType": "String", "StringValue": trace_parent.to_string()}}
197 if trace_parent.tracestate:
198 attributes[constants.TRACESTATE_HEADER_NAME] = {"DataType": "String", "StringValue": trace_parent.tracestate}
199 if len(args) > 1:
200 attributes_count = len(attributes)
201 if "MessageAttributes" in args[1]:
202 messages = [args[1]]
203 elif "Entries" in args[1]:
204 messages = args[1]["Entries"]
205 else:
206 messages = []
207 for message in messages:
208 if len(message["MessageAttributes"]) + attributes_count <= SQS_MAX_ATTRIBUTES:
209 message["MessageAttributes"].update(attributes)
210 else:
211 logger.info("Not adding disttracing headers to message due to attribute limit reached")
212
213
214 def handle_default(operation_name, service, instance, args, kwargs, destination):
215 span_type = "aws"
216 span_subtype = service.lower()
217 span_action = operation_name
218
219 destination["service"] = {"name": span_subtype, "resource": span_subtype, "type": span_type}
220
221 signature = f"{service}:{operation_name}"
222 return HandlerInfo(signature, span_type, span_subtype, span_action, destination)
223
224
225 handlers = {
226 "S3": handle_s3,
227 "DynamoDB": handle_dynamodb,
228 "SNS": handle_sns,
229 "SQS": handle_sqs,
230 "default": handle_default,
231 }
232
233 span_modifiers = {
234 "SQS": modify_span_sqs,
235 }
236
[end of elasticapm/instrumentation/packages/botocore.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticapm/instrumentation/packages/botocore.py b/elasticapm/instrumentation/packages/botocore.py
--- a/elasticapm/instrumentation/packages/botocore.py
+++ b/elasticapm/instrumentation/packages/botocore.py
@@ -164,26 +164,33 @@
return HandlerInfo(signature, span_type, span_subtype, span_action, context)
+SQS_OPERATIONS = {
+ "SendMessage": {"span_action": "send", "signature": "SEND to"},
+ "SendMessageBatch": {"span_action": "send_batch", "signature": "SEND_BATCH to"},
+ "ReceiveMessage": {"span_action": "receive", "signature": "RECEIVE from"},
+ "DeleteMessage": {"span_action": "delete", "signature": "DELETE from"},
+ "DeleteMessageBatch": {"span_action": "delete_batch", "signature": "DELETE_BATCH from"},
+}
+
+
def handle_sqs(operation_name, service, instance, args, kwargs, context):
- if operation_name not in ("SendMessage", "SendMessageBatch", "ReceiveMessage"):
+ op = SQS_OPERATIONS.get(operation_name, None)
+ if not op:
# only "publish" is handled specifically, other endpoints get the default treatment
return False
span_type = "messaging"
span_subtype = "sqs"
- span_action = "send" if operation_name in ("SendMessage", "SendMessageBatch") else "receive"
topic_name = ""
- batch = "_BATCH" if operation_name == "SendMessageBatch" else ""
- signature_type = "RECEIVE from" if span_action == "receive" else f"SEND{batch} to"
if len(args) > 1:
topic_name = args[1]["QueueUrl"].rsplit("/", maxsplit=1)[-1]
- signature = f"SQS {signature_type} {topic_name}".rstrip() if topic_name else f"SQS {signature_type}"
+ signature = f"SQS {op['signature']} {topic_name}".rstrip() if topic_name else f"SQS {op['signature']}"
context["destination"]["service"] = {
"name": span_subtype,
"resource": f"{span_subtype}/{topic_name}" if topic_name else span_subtype,
"type": span_type,
}
- return HandlerInfo(signature, span_type, span_subtype, span_action, context)
+ return HandlerInfo(signature, span_type, span_subtype, op["span_action"], context)
def modify_span_sqs(span, args, kwargs):
@@ -200,7 +207,9 @@
attributes_count = len(attributes)
if "MessageAttributes" in args[1]:
messages = [args[1]]
- elif "Entries" in args[1]:
+ # both send_batch and delete_batch use the same "Entries" list. We only want to add the
+ # traceparent to send_batch. We use the existence of ReceiptHandle to differentiate between the two
+ elif "Entries" in args[1] and args[1]["Entries"] and "ReceiptHandle" not in args[1]["Entries"][0]:
messages = args[1]["Entries"]
else:
messages = []
|
{"golden_diff": "diff --git a/elasticapm/instrumentation/packages/botocore.py b/elasticapm/instrumentation/packages/botocore.py\n--- a/elasticapm/instrumentation/packages/botocore.py\n+++ b/elasticapm/instrumentation/packages/botocore.py\n@@ -164,26 +164,33 @@\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n \n \n+SQS_OPERATIONS = {\n+ \"SendMessage\": {\"span_action\": \"send\", \"signature\": \"SEND to\"},\n+ \"SendMessageBatch\": {\"span_action\": \"send_batch\", \"signature\": \"SEND_BATCH to\"},\n+ \"ReceiveMessage\": {\"span_action\": \"receive\", \"signature\": \"RECEIVE from\"},\n+ \"DeleteMessage\": {\"span_action\": \"delete\", \"signature\": \"DELETE from\"},\n+ \"DeleteMessageBatch\": {\"span_action\": \"delete_batch\", \"signature\": \"DELETE_BATCH from\"},\n+}\n+\n+\n def handle_sqs(operation_name, service, instance, args, kwargs, context):\n- if operation_name not in (\"SendMessage\", \"SendMessageBatch\", \"ReceiveMessage\"):\n+ op = SQS_OPERATIONS.get(operation_name, None)\n+ if not op:\n # only \"publish\" is handled specifically, other endpoints get the default treatment\n return False\n span_type = \"messaging\"\n span_subtype = \"sqs\"\n- span_action = \"send\" if operation_name in (\"SendMessage\", \"SendMessageBatch\") else \"receive\"\n topic_name = \"\"\n- batch = \"_BATCH\" if operation_name == \"SendMessageBatch\" else \"\"\n- signature_type = \"RECEIVE from\" if span_action == \"receive\" else f\"SEND{batch} to\"\n \n if len(args) > 1:\n topic_name = args[1][\"QueueUrl\"].rsplit(\"/\", maxsplit=1)[-1]\n- signature = f\"SQS {signature_type} {topic_name}\".rstrip() if topic_name else f\"SQS {signature_type}\"\n+ signature = f\"SQS {op['signature']} {topic_name}\".rstrip() if topic_name else f\"SQS {op['signature']}\"\n context[\"destination\"][\"service\"] = {\n \"name\": span_subtype,\n \"resource\": f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype,\n \"type\": span_type,\n }\n- return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n+ return HandlerInfo(signature, span_type, span_subtype, op[\"span_action\"], context)\n \n \n def modify_span_sqs(span, args, kwargs):\n@@ -200,7 +207,9 @@\n attributes_count = len(attributes)\n if \"MessageAttributes\" in args[1]:\n messages = [args[1]]\n- elif \"Entries\" in args[1]:\n+ # both send_batch and delete_batch use the same \"Entries\" list. We only want to add the\n+ # traceparent to send_batch. We use the existence of ReceiptHandle to differentiate between the two\n+ elif \"Entries\" in args[1] and args[1][\"Entries\"] and \"ReceiptHandle\" not in args[1][\"Entries\"][0]:\n messages = args[1][\"Entries\"]\n else:\n messages = []\n", "issue": "Update instrumentation of SQS for batch send, delete, and batch delete operations\nThe SQS API also includes batch send, delete, and batch delete messages. 
These actions were not described in the original SQS spec so the instrumentation needs to be updated as per the description in this PR https://github.com/elastic/apm/pull/419/files\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport urllib.parse\nfrom collections import namedtuple\n\nfrom elasticapm.conf import constants\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.traces import capture_span, execution_context\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.instrument\")\n\nSQS_MAX_ATTRIBUTES = 10\n\n\nHandlerInfo = namedtuple(\"HandlerInfo\", (\"signature\", \"span_type\", \"span_subtype\", \"span_action\", \"context\"))\n\n# Used for boto3 < 1.7\nendpoint_to_service_id = {\"SNS\": \"SNS\", \"S3\": \"S3\", \"DYNAMODB\": \"DynamoDB\", \"SQS\": \"SQS\"}\n\n\nclass BotocoreInstrumentation(AbstractInstrumentedModule):\n name = \"botocore\"\n\n instrument_list = [(\"botocore.client\", \"BaseClient._make_api_call\")]\n\n capture_span_ctx = capture_span\n\n def _call(self, service, instance, args, kwargs):\n \"\"\"\n This is split out from `call()` so that it can be re-used by the\n aiobotocore instrumentation without duplicating all of this code.\n \"\"\"\n if \"operation_name\" in kwargs:\n operation_name = kwargs[\"operation_name\"]\n else:\n operation_name = args[0]\n\n parsed_url = urllib.parse.urlparse(instance.meta.endpoint_url)\n context = {\n \"destination\": {\n \"address\": parsed_url.hostname,\n \"port\": parsed_url.port,\n \"cloud\": {\"region\": instance.meta.region_name},\n }\n }\n\n handler_info = None\n handler = handlers.get(service, False)\n if handler:\n handler_info = handler(operation_name, service, instance, args, kwargs, context)\n if not handler_info:\n handler_info = handle_default(operation_name, service, instance, args, kwargs, context)\n\n return self.capture_span_ctx(\n handler_info.signature,\n 
span_type=handler_info.span_type,\n leaf=True,\n span_subtype=handler_info.span_subtype,\n span_action=handler_info.span_action,\n extra=handler_info.context,\n )\n\n def _get_service(self, instance):\n service_model = instance.meta.service_model\n if hasattr(service_model, \"service_id\"): # added in boto3 1.7\n service = service_model.service_id\n else:\n service = service_model.service_name.upper()\n service = endpoint_to_service_id.get(service, service)\n return service\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n service = self._get_service(instance)\n\n ctx = self._call(service, instance, args, kwargs)\n with ctx as span:\n if service in span_modifiers:\n span_modifiers[service](span, args, kwargs)\n return wrapped(*args, **kwargs)\n\n\ndef handle_s3(operation_name, service, instance, args, kwargs, context):\n span_type = \"storage\"\n span_subtype = \"s3\"\n span_action = operation_name\n if len(args) > 1 and \"Bucket\" in args[1]:\n bucket = args[1][\"Bucket\"]\n else:\n # TODO handle Access Points\n bucket = \"\"\n signature = f\"S3 {operation_name} {bucket}\"\n\n context[\"destination\"][\"service\"] = {\"name\": span_subtype, \"resource\": bucket, \"type\": span_type}\n\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_dynamodb(operation_name, service, instance, args, kwargs, context):\n span_type = \"db\"\n span_subtype = \"dynamodb\"\n span_action = \"query\"\n if len(args) > 1 and \"TableName\" in args[1]:\n table = args[1][\"TableName\"]\n else:\n table = \"\"\n signature = f\"DynamoDB {operation_name} {table}\".rstrip()\n\n context[\"db\"] = {\"type\": \"dynamodb\", \"instance\": instance.meta.region_name}\n if operation_name == \"Query\" and len(args) > 1 and \"KeyConditionExpression\" in args[1]:\n context[\"db\"][\"statement\"] = args[1][\"KeyConditionExpression\"]\n\n context[\"destination\"][\"service\"] = {\"name\": span_subtype, \"resource\": table, \"type\": span_type}\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sns(operation_name, service, instance, args, kwargs, context):\n if operation_name != \"Publish\":\n # only \"publish\" is handled specifically, other endpoints get the default treatment\n return False\n span_type = \"messaging\"\n span_subtype = \"sns\"\n span_action = \"send\"\n topic_name = \"\"\n if len(args) > 1:\n if \"Name\" in args[1]:\n topic_name = args[1][\"Name\"]\n if \"TopicArn\" in args[1]:\n topic_name = args[1][\"TopicArn\"].rsplit(\":\", maxsplit=1)[-1]\n signature = f\"SNS {operation_name} {topic_name}\".rstrip()\n context[\"destination\"][\"service\"] = {\n \"name\": span_subtype,\n \"resource\": f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype,\n \"type\": span_type,\n }\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef handle_sqs(operation_name, service, instance, args, kwargs, context):\n if operation_name not in (\"SendMessage\", \"SendMessageBatch\", \"ReceiveMessage\"):\n # only \"publish\" is handled specifically, other endpoints get the default treatment\n return False\n span_type = \"messaging\"\n span_subtype = \"sqs\"\n span_action = \"send\" if operation_name in (\"SendMessage\", \"SendMessageBatch\") else \"receive\"\n topic_name = \"\"\n batch = \"_BATCH\" if operation_name == \"SendMessageBatch\" else \"\"\n signature_type = \"RECEIVE from\" if span_action == \"receive\" else f\"SEND{batch} to\"\n\n if len(args) > 1:\n topic_name = 
args[1][\"QueueUrl\"].rsplit(\"/\", maxsplit=1)[-1]\n signature = f\"SQS {signature_type} {topic_name}\".rstrip() if topic_name else f\"SQS {signature_type}\"\n context[\"destination\"][\"service\"] = {\n \"name\": span_subtype,\n \"resource\": f\"{span_subtype}/{topic_name}\" if topic_name else span_subtype,\n \"type\": span_type,\n }\n return HandlerInfo(signature, span_type, span_subtype, span_action, context)\n\n\ndef modify_span_sqs(span, args, kwargs):\n if span.id:\n trace_parent = span.transaction.trace_parent.copy_from(span_id=span.id)\n else:\n # this is a dropped span, use transaction id instead\n transaction = execution_context.get_transaction()\n trace_parent = transaction.trace_parent.copy_from(span_id=transaction.id)\n attributes = {constants.TRACEPARENT_HEADER_NAME: {\"DataType\": \"String\", \"StringValue\": trace_parent.to_string()}}\n if trace_parent.tracestate:\n attributes[constants.TRACESTATE_HEADER_NAME] = {\"DataType\": \"String\", \"StringValue\": trace_parent.tracestate}\n if len(args) > 1:\n attributes_count = len(attributes)\n if \"MessageAttributes\" in args[1]:\n messages = [args[1]]\n elif \"Entries\" in args[1]:\n messages = args[1][\"Entries\"]\n else:\n messages = []\n for message in messages:\n if len(message[\"MessageAttributes\"]) + attributes_count <= SQS_MAX_ATTRIBUTES:\n message[\"MessageAttributes\"].update(attributes)\n else:\n logger.info(\"Not adding disttracing headers to message due to attribute limit reached\")\n\n\ndef handle_default(operation_name, service, instance, args, kwargs, destination):\n span_type = \"aws\"\n span_subtype = service.lower()\n span_action = operation_name\n\n destination[\"service\"] = {\"name\": span_subtype, \"resource\": span_subtype, \"type\": span_type}\n\n signature = f\"{service}:{operation_name}\"\n return HandlerInfo(signature, span_type, span_subtype, span_action, destination)\n\n\nhandlers = {\n \"S3\": handle_s3,\n \"DynamoDB\": handle_dynamodb,\n \"SNS\": handle_sns,\n \"SQS\": handle_sqs,\n \"default\": handle_default,\n}\n\nspan_modifiers = {\n \"SQS\": modify_span_sqs,\n}\n", "path": "elasticapm/instrumentation/packages/botocore.py"}]}
| 3,390 | 714 |
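
The fix above replaces per-operation string flags with a lookup table keyed by operation name. Here is a minimal, botocore-free sketch of that dispatch pattern, reusing the operation names and signature prefixes from the golden diff; the queue URL is a made-up example:

```
SQS_OPERATIONS = {
    "SendMessage": {"span_action": "send", "signature": "SEND to"},
    "SendMessageBatch": {"span_action": "send_batch", "signature": "SEND_BATCH to"},
    "ReceiveMessage": {"span_action": "receive", "signature": "RECEIVE from"},
    "DeleteMessage": {"span_action": "delete", "signature": "DELETE from"},
    "DeleteMessageBatch": {"span_action": "delete_batch", "signature": "DELETE_BATCH from"},
}

def sqs_signature(operation_name, queue_url):
    # Unsupported operations return None so callers can fall back to default handling.
    op = SQS_OPERATIONS.get(operation_name)
    if not op:
        return None
    queue = queue_url.rsplit("/", maxsplit=1)[-1]
    return f"SQS {op['signature']} {queue}".rstrip(), op["span_action"]

print(sqs_signature("DeleteMessageBatch", "https://sqs.example.com/123/my-queue"))
# -> ('SQS DELETE_BATCH from my-queue', 'delete_batch')
```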
gh_patches_debug_35924
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-731
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nvidia proprietary driver installation
Hello
I tried to install my system via this installer, but I can't install the proprietary NVIDIA driver with it.
I think there is a "Nvidia (proprietary)" option, but it still installs the open-source NVIDIA driver.
https://github.com/archlinux/archinstall/blob/0071a069080732047e11309134869f3ab40c642c/archinstall/lib/hardware.py
How can I install the proprietary drivers?
Instead of using the gfx_driver option, adding the NVIDIA package names to the packages option works, like this:
```"packages": ["bash-completion","linux-headers","nvidia","nvidia-settings","opencl-nvidia"]```
My config file :
```
{
"bootloader": "systemd-bootctl",
"audio": "pulseaudio",
"filesystem": "ext4",
"gfx_driver": "Nvidia (proprietary)",
"harddrive": {
"path": "/dev/sda"
},
"hostname": "pc",
"kernels": ["linux"],
"keyboard-language": "trq",
"mirror-region": {},
"nic": {
"NetworkManager": true
},
"profile": "kde",
"ntp": true,
"packages": ["bash-completion", "linux-headers"],
"sys-encoding": "utf-8",
"timezone": "UTC",
"!root-password": "****",
"superusers": {
"kaan": {
"!password": "****"
}
}
}
```
Thank you.
</issue>
<code>
[start of profiles/xorg.py]
1 # A system with "xorg" installed
2
3 import archinstall
4
5 is_top_level_profile = True
6
7 __description__ = 'Installs a minimal system as well as xorg and graphics drivers.'
8
9 __packages__ = [
10 'dkms',
11 'xorg-server',
12 'xorg-xinit',
13 'nvidia-dkms',
14 *archinstall.lib.hardware.__packages__,
15 ]
16
17
18 def _prep_function(*args, **kwargs):
19 """
20 Magic function called by the importing installer
21 before continuing any further. It also avoids executing any
22 other code in this stage. So it's a safe way to ask the user
23 for more input before any other installer steps start.
24 """
25
26 archinstall.storage["gfx_driver_packages"] = archinstall.select_driver()
27
28 # TODO: Add language section and/or merge it with the locale selected
29 # earlier in for instance guided.py installer.
30
31 return True
32
33
34 # Ensures that this code only gets executed if executed
35 # through importlib.util.spec_from_file_location("xorg", "/somewhere/xorg.py")
36 # or through conventional import xorg
37 if __name__ == 'xorg':
38 try:
39 if "nvidia" in archinstall.storage.get("gfx_driver_packages", None):
40 if "linux-zen" in archinstall.storage['installation_session'].base_packages or "linux-lts" in archinstall.storage['installation_session'].base_packages:
41 for kernel in archinstall.storage['installation_session'].kernels:
42 archinstall.storage['installation_session'].add_additional_packages(f"{kernel}-headers") # Fixes https://github.com/archlinux/archinstall/issues/585
43 archinstall.storage['installation_session'].add_additional_packages("dkms") # I've had kernel regen fail if it wasn't installed before nvidia-dkms
44 archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit nvidia-dkms")
45 else:
46 archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}")
47 else:
48 archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}")
49 except:
50 archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit") # Prep didn't run, so there's no driver to install
51
[end of profiles/xorg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/profiles/xorg.py b/profiles/xorg.py
--- a/profiles/xorg.py
+++ b/profiles/xorg.py
@@ -1,6 +1,7 @@
# A system with "xorg" installed
import archinstall
+import logging
is_top_level_profile = True
@@ -36,15 +37,16 @@
# or through conventional import xorg
if __name__ == 'xorg':
try:
- if "nvidia" in archinstall.storage.get("gfx_driver_packages", None):
+ if "nvidia" in archinstall.storage.get("gfx_driver_packages", []):
if "linux-zen" in archinstall.storage['installation_session'].base_packages or "linux-lts" in archinstall.storage['installation_session'].base_packages:
for kernel in archinstall.storage['installation_session'].kernels:
archinstall.storage['installation_session'].add_additional_packages(f"{kernel}-headers") # Fixes https://github.com/archlinux/archinstall/issues/585
archinstall.storage['installation_session'].add_additional_packages("dkms") # I've had kernel regen fail if it wasn't installed before nvidia-dkms
archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit nvidia-dkms")
else:
- archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}")
+ archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
else:
- archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}")
- except:
+ archinstall.storage['installation_session'].add_additional_packages(f"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}")
+ except Exception as err:
+ archinstall.log(f"Could not handle nvidia and linuz-zen specific situations during xorg installation: {err}", level=logging.WARNING, fg="yellow")
archinstall.storage['installation_session'].add_additional_packages("xorg-server xorg-xinit") # Prep didn't run, so there's no driver to install
|
{"golden_diff": "diff --git a/profiles/xorg.py b/profiles/xorg.py\n--- a/profiles/xorg.py\n+++ b/profiles/xorg.py\n@@ -1,6 +1,7 @@\n # A system with \"xorg\" installed\n \n import archinstall\n+import logging\n \n is_top_level_profile = True\n \n@@ -36,15 +37,16 @@\n # or through conventional import xorg\n if __name__ == 'xorg':\n \ttry:\n-\t\tif \"nvidia\" in archinstall.storage.get(\"gfx_driver_packages\", None):\n+\t\tif \"nvidia\" in archinstall.storage.get(\"gfx_driver_packages\", []):\n \t\t\tif \"linux-zen\" in archinstall.storage['installation_session'].base_packages or \"linux-lts\" in archinstall.storage['installation_session'].base_packages:\n \t\t\t\tfor kernel in archinstall.storage['installation_session'].kernels:\n \t\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"{kernel}-headers\") # Fixes https://github.com/archlinux/archinstall/issues/585\n \t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"dkms\") # I've had kernel regen fail if it wasn't installed before nvidia-dkms\n \t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit nvidia-dkms\")\n \t\t\telse:\n-\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}\")\n+\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n \t\telse:\n-\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}\")\n-\texcept:\n+\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', []))}\")\n+\texcept Exception as err:\n+\t\tarchinstall.log(f\"Could not handle nvidia and linuz-zen specific situations during xorg installation: {err}\", level=logging.WARNING, fg=\"yellow\")\n \t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit\") # Prep didn't run, so there's no driver to install\n", "issue": "Nvidia proprietary driver installation\nHello \r\n\r\nI tried to install my system via this installer but i can't install nvidia proprietary driver with this installer.\r\n\r\nI think there is an option \"Nvidia (proprietary)\" but it still installing open source nvidia driver.\r\nhttps://github.com/archlinux/archinstall/blob/0071a069080732047e11309134869f3ab40c642c/archinstall/lib/hardware.py\r\n\r\nHow can install proprietary drivers ?\r\nInstead of gfx_driver option adding nvidia package names to packages option like that :\r\n```\"packages\": [\"bash-completion\",\"linux-headers\",\"nvidia\",\"nvidia-settings\",\"opencl-nvidia\"]```\r\n\r\n\r\n\r\nMy config file :\r\n\r\n```\r\n{\r\n \"bootloader\": \"systemd-bootctl\",\r\n \"audio\": \"pulseaudio\",\r\n \"filesystem\": \"ext4\",\r\n \"gfx_driver\": \"Nvidia (proprietary)\",\r\n \"harddrive\": {\r\n \"path\": \"/dev/sda\"\r\n },\r\n \"hostname\": \"pc\",\r\n \"kernels\": [\"linux\"],\r\n \"keyboard-language\": \"trq\",\r\n \"mirror-region\": {},\r\n \"nic\": {\r\n \"NetworkManager\": true\r\n },\r\n \"profile\": \"kde\",\r\n \"ntp\": true,\r\n \"packages\": [\"bash-completion\", \"linux-headers\"],\r\n \"sys-encoding\": \"utf-8\",\r\n \"timezone\": \"UTC\",\r\n \"!root-password\": \"****\",\r\n \"superusers\": {\r\n \"kaan\": {\r\n 
\"!password\": \"****\"\r\n }\r\n }\r\n}\r\n```\r\n\r\nThank you.\r\n\n", "before_files": [{"content": "# A system with \"xorg\" installed\n\nimport archinstall\n\nis_top_level_profile = True\n\n__description__ = 'Installs a minimal system as well as xorg and graphics drivers.'\n\n__packages__ = [\n\t'dkms',\n\t'xorg-server',\n\t'xorg-xinit',\n\t'nvidia-dkms',\n\t*archinstall.lib.hardware.__packages__,\n]\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\tarchinstall.storage[\"gfx_driver_packages\"] = archinstall.select_driver()\n\n\t# TODO: Add language section and/or merge it with the locale selected\n\t# earlier in for instance guided.py installer.\n\n\treturn True\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"xorg\", \"/somewhere/xorg.py\")\n# or through conventional import xorg\nif __name__ == 'xorg':\n\ttry:\n\t\tif \"nvidia\" in archinstall.storage.get(\"gfx_driver_packages\", None):\n\t\t\tif \"linux-zen\" in archinstall.storage['installation_session'].base_packages or \"linux-lts\" in archinstall.storage['installation_session'].base_packages:\n\t\t\t\tfor kernel in archinstall.storage['installation_session'].kernels:\n\t\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"{kernel}-headers\") # Fixes https://github.com/archlinux/archinstall/issues/585\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"dkms\") # I've had kernel regen fail if it wasn't installed before nvidia-dkms\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit nvidia-dkms\")\n\t\t\telse:\n\t\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}\")\n\t\telse:\n\t\t\tarchinstall.storage['installation_session'].add_additional_packages(f\"xorg-server xorg-xinit {' '.join(archinstall.storage.get('gfx_driver_packages', None))}\")\n\texcept:\n\t\tarchinstall.storage['installation_session'].add_additional_packages(\"xorg-server xorg-xinit\") # Prep didn't run, so there's no driver to install\n", "path": "profiles/xorg.py"}]}
| 1,524 | 519 |
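
One part of the fix above is defensive: with `.get("gfx_driver_packages", None)`, a missing key feeds `None` into the `in` test, the resulting `TypeError` is swallowed by the bare `except:`, and only plain xorg gets installed. A stdlib-only sketch of the difference follows; the empty dict stands in for `archinstall.storage` when the driver selection never ran:

```
storage = {}  # stands in for archinstall.storage when _prep_function never ran

try:
    "nvidia" in storage.get("gfx_driver_packages", None)
except TypeError as err:
    print("old default:", err)  # argument of type 'NoneType' is not iterable

# With [] as the default, the membership test is simply False and the caller
# can still decide which xorg packages to install.
print("new default:", "nvidia" in storage.get("gfx_driver_packages", []))
```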
gh_patches_debug_5418
|
rasdani/github-patches
|
git_diff
|
saulpw__visidata-625
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[help keystrokes] Help keystrokes has longnames for about 20 commands
**Small description**
The keystrokes column on the HelpSheet displays command longnames rather than keystrokes for about 20 commands.
**Expected result**
Keystroke values
**Actual result with screenshot**
sheet       longname          keystrokes
TableSheet  go-next-selected  next-selected
TableSheet  go-prev-selected  prev-selected
TableSheet  setcol-clipboard  paste-cells
TableSheet  search-prev       prev-search
TableSheet  setcol-input      edit-cells
TableSheet  setcol-fill       fill-nulls
TableSheet  go-next-value     next-value
TableSheet  go-right-page     page-right
BaseSheet   jump-prev         prev-sheet
TableSheet  go-prev-value     prev-value
BaseSheet   open-new          add-sheet
TableSheet  go-next-null      next-null
TableSheet  go-left-page      page-left
TableSheet  go-prev-null      prev-null
TableSheet  open-row          dive-row
DirSheet    open-row          dive-row
BaseSheet   open-statuses     statuses
Canvas      zoomout-mouse     2097152
BaseSheet   show-status       status
**Steps to reproduce with sample data and a .vd**
Visit the help sheet, and look for the long commands above.
**Additional context**
Please include the version of VisiData. Latest from develop
</issue>
<code>
[start of visidata/help.py]
1 from visidata import *
2
3
4 class HelpSheet(MetaSheet):
5 'Show all commands available to the source sheet.'
6 rowtype = 'commands'
7 precious = False
8 _ordering = [('sheet', False), ('longname', False)]
9
10 columns = [
11 ColumnAttr('sheet'),
12 ColumnAttr('longname'),
13 Column('keystrokes', getter=lambda col,row: col.sheet.revbinds.get(row.longname)),
14 Column('description', getter=lambda col,row: col.sheet.cmddict[(row.sheet, row.longname)].helpstr),
15 ColumnAttr('execstr', width=0),
16 Column('logged', width=0, getter=lambda col,row: isLoggableCommand(row.longname)),
17 ]
18 nKeys = 2
19
20 def iterload(self):
21 from pkg_resources import resource_filename
22 cmdlist = VisiDataMetaSheet('cmdlist', source=None)
23
24 self.cmddict = {}
25 itcmds = vd.commands.iterall()
26 for (k, o), v in itcmds:
27 yield v
28 v.sheet = o
29 self.cmddict[(v.sheet, v.longname)] = v
30
31 for cmdrow in cmdlist.rows:
32 k = (cmdrow.sheet, cmdrow.longname)
33 if k in self.cmddict:
34 self.cmddict[k].helpstr = cmdrow.helpstr
35
36 self.revbinds = {} # [longname] -> keystrokes
37 itbindings = vd.bindkeys.iterall()
38 for (keystrokes, _), longname in itbindings:
39 if keystrokes not in self.revbinds:
40 self.revbinds[longname] = keystrokes
41
42
43 @VisiData.api
44 @asyncthread
45 def help_search(vd, sheet, regex):
46 vs = HelpSheet(source=None)
47 vs.rows = [] # do not trigger push reload
48 vd.push(vs) # push first, then reload
49 vd.sync(vs.reload())
50
51 # find rows matching regex on original HelpSheet
52 rowidxs = list(vd.searchRegex(vs, regex=regex, columns="visibleCols"))
53
54 # add only matching rows
55 allrows = vs.rows
56 vs.rows = []
57 for rowidx in rowidxs:
58 vs.addRow(allrows[rowidx])
59
60
61 @VisiData.global_api
62 def openManPage(vd):
63 from pkg_resources import resource_filename
64 import os
65 with SuspendCurses():
66 os.system(' '.join(['man', resource_filename(__name__, 'man/vd.1')]))
67
68
69 # in VisiData, ^H refers to the man page
70 globalCommand('^H', 'sysopen-help', 'openManPage()', 'view vd man page')
71 BaseSheet.addCommand('z^H', 'help-commands', 'vd.push(HelpSheet(name + "_commands", source=sheet, revbinds={}))', 'view sheet of command longnames and keybindings for current sheet')
72 BaseSheet.addCommand('gz^H', 'help-commands-all', 'vd.push(HelpSheet("all_commands", source=None, revbinds={}))', 'view sheet of command longnames and keybindings for all sheet types')
73 globalCommand(None, 'help-search', 'help_search(sheet, input("help: "))', 'search through command longnames with search terms')
74
75 BaseSheet.bindkey('KEY_F(1)', 'sysopen-help')
76 BaseSheet.bindkey('KEY_BACKSPACE', 'sysopen-help')
77 BaseSheet.bindkey('zKEY_F(1)', 'help-commands')
78 BaseSheet.bindkey('zKEY_BACKSPACE', 'help-commands')
79
[end of visidata/help.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/visidata/help.py b/visidata/help.py
--- a/visidata/help.py
+++ b/visidata/help.py
@@ -36,7 +36,7 @@
self.revbinds = {} # [longname] -> keystrokes
itbindings = vd.bindkeys.iterall()
for (keystrokes, _), longname in itbindings:
- if keystrokes not in self.revbinds:
+ if (keystrokes not in self.revbinds) and ('-' not in keystrokes or keystrokes[-1] == '-'):
self.revbinds[longname] = keystrokes
|
{"golden_diff": "diff --git a/visidata/help.py b/visidata/help.py\n--- a/visidata/help.py\n+++ b/visidata/help.py\n@@ -36,7 +36,7 @@\n self.revbinds = {} # [longname] -> keystrokes\n itbindings = vd.bindkeys.iterall()\n for (keystrokes, _), longname in itbindings:\n- if keystrokes not in self.revbinds:\n+ if (keystrokes not in self.revbinds) and ('-' not in keystrokes or keystrokes[-1] == '-'):\n self.revbinds[longname] = keystrokes\n", "issue": "[help keystrokes] Help keystrokes has longnames for about 20 commands\n**Small description**\r\nKeystrokes in HelpSheet display commands not keystrokes for about 20 commands.\r\n\r\n**Expected result**\r\nKeystroke values\r\n\r\n**Actual result with screenshot**\r\n\r\nsheet\tlongname\tkeystrokes\r\nTableSheet\tgo-next-selected\tnext-selected\r\nTableSheet\tgo-prev-selected\tprev-selected\r\nTableSheet\tsetcol-clipboard\tpaste-cells\r\nTableSheet\tsearch-prev\tprev-search\r\nTableSheet\tsetcol-input\tedit-cells\r\nTableSheet\tsetcol-fill\tfill-nulls\r\nTableSheet\tgo-next-value\tnext-value\r\nTableSheet\tgo-right-page\tpage-right\r\nBaseSheet\tjump-prev\tprev-sheet\r\nTableSheet\tgo-prev-value\tprev-value\r\nBaseSheet\topen-new\tadd-sheet\r\nTableSheet\tgo-next-null\tnext-null\r\nTableSheet\tgo-left-page\tpage-left\r\nTableSheet\tgo-prev-null\tprev-null\r\nTableSheet\topen-row\tdive-row\r\nDirSheet\topen-row\tdive-row\r\nBaseSheet\topen-statuses\tstatuses\r\nCanvas\tzoomout-mouse\t2097152\r\nBaseSheet\tshow-status\tstatus\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\nVisit the help sheet, and look for the long commands above.\r\n\r\n**Additional context**\r\nPlease include the version of VisiData. Latest from develop\r\n\n", "before_files": [{"content": "from visidata import *\n\n\nclass HelpSheet(MetaSheet):\n 'Show all commands available to the source sheet.'\n rowtype = 'commands'\n precious = False\n _ordering = [('sheet', False), ('longname', False)]\n\n columns = [\n ColumnAttr('sheet'),\n ColumnAttr('longname'),\n Column('keystrokes', getter=lambda col,row: col.sheet.revbinds.get(row.longname)),\n Column('description', getter=lambda col,row: col.sheet.cmddict[(row.sheet, row.longname)].helpstr),\n ColumnAttr('execstr', width=0),\n Column('logged', width=0, getter=lambda col,row: isLoggableCommand(row.longname)),\n ]\n nKeys = 2\n\n def iterload(self):\n from pkg_resources import resource_filename\n cmdlist = VisiDataMetaSheet('cmdlist', source=None)\n\n self.cmddict = {}\n itcmds = vd.commands.iterall()\n for (k, o), v in itcmds:\n yield v\n v.sheet = o\n self.cmddict[(v.sheet, v.longname)] = v\n\n for cmdrow in cmdlist.rows:\n k = (cmdrow.sheet, cmdrow.longname)\n if k in self.cmddict:\n self.cmddict[k].helpstr = cmdrow.helpstr\n\n self.revbinds = {} # [longname] -> keystrokes\n itbindings = vd.bindkeys.iterall()\n for (keystrokes, _), longname in itbindings:\n if keystrokes not in self.revbinds:\n self.revbinds[longname] = keystrokes\n\n\[email protected]\n@asyncthread\ndef help_search(vd, sheet, regex):\n vs = HelpSheet(source=None)\n vs.rows = [] # do not trigger push reload\n vd.push(vs) # push first, then reload\n vd.sync(vs.reload())\n\n # find rows matching regex on original HelpSheet\n rowidxs = list(vd.searchRegex(vs, regex=regex, columns=\"visibleCols\"))\n\n # add only matching rows\n allrows = vs.rows\n vs.rows = []\n for rowidx in rowidxs:\n vs.addRow(allrows[rowidx])\n\n\[email protected]_api\ndef openManPage(vd):\n from pkg_resources import resource_filename\n import os\n with SuspendCurses():\n os.system(' 
'.join(['man', resource_filename(__name__, 'man/vd.1')]))\n\n\n# in VisiData, ^H refers to the man page\nglobalCommand('^H', 'sysopen-help', 'openManPage()', 'view vd man page')\nBaseSheet.addCommand('z^H', 'help-commands', 'vd.push(HelpSheet(name + \"_commands\", source=sheet, revbinds={}))', 'view sheet of command longnames and keybindings for current sheet')\nBaseSheet.addCommand('gz^H', 'help-commands-all', 'vd.push(HelpSheet(\"all_commands\", source=None, revbinds={}))', 'view sheet of command longnames and keybindings for all sheet types')\nglobalCommand(None, 'help-search', 'help_search(sheet, input(\"help: \"))', 'search through command longnames with search terms')\n\nBaseSheet.bindkey('KEY_F(1)', 'sysopen-help')\nBaseSheet.bindkey('KEY_BACKSPACE', 'sysopen-help')\nBaseSheet.bindkey('zKEY_F(1)', 'help-commands')\nBaseSheet.bindkey('zKEY_BACKSPACE', 'help-commands')\n", "path": "visidata/help.py"}]}
| 1,725 | 141 |
gh_patches_debug_4466
|
rasdani/github-patches
|
git_diff
|
googleapis__google-cloud-python-5899
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Default max_bytes in pubsub seems wrong
https://github.com/GoogleCloudPlatform/google-cloud-python/blob/005ddd11770f8953643dac9b64a037524da2f49e/pubsub/google/cloud/pubsub_v1/types.py#L43
In my code I'm seeing this:
```
google.api_core.exceptions.InvalidArgument: 400 The value for 10159812 is too large. You passed request_size in the request, but the maximum value is 10000000.
```
Based on the error coming out of the pubsub, I think this default max size is wrong.
</issue>
<code>
[start of pubsub/google/cloud/pubsub_v1/types.py]
1 # Copyright 2017, Google LLC All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 import collections
17 import sys
18
19 from google.api import http_pb2
20 from google.iam.v1 import iam_policy_pb2
21 from google.iam.v1 import policy_pb2
22 from google.iam.v1.logging import audit_data_pb2
23 from google.protobuf import descriptor_pb2
24 from google.protobuf import duration_pb2
25 from google.protobuf import empty_pb2
26 from google.protobuf import field_mask_pb2
27 from google.protobuf import timestamp_pb2
28
29 from google.api_core.protobuf_helpers import get_messages
30 from google.cloud.pubsub_v1.proto import pubsub_pb2
31
32
33 # Define the default values for batching.
34 #
35 # This class is used when creating a publisher or subscriber client, and
36 # these settings can be altered to tweak Pub/Sub behavior.
37 # The defaults should be fine for most use cases.
38 BatchSettings = collections.namedtuple(
39 'BatchSettings',
40 ['max_bytes', 'max_latency', 'max_messages'],
41 )
42 BatchSettings.__new__.__defaults__ = (
43 1024 * 1024 * 10, # max_bytes: 10 MB
44 0.05, # max_latency: 0.05 seconds
45 1000, # max_messages: 1,000
46 )
47
48 # Define the type class and default values for flow control settings.
49 #
50 # This class is used when creating a publisher or subscriber client, and
51 # these settings can be altered to tweak Pub/Sub behavior.
52 # The defaults should be fine for most use cases.
53 FlowControl = collections.namedtuple(
54 'FlowControl',
55 ['max_bytes', 'max_messages', 'resume_threshold', 'max_requests',
56 'max_request_batch_size', 'max_request_batch_latency',
57 'max_lease_duration'],
58 )
59 FlowControl.__new__.__defaults__ = (
60 100 * 1024 * 1024, # max_bytes: 100mb
61 100, # max_messages: 100
62 0.8, # resume_threshold: 80%
63 100, # max_requests: 100
64 100, # max_request_batch_size: 100
65 0.01, # max_request_batch_latency: 0.01s
66 2 * 60 * 60, # max_lease_duration: 2 hours.
67 )
68
69
70 _shared_modules = [
71 http_pb2,
72 iam_policy_pb2,
73 policy_pb2,
74 audit_data_pb2,
75 descriptor_pb2,
76 duration_pb2,
77 empty_pb2,
78 field_mask_pb2,
79 timestamp_pb2,
80 ]
81
82 _local_modules = [
83 pubsub_pb2,
84 ]
85
86
87 names = ['BatchSettings', 'FlowControl']
88
89
90 for module in _shared_modules:
91 for name, message in get_messages(module).items():
92 setattr(sys.modules[__name__], name, message)
93 names.append(name)
94
95 for module in _local_modules:
96 for name, message in get_messages(module).items():
97 message.__module__ = 'google.cloud.pubsub_v1.types'
98 setattr(sys.modules[__name__], name, message)
99 names.append(name)
100
101
102 __all__ = tuple(sorted(names))
103
[end of pubsub/google/cloud/pubsub_v1/types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pubsub/google/cloud/pubsub_v1/types.py b/pubsub/google/cloud/pubsub_v1/types.py
--- a/pubsub/google/cloud/pubsub_v1/types.py
+++ b/pubsub/google/cloud/pubsub_v1/types.py
@@ -40,7 +40,7 @@
['max_bytes', 'max_latency', 'max_messages'],
)
BatchSettings.__new__.__defaults__ = (
- 1024 * 1024 * 10, # max_bytes: 10 MB
+ 1000 * 1000 * 10, # max_bytes: documented "10 MB", enforced 10000000
0.05, # max_latency: 0.05 seconds
1000, # max_messages: 1,000
)
|
{"golden_diff": "diff --git a/pubsub/google/cloud/pubsub_v1/types.py b/pubsub/google/cloud/pubsub_v1/types.py\n--- a/pubsub/google/cloud/pubsub_v1/types.py\n+++ b/pubsub/google/cloud/pubsub_v1/types.py\n@@ -40,7 +40,7 @@\n ['max_bytes', 'max_latency', 'max_messages'],\n )\n BatchSettings.__new__.__defaults__ = (\n- 1024 * 1024 * 10, # max_bytes: 10 MB\n+ 1000 * 1000 * 10, # max_bytes: documented \"10 MB\", enforced 10000000\n 0.05, # max_latency: 0.05 seconds\n 1000, # max_messages: 1,000\n )\n", "issue": "Default max_bytes in pubsub seems wrong\nhttps://github.com/GoogleCloudPlatform/google-cloud-python/blob/005ddd11770f8953643dac9b64a037524da2f49e/pubsub/google/cloud/pubsub_v1/types.py#L43\r\n\r\nIn my code I'm seeing this:\r\n```\r\ngoogle.api_core.exceptions.InvalidArgument: 400 The value for 10159812 is too large. You passed request_size in the request, but the maximum value is 10000000.\r\n```\r\n\r\nBased on the error coming out of the pubsub, I think this default max size is wrong.\n", "before_files": [{"content": "# Copyright 2017, Google LLC All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nimport collections\nimport sys\n\nfrom google.api import http_pb2\nfrom google.iam.v1 import iam_policy_pb2\nfrom google.iam.v1 import policy_pb2\nfrom google.iam.v1.logging import audit_data_pb2\nfrom google.protobuf import descriptor_pb2\nfrom google.protobuf import duration_pb2\nfrom google.protobuf import empty_pb2\nfrom google.protobuf import field_mask_pb2\nfrom google.protobuf import timestamp_pb2\n\nfrom google.api_core.protobuf_helpers import get_messages\nfrom google.cloud.pubsub_v1.proto import pubsub_pb2\n\n\n# Define the default values for batching.\n#\n# This class is used when creating a publisher or subscriber client, and\n# these settings can be altered to tweak Pub/Sub behavior.\n# The defaults should be fine for most use cases.\nBatchSettings = collections.namedtuple(\n 'BatchSettings',\n ['max_bytes', 'max_latency', 'max_messages'],\n)\nBatchSettings.__new__.__defaults__ = (\n 1024 * 1024 * 10, # max_bytes: 10 MB\n 0.05, # max_latency: 0.05 seconds\n 1000, # max_messages: 1,000\n)\n\n# Define the type class and default values for flow control settings.\n#\n# This class is used when creating a publisher or subscriber client, and\n# these settings can be altered to tweak Pub/Sub behavior.\n# The defaults should be fine for most use cases.\nFlowControl = collections.namedtuple(\n 'FlowControl',\n ['max_bytes', 'max_messages', 'resume_threshold', 'max_requests',\n 'max_request_batch_size', 'max_request_batch_latency',\n 'max_lease_duration'],\n)\nFlowControl.__new__.__defaults__ = (\n 100 * 1024 * 1024, # max_bytes: 100mb\n 100, # max_messages: 100\n 0.8, # resume_threshold: 80%\n 100, # max_requests: 100\n 100, # max_request_batch_size: 100\n 0.01, # max_request_batch_latency: 0.01s\n 2 * 60 * 60, # max_lease_duration: 2 hours.\n)\n\n\n_shared_modules = [\n http_pb2,\n iam_policy_pb2,\n policy_pb2,\n 
audit_data_pb2,\n descriptor_pb2,\n duration_pb2,\n empty_pb2,\n field_mask_pb2,\n timestamp_pb2,\n]\n\n_local_modules = [\n pubsub_pb2,\n]\n\n\nnames = ['BatchSettings', 'FlowControl']\n\n\nfor module in _shared_modules:\n for name, message in get_messages(module).items():\n setattr(sys.modules[__name__], name, message)\n names.append(name)\n\nfor module in _local_modules:\n for name, message in get_messages(module).items():\n message.__module__ = 'google.cloud.pubsub_v1.types'\n setattr(sys.modules[__name__], name, message)\n names.append(name)\n\n\n__all__ = tuple(sorted(names))\n", "path": "pubsub/google/cloud/pubsub_v1/types.py"}]}
| 1,742 | 194 |
gh_patches_debug_11467
|
rasdani/github-patches
|
git_diff
|
scoutapp__scout_apm_python-523
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error with the scout add-on in Heroku
With version 2.14.1 of this package, I encounter the following error:
` File "/code/nluproxy/__init__.py", line 3, in <module>
from .celery import app as celery_app
File "/code/nluproxy/celery.py", line 12, in <module>
scout_apm.celery.install(app)
File "/usr/local/lib/python3.7/site-packages/scout_apm/celery.py", line 58, in install
installed = scout_apm.core.install()
File "/usr/local/lib/python3.7/site-packages/scout_apm/compat.py", line 132, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/__init__.py", line 46, in install
report_app_metadata()
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/metadata.py", line 17, in report_app_metadata
event_value=get_metadata(),
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/metadata.py", line 37, in get_metadata
"libraries": get_python_packages_versions(),
File "/usr/local/lib/python3.7/site-packages/scout_apm/core/metadata.py", line 60, in get_python_packages_versions
for distribution in distributions()
TypeError: '<' not supported between instances of 'NoneType' and 'str'
`
</issue>
<code>
[start of src/scout_apm/core/metadata.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import datetime as dt
5 import sys
6 from os import getpid
7
8 from scout_apm.core.commands import ApplicationEvent
9 from scout_apm.core.config import scout_config
10 from scout_apm.core.socket import CoreAgentSocketThread
11
12
13 def report_app_metadata():
14 CoreAgentSocketThread.send(
15 ApplicationEvent(
16 event_type="scout.metadata",
17 event_value=get_metadata(),
18 source="Pid: " + str(getpid()),
19 timestamp=dt.datetime.utcnow(),
20 )
21 )
22
23
24 def get_metadata():
25 data = {
26 "language": "python",
27 "language_version": "{}.{}.{}".format(*sys.version_info[:3]),
28 "server_time": dt.datetime.utcnow().isoformat() + "Z",
29 "framework": scout_config.value("framework"),
30 "framework_version": scout_config.value("framework_version"),
31 "environment": "",
32 "app_server": scout_config.value("app_server"),
33 "hostname": scout_config.value("hostname"),
34 "database_engine": "",
35 "database_adapter": "",
36 "application_name": "",
37 "libraries": get_python_packages_versions(),
38 "paas": "",
39 "application_root": scout_config.value("application_root"),
40 "scm_subdirectory": scout_config.value("scm_subdirectory"),
41 "git_sha": scout_config.value("revision_sha"),
42 }
43 # Deprecated - see #327:
44 data["version"] = data["language_version"]
45 return data
46
47
48 def get_python_packages_versions():
49 try:
50 if sys.version_info >= (3, 8):
51 from importlib.metadata import distributions
52 else:
53 from importlib_metadata import distributions
54 except ImportError:
55 # For some reason it is unavailable
56 return []
57
58 return sorted(
59 (distribution.metadata["Name"], distribution.metadata["Version"])
60 for distribution in distributions()
61 )
62
[end of src/scout_apm/core/metadata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/scout_apm/core/metadata.py b/src/scout_apm/core/metadata.py
--- a/src/scout_apm/core/metadata.py
+++ b/src/scout_apm/core/metadata.py
@@ -56,6 +56,15 @@
return []
return sorted(
- (distribution.metadata["Name"], distribution.metadata["Version"])
+ (
+ distribution.metadata["Name"],
+ (distribution.metadata["Version"] or "Unknown"),
+ )
for distribution in distributions()
+ # Filter out distributions wtih None for name or value. This can be the
+ # case for packages without a METADATA or PKG-INFO file in their relevant
+ # distribution directory. According to comments in importlib.metadata
+ # internals this is possible for certain old packages, but I could only
+ # recreate it by deliberately deleting said files.
+ if distribution.metadata["Name"]
)
|
{"golden_diff": "diff --git a/src/scout_apm/core/metadata.py b/src/scout_apm/core/metadata.py\n--- a/src/scout_apm/core/metadata.py\n+++ b/src/scout_apm/core/metadata.py\n@@ -56,6 +56,15 @@\n return []\n \n return sorted(\n- (distribution.metadata[\"Name\"], distribution.metadata[\"Version\"])\n+ (\n+ distribution.metadata[\"Name\"],\n+ (distribution.metadata[\"Version\"] or \"Unknown\"),\n+ )\n for distribution in distributions()\n+ # Filter out distributions wtih None for name or value. This can be the\n+ # case for packages without a METADATA or PKG-INFO file in their relevant\n+ # distribution directory. According to comments in importlib.metadata\n+ # internals this is possible for certain old packages, but I could only\n+ # recreate it by deliberately deleting said files.\n+ if distribution.metadata[\"Name\"]\n )\n", "issue": "Error with the scout add-on in Heroku\nWith version 2.14.1 of this package, I encounter the following error:\r\n` File \"/code/nluproxy/__init__.py\", line 3, in <module>\r\n\r\n from .celery import app as celery_app\r\n\r\n File \"/code/nluproxy/celery.py\", line 12, in <module>\r\n\r\n scout_apm.celery.install(app)\r\n\r\n File \"/usr/local/lib/python3.7/site-packages/scout_apm/celery.py\", line 58, in install\r\n\r\n installed = scout_apm.core.install()\r\n\r\n File \"/usr/local/lib/python3.7/site-packages/scout_apm/compat.py\", line 132, in wrapper\r\n\r\n return func(*args, **kwargs)\r\n\r\n File \"/usr/local/lib/python3.7/site-packages/scout_apm/core/__init__.py\", line 46, in install\r\n\r\n report_app_metadata()\r\n\r\n File \"/usr/local/lib/python3.7/site-packages/scout_apm/core/metadata.py\", line 17, in report_app_metadata\r\n\r\n event_value=get_metadata(),\r\n\r\n File \"/usr/local/lib/python3.7/site-packages/scout_apm/core/metadata.py\", line 37, in get_metadata\r\n\r\n \"libraries\": get_python_packages_versions(),\r\n\r\n File \"/usr/local/lib/python3.7/site-packages/scout_apm/core/metadata.py\", line 60, in get_python_packages_versions\r\n\r\n for distribution in distributions()\r\n\r\nTypeError: '<' not supported between instances of 'NoneType' and 'str'\r\n\r\n`\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport datetime as dt\nimport sys\nfrom os import getpid\n\nfrom scout_apm.core.commands import ApplicationEvent\nfrom scout_apm.core.config import scout_config\nfrom scout_apm.core.socket import CoreAgentSocketThread\n\n\ndef report_app_metadata():\n CoreAgentSocketThread.send(\n ApplicationEvent(\n event_type=\"scout.metadata\",\n event_value=get_metadata(),\n source=\"Pid: \" + str(getpid()),\n timestamp=dt.datetime.utcnow(),\n )\n )\n\n\ndef get_metadata():\n data = {\n \"language\": \"python\",\n \"language_version\": \"{}.{}.{}\".format(*sys.version_info[:3]),\n \"server_time\": dt.datetime.utcnow().isoformat() + \"Z\",\n \"framework\": scout_config.value(\"framework\"),\n \"framework_version\": scout_config.value(\"framework_version\"),\n \"environment\": \"\",\n \"app_server\": scout_config.value(\"app_server\"),\n \"hostname\": scout_config.value(\"hostname\"),\n \"database_engine\": \"\",\n \"database_adapter\": \"\",\n \"application_name\": \"\",\n \"libraries\": get_python_packages_versions(),\n \"paas\": \"\",\n \"application_root\": scout_config.value(\"application_root\"),\n \"scm_subdirectory\": scout_config.value(\"scm_subdirectory\"),\n \"git_sha\": scout_config.value(\"revision_sha\"),\n }\n # Deprecated - see #327:\n 
data[\"version\"] = data[\"language_version\"]\n return data\n\n\ndef get_python_packages_versions():\n try:\n if sys.version_info >= (3, 8):\n from importlib.metadata import distributions\n else:\n from importlib_metadata import distributions\n except ImportError:\n # For some reason it is unavailable\n return []\n\n return sorted(\n (distribution.metadata[\"Name\"], distribution.metadata[\"Version\"])\n for distribution in distributions()\n )\n", "path": "src/scout_apm/core/metadata.py"}]}
| 1,402 | 204 |
gh_patches_debug_7171
|
rasdani/github-patches
|
git_diff
|
matrix-org__synapse-14727
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
synapse returns undocumented `device` property from `GET /_matrix/client/v3/pushrules/`
Synapse's response to a `/pushrules` query looks something like:
```json5
[
"global": {
"content": { /*... */ },
"override": { /*... */ },
"room": { /*... */ },
"sender": { /*... */ },
"underride": { /*... */ }
},
"device": {}
]
```
This `device` property is not [specced](https://spec.matrix.org/v1.2/client-server-api/#get_matrixclientv3pushrules) and should be removed.
</issue>
<code>
[start of synapse/push/clientformat.py]
1 # Copyright 2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 from typing import Any, Dict, List, Optional
17
18 from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
19 from synapse.synapse_rust.push import FilteredPushRules, PushRule
20 from synapse.types import UserID
21
22
23 def format_push_rules_for_user(
24 user: UserID, ruleslist: FilteredPushRules
25 ) -> Dict[str, Dict[str, list]]:
26 """Converts a list of rawrules and a enabled map into nested dictionaries
27 to match the Matrix client-server format for push rules"""
28
29 rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {
30 "global": {},
31 "device": {},
32 }
33
34 rules["global"] = _add_empty_priority_class_arrays(rules["global"])
35
36 for r, enabled in ruleslist.rules():
37 template_name = _priority_class_to_template_name(r.priority_class)
38
39 rulearray = rules["global"][template_name]
40
41 template_rule = _rule_to_template(r)
42 if not template_rule:
43 continue
44
45 rulearray.append(template_rule)
46
47 pattern_type = template_rule.pop("pattern_type", None)
48 if pattern_type == "user_id":
49 template_rule["pattern"] = user.to_string()
50 elif pattern_type == "user_localpart":
51 template_rule["pattern"] = user.localpart
52
53 template_rule["enabled"] = enabled
54
55 if "conditions" not in template_rule:
56 # Not all formatted rules have explicit conditions, e.g. "room"
57 # rules omit them as they can be derived from the kind and rule ID.
58 #
59 # If the formatted rule has no conditions then we can skip the
60 # formatting of conditions.
61 continue
62
63 # Remove internal stuff.
64 template_rule["conditions"] = copy.deepcopy(template_rule["conditions"])
65 for c in template_rule["conditions"]:
66 c.pop("_cache_key", None)
67
68 pattern_type = c.pop("pattern_type", None)
69 if pattern_type == "user_id":
70 c["pattern"] = user.to_string()
71 elif pattern_type == "user_localpart":
72 c["pattern"] = user.localpart
73
74 sender_type = c.pop("sender_type", None)
75 if sender_type == "user_id":
76 c["sender"] = user.to_string()
77
78 return rules
79
80
81 def _add_empty_priority_class_arrays(d: Dict[str, list]) -> Dict[str, list]:
82 for pc in PRIORITY_CLASS_MAP.keys():
83 d[pc] = []
84 return d
85
86
87 def _rule_to_template(rule: PushRule) -> Optional[Dict[str, Any]]:
88 templaterule: Dict[str, Any]
89
90 unscoped_rule_id = _rule_id_from_namespaced(rule.rule_id)
91
92 template_name = _priority_class_to_template_name(rule.priority_class)
93 if template_name in ["override", "underride"]:
94 templaterule = {"conditions": rule.conditions, "actions": rule.actions}
95 elif template_name in ["sender", "room"]:
96 templaterule = {"actions": rule.actions}
97 unscoped_rule_id = rule.conditions[0]["pattern"]
98 elif template_name == "content":
99 if len(rule.conditions) != 1:
100 return None
101 thecond = rule.conditions[0]
102
103 templaterule = {"actions": rule.actions}
104 if "pattern" in thecond:
105 templaterule["pattern"] = thecond["pattern"]
106 elif "pattern_type" in thecond:
107 templaterule["pattern_type"] = thecond["pattern_type"]
108 else:
109 return None
110 else:
111 # This should not be reached unless this function is not kept in sync
112 # with PRIORITY_CLASS_INVERSE_MAP.
113 raise ValueError("Unexpected template_name: %s" % (template_name,))
114
115 templaterule["rule_id"] = unscoped_rule_id
116 templaterule["default"] = rule.default
117 return templaterule
118
119
120 def _rule_id_from_namespaced(in_rule_id: str) -> str:
121 return in_rule_id.split("/")[-1]
122
123
124 def _priority_class_to_template_name(pc: int) -> str:
125 return PRIORITY_CLASS_INVERSE_MAP[pc]
126
[end of synapse/push/clientformat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/synapse/push/clientformat.py b/synapse/push/clientformat.py
--- a/synapse/push/clientformat.py
+++ b/synapse/push/clientformat.py
@@ -26,10 +26,7 @@
"""Converts a list of rawrules and a enabled map into nested dictionaries
to match the Matrix client-server format for push rules"""
- rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {
- "global": {},
- "device": {},
- }
+ rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {"global": {}}
rules["global"] = _add_empty_priority_class_arrays(rules["global"])
|
{"golden_diff": "diff --git a/synapse/push/clientformat.py b/synapse/push/clientformat.py\n--- a/synapse/push/clientformat.py\n+++ b/synapse/push/clientformat.py\n@@ -26,10 +26,7 @@\n \"\"\"Converts a list of rawrules and a enabled map into nested dictionaries\n to match the Matrix client-server format for push rules\"\"\"\n \n- rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {\n- \"global\": {},\n- \"device\": {},\n- }\n+ rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {\"global\": {}}\n \n rules[\"global\"] = _add_empty_priority_class_arrays(rules[\"global\"])\n", "issue": "synapse returns undocumented `device` property from `GET /_matrix/client/v3/pushrules/`\nSynapse's response to a `/pushrules` query looks something like:\r\n\r\n```json5\r\n[\r\n \"global\": {\r\n \"content\": { /*... */ },\r\n \"override\": { /*... */ },\r\n \"room\": { /*... */ },\r\n \"sender\": { /*... */ },\r\n \"underride\": { /*... */ }\r\n },\r\n \"device\": {}\r\n]\r\n```\r\n\r\nThis `device` property is not [specced](https://spec.matrix.org/v1.2/client-server-api/#get_matrixclientv3pushrules) and should be removed.\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\nfrom typing import Any, Dict, List, Optional\n\nfrom synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP\nfrom synapse.synapse_rust.push import FilteredPushRules, PushRule\nfrom synapse.types import UserID\n\n\ndef format_push_rules_for_user(\n user: UserID, ruleslist: FilteredPushRules\n) -> Dict[str, Dict[str, list]]:\n \"\"\"Converts a list of rawrules and a enabled map into nested dictionaries\n to match the Matrix client-server format for push rules\"\"\"\n\n rules: Dict[str, Dict[str, List[Dict[str, Any]]]] = {\n \"global\": {},\n \"device\": {},\n }\n\n rules[\"global\"] = _add_empty_priority_class_arrays(rules[\"global\"])\n\n for r, enabled in ruleslist.rules():\n template_name = _priority_class_to_template_name(r.priority_class)\n\n rulearray = rules[\"global\"][template_name]\n\n template_rule = _rule_to_template(r)\n if not template_rule:\n continue\n\n rulearray.append(template_rule)\n\n pattern_type = template_rule.pop(\"pattern_type\", None)\n if pattern_type == \"user_id\":\n template_rule[\"pattern\"] = user.to_string()\n elif pattern_type == \"user_localpart\":\n template_rule[\"pattern\"] = user.localpart\n\n template_rule[\"enabled\"] = enabled\n\n if \"conditions\" not in template_rule:\n # Not all formatted rules have explicit conditions, e.g. 
\"room\"\n # rules omit them as they can be derived from the kind and rule ID.\n #\n # If the formatted rule has no conditions then we can skip the\n # formatting of conditions.\n continue\n\n # Remove internal stuff.\n template_rule[\"conditions\"] = copy.deepcopy(template_rule[\"conditions\"])\n for c in template_rule[\"conditions\"]:\n c.pop(\"_cache_key\", None)\n\n pattern_type = c.pop(\"pattern_type\", None)\n if pattern_type == \"user_id\":\n c[\"pattern\"] = user.to_string()\n elif pattern_type == \"user_localpart\":\n c[\"pattern\"] = user.localpart\n\n sender_type = c.pop(\"sender_type\", None)\n if sender_type == \"user_id\":\n c[\"sender\"] = user.to_string()\n\n return rules\n\n\ndef _add_empty_priority_class_arrays(d: Dict[str, list]) -> Dict[str, list]:\n for pc in PRIORITY_CLASS_MAP.keys():\n d[pc] = []\n return d\n\n\ndef _rule_to_template(rule: PushRule) -> Optional[Dict[str, Any]]:\n templaterule: Dict[str, Any]\n\n unscoped_rule_id = _rule_id_from_namespaced(rule.rule_id)\n\n template_name = _priority_class_to_template_name(rule.priority_class)\n if template_name in [\"override\", \"underride\"]:\n templaterule = {\"conditions\": rule.conditions, \"actions\": rule.actions}\n elif template_name in [\"sender\", \"room\"]:\n templaterule = {\"actions\": rule.actions}\n unscoped_rule_id = rule.conditions[0][\"pattern\"]\n elif template_name == \"content\":\n if len(rule.conditions) != 1:\n return None\n thecond = rule.conditions[0]\n\n templaterule = {\"actions\": rule.actions}\n if \"pattern\" in thecond:\n templaterule[\"pattern\"] = thecond[\"pattern\"]\n elif \"pattern_type\" in thecond:\n templaterule[\"pattern_type\"] = thecond[\"pattern_type\"]\n else:\n return None\n else:\n # This should not be reached unless this function is not kept in sync\n # with PRIORITY_CLASS_INVERSE_MAP.\n raise ValueError(\"Unexpected template_name: %s\" % (template_name,))\n\n templaterule[\"rule_id\"] = unscoped_rule_id\n templaterule[\"default\"] = rule.default\n return templaterule\n\n\ndef _rule_id_from_namespaced(in_rule_id: str) -> str:\n return in_rule_id.split(\"/\")[-1]\n\n\ndef _priority_class_to_template_name(pc: int) -> str:\n return PRIORITY_CLASS_INVERSE_MAP[pc]\n", "path": "synapse/push/clientformat.py"}]}
| 1,982 | 163 |
gh_patches_debug_20549
|
rasdani/github-patches
|
git_diff
|
liberapay__liberapay.com-1378
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cron fails to run
The weekly cron for `create_payday_issue` is no longer running successfully at the scheduled time.
</issue>
<code>
[start of liberapay/cron.py]
1 from collections import namedtuple
2 from datetime import datetime, timedelta
3 import logging
4 import threading
5 from time import sleep
6
7
8 logger = logging.getLogger('liberapay.cron')
9
10
11 Daily = namedtuple('Daily', 'hour')
12 Weekly = namedtuple('Weekly', 'weekday hour')
13
14
15 class Cron(object):
16
17 def __init__(self, website):
18 self.website = website
19 self.conn = None
20 self.has_lock = False
21 self.exclusive_jobs = []
22
23 def __call__(self, period, func, exclusive=False):
24 if not self.website.env.run_cron_jobs:
25 return
26 if exclusive and not self.has_lock:
27 self.exclusive_jobs.append((period, func))
28 self._wait_for_lock()
29 return
30 def f():
31 if isinstance(period, Weekly):
32 while True:
33 now = datetime.utcnow()
34 then = now.replace(hour=period.hour, minute=10, second=0)
35 days = (now.isoweekday() - period.weekday) % 7
36 if days:
37 then += timedelta(days=days)
38 seconds = (then - now).total_seconds()
39 if seconds > 0:
40 sleep(seconds)
41 elif seconds < -60:
42 sleep(86400 * 6)
43 continue
44 try:
45 func()
46 except Exception as e:
47 self.website.tell_sentry(e, {})
48 sleep(86400 * 6)
49 elif isinstance(period, Daily):
50 while True:
51 now = datetime.utcnow()
52 then = now.replace(hour=period.hour, minute=5, second=0)
53 seconds = (then - now).total_seconds()
54 if seconds > 0:
55 # later today
56 sleep(seconds)
57 elif seconds < -60:
58 # tomorrow
59 sleep(3600 * 24 + seconds)
60 try:
61 func()
62 except Exception as e:
63 self.website.tell_sentry(e, {})
64 # retry in 5 minutes
65 sleep(300)
66 else:
67 # success, sleep until tomorrow
68 sleep(3600 * 23)
69 else:
70 assert period >= 1
71 while True:
72 try:
73 func()
74 except Exception as e:
75 self.website.tell_sentry(e, {})
76 sleep(period)
77 t = threading.Thread(target=f)
78 t.daemon = True
79 t.start()
80
81 def _wait_for_lock(self):
82 if self.conn:
83 return # Already waiting
84 self.conn = self.website.db.get_connection().__enter__()
85 def f():
86 cursor = self.conn.cursor()
87 while True:
88 if cursor.one("SELECT pg_try_advisory_lock(0)"):
89 self.has_lock = True
90 break
91 sleep(300)
92 for job in self.exclusive_jobs:
93 self(*job, exclusive=True)
94 t = threading.Thread(target=f)
95 t.daemon = True
96 t.start()
97
[end of liberapay/cron.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/liberapay/cron.py b/liberapay/cron.py
--- a/liberapay/cron.py
+++ b/liberapay/cron.py
@@ -21,7 +21,7 @@
self.exclusive_jobs = []
def __call__(self, period, func, exclusive=False):
- if not self.website.env.run_cron_jobs:
+ if not self.website.env.run_cron_jobs or not period:
return
if exclusive and not self.has_lock:
self.exclusive_jobs.append((period, func))
@@ -32,7 +32,7 @@
while True:
now = datetime.utcnow()
then = now.replace(hour=period.hour, minute=10, second=0)
- days = (now.isoweekday() - period.weekday) % 7
+ days = (period.weekday - now.isoweekday()) % 7
if days:
then += timedelta(days=days)
seconds = (then - now).total_seconds()
|
{"golden_diff": "diff --git a/liberapay/cron.py b/liberapay/cron.py\n--- a/liberapay/cron.py\n+++ b/liberapay/cron.py\n@@ -21,7 +21,7 @@\n self.exclusive_jobs = []\n \n def __call__(self, period, func, exclusive=False):\n- if not self.website.env.run_cron_jobs:\n+ if not self.website.env.run_cron_jobs or not period:\n return\n if exclusive and not self.has_lock:\n self.exclusive_jobs.append((period, func))\n@@ -32,7 +32,7 @@\n while True:\n now = datetime.utcnow()\n then = now.replace(hour=period.hour, minute=10, second=0)\n- days = (now.isoweekday() - period.weekday) % 7\n+ days = (period.weekday - now.isoweekday()) % 7\n if days:\n then += timedelta(days=days)\n seconds = (then - now).total_seconds()\n", "issue": "Cron fails to run\nThe weekly cron for `create_payday_issue` is no longer running successfully at the scheduled time.\n", "before_files": [{"content": "from collections import namedtuple\nfrom datetime import datetime, timedelta\nimport logging\nimport threading\nfrom time import sleep\n\n\nlogger = logging.getLogger('liberapay.cron')\n\n\nDaily = namedtuple('Daily', 'hour')\nWeekly = namedtuple('Weekly', 'weekday hour')\n\n\nclass Cron(object):\n\n def __init__(self, website):\n self.website = website\n self.conn = None\n self.has_lock = False\n self.exclusive_jobs = []\n\n def __call__(self, period, func, exclusive=False):\n if not self.website.env.run_cron_jobs:\n return\n if exclusive and not self.has_lock:\n self.exclusive_jobs.append((period, func))\n self._wait_for_lock()\n return\n def f():\n if isinstance(period, Weekly):\n while True:\n now = datetime.utcnow()\n then = now.replace(hour=period.hour, minute=10, second=0)\n days = (now.isoweekday() - period.weekday) % 7\n if days:\n then += timedelta(days=days)\n seconds = (then - now).total_seconds()\n if seconds > 0:\n sleep(seconds)\n elif seconds < -60:\n sleep(86400 * 6)\n continue\n try:\n func()\n except Exception as e:\n self.website.tell_sentry(e, {})\n sleep(86400 * 6)\n elif isinstance(period, Daily):\n while True:\n now = datetime.utcnow()\n then = now.replace(hour=period.hour, minute=5, second=0)\n seconds = (then - now).total_seconds()\n if seconds > 0:\n # later today\n sleep(seconds)\n elif seconds < -60:\n # tomorrow\n sleep(3600 * 24 + seconds)\n try:\n func()\n except Exception as e:\n self.website.tell_sentry(e, {})\n # retry in 5 minutes\n sleep(300)\n else:\n # success, sleep until tomorrow\n sleep(3600 * 23)\n else:\n assert period >= 1\n while True:\n try:\n func()\n except Exception as e:\n self.website.tell_sentry(e, {})\n sleep(period)\n t = threading.Thread(target=f)\n t.daemon = True\n t.start()\n\n def _wait_for_lock(self):\n if self.conn:\n return # Already waiting\n self.conn = self.website.db.get_connection().__enter__()\n def f():\n cursor = self.conn.cursor()\n while True:\n if cursor.one(\"SELECT pg_try_advisory_lock(0)\"):\n self.has_lock = True\n break\n sleep(300)\n for job in self.exclusive_jobs:\n self(*job, exclusive=True)\n t = threading.Thread(target=f)\n t.daemon = True\n t.start()\n", "path": "liberapay/cron.py"}]}
| 1,378 | 227 |
gh_patches_debug_17568
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-473
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
QLayoutWidget Add Label Error: No method AddItem
The code in QLayoutWidget looks something like:
```
def addLabel(self, text=' ', row=None, col=None, rowspan=1, colspan=1, **kargs):
"""
Create a QLabel with *text* and place it in the next available cell (or in the cell specified)
All extra keyword arguments are passed to QLabel().
Returns the created widget.
"""
text = QtGui.QLabel(text, **kargs)
self.addItem(text, row, col, rowspan, colspan)
return text
```
I think it should have `self.addWidget` instead of `self.addItem` as addItem method is missing.
Error:
```
AttributeError: 'LayoutWidget' object has no attribute 'addItem'
```
</issue>
<code>
[start of pyqtgraph/widgets/LayoutWidget.py]
1 from ..Qt import QtGui, QtCore
2
3 __all__ = ['LayoutWidget']
4 class LayoutWidget(QtGui.QWidget):
5 """
6 Convenience class used for laying out QWidgets in a grid.
7 (It's just a little less effort to use than QGridLayout)
8 """
9
10 def __init__(self, parent=None):
11 QtGui.QWidget.__init__(self, parent)
12 self.layout = QtGui.QGridLayout()
13 self.setLayout(self.layout)
14 self.items = {}
15 self.rows = {}
16 self.currentRow = 0
17 self.currentCol = 0
18
19 def nextRow(self):
20 """Advance to next row for automatic widget placement"""
21 self.currentRow += 1
22 self.currentCol = 0
23
24 def nextColumn(self, colspan=1):
25 """Advance to next column, while returning the current column number
26 (generally only for internal use--called by addWidget)"""
27 self.currentCol += colspan
28 return self.currentCol-colspan
29
30 def nextCol(self, *args, **kargs):
31 """Alias of nextColumn"""
32 return self.nextColumn(*args, **kargs)
33
34
35 def addLabel(self, text=' ', row=None, col=None, rowspan=1, colspan=1, **kargs):
36 """
37 Create a QLabel with *text* and place it in the next available cell (or in the cell specified)
38 All extra keyword arguments are passed to QLabel().
39 Returns the created widget.
40 """
41 text = QtGui.QLabel(text, **kargs)
42 self.addItem(text, row, col, rowspan, colspan)
43 return text
44
45 def addLayout(self, row=None, col=None, rowspan=1, colspan=1, **kargs):
46 """
47 Create an empty LayoutWidget and place it in the next available cell (or in the cell specified)
48 All extra keyword arguments are passed to :func:`LayoutWidget.__init__ <pyqtgraph.LayoutWidget.__init__>`
49 Returns the created widget.
50 """
51 layout = LayoutWidget(**kargs)
52 self.addItem(layout, row, col, rowspan, colspan)
53 return layout
54
55 def addWidget(self, item, row=None, col=None, rowspan=1, colspan=1):
56 """
57 Add a widget to the layout and place it in the next available cell (or in the cell specified).
58 """
59 if row == 'next':
60 self.nextRow()
61 row = self.currentRow
62 elif row is None:
63 row = self.currentRow
64
65
66 if col is None:
67 col = self.nextCol(colspan)
68
69 if row not in self.rows:
70 self.rows[row] = {}
71 self.rows[row][col] = item
72 self.items[item] = (row, col)
73
74 self.layout.addWidget(item, row, col, rowspan, colspan)
75
76 def getWidget(self, row, col):
77 """Return the widget in (*row*, *col*)"""
78 return self.row[row][col]
79
80 #def itemIndex(self, item):
81 #for i in range(self.layout.count()):
82 #if self.layout.itemAt(i).graphicsItem() is item:
83 #return i
84 #raise Exception("Could not determine index of item " + str(item))
85
86 #def removeItem(self, item):
87 #"""Remove *item* from the layout."""
88 #ind = self.itemIndex(item)
89 #self.layout.removeAt(ind)
90 #self.scene().removeItem(item)
91 #r,c = self.items[item]
92 #del self.items[item]
93 #del self.rows[r][c]
94 #self.update()
95
96 #def clear(self):
97 #items = []
98 #for i in list(self.items.keys()):
99 #self.removeItem(i)
100
101
102
[end of pyqtgraph/widgets/LayoutWidget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/widgets/LayoutWidget.py b/pyqtgraph/widgets/LayoutWidget.py
--- a/pyqtgraph/widgets/LayoutWidget.py
+++ b/pyqtgraph/widgets/LayoutWidget.py
@@ -39,7 +39,7 @@
Returns the created widget.
"""
text = QtGui.QLabel(text, **kargs)
- self.addItem(text, row, col, rowspan, colspan)
+ self.addWidget(text, row, col, rowspan, colspan)
return text
def addLayout(self, row=None, col=None, rowspan=1, colspan=1, **kargs):
@@ -49,7 +49,7 @@
Returns the created widget.
"""
layout = LayoutWidget(**kargs)
- self.addItem(layout, row, col, rowspan, colspan)
+ self.addWidget(layout, row, col, rowspan, colspan)
return layout
def addWidget(self, item, row=None, col=None, rowspan=1, colspan=1):
|
{"golden_diff": "diff --git a/pyqtgraph/widgets/LayoutWidget.py b/pyqtgraph/widgets/LayoutWidget.py\n--- a/pyqtgraph/widgets/LayoutWidget.py\n+++ b/pyqtgraph/widgets/LayoutWidget.py\n@@ -39,7 +39,7 @@\n Returns the created widget.\n \"\"\"\n text = QtGui.QLabel(text, **kargs)\n- self.addItem(text, row, col, rowspan, colspan)\n+ self.addWidget(text, row, col, rowspan, colspan)\n return text\n \n def addLayout(self, row=None, col=None, rowspan=1, colspan=1, **kargs):\n@@ -49,7 +49,7 @@\n Returns the created widget.\n \"\"\"\n layout = LayoutWidget(**kargs)\n- self.addItem(layout, row, col, rowspan, colspan)\n+ self.addWidget(layout, row, col, rowspan, colspan)\n return layout\n \n def addWidget(self, item, row=None, col=None, rowspan=1, colspan=1):\n", "issue": "QLayoutWidget Add Label Error: No method AddItem\nThe code in QLayoutWidget looks something like:\n\n```\n def addLabel(self, text=' ', row=None, col=None, rowspan=1, colspan=1, **kargs):\n \"\"\"\n Create a QLabel with *text* and place it in the next available cell (or in the cell specified)\n All extra keyword arguments are passed to QLabel().\n Returns the created widget.\n \"\"\"\n text = QtGui.QLabel(text, **kargs)\n self.addItem(text, row, col, rowspan, colspan)\n return text\n```\n\nI think it should have `self.addWidget` instead of `self.addItem` as addItem method is missing.\n\nError:\n\n```\nAttributeError: 'LayoutWidget' object has no attribute 'addItem'\n```\n\n", "before_files": [{"content": "from ..Qt import QtGui, QtCore\n\n__all__ = ['LayoutWidget']\nclass LayoutWidget(QtGui.QWidget):\n \"\"\"\n Convenience class used for laying out QWidgets in a grid.\n (It's just a little less effort to use than QGridLayout)\n \"\"\"\n\n def __init__(self, parent=None):\n QtGui.QWidget.__init__(self, parent)\n self.layout = QtGui.QGridLayout()\n self.setLayout(self.layout)\n self.items = {}\n self.rows = {}\n self.currentRow = 0\n self.currentCol = 0\n \n def nextRow(self):\n \"\"\"Advance to next row for automatic widget placement\"\"\"\n self.currentRow += 1\n self.currentCol = 0\n \n def nextColumn(self, colspan=1):\n \"\"\"Advance to next column, while returning the current column number \n (generally only for internal use--called by addWidget)\"\"\"\n self.currentCol += colspan\n return self.currentCol-colspan\n \n def nextCol(self, *args, **kargs):\n \"\"\"Alias of nextColumn\"\"\"\n return self.nextColumn(*args, **kargs)\n \n \n def addLabel(self, text=' ', row=None, col=None, rowspan=1, colspan=1, **kargs):\n \"\"\"\n Create a QLabel with *text* and place it in the next available cell (or in the cell specified)\n All extra keyword arguments are passed to QLabel().\n Returns the created widget.\n \"\"\"\n text = QtGui.QLabel(text, **kargs)\n self.addItem(text, row, col, rowspan, colspan)\n return text\n \n def addLayout(self, row=None, col=None, rowspan=1, colspan=1, **kargs):\n \"\"\"\n Create an empty LayoutWidget and place it in the next available cell (or in the cell specified)\n All extra keyword arguments are passed to :func:`LayoutWidget.__init__ <pyqtgraph.LayoutWidget.__init__>`\n Returns the created widget.\n \"\"\"\n layout = LayoutWidget(**kargs)\n self.addItem(layout, row, col, rowspan, colspan)\n return layout\n \n def addWidget(self, item, row=None, col=None, rowspan=1, colspan=1):\n \"\"\"\n Add a widget to the layout and place it in the next available cell (or in the cell specified).\n \"\"\"\n if row == 'next':\n self.nextRow()\n row = self.currentRow\n elif row is None:\n row = self.currentRow\n \n \n if 
col is None:\n col = self.nextCol(colspan)\n \n if row not in self.rows:\n self.rows[row] = {}\n self.rows[row][col] = item\n self.items[item] = (row, col)\n \n self.layout.addWidget(item, row, col, rowspan, colspan)\n\n def getWidget(self, row, col):\n \"\"\"Return the widget in (*row*, *col*)\"\"\"\n return self.row[row][col]\n\n #def itemIndex(self, item):\n #for i in range(self.layout.count()):\n #if self.layout.itemAt(i).graphicsItem() is item:\n #return i\n #raise Exception(\"Could not determine index of item \" + str(item))\n \n #def removeItem(self, item):\n #\"\"\"Remove *item* from the layout.\"\"\"\n #ind = self.itemIndex(item)\n #self.layout.removeAt(ind)\n #self.scene().removeItem(item)\n #r,c = self.items[item]\n #del self.items[item]\n #del self.rows[r][c]\n #self.update()\n \n #def clear(self):\n #items = []\n #for i in list(self.items.keys()):\n #self.removeItem(i)\n\n\n", "path": "pyqtgraph/widgets/LayoutWidget.py"}]}
| 1,695 | 212 |
gh_patches_debug_27448
|
rasdani/github-patches
|
git_diff
|
Pyomo__pyomo-2458
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'quicksum' hides exceptions raised in generator from the user, leading to a wrong model
## Summary
If the generator passed to `quicksum` raises an exception at the first call to __next__, the exception handling logic of `quicksum` will hide the exception from the user and return an expression as `0`. This behavior of `quicksum` may produce a wrong model unintentionally.
The example below shows that the user misspells `param.aaa` as `param.bbb` in the expression, in fact it is a non-exsiting attribute of the class. However, the code can run fluently without any exception occurs.
### Steps to reproduce the issue
```python
import pyomo.environ as pyo
class Param:
def __init__(self) -> None:
self.aaa = [1.1, 2.2, 3.3]
param = Param()
m = pyo.ConcreteModel()
m.i = pyo.RangeSet(0, 3 - 1)
m.x = pyo.Var(m.i)
m.expr = pyo.Expression(expr=pyo.quicksum(m.x[i] * param.aaa[i] for i in m.i))
m.expr.pprint()
m.expr2 = pyo.Expression(expr=pyo.quicksum(m.x[i] * param.bbb[i] for i in m.i))
m.expr2.pprint()
```
### Error Message
```console
expr : Size=1, Index=None
Key : Expression
None : 1.1*x[0] + 2.2*x[1] + 3.3*x[2]
expr2 : Size=1, Index=None
Key : Expression
None : 0.0
```
### Information on your system
Pyomo version: 6.4.1
Python version: 3.9.12
Operating system: Windows 11
How Pyomo was installed (PyPI, conda, source): conda
</issue>
<code>
[start of pyomo/core/util.py]
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright (c) 2008-2022
5 # National Technology and Engineering Solutions of Sandia, LLC
6 # Under the terms of Contract DE-NA0003525 with National Technology and
7 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
8 # rights in this software.
9 # This software is distributed under the 3-clause BSD License.
10 # ___________________________________________________________________________
11
12 #
13 # Utility functions
14 #
15
16 __all__ = ['sum_product', 'summation', 'dot_product', 'sequence', 'prod', 'quicksum', 'target_list']
17
18 from pyomo.core.expr.numvalue import native_numeric_types
19 from pyomo.core.expr.numeric_expr import decompose_term
20 from pyomo.core.expr import current as EXPR
21 from pyomo.core.base.var import Var
22 from pyomo.core.base.expression import Expression
23 from pyomo.core.base.component import _ComponentBase
24
25 def prod(terms):
26 """
27 A utility function to compute the product of a list of terms.
28
29 Args:
30 terms (list): A list of terms that are multiplied together.
31
32 Returns:
33 The value of the product, which may be a Pyomo expression object.
34 """
35 ans = 1
36 for term in terms:
37 ans *= term
38 return ans
39
40
41 def quicksum(args, start=0, linear=None):
42 """
43 A utility function to compute a sum of Pyomo expressions.
44
45 The behavior of :func:`quicksum` is similar to the builtin :func:`sum`
46 function, but this function generates a more compact Pyomo
47 expression.
48
49 Args:
50 args: A generator for terms in the sum.
51
52 start: A value that is initializes the sum. If
53 this value is not a numeric constant, then the +=
54 operator is used to add terms to this object.
55 Defaults to zero.
56
57 linear: If :attr:`start` is not a numeric constant, then this
58 option is ignored. Otherwise, this value indicates
59 whether the terms in the sum are linear. If the value
60 is :const:`False`, then the terms are
61 treated as nonlinear, and if :const:`True`, then
62 the terms are treated as linear. Default is
63 :const:`None`, which indicates that the first term
64 in the :attr:`args` is used to determine this value.
65
66 Returns:
67 The value of the sum, which may be a Pyomo expression object.
68 """
69 #
70 # If we're starting with a numeric value, then
71 # create a new nonlinear sum expression but
72 # return a static version to the user.
73 #
74 if start.__class__ in native_numeric_types:
75 if linear is None:
76 #
77 # Get the first term, which we will test for linearity
78 #
79 try:
80 first = next(args, None)
81 except:
82 try:
83 args = args.__iter__()
84 first = next(args, None)
85 except:
86 raise RuntimeError("The argument to quicksum() is not iterable!")
87 if first is None:
88 return start
89 #
90 # Check if the first term is linear, and if so return the terms
91 #
92 linear, terms = decompose_term(first)
93 #
94 # Right now Pyomo5 expressions can only handle single linear
95 # terms.
96 #
97 # Also, we treat linear expressions as nonlinear if the constant
98 # term is not a native numeric type. Otherwise, large summation
99 # objects are created for the constant term.
100 #
101 if linear:
102 nvar=0
103 for term in terms:
104 c,v = term
105 if not v is None:
106 nvar += 1
107 elif not c.__class__ in native_numeric_types:
108 linear = False
109 if nvar > 1:
110 linear = False
111 start = start+first
112 if linear:
113 with EXPR.linear_expression() as e:
114 e += start
115 for arg in args:
116 e += arg
117 # Return the constant term if the linear expression does not contains variables
118 if e.is_constant():
119 return e.constant
120 return e
121 else:
122 with EXPR.nonlinear_expression() as e:
123 e += start
124 for arg in args:
125 e += arg
126 if e.nargs() == 0:
127 return 0
128 elif e.nargs() == 1:
129 return e.arg(0)
130 return e
131 #
132 # Otherwise, use the context that is provided and return it.
133 #
134 e = start
135 for arg in args:
136 e += arg
137 return e
138
139
140 def sum_product(*args, **kwds):
141 """
142 A utility function to compute a generalized dot product.
143
144 This function accepts one or more components that provide terms
145 that are multiplied together. These products are added together
146 to form a sum.
147
148 Args:
149 *args: Variable length argument list of generators that
150 create terms in the summation.
151 **kwds: Arbitrary keyword arguments.
152
153 Keyword Args:
154 index: A set that is used to index the components used to
155 create the terms
156 denom: A component or tuple of components that are used to
157 create the denominator of the terms
158 start: The initial value used in the sum
159
160 Returns:
161 The value of the sum.
162 """
163 denom = kwds.pop('denom', tuple() )
164 if type(denom) not in (list, tuple):
165 denom = [denom]
166 nargs = len(args)
167 ndenom = len(denom)
168
169 if nargs == 0 and ndenom == 0:
170 raise ValueError("The sum_product() command requires at least an " + \
171 "argument or a denominator term")
172
173 if 'index' in kwds:
174 index=kwds['index']
175 else:
176 if nargs > 0:
177 iarg=args[-1]
178 if not isinstance(iarg,Var) and not isinstance(iarg, Expression):
179 raise ValueError("Error executing sum_product(): The last argument value must be a variable or expression object if no 'index' option is specified")
180 else:
181 iarg=denom[-1]
182 if not isinstance(iarg,Var) and not isinstance(iarg, Expression):
183 raise ValueError("Error executing sum_product(): The last denom argument value must be a variable or expression object if no 'index' option is specified")
184 index = iarg.index_set()
185
186 start = kwds.get("start", 0)
187 vars_ = []
188 params_ = []
189 for arg in args:
190 if isinstance(arg, Var):
191 vars_.append(arg)
192 else:
193 params_.append(arg)
194 nvars = len(vars_)
195
196 num_index = range(0,nargs)
197 if ndenom == 0:
198 #
199 # Sum of polynomial terms
200 #
201 if start.__class__ in native_numeric_types:
202 if nvars == 1:
203 v = vars_[0]
204 if len(params_) == 0:
205 with EXPR.linear_expression() as expr:
206 expr += start
207 for i in index:
208 expr += v[i]
209 elif len(params_) == 1:
210 p = params_[0]
211 with EXPR.linear_expression() as expr:
212 expr += start
213 for i in index:
214 expr += p[i]*v[i]
215 else:
216 with EXPR.linear_expression() as expr:
217 expr += start
218 for i in index:
219 term = 1
220 for j in params_:
221 term *= params_[j][i]
222 expr += term * v[i]
223 return expr
224 #
225 with EXPR.nonlinear_expression() as expr:
226 expr += start
227 for i in index:
228 term = 1
229 for j in num_index:
230 term *= args[j][i]
231 expr += term
232 return expr
233 #
234 return quicksum((prod(args[j][i] for j in num_index) for i in index), start)
235 elif nargs == 0:
236 #
237 # Sum of reciprocals
238 #
239 denom_index = range(0,ndenom)
240 return quicksum((1/prod(denom[j][i] for j in denom_index) for i in index), start)
241 else:
242 #
243 # Sum of fractions
244 #
245 denom_index = range(0,ndenom)
246 return quicksum((prod(args[j][i] for j in num_index)/prod(denom[j][i] for j in denom_index) for i in index), start)
247
248
249 #: An alias for :func:`sum_product <pyomo.core.expr.util>`
250 dot_product = sum_product
251
252 #: An alias for :func:`sum_product <pyomo.core.expr.util>`
253 summation = sum_product
254
255
256 def sequence(*args):
257 """
258 sequence([start,] stop[, step]) -> generator for a list of integers
259
260 Return a generator that containing an arithmetic
261 progression of integers.
262 sequence(i, j) returns [i, i+1, i+2, ..., j];
263 start defaults to 1.
264 step specifies the increment (or decrement)
265 For example, sequence(4) returns [1, 2, 3, 4].
266 """
267 if len(args) == 0:
268 raise ValueError('sequence expected at least 1 arguments, got 0')
269 if len(args) > 3:
270 raise ValueError('sequence expected at most 3 arguments, got %d' % len(args))
271 if len(args) == 1:
272 return range(1,args[0]+1)
273 if len(args) == 2:
274 return range(args[0],args[1]+1)
275 return range(args[0],args[1]+1,args[2])
276
277 def target_list(x):
278 if isinstance(x, _ComponentBase):
279 return [ x ]
280 elif hasattr(x, '__iter__'):
281 ans = []
282 for i in x:
283 if isinstance(i, _ComponentBase):
284 ans.append(i)
285 else:
286 raise ValueError(
287 "Expected Component or list of Components."
288 "\n\tReceived %s" % (type(i),))
289 return ans
290 else:
291 raise ValueError(
292 "Expected Component or list of Components."
293 "\n\tReceived %s" % (type(x),))
[end of pyomo/core/util.py]
</code>
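Before turning to the requested patch, it can help to see the failure mode this record's issue describes in isolation: a generator that raises inside `quicksum` is silently swallowed and the whole sum collapses to the start value. The sketch below reproduces that control flow in plain Python without importing Pyomo; `broken_quicksum`, the `Param` class, and the sample values are illustrative stand-ins, not repository code.

```python
# Minimal sketch of the pre-patch behaviour: an exception raised by the
# generator's first term is caught by a bare try/except, the generator is
# then treated as empty, and the "sum" silently becomes the start value.
class Param:
    def __init__(self):
        self.aaa = [1.1, 2.2, 3.3]   # note: there is no attribute 'bbb'

def broken_quicksum(args, start=0):
    try:
        first = next(args, None)
    except Exception:
        try:
            args = args.__iter__()        # a generator returns itself here...
            first = next(args, None)      # ...but it is already closed, so this gives None
        except Exception:
            raise RuntimeError("The argument to quicksum() is not iterable!")
    if first is None:
        return start                      # the AttributeError from the typo ends up here
    return start + first + sum(args)

param = Param()
x = [10, 20, 30]
print(broken_quicksum(x[i] * param.aaa[i] for i in range(3)))  # ~154.0, as expected
print(broken_quicksum(x[i] * param.bbb[i] for i in range(3)))  # 0 -- the typo is hidden
```

The reference diff later in this record removes exactly this trap by calling `iter(args)` once up front and letting any error raised by the generator propagate to the caller.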
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyomo/core/util.py b/pyomo/core/util.py
--- a/pyomo/core/util.py
+++ b/pyomo/core/util.py
@@ -21,6 +21,8 @@
from pyomo.core.base.var import Var
from pyomo.core.base.expression import Expression
from pyomo.core.base.component import _ComponentBase
+import logging
+logger = logging.getLogger(__name__)
def prod(terms):
"""
@@ -66,6 +68,14 @@
Returns:
The value of the sum, which may be a Pyomo expression object.
"""
+
+ # Ensure that args is an iterator (this manages things like IndexedComponent_slice objects)
+ try:
+ args = iter(args)
+ except:
+ logger.error('The argument `args` to quicksum() is not iterable!')
+ raise
+
#
# If we're starting with a numeric value, then
# create a new nonlinear sum expression but
@@ -76,14 +86,7 @@
#
# Get the first term, which we will test for linearity
#
- try:
- first = next(args, None)
- except:
- try:
- args = args.__iter__()
- first = next(args, None)
- except:
- raise RuntimeError("The argument to quicksum() is not iterable!")
+ first = next(args, None)
if first is None:
return start
#
|
{"golden_diff": "diff --git a/pyomo/core/util.py b/pyomo/core/util.py\n--- a/pyomo/core/util.py\n+++ b/pyomo/core/util.py\n@@ -21,6 +21,8 @@\n from pyomo.core.base.var import Var\n from pyomo.core.base.expression import Expression\n from pyomo.core.base.component import _ComponentBase\n+import logging\n+logger = logging.getLogger(__name__)\n \n def prod(terms):\n \"\"\"\n@@ -66,6 +68,14 @@\n Returns:\n The value of the sum, which may be a Pyomo expression object.\n \"\"\"\n+\n+ # Ensure that args is an iterator (this manages things like IndexedComponent_slice objects)\n+ try:\n+ args = iter(args)\n+ except:\n+ logger.error('The argument `args` to quicksum() is not iterable!')\n+ raise\n+\n #\n # If we're starting with a numeric value, then \n # create a new nonlinear sum expression but \n@@ -76,14 +86,7 @@\n #\n # Get the first term, which we will test for linearity\n #\n- try:\n- first = next(args, None)\n- except:\n- try:\n- args = args.__iter__()\n- first = next(args, None)\n- except:\n- raise RuntimeError(\"The argument to quicksum() is not iterable!\")\n+ first = next(args, None)\n if first is None:\n return start\n #\n", "issue": "'quicksum' hides exceptions raised in generator from the user, leading to a wrong model\n## Summary\r\nIf the generator passed to `quicksum` raises an exception at the first call to __next__, the exception handling logic of `quicksum` will hide the exception from the user and return an expression as `0`. This behavior of `quicksum` may produce a wrong model unintentionally.\r\n\r\nThe example below shows that the user misspells `param.aaa` as `param.bbb` in the expression, in fact it is a non-exsiting attribute of the class. However, the code can run fluently without any exception occurs. \r\n\r\n### Steps to reproduce the issue\r\n\r\n```python\r\nimport pyomo.environ as pyo\r\n\r\nclass Param:\r\n def __init__(self) -> None:\r\n self.aaa = [1.1, 2.2, 3.3]\r\n\r\nparam = Param()\r\n\r\nm = pyo.ConcreteModel()\r\nm.i = pyo.RangeSet(0, 3 - 1)\r\nm.x = pyo.Var(m.i)\r\n\r\nm.expr = pyo.Expression(expr=pyo.quicksum(m.x[i] * param.aaa[i] for i in m.i))\r\nm.expr.pprint()\r\n\r\nm.expr2 = pyo.Expression(expr=pyo.quicksum(m.x[i] * param.bbb[i] for i in m.i))\r\nm.expr2.pprint()\r\n```\r\n### Error Message\r\n```console\r\nexpr : Size=1, Index=None\r\n Key : Expression\r\n None : 1.1*x[0] + 2.2*x[1] + 3.3*x[2]\r\nexpr2 : Size=1, Index=None\r\n Key : Expression\r\n None : 0.0\r\n```\r\n\r\n\r\n### Information on your system\r\n\r\nPyomo version: 6.4.1\r\nPython version: 3.9.12\r\nOperating system: Windows 11\r\nHow Pyomo was installed (PyPI, conda, source): conda\r\n\r\n\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright (c) 2008-2022\n# National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. 
Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\n#\n# Utility functions\n#\n\n__all__ = ['sum_product', 'summation', 'dot_product', 'sequence', 'prod', 'quicksum', 'target_list']\n\nfrom pyomo.core.expr.numvalue import native_numeric_types\nfrom pyomo.core.expr.numeric_expr import decompose_term\nfrom pyomo.core.expr import current as EXPR\nfrom pyomo.core.base.var import Var\nfrom pyomo.core.base.expression import Expression\nfrom pyomo.core.base.component import _ComponentBase\n\ndef prod(terms):\n \"\"\"\n A utility function to compute the product of a list of terms.\n\n Args:\n terms (list): A list of terms that are multiplied together.\n\n Returns:\n The value of the product, which may be a Pyomo expression object.\n \"\"\"\n ans = 1\n for term in terms:\n ans *= term\n return ans\n\n\ndef quicksum(args, start=0, linear=None):\n \"\"\"\n A utility function to compute a sum of Pyomo expressions.\n\n The behavior of :func:`quicksum` is similar to the builtin :func:`sum`\n function, but this function generates a more compact Pyomo\n expression.\n\n Args:\n args: A generator for terms in the sum.\n\n start: A value that is initializes the sum. If\n this value is not a numeric constant, then the += \n operator is used to add terms to this object.\n Defaults to zero.\n\n linear: If :attr:`start` is not a numeric constant, then this \n option is ignored. Otherwise, this value indicates\n whether the terms in the sum are linear. If the value\n is :const:`False`, then the terms are\n treated as nonlinear, and if :const:`True`, then\n the terms are treated as linear. Default is\n :const:`None`, which indicates that the first term\n in the :attr:`args` is used to determine this value.\n\n Returns:\n The value of the sum, which may be a Pyomo expression object.\n \"\"\"\n #\n # If we're starting with a numeric value, then \n # create a new nonlinear sum expression but \n # return a static version to the user.\n #\n if start.__class__ in native_numeric_types:\n if linear is None:\n #\n # Get the first term, which we will test for linearity\n #\n try:\n first = next(args, None)\n except:\n try:\n args = args.__iter__()\n first = next(args, None)\n except:\n raise RuntimeError(\"The argument to quicksum() is not iterable!\")\n if first is None:\n return start\n #\n # Check if the first term is linear, and if so return the terms\n #\n linear, terms = decompose_term(first)\n #\n # Right now Pyomo5 expressions can only handle single linear\n # terms.\n #\n # Also, we treat linear expressions as nonlinear if the constant\n # term is not a native numeric type. 
Otherwise, large summation\n # objects are created for the constant term.\n #\n if linear:\n nvar=0\n for term in terms:\n c,v = term\n if not v is None:\n nvar += 1\n elif not c.__class__ in native_numeric_types:\n linear = False\n if nvar > 1:\n linear = False\n start = start+first\n if linear:\n with EXPR.linear_expression() as e:\n e += start\n for arg in args:\n e += arg\n # Return the constant term if the linear expression does not contains variables\n if e.is_constant():\n return e.constant\n return e\n else:\n with EXPR.nonlinear_expression() as e:\n e += start\n for arg in args:\n e += arg\n if e.nargs() == 0:\n return 0\n elif e.nargs() == 1:\n return e.arg(0)\n return e\n #\n # Otherwise, use the context that is provided and return it.\n #\n e = start\n for arg in args:\n e += arg\n return e\n\n\ndef sum_product(*args, **kwds):\n \"\"\"\n A utility function to compute a generalized dot product. \n\n This function accepts one or more components that provide terms\n that are multiplied together. These products are added together\n to form a sum.\n\n Args:\n *args: Variable length argument list of generators that\n create terms in the summation.\n **kwds: Arbitrary keyword arguments.\n\n Keyword Args:\n index: A set that is used to index the components used to\n create the terms\n denom: A component or tuple of components that are used to\n create the denominator of the terms\n start: The initial value used in the sum\n\n Returns:\n The value of the sum.\n \"\"\"\n denom = kwds.pop('denom', tuple() )\n if type(denom) not in (list, tuple):\n denom = [denom]\n nargs = len(args)\n ndenom = len(denom)\n\n if nargs == 0 and ndenom == 0:\n raise ValueError(\"The sum_product() command requires at least an \" + \\\n \"argument or a denominator term\")\n\n if 'index' in kwds:\n index=kwds['index']\n else:\n if nargs > 0:\n iarg=args[-1]\n if not isinstance(iarg,Var) and not isinstance(iarg, Expression):\n raise ValueError(\"Error executing sum_product(): The last argument value must be a variable or expression object if no 'index' option is specified\")\n else:\n iarg=denom[-1]\n if not isinstance(iarg,Var) and not isinstance(iarg, Expression):\n raise ValueError(\"Error executing sum_product(): The last denom argument value must be a variable or expression object if no 'index' option is specified\")\n index = iarg.index_set()\n\n start = kwds.get(\"start\", 0)\n vars_ = []\n params_ = []\n for arg in args:\n if isinstance(arg, Var):\n vars_.append(arg)\n else:\n params_.append(arg)\n nvars = len(vars_)\n\n num_index = range(0,nargs)\n if ndenom == 0:\n #\n # Sum of polynomial terms\n #\n if start.__class__ in native_numeric_types:\n if nvars == 1:\n v = vars_[0]\n if len(params_) == 0:\n with EXPR.linear_expression() as expr:\n expr += start\n for i in index:\n expr += v[i]\n elif len(params_) == 1: \n p = params_[0]\n with EXPR.linear_expression() as expr:\n expr += start\n for i in index:\n expr += p[i]*v[i]\n else:\n with EXPR.linear_expression() as expr:\n expr += start\n for i in index:\n term = 1\n for j in params_:\n term *= params_[j][i]\n expr += term * v[i]\n return expr\n #\n with EXPR.nonlinear_expression() as expr:\n expr += start\n for i in index:\n term = 1\n for j in num_index:\n term *= args[j][i]\n expr += term\n return expr\n #\n return quicksum((prod(args[j][i] for j in num_index) for i in index), start)\n elif nargs == 0:\n #\n # Sum of reciprocals\n #\n denom_index = range(0,ndenom)\n return quicksum((1/prod(denom[j][i] for j in denom_index) for i in index), start)\n 
else:\n #\n # Sum of fractions\n #\n denom_index = range(0,ndenom)\n return quicksum((prod(args[j][i] for j in num_index)/prod(denom[j][i] for j in denom_index) for i in index), start)\n\n\n#: An alias for :func:`sum_product <pyomo.core.expr.util>`\ndot_product = sum_product\n\n#: An alias for :func:`sum_product <pyomo.core.expr.util>`\nsummation = sum_product\n\n\ndef sequence(*args):\n \"\"\"\n sequence([start,] stop[, step]) -> generator for a list of integers\n\n Return a generator that containing an arithmetic\n progression of integers. \n sequence(i, j) returns [i, i+1, i+2, ..., j]; \n start defaults to 1. \n step specifies the increment (or decrement)\n For example, sequence(4) returns [1, 2, 3, 4].\n \"\"\"\n if len(args) == 0:\n raise ValueError('sequence expected at least 1 arguments, got 0')\n if len(args) > 3:\n raise ValueError('sequence expected at most 3 arguments, got %d' % len(args))\n if len(args) == 1:\n return range(1,args[0]+1)\n if len(args) == 2:\n return range(args[0],args[1]+1)\n return range(args[0],args[1]+1,args[2])\n\ndef target_list(x):\n if isinstance(x, _ComponentBase):\n return [ x ]\n elif hasattr(x, '__iter__'):\n ans = []\n for i in x:\n if isinstance(i, _ComponentBase):\n ans.append(i)\n else:\n raise ValueError(\n \"Expected Component or list of Components.\"\n \"\\n\\tReceived %s\" % (type(i),))\n return ans\n else:\n raise ValueError(\n \"Expected Component or list of Components.\"\n \"\\n\\tReceived %s\" % (type(x),))", "path": "pyomo/core/util.py"}]}
| 3,996 | 320 |
gh_patches_debug_25244
|
rasdani/github-patches
|
git_diff
|
netket__netket-1019
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OverflowError when using random_state with Fock space
This produces an `OverflowError` on my system:
```python
import netket as nk
hi = nk.hilbert.Fock(N=1)
print(hi) # Fock(n_max=INT_MAX, N=1)
rng = nk.jax.PRNGSeq(0)
hi.random_state(next(seq))
```
```
OverflowError: Python int 9223372036854775808 too large to convert to int64
```
Note that `9223372036854775808` is `hi.n_max + 1` so it appears to be an off-by-one error somewhere.
</issue>
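The arithmetic behind the off-by-one is easy to reproduce on its own. The sketch below is independent of NetKet and assumes a 64-bit platform, where `np.intp` is a 64-bit integer; the variable names are illustrative. It also shows why reserving one unit of headroom, the approach taken by the reference patch later in this record, avoids the overflow.

```python
import numpy as np

# The unconstrained Fock space falls back to the largest representable integer.
n_max = int(np.iinfo(np.intp).max)   # 9223372036854775807 on 64-bit platforms

# Downstream sampling code eventually needs n_max + 1, which no longer fits in int64:
try:
    np.asarray(n_max + 1, dtype=np.int64)
except OverflowError as exc:
    print("overflow:", exc)          # same class of failure as in the report

# Capping the "unbounded" case one below the maximum keeps n_max + 1 representable:
fock_max = int(np.iinfo(np.intp).max) - 1
print(np.asarray(fock_max + 1, dtype=np.int64))   # 9223372036854775807, no overflow
```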
<code>
[start of netket/hilbert/fock.py]
1 # Copyright 2021 The NetKet Authors - All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import List, Optional, Union
16
17 import numpy as np
18 from numba import jit
19
20 from netket.graph import AbstractGraph
21
22 from .homogeneous import HomogeneousHilbert
23 from ._deprecations import graph_to_N_depwarn
24
25
26 @jit(nopython=True)
27 def _sum_constraint(x, n_particles):
28 return np.sum(x, axis=1) == n_particles
29
30
31 class Fock(HomogeneousHilbert):
32 r"""Hilbert space obtained as tensor product of local fock basis."""
33
34 def __init__(
35 self,
36 n_max: Optional[int] = None,
37 N: int = 1,
38 n_particles: Optional[int] = None,
39 graph: Optional[AbstractGraph] = None,
40 ):
41 r"""
42 Constructs a new ``Boson`` given a maximum occupation number, number of sites
43 and total number of bosons.
44
45 Args:
46 n_max: Maximum occupation for a site (inclusive). If None, the local
47 occupation number is unbounded.
48 N: number of bosonic modes (default = 1)
49 n_particles: Constraint for the number of particles. If None, no constraint
50 is imposed.
51 graph: (Deprecated, pleaese use `N`) A graph, from which the number of nodes
52 is extracted.
53
54 Examples:
55 Simple boson hilbert space.
56
57 >>> from netket.hilbert import Fock
58 >>> hi = Fock(n_max=5, n_particles=11, N=3)
59 >>> print(hi.size)
60 3
61 >>> print(hi.n_states)
62 15
63 """
64 N = graph_to_N_depwarn(N=N, graph=graph)
65
66 self._n_max = n_max
67
68 if n_particles is not None:
69 n_particles = int(n_particles)
70 self._n_particles = n_particles
71 assert n_particles > 0
72
73 if self._n_max is None:
74 self._n_max = n_particles
75 else:
76 if self._n_max * N < n_particles:
77 raise Exception(
78 """The required total number of bosons is not compatible
79 with the given n_max."""
80 )
81
82 def constraints(x):
83 return _sum_constraint(x, n_particles)
84
85 else:
86 constraints = None
87 self._n_particles = None
88
89 if self._n_max is not None:
90 # assert self._n_max > 0
91 local_states = np.arange(self._n_max + 1)
92 else:
93 max_ind = np.iinfo(np.intp).max
94 self._n_max = max_ind
95 local_states = None
96
97 super().__init__(local_states, N, constraints)
98
99 @property
100 def n_max(self) -> Optional[int]:
101 r"""The maximum number of bosons per site, or None
102 if the number is unconstrained."""
103 return self._n_max
104
105 @property
106 def n_particles(self) -> Optional[int]:
107 r"""The total number of particles, or None
108 if the number is unconstrained."""
109 return self._n_particles
110
111 def __pow__(self, n) -> "Fock":
112 if self.n_particles is None:
113 n_particles = None
114 else:
115 n_particles = n_particles * n
116
117 return Fock(self.n_max, self.size * n, n_particles=n_particles)
118
119 def _mul_sametype_(self, other: "Fock") -> "Fock":
120 assert type(self) == type(other)
121 if self.n_max == other.n_max:
122 if self._n_particles is None and other._n_particles is None:
123 return Fock(self.n_max, N=self.size + other.size)
124
125 return NotImplemented
126
127 def ptrace(self, sites: Union[int, List]) -> Optional["Fock"]:
128 if isinstance(sites, int):
129 sites = [sites]
130
131 for site in sites:
132 if site < 0 or site >= self.size:
133 raise ValueError(
134 f"Site {site} not in this hilbert space of site {self.size}"
135 )
136
137 if self.n_particles is not None:
138 raise TypeError(
139 "Cannot take the partial trace with a total particles constraint."
140 )
141
142 Nsites = len(sites)
143
144 if self.size - Nsites == 0:
145 return None
146 else:
147 return Fock(self.n_max, N=self.size - Nsites)
148
149 def __repr__(self):
150 n_particles = (
151 ", n_particles={}".format(self._n_particles)
152 if self._n_particles is not None
153 else ""
154 )
155 nmax = self._n_max if self._n_max < np.iinfo(np.intp).max else "INT_MAX"
156 return "Fock(n_max={}{}, N={})".format(nmax, n_particles, self._size)
157
158 @property
159 def _attrs(self):
160 return (self.size, self._n_max, self._constraint_fn)
161
[end of netket/hilbert/fock.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/netket/hilbert/fock.py b/netket/hilbert/fock.py
--- a/netket/hilbert/fock.py
+++ b/netket/hilbert/fock.py
@@ -22,6 +22,13 @@
from .homogeneous import HomogeneousHilbert
from ._deprecations import graph_to_N_depwarn
+FOCK_MAX = np.iinfo(np.intp).max - 1
+"""
+Maximum number of particles in the fock space.
+It is `maxvalue(np.int64)-1` because we use N+1 in several formulas
+and it would overflow.
+"""
+
@jit(nopython=True)
def _sum_constraint(x, n_particles):
@@ -90,8 +97,7 @@
# assert self._n_max > 0
local_states = np.arange(self._n_max + 1)
else:
- max_ind = np.iinfo(np.intp).max
- self._n_max = max_ind
+ self._n_max = FOCK_MAX
local_states = None
super().__init__(local_states, N, constraints)
@@ -152,7 +158,7 @@
if self._n_particles is not None
else ""
)
- nmax = self._n_max if self._n_max < np.iinfo(np.intp).max else "INT_MAX"
+ nmax = self._n_max if self._n_max < FOCK_MAX else "FOCK_MAX"
return "Fock(n_max={}{}, N={})".format(nmax, n_particles, self._size)
@property
|
{"golden_diff": "diff --git a/netket/hilbert/fock.py b/netket/hilbert/fock.py\n--- a/netket/hilbert/fock.py\n+++ b/netket/hilbert/fock.py\n@@ -22,6 +22,13 @@\n from .homogeneous import HomogeneousHilbert\n from ._deprecations import graph_to_N_depwarn\n \n+FOCK_MAX = np.iinfo(np.intp).max - 1\n+\"\"\"\n+Maximum number of particles in the fock space.\n+It is `maxvalue(np.int64)-1` because we use N+1 in several formulas\n+and it would overflow.\n+\"\"\"\n+\n \n @jit(nopython=True)\n def _sum_constraint(x, n_particles):\n@@ -90,8 +97,7 @@\n # assert self._n_max > 0\n local_states = np.arange(self._n_max + 1)\n else:\n- max_ind = np.iinfo(np.intp).max\n- self._n_max = max_ind\n+ self._n_max = FOCK_MAX\n local_states = None\n \n super().__init__(local_states, N, constraints)\n@@ -152,7 +158,7 @@\n if self._n_particles is not None\n else \"\"\n )\n- nmax = self._n_max if self._n_max < np.iinfo(np.intp).max else \"INT_MAX\"\n+ nmax = self._n_max if self._n_max < FOCK_MAX else \"FOCK_MAX\"\n return \"Fock(n_max={}{}, N={})\".format(nmax, n_particles, self._size)\n \n @property\n", "issue": "OverflowError when using random_state with Fock space\nThis produces an `OverflowError` on my system:\r\n```python\r\nimport netket as nk\r\n\r\nhi = nk.hilbert.Fock(N=1)\r\nprint(hi) # Fock(n_max=INT_MAX, N=1)\r\n\r\nrng = nk.jax.PRNGSeq(0)\r\nhi.random_state(next(seq))\r\n```\r\n```\r\nOverflowError: Python int 9223372036854775808 too large to convert to int64\r\n```\r\n\r\nNote that `9223372036854775808` is `hi.n_max + 1` so it appears to be an off-by-one error somewhere.\nOverflowError when using random_state with Fock space\nThis produces an `OverflowError` on my system:\r\n```python\r\nimport netket as nk\r\n\r\nhi = nk.hilbert.Fock(N=1)\r\nprint(hi) # Fock(n_max=INT_MAX, N=1)\r\n\r\nrng = nk.jax.PRNGSeq(0)\r\nhi.random_state(next(seq))\r\n```\r\n```\r\nOverflowError: Python int 9223372036854775808 too large to convert to int64\r\n```\r\n\r\nNote that `9223372036854775808` is `hi.n_max + 1` so it appears to be an off-by-one error somewhere.\n", "before_files": [{"content": "# Copyright 2021 The NetKet Authors - All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import List, Optional, Union\n\nimport numpy as np\nfrom numba import jit\n\nfrom netket.graph import AbstractGraph\n\nfrom .homogeneous import HomogeneousHilbert\nfrom ._deprecations import graph_to_N_depwarn\n\n\n@jit(nopython=True)\ndef _sum_constraint(x, n_particles):\n return np.sum(x, axis=1) == n_particles\n\n\nclass Fock(HomogeneousHilbert):\n r\"\"\"Hilbert space obtained as tensor product of local fock basis.\"\"\"\n\n def __init__(\n self,\n n_max: Optional[int] = None,\n N: int = 1,\n n_particles: Optional[int] = None,\n graph: Optional[AbstractGraph] = None,\n ):\n r\"\"\"\n Constructs a new ``Boson`` given a maximum occupation number, number of sites\n and total number of bosons.\n\n Args:\n n_max: Maximum occupation for a site (inclusive). 
If None, the local\n occupation number is unbounded.\n N: number of bosonic modes (default = 1)\n n_particles: Constraint for the number of particles. If None, no constraint\n is imposed.\n graph: (Deprecated, pleaese use `N`) A graph, from which the number of nodes\n is extracted.\n\n Examples:\n Simple boson hilbert space.\n\n >>> from netket.hilbert import Fock\n >>> hi = Fock(n_max=5, n_particles=11, N=3)\n >>> print(hi.size)\n 3\n >>> print(hi.n_states)\n 15\n \"\"\"\n N = graph_to_N_depwarn(N=N, graph=graph)\n\n self._n_max = n_max\n\n if n_particles is not None:\n n_particles = int(n_particles)\n self._n_particles = n_particles\n assert n_particles > 0\n\n if self._n_max is None:\n self._n_max = n_particles\n else:\n if self._n_max * N < n_particles:\n raise Exception(\n \"\"\"The required total number of bosons is not compatible\n with the given n_max.\"\"\"\n )\n\n def constraints(x):\n return _sum_constraint(x, n_particles)\n\n else:\n constraints = None\n self._n_particles = None\n\n if self._n_max is not None:\n # assert self._n_max > 0\n local_states = np.arange(self._n_max + 1)\n else:\n max_ind = np.iinfo(np.intp).max\n self._n_max = max_ind\n local_states = None\n\n super().__init__(local_states, N, constraints)\n\n @property\n def n_max(self) -> Optional[int]:\n r\"\"\"The maximum number of bosons per site, or None\n if the number is unconstrained.\"\"\"\n return self._n_max\n\n @property\n def n_particles(self) -> Optional[int]:\n r\"\"\"The total number of particles, or None\n if the number is unconstrained.\"\"\"\n return self._n_particles\n\n def __pow__(self, n) -> \"Fock\":\n if self.n_particles is None:\n n_particles = None\n else:\n n_particles = n_particles * n\n\n return Fock(self.n_max, self.size * n, n_particles=n_particles)\n\n def _mul_sametype_(self, other: \"Fock\") -> \"Fock\":\n assert type(self) == type(other)\n if self.n_max == other.n_max:\n if self._n_particles is None and other._n_particles is None:\n return Fock(self.n_max, N=self.size + other.size)\n\n return NotImplemented\n\n def ptrace(self, sites: Union[int, List]) -> Optional[\"Fock\"]:\n if isinstance(sites, int):\n sites = [sites]\n\n for site in sites:\n if site < 0 or site >= self.size:\n raise ValueError(\n f\"Site {site} not in this hilbert space of site {self.size}\"\n )\n\n if self.n_particles is not None:\n raise TypeError(\n \"Cannot take the partial trace with a total particles constraint.\"\n )\n\n Nsites = len(sites)\n\n if self.size - Nsites == 0:\n return None\n else:\n return Fock(self.n_max, N=self.size - Nsites)\n\n def __repr__(self):\n n_particles = (\n \", n_particles={}\".format(self._n_particles)\n if self._n_particles is not None\n else \"\"\n )\n nmax = self._n_max if self._n_max < np.iinfo(np.intp).max else \"INT_MAX\"\n return \"Fock(n_max={}{}, N={})\".format(nmax, n_particles, self._size)\n\n @property\n def _attrs(self):\n return (self.size, self._n_max, self._constraint_fn)\n", "path": "netket/hilbert/fock.py"}]}
| 2,462 | 368 |
gh_patches_debug_13509
|
rasdani/github-patches
|
git_diff
|
UTNkar__moore-512
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix navigation bar on laptop screens
On utn.se there are many items in the menu. On laptop screens (1366x768 pixels) the navigation bar splits into two rows, which makes it unusable. To solve this, make the navigation bar scrollable if its width gets too long. That way it will still be usable. This is done by setting `scroll: auto` on the container.
However, the navigation bar is built with the materialize library, which adds a lot of CSS to the navigation bar. To get around this, we have to use `<div>` instead of `<nav>` and change the name of the class for the navigation bar.
</issue>
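For context, `scroll` is not itself a standard CSS property; horizontal scrolling of an overflowing bar is normally obtained with `overflow-x: auto` on the container. The reference patch later in this record takes a different route and shortens each row instead, splitting every logo category into two halves, presumably so the template can lay them out in two shorter groups. The sketch below shows only that list-splitting step, with plain Python lists standing in for the Django querysets used in `branding_tags.py`; the sample values are made up.

```python
# Split each category of logos into two halves so the template can render two
# shorter groups instead of one long row (mirrors the slicing in the reference
# patch: items[:len(items) // 2] and items[len(items) // 2:]).
def split_in_half(items):
    mid = len(items) // 2
    return items[:mid], items[mid:]

committees = ["C1", "C2", "C3", "C4", "C5"]   # hypothetical committee logos
sections = ["S1", "S2", "S3"]                 # hypothetical section logos

committees_left, committees_right = split_in_half(committees)
sections_left, sections_right = split_in_half(sections)

print(committees_left, committees_right)  # ['C1', 'C2'] ['C3', 'C4', 'C5']
print(sections_left, sections_right)      # ['S1'] ['S2', 'S3']
```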
<code>
[start of src/branding/templatetags/branding_tags.py]
1 from django import template
2
3 from branding.models import Logo, SocialMediaSettings, FooterSettings
4 from wagtail.core.models import Site
5
6 register = template.Library()
7
8
9 @register.inclusion_tag('branding/tags/footer.html', takes_context=True)
10 def custom_footer(context):
11 request = context['request']
12 site = Site.find_for_request(request)
13 return {
14 'settings': FooterSettings.for_site(site)
15 }
16
17
18 @register.inclusion_tag('branding/tags/structure_header.html',
19 takes_context=True)
20 def structure_header(context, logo_color=''):
21 request = context['request']
22 site = Site.find_for_request(request)
23 logos = Logo.objects.exclude(belongs_to=site).all()
24 committees = logos.filter(category='committee')
25 sections = logos.filter(category='section')
26 return {
27 'committees': committees,
28 'sections': sections,
29 'color': logo_color,
30 }
31
32
33 @register.inclusion_tag('branding/tags/social_media.html', takes_context=True)
34 def social_media(context, dark=False):
35 request = context['request']
36 site = Site.find_for_request(request)
37 return {
38 'settings': SocialMediaSettings.for_site(site),
39 'dark': dark,
40 }
41
[end of src/branding/templatetags/branding_tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/branding/templatetags/branding_tags.py b/src/branding/templatetags/branding_tags.py
--- a/src/branding/templatetags/branding_tags.py
+++ b/src/branding/templatetags/branding_tags.py
@@ -23,10 +23,19 @@
logos = Logo.objects.exclude(belongs_to=site).all()
committees = logos.filter(category='committee')
sections = logos.filter(category='section')
+
+ committees_left = committees[:len(committees) // 2]
+ committees_right = committees[len(committees) // 2:]
+
+ sections_left = sections[:len(sections) // 2]
+ sections_right = sections[len(sections) // 2:]
+
return {
- 'committees': committees,
- 'sections': sections,
'color': logo_color,
+ 'committees_left': committees_left,
+ 'committees_right': committees_right,
+ 'sections_left': sections_left,
+ 'sections_right': sections_right,
}
|
{"golden_diff": "diff --git a/src/branding/templatetags/branding_tags.py b/src/branding/templatetags/branding_tags.py\n--- a/src/branding/templatetags/branding_tags.py\n+++ b/src/branding/templatetags/branding_tags.py\n@@ -23,10 +23,19 @@\n logos = Logo.objects.exclude(belongs_to=site).all()\n committees = logos.filter(category='committee')\n sections = logos.filter(category='section')\n+\n+ committees_left = committees[:len(committees) // 2]\n+ committees_right = committees[len(committees) // 2:]\n+\n+ sections_left = sections[:len(sections) // 2]\n+ sections_right = sections[len(sections) // 2:]\n+\n return {\n- 'committees': committees,\n- 'sections': sections,\n 'color': logo_color,\n+ 'committees_left': committees_left,\n+ 'committees_right': committees_right,\n+ 'sections_left': sections_left,\n+ 'sections_right': sections_right,\n }\n", "issue": "Fix navigation bar on laptop screens\nOn utn.se there are many items in the menu. On laptop screens (1366x768 pixels) the navigation bar splits into two rows which makes it unusable. To solve this, make the navigation bar scrollable if it's width gets to long. That way it will still be usable. This is done by setting `scroll: auto` on the container.\r\n\r\nHowever, the navigation bar is built with the materialize library which adds a lot of css to the navigation bar. To get around this, we have to use `<div>` instead of `<nav>` and change the name of the class for the navigation bar\n", "before_files": [{"content": "from django import template\n\nfrom branding.models import Logo, SocialMediaSettings, FooterSettings\nfrom wagtail.core.models import Site\n\nregister = template.Library()\n\n\[email protected]_tag('branding/tags/footer.html', takes_context=True)\ndef custom_footer(context):\n request = context['request']\n site = Site.find_for_request(request)\n return {\n 'settings': FooterSettings.for_site(site)\n }\n\n\[email protected]_tag('branding/tags/structure_header.html',\n takes_context=True)\ndef structure_header(context, logo_color=''):\n request = context['request']\n site = Site.find_for_request(request)\n logos = Logo.objects.exclude(belongs_to=site).all()\n committees = logos.filter(category='committee')\n sections = logos.filter(category='section')\n return {\n 'committees': committees,\n 'sections': sections,\n 'color': logo_color,\n }\n\n\[email protected]_tag('branding/tags/social_media.html', takes_context=True)\ndef social_media(context, dark=False):\n request = context['request']\n site = Site.find_for_request(request)\n return {\n 'settings': SocialMediaSettings.for_site(site),\n 'dark': dark,\n }\n", "path": "src/branding/templatetags/branding_tags.py"}]}
| 1,019 | 241 |
gh_patches_debug_8790
|
rasdani/github-patches
|
git_diff
|
docker__docker-py-178
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installed through dependencies "websocket" library doesn't work with Python 3
During `pip install docker-py` with Python 3 I'm getting the following error:
```
Running setup.py install for websocket-client
changing mode of build/scripts-3.3/wsdump.py from 644 to 755
File "/home/bob/env/lib/python3.3/site-packages/websocket.py", line 769
except Exception, e:
^
SyntaxError: invalid syntax
```
It looks like this error is ignored and docker-py actually finishes its installation and works afterwards, but if docker-py uses the websocket library, that usage will fail.
According to websocket docs they have separate branch for Python 3: https://github.com/liris/websocket-client/tree/py3
</issue>
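The reference patch later in this record addresses this by reading install requirements from a separate file when setup.py runs under Python 3, so that file can pin a Python-3-compatible websocket-client release. Below is a minimal standalone sketch of that version gate; the file names follow the patch, while the helper function and sample inputs are illustrative.

```python
import sys

def requirements_file_for(version_info=sys.version_info):
    # Python 3 installs read dependencies from requirements3.txt, where a
    # py3-compatible websocket-client can be listed; Python 2 keeps using
    # the existing requirements.txt.
    return "./requirements3.txt" if version_info[0] == 3 else "./requirements.txt"

print(requirements_file_for((2, 7, 5)))  # ./requirements.txt
print(requirements_file_for((3, 3, 0)))  # ./requirements3.txt
```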
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import os
3 from setuptools import setup
4
5 ROOT_DIR = os.path.dirname(__file__)
6 SOURCE_DIR = os.path.join(ROOT_DIR)
7
8 test_requirements = []
9 with open('./requirements.txt') as requirements_txt:
10 requirements = [line for line in requirements_txt]
11
12 setup(
13 name="docker-py",
14 version='0.3.0',
15 description="Python client for Docker.",
16 packages=['docker', 'docker.auth', 'docker.unixconn', 'docker.utils'],
17 install_requires=requirements + test_requirements,
18 zip_safe=False,
19 test_suite='tests',
20 classifiers=[
21 'Development Status :: 4 - Beta',
22 'Environment :: Other Environment',
23 'Intended Audience :: Developers',
24 'Operating System :: OS Independent',
25 'Programming Language :: Python',
26 'Programming Language :: Python :: 2.6',
27 'Programming Language :: Python :: 2.7',
28 'Programming Language :: Python :: 3.2',
29 'Programming Language :: Python :: 3.3',
30 'Topic :: Utilities',
31 'License :: OSI Approved :: Apache Software License',
32 ],
33 )
34
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,12 +1,18 @@
#!/usr/bin/env python
import os
+import sys
from setuptools import setup
ROOT_DIR = os.path.dirname(__file__)
SOURCE_DIR = os.path.join(ROOT_DIR)
+if sys.version_info[0] == 3:
+ requirements_file = './requirements3.txt'
+else:
+ requirements_file = './requirements.txt'
+
test_requirements = []
-with open('./requirements.txt') as requirements_txt:
+with open(requirements_file) as requirements_txt:
requirements = [line for line in requirements_txt]
setup(
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,12 +1,18 @@\n #!/usr/bin/env python\n import os\n+import sys\n from setuptools import setup\n \n ROOT_DIR = os.path.dirname(__file__)\n SOURCE_DIR = os.path.join(ROOT_DIR)\n \n+if sys.version_info[0] == 3:\n+ requirements_file = './requirements3.txt'\n+else:\n+ requirements_file = './requirements.txt'\n+\n test_requirements = []\n-with open('./requirements.txt') as requirements_txt:\n+with open(requirements_file) as requirements_txt:\n requirements = [line for line in requirements_txt]\n \n setup(\n", "issue": "Installed through dependencies \"websocket\" library doesn't work with Python 3\nDuring `pip install docker-py` with Python 3 I'm getting following error:\n\n```\n Running setup.py install for websocket-client\n changing mode of build/scripts-3.3/wsdump.py from 644 to 755\n File \"/home/bob/env/lib/python3.3/site-packages/websocket.py\", line 769\n except Exception, e:\n ^\n SyntaxError: invalid syntax\n```\n\nLooks like this error is ignored and actually docker finishes it's installation and works afterwards, but if docker-py uses websocket library it's usage will fail.\n\nAccording to websocket docs they have separate branch for Python 3: https://github.com/liris/websocket-client/tree/py3\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nfrom setuptools import setup\n\nROOT_DIR = os.path.dirname(__file__)\nSOURCE_DIR = os.path.join(ROOT_DIR)\n\ntest_requirements = []\nwith open('./requirements.txt') as requirements_txt:\n requirements = [line for line in requirements_txt]\n\nsetup(\n name=\"docker-py\",\n version='0.3.0',\n description=\"Python client for Docker.\",\n packages=['docker', 'docker.auth', 'docker.unixconn', 'docker.utils'],\n install_requires=requirements + test_requirements,\n zip_safe=False,\n test_suite='tests',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Other Environment',\n 'Intended Audience :: Developers',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: Apache Software License',\n ],\n)\n", "path": "setup.py"}]}
| 992 | 144 |
gh_patches_debug_38573
|
rasdani/github-patches
|
git_diff
|
facebookresearch__fairseq-5149
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MMS Multiple voice inputs problem
When I use the following command:
python examples/mms/asr/infer/mms_infer.py --model "/path/to/asr/model" --lang lang_code --audio "/path/to/audio_1.wav" "/path/to/audio_1.wav"
The results come back in a different order than the inputs. How can I match each output to its corresponding input audio file?
</issue>
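The reference patch later in this record restores the input order by using the `(None-<index>)` tag appended to every hypothesis line: the index, which corresponds to the position of the audio file in the generated manifest, is parsed back out and the lines are sorted before printing. The sketch below isolates that reordering step; the two sample hypothesis lines are made up for illustration.

```python
import re

def reorder_decode(hypos):
    # Each line ends with "(None-<idx>)". Parse the index, strip the tag, and
    # sort the hypotheses back into input order.
    outputs = []
    for hypo in hypos:
        idx = int(re.findall(r"\(None-(\d+)\)$", hypo.strip())[0])
        outputs.append((idx, re.sub(r"\(\S+\)$", "", hypo.strip()).strip()))
    return sorted(outputs)

hypos = [
    "transcript of the second file (None-1)",
    "transcript of the first file (None-0)",
]
for idx, text in reorder_decode(hypos):
    print(idx, text)
# 0 transcript of the first file
# 1 transcript of the second file
```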
<code>
[start of examples/mms/asr/infer/mms_infer.py]
1 #!/usr/bin/env python -u
2 # Copyright (c) Facebook, Inc. and its affiliates.
3 #
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 import argparse
8 import soundfile as sf
9 import tempfile
10 from pathlib import Path
11 import os
12 import subprocess
13 import sys
14 import re
15
16 def parser():
17 parser = argparse.ArgumentParser(description="ASR inference script for MMS model")
18 parser.add_argument("--model", type=str, help="path to ASR model", required=True)
19 parser.add_argument("--audio", type=str, help="path to audio file", required=True, nargs='+')
20 parser.add_argument("--lang", type=str, help="audio language", required=True)
21 parser.add_argument("--format", type=str, choices=["none", "letter"], default="letter")
22 return parser.parse_args()
23
24 def process(args):
25 with tempfile.TemporaryDirectory() as tmpdir:
26 print(">>> preparing tmp manifest dir ...", file=sys.stderr)
27 tmpdir = Path(tmpdir)
28 with open(tmpdir / "dev.tsv", "w") as fw:
29 fw.write("/\n")
30 for audio in args.audio:
31 nsample = sf.SoundFile(audio).frames
32 fw.write(f"{audio}\t{nsample}\n")
33 with open(tmpdir / "dev.uid", "w") as fw:
34 fw.write(f"{audio}\n"*len(args.audio))
35 with open(tmpdir / "dev.ltr", "w") as fw:
36 fw.write("d u m m y | d u m m y\n"*len(args.audio))
37 with open(tmpdir / "dev.wrd", "w") as fw:
38 fw.write("dummy dummy\n"*len(args.audio))
39 cmd = f"""
40 PYTHONPATH=. PREFIX=INFER HYDRA_FULL_ERROR=1 python examples/speech_recognition/new/infer.py -m --config-dir examples/mms/asr/config/ --config-name infer_common decoding.type=viterbi dataset.max_tokens=4000000 distributed_training.distributed_world_size=1 "common_eval.path='{args.model}'" task.data={tmpdir} dataset.gen_subset="{args.lang}:dev" common_eval.post_process={args.format} decoding.results_path={tmpdir}
41 """
42 print(">>> loading model & running inference ...", file=sys.stderr)
43 subprocess.run(cmd, shell=True, stdout=subprocess.DEVNULL,)
44 with open(tmpdir/"hypo.word") as fr:
45 for ii, hypo in enumerate(fr):
46 hypo = re.sub("\(\S+\)$", "", hypo).strip()
47 print(f'===============\nInput: {args.audio[ii]}\nOutput: {hypo}')
48
49
50 if __name__ == "__main__":
51 args = parser()
52 process(args)
53
[end of examples/mms/asr/infer/mms_infer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/mms/asr/infer/mms_infer.py b/examples/mms/asr/infer/mms_infer.py
--- a/examples/mms/asr/infer/mms_infer.py
+++ b/examples/mms/asr/infer/mms_infer.py
@@ -21,28 +21,38 @@
parser.add_argument("--format", type=str, choices=["none", "letter"], default="letter")
return parser.parse_args()
+def reorder_decode(hypos):
+ outputs = []
+ for hypo in hypos:
+ idx = int(re.findall("\(None-(\d+)\)$", hypo)[0])
+ hypo = re.sub("\(\S+\)$", "", hypo).strip()
+ outputs.append((idx, hypo))
+ outputs = sorted(outputs)
+ return outputs
+
def process(args):
with tempfile.TemporaryDirectory() as tmpdir:
print(">>> preparing tmp manifest dir ...", file=sys.stderr)
tmpdir = Path(tmpdir)
- with open(tmpdir / "dev.tsv", "w") as fw:
+ with open(tmpdir / "dev.tsv", "w") as fw, open(tmpdir / "dev.uid", "w") as fu:
fw.write("/\n")
for audio in args.audio:
nsample = sf.SoundFile(audio).frames
fw.write(f"{audio}\t{nsample}\n")
- with open(tmpdir / "dev.uid", "w") as fw:
- fw.write(f"{audio}\n"*len(args.audio))
+ fu.write(f"{audio}\n")
with open(tmpdir / "dev.ltr", "w") as fw:
- fw.write("d u m m y | d u m m y\n"*len(args.audio))
+ fw.write("d u m m y | d u m m y |\n"*len(args.audio))
with open(tmpdir / "dev.wrd", "w") as fw:
fw.write("dummy dummy\n"*len(args.audio))
cmd = f"""
- PYTHONPATH=. PREFIX=INFER HYDRA_FULL_ERROR=1 python examples/speech_recognition/new/infer.py -m --config-dir examples/mms/asr/config/ --config-name infer_common decoding.type=viterbi dataset.max_tokens=4000000 distributed_training.distributed_world_size=1 "common_eval.path='{args.model}'" task.data={tmpdir} dataset.gen_subset="{args.lang}:dev" common_eval.post_process={args.format} decoding.results_path={tmpdir}
+ PYTHONPATH=. PREFIX=INFER HYDRA_FULL_ERROR=1 python examples/speech_recognition/new/infer.py -m --config-dir examples/mms/asr/config/ --config-name infer_common decoding.type=viterbi dataset.max_tokens=1440000 distributed_training.distributed_world_size=1 "common_eval.path='{args.model}'" task.data={tmpdir} dataset.gen_subset="{args.lang}:dev" common_eval.post_process={args.format} decoding.results_path={tmpdir}
"""
print(">>> loading model & running inference ...", file=sys.stderr)
subprocess.run(cmd, shell=True, stdout=subprocess.DEVNULL,)
with open(tmpdir/"hypo.word") as fr:
- for ii, hypo in enumerate(fr):
+ hypos = fr.readlines()
+ outputs = reorder_decode(hypos)
+ for ii, hypo in outputs:
hypo = re.sub("\(\S+\)$", "", hypo).strip()
print(f'===============\nInput: {args.audio[ii]}\nOutput: {hypo}')
|
{"golden_diff": "diff --git a/examples/mms/asr/infer/mms_infer.py b/examples/mms/asr/infer/mms_infer.py\n--- a/examples/mms/asr/infer/mms_infer.py\n+++ b/examples/mms/asr/infer/mms_infer.py\n@@ -21,28 +21,38 @@\n parser.add_argument(\"--format\", type=str, choices=[\"none\", \"letter\"], default=\"letter\")\n return parser.parse_args()\n \n+def reorder_decode(hypos):\n+ outputs = []\n+ for hypo in hypos:\n+ idx = int(re.findall(\"\\(None-(\\d+)\\)$\", hypo)[0])\n+ hypo = re.sub(\"\\(\\S+\\)$\", \"\", hypo).strip()\n+ outputs.append((idx, hypo))\n+ outputs = sorted(outputs)\n+ return outputs\n+\n def process(args): \n with tempfile.TemporaryDirectory() as tmpdir:\n print(\">>> preparing tmp manifest dir ...\", file=sys.stderr)\n tmpdir = Path(tmpdir)\n- with open(tmpdir / \"dev.tsv\", \"w\") as fw:\n+ with open(tmpdir / \"dev.tsv\", \"w\") as fw, open(tmpdir / \"dev.uid\", \"w\") as fu:\n fw.write(\"/\\n\")\n for audio in args.audio:\n nsample = sf.SoundFile(audio).frames\n fw.write(f\"{audio}\\t{nsample}\\n\")\n- with open(tmpdir / \"dev.uid\", \"w\") as fw:\n- fw.write(f\"{audio}\\n\"*len(args.audio))\n+ fu.write(f\"{audio}\\n\")\n with open(tmpdir / \"dev.ltr\", \"w\") as fw:\n- fw.write(\"d u m m y | d u m m y\\n\"*len(args.audio))\n+ fw.write(\"d u m m y | d u m m y |\\n\"*len(args.audio))\n with open(tmpdir / \"dev.wrd\", \"w\") as fw:\n fw.write(\"dummy dummy\\n\"*len(args.audio))\n cmd = f\"\"\"\n- PYTHONPATH=. PREFIX=INFER HYDRA_FULL_ERROR=1 python examples/speech_recognition/new/infer.py -m --config-dir examples/mms/asr/config/ --config-name infer_common decoding.type=viterbi dataset.max_tokens=4000000 distributed_training.distributed_world_size=1 \"common_eval.path='{args.model}'\" task.data={tmpdir} dataset.gen_subset=\"{args.lang}:dev\" common_eval.post_process={args.format} decoding.results_path={tmpdir}\n+ PYTHONPATH=. PREFIX=INFER HYDRA_FULL_ERROR=1 python examples/speech_recognition/new/infer.py -m --config-dir examples/mms/asr/config/ --config-name infer_common decoding.type=viterbi dataset.max_tokens=1440000 distributed_training.distributed_world_size=1 \"common_eval.path='{args.model}'\" task.data={tmpdir} dataset.gen_subset=\"{args.lang}:dev\" common_eval.post_process={args.format} decoding.results_path={tmpdir}\n \"\"\"\n print(\">>> loading model & running inference ...\", file=sys.stderr)\n subprocess.run(cmd, shell=True, stdout=subprocess.DEVNULL,)\n with open(tmpdir/\"hypo.word\") as fr:\n- for ii, hypo in enumerate(fr):\n+ hypos = fr.readlines()\n+ outputs = reorder_decode(hypos)\n+ for ii, hypo in outputs:\n hypo = re.sub(\"\\(\\S+\\)$\", \"\", hypo).strip()\n print(f'===============\\nInput: {args.audio[ii]}\\nOutput: {hypo}')\n", "issue": "MMS Multiple voice inputs problem\nWhen I use the following command\uff1a\r\npython examples/mms/asr/infer/mms_infer.py --model \"/path/to/asr/model\" --lang lang_code --audio \"/path/to/audio_1.wav\" \"/path/to/audio_1.wav\"\r\nThe result is disorder. How to correspond one by one.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python -u\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport argparse\nimport soundfile as sf\nimport tempfile\nfrom pathlib import Path\nimport os\nimport subprocess\nimport sys\nimport re\n\ndef parser():\n parser = argparse.ArgumentParser(description=\"ASR inference script for MMS model\")\n parser.add_argument(\"--model\", type=str, help=\"path to ASR model\", required=True)\n parser.add_argument(\"--audio\", type=str, help=\"path to audio file\", required=True, nargs='+')\n parser.add_argument(\"--lang\", type=str, help=\"audio language\", required=True)\n parser.add_argument(\"--format\", type=str, choices=[\"none\", \"letter\"], default=\"letter\")\n return parser.parse_args()\n\ndef process(args): \n with tempfile.TemporaryDirectory() as tmpdir:\n print(\">>> preparing tmp manifest dir ...\", file=sys.stderr)\n tmpdir = Path(tmpdir)\n with open(tmpdir / \"dev.tsv\", \"w\") as fw:\n fw.write(\"/\\n\")\n for audio in args.audio:\n nsample = sf.SoundFile(audio).frames\n fw.write(f\"{audio}\\t{nsample}\\n\")\n with open(tmpdir / \"dev.uid\", \"w\") as fw:\n fw.write(f\"{audio}\\n\"*len(args.audio))\n with open(tmpdir / \"dev.ltr\", \"w\") as fw:\n fw.write(\"d u m m y | d u m m y\\n\"*len(args.audio))\n with open(tmpdir / \"dev.wrd\", \"w\") as fw:\n fw.write(\"dummy dummy\\n\"*len(args.audio))\n cmd = f\"\"\"\n PYTHONPATH=. PREFIX=INFER HYDRA_FULL_ERROR=1 python examples/speech_recognition/new/infer.py -m --config-dir examples/mms/asr/config/ --config-name infer_common decoding.type=viterbi dataset.max_tokens=4000000 distributed_training.distributed_world_size=1 \"common_eval.path='{args.model}'\" task.data={tmpdir} dataset.gen_subset=\"{args.lang}:dev\" common_eval.post_process={args.format} decoding.results_path={tmpdir}\n \"\"\"\n print(\">>> loading model & running inference ...\", file=sys.stderr)\n subprocess.run(cmd, shell=True, stdout=subprocess.DEVNULL,)\n with open(tmpdir/\"hypo.word\") as fr:\n for ii, hypo in enumerate(fr):\n hypo = re.sub(\"\\(\\S+\\)$\", \"\", hypo).strip()\n print(f'===============\\nInput: {args.audio[ii]}\\nOutput: {hypo}')\n\n\nif __name__ == \"__main__\":\n args = parser()\n process(args)\n", "path": "examples/mms/asr/infer/mms_infer.py"}]}
| 1,317 | 774 |
gh_patches_debug_2121
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-2855
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redundant exclude in setup.py's find_packages
I think the call can be reduced from
``` python
find_packages(
where="src", exclude=["_cffi_src", "_cffi_src.*", "tests", "tests.*"]
)
```
to
``` python
find_packages(where="src", exclude=["_cffi_src", "_cffi_src.*"])
```
because of the `where="src"`. I verified by printing the output from setup.py
</issue>
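The claim is easy to check mechanically: with the repository's `src/` layout, nothing under `src/` should match the `tests*` patterns, so both calls should yield the same package list. A small sketch of that verification, assuming it is run from the repository root:

```python
from setuptools import find_packages

# Both calls should produce identical results, because the tests live outside
# of src/ and therefore never show up under where="src" in the first place.
with_tests_excluded = find_packages(
    where="src", exclude=["_cffi_src", "_cffi_src.*", "tests", "tests.*"]
)
without_tests_excluded = find_packages(where="src", exclude=["_cffi_src", "_cffi_src.*"])

print(sorted(with_tests_excluded) == sorted(without_tests_excluded))  # expected: True
```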
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 from __future__ import absolute_import, division, print_function
8
9 import os
10 import platform
11 import subprocess
12 import sys
13 from distutils.command.build import build
14
15 import pkg_resources
16
17 from setuptools import find_packages, setup
18 from setuptools.command.install import install
19 from setuptools.command.test import test
20
21
22 base_dir = os.path.dirname(__file__)
23 src_dir = os.path.join(base_dir, "src")
24
25 # When executing the setup.py, we need to be able to import ourselves, this
26 # means that we need to add the src/ directory to the sys.path.
27 sys.path.insert(0, src_dir)
28
29 about = {}
30 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
31 exec(f.read(), about)
32
33
34 VECTORS_DEPENDENCY = "cryptography_vectors=={0}".format(about['__version__'])
35
36 requirements = [
37 "idna>=2.0",
38 "pyasn1>=0.1.8",
39 "six>=1.4.1",
40 "setuptools>=11.3",
41 ]
42 setup_requirements = []
43
44 if sys.version_info < (3, 4):
45 requirements.append("enum34")
46
47 if sys.version_info < (3, 3):
48 requirements.append("ipaddress")
49
50 if platform.python_implementation() == "PyPy":
51 if sys.pypy_version_info < (2, 6):
52 raise RuntimeError(
53 "cryptography 1.0 is not compatible with PyPy < 2.6. Please "
54 "upgrade PyPy to use this library."
55 )
56 else:
57 requirements.append("cffi>=1.4.1")
58 setup_requirements.append("cffi>=1.4.1")
59
60 test_requirements = [
61 "pytest",
62 "pretend",
63 "iso8601",
64 "pyasn1_modules",
65 ]
66 if sys.version_info[:2] > (2, 6):
67 test_requirements.append("hypothesis>=1.11.4")
68
69
70 # If there's no vectors locally that probably means we are in a tarball and
71 # need to go and get the matching vectors package from PyPi
72 if not os.path.exists(os.path.join(base_dir, "vectors/setup.py")):
73 test_requirements.append(VECTORS_DEPENDENCY)
74
75
76 def cc_is_available():
77 return sys.platform == "darwin" and list(map(
78 int, platform.mac_ver()[0].split("."))) >= [10, 8, 0]
79
80
81 backends = [
82 "openssl = cryptography.hazmat.backends.openssl:backend"
83 ]
84
85 if cc_is_available():
86 backends.append(
87 "commoncrypto = cryptography.hazmat.backends.commoncrypto:backend",
88 )
89
90
91 class PyTest(test):
92 def finalize_options(self):
93 test.finalize_options(self)
94 self.test_args = []
95 self.test_suite = True
96
97 # This means there's a vectors/ folder with the package in here.
98 # cd into it, install the vectors package and then refresh sys.path
99 if VECTORS_DEPENDENCY not in test_requirements:
100 subprocess.check_call(
101 [sys.executable, "setup.py", "install"], cwd="vectors"
102 )
103 pkg_resources.get_distribution("cryptography_vectors").activate()
104
105 def run_tests(self):
106 # Import here because in module scope the eggs are not loaded.
107 import pytest
108 test_args = [os.path.join(base_dir, "tests")]
109 errno = pytest.main(test_args)
110 sys.exit(errno)
111
112
113 def keywords_with_side_effects(argv):
114 """
115 Get a dictionary with setup keywords that (can) have side effects.
116
117 :param argv: A list of strings with command line arguments.
118 :returns: A dictionary with keyword arguments for the ``setup()`` function.
119
120 This setup.py script uses the setuptools 'setup_requires' feature because
121 this is required by the cffi package to compile extension modules. The
122 purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi
123 build process as a result of setup.py invocations that don't need the cffi
124 module to be built (setup.py serves the dual purpose of exposing package
125 metadata).
126
127 All of the options listed by ``python setup.py --help`` that print
128 information should be recognized here. The commands ``clean``,
129 ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.
130 Any combination of these options and commands is also supported.
131
132 This function was originally based on the `setup.py script`_ of SciPy (see
133 also the discussion in `pip issue #25`_).
134
135 .. _pip issue #25: https://github.com/pypa/pip/issues/25
136 .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py
137 """
138 no_setup_requires_arguments = (
139 '-h', '--help',
140 '-n', '--dry-run',
141 '-q', '--quiet',
142 '-v', '--verbose',
143 '-V', '--version',
144 '--author',
145 '--author-email',
146 '--classifiers',
147 '--contact',
148 '--contact-email',
149 '--description',
150 '--egg-base',
151 '--fullname',
152 '--help-commands',
153 '--keywords',
154 '--licence',
155 '--license',
156 '--long-description',
157 '--maintainer',
158 '--maintainer-email',
159 '--name',
160 '--no-user-cfg',
161 '--obsoletes',
162 '--platforms',
163 '--provides',
164 '--requires',
165 '--url',
166 'clean',
167 'egg_info',
168 'register',
169 'sdist',
170 'upload',
171 )
172
173 def is_short_option(argument):
174 """Check whether a command line argument is a short option."""
175 return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'
176
177 def expand_short_options(argument):
178 """Expand combined short options into canonical short options."""
179 return ('-' + char for char in argument[1:])
180
181 def argument_without_setup_requirements(argv, i):
182 """Check whether a command line argument needs setup requirements."""
183 if argv[i] in no_setup_requires_arguments:
184 # Simple case: An argument which is either an option or a command
185 # which doesn't need setup requirements.
186 return True
187 elif (is_short_option(argv[i]) and
188 all(option in no_setup_requires_arguments
189 for option in expand_short_options(argv[i]))):
190 # Not so simple case: Combined short options none of which need
191 # setup requirements.
192 return True
193 elif argv[i - 1:i] == ['--egg-base']:
194 # Tricky case: --egg-info takes an argument which should not make
195 # us use setup_requires (defeating the purpose of this code).
196 return True
197 else:
198 return False
199
200 if all(argument_without_setup_requirements(argv, i)
201 for i in range(1, len(argv))):
202 return {
203 "cmdclass": {
204 "build": DummyBuild,
205 "install": DummyInstall,
206 "test": DummyPyTest,
207 }
208 }
209 else:
210 cffi_modules = [
211 "src/_cffi_src/build_openssl.py:ffi",
212 "src/_cffi_src/build_constant_time.py:ffi",
213 "src/_cffi_src/build_padding.py:ffi",
214 ]
215 if cc_is_available():
216 cffi_modules.append("src/_cffi_src/build_commoncrypto.py:ffi")
217
218 return {
219 "setup_requires": setup_requirements,
220 "cmdclass": {
221 "test": PyTest,
222 },
223 "cffi_modules": cffi_modules
224 }
225
226
227 setup_requires_error = ("Requested setup command that needs 'setup_requires' "
228 "while command line arguments implied a side effect "
229 "free command or option.")
230
231
232 class DummyBuild(build):
233 """
234 This class makes it very obvious when ``keywords_with_side_effects()`` has
235 incorrectly interpreted the command line arguments to ``setup.py build`` as
236 one of the 'side effect free' commands or options.
237 """
238
239 def run(self):
240 raise RuntimeError(setup_requires_error)
241
242
243 class DummyInstall(install):
244 """
245 This class makes it very obvious when ``keywords_with_side_effects()`` has
246 incorrectly interpreted the command line arguments to ``setup.py install``
247 as one of the 'side effect free' commands or options.
248 """
249
250 def run(self):
251 raise RuntimeError(setup_requires_error)
252
253
254 class DummyPyTest(test):
255 """
256 This class makes it very obvious when ``keywords_with_side_effects()`` has
257 incorrectly interpreted the command line arguments to ``setup.py test`` as
258 one of the 'side effect free' commands or options.
259 """
260
261 def run_tests(self):
262 raise RuntimeError(setup_requires_error)
263
264
265 with open(os.path.join(base_dir, "README.rst")) as f:
266 long_description = f.read()
267
268
269 setup(
270 name=about["__title__"],
271 version=about["__version__"],
272
273 description=about["__summary__"],
274 long_description=long_description,
275 license=about["__license__"],
276 url=about["__uri__"],
277
278 author=about["__author__"],
279 author_email=about["__email__"],
280
281 classifiers=[
282 "Intended Audience :: Developers",
283 "License :: OSI Approved :: Apache Software License",
284 "License :: OSI Approved :: BSD License",
285 "Natural Language :: English",
286 "Operating System :: MacOS :: MacOS X",
287 "Operating System :: POSIX",
288 "Operating System :: POSIX :: BSD",
289 "Operating System :: POSIX :: Linux",
290 "Operating System :: Microsoft :: Windows",
291 "Programming Language :: Python",
292 "Programming Language :: Python :: 2",
293 "Programming Language :: Python :: 2.6",
294 "Programming Language :: Python :: 2.7",
295 "Programming Language :: Python :: 3",
296 "Programming Language :: Python :: 3.3",
297 "Programming Language :: Python :: 3.4",
298 "Programming Language :: Python :: 3.5",
299 "Programming Language :: Python :: Implementation :: CPython",
300 "Programming Language :: Python :: Implementation :: PyPy",
301 "Topic :: Security :: Cryptography",
302 ],
303
304 package_dir={"": "src"},
305 packages=find_packages(
306 where="src", exclude=["_cffi_src", "_cffi_src.*", "tests", "tests.*"]
307 ),
308 include_package_data=True,
309
310 install_requires=requirements,
311 tests_require=test_requirements,
312 extras_require={
313 "test": test_requirements,
314 "docs-test": [
315 "doc8",
316 "pyenchant",
317 "readme_renderer",
318 "sphinx",
319 "sphinx_rtd_theme",
320 "sphinxcontrib-spelling",
321 ],
322 "pep8-test": [
323 "flake8",
324 "flake8-import-order",
325 "pep8-naming",
326 ],
327 },
328
329 # for cffi
330 zip_safe=False,
331 ext_package="cryptography.hazmat.bindings",
332 entry_points={
333 "cryptography.backends": backends,
334 },
335 **keywords_with_side_effects(sys.argv)
336 )
337
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -302,9 +302,7 @@
],
package_dir={"": "src"},
- packages=find_packages(
- where="src", exclude=["_cffi_src", "_cffi_src.*", "tests", "tests.*"]
- ),
+ packages=find_packages(where="src", exclude=["_cffi_src", "_cffi_src.*"]),
include_package_data=True,
install_requires=requirements,
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -302,9 +302,7 @@\n ],\n \n package_dir={\"\": \"src\"},\n- packages=find_packages(\n- where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\", \"tests\", \"tests.*\"]\n- ),\n+ packages=find_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]),\n include_package_data=True,\n \n install_requires=requirements,\n", "issue": "Redundant exclude in setup.py's find_packages\nI think the call can be reduced from \n\n``` python\nfind_packages(\n where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\", \"tests\", \"tests.*\"]\n)\n```\n\nto\n\n``` python\nfind_packages(where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"])\n```\n\nbecause of the `where=\"src\"`. I verified by printing the output from setup.py\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nrequirements = [\n \"idna>=2.0\",\n \"pyasn1>=0.1.8\",\n \"six>=1.4.1\",\n \"setuptools>=11.3\",\n]\nsetup_requirements = []\n\nif sys.version_info < (3, 4):\n requirements.append(\"enum34\")\n\nif sys.version_info < (3, 3):\n requirements.append(\"ipaddress\")\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (2, 6):\n raise RuntimeError(\n \"cryptography 1.0 is not compatible with PyPy < 2.6. 
Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n requirements.append(\"cffi>=1.4.1\")\n setup_requirements.append(\"cffi>=1.4.1\")\n\ntest_requirements = [\n \"pytest\",\n \"pretend\",\n \"iso8601\",\n \"pyasn1_modules\",\n]\nif sys.version_info[:2] > (2, 6):\n test_requirements.append(\"hypothesis>=1.11.4\")\n\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\ndef cc_is_available():\n return sys.platform == \"darwin\" and list(map(\n int, platform.mac_ver()[0].split(\".\"))) >= [10, 8, 0]\n\n\nbackends = [\n \"openssl = cryptography.hazmat.backends.openssl:backend\"\n]\n\nif cc_is_available():\n backends.append(\n \"commoncrypto = cryptography.hazmat.backends.commoncrypto:backend\",\n )\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. 
_setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n if cc_is_available():\n cffi_modules.append(\"src/_cffi_src/build_commoncrypto.py:ffi\")\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n 
long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(\n where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\", \"tests\", \"tests.*\"]\n ),\n include_package_data=True,\n\n install_requires=requirements,\n tests_require=test_requirements,\n extras_require={\n \"test\": test_requirements,\n \"docs-test\": [\n \"doc8\",\n \"pyenchant\",\n \"readme_renderer\",\n \"sphinx\",\n \"sphinx_rtd_theme\",\n \"sphinxcontrib-spelling\",\n ],\n \"pep8-test\": [\n \"flake8\",\n \"flake8-import-order\",\n \"pep8-naming\",\n ],\n },\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n entry_points={\n \"cryptography.backends\": backends,\n },\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}]}
| 3,970 | 115 |
gh_patches_debug_26644
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-293
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cliquet.storage.exceptions.BackendError: ConnectionError: Error 8 connecting to localhos:6379
The config wizard has typos:
https://github.com/Kinto/kinto/blob/master/kinto/config/__init__.py#L39
https://github.com/Kinto/kinto/blob/master/kinto/config/__init__.py#L41
https://github.com/Kinto/kinto/blob/master/kinto/config/__init__.py#L43
I am not sure how, but maybe we could have a way to test that the produced default config works.
</issue>
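The last point (automatically testing the produced config) is not covered by the report itself; a minimal sketch of such a regression test, assuming pytest's `tmpdir` fixture and patching the wizard's `input` prompt, might look like this:

```python
# Sketch only: render the Redis configuration and assert the host names are valid.
# Assumes pytest and the mock package (unittest.mock works the same way on Python 3).
import mock

from kinto.config import init


def test_redis_config_uses_localhost(tmpdir):
    config_file = str(tmpdir.join("kinto.ini"))
    # "2" selects the Redis backend in the wizard.
    with mock.patch("kinto.config.input", return_value="2"):
        init(config_file)
    with open(config_file) as f:
        content = f.read()
    assert "redis://localhost:6379/1" in content
    assert "localhos:" not in content
```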
<code>
[start of kinto/config/__init__.py]
1 import os
2 import binascii
3 import codecs
4 from six.moves import input
5
6 HERE = os.path.abspath(os.path.dirname(__file__))
7
8
9 def render_template(template, destination, **kwargs):
10 template = os.path.join(HERE, template)
11
12 with codecs.open(template, 'r', encoding='utf-8') as f:
13 raw_template = f.read()
14 rendered = raw_template.format(**kwargs)
15 with codecs.open(destination, 'w+', encoding='utf-8') as output:
16 output.write(rendered)
17
18
19 def init(config_file):
20 values = {}
21 values['secret'] = binascii.b2a_hex(os.urandom(32))
22
23 backend = input("Which backend to use? "
24 "(1 - postgresql, 2 - redis, default - memory) ").strip()
25
26 if backend == '1':
27 # Postgresql configuration
28 postgresql_url = "postgres://postgres:postgres@localhost/postgres"
29 values['storage_backend'] = "cliquet.storage.postgresql"
30 values['storage_url'] = postgresql_url
31 values['cache_backend'] = "cliquet.cache.postgresql"
32 values['cache_url'] = postgresql_url
33 values['permission_backend'] = "cliquet.permission.postgresql"
34 values['permission_url'] = postgresql_url
35
36 elif backend == '2':
37 # Redis configuration
38 values['storage_backend'] = "cliquet.storage.redis"
39 values['storage_url'] = "redis://localhos:6379/1"
40 values['cache_backend'] = "cliquet.cache.redis"
41 values['cache_url'] = "redis://localhos:6379/2"
42 values['permission_backend'] = "cliquet.permission.redis"
43 values['permission_url'] = "redis://localhos:6379/3"
44
45 else:
46 # Memory configuration / default backend
47 values['storage_backend'] = "cliquet.storage.memory"
48 values['storage_url'] = ""
49 values['cache_backend'] = "cliquet.cache.memory"
50 values['cache_url'] = ""
51 values['permission_backend'] = "cliquet.permission.memory"
52 values['permission_url'] = ""
53
54 render_template("kinto.tpl", config_file,
55 secret=values['secret'],
56 storage_backend=values['storage_backend'],
57 storage_url=values['storage_url'],
58 cache_backend=values['cache_backend'],
59 cache_url=values['cache_url'],
60 permission_backend=values['permission_backend'],
61 permission_url=values['permission_url'])
62
[end of kinto/config/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/config/__init__.py b/kinto/config/__init__.py
--- a/kinto/config/__init__.py
+++ b/kinto/config/__init__.py
@@ -36,11 +36,11 @@
elif backend == '2':
# Redis configuration
values['storage_backend'] = "cliquet.storage.redis"
- values['storage_url'] = "redis://localhos:6379/1"
+ values['storage_url'] = "redis://localhost:6379/1"
values['cache_backend'] = "cliquet.cache.redis"
- values['cache_url'] = "redis://localhos:6379/2"
+ values['cache_url'] = "redis://localhost:6379/2"
values['permission_backend'] = "cliquet.permission.redis"
- values['permission_url'] = "redis://localhos:6379/3"
+ values['permission_url'] = "redis://localhost:6379/3"
else:
# Memory configuration / default backend
@@ -51,11 +51,4 @@
values['permission_backend'] = "cliquet.permission.memory"
values['permission_url'] = ""
- render_template("kinto.tpl", config_file,
- secret=values['secret'],
- storage_backend=values['storage_backend'],
- storage_url=values['storage_url'],
- cache_backend=values['cache_backend'],
- cache_url=values['cache_url'],
- permission_backend=values['permission_backend'],
- permission_url=values['permission_url'])
+ render_template("kinto.tpl", config_file, **values)
|
{"golden_diff": "diff --git a/kinto/config/__init__.py b/kinto/config/__init__.py\n--- a/kinto/config/__init__.py\n+++ b/kinto/config/__init__.py\n@@ -36,11 +36,11 @@\n elif backend == '2':\n # Redis configuration\n values['storage_backend'] = \"cliquet.storage.redis\"\n- values['storage_url'] = \"redis://localhos:6379/1\"\n+ values['storage_url'] = \"redis://localhost:6379/1\"\n values['cache_backend'] = \"cliquet.cache.redis\"\n- values['cache_url'] = \"redis://localhos:6379/2\"\n+ values['cache_url'] = \"redis://localhost:6379/2\"\n values['permission_backend'] = \"cliquet.permission.redis\"\n- values['permission_url'] = \"redis://localhos:6379/3\"\n+ values['permission_url'] = \"redis://localhost:6379/3\"\n \n else:\n # Memory configuration / default backend\n@@ -51,11 +51,4 @@\n values['permission_backend'] = \"cliquet.permission.memory\"\n values['permission_url'] = \"\"\n \n- render_template(\"kinto.tpl\", config_file,\n- secret=values['secret'],\n- storage_backend=values['storage_backend'],\n- storage_url=values['storage_url'],\n- cache_backend=values['cache_backend'],\n- cache_url=values['cache_url'],\n- permission_backend=values['permission_backend'],\n- permission_url=values['permission_url'])\n+ render_template(\"kinto.tpl\", config_file, **values)\n", "issue": "cliquet.storage.exceptions.BackendError: ConnectionError: Error 8 connecting to localhos:6379\nThe config wizard has typos:\n\nhttps://github.com/Kinto/kinto/blob/master/kinto/config/__init__.py#L39\nhttps://github.com/Kinto/kinto/blob/master/kinto/config/__init__.py#L41\nhttps://github.com/Kinto/kinto/blob/master/kinto/config/__init__.py#L43\n\nI am not sure how, but maybe we could have a way to test that the produced default config works\n\n", "before_files": [{"content": "import os\nimport binascii\nimport codecs\nfrom six.moves import input\n\nHERE = os.path.abspath(os.path.dirname(__file__))\n\n\ndef render_template(template, destination, **kwargs):\n template = os.path.join(HERE, template)\n\n with codecs.open(template, 'r', encoding='utf-8') as f:\n raw_template = f.read()\n rendered = raw_template.format(**kwargs)\n with codecs.open(destination, 'w+', encoding='utf-8') as output:\n output.write(rendered)\n\n\ndef init(config_file):\n values = {}\n values['secret'] = binascii.b2a_hex(os.urandom(32))\n\n backend = input(\"Which backend to use? 
\"\n \"(1 - postgresql, 2 - redis, default - memory) \").strip()\n\n if backend == '1':\n # Postgresql configuration\n postgresql_url = \"postgres://postgres:postgres@localhost/postgres\"\n values['storage_backend'] = \"cliquet.storage.postgresql\"\n values['storage_url'] = postgresql_url\n values['cache_backend'] = \"cliquet.cache.postgresql\"\n values['cache_url'] = postgresql_url\n values['permission_backend'] = \"cliquet.permission.postgresql\"\n values['permission_url'] = postgresql_url\n\n elif backend == '2':\n # Redis configuration\n values['storage_backend'] = \"cliquet.storage.redis\"\n values['storage_url'] = \"redis://localhos:6379/1\"\n values['cache_backend'] = \"cliquet.cache.redis\"\n values['cache_url'] = \"redis://localhos:6379/2\"\n values['permission_backend'] = \"cliquet.permission.redis\"\n values['permission_url'] = \"redis://localhos:6379/3\"\n\n else:\n # Memory configuration / default backend\n values['storage_backend'] = \"cliquet.storage.memory\"\n values['storage_url'] = \"\"\n values['cache_backend'] = \"cliquet.cache.memory\"\n values['cache_url'] = \"\"\n values['permission_backend'] = \"cliquet.permission.memory\"\n values['permission_url'] = \"\"\n\n render_template(\"kinto.tpl\", config_file,\n secret=values['secret'],\n storage_backend=values['storage_backend'],\n storage_url=values['storage_url'],\n cache_backend=values['cache_backend'],\n cache_url=values['cache_url'],\n permission_backend=values['permission_backend'],\n permission_url=values['permission_url'])\n", "path": "kinto/config/__init__.py"}]}
| 1,319 | 374 |
gh_patches_debug_31207
|
rasdani/github-patches
|
git_diff
|
openedx__edx-ora2-1144
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"TypeError: u'<url>' is not JSON serializable" on upload to backends other than S3
File upload to "django" and "filesystem" backends is currently broken. Each attempt to generate an upload URL fails with the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/exception.py", line 41, in inner
response = get_response(request)
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 249, in _legacy_get_response
response = self._get_response(request)
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 187, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py", line 185, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python2.7/dist-packages/django/utils/decorators.py", line 185, in inner
return func(*args, **kwargs)
File "/openedx/edx-platform/lms/djangoapps/courseware/module_render.py", line 962, in handle_xblock_callback
return _invoke_xblock_handler(request, course_id, usage_id, handler, suffix, course=course)
File "/openedx/edx-platform/lms/djangoapps/courseware/module_render.py", line 1068, in _invoke_xblock_handler
resp = instance.handle(handler, req, suffix)
File "/usr/local/lib/python2.7/dist-packages/xblock/mixins.py", line 90, in handle
return self.runtime.handle(self, handler_name, request, suffix)
File "/openedx/edx-platform/common/lib/xmodule/xmodule/x_module.py", line 1347, in handle
return super(MetricsMixin, self).handle(block, handler_name, request, suffix=suffix)
File "/usr/local/lib/python2.7/dist-packages/xblock/runtime.py", line 1037, in handle
results = handler(request, suffix)
File "/usr/local/lib/python2.7/dist-packages/xblock/mixins.py", line 74, in wrapper
response = json.dumps(response)
File "/usr/lib/python2.7/json/__init__.py", line 244, in dumps
return _default_encoder.encode(obj)
File "/usr/lib/python2.7/json/encoder.py", line 207, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python2.7/json/encoder.py", line 184, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: u'/openassessment/fileupload/django/4dd623a62f4e626d827277cd3fedaf45/course-v1:org+video1+01/block-v1:org+video1+01+type@openassessment+block@25d8551f741342ac919590db2e72091b/' is not JSON serializable
```
This error is due to the fact that URLs returned by the 'django' and 'filesystem' backends are computed with `django.core.urlresolvers.reverse_lazy`. Thus, they are not unicode strings but `django.utils.functional.__proxy__` objects. The TypeError is raised when calling `json.dumps()` on `{'success': True, 'url': url_object}` in `xblock.mixins.HandlersMixin.json_handler`.
I can easily solve the problem by replacing `reverse_lazy` with `reverse`. However, I wonder: what was the original reason for using `reverse_lazy`? The change was introduced in commit 3e13d27df by @efischer19.
Python tests pass successfully if I replace `reverse_lazy` by `reverse`.
I believe both ORA2 backends are broken for the current Open edX Hawthorn release.
(this error was first discovered [here](https://github.com/regisb/openedx-docker/issues/97))
</issue>
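For context, a minimal illustration of the difference (not from the report; it assumes a configured Django project where the `"openassessment-django-storage"` URL pattern shown below is registered):

```python
# Sketch: reverse_lazy() returns a lazy __proxy__ object that json.dumps() rejects,
# while reverse() resolves immediately to a plain text string.
import json
from django.core.urlresolvers import reverse, reverse_lazy

lazy_url = reverse_lazy("openassessment-django-storage", kwargs={"key": "abc"})
eager_url = reverse("openassessment-django-storage", kwargs={"key": "abc"})

json.dumps({"success": True, "url": eager_url})  # works
json.dumps({"success": True, "url": lazy_url})   # raises TypeError, as in the traceback
```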
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from setuptools import find_packages, setup
3
4
5 def is_requirement(line):
6 """
7 Return True if the requirement line is a package requirement;
8 that is, it is not blank, a comment, or editable.
9 """
10 # Remove whitespace at the start/end of the line
11 line = line.strip()
12
13 # Skip blank lines, comments, and editable installs
14 return not (
15 line == '' or
16 line.startswith('-r') or
17 line.startswith('#') or
18 line.startswith('-e') or
19 line.startswith('git+')
20 )
21
22 def load_requirements(*requirements_paths):
23 """
24 Load all requirements from the specified requirements files.
25 Returns a list of requirement strings.
26 """
27 requirements = set()
28 for path in requirements_paths:
29 requirements.update(
30 line.strip() for line in open(path).readlines()
31 if is_requirement(line)
32 )
33 return list(requirements)
34
35 setup(
36 name='ora2',
37 version='2.2.1',
38 author='edX',
39 url='http://github.com/edx/edx-ora2',
40 description='edx-ora2',
41 license='AGPL',
42 classifiers=[
43 'Development Status :: 3 - Alpha',
44 'Framework :: Django :: 1.11',
45 'Intended Audience :: Developers',
46 'License :: OSI Approved :: GNU Affero General Public License v3',
47 'Operating System :: OS Independent',
48 'Programming Language :: Python',
49 ],
50 packages=find_packages(include=['openassessment*'], exclude=['*.test', '*.tests']),
51 include_package_data=True,
52 install_requires=load_requirements('requirements/base.txt', "requirements/django.txt"),
53 tests_require=load_requirements('requirements/test.txt'),
54 entry_points={
55 'xblock.v1': [
56 'openassessment = openassessment.xblock.openassessmentblock:OpenAssessmentBlock',
57 ]
58 },
59 )
60
[end of setup.py]
[start of openassessment/fileupload/backends/filesystem.py]
1 from django.conf import settings
2 import django.core.cache
3 from django.core.urlresolvers import reverse_lazy
4 from django.utils.encoding import smart_text
5
6 from .. import exceptions
7 from .base import BaseBackend
8
9
10 class Backend(BaseBackend):
11 """
12 Upload openassessment student files to a local filesystem. Note
13 that in order to use this file storage backend, you need to include the
14 urls from openassessment.fileupload in your urls.py file:
15
16 E.g:
17 url(r'^openassessment/storage', include(openassessment.fileupload.urls)),
18
19 The ORA2_FILEUPLOAD_CACHE_NAME setting will also have to be defined for the
20 name of the django.core.cache instance which will maintain the list of
21 active storage URLs.
22
23 E.g:
24
25 ORA2_FILEUPLOAD_CACHE_NAME = "ora2-storage"
26 CACHES = {
27 ...
28 'ora2-storage': {
29 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
30 ...
31 },
32 ...
33 }
34 """
35
36 def get_upload_url(self, key, content_type):
37 make_upload_url_available(self._get_key_name(key), self.UPLOAD_URL_TIMEOUT)
38 return self._get_url(key)
39
40 def get_download_url(self, key):
41 make_download_url_available(self._get_key_name(key), self.DOWNLOAD_URL_TIMEOUT)
42 return self._get_url(key)
43
44 def remove_file(self, key):
45 from openassessment.fileupload.views_filesystem import safe_remove, get_file_path
46 return safe_remove(get_file_path(self._get_key_name(key)))
47
48 def _get_url(self, key):
49 key_name = self._get_key_name(key)
50 url = reverse_lazy("openassessment-filesystem-storage", kwargs={'key': key_name})
51 return url
52
53
54 def get_cache():
55 """
56 Returns a django.core.cache instance in charge of maintaining the
57 authorized upload and download URL.
58
59 Raises:
60 FileUploadInternalError if the cache name setting is not defined.
61 InvalidCacheBackendError if the corresponding cache backend has not
62 been configured.
63 """
64 cache_name = getattr(settings, "ORA2_FILEUPLOAD_CACHE_NAME", None)
65 if cache_name is None:
66 raise exceptions.FileUploadInternalError("Undefined cache backend for file upload")
67 return django.core.cache.caches[cache_name]
68
69
70 def make_upload_url_available(url_key_name, timeout):
71 """
72 Authorize an upload URL.
73
74 Arguments:
75 url_key_name (str): key that uniquely identifies the upload url
76 timeout (int): time in seconds before the url expires
77 """
78 get_cache().set(
79 smart_text(get_upload_cache_key(url_key_name)),
80 1, timeout
81 )
82
83
84 def make_download_url_available(url_key_name, timeout):
85 """
86 Authorize a download URL.
87
88 Arguments:
89 url_key_name (str): key that uniquely identifies the url
90 timeout (int): time in seconds before the url expires
91 """
92 get_cache().set(
93 smart_text(get_download_cache_key(url_key_name)),
94 1, timeout
95 )
96
97
98 def is_upload_url_available(url_key_name):
99 """
100 Return True if the corresponding upload URL is available.
101 """
102 return get_cache().get(smart_text(get_upload_cache_key(url_key_name))) is not None
103
104
105 def is_download_url_available(url_key_name):
106 """
107 Return True if the corresponding download URL is available.
108 """
109 return get_cache().get(smart_text(get_download_cache_key(url_key_name))) is not None
110
111
112 def get_upload_cache_key(url_key_name):
113 return "upload/" + url_key_name
114
115
116 def get_download_cache_key(url_key_name):
117 return "download/" + url_key_name
118
[end of openassessment/fileupload/backends/filesystem.py]
[start of openassessment/fileupload/backends/django_storage.py]
1 import os
2
3 from django.core.files.base import ContentFile
4 from django.core.files.storage import default_storage
5 from django.core.urlresolvers import reverse_lazy
6
7 from .base import BaseBackend
8
9
10 class Backend(BaseBackend):
11 """
12 Manage openassessment student files uploaded using the default django storage settings.
13 """
14 def get_upload_url(self, key, content_type):
15 """
16 Return the URL pointing to the ORA2 django storage upload endpoint.
17 """
18 return reverse_lazy("openassessment-django-storage", kwargs={'key': key})
19
20 def get_download_url(self, key):
21 """
22 Return the django storage download URL for the given key.
23
24 Returns None if no file exists at that location.
25 """
26 path = self._get_file_path(key)
27 if default_storage.exists(path):
28 return default_storage.url(path)
29 return None
30
31 def upload_file(self, key, content):
32 """
33 Upload the given file content to the keyed location.
34 """
35 path = self._get_file_path(key)
36 saved_path = default_storage.save(path, ContentFile(content))
37 return saved_path
38
39 def remove_file(self, key):
40 """
41 Remove the file at the given keyed location.
42
43 Returns True if the file exists, and was removed.
44 Returns False if the file does not exist, and so was not removed.
45 """
46 path = self._get_file_path(key)
47 if default_storage.exists(path):
48 default_storage.delete(path)
49 return True
50 return False
51
52 def _get_file_name(self, key):
53 """
54 Returns the name of the keyed file.
55
56 Since the backend storage may be folders, or it may use pseudo-folders,
57 make sure the filename doesn't include any path separators.
58 """
59 file_name = key.replace("..", "").strip("/ ")
60 file_name = file_name.replace(os.sep, "_")
61 return file_name
62
63 def _get_file_path(self, key):
64 """
65 Returns the path to the keyed file, including the storage prefix.
66 """
67 path = self._get_key_name(self._get_file_name(key))
68 return path
69
[end of openassessment/fileupload/backends/django_storage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openassessment/fileupload/backends/django_storage.py b/openassessment/fileupload/backends/django_storage.py
--- a/openassessment/fileupload/backends/django_storage.py
+++ b/openassessment/fileupload/backends/django_storage.py
@@ -2,7 +2,7 @@
from django.core.files.base import ContentFile
from django.core.files.storage import default_storage
-from django.core.urlresolvers import reverse_lazy
+from django.core.urlresolvers import reverse
from .base import BaseBackend
@@ -15,7 +15,7 @@
"""
Return the URL pointing to the ORA2 django storage upload endpoint.
"""
- return reverse_lazy("openassessment-django-storage", kwargs={'key': key})
+ return reverse("openassessment-django-storage", kwargs={'key': key})
def get_download_url(self, key):
"""
diff --git a/openassessment/fileupload/backends/filesystem.py b/openassessment/fileupload/backends/filesystem.py
--- a/openassessment/fileupload/backends/filesystem.py
+++ b/openassessment/fileupload/backends/filesystem.py
@@ -1,6 +1,6 @@
from django.conf import settings
import django.core.cache
-from django.core.urlresolvers import reverse_lazy
+from django.core.urlresolvers import reverse
from django.utils.encoding import smart_text
from .. import exceptions
@@ -47,7 +47,7 @@
def _get_url(self, key):
key_name = self._get_key_name(key)
- url = reverse_lazy("openassessment-filesystem-storage", kwargs={'key': key_name})
+ url = reverse("openassessment-filesystem-storage", kwargs={'key': key_name})
return url
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -34,7 +34,7 @@
setup(
name='ora2',
- version='2.2.1',
+ version='2.2.2',
author='edX',
url='http://github.com/edx/edx-ora2',
description='edx-ora2',
|
{"golden_diff": "diff --git a/openassessment/fileupload/backends/django_storage.py b/openassessment/fileupload/backends/django_storage.py\n--- a/openassessment/fileupload/backends/django_storage.py\n+++ b/openassessment/fileupload/backends/django_storage.py\n@@ -2,7 +2,7 @@\n \n from django.core.files.base import ContentFile\n from django.core.files.storage import default_storage\n-from django.core.urlresolvers import reverse_lazy\n+from django.core.urlresolvers import reverse\n \n from .base import BaseBackend\n \n@@ -15,7 +15,7 @@\n \"\"\"\n Return the URL pointing to the ORA2 django storage upload endpoint.\n \"\"\"\n- return reverse_lazy(\"openassessment-django-storage\", kwargs={'key': key})\n+ return reverse(\"openassessment-django-storage\", kwargs={'key': key})\n \n def get_download_url(self, key):\n \"\"\"\ndiff --git a/openassessment/fileupload/backends/filesystem.py b/openassessment/fileupload/backends/filesystem.py\n--- a/openassessment/fileupload/backends/filesystem.py\n+++ b/openassessment/fileupload/backends/filesystem.py\n@@ -1,6 +1,6 @@\n from django.conf import settings\n import django.core.cache\n-from django.core.urlresolvers import reverse_lazy\n+from django.core.urlresolvers import reverse\n from django.utils.encoding import smart_text\n \n from .. import exceptions\n@@ -47,7 +47,7 @@\n \n def _get_url(self, key):\n key_name = self._get_key_name(key)\n- url = reverse_lazy(\"openassessment-filesystem-storage\", kwargs={'key': key_name})\n+ url = reverse(\"openassessment-filesystem-storage\", kwargs={'key': key_name})\n return url\n \n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -34,7 +34,7 @@\n \n setup(\n name='ora2',\n- version='2.2.1',\n+ version='2.2.2',\n author='edX',\n url='http://github.com/edx/edx-ora2',\n description='edx-ora2',\n", "issue": "\"TypeError: u'<url>' is not JSON serializable\" on upload to backends other than S3\nFile upload to \"django\" and \"filesystem\" backends is currently broken. 
Each attempt to generate an upload URL fails with the following error:\r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/usr/local/lib/python2.7/dist-packages/django/core/handlers/exception.py\", line 41, in inner \r\n response = get_response(request) \r\n File \"/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py\", line 249, in _legacy_get_response \r\n response = self._get_response(request) \r\n File \"/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py\", line 187, in _get_response \r\n response = self.process_exception_by_middleware(e, request) \r\n File \"/usr/local/lib/python2.7/dist-packages/django/core/handlers/base.py\", line 185, in _get_response \r\n response = wrapped_callback(request, *callback_args, **callback_kwargs) \r\n File \"/usr/local/lib/python2.7/dist-packages/django/utils/decorators.py\", line 185, in inner \r\n return func(*args, **kwargs) \r\n File \"/openedx/edx-platform/lms/djangoapps/courseware/module_render.py\", line 962, in handle_xblock_callback \r\n return _invoke_xblock_handler(request, course_id, usage_id, handler, suffix, course=course) \r\n File \"/openedx/edx-platform/lms/djangoapps/courseware/module_render.py\", line 1068, in _invoke_xblock_handler \r\n resp = instance.handle(handler, req, suffix) \r\n File \"/usr/local/lib/python2.7/dist-packages/xblock/mixins.py\", line 90, in handle \r\n return self.runtime.handle(self, handler_name, request, suffix) \r\n File \"/openedx/edx-platform/common/lib/xmodule/xmodule/x_module.py\", line 1347, in handle \r\n return super(MetricsMixin, self).handle(block, handler_name, request, suffix=suffix) \r\n File \"/usr/local/lib/python2.7/dist-packages/xblock/runtime.py\", line 1037, in handle \r\n results = handler(request, suffix) \r\n File \"/usr/local/lib/python2.7/dist-packages/xblock/mixins.py\", line 74, in wrapper \r\n response = json.dumps(response) \r\n File \"/usr/lib/python2.7/json/__init__.py\", line 244, in dumps \r\n return _default_encoder.encode(obj) \r\n File \"/usr/lib/python2.7/json/encoder.py\", line 207, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/usr/lib/python2.7/json/encoder.py\", line 270, in iterencode\r\n return _iterencode(o, 0)\r\n File \"/usr/lib/python2.7/json/encoder.py\", line 184, in default\r\n raise TypeError(repr(o) + \" is not JSON serializable\")\r\nTypeError: u'/openassessment/fileupload/django/4dd623a62f4e626d827277cd3fedaf45/course-v1:org+video1+01/block-v1:org+video1+01+type@openassessment+block@25d85\r\n51f741342ac919590db2e72091b/' is not JSON serializable\r\n```\r\n\r\nThis error is due to the fact that urls returned by 'django' and 'filesystem' backends are computed with `django.core.urlresolvers.reverse_lazy`. Thus, they are not unicode, but `django.utils.functional.__proxy__` objects. The TypeError is raised when calling `json.dumps()` on `{'success': True, 'url': url_object}` in `xblock.mixins.HandlersMixin.json_handler`.\r\n\r\nI can easily solve the problem by replacing `reverse_lazy` by `reverse`. However, I wonder what was the original reason to use `reverse_lazy`? The change was introduced in commit 3e13d27df by @efischer19.\r\n\r\nPython tests pass successfully if I replace `reverse_lazy` by `reverse`.\r\n\r\nI believe both ORA2 backends are broken for the current Open edX Hawthorn release. 
\r\n \r\n(this error was first discovered [here](https://github.com/regisb/openedx-docker/issues/97))\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages, setup\n\n\ndef is_requirement(line):\n \"\"\"\n Return True if the requirement line is a package requirement;\n that is, it is not blank, a comment, or editable.\n \"\"\"\n # Remove whitespace at the start/end of the line\n line = line.strip()\n\n # Skip blank lines, comments, and editable installs\n return not (\n line == '' or\n line.startswith('-r') or\n line.startswith('#') or\n line.startswith('-e') or\n line.startswith('git+')\n )\n\ndef load_requirements(*requirements_paths):\n \"\"\"\n Load all requirements from the specified requirements files.\n Returns a list of requirement strings.\n \"\"\"\n requirements = set()\n for path in requirements_paths:\n requirements.update(\n line.strip() for line in open(path).readlines()\n if is_requirement(line)\n )\n return list(requirements)\n\nsetup(\n name='ora2',\n version='2.2.1',\n author='edX',\n url='http://github.com/edx/edx-ora2',\n description='edx-ora2',\n license='AGPL',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Framework :: Django :: 1.11',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n ],\n packages=find_packages(include=['openassessment*'], exclude=['*.test', '*.tests']),\n include_package_data=True,\n install_requires=load_requirements('requirements/base.txt', \"requirements/django.txt\"),\n tests_require=load_requirements('requirements/test.txt'),\n entry_points={\n 'xblock.v1': [\n 'openassessment = openassessment.xblock.openassessmentblock:OpenAssessmentBlock',\n ]\n },\n)\n", "path": "setup.py"}, {"content": "from django.conf import settings\nimport django.core.cache\nfrom django.core.urlresolvers import reverse_lazy\nfrom django.utils.encoding import smart_text\n\nfrom .. import exceptions\nfrom .base import BaseBackend\n\n\nclass Backend(BaseBackend):\n \"\"\"\n Upload openassessment student files to a local filesystem. 
Note\n that in order to use this file storage backend, you need to include the\n urls from openassessment.fileupload in your urls.py file:\n\n E.g:\n url(r'^openassessment/storage', include(openassessment.fileupload.urls)),\n\n The ORA2_FILEUPLOAD_CACHE_NAME setting will also have to be defined for the\n name of the django.core.cache instance which will maintain the list of\n active storage URLs.\n\n E.g:\n\n ORA2_FILEUPLOAD_CACHE_NAME = \"ora2-storage\"\n CACHES = {\n ...\n 'ora2-storage': {\n 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n ...\n },\n ...\n }\n \"\"\"\n\n def get_upload_url(self, key, content_type):\n make_upload_url_available(self._get_key_name(key), self.UPLOAD_URL_TIMEOUT)\n return self._get_url(key)\n\n def get_download_url(self, key):\n make_download_url_available(self._get_key_name(key), self.DOWNLOAD_URL_TIMEOUT)\n return self._get_url(key)\n\n def remove_file(self, key):\n from openassessment.fileupload.views_filesystem import safe_remove, get_file_path\n return safe_remove(get_file_path(self._get_key_name(key)))\n\n def _get_url(self, key):\n key_name = self._get_key_name(key)\n url = reverse_lazy(\"openassessment-filesystem-storage\", kwargs={'key': key_name})\n return url\n\n\ndef get_cache():\n \"\"\"\n Returns a django.core.cache instance in charge of maintaining the\n authorized upload and download URL.\n\n Raises:\n FileUploadInternalError if the cache name setting is not defined.\n InvalidCacheBackendError if the corresponding cache backend has not\n been configured.\n \"\"\"\n cache_name = getattr(settings, \"ORA2_FILEUPLOAD_CACHE_NAME\", None)\n if cache_name is None:\n raise exceptions.FileUploadInternalError(\"Undefined cache backend for file upload\")\n return django.core.cache.caches[cache_name]\n\n\ndef make_upload_url_available(url_key_name, timeout):\n \"\"\"\n Authorize an upload URL.\n\n Arguments:\n url_key_name (str): key that uniquely identifies the upload url\n timeout (int): time in seconds before the url expires\n \"\"\"\n get_cache().set(\n smart_text(get_upload_cache_key(url_key_name)),\n 1, timeout\n )\n\n\ndef make_download_url_available(url_key_name, timeout):\n \"\"\"\n Authorize a download URL.\n\n Arguments:\n url_key_name (str): key that uniquely identifies the url\n timeout (int): time in seconds before the url expires\n \"\"\"\n get_cache().set(\n smart_text(get_download_cache_key(url_key_name)),\n 1, timeout\n )\n\n\ndef is_upload_url_available(url_key_name):\n \"\"\"\n Return True if the corresponding upload URL is available.\n \"\"\"\n return get_cache().get(smart_text(get_upload_cache_key(url_key_name))) is not None\n\n\ndef is_download_url_available(url_key_name):\n \"\"\"\n Return True if the corresponding download URL is available.\n \"\"\"\n return get_cache().get(smart_text(get_download_cache_key(url_key_name))) is not None\n\n\ndef get_upload_cache_key(url_key_name):\n return \"upload/\" + url_key_name\n\n\ndef get_download_cache_key(url_key_name):\n return \"download/\" + url_key_name\n", "path": "openassessment/fileupload/backends/filesystem.py"}, {"content": "import os\n\nfrom django.core.files.base import ContentFile\nfrom django.core.files.storage import default_storage\nfrom django.core.urlresolvers import reverse_lazy\n\nfrom .base import BaseBackend\n\n\nclass Backend(BaseBackend):\n \"\"\"\n Manage openassessment student files uploaded using the default django storage settings.\n \"\"\"\n def get_upload_url(self, key, content_type):\n \"\"\"\n Return the URL pointing to the ORA2 django 
storage upload endpoint.\n \"\"\"\n return reverse_lazy(\"openassessment-django-storage\", kwargs={'key': key})\n\n def get_download_url(self, key):\n \"\"\"\n Return the django storage download URL for the given key.\n\n Returns None if no file exists at that location.\n \"\"\"\n path = self._get_file_path(key)\n if default_storage.exists(path):\n return default_storage.url(path)\n return None\n\n def upload_file(self, key, content):\n \"\"\"\n Upload the given file content to the keyed location.\n \"\"\"\n path = self._get_file_path(key)\n saved_path = default_storage.save(path, ContentFile(content))\n return saved_path\n\n def remove_file(self, key):\n \"\"\"\n Remove the file at the given keyed location.\n\n Returns True if the file exists, and was removed.\n Returns False if the file does not exist, and so was not removed.\n \"\"\"\n path = self._get_file_path(key)\n if default_storage.exists(path):\n default_storage.delete(path)\n return True\n return False\n\n def _get_file_name(self, key):\n \"\"\"\n Returns the name of the keyed file.\n\n Since the backend storage may be folders, or it may use pseudo-folders,\n make sure the filename doesn't include any path separators.\n \"\"\"\n file_name = key.replace(\"..\", \"\").strip(\"/ \")\n file_name = file_name.replace(os.sep, \"_\")\n return file_name\n\n def _get_file_path(self, key):\n \"\"\"\n Returns the path to the keyed file, including the storage prefix.\n \"\"\"\n path = self._get_key_name(self._get_file_name(key))\n return path\n", "path": "openassessment/fileupload/backends/django_storage.py"}]}
| 3,729 | 455 |
gh_patches_debug_18528
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-2077
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSRF
[screenshot from the original issue omitted]
</issue>
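The report is only a title and a screenshot, but CSRF failures against deployments of this kind usually come down to the public origin missing from `CSRF_TRUSTED_ORIGINS`, which the settings below only hard-code for local development. Purely as an illustration (not necessarily the project's actual fix), an environment-driven variant using the `env` helper defined in `base.py` could look like:

```python
# Sketch only: read the trusted origins from an environment variable instead of
# hard-coding them. The variable name and default are assumptions, not repository code.
CSRF_TRUSTED_ORIGINS = env.list("CSRF_TRUSTED_ORIGINS", [])
```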
<code>
[start of backend/config/settings/development.py]
1 from .base import * # noqa: F403
2
3 MIDDLEWARE.append("api.middleware.RangesMiddleware") # noqa: F405
4 CORS_ORIGIN_WHITELIST = ("http://127.0.0.1:3000", "http://0.0.0.0:3000", "http://localhost:3000")
5 CSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST
6 # LOGGING = {
7 # 'version': 1,
8 # 'handlers': {
9 # 'console': {
10 # 'level': 'DEBUG',
11 # 'class': 'logging.StreamHandler',
12 # }
13 # },
14 # 'loggers': {
15 # 'django.db.backends': {
16 # 'level': 'DEBUG',
17 # 'handlers': ['console'],
18 # },
19 # }
20 # }
21
[end of backend/config/settings/development.py]
[start of backend/config/settings/base.py]
1 """
2 Django settings for app project.
3
4 For more information on this file, see
5 https://docs.djangoproject.com/en/2.0/topics/settings/
6
7 For the full list of settings and their values, see
8 https://docs.djangoproject.com/en/2.0/ref/settings/
9
10 Any setting that is configured via an environment variable may
11 also be set in a `.env` file in the project base directory.
12 """
13 from os import path
14
15 import dj_database_url
16 from environs import Env, EnvError
17 from furl import furl
18
19 # Build paths inside the project like this: path.join(BASE_DIR, ...)
20 BASE_DIR = path.dirname(path.dirname(path.dirname(path.abspath(__file__))))
21
22 env = Env()
23 env.read_env(path.join(BASE_DIR, ".env"), recurse=False)
24
25 # Quick-start development settings - unsuitable for production
26 # See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
27 # SECURITY WARNING: keep the secret key used in production secret!
28 SECRET_KEY = env("SECRET_KEY", "v8sk33sy82!uw3ty=!jjv5vp7=s2phrzw(m(hrn^f7e_#1h2al")
29
30 # SECURITY WARNING: don't run with debug turned on in production!
31 DEBUG = env.bool("DEBUG", True)
32
33 # Application definition
34 INSTALLED_APPS = [
35 "whitenoise.runserver_nostatic",
36 "django.contrib.admin",
37 "django.contrib.auth",
38 "django.contrib.contenttypes",
39 "django.contrib.sessions",
40 "django.contrib.messages",
41 "django.contrib.staticfiles",
42 "api",
43 "roles",
44 "projects",
45 "metrics",
46 "users",
47 "data_import",
48 "data_export",
49 "auto_labeling",
50 "labels",
51 "label_types",
52 "examples",
53 "rest_framework",
54 "rest_framework.authtoken",
55 "django_filters",
56 "polymorphic",
57 "corsheaders",
58 "drf_yasg",
59 "allauth",
60 "allauth.account",
61 "allauth.socialaccount",
62 "dj_rest_auth",
63 "dj_rest_auth.registration",
64 "django_celery_results",
65 "django_drf_filepond",
66 "health_check",
67 "health_check.cache",
68 "health_check.storage",
69 "health_check.contrib.migrations",
70 "health_check.contrib.celery",
71 "django_cleanup",
72 ]
73
74
75 MIDDLEWARE = [
76 "django.middleware.security.SecurityMiddleware",
77 "whitenoise.middleware.WhiteNoiseMiddleware",
78 "django.contrib.sessions.middleware.SessionMiddleware",
79 "django.middleware.common.CommonMiddleware",
80 "django.middleware.csrf.CsrfViewMiddleware",
81 "django.contrib.auth.middleware.AuthenticationMiddleware",
82 "django.contrib.messages.middleware.MessageMiddleware",
83 "django.middleware.clickjacking.XFrameOptionsMiddleware",
84 "corsheaders.middleware.CorsMiddleware",
85 ]
86
87
88 ROOT_URLCONF = "config.urls"
89 WSGI_APPLICATION = "config.wsgi.application"
90
91 # Django templates
92 TEMPLATES = [
93 {
94 "BACKEND": "django.template.backends.django.DjangoTemplates",
95 "DIRS": [path.join(BASE_DIR, "client/dist")],
96 "APP_DIRS": True,
97 "OPTIONS": {
98 "context_processors": [
99 "django.template.context_processors.debug",
100 "django.template.context_processors.request",
101 "django.contrib.auth.context_processors.auth",
102 "django.contrib.messages.context_processors.messages",
103 ],
104 },
105 },
106 ]
107
108 # Static files (CSS, JavaScript, Images)
109 # https://docs.djangoproject.com/en/2.0/howto/static-files/
110 STATIC_URL = "/static/"
111 STATIC_ROOT = path.join(BASE_DIR, "staticfiles")
112 STATICFILES_DIRS = [
113 path.join(BASE_DIR, "client/dist/static"),
114 ]
115 # STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
116 STATICFILES_STORAGE = "whitenoise.storage.CompressedStaticFilesStorage"
117
118 # Auth settings
119 AUTHENTICATION_BACKENDS = [
120 "django.contrib.auth.backends.ModelBackend",
121 ]
122 HEADER_AUTH_USER_NAME = env("HEADER_AUTH_USER_NAME", "")
123 HEADER_AUTH_USER_GROUPS = env("HEADER_AUTH_USER_GROUPS", "")
124 HEADER_AUTH_ADMIN_GROUP_NAME = env("HEADER_AUTH_ADMIN_GROUP_NAME", "")
125 HEADER_AUTH_GROUPS_SEPERATOR = env("HEADER_AUTH_GROUPS_SEPERATOR", default=",")
126 if HEADER_AUTH_USER_NAME and HEADER_AUTH_USER_GROUPS and HEADER_AUTH_ADMIN_GROUP_NAME:
127 MIDDLEWARE.append("api.middleware.HeaderAuthMiddleware")
128 AUTHENTICATION_BACKENDS.append("django.contrib.auth.backends.RemoteUserBackend")
129
130 # Role settings
131 ROLE_PROJECT_ADMIN = env("ROLE_PROJECT_ADMIN", "project_admin")
132 ROLE_ANNOTATOR = env("ROLE_ANNOTATOR", "annotator")
133 ROLE_ANNOTATION_APPROVER = env("ROLE_ANNOTATION_APPROVER", "annotation_approver")
134
135 # Password validation
136 # https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
137 AUTH_PASSWORD_VALIDATORS = [
138 {
139 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
140 },
141 {
142 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
143 },
144 {
145 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
146 },
147 {
148 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
149 },
150 ]
151
152 REST_FRAMEWORK = {
153 # Use Django's standard `django.contrib.auth` permissions,
154 # or allow read-only access for unauthenticated users.
155 "DEFAULT_PERMISSION_CLASSES": [
156 "rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly",
157 "rest_framework.permissions.IsAuthenticated",
158 ],
159 "DEFAULT_AUTHENTICATION_CLASSES": (
160 "rest_framework.authentication.SessionAuthentication",
161 "rest_framework.authentication.TokenAuthentication",
162 ),
163 "DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.LimitOffsetPagination",
164 "PAGE_SIZE": env.int("DOCCANO_PAGE_SIZE", default=5),
165 "DEFAULT_FILTER_BACKENDS": ("django_filters.rest_framework.DjangoFilterBackend",),
166 "SEARCH_PARAM": "q",
167 "DEFAULT_RENDERER_CLASSES": (
168 "rest_framework.renderers.JSONRenderer",
169 "rest_framework.renderers.BrowsableAPIRenderer",
170 "rest_framework_xml.renderers.XMLRenderer",
171 ),
172 }
173
174 # Internationalization
175 # https://docs.djangoproject.com/en/2.0/topics/i18n/
176 LANGUAGE_CODE = "en-us"
177 TIME_ZONE = "UTC"
178 USE_I18N = True
179 USE_L10N = True
180 USE_TZ = True
181
182 # Testing
183 TEST_RUNNER = "xmlrunner.extra.djangotestrunner.XMLTestRunner"
184 TEST_OUTPUT_DIR = path.join(BASE_DIR, "junitxml")
185
186 LOGIN_URL = "/login/"
187 LOGIN_REDIRECT_URL = "/projects/"
188 LOGOUT_REDIRECT_URL = "/"
189
190 # Database
191 # https://docs.djangoproject.com/en/2.0/ref/settings/#databases
192 DATABASES = {
193 "default": {
194 "ENGINE": "django.db.backends.sqlite3",
195 "NAME": path.join(BASE_DIR, "db.sqlite3"),
196 }
197 }
198 # Change 'default' database configuration with $DATABASE_URL.
199 DATABASES["default"].update(
200 dj_database_url.config(
201 env="DATABASE_URL",
202 conn_max_age=env.int("DATABASE_CONN_MAX_AGE", 500),
203 ssl_require="sslmode" not in furl(env("DATABASE_URL", "")).args,
204 )
205 )
206
207 # work-around for dj-database-url: explicitly disable ssl for sqlite
208 if DATABASES["default"].get("ENGINE") == "django.db.backends.sqlite3":
209 DATABASES["default"].get("OPTIONS", {}).pop("sslmode", None)
210
211 # work-around for dj-database-url: patch ssl for mysql
212 if DATABASES["default"].get("ENGINE") == "django.db.backends.mysql":
213 DATABASES["default"].get("OPTIONS", {}).pop("sslmode", None)
214 if env("MYSQL_SSL_CA", None):
215 DATABASES["default"].setdefault("OPTIONS", {}).setdefault("ssl", {}).setdefault("ca", env("MYSQL_SSL_CA", None))
216
217 # default to a sensible modern driver for Azure SQL
218 if DATABASES["default"].get("ENGINE") == "sql_server.pyodbc":
219 DATABASES["default"].setdefault("OPTIONS", {}).setdefault("driver", "ODBC Driver 17 for SQL Server")
220
221
222 # Sessions and CSRF
223 # Honor the 'X-Forwarded-Proto' header for request.is_secure()
224 SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
225 SESSION_COOKIE_SECURE = env.bool("SESSION_COOKIE_SECURE", False)
226 CSRF_COOKIE_SECURE = env.bool("CSRF_COOKIE_SECURE", False)
227 CSRF_TRUSTED_ORIGINS = env.list("CSRF_TRUSTED_ORIGINS", [])
228
229 # Allow all host headers
230 ALLOWED_HOSTS = ["*"]
231
232 if DEBUG:
233 CORS_ORIGIN_WHITELIST = ("http://127.0.0.1:3000", "http://0.0.0.0:3000", "http://localhost:3000")
234 CSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST
235
236 # Batch size for importing data
237 IMPORT_BATCH_SIZE = env.int("IMPORT_BATCH_SIZE", 1000)
238
239 # Necessary for email verification of new accounts
240 EMAIL_USE_TLS = env.bool("EMAIL_USE_TLS", False)
241 EMAIL_HOST = env("EMAIL_HOST", None)
242 EMAIL_HOST_USER = env("EMAIL_HOST_USER", None)
243 EMAIL_HOST_PASSWORD = env("EMAIL_HOST_PASSWORD", None)
244 EMAIL_PORT = env.int("EMAIL_PORT", 587)
245 DEFAULT_FROM_EMAIL = env("DEFAULT_FROM_EMAIL", "webmaster@localhost")
246 if not EMAIL_HOST:
247 EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
248
249
250 # User media files
251 MEDIA_ROOT = env("MEDIA_ROOT", path.join(BASE_DIR, "media"))
252 MEDIA_URL = "/media/"
253
254 # Filepond settings
255 DJANGO_DRF_FILEPOND_UPLOAD_TMP = path.join(BASE_DIR, "filepond-temp-uploads")
256 DJANGO_DRF_FILEPOND_FILE_STORE_PATH = MEDIA_ROOT
257
258 # File upload setting
259 MAX_UPLOAD_SIZE = env.int("MAX_UPLOAD_SIZE", pow(1024, 3)) # default: 1GB per a file
260 ENABLE_FILE_TYPE_CHECK = env.bool("ENABLE_FILE_TYPE_CHECK", False)
261
262 # Celery settings
263 DJANGO_CELERY_RESULTS_TASK_ID_MAX_LENGTH = 191
264 CELERY_RESULT_BACKEND = "django-db"
265 try:
266 CELERY_BROKER_URL = env("CELERY_BROKER_URL")
267 except EnvError:
268 try:
269 # quickfix for Heroku.
270 # See https://github.com/doccano/doccano/issues/1327.
271 uri = env("DATABASE_URL")
272 if uri.startswith("postgres://"):
273 uri = uri.replace("postgres://", "postgresql://", 1)
274 CELERY_BROKER_URL = "sqla+{}".format(uri)
275 except EnvError:
276 CELERY_BROKER_URL = "sqla+sqlite:///{}".format(DATABASES["default"]["NAME"])
277 CELERY_ACCEPT_CONTENT = ["application/json"]
278 CELERY_TASK_SERIALIZER = "json"
279 CELERY_RESULT_SERIALIZER = "json"
280
281 DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
282
[end of backend/config/settings/base.py]
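For orientation, the Celery block near the end of `base.py` falls back from an explicit `CELERY_BROKER_URL` to a SQLAlchemy broker derived from `DATABASE_URL`, and finally to SQLite. A small illustrative helper (not part of the settings file; example values are made up) that mirrors that chain:

```python
def resolve_broker_url(celery_broker_url=None, database_url=None, sqlite_path="db.sqlite3"):
    """Illustrative only: mirrors the try/except fallback chain in base.py."""
    if celery_broker_url:
        return celery_broker_url
    if database_url:
        # Same Heroku quickfix as in base.py: SQLAlchemy expects "postgresql://".
        if database_url.startswith("postgres://"):
            database_url = database_url.replace("postgres://", "postgresql://", 1)
        return "sqla+{}".format(database_url)
    return "sqla+sqlite:///{}".format(sqlite_path)


print(resolve_broker_url(database_url="postgres://user:pw@db/doccano"))
# sqla+postgresql://user:pw@db/doccano
```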
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/config/settings/base.py b/backend/config/settings/base.py
--- a/backend/config/settings/base.py
+++ b/backend/config/settings/base.py
@@ -230,8 +230,9 @@
ALLOWED_HOSTS = ["*"]
if DEBUG:
- CORS_ORIGIN_WHITELIST = ("http://127.0.0.1:3000", "http://0.0.0.0:3000", "http://localhost:3000")
- CSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST
+ CORS_ORIGIN_ALLOW_ALL = True
+ CSRF_TRUSTED_ORIGINS = ["http://127.0.0.1:3000", "http://0.0.0.0:3000", "http://localhost:3000"]
+ CSRF_TRUSTED_ORIGINS += env.list("CSRF_TRUSTED_ORIGINS", [])
# Batch size for importing data
IMPORT_BATCH_SIZE = env.int("IMPORT_BATCH_SIZE", 1000)
diff --git a/backend/config/settings/development.py b/backend/config/settings/development.py
--- a/backend/config/settings/development.py
+++ b/backend/config/settings/development.py
@@ -1,8 +1,6 @@
from .base import * # noqa: F403
MIDDLEWARE.append("api.middleware.RangesMiddleware") # noqa: F405
-CORS_ORIGIN_WHITELIST = ("http://127.0.0.1:3000", "http://0.0.0.0:3000", "http://localhost:3000")
-CSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST
# LOGGING = {
# 'version': 1,
# 'handlers': {
|
{"golden_diff": "diff --git a/backend/config/settings/base.py b/backend/config/settings/base.py\n--- a/backend/config/settings/base.py\n+++ b/backend/config/settings/base.py\n@@ -230,8 +230,9 @@\n ALLOWED_HOSTS = [\"*\"]\n \n if DEBUG:\n- CORS_ORIGIN_WHITELIST = (\"http://127.0.0.1:3000\", \"http://0.0.0.0:3000\", \"http://localhost:3000\")\n- CSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST\n+ CORS_ORIGIN_ALLOW_ALL = True\n+ CSRF_TRUSTED_ORIGINS = [\"http://127.0.0.1:3000\", \"http://0.0.0.0:3000\", \"http://localhost:3000\"]\n+ CSRF_TRUSTED_ORIGINS += env.list(\"CSRF_TRUSTED_ORIGINS\", [])\n \n # Batch size for importing data\n IMPORT_BATCH_SIZE = env.int(\"IMPORT_BATCH_SIZE\", 1000)\ndiff --git a/backend/config/settings/development.py b/backend/config/settings/development.py\n--- a/backend/config/settings/development.py\n+++ b/backend/config/settings/development.py\n@@ -1,8 +1,6 @@\n from .base import * # noqa: F403\n \n MIDDLEWARE.append(\"api.middleware.RangesMiddleware\") # noqa: F405\n-CORS_ORIGIN_WHITELIST = (\"http://127.0.0.1:3000\", \"http://0.0.0.0:3000\", \"http://localhost:3000\")\n-CSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST\n # LOGGING = {\n # 'version': 1,\n # 'handlers': {\n", "issue": "CSRF\n\r\n\n", "before_files": [{"content": "from .base import * # noqa: F403\n\nMIDDLEWARE.append(\"api.middleware.RangesMiddleware\") # noqa: F405\nCORS_ORIGIN_WHITELIST = (\"http://127.0.0.1:3000\", \"http://0.0.0.0:3000\", \"http://localhost:3000\")\nCSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST\n# LOGGING = {\n# 'version': 1,\n# 'handlers': {\n# 'console': {\n# 'level': 'DEBUG',\n# 'class': 'logging.StreamHandler',\n# }\n# },\n# 'loggers': {\n# 'django.db.backends': {\n# 'level': 'DEBUG',\n# 'handlers': ['console'],\n# },\n# }\n# }\n", "path": "backend/config/settings/development.py"}, {"content": "\"\"\"\nDjango settings for app project.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/2.0/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/2.0/ref/settings/\n\nAny setting that is configured via an environment variable may\nalso be set in a `.env` file in the project base directory.\n\"\"\"\nfrom os import path\n\nimport dj_database_url\nfrom environs import Env, EnvError\nfrom furl import furl\n\n# Build paths inside the project like this: path.join(BASE_DIR, ...)\nBASE_DIR = path.dirname(path.dirname(path.dirname(path.abspath(__file__))))\n\nenv = Env()\nenv.read_env(path.join(BASE_DIR, \".env\"), recurse=False)\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = env(\"SECRET_KEY\", \"v8sk33sy82!uw3ty=!jjv5vp7=s2phrzw(m(hrn^f7e_#1h2al\")\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = env.bool(\"DEBUG\", True)\n\n# Application definition\nINSTALLED_APPS = [\n \"whitenoise.runserver_nostatic\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"api\",\n \"roles\",\n \"projects\",\n \"metrics\",\n \"users\",\n \"data_import\",\n \"data_export\",\n \"auto_labeling\",\n \"labels\",\n \"label_types\",\n \"examples\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"django_filters\",\n \"polymorphic\",\n \"corsheaders\",\n \"drf_yasg\",\n \"allauth\",\n 
\"allauth.account\",\n \"allauth.socialaccount\",\n \"dj_rest_auth\",\n \"dj_rest_auth.registration\",\n \"django_celery_results\",\n \"django_drf_filepond\",\n \"health_check\",\n \"health_check.cache\",\n \"health_check.storage\",\n \"health_check.contrib.migrations\",\n \"health_check.contrib.celery\",\n \"django_cleanup\",\n]\n\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"corsheaders.middleware.CorsMiddleware\",\n]\n\n\nROOT_URLCONF = \"config.urls\"\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Django templates\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [path.join(BASE_DIR, \"client/dist\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/2.0/howto/static-files/\nSTATIC_URL = \"/static/\"\nSTATIC_ROOT = path.join(BASE_DIR, \"staticfiles\")\nSTATICFILES_DIRS = [\n path.join(BASE_DIR, \"client/dist/static\"),\n]\n# STATICFILES_STORAGE = \"whitenoise.storage.CompressedManifestStaticFilesStorage\"\nSTATICFILES_STORAGE = \"whitenoise.storage.CompressedStaticFilesStorage\"\n\n# Auth settings\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n]\nHEADER_AUTH_USER_NAME = env(\"HEADER_AUTH_USER_NAME\", \"\")\nHEADER_AUTH_USER_GROUPS = env(\"HEADER_AUTH_USER_GROUPS\", \"\")\nHEADER_AUTH_ADMIN_GROUP_NAME = env(\"HEADER_AUTH_ADMIN_GROUP_NAME\", \"\")\nHEADER_AUTH_GROUPS_SEPERATOR = env(\"HEADER_AUTH_GROUPS_SEPERATOR\", default=\",\")\nif HEADER_AUTH_USER_NAME and HEADER_AUTH_USER_GROUPS and HEADER_AUTH_ADMIN_GROUP_NAME:\n MIDDLEWARE.append(\"api.middleware.HeaderAuthMiddleware\")\n AUTHENTICATION_BACKENDS.append(\"django.contrib.auth.backends.RemoteUserBackend\")\n\n# Role settings\nROLE_PROJECT_ADMIN = env(\"ROLE_PROJECT_ADMIN\", \"project_admin\")\nROLE_ANNOTATOR = env(\"ROLE_ANNOTATOR\", \"annotator\")\nROLE_ANNOTATION_APPROVER = env(\"ROLE_ANNOTATION_APPROVER\", \"annotation_approver\")\n\n# Password validation\n# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\nREST_FRAMEWORK = {\n # Use Django's standard `django.contrib.auth` permissions,\n # or allow read-only access for unauthenticated users.\n \"DEFAULT_PERMISSION_CLASSES\": [\n \"rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly\",\n \"rest_framework.permissions.IsAuthenticated\",\n ],\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n 
\"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework.authentication.TokenAuthentication\",\n ),\n \"DEFAULT_PAGINATION_CLASS\": \"rest_framework.pagination.LimitOffsetPagination\",\n \"PAGE_SIZE\": env.int(\"DOCCANO_PAGE_SIZE\", default=5),\n \"DEFAULT_FILTER_BACKENDS\": (\"django_filters.rest_framework.DjangoFilterBackend\",),\n \"SEARCH_PARAM\": \"q\",\n \"DEFAULT_RENDERER_CLASSES\": (\n \"rest_framework.renderers.JSONRenderer\",\n \"rest_framework.renderers.BrowsableAPIRenderer\",\n \"rest_framework_xml.renderers.XMLRenderer\",\n ),\n}\n\n# Internationalization\n# https://docs.djangoproject.com/en/2.0/topics/i18n/\nLANGUAGE_CODE = \"en-us\"\nTIME_ZONE = \"UTC\"\nUSE_I18N = True\nUSE_L10N = True\nUSE_TZ = True\n\n# Testing\nTEST_RUNNER = \"xmlrunner.extra.djangotestrunner.XMLTestRunner\"\nTEST_OUTPUT_DIR = path.join(BASE_DIR, \"junitxml\")\n\nLOGIN_URL = \"/login/\"\nLOGIN_REDIRECT_URL = \"/projects/\"\nLOGOUT_REDIRECT_URL = \"/\"\n\n# Database\n# https://docs.djangoproject.com/en/2.0/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.sqlite3\",\n \"NAME\": path.join(BASE_DIR, \"db.sqlite3\"),\n }\n}\n# Change 'default' database configuration with $DATABASE_URL.\nDATABASES[\"default\"].update(\n dj_database_url.config(\n env=\"DATABASE_URL\",\n conn_max_age=env.int(\"DATABASE_CONN_MAX_AGE\", 500),\n ssl_require=\"sslmode\" not in furl(env(\"DATABASE_URL\", \"\")).args,\n )\n)\n\n# work-around for dj-database-url: explicitly disable ssl for sqlite\nif DATABASES[\"default\"].get(\"ENGINE\") == \"django.db.backends.sqlite3\":\n DATABASES[\"default\"].get(\"OPTIONS\", {}).pop(\"sslmode\", None)\n\n# work-around for dj-database-url: patch ssl for mysql\nif DATABASES[\"default\"].get(\"ENGINE\") == \"django.db.backends.mysql\":\n DATABASES[\"default\"].get(\"OPTIONS\", {}).pop(\"sslmode\", None)\n if env(\"MYSQL_SSL_CA\", None):\n DATABASES[\"default\"].setdefault(\"OPTIONS\", {}).setdefault(\"ssl\", {}).setdefault(\"ca\", env(\"MYSQL_SSL_CA\", None))\n\n# default to a sensible modern driver for Azure SQL\nif DATABASES[\"default\"].get(\"ENGINE\") == \"sql_server.pyodbc\":\n DATABASES[\"default\"].setdefault(\"OPTIONS\", {}).setdefault(\"driver\", \"ODBC Driver 17 for SQL Server\")\n\n\n# Sessions and CSRF\n# Honor the 'X-Forwarded-Proto' header for request.is_secure()\nSECURE_PROXY_SSL_HEADER = (\"HTTP_X_FORWARDED_PROTO\", \"https\")\nSESSION_COOKIE_SECURE = env.bool(\"SESSION_COOKIE_SECURE\", False)\nCSRF_COOKIE_SECURE = env.bool(\"CSRF_COOKIE_SECURE\", False)\nCSRF_TRUSTED_ORIGINS = env.list(\"CSRF_TRUSTED_ORIGINS\", [])\n\n# Allow all host headers\nALLOWED_HOSTS = [\"*\"]\n\nif DEBUG:\n CORS_ORIGIN_WHITELIST = (\"http://127.0.0.1:3000\", \"http://0.0.0.0:3000\", \"http://localhost:3000\")\n CSRF_TRUSTED_ORIGINS = CORS_ORIGIN_WHITELIST\n\n# Batch size for importing data\nIMPORT_BATCH_SIZE = env.int(\"IMPORT_BATCH_SIZE\", 1000)\n\n# Necessary for email verification of new accounts\nEMAIL_USE_TLS = env.bool(\"EMAIL_USE_TLS\", False)\nEMAIL_HOST = env(\"EMAIL_HOST\", None)\nEMAIL_HOST_USER = env(\"EMAIL_HOST_USER\", None)\nEMAIL_HOST_PASSWORD = env(\"EMAIL_HOST_PASSWORD\", None)\nEMAIL_PORT = env.int(\"EMAIL_PORT\", 587)\nDEFAULT_FROM_EMAIL = env(\"DEFAULT_FROM_EMAIL\", \"webmaster@localhost\")\nif not EMAIL_HOST:\n EMAIL_BACKEND = \"django.core.mail.backends.console.EmailBackend\"\n\n\n# User media files\nMEDIA_ROOT = env(\"MEDIA_ROOT\", path.join(BASE_DIR, \"media\"))\nMEDIA_URL = \"/media/\"\n\n# Filepond 
settings\nDJANGO_DRF_FILEPOND_UPLOAD_TMP = path.join(BASE_DIR, \"filepond-temp-uploads\")\nDJANGO_DRF_FILEPOND_FILE_STORE_PATH = MEDIA_ROOT\n\n# File upload setting\nMAX_UPLOAD_SIZE = env.int(\"MAX_UPLOAD_SIZE\", pow(1024, 3)) # default: 1GB per a file\nENABLE_FILE_TYPE_CHECK = env.bool(\"ENABLE_FILE_TYPE_CHECK\", False)\n\n# Celery settings\nDJANGO_CELERY_RESULTS_TASK_ID_MAX_LENGTH = 191\nCELERY_RESULT_BACKEND = \"django-db\"\ntry:\n CELERY_BROKER_URL = env(\"CELERY_BROKER_URL\")\nexcept EnvError:\n try:\n # quickfix for Heroku.\n # See https://github.com/doccano/doccano/issues/1327.\n uri = env(\"DATABASE_URL\")\n if uri.startswith(\"postgres://\"):\n uri = uri.replace(\"postgres://\", \"postgresql://\", 1)\n CELERY_BROKER_URL = \"sqla+{}\".format(uri)\n except EnvError:\n CELERY_BROKER_URL = \"sqla+sqlite:///{}\".format(DATABASES[\"default\"][\"NAME\"])\nCELERY_ACCEPT_CONTENT = [\"application/json\"]\nCELERY_TASK_SERIALIZER = \"json\"\nCELERY_RESULT_SERIALIZER = \"json\"\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n", "path": "backend/config/settings/base.py"}]}
| 3,952 | 406 |
gh_patches_debug_23673
|
rasdani/github-patches
|
git_diff
|
buildbot__buildbot-6055
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker image for 2.4.0 contains packages from 2.3.1
The Docker image `buildbot/buildbot-master:v2.4.0` contains Python packages for buildbot `2.3.1`:
```sh
buildbot-console-view==2.3.1
buildbot-grid-view==2.3.1
buildbot-waterfall-view==2.3.1
buildbot-worker==2.3.1
buildbot-www==2.3.1
```
The `buildbot` package itself has version `2019.8.18`, which is not available from PyPI.
Even if `2019.8.18` turns out to be identical to `2.4.0`, this is somewhat problematic: Installing packages into the image with a dependency on buildbot will always uninstall the buildbot version that comes with the official image.
It would be nice if the Dockerfile could install buildbot from PyPI to ensure deterministic builds for those who need to derive their own image.
<details>
<summary>Output of pip freeze</summary>
```
$ docker run --rm buildbot/buildbot-master:v2.4.0 pip freeze
asn1crypto==0.24.0
attrs==19.1.0
autobahn==19.8.1
Automat==0.7.0
buildbot==2019.8.18
buildbot-console-view==2.3.1
buildbot-grid-view==2.3.1
buildbot-waterfall-view==2.3.1
buildbot-worker==2.3.1
buildbot-www==2.3.1
certifi==2019.6.16
cffi==1.12.3
chardet==3.0.4
constantly==15.1.0
cryptography==2.7
decorator==4.4.0
future==0.17.1
hyperlink==19.0.0
idna==2.8
incremental==17.5.0
Jinja2==2.10.1
MarkupSafe==1.1.1
pbr==5.4.2
psycopg2==2.8.3
pyasn1==0.4.6
pyasn1-modules==0.2.6
pycparser==2.19
PyHamcrest==1.9.0
PyJWT==1.7.1
pyOpenSSL==19.0.0
python-dateutil==2.8.0
PyYAML==5.1.2
requests==2.22.0
service-identity==18.1.0
six==1.12.0
SQLAlchemy==1.3.7
sqlalchemy-migrate==0.12.0
sqlparse==0.3.0
Tempita==0.5.2
Twisted==19.7.0
txaio==18.8.1
txrequests==0.9.6
urllib3==1.25.3
zope.interface==4.6.0
```
</details>
</issue>
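As an illustration (an addition, not part of the original issue), the mismatch can also be surfaced programmatically inside the image; a hypothetical check that all installed buildbot distributions share one version:

```python
# Hypothetical snippet: list buildbot packages in the image and compare versions.
import pkg_resources

versions = {
    dist.project_name: dist.version
    for dist in pkg_resources.working_set
    if dist.project_name.startswith("buildbot")
}
print(versions)  # e.g. {'buildbot': '2019.8.18', 'buildbot-www': '2.3.1', ...}
assert len(set(versions.values())) == 1, "buildbot packages are out of sync"
```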
<code>
[start of pkg/buildbot_pkg.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 # Method to add build step taken from here
17 # https://seasonofcode.com/posts/how-to-add-custom-build-steps-and-commands-to-setuppy.html
18 import datetime
19 import os
20 import re
21 import subprocess
22 import sys
23 from pkg_resources import parse_version
24 from subprocess import PIPE
25 from subprocess import STDOUT
26 from subprocess import Popen
27
28 import setuptools.command.build_py
29 import setuptools.command.egg_info
30 from setuptools import setup
31
32 import distutils.cmd # isort:skip
33
34 old_listdir = os.listdir
35
36
37 def listdir(path):
38 # patch listdir to avoid looking into node_modules
39 l = old_listdir(path)
40 if "node_modules" in l:
41 l.remove("node_modules")
42 return l
43 os.listdir = listdir
44
45
46 def check_output(cmd):
47 """Version of check_output which does not throw error"""
48 popen = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
49 out = popen.communicate()[0].strip()
50 if not isinstance(out, str):
51 out = out.decode(sys.stdout.encoding)
52 return out
53
54
55 def gitDescribeToPep440(version):
56 # git describe produce version in the form: v0.9.8-20-gf0f45ca
57 # where 20 is the number of commit since last release, and gf0f45ca is the short commit id preceded by 'g'
58 # we parse this a transform into a pep440 release version 0.9.9.dev20 (increment last digit and add dev before 20)
59
60 VERSION_MATCH = re.compile(r'(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(\.post(?P<post>\d+))?(-(?P<dev>\d+))?(-g(?P<commit>.+))?')
61 v = VERSION_MATCH.search(version)
62 if v:
63 major = int(v.group('major'))
64 minor = int(v.group('minor'))
65 patch = int(v.group('patch'))
66 if v.group('dev'):
67 patch += 1
68 dev = int(v.group('dev'))
69 return "{}.{}.{}-dev{}".format(major, minor, patch, dev)
70 if v.group('post'):
71 return "{}.{}.{}.post{}".format(major, minor, patch, v.group('post'))
72 return "{}.{}.{}".format(major, minor, patch)
73
74 return v
75
76
77 def mTimeVersion(init_file):
78 cwd = os.path.dirname(os.path.abspath(init_file))
79 m = 0
80 for root, dirs, files in os.walk(cwd):
81 for f in files:
82 m = max(os.path.getmtime(os.path.join(root, f)), m)
83 d = datetime.datetime.utcfromtimestamp(m)
84 return d.strftime("%Y.%m.%d")
85
86
87 def getVersionFromArchiveId(git_archive_id='$Format:%ct %d$'):
88 """ Extract the tag if a source is from git archive.
89
90 When source is exported via `git archive`, the git_archive_id init value is modified
91 and placeholders are expanded to the "archived" revision:
92
93 %ct: committer date, UNIX timestamp
94 %d: ref names, like the --decorate option of git-log
95
96 See man gitattributes(5) and git-log(1) (PRETTY FORMATS) for more details.
97 """
98 # mangle the magic string to make sure it is not replaced by git archive
99 if not git_archive_id.startswith('$For''mat:'):
100 # source was modified by git archive, try to parse the version from
101 # the value of git_archive_id
102
103 match = re.search(r'tag:\s*v([^,)]+)', git_archive_id)
104 if match:
105 # archived revision is tagged, use the tag
106 return gitDescribeToPep440(match.group(1))
107
108 # archived revision is not tagged, use the commit date
109 tstamp = git_archive_id.strip().split()[0]
110 d = datetime.datetime.utcfromtimestamp(int(tstamp))
111 return d.strftime('%Y.%m.%d')
112 return None
113
114
115 def getVersion(init_file):
116 """
117 Return BUILDBOT_VERSION environment variable, content of VERSION file, git
118 tag or 'latest'
119 """
120
121 try:
122 return os.environ['BUILDBOT_VERSION']
123 except KeyError:
124 pass
125
126 try:
127 cwd = os.path.dirname(os.path.abspath(init_file))
128 fn = os.path.join(cwd, 'VERSION')
129 with open(fn) as f:
130 return f.read().strip()
131 except IOError:
132 pass
133
134 version = getVersionFromArchiveId()
135 if version is not None:
136 return version
137
138 try:
139 p = Popen(['git', 'describe', '--tags', '--always'], stdout=PIPE, stderr=STDOUT, cwd=cwd)
140 out = p.communicate()[0]
141
142 if (not p.returncode) and out:
143 v = gitDescribeToPep440(str(out))
144 if v:
145 return v
146 except OSError:
147 pass
148
149 try:
150 # if we really can't find the version, we use the date of modification of the most recent file
151 # docker hub builds cannot use git describe
152 return mTimeVersion(init_file)
153 except Exception:
154 # bummer. lets report something
155 return "latest"
156
157
158 # JS build strategy:
159 #
160 # Obviously, building javascript with setuptools is not really something supported initially
161 #
162 # The goal of this hack are:
163 # - override the distutils command to insert our js build
164 # - has very small setup.py
165 #
166 # from buildbot_pkg import setup_www
167 #
168 # setup_www(
169 # ...
170 # packages=["buildbot_myplugin"]
171 # )
172 #
173 # We need to override the first command done, so that source tree is populated very soon,
174 # as well as version is found from git tree or "VERSION" file
175 #
176 # This supports following setup.py commands:
177 #
178 # - develop, via egg_info
179 # - install, via egg_info
180 # - sdist, via egg_info
181 # - bdist_wheel, via build
182 # This is why we override both egg_info and build, and the first run build
183 # the js.
184
185 class BuildJsCommand(distutils.cmd.Command):
186 """A custom command to run JS build."""
187
188 description = 'run JS build'
189 already_run = False
190
191 def initialize_options(self):
192 """Set default values for options."""
193
194 def finalize_options(self):
195 """Post-process options."""
196
197 def run(self):
198 """Run command."""
199 if self.already_run:
200 return
201 package = self.distribution.packages[0]
202 if os.path.exists("gulpfile.js") or os.path.exists("webpack.config.js"):
203 yarn_version = check_output("yarn --version")
204 assert yarn_version != "", "need nodejs and yarn installed in current PATH"
205 yarn_bin = check_output("yarn bin").strip()
206
207 commands = []
208
209 commands.append(['yarn', 'install', '--pure-lockfile'])
210
211 if os.path.exists("gulpfile.js"):
212 commands.append([os.path.join(yarn_bin, "gulp"), 'prod', '--notests'])
213 elif os.path.exists("webpack.config.js"):
214 commands.append(['yarn', 'run', 'build'])
215
216 shell = bool(os.name == 'nt')
217
218 for command in commands:
219 self.announce('Running command: {}'.format(str(" ".join(command))),
220 level=distutils.log.INFO)
221 subprocess.check_call(command, shell=shell)
222
223 self.copy_tree(os.path.join(package, 'static'), os.path.join(
224 "build", "lib", package, "static"))
225
226 with open(os.path.join("build", "lib", package, "VERSION"), "w") as f:
227 f.write(self.distribution.metadata.version)
228
229 with open(os.path.join(package, "VERSION"), "w") as f:
230 f.write(self.distribution.metadata.version)
231
232 self.already_run = True
233
234
235 class BuildPyCommand(setuptools.command.build_py.build_py):
236 """Custom build command."""
237
238 def run(self):
239 self.run_command('build_js')
240 super().run()
241
242
243 class EggInfoCommand(setuptools.command.egg_info.egg_info):
244 """Custom egginfo command."""
245
246 def run(self):
247 self.run_command('build_js')
248 super().run()
249
250
251 def setup_www_plugin(**kw):
252 package = kw['packages'][0]
253 if 'version' not in kw:
254 kw['version'] = getVersion(os.path.join(package, "__init__.py"))
255
256 setup(cmdclass=dict(
257 egg_info=EggInfoCommand,
258 build_py=BuildPyCommand,
259 build_js=BuildJsCommand),
260 **kw)
261
[end of pkg/buildbot_pkg.py]
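For orientation, the `gitDescribeToPep440` helper above maps `git describe` output to PEP 440 strings roughly as follows (spot checks derived from its regex; this block is not part of the repository file, and the import assumes the module is installed under the name `buildbot_pkg`):

```python
from buildbot_pkg import gitDescribeToPep440  # assumption: module importable under this name

print(gitDescribeToPep440("v0.9.8-20-gf0f45ca"))  # "0.9.9-dev20" (patch bumped, dev count kept)
print(gitDescribeToPep440("v2.4.0"))              # "2.4.0"
print(gitDescribeToPep440("v2.4.0.post1"))        # "2.4.0.post1"
```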
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pkg/buildbot_pkg.py b/pkg/buildbot_pkg.py
--- a/pkg/buildbot_pkg.py
+++ b/pkg/buildbot_pkg.py
@@ -199,19 +199,22 @@
if self.already_run:
return
package = self.distribution.packages[0]
- if os.path.exists("gulpfile.js") or os.path.exists("webpack.config.js"):
- yarn_version = check_output("yarn --version")
- assert yarn_version != "", "need nodejs and yarn installed in current PATH"
- yarn_bin = check_output("yarn bin").strip()
+ if os.path.exists("webpack.config.js"):
- commands = []
+ yarn_program = None
+ for program in ["yarnpkg", "yarn"]:
+ yarn_version = check_output([program, "--version"])
+ if yarn_version != "":
+ yarn_program = program
- commands.append(['yarn', 'install', '--pure-lockfile'])
+ assert yarn_version is not None, "need nodejs and yarn installed in current PATH"
- if os.path.exists("gulpfile.js"):
- commands.append([os.path.join(yarn_bin, "gulp"), 'prod', '--notests'])
- elif os.path.exists("webpack.config.js"):
- commands.append(['yarn', 'run', 'build'])
+ yarn_bin = check_output([yarn_program, "bin"]).strip()
+
+ commands = [
+ [yarn_program, 'install', '--pure-lockfile'],
+ [yarn_program, 'run', 'build'],
+ ]
shell = bool(os.name == 'nt')
|
{"golden_diff": "diff --git a/pkg/buildbot_pkg.py b/pkg/buildbot_pkg.py\n--- a/pkg/buildbot_pkg.py\n+++ b/pkg/buildbot_pkg.py\n@@ -199,19 +199,22 @@\n if self.already_run:\n return\n package = self.distribution.packages[0]\n- if os.path.exists(\"gulpfile.js\") or os.path.exists(\"webpack.config.js\"):\n- yarn_version = check_output(\"yarn --version\")\n- assert yarn_version != \"\", \"need nodejs and yarn installed in current PATH\"\n- yarn_bin = check_output(\"yarn bin\").strip()\n+ if os.path.exists(\"webpack.config.js\"):\n \n- commands = []\n+ yarn_program = None\n+ for program in [\"yarnpkg\", \"yarn\"]:\n+ yarn_version = check_output([program, \"--version\"])\n+ if yarn_version != \"\":\n+ yarn_program = program\n \n- commands.append(['yarn', 'install', '--pure-lockfile'])\n+ assert yarn_version is not None, \"need nodejs and yarn installed in current PATH\"\n \n- if os.path.exists(\"gulpfile.js\"):\n- commands.append([os.path.join(yarn_bin, \"gulp\"), 'prod', '--notests'])\n- elif os.path.exists(\"webpack.config.js\"):\n- commands.append(['yarn', 'run', 'build'])\n+ yarn_bin = check_output([yarn_program, \"bin\"]).strip()\n+\n+ commands = [\n+ [yarn_program, 'install', '--pure-lockfile'],\n+ [yarn_program, 'run', 'build'],\n+ ]\n \n shell = bool(os.name == 'nt')\n", "issue": "Docker image for 2.4.0 contains packages from 2.3.1\nThe Docker image `buildbot/buildbot-master:v2.4.0` contains Python packages for buildbot `2.3.1`:\r\n\r\n```sh\r\nbuildbot-console-view==2.3.1\r\nbuildbot-grid-view==2.3.1\r\nbuildbot-waterfall-view==2.3.1\r\nbuildbot-worker==2.3.1\r\nbuildbot-www==2.3.1\r\n```\r\n\r\nThe `buildbot` package itself has version `2019.8.18`, which is not available from PyPI.\r\n\r\nEven if `2019.8.18` turns out to be identical to `2.4.0`, this is somewhat problematic: Installing packages into the image with a dependency on buildbot will always uninstall the buildbot version that comes with the official image.\r\n\r\nIt would be nice if the Dockerfile could install buildbot from PyPI to ensure deterministic builds for those who need to derive their own image.\r\n\r\n<details>\r\n <summary>Output of pip freeze</summary>\r\n\r\n```\r\n$ docker run --rm buildbot/buildbot-master:v2.4.0 pip freeze\r\nasn1crypto==0.24.0\r\nattrs==19.1.0\r\nautobahn==19.8.1\r\nAutomat==0.7.0\r\nbuildbot==2019.8.18\r\nbuildbot-console-view==2.3.1\r\nbuildbot-grid-view==2.3.1\r\nbuildbot-waterfall-view==2.3.1\r\nbuildbot-worker==2.3.1\r\nbuildbot-www==2.3.1\r\ncertifi==2019.6.16\r\ncffi==1.12.3\r\nchardet==3.0.4\r\nconstantly==15.1.0\r\ncryptography==2.7\r\ndecorator==4.4.0\r\nfuture==0.17.1\r\nhyperlink==19.0.0\r\nidna==2.8\r\nincremental==17.5.0\r\nJinja2==2.10.1\r\nMarkupSafe==1.1.1\r\npbr==5.4.2\r\npsycopg2==2.8.3\r\npyasn1==0.4.6\r\npyasn1-modules==0.2.6\r\npycparser==2.19\r\nPyHamcrest==1.9.0\r\nPyJWT==1.7.1\r\npyOpenSSL==19.0.0\r\npython-dateutil==2.8.0\r\nPyYAML==5.1.2\r\nrequests==2.22.0\r\nservice-identity==18.1.0\r\nsix==1.12.0\r\nSQLAlchemy==1.3.7\r\nsqlalchemy-migrate==0.12.0\r\nsqlparse==0.3.0\r\nTempita==0.5.2\r\nTwisted==19.7.0\r\ntxaio==18.8.1\r\ntxrequests==0.9.6\r\nurllib3==1.25.3\r\nzope.interface==4.6.0\r\n```\r\n</details>\n", "before_files": [{"content": "# This file is part of Buildbot. 
Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n# Method to add build step taken from here\n# https://seasonofcode.com/posts/how-to-add-custom-build-steps-and-commands-to-setuppy.html\nimport datetime\nimport os\nimport re\nimport subprocess\nimport sys\nfrom pkg_resources import parse_version\nfrom subprocess import PIPE\nfrom subprocess import STDOUT\nfrom subprocess import Popen\n\nimport setuptools.command.build_py\nimport setuptools.command.egg_info\nfrom setuptools import setup\n\nimport distutils.cmd # isort:skip\n\nold_listdir = os.listdir\n\n\ndef listdir(path):\n # patch listdir to avoid looking into node_modules\n l = old_listdir(path)\n if \"node_modules\" in l:\n l.remove(\"node_modules\")\n return l\nos.listdir = listdir\n\n\ndef check_output(cmd):\n \"\"\"Version of check_output which does not throw error\"\"\"\n popen = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)\n out = popen.communicate()[0].strip()\n if not isinstance(out, str):\n out = out.decode(sys.stdout.encoding)\n return out\n\n\ndef gitDescribeToPep440(version):\n # git describe produce version in the form: v0.9.8-20-gf0f45ca\n # where 20 is the number of commit since last release, and gf0f45ca is the short commit id preceded by 'g'\n # we parse this a transform into a pep440 release version 0.9.9.dev20 (increment last digit and add dev before 20)\n\n VERSION_MATCH = re.compile(r'(?P<major>\\d+)\\.(?P<minor>\\d+)\\.(?P<patch>\\d+)(\\.post(?P<post>\\d+))?(-(?P<dev>\\d+))?(-g(?P<commit>.+))?')\n v = VERSION_MATCH.search(version)\n if v:\n major = int(v.group('major'))\n minor = int(v.group('minor'))\n patch = int(v.group('patch'))\n if v.group('dev'):\n patch += 1\n dev = int(v.group('dev'))\n return \"{}.{}.{}-dev{}\".format(major, minor, patch, dev)\n if v.group('post'):\n return \"{}.{}.{}.post{}\".format(major, minor, patch, v.group('post'))\n return \"{}.{}.{}\".format(major, minor, patch)\n\n return v\n\n\ndef mTimeVersion(init_file):\n cwd = os.path.dirname(os.path.abspath(init_file))\n m = 0\n for root, dirs, files in os.walk(cwd):\n for f in files:\n m = max(os.path.getmtime(os.path.join(root, f)), m)\n d = datetime.datetime.utcfromtimestamp(m)\n return d.strftime(\"%Y.%m.%d\")\n\n\ndef getVersionFromArchiveId(git_archive_id='$Format:%ct %d$'):\n \"\"\" Extract the tag if a source is from git archive.\n\n When source is exported via `git archive`, the git_archive_id init value is modified\n and placeholders are expanded to the \"archived\" revision:\n\n %ct: committer date, UNIX timestamp\n %d: ref names, like the --decorate option of git-log\n\n See man gitattributes(5) and git-log(1) (PRETTY FORMATS) for more details.\n \"\"\"\n # mangle the magic string to make sure it is not replaced by git archive\n if not git_archive_id.startswith('$For''mat:'):\n # source was modified by git archive, try to parse the version from\n # the value of git_archive_id\n\n match = 
re.search(r'tag:\\s*v([^,)]+)', git_archive_id)\n if match:\n # archived revision is tagged, use the tag\n return gitDescribeToPep440(match.group(1))\n\n # archived revision is not tagged, use the commit date\n tstamp = git_archive_id.strip().split()[0]\n d = datetime.datetime.utcfromtimestamp(int(tstamp))\n return d.strftime('%Y.%m.%d')\n return None\n\n\ndef getVersion(init_file):\n \"\"\"\n Return BUILDBOT_VERSION environment variable, content of VERSION file, git\n tag or 'latest'\n \"\"\"\n\n try:\n return os.environ['BUILDBOT_VERSION']\n except KeyError:\n pass\n\n try:\n cwd = os.path.dirname(os.path.abspath(init_file))\n fn = os.path.join(cwd, 'VERSION')\n with open(fn) as f:\n return f.read().strip()\n except IOError:\n pass\n\n version = getVersionFromArchiveId()\n if version is not None:\n return version\n\n try:\n p = Popen(['git', 'describe', '--tags', '--always'], stdout=PIPE, stderr=STDOUT, cwd=cwd)\n out = p.communicate()[0]\n\n if (not p.returncode) and out:\n v = gitDescribeToPep440(str(out))\n if v:\n return v\n except OSError:\n pass\n\n try:\n # if we really can't find the version, we use the date of modification of the most recent file\n # docker hub builds cannot use git describe\n return mTimeVersion(init_file)\n except Exception:\n # bummer. lets report something\n return \"latest\"\n\n\n# JS build strategy:\n#\n# Obviously, building javascript with setuptools is not really something supported initially\n#\n# The goal of this hack are:\n# - override the distutils command to insert our js build\n# - has very small setup.py\n#\n# from buildbot_pkg import setup_www\n#\n# setup_www(\n# ...\n# packages=[\"buildbot_myplugin\"]\n# )\n#\n# We need to override the first command done, so that source tree is populated very soon,\n# as well as version is found from git tree or \"VERSION\" file\n#\n# This supports following setup.py commands:\n#\n# - develop, via egg_info\n# - install, via egg_info\n# - sdist, via egg_info\n# - bdist_wheel, via build\n# This is why we override both egg_info and build, and the first run build\n# the js.\n\nclass BuildJsCommand(distutils.cmd.Command):\n \"\"\"A custom command to run JS build.\"\"\"\n\n description = 'run JS build'\n already_run = False\n\n def initialize_options(self):\n \"\"\"Set default values for options.\"\"\"\n\n def finalize_options(self):\n \"\"\"Post-process options.\"\"\"\n\n def run(self):\n \"\"\"Run command.\"\"\"\n if self.already_run:\n return\n package = self.distribution.packages[0]\n if os.path.exists(\"gulpfile.js\") or os.path.exists(\"webpack.config.js\"):\n yarn_version = check_output(\"yarn --version\")\n assert yarn_version != \"\", \"need nodejs and yarn installed in current PATH\"\n yarn_bin = check_output(\"yarn bin\").strip()\n\n commands = []\n\n commands.append(['yarn', 'install', '--pure-lockfile'])\n\n if os.path.exists(\"gulpfile.js\"):\n commands.append([os.path.join(yarn_bin, \"gulp\"), 'prod', '--notests'])\n elif os.path.exists(\"webpack.config.js\"):\n commands.append(['yarn', 'run', 'build'])\n\n shell = bool(os.name == 'nt')\n\n for command in commands:\n self.announce('Running command: {}'.format(str(\" \".join(command))),\n level=distutils.log.INFO)\n subprocess.check_call(command, shell=shell)\n\n self.copy_tree(os.path.join(package, 'static'), os.path.join(\n \"build\", \"lib\", package, \"static\"))\n\n with open(os.path.join(\"build\", \"lib\", package, \"VERSION\"), \"w\") as f:\n f.write(self.distribution.metadata.version)\n\n with open(os.path.join(package, \"VERSION\"), \"w\") as 
f:\n f.write(self.distribution.metadata.version)\n\n self.already_run = True\n\n\nclass BuildPyCommand(setuptools.command.build_py.build_py):\n \"\"\"Custom build command.\"\"\"\n\n def run(self):\n self.run_command('build_js')\n super().run()\n\n\nclass EggInfoCommand(setuptools.command.egg_info.egg_info):\n \"\"\"Custom egginfo command.\"\"\"\n\n def run(self):\n self.run_command('build_js')\n super().run()\n\n\ndef setup_www_plugin(**kw):\n package = kw['packages'][0]\n if 'version' not in kw:\n kw['version'] = getVersion(os.path.join(package, \"__init__.py\"))\n\n setup(cmdclass=dict(\n egg_info=EggInfoCommand,\n build_py=BuildPyCommand,\n build_js=BuildJsCommand),\n **kw)\n", "path": "pkg/buildbot_pkg.py"}]}
| 3,972 | 359 |
gh_patches_debug_518
|
rasdani/github-patches
|
git_diff
|
feast-dev__feast-1946
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow plugin repos to test against universal test suite
**Is your feature request related to a problem? Please describe.**
There are several plugin repos for custom connectors (Hive, Azure, Snowflake, etc.), and there is increasing interest from the community in contributing plugins. One blocker for many folks is that there is no easy way to test their custom connector against our universal test suite. Someone working on a plugin repo should be able to test their connector against the universal test suite with minimal changes in their repo.
**Describe the solution you'd like**
The Feast team has come up with two solutions.
The first solution is a temporary fix to unblock folks who wish to start testing immediately. We recommend that you add `feast` as a [git submodule](https://git-scm.com/book/en/v2/Git-Tools-Submodules) in your plugin repo, and then install `feast` in editable mode by navigating to `feast` and running `pip install -e sdk/python/[ci]` as detailed [here](https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md). This will allow you to `import feast`, and will also allow you to run our test suite with `pytest`. For example, in `feast` you should be able to run `make test`, and all unit tests should succeed. In order to run the full suite of integration tests with your custom connector, all you need to do is modify `FULL_REPO_CONFIGS` in `sdk/python/tests/integration/feature_repos/repo_configuration.py`. Most of our integration tests rely on pytest fixtures defined in `conftest.py`, most of which are parametrized based on `FULL_REPO_CONFIGS`. The main thing you will need to do in order to overwrite `FULL_REPO_CONFIGS` is to write a `DataSourceCreator`. We consider this solution a temporary fix because it still requires that the user to modify the `feast` repo directly, even if it's in a git submodule.
The second solution, which extends the first solution to be more viable in the long-term, will be to allow users to overwrite `FULL_REPO_CONFIGS` through an environment variable. This means that after adding `feast` as a git submodule, users should be able to directly run integration tests without ever needing to modify the `feast` repo. We intend to build this functionality out eventually, but are currently working on several other higher-priority features. If anyone in the community wants to take this on, that would be great!
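In case someone does pick this up, a minimal sketch of what resolving the override at import time could look like (the environment variable name, module layout, and fallback here are assumptions, not a final design):

```python
# Sketch only: let an env var point at a module that defines FULL_REPO_CONFIGS.
import importlib
import os

DEFAULT_FULL_REPO_CONFIGS = []  # placeholder for the list currently hard-coded in repo_configuration.py

module_path = os.environ.get("FULL_REPO_CONFIGS_MODULE")
if module_path:
    FULL_REPO_CONFIGS = importlib.import_module(module_path).FULL_REPO_CONFIGS
else:
    FULL_REPO_CONFIGS = DEFAULT_FULL_REPO_CONFIGS
```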
**Describe alternatives you've considered**
N/A
**Additional context**
Add any other context or screenshots about the feature request here.
</issue>
<code>
[start of sdk/python/feast/constants.py]
1 #
2 # Copyright 2019 The Feast Authors
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # https://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 #
16
17 # Maximum interval(secs) to wait between retries for retry function
18 MAX_WAIT_INTERVAL: str = "60"
19
20 AWS_LAMBDA_FEATURE_SERVER_IMAGE = "feastdev/feature-server:aws"
21
22 # feature_store.yaml environment variable name for remote feature server
23 FEATURE_STORE_YAML_ENV_NAME: str = "FEATURE_STORE_YAML_BASE64"
24
25 # Environment variable for toggling usage
26 FEAST_USAGE = "FEAST_USAGE"
27
[end of sdk/python/feast/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/feast/constants.py b/sdk/python/feast/constants.py
--- a/sdk/python/feast/constants.py
+++ b/sdk/python/feast/constants.py
@@ -24,3 +24,6 @@
# Environment variable for toggling usage
FEAST_USAGE = "FEAST_USAGE"
+
+# Environment variable for the path for overwriting universal test configs
+FULL_REPO_CONFIGS_MODULE_ENV_NAME: str = "FULL_REPO_CONFIGS_MODULE"
|
{"golden_diff": "diff --git a/sdk/python/feast/constants.py b/sdk/python/feast/constants.py\n--- a/sdk/python/feast/constants.py\n+++ b/sdk/python/feast/constants.py\n@@ -24,3 +24,6 @@\n \n # Environment variable for toggling usage\n FEAST_USAGE = \"FEAST_USAGE\"\n+\n+# Environment variable for the path for overwriting universal test configs\n+FULL_REPO_CONFIGS_MODULE_ENV_NAME: str = \"FULL_REPO_CONFIGS_MODULE\"\n", "issue": "Allow plugin repos to test against universal test suite\n**Is your feature request related to a problem? Please describe.**\r\nThere are several plugin repos for custom connectors (Hive, Azure, Snowflake, etc.), and there is increasing interest from the community in contributing plugins. One blocker for many folks is that there is no easy way to test their custom connector against our universal test suite. Someone working on a plugin repo should be able to test their connector against the universal test suite with minimal changes in their repo. \r\n\r\n**Describe the solution you'd like**\r\nThe Feast team has come up with two solutions. \r\n\r\nThe first solution is a temporary fix to unblock folks who wish to start testing immediately. We recommend that you add `feast` as a [git submodule](https://git-scm.com/book/en/v2/Git-Tools-Submodules) in your plugin repo, and then install `feast` in editable mode by navigating to `feast` and running `pip install -e sdk/python/[ci]` as detailed [here](https://github.com/feast-dev/feast/blob/master/CONTRIBUTING.md). This will allow you to `import feast`, and will also allow you to run our test suite with `pytest`. For example, in `feast` you should be able to run `make test`, and all unit tests should succeed. In order to run the full suite of integration tests with your custom connector, all you need to do is modify `FULL_REPO_CONFIGS` in `sdk/python/tests/integration/feature_repos/repo_configuration.py`. Most of our integration tests rely on pytest fixtures defined in `conftest.py`, most of which are parametrized based on `FULL_REPO_CONFIGS`. The main thing you will need to do in order to overwrite `FULL_REPO_CONFIGS` is to write a `DataSourceCreator`. We consider this solution a temporary fix because it still requires that the user to modify the `feast` repo directly, even if it's in a git submodule.\r\n\r\nThe second solution, which extends the first solution to be more viable in the long-term, will be to allow users to overwrite `FULL_REPO_CONFIGS` through an environment variable. This means that after adding `feast` as a git submodule, users should be able to directly run integration tests without ever needing to modify the `feast` repo. We intend to build this functionality out eventually, but are currently working on several other higher-priority features. 
If anyone in the community wants to take this on, that would be great!\r\n\r\n**Describe alternatives you've considered**\r\nN/A\r\n\r\n**Additional context**\r\nAdd any other context or screenshots about the feature request here.\r\n\n", "before_files": [{"content": "#\n# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# Maximum interval(secs) to wait between retries for retry function\nMAX_WAIT_INTERVAL: str = \"60\"\n\nAWS_LAMBDA_FEATURE_SERVER_IMAGE = \"feastdev/feature-server:aws\"\n\n# feature_store.yaml environment variable name for remote feature server\nFEATURE_STORE_YAML_ENV_NAME: str = \"FEATURE_STORE_YAML_BASE64\"\n\n# Environment variable for toggling usage\nFEAST_USAGE = \"FEAST_USAGE\"\n", "path": "sdk/python/feast/constants.py"}]}
| 1,373 | 102 |
gh_patches_debug_8170 | rasdani/github-patches | git_diff | tensorflow__addons-2324 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Channel dimension found to be None in MaxUnpooling2D
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab
- TensorFlow version and how it was installed (source or binary): 2.4.0
- TensorFlow-Addons version and how it was installed (source or binary):
- Python version: 3.7
- Is GPU used? (yes/no): yes
**Describe the bug**
The channel dimension is None after the MaxUnpooling2D layer
**Code to reproduce the issue**
```
import tensorflow as tf
import tensorflow_addons as tfa

def test():
    inputs = tf.keras.layers.Input((None, None, 1))
    x, indices = tf.nn.max_pool_with_argmax(inputs, 2, 2, padding="SAME")
    x = tfa.layers.MaxUnpooling2D()(x, indices)
    x = tf.keras.layers.Conv2D(64, 3, padding="same")(x)
    return tf.keras.Model(inputs, x)

model = test()
model(tf.random.normal([1, 128, 128, 1]))
```
**Other info / logs**
<ipython-input-25-675633950ad9> in test()
3 x,indices = tf.nn.max_pool_with_argmax(inputs, 2, 2, padding="SAME")
4 x = tfa.layers.MaxUnpooling2D()(x,indices)
----> 5 x = tf.keras.layers.Conv2D(64, 3, padding="same")(x)
6 return tf.keras.Model(inputs, x)
7 model = test()
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
950 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
951 return self._functional_construction_call(inputs, args, kwargs,
--> 952 input_list)
953
954 # Maintains info about the `Layer.call` stack.
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
1089 # Check input assumptions set after layer building, e.g. input shape.
1090 outputs = self._keras_tensor_symbolic_call(
-> 1091 inputs, input_masks, args, kwargs)
1092
1093 if outputs is None:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
820 return nest.map_structure(keras_tensor.KerasTensor, output_signature)
821 else:
--> 822 return self._infer_output_signature(inputs, args, kwargs, input_masks)
823
824 def _infer_output_signature(self, inputs, args, kwargs, input_masks):
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
860 # overridden).
861 # TODO(kaftan): do we maybe_build here, or have we already done it?
--> 862 self._maybe_build(inputs)
863 outputs = call_fn(inputs, *args, **kwargs)
864
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs)
2708 # operations.
2709 with tf_utils.maybe_init_scope(self):
-> 2710 self.build(input_shapes) # pylint:disable=not-callable
2711 # We must set also ensure that the layer is marked as built, and the build
2712 # shape is stored since user defined build functions may not be calling
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/convolutional.py in build(self, input_shape)
186 def build(self, input_shape):
187 input_shape = tensor_shape.TensorShape(input_shape)
--> 188 input_channel = self._get_input_channel(input_shape)
189 if input_channel % self.groups != 0:
190 raise ValueError(
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/convolutional.py in _get_input_channel(self, input_shape)
358 channel_axis = self._get_channel_axis()
359 if input_shape.dims[channel_axis].value is None:
--> 360 raise ValueError('The channel dimension of the inputs '
361 'should be defined. Found `None`.')
362 return int(input_shape[channel_axis])
ValueError: The channel dimension of the inputs should be defined. Found `None`.
</issue>
<code>
[start of tensorflow_addons/layers/max_unpooling_2d.py]
1 # Copyright 2020 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """MaxUnpooling2D operation."""
16
17 import tensorflow as tf
18
19 from typeguard import typechecked
20 from typing import Union
21
22 from tensorflow_addons.utils.keras_utils import normalize_tuple
23
24
25 def _calculate_output_shape(input_shape, pool_size, strides, padding):
26 """Calculates the shape of the unpooled output."""
27 if padding == "VALID":
28 output_shape = (
29 input_shape[0],
30 (input_shape[1] - 1) * strides[0] + pool_size[0],
31 (input_shape[2] - 1) * strides[1] + pool_size[1],
32 input_shape[3],
33 )
34 elif padding == "SAME":
35 output_shape = (
36 input_shape[0],
37 input_shape[1] * strides[0],
38 input_shape[2] * strides[1],
39 input_shape[3],
40 )
41 else:
42 raise ValueError('Padding must be a string from: "SAME", "VALID"')
43 return output_shape
44
45
46 def _max_unpooling_2d(updates, mask, pool_size=(2, 2), strides=(2, 2), padding="SAME"):
47 """Unpool the outputs of a maximum pooling operation."""
48 pool_size_attr = " ".join(["i: %d" % v for v in pool_size])
49 strides_attr = " ".join(["i: %d" % v for v in strides])
50 experimental_implements = [
51 'name: "addons:MaxUnpooling2D"',
52 'attr { key: "pool_size" value { list {%s} } }' % pool_size_attr,
53 'attr { key: "strides" value { list {%s} } }' % strides_attr,
54 'attr { key: "padding" value { s: "%s" } }' % padding,
55 ]
56 experimental_implements = " ".join(experimental_implements)
57
58 @tf.function(experimental_implements=experimental_implements)
59 def func(updates, mask):
60 mask = tf.cast(mask, "int32")
61 input_shape = tf.shape(updates, out_type="int32")
62 output_shape = _calculate_output_shape(input_shape, pool_size, strides, padding)
63
64 # Calculates indices for batch, height, width and feature maps.
65 one_like_mask = tf.ones_like(mask, dtype="int32")
66 batch_shape = tf.concat([[input_shape[0]], [1], [1], [1]], axis=0)
67 batch_range = tf.reshape(
68 tf.range(output_shape[0], dtype="int32"), shape=batch_shape
69 )
70 b = one_like_mask * batch_range
71 y = mask // (output_shape[2] * output_shape[3])
72 x = (mask // output_shape[3]) % output_shape[2]
73 feature_range = tf.range(output_shape[3], dtype="int32")
74 f = one_like_mask * feature_range
75
76 # Transposes indices & reshape update values to one dimension.
77 updates_size = tf.size(updates)
78 indices = tf.transpose(tf.reshape(tf.stack([b, y, x, f]), [4, updates_size]))
79 values = tf.reshape(updates, [updates_size])
80 ret = tf.scatter_nd(indices, values, output_shape)
81 return ret
82
83 return func(updates, mask)
84
85
86 @tf.keras.utils.register_keras_serializable(package="Addons")
87 class MaxUnpooling2D(tf.keras.layers.Layer):
88 """Unpool the outputs of a maximum pooling operation.
89
90 This function currently does not support outputs of MaxPoolingWithArgMax in
91 following cases:
92 - include_batch_in_index equals true.
93 - input_shape is not divisible by strides if padding is "SAME".
94 - (input_shape - pool_size) is not divisible by strides if padding is "VALID".
95 - The max pooling operation results in duplicate values in updates and mask.
96
97 Args:
98 updates: The pooling result from max pooling.
99 mask: the argmax result corresponds to above max values.
100 pool_size: The filter that max pooling was performed with. Default: (2, 2).
101 strides: The strides that max pooling was performed with. Default: (2, 2).
102 padding: The padding that max pooling was performed with. Default: "SAME".
103 Input shape:
104 4D tensor with shape: `(batch_size, height, width, channel)`.
105 Output shape:
106 4D tensor with the same shape as the input of max pooling operation.
107 """
108
109 @typechecked
110 def __init__(
111 self,
112 pool_size: Union[list, tuple, int] = (2, 2),
113 strides: Union[list, tuple, int] = (2, 2),
114 padding: str = "SAME",
115 **kwargs,
116 ):
117 super(MaxUnpooling2D, self).__init__(**kwargs)
118
119 if padding != "SAME" and padding != "VALID":
120 raise ValueError('Padding must be a string from: "SAME", "VALID"')
121
122 self.pool_size = normalize_tuple(pool_size, 2, "pool_size")
123 self.strides = normalize_tuple(strides, 2, "strides")
124 self.padding = padding
125
126 def call(self, updates, mask):
127 return _max_unpooling_2d(
128 updates,
129 mask,
130 pool_size=self.pool_size,
131 strides=self.strides,
132 padding=self.padding,
133 )
134
135 def compute_output_shape(self, input_shapes):
136 input_shape = input_shapes[1]
137 return _calculate_output_shape(
138 input_shape, self.pool_size, self.strides, self.padding
139 )
140
141 def get_config(self):
142 config = super(MaxUnpooling2D, self).get_config()
143 config["pool_size"] = self.pool_size
144 config["strides"] = self.strides
145 config["padding"] = self.padding
146 return config
147
[end of tensorflow_addons/layers/max_unpooling_2d.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tensorflow_addons/layers/max_unpooling_2d.py b/tensorflow_addons/layers/max_unpooling_2d.py
--- a/tensorflow_addons/layers/max_unpooling_2d.py
+++ b/tensorflow_addons/layers/max_unpooling_2d.py
@@ -59,6 +59,7 @@
def func(updates, mask):
mask = tf.cast(mask, "int32")
input_shape = tf.shape(updates, out_type="int32")
+ input_shape = [updates.shape[i] or input_shape[i] for i in range(4)]
output_shape = _calculate_output_shape(input_shape, pool_size, strides, padding)
# Calculates indices for batch, height, width and feature maps.
|
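The one-line fix above works because `tf.shape(updates)` is purely dynamic: an `output_shape` computed from it alone carries no static dimensions, so the `tf.scatter_nd` result (and hence the layer's output) loses the channel size that a following `Conv2D` needs. Merging in `updates.shape` keeps any dimension already known at graph-construction time. The helper below is a small illustration of that static/dynamic merge, not part of the addons code, and it assumes the input's rank is known.

```python
import tensorflow as tf


def merged_shape(x):
    """Prefer statically known dimensions, falling back to the dynamic shape."""
    dynamic = tf.shape(x)  # dynamic shape: correct at runtime, but no static info
    # x.shape[i] is a Python int when the dimension is known and None otherwise,
    # so `or` picks the static value where it exists.
    return [x.shape[i] or dynamic[i] for i in range(len(x.shape))]
```

With an input of shape `(None, None, None, 1)` only the channel axis is static; the merged shape keeps that `1`, which is what lets the downstream `Conv2D` resolve its input channel count.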
{"golden_diff": "diff --git a/tensorflow_addons/layers/max_unpooling_2d.py b/tensorflow_addons/layers/max_unpooling_2d.py\n--- a/tensorflow_addons/layers/max_unpooling_2d.py\n+++ b/tensorflow_addons/layers/max_unpooling_2d.py\n@@ -59,6 +59,7 @@\n def func(updates, mask):\n mask = tf.cast(mask, \"int32\")\n input_shape = tf.shape(updates, out_type=\"int32\")\n+ input_shape = [updates.shape[i] or input_shape[i] for i in range(4)]\n output_shape = _calculate_output_shape(input_shape, pool_size, strides, padding)\n \n # Calculates indices for batch, height, width and feature maps.\n", "issue": "Channel Found None in MaxUnPooling2D\n**System information**\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Colab\r\n- TensorFlow version and how it was installed (source or binary): 2.4.0\r\n- TensorFlow-Addons version and how it was installed (source or binary):\r\n- Python version: 3.7\r\n- Is GPU used? (yes/no): yes\r\n\r\n**Describe the bug**\r\n\r\nThe channel dimension is None after MaxUnpooling2D layer\r\n\r\n**Code to reproduce the issue**\r\n\r\n```\r\ndef test():\r\n inputs = tf.keras.layers.Input((None, None, 1))\r\n x,indices = tf.nn.max_pool_with_argmax(inputs, 2, 2, padding=\"SAME\")\r\n x = tfa.layers.MaxUnpooling2D()(x,indices)\r\n x = tf.keras.layers.Conv2D(64, 3, padding=\"same\")(x)\r\n return tf.keras.Model(inputs, x)\r\nmodel = test()\r\nmodel(tf.random.normal([1,128,128,1]))\r\n```\r\n\r\n**Other info / logs**\r\n\r\n<ipython-input-25-675633950ad9> in test()\r\n 3 x,indices = tf.nn.max_pool_with_argmax(inputs, 2, 2, padding=\"SAME\")\r\n 4 x = tfa.layers.MaxUnpooling2D()(x,indices)\r\n----> 5 x = tf.keras.layers.Conv2D(64, 3, padding=\"same\")(x)\r\n 6 return tf.keras.Model(inputs, x)\r\n 7 model = test()\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)\r\n 950 if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):\r\n 951 return self._functional_construction_call(inputs, args, kwargs,\r\n--> 952 input_list)\r\n 953 \r\n 954 # Maintains info about the `Layer.call` stack.\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)\r\n 1089 # Check input assumptions set after layer building, e.g. 
input shape.\r\n 1090 outputs = self._keras_tensor_symbolic_call(\r\n-> 1091 inputs, input_masks, args, kwargs)\r\n 1092 \r\n 1093 if outputs is None:\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)\r\n 820 return nest.map_structure(keras_tensor.KerasTensor, output_signature)\r\n 821 else:\r\n--> 822 return self._infer_output_signature(inputs, args, kwargs, input_masks)\r\n 823 \r\n 824 def _infer_output_signature(self, inputs, args, kwargs, input_masks):\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)\r\n 860 # overridden).\r\n 861 # TODO(kaftan): do we maybe_build here, or have we already done it?\r\n--> 862 self._maybe_build(inputs)\r\n 863 outputs = call_fn(inputs, *args, **kwargs)\r\n 864 \r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py in _maybe_build(self, inputs)\r\n 2708 # operations.\r\n 2709 with tf_utils.maybe_init_scope(self):\r\n-> 2710 self.build(input_shapes) # pylint:disable=not-callable\r\n 2711 # We must set also ensure that the layer is marked as built, and the build\r\n 2712 # shape is stored since user defined build functions may not be calling\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/convolutional.py in build(self, input_shape)\r\n 186 def build(self, input_shape):\r\n 187 input_shape = tensor_shape.TensorShape(input_shape)\r\n--> 188 input_channel = self._get_input_channel(input_shape)\r\n 189 if input_channel % self.groups != 0:\r\n 190 raise ValueError(\r\n\r\n/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/convolutional.py in _get_input_channel(self, input_shape)\r\n 358 channel_axis = self._get_channel_axis()\r\n 359 if input_shape.dims[channel_axis].value is None:\r\n--> 360 raise ValueError('The channel dimension of the inputs '\r\n 361 'should be defined. Found `None`.')\r\n 362 return int(input_shape[channel_axis])\r\n\r\nValueError: The channel dimension of the inputs should be defined. Found `None`.\n", "before_files": [{"content": "# Copyright 2020 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"MaxUnpooling2D operation.\"\"\"\n\nimport tensorflow as tf\n\nfrom typeguard import typechecked\nfrom typing import Union\n\nfrom tensorflow_addons.utils.keras_utils import normalize_tuple\n\n\ndef _calculate_output_shape(input_shape, pool_size, strides, padding):\n \"\"\"Calculates the shape of the unpooled output.\"\"\"\n if padding == \"VALID\":\n output_shape = (\n input_shape[0],\n (input_shape[1] - 1) * strides[0] + pool_size[0],\n (input_shape[2] - 1) * strides[1] + pool_size[1],\n input_shape[3],\n )\n elif padding == \"SAME\":\n output_shape = (\n input_shape[0],\n input_shape[1] * strides[0],\n input_shape[2] * strides[1],\n input_shape[3],\n )\n else:\n raise ValueError('Padding must be a string from: \"SAME\", \"VALID\"')\n return output_shape\n\n\ndef _max_unpooling_2d(updates, mask, pool_size=(2, 2), strides=(2, 2), padding=\"SAME\"):\n \"\"\"Unpool the outputs of a maximum pooling operation.\"\"\"\n pool_size_attr = \" \".join([\"i: %d\" % v for v in pool_size])\n strides_attr = \" \".join([\"i: %d\" % v for v in strides])\n experimental_implements = [\n 'name: \"addons:MaxUnpooling2D\"',\n 'attr { key: \"pool_size\" value { list {%s} } }' % pool_size_attr,\n 'attr { key: \"strides\" value { list {%s} } }' % strides_attr,\n 'attr { key: \"padding\" value { s: \"%s\" } }' % padding,\n ]\n experimental_implements = \" \".join(experimental_implements)\n\n @tf.function(experimental_implements=experimental_implements)\n def func(updates, mask):\n mask = tf.cast(mask, \"int32\")\n input_shape = tf.shape(updates, out_type=\"int32\")\n output_shape = _calculate_output_shape(input_shape, pool_size, strides, padding)\n\n # Calculates indices for batch, height, width and feature maps.\n one_like_mask = tf.ones_like(mask, dtype=\"int32\")\n batch_shape = tf.concat([[input_shape[0]], [1], [1], [1]], axis=0)\n batch_range = tf.reshape(\n tf.range(output_shape[0], dtype=\"int32\"), shape=batch_shape\n )\n b = one_like_mask * batch_range\n y = mask // (output_shape[2] * output_shape[3])\n x = (mask // output_shape[3]) % output_shape[2]\n feature_range = tf.range(output_shape[3], dtype=\"int32\")\n f = one_like_mask * feature_range\n\n # Transposes indices & reshape update values to one dimension.\n updates_size = tf.size(updates)\n indices = tf.transpose(tf.reshape(tf.stack([b, y, x, f]), [4, updates_size]))\n values = tf.reshape(updates, [updates_size])\n ret = tf.scatter_nd(indices, values, output_shape)\n return ret\n\n return func(updates, mask)\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass MaxUnpooling2D(tf.keras.layers.Layer):\n \"\"\"Unpool the outputs of a maximum pooling operation.\n\n This function currently does not support outputs of MaxPoolingWithArgMax in\n following cases:\n - include_batch_in_index equals true.\n - input_shape is not divisible by strides if padding is \"SAME\".\n - (input_shape - 
pool_size) is not divisible by strides if padding is \"VALID\".\n - The max pooling operation results in duplicate values in updates and mask.\n\n Args:\n updates: The pooling result from max pooling.\n mask: the argmax result corresponds to above max values.\n pool_size: The filter that max pooling was performed with. Default: (2, 2).\n strides: The strides that max pooling was performed with. Default: (2, 2).\n padding: The padding that max pooling was performed with. Default: \"SAME\".\n Input shape:\n 4D tensor with shape: `(batch_size, height, width, channel)`.\n Output shape:\n 4D tensor with the same shape as the input of max pooling operation.\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n pool_size: Union[list, tuple, int] = (2, 2),\n strides: Union[list, tuple, int] = (2, 2),\n padding: str = \"SAME\",\n **kwargs,\n ):\n super(MaxUnpooling2D, self).__init__(**kwargs)\n\n if padding != \"SAME\" and padding != \"VALID\":\n raise ValueError('Padding must be a string from: \"SAME\", \"VALID\"')\n\n self.pool_size = normalize_tuple(pool_size, 2, \"pool_size\")\n self.strides = normalize_tuple(strides, 2, \"strides\")\n self.padding = padding\n\n def call(self, updates, mask):\n return _max_unpooling_2d(\n updates,\n mask,\n pool_size=self.pool_size,\n strides=self.strides,\n padding=self.padding,\n )\n\n def compute_output_shape(self, input_shapes):\n input_shape = input_shapes[1]\n return _calculate_output_shape(\n input_shape, self.pool_size, self.strides, self.padding\n )\n\n def get_config(self):\n config = super(MaxUnpooling2D, self).get_config()\n config[\"pool_size\"] = self.pool_size\n config[\"strides\"] = self.strides\n config[\"padding\"] = self.padding\n return config\n", "path": "tensorflow_addons/layers/max_unpooling_2d.py"}]}
| 3,439 | 175 |
gh_patches_debug_936 | rasdani/github-patches | git_diff | lnbits__lnbits-1183 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] LNDhub extension returns unusable `getinfo` response
**Describe the bug**
The [getinfo call](https://github.com/lnbits/lnbits/blob/main/lnbits/extensions/lndhub/views_api.py#L22) simply returns `bad auth` every time, which breaks integrations such as ours in BTCPay Server (see btcpayserver/btcpayserver#4414).
**Expected behavior**
Return [valid information](https://github.com/BlueWallet/LndHub/blob/master/doc/Send-requirements.md#get-getinfo), which we can use to connect. For us that would mean having a list of `uris` and a `block_height` being set.
</issue>
<code>
[start of lnbits/extensions/lndhub/views_api.py]
1 import asyncio
2 import time
3 from base64 import urlsafe_b64encode
4 from http import HTTPStatus
5
6 from fastapi.param_functions import Query
7 from fastapi.params import Depends
8 from pydantic import BaseModel
9 from starlette.exceptions import HTTPException
10
11 from lnbits import bolt11
12 from lnbits.core.crud import delete_expired_invoices, get_payments
13 from lnbits.core.services import create_invoice, pay_invoice
14 from lnbits.decorators import WalletTypeInfo
15 from lnbits.settings import LNBITS_SITE_TITLE, WALLET
16
17 from . import lndhub_ext
18 from .decorators import check_wallet, require_admin_key
19 from .utils import decoded_as_lndhub, to_buffer
20
21
22 @lndhub_ext.get("/ext/getinfo")
23 async def lndhub_getinfo():
24 raise HTTPException(status_code=HTTPStatus.UNAUTHORIZED, detail="bad auth")
25
26
27 class AuthData(BaseModel):
28 login: str = Query(None)
29 password: str = Query(None)
30 refresh_token: str = Query(None)
31
32
33 @lndhub_ext.post("/ext/auth")
34 async def lndhub_auth(data: AuthData):
35 token = (
36 data.refresh_token
37 if data.refresh_token
38 else urlsafe_b64encode(
39 (data.login + ":" + data.password).encode("utf-8")
40 ).decode("ascii")
41 )
42 return {"refresh_token": token, "access_token": token}
43
44
45 class AddInvoice(BaseModel):
46 amt: str = Query(...)
47 memo: str = Query(...)
48 preimage: str = Query(None)
49
50
51 @lndhub_ext.post("/ext/addinvoice")
52 async def lndhub_addinvoice(
53 data: AddInvoice, wallet: WalletTypeInfo = Depends(check_wallet)
54 ):
55 try:
56 _, pr = await create_invoice(
57 wallet_id=wallet.wallet.id,
58 amount=int(data.amt),
59 memo=data.memo or LNBITS_SITE_TITLE,
60 extra={"tag": "lndhub"},
61 )
62 except:
63 raise HTTPException(
64 status_code=HTTPStatus.NOT_FOUND, detail="Failed to create invoice"
65 )
66 invoice = bolt11.decode(pr)
67 return {
68 "pay_req": pr,
69 "payment_request": pr,
70 "add_index": "500",
71 "r_hash": to_buffer(invoice.payment_hash),
72 "hash": invoice.payment_hash,
73 }
74
75
76 class Invoice(BaseModel):
77 invoice: str = Query(...)
78
79
80 @lndhub_ext.post("/ext/payinvoice")
81 async def lndhub_payinvoice(
82 r_invoice: Invoice, wallet: WalletTypeInfo = Depends(require_admin_key)
83 ):
84 try:
85 await pay_invoice(
86 wallet_id=wallet.wallet.id,
87 payment_request=r_invoice.invoice,
88 extra={"tag": "lndhub"},
89 )
90 except:
91 raise HTTPException(status_code=HTTPStatus.NOT_FOUND, detail="Payment failed")
92
93 invoice: bolt11.Invoice = bolt11.decode(r_invoice.invoice)
94
95 return {
96 "payment_error": "",
97 "payment_preimage": "0" * 64,
98 "route": {},
99 "payment_hash": invoice.payment_hash,
100 "decoded": decoded_as_lndhub(invoice),
101 "fee_msat": 0,
102 "type": "paid_invoice",
103 "fee": 0,
104 "value": invoice.amount_msat / 1000,
105 "timestamp": int(time.time()),
106 "memo": invoice.description,
107 }
108
109
110 @lndhub_ext.get("/ext/balance")
111 async def lndhub_balance(
112 wallet: WalletTypeInfo = Depends(check_wallet),
113 ):
114 return {"BTC": {"AvailableBalance": wallet.wallet.balance}}
115
116
117 @lndhub_ext.get("/ext/gettxs")
118 async def lndhub_gettxs(
119 wallet: WalletTypeInfo = Depends(check_wallet),
120 limit: int = Query(20, ge=1, le=20),
121 offset: int = Query(0, ge=0),
122 ):
123 for payment in await get_payments(
124 wallet_id=wallet.wallet.id,
125 complete=False,
126 pending=True,
127 outgoing=True,
128 incoming=False,
129 limit=limit,
130 offset=offset,
131 exclude_uncheckable=True,
132 ):
133 await payment.check_status()
134
135 return [
136 {
137 "payment_preimage": payment.preimage,
138 "payment_hash": payment.payment_hash,
139 "fee_msat": payment.fee * 1000,
140 "type": "paid_invoice",
141 "fee": payment.fee,
142 "value": int(payment.amount / 1000),
143 "timestamp": payment.time,
144 "memo": payment.memo if not payment.pending else "Payment in transition",
145 }
146 for payment in reversed(
147 (
148 await get_payments(
149 wallet_id=wallet.wallet.id,
150 pending=True,
151 complete=True,
152 outgoing=True,
153 incoming=False,
154 limit=limit,
155 offset=offset,
156 )
157 )
158 )
159 ]
160
161
162 @lndhub_ext.get("/ext/getuserinvoices")
163 async def lndhub_getuserinvoices(
164 wallet: WalletTypeInfo = Depends(check_wallet),
165 limit: int = Query(20, ge=1, le=20),
166 offset: int = Query(0, ge=0),
167 ):
168 for invoice in await get_payments(
169 wallet_id=wallet.wallet.id,
170 complete=False,
171 pending=True,
172 outgoing=False,
173 incoming=True,
174 limit=limit,
175 offset=offset,
176 exclude_uncheckable=True,
177 ):
178 await invoice.set_pending(
179 (await WALLET.get_invoice_status(invoice.checking_id)).pending
180 )
181
182 return [
183 {
184 "r_hash": to_buffer(invoice.payment_hash),
185 "payment_request": invoice.bolt11,
186 "add_index": "500",
187 "description": invoice.memo,
188 "payment_hash": invoice.payment_hash,
189 "ispaid": not invoice.pending,
190 "amt": int(invoice.amount / 1000),
191 "expire_time": int(time.time() + 1800),
192 "timestamp": invoice.time,
193 "type": "user_invoice",
194 }
195 for invoice in reversed(
196 (
197 await get_payments(
198 wallet_id=wallet.wallet.id,
199 pending=True,
200 complete=True,
201 incoming=True,
202 outgoing=False,
203 limit=limit,
204 offset=offset,
205 )
206 )
207 )
208 ]
209
210
211 @lndhub_ext.get("/ext/getbtc")
212 async def lndhub_getbtc(wallet: WalletTypeInfo = Depends(check_wallet)):
213 "load an address for incoming onchain btc"
214 return []
215
216
217 @lndhub_ext.get("/ext/getpending")
218 async def lndhub_getpending(wallet: WalletTypeInfo = Depends(check_wallet)):
219 "pending onchain transactions"
220 return []
221
222
223 @lndhub_ext.get("/ext/decodeinvoice")
224 async def lndhub_decodeinvoice(invoice: str = Query(None)):
225 inv = bolt11.decode(invoice)
226 return decoded_as_lndhub(inv)
227
228
229 @lndhub_ext.get("/ext/checkrouteinvoice")
230 async def lndhub_checkrouteinvoice():
231 "not implemented on canonical lndhub"
232 pass
233
[end of lnbits/extensions/lndhub/views_api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lnbits/extensions/lndhub/views_api.py b/lnbits/extensions/lndhub/views_api.py
--- a/lnbits/extensions/lndhub/views_api.py
+++ b/lnbits/extensions/lndhub/views_api.py
@@ -21,7 +21,7 @@
@lndhub_ext.get("/ext/getinfo")
async def lndhub_getinfo():
- raise HTTPException(status_code=HTTPStatus.UNAUTHORIZED, detail="bad auth")
+ return {"alias": LNBITS_SITE_TITLE}
class AuthData(BaseModel):
|
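The merged fix above returns only an `alias`, which is enough to stop `getinfo` from failing with `bad auth`; the issue additionally asks for `uris` and a `block_height`. A hypothetical richer payload is sketched below — the extra fields and their values are placeholders for illustration, not what the patch implements.

```python
LNBITS_SITE_TITLE = "LNbits"  # placeholder; the real value comes from lnbits.settings


async def lndhub_getinfo():
    # Hypothetical richer response; the merged patch only returns the alias.
    # "uris" and "block_height" are the fields the issue says BTCPay Server
    # relies on; the values below are stand-ins, not real node data.
    return {
        "alias": LNBITS_SITE_TITLE,
        "uris": [],          # a custodial wallet typically has no P2P URIs to expose
        "block_height": 0,   # a real implementation would report the chain tip here
    }
```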
{"golden_diff": "diff --git a/lnbits/extensions/lndhub/views_api.py b/lnbits/extensions/lndhub/views_api.py\n--- a/lnbits/extensions/lndhub/views_api.py\n+++ b/lnbits/extensions/lndhub/views_api.py\n@@ -21,7 +21,7 @@\n \n @lndhub_ext.get(\"/ext/getinfo\")\n async def lndhub_getinfo():\n- raise HTTPException(status_code=HTTPStatus.UNAUTHORIZED, detail=\"bad auth\")\n+ return {\"alias\": LNBITS_SITE_TITLE}\n \n \n class AuthData(BaseModel):\n", "issue": "[BUG] LNDhub extension return unusable `getinfo` response\n**Describe the bug**\r\nThe [getinfo call](https://github.com/lnbits/lnbits/blob/main/lnbits/extensions/lndhub/views_api.py#L22) simply returns `bad auth` everytime, which breaks integrations like for us in BTCPay Server (see btcpayserver/btcpayserver#4414).\r\n\r\n**Expected behavior**\r\nReturn [valid information](https://github.com/BlueWallet/LndHub/blob/master/doc/Send-requirements.md#get-getinfo), which we can use to connect. For us that would mean having a list of `uris` and a `block_height` being set.\r\n\n", "before_files": [{"content": "import asyncio\nimport time\nfrom base64 import urlsafe_b64encode\nfrom http import HTTPStatus\n\nfrom fastapi.param_functions import Query\nfrom fastapi.params import Depends\nfrom pydantic import BaseModel\nfrom starlette.exceptions import HTTPException\n\nfrom lnbits import bolt11\nfrom lnbits.core.crud import delete_expired_invoices, get_payments\nfrom lnbits.core.services import create_invoice, pay_invoice\nfrom lnbits.decorators import WalletTypeInfo\nfrom lnbits.settings import LNBITS_SITE_TITLE, WALLET\n\nfrom . import lndhub_ext\nfrom .decorators import check_wallet, require_admin_key\nfrom .utils import decoded_as_lndhub, to_buffer\n\n\n@lndhub_ext.get(\"/ext/getinfo\")\nasync def lndhub_getinfo():\n raise HTTPException(status_code=HTTPStatus.UNAUTHORIZED, detail=\"bad auth\")\n\n\nclass AuthData(BaseModel):\n login: str = Query(None)\n password: str = Query(None)\n refresh_token: str = Query(None)\n\n\n@lndhub_ext.post(\"/ext/auth\")\nasync def lndhub_auth(data: AuthData):\n token = (\n data.refresh_token\n if data.refresh_token\n else urlsafe_b64encode(\n (data.login + \":\" + data.password).encode(\"utf-8\")\n ).decode(\"ascii\")\n )\n return {\"refresh_token\": token, \"access_token\": token}\n\n\nclass AddInvoice(BaseModel):\n amt: str = Query(...)\n memo: str = Query(...)\n preimage: str = Query(None)\n\n\n@lndhub_ext.post(\"/ext/addinvoice\")\nasync def lndhub_addinvoice(\n data: AddInvoice, wallet: WalletTypeInfo = Depends(check_wallet)\n):\n try:\n _, pr = await create_invoice(\n wallet_id=wallet.wallet.id,\n amount=int(data.amt),\n memo=data.memo or LNBITS_SITE_TITLE,\n extra={\"tag\": \"lndhub\"},\n )\n except:\n raise HTTPException(\n status_code=HTTPStatus.NOT_FOUND, detail=\"Failed to create invoice\"\n )\n invoice = bolt11.decode(pr)\n return {\n \"pay_req\": pr,\n \"payment_request\": pr,\n \"add_index\": \"500\",\n \"r_hash\": to_buffer(invoice.payment_hash),\n \"hash\": invoice.payment_hash,\n }\n\n\nclass Invoice(BaseModel):\n invoice: str = Query(...)\n\n\n@lndhub_ext.post(\"/ext/payinvoice\")\nasync def lndhub_payinvoice(\n r_invoice: Invoice, wallet: WalletTypeInfo = Depends(require_admin_key)\n):\n try:\n await pay_invoice(\n wallet_id=wallet.wallet.id,\n payment_request=r_invoice.invoice,\n extra={\"tag\": \"lndhub\"},\n )\n except:\n raise HTTPException(status_code=HTTPStatus.NOT_FOUND, detail=\"Payment failed\")\n\n invoice: bolt11.Invoice = bolt11.decode(r_invoice.invoice)\n\n return {\n \"payment_error\": 
\"\",\n \"payment_preimage\": \"0\" * 64,\n \"route\": {},\n \"payment_hash\": invoice.payment_hash,\n \"decoded\": decoded_as_lndhub(invoice),\n \"fee_msat\": 0,\n \"type\": \"paid_invoice\",\n \"fee\": 0,\n \"value\": invoice.amount_msat / 1000,\n \"timestamp\": int(time.time()),\n \"memo\": invoice.description,\n }\n\n\n@lndhub_ext.get(\"/ext/balance\")\nasync def lndhub_balance(\n wallet: WalletTypeInfo = Depends(check_wallet),\n):\n return {\"BTC\": {\"AvailableBalance\": wallet.wallet.balance}}\n\n\n@lndhub_ext.get(\"/ext/gettxs\")\nasync def lndhub_gettxs(\n wallet: WalletTypeInfo = Depends(check_wallet),\n limit: int = Query(20, ge=1, le=20),\n offset: int = Query(0, ge=0),\n):\n for payment in await get_payments(\n wallet_id=wallet.wallet.id,\n complete=False,\n pending=True,\n outgoing=True,\n incoming=False,\n limit=limit,\n offset=offset,\n exclude_uncheckable=True,\n ):\n await payment.check_status()\n\n return [\n {\n \"payment_preimage\": payment.preimage,\n \"payment_hash\": payment.payment_hash,\n \"fee_msat\": payment.fee * 1000,\n \"type\": \"paid_invoice\",\n \"fee\": payment.fee,\n \"value\": int(payment.amount / 1000),\n \"timestamp\": payment.time,\n \"memo\": payment.memo if not payment.pending else \"Payment in transition\",\n }\n for payment in reversed(\n (\n await get_payments(\n wallet_id=wallet.wallet.id,\n pending=True,\n complete=True,\n outgoing=True,\n incoming=False,\n limit=limit,\n offset=offset,\n )\n )\n )\n ]\n\n\n@lndhub_ext.get(\"/ext/getuserinvoices\")\nasync def lndhub_getuserinvoices(\n wallet: WalletTypeInfo = Depends(check_wallet),\n limit: int = Query(20, ge=1, le=20),\n offset: int = Query(0, ge=0),\n):\n for invoice in await get_payments(\n wallet_id=wallet.wallet.id,\n complete=False,\n pending=True,\n outgoing=False,\n incoming=True,\n limit=limit,\n offset=offset,\n exclude_uncheckable=True,\n ):\n await invoice.set_pending(\n (await WALLET.get_invoice_status(invoice.checking_id)).pending\n )\n\n return [\n {\n \"r_hash\": to_buffer(invoice.payment_hash),\n \"payment_request\": invoice.bolt11,\n \"add_index\": \"500\",\n \"description\": invoice.memo,\n \"payment_hash\": invoice.payment_hash,\n \"ispaid\": not invoice.pending,\n \"amt\": int(invoice.amount / 1000),\n \"expire_time\": int(time.time() + 1800),\n \"timestamp\": invoice.time,\n \"type\": \"user_invoice\",\n }\n for invoice in reversed(\n (\n await get_payments(\n wallet_id=wallet.wallet.id,\n pending=True,\n complete=True,\n incoming=True,\n outgoing=False,\n limit=limit,\n offset=offset,\n )\n )\n )\n ]\n\n\n@lndhub_ext.get(\"/ext/getbtc\")\nasync def lndhub_getbtc(wallet: WalletTypeInfo = Depends(check_wallet)):\n \"load an address for incoming onchain btc\"\n return []\n\n\n@lndhub_ext.get(\"/ext/getpending\")\nasync def lndhub_getpending(wallet: WalletTypeInfo = Depends(check_wallet)):\n \"pending onchain transactions\"\n return []\n\n\n@lndhub_ext.get(\"/ext/decodeinvoice\")\nasync def lndhub_decodeinvoice(invoice: str = Query(None)):\n inv = bolt11.decode(invoice)\n return decoded_as_lndhub(inv)\n\n\n@lndhub_ext.get(\"/ext/checkrouteinvoice\")\nasync def lndhub_checkrouteinvoice():\n \"not implemented on canonical lndhub\"\n pass\n", "path": "lnbits/extensions/lndhub/views_api.py"}]}
| 2,813 | 121 |
gh_patches_debug_19153 | rasdani/github-patches | git_diff | conan-io__conan-2762 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KeyError USERDOMAIN on `conan install`
We have at least two machines for which `conan install` errors out with the message:
```
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\conans\client\command.py", line 1187, in run
method(args[0][1:])
File "C:\Python27\lib\site-packages\conans\client\command.py", line 304, in install
install_folder=args.install_folder)
File "C:\Python27\lib\site-packages\conans\client\conan_api.py", line 61, in wrapper
return f(*args, **kwargs)
File "C:\Python27\lib\site-packages\conans\client\conan_api.py", line 444, in install
no_imports=no_imports)
File "C:\Python27\lib\site-packages\conans\client\manager.py", line 395, in install
installer.install(deps_graph, profile.build_requires, keep_build)
File "C:\Python27\lib\site-packages\conans\client\installer.py", line 262, in install
nodes_to_process = self._get_nodes(nodes_by_level, skip_private_nodes)
File "C:\Python27\lib\site-packages\conans\client\installer.py", line 501, in _get_nodes
check_outdated)
File "C:\Python27\lib\site-packages\conans\client\proxy.py", line 47, in package_available
package_folder = self._client_cache.package(package_ref, short_paths=short_paths)
File "C:\Python27\lib\site-packages\conans\paths.py", line 162, in package
return path_shortener(p, short_paths)
File "C:\Python27\lib\site-packages\conans\util\windows.py", line 57, in path_shortener
cmd = r'cacls %s /E /G "%s\%s":F' % (short_home, os.environ['USERDOMAIN'], os.environ['USERNAME'])
File "C:\Python27\lib\os.py", line 425, in __getitem__
return self.data[key.upper()]
KeyError: 'USERDOMAIN'
```
Defining an environment variable `USERDOMAIN` to any value fixes the problem.
Both machines are Windows 7, with Conan 1.2.3.
</issue>
<code>
[start of conans/util/windows.py]
1 import os
2 import subprocess
3
4 from conans.util.files import load, mkdir, save, rmdir
5 import tempfile
6
7
8 CONAN_LINK = ".conan_link"
9
10
11 def conan_expand_user(path):
12 """ wrapper to the original expanduser function, to workaround python returning
13 verbatim %USERPROFILE% when some other app (git for windows) sets HOME envvar
14 """
15 # In win these variables should exist and point to user directory, which
16 # must exist. Using context to avoid permanent modification of os.environ
17 old_env = dict(os.environ)
18 try:
19 home = os.environ.get("HOME")
20 # Problematic cases of wrong HOME variable
21 # - HOME = %USERPROFILE% verbatim, as messed by some other tools
22 # - MSYS console, that defines a different user home in /c/mingw/msys/users/xxx
23 # In these cases, it is safe to remove it and rely on USERPROFILE directly
24 if home and (not os.path.exists(home) or
25 (os.getenv("MSYSTEM") and os.getenv("USERPROFILE"))):
26 del os.environ["HOME"]
27 result = os.path.expanduser(path)
28 finally:
29 os.environ.clear()
30 os.environ.update(old_env)
31 return result
32
33
34 def path_shortener(path, short_paths):
35 """ short_paths is 4-state:
36 False: Never shorten the path
37 True: Always shorten the path, create link if not existing
38 None: Use shorten path only if already exists, not create
39 """
40 if short_paths is False or os.getenv("CONAN_USER_HOME_SHORT") == "None":
41 return path
42 link = os.path.join(path, CONAN_LINK)
43 if os.path.exists(link):
44 return load(link)
45 elif short_paths is None:
46 return path
47
48 short_home = os.getenv("CONAN_USER_HOME_SHORT")
49 if not short_home:
50 drive = os.path.splitdrive(path)[0]
51 short_home = drive + "/.conan"
52 mkdir(short_home)
53
54 # Workaround for short_home living in NTFS file systems. Give full control permission to current user to avoid
55 # access problems in cygwin/msys2 windows subsystems when using short_home folder
56 try:
57 cmd = r'cacls %s /E /G "%s\%s":F' % (short_home, os.environ['USERDOMAIN'], os.environ['USERNAME'])
58 subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet
59 except subprocess.CalledProcessError as e:
60 # cmd can fail if trying to set ACL in non NTFS drives, ignoring it.
61 pass
62
63 redirect = tempfile.mkdtemp(dir=short_home, prefix="")
64 # This "1" is the way to have a non-existing directory, so commands like
65 # shutil.copytree() to it, works. It can be removed without compromising the
66 # temp folder generator and conan-links consistency
67 redirect = os.path.join(redirect, "1")
68 save(link, redirect)
69 return redirect
70
71
72 def ignore_long_path_files(src_folder, build_folder, output):
73 def _filter(src, files):
74 filtered_files = []
75 for the_file in files:
76 source_path = os.path.join(src, the_file)
77 # Without storage path, just relative
78 rel_path = os.path.relpath(source_path, src_folder)
79 dest_path = os.path.normpath(os.path.join(build_folder, rel_path))
80 # it is NOT that "/" is counted as "\\" so it counts double
81 # seems a bug in python, overflows paths near the limit of 260,
82 if len(dest_path) >= 249:
83 filtered_files.append(the_file)
84 output.warn("Filename too long, file excluded: %s" % dest_path)
85 return filtered_files
86 return _filter
87
88
89 def rm_conandir(path):
90 """removal of a directory that might contain a link to a short path"""
91 link = os.path.join(path, CONAN_LINK)
92 if os.path.exists(link):
93 short_path = load(link)
94 rmdir(os.path.dirname(short_path))
95 rmdir(path)
96
[end of conans/util/windows.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/util/windows.py b/conans/util/windows.py
--- a/conans/util/windows.py
+++ b/conans/util/windows.py
@@ -54,9 +54,11 @@
# Workaround for short_home living in NTFS file systems. Give full control permission to current user to avoid
# access problems in cygwin/msys2 windows subsystems when using short_home folder
try:
- cmd = r'cacls %s /E /G "%s\%s":F' % (short_home, os.environ['USERDOMAIN'], os.environ['USERNAME'])
+ username = os.getenv("USERDOMAIN")
+ domainname = "%s\%s" % (username, os.environ["USERNAME"]) if username else os.environ["USERNAME"]
+ cmd = r'cacls %s /E /G "%s":F' % (short_home, domainname)
subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet
- except subprocess.CalledProcessError as e:
+ except subprocess.CalledProcessError:
# cmd can fail if trying to set ACL in non NTFS drives, ignoring it.
pass
|
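The patch sidesteps the `KeyError` by reading `USERDOMAIN` with `os.getenv()` (which returns `None` when the variable is missing) instead of `os.environ[...]` (which raises), and only prepends the domain when it is actually set. A standalone sketch of the same guard, with illustrative names and assuming `USERNAME` is always present on a Windows session:

```python
import os


def cacls_account_string():
    """Build the DOMAIN\\USER account string passed to cacls.

    Tolerates a missing USERDOMAIN, as seen on the Windows 7 machines in the
    issue, by falling back to the bare user name.
    """
    user = os.environ["USERNAME"]      # assumed to be set on any Windows session
    domain = os.getenv("USERDOMAIN")   # returns None instead of raising KeyError
    return "%s\\%s" % (domain, user) if domain else user
```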
{"golden_diff": "diff --git a/conans/util/windows.py b/conans/util/windows.py\n--- a/conans/util/windows.py\n+++ b/conans/util/windows.py\n@@ -54,9 +54,11 @@\n # Workaround for short_home living in NTFS file systems. Give full control permission to current user to avoid\n # access problems in cygwin/msys2 windows subsystems when using short_home folder\n try:\n- cmd = r'cacls %s /E /G \"%s\\%s\":F' % (short_home, os.environ['USERDOMAIN'], os.environ['USERNAME'])\n+ username = os.getenv(\"USERDOMAIN\")\n+ domainname = \"%s\\%s\" % (username, os.environ[\"USERNAME\"]) if username else os.environ[\"USERNAME\"]\n+ cmd = r'cacls %s /E /G \"%s\":F' % (short_home, domainname)\n subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet\n- except subprocess.CalledProcessError as e:\n+ except subprocess.CalledProcessError:\n # cmd can fail if trying to set ACL in non NTFS drives, ignoring it.\n pass\n", "issue": "KeyError USERDOMAIN on `conan install`\nWe have at least two machines for which `conan install` errors out with the message:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\command.py\", line 1187, in run\r\n method(args[0][1:])\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\command.py\", line 304, in install\r\n install_folder=args.install_folder)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\conan_api.py\", line 61, in wrapper\r\n return f(*args, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\conan_api.py\", line 444, in install\r\n no_imports=no_imports)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\manager.py\", line 395, in install\r\n installer.install(deps_graph, profile.build_requires, keep_build)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\installer.py\", line 262, in install\r\n nodes_to_process = self._get_nodes(nodes_by_level, skip_private_nodes)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\installer.py\", line 501, in _get_nodes\r\n check_outdated)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\client\\proxy.py\", line 47, in package_available\r\n package_folder = self._client_cache.package(package_ref, short_paths=short_paths)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\paths.py\", line 162, in package\r\n return path_shortener(p, short_paths)\r\n File \"C:\\Python27\\lib\\site-packages\\conans\\util\\windows.py\", line 57, in path_shortener\r\n cmd = r'cacls %s /E /G \"%s\\%s\":F' % (short_home, os.environ['USERDOMAIN'], os.environ['USERNAME'])\r\n File \"C:\\Python27\\lib\\os.py\", line 425, in __getitem__\r\n return self.data[key.upper()]\r\nKeyError: 'USERDOMAIN'\r\n```\r\n\r\nDefining an environment variable `USERDOMAIN` to any value fixes the problem.\r\n\r\nBoth machines are Windows 7, with Conan 1.2.3.\n", "before_files": [{"content": "import os\nimport subprocess\n\nfrom conans.util.files import load, mkdir, save, rmdir\nimport tempfile\n\n\nCONAN_LINK = \".conan_link\"\n\n\ndef conan_expand_user(path):\n \"\"\" wrapper to the original expanduser function, to workaround python returning\n verbatim %USERPROFILE% when some other app (git for windows) sets HOME envvar\n \"\"\"\n # In win these variables should exist and point to user directory, which\n # must exist. 
Using context to avoid permanent modification of os.environ\n old_env = dict(os.environ)\n try:\n home = os.environ.get(\"HOME\")\n # Problematic cases of wrong HOME variable\n # - HOME = %USERPROFILE% verbatim, as messed by some other tools\n # - MSYS console, that defines a different user home in /c/mingw/msys/users/xxx\n # In these cases, it is safe to remove it and rely on USERPROFILE directly\n if home and (not os.path.exists(home) or\n (os.getenv(\"MSYSTEM\") and os.getenv(\"USERPROFILE\"))):\n del os.environ[\"HOME\"]\n result = os.path.expanduser(path)\n finally:\n os.environ.clear()\n os.environ.update(old_env)\n return result\n\n\ndef path_shortener(path, short_paths):\n \"\"\" short_paths is 4-state:\n False: Never shorten the path\n True: Always shorten the path, create link if not existing\n None: Use shorten path only if already exists, not create\n \"\"\"\n if short_paths is False or os.getenv(\"CONAN_USER_HOME_SHORT\") == \"None\":\n return path\n link = os.path.join(path, CONAN_LINK)\n if os.path.exists(link):\n return load(link)\n elif short_paths is None:\n return path\n\n short_home = os.getenv(\"CONAN_USER_HOME_SHORT\")\n if not short_home:\n drive = os.path.splitdrive(path)[0]\n short_home = drive + \"/.conan\"\n mkdir(short_home)\n\n # Workaround for short_home living in NTFS file systems. Give full control permission to current user to avoid\n # access problems in cygwin/msys2 windows subsystems when using short_home folder\n try:\n cmd = r'cacls %s /E /G \"%s\\%s\":F' % (short_home, os.environ['USERDOMAIN'], os.environ['USERNAME'])\n subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet\n except subprocess.CalledProcessError as e:\n # cmd can fail if trying to set ACL in non NTFS drives, ignoring it.\n pass\n\n redirect = tempfile.mkdtemp(dir=short_home, prefix=\"\")\n # This \"1\" is the way to have a non-existing directory, so commands like\n # shutil.copytree() to it, works. It can be removed without compromising the\n # temp folder generator and conan-links consistency\n redirect = os.path.join(redirect, \"1\")\n save(link, redirect)\n return redirect\n\n\ndef ignore_long_path_files(src_folder, build_folder, output):\n def _filter(src, files):\n filtered_files = []\n for the_file in files:\n source_path = os.path.join(src, the_file)\n # Without storage path, just relative\n rel_path = os.path.relpath(source_path, src_folder)\n dest_path = os.path.normpath(os.path.join(build_folder, rel_path))\n # it is NOT that \"/\" is counted as \"\\\\\" so it counts double\n # seems a bug in python, overflows paths near the limit of 260,\n if len(dest_path) >= 249:\n filtered_files.append(the_file)\n output.warn(\"Filename too long, file excluded: %s\" % dest_path)\n return filtered_files\n return _filter\n\n\ndef rm_conandir(path):\n \"\"\"removal of a directory that might contain a link to a short path\"\"\"\n link = os.path.join(path, CONAN_LINK)\n if os.path.exists(link):\n short_path = load(link)\n rmdir(os.path.dirname(short_path))\n rmdir(path)\n", "path": "conans/util/windows.py"}]}
| 2,178 | 261 |
gh_patches_debug_32494 | rasdani/github-patches | git_diff | certbot__certbot-8989 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add deprecation warnings for legacy certbot.display.util code
As a follow-up to https://github.com/certbot/certbot/pull/8967, I think we should generate deprecation warnings using an approach like https://github.com/certbot/certbot/pull/6859/files#diff-e5eaf744409c293203b898ba9896da75689fd04ff5f1566c035940a5b195c257 for any code in `certbot.display.util` that is unused and/or that we don't want to be part of our public API.
</issue>
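The approach referenced in the issue swaps a proxy object into `sys.modules` so that attribute access on the module can emit `DeprecationWarning`s. A minimal sketch of that pattern follows; the class name, the list of deprecated attributes, and the warning text are assumptions for illustration, not Certbot's actual code.

```python
import sys
import warnings
from types import ModuleType
from typing import cast

# Illustrative list; the real set of deprecated names would be chosen by the
# Certbot developers (e.g. the UNUSED constants and re-exported classes).
_DEPRECATED_ATTRIBUTES = ("WIDTH", "HELP", "ESC")


class _ModuleWithDeprecations:
    """Proxy placed in sys.modules so attribute access can warn."""

    def __init__(self, module):
        # Store the wrapped module directly in __dict__ to avoid __setattr__.
        self.__dict__["_module"] = module

    def __getattr__(self, name):
        if name in _DEPRECATED_ATTRIBUTES:
            warnings.warn("{0} is deprecated and will be removed in a future "
                          "release of Certbot.".format(name),
                          DeprecationWarning, stacklevel=2)
        return getattr(self.__dict__["_module"], name)

    def __setattr__(self, name, value):
        setattr(self.__dict__["_module"], name, value)


sys.modules[__name__] = cast(ModuleType, _ModuleWithDeprecations(sys.modules[__name__]))
```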
<code>
[start of certbot/certbot/display/util.py]
1 """Certbot display.
2
3 This module (`certbot.display.util`) or its companion `certbot.display.ops`
4 should be used whenever:
5
6 - Displaying status information to the user on the terminal
7 - Collecting information from the user via prompts
8
9 Other messages can use the `logging` module. See `log.py`.
10
11 """
12 from typing import List
13 from typing import Optional
14 from typing import Tuple
15 from typing import Union
16
17
18 # These specific imports from certbot._internal.display.obj and
19 # certbot._internal.display.util are done to not break the public API of this
20 # module.
21 from certbot._internal.display.obj import FileDisplay # pylint: disable=unused-import
22 from certbot._internal.display.obj import NoninteractiveDisplay # pylint: disable=unused-import
23 from certbot._internal.display.obj import SIDE_FRAME # pylint: disable=unused-import
24 from certbot._internal.display.util import input_with_timeout # pylint: disable=unused-import
25 from certbot._internal.display.util import separate_list_input # pylint: disable=unused-import
26 from certbot._internal.display.util import summarize_domain_list # pylint: disable=unused-import
27 from certbot._internal.display import obj
28
29
30 # These constants are defined this way to make them easier to document with
31 # Sphinx and to not couple our public docstrings to our internal ones.
32 OK = obj.OK
33 """Display exit code indicating user acceptance."""
34
35 CANCEL = obj.CANCEL
36 """Display exit code for a user canceling the display."""
37
38 # These constants are unused and should be removed in a major release of
39 # Certbot.
40 WIDTH = 72
41
42 HELP = "help"
43 """Display exit code when for when the user requests more help. (UNUSED)"""
44
45 ESC = "esc"
46 """Display exit code when the user hits Escape (UNUSED)"""
47
48
49 def notify(msg: str) -> None:
50 """Display a basic status message.
51
52 :param str msg: message to display
53
54 """
55 obj.get_display().notification(msg, pause=False, decorate=False, wrap=False)
56
57
58 def notification(message: str, pause: bool = True, wrap: bool = True,
59 force_interactive: bool = False, decorate: bool = True) -> None:
60 """Displays a notification and waits for user acceptance.
61
62 :param str message: Message to display
63 :param bool pause: Whether or not the program should pause for the
64 user's confirmation
65 :param bool wrap: Whether or not the application should wrap text
66 :param bool force_interactive: True if it's safe to prompt the user
67 because it won't cause any workflow regressions
68 :param bool decorate: Whether to surround the message with a
69 decorated frame
70
71 """
72 obj.get_display().notification(message, pause=pause, wrap=wrap,
73 force_interactive=force_interactive, decorate=decorate)
74
75
76 def menu(message: str, choices: Union[List[str], Tuple[str, str]],
77 default: Optional[int] = None, cli_flag: Optional[str] = None,
78 force_interactive: bool = False) -> Tuple[str, int]:
79 """Display a menu.
80
81 .. todo:: This doesn't enable the help label/button (I wasn't sold on
82 any interface I came up with for this). It would be a nice feature.
83
84 :param str message: title of menu
85 :param choices: Menu lines, len must be > 0
86 :type choices: list of tuples (tag, item) or
87 list of descriptions (tags will be enumerated)
88 :param default: default value to return (if one exists)
89 :param str cli_flag: option used to set this value with the CLI
90 :param bool force_interactive: True if it's safe to prompt the user
91 because it won't cause any workflow regressions
92
93 :returns: tuple of (`code`, `index`) where
94 `code` - str display exit code
95 `index` - int index of the user's selection
96
97 :rtype: tuple
98
99 """
100 return obj.get_display().menu(message, choices, default=default, cli_flag=cli_flag,
101 force_interactive=force_interactive)
102
103
104 def input_text(message: str, default: Optional[str] = None, cli_flag: Optional[str] = None,
105 force_interactive: bool = False) -> Tuple[str, str]:
106 """Accept input from the user.
107
108 :param str message: message to display to the user
109 :param default: default value to return (if one exists)
110 :param str cli_flag: option used to set this value with the CLI
111 :param bool force_interactive: True if it's safe to prompt the user
112 because it won't cause any workflow regressions
113
114 :returns: tuple of (`code`, `input`) where
115 `code` - str display exit code
116 `input` - str of the user's input
117 :rtype: tuple
118
119 """
120 return obj.get_display().input(message, default=default, cli_flag=cli_flag,
121 force_interactive=force_interactive)
122
123
124 def yesno(message: str, yes_label: str = "Yes", no_label: str = "No",
125 default: Optional[bool] = None, cli_flag: Optional[str] = None,
126 force_interactive: bool = False) -> bool:
127 """Query the user with a yes/no question.
128
129 Yes and No label must begin with different letters, and must contain at
130 least one letter each.
131
132 :param str message: question for the user
133 :param str yes_label: Label of the "Yes" parameter
134 :param str no_label: Label of the "No" parameter
135 :param default: default value to return (if one exists)
136 :param str cli_flag: option used to set this value with the CLI
137 :param bool force_interactive: True if it's safe to prompt the user
138 because it won't cause any workflow regressions
139
140 :returns: True for "Yes", False for "No"
141 :rtype: bool
142
143 """
144 return obj.get_display().yesno(message, yes_label=yes_label, no_label=no_label, default=default,
145 cli_flag=cli_flag, force_interactive=force_interactive)
146
147
148 def checklist(message: str, tags: List[str], default: Optional[str] = None,
149 cli_flag: Optional[str] = None,
150 force_interactive: bool = False) -> Tuple[str, List[str]]:
151 """Display a checklist.
152
153 :param str message: Message to display to user
154 :param list tags: `str` tags to select, len(tags) > 0
155 :param default: default value to return (if one exists)
156 :param str cli_flag: option used to set this value with the CLI
157 :param bool force_interactive: True if it's safe to prompt the user
158 because it won't cause any workflow regressions
159
160 :returns: tuple of (`code`, `tags`) where
161 `code` - str display exit code
162 `tags` - list of selected tags
163 :rtype: tuple
164
165 """
166 return obj.get_display().checklist(message, tags, default=default, cli_flag=cli_flag,
167 force_interactive=force_interactive)
168
169
170 def directory_select(message: str, default: Optional[str] = None, cli_flag: Optional[str] = None,
171 force_interactive: bool = False) -> Tuple[int, str]:
172 """Display a directory selection screen.
173
174 :param str message: prompt to give the user
175 :param default: default value to return (if one exists)
176 :param str cli_flag: option used to set this value with the CLI
177 :param bool force_interactive: True if it's safe to prompt the user
178 because it won't cause any workflow regressions
179
180 :returns: tuple of the form (`code`, `string`) where
181 `code` - display exit code
182 `string` - input entered by the user
183
184 """
185 return obj.get_display().directory_select(message, default=default, cli_flag=cli_flag,
186 force_interactive=force_interactive)
187
188
189 def assert_valid_call(prompt, default, cli_flag, force_interactive):
190 """Verify that provided arguments is a valid IDisplay call.
191
192 :param str prompt: prompt for the user
193 :param default: default answer to prompt
194 :param str cli_flag: command line option for setting an answer
195 to this question
196 :param bool force_interactive: if interactivity is forced by the
197 IDisplay call
198
199 """
200 msg = "Invalid IDisplay call for this prompt:\n{0}".format(prompt)
201 if cli_flag:
202 msg += ("\nYou can set an answer to "
203 "this prompt with the {0} flag".format(cli_flag))
204 assert default is not None or force_interactive, msg
205
[end of certbot/certbot/display/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/certbot/certbot/display/util.py b/certbot/certbot/display/util.py
--- a/certbot/certbot/display/util.py
+++ b/certbot/certbot/display/util.py
@@ -9,10 +9,14 @@
Other messages can use the `logging` module. See `log.py`.
"""
+import sys
+from types import ModuleType
+from typing import cast
from typing import List
from typing import Optional
from typing import Tuple
from typing import Union
+import warnings
# These specific imports from certbot._internal.display.obj and
@@ -202,3 +206,36 @@
msg += ("\nYou can set an answer to "
"this prompt with the {0} flag".format(cli_flag))
assert default is not None or force_interactive, msg
+
+
+# This class takes a similar approach to the cryptography project to deprecate attributes
+# in public modules. See the _ModuleWithDeprecation class here:
+# https://github.com/pyca/cryptography/blob/91105952739442a74582d3e62b3d2111365b0dc7/src/cryptography/utils.py#L129
+class _DisplayUtilDeprecationModule:
+ """
+ Internal class delegating to a module, and displaying warnings when attributes
+ related to deprecated attributes in the certbot.display.util module.
+ """
+ def __init__(self, module):
+ self.__dict__['_module'] = module
+
+ def __getattr__(self, attr):
+ if attr in ('FileDisplay', 'NoninteractiveDisplay', 'SIDE_FRAME', 'input_with_timeout',
+ 'separate_list_input', 'summarize_domain_list', 'WIDTH', 'HELP', 'ESC'):
+ warnings.warn('{0} attribute in certbot.display.util module is deprecated '
+ 'and will be removed soon.'.format(attr),
+ DeprecationWarning, stacklevel=2)
+ return getattr(self._module, attr)
+
+ def __setattr__(self, attr, value): # pragma: no cover
+ setattr(self._module, attr, value)
+
+ def __delattr__(self, attr): # pragma: no cover
+ delattr(self._module, attr)
+
+ def __dir__(self): # pragma: no cover
+ return ['_module'] + dir(self._module)
+
+
+# Patching ourselves to warn about deprecation and planned removal of some elements in the module.
+sys.modules[__name__] = cast(ModuleType, _DisplayUtilDeprecationModule(sys.modules[__name__]))
|
{"golden_diff": "diff --git a/certbot/certbot/display/util.py b/certbot/certbot/display/util.py\n--- a/certbot/certbot/display/util.py\n+++ b/certbot/certbot/display/util.py\n@@ -9,10 +9,14 @@\n Other messages can use the `logging` module. See `log.py`.\n \n \"\"\"\n+import sys\n+from types import ModuleType\n+from typing import cast\n from typing import List\n from typing import Optional\n from typing import Tuple\n from typing import Union\n+import warnings\n \n \n # These specific imports from certbot._internal.display.obj and\n@@ -202,3 +206,36 @@\n msg += (\"\\nYou can set an answer to \"\n \"this prompt with the {0} flag\".format(cli_flag))\n assert default is not None or force_interactive, msg\n+\n+\n+# This class takes a similar approach to the cryptography project to deprecate attributes\n+# in public modules. See the _ModuleWithDeprecation class here:\n+# https://github.com/pyca/cryptography/blob/91105952739442a74582d3e62b3d2111365b0dc7/src/cryptography/utils.py#L129\n+class _DisplayUtilDeprecationModule:\n+ \"\"\"\n+ Internal class delegating to a module, and displaying warnings when attributes\n+ related to deprecated attributes in the certbot.display.util module.\n+ \"\"\"\n+ def __init__(self, module):\n+ self.__dict__['_module'] = module\n+\n+ def __getattr__(self, attr):\n+ if attr in ('FileDisplay', 'NoninteractiveDisplay', 'SIDE_FRAME', 'input_with_timeout',\n+ 'separate_list_input', 'summarize_domain_list', 'WIDTH', 'HELP', 'ESC'):\n+ warnings.warn('{0} attribute in certbot.display.util module is deprecated '\n+ 'and will be removed soon.'.format(attr),\n+ DeprecationWarning, stacklevel=2)\n+ return getattr(self._module, attr)\n+\n+ def __setattr__(self, attr, value): # pragma: no cover\n+ setattr(self._module, attr, value)\n+\n+ def __delattr__(self, attr): # pragma: no cover\n+ delattr(self._module, attr)\n+\n+ def __dir__(self): # pragma: no cover\n+ return ['_module'] + dir(self._module)\n+\n+\n+# Patching ourselves to warn about deprecation and planned removal of some elements in the module.\n+sys.modules[__name__] = cast(ModuleType, _DisplayUtilDeprecationModule(sys.modules[__name__]))\n", "issue": "Add deprecation warnings for legacy certbot.display.util code\nAs a followup from https://github.com/certbot/certbot/pull/8967, I think we should generate deprecation warnings using an approach like https://github.com/certbot/certbot/pull/6859/files#diff-e5eaf744409c293203b898ba9896da75689fd04ff5f1566c035940a5b195c257 for any code in `certbot.display.util` that is unused and/or we don't want to be part of our public API.\n", "before_files": [{"content": "\"\"\"Certbot display.\n\nThis module (`certbot.display.util`) or its companion `certbot.display.ops`\nshould be used whenever:\n\n- Displaying status information to the user on the terminal\n- Collecting information from the user via prompts\n\nOther messages can use the `logging` module. 
See `log.py`.\n\n\"\"\"\nfrom typing import List\nfrom typing import Optional\nfrom typing import Tuple\nfrom typing import Union\n\n\n# These specific imports from certbot._internal.display.obj and\n# certbot._internal.display.util are done to not break the public API of this\n# module.\nfrom certbot._internal.display.obj import FileDisplay # pylint: disable=unused-import\nfrom certbot._internal.display.obj import NoninteractiveDisplay # pylint: disable=unused-import\nfrom certbot._internal.display.obj import SIDE_FRAME # pylint: disable=unused-import\nfrom certbot._internal.display.util import input_with_timeout # pylint: disable=unused-import\nfrom certbot._internal.display.util import separate_list_input # pylint: disable=unused-import\nfrom certbot._internal.display.util import summarize_domain_list # pylint: disable=unused-import\nfrom certbot._internal.display import obj\n\n\n# These constants are defined this way to make them easier to document with\n# Sphinx and to not couple our public docstrings to our internal ones.\nOK = obj.OK\n\"\"\"Display exit code indicating user acceptance.\"\"\"\n\nCANCEL = obj.CANCEL\n\"\"\"Display exit code for a user canceling the display.\"\"\"\n\n# These constants are unused and should be removed in a major release of\n# Certbot.\nWIDTH = 72\n\nHELP = \"help\"\n\"\"\"Display exit code when for when the user requests more help. (UNUSED)\"\"\"\n\nESC = \"esc\"\n\"\"\"Display exit code when the user hits Escape (UNUSED)\"\"\"\n\n\ndef notify(msg: str) -> None:\n \"\"\"Display a basic status message.\n\n :param str msg: message to display\n\n \"\"\"\n obj.get_display().notification(msg, pause=False, decorate=False, wrap=False)\n\n\ndef notification(message: str, pause: bool = True, wrap: bool = True,\n force_interactive: bool = False, decorate: bool = True) -> None:\n \"\"\"Displays a notification and waits for user acceptance.\n\n :param str message: Message to display\n :param bool pause: Whether or not the program should pause for the\n user's confirmation\n :param bool wrap: Whether or not the application should wrap text\n :param bool force_interactive: True if it's safe to prompt the user\n because it won't cause any workflow regressions\n :param bool decorate: Whether to surround the message with a\n decorated frame\n\n \"\"\"\n obj.get_display().notification(message, pause=pause, wrap=wrap,\n force_interactive=force_interactive, decorate=decorate)\n\n\ndef menu(message: str, choices: Union[List[str], Tuple[str, str]],\n default: Optional[int] = None, cli_flag: Optional[str] = None,\n force_interactive: bool = False) -> Tuple[str, int]:\n \"\"\"Display a menu.\n\n .. todo:: This doesn't enable the help label/button (I wasn't sold on\n any interface I came up with for this). 
It would be a nice feature.\n\n :param str message: title of menu\n :param choices: Menu lines, len must be > 0\n :type choices: list of tuples (tag, item) or\n list of descriptions (tags will be enumerated)\n :param default: default value to return (if one exists)\n :param str cli_flag: option used to set this value with the CLI\n :param bool force_interactive: True if it's safe to prompt the user\n because it won't cause any workflow regressions\n\n :returns: tuple of (`code`, `index`) where\n `code` - str display exit code\n `index` - int index of the user's selection\n\n :rtype: tuple\n\n \"\"\"\n return obj.get_display().menu(message, choices, default=default, cli_flag=cli_flag,\n force_interactive=force_interactive)\n\n\ndef input_text(message: str, default: Optional[str] = None, cli_flag: Optional[str] = None,\n force_interactive: bool = False) -> Tuple[str, str]:\n \"\"\"Accept input from the user.\n\n :param str message: message to display to the user\n :param default: default value to return (if one exists)\n :param str cli_flag: option used to set this value with the CLI\n :param bool force_interactive: True if it's safe to prompt the user\n because it won't cause any workflow regressions\n\n :returns: tuple of (`code`, `input`) where\n `code` - str display exit code\n `input` - str of the user's input\n :rtype: tuple\n\n \"\"\"\n return obj.get_display().input(message, default=default, cli_flag=cli_flag,\n force_interactive=force_interactive)\n\n\ndef yesno(message: str, yes_label: str = \"Yes\", no_label: str = \"No\",\n default: Optional[bool] = None, cli_flag: Optional[str] = None,\n force_interactive: bool = False) -> bool:\n \"\"\"Query the user with a yes/no question.\n\n Yes and No label must begin with different letters, and must contain at\n least one letter each.\n\n :param str message: question for the user\n :param str yes_label: Label of the \"Yes\" parameter\n :param str no_label: Label of the \"No\" parameter\n :param default: default value to return (if one exists)\n :param str cli_flag: option used to set this value with the CLI\n :param bool force_interactive: True if it's safe to prompt the user\n because it won't cause any workflow regressions\n\n :returns: True for \"Yes\", False for \"No\"\n :rtype: bool\n\n \"\"\"\n return obj.get_display().yesno(message, yes_label=yes_label, no_label=no_label, default=default,\n cli_flag=cli_flag, force_interactive=force_interactive)\n\n\ndef checklist(message: str, tags: List[str], default: Optional[str] = None,\n cli_flag: Optional[str] = None,\n force_interactive: bool = False) -> Tuple[str, List[str]]:\n \"\"\"Display a checklist.\n\n :param str message: Message to display to user\n :param list tags: `str` tags to select, len(tags) > 0\n :param default: default value to return (if one exists)\n :param str cli_flag: option used to set this value with the CLI\n :param bool force_interactive: True if it's safe to prompt the user\n because it won't cause any workflow regressions\n\n :returns: tuple of (`code`, `tags`) where\n `code` - str display exit code\n `tags` - list of selected tags\n :rtype: tuple\n\n \"\"\"\n return obj.get_display().checklist(message, tags, default=default, cli_flag=cli_flag,\n force_interactive=force_interactive)\n\n\ndef directory_select(message: str, default: Optional[str] = None, cli_flag: Optional[str] = None,\n force_interactive: bool = False) -> Tuple[int, str]:\n \"\"\"Display a directory selection screen.\n\n :param str message: prompt to give the user\n :param default: default value to 
return (if one exists)\n :param str cli_flag: option used to set this value with the CLI\n :param bool force_interactive: True if it's safe to prompt the user\n because it won't cause any workflow regressions\n\n :returns: tuple of the form (`code`, `string`) where\n `code` - display exit code\n `string` - input entered by the user\n\n \"\"\"\n return obj.get_display().directory_select(message, default=default, cli_flag=cli_flag,\n force_interactive=force_interactive)\n\n\ndef assert_valid_call(prompt, default, cli_flag, force_interactive):\n \"\"\"Verify that provided arguments is a valid IDisplay call.\n\n :param str prompt: prompt for the user\n :param default: default answer to prompt\n :param str cli_flag: command line option for setting an answer\n to this question\n :param bool force_interactive: if interactivity is forced by the\n IDisplay call\n\n \"\"\"\n msg = \"Invalid IDisplay call for this prompt:\\n{0}\".format(prompt)\n if cli_flag:\n msg += (\"\\nYou can set an answer to \"\n \"this prompt with the {0} flag\".format(cli_flag))\n assert default is not None or force_interactive, msg\n", "path": "certbot/certbot/display/util.py"}]}
| 3,114 | 595 |
gh_patches_debug_10516
|
rasdani/github-patches
|
git_diff
|
Gallopsled__pwntools-2307
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pwn libcdb file fails if binary doesn't contain b'/bin/sh\x00'
Hello,
the "pwn libcdb file /[something]" command crashes depending on whether or not the binary contains the string b'/bin/sh\x00'.
This works:
```
$ pwn libcdb file /bin/bash
[*] bash
BuildID: e1da91a3e72343eb054c8c69a8d6b4240acb8b10
MD5: c33ad3a4937b1c186a8a1279bb31e702
SHA1: d07f822b462ecf5ae31f5ccf1c6657b7505afb3f
SHA256: a8334e823ce220c4a375e1d5f32fabc1bd47abb6810760ea4100415b55a097e4
Symbols:
dup2 = 0x302b4
printf = not found
puts = 0x30024
read = 0x30494
str_bin_sh = 0x336ca
system = not found
write = 0x30134
```
This crashes, as the search returns nothing:
```
$ pwn libcdb file /bin/ls
[*] ls
BuildID: e2ca832f1c2112aea9d7b9bc639e97e873a6b516
MD5: df0e7216034340f844de8e3b3c37d32b
SHA1: 0c5f47f25f4379690945f6e7eaa92e1999d0755d
SHA256: 9379a0fa9ed1e0b4302c4a2c9b1254d3cd76a9048f0ead3c9e216a5082b536bf
Symbols:
Traceback (most recent call last):
File "/usr/bin/pwn", line 33, in <module>
sys.exit(load_entry_point('pwntools==4.11.1', 'console_scripts', 'pwn')())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/pwnlib/commandline/main.py", line 58, in main
commands[args.command](args)
File "/usr/lib/python3.12/site-packages/pwnlib/commandline/libcdb.py", line 236, in main
synthetic_symbols = collect_synthetic_symbols(exe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/pwnlib/commandline/libcdb.py", line 180, in collect_synthetic_symbols
exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\x00'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
```
</issue>
<code>
[start of pwnlib/commandline/libcdb.py]
1 #!/usr/bin/env python
2 from __future__ import absolute_import
3 from __future__ import division
4 from __future__ import print_function
5
6 import re
7 import shutil
8 import sys
9
10 import pwnlib.args
11 pwnlib.args.free_form = False
12
13 from pwn import *
14 from pwnlib.commandline import common
15
16 parser = common.parser_commands.add_parser(
17 'libcdb',
18 help = 'Print various information about a libc binary',
19 description = 'Print various information about a libc binary'
20 )
21
22 libc_commands = parser.add_subparsers(
23 dest = 'libc_command'
24 )
25
26 lookup_parser = libc_commands.add_parser(
27 'lookup',
28 help = 'Lookup a libc version by function offsets',
29 description = 'Lookup a libc version by function offsets'
30 )
31
32 lookup_parser.add_argument(
33 'symbol_offset_pairs',
34 metavar = 'symbol_offset_pairs',
35 nargs = '+',
36 help = 'Symbol and offset pairs to lookup matching libc version. Can be any number of pairs to narrow the search. Example: "read 3e0 write 520"'
37 )
38
39 lookup_parser.add_argument(
40 '--download-libc',
41 action = 'store_true',
42 default = False,
43 help = 'Attempt to download the matching libc.so'
44 )
45
46 lookup_parser.add_argument(
47 '--unstrip',
48 action = 'store_true',
49 default = True,
50 help = 'Attempt to unstrip the libc binary with debug symbols from a debuginfod server'
51 )
52
53 lookup_parser.add_argument(
54 '--no-unstrip',
55 action = 'store_false',
56 dest = 'unstrip',
57 help = 'Do NOT attempt to unstrip the libc binary with debug symbols from a debuginfod server'
58 )
59
60 hash_parser = libc_commands.add_parser(
61 'hash',
62 help = 'Display information of a libc version given an unique hash',
63 description = 'Display information of a libc version given an unique hash'
64 )
65
66 hash_parser.add_argument(
67 'hash_value',
68 metavar = 'hash_value',
69 nargs = '+',
70 help = 'Hex encoded hash value'
71 )
72
73 hash_parser.add_argument(
74 '-t', '--hash_type',
75 nargs = '?',
76 type = str,
77 choices = ['id', 'buildid', 'md5', 'sha1', 'sha256'],
78 default = 'buildid',
79 help = 'The type of the provided hash value. Supported hashtypes: id, buildid, md5, sha1, sha256'
80 )
81
82 hash_parser.add_argument(
83 '--download-libc',
84 action = 'store_true',
85 default = False,
86 help = 'Attempt to download the matching libc.so'
87 )
88
89 hash_parser.add_argument(
90 '--unstrip',
91 action = 'store_true',
92 default = True,
93 help = 'Attempt to unstrip the libc binary with debug symbols from a debuginfod server'
94 )
95
96 hash_parser.add_argument(
97 '--no-unstrip',
98 action = 'store_false',
99 dest = 'unstrip',
100 help = 'Do NOT attempt to unstrip the libc binary with debug symbols from a debuginfod server'
101 )
102
103 file_parser = libc_commands.add_parser(
104 'file',
105 help = 'Dump information about a libc binary',
106 description = 'Dump information about a libc binary'
107 )
108
109 file_parser.add_argument(
110 'files',
111 metavar = 'files',
112 nargs = '+',
113 help = 'Libc binary to dump'
114 )
115
116 file_parser.add_argument(
117 '-s', '--symbols',
118 metavar = 'symbols',
119 nargs = '*',
120 help = 'List of symbol offsets to dump in addition to the common ones'
121 )
122
123 file_parser.add_argument(
124 '-o', '--offset',
125 metavar = 'offset',
126 type = str,
127 help = 'Display all offsets relative to this symbol'
128 )
129
130 file_parser.add_argument(
131 '--unstrip',
132 action = 'store_true',
133 default = False,
134 help = 'Attempt to unstrip the libc binary inplace with debug symbols from a debuginfod server'
135 )
136
137 common_symbols = ['dup2', 'printf', 'puts', 'read', 'system', 'write']
138
139 def find_libc(params):
140 import requests
141 url = "https://libc.rip/api/find"
142 result = requests.post(url, json=params, timeout=20)
143 log.debug('Request: %s', params)
144 log.debug('Result: %s', result.json())
145 if result.status_code != 200 or len(result.json()) == 0:
146 log.failure("Could not find libc for %s on libc.rip", params)
147 return []
148
149 return result.json()
150
151 def print_libc(libc):
152 log.info('%s', text.red(libc['id']))
153 log.indented('\t%-20s %s', text.green('BuildID:'), libc['buildid'])
154 log.indented('\t%-20s %s', text.green('MD5:'), libc['md5'])
155 log.indented('\t%-20s %s', text.green('SHA1:'), libc['sha1'])
156 log.indented('\t%-20s %s', text.green('SHA256:'), libc['sha256'])
157 log.indented('\t%s', text.green('Symbols:'))
158 for symbol in libc['symbols'].items():
159 log.indented('\t%25s = %s', symbol[0], symbol[1])
160
161 def handle_remote_libc(args, libc):
162 print_libc(libc)
163 if args.download_libc:
164 path = libcdb.search_by_build_id(libc['buildid'], args.unstrip)
165 if path:
166 if args.unstrip:
167 libcdb.unstrip_libc(path)
168 shutil.copy(path, './{}.so'.format(libc['id']))
169
170 def translate_offset(offs, args, exe):
171 if args.offset:
172 if args.offset not in exe.symbols:
173 log.info_once('offset symbol %s not found. ignoring.', args.offset)
174 return offs
175 return offs - exe.symbols[args.offset]
176 return offs
177
178 def collect_synthetic_symbols(exe):
179 available_symbols = ['str_bin_sh']
180 exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\x00'))
181
182 libc_start_main_return = exe.libc_start_main_return
183 if libc_start_main_return > 0:
184 exe.symbols['__libc_start_main_ret'] = libc_start_main_return
185 available_symbols.append('__libc_start_main_ret')
186
187 return available_symbols
188
189 def main(args):
190 if len(sys.argv) < 3:
191 parser.print_usage()
192 sys.exit()
193
194 if args.libc_command == 'lookup':
195 pairs = args.symbol_offset_pairs
196 if len(pairs) % 2 != 0:
197 log.failure('Uneven number of arguments. Please provide "symbol offset" pairs')
198 return
199
200 symbols = {pairs[i]:pairs[i+1] for i in range(0, len(pairs), 2)}
201 matched_libcs = find_libc({'symbols': symbols})
202 for libc in matched_libcs:
203 handle_remote_libc(args, libc)
204
205 elif args.libc_command == 'hash':
206 for hash_value in args.hash_value:
207 matched_libcs = find_libc({args.hash_type: hash_value})
208 for libc in matched_libcs:
209 handle_remote_libc(args, libc)
210
211 elif args.libc_command == 'file':
212 from hashlib import md5, sha1, sha256
213 for file in args.files:
214 if not os.path.exists(file) or not os.path.isfile(file):
215 log.failure('File does not exist %s', args.file)
216 continue
217
218 if args.unstrip:
219 libcdb.unstrip_libc(file)
220
221 exe = ELF(file, checksec=False)
222 log.info('%s', text.red(os.path.basename(file)))
223
224 libc_version = re.search(br'libc[ -](\d+\.\d+)', exe.data)
225 if libc_version:
226 log.indented('%-20s %s', text.green('Version:'), libc_version.group(1).decode())
227
228 if exe.buildid:
229 log.indented('%-20s %s', text.green('BuildID:'), enhex(exe.buildid))
230 log.indented('%-20s %s', text.green('MD5:'), md5(exe.data).hexdigest())
231 log.indented('%-20s %s', text.green('SHA1:'), sha1(exe.data).hexdigest())
232 log.indented('%-20s %s', text.green('SHA256:'), sha256(exe.data).hexdigest())
233
234 # Always dump the basic list of common symbols
235 log.indented('%s', text.green('Symbols:'))
236 synthetic_symbols = collect_synthetic_symbols(exe)
237
238 symbols = common_symbols + (args.symbols or []) + synthetic_symbols
239 symbols.sort()
240 for symbol in symbols:
241 if symbol not in exe.symbols:
242 log.indented('%25s = %s', symbol, text.red('not found'))
243 else:
244 log.indented('%25s = %#x', symbol, translate_offset(exe.symbols[symbol], args, exe))
245
246 if __name__ == '__main__':
247 pwnlib.commandline.common.main(__file__)
248
[end of pwnlib/commandline/libcdb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwnlib/commandline/libcdb.py b/pwnlib/commandline/libcdb.py
--- a/pwnlib/commandline/libcdb.py
+++ b/pwnlib/commandline/libcdb.py
@@ -176,9 +176,13 @@
return offs
def collect_synthetic_symbols(exe):
- available_symbols = ['str_bin_sh']
- exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\x00'))
-
+ available_symbols = []
+ try:
+ exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\x00'))
+ available_symbols.append('str_bin_sh')
+ except StopIteration:
+ pass
+
libc_start_main_return = exe.libc_start_main_return
if libc_start_main_return > 0:
exe.symbols['__libc_start_main_ret'] = libc_start_main_return
|
{"golden_diff": "diff --git a/pwnlib/commandline/libcdb.py b/pwnlib/commandline/libcdb.py\n--- a/pwnlib/commandline/libcdb.py\n+++ b/pwnlib/commandline/libcdb.py\n@@ -176,9 +176,13 @@\n return offs\n \n def collect_synthetic_symbols(exe):\n- available_symbols = ['str_bin_sh']\n- exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\\x00'))\n-\n+ available_symbols = []\n+ try:\n+ exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\\x00'))\n+ available_symbols.append('str_bin_sh')\n+ except StopIteration:\n+ pass\n+ \n libc_start_main_return = exe.libc_start_main_return\n if libc_start_main_return > 0:\n exe.symbols['__libc_start_main_ret'] = libc_start_main_return\n", "issue": "pwn libcdb file fails if binary doesn't contain b'/bin/sh\\x00'\nHello,\r\nthe \"pwn libcdb file /[something]\" is crashing depending whether or not the binary contains the string b'/bin/sh\\x00'.\r\n\r\nThis works:\r\n```\r\n$ pwn libcdb file /bin/bash\r\n[*] bash\r\n BuildID: e1da91a3e72343eb054c8c69a8d6b4240acb8b10\r\n MD5: c33ad3a4937b1c186a8a1279bb31e702\r\n SHA1: d07f822b462ecf5ae31f5ccf1c6657b7505afb3f\r\n SHA256: a8334e823ce220c4a375e1d5f32fabc1bd47abb6810760ea4100415b55a097e4\r\n Symbols:\r\n dup2 = 0x302b4\r\n printf = not found\r\n puts = 0x30024\r\n read = 0x30494\r\n str_bin_sh = 0x336ca\r\n system = not found\r\n write = 0x30134\r\n\r\n```\r\n\r\nThis is crashing as search returns :\r\n```\r\n$ pwn libcdb file /bin/ls\r\n[*] ls\r\n BuildID: e2ca832f1c2112aea9d7b9bc639e97e873a6b516\r\n MD5: df0e7216034340f844de8e3b3c37d32b\r\n SHA1: 0c5f47f25f4379690945f6e7eaa92e1999d0755d\r\n SHA256: 9379a0fa9ed1e0b4302c4a2c9b1254d3cd76a9048f0ead3c9e216a5082b536bf\r\n Symbols:\r\nTraceback (most recent call last):\r\n File \"/usr/bin/pwn\", line 33, in <module>\r\n sys.exit(load_entry_point('pwntools==4.11.1', 'console_scripts', 'pwn')())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.12/site-packages/pwnlib/commandline/main.py\", line 58, in main\r\n commands[args.command](args)\r\n File \"/usr/lib/python3.12/site-packages/pwnlib/commandline/libcdb.py\", line 236, in main\r\n synthetic_symbols = collect_synthetic_symbols(exe)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.12/site-packages/pwnlib/commandline/libcdb.py\", line 180, in collect_synthetic_symbols\r\n exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\\x00'))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nStopIteration\r\n\r\n```\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport re\nimport shutil\nimport sys\n\nimport pwnlib.args\npwnlib.args.free_form = False\n\nfrom pwn import *\nfrom pwnlib.commandline import common\n\nparser = common.parser_commands.add_parser(\n 'libcdb',\n help = 'Print various information about a libc binary',\n description = 'Print various information about a libc binary'\n)\n\nlibc_commands = parser.add_subparsers(\n dest = 'libc_command'\n)\n\nlookup_parser = libc_commands.add_parser(\n 'lookup',\n help = 'Lookup a libc version by function offsets',\n description = 'Lookup a libc version by function offsets'\n)\n\nlookup_parser.add_argument(\n 'symbol_offset_pairs',\n metavar = 'symbol_offset_pairs',\n nargs = '+',\n help = 'Symbol and offset pairs to lookup matching libc version. Can be any number of pairs to narrow the search. 
Example: \"read 3e0 write 520\"'\n)\n\nlookup_parser.add_argument(\n '--download-libc',\n action = 'store_true',\n default = False,\n help = 'Attempt to download the matching libc.so'\n)\n\nlookup_parser.add_argument(\n '--unstrip',\n action = 'store_true',\n default = True,\n help = 'Attempt to unstrip the libc binary with debug symbols from a debuginfod server'\n)\n\nlookup_parser.add_argument(\n '--no-unstrip',\n action = 'store_false',\n dest = 'unstrip',\n help = 'Do NOT attempt to unstrip the libc binary with debug symbols from a debuginfod server'\n)\n\nhash_parser = libc_commands.add_parser(\n 'hash',\n help = 'Display information of a libc version given an unique hash',\n description = 'Display information of a libc version given an unique hash'\n)\n\nhash_parser.add_argument(\n 'hash_value',\n metavar = 'hash_value',\n nargs = '+',\n help = 'Hex encoded hash value'\n)\n\nhash_parser.add_argument(\n '-t', '--hash_type',\n nargs = '?',\n type = str,\n choices = ['id', 'buildid', 'md5', 'sha1', 'sha256'],\n default = 'buildid',\n help = 'The type of the provided hash value. Supported hashtypes: id, buildid, md5, sha1, sha256'\n)\n\nhash_parser.add_argument(\n '--download-libc',\n action = 'store_true',\n default = False,\n help = 'Attempt to download the matching libc.so'\n)\n\nhash_parser.add_argument(\n '--unstrip',\n action = 'store_true',\n default = True,\n help = 'Attempt to unstrip the libc binary with debug symbols from a debuginfod server'\n)\n\nhash_parser.add_argument(\n '--no-unstrip',\n action = 'store_false',\n dest = 'unstrip',\n help = 'Do NOT attempt to unstrip the libc binary with debug symbols from a debuginfod server'\n)\n\nfile_parser = libc_commands.add_parser(\n 'file',\n help = 'Dump information about a libc binary',\n description = 'Dump information about a libc binary'\n)\n\nfile_parser.add_argument(\n 'files',\n metavar = 'files',\n nargs = '+',\n help = 'Libc binary to dump'\n)\n\nfile_parser.add_argument(\n '-s', '--symbols',\n metavar = 'symbols',\n nargs = '*',\n help = 'List of symbol offsets to dump in addition to the common ones'\n)\n\nfile_parser.add_argument(\n '-o', '--offset',\n metavar = 'offset',\n type = str,\n help = 'Display all offsets relative to this symbol'\n)\n\nfile_parser.add_argument(\n '--unstrip',\n action = 'store_true',\n default = False,\n help = 'Attempt to unstrip the libc binary inplace with debug symbols from a debuginfod server'\n)\n\ncommon_symbols = ['dup2', 'printf', 'puts', 'read', 'system', 'write']\n\ndef find_libc(params):\n import requests\n url = \"https://libc.rip/api/find\"\n result = requests.post(url, json=params, timeout=20)\n log.debug('Request: %s', params)\n log.debug('Result: %s', result.json())\n if result.status_code != 200 or len(result.json()) == 0:\n log.failure(\"Could not find libc for %s on libc.rip\", params)\n return []\n\n return result.json()\n\ndef print_libc(libc):\n log.info('%s', text.red(libc['id']))\n log.indented('\\t%-20s %s', text.green('BuildID:'), libc['buildid'])\n log.indented('\\t%-20s %s', text.green('MD5:'), libc['md5'])\n log.indented('\\t%-20s %s', text.green('SHA1:'), libc['sha1'])\n log.indented('\\t%-20s %s', text.green('SHA256:'), libc['sha256'])\n log.indented('\\t%s', text.green('Symbols:'))\n for symbol in libc['symbols'].items():\n log.indented('\\t%25s = %s', symbol[0], symbol[1])\n\ndef handle_remote_libc(args, libc):\n print_libc(libc)\n if args.download_libc:\n path = libcdb.search_by_build_id(libc['buildid'], args.unstrip)\n if path:\n if args.unstrip:\n 
libcdb.unstrip_libc(path)\n shutil.copy(path, './{}.so'.format(libc['id']))\n\ndef translate_offset(offs, args, exe):\n if args.offset:\n if args.offset not in exe.symbols:\n log.info_once('offset symbol %s not found. ignoring.', args.offset)\n return offs\n return offs - exe.symbols[args.offset]\n return offs\n\ndef collect_synthetic_symbols(exe):\n available_symbols = ['str_bin_sh']\n exe.symbols['str_bin_sh'] = next(exe.search(b'/bin/sh\\x00'))\n\n libc_start_main_return = exe.libc_start_main_return\n if libc_start_main_return > 0:\n exe.symbols['__libc_start_main_ret'] = libc_start_main_return\n available_symbols.append('__libc_start_main_ret')\n\n return available_symbols\n\ndef main(args):\n if len(sys.argv) < 3:\n parser.print_usage()\n sys.exit()\n\n if args.libc_command == 'lookup':\n pairs = args.symbol_offset_pairs\n if len(pairs) % 2 != 0:\n log.failure('Uneven number of arguments. Please provide \"symbol offset\" pairs')\n return\n \n symbols = {pairs[i]:pairs[i+1] for i in range(0, len(pairs), 2)}\n matched_libcs = find_libc({'symbols': symbols})\n for libc in matched_libcs:\n handle_remote_libc(args, libc)\n\n elif args.libc_command == 'hash':\n for hash_value in args.hash_value:\n matched_libcs = find_libc({args.hash_type: hash_value})\n for libc in matched_libcs:\n handle_remote_libc(args, libc)\n\n elif args.libc_command == 'file':\n from hashlib import md5, sha1, sha256\n for file in args.files:\n if not os.path.exists(file) or not os.path.isfile(file):\n log.failure('File does not exist %s', args.file)\n continue\n \n if args.unstrip:\n libcdb.unstrip_libc(file)\n\n exe = ELF(file, checksec=False)\n log.info('%s', text.red(os.path.basename(file)))\n\n libc_version = re.search(br'libc[ -](\\d+\\.\\d+)', exe.data)\n if libc_version:\n log.indented('%-20s %s', text.green('Version:'), libc_version.group(1).decode())\n\n if exe.buildid:\n log.indented('%-20s %s', text.green('BuildID:'), enhex(exe.buildid))\n log.indented('%-20s %s', text.green('MD5:'), md5(exe.data).hexdigest())\n log.indented('%-20s %s', text.green('SHA1:'), sha1(exe.data).hexdigest())\n log.indented('%-20s %s', text.green('SHA256:'), sha256(exe.data).hexdigest())\n\n # Always dump the basic list of common symbols\n log.indented('%s', text.green('Symbols:'))\n synthetic_symbols = collect_synthetic_symbols(exe)\n\n symbols = common_symbols + (args.symbols or []) + synthetic_symbols\n symbols.sort()\n for symbol in symbols:\n if symbol not in exe.symbols:\n log.indented('%25s = %s', symbol, text.red('not found'))\n else:\n log.indented('%25s = %#x', symbol, translate_offset(exe.symbols[symbol], args, exe))\n\nif __name__ == '__main__':\n pwnlib.commandline.common.main(__file__)\n", "path": "pwnlib/commandline/libcdb.py"}]}
| 3,956 | 199 |
gh_patches_debug_50432
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-4754
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve unexpected error message
Many users are reporting / filing an issue in our issue tracker when this message is shown to them, which is logical because it's what the message says.
> There was a problem with Read the Docs while building your documentation. Please report this to us with your build id (1234)
However, I think we should improve this message to say something like "if this problem persists, please report..." or something similar. Otherwise, sometimes it's a temporary failure and we get tons of reports.
</issue>
<code>
[start of readthedocs/doc_builder/exceptions.py]
1 # -*- coding: utf-8 -*-
2 """Exceptions raised when building documentation."""
3
4 from __future__ import division, print_function, unicode_literals
5
6 from django.utils.translation import ugettext_noop
7
8
9 class BuildEnvironmentException(Exception):
10
11 message = None
12 status_code = None
13
14 def __init__(self, message=None, **kwargs):
15 self.status_code = kwargs.pop('status_code', None) or self.status_code or 1
16 message = message or self.get_default_message()
17 super(BuildEnvironmentException, self).__init__(message, **kwargs)
18
19 def get_default_message(self):
20 return self.message
21
22
23 class BuildEnvironmentError(BuildEnvironmentException):
24
25 GENERIC_WITH_BUILD_ID = ugettext_noop(
26 'There was a problem with Read the Docs while building your documentation. '
27 'Please report this to us with your build id ({build_id}).',
28 )
29
30
31 class BuildEnvironmentCreationFailed(BuildEnvironmentError):
32
33 message = ugettext_noop('Build environment creation failed')
34
35
36 class VersionLockedError(BuildEnvironmentError):
37
38 message = ugettext_noop('Version locked, retrying in 5 minutes.')
39 status_code = 423
40
41
42 class ProjectBuildsSkippedError(BuildEnvironmentError):
43
44 message = ugettext_noop('Builds for this project are temporarily disabled')
45
46
47 class YAMLParseError(BuildEnvironmentError):
48
49 GENERIC_WITH_PARSE_EXCEPTION = ugettext_noop(
50 'Problem parsing YAML configuration. {exception}',
51 )
52
53
54 class BuildTimeoutError(BuildEnvironmentError):
55
56 message = ugettext_noop('Build exited due to time out')
57
58
59 class BuildEnvironmentWarning(BuildEnvironmentException):
60 pass
61
[end of readthedocs/doc_builder/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/doc_builder/exceptions.py b/readthedocs/doc_builder/exceptions.py
--- a/readthedocs/doc_builder/exceptions.py
+++ b/readthedocs/doc_builder/exceptions.py
@@ -24,7 +24,9 @@
GENERIC_WITH_BUILD_ID = ugettext_noop(
'There was a problem with Read the Docs while building your documentation. '
- 'Please report this to us with your build id ({build_id}).',
+ 'Please try again later. '
+ 'However, if this problem persists, '
+ 'please report this to us with your build id ({build_id}).',
)
|
{"golden_diff": "diff --git a/readthedocs/doc_builder/exceptions.py b/readthedocs/doc_builder/exceptions.py\n--- a/readthedocs/doc_builder/exceptions.py\n+++ b/readthedocs/doc_builder/exceptions.py\n@@ -24,7 +24,9 @@\n \n GENERIC_WITH_BUILD_ID = ugettext_noop(\n 'There was a problem with Read the Docs while building your documentation. '\n- 'Please report this to us with your build id ({build_id}).',\n+ 'Please try again later. '\n+ 'However, if this problem persists, '\n+ 'please report this to us with your build id ({build_id}).',\n )\n", "issue": "Improve unexpected error message\nMany users are reporting / filling an issue in our issue tracker when this message is shown to them, which is logic because it's what the message says.\r\n\r\n> There was a problem with Read the Docs while building your documentation. Please report this to us with your build id (1234)\r\n\r\nAlthough, I think we should improve this message saying something like \"if this problem persists, please report...\" or something similar to that. Otherwise, sometimes it's a temporal failure and we get tons of reports.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Exceptions raised when building documentation.\"\"\"\n\nfrom __future__ import division, print_function, unicode_literals\n\nfrom django.utils.translation import ugettext_noop\n\n\nclass BuildEnvironmentException(Exception):\n\n message = None\n status_code = None\n\n def __init__(self, message=None, **kwargs):\n self.status_code = kwargs.pop('status_code', None) or self.status_code or 1\n message = message or self.get_default_message()\n super(BuildEnvironmentException, self).__init__(message, **kwargs)\n\n def get_default_message(self):\n return self.message\n\n\nclass BuildEnvironmentError(BuildEnvironmentException):\n\n GENERIC_WITH_BUILD_ID = ugettext_noop(\n 'There was a problem with Read the Docs while building your documentation. '\n 'Please report this to us with your build id ({build_id}).',\n )\n\n\nclass BuildEnvironmentCreationFailed(BuildEnvironmentError):\n\n message = ugettext_noop('Build environment creation failed')\n\n\nclass VersionLockedError(BuildEnvironmentError):\n\n message = ugettext_noop('Version locked, retrying in 5 minutes.')\n status_code = 423\n\n\nclass ProjectBuildsSkippedError(BuildEnvironmentError):\n\n message = ugettext_noop('Builds for this project are temporarily disabled')\n\n\nclass YAMLParseError(BuildEnvironmentError):\n\n GENERIC_WITH_PARSE_EXCEPTION = ugettext_noop(\n 'Problem parsing YAML configuration. {exception}',\n )\n\n\nclass BuildTimeoutError(BuildEnvironmentError):\n\n message = ugettext_noop('Build exited due to time out')\n\n\nclass BuildEnvironmentWarning(BuildEnvironmentException):\n pass\n", "path": "readthedocs/doc_builder/exceptions.py"}]}
| 1,120 | 140 |
gh_patches_debug_4309
|
rasdani/github-patches
|
git_diff
|
cloudtools__troposphere-2027
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WAFv2 AndStatement and OrStatement incorrect validation
Hi guys,
https://github.com/cloudtools/troposphere/blob/f287fa8999ef2a5f5e301ba5c0af6421471e230b/troposphere/validators/wafv2.py#L32-L33
In the validator for WAFv2 statements, there is a check to see if the number of statements is _exactly_ 2, when it should be checking whether there are _at least_ 2 statements given.
> "You provide more than one [Statement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-wafv2-webacl-notstatement.html#cfn-wafv2-webacl-notstatement-statement) within the AndStatement."
[Source](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-wafv2-rulegroup-andstatement.html)
AWS requires there to be more than one statement, but does not restrict them to only pairs.
A small change in the logic of the `if` statement and the `TypeError` message is all that is needed.
Thank you
</issue>
<code>
[start of troposphere/validators/wafv2.py]
1 # Copyright (c) 2012-2021, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6
7 def validate_statement(statement):
8 """
9 Validate Transformation Type for WebACL TextTransformation
10 Property: RuleGroupRule.Statement
11 Property: WebACLRule.Statement
12 Property: ManagedRuleGroupStatement.ScopeDownStatement
13 Property: NotStatement.Statement
14 Property: RateBasedStatement.ScopeDownStatement
15 """
16
17 from .. import AWSHelperFn
18 from ..wafv2 import Statement
19
20 if not isinstance(statement, (Statement, AWSHelperFn)):
21 raise TypeError(f"{statement} is not a valid Statement")
22
23 return statement
24
25
26 def validate_statements(statements):
27 """
28 Property: AndStatement.Statements
29 Property: OrStatement.Statements
30 """
31
32 if not isinstance(statements, list) or len(statements) != 2:
33 raise TypeError("Statements must be a list of 2 Statement elements")
34
35 for s in statements:
36 validate_statement(s)
37
38 return statements
39
40
41 def validate_transformation_type(transformation_type):
42 """
43 Validate Transformation Type for WebACL TextTransformation
44 Property: TextTransformation.Type
45 """
46
47 VALID_TRANSFORMATION_TYPES = (
48 "BASE64_DECODE",
49 "BASE64_DECODE_EXT",
50 "CMD_LINE",
51 "COMPRESS_WHITE_SPACE",
52 "CSS_DECODE",
53 "ESCAPE_SEQ_DECODE",
54 "HEX_DECODE",
55 "HTML_ENTITY_DECODE",
56 "JS_DECODE",
57 "LOWERCASE",
58 "MD5",
59 "NONE",
60 "NORMALIZE_PATH",
61 "NORMALIZE_PATH_WIN",
62 "REMOVE_NULLS",
63 "REPLACE_COMMENTS",
64 "REPLACE_NULLS",
65 "SQL_HEX_DECODE",
66 "URL_DECODE",
67 "URL_DECODE_UNI",
68 "UTF8_TO_UNICODE",
69 )
70
71 if transformation_type not in VALID_TRANSFORMATION_TYPES:
72 raise ValueError(
73 "WebACL TextTransformation must be one of: %s"
74 % ", ".join(VALID_TRANSFORMATION_TYPES)
75 )
76 return transformation_type
77
78
79 def validate_comparison_operator(comparison_operator):
80 """
81 Validate Comparison Operator for WebACL SizeConstraintStatement
82 Property: SizeConstraintStatement.ComparisonOperator
83 """
84
85 VALID_COMPARISON_OPERATORS = (
86 "EQ",
87 "GE",
88 "GT",
89 "LE",
90 "LT",
91 "NE",
92 )
93
94 if comparison_operator not in VALID_COMPARISON_OPERATORS:
95 raise ValueError(
96 "WebACL SizeConstraintStatement must be one of: %s"
97 % ", ".join(VALID_COMPARISON_OPERATORS)
98 )
99 return comparison_operator
100
101
102 def validate_ipaddress_version(ipaddress_version):
103 """
104 Validate IPAddress version for IPSet
105 Property: IPSet.IPAddressVersion
106 """
107
108 VALID_IP_VERSION = ("IPV4", "IPV6")
109
110 if ipaddress_version not in VALID_IP_VERSION:
111 raise ValueError(
112 "IPSet IPAddressVersion must be one of: %s" % ", ".join(VALID_IP_VERSION)
113 )
114 return ipaddress_version
115
116
117 def validate_positional_constraint(positional_constraint):
118 """
119 Validate positional constraint for ByteMatchStatement
120 Property: ByteMatchStatement.PositionalConstraint
121 """
122
123 VALID_POSITIONAL_CONSTRAINTS = (
124 "CONTAINS",
125 "CONTAINS_WORD",
126 "ENDS_WITH",
127 "EXACTLY",
128 "STARTS_WITH",
129 )
130
131 if positional_constraint not in VALID_POSITIONAL_CONSTRAINTS:
132 raise ValueError(
133 "ByteMatchStatement PositionalConstraint must be one of: %s"
134 % ", ".join(VALID_POSITIONAL_CONSTRAINTS) # NOQA
135 )
136 return positional_constraint
137
138
139 def validate_custom_response_bodies(custom_response_bodies):
140 """
141 Validate custom response bodies
142 Property: RuleGroup.CustomResponseBodies
143 Property: WebACL.CustomResponseBodies
144 """
145
146 from ..wafv2 import CustomResponseBody
147
148 if not isinstance(custom_response_bodies, dict):
149 raise ValueError("CustomResponseBodies must be dict")
150
151 for k, v in custom_response_bodies.items():
152 if not isinstance(v, CustomResponseBody):
153 raise ValueError("value of %s must be type of CustomResponseBody" % (k))
154
155 return custom_response_bodies
156
157
158 def wafv2_custom_body_response_content(content):
159 """
160 Validate wafv2 custom body response content. Any character between 1 to 10240
161 Property: CustomResponseBody.Content
162 """
163
164 if not content:
165 raise ValueError("Content must not be empty")
166 if len(content) > 10240:
167 raise ValueError("Content maximum length must not exceed 10240")
168
169 return content
170
171
172 def wafv2_custom_body_response_content_type(content_type):
173 """
174 validate wafv2 custom response content type
175 Property: CustomResponseBody.ContentType
176 """
177
178 valid_types = ["APPLICATION_JSON", "TEXT_HTML", "TEXT_PLAIN"]
179 if content_type not in valid_types:
180 raise ValueError('ContentType must be one of: "%s"' % (", ".join(valid_types)))
181 return content_type
182
[end of troposphere/validators/wafv2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/troposphere/validators/wafv2.py b/troposphere/validators/wafv2.py
--- a/troposphere/validators/wafv2.py
+++ b/troposphere/validators/wafv2.py
@@ -29,8 +29,8 @@
Property: OrStatement.Statements
"""
- if not isinstance(statements, list) or len(statements) != 2:
- raise TypeError("Statements must be a list of 2 Statement elements")
+ if not isinstance(statements, list) or len(statements) < 2:
+ raise TypeError("Statements must be a list of at least 2 Statement elements")
for s in statements:
validate_statement(s)
|
{"golden_diff": "diff --git a/troposphere/validators/wafv2.py b/troposphere/validators/wafv2.py\n--- a/troposphere/validators/wafv2.py\n+++ b/troposphere/validators/wafv2.py\n@@ -29,8 +29,8 @@\n Property: OrStatement.Statements\n \"\"\"\n \n- if not isinstance(statements, list) or len(statements) != 2:\n- raise TypeError(\"Statements must be a list of 2 Statement elements\")\n+ if not isinstance(statements, list) or len(statements) < 2:\n+ raise TypeError(\"Statements must be a list of at least 2 Statement elements\")\n \n for s in statements:\n validate_statement(s)\n", "issue": "WAFv2 AndStatement and OrStatement incorrect validation\nHi guys,\r\n\r\nhttps://github.com/cloudtools/troposphere/blob/f287fa8999ef2a5f5e301ba5c0af6421471e230b/troposphere/validators/wafv2.py#L32-L33\r\n\r\nIn the validator for WAFv2 statements, there is a check to see if the number of statements if _exactly_ 2 when it should be looking to see if there are _at least_ 2 statements given. \r\n\r\n> \"You provide more than one [Statement](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-wafv2-webacl-notstatement.html#cfn-wafv2-webacl-notstatement-statement) within the AndStatement.\"\r\n[Source](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-wafv2-rulegroup-andstatement.html)\r\n\r\nAWS requires there to be more than one statement, but does not restrict them to only pairs.\r\n\r\nA small change in the logic of the `if` statement and the `TypeError` message is all that is needed.\r\n\r\nThank you\r\n\n", "before_files": [{"content": "# Copyright (c) 2012-2021, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\n\ndef validate_statement(statement):\n \"\"\"\n Validate Transformation Type for WebACL TextTransformation\n Property: RuleGroupRule.Statement\n Property: WebACLRule.Statement\n Property: ManagedRuleGroupStatement.ScopeDownStatement\n Property: NotStatement.Statement\n Property: RateBasedStatement.ScopeDownStatement\n \"\"\"\n\n from .. 
import AWSHelperFn\n from ..wafv2 import Statement\n\n if not isinstance(statement, (Statement, AWSHelperFn)):\n raise TypeError(f\"{statement} is not a valid Statement\")\n\n return statement\n\n\ndef validate_statements(statements):\n \"\"\"\n Property: AndStatement.Statements\n Property: OrStatement.Statements\n \"\"\"\n\n if not isinstance(statements, list) or len(statements) != 2:\n raise TypeError(\"Statements must be a list of 2 Statement elements\")\n\n for s in statements:\n validate_statement(s)\n\n return statements\n\n\ndef validate_transformation_type(transformation_type):\n \"\"\"\n Validate Transformation Type for WebACL TextTransformation\n Property: TextTransformation.Type\n \"\"\"\n\n VALID_TRANSFORMATION_TYPES = (\n \"BASE64_DECODE\",\n \"BASE64_DECODE_EXT\",\n \"CMD_LINE\",\n \"COMPRESS_WHITE_SPACE\",\n \"CSS_DECODE\",\n \"ESCAPE_SEQ_DECODE\",\n \"HEX_DECODE\",\n \"HTML_ENTITY_DECODE\",\n \"JS_DECODE\",\n \"LOWERCASE\",\n \"MD5\",\n \"NONE\",\n \"NORMALIZE_PATH\",\n \"NORMALIZE_PATH_WIN\",\n \"REMOVE_NULLS\",\n \"REPLACE_COMMENTS\",\n \"REPLACE_NULLS\",\n \"SQL_HEX_DECODE\",\n \"URL_DECODE\",\n \"URL_DECODE_UNI\",\n \"UTF8_TO_UNICODE\",\n )\n\n if transformation_type not in VALID_TRANSFORMATION_TYPES:\n raise ValueError(\n \"WebACL TextTransformation must be one of: %s\"\n % \", \".join(VALID_TRANSFORMATION_TYPES)\n )\n return transformation_type\n\n\ndef validate_comparison_operator(comparison_operator):\n \"\"\"\n Validate Comparison Operator for WebACL SizeConstraintStatement\n Property: SizeConstraintStatement.ComparisonOperator\n \"\"\"\n\n VALID_COMPARISON_OPERATORS = (\n \"EQ\",\n \"GE\",\n \"GT\",\n \"LE\",\n \"LT\",\n \"NE\",\n )\n\n if comparison_operator not in VALID_COMPARISON_OPERATORS:\n raise ValueError(\n \"WebACL SizeConstraintStatement must be one of: %s\"\n % \", \".join(VALID_COMPARISON_OPERATORS)\n )\n return comparison_operator\n\n\ndef validate_ipaddress_version(ipaddress_version):\n \"\"\"\n Validate IPAddress version for IPSet\n Property: IPSet.IPAddressVersion\n \"\"\"\n\n VALID_IP_VERSION = (\"IPV4\", \"IPV6\")\n\n if ipaddress_version not in VALID_IP_VERSION:\n raise ValueError(\n \"IPSet IPAddressVersion must be one of: %s\" % \", \".join(VALID_IP_VERSION)\n )\n return ipaddress_version\n\n\ndef validate_positional_constraint(positional_constraint):\n \"\"\"\n Validate positional constraint for ByteMatchStatement\n Property: ByteMatchStatement.PositionalConstraint\n \"\"\"\n\n VALID_POSITIONAL_CONSTRAINTS = (\n \"CONTAINS\",\n \"CONTAINS_WORD\",\n \"ENDS_WITH\",\n \"EXACTLY\",\n \"STARTS_WITH\",\n )\n\n if positional_constraint not in VALID_POSITIONAL_CONSTRAINTS:\n raise ValueError(\n \"ByteMatchStatement PositionalConstraint must be one of: %s\"\n % \", \".join(VALID_POSITIONAL_CONSTRAINTS) # NOQA\n )\n return positional_constraint\n\n\ndef validate_custom_response_bodies(custom_response_bodies):\n \"\"\"\n Validate custom response bodies\n Property: RuleGroup.CustomResponseBodies\n Property: WebACL.CustomResponseBodies\n \"\"\"\n\n from ..wafv2 import CustomResponseBody\n\n if not isinstance(custom_response_bodies, dict):\n raise ValueError(\"CustomResponseBodies must be dict\")\n\n for k, v in custom_response_bodies.items():\n if not isinstance(v, CustomResponseBody):\n raise ValueError(\"value of %s must be type of CustomResponseBody\" % (k))\n\n return custom_response_bodies\n\n\ndef wafv2_custom_body_response_content(content):\n \"\"\"\n Validate wafv2 custom body response content. 
Any character between 1 to 10240\n Property: CustomResponseBody.Content\n \"\"\"\n\n if not content:\n raise ValueError(\"Content must not be empty\")\n if len(content) > 10240:\n raise ValueError(\"Content maximum length must not exceed 10240\")\n\n return content\n\n\ndef wafv2_custom_body_response_content_type(content_type):\n \"\"\"\n validate wafv2 custom response content type\n Property: CustomResponseBody.ContentType\n \"\"\"\n\n valid_types = [\"APPLICATION_JSON\", \"TEXT_HTML\", \"TEXT_PLAIN\"]\n if content_type not in valid_types:\n raise ValueError('ContentType must be one of: \"%s\"' % (\", \".join(valid_types)))\n return content_type\n", "path": "troposphere/validators/wafv2.py"}]}
| 2,368 | 159 |
gh_patches_debug_39929
|
rasdani/github-patches
|
git_diff
|
enthought__chaco-532
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Setup cron job to test against dependencies from source
Travis (and possibly appveyor too) allows the creation of cron jobs. Some of the ETS packages now have cron jobs set up to test against dependencies which are installed from source - instead of testing against the released versions.
See examples :
- https://github.com/enthought/envisage/pull/162
- https://github.com/enthought/traitsui/pull/914
- https://github.com/enthought/pyface/pull/549
</issue>
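For context, a minimal standalone sketch of the kind of source install being asked for - reinstalling the ETS dependencies from their GitHub master branches so a scheduled (cron) CI run exercises unreleased code. The package list mirrors the one the patch below adds to `ci/edmtool.py`; the helper name and the plain `pip` invocation are assumptions for illustration, since the real script drives an EDM-managed environment:

```python
import subprocess
import sys

# Packages to reinstall from source for cron runs (same set the patch below uses).
SOURCE_DEPENDENCIES = ["enable", "pyface", "traits", "traitsui"]


def install_from_source(packages=SOURCE_DEPENDENCIES):
    """Replace released versions with GitHub master checkouts."""
    for name in packages:
        url = "git+https://github.com/enthought/{0}.git#egg={0}".format(name)
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", "--no-deps", url]
        )


if __name__ == "__main__":
    install_from_source()
```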
<code>
[start of ci/edmtool.py]
1 #
2 # Copyright (c) 2017, Enthought, Inc.
3 # All rights reserved.
4 #
5 # This software is provided without warranty under the terms of the BSD
6 # license included in enthought/LICENSE.txt and may be redistributed only
7 # under the conditions described in the aforementioned license. The license
8 # is also available online at http://www.enthought.com/licenses/BSD.txt
9 #
10 # Thanks for using Enthought open source!
11 #
12 """
13 Tasks for Test Runs
14 ===================
15 This file is intended to be used with a python environment with the
16 click library to automate the process of setting up test environments
17 and running the test within them. This improves repeatability and
18 reliability of tests be removing many of the variables around the
19 developer's particular Python environment. Test environment setup and
20 package management is performed using `EDM http://docs.enthought.com/edm/`_
21
22 To use this to run you tests, you will need to install EDM and click
23 into your working environment. You will also need to have git
24 installed to access required source code from github repositories.
25
26 You can then do::
27 python edmtool.py install --runtime=... --toolkit=...
28 to create a test environment from the current codebase and::
29 python edmtool.py test --runtime=... --toolkit=...
30 to run tests in that environment. You can remove the environment with::
31 python edmtool.py cleanup --runtime=... --toolkit=...
32
33 If you make changes you will either need to remove and re-install the
34 environment or manually update the environment using ``edm``, as
35 the install performs a ``python setup.py install`` rather than a ``develop``,
36 so changes in your code will not be automatically mirrored in the test
37 environment. You can update with a command like::
38 edm run --environment ... -- python setup.py install
39 You can run all three tasks at once with::
40 python edmtool.py test_clean --runtime=... --toolkit=...
41 which will create, install, run tests, and then clean-up the environment. And
42 you can run tests in all supported runtimes and toolkits (with cleanup)
43 using::
44 python edmtool.py test_all
45
46 Currently supported runtime values are ``3.6``, and currently
47 supported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all
48 combinations of toolkits and runtimes will work, but the tasks will fail with
49 a clear error if that is the case. Tests can still be run via the usual means
50 in other environments if that suits a developer's purpose.
51
52 Changing This File
53 ------------------
54 To change the packages installed during a test run, change the dependencies
55 variable below. To install a package from github, or one which is not yet
56 available via EDM, add it to the `ci/requirements.txt` file (these will be
57 installed by `pip`).
58
59 Other changes to commands should be a straightforward change to the listed
60 commands for each task. See the EDM documentation for more information about
61 how to run commands within an EDM enviornment.
62 """
63 import glob
64 import os
65 import subprocess
66 import sys
67 from shutil import rmtree, copy as copyfile
68 from tempfile import mkdtemp
69 from contextlib import contextmanager
70
71 import click
72
73 supported_combinations = {
74 '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},
75 }
76
77 dependencies = {
78 "six",
79 "mock",
80 "numpy",
81 "pandas",
82 "pygments",
83 "pyparsing",
84 "cython",
85 # Needed to install enable from source
86 "swig",
87 }
88
89 extra_dependencies = {
90 'pyside2': set(), # pyside2 is pip-installed during the install step
91 'pyqt': {'pyqt'},
92 'pyqt5': {'pyqt5'},
93 'null': set()
94 }
95
96 environment_vars = {
97 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},
98 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},
99 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},
100 'null': {'ETS_TOOLKIT': 'null.image'},
101 }
102
103
104 def normalize(name):
105 return name.replace("_", "-")
106
107
108 @click.group(context_settings={"token_normalize_func": normalize})
109 def cli():
110 pass
111
112
113 @cli.command()
114 @click.option('--runtime', default='3.6')
115 @click.option('--toolkit', default='null')
116 @click.option('--environment', default=None)
117 def install(runtime, toolkit, environment):
118 """ Install project and dependencies into a clean EDM environment.
119 """
120 parameters = get_parameters(runtime, toolkit, environment)
121 parameters['packages'] = ' '.join(
122 dependencies | extra_dependencies.get(toolkit, set()))
123 # edm commands to setup the development environment
124 commands = [
125 "edm environments create {environment} --force --version={runtime}",
126 "edm install -y -e {environment} {packages}",
127 ("edm run -e {environment} -- pip install -r ci/requirements.txt"
128 " --no-dependencies"),
129 # Note that enable dependencies will be installed implicitly using pip
130 ("edm run -e {environment} -- "
131 "pip install git+https://[email protected]/enthought/enable.git"),
132 "edm run -e {environment} -- pip install . --no-deps",
133 ]
134 # pip install pyside2, because we don't have it in EDM yet
135 if toolkit == 'pyside2':
136 commands.append(
137 "edm run -e {environment} -- pip install pyside2==5.11"
138 )
139
140 click.echo("Creating environment '{environment}'".format(**parameters))
141 execute(commands, parameters)
142 click.echo('Done install')
143
144
145 @cli.command()
146 @click.option('--runtime', default='3.6')
147 @click.option('--toolkit', default='null')
148 @click.option('--environment', default=None)
149 def test(runtime, toolkit, environment):
150 """ Run the test suite in a given environment with the specified toolkit.
151 """
152 parameters = get_parameters(runtime, toolkit, environment)
153 environ = environment_vars.get(toolkit, {}).copy()
154
155 environ['PYTHONUNBUFFERED'] = "1"
156 commands = [
157 "edm run -e {environment} -- coverage run -m unittest discover -v chaco"
158 ]
159
160 cwd = os.getcwd()
161
162 # We run in a tempdir to avoid accidentally picking up wrong traitsui
163 # code from a local dir. We need to ensure a good .coveragerc is in
164 # that directory, plus coverage has a bug that means a non-local coverage
165 # file doesn't get populated correctly.
166 click.echo("Running tests in '{environment}'".format(**parameters))
167 with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):
168 os.environ.update(environ)
169 execute(commands, parameters)
170
171 click.echo('Done test')
172
173
174 @cli.command()
175 @click.option('--runtime', default='3.6')
176 @click.option('--toolkit', default='null')
177 @click.option('--environment', default=None)
178 def cleanup(runtime, toolkit, environment):
179 """ Remove a development environment.
180 """
181 parameters = get_parameters(runtime, toolkit, environment)
182 commands = [
183 "edm run -e {environment} -- python setup.py clean",
184 "edm environments remove {environment} --purge -y",
185 ]
186 click.echo("Cleaning up environment '{environment}'".format(**parameters))
187 execute(commands, parameters)
188 click.echo('Done cleanup')
189
190
191 @cli.command()
192 @click.option('--runtime', default='3.6')
193 @click.option('--toolkit', default='null')
194 def test_clean(runtime, toolkit):
195 """ Run tests in a clean environment, cleaning up afterwards
196 """
197 args = ['--toolkit={}'.format(toolkit),
198 '--runtime={}'.format(runtime)]
199 try:
200 install(args=args, standalone_mode=False)
201 test(args=args, standalone_mode=False)
202 finally:
203 cleanup(args=args, standalone_mode=False)
204
205
206 @cli.command()
207 @click.option('--runtime', default='3.6')
208 @click.option('--toolkit', default='null')
209 @click.option('--environment', default=None)
210 def update(runtime, toolkit, environment):
211 """ Update/Reinstall package into environment.
212 """
213 parameters = get_parameters(runtime, toolkit, environment)
214 commands = [
215 "edm run -e {environment} -- python setup.py install"]
216 click.echo("Re-installing in '{environment}'".format(**parameters))
217 execute(commands, parameters)
218 click.echo('Done update')
219
220
221 @cli.command()
222 def test_all():
223 """ Run test_clean across all supported environment combinations.
224 """
225 for runtime, toolkits in supported_combinations.items():
226 for toolkit in toolkits:
227 args = ['--toolkit={}'.format(toolkit),
228 '--runtime={}'.format(runtime)]
229 test_clean(args, standalone_mode=True)
230
231
232 # ----------------------------------------------------------------------------
233 # Utility routines
234 # ----------------------------------------------------------------------------
235
236 def get_parameters(runtime, toolkit, environment):
237 """Set up parameters dictionary for format() substitution
238 """
239 parameters = {'runtime': runtime, 'toolkit': toolkit,
240 'environment': environment}
241 if toolkit not in supported_combinations[runtime]:
242 msg = ("Python {runtime!r}, toolkit {toolkit!r}, "
243 "not supported by test environments ({available})")
244 available = ", ".join(
245 repr(tk) for tk in sorted(supported_combinations[runtime])
246 )
247 raise RuntimeError(msg.format(available=available, **parameters))
248 if environment is None:
249 tmpl = 'chaco-test-{runtime}-{toolkit}'
250 environment = tmpl.format(**parameters)
251 parameters['environment'] = environment
252 return parameters
253
254
255 @contextmanager
256 def do_in_tempdir(files=(), capture_files=()):
257 """ Create a temporary directory, cleaning up after done.
258 Creates the temporary directory, and changes into it. On exit returns to
259 original directory and removes temporary dir.
260 Parameters
261 ----------
262 files : sequence of filenames
263 Files to be copied across to temporary directory.
264 capture_files : sequence of filenames
265 Files to be copied back from temporary directory.
266 """
267 path = mkdtemp()
268 old_path = os.getcwd()
269
270 # send across any files we need
271 for filepath in files:
272 click.echo('copying file to tempdir: {}'.format(filepath))
273 copyfile(filepath, path)
274
275 os.chdir(path)
276 try:
277 yield path
278 # retrieve any result files we want
279 for pattern in capture_files:
280 for filepath in glob.iglob(pattern):
281 click.echo('copying file back: {}'.format(filepath))
282 copyfile(filepath, old_path)
283 finally:
284 os.chdir(old_path)
285 rmtree(path)
286
287
288 def execute(commands, parameters):
289 for command in commands:
290 print("[EXECUTING]", command.format(**parameters))
291 try:
292 subprocess.check_call(command.format(**parameters).split())
293 except subprocess.CalledProcessError:
294 sys.exit(1)
295
296
297 if __name__ == '__main__':
298 cli()
299
[end of ci/edmtool.py]
[start of chaco/__init__.py]
1 # Copyright (c) 2005-2014 by Enthought, Inc.
2 # All rights reserved.
3 """ Two-dimensional plotting application toolkit.
4 Part of the Chaco project of the Enthought Tool Suite.
5 """
6 from ._version import full_version as __version__ # noqa
7
8 __requires__ = [
9 'enable',
10 'six'
11 ]
12
[end of chaco/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chaco/__init__.py b/chaco/__init__.py
--- a/chaco/__init__.py
+++ b/chaco/__init__.py
@@ -6,6 +6,10 @@
from ._version import full_version as __version__ # noqa
__requires__ = [
+ 'traits',
+ 'traitsui',
+ 'pyface',
+ 'numpy',
'enable',
'six'
]
diff --git a/ci/edmtool.py b/ci/edmtool.py
--- a/ci/edmtool.py
+++ b/ci/edmtool.py
@@ -79,13 +79,27 @@
"mock",
"numpy",
"pandas",
+ "pyface",
"pygments",
"pyparsing",
+ "traits",
+ "traitsui",
"cython",
+ "enable",
# Needed to install enable from source
"swig",
}
+# Dependencies we install from source for cron tests
+source_dependencies = {
+ "enable",
+ "pyface",
+ "traits",
+ "traitsui",
+}
+
+github_url_fmt = "git+http://github.com/enthought/{0}.git#egg={0}"
+
extra_dependencies = {
'pyside2': set(), # pyside2 is pip-installed during the install step
'pyqt': {'pyqt'},
@@ -114,7 +128,12 @@
@click.option('--runtime', default='3.6')
@click.option('--toolkit', default='null')
@click.option('--environment', default=None)
-def install(runtime, toolkit, environment):
[email protected](
+ "--source/--no-source",
+ default=False,
+ help="Install ETS packages from source",
+)
+def install(runtime, toolkit, environment, source):
""" Install project and dependencies into a clean EDM environment.
"""
parameters = get_parameters(runtime, toolkit, environment)
@@ -126,9 +145,6 @@
"edm install -y -e {environment} {packages}",
("edm run -e {environment} -- pip install -r ci/requirements.txt"
" --no-dependencies"),
- # Note that enable dependencies will be installed implicitly using pip
- ("edm run -e {environment} -- "
- "pip install git+https://[email protected]/enthought/enable.git"),
"edm run -e {environment} -- pip install . --no-deps",
]
# pip install pyside2, because we don't have it in EDM yet
@@ -139,6 +155,26 @@
click.echo("Creating environment '{environment}'".format(**parameters))
execute(commands, parameters)
+
+ if source:
+ # Remove EDM ETS packages and install them from source
+ cmd_fmt = (
+ "edm plumbing remove-package "
+ "--environment {environment} --force "
+ )
+ commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]
+ execute(commands, parameters)
+ source_pkgs = [
+ github_url_fmt.format(pkg) for pkg in source_dependencies
+ ]
+ commands = [
+ "python -m pip install {pkg} --no-deps".format(pkg=pkg)
+ for pkg in source_pkgs
+ ]
+ commands = [
+ "edm run -e {environment} -- " + command for command in commands
+ ]
+ execute(commands, parameters)
click.echo('Done install')
|
{"golden_diff": "diff --git a/chaco/__init__.py b/chaco/__init__.py\n--- a/chaco/__init__.py\n+++ b/chaco/__init__.py\n@@ -6,6 +6,10 @@\n from ._version import full_version as __version__ # noqa\n \n __requires__ = [\n+ 'traits',\n+ 'traitsui',\n+ 'pyface',\n+ 'numpy',\n 'enable',\n 'six'\n ]\ndiff --git a/ci/edmtool.py b/ci/edmtool.py\n--- a/ci/edmtool.py\n+++ b/ci/edmtool.py\n@@ -79,13 +79,27 @@\n \"mock\",\n \"numpy\",\n \"pandas\",\n+ \"pyface\",\n \"pygments\",\n \"pyparsing\",\n+ \"traits\",\n+ \"traitsui\",\n \"cython\",\n+ \"enable\",\n # Needed to install enable from source\n \"swig\",\n }\n \n+# Dependencies we install from source for cron tests\n+source_dependencies = {\n+ \"enable\",\n+ \"pyface\",\n+ \"traits\",\n+ \"traitsui\",\n+}\n+\n+github_url_fmt = \"git+http://github.com/enthought/{0}.git#egg={0}\"\n+\n extra_dependencies = {\n 'pyside2': set(), # pyside2 is pip-installed during the install step\n 'pyqt': {'pyqt'},\n@@ -114,7 +128,12 @@\n @click.option('--runtime', default='3.6')\n @click.option('--toolkit', default='null')\n @click.option('--environment', default=None)\n-def install(runtime, toolkit, environment):\[email protected](\n+ \"--source/--no-source\",\n+ default=False,\n+ help=\"Install ETS packages from source\",\n+)\n+def install(runtime, toolkit, environment, source):\n \"\"\" Install project and dependencies into a clean EDM environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n@@ -126,9 +145,6 @@\n \"edm install -y -e {environment} {packages}\",\n (\"edm run -e {environment} -- pip install -r ci/requirements.txt\"\n \" --no-dependencies\"),\n- # Note that enable dependencies will be installed implicitly using pip\n- (\"edm run -e {environment} -- \"\n- \"pip install git+https://[email protected]/enthought/enable.git\"),\n \"edm run -e {environment} -- pip install . --no-deps\",\n ]\n # pip install pyside2, because we don't have it in EDM yet\n@@ -139,6 +155,26 @@\n \n click.echo(\"Creating environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n+\n+ if source:\n+ # Remove EDM ETS packages and install them from source\n+ cmd_fmt = (\n+ \"edm plumbing remove-package \"\n+ \"--environment {environment} --force \"\n+ )\n+ commands = [cmd_fmt + source_pkg for source_pkg in source_dependencies]\n+ execute(commands, parameters)\n+ source_pkgs = [\n+ github_url_fmt.format(pkg) for pkg in source_dependencies\n+ ]\n+ commands = [\n+ \"python -m pip install {pkg} --no-deps\".format(pkg=pkg)\n+ for pkg in source_pkgs\n+ ]\n+ commands = [\n+ \"edm run -e {environment} -- \" + command for command in commands\n+ ]\n+ execute(commands, parameters)\n click.echo('Done install')\n", "issue": "Setup cron job to test against dependencies from source\nTravis (and possibly appveyor too) allow the creation of cron jobs. Some of the ETS packages now have cron jobs setup to test against dependencies which are installed from source - instead of testing against the released versions.\r\n\r\nSee examples : \r\n\r\n- https://github.com/enthought/envisage/pull/162\r\n- https://github.com/enthought/traitsui/pull/914\r\n- https://github.com/enthought/pyface/pull/549\n", "before_files": [{"content": "#\n# Copyright (c) 2017, Enthought, Inc.\n# All rights reserved.\n#\n# This software is provided without warranty under the terms of the BSD\n# license included in enthought/LICENSE.txt and may be redistributed only\n# under the conditions described in the aforementioned license. 
The license\n# is also available online at http://www.enthought.com/licenses/BSD.txt\n#\n# Thanks for using Enthought open source!\n#\n\"\"\"\nTasks for Test Runs\n===================\nThis file is intended to be used with a python environment with the\nclick library to automate the process of setting up test environments\nand running the test within them. This improves repeatability and\nreliability of tests be removing many of the variables around the\ndeveloper's particular Python environment. Test environment setup and\npackage management is performed using `EDM http://docs.enthought.com/edm/`_\n\nTo use this to run you tests, you will need to install EDM and click\ninto your working environment. You will also need to have git\ninstalled to access required source code from github repositories.\n\nYou can then do::\n python edmtool.py install --runtime=... --toolkit=...\nto create a test environment from the current codebase and::\n python edmtool.py test --runtime=... --toolkit=...\nto run tests in that environment. You can remove the environment with::\n python edmtool.py cleanup --runtime=... --toolkit=...\n\nIf you make changes you will either need to remove and re-install the\nenvironment or manually update the environment using ``edm``, as\nthe install performs a ``python setup.py install`` rather than a ``develop``,\nso changes in your code will not be automatically mirrored in the test\nenvironment. You can update with a command like::\n edm run --environment ... -- python setup.py install\nYou can run all three tasks at once with::\n python edmtool.py test_clean --runtime=... --toolkit=...\nwhich will create, install, run tests, and then clean-up the environment. And\nyou can run tests in all supported runtimes and toolkits (with cleanup)\nusing::\n python edmtool.py test_all\n\nCurrently supported runtime values are ``3.6``, and currently\nsupported toolkits are ``null``, ``pyqt``, ``pyqt5`` and ``pyside2``. Not all\ncombinations of toolkits and runtimes will work, but the tasks will fail with\na clear error if that is the case. Tests can still be run via the usual means\nin other environments if that suits a developer's purpose.\n\nChanging This File\n------------------\nTo change the packages installed during a test run, change the dependencies\nvariable below. To install a package from github, or one which is not yet\navailable via EDM, add it to the `ci/requirements.txt` file (these will be\ninstalled by `pip`).\n\nOther changes to commands should be a straightforward change to the listed\ncommands for each task. 
See the EDM documentation for more information about\nhow to run commands within an EDM enviornment.\n\"\"\"\nimport glob\nimport os\nimport subprocess\nimport sys\nfrom shutil import rmtree, copy as copyfile\nfrom tempfile import mkdtemp\nfrom contextlib import contextmanager\n\nimport click\n\nsupported_combinations = {\n '3.6': {'pyside2', 'pyqt', 'pyqt5', 'null'},\n}\n\ndependencies = {\n \"six\",\n \"mock\",\n \"numpy\",\n \"pandas\",\n \"pygments\",\n \"pyparsing\",\n \"cython\",\n # Needed to install enable from source\n \"swig\",\n}\n\nextra_dependencies = {\n 'pyside2': set(), # pyside2 is pip-installed during the install step\n 'pyqt': {'pyqt'},\n 'pyqt5': {'pyqt5'},\n 'null': set()\n}\n\nenvironment_vars = {\n 'pyside2': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyside2'},\n 'pyqt': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt'},\n 'pyqt5': {'ETS_TOOLKIT': 'qt4', 'QT_API': 'pyqt5'},\n 'null': {'ETS_TOOLKIT': 'null.image'},\n}\n\n\ndef normalize(name):\n return name.replace(\"_\", \"-\")\n\n\[email protected](context_settings={\"token_normalize_func\": normalize})\ndef cli():\n pass\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef install(runtime, toolkit, environment):\n \"\"\" Install project and dependencies into a clean EDM environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n parameters['packages'] = ' '.join(\n dependencies | extra_dependencies.get(toolkit, set()))\n # edm commands to setup the development environment\n commands = [\n \"edm environments create {environment} --force --version={runtime}\",\n \"edm install -y -e {environment} {packages}\",\n (\"edm run -e {environment} -- pip install -r ci/requirements.txt\"\n \" --no-dependencies\"),\n # Note that enable dependencies will be installed implicitly using pip\n (\"edm run -e {environment} -- \"\n \"pip install git+https://[email protected]/enthought/enable.git\"),\n \"edm run -e {environment} -- pip install . --no-deps\",\n ]\n # pip install pyside2, because we don't have it in EDM yet\n if toolkit == 'pyside2':\n commands.append(\n \"edm run -e {environment} -- pip install pyside2==5.11\"\n )\n \n click.echo(\"Creating environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done install')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef test(runtime, toolkit, environment):\n \"\"\" Run the test suite in a given environment with the specified toolkit.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n environ = environment_vars.get(toolkit, {}).copy()\n\n environ['PYTHONUNBUFFERED'] = \"1\"\n commands = [\n \"edm run -e {environment} -- coverage run -m unittest discover -v chaco\"\n ]\n\n cwd = os.getcwd()\n\n # We run in a tempdir to avoid accidentally picking up wrong traitsui\n # code from a local dir. 
We need to ensure a good .coveragerc is in\n # that directory, plus coverage has a bug that means a non-local coverage\n # file doesn't get populated correctly.\n click.echo(\"Running tests in '{environment}'\".format(**parameters))\n with do_in_tempdir(files=['.coveragerc'], capture_files=['./.coverage*']):\n os.environ.update(environ)\n execute(commands, parameters)\n\n click.echo('Done test')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef cleanup(runtime, toolkit, environment):\n \"\"\" Remove a development environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py clean\",\n \"edm environments remove {environment} --purge -y\",\n ]\n click.echo(\"Cleaning up environment '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done cleanup')\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\ndef test_clean(runtime, toolkit):\n \"\"\" Run tests in a clean environment, cleaning up afterwards\n \"\"\"\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n try:\n install(args=args, standalone_mode=False)\n test(args=args, standalone_mode=False)\n finally:\n cleanup(args=args, standalone_mode=False)\n\n\[email protected]()\[email protected]('--runtime', default='3.6')\[email protected]('--toolkit', default='null')\[email protected]('--environment', default=None)\ndef update(runtime, toolkit, environment):\n \"\"\" Update/Reinstall package into environment.\n \"\"\"\n parameters = get_parameters(runtime, toolkit, environment)\n commands = [\n \"edm run -e {environment} -- python setup.py install\"]\n click.echo(\"Re-installing in '{environment}'\".format(**parameters))\n execute(commands, parameters)\n click.echo('Done update')\n\n\[email protected]()\ndef test_all():\n \"\"\" Run test_clean across all supported environment combinations.\n \"\"\"\n for runtime, toolkits in supported_combinations.items():\n for toolkit in toolkits:\n args = ['--toolkit={}'.format(toolkit),\n '--runtime={}'.format(runtime)]\n test_clean(args, standalone_mode=True)\n\n\n# ----------------------------------------------------------------------------\n# Utility routines\n# ----------------------------------------------------------------------------\n\ndef get_parameters(runtime, toolkit, environment):\n \"\"\"Set up parameters dictionary for format() substitution\n \"\"\"\n parameters = {'runtime': runtime, 'toolkit': toolkit,\n 'environment': environment}\n if toolkit not in supported_combinations[runtime]:\n msg = (\"Python {runtime!r}, toolkit {toolkit!r}, \"\n \"not supported by test environments ({available})\")\n available = \", \".join(\n repr(tk) for tk in sorted(supported_combinations[runtime])\n )\n raise RuntimeError(msg.format(available=available, **parameters))\n if environment is None:\n tmpl = 'chaco-test-{runtime}-{toolkit}'\n environment = tmpl.format(**parameters)\n parameters['environment'] = environment\n return parameters\n\n\n@contextmanager\ndef do_in_tempdir(files=(), capture_files=()):\n \"\"\" Create a temporary directory, cleaning up after done.\n Creates the temporary directory, and changes into it. 
On exit returns to\n original directory and removes temporary dir.\n Parameters\n ----------\n files : sequence of filenames\n Files to be copied across to temporary directory.\n capture_files : sequence of filenames\n Files to be copied back from temporary directory.\n \"\"\"\n path = mkdtemp()\n old_path = os.getcwd()\n\n # send across any files we need\n for filepath in files:\n click.echo('copying file to tempdir: {}'.format(filepath))\n copyfile(filepath, path)\n\n os.chdir(path)\n try:\n yield path\n # retrieve any result files we want\n for pattern in capture_files:\n for filepath in glob.iglob(pattern):\n click.echo('copying file back: {}'.format(filepath))\n copyfile(filepath, old_path)\n finally:\n os.chdir(old_path)\n rmtree(path)\n\n\ndef execute(commands, parameters):\n for command in commands:\n print(\"[EXECUTING]\", command.format(**parameters))\n try:\n subprocess.check_call(command.format(**parameters).split())\n except subprocess.CalledProcessError:\n sys.exit(1)\n\n\nif __name__ == '__main__':\n cli()\n", "path": "ci/edmtool.py"}, {"content": "# Copyright (c) 2005-2014 by Enthought, Inc.\n# All rights reserved.\n\"\"\" Two-dimensional plotting application toolkit.\n Part of the Chaco project of the Enthought Tool Suite.\n\"\"\"\nfrom ._version import full_version as __version__ # noqa\n\n__requires__ = [\n 'enable',\n 'six'\n]\n", "path": "chaco/__init__.py"}]}
| 3,959 | 785 |
gh_patches_debug_15903
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-1093
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ShareOnline PrePaid is not recognized
As the title says. A PrePaid ShareOnline account is not recognized as a valid Premium account and therefore pyload asks for Captchas when trying to download a URL.


</issue>
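In code terms, the symptom means the account check treats only the "Premium" group as a premium account, so PrePaid users fall through to the free (captcha) path. A toy sketch of that mapping - the field name follows the plugin code shown below, and the two-group set is illustrative:

```python
# Toy illustration: map the API's "group" field to premium status so that
# PrePaid accounts are treated like Premium ones (names assumed).
PREMIUM_GROUPS = {"Premium", "PrePaid"}


def is_premium(api_fields):
    return api_fields.get("group") in PREMIUM_GROUPS


print(is_premium({"group": "PrePaid"}))  # True
print(is_premium({"group": "Free"}))     # False
```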
<code>
[start of module/plugins/accounts/ShareonlineBiz.py]
1 # -*- coding: utf-8 -*-
2
3 import re
4
5 from module.plugins.Account import Account
6
7
8 class ShareonlineBiz(Account):
9 __name__ = "ShareonlineBiz"
10 __type__ = "account"
11 __version__ = "0.30"
12
13 __description__ = """Share-online.biz account plugin"""
14 __license__ = "GPLv3"
15 __authors__ = [("Walter Purcaro", "[email protected]")]
16
17
18 def api_response(self, user, req):
19 return req.load("http://api.share-online.biz/cgi-bin",
20 get={'q' : "userdetails",
21 'aux' : "traffic",
22 'username': user,
23 'password': self.getAccountData(user)['password']})
24
25
26 def loadAccountInfo(self, user, req):
27 premium = False
28 validuntil = None
29 trafficleft = -1
30 maxtraffic = 100 * 1024 * 1024 * 1024 #: 100 GB
31
32 api = {}
33 for line in self.api_response(user, req).splitlines():
34 if "=" in line:
35 key, value = line.split("=")
36 api[key] = value
37
38 self.logDebug(api)
39
40 if api['a'].lower() != "not_available":
41 req.cj.setCookie("share-online.biz", 'a', api['a'])
42
43 premium = api['group'] == "Premium"
44
45 validuntil = float(api['expire_date'])
46
47 traffic = float(api['traffic_1d'].split(";")[0])
48 maxtraffic = max(maxtraffic, traffic)
49 trafficleft = maxtraffic - traffic
50
51 maxtraffic /= 1024 #@TODO: Remove `/ 1024` in 0.4.10
52 trafficleft /= 1024 #@TODO: Remove `/ 1024` in 0.4.10
53
54 return {'premium': premium, 'validuntil': validuntil, 'trafficleft': trafficleft, 'maxtraffic': maxtraffic}
55
56
57 def login(self, user, data, req):
58 html = self.api_response(user, req)
59 err = re.search(r'\*\*(.+?)\*\*', html)
60 if err:
61 self.logError(err.group(1))
62 self.wrongPassword()
63
[end of module/plugins/accounts/ShareonlineBiz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/module/plugins/accounts/ShareonlineBiz.py b/module/plugins/accounts/ShareonlineBiz.py
--- a/module/plugins/accounts/ShareonlineBiz.py
+++ b/module/plugins/accounts/ShareonlineBiz.py
@@ -8,7 +8,7 @@
class ShareonlineBiz(Account):
__name__ = "ShareonlineBiz"
__type__ = "account"
- __version__ = "0.30"
+ __version__ = "0.31"
__description__ = """Share-online.biz account plugin"""
__license__ = "GPLv3"
@@ -40,7 +40,7 @@
if api['a'].lower() != "not_available":
req.cj.setCookie("share-online.biz", 'a', api['a'])
- premium = api['group'] == "Premium"
+ premium = api['group'] in ["Premium", "PrePaid"]
validuntil = float(api['expire_date'])
|
{"golden_diff": "diff --git a/module/plugins/accounts/ShareonlineBiz.py b/module/plugins/accounts/ShareonlineBiz.py\n--- a/module/plugins/accounts/ShareonlineBiz.py\n+++ b/module/plugins/accounts/ShareonlineBiz.py\n@@ -8,7 +8,7 @@\n class ShareonlineBiz(Account):\n __name__ = \"ShareonlineBiz\"\n __type__ = \"account\"\n- __version__ = \"0.30\"\n+ __version__ = \"0.31\"\n \n __description__ = \"\"\"Share-online.biz account plugin\"\"\"\n __license__ = \"GPLv3\"\n@@ -40,7 +40,7 @@\n if api['a'].lower() != \"not_available\":\n req.cj.setCookie(\"share-online.biz\", 'a', api['a'])\n \n- premium = api['group'] == \"Premium\"\n+ premium = api['group'] in [\"Premium\", \"PrePaid\"]\n \n validuntil = float(api['expire_date'])\n", "issue": "ShareOnline PrePaid is not recognized\nAs the title says. A PrePaid ShareOnline Account is not recognized as valid Premium Account and therefore pyload asks for Captchas when trying to download an URL.\n\n\n\n\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.Account import Account\n\n\nclass ShareonlineBiz(Account):\n __name__ = \"ShareonlineBiz\"\n __type__ = \"account\"\n __version__ = \"0.30\"\n\n __description__ = \"\"\"Share-online.biz account plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n def api_response(self, user, req):\n return req.load(\"http://api.share-online.biz/cgi-bin\",\n get={'q' : \"userdetails\",\n 'aux' : \"traffic\",\n 'username': user,\n 'password': self.getAccountData(user)['password']})\n\n\n def loadAccountInfo(self, user, req):\n premium = False\n validuntil = None\n trafficleft = -1\n maxtraffic = 100 * 1024 * 1024 * 1024 #: 100 GB\n\n api = {}\n for line in self.api_response(user, req).splitlines():\n if \"=\" in line:\n key, value = line.split(\"=\")\n api[key] = value\n\n self.logDebug(api)\n\n if api['a'].lower() != \"not_available\":\n req.cj.setCookie(\"share-online.biz\", 'a', api['a'])\n\n premium = api['group'] == \"Premium\"\n\n validuntil = float(api['expire_date'])\n\n traffic = float(api['traffic_1d'].split(\";\")[0])\n maxtraffic = max(maxtraffic, traffic)\n trafficleft = maxtraffic - traffic\n\n maxtraffic /= 1024 #@TODO: Remove `/ 1024` in 0.4.10\n trafficleft /= 1024 #@TODO: Remove `/ 1024` in 0.4.10\n\n return {'premium': premium, 'validuntil': validuntil, 'trafficleft': trafficleft, 'maxtraffic': maxtraffic}\n\n\n def login(self, user, data, req):\n html = self.api_response(user, req)\n err = re.search(r'\\*\\*(.+?)\\*\\*', html)\n if err:\n self.logError(err.group(1))\n self.wrongPassword()\n", "path": "module/plugins/accounts/ShareonlineBiz.py"}]}
| 1,356 | 211 |
gh_patches_debug_3536
|
rasdani/github-patches
|
git_diff
|
mindsdb__mindsdb-1560
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add new method to count number of rows for PostgreSQL datasources :electric_plug: :1234:
When MindsDB creates a new PostgreSQL datasource, we get information for row counts by fetching all datasources. The problem here is that if the datasource is big it takes a lot of time. We need a new get_row_count method to return the number of rows per datasource. The PR should include this method inside the PostgreSQL class.
## Steps :male_detective: :female_detective:
- Implement in https://github.com/mindsdb/mindsdb/blob/stable/mindsdb/integrations/postgres/postgres.py#L37
- Example method:
```py
def get_row_count(self, query):
result = conn.execute(query)
return len(query)
```
- Push to staging branch
## Additional rewards :1st_place_medal:
Each code PR brings :three: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/
</issue>
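Note that the example method above returns `len(query)`, which is the length of the SQL string rather than a row count. The approach taken by the patch further down this record is to push the counting into SQL so large datasources are never fetched; a self-contained sketch of that idea, with a stand-in for the integration's `_query()` helper:

```python
class RowCountSketch:
    """Illustrative only; the real method lives on the PostgreSQL integration."""

    def _query(self, sql):
        # Stand-in for PostgreSQL._query(), which runs SQL and returns dict rows.
        raise NotImplementedError

    def get_row_count(self, query):
        # Let the database count the rows instead of transferring them all.
        sql = f"SELECT COUNT(*) AS count FROM ({query.rstrip(';')}) AS q"
        return self._query(sql)[0]["count"]
```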
<code>
[start of mindsdb/integrations/postgres/postgres.py]
1 from contextlib import closing
2 import pg8000
3
4 from lightwood.api import dtype
5 from mindsdb.integrations.base import Integration
6 from mindsdb.utilities.log import log
7
8
9 class PostgreSQLConnectionChecker:
10 def __init__(self, **kwargs):
11 self.host = kwargs.get('host')
12 self.port = kwargs.get('port')
13 self.user = kwargs.get('user')
14 self.password = kwargs.get('password')
15 self.database = kwargs.get('database', 'postgres')
16
17 def _get_connection(self):
18 return pg8000.connect(
19 database=self.database,
20 user=self.user,
21 password=self.password,
22 host=self.host,
23 port=self.port
24 )
25
26 def check_connection(self):
27 try:
28 con = self._get_connection()
29 with closing(con) as con:
30 con.run('select 1;')
31 connected = True
32 except Exception:
33 connected = False
34 return connected
35
36
37 class PostgreSQL(Integration, PostgreSQLConnectionChecker):
38 def __init__(self, config, name, db_info):
39 super().__init__(config, name)
40 self.user = db_info.get('user')
41 self.password = db_info.get('password')
42 self.host = db_info.get('host')
43 self.port = db_info.get('port')
44 self.database = db_info.get('database', 'postgres')
45
46 def _to_postgres_table(self, dtype_dict, predicted_cols, columns):
47 subtype_map = {
48 dtype.integer: ' int8',
49 dtype.float: 'float8',
50 dtype.binary: 'bool',
51 dtype.date: 'date',
52 dtype.datetime: 'timestamp',
53 dtype.binary: 'text',
54 dtype.categorical: 'text',
55 dtype.tags: 'text',
56 dtype.image: 'text',
57 dtype.video: 'text',
58 dtype.audio: 'text',
59 dtype.short_text: 'text',
60 dtype.rich_text: 'text',
61 dtype.array: 'text'
62 }
63
64 column_declaration = []
65 for name in columns:
66 try:
67 col_subtype = dtype_dict[name]
68 new_type = subtype_map[col_subtype]
69 column_declaration.append(f' "{name}" {new_type} ')
70 if name in predicted_cols:
71 column_declaration.append(f' "{name}_original" {new_type} ')
72 except Exception as e:
73 log.error(f'Error: can not determine postgres data type for column {name}: {e}')
74
75 return column_declaration
76
77 def _escape_table_name(self, name):
78 return '"' + name.replace('"', '""') + '"'
79
80 def _query(self, query):
81 con = self._get_connection()
82 with closing(con) as con:
83
84 cur = con.cursor()
85 res = True
86 cur.execute(query)
87
88 try:
89 rows = cur.fetchall()
90 keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]
91 res = [dict(zip(keys, row)) for row in rows]
92 except Exception:
93 pass
94
95 con.commit()
96
97 return res
98
99 def setup(self):
100 user = f"{self.config['api']['mysql']['user']}_{self.name}"
101 password = self.config['api']['mysql']['password']
102 host = self.config['api']['mysql']['host']
103 port = self.config['api']['mysql']['port']
104
105 try:
106 self._query('''
107 DO $$
108 begin
109 if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then
110 CREATE EXTENSION mysql_fdw;
111 end if;
112 END
113 $$;
114 ''')
115 except Exception:
116 print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')
117
118 self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')
119
120 self._query(f"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}")
121
122 self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')
123
124 self._query(f'''
125 CREATE SERVER server_{self.mindsdb_database}
126 FOREIGN DATA WRAPPER mysql_fdw
127 OPTIONS (host '{host}', port '{port}');
128 ''')
129
130 self._query(f'''
131 CREATE USER MAPPING FOR {self.user}
132 SERVER server_{self.mindsdb_database}
133 OPTIONS (username '{user}', password '{password}');
134 ''')
135
136 self._query(f'CREATE SCHEMA {self.mindsdb_database}')
137
138 q = f"""
139 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (
140 name text,
141 status text,
142 accuracy text,
143 predict text,
144 select_data_query text,
145 external_datasource text,
146 training_options text
147 )
148 SERVER server_{self.mindsdb_database}
149 OPTIONS (dbname 'mindsdb', table_name 'predictors');
150 """
151 self._query(q)
152
153 q = f"""
154 CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (
155 command text
156 ) SERVER server_{self.mindsdb_database}
157 OPTIONS (dbname 'mindsdb', table_name 'commands');
158 """
159 self._query(q)
160
161 def register_predictors(self, model_data_arr):
162 for model_meta in model_data_arr:
163 name = model_meta['name']
164 predict = model_meta['predict']
165 if not isinstance(predict, list):
166 predict = [predict]
167 columns_sql = ','.join(self._to_postgres_table(
168 model_meta['dtype_dict'],
169 predict,
170 list(model_meta['dtype_dict'].keys())
171 ))
172 columns_sql += ',"select_data_query" text'
173 columns_sql += ',"external_datasource" text'
174 for col in predict:
175 columns_sql += f',"{col}_confidence" float8'
176 if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):
177 columns_sql += f',"{col}_min" float8'
178 columns_sql += f',"{col}_max" float8'
179 columns_sql += f',"{col}_explain" text'
180
181 self.unregister_predictor(name)
182 q = f"""
183 CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (
184 {columns_sql}
185 ) SERVER server_{self.mindsdb_database}
186 OPTIONS (dbname 'mindsdb', table_name '{name}');
187 """
188 self._query(q)
189
190 def unregister_predictor(self, name):
191 q = f"""
192 DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};
193 """
194 self._query(q)
195
[end of mindsdb/integrations/postgres/postgres.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py
--- a/mindsdb/integrations/postgres/postgres.py
+++ b/mindsdb/integrations/postgres/postgres.py
@@ -192,3 +192,10 @@
DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};
"""
self._query(q)
+
+ def get_row_count(self, query):
+ q = f"""
+ SELECT COUNT(*) as count
+ FROM ({query}) as query;"""
+ result = self._query(q)
+ return result[0]['count']
|
{"golden_diff": "diff --git a/mindsdb/integrations/postgres/postgres.py b/mindsdb/integrations/postgres/postgres.py\n--- a/mindsdb/integrations/postgres/postgres.py\n+++ b/mindsdb/integrations/postgres/postgres.py\n@@ -192,3 +192,10 @@\n DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n+\n+ def get_row_count(self, query):\n+ q = f\"\"\" \n+ SELECT COUNT(*) as count\n+ FROM ({query}) as query;\"\"\"\n+ result = self._query(q)\n+ return result[0]['count']\n", "issue": "Add new method to count number of rows for PostgreSQL datasources :electric_plug: :1234: \nWhen MindsDB creates a new PostgreSQL datasource we get information for row counts by fetching all datasources. The problem here is that if datasource is big it takes a lot of time. We need a new get_row_count method to return the number of rows per datasource. The PR should include this method inside the PostgreSQL class .\r\n\r\n## Steps :male_detective: :female_detective: \r\n\r\n- Implement in https://github.com/mindsdb/mindsdb/blob/stable/mindsdb/integrations/postgres/postgres.py#L37\r\n- Example method:\r\n```py\r\ndef get_row_count(self, query):\r\n result = conn.execute(query)\r\n return len(query)\r\n```\r\n- Push to staging branch\r\n\r\n## Additional rewards :1st_place_medal: \r\n\r\nEach code PR brings :three: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/\r\n \r\n\r\n\n", "before_files": [{"content": "from contextlib import closing\nimport pg8000\n\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass PostgreSQLConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get('host')\n self.port = kwargs.get('port')\n self.user = kwargs.get('user')\n self.password = kwargs.get('password')\n self.database = kwargs.get('database', 'postgres')\n\n def _get_connection(self):\n return pg8000.connect(\n database=self.database,\n user=self.user,\n password=self.password,\n host=self.host,\n port=self.port\n )\n\n def check_connection(self):\n try:\n con = self._get_connection()\n with closing(con) as con:\n con.run('select 1;')\n connected = True\n except Exception:\n connected = False\n return connected\n\n\nclass PostgreSQL(Integration, PostgreSQLConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user')\n self.password = db_info.get('password')\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n self.database = db_info.get('database', 'postgres')\n\n def _to_postgres_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: ' int8',\n dtype.float: 'float8',\n dtype.binary: 'bool',\n dtype.date: 'date',\n dtype.datetime: 'timestamp',\n dtype.binary: 'text',\n dtype.categorical: 'text',\n dtype.tags: 'text',\n dtype.image: 'text',\n dtype.video: 'text',\n dtype.audio: 'text',\n dtype.short_text: 'text',\n dtype.rich_text: 'text',\n dtype.array: 'text'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' \"{name}\" {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' \"{name}_original\" {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine postgres data type for 
column {name}: {e}')\n\n return column_declaration\n\n def _escape_table_name(self, name):\n return '\"' + name.replace('\"', '\"\"') + '\"'\n\n def _query(self, query):\n con = self._get_connection()\n with closing(con) as con:\n\n cur = con.cursor()\n res = True\n cur.execute(query)\n\n try:\n rows = cur.fetchall()\n keys = [k[0] if isinstance(k[0], str) else k[0].decode('ascii') for k in cur.description]\n res = [dict(zip(keys, row)) for row in rows]\n except Exception:\n pass\n\n con.commit()\n\n return res\n\n def setup(self):\n user = f\"{self.config['api']['mysql']['user']}_{self.name}\"\n password = self.config['api']['mysql']['password']\n host = self.config['api']['mysql']['host']\n port = self.config['api']['mysql']['port']\n\n try:\n self._query('''\n DO $$\n begin\n if not exists (SELECT 1 FROM pg_extension where extname = 'mysql_fdw') then\n CREATE EXTENSION mysql_fdw;\n end if;\n END\n $$;\n ''')\n except Exception:\n print('Error: cant find or activate mysql_fdw extension for PostgreSQL.')\n\n self._query(f'DROP SCHEMA IF EXISTS {self.mindsdb_database} CASCADE')\n\n self._query(f\"DROP USER MAPPING IF EXISTS FOR {self.user} SERVER server_{self.mindsdb_database}\")\n\n self._query(f'DROP SERVER IF EXISTS server_{self.mindsdb_database} CASCADE')\n\n self._query(f'''\n CREATE SERVER server_{self.mindsdb_database}\n FOREIGN DATA WRAPPER mysql_fdw\n OPTIONS (host '{host}', port '{port}');\n ''')\n\n self._query(f'''\n CREATE USER MAPPING FOR {self.user}\n SERVER server_{self.mindsdb_database}\n OPTIONS (username '{user}', password '{password}');\n ''')\n\n self._query(f'CREATE SCHEMA {self.mindsdb_database}')\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name text,\n status text,\n accuracy text,\n predict text,\n select_data_query text,\n external_datasource text,\n training_options text\n )\n SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'predictors');\n \"\"\"\n self._query(q)\n\n q = f\"\"\"\n CREATE FOREIGN TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command text\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name 'commands');\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = model_meta['name']\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n columns_sql = ','.join(self._to_postgres_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',\"select_data_query\" text'\n columns_sql += ',\"external_datasource\" text'\n for col in predict:\n columns_sql += f',\"{col}_confidence\" float8'\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',\"{col}_min\" float8'\n columns_sql += f',\"{col}_max\" float8'\n columns_sql += f',\"{col}_explain\" text'\n\n self.unregister_predictor(name)\n q = f\"\"\"\n CREATE FOREIGN TABLE {self.mindsdb_database}.{self._escape_table_name(name)} (\n {columns_sql}\n ) SERVER server_{self.mindsdb_database}\n OPTIONS (dbname 'mindsdb', table_name '{name}');\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n DROP FOREIGN TABLE IF EXISTS {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n", "path": "mindsdb/integrations/postgres/postgres.py"}]}
| 2,690 | 153 |
gh_patches_debug_21992
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-3685
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow patching indicator with empty list of dimension names
</issue>
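The one-liner refers to the REST serializer below; the patch later in this record makes `dimension_names` an explicitly writable relation on `IndicatorSerializer` so that a `PATCH` carrying `"dimension_names": []` is accepted. A sketch of that kind of declaration (runnable only inside the project, since it needs the Django models; treat it as illustrative rather than the merged change):

```python
from rest_framework import serializers

from akvo.rsr.models import Indicator, IndicatorDimensionName


class IndicatorWriteSerializer(serializers.ModelSerializer):
    # Writable many-to-many relation: PATCH {"dimension_names": []} clears it.
    dimension_names = serializers.PrimaryKeyRelatedField(
        many=True, queryset=IndicatorDimensionName.objects.all()
    )

    class Meta:
        model = Indicator
        fields = '__all__'
```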
<code>
[start of akvo/rest/serializers/indicator.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from akvo.rest.serializers.indicator_period import IndicatorPeriodFrameworkSerializer
8 from akvo.rest.serializers.indicator_dimension_name import IndicatorDimensionNameSerializer
9 from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer
10 from akvo.rsr.models import Indicator
11
12 from rest_framework import serializers
13
14
15 class IndicatorSerializer(BaseRSRSerializer):
16
17 result_unicode = serializers.ReadOnlyField(source='result.__unicode__')
18 measure_label = serializers.ReadOnlyField(source='iati_measure_unicode')
19 children_aggregate_percentage = serializers.ReadOnlyField()
20
21 class Meta:
22 model = Indicator
23 fields = '__all__'
24
25 # TODO: add validation for parent_indicator
26
27
28 class IndicatorFrameworkSerializer(BaseRSRSerializer):
29
30 periods = IndicatorPeriodFrameworkSerializer(many=True, required=False, read_only=True)
31 parent_indicator = serializers.ReadOnlyField(source='parent_indicator_id')
32 children_aggregate_percentage = serializers.ReadOnlyField()
33 dimension_names = IndicatorDimensionNameSerializer(many=True, required=False, read_only=True)
34
35 class Meta:
36 model = Indicator
37 fields = '__all__'
38
[end of akvo/rest/serializers/indicator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rest/serializers/indicator.py b/akvo/rest/serializers/indicator.py
--- a/akvo/rest/serializers/indicator.py
+++ b/akvo/rest/serializers/indicator.py
@@ -7,7 +7,7 @@
from akvo.rest.serializers.indicator_period import IndicatorPeriodFrameworkSerializer
from akvo.rest.serializers.indicator_dimension_name import IndicatorDimensionNameSerializer
from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer
-from akvo.rsr.models import Indicator
+from akvo.rsr.models import Indicator, IndicatorDimensionName
from rest_framework import serializers
@@ -17,6 +17,8 @@
result_unicode = serializers.ReadOnlyField(source='result.__unicode__')
measure_label = serializers.ReadOnlyField(source='iati_measure_unicode')
children_aggregate_percentage = serializers.ReadOnlyField()
+ dimension_names = serializers.PrimaryKeyRelatedField(
+ many=True, queryset=IndicatorDimensionName.objects.all())
class Meta:
model = Indicator
|
{"golden_diff": "diff --git a/akvo/rest/serializers/indicator.py b/akvo/rest/serializers/indicator.py\n--- a/akvo/rest/serializers/indicator.py\n+++ b/akvo/rest/serializers/indicator.py\n@@ -7,7 +7,7 @@\n from akvo.rest.serializers.indicator_period import IndicatorPeriodFrameworkSerializer\n from akvo.rest.serializers.indicator_dimension_name import IndicatorDimensionNameSerializer\n from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\n-from akvo.rsr.models import Indicator\n+from akvo.rsr.models import Indicator, IndicatorDimensionName\n \n from rest_framework import serializers\n \n@@ -17,6 +17,8 @@\n result_unicode = serializers.ReadOnlyField(source='result.__unicode__')\n measure_label = serializers.ReadOnlyField(source='iati_measure_unicode')\n children_aggregate_percentage = serializers.ReadOnlyField()\n+ dimension_names = serializers.PrimaryKeyRelatedField(\n+ many=True, queryset=IndicatorDimensionName.objects.all())\n \n class Meta:\n model = Indicator\n", "issue": "Allow patching indicator with empty list of dimension names\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom akvo.rest.serializers.indicator_period import IndicatorPeriodFrameworkSerializer\nfrom akvo.rest.serializers.indicator_dimension_name import IndicatorDimensionNameSerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\nfrom akvo.rsr.models import Indicator\n\nfrom rest_framework import serializers\n\n\nclass IndicatorSerializer(BaseRSRSerializer):\n\n result_unicode = serializers.ReadOnlyField(source='result.__unicode__')\n measure_label = serializers.ReadOnlyField(source='iati_measure_unicode')\n children_aggregate_percentage = serializers.ReadOnlyField()\n\n class Meta:\n model = Indicator\n fields = '__all__'\n\n # TODO: add validation for parent_indicator\n\n\nclass IndicatorFrameworkSerializer(BaseRSRSerializer):\n\n periods = IndicatorPeriodFrameworkSerializer(many=True, required=False, read_only=True)\n parent_indicator = serializers.ReadOnlyField(source='parent_indicator_id')\n children_aggregate_percentage = serializers.ReadOnlyField()\n dimension_names = IndicatorDimensionNameSerializer(many=True, required=False, read_only=True)\n\n class Meta:\n model = Indicator\n fields = '__all__'\n", "path": "akvo/rest/serializers/indicator.py"}]}
| 918 | 220 |
gh_patches_debug_22683
|
rasdani/github-patches
|
git_diff
|
HypothesisWorks__hypothesis-3051
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ghostwriter error on attrs class
While playing around with the ghostwriter today, I got an exception like this one running it against some internal code:
```
Traceback (most recent call last):
File "/tmp/repro/venv/bin/hypothesis", line 8, in <module>
sys.exit(main())
File "/tmp/repro/venv/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/tmp/repro/venv/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/tmp/repro/venv/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/tmp/repro/venv/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmp/repro/venv/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/cli.py", line 242, in write
code = getattr(ghostwriter, writer)(*func, except_=except_ or (), style=style)
File "/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py", line 882, in magic
make_(
File "/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py", line 802, in make_
imp, body = how(*args, **kwargs, except_=except_, style=style)
File "/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py", line 626, in _make_test_body
given_strategies = given_strategies or _get_strategies(
File "/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py", line 409, in _get_strategies
if strat.args:
AttributeError: 'MappedSearchStrategy' object has no attribute 'args'
```
I reduced the input to trigger this exception down to
```python
import attr
@attr.s()
class Foo:
foo: str = attr.ib()
```
I saw this with CPython 3.9.6, attrs 21.2.0, and the hypothesis master branch at hypothesis-python-6.14.5-5-g9d4da6c2f.
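For what it's worth, the failure can presumably also be reproduced without the CLI by calling the public ghostwriter entry point named in the traceback. This is only a sketch; it assumes `magic()` accepts the class directly, the same way `hypothesis write` resolves it from the command line:
```python
# Hedged, CLI-free reproduction via hypothesis.extra.ghostwriter.magic, the
# entry point shown in the traceback. On affected versions this is expected to
# raise the same AttributeError from _get_strategies().
import attr

from hypothesis.extra import ghostwriter


@attr.s()
class Foo:
    foo: str = attr.ib()


print(ghostwriter.magic(Foo))
```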
</issue>
<code>
[start of hypothesis-python/src/hypothesis/strategies/_internal/attrs.py]
1 # This file is part of Hypothesis, which may be found at
2 # https://github.com/HypothesisWorks/hypothesis/
3 #
4 # Most of this work is copyright (C) 2013-2021 David R. MacIver
5 # ([email protected]), but it contains contributions by others. See
6 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
7 # consult the git log if you need to determine who owns an individual
8 # contribution.
9 #
10 # This Source Code Form is subject to the terms of the Mozilla Public License,
11 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
12 # obtain one at https://mozilla.org/MPL/2.0/.
13 #
14 # END HEADER
15
16 from functools import reduce
17 from itertools import chain
18
19 import attr
20
21 from hypothesis import strategies as st
22 from hypothesis.errors import ResolutionFailed
23 from hypothesis.internal.compat import get_type_hints
24 from hypothesis.strategies._internal.types import is_a_type, type_sorting_key
25 from hypothesis.utils.conventions import infer
26
27
28 def from_attrs(target, args, kwargs, to_infer):
29 """An internal version of builds(), specialised for Attrs classes."""
30 fields = attr.fields(target)
31 kwargs = {k: v for k, v in kwargs.items() if v is not infer}
32 for name in to_infer:
33 kwargs[name] = from_attrs_attribute(getattr(fields, name), target)
34 # We might make this strategy more efficient if we added a layer here that
35 # retries drawing if validation fails, for improved composition.
36 # The treatment of timezones in datetimes() provides a precedent.
37 return st.tuples(st.tuples(*args), st.fixed_dictionaries(kwargs)).map(
38 lambda value: target(*value[0], **value[1])
39 )
40
41
42 def from_attrs_attribute(attrib, target):
43 """Infer a strategy from the metadata on an attr.Attribute object."""
44 # Try inferring from the default argument. Note that this will only help if
45 # the user passed `infer` to builds() for this attribute, but in that case
46 # we use it as the minimal example.
47 default = st.nothing()
48 if isinstance(attrib.default, attr.Factory):
49 if not attrib.default.takes_self:
50 default = st.builds(attrib.default.factory)
51 elif attrib.default is not attr.NOTHING:
52 default = st.just(attrib.default)
53
54 # Try inferring None, exact values, or type from attrs provided validators.
55 null = st.nothing() # updated to none() on seeing an OptionalValidator
56 in_collections = [] # list of in_ validator collections to sample from
57 validator_types = set() # type constraints to pass to types_to_strategy()
58 if attrib.validator is not None:
59 validator = attrib.validator
60 if isinstance(validator, attr.validators._OptionalValidator):
61 null = st.none()
62 validator = validator.validator
63 if isinstance(validator, attr.validators._AndValidator):
64 vs = validator._validators
65 else:
66 vs = [validator]
67 for v in vs:
68 if isinstance(v, attr.validators._InValidator):
69 if isinstance(v.options, str):
70 in_collections.append(list(all_substrings(v.options)))
71 else:
72 in_collections.append(v.options)
73 elif isinstance(v, attr.validators._InstanceOfValidator):
74 validator_types.add(v.type)
75
76 # This is the important line. We compose the final strategy from various
77 # parts. The default value, if any, is the minimal shrink, followed by
78 # None (again, if allowed). We then prefer to sample from values passed
79 # to an in_ validator if available, but infer from a type otherwise.
80 # Pick one because (sampled_from((1, 2)) | from_type(int)) would usually
81 # fail validation by generating e.g. zero!
82 if in_collections:
83 sample = st.sampled_from(list(ordered_intersection(in_collections)))
84 strat = default | null | sample
85 else:
86 strat = default | null | types_to_strategy(attrib, validator_types)
87
88 # Better to give a meaningful error here than an opaque "could not draw"
89 # when we try to get a value but have lost track of where this was created.
90 if strat.is_empty:
91 raise ResolutionFailed(
92 "Cannot infer a strategy from the default, validator, type, or "
93 "converter for attribute=%r of class=%r" % (attrib, target)
94 )
95 return strat
96
97
98 def types_to_strategy(attrib, types):
99 """Find all the type metadata for this attribute, reconcile it, and infer a
100 strategy from the mess."""
101 # If we know types from the validator(s), that's sufficient.
102 if len(types) == 1:
103 (typ,) = types
104 if isinstance(typ, tuple):
105 return st.one_of(*map(st.from_type, typ))
106 return st.from_type(typ)
107 elif types:
108 # We have a list of tuples of types, and want to find a type
109 # (or tuple of types) that is a subclass of all of of them.
110 type_tuples = [k if isinstance(k, tuple) else (k,) for k in types]
111 # Flatten the list, filter types that would fail validation, and
112 # sort so that ordering is stable between runs and shrinks well.
113 allowed = [
114 t
115 for t in set(sum(type_tuples, ()))
116 if all(issubclass(t, tup) for tup in type_tuples)
117 ]
118 allowed.sort(key=type_sorting_key)
119 return st.one_of([st.from_type(t) for t in allowed])
120
121 # Otherwise, try the `type` attribute as a fallback, and finally try
122 # the type hints on a converter (desperate!) before giving up.
123 if is_a_type(getattr(attrib, "type", None)):
124 # The convoluted test is because variable annotations may be stored
125 # in string form; attrs doesn't evaluate them and we don't handle them.
126 # See PEP 526, PEP 563, and Hypothesis issue #1004 for details.
127 return st.from_type(attrib.type)
128
129 converter = getattr(attrib, "converter", None)
130 if isinstance(converter, type):
131 return st.from_type(converter)
132 elif callable(converter):
133 hints = get_type_hints(converter)
134 if "return" in hints:
135 return st.from_type(hints["return"])
136
137 return st.nothing()
138
139
140 def ordered_intersection(in_):
141 """Set union of n sequences, ordered for reproducibility across runs."""
142 intersection = reduce(set.intersection, in_, set(in_[0]))
143 for x in chain.from_iterable(in_):
144 if x in intersection:
145 yield x
146 intersection.remove(x)
147
148
149 def all_substrings(s):
150 """Generate all substrings of `s`, in order of length then occurrence.
151 Includes the empty string (first), and any duplicates that are present.
152
153 >>> list(all_substrings('010'))
154 ['', '0', '1', '0', '01', '10', '010']
155 """
156 yield s[:0]
157 for n, _ in enumerate(s):
158 for i in range(len(s) - n):
159 yield s[i : i + n + 1]
160
[end of hypothesis-python/src/hypothesis/strategies/_internal/attrs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py b/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py
--- a/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py
+++ b/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py
@@ -21,6 +21,7 @@
from hypothesis import strategies as st
from hypothesis.errors import ResolutionFailed
from hypothesis.internal.compat import get_type_hints
+from hypothesis.strategies._internal.core import BuildsStrategy
from hypothesis.strategies._internal.types import is_a_type, type_sorting_key
from hypothesis.utils.conventions import infer
@@ -34,9 +35,7 @@
# We might make this strategy more efficient if we added a layer here that
# retries drawing if validation fails, for improved composition.
# The treatment of timezones in datetimes() provides a precedent.
- return st.tuples(st.tuples(*args), st.fixed_dictionaries(kwargs)).map(
- lambda value: target(*value[0], **value[1])
- )
+ return BuildsStrategy(target, args, kwargs)
def from_attrs_attribute(attrib, target):
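The switch to `BuildsStrategy` presumably matters because the ghostwriter's `_get_strategies` helper reads `strat.args` (the failing line in the traceback), which the previous `.map()`-wrapped tuples strategy did not expose. Once `from_attrs()` returns an introspectable strategy, the generated test should look roughly like the sketch below; the exact emitted text is an assumption, and `repro` stands for a module containing the class from the issue:
```python
# Approximate shape of the test the ghostwriter is expected to emit for the
# repro class after the fix. Assumes the issue's snippet is saved as repro.py
# on the import path; the generated body may differ in detail.
from hypothesis import given, strategies as st

import repro


@given(foo=st.text())
def test_fuzz_Foo(foo):
    repro.Foo(foo=foo)
```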
|
{"golden_diff": "diff --git a/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py b/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py\n--- a/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py\n+++ b/hypothesis-python/src/hypothesis/strategies/_internal/attrs.py\n@@ -21,6 +21,7 @@\n from hypothesis import strategies as st\n from hypothesis.errors import ResolutionFailed\n from hypothesis.internal.compat import get_type_hints\n+from hypothesis.strategies._internal.core import BuildsStrategy\n from hypothesis.strategies._internal.types import is_a_type, type_sorting_key\n from hypothesis.utils.conventions import infer\n \n@@ -34,9 +35,7 @@\n # We might make this strategy more efficient if we added a layer here that\n # retries drawing if validation fails, for improved composition.\n # The treatment of timezones in datetimes() provides a precedent.\n- return st.tuples(st.tuples(*args), st.fixed_dictionaries(kwargs)).map(\n- lambda value: target(*value[0], **value[1])\n- )\n+ return BuildsStrategy(target, args, kwargs)\n \n \n def from_attrs_attribute(attrib, target):\n", "issue": "Ghostwriter error on attrs class\nWhile playing around with the ghostwriter today, I got an exception like this one running it against some internal code:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/tmp/repro/venv/bin/hypothesis\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/click/core.py\", line 1137, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/click/core.py\", line 1062, in main\r\n rv = self.invoke(ctx)\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/click/core.py\", line 1668, in invoke\r\n return _process_result(sub_ctx.command.invoke(sub_ctx))\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/click/core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/click/core.py\", line 763, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/cli.py\", line 242, in write\r\n code = getattr(ghostwriter, writer)(*func, except_=except_ or (), style=style)\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py\", line 882, in magic\r\n make_(\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py\", line 802, in make_\r\n imp, body = how(*args, **kwargs, except_=except_, style=style)\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py\", line 626, in _make_test_body\r\n given_strategies = given_strategies or _get_strategies(\r\n File \"/tmp/repro/venv/lib/python3.9/site-packages/hypothesis/extra/ghostwriter.py\", line 409, in _get_strategies\r\n if strat.args:\r\nAttributeError: 'MappedSearchStrategy' object has no attribute 'args'\r\n```\r\n\r\nI reduced the input to trigger this exception down to\r\n\r\n```python\r\nimport attr\r\n\r\[email protected]()\r\nclass Foo:\r\n foo: str = attr.ib()\r\n```\r\n\r\nI saw this with cPython 3.9.6, attrs 21.2.0, and hypothesis master branch at hypothesis-python-6.14.5-5-g9d4da6c2f. \n", "before_files": [{"content": "# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2021 David R. MacIver\n# ([email protected]), but it contains contributions by others. 
See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom functools import reduce\nfrom itertools import chain\n\nimport attr\n\nfrom hypothesis import strategies as st\nfrom hypothesis.errors import ResolutionFailed\nfrom hypothesis.internal.compat import get_type_hints\nfrom hypothesis.strategies._internal.types import is_a_type, type_sorting_key\nfrom hypothesis.utils.conventions import infer\n\n\ndef from_attrs(target, args, kwargs, to_infer):\n \"\"\"An internal version of builds(), specialised for Attrs classes.\"\"\"\n fields = attr.fields(target)\n kwargs = {k: v for k, v in kwargs.items() if v is not infer}\n for name in to_infer:\n kwargs[name] = from_attrs_attribute(getattr(fields, name), target)\n # We might make this strategy more efficient if we added a layer here that\n # retries drawing if validation fails, for improved composition.\n # The treatment of timezones in datetimes() provides a precedent.\n return st.tuples(st.tuples(*args), st.fixed_dictionaries(kwargs)).map(\n lambda value: target(*value[0], **value[1])\n )\n\n\ndef from_attrs_attribute(attrib, target):\n \"\"\"Infer a strategy from the metadata on an attr.Attribute object.\"\"\"\n # Try inferring from the default argument. Note that this will only help if\n # the user passed `infer` to builds() for this attribute, but in that case\n # we use it as the minimal example.\n default = st.nothing()\n if isinstance(attrib.default, attr.Factory):\n if not attrib.default.takes_self:\n default = st.builds(attrib.default.factory)\n elif attrib.default is not attr.NOTHING:\n default = st.just(attrib.default)\n\n # Try inferring None, exact values, or type from attrs provided validators.\n null = st.nothing() # updated to none() on seeing an OptionalValidator\n in_collections = [] # list of in_ validator collections to sample from\n validator_types = set() # type constraints to pass to types_to_strategy()\n if attrib.validator is not None:\n validator = attrib.validator\n if isinstance(validator, attr.validators._OptionalValidator):\n null = st.none()\n validator = validator.validator\n if isinstance(validator, attr.validators._AndValidator):\n vs = validator._validators\n else:\n vs = [validator]\n for v in vs:\n if isinstance(v, attr.validators._InValidator):\n if isinstance(v.options, str):\n in_collections.append(list(all_substrings(v.options)))\n else:\n in_collections.append(v.options)\n elif isinstance(v, attr.validators._InstanceOfValidator):\n validator_types.add(v.type)\n\n # This is the important line. We compose the final strategy from various\n # parts. The default value, if any, is the minimal shrink, followed by\n # None (again, if allowed). We then prefer to sample from values passed\n # to an in_ validator if available, but infer from a type otherwise.\n # Pick one because (sampled_from((1, 2)) | from_type(int)) would usually\n # fail validation by generating e.g. 
zero!\n if in_collections:\n sample = st.sampled_from(list(ordered_intersection(in_collections)))\n strat = default | null | sample\n else:\n strat = default | null | types_to_strategy(attrib, validator_types)\n\n # Better to give a meaningful error here than an opaque \"could not draw\"\n # when we try to get a value but have lost track of where this was created.\n if strat.is_empty:\n raise ResolutionFailed(\n \"Cannot infer a strategy from the default, validator, type, or \"\n \"converter for attribute=%r of class=%r\" % (attrib, target)\n )\n return strat\n\n\ndef types_to_strategy(attrib, types):\n \"\"\"Find all the type metadata for this attribute, reconcile it, and infer a\n strategy from the mess.\"\"\"\n # If we know types from the validator(s), that's sufficient.\n if len(types) == 1:\n (typ,) = types\n if isinstance(typ, tuple):\n return st.one_of(*map(st.from_type, typ))\n return st.from_type(typ)\n elif types:\n # We have a list of tuples of types, and want to find a type\n # (or tuple of types) that is a subclass of all of of them.\n type_tuples = [k if isinstance(k, tuple) else (k,) for k in types]\n # Flatten the list, filter types that would fail validation, and\n # sort so that ordering is stable between runs and shrinks well.\n allowed = [\n t\n for t in set(sum(type_tuples, ()))\n if all(issubclass(t, tup) for tup in type_tuples)\n ]\n allowed.sort(key=type_sorting_key)\n return st.one_of([st.from_type(t) for t in allowed])\n\n # Otherwise, try the `type` attribute as a fallback, and finally try\n # the type hints on a converter (desperate!) before giving up.\n if is_a_type(getattr(attrib, \"type\", None)):\n # The convoluted test is because variable annotations may be stored\n # in string form; attrs doesn't evaluate them and we don't handle them.\n # See PEP 526, PEP 563, and Hypothesis issue #1004 for details.\n return st.from_type(attrib.type)\n\n converter = getattr(attrib, \"converter\", None)\n if isinstance(converter, type):\n return st.from_type(converter)\n elif callable(converter):\n hints = get_type_hints(converter)\n if \"return\" in hints:\n return st.from_type(hints[\"return\"])\n\n return st.nothing()\n\n\ndef ordered_intersection(in_):\n \"\"\"Set union of n sequences, ordered for reproducibility across runs.\"\"\"\n intersection = reduce(set.intersection, in_, set(in_[0]))\n for x in chain.from_iterable(in_):\n if x in intersection:\n yield x\n intersection.remove(x)\n\n\ndef all_substrings(s):\n \"\"\"Generate all substrings of `s`, in order of length then occurrence.\n Includes the empty string (first), and any duplicates that are present.\n\n >>> list(all_substrings('010'))\n ['', '0', '1', '0', '01', '10', '010']\n \"\"\"\n yield s[:0]\n for n, _ in enumerate(s):\n for i in range(len(s) - n):\n yield s[i : i + n + 1]\n", "path": "hypothesis-python/src/hypothesis/strategies/_internal/attrs.py"}]}
| 3,134 | 265 |
gh_patches_debug_35419
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-99
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remove all of manim/files/
It contains svg files specific to 3b1b.
Remove OpenCV and SceneFromVideo
OpenCV does not seem to be used anywhere other than in SceneFromVideo, which seems to be broken.
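A quick way to double-check that claim before deleting anything could be a scan along these lines (a rough sketch; the package path and search patterns are assumptions, run it from the repository root):
```python
# Rough check that cv2/OpenCV is referenced only from scene_from_video.py
# before the dependency is dropped from setup.py.
import pathlib

for path in sorted(pathlib.Path("manim").rglob("*.py")):
    text = path.read_text(encoding="utf-8", errors="ignore")
    if "cv2" in text or "opencv" in text.lower():
        print(path)
```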
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_namespace_packages
2
3 setup(
4 name="manimlib",
5 version="0.2.0",
6 description="Animation engine for explanatory math videos",
7 license="MIT",
8 packages=find_namespace_packages(),
9 package_data={ "manim": ["*.tex"] },
10 entry_points={
11 "console_scripts": [
12 "manim=manim.__main__:main",
13 "manimcm=manim.__main__:main",
14 ]
15 },
16 install_requires=[
17 "argparse",
18 "colour",
19 "numpy",
20 "Pillow",
21 "progressbar",
22 "scipy",
23 "tqdm",
24 "opencv-python",
25 "pycairo",
26 "pydub",
27 "pygments",
28 "pyreadline; sys_platform == 'win32'",
29 "rich",
30 ],
31 )
32
[end of setup.py]
[start of manim/__init__.py]
1 #!/usr/bin/env python
2 from .constants import *
3
4 from .animation.animation import *
5 from .animation.composition import *
6 from .animation.creation import *
7 from .animation.fading import *
8 from .animation.growing import *
9 from .animation.indication import *
10 from .animation.movement import *
11 from .animation.numbers import *
12 from .animation.rotation import *
13 from .animation.specialized import *
14 from .animation.transform import *
15 from .animation.update import *
16
17 from .camera.camera import *
18 from .camera.mapping_camera import *
19 from .camera.moving_camera import *
20 from .camera.three_d_camera import *
21
22 from .mobject.coordinate_systems import *
23 from .mobject.changing import *
24 from .mobject.frame import *
25 from .mobject.functions import *
26 from .mobject.geometry import *
27 from .mobject.matrix import *
28 from .mobject.mobject import *
29 from .mobject.number_line import *
30 from .mobject.numbers import *
31 from .mobject.probability import *
32 from .mobject.shape_matchers import *
33 from .mobject.svg.brace import *
34 from .mobject.svg.drawings import *
35 from .mobject.svg.svg_mobject import *
36 from .mobject.svg.tex_mobject import *
37 from .mobject.svg.text_mobject import *
38 from .mobject.svg.code_mobject import *
39 from .mobject.three_d_utils import *
40 from .mobject.three_dimensions import *
41 from .mobject.types.image_mobject import *
42 from .mobject.types.point_cloud_mobject import *
43 from .mobject.types.vectorized_mobject import *
44 from .mobject.mobject_update_utils import *
45 from .mobject.value_tracker import *
46 from .mobject.vector_field import *
47
48 from .scene.graph_scene import *
49 from .scene.moving_camera_scene import *
50 from .scene.reconfigurable_scene import *
51 from .scene.scene import *
52 from .scene.sample_space_scene import *
53 from .scene.graph_scene import *
54 from .scene.scene_from_video import *
55 from .scene.three_d_scene import *
56 from .scene.vector_space_scene import *
57 from .scene.zoomed_scene import *
58
59 from .utils.bezier import *
60 from .utils.color import *
61 from .utils.config_ops import *
62 from .utils.debug import *
63 from .utils.images import *
64 from .utils.iterables import *
65 from .utils.file_ops import *
66 from .utils.paths import *
67 from .utils.rate_functions import *
68 from .utils.simple_functions import *
69 from .utils.sounds import *
70 from .utils.space_ops import *
71 from .utils.strings import *
72
[end of manim/__init__.py]
[start of manim/scene/scene_from_video.py]
1 # from tqdm import tqdm as show_progress
2 # import cv2
3
4 # from ..scene.scene import Scene
5 # from ..logger import logger
6
7
8 # # TODO, is this depricated?
9 # class SceneFromVideo(Scene):
10 # def construct(self, file_name,
11 # freeze_last_frame=True,
12 # time_range=None):
13 # cap = cv2.VideoCapture(file_name)
14 # self.shape = (
15 # int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)),
16 # int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
17 # )
18 # fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)
19 # self.camera.frame_rate = fps
20 # frame_count = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))
21 # if time_range is None:
22 # start_frame = 0
23 # end_frame = frame_count
24 # else:
25 # start_frame, end_frame = [fps * t for t in time_range]
26
27 # frame_count = end_frame - start_frame
28 # logger.info("Reading in " + file_name + "...")
29 # for count in show_progress(list(range(start_frame, end_frame + 1))):
30 # returned, frame = cap.read()
31 # if not returned:
32 # break
33 # # b, g, r = cv2.split(frame)
34 # # self.frames.append(cv2.merge([r, g, b]))
35 # self.frames.append(frame)
36 # cap.release()
37
38 # if freeze_last_frame and len(self.frames) > 0:
39 # self.original_background = self.background = self.frames[-1]
40
41 # def apply_gaussian_blur(self, ksize=(5, 5), sigmaX=5):
42 # self.frames = [
43 # cv2.GaussianBlur(frame, ksize, sigmaX)
44 # for frame in self.frames
45 # ]
46
47 # def apply_edge_detection(self, threshold1=50, threshold2=100):
48 # edged_frames = [
49 # cv2.Canny(frame, threshold1, threshold2)
50 # for frame in self.frames
51 # ]
52 # for index in range(len(self.frames)):
53 # for i in range(3):
54 # self.frames[index][:, :, i] = edged_frames[index]
[end of manim/scene/scene_from_video.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/manim/__init__.py b/manim/__init__.py
--- a/manim/__init__.py
+++ b/manim/__init__.py
@@ -51,7 +51,6 @@
from .scene.scene import *
from .scene.sample_space_scene import *
from .scene.graph_scene import *
-from .scene.scene_from_video import *
from .scene.three_d_scene import *
from .scene.vector_space_scene import *
from .scene.zoomed_scene import *
diff --git a/manim/scene/scene_from_video.py b/manim/scene/scene_from_video.py
deleted file mode 100644
--- a/manim/scene/scene_from_video.py
+++ /dev/null
@@ -1,54 +0,0 @@
-# from tqdm import tqdm as show_progress
-# import cv2
-
-# from ..scene.scene import Scene
-# from ..logger import logger
-
-
-# # TODO, is this depricated?
-# class SceneFromVideo(Scene):
-# def construct(self, file_name,
-# freeze_last_frame=True,
-# time_range=None):
-# cap = cv2.VideoCapture(file_name)
-# self.shape = (
-# int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)),
-# int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
-# )
-# fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)
-# self.camera.frame_rate = fps
-# frame_count = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))
-# if time_range is None:
-# start_frame = 0
-# end_frame = frame_count
-# else:
-# start_frame, end_frame = [fps * t for t in time_range]
-
-# frame_count = end_frame - start_frame
-# logger.info("Reading in " + file_name + "...")
-# for count in show_progress(list(range(start_frame, end_frame + 1))):
-# returned, frame = cap.read()
-# if not returned:
-# break
-# # b, g, r = cv2.split(frame)
-# # self.frames.append(cv2.merge([r, g, b]))
-# self.frames.append(frame)
-# cap.release()
-
-# if freeze_last_frame and len(self.frames) > 0:
-# self.original_background = self.background = self.frames[-1]
-
-# def apply_gaussian_blur(self, ksize=(5, 5), sigmaX=5):
-# self.frames = [
-# cv2.GaussianBlur(frame, ksize, sigmaX)
-# for frame in self.frames
-# ]
-
-# def apply_edge_detection(self, threshold1=50, threshold2=100):
-# edged_frames = [
-# cv2.Canny(frame, threshold1, threshold2)
-# for frame in self.frames
-# ]
-# for index in range(len(self.frames)):
-# for i in range(3):
-# self.frames[index][:, :, i] = edged_frames[index]
\ No newline at end of file
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,7 +21,6 @@
"progressbar",
"scipy",
"tqdm",
- "opencv-python",
"pycairo",
"pydub",
"pygments",
|
{"golden_diff": "diff --git a/manim/__init__.py b/manim/__init__.py\n--- a/manim/__init__.py\n+++ b/manim/__init__.py\n@@ -51,7 +51,6 @@\n from .scene.scene import *\n from .scene.sample_space_scene import *\n from .scene.graph_scene import *\n-from .scene.scene_from_video import *\n from .scene.three_d_scene import *\n from .scene.vector_space_scene import *\n from .scene.zoomed_scene import *\ndiff --git a/manim/scene/scene_from_video.py b/manim/scene/scene_from_video.py\ndeleted file mode 100644\n--- a/manim/scene/scene_from_video.py\n+++ /dev/null\n@@ -1,54 +0,0 @@\n-# from tqdm import tqdm as show_progress\n-# import cv2\n-\n-# from ..scene.scene import Scene\n-# from ..logger import logger\n-\n-\n-# # TODO, is this depricated?\n-# class SceneFromVideo(Scene):\n-# def construct(self, file_name,\n-# freeze_last_frame=True,\n-# time_range=None):\n-# cap = cv2.VideoCapture(file_name)\n-# self.shape = (\n-# int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)),\n-# int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))\n-# )\n-# fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)\n-# self.camera.frame_rate = fps\n-# frame_count = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))\n-# if time_range is None:\n-# start_frame = 0\n-# end_frame = frame_count\n-# else:\n-# start_frame, end_frame = [fps * t for t in time_range]\n-\n-# frame_count = end_frame - start_frame\n-# logger.info(\"Reading in \" + file_name + \"...\")\n-# for count in show_progress(list(range(start_frame, end_frame + 1))):\n-# returned, frame = cap.read()\n-# if not returned:\n-# break\n-# # b, g, r = cv2.split(frame)\n-# # self.frames.append(cv2.merge([r, g, b]))\n-# self.frames.append(frame)\n-# cap.release()\n-\n-# if freeze_last_frame and len(self.frames) > 0:\n-# self.original_background = self.background = self.frames[-1]\n-\n-# def apply_gaussian_blur(self, ksize=(5, 5), sigmaX=5):\n-# self.frames = [\n-# cv2.GaussianBlur(frame, ksize, sigmaX)\n-# for frame in self.frames\n-# ]\n-\n-# def apply_edge_detection(self, threshold1=50, threshold2=100):\n-# edged_frames = [\n-# cv2.Canny(frame, threshold1, threshold2)\n-# for frame in self.frames\n-# ]\n-# for index in range(len(self.frames)):\n-# for i in range(3):\n-# self.frames[index][:, :, i] = edged_frames[index]\n\\ No newline at end of file\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,7 +21,6 @@\n \"progressbar\",\n \"scipy\",\n \"tqdm\",\n- \"opencv-python\",\n \"pycairo\",\n \"pydub\",\n \"pygments\",\n", "issue": "remove all of manim/files/\nIt contains svg files specific to 3b1b.\nRemove OpenCV and SceneFromVideo\nopencv does not seem to be used anywhere other than in ScenefromVideo, which seems to be broken.\n", "before_files": [{"content": "from setuptools import setup, find_namespace_packages\n\nsetup(\n name=\"manimlib\",\n version=\"0.2.0\",\n description=\"Animation engine for explanatory math videos\",\n license=\"MIT\",\n packages=find_namespace_packages(),\n package_data={ \"manim\": [\"*.tex\"] },\n entry_points={\n \"console_scripts\": [\n \"manim=manim.__main__:main\",\n \"manimcm=manim.__main__:main\",\n ]\n },\n install_requires=[\n \"argparse\",\n \"colour\",\n \"numpy\",\n \"Pillow\",\n \"progressbar\",\n \"scipy\",\n \"tqdm\",\n \"opencv-python\",\n \"pycairo\",\n \"pydub\",\n \"pygments\",\n \"pyreadline; sys_platform == 'win32'\",\n \"rich\",\n ],\n)\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python\nfrom .constants import *\n\nfrom .animation.animation import *\nfrom .animation.composition import *\nfrom .animation.creation import *\nfrom 
.animation.fading import *\nfrom .animation.growing import *\nfrom .animation.indication import *\nfrom .animation.movement import *\nfrom .animation.numbers import *\nfrom .animation.rotation import *\nfrom .animation.specialized import *\nfrom .animation.transform import *\nfrom .animation.update import *\n\nfrom .camera.camera import *\nfrom .camera.mapping_camera import *\nfrom .camera.moving_camera import *\nfrom .camera.three_d_camera import *\n\nfrom .mobject.coordinate_systems import *\nfrom .mobject.changing import *\nfrom .mobject.frame import *\nfrom .mobject.functions import *\nfrom .mobject.geometry import *\nfrom .mobject.matrix import *\nfrom .mobject.mobject import *\nfrom .mobject.number_line import *\nfrom .mobject.numbers import *\nfrom .mobject.probability import *\nfrom .mobject.shape_matchers import *\nfrom .mobject.svg.brace import *\nfrom .mobject.svg.drawings import *\nfrom .mobject.svg.svg_mobject import *\nfrom .mobject.svg.tex_mobject import *\nfrom .mobject.svg.text_mobject import *\nfrom .mobject.svg.code_mobject import *\nfrom .mobject.three_d_utils import *\nfrom .mobject.three_dimensions import *\nfrom .mobject.types.image_mobject import *\nfrom .mobject.types.point_cloud_mobject import *\nfrom .mobject.types.vectorized_mobject import *\nfrom .mobject.mobject_update_utils import *\nfrom .mobject.value_tracker import *\nfrom .mobject.vector_field import *\n\nfrom .scene.graph_scene import *\nfrom .scene.moving_camera_scene import *\nfrom .scene.reconfigurable_scene import *\nfrom .scene.scene import *\nfrom .scene.sample_space_scene import *\nfrom .scene.graph_scene import *\nfrom .scene.scene_from_video import *\nfrom .scene.three_d_scene import *\nfrom .scene.vector_space_scene import *\nfrom .scene.zoomed_scene import *\n\nfrom .utils.bezier import *\nfrom .utils.color import *\nfrom .utils.config_ops import *\nfrom .utils.debug import *\nfrom .utils.images import *\nfrom .utils.iterables import *\nfrom .utils.file_ops import *\nfrom .utils.paths import *\nfrom .utils.rate_functions import *\nfrom .utils.simple_functions import *\nfrom .utils.sounds import *\nfrom .utils.space_ops import *\nfrom .utils.strings import *\n", "path": "manim/__init__.py"}, {"content": "# from tqdm import tqdm as show_progress\n# import cv2\n\n# from ..scene.scene import Scene\n# from ..logger import logger\n\n\n# # TODO, is this depricated?\n# class SceneFromVideo(Scene):\n# def construct(self, file_name,\n# freeze_last_frame=True,\n# time_range=None):\n# cap = cv2.VideoCapture(file_name)\n# self.shape = (\n# int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)),\n# int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))\n# )\n# fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)\n# self.camera.frame_rate = fps\n# frame_count = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))\n# if time_range is None:\n# start_frame = 0\n# end_frame = frame_count\n# else:\n# start_frame, end_frame = [fps * t for t in time_range]\n\n# frame_count = end_frame - start_frame\n# logger.info(\"Reading in \" + file_name + \"...\")\n# for count in show_progress(list(range(start_frame, end_frame + 1))):\n# returned, frame = cap.read()\n# if not returned:\n# break\n# # b, g, r = cv2.split(frame)\n# # self.frames.append(cv2.merge([r, g, b]))\n# self.frames.append(frame)\n# cap.release()\n\n# if freeze_last_frame and len(self.frames) > 0:\n# self.original_background = self.background = self.frames[-1]\n\n# def apply_gaussian_blur(self, ksize=(5, 5), sigmaX=5):\n# self.frames = [\n# cv2.GaussianBlur(frame, ksize, sigmaX)\n# for frame in 
self.frames\n# ]\n\n# def apply_edge_detection(self, threshold1=50, threshold2=100):\n# edged_frames = [\n# cv2.Canny(frame, threshold1, threshold2)\n# for frame in self.frames\n# ]\n# for index in range(len(self.frames)):\n# for i in range(3):\n# self.frames[index][:, :, i] = edged_frames[index]", "path": "manim/scene/scene_from_video.py"}]}
| 2,084 | 771 |
gh_patches_debug_29786
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-2017
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dvc: unify requirements files and setup.py
We have duplication between `requirements.txt` and `setup.py` `install_requires`. We also use three `pip install` lines in docs to set things up. Ideally we would just say:
```bash
pip install -e .[tests]
# or
pip install -e .[all,tests] # for all remotes
```
So this contains several parts:
- [ ] include `requirements.txt` in test requirements with `-r` syntax,
- [ ] parse requirements files in `setup.py` or drop requirement files (one possible sketch is shown below)
- [ ] update contributing docs
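For the second item, one possible direction is to have `setup.py` read the requirements files directly; the sketch below is illustrative only (file names and filtering rules are assumptions, not a decision between the two options):
```python
# Hypothetical setup.py helper that reuses requirements.txt for
# install_requires so the two lists cannot drift apart. Comment, -r and -e
# lines are skipped; adjust the filtering to the real files.
import os


def parse_requirements(filename="requirements.txt"):
    path = os.path.join(os.path.dirname(os.path.abspath(__file__)), filename)
    with open(path) as fobj:
        return [
            line.strip()
            for line in fobj
            if line.strip() and not line.startswith(("#", "-r", "-e"))
        ]


# Inside setup():
#   install_requires=parse_requirements(),
#   extras_require={"tests": parse_requirements("tests-requirements.txt")},
```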
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 from setuptools.command.build_py import build_py as _build_py
3 import os
4
5
6 # https://packaging.python.org/guides/single-sourcing-package-version/
7 pkg_dir = os.path.dirname(__file__)
8
9 # This will define __version__ implicitly
10 with open(os.path.join(pkg_dir, "dvc", "version.py")) as fobj:
11 exec(fobj.read())
12
13 version = __version__ # noqa: F821
14
15
16 # To achieve consistency between the build version and the one provided
17 # by your package during runtime, you need to **pin** the build version.
18 #
19 # This custom class will replace the version.py module with a **static**
20 # `__version__` that your package can read at runtime, assuring consistancy.
21 #
22 # References:
23 # - https://docs.python.org/3.7/distutils/extending.html
24 # - https://github.com/python/mypy
25 class build_py(_build_py):
26 def pin_version(self):
27 path = os.path.join(self.build_lib, "dvc")
28 self.mkpath(path)
29 with open(os.path.join(path, "version.py"), "w") as fobj:
30 fobj.write("# AUTOGENERATED at build time by setup.py\n")
31 fobj.write('__version__ = "{}"\n'.format(version))
32
33 def run(self):
34 self.execute(self.pin_version, ())
35 _build_py.run(self)
36
37
38 install_requires = [
39 "ply>=3.9", # See https://github.com/pyinstaller/pyinstaller/issues/1945
40 "configparser>=3.5.0",
41 "zc.lockfile>=1.2.1",
42 "future>=0.16.0",
43 "colorama>=0.3.9",
44 "configobj>=5.0.6",
45 "networkx>=2.1",
46 "gitpython>=2.1.8",
47 "setuptools>=34.0.0",
48 "nanotime>=0.5.2",
49 "pyasn1>=0.4.1",
50 "schema>=0.6.7",
51 "jsonpath-ng>=1.4.3",
52 "requests>=2.22.0",
53 "grandalf==0.6",
54 "asciimatics>=1.10.0",
55 "distro>=1.3.0",
56 "appdirs>=1.4.3",
57 "treelib>=1.5.5",
58 "inflect>=2.1.0",
59 "humanize>=0.5.1",
60 "dulwich>=0.19.11",
61 "ruamel.yaml>=0.15.91",
62 ]
63
64 # Extra dependencies for remote integrations
65 gs = ["google-cloud-storage==1.13.0"]
66 s3 = ["boto3==1.9.115"]
67 azure = ["azure-storage-blob==1.3.0"]
68 oss = ["oss2==2.6.1"]
69 ssh = ["paramiko>=2.4.1"]
70 all_remotes = gs + s3 + azure + ssh + oss
71
72 setup(
73 name="dvc",
74 version=version,
75 description="Git for data scientists - manage your code and data together",
76 long_description=open("README.rst", "r").read(),
77 author="Dmitry Petrov",
78 author_email="[email protected]",
79 download_url="https://github.com/iterative/dvc",
80 license="Apache License 2.0",
81 install_requires=install_requires,
82 extras_require={
83 "all": all_remotes,
84 "gs": gs,
85 "s3": s3,
86 "azure": azure,
87 "oss": oss,
88 "ssh": ssh,
89 # NOTE: https://github.com/inveniosoftware/troubleshooting/issues/1
90 ':python_version=="2.7"': ["futures", "pathlib2"],
91 },
92 keywords="data science, data version control, machine learning",
93 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*",
94 classifiers=[
95 "Development Status :: 4 - Beta",
96 "Programming Language :: Python :: 2",
97 "Programming Language :: Python :: 2.7",
98 "Programming Language :: Python :: 3",
99 "Programming Language :: Python :: 3.5",
100 "Programming Language :: Python :: 3.6",
101 "Programming Language :: Python :: 3.7",
102 ],
103 packages=find_packages(exclude=["tests"]),
104 include_package_data=True,
105 url="http://dataversioncontrol.com",
106 entry_points={"console_scripts": ["dvc = dvc.main:main"]},
107 cmdclass={"build_py": build_py},
108 zip_safe=False,
109 )
110
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,6 +1,7 @@
from setuptools import setup, find_packages
from setuptools.command.build_py import build_py as _build_py
import os
+import sys
# https://packaging.python.org/guides/single-sourcing-package-version/
@@ -69,6 +70,34 @@
ssh = ["paramiko>=2.4.1"]
all_remotes = gs + s3 + azure + ssh + oss
+# Extra dependecies to run tests
+tests_requirements = [
+ "PyInstaller==3.4",
+ "wheel>=0.31.1",
+ "pydot>=1.2.4",
+ # Test requirements:
+ "pytest>=4.4.0",
+ "pytest-timeout>=1.3.3",
+ "pytest-cov>=2.6.1",
+ "pytest-xdist>=1.26.1",
+ "pytest-mock>=1.10.4",
+ "flaky>=3.5.3",
+ "mock>=3.0.0",
+ "xmltodict>=0.11.0",
+ "awscli>=1.16.125",
+ "google-compute-engine",
+ "pywin32; sys_platform == 'win32'",
+ "Pygments", # required by collective.checkdocs,
+ "collective.checkdocs",
+ "flake8",
+ "flake8-docstrings",
+ "jaraco.windows==3.9.2",
+ "mock-ssh-server>=0.5.0",
+]
+
+if (sys.version_info) >= (3, 6):
+ tests_requirements.append("black==19.3b0")
+
setup(
name="dvc",
version=version,
@@ -87,7 +116,8 @@
"oss": oss,
"ssh": ssh,
# NOTE: https://github.com/inveniosoftware/troubleshooting/issues/1
- ':python_version=="2.7"': ["futures", "pathlib2"],
+ ":python_version=='2.7'": ["futures", "pathlib2"],
+ "tests": tests_requirements,
},
keywords="data science, data version control, machine learning",
python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,6 +1,7 @@\n from setuptools import setup, find_packages\n from setuptools.command.build_py import build_py as _build_py\n import os\n+import sys\n \n \n # https://packaging.python.org/guides/single-sourcing-package-version/\n@@ -69,6 +70,34 @@\n ssh = [\"paramiko>=2.4.1\"]\n all_remotes = gs + s3 + azure + ssh + oss\n \n+# Extra dependecies to run tests\n+tests_requirements = [\n+ \"PyInstaller==3.4\",\n+ \"wheel>=0.31.1\",\n+ \"pydot>=1.2.4\",\n+ # Test requirements:\n+ \"pytest>=4.4.0\",\n+ \"pytest-timeout>=1.3.3\",\n+ \"pytest-cov>=2.6.1\",\n+ \"pytest-xdist>=1.26.1\",\n+ \"pytest-mock>=1.10.4\",\n+ \"flaky>=3.5.3\",\n+ \"mock>=3.0.0\",\n+ \"xmltodict>=0.11.0\",\n+ \"awscli>=1.16.125\",\n+ \"google-compute-engine\",\n+ \"pywin32; sys_platform == 'win32'\",\n+ \"Pygments\", # required by collective.checkdocs,\n+ \"collective.checkdocs\",\n+ \"flake8\",\n+ \"flake8-docstrings\",\n+ \"jaraco.windows==3.9.2\",\n+ \"mock-ssh-server>=0.5.0\",\n+]\n+\n+if (sys.version_info) >= (3, 6):\n+ tests_requirements.append(\"black==19.3b0\")\n+\n setup(\n name=\"dvc\",\n version=version,\n@@ -87,7 +116,8 @@\n \"oss\": oss,\n \"ssh\": ssh,\n # NOTE: https://github.com/inveniosoftware/troubleshooting/issues/1\n- ':python_version==\"2.7\"': [\"futures\", \"pathlib2\"],\n+ \":python_version=='2.7'\": [\"futures\", \"pathlib2\"],\n+ \"tests\": tests_requirements,\n },\n keywords=\"data science, data version control, machine learning\",\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*\",\n", "issue": "dvc: unify requirements files and setup.py\nWe have duplication between `requirements.txt` and `setup.py` `install_requires`. We also use three `pip install` lines in docs to set things up. 
Ideally we would just say:\r\n```bash\r\npip install -e .[tests]\r\n# or\r\npip install -e .[all,tests] # for all remotes\r\n```\r\nSo this contains several parts:\r\n- [ ] include `requirements.txt` in test requirements with `-r` syntax,\r\n- [ ] parse requirements files in `setup.py` or drop requirement files \r\n- [ ] update contributing docs\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py as _build_py\nimport os\n\n\n# https://packaging.python.org/guides/single-sourcing-package-version/\npkg_dir = os.path.dirname(__file__)\n\n# This will define __version__ implicitly\nwith open(os.path.join(pkg_dir, \"dvc\", \"version.py\")) as fobj:\n exec(fobj.read())\n\nversion = __version__ # noqa: F821\n\n\n# To achieve consistency between the build version and the one provided\n# by your package during runtime, you need to **pin** the build version.\n#\n# This custom class will replace the version.py module with a **static**\n# `__version__` that your package can read at runtime, assuring consistancy.\n#\n# References:\n# - https://docs.python.org/3.7/distutils/extending.html\n# - https://github.com/python/mypy\nclass build_py(_build_py):\n def pin_version(self):\n path = os.path.join(self.build_lib, \"dvc\")\n self.mkpath(path)\n with open(os.path.join(path, \"version.py\"), \"w\") as fobj:\n fobj.write(\"# AUTOGENERATED at build time by setup.py\\n\")\n fobj.write('__version__ = \"{}\"\\n'.format(version))\n\n def run(self):\n self.execute(self.pin_version, ())\n _build_py.run(self)\n\n\ninstall_requires = [\n \"ply>=3.9\", # See https://github.com/pyinstaller/pyinstaller/issues/1945\n \"configparser>=3.5.0\",\n \"zc.lockfile>=1.2.1\",\n \"future>=0.16.0\",\n \"colorama>=0.3.9\",\n \"configobj>=5.0.6\",\n \"networkx>=2.1\",\n \"gitpython>=2.1.8\",\n \"setuptools>=34.0.0\",\n \"nanotime>=0.5.2\",\n \"pyasn1>=0.4.1\",\n \"schema>=0.6.7\",\n \"jsonpath-ng>=1.4.3\",\n \"requests>=2.22.0\",\n \"grandalf==0.6\",\n \"asciimatics>=1.10.0\",\n \"distro>=1.3.0\",\n \"appdirs>=1.4.3\",\n \"treelib>=1.5.5\",\n \"inflect>=2.1.0\",\n \"humanize>=0.5.1\",\n \"dulwich>=0.19.11\",\n \"ruamel.yaml>=0.15.91\",\n]\n\n# Extra dependencies for remote integrations\ngs = [\"google-cloud-storage==1.13.0\"]\ns3 = [\"boto3==1.9.115\"]\nazure = [\"azure-storage-blob==1.3.0\"]\noss = [\"oss2==2.6.1\"]\nssh = [\"paramiko>=2.4.1\"]\nall_remotes = gs + s3 + azure + ssh + oss\n\nsetup(\n name=\"dvc\",\n version=version,\n description=\"Git for data scientists - manage your code and data together\",\n long_description=open(\"README.rst\", \"r\").read(),\n author=\"Dmitry Petrov\",\n author_email=\"[email protected]\",\n download_url=\"https://github.com/iterative/dvc\",\n license=\"Apache License 2.0\",\n install_requires=install_requires,\n extras_require={\n \"all\": all_remotes,\n \"gs\": gs,\n \"s3\": s3,\n \"azure\": azure,\n \"oss\": oss,\n \"ssh\": ssh,\n # NOTE: https://github.com/inveniosoftware/troubleshooting/issues/1\n ':python_version==\"2.7\"': [\"futures\", \"pathlib2\"],\n },\n keywords=\"data science, data version control, machine learning\",\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*\",\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n 
],\n packages=find_packages(exclude=[\"tests\"]),\n include_package_data=True,\n url=\"http://dataversioncontrol.com\",\n entry_points={\"console_scripts\": [\"dvc = dvc.main:main\"]},\n cmdclass={\"build_py\": build_py},\n zip_safe=False,\n)\n", "path": "setup.py"}]}
| 1,932 | 561 |
gh_patches_debug_33113
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-5892
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[feature] support timeout option for watching
### Feature Area
<!-- Uncomment the labels below which are relevant to this feature: -->
<!-- /area frontend -->
<!-- /area backend -->
/area sdk
<!-- /area samples -->
<!-- /area components -->
### What feature would you like to see?
- When we use `kfp run submit -w`, we have to wait forever when the run stays pending.
- However, I'd like to be able to specify a timeout option with it.
### What is the use case or pain point?
- `kfp run submit -t 60` will wait at most 60 seconds for the run to complete
- it would use the [kfp client's `wait_for_run_completion` function](https://github.com/kubeflow/pipelines/blob/6a1b841db923e39016f5c6e66eac8698af27831d/sdk/python/kfp/_client.py#L922); a standalone sketch follows below
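A minimal, standalone sketch of that behaviour, decoupled from the real `submit` wiring (the command name, defaults, and output handling here are assumptions; only `wait_for_run_completion` comes from the linked client API):
```python
# Standalone illustration of a --timeout flag backed by wait_for_run_completion.
# Not the proposed implementation, just the intended behaviour.
import click
import kfp


@click.command()
@click.argument("run_id")
@click.option("-t", "--timeout", default=0, type=int,
              help="Maximum number of seconds to wait for the run to complete.")
def wait(run_id, timeout):
    client = kfp.Client()
    if timeout > 0:
        run_detail = client.wait_for_run_completion(run_id, timeout)
        click.echo("Run finished with status {}.".format(run_detail.run.status))
    else:
        click.echo("No timeout given; not waiting.")


if __name__ == "__main__":
    wait()
```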
<!-- It helps us understand the benefit of this feature for your use case. -->
### Is there a workaround currently?
If you guys agree, I'd like to implement this feature.
<!-- Without this feature, how do you accomplish your task today? -->
---
<!-- Don't delete message below to encourage users to support your feature request! -->
Love this idea? Give it a 👍. We prioritize fulfilling features with the most 👍.
</issue>
<code>
[start of sdk/python/kfp/cli/run.py]
1 # Copyright 2018 The Kubeflow Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import sys
17 import subprocess
18 import time
19 import json
20 import click
21 import shutil
22
23 from .output import print_output, OutputFormat
24
25
26 @click.group()
27 def run():
28 """manage run resources"""
29 pass
30
31
32 @run.command()
33 @click.option('-e', '--experiment-id', help='Parent experiment ID of listed runs.')
34 @click.option('-m', '--max-size', default=100, help='Max size of the listed runs.')
35 @click.pass_context
36 def list(ctx, experiment_id, max_size):
37 """list recent KFP runs"""
38 client = ctx.obj['client']
39 output_format = ctx.obj['output']
40 response = client.list_runs(experiment_id=experiment_id, page_size=max_size, sort_by='created_at desc')
41 if response and response.runs:
42 _print_runs(response.runs, output_format)
43 else:
44 if output_format == OutputFormat.json.name:
45 msg = json.dumps([])
46 else:
47 msg = 'No runs found.'
48 click.echo(msg)
49
50
51 @run.command()
52 @click.option('-e', '--experiment-name', required=True, help='Experiment name of the run.')
53 @click.option('-r', '--run-name', help='Name of the run.')
54 @click.option('-f', '--package-file', type=click.Path(exists=True, dir_okay=False),
55 help='Path of the pipeline package file.')
56 @click.option('-p', '--pipeline-id', help='ID of the pipeline template.')
57 @click.option('-n', '--pipeline-name', help='Name of the pipeline template.')
58 @click.option('-w', '--watch', is_flag=True, default=False,
59 help='Watch the run status until it finishes.')
60 @click.option('-v', '--version', help='ID of the pipeline version.')
61 @click.argument('args', nargs=-1)
62 @click.pass_context
63 def submit(ctx, experiment_name, run_name, package_file, pipeline_id, pipeline_name, watch,
64 version, args):
65 """submit a KFP run"""
66 client = ctx.obj['client']
67 namespace = ctx.obj['namespace']
68 output_format = ctx.obj['output']
69 if not run_name:
70 run_name = experiment_name
71
72 if not pipeline_id and pipeline_name:
73 pipeline_id = client.get_pipeline_id(name=pipeline_name)
74
75 if not package_file and not pipeline_id and not version:
76 click.echo('You must provide one of [package_file, pipeline_id, version].', err=True)
77 sys.exit(1)
78
79 arg_dict = dict(arg.split('=', maxsplit=1) for arg in args)
80
81 experiment = client.create_experiment(experiment_name)
82 run = client.run_pipeline(experiment.id, run_name, package_file, arg_dict, pipeline_id,
83 version_id=version)
84 _display_run(client, namespace, run.id, watch, output_format)
85
86
87 @run.command()
88 @click.option('-w', '--watch', is_flag=True, default=False,
89 help='Watch the run status until it finishes.')
90 @click.argument('run-id')
91 @click.pass_context
92 def get(ctx, watch, run_id):
93 """display the details of a KFP run"""
94 client = ctx.obj['client']
95 namespace = ctx.obj['namespace']
96 output_format = ctx.obj['output']
97 _display_run(client, namespace, run_id, watch, output_format)
98
99
100 def _display_run(client, namespace, run_id, watch, output_format):
101 run = client.get_run(run_id).run
102 _print_runs([run], output_format)
103 if not watch:
104 return
105 argo_path = shutil.which('argo')
106 if not argo_path:
107 raise RuntimeError("argo isn't found in $PATH. It's necessary for watch. "
108 "Please make sure it's installed and available. "
109 "Installation instructions be found here - "
110 "https://github.com/argoproj/argo/releases")
111
112 argo_workflow_name = None
113 while True:
114 time.sleep(1)
115 run_detail = client.get_run(run_id)
116 run = run_detail.run
117 if run_detail.pipeline_runtime and run_detail.pipeline_runtime.workflow_manifest:
118 manifest = json.loads(run_detail.pipeline_runtime.workflow_manifest)
119 if manifest['metadata'] and manifest['metadata']['name']:
120 argo_workflow_name = manifest['metadata']['name']
121 break
122 if run_detail.run.status in ['Succeeded', 'Skipped', 'Failed', 'Error']:
123 click.echo('Run is finished with status {}.'.format(run_detail.run.status))
124 return
125 if argo_workflow_name:
126 subprocess.run([argo_path, 'watch', argo_workflow_name, '-n', namespace])
127 _print_runs([run], output_format)
128
129
130 def _print_runs(runs, output_format):
131 headers = ['run id', 'name', 'status', 'created at']
132 data = [[run.id, run.name, run.status, run.created_at.isoformat()] for run in runs]
133 print_output(data, headers, output_format, table_format='grid')
134
[end of sdk/python/kfp/cli/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/kfp/cli/run.py b/sdk/python/kfp/cli/run.py
--- a/sdk/python/kfp/cli/run.py
+++ b/sdk/python/kfp/cli/run.py
@@ -58,10 +58,11 @@
@click.option('-w', '--watch', is_flag=True, default=False,
help='Watch the run status until it finishes.')
@click.option('-v', '--version', help='ID of the pipeline version.')
+@click.option('-t', '--timeout', default=0, help='Wait for a run to complete until timeout in seconds.', type=int)
@click.argument('args', nargs=-1)
@click.pass_context
def submit(ctx, experiment_name, run_name, package_file, pipeline_id, pipeline_name, watch,
- version, args):
+ timeout, version, args):
"""submit a KFP run"""
client = ctx.obj['client']
namespace = ctx.obj['namespace']
@@ -81,7 +82,10 @@
experiment = client.create_experiment(experiment_name)
run = client.run_pipeline(experiment.id, run_name, package_file, arg_dict, pipeline_id,
version_id=version)
- _display_run(client, namespace, run.id, watch, output_format)
+ if timeout > 0:
+ _wait_for_run_completion(client, run.id, timeout, output_format)
+ else:
+ _display_run(client, namespace, run.id, watch, output_format)
@run.command()
@@ -127,6 +131,11 @@
_print_runs([run], output_format)
+def _wait_for_run_completion(client, run_id, timeout, output_format):
+ run_detail = client.wait_for_run_completion(run_id, timeout)
+ _print_runs([run_detail.run], output_format)
+
+
def _print_runs(runs, output_format):
headers = ['run id', 'name', 'status', 'created at']
data = [[run.id, run.name, run.status, run.created_at.isoformat()] for run in runs]
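As a rough usage sketch (not part of the archived row), the same wait can be driven from the Python SDK; the experiment name, pipeline file, and 90-second timeout below are made-up placeholders:
```
import kfp

client = kfp.Client()  # assumes a reachable Kubeflow Pipelines endpoint
experiment = client.create_experiment("timeout-demo")
run = client.run_pipeline(experiment.id, "demo-run", "pipeline.yaml", {})
# Same client call the patched CLI makes when -t/--timeout is given:
run_detail = client.wait_for_run_completion(run.id, 90)
print(run_detail.run.status)
```
From the command line this would correspond roughly to `kfp run submit -e timeout-demo -f pipeline.yaml -t 90`.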
|
{"golden_diff": "diff --git a/sdk/python/kfp/cli/run.py b/sdk/python/kfp/cli/run.py\n--- a/sdk/python/kfp/cli/run.py\n+++ b/sdk/python/kfp/cli/run.py\n@@ -58,10 +58,11 @@\n @click.option('-w', '--watch', is_flag=True, default=False,\n help='Watch the run status until it finishes.')\n @click.option('-v', '--version', help='ID of the pipeline version.')\[email protected]('-t', '--timeout', default=0, help='Wait for a run to complete until timeout in seconds.', type=int)\n @click.argument('args', nargs=-1)\n @click.pass_context\n def submit(ctx, experiment_name, run_name, package_file, pipeline_id, pipeline_name, watch,\n- version, args):\n+ timeout, version, args):\n \"\"\"submit a KFP run\"\"\"\n client = ctx.obj['client']\n namespace = ctx.obj['namespace']\n@@ -81,7 +82,10 @@\n experiment = client.create_experiment(experiment_name)\n run = client.run_pipeline(experiment.id, run_name, package_file, arg_dict, pipeline_id,\n version_id=version)\n- _display_run(client, namespace, run.id, watch, output_format)\n+ if timeout > 0:\n+ _wait_for_run_completion(client, run.id, timeout, output_format)\n+ else:\n+ _display_run(client, namespace, run.id, watch, output_format)\n \n \n @run.command()\n@@ -127,6 +131,11 @@\n _print_runs([run], output_format)\n \n \n+def _wait_for_run_completion(client, run_id, timeout, output_format):\n+ run_detail = client.wait_for_run_completion(run_id, timeout)\n+ _print_runs([run_detail.run], output_format)\n+\n+\n def _print_runs(runs, output_format):\n headers = ['run id', 'name', 'status', 'created at']\n data = [[run.id, run.name, run.status, run.created_at.isoformat()] for run in runs]\n", "issue": "[feature] support timeout option for watching\n### Feature Area\r\n\r\n<!-- Uncomment the labels below which are relevant to this feature: -->\r\n<!-- /area frontend -->\r\n<!-- /area backend -->\r\n/area sdk\r\n<!-- /area samples -->\r\n<!-- /area components -->\r\n\r\n\r\n### What feature would you like to see?\r\n- When we use `kfp run submit -w`, we have to wait forever when run stay at pending.\r\n- However, I'd like to specify the timeout option with it.\r\n\r\n### What is the use case or pain point?\r\n- `kfp run submit -t 60` will wait 60 seconds max, until run be completed\r\n- it would use [kfp client's wait_for_run_completion function](https://github.com/kubeflow/pipelines/blob/6a1b841db923e39016f5c6e66eac8698af27831d/sdk/python/kfp/_client.py#L922)\r\n\r\n<!-- It helps us understand the benefit of this feature for your use case. -->\r\n\r\n### Is there a workaround currently?\r\n\r\nIf you guys agree, I'd like to implement this feature.\r\n\r\n<!-- Without this feature, how do you accomplish your task today? -->\r\n\r\n\r\n---\r\n\r\n<!-- Don't delete message below to encourage users to support your feature request! -->\r\nLove this idea? Give it a \ud83d\udc4d. 
We prioritize fulfilling features with the most \ud83d\udc4d.\r\n\n", "before_files": [{"content": "# Copyright 2018 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport sys\nimport subprocess\nimport time\nimport json\nimport click\nimport shutil\n\nfrom .output import print_output, OutputFormat\n\n\[email protected]()\ndef run():\n \"\"\"manage run resources\"\"\"\n pass\n\n\[email protected]()\[email protected]('-e', '--experiment-id', help='Parent experiment ID of listed runs.')\[email protected]('-m', '--max-size', default=100, help='Max size of the listed runs.')\[email protected]_context\ndef list(ctx, experiment_id, max_size):\n \"\"\"list recent KFP runs\"\"\"\n client = ctx.obj['client']\n output_format = ctx.obj['output']\n response = client.list_runs(experiment_id=experiment_id, page_size=max_size, sort_by='created_at desc')\n if response and response.runs:\n _print_runs(response.runs, output_format)\n else:\n if output_format == OutputFormat.json.name:\n msg = json.dumps([])\n else:\n msg = 'No runs found.'\n click.echo(msg)\n\n\[email protected]()\[email protected]('-e', '--experiment-name', required=True, help='Experiment name of the run.')\[email protected]('-r', '--run-name', help='Name of the run.')\[email protected]('-f', '--package-file', type=click.Path(exists=True, dir_okay=False),\n help='Path of the pipeline package file.')\[email protected]('-p', '--pipeline-id', help='ID of the pipeline template.')\[email protected]('-n', '--pipeline-name', help='Name of the pipeline template.')\[email protected]('-w', '--watch', is_flag=True, default=False,\n help='Watch the run status until it finishes.')\[email protected]('-v', '--version', help='ID of the pipeline version.')\[email protected]('args', nargs=-1)\[email protected]_context\ndef submit(ctx, experiment_name, run_name, package_file, pipeline_id, pipeline_name, watch,\n version, args):\n \"\"\"submit a KFP run\"\"\"\n client = ctx.obj['client']\n namespace = ctx.obj['namespace']\n output_format = ctx.obj['output']\n if not run_name:\n run_name = experiment_name\n\n if not pipeline_id and pipeline_name:\n pipeline_id = client.get_pipeline_id(name=pipeline_name)\n\n if not package_file and not pipeline_id and not version:\n click.echo('You must provide one of [package_file, pipeline_id, version].', err=True)\n sys.exit(1)\n\n arg_dict = dict(arg.split('=', maxsplit=1) for arg in args)\n\n experiment = client.create_experiment(experiment_name)\n run = client.run_pipeline(experiment.id, run_name, package_file, arg_dict, pipeline_id,\n version_id=version)\n _display_run(client, namespace, run.id, watch, output_format)\n\n\[email protected]()\[email protected]('-w', '--watch', is_flag=True, default=False,\n help='Watch the run status until it finishes.')\[email protected]('run-id')\[email protected]_context\ndef get(ctx, watch, run_id):\n \"\"\"display the details of a KFP run\"\"\"\n client = ctx.obj['client']\n namespace = ctx.obj['namespace']\n output_format = ctx.obj['output']\n 
_display_run(client, namespace, run_id, watch, output_format)\n\n\ndef _display_run(client, namespace, run_id, watch, output_format):\n run = client.get_run(run_id).run\n _print_runs([run], output_format)\n if not watch:\n return\n argo_path = shutil.which('argo')\n if not argo_path:\n raise RuntimeError(\"argo isn't found in $PATH. It's necessary for watch. \"\n \"Please make sure it's installed and available. \"\n \"Installation instructions be found here - \"\n \"https://github.com/argoproj/argo/releases\")\n\n argo_workflow_name = None\n while True:\n time.sleep(1)\n run_detail = client.get_run(run_id)\n run = run_detail.run\n if run_detail.pipeline_runtime and run_detail.pipeline_runtime.workflow_manifest:\n manifest = json.loads(run_detail.pipeline_runtime.workflow_manifest)\n if manifest['metadata'] and manifest['metadata']['name']:\n argo_workflow_name = manifest['metadata']['name']\n break\n if run_detail.run.status in ['Succeeded', 'Skipped', 'Failed', 'Error']:\n click.echo('Run is finished with status {}.'.format(run_detail.run.status))\n return\n if argo_workflow_name:\n subprocess.run([argo_path, 'watch', argo_workflow_name, '-n', namespace])\n _print_runs([run], output_format)\n\n\ndef _print_runs(runs, output_format):\n headers = ['run id', 'name', 'status', 'created at']\n data = [[run.id, run.name, run.status, run.created_at.isoformat()] for run in runs]\n print_output(data, headers, output_format, table_format='grid')\n", "path": "sdk/python/kfp/cli/run.py"}]}
| 2,329 | 445 |
gh_patches_debug_16158
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-5452
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ASV: benchmark timed out (timeout 60.0s) for `TimeLevelAlign`, `TimeStack`, `TimeUnstack`
Due to this error, some cases are not displayed on ASV charts.
</issue>
<code>
[start of asv_bench/benchmarks/utils/data_shapes.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """Define data shapes."""
15
16 import os
17 import json
18
19 from .compatibility import ASV_USE_STORAGE_FORMAT, ASV_DATASET_SIZE
20
21 RAND_LOW = 0
22 RAND_HIGH = 1_000_000_000 if ASV_USE_STORAGE_FORMAT == "hdk" else 100
23
24 BINARY_OP_DATA_SIZE = {
25 "big": [
26 [[5000, 5000], [5000, 5000]],
27 # the case extremely inefficient
28 # [[20, 500_000], [10, 1_000_000]],
29 [[500_000, 20], [1_000_000, 10]],
30 ],
31 "small": [[[250, 250], [250, 250]], [[10_000, 20], [25_000, 10]]],
32 }
33 UNARY_OP_DATA_SIZE = {
34 "big": [
35 [5000, 5000],
36 # the case extremely inefficient
37 # [10, 1_000_000],
38 [1_000_000, 10],
39 ],
40 "small": [[250, 250], [10_000, 10]],
41 }
42 SERIES_DATA_SIZE = {
43 "big": [[100_000, 1]],
44 "small": [[10_000, 1]],
45 }
46 BINARY_OP_SERIES_DATA_SIZE = {
47 "big": [
48 [[500_000, 1], [1_000_000, 1]],
49 [[500_000, 1], [500_000, 1]],
50 ],
51 "small": [[[5_000, 1], [10_000, 1]]],
52 }
53
54
55 HDK_BINARY_OP_DATA_SIZE = {
56 "big": [[[500_000, 20], [1_000_000, 10]]],
57 "small": [[[10_000, 20], [25_000, 10]]],
58 }
59 HDK_UNARY_OP_DATA_SIZE = {
60 "big": [[1_000_000, 10]],
61 "small": [[10_000, 10]],
62 }
63 HDK_SERIES_DATA_SIZE = {
64 "big": [[10_000_000, 1]],
65 "small": [[100_000, 1]],
66 }
67
68 DEFAULT_GROUPBY_NGROUPS = {
69 "big": [100, "huge_amount_groups"],
70 "small": [5],
71 }
72 GROUPBY_NGROUPS = DEFAULT_GROUPBY_NGROUPS[ASV_DATASET_SIZE]
73
74 _DEFAULT_CONFIG_T = [
75 (
76 UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],
77 [
78 # Pandas storage format benchmarks
79 "TimeGroupByMultiColumn",
80 "TimeGroupByDefaultAggregations",
81 "TimeGroupByDictionaryAggregation",
82 "TimeSetItem",
83 "TimeInsert",
84 "TimeArithmetic",
85 "TimeSortValues",
86 "TimeDrop",
87 "TimeHead",
88 "TimeTail",
89 "TimeExplode",
90 "TimeFillna",
91 "TimeFillnaDataFrame",
92 "TimeValueCountsFrame",
93 "TimeValueCountsSeries",
94 "TimeIndexing",
95 "TimeMultiIndexing",
96 "TimeResetIndex",
97 "TimeAstype",
98 "TimeDescribe",
99 "TimeProperties",
100 "TimeReindex",
101 "TimeReindexMethod",
102 "TimeFillnaMethodDataframe",
103 "TimeDropDuplicatesDataframe",
104 "TimeStack",
105 "TimeUnstack",
106 "TimeRepr",
107 "TimeMaskBool",
108 "TimeIsnull",
109 "TimeDropna",
110 "TimeEquals",
111 # IO benchmarks
112 "TimeReadCsvSkiprows",
113 "TimeReadCsvTrueFalseValues",
114 "TimeReadCsvNamesDtype",
115 "TimeReadParquet",
116 # Scalability benchmarks
117 "TimeFromPandas",
118 "TimeToPandas",
119 ],
120 ),
121 (
122 BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE],
123 [
124 # Pandas storage format benchmarks
125 "TimeJoin",
126 "TimeMerge",
127 "TimeMergeDefault",
128 "TimeConcat",
129 "TimeAppend",
130 "TimeBinaryOp",
131 "TimeLevelAlign",
132 ],
133 ),
134 (
135 SERIES_DATA_SIZE[ASV_DATASET_SIZE],
136 [
137 # Pandas storage format benchmarks
138 "TimeFillnaSeries",
139 "TimeGroups",
140 "TimeIndexingNumericSeries",
141 "TimeFillnaMethodSeries",
142 "TimeDatetimeAccessor",
143 "TimeSetCategories",
144 "TimeRemoveCategories",
145 "TimeDropDuplicatesSeries",
146 ],
147 ),
148 (
149 BINARY_OP_SERIES_DATA_SIZE[ASV_DATASET_SIZE],
150 [
151 # Pandas storage format benchmarks
152 "TimeBinaryOpSeries",
153 ],
154 ),
155 ]
156
157 _DEFAULT_HDK_CONFIG_T = [
158 (
159 HDK_UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],
160 [
161 "hdk.TimeJoin",
162 "hdk.TimeBinaryOpDataFrame",
163 "hdk.TimeArithmetic",
164 "hdk.TimeSortValues",
165 "hdk.TimeDrop",
166 "hdk.TimeHead",
167 "hdk.TimeFillna",
168 "hdk.TimeIndexing",
169 "hdk.TimeResetIndex",
170 "hdk.TimeAstype",
171 "hdk.TimeDescribe",
172 "hdk.TimeProperties",
173 "hdk.TimeGroupByDefaultAggregations",
174 "hdk.TimeGroupByMultiColumn",
175 "hdk.TimeValueCountsDataFrame",
176 "hdk.TimeReadCsvNames",
177 ],
178 ),
179 (
180 HDK_BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE],
181 ["hdk.TimeMerge", "hdk.TimeAppend"],
182 ),
183 (
184 HDK_SERIES_DATA_SIZE[ASV_DATASET_SIZE],
185 ["hdk.TimeBinaryOpSeries", "hdk.TimeValueCountsSeries"],
186 ),
187 ]
188 DEFAULT_CONFIG = {}
189 DEFAULT_CONFIG["MergeCategoricals"] = (
190 [[10_000, 2]] if ASV_DATASET_SIZE == "big" else [[1_000, 2]]
191 )
192 DEFAULT_CONFIG["TimeJoinStringIndex"] = (
193 [[100_000, 64]] if ASV_DATASET_SIZE == "big" else [[1_000, 4]]
194 )
195 DEFAULT_CONFIG["TimeReplace"] = (
196 [[10_000, 2]] if ASV_DATASET_SIZE == "big" else [[1_000, 2]]
197 )
198 for config in (_DEFAULT_CONFIG_T, _DEFAULT_HDK_CONFIG_T):
199 for _shape, _names in config:
200 DEFAULT_CONFIG.update({_name: _shape for _name in _names})
201
202 # Correct forms in the case when the operation ended with a timeout error
203 if ASV_DATASET_SIZE == "big":
204 DEFAULT_CONFIG["TimeMergeDefault"] = [
205 [[1000, 1000], [1000, 1000]],
206 [[500_000, 20], [1_000_000, 10]],
207 ]
208
209 CONFIG_FROM_FILE = None
210
211
212 def get_benchmark_shapes(bench_id: str):
213 """
214 Get custom benchmark shapes from a json file stored in MODIN_ASV_DATASIZE_CONFIG.
215
216 If `bench_id` benchmark is not found in the file, then the default value will
217 be used.
218
219 Parameters
220 ----------
221 bench_id : str
222 Unique benchmark identifier that is used to get shapes.
223
224 Returns
225 -------
226 list
227 Benchmark shapes.
228 """
229 global CONFIG_FROM_FILE
230 if not CONFIG_FROM_FILE:
231 try:
232 from modin.config import AsvDataSizeConfig
233
234 filename = AsvDataSizeConfig.get()
235 except ImportError:
236 filename = os.environ.get("MODIN_ASV_DATASIZE_CONFIG", None)
237 if filename:
238 # should be json
239 with open(filename) as _f:
240 CONFIG_FROM_FILE = json.load(_f)
241
242 if CONFIG_FROM_FILE and bench_id in CONFIG_FROM_FILE:
243 # example: "hdk.TimeReadCsvNames": [[5555, 55], [3333, 33]]
244 return CONFIG_FROM_FILE[bench_id]
245 return DEFAULT_CONFIG[bench_id]
246
[end of asv_bench/benchmarks/utils/data_shapes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/asv_bench/benchmarks/utils/data_shapes.py b/asv_bench/benchmarks/utils/data_shapes.py
--- a/asv_bench/benchmarks/utils/data_shapes.py
+++ b/asv_bench/benchmarks/utils/data_shapes.py
@@ -199,12 +199,21 @@
for _shape, _names in config:
DEFAULT_CONFIG.update({_name: _shape for _name in _names})
-# Correct forms in the case when the operation ended with a timeout error
+# Correct shapes in the case when the operation ended with a timeout error
if ASV_DATASET_SIZE == "big":
DEFAULT_CONFIG["TimeMergeDefault"] = [
[[1000, 1000], [1000, 1000]],
[[500_000, 20], [1_000_000, 10]],
]
+ DEFAULT_CONFIG["TimeLevelAlign"] = [
+ [[2500, 2500], [2500, 2500]],
+ [[250_000, 20], [500_000, 10]],
+ ]
+ DEFAULT_CONFIG["TimeStack"] = [
+ [1500, 1500],
+ [100_000, 10],
+ ]
+ DEFAULT_CONFIG["TimeUnstack"] = DEFAULT_CONFIG["TimeStack"]
CONFIG_FROM_FILE = None
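For reference, a hedged sketch of how the shapes that time out can also be overridden per benchmark through the JSON file read by `get_benchmark_shapes`; the file name below is a placeholder:
```
import json
import os

# Smaller stand-in shapes for the benchmarks that hit the 60 s timeout.
overrides = {
    "TimeLevelAlign": [[[1000, 1000], [1000, 1000]]],
    "TimeStack": [[1000, 1000]],
    "TimeUnstack": [[1000, 1000]],
}
with open("asv_shapes.json", "w") as f:
    json.dump(overrides, f)

# get_benchmark_shapes() falls back to this variable when modin.config is unavailable.
os.environ["MODIN_ASV_DATASIZE_CONFIG"] = "asv_shapes.json"
```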
|
{"golden_diff": "diff --git a/asv_bench/benchmarks/utils/data_shapes.py b/asv_bench/benchmarks/utils/data_shapes.py\n--- a/asv_bench/benchmarks/utils/data_shapes.py\n+++ b/asv_bench/benchmarks/utils/data_shapes.py\n@@ -199,12 +199,21 @@\n for _shape, _names in config:\n DEFAULT_CONFIG.update({_name: _shape for _name in _names})\n \n-# Correct forms in the case when the operation ended with a timeout error\n+# Correct shapes in the case when the operation ended with a timeout error\n if ASV_DATASET_SIZE == \"big\":\n DEFAULT_CONFIG[\"TimeMergeDefault\"] = [\n [[1000, 1000], [1000, 1000]],\n [[500_000, 20], [1_000_000, 10]],\n ]\n+ DEFAULT_CONFIG[\"TimeLevelAlign\"] = [\n+ [[2500, 2500], [2500, 2500]],\n+ [[250_000, 20], [500_000, 10]],\n+ ]\n+ DEFAULT_CONFIG[\"TimeStack\"] = [\n+ [1500, 1500],\n+ [100_000, 10],\n+ ]\n+ DEFAULT_CONFIG[\"TimeUnstack\"] = DEFAULT_CONFIG[\"TimeStack\"]\n \n CONFIG_FROM_FILE = None\n", "issue": "ASV: benchmark timed out (timeout 60.0s) for `TimeLevelAlign`, `TimeStack`, `TimeUnstack`\nDue to this error, some cases are not displayed on ASV charts.\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"Define data shapes.\"\"\"\n\nimport os\nimport json\n\nfrom .compatibility import ASV_USE_STORAGE_FORMAT, ASV_DATASET_SIZE\n\nRAND_LOW = 0\nRAND_HIGH = 1_000_000_000 if ASV_USE_STORAGE_FORMAT == \"hdk\" else 100\n\nBINARY_OP_DATA_SIZE = {\n \"big\": [\n [[5000, 5000], [5000, 5000]],\n # the case extremely inefficient\n # [[20, 500_000], [10, 1_000_000]],\n [[500_000, 20], [1_000_000, 10]],\n ],\n \"small\": [[[250, 250], [250, 250]], [[10_000, 20], [25_000, 10]]],\n}\nUNARY_OP_DATA_SIZE = {\n \"big\": [\n [5000, 5000],\n # the case extremely inefficient\n # [10, 1_000_000],\n [1_000_000, 10],\n ],\n \"small\": [[250, 250], [10_000, 10]],\n}\nSERIES_DATA_SIZE = {\n \"big\": [[100_000, 1]],\n \"small\": [[10_000, 1]],\n}\nBINARY_OP_SERIES_DATA_SIZE = {\n \"big\": [\n [[500_000, 1], [1_000_000, 1]],\n [[500_000, 1], [500_000, 1]],\n ],\n \"small\": [[[5_000, 1], [10_000, 1]]],\n}\n\n\nHDK_BINARY_OP_DATA_SIZE = {\n \"big\": [[[500_000, 20], [1_000_000, 10]]],\n \"small\": [[[10_000, 20], [25_000, 10]]],\n}\nHDK_UNARY_OP_DATA_SIZE = {\n \"big\": [[1_000_000, 10]],\n \"small\": [[10_000, 10]],\n}\nHDK_SERIES_DATA_SIZE = {\n \"big\": [[10_000_000, 1]],\n \"small\": [[100_000, 1]],\n}\n\nDEFAULT_GROUPBY_NGROUPS = {\n \"big\": [100, \"huge_amount_groups\"],\n \"small\": [5],\n}\nGROUPBY_NGROUPS = DEFAULT_GROUPBY_NGROUPS[ASV_DATASET_SIZE]\n\n_DEFAULT_CONFIG_T = [\n (\n UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n [\n # Pandas storage format benchmarks\n \"TimeGroupByMultiColumn\",\n \"TimeGroupByDefaultAggregations\",\n \"TimeGroupByDictionaryAggregation\",\n \"TimeSetItem\",\n \"TimeInsert\",\n \"TimeArithmetic\",\n \"TimeSortValues\",\n \"TimeDrop\",\n \"TimeHead\",\n \"TimeTail\",\n \"TimeExplode\",\n \"TimeFillna\",\n \"TimeFillnaDataFrame\",\n \"TimeValueCountsFrame\",\n \"TimeValueCountsSeries\",\n \"TimeIndexing\",\n \"TimeMultiIndexing\",\n \"TimeResetIndex\",\n \"TimeAstype\",\n \"TimeDescribe\",\n \"TimeProperties\",\n \"TimeReindex\",\n \"TimeReindexMethod\",\n \"TimeFillnaMethodDataframe\",\n \"TimeDropDuplicatesDataframe\",\n \"TimeStack\",\n \"TimeUnstack\",\n \"TimeRepr\",\n \"TimeMaskBool\",\n \"TimeIsnull\",\n \"TimeDropna\",\n \"TimeEquals\",\n # IO benchmarks\n \"TimeReadCsvSkiprows\",\n \"TimeReadCsvTrueFalseValues\",\n \"TimeReadCsvNamesDtype\",\n \"TimeReadParquet\",\n # Scalability benchmarks\n \"TimeFromPandas\",\n \"TimeToPandas\",\n ],\n ),\n (\n BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n [\n # Pandas storage format benchmarks\n \"TimeJoin\",\n \"TimeMerge\",\n \"TimeMergeDefault\",\n \"TimeConcat\",\n \"TimeAppend\",\n \"TimeBinaryOp\",\n \"TimeLevelAlign\",\n ],\n ),\n (\n SERIES_DATA_SIZE[ASV_DATASET_SIZE],\n [\n # Pandas storage format benchmarks\n \"TimeFillnaSeries\",\n \"TimeGroups\",\n \"TimeIndexingNumericSeries\",\n \"TimeFillnaMethodSeries\",\n \"TimeDatetimeAccessor\",\n \"TimeSetCategories\",\n \"TimeRemoveCategories\",\n \"TimeDropDuplicatesSeries\",\n ],\n ),\n (\n BINARY_OP_SERIES_DATA_SIZE[ASV_DATASET_SIZE],\n [\n # Pandas storage format benchmarks\n \"TimeBinaryOpSeries\",\n ],\n ),\n]\n\n_DEFAULT_HDK_CONFIG_T = [\n (\n HDK_UNARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n [\n \"hdk.TimeJoin\",\n \"hdk.TimeBinaryOpDataFrame\",\n \"hdk.TimeArithmetic\",\n \"hdk.TimeSortValues\",\n \"hdk.TimeDrop\",\n \"hdk.TimeHead\",\n \"hdk.TimeFillna\",\n \"hdk.TimeIndexing\",\n \"hdk.TimeResetIndex\",\n \"hdk.TimeAstype\",\n \"hdk.TimeDescribe\",\n \"hdk.TimeProperties\",\n 
\"hdk.TimeGroupByDefaultAggregations\",\n \"hdk.TimeGroupByMultiColumn\",\n \"hdk.TimeValueCountsDataFrame\",\n \"hdk.TimeReadCsvNames\",\n ],\n ),\n (\n HDK_BINARY_OP_DATA_SIZE[ASV_DATASET_SIZE],\n [\"hdk.TimeMerge\", \"hdk.TimeAppend\"],\n ),\n (\n HDK_SERIES_DATA_SIZE[ASV_DATASET_SIZE],\n [\"hdk.TimeBinaryOpSeries\", \"hdk.TimeValueCountsSeries\"],\n ),\n]\nDEFAULT_CONFIG = {}\nDEFAULT_CONFIG[\"MergeCategoricals\"] = (\n [[10_000, 2]] if ASV_DATASET_SIZE == \"big\" else [[1_000, 2]]\n)\nDEFAULT_CONFIG[\"TimeJoinStringIndex\"] = (\n [[100_000, 64]] if ASV_DATASET_SIZE == \"big\" else [[1_000, 4]]\n)\nDEFAULT_CONFIG[\"TimeReplace\"] = (\n [[10_000, 2]] if ASV_DATASET_SIZE == \"big\" else [[1_000, 2]]\n)\nfor config in (_DEFAULT_CONFIG_T, _DEFAULT_HDK_CONFIG_T):\n for _shape, _names in config:\n DEFAULT_CONFIG.update({_name: _shape for _name in _names})\n\n# Correct forms in the case when the operation ended with a timeout error\nif ASV_DATASET_SIZE == \"big\":\n DEFAULT_CONFIG[\"TimeMergeDefault\"] = [\n [[1000, 1000], [1000, 1000]],\n [[500_000, 20], [1_000_000, 10]],\n ]\n\nCONFIG_FROM_FILE = None\n\n\ndef get_benchmark_shapes(bench_id: str):\n \"\"\"\n Get custom benchmark shapes from a json file stored in MODIN_ASV_DATASIZE_CONFIG.\n\n If `bench_id` benchmark is not found in the file, then the default value will\n be used.\n\n Parameters\n ----------\n bench_id : str\n Unique benchmark identifier that is used to get shapes.\n\n Returns\n -------\n list\n Benchmark shapes.\n \"\"\"\n global CONFIG_FROM_FILE\n if not CONFIG_FROM_FILE:\n try:\n from modin.config import AsvDataSizeConfig\n\n filename = AsvDataSizeConfig.get()\n except ImportError:\n filename = os.environ.get(\"MODIN_ASV_DATASIZE_CONFIG\", None)\n if filename:\n # should be json\n with open(filename) as _f:\n CONFIG_FROM_FILE = json.load(_f)\n\n if CONFIG_FROM_FILE and bench_id in CONFIG_FROM_FILE:\n # example: \"hdk.TimeReadCsvNames\": [[5555, 55], [3333, 33]]\n return CONFIG_FROM_FILE[bench_id]\n return DEFAULT_CONFIG[bench_id]\n", "path": "asv_bench/benchmarks/utils/data_shapes.py"}]}
| 3,293 | 339 |
gh_patches_debug_120
|
rasdani/github-patches
|
git_diff
|
pytorch__TensorRT-1896
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade `release/1.4` to Torch 2.0.1 + TensorRT 8.6.1
- Also upgrade `main` to TensorRT 8.6.1 (as a commit to #1852)
</issue>
<code>
[start of py/versions.py]
1 __version__ = "1.4.0.rc0"
2 __cuda_version__ = "11.8"
3 __cudnn_version__ = "8.8"
4 __tensorrt_version__ = "8.6"
5
[end of py/versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/py/versions.py b/py/versions.py
--- a/py/versions.py
+++ b/py/versions.py
@@ -1,4 +1,4 @@
-__version__ = "1.4.0.rc0"
+__version__ = "1.4.0"
__cuda_version__ = "11.8"
__cudnn_version__ = "8.8"
__tensorrt_version__ = "8.6"
|
{"golden_diff": "diff --git a/py/versions.py b/py/versions.py\n--- a/py/versions.py\n+++ b/py/versions.py\n@@ -1,4 +1,4 @@\n-__version__ = \"1.4.0.rc0\"\n+__version__ = \"1.4.0\"\n __cuda_version__ = \"11.8\"\n __cudnn_version__ = \"8.8\"\n __tensorrt_version__ = \"8.6\"\n", "issue": "Upgrade `release/1.4` to Torch 2.0.1 + TensorRT 8.6.1\n- Also upgrade `main` to TensorRT 8.6.1 (as a commit to #1852)\n", "before_files": [{"content": "__version__ = \"1.4.0.rc0\"\n__cuda_version__ = \"11.8\"\n__cudnn_version__ = \"8.8\"\n__tensorrt_version__ = \"8.6\"\n", "path": "py/versions.py"}]}
| 637 | 98 |
gh_patches_debug_23202
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-616
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check: CKV_GCP_19 False positive
**Describe the bug**
Checkov will flag your code even if `basic-auth` is already disabled on your cluster.
**To Reproduce**
Steps to reproduce the behavior:
1. Have a file as follows:
```
resource "google_container_cluster" "cluster-test" {
name = "cluster-test"
location = "europe-west1-c"
provider = google-beta
remove_default_node_pool = true
initial_node_count = 1
enable_shielded_nodes = true
release_channel {
channel = "RAPID"
}
pod_security_policy_config {
enabled = true
}
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
}
```
2. Run the CLI command 'checkov -d path/to/your/terraform/folder'.
3. See error:
```
Check: CKV_GCP_19: "Ensure GKE basic auth is disabled"
FAILED for resource: google_container_cluster.cluster-test
File: /cluster.tf:1-27
Guide: https://docs.bridgecrew.io/docs/bc_gcp_kubernetes_11
1 | resource "google_container_cluster" "cluster-test" {
2 | name = "cluster-test"
3 | location = "europe-west1-c"
4 | provider = google-beta
5 |
6 | remove_default_node_pool = true
7 | initial_node_count = 1
8 |
9 | enable_shielded_nodes = true
10 |
11 | release_channel {
12 | channel = "RAPID"
13 | }
14 |
15 | pod_security_policy_config {
16 | enabled = true
17 | }
18 |
19 | master_auth {
20 | username = ""
21 | password = ""
22 |
23 | client_certificate_config {
24 | issue_client_certificate = false
25 | }
26 | }
27 | }
```
**Expected behavior**
```
Check: CKV_GCP_19: "Ensure GKE basic auth is disabled"
PASSED for resource: google_container_cluster.cluster-test
File: /cluster.tf:1-27
Guide: https://docs.bridgecrew.io/docs/bc_gcp_kubernetes_7
```
The `basic-auth` is already supposed to be disabled by this bit of code:
```
master_auth {
username = ""
password = ""
client_certificate_config {
issue_client_certificate = false
}
}
```
**Environment:**
- CI: Github Actions
 - OS: Ubuntu-latest
- Checkov Version [latest]
- Terraform v0.12.24
**Additional context**
Tested and installed today following documentation available here: https://www.checkov.io/1.Introduction/Getting%20Started.html
</issue>
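For context, a rough sketch (assumed values, not taken from the report) of what the `master_auth` block looks like once checkov has parsed the HCL and handed it to the check's `scan_resource_conf()` shown below, and of the condition under which empty credentials should count as disabled:
```
# checkov wraps parsed attribute values in lists.
conf = {
    "name": ["cluster-test"],
    "master_auth": [{
        "username": [""],   # empty string -> basic auth disabled
        "password": [""],
        "client_certificate_config": [{"issue_client_certificate": [False]}],
    }],
}

username = conf["master_auth"][0].get("username")
password = conf["master_auth"][0].get("password")
# Expected outcome: both present but empty -> the check should report PASSED.
disabled = username == [""] and password == [""]
print("PASSED" if disabled else "FAILED")
```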
<code>
[start of checkov/terraform/checks/resource/gcp/GKEBasicAuth.py]
1 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
2 from checkov.common.models.enums import CheckResult, CheckCategories
3
4
5 class GKEBasicAuth(BaseResourceCheck):
6 def __init__(self):
7 name = "Ensure GKE basic auth is disabled"
8 id = "CKV_GCP_19"
9 supported_resources = ['google_container_cluster']
10 categories = [CheckCategories.KUBERNETES]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def scan_resource_conf(self, conf):
14 """
15 Looks for password configuration at azure_instance:
16 https://www.terraform.io/docs/providers/google/r/compute_ssl_policy.html
17 :param conf: google_compute_ssl_policy configuration
18 :return: <CheckResult>
19 """
20 if 'master_auth' in conf.keys():
21 if conf['master_auth'][0].get('username') or conf['master_auth'][0].get('password'):
22 return CheckResult.FAILED
23 return CheckResult.PASSED
24 return CheckResult.FAILED
25
26
27 check = GKEBasicAuth()
28
[end of checkov/terraform/checks/resource/gcp/GKEBasicAuth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py b/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py
--- a/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py
+++ b/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py
@@ -1,5 +1,5 @@
-from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
from checkov.common.models.enums import CheckResult, CheckCategories
+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
class GKEBasicAuth(BaseResourceCheck):
@@ -18,7 +18,13 @@
:return: <CheckResult>
"""
if 'master_auth' in conf.keys():
- if conf['master_auth'][0].get('username') or conf['master_auth'][0].get('password'):
+ username = conf['master_auth'][0].get('username')
+ password = conf['master_auth'][0].get('password')
+ if username or password:
+ # only if both are set to the empty string it is fine
+ # https://www.terraform.io/docs/providers/google/r/container_cluster.html
+ if len(username) == 1 and len(password) == 1 and username[0] == '' and password[0] == '':
+ return CheckResult.PASSED
return CheckResult.FAILED
return CheckResult.PASSED
return CheckResult.FAILED
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py b/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py\n--- a/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py\n+++ b/checkov/terraform/checks/resource/gcp/GKEBasicAuth.py\n@@ -1,5 +1,5 @@\n-from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n from checkov.common.models.enums import CheckResult, CheckCategories\n+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n \n class GKEBasicAuth(BaseResourceCheck):\n@@ -18,7 +18,13 @@\n :return: <CheckResult>\n \"\"\"\n if 'master_auth' in conf.keys():\n- if conf['master_auth'][0].get('username') or conf['master_auth'][0].get('password'):\n+ username = conf['master_auth'][0].get('username')\n+ password = conf['master_auth'][0].get('password')\n+ if username or password:\n+ # only if both are set to the empty string it is fine\n+ # https://www.terraform.io/docs/providers/google/r/container_cluster.html\n+ if len(username) == 1 and len(password) == 1 and username[0] == '' and password[0] == '':\n+ return CheckResult.PASSED\n return CheckResult.FAILED\n return CheckResult.PASSED\n return CheckResult.FAILED\n", "issue": "Check: CKV_GCP_19 False positive\n**Describe the bug**\r\nCheckov Will flag your code even if the `basic-auth` is already disabled on your cluster.\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Have a file as follows:\r\n``` \r\nresource \"google_container_cluster\" \"cluster-test\" {\r\n name = \"cluster-test\"\r\n location = \"europe-west1-c\"\r\n provider = google-beta\r\n\r\n remove_default_node_pool = true\r\n initial_node_count = 1\r\n\r\n enable_shielded_nodes = true\r\n\r\n release_channel {\r\n channel = \"RAPID\"\r\n }\r\n\r\n pod_security_policy_config {\r\n enabled = true\r\n }\r\n\r\n master_auth {\r\n username = \"\"\r\n password = \"\"\r\n\r\n client_certificate_config {\r\n issue_client_certificate = false\r\n }\r\n }\r\n}\r\n``` \r\n2. Run cli command 'checkov -d path/to/your/terraform/folder.'\r\n3. 
See error:\r\n``` \r\nCheck: CKV_GCP_19: \"Ensure GKE basic auth is disabled\"\r\n\tFAILED for resource: google_container_cluster.cluster-test\r\n\tFile: /cluster.tf:1-27\r\n\tGuide: https://docs.bridgecrew.io/docs/bc_gcp_kubernetes_11\r\n\r\n\t\t1 | resource \"google_container_cluster\" \"cluster-test\" {\r\n\t\t2 | name = \"cluster-test\"\r\n\t\t3 | location = \"europe-west1-c\"\r\n\t\t4 | provider = google-beta\r\n\t\t5 | \r\n\t\t6 | remove_default_node_pool = true\r\n\t\t7 | initial_node_count = 1\r\n\t\t8 | \r\n\t\t9 | enable_shielded_nodes = true\r\n\t\t10 | \r\n\t\t11 | release_channel {\r\n\t\t12 | channel = \"RAPID\"\r\n\t\t13 | }\r\n\t\t14 | \r\n\t\t15 | pod_security_policy_config {\r\n\t\t16 | enabled = true\r\n\t\t17 | }\r\n\t\t18 | \r\n\t\t19 | master_auth {\r\n\t\t20 | username = \"\"\r\n\t\t21 | password = \"\"\r\n\t\t22 | \r\n\t\t23 | client_certificate_config {\r\n\t\t24 | issue_client_certificate = false\r\n\t\t25 | }\r\n\t\t26 | }\r\n\t\t27 | }\r\n``` \r\n\r\n**Expected behavior**\r\n``` \r\nCheck: CKV_GCP_19: \"Ensure GKE basic auth is disabled\"\r\n\tPASSED for resource: google_container_cluster.cluster-test\r\n\tFile: /cluster.tf:1-27\r\n\tGuide: https://docs.bridgecrew.io/docs/bc_gcp_kubernetes_7\r\n``` \r\n\r\nThe `basic-auth` is already supposed to be disabled using this bit of code as:\r\n``` \r\n master_auth {\r\n username = \"\"\r\n password = \"\"\r\n\r\n client_certificate_config {\r\n issue_client_certificate = false\r\n }\r\n }\r\n```\r\n\r\n**Environment:**\r\n - CI: Github Actions \r\n - OS: Ubuntu-lastest\r\n - Checkov Version [latest]\r\n - Terraform v0.12.24\r\n\r\n**Additional context**\r\nTested and installed today following documentation available here: https://www.checkov.io/1.Introduction/Getting%20Started.html\n", "before_files": [{"content": "from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\nfrom checkov.common.models.enums import CheckResult, CheckCategories\n\n\nclass GKEBasicAuth(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure GKE basic auth is disabled\"\n id = \"CKV_GCP_19\"\n supported_resources = ['google_container_cluster']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n \"\"\"\n Looks for password configuration at azure_instance:\n https://www.terraform.io/docs/providers/google/r/compute_ssl_policy.html\n :param conf: google_compute_ssl_policy configuration\n :return: <CheckResult>\n \"\"\"\n if 'master_auth' in conf.keys():\n if conf['master_auth'][0].get('username') or conf['master_auth'][0].get('password'):\n return CheckResult.FAILED\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = GKEBasicAuth()\n", "path": "checkov/terraform/checks/resource/gcp/GKEBasicAuth.py"}]}
| 1,553 | 326 |
gh_patches_debug_17872
|
rasdani/github-patches
|
git_diff
|
qutebrowser__qutebrowser-7847
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'Your internet access is blocked' when using keyboard 'f' to follow a link in a local file to a remote one, but works if clicking with mouse
**Version info**:
<!-- Please copy the first block from :version, not just the qutebrowser version -->
qutebrowser v3.0.0
Git commit:
Backend: QtWebEngine 6.5.2, based on Chromium 108.0.5359.220 (from api)
Qt: 6.5.2
**Does the bug happen if you start with `--temp-basedir`?**:
Yes.
**Description**
I have a bookmarks.html file with commonly-used links. I load the file into qutebrowser with ':open -t file:///home/chris/ckstemp.html'. If I use the keyboard shortcut 'f' to follow a link, then press the link letter (say, 'a'), I get a 'Your internet access is blocked' error page. If instead I click the link with the mouse, the page loads as expected (no error).
I have tried deleting the .cache, and I've tried clearing my browser history.
This is new behavior since I upgraded to qutebrowser 3.0.0 this morning.
**How to reproduce**
<!-- Link to the affected site, or steps to reproduce the issue (if possible/applicable). -->
See above for the steps to reproduce. Here are the contents of my ckstemp.html file:
```
<TITLE>Bookmarks</TITLE>
<DL><p>
<DT><H1>Bookmarks</H1>
<DT><H3>Comics</H3>
<DL><p>
<DT><A HREF="https://www.gocomics.com/foxtrot">Foxtrot</A>
</DL>
</DL>
```
</issue>
<code>
[start of qutebrowser/browser/webengine/webengineelem.py]
1 # SPDX-FileCopyrightText: Florian Bruhin (The Compiler) <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 """QtWebEngine specific part of the web element API."""
6
7 from typing import (
8 TYPE_CHECKING, Any, Callable, Dict, Iterator, Optional, Set, Tuple, Union)
9
10 from qutebrowser.qt.core import QRect, QEventLoop
11 from qutebrowser.qt.widgets import QApplication
12 from qutebrowser.qt.webenginecore import QWebEngineSettings
13
14 from qutebrowser.utils import log, javascript, urlutils, usertypes, utils, version
15 from qutebrowser.browser import webelem
16
17 if TYPE_CHECKING:
18 from qutebrowser.browser.webengine import webenginetab
19
20
21 class WebEngineElement(webelem.AbstractWebElement):
22
23 """A web element for QtWebEngine, using JS under the hood."""
24
25 _tab: "webenginetab.WebEngineTab"
26
27 def __init__(self, js_dict: Dict[str, Any],
28 tab: 'webenginetab.WebEngineTab') -> None:
29 super().__init__(tab)
30 # Do some sanity checks on the data we get from JS
31 js_dict_types: Dict[str, Union[type, Tuple[type, ...]]] = {
32 'id': int,
33 'text': str,
34 'value': (str, int, float),
35 'tag_name': str,
36 'outer_xml': str,
37 'class_name': str,
38 'rects': list,
39 'attributes': dict,
40 'is_content_editable': bool,
41 'caret_position': (int, type(None)),
42 }
43 assert set(js_dict.keys()).issubset(js_dict_types.keys())
44 for name, typ in js_dict_types.items():
45 if name in js_dict and not isinstance(js_dict[name], typ):
46 raise TypeError("Got {} for {} from JS but expected {}: "
47 "{}".format(type(js_dict[name]), name, typ,
48 js_dict))
49 for name, value in js_dict['attributes'].items():
50 if not isinstance(name, str):
51 raise TypeError("Got {} ({}) for attribute name from JS: "
52 "{}".format(name, type(name), js_dict))
53 if not isinstance(value, str):
54 raise TypeError("Got {} ({}) for attribute {} from JS: "
55 "{}".format(value, type(value), name, js_dict))
56 for rect in js_dict['rects']:
57 assert set(rect.keys()) == {'top', 'right', 'bottom', 'left',
58 'height', 'width'}, rect.keys()
59 for value in rect.values():
60 if not isinstance(value, (int, float)):
61 raise TypeError("Got {} ({}) for rect from JS: "
62 "{}".format(value, type(value), js_dict))
63
64 self._id = js_dict['id']
65 self._js_dict = js_dict
66
67 def __str__(self) -> str:
68 return self._js_dict.get('text', '')
69
70 def __eq__(self, other: object) -> bool:
71 if not isinstance(other, WebEngineElement):
72 return NotImplemented
73 return self._id == other._id
74
75 def __getitem__(self, key: str) -> str:
76 attrs = self._js_dict['attributes']
77 return attrs[key]
78
79 def __setitem__(self, key: str, val: str) -> None:
80 self._js_dict['attributes'][key] = val
81 self._js_call('set_attribute', key, val)
82
83 def __delitem__(self, key: str) -> None:
84 utils.unused(key)
85 log.stub()
86
87 def __iter__(self) -> Iterator[str]:
88 return iter(self._js_dict['attributes'])
89
90 def __len__(self) -> int:
91 return len(self._js_dict['attributes'])
92
93 def _js_call(self, name: str, *args: webelem.JsValueType,
94 callback: Callable[[Any], None] = None) -> None:
95 """Wrapper to run stuff from webelem.js."""
96 if self._tab.is_deleted():
97 raise webelem.OrphanedError("Tab containing element vanished")
98 js_code = javascript.assemble('webelem', name, self._id, *args)
99 self._tab.run_js_async(js_code, callback=callback)
100
101 def has_frame(self) -> bool:
102 return True
103
104 def geometry(self) -> QRect:
105 log.stub()
106 return QRect()
107
108 def classes(self) -> Set[str]:
109 """Get a list of classes assigned to this element."""
110 return set(self._js_dict['class_name'].split())
111
112 def tag_name(self) -> str:
113 """Get the tag name of this element.
114
115 The returned name will always be lower-case.
116 """
117 tag = self._js_dict['tag_name']
118 assert isinstance(tag, str), tag
119 return tag.lower()
120
121 def outer_xml(self) -> str:
122 """Get the full HTML representation of this element."""
123 return self._js_dict['outer_xml']
124
125 def is_content_editable_prop(self) -> bool:
126 return self._js_dict['is_content_editable']
127
128 def value(self) -> webelem.JsValueType:
129 return self._js_dict.get('value', None)
130
131 def set_value(self, value: webelem.JsValueType) -> None:
132 self._js_call('set_value', value)
133
134 def dispatch_event(self, event: str,
135 bubbles: bool = False,
136 cancelable: bool = False,
137 composed: bool = False) -> None:
138 self._js_call('dispatch_event', event, bubbles, cancelable, composed)
139
140 def caret_position(self) -> Optional[int]:
141 """Get the text caret position for the current element.
142
143 If the element is not a text element, None is returned.
144 """
145 return self._js_dict.get('caret_position', None)
146
147 def insert_text(self, text: str) -> None:
148 if not self.is_editable(strict=True):
149 raise webelem.Error("Element is not editable!")
150 log.webelem.debug("Inserting text into element {!r}".format(self))
151 self._js_call('insert_text', text)
152
153 def rect_on_view(self, *, elem_geometry: QRect = None,
154 no_js: bool = False) -> QRect:
155 """Get the geometry of the element relative to the webview.
156
157 Skipping of small rectangles is due to <a> elements containing other
158 elements with "display:block" style, see
159 https://github.com/qutebrowser/qutebrowser/issues/1298
160
161 Args:
162 elem_geometry: The geometry of the element, or None.
163 Ignored with QtWebEngine.
164 no_js: Fall back to the Python implementation.
165 Ignored with QtWebEngine.
166 """
167 utils.unused(elem_geometry)
168 utils.unused(no_js)
169 rects = self._js_dict['rects']
170 for rect in rects:
171 # FIXME:qtwebengine
172 # width = rect.get("width", 0)
173 # height = rect.get("height", 0)
174 width = rect['width']
175 height = rect['height']
176 left = rect['left']
177 top = rect['top']
178 if width > 1 and height > 1:
179 # Fix coordinates according to zoom level
180 # We're not checking for zoom.text_only here as that doesn't
181 # exist for QtWebEngine.
182 zoom = self._tab.zoom.factor()
183 rect = QRect(int(left * zoom), int(top * zoom),
184 int(width * zoom), int(height * zoom))
185 # FIXME:qtwebengine
186 # frame = self._elem.webFrame()
187 # while frame is not None:
188 # # Translate to parent frames' position (scroll position
189 # # is taken care of inside getClientRects)
190 # rect.translate(frame.geometry().topLeft())
191 # frame = frame.parentFrame()
192 return rect
193 log.webelem.debug("Couldn't find rectangle for {!r} ({})".format(
194 self, rects))
195 return QRect()
196
197 def remove_blank_target(self) -> None:
198 if self._js_dict['attributes'].get('target') == '_blank':
199 self._js_dict['attributes']['target'] = '_top'
200 self._js_call('remove_blank_target')
201
202 def delete(self) -> None:
203 self._js_call('delete')
204
205 def _move_text_cursor(self) -> None:
206 if self.is_text_input() and self.is_editable():
207 self._js_call('move_cursor_to_end')
208
209 def _requires_user_interaction(self) -> bool:
210 baseurl = self._tab.url()
211 url = self.resolve_url(baseurl)
212 if url is None:
213 return True
214 if baseurl.scheme() == url.scheme(): # e.g. a qute:// link
215 return False
216
217 # Qt 6.3+ needs a user interaction to allow navigations from qute:// to
218 # outside qute:// (like e.g. on qute://bookmarks).
219 versions = version.qtwebengine_versions()
220 if (
221 baseurl.scheme() == "qute" and
222 url.scheme() != "qute" and
223 versions.webengine >= utils.VersionNumber(6, 3)
224 ):
225 return True
226
227 return url.scheme() not in urlutils.WEBENGINE_SCHEMES
228
229 def _click_editable(self, click_target: usertypes.ClickTarget) -> None:
230 # This actually "clicks" the element by calling focus() on it in JS.
231 self._js_call('focus')
232 self._move_text_cursor()
233
234 def _click_js(self, _click_target: usertypes.ClickTarget) -> None:
235 # FIXME:qtwebengine Have a proper API for this
236 # pylint: disable=protected-access
237 view = self._tab._widget
238 assert view is not None
239 # pylint: enable=protected-access
240 attribute = QWebEngineSettings.WebAttribute.JavascriptCanOpenWindows
241 could_open_windows = view.settings().testAttribute(attribute)
242 view.settings().setAttribute(attribute, True)
243
244 # Get QtWebEngine do apply the settings
245 # (it does so with a 0ms QTimer...)
246 # This is also used in Qt's tests:
247 # https://github.com/qt/qtwebengine/commit/5e572e88efa7ba7c2b9138ec19e606d3e345ac90
248 QApplication.processEvents(
249 QEventLoop.ProcessEventsFlag.ExcludeSocketNotifiers |
250 QEventLoop.ProcessEventsFlag.ExcludeUserInputEvents)
251
252 def reset_setting(_arg: Any) -> None:
253 """Set the JavascriptCanOpenWindows setting to its old value."""
254 assert view is not None
255 try:
256 view.settings().setAttribute(attribute, could_open_windows)
257 except RuntimeError:
258 # Happens if this callback gets called during QWebEnginePage
259 # destruction, i.e. if the tab was closed in the meantime.
260 pass
261
262 self._js_call('click', callback=reset_setting)
263
[end of qutebrowser/browser/webengine/webengineelem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/qutebrowser/browser/webengine/webengineelem.py b/qutebrowser/browser/webengine/webengineelem.py
--- a/qutebrowser/browser/webengine/webengineelem.py
+++ b/qutebrowser/browser/webengine/webengineelem.py
@@ -215,14 +215,16 @@
return False
# Qt 6.3+ needs a user interaction to allow navigations from qute:// to
- # outside qute:// (like e.g. on qute://bookmarks).
+ # outside qute:// (like e.g. on qute://bookmarks), as well as from file:// to
+ # outside of file:// (e.g. users having a local bookmarks.html).
versions = version.qtwebengine_versions()
- if (
- baseurl.scheme() == "qute" and
- url.scheme() != "qute" and
- versions.webengine >= utils.VersionNumber(6, 3)
- ):
- return True
+ for scheme in ["qute", "file"]:
+ if (
+ baseurl.scheme() == scheme and
+ url.scheme() != scheme and
+ versions.webengine >= utils.VersionNumber(6, 3)
+ ):
+ return True
return url.scheme() not in urlutils.WEBENGINE_SCHEMES
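A simplified, self-contained sketch of the post-fix decision for the reported scenario (a `file://` bookmarks page hinting to an `https://` link); versions are compared as plain tuples here instead of qutebrowser's `VersionNumber`:
```
def requires_user_interaction(base_scheme: str, target_scheme: str,
                              webengine_version: tuple = (6, 5)) -> bool:
    if base_scheme == target_scheme:
        return False
    for scheme in ("qute", "file"):
        if base_scheme == scheme and target_scheme != scheme and webengine_version >= (6, 3):
            return True
    # The real method additionally checks the target against WEBENGINE_SCHEMES here.
    return False

# file:// page -> https:// target on QtWebEngine 6.5: True, so hinting avoids the
# plain JS click() that Chromium treats as lacking a user gesture.
print(requires_user_interaction("file", "https"))
```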
|
{"golden_diff": "diff --git a/qutebrowser/browser/webengine/webengineelem.py b/qutebrowser/browser/webengine/webengineelem.py\n--- a/qutebrowser/browser/webengine/webengineelem.py\n+++ b/qutebrowser/browser/webengine/webengineelem.py\n@@ -215,14 +215,16 @@\n return False\n \n # Qt 6.3+ needs a user interaction to allow navigations from qute:// to\n- # outside qute:// (like e.g. on qute://bookmarks).\n+ # outside qute:// (like e.g. on qute://bookmarks), as well as from file:// to\n+ # outside of file:// (e.g. users having a local bookmarks.html).\n versions = version.qtwebengine_versions()\n- if (\n- baseurl.scheme() == \"qute\" and\n- url.scheme() != \"qute\" and\n- versions.webengine >= utils.VersionNumber(6, 3)\n- ):\n- return True\n+ for scheme in [\"qute\", \"file\"]:\n+ if (\n+ baseurl.scheme() == scheme and\n+ url.scheme() != scheme and\n+ versions.webengine >= utils.VersionNumber(6, 3)\n+ ):\n+ return True\n \n return url.scheme() not in urlutils.WEBENGINE_SCHEMES\n", "issue": "'Your internet access is blocked' when using keyboard 'f' to follow a link in a local file to a remote one, but works if clicking with mouse\n**Version info**:\r\n<!-- Please copy the first block from :version, not just the qutebrowser version -->\r\nqutebrowser v3.0.0\r\nGit commit: \r\nBackend: QtWebEngine 6.5.2, based on Chromium 108.0.5359.220 (from api)\r\nQt: 6.5.2\r\n\r\n**Does the bug happen if you start with `--temp-basedir`?**:\r\nYes.\r\n\r\n**Description**\r\nI have a bookmarks.html file with commonly-used links. I load the file into qutebrowser with ':open -t file:///home/chris/ckstemp.html'. If I use the keyboard shortcut 'f' to follow a link, then press the link letter (say, 'a'), I get a 'Your internet access is blocked' error page. If instead I click the link with the mouse, the page loads as expected (no error).\r\n\r\nI have tried deleting the .cache, and I've tried clearing my browser history.\r\n\r\nThis is new behavior since I upgraded to qutebrowser 3.0.0 this morning.\r\n\r\n**How to reproduce**\r\n<!-- Link to the affected site, or steps to reproduce the issue (if possible/applicable). -->\r\n\r\nSee above for the steps to reproduces. 
Here is the contents of my ckstemp.html file:\r\n\r\n```\r\n<TITLE>Bookmarks</TITLE>\r\n<DL><p>\r\n <DT><H1>Bookmarks</H1>\r\n <DT><H3>Comics</H3>\r\n <DL><p>\r\n <DT><A HREF=\"https://www.gocomics.com/foxtrot\">Foxtrot</A>\r\n </DL>\r\n</DL>\r\n```\r\n\r\n\n", "before_files": [{"content": "# SPDX-FileCopyrightText: Florian Bruhin (The Compiler) <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\n\"\"\"QtWebEngine specific part of the web element API.\"\"\"\n\nfrom typing import (\n TYPE_CHECKING, Any, Callable, Dict, Iterator, Optional, Set, Tuple, Union)\n\nfrom qutebrowser.qt.core import QRect, QEventLoop\nfrom qutebrowser.qt.widgets import QApplication\nfrom qutebrowser.qt.webenginecore import QWebEngineSettings\n\nfrom qutebrowser.utils import log, javascript, urlutils, usertypes, utils, version\nfrom qutebrowser.browser import webelem\n\nif TYPE_CHECKING:\n from qutebrowser.browser.webengine import webenginetab\n\n\nclass WebEngineElement(webelem.AbstractWebElement):\n\n \"\"\"A web element for QtWebEngine, using JS under the hood.\"\"\"\n\n _tab: \"webenginetab.WebEngineTab\"\n\n def __init__(self, js_dict: Dict[str, Any],\n tab: 'webenginetab.WebEngineTab') -> None:\n super().__init__(tab)\n # Do some sanity checks on the data we get from JS\n js_dict_types: Dict[str, Union[type, Tuple[type, ...]]] = {\n 'id': int,\n 'text': str,\n 'value': (str, int, float),\n 'tag_name': str,\n 'outer_xml': str,\n 'class_name': str,\n 'rects': list,\n 'attributes': dict,\n 'is_content_editable': bool,\n 'caret_position': (int, type(None)),\n }\n assert set(js_dict.keys()).issubset(js_dict_types.keys())\n for name, typ in js_dict_types.items():\n if name in js_dict and not isinstance(js_dict[name], typ):\n raise TypeError(\"Got {} for {} from JS but expected {}: \"\n \"{}\".format(type(js_dict[name]), name, typ,\n js_dict))\n for name, value in js_dict['attributes'].items():\n if not isinstance(name, str):\n raise TypeError(\"Got {} ({}) for attribute name from JS: \"\n \"{}\".format(name, type(name), js_dict))\n if not isinstance(value, str):\n raise TypeError(\"Got {} ({}) for attribute {} from JS: \"\n \"{}\".format(value, type(value), name, js_dict))\n for rect in js_dict['rects']:\n assert set(rect.keys()) == {'top', 'right', 'bottom', 'left',\n 'height', 'width'}, rect.keys()\n for value in rect.values():\n if not isinstance(value, (int, float)):\n raise TypeError(\"Got {} ({}) for rect from JS: \"\n \"{}\".format(value, type(value), js_dict))\n\n self._id = js_dict['id']\n self._js_dict = js_dict\n\n def __str__(self) -> str:\n return self._js_dict.get('text', '')\n\n def __eq__(self, other: object) -> bool:\n if not isinstance(other, WebEngineElement):\n return NotImplemented\n return self._id == other._id\n\n def __getitem__(self, key: str) -> str:\n attrs = self._js_dict['attributes']\n return attrs[key]\n\n def __setitem__(self, key: str, val: str) -> None:\n self._js_dict['attributes'][key] = val\n self._js_call('set_attribute', key, val)\n\n def __delitem__(self, key: str) -> None:\n utils.unused(key)\n log.stub()\n\n def __iter__(self) -> Iterator[str]:\n return iter(self._js_dict['attributes'])\n\n def __len__(self) -> int:\n return len(self._js_dict['attributes'])\n\n def _js_call(self, name: str, *args: webelem.JsValueType,\n callback: Callable[[Any], None] = None) -> None:\n \"\"\"Wrapper to run stuff from webelem.js.\"\"\"\n if self._tab.is_deleted():\n raise webelem.OrphanedError(\"Tab containing element vanished\")\n js_code = javascript.assemble('webelem', 
name, self._id, *args)\n self._tab.run_js_async(js_code, callback=callback)\n\n def has_frame(self) -> bool:\n return True\n\n def geometry(self) -> QRect:\n log.stub()\n return QRect()\n\n def classes(self) -> Set[str]:\n \"\"\"Get a list of classes assigned to this element.\"\"\"\n return set(self._js_dict['class_name'].split())\n\n def tag_name(self) -> str:\n \"\"\"Get the tag name of this element.\n\n The returned name will always be lower-case.\n \"\"\"\n tag = self._js_dict['tag_name']\n assert isinstance(tag, str), tag\n return tag.lower()\n\n def outer_xml(self) -> str:\n \"\"\"Get the full HTML representation of this element.\"\"\"\n return self._js_dict['outer_xml']\n\n def is_content_editable_prop(self) -> bool:\n return self._js_dict['is_content_editable']\n\n def value(self) -> webelem.JsValueType:\n return self._js_dict.get('value', None)\n\n def set_value(self, value: webelem.JsValueType) -> None:\n self._js_call('set_value', value)\n\n def dispatch_event(self, event: str,\n bubbles: bool = False,\n cancelable: bool = False,\n composed: bool = False) -> None:\n self._js_call('dispatch_event', event, bubbles, cancelable, composed)\n\n def caret_position(self) -> Optional[int]:\n \"\"\"Get the text caret position for the current element.\n\n If the element is not a text element, None is returned.\n \"\"\"\n return self._js_dict.get('caret_position', None)\n\n def insert_text(self, text: str) -> None:\n if not self.is_editable(strict=True):\n raise webelem.Error(\"Element is not editable!\")\n log.webelem.debug(\"Inserting text into element {!r}\".format(self))\n self._js_call('insert_text', text)\n\n def rect_on_view(self, *, elem_geometry: QRect = None,\n no_js: bool = False) -> QRect:\n \"\"\"Get the geometry of the element relative to the webview.\n\n Skipping of small rectangles is due to <a> elements containing other\n elements with \"display:block\" style, see\n https://github.com/qutebrowser/qutebrowser/issues/1298\n\n Args:\n elem_geometry: The geometry of the element, or None.\n Ignored with QtWebEngine.\n no_js: Fall back to the Python implementation.\n Ignored with QtWebEngine.\n \"\"\"\n utils.unused(elem_geometry)\n utils.unused(no_js)\n rects = self._js_dict['rects']\n for rect in rects:\n # FIXME:qtwebengine\n # width = rect.get(\"width\", 0)\n # height = rect.get(\"height\", 0)\n width = rect['width']\n height = rect['height']\n left = rect['left']\n top = rect['top']\n if width > 1 and height > 1:\n # Fix coordinates according to zoom level\n # We're not checking for zoom.text_only here as that doesn't\n # exist for QtWebEngine.\n zoom = self._tab.zoom.factor()\n rect = QRect(int(left * zoom), int(top * zoom),\n int(width * zoom), int(height * zoom))\n # FIXME:qtwebengine\n # frame = self._elem.webFrame()\n # while frame is not None:\n # # Translate to parent frames' position (scroll position\n # # is taken care of inside getClientRects)\n # rect.translate(frame.geometry().topLeft())\n # frame = frame.parentFrame()\n return rect\n log.webelem.debug(\"Couldn't find rectangle for {!r} ({})\".format(\n self, rects))\n return QRect()\n\n def remove_blank_target(self) -> None:\n if self._js_dict['attributes'].get('target') == '_blank':\n self._js_dict['attributes']['target'] = '_top'\n self._js_call('remove_blank_target')\n\n def delete(self) -> None:\n self._js_call('delete')\n\n def _move_text_cursor(self) -> None:\n if self.is_text_input() and self.is_editable():\n self._js_call('move_cursor_to_end')\n\n def _requires_user_interaction(self) -> bool:\n baseurl = 
self._tab.url()\n url = self.resolve_url(baseurl)\n if url is None:\n return True\n if baseurl.scheme() == url.scheme(): # e.g. a qute:// link\n return False\n\n # Qt 6.3+ needs a user interaction to allow navigations from qute:// to\n # outside qute:// (like e.g. on qute://bookmarks).\n versions = version.qtwebengine_versions()\n if (\n baseurl.scheme() == \"qute\" and\n url.scheme() != \"qute\" and\n versions.webengine >= utils.VersionNumber(6, 3)\n ):\n return True\n\n return url.scheme() not in urlutils.WEBENGINE_SCHEMES\n\n def _click_editable(self, click_target: usertypes.ClickTarget) -> None:\n # This actually \"clicks\" the element by calling focus() on it in JS.\n self._js_call('focus')\n self._move_text_cursor()\n\n def _click_js(self, _click_target: usertypes.ClickTarget) -> None:\n # FIXME:qtwebengine Have a proper API for this\n # pylint: disable=protected-access\n view = self._tab._widget\n assert view is not None\n # pylint: enable=protected-access\n attribute = QWebEngineSettings.WebAttribute.JavascriptCanOpenWindows\n could_open_windows = view.settings().testAttribute(attribute)\n view.settings().setAttribute(attribute, True)\n\n # Get QtWebEngine do apply the settings\n # (it does so with a 0ms QTimer...)\n # This is also used in Qt's tests:\n # https://github.com/qt/qtwebengine/commit/5e572e88efa7ba7c2b9138ec19e606d3e345ac90\n QApplication.processEvents(\n QEventLoop.ProcessEventsFlag.ExcludeSocketNotifiers |\n QEventLoop.ProcessEventsFlag.ExcludeUserInputEvents)\n\n def reset_setting(_arg: Any) -> None:\n \"\"\"Set the JavascriptCanOpenWindows setting to its old value.\"\"\"\n assert view is not None\n try:\n view.settings().setAttribute(attribute, could_open_windows)\n except RuntimeError:\n # Happens if this callback gets called during QWebEnginePage\n # destruction, i.e. if the tab was closed in the meantime.\n pass\n\n self._js_call('click', callback=reset_setting)\n", "path": "qutebrowser/browser/webengine/webengineelem.py"}]}
| 4,033 | 293 |
gh_patches_debug_6875
|
rasdani/github-patches
|
git_diff
|
nautobot__nautobot-5476
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[v2.2] Team(s) field not pre-populating when editing a Contact
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Nautobot version (Docker tag too if applicable): 2.2.0a1
* Python version: 3.11.8
* Database platform, version: n/a
* Middleware(s): n/a
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Create a Team
2. Create a Contact and associate it to the Team
3. Edit the Contact
<!-- What did you expect to happen? -->
### Expected Behavior
The existing Team would be displayed in the edit form
<!-- What happened instead? -->
### Observed Behavior
The `Team(s)` field is empty and if you don't reapply the team it will remove it.
</issue>
<code>
[start of nautobot/extras/forms/contacts.py]
1 from django import forms
2 from django.contrib.contenttypes.models import ContentType
3
4 from nautobot.core.forms import DynamicModelChoiceField, DynamicModelMultipleChoiceField
5 from nautobot.extras.models import Role, Status
6 from nautobot.extras.models.contacts import Contact, ContactAssociation, Team
7
8 from .base import NautobotBulkEditForm, NautobotFilterForm, NautobotModelForm
9 from .mixins import TagsBulkEditFormMixin
10
11
12 class ContactForm(NautobotModelForm):
13 teams = DynamicModelMultipleChoiceField(
14 queryset=Team.objects.all(),
15 required=False,
16 label="Team(s)",
17 )
18
19 class Meta:
20 model = Contact
21 fields = [
22 "name",
23 "phone",
24 "email",
25 "address",
26 "teams",
27 "comments",
28 "tags",
29 ]
30
31 def save(self, *args, **kwargs):
32 """
33 Since `teams` field on Contact Model is the reverse side of an M2M,
34 we have to override save() method to explictly set the teams for the Contact instance.
35 """
36 teams = self.cleaned_data.get("teams", [])
37 obj = super().save(*args, **kwargs)
38 obj.teams.set(teams)
39 return obj
40
41
42 class ContactBulkEditForm(TagsBulkEditFormMixin, NautobotBulkEditForm):
43 pk = forms.ModelMultipleChoiceField(queryset=Contact.objects.all(), widget=forms.MultipleHiddenInput())
44 phone = forms.CharField(required=False)
45 email = forms.CharField(required=False)
46 address = forms.CharField(required=False, widget=forms.Textarea())
47
48 class Meta:
49 model = Contact
50
51
52 class ContactFilterForm(NautobotFilterForm):
53 model = Contact
54 q = forms.CharField(required=False, label="Search")
55
56
57 class ObjectNewContactForm(NautobotModelForm):
58 teams = DynamicModelMultipleChoiceField(
59 queryset=Team.objects.all(),
60 required=False,
61 label="Team(s)",
62 )
63 associated_object_type = DynamicModelChoiceField(queryset=ContentType.objects.all(), required=True)
64 associated_object_id = forms.CharField(required=True)
65 role = DynamicModelChoiceField(
66 queryset=Role.objects.all(),
67 required=True,
68 query_params={"content_types": ContactAssociation._meta.label_lower},
69 )
70 status = DynamicModelChoiceField(
71 queryset=Status.objects.all(),
72 required=True,
73 query_params={"content_types": ContactAssociation._meta.label_lower},
74 )
75
76 class Meta:
77 model = Contact
78 fields = [
79 "name",
80 "phone",
81 "email",
82 "address",
83 "teams",
84 "comments",
85 "tags",
86 "associated_object_type",
87 "associated_object_id",
88 "role",
89 "status",
90 ]
91
92 def save(self, *args, **kwargs):
93 """
94 Since `teams` field on Contact Model is the reverse side of an M2M,
95 we have to override save() method to explictly set the teams for the Contact instance.
96 """
97 teams = self.cleaned_data.get("teams", [])
98 obj = super().save(*args, **kwargs)
99 obj.teams.set(teams)
100 return obj
101
102
103 class ObjectNewTeamForm(NautobotModelForm):
104 contacts = DynamicModelMultipleChoiceField(
105 queryset=Contact.objects.all(),
106 required=False,
107 label="Contact(s)",
108 )
109 associated_object_type = DynamicModelChoiceField(queryset=ContentType.objects.all(), required=True)
110 associated_object_id = forms.CharField(required=True)
111 role = DynamicModelChoiceField(
112 queryset=Role.objects.all(),
113 required=True,
114 query_params={"content_types": ContactAssociation._meta.label_lower},
115 )
116 status = DynamicModelChoiceField(
117 queryset=Status.objects.all(),
118 required=True,
119 query_params={"content_types": ContactAssociation._meta.label_lower},
120 )
121
122 class Meta:
123 model = Team
124 fields = [
125 "name",
126 "phone",
127 "email",
128 "address",
129 "contacts",
130 "comments",
131 "tags",
132 "associated_object_type",
133 "associated_object_id",
134 "role",
135 "status",
136 ]
137
138
139 class ContactAssociationForm(NautobotModelForm):
140 contact = DynamicModelChoiceField(queryset=Contact.objects.all(), required=False)
141 team = DynamicModelChoiceField(queryset=Team.objects.all(), required=False)
142
143 class Meta:
144 model = ContactAssociation
145 fields = [
146 "contact",
147 "team",
148 "associated_object_type",
149 "associated_object_id",
150 "role",
151 "status",
152 ]
153
154
155 class ContactAssociationBulkEditForm(NautobotBulkEditForm):
156 pk = forms.ModelMultipleChoiceField(queryset=ContactAssociation.objects.all(), widget=forms.MultipleHiddenInput())
157 role = DynamicModelChoiceField(
158 queryset=Role.objects.all(),
159 required=False,
160 query_params={"content_types": ContactAssociation._meta.label_lower},
161 )
162 status = DynamicModelChoiceField(
163 queryset=Status.objects.all(),
164 required=False,
165 query_params={"content_types": ContactAssociation._meta.label_lower},
166 )
167
168 class Meta:
169 model = ContactAssociation
170
171
172 class TeamForm(NautobotModelForm):
173 contacts = DynamicModelMultipleChoiceField(
174 queryset=Contact.objects.all(),
175 required=False,
176 label="Contact(s)",
177 )
178
179 class Meta:
180 model = Team
181 fields = [
182 "name",
183 "phone",
184 "email",
185 "address",
186 "contacts",
187 "comments",
188 "tags",
189 ]
190
191
192 class TeamBulkEditForm(TagsBulkEditFormMixin, NautobotBulkEditForm):
193 pk = forms.ModelMultipleChoiceField(queryset=Team.objects.all(), widget=forms.MultipleHiddenInput())
194 phone = forms.CharField(required=False)
195 email = forms.CharField(required=False)
196 address = forms.CharField(required=False, widget=forms.Textarea())
197
198 class Meta:
199 model = Team
200
201
202 class TeamFilterForm(NautobotFilterForm):
203 model = Team
204 q = forms.CharField(required=False, label="Search")
205
[end of nautobot/extras/forms/contacts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nautobot/extras/forms/contacts.py b/nautobot/extras/forms/contacts.py
--- a/nautobot/extras/forms/contacts.py
+++ b/nautobot/extras/forms/contacts.py
@@ -28,6 +28,13 @@
"tags",
]
+ def __init__(self, instance=None, initial=None, **kwargs):
+ if instance is not None:
+ if initial is None:
+ initial = {}
+ initial.setdefault("teams", instance.teams.all())
+ super().__init__(instance=instance, initial=initial, **kwargs)
+
def save(self, *args, **kwargs):
"""
Since `teams` field on Contact Model is the reverse side of an M2M,
|
{"golden_diff": "diff --git a/nautobot/extras/forms/contacts.py b/nautobot/extras/forms/contacts.py\n--- a/nautobot/extras/forms/contacts.py\n+++ b/nautobot/extras/forms/contacts.py\n@@ -28,6 +28,13 @@\n \"tags\",\n ]\n \n+ def __init__(self, instance=None, initial=None, **kwargs):\n+ if instance is not None:\n+ if initial is None:\n+ initial = {}\n+ initial.setdefault(\"teams\", instance.teams.all())\n+ super().__init__(instance=instance, initial=initial, **kwargs)\n+\n def save(self, *args, **kwargs):\n \"\"\"\n Since `teams` field on Contact Model is the reverse side of an M2M,\n", "issue": "[v2.2] Team(s) field not pre-populating when editing a Contact\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Nautobot version (Docker tag too if applicable): 2.2.0a1\r\n* Python version: 3.11.8\r\n* Database platform, version: n/a\r\n* Middleware(s): n/a\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Create a Team\r\n2. Create a Contact and associate it to the Team\r\n3. Edit the Contact\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nThe existing Team would be displayed in the edit form\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\nThe `Team(s)` field is empty and if you don't reapply the team it will remove it.\n", "before_files": [{"content": "from django import forms\nfrom django.contrib.contenttypes.models import ContentType\n\nfrom nautobot.core.forms import DynamicModelChoiceField, DynamicModelMultipleChoiceField\nfrom nautobot.extras.models import Role, Status\nfrom nautobot.extras.models.contacts import Contact, ContactAssociation, Team\n\nfrom .base import NautobotBulkEditForm, NautobotFilterForm, NautobotModelForm\nfrom .mixins import TagsBulkEditFormMixin\n\n\nclass ContactForm(NautobotModelForm):\n teams = DynamicModelMultipleChoiceField(\n queryset=Team.objects.all(),\n required=False,\n label=\"Team(s)\",\n )\n\n class Meta:\n model = Contact\n fields = [\n \"name\",\n \"phone\",\n \"email\",\n \"address\",\n \"teams\",\n \"comments\",\n \"tags\",\n ]\n\n def save(self, *args, **kwargs):\n \"\"\"\n Since `teams` field on Contact Model is the reverse side of an M2M,\n we have to override save() method to explictly set the teams for the Contact instance.\n \"\"\"\n teams = self.cleaned_data.get(\"teams\", [])\n obj = super().save(*args, **kwargs)\n obj.teams.set(teams)\n return obj\n\n\nclass ContactBulkEditForm(TagsBulkEditFormMixin, NautobotBulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=Contact.objects.all(), widget=forms.MultipleHiddenInput())\n phone = forms.CharField(required=False)\n email = forms.CharField(required=False)\n address = forms.CharField(required=False, widget=forms.Textarea())\n\n class Meta:\n model = Contact\n\n\nclass ContactFilterForm(NautobotFilterForm):\n model = Contact\n q = forms.CharField(required=False, label=\"Search\")\n\n\nclass ObjectNewContactForm(NautobotModelForm):\n teams = DynamicModelMultipleChoiceField(\n queryset=Team.objects.all(),\n required=False,\n label=\"Team(s)\",\n )\n associated_object_type = DynamicModelChoiceField(queryset=ContentType.objects.all(), required=True)\n associated_object_id = forms.CharField(required=True)\n role = DynamicModelChoiceField(\n queryset=Role.objects.all(),\n required=True,\n query_params={\"content_types\": ContactAssociation._meta.label_lower},\n )\n status = DynamicModelChoiceField(\n queryset=Status.objects.all(),\n required=True,\n query_params={\"content_types\": ContactAssociation._meta.label_lower},\n )\n\n class Meta:\n model = Contact\n fields = [\n \"name\",\n \"phone\",\n \"email\",\n \"address\",\n \"teams\",\n \"comments\",\n \"tags\",\n \"associated_object_type\",\n \"associated_object_id\",\n \"role\",\n \"status\",\n ]\n\n def save(self, *args, **kwargs):\n \"\"\"\n Since `teams` field on Contact Model is the reverse side of an M2M,\n we have to override save() method to explictly set the teams for the Contact instance.\n \"\"\"\n teams = self.cleaned_data.get(\"teams\", [])\n obj = super().save(*args, **kwargs)\n obj.teams.set(teams)\n return obj\n\n\nclass ObjectNewTeamForm(NautobotModelForm):\n contacts = DynamicModelMultipleChoiceField(\n queryset=Contact.objects.all(),\n required=False,\n label=\"Contact(s)\",\n )\n associated_object_type = DynamicModelChoiceField(queryset=ContentType.objects.all(), required=True)\n associated_object_id = forms.CharField(required=True)\n role = DynamicModelChoiceField(\n queryset=Role.objects.all(),\n required=True,\n query_params={\"content_types\": ContactAssociation._meta.label_lower},\n )\n status = DynamicModelChoiceField(\n queryset=Status.objects.all(),\n required=True,\n query_params={\"content_types\": 
ContactAssociation._meta.label_lower},\n )\n\n class Meta:\n model = Team\n fields = [\n \"name\",\n \"phone\",\n \"email\",\n \"address\",\n \"contacts\",\n \"comments\",\n \"tags\",\n \"associated_object_type\",\n \"associated_object_id\",\n \"role\",\n \"status\",\n ]\n\n\nclass ContactAssociationForm(NautobotModelForm):\n contact = DynamicModelChoiceField(queryset=Contact.objects.all(), required=False)\n team = DynamicModelChoiceField(queryset=Team.objects.all(), required=False)\n\n class Meta:\n model = ContactAssociation\n fields = [\n \"contact\",\n \"team\",\n \"associated_object_type\",\n \"associated_object_id\",\n \"role\",\n \"status\",\n ]\n\n\nclass ContactAssociationBulkEditForm(NautobotBulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=ContactAssociation.objects.all(), widget=forms.MultipleHiddenInput())\n role = DynamicModelChoiceField(\n queryset=Role.objects.all(),\n required=False,\n query_params={\"content_types\": ContactAssociation._meta.label_lower},\n )\n status = DynamicModelChoiceField(\n queryset=Status.objects.all(),\n required=False,\n query_params={\"content_types\": ContactAssociation._meta.label_lower},\n )\n\n class Meta:\n model = ContactAssociation\n\n\nclass TeamForm(NautobotModelForm):\n contacts = DynamicModelMultipleChoiceField(\n queryset=Contact.objects.all(),\n required=False,\n label=\"Contact(s)\",\n )\n\n class Meta:\n model = Team\n fields = [\n \"name\",\n \"phone\",\n \"email\",\n \"address\",\n \"contacts\",\n \"comments\",\n \"tags\",\n ]\n\n\nclass TeamBulkEditForm(TagsBulkEditFormMixin, NautobotBulkEditForm):\n pk = forms.ModelMultipleChoiceField(queryset=Team.objects.all(), widget=forms.MultipleHiddenInput())\n phone = forms.CharField(required=False)\n email = forms.CharField(required=False)\n address = forms.CharField(required=False, widget=forms.Textarea())\n\n class Meta:\n model = Team\n\n\nclass TeamFilterForm(NautobotFilterForm):\n model = Team\n q = forms.CharField(required=False, label=\"Search\")\n", "path": "nautobot/extras/forms/contacts.py"}]}
| 2,702 | 167 |
gh_patches_debug_2190
|
rasdani/github-patches
|
git_diff
|
bookwyrm-social__bookwyrm-695
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove the "Cover" text from the alt text of book covers if one is present
Currently, when a book cover is present and is displayed, it's alt text consists of the book title, the text *Cover*, edition name, and the first published date.
For example, via VoiceOver under Safari:
```
image The Night Circus cover (Hardcover, 2011)
```
The fact that this is a cover image is redundant, because users are already notified about the presence of an image. In this case, the textual data is actually more important, e.g. book title and edition name, since the presence of a cover does not add more information when using a screen reader.
The expected result, via VoiceOver (and possibly other screen readers as well) is:
```
image The Night Circus (Hardcover, 2011)
```
</issue>
<code>
[start of bookwyrm/models/book.py]
1 ''' database schema for books and shelves '''
2 import re
3
4 from django.db import models
5 from model_utils.managers import InheritanceManager
6
7 from bookwyrm import activitypub
8 from bookwyrm.settings import DOMAIN
9
10 from .activitypub_mixin import OrderedCollectionPageMixin, ObjectMixin
11 from .base_model import BookWyrmModel
12 from . import fields
13
14 class BookDataModel(ObjectMixin, BookWyrmModel):
15 ''' fields shared between editable book data (books, works, authors) '''
16 origin_id = models.CharField(max_length=255, null=True, blank=True)
17 openlibrary_key = fields.CharField(
18 max_length=255, blank=True, null=True, deduplication_field=True)
19 librarything_key = fields.CharField(
20 max_length=255, blank=True, null=True, deduplication_field=True)
21 goodreads_key = fields.CharField(
22 max_length=255, blank=True, null=True, deduplication_field=True)
23
24 last_edited_by = models.ForeignKey(
25 'User', on_delete=models.PROTECT, null=True)
26
27 class Meta:
28 ''' can't initialize this model, that wouldn't make sense '''
29 abstract = True
30
31 def save(self, *args, **kwargs):
32 ''' ensure that the remote_id is within this instance '''
33 if self.id:
34 self.remote_id = self.get_remote_id()
35 else:
36 self.origin_id = self.remote_id
37 self.remote_id = None
38 return super().save(*args, **kwargs)
39
40
41 class Book(BookDataModel):
42 ''' a generic book, which can mean either an edition or a work '''
43 connector = models.ForeignKey(
44 'Connector', on_delete=models.PROTECT, null=True)
45
46 # book/work metadata
47 title = fields.CharField(max_length=255)
48 sort_title = fields.CharField(max_length=255, blank=True, null=True)
49 subtitle = fields.CharField(max_length=255, blank=True, null=True)
50 description = fields.HtmlField(blank=True, null=True)
51 languages = fields.ArrayField(
52 models.CharField(max_length=255), blank=True, default=list
53 )
54 series = fields.CharField(max_length=255, blank=True, null=True)
55 series_number = fields.CharField(max_length=255, blank=True, null=True)
56 subjects = fields.ArrayField(
57 models.CharField(max_length=255), blank=True, null=True, default=list
58 )
59 subject_places = fields.ArrayField(
60 models.CharField(max_length=255), blank=True, null=True, default=list
61 )
62 authors = fields.ManyToManyField('Author')
63 cover = fields.ImageField(
64 upload_to='covers/', blank=True, null=True, alt_field='alt_text')
65 first_published_date = fields.DateTimeField(blank=True, null=True)
66 published_date = fields.DateTimeField(blank=True, null=True)
67
68 objects = InheritanceManager()
69
70 @property
71 def author_text(self):
72 ''' format a list of authors '''
73 return ', '.join(a.name for a in self.authors.all())
74
75 @property
76 def latest_readthrough(self):
77 ''' most recent readthrough activity '''
78 return self.readthrough_set.order_by('-updated_date').first()
79
80 @property
81 def edition_info(self):
82 ''' properties of this edition, as a string '''
83 items = [
84 self.physical_format if hasattr(self, 'physical_format') else None,
85 self.languages[0] + ' language' if self.languages and \
86 self.languages[0] != 'English' else None,
87 str(self.published_date.year) if self.published_date else None,
88 ]
89 return ', '.join(i for i in items if i)
90
91 @property
92 def alt_text(self):
93 ''' image alt test '''
94 text = '%s cover' % self.title
95 if self.edition_info:
96 text += ' (%s)' % self.edition_info
97 return text
98
99 def save(self, *args, **kwargs):
100 ''' can't be abstract for query reasons, but you shouldn't USE it '''
101 if not isinstance(self, Edition) and not isinstance(self, Work):
102 raise ValueError('Books should be added as Editions or Works')
103 return super().save(*args, **kwargs)
104
105 def get_remote_id(self):
106 ''' editions and works both use "book" instead of model_name '''
107 return 'https://%s/book/%d' % (DOMAIN, self.id)
108
109 def __repr__(self):
110 return "<{} key={!r} title={!r}>".format(
111 self.__class__,
112 self.openlibrary_key,
113 self.title,
114 )
115
116
117 class Work(OrderedCollectionPageMixin, Book):
118 ''' a work (an abstract concept of a book that manifests in an edition) '''
119 # library of congress catalog control number
120 lccn = fields.CharField(
121 max_length=255, blank=True, null=True, deduplication_field=True)
122 # this has to be nullable but should never be null
123 default_edition = fields.ForeignKey(
124 'Edition',
125 on_delete=models.PROTECT,
126 null=True,
127 load_remote=False
128 )
129
130 def save(self, *args, **kwargs):
131 ''' set some fields on the edition object '''
132 # set rank
133 for edition in self.editions.all():
134 edition.save()
135 return super().save(*args, **kwargs)
136
137 def get_default_edition(self):
138 ''' in case the default edition is not set '''
139 return self.default_edition or self.editions.order_by(
140 '-edition_rank'
141 ).first()
142
143 def to_edition_list(self, **kwargs):
144 ''' an ordered collection of editions '''
145 return self.to_ordered_collection(
146 self.editions.order_by('-edition_rank').all(),
147 remote_id='%s/editions' % self.remote_id,
148 **kwargs
149 )
150
151 activity_serializer = activitypub.Work
152 serialize_reverse_fields = [('editions', 'editions', '-edition_rank')]
153 deserialize_reverse_fields = [('editions', 'editions')]
154
155
156 class Edition(Book):
157 ''' an edition of a book '''
158 # these identifiers only apply to editions, not works
159 isbn_10 = fields.CharField(
160 max_length=255, blank=True, null=True, deduplication_field=True)
161 isbn_13 = fields.CharField(
162 max_length=255, blank=True, null=True, deduplication_field=True)
163 oclc_number = fields.CharField(
164 max_length=255, blank=True, null=True, deduplication_field=True)
165 asin = fields.CharField(
166 max_length=255, blank=True, null=True, deduplication_field=True)
167 pages = fields.IntegerField(blank=True, null=True)
168 physical_format = fields.CharField(max_length=255, blank=True, null=True)
169 publishers = fields.ArrayField(
170 models.CharField(max_length=255), blank=True, default=list
171 )
172 shelves = models.ManyToManyField(
173 'Shelf',
174 symmetrical=False,
175 through='ShelfBook',
176 through_fields=('book', 'shelf')
177 )
178 parent_work = fields.ForeignKey(
179 'Work', on_delete=models.PROTECT, null=True,
180 related_name='editions', activitypub_field='work')
181 edition_rank = fields.IntegerField(default=0)
182
183 activity_serializer = activitypub.Edition
184 name_field = 'title'
185
186 def get_rank(self):
187 ''' calculate how complete the data is on this edition '''
188 if self.parent_work and self.parent_work.default_edition == self:
189 # default edition has the highest rank
190 return 20
191 rank = 0
192 rank += int(bool(self.cover)) * 3
193 rank += int(bool(self.isbn_13))
194 rank += int(bool(self.isbn_10))
195 rank += int(bool(self.oclc_number))
196 rank += int(bool(self.pages))
197 rank += int(bool(self.physical_format))
198 rank += int(bool(self.description))
199 # max rank is 9
200 return rank
201
202 def save(self, *args, **kwargs):
203 ''' set some fields on the edition object '''
204 # calculate isbn 10/13
205 if self.isbn_13 and self.isbn_13[:3] == '978' and not self.isbn_10:
206 self.isbn_10 = isbn_13_to_10(self.isbn_13)
207 if self.isbn_10 and not self.isbn_13:
208 self.isbn_13 = isbn_10_to_13(self.isbn_10)
209
210 # set rank
211 self.edition_rank = self.get_rank()
212
213 return super().save(*args, **kwargs)
214
215
216 def isbn_10_to_13(isbn_10):
217 ''' convert an isbn 10 into an isbn 13 '''
218 isbn_10 = re.sub(r'[^0-9X]', '', isbn_10)
219 # drop the last character of the isbn 10 number (the original checkdigit)
220 converted = isbn_10[:9]
221 # add "978" to the front
222 converted = '978' + converted
223 # add a check digit to the end
224 # multiply the odd digits by 1 and the even digits by 3 and sum them
225 try:
226 checksum = sum(int(i) for i in converted[::2]) + \
227 sum(int(i) * 3 for i in converted[1::2])
228 except ValueError:
229 return None
230 # add the checksum mod 10 to the end
231 checkdigit = checksum % 10
232 if checkdigit != 0:
233 checkdigit = 10 - checkdigit
234 return converted + str(checkdigit)
235
236
237 def isbn_13_to_10(isbn_13):
238 ''' convert isbn 13 to 10, if possible '''
239 if isbn_13[:3] != '978':
240 return None
241
242 isbn_13 = re.sub(r'[^0-9X]', '', isbn_13)
243
244 # remove '978' and old checkdigit
245 converted = isbn_13[3:-1]
246 # calculate checkdigit
247 # multiple each digit by 10,9,8.. successively and sum them
248 try:
249 checksum = sum(int(d) * (10 - idx) for (idx, d) in enumerate(converted))
250 except ValueError:
251 return None
252 checkdigit = checksum % 11
253 checkdigit = 11 - checkdigit
254 if checkdigit == 10:
255 checkdigit = 'X'
256 return converted + str(checkdigit)
257
[end of bookwyrm/models/book.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bookwyrm/models/book.py b/bookwyrm/models/book.py
--- a/bookwyrm/models/book.py
+++ b/bookwyrm/models/book.py
@@ -91,7 +91,7 @@
@property
def alt_text(self):
''' image alt test '''
- text = '%s cover' % self.title
+ text = '%s' % self.title
if self.edition_info:
text += ' (%s)' % self.edition_info
return text
|
{"golden_diff": "diff --git a/bookwyrm/models/book.py b/bookwyrm/models/book.py\n--- a/bookwyrm/models/book.py\n+++ b/bookwyrm/models/book.py\n@@ -91,7 +91,7 @@\n @property\n def alt_text(self):\n ''' image alt test '''\n- text = '%s cover' % self.title\n+ text = '%s' % self.title\n if self.edition_info:\n text += ' (%s)' % self.edition_info\n return text\n", "issue": "Remove the \"Cover\" text from the alt text of book covers if one is present\nCurrently, when a book cover is present and is displayed, it's alt text consists of the book title, the text *Cover*, edition name, and the first published date.\r\n\r\nFor example, via VoiceOver under Safari:\r\n\r\n```\r\nimage The Night Circus cover (Hardcover, 2011)\r\n```\r\n\r\nThe fact that this is a cover image is redundant, because users are already notified about the presence of an image. In this case, the textual data is actually more important, e.g. book title and edition name, since the presence of a cover does not add more information when using a screen reader.\r\n\r\nThe expected result, via VoiceOver (and possibly other screen readers as well) is:\r\n\r\n```\r\nimage The Night Circus (Hardcover, 2011)\r\n```\n", "before_files": [{"content": "''' database schema for books and shelves '''\nimport re\n\nfrom django.db import models\nfrom model_utils.managers import InheritanceManager\n\nfrom bookwyrm import activitypub\nfrom bookwyrm.settings import DOMAIN\n\nfrom .activitypub_mixin import OrderedCollectionPageMixin, ObjectMixin\nfrom .base_model import BookWyrmModel\nfrom . import fields\n\nclass BookDataModel(ObjectMixin, BookWyrmModel):\n ''' fields shared between editable book data (books, works, authors) '''\n origin_id = models.CharField(max_length=255, null=True, blank=True)\n openlibrary_key = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n librarything_key = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n goodreads_key = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n\n last_edited_by = models.ForeignKey(\n 'User', on_delete=models.PROTECT, null=True)\n\n class Meta:\n ''' can't initialize this model, that wouldn't make sense '''\n abstract = True\n\n def save(self, *args, **kwargs):\n ''' ensure that the remote_id is within this instance '''\n if self.id:\n self.remote_id = self.get_remote_id()\n else:\n self.origin_id = self.remote_id\n self.remote_id = None\n return super().save(*args, **kwargs)\n\n\nclass Book(BookDataModel):\n ''' a generic book, which can mean either an edition or a work '''\n connector = models.ForeignKey(\n 'Connector', on_delete=models.PROTECT, null=True)\n\n # book/work metadata\n title = fields.CharField(max_length=255)\n sort_title = fields.CharField(max_length=255, blank=True, null=True)\n subtitle = fields.CharField(max_length=255, blank=True, null=True)\n description = fields.HtmlField(blank=True, null=True)\n languages = fields.ArrayField(\n models.CharField(max_length=255), blank=True, default=list\n )\n series = fields.CharField(max_length=255, blank=True, null=True)\n series_number = fields.CharField(max_length=255, blank=True, null=True)\n subjects = fields.ArrayField(\n models.CharField(max_length=255), blank=True, null=True, default=list\n )\n subject_places = fields.ArrayField(\n models.CharField(max_length=255), blank=True, null=True, default=list\n )\n authors = fields.ManyToManyField('Author')\n cover = fields.ImageField(\n upload_to='covers/', blank=True, null=True, 
alt_field='alt_text')\n first_published_date = fields.DateTimeField(blank=True, null=True)\n published_date = fields.DateTimeField(blank=True, null=True)\n\n objects = InheritanceManager()\n\n @property\n def author_text(self):\n ''' format a list of authors '''\n return ', '.join(a.name for a in self.authors.all())\n\n @property\n def latest_readthrough(self):\n ''' most recent readthrough activity '''\n return self.readthrough_set.order_by('-updated_date').first()\n\n @property\n def edition_info(self):\n ''' properties of this edition, as a string '''\n items = [\n self.physical_format if hasattr(self, 'physical_format') else None,\n self.languages[0] + ' language' if self.languages and \\\n self.languages[0] != 'English' else None,\n str(self.published_date.year) if self.published_date else None,\n ]\n return ', '.join(i for i in items if i)\n\n @property\n def alt_text(self):\n ''' image alt test '''\n text = '%s cover' % self.title\n if self.edition_info:\n text += ' (%s)' % self.edition_info\n return text\n\n def save(self, *args, **kwargs):\n ''' can't be abstract for query reasons, but you shouldn't USE it '''\n if not isinstance(self, Edition) and not isinstance(self, Work):\n raise ValueError('Books should be added as Editions or Works')\n return super().save(*args, **kwargs)\n\n def get_remote_id(self):\n ''' editions and works both use \"book\" instead of model_name '''\n return 'https://%s/book/%d' % (DOMAIN, self.id)\n\n def __repr__(self):\n return \"<{} key={!r} title={!r}>\".format(\n self.__class__,\n self.openlibrary_key,\n self.title,\n )\n\n\nclass Work(OrderedCollectionPageMixin, Book):\n ''' a work (an abstract concept of a book that manifests in an edition) '''\n # library of congress catalog control number\n lccn = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n # this has to be nullable but should never be null\n default_edition = fields.ForeignKey(\n 'Edition',\n on_delete=models.PROTECT,\n null=True,\n load_remote=False\n )\n\n def save(self, *args, **kwargs):\n ''' set some fields on the edition object '''\n # set rank\n for edition in self.editions.all():\n edition.save()\n return super().save(*args, **kwargs)\n\n def get_default_edition(self):\n ''' in case the default edition is not set '''\n return self.default_edition or self.editions.order_by(\n '-edition_rank'\n ).first()\n\n def to_edition_list(self, **kwargs):\n ''' an ordered collection of editions '''\n return self.to_ordered_collection(\n self.editions.order_by('-edition_rank').all(),\n remote_id='%s/editions' % self.remote_id,\n **kwargs\n )\n\n activity_serializer = activitypub.Work\n serialize_reverse_fields = [('editions', 'editions', '-edition_rank')]\n deserialize_reverse_fields = [('editions', 'editions')]\n\n\nclass Edition(Book):\n ''' an edition of a book '''\n # these identifiers only apply to editions, not works\n isbn_10 = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n isbn_13 = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n oclc_number = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n asin = fields.CharField(\n max_length=255, blank=True, null=True, deduplication_field=True)\n pages = fields.IntegerField(blank=True, null=True)\n physical_format = fields.CharField(max_length=255, blank=True, null=True)\n publishers = fields.ArrayField(\n models.CharField(max_length=255), blank=True, default=list\n )\n shelves = 
models.ManyToManyField(\n 'Shelf',\n symmetrical=False,\n through='ShelfBook',\n through_fields=('book', 'shelf')\n )\n parent_work = fields.ForeignKey(\n 'Work', on_delete=models.PROTECT, null=True,\n related_name='editions', activitypub_field='work')\n edition_rank = fields.IntegerField(default=0)\n\n activity_serializer = activitypub.Edition\n name_field = 'title'\n\n def get_rank(self):\n ''' calculate how complete the data is on this edition '''\n if self.parent_work and self.parent_work.default_edition == self:\n # default edition has the highest rank\n return 20\n rank = 0\n rank += int(bool(self.cover)) * 3\n rank += int(bool(self.isbn_13))\n rank += int(bool(self.isbn_10))\n rank += int(bool(self.oclc_number))\n rank += int(bool(self.pages))\n rank += int(bool(self.physical_format))\n rank += int(bool(self.description))\n # max rank is 9\n return rank\n\n def save(self, *args, **kwargs):\n ''' set some fields on the edition object '''\n # calculate isbn 10/13\n if self.isbn_13 and self.isbn_13[:3] == '978' and not self.isbn_10:\n self.isbn_10 = isbn_13_to_10(self.isbn_13)\n if self.isbn_10 and not self.isbn_13:\n self.isbn_13 = isbn_10_to_13(self.isbn_10)\n\n # set rank\n self.edition_rank = self.get_rank()\n\n return super().save(*args, **kwargs)\n\n\ndef isbn_10_to_13(isbn_10):\n ''' convert an isbn 10 into an isbn 13 '''\n isbn_10 = re.sub(r'[^0-9X]', '', isbn_10)\n # drop the last character of the isbn 10 number (the original checkdigit)\n converted = isbn_10[:9]\n # add \"978\" to the front\n converted = '978' + converted\n # add a check digit to the end\n # multiply the odd digits by 1 and the even digits by 3 and sum them\n try:\n checksum = sum(int(i) for i in converted[::2]) + \\\n sum(int(i) * 3 for i in converted[1::2])\n except ValueError:\n return None\n # add the checksum mod 10 to the end\n checkdigit = checksum % 10\n if checkdigit != 0:\n checkdigit = 10 - checkdigit\n return converted + str(checkdigit)\n\n\ndef isbn_13_to_10(isbn_13):\n ''' convert isbn 13 to 10, if possible '''\n if isbn_13[:3] != '978':\n return None\n\n isbn_13 = re.sub(r'[^0-9X]', '', isbn_13)\n\n # remove '978' and old checkdigit\n converted = isbn_13[3:-1]\n # calculate checkdigit\n # multiple each digit by 10,9,8.. successively and sum them\n try:\n checksum = sum(int(d) * (10 - idx) for (idx, d) in enumerate(converted))\n except ValueError:\n return None\n checkdigit = checksum % 11\n checkdigit = 11 - checkdigit\n if checkdigit == 10:\n checkdigit = 'X'\n return converted + str(checkdigit)\n", "path": "bookwyrm/models/book.py"}]}
| 3,689 | 111 |
gh_patches_debug_35697
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-2501
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AWS_103 - Mentioned Checkov Checks producing an Exception, when the Listener protocol is HTTP (ALB).
Background of the Issue :
We have been using mentioned Checkov inbuilt checks/policy ID (CKV_AWS_103/ BC_AWS_GENERAL_43) for connection requests in AWS Load balancer. it was working fine when application team using the Module, "aws_lb_listener" "redirect" will only enable when application team users "HTTPS" Protocol. The particular issue is being presented when we attempt "HTTP" Protocol (Development Environment)
Sample Module snippet (AWS Load Balancer Listener ) (Re-direct Action) :-
resource "aws_lb_listener" "redirect_http_listeneter" {
load_balancer_arn = aws_lb.front_end.arn
port = "80"
protocol = "HTTP"
}
default action {
type = "redirect"
redirect {
port = "443"
}
}
}
ISSUE :-
Exception Occur :- [If application team used the protocol HTTP]
Exception message sample
"[MainThread][ERROR] Failed to run check: Ensure the load balancer is using TLS1.2 for configuration:{'alpn_policy':[None],'arn':['arn:aws:elasticloadbalancing:**********:listener/app/dev-opsmanager-ec2-alb/******/*****],'certicate_arn':[None], 'default_action':
In bridgecrewio github, [ checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py ]
supports only HTTPS and it not producing any proper results, if we application team using HTTP. So require your support to tweak globally in the in-built check ID.
checkov version : 2.0.704
kindly let me know any other additional inputs require from our end.
</issue>
<code>
[start of checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py]
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.common.util.type_forcers import force_list
3 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
4
5
6 class AppLoadBalancerTLS12(BaseResourceCheck):
7 def __init__(self):
8 name = "Ensure that load balancer is using TLS 1.2"
9 id = "CKV_AWS_103"
10 supported_resources = ["aws_lb_listener", "aws_alb_listener"]
11 categories = [CheckCategories.GENERAL_SECURITY]
12 super().__init__(
13 name=name,
14 id=id,
15 categories=categories,
16 supported_resources=supported_resources,
17 )
18
19 def scan_resource_conf(self, conf):
20 key = "protocol"
21 self.evaluated_keys = [key]
22 if key in conf.keys():
23 if conf[key] in (["HTTPS"], ["TLS"]):
24 # Only interested in HTTPS & TLS listeners
25 policy = "ssl_policy"
26 if policy in conf.keys():
27 self.evaluated_keys.append(policy)
28 name = str(conf[policy]).strip("['']")
29 if name.startswith("ELBSecurityPolicy-FS-1-2") or name.startswith("ELBSecurityPolicy-TLS-1-2"):
30 return CheckResult.PASSED
31 return CheckResult.FAILED
32 elif conf[key] in (["TCP"], ["UDP"], ["TCP_UDP"]):
33 return CheckResult.PASSED
34 for idx_action, action in enumerate(conf.get("default_action", [])):
35 redirects = action.get("redirect", [])
36 for idx_redirect, redirect in enumerate(force_list(redirects)):
37 if redirect.get("protocol", []) == ["HTTPS"]:
38 redirect_index = f"[{idx_redirect}]/" if isinstance(redirects, list) else ""
39 self.evaluated_keys.append(f'default_action/[{idx_action}]/redirect/{redirect_index}protocol')
40 return CheckResult.PASSED
41 return CheckResult.FAILED
42
43
44 check = AppLoadBalancerTLS12()
45
[end of checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py b/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py
--- a/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py
+++ b/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py
@@ -1,14 +1,16 @@
+from typing import Dict, List, Any
+
from checkov.common.models.enums import CheckResult, CheckCategories
from checkov.common.util.type_forcers import force_list
from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
class AppLoadBalancerTLS12(BaseResourceCheck):
- def __init__(self):
+ def __init__(self) -> None:
name = "Ensure that load balancer is using TLS 1.2"
id = "CKV_AWS_103"
- supported_resources = ["aws_lb_listener", "aws_alb_listener"]
- categories = [CheckCategories.GENERAL_SECURITY]
+ supported_resources = ("aws_lb_listener", "aws_alb_listener")
+ categories = (CheckCategories.GENERAL_SECURITY,)
super().__init__(
name=name,
id=id,
@@ -16,7 +18,7 @@
supported_resources=supported_resources,
)
- def scan_resource_conf(self, conf):
+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
key = "protocol"
self.evaluated_keys = [key]
if key in conf.keys():
@@ -34,7 +36,7 @@
for idx_action, action in enumerate(conf.get("default_action", [])):
redirects = action.get("redirect", [])
for idx_redirect, redirect in enumerate(force_list(redirects)):
- if redirect.get("protocol", []) == ["HTTPS"]:
+ if isinstance(redirect, dict) and redirect.get("protocol", []) == ["HTTPS"]:
redirect_index = f"[{idx_redirect}]/" if isinstance(redirects, list) else ""
self.evaluated_keys.append(f'default_action/[{idx_action}]/redirect/{redirect_index}protocol')
return CheckResult.PASSED
|
{"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py b/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py\n--- a/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py\n+++ b/checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py\n@@ -1,14 +1,16 @@\n+from typing import Dict, List, Any\n+\n from checkov.common.models.enums import CheckResult, CheckCategories\n from checkov.common.util.type_forcers import force_list\n from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n \n class AppLoadBalancerTLS12(BaseResourceCheck):\n- def __init__(self):\n+ def __init__(self) -> None:\n name = \"Ensure that load balancer is using TLS 1.2\"\n id = \"CKV_AWS_103\"\n- supported_resources = [\"aws_lb_listener\", \"aws_alb_listener\"]\n- categories = [CheckCategories.GENERAL_SECURITY]\n+ supported_resources = (\"aws_lb_listener\", \"aws_alb_listener\")\n+ categories = (CheckCategories.GENERAL_SECURITY,)\n super().__init__(\n name=name,\n id=id,\n@@ -16,7 +18,7 @@\n supported_resources=supported_resources,\n )\n \n- def scan_resource_conf(self, conf):\n+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n key = \"protocol\"\n self.evaluated_keys = [key]\n if key in conf.keys():\n@@ -34,7 +36,7 @@\n for idx_action, action in enumerate(conf.get(\"default_action\", [])):\n redirects = action.get(\"redirect\", [])\n for idx_redirect, redirect in enumerate(force_list(redirects)):\n- if redirect.get(\"protocol\", []) == [\"HTTPS\"]:\n+ if isinstance(redirect, dict) and redirect.get(\"protocol\", []) == [\"HTTPS\"]:\n redirect_index = f\"[{idx_redirect}]/\" if isinstance(redirects, list) else \"\"\n self.evaluated_keys.append(f'default_action/[{idx_action}]/redirect/{redirect_index}protocol')\n return CheckResult.PASSED\n", "issue": "CKV_AWS_103 - Mentioned Checkov Checks producing an Exception, when the Listener protocol is HTTP (ALB).\nBackground of the Issue : \r\n\r\nWe have been using mentioned Checkov inbuilt checks/policy ID (CKV_AWS_103/ BC_AWS_GENERAL_43) for connection requests in AWS Load balancer. it was working fine when application team using the Module, \"aws_lb_listener\" \"redirect\" will only enable when application team users \"HTTPS\" Protocol. The particular issue is being presented when we attempt \"HTTP\" Protocol (Development Environment)\r\n\r\nSample Module snippet (AWS Load Balancer Listener ) (Re-direct Action) :- \r\n\r\nresource \"aws_lb_listener\" \"redirect_http_listeneter\" {\r\nload_balancer_arn = aws_lb.front_end.arn\r\nport = \"80\"\r\nprotocol = \"HTTP\" \r\n}\r\n\r\ndefault action {\r\n type = \"redirect\"\r\n\r\nredirect {\r\n port = \"443\"\r\n }\r\n}\r\n}\r\n\r\nISSUE :- \r\nException Occur :- [If application team used the protocol HTTP] \r\n\r\nException message sample \r\n\"[MainThread][ERROR] Failed to run check: Ensure the load balancer is using TLS1.2 for configuration:{'alpn_policy':[None],'arn':['arn:aws:elasticloadbalancing:**********:listener/app/dev-opsmanager-ec2-alb/******/*****],'certicate_arn':[None], 'default_action':\r\n\r\nIn bridgecrewio github, [ checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py ]\r\nsupports only HTTPS and it not producing any proper results, if we application team using HTTP. So require your support to tweak globally in the in-built check ID. 
\r\n\r\ncheckov version : 2.0.704\r\n\r\nkindly let me know any other additional inputs require from our end.\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.util.type_forcers import force_list\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass AppLoadBalancerTLS12(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure that load balancer is using TLS 1.2\"\n id = \"CKV_AWS_103\"\n supported_resources = [\"aws_lb_listener\", \"aws_alb_listener\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(\n name=name,\n id=id,\n categories=categories,\n supported_resources=supported_resources,\n )\n\n def scan_resource_conf(self, conf):\n key = \"protocol\"\n self.evaluated_keys = [key]\n if key in conf.keys():\n if conf[key] in ([\"HTTPS\"], [\"TLS\"]):\n # Only interested in HTTPS & TLS listeners\n policy = \"ssl_policy\"\n if policy in conf.keys():\n self.evaluated_keys.append(policy)\n name = str(conf[policy]).strip(\"['']\")\n if name.startswith(\"ELBSecurityPolicy-FS-1-2\") or name.startswith(\"ELBSecurityPolicy-TLS-1-2\"):\n return CheckResult.PASSED\n return CheckResult.FAILED\n elif conf[key] in ([\"TCP\"], [\"UDP\"], [\"TCP_UDP\"]):\n return CheckResult.PASSED\n for idx_action, action in enumerate(conf.get(\"default_action\", [])):\n redirects = action.get(\"redirect\", [])\n for idx_redirect, redirect in enumerate(force_list(redirects)):\n if redirect.get(\"protocol\", []) == [\"HTTPS\"]:\n redirect_index = f\"[{idx_redirect}]/\" if isinstance(redirects, list) else \"\"\n self.evaluated_keys.append(f'default_action/[{idx_action}]/redirect/{redirect_index}protocol')\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = AppLoadBalancerTLS12()\n", "path": "checkov/terraform/checks/resource/aws/AppLoadBalancerTLS12.py"}]}
| 1,450 | 477 |
gh_patches_debug_8595
|
rasdani/github-patches
|
git_diff
|
apluslms__a-plus-1083
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error: view object has no attribute 'redirect'
We have seen this error in the server logs a lot recently. Any internal server error is bad and implies that users end up suddenly on a crash page (HTTP 500 Internal server error). This must be looked into.
```
Internal Server Error: /COURSEKEY/COURSEINSTANCEKEY/toc/
AttributeError at /COURSEKEY/COURSEINSTANCEKEY/toc/
'TableOfContentsView' object has no attribute 'redirect'
```
This occurs on multiple courses (at least, I saw many errors on the courses Y1 and TRAK Y) and on multiple views. `TableOfContentsView` has been common there, but several other view classes have been named in the error logs, at least the following:
* SubmissionView
* ResultsView
* LanguageView
* ModuleView
</issue>
<code>
[start of course/viewbase.py]
1 from django.contrib import messages
2 from django.core.exceptions import PermissionDenied
3 from django.http import Http404
4 from django.shortcuts import get_object_or_404, redirect, render
5 from django.utils import translation
6 from django.utils.translation import gettext_lazy as _
7 from django.utils.translation import get_language, get_language_info
8
9 from authorization.permissions import ACCESS
10 from exercise.cache.content import CachedContent
11 from lib.helpers import remove_query_param_from_url, update_url_params
12 from lib.viewbase import BaseTemplateView
13 from userprofile.viewbase import UserProfileMixin
14 from exercise.models import LearningObject
15 from .cache.students import CachedStudent
16 from .exceptions import TranslationNotFound
17 from .permissions import (
18 CourseVisiblePermission,
19 CourseModulePermission,
20 )
21 from .models import Course, CourseInstance, CourseModule, UserTagging, Enrollment
22
23
24 class CourseMixin(UserProfileMixin):
25 course_kw = "course_slug"
26
27 def get_resource_objects(self):
28 super().get_resource_objects()
29 self.course = get_object_or_404(
30 Course,
31 url=self._get_kwarg(self.course_kw)
32 )
33 self.note("course")
34
35
36 class CourseBaseView(CourseMixin, BaseTemplateView):
37 pass
38
39
40 class CourseInstanceBaseMixin(object):
41 course_kw = CourseMixin.course_kw
42 instance_kw = "instance_slug"
43 course_permission_classes = (
44 CourseVisiblePermission,
45 )
46
47 def get_permissions(self):
48 perms = super().get_permissions()
49 perms.extend((Perm() for Perm in self.course_permission_classes))
50 return perms
51
52 # get_course_instance_object
53
54 def get_resource_objects(self):
55 super().get_resource_objects()
56 user = self.request.user
57 instance = self.get_course_instance_object()
58 if instance is not None:
59 self.instance = instance
60 self.course = self.instance.course
61 self.content = CachedContent(self.instance)
62 self.user_course_data = None
63 is_real_user = user.is_authenticated and not user.is_anonymous
64 if is_real_user:
65 self.user_course_data = self.instance.get_enrollment_for(user)
66 self.is_student = self.instance.is_student(user)
67 self.is_assistant = self.instance.is_assistant(user)
68 self.is_teacher = self.instance.is_teacher(user)
69 self.is_course_staff = self.is_teacher or self.is_assistant
70 self.get_taggings = lambda: CachedStudent(instance, user.id).data['tag_slugs']
71 self.url_without_language = remove_query_param_from_url(self.request.get_full_path(), 'hl')
72 self.query_language = None
73 self.user_language = None
74
75 self.note(
76 "course", "instance", "content", "user_course_data", "is_student", "is_assistant",
77 "is_teacher", "is_course_staff", "get_taggings", "url_without_language",
78 "query_language", "user_language"
79 )
80
81 # Try to find a language that is defined for this course instance
82 # and apply it
83 if self.instance.language:
84 instance_languages = self.instance.language.strip('|').split('|')
85 instance_def_language = instance_languages[0]
86 instance_languages = set(instance_languages)
87
88 languages = []
89 if self.user_course_data and self.user_course_data.language:
90 languages.append(self.user_course_data.language)
91 if is_real_user and user.userprofile.language:
92 languages.append(user.userprofile.language)
93 languages.append(get_language())
94
95 query_language = self.request.GET.get('hl')
96 if query_language:
97 if query_language[:2] in instance_languages:
98 language = query_language
99 if languages:
100 self.user_language = languages[0]
101 if self.user_language[:2] != query_language[:2]:
102 self.query_language = query_language
103 else:
104 raise TranslationNotFound
105 else:
106 for lang in languages:
107 if lang[:2] in instance_languages:
108 language = lang
109 break
110 else:
111 language = instance_def_language
112
113 language = language[:2]
114 # Override request.LANGUAGE_CODE. It is set in lib/middleware.py
115 # (class LocaleMiddleware) based on the userprofile.language.
116 # The middleware can not easily access the course context and
117 # the language from the enrollment. That is fixed here.
118 self.request.LANGUAGE_CODE = language
119 translation.activate(language)
120
121 def get_access_mode(self):
122 access_mode = super().get_access_mode()
123
124 if hasattr(self, 'instance'):
125 # Loosen the access mode if instance is public
126 show_for = self.instance.view_content_to
127 is_public = show_for == CourseInstance.VIEW_ACCESS.PUBLIC
128 access_mode_student = access_mode in (ACCESS.STUDENT, ACCESS.ENROLL)
129 if is_public and access_mode_student:
130 access_mode = ACCESS.ANONYMOUS
131
132 return access_mode
133
134 def handle_exception(self, exc):
135 if isinstance(exc, TranslationNotFound):
136 instance_languages = self.instance.language.strip("|").split("|")
137 url = remove_query_param_from_url(self.request.get_full_path(), 'hl')
138 for i, lang in enumerate(instance_languages):
139 instance_languages[i] = {"name": get_language_info(lang)['name'], "url": update_url_params(url, {'hl' : lang})}
140 return render(self.request, '404.html', {'error_msg': str(exc), 'languages': instance_languages}, status=404)
141 return super().handle_exception(exc)
142
143 class CourseInstanceMixin(CourseInstanceBaseMixin, UserProfileMixin):
144 def get_course_instance_object(self) -> CourseInstance:
145 return get_object_or_404(
146 CourseInstance.objects.prefetch_related('tabs'),
147 url=self.kwargs[self.instance_kw],
148 course__url=self.kwargs[self.course_kw],
149 )
150
151 def handle_no_permission(self):
152 if (self.request.user.is_authenticated
153 and not self.is_student
154 and not self.is_course_staff
155 and self.get_access_mode() in [ACCESS.STUDENT, ACCESS.ENROLLED]
156 and self.instance.view_content_to == CourseInstance.VIEW_ACCESS.ENROLLED):
157 # Redirect the user to the enrollment page instead of showing
158 # a 403 Forbidden error, if:
159 # - the user is signed in but not enrolled or staff
160 # - the page is not a teacher page (e.g. edit course)
161 # - the course is visible only to enrolled students
162 #
163 # If SIS enrollment is applied and course requires enrollment questionnaire,
164 # redirect to the questionnaire instead.
165 enrollment = self.user_course_data
166 if enrollment and enrollment.status == Enrollment.ENROLLMENT_STATUS.PENDING:
167 exercise = LearningObject.objects.find_enrollment_exercise(
168 self.instance, self.profile.is_external)
169 if exercise:
170 return self.redirect(exercise.get_absolute_url())
171 return redirect(self.instance.get_url('enroll'))
172 return super().handle_no_permission()
173
174
175 class CourseInstanceBaseView(CourseInstanceMixin, BaseTemplateView):
176 pass
177
178
179 class EnrollableViewMixin(CourseInstanceMixin):
180 access_mode = ACCESS.ENROLL
181
182 def get_common_objects(self):
183 self.enrolled = self.is_student
184 self.enrollable = (
185 self.profile
186 and self.instance.is_enrollable(self.profile.user)
187 )
188 self.note('enrolled', 'enrollable')
189
190
191 class CourseModuleBaseMixin(object):
192 module_kw = "module_slug"
193 module_permissions_classes = (
194 CourseModulePermission,
195 )
196
197 def get_permissions(self):
198 perms = super().get_permissions()
199 perms.extend((Perm() for Perm in self.module_permissions_classes))
200 return perms
201
202 # get_course_module_object
203
204 def get_resource_objects(self):
205 super().get_resource_objects()
206 self.module = self.get_course_module_object()
207 self.note("module")
208
209
210 class CourseModuleMixin(CourseModuleBaseMixin, CourseInstanceMixin):
211 def get_course_module_object(self):
212 return get_object_or_404(
213 CourseModule,
214 url=self.kwargs[self.module_kw],
215 course_instance=self.instance
216 )
217
218
219 class CourseModuleBaseView(CourseModuleMixin, BaseTemplateView):
220 pass
221
[end of course/viewbase.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/course/viewbase.py b/course/viewbase.py
--- a/course/viewbase.py
+++ b/course/viewbase.py
@@ -167,7 +167,7 @@
exercise = LearningObject.objects.find_enrollment_exercise(
self.instance, self.profile.is_external)
if exercise:
- return self.redirect(exercise.get_absolute_url())
+ return redirect(exercise.get_absolute_url())
return redirect(self.instance.get_url('enroll'))
return super().handle_no_permission()
|
{"golden_diff": "diff --git a/course/viewbase.py b/course/viewbase.py\n--- a/course/viewbase.py\n+++ b/course/viewbase.py\n@@ -167,7 +167,7 @@\n exercise = LearningObject.objects.find_enrollment_exercise(\n self.instance, self.profile.is_external)\n if exercise:\n- return self.redirect(exercise.get_absolute_url())\n+ return redirect(exercise.get_absolute_url())\n return redirect(self.instance.get_url('enroll'))\n return super().handle_no_permission()\n", "issue": "Error: view object has no attribute 'redirect'\nWe have seen this error in the server logs a lot recently. Any internal server error is bad and implies that users end up suddenly on a crash page (HTTP 500 Internal server error). This must be looked into.\r\n\r\n```\r\nInternal Server Error: /COURSEKEY/COURSEINSTANCEKEY/toc/\r\n\r\nAttributeError at /COURSEKEY/COURSEINSTANCEKEY/toc/\r\n'TableOfContentsView' object has no attribute 'redirect'\r\n```\r\n\r\nThis occurs on multiple courses (at least, I saw many errors on the courses Y1 and TRAK Y) and on multiple views. `TableOfContentsView` has been common there, but several other view classes have been named in the error logs, at least the following:\r\n\r\n* SubmissionView\r\n* ResultsView\r\n* LanguageView\r\n* ModuleView\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils import translation\nfrom django.utils.translation import gettext_lazy as _\nfrom django.utils.translation import get_language, get_language_info\n\nfrom authorization.permissions import ACCESS\nfrom exercise.cache.content import CachedContent\nfrom lib.helpers import remove_query_param_from_url, update_url_params\nfrom lib.viewbase import BaseTemplateView\nfrom userprofile.viewbase import UserProfileMixin\nfrom exercise.models import LearningObject\nfrom .cache.students import CachedStudent\nfrom .exceptions import TranslationNotFound\nfrom .permissions import (\n CourseVisiblePermission,\n CourseModulePermission,\n)\nfrom .models import Course, CourseInstance, CourseModule, UserTagging, Enrollment\n\n\nclass CourseMixin(UserProfileMixin):\n course_kw = \"course_slug\"\n\n def get_resource_objects(self):\n super().get_resource_objects()\n self.course = get_object_or_404(\n Course,\n url=self._get_kwarg(self.course_kw)\n )\n self.note(\"course\")\n\n\nclass CourseBaseView(CourseMixin, BaseTemplateView):\n pass\n\n\nclass CourseInstanceBaseMixin(object):\n course_kw = CourseMixin.course_kw\n instance_kw = \"instance_slug\"\n course_permission_classes = (\n CourseVisiblePermission,\n )\n\n def get_permissions(self):\n perms = super().get_permissions()\n perms.extend((Perm() for Perm in self.course_permission_classes))\n return perms\n\n # get_course_instance_object\n\n def get_resource_objects(self):\n super().get_resource_objects()\n user = self.request.user\n instance = self.get_course_instance_object()\n if instance is not None:\n self.instance = instance\n self.course = self.instance.course\n self.content = CachedContent(self.instance)\n self.user_course_data = None\n is_real_user = user.is_authenticated and not user.is_anonymous\n if is_real_user:\n self.user_course_data = self.instance.get_enrollment_for(user)\n self.is_student = self.instance.is_student(user)\n self.is_assistant = self.instance.is_assistant(user)\n self.is_teacher = self.instance.is_teacher(user)\n self.is_course_staff = self.is_teacher or 
self.is_assistant\n self.get_taggings = lambda: CachedStudent(instance, user.id).data['tag_slugs']\n self.url_without_language = remove_query_param_from_url(self.request.get_full_path(), 'hl')\n self.query_language = None\n self.user_language = None\n\n self.note(\n \"course\", \"instance\", \"content\", \"user_course_data\", \"is_student\", \"is_assistant\",\n \"is_teacher\", \"is_course_staff\", \"get_taggings\", \"url_without_language\",\n \"query_language\", \"user_language\"\n )\n\n # Try to find a language that is defined for this course instance\n # and apply it\n if self.instance.language:\n instance_languages = self.instance.language.strip('|').split('|')\n instance_def_language = instance_languages[0]\n instance_languages = set(instance_languages)\n\n languages = []\n if self.user_course_data and self.user_course_data.language:\n languages.append(self.user_course_data.language)\n if is_real_user and user.userprofile.language:\n languages.append(user.userprofile.language)\n languages.append(get_language())\n\n query_language = self.request.GET.get('hl')\n if query_language:\n if query_language[:2] in instance_languages:\n language = query_language\n if languages:\n self.user_language = languages[0]\n if self.user_language[:2] != query_language[:2]:\n self.query_language = query_language\n else:\n raise TranslationNotFound\n else:\n for lang in languages:\n if lang[:2] in instance_languages:\n language = lang\n break\n else:\n language = instance_def_language\n\n language = language[:2]\n # Override request.LANGUAGE_CODE. It is set in lib/middleware.py\n # (class LocaleMiddleware) based on the userprofile.language.\n # The middleware can not easily access the course context and\n # the language from the enrollment. That is fixed here.\n self.request.LANGUAGE_CODE = language\n translation.activate(language)\n\n def get_access_mode(self):\n access_mode = super().get_access_mode()\n\n if hasattr(self, 'instance'):\n # Loosen the access mode if instance is public\n show_for = self.instance.view_content_to\n is_public = show_for == CourseInstance.VIEW_ACCESS.PUBLIC\n access_mode_student = access_mode in (ACCESS.STUDENT, ACCESS.ENROLL)\n if is_public and access_mode_student:\n access_mode = ACCESS.ANONYMOUS\n\n return access_mode\n\n def handle_exception(self, exc):\n if isinstance(exc, TranslationNotFound):\n instance_languages = self.instance.language.strip(\"|\").split(\"|\")\n url = remove_query_param_from_url(self.request.get_full_path(), 'hl')\n for i, lang in enumerate(instance_languages):\n instance_languages[i] = {\"name\": get_language_info(lang)['name'], \"url\": update_url_params(url, {'hl' : lang})}\n return render(self.request, '404.html', {'error_msg': str(exc), 'languages': instance_languages}, status=404)\n return super().handle_exception(exc)\n\nclass CourseInstanceMixin(CourseInstanceBaseMixin, UserProfileMixin):\n def get_course_instance_object(self) -> CourseInstance:\n return get_object_or_404(\n CourseInstance.objects.prefetch_related('tabs'),\n url=self.kwargs[self.instance_kw],\n course__url=self.kwargs[self.course_kw],\n )\n\n def handle_no_permission(self):\n if (self.request.user.is_authenticated\n and not self.is_student\n and not self.is_course_staff\n and self.get_access_mode() in [ACCESS.STUDENT, ACCESS.ENROLLED]\n and self.instance.view_content_to == CourseInstance.VIEW_ACCESS.ENROLLED):\n # Redirect the user to the enrollment page instead of showing\n # a 403 Forbidden error, if:\n # - the user is signed in but not enrolled or staff\n # - the page is not 
a teacher page (e.g. edit course)\n # - the course is visible only to enrolled students\n #\n # If SIS enrollment is applied and course requires enrollment questionnaire,\n # redirect to the questionnaire instead.\n enrollment = self.user_course_data\n if enrollment and enrollment.status == Enrollment.ENROLLMENT_STATUS.PENDING:\n exercise = LearningObject.objects.find_enrollment_exercise(\n self.instance, self.profile.is_external)\n if exercise:\n return self.redirect(exercise.get_absolute_url())\n return redirect(self.instance.get_url('enroll'))\n return super().handle_no_permission()\n\n\nclass CourseInstanceBaseView(CourseInstanceMixin, BaseTemplateView):\n pass\n\n\nclass EnrollableViewMixin(CourseInstanceMixin):\n access_mode = ACCESS.ENROLL\n\n def get_common_objects(self):\n self.enrolled = self.is_student\n self.enrollable = (\n self.profile\n and self.instance.is_enrollable(self.profile.user)\n )\n self.note('enrolled', 'enrollable')\n\n\nclass CourseModuleBaseMixin(object):\n module_kw = \"module_slug\"\n module_permissions_classes = (\n CourseModulePermission,\n )\n\n def get_permissions(self):\n perms = super().get_permissions()\n perms.extend((Perm() for Perm in self.module_permissions_classes))\n return perms\n\n # get_course_module_object\n\n def get_resource_objects(self):\n super().get_resource_objects()\n self.module = self.get_course_module_object()\n self.note(\"module\")\n\n\nclass CourseModuleMixin(CourseModuleBaseMixin, CourseInstanceMixin):\n def get_course_module_object(self):\n return get_object_or_404(\n CourseModule,\n url=self.kwargs[self.module_kw],\n course_instance=self.instance\n )\n\n\nclass CourseModuleBaseView(CourseModuleMixin, BaseTemplateView):\n pass\n", "path": "course/viewbase.py"}]}
| 2,968 | 107 |
gh_patches_debug_612
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1502
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.53
On the docket:
+ [x] pex stops interpreter search if even one intepreter fails to identify itself #1494
+ [x] Add support for setting custom venv prompts. #1499
+ [x] How to know whether we are running from within pex #1485
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.52"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.52"
+__version__ = "2.1.53"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.52\"\n+__version__ = \"2.1.53\"\n", "issue": "Release 2.1.53\nOn the docket:\r\n+ [x] pex stops interpreter search if even one intepreter fails to identify itself #1494\r\n+ [x] Add support for setting custom venv prompts. #1499\r\n+ [x] How to know whether we are running from within pex #1485 \n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.52\"\n", "path": "pex/version.py"}]}
| 662 | 96 |
gh_patches_debug_24863
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-4574
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
response.json()?
python-requests have response.json() that decodes json body and returns appropriate Python objects. Does it make sense to have something like this in Scrapy?
Add json response
My first PR for Scrapy that is supposed to fix GH https://github.com/scrapy/scrapy/issues/2444
Adds a `JsonResponse` class and a `json()` function similar to the one for the "requests" library [here](https://github.com/psf/requests/blob/master/requests/models.py#L874-L898)
Test manually by
```
# from the Scrapy package itself install Scrapy first
python setup.py install
# then run the Scrapy shell
scrapy shell https://api.github.com/events
>> response.json
```
Fixes #2444
</issue>
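To make the requested behaviour concrete, a minimal sketch of a cached `json()` helper is shown below. The class and attribute names are assumptions for illustration only; the change that was actually merged is the golden diff later in this entry.

```python
# Illustrative sketch only -- not the actual Scrapy implementation.
import json

_NOT_SET = object()


class CachedJsonResponse:
    """Toy response object that deserializes its body once and caches it."""

    def __init__(self, text):
        self._text = text
        self._cached_json = _NOT_SET

    def json(self):
        if self._cached_json is _NOT_SET:
            self._cached_json = json.loads(self._text)
        return self._cached_json


# Usage: CachedJsonResponse('{"a": 1}').json() returns {'a': 1}
```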
<code>
[start of scrapy/http/response/text.py]
1 """
2 This module implements the TextResponse class which adds encoding handling and
3 discovering (through HTTP headers) to base Response class.
4
5 See documentation in docs/topics/request-response.rst
6 """
7
8 import warnings
9 from contextlib import suppress
10 from typing import Generator
11 from urllib.parse import urljoin
12
13 import parsel
14 from w3lib.encoding import (html_body_declared_encoding, html_to_unicode,
15 http_content_type_encoding, resolve_encoding)
16 from w3lib.html import strip_html5_whitespace
17
18 from scrapy.exceptions import ScrapyDeprecationWarning
19 from scrapy.http import Request
20 from scrapy.http.response import Response
21 from scrapy.utils.python import memoizemethod_noargs, to_unicode
22 from scrapy.utils.response import get_base_url
23
24
25 class TextResponse(Response):
26
27 _DEFAULT_ENCODING = 'ascii'
28
29 def __init__(self, *args, **kwargs):
30 self._encoding = kwargs.pop('encoding', None)
31 self._cached_benc = None
32 self._cached_ubody = None
33 self._cached_selector = None
34 super(TextResponse, self).__init__(*args, **kwargs)
35
36 def _set_url(self, url):
37 if isinstance(url, str):
38 self._url = to_unicode(url, self.encoding)
39 else:
40 super(TextResponse, self)._set_url(url)
41
42 def _set_body(self, body):
43 self._body = b'' # used by encoding detection
44 if isinstance(body, str):
45 if self._encoding is None:
46 raise TypeError('Cannot convert unicode body - %s has no encoding' %
47 type(self).__name__)
48 self._body = body.encode(self._encoding)
49 else:
50 super(TextResponse, self)._set_body(body)
51
52 def replace(self, *args, **kwargs):
53 kwargs.setdefault('encoding', self.encoding)
54 return Response.replace(self, *args, **kwargs)
55
56 @property
57 def encoding(self):
58 return self._declared_encoding() or self._body_inferred_encoding()
59
60 def _declared_encoding(self):
61 return self._encoding or self._headers_encoding() \
62 or self._body_declared_encoding()
63
64 def body_as_unicode(self):
65 """Return body as unicode"""
66 warnings.warn('Response.body_as_unicode() is deprecated, '
67 'please use Response.text instead.',
68 ScrapyDeprecationWarning)
69 return self.text
70
71 @property
72 def text(self):
73 """ Body as unicode """
74 # access self.encoding before _cached_ubody to make sure
75 # _body_inferred_encoding is called
76 benc = self.encoding
77 if self._cached_ubody is None:
78 charset = 'charset=%s' % benc
79 self._cached_ubody = html_to_unicode(charset, self.body)[1]
80 return self._cached_ubody
81
82 def urljoin(self, url):
83 """Join this Response's url with a possible relative url to form an
84 absolute interpretation of the latter."""
85 return urljoin(get_base_url(self), url)
86
87 @memoizemethod_noargs
88 def _headers_encoding(self):
89 content_type = self.headers.get(b'Content-Type', b'')
90 return http_content_type_encoding(to_unicode(content_type))
91
92 def _body_inferred_encoding(self):
93 if self._cached_benc is None:
94 content_type = to_unicode(self.headers.get(b'Content-Type', b''))
95 benc, ubody = html_to_unicode(content_type, self.body,
96 auto_detect_fun=self._auto_detect_fun,
97 default_encoding=self._DEFAULT_ENCODING)
98 self._cached_benc = benc
99 self._cached_ubody = ubody
100 return self._cached_benc
101
102 def _auto_detect_fun(self, text):
103 for enc in (self._DEFAULT_ENCODING, 'utf-8', 'cp1252'):
104 try:
105 text.decode(enc)
106 except UnicodeError:
107 continue
108 return resolve_encoding(enc)
109
110 @memoizemethod_noargs
111 def _body_declared_encoding(self):
112 return html_body_declared_encoding(self.body)
113
114 @property
115 def selector(self):
116 from scrapy.selector import Selector
117 if self._cached_selector is None:
118 self._cached_selector = Selector(self)
119 return self._cached_selector
120
121 def xpath(self, query, **kwargs):
122 return self.selector.xpath(query, **kwargs)
123
124 def css(self, query):
125 return self.selector.css(query)
126
127 def follow(self, url, callback=None, method='GET', headers=None, body=None,
128 cookies=None, meta=None, encoding=None, priority=0,
129 dont_filter=False, errback=None, cb_kwargs=None, flags=None):
130 # type: (...) -> Request
131 """
132 Return a :class:`~.Request` instance to follow a link ``url``.
133 It accepts the same arguments as ``Request.__init__`` method,
134 but ``url`` can be not only an absolute URL, but also
135
136 * a relative URL
137 * a :class:`~scrapy.link.Link` object, e.g. the result of
138 :ref:`topics-link-extractors`
139 * a :class:`~scrapy.selector.Selector` object for a ``<link>`` or ``<a>`` element, e.g.
140 ``response.css('a.my_link')[0]``
141 * an attribute :class:`~scrapy.selector.Selector` (not SelectorList), e.g.
142 ``response.css('a::attr(href)')[0]`` or
143 ``response.xpath('//img/@src')[0]``
144
145 See :ref:`response-follow-example` for usage examples.
146 """
147 if isinstance(url, parsel.Selector):
148 url = _url_from_selector(url)
149 elif isinstance(url, parsel.SelectorList):
150 raise ValueError("SelectorList is not supported")
151 encoding = self.encoding if encoding is None else encoding
152 return super(TextResponse, self).follow(
153 url=url,
154 callback=callback,
155 method=method,
156 headers=headers,
157 body=body,
158 cookies=cookies,
159 meta=meta,
160 encoding=encoding,
161 priority=priority,
162 dont_filter=dont_filter,
163 errback=errback,
164 cb_kwargs=cb_kwargs,
165 flags=flags,
166 )
167
168 def follow_all(self, urls=None, callback=None, method='GET', headers=None, body=None,
169 cookies=None, meta=None, encoding=None, priority=0,
170 dont_filter=False, errback=None, cb_kwargs=None, flags=None,
171 css=None, xpath=None):
172 # type: (...) -> Generator[Request, None, None]
173 """
174 A generator that produces :class:`~.Request` instances to follow all
175 links in ``urls``. It accepts the same arguments as the :class:`~.Request`'s
176 ``__init__`` method, except that each ``urls`` element does not need to be
177 an absolute URL, it can be any of the following:
178
179 * a relative URL
180 * a :class:`~scrapy.link.Link` object, e.g. the result of
181 :ref:`topics-link-extractors`
182 * a :class:`~scrapy.selector.Selector` object for a ``<link>`` or ``<a>`` element, e.g.
183 ``response.css('a.my_link')[0]``
184 * an attribute :class:`~scrapy.selector.Selector` (not SelectorList), e.g.
185 ``response.css('a::attr(href)')[0]`` or
186 ``response.xpath('//img/@src')[0]``
187
188 In addition, ``css`` and ``xpath`` arguments are accepted to perform the link extraction
189 within the ``follow_all`` method (only one of ``urls``, ``css`` and ``xpath`` is accepted).
190
191 Note that when passing a ``SelectorList`` as argument for the ``urls`` parameter or
192 using the ``css`` or ``xpath`` parameters, this method will not produce requests for
193 selectors from which links cannot be obtained (for instance, anchor tags without an
194 ``href`` attribute)
195 """
196 arguments = [x for x in (urls, css, xpath) if x is not None]
197 if len(arguments) != 1:
198 raise ValueError(
199 "Please supply exactly one of the following arguments: urls, css, xpath"
200 )
201 if not urls:
202 if css:
203 urls = self.css(css)
204 if xpath:
205 urls = self.xpath(xpath)
206 if isinstance(urls, parsel.SelectorList):
207 selectors = urls
208 urls = []
209 for sel in selectors:
210 with suppress(_InvalidSelector):
211 urls.append(_url_from_selector(sel))
212 return super(TextResponse, self).follow_all(
213 urls=urls,
214 callback=callback,
215 method=method,
216 headers=headers,
217 body=body,
218 cookies=cookies,
219 meta=meta,
220 encoding=encoding,
221 priority=priority,
222 dont_filter=dont_filter,
223 errback=errback,
224 cb_kwargs=cb_kwargs,
225 flags=flags,
226 )
227
228
229 class _InvalidSelector(ValueError):
230 """
231 Raised when a URL cannot be obtained from a Selector
232 """
233
234
235 def _url_from_selector(sel):
236 # type: (parsel.Selector) -> str
237 if isinstance(sel.root, str):
238 # e.g. ::attr(href) result
239 return strip_html5_whitespace(sel.root)
240 if not hasattr(sel.root, 'tag'):
241 raise _InvalidSelector("Unsupported selector: %s" % sel)
242 if sel.root.tag not in ('a', 'link'):
243 raise _InvalidSelector("Only <a> and <link> elements are supported; got <%s>" %
244 sel.root.tag)
245 href = sel.root.get('href')
246 if href is None:
247 raise _InvalidSelector("<%s> element has no href attribute: %s" %
248 (sel.root.tag, sel))
249 return strip_html5_whitespace(href)
250
[end of scrapy/http/response/text.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/http/response/text.py b/scrapy/http/response/text.py
--- a/scrapy/http/response/text.py
+++ b/scrapy/http/response/text.py
@@ -5,6 +5,7 @@
See documentation in docs/topics/request-response.rst
"""
+import json
import warnings
from contextlib import suppress
from typing import Generator
@@ -21,10 +22,13 @@
from scrapy.utils.python import memoizemethod_noargs, to_unicode
from scrapy.utils.response import get_base_url
+_NONE = object()
+
class TextResponse(Response):
_DEFAULT_ENCODING = 'ascii'
+ _cached_decoded_json = _NONE
def __init__(self, *args, **kwargs):
self._encoding = kwargs.pop('encoding', None)
@@ -68,6 +72,14 @@
ScrapyDeprecationWarning)
return self.text
+ def json(self):
+ """
+ Deserialize a JSON document to a Python object.
+ """
+ if self._cached_decoded_json is _NONE:
+ self._cached_decoded_json = json.loads(self.text)
+ return self._cached_decoded_json
+
@property
def text(self):
""" Body as unicode """
|
{"golden_diff": "diff --git a/scrapy/http/response/text.py b/scrapy/http/response/text.py\n--- a/scrapy/http/response/text.py\n+++ b/scrapy/http/response/text.py\n@@ -5,6 +5,7 @@\n See documentation in docs/topics/request-response.rst\n \"\"\"\n \n+import json\n import warnings\n from contextlib import suppress\n from typing import Generator\n@@ -21,10 +22,13 @@\n from scrapy.utils.python import memoizemethod_noargs, to_unicode\n from scrapy.utils.response import get_base_url\n \n+_NONE = object()\n+\n \n class TextResponse(Response):\n \n _DEFAULT_ENCODING = 'ascii'\n+ _cached_decoded_json = _NONE\n \n def __init__(self, *args, **kwargs):\n self._encoding = kwargs.pop('encoding', None)\n@@ -68,6 +72,14 @@\n ScrapyDeprecationWarning)\n return self.text\n \n+ def json(self):\n+ \"\"\"\n+ Deserialize a JSON document to a Python object.\n+ \"\"\"\n+ if self._cached_decoded_json is _NONE:\n+ self._cached_decoded_json = json.loads(self.text)\n+ return self._cached_decoded_json\n+\n @property\n def text(self):\n \"\"\" Body as unicode \"\"\"\n", "issue": "response.json()?\npython-requests have response.json() that decodes json body and returns appropriate Python objects. Does it make sense to have something like this in Scrapy? \nAdd json response\nMy first PR for Scrapy that is supposed to fix GH https://github.com/scrapy/scrapy/issues/2444\r\n\r\nAdds a `JsonResponse` class and a `json()` function similar to the one for the \"requests\" library [here](https://github.com/psf/requests/blob/master/requests/models.py#L874-L898)\r\n\r\nTest manually by \r\n```\r\n# from the Scrapy package itself install Scrapy first \r\npython setup.py install\r\n\r\n# then run the Scrapy shell\r\nscrapy shell https://api.github.com/events\r\n>> response.json\r\n```\r\n\r\nFixes #2444\r\n\n", "before_files": [{"content": "\"\"\"\nThis module implements the TextResponse class which adds encoding handling and\ndiscovering (through HTTP headers) to base Response class.\n\nSee documentation in docs/topics/request-response.rst\n\"\"\"\n\nimport warnings\nfrom contextlib import suppress\nfrom typing import Generator\nfrom urllib.parse import urljoin\n\nimport parsel\nfrom w3lib.encoding import (html_body_declared_encoding, html_to_unicode,\n http_content_type_encoding, resolve_encoding)\nfrom w3lib.html import strip_html5_whitespace\n\nfrom scrapy.exceptions import ScrapyDeprecationWarning\nfrom scrapy.http import Request\nfrom scrapy.http.response import Response\nfrom scrapy.utils.python import memoizemethod_noargs, to_unicode\nfrom scrapy.utils.response import get_base_url\n\n\nclass TextResponse(Response):\n\n _DEFAULT_ENCODING = 'ascii'\n\n def __init__(self, *args, **kwargs):\n self._encoding = kwargs.pop('encoding', None)\n self._cached_benc = None\n self._cached_ubody = None\n self._cached_selector = None\n super(TextResponse, self).__init__(*args, **kwargs)\n\n def _set_url(self, url):\n if isinstance(url, str):\n self._url = to_unicode(url, self.encoding)\n else:\n super(TextResponse, self)._set_url(url)\n\n def _set_body(self, body):\n self._body = b'' # used by encoding detection\n if isinstance(body, str):\n if self._encoding is None:\n raise TypeError('Cannot convert unicode body - %s has no encoding' %\n type(self).__name__)\n self._body = body.encode(self._encoding)\n else:\n super(TextResponse, self)._set_body(body)\n\n def replace(self, *args, **kwargs):\n kwargs.setdefault('encoding', self.encoding)\n return Response.replace(self, *args, **kwargs)\n\n @property\n def encoding(self):\n return 
self._declared_encoding() or self._body_inferred_encoding()\n\n def _declared_encoding(self):\n return self._encoding or self._headers_encoding() \\\n or self._body_declared_encoding()\n\n def body_as_unicode(self):\n \"\"\"Return body as unicode\"\"\"\n warnings.warn('Response.body_as_unicode() is deprecated, '\n 'please use Response.text instead.',\n ScrapyDeprecationWarning)\n return self.text\n\n @property\n def text(self):\n \"\"\" Body as unicode \"\"\"\n # access self.encoding before _cached_ubody to make sure\n # _body_inferred_encoding is called\n benc = self.encoding\n if self._cached_ubody is None:\n charset = 'charset=%s' % benc\n self._cached_ubody = html_to_unicode(charset, self.body)[1]\n return self._cached_ubody\n\n def urljoin(self, url):\n \"\"\"Join this Response's url with a possible relative url to form an\n absolute interpretation of the latter.\"\"\"\n return urljoin(get_base_url(self), url)\n\n @memoizemethod_noargs\n def _headers_encoding(self):\n content_type = self.headers.get(b'Content-Type', b'')\n return http_content_type_encoding(to_unicode(content_type))\n\n def _body_inferred_encoding(self):\n if self._cached_benc is None:\n content_type = to_unicode(self.headers.get(b'Content-Type', b''))\n benc, ubody = html_to_unicode(content_type, self.body,\n auto_detect_fun=self._auto_detect_fun,\n default_encoding=self._DEFAULT_ENCODING)\n self._cached_benc = benc\n self._cached_ubody = ubody\n return self._cached_benc\n\n def _auto_detect_fun(self, text):\n for enc in (self._DEFAULT_ENCODING, 'utf-8', 'cp1252'):\n try:\n text.decode(enc)\n except UnicodeError:\n continue\n return resolve_encoding(enc)\n\n @memoizemethod_noargs\n def _body_declared_encoding(self):\n return html_body_declared_encoding(self.body)\n\n @property\n def selector(self):\n from scrapy.selector import Selector\n if self._cached_selector is None:\n self._cached_selector = Selector(self)\n return self._cached_selector\n\n def xpath(self, query, **kwargs):\n return self.selector.xpath(query, **kwargs)\n\n def css(self, query):\n return self.selector.css(query)\n\n def follow(self, url, callback=None, method='GET', headers=None, body=None,\n cookies=None, meta=None, encoding=None, priority=0,\n dont_filter=False, errback=None, cb_kwargs=None, flags=None):\n # type: (...) -> Request\n \"\"\"\n Return a :class:`~.Request` instance to follow a link ``url``.\n It accepts the same arguments as ``Request.__init__`` method,\n but ``url`` can be not only an absolute URL, but also\n\n * a relative URL\n * a :class:`~scrapy.link.Link` object, e.g. 
the result of\n :ref:`topics-link-extractors`\n * a :class:`~scrapy.selector.Selector` object for a ``<link>`` or ``<a>`` element, e.g.\n ``response.css('a.my_link')[0]``\n * an attribute :class:`~scrapy.selector.Selector` (not SelectorList), e.g.\n ``response.css('a::attr(href)')[0]`` or\n ``response.xpath('//img/@src')[0]``\n\n See :ref:`response-follow-example` for usage examples.\n \"\"\"\n if isinstance(url, parsel.Selector):\n url = _url_from_selector(url)\n elif isinstance(url, parsel.SelectorList):\n raise ValueError(\"SelectorList is not supported\")\n encoding = self.encoding if encoding is None else encoding\n return super(TextResponse, self).follow(\n url=url,\n callback=callback,\n method=method,\n headers=headers,\n body=body,\n cookies=cookies,\n meta=meta,\n encoding=encoding,\n priority=priority,\n dont_filter=dont_filter,\n errback=errback,\n cb_kwargs=cb_kwargs,\n flags=flags,\n )\n\n def follow_all(self, urls=None, callback=None, method='GET', headers=None, body=None,\n cookies=None, meta=None, encoding=None, priority=0,\n dont_filter=False, errback=None, cb_kwargs=None, flags=None,\n css=None, xpath=None):\n # type: (...) -> Generator[Request, None, None]\n \"\"\"\n A generator that produces :class:`~.Request` instances to follow all\n links in ``urls``. It accepts the same arguments as the :class:`~.Request`'s\n ``__init__`` method, except that each ``urls`` element does not need to be\n an absolute URL, it can be any of the following:\n\n * a relative URL\n * a :class:`~scrapy.link.Link` object, e.g. the result of\n :ref:`topics-link-extractors`\n * a :class:`~scrapy.selector.Selector` object for a ``<link>`` or ``<a>`` element, e.g.\n ``response.css('a.my_link')[0]``\n * an attribute :class:`~scrapy.selector.Selector` (not SelectorList), e.g.\n ``response.css('a::attr(href)')[0]`` or\n ``response.xpath('//img/@src')[0]``\n\n In addition, ``css`` and ``xpath`` arguments are accepted to perform the link extraction\n within the ``follow_all`` method (only one of ``urls``, ``css`` and ``xpath`` is accepted).\n\n Note that when passing a ``SelectorList`` as argument for the ``urls`` parameter or\n using the ``css`` or ``xpath`` parameters, this method will not produce requests for\n selectors from which links cannot be obtained (for instance, anchor tags without an\n ``href`` attribute)\n \"\"\"\n arguments = [x for x in (urls, css, xpath) if x is not None]\n if len(arguments) != 1:\n raise ValueError(\n \"Please supply exactly one of the following arguments: urls, css, xpath\"\n )\n if not urls:\n if css:\n urls = self.css(css)\n if xpath:\n urls = self.xpath(xpath)\n if isinstance(urls, parsel.SelectorList):\n selectors = urls\n urls = []\n for sel in selectors:\n with suppress(_InvalidSelector):\n urls.append(_url_from_selector(sel))\n return super(TextResponse, self).follow_all(\n urls=urls,\n callback=callback,\n method=method,\n headers=headers,\n body=body,\n cookies=cookies,\n meta=meta,\n encoding=encoding,\n priority=priority,\n dont_filter=dont_filter,\n errback=errback,\n cb_kwargs=cb_kwargs,\n flags=flags,\n )\n\n\nclass _InvalidSelector(ValueError):\n \"\"\"\n Raised when a URL cannot be obtained from a Selector\n \"\"\"\n\n\ndef _url_from_selector(sel):\n # type: (parsel.Selector) -> str\n if isinstance(sel.root, str):\n # e.g. 
::attr(href) result\n return strip_html5_whitespace(sel.root)\n if not hasattr(sel.root, 'tag'):\n raise _InvalidSelector(\"Unsupported selector: %s\" % sel)\n if sel.root.tag not in ('a', 'link'):\n raise _InvalidSelector(\"Only <a> and <link> elements are supported; got <%s>\" %\n sel.root.tag)\n href = sel.root.get('href')\n if href is None:\n raise _InvalidSelector(\"<%s> element has no href attribute: %s\" %\n (sel.root.tag, sel))\n return strip_html5_whitespace(href)\n", "path": "scrapy/http/response/text.py"}]}
| 3,465 | 273 |
gh_patches_debug_25640
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-410
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in recursive attach of MetricsLambda
As a result of #385, the recursive attach of MetricsLambda is not working as intended.
The code below throws this error: `NotComputableError: Accuracy must have at least one example before it can be computed.`
```python
import torch
from ignite.metrics import Accuracy, Precision, Recall, MetricsLambda
from ignite.engine import Engine
accuracy_1 = Accuracy()
accuracy_2 = Accuracy()
mean_accuracy = (accuracy_1 + accuracy_2) / 2
metrics = {
# "a1": accuracy_1,
# "a2": accuracy_2,
"mean accuracy": mean_accuracy,
}
y_pred = torch.randint(0, 2, size=(15, 10, 4)).float()
y = torch.randint(0, 2, size=(15, 10, 4)).long()
def update_fn(engine, batch):
y_pred, y = batch
return y_pred, y
validator = Engine(update_fn)
for name, metric in metrics.items():
metric.attach(validator, name)
def data(y_pred, y):
for i in range(y_pred.shape[0]):
yield (y_pred[i], y[i])
d = data(y_pred, y)
state = validator.run(d, max_epochs=1)
print(state.metrics)
```
cc @vfdev-5 @amitibo
</issue>
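The reproduction above composes metrics, so `(accuracy_1 + accuracy_2) / 2` builds nested `MetricsLambda` objects and only the outermost one wires its dependencies to the engine. Below is a schematic, library-independent sketch of the recursive dependency walk such a fix needs; the function and attribute names are assumptions for illustration, and the real change is in the golden diff later in this entry.

```python
# Illustrative sketch: collect the concrete leaf metrics of a composed metric
# so each of them can be attached to the engine exactly once.
def iter_leaf_metrics(metric):
    children = getattr(metric, "args", ())
    if not children:                      # a plain metric with no dependencies
        yield metric
        return
    for child in children:
        if hasattr(child, "args"):        # nested composed metric -> recurse
            yield from iter_leaf_metrics(child)
        elif hasattr(child, "update"):    # concrete Metric dependency
            yield child
        # plain constants (e.g. the divisor 2) are skipped
```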
<code>
[start of ignite/metrics/metrics_lambda.py]
1 from ignite.metrics.metric import Metric
2 from ignite.engine import Events
3
4
5 class MetricsLambda(Metric):
6 """
7 Apply a function to other metrics to obtain a new metric.
8 The result of the new metric is defined to be the result
9 of applying the function to the result of argument metrics.
10
11 When update, this metric does not recursively update the metrics
12 it depends on. When reset, all its dependency metrics would be
13 resetted. When attach, all its dependencies would be automatically
14 attached.
15
16 Args:
17 f (callable): the function that defines the computation
18 args (sequence): Sequence of other metrics or something
19 else that will be fed to ``f`` as arguments.
20
21 Example:
22
23 .. code-block:: python
24
25 precision = Precision(average=False)
26 recall = Recall(average=False)
27
28 def Fbeta(r, p, beta):
29 return torch.mean((1 + beta ** 2) * p * r / (beta ** 2 * p + r + 1e-20)).item()
30
31 F1 = MetricsLambda(Fbeta, recall, precision, 1)
32 F2 = MetricsLambda(Fbeta, recall, precision, 2)
33 F3 = MetricsLambda(Fbeta, recall, precision, 3)
34 F4 = MetricsLambda(Fbeta, recall, precision, 4)
35 """
36 def __init__(self, f, *args):
37 self.function = f
38 self.args = args
39 super(MetricsLambda, self).__init__()
40
41 def reset(self):
42 for i in self.args:
43 if isinstance(i, Metric):
44 i.reset()
45
46 def update(self, output):
47 # NB: this method does not recursively update dependency metrics,
48 # which might cause duplicate update issue. To update this metric,
49 # users should manually update its dependencies.
50 pass
51
52 def compute(self):
53 materialized = [i.compute() if isinstance(i, Metric) else i for i in self.args]
54 return self.function(*materialized)
55
56 def attach(self, engine, name):
57 # recursively attach all its dependencies
58 for index, metric in enumerate(self.args):
59 if isinstance(metric, Metric):
60 if not engine.has_event_handler(metric.started, Events.EPOCH_STARTED):
61 engine.add_event_handler(Events.EPOCH_STARTED, metric.started)
62 if not engine.has_event_handler(metric.iteration_completed, Events.ITERATION_COMPLETED):
63 engine.add_event_handler(Events.ITERATION_COMPLETED, metric.iteration_completed)
64 super(MetricsLambda, self).attach(engine, name)
65
[end of ignite/metrics/metrics_lambda.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ignite/metrics/metrics_lambda.py b/ignite/metrics/metrics_lambda.py
--- a/ignite/metrics/metrics_lambda.py
+++ b/ignite/metrics/metrics_lambda.py
@@ -53,12 +53,18 @@
materialized = [i.compute() if isinstance(i, Metric) else i for i in self.args]
return self.function(*materialized)
- def attach(self, engine, name):
- # recursively attach all its dependencies
+ def _internal_attach(self, engine):
for index, metric in enumerate(self.args):
- if isinstance(metric, Metric):
+ if isinstance(metric, MetricsLambda):
+ metric._internal_attach(engine)
+ elif isinstance(metric, Metric):
if not engine.has_event_handler(metric.started, Events.EPOCH_STARTED):
engine.add_event_handler(Events.EPOCH_STARTED, metric.started)
if not engine.has_event_handler(metric.iteration_completed, Events.ITERATION_COMPLETED):
engine.add_event_handler(Events.ITERATION_COMPLETED, metric.iteration_completed)
- super(MetricsLambda, self).attach(engine, name)
+
+ def attach(self, engine, name):
+ # recursively attach all its dependencies
+ self._internal_attach(engine)
+ # attach only handler on EPOCH_COMPLETED
+ engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)
|
{"golden_diff": "diff --git a/ignite/metrics/metrics_lambda.py b/ignite/metrics/metrics_lambda.py\n--- a/ignite/metrics/metrics_lambda.py\n+++ b/ignite/metrics/metrics_lambda.py\n@@ -53,12 +53,18 @@\n materialized = [i.compute() if isinstance(i, Metric) else i for i in self.args]\n return self.function(*materialized)\n \n- def attach(self, engine, name):\n- # recursively attach all its dependencies\n+ def _internal_attach(self, engine):\n for index, metric in enumerate(self.args):\n- if isinstance(metric, Metric):\n+ if isinstance(metric, MetricsLambda):\n+ metric._internal_attach(engine)\n+ elif isinstance(metric, Metric):\n if not engine.has_event_handler(metric.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, metric.started)\n if not engine.has_event_handler(metric.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, metric.iteration_completed)\n- super(MetricsLambda, self).attach(engine, name)\n+\n+ def attach(self, engine, name):\n+ # recursively attach all its dependencies\n+ self._internal_attach(engine)\n+ # attach only handler on EPOCH_COMPLETED\n+ engine.add_event_handler(Events.EPOCH_COMPLETED, self.completed, name)\n", "issue": "Bug in recursive attach of MetricsLambda\nAs a result of #385, the recursive attach of MetricsLambda is not working as intended. \r\n\r\nThe code below throws this error: `NotComputableError: Accuracy must have at least one example before it can be computed.`\r\n\r\n```python\r\nimport torch\r\n\r\nfrom ignite.metrics import Accuracy, Precision, Recall, MetricsLambda\r\nfrom ignite.engine import Engine\r\n\r\naccuracy_1 = Accuracy()\r\naccuracy_2 = Accuracy()\r\nmean_accuracy = (accuracy_1 + accuracy_2) / 2\r\n\r\n\r\nmetrics = {\r\n# \"a1\": accuracy_1,\r\n# \"a2\": accuracy_2, \r\n \"mean accuracy\": mean_accuracy,\r\n}\r\n\r\n\r\ny_pred = torch.randint(0, 2, size=(15, 10, 4)).float()\r\ny = torch.randint(0, 2, size=(15, 10, 4)).long()\r\n\r\ndef update_fn(engine, batch):\r\n y_pred, y = batch\r\n return y_pred, y\r\n\r\nvalidator = Engine(update_fn)\r\n\r\nfor name, metric in metrics.items():\r\n metric.attach(validator, name)\r\n\r\ndef data(y_pred, y):\r\n for i in range(y_pred.shape[0]):\r\n yield (y_pred[i], y[i])\r\n\r\nd = data(y_pred, y)\r\nstate = validator.run(d, max_epochs=1)\r\n\r\nprint(state.metrics)\r\n```\r\n\r\ncc @vfdev-5 @amitibo \n", "before_files": [{"content": "from ignite.metrics.metric import Metric\nfrom ignite.engine import Events\n\n\nclass MetricsLambda(Metric):\n \"\"\"\n Apply a function to other metrics to obtain a new metric.\n The result of the new metric is defined to be the result\n of applying the function to the result of argument metrics.\n\n When update, this metric does not recursively update the metrics\n it depends on. When reset, all its dependency metrics would be\n resetted. When attach, all its dependencies would be automatically\n attached.\n\n Args:\n f (callable): the function that defines the computation\n args (sequence): Sequence of other metrics or something\n else that will be fed to ``f`` as arguments.\n\n Example:\n\n .. 
code-block:: python\n\n precision = Precision(average=False)\n recall = Recall(average=False)\n\n def Fbeta(r, p, beta):\n return torch.mean((1 + beta ** 2) * p * r / (beta ** 2 * p + r + 1e-20)).item()\n\n F1 = MetricsLambda(Fbeta, recall, precision, 1)\n F2 = MetricsLambda(Fbeta, recall, precision, 2)\n F3 = MetricsLambda(Fbeta, recall, precision, 3)\n F4 = MetricsLambda(Fbeta, recall, precision, 4)\n \"\"\"\n def __init__(self, f, *args):\n self.function = f\n self.args = args\n super(MetricsLambda, self).__init__()\n\n def reset(self):\n for i in self.args:\n if isinstance(i, Metric):\n i.reset()\n\n def update(self, output):\n # NB: this method does not recursively update dependency metrics,\n # which might cause duplicate update issue. To update this metric,\n # users should manually update its dependencies.\n pass\n\n def compute(self):\n materialized = [i.compute() if isinstance(i, Metric) else i for i in self.args]\n return self.function(*materialized)\n\n def attach(self, engine, name):\n # recursively attach all its dependencies\n for index, metric in enumerate(self.args):\n if isinstance(metric, Metric):\n if not engine.has_event_handler(metric.started, Events.EPOCH_STARTED):\n engine.add_event_handler(Events.EPOCH_STARTED, metric.started)\n if not engine.has_event_handler(metric.iteration_completed, Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.ITERATION_COMPLETED, metric.iteration_completed)\n super(MetricsLambda, self).attach(engine, name)\n", "path": "ignite/metrics/metrics_lambda.py"}]}
| 1,505 | 295 |
gh_patches_debug_30268
|
rasdani/github-patches
|
git_diff
|
piskvorky__gensim-2147
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gensim.scripts.word2vec2tensor TypeError: write() argument must be str, not bytes
Python environment
```
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19)
[GCC 7.2.0] on linux
```
**How I make article_body_w2v_300.txt**
```
import gensim
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
sentences = LineSentence("./data/article_body_corpus.txt")
model = Word2Vec(sentences, size=300, window=5, min_count=5, workers=4)
model.wv.save_word2vec_format("article_body_w2v_300.txt", binary=False)
```
**Command I use to run gensim.scripts.word2vec2tensor**
```
python -m gensim.scripts.word2vec2tensor -i article_body_w2v_300.txt -o meow/
```
Console output
```
word_embedding python -m gensim.scripts.word2vec2tensor -i article_body_w2v_300.txt -o meow/
2018-03-07 16:30:29,484 - word2vec2tensor - INFO - running /home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/site-packages/gensim/scripts/word2vec2tensor.py -i article_body_w2v_300.txt -o meow/
2018-03-07 16:30:29,484 - utils_any2vec - INFO - loading projection weights from article_body_w2v_300.txt
2018-03-07 16:30:41,992 - utils_any2vec - INFO - loaded (56543, 300) matrix from article_body_w2v_300.txt
Traceback (most recent call last):
File "/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/site-packages/gensim/scripts/word2vec2tensor.py", line 93, in <module>
word2vec2tensor(args.input, args.output, args.binary)
File "/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/site-packages/gensim/scripts/word2vec2tensor.py", line 73, in word2vec2tensor
file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\n'))
TypeError: write() argument must be str, not bytes
```
</issue>
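The traceback reduces to a text-mode file handle being given `bytes`: `gensim.utils.to_utf8()` returns UTF-8 encoded bytes, while the metadata file was opened with `'w+'`. A standalone illustration of the mismatch and the two usual remedies follows (hypothetical file paths; the fix that was actually applied is the golden diff later in this entry).

```python
# Minimal reproduction of the error class, independent of gensim.
word = "meow"

with open("/tmp/demo_metadata.tsv", "w") as f:    # text mode expects str
    # f.write(word.encode("utf-8"))               # TypeError: write() argument must be str, not bytes
    f.write(word + "\n")                          # keep everything as str

with open("/tmp/demo_metadata.bin", "wb") as f:   # binary mode expects bytes
    f.write(word.encode("utf-8") + b"\n")         # keep everything as bytes
```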
<code>
[start of gensim/scripts/word2vec2tensor.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) 2016 Loreto Parisi <[email protected]>
5 # Copyright (C) 2016 Silvio Olivastri <[email protected]>
6 # Copyright (C) 2016 Radim Rehurek <[email protected]>
7
8
9 """This script allows converting word-vectors from word2vec format into Tensorflow 2D tensor and metadata format.
10 This script used for for word-vector visualization on `Embedding Visualization <http://projector.tensorflow.org/>`_.
11
12
13 How to use
14 ----------
15 #. Convert your word-vector with this script (for example, we'll use model from
16 `gensim-data <https://rare-technologies.com/new-download-api-for-pretrained-nlp-models-and-datasets-in-gensim/>`_) ::
17
18 python -m gensim.downloader -d glove-wiki-gigaword-50 # download model in word2vec format
19 python -m gensim.scripts.word2vec2tensor -i ~/gensim-data/glove-wiki-gigaword-50/glove-wiki-gigaword-50.gz \
20 -o /tmp/my_model_prefix
21
22 #. Open http://projector.tensorflow.org/
23 #. Click "Load Data" button from the left menu.
24 #. Select "Choose file" in "Load a TSV file of vectors." and choose "/tmp/my_model_prefix_tensor.tsv" file.
25 #. Select "Choose file" in "Load a TSV file of metadata." and choose "/tmp/my_model_prefix_metadata.tsv" file.
26 #. ???
27 #. PROFIT!
28
29 For more information about TensorBoard TSV format please visit:
30 https://www.tensorflow.org/versions/master/how_tos/embedding_viz/
31
32
33 Command line arguments
34 ----------------------
35
36 .. program-output:: python -m gensim.scripts.word2vec2tensor --help
37 :ellipsis: 0, -7
38
39 """
40
41 import os
42 import sys
43 import logging
44 import argparse
45
46 import gensim
47
48 logger = logging.getLogger(__name__)
49
50
51 def word2vec2tensor(word2vec_model_path, tensor_filename, binary=False):
52 """Convert file in Word2Vec format and writes two files 2D tensor TSV file.
53
54 File "tensor_filename"_tensor.tsv contains word-vectors, "tensor_filename"_metadata.tsv contains words.
55
56 Parameters
57 ----------
58 word2vec_model_path : str
59 Path to file in Word2Vec format.
60 tensor_filename : str
61 Prefix for output files.
62 binary : bool, optional
63 True if input file in binary format.
64
65 """
66 model = gensim.models.KeyedVectors.load_word2vec_format(word2vec_model_path, binary=binary)
67 outfiletsv = tensor_filename + '_tensor.tsv'
68 outfiletsvmeta = tensor_filename + '_metadata.tsv'
69
70 with open(outfiletsv, 'w+') as file_vector:
71 with open(outfiletsvmeta, 'w+') as file_metadata:
72 for word in model.index2word:
73 file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\n'))
74 vector_row = '\t'.join(str(x) for x in model[word])
75 file_vector.write(vector_row + '\n')
76
77 logger.info("2D tensor file saved to %s", outfiletsv)
78 logger.info("Tensor metadata file saved to %s", outfiletsvmeta)
79
80
81 if __name__ == "__main__":
82 logging.basicConfig(format='%(asctime)s - %(module)s - %(levelname)s - %(message)s', level=logging.INFO)
83 parser = argparse.ArgumentParser(formatter_class=argparse.RawDescriptionHelpFormatter, description=__doc__[:-138])
84 parser.add_argument("-i", "--input", required=True, help="Path to input file in word2vec format")
85 parser.add_argument("-o", "--output", required=True, help="Prefix path for output files")
86 parser.add_argument(
87 "-b", "--binary", action='store_const', const=True, default=False,
88 help="Set this flag if word2vec model in binary format (default: %(default)s)"
89 )
90 args = parser.parse_args()
91
92 logger.info("running %s", ' '.join(sys.argv))
93 word2vec2tensor(args.input, args.output, args.binary)
94 logger.info("finished running %s", os.path.basename(sys.argv[0]))
95
[end of gensim/scripts/word2vec2tensor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gensim/scripts/word2vec2tensor.py b/gensim/scripts/word2vec2tensor.py
--- a/gensim/scripts/word2vec2tensor.py
+++ b/gensim/scripts/word2vec2tensor.py
@@ -1,6 +1,7 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
+# Copyright (C) 2018 Vimig Socrates <[email protected]>
# Copyright (C) 2016 Loreto Parisi <[email protected]>
# Copyright (C) 2016 Silvio Olivastri <[email protected]>
# Copyright (C) 2016 Radim Rehurek <[email protected]>
@@ -43,6 +44,7 @@
import logging
import argparse
+from smart_open import smart_open
import gensim
logger = logging.getLogger(__name__)
@@ -67,12 +69,11 @@
outfiletsv = tensor_filename + '_tensor.tsv'
outfiletsvmeta = tensor_filename + '_metadata.tsv'
- with open(outfiletsv, 'w+') as file_vector:
- with open(outfiletsvmeta, 'w+') as file_metadata:
- for word in model.index2word:
- file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\n'))
- vector_row = '\t'.join(str(x) for x in model[word])
- file_vector.write(vector_row + '\n')
+ with smart_open(outfiletsv, 'wb') as file_vector, smart_open(outfiletsvmeta, 'wb') as file_metadata:
+ for word in model.index2word:
+ file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\n'))
+ vector_row = '\t'.join(str(x) for x in model[word])
+ file_vector.write(gensim.utils.to_utf8(vector_row) + gensim.utils.to_utf8('\n'))
logger.info("2D tensor file saved to %s", outfiletsv)
logger.info("Tensor metadata file saved to %s", outfiletsvmeta)
|
{"golden_diff": "diff --git a/gensim/scripts/word2vec2tensor.py b/gensim/scripts/word2vec2tensor.py\n--- a/gensim/scripts/word2vec2tensor.py\n+++ b/gensim/scripts/word2vec2tensor.py\n@@ -1,6 +1,7 @@\n #!/usr/bin/env python\n # -*- coding: utf-8 -*-\n #\n+# Copyright (C) 2018 Vimig Socrates <[email protected]>\n # Copyright (C) 2016 Loreto Parisi <[email protected]>\n # Copyright (C) 2016 Silvio Olivastri <[email protected]>\n # Copyright (C) 2016 Radim Rehurek <[email protected]>\n@@ -43,6 +44,7 @@\n import logging\n import argparse\n \n+from smart_open import smart_open\n import gensim\n \n logger = logging.getLogger(__name__)\n@@ -67,12 +69,11 @@\n outfiletsv = tensor_filename + '_tensor.tsv'\n outfiletsvmeta = tensor_filename + '_metadata.tsv'\n \n- with open(outfiletsv, 'w+') as file_vector:\n- with open(outfiletsvmeta, 'w+') as file_metadata:\n- for word in model.index2word:\n- file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\\n'))\n- vector_row = '\\t'.join(str(x) for x in model[word])\n- file_vector.write(vector_row + '\\n')\n+ with smart_open(outfiletsv, 'wb') as file_vector, smart_open(outfiletsvmeta, 'wb') as file_metadata:\n+ for word in model.index2word:\n+ file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\\n'))\n+ vector_row = '\\t'.join(str(x) for x in model[word])\n+ file_vector.write(gensim.utils.to_utf8(vector_row) + gensim.utils.to_utf8('\\n'))\n \n logger.info(\"2D tensor file saved to %s\", outfiletsv)\n logger.info(\"Tensor metadata file saved to %s\", outfiletsvmeta)\n", "issue": "gensim.scripts.word2vec2tensor TypeError: write() argument must be str, not bytes \nPython environment\r\n\r\n```\r\nPython 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19) \r\n[GCC 7.2.0] on linux\r\n```\r\n\r\n\r\n**How I make article_body_w2v_300.txt** \r\n\r\n```\r\nimport gensim\r\nfrom gensim.models import Word2Vec\r\nfrom gensim.models.word2vec import LineSentence\r\n\r\nsentences = LineSentence(\"./data/article_body_corpus.txt\")\r\n\r\nmodel = Word2Vec(sentences, size=300, window=5, min_count=5, workers=4)\r\n\r\nmodel.wv.save_word2vec_format(\"article_body_w2v_300.txt\", binary=False)\r\n```\r\n\r\n\r\n**Command I use to run gensim.scripts.word2vec2tensor** \r\n\r\n```\r\npython -m gensim.scripts.word2vec2tensor -i article_body_w2v_300.txt -o meow/\r\n```\r\n\r\nConsole output\r\n```\r\nword_embedding python -m gensim.scripts.word2vec2tensor -i article_body_w2v_300.txt -o meow/\r\n2018-03-07 16:30:29,484 - word2vec2tensor - INFO - running /home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/site-packages/gensim/scripts/word2vec2tensor.py -i article_body_w2v_300.txt -o meow/\r\n2018-03-07 16:30:29,484 - utils_any2vec - INFO - loading projection weights from article_body_w2v_300.txt\r\n2018-03-07 16:30:41,992 - utils_any2vec - INFO - loaded (56543, 300) matrix from article_body_w2v_300.txt\r\nTraceback (most recent call last):\r\n File \"/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/site-packages/gensim/scripts/word2vec2tensor.py\", line 93, in <module>\r\n word2vec2tensor(args.input, args.output, args.binary)\r\n File \"/home/cpu11453local/anaconda3/envs/gensim/lib/python3.6/site-packages/gensim/scripts/word2vec2tensor.py\", line 73, in 
word2vec2tensor\r\n file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\\n'))\r\nTypeError: write() argument must be str, not bytes\r\n\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) 2016 Loreto Parisi <[email protected]>\n# Copyright (C) 2016 Silvio Olivastri <[email protected]>\n# Copyright (C) 2016 Radim Rehurek <[email protected]>\n\n\n\"\"\"This script allows converting word-vectors from word2vec format into Tensorflow 2D tensor and metadata format.\nThis script used for for word-vector visualization on `Embedding Visualization <http://projector.tensorflow.org/>`_.\n\n\nHow to use\n----------\n#. Convert your word-vector with this script (for example, we'll use model from\n `gensim-data <https://rare-technologies.com/new-download-api-for-pretrained-nlp-models-and-datasets-in-gensim/>`_) ::\n\n python -m gensim.downloader -d glove-wiki-gigaword-50 # download model in word2vec format\n python -m gensim.scripts.word2vec2tensor -i ~/gensim-data/glove-wiki-gigaword-50/glove-wiki-gigaword-50.gz \\\n -o /tmp/my_model_prefix\n\n#. Open http://projector.tensorflow.org/\n#. Click \"Load Data\" button from the left menu.\n#. Select \"Choose file\" in \"Load a TSV file of vectors.\" and choose \"/tmp/my_model_prefix_tensor.tsv\" file.\n#. Select \"Choose file\" in \"Load a TSV file of metadata.\" and choose \"/tmp/my_model_prefix_metadata.tsv\" file.\n#. ???\n#. PROFIT!\n\nFor more information about TensorBoard TSV format please visit:\nhttps://www.tensorflow.org/versions/master/how_tos/embedding_viz/\n\n\nCommand line arguments\n----------------------\n\n.. program-output:: python -m gensim.scripts.word2vec2tensor --help\n :ellipsis: 0, -7\n\n\"\"\"\n\nimport os\nimport sys\nimport logging\nimport argparse\n\nimport gensim\n\nlogger = logging.getLogger(__name__)\n\n\ndef word2vec2tensor(word2vec_model_path, tensor_filename, binary=False):\n \"\"\"Convert file in Word2Vec format and writes two files 2D tensor TSV file.\n\n File \"tensor_filename\"_tensor.tsv contains word-vectors, \"tensor_filename\"_metadata.tsv contains words.\n\n Parameters\n ----------\n word2vec_model_path : str\n Path to file in Word2Vec format.\n tensor_filename : str\n Prefix for output files.\n binary : bool, optional\n True if input file in binary format.\n\n \"\"\"\n model = gensim.models.KeyedVectors.load_word2vec_format(word2vec_model_path, binary=binary)\n outfiletsv = tensor_filename + '_tensor.tsv'\n outfiletsvmeta = tensor_filename + '_metadata.tsv'\n\n with open(outfiletsv, 'w+') as file_vector:\n with open(outfiletsvmeta, 'w+') as file_metadata:\n for word in model.index2word:\n file_metadata.write(gensim.utils.to_utf8(word) + gensim.utils.to_utf8('\\n'))\n vector_row = '\\t'.join(str(x) for x in model[word])\n file_vector.write(vector_row + '\\n')\n\n logger.info(\"2D tensor file saved to %s\", outfiletsv)\n logger.info(\"Tensor metadata file saved to %s\", outfiletsvmeta)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig(format='%(asctime)s - %(module)s - %(levelname)s - %(message)s', level=logging.INFO)\n parser = argparse.ArgumentParser(formatter_class=argparse.RawDescriptionHelpFormatter, description=__doc__[:-138])\n parser.add_argument(\"-i\", \"--input\", required=True, help=\"Path to input file in word2vec format\")\n parser.add_argument(\"-o\", \"--output\", required=True, help=\"Prefix path for output files\")\n parser.add_argument(\n \"-b\", \"--binary\", action='store_const', const=True, default=False,\n 
help=\"Set this flag if word2vec model in binary format (default: %(default)s)\"\n )\n args = parser.parse_args()\n\n logger.info(\"running %s\", ' '.join(sys.argv))\n word2vec2tensor(args.input, args.output, args.binary)\n logger.info(\"finished running %s\", os.path.basename(sys.argv[0]))\n", "path": "gensim/scripts/word2vec2tensor.py"}]}
| 2,412 | 492 |
gh_patches_debug_55590
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-6871
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upstream change in django-treebeard 4.5 requires a new wagtail core migration
Default MP_Node depth has changed to 1.
https://github.com/django-treebeard/django-treebeard/commit/454be8f29ac2b4b4fbe6512357b5afc1eb422bab#diff-35501ef525349cd39e4713d1a9f64a249fa4fbd31d875513e3a15e65988701a2
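For illustration, one way to keep an incompatible release out of an install while a migration is prepared is a version specifier that excludes it; the exact bounds below are illustrative, not prescriptive:

```python
# Sketch: how a ">=4.2.0,<5.0,!=4.5" style constraint behaves (requires the `packaging` library).
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=4.2.0,<5.0,!=4.5")
print("4.4.0" in spec)  # True  - older releases still allowed
print("4.5" in spec)    # False - the release that changed the MP_Node default is skipped
print("4.5.1" in spec)  # True  - only 4.5 itself is excluded by !=4.5
```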
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from wagtail import __version__
4 from wagtail.utils.setup import assets, check_bdist_egg, sdist
5
6
7 try:
8 from setuptools import find_packages, setup
9 except ImportError:
10 from distutils.core import setup
11
12
13 # Hack to prevent "TypeError: 'NoneType' object is not callable" error
14 # in multiprocessing/util.py _exit_function when setup.py exits
15 # (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
16 try:
17 import multiprocessing # noqa
18 except ImportError:
19 pass
20
21
22 install_requires = [
23 "Django>=2.2,<3.2",
24 "django-modelcluster>=5.1,<6.0",
25 "django-taggit>=1.0,<2.0",
26 "django-treebeard>=4.2.0,<4.5",
27 "djangorestframework>=3.11.1,<4.0",
28 "django-filter>=2.2,<3.0",
29 "draftjs_exporter>=2.1.5,<3.0",
30 "Pillow>=4.0.0,<9.0.0",
31 "beautifulsoup4>=4.8,<4.9",
32 "html5lib>=0.999,<2",
33 "Willow>=1.4,<1.5",
34 "requests>=2.11.1,<3.0",
35 "l18n>=2018.5",
36 "xlsxwriter>=1.2.8,<2.0",
37 "tablib[xls,xlsx]>=0.14.0",
38 "anyascii>=0.1.5",
39 ]
40
41 # Testing dependencies
42 testing_extras = [
43 # Required for running the tests
44 'python-dateutil>=2.2',
45 'pytz>=2014.7',
46 'elasticsearch>=5.0,<6.0',
47 'Jinja2>=2.11,<3.0',
48 'boto3>=1.16,<1.17',
49 'freezegun>=0.3.8',
50 'openpyxl>=2.6.4',
51 'Unidecode>=0.04.14,<2.0',
52
53 # For coverage and PEP8 linting
54 'coverage>=3.7.0',
55 'flake8>=3.6.0',
56 'isort==5.6.4', # leave this pinned - it tends to change rules between patch releases
57 'flake8-blind-except==0.1.1',
58 'flake8-print==2.0.2',
59 'doc8==0.8.1',
60
61 # For templates linting
62 'jinjalint>=0.5',
63
64 # Pipenv hack to fix broken dependency causing CircleCI failures
65 'docutils==0.15',
66
67 # django-taggit 1.3.0 made changes to verbose_name which affect migrations;
68 # the test suite migrations correspond to >=1.3.0
69 'django-taggit>=1.3.0,<2.0',
70 ]
71
72 # Documentation dependencies
73 documentation_extras = [
74 'pyenchant>=3.1.1,<4',
75 'sphinxcontrib-spelling>=5.4.0,<6',
76 'Sphinx>=1.5.2',
77 'sphinx-autobuild>=0.6.0',
78 'sphinx_rtd_theme>=0.1.9',
79 'recommonmark>=0.7.1',
80 ]
81
82 setup(
83 name='wagtail',
84 version=__version__,
85 description='A Django content management system.',
86 author='Wagtail core team + contributors',
87 author_email='[email protected]', # For support queries, please see https://docs.wagtail.io/en/stable/support.html
88 url='https://wagtail.io/',
89 packages=find_packages(),
90 include_package_data=True,
91 license='BSD',
92 long_description="Wagtail is an open source content management \
93 system built on Django, with a strong community and commercial support. \
94 It’s focused on user experience, and offers precise control for \
95 designers and developers.\n\n\
96 For more details, see https://wagtail.io, https://docs.wagtail.io and \
97 https://github.com/wagtail/wagtail/.",
98 classifiers=[
99 'Development Status :: 5 - Production/Stable',
100 'Environment :: Web Environment',
101 'Intended Audience :: Developers',
102 'License :: OSI Approved :: BSD License',
103 'Operating System :: OS Independent',
104 'Programming Language :: Python',
105 'Programming Language :: Python :: 3',
106 'Programming Language :: Python :: 3.6',
107 'Programming Language :: Python :: 3.7',
108 'Programming Language :: Python :: 3.8',
109 'Programming Language :: Python :: 3.9',
110 'Framework :: Django',
111 'Framework :: Django :: 2.2',
112 'Framework :: Django :: 3.0',
113 'Framework :: Django :: 3.1',
114 'Framework :: Wagtail',
115 'Topic :: Internet :: WWW/HTTP :: Site Management',
116 ],
117 python_requires='>=3.6',
118 install_requires=install_requires,
119 extras_require={
120 'testing': testing_extras,
121 'docs': documentation_extras
122 },
123 entry_points="""
124 [console_scripts]
125 wagtail=wagtail.bin.wagtail:main
126 """,
127 zip_safe=False,
128 cmdclass={
129 'sdist': sdist,
130 'bdist_egg': check_bdist_egg,
131 'assets': assets,
132 },
133 )
134
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,7 +23,7 @@
"Django>=2.2,<3.2",
"django-modelcluster>=5.1,<6.0",
"django-taggit>=1.0,<2.0",
- "django-treebeard>=4.2.0,<4.5",
+ "django-treebeard>=4.2.0,<5.0,!=4.5",
"djangorestframework>=3.11.1,<4.0",
"django-filter>=2.2,<3.0",
"draftjs_exporter>=2.1.5,<3.0",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,7 +23,7 @@\n \"Django>=2.2,<3.2\",\n \"django-modelcluster>=5.1,<6.0\",\n \"django-taggit>=1.0,<2.0\",\n- \"django-treebeard>=4.2.0,<4.5\",\n+ \"django-treebeard>=4.2.0,<5.0,!=4.5\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=2.2,<3.0\",\n \"draftjs_exporter>=2.1.5,<3.0\",\n", "issue": "Upstream change in django-treebeard 4.5 requires a new wagtail core migration\nDefault MP_Node depth has changed to 1.\r\n\r\nhttps://github.com/django-treebeard/django-treebeard/commit/454be8f29ac2b4b4fbe6512357b5afc1eb422bab#diff-35501ef525349cd39e4713d1a9f64a249fa4fbd31d875513e3a15e65988701a2\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom wagtail import __version__\nfrom wagtail.utils.setup import assets, check_bdist_egg, sdist\n\n\ntry:\n from setuptools import find_packages, setup\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing # noqa\nexcept ImportError:\n pass\n\n\ninstall_requires = [\n \"Django>=2.2,<3.2\",\n \"django-modelcluster>=5.1,<6.0\",\n \"django-taggit>=1.0,<2.0\",\n \"django-treebeard>=4.2.0,<4.5\",\n \"djangorestframework>=3.11.1,<4.0\",\n \"django-filter>=2.2,<3.0\",\n \"draftjs_exporter>=2.1.5,<3.0\",\n \"Pillow>=4.0.0,<9.0.0\",\n \"beautifulsoup4>=4.8,<4.9\",\n \"html5lib>=0.999,<2\",\n \"Willow>=1.4,<1.5\",\n \"requests>=2.11.1,<3.0\",\n \"l18n>=2018.5\",\n \"xlsxwriter>=1.2.8,<2.0\",\n \"tablib[xls,xlsx]>=0.14.0\",\n \"anyascii>=0.1.5\",\n]\n\n# Testing dependencies\ntesting_extras = [\n # Required for running the tests\n 'python-dateutil>=2.2',\n 'pytz>=2014.7',\n 'elasticsearch>=5.0,<6.0',\n 'Jinja2>=2.11,<3.0',\n 'boto3>=1.16,<1.17',\n 'freezegun>=0.3.8',\n 'openpyxl>=2.6.4',\n 'Unidecode>=0.04.14,<2.0',\n\n # For coverage and PEP8 linting\n 'coverage>=3.7.0',\n 'flake8>=3.6.0',\n 'isort==5.6.4', # leave this pinned - it tends to change rules between patch releases\n 'flake8-blind-except==0.1.1',\n 'flake8-print==2.0.2',\n 'doc8==0.8.1',\n\n # For templates linting\n 'jinjalint>=0.5',\n\n # Pipenv hack to fix broken dependency causing CircleCI failures\n 'docutils==0.15',\n\n # django-taggit 1.3.0 made changes to verbose_name which affect migrations;\n # the test suite migrations correspond to >=1.3.0\n 'django-taggit>=1.3.0,<2.0',\n]\n\n# Documentation dependencies\ndocumentation_extras = [\n 'pyenchant>=3.1.1,<4',\n 'sphinxcontrib-spelling>=5.4.0,<6',\n 'Sphinx>=1.5.2',\n 'sphinx-autobuild>=0.6.0',\n 'sphinx_rtd_theme>=0.1.9',\n 'recommonmark>=0.7.1',\n]\n\nsetup(\n name='wagtail',\n version=__version__,\n description='A Django content management system.',\n author='Wagtail core team + contributors',\n author_email='[email protected]', # For support queries, please see https://docs.wagtail.io/en/stable/support.html\n url='https://wagtail.io/',\n packages=find_packages(),\n include_package_data=True,\n license='BSD',\n long_description=\"Wagtail is an open source content management \\\nsystem built on Django, with a strong community and commercial support. 
\\\nIt\u2019s focused on user experience, and offers precise control for \\\ndesigners and developers.\\n\\n\\\nFor more details, see https://wagtail.io, https://docs.wagtail.io and \\\nhttps://github.com/wagtail/wagtail/.\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Framework :: Django',\n 'Framework :: Django :: 2.2',\n 'Framework :: Django :: 3.0',\n 'Framework :: Django :: 3.1',\n 'Framework :: Wagtail',\n 'Topic :: Internet :: WWW/HTTP :: Site Management',\n ],\n python_requires='>=3.6',\n install_requires=install_requires,\n extras_require={\n 'testing': testing_extras,\n 'docs': documentation_extras\n },\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n cmdclass={\n 'sdist': sdist,\n 'bdist_egg': check_bdist_egg,\n 'assets': assets,\n },\n)\n", "path": "setup.py"}]}
| 2,204 | 162 |
gh_patches_debug_3602
|
rasdani/github-patches
|
git_diff
|
NVIDIA__apex-184
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failing optim_wrapper due to missing Scaler argument
When creating an optimizer and wrapping it via ``amp_handle.wrap_optimizer(optim)``, the handle [`calls the OptimWrapper`](https://github.com/NVIDIA/apex/blob/master/apex/amp/handle.py#L148), which wraps the optimizer and tries to instantiate a loss scaler per loss.
The `OptimWrapper` tries to [instantiate the loss-scaler without an argument](https://github.com/NVIDIA/apex/blob/master/apex/amp/opt.py#L16), but the loss-scaler [needs an argument `loss_scale`](https://github.com/NVIDIA/apex/blob/master/apex/amp/scaler.py#L28), which causes the whole wrapping process to fail.
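A minimal sketch of the mismatch (import path and signature as per the files linked above; `'dynamic'` mirrors the value the eventual fix passes):

```python
# Hypothetical reproduction of the failed wrapping.
from apex.amp.scaler import LossScaler

try:
    LossScaler()             # no loss_scale supplied -> TypeError, so wrap_optimizer() fails
except TypeError as exc:
    print(exc)

LossScaler("dynamic")        # an explicit loss_scale (dynamic scaling) constructs fine
```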
</issue>
<code>
[start of apex/amp/opt.py]
1 import contextlib
2 import logging
3 import warnings
4
5 from .scaler import LossScaler, master_params
6
7 import numpy as np
8
9 class OptimWrapper(object):
10 def __init__(self, optimizer, amp_handle, num_loss):
11 self._optimizer = optimizer
12 self._amp_handle = amp_handle
13 self._num_loss = num_loss
14 self._loss_idx = 0
15 self._skip_next = [False] * num_loss
16 self._loss_scaler = [LossScaler() for _ in range(num_loss)]
17
18 @contextlib.contextmanager
19 def scale_loss(self, loss):
20 if not self._amp_handle.is_active():
21 yield loss
22 return
23
24 # When there are multiple losses per-optimizer, we need
25 # to save out current grad accumulation, since we won't be
26 # able to unscale this particulare loss once the grads are
27 # all mixed together.
28 cached_grads = []
29 if self._loss_idx > 0:
30 for p in master_params(self._optimizer):
31 if p.grad is not None:
32 cached_grads.append(p.grad.data.detach().clone())
33 else:
34 cached_grads.append(None)
35 self._optimizer.zero_grad()
36
37 loss_scale = self._cur_loss_scaler().loss_scale()
38 yield loss * loss_scale
39
40 self._cur_loss_scaler().clear_overflow_state()
41 self._cur_loss_scaler().unscale(
42 master_params(self._optimizer),
43 master_params(self._optimizer),
44 loss_scale)
45 self._skip_next[self._loss_idx] = self._cur_loss_scaler().update_scale()
46 self._loss_idx += 1
47
48 if len(cached_grads) > 0:
49 for p, cached_grad in zip(master_params(self._optimizer),
50 cached_grads):
51 if cached_grad is not None:
52 p.grad.data.add_(cached_grad)
53 cached_grads = []
54
55 def _cur_loss_scaler(self):
56 assert 0 <= self._loss_idx < self._num_loss
57 return self._loss_scaler[self._loss_idx]
58
59 def step(self, closure=None):
60 if not self._amp_handle.is_active():
61 return self._optimizer.step(closure=closure)
62
63 self._loss_idx = 0
64
65 for group in self._optimizer.param_groups:
66 for p in group['params']:
67 self._amp_handle.remove_cache(p)
68
69 if closure is not None:
70 raise NotImplementedError(
71 'The `closure` argument is unsupported by the amp ' +
72 'optimizer wrapper.')
73 if any(self._skip_next):
74 logger = logging.getLogger('apex.amp')
75 logger.info('Gradient overflow, skipping update')
76 self._skip_next = [False] * self._num_loss
77 else:
78 return self._optimizer.step(closure=closure)
79
80 # Forward any attribute lookups
81 def __getattr__(self, attr):
82 return getattr(self._optimizer, attr)
83
84 # Forward all torch.optim.Optimizer methods
85 def __getstate__(self):
86 return self._optimizer.__getstate__()
87
88 def __setstate__(self):
89 return self._optimizer.__setstate__()
90
91 def __repr__(self):
92 return self._optimizer.__repr__()
93
94 def state_dict(self):
95 return self._optimizer.state_dict()
96
97 def load_state_dict(self, state_dict):
98 return self._optimizer.load_state_dict(state_dict)
99
100 def zero_grad(self):
101 return self._optimizer.zero_grad()
102
103 def add_param_group(self, param_group):
104 return self._optimizer.add_param_group(param_group)
105
[end of apex/amp/opt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apex/amp/opt.py b/apex/amp/opt.py
--- a/apex/amp/opt.py
+++ b/apex/amp/opt.py
@@ -13,7 +13,7 @@
self._num_loss = num_loss
self._loss_idx = 0
self._skip_next = [False] * num_loss
- self._loss_scaler = [LossScaler() for _ in range(num_loss)]
+ self._loss_scaler = [LossScaler('dynamic') for _ in range(num_loss)]
@contextlib.contextmanager
def scale_loss(self, loss):
|
{"golden_diff": "diff --git a/apex/amp/opt.py b/apex/amp/opt.py\n--- a/apex/amp/opt.py\n+++ b/apex/amp/opt.py\n@@ -13,7 +13,7 @@\n self._num_loss = num_loss\n self._loss_idx = 0\n self._skip_next = [False] * num_loss\n- self._loss_scaler = [LossScaler() for _ in range(num_loss)]\n+ self._loss_scaler = [LossScaler('dynamic') for _ in range(num_loss)]\n \n @contextlib.contextmanager\n def scale_loss(self, loss):\n", "issue": "Failing optim_wrapper due to missing Scaler argument\nWhen creating an optimizer and wrapping it via ``amp_handle.wrap_optimizer(optim)``, the handle [`calls the OptimWrapper`](https://github.com/NVIDIA/apex/blob/master/apex/amp/handle.py#L148), who wraps the optimizer and tries to instantiate a loss scaler per loss. \r\n\r\nThe `OptimWrapper` tries to [instantiate the loss-scaler without an argument](https://github.com/NVIDIA/apex/blob/master/apex/amp/opt.py#L16), but the loss-scaler [needs an argument `loss_scale`](https://github.com/NVIDIA/apex/blob/master/apex/amp/scaler.py#L28), which causes the whole wrapping process to fail.\n", "before_files": [{"content": "import contextlib\nimport logging\nimport warnings\n\nfrom .scaler import LossScaler, master_params\n\nimport numpy as np\n\nclass OptimWrapper(object):\n def __init__(self, optimizer, amp_handle, num_loss):\n self._optimizer = optimizer\n self._amp_handle = amp_handle\n self._num_loss = num_loss\n self._loss_idx = 0\n self._skip_next = [False] * num_loss\n self._loss_scaler = [LossScaler() for _ in range(num_loss)]\n\n @contextlib.contextmanager\n def scale_loss(self, loss):\n if not self._amp_handle.is_active():\n yield loss\n return\n\n # When there are multiple losses per-optimizer, we need\n # to save out current grad accumulation, since we won't be\n # able to unscale this particulare loss once the grads are\n # all mixed together.\n cached_grads = []\n if self._loss_idx > 0:\n for p in master_params(self._optimizer):\n if p.grad is not None:\n cached_grads.append(p.grad.data.detach().clone())\n else:\n cached_grads.append(None)\n self._optimizer.zero_grad()\n\n loss_scale = self._cur_loss_scaler().loss_scale()\n yield loss * loss_scale\n\n self._cur_loss_scaler().clear_overflow_state()\n self._cur_loss_scaler().unscale(\n master_params(self._optimizer),\n master_params(self._optimizer),\n loss_scale)\n self._skip_next[self._loss_idx] = self._cur_loss_scaler().update_scale()\n self._loss_idx += 1\n\n if len(cached_grads) > 0:\n for p, cached_grad in zip(master_params(self._optimizer),\n cached_grads):\n if cached_grad is not None:\n p.grad.data.add_(cached_grad)\n cached_grads = []\n\n def _cur_loss_scaler(self):\n assert 0 <= self._loss_idx < self._num_loss\n return self._loss_scaler[self._loss_idx]\n\n def step(self, closure=None):\n if not self._amp_handle.is_active():\n return self._optimizer.step(closure=closure)\n\n self._loss_idx = 0\n\n for group in self._optimizer.param_groups:\n for p in group['params']:\n self._amp_handle.remove_cache(p)\n\n if closure is not None:\n raise NotImplementedError(\n 'The `closure` argument is unsupported by the amp ' +\n 'optimizer wrapper.')\n if any(self._skip_next):\n logger = logging.getLogger('apex.amp')\n logger.info('Gradient overflow, skipping update')\n self._skip_next = [False] * self._num_loss\n else:\n return self._optimizer.step(closure=closure)\n\n # Forward any attribute lookups\n def __getattr__(self, attr):\n return getattr(self._optimizer, attr)\n\n # Forward all torch.optim.Optimizer methods\n def __getstate__(self):\n return 
self._optimizer.__getstate__()\n\n def __setstate__(self):\n return self._optimizer.__setstate__()\n\n def __repr__(self):\n return self._optimizer.__repr__()\n\n def state_dict(self):\n return self._optimizer.state_dict()\n\n def load_state_dict(self, state_dict):\n return self._optimizer.load_state_dict(state_dict)\n\n def zero_grad(self):\n return self._optimizer.zero_grad()\n\n def add_param_group(self, param_group):\n return self._optimizer.add_param_group(param_group)\n", "path": "apex/amp/opt.py"}]}
| 1,668 | 135 |
gh_patches_debug_40994
|
rasdani/github-patches
|
git_diff
|
pyload__pyload-1779
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[OneFichierCom]Non restricted file
Hello, I try to download small files on 1fichier (like this: ) but I cannot
```
ff^[^[^[^[^[^[19.08.2015 07:59:13 INFO Added package Nonrestricted file containing 1 links
19.08.2015 07:59:13 DEBUG Run Info Fetching for OneFichierCom
19.08.2015 07:59:14 INFO Download starts: https://1fichier.com/?86zu29oou8
19.08.2015 07:59:14 DEBUG HOOK UserAgentSwitcher: Use custom user-agent string: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0
19.08.2015 07:59:14 DEBUG HOOK UserAgentSwitcher: Use custom user-agent string: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0
19.08.2015 07:59:14 DEBUG HOSTER OneFichierCom[126]: PROCESS URL https://1fichier.com/?86zu29oou8 | PLUGIN VERSION 0.88
19.08.2015 07:59:14 INFO HOSTER OneFichierCom[126]: Updating file info...
Traceback (most recent call last):
File "/usr/share/pyload/module/network/HTTPRequest.py", line 279, in write
raise Exception("Loaded Url exceeded limit")
Exception: Loaded Url exceeded limit
19.08.2015 07:59:15 DEBUG Finished Info Fetching for OneFichierCom
Traceback (most recent call last):
File "/usr/share/pyload/module/network/HTTPRequest.py", line 279, in write
raise Exception("Loaded Url exceeded limit")
Exception: Loaded Url exceeded limit
19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: File info: {'status': 3, 'url': u'https://1fichier.com/?86zu29oou8', 'size': 0, 'name': u'86zu29oou8', 'pattern': {'HOST': u'1fichier.com', 'ID2': u'86zu29oou8', 'ID1': None}}
19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: Previous file info: {}
19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: File name: 86zu29oou8
19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: File size: Unknown
19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: File status: queued
19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: Looking for direct download link...
19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: Redirect #0 to: https://1fichier.com/?86zu29oou8
19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: LOAD URL https://1fichier.com/?86zu29oou8 | cookies=True | get={} | req=None | decode=True | multipart=False | post={} | ref=True | just_header=True
19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: Redirect #1 to: https://a-9.1fichier.com/s25134576
19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: LOAD URL https://a-9.1fichier.com/s25134576 | cookies=True | get={} | req=None | decode=True | multipart=False | post={} | ref=True | just_header=True
19.08.2015 07:59:19 INFO HOSTER OneFichierCom[126]: Direct download link detected
19.08.2015 07:59:19 INFO HOSTER OneFichierCom[126]: Downloading file...
19.08.2015 07:59:19 DEBUG HOSTER OneFichierCom[126]: DOWNLOAD URL https://a-9.1fichier.com/s25134576 | cookies=True | get={} | disposition=True | post={} | ref=True
19.08.2015 07:59:20 INFO HOSTER OneFichierCom[126]: Checking file...
19.08.2015 07:59:20 DEBUG HOSTER OneFichierCom[126]: Using default check rules...
19.08.2015 07:59:20 WARNING HOSTER OneFichierCom[126]: Check result: Html file | Waiting 1 minute and retry
19.08.2015 07:59:20 INFO HOSTER OneFichierCom[126]: WAIT 60 seconds
19.08.2015 07:59:20 DEBUG HOSTER OneFichierCom[126]: Previous waitUntil: 0.000000
19.08.2015 07:59:20 INFO HOSTER OneFichierCom[126]: RECONNECT disabled
19.08.2015 07:59:20 DEBUG HOSTER OneFichierCom[126]: Previous wantReconnect: True
19.08.2015 07:59:20 WARNING HOSTER OneFichierCom[126]: Ignore reconnection due logged account
```
curl :
```
curl 7.26.0 (arm-unknown-linux-gnueabihf) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
```
OS: Raspbian 7 (Raspberry Pi)
```
greg@serveur-pi ~ $ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 7 (wheezy)"
NAME="Raspbian GNU/Linux"
VERSION_ID="7"
VERSION="7 (wheezy)"
ID=raspbian
ID_LIKE=debian
ANSI_COLOR="1;31"
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
```
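For context, the log above shows the plugin treating the redirect target as a direct link and then failing the post-download check because the payload is HTML. A rough sketch of header-based probing that avoids this (using `requests` as a stand-in for pyLoad's own request helpers; the URL is a placeholder and absolute `Location` values are assumed):

```python
import requests

def is_direct_download(url, max_redirects=10):
    """Follow Location headers manually and report whether the final hop serves a file."""
    for _ in range(max_redirects):
        head = requests.head(url, allow_redirects=False)
        location = head.headers.get("location")
        if location:
            url = location           # hop to the next location (assumed absolute here)
            continue
        content_type = head.headers.get("content-type", "")
        return content_type.startswith("application/octet-stream"), url
    raise RuntimeError("too many redirects")

# direct, final_url = is_direct_download("https://1fichier.com/?xxxxxxxxxx")
```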
</issue>
<code>
[start of module/plugins/hoster/OneFichierCom.py]
1 # -*- coding: utf-8 -*-
2
3 import re
4
5 from module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo
6
7
8 class OneFichierCom(SimpleHoster):
9 __name__ = "OneFichierCom"
10 __type__ = "hoster"
11 __version__ = "0.88"
12 __status__ = "testing"
13
14 __pattern__ = r'https?://(?:www\.)?(?:(?P<ID1>\w+)\.)?(?P<HOST>1fichier\.com|alterupload\.com|cjoint\.net|d(es)?fichiers\.com|dl4free\.com|megadl\.fr|mesfichiers\.org|piecejointe\.net|pjointe\.com|tenvoi\.com)(?:/\?(?P<ID2>\w+))?'
15 __config__ = [("use_premium", "bool", "Use premium account if available", True)]
16
17 __description__ = """1fichier.com hoster plugin"""
18 __license__ = "GPLv3"
19 __authors__ = [("fragonib", "fragonib[AT]yahoo[DOT]es"),
20 ("the-razer", "daniel_ AT gmx DOT net"),
21 ("zoidberg", "[email protected]"),
22 ("imclem", None),
23 ("stickell", "[email protected]"),
24 ("Elrick69", "elrick69[AT]rocketmail[DOT]com"),
25 ("Walter Purcaro", "[email protected]"),
26 ("Ludovic Lehmann", "[email protected]")]
27
28
29 COOKIES = [("1fichier.com", "LG", "en")]
30
31 NAME_PATTERN = r'>File\s*Name :</td>\s*<td.*>(?P<N>.+?)<'
32 SIZE_PATTERN = r'>Size :</td>\s*<td.*>(?P<S>[\d.,]+) (?P<U>[\w^_]+)'
33 OFFLINE_PATTERN = r'File not found !\s*<'
34
35 WAIT_PATTERN = r'>You must wait \d+ minutes'
36
37
38 def setup(self):
39 self.multiDL = self.premium
40 self.resume_download = True
41
42
43 def handle_free(self, pyfile):
44 id = self.info['pattern']['ID1'] or self.info['pattern']['ID2']
45 url, inputs = self.parse_html_form('action="https://1fichier.com/\?%s' % id)
46
47 if not url:
48 self.fail(_("Download link not found"))
49
50 if "pass" in inputs:
51 inputs['pass'] = self.get_password()
52
53 inputs['submit'] = "Download"
54
55 self.download(url, post=inputs)
56
57
58 def handle_premium(self, pyfile):
59 self.download(pyfile.url, post={'dl': "Download", 'did': 0})
60
61
62 getInfo = create_getInfo(OneFichierCom)
63
[end of module/plugins/hoster/OneFichierCom.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/module/plugins/hoster/OneFichierCom.py b/module/plugins/hoster/OneFichierCom.py
--- a/module/plugins/hoster/OneFichierCom.py
+++ b/module/plugins/hoster/OneFichierCom.py
@@ -2,13 +2,14 @@
import re
+from module.network.RequestFactory import getURL as get_url
from module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo
class OneFichierCom(SimpleHoster):
__name__ = "OneFichierCom"
__type__ = "hoster"
- __version__ = "0.88"
+ __version__ = "0.89"
__status__ = "testing"
__pattern__ = r'https?://(?:www\.)?(?:(?P<ID1>\w+)\.)?(?P<HOST>1fichier\.com|alterupload\.com|cjoint\.net|d(es)?fichiers\.com|dl4free\.com|megadl\.fr|mesfichiers\.org|piecejointe\.net|pjointe\.com|tenvoi\.com)(?:/\?(?P<ID2>\w+))?'
@@ -28,6 +29,8 @@
COOKIES = [("1fichier.com", "LG", "en")]
+ DIRECT_LINK = True
+
NAME_PATTERN = r'>File\s*Name :</td>\s*<td.*>(?P<N>.+?)<'
SIZE_PATTERN = r'>Size :</td>\s*<td.*>(?P<S>[\d.,]+) (?P<U>[\w^_]+)'
OFFLINE_PATTERN = r'File not found !\s*<'
@@ -40,7 +43,61 @@
self.resume_download = True
+ @classmethod
+ def get_info(cls, url="", html=""):
+ redirect = url
+ for i in xrange(10):
+ try:
+ headers = dict(re.findall(r"(?P<name>.+?): (?P<value>.+?)\r?\n", get_url(redirect, just_header=True).lower()))
+ if 'location' in headers and headers['location']:
+ redirect = headers['location']
+ else:
+ if 'content-type' in headers and headers['content-type'] == "application/octet-stream":
+ if "filename=" in headers.get('content-disposition'):
+ name = dict(_i.split("=") for _i in map(str.strip, headers['content-disposition'].split(";"))[1:]['filename'].strip("\"'")
+ else:
+ name = url
+
+ info = {'name' : name,
+ 'size' : long(headers.get('content-length')),
+ 'status': 3,
+ 'url' : url}
+
+ else:
+ info = super(OneFichierCom, cls).get_info(url, html)
+
+ break
+
+ except Exception, e:
+ info = {'status' : 8,
+ 'error' : e.message}
+
+ else:
+ info = {'status' : 8,
+ 'error' : _("Too many redirects")}
+
+ return info
+
+
+ def handle_direct(self, pyfile):
+ redirect = pyfile.url
+ for i in xrange(self.get_config("maxredirs", plugin="UserAgentSwitcher")):
+
+ headers = self.load(redirect, just_header=True)
+ if 'location' in headers and headers['location']:
+ self.log_debug("Redirect #%d to: %s" % (i, redirect))
+ redirect = headers['location']
+ else:
+ if 'content-type' in headers and headers['content-type'] == "application/octet-stream":
+ self.link = pyfile.url
+ break
+ else:
+ self.fail(_("Too many redirects"))
+
+
def handle_free(self, pyfile):
+ self.check_errors()
+
id = self.info['pattern']['ID1'] or self.info['pattern']['ID2']
url, inputs = self.parse_html_form('action="https://1fichier.com/\?%s' % id)
|
{"golden_diff": "diff --git a/module/plugins/hoster/OneFichierCom.py b/module/plugins/hoster/OneFichierCom.py\n--- a/module/plugins/hoster/OneFichierCom.py\n+++ b/module/plugins/hoster/OneFichierCom.py\n@@ -2,13 +2,14 @@\n \n import re\n \n+from module.network.RequestFactory import getURL as get_url\n from module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo\n \n \n class OneFichierCom(SimpleHoster):\n __name__ = \"OneFichierCom\"\n __type__ = \"hoster\"\n- __version__ = \"0.88\"\n+ __version__ = \"0.89\"\n __status__ = \"testing\"\n \n __pattern__ = r'https?://(?:www\\.)?(?:(?P<ID1>\\w+)\\.)?(?P<HOST>1fichier\\.com|alterupload\\.com|cjoint\\.net|d(es)?fichiers\\.com|dl4free\\.com|megadl\\.fr|mesfichiers\\.org|piecejointe\\.net|pjointe\\.com|tenvoi\\.com)(?:/\\?(?P<ID2>\\w+))?'\n@@ -28,6 +29,8 @@\n \n COOKIES = [(\"1fichier.com\", \"LG\", \"en\")]\n \n+ DIRECT_LINK = True\n+\n NAME_PATTERN = r'>File\\s*Name :</td>\\s*<td.*>(?P<N>.+?)<'\n SIZE_PATTERN = r'>Size :</td>\\s*<td.*>(?P<S>[\\d.,]+) (?P<U>[\\w^_]+)'\n OFFLINE_PATTERN = r'File not found !\\s*<'\n@@ -40,7 +43,61 @@\n self.resume_download = True\n \n \n+ @classmethod\n+ def get_info(cls, url=\"\", html=\"\"):\n+ redirect = url\n+ for i in xrange(10):\n+ try:\n+ headers = dict(re.findall(r\"(?P<name>.+?): (?P<value>.+?)\\r?\\n\", get_url(redirect, just_header=True).lower()))\n+ if 'location' in headers and headers['location']:\n+ redirect = headers['location']\n+ else:\n+ if 'content-type' in headers and headers['content-type'] == \"application/octet-stream\":\n+ if \"filename=\" in headers.get('content-disposition'):\n+ name = dict(_i.split(\"=\") for _i in map(str.strip, headers['content-disposition'].split(\";\"))[1:]['filename'].strip(\"\\\"'\")\n+ else:\n+ name = url\n+\n+ info = {'name' : name,\n+ 'size' : long(headers.get('content-length')),\n+ 'status': 3,\n+ 'url' : url}\n+\n+ else:\n+ info = super(OneFichierCom, cls).get_info(url, html)\n+\n+ break\n+\n+ except Exception, e:\n+ info = {'status' : 8,\n+ 'error' : e.message}\n+\n+ else:\n+ info = {'status' : 8,\n+ 'error' : _(\"Too many redirects\")}\n+\n+ return info\n+\n+\n+ def handle_direct(self, pyfile):\n+ redirect = pyfile.url\n+ for i in xrange(self.get_config(\"maxredirs\", plugin=\"UserAgentSwitcher\")):\n+\n+ headers = self.load(redirect, just_header=True)\n+ if 'location' in headers and headers['location']:\n+ self.log_debug(\"Redirect #%d to: %s\" % (i, redirect))\n+ redirect = headers['location']\n+ else:\n+ if 'content-type' in headers and headers['content-type'] == \"application/octet-stream\":\n+ self.link = pyfile.url\n+ break\n+ else:\n+ self.fail(_(\"Too many redirects\"))\n+\n+\n def handle_free(self, pyfile):\n+ self.check_errors()\n+\n id = self.info['pattern']['ID1'] or self.info['pattern']['ID2']\n url, inputs = self.parse_html_form('action=\"https://1fichier.com/\\?%s' % id)\n", "issue": "[OneFichierCom]Non restricted file\nHello, i try to download little files on 1fichier( like this : ) but i cannot\n\n```\nff^[^[^[^[^[^[19.08.2015 07:59:13 INFO Added package Nonrestricted file containing 1 links\n19.08.2015 07:59:13 DEBUG Run Info Fetching for OneFichierCom\n19.08.2015 07:59:14 INFO Download starts: https://1fichier.com/?86zu29oou8\n19.08.2015 07:59:14 DEBUG HOOK UserAgentSwitcher: Use custom user-agent string: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0\n19.08.2015 07:59:14 DEBUG HOOK UserAgentSwitcher: Use custom user-agent string: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 
Firefox/38.0\n19.08.2015 07:59:14 DEBUG HOSTER OneFichierCom[126]: PROCESS URL https://1fichier.com/?86zu29oou8 | PLUGIN VERSION 0.88\n19.08.2015 07:59:14 INFO HOSTER OneFichierCom[126]: Updating file info...\nTraceback (most recent call last):\n File \"/usr/share/pyload/module/network/HTTPRequest.py\", line 279, in write\n raise Exception(\"Loaded Url exceeded limit\")\nException: Loaded Url exceeded limit\n19.08.2015 07:59:15 DEBUG Finished Info Fetching for OneFichierCom\nTraceback (most recent call last):\n File \"/usr/share/pyload/module/network/HTTPRequest.py\", line 279, in write\n raise Exception(\"Loaded Url exceeded limit\")\nException: Loaded Url exceeded limit\n19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: File info: {'status': 3, 'url': u'https://1fichier.com/?86zu29oou8', 'size': 0, 'name': u'86zu29oou8', 'pattern': {'HOST': u'1fichier.com', 'ID2': u'86zu29oou8', 'ID1': None}}\n19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: Previous file info: {}\n19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: File name: 86zu29oou8\n19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: File size: Unknown\n19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: File status: queued\n19.08.2015 07:59:17 INFO HOSTER OneFichierCom[126]: Looking for direct download link...\n19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: Redirect #0 to: https://1fichier.com/?86zu29oou8\n19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: LOAD URL https://1fichier.com/?86zu29oou8 | cookies=True | get={} | req=None | decode=True | multipart=False | post={} | ref=True | just_header=True\n19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: Redirect #1 to: https://a-9.1fichier.com/s25134576\n19.08.2015 07:59:17 DEBUG HOSTER OneFichierCom[126]: LOAD URL https://a-9.1fichier.com/s25134576 | cookies=True | get={} | req=None | decode=True | multipart=False | post={} | ref=True | just_header=True\n19.08.2015 07:59:19 INFO HOSTER OneFichierCom[126]: Direct download link detected\n19.08.2015 07:59:19 INFO HOSTER OneFichierCom[126]: Downloading file...\n19.08.2015 07:59:19 DEBUG HOSTER OneFichierCom[126]: DOWNLOAD URL https://a-9.1fichier.com/s25134576 | cookies=True | get={} | disposition=True | post={} | ref=True\n19.08.2015 07:59:20 INFO HOSTER OneFichierCom[126]: Checking file...\n19.08.2015 07:59:20 DEBUG HOSTER OneFichierCom[126]: Using default check rules...\n19.08.2015 07:59:20 WARNING HOSTER OneFichierCom[126]: Check result: Html file | Waiting 1 minute and retry\n19.08.2015 07:59:20 INFO HOSTER OneFichierCom[126]: WAIT 60 seconds\n19.08.2015 07:59:20 DEBUG HOSTER OneFichierCom[126]: Previous waitUntil: 0.000000\n19.08.2015 07:59:20 INFO HOSTER OneFichierCom[126]: RECONNECT disabled\n19.08.2015 07:59:20 DEBUG HOSTER OneFichierCom[126]: Previous wantReconnect: True\n19.08.2015 07:59:20 WARNING HOSTER OneFichierCom[126]: Ignore reconnection due logged account\n```\n\ncurl : \n\n```\ncurl 7.26.0 (arm-unknown-linux-gnueabihf) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3\nProtocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp \nFeatures: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP\n```\n\nOS : Raspberry Rasbian 7\n\n```\ngreg@serveur-pi ~ $ cat /etc/os-release\nPRETTY_NAME=\"Raspbian GNU/Linux 7 (wheezy)\"\nNAME=\"Raspbian GNU/Linux\"\nVERSION_ID=\"7\"\nVERSION=\"7 
(wheezy)\"\nID=raspbian\nID_LIKE=debian\nANSI_COLOR=\"1;31\"\nHOME_URL=\"http://www.raspbian.org/\"\nSUPPORT_URL=\"http://www.raspbian.org/RaspbianForums\"\nBUG_REPORT_URL=\"http://www.raspbian.org/RaspbianBugs\"\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.internal.SimpleHoster import SimpleHoster, create_getInfo\n\n\nclass OneFichierCom(SimpleHoster):\n __name__ = \"OneFichierCom\"\n __type__ = \"hoster\"\n __version__ = \"0.88\"\n __status__ = \"testing\"\n\n __pattern__ = r'https?://(?:www\\.)?(?:(?P<ID1>\\w+)\\.)?(?P<HOST>1fichier\\.com|alterupload\\.com|cjoint\\.net|d(es)?fichiers\\.com|dl4free\\.com|megadl\\.fr|mesfichiers\\.org|piecejointe\\.net|pjointe\\.com|tenvoi\\.com)(?:/\\?(?P<ID2>\\w+))?'\n __config__ = [(\"use_premium\", \"bool\", \"Use premium account if available\", True)]\n\n __description__ = \"\"\"1fichier.com hoster plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"fragonib\", \"fragonib[AT]yahoo[DOT]es\"),\n (\"the-razer\", \"daniel_ AT gmx DOT net\"),\n (\"zoidberg\", \"[email protected]\"),\n (\"imclem\", None),\n (\"stickell\", \"[email protected]\"),\n (\"Elrick69\", \"elrick69[AT]rocketmail[DOT]com\"),\n (\"Walter Purcaro\", \"[email protected]\"),\n (\"Ludovic Lehmann\", \"[email protected]\")]\n\n\n COOKIES = [(\"1fichier.com\", \"LG\", \"en\")]\n\n NAME_PATTERN = r'>File\\s*Name :</td>\\s*<td.*>(?P<N>.+?)<'\n SIZE_PATTERN = r'>Size :</td>\\s*<td.*>(?P<S>[\\d.,]+) (?P<U>[\\w^_]+)'\n OFFLINE_PATTERN = r'File not found !\\s*<'\n\n WAIT_PATTERN = r'>You must wait \\d+ minutes'\n\n\n def setup(self):\n self.multiDL = self.premium\n self.resume_download = True\n\n\n def handle_free(self, pyfile):\n id = self.info['pattern']['ID1'] or self.info['pattern']['ID2']\n url, inputs = self.parse_html_form('action=\"https://1fichier.com/\\?%s' % id)\n\n if not url:\n self.fail(_(\"Download link not found\"))\n\n if \"pass\" in inputs:\n inputs['pass'] = self.get_password()\n\n inputs['submit'] = \"Download\"\n\n self.download(url, post=inputs)\n\n\n def handle_premium(self, pyfile):\n self.download(pyfile.url, post={'dl': \"Download\", 'did': 0})\n\n\ngetInfo = create_getInfo(OneFichierCom)\n", "path": "module/plugins/hoster/OneFichierCom.py"}]}
| 3,134 | 938 |
gh_patches_debug_42681
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-4127
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Docs] Graph for Commands API has non-working links
### The issue:
https://docs.qtile.org/en/latest/manual/commands/api/root.html
This is about the online documentation. I created the issue here because it's hosted in the same repository as the code itself. The links on the graph nodes are wrong and return a 404. It's this image: https://docs.qtile.org/en/latest/_images/graphviz-52d4c4e0812a40a7c28aae165439b7e828011b3c.png
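For illustration, the node links are emitted as site-absolute paths, which only resolve when the docs are served from the domain root; on docs.qtile.org the pages live under `/en/latest/`, so the image-map links 404. A simplified sketch of the difference (not the directive's exact output format):

```python
# Illustrative: absolute vs. page-relative href on a generated graphviz node.
absolute_href = "/manual/commands/api/root.html"  # 404 when docs are served under /en/latest/...
relative_href = "root.html"                       # resolves next to the page embedding the graph

def node_line(name, href):
    return f'{name} [href="{href}", label="{name}"];'

print(node_line("root", absolute_href))
print(node_line("root", relative_href))
```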
### Required:
- [x] I have searched past issues to see if this bug has already been reported.
</issue>
<code>
[start of docs/qtile_docs/graph.py]
1 # Copyright (c) 2022 elParaguayo
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 # SOFTWARE.
20 from dataclasses import dataclass
21
22 from docutils.parsers.rst import Directive, directives
23 from qtile_docs.base import SimpleDirectiveMixin
24 from qtile_docs.templates import qtile_graph_template
25
26 from libqtile.command import graph
27
28 DISABLED_COLOUR = "Gray"
29
30
31 @dataclass
32 class Node:
33 node: graph.CommandGraphNode
34 x: float
35 y: float
36 fillcolor: str
37 color: str
38 url: str
39
40 @property
41 def name(self):
42 return getattr(self.node, "object_type", "root")
43
44 @property
45 def children(self):
46 return self.node.children
47
48 def node_args(self, enabled=True, highlight=False):
49 """Returns a dict of arguments that can be formatted for graphviz."""
50 return {
51 "pos": f"{self.x},{self.y}!",
52 "color": self.color if enabled else DISABLED_COLOUR,
53 "fillcolor": self.fillcolor if enabled else DISABLED_COLOUR,
54 "href": self.url,
55 "style": "filled",
56 "label": self.name,
57 "fontname": "bold" if highlight else "regular",
58 }
59
60
61 ROOT = graph.CommandGraphRoot()
62
63
64 # Define our nodes with their positions, colours and link to API docs page.
65 NODES = [
66 Node(ROOT, 0, 0, "Gray", "DarkGray", "/manual/commands/api/root.html"),
67 Node(graph._BarGraphNode, -1.94, -0.44, "Violet", "Purple", "/manual/commands/api/bars.html"),
68 Node(
69 graph._CoreGraphNode,
70 -1.56,
71 1.24,
72 "SlateBlue1",
73 "SlateBlue",
74 "/manual/commands/api/backend.html",
75 ),
76 Node(
77 graph._GroupGraphNode,
78 1.56,
79 1.24,
80 "Orange",
81 "OrangeRed",
82 "/manual/commands/api/groups.html",
83 ),
84 Node(
85 graph._LayoutGraphNode,
86 1.94,
87 -0.44,
88 "Gold",
89 "Goldenrod",
90 "/manual/commands/api/layouts.html",
91 ),
92 Node(
93 graph._ScreenGraphNode,
94 0.86,
95 -1.8,
96 "LimeGreen",
97 "DarkGreen",
98 "/manual/commands/api/screens.html",
99 ),
100 Node(
101 graph._WidgetGraphNode,
102 -0.86,
103 -1.8,
104 "LightBlue",
105 "Blue",
106 "/manual/commands/api/widgets.html",
107 ),
108 Node(graph._WindowGraphNode, 0, 2, "Tomato", "Red", "/manual/commands/api/windows.html"),
109 ]
110
111
112 # Convenient dict to access node object via node name
113 NODES_MAP = {n.name: n for n in NODES}
114
115
116 COMMAND_MAP = {n.name: n.children for n in NODES}
117
118
119 # Generate a list of all routest in the map.
120 # Each route is a tuple of (start, end, bidirectional)
121 ROUTES = []
122 for node, children in COMMAND_MAP.items():
123 for child in children:
124 route = (node, child, node in COMMAND_MAP[child])
125 # Check that the reverse route is not in the list already
126 if (child, node, node in COMMAND_MAP[child]) not in ROUTES:
127 ROUTES.append(route)
128
129
130 class QtileGraph(SimpleDirectiveMixin, Directive):
131 required_arguments = 0
132 option_spec = {
133 "root": directives.unchanged,
134 }
135
136 def make_nodes(self):
137 """Generates the node definition lines."""
138 node_lines = []
139
140 for name, node in NODES_MAP.items():
141 args_dict = node.node_args(name in self.visible_nodes, name == self.graph_name)
142 args_string = ", ".join(f'{k}="{v}"' for k, v in args_dict.items())
143 node_lines.extend([f"node [{args_string}];", f"{name};", ""])
144
145 return node_lines
146
147 def make_routes(self):
148 """Generates the route definition lines."""
149 route_lines = []
150 for r in ROUTES:
151 args = {}
152 if r not in self.visible_routes:
153 args["color"] = DISABLED_COLOUR
154 if r[2]:
155 args["dir"] = "both"
156
157 line = f"{r[0]} -> {r[1]}"
158 if args:
159 args_string = ", ".join(f'{k}="{v}"' for k, v in args.items())
160 line += f" [{args_string}]"
161 line += ";"
162 route_lines.append(line)
163
164 return route_lines
165
166 def find_linked_nodes_routes(self, node):
167 """Identifies routes connected to the selected node."""
168 nodes = []
169 routes = []
170 for r in ROUTES:
171 # Our node is the starting node
172 if r[0] == node:
173 nodes.append(r[1])
174 routes.append(r)
175 # Our node is the ending node and it's a bidirectional route
176 elif r[1] == node and r[2]:
177 nodes.append(r[0])
178 routes.append(r)
179
180 return (nodes, routes)
181
182 def make_rst(self):
183 self.graph_name = self.options.get("root", "all")
184 if self.graph_name == "all":
185 self.visible_nodes = [n for n in NODES_MAP]
186 self.visible_routes = ROUTES[:]
187 else:
188 linked_nodes, linked_routes = self.find_linked_nodes_routes(self.graph_name)
189 self.visible_nodes = [self.graph_name]
190 self.visible_nodes.extend(linked_nodes)
191 self.visible_routes = linked_routes
192
193 graph = []
194 graph.append(f"strict digraph {self.graph_name} {{")
195 graph.append('bgcolor="transparent"')
196 graph.extend(self.make_nodes())
197 graph.extend(self.make_routes())
198 graph.append("}")
199
200 rst = qtile_graph_template.render(graph=graph)
201 for line in rst.splitlines():
202 yield line
203
[end of docs/qtile_docs/graph.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/qtile_docs/graph.py b/docs/qtile_docs/graph.py
--- a/docs/qtile_docs/graph.py
+++ b/docs/qtile_docs/graph.py
@@ -45,13 +45,13 @@
def children(self):
return self.node.children
- def node_args(self, enabled=True, highlight=False):
+ def node_args(self, enabled=True, highlight=False, relative_url=str()):
"""Returns a dict of arguments that can be formatted for graphviz."""
return {
"pos": f"{self.x},{self.y}!",
"color": self.color if enabled else DISABLED_COLOUR,
"fillcolor": self.fillcolor if enabled else DISABLED_COLOUR,
- "href": self.url,
+ "href": f"{relative_url}{self.url}",
"style": "filled",
"label": self.name,
"fontname": "bold" if highlight else "regular",
@@ -63,15 +63,15 @@
# Define our nodes with their positions, colours and link to API docs page.
NODES = [
- Node(ROOT, 0, 0, "Gray", "DarkGray", "/manual/commands/api/root.html"),
- Node(graph._BarGraphNode, -1.94, -0.44, "Violet", "Purple", "/manual/commands/api/bars.html"),
+ Node(ROOT, 0, 0, "Gray", "DarkGray", "root.html"),
+ Node(graph._BarGraphNode, -1.94, -0.44, "Violet", "Purple", "bars.html"),
Node(
graph._CoreGraphNode,
-1.56,
1.24,
"SlateBlue1",
"SlateBlue",
- "/manual/commands/api/backend.html",
+ "backend.html",
),
Node(
graph._GroupGraphNode,
@@ -79,7 +79,7 @@
1.24,
"Orange",
"OrangeRed",
- "/manual/commands/api/groups.html",
+ "groups.html",
),
Node(
graph._LayoutGraphNode,
@@ -87,7 +87,7 @@
-0.44,
"Gold",
"Goldenrod",
- "/manual/commands/api/layouts.html",
+ "layouts.html",
),
Node(
graph._ScreenGraphNode,
@@ -95,7 +95,7 @@
-1.8,
"LimeGreen",
"DarkGreen",
- "/manual/commands/api/screens.html",
+ "screens.html",
),
Node(
graph._WidgetGraphNode,
@@ -103,9 +103,9 @@
-1.8,
"LightBlue",
"Blue",
- "/manual/commands/api/widgets.html",
+ "widgets.html",
),
- Node(graph._WindowGraphNode, 0, 2, "Tomato", "Red", "/manual/commands/api/windows.html"),
+ Node(graph._WindowGraphNode, 0, 2, "Tomato", "Red", "windows.html"),
]
@@ -131,6 +131,7 @@
required_arguments = 0
option_spec = {
"root": directives.unchanged,
+ "api_page_root": directives.unchanged,
}
def make_nodes(self):
@@ -138,7 +139,11 @@
node_lines = []
for name, node in NODES_MAP.items():
- args_dict = node.node_args(name in self.visible_nodes, name == self.graph_name)
+ args_dict = node.node_args(
+ name in self.visible_nodes,
+ name == self.graph_name,
+ self.options.get("api_page_root", ""),
+ )
args_string = ", ".join(f'{k}="{v}"' for k, v in args_dict.items())
node_lines.extend([f"node [{args_string}];", f"{name};", ""])
|
{"golden_diff": "diff --git a/docs/qtile_docs/graph.py b/docs/qtile_docs/graph.py\n--- a/docs/qtile_docs/graph.py\n+++ b/docs/qtile_docs/graph.py\n@@ -45,13 +45,13 @@\n def children(self):\n return self.node.children\n \n- def node_args(self, enabled=True, highlight=False):\n+ def node_args(self, enabled=True, highlight=False, relative_url=str()):\n \"\"\"Returns a dict of arguments that can be formatted for graphviz.\"\"\"\n return {\n \"pos\": f\"{self.x},{self.y}!\",\n \"color\": self.color if enabled else DISABLED_COLOUR,\n \"fillcolor\": self.fillcolor if enabled else DISABLED_COLOUR,\n- \"href\": self.url,\n+ \"href\": f\"{relative_url}{self.url}\",\n \"style\": \"filled\",\n \"label\": self.name,\n \"fontname\": \"bold\" if highlight else \"regular\",\n@@ -63,15 +63,15 @@\n \n # Define our nodes with their positions, colours and link to API docs page.\n NODES = [\n- Node(ROOT, 0, 0, \"Gray\", \"DarkGray\", \"/manual/commands/api/root.html\"),\n- Node(graph._BarGraphNode, -1.94, -0.44, \"Violet\", \"Purple\", \"/manual/commands/api/bars.html\"),\n+ Node(ROOT, 0, 0, \"Gray\", \"DarkGray\", \"root.html\"),\n+ Node(graph._BarGraphNode, -1.94, -0.44, \"Violet\", \"Purple\", \"bars.html\"),\n Node(\n graph._CoreGraphNode,\n -1.56,\n 1.24,\n \"SlateBlue1\",\n \"SlateBlue\",\n- \"/manual/commands/api/backend.html\",\n+ \"backend.html\",\n ),\n Node(\n graph._GroupGraphNode,\n@@ -79,7 +79,7 @@\n 1.24,\n \"Orange\",\n \"OrangeRed\",\n- \"/manual/commands/api/groups.html\",\n+ \"groups.html\",\n ),\n Node(\n graph._LayoutGraphNode,\n@@ -87,7 +87,7 @@\n -0.44,\n \"Gold\",\n \"Goldenrod\",\n- \"/manual/commands/api/layouts.html\",\n+ \"layouts.html\",\n ),\n Node(\n graph._ScreenGraphNode,\n@@ -95,7 +95,7 @@\n -1.8,\n \"LimeGreen\",\n \"DarkGreen\",\n- \"/manual/commands/api/screens.html\",\n+ \"screens.html\",\n ),\n Node(\n graph._WidgetGraphNode,\n@@ -103,9 +103,9 @@\n -1.8,\n \"LightBlue\",\n \"Blue\",\n- \"/manual/commands/api/widgets.html\",\n+ \"widgets.html\",\n ),\n- Node(graph._WindowGraphNode, 0, 2, \"Tomato\", \"Red\", \"/manual/commands/api/windows.html\"),\n+ Node(graph._WindowGraphNode, 0, 2, \"Tomato\", \"Red\", \"windows.html\"),\n ]\n \n \n@@ -131,6 +131,7 @@\n required_arguments = 0\n option_spec = {\n \"root\": directives.unchanged,\n+ \"api_page_root\": directives.unchanged,\n }\n \n def make_nodes(self):\n@@ -138,7 +139,11 @@\n node_lines = []\n \n for name, node in NODES_MAP.items():\n- args_dict = node.node_args(name in self.visible_nodes, name == self.graph_name)\n+ args_dict = node.node_args(\n+ name in self.visible_nodes,\n+ name == self.graph_name,\n+ self.options.get(\"api_page_root\", \"\"),\n+ )\n args_string = \", \".join(f'{k}=\"{v}\"' for k, v in args_dict.items())\n node_lines.extend([f\"node [{args_string}];\", f\"{name};\", \"\"])\n", "issue": "[Docs] Graph for Commands API have non working links\n### The issue:\n\nhttps://docs.qtile.org/en/latest/manual/commands/api/root.html\r\n\r\nThis is about the online documentation. I created an issue here, because its hosted in same repository as the code itself. The links on the graphical nodes are wrong and returns a 404. 
It's this image: https://docs.qtile.org/en/latest/_images/graphviz-52d4c4e0812a40a7c28aae165439b7e828011b3c.png\n\n### Required:\n\n- [x] I have searched past issues to see if this bug has already been reported.\n", "before_files": [{"content": "# Copyright (c) 2022 elParaguayo\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nfrom dataclasses import dataclass\n\nfrom docutils.parsers.rst import Directive, directives\nfrom qtile_docs.base import SimpleDirectiveMixin\nfrom qtile_docs.templates import qtile_graph_template\n\nfrom libqtile.command import graph\n\nDISABLED_COLOUR = \"Gray\"\n\n\n@dataclass\nclass Node:\n node: graph.CommandGraphNode\n x: float\n y: float\n fillcolor: str\n color: str\n url: str\n\n @property\n def name(self):\n return getattr(self.node, \"object_type\", \"root\")\n\n @property\n def children(self):\n return self.node.children\n\n def node_args(self, enabled=True, highlight=False):\n \"\"\"Returns a dict of arguments that can be formatted for graphviz.\"\"\"\n return {\n \"pos\": f\"{self.x},{self.y}!\",\n \"color\": self.color if enabled else DISABLED_COLOUR,\n \"fillcolor\": self.fillcolor if enabled else DISABLED_COLOUR,\n \"href\": self.url,\n \"style\": \"filled\",\n \"label\": self.name,\n \"fontname\": \"bold\" if highlight else \"regular\",\n }\n\n\nROOT = graph.CommandGraphRoot()\n\n\n# Define our nodes with their positions, colours and link to API docs page.\nNODES = [\n Node(ROOT, 0, 0, \"Gray\", \"DarkGray\", \"/manual/commands/api/root.html\"),\n Node(graph._BarGraphNode, -1.94, -0.44, \"Violet\", \"Purple\", \"/manual/commands/api/bars.html\"),\n Node(\n graph._CoreGraphNode,\n -1.56,\n 1.24,\n \"SlateBlue1\",\n \"SlateBlue\",\n \"/manual/commands/api/backend.html\",\n ),\n Node(\n graph._GroupGraphNode,\n 1.56,\n 1.24,\n \"Orange\",\n \"OrangeRed\",\n \"/manual/commands/api/groups.html\",\n ),\n Node(\n graph._LayoutGraphNode,\n 1.94,\n -0.44,\n \"Gold\",\n \"Goldenrod\",\n \"/manual/commands/api/layouts.html\",\n ),\n Node(\n graph._ScreenGraphNode,\n 0.86,\n -1.8,\n \"LimeGreen\",\n \"DarkGreen\",\n \"/manual/commands/api/screens.html\",\n ),\n Node(\n graph._WidgetGraphNode,\n -0.86,\n -1.8,\n \"LightBlue\",\n \"Blue\",\n \"/manual/commands/api/widgets.html\",\n ),\n Node(graph._WindowGraphNode, 0, 2, \"Tomato\", \"Red\", \"/manual/commands/api/windows.html\"),\n]\n\n\n# Convenient dict to access node object via node name\nNODES_MAP = {n.name: n for n in NODES}\n\n\nCOMMAND_MAP = {n.name: n.children for n in NODES}\n\n\n# Generate a 
list of all routest in the map.\n# Each route is a tuple of (start, end, bidirectional)\nROUTES = []\nfor node, children in COMMAND_MAP.items():\n for child in children:\n route = (node, child, node in COMMAND_MAP[child])\n # Check that the reverse route is not in the list already\n if (child, node, node in COMMAND_MAP[child]) not in ROUTES:\n ROUTES.append(route)\n\n\nclass QtileGraph(SimpleDirectiveMixin, Directive):\n required_arguments = 0\n option_spec = {\n \"root\": directives.unchanged,\n }\n\n def make_nodes(self):\n \"\"\"Generates the node definition lines.\"\"\"\n node_lines = []\n\n for name, node in NODES_MAP.items():\n args_dict = node.node_args(name in self.visible_nodes, name == self.graph_name)\n args_string = \", \".join(f'{k}=\"{v}\"' for k, v in args_dict.items())\n node_lines.extend([f\"node [{args_string}];\", f\"{name};\", \"\"])\n\n return node_lines\n\n def make_routes(self):\n \"\"\"Generates the route definition lines.\"\"\"\n route_lines = []\n for r in ROUTES:\n args = {}\n if r not in self.visible_routes:\n args[\"color\"] = DISABLED_COLOUR\n if r[2]:\n args[\"dir\"] = \"both\"\n\n line = f\"{r[0]} -> {r[1]}\"\n if args:\n args_string = \", \".join(f'{k}=\"{v}\"' for k, v in args.items())\n line += f\" [{args_string}]\"\n line += \";\"\n route_lines.append(line)\n\n return route_lines\n\n def find_linked_nodes_routes(self, node):\n \"\"\"Identifies routes connected to the selected node.\"\"\"\n nodes = []\n routes = []\n for r in ROUTES:\n # Our node is the starting node\n if r[0] == node:\n nodes.append(r[1])\n routes.append(r)\n # Our node is the ending node and it's a bidirectional route\n elif r[1] == node and r[2]:\n nodes.append(r[0])\n routes.append(r)\n\n return (nodes, routes)\n\n def make_rst(self):\n self.graph_name = self.options.get(\"root\", \"all\")\n if self.graph_name == \"all\":\n self.visible_nodes = [n for n in NODES_MAP]\n self.visible_routes = ROUTES[:]\n else:\n linked_nodes, linked_routes = self.find_linked_nodes_routes(self.graph_name)\n self.visible_nodes = [self.graph_name]\n self.visible_nodes.extend(linked_nodes)\n self.visible_routes = linked_routes\n\n graph = []\n graph.append(f\"strict digraph {self.graph_name} {{\")\n graph.append('bgcolor=\"transparent\"')\n graph.extend(self.make_nodes())\n graph.extend(self.make_routes())\n graph.append(\"}\")\n\n rst = qtile_graph_template.render(graph=graph)\n for line in rst.splitlines():\n yield line\n", "path": "docs/qtile_docs/graph.py"}]}
| 2,723 | 875 |
gh_patches_debug_15496
|
rasdani/github-patches
|
git_diff
|
pwr-Solaar__Solaar-790
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for Logitech G613 with receiver id c53d
Apparently solaar v1.0.1 on ubuntu ppa does not support the Lightspeed receiver with id c53d. solaar reported that no receiver was found.
Here is the output of my lsusb
tonny@fenrir:[~]: lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 005: ID 1b1c:1b65 Corsair
Bus 003 Device 006: ID 046d:c53d Logitech, Inc.
Bus 003 Device 003: ID 046d:082b Logitech, Inc. Webcam C170
Bus 003 Device 002: ID 1a40:0101 Terminus Technology Inc. Hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 8087:0aa7 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
</issue>
<code>
[start of lib/logitech_receiver/base_usb.py]
1 # -*- python-mode -*-
2 # -*- coding: UTF-8 -*-
3
4 ## Copyright (C) 2012-2013 Daniel Pavel
5 ##
6 ## This program is free software; you can redistribute it and/or modify
7 ## it under the terms of the GNU General Public License as published by
8 ## the Free Software Foundation; either version 2 of the License, or
9 ## (at your option) any later version.
10 ##
11 ## This program is distributed in the hope that it will be useful,
12 ## but WITHOUT ANY WARRANTY; without even the implied warranty of
13 ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 ## GNU General Public License for more details.
15 ##
16 ## You should have received a copy of the GNU General Public License along
17 ## with this program; if not, write to the Free Software Foundation, Inc.,
18 ## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
19
20 # USB ids of Logitech wireless receivers.
21 # Only receivers supporting the HID++ protocol can go in here.
22
23 from __future__ import absolute_import, division, print_function, unicode_literals
24
25
26 _DRIVER = ('hid-generic', 'generic-usb', 'logitech-djreceiver')
27
28 # max_devices is only used for receivers that do not support reading from _R.receiver_info offset 0x03, default to 1
29 # may_unpair is only used for receivers that do not support reading from _R.receiver_info offset 0x03, default to False
30 ## should this last be changed so that may_unpair is used for all receivers? writing to _R.receiver_pairing doesn't seem right
31 # re_pairs determines whether a receiver pairs by replacing existing pairings, default to False
32 ## currently only one receiver is so marked - should there be more?
33
34 _unifying_receiver = lambda product_id: {
35 'vendor_id':0x046d,
36 'product_id':product_id,
37 'usb_interface':2,
38 'hid_driver':_DRIVER,
39 'name':'Unifying Receiver'
40 }
41
42 _nano_receiver = lambda product_id: {
43 'vendor_id':0x046d,
44 'product_id':product_id,
45 'usb_interface':1,
46 'hid_driver':_DRIVER,
47 'name':'Nano Receiver',
48 'may_unpair': False,
49 're_pairs': True
50 }
51
52 _nano_receiver_max2 = lambda product_id: {
53 'vendor_id':0x046d,
54 'product_id':product_id,
55 'usb_interface':1,
56 'hid_driver':_DRIVER,
57 'name':'Nano Receiver',
58 'max_devices': 2,
59 'may_unpair': False,
60 're_pairs': True
61 }
62
63 _nano_receiver_maxn = lambda product_id, max: {
64 'vendor_id':0x046d,
65 'product_id':product_id,
66 'usb_interface':1,
67 'hid_driver':_DRIVER,
68 'name':'Nano Receiver',
69 'max_devices': max,
70 'may_unpair': False,
71 're_pairs': True
72 }
73
74 _lenovo_receiver = lambda product_id: {
75 'vendor_id':0x17ef,
76 'product_id':product_id,
77 'usb_interface':1,
78 'hid_driver':_DRIVER,
79 'name':'Nano Receiver'
80 }
81
82 _lightspeed_receiver = lambda product_id: {
83 'vendor_id':0x046d,
84 'product_id':product_id,
85 'usb_interface':2,
86 'hid_driver':_DRIVER,
87 'name':'Lightspeed Receiver'
88 }
89
90 # standard Unifying receivers (marked with the orange Unifying logo)
91 UNIFYING_RECEIVER_C52B = _unifying_receiver(0xc52b)
92 UNIFYING_RECEIVER_C532 = _unifying_receiver(0xc532)
93
94 # Nano receivers that support the Unifying protocol
95 NANO_RECEIVER_ADVANCED = _nano_receiver(0xc52f)
96
97 # Nano receivers that don't support the Unifying protocol
98 NANO_RECEIVER_C517 = _nano_receiver_maxn(0xc517,6)
99 NANO_RECEIVER_C518 = _nano_receiver(0xc518)
100 NANO_RECEIVER_C51A = _nano_receiver(0xc51a)
101 NANO_RECEIVER_C51B = _nano_receiver(0xc51b)
102 NANO_RECEIVER_C521 = _nano_receiver(0xc521)
103 NANO_RECEIVER_C525 = _nano_receiver(0xc525)
104 NANO_RECEIVER_C526 = _nano_receiver(0xc526)
105 NANO_RECEIVER_C52e = _nano_receiver(0xc52e)
106 NANO_RECEIVER_C531 = _nano_receiver(0xc531)
107 NANO_RECEIVER_C534 = _nano_receiver_max2(0xc534)
108 NANO_RECEIVER_C537 = _nano_receiver(0xc537)
109 NANO_RECEIVER_6042 = _lenovo_receiver(0x6042)
110
111 # Lightspeed receivers
112 LIGHTSPEED_RECEIVER_C539 = _lightspeed_receiver(0xc539)
113 LIGHTSPEED_RECEIVER_C53a = _lightspeed_receiver(0xc53a)
114 LIGHTSPEED_RECEIVER_C53f = _lightspeed_receiver(0xc53f)
115
116 del _DRIVER, _unifying_receiver, _nano_receiver, _lenovo_receiver, _lightspeed_receiver
117
118
119 ALL = (
120 UNIFYING_RECEIVER_C52B,
121 UNIFYING_RECEIVER_C532,
122 NANO_RECEIVER_ADVANCED,
123 NANO_RECEIVER_C517,
124 NANO_RECEIVER_C518,
125 NANO_RECEIVER_C51A,
126 NANO_RECEIVER_C51B,
127 NANO_RECEIVER_C521,
128 NANO_RECEIVER_C525,
129 NANO_RECEIVER_C526,
130 NANO_RECEIVER_C52e,
131 NANO_RECEIVER_C531,
132 NANO_RECEIVER_C534,
133 NANO_RECEIVER_C537,
134 NANO_RECEIVER_6042,
135 LIGHTSPEED_RECEIVER_C539,
136 LIGHTSPEED_RECEIVER_C53a,
137 LIGHTSPEED_RECEIVER_C53f,
138 )
139
140 def product_information(usb_id):
141 if isinstance(usb_id,str):
142 usb_id = int(usb_id,16)
143 for r in ALL:
144 if usb_id == r.get('product_id'):
145 return r
146 return { }
147
[end of lib/logitech_receiver/base_usb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/logitech_receiver/base_usb.py b/lib/logitech_receiver/base_usb.py
--- a/lib/logitech_receiver/base_usb.py
+++ b/lib/logitech_receiver/base_usb.py
@@ -112,6 +112,7 @@
LIGHTSPEED_RECEIVER_C539 = _lightspeed_receiver(0xc539)
LIGHTSPEED_RECEIVER_C53a = _lightspeed_receiver(0xc53a)
LIGHTSPEED_RECEIVER_C53f = _lightspeed_receiver(0xc53f)
+LIGHTSPEED_RECEIVER_C53d = _lightspeed_receiver(0xc53d)
del _DRIVER, _unifying_receiver, _nano_receiver, _lenovo_receiver, _lightspeed_receiver
@@ -135,6 +136,7 @@
LIGHTSPEED_RECEIVER_C539,
LIGHTSPEED_RECEIVER_C53a,
LIGHTSPEED_RECEIVER_C53f,
+ LIGHTSPEED_RECEIVER_C53d,
)
def product_information(usb_id):
|
{"golden_diff": "diff --git a/lib/logitech_receiver/base_usb.py b/lib/logitech_receiver/base_usb.py\n--- a/lib/logitech_receiver/base_usb.py\n+++ b/lib/logitech_receiver/base_usb.py\n@@ -112,6 +112,7 @@\n LIGHTSPEED_RECEIVER_C539 = _lightspeed_receiver(0xc539)\n LIGHTSPEED_RECEIVER_C53a = _lightspeed_receiver(0xc53a)\n LIGHTSPEED_RECEIVER_C53f = _lightspeed_receiver(0xc53f)\n+LIGHTSPEED_RECEIVER_C53d = _lightspeed_receiver(0xc53d)\n \n del _DRIVER, _unifying_receiver, _nano_receiver, _lenovo_receiver, _lightspeed_receiver\n \n@@ -135,6 +136,7 @@\n \t\tLIGHTSPEED_RECEIVER_C539,\n \t\tLIGHTSPEED_RECEIVER_C53a,\n \t\tLIGHTSPEED_RECEIVER_C53f,\n+\t\tLIGHTSPEED_RECEIVER_C53d,\n \t)\n \n def product_information(usb_id):\n", "issue": "Support for Logitech G613 with receiver id c53d\nApparently solaar v1.0.1 on ubuntu ppa does not support Lightspeed receiver with id c53d . solaar reported that no receiver has been found.\r\n\r\nHere is the output of my lsusb\r\n\r\ntonny@fenrir:[~]: lsusb\r\nBus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub\r\nBus 003 Device 005: ID 1b1c:1b65 Corsair \r\nBus 003 Device 006: ID 046d:c53d Logitech, Inc. \r\nBus 003 Device 003: ID 046d:082b Logitech, Inc. Webcam C170\r\nBus 003 Device 002: ID 1a40:0101 Terminus Technology Inc. Hub\r\nBus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub\r\nBus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub\r\nBus 001 Device 002: ID 8087:0aa7 Intel Corp. \r\nBus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub\r\n\n", "before_files": [{"content": "# -*- python-mode -*-\n# -*- coding: UTF-8 -*-\n\n## Copyright (C) 2012-2013 Daniel Pavel\n##\n## This program is free software; you can redistribute it and/or modify\n## it under the terms of the GNU General Public License as published by\n## the Free Software Foundation; either version 2 of the License, or\n## (at your option) any later version.\n##\n## This program is distributed in the hope that it will be useful,\n## but WITHOUT ANY WARRANTY; without even the implied warranty of\n## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n## GNU General Public License for more details.\n##\n## You should have received a copy of the GNU General Public License along\n## with this program; if not, write to the Free Software Foundation, Inc.,\n## 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n\n# USB ids of Logitech wireless receivers.\n# Only receivers supporting the HID++ protocol can go in here.\n\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\n\n_DRIVER = ('hid-generic', 'generic-usb', 'logitech-djreceiver')\n\n# max_devices is only used for receivers that do not support reading from _R.receiver_info offset 0x03, default to 1\n# may_unpair is only used for receivers that do not support reading from _R.receiver_info offset 0x03, default to False\n## should this last be changed so that may_unpair is used for all receivers? 
writing to _R.receiver_pairing doesn't seem right\n# re_pairs determines whether a receiver pairs by replacing existing pairings, default to False\n## currently only one receiver is so marked - should there be more?\n\n_unifying_receiver = lambda product_id: {\n\t'vendor_id':0x046d,\n\t'product_id':product_id, \n\t'usb_interface':2,\n\t'hid_driver':_DRIVER,\n\t'name':'Unifying Receiver'\n}\n\n_nano_receiver = lambda product_id: {\n\t'vendor_id':0x046d,\n\t'product_id':product_id,\n\t'usb_interface':1,\n\t'hid_driver':_DRIVER,\n\t'name':'Nano Receiver',\n\t'may_unpair': False,\n\t're_pairs': True \n}\n\n_nano_receiver_max2 = lambda product_id: {\n\t'vendor_id':0x046d,\n\t'product_id':product_id,\n\t'usb_interface':1,\n\t'hid_driver':_DRIVER,\n\t'name':'Nano Receiver',\n\t'max_devices': 2,\n\t'may_unpair': False,\n\t're_pairs': True \n}\n\n_nano_receiver_maxn = lambda product_id, max: {\n\t'vendor_id':0x046d,\n\t'product_id':product_id,\n\t'usb_interface':1,\n\t'hid_driver':_DRIVER,\n\t'name':'Nano Receiver',\n\t'max_devices': max,\n\t'may_unpair': False,\n\t're_pairs': True \n}\n\n_lenovo_receiver = lambda product_id: {\n\t'vendor_id':0x17ef, \n\t'product_id':product_id, \n\t'usb_interface':1, \n\t'hid_driver':_DRIVER, \n\t'name':'Nano Receiver'\n}\n\n_lightspeed_receiver = lambda product_id: {\n\t'vendor_id':0x046d,\n\t'product_id':product_id,\n\t'usb_interface':2,\n\t'hid_driver':_DRIVER,\n\t'name':'Lightspeed Receiver'\n}\n\n# standard Unifying receivers (marked with the orange Unifying logo)\nUNIFYING_RECEIVER_C52B = _unifying_receiver(0xc52b)\nUNIFYING_RECEIVER_C532 = _unifying_receiver(0xc532)\n\n# Nano receviers that support the Unifying protocol\nNANO_RECEIVER_ADVANCED = _nano_receiver(0xc52f)\n\n# Nano receivers that don't support the Unifying protocol\nNANO_RECEIVER_C517 = _nano_receiver_maxn(0xc517,6)\nNANO_RECEIVER_C518 = _nano_receiver(0xc518)\nNANO_RECEIVER_C51A = _nano_receiver(0xc51a)\nNANO_RECEIVER_C51B = _nano_receiver(0xc51b)\nNANO_RECEIVER_C521 = _nano_receiver(0xc521)\nNANO_RECEIVER_C525 = _nano_receiver(0xc525)\nNANO_RECEIVER_C526 = _nano_receiver(0xc526)\nNANO_RECEIVER_C52e = _nano_receiver(0xc52e)\nNANO_RECEIVER_C531 = _nano_receiver(0xc531)\nNANO_RECEIVER_C534 = _nano_receiver_max2(0xc534)\nNANO_RECEIVER_C537 = _nano_receiver(0xc537)\nNANO_RECEIVER_6042 = _lenovo_receiver(0x6042)\n\n# Lightspeed receivers\nLIGHTSPEED_RECEIVER_C539 = _lightspeed_receiver(0xc539)\nLIGHTSPEED_RECEIVER_C53a = _lightspeed_receiver(0xc53a)\nLIGHTSPEED_RECEIVER_C53f = _lightspeed_receiver(0xc53f)\n\ndel _DRIVER, _unifying_receiver, _nano_receiver, _lenovo_receiver, _lightspeed_receiver\n\n\nALL = (\n\t\tUNIFYING_RECEIVER_C52B,\n\t\tUNIFYING_RECEIVER_C532,\n\t\tNANO_RECEIVER_ADVANCED,\n\t\tNANO_RECEIVER_C517,\n\t\tNANO_RECEIVER_C518,\n\t\tNANO_RECEIVER_C51A,\n\t\tNANO_RECEIVER_C51B,\n\t\tNANO_RECEIVER_C521,\n\t\tNANO_RECEIVER_C525,\n\t\tNANO_RECEIVER_C526,\n\t\tNANO_RECEIVER_C52e,\n\t\tNANO_RECEIVER_C531,\n\t\tNANO_RECEIVER_C534,\n\t\tNANO_RECEIVER_C537,\n\t\tNANO_RECEIVER_6042,\n\t\tLIGHTSPEED_RECEIVER_C539,\n\t\tLIGHTSPEED_RECEIVER_C53a,\n\t\tLIGHTSPEED_RECEIVER_C53f,\n\t)\n\ndef product_information(usb_id):\n\tif isinstance(usb_id,str):\n\t\tusb_id = int(usb_id,16)\n\tfor r in ALL:\n\t\tif usb_id == r.get('product_id'):\n\t\t\treturn r\n\treturn { }\n", "path": "lib/logitech_receiver/base_usb.py"}]}
| 2,723 | 242 |
gh_patches_debug_16422
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-4133
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ThermalSensor` doesn't respect `foreground` in `widget_defaults`
### The issue:
(reported on Discord/IRC)
ThermalSensor widget doesn't respect foreground colour in widget defaults. This is because the foreground value is copied to `foreground_normal` during `__init__`, but this happens before `widget_defaults` are copied to the widget (which happens in `_configure`).
### Required:
- [X] I have searched past issues to see if this bug has already been reported.
</issue>
<code>
[start of libqtile/widget/sensors.py]
1 # -*- coding:utf-8 -*-
2 # Copyright (c) 2012 TiN
3 # Copyright (c) 2012, 2014 Tycho Andersen
4 # Copyright (c) 2013 Tao Sauvage
5 # Copyright (c) 2014-2015 Sean Vig
6 # Copyright (c) 2014 Adi Sieker
7 # Copyright (c) 2014 Foster McLane
8 #
9 # Permission is hereby granted, free of charge, to any person obtaining a copy
10 # of this software and associated documentation files (the "Software"), to deal
11 # in the Software without restriction, including without limitation the rights
12 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
13 # copies of the Software, and to permit persons to whom the Software is
14 # furnished to do so, subject to the following conditions:
15 #
16 # The above copyright notice and this permission notice shall be included in
17 # all copies or substantial portions of the Software.
18 #
19 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
20 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
21 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
22 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
23 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
24 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
25 # SOFTWARE.
26
27 import psutil
28
29 from libqtile.widget import base
30
31
32 class ThermalSensor(base.InLoopPollText):
33 """Widget to display temperature sensor information
34
35 For using the thermal sensor widget you need to have lm-sensors installed.
36 You can get a list of the tag_sensors by executing "sensors" in your terminal.
37 Then you can choose which you want, otherwise it will display the first
38 available.
39
40 Widget requirements: psutil_.
41
42 .. _psutil: https://pypi.org/project/psutil/
43 """
44
45 defaults = [
46 (
47 "format",
48 "{temp:.1f}{unit}",
49 "Display string format. Three options available: "
50 "``{temp}`` - temperature, "
51 "``{tag}`` - tag of the temperature sensor, and "
52 "``{unit}`` - °C or °F",
53 ),
54 ("metric", True, "True to use metric/C, False to use imperial/F"),
55 ("update_interval", 2, "Update interval in seconds"),
56 ("tag_sensor", None, 'Tag of the temperature sensor. For example: "temp1" or "Core 0"'),
57 (
58 "threshold",
59 70,
60 "If the current temperature value is above, "
61 "then change to foreground_alert colour",
62 ),
63 ("foreground_alert", "ff0000", "Foreground colour alert"),
64 ]
65
66 def __init__(self, **config):
67 base.InLoopPollText.__init__(self, **config)
68 self.add_defaults(ThermalSensor.defaults)
69 temp_values = self.get_temp_sensors()
70 self.foreground_normal = self.foreground
71
72 if temp_values is None:
73 self.data = "sensors command not found"
74 elif len(temp_values) == 0:
75 self.data = "Temperature sensors not found"
76 elif self.tag_sensor is None:
77 for k in temp_values:
78 self.tag_sensor = k
79 break
80
81 def _configure(self, qtile, bar):
82 self.unit = "°C" if self.metric else "°F"
83 base.InLoopPollText._configure(self, qtile, bar)
84
85 def get_temp_sensors(self):
86 """
87 Reads temperatures from sys-fs via psutil.
88 Output will be read Fahrenheit if user has specified it to be.
89 """
90
91 temperature_list = {}
92 temps = psutil.sensors_temperatures(fahrenheit=not self.metric)
93 empty_index = 0
94 for kernel_module in temps:
95 for sensor in temps[kernel_module]:
96 label = sensor.label
97 if not label:
98 label = "{}-{}".format(
99 kernel_module if kernel_module else "UNKNOWN", str(empty_index)
100 )
101 empty_index += 1
102 temperature_list[label] = sensor.current
103
104 return temperature_list
105
106 def poll(self):
107 temp_values = self.get_temp_sensors()
108
109 # Temperature not available
110 if (temp_values is None) or (self.tag_sensor not in temp_values):
111 return "N/A"
112
113 temp_value = temp_values.get(self.tag_sensor)
114 if temp_value > self.threshold:
115 self.layout.colour = self.foreground_alert
116 else:
117 self.layout.colour = self.foreground_normal
118
119 val = dict(temp=temp_value, tag=self.tag_sensor, unit=self.unit)
120 return self.format.format(**val)
121
[end of libqtile/widget/sensors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/widget/sensors.py b/libqtile/widget/sensors.py
--- a/libqtile/widget/sensors.py
+++ b/libqtile/widget/sensors.py
@@ -67,7 +67,6 @@
base.InLoopPollText.__init__(self, **config)
self.add_defaults(ThermalSensor.defaults)
temp_values = self.get_temp_sensors()
- self.foreground_normal = self.foreground
if temp_values is None:
self.data = "sensors command not found"
@@ -81,6 +80,7 @@
def _configure(self, qtile, bar):
self.unit = "°C" if self.metric else "°F"
base.InLoopPollText._configure(self, qtile, bar)
+ self.foreground_normal = self.foreground
def get_temp_sensors(self):
"""
|
{"golden_diff": "diff --git a/libqtile/widget/sensors.py b/libqtile/widget/sensors.py\n--- a/libqtile/widget/sensors.py\n+++ b/libqtile/widget/sensors.py\n@@ -67,7 +67,6 @@\n base.InLoopPollText.__init__(self, **config)\n self.add_defaults(ThermalSensor.defaults)\n temp_values = self.get_temp_sensors()\n- self.foreground_normal = self.foreground\n \n if temp_values is None:\n self.data = \"sensors command not found\"\n@@ -81,6 +80,7 @@\n def _configure(self, qtile, bar):\n self.unit = \"\u00b0C\" if self.metric else \"\u00b0F\"\n base.InLoopPollText._configure(self, qtile, bar)\n+ self.foreground_normal = self.foreground\n \n def get_temp_sensors(self):\n \"\"\"\n", "issue": "`ThermalSensor` doesn't respect `foreground` in `widget_defaults`\n### The issue:\n\n(reported on Discord/IRC)\r\n\r\nThermalSensor widget doesn't respect foreground colour in widget defaults. This is because the foreground value is copied to `foreground_normal` during `__init__` but this happens before `widget_defaults` are copied to the widget (which happens in `_configure`\n\n### Required:\n\n- [X] I have searched past issues to see if this bug has already been reported.\n", "before_files": [{"content": "# -*- coding:utf-8 -*-\n# Copyright (c) 2012 TiN\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2013 Tao Sauvage\n# Copyright (c) 2014-2015 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n# Copyright (c) 2014 Foster McLane\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport psutil\n\nfrom libqtile.widget import base\n\n\nclass ThermalSensor(base.InLoopPollText):\n \"\"\"Widget to display temperature sensor information\n\n For using the thermal sensor widget you need to have lm-sensors installed.\n You can get a list of the tag_sensors executing \"sensors\" in your terminal.\n Then you can choose which you want, otherwise it will display the first\n available.\n\n Widget requirements: psutil_.\n\n .. _psutil: https://pypi.org/project/psutil/\n \"\"\"\n\n defaults = [\n (\n \"format\",\n \"{temp:.1f}{unit}\",\n \"Display string format. Three options available: \"\n \"``{temp}`` - temperature, \"\n \"``{tag}`` - tag of the temperature sensor, and \"\n \"``{unit}`` - \u00b0C or \u00b0F\",\n ),\n (\"metric\", True, \"True to use metric/C, False to use imperial/F\"),\n (\"update_interval\", 2, \"Update interval in seconds\"),\n (\"tag_sensor\", None, 'Tag of the temperature sensor. 
For example: \"temp1\" or \"Core 0\"'),\n (\n \"threshold\",\n 70,\n \"If the current temperature value is above, \"\n \"then change to foreground_alert colour\",\n ),\n (\"foreground_alert\", \"ff0000\", \"Foreground colour alert\"),\n ]\n\n def __init__(self, **config):\n base.InLoopPollText.__init__(self, **config)\n self.add_defaults(ThermalSensor.defaults)\n temp_values = self.get_temp_sensors()\n self.foreground_normal = self.foreground\n\n if temp_values is None:\n self.data = \"sensors command not found\"\n elif len(temp_values) == 0:\n self.data = \"Temperature sensors not found\"\n elif self.tag_sensor is None:\n for k in temp_values:\n self.tag_sensor = k\n break\n\n def _configure(self, qtile, bar):\n self.unit = \"\u00b0C\" if self.metric else \"\u00b0F\"\n base.InLoopPollText._configure(self, qtile, bar)\n\n def get_temp_sensors(self):\n \"\"\"\n Reads temperatures from sys-fs via psutil.\n Output will be read Fahrenheit if user has specified it to be.\n \"\"\"\n\n temperature_list = {}\n temps = psutil.sensors_temperatures(fahrenheit=not self.metric)\n empty_index = 0\n for kernel_module in temps:\n for sensor in temps[kernel_module]:\n label = sensor.label\n if not label:\n label = \"{}-{}\".format(\n kernel_module if kernel_module else \"UNKNOWN\", str(empty_index)\n )\n empty_index += 1\n temperature_list[label] = sensor.current\n\n return temperature_list\n\n def poll(self):\n temp_values = self.get_temp_sensors()\n\n # Temperature not available\n if (temp_values is None) or (self.tag_sensor not in temp_values):\n return \"N/A\"\n\n temp_value = temp_values.get(self.tag_sensor)\n if temp_value > self.threshold:\n self.layout.colour = self.foreground_alert\n else:\n self.layout.colour = self.foreground_normal\n\n val = dict(temp=temp_value, tag=self.tag_sensor, unit=self.unit)\n return self.format.format(**val)\n", "path": "libqtile/widget/sensors.py"}]}
| 1,943 | 190 |
gh_patches_debug_36671
|
rasdani/github-patches
|
git_diff
|
ytdl-org__youtube-dl-22954
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JW player downloads seem to be broken
Using the latest version 2019.10.29
Tried downloading JWPlayer video embedded on this page:
https://www.businessinsider.com/sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3
Here's the verbose output:
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'-v', u'-F', u'https://www.businessinsider.com/sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3?fbclid=IwAR2jOlaH5ADErmCQj44J8BOE-IJfNPAhBgnFPpV-nIOi7DK86sscO4YN9pA']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.10.29
[debug] Python version 2.7.16 (CPython) - Darwin-18.7.0-x86_64-i386-64bit
[debug] exe versions: rtmpdump 2.4
[debug] Proxy map: {}
[BusinessInsider] sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3: Downloading webpage
ERROR: Unable to extract jwplatform id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 796, in extract_info
ie_result = ie.extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 530, in extract
ie_result = self._real_extract(url)
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/businessinsider.py", line 39, in _real_extract
webpage, 'jwplatform id')
File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 1005, in _search_regex
raise RegexNotFoundError('Unable to extract %s' % _name)
RegexNotFoundError: Unable to extract jwplatform id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
</issue>
<code>
[start of youtube_dl/extractor/businessinsider.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 from .common import InfoExtractor
5 from .jwplatform import JWPlatformIE
6
7
8 class BusinessInsiderIE(InfoExtractor):
9 _VALID_URL = r'https?://(?:[^/]+\.)?businessinsider\.(?:com|nl)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
10 _TESTS = [{
11 'url': 'http://uk.businessinsider.com/how-much-radiation-youre-exposed-to-in-everyday-life-2016-6',
12 'md5': 'ca237a53a8eb20b6dc5bd60564d4ab3e',
13 'info_dict': {
14 'id': 'hZRllCfw',
15 'ext': 'mp4',
16 'title': "Here's how much radiation you're exposed to in everyday life",
17 'description': 'md5:9a0d6e2c279948aadaa5e84d6d9b99bd',
18 'upload_date': '20170709',
19 'timestamp': 1499606400,
20 },
21 'params': {
22 'skip_download': True,
23 },
24 }, {
25 'url': 'https://www.businessinsider.nl/5-scientifically-proven-things-make-you-less-attractive-2017-7/',
26 'only_matching': True,
27 }, {
28 'url': 'http://www.businessinsider.com/excel-index-match-vlookup-video-how-to-2015-2?IR=T',
29 'only_matching': True,
30 }]
31
32 def _real_extract(self, url):
33 video_id = self._match_id(url)
34 webpage = self._download_webpage(url, video_id)
35 jwplatform_id = self._search_regex(
36 (r'data-media-id=["\']([a-zA-Z0-9]{8})',
37 r'id=["\']jwplayer_([a-zA-Z0-9]{8})',
38 r'id["\']?\s*:\s*["\']?([a-zA-Z0-9]{8})'),
39 webpage, 'jwplatform id')
40 return self.url_result(
41 'jwplatform:%s' % jwplatform_id, ie=JWPlatformIE.ie_key(),
42 video_id=video_id)
43
[end of youtube_dl/extractor/businessinsider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/youtube_dl/extractor/businessinsider.py b/youtube_dl/extractor/businessinsider.py
--- a/youtube_dl/extractor/businessinsider.py
+++ b/youtube_dl/extractor/businessinsider.py
@@ -9,21 +9,26 @@
_VALID_URL = r'https?://(?:[^/]+\.)?businessinsider\.(?:com|nl)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://uk.businessinsider.com/how-much-radiation-youre-exposed-to-in-everyday-life-2016-6',
- 'md5': 'ca237a53a8eb20b6dc5bd60564d4ab3e',
+ 'md5': 'ffed3e1e12a6f950aa2f7d83851b497a',
'info_dict': {
- 'id': 'hZRllCfw',
+ 'id': 'cjGDb0X9',
'ext': 'mp4',
- 'title': "Here's how much radiation you're exposed to in everyday life",
- 'description': 'md5:9a0d6e2c279948aadaa5e84d6d9b99bd',
- 'upload_date': '20170709',
- 'timestamp': 1499606400,
- },
- 'params': {
- 'skip_download': True,
+ 'title': "Bananas give you more radiation exposure than living next to a nuclear power plant",
+ 'description': 'md5:0175a3baf200dd8fa658f94cade841b3',
+ 'upload_date': '20160611',
+ 'timestamp': 1465675620,
},
}, {
'url': 'https://www.businessinsider.nl/5-scientifically-proven-things-make-you-less-attractive-2017-7/',
- 'only_matching': True,
+ 'md5': '43f438dbc6da0b89f5ac42f68529d84a',
+ 'info_dict': {
+ 'id': '5zJwd4FK',
+ 'ext': 'mp4',
+ 'title': 'Deze dingen zorgen ervoor dat je minder snel een date scoort',
+ 'description': 'md5:2af8975825d38a4fed24717bbe51db49',
+ 'upload_date': '20170705',
+ 'timestamp': 1499270528,
+ },
}, {
'url': 'http://www.businessinsider.com/excel-index-match-vlookup-video-how-to-2015-2?IR=T',
'only_matching': True,
@@ -35,7 +40,8 @@
jwplatform_id = self._search_regex(
(r'data-media-id=["\']([a-zA-Z0-9]{8})',
r'id=["\']jwplayer_([a-zA-Z0-9]{8})',
- r'id["\']?\s*:\s*["\']?([a-zA-Z0-9]{8})'),
+ r'id["\']?\s*:\s*["\']?([a-zA-Z0-9]{8})',
+ r'(?:jwplatform\.com/players/|jwplayer_)([a-zA-Z0-9]{8})'),
webpage, 'jwplatform id')
return self.url_result(
'jwplatform:%s' % jwplatform_id, ie=JWPlatformIE.ie_key(),
|
{"golden_diff": "diff --git a/youtube_dl/extractor/businessinsider.py b/youtube_dl/extractor/businessinsider.py\n--- a/youtube_dl/extractor/businessinsider.py\n+++ b/youtube_dl/extractor/businessinsider.py\n@@ -9,21 +9,26 @@\n _VALID_URL = r'https?://(?:[^/]+\\.)?businessinsider\\.(?:com|nl)/(?:[^/]+/)*(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'http://uk.businessinsider.com/how-much-radiation-youre-exposed-to-in-everyday-life-2016-6',\n- 'md5': 'ca237a53a8eb20b6dc5bd60564d4ab3e',\n+ 'md5': 'ffed3e1e12a6f950aa2f7d83851b497a',\n 'info_dict': {\n- 'id': 'hZRllCfw',\n+ 'id': 'cjGDb0X9',\n 'ext': 'mp4',\n- 'title': \"Here's how much radiation you're exposed to in everyday life\",\n- 'description': 'md5:9a0d6e2c279948aadaa5e84d6d9b99bd',\n- 'upload_date': '20170709',\n- 'timestamp': 1499606400,\n- },\n- 'params': {\n- 'skip_download': True,\n+ 'title': \"Bananas give you more radiation exposure than living next to a nuclear power plant\",\n+ 'description': 'md5:0175a3baf200dd8fa658f94cade841b3',\n+ 'upload_date': '20160611',\n+ 'timestamp': 1465675620,\n },\n }, {\n 'url': 'https://www.businessinsider.nl/5-scientifically-proven-things-make-you-less-attractive-2017-7/',\n- 'only_matching': True,\n+ 'md5': '43f438dbc6da0b89f5ac42f68529d84a',\n+ 'info_dict': {\n+ 'id': '5zJwd4FK',\n+ 'ext': 'mp4',\n+ 'title': 'Deze dingen zorgen ervoor dat je minder snel een date scoort',\n+ 'description': 'md5:2af8975825d38a4fed24717bbe51db49',\n+ 'upload_date': '20170705',\n+ 'timestamp': 1499270528,\n+ },\n }, {\n 'url': 'http://www.businessinsider.com/excel-index-match-vlookup-video-how-to-2015-2?IR=T',\n 'only_matching': True,\n@@ -35,7 +40,8 @@\n jwplatform_id = self._search_regex(\n (r'data-media-id=[\"\\']([a-zA-Z0-9]{8})',\n r'id=[\"\\']jwplayer_([a-zA-Z0-9]{8})',\n- r'id[\"\\']?\\s*:\\s*[\"\\']?([a-zA-Z0-9]{8})'),\n+ r'id[\"\\']?\\s*:\\s*[\"\\']?([a-zA-Z0-9]{8})',\n+ r'(?:jwplatform\\.com/players/|jwplayer_)([a-zA-Z0-9]{8})'),\n webpage, 'jwplatform id')\n return self.url_result(\n 'jwplatform:%s' % jwplatform_id, ie=JWPlatformIE.ie_key(),\n", "issue": "JW player downloads seem to be broken\nUsing the latest version 2019.10.29\r\nTried downloading JWPlayer video embedded on this page:\r\nhttps://www.businessinsider.com/sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3\r\n\r\nHere's the verbose output:\r\n\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'-F', u'https://www.businessinsider.com/sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3?fbclid=IwAR2jOlaH5ADErmCQj44J8BOE-IJfNPAhBgnFPpV-nIOi7DK86sscO4YN9pA']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2019.10.29\r\n[debug] Python version 2.7.16 (CPython) - Darwin-18.7.0-x86_64-i386-64bit\r\n[debug] exe versions: rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n[BusinessInsider] sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3: Downloading webpage\r\nERROR: Unable to extract jwplatform id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 796, in extract_info\r\n ie_result = ie.extract(url)\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 530, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/businessinsider.py\", line 39, in _real_extract\r\n webpage, 'jwplatform id')\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 1005, in _search_regex\r\n raise RegexNotFoundError('Unable to extract %s' % _name)\r\nRegexNotFoundError: Unable to extract jwplatform id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\n\r\n\r\n\nJW player downloads seem to be broken\nUsing the latest version 2019.10.29\r\nTried downloading JWPlayer video embedded on this page:\r\nhttps://www.businessinsider.com/sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3\r\n\r\nHere's the verbose output:\r\n\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'-F', u'https://www.businessinsider.com/sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3?fbclid=IwAR2jOlaH5ADErmCQj44J8BOE-IJfNPAhBgnFPpV-nIOi7DK86sscO4YN9pA']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2019.10.29\r\n[debug] Python version 2.7.16 (CPython) - Darwin-18.7.0-x86_64-i386-64bit\r\n[debug] exe versions: rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n[BusinessInsider] sesame-street-mock-trump-best-moments-budget-proposal-pbs-public-funding-2017-3: Downloading webpage\r\nERROR: Unable to extract jwplatform id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py\", line 796, in extract_info\r\n ie_result = ie.extract(url)\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 530, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/businessinsider.py\", line 39, in _real_extract\r\n webpage, 'jwplatform id')\r\n File \"/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py\", line 1005, in _search_regex\r\n raise RegexNotFoundError('Unable to extract %s' % _name)\r\nRegexNotFoundError: Unable to extract jwplatform id; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\n\r\n\r\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nfrom .common import InfoExtractor\nfrom .jwplatform import JWPlatformIE\n\n\nclass BusinessInsiderIE(InfoExtractor):\n _VALID_URL = r'https?://(?:[^/]+\\.)?businessinsider\\.(?:com|nl)/(?:[^/]+/)*(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'http://uk.businessinsider.com/how-much-radiation-youre-exposed-to-in-everyday-life-2016-6',\n 'md5': 'ca237a53a8eb20b6dc5bd60564d4ab3e',\n 'info_dict': {\n 'id': 'hZRllCfw',\n 'ext': 'mp4',\n 'title': \"Here's how much radiation you're exposed to in everyday life\",\n 'description': 'md5:9a0d6e2c279948aadaa5e84d6d9b99bd',\n 'upload_date': '20170709',\n 'timestamp': 1499606400,\n },\n 'params': {\n 'skip_download': True,\n },\n }, {\n 'url': 'https://www.businessinsider.nl/5-scientifically-proven-things-make-you-less-attractive-2017-7/',\n 'only_matching': True,\n }, {\n 'url': 'http://www.businessinsider.com/excel-index-match-vlookup-video-how-to-2015-2?IR=T',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage(url, video_id)\n jwplatform_id = self._search_regex(\n (r'data-media-id=[\"\\']([a-zA-Z0-9]{8})',\n r'id=[\"\\']jwplayer_([a-zA-Z0-9]{8})',\n r'id[\"\\']?\\s*:\\s*[\"\\']?([a-zA-Z0-9]{8})'),\n webpage, 'jwplatform id')\n return self.url_result(\n 'jwplatform:%s' % jwplatform_id, ie=JWPlatformIE.ie_key(),\n video_id=video_id)\n", "path": "youtube_dl/extractor/businessinsider.py"}]}
| 2,432 | 898 |
gh_patches_debug_23704
|
rasdani/github-patches
|
git_diff
|
dj-stripe__dj-stripe-222
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove subscriber__email from search_fields
> DJSTRIPE_SUBSCRIBER_MODEL must have an email field. If your existing model has no email field, add an email property that defines an email address to use.
Since the `DJSTRIPE_SUBSCRIBER_MODEL` could have email as a property, we probably shouldn't be assuming `subscriber__email` in admin `search_fields`.
</issue>
<code>
[start of djstripe/admin.py]
1 # -*- coding: utf-8 -*-
2 """
3 Note: Django 1.4 support was dropped in #107
4 https://github.com/pydanny/dj-stripe/pull/107
5 """
6
7 from django.contrib import admin
8
9 from .models import Event, EventProcessingException, Transfer, Charge, Plan
10 from .models import Invoice, InvoiceItem, CurrentSubscription, Customer
11
12
13 class CustomerHasCardListFilter(admin.SimpleListFilter):
14 title = "card presence"
15 parameter_name = "has_card"
16
17 def lookups(self, request, model_admin):
18 return [
19 ["yes", "Has Card"],
20 ["no", "Does Not Have a Card"]
21 ]
22
23 def queryset(self, request, queryset):
24 if self.value() == "yes":
25 return queryset.exclude(card_fingerprint="")
26 if self.value() == "no":
27 return queryset.filter(card_fingerprint="")
28
29
30 class InvoiceCustomerHasCardListFilter(admin.SimpleListFilter):
31 title = "card presence"
32 parameter_name = "has_card"
33
34 def lookups(self, request, model_admin):
35 return [
36 ["yes", "Has Card"],
37 ["no", "Does Not Have a Card"]
38 ]
39
40 def queryset(self, request, queryset):
41 if self.value() == "yes":
42 return queryset.exclude(customer__card_fingerprint="")
43 if self.value() == "no":
44 return queryset.filter(customer__card_fingerprint="")
45
46
47 class CustomerSubscriptionStatusListFilter(admin.SimpleListFilter):
48 title = "subscription status"
49 parameter_name = "sub_status"
50
51 def lookups(self, request, model_admin):
52 statuses = [
53 [x, x.replace("_", " ").title()]
54 for x in CurrentSubscription.objects.all().values_list(
55 "status",
56 flat=True
57 ).distinct()
58 ]
59 statuses.append(["none", "No Subscription"])
60 return statuses
61
62 def queryset(self, request, queryset):
63 if self.value() is None:
64 return queryset.all()
65 else:
66 return queryset.filter(current_subscription__status=self.value())
67
68
69 def send_charge_receipt(modeladmin, request, queryset):
70 """
71 Function for sending receipts from the admin if a receipt is not sent for
72 a specific charge.
73 """
74 for charge in queryset:
75 charge.send_receipt()
76
77
78 admin.site.register(
79 Charge,
80 readonly_fields=('created',),
81 list_display=[
82 "stripe_id",
83 "customer",
84 "amount",
85 "description",
86 "paid",
87 "disputed",
88 "refunded",
89 "fee",
90 "receipt_sent",
91 "created"
92 ],
93 search_fields=[
94 "stripe_id",
95 "customer__stripe_id",
96 "customer__subscriber__email",
97 "card_last_4",
98 "invoice__stripe_id"
99 ],
100 list_filter=[
101 "paid",
102 "disputed",
103 "refunded",
104 "card_kind",
105 "created"
106 ],
107 raw_id_fields=[
108 "customer",
109 "invoice"
110 ],
111 actions=(send_charge_receipt,),
112 )
113
114 admin.site.register(
115 EventProcessingException,
116 readonly_fields=('created',),
117 list_display=[
118 "message",
119 "event",
120 "created"
121 ],
122 search_fields=[
123 "message",
124 "traceback",
125 "data"
126 ],
127 )
128
129 admin.site.register(
130 Event,
131 raw_id_fields=["customer"],
132 readonly_fields=('created',),
133 list_display=[
134 "stripe_id",
135 "kind",
136 "livemode",
137 "valid",
138 "processed",
139 "created"
140 ],
141 list_filter=[
142 "kind",
143 "created",
144 "valid",
145 "processed"
146 ],
147 search_fields=[
148 "stripe_id",
149 "customer__stripe_id",
150 "customer__subscriber__email",
151 "validated_message"
152 ],
153 )
154
155
156 class CurrentSubscriptionInline(admin.TabularInline):
157 model = CurrentSubscription
158
159
160 def subscription_status(obj):
161 return obj.current_subscription.status
162 subscription_status.short_description = "Subscription Status"
163
164
165 admin.site.register(
166 Customer,
167 raw_id_fields=["subscriber"],
168 readonly_fields=('created',),
169 list_display=[
170 "stripe_id",
171 "subscriber",
172 "card_kind",
173 "card_last_4",
174 subscription_status,
175 "created"
176 ],
177 list_filter=[
178 "card_kind",
179 CustomerHasCardListFilter,
180 CustomerSubscriptionStatusListFilter
181 ],
182 search_fields=[
183 "stripe_id",
184 "subscriber__email"
185 ],
186 inlines=[CurrentSubscriptionInline]
187 )
188
189
190 class InvoiceItemInline(admin.TabularInline):
191 model = InvoiceItem
192
193
194 def customer_has_card(obj):
195 """ Returns True if the customer has a card attached to its account."""
196 return obj.customer.card_fingerprint != ""
197 customer_has_card.short_description = "Customer Has Card"
198
199
200 def customer_email(obj):
201 """ Returns a string representation of the customer's email."""
202 return str(obj.customer.subscriber.email)
203 customer_email.short_description = "Customer"
204
205
206 admin.site.register(
207 Invoice,
208 raw_id_fields=["customer"],
209 readonly_fields=('created',),
210 list_display=[
211 "stripe_id",
212 "paid",
213 "closed",
214 customer_email,
215 customer_has_card,
216 "period_start",
217 "period_end",
218 "subtotal",
219 "total",
220 "created"
221 ],
222 search_fields=[
223 "stripe_id",
224 "customer__stripe_id",
225 "customer__subscriber__email"
226 ],
227 list_filter=[
228 InvoiceCustomerHasCardListFilter,
229 "paid",
230 "closed",
231 "attempted",
232 "attempts",
233 "created",
234 "date",
235 "period_end",
236 "total"
237 ],
238 inlines=[InvoiceItemInline]
239 )
240
241
242 admin.site.register(
243 Transfer,
244 raw_id_fields=["event"],
245 readonly_fields=('created',),
246 list_display=[
247 "stripe_id",
248 "amount",
249 "status",
250 "date",
251 "description",
252 "created"
253 ],
254 search_fields=[
255 "stripe_id",
256 "event__stripe_id"
257 ]
258 )
259
260
261 class PlanAdmin(admin.ModelAdmin):
262
263 def save_model(self, request, obj, form, change):
264 """Update or create objects using our custom methods that
265 sync with Stripe."""
266
267 if change:
268 obj.update_name()
269
270 else:
271 Plan.get_or_create(**form.cleaned_data)
272
273 def get_readonly_fields(self, request, obj=None):
274 readonly_fields = list(self.readonly_fields)
275 if obj:
276 readonly_fields.extend([
277 'stripe_id',
278 'amount',
279 'currency',
280 'interval',
281 'interval_count',
282 'trial_period_days'])
283
284 return readonly_fields
285
286 admin.site.register(Plan, PlanAdmin)
287
[end of djstripe/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/djstripe/admin.py b/djstripe/admin.py
--- a/djstripe/admin.py
+++ b/djstripe/admin.py
@@ -93,7 +93,6 @@
search_fields=[
"stripe_id",
"customer__stripe_id",
- "customer__subscriber__email",
"card_last_4",
"invoice__stripe_id"
],
@@ -147,7 +146,6 @@
search_fields=[
"stripe_id",
"customer__stripe_id",
- "customer__subscriber__email",
"validated_message"
],
)
@@ -180,8 +178,7 @@
CustomerSubscriptionStatusListFilter
],
search_fields=[
- "stripe_id",
- "subscriber__email"
+ "stripe_id"
],
inlines=[CurrentSubscriptionInline]
)
@@ -221,8 +218,7 @@
],
search_fields=[
"stripe_id",
- "customer__stripe_id",
- "customer__subscriber__email"
+ "customer__stripe_id"
],
list_filter=[
InvoiceCustomerHasCardListFilter,
|
{"golden_diff": "diff --git a/djstripe/admin.py b/djstripe/admin.py\n--- a/djstripe/admin.py\n+++ b/djstripe/admin.py\n@@ -93,7 +93,6 @@\n search_fields=[\n \"stripe_id\",\n \"customer__stripe_id\",\n- \"customer__subscriber__email\",\n \"card_last_4\",\n \"invoice__stripe_id\"\n ],\n@@ -147,7 +146,6 @@\n search_fields=[\n \"stripe_id\",\n \"customer__stripe_id\",\n- \"customer__subscriber__email\",\n \"validated_message\"\n ],\n )\n@@ -180,8 +178,7 @@\n CustomerSubscriptionStatusListFilter\n ],\n search_fields=[\n- \"stripe_id\",\n- \"subscriber__email\"\n+ \"stripe_id\"\n ],\n inlines=[CurrentSubscriptionInline]\n )\n@@ -221,8 +218,7 @@\n ],\n search_fields=[\n \"stripe_id\",\n- \"customer__stripe_id\",\n- \"customer__subscriber__email\"\n+ \"customer__stripe_id\"\n ],\n list_filter=[\n InvoiceCustomerHasCardListFilter,\n", "issue": "Remove subscriber__email from search_fields\n> DJSTRIPE_SUBSCRIBER_MODEL must have an email field. If your existing model has no email field, add an email property that defines an email address to use.\n\nSince the `DJSTRIPE_SUBSCRIBER_MODEL` could have email as a property we probably shouldn't be assuming `subscriber__email` in admin `search_fields`.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nNote: Django 1.4 support was dropped in #107\n https://github.com/pydanny/dj-stripe/pull/107\n\"\"\"\n\nfrom django.contrib import admin\n\nfrom .models import Event, EventProcessingException, Transfer, Charge, Plan\nfrom .models import Invoice, InvoiceItem, CurrentSubscription, Customer\n\n\nclass CustomerHasCardListFilter(admin.SimpleListFilter):\n title = \"card presence\"\n parameter_name = \"has_card\"\n\n def lookups(self, request, model_admin):\n return [\n [\"yes\", \"Has Card\"],\n [\"no\", \"Does Not Have a Card\"]\n ]\n\n def queryset(self, request, queryset):\n if self.value() == \"yes\":\n return queryset.exclude(card_fingerprint=\"\")\n if self.value() == \"no\":\n return queryset.filter(card_fingerprint=\"\")\n\n\nclass InvoiceCustomerHasCardListFilter(admin.SimpleListFilter):\n title = \"card presence\"\n parameter_name = \"has_card\"\n\n def lookups(self, request, model_admin):\n return [\n [\"yes\", \"Has Card\"],\n [\"no\", \"Does Not Have a Card\"]\n ]\n\n def queryset(self, request, queryset):\n if self.value() == \"yes\":\n return queryset.exclude(customer__card_fingerprint=\"\")\n if self.value() == \"no\":\n return queryset.filter(customer__card_fingerprint=\"\")\n\n\nclass CustomerSubscriptionStatusListFilter(admin.SimpleListFilter):\n title = \"subscription status\"\n parameter_name = \"sub_status\"\n\n def lookups(self, request, model_admin):\n statuses = [\n [x, x.replace(\"_\", \" \").title()]\n for x in CurrentSubscription.objects.all().values_list(\n \"status\",\n flat=True\n ).distinct()\n ]\n statuses.append([\"none\", \"No Subscription\"])\n return statuses\n\n def queryset(self, request, queryset):\n if self.value() is None:\n return queryset.all()\n else:\n return queryset.filter(current_subscription__status=self.value())\n\n\ndef send_charge_receipt(modeladmin, request, queryset):\n \"\"\"\n Function for sending receipts from the admin if a receipt is not sent for\n a specific charge.\n \"\"\"\n for charge in queryset:\n charge.send_receipt()\n\n\nadmin.site.register(\n Charge,\n readonly_fields=('created',),\n list_display=[\n \"stripe_id\",\n \"customer\",\n \"amount\",\n \"description\",\n \"paid\",\n \"disputed\",\n \"refunded\",\n \"fee\",\n \"receipt_sent\",\n \"created\"\n ],\n search_fields=[\n 
\"stripe_id\",\n \"customer__stripe_id\",\n \"customer__subscriber__email\",\n \"card_last_4\",\n \"invoice__stripe_id\"\n ],\n list_filter=[\n \"paid\",\n \"disputed\",\n \"refunded\",\n \"card_kind\",\n \"created\"\n ],\n raw_id_fields=[\n \"customer\",\n \"invoice\"\n ],\n actions=(send_charge_receipt,),\n)\n\nadmin.site.register(\n EventProcessingException,\n readonly_fields=('created',),\n list_display=[\n \"message\",\n \"event\",\n \"created\"\n ],\n search_fields=[\n \"message\",\n \"traceback\",\n \"data\"\n ],\n)\n\nadmin.site.register(\n Event,\n raw_id_fields=[\"customer\"],\n readonly_fields=('created',),\n list_display=[\n \"stripe_id\",\n \"kind\",\n \"livemode\",\n \"valid\",\n \"processed\",\n \"created\"\n ],\n list_filter=[\n \"kind\",\n \"created\",\n \"valid\",\n \"processed\"\n ],\n search_fields=[\n \"stripe_id\",\n \"customer__stripe_id\",\n \"customer__subscriber__email\",\n \"validated_message\"\n ],\n)\n\n\nclass CurrentSubscriptionInline(admin.TabularInline):\n model = CurrentSubscription\n\n\ndef subscription_status(obj):\n return obj.current_subscription.status\nsubscription_status.short_description = \"Subscription Status\"\n\n\nadmin.site.register(\n Customer,\n raw_id_fields=[\"subscriber\"],\n readonly_fields=('created',),\n list_display=[\n \"stripe_id\",\n \"subscriber\",\n \"card_kind\",\n \"card_last_4\",\n subscription_status,\n \"created\"\n ],\n list_filter=[\n \"card_kind\",\n CustomerHasCardListFilter,\n CustomerSubscriptionStatusListFilter\n ],\n search_fields=[\n \"stripe_id\",\n \"subscriber__email\"\n ],\n inlines=[CurrentSubscriptionInline]\n)\n\n\nclass InvoiceItemInline(admin.TabularInline):\n model = InvoiceItem\n\n\ndef customer_has_card(obj):\n \"\"\" Returns True if the customer has a card attached to its account.\"\"\"\n return obj.customer.card_fingerprint != \"\"\ncustomer_has_card.short_description = \"Customer Has Card\"\n\n\ndef customer_email(obj):\n \"\"\" Returns a string representation of the customer's email.\"\"\"\n return str(obj.customer.subscriber.email)\ncustomer_email.short_description = \"Customer\"\n\n\nadmin.site.register(\n Invoice,\n raw_id_fields=[\"customer\"],\n readonly_fields=('created',),\n list_display=[\n \"stripe_id\",\n \"paid\",\n \"closed\",\n customer_email,\n customer_has_card,\n \"period_start\",\n \"period_end\",\n \"subtotal\",\n \"total\",\n \"created\"\n ],\n search_fields=[\n \"stripe_id\",\n \"customer__stripe_id\",\n \"customer__subscriber__email\"\n ],\n list_filter=[\n InvoiceCustomerHasCardListFilter,\n \"paid\",\n \"closed\",\n \"attempted\",\n \"attempts\",\n \"created\",\n \"date\",\n \"period_end\",\n \"total\"\n ],\n inlines=[InvoiceItemInline]\n)\n\n\nadmin.site.register(\n Transfer,\n raw_id_fields=[\"event\"],\n readonly_fields=('created',),\n list_display=[\n \"stripe_id\",\n \"amount\",\n \"status\",\n \"date\",\n \"description\",\n \"created\"\n ],\n search_fields=[\n \"stripe_id\",\n \"event__stripe_id\"\n ]\n)\n\n\nclass PlanAdmin(admin.ModelAdmin):\n\n def save_model(self, request, obj, form, change):\n \"\"\"Update or create objects using our custom methods that\n sync with Stripe.\"\"\"\n\n if change:\n obj.update_name()\n\n else:\n Plan.get_or_create(**form.cleaned_data)\n\n def get_readonly_fields(self, request, obj=None):\n readonly_fields = list(self.readonly_fields)\n if obj:\n readonly_fields.extend([\n 'stripe_id',\n 'amount',\n 'currency',\n 'interval',\n 'interval_count',\n 'trial_period_days'])\n\n return readonly_fields\n\nadmin.site.register(Plan, PlanAdmin)\n", 
"path": "djstripe/admin.py"}]}
| 2,791 | 253 |
gh_patches_debug_20875
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-19045
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix traceback in service module when svc_cmd is None (2nd fix)
When service module is used on unsupported Linux system where init script is used directly, LinuxService.svc_cmd is None so .endswith() fails.
This extends fix from e2f20db53481128553d876109d5fbdab9f43dd5b also for state=restarted.
Fixes issue #3533
Running against Debian Lenny server:
```
$ ansible webserver.local -i webserver.local, -u root -m service -a 'name=apache2 state=restarted'
webserver.local | FAILED => failed to parse: Traceback (most recent call last):
File "/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service", line 2080, in <module>
main()
File "/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service", line 1112, in main
(rc, out, err) = service.modify_service_state()
File "/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service", line 299, in modify_service_state
return self.service_control()
File "/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service", line 670, in service_control
elif self.svc_cmd.endswith('rc-service'):
AttributeError: 'NoneType' object has no attribute 'endswith'
```
</issue>
<code>
[start of lib/ansible/modules/network/junos/junos_command.py]
1 #!/usr/bin/python
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17 #
18
19 ANSIBLE_METADATA = {'status': ['preview'],
20 'supported_by': 'core',
21 'version': '1.0'}
22
23 DOCUMENTATION = """
24 ---
25 module: junos_command
26 version_added: "2.1"
27 author: "Peter Sprygada (@privateip)"
28 short_description: Execute arbitrary commands on a remote device running Junos
29 description:
30 - Network devices running the Junos operating system provide a command
31 driven interface both over CLI and RPC. This module provides an
32 interface to execute commands using these functions and return the
33 results to the Ansible playbook. In addition, this
34 module can specify a set of conditionals to be evaluated against the
35 returned output, only returning control to the playbook once the
36 entire set of conditionals has been met.
37 extends_documentation_fragment: junos
38 options:
39 commands:
40 description:
41 - The C(commands) to send to the remote device over the Netconf
42 transport. The resulting output from the command
43 is returned. If the I(wait_for) argument is provided, the
44 module is not returned until the condition is satisfied or
45 the number of I(retries) has been exceeded.
46 required: false
47 default: null
48 rpcs:
49 description:
50 - The C(rpcs) argument accepts a list of RPCs to be executed
51 over a netconf session and the results from the RPC execution
52 is return to the playbook via the modules results dictionary.
53 required: false
54 default: null
55 wait_for:
56 description:
57 - Specifies what to evaluate from the output of the command
58 and what conditionals to apply. This argument will cause
59 the task to wait for a particular conditional to be true
60 before moving forward. If the conditional is not true
61 by the configured retries, the task fails. See examples.
62 required: false
63 default: null
64 aliases: ['waitfor']
65 version_added: "2.2"
66 match:
67 description:
68 - The I(match) argument is used in conjunction with the
69 I(wait_for) argument to specify the match policy. Valid
70 values are C(all) or C(any). If the value is set to C(all)
71 then all conditionals in the I(wait_for) must be satisfied. If
72 the value is set to C(any) then only one of the values must be
73 satisfied.
74 required: false
75 default: all
76 choices: ['any', 'all']
77 version_added: "2.2"
78 retries:
79 description:
80 - Specifies the number of retries a command should by tried
81 before it is considered failed. The command is run on the
82 target device every retry and evaluated against the I(waitfor)
83 conditionals.
84 required: false
85 default: 10
86 interval:
87 description:
88 - Configures the interval in seconds to wait between retries
89 of the command. If the command does not pass the specified
90 conditional, the interval indicates how to long to wait before
91 trying the command again.
92 required: false
93 default: 1
94 format:
95 description:
96 - Configures the encoding scheme to use when serializing output
97 from the device. This handles how to properly understand the
98 output and apply the conditionals path to the result set.
99 required: false
100 default: 'xml'
101 choices: ['xml', 'text']
102 requirements:
103 - junos-eznc
104 notes:
105 - This module requires the netconf system service be enabled on
106 the remote device being managed
107 """
108
109 EXAMPLES = """
110 # Note: examples below use the following provider dict to handle
111 # transport and authentication to the node.
112 vars:
113 netconf:
114 host: "{{ inventory_hostname }}"
115 username: ansible
116 password: Ansible
117
118 - name: run a set of commands
119 junos_command:
120 commands: ['show version', 'show ip route']
121 provider: "{{ netconf }}"
122
123 - name: run a command with a conditional applied to the second command
124 junos_command:
125 commands:
126 - show version
127 - show interfaces fxp0
128 waitfor:
129 - "result[1].interface-information.physical-interface.name eq fxp0"
130 provider: "{{ netconf }}"
131
132 - name: collect interface information using rpc
133 junos_command:
134 rpcs:
135 - "get_interface_information interface=em0 media=True"
136 - "get_interface_information interface=fxp0 media=True"
137 provider: "{{ netconf }}"
138 """
139
140 RETURN = """
141 stdout:
142 description: The output from the commands read from the device
143 returned: always
144 type: list
145 sample: ['...', '...']
146
147 stdout_lines:
148 description: The output read from the device split into lines
149 returned: always
150 type: list
151 sample: [['...', '...'], ['...', '...']]
152
153 failed_conditionals:
154 description: the conditionals that failed
155 returned: failed
156 type: list
157 sample: ['...', '...']
158 """
159
160 import ansible.module_utils.junos
161 from ansible.module_utils.basic import get_exception
162 from ansible.module_utils.network import NetworkModule, NetworkError
163 from ansible.module_utils.netcli import CommandRunner
164 from ansible.module_utils.netcli import AddCommandError, FailedConditionsError
165 from ansible.module_utils.netcli import FailedConditionalError, AddConditionError
166 from ansible.module_utils.junos import xml_to_json
167 from ansible.module_utils.six import string_types
168
169 VALID_KEYS = {
170 'cli': frozenset(['command', 'output', 'prompt', 'response']),
171 'rpc': frozenset(['command', 'output'])
172 }
173
174
175 def to_lines(stdout):
176 for item in stdout:
177 if isinstance(item, string_types):
178 item = str(item).split('\n')
179 yield item
180
181 def parse(module, command_type):
182 if command_type == 'cli':
183 items = module.params['commands']
184 elif command_type == 'rpc':
185 items = module.params['rpcs']
186
187 parsed = list()
188 for item in (items or list()):
189 if isinstance(item, string_types):
190 item = dict(command=item, output=None)
191 elif 'command' not in item:
192 module.fail_json(msg='command keyword argument is required')
193 elif item.get('output') not in [None, 'text', 'xml']:
194 module.fail_json(msg='invalid output specified for command'
195 'Supported values are `text` or `xml`')
196 elif not set(item.keys()).issubset(VALID_KEYS[command_type]):
197 module.fail_json(msg='unknown command keyword specified. Valid '
198 'values are %s' % ', '.join(VALID_KEYS[command_type]))
199
200 if not item['output']:
201 item['output'] = module.params['display']
202
203 item['command_type'] = command_type
204
205 # show configuration [options] will return as text
206 if item['command'].startswith('show configuration'):
207 item['output'] = 'text'
208
209 parsed.append(item)
210
211 return parsed
212
213
214 def main():
215 """main entry point for Ansible module
216 """
217
218 spec = dict(
219 commands=dict(type='list'),
220 rpcs=dict(type='list'),
221
222 display=dict(default='xml', choices=['text', 'xml'],
223 aliases=['format', 'output']),
224
225 wait_for=dict(type='list', aliases=['waitfor']),
226 match=dict(default='all', choices=['all', 'any']),
227
228 retries=dict(default=10, type='int'),
229 interval=dict(default=1, type='int'),
230
231 transport=dict(default='netconf', choices=['netconf'])
232 )
233
234 mutually_exclusive = [('commands', 'rpcs')]
235
236 module = NetworkModule(argument_spec=spec,
237 mutually_exclusive=mutually_exclusive,
238 supports_check_mode=True)
239
240 commands = list()
241 for key in VALID_KEYS.keys():
242 commands.extend(list(parse(module, key)))
243
244 conditionals = module.params['wait_for'] or list()
245
246 warnings = list()
247
248 runner = CommandRunner(module)
249
250 for cmd in commands:
251 if module.check_mode and not cmd['command'].startswith('show'):
252 warnings.append('only show commands are supported when using '
253 'check mode, not executing `%s`' % cmd['command'])
254 else:
255 if cmd['command'].startswith('co'):
256 module.fail_json(msg='junos_command does not support running '
257 'config mode commands. Please use '
258 'junos_config instead')
259 try:
260 runner.add_command(**cmd)
261 except AddCommandError:
262 exc = get_exception()
263 warnings.append('duplicate command detected: %s' % cmd)
264
265 try:
266 for item in conditionals:
267 runner.add_conditional(item)
268 except (ValueError, AddConditionError):
269 exc = get_exception()
270 module.fail_json(msg=str(exc), condition=exc.condition)
271
272 runner.retries = module.params['retries']
273 runner.interval = module.params['interval']
274 runner.match = module.params['match']
275
276 try:
277 runner.run()
278 except FailedConditionsError:
279 exc = get_exception()
280 module.fail_json(msg=str(exc), failed_conditions=exc.failed_conditions)
281 except FailedConditionalError:
282 exc = get_exception()
283 module.fail_json(msg=str(exc), failed_conditional=exc.failed_conditional)
284 except NetworkError:
285 exc = get_exception()
286 module.fail_json(msg=str(exc))
287
288 result = dict(changed=False, stdout=list())
289
290 for cmd in commands:
291 try:
292 output = runner.get_command(cmd['command'], cmd.get('output'))
293 except ValueError:
294 output = 'command not executed due to check_mode, see warnings'
295 result['stdout'].append(output)
296
297 result['warnings'] = warnings
298 result['stdout_lines'] = list(to_lines(result['stdout']))
299
300 module.exit_json(**result)
301
302
303 if __name__ == '__main__':
304 main()
305
[end of lib/ansible/modules/network/junos/junos_command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/ansible/modules/network/junos/junos_command.py b/lib/ansible/modules/network/junos/junos_command.py
--- a/lib/ansible/modules/network/junos/junos_command.py
+++ b/lib/ansible/modules/network/junos/junos_command.py
@@ -98,12 +98,13 @@
output and apply the conditionals path to the result set.
required: false
default: 'xml'
- choices: ['xml', 'text']
+ choices: ['xml', 'text', 'json']
requirements:
- junos-eznc
notes:
- This module requires the netconf system service be enabled on
- the remote device being managed
+ the remote device being managed. 'json' format is supported
+ for JUNON version >= 14.2
"""
EXAMPLES = """
@@ -219,7 +220,7 @@
commands=dict(type='list'),
rpcs=dict(type='list'),
- display=dict(default='xml', choices=['text', 'xml'],
+ display=dict(default='xml', choices=['text', 'xml', 'json'],
aliases=['format', 'output']),
wait_for=dict(type='list', aliases=['waitfor']),
|
{"golden_diff": "diff --git a/lib/ansible/modules/network/junos/junos_command.py b/lib/ansible/modules/network/junos/junos_command.py\n--- a/lib/ansible/modules/network/junos/junos_command.py\n+++ b/lib/ansible/modules/network/junos/junos_command.py\n@@ -98,12 +98,13 @@\n output and apply the conditionals path to the result set.\n required: false\n default: 'xml'\n- choices: ['xml', 'text']\n+ choices: ['xml', 'text', 'json']\n requirements:\n - junos-eznc\n notes:\n - This module requires the netconf system service be enabled on\n- the remote device being managed\n+ the remote device being managed. 'json' format is supported\n+ for JUNON version >= 14.2\n \"\"\"\n \n EXAMPLES = \"\"\"\n@@ -219,7 +220,7 @@\n commands=dict(type='list'),\n rpcs=dict(type='list'),\n \n- display=dict(default='xml', choices=['text', 'xml'],\n+ display=dict(default='xml', choices=['text', 'xml', 'json'],\n aliases=['format', 'output']),\n \n wait_for=dict(type='list', aliases=['waitfor']),\n", "issue": "Fix traceback in service module when svc_cmd is None (2nd fix)\nWhen service module is used on unsupported Linux system where init script is used directly, LinuxService.svc_cmd is None so .endswith() fails.\n\nThis extends fix from e2f20db53481128553d876109d5fbdab9f43dd5b also for state=restarted.\n\nFixes issue #3533\n\nRunning against Debian Lenny server:\n\n```\n$ ansible webserver.local -i webserver.local, -u root -m service -a 'name=apache2 state=restarted'\nwebserver.local | FAILED => failed to parse: Traceback (most recent call last):\n File \"/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service\", line 2080, in <module>\n main()\n File \"/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service\", line 1112, in main\n (rc, out, err) = service.modify_service_state()\n File \"/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service\", line 299, in modify_service_state\n return self.service_control()\n File \"/root/.ansible/tmp/ansible-1379082596.06-63995244988034/service\", line 670, in service_control\n elif self.svc_cmd.endswith('rc-service'):\nAttributeError: 'NoneType' object has no attribute 'endswith'\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n#\n\nANSIBLE_METADATA = {'status': ['preview'],\n 'supported_by': 'core',\n 'version': '1.0'}\n\nDOCUMENTATION = \"\"\"\n---\nmodule: junos_command\nversion_added: \"2.1\"\nauthor: \"Peter Sprygada (@privateip)\"\nshort_description: Execute arbitrary commands on a remote device running Junos\ndescription:\n - Network devices running the Junos operating system provide a command\n driven interface both over CLI and RPC. This module provides an\n interface to execute commands using these functions and return the\n results to the Ansible playbook. 
In addition, this\n module can specify a set of conditionals to be evaluated against the\n returned output, only returning control to the playbook once the\n entire set of conditionals has been met.\nextends_documentation_fragment: junos\noptions:\n commands:\n description:\n - The C(commands) to send to the remote device over the Netconf\n transport. The resulting output from the command\n is returned. If the I(wait_for) argument is provided, the\n module is not returned until the condition is satisfied or\n the number of I(retries) has been exceeded.\n required: false\n default: null\n rpcs:\n description:\n - The C(rpcs) argument accepts a list of RPCs to be executed\n over a netconf session and the results from the RPC execution\n is return to the playbook via the modules results dictionary.\n required: false\n default: null\n wait_for:\n description:\n - Specifies what to evaluate from the output of the command\n and what conditionals to apply. This argument will cause\n the task to wait for a particular conditional to be true\n before moving forward. If the conditional is not true\n by the configured retries, the task fails. See examples.\n required: false\n default: null\n aliases: ['waitfor']\n version_added: \"2.2\"\n match:\n description:\n - The I(match) argument is used in conjunction with the\n I(wait_for) argument to specify the match policy. Valid\n values are C(all) or C(any). If the value is set to C(all)\n then all conditionals in the I(wait_for) must be satisfied. If\n the value is set to C(any) then only one of the values must be\n satisfied.\n required: false\n default: all\n choices: ['any', 'all']\n version_added: \"2.2\"\n retries:\n description:\n - Specifies the number of retries a command should by tried\n before it is considered failed. The command is run on the\n target device every retry and evaluated against the I(waitfor)\n conditionals.\n required: false\n default: 10\n interval:\n description:\n - Configures the interval in seconds to wait between retries\n of the command. If the command does not pass the specified\n conditional, the interval indicates how to long to wait before\n trying the command again.\n required: false\n default: 1\n format:\n description:\n - Configures the encoding scheme to use when serializing output\n from the device. 
This handles how to properly understand the\n output and apply the conditionals path to the result set.\n required: false\n default: 'xml'\n choices: ['xml', 'text']\nrequirements:\n - junos-eznc\nnotes:\n - This module requires the netconf system service be enabled on\n the remote device being managed\n\"\"\"\n\nEXAMPLES = \"\"\"\n# Note: examples below use the following provider dict to handle\n# transport and authentication to the node.\nvars:\n netconf:\n host: \"{{ inventory_hostname }}\"\n username: ansible\n password: Ansible\n\n- name: run a set of commands\n junos_command:\n commands: ['show version', 'show ip route']\n provider: \"{{ netconf }}\"\n\n- name: run a command with a conditional applied to the second command\n junos_command:\n commands:\n - show version\n - show interfaces fxp0\n waitfor:\n - \"result[1].interface-information.physical-interface.name eq fxp0\"\n provider: \"{{ netconf }}\"\n\n- name: collect interface information using rpc\n junos_command:\n rpcs:\n - \"get_interface_information interface=em0 media=True\"\n - \"get_interface_information interface=fxp0 media=True\"\n provider: \"{{ netconf }}\"\n\"\"\"\n\nRETURN = \"\"\"\nstdout:\n description: The output from the commands read from the device\n returned: always\n type: list\n sample: ['...', '...']\n\nstdout_lines:\n description: The output read from the device split into lines\n returned: always\n type: list\n sample: [['...', '...'], ['...', '...']]\n\nfailed_conditionals:\n description: the conditionals that failed\n returned: failed\n type: list\n sample: ['...', '...']\n\"\"\"\n\nimport ansible.module_utils.junos\nfrom ansible.module_utils.basic import get_exception\nfrom ansible.module_utils.network import NetworkModule, NetworkError\nfrom ansible.module_utils.netcli import CommandRunner\nfrom ansible.module_utils.netcli import AddCommandError, FailedConditionsError\nfrom ansible.module_utils.netcli import FailedConditionalError, AddConditionError\nfrom ansible.module_utils.junos import xml_to_json\nfrom ansible.module_utils.six import string_types\n\nVALID_KEYS = {\n 'cli': frozenset(['command', 'output', 'prompt', 'response']),\n 'rpc': frozenset(['command', 'output'])\n}\n\n\ndef to_lines(stdout):\n for item in stdout:\n if isinstance(item, string_types):\n item = str(item).split('\\n')\n yield item\n\ndef parse(module, command_type):\n if command_type == 'cli':\n items = module.params['commands']\n elif command_type == 'rpc':\n items = module.params['rpcs']\n\n parsed = list()\n for item in (items or list()):\n if isinstance(item, string_types):\n item = dict(command=item, output=None)\n elif 'command' not in item:\n module.fail_json(msg='command keyword argument is required')\n elif item.get('output') not in [None, 'text', 'xml']:\n module.fail_json(msg='invalid output specified for command'\n 'Supported values are `text` or `xml`')\n elif not set(item.keys()).issubset(VALID_KEYS[command_type]):\n module.fail_json(msg='unknown command keyword specified. 
Valid '\n 'values are %s' % ', '.join(VALID_KEYS[command_type]))\n\n if not item['output']:\n item['output'] = module.params['display']\n\n item['command_type'] = command_type\n\n # show configuration [options] will return as text\n if item['command'].startswith('show configuration'):\n item['output'] = 'text'\n\n parsed.append(item)\n\n return parsed\n\n\ndef main():\n \"\"\"main entry point for Ansible module\n \"\"\"\n\n spec = dict(\n commands=dict(type='list'),\n rpcs=dict(type='list'),\n\n display=dict(default='xml', choices=['text', 'xml'],\n aliases=['format', 'output']),\n\n wait_for=dict(type='list', aliases=['waitfor']),\n match=dict(default='all', choices=['all', 'any']),\n\n retries=dict(default=10, type='int'),\n interval=dict(default=1, type='int'),\n\n transport=dict(default='netconf', choices=['netconf'])\n )\n\n mutually_exclusive = [('commands', 'rpcs')]\n\n module = NetworkModule(argument_spec=spec,\n mutually_exclusive=mutually_exclusive,\n supports_check_mode=True)\n\n commands = list()\n for key in VALID_KEYS.keys():\n commands.extend(list(parse(module, key)))\n\n conditionals = module.params['wait_for'] or list()\n\n warnings = list()\n\n runner = CommandRunner(module)\n\n for cmd in commands:\n if module.check_mode and not cmd['command'].startswith('show'):\n warnings.append('only show commands are supported when using '\n 'check mode, not executing `%s`' % cmd['command'])\n else:\n if cmd['command'].startswith('co'):\n module.fail_json(msg='junos_command does not support running '\n 'config mode commands. Please use '\n 'junos_config instead')\n try:\n runner.add_command(**cmd)\n except AddCommandError:\n exc = get_exception()\n warnings.append('duplicate command detected: %s' % cmd)\n\n try:\n for item in conditionals:\n runner.add_conditional(item)\n except (ValueError, AddConditionError):\n exc = get_exception()\n module.fail_json(msg=str(exc), condition=exc.condition)\n\n runner.retries = module.params['retries']\n runner.interval = module.params['interval']\n runner.match = module.params['match']\n\n try:\n runner.run()\n except FailedConditionsError:\n exc = get_exception()\n module.fail_json(msg=str(exc), failed_conditions=exc.failed_conditions)\n except FailedConditionalError:\n exc = get_exception()\n module.fail_json(msg=str(exc), failed_conditional=exc.failed_conditional)\n except NetworkError:\n exc = get_exception()\n module.fail_json(msg=str(exc))\n\n result = dict(changed=False, stdout=list())\n\n for cmd in commands:\n try:\n output = runner.get_command(cmd['command'], cmd.get('output'))\n except ValueError:\n output = 'command not executed due to check_mode, see warnings'\n result['stdout'].append(output)\n\n result['warnings'] = warnings\n result['stdout_lines'] = list(to_lines(result['stdout']))\n\n module.exit_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/network/junos/junos_command.py"}]}
| 4,052 | 271 |
gh_patches_debug_3993
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-382
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NY: Committees don't have members
Taken from bug #195
http://openstates.org/browse/ny/committees/
http://www.nysenate.gov/committee/state-native-american-relations
http://assembly.state.ny.us/comm/?sec=mem&id=60
</issue>
<code>
[start of openstates/ny/committees.py]
1 import re
2
3 from billy.scrape import NoDataForPeriod
4 from billy.scrape.committees import CommitteeScraper, Committee
5
6 import lxml.html
7
8
9 def parse_name(name):
10 """
11 Split a committee membership string into name and role.
12
13 >>> parse_name('Felix Ortiz')
14 ('Felix Ortiz', 'member')
15 >>> parse_name('Felix Ortiz (Chair)')
16 ('Felix Ortiz', 'chair')
17 >>> parse_name('Hon. Felix Ortiz, Co-Chair')
18 ('Felix Ortiz', 'co-chair')
19 >>> parse_name('Owen H.\\r\\nJohnson (Vice Chairperson)')
20 ('Owen H. Johnson', 'vice chairperson')
21 """
22 name = re.sub(r'^(Hon\.|Assemblyman|Assemblywoman)\s+', '', name)
23 name = re.sub(r'\s+', ' ', name)
24
25 roles = ["Chairwoman", "Chairperson", "Chair", "Secretary", "Treasurer",
26 "Parliamentarian", "Chaplain"]
27 match = re.match(
28 r'([^(]+),? \(?((Co|Vice)?-?\s*(%s))\)?' % '|'.join(roles),
29 name)
30
31 if match:
32 name = match.group(1).strip(' ,')
33 role = match.group(2).lower()
34 return (name, role)
35 return (name, 'member')
36
37
38 class NYCommitteeScraper(CommitteeScraper):
39 state = "ny"
40 latest_only = True
41
42 def scrape(self, chamber, term):
43 getattr(self, 'scrape_' + chamber)()
44
45 def scrape_lower(self, only_names=None):
46 committees = []
47 url = "http://assembly.state.ny.us/comm/"
48 page = self.urlopen(url)
49 page = lxml.html.fromstring(page)
50 page.make_links_absolute(url)
51
52 for link in page.xpath("//a[contains(@href, 'sec=mem')]"):
53 name = link.xpath("string(../strong)").strip()
54 if 'Caucus' in name:
55 continue
56
57 url = link.attrib['href']
58
59 committees.append(name)
60
61 self.scrape_lower_committee(name, url)
62 return committees
63
64 def scrape_lower_committee(self, name, url):
65 page = self.urlopen(url)
66 page = lxml.html.fromstring(page)
67
68 comm = Committee('lower', name)
69 comm.add_source(url)
70
71 for link in page.xpath("//a[contains(@href, 'mem?ad')]"):
72 member = link.text.strip()
73 member = re.sub(r'\s+', ' ', member)
74
75 name, role = parse_name(member)
76 comm.add_member(name, role)
77
78 self.save_committee(comm)
79
80 def scrape_upper(self):
81 committees = []
82 url = "http://www.nysenate.gov/committees"
83 page = self.urlopen(url)
84 page = lxml.html.fromstring(page)
85 page.make_links_absolute(url)
86
87 for link in page.xpath("//a[contains(@href, '/committee/')]"):
88 name = link.text.strip()
89
90 if name == 'New York State Conference of Black Senators':
91 # stop scraping once we reach the caucuses
92 break
93
94 committees.append(name)
95 self.scrape_upper_committee(name, link.attrib['href'])
96 return committees
97
98 def scrape_upper_committee(self, name, url):
99 page = self.urlopen(url)
100 page = lxml.html.fromstring(page)
101
102 comm = Committee('upper', name)
103 comm.add_source(url)
104
105 member_div = page.xpath("//div[@class = 'committee-members']")[0]
106
107 seen = set()
108 for link in member_div.xpath(".//a"):
109 if not link.text:
110 continue
111
112 member = link.text.strip()
113
114 next_elem = link.getnext()
115 if (next_elem is not None and
116 next_elem.tag == 'a' and
117 next_elem.attrib['href'] == link.attrib['href']):
118 # Sometimes NY is cool and splits names across a
119 # couple links
120 member = "%s %s" % (member, next_elem.text.strip())
121
122 member = re.sub(r'\s+', ' ', member)
123
124 if member in seen or not member:
125 continue
126 seen.add(member)
127
128 name, role = parse_name(member)
129 comm.add_member(name, role)
130
131 self.save_committee(comm)
132
[end of openstates/ny/committees.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openstates/ny/committees.py b/openstates/ny/committees.py
--- a/openstates/ny/committees.py
+++ b/openstates/ny/committees.py
@@ -68,7 +68,7 @@
comm = Committee('lower', name)
comm.add_source(url)
- for link in page.xpath("//a[contains(@href, 'mem?ad')]"):
+ for link in page.xpath("//div[@class='commlinks']//a[contains(@href, 'mem')]"):
member = link.text.strip()
member = re.sub(r'\s+', ' ', member)
|
{"golden_diff": "diff --git a/openstates/ny/committees.py b/openstates/ny/committees.py\n--- a/openstates/ny/committees.py\n+++ b/openstates/ny/committees.py\n@@ -68,7 +68,7 @@\n comm = Committee('lower', name)\n comm.add_source(url)\n \n- for link in page.xpath(\"//a[contains(@href, 'mem?ad')]\"):\n+ for link in page.xpath(\"//div[@class='commlinks']//a[contains(@href, 'mem')]\"):\n member = link.text.strip()\n member = re.sub(r'\\s+', ' ', member)\n", "issue": "NY: Committees don't have members\nTaken from bug #195\n\nhttp://openstates.org/browse/ny/committees/\nhttp://www.nysenate.gov/committee/state-native-american-relations\nhttp://assembly.state.ny.us/comm/?sec=mem&id=60\n\n", "before_files": [{"content": "import re\n\nfrom billy.scrape import NoDataForPeriod\nfrom billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\n\ndef parse_name(name):\n \"\"\"\n Split a committee membership string into name and role.\n\n >>> parse_name('Felix Ortiz')\n ('Felix Ortiz', 'member')\n >>> parse_name('Felix Ortiz (Chair)')\n ('Felix Ortiz', 'chair')\n >>> parse_name('Hon. Felix Ortiz, Co-Chair')\n ('Felix Ortiz', 'co-chair')\n >>> parse_name('Owen H.\\\\r\\\\nJohnson (Vice Chairperson)')\n ('Owen H. Johnson', 'vice chairperson')\n \"\"\"\n name = re.sub(r'^(Hon\\.|Assemblyman|Assemblywoman)\\s+', '', name)\n name = re.sub(r'\\s+', ' ', name)\n\n roles = [\"Chairwoman\", \"Chairperson\", \"Chair\", \"Secretary\", \"Treasurer\",\n \"Parliamentarian\", \"Chaplain\"]\n match = re.match(\n r'([^(]+),? \\(?((Co|Vice)?-?\\s*(%s))\\)?' % '|'.join(roles),\n name)\n\n if match:\n name = match.group(1).strip(' ,')\n role = match.group(2).lower()\n return (name, role)\n return (name, 'member')\n\n\nclass NYCommitteeScraper(CommitteeScraper):\n state = \"ny\"\n latest_only = True\n\n def scrape(self, chamber, term):\n getattr(self, 'scrape_' + chamber)()\n\n def scrape_lower(self, only_names=None):\n committees = []\n url = \"http://assembly.state.ny.us/comm/\"\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n for link in page.xpath(\"//a[contains(@href, 'sec=mem')]\"):\n name = link.xpath(\"string(../strong)\").strip()\n if 'Caucus' in name:\n continue\n\n url = link.attrib['href']\n\n committees.append(name)\n\n self.scrape_lower_committee(name, url)\n return committees\n\n def scrape_lower_committee(self, name, url):\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n\n comm = Committee('lower', name)\n comm.add_source(url)\n\n for link in page.xpath(\"//a[contains(@href, 'mem?ad')]\"):\n member = link.text.strip()\n member = re.sub(r'\\s+', ' ', member)\n\n name, role = parse_name(member)\n comm.add_member(name, role)\n\n self.save_committee(comm)\n\n def scrape_upper(self):\n committees = []\n url = \"http://www.nysenate.gov/committees\"\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n for link in page.xpath(\"//a[contains(@href, '/committee/')]\"):\n name = link.text.strip()\n\n if name == 'New York State Conference of Black Senators':\n # stop scraping once we reach the caucuses\n break\n\n committees.append(name)\n self.scrape_upper_committee(name, link.attrib['href'])\n return committees\n\n def scrape_upper_committee(self, name, url):\n page = self.urlopen(url)\n page = lxml.html.fromstring(page)\n\n comm = Committee('upper', name)\n comm.add_source(url)\n\n member_div = page.xpath(\"//div[@class = 'committee-members']\")[0]\n\n seen = set()\n for link in 
member_div.xpath(\".//a\"):\n if not link.text:\n continue\n\n member = link.text.strip()\n\n next_elem = link.getnext()\n if (next_elem is not None and\n next_elem.tag == 'a' and\n next_elem.attrib['href'] == link.attrib['href']):\n # Sometimes NY is cool and splits names across a\n # couple links\n member = \"%s %s\" % (member, next_elem.text.strip())\n\n member = re.sub(r'\\s+', ' ', member)\n\n if member in seen or not member:\n continue\n seen.add(member)\n\n name, role = parse_name(member)\n comm.add_member(name, role)\n\n self.save_committee(comm)\n", "path": "openstates/ny/committees.py"}]}
| 1,839 | 137 |
gh_patches_debug_20787
|
rasdani/github-patches
|
git_diff
|
saulpw__visidata-2185
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
inputMultiple() displays a malformed empty status box
**Small description**
`search-keys` and `search-col` displays an empty box while waiting for the user to input a regex at the prompt. The box is shown where the usual status messages are shown.
**Expected result**
No box should be displayed.
**Actual result with screenshot**
*(screenshot of the malformed empty box — image not preserved in this copy)*
For comparison, here is what the normal status messages look like:
*(screenshot of the normal status messages — image not preserved in this copy)*
**Steps to reproduce with sample data and a .vd**
Open any sheet and hit `r` or `/`.
**Additional context**
saul.pw/VisiData v3.0dev
Python 3.10.12
Ubuntu 22.04.3
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 from setuptools import setup
4 # tox can't actually run python3 setup.py: https://github.com/tox-dev/tox/issues/96
5 #from visidata import __version__
6 __version__ = '3.0dev'
7
8 setup(name='visidata',
9 version=__version__,
10 description='terminal interface for exploring and arranging tabular data',
11 long_description=open('README.md').read(),
12 long_description_content_type='text/markdown',
13 author='Saul Pwanson',
14 python_requires='>=3.7',
15 author_email='[email protected]',
16 url='https://visidata.org',
17 download_url='https://github.com/saulpw/visidata/tarball/' + __version__,
18 scripts=['bin/vd'],
19 entry_points={'console_scripts': [
20 'visidata=visidata.main:vd_cli'
21 ],
22 },
23 py_modules=['visidata'],
24 install_requires=[
25 'python-dateutil',
26 'windows-curses != 2.3.1; platform_system == "Windows"', #1841
27 'importlib-metadata >= 3.6',
28 'importlib_resources; python_version<"3.9"'
29 ],
30 packages=['visidata', 'visidata.loaders', 'visidata.vendor', 'visidata.tests', 'visidata.ddw', 'visidata.man', 'visidata.themes', 'visidata.features', 'visidata.experimental', 'visidata.apps', 'visidata.apps.vgit', 'visidata.apps.vdsql', 'visidata.desktop'],
31 data_files=[('share/man/man1', ['visidata/man/vd.1', 'visidata/man/visidata.1']), ('share/applications', ['visidata/desktop/visidata.desktop'])],
32 package_data={'visidata.man': ['vd.1', 'vd.txt'], 'visidata.ddw': ['input.ddw'], 'visidata.tests': ['sample.tsv'], 'visidata.desktop': ['visidata.desktop']},
33 license='GPLv3',
34 classifiers=[
35 'Development Status :: 5 - Production/Stable',
36 'Environment :: Console',
37 'Environment :: Console :: Curses',
38 'Intended Audience :: Developers',
39 'Intended Audience :: Science/Research',
40 'Intended Audience :: System Administrators',
41 'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',
42 'Operating System :: OS Independent',
43 'Programming Language :: Python :: 3',
44 'Topic :: Database :: Front-Ends',
45 'Topic :: Scientific/Engineering',
46 'Topic :: Office/Business :: Financial :: Spreadsheet',
47 'Topic :: Scientific/Engineering :: Visualization',
48 'Topic :: Utilities',
49 ],
50 keywords=('console tabular data spreadsheet terminal viewer textpunk'
51 'curses csv hdf5 h5 xlsx excel tsv'),
52 )
53
54
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,7 @@
],
packages=['visidata', 'visidata.loaders', 'visidata.vendor', 'visidata.tests', 'visidata.ddw', 'visidata.man', 'visidata.themes', 'visidata.features', 'visidata.experimental', 'visidata.apps', 'visidata.apps.vgit', 'visidata.apps.vdsql', 'visidata.desktop'],
data_files=[('share/man/man1', ['visidata/man/vd.1', 'visidata/man/visidata.1']), ('share/applications', ['visidata/desktop/visidata.desktop'])],
- package_data={'visidata.man': ['vd.1', 'vd.txt'], 'visidata.ddw': ['input.ddw'], 'visidata.tests': ['sample.tsv'], 'visidata.desktop': ['visidata.desktop']},
+ package_data={'visidata.man': ['vd.1', 'vd.txt'], 'visidata.ddw': ['input.ddw', 'regex.ddw'], 'visidata.tests': ['sample.tsv'], 'visidata.desktop': ['visidata.desktop']},
license='GPLv3',
classifiers=[
'Development Status :: 5 - Production/Stable',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,7 +29,7 @@\n ],\n packages=['visidata', 'visidata.loaders', 'visidata.vendor', 'visidata.tests', 'visidata.ddw', 'visidata.man', 'visidata.themes', 'visidata.features', 'visidata.experimental', 'visidata.apps', 'visidata.apps.vgit', 'visidata.apps.vdsql', 'visidata.desktop'],\n data_files=[('share/man/man1', ['visidata/man/vd.1', 'visidata/man/visidata.1']), ('share/applications', ['visidata/desktop/visidata.desktop'])],\n- package_data={'visidata.man': ['vd.1', 'vd.txt'], 'visidata.ddw': ['input.ddw'], 'visidata.tests': ['sample.tsv'], 'visidata.desktop': ['visidata.desktop']},\n+ package_data={'visidata.man': ['vd.1', 'vd.txt'], 'visidata.ddw': ['input.ddw', 'regex.ddw'], 'visidata.tests': ['sample.tsv'], 'visidata.desktop': ['visidata.desktop']},\n license='GPLv3',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n", "issue": "inputMultiple() displays a malformed empty status box\n**Small description**\r\n`search-keys` and `search-col` displays an empty box while waiting for the user to input a regex at the prompt. The box is shown where the usual status messages are shown.\r\n\r\n**Expected result**\r\nNo box should be displayed.\r\n\r\n**Actual result with screenshot**\r\n\r\n\r\nFor comparison, here is what the normal status messages look like:\r\n\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\nOpen any sheet and hit `r` or `/`.\r\n\r\n**Additional context**\r\nsaul.pw/VisiData v3.0dev\r\nPython 3.10.12\r\nUbuntu 22.04.3\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom setuptools import setup\n# tox can't actually run python3 setup.py: https://github.com/tox-dev/tox/issues/96\n#from visidata import __version__\n__version__ = '3.0dev'\n\nsetup(name='visidata',\n version=__version__,\n description='terminal interface for exploring and arranging tabular data',\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n author='Saul Pwanson',\n python_requires='>=3.7',\n author_email='[email protected]',\n url='https://visidata.org',\n download_url='https://github.com/saulpw/visidata/tarball/' + __version__,\n scripts=['bin/vd'],\n entry_points={'console_scripts': [\n 'visidata=visidata.main:vd_cli'\n ],\n },\n py_modules=['visidata'],\n install_requires=[\n 'python-dateutil',\n 'windows-curses != 2.3.1; platform_system == \"Windows\"', #1841\n 'importlib-metadata >= 3.6',\n 'importlib_resources; python_version<\"3.9\"'\n ],\n packages=['visidata', 'visidata.loaders', 'visidata.vendor', 'visidata.tests', 'visidata.ddw', 'visidata.man', 'visidata.themes', 'visidata.features', 'visidata.experimental', 'visidata.apps', 'visidata.apps.vgit', 'visidata.apps.vdsql', 'visidata.desktop'],\n data_files=[('share/man/man1', ['visidata/man/vd.1', 'visidata/man/visidata.1']), ('share/applications', ['visidata/desktop/visidata.desktop'])],\n package_data={'visidata.man': ['vd.1', 'vd.txt'], 'visidata.ddw': ['input.ddw'], 'visidata.tests': ['sample.tsv'], 'visidata.desktop': ['visidata.desktop']},\n license='GPLv3',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Console :: Curses',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: GNU General Public License v3 (GPLv3)',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 3',\n 'Topic :: 
Database :: Front-Ends',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Office/Business :: Financial :: Spreadsheet',\n 'Topic :: Scientific/Engineering :: Visualization',\n 'Topic :: Utilities',\n ],\n keywords=('console tabular data spreadsheet terminal viewer textpunk'\n 'curses csv hdf5 h5 xlsx excel tsv'),\n )\n\n", "path": "setup.py"}]}
| 1,494 | 275 |
gh_patches_debug_20998
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6578
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature]: meltano add to project without install
### Feature scope
CLI (options, error messages, logging, etc.)
### Description
**As a user, I need to add to my meltano project without install**
Adding to a meltano project is currently a very slow operation. Our requirement is to add all the plugins to the project and then later build (i.e. 'meltano install') the whole project. This means that the initial 'meltano add' is unnecessarily slow as we don't use any of the pip-installed modules from the initial 'meltano add'
Work arounds considered:
- directly add a meltano.yml format file with our plugin definition to a directory with the root meltano.yml includes referencing that directory path & directly save a lock file to the /plugins directory (essentially avoiding using 'meltano add' at all)
This proposal is to be able to add to the meltano project rapidly without installing all pip dependencies, and we'd ideally like to be able to freeze those dependencies.
`meltano add --skip-install` should
1. download the lock file
2. add to the meltano.yml
3. add all the lock and meltano.yml files for any `requires` plugins (https://github.com/meltano/hub/pull/505 & https://github.com/meltano/meltano/issues/6107)
Related to:
This feature is related to https://github.com/meltano/meltano/issues/6416
**As a user, I need to add to my meltano project without install and create pip freeze**
`meltano add --skip-install --freeze` consider whether this --skip-install works if --freeze is used (assuming https://github.com/meltano/meltano/issues/6416 might implement --freeze)
</issue>
<code>
[start of src/meltano/cli/add.py]
1 """Plugin Add CLI."""
2 from __future__ import annotations
3
4 import click
5
6 from meltano.core.plugin import PluginType
7 from meltano.core.plugin.base import PluginRef
8 from meltano.core.plugin.project_plugin import ProjectPlugin
9 from meltano.core.plugin_install_service import PluginInstallReason
10 from meltano.core.project import Project
11 from meltano.core.project_add_service import ProjectAddService
12 from meltano.core.project_plugins_service import ProjectPluginsService
13 from meltano.core.tracking import CliEvent, PluginsTrackingContext
14
15 from . import cli
16 from .params import pass_project
17 from .utils import (
18 CliError,
19 PartialInstrumentedCmd,
20 add_plugin,
21 add_required_plugins,
22 check_dependencies_met,
23 install_plugins,
24 )
25
26
27 @cli.command( # noqa: WPS238
28 cls=PartialInstrumentedCmd,
29 short_help="Add a plugin to your project.",
30 )
31 @click.argument("plugin_type", type=click.Choice(PluginType.cli_arguments()))
32 @click.argument("plugin_name", nargs=-1, required=True)
33 @click.option(
34 "--inherit-from",
35 help=(
36 "Add a plugin inheriting from an existing plugin in the project"
37 + " or a discoverable plugin identified, by name."
38 ),
39 )
40 @click.option(
41 "--variant",
42 help="Add a specific (non-default) variant of the identified discoverable plugin.",
43 )
44 @click.option(
45 "--as",
46 "as_name",
47 help=(
48 "Shorthand for '--inherit-from', that can be used to add a discoverable "
49 + "plugin to your project with a different name. "
50 + "Usage:\b\n\nadd <type> <inherit-from> --as <name>"
51 ),
52 )
53 @click.option(
54 "--custom",
55 is_flag=True,
56 help="Add a custom plugin. The command will prompt you for the package's base plugin description metadata.",
57 )
58 @pass_project()
59 @click.pass_context
60 def add(
61 ctx,
62 project: Project,
63 plugin_type: str,
64 plugin_name: str,
65 inherit_from: str = None,
66 variant: str = None,
67 as_name: str = None,
68 **flags,
69 ):
70 """
71 Add a plugin to your project.
72
73 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#add
74 """
75 tracker = ctx.obj["tracker"]
76 legacy_tracker = ctx.obj["legacy_tracker"]
77
78 plugin_type = PluginType.from_cli_argument(plugin_type)
79 plugin_names = plugin_name # nargs=-1
80
81 if as_name:
82 # `add <type> <inherit-from> --as <name>``
83 # is equivalent to:
84 # `add <type> <name> --inherit-from <inherit-from>``
85 inherit_from = plugin_names[0]
86 plugin_names = [as_name]
87
88 plugins_service = ProjectPluginsService(project)
89
90 if flags["custom"]:
91 if plugin_type in {
92 PluginType.TRANSFORMS,
93 PluginType.ORCHESTRATORS,
94 }:
95 tracker.track_command_event(CliEvent.aborted)
96 raise CliError(f"--custom is not supported for {plugin_type}")
97
98 plugin_refs = [
99 PluginRef(plugin_type=plugin_type, name=name) for name in plugin_names
100 ]
101 dependencies_met, err = check_dependencies_met(
102 plugin_refs=plugin_refs, plugins_service=plugins_service
103 )
104 if not dependencies_met:
105 tracker.track_command_event(CliEvent.aborted)
106 raise CliError(f"Failed to install plugin(s): {err}")
107
108 add_service = ProjectAddService(project, plugins_service=plugins_service)
109
110 plugins: list[ProjectPlugin] = []
111 for plugin in plugin_names:
112 try:
113 plugins.append(
114 add_plugin(
115 project,
116 plugin_type,
117 plugin,
118 inherit_from=inherit_from,
119 variant=variant,
120 custom=flags["custom"],
121 add_service=add_service,
122 )
123 )
124 except Exception:
125 # if the plugin is not known to meltano send what information we do have
126 tracker.add_contexts(
127 PluginsTrackingContext([(plugin, None) for plugin in plugins])
128 )
129 tracker.track_command_event(CliEvent.aborted)
130 raise
131
132 legacy_tracker.track_meltano_add(plugin_type=plugin_type, plugin_name=plugin)
133
134 required_plugins = add_required_plugins(
135 project, plugins, add_service=add_service
136 )
137 plugins.extend(required_plugins)
138 tracker.add_contexts(
139 PluginsTrackingContext([(candidate, None) for candidate in plugins])
140 )
141 tracker.track_command_event(CliEvent.inflight)
142
143 success = install_plugins(project, plugins, reason=PluginInstallReason.ADD)
144
145 if not success:
146 tracker.track_command_event(CliEvent.failed)
147 raise CliError("Failed to install plugin(s)")
148
149 _print_plugins(plugins)
150 tracker.track_command_event(CliEvent.completed)
151
152
153 def _print_plugins(plugins):
154 printed_empty_line = False
155 for plugin in plugins:
156 docs_url = plugin.docs or plugin.repo
157 if not docs_url:
158 continue
159
160 if not printed_empty_line:
161 click.echo()
162 printed_empty_line = True
163
164 click.echo(
165 f"To learn more about {plugin.type.descriptor} '{plugin.name}', visit {docs_url}"
166 )
167
[end of src/meltano/cli/add.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/cli/add.py b/src/meltano/cli/add.py
--- a/src/meltano/cli/add.py
+++ b/src/meltano/cli/add.py
@@ -55,6 +55,11 @@
is_flag=True,
help="Add a custom plugin. The command will prompt you for the package's base plugin description metadata.",
)
[email protected](
+ "--no-install",
+ is_flag=True,
+ help="Do not install the plugin after adding it to the project.",
+)
@pass_project()
@click.pass_context
def add(
@@ -140,11 +145,12 @@
)
tracker.track_command_event(CliEvent.inflight)
- success = install_plugins(project, plugins, reason=PluginInstallReason.ADD)
+ if not flags.get("no_install"):
+ success = install_plugins(project, plugins, reason=PluginInstallReason.ADD)
- if not success:
- tracker.track_command_event(CliEvent.failed)
- raise CliError("Failed to install plugin(s)")
+ if not success:
+ tracker.track_command_event(CliEvent.failed)
+ raise CliError("Failed to install plugin(s)")
_print_plugins(plugins)
tracker.track_command_event(CliEvent.completed)
|
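The accepted patch spells the flag `--no-install` (the issue proposed `--skip-install`) and simply gates the install call behind it. A toy click command sketching that control flow — illustrative only, not meltano's actual CLI code, and the echoed messages are placeholders:

```python
import click

# Minimal stand-in for the patched command: the flag only skips the slow
# pip-install step; the plugin is still recorded in the project first.
@click.command()
@click.argument("plugin_name")
@click.option("--no-install", is_flag=True, help="Add the plugin without installing it.")
def add(plugin_name, no_install):
    click.echo(f"added {plugin_name} to meltano.yml and wrote its lock file")
    if not no_install:
        click.echo(f"pip-installing {plugin_name} ...")  # skipped with --no-install

if __name__ == "__main__":
    add()
```

Invoked as `add tap-gitlab --no-install`, only the bookkeeping runs and the install is deferred to a later `meltano install`.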
{"golden_diff": "diff --git a/src/meltano/cli/add.py b/src/meltano/cli/add.py\n--- a/src/meltano/cli/add.py\n+++ b/src/meltano/cli/add.py\n@@ -55,6 +55,11 @@\n is_flag=True,\n help=\"Add a custom plugin. The command will prompt you for the package's base plugin description metadata.\",\n )\[email protected](\n+ \"--no-install\",\n+ is_flag=True,\n+ help=\"Do not install the plugin after adding it to the project.\",\n+)\n @pass_project()\n @click.pass_context\n def add(\n@@ -140,11 +145,12 @@\n )\n tracker.track_command_event(CliEvent.inflight)\n \n- success = install_plugins(project, plugins, reason=PluginInstallReason.ADD)\n+ if not flags.get(\"no_install\"):\n+ success = install_plugins(project, plugins, reason=PluginInstallReason.ADD)\n \n- if not success:\n- tracker.track_command_event(CliEvent.failed)\n- raise CliError(\"Failed to install plugin(s)\")\n+ if not success:\n+ tracker.track_command_event(CliEvent.failed)\n+ raise CliError(\"Failed to install plugin(s)\")\n \n _print_plugins(plugins)\n tracker.track_command_event(CliEvent.completed)\n", "issue": "[Feature]: meltano add to project without install\n### Feature scope\r\n\r\nCLI (options, error messages, logging, etc.)\r\n\r\n### Description\r\n\r\n**As a user, I need to add to my meltano project without install**\r\n\r\nAdding to a meltano project is currently a very slow operation. Our requirement is to add all the plugins to the project and then later build (i.e. 'meltano install') the whole project. This means that the initial 'meltano add' is unnecessarily slow as we don't use any of the pip-installed modules from the initial 'meltano add'\r\n\r\nWork arounds considered:\r\n- directly add a meltano.yml format file with our plugin definition to a directory with the root meltano.yml includes referencing that directory path & directly save a lock file to the /plugins directory (essentially avoiding using 'meltano add' at all)\r\n\r\n\r\nThis proposal is to be able to add to the meltano project rapidly without installing all pip dependencies, and we'd ideally like to be able to freeze those dependencies.\r\n\r\n`meltano add --skip-install` should\r\n1. download the lock file\r\n2. add to the meltano.yml\r\n3. add all the lock and meltano.yml files for any `requires` plugins (https://github.com/meltano/hub/pull/505 & https://github.com/meltano/meltano/issues/6107)\r\n\r\n\r\n\r\nRelated to:\r\nThis feature is related to https://github.com/meltano/meltano/issues/6416\r\n**As a user, I need to add to my meltano project without install and create pip freeze**\r\n\r\n`meltano add --skip-install --freeze` consider whether this --skip-install works if --freeze is used (assuming https://github.com/meltano/meltano/issues/6416 might implement --freeze)\r\n\n", "before_files": [{"content": "\"\"\"Plugin Add CLI.\"\"\"\nfrom __future__ import annotations\n\nimport click\n\nfrom meltano.core.plugin import PluginType\nfrom meltano.core.plugin.base import PluginRef\nfrom meltano.core.plugin.project_plugin import ProjectPlugin\nfrom meltano.core.plugin_install_service import PluginInstallReason\nfrom meltano.core.project import Project\nfrom meltano.core.project_add_service import ProjectAddService\nfrom meltano.core.project_plugins_service import ProjectPluginsService\nfrom meltano.core.tracking import CliEvent, PluginsTrackingContext\n\nfrom . 
import cli\nfrom .params import pass_project\nfrom .utils import (\n CliError,\n PartialInstrumentedCmd,\n add_plugin,\n add_required_plugins,\n check_dependencies_met,\n install_plugins,\n)\n\n\[email protected]( # noqa: WPS238\n cls=PartialInstrumentedCmd,\n short_help=\"Add a plugin to your project.\",\n)\[email protected](\"plugin_type\", type=click.Choice(PluginType.cli_arguments()))\[email protected](\"plugin_name\", nargs=-1, required=True)\[email protected](\n \"--inherit-from\",\n help=(\n \"Add a plugin inheriting from an existing plugin in the project\"\n + \" or a discoverable plugin identified, by name.\"\n ),\n)\[email protected](\n \"--variant\",\n help=\"Add a specific (non-default) variant of the identified discoverable plugin.\",\n)\[email protected](\n \"--as\",\n \"as_name\",\n help=(\n \"Shorthand for '--inherit-from', that can be used to add a discoverable \"\n + \"plugin to your project with a different name. \"\n + \"Usage:\\b\\n\\nadd <type> <inherit-from> --as <name>\"\n ),\n)\[email protected](\n \"--custom\",\n is_flag=True,\n help=\"Add a custom plugin. The command will prompt you for the package's base plugin description metadata.\",\n)\n@pass_project()\[email protected]_context\ndef add(\n ctx,\n project: Project,\n plugin_type: str,\n plugin_name: str,\n inherit_from: str = None,\n variant: str = None,\n as_name: str = None,\n **flags,\n):\n \"\"\"\n Add a plugin to your project.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#add\n \"\"\"\n tracker = ctx.obj[\"tracker\"]\n legacy_tracker = ctx.obj[\"legacy_tracker\"]\n\n plugin_type = PluginType.from_cli_argument(plugin_type)\n plugin_names = plugin_name # nargs=-1\n\n if as_name:\n # `add <type> <inherit-from> --as <name>``\n # is equivalent to:\n # `add <type> <name> --inherit-from <inherit-from>``\n inherit_from = plugin_names[0]\n plugin_names = [as_name]\n\n plugins_service = ProjectPluginsService(project)\n\n if flags[\"custom\"]:\n if plugin_type in {\n PluginType.TRANSFORMS,\n PluginType.ORCHESTRATORS,\n }:\n tracker.track_command_event(CliEvent.aborted)\n raise CliError(f\"--custom is not supported for {plugin_type}\")\n\n plugin_refs = [\n PluginRef(plugin_type=plugin_type, name=name) for name in plugin_names\n ]\n dependencies_met, err = check_dependencies_met(\n plugin_refs=plugin_refs, plugins_service=plugins_service\n )\n if not dependencies_met:\n tracker.track_command_event(CliEvent.aborted)\n raise CliError(f\"Failed to install plugin(s): {err}\")\n\n add_service = ProjectAddService(project, plugins_service=plugins_service)\n\n plugins: list[ProjectPlugin] = []\n for plugin in plugin_names:\n try:\n plugins.append(\n add_plugin(\n project,\n plugin_type,\n plugin,\n inherit_from=inherit_from,\n variant=variant,\n custom=flags[\"custom\"],\n add_service=add_service,\n )\n )\n except Exception:\n # if the plugin is not known to meltano send what information we do have\n tracker.add_contexts(\n PluginsTrackingContext([(plugin, None) for plugin in plugins])\n )\n tracker.track_command_event(CliEvent.aborted)\n raise\n\n legacy_tracker.track_meltano_add(plugin_type=plugin_type, plugin_name=plugin)\n\n required_plugins = add_required_plugins(\n project, plugins, add_service=add_service\n )\n plugins.extend(required_plugins)\n tracker.add_contexts(\n PluginsTrackingContext([(candidate, None) for candidate in plugins])\n )\n tracker.track_command_event(CliEvent.inflight)\n\n success = install_plugins(project, plugins, reason=PluginInstallReason.ADD)\n\n if not 
success:\n tracker.track_command_event(CliEvent.failed)\n raise CliError(\"Failed to install plugin(s)\")\n\n _print_plugins(plugins)\n tracker.track_command_event(CliEvent.completed)\n\n\ndef _print_plugins(plugins):\n printed_empty_line = False\n for plugin in plugins:\n docs_url = plugin.docs or plugin.repo\n if not docs_url:\n continue\n\n if not printed_empty_line:\n click.echo()\n printed_empty_line = True\n\n click.echo(\n f\"To learn more about {plugin.type.descriptor} '{plugin.name}', visit {docs_url}\"\n )\n", "path": "src/meltano/cli/add.py"}]}
| 2,445 | 279 |
gh_patches_debug_19917
|
rasdani/github-patches
|
git_diff
|
sbi-dev__sbi-646
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SNLE + multiple independent prior + MCMC with num_workers > 1 throws error
SNLE:
- multiple independent prior + num_workers=2 leads to an error (ValueError: Expected value argument to be within the support of the distribution Uniform)
- no error if either num_workers=1 or using a BoxUniform prior
SNPE:
- seems to work
```
from sbi.inference.snpe import SNPE_A
from sbi.inference.snle import SNLE_A
from sbi.utils import BoxUniform
from sbi.inference.posteriors import MCMCPosterior
from sbi.inference.potentials import posterior_estimator_based_potential, likelihood_estimator_based_potential
from sbi.utils.user_input_checks import process_prior
from sbi.analysis.sbc import run_sbc
import torch
default_params = torch.rand(size=(4, 3),dtype=torch.float32)
default_obs = torch.rand(size=(4, 1), dtype=torch.float32)
# multiple independent prior
prior = [torch.distributions.Uniform(torch.FloatTensor([0]), torch.FloatTensor([20])),
torch.distributions.Uniform(torch.FloatTensor([-10]), torch.FloatTensor([10])),
torch.distributions.Uniform(torch.FloatTensor([0.5]), torch.FloatTensor([3]))]
# box uniform prior
# prior = BoxUniform(torch.Tensor([0, -10, 0.5]), torch.Tensor([20, 10, 3]))
prior, _, _ = process_prior(prior)
# inference = SNPE_A()
# density_estimator = inference.append_simulations(default_params, default_obs).train()
# potential_fn, theta_transform = posterior_estimator_based_potential(density_estimator, prior, default_obs[0])
inference = SNLE_A()
density_estimator = inference.append_simulations(default_params, default_obs).train()
potential_fn, theta_transform = likelihood_estimator_based_potential(density_estimator, prior, default_obs[0])
posterior = MCMCPosterior(potential_fn, proposal=prior, theta_transform=theta_transform)
# this line throws an error
ranks, dap_samples = run_sbc(default_params, default_obs, posterior, num_posterior_samples=10, num_workers=2, sbc_batch_size=2)
```
</issue>
<code>
[start of sbi/samplers/mcmc/init_strategy.py]
1 # This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed
2 # under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.
3
4 from typing import Any, Callable
5
6 import torch
7 import torch.distributions.transforms as torch_tf
8 from torch import Tensor
9
10
11 class IterateParameters:
12 """Iterates through parameters by rows"""
13
14 def __init__(self, parameters: torch.Tensor, **kwargs):
15 self.iter = self._make_iterator(parameters)
16
17 @staticmethod
18 def _make_iterator(t):
19 for i in range(t.shape[0]):
20 yield t[i, :].reshape(1, -1)
21
22 def __call__(self):
23 return next(self.iter)
24
25
26 def proposal_init(
27 proposal: Any, transform: torch_tf.Transform, **kwargs: Any
28 ) -> Tensor:
29 """Return a sample from the proposal."""
30 prior_samples = proposal.sample((1,)).detach()
31 transformed_prior_samples = transform(prior_samples)
32 return transformed_prior_samples
33
34
35 def sir(
36 proposal: Any,
37 potential_fn: Callable,
38 transform: torch_tf.Transform,
39 sir_num_batches: int = 10,
40 sir_batch_size: int = 1000,
41 **kwargs: Any,
42 ) -> Tensor:
43 r"""Return a sample obtained by sequential importance reweighting.
44
45 See Rubin 1988, "Using the sir algorithm to simulate posterior distributions."
46
47 This function can also do `SIR` on the conditional posterior
48 $p(\theta_i|\theta_j, x)$ when a `condition` and `dims_to_sample` are passed.
49
50 Args:
51 proposal: Proposal distribution, candidate samples are drawn from it.
52 potential_fn: Potential function that the candidate samples are weighted with.
53 Note that the function needs to return log probabilities.
54 sir_num_batches: Number of candidate batches drawn.
55 sir_batch_size: Batch size used for evaluating candidates.
56
57 Returns:
58 A single sample.
59 """
60
61 with torch.set_grad_enabled(False):
62 log_weights = []
63 init_param_candidates = []
64 for i in range(sir_num_batches):
65 batch_draws = proposal.sample((sir_batch_size,)).detach()
66 transformed_batch_draws = transform(batch_draws)
67 init_param_candidates.append(transformed_batch_draws)
68 log_weights.append(potential_fn(transformed_batch_draws).detach())
69 log_weights = torch.cat(log_weights)
70 init_param_candidates = torch.cat(init_param_candidates)
71
72 # Norm weights in log space
73 log_weights -= torch.logsumexp(log_weights, dim=0)
74 probs = torch.exp(log_weights.view(-1))
75 probs[torch.isnan(probs)] = 0.0
76 probs[torch.isinf(probs)] = 0.0
77 probs /= probs.sum()
78
79 idxs = torch.multinomial(probs, 1, replacement=False)
80 return init_param_candidates[idxs, :]
81
[end of sbi/samplers/mcmc/init_strategy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sbi/samplers/mcmc/init_strategy.py b/sbi/samplers/mcmc/init_strategy.py
--- a/sbi/samplers/mcmc/init_strategy.py
+++ b/sbi/samplers/mcmc/init_strategy.py
@@ -63,9 +63,8 @@
init_param_candidates = []
for i in range(sir_num_batches):
batch_draws = proposal.sample((sir_batch_size,)).detach()
- transformed_batch_draws = transform(batch_draws)
- init_param_candidates.append(transformed_batch_draws)
- log_weights.append(potential_fn(transformed_batch_draws).detach())
+ init_param_candidates.append(batch_draws)
+ log_weights.append(potential_fn(batch_draws).detach())
log_weights = torch.cat(log_weights)
init_param_candidates = torch.cat(init_param_candidates)
@@ -77,4 +76,5 @@
probs /= probs.sum()
idxs = torch.multinomial(probs, 1, replacement=False)
- return init_param_candidates[idxs, :]
+ # Return transformed sample.
+ return transform(init_param_candidates[idxs, :])
|
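Condensed, the corrected initialisation reads roughly as below (paraphrased from the diff, not the library's verbatim code; `proposal`, `potential_fn` and `transform` stand for the objects sbi already passes in). The gist of the fix: candidates are drawn and weighted in the proposal's own, untransformed space — where the support check on the independent `Uniform` components is valid — and only the single selected sample is transformed for the sampler.

```python
import torch

def sir_init(proposal, potential_fn, transform, num_batches=10, batch_size=1000):
    """Sequential importance resampling for an MCMC start point (sketch)."""
    log_weights, candidates = [], []
    with torch.no_grad():
        for _ in range(num_batches):
            draws = proposal.sample((batch_size,))       # stay in untransformed space
            candidates.append(draws)
            log_weights.append(potential_fn(draws))      # weight in that same space
    log_weights = torch.cat(log_weights)
    candidates = torch.cat(candidates)
    log_weights = log_weights - torch.logsumexp(log_weights, dim=0)
    probs = torch.exp(log_weights.view(-1))
    probs[torch.isnan(probs) | torch.isinf(probs)] = 0.0
    probs = probs / probs.sum()
    idx = torch.multinomial(probs, 1, replacement=False)
    return transform(candidates[idx, :])                 # transform only the chosen sample
```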
{"golden_diff": "diff --git a/sbi/samplers/mcmc/init_strategy.py b/sbi/samplers/mcmc/init_strategy.py\n--- a/sbi/samplers/mcmc/init_strategy.py\n+++ b/sbi/samplers/mcmc/init_strategy.py\n@@ -63,9 +63,8 @@\n init_param_candidates = []\n for i in range(sir_num_batches):\n batch_draws = proposal.sample((sir_batch_size,)).detach()\n- transformed_batch_draws = transform(batch_draws)\n- init_param_candidates.append(transformed_batch_draws)\n- log_weights.append(potential_fn(transformed_batch_draws).detach())\n+ init_param_candidates.append(batch_draws)\n+ log_weights.append(potential_fn(batch_draws).detach())\n log_weights = torch.cat(log_weights)\n init_param_candidates = torch.cat(init_param_candidates)\n \n@@ -77,4 +76,5 @@\n probs /= probs.sum()\n \n idxs = torch.multinomial(probs, 1, replacement=False)\n- return init_param_candidates[idxs, :]\n+ # Return transformed sample.\n+ return transform(init_param_candidates[idxs, :])\n", "issue": "SNLE + multiple independent prior + MCMC with num_workers > 1 throws error\nSNLE:\r\n- multiple independent prior + num_workers=2 leads to an error (ValueError: Expected value argument to be within the support of the distribution Uniform)\r\n- no error if either num_workers=1 or using a BoxUniform prior\r\n\r\nSNPE:\r\n- seems to work\r\n\r\n```\r\nfrom sbi.inference.snpe import SNPE_A\r\nfrom sbi.inference.snle import SNLE_A\r\nfrom sbi.utils import BoxUniform\r\nfrom sbi.inference.posteriors import MCMCPosterior\r\nfrom sbi.inference.potentials import posterior_estimator_based_potential, likelihood_estimator_based_potential\r\nfrom sbi.utils.user_input_checks import process_prior\r\nfrom sbi.analysis.sbc import run_sbc\r\nimport torch\r\n\r\ndefault_params = torch.rand(size=(4, 3),dtype=torch.float32)\r\ndefault_obs = torch.rand(size=(4, 1), dtype=torch.float32)\r\n\r\n# multiple independent prior\r\nprior = [torch.distributions.Uniform(torch.FloatTensor([0]), torch.FloatTensor([20])),\r\n torch.distributions.Uniform(torch.FloatTensor([-10]), torch.FloatTensor([10])),\r\n torch.distributions.Uniform(torch.FloatTensor([0.5]), torch.FloatTensor([3]))]\r\n# box uniform prior\r\n# prior = BoxUniform(torch.Tensor([0, -10, 0.5]), torch.Tensor([20, 10, 3]))\r\nprior, _, _ = process_prior(prior)\r\n\r\n# inference = SNPE_A()\r\n# density_estimator = inference.append_simulations(default_params, default_obs).train()\r\n# potential_fn, theta_transform = posterior_estimator_based_potential(density_estimator, prior, default_obs[0])\r\n\r\ninference = SNLE_A()\r\ndensity_estimator = inference.append_simulations(default_params, default_obs).train()\r\npotential_fn, theta_transform = likelihood_estimator_based_potential(density_estimator, prior, default_obs[0])\r\n\r\nposterior = MCMCPosterior(potential_fn, proposal=prior, theta_transform=theta_transform)\r\n\r\n# this line throws an error\r\nranks, dap_samples = run_sbc(default_params, default_obs, posterior, num_posterior_samples=10, num_workers=2, sbc_batch_size=2)\r\n```\r\n\n", "before_files": [{"content": "# This file is part of sbi, a toolkit for simulation-based inference. 
sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n\nfrom typing import Any, Callable\n\nimport torch\nimport torch.distributions.transforms as torch_tf\nfrom torch import Tensor\n\n\nclass IterateParameters:\n \"\"\"Iterates through parameters by rows\"\"\"\n\n def __init__(self, parameters: torch.Tensor, **kwargs):\n self.iter = self._make_iterator(parameters)\n\n @staticmethod\n def _make_iterator(t):\n for i in range(t.shape[0]):\n yield t[i, :].reshape(1, -1)\n\n def __call__(self):\n return next(self.iter)\n\n\ndef proposal_init(\n proposal: Any, transform: torch_tf.Transform, **kwargs: Any\n) -> Tensor:\n \"\"\"Return a sample from the proposal.\"\"\"\n prior_samples = proposal.sample((1,)).detach()\n transformed_prior_samples = transform(prior_samples)\n return transformed_prior_samples\n\n\ndef sir(\n proposal: Any,\n potential_fn: Callable,\n transform: torch_tf.Transform,\n sir_num_batches: int = 10,\n sir_batch_size: int = 1000,\n **kwargs: Any,\n) -> Tensor:\n r\"\"\"Return a sample obtained by sequential importance reweighting.\n\n See Rubin 1988, \"Using the sir algorithm to simulate posterior distributions.\"\n\n This function can also do `SIR` on the conditional posterior\n $p(\\theta_i|\\theta_j, x)$ when a `condition` and `dims_to_sample` are passed.\n\n Args:\n proposal: Proposal distribution, candidate samples are drawn from it.\n potential_fn: Potential function that the candidate samples are weighted with.\n Note that the function needs to return log probabilities.\n sir_num_batches: Number of candidate batches drawn.\n sir_batch_size: Batch size used for evaluating candidates.\n\n Returns:\n A single sample.\n \"\"\"\n\n with torch.set_grad_enabled(False):\n log_weights = []\n init_param_candidates = []\n for i in range(sir_num_batches):\n batch_draws = proposal.sample((sir_batch_size,)).detach()\n transformed_batch_draws = transform(batch_draws)\n init_param_candidates.append(transformed_batch_draws)\n log_weights.append(potential_fn(transformed_batch_draws).detach())\n log_weights = torch.cat(log_weights)\n init_param_candidates = torch.cat(init_param_candidates)\n\n # Norm weights in log space\n log_weights -= torch.logsumexp(log_weights, dim=0)\n probs = torch.exp(log_weights.view(-1))\n probs[torch.isnan(probs)] = 0.0\n probs[torch.isinf(probs)] = 0.0\n probs /= probs.sum()\n\n idxs = torch.multinomial(probs, 1, replacement=False)\n return init_param_candidates[idxs, :]\n", "path": "sbi/samplers/mcmc/init_strategy.py"}]}
| 1,804 | 248 |
gh_patches_debug_28197
|
rasdani/github-patches
|
git_diff
|
PyGithub__PyGithub-1356
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ProjectCard should have the move action?
How can one move a project card with PyGithub. It seems like the moves action is not implemented. Or am I missing something? I need to be able to move a card from one column to another.
</issue>
<code>
[start of github/ProjectCard.py]
1 # -*- coding: utf-8 -*-
2
3 ############################ Copyrights and license ############################
4 # #
5 # Copyright 2018 bbi-yggy <[email protected]> #
6 # #
7 # This file is part of PyGithub. #
8 # http://pygithub.readthedocs.io/ #
9 # #
10 # PyGithub is free software: you can redistribute it and/or modify it under #
11 # the terms of the GNU Lesser General Public License as published by the Free #
12 # Software Foundation, either version 3 of the License, or (at your option) #
13 # any later version. #
14 # #
15 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
16 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
17 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
18 # details. #
19 # #
20 # You should have received a copy of the GNU Lesser General Public License #
21 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
22 # #
23 ################################################################################
24
25 import github.GithubObject
26
27 # NOTE: There is currently no way to get cards "in triage" for a project.
28 # https://platform.github.community/t/moving-github-project-cards-that-are-in-triage/3784
29 #
30 # See also https://developer.github.com/v4/object/projectcard for the next generation GitHub API,
31 # which may point the way to where the API is likely headed and what might come back to v3. E.g. ProjectCard.content member.
32
33
34 class ProjectCard(github.GithubObject.CompletableGithubObject):
35 """
36 This class represents Project Cards. The reference can be found here https://developer.github.com/v3/projects/cards
37 """
38
39 def __repr__(self):
40 return self.get__repr__({"id": self._id.value})
41
42 @property
43 def archived(self):
44 """
45 :type: bool
46 """
47 return self._archived.value
48
49 @property
50 def column_url(self):
51 """
52 :type: string
53 """
54 return self._column_url.value
55
56 @property
57 def content_url(self):
58 """
59 :type: string
60 """
61 return self._content_url.value
62
63 @property
64 def created_at(self):
65 """
66 :type: datetime.datetime
67 """
68 return self._created_at.value
69
70 @property
71 def creator(self):
72 """
73 :type: :class:`github.NamedUser.NamedUser`
74 """
75 return self._creator.value
76
77 @property
78 def id(self):
79 """
80 :type: integer
81 """
82 return self._id.value
83
84 @property
85 def node_id(self):
86 """
87 :type: string
88 """
89 return self._node_id.value
90
91 @property
92 def note(self):
93 """
94 :type: string
95 """
96 return self._note.value
97
98 @property
99 def updated_at(self):
100 """
101 :type: datetime.datetime
102 """
103 return self._updated_at.value
104
105 @property
106 def url(self):
107 """
108 :type: string
109 """
110 return self._url.value
111
112 # Note that the content_url for any card will be an "issue" URL, from
113 # which you can retrieve either an Issue or a PullRequest. Unforunately
114 # the API doesn't make it clear which you are dealing with.
115 def get_content(self, content_type=github.GithubObject.NotSet):
116 """
117 :calls: `GET /repos/:owner/:repo/pulls/:number <https://developer.github.com/v3/pulls/#get-a-single-pull-request>`_
118 :param content_type: string, optional
119 :rtype: :class:`github.PullRequest.PullRequest` or :class:`github.Issue.Issue`
120 """
121 assert content_type is github.GithubObject.NotSet or isinstance(
122 content_type, str
123 ), content_type
124 if self.content_url is None:
125 return None
126
127 if content_type == "PullRequest":
128 url = self.content_url.replace("issues", "pulls")
129 retclass = github.PullRequest.PullRequest
130 elif content_type is github.GithubObject.NotSet or content_type == "Issue":
131 url = self.content_url
132 retclass = github.Issue.Issue
133 else:
134 raise ValueError("Unknown content type: %s" % content_type)
135 headers, data = self._requester.requestJsonAndCheck("GET", url)
136 return retclass(self._requester, headers, data, completed=True)
137
138 def _initAttributes(self):
139 self._archived = github.GithubObject.NotSet
140 self._column_url = github.GithubObject.NotSet
141 self._content_url = github.GithubObject.NotSet
142 self._created_at = github.GithubObject.NotSet
143 self._creator = github.GithubObject.NotSet
144 self._id = github.GithubObject.NotSet
145 self._node_id = github.GithubObject.NotSet
146 self._note = github.GithubObject.NotSet
147 self._updated_at = github.GithubObject.NotSet
148 self._url = github.GithubObject.NotSet
149
150 def _useAttributes(self, attributes):
151 if "archived" in attributes: # pragma no branch
152 self._archived = self._makeBoolAttribute(attributes["archived"])
153 if "column_url" in attributes: # pragma no branch
154 self._column_url = self._makeStringAttribute(attributes["column_url"])
155 if "content_url" in attributes: # pragma no branch
156 self._content_url = self._makeStringAttribute(attributes["content_url"])
157 if "created_at" in attributes: # pragma no branch
158 self._created_at = self._makeDatetimeAttribute(attributes["created_at"])
159 if "creator" in attributes: # pragma no branch
160 self._creator = self._makeClassAttribute(
161 github.NamedUser.NamedUser, attributes["creator"]
162 )
163 if "id" in attributes: # pragma no branch
164 self._id = self._makeIntAttribute(attributes["id"])
165 if "node_id" in attributes: # pragma no branch
166 self._node_id = self._makeStringAttribute(attributes["node_id"])
167 if "note" in attributes: # pragma no branch
168 self._note = self._makeStringAttribute(attributes["note"])
169 if "updated_at" in attributes: # pragma no branch
170 self._updated_at = self._makeDatetimeAttribute(attributes["updated_at"])
171 if "url" in attributes: # pragma no branch
172 self._url = self._makeStringAttribute(attributes["url"])
173
[end of github/ProjectCard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/github/ProjectCard.py b/github/ProjectCard.py
--- a/github/ProjectCard.py
+++ b/github/ProjectCard.py
@@ -24,6 +24,8 @@
import github.GithubObject
+from . import Consts
+
# NOTE: There is currently no way to get cards "in triage" for a project.
# https://platform.github.community/t/moving-github-project-cards-that-are-in-triage/3784
#
@@ -135,6 +137,31 @@
headers, data = self._requester.requestJsonAndCheck("GET", url)
return retclass(self._requester, headers, data, completed=True)
+ def move(self, position, column):
+ """
+ :calls: `POST /projects/columns/cards/:card_id/moves <https://developer.github.com/v3/projects/cards>`_
+ :param position: string
+ :param column: :class:`github.ProjectColumn.ProjectColumn` or int
+ :rtype: bool
+ """
+ assert isinstance(position, str), position
+ assert isinstance(column, github.ProjectColumn.ProjectColumn) or isinstance(
+ column, int
+ ), column
+ post_parameters = {
+ "position": position,
+ "column": column.id
+ if isinstance(column, github.ProjectColumn.ProjectColumn)
+ else column,
+ }
+ status, _, _ = self._requester.requestJson(
+ "POST",
+ self.url + "/moves",
+ input=post_parameters,
+ headers={"Accept": Consts.mediaTypeProjectsPreview},
+ )
+ return status == 201
+
def _initAttributes(self):
self._archived = github.GithubObject.NotSet
self._column_url = github.GithubObject.NotSet
|
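With the added method, moving a card between columns becomes a single call. A hypothetical usage sketch — the token, repository, and the assumption of an existing classic project with at least two columns and one card are placeholders, and `position` follows the REST API's `"top"`, `"bottom"` or `"after:<card_id>"` convention:

```python
from github import Github

gh = Github("<personal-access-token>")                    # placeholder credentials
project = gh.get_repo("octocat/Hello-World").get_projects()[0]
todo, in_progress = list(project.get_columns())[:2]       # assumes two columns exist

card = todo.get_cards()[0]                                # assumes the column has a card
moved = card.move("top", in_progress)                     # to the top of the next column
print(moved)                                              # True on a 201 response
```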
{"golden_diff": "diff --git a/github/ProjectCard.py b/github/ProjectCard.py\n--- a/github/ProjectCard.py\n+++ b/github/ProjectCard.py\n@@ -24,6 +24,8 @@\n \n import github.GithubObject\n \n+from . import Consts\n+\n # NOTE: There is currently no way to get cards \"in triage\" for a project.\n # https://platform.github.community/t/moving-github-project-cards-that-are-in-triage/3784\n #\n@@ -135,6 +137,31 @@\n headers, data = self._requester.requestJsonAndCheck(\"GET\", url)\n return retclass(self._requester, headers, data, completed=True)\n \n+ def move(self, position, column):\n+ \"\"\"\n+ :calls: `POST /projects/columns/cards/:card_id/moves <https://developer.github.com/v3/projects/cards>`_\n+ :param position: string\n+ :param column: :class:`github.ProjectColumn.ProjectColumn` or int\n+ :rtype: bool\n+ \"\"\"\n+ assert isinstance(position, str), position\n+ assert isinstance(column, github.ProjectColumn.ProjectColumn) or isinstance(\n+ column, int\n+ ), column\n+ post_parameters = {\n+ \"position\": position,\n+ \"column\": column.id\n+ if isinstance(column, github.ProjectColumn.ProjectColumn)\n+ else column,\n+ }\n+ status, _, _ = self._requester.requestJson(\n+ \"POST\",\n+ self.url + \"/moves\",\n+ input=post_parameters,\n+ headers={\"Accept\": Consts.mediaTypeProjectsPreview},\n+ )\n+ return status == 201\n+\n def _initAttributes(self):\n self._archived = github.GithubObject.NotSet\n self._column_url = github.GithubObject.NotSet\n", "issue": "ProjectCard should have the move action?\n\r\nHow can one move a project card with PyGithub. It seems like the moves action is not implemented. Or am I missing something? I need to be able to move a card from one column to another.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n############################ Copyrights and license ############################\n# #\n# Copyright 2018 bbi-yggy <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #\n# #\n################################################################################\n\nimport github.GithubObject\n\n# NOTE: There is currently no way to get cards \"in triage\" for a project.\n# https://platform.github.community/t/moving-github-project-cards-that-are-in-triage/3784\n#\n# See also https://developer.github.com/v4/object/projectcard for the next generation GitHub API,\n# which may point the way to where the API is likely headed and what might come back to v3. E.g. ProjectCard.content member.\n\n\nclass ProjectCard(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents Project Cards. 
The reference can be found here https://developer.github.com/v3/projects/cards\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"id\": self._id.value})\n\n @property\n def archived(self):\n \"\"\"\n :type: bool\n \"\"\"\n return self._archived.value\n\n @property\n def column_url(self):\n \"\"\"\n :type: string\n \"\"\"\n return self._column_url.value\n\n @property\n def content_url(self):\n \"\"\"\n :type: string\n \"\"\"\n return self._content_url.value\n\n @property\n def created_at(self):\n \"\"\"\n :type: datetime.datetime\n \"\"\"\n return self._created_at.value\n\n @property\n def creator(self):\n \"\"\"\n :type: :class:`github.NamedUser.NamedUser`\n \"\"\"\n return self._creator.value\n\n @property\n def id(self):\n \"\"\"\n :type: integer\n \"\"\"\n return self._id.value\n\n @property\n def node_id(self):\n \"\"\"\n :type: string\n \"\"\"\n return self._node_id.value\n\n @property\n def note(self):\n \"\"\"\n :type: string\n \"\"\"\n return self._note.value\n\n @property\n def updated_at(self):\n \"\"\"\n :type: datetime.datetime\n \"\"\"\n return self._updated_at.value\n\n @property\n def url(self):\n \"\"\"\n :type: string\n \"\"\"\n return self._url.value\n\n # Note that the content_url for any card will be an \"issue\" URL, from\n # which you can retrieve either an Issue or a PullRequest. Unforunately\n # the API doesn't make it clear which you are dealing with.\n def get_content(self, content_type=github.GithubObject.NotSet):\n \"\"\"\n :calls: `GET /repos/:owner/:repo/pulls/:number <https://developer.github.com/v3/pulls/#get-a-single-pull-request>`_\n :param content_type: string, optional\n :rtype: :class:`github.PullRequest.PullRequest` or :class:`github.Issue.Issue`\n \"\"\"\n assert content_type is github.GithubObject.NotSet or isinstance(\n content_type, str\n ), content_type\n if self.content_url is None:\n return None\n\n if content_type == \"PullRequest\":\n url = self.content_url.replace(\"issues\", \"pulls\")\n retclass = github.PullRequest.PullRequest\n elif content_type is github.GithubObject.NotSet or content_type == \"Issue\":\n url = self.content_url\n retclass = github.Issue.Issue\n else:\n raise ValueError(\"Unknown content type: %s\" % content_type)\n headers, data = self._requester.requestJsonAndCheck(\"GET\", url)\n return retclass(self._requester, headers, data, completed=True)\n\n def _initAttributes(self):\n self._archived = github.GithubObject.NotSet\n self._column_url = github.GithubObject.NotSet\n self._content_url = github.GithubObject.NotSet\n self._created_at = github.GithubObject.NotSet\n self._creator = github.GithubObject.NotSet\n self._id = github.GithubObject.NotSet\n self._node_id = github.GithubObject.NotSet\n self._note = github.GithubObject.NotSet\n self._updated_at = github.GithubObject.NotSet\n self._url = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"archived\" in attributes: # pragma no branch\n self._archived = self._makeBoolAttribute(attributes[\"archived\"])\n if \"column_url\" in attributes: # pragma no branch\n self._column_url = self._makeStringAttribute(attributes[\"column_url\"])\n if \"content_url\" in attributes: # pragma no branch\n self._content_url = self._makeStringAttribute(attributes[\"content_url\"])\n if \"created_at\" in attributes: # pragma no branch\n self._created_at = self._makeDatetimeAttribute(attributes[\"created_at\"])\n if \"creator\" in attributes: # pragma no branch\n self._creator = self._makeClassAttribute(\n github.NamedUser.NamedUser, attributes[\"creator\"]\n 
)\n if \"id\" in attributes: # pragma no branch\n self._id = self._makeIntAttribute(attributes[\"id\"])\n if \"node_id\" in attributes: # pragma no branch\n self._node_id = self._makeStringAttribute(attributes[\"node_id\"])\n if \"note\" in attributes: # pragma no branch\n self._note = self._makeStringAttribute(attributes[\"note\"])\n if \"updated_at\" in attributes: # pragma no branch\n self._updated_at = self._makeDatetimeAttribute(attributes[\"updated_at\"])\n if \"url\" in attributes: # pragma no branch\n self._url = self._makeStringAttribute(attributes[\"url\"])\n", "path": "github/ProjectCard.py"}]}
| 2,450 | 404 |
gh_patches_debug_3845
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-6147
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Freesound] Downloads are broken
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Freesound.org downloads are broken due to how the preview URLs sometimes appear in the HTML: `<meta property="og:audio" content="https://freesound.orghttps://cdn.freesound.org/..." />`
I have made a fix for this and will be opening a PR shortly.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://www.freesound.org/people/miklovan/sounds/194503/
[debug] Command-line config: ['-vU', 'https://www.freesound.org/people/miklovan/sounds/194503/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.01.06 [6becd25] (pip)
[debug] Python 3.10.2 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1m 14 Dec 2021)
[debug] exe versions: ffmpeg git-2019-11-13-a7245ad, ffprobe 5.1.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.12.0, brotli-1.0.9, certifi-2021.10.08, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.1
[debug] Proxy map: {}
[debug] Loaded 1760 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.01.06, Current version: 2023.01.06
yt-dlp is up to date (2023.01.06)
[Freesound] Extracting URL: https://www.freesound.org/people/miklovan/sounds/194503/
[Freesound] 194503: Downloading webpage
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 194503: Downloading 1 format(s): 1
[debug] Invoking http downloader on "https://freesound.orghttps://cdn.freesound.org/previews/194/194503_224081-hq.mp3"
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (1/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (2/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (3/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (4/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (5/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (6/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (7/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (8/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (9/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (10/10)...
[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Giving up after 10 retries
File "C:\Program Files\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Program Files\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Program Files\Python310\Scripts\yt-dlp.exe\__main__.py", line 7, in <module>
sys.exit(main())
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\__init__.py", line 970, in main
_exit(*variadic(_real_main(argv)))
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\__init__.py", line 960, in _real_main
return ydl.download(all_urls)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 3358, in download
self.__download_wrapper(self.extract_info)(
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 3333, in wrapper
res = func(*args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 1491, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 1502, in wrapper
return func(self, *args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 1599, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 1658, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 2772, in process_video_result
self.process_info(new_info)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 3236, in process_info
success, real_download = self.dl(temp_filename, info_dict)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 2959, in dl
return fd.download(name, new_info, subtitle)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\downloader\common.py", line 444, in download
ret = self.real_download(filename, info_dict)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\downloader\http.py", line 369, in real_download
for retry in RetryManager(self.params.get('retries'), self.report_retry):
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\utils.py", line 6023, in __iter__
self.error_callback(self.error, self.attempt, self.retries)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\downloader\common.py", line 389, in report_retry
RetryManager.report_retry(
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\utils.py", line 6030, in report_retry
return error(f'{e}. Giving up after {count - 1} retries') if count > 1 else error(str(e))
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\downloader\common.py", line 392, in <lambda>
error=IDENTITY if not fatal else lambda e: self.report_error(f'\r[download] Got error: {e}'),
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 1012, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "C:\Program Files\Python310\lib\site-packages\yt_dlp\YoutubeDL.py", line 941, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
```
</issue>
<code>
[start of yt_dlp/extractor/freesound.py]
1 import re
2
3 from .common import InfoExtractor
4 from ..utils import (
5 float_or_none,
6 get_element_by_class,
7 get_element_by_id,
8 unified_strdate,
9 )
10
11
12 class FreesoundIE(InfoExtractor):
13 _VALID_URL = r'https?://(?:www\.)?freesound\.org/people/[^/]+/sounds/(?P<id>[^/]+)'
14 _TEST = {
15 'url': 'http://www.freesound.org/people/miklovan/sounds/194503/',
16 'md5': '12280ceb42c81f19a515c745eae07650',
17 'info_dict': {
18 'id': '194503',
19 'ext': 'mp3',
20 'title': 'gulls in the city.wav',
21 'description': 'the sounds of seagulls in the city',
22 'duration': 130.233,
23 'uploader': 'miklovan',
24 'upload_date': '20130715',
25 'tags': list,
26 }
27 }
28
29 def _real_extract(self, url):
30 audio_id = self._match_id(url)
31
32 webpage = self._download_webpage(url, audio_id)
33
34 audio_url = self._og_search_property('audio', webpage, 'song url')
35 title = self._og_search_property('audio:title', webpage, 'song title')
36
37 description = self._html_search_regex(
38 r'(?s)id=["\']sound_description["\'][^>]*>(.+?)</div>',
39 webpage, 'description', fatal=False)
40
41 duration = float_or_none(
42 get_element_by_class('duration', webpage), scale=1000)
43
44 upload_date = unified_strdate(get_element_by_id('sound_date', webpage))
45 uploader = self._og_search_property(
46 'audio:artist', webpage, 'uploader', fatal=False)
47
48 channels = self._html_search_regex(
49 r'Channels</dt><dd>(.+?)</dd>', webpage,
50 'channels info', fatal=False)
51
52 tags_str = get_element_by_class('tags', webpage)
53 tags = re.findall(r'<a[^>]+>([^<]+)', tags_str) if tags_str else None
54
55 audio_urls = [audio_url]
56
57 LQ_FORMAT = '-lq.mp3'
58 if LQ_FORMAT in audio_url:
59 audio_urls.append(audio_url.replace(LQ_FORMAT, '-hq.mp3'))
60
61 formats = [{
62 'url': format_url,
63 'format_note': channels,
64 'quality': quality,
65 } for quality, format_url in enumerate(audio_urls)]
66
67 return {
68 'id': audio_id,
69 'title': title,
70 'description': description,
71 'duration': duration,
72 'uploader': uploader,
73 'upload_date': upload_date,
74 'tags': tags,
75 'formats': formats,
76 }
77
[end of yt_dlp/extractor/freesound.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt_dlp/extractor/freesound.py b/yt_dlp/extractor/freesound.py
--- a/yt_dlp/extractor/freesound.py
+++ b/yt_dlp/extractor/freesound.py
@@ -52,6 +52,7 @@
tags_str = get_element_by_class('tags', webpage)
tags = re.findall(r'<a[^>]+>([^<]+)', tags_str) if tags_str else None
+ audio_url = re.sub(r'^https?://freesound\.org(https?://)', r'\1', audio_url)
audio_urls = [audio_url]
LQ_FORMAT = '-lq.mp3'
|
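The one-line fix is just a prefix strip. Against the malformed URL from the verbose log in the report it behaves as follows (sample URLs taken from the issue; well-formed preview URLs are left untouched, so the substitution is safe to apply unconditionally):

```python
import re

bad = "https://freesound.orghttps://cdn.freesound.org/previews/194/194503_224081-hq.mp3"
print(re.sub(r'^https?://freesound\.org(https?://)', r'\1', bad))
# -> https://cdn.freesound.org/previews/194/194503_224081-hq.mp3

ok = "https://cdn.freesound.org/previews/194/194503_224081-hq.mp3"
assert re.sub(r'^https?://freesound\.org(https?://)', r'\1', ok) == ok
```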
{"golden_diff": "diff --git a/yt_dlp/extractor/freesound.py b/yt_dlp/extractor/freesound.py\n--- a/yt_dlp/extractor/freesound.py\n+++ b/yt_dlp/extractor/freesound.py\n@@ -52,6 +52,7 @@\n tags_str = get_element_by_class('tags', webpage)\n tags = re.findall(r'<a[^>]+>([^<]+)', tags_str) if tags_str else None\n \n+ audio_url = re.sub(r'^https?://freesound\\.org(https?://)', r'\\1', audio_url)\n audio_urls = [audio_url]\n \n LQ_FORMAT = '-lq.mp3'\n", "issue": "[Freesound] Downloads are broken\n### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\n_No response_\n\n### Provide a description that is worded well enough to be understood\n\nFreesound.org downloads are broken due to how the preview URLs sometimes appear in the HTML: `<meta property=\"og:audio\" content=\"https://freesound.orghttps://cdn.freesound.org/...\" />`\r\n\r\nI have made a fix for this and will be opening a PR shortly.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\nyt-dlp -vU https://www.freesound.org/people/miklovan/sounds/194503/\r\n[debug] Command-line config: ['-vU', 'https://www.freesound.org/people/miklovan/sounds/194503/']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2023.01.06 [6becd25] (pip)\r\n[debug] Python 3.10.2 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1m 14 Dec 2021)\r\n[debug] exe versions: ffmpeg git-2019-11-13-a7245ad, ffprobe 5.1.2-full_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.12.0, brotli-1.0.9, certifi-2021.10.08, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.1\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1760 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2023.01.06, Current version: 2023.01.06\r\nyt-dlp is up to date (2023.01.06)\r\n[Freesound] Extracting URL: https://www.freesound.org/people/miklovan/sounds/194503/\r\n[Freesound] 194503: Downloading webpage\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, 
fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] 194503: Downloading 1 format(s): 1\r\n[debug] Invoking http downloader on \"https://freesound.orghttps://cdn.freesound.org/previews/194/194503_224081-hq.mp3\"\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (1/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (2/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (3/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (4/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (5/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (6/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (7/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (8/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (9/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Retrying (10/10)...\r\n[download] Got error: <urlopen error [Errno 11001] getaddrinfo failed>. Giving up after 10 retries\r\n File \"C:\\Program Files\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Program Files\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Program Files\\Python310\\Scripts\\yt-dlp.exe\\__main__.py\", line 7, in <module>\r\n sys.exit(main())\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\__init__.py\", line 970, in main\r\n _exit(*variadic(_real_main(argv)))\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\__init__.py\", line 960, in _real_main\r\n return ydl.download(all_urls)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3358, in download\r\n self.__download_wrapper(self.extract_info)(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3333, in wrapper\r\n res = func(*args, **kwargs)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 1491, in extract_info\r\n return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 1502, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 1599, in __extract_info\r\n return self.process_ie_result(ie_result, download, extra_info)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 1658, in process_ie_result\r\n ie_result = self.process_video_result(ie_result, download=download)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 2772, in process_video_result\r\n self.process_info(new_info)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 3236, in process_info\r\n success, real_download = self.dl(temp_filename, info_dict)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 2959, in dl\r\n return fd.download(name, new_info, subtitle)\r\n File \"C:\\Program 
Files\\Python310\\lib\\site-packages\\yt_dlp\\downloader\\common.py\", line 444, in download\r\n ret = self.real_download(filename, info_dict)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\downloader\\http.py\", line 369, in real_download\r\n for retry in RetryManager(self.params.get('retries'), self.report_retry):\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\utils.py\", line 6023, in __iter__\r\n self.error_callback(self.error, self.attempt, self.retries)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\downloader\\common.py\", line 389, in report_retry\r\n RetryManager.report_retry(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\utils.py\", line 6030, in report_retry\r\n return error(f'{e}. Giving up after {count - 1} retries') if count > 1 else error(str(e))\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\downloader\\common.py\", line 392, in <lambda>\r\n error=IDENTITY if not fatal else lambda e: self.report_error(f'\\r[download] Got error: {e}'),\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 1012, in report_error\r\n self.trouble(f'{self._format_err(\"ERROR:\", self.Styles.ERROR)} {message}', *args, **kwargs)\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\yt_dlp\\YoutubeDL.py\", line 941, in trouble\r\n tb_data = traceback.format_list(traceback.extract_stack())\n```\n\n", "before_files": [{"content": "import re\n\nfrom .common import InfoExtractor\nfrom ..utils import (\n float_or_none,\n get_element_by_class,\n get_element_by_id,\n unified_strdate,\n)\n\n\nclass FreesoundIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?freesound\\.org/people/[^/]+/sounds/(?P<id>[^/]+)'\n _TEST = {\n 'url': 'http://www.freesound.org/people/miklovan/sounds/194503/',\n 'md5': '12280ceb42c81f19a515c745eae07650',\n 'info_dict': {\n 'id': '194503',\n 'ext': 'mp3',\n 'title': 'gulls in the city.wav',\n 'description': 'the sounds of seagulls in the city',\n 'duration': 130.233,\n 'uploader': 'miklovan',\n 'upload_date': '20130715',\n 'tags': list,\n }\n }\n\n def _real_extract(self, url):\n audio_id = self._match_id(url)\n\n webpage = self._download_webpage(url, audio_id)\n\n audio_url = self._og_search_property('audio', webpage, 'song url')\n title = self._og_search_property('audio:title', webpage, 'song title')\n\n description = self._html_search_regex(\n r'(?s)id=[\"\\']sound_description[\"\\'][^>]*>(.+?)</div>',\n webpage, 'description', fatal=False)\n\n duration = float_or_none(\n get_element_by_class('duration', webpage), scale=1000)\n\n upload_date = unified_strdate(get_element_by_id('sound_date', webpage))\n uploader = self._og_search_property(\n 'audio:artist', webpage, 'uploader', fatal=False)\n\n channels = self._html_search_regex(\n r'Channels</dt><dd>(.+?)</dd>', webpage,\n 'channels info', fatal=False)\n\n tags_str = get_element_by_class('tags', webpage)\n tags = re.findall(r'<a[^>]+>([^<]+)', tags_str) if tags_str else None\n\n audio_urls = [audio_url]\n\n LQ_FORMAT = '-lq.mp3'\n if LQ_FORMAT in audio_url:\n audio_urls.append(audio_url.replace(LQ_FORMAT, '-hq.mp3'))\n\n formats = [{\n 'url': format_url,\n 'format_note': channels,\n 'quality': quality,\n } for quality, format_url in enumerate(audio_urls)]\n\n return {\n 'id': audio_id,\n 'title': title,\n 'description': description,\n 'duration': duration,\n 'uploader': uploader,\n 'upload_date': upload_date,\n 'tags': tags,\n 'formats': formats,\n }\n", "path": 
"yt_dlp/extractor/freesound.py"}]}
| 3,915 | 155 |
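The golden_diff field just above adds a single `re.sub` call that strips a doubled host prefix from the preview URL. A minimal standalone sketch of that rewrite, using the URL quoted in the record's own verbose log (the helper name is invented for illustration, not part of the extractor):
```
import re

def strip_doubled_host(audio_url):
    # Keep only the inner absolute URL when the page prepended the site origin
    # to an already-absolute CDN link; leave well-formed URLs untouched.
    return re.sub(r'^https?://freesound\.org(https?://)', r'\1', audio_url)

broken = 'https://freesound.orghttps://cdn.freesound.org/previews/194/194503_224081-hq.mp3'
clean = 'https://cdn.freesound.org/previews/194/194503_224081-hq.mp3'
assert strip_doubled_host(broken) == clean
assert strip_doubled_host(clean) == clean
```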
gh_patches_debug_16978
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-3183
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mail: New Session Proposals not Sent out to Organizers and Co-organizers
After submissions from several people, the new Session Proposals have not been sent out to Organizers and Co-organizers. One reason could be that these people did not verify their email address.
Please check configuration and ensure all emails of all submissions are sent to organizers/co-organizers.
</issue>
<code>
[start of app/helpers/notification_email_triggers.py]
1 from flask import url_for
2
3 from app.helpers.data_getter import DataGetter
4 from app.helpers.helpers import send_new_session_organizer, send_notif_new_session_organizer, \
5 send_notif_session_accept_reject, send_session_accept_reject, send_schedule_change, send_notif_session_schedule, \
6 send_email_for_after_purchase_organizers, send_notif_for_after_purchase_organizer
7 from app.models.mail import NEW_SESSION, SESSION_ACCEPT_REJECT, SESSION_SCHEDULE, TICKET_PURCHASED
8
9
10 def trigger_new_session_notifications(session_id, event_id=None, event=None):
11 if not event and not event_id:
12 raise Exception('event or event_id is required')
13 if not event:
14 event = DataGetter.get_event(event_id)
15
16 link = url_for('event_sessions.session_display_view',
17 event_id=event.id, session_id=session_id, _external=True)
18
19 admin_msg_setting = DataGetter.get_message_setting_by_action(NEW_SESSION)
20 organizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'organizer')
21 for organizer in organizers:
22 email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event_id)
23 if not admin_msg_setting or \
24 (email_notification_setting and email_notification_setting.new_paper == 1 and
25 admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:
26
27 send_new_session_organizer(organizer.user.email, event.name, link)
28 # Send notification
29 send_notif_new_session_organizer(organizer.user, event.name, link)
30
31
32 def trigger_session_state_change_notifications(session, event_id, state=None, message=None, subject=None):
33 if not state:
34 state = session.state
35 link = url_for('event_sessions.session_display_view', event_id=event_id, session_id=session.id, _external=True)
36 admin_msg_setting = DataGetter.get_message_setting_by_action(SESSION_ACCEPT_REJECT)
37 for speaker in session.speakers:
38 email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(speaker.user_id, event_id)
39 if not admin_msg_setting or \
40 (email_notification_setting and email_notification_setting.session_accept_reject == 1 and
41 admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:
42
43 if speaker.email:
44 send_session_accept_reject(speaker.email, session.title, state, link, subject=subject, message=message)
45 # Send notification
46 if speaker.user:
47 send_notif_session_accept_reject(speaker.user, session.title, state, link)
48 session.state_email_sent = True
49 from app.helpers.data import save_to_db
50 save_to_db(session)
51
52
53 def trigger_session_schedule_change_notifications(session, event_id):
54 link = url_for('event_sessions.session_display_view', event_id=event_id, session_id=session.id, _external=True)
55 admin_msg_setting = DataGetter.get_message_setting_by_action(SESSION_SCHEDULE)
56 for speaker in session.speakers:
57 email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(speaker.user_id, event_id)
58 if not admin_msg_setting or \
59 (email_notification_setting and email_notification_setting.session_schedule == 1 and
60 admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:
61 if speaker.email:
62 send_schedule_change(speaker.email, session.title, link)
63 # Send notification
64 if speaker.user:
65 send_notif_session_schedule(speaker.user, session.title, link)
66
67
68 def trigger_after_purchase_notifications(buyer_email, event_id, event, invoice_id, order_url):
69 if not event and not event_id:
70 raise Exception('event or event_id is required')
71 if not event:
72 event = DataGetter.get_event(event_id)
73
74 admin_msg_setting = DataGetter.get_message_setting_by_action(TICKET_PURCHASED)
75 organizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'organizer')
76 for organizer in organizers:
77 email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event_id)
78 if not admin_msg_setting or \
79 (email_notification_setting and email_notification_setting.after_ticket_purchase == 1 and
80 admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:
81 send_email_for_after_purchase_organizers(organizer.user.email, buyer_email, invoice_id, order_url, event.name, event.organizer_name)
82 send_notif_for_after_purchase_organizer(organizer.user, invoice_id, order_url, event.name, buyer_email)
83
84 coorganizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'coorganizer')
85 for coorganizer in coorganizers:
86 email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(coorganizer.user.id, event_id)
87 if not admin_msg_setting or \
88 (email_notification_setting and email_notification_setting.after_ticket_purchase == 1 and
89 admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:
90 send_email_for_after_purchase_organizers(coorganizer.user.email, buyer_email, invoice_id, order_url, event.name, event.organizer_name)
91 send_notif_for_after_purchase_organizer(organizer.user, invoice_id, order_url, event.name, buyer_email)
92
[end of app/helpers/notification_email_triggers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/helpers/notification_email_triggers.py b/app/helpers/notification_email_triggers.py
--- a/app/helpers/notification_email_triggers.py
+++ b/app/helpers/notification_email_triggers.py
@@ -19,7 +19,7 @@
admin_msg_setting = DataGetter.get_message_setting_by_action(NEW_SESSION)
organizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'organizer')
for organizer in organizers:
- email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event_id)
+ email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event.id)
if not admin_msg_setting or \
(email_notification_setting and email_notification_setting.new_paper == 1 and
admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:
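The diff above changes a single argument: the settings lookup now receives `event.id` instead of the `event_id` parameter, which is `None` whenever the caller supplied only an `event` object. A small self-contained illustration of that failure mode, with stub data standing in for the project's real `DataGetter`:
```
def get_email_notification_settings_by_event_id(user_id, event_id):
    # Stub: settings are keyed by (user id, real event id); a None key finds nothing.
    settings = {(1, 42): {'new_paper': 1}}
    return settings.get((user_id, event_id))

class Event:
    id = 42

event, event_id = Event(), None          # caller supplied event, not event_id

# Old lookup: None key, no setting found, so the notification check can fail
# and the organizer mail is skipped.
assert get_email_notification_settings_by_event_id(1, event_id) is None
# Fixed lookup: event.id resolves the organizer's notification settings.
assert get_email_notification_settings_by_event_id(1, event.id) == {'new_paper': 1}
```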
|
{"golden_diff": "diff --git a/app/helpers/notification_email_triggers.py b/app/helpers/notification_email_triggers.py\n--- a/app/helpers/notification_email_triggers.py\n+++ b/app/helpers/notification_email_triggers.py\n@@ -19,7 +19,7 @@\n admin_msg_setting = DataGetter.get_message_setting_by_action(NEW_SESSION)\n organizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'organizer')\n for organizer in organizers:\n- email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event_id)\n+ email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event.id)\n if not admin_msg_setting or \\\n (email_notification_setting and email_notification_setting.new_paper == 1 and\n admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:\n", "issue": "Mail: New Session Proposals not Sent out to Organizers and Co-organizers\nAfter the submission of several people the new Session Proposals have not been sent out to Organizers and Co-organizers. One reason could be, that these people did not verify their email address.\r\n\r\nPlease check configuration and ensure all emails of all submissions are sent to organizers/co-organizers.\n", "before_files": [{"content": "from flask import url_for\n\nfrom app.helpers.data_getter import DataGetter\nfrom app.helpers.helpers import send_new_session_organizer, send_notif_new_session_organizer, \\\n send_notif_session_accept_reject, send_session_accept_reject, send_schedule_change, send_notif_session_schedule, \\\n send_email_for_after_purchase_organizers, send_notif_for_after_purchase_organizer\nfrom app.models.mail import NEW_SESSION, SESSION_ACCEPT_REJECT, SESSION_SCHEDULE, TICKET_PURCHASED\n\n\ndef trigger_new_session_notifications(session_id, event_id=None, event=None):\n if not event and not event_id:\n raise Exception('event or event_id is required')\n if not event:\n event = DataGetter.get_event(event_id)\n\n link = url_for('event_sessions.session_display_view',\n event_id=event.id, session_id=session_id, _external=True)\n\n admin_msg_setting = DataGetter.get_message_setting_by_action(NEW_SESSION)\n organizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'organizer')\n for organizer in organizers:\n email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event_id)\n if not admin_msg_setting or \\\n (email_notification_setting and email_notification_setting.new_paper == 1 and\n admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:\n\n send_new_session_organizer(organizer.user.email, event.name, link)\n # Send notification\n send_notif_new_session_organizer(organizer.user, event.name, link)\n\n\ndef trigger_session_state_change_notifications(session, event_id, state=None, message=None, subject=None):\n if not state:\n state = session.state\n link = url_for('event_sessions.session_display_view', event_id=event_id, session_id=session.id, _external=True)\n admin_msg_setting = DataGetter.get_message_setting_by_action(SESSION_ACCEPT_REJECT)\n for speaker in session.speakers:\n email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(speaker.user_id, event_id)\n if not admin_msg_setting or \\\n (email_notification_setting and email_notification_setting.session_accept_reject == 1 and\n admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:\n\n if speaker.email:\n 
send_session_accept_reject(speaker.email, session.title, state, link, subject=subject, message=message)\n # Send notification\n if speaker.user:\n send_notif_session_accept_reject(speaker.user, session.title, state, link)\n session.state_email_sent = True\n from app.helpers.data import save_to_db\n save_to_db(session)\n\n\ndef trigger_session_schedule_change_notifications(session, event_id):\n link = url_for('event_sessions.session_display_view', event_id=event_id, session_id=session.id, _external=True)\n admin_msg_setting = DataGetter.get_message_setting_by_action(SESSION_SCHEDULE)\n for speaker in session.speakers:\n email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(speaker.user_id, event_id)\n if not admin_msg_setting or \\\n (email_notification_setting and email_notification_setting.session_schedule == 1 and\n admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:\n if speaker.email:\n send_schedule_change(speaker.email, session.title, link)\n # Send notification\n if speaker.user:\n send_notif_session_schedule(speaker.user, session.title, link)\n\n\ndef trigger_after_purchase_notifications(buyer_email, event_id, event, invoice_id, order_url):\n if not event and not event_id:\n raise Exception('event or event_id is required')\n if not event:\n event = DataGetter.get_event(event_id)\n\n admin_msg_setting = DataGetter.get_message_setting_by_action(TICKET_PURCHASED)\n organizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'organizer')\n for organizer in organizers:\n email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(organizer.user.id, event_id)\n if not admin_msg_setting or \\\n (email_notification_setting and email_notification_setting.after_ticket_purchase == 1 and\n admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:\n send_email_for_after_purchase_organizers(organizer.user.email, buyer_email, invoice_id, order_url, event.name, event.organizer_name)\n send_notif_for_after_purchase_organizer(organizer.user, invoice_id, order_url, event.name, buyer_email)\n\n coorganizers = DataGetter.get_user_event_roles_by_role_name(event.id, 'coorganizer')\n for coorganizer in coorganizers:\n email_notification_setting = DataGetter.get_email_notification_settings_by_event_id(coorganizer.user.id, event_id)\n if not admin_msg_setting or \\\n (email_notification_setting and email_notification_setting.after_ticket_purchase == 1 and\n admin_msg_setting.user_control_status == 1) or admin_msg_setting.user_control_status == 0:\n send_email_for_after_purchase_organizers(coorganizer.user.email, buyer_email, invoice_id, order_url, event.name, event.organizer_name)\n send_notif_for_after_purchase_organizer(organizer.user, invoice_id, order_url, event.name, buyer_email)\n", "path": "app/helpers/notification_email_triggers.py"}]}
| 1,940 | 187 |
gh_patches_debug_10374
|
rasdani/github-patches
|
git_diff
|
weecology__retriever-1584
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`retriever download` command throwing error.
After running `retriever download iris`, I am getting this error:

</issue>
<code>
[start of retriever/lib/install.py]
1 import os
2 from collections import OrderedDict
3
4 from retriever.engines import choose_engine
5 from retriever.lib.defaults import DATA_DIR, SCRIPT_WRITE_PATH, PROVENANCE_DIR
6 from retriever.lib.scripts import SCRIPT_LIST, name_matches
7 from retriever.lib.repository import check_for_updates
8 from retriever.lib.provenance import install_committed
9
10
11 def _install(args, use_cache, debug):
12 """Install datasets for retriever."""
13 engine = choose_engine(args)
14 engine.use_cache = use_cache
15
16 if args['dataset'].endswith('.zip') or args['hash_value']:
17 path_to_archive = args['dataset']
18 if args['hash_value']:
19 path_to_archive = os.path.join(
20 PROVENANCE_DIR, args['dataset'],
21 '{}-{}.zip'.format(args['dataset'], args['hash_value']))
22 if not os.path.exists(path_to_archive):
23 print('The committed file does not exist.')
24 engine = install_committed(path_to_archive,
25 engine,
26 force=args.get('force', False))
27 return engine
28 script_list = SCRIPT_LIST()
29 if not (script_list or os.listdir(SCRIPT_WRITE_PATH)):
30 check_for_updates()
31 script_list = SCRIPT_LIST()
32 data_sets_scripts = name_matches(script_list, args['dataset'])
33 if data_sets_scripts:
34 for data_sets_script in data_sets_scripts:
35 print("=> Installing", data_sets_script.name)
36 try:
37 if engine.name == "HDF5":
38 sqlite_opts = {
39 'command': 'install',
40 'dataset': data_sets_script,
41 'engine': 'sqlite',
42 'file': (args["file"].split("."))[0] + ".db",
43 'table_name': args["table_name"],
44 'data_dir': args["data_dir"]
45 }
46 sqlite_engine = choose_engine(sqlite_opts)
47 data_sets_script.download(sqlite_engine, debug=debug)
48 data_sets_script.engine.final_cleanup()
49 engine.script_table_registry = OrderedDict()
50 data_sets_script.download(engine, debug=debug)
51 data_sets_script.engine.final_cleanup()
52 except Exception as e:
53 print(e)
54 if debug:
55 raise
56 else:
57 message = "Run retriever.datasets() to list the currently available " \
58 "datasets."
59 raise ValueError(message)
60 return engine
61
62
63 def install_csv(dataset,
64 table_name='{db}_{table}.csv',
65 data_dir=DATA_DIR,
66 debug=False,
67 use_cache=True,
68 force=False,
69 hash_value=None):
70 """Install datasets into csv."""
71 args = {
72 'command': 'install',
73 'dataset': dataset,
74 'engine': 'csv',
75 'table_name': table_name,
76 'data_dir': data_dir,
77 'force': force,
78 'hash_value': hash_value
79 }
80 return _install(args, use_cache, debug)
81
82
83 def install_mysql(dataset,
84 user='root',
85 password='',
86 host='localhost',
87 port=3306,
88 database_name='{db}',
89 table_name='{db}.{table}',
90 debug=False,
91 use_cache=True,
92 force=False,
93 hash_value=None):
94 """Install datasets into mysql."""
95 args = {
96 'command': 'install',
97 'database_name': database_name,
98 'engine': 'mysql',
99 'dataset': dataset,
100 'host': host,
101 'port': port,
102 'password': password,
103 'table_name': table_name,
104 'user': user,
105 'force': force,
106 'hash_value': hash_value
107 }
108 return _install(args, use_cache, debug)
109
110
111 def install_postgres(dataset,
112 user='postgres',
113 password='',
114 host='localhost',
115 port=5432,
116 database='postgres',
117 database_name='{db}',
118 table_name='{db}.{table}',
119 bbox=[],
120 debug=False,
121 use_cache=True,
122 force=False,
123 hash_value=None):
124 """Install datasets into postgres."""
125 args = {
126 'command': 'install',
127 'database': database,
128 'database_name': database_name,
129 'engine': 'postgres',
130 'dataset': dataset,
131 'host': host,
132 'port': port,
133 'password': password,
134 'table_name': table_name,
135 'user': user,
136 'bbox': bbox,
137 'force': force,
138 'hash_value': hash_value
139 }
140 return _install(args, use_cache, debug)
141
142
143 def install_sqlite(dataset,
144 file='sqlite.db',
145 table_name='{db}_{table}',
146 data_dir=DATA_DIR,
147 debug=False,
148 use_cache=True,
149 force=False,
150 hash_value=None):
151 """Install datasets into sqlite."""
152 args = {
153 'command': 'install',
154 'dataset': dataset,
155 'engine': 'sqlite',
156 'file': file,
157 'table_name': table_name,
158 'data_dir': data_dir,
159 'force': force,
160 'hash_value': hash_value
161 }
162 return _install(args, use_cache, debug)
163
164
165 def install_msaccess(dataset,
166 file='access.mdb',
167 table_name='[{db} {table}]',
168 data_dir=DATA_DIR,
169 debug=False,
170 use_cache=True,
171 force=False,
172 hash_value=None):
173 """Install datasets into msaccess."""
174 args = {
175 'command': 'install',
176 'dataset': dataset,
177 'engine': 'msaccess',
178 'file': file,
179 'table_name': table_name,
180 'data_dir': data_dir,
181 'force': force,
182 'hash_value': hash_value
183 }
184 return _install(args, use_cache, debug)
185
186
187 def install_json(dataset,
188 table_name='{db}_{table}.json',
189 data_dir=DATA_DIR,
190 debug=False,
191 use_cache=True,
192 pretty=False,
193 force=False,
194 hash_value=None):
195 """Install datasets into json."""
196 args = {
197 'command': 'install',
198 'dataset': dataset,
199 'engine': 'json',
200 'table_name': table_name,
201 'data_dir': data_dir,
202 'pretty': pretty,
203 'force': force,
204 'hash_value': hash_value
205 }
206 return _install(args, use_cache, debug)
207
208
209 def install_xml(dataset,
210 table_name='{db}_{table}.xml',
211 data_dir=DATA_DIR,
212 debug=False,
213 use_cache=True,
214 force=False,
215 hash_value=None):
216 """Install datasets into xml."""
217 args = {
218 'command': 'install',
219 'dataset': dataset,
220 'engine': 'xml',
221 'table_name': table_name,
222 'data_dir': data_dir,
223 'force': force,
224 'hash_value': hash_value
225 }
226 return _install(args, use_cache, debug)
227
228
229 def install_hdf5(dataset,
230 file='hdf5.h5',
231 table_name='{db}_{table}',
232 data_dir=DATA_DIR,
233 debug=False,
234 use_cache=True,
235 hash_value=None):
236 """Install datasets into hdf5."""
237 args = {
238 'command': 'install',
239 'dataset': dataset,
240 'engine': 'hdf5',
241 'file': file,
242 'table_name': table_name,
243 'data_dir': data_dir,
244 'hash_value': hash_value
245 }
246 return _install(args, use_cache, debug)
247
[end of retriever/lib/install.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/retriever/lib/install.py b/retriever/lib/install.py
--- a/retriever/lib/install.py
+++ b/retriever/lib/install.py
@@ -13,9 +13,9 @@
engine = choose_engine(args)
engine.use_cache = use_cache
- if args['dataset'].endswith('.zip') or args['hash_value']:
+ if args['dataset'].endswith('.zip') or args.get('hash_value'):
path_to_archive = args['dataset']
- if args['hash_value']:
+ if args.get('hash_value'):
path_to_archive = os.path.join(
PROVENANCE_DIR, args['dataset'],
'{}-{}.zip'.format(args['dataset'], args['hash_value']))
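The change above is subscripting versus `dict.get`: the `retriever download` path builds its `args` without a `hash_value` key, so the old `args['hash_value']` presumably raises before any download starts, while `.get()` simply yields a falsy `None`. A tiny illustration with a made-up `args` dict (not the CLI's real one):
```
args = {'dataset': 'iris'}                                    # no 'hash_value' key

try:
    _ = args['dataset'].endswith('.zip') or args['hash_value']   # old check
except KeyError as err:
    print('old check raises KeyError:', err)

# New check: the missing key is simply falsy, so the normal path continues.
assert not (args['dataset'].endswith('.zip') or args.get('hash_value'))
```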
|
{"golden_diff": "diff --git a/retriever/lib/install.py b/retriever/lib/install.py\n--- a/retriever/lib/install.py\n+++ b/retriever/lib/install.py\n@@ -13,9 +13,9 @@\n engine = choose_engine(args)\n engine.use_cache = use_cache\n \n- if args['dataset'].endswith('.zip') or args['hash_value']:\n+ if args['dataset'].endswith('.zip') or args.get('hash_value'):\n path_to_archive = args['dataset']\n- if args['hash_value']:\n+ if args.get('hash_value'):\n path_to_archive = os.path.join(\n PROVENANCE_DIR, args['dataset'],\n '{}-{}.zip'.format(args['dataset'], args['hash_value']))\n", "issue": "`retriever download` command throwing error.\nAfter running `retriever download iris`, I am getting this error : \r\n\n", "before_files": [{"content": "import os\nfrom collections import OrderedDict\n\nfrom retriever.engines import choose_engine\nfrom retriever.lib.defaults import DATA_DIR, SCRIPT_WRITE_PATH, PROVENANCE_DIR\nfrom retriever.lib.scripts import SCRIPT_LIST, name_matches\nfrom retriever.lib.repository import check_for_updates\nfrom retriever.lib.provenance import install_committed\n\n\ndef _install(args, use_cache, debug):\n \"\"\"Install datasets for retriever.\"\"\"\n engine = choose_engine(args)\n engine.use_cache = use_cache\n\n if args['dataset'].endswith('.zip') or args['hash_value']:\n path_to_archive = args['dataset']\n if args['hash_value']:\n path_to_archive = os.path.join(\n PROVENANCE_DIR, args['dataset'],\n '{}-{}.zip'.format(args['dataset'], args['hash_value']))\n if not os.path.exists(path_to_archive):\n print('The committed file does not exist.')\n engine = install_committed(path_to_archive,\n engine,\n force=args.get('force', False))\n return engine\n script_list = SCRIPT_LIST()\n if not (script_list or os.listdir(SCRIPT_WRITE_PATH)):\n check_for_updates()\n script_list = SCRIPT_LIST()\n data_sets_scripts = name_matches(script_list, args['dataset'])\n if data_sets_scripts:\n for data_sets_script in data_sets_scripts:\n print(\"=> Installing\", data_sets_script.name)\n try:\n if engine.name == \"HDF5\":\n sqlite_opts = {\n 'command': 'install',\n 'dataset': data_sets_script,\n 'engine': 'sqlite',\n 'file': (args[\"file\"].split(\".\"))[0] + \".db\",\n 'table_name': args[\"table_name\"],\n 'data_dir': args[\"data_dir\"]\n }\n sqlite_engine = choose_engine(sqlite_opts)\n data_sets_script.download(sqlite_engine, debug=debug)\n data_sets_script.engine.final_cleanup()\n engine.script_table_registry = OrderedDict()\n data_sets_script.download(engine, debug=debug)\n data_sets_script.engine.final_cleanup()\n except Exception as e:\n print(e)\n if debug:\n raise\n else:\n message = \"Run retriever.datasets() to list the currently available \" \\\n \"datasets.\"\n raise ValueError(message)\n return engine\n\n\ndef install_csv(dataset,\n table_name='{db}_{table}.csv',\n data_dir=DATA_DIR,\n debug=False,\n use_cache=True,\n force=False,\n hash_value=None):\n \"\"\"Install datasets into csv.\"\"\"\n args = {\n 'command': 'install',\n 'dataset': dataset,\n 'engine': 'csv',\n 'table_name': table_name,\n 'data_dir': data_dir,\n 'force': force,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n\n\ndef install_mysql(dataset,\n user='root',\n password='',\n host='localhost',\n port=3306,\n database_name='{db}',\n table_name='{db}.{table}',\n debug=False,\n use_cache=True,\n force=False,\n hash_value=None):\n \"\"\"Install datasets into mysql.\"\"\"\n args = {\n 'command': 'install',\n 'database_name': database_name,\n 'engine': 'mysql',\n 'dataset': dataset,\n 'host': host,\n 
'port': port,\n 'password': password,\n 'table_name': table_name,\n 'user': user,\n 'force': force,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n\n\ndef install_postgres(dataset,\n user='postgres',\n password='',\n host='localhost',\n port=5432,\n database='postgres',\n database_name='{db}',\n table_name='{db}.{table}',\n bbox=[],\n debug=False,\n use_cache=True,\n force=False,\n hash_value=None):\n \"\"\"Install datasets into postgres.\"\"\"\n args = {\n 'command': 'install',\n 'database': database,\n 'database_name': database_name,\n 'engine': 'postgres',\n 'dataset': dataset,\n 'host': host,\n 'port': port,\n 'password': password,\n 'table_name': table_name,\n 'user': user,\n 'bbox': bbox,\n 'force': force,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n\n\ndef install_sqlite(dataset,\n file='sqlite.db',\n table_name='{db}_{table}',\n data_dir=DATA_DIR,\n debug=False,\n use_cache=True,\n force=False,\n hash_value=None):\n \"\"\"Install datasets into sqlite.\"\"\"\n args = {\n 'command': 'install',\n 'dataset': dataset,\n 'engine': 'sqlite',\n 'file': file,\n 'table_name': table_name,\n 'data_dir': data_dir,\n 'force': force,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n\n\ndef install_msaccess(dataset,\n file='access.mdb',\n table_name='[{db} {table}]',\n data_dir=DATA_DIR,\n debug=False,\n use_cache=True,\n force=False,\n hash_value=None):\n \"\"\"Install datasets into msaccess.\"\"\"\n args = {\n 'command': 'install',\n 'dataset': dataset,\n 'engine': 'msaccess',\n 'file': file,\n 'table_name': table_name,\n 'data_dir': data_dir,\n 'force': force,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n\n\ndef install_json(dataset,\n table_name='{db}_{table}.json',\n data_dir=DATA_DIR,\n debug=False,\n use_cache=True,\n pretty=False,\n force=False,\n hash_value=None):\n \"\"\"Install datasets into json.\"\"\"\n args = {\n 'command': 'install',\n 'dataset': dataset,\n 'engine': 'json',\n 'table_name': table_name,\n 'data_dir': data_dir,\n 'pretty': pretty,\n 'force': force,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n\n\ndef install_xml(dataset,\n table_name='{db}_{table}.xml',\n data_dir=DATA_DIR,\n debug=False,\n use_cache=True,\n force=False,\n hash_value=None):\n \"\"\"Install datasets into xml.\"\"\"\n args = {\n 'command': 'install',\n 'dataset': dataset,\n 'engine': 'xml',\n 'table_name': table_name,\n 'data_dir': data_dir,\n 'force': force,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n\n\ndef install_hdf5(dataset,\n file='hdf5.h5',\n table_name='{db}_{table}',\n data_dir=DATA_DIR,\n debug=False,\n use_cache=True,\n hash_value=None):\n \"\"\"Install datasets into hdf5.\"\"\"\n args = {\n 'command': 'install',\n 'dataset': dataset,\n 'engine': 'hdf5',\n 'file': file,\n 'table_name': table_name,\n 'data_dir': data_dir,\n 'hash_value': hash_value\n }\n return _install(args, use_cache, debug)\n", "path": "retriever/lib/install.py"}]}
| 2,795 | 161 |
gh_patches_debug_18365
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-3139
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding new profile should add newline before
Have multiple IAM users configured in `~/.aws/credentials`
Added IAM user as follows:
`aws configure --profile NEW_PROFILE_NAME`
Completed successfully as expected.
However, the next step of running a command resulted in an error:
```
PS C:\WINDOWS\system32> aws --profile NEW_PROFILE_NAME s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
```
Checking the `~/.aws/credentials` file showed the new creds were appended to the file without a newline, as follows:
```
[OLD_PROFILE_NAME]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAA
aws_secret_access_key = KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK
region=XX-east-X
toolkit_artifact_guid=11111111-1111-1111-1111-111111111111[NEW_PROFILE_NAME]
aws_access_key_id = BBBBBBBBBBBBBBBBBBBB
aws_secret_access_key = JJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ
```
IMHO this is easily fixable by adding a newline echo before the new creds
CLI version: `aws-cli/1.12.2 Python/2.7.14 Windows/10 botocore/1.8.2`
</issue>
<code>
[start of awscli/customizations/configure/writer.py]
1 # Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import os
14 import re
15
16 from . import SectionNotFoundError
17
18
19 class ConfigFileWriter(object):
20 SECTION_REGEX = re.compile(r'^\s*\[(?P<header>[^]]+)\]')
21 OPTION_REGEX = re.compile(
22 r'(?P<option>[^:=][^:=]*)'
23 r'\s*(?P<vi>[:=])\s*'
24 r'(?P<value>.*)$'
25 )
26
27 def update_config(self, new_values, config_filename):
28 """Update config file with new values.
29
30 This method will update a section in a config file with
31 new key value pairs.
32
33 This method provides a few conveniences:
34
35 * If the ``config_filename`` does not exist, it will
36 be created. Any parent directories will also be created
37 if necessary.
38 * If the section to update does not exist, it will be created.
39 * Any existing lines that are specified by ``new_values``
40 **will not be touched**. This ensures that commented out
41 values are left unaltered.
42
43 :type new_values: dict
44 :param new_values: The values to update. There is a special
45 key ``__section__``, that specifies what section in the INI
46 file to update. If this key is not present, then the
47 ``default`` section will be updated with the new values.
48
49 :type config_filename: str
50 :param config_filename: The config filename where values will be
51 written.
52
53 """
54 section_name = new_values.pop('__section__', 'default')
55 if not os.path.isfile(config_filename):
56 self._create_file(config_filename)
57 self._write_new_section(section_name, new_values, config_filename)
58 return
59 with open(config_filename, 'r') as f:
60 contents = f.readlines()
61 # We can only update a single section at a time so we first need
62 # to find the section in question
63 try:
64 self._update_section_contents(contents, section_name, new_values)
65 with open(config_filename, 'w') as f:
66 f.write(''.join(contents))
67 except SectionNotFoundError:
68 self._write_new_section(section_name, new_values, config_filename)
69
70 def _create_file(self, config_filename):
71 # Create the file as well as the parent dir if needed.
72 dirname = os.path.split(config_filename)[0]
73 if not os.path.isdir(dirname):
74 os.makedirs(dirname)
75 with os.fdopen(os.open(config_filename,
76 os.O_WRONLY | os.O_CREAT, 0o600), 'w'):
77 pass
78
79 def _write_new_section(self, section_name, new_values, config_filename):
80 with open(config_filename, 'a') as f:
81 f.write('[%s]\n' % section_name)
82 contents = []
83 self._insert_new_values(line_number=0,
84 contents=contents,
85 new_values=new_values)
86 f.write(''.join(contents))
87
88 def _find_section_start(self, contents, section_name):
89 for i in range(len(contents)):
90 line = contents[i]
91 if line.strip().startswith(('#', ';')):
92 # This is a comment, so we can safely ignore this line.
93 continue
94 match = self.SECTION_REGEX.search(line)
95 if match is not None and self._matches_section(match,
96 section_name):
97 return i
98 raise SectionNotFoundError(section_name)
99
100 def _update_section_contents(self, contents, section_name, new_values):
101 # First, find the line where the section_name is defined.
102 # This will be the value of i.
103 new_values = new_values.copy()
104 # ``contents`` is a list of file line contents.
105 section_start_line_num = self._find_section_start(contents,
106 section_name)
107 # If we get here, then we've found the section. We now need
108 # to figure out if we're updating a value or adding a new value.
109 # There's 2 cases. Either we're setting a normal scalar value
110 # of, we're setting a nested value.
111 last_matching_line = section_start_line_num
112 j = last_matching_line + 1
113 while j < len(contents):
114 line = contents[j]
115 if self.SECTION_REGEX.search(line) is not None:
116 # We've hit a new section which means the config key is
117 # not in the section. We need to add it here.
118 self._insert_new_values(line_number=last_matching_line,
119 contents=contents,
120 new_values=new_values)
121 return
122 match = self.OPTION_REGEX.search(line)
123 if match is not None:
124 last_matching_line = j
125 key_name = match.group(1).strip()
126 if key_name in new_values:
127 # We've found the line that defines the option name.
128 # if the value is not a dict, then we can write the line
129 # out now.
130 if not isinstance(new_values[key_name], dict):
131 option_value = new_values[key_name]
132 new_line = '%s = %s\n' % (key_name, option_value)
133 contents[j] = new_line
134 del new_values[key_name]
135 else:
136 j = self._update_subattributes(
137 j, contents, new_values[key_name],
138 len(match.group(1)) - len(match.group(1).lstrip()))
139 return
140 j += 1
141
142 if new_values:
143 if not contents[-1].endswith('\n'):
144 contents.append('\n')
145 self._insert_new_values(line_number=last_matching_line + 1,
146 contents=contents,
147 new_values=new_values)
148
149 def _update_subattributes(self, index, contents, values, starting_indent):
150 index += 1
151 for i in range(index, len(contents)):
152 line = contents[i]
153 match = self.OPTION_REGEX.search(line)
154 if match is not None:
155 current_indent = len(
156 match.group(1)) - len(match.group(1).lstrip())
157 key_name = match.group(1).strip()
158 if key_name in values:
159 option_value = values[key_name]
160 new_line = '%s%s = %s\n' % (' ' * current_indent,
161 key_name, option_value)
162 contents[i] = new_line
163 del values[key_name]
164 if starting_indent == current_indent or \
165 self.SECTION_REGEX.search(line) is not None:
166 # We've arrived at the starting indent level so we can just
167 # write out all the values now.
168 self._insert_new_values(i - 1, contents, values, ' ')
169 break
170 else:
171 if starting_indent != current_indent:
172 # The option is the last option in the file
173 self._insert_new_values(i, contents, values, ' ')
174 return i
175
176 def _insert_new_values(self, line_number, contents, new_values, indent=''):
177 new_contents = []
178 for key, value in list(new_values.items()):
179 if isinstance(value, dict):
180 subindent = indent + ' '
181 new_contents.append('%s%s =\n' % (indent, key))
182 for subkey, subval in list(value.items()):
183 new_contents.append('%s%s = %s\n' % (subindent, subkey,
184 subval))
185 else:
186 new_contents.append('%s%s = %s\n' % (indent, key, value))
187 del new_values[key]
188 contents.insert(line_number + 1, ''.join(new_contents))
189
190 def _matches_section(self, match, section_name):
191 parts = section_name.split(' ')
192 unquoted_match = match.group(0) == '[%s]' % section_name
193 if len(parts) > 1:
194 quoted_match = match.group(0) == '[%s "%s"]' % (
195 parts[0], ' '.join(parts[1:]))
196 return unquoted_match or quoted_match
197 return unquoted_match
198
[end of awscli/customizations/configure/writer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/customizations/configure/writer.py b/awscli/customizations/configure/writer.py
--- a/awscli/customizations/configure/writer.py
+++ b/awscli/customizations/configure/writer.py
@@ -76,8 +76,22 @@
os.O_WRONLY | os.O_CREAT, 0o600), 'w'):
pass
+ def _check_file_needs_newline(self, filename):
+ # check if the last byte is a newline
+ with open(filename, 'rb') as f:
+ # check if the file is empty
+ f.seek(0, os.SEEK_END)
+ if not f.tell():
+ return False
+ f.seek(-1, os.SEEK_END)
+ last = f.read()
+ return last != b'\n'
+
def _write_new_section(self, section_name, new_values, config_filename):
+ needs_newline = self._check_file_needs_newline(config_filename)
with open(config_filename, 'a') as f:
+ if needs_newline:
+ f.write('\n')
f.write('[%s]\n' % section_name)
contents = []
self._insert_new_values(line_number=0,
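The `_check_file_needs_newline` helper added above inspects the last byte on disk to decide whether a separating newline must be written before the next `[section]` header. A standalone sketch of the same check against a throwaway file (a temp file, not the real `~/.aws/credentials`):
```
import os
import tempfile

def file_needs_newline(filename):
    with open(filename, 'rb') as f:
        f.seek(0, os.SEEK_END)
        if not f.tell():              # empty file: nothing to separate from
            return False
        f.seek(-1, os.SEEK_END)
        return f.read() != b'\n'

with tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False) as tmp:
    tmp.write('[old_profile]\naws_access_key_id = AAAA')   # no trailing newline

assert file_needs_newline(tmp.name)                 # header would be glued on
with open(tmp.name, 'a') as f:
    f.write('\n[new_profile]\n')                    # newline first, then section
assert not file_needs_newline(tmp.name)
os.unlink(tmp.name)
```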
|
{"golden_diff": "diff --git a/awscli/customizations/configure/writer.py b/awscli/customizations/configure/writer.py\n--- a/awscli/customizations/configure/writer.py\n+++ b/awscli/customizations/configure/writer.py\n@@ -76,8 +76,22 @@\n os.O_WRONLY | os.O_CREAT, 0o600), 'w'):\n pass\n \n+ def _check_file_needs_newline(self, filename):\n+ # check if the last byte is a newline\n+ with open(filename, 'rb') as f:\n+ # check if the file is empty\n+ f.seek(0, os.SEEK_END)\n+ if not f.tell():\n+ return False\n+ f.seek(-1, os.SEEK_END)\n+ last = f.read()\n+ return last != b'\\n'\n+\n def _write_new_section(self, section_name, new_values, config_filename):\n+ needs_newline = self._check_file_needs_newline(config_filename)\n with open(config_filename, 'a') as f:\n+ if needs_newline:\n+ f.write('\\n')\n f.write('[%s]\\n' % section_name)\n contents = []\n self._insert_new_values(line_number=0,\n", "issue": "Adding new profile should add newline before\nHave multiple IAM users configured in `~/.aws/credentials`\r\n\r\nAdded IAM user as follows:\r\n`aws configure --profile NEW_PROFILE_NAME`\r\nCompleted sucessfully as expected.\r\n\r\nHowever, next step of running a command resulted in error: \r\n```\r\nPS C:\\WINDOWS\\system32> aws --profile NEW_PROFILE_NAME s3 ls\r\nUnable to locate credentials. You can configure credentials by running \"aws configure\".\r\n```\r\nChecking `~/.aws/credentials` file showed new creds were appended to file without newline as follows:\r\n```\r\n[OLD_PROFILE_NAME]\r\naws_access_key_id = AAAAAAAAAAAAAAAAAAAA\r\naws_secret_access_key = KKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK\r\nregion=XX-east-X\r\ntoolkit_artifact_guid=11111111-1111-1111-1111-111111111111[NEW_PROFILE_NAME]\r\naws_access_key_id = BBBBBBBBBBBBBBBBBBBB\r\naws_secret_access_key = JJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJJ\r\n```\r\n\r\nIMHO this is easily fixable by adding a newline echo before the new creds\r\n\r\nCLI version: `aws-cli/1.12.2 Python/2.7.14 Windows/10 botocore/1.8.2`\n", "before_files": [{"content": "# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport os\nimport re\n\nfrom . import SectionNotFoundError\n\n\nclass ConfigFileWriter(object):\n SECTION_REGEX = re.compile(r'^\\s*\\[(?P<header>[^]]+)\\]')\n OPTION_REGEX = re.compile(\n r'(?P<option>[^:=][^:=]*)'\n r'\\s*(?P<vi>[:=])\\s*'\n r'(?P<value>.*)$'\n )\n\n def update_config(self, new_values, config_filename):\n \"\"\"Update config file with new values.\n\n This method will update a section in a config file with\n new key value pairs.\n\n This method provides a few conveniences:\n\n * If the ``config_filename`` does not exist, it will\n be created. Any parent directories will also be created\n if necessary.\n * If the section to update does not exist, it will be created.\n * Any existing lines that are specified by ``new_values``\n **will not be touched**. This ensures that commented out\n values are left unaltered.\n\n :type new_values: dict\n :param new_values: The values to update. 
There is a special\n key ``__section__``, that specifies what section in the INI\n file to update. If this key is not present, then the\n ``default`` section will be updated with the new values.\n\n :type config_filename: str\n :param config_filename: The config filename where values will be\n written.\n\n \"\"\"\n section_name = new_values.pop('__section__', 'default')\n if not os.path.isfile(config_filename):\n self._create_file(config_filename)\n self._write_new_section(section_name, new_values, config_filename)\n return\n with open(config_filename, 'r') as f:\n contents = f.readlines()\n # We can only update a single section at a time so we first need\n # to find the section in question\n try:\n self._update_section_contents(contents, section_name, new_values)\n with open(config_filename, 'w') as f:\n f.write(''.join(contents))\n except SectionNotFoundError:\n self._write_new_section(section_name, new_values, config_filename)\n\n def _create_file(self, config_filename):\n # Create the file as well as the parent dir if needed.\n dirname = os.path.split(config_filename)[0]\n if not os.path.isdir(dirname):\n os.makedirs(dirname)\n with os.fdopen(os.open(config_filename,\n os.O_WRONLY | os.O_CREAT, 0o600), 'w'):\n pass\n\n def _write_new_section(self, section_name, new_values, config_filename):\n with open(config_filename, 'a') as f:\n f.write('[%s]\\n' % section_name)\n contents = []\n self._insert_new_values(line_number=0,\n contents=contents,\n new_values=new_values)\n f.write(''.join(contents))\n\n def _find_section_start(self, contents, section_name):\n for i in range(len(contents)):\n line = contents[i]\n if line.strip().startswith(('#', ';')):\n # This is a comment, so we can safely ignore this line.\n continue\n match = self.SECTION_REGEX.search(line)\n if match is not None and self._matches_section(match,\n section_name):\n return i\n raise SectionNotFoundError(section_name)\n\n def _update_section_contents(self, contents, section_name, new_values):\n # First, find the line where the section_name is defined.\n # This will be the value of i.\n new_values = new_values.copy()\n # ``contents`` is a list of file line contents.\n section_start_line_num = self._find_section_start(contents,\n section_name)\n # If we get here, then we've found the section. We now need\n # to figure out if we're updating a value or adding a new value.\n # There's 2 cases. Either we're setting a normal scalar value\n # of, we're setting a nested value.\n last_matching_line = section_start_line_num\n j = last_matching_line + 1\n while j < len(contents):\n line = contents[j]\n if self.SECTION_REGEX.search(line) is not None:\n # We've hit a new section which means the config key is\n # not in the section. 
We need to add it here.\n self._insert_new_values(line_number=last_matching_line,\n contents=contents,\n new_values=new_values)\n return\n match = self.OPTION_REGEX.search(line)\n if match is not None:\n last_matching_line = j\n key_name = match.group(1).strip()\n if key_name in new_values:\n # We've found the line that defines the option name.\n # if the value is not a dict, then we can write the line\n # out now.\n if not isinstance(new_values[key_name], dict):\n option_value = new_values[key_name]\n new_line = '%s = %s\\n' % (key_name, option_value)\n contents[j] = new_line\n del new_values[key_name]\n else:\n j = self._update_subattributes(\n j, contents, new_values[key_name],\n len(match.group(1)) - len(match.group(1).lstrip()))\n return\n j += 1\n\n if new_values:\n if not contents[-1].endswith('\\n'):\n contents.append('\\n')\n self._insert_new_values(line_number=last_matching_line + 1,\n contents=contents,\n new_values=new_values)\n\n def _update_subattributes(self, index, contents, values, starting_indent):\n index += 1\n for i in range(index, len(contents)):\n line = contents[i]\n match = self.OPTION_REGEX.search(line)\n if match is not None:\n current_indent = len(\n match.group(1)) - len(match.group(1).lstrip())\n key_name = match.group(1).strip()\n if key_name in values:\n option_value = values[key_name]\n new_line = '%s%s = %s\\n' % (' ' * current_indent,\n key_name, option_value)\n contents[i] = new_line\n del values[key_name]\n if starting_indent == current_indent or \\\n self.SECTION_REGEX.search(line) is not None:\n # We've arrived at the starting indent level so we can just\n # write out all the values now.\n self._insert_new_values(i - 1, contents, values, ' ')\n break\n else:\n if starting_indent != current_indent:\n # The option is the last option in the file\n self._insert_new_values(i, contents, values, ' ')\n return i\n\n def _insert_new_values(self, line_number, contents, new_values, indent=''):\n new_contents = []\n for key, value in list(new_values.items()):\n if isinstance(value, dict):\n subindent = indent + ' '\n new_contents.append('%s%s =\\n' % (indent, key))\n for subkey, subval in list(value.items()):\n new_contents.append('%s%s = %s\\n' % (subindent, subkey,\n subval))\n else:\n new_contents.append('%s%s = %s\\n' % (indent, key, value))\n del new_values[key]\n contents.insert(line_number + 1, ''.join(new_contents))\n\n def _matches_section(self, match, section_name):\n parts = section_name.split(' ')\n unquoted_match = match.group(0) == '[%s]' % section_name\n if len(parts) > 1:\n quoted_match = match.group(0) == '[%s \"%s\"]' % (\n parts[0], ' '.join(parts[1:]))\n return unquoted_match or quoted_match\n return unquoted_match\n", "path": "awscli/customizations/configure/writer.py"}]}
| 3,176 | 272 |
gh_patches_debug_16928
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-2739
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Subscription example does not work - TypeError: get_context() missing 1 required positional argument: 'response'
## Describe the Bug
When trying to run the first subscription example from the [Documentation](https://strawberry.rocks/docs/general/subscriptions), the following error occurs as soon as the subscription is sent from GraphiQL:
```
Running strawberry on http://0.0.0.0:8000/graphql 🍓
[2023-04-30 17:58:55]: No operation name
subscription {
count(target: 5)
}
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/Users/user/Library/Python/3.9/lib/python/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 254, in run_asgi
result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/errors.py", line 149, in __call__
await self.app(scope, receive, send)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/cors.py", line 76, in __call__
await self.app(scope, receive, send)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/routing.py", line 341, in handle
await self.app(scope, receive, send)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/asgi/__init__.py", line 118, in __call__
await self.graphql_transport_ws_handler_class(
File "/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py", line 78, in handle
return await self.handle_request()
File "/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/asgi/handlers/graphql_transport_ws_handler.py", line 58, in handle_request
await self.handle_message(message)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py", line 125, in handle_message
await handler(handler_arg)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py", line 181, in handle_subscribe
context = await self.get_context()
File "/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/asgi/handlers/graphql_transport_ws_handler.py", line 36, in get_context
return await self._get_context(request=self._ws)
TypeError: get_context() missing 1 required positional argument: 'response'
ERROR: closing handshake failed
Traceback (most recent call last):
File "/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/server.py", line 248, in handler
await self.close()
File "/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py", line 766, in close
await self.write_close_frame(Close(code, reason))
File "/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py", line 1232, in write_close_frame
await self.write_frame(True, OP_CLOSE, data, _state=State.CLOSING)
File "/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py", line 1205, in write_frame
await self.drain()
File "/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py", line 1194, in drain
await self.ensure_open()
File "/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py", line 935, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1000 (OK); no close frame received
```
## System Information
- Operating system: `macOS 13.2 (MacBook Pro M2 Chip - ARM64)`
- Strawberry version (if applicable): `0.175.0`, also tested: `0.173.1`
## Additional Context
- Start command: `python3 -m strawberry server app`
- Python version: `3.9.0`
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/strawberry-graphql/strawberry/issues/2738">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2738/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2738/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of strawberry/asgi/handlers/graphql_ws_handler.py]
1 from __future__ import annotations
2
3 from contextlib import suppress
4 from typing import TYPE_CHECKING, Any, Optional
5
6 from starlette.websockets import WebSocketDisconnect, WebSocketState
7
8 from strawberry.subscriptions import GRAPHQL_WS_PROTOCOL
9 from strawberry.subscriptions.protocols.graphql_ws.handlers import BaseGraphQLWSHandler
10
11 if TYPE_CHECKING:
12 from starlette.websockets import WebSocket
13
14 from strawberry.schema import BaseSchema
15 from strawberry.subscriptions.protocols.graphql_ws.types import OperationMessage
16
17
18 class GraphQLWSHandler(BaseGraphQLWSHandler):
19 def __init__(
20 self,
21 schema: BaseSchema,
22 debug: bool,
23 keep_alive: bool,
24 keep_alive_interval: float,
25 get_context,
26 get_root_value,
27 ws: WebSocket,
28 ):
29 super().__init__(schema, debug, keep_alive, keep_alive_interval)
30 self._get_context = get_context
31 self._get_root_value = get_root_value
32 self._ws = ws
33
34 async def get_context(self) -> Any:
35 return await self._get_context(request=self._ws)
36
37 async def get_root_value(self) -> Any:
38 return await self._get_root_value(request=self._ws)
39
40 async def send_json(self, data: OperationMessage) -> None:
41 await self._ws.send_json(data)
42
43 async def close(self, code: int = 1000, reason: Optional[str] = None) -> None:
44 await self._ws.close(code=code, reason=reason)
45
46 async def handle_request(self) -> Any:
47 await self._ws.accept(subprotocol=GRAPHQL_WS_PROTOCOL)
48
49 try:
50 while self._ws.application_state != WebSocketState.DISCONNECTED:
51 try:
52 message = await self._ws.receive_json()
53 except KeyError:
54 # Ignore non-text messages
55 continue
56 else:
57 await self.handle_message(message)
58 except WebSocketDisconnect: # pragma: no cover
59 pass
60 finally:
61 if self.keep_alive_task:
62 self.keep_alive_task.cancel()
63 with suppress(BaseException):
64 await self.keep_alive_task
65
66 for operation_id in list(self.subscriptions.keys()):
67 await self.cleanup_operation(operation_id)
68
[end of strawberry/asgi/handlers/graphql_ws_handler.py]
[start of strawberry/asgi/handlers/graphql_transport_ws_handler.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Any
4
5 from starlette.websockets import WebSocketDisconnect, WebSocketState
6
7 from strawberry.subscriptions import GRAPHQL_TRANSPORT_WS_PROTOCOL
8 from strawberry.subscriptions.protocols.graphql_transport_ws.handlers import (
9 BaseGraphQLTransportWSHandler,
10 )
11
12 if TYPE_CHECKING:
13 from datetime import timedelta
14
15 from starlette.websockets import WebSocket
16
17 from strawberry.schema import BaseSchema
18
19
20 class GraphQLTransportWSHandler(BaseGraphQLTransportWSHandler):
21 def __init__(
22 self,
23 schema: BaseSchema,
24 debug: bool,
25 connection_init_wait_timeout: timedelta,
26 get_context,
27 get_root_value,
28 ws: WebSocket,
29 ):
30 super().__init__(schema, debug, connection_init_wait_timeout)
31 self._get_context = get_context
32 self._get_root_value = get_root_value
33 self._ws = ws
34
35 async def get_context(self) -> Any:
36 return await self._get_context(request=self._ws)
37
38 async def get_root_value(self) -> Any:
39 return await self._get_root_value(request=self._ws)
40
41 async def send_json(self, data: dict) -> None:
42 await self._ws.send_json(data)
43
44 async def close(self, code: int, reason: str) -> None:
45 await self._ws.close(code=code, reason=reason)
46
47 async def handle_request(self) -> None:
48 await self._ws.accept(subprotocol=GRAPHQL_TRANSPORT_WS_PROTOCOL)
49
50 try:
51 while self._ws.application_state != WebSocketState.DISCONNECTED:
52 try:
53 message = await self._ws.receive_json()
54 except KeyError:
55 error_message = "WebSocket message type must be text"
56 await self.handle_invalid_message(error_message)
57 else:
58 await self.handle_message(message)
59 except WebSocketDisconnect: # pragma: no cover
60 pass
61 finally:
62 for operation_id in list(self.subscriptions.keys()):
63 await self.cleanup_operation(operation_id)
64 await self.reap_completed_tasks()
65
[end of strawberry/asgi/handlers/graphql_transport_ws_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/strawberry/asgi/handlers/graphql_transport_ws_handler.py b/strawberry/asgi/handlers/graphql_transport_ws_handler.py
--- a/strawberry/asgi/handlers/graphql_transport_ws_handler.py
+++ b/strawberry/asgi/handlers/graphql_transport_ws_handler.py
@@ -33,7 +33,7 @@
self._ws = ws
async def get_context(self) -> Any:
- return await self._get_context(request=self._ws)
+ return await self._get_context(request=self._ws, response=None)
async def get_root_value(self) -> Any:
return await self._get_root_value(request=self._ws)
diff --git a/strawberry/asgi/handlers/graphql_ws_handler.py b/strawberry/asgi/handlers/graphql_ws_handler.py
--- a/strawberry/asgi/handlers/graphql_ws_handler.py
+++ b/strawberry/asgi/handlers/graphql_ws_handler.py
@@ -32,7 +32,7 @@
self._ws = ws
async def get_context(self) -> Any:
- return await self._get_context(request=self._ws)
+ return await self._get_context(request=self._ws, response=None)
async def get_root_value(self) -> Any:
return await self._get_root_value(request=self._ws)
|
{"golden_diff": "diff --git a/strawberry/asgi/handlers/graphql_transport_ws_handler.py b/strawberry/asgi/handlers/graphql_transport_ws_handler.py\n--- a/strawberry/asgi/handlers/graphql_transport_ws_handler.py\n+++ b/strawberry/asgi/handlers/graphql_transport_ws_handler.py\n@@ -33,7 +33,7 @@\n self._ws = ws\n \n async def get_context(self) -> Any:\n- return await self._get_context(request=self._ws)\n+ return await self._get_context(request=self._ws, response=None)\n \n async def get_root_value(self) -> Any:\n return await self._get_root_value(request=self._ws)\ndiff --git a/strawberry/asgi/handlers/graphql_ws_handler.py b/strawberry/asgi/handlers/graphql_ws_handler.py\n--- a/strawberry/asgi/handlers/graphql_ws_handler.py\n+++ b/strawberry/asgi/handlers/graphql_ws_handler.py\n@@ -32,7 +32,7 @@\n self._ws = ws\n \n async def get_context(self) -> Any:\n- return await self._get_context(request=self._ws)\n+ return await self._get_context(request=self._ws, response=None)\n \n async def get_root_value(self) -> Any:\n return await self._get_root_value(request=self._ws)\n", "issue": "Subscription example does not work - TypeError: get_context() missing 1 required positional argument: 'response'\n## Describe the Bug\r\n\r\nWhen trying to run the first subscription example from the [Documentation](https://strawberry.rocks/docs/general/subscriptions) the following error occurs as soon as the subscription is sent from GraphiQL:\r\n\r\n```\r\nRunning strawberry on http://0.0.0.0:8000/graphql \ud83c\udf53\r\n[2023-04-30 17:58:55]: No operation name\r\nsubscription {\r\n count(target: 5)\r\n}\r\n\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/uvicorn/protocols/websockets/websockets_impl.py\", line 254, in run_asgi\r\n result = await self.app(self.scope, self.asgi_receive, self.asgi_send)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/uvicorn/middleware/proxy_headers.py\", line 78, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/applications.py\", line 122, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/errors.py\", line 149, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/cors.py\", line 76, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/routing.py\", line 718, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/starlette/routing.py\", line 341, in handle\r\n await self.app(scope, receive, send)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/asgi/__init__.py\", line 118, in __call__\r\n await self.graphql_transport_ws_handler_class(\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py\", line 78, in handle\r\n return await 
self.handle_request()\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/asgi/handlers/graphql_transport_ws_handler.py\", line 58, in handle_request\r\n await self.handle_message(message)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py\", line 125, in handle_message\r\n await handler(handler_arg)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/subscriptions/protocols/graphql_transport_ws/handlers.py\", line 181, in handle_subscribe\r\n context = await self.get_context()\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/strawberry/asgi/handlers/graphql_transport_ws_handler.py\", line 36, in get_context\r\n return await self._get_context(request=self._ws)\r\nTypeError: get_context() missing 1 required positional argument: 'response'\r\nERROR: closing handshake failed\r\nTraceback (most recent call last):\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/server.py\", line 248, in handler\r\n await self.close()\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py\", line 766, in close\r\n await self.write_close_frame(Close(code, reason))\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py\", line 1232, in write_close_frame\r\n await self.write_frame(True, OP_CLOSE, data, _state=State.CLOSING)\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py\", line 1205, in write_frame\r\n await self.drain()\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py\", line 1194, in drain\r\n await self.ensure_open()\r\n File \"/Users/user/Library/Python/3.9/lib/python/site-packages/websockets/legacy/protocol.py\", line 935, in ensure_open\r\n raise self.connection_closed_exc()\r\nwebsockets.exceptions.ConnectionClosedError: sent 1000 (OK); no close frame received\r\n```\r\n\r\n## System Information\r\n\r\n - Operating system: `macOS 13.2 (MacBook Pro M2 Chip - ARM64)`\r\n - Strawberry version (if applicable): `0.175.0`, also tested: `0.173.1`\r\n\r\n## Additional Context\r\n\r\n- Start command: `python3 -m strawberry server app`\r\n- Python version: `3.9.0`\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/2738\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2738/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2738/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom contextlib import suppress\nfrom typing import TYPE_CHECKING, Any, Optional\n\nfrom starlette.websockets import WebSocketDisconnect, WebSocketState\n\nfrom strawberry.subscriptions import GRAPHQL_WS_PROTOCOL\nfrom strawberry.subscriptions.protocols.graphql_ws.handlers import BaseGraphQLWSHandler\n\nif TYPE_CHECKING:\n from starlette.websockets import WebSocket\n\n from strawberry.schema import BaseSchema\n from 
strawberry.subscriptions.protocols.graphql_ws.types import OperationMessage\n\n\nclass GraphQLWSHandler(BaseGraphQLWSHandler):\n def __init__(\n self,\n schema: BaseSchema,\n debug: bool,\n keep_alive: bool,\n keep_alive_interval: float,\n get_context,\n get_root_value,\n ws: WebSocket,\n ):\n super().__init__(schema, debug, keep_alive, keep_alive_interval)\n self._get_context = get_context\n self._get_root_value = get_root_value\n self._ws = ws\n\n async def get_context(self) -> Any:\n return await self._get_context(request=self._ws)\n\n async def get_root_value(self) -> Any:\n return await self._get_root_value(request=self._ws)\n\n async def send_json(self, data: OperationMessage) -> None:\n await self._ws.send_json(data)\n\n async def close(self, code: int = 1000, reason: Optional[str] = None) -> None:\n await self._ws.close(code=code, reason=reason)\n\n async def handle_request(self) -> Any:\n await self._ws.accept(subprotocol=GRAPHQL_WS_PROTOCOL)\n\n try:\n while self._ws.application_state != WebSocketState.DISCONNECTED:\n try:\n message = await self._ws.receive_json()\n except KeyError:\n # Ignore non-text messages\n continue\n else:\n await self.handle_message(message)\n except WebSocketDisconnect: # pragma: no cover\n pass\n finally:\n if self.keep_alive_task:\n self.keep_alive_task.cancel()\n with suppress(BaseException):\n await self.keep_alive_task\n\n for operation_id in list(self.subscriptions.keys()):\n await self.cleanup_operation(operation_id)\n", "path": "strawberry/asgi/handlers/graphql_ws_handler.py"}, {"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Any\n\nfrom starlette.websockets import WebSocketDisconnect, WebSocketState\n\nfrom strawberry.subscriptions import GRAPHQL_TRANSPORT_WS_PROTOCOL\nfrom strawberry.subscriptions.protocols.graphql_transport_ws.handlers import (\n BaseGraphQLTransportWSHandler,\n)\n\nif TYPE_CHECKING:\n from datetime import timedelta\n\n from starlette.websockets import WebSocket\n\n from strawberry.schema import BaseSchema\n\n\nclass GraphQLTransportWSHandler(BaseGraphQLTransportWSHandler):\n def __init__(\n self,\n schema: BaseSchema,\n debug: bool,\n connection_init_wait_timeout: timedelta,\n get_context,\n get_root_value,\n ws: WebSocket,\n ):\n super().__init__(schema, debug, connection_init_wait_timeout)\n self._get_context = get_context\n self._get_root_value = get_root_value\n self._ws = ws\n\n async def get_context(self) -> Any:\n return await self._get_context(request=self._ws)\n\n async def get_root_value(self) -> Any:\n return await self._get_root_value(request=self._ws)\n\n async def send_json(self, data: dict) -> None:\n await self._ws.send_json(data)\n\n async def close(self, code: int, reason: str) -> None:\n await self._ws.close(code=code, reason=reason)\n\n async def handle_request(self) -> None:\n await self._ws.accept(subprotocol=GRAPHQL_TRANSPORT_WS_PROTOCOL)\n\n try:\n while self._ws.application_state != WebSocketState.DISCONNECTED:\n try:\n message = await self._ws.receive_json()\n except KeyError:\n error_message = \"WebSocket message type must be text\"\n await self.handle_invalid_message(error_message)\n else:\n await self.handle_message(message)\n except WebSocketDisconnect: # pragma: no cover\n pass\n finally:\n for operation_id in list(self.subscriptions.keys()):\n await self.cleanup_operation(operation_id)\n await self.reap_completed_tasks()\n", "path": "strawberry/asgi/handlers/graphql_transport_ws_handler.py"}]}
| 3,152 | 298 |
gh_patches_debug_28782
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-1024
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pyramid app frozen with py2exe fails because of Chameleon imports
If I freeze a Pyramid application with py2exe and try to run it, it fails during the startup, with the following traceback:
```
Traceback (most recent call last):
File "web_server.py", line 2, in <module>
File "pyramid\config\__init__.pyc", line 66, in <module>
File "pyramid\config\rendering.pyc", line 11, in <module>
File "pyramid\chameleon_text.pyc", line 3, in <module>
File "chameleon\__init__.pyc", line 1, in <module>
File "chameleon\zpt\template.pyc", line 10, in <module>
File "chameleon\tales.pyc", line 19, in <module>
File "chameleon\compiler.pyc", line 145, in <module>
File "chameleon\codegen.pyc", line 93, in template
File "inspect.pyc", line 701, in getsource
File "inspect.pyc", line 690, in getsourcelines
File "inspect.pyc", line 538, in findsource
IOError: could not get source code
```
My application doesn't use Chameleon, but the Pyramid modules still import it, which ultimately causes the .exe to fail to run.
</issue>
<code>
[start of pyramid/chameleon_zpt.py]
1 from zope.interface import implementer
2
3 from chameleon.zpt.template import PageTemplateFile
4
5 from pyramid.interfaces import ITemplateRenderer
6 from pyramid.decorator import reify
7 from pyramid import renderers
8
9 def renderer_factory(info):
10 return renderers.template_renderer_factory(info, ZPTTemplateRenderer)
11
12 @implementer(ITemplateRenderer)
13 class ZPTTemplateRenderer(object):
14 def __init__(self, path, lookup, macro=None):
15 self.path = path
16 self.lookup = lookup
17 self.macro = macro
18
19 @reify # avoid looking up reload_templates before manager pushed
20 def template(self):
21 tf = PageTemplateFile(
22 self.path,
23 auto_reload=self.lookup.auto_reload,
24 debug=self.lookup.debug,
25 translate=self.lookup.translate
26 )
27 if self.macro:
28 # render only the portion of the template included in a
29 # define-macro named the value of self.macro
30 macro_renderer = tf.macros[self.macro].include
31 tf._render = macro_renderer
32 return tf
33
34 def implementation(self):
35 return self.template
36
37 def __call__(self, value, system):
38 try:
39 system.update(value)
40 except (TypeError, ValueError):
41 raise ValueError('renderer was passed non-dictionary as value')
42 result = self.template(**system)
43 return result
44
45
[end of pyramid/chameleon_zpt.py]
[start of pyramid/chameleon_text.py]
1 from zope.interface import implementer
2
3 from chameleon.zpt.template import PageTextTemplateFile
4
5 from pyramid.interfaces import ITemplateRenderer
6
7 from pyramid.decorator import reify
8 from pyramid import renderers
9
10 def renderer_factory(info):
11 return renderers.template_renderer_factory(info, TextTemplateRenderer)
12
13 @implementer(ITemplateRenderer)
14 class TextTemplateRenderer(object):
15 def __init__(self, path, lookup, macro=None):
16 self.path = path
17 self.lookup = lookup
18 # text template renderers have no macros, so we ignore the
19 # macro arg
20
21 @reify # avoid looking up reload_templates before manager pushed
22 def template(self):
23 return PageTextTemplateFile(self.path,
24 auto_reload=self.lookup.auto_reload,
25 debug=self.lookup.debug,
26 translate=self.lookup.translate)
27
28 def implementation(self):
29 return self.template
30
31 def __call__(self, value, system):
32 try:
33 system.update(value)
34 except (TypeError, ValueError):
35 raise ValueError('renderer was passed non-dictionary as value')
36 result = self.template(**system)
37 return result
38
39
[end of pyramid/chameleon_text.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyramid/chameleon_text.py b/pyramid/chameleon_text.py
--- a/pyramid/chameleon_text.py
+++ b/pyramid/chameleon_text.py
@@ -1,7 +1,5 @@
from zope.interface import implementer
-from chameleon.zpt.template import PageTextTemplateFile
-
from pyramid.interfaces import ITemplateRenderer
from pyramid.decorator import reify
@@ -20,6 +18,7 @@
@reify # avoid looking up reload_templates before manager pushed
def template(self):
+ from chameleon.zpt.template import PageTextTemplateFile
return PageTextTemplateFile(self.path,
auto_reload=self.lookup.auto_reload,
debug=self.lookup.debug,
diff --git a/pyramid/chameleon_zpt.py b/pyramid/chameleon_zpt.py
--- a/pyramid/chameleon_zpt.py
+++ b/pyramid/chameleon_zpt.py
@@ -1,7 +1,5 @@
from zope.interface import implementer
-from chameleon.zpt.template import PageTemplateFile
-
from pyramid.interfaces import ITemplateRenderer
from pyramid.decorator import reify
from pyramid import renderers
@@ -18,6 +16,7 @@
@reify # avoid looking up reload_templates before manager pushed
def template(self):
+ from chameleon.zpt.template import PageTemplateFile
tf = PageTemplateFile(
self.path,
auto_reload=self.lookup.auto_reload,
|
{"golden_diff": "diff --git a/pyramid/chameleon_text.py b/pyramid/chameleon_text.py\n--- a/pyramid/chameleon_text.py\n+++ b/pyramid/chameleon_text.py\n@@ -1,7 +1,5 @@\n from zope.interface import implementer\n \n-from chameleon.zpt.template import PageTextTemplateFile\n-\n from pyramid.interfaces import ITemplateRenderer\n \n from pyramid.decorator import reify\n@@ -20,6 +18,7 @@\n \n @reify # avoid looking up reload_templates before manager pushed\n def template(self):\n+ from chameleon.zpt.template import PageTextTemplateFile\n return PageTextTemplateFile(self.path,\n auto_reload=self.lookup.auto_reload,\n debug=self.lookup.debug,\ndiff --git a/pyramid/chameleon_zpt.py b/pyramid/chameleon_zpt.py\n--- a/pyramid/chameleon_zpt.py\n+++ b/pyramid/chameleon_zpt.py\n@@ -1,7 +1,5 @@\n from zope.interface import implementer\n \n-from chameleon.zpt.template import PageTemplateFile\n-\n from pyramid.interfaces import ITemplateRenderer\n from pyramid.decorator import reify\n from pyramid import renderers\n@@ -18,6 +16,7 @@\n \n @reify # avoid looking up reload_templates before manager pushed\n def template(self):\n+ from chameleon.zpt.template import PageTemplateFile\n tf = PageTemplateFile(\n self.path,\n auto_reload=self.lookup.auto_reload,\n", "issue": "Pyramid app frozen with py2exe fails because of Chameleon imports\nIf I freeze a Pyramid application with py2exe and try to run it, it fails during the startup, with the following traceback:\n\n```\nTraceback (most recent call last):\n File \"web_server.py\", line 2, in <module>\n File \"pyramid\\config\\__init__.pyc\", line 66, in <module>\n File \"pyramid\\config\\rendering.pyc\", line 11, in <module>\n File \"pyramid\\chameleon_text.pyc\", line 3, in <module>\n File \"chameleon\\__init__.pyc\", line 1, in <module>\n File \"chameleon\\zpt\\template.pyc\", line 10, in <module>\n File \"chameleon\\tales.pyc\", line 19, in <module>\n File \"chameleon\\compiler.pyc\", line 145, in <module>\n File \"chameleon\\codegen.pyc\", line 93, in template\n File \"inspect.pyc\", line 701, in getsource\n File \"inspect.pyc\", line 690, in getsourcelines\n File \"inspect.pyc\", line 538, in findsource\nIOError: could not get source code\n```\n\nMy application doesn't use Chameleon, but the Pyramid modules still import it, which ultimately causes the .exe to fail to run.\n\n", "before_files": [{"content": "from zope.interface import implementer\n\nfrom chameleon.zpt.template import PageTemplateFile\n\nfrom pyramid.interfaces import ITemplateRenderer\nfrom pyramid.decorator import reify\nfrom pyramid import renderers\n\ndef renderer_factory(info):\n return renderers.template_renderer_factory(info, ZPTTemplateRenderer)\n\n@implementer(ITemplateRenderer)\nclass ZPTTemplateRenderer(object):\n def __init__(self, path, lookup, macro=None):\n self.path = path\n self.lookup = lookup\n self.macro = macro\n\n @reify # avoid looking up reload_templates before manager pushed\n def template(self):\n tf = PageTemplateFile(\n self.path,\n auto_reload=self.lookup.auto_reload,\n debug=self.lookup.debug,\n translate=self.lookup.translate\n )\n if self.macro:\n # render only the portion of the template included in a\n # define-macro named the value of self.macro\n macro_renderer = tf.macros[self.macro].include\n tf._render = macro_renderer\n return tf\n\n def implementation(self):\n return self.template\n \n def __call__(self, value, system):\n try:\n system.update(value)\n except (TypeError, ValueError):\n raise ValueError('renderer was passed non-dictionary as value')\n result = 
self.template(**system)\n return result\n\n", "path": "pyramid/chameleon_zpt.py"}, {"content": "from zope.interface import implementer\n\nfrom chameleon.zpt.template import PageTextTemplateFile\n\nfrom pyramid.interfaces import ITemplateRenderer\n\nfrom pyramid.decorator import reify\nfrom pyramid import renderers\n\ndef renderer_factory(info):\n return renderers.template_renderer_factory(info, TextTemplateRenderer)\n\n@implementer(ITemplateRenderer)\nclass TextTemplateRenderer(object):\n def __init__(self, path, lookup, macro=None):\n self.path = path\n self.lookup = lookup\n # text template renderers have no macros, so we ignore the\n # macro arg\n\n @reify # avoid looking up reload_templates before manager pushed\n def template(self):\n return PageTextTemplateFile(self.path,\n auto_reload=self.lookup.auto_reload,\n debug=self.lookup.debug,\n translate=self.lookup.translate)\n\n def implementation(self):\n return self.template\n \n def __call__(self, value, system):\n try:\n system.update(value)\n except (TypeError, ValueError):\n raise ValueError('renderer was passed non-dictionary as value')\n result = self.template(**system)\n return result\n\n", "path": "pyramid/chameleon_text.py"}]}
| 1,549 | 308 |
gh_patches_debug_26639
|
rasdani/github-patches
|
git_diff
|
rasterio__rasterio-2441
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
How to handle statistics
I like statistics in my rasters since they load nicely into various GIS software that I use, which includes ArcGIS. Without statistics (or the correct statistics) the rasters don't look correct in these software.
Up to now, I generally need the help of osgeo.gdal to add/update band statistics after rasterio processing, e.g.:
``` python
ds = gdal.Open(fname, gdal.GA_Update)
for i in range(ds.RasterCount):
ds.GetRasterBand(i + 1).ComputeStatistics(0)
ds = band = None # save, close
```
How could this be done with rasterio? A few ideas:
- Have optional arguments for updatable rasters with `rasterio.open()`, e.g. `stats=True` or `approx_stats=True`, which are used when processing `close()`.
- Add a method like `obj.calc_stats(approx=False)`, which can be manually called while the raster dataset is open. Should it return the four statistics that [ComputeStatistics](http://www.gdal.org/classGDALRasterBand.html#a48883c1dae195b21b37b51b10e910f9b) returns? Note that this data are also available through the metadata tags.
- Start a `rio edit [--stats] [--approx_stats]` tool, similar [to this enhancement to gdal_edit.py](http://trac.osgeo.org/gdal/ticket/5805).
</issue>
<code>
[start of rasterio/rio/info.py]
1 """Command access to dataset metadata, stats, and more."""
2
3
4 import json
5
6 import click
7
8 import rasterio
9 from rasterio.rio import options
10 from rasterio.transform import from_gcps
11
12
13 @click.command(short_help="Print information about a data file.")
14 @options.file_in_arg
15 @click.option('--meta', 'aspect', flag_value='meta', default=True,
16 help="Show data file structure (default).")
17 @click.option('--tags', 'aspect', flag_value='tags',
18 help="Show data file tags.")
19 @click.option('--namespace', help="Select a tag namespace.")
20 @click.option('--indent', default=None, type=int,
21 help="Indentation level for pretty printed output")
22 # Options to pick out a single metadata item and print it as
23 # a string.
24 @click.option('--count', 'meta_member', flag_value='count',
25 help="Print the count of bands.")
26 @click.option('-t', '--dtype', 'meta_member', flag_value='dtype',
27 help="Print the dtype name.")
28 @click.option('--nodata', 'meta_member', flag_value='nodata',
29 help="Print the nodata value.")
30 @click.option('-f', '--format', '--driver', 'meta_member', flag_value='driver',
31 help="Print the format driver.")
32 @click.option('--shape', 'meta_member', flag_value='shape',
33 help="Print the (height, width) shape.")
34 @click.option('--height', 'meta_member', flag_value='height',
35 help="Print the height (number of rows).")
36 @click.option('--width', 'meta_member', flag_value='width',
37 help="Print the width (number of columns).")
38 @click.option('--crs', 'meta_member', flag_value='crs',
39 help="Print the CRS as a PROJ.4 string.")
40 @click.option('--bounds', 'meta_member', flag_value='bounds',
41 help="Print the boundary coordinates "
42 "(left, bottom, right, top).")
43 @click.option('-r', '--res', 'meta_member', flag_value='res',
44 help="Print pixel width and height.")
45 @click.option('--lnglat', 'meta_member', flag_value='lnglat',
46 help="Print longitude and latitude at center.")
47 @click.option('--stats', 'meta_member', flag_value='stats',
48 help="Print statistics (min, max, mean) of a single band "
49 "(use --bidx).")
50 @click.option('--checksum', 'meta_member', flag_value='checksum',
51 help="Print integer checksum of a single band "
52 "(use --bidx).")
53 @click.option('--subdatasets', 'meta_member', flag_value='subdatasets',
54 help="Print subdataset identifiers.")
55 @click.option('-v', '--tell-me-more', '--verbose', 'verbose', is_flag=True,
56 help="Output extra information.")
57 @options.bidx_opt
58 @options.masked_opt
59 @click.pass_context
60 def info(ctx, input, aspect, indent, namespace, meta_member, verbose, bidx,
61 masked):
62 """Print metadata about the dataset as JSON.
63
64 Optionally print a single metadata item as a string.
65 """
66 with ctx.obj['env'], rasterio.open(input) as src:
67
68 info = dict(src.profile)
69 info['shape'] = (info['height'], info['width'])
70 info['bounds'] = src.bounds
71
72 if src.crs:
73 epsg = src.crs.to_epsg()
74 if epsg:
75 info['crs'] = 'EPSG:{}'.format(epsg)
76 else:
77 info['crs'] = src.crs.to_string()
78 else:
79 info['crs'] = None
80
81 info['res'] = src.res
82 info['colorinterp'] = [ci.name for ci in src.colorinterp]
83 info['units'] = [units or None for units in src.units]
84 info['descriptions'] = src.descriptions
85 info['indexes'] = src.indexes
86 info['mask_flags'] = [[
87 flag.name for flag in flags] for flags in src.mask_flag_enums]
88
89 if src.crs:
90 info['lnglat'] = src.lnglat()
91
92 gcps, gcps_crs = src.gcps
93
94 if gcps:
95 info['gcps'] = {'points': [p.asdict() for p in gcps]}
96 if gcps_crs:
97 epsg = gcps_crs.to_epsg()
98 if epsg:
99 info['gcps']['crs'] = 'EPSG:{}'.format(epsg)
100 else:
101 info['gcps']['crs'] = src.crs.to_string()
102 else:
103 info['gcps']['crs'] = None
104
105 info['gcps']['transform'] = from_gcps(gcps)
106
107 if verbose:
108 stats = [{'min': float(b.min()),
109 'max': float(b.max()),
110 'mean': float(b.mean())
111 } for b in src.read(masked=masked)]
112 info['stats'] = stats
113 info['checksum'] = [src.checksum(i) for i in src.indexes]
114
115 if aspect == 'meta':
116 if meta_member == 'subdatasets':
117 for name in src.subdatasets:
118 click.echo(name)
119 elif meta_member == 'stats':
120 band = src.read(bidx, masked=masked)
121 click.echo('%f %f %f' % (
122 float(band.min()),
123 float(band.max()),
124 float(band.mean())))
125 elif meta_member == 'checksum':
126 click.echo(str(src.checksum(bidx)))
127 elif meta_member:
128 if isinstance(info[meta_member], (list, tuple)):
129 click.echo(" ".join(map(str, info[meta_member])))
130 else:
131 click.echo(info[meta_member])
132 else:
133 click.echo(json.dumps(info, sort_keys=True, indent=indent))
134
135 elif aspect == 'tags':
136 click.echo(
137 json.dumps(src.tags(ns=namespace), indent=indent))
138
[end of rasterio/rio/info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/rasterio/rio/info.py b/rasterio/rio/info.py
--- a/rasterio/rio/info.py
+++ b/rasterio/rio/info.py
@@ -3,6 +3,7 @@
import json
+from attr import asdict
import click
import rasterio
@@ -105,10 +106,7 @@
info['gcps']['transform'] = from_gcps(gcps)
if verbose:
- stats = [{'min': float(b.min()),
- 'max': float(b.max()),
- 'mean': float(b.mean())
- } for b in src.read(masked=masked)]
+ stats = [asdict(src.statistics(bidx)) for bidx in src.indexes]
info['stats'] = stats
info['checksum'] = [src.checksum(i) for i in src.indexes]
@@ -117,11 +115,8 @@
for name in src.subdatasets:
click.echo(name)
elif meta_member == 'stats':
- band = src.read(bidx, masked=masked)
- click.echo('%f %f %f' % (
- float(band.min()),
- float(band.max()),
- float(band.mean())))
+ st = src.statistics(bidx)
+ click.echo("{st.min} {st.max} {st.mean} {st.std}".format(st=st))
elif meta_member == 'checksum':
click.echo(str(src.checksum(bidx)))
elif meta_member:
|
{"golden_diff": "diff --git a/rasterio/rio/info.py b/rasterio/rio/info.py\n--- a/rasterio/rio/info.py\n+++ b/rasterio/rio/info.py\n@@ -3,6 +3,7 @@\n \n import json\n \n+from attr import asdict\n import click\n \n import rasterio\n@@ -105,10 +106,7 @@\n info['gcps']['transform'] = from_gcps(gcps)\n \n if verbose:\n- stats = [{'min': float(b.min()),\n- 'max': float(b.max()),\n- 'mean': float(b.mean())\n- } for b in src.read(masked=masked)]\n+ stats = [asdict(src.statistics(bidx)) for bidx in src.indexes]\n info['stats'] = stats\n info['checksum'] = [src.checksum(i) for i in src.indexes]\n \n@@ -117,11 +115,8 @@\n for name in src.subdatasets:\n click.echo(name)\n elif meta_member == 'stats':\n- band = src.read(bidx, masked=masked)\n- click.echo('%f %f %f' % (\n- float(band.min()),\n- float(band.max()),\n- float(band.mean())))\n+ st = src.statistics(bidx)\n+ click.echo(\"{st.min} {st.max} {st.mean} {st.std}\".format(st=st))\n elif meta_member == 'checksum':\n click.echo(str(src.checksum(bidx)))\n elif meta_member:\n", "issue": "How to handle statistics\nI like statistics in my rasters since they load nicely into various GIS software that I use, which includes ArcGIS. Without statistics (or the correct statistics) the rasters don't look correct in these software.\n\nUp to now, I generally need the help of osgeo.gdal to add/update band statistics after rasterio processing, e.g.:\n\n``` python\nds = gdal.Open(fname, gdal.GA_Update)\nfor i in range(ds.RasterCount):\n ds.GetRasterBand(i + 1).ComputeStatistics(0)\nds = band = None # save, close\n```\n\nHow could this be done with rasterio? A few ideas:\n- Have optional arguments for updatable rasters with `rasterio.open()`, e.g. `stats=True` or `approx_stats=True`, which are used when processing `close()`.\n- Add a method like `obj.calc_stats(approx=False)`, which can be manually called while the raster dataset is open. Should it return the four statistics that [ComputeStatistics](http://www.gdal.org/classGDALRasterBand.html#a48883c1dae195b21b37b51b10e910f9b) returns? 
Note that this data are also available through the metadata tags.\n- Start a `rio edit [--stats] [--approx_stats]` tool, similar [to this enhancement to gdal_edit.py](http://trac.osgeo.org/gdal/ticket/5805).\n\n", "before_files": [{"content": "\"\"\"Command access to dataset metadata, stats, and more.\"\"\"\n\n\nimport json\n\nimport click\n\nimport rasterio\nfrom rasterio.rio import options\nfrom rasterio.transform import from_gcps\n\n\[email protected](short_help=\"Print information about a data file.\")\[email protected]_in_arg\[email protected]('--meta', 'aspect', flag_value='meta', default=True,\n help=\"Show data file structure (default).\")\[email protected]('--tags', 'aspect', flag_value='tags',\n help=\"Show data file tags.\")\[email protected]('--namespace', help=\"Select a tag namespace.\")\[email protected]('--indent', default=None, type=int,\n help=\"Indentation level for pretty printed output\")\n# Options to pick out a single metadata item and print it as\n# a string.\[email protected]('--count', 'meta_member', flag_value='count',\n help=\"Print the count of bands.\")\[email protected]('-t', '--dtype', 'meta_member', flag_value='dtype',\n help=\"Print the dtype name.\")\[email protected]('--nodata', 'meta_member', flag_value='nodata',\n help=\"Print the nodata value.\")\[email protected]('-f', '--format', '--driver', 'meta_member', flag_value='driver',\n help=\"Print the format driver.\")\[email protected]('--shape', 'meta_member', flag_value='shape',\n help=\"Print the (height, width) shape.\")\[email protected]('--height', 'meta_member', flag_value='height',\n help=\"Print the height (number of rows).\")\[email protected]('--width', 'meta_member', flag_value='width',\n help=\"Print the width (number of columns).\")\[email protected]('--crs', 'meta_member', flag_value='crs',\n help=\"Print the CRS as a PROJ.4 string.\")\[email protected]('--bounds', 'meta_member', flag_value='bounds',\n help=\"Print the boundary coordinates \"\n \"(left, bottom, right, top).\")\[email protected]('-r', '--res', 'meta_member', flag_value='res',\n help=\"Print pixel width and height.\")\[email protected]('--lnglat', 'meta_member', flag_value='lnglat',\n help=\"Print longitude and latitude at center.\")\[email protected]('--stats', 'meta_member', flag_value='stats',\n help=\"Print statistics (min, max, mean) of a single band \"\n \"(use --bidx).\")\[email protected]('--checksum', 'meta_member', flag_value='checksum',\n help=\"Print integer checksum of a single band \"\n \"(use --bidx).\")\[email protected]('--subdatasets', 'meta_member', flag_value='subdatasets',\n help=\"Print subdataset identifiers.\")\[email protected]('-v', '--tell-me-more', '--verbose', 'verbose', is_flag=True,\n help=\"Output extra information.\")\[email protected]_opt\[email protected]_opt\[email protected]_context\ndef info(ctx, input, aspect, indent, namespace, meta_member, verbose, bidx,\n masked):\n \"\"\"Print metadata about the dataset as JSON.\n\n Optionally print a single metadata item as a string.\n \"\"\"\n with ctx.obj['env'], rasterio.open(input) as src:\n\n info = dict(src.profile)\n info['shape'] = (info['height'], info['width'])\n info['bounds'] = src.bounds\n\n if src.crs:\n epsg = src.crs.to_epsg()\n if epsg:\n info['crs'] = 'EPSG:{}'.format(epsg)\n else:\n info['crs'] = src.crs.to_string()\n else:\n info['crs'] = None\n\n info['res'] = src.res\n info['colorinterp'] = [ci.name for ci in src.colorinterp]\n info['units'] = [units or None for units in src.units]\n info['descriptions'] = src.descriptions\n 
info['indexes'] = src.indexes\n info['mask_flags'] = [[\n flag.name for flag in flags] for flags in src.mask_flag_enums]\n\n if src.crs:\n info['lnglat'] = src.lnglat()\n\n gcps, gcps_crs = src.gcps\n\n if gcps:\n info['gcps'] = {'points': [p.asdict() for p in gcps]}\n if gcps_crs:\n epsg = gcps_crs.to_epsg()\n if epsg:\n info['gcps']['crs'] = 'EPSG:{}'.format(epsg)\n else:\n info['gcps']['crs'] = src.crs.to_string()\n else:\n info['gcps']['crs'] = None\n\n info['gcps']['transform'] = from_gcps(gcps)\n\n if verbose:\n stats = [{'min': float(b.min()),\n 'max': float(b.max()),\n 'mean': float(b.mean())\n } for b in src.read(masked=masked)]\n info['stats'] = stats\n info['checksum'] = [src.checksum(i) for i in src.indexes]\n\n if aspect == 'meta':\n if meta_member == 'subdatasets':\n for name in src.subdatasets:\n click.echo(name)\n elif meta_member == 'stats':\n band = src.read(bidx, masked=masked)\n click.echo('%f %f %f' % (\n float(band.min()),\n float(band.max()),\n float(band.mean())))\n elif meta_member == 'checksum':\n click.echo(str(src.checksum(bidx)))\n elif meta_member:\n if isinstance(info[meta_member], (list, tuple)):\n click.echo(\" \".join(map(str, info[meta_member])))\n else:\n click.echo(info[meta_member])\n else:\n click.echo(json.dumps(info, sort_keys=True, indent=indent))\n\n elif aspect == 'tags':\n click.echo(\n json.dumps(src.tags(ns=namespace), indent=indent))\n", "path": "rasterio/rio/info.py"}]}
| 2,437 | 333 |
gh_patches_debug_37098
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1006
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test student voting
From a quick search, the voting process has exactly one test, which is not much for the primary feature of the platform.
</issue>
<code>
[start of evap/student/views.py]
1 from collections import OrderedDict
2
3 from django.contrib import messages
4 from django.core.exceptions import PermissionDenied, SuspiciousOperation
5 from django.db import transaction
6 from django.shortcuts import get_object_or_404, redirect, render
7 from django.utils.translation import ugettext as _
8
9 from evap.evaluation.auth import participant_required
10 from evap.evaluation.models import Course, Semester
11 from evap.evaluation.tools import STUDENT_STATES_ORDERED
12
13 from evap.student.forms import QuestionsForm
14 from evap.student.tools import make_form_identifier
15
16
17 @participant_required
18 def index(request):
19 # retrieve all courses, where the user is a participant and that are not new
20 courses = list(set(Course.objects.filter(participants=request.user).exclude(state="new")))
21 voted_courses = list(set(Course.objects.filter(voters=request.user)))
22 due_courses = list(set(Course.objects.filter(participants=request.user, state='in_evaluation').exclude(voters=request.user)))
23
24 sorter = lambda course: (list(STUDENT_STATES_ORDERED.keys()).index(course.student_state), course.vote_end_date, course.name)
25 courses.sort(key=sorter)
26
27 semesters = Semester.objects.all()
28 semester_list = [dict(semester_name=semester.name, id=semester.id, is_active_semester=semester.is_active_semester,
29 courses=[course for course in courses if course.semester_id == semester.id]) for semester in semesters]
30
31 template_data = dict(
32 semester_list=semester_list,
33 voted_courses=voted_courses,
34 due_courses=due_courses,
35 can_download_grades=request.user.can_download_grades,
36 )
37 return render(request, "student_index.html", template_data)
38
39
40 def vote_preview(request, course, for_rendering_in_modal=False):
41 """
42 Renders a preview of the voting page for the given course.
43 Not used by the student app itself, but by staff and contributor.
44 """
45 form_groups = helper_create_voting_form_groups(request, course.contributions.all())
46 course_form_group = form_groups.pop(course.general_contribution)
47 contributor_form_groups = list((contribution.contributor, contribution.label, form_group, False) for contribution, form_group in form_groups.items())
48
49 template_data = dict(
50 errors_exist=False,
51 course_form_group=course_form_group,
52 contributor_form_groups=contributor_form_groups,
53 course=course,
54 preview=True,
55 for_rendering_in_modal=for_rendering_in_modal)
56 return render(request, "student_vote.html", template_data)
57
58
59 @participant_required
60 def vote(request, course_id):
61 # retrieve course and make sure that the user is allowed to vote
62 course = get_object_or_404(Course, id=course_id)
63 if not course.can_user_vote(request.user):
64 raise PermissionDenied
65
66 # prevent a user from voting on themselves.
67 contributions_to_vote_on = course.contributions.exclude(contributor=request.user).all()
68 form_groups = helper_create_voting_form_groups(request, contributions_to_vote_on)
69
70 if not all(all(form.is_valid() for form in form_group) for form_group in form_groups.values()):
71 errors_exist = any(helper_has_errors(form_group) for form_group in form_groups.values())
72
73 course_form_group = form_groups.pop(course.general_contribution)
74
75 contributor_form_groups = list((contribution.contributor, contribution.label, form_group, helper_has_errors(form_group)) for contribution, form_group in form_groups.items())
76
77 template_data = dict(
78 errors_exist=errors_exist,
79 course_form_group=course_form_group,
80 contributor_form_groups=contributor_form_groups,
81 course=course,
82 participants_warning=course.num_participants <= 5,
83 preview=False)
84 return render(request, "student_vote.html", template_data)
85
86 # all forms are valid, begin vote operation
87 with transaction.atomic():
88 # add user to course.voters
89 # not using course.voters.add(request.user) since it fails silently when done twice.
90 # manually inserting like this gives us the 'created' return value and ensures at the database level that nobody votes twice.
91 __, created = course.voters.through.objects.get_or_create(userprofile_id=request.user.pk, course_id=course.pk)
92 if not created: # vote already got recorded, bail out
93 raise SuspiciousOperation("A second vote has been received shortly after the first one.")
94
95 for contribution, form_group in form_groups.items():
96 for questionnaire_form in form_group:
97 questionnaire = questionnaire_form.questionnaire
98 for question in questionnaire.question_set.all():
99 identifier = make_form_identifier(contribution, questionnaire, question)
100 value = questionnaire_form.cleaned_data.get(identifier)
101
102 if question.is_text_question:
103 if value:
104 question.answer_class.objects.create(
105 contribution=contribution,
106 question=question,
107 answer=value)
108 else:
109 if value != 6:
110 answer_counter, __ = question.answer_class.objects.get_or_create(contribution=contribution, question=question, answer=value)
111 answer_counter.add_vote()
112 answer_counter.save()
113
114 course.course_evaluated.send(sender=Course, request=request, semester=course.semester)
115
116 messages.success(request, _("Your vote was recorded."))
117 return redirect('student:index')
118
119
120 def helper_create_voting_form_groups(request, contributions):
121 form_groups = OrderedDict()
122 for contribution in contributions:
123 questionnaires = contribution.questionnaires.all()
124 if not questionnaires.exists():
125 continue
126 form_groups[contribution] = [QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire) for questionnaire in questionnaires]
127 return form_groups
128
129
130 def helper_has_errors(form_group):
131 return any(form.errors for form in form_group)
132
[end of evap/student/views.py]
[start of evap/student/tools.py]
1 def make_form_identifier(contribution, questionnaire, question):
2 """Generates a form field identifier for voting forms using the given
3 parameters."""
4
5 return "question_%s_%s_%s" % (
6 contribution.id,
7 questionnaire.id,
8 question.id)
9
[end of evap/student/tools.py]
[start of evap/student/forms.py]
1 from django import forms
2
3 from evap.student.tools import make_form_identifier
4 from evap.evaluation.tools import LIKERT_NAMES, GRADE_NAMES, POSITIVE_YES_NO_NAMES, NEGATIVE_YES_NO_NAMES
5
6
7 LIKERT_CHOICES = [(str(k), v) for k, v in LIKERT_NAMES.items()]
8 GRADE_CHOICES = [(str(k), v) for k, v in GRADE_NAMES.items()]
9 POSITIVE_YES_NO_CHOICES = [(str(k), v) for k, v in POSITIVE_YES_NO_NAMES.items()]
10 NEGATIVE_YES_NO_CHOICES = [(str(k), v) for k, v in NEGATIVE_YES_NO_NAMES.items()]
11
12
13 class QuestionsForm(forms.Form):
14 """Dynamic form class that adds one field per question.
15
16 See http://jacobian.org/writing/dynamic-form-generation/"""
17
18 def __init__(self, *args, contribution, questionnaire, **kwargs):
19 super().__init__(*args, **kwargs)
20 self.questionnaire = questionnaire
21
22 for question in self.questionnaire.question_set.all():
23 # generic arguments for all kinds of fields
24 field_args = dict(label=question.text)
25
26 if question.is_text_question:
27 field = forms.CharField(required=False, widget=forms.Textarea(),
28 **field_args)
29 elif question.is_likert_question:
30 field = forms.TypedChoiceField(widget=forms.RadioSelect(),
31 choices=LIKERT_CHOICES,
32 coerce=int,
33 **field_args)
34 elif question.is_grade_question:
35 field = forms.TypedChoiceField(widget=forms.RadioSelect(),
36 choices=GRADE_CHOICES,
37 coerce=int,
38 **field_args)
39 elif question.is_positive_yes_no_question:
40 field = forms.TypedChoiceField(widget=forms.RadioSelect(),
41 choices=POSITIVE_YES_NO_CHOICES,
42 coerce=int,
43 **field_args)
44 elif question.is_negative_yes_no_question:
45 field = forms.TypedChoiceField(widget=forms.RadioSelect(),
46 choices=NEGATIVE_YES_NO_CHOICES,
47 coerce=int,
48 **field_args)
49
50 identifier = make_form_identifier(contribution,
51 questionnaire,
52 question)
53 self.fields[identifier] = field
54
55 def caption(self):
56 return self.questionnaire.public_name
57
58 def teaser(self):
59 return self.questionnaire.teaser
60
[end of evap/student/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evap/student/forms.py b/evap/student/forms.py
--- a/evap/student/forms.py
+++ b/evap/student/forms.py
@@ -1,9 +1,8 @@
from django import forms
-from evap.student.tools import make_form_identifier
+from evap.student.tools import question_id
from evap.evaluation.tools import LIKERT_NAMES, GRADE_NAMES, POSITIVE_YES_NO_NAMES, NEGATIVE_YES_NO_NAMES
-
LIKERT_CHOICES = [(str(k), v) for k, v in LIKERT_NAMES.items()]
GRADE_CHOICES = [(str(k), v) for k, v in GRADE_NAMES.items()]
POSITIVE_YES_NO_CHOICES = [(str(k), v) for k, v in POSITIVE_YES_NO_NAMES.items()]
@@ -47,9 +46,10 @@
coerce=int,
**field_args)
- identifier = make_form_identifier(contribution,
- questionnaire,
- question)
+ identifier = question_id(contribution,
+ questionnaire,
+ question)
+
self.fields[identifier] = field
def caption(self):
diff --git a/evap/student/tools.py b/evap/student/tools.py
--- a/evap/student/tools.py
+++ b/evap/student/tools.py
@@ -1,4 +1,4 @@
-def make_form_identifier(contribution, questionnaire, question):
+def question_id(contribution, questionnaire, question):
"""Generates a form field identifier for voting forms using the given
parameters."""
diff --git a/evap/student/views.py b/evap/student/views.py
--- a/evap/student/views.py
+++ b/evap/student/views.py
@@ -11,7 +11,7 @@
from evap.evaluation.tools import STUDENT_STATES_ORDERED
from evap.student.forms import QuestionsForm
-from evap.student.tools import make_form_identifier
+from evap.student.tools import question_id
@participant_required
@@ -96,7 +96,7 @@
for questionnaire_form in form_group:
questionnaire = questionnaire_form.questionnaire
for question in questionnaire.question_set.all():
- identifier = make_form_identifier(contribution, questionnaire, question)
+ identifier = question_id(contribution, questionnaire, question)
value = questionnaire_form.cleaned_data.get(identifier)
if question.is_text_question:
|
{"golden_diff": "diff --git a/evap/student/forms.py b/evap/student/forms.py\n--- a/evap/student/forms.py\n+++ b/evap/student/forms.py\n@@ -1,9 +1,8 @@\n from django import forms\n \n-from evap.student.tools import make_form_identifier\n+from evap.student.tools import question_id\n from evap.evaluation.tools import LIKERT_NAMES, GRADE_NAMES, POSITIVE_YES_NO_NAMES, NEGATIVE_YES_NO_NAMES\n \n-\n LIKERT_CHOICES = [(str(k), v) for k, v in LIKERT_NAMES.items()]\n GRADE_CHOICES = [(str(k), v) for k, v in GRADE_NAMES.items()]\n POSITIVE_YES_NO_CHOICES = [(str(k), v) for k, v in POSITIVE_YES_NO_NAMES.items()]\n@@ -47,9 +46,10 @@\n coerce=int,\n **field_args)\n \n- identifier = make_form_identifier(contribution,\n- questionnaire,\n- question)\n+ identifier = question_id(contribution,\n+ questionnaire,\n+ question)\n+\n self.fields[identifier] = field\n \n def caption(self):\ndiff --git a/evap/student/tools.py b/evap/student/tools.py\n--- a/evap/student/tools.py\n+++ b/evap/student/tools.py\n@@ -1,4 +1,4 @@\n-def make_form_identifier(contribution, questionnaire, question):\n+def question_id(contribution, questionnaire, question):\n \"\"\"Generates a form field identifier for voting forms using the given\n parameters.\"\"\"\n \ndiff --git a/evap/student/views.py b/evap/student/views.py\n--- a/evap/student/views.py\n+++ b/evap/student/views.py\n@@ -11,7 +11,7 @@\n from evap.evaluation.tools import STUDENT_STATES_ORDERED\n \n from evap.student.forms import QuestionsForm\n-from evap.student.tools import make_form_identifier\n+from evap.student.tools import question_id\n \n \n @participant_required\n@@ -96,7 +96,7 @@\n for questionnaire_form in form_group:\n questionnaire = questionnaire_form.questionnaire\n for question in questionnaire.question_set.all():\n- identifier = make_form_identifier(contribution, questionnaire, question)\n+ identifier = question_id(contribution, questionnaire, question)\n value = questionnaire_form.cleaned_data.get(identifier)\n \n if question.is_text_question:\n", "issue": "Test student voting\nFrom a quick search, the voting process has exactly one test, which is not much for the primary feature of the platform.\n\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom django.contrib import messages\nfrom django.core.exceptions import PermissionDenied, SuspiciousOperation\nfrom django.db import transaction\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import ugettext as _\n\nfrom evap.evaluation.auth import participant_required\nfrom evap.evaluation.models import Course, Semester\nfrom evap.evaluation.tools import STUDENT_STATES_ORDERED\n\nfrom evap.student.forms import QuestionsForm\nfrom evap.student.tools import make_form_identifier\n\n\n@participant_required\ndef index(request):\n # retrieve all courses, where the user is a participant and that are not new\n courses = list(set(Course.objects.filter(participants=request.user).exclude(state=\"new\")))\n voted_courses = list(set(Course.objects.filter(voters=request.user)))\n due_courses = list(set(Course.objects.filter(participants=request.user, state='in_evaluation').exclude(voters=request.user)))\n\n sorter = lambda course: (list(STUDENT_STATES_ORDERED.keys()).index(course.student_state), course.vote_end_date, course.name)\n courses.sort(key=sorter)\n\n semesters = Semester.objects.all()\n semester_list = [dict(semester_name=semester.name, id=semester.id, is_active_semester=semester.is_active_semester,\n courses=[course for course in courses if 
course.semester_id == semester.id]) for semester in semesters]\n\n template_data = dict(\n semester_list=semester_list,\n voted_courses=voted_courses,\n due_courses=due_courses,\n can_download_grades=request.user.can_download_grades,\n )\n return render(request, \"student_index.html\", template_data)\n\n\ndef vote_preview(request, course, for_rendering_in_modal=False):\n \"\"\"\n Renders a preview of the voting page for the given course.\n Not used by the student app itself, but by staff and contributor.\n \"\"\"\n form_groups = helper_create_voting_form_groups(request, course.contributions.all())\n course_form_group = form_groups.pop(course.general_contribution)\n contributor_form_groups = list((contribution.contributor, contribution.label, form_group, False) for contribution, form_group in form_groups.items())\n\n template_data = dict(\n errors_exist=False,\n course_form_group=course_form_group,\n contributor_form_groups=contributor_form_groups,\n course=course,\n preview=True,\n for_rendering_in_modal=for_rendering_in_modal)\n return render(request, \"student_vote.html\", template_data)\n\n\n@participant_required\ndef vote(request, course_id):\n # retrieve course and make sure that the user is allowed to vote\n course = get_object_or_404(Course, id=course_id)\n if not course.can_user_vote(request.user):\n raise PermissionDenied\n\n # prevent a user from voting on themselves.\n contributions_to_vote_on = course.contributions.exclude(contributor=request.user).all()\n form_groups = helper_create_voting_form_groups(request, contributions_to_vote_on)\n\n if not all(all(form.is_valid() for form in form_group) for form_group in form_groups.values()):\n errors_exist = any(helper_has_errors(form_group) for form_group in form_groups.values())\n\n course_form_group = form_groups.pop(course.general_contribution)\n\n contributor_form_groups = list((contribution.contributor, contribution.label, form_group, helper_has_errors(form_group)) for contribution, form_group in form_groups.items())\n\n template_data = dict(\n errors_exist=errors_exist,\n course_form_group=course_form_group,\n contributor_form_groups=contributor_form_groups,\n course=course,\n participants_warning=course.num_participants <= 5,\n preview=False)\n return render(request, \"student_vote.html\", template_data)\n\n # all forms are valid, begin vote operation\n with transaction.atomic():\n # add user to course.voters\n # not using course.voters.add(request.user) since it fails silently when done twice.\n # manually inserting like this gives us the 'created' return value and ensures at the database level that nobody votes twice.\n __, created = course.voters.through.objects.get_or_create(userprofile_id=request.user.pk, course_id=course.pk)\n if not created: # vote already got recorded, bail out\n raise SuspiciousOperation(\"A second vote has been received shortly after the first one.\")\n\n for contribution, form_group in form_groups.items():\n for questionnaire_form in form_group:\n questionnaire = questionnaire_form.questionnaire\n for question in questionnaire.question_set.all():\n identifier = make_form_identifier(contribution, questionnaire, question)\n value = questionnaire_form.cleaned_data.get(identifier)\n\n if question.is_text_question:\n if value:\n question.answer_class.objects.create(\n contribution=contribution,\n question=question,\n answer=value)\n else:\n if value != 6:\n answer_counter, __ = question.answer_class.objects.get_or_create(contribution=contribution, question=question, answer=value)\n 
answer_counter.add_vote()\n answer_counter.save()\n\n course.course_evaluated.send(sender=Course, request=request, semester=course.semester)\n\n messages.success(request, _(\"Your vote was recorded.\"))\n return redirect('student:index')\n\n\ndef helper_create_voting_form_groups(request, contributions):\n form_groups = OrderedDict()\n for contribution in contributions:\n questionnaires = contribution.questionnaires.all()\n if not questionnaires.exists():\n continue\n form_groups[contribution] = [QuestionsForm(request.POST or None, contribution=contribution, questionnaire=questionnaire) for questionnaire in questionnaires]\n return form_groups\n\n\ndef helper_has_errors(form_group):\n return any(form.errors for form in form_group)\n", "path": "evap/student/views.py"}, {"content": "def make_form_identifier(contribution, questionnaire, question):\n \"\"\"Generates a form field identifier for voting forms using the given\n parameters.\"\"\"\n\n return \"question_%s_%s_%s\" % (\n contribution.id,\n questionnaire.id,\n question.id)\n", "path": "evap/student/tools.py"}, {"content": "from django import forms\n\nfrom evap.student.tools import make_form_identifier\nfrom evap.evaluation.tools import LIKERT_NAMES, GRADE_NAMES, POSITIVE_YES_NO_NAMES, NEGATIVE_YES_NO_NAMES\n\n\nLIKERT_CHOICES = [(str(k), v) for k, v in LIKERT_NAMES.items()]\nGRADE_CHOICES = [(str(k), v) for k, v in GRADE_NAMES.items()]\nPOSITIVE_YES_NO_CHOICES = [(str(k), v) for k, v in POSITIVE_YES_NO_NAMES.items()]\nNEGATIVE_YES_NO_CHOICES = [(str(k), v) for k, v in NEGATIVE_YES_NO_NAMES.items()]\n\n\nclass QuestionsForm(forms.Form):\n \"\"\"Dynamic form class that adds one field per question.\n\n See http://jacobian.org/writing/dynamic-form-generation/\"\"\"\n\n def __init__(self, *args, contribution, questionnaire, **kwargs):\n super().__init__(*args, **kwargs)\n self.questionnaire = questionnaire\n\n for question in self.questionnaire.question_set.all():\n # generic arguments for all kinds of fields\n field_args = dict(label=question.text)\n\n if question.is_text_question:\n field = forms.CharField(required=False, widget=forms.Textarea(),\n **field_args)\n elif question.is_likert_question:\n field = forms.TypedChoiceField(widget=forms.RadioSelect(),\n choices=LIKERT_CHOICES,\n coerce=int,\n **field_args)\n elif question.is_grade_question:\n field = forms.TypedChoiceField(widget=forms.RadioSelect(),\n choices=GRADE_CHOICES,\n coerce=int,\n **field_args)\n elif question.is_positive_yes_no_question:\n field = forms.TypedChoiceField(widget=forms.RadioSelect(),\n choices=POSITIVE_YES_NO_CHOICES,\n coerce=int,\n **field_args)\n elif question.is_negative_yes_no_question:\n field = forms.TypedChoiceField(widget=forms.RadioSelect(),\n choices=NEGATIVE_YES_NO_CHOICES,\n coerce=int,\n **field_args)\n\n identifier = make_form_identifier(contribution,\n questionnaire,\n question)\n self.fields[identifier] = field\n\n def caption(self):\n return self.questionnaire.public_name\n\n def teaser(self):\n return self.questionnaire.teaser\n", "path": "evap/student/forms.py"}]}
| 2,735 | 503 |
gh_patches_debug_32593
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-7998
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bokeh GMapOptions error
Bokeh version - 0.12.16
Python - 3.5
OS - Windows 10
Browser - IE, Chrome, Firefox
Other relevant packages - GMapOptions, gmap
I'm fairly new to Bokeh so please excuse me for my lack of knowledge/experience with Bokeh. I am trying to create a geographic visualization using Bokeh and then integrate my Bokeh server app into Flask to deploy it on a VPS. When run independently, my code produces the necessary visualization but gives an error in the command line. The error is occurring at the gmap call. I tried looking this error up and found a few relevant results, all of which pointed to bugs in the Bokeh library. Where I am really getting stuck is at the point of deployment of this visualization, either independently as a Bokeh server app or by integrating it with Flask, because of this error. Any help on this is greatly appreciated. Thanks!
```
from bokeh.io import curdoc
from bokeh.models import GMapOptions
from bokeh.plotting import gmap
map_options = GMapOptions(lat=37.686293, lng=-97.3614409, map_type="roadmap", zoom=13)
p = gmap(google_api_key="My Google Maps API Key", map_options=map_options, title="Resolutions Clients", plot_width=1000, plot_height=600)
curdoc().add_root(p)
```
C:\Users\Administrator\PycharmProjects\FlaskApp>bokeh serve --show test.py
2018-06-11 14:20:52,449 Starting Bokeh server version 0.12.16 (running on Tornado 5.0.2)
2018-06-11 14:20:52,463 Bokeh app running at: http://localhost:5006/test
2018-06-11 14:20:52,464 Starting Bokeh server with process id: 9972
2018-06-11 14:20:52,816 200 GET /test (::1) 161.02ms
2018-06-11 14:20:53,072 101 GET /test/ws?bokeh-protocol-version=1.0&bokeh-session-id=YwriVJupgcBKp7IEeFdj5R95PJG2lxe6g82SFqbObcDA (::1) 1.00ms
2018-06-11 14:20:53,072 WebSocket connection opened
2018-06-11 14:20:53,072 ServerConnection created
2018-06-11 14:20:53,120 error handling message Message 'PATCH-DOC' (revision 1): TypeError("__init__() missing 2 required positional arguments: 'google_api_key' and 'map_options'",)
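
For what it's worth, the traceback points at document patching rather than at the snippet above: when the server applies a `PATCH-DOC` message it appears to rebuild models from their keyword properties only, so a model whose `__init__` requires positional arguments cannot be re-created. A minimal illustration (not from the original report; the import path follows the listing below):

```
from bokeh.plotting.gmap import GMap

# Pre-fix behaviour: instantiating without the positional arguments reproduces
# the server-side error from the log above.
GMap()  # TypeError: __init__() missing 2 required positional arguments
```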
</issue>
<code>
[start of bokeh/plotting/gmap.py]
1 from __future__ import absolute_import, print_function
2
3 import logging
4 logger = logging.getLogger(__name__)
5
6 from six import string_types
7
8 from ..core.enums import HorizontalLocation, VerticalLocation
9 from ..core.properties import Auto, Either, Enum, Int, Seq, Instance, String
10 from ..models import GMapPlot, LinearAxis, MercatorTicker, MercatorTickFormatter, Range1d, Title, Tool
11 from ..models import glyphs, markers
12 from ..models.tools import Drag, Inspection, Scroll, Tap
13 from ..util.options import Options
14 from .helpers import _process_tools_arg, _process_active_tools, _glyph_function
15
16 DEFAULT_TOOLS = "pan,wheel_zoom,reset,help"
17
18 class GMapFigureOptions(Options):
19
20 tools = Either(String, Seq(Either(String, Instance(Tool))), default=DEFAULT_TOOLS, help="""
21 Tools the plot should start with.
22 """)
23
24 x_minor_ticks = Either(Auto, Int, default="auto", help="""
25 Number of minor ticks between adjacent x-axis major ticks.
26 """)
27
28 y_minor_ticks = Either(Auto, Int, default="auto", help="""
29 Number of minor ticks between adjacent y-axis major ticks.
30 """)
31
32 x_axis_location = Enum(VerticalLocation, default="below", help="""
33 Where the x-axis should be located.
34 """)
35
36 y_axis_location = Enum(HorizontalLocation, default="left", help="""
37 Where the y-axis should be located.
38 """)
39
40 x_axis_label = String(default="", help="""
41 A label for the x-axis.
42 """)
43
44 y_axis_label = String(default="", help="""
45 A label for the y-axis.
46 """)
47
48 active_drag = Either(Auto, String, Instance(Drag), default="auto", help="""
49 Which drag tool should initially be active.
50 """)
51
52 active_inspect = Either(Auto, String, Instance(Inspection), Seq(Instance(Inspection)), default="auto", help="""
53     Which inspection tool should initially be active.
54 """)
55
56 active_scroll = Either(Auto, String, Instance(Scroll), default="auto", help="""
57 Which scroll tool should initially be active.
58 """)
59
60 active_tap = Either(Auto, String, Instance(Tap), default="auto", help="""
61 Which tap tool should initially be active.
62 """)
63
64 class GMap(GMapPlot):
65 ''' A subclass of :class:`~bokeh.models.plots.Plot` that simplifies plot
66 creation with default axes, grids, tools, etc.
67
68 In addition to all the Bokeh model property attributes documented below,
69 the ``Figure`` initializer also accepts the following options, which can
70 help simplify configuration:
71
72 .. bokeh-options:: GMapFigureOptions
73 :module: bokeh.plotting.figure
74
75 '''
76
77 __subtype__ = "GMap"
78 __view_model__ = "GMapPlot"
79
80 def __init__(self, google_api_key, map_options, **kw):
81
82 if 'plot_width' in kw and 'width' in kw:
83 raise ValueError("Figure called with both 'plot_width' and 'width' supplied, supply only one")
84 if 'plot_height' in kw and 'height' in kw:
85 raise ValueError("Figure called with both 'plot_height' and 'height' supplied, supply only one")
86 if 'height' in kw:
87 kw['plot_height'] = kw.pop('height')
88 if 'width' in kw:
89 kw['plot_width'] = kw.pop('width')
90
91 opts = GMapFigureOptions(kw)
92
93 title = kw.get("title", None)
94 if isinstance(title, string_types):
95 kw['title'] = Title(text=title)
96
97 super(GMap, self).__init__(api_key=google_api_key, map_options=map_options,
98 x_range=Range1d(), y_range=Range1d(), **kw)
99
100 xf = MercatorTickFormatter(dimension="lon")
101 xt = MercatorTicker(dimension="lon")
102 self.add_layout(LinearAxis(formatter=xf, ticker=xt), 'below')
103
104 yf = MercatorTickFormatter(dimension="lat")
105 yt = MercatorTicker(dimension="lat")
106 self.add_layout(LinearAxis(formatter=yf, ticker=yt), 'left')
107
108 tool_objs, tool_map = _process_tools_arg(self, opts.tools)
109 self.add_tools(*tool_objs)
110 _process_active_tools(self.toolbar, tool_map, opts.active_drag, opts.active_inspect, opts.active_scroll, opts.active_tap)
111
112 annular_wedge = _glyph_function(glyphs.AnnularWedge)
113
114 annulus = _glyph_function(glyphs.Annulus)
115
116 arc = _glyph_function(glyphs.Arc)
117
118 asterisk = _glyph_function(markers.Asterisk)
119
120 bezier = _glyph_function(glyphs.Bezier)
121
122 circle = _glyph_function(markers.Circle)
123
124 circle_cross = _glyph_function(markers.CircleCross)
125
126 circle_x = _glyph_function(markers.CircleX)
127
128 cross = _glyph_function(markers.Cross)
129
130 diamond = _glyph_function(markers.Diamond)
131
132 diamond_cross = _glyph_function(markers.DiamondCross)
133
134 hbar = _glyph_function(glyphs.HBar)
135
136 ellipse = _glyph_function(glyphs.Ellipse)
137
138 image = _glyph_function(glyphs.Image)
139
140 image_rgba = _glyph_function(glyphs.ImageRGBA)
141
142 image_url = _glyph_function(glyphs.ImageURL)
143
144 inverted_triangle = _glyph_function(markers.InvertedTriangle)
145
146 line = _glyph_function(glyphs.Line)
147
148 multi_line = _glyph_function(glyphs.MultiLine)
149
150 oval = _glyph_function(glyphs.Oval)
151
152 patch = _glyph_function(glyphs.Patch)
153
154 patches = _glyph_function(glyphs.Patches)
155
156 quad = _glyph_function(glyphs.Quad)
157
158 quadratic = _glyph_function(glyphs.Quadratic)
159
160 ray = _glyph_function(glyphs.Ray)
161
162 rect = _glyph_function(glyphs.Rect)
163
164 segment = _glyph_function(glyphs.Segment)
165
166 square = _glyph_function(markers.Square)
167
168 square_cross = _glyph_function(markers.SquareCross)
169
170 square_x = _glyph_function(markers.SquareX)
171
172 text = _glyph_function(glyphs.Text)
173
174 triangle = _glyph_function(markers.Triangle)
175
176 vbar = _glyph_function(glyphs.VBar)
177
178 wedge = _glyph_function(glyphs.Wedge)
179
180 x = _glyph_function(markers.X)
181
182 def gmap(google_api_key, map_options, **kwargs):
183 ''' Create a new :class:`~bokeh.plotting.gmap.GMap` for plotting.
184
185 Args:
186 google_api_key (str):
187 Google requires an API key be supplied for maps to function. See:
188
189 https://developers.google.com/maps/documentation/javascript/get-api-key
190
191 map_options: (GMapOptions)
192 Configuration specific to a Google Map
193
194 In addition to the standard :class:`~bokeh.plotting.gmap.GMap` keyword
195 arguments (e.g. ``plot_width`` or ``sizing_mode``), the following
196 additional options can be passed as well:
197
198 .. bokeh-options:: GMapFigureOptions
199 :module: bokeh.plotting.gmap
200
201 Returns:
202 GMap
203
204 '''
205
206 return GMap(google_api_key, map_options, **kwargs)
207
[end of bokeh/plotting/gmap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bokeh/plotting/gmap.py b/bokeh/plotting/gmap.py
--- a/bokeh/plotting/gmap.py
+++ b/bokeh/plotting/gmap.py
@@ -65,6 +65,15 @@
''' A subclass of :class:`~bokeh.models.plots.Plot` that simplifies plot
creation with default axes, grids, tools, etc.
+ Args:
+ google_api_key (str):
+ Google requires an API key be supplied for maps to function. See:
+
+ https://developers.google.com/maps/documentation/javascript/get-api-key
+
+ map_options: (GMapOptions)
+ Configuration specific to a Google Map
+
In addition to all the Bokeh model property attributes documented below,
the ``Figure`` initializer also accepts the following options, which can
help simplify configuration:
@@ -77,7 +86,7 @@
__subtype__ = "GMap"
__view_model__ = "GMapPlot"
- def __init__(self, google_api_key, map_options, **kw):
+ def __init__(self, **kw):
if 'plot_width' in kw and 'width' in kw:
raise ValueError("Figure called with both 'plot_width' and 'width' supplied, supply only one")
@@ -94,8 +103,7 @@
if isinstance(title, string_types):
kw['title'] = Title(text=title)
- super(GMap, self).__init__(api_key=google_api_key, map_options=map_options,
- x_range=Range1d(), y_range=Range1d(), **kw)
+ super(GMap, self).__init__(x_range=Range1d(), y_range=Range1d(), **kw)
xf = MercatorTickFormatter(dimension="lon")
xt = MercatorTicker(dimension="lon")
@@ -203,4 +211,4 @@
'''
- return GMap(google_api_key, map_options, **kwargs)
+ return GMap(api_key=google_api_key, map_options=map_options, **kwargs)
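
As a quick sanity check on the patched signature above (a sketch, not part of the upstream test suite): `gmap()` now forwards everything to `GMap` as keyword arguments, so the call from the issue keeps working unchanged.

```
from bokeh.models import GMapOptions
from bokeh.plotting import gmap

map_options = GMapOptions(lat=37.686293, lng=-97.3614409, map_type="roadmap", zoom=13)
p = gmap(google_api_key="My Google Maps API Key", map_options=map_options,
         plot_width=1000, plot_height=600)
```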
|
{"golden_diff": "diff --git a/bokeh/plotting/gmap.py b/bokeh/plotting/gmap.py\n--- a/bokeh/plotting/gmap.py\n+++ b/bokeh/plotting/gmap.py\n@@ -65,6 +65,15 @@\n ''' A subclass of :class:`~bokeh.models.plots.Plot` that simplifies plot\n creation with default axes, grids, tools, etc.\n \n+ Args:\n+ google_api_key (str):\n+ Google requires an API key be supplied for maps to function. See:\n+\n+ https://developers.google.com/maps/documentation/javascript/get-api-key\n+\n+ map_options: (GMapOptions)\n+ Configuration specific to a Google Map\n+\n In addition to all the Bokeh model property attributes documented below,\n the ``Figure`` initializer also accepts the following options, which can\n help simplify configuration:\n@@ -77,7 +86,7 @@\n __subtype__ = \"GMap\"\n __view_model__ = \"GMapPlot\"\n \n- def __init__(self, google_api_key, map_options, **kw):\n+ def __init__(self, **kw):\n \n if 'plot_width' in kw and 'width' in kw:\n raise ValueError(\"Figure called with both 'plot_width' and 'width' supplied, supply only one\")\n@@ -94,8 +103,7 @@\n if isinstance(title, string_types):\n kw['title'] = Title(text=title)\n \n- super(GMap, self).__init__(api_key=google_api_key, map_options=map_options,\n- x_range=Range1d(), y_range=Range1d(), **kw)\n+ super(GMap, self).__init__(x_range=Range1d(), y_range=Range1d(), **kw)\n \n xf = MercatorTickFormatter(dimension=\"lon\")\n xt = MercatorTicker(dimension=\"lon\")\n@@ -203,4 +211,4 @@\n \n '''\n \n- return GMap(google_api_key, map_options, **kwargs)\n+ return GMap(api_key=google_api_key, map_options=map_options, **kwargs)\n", "issue": "Bokeh GMapOptions error\nBokeh version - 0.12.16\r\nPython - 3.5\r\nOS - Windows 10\r\nBrowser - IE, Chrome, Firefox\r\nOther relevant packages - GMapOptions, gmap\r\n\r\nI'm fairly new to Bokeh so please excuse me for my lack of knowledge/experience with bokeh. I am trying to create a geographic visualization using bokeh and then integrate my bokeh server app into flask to deploy it on a VPS. When run independently my code produces the necessary visualization but, gives an error in the command line. The error is occuring at the gmap call. I tried looking this error up and found a few relevant results all of which pointed to bugs in the Bokeh library. Where I am really getting stuck is, at the point of deployment of this visualization either independently as a bokeh server app or by integrating it with Flask because of this error. Any help on this is greatly appreciated. 
Thanks!\r\n\r\n\r\n```\r\nfrom bokeh.io import curdoc\r\nfrom bokeh.models import GMapOptions\r\nfrom bokeh.plotting import gmap\r\nmap_options = GMapOptions(lat=37.686293, lng=-97.3614409, map_type=\"roadmap\", zoom=13)\r\np = gmap(google_api_key=\"My Google Maps API Key\", map_options=map_options, title=\"Resolutions Clients\", plot_width=1000, plot_height=600)\r\ncurdoc().add_root(p)\r\n```\r\n\r\n\r\nC:\\Users\\Administrator\\PycharmProjects\\FlaskApp>bokeh serve --show test.py\r\n2018-06-11 14:20:52,449 Starting Bokeh server version 0.12.16 (running on Tornad\r\no 5.0.2)\r\n2018-06-11 14:20:52,463 Bokeh app running at: http://localhost:5006/test\r\n2018-06-11 14:20:52,464 Starting Bokeh server with process id: 9972\r\n2018-06-11 14:20:52,816 200 GET /test (::1) 161.02ms\r\n2018-06-11 14:20:53,072 101 GET /test/ws?bokeh-protocol-version=1.0&bokeh-sessio\r\nn-id=YwriVJupgcBKp7IEeFdj5R95PJG2lxe6g82SFqbObcDA (::1) 1.00ms\r\n2018-06-11 14:20:53,072 WebSocket connection opened\r\n2018-06-11 14:20:53,072 ServerConnection created\r\n2018-06-11 14:20:53,120 error handling message Message 'PATCH-DOC' (revision 1):\r\n TypeError(\"__init__() missing 2 required positional arguments: 'google_api_key'\r\n and 'map_options'\",)\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function\n\nimport logging\nlogger = logging.getLogger(__name__)\n\nfrom six import string_types\n\nfrom ..core.enums import HorizontalLocation, VerticalLocation\nfrom ..core.properties import Auto, Either, Enum, Int, Seq, Instance, String\nfrom ..models import GMapPlot, LinearAxis, MercatorTicker, MercatorTickFormatter, Range1d, Title, Tool\nfrom ..models import glyphs, markers\nfrom ..models.tools import Drag, Inspection, Scroll, Tap\nfrom ..util.options import Options\nfrom .helpers import _process_tools_arg, _process_active_tools, _glyph_function\n\nDEFAULT_TOOLS = \"pan,wheel_zoom,reset,help\"\n\nclass GMapFigureOptions(Options):\n\n tools = Either(String, Seq(Either(String, Instance(Tool))), default=DEFAULT_TOOLS, help=\"\"\"\n Tools the plot should start with.\n \"\"\")\n\n x_minor_ticks = Either(Auto, Int, default=\"auto\", help=\"\"\"\n Number of minor ticks between adjacent x-axis major ticks.\n \"\"\")\n\n y_minor_ticks = Either(Auto, Int, default=\"auto\", help=\"\"\"\n Number of minor ticks between adjacent y-axis major ticks.\n \"\"\")\n\n x_axis_location = Enum(VerticalLocation, default=\"below\", help=\"\"\"\n Where the x-axis should be located.\n \"\"\")\n\n y_axis_location = Enum(HorizontalLocation, default=\"left\", help=\"\"\"\n Where the y-axis should be located.\n \"\"\")\n\n x_axis_label = String(default=\"\", help=\"\"\"\n A label for the x-axis.\n \"\"\")\n\n y_axis_label = String(default=\"\", help=\"\"\"\n A label for the y-axis.\n \"\"\")\n\n active_drag = Either(Auto, String, Instance(Drag), default=\"auto\", help=\"\"\"\n Which drag tool should initially be active.\n \"\"\")\n\n active_inspect = Either(Auto, String, Instance(Inspection), Seq(Instance(Inspection)), default=\"auto\", help=\"\"\"\n Which drag tool should initially be active.\n \"\"\")\n\n active_scroll = Either(Auto, String, Instance(Scroll), default=\"auto\", help=\"\"\"\n Which scroll tool should initially be active.\n \"\"\")\n\n active_tap = Either(Auto, String, Instance(Tap), default=\"auto\", help=\"\"\"\n Which tap tool should initially be active.\n \"\"\")\n\nclass GMap(GMapPlot):\n ''' A subclass of :class:`~bokeh.models.plots.Plot` that simplifies plot\n creation with default axes, 
grids, tools, etc.\n\n In addition to all the Bokeh model property attributes documented below,\n the ``Figure`` initializer also accepts the following options, which can\n help simplify configuration:\n\n .. bokeh-options:: GMapFigureOptions\n :module: bokeh.plotting.figure\n\n '''\n\n __subtype__ = \"GMap\"\n __view_model__ = \"GMapPlot\"\n\n def __init__(self, google_api_key, map_options, **kw):\n\n if 'plot_width' in kw and 'width' in kw:\n raise ValueError(\"Figure called with both 'plot_width' and 'width' supplied, supply only one\")\n if 'plot_height' in kw and 'height' in kw:\n raise ValueError(\"Figure called with both 'plot_height' and 'height' supplied, supply only one\")\n if 'height' in kw:\n kw['plot_height'] = kw.pop('height')\n if 'width' in kw:\n kw['plot_width'] = kw.pop('width')\n\n opts = GMapFigureOptions(kw)\n\n title = kw.get(\"title\", None)\n if isinstance(title, string_types):\n kw['title'] = Title(text=title)\n\n super(GMap, self).__init__(api_key=google_api_key, map_options=map_options,\n x_range=Range1d(), y_range=Range1d(), **kw)\n\n xf = MercatorTickFormatter(dimension=\"lon\")\n xt = MercatorTicker(dimension=\"lon\")\n self.add_layout(LinearAxis(formatter=xf, ticker=xt), 'below')\n\n yf = MercatorTickFormatter(dimension=\"lat\")\n yt = MercatorTicker(dimension=\"lat\")\n self.add_layout(LinearAxis(formatter=yf, ticker=yt), 'left')\n\n tool_objs, tool_map = _process_tools_arg(self, opts.tools)\n self.add_tools(*tool_objs)\n _process_active_tools(self.toolbar, tool_map, opts.active_drag, opts.active_inspect, opts.active_scroll, opts.active_tap)\n\n annular_wedge = _glyph_function(glyphs.AnnularWedge)\n\n annulus = _glyph_function(glyphs.Annulus)\n\n arc = _glyph_function(glyphs.Arc)\n\n asterisk = _glyph_function(markers.Asterisk)\n\n bezier = _glyph_function(glyphs.Bezier)\n\n circle = _glyph_function(markers.Circle)\n\n circle_cross = _glyph_function(markers.CircleCross)\n\n circle_x = _glyph_function(markers.CircleX)\n\n cross = _glyph_function(markers.Cross)\n\n diamond = _glyph_function(markers.Diamond)\n\n diamond_cross = _glyph_function(markers.DiamondCross)\n\n hbar = _glyph_function(glyphs.HBar)\n\n ellipse = _glyph_function(glyphs.Ellipse)\n\n image = _glyph_function(glyphs.Image)\n\n image_rgba = _glyph_function(glyphs.ImageRGBA)\n\n image_url = _glyph_function(glyphs.ImageURL)\n\n inverted_triangle = _glyph_function(markers.InvertedTriangle)\n\n line = _glyph_function(glyphs.Line)\n\n multi_line = _glyph_function(glyphs.MultiLine)\n\n oval = _glyph_function(glyphs.Oval)\n\n patch = _glyph_function(glyphs.Patch)\n\n patches = _glyph_function(glyphs.Patches)\n\n quad = _glyph_function(glyphs.Quad)\n\n quadratic = _glyph_function(glyphs.Quadratic)\n\n ray = _glyph_function(glyphs.Ray)\n\n rect = _glyph_function(glyphs.Rect)\n\n segment = _glyph_function(glyphs.Segment)\n\n square = _glyph_function(markers.Square)\n\n square_cross = _glyph_function(markers.SquareCross)\n\n square_x = _glyph_function(markers.SquareX)\n\n text = _glyph_function(glyphs.Text)\n\n triangle = _glyph_function(markers.Triangle)\n\n vbar = _glyph_function(glyphs.VBar)\n\n wedge = _glyph_function(glyphs.Wedge)\n\n x = _glyph_function(markers.X)\n\ndef gmap(google_api_key, map_options, **kwargs):\n ''' Create a new :class:`~bokeh.plotting.gmap.GMap` for plotting.\n\n Args:\n google_api_key (str):\n Google requires an API key be supplied for maps to function. 
See:\n\n https://developers.google.com/maps/documentation/javascript/get-api-key\n\n map_options: (GMapOptions)\n Configuration specific to a Google Map\n\n In addition to the standard :class:`~bokeh.plotting.gmap.GMap` keyword\n arguments (e.g. ``plot_width`` or ``sizing_mode``), the following\n additional options can be passed as well:\n\n .. bokeh-options:: GMapFigureOptions\n :module: bokeh.plotting.gmap\n\n Returns:\n GMap\n\n '''\n\n return GMap(google_api_key, map_options, **kwargs)\n", "path": "bokeh/plotting/gmap.py"}]}
| 3,419 | 473 |
gh_patches_debug_3570
|
rasdani/github-patches
|
git_diff
|
pystiche__pystiche-479
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
the default value for allow_inplace changed from False to True
This change was added in #392
# Before
https://github.com/pmeier/pystiche/blob/950b84837df26a0cab2f9f2714884655173206bf/pystiche/enc/models/vgg.py#L149
https://github.com/pmeier/pystiche/blob/950b84837df26a0cab2f9f2714884655173206bf/pystiche/enc/models/alexnet.py#L38
# After
https://github.com/pmeier/pystiche/blob/3fd3504b94d6bce5389784abea9e91d23c5fb153/pystiche/enc/models/utils.py#L51
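
Until the default is reverted, a workaround is to pass the flag explicitly instead of relying on it. A sketch using pystiche's public VGG helper (treat the exact entry point as an assumption; any `ModelMultiLayerEncoder` subclass takes the same keyword):

```
from pystiche import enc

# Keep encodings of earlier layers intact (the behaviour the old default guaranteed).
mle = enc.vgg19_multi_layer_encoder(allow_inplace=False)

# Opt in to the in-place memory saving only when intermediate encodings are not reused.
mle_inplace = enc.vgg19_multi_layer_encoder(allow_inplace=True)
```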
</issue>
<code>
[start of pystiche/enc/models/utils.py]
1 from abc import abstractmethod
2 from typing import Any, Callable, Dict, List, Optional, Tuple, TypeVar, cast
3
4 import torch
5 from torch import hub, nn
6 from torch.nn.modules.module import _IncompatibleKeys
7
8 from ..multi_layer_encoder import MultiLayerEncoder
9 from ..prepostprocessing import preprocessing
10
11 __all__ = ["ModelMultiLayerEncoder", "select_url"]
12
13 T = TypeVar("T")
14
15
16 def select_url(
17 urls: Dict[T, str], key: T, format: Optional[Callable[[T], str]] = None
18 ) -> str:
19 if format is None:
20 format = str
21
22 try:
23 return urls[key]
24 except KeyError as error:
25 raise RuntimeError(f"No URL is available for\n\n{format(key)}") from error
26
27
28 class ModelMultiLayerEncoder(MultiLayerEncoder):
29 r"""Multi-layer encoder based on a pre-defined model.
30
31 Args:
32 pretrained: If ``True``, loads builtin weights. Defaults to ``True``.
33 framework: Name of the framework that was used to train the builtin weights.
34 Defaults to ``"torch"``.
35 internal_preprocessing: If ``True``, adds a preprocessing layer for the
36 selected ``framework`` as first layer. Defaults to ``True``.
37 allow_inplace: If ``True``, allows inplace operations to reduce the memory
38 requirement during the forward pass. Defaults to ``False``.
39
40 .. warning::
41 After performing an inplace operation the encodings of the previous
42             layer are no longer accessible. Only use this if you are sure that you
43 do **not** need these encodings.
44 """
45
46 def __init__(
47 self,
48 pretrained: bool = True,
49 framework: str = "torch",
50 internal_preprocessing: bool = True,
51 allow_inplace: bool = True,
52 ) -> None:
53 self.pretrained = pretrained
54 self.framework = framework
55 self.internal_preprocessing = internal_preprocessing
56 self.allow_inplace = allow_inplace
57
58 modules, self._state_dict_key_map = self.collect_modules(allow_inplace)
59 if internal_preprocessing:
60 modules.insert(0, ("preprocessing", preprocessing(framework)))
61
62 super().__init__(modules)
63
64 if pretrained:
65 self.load_state_dict_from_url(framework)
66
67 @abstractmethod
68 def state_dict_url(self, framework: str) -> str:
69 r"""Select URL of a downloadable ``state_dict``.
70
71 Args:
72 framework: Name of the framework that was used to train the weights.
73
74 Raises:
75 RuntimeError: If no ``state_dict`` is available.
76 """
77 pass
78
79 @abstractmethod
80 def collect_modules(
81 self, inplace: bool
82 ) -> Tuple[List[Tuple[str, nn.Module]], Dict[str, str]]:
83 r"""Collect modules of a base model with more descriptive names.
84
85 Args:
86 inplace: If ``True``, when possible, modules should use inplace operations.
87
88 Returns:
89 List of name-module-pairs as well as a dictionary mapping the new, more
90 descriptive names to the original ones.
91 """
92 pass
93
94 def _map_state_dict_keys(
95 self, state_dict: Dict[str, torch.Tensor]
96 ) -> Tuple[Dict[str, torch.Tensor], List[str]]:
97 remapped_state_dict = {}
98 unexpected_keys = []
99 for key, value in state_dict.items():
100 if key in self._state_dict_key_map:
101 remapped_state_dict[self._state_dict_key_map[key]] = value
102 else:
103 unexpected_keys.append(key)
104
105 return remapped_state_dict, unexpected_keys
106
107 def load_state_dict(
108 self,
109 state_dict: Dict[str, torch.Tensor],
110 strict: bool = True,
111 map_names: bool = True,
112 framework: str = "unknown",
113 ) -> _IncompatibleKeys:
114 r"""Loads parameters and buffers from the ``state_dict``.
115
116 Args:
117 state_dict: State dictionary.
118 strict: Enforce matching keys in ``state_dict`` and the internal states.
119             map_names: If ``True``, maps the names in ``state_dict`` of the
120 underlying model to the more descriptive names generated by
121 :meth:`collect_modules`. Defaults to ``True``.
122 framework: Name of the framework that was used to train the weights in
123 ``state_dict``. Defaults to ``"unknown"``.
124
125 .. note::
126
127 This has no effect on the behavior, but makes the representation
128 of the :class:`ModelMultiLayerEncoder` more descriptive.
129
130 Returns:
131 Named tuple with ``missing_keys`` and ``unexpected_keys`` fields.
132
133 .. seealso::
134
135 :meth:`torch.nn.Module.load_state_dict`
136 """
137 if map_names:
138 state_dict, unexpected_keys = self._map_state_dict_keys(state_dict)
139 else:
140 unexpected_keys = []
141
142 keys = cast(
143 _IncompatibleKeys, super().load_state_dict(state_dict, strict=strict)
144 )
145 keys.unexpected_keys.extend(unexpected_keys)
146
147 self.pretrained = True
148 self.framework = framework
149
150 return keys
151
152 def load_state_dict_from_url(
153 self,
154 framework: str,
155 strict: bool = True,
156 map_names: bool = True,
157 check_hash: bool = True,
158 **kwargs: Any,
159 ) -> None:
160 r"""Downloads and loads parameters and buffers trained with ``framework``.
161
162 Args:
163 framework: Name of the framework that was used to train the weights of the
164 ``state_dict``.
165 strict: Enforce matching keys in ``state_dict`` and the internal states.
166             map_names: If ``True``, maps the names in ``state_dict`` of the
167 underlying model to the more descriptive names generated by
168 :meth:`collect_modules`. Defaults to ``True``.
169 check_hash: If ``True``, checks if the hash postfix of the URL matches the
170 SHA256 hash of the downloaded ``state_dict``. Defaults to ``True``.
171 kwargs: Optional arguments for :meth:`torch.hub.load_state_dict_from_url` .
172
173 .. seealso::
174
175 - :meth:`state_dict_url`
176 - :meth:`load_state_dict`
177 - :meth:`torch.hub.load_state_dict_from_url`
178 """
179 url = self.state_dict_url(framework)
180 state_dict = hub.load_state_dict_from_url(url, check_hash=check_hash, **kwargs)
181 self.load_state_dict(
182 state_dict, strict=strict, map_names=map_names, framework=framework
183 )
184
185 def _properties(self) -> Dict[str, Any]:
186 dct = super()._properties()
187 if not self.pretrained:
188 dct["pretrained"] = False
189 else:
190 dct["framework"] = self.framework
191 if not self.internal_preprocessing:
192 dct["internal_preprocessing"] = self.internal_preprocessing
193 if self.allow_inplace:
194 dct["allow_inplace"] = self.allow_inplace
195 return dct
196
[end of pystiche/enc/models/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pystiche/enc/models/utils.py b/pystiche/enc/models/utils.py
--- a/pystiche/enc/models/utils.py
+++ b/pystiche/enc/models/utils.py
@@ -48,7 +48,7 @@
pretrained: bool = True,
framework: str = "torch",
internal_preprocessing: bool = True,
- allow_inplace: bool = True,
+ allow_inplace: bool = False,
) -> None:
self.pretrained = pretrained
self.framework = framework
|
{"golden_diff": "diff --git a/pystiche/enc/models/utils.py b/pystiche/enc/models/utils.py\n--- a/pystiche/enc/models/utils.py\n+++ b/pystiche/enc/models/utils.py\n@@ -48,7 +48,7 @@\n pretrained: bool = True,\n framework: str = \"torch\",\n internal_preprocessing: bool = True,\n- allow_inplace: bool = True,\n+ allow_inplace: bool = False,\n ) -> None:\n self.pretrained = pretrained\n self.framework = framework\n", "issue": "the default value for allow_inplace changed from False to True\nThis change was added in #392 \r\n\r\n# Before\r\n\r\nhttps://github.com/pmeier/pystiche/blob/950b84837df26a0cab2f9f2714884655173206bf/pystiche/enc/models/vgg.py#L149\r\n\r\nhttps://github.com/pmeier/pystiche/blob/950b84837df26a0cab2f9f2714884655173206bf/pystiche/enc/models/alexnet.py#L38\r\n\r\n# After\r\n\r\nhttps://github.com/pmeier/pystiche/blob/3fd3504b94d6bce5389784abea9e91d23c5fb153/pystiche/enc/models/utils.py#L51\n", "before_files": [{"content": "from abc import abstractmethod\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, TypeVar, cast\n\nimport torch\nfrom torch import hub, nn\nfrom torch.nn.modules.module import _IncompatibleKeys\n\nfrom ..multi_layer_encoder import MultiLayerEncoder\nfrom ..prepostprocessing import preprocessing\n\n__all__ = [\"ModelMultiLayerEncoder\", \"select_url\"]\n\nT = TypeVar(\"T\")\n\n\ndef select_url(\n urls: Dict[T, str], key: T, format: Optional[Callable[[T], str]] = None\n) -> str:\n if format is None:\n format = str\n\n try:\n return urls[key]\n except KeyError as error:\n raise RuntimeError(f\"No URL is available for\\n\\n{format(key)}\") from error\n\n\nclass ModelMultiLayerEncoder(MultiLayerEncoder):\n r\"\"\"Multi-layer encoder based on a pre-defined model.\n\n Args:\n pretrained: If ``True``, loads builtin weights. Defaults to ``True``.\n framework: Name of the framework that was used to train the builtin weights.\n Defaults to ``\"torch\"``.\n internal_preprocessing: If ``True``, adds a preprocessing layer for the\n selected ``framework`` as first layer. Defaults to ``True``.\n allow_inplace: If ``True``, allows inplace operations to reduce the memory\n requirement during the forward pass. Defaults to ``False``.\n\n .. warning::\n After performing an inplace operation the encodings of the previous\n layer is no longer accessible. 
Only use this if you are sure that you\n do **not** need these encodings.\n \"\"\"\n\n def __init__(\n self,\n pretrained: bool = True,\n framework: str = \"torch\",\n internal_preprocessing: bool = True,\n allow_inplace: bool = True,\n ) -> None:\n self.pretrained = pretrained\n self.framework = framework\n self.internal_preprocessing = internal_preprocessing\n self.allow_inplace = allow_inplace\n\n modules, self._state_dict_key_map = self.collect_modules(allow_inplace)\n if internal_preprocessing:\n modules.insert(0, (\"preprocessing\", preprocessing(framework)))\n\n super().__init__(modules)\n\n if pretrained:\n self.load_state_dict_from_url(framework)\n\n @abstractmethod\n def state_dict_url(self, framework: str) -> str:\n r\"\"\"Select URL of a downloadable ``state_dict``.\n\n Args:\n framework: Name of the framework that was used to train the weights.\n\n Raises:\n RuntimeError: If no ``state_dict`` is available.\n \"\"\"\n pass\n\n @abstractmethod\n def collect_modules(\n self, inplace: bool\n ) -> Tuple[List[Tuple[str, nn.Module]], Dict[str, str]]:\n r\"\"\"Collect modules of a base model with more descriptive names.\n\n Args:\n inplace: If ``True``, when possible, modules should use inplace operations.\n\n Returns:\n List of name-module-pairs as well as a dictionary mapping the new, more\n descriptive names to the original ones.\n \"\"\"\n pass\n\n def _map_state_dict_keys(\n self, state_dict: Dict[str, torch.Tensor]\n ) -> Tuple[Dict[str, torch.Tensor], List[str]]:\n remapped_state_dict = {}\n unexpected_keys = []\n for key, value in state_dict.items():\n if key in self._state_dict_key_map:\n remapped_state_dict[self._state_dict_key_map[key]] = value\n else:\n unexpected_keys.append(key)\n\n return remapped_state_dict, unexpected_keys\n\n def load_state_dict(\n self,\n state_dict: Dict[str, torch.Tensor],\n strict: bool = True,\n map_names: bool = True,\n framework: str = \"unknown\",\n ) -> _IncompatibleKeys:\n r\"\"\"Loads parameters and buffers from the ``state_dict``.\n\n Args:\n state_dict: State dictionary.\n strict: Enforce matching keys in ``state_dict`` and the internal states.\n map_names: If ``True``, maps the names names in ``state_dict`` of the\n underlying model to the more descriptive names generated by\n :meth:`collect_modules`. Defaults to ``True``.\n framework: Name of the framework that was used to train the weights in\n ``state_dict``. Defaults to ``\"unknown\"``.\n\n .. note::\n\n This has no effect on the behavior, but makes the representation\n of the :class:`ModelMultiLayerEncoder` more descriptive.\n\n Returns:\n Named tuple with ``missing_keys`` and ``unexpected_keys`` fields.\n\n .. 
seealso::\n\n :meth:`torch.nn.Module.load_state_dict`\n \"\"\"\n if map_names:\n state_dict, unexpected_keys = self._map_state_dict_keys(state_dict)\n else:\n unexpected_keys = []\n\n keys = cast(\n _IncompatibleKeys, super().load_state_dict(state_dict, strict=strict)\n )\n keys.unexpected_keys.extend(unexpected_keys)\n\n self.pretrained = True\n self.framework = framework\n\n return keys\n\n def load_state_dict_from_url(\n self,\n framework: str,\n strict: bool = True,\n map_names: bool = True,\n check_hash: bool = True,\n **kwargs: Any,\n ) -> None:\n r\"\"\"Downloads and loads parameters and buffers trained with ``framework``.\n\n Args:\n framework: Name of the framework that was used to train the weights of the\n ``state_dict``.\n strict: Enforce matching keys in ``state_dict`` and the internal states.\n map_names: If ``True``, maps the names names in ``state_dict`` of the\n underlying model to the more descriptive names generated by\n :meth:`collect_modules`. Defaults to ``True``.\n check_hash: If ``True``, checks if the hash postfix of the URL matches the\n SHA256 hash of the downloaded ``state_dict``. Defaults to ``True``.\n kwargs: Optional arguments for :meth:`torch.hub.load_state_dict_from_url` .\n\n .. seealso::\n\n - :meth:`state_dict_url`\n - :meth:`load_state_dict`\n - :meth:`torch.hub.load_state_dict_from_url`\n \"\"\"\n url = self.state_dict_url(framework)\n state_dict = hub.load_state_dict_from_url(url, check_hash=check_hash, **kwargs)\n self.load_state_dict(\n state_dict, strict=strict, map_names=map_names, framework=framework\n )\n\n def _properties(self) -> Dict[str, Any]:\n dct = super()._properties()\n if not self.pretrained:\n dct[\"pretrained\"] = False\n else:\n dct[\"framework\"] = self.framework\n if not self.internal_preprocessing:\n dct[\"internal_preprocessing\"] = self.internal_preprocessing\n if self.allow_inplace:\n dct[\"allow_inplace\"] = self.allow_inplace\n return dct\n", "path": "pystiche/enc/models/utils.py"}]}
| 2,733 | 118 |
gh_patches_debug_24234
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-1318
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Triggering a code reload during a PDB session breaks terminal input
Pylons and Django hit the same issue: https://github.com/Pylons/pyramid/issues/689
PR incoming to do exactly what they did 🤷♂️
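
For context, the Pyramid fix referenced above boils down to re-enabling terminal echo on stdin before the reloader restarts the process. A minimal POSIX-only sketch of that idea (this mirrors the fix recorded later in this entry; treat the names as illustrative):

```
import sys

def ensure_echo_on():
    # PDB can leave the controlling tty with ECHO cleared; restore it so the
    # restarted process gets a usable terminal again.
    if not sys.stdin.isatty():
        return
    try:
        import termios
    except ImportError:  # e.g. Windows
        return
    attributes = termios.tcgetattr(sys.stdin)
    if not attributes[3] & termios.ECHO:
        attributes[3] |= termios.ECHO
        termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)
```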
</issue>
<code>
[start of werkzeug/_reloader.py]
1 import os
2 import sys
3 import time
4 import subprocess
5 import threading
6 from itertools import chain
7
8 from werkzeug._internal import _log
9 from werkzeug._compat import PY2, iteritems, text_type
10
11
12 def _iter_module_files():
13 """This iterates over all relevant Python files. It goes through all
14 loaded files from modules, all files in folders of already loaded modules
15 as well as all files reachable through a package.
16 """
17 # The list call is necessary on Python 3 in case the module
18 # dictionary modifies during iteration.
19 for module in list(sys.modules.values()):
20 if module is None:
21 continue
22 filename = getattr(module, '__file__', None)
23 if filename:
24 if os.path.isdir(filename) and \
25 os.path.exists(os.path.join(filename, "__init__.py")):
26 filename = os.path.join(filename, "__init__.py")
27
28 old = None
29 while not os.path.isfile(filename):
30 old = filename
31 filename = os.path.dirname(filename)
32 if filename == old:
33 break
34 else:
35 if filename[-4:] in ('.pyc', '.pyo'):
36 filename = filename[:-1]
37 yield filename
38
39
40 def _find_observable_paths(extra_files=None):
41 """Finds all paths that should be observed."""
42 rv = set(os.path.dirname(os.path.abspath(x))
43 if os.path.isfile(x) else os.path.abspath(x)
44 for x in sys.path)
45
46 for filename in extra_files or ():
47 rv.add(os.path.dirname(os.path.abspath(filename)))
48
49 for module in list(sys.modules.values()):
50 fn = getattr(module, '__file__', None)
51 if fn is None:
52 continue
53 fn = os.path.abspath(fn)
54 rv.add(os.path.dirname(fn))
55
56 return _find_common_roots(rv)
57
58
59 def _get_args_for_reloading():
60 """Returns the executable. This contains a workaround for windows
61 if the executable is incorrectly reported to not have the .exe
62 extension which can cause bugs on reloading.
63 """
64 rv = [sys.executable]
65 py_script = sys.argv[0]
66 if os.name == 'nt' and not os.path.exists(py_script) and \
67 os.path.exists(py_script + '.exe'):
68 py_script += '.exe'
69 if os.path.splitext(rv[0])[1] == '.exe' and os.path.splitext(py_script)[1] == '.exe':
70 rv.pop(0)
71 rv.append(py_script)
72 rv.extend(sys.argv[1:])
73 return rv
74
75
76 def _find_common_roots(paths):
77 """Out of some paths it finds the common roots that need monitoring."""
78 paths = [x.split(os.path.sep) for x in paths]
79 root = {}
80 for chunks in sorted(paths, key=len, reverse=True):
81 node = root
82 for chunk in chunks:
83 node = node.setdefault(chunk, {})
84 node.clear()
85
86 rv = set()
87
88 def _walk(node, path):
89 for prefix, child in iteritems(node):
90 _walk(child, path + (prefix,))
91 if not node:
92 rv.add('/'.join(path))
93 _walk(root, ())
94 return rv
95
96
97 class ReloaderLoop(object):
98 name = None
99
100 # monkeypatched by testsuite. wrapping with `staticmethod` is required in
101 # case time.sleep has been replaced by a non-c function (e.g. by
102 # `eventlet.monkey_patch`) before we get here
103 _sleep = staticmethod(time.sleep)
104
105 def __init__(self, extra_files=None, interval=1):
106 self.extra_files = set(os.path.abspath(x)
107 for x in extra_files or ())
108 self.interval = interval
109
110 def run(self):
111 pass
112
113 def restart_with_reloader(self):
114 """Spawn a new Python interpreter with the same arguments as this one,
115 but running the reloader thread.
116 """
117 while 1:
118 _log('info', ' * Restarting with %s' % self.name)
119 args = _get_args_for_reloading()
120 new_environ = os.environ.copy()
121 new_environ['WERKZEUG_RUN_MAIN'] = 'true'
122
123 # a weird bug on windows. sometimes unicode strings end up in the
124 # environment and subprocess.call does not like this, encode them
125 # to latin1 and continue.
126 if os.name == 'nt' and PY2:
127 for key, value in iteritems(new_environ):
128 if isinstance(value, text_type):
129 new_environ[key] = value.encode('iso-8859-1')
130
131 exit_code = subprocess.call(args, env=new_environ,
132 close_fds=False)
133 if exit_code != 3:
134 return exit_code
135
136 def trigger_reload(self, filename):
137 self.log_reload(filename)
138 sys.exit(3)
139
140 def log_reload(self, filename):
141 filename = os.path.abspath(filename)
142 _log('info', ' * Detected change in %r, reloading' % filename)
143
144
145 class StatReloaderLoop(ReloaderLoop):
146 name = 'stat'
147
148 def run(self):
149 mtimes = {}
150 while 1:
151 for filename in chain(_iter_module_files(),
152 self.extra_files):
153 try:
154 mtime = os.stat(filename).st_mtime
155 except OSError:
156 continue
157
158 old_time = mtimes.get(filename)
159 if old_time is None:
160 mtimes[filename] = mtime
161 continue
162 elif mtime > old_time:
163 self.trigger_reload(filename)
164 self._sleep(self.interval)
165
166
167 class WatchdogReloaderLoop(ReloaderLoop):
168
169 def __init__(self, *args, **kwargs):
170 ReloaderLoop.__init__(self, *args, **kwargs)
171 from watchdog.observers import Observer
172 from watchdog.events import FileSystemEventHandler
173 self.observable_paths = set()
174
175 def _check_modification(filename):
176 if filename in self.extra_files:
177 self.trigger_reload(filename)
178 dirname = os.path.dirname(filename)
179 if dirname.startswith(tuple(self.observable_paths)):
180 if filename.endswith(('.pyc', '.pyo', '.py')):
181 self.trigger_reload(filename)
182
183 class _CustomHandler(FileSystemEventHandler):
184
185 def on_created(self, event):
186 _check_modification(event.src_path)
187
188 def on_modified(self, event):
189 _check_modification(event.src_path)
190
191 def on_moved(self, event):
192 _check_modification(event.src_path)
193 _check_modification(event.dest_path)
194
195 def on_deleted(self, event):
196 _check_modification(event.src_path)
197
198 reloader_name = Observer.__name__.lower()
199 if reloader_name.endswith('observer'):
200 reloader_name = reloader_name[:-8]
201 reloader_name += ' reloader'
202
203 self.name = reloader_name
204
205 self.observer_class = Observer
206 self.event_handler = _CustomHandler()
207 self.should_reload = False
208
209 def trigger_reload(self, filename):
210 # This is called inside an event handler, which means throwing
211 # SystemExit has no effect.
212 # https://github.com/gorakhargosh/watchdog/issues/294
213 self.should_reload = True
214 self.log_reload(filename)
215
216 def run(self):
217 watches = {}
218 observer = self.observer_class()
219 observer.start()
220
221 try:
222 while not self.should_reload:
223 to_delete = set(watches)
224 paths = _find_observable_paths(self.extra_files)
225 for path in paths:
226 if path not in watches:
227 try:
228 watches[path] = observer.schedule(
229 self.event_handler, path, recursive=True)
230 except OSError:
231 # Clear this path from list of watches We don't want
232 # the same error message showing again in the next
233 # iteration.
234 watches[path] = None
235 to_delete.discard(path)
236 for path in to_delete:
237 watch = watches.pop(path, None)
238 if watch is not None:
239 observer.unschedule(watch)
240 self.observable_paths = paths
241 self._sleep(self.interval)
242 finally:
243 observer.stop()
244 observer.join()
245
246 sys.exit(3)
247
248
249 reloader_loops = {
250 'stat': StatReloaderLoop,
251 'watchdog': WatchdogReloaderLoop,
252 }
253
254 try:
255 __import__('watchdog.observers')
256 except ImportError:
257 reloader_loops['auto'] = reloader_loops['stat']
258 else:
259 reloader_loops['auto'] = reloader_loops['watchdog']
260
261
262 def run_with_reloader(main_func, extra_files=None, interval=1,
263 reloader_type='auto'):
264 """Run the given function in an independent python interpreter."""
265 import signal
266 reloader = reloader_loops[reloader_type](extra_files, interval)
267 signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
268 try:
269 if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
270 t = threading.Thread(target=main_func, args=())
271 t.setDaemon(True)
272 t.start()
273 reloader.run()
274 else:
275 sys.exit(reloader.restart_with_reloader())
276 except KeyboardInterrupt:
277 pass
278
[end of werkzeug/_reloader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/werkzeug/_reloader.py b/werkzeug/_reloader.py
--- a/werkzeug/_reloader.py
+++ b/werkzeug/_reloader.py
@@ -259,6 +259,22 @@
reloader_loops['auto'] = reloader_loops['watchdog']
+def ensure_echo_on():
+ """Ensure that echo mode is enabled. Some tools such as PDB disable
+ it which causes usability issues after reload."""
+ # tcgetattr will fail if stdin isn't a tty
+ if not sys.stdin.isatty():
+ return
+ try:
+ import termios
+ except ImportError:
+ return
+ attributes = termios.tcgetattr(sys.stdin)
+ if not attributes[3] & termios.ECHO:
+ attributes[3] |= termios.ECHO
+ termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)
+
+
def run_with_reloader(main_func, extra_files=None, interval=1,
reloader_type='auto'):
"""Run the given function in an independent python interpreter."""
@@ -267,6 +283,7 @@
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
try:
if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
+ ensure_echo_on()
t = threading.Thread(target=main_func, args=())
t.setDaemon(True)
t.start()
|
{"golden_diff": "diff --git a/werkzeug/_reloader.py b/werkzeug/_reloader.py\n--- a/werkzeug/_reloader.py\n+++ b/werkzeug/_reloader.py\n@@ -259,6 +259,22 @@\n reloader_loops['auto'] = reloader_loops['watchdog']\n \n \n+def ensure_echo_on():\n+ \"\"\"Ensure that echo mode is enabled. Some tools such as PDB disable\n+ it which causes usability issues after reload.\"\"\"\n+ # tcgetattr will fail if stdin isn't a tty\n+ if not sys.stdin.isatty():\n+ return\n+ try:\n+ import termios\n+ except ImportError:\n+ return\n+ attributes = termios.tcgetattr(sys.stdin)\n+ if not attributes[3] & termios.ECHO:\n+ attributes[3] |= termios.ECHO\n+ termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)\n+\n+\n def run_with_reloader(main_func, extra_files=None, interval=1,\n reloader_type='auto'):\n \"\"\"Run the given function in an independent python interpreter.\"\"\"\n@@ -267,6 +283,7 @@\n signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))\n try:\n if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':\n+ ensure_echo_on()\n t = threading.Thread(target=main_func, args=())\n t.setDaemon(True)\n t.start()\n", "issue": "Triggering a code reload during a PDB session breaks terminal input\nPylons and Django hit the same issue: https://github.com/Pylons/pyramid/issues/689\r\n\r\nPR incoming to do exactly what they did \ud83e\udd37\u200d\u2642\ufe0f\n", "before_files": [{"content": "import os\nimport sys\nimport time\nimport subprocess\nimport threading\nfrom itertools import chain\n\nfrom werkzeug._internal import _log\nfrom werkzeug._compat import PY2, iteritems, text_type\n\n\ndef _iter_module_files():\n \"\"\"This iterates over all relevant Python files. It goes through all\n loaded files from modules, all files in folders of already loaded modules\n as well as all files reachable through a package.\n \"\"\"\n # The list call is necessary on Python 3 in case the module\n # dictionary modifies during iteration.\n for module in list(sys.modules.values()):\n if module is None:\n continue\n filename = getattr(module, '__file__', None)\n if filename:\n if os.path.isdir(filename) and \\\n os.path.exists(os.path.join(filename, \"__init__.py\")):\n filename = os.path.join(filename, \"__init__.py\")\n\n old = None\n while not os.path.isfile(filename):\n old = filename\n filename = os.path.dirname(filename)\n if filename == old:\n break\n else:\n if filename[-4:] in ('.pyc', '.pyo'):\n filename = filename[:-1]\n yield filename\n\n\ndef _find_observable_paths(extra_files=None):\n \"\"\"Finds all paths that should be observed.\"\"\"\n rv = set(os.path.dirname(os.path.abspath(x))\n if os.path.isfile(x) else os.path.abspath(x)\n for x in sys.path)\n\n for filename in extra_files or ():\n rv.add(os.path.dirname(os.path.abspath(filename)))\n\n for module in list(sys.modules.values()):\n fn = getattr(module, '__file__', None)\n if fn is None:\n continue\n fn = os.path.abspath(fn)\n rv.add(os.path.dirname(fn))\n\n return _find_common_roots(rv)\n\n\ndef _get_args_for_reloading():\n \"\"\"Returns the executable. 
This contains a workaround for windows\n if the executable is incorrectly reported to not have the .exe\n extension which can cause bugs on reloading.\n \"\"\"\n rv = [sys.executable]\n py_script = sys.argv[0]\n if os.name == 'nt' and not os.path.exists(py_script) and \\\n os.path.exists(py_script + '.exe'):\n py_script += '.exe'\n if os.path.splitext(rv[0])[1] == '.exe' and os.path.splitext(py_script)[1] == '.exe':\n rv.pop(0)\n rv.append(py_script)\n rv.extend(sys.argv[1:])\n return rv\n\n\ndef _find_common_roots(paths):\n \"\"\"Out of some paths it finds the common roots that need monitoring.\"\"\"\n paths = [x.split(os.path.sep) for x in paths]\n root = {}\n for chunks in sorted(paths, key=len, reverse=True):\n node = root\n for chunk in chunks:\n node = node.setdefault(chunk, {})\n node.clear()\n\n rv = set()\n\n def _walk(node, path):\n for prefix, child in iteritems(node):\n _walk(child, path + (prefix,))\n if not node:\n rv.add('/'.join(path))\n _walk(root, ())\n return rv\n\n\nclass ReloaderLoop(object):\n name = None\n\n # monkeypatched by testsuite. wrapping with `staticmethod` is required in\n # case time.sleep has been replaced by a non-c function (e.g. by\n # `eventlet.monkey_patch`) before we get here\n _sleep = staticmethod(time.sleep)\n\n def __init__(self, extra_files=None, interval=1):\n self.extra_files = set(os.path.abspath(x)\n for x in extra_files or ())\n self.interval = interval\n\n def run(self):\n pass\n\n def restart_with_reloader(self):\n \"\"\"Spawn a new Python interpreter with the same arguments as this one,\n but running the reloader thread.\n \"\"\"\n while 1:\n _log('info', ' * Restarting with %s' % self.name)\n args = _get_args_for_reloading()\n new_environ = os.environ.copy()\n new_environ['WERKZEUG_RUN_MAIN'] = 'true'\n\n # a weird bug on windows. 
sometimes unicode strings end up in the\n # environment and subprocess.call does not like this, encode them\n # to latin1 and continue.\n if os.name == 'nt' and PY2:\n for key, value in iteritems(new_environ):\n if isinstance(value, text_type):\n new_environ[key] = value.encode('iso-8859-1')\n\n exit_code = subprocess.call(args, env=new_environ,\n close_fds=False)\n if exit_code != 3:\n return exit_code\n\n def trigger_reload(self, filename):\n self.log_reload(filename)\n sys.exit(3)\n\n def log_reload(self, filename):\n filename = os.path.abspath(filename)\n _log('info', ' * Detected change in %r, reloading' % filename)\n\n\nclass StatReloaderLoop(ReloaderLoop):\n name = 'stat'\n\n def run(self):\n mtimes = {}\n while 1:\n for filename in chain(_iter_module_files(),\n self.extra_files):\n try:\n mtime = os.stat(filename).st_mtime\n except OSError:\n continue\n\n old_time = mtimes.get(filename)\n if old_time is None:\n mtimes[filename] = mtime\n continue\n elif mtime > old_time:\n self.trigger_reload(filename)\n self._sleep(self.interval)\n\n\nclass WatchdogReloaderLoop(ReloaderLoop):\n\n def __init__(self, *args, **kwargs):\n ReloaderLoop.__init__(self, *args, **kwargs)\n from watchdog.observers import Observer\n from watchdog.events import FileSystemEventHandler\n self.observable_paths = set()\n\n def _check_modification(filename):\n if filename in self.extra_files:\n self.trigger_reload(filename)\n dirname = os.path.dirname(filename)\n if dirname.startswith(tuple(self.observable_paths)):\n if filename.endswith(('.pyc', '.pyo', '.py')):\n self.trigger_reload(filename)\n\n class _CustomHandler(FileSystemEventHandler):\n\n def on_created(self, event):\n _check_modification(event.src_path)\n\n def on_modified(self, event):\n _check_modification(event.src_path)\n\n def on_moved(self, event):\n _check_modification(event.src_path)\n _check_modification(event.dest_path)\n\n def on_deleted(self, event):\n _check_modification(event.src_path)\n\n reloader_name = Observer.__name__.lower()\n if reloader_name.endswith('observer'):\n reloader_name = reloader_name[:-8]\n reloader_name += ' reloader'\n\n self.name = reloader_name\n\n self.observer_class = Observer\n self.event_handler = _CustomHandler()\n self.should_reload = False\n\n def trigger_reload(self, filename):\n # This is called inside an event handler, which means throwing\n # SystemExit has no effect.\n # https://github.com/gorakhargosh/watchdog/issues/294\n self.should_reload = True\n self.log_reload(filename)\n\n def run(self):\n watches = {}\n observer = self.observer_class()\n observer.start()\n\n try:\n while not self.should_reload:\n to_delete = set(watches)\n paths = _find_observable_paths(self.extra_files)\n for path in paths:\n if path not in watches:\n try:\n watches[path] = observer.schedule(\n self.event_handler, path, recursive=True)\n except OSError:\n # Clear this path from list of watches We don't want\n # the same error message showing again in the next\n # iteration.\n watches[path] = None\n to_delete.discard(path)\n for path in to_delete:\n watch = watches.pop(path, None)\n if watch is not None:\n observer.unschedule(watch)\n self.observable_paths = paths\n self._sleep(self.interval)\n finally:\n observer.stop()\n observer.join()\n\n sys.exit(3)\n\n\nreloader_loops = {\n 'stat': StatReloaderLoop,\n 'watchdog': WatchdogReloaderLoop,\n}\n\ntry:\n __import__('watchdog.observers')\nexcept ImportError:\n reloader_loops['auto'] = reloader_loops['stat']\nelse:\n reloader_loops['auto'] = reloader_loops['watchdog']\n\n\ndef 
run_with_reloader(main_func, extra_files=None, interval=1,\n reloader_type='auto'):\n \"\"\"Run the given function in an independent python interpreter.\"\"\"\n import signal\n reloader = reloader_loops[reloader_type](extra_files, interval)\n signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))\n try:\n if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':\n t = threading.Thread(target=main_func, args=())\n t.setDaemon(True)\n t.start()\n reloader.run()\n else:\n sys.exit(reloader.restart_with_reloader())\n except KeyboardInterrupt:\n pass\n", "path": "werkzeug/_reloader.py"}]}
| 3,289 | 324 |
gh_patches_debug_2156
|
rasdani/github-patches
|
git_diff
|
spacetelescope__jwql-569
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Write tests for bokeh templating software
With the merge of #459, bokeh templating will be implemented for `jwql`. We should address the test coverage for this software.
</issue>
<code>
[start of jwql/bokeh_templating/example/main.py]
1 """
2 This is a minimal example demonstrating how to create a Bokeh app using
3 the ``bokeh-templating`` package and the associated YAML template files.
4
5 Author
6 -------
7
8 - Graham Kanarek
9
10 Dependencies
11 ------------
12
13 The user must have PyYAML, Bokeh, and the ``bokeh-templating``
14 packages installed.
15 """
16
17 import os
18 import numpy as np
19
20 from jwql.bokeh_templating import BokehTemplate
21
22 file_dir = os.path.dirname(os.path.realpath(__file__))
23
24
25 class TestBokehApp(BokehTemplate):
26 """This is a minimal ``BokehTemplate`` app."""
27
28 def pre_init(self):
29 """Before creating the Bokeh interface (by parsing the interface
30 file), we must initialize our ``a`` and ``b`` variables, and set
31 the path to the interface file.
32 """
33
34 self.a, self.b = 4, 2
35
36 self.format_string = None
37 self.interface_file = os.path.join(file_dir, "example_interface.yaml")
38
39 # No post-initialization tasks are required.
40 post_init = None
41
42 @property
43 def x(self):
44 """The x-value of the Lissajous curves."""
45 return 4. * np.sin(self.a * np.linspace(0, 2 * np.pi, 500))
46
47 @property
48 def y(self):
49 """The y-value of the Lissajous curves."""
50 return 3. * np.sin(self.b * np.linspace(0, 2 * np.pi, 500))
51
52 def controller(self, attr, old, new):
53 """This is the controller function which is used to update the
54 curves when the sliders are adjusted. Note the use of the
55 ``self.refs`` dictionary for accessing the Bokeh object
56 attributes."""
57 self.a = self.refs["a_slider"].value
58 self.b = self.refs["b_slider"].value
59
60 self.refs["figure_source"].data = {'x': self.x, 'y': self.y}
61
62
63 TestBokehApp()
64
[end of jwql/bokeh_templating/example/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jwql/bokeh_templating/example/main.py b/jwql/bokeh_templating/example/main.py
--- a/jwql/bokeh_templating/example/main.py
+++ b/jwql/bokeh_templating/example/main.py
@@ -24,6 +24,8 @@
class TestBokehApp(BokehTemplate):
"""This is a minimal ``BokehTemplate`` app."""
+
+ _embed = True
def pre_init(self):
"""Before creating the Bokeh interface (by parsing the interface
|
{"golden_diff": "diff --git a/jwql/bokeh_templating/example/main.py b/jwql/bokeh_templating/example/main.py\n--- a/jwql/bokeh_templating/example/main.py\n+++ b/jwql/bokeh_templating/example/main.py\n@@ -24,6 +24,8 @@\n \n class TestBokehApp(BokehTemplate):\n \"\"\"This is a minimal ``BokehTemplate`` app.\"\"\"\n+ \n+ _embed = True\n \n def pre_init(self):\n \"\"\"Before creating the Bokeh interface (by parsing the interface\n", "issue": "Write tests for bokeh templating software\nWith the merge of #459, bokeh templating will be implemented for `jwql`. We should address the test coverage for this software. \n", "before_files": [{"content": "\"\"\"\nThis is a minimal example demonstrating how to create a Bokeh app using\nthe ``bokeh-templating`` package and the associated YAML template files.\n\nAuthor\n-------\n\n - Graham Kanarek\n\nDependencies\n------------\n\n The user must have PyYAML, Bokeh, and the ``bokeh-templating``\n packages installed.\n\"\"\"\n\nimport os\nimport numpy as np\n\nfrom jwql.bokeh_templating import BokehTemplate\n\nfile_dir = os.path.dirname(os.path.realpath(__file__))\n\n\nclass TestBokehApp(BokehTemplate):\n \"\"\"This is a minimal ``BokehTemplate`` app.\"\"\"\n\n def pre_init(self):\n \"\"\"Before creating the Bokeh interface (by parsing the interface\n file), we must initialize our ``a`` and ``b`` variables, and set\n the path to the interface file.\n \"\"\"\n\n self.a, self.b = 4, 2\n\n self.format_string = None\n self.interface_file = os.path.join(file_dir, \"example_interface.yaml\")\n\n # No post-initialization tasks are required.\n post_init = None\n\n @property\n def x(self):\n \"\"\"The x-value of the Lissajous curves.\"\"\"\n return 4. * np.sin(self.a * np.linspace(0, 2 * np.pi, 500))\n\n @property\n def y(self):\n \"\"\"The y-value of the Lissajous curves.\"\"\"\n return 3. * np.sin(self.b * np.linspace(0, 2 * np.pi, 500))\n\n def controller(self, attr, old, new):\n \"\"\"This is the controller function which is used to update the\n curves when the sliders are adjusted. Note the use of the\n ``self.refs`` dictionary for accessing the Bokeh object\n attributes.\"\"\"\n self.a = self.refs[\"a_slider\"].value\n self.b = self.refs[\"b_slider\"].value\n\n self.refs[\"figure_source\"].data = {'x': self.x, 'y': self.y}\n\n\nTestBokehApp()\n", "path": "jwql/bokeh_templating/example/main.py"}]}
| 1,163 | 125 |