problem_id (string, 18-22 chars) | source (1 unique value) | task_type (1 unique value) | in_source_id (string, 13-58 chars) | prompt (string, 1.71k-18.9k chars) | golden_diff (string, 145-5.13k chars) | verification_info (string, 465-23.6k chars) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_9744 | rasdani/github-patches | git_diff | ansible__ansible-modules-core-2328 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
glance images uploaded via os_image with is_public: true are not public
I have the following in a playbook:
```
- name: uploading centos atomic image to glance
os_image:
name: centos-7-atomic
is_public: yes
filename: images/centos-7-atomic.qcow2
```
The resulting image is not public:
```
+------------------------------+------------------+
| Property | Value |
+------------------------------+------------------+
| name | centos-7-atomic |
| visibility | private |
+------------------------------+------------------+
```
Whereas if I upload an image from the command line:
```
glance image-create --file images/centos-7-atomic.qcow2 \
--name larstest --visibility public \
--disk-format qcow2 --container-format bare
```
The resulting image is publicly visible:
```
+------------------+-------------+
| Property | Value |
+------------------+-------------+
| name | larstest |
| visibility | public |
+------------------+-------------+
```
</issue>
<code>
[start of cloud/openstack/os_image.py]
1 #!/usr/bin/python
2
3 # Copyright (c) 2015 Hewlett-Packard Development Company, L.P.
4 # Copyright (c) 2013, Benno Joy <[email protected]>
5 #
6 # This module is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # This software is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with this software. If not, see <http://www.gnu.org/licenses/>.
18
19 #TODO(mordred): we need to support "location"(v1) and "locations"(v2)
20 try:
21 import shade
22 HAS_SHADE = True
23 except ImportError:
24 HAS_SHADE = False
25
26
27 DOCUMENTATION = '''
28 ---
29 module: os_image
30 short_description: Add/Delete images from OpenStack Cloud
31 extends_documentation_fragment: openstack
32 version_added: "2.0"
33 author: "Monty Taylor (@emonty)"
34 description:
35 - Add or Remove images from the OpenStack Image Repository
36 options:
37 name:
38 description:
39 - Name that has to be given to the image
40 required: true
41 default: None
42 disk_format:
43 description:
44 - The format of the disk that is getting uploaded
45 required: false
46 default: qcow2
47 container_format:
48 description:
49 - The format of the container
50 required: false
51 default: bare
52 owner:
53 description:
54 - The owner of the image
55 required: false
56 default: None
57 min_disk:
58 description:
59 - The minimum disk space required to deploy this image
60 required: false
61 default: None
62 min_ram:
63 description:
64 - The minimum ram required to deploy this image
65 required: false
66 default: None
67 is_public:
68 description:
69 - Whether the image can be accessed publicly. Note that publicizing an image requires admin role by default.
70 required: false
71 default: 'yes'
72 filename:
73 description:
74 - The path to the file which has to be uploaded
75 required: false
76 default: None
77 ramdisk:
78 description:
79 - The name of an existing ramdisk image that will be associated with this image
80 required: false
81 default: None
82 kernel:
83 description:
84 - The name of an existing kernel image that will be associated with this image
85 required: false
86 default: None
87 properties:
88 description:
89 - Additional properties to be associated with this image
90 required: false
91 default: {}
92 state:
93 description:
94 - Should the resource be present or absent.
95 choices: [present, absent]
96 default: present
97 requirements: ["shade"]
98 '''
99
100 EXAMPLES = '''
101 # Upload an image from a local file named cirros-0.3.0-x86_64-disk.img
102 - os_image:
103 auth:
104 auth_url: http://localhost/auth/v2.0
105 username: admin
106 password: passme
107 project_name: admin
108 name: cirros
109 container_format: bare
110 disk_format: qcow2
111 state: present
112 filename: cirros-0.3.0-x86_64-disk.img
113 kernel: cirros-vmlinuz
114 ramdisk: cirros-initrd
115 properties:
116 cpu_arch: x86_64
117 distro: ubuntu
118 '''
119
120
121 def main():
122
123 argument_spec = openstack_full_argument_spec(
124 name = dict(required=True),
125 disk_format = dict(default='qcow2', choices=['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso']),
126 container_format = dict(default='bare', choices=['ami', 'aki', 'ari', 'bare', 'ovf', 'ova']),
127 owner = dict(default=None),
128 min_disk = dict(default=None),
129 min_ram = dict(default=None),
130 is_public = dict(default=False),
131 filename = dict(default=None),
132 ramdisk = dict(default=None),
133 kernel = dict(default=None),
134 properties = dict(default={}),
135 state = dict(default='present', choices=['absent', 'present']),
136 )
137 module_kwargs = openstack_module_kwargs()
138 module = AnsibleModule(argument_spec, **module_kwargs)
139
140 if not HAS_SHADE:
141 module.fail_json(msg='shade is required for this module')
142
143 try:
144 cloud = shade.openstack_cloud(**module.params)
145
146 changed = False
147 image = cloud.get_image(name_or_id=module.params['name'])
148
149 if module.params['state'] == 'present':
150 if not image:
151 image = cloud.create_image(
152 name=module.params['name'],
153 filename=module.params['filename'],
154 disk_format=module.params['disk_format'],
155 container_format=module.params['container_format'],
156 wait=module.params['wait'],
157 timeout=module.params['timeout']
158 )
159 changed = True
160 if not module.params['wait']:
161 module.exit_json(changed=changed, image=image, id=image.id)
162
163 cloud.update_image_properties(
164 image=image,
165 kernel=module.params['kernel'],
166 ramdisk=module.params['ramdisk'],
167 **module.params['properties'])
168 image = cloud.get_image(name_or_id=image.id)
169 module.exit_json(changed=changed, image=image, id=image.id)
170
171 elif module.params['state'] == 'absent':
172 if not image:
173 changed = False
174 else:
175 cloud.delete_image(
176 name_or_id=module.params['name'],
177 wait=module.params['wait'],
178 timeout=module.params['timeout'])
179 changed = True
180 module.exit_json(changed=changed)
181
182 except shade.OpenStackCloudException as e:
183 module.fail_json(msg=e.message, extra_data=e.extra_data)
184
185 # this is magic, see lib/ansible/module_common.py
186 from ansible.module_utils.basic import *
187 from ansible.module_utils.openstack import *
188 main()
189
[end of cloud/openstack/os_image.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cloud/openstack/os_image.py b/cloud/openstack/os_image.py
--- a/cloud/openstack/os_image.py
+++ b/cloud/openstack/os_image.py
@@ -154,7 +154,8 @@
disk_format=module.params['disk_format'],
container_format=module.params['container_format'],
wait=module.params['wait'],
- timeout=module.params['timeout']
+ timeout=module.params['timeout'],
+ is_public=module.params['is_public'],
)
changed = True
if not module.params['wait']:
| {"golden_diff": "diff --git a/cloud/openstack/os_image.py b/cloud/openstack/os_image.py\n--- a/cloud/openstack/os_image.py\n+++ b/cloud/openstack/os_image.py\n@@ -154,7 +154,8 @@\n disk_format=module.params['disk_format'],\n container_format=module.params['container_format'],\n wait=module.params['wait'],\n- timeout=module.params['timeout']\n+ timeout=module.params['timeout'],\n+ is_public=module.params['is_public'],\n )\n changed = True\n if not module.params['wait']:\n", "issue": "glance images uploaded via os_image with is_public: true are not public\nI have the following in a playbook:\n\n```\n- name: uploading centos atomic image to glance\n os_image:\n name: centos-7-atomic\n is_public: yes\n filename: images/centos-7-atomic.qcow2\n```\n\nThe resulting image is not public:\n\n```\n+------------------------------+------------------+\n| Property | Value |\n+------------------------------+------------------+\n| name | centos-7-atomic |\n| visibility | private |\n+------------------------------+------------------+\n```\n\nWhereas if I upload an image from the command line:\n\n```\nglance image-create --file images/centos-7-atomic.qcow2 \\\n --name larstest --visibility public \\\n --disk-format qcow2 --container-format bare\n```\n\nThe resulting image is publicly visible:\n\n```\n+------------------+-------------+\n| Property | Value |\n+------------------+-------------+\n| name | larstest |\n| visibility | public |\n+------------------+-------------+\n```\n\n", "before_files": [{"content": "#!/usr/bin/python\n\n# Copyright (c) 2015 Hewlett-Packard Development Company, L.P.\n# Copyright (c) 2013, Benno Joy <[email protected]>\n#\n# This module is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This software is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this software. If not, see <http://www.gnu.org/licenses/>.\n\n#TODO(mordred): we need to support \"location\"(v1) and \"locations\"(v2)\ntry:\n import shade\n HAS_SHADE = True\nexcept ImportError:\n HAS_SHADE = False\n\n\nDOCUMENTATION = '''\n---\nmodule: os_image\nshort_description: Add/Delete images from OpenStack Cloud\nextends_documentation_fragment: openstack\nversion_added: \"2.0\"\nauthor: \"Monty Taylor (@emonty)\"\ndescription:\n - Add or Remove images from the OpenStack Image Repository\noptions:\n name:\n description:\n - Name that has to be given to the image\n required: true\n default: None\n disk_format:\n description:\n - The format of the disk that is getting uploaded\n required: false\n default: qcow2\n container_format:\n description:\n - The format of the container\n required: false\n default: bare\n owner:\n description:\n - The owner of the image\n required: false\n default: None\n min_disk:\n description:\n - The minimum disk space required to deploy this image\n required: false\n default: None\n min_ram:\n description:\n - The minimum ram required to deploy this image\n required: false\n default: None\n is_public:\n description:\n - Whether the image can be accessed publicly. 
Note that publicizing an image requires admin role by default.\n required: false\n default: 'yes'\n filename:\n description:\n - The path to the file which has to be uploaded\n required: false\n default: None\n ramdisk:\n description:\n - The name of an existing ramdisk image that will be associated with this image\n required: false\n default: None\n kernel:\n description:\n - The name of an existing kernel image that will be associated with this image\n required: false\n default: None\n properties:\n description:\n - Additional properties to be associated with this image\n required: false\n default: {}\n state:\n description:\n - Should the resource be present or absent.\n choices: [present, absent]\n default: present\nrequirements: [\"shade\"]\n'''\n\nEXAMPLES = '''\n# Upload an image from a local file named cirros-0.3.0-x86_64-disk.img\n- os_image:\n auth:\n auth_url: http://localhost/auth/v2.0\n username: admin\n password: passme\n project_name: admin\n name: cirros\n container_format: bare\n disk_format: qcow2\n state: present\n filename: cirros-0.3.0-x86_64-disk.img\n kernel: cirros-vmlinuz\n ramdisk: cirros-initrd\n properties:\n cpu_arch: x86_64\n distro: ubuntu\n'''\n\n\ndef main():\n\n argument_spec = openstack_full_argument_spec(\n name = dict(required=True),\n disk_format = dict(default='qcow2', choices=['ami', 'ari', 'aki', 'vhd', 'vmdk', 'raw', 'qcow2', 'vdi', 'iso']),\n container_format = dict(default='bare', choices=['ami', 'aki', 'ari', 'bare', 'ovf', 'ova']),\n owner = dict(default=None),\n min_disk = dict(default=None),\n min_ram = dict(default=None),\n is_public = dict(default=False),\n filename = dict(default=None),\n ramdisk = dict(default=None),\n kernel = dict(default=None),\n properties = dict(default={}),\n state = dict(default='present', choices=['absent', 'present']),\n )\n module_kwargs = openstack_module_kwargs()\n module = AnsibleModule(argument_spec, **module_kwargs)\n\n if not HAS_SHADE:\n module.fail_json(msg='shade is required for this module')\n\n try:\n cloud = shade.openstack_cloud(**module.params)\n\n changed = False\n image = cloud.get_image(name_or_id=module.params['name'])\n\n if module.params['state'] == 'present':\n if not image:\n image = cloud.create_image(\n name=module.params['name'],\n filename=module.params['filename'],\n disk_format=module.params['disk_format'],\n container_format=module.params['container_format'],\n wait=module.params['wait'],\n timeout=module.params['timeout']\n )\n changed = True\n if not module.params['wait']:\n module.exit_json(changed=changed, image=image, id=image.id)\n\n cloud.update_image_properties(\n image=image,\n kernel=module.params['kernel'],\n ramdisk=module.params['ramdisk'],\n **module.params['properties'])\n image = cloud.get_image(name_or_id=image.id)\n module.exit_json(changed=changed, image=image, id=image.id)\n\n elif module.params['state'] == 'absent':\n if not image:\n changed = False\n else:\n cloud.delete_image(\n name_or_id=module.params['name'],\n wait=module.params['wait'],\n timeout=module.params['timeout'])\n changed = True\n module.exit_json(changed=changed)\n\n except shade.OpenStackCloudException as e:\n module.fail_json(msg=e.message, extra_data=e.extra_data)\n\n# this is magic, see lib/ansible/module_common.py\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.openstack import *\nmain()\n", "path": "cloud/openstack/os_image.py"}]} | 2,614 | 122 |
gh_patches_debug_36733 | rasdani/github-patches | git_diff | onnx__sklearn-onnx-150 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Logistic Regression probability scores wrong in case of string labels and multi_class set to multinomial.
data = load_digits()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
model = LogisticRegression(random_state=42, multi_class='multinomial', solver='sag').fit(X_train, y_train)
onnx_model = convert_sklearn(model, 'lr', [('input', FloatTensorType(X_test.shape))])
save_model(onnx_model, 'lr.onnx')
sess = InferenceSession('lr.onnx')
res = sess.run(None, input_feed={'input': X_test.astype(np.float32)})
np.mean(np.isclose(model.predict_proba(X_test), list(map(lambda x:
list(map(lambda y: x[y], x)), res[1]))))
>> 0.010901001112347052
Logistic Regression probability scores wrong in case of string labels and multi_class set to multinomial.
data = load_digits()
X = data.data
y = data.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
model = LogisticRegression(random_state=42, multi_class='multinomial', solver='sag').fit(X_train, y_train)
onnx_model = convert_sklearn(model, 'lr', [('input', FloatTensorType(X_test.shape))])
save_model(onnx_model, 'lr.onnx')
sess = InferenceSession('lr.onnx')
res = sess.run(None, input_feed={'input': X_test.astype(np.float32)})
np.mean(np.isclose(model.predict_proba(X_test), list(map(lambda x:
list(map(lambda y: x[y], x)), res[1]))))
>> 0.010901001112347052
</issue>
<code>
[start of skl2onnx/operator_converters/linear_classifier.py]
1 # -------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License. See License.txt in the project root for
4 # license information.
5 # --------------------------------------------------------------------------
6
7 import numbers
8 import numpy as np
9 import six
10 from ..common._registration import register_converter
11 from ..proto import onnx_proto
12
13
14 def convert_sklearn_linear_classifier(scope, operator, container):
15 op = operator.raw_operator
16 classes = op.classes_
17 number_of_classes = len(classes)
18 coefficients = op.coef_.flatten().astype(float).tolist()
19 if isinstance(op.intercept_, (float, np.float32)) and op.intercept_ == 0:
20 # fit_intercept = False
21 intercepts = ([0.0] * number_of_classes if number_of_classes != 2 else
22 [0.0])
23 else:
24 intercepts = op.intercept_.tolist()
25 if number_of_classes == 2:
26 coefficients = list(map(lambda x: -1 * x, coefficients)) + coefficients
27 intercepts = list(map(lambda x: -1 * x, intercepts)) + intercepts
28
29 multi_class = 0
30 if hasattr(op, 'multi_class'):
31 if op.multi_class == 'ovr':
32 multi_class = 1
33 else:
34 multi_class = 2
35
36 classifier_type = 'LinearClassifier'
37 classifier_attrs = {
38 'name': scope.get_unique_operator_name(classifier_type)
39 }
40
41 # nb = NodeBuilder(context, 'LinearClassifier', op_domain='ai.onnx.ml')
42 classifier_attrs['coefficients'] = coefficients
43 classifier_attrs['intercepts'] = intercepts
44 classifier_attrs['multi_class'] = 1 if multi_class == 2 else 0
45 if op.__class__.__name__ == 'LinearSVC':
46 classifier_attrs['post_transform'] = 'NONE'
47 elif op.__class__.__name__ == 'LogisticRegression':
48 if multi_class == 2:
49 if number_of_classes == 2:
50 """
51 See method _predict_proba_lr.
52 When number if classes is two, the function
53 is not SOFTMAX.
54 https://github.com/scikit-learn/scikit-learn/blob/bac89c253b35a8f1a3827389fbee0f5bebcbc985/sklearn/linear_model/base.py#L300
55 """ # noqa
56 classifier_attrs['post_transform'] = 'LOGISTIC'
57 else:
58 classifier_attrs['post_transform'] = 'LOGISTIC'
59 else:
60 classifier_attrs['post_transform'] = 'LOGISTIC'
61 elif op.__class__.__name__ in ('LinearSVC'):
62 classifier_attrs['post_transform'] = 'NONE'
63 else:
64 if multi_class == 2:
65 classifier_attrs['post_transform'] = 'SOFTMAX'
66 else:
67 classifier_attrs['post_transform'] = 'LOGISTIC'
68
69 if all(isinstance(i, (six.string_types, six.text_type)) for i in classes):
70 class_labels = [str(i) for i in classes]
71 classifier_attrs['classlabels_strings'] = class_labels
72 elif all(isinstance(i, (numbers.Real, bool, np.bool_)) for i in classes):
73 class_labels = [int(i) for i in classes]
74 classifier_attrs['classlabels_ints'] = class_labels
75 else:
76 raise RuntimeError('Label vector must be a string or a integer tensor')
77
78 label_name = operator.outputs[0].full_name
79
80 if op.__class__.__name__ == 'LinearSVC' and op.classes_.shape[0] <= 2:
81 raw_scores_tensor_name = scope.get_unique_variable_name(
82 'raw_scores_tensor')
83 positive_class_index_name = scope.get_unique_variable_name(
84 'positive_class_index')
85
86 container.add_initializer(positive_class_index_name,
87 onnx_proto.TensorProto.INT64, [], [1])
88
89 container.add_node(classifier_type, operator.inputs[0].full_name,
90 [label_name, raw_scores_tensor_name],
91 op_domain='ai.onnx.ml', **classifier_attrs)
92 container.add_node(
93 'ArrayFeatureExtractor',
94 [raw_scores_tensor_name, positive_class_index_name],
95 operator.outputs[1].full_name, op_domain='ai.onnx.ml',
96 name=scope.get_unique_operator_name('ArrayFeatureExtractor'))
97 else:
98 # Make sure the probability sum is 1 over all classes
99 if multi_class > 0 and op.__class__.__name__ != 'LinearSVC':
100 probability_tensor_name = scope.get_unique_variable_name(
101 'probability_tensor')
102 container.add_node(classifier_type, operator.inputs[0].full_name,
103 [label_name, probability_tensor_name],
104 op_domain='ai.onnx.ml', **classifier_attrs)
105 normalizer_type = 'Normalizer'
106 normalizer_attrs = {
107 'name': scope.get_unique_operator_name(normalizer_type),
108 'norm': 'L1'
109 }
110 container.add_node(normalizer_type, probability_tensor_name,
111 operator.outputs[1].full_name,
112 op_domain='ai.onnx.ml', **normalizer_attrs)
113 else:
114 container.add_node(classifier_type, operator.inputs[0].full_name,
115 [label_name, operator.outputs[1].full_name],
116 op_domain='ai.onnx.ml', **classifier_attrs)
117
118
119 register_converter('SklearnLinearClassifier',
120 convert_sklearn_linear_classifier)
121 register_converter('SklearnLinearSVC', convert_sklearn_linear_classifier)
122
[end of skl2onnx/operator_converters/linear_classifier.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skl2onnx/operator_converters/linear_classifier.py b/skl2onnx/operator_converters/linear_classifier.py
--- a/skl2onnx/operator_converters/linear_classifier.py
+++ b/skl2onnx/operator_converters/linear_classifier.py
@@ -16,12 +16,14 @@
classes = op.classes_
number_of_classes = len(classes)
coefficients = op.coef_.flatten().astype(float).tolist()
+
if isinstance(op.intercept_, (float, np.float32)) and op.intercept_ == 0:
# fit_intercept = False
intercepts = ([0.0] * number_of_classes if number_of_classes != 2 else
[0.0])
else:
intercepts = op.intercept_.tolist()
+
if number_of_classes == 2:
coefficients = list(map(lambda x: -1 * x, coefficients)) + coefficients
intercepts = list(map(lambda x: -1 * x, intercepts)) + intercepts
@@ -45,26 +47,14 @@
if op.__class__.__name__ == 'LinearSVC':
classifier_attrs['post_transform'] = 'NONE'
elif op.__class__.__name__ == 'LogisticRegression':
- if multi_class == 2:
- if number_of_classes == 2:
- """
- See method _predict_proba_lr.
- When number if classes is two, the function
- is not SOFTMAX.
- https://github.com/scikit-learn/scikit-learn/blob/bac89c253b35a8f1a3827389fbee0f5bebcbc985/sklearn/linear_model/base.py#L300
- """ # noqa
- classifier_attrs['post_transform'] = 'LOGISTIC'
- else:
- classifier_attrs['post_transform'] = 'LOGISTIC'
- else:
- classifier_attrs['post_transform'] = 'LOGISTIC'
- elif op.__class__.__name__ in ('LinearSVC'):
- classifier_attrs['post_transform'] = 'NONE'
+ ovr = (op.multi_class in ["ovr", "warn"] or
+ (op.multi_class == 'auto' and (op.classes_.size <= 2 or
+ op.solver == 'liblinear')))
+ classifier_attrs['post_transform'] = (
+ 'LOGISTIC' if ovr else 'SOFTMAX')
else:
- if multi_class == 2:
- classifier_attrs['post_transform'] = 'SOFTMAX'
- else:
- classifier_attrs['post_transform'] = 'LOGISTIC'
+ classifier_attrs['post_transform'] = (
+ 'LOGISTIC' if multi_class > 2 else 'SOFTMAX')
if all(isinstance(i, (six.string_types, six.text_type)) for i in classes):
class_labels = [str(i) for i in classes]
| {"golden_diff": "diff --git a/skl2onnx/operator_converters/linear_classifier.py b/skl2onnx/operator_converters/linear_classifier.py\n--- a/skl2onnx/operator_converters/linear_classifier.py\n+++ b/skl2onnx/operator_converters/linear_classifier.py\n@@ -16,12 +16,14 @@\n classes = op.classes_\n number_of_classes = len(classes)\n coefficients = op.coef_.flatten().astype(float).tolist()\n+\n if isinstance(op.intercept_, (float, np.float32)) and op.intercept_ == 0:\n # fit_intercept = False\n intercepts = ([0.0] * number_of_classes if number_of_classes != 2 else\n [0.0])\n else:\n intercepts = op.intercept_.tolist()\n+\n if number_of_classes == 2:\n coefficients = list(map(lambda x: -1 * x, coefficients)) + coefficients\n intercepts = list(map(lambda x: -1 * x, intercepts)) + intercepts\n@@ -45,26 +47,14 @@\n if op.__class__.__name__ == 'LinearSVC':\n classifier_attrs['post_transform'] = 'NONE'\n elif op.__class__.__name__ == 'LogisticRegression':\n- if multi_class == 2:\n- if number_of_classes == 2:\n- \"\"\"\n- See method _predict_proba_lr.\n- When number if classes is two, the function\n- is not SOFTMAX.\n- https://github.com/scikit-learn/scikit-learn/blob/bac89c253b35a8f1a3827389fbee0f5bebcbc985/sklearn/linear_model/base.py#L300\n- \"\"\" # noqa\n- classifier_attrs['post_transform'] = 'LOGISTIC'\n- else:\n- classifier_attrs['post_transform'] = 'LOGISTIC'\n- else:\n- classifier_attrs['post_transform'] = 'LOGISTIC'\n- elif op.__class__.__name__ in ('LinearSVC'):\n- classifier_attrs['post_transform'] = 'NONE'\n+ ovr = (op.multi_class in [\"ovr\", \"warn\"] or\n+ (op.multi_class == 'auto' and (op.classes_.size <= 2 or\n+ op.solver == 'liblinear')))\n+ classifier_attrs['post_transform'] = (\n+ 'LOGISTIC' if ovr else 'SOFTMAX')\n else:\n- if multi_class == 2:\n- classifier_attrs['post_transform'] = 'SOFTMAX'\n- else:\n- classifier_attrs['post_transform'] = 'LOGISTIC'\n+ classifier_attrs['post_transform'] = (\n+ 'LOGISTIC' if multi_class > 2 else 'SOFTMAX')\n \n if all(isinstance(i, (six.string_types, six.text_type)) for i in classes):\n class_labels = [str(i) for i in classes]\n", "issue": "Logistic Regression probability scores wrong in case of string labels and multi_class set to multinomial.\ndata = load_digits()\r\nX = data.data\r\ny = data.target\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)\r\nmodel = LogisticRegression(random_state=42, multi_class='multinomial', solver='sag').fit(X_train, y_train)\r\n\r\nonnx_model = convert_sklearn(model, 'lr', [('input', FloatTensorType(X_test.shape))])\r\nsave_model(onnx_model, 'lr.onnx')\r\nsess = InferenceSession('lr.onnx')\r\nres = sess.run(None, input_feed={'input': X_test.astype(np.float32)})\r\n\r\nnp.mean(np.isclose(model.predict_proba(X_test), list(map(lambda x:\r\n list(map(lambda y: x[y], x)), res[1]))))\r\n\r\n>> 0.010901001112347052\nLogistic Regression probability scores wrong in case of string labels and multi_class set to multinomial.\ndata = load_digits()\r\nX = data.data\r\ny = data.target\r\n\r\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)\r\nmodel = LogisticRegression(random_state=42, multi_class='multinomial', solver='sag').fit(X_train, y_train)\r\n\r\nonnx_model = convert_sklearn(model, 'lr', [('input', FloatTensorType(X_test.shape))])\r\nsave_model(onnx_model, 'lr.onnx')\r\nsess = InferenceSession('lr.onnx')\r\nres = sess.run(None, input_feed={'input': X_test.astype(np.float32)})\r\n\r\nnp.mean(np.isclose(model.predict_proba(X_test), 
list(map(lambda x:\r\n list(map(lambda y: x[y], x)), res[1]))))\r\n\r\n>> 0.010901001112347052\n", "before_files": [{"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for\n# license information.\n# --------------------------------------------------------------------------\n\nimport numbers\nimport numpy as np\nimport six\nfrom ..common._registration import register_converter\nfrom ..proto import onnx_proto\n\n\ndef convert_sklearn_linear_classifier(scope, operator, container):\n op = operator.raw_operator\n classes = op.classes_\n number_of_classes = len(classes)\n coefficients = op.coef_.flatten().astype(float).tolist()\n if isinstance(op.intercept_, (float, np.float32)) and op.intercept_ == 0:\n # fit_intercept = False\n intercepts = ([0.0] * number_of_classes if number_of_classes != 2 else\n [0.0])\n else:\n intercepts = op.intercept_.tolist()\n if number_of_classes == 2:\n coefficients = list(map(lambda x: -1 * x, coefficients)) + coefficients\n intercepts = list(map(lambda x: -1 * x, intercepts)) + intercepts\n\n multi_class = 0\n if hasattr(op, 'multi_class'):\n if op.multi_class == 'ovr':\n multi_class = 1\n else:\n multi_class = 2\n\n classifier_type = 'LinearClassifier'\n classifier_attrs = {\n 'name': scope.get_unique_operator_name(classifier_type)\n }\n\n # nb = NodeBuilder(context, 'LinearClassifier', op_domain='ai.onnx.ml')\n classifier_attrs['coefficients'] = coefficients\n classifier_attrs['intercepts'] = intercepts\n classifier_attrs['multi_class'] = 1 if multi_class == 2 else 0\n if op.__class__.__name__ == 'LinearSVC':\n classifier_attrs['post_transform'] = 'NONE'\n elif op.__class__.__name__ == 'LogisticRegression':\n if multi_class == 2:\n if number_of_classes == 2:\n \"\"\"\n See method _predict_proba_lr.\n When number if classes is two, the function\n is not SOFTMAX.\n https://github.com/scikit-learn/scikit-learn/blob/bac89c253b35a8f1a3827389fbee0f5bebcbc985/sklearn/linear_model/base.py#L300\n \"\"\" # noqa\n classifier_attrs['post_transform'] = 'LOGISTIC'\n else:\n classifier_attrs['post_transform'] = 'LOGISTIC'\n else:\n classifier_attrs['post_transform'] = 'LOGISTIC'\n elif op.__class__.__name__ in ('LinearSVC'):\n classifier_attrs['post_transform'] = 'NONE'\n else:\n if multi_class == 2:\n classifier_attrs['post_transform'] = 'SOFTMAX'\n else:\n classifier_attrs['post_transform'] = 'LOGISTIC'\n\n if all(isinstance(i, (six.string_types, six.text_type)) for i in classes):\n class_labels = [str(i) for i in classes]\n classifier_attrs['classlabels_strings'] = class_labels\n elif all(isinstance(i, (numbers.Real, bool, np.bool_)) for i in classes):\n class_labels = [int(i) for i in classes]\n classifier_attrs['classlabels_ints'] = class_labels\n else:\n raise RuntimeError('Label vector must be a string or a integer tensor')\n\n label_name = operator.outputs[0].full_name\n\n if op.__class__.__name__ == 'LinearSVC' and op.classes_.shape[0] <= 2:\n raw_scores_tensor_name = scope.get_unique_variable_name(\n 'raw_scores_tensor')\n positive_class_index_name = scope.get_unique_variable_name(\n 'positive_class_index')\n\n container.add_initializer(positive_class_index_name,\n onnx_proto.TensorProto.INT64, [], [1])\n\n container.add_node(classifier_type, operator.inputs[0].full_name,\n [label_name, raw_scores_tensor_name],\n op_domain='ai.onnx.ml', **classifier_attrs)\n container.add_node(\n 
'ArrayFeatureExtractor',\n [raw_scores_tensor_name, positive_class_index_name],\n operator.outputs[1].full_name, op_domain='ai.onnx.ml',\n name=scope.get_unique_operator_name('ArrayFeatureExtractor'))\n else:\n # Make sure the probability sum is 1 over all classes\n if multi_class > 0 and op.__class__.__name__ != 'LinearSVC':\n probability_tensor_name = scope.get_unique_variable_name(\n 'probability_tensor')\n container.add_node(classifier_type, operator.inputs[0].full_name,\n [label_name, probability_tensor_name],\n op_domain='ai.onnx.ml', **classifier_attrs)\n normalizer_type = 'Normalizer'\n normalizer_attrs = {\n 'name': scope.get_unique_operator_name(normalizer_type),\n 'norm': 'L1'\n }\n container.add_node(normalizer_type, probability_tensor_name,\n operator.outputs[1].full_name,\n op_domain='ai.onnx.ml', **normalizer_attrs)\n else:\n container.add_node(classifier_type, operator.inputs[0].full_name,\n [label_name, operator.outputs[1].full_name],\n op_domain='ai.onnx.ml', **classifier_attrs)\n\n\nregister_converter('SklearnLinearClassifier',\n convert_sklearn_linear_classifier)\nregister_converter('SklearnLinearSVC', convert_sklearn_linear_classifier)\n", "path": "skl2onnx/operator_converters/linear_classifier.py"}]} | 2,405 | 651 |
gh_patches_debug_24378 | rasdani/github-patches | git_diff | localstack__localstack-694 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ElasticSearch 6 Support
Is there a plan for ElasticSearch 6 support anytime soon? Since [AWS supports ES 6](https://aws.amazon.com/about-aws/whats-new/2017/12/elasticsearch-6-0-now-available-on-amazon-elasticsearch-service/) it would be nice to be able to effectively simulate it locally.
</issue>
<code>
[start of localstack/constants.py]
1 import os
2 import localstack_client.config
3
4 # LocalStack version
5 VERSION = '0.8.5'
6
7 # default AWS region
8 if 'DEFAULT_REGION' not in os.environ:
9 os.environ['DEFAULT_REGION'] = 'us-east-1'
10 DEFAULT_REGION = os.environ['DEFAULT_REGION']
11
12 # constant to represent the "local" region, i.e., local machine
13 REGION_LOCAL = 'local'
14
15 # dev environment
16 ENV_DEV = 'dev'
17
18 # backend service ports, for services that are behind a proxy (counting down from 4566)
19 DEFAULT_PORT_APIGATEWAY_BACKEND = 4566
20 DEFAULT_PORT_KINESIS_BACKEND = 4565
21 DEFAULT_PORT_DYNAMODB_BACKEND = 4564
22 DEFAULT_PORT_S3_BACKEND = 4563
23 DEFAULT_PORT_SNS_BACKEND = 4562
24 DEFAULT_PORT_SQS_BACKEND = 4561
25 DEFAULT_PORT_ELASTICSEARCH_BACKEND = 4560
26 DEFAULT_PORT_CLOUDFORMATION_BACKEND = 4559
27
28 DEFAULT_PORT_WEB_UI = 8080
29
30 LOCALHOST = 'localhost'
31
32 # version of the Maven dependency with Java utility code
33 LOCALSTACK_MAVEN_VERSION = '0.1.11'
34
35 # map of default service APIs and ports to be spun up (fetch map from localstack_client)
36 DEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()
37
38 # host to bind to when starting the services
39 BIND_HOST = '0.0.0.0'
40
41 # AWS user account ID used for tests
42 TEST_AWS_ACCOUNT_ID = '000000000000'
43 os.environ['TEST_AWS_ACCOUNT_ID'] = TEST_AWS_ACCOUNT_ID
44
45 # root code folder
46 LOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))
47
48 # virtualenv folder
49 LOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, '.venv')
50 if not os.path.isdir(LOCALSTACK_VENV_FOLDER):
51 # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/
52 LOCALSTACK_VENV_FOLDER = os.path.realpath(os.path.join(LOCALSTACK_ROOT_FOLDER, '..', '..', '..'))
53
54 # API Gateway path to indicate a user request sent to the gateway
55 PATH_USER_REQUEST = '_user_request_'
56
57 # name of LocalStack Docker image
58 DOCKER_IMAGE_NAME = 'localstack/localstack'
59
60 # environment variable name to tag local test runs
61 ENV_INTERNAL_TEST_RUN = 'LOCALSTACK_INTERNAL_TEST_RUN'
62
63 # content types
64 APPLICATION_AMZ_JSON_1_0 = 'application/x-amz-json-1.0'
65 APPLICATION_AMZ_JSON_1_1 = 'application/x-amz-json-1.1'
66 APPLICATION_JSON = 'application/json'
67
68 # Lambda defaults
69 LAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID
70
71 # installation constants
72 ELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'
73 DYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'
74 ELASTICMQ_JAR_URL = 'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'
75 STS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'
76
77 # API endpoint for analytics events
78 API_ENDPOINT = 'https://api.localstack.cloud/v1'
79
[end of localstack/constants.py]
[start of localstack/services/es/es_api.py]
1 import json
2 from flask import Flask, jsonify, request, make_response
3 from localstack.services import generic_proxy
4 from localstack.constants import TEST_AWS_ACCOUNT_ID, DEFAULT_REGION
5 from localstack.utils.common import to_str
6
7 APP_NAME = 'es_api'
8 API_PREFIX = '/2015-01-01'
9
10 ES_DOMAINS = {}
11
12 app = Flask(APP_NAME)
13
14
15 def error_response(error_type, code=400, message='Unknown error.'):
16 if not message:
17 if error_type == 'ResourceNotFoundException':
18 message = 'Resource not found.'
19 elif error_type == 'ResourceAlreadyExistsException':
20 message = 'Resource already exists.'
21 response = make_response(jsonify({'error': message}))
22 response.headers['x-amzn-errortype'] = error_type
23 return response, code
24
25
26 def get_domain_status(domain_name, deleted=False):
27 return {
28 'DomainStatus': {
29 'ARN': 'arn:aws:es:%s:%s:domain/%s' % (DEFAULT_REGION, TEST_AWS_ACCOUNT_ID, domain_name),
30 'Created': True,
31 'Deleted': deleted,
32 'DomainId': '%s/%s' % (TEST_AWS_ACCOUNT_ID, domain_name),
33 'DomainName': domain_name,
34 'ElasticsearchClusterConfig': {
35 'DedicatedMasterCount': 1,
36 'DedicatedMasterEnabled': True,
37 'DedicatedMasterType': 'm3.medium.elasticsearch',
38 'InstanceCount': 1,
39 'InstanceType': 'm3.medium.elasticsearch',
40 'ZoneAwarenessEnabled': True
41 },
42 'ElasticsearchVersion': '5.3',
43 'Endpoint': None,
44 'Processing': True
45 }
46 }
47
48
49 @app.route('%s/domain' % API_PREFIX, methods=['GET'])
50 def list_domain_names():
51 result = {
52 'DomainNames': [{'DomainName': name} for name in ES_DOMAINS.keys()]
53 }
54 return jsonify(result)
55
56
57 @app.route('%s/es/domain' % API_PREFIX, methods=['POST'])
58 def create_domain():
59 data = json.loads(to_str(request.data))
60 domain_name = data['DomainName']
61 if domain_name in ES_DOMAINS:
62 return error_response(error_type='ResourceAlreadyExistsException')
63 ES_DOMAINS[domain_name] = data
64 result = get_domain_status(domain_name)
65 return jsonify(result)
66
67
68 @app.route('%s/es/domain/<domain_name>' % API_PREFIX, methods=['GET'])
69 def describe_domain(domain_name):
70 if domain_name not in ES_DOMAINS:
71 return error_response(error_type='ResourceNotFoundException')
72 result = get_domain_status(domain_name)
73 return jsonify(result)
74
75
76 @app.route('%s/es/domain/<domain_name>' % API_PREFIX, methods=['DELETE'])
77 def delete_domain(domain_name):
78 if domain_name not in ES_DOMAINS:
79 return error_response(error_type='ResourceNotFoundException')
80 result = get_domain_status(domain_name, deleted=True)
81 ES_DOMAINS.pop(domain_name)
82 return jsonify(result)
83
84
85 def serve(port, quiet=True):
86 generic_proxy.serve_flask_app(app=app, port=port, quiet=quiet)
87
[end of localstack/services/es/es_api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/localstack/constants.py b/localstack/constants.py
--- a/localstack/constants.py
+++ b/localstack/constants.py
@@ -69,7 +69,7 @@
LAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID
# installation constants
-ELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'
+ELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.0.zip'
DYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'
ELASTICMQ_JAR_URL = 'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'
STS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'
diff --git a/localstack/services/es/es_api.py b/localstack/services/es/es_api.py
--- a/localstack/services/es/es_api.py
+++ b/localstack/services/es/es_api.py
@@ -39,7 +39,7 @@
'InstanceType': 'm3.medium.elasticsearch',
'ZoneAwarenessEnabled': True
},
- 'ElasticsearchVersion': '5.3',
+ 'ElasticsearchVersion': '6.2',
'Endpoint': None,
'Processing': True
}
| {"golden_diff": "diff --git a/localstack/constants.py b/localstack/constants.py\n--- a/localstack/constants.py\n+++ b/localstack/constants.py\n@@ -69,7 +69,7 @@\n LAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID\n \n # installation constants\n-ELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'\n+ELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.0.zip'\n DYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'\n ELASTICMQ_JAR_URL = 'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'\n STS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'\ndiff --git a/localstack/services/es/es_api.py b/localstack/services/es/es_api.py\n--- a/localstack/services/es/es_api.py\n+++ b/localstack/services/es/es_api.py\n@@ -39,7 +39,7 @@\n 'InstanceType': 'm3.medium.elasticsearch',\n 'ZoneAwarenessEnabled': True\n },\n- 'ElasticsearchVersion': '5.3',\n+ 'ElasticsearchVersion': '6.2',\n 'Endpoint': None,\n 'Processing': True\n }\n", "issue": "ElasticSearch 6 Support\nIs there a plan for ElasticSearch 6 support anytime soon? Since [AWS supports ES 6](https://aws.amazon.com/about-aws/whats-new/2017/12/elasticsearch-6-0-now-available-on-amazon-elasticsearch-service/) it would be nice to be able to effectively simulate it locally.\n", "before_files": [{"content": "import os\nimport localstack_client.config\n\n# LocalStack version\nVERSION = '0.8.5'\n\n# default AWS region\nif 'DEFAULT_REGION' not in os.environ:\n os.environ['DEFAULT_REGION'] = 'us-east-1'\nDEFAULT_REGION = os.environ['DEFAULT_REGION']\n\n# constant to represent the \"local\" region, i.e., local machine\nREGION_LOCAL = 'local'\n\n# dev environment\nENV_DEV = 'dev'\n\n# backend service ports, for services that are behind a proxy (counting down from 4566)\nDEFAULT_PORT_APIGATEWAY_BACKEND = 4566\nDEFAULT_PORT_KINESIS_BACKEND = 4565\nDEFAULT_PORT_DYNAMODB_BACKEND = 4564\nDEFAULT_PORT_S3_BACKEND = 4563\nDEFAULT_PORT_SNS_BACKEND = 4562\nDEFAULT_PORT_SQS_BACKEND = 4561\nDEFAULT_PORT_ELASTICSEARCH_BACKEND = 4560\nDEFAULT_PORT_CLOUDFORMATION_BACKEND = 4559\n\nDEFAULT_PORT_WEB_UI = 8080\n\nLOCALHOST = 'localhost'\n\n# version of the Maven dependency with Java utility code\nLOCALSTACK_MAVEN_VERSION = '0.1.11'\n\n# map of default service APIs and ports to be spun up (fetch map from localstack_client)\nDEFAULT_SERVICE_PORTS = localstack_client.config.get_service_ports()\n\n# host to bind to when starting the services\nBIND_HOST = '0.0.0.0'\n\n# AWS user account ID used for tests\nTEST_AWS_ACCOUNT_ID = '000000000000'\nos.environ['TEST_AWS_ACCOUNT_ID'] = TEST_AWS_ACCOUNT_ID\n\n# root code folder\nLOCALSTACK_ROOT_FOLDER = os.path.realpath(os.path.join(os.path.dirname(os.path.realpath(__file__)), '..'))\n\n# virtualenv folder\nLOCALSTACK_VENV_FOLDER = os.path.join(LOCALSTACK_ROOT_FOLDER, '.venv')\nif not os.path.isdir(LOCALSTACK_VENV_FOLDER):\n # assuming this package lives here: <python>/lib/pythonX.X/site-packages/localstack/\n LOCALSTACK_VENV_FOLDER = os.path.realpath(os.path.join(LOCALSTACK_ROOT_FOLDER, '..', '..', '..'))\n\n# API Gateway path to indicate a user request sent to the gateway\nPATH_USER_REQUEST = '_user_request_'\n\n# name of LocalStack Docker image\nDOCKER_IMAGE_NAME = 'localstack/localstack'\n\n# environment variable name to tag local test runs\nENV_INTERNAL_TEST_RUN = 
'LOCALSTACK_INTERNAL_TEST_RUN'\n\n# content types\nAPPLICATION_AMZ_JSON_1_0 = 'application/x-amz-json-1.0'\nAPPLICATION_AMZ_JSON_1_1 = 'application/x-amz-json-1.1'\nAPPLICATION_JSON = 'application/json'\n\n# Lambda defaults\nLAMBDA_TEST_ROLE = 'arn:aws:iam::%s:role/lambda-test-role' % TEST_AWS_ACCOUNT_ID\n\n# installation constants\nELASTICSEARCH_JAR_URL = 'https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.zip'\nDYNAMODB_JAR_URL = 'https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.zip'\nELASTICMQ_JAR_URL = 'https://s3-eu-west-1.amazonaws.com/softwaremill-public/elasticmq-server-0.13.8.jar'\nSTS_JAR_URL = 'http://central.maven.org/maven2/com/amazonaws/aws-java-sdk-sts/1.11.14/aws-java-sdk-sts-1.11.14.jar'\n\n# API endpoint for analytics events\nAPI_ENDPOINT = 'https://api.localstack.cloud/v1'\n", "path": "localstack/constants.py"}, {"content": "import json\nfrom flask import Flask, jsonify, request, make_response\nfrom localstack.services import generic_proxy\nfrom localstack.constants import TEST_AWS_ACCOUNT_ID, DEFAULT_REGION\nfrom localstack.utils.common import to_str\n\nAPP_NAME = 'es_api'\nAPI_PREFIX = '/2015-01-01'\n\nES_DOMAINS = {}\n\napp = Flask(APP_NAME)\n\n\ndef error_response(error_type, code=400, message='Unknown error.'):\n if not message:\n if error_type == 'ResourceNotFoundException':\n message = 'Resource not found.'\n elif error_type == 'ResourceAlreadyExistsException':\n message = 'Resource already exists.'\n response = make_response(jsonify({'error': message}))\n response.headers['x-amzn-errortype'] = error_type\n return response, code\n\n\ndef get_domain_status(domain_name, deleted=False):\n return {\n 'DomainStatus': {\n 'ARN': 'arn:aws:es:%s:%s:domain/%s' % (DEFAULT_REGION, TEST_AWS_ACCOUNT_ID, domain_name),\n 'Created': True,\n 'Deleted': deleted,\n 'DomainId': '%s/%s' % (TEST_AWS_ACCOUNT_ID, domain_name),\n 'DomainName': domain_name,\n 'ElasticsearchClusterConfig': {\n 'DedicatedMasterCount': 1,\n 'DedicatedMasterEnabled': True,\n 'DedicatedMasterType': 'm3.medium.elasticsearch',\n 'InstanceCount': 1,\n 'InstanceType': 'm3.medium.elasticsearch',\n 'ZoneAwarenessEnabled': True\n },\n 'ElasticsearchVersion': '5.3',\n 'Endpoint': None,\n 'Processing': True\n }\n }\n\n\[email protected]('%s/domain' % API_PREFIX, methods=['GET'])\ndef list_domain_names():\n result = {\n 'DomainNames': [{'DomainName': name} for name in ES_DOMAINS.keys()]\n }\n return jsonify(result)\n\n\[email protected]('%s/es/domain' % API_PREFIX, methods=['POST'])\ndef create_domain():\n data = json.loads(to_str(request.data))\n domain_name = data['DomainName']\n if domain_name in ES_DOMAINS:\n return error_response(error_type='ResourceAlreadyExistsException')\n ES_DOMAINS[domain_name] = data\n result = get_domain_status(domain_name)\n return jsonify(result)\n\n\[email protected]('%s/es/domain/<domain_name>' % API_PREFIX, methods=['GET'])\ndef describe_domain(domain_name):\n if domain_name not in ES_DOMAINS:\n return error_response(error_type='ResourceNotFoundException')\n result = get_domain_status(domain_name)\n return jsonify(result)\n\n\[email protected]('%s/es/domain/<domain_name>' % API_PREFIX, methods=['DELETE'])\ndef delete_domain(domain_name):\n if domain_name not in ES_DOMAINS:\n return error_response(error_type='ResourceNotFoundException')\n result = get_domain_status(domain_name, deleted=True)\n ES_DOMAINS.pop(domain_name)\n return jsonify(result)\n\n\ndef serve(port, quiet=True):\n generic_proxy.serve_flask_app(app=app, port=port, 
quiet=quiet)\n", "path": "localstack/services/es/es_api.py"}]} | 2,418 | 357 |
gh_patches_debug_26659 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2847 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`strawberry.directive` type hints appear to be incorrect.
<!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
A function like:
```py
@strawberry.directive(
locations=[DirectiveLocation.FRAGMENT_DEFINITION],
description="Do nothing, but add a base class for query generation in the python client.",
)
def identity(value: str, a: str, b: str | None = None) -> str:
"""Do nothing, but add a directive so that the python client can use this to add metadata."""
return value
reveal_type(identity)
```
`mypy` will say that the revealed type here is `builtins.str`. Looking at the code for `directive`, it appears that the type hints imply that the decorated object is going to return the same type that the function returns (in this case, the return type of `identity`)
## System Information
- Operating system:
- Strawberry version (if applicable):
## Additional Context
<!-- Add any other relevant information about the problem here. -->
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/strawberry-graphql/strawberry/issues/2846">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2846/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2846/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of strawberry/directive.py]
1 from __future__ import annotations
2
3 import dataclasses
4 from typing import TYPE_CHECKING, Any, Callable, List, Optional, TypeVar
5 from typing_extensions import Annotated
6
7 from graphql import DirectiveLocation
8
9 from strawberry.field import StrawberryField
10 from strawberry.types.fields.resolver import (
11 INFO_PARAMSPEC,
12 ReservedType,
13 StrawberryResolver,
14 )
15 from strawberry.unset import UNSET
16 from strawberry.utils.cached_property import cached_property
17
18 if TYPE_CHECKING:
19 import inspect
20
21 from strawberry.arguments import StrawberryArgument
22
23
24 def directive_field(name: str, default: object = UNSET) -> Any:
25 return StrawberryField(
26 python_name=None,
27 graphql_name=name,
28 default=default,
29 )
30
31
32 T = TypeVar("T")
33
34
35 class StrawberryDirectiveValue:
36 ...
37
38
39 DirectiveValue = Annotated[T, StrawberryDirectiveValue()]
40 DirectiveValue.__doc__ = (
41 """Represents the ``value`` argument for a GraphQL query directive."""
42 )
43
44 # Registers `DirectiveValue[...]` annotated arguments as reserved
45 VALUE_PARAMSPEC = ReservedType(name="value", type=StrawberryDirectiveValue)
46
47
48 class StrawberryDirectiveResolver(StrawberryResolver[T]):
49 RESERVED_PARAMSPEC = (
50 INFO_PARAMSPEC,
51 VALUE_PARAMSPEC,
52 )
53
54 @cached_property
55 def value_parameter(self) -> Optional[inspect.Parameter]:
56 return self.reserved_parameters.get(VALUE_PARAMSPEC)
57
58
59 @dataclasses.dataclass
60 class StrawberryDirective:
61 python_name: str
62 graphql_name: Optional[str]
63 resolver: StrawberryDirectiveResolver
64 locations: List[DirectiveLocation]
65 description: Optional[str] = None
66
67 @cached_property
68 def arguments(self) -> List[StrawberryArgument]:
69 return self.resolver.arguments
70
71
72 def directive(
73 *,
74 locations: List[DirectiveLocation],
75 description: Optional[str] = None,
76 name: Optional[str] = None,
77 ) -> Callable[[Callable[..., T]], T]:
78 def _wrap(f: Callable[..., T]) -> T:
79 return StrawberryDirective( # type: ignore
80 python_name=f.__name__,
81 graphql_name=name,
82 locations=locations,
83 description=description,
84 resolver=StrawberryDirectiveResolver(f),
85 )
86
87 return _wrap
88
89
90 __all__ = ["DirectiveLocation", "StrawberryDirective", "directive"]
91
[end of strawberry/directive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/directive.py b/strawberry/directive.py
--- a/strawberry/directive.py
+++ b/strawberry/directive.py
@@ -1,7 +1,7 @@
from __future__ import annotations
import dataclasses
-from typing import TYPE_CHECKING, Any, Callable, List, Optional, TypeVar
+from typing import TYPE_CHECKING, Any, Callable, Generic, List, Optional, TypeVar
from typing_extensions import Annotated
from graphql import DirectiveLocation
@@ -57,10 +57,10 @@
@dataclasses.dataclass
-class StrawberryDirective:
+class StrawberryDirective(Generic[T]):
python_name: str
graphql_name: Optional[str]
- resolver: StrawberryDirectiveResolver
+ resolver: StrawberryDirectiveResolver[T]
locations: List[DirectiveLocation]
description: Optional[str] = None
@@ -74,9 +74,9 @@
locations: List[DirectiveLocation],
description: Optional[str] = None,
name: Optional[str] = None,
-) -> Callable[[Callable[..., T]], T]:
- def _wrap(f: Callable[..., T]) -> T:
- return StrawberryDirective( # type: ignore
+) -> Callable[[Callable[..., T]], StrawberryDirective[T]]:
+ def _wrap(f: Callable[..., T]) -> StrawberryDirective[T]:
+ return StrawberryDirective(
python_name=f.__name__,
graphql_name=name,
locations=locations,
| {"golden_diff": "diff --git a/strawberry/directive.py b/strawberry/directive.py\n--- a/strawberry/directive.py\n+++ b/strawberry/directive.py\n@@ -1,7 +1,7 @@\n from __future__ import annotations\n \n import dataclasses\n-from typing import TYPE_CHECKING, Any, Callable, List, Optional, TypeVar\n+from typing import TYPE_CHECKING, Any, Callable, Generic, List, Optional, TypeVar\n from typing_extensions import Annotated\n \n from graphql import DirectiveLocation\n@@ -57,10 +57,10 @@\n \n \n @dataclasses.dataclass\n-class StrawberryDirective:\n+class StrawberryDirective(Generic[T]):\n python_name: str\n graphql_name: Optional[str]\n- resolver: StrawberryDirectiveResolver\n+ resolver: StrawberryDirectiveResolver[T]\n locations: List[DirectiveLocation]\n description: Optional[str] = None\n \n@@ -74,9 +74,9 @@\n locations: List[DirectiveLocation],\n description: Optional[str] = None,\n name: Optional[str] = None,\n-) -> Callable[[Callable[..., T]], T]:\n- def _wrap(f: Callable[..., T]) -> T:\n- return StrawberryDirective( # type: ignore\n+) -> Callable[[Callable[..., T]], StrawberryDirective[T]]:\n+ def _wrap(f: Callable[..., T]) -> StrawberryDirective[T]:\n+ return StrawberryDirective(\n python_name=f.__name__,\n graphql_name=name,\n locations=locations,\n", "issue": "`strawberry.directive` type hints appear to be incorrect.\n<!-- Provide a general summary of the bug in the title above. -->\r\n\r\n<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->\r\n<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->\r\n\r\n## Describe the Bug\r\n\r\nA function like:\r\n\r\n```py\r\[email protected](\r\n locations=[DirectiveLocation.FRAGMENT_DEFINITION],\r\n description=\"Do nothing, but add a base class for query generation in the python client.\",\r\n)\r\ndef identity(value: str, a: str, b: str | None = None) -> str:\r\n \"\"\"Do nothing, but add a directive so that the python client can use this to add metadata.\"\"\"\r\n return value\r\n\r\nreveal_type(identity)\r\n```\r\n\r\n`mypy` will say that the revealed type here is `builtins.str`. Looking at the code for `directive`, it appears that the type hints imply that the decorated object is going to return the same type that the function returns (in this case, the return type of `identity`)\r\n\r\n\r\n## System Information\r\n\r\n - Operating system:\r\n - Strawberry version (if applicable):\r\n\r\n## Additional Context\r\n\r\n<!-- Add any other relevant information about the problem here. 
-->\r\n\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/2846\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2846/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2846/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport dataclasses\nfrom typing import TYPE_CHECKING, Any, Callable, List, Optional, TypeVar\nfrom typing_extensions import Annotated\n\nfrom graphql import DirectiveLocation\n\nfrom strawberry.field import StrawberryField\nfrom strawberry.types.fields.resolver import (\n INFO_PARAMSPEC,\n ReservedType,\n StrawberryResolver,\n)\nfrom strawberry.unset import UNSET\nfrom strawberry.utils.cached_property import cached_property\n\nif TYPE_CHECKING:\n import inspect\n\n from strawberry.arguments import StrawberryArgument\n\n\ndef directive_field(name: str, default: object = UNSET) -> Any:\n return StrawberryField(\n python_name=None,\n graphql_name=name,\n default=default,\n )\n\n\nT = TypeVar(\"T\")\n\n\nclass StrawberryDirectiveValue:\n ...\n\n\nDirectiveValue = Annotated[T, StrawberryDirectiveValue()]\nDirectiveValue.__doc__ = (\n \"\"\"Represents the ``value`` argument for a GraphQL query directive.\"\"\"\n)\n\n# Registers `DirectiveValue[...]` annotated arguments as reserved\nVALUE_PARAMSPEC = ReservedType(name=\"value\", type=StrawberryDirectiveValue)\n\n\nclass StrawberryDirectiveResolver(StrawberryResolver[T]):\n RESERVED_PARAMSPEC = (\n INFO_PARAMSPEC,\n VALUE_PARAMSPEC,\n )\n\n @cached_property\n def value_parameter(self) -> Optional[inspect.Parameter]:\n return self.reserved_parameters.get(VALUE_PARAMSPEC)\n\n\[email protected]\nclass StrawberryDirective:\n python_name: str\n graphql_name: Optional[str]\n resolver: StrawberryDirectiveResolver\n locations: List[DirectiveLocation]\n description: Optional[str] = None\n\n @cached_property\n def arguments(self) -> List[StrawberryArgument]:\n return self.resolver.arguments\n\n\ndef directive(\n *,\n locations: List[DirectiveLocation],\n description: Optional[str] = None,\n name: Optional[str] = None,\n) -> Callable[[Callable[..., T]], T]:\n def _wrap(f: Callable[..., T]) -> T:\n return StrawberryDirective( # type: ignore\n python_name=f.__name__,\n graphql_name=name,\n locations=locations,\n description=description,\n resolver=StrawberryDirectiveResolver(f),\n )\n\n return _wrap\n\n\n__all__ = [\"DirectiveLocation\", \"StrawberryDirective\", \"directive\"]\n", "path": "strawberry/directive.py"}]} | 1,677 | 322 |
gh_patches_debug_19405 | rasdani/github-patches | git_diff | weni-ai__bothub-engine-212 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix intent default value to all examples, migration intent required
</issue>
<code>
[start of bothub/common/migrations/0021_auto_20180921_1259.py]
1 # Generated by Django 2.0.6 on 2018-09-21 12:59
2
3 import django.core.validators
4 from django.db import migrations, models
5 import re
6
7
8 class Migration(migrations.Migration):
9
10 dependencies = [
11 ('common', '0020_auto_20180813_1320'),
12 ]
13
14 operations = [
15 migrations.AlterField(
16 model_name='repositoryexample',
17 name='intent',
18 field=models.CharField(default='no_intent', help_text='Example intent reference', max_length=64, validators=[django.core.validators.RegexValidator(re.compile('^[-a-z0-9_]+\\Z'), 'Enter a valid value consisting of lowercase letters, numbers, underscores or hyphens.', 'invalid')], verbose_name='intent'),
19 ),
20 ]
21
[end of bothub/common/migrations/0021_auto_20180921_1259.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bothub/common/migrations/0021_auto_20180921_1259.py b/bothub/common/migrations/0021_auto_20180921_1259.py
--- a/bothub/common/migrations/0021_auto_20180921_1259.py
+++ b/bothub/common/migrations/0021_auto_20180921_1259.py
@@ -5,6 +5,11 @@
import re
+def populate_empty_intent(apps, *args):
+ RepositoryExample = apps.get_model('common', 'RepositoryExample')
+ RepositoryExample.objects.filter(intent='').update(intent='no_intent')
+
+
class Migration(migrations.Migration):
dependencies = [
@@ -17,4 +22,5 @@
name='intent',
field=models.CharField(default='no_intent', help_text='Example intent reference', max_length=64, validators=[django.core.validators.RegexValidator(re.compile('^[-a-z0-9_]+\\Z'), 'Enter a valid value consisting of lowercase letters, numbers, underscores or hyphens.', 'invalid')], verbose_name='intent'),
),
+ migrations.RunPython(populate_empty_intent),
]
| {"golden_diff": "diff --git a/bothub/common/migrations/0021_auto_20180921_1259.py b/bothub/common/migrations/0021_auto_20180921_1259.py\n--- a/bothub/common/migrations/0021_auto_20180921_1259.py\n+++ b/bothub/common/migrations/0021_auto_20180921_1259.py\n@@ -5,6 +5,11 @@\n import re\n \n \n+def populate_empty_intent(apps, *args):\n+ RepositoryExample = apps.get_model('common', 'RepositoryExample')\n+ RepositoryExample.objects.filter(intent='').update(intent='no_intent')\n+\n+\n class Migration(migrations.Migration):\n \n dependencies = [\n@@ -17,4 +22,5 @@\n name='intent',\n field=models.CharField(default='no_intent', help_text='Example intent reference', max_length=64, validators=[django.core.validators.RegexValidator(re.compile('^[-a-z0-9_]+\\\\Z'), 'Enter a valid value consisting of lowercase letters, numbers, underscores or hyphens.', 'invalid')], verbose_name='intent'),\n ),\n+ migrations.RunPython(populate_empty_intent),\n ]\n", "issue": "Fix intent default value to all examples, migration intent required\n\n", "before_files": [{"content": "# Generated by Django 2.0.6 on 2018-09-21 12:59\n\nimport django.core.validators\nfrom django.db import migrations, models\nimport re\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('common', '0020_auto_20180813_1320'),\n ]\n\n operations = [\n migrations.AlterField(\n model_name='repositoryexample',\n name='intent',\n field=models.CharField(default='no_intent', help_text='Example intent reference', max_length=64, validators=[django.core.validators.RegexValidator(re.compile('^[-a-z0-9_]+\\\\Z'), 'Enter a valid value consisting of lowercase letters, numbers, underscores or hyphens.', 'invalid')], verbose_name='intent'),\n ),\n ]\n", "path": "bothub/common/migrations/0021_auto_20180921_1259.py"}]} | 806 | 292 |
gh_patches_debug_1744 | rasdani/github-patches | git_diff | pyg-team__pytorch_geometric-7902 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SetTransformerAggregation returns `nan` for an unconnected node.
### 🐛 Describe the bug
When you use message passing with a `SetTransformerAggregation` and the input graph includes any number of nodes that are disconnected from the rest of the graph, the `SetTransformerAggregation` returns `nan` for those nodes. This is in contrast to the `SumAggregation` which returns plain `0`.
```python
from torch import Tensor
import torch
from torch_geometric.nn import MessagePassing, SetTransformerAggregation
from torch_geometric.data import Data, Batch
from torch_geometric.utils import sort_edge_index
class MPNN4Set(MessagePassing):
def __init__(self, dim, n_heads):
super(MPNN4Set, self).__init__()
self.dim = dim
self.aggregator = SetTransformerAggregation(dim, heads=n_heads)
def forward(self, h, edge_index, batch):
edge_index = sort_edge_index(edge_index, sort_by_row=False)
h = self.propagate(edge_index, x=h, num_nodes=h.size(0), batch=batch)
return h
def message(self, x_i, x_j, edge_index, num_nodes, batch):
return x_j
def aggregate(self, inputs: Tensor, index: Tensor, ptr: Tensor | None = None, dim_size: int | None = None) -> Tensor:
h = self.aggregator(inputs, index, ptr, dim_size)
return h
def update(self, aggr_out, batch):
return aggr_out
m = MPNN4Set(10, 2)
graphs = [Data(x=torch.randn((3, 10)), edge_index=torch.tensor([[0, 1], [1, 0]], dtype=torch.long)), Data(x=torch.randn((3, 10)), edge_index=torch.tensor([[0, 1, 2], [2, 1, 0]], dtype=torch.long))]
batched_graphs = Batch.from_data_list(graphs)
res = m(batched_graphs.x, batched_graphs.edge_index, batched_graphs.batch)
assert res[2].isnan().any().item() is True
```
I managed to debug this a little bit and it seems like this stems from the fact that in PyTorch's `MultiHeadAttention` implementation you shouldn't mask a row completely:
```python
import torch
from torch.nn import functional as F
from torch import nn
m = nn.MultiheadAttention(10, 2)
t1 = torch.randn((3, 3, 10))
mask = torch.tensor([[True, True, True], [False, False, False], [False, False, False]])
m(t1, t1, t1, mask) # Includes nan
```
This happens because the `unbatch` function will mask the row corresponding to that node because it is not connected to any other node.
### Environment
* PyG version: 2.3.1
* PyTorch version: 2.1.0a0+b5021ba
* OS: Ubuntu 22.04
* Python version: 3.10.6
* CUDA/cuDNN version: 12.2
* How you installed PyTorch and PyG (`conda`, `pip`, source): pip
</issue>
<code>
[start of torch_geometric/nn/aggr/set_transformer.py]
1 from typing import Optional
2
3 import torch
4 from torch import Tensor
5
6 from torch_geometric.experimental import disable_dynamic_shapes
7 from torch_geometric.nn.aggr import Aggregation
8 from torch_geometric.nn.aggr.utils import (
9 PoolingByMultiheadAttention,
10 SetAttentionBlock,
11 )
12
13
14 class SetTransformerAggregation(Aggregation):
15 r"""Performs "Set Transformer" aggregation in which the elements to
16 aggregate are processed by multi-head attention blocks, as described in
17 the `"Graph Neural Networks with Adaptive Readouts"
18 <https://arxiv.org/abs/2211.04952>`_ paper.
19
20 .. note::
21
22 :class:`SetTransformerAggregation` requires sorted indices :obj:`index`
23 as input. Specifically, if you use this aggregation as part of
24 :class:`~torch_geometric.nn.conv.MessagePassing`, ensure that
25 :obj:`edge_index` is sorted by destination nodes, either by manually
26 sorting edge indices via :meth:`~torch_geometric.utils.sort_edge_index`
27 or by calling :meth:`torch_geometric.data.Data.sort`.
28
29 Args:
30 channels (int): Size of each input sample.
31 num_seed_points (int, optional): Number of seed points.
32 (default: :obj:`1`)
33 num_encoder_blocks (int, optional): Number of Set Attention Blocks
34 (SABs) in the encoder. (default: :obj:`1`).
35 num_decoder_blocks (int, optional): Number of Set Attention Blocks
36 (SABs) in the decoder. (default: :obj:`1`).
37 heads (int, optional): Number of multi-head-attentions.
38 (default: :obj:`1`)
39 concat (bool, optional): If set to :obj:`False`, the seed embeddings
40 are averaged instead of concatenated. (default: :obj:`True`)
41 norm (str, optional): If set to :obj:`True`, will apply layer
42 normalization. (default: :obj:`False`)
43 dropout (float, optional): Dropout probability of attention weights.
44 (default: :obj:`0`)
45 """
46 def __init__(
47 self,
48 channels: int,
49 num_seed_points: int = 1,
50 num_encoder_blocks: int = 1,
51 num_decoder_blocks: int = 1,
52 heads: int = 1,
53 concat: bool = True,
54 layer_norm: bool = False,
55 dropout: float = 0.0,
56 ):
57 super().__init__()
58
59 self.channels = channels
60 self.num_seed_points = num_seed_points
61 self.heads = heads
62 self.concat = concat
63 self.layer_norm = layer_norm
64 self.dropout = dropout
65
66 self.encoders = torch.nn.ModuleList([
67 SetAttentionBlock(channels, heads, layer_norm, dropout)
68 for _ in range(num_encoder_blocks)
69 ])
70
71 self.pma = PoolingByMultiheadAttention(channels, num_seed_points,
72 heads, layer_norm, dropout)
73
74 self.decoders = torch.nn.ModuleList([
75 SetAttentionBlock(channels, heads, layer_norm, dropout)
76 for _ in range(num_decoder_blocks)
77 ])
78
79 def reset_parameters(self):
80 for encoder in self.encoders:
81 encoder.reset_parameters()
82 self.pma.reset_parameters()
83 for decoder in self.decoders:
84 decoder.reset_parameters()
85
86 @disable_dynamic_shapes(required_args=['dim_size', 'max_num_elements'])
87 def forward(
88 self,
89 x: Tensor,
90 index: Optional[Tensor] = None,
91 ptr: Optional[Tensor] = None,
92 dim_size: Optional[int] = None,
93 dim: int = -2,
94 max_num_elements: Optional[int] = None,
95 ) -> Tensor:
96
97 x, mask = self.to_dense_batch(x, index, ptr, dim_size, dim,
98 max_num_elements=max_num_elements)
99
100 for encoder in self.encoders:
101 x = encoder(x, mask)
102
103 x = self.pma(x, mask)
104
105 for decoder in self.decoders:
106 x = decoder(x)
107
108 return x.flatten(1, 2) if self.concat else x.mean(dim=1)
109
110 def __repr__(self) -> str:
111 return (f'{self.__class__.__name__}({self.channels}, '
112 f'num_seed_points={self.num_seed_points}, '
113 f'heads={self.heads}, '
114 f'layer_norm={self.layer_norm}, '
115 f'dropout={self.dropout})')
116
[end of torch_geometric/nn/aggr/set_transformer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torch_geometric/nn/aggr/set_transformer.py b/torch_geometric/nn/aggr/set_transformer.py
--- a/torch_geometric/nn/aggr/set_transformer.py
+++ b/torch_geometric/nn/aggr/set_transformer.py
@@ -105,6 +105,8 @@
for decoder in self.decoders:
x = decoder(x)
+ x = x.nan_to_num()
+
return x.flatten(1, 2) if self.concat else x.mean(dim=1)
def __repr__(self) -> str:
| {"golden_diff": "diff --git a/torch_geometric/nn/aggr/set_transformer.py b/torch_geometric/nn/aggr/set_transformer.py\n--- a/torch_geometric/nn/aggr/set_transformer.py\n+++ b/torch_geometric/nn/aggr/set_transformer.py\n@@ -105,6 +105,8 @@\n for decoder in self.decoders:\n x = decoder(x)\n \n+ x = x.nan_to_num()\n+\n return x.flatten(1, 2) if self.concat else x.mean(dim=1)\n \n def __repr__(self) -> str:\n", "issue": "SetTransformerAggregation returns `nan` for an unconnected node.\n### \ud83d\udc1b Describe the bug\r\n\r\nWhen you use message passing with a `SetTransformerAggregation` and the input graph includes any number of nodes that are disconnected from the rest of the graph, the `SetTransformerAggregation` returns `nan` for those nodes. This is in contrast to the `SumAggregation` which returns plain `0`.\r\n\r\n```python\r\nfrom torch import Tensor\r\nimport torch\r\nfrom torch_geometric.nn import MessagePassing, SetTransformerAggregation\r\nfrom torch_geometric.data import Data, Batch\r\nfrom torch_geometric.utils import sort_edge_index\r\n\r\nclass MPNN4Set(MessagePassing):\r\n def __init__(self, dim, n_heads):\r\n super(MPNN4Set, self).__init__()\r\n self.dim = dim\r\n self.aggregator = SetTransformerAggregation(dim, heads=n_heads)\r\n\r\n def forward(self, h, edge_index, batch):\r\n edge_index = sort_edge_index(edge_index, sort_by_row=False)\r\n h = self.propagate(edge_index, x=h, num_nodes=h.size(0), batch=batch)\r\n\r\n return h\r\n\r\n def message(self, x_i, x_j, edge_index, num_nodes, batch):\r\n return x_j\r\n \r\n def aggregate(self, inputs: Tensor, index: Tensor, ptr: Tensor | None = None, dim_size: int | None = None) -> Tensor:\r\n h = self.aggregator(inputs, index, ptr, dim_size)\r\n return h\r\n\r\n def update(self, aggr_out, batch):\r\n return aggr_out\r\n \r\n \r\nm = MPNN4Set(10, 2)\r\ngraphs = [Data(x=torch.randn((3, 10)), edge_index=torch.tensor([[0, 1], [1, 0]], dtype=torch.long)), Data(x=torch.randn((3, 10)), edge_index=torch.tensor([[0, 1, 2], [2, 1, 0]], dtype=torch.long))]\r\n\r\nbatched_graphs = Batch.from_data_list(graphs)\r\n\r\nres = m(batched_graphs.x, batched_graphs.edge_index, batched_graphs.batch)\r\n\r\nassert res[2].isnan().any().item() is True\r\n\r\n```\r\n\r\nI managed to debug this a little bit and it seems like this stems from the fact that in PyTorch's `MultiHeadAttention` implementation you shouldn't mask a row completely:\r\n\r\n```python\r\nimport torch\r\nfrom torch.nn import functional as F\r\nfrom torch import nn\r\n\r\nm = nn.MultiheadAttention(10, 2)\r\n\r\nt1 = torch.randn((3, 3, 10))\r\nmask = torch.tensor([[True, True, True], [False, False, False], [False, False, False]])\r\n\r\nm(t1, t1, t1, mask) # Includes nan\r\n```\r\n\r\nThis happens because the `unbatch` function will mask the row corresponding to that node because it is not connected to any other node.\r\n\r\n### Environment\r\n\r\n* PyG version: 2.3.1\r\n* PyTorch version: 2.1.0a0+b5021ba\r\n* OS: Ubuntu 22.04\r\n* Python version: 3.10.6\r\n* CUDA/cuDNN version: 12.2\r\n* How you installed PyTorch and PyG (`conda`, `pip`, source): pip\r\n\n", "before_files": [{"content": "from typing import Optional\n\nimport torch\nfrom torch import Tensor\n\nfrom torch_geometric.experimental import disable_dynamic_shapes\nfrom torch_geometric.nn.aggr import Aggregation\nfrom torch_geometric.nn.aggr.utils import (\n PoolingByMultiheadAttention,\n SetAttentionBlock,\n)\n\n\nclass SetTransformerAggregation(Aggregation):\n r\"\"\"Performs \"Set Transformer\" aggregation in which the 
elements to\n aggregate are processed by multi-head attention blocks, as described in\n the `\"Graph Neural Networks with Adaptive Readouts\"\n <https://arxiv.org/abs/2211.04952>`_ paper.\n\n .. note::\n\n :class:`SetTransformerAggregation` requires sorted indices :obj:`index`\n as input. Specifically, if you use this aggregation as part of\n :class:`~torch_geometric.nn.conv.MessagePassing`, ensure that\n :obj:`edge_index` is sorted by destination nodes, either by manually\n sorting edge indices via :meth:`~torch_geometric.utils.sort_edge_index`\n or by calling :meth:`torch_geometric.data.Data.sort`.\n\n Args:\n channels (int): Size of each input sample.\n num_seed_points (int, optional): Number of seed points.\n (default: :obj:`1`)\n num_encoder_blocks (int, optional): Number of Set Attention Blocks\n (SABs) in the encoder. (default: :obj:`1`).\n num_decoder_blocks (int, optional): Number of Set Attention Blocks\n (SABs) in the decoder. (default: :obj:`1`).\n heads (int, optional): Number of multi-head-attentions.\n (default: :obj:`1`)\n concat (bool, optional): If set to :obj:`False`, the seed embeddings\n are averaged instead of concatenated. (default: :obj:`True`)\n norm (str, optional): If set to :obj:`True`, will apply layer\n normalization. (default: :obj:`False`)\n dropout (float, optional): Dropout probability of attention weights.\n (default: :obj:`0`)\n \"\"\"\n def __init__(\n self,\n channels: int,\n num_seed_points: int = 1,\n num_encoder_blocks: int = 1,\n num_decoder_blocks: int = 1,\n heads: int = 1,\n concat: bool = True,\n layer_norm: bool = False,\n dropout: float = 0.0,\n ):\n super().__init__()\n\n self.channels = channels\n self.num_seed_points = num_seed_points\n self.heads = heads\n self.concat = concat\n self.layer_norm = layer_norm\n self.dropout = dropout\n\n self.encoders = torch.nn.ModuleList([\n SetAttentionBlock(channels, heads, layer_norm, dropout)\n for _ in range(num_encoder_blocks)\n ])\n\n self.pma = PoolingByMultiheadAttention(channels, num_seed_points,\n heads, layer_norm, dropout)\n\n self.decoders = torch.nn.ModuleList([\n SetAttentionBlock(channels, heads, layer_norm, dropout)\n for _ in range(num_decoder_blocks)\n ])\n\n def reset_parameters(self):\n for encoder in self.encoders:\n encoder.reset_parameters()\n self.pma.reset_parameters()\n for decoder in self.decoders:\n decoder.reset_parameters()\n\n @disable_dynamic_shapes(required_args=['dim_size', 'max_num_elements'])\n def forward(\n self,\n x: Tensor,\n index: Optional[Tensor] = None,\n ptr: Optional[Tensor] = None,\n dim_size: Optional[int] = None,\n dim: int = -2,\n max_num_elements: Optional[int] = None,\n ) -> Tensor:\n\n x, mask = self.to_dense_batch(x, index, ptr, dim_size, dim,\n max_num_elements=max_num_elements)\n\n for encoder in self.encoders:\n x = encoder(x, mask)\n\n x = self.pma(x, mask)\n\n for decoder in self.decoders:\n x = decoder(x)\n\n return x.flatten(1, 2) if self.concat else x.mean(dim=1)\n\n def __repr__(self) -> str:\n return (f'{self.__class__.__name__}({self.channels}, '\n f'num_seed_points={self.num_seed_points}, '\n f'heads={self.heads}, '\n f'layer_norm={self.layer_norm}, '\n f'dropout={self.dropout})')\n", "path": "torch_geometric/nn/aggr/set_transformer.py"}]} | 2,478 | 131 |
gh_patches_debug_19373 | rasdani/github-patches | git_diff | HypothesisWorks__hypothesis-312 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Initial calculation of unicode table is not atomic
If you're using e.g. pytest-xdist this can result in a corrupt unicode table, which is bad. Instead the file should be created under a temporary name then moved atomically.
</issue>
<code>
[start of src/hypothesis/internal/charmap.py]
1 # coding=utf-8
2 #
3 # This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)
4 #
5 # Most of this work is copyright (C) 2013-2015 David R. MacIver
6 # ([email protected]), but it contains contributions by others. See
7 # https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a
8 # full list of people who may hold copyright, and consult the git log if you
9 # need to determine who owns an individual contribution.
10 #
11 # This Source Code Form is subject to the terms of the Mozilla Public License,
12 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
13 # obtain one at http://mozilla.org/MPL/2.0/.
14 #
15 # END HEADER
16
17 from __future__ import division, print_function, absolute_import
18
19 import os
20 import sys
21 import gzip
22 import pickle
23 import unicodedata
24
25 from hypothesis.configuration import storage_directory
26 from hypothesis.internal.compat import hunichr
27
28
29 def charmap_file():
30 return os.path.join(
31 storage_directory('unicodedata', unicodedata.unidata_version),
32 'charmap.pickle.gz'
33 )
34
35 _charmap = None
36
37
38 def charmap():
39 global _charmap
40 if _charmap is None:
41 f = charmap_file()
42 if not os.path.exists(f):
43 _charmap = {}
44 for i in range(0, sys.maxunicode + 1):
45 cat = unicodedata.category(hunichr(i))
46 rs = _charmap.setdefault(cat, [])
47 if rs and rs[-1][-1] == i - 1:
48 rs[-1][-1] += 1
49 else:
50 rs.append([i, i])
51 # We explicitly set the mtime to an arbitary value so as to get
52 # a stable format for our charmap.
53 data = sorted(
54 (k, tuple((map(tuple, v))))
55 for k, v in _charmap.items())
56 with gzip.GzipFile(f, 'wb', mtime=1) as o:
57 o.write(pickle.dumps(data, pickle.HIGHEST_PROTOCOL))
58 with gzip.open(f, 'rb') as i:
59 _charmap = dict(pickle.loads(i.read()))
60 assert _charmap is not None
61 return _charmap
62
63
64 _categories = None
65
66
67 def categories():
68 global _categories
69 if _categories is None:
70 cm = charmap()
71 _categories = sorted(
72 cm.keys(), key=lambda c: len(cm[c])
73 )
74 _categories.remove('Cc')
75 _categories.remove('Cs')
76 _categories.append('Cc')
77 _categories.append('Cs')
78 return _categories
79
80
81 def _union_interval_lists(x, y):
82 if not x:
83 return y
84 if not y:
85 return x
86 intervals = sorted(x + y, reverse=True)
87 result = [intervals.pop()]
88 while intervals:
89 u, v = intervals.pop()
90 a, b = result[-1]
91 if u <= b + 1:
92 result[-1] = (a, v)
93 else:
94 result.append((u, v))
95 return tuple(result)
96
97
98 category_index_cache = {
99 (): (),
100 }
101
102
103 def _category_key(exclude, include):
104 cs = categories()
105 if include is None:
106 include = set(cs)
107 else:
108 include = set(include)
109 exclude = set(exclude or ())
110 include -= exclude
111 result = tuple(c for c in cs if c in include)
112 return result
113
114
115 def _query_for_key(key):
116 try:
117 return category_index_cache[key]
118 except KeyError:
119 pass
120 assert key
121 cs = categories()
122 if len(key) == len(cs):
123 result = ((0, sys.maxunicode),)
124 else:
125 result = _union_interval_lists(
126 _query_for_key(key[:-1]), charmap()[key[-1]]
127 )
128 category_index_cache[key] = result
129 return result
130
131
132 limited_category_index_cache = {}
133
134
135 def query(
136 exclude_categories=(), include_categories=None,
137 min_codepoint=None, max_codepoint=None
138 ):
139 if min_codepoint is None:
140 min_codepoint = 0
141 if max_codepoint is None:
142 max_codepoint = sys.maxunicode
143 catkey = _category_key(exclude_categories, include_categories)
144 qkey = (catkey, min_codepoint, max_codepoint)
145 try:
146 return limited_category_index_cache[qkey]
147 except KeyError:
148 pass
149 base = _query_for_key(catkey)
150 result = []
151 for u, v in base:
152 if v >= min_codepoint and u <= max_codepoint:
153 result.append((
154 max(u, min_codepoint), min(v, max_codepoint)
155 ))
156 result = tuple(result)
157 limited_category_index_cache[qkey] = result
158 return result
159
[end of src/hypothesis/internal/charmap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/hypothesis/internal/charmap.py b/src/hypothesis/internal/charmap.py
--- a/src/hypothesis/internal/charmap.py
+++ b/src/hypothesis/internal/charmap.py
@@ -20,6 +20,8 @@
import sys
import gzip
import pickle
+import shutil
+import tempfile
import unicodedata
from hypothesis.configuration import storage_directory
@@ -53,8 +55,13 @@
data = sorted(
(k, tuple((map(tuple, v))))
for k, v in _charmap.items())
- with gzip.GzipFile(f, 'wb', mtime=1) as o:
+
+ # Write the Unicode table atomically
+ _, tmpfile = tempfile.mkstemp()
+ with gzip.GzipFile(tmpfile, 'wb', mtime=1) as o:
o.write(pickle.dumps(data, pickle.HIGHEST_PROTOCOL))
+ shutil.move(tmpfile, f)
+
with gzip.open(f, 'rb') as i:
_charmap = dict(pickle.loads(i.read()))
assert _charmap is not None
| {"golden_diff": "diff --git a/src/hypothesis/internal/charmap.py b/src/hypothesis/internal/charmap.py\n--- a/src/hypothesis/internal/charmap.py\n+++ b/src/hypothesis/internal/charmap.py\n@@ -20,6 +20,8 @@\n import sys\n import gzip\n import pickle\n+import shutil\n+import tempfile\n import unicodedata\n \n from hypothesis.configuration import storage_directory\n@@ -53,8 +55,13 @@\n data = sorted(\n (k, tuple((map(tuple, v))))\n for k, v in _charmap.items())\n- with gzip.GzipFile(f, 'wb', mtime=1) as o:\n+\n+ # Write the Unicode table atomically\n+ _, tmpfile = tempfile.mkstemp()\n+ with gzip.GzipFile(tmpfile, 'wb', mtime=1) as o:\n o.write(pickle.dumps(data, pickle.HIGHEST_PROTOCOL))\n+ shutil.move(tmpfile, f)\n+\n with gzip.open(f, 'rb') as i:\n _charmap = dict(pickle.loads(i.read()))\n assert _charmap is not None\n", "issue": "Initial calculation of unicode table is not atomic\nIf you're using e.g. pytest-xdist this can result in a corrupt unicode table, which is bad. Instead the file should be created under a temporary name then moved atomically.\n\n", "before_files": [{"content": "# coding=utf-8\n#\n# This file is part of Hypothesis (https://github.com/DRMacIver/hypothesis)\n#\n# Most of this work is copyright (C) 2013-2015 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# https://github.com/DRMacIver/hypothesis/blob/master/CONTRIBUTING.rst for a\n# full list of people who may hold copyright, and consult the git log if you\n# need to determine who owns an individual contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at http://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nfrom __future__ import division, print_function, absolute_import\n\nimport os\nimport sys\nimport gzip\nimport pickle\nimport unicodedata\n\nfrom hypothesis.configuration import storage_directory\nfrom hypothesis.internal.compat import hunichr\n\n\ndef charmap_file():\n return os.path.join(\n storage_directory('unicodedata', unicodedata.unidata_version),\n 'charmap.pickle.gz'\n )\n\n_charmap = None\n\n\ndef charmap():\n global _charmap\n if _charmap is None:\n f = charmap_file()\n if not os.path.exists(f):\n _charmap = {}\n for i in range(0, sys.maxunicode + 1):\n cat = unicodedata.category(hunichr(i))\n rs = _charmap.setdefault(cat, [])\n if rs and rs[-1][-1] == i - 1:\n rs[-1][-1] += 1\n else:\n rs.append([i, i])\n # We explicitly set the mtime to an arbitary value so as to get\n # a stable format for our charmap.\n data = sorted(\n (k, tuple((map(tuple, v))))\n for k, v in _charmap.items())\n with gzip.GzipFile(f, 'wb', mtime=1) as o:\n o.write(pickle.dumps(data, pickle.HIGHEST_PROTOCOL))\n with gzip.open(f, 'rb') as i:\n _charmap = dict(pickle.loads(i.read()))\n assert _charmap is not None\n return _charmap\n\n\n_categories = None\n\n\ndef categories():\n global _categories\n if _categories is None:\n cm = charmap()\n _categories = sorted(\n cm.keys(), key=lambda c: len(cm[c])\n )\n _categories.remove('Cc')\n _categories.remove('Cs')\n _categories.append('Cc')\n _categories.append('Cs')\n return _categories\n\n\ndef _union_interval_lists(x, y):\n if not x:\n return y\n if not y:\n return x\n intervals = sorted(x + y, reverse=True)\n result = [intervals.pop()]\n while intervals:\n u, v = intervals.pop()\n a, b = result[-1]\n if u <= b + 1:\n result[-1] = (a, v)\n else:\n result.append((u, v))\n return tuple(result)\n\n\ncategory_index_cache = {\n 
(): (),\n}\n\n\ndef _category_key(exclude, include):\n cs = categories()\n if include is None:\n include = set(cs)\n else:\n include = set(include)\n exclude = set(exclude or ())\n include -= exclude\n result = tuple(c for c in cs if c in include)\n return result\n\n\ndef _query_for_key(key):\n try:\n return category_index_cache[key]\n except KeyError:\n pass\n assert key\n cs = categories()\n if len(key) == len(cs):\n result = ((0, sys.maxunicode),)\n else:\n result = _union_interval_lists(\n _query_for_key(key[:-1]), charmap()[key[-1]]\n )\n category_index_cache[key] = result\n return result\n\n\nlimited_category_index_cache = {}\n\n\ndef query(\n exclude_categories=(), include_categories=None,\n min_codepoint=None, max_codepoint=None\n):\n if min_codepoint is None:\n min_codepoint = 0\n if max_codepoint is None:\n max_codepoint = sys.maxunicode\n catkey = _category_key(exclude_categories, include_categories)\n qkey = (catkey, min_codepoint, max_codepoint)\n try:\n return limited_category_index_cache[qkey]\n except KeyError:\n pass\n base = _query_for_key(catkey)\n result = []\n for u, v in base:\n if v >= min_codepoint and u <= max_codepoint:\n result.append((\n max(u, min_codepoint), min(v, max_codepoint)\n ))\n result = tuple(result)\n limited_category_index_cache[qkey] = result\n return result\n", "path": "src/hypothesis/internal/charmap.py"}]} | 2,044 | 249 |
gh_patches_debug_10481 | rasdani/github-patches | git_diff | iterative__dvc-6242 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update pygtrie to 2.4.2?
DVC seems to pin pygtrie to version 2.3.2
I'm wondering what the reason is, and if it would feasible to either upgrade to or allow usage of 2.4.2? Currently I'm unable to install dvc alongside another project that does require the latest pygtrie.
</issue>
<code>
[start of setup.py]
1 import importlib.util
2 import os
3 from pathlib import Path
4
5 from setuptools import find_packages, setup
6 from setuptools.command.build_py import build_py as _build_py
7
8 # Prevents pkg_resources import in entry point script,
9 # see https://github.com/ninjaaron/fast-entry_points.
10 # This saves about 200 ms on startup time for non-wheel installs.
11 try:
12 import fastentrypoints # noqa: F401, pylint: disable=unused-import
13 except ImportError:
14 pass # not able to import when installing through pre-commit
15
16
17 # Read package meta-data from version.py
18 # see https://packaging.python.org/guides/single-sourcing-package-version/
19 pkg_dir = os.path.dirname(os.path.abspath(__file__))
20 version_path = os.path.join(pkg_dir, "dvc", "version.py")
21 spec = importlib.util.spec_from_file_location("dvc.version", version_path)
22 dvc_version = importlib.util.module_from_spec(spec)
23 spec.loader.exec_module(dvc_version)
24 version = dvc_version.__version__ # noqa: F821
25
26
27 # To achieve consistency between the build version and the one provided
28 # by your package during runtime, you need to **pin** the build version.
29 #
30 # This custom class will replace the version.py module with a **static**
31 # `__version__` that your package can read at runtime, assuring consistency.
32 #
33 # References:
34 # - https://docs.python.org/3.7/distutils/extending.html
35 # - https://github.com/python/mypy
36 class build_py(_build_py):
37 def pin_version(self):
38 path = os.path.join(self.build_lib, "dvc")
39 self.mkpath(path)
40 with open(os.path.join(path, "version.py"), "w") as fobj:
41 fobj.write("# AUTOGENERATED at build time by setup.py\n")
42 fobj.write('__version__ = "{}"\n'.format(version))
43
44 def run(self):
45 self.execute(self.pin_version, ())
46 _build_py.run(self)
47
48
49 install_requires = [
50 "ply>=3.9", # See https://github.com/pyinstaller/pyinstaller/issues/1945
51 "colorama>=0.3.9",
52 "configobj>=5.0.6",
53 "gitpython>3",
54 "dulwich>=0.20.23",
55 "pygit2>=1.5.0",
56 "setuptools>=34.0.0",
57 "nanotime>=0.5.2",
58 "pyasn1>=0.4.1",
59 "voluptuous>=0.11.7",
60 "jsonpath-ng>=1.5.1",
61 "requests>=2.22.0",
62 "grandalf==0.6",
63 "distro>=1.3.0",
64 "appdirs>=1.4.3",
65 "ruamel.yaml>=0.16.1",
66 "toml>=0.10.1",
67 "funcy>=1.14",
68 "pathspec>=0.6.0",
69 "shortuuid>=0.5.0",
70 "tqdm>=4.45.0,<5",
71 "packaging>=19.0",
72 "zc.lockfile>=1.2.1",
73 "flufl.lock>=3.2,<4",
74 "win-unicode-console>=0.5; sys_platform == 'win32'",
75 "pywin32>=225; sys_platform == 'win32'",
76 "networkx~=2.5",
77 "psutil>=5.8.0",
78 "pydot>=1.2.4",
79 "speedcopy>=2.0.1; python_version < '3.8' and sys_platform == 'win32'",
80 "dataclasses==0.7; python_version < '3.7'",
81 "flatten_dict>=0.3.0,<1",
82 "tabulate>=0.8.7",
83 "pygtrie==2.3.2",
84 "dpath>=2.0.1,<3",
85 "shtab>=1.3.4,<2",
86 "rich>=10.0.0",
87 "dictdiffer>=0.8.1",
88 "python-benedict>=0.21.1",
89 "pyparsing==2.4.7",
90 "typing_extensions>=3.7.4",
91 "fsspec==2021.6.1",
92 "diskcache>=5.2.1",
93 ]
94
95
96 # Extra dependencies for remote integrations
97
98 gs = ["gcsfs==2021.6.1"]
99 gdrive = ["pydrive2>=1.8.1", "six >= 1.13.0"]
100 s3 = ["s3fs==2021.6.1", "aiobotocore[boto3]==1.3.0"]
101 azure = ["adlfs==0.7.1", "azure-identity>=1.4.0", "knack"]
102 # https://github.com/Legrandin/pycryptodome/issues/465
103 oss = ["oss2==2.6.1", "pycryptodome>=3.10"]
104 ssh = ["paramiko[invoke]>=2.7.0"]
105
106 # Remove the env marker if/when pyarrow is available for Python3.9
107 hdfs = ["pyarrow>=2.0.0"]
108 webhdfs = ["hdfs==2.5.8"]
109 webdav = ["webdav4>=0.8.1"]
110 # gssapi should not be included in all_remotes, because it doesn't have wheels
111 # for linux and mac, so it will fail to compile if user doesn't have all the
112 # requirements, including kerberos itself. Once all the wheels are available,
113 # we can start shipping it by default.
114 ssh_gssapi = ["paramiko[invoke,gssapi]>=2.7.0"]
115 all_remotes = gs + s3 + azure + ssh + oss + gdrive + hdfs + webhdfs + webdav
116
117 tests_requirements = (
118 Path("test_requirements.txt").read_text().strip().splitlines()
119 )
120
121 setup(
122 name="dvc",
123 version=version,
124 description="Git for data scientists - manage your code and data together",
125 long_description=open("README.rst", "r", encoding="UTF-8").read(),
126 author="Dmitry Petrov",
127 author_email="[email protected]",
128 download_url="https://github.com/iterative/dvc",
129 license="Apache License 2.0",
130 install_requires=install_requires,
131 extras_require={
132 "all": all_remotes,
133 "gs": gs,
134 "gdrive": gdrive,
135 "s3": s3,
136 "azure": azure,
137 "oss": oss,
138 "ssh": ssh,
139 "ssh_gssapi": ssh_gssapi,
140 "hdfs": hdfs,
141 "webhdfs": webhdfs,
142 "webdav": webdav,
143 "tests": tests_requirements,
144 },
145 keywords="data-science data-version-control machine-learning git"
146 " developer-tools reproducibility collaboration ai",
147 python_requires=">=3.6",
148 classifiers=[
149 "Development Status :: 4 - Beta",
150 "Programming Language :: Python :: 3",
151 "Programming Language :: Python :: 3.6",
152 "Programming Language :: Python :: 3.7",
153 "Programming Language :: Python :: 3.8",
154 "Programming Language :: Python :: 3.9",
155 ],
156 packages=find_packages(exclude=["tests"]),
157 include_package_data=True,
158 url="http://dvc.org",
159 entry_points={"console_scripts": ["dvc = dvc.main:main"]},
160 cmdclass={"build_py": build_py},
161 zip_safe=False,
162 )
163
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -80,7 +80,7 @@
"dataclasses==0.7; python_version < '3.7'",
"flatten_dict>=0.3.0,<1",
"tabulate>=0.8.7",
- "pygtrie==2.3.2",
+ "pygtrie>=2.3.2",
"dpath>=2.0.1,<3",
"shtab>=1.3.4,<2",
"rich>=10.0.0",
@@ -88,7 +88,7 @@
"python-benedict>=0.21.1",
"pyparsing==2.4.7",
"typing_extensions>=3.7.4",
- "fsspec==2021.6.1",
+ "fsspec>=2021.6.1",
"diskcache>=5.2.1",
]
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -80,7 +80,7 @@\n \"dataclasses==0.7; python_version < '3.7'\",\n \"flatten_dict>=0.3.0,<1\",\n \"tabulate>=0.8.7\",\n- \"pygtrie==2.3.2\",\n+ \"pygtrie>=2.3.2\",\n \"dpath>=2.0.1,<3\",\n \"shtab>=1.3.4,<2\",\n \"rich>=10.0.0\",\n@@ -88,7 +88,7 @@\n \"python-benedict>=0.21.1\",\n \"pyparsing==2.4.7\",\n \"typing_extensions>=3.7.4\",\n- \"fsspec==2021.6.1\",\n+ \"fsspec>=2021.6.1\",\n \"diskcache>=5.2.1\",\n ]\n", "issue": "Update pygtrie to 2.4.2?\nDVC seems to pin pygtrie to version 2.3.2\r\n\r\nI'm wondering what the reason is, and if it would feasible to either upgrade to or allow usage of 2.4.2? Currently I'm unable to install dvc alongside another project that does require the latest pygtrie.\n", "before_files": [{"content": "import importlib.util\nimport os\nfrom pathlib import Path\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.build_py import build_py as _build_py\n\n# Prevents pkg_resources import in entry point script,\n# see https://github.com/ninjaaron/fast-entry_points.\n# This saves about 200 ms on startup time for non-wheel installs.\ntry:\n import fastentrypoints # noqa: F401, pylint: disable=unused-import\nexcept ImportError:\n pass # not able to import when installing through pre-commit\n\n\n# Read package meta-data from version.py\n# see https://packaging.python.org/guides/single-sourcing-package-version/\npkg_dir = os.path.dirname(os.path.abspath(__file__))\nversion_path = os.path.join(pkg_dir, \"dvc\", \"version.py\")\nspec = importlib.util.spec_from_file_location(\"dvc.version\", version_path)\ndvc_version = importlib.util.module_from_spec(spec)\nspec.loader.exec_module(dvc_version)\nversion = dvc_version.__version__ # noqa: F821\n\n\n# To achieve consistency between the build version and the one provided\n# by your package during runtime, you need to **pin** the build version.\n#\n# This custom class will replace the version.py module with a **static**\n# `__version__` that your package can read at runtime, assuring consistency.\n#\n# References:\n# - https://docs.python.org/3.7/distutils/extending.html\n# - https://github.com/python/mypy\nclass build_py(_build_py):\n def pin_version(self):\n path = os.path.join(self.build_lib, \"dvc\")\n self.mkpath(path)\n with open(os.path.join(path, \"version.py\"), \"w\") as fobj:\n fobj.write(\"# AUTOGENERATED at build time by setup.py\\n\")\n fobj.write('__version__ = \"{}\"\\n'.format(version))\n\n def run(self):\n self.execute(self.pin_version, ())\n _build_py.run(self)\n\n\ninstall_requires = [\n \"ply>=3.9\", # See https://github.com/pyinstaller/pyinstaller/issues/1945\n \"colorama>=0.3.9\",\n \"configobj>=5.0.6\",\n \"gitpython>3\",\n \"dulwich>=0.20.23\",\n \"pygit2>=1.5.0\",\n \"setuptools>=34.0.0\",\n \"nanotime>=0.5.2\",\n \"pyasn1>=0.4.1\",\n \"voluptuous>=0.11.7\",\n \"jsonpath-ng>=1.5.1\",\n \"requests>=2.22.0\",\n \"grandalf==0.6\",\n \"distro>=1.3.0\",\n \"appdirs>=1.4.3\",\n \"ruamel.yaml>=0.16.1\",\n \"toml>=0.10.1\",\n \"funcy>=1.14\",\n \"pathspec>=0.6.0\",\n \"shortuuid>=0.5.0\",\n \"tqdm>=4.45.0,<5\",\n \"packaging>=19.0\",\n \"zc.lockfile>=1.2.1\",\n \"flufl.lock>=3.2,<4\",\n \"win-unicode-console>=0.5; sys_platform == 'win32'\",\n \"pywin32>=225; sys_platform == 'win32'\",\n \"networkx~=2.5\",\n \"psutil>=5.8.0\",\n \"pydot>=1.2.4\",\n \"speedcopy>=2.0.1; python_version < '3.8' and sys_platform == 'win32'\",\n \"dataclasses==0.7; python_version < '3.7'\",\n \"flatten_dict>=0.3.0,<1\",\n 
\"tabulate>=0.8.7\",\n \"pygtrie==2.3.2\",\n \"dpath>=2.0.1,<3\",\n \"shtab>=1.3.4,<2\",\n \"rich>=10.0.0\",\n \"dictdiffer>=0.8.1\",\n \"python-benedict>=0.21.1\",\n \"pyparsing==2.4.7\",\n \"typing_extensions>=3.7.4\",\n \"fsspec==2021.6.1\",\n \"diskcache>=5.2.1\",\n]\n\n\n# Extra dependencies for remote integrations\n\ngs = [\"gcsfs==2021.6.1\"]\ngdrive = [\"pydrive2>=1.8.1\", \"six >= 1.13.0\"]\ns3 = [\"s3fs==2021.6.1\", \"aiobotocore[boto3]==1.3.0\"]\nazure = [\"adlfs==0.7.1\", \"azure-identity>=1.4.0\", \"knack\"]\n# https://github.com/Legrandin/pycryptodome/issues/465\noss = [\"oss2==2.6.1\", \"pycryptodome>=3.10\"]\nssh = [\"paramiko[invoke]>=2.7.0\"]\n\n# Remove the env marker if/when pyarrow is available for Python3.9\nhdfs = [\"pyarrow>=2.0.0\"]\nwebhdfs = [\"hdfs==2.5.8\"]\nwebdav = [\"webdav4>=0.8.1\"]\n# gssapi should not be included in all_remotes, because it doesn't have wheels\n# for linux and mac, so it will fail to compile if user doesn't have all the\n# requirements, including kerberos itself. Once all the wheels are available,\n# we can start shipping it by default.\nssh_gssapi = [\"paramiko[invoke,gssapi]>=2.7.0\"]\nall_remotes = gs + s3 + azure + ssh + oss + gdrive + hdfs + webhdfs + webdav\n\ntests_requirements = (\n Path(\"test_requirements.txt\").read_text().strip().splitlines()\n)\n\nsetup(\n name=\"dvc\",\n version=version,\n description=\"Git for data scientists - manage your code and data together\",\n long_description=open(\"README.rst\", \"r\", encoding=\"UTF-8\").read(),\n author=\"Dmitry Petrov\",\n author_email=\"[email protected]\",\n download_url=\"https://github.com/iterative/dvc\",\n license=\"Apache License 2.0\",\n install_requires=install_requires,\n extras_require={\n \"all\": all_remotes,\n \"gs\": gs,\n \"gdrive\": gdrive,\n \"s3\": s3,\n \"azure\": azure,\n \"oss\": oss,\n \"ssh\": ssh,\n \"ssh_gssapi\": ssh_gssapi,\n \"hdfs\": hdfs,\n \"webhdfs\": webhdfs,\n \"webdav\": webdav,\n \"tests\": tests_requirements,\n },\n keywords=\"data-science data-version-control machine-learning git\"\n \" developer-tools reproducibility collaboration ai\",\n python_requires=\">=3.6\",\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n packages=find_packages(exclude=[\"tests\"]),\n include_package_data=True,\n url=\"http://dvc.org\",\n entry_points={\"console_scripts\": [\"dvc = dvc.main:main\"]},\n cmdclass={\"build_py\": build_py},\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 2,705 | 230 |
gh_patches_debug_26258 | rasdani/github-patches | git_diff | beeware__toga-2276 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImageView doesn't shrink images to fit
### Describe the bug
The documentation for ImageView says that a `flex=1` image will be expanded or shrunk to fit the available space. It definitely expands images to fit, but a large image will not be *shrunk* to fit.
### Steps to reproduce
Use the following app code:
```py
main_box = toga.Box(style=Pack(direction=ROW))
main_box.add(toga.ImageView("image.png", style=Pack(flex=1)))
self.main_window = toga.MainWindow(title=self.formal_name)
self.main_window.content = main_box
self.main_window.show()
```
if image.png is smaller than the window, it will be expanded; if image.py is larger than the window, the window will be expanded to fit the natural size of the window (or, on mobile, show "content doesn't fit window" warnings).
### Expected behavior
An ImageView with `flex=1` should shrink large images to fit the available space, preserving aspect ratio.
### Screenshots
_No response_
### Environment
- Operating System: all (tested on macOS and iOS)
- Python version: All
- Software versions:
- Toga: 0.4.0
### Logs
_No response_
### Additional context
The underlying problem appears to be that the intrinsic constraint has been set to `at_least(size)` in each axis, rather than `at_least(0)`. This enforces a minimum size that can't be shrunk.
</issue>
<code>
[start of core/src/toga/widgets/imageview.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING
4
5 from travertino.size import at_least
6
7 import toga
8 from toga.style.pack import NONE
9 from toga.widgets.base import Widget
10
11 if TYPE_CHECKING:
12 from pathlib import Path
13
14 import PIL.Image
15
16 from toga.images import ImageT
17
18
19 def rehint_imageview(image, style, scale=1):
20 """Compute the size hints for an ImageView based on the image.
21
22 This logic is common across all backends, so it's shared here.
23
24 :param image: The image being displayed.
25 :param style: The style object for the imageview.
26 :param scale: The scale factor (if any) to apply to native pixel sizes.
27 :returns: A triple containing the intrinsic width hint, intrinsic height
28 hint, and the aspect ratio to preserve (or None if the aspect ratio
29 should not be preserved).
30 """
31 if image:
32 if style.width != NONE and style.height != NONE:
33 # Explicit width and height for image. Scale the rendered image
34 # to fit the explicitly provided size.
35 width = int(style.width * scale)
36 height = int(style.height * scale)
37 aspect_ratio = None
38
39 elif style.width != NONE:
40 # Explicit width, implicit height. Preserve aspect ratio.
41 aspect_ratio = image.width / image.height
42 width = int(style.width * scale)
43 height = int(style.width * scale / aspect_ratio)
44 if style.flex:
45 height = at_least(height)
46 elif style.height != NONE:
47 # Explicit height, implicit width. Preserve aspect ratio.
48 aspect_ratio = image.width / image.height
49 width = int(style.height * scale * aspect_ratio)
50 height = int(style.height * scale)
51 if style.flex:
52 width = at_least(width)
53 else:
54 # Use the image's actual size.
55 aspect_ratio = image.width / image.height
56 width = int(image.width * scale)
57 height = int(image.height * scale)
58 if style.flex:
59 width = at_least(width)
60 height = at_least(height)
61 else:
62 # No image. Hinted size is 0.
63 width = 0
64 height = 0
65 aspect_ratio = None
66
67 return width, height, aspect_ratio
68
69
70 # Note: remove PIL type annotation when plugin system is implemented for image format
71 # registration; replace with ImageT?
72 class ImageView(Widget):
73 def __init__(
74 self,
75 image: str
76 | Path
77 | bytes
78 | bytearray
79 | memoryview
80 | PIL.Image.Image
81 | None = None,
82 id=None,
83 style=None,
84 ):
85 """
86 Create a new image view.
87
88 :param image: The image to display. This can take all the same formats as the
89 `src` parameter to :class:`toga.Image` -- namely, a file path (as string
90 or :any:`pathlib.Path`), bytes data in a supported image format,
91 or :any:`PIL.Image.Image`.
92 :param id: The ID for the widget.
93 :param style: A style object. If no style is provided, a default style will be
94 applied to the widget.
95 """
96 super().__init__(id=id, style=style)
97 # Prime the image attribute
98 self._image = None
99 self._impl = self.factory.ImageView(interface=self)
100 self.image = image
101
102 @property
103 def enabled(self) -> bool:
104 """Is the widget currently enabled? i.e., can the user interact with the widget?
105
106 ImageView widgets cannot be disabled; this property will always return True; any
107 attempt to modify it will be ignored.
108 """
109 return True
110
111 @enabled.setter
112 def enabled(self, value):
113 pass
114
115 def focus(self):
116 "No-op; ImageView cannot accept input focus"
117 pass
118
119 @property
120 def image(self) -> toga.Image | None:
121 """The image to display.
122
123 When setting an image, you can provide:
124
125 * An :class:`~toga.images.Image` instance; or
126
127 * Any value that would be a valid path specifier when creating a new
128 :class:`~toga.images.Image` instance; or
129
130 * :any:`None` to clear the image view.
131 """
132 return self._image
133
134 @image.setter
135 def image(self, image):
136 if isinstance(image, toga.Image):
137 self._image = image
138 elif image is None:
139 self._image = None
140 else:
141 self._image = toga.Image(image)
142
143 self._impl.set_image(self._image)
144 self.refresh()
145
146 def as_image(self, format: type[ImageT] = toga.Image) -> ImageT:
147 """Return the image in the specified format.
148
149 :param format: Format to provide. Defaults to :class:`~toga.images.Image`; also
150 supports :any:`PIL.Image.Image` if Pillow is installed.
151 :returns: The image in the specified format.
152 """
153 return self.image.as_format(format)
154
[end of core/src/toga/widgets/imageview.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/src/toga/widgets/imageview.py b/core/src/toga/widgets/imageview.py
--- a/core/src/toga/widgets/imageview.py
+++ b/core/src/toga/widgets/imageview.py
@@ -42,22 +42,22 @@
width = int(style.width * scale)
height = int(style.width * scale / aspect_ratio)
if style.flex:
- height = at_least(height)
+ height = at_least(0)
elif style.height != NONE:
# Explicit height, implicit width. Preserve aspect ratio.
aspect_ratio = image.width / image.height
width = int(style.height * scale * aspect_ratio)
height = int(style.height * scale)
if style.flex:
- width = at_least(width)
+ width = at_least(0)
else:
# Use the image's actual size.
aspect_ratio = image.width / image.height
width = int(image.width * scale)
height = int(image.height * scale)
if style.flex:
- width = at_least(width)
- height = at_least(height)
+ width = at_least(0)
+ height = at_least(0)
else:
# No image. Hinted size is 0.
width = 0
| {"golden_diff": "diff --git a/core/src/toga/widgets/imageview.py b/core/src/toga/widgets/imageview.py\n--- a/core/src/toga/widgets/imageview.py\n+++ b/core/src/toga/widgets/imageview.py\n@@ -42,22 +42,22 @@\n width = int(style.width * scale)\n height = int(style.width * scale / aspect_ratio)\n if style.flex:\n- height = at_least(height)\n+ height = at_least(0)\n elif style.height != NONE:\n # Explicit height, implicit width. Preserve aspect ratio.\n aspect_ratio = image.width / image.height\n width = int(style.height * scale * aspect_ratio)\n height = int(style.height * scale)\n if style.flex:\n- width = at_least(width)\n+ width = at_least(0)\n else:\n # Use the image's actual size.\n aspect_ratio = image.width / image.height\n width = int(image.width * scale)\n height = int(image.height * scale)\n if style.flex:\n- width = at_least(width)\n- height = at_least(height)\n+ width = at_least(0)\n+ height = at_least(0)\n else:\n # No image. Hinted size is 0.\n width = 0\n", "issue": "ImageView doesn't shrink images to fit\n### Describe the bug\r\n\r\nThe documentation for ImageView says that a `flex=1` image will be expanded or shrunk to fit the available space. It definitely expands images to fit, but a large image will not be *shrunk* to fit.\r\n\r\n### Steps to reproduce\r\n\r\nUse the following app code:\r\n```py\r\n main_box = toga.Box(style=Pack(direction=ROW))\r\n\r\n main_box.add(toga.ImageView(\"image.png\", style=Pack(flex=1)))\r\n\r\n self.main_window = toga.MainWindow(title=self.formal_name)\r\n self.main_window.content = main_box\r\n self.main_window.show()\r\n```\r\n\r\nif image.png is smaller than the window, it will be expanded; if image.py is larger than the window, the window will be expanded to fit the natural size of the window (or, on mobile, show \"content doesn't fit window\" warnings).\r\n\r\n\r\n### Expected behavior\r\n\r\nAn ImageView with `flex=1` should shrink large images to fit the available space, preserving aspect ratio.\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\n- Operating System: all (tested on macOS and iOS)\r\n- Python version: All\r\n- Software versions:\r\n - Toga: 0.4.0\r\n\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nThe underlying problem appears to be that the intrinsic constraint has been set to `at_least(size)` in each axis, rather than `at_least(0)`. This enforces a minimum size that can't be shrunk.\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom travertino.size import at_least\n\nimport toga\nfrom toga.style.pack import NONE\nfrom toga.widgets.base import Widget\n\nif TYPE_CHECKING:\n from pathlib import Path\n\n import PIL.Image\n\n from toga.images import ImageT\n\n\ndef rehint_imageview(image, style, scale=1):\n \"\"\"Compute the size hints for an ImageView based on the image.\n\n This logic is common across all backends, so it's shared here.\n\n :param image: The image being displayed.\n :param style: The style object for the imageview.\n :param scale: The scale factor (if any) to apply to native pixel sizes.\n :returns: A triple containing the intrinsic width hint, intrinsic height\n hint, and the aspect ratio to preserve (or None if the aspect ratio\n should not be preserved).\n \"\"\"\n if image:\n if style.width != NONE and style.height != NONE:\n # Explicit width and height for image. 
Scale the rendered image\n # to fit the explicitly provided size.\n width = int(style.width * scale)\n height = int(style.height * scale)\n aspect_ratio = None\n\n elif style.width != NONE:\n # Explicit width, implicit height. Preserve aspect ratio.\n aspect_ratio = image.width / image.height\n width = int(style.width * scale)\n height = int(style.width * scale / aspect_ratio)\n if style.flex:\n height = at_least(height)\n elif style.height != NONE:\n # Explicit height, implicit width. Preserve aspect ratio.\n aspect_ratio = image.width / image.height\n width = int(style.height * scale * aspect_ratio)\n height = int(style.height * scale)\n if style.flex:\n width = at_least(width)\n else:\n # Use the image's actual size.\n aspect_ratio = image.width / image.height\n width = int(image.width * scale)\n height = int(image.height * scale)\n if style.flex:\n width = at_least(width)\n height = at_least(height)\n else:\n # No image. Hinted size is 0.\n width = 0\n height = 0\n aspect_ratio = None\n\n return width, height, aspect_ratio\n\n\n# Note: remove PIL type annotation when plugin system is implemented for image format\n# registration; replace with ImageT?\nclass ImageView(Widget):\n def __init__(\n self,\n image: str\n | Path\n | bytes\n | bytearray\n | memoryview\n | PIL.Image.Image\n | None = None,\n id=None,\n style=None,\n ):\n \"\"\"\n Create a new image view.\n\n :param image: The image to display. This can take all the same formats as the\n `src` parameter to :class:`toga.Image` -- namely, a file path (as string\n or :any:`pathlib.Path`), bytes data in a supported image format,\n or :any:`PIL.Image.Image`.\n :param id: The ID for the widget.\n :param style: A style object. If no style is provided, a default style will be\n applied to the widget.\n \"\"\"\n super().__init__(id=id, style=style)\n # Prime the image attribute\n self._image = None\n self._impl = self.factory.ImageView(interface=self)\n self.image = image\n\n @property\n def enabled(self) -> bool:\n \"\"\"Is the widget currently enabled? i.e., can the user interact with the widget?\n\n ImageView widgets cannot be disabled; this property will always return True; any\n attempt to modify it will be ignored.\n \"\"\"\n return True\n\n @enabled.setter\n def enabled(self, value):\n pass\n\n def focus(self):\n \"No-op; ImageView cannot accept input focus\"\n pass\n\n @property\n def image(self) -> toga.Image | None:\n \"\"\"The image to display.\n\n When setting an image, you can provide:\n\n * An :class:`~toga.images.Image` instance; or\n\n * Any value that would be a valid path specifier when creating a new\n :class:`~toga.images.Image` instance; or\n\n * :any:`None` to clear the image view.\n \"\"\"\n return self._image\n\n @image.setter\n def image(self, image):\n if isinstance(image, toga.Image):\n self._image = image\n elif image is None:\n self._image = None\n else:\n self._image = toga.Image(image)\n\n self._impl.set_image(self._image)\n self.refresh()\n\n def as_image(self, format: type[ImageT] = toga.Image) -> ImageT:\n \"\"\"Return the image in the specified format.\n\n :param format: Format to provide. Defaults to :class:`~toga.images.Image`; also\n supports :any:`PIL.Image.Image` if Pillow is installed.\n :returns: The image in the specified format.\n \"\"\"\n return self.image.as_format(format)\n", "path": "core/src/toga/widgets/imageview.py"}]} | 2,315 | 273 |
gh_patches_debug_20823 | rasdani/github-patches | git_diff | SeldonIO__MLServer-890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "don't log" header to Alibi runtime
</issue>
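The issue title is terse, so here is a brief, hedged sketch of what such an opt-out header could look like on the call that forwards inference requests to the predictor. The header name `Seldon-Skip-Logging` matches the fix shown later in this record; everything else (the function name and argument shapes) is illustrative only, not the library's confirmed API.

```
import requests

SELDON_SKIP_LOGGING_HEADER = "Seldon-Skip-Logging"


def remote_predict_sketch(predictor_url: str, v2_payload: dict, verify=True):
    # Tag the explainer's internal call so Seldon's request logger can
    # recognise and skip it instead of recording it as a user request.
    return requests.post(
        predictor_url,
        json=v2_payload,
        headers={SELDON_SKIP_LOGGING_HEADER: "true"},
        verify=verify,
    )
```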
<code>
[start of runtimes/alibi-explain/mlserver_alibi_explain/common.py]
1 import re
2 from importlib import import_module
3 from typing import Any, Optional, Type, Union, List
4
5 import numpy as np
6 import requests
7 from pydantic import BaseSettings
8
9 from mlserver.codecs import StringCodec, NumpyCodec
10 from mlserver.types import (
11 ResponseOutput,
12 InferenceResponse,
13 InferenceRequest,
14 Parameters,
15 MetadataModelResponse,
16 )
17 from mlserver.utils import generate_uuid
18
19 from mlserver_alibi_explain.errors import RemoteInferenceError, InvalidExplanationShape
20
21 _DEFAULT_INPUT_NAME = "predict"
22
23 EXPLAINER_TYPE_TAG = "explainer_type"
24
25 _MAX_RETRY_ATTEMPT = 3
26
27 ENV_PREFIX_ALIBI_EXPLAIN_SETTINGS = "MLSERVER_MODEL_ALIBI_EXPLAIN_"
28 EXPLAIN_PARAMETERS_TAG = "explain_parameters"
29
30
31 # TODO: add this utility in the codec.
32 def convert_from_bytes(output: ResponseOutput, ty: Optional[Type] = None) -> Any:
33 """
34 This utility function decodes the response from bytes string to python object dict.
35 It is related to decoding StringCodec
36 """
37 if output.shape not in ([1], [1, 1]):
38 raise InvalidExplanationShape(output.shape)
39
40 if ty == str:
41 return bytearray(output.data[0]).decode("UTF-8")
42 else:
43 py_str = bytearray(output.data[0]).decode("UTF-8")
44
45 from ast import literal_eval
46
47 return literal_eval(py_str)
48
49
50 # TODO: add retry and better exceptions handling
51 def remote_predict(
52 v2_payload: InferenceRequest, predictor_url: str, ssl_verify_path: str
53 ) -> InferenceResponse:
54 verify: Union[str, bool] = True
55 if ssl_verify_path != "":
56 verify = ssl_verify_path
57 response_raw = requests.post(predictor_url, json=v2_payload.dict(), verify=verify)
58 if response_raw.status_code != 200:
59 raise RemoteInferenceError(response_raw.status_code, response_raw.reason)
60 return InferenceResponse.parse_raw(response_raw.text)
61
62
63 def remote_metadata(url: str, ssl_verify_path: str) -> MetadataModelResponse:
64 """Get metadata from v2 endpoint"""
65 verify: Union[str, bool] = True
66 if ssl_verify_path != "":
67 verify = ssl_verify_path
68 response_raw = requests.get(url, verify=verify)
69 if response_raw.status_code != 200:
70 raise RemoteInferenceError(response_raw.status_code, response_raw.reason)
71 return MetadataModelResponse.parse_raw(response_raw.text)
72
73
74 def construct_metadata_url(infer_url: str) -> str:
75 """Construct v2 metadata endpoint from v2 infer endpoint"""
76 return re.sub(r"/infer$", "", infer_url)
77
78
79 class AlibiExplainSettings(BaseSettings):
80 """
81 Parameters that apply only to alibi explain models
82 """
83
84 class Config:
85 env_prefix = ENV_PREFIX_ALIBI_EXPLAIN_SETTINGS
86
87 infer_uri: str
88 explainer_type: str
89 init_parameters: Optional[dict]
90 ssl_verify_path: Optional[str]
91
92
93 def import_and_get_class(class_path: str) -> type:
94 last_dot = class_path.rfind(".")
95 klass = getattr(import_module(class_path[:last_dot]), class_path[last_dot + 1 :])
96 return klass
97
98
99 def to_v2_inference_request(
100 input_data: Union[np.ndarray, List[str]],
101 metadata: Optional[MetadataModelResponse],
102 ) -> InferenceRequest:
103 """
104 Encode numpy payload to v2 protocol.
105
106 Note: We only fetch the first-input name and the list of outputs from the metadata
107 endpoint currently. We should consider wider reconciliation with data types etc.
108
109 Parameters
110 ----------
111 input_data
112 Numpy ndarray to encode
113 metadata
114 Extra metadata that can help encode the payload.
115 """
116
117 # MLServer does not really care about a correct input name!
118 input_name = _DEFAULT_INPUT_NAME
119 id_name = generate_uuid()
120 outputs = []
121
122 if metadata is not None:
123 if metadata.inputs:
124 # we only support a big single input numpy
125 input_name = metadata.inputs[0].name
126 if metadata.outputs:
127 outputs = metadata.outputs
128
129 # For List[str] (e.g. AnchorText), we use StringCodec for input
130 input_payload_codec = StringCodec if type(input_data) == list else NumpyCodec
131 v2_request = InferenceRequest(
132 id=id_name,
133 parameters=Parameters(content_type=input_payload_codec.ContentType),
134 # TODO: we probably need to tell alibi about the expected types to use
135 # or even whether it is a probability of classes or targets etc
136 inputs=[
137 input_payload_codec.encode_input( # type: ignore
138 name=input_name,
139 payload=input_data,
140 use_bytes=False,
141 )
142 ],
143 outputs=outputs,
144 )
145 return v2_request
146
[end of runtimes/alibi-explain/mlserver_alibi_explain/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/runtimes/alibi-explain/mlserver_alibi_explain/common.py b/runtimes/alibi-explain/mlserver_alibi_explain/common.py
--- a/runtimes/alibi-explain/mlserver_alibi_explain/common.py
+++ b/runtimes/alibi-explain/mlserver_alibi_explain/common.py
@@ -26,6 +26,7 @@
ENV_PREFIX_ALIBI_EXPLAIN_SETTINGS = "MLSERVER_MODEL_ALIBI_EXPLAIN_"
EXPLAIN_PARAMETERS_TAG = "explain_parameters"
+SELDON_SKIP_LOGGING_HEADER = "Seldon-Skip-Logging"
# TODO: add this utility in the codec.
@@ -54,7 +55,12 @@
verify: Union[str, bool] = True
if ssl_verify_path != "":
verify = ssl_verify_path
- response_raw = requests.post(predictor_url, json=v2_payload.dict(), verify=verify)
+ response_raw = requests.post(
+ predictor_url,
+ json=v2_payload.dict(),
+ headers={SELDON_SKIP_LOGGING_HEADER: "true"},
+ verify=verify,
+ )
if response_raw.status_code != 200:
raise RemoteInferenceError(response_raw.status_code, response_raw.reason)
return InferenceResponse.parse_raw(response_raw.text)
| {"golden_diff": "diff --git a/runtimes/alibi-explain/mlserver_alibi_explain/common.py b/runtimes/alibi-explain/mlserver_alibi_explain/common.py\n--- a/runtimes/alibi-explain/mlserver_alibi_explain/common.py\n+++ b/runtimes/alibi-explain/mlserver_alibi_explain/common.py\n@@ -26,6 +26,7 @@\n \n ENV_PREFIX_ALIBI_EXPLAIN_SETTINGS = \"MLSERVER_MODEL_ALIBI_EXPLAIN_\"\n EXPLAIN_PARAMETERS_TAG = \"explain_parameters\"\n+SELDON_SKIP_LOGGING_HEADER = \"Seldon-Skip-Logging\"\n \n \n # TODO: add this utility in the codec.\n@@ -54,7 +55,12 @@\n verify: Union[str, bool] = True\n if ssl_verify_path != \"\":\n verify = ssl_verify_path\n- response_raw = requests.post(predictor_url, json=v2_payload.dict(), verify=verify)\n+ response_raw = requests.post(\n+ predictor_url,\n+ json=v2_payload.dict(),\n+ headers={SELDON_SKIP_LOGGING_HEADER: \"true\"},\n+ verify=verify,\n+ )\n if response_raw.status_code != 200:\n raise RemoteInferenceError(response_raw.status_code, response_raw.reason)\n return InferenceResponse.parse_raw(response_raw.text)\n", "issue": "Add \"don't log\" header to Alibi runtime\n\n", "before_files": [{"content": "import re\nfrom importlib import import_module\nfrom typing import Any, Optional, Type, Union, List\n\nimport numpy as np\nimport requests\nfrom pydantic import BaseSettings\n\nfrom mlserver.codecs import StringCodec, NumpyCodec\nfrom mlserver.types import (\n ResponseOutput,\n InferenceResponse,\n InferenceRequest,\n Parameters,\n MetadataModelResponse,\n)\nfrom mlserver.utils import generate_uuid\n\nfrom mlserver_alibi_explain.errors import RemoteInferenceError, InvalidExplanationShape\n\n_DEFAULT_INPUT_NAME = \"predict\"\n\nEXPLAINER_TYPE_TAG = \"explainer_type\"\n\n_MAX_RETRY_ATTEMPT = 3\n\nENV_PREFIX_ALIBI_EXPLAIN_SETTINGS = \"MLSERVER_MODEL_ALIBI_EXPLAIN_\"\nEXPLAIN_PARAMETERS_TAG = \"explain_parameters\"\n\n\n# TODO: add this utility in the codec.\ndef convert_from_bytes(output: ResponseOutput, ty: Optional[Type] = None) -> Any:\n \"\"\"\n This utility function decodes the response from bytes string to python object dict.\n It is related to decoding StringCodec\n \"\"\"\n if output.shape not in ([1], [1, 1]):\n raise InvalidExplanationShape(output.shape)\n\n if ty == str:\n return bytearray(output.data[0]).decode(\"UTF-8\")\n else:\n py_str = bytearray(output.data[0]).decode(\"UTF-8\")\n\n from ast import literal_eval\n\n return literal_eval(py_str)\n\n\n# TODO: add retry and better exceptions handling\ndef remote_predict(\n v2_payload: InferenceRequest, predictor_url: str, ssl_verify_path: str\n) -> InferenceResponse:\n verify: Union[str, bool] = True\n if ssl_verify_path != \"\":\n verify = ssl_verify_path\n response_raw = requests.post(predictor_url, json=v2_payload.dict(), verify=verify)\n if response_raw.status_code != 200:\n raise RemoteInferenceError(response_raw.status_code, response_raw.reason)\n return InferenceResponse.parse_raw(response_raw.text)\n\n\ndef remote_metadata(url: str, ssl_verify_path: str) -> MetadataModelResponse:\n \"\"\"Get metadata from v2 endpoint\"\"\"\n verify: Union[str, bool] = True\n if ssl_verify_path != \"\":\n verify = ssl_verify_path\n response_raw = requests.get(url, verify=verify)\n if response_raw.status_code != 200:\n raise RemoteInferenceError(response_raw.status_code, response_raw.reason)\n return MetadataModelResponse.parse_raw(response_raw.text)\n\n\ndef construct_metadata_url(infer_url: str) -> str:\n \"\"\"Construct v2 metadata endpoint from v2 infer endpoint\"\"\"\n return re.sub(r\"/infer$\", \"\", infer_url)\n\n\nclass 
AlibiExplainSettings(BaseSettings):\n \"\"\"\n Parameters that apply only to alibi explain models\n \"\"\"\n\n class Config:\n env_prefix = ENV_PREFIX_ALIBI_EXPLAIN_SETTINGS\n\n infer_uri: str\n explainer_type: str\n init_parameters: Optional[dict]\n ssl_verify_path: Optional[str]\n\n\ndef import_and_get_class(class_path: str) -> type:\n last_dot = class_path.rfind(\".\")\n klass = getattr(import_module(class_path[:last_dot]), class_path[last_dot + 1 :])\n return klass\n\n\ndef to_v2_inference_request(\n input_data: Union[np.ndarray, List[str]],\n metadata: Optional[MetadataModelResponse],\n) -> InferenceRequest:\n \"\"\"\n Encode numpy payload to v2 protocol.\n\n Note: We only fetch the first-input name and the list of outputs from the metadata\n endpoint currently. We should consider wider reconciliation with data types etc.\n\n Parameters\n ----------\n input_data\n Numpy ndarray to encode\n metadata\n Extra metadata that can help encode the payload.\n \"\"\"\n\n # MLServer does not really care about a correct input name!\n input_name = _DEFAULT_INPUT_NAME\n id_name = generate_uuid()\n outputs = []\n\n if metadata is not None:\n if metadata.inputs:\n # we only support a big single input numpy\n input_name = metadata.inputs[0].name\n if metadata.outputs:\n outputs = metadata.outputs\n\n # For List[str] (e.g. AnchorText), we use StringCodec for input\n input_payload_codec = StringCodec if type(input_data) == list else NumpyCodec\n v2_request = InferenceRequest(\n id=id_name,\n parameters=Parameters(content_type=input_payload_codec.ContentType),\n # TODO: we probably need to tell alibi about the expected types to use\n # or even whether it is a probability of classes or targets etc\n inputs=[\n input_payload_codec.encode_input( # type: ignore\n name=input_name,\n payload=input_data,\n use_bytes=False,\n )\n ],\n outputs=outputs,\n )\n return v2_request\n", "path": "runtimes/alibi-explain/mlserver_alibi_explain/common.py"}]} | 1,942 | 285 |
gh_patches_debug_24698 | rasdani/github-patches | git_diff | pre-commit__pre-commit-2207 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow to `mamba install` environments
Thanks for maintaining pre-commit, we really enjoy working with it!
When installing conda environments, [mamba](https://github.com/mamba-org/mamba) is often the preferred option, as it is faster and [requires less memory](https://github.com/conda/conda/issues/5003) when solving an environment.
As of today, pre-commit uses conda for creating environments, as this is hard-coded into the source code:
https://github.com/pre-commit/pre-commit/blob/b944395d6638629a93756ed7279b4368473177af/pre_commit/languages/conda.py#L62-L66
We have a use case, where we want to install pre-commit hooks in CI, but the installation fails because of out-of-memory errors. Installing the same environment with mamba however, succeeds.
It would be nice if one could define whether pre-commit uses conda or mamba for installing environments. This could for example be accomplished by introducing an environment variable `PRE_COMMIT_USE_MAMBA`. If there is a better way to pass such configuration to pre-commit, please let me know.
I'd be glad to work on this feature, if this is something you'd like to see added to pre-commit.
</issue>
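As a rough illustration of the proposal, the installer executable can be resolved from the environment before any environment is created. The variable names below (`PRE_COMMIT_USE_MAMBA`, `PRE_COMMIT_USE_MICROMAMBA`) and the helper name `_conda_exe` follow the fix shown later in this record; plain `conda` stays the default so current behaviour is unchanged.

```
import os


def _conda_exe() -> str:
    # Opt-in via environment variables; conda remains the default.
    if os.environ.get('PRE_COMMIT_USE_MICROMAMBA'):
        return 'micromamba'
    elif os.environ.get('PRE_COMMIT_USE_MAMBA'):
        return 'mamba'
    else:
        return 'conda'
```

The `conda env create` / `conda install` calls in `install_environment` would then use `_conda_exe()` in place of the hard-coded `'conda'` string, and a CI job could simply export `PRE_COMMIT_USE_MAMBA=1` to switch installers.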
<code>
[start of pre_commit/languages/conda.py]
1 import contextlib
2 import os
3 from typing import Generator
4 from typing import Sequence
5 from typing import Tuple
6
7 from pre_commit.envcontext import envcontext
8 from pre_commit.envcontext import PatchesT
9 from pre_commit.envcontext import SubstitutionT
10 from pre_commit.envcontext import UNSET
11 from pre_commit.envcontext import Var
12 from pre_commit.hook import Hook
13 from pre_commit.languages import helpers
14 from pre_commit.prefix import Prefix
15 from pre_commit.util import clean_path_on_failure
16 from pre_commit.util import cmd_output_b
17
18 ENVIRONMENT_DIR = 'conda'
19 get_default_version = helpers.basic_get_default_version
20 healthy = helpers.basic_healthy
21
22
23 def get_env_patch(env: str) -> PatchesT:
24 # On non-windows systems executable live in $CONDA_PREFIX/bin, on Windows
25 # they can be in $CONDA_PREFIX/bin, $CONDA_PREFIX/Library/bin,
26 # $CONDA_PREFIX/Scripts and $CONDA_PREFIX. Whereas the latter only
27 # seems to be used for python.exe.
28 path: SubstitutionT = (os.path.join(env, 'bin'), os.pathsep, Var('PATH'))
29 if os.name == 'nt': # pragma: no cover (platform specific)
30 path = (env, os.pathsep, *path)
31 path = (os.path.join(env, 'Scripts'), os.pathsep, *path)
32 path = (os.path.join(env, 'Library', 'bin'), os.pathsep, *path)
33
34 return (
35 ('PYTHONHOME', UNSET),
36 ('VIRTUAL_ENV', UNSET),
37 ('CONDA_PREFIX', env),
38 ('PATH', path),
39 )
40
41
42 @contextlib.contextmanager
43 def in_env(
44 prefix: Prefix,
45 language_version: str,
46 ) -> Generator[None, None, None]:
47 directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)
48 envdir = prefix.path(directory)
49 with envcontext(get_env_patch(envdir)):
50 yield
51
52
53 def install_environment(
54 prefix: Prefix,
55 version: str,
56 additional_dependencies: Sequence[str],
57 ) -> None:
58 helpers.assert_version_default('conda', version)
59 directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
60
61 env_dir = prefix.path(directory)
62 with clean_path_on_failure(env_dir):
63 cmd_output_b(
64 'conda', 'env', 'create', '-p', env_dir, '--file',
65 'environment.yml', cwd=prefix.prefix_dir,
66 )
67 if additional_dependencies:
68 cmd_output_b(
69 'conda', 'install', '-p', env_dir, *additional_dependencies,
70 cwd=prefix.prefix_dir,
71 )
72
73
74 def run_hook(
75 hook: Hook,
76 file_args: Sequence[str],
77 color: bool,
78 ) -> Tuple[int, bytes]:
79 # TODO: Some rare commands need to be run using `conda run` but mostly we
80 # can run them without which is much quicker and produces a better
81 # output.
82 # cmd = ('conda', 'run', '-p', env_dir) + hook.cmd
83 with in_env(hook.prefix, hook.language_version):
84 return helpers.run_xargs(hook, hook.cmd, file_args, color=color)
85
[end of pre_commit/languages/conda.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/conda.py b/pre_commit/languages/conda.py
--- a/pre_commit/languages/conda.py
+++ b/pre_commit/languages/conda.py
@@ -50,6 +50,15 @@
yield
+def _conda_exe() -> str:
+ if os.environ.get('PRE_COMMIT_USE_MICROMAMBA'):
+ return 'micromamba'
+ elif os.environ.get('PRE_COMMIT_USE_MAMBA'):
+ return 'mamba'
+ else:
+ return 'conda'
+
+
def install_environment(
prefix: Prefix,
version: str,
@@ -58,15 +67,17 @@
helpers.assert_version_default('conda', version)
directory = helpers.environment_dir(ENVIRONMENT_DIR, version)
+ conda_exe = _conda_exe()
+
env_dir = prefix.path(directory)
with clean_path_on_failure(env_dir):
cmd_output_b(
- 'conda', 'env', 'create', '-p', env_dir, '--file',
+ conda_exe, 'env', 'create', '-p', env_dir, '--file',
'environment.yml', cwd=prefix.prefix_dir,
)
if additional_dependencies:
cmd_output_b(
- 'conda', 'install', '-p', env_dir, *additional_dependencies,
+ conda_exe, 'install', '-p', env_dir, *additional_dependencies,
cwd=prefix.prefix_dir,
)
| {"golden_diff": "diff --git a/pre_commit/languages/conda.py b/pre_commit/languages/conda.py\n--- a/pre_commit/languages/conda.py\n+++ b/pre_commit/languages/conda.py\n@@ -50,6 +50,15 @@\n yield\n \n \n+def _conda_exe() -> str:\n+ if os.environ.get('PRE_COMMIT_USE_MICROMAMBA'):\n+ return 'micromamba'\n+ elif os.environ.get('PRE_COMMIT_USE_MAMBA'):\n+ return 'mamba'\n+ else:\n+ return 'conda'\n+\n+\n def install_environment(\n prefix: Prefix,\n version: str,\n@@ -58,15 +67,17 @@\n helpers.assert_version_default('conda', version)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n \n+ conda_exe = _conda_exe()\n+\n env_dir = prefix.path(directory)\n with clean_path_on_failure(env_dir):\n cmd_output_b(\n- 'conda', 'env', 'create', '-p', env_dir, '--file',\n+ conda_exe, 'env', 'create', '-p', env_dir, '--file',\n 'environment.yml', cwd=prefix.prefix_dir,\n )\n if additional_dependencies:\n cmd_output_b(\n- 'conda', 'install', '-p', env_dir, *additional_dependencies,\n+ conda_exe, 'install', '-p', env_dir, *additional_dependencies,\n cwd=prefix.prefix_dir,\n )\n", "issue": "Allow to `mamba install` environments\nThanks for maintaining pre-commit, we really enjoying working with it!\r\n\r\nWhen installing conda environments, [mamba](https://github.com/mamba-org/mamba) is often the preferred option, as it is faster and [requires less memory](https://github.com/conda/conda/issues/5003) when solving an environment.\r\n\r\nAs of today, pre-commit uses conda for creating environments, as this is hard-coded into the source code:\r\n\r\nhttps://github.com/pre-commit/pre-commit/blob/b944395d6638629a93756ed7279b4368473177af/pre_commit/languages/conda.py#L62-L66\r\n\r\nWe have a use case, where we want to install pre-commit hooks in CI, but the installation fails because of out-of-memory errors. Installing the same environment with mamba however, succeeds.\r\n\r\nIt would be nice if one could define whether pre-commit uses conda or mamba for installing environments. This could for example be accomplished by introducing an environment variable `PRE_COMMIT_USE_MAMBA`. If there is a better way to pass such configuration to pre-commit, please let me know.\r\n\r\nI'd be glad to work on this feature, if this is something you'd like to see added to pre-commit.\n", "before_files": [{"content": "import contextlib\nimport os\nfrom typing import Generator\nfrom typing import Sequence\nfrom typing import Tuple\n\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import SubstitutionT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'conda'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef get_env_patch(env: str) -> PatchesT:\n # On non-windows systems executable live in $CONDA_PREFIX/bin, on Windows\n # they can be in $CONDA_PREFIX/bin, $CONDA_PREFIX/Library/bin,\n # $CONDA_PREFIX/Scripts and $CONDA_PREFIX. 
Whereas the latter only\n # seems to be used for python.exe.\n path: SubstitutionT = (os.path.join(env, 'bin'), os.pathsep, Var('PATH'))\n if os.name == 'nt': # pragma: no cover (platform specific)\n path = (env, os.pathsep, *path)\n path = (os.path.join(env, 'Scripts'), os.pathsep, *path)\n path = (os.path.join(env, 'Library', 'bin'), os.pathsep, *path)\n\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', UNSET),\n ('CONDA_PREFIX', env),\n ('PATH', path),\n )\n\n\[email protected]\ndef in_env(\n prefix: Prefix,\n language_version: str,\n) -> Generator[None, None, None]:\n directory = helpers.environment_dir(ENVIRONMENT_DIR, language_version)\n envdir = prefix.path(directory)\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef install_environment(\n prefix: Prefix,\n version: str,\n additional_dependencies: Sequence[str],\n) -> None:\n helpers.assert_version_default('conda', version)\n directory = helpers.environment_dir(ENVIRONMENT_DIR, version)\n\n env_dir = prefix.path(directory)\n with clean_path_on_failure(env_dir):\n cmd_output_b(\n 'conda', 'env', 'create', '-p', env_dir, '--file',\n 'environment.yml', cwd=prefix.prefix_dir,\n )\n if additional_dependencies:\n cmd_output_b(\n 'conda', 'install', '-p', env_dir, *additional_dependencies,\n cwd=prefix.prefix_dir,\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]:\n # TODO: Some rare commands need to be run using `conda run` but mostly we\n # can run them without which is much quicker and produces a better\n # output.\n # cmd = ('conda', 'run', '-p', env_dir) + hook.cmd\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, hook.cmd, file_args, color=color)\n", "path": "pre_commit/languages/conda.py"}]} | 1,664 | 317 |
gh_patches_debug_11191 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-597 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
object has no attribute '__qualname__'
I am getting the following error when trying to enable the scout APM on a starlette application:
```
Traceback (most recent call last):
File "/app/.heroku/python/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 391, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/app/.heroku/python/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/app/.heroku/python/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/app/.heroku/python/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/app/.heroku/python/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/app/.heroku/python/lib/python3.8/site-packages/scout_apm/async_/starlette.py", line 67, in __call__
grab_extra_data()
File "/app/.heroku/python/lib/python3.8/site-packages/scout_apm/async_/starlette.py", line 35, in grab_extra_data
endpoint.__module__, endpoint.__qualname__
AttributeError: 'GraphQLWithBackground' object has no attribute '__qualname__'
```
`GraphQLWithBackground` is a subclass of `ariadne.asgi.GraphQL`.
</issue>
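The traceback shows that `scope["endpoint"]` can be a callable instance (here a `GraphQLWithBackground` object) rather than a plain function or class, and instances generally do not expose `__qualname__`. A minimal sketch of the defensive naming logic, falling back to the instance's class as the fix later in this record does:

```
def controller_operation(endpoint) -> str:
    # Functions and classes carry __qualname__; callable instances such as
    # an ASGI app object usually do not, so name them after their class.
    if not hasattr(endpoint, "__qualname__"):
        endpoint = endpoint.__class__
    return "Controller/{}.{}".format(endpoint.__module__, endpoint.__qualname__)
```

The helper name `controller_operation` is hypothetical; in the middleware the same check would be applied inline before renaming the controller span.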
<code>
[start of src/scout_apm/async_/starlette.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import wrapt
5 from starlette.background import BackgroundTask
6
7 import scout_apm.core
8 from scout_apm.core.tracked_request import TrackedRequest
9 from scout_apm.core.web_requests import asgi_track_request_data
10
11
12 class ScoutMiddleware:
13 def __init__(self, app):
14 self.app = app
15 installed = scout_apm.core.install()
16 self._do_nothing = not installed
17 if installed:
18 install_background_instrumentation()
19
20 async def __call__(self, scope, receive, send):
21 if self._do_nothing or scope["type"] != "http":
22 return await self.app(scope, receive, send)
23
24 tracked_request = TrackedRequest.instance()
25 # Can't name controller until post-routing - see final clause
26 controller_span = tracked_request.start_span(operation="Controller/Unknown")
27
28 asgi_track_request_data(scope, tracked_request)
29
30 def grab_extra_data():
31 if "endpoint" in scope:
32 # Rename top span
33 endpoint = scope["endpoint"]
34 controller_span.operation = "Controller/{}.{}".format(
35 endpoint.__module__, endpoint.__qualname__
36 )
37 tracked_request.is_real_request = True
38
39 # From AuthenticationMiddleware - bypass request.user because it
40 # throws AssertionError if 'user' is not in Scope, and we need a
41 # try/except already
42 try:
43 username = scope["user"].display_name
44 except (KeyError, AttributeError):
45 pass
46 else:
47 tracked_request.tag("username", username)
48
49 async def wrapped_send(data):
50 type_ = data.get("type", None)
51 if type_ == "http.response.start" and 500 <= data.get("status", 200) <= 599:
52 tracked_request.tag("error", "true")
53 elif type_ == "http.response.body" and not data.get("more_body", False):
54 # Finish HTTP span when body finishes sending, not later (e.g.
55 # after background tasks)
56 grab_extra_data()
57 tracked_request.stop_span()
58 return await send(data)
59
60 try:
61 await self.app(scope, receive, wrapped_send)
62 except Exception as exc:
63 tracked_request.tag("error", "true")
64 raise exc
65 finally:
66 if tracked_request.end_time is None:
67 grab_extra_data()
68 tracked_request.stop_span()
69
70
71 background_instrumentation_installed = False
72
73
74 def install_background_instrumentation():
75 global background_instrumentation_installed
76 if background_instrumentation_installed:
77 return
78 background_instrumentation_installed = True
79
80 @wrapt.decorator
81 async def wrapped_background_call(wrapped, instance, args, kwargs):
82 tracked_request = TrackedRequest.instance()
83 tracked_request.is_real_request = True
84
85 with tracked_request.span(
86 operation="Job/{}.{}".format(
87 instance.func.__module__, instance.func.__qualname__
88 )
89 ):
90 return await wrapped(*args, **kwargs)
91
92 BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)
93
[end of src/scout_apm/async_/starlette.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py
--- a/src/scout_apm/async_/starlette.py
+++ b/src/scout_apm/async_/starlette.py
@@ -31,8 +31,11 @@
if "endpoint" in scope:
# Rename top span
endpoint = scope["endpoint"]
+ if not hasattr(endpoint, "__qualname__"):
+ endpoint = endpoint.__class__
controller_span.operation = "Controller/{}.{}".format(
- endpoint.__module__, endpoint.__qualname__
+ endpoint.__module__,
+ endpoint.__qualname__,
)
tracked_request.is_real_request = True
| {"golden_diff": "diff --git a/src/scout_apm/async_/starlette.py b/src/scout_apm/async_/starlette.py\n--- a/src/scout_apm/async_/starlette.py\n+++ b/src/scout_apm/async_/starlette.py\n@@ -31,8 +31,11 @@\n if \"endpoint\" in scope:\n # Rename top span\n endpoint = scope[\"endpoint\"]\n+ if not hasattr(endpoint, \"__qualname__\"):\n+ endpoint = endpoint.__class__\n controller_span.operation = \"Controller/{}.{}\".format(\n- endpoint.__module__, endpoint.__qualname__\n+ endpoint.__module__,\n+ endpoint.__qualname__,\n )\n tracked_request.is_real_request = True\n", "issue": "object has no attribute '__qualname__'\nI am getting the following error when trying to enable the scout APM on a starlette application:\r\n```\r\nTraceback (most recent call last):\r\n File \"/app/.heroku/python/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 391, in run_asgi\r\n result = await app(self.scope, self.receive, self.send)\r\n File \"/app/.heroku/python/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py\", line 45, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/app/.heroku/python/lib/python3.8/site-packages/starlette/applications.py\", line 111, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/app/.heroku/python/lib/python3.8/site-packages/starlette/middleware/errors.py\", line 181, in __call__\r\n raise exc from None\r\n File \"/app/.heroku/python/lib/python3.8/site-packages/starlette/middleware/errors.py\", line 159, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/app/.heroku/python/lib/python3.8/site-packages/scout_apm/async_/starlette.py\", line 67, in __call__\r\n grab_extra_data()\r\n File \"/app/.heroku/python/lib/python3.8/site-packages/scout_apm/async_/starlette.py\", line 35, in grab_extra_data\r\n endpoint.__module__, endpoint.__qualname__\r\nAttributeError: 'GraphQLWithBackground' object has no attribute '__qualname__'\r\n```\r\n\r\n`GraphQLWithBackground` is a subclass of `ariadne.asgi.GraphQL`.\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport wrapt\nfrom starlette.background import BackgroundTask\n\nimport scout_apm.core\nfrom scout_apm.core.tracked_request import TrackedRequest\nfrom scout_apm.core.web_requests import asgi_track_request_data\n\n\nclass ScoutMiddleware:\n def __init__(self, app):\n self.app = app\n installed = scout_apm.core.install()\n self._do_nothing = not installed\n if installed:\n install_background_instrumentation()\n\n async def __call__(self, scope, receive, send):\n if self._do_nothing or scope[\"type\"] != \"http\":\n return await self.app(scope, receive, send)\n\n tracked_request = TrackedRequest.instance()\n # Can't name controller until post-routing - see final clause\n controller_span = tracked_request.start_span(operation=\"Controller/Unknown\")\n\n asgi_track_request_data(scope, tracked_request)\n\n def grab_extra_data():\n if \"endpoint\" in scope:\n # Rename top span\n endpoint = scope[\"endpoint\"]\n controller_span.operation = \"Controller/{}.{}\".format(\n endpoint.__module__, endpoint.__qualname__\n )\n tracked_request.is_real_request = True\n\n # From AuthenticationMiddleware - bypass request.user because it\n # throws AssertionError if 'user' is not in Scope, and we need a\n # try/except already\n try:\n username = scope[\"user\"].display_name\n except (KeyError, AttributeError):\n pass\n else:\n tracked_request.tag(\"username\", username)\n\n async 
def wrapped_send(data):\n type_ = data.get(\"type\", None)\n if type_ == \"http.response.start\" and 500 <= data.get(\"status\", 200) <= 599:\n tracked_request.tag(\"error\", \"true\")\n elif type_ == \"http.response.body\" and not data.get(\"more_body\", False):\n # Finish HTTP span when body finishes sending, not later (e.g.\n # after background tasks)\n grab_extra_data()\n tracked_request.stop_span()\n return await send(data)\n\n try:\n await self.app(scope, receive, wrapped_send)\n except Exception as exc:\n tracked_request.tag(\"error\", \"true\")\n raise exc\n finally:\n if tracked_request.end_time is None:\n grab_extra_data()\n tracked_request.stop_span()\n\n\nbackground_instrumentation_installed = False\n\n\ndef install_background_instrumentation():\n global background_instrumentation_installed\n if background_instrumentation_installed:\n return\n background_instrumentation_installed = True\n\n @wrapt.decorator\n async def wrapped_background_call(wrapped, instance, args, kwargs):\n tracked_request = TrackedRequest.instance()\n tracked_request.is_real_request = True\n\n with tracked_request.span(\n operation=\"Job/{}.{}\".format(\n instance.func.__module__, instance.func.__qualname__\n )\n ):\n return await wrapped(*args, **kwargs)\n\n BackgroundTask.__call__ = wrapped_background_call(BackgroundTask.__call__)\n", "path": "src/scout_apm/async_/starlette.py"}]} | 1,779 | 157 |
gh_patches_debug_5225 | rasdani/github-patches | git_diff | fidals__shopelectro-462 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dc-volumes.yml:27-28: Repair npm container reusing at...
The puzzle `449-27d8c19e` from #449 has to be resolved:
https://github.com/fidals/shopelectro/blob/f8ce85246c00411fcb21d025b0fb42abab81c7d5/docker/dc-volumes.yml#L27-L28
The puzzle was created by duker33 on 28-Jul-18.
Estimate: 60 minutes,
If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).
Cannot rate a product in Firefox
The rating stars cannot be clicked when you want to leave a review for a product
Firefox 58.0.2, clean profile. They work in Chromium
</issue>
<code>
[start of shopelectro/management/commands/_update_catalog/utils.py]
1 import glob
2 import logging
3 import math
4 import os
5 import shutil
6 import subprocess
7 import time
8 from contextlib import contextmanager
9 from itertools import chain
10 from typing import Iterator, Dict
11 from uuid import UUID
12 from xml.etree import ElementTree
13
14 import requests
15 from django.conf import settings
16
17 from shopelectro.exception import DownloadFilesError
18
19 logger = logging.getLogger(__name__)
20 DOWNLOAD_FILES_TIMEOUT = 15.0
21 UUID_TYPE = str
22 Data = Dict[str, Dict[str, dict]]
23 NOT_SAVE_TEMPLATE = '{entity} with name="{name}" has no {field}. It\'ll not be' \
24 ' saved'
25
26
27 def floor(x: float, precision=0) -> float:
28 """
29 The same behaviour as `math.floor`, but with precision.
30
31 >>> floor(1.234, precision=2) # result: 1.23
32 """
33 k = 10**precision
34 return math.floor(x * k) / k
35
36
37 def is_correct_uuid(uuid_):
38 try:
39 val = UUID(uuid_)
40 except (ValueError, TypeError):
41 return False
42 return str(val) == uuid_
43
44
45 class XmlFile:
46
47 namespace = '{urn:1C.ru:commerceml_2}'
48
49 def __init__(self, fetch_callback, xml_path_pattern, xpath_queries,
50 extra_options=None):
51 self.fetch_callback = fetch_callback
52 self.xml_path_pattern = xml_path_pattern
53 self.xpath_queries = xpath_queries
54 self.extra_options = extra_options or {}
55
56 @property
57 def parsed_files(self):
58 """Get parsed xml files, that matched the path pattern."""
59 xml_files = glob.glob(os.path.join(
60 settings.ASSETS_DIR, self.xml_path_pattern
61 ))
62 assert xml_files, 'Files on path {} does not exist.'.format(
63 self.xml_path_pattern
64 )
65 return [ElementTree.parse(file) for file in xml_files]
66
67 @property
68 def xpaths(self):
69 """Get xpath queries for xml."""
70 return {
71 name: query.format(self.namespace)
72 for name, query in self.xpath_queries.items()
73 }
74
75 def get_data(self) -> Iterator:
76 """
77 Get data from xml files.
78
79 Example files with products names or prices.
80 """
81 return chain.from_iterable(
82 self.fetch_callback(file, self)
83 for file in self.parsed_files
84 )
85
86
87 @contextmanager
88 def collect_errors(error_types: tuple):
89 errors = []
90
91 @contextmanager
92 def collect():
93 try:
94 yield
95 except error_types as error:
96 errors.append(error)
97 yield collect
98 if errors:
99 raise errors[0]
100
101
102 @contextmanager
103 def download_catalog(destination):
104 """Download catalog's xml files and delete after handle them."""
105 wget_command = (
106 'wget -r -P {} ftp://{}:{}@{}/webdata/'
107 ' 2>&1 | grep "время\|time\|Downloaded"'.format(
108 destination,
109 settings.FTP_USER,
110 settings.FTP_PASS,
111 settings.FTP_IP,
112 )
113 )
114
115 try:
116 subprocess.run(wget_command, timeout=DOWNLOAD_FILES_TIMEOUT, shell=True)
117 except subprocess.TimeoutExpired as e:
118 raise DownloadFilesError(str(e))
119
120 assert os.path.exists(os.path.join(
121 destination, settings.FTP_IP)), 'Files do not downloaded...'
122 logger.info('Download catalog - completed...')
123
124 try:
125 yield
126 finally:
127 # remove downloaded data
128 shutil.rmtree(os.path.join(destination, settings.FTP_IP))
129
130
131 def report(error):
132 report_url = getattr(settings, 'SLACK_REPORT_URL', None)
133 if report_url is not None:
134 requests.post(
135 url=report_url,
136 json={
137 'text': '*Не удалось обновить каталог Shopelectro.*\n'
138 '*Время*: {}\n'
139 '*Ошибка*: {}'.format(time.ctime(), error),
140 }
141 )
142
[end of shopelectro/management/commands/_update_catalog/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/shopelectro/management/commands/_update_catalog/utils.py b/shopelectro/management/commands/_update_catalog/utils.py
--- a/shopelectro/management/commands/_update_catalog/utils.py
+++ b/shopelectro/management/commands/_update_catalog/utils.py
@@ -17,7 +17,7 @@
from shopelectro.exception import DownloadFilesError
logger = logging.getLogger(__name__)
-DOWNLOAD_FILES_TIMEOUT = 15.0
+DOWNLOAD_FILES_TIMEOUT = 40.0
UUID_TYPE = str
Data = Dict[str, Dict[str, dict]]
NOT_SAVE_TEMPLATE = '{entity} with name="{name}" has no {field}. It\'ll not be' \
| {"golden_diff": "diff --git a/shopelectro/management/commands/_update_catalog/utils.py b/shopelectro/management/commands/_update_catalog/utils.py\n--- a/shopelectro/management/commands/_update_catalog/utils.py\n+++ b/shopelectro/management/commands/_update_catalog/utils.py\n@@ -17,7 +17,7 @@\n from shopelectro.exception import DownloadFilesError\n \n logger = logging.getLogger(__name__)\n-DOWNLOAD_FILES_TIMEOUT = 15.0\n+DOWNLOAD_FILES_TIMEOUT = 40.0\n UUID_TYPE = str\n Data = Dict[str, Dict[str, dict]]\n NOT_SAVE_TEMPLATE = '{entity} with name=\"{name}\" has no {field}. It\\'ll not be' \\\n", "issue": "dc-volumes.yml:27-28: Repair npm container reusing at...\nThe puzzle `449-27d8c19e` from #449 has to be resolved:\n\nhttps://github.com/fidals/shopelectro/blob/f8ce85246c00411fcb21d025b0fb42abab81c7d5/docker/dc-volumes.yml#L27-L28\n\nThe puzzle was created by duker33 on 28-Jul-18. \n\nEstimate: 60 minutes, \n\nIf you have any technical questions, don't ask me, submit new tickets instead. The task will be \"done\" when the problem is fixed and the text of the puzzle is _removed_ from the source code. Here is more about [PDD](http://www.yegor256.com/2009/03/04/pdd.html) and [about me](http://www.yegor256.com/2017/04/05/pdd-in-action.html).\n\u0412 Firefox \u043d\u0435 \u043f\u043e\u0441\u0442\u0430\u0432\u0438\u0442\u044c \u043e\u0446\u0435\u043d\u043a\u0443 \u0442\u043e\u0432\u0430\u0440\u0443\n\u041d\u0435 \u043d\u0430\u0436\u0438\u043c\u0430\u044e\u0442\u0441\u044f \u0437\u0432\u0435\u0437\u0434\u043e\u0447\u043a\u0438, \u043a\u043e\u0433\u0434\u0430 \u0445\u043e\u0447\u0435\u0448\u044c \u043e\u0441\u0442\u0430\u0432\u0438\u0442\u044c \u043e\u0442\u0437\u044b\u0432 \u043e \u0442\u043e\u0432\u0430\u0440\u0435\r\n\r\nFirefox 58.0.2, \u0447\u0438\u0441\u0442\u044b\u0439 \u043f\u0440\u043e\u0444\u0438\u043b\u044c. \u0412 Chromium \u0440\u0430\u0431\u043e\u0442\u0430\u044e\u0442\n", "before_files": [{"content": "import glob\nimport logging\nimport math\nimport os\nimport shutil\nimport subprocess\nimport time\nfrom contextlib import contextmanager\nfrom itertools import chain\nfrom typing import Iterator, Dict\nfrom uuid import UUID\nfrom xml.etree import ElementTree\n\nimport requests\nfrom django.conf import settings\n\nfrom shopelectro.exception import DownloadFilesError\n\nlogger = logging.getLogger(__name__)\nDOWNLOAD_FILES_TIMEOUT = 15.0\nUUID_TYPE = str\nData = Dict[str, Dict[str, dict]]\nNOT_SAVE_TEMPLATE = '{entity} with name=\"{name}\" has no {field}. 
It\\'ll not be' \\\n ' saved'\n\n\ndef floor(x: float, precision=0) -> float:\n \"\"\"\n The same behaviour as `math.floor`, but with precision.\n\n >>> floor(1.234, precision=2) # result: 1.23\n \"\"\"\n k = 10**precision\n return math.floor(x * k) / k\n\n\ndef is_correct_uuid(uuid_):\n try:\n val = UUID(uuid_)\n except (ValueError, TypeError):\n return False\n return str(val) == uuid_\n\n\nclass XmlFile:\n\n namespace = '{urn:1C.ru:commerceml_2}'\n\n def __init__(self, fetch_callback, xml_path_pattern, xpath_queries,\n extra_options=None):\n self.fetch_callback = fetch_callback\n self.xml_path_pattern = xml_path_pattern\n self.xpath_queries = xpath_queries\n self.extra_options = extra_options or {}\n\n @property\n def parsed_files(self):\n \"\"\"Get parsed xml files, that matched the path pattern.\"\"\"\n xml_files = glob.glob(os.path.join(\n settings.ASSETS_DIR, self.xml_path_pattern\n ))\n assert xml_files, 'Files on path {} does not exist.'.format(\n self.xml_path_pattern\n )\n return [ElementTree.parse(file) for file in xml_files]\n\n @property\n def xpaths(self):\n \"\"\"Get xpath queries for xml.\"\"\"\n return {\n name: query.format(self.namespace)\n for name, query in self.xpath_queries.items()\n }\n\n def get_data(self) -> Iterator:\n \"\"\"\n Get data from xml files.\n\n Example files with products names or prices.\n \"\"\"\n return chain.from_iterable(\n self.fetch_callback(file, self)\n for file in self.parsed_files\n )\n\n\n@contextmanager\ndef collect_errors(error_types: tuple):\n errors = []\n\n @contextmanager\n def collect():\n try:\n yield\n except error_types as error:\n errors.append(error)\n yield collect\n if errors:\n raise errors[0]\n\n\n@contextmanager\ndef download_catalog(destination):\n \"\"\"Download catalog's xml files and delete after handle them.\"\"\"\n wget_command = (\n 'wget -r -P {} ftp://{}:{}@{}/webdata/'\n ' 2>&1 | grep \"\u0432\u0440\u0435\u043c\u044f\\|time\\|Downloaded\"'.format(\n destination,\n settings.FTP_USER,\n settings.FTP_PASS,\n settings.FTP_IP,\n )\n )\n\n try:\n subprocess.run(wget_command, timeout=DOWNLOAD_FILES_TIMEOUT, shell=True)\n except subprocess.TimeoutExpired as e:\n raise DownloadFilesError(str(e))\n\n assert os.path.exists(os.path.join(\n destination, settings.FTP_IP)), 'Files do not downloaded...'\n logger.info('Download catalog - completed...')\n\n try:\n yield\n finally:\n # remove downloaded data\n shutil.rmtree(os.path.join(destination, settings.FTP_IP))\n\n\ndef report(error):\n report_url = getattr(settings, 'SLACK_REPORT_URL', None)\n if report_url is not None:\n requests.post(\n url=report_url,\n json={\n 'text': '*\u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0431\u043d\u043e\u0432\u0438\u0442\u044c \u043a\u0430\u0442\u0430\u043b\u043e\u0433 Shopelectro.*\\n'\n '*\u0412\u0440\u0435\u043c\u044f*: {}\\n'\n '*\u041e\u0448\u0438\u0431\u043a\u0430*: {}'.format(time.ctime(), error),\n }\n )\n", "path": "shopelectro/management/commands/_update_catalog/utils.py"}]} | 2,019 | 156 |
gh_patches_debug_25934 | rasdani/github-patches | git_diff | secdev__scapy-2431 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for IPv6 addresses in VRRPv3 layer
#### Brief description
The VRRPv3 layer has a field addrlist which supports IPv4 addresses only.
This prevents use of the layer for generating IPv6 VRRPv3 advertisement packets.
#### Environment
- Scapy version: `v2.4.3-242-gdcfb63d0`
- Python version: `3.7.6`
- Operating System: `macOS Mojave 10.14.6`
Problem was originally observed on CentOS 7.7 with scapy v2.4.3 using python 3.6. Tested using current github master branch (as above) on macOS to confirm.
#### How to reproduce
```
from scapy.layers.l2 import Ether
from scapy.layers.inet6 import IPv6
from scapy.layers.vrrp import VRRPv3
pkt = (Ether(src="00:00:5e:00:02:64",dst="33:33:00:00:00:12")/IPv6(src="2001:db8::1",dst="ff02::12")/VRRPv3(vrid=100,ipcount=1,addrlist=[ "2001:db8::100" ]))
```
#### Actual result
```
Traceback (most recent call last):
File "/Users/mgsmith/src/scapy/scapy/fields.py", line 535, in h2i
inet_aton(x)
OSError: illegal IP address string passed to inet_aton
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/mgsmith/src/scapy/scapy/base_classes.py", line 266, in __call__
i.__init__(*args, **kargs)
File "/Users/mgsmith/src/scapy/scapy/packet.py", line 157, in __init__
self.fields[fname] = self.get_field(fname).any2i(self, value)
File "/Users/mgsmith/src/scapy/scapy/fields.py", line 1412, in any2i
return [self.field.any2i(pkt, e) for e in x]
File "/Users/mgsmith/src/scapy/scapy/fields.py", line 1412, in <listcomp>
return [self.field.any2i(pkt, e) for e in x]
File "/Users/mgsmith/src/scapy/scapy/fields.py", line 562, in any2i
return self.h2i(pkt, x)
File "/Users/mgsmith/src/scapy/scapy/fields.py", line 537, in h2i
x = Net(x)
File "/Users/mgsmith/src/scapy/scapy/base_classes.py", line 108, in __init__
self.parsed, self.netmask = self._parse_net(net)
File "/Users/mgsmith/src/scapy/scapy/base_classes.py", line 101, in _parse_net
tmp[0] = socket.gethostbyname(tmp[0])
socket.gaierror: [Errno 8] nodename nor servname provided, or not known
```
#### Expected result
No error, valid packet created.
#### Related resources
RFC 5798
I have a patch that adds this support, which I will submit as a pull request. See https://github.com/mgsmith1000/scapy/commit/020a6695d48d16b5521bf78a4a1ef5efba14fd21.
</issue>
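One way to let a single `addrlist` field carry either address family is to select the inner field based on the underlayer protocol. The sketch below uses scapy's `MultipleTypeField` to pick `IPField` under `IP` and `IP6Field` under `IPv6`, mirroring the fix shown later in this record; it is an illustration of the field definition, not a complete patch.

```
from scapy.fields import (FieldListField, IPField, IP6Field,
                          MultipleTypeField, StrField)
from scapy.layers.inet import IP
from scapy.layers.inet6 import IPv6

# Drop-in replacement for the IPv4-only addrlist entry in VRRPv3.fields_desc
addrlist_field = MultipleTypeField(
    [
        # IPv4 addresses when the advertisement is carried over IPv4
        (FieldListField("addrlist", [], IPField("", "0.0.0.0"),
                        count_from=lambda pkt: pkt.ipcount),
         lambda pkt: isinstance(pkt.underlayer, IP)),
        # IPv6 addresses when carried over IPv6, per RFC 5798
        (FieldListField("addrlist", [], IP6Field("", "::"),
                        count_from=lambda pkt: pkt.ipcount),
         lambda pkt: isinstance(pkt.underlayer, IPv6)),
    ],
    StrField("addrlist", ""),  # fallback when there is no IP/IPv6 underlayer
)
```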
<code>
[start of scapy/layers/vrrp.py]
1 # This file is part of Scapy
2 # See http://www.secdev.org/projects/scapy for more information
3 # Copyright (C) Philippe Biondi <[email protected]>
4 # Copyright (C) 6WIND <[email protected]>
5 # This program is published under a GPLv2 license
6
7 """
8 VRRP (Virtual Router Redundancy Protocol).
9 """
10
11 from scapy.packet import Packet, bind_layers
12 from scapy.fields import BitField, ByteField, FieldLenField, FieldListField, \
13 IPField, IntField, XShortField
14 from scapy.compat import chb, orb
15 from scapy.layers.inet import IP, in4_chksum, checksum
16 from scapy.layers.inet6 import IPv6, in6_chksum
17 from scapy.error import warning
18
19 IPPROTO_VRRP = 112
20
21 # RFC 3768 - Virtual Router Redundancy Protocol (VRRP)
22
23
24 class VRRP(Packet):
25 fields_desc = [
26 BitField("version", 2, 4),
27 BitField("type", 1, 4),
28 ByteField("vrid", 1),
29 ByteField("priority", 100),
30 FieldLenField("ipcount", None, count_of="addrlist", fmt="B"),
31 ByteField("authtype", 0),
32 ByteField("adv", 1),
33 XShortField("chksum", None),
34 FieldListField("addrlist", [], IPField("", "0.0.0.0"),
35 count_from=lambda pkt: pkt.ipcount),
36 IntField("auth1", 0),
37 IntField("auth2", 0)]
38
39 def post_build(self, p, pay):
40 if self.chksum is None:
41 ck = checksum(p)
42 p = p[:6] + chb(ck >> 8) + chb(ck & 0xff) + p[8:]
43 return p
44
45 @classmethod
46 def dispatch_hook(cls, _pkt=None, *args, **kargs):
47 if _pkt and len(_pkt) >= 9:
48 ver_n_type = orb(_pkt[0])
49 if ver_n_type >= 48 and ver_n_type <= 57: # Version == 3
50 return VRRPv3
51 return VRRP
52
53
54 # RFC 5798 - Virtual Router Redundancy Protocol (VRRP) Version 3
55 class VRRPv3(Packet):
56 fields_desc = [
57 BitField("version", 3, 4),
58 BitField("type", 1, 4),
59 ByteField("vrid", 1),
60 ByteField("priority", 100),
61 FieldLenField("ipcount", None, count_of="addrlist", fmt="B"),
62 BitField("res", 0, 4),
63 BitField("adv", 100, 12),
64 XShortField("chksum", None),
65 # FIXME: addrlist should also allow IPv6 addresses :/
66 FieldListField("addrlist", [], IPField("", "0.0.0.0"),
67 count_from=lambda pkt: pkt.ipcount)]
68
69 def post_build(self, p, pay):
70 if self.chksum is None:
71 if isinstance(self.underlayer, IP):
72 ck = in4_chksum(112, self.underlayer, p)
73 elif isinstance(self.underlayer, IPv6):
74 ck = in6_chksum(112, self.underlayer, p)
75 else:
76 warning("No IP(v6) layer to compute checksum on VRRP. Leaving null") # noqa: E501
77 ck = 0
78 p = p[:6] + chb(ck >> 8) + chb(ck & 0xff) + p[8:]
79 return p
80
81 @classmethod
82 def dispatch_hook(cls, _pkt=None, *args, **kargs):
83 if _pkt and len(_pkt) >= 16:
84 ver_n_type = orb(_pkt[0])
85 if ver_n_type < 48 or ver_n_type > 57: # Version != 3
86 return VRRP
87 return VRRPv3
88
89
90 # IPv6 is supported only on VRRPv3
91 # Warning: those layers need to be un-binded in the CARP contrib module.
92 # If you add/remove any, remember to also edit the one in CARP.py
93 bind_layers(IP, VRRP, proto=IPPROTO_VRRP)
94 bind_layers(IP, VRRPv3, proto=IPPROTO_VRRP)
95 bind_layers(IPv6, VRRPv3, nh=IPPROTO_VRRP)
96
[end of scapy/layers/vrrp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scapy/layers/vrrp.py b/scapy/layers/vrrp.py
--- a/scapy/layers/vrrp.py
+++ b/scapy/layers/vrrp.py
@@ -10,7 +10,7 @@
from scapy.packet import Packet, bind_layers
from scapy.fields import BitField, ByteField, FieldLenField, FieldListField, \
- IPField, IntField, XShortField
+ IPField, IP6Field, IntField, MultipleTypeField, StrField, XShortField
from scapy.compat import chb, orb
from scapy.layers.inet import IP, in4_chksum, checksum
from scapy.layers.inet6 import IPv6, in6_chksum
@@ -62,9 +62,18 @@
BitField("res", 0, 4),
BitField("adv", 100, 12),
XShortField("chksum", None),
- # FIXME: addrlist should also allow IPv6 addresses :/
- FieldListField("addrlist", [], IPField("", "0.0.0.0"),
- count_from=lambda pkt: pkt.ipcount)]
+ MultipleTypeField(
+ [
+ (FieldListField("addrlist", [], IPField("", "0.0.0.0"),
+ count_from=lambda pkt: pkt.ipcount),
+ lambda p: isinstance(p.underlayer, IP)),
+ (FieldListField("addrlist", [], IP6Field("", "::"),
+ count_from=lambda pkt: pkt.ipcount),
+ lambda p: isinstance(p.underlayer, IPv6)),
+ ],
+ StrField("addrlist", "")
+ )
+ ]
def post_build(self, p, pay):
if self.chksum is None:
| {"golden_diff": "diff --git a/scapy/layers/vrrp.py b/scapy/layers/vrrp.py\n--- a/scapy/layers/vrrp.py\n+++ b/scapy/layers/vrrp.py\n@@ -10,7 +10,7 @@\n \n from scapy.packet import Packet, bind_layers\n from scapy.fields import BitField, ByteField, FieldLenField, FieldListField, \\\n- IPField, IntField, XShortField\n+ IPField, IP6Field, IntField, MultipleTypeField, StrField, XShortField\n from scapy.compat import chb, orb\n from scapy.layers.inet import IP, in4_chksum, checksum\n from scapy.layers.inet6 import IPv6, in6_chksum\n@@ -62,9 +62,18 @@\n BitField(\"res\", 0, 4),\n BitField(\"adv\", 100, 12),\n XShortField(\"chksum\", None),\n- # FIXME: addrlist should also allow IPv6 addresses :/\n- FieldListField(\"addrlist\", [], IPField(\"\", \"0.0.0.0\"),\n- count_from=lambda pkt: pkt.ipcount)]\n+ MultipleTypeField(\n+ [\n+ (FieldListField(\"addrlist\", [], IPField(\"\", \"0.0.0.0\"),\n+ count_from=lambda pkt: pkt.ipcount),\n+ lambda p: isinstance(p.underlayer, IP)),\n+ (FieldListField(\"addrlist\", [], IP6Field(\"\", \"::\"),\n+ count_from=lambda pkt: pkt.ipcount),\n+ lambda p: isinstance(p.underlayer, IPv6)),\n+ ],\n+ StrField(\"addrlist\", \"\")\n+ )\n+ ]\n \n def post_build(self, p, pay):\n if self.chksum is None:\n", "issue": "Add support for IPv6 addresses in VRRPv3 layer\n#### Brief description\r\n\r\nThe VRRPv3 layer has a field addrlist which supports IPv4 addresses only.\r\n\r\nThis prevents use of the layer for generating IPv6 VRRPv3 advertisement packets.\r\n\r\n#### Environment\r\n\r\n- Scapy version: `v2.4.3-242-gdcfb63d0`\r\n- Python version: `3.7.6`\r\n- Operating System: `macOS Mojave 10.14.6`\r\n\r\nProblem was originally observed on CentOS 7.7 with scapy v2.4.3 using python 3.6. Tested using current github master branch (as above) on macOS to confirm.\r\n\r\n#### How to reproduce\r\n\r\n```\r\nfrom scapy.layers.l2 import Ether\r\nfrom scapy.layers.inet6 import IPv6\r\nfrom scapy.layers.vrrp import VRRPv3\r\n\r\npkt = (Ether(src=\"00:00:5e:00:02:64\",dst=\"33:33:00:00:00:12\")/IPv6(src=\"2001:db8::1\",dst=\"ff02::12\")/VRRPv3(vrid=100,ipcount=1,addrlist=[ \"2001:db8::100\" ]))\r\n```\r\n\r\n#### Actual result\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/mgsmith/src/scapy/scapy/fields.py\", line 535, in h2i\r\n inet_aton(x)\r\nOSError: illegal IP address string passed to inet_aton\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/Users/mgsmith/src/scapy/scapy/base_classes.py\", line 266, in __call__\r\n i.__init__(*args, **kargs)\r\n File \"/Users/mgsmith/src/scapy/scapy/packet.py\", line 157, in __init__\r\n self.fields[fname] = self.get_field(fname).any2i(self, value)\r\n File \"/Users/mgsmith/src/scapy/scapy/fields.py\", line 1412, in any2i\r\n return [self.field.any2i(pkt, e) for e in x]\r\n File \"/Users/mgsmith/src/scapy/scapy/fields.py\", line 1412, in <listcomp>\r\n return [self.field.any2i(pkt, e) for e in x]\r\n File \"/Users/mgsmith/src/scapy/scapy/fields.py\", line 562, in any2i\r\n return self.h2i(pkt, x)\r\n File \"/Users/mgsmith/src/scapy/scapy/fields.py\", line 537, in h2i\r\n x = Net(x)\r\n File \"/Users/mgsmith/src/scapy/scapy/base_classes.py\", line 108, in __init__\r\n self.parsed, self.netmask = self._parse_net(net)\r\n File \"/Users/mgsmith/src/scapy/scapy/base_classes.py\", line 101, in _parse_net\r\n tmp[0] = socket.gethostbyname(tmp[0])\r\nsocket.gaierror: [Errno 8] nodename nor servname provided, or not 
known\r\n```\r\n\r\n#### Expected result\r\n\r\nNo error, valid packet created.\r\n\r\n#### Related resources\r\n\r\nRFC 5798\r\nI have a patch which adds this support which I will submit as a pull request. See https://github.com/mgsmith1000/scapy/commit/020a6695d48d16b5521bf78a4a1ef5efba14fd21.\r\n\n", "before_files": [{"content": "# This file is part of Scapy\n# See http://www.secdev.org/projects/scapy for more information\n# Copyright (C) Philippe Biondi <[email protected]>\n# Copyright (C) 6WIND <[email protected]>\n# This program is published under a GPLv2 license\n\n\"\"\"\nVRRP (Virtual Router Redundancy Protocol).\n\"\"\"\n\nfrom scapy.packet import Packet, bind_layers\nfrom scapy.fields import BitField, ByteField, FieldLenField, FieldListField, \\\n IPField, IntField, XShortField\nfrom scapy.compat import chb, orb\nfrom scapy.layers.inet import IP, in4_chksum, checksum\nfrom scapy.layers.inet6 import IPv6, in6_chksum\nfrom scapy.error import warning\n\nIPPROTO_VRRP = 112\n\n# RFC 3768 - Virtual Router Redundancy Protocol (VRRP)\n\n\nclass VRRP(Packet):\n fields_desc = [\n BitField(\"version\", 2, 4),\n BitField(\"type\", 1, 4),\n ByteField(\"vrid\", 1),\n ByteField(\"priority\", 100),\n FieldLenField(\"ipcount\", None, count_of=\"addrlist\", fmt=\"B\"),\n ByteField(\"authtype\", 0),\n ByteField(\"adv\", 1),\n XShortField(\"chksum\", None),\n FieldListField(\"addrlist\", [], IPField(\"\", \"0.0.0.0\"),\n count_from=lambda pkt: pkt.ipcount),\n IntField(\"auth1\", 0),\n IntField(\"auth2\", 0)]\n\n def post_build(self, p, pay):\n if self.chksum is None:\n ck = checksum(p)\n p = p[:6] + chb(ck >> 8) + chb(ck & 0xff) + p[8:]\n return p\n\n @classmethod\n def dispatch_hook(cls, _pkt=None, *args, **kargs):\n if _pkt and len(_pkt) >= 9:\n ver_n_type = orb(_pkt[0])\n if ver_n_type >= 48 and ver_n_type <= 57: # Version == 3\n return VRRPv3\n return VRRP\n\n\n# RFC 5798 - Virtual Router Redundancy Protocol (VRRP) Version 3\nclass VRRPv3(Packet):\n fields_desc = [\n BitField(\"version\", 3, 4),\n BitField(\"type\", 1, 4),\n ByteField(\"vrid\", 1),\n ByteField(\"priority\", 100),\n FieldLenField(\"ipcount\", None, count_of=\"addrlist\", fmt=\"B\"),\n BitField(\"res\", 0, 4),\n BitField(\"adv\", 100, 12),\n XShortField(\"chksum\", None),\n # FIXME: addrlist should also allow IPv6 addresses :/\n FieldListField(\"addrlist\", [], IPField(\"\", \"0.0.0.0\"),\n count_from=lambda pkt: pkt.ipcount)]\n\n def post_build(self, p, pay):\n if self.chksum is None:\n if isinstance(self.underlayer, IP):\n ck = in4_chksum(112, self.underlayer, p)\n elif isinstance(self.underlayer, IPv6):\n ck = in6_chksum(112, self.underlayer, p)\n else:\n warning(\"No IP(v6) layer to compute checksum on VRRP. Leaving null\") # noqa: E501\n ck = 0\n p = p[:6] + chb(ck >> 8) + chb(ck & 0xff) + p[8:]\n return p\n\n @classmethod\n def dispatch_hook(cls, _pkt=None, *args, **kargs):\n if _pkt and len(_pkt) >= 16:\n ver_n_type = orb(_pkt[0])\n if ver_n_type < 48 or ver_n_type > 57: # Version != 3\n return VRRP\n return VRRPv3\n\n\n# IPv6 is supported only on VRRPv3\n# Warning: those layers need to be un-binded in the CARP contrib module.\n# If you add/remove any, remember to also edit the one in CARP.py\nbind_layers(IP, VRRP, proto=IPPROTO_VRRP)\nbind_layers(IP, VRRPv3, proto=IPPROTO_VRRP)\nbind_layers(IPv6, VRRPv3, nh=IPPROTO_VRRP)\n", "path": "scapy/layers/vrrp.py"}]} | 2,600 | 397 |
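For reference, a minimal sketch of exercising the patched `VRRPv3` layer over IPv6, reusing the addresses from the reproduction snippet quoted above; it assumes the `MultipleTypeField`/`IP6Field` change from the diff has been applied and is purely illustrative.

```python
from scapy.layers.l2 import Ether
from scapy.layers.inet6 import IPv6
from scapy.layers.vrrp import VRRPv3

# VRRPv3 advertisement carrying one IPv6 virtual address. With the patched
# addrlist field, the IP6Field variant is selected because the underlayer is IPv6.
pkt = (
    Ether(src="00:00:5e:00:02:64", dst="33:33:00:00:00:12")
    / IPv6(src="2001:db8::1", dst="ff02::12")
    / VRRPv3(vrid=100, ipcount=1, addrlist=["2001:db8::100"])
)

raw_bytes = bytes(pkt)  # building triggers post_build(), which computes the checksum via in6_chksum()
print(pkt.summary())
```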
gh_patches_debug_3806 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-2459 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] unable to change mail rule order in new frontend
### Description
I was setting up mail rules and wasn't able to get them to work properly until I realized that in the Django admin I was able to set an order for those rules. In the new frontend the order was already displayed the way I actually wanted it, but in the Django admin I could see they were in the wrong order.
### Steps to reproduce
1. go to django admin
2. change the mail rule order
3. go to new mail rule frontend
4. mail rule order doesn't match the django admin order
5. (also there is no way to change the order in the new frontend)
### Webserver logs
```bash
n/a
```
### Browser logs
_No response_
### Paperless-ngx version
1.11.4
### Host OS
Debian 11
### Installation method
Bare metal
### Browser
Firefox 108
### Configuration changes
_No response_
### Other
_No response_
</issue>
<code>
[start of src/paperless_mail/views.py]
1 from paperless.views import StandardPagination
2 from paperless_mail.models import MailAccount
3 from paperless_mail.models import MailRule
4 from paperless_mail.serialisers import MailAccountSerializer
5 from paperless_mail.serialisers import MailRuleSerializer
6 from rest_framework.permissions import IsAuthenticated
7 from rest_framework.viewsets import ModelViewSet
8
9
10 class MailAccountViewSet(ModelViewSet):
11 model = MailAccount
12
13 queryset = MailAccount.objects.all().order_by("pk")
14 serializer_class = MailAccountSerializer
15 pagination_class = StandardPagination
16 permission_classes = (IsAuthenticated,)
17
18 # TODO: user-scoped
19 # def get_queryset(self):
20 # user = self.request.user
21 # return MailAccount.objects.filter(user=user)
22
23 # def perform_create(self, serializer):
24 # serializer.save(user=self.request.user)
25
26
27 class MailRuleViewSet(ModelViewSet):
28 model = MailRule
29
30 queryset = MailRule.objects.all().order_by("pk")
31 serializer_class = MailRuleSerializer
32 pagination_class = StandardPagination
33 permission_classes = (IsAuthenticated,)
34
35 # TODO: user-scoped
36 # def get_queryset(self):
37 # user = self.request.user
38 # return MailRule.objects.filter(user=user)
39
40 # def perform_create(self, serializer):
41 # serializer.save(user=self.request.user)
42
[end of src/paperless_mail/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/paperless_mail/views.py b/src/paperless_mail/views.py
--- a/src/paperless_mail/views.py
+++ b/src/paperless_mail/views.py
@@ -27,7 +27,7 @@
class MailRuleViewSet(ModelViewSet):
model = MailRule
- queryset = MailRule.objects.all().order_by("pk")
+ queryset = MailRule.objects.all().order_by("order")
serializer_class = MailRuleSerializer
pagination_class = StandardPagination
permission_classes = (IsAuthenticated,)
| {"golden_diff": "diff --git a/src/paperless_mail/views.py b/src/paperless_mail/views.py\n--- a/src/paperless_mail/views.py\n+++ b/src/paperless_mail/views.py\n@@ -27,7 +27,7 @@\n class MailRuleViewSet(ModelViewSet):\n model = MailRule\n \n- queryset = MailRule.objects.all().order_by(\"pk\")\n+ queryset = MailRule.objects.all().order_by(\"order\")\n serializer_class = MailRuleSerializer\n pagination_class = StandardPagination\n permission_classes = (IsAuthenticated,)\n", "issue": "[BUG] unable to change mail rule order in new frontend\n### Description\n\ni was setting up mail rules and wasn't able to get them to work properly until i realized that in the django admin i was able to set a order for those rules. in the new frontend the order was already displayed how i actually wanted them to be but in the django admin i was able to see they were in the wrong order\n\n### Steps to reproduce\n\n1. go to django admin\r\n2. change mail order rule\r\n3. go to new mail rule frontend\r\n4. mail rule order doesn't match the django admin order\r\n5. (also there is no way to change the order in the new frontend)\n\n### Webserver logs\n\n```bash\nn/a\n```\n\n\n### Browser logs\n\n_No response_\n\n### Paperless-ngx version\n\n1.11.4\n\n### Host OS\n\nDebian 11\n\n### Installation method\n\nBare metal\n\n### Browser\n\nFirefox 108\n\n### Configuration changes\n\n_No response_\n\n### Other\n\n_No response_\n", "before_files": [{"content": "from paperless.views import StandardPagination\nfrom paperless_mail.models import MailAccount\nfrom paperless_mail.models import MailRule\nfrom paperless_mail.serialisers import MailAccountSerializer\nfrom paperless_mail.serialisers import MailRuleSerializer\nfrom rest_framework.permissions import IsAuthenticated\nfrom rest_framework.viewsets import ModelViewSet\n\n\nclass MailAccountViewSet(ModelViewSet):\n model = MailAccount\n\n queryset = MailAccount.objects.all().order_by(\"pk\")\n serializer_class = MailAccountSerializer\n pagination_class = StandardPagination\n permission_classes = (IsAuthenticated,)\n\n # TODO: user-scoped\n # def get_queryset(self):\n # user = self.request.user\n # return MailAccount.objects.filter(user=user)\n\n # def perform_create(self, serializer):\n # serializer.save(user=self.request.user)\n\n\nclass MailRuleViewSet(ModelViewSet):\n model = MailRule\n\n queryset = MailRule.objects.all().order_by(\"pk\")\n serializer_class = MailRuleSerializer\n pagination_class = StandardPagination\n permission_classes = (IsAuthenticated,)\n\n # TODO: user-scoped\n # def get_queryset(self):\n # user = self.request.user\n # return MailRule.objects.filter(user=user)\n\n # def perform_create(self, serializer):\n # serializer.save(user=self.request.user)\n", "path": "src/paperless_mail/views.py"}]} | 1,111 | 116 |
gh_patches_debug_14159 | rasdani/github-patches | git_diff | cisagov__manage.get.gov-1778 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Logout: OIDC logout behavior resulting in "500" error on server restart/recompile
### Current Behavior
Logout behavior observed:
1. 500 error is displayed when user logs out
2. By clicking sign in and then sign out, user is able to successfully log out without error
Logs shown when this error occurs:

### Expected Behavior
User logs out successfully without encountering error.
### Steps to Reproduce
1. sign in to server
2. restart the server
3. click sign out
4. 500 error is displayed
5. By clicking sign in and then sign out, user is able to successfully log out without error
When running locally:
1. sign in to server
2. make a code change that causes the code to recompile
3. click sign out
4. 500 error is displayed
5. By clicking sign in and then sign out, user is able to successfully log out without error
### Environment
Observed locally on all branches, observed on sandboxes
### Additional Context
_No response_
### Issue Links
Related to #1505
</issue>
<code>
[start of src/djangooidc/views.py]
1 # coding: utf-8
2
3 import logging
4
5 from django.conf import settings
6 from django.contrib.auth import logout as auth_logout
7 from django.contrib.auth import authenticate, login
8 from django.http import HttpResponseRedirect
9 from django.shortcuts import redirect, render
10 from urllib.parse import parse_qs, urlencode
11
12 from djangooidc.oidc import Client
13 from djangooidc import exceptions as o_e
14 from registrar.models import User
15
16 logger = logging.getLogger(__name__)
17
18 CLIENT = None
19
20
21 def _initialize_client():
22 """Initialize the OIDC client. Exceptions are allowed to raise
23 and will need to be caught."""
24 global CLIENT
25 # Initialize provider using pyOICD
26 OP = getattr(settings, "OIDC_ACTIVE_PROVIDER")
27 CLIENT = Client(OP)
28 logger.debug("Client initialized: %s" % CLIENT)
29
30
31 def _client_is_none():
32 """Return if the CLIENT is currently None."""
33 global CLIENT
34 return CLIENT is None
35
36
37 # Initialize CLIENT
38 try:
39 _initialize_client()
40 except Exception as err:
41 # In the event of an exception, log the error and allow the app load to continue
42 # without the OIDC Client. Subsequent login attempts will attempt to initialize
43 # again if Client is None
44 logger.error(err)
45 logger.error("Unable to configure OpenID Connect provider. Users cannot log in.")
46
47
48 def error_page(request, error):
49 """Display a sensible message and log the error."""
50 logger.error(error)
51 if isinstance(error, o_e.AuthenticationFailed):
52 return render(
53 request,
54 "401.html",
55 context={
56 "friendly_message": error.friendly_message,
57 "log_identifier": error.locator,
58 },
59 status=401,
60 )
61 if isinstance(error, o_e.InternalError):
62 return render(
63 request,
64 "500.html",
65 context={
66 "friendly_message": error.friendly_message,
67 "log_identifier": error.locator,
68 },
69 status=500,
70 )
71 if isinstance(error, Exception):
72 return render(request, "500.html", status=500)
73
74
75 def openid(request):
76 """Redirect the user to an authentication provider (OP)."""
77 global CLIENT
78 try:
79 # If the CLIENT is none, attempt to reinitialize before handling the request
80 if _client_is_none():
81 logger.debug("OIDC client is None, attempting to initialize")
82 _initialize_client()
83 request.session["acr_value"] = CLIENT.get_default_acr_value()
84 request.session["next"] = request.GET.get("next", "/")
85 # Create the authentication request
86 return CLIENT.create_authn_request(request.session)
87 except Exception as err:
88 return error_page(request, err)
89
90
91 def login_callback(request):
92 """Analyze the token returned by the authentication provider (OP)."""
93 global CLIENT
94 try:
95 # If the CLIENT is none, attempt to reinitialize before handling the request
96 if _client_is_none():
97 logger.debug("OIDC client is None, attempting to initialize")
98 _initialize_client()
99 query = parse_qs(request.GET.urlencode())
100 userinfo = CLIENT.callback(query, request.session)
101 # test for need for identity verification and if it is satisfied
102 # if not satisfied, redirect user to login with stepped up acr_value
103 if _requires_step_up_auth(userinfo):
104 # add acr_value to request.session
105 request.session["acr_value"] = CLIENT.get_step_up_acr_value()
106 return CLIENT.create_authn_request(request.session)
107 user = authenticate(request=request, **userinfo)
108 if user:
109 login(request, user)
110 logger.info("Successfully logged in user %s" % user)
111 # Double login bug (1507)?
112 return redirect(request.session.get("next", "/"))
113 else:
114 raise o_e.BannedUser()
115 except o_e.NoStateDefined as nsd_err:
116 # In the event that a user is in the middle of a login when the app is restarted,
117 # their session state will no longer be available, so redirect the user to the
118 # beginning of login process without raising an error to the user.
119 logger.warning(f"No State Defined: {nsd_err}")
120 return redirect(request.session.get("next", "/"))
121 except Exception as err:
122 return error_page(request, err)
123
124
125 def _requires_step_up_auth(userinfo):
126 """if User.needs_identity_verification and step_up_acr_value not in
127 ial returned from callback, return True"""
128 step_up_acr_value = CLIENT.get_step_up_acr_value()
129 acr_value = userinfo.get("ial", "")
130 uuid = userinfo.get("sub", "")
131 email = userinfo.get("email", "")
132 if acr_value != step_up_acr_value:
133 # The acr of this attempt is not at the highest level
134 # so check if the user needs the higher level
135 return User.needs_identity_verification(email, uuid)
136 else:
137 # This attempt already came back at the highest level
138 # so does not require step up
139 return False
140
141
142 def logout(request, next_page=None):
143 """Redirect the user to the authentication provider (OP) logout page."""
144 try:
145 user = request.user
146 request_args = {
147 "client_id": CLIENT.client_id,
148 "state": request.session["state"],
149 }
150 if (
151 "post_logout_redirect_uris" in CLIENT.registration_response.keys()
152 and len(CLIENT.registration_response["post_logout_redirect_uris"]) > 0
153 ):
154 request_args.update(
155 {"post_logout_redirect_uri": CLIENT.registration_response["post_logout_redirect_uris"][0]}
156 )
157 url = CLIENT.provider_info["end_session_endpoint"]
158 url += "?" + urlencode(request_args)
159 return HttpResponseRedirect(url)
160 except Exception as err:
161 return error_page(request, err)
162 finally:
163 # Always remove Django session stuff - even if not logged out from OP.
164 # Don't wait for the callback as it may never come.
165 auth_logout(request)
166 logger.info("Successfully logged out user %s" % user)
167 next_page = getattr(settings, "LOGOUT_REDIRECT_URL", None)
168 if next_page:
169 request.session["next"] = next_page
170
171
172 def logout_callback(request):
173 """Simple redirection view: after logout, redirect to `next`."""
174 next = request.session.get("next", "/")
175 return redirect(next)
176
[end of src/djangooidc/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/djangooidc/views.py b/src/djangooidc/views.py
--- a/src/djangooidc/views.py
+++ b/src/djangooidc/views.py
@@ -145,8 +145,12 @@
user = request.user
request_args = {
"client_id": CLIENT.client_id,
- "state": request.session["state"],
}
+ # if state is not in request session, still redirect to the identity
+ # provider's logout url, but don't include the state in the url; this
+ # will successfully log out of the identity provider
+ if "state" in request.session:
+ request_args["state"] = request.session["state"]
if (
"post_logout_redirect_uris" in CLIENT.registration_response.keys()
and len(CLIENT.registration_response["post_logout_redirect_uris"]) > 0
| {"golden_diff": "diff --git a/src/djangooidc/views.py b/src/djangooidc/views.py\n--- a/src/djangooidc/views.py\n+++ b/src/djangooidc/views.py\n@@ -145,8 +145,12 @@\n user = request.user\n request_args = {\n \"client_id\": CLIENT.client_id,\n- \"state\": request.session[\"state\"],\n }\n+ # if state is not in request session, still redirect to the identity\n+ # provider's logout url, but don't include the state in the url; this\n+ # will successfully log out of the identity provider\n+ if \"state\" in request.session:\n+ request_args[\"state\"] = request.session[\"state\"]\n if (\n \"post_logout_redirect_uris\" in CLIENT.registration_response.keys()\n and len(CLIENT.registration_response[\"post_logout_redirect_uris\"]) > 0\n", "issue": "Logout: OIDC logout behavior resulting on \"500\" error in server restart/recompile\n### Current Behavior\r\n\r\nLogout behavior observed:\r\n\r\n1. 500 error is displayed when user logs out\r\n2. By clicking sign in and then sign out, user is able to successfully log out without error\r\n\r\nLogs being shown when this error occurs\r\n\r\n\r\n\r\n### Expected Behavior\r\n\r\nUser logs out successfully without encountering error.\r\n\r\n### Steps to Reproduce\r\n\r\n1. sign in to server\r\n2. restart the server\r\n3. click sign out\r\n4. 500 error is displayed \r\n5. By clicking sign in and then sign out, user is able to successfully log out without error\r\n\r\n\r\nwhen running locally:\r\n1. sign in to server\r\n2. make a code change that causes the code to recompile\r\n3. click sign out\r\n4. 500 error is displayed \r\n5. By clicking sign in and then sign out, user is able to successfully log out without error\r\n### Environment\r\nObserved locally on all branches, observed on sandboxes\r\n\r\n### Additional Context\r\n\r\n_No response_\r\n\r\n### Issue Links\r\n\r\nRelated to #1505 \n", "before_files": [{"content": "# coding: utf-8\n\nimport logging\n\nfrom django.conf import settings\nfrom django.contrib.auth import logout as auth_logout\nfrom django.contrib.auth import authenticate, login\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import redirect, render\nfrom urllib.parse import parse_qs, urlencode\n\nfrom djangooidc.oidc import Client\nfrom djangooidc import exceptions as o_e\nfrom registrar.models import User\n\nlogger = logging.getLogger(__name__)\n\nCLIENT = None\n\n\ndef _initialize_client():\n \"\"\"Initialize the OIDC client. Exceptions are allowed to raise\n and will need to be caught.\"\"\"\n global CLIENT\n # Initialize provider using pyOICD\n OP = getattr(settings, \"OIDC_ACTIVE_PROVIDER\")\n CLIENT = Client(OP)\n logger.debug(\"Client initialized: %s\" % CLIENT)\n\n\ndef _client_is_none():\n \"\"\"Return if the CLIENT is currently None.\"\"\"\n global CLIENT\n return CLIENT is None\n\n\n# Initialize CLIENT\ntry:\n _initialize_client()\nexcept Exception as err:\n # In the event of an exception, log the error and allow the app load to continue\n # without the OIDC Client. Subsequent login attempts will attempt to initialize\n # again if Client is None\n logger.error(err)\n logger.error(\"Unable to configure OpenID Connect provider. 
Users cannot log in.\")\n\n\ndef error_page(request, error):\n \"\"\"Display a sensible message and log the error.\"\"\"\n logger.error(error)\n if isinstance(error, o_e.AuthenticationFailed):\n return render(\n request,\n \"401.html\",\n context={\n \"friendly_message\": error.friendly_message,\n \"log_identifier\": error.locator,\n },\n status=401,\n )\n if isinstance(error, o_e.InternalError):\n return render(\n request,\n \"500.html\",\n context={\n \"friendly_message\": error.friendly_message,\n \"log_identifier\": error.locator,\n },\n status=500,\n )\n if isinstance(error, Exception):\n return render(request, \"500.html\", status=500)\n\n\ndef openid(request):\n \"\"\"Redirect the user to an authentication provider (OP).\"\"\"\n global CLIENT\n try:\n # If the CLIENT is none, attempt to reinitialize before handling the request\n if _client_is_none():\n logger.debug(\"OIDC client is None, attempting to initialize\")\n _initialize_client()\n request.session[\"acr_value\"] = CLIENT.get_default_acr_value()\n request.session[\"next\"] = request.GET.get(\"next\", \"/\")\n # Create the authentication request\n return CLIENT.create_authn_request(request.session)\n except Exception as err:\n return error_page(request, err)\n\n\ndef login_callback(request):\n \"\"\"Analyze the token returned by the authentication provider (OP).\"\"\"\n global CLIENT\n try:\n # If the CLIENT is none, attempt to reinitialize before handling the request\n if _client_is_none():\n logger.debug(\"OIDC client is None, attempting to initialize\")\n _initialize_client()\n query = parse_qs(request.GET.urlencode())\n userinfo = CLIENT.callback(query, request.session)\n # test for need for identity verification and if it is satisfied\n # if not satisfied, redirect user to login with stepped up acr_value\n if _requires_step_up_auth(userinfo):\n # add acr_value to request.session\n request.session[\"acr_value\"] = CLIENT.get_step_up_acr_value()\n return CLIENT.create_authn_request(request.session)\n user = authenticate(request=request, **userinfo)\n if user:\n login(request, user)\n logger.info(\"Successfully logged in user %s\" % user)\n # Double login bug (1507)?\n return redirect(request.session.get(\"next\", \"/\"))\n else:\n raise o_e.BannedUser()\n except o_e.NoStateDefined as nsd_err:\n # In the event that a user is in the middle of a login when the app is restarted,\n # their session state will no longer be available, so redirect the user to the\n # beginning of login process without raising an error to the user.\n logger.warning(f\"No State Defined: {nsd_err}\")\n return redirect(request.session.get(\"next\", \"/\"))\n except Exception as err:\n return error_page(request, err)\n\n\ndef _requires_step_up_auth(userinfo):\n \"\"\"if User.needs_identity_verification and step_up_acr_value not in\n ial returned from callback, return True\"\"\"\n step_up_acr_value = CLIENT.get_step_up_acr_value()\n acr_value = userinfo.get(\"ial\", \"\")\n uuid = userinfo.get(\"sub\", \"\")\n email = userinfo.get(\"email\", \"\")\n if acr_value != step_up_acr_value:\n # The acr of this attempt is not at the highest level\n # so check if the user needs the higher level\n return User.needs_identity_verification(email, uuid)\n else:\n # This attempt already came back at the highest level\n # so does not require step up\n return False\n\n\ndef logout(request, next_page=None):\n \"\"\"Redirect the user to the authentication provider (OP) logout page.\"\"\"\n try:\n user = request.user\n request_args = {\n \"client_id\": CLIENT.client_id,\n 
\"state\": request.session[\"state\"],\n }\n if (\n \"post_logout_redirect_uris\" in CLIENT.registration_response.keys()\n and len(CLIENT.registration_response[\"post_logout_redirect_uris\"]) > 0\n ):\n request_args.update(\n {\"post_logout_redirect_uri\": CLIENT.registration_response[\"post_logout_redirect_uris\"][0]}\n )\n url = CLIENT.provider_info[\"end_session_endpoint\"]\n url += \"?\" + urlencode(request_args)\n return HttpResponseRedirect(url)\n except Exception as err:\n return error_page(request, err)\n finally:\n # Always remove Django session stuff - even if not logged out from OP.\n # Don't wait for the callback as it may never come.\n auth_logout(request)\n logger.info(\"Successfully logged out user %s\" % user)\n next_page = getattr(settings, \"LOGOUT_REDIRECT_URL\", None)\n if next_page:\n request.session[\"next\"] = next_page\n\n\ndef logout_callback(request):\n \"\"\"Simple redirection view: after logout, redirect to `next`.\"\"\"\n next = request.session.get(\"next\", \"/\")\n return redirect(next)\n", "path": "src/djangooidc/views.py"}]} | 2,612 | 194 |
gh_patches_debug_27007 | rasdani/github-patches | git_diff | WordPress__openverse-api-756 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Thumbnail timeouts raise `TimeoutError`
## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
We recently changed the thumbnail service we're using and some of the business logic around it (#630). The previous thumbnail retrieval step [did not have a timeout](https://github.com/WordPress/openverse-api/blame/0df21059dfe579be1e7a83f64e8e8a7d4e422416/api/catalog/api/views/media_views.py#L156), whereas the new thumbnail retrieval step has a timeout of 5 seconds:
https://github.com/WordPress/openverse-api/blob/03362db8caed8014e31d3fe5274c9bb084086b85/api/catalog/api/views/media_views.py#L173
We should also add a case for the `TimeoutError` exception here:
https://github.com/WordPress/openverse-api/blob/03362db8caed8014e31d3fe5274c9bb084086b85/api/catalog/api/views/media_views.py#L183
I think it might also be best for us to increase this timeout just a bit (maybe 10s?) to give the thumbnails for larger images/museum images more time to load.
## Additional context
<!-- Add any other context about the problem here; or delete the section entirely. -->
Sentry issue: https://sentry.io/share/issue/3daf5a2129a449859200e2de0c458ed6/
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
</issue>
<code>
[start of api/catalog/api/views/media_views.py]
1 import json
2 import logging as log
3 from urllib.error import HTTPError
4 from urllib.parse import urlencode
5 from urllib.request import Request, urlopen
6
7 from django.conf import settings
8 from django.http.response import HttpResponse
9 from rest_framework import status
10 from rest_framework.decorators import action
11 from rest_framework.response import Response
12 from rest_framework.viewsets import ReadOnlyModelViewSet
13
14 from catalog.api.controllers import search_controller
15 from catalog.api.models import ContentProvider
16 from catalog.api.serializers.provider_serializers import ProviderSerializer
17 from catalog.api.utils.exceptions import get_api_exception
18 from catalog.api.utils.pagination import StandardPagination
19 from catalog.custom_auto_schema import CustomAutoSchema
20
21
22 class MediaViewSet(ReadOnlyModelViewSet):
23 swagger_schema = CustomAutoSchema
24
25 lookup_field = "identifier"
26 # TODO: https://github.com/encode/django-rest-framework/pull/6789
27 lookup_value_regex = r"[0-9a-f\-]{36}" # highly simplified approximation
28
29 pagination_class = StandardPagination
30
31 # Populate these in the corresponding subclass
32 model_class = None
33 query_serializer_class = None
34 default_index = None
35 qa_index = None
36
37 def __init__(self, *args, **kwargs):
38 super().__init__(*args, **kwargs)
39 required_fields = [
40 self.model_class,
41 self.query_serializer_class,
42 self.default_index,
43 self.qa_index,
44 ]
45 if any(val is None for val in required_fields):
46 msg = "Viewset fields are not completely populated."
47 raise ValueError(msg)
48
49 def get_queryset(self):
50 return self.model_class.objects.all()
51
52 # Standard actions
53
54 def list(self, request, *_, **__):
55 self.paginator.page_size = request.query_params.get("page_size")
56 page_size = self.paginator.page_size
57 self.paginator.page = request.query_params.get("page")
58 page = self.paginator.page
59
60 params = self.query_serializer_class(data=request.query_params)
61 params.is_valid(raise_exception=True)
62
63 hashed_ip = hash(self._get_user_ip(request))
64 qa = params.validated_data["qa"]
65 filter_dead = params.validated_data["filter_dead"]
66
67 search_index = self.qa_index if qa else self.default_index
68 try:
69 results, num_pages, num_results = search_controller.search(
70 params,
71 search_index,
72 page_size,
73 hashed_ip,
74 request,
75 filter_dead,
76 page,
77 )
78 self.paginator.page_count = num_pages
79 self.paginator.result_count = num_results
80 except ValueError as e:
81 raise get_api_exception(getattr(e, "message", str(e)))
82
83 serializer = self.get_serializer(results, many=True)
84 return self.get_paginated_response(serializer.data)
85
86 # Extra actions
87
88 @action(detail=False, serializer_class=ProviderSerializer, pagination_class=None)
89 def stats(self, *_, **__):
90 source_counts = search_controller.get_sources(self.default_index)
91 context = self.get_serializer_context() | {
92 "source_counts": source_counts,
93 }
94
95 providers = ContentProvider.objects.filter(
96 media_type=self.default_index, filter_content=False
97 )
98 serializer = self.get_serializer(providers, many=True, context=context)
99 return Response(serializer.data)
100
101 @action(detail=True)
102 def related(self, request, identifier=None, *_, **__):
103 try:
104 results, num_results = search_controller.related_media(
105 uuid=identifier,
106 index=self.default_index,
107 request=request,
108 filter_dead=True,
109 )
110 self.paginator.result_count = num_results
111 self.paginator.page_count = 1
112 # `page_size` refers to the maximum number of related images to return.
113 self.paginator.page_size = 10
114 except ValueError as e:
115 raise get_api_exception(getattr(e, "message", str(e)))
116
117 serializer = self.get_serializer(results, many=True)
118 return self.get_paginated_response(serializer.data)
119
120 def report(self, request, *_, **__):
121 media = self.get_object()
122 identifier = media.identifier
123 serializer = self.get_serializer(data=request.data)
124 if not serializer.is_valid():
125 raise get_api_exception("Invalid input.", 400)
126 report = serializer.save(identifier=identifier)
127
128 serializer = self.get_serializer(report)
129 return Response(data=serializer.data, status=status.HTTP_201_CREATED)
130
131 def thumbnail(self, image_url, request, *_, **__):
132 serializer = self.get_serializer(data=request.query_params)
133 serializer.is_valid(raise_exception=True)
134 return self._get_proxied_image(
135 image_url,
136 accept_header=request.headers.get("Accept", "image/*"),
137 **serializer.validated_data,
138 )
139
140 # Helper functions
141
142 @staticmethod
143 def _get_user_ip(request):
144 """
145 Read request headers to find the correct IP address.
146 It is assumed that X-Forwarded-For has been sanitized by the load
147 balancer and thus cannot be rewritten by malicious users.
148 :param request: A Django request object.
149 :return: An IP address.
150 """
151 x_forwarded_for = request.META.get("HTTP_X_FORWARDED_FOR")
152 if x_forwarded_for:
153 ip = x_forwarded_for.split(",")[0]
154 else:
155 ip = request.META.get("REMOTE_ADDR")
156 return ip
157
158 @staticmethod
159 def _thumbnail_proxy_comm(
160 path: str,
161 params: dict,
162 headers: tuple[tuple[str, str]] = (),
163 ):
164 proxy_url = settings.THUMBNAIL_PROXY_URL
165 query_string = urlencode(params)
166 upstream_url = f"{proxy_url}/{path}?{query_string}"
167 log.debug(f"Image proxy upstream URL: {upstream_url}")
168
169 try:
170 req = Request(upstream_url)
171 for key, val in headers:
172 req.add_header(key, val)
173 upstream_response = urlopen(req, timeout=5)
174
175 res_status = upstream_response.status
176 content_type = upstream_response.headers.get("Content-Type")
177 log.debug(
178 "Image proxy response "
179 f"status: {res_status}, content-type: {content_type}"
180 )
181
182 return upstream_response, res_status, content_type
183 except HTTPError as exc:
184 raise get_api_exception(f"Failed to render thumbnail: {exc}")
185
186 @staticmethod
187 def _get_proxied_image(
188 image_url: str,
189 accept_header: str = "image/*",
190 is_full_size: bool = False,
191 is_compressed: bool = True,
192 ):
193 width = settings.THUMBNAIL_WIDTH_PX
194 if is_full_size:
195 info_res, *_ = MediaViewSet._thumbnail_proxy_comm(
196 "info", {"url": image_url}
197 )
198 info = json.loads(info_res.read())
199 width = info["width"]
200
201 params = {
202 "url": image_url,
203 "width": width,
204 }
205
206 if is_compressed:
207 params |= {
208 "quality": settings.THUMBNAIL_JPG_QUALITY,
209 "compression": settings.THUMBNAIL_PNG_COMPRESSION,
210 }
211 else:
212 params |= {
213 "quality": 100,
214 "compression": 0,
215 }
216
217 if "webp" in accept_header:
218 params["type"] = "auto" # Use ``Accept`` header to determine output type.
219
220 img_res, res_status, content_type = MediaViewSet._thumbnail_proxy_comm(
221 "resize", params, (("Accept", accept_header),)
222 )
223 response = HttpResponse(
224 img_res.read(), status=res_status, content_type=content_type
225 )
226 return response
227
[end of api/catalog/api/views/media_views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api/catalog/api/views/media_views.py b/api/catalog/api/views/media_views.py
--- a/api/catalog/api/views/media_views.py
+++ b/api/catalog/api/views/media_views.py
@@ -1,5 +1,6 @@
import json
import logging as log
+from http.client import RemoteDisconnected
from urllib.error import HTTPError
from urllib.parse import urlencode
from urllib.request import Request, urlopen
@@ -170,7 +171,7 @@
req = Request(upstream_url)
for key, val in headers:
req.add_header(key, val)
- upstream_response = urlopen(req, timeout=5)
+ upstream_response = urlopen(req, timeout=10)
res_status = upstream_response.status
content_type = upstream_response.headers.get("Content-Type")
@@ -180,8 +181,12 @@
)
return upstream_response, res_status, content_type
- except HTTPError as exc:
+ except (HTTPError, RemoteDisconnected, TimeoutError) as exc:
raise get_api_exception(f"Failed to render thumbnail: {exc}")
+ except Exception as exc:
+ raise get_api_exception(
+ f"Failed to render thumbnail due to unidentified exception: {exc}"
+ )
@staticmethod
def _get_proxied_image(
| {"golden_diff": "diff --git a/api/catalog/api/views/media_views.py b/api/catalog/api/views/media_views.py\n--- a/api/catalog/api/views/media_views.py\n+++ b/api/catalog/api/views/media_views.py\n@@ -1,5 +1,6 @@\n import json\n import logging as log\n+from http.client import RemoteDisconnected\n from urllib.error import HTTPError\n from urllib.parse import urlencode\n from urllib.request import Request, urlopen\n@@ -170,7 +171,7 @@\n req = Request(upstream_url)\n for key, val in headers:\n req.add_header(key, val)\n- upstream_response = urlopen(req, timeout=5)\n+ upstream_response = urlopen(req, timeout=10)\n \n res_status = upstream_response.status\n content_type = upstream_response.headers.get(\"Content-Type\")\n@@ -180,8 +181,12 @@\n )\n \n return upstream_response, res_status, content_type\n- except HTTPError as exc:\n+ except (HTTPError, RemoteDisconnected, TimeoutError) as exc:\n raise get_api_exception(f\"Failed to render thumbnail: {exc}\")\n+ except Exception as exc:\n+ raise get_api_exception(\n+ f\"Failed to render thumbnail due to unidentified exception: {exc}\"\n+ )\n \n @staticmethod\n def _get_proxied_image(\n", "issue": "Thumbnail timeouts raise `TimeoutError`\n## Description\n<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->\n<!-- For example: \"I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page.\" -->\nWe recently changed the thumbnail service we're using and some of the business logic around it (#630). The previous thumbnail retrieval step [did not have a timeout](https://github.com/WordPress/openverse-api/blame/0df21059dfe579be1e7a83f64e8e8a7d4e422416/api/catalog/api/views/media_views.py#L156), whereas the new thumbnail retrieval step has a timeout of 5 seconds:\n\nhttps://github.com/WordPress/openverse-api/blob/03362db8caed8014e31d3fe5274c9bb084086b85/api/catalog/api/views/media_views.py#L173\n\nWe should also add a case for the `TimeoutError` exception here:\n\nhttps://github.com/WordPress/openverse-api/blob/03362db8caed8014e31d3fe5274c9bb084086b85/api/catalog/api/views/media_views.py#L183\n\nI think it might also be best for us to increase this timeout just a bit (maybe 10s?) to give the thumbnails for larger images/museum images more time to load.\n\n## Additional context\n<!-- Add any other context about the problem here; or delete the section entirely. -->\nSentry issue: https://sentry.io/share/issue/3daf5a2129a449859200e2de0c458ed6/\n\n## Resolution\n<!-- Replace the [ ] with [x] to check the box. 
-->\n- [ ] \ud83d\ude4b I would be interested in resolving this bug.\n\n", "before_files": [{"content": "import json\nimport logging as log\nfrom urllib.error import HTTPError\nfrom urllib.parse import urlencode\nfrom urllib.request import Request, urlopen\n\nfrom django.conf import settings\nfrom django.http.response import HttpResponse\nfrom rest_framework import status\nfrom rest_framework.decorators import action\nfrom rest_framework.response import Response\nfrom rest_framework.viewsets import ReadOnlyModelViewSet\n\nfrom catalog.api.controllers import search_controller\nfrom catalog.api.models import ContentProvider\nfrom catalog.api.serializers.provider_serializers import ProviderSerializer\nfrom catalog.api.utils.exceptions import get_api_exception\nfrom catalog.api.utils.pagination import StandardPagination\nfrom catalog.custom_auto_schema import CustomAutoSchema\n\n\nclass MediaViewSet(ReadOnlyModelViewSet):\n swagger_schema = CustomAutoSchema\n\n lookup_field = \"identifier\"\n # TODO: https://github.com/encode/django-rest-framework/pull/6789\n lookup_value_regex = r\"[0-9a-f\\-]{36}\" # highly simplified approximation\n\n pagination_class = StandardPagination\n\n # Populate these in the corresponding subclass\n model_class = None\n query_serializer_class = None\n default_index = None\n qa_index = None\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n required_fields = [\n self.model_class,\n self.query_serializer_class,\n self.default_index,\n self.qa_index,\n ]\n if any(val is None for val in required_fields):\n msg = \"Viewset fields are not completely populated.\"\n raise ValueError(msg)\n\n def get_queryset(self):\n return self.model_class.objects.all()\n\n # Standard actions\n\n def list(self, request, *_, **__):\n self.paginator.page_size = request.query_params.get(\"page_size\")\n page_size = self.paginator.page_size\n self.paginator.page = request.query_params.get(\"page\")\n page = self.paginator.page\n\n params = self.query_serializer_class(data=request.query_params)\n params.is_valid(raise_exception=True)\n\n hashed_ip = hash(self._get_user_ip(request))\n qa = params.validated_data[\"qa\"]\n filter_dead = params.validated_data[\"filter_dead\"]\n\n search_index = self.qa_index if qa else self.default_index\n try:\n results, num_pages, num_results = search_controller.search(\n params,\n search_index,\n page_size,\n hashed_ip,\n request,\n filter_dead,\n page,\n )\n self.paginator.page_count = num_pages\n self.paginator.result_count = num_results\n except ValueError as e:\n raise get_api_exception(getattr(e, \"message\", str(e)))\n\n serializer = self.get_serializer(results, many=True)\n return self.get_paginated_response(serializer.data)\n\n # Extra actions\n\n @action(detail=False, serializer_class=ProviderSerializer, pagination_class=None)\n def stats(self, *_, **__):\n source_counts = search_controller.get_sources(self.default_index)\n context = self.get_serializer_context() | {\n \"source_counts\": source_counts,\n }\n\n providers = ContentProvider.objects.filter(\n media_type=self.default_index, filter_content=False\n )\n serializer = self.get_serializer(providers, many=True, context=context)\n return Response(serializer.data)\n\n @action(detail=True)\n def related(self, request, identifier=None, *_, **__):\n try:\n results, num_results = search_controller.related_media(\n uuid=identifier,\n index=self.default_index,\n request=request,\n filter_dead=True,\n )\n self.paginator.result_count = num_results\n self.paginator.page_count = 1\n 
# `page_size` refers to the maximum number of related images to return.\n self.paginator.page_size = 10\n except ValueError as e:\n raise get_api_exception(getattr(e, \"message\", str(e)))\n\n serializer = self.get_serializer(results, many=True)\n return self.get_paginated_response(serializer.data)\n\n def report(self, request, *_, **__):\n media = self.get_object()\n identifier = media.identifier\n serializer = self.get_serializer(data=request.data)\n if not serializer.is_valid():\n raise get_api_exception(\"Invalid input.\", 400)\n report = serializer.save(identifier=identifier)\n\n serializer = self.get_serializer(report)\n return Response(data=serializer.data, status=status.HTTP_201_CREATED)\n\n def thumbnail(self, image_url, request, *_, **__):\n serializer = self.get_serializer(data=request.query_params)\n serializer.is_valid(raise_exception=True)\n return self._get_proxied_image(\n image_url,\n accept_header=request.headers.get(\"Accept\", \"image/*\"),\n **serializer.validated_data,\n )\n\n # Helper functions\n\n @staticmethod\n def _get_user_ip(request):\n \"\"\"\n Read request headers to find the correct IP address.\n It is assumed that X-Forwarded-For has been sanitized by the load\n balancer and thus cannot be rewritten by malicious users.\n :param request: A Django request object.\n :return: An IP address.\n \"\"\"\n x_forwarded_for = request.META.get(\"HTTP_X_FORWARDED_FOR\")\n if x_forwarded_for:\n ip = x_forwarded_for.split(\",\")[0]\n else:\n ip = request.META.get(\"REMOTE_ADDR\")\n return ip\n\n @staticmethod\n def _thumbnail_proxy_comm(\n path: str,\n params: dict,\n headers: tuple[tuple[str, str]] = (),\n ):\n proxy_url = settings.THUMBNAIL_PROXY_URL\n query_string = urlencode(params)\n upstream_url = f\"{proxy_url}/{path}?{query_string}\"\n log.debug(f\"Image proxy upstream URL: {upstream_url}\")\n\n try:\n req = Request(upstream_url)\n for key, val in headers:\n req.add_header(key, val)\n upstream_response = urlopen(req, timeout=5)\n\n res_status = upstream_response.status\n content_type = upstream_response.headers.get(\"Content-Type\")\n log.debug(\n \"Image proxy response \"\n f\"status: {res_status}, content-type: {content_type}\"\n )\n\n return upstream_response, res_status, content_type\n except HTTPError as exc:\n raise get_api_exception(f\"Failed to render thumbnail: {exc}\")\n\n @staticmethod\n def _get_proxied_image(\n image_url: str,\n accept_header: str = \"image/*\",\n is_full_size: bool = False,\n is_compressed: bool = True,\n ):\n width = settings.THUMBNAIL_WIDTH_PX\n if is_full_size:\n info_res, *_ = MediaViewSet._thumbnail_proxy_comm(\n \"info\", {\"url\": image_url}\n )\n info = json.loads(info_res.read())\n width = info[\"width\"]\n\n params = {\n \"url\": image_url,\n \"width\": width,\n }\n\n if is_compressed:\n params |= {\n \"quality\": settings.THUMBNAIL_JPG_QUALITY,\n \"compression\": settings.THUMBNAIL_PNG_COMPRESSION,\n }\n else:\n params |= {\n \"quality\": 100,\n \"compression\": 0,\n }\n\n if \"webp\" in accept_header:\n params[\"type\"] = \"auto\" # Use ``Accept`` header to determine output type.\n\n img_res, res_status, content_type = MediaViewSet._thumbnail_proxy_comm(\n \"resize\", params, ((\"Accept\", accept_header),)\n )\n response = HttpResponse(\n img_res.read(), status=res_status, content_type=content_type\n )\n return response\n", "path": "api/catalog/api/views/media_views.py"}]} | 3,175 | 289 |
gh_patches_debug_11095 | rasdani/github-patches | git_diff | scikit-hep__pyhf-369 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
papermill seems to break? pin version?
# Description
In the CI for @nikoladze's PR https://github.com/diana-hep/pyhf/pull/365 it seems like papermill is breaking for unrelated reasons. Did something change upstream? Maybe we need to pin the version? Maybe @matthewfeickert knows?
https://travis-ci.org/diana-hep/pyhf/jobs/454931484
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4 from os import path
5 import sys
6
7 this_directory = path.abspath(path.dirname(__file__))
8 if sys.version_info.major < 3:
9 from io import open
10 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:
11 long_description = readme_md.read()
12
13 extras_require = {
14 'tensorflow': [
15 'tensorflow<1.12.0,>=1.10.0',
16 'tensorflow-probability==0.3.0',
17 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass
18 'setuptools<=39.1.0',
19 ],
20 'torch': ['torch>=0.4.0'],
21 'mxnet': [
22 'mxnet>=1.0.0',
23 'requests<2.19.0,>=2.18.4',
24 'numpy<1.15.0,>=1.8.2',
25 'requests<2.19.0,>=2.18.4',
26 ],
27 # 'dask': [
28 # 'dask[array]'
29 # ],
30 'xmlimport': ['uproot'],
31 'minuit': ['iminuit'],
32 'develop': [
33 'pyflakes',
34 'pytest>=3.5.1',
35 'pytest-cov>=2.5.1',
36 'pytest-benchmark[histogram]',
37 'pytest-console-scripts',
38 'python-coveralls',
39 'coverage>=4.0', # coveralls
40 'matplotlib',
41 'jupyter',
42 'nbdime',
43 'uproot>=3.0.0',
44 'papermill',
45 'graphviz',
46 'bumpversion',
47 'sphinx',
48 'sphinxcontrib-bibtex',
49 'sphinxcontrib-napoleon',
50 'sphinx_rtd_theme',
51 'nbsphinx',
52 'sphinx-issues',
53 'm2r',
54 'jsonpatch',
55 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now
56 'pre-commit',
57 'black;python_version>="3.6"', # Black is Python3 only
58 'twine',
59 ],
60 }
61 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
62
63 setup(
64 name='pyhf',
65 version='0.0.15',
66 description='(partial) pure python histfactory implementation',
67 long_description=long_description,
68 long_description_content_type='text/markdown',
69 url='https://github.com/diana-hep/pyhf',
70 author='Lukas Heinrich',
71 author_email='[email protected]',
72 license='Apache',
73 keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',
74 classifiers=[
75 "Programming Language :: Python :: 2",
76 "Programming Language :: Python :: 2.7",
77 "Programming Language :: Python :: 3",
78 "Programming Language :: Python :: 3.6",
79 ],
80 packages=find_packages(),
81 include_package_data=True,
82 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*",
83 install_requires=[
84 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet
85 'click>=6.0', # for console scripts,
86 'tqdm', # for readxml
87 'six', # for modifiers
88 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
89 'jsonpatch',
90 ],
91 extras_require=extras_require,
92 entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
93 dependency_links=[],
94 )
95
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
'minuit': ['iminuit'],
'develop': [
'pyflakes',
- 'pytest>=3.5.1',
+ 'pytest<4.0.0,>=3.5.1',
'pytest-cov>=2.5.1',
'pytest-benchmark[histogram]',
'pytest-console-scripts',
@@ -41,7 +41,7 @@
'jupyter',
'nbdime',
'uproot>=3.0.0',
- 'papermill',
+ 'papermill>=0.16.0',
'graphviz',
'bumpversion',
'sphinx',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -31,7 +31,7 @@\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n- 'pytest>=3.5.1',\n+ 'pytest<4.0.0,>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n@@ -41,7 +41,7 @@\n 'jupyter',\n 'nbdime',\n 'uproot>=3.0.0',\n- 'papermill',\n+ 'papermill>=0.16.0',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n", "issue": "papermill seems to break? pin version?\n# Description\r\n\r\nIn the CI for @nikoladze's PR https://github.com/diana-hep/pyhf/pull/365 it seems like papermill is breaking for unrelated reasons. Did something change upstream? Maybe we need to pin the version? Maybe @matthewfeickert knows?\r\n\r\n\r\nhttps://travis-ci.org/diana-hep/pyhf/jobs/454931484\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\nfrom os import path\nimport sys\n\nthis_directory = path.abspath(path.dirname(__file__))\nif sys.version_info.major < 3:\n from io import open\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nextras_require = {\n 'tensorflow': [\n 'tensorflow<1.12.0,>=1.10.0',\n 'tensorflow-probability==0.3.0',\n 'numpy<=1.14.5,>=1.14.0', # Lower of 1.14.0 instead of 1.13.3 to ensure doctest pass\n 'setuptools<=39.1.0',\n ],\n 'torch': ['torch>=0.4.0'],\n 'mxnet': [\n 'mxnet>=1.0.0',\n 'requests<2.19.0,>=2.18.4',\n 'numpy<1.15.0,>=1.8.2',\n 'requests<2.19.0,>=2.18.4',\n ],\n # 'dask': [\n # 'dask[array]'\n # ],\n 'xmlimport': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'python-coveralls',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot>=3.0.0',\n 'papermill',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'sphinx-issues',\n 'm2r',\n 'jsonpatch',\n 'ipython<7', # jupyter_console and ipython clash in dependency requirement -- downgrade ipython for now\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n 'twine',\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\nsetup(\n name='pyhf',\n version='0.0.15',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich',\n author_email='[email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch mxnet dask',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n ],\n packages=find_packages(),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf, tensorflow, and mxnet\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n)\n", "path": "setup.py"}]} | 1,683 | 177 |
gh_patches_debug_11703 | rasdani/github-patches | git_diff | great-expectations__great_expectations-3811 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
<code>
[start of great_expectations/data_context/store/configuration_store.py]
1 import logging
2 from typing import Optional
3
4 from ruamel.yaml import YAML
5 from ruamel.yaml.comments import CommentedMap
6
7 import great_expectations.exceptions as ge_exceptions
8 from great_expectations.data_context.store import GeCloudStoreBackend
9 from great_expectations.data_context.store.store import Store
10 from great_expectations.data_context.store.tuple_store_backend import TupleStoreBackend
11 from great_expectations.data_context.types.base import BaseYamlConfig
12 from great_expectations.data_context.types.resource_identifiers import (
13 ConfigurationIdentifier,
14 )
15 from great_expectations.data_context.util import load_class
16 from great_expectations.util import (
17 filter_properties_dict,
18 verify_dynamic_loading_support,
19 )
20
21 yaml = YAML()
22
23 yaml.indent(mapping=2, sequence=4, offset=2)
24 yaml.default_flow_style = False
25
26 logger = logging.getLogger(__name__)
27
28
29 class ConfigurationStore(Store):
30
31 """
32 Configuration Store provides a way to store any Marshmallow Schema compatible Configuration (using the YAML format).
33 """
34
35 _key_class = ConfigurationIdentifier
36
37 _configuration_class = BaseYamlConfig
38
39 def __init__(
40 self,
41 store_name: str,
42 store_backend: Optional[dict] = None,
43 overwrite_existing: bool = False,
44 runtime_environment: Optional[dict] = None,
45 ):
46 if not issubclass(self._configuration_class, BaseYamlConfig):
47 raise ge_exceptions.DataContextError(
48 "Invalid configuration: A configuration_class needs to inherit from the BaseYamlConfig class."
49 )
50
51 if store_backend is not None:
52 store_backend_module_name = store_backend.get(
53 "module_name", "great_expectations.data_context.store"
54 )
55 store_backend_class_name = store_backend.get(
56 "class_name", "InMemoryStoreBackend"
57 )
58 verify_dynamic_loading_support(module_name=store_backend_module_name)
59 store_backend_class = load_class(
60 store_backend_class_name, store_backend_module_name
61 )
62
63 # Store Backend Class was loaded successfully; verify that it is of a correct subclass.
64 if issubclass(store_backend_class, TupleStoreBackend):
65 # Provide defaults for this common case
66 store_backend["filepath_template"] = store_backend.get(
67 "filepath_template", "{0}.yml"
68 )
69
70 super().__init__(
71 store_backend=store_backend,
72 runtime_environment=runtime_environment,
73 store_name=store_name,
74 )
75
76 # Gather the call arguments of the present function (include the "module_name" and add the "class_name"), filter
77 # out the Falsy values, and set the instance "_config" variable equal to the resulting dictionary.
78 self._config = {
79 "store_name": store_name,
80 "store_backend": store_backend,
81 "overwrite_existing": overwrite_existing,
82 "runtime_environment": runtime_environment,
83 "module_name": self.__class__.__module__,
84 "class_name": self.__class__.__name__,
85 }
86 filter_properties_dict(properties=self._config, clean_falsy=True, inplace=True)
87
88 self._overwrite_existing = overwrite_existing
89
90 def remove_key(self, key):
91 return self.store_backend.remove_key(key)
92
93 def serialize(self, key, value):
94 if self.ge_cloud_mode:
95 # GeCloudStoreBackend expects a json str
96 config_schema = value.get_schema_class()()
97 return config_schema.dump(value)
98 return value.to_yaml_str()
99
100 def deserialize(self, key, value):
101 config = value
102 if isinstance(value, str):
103 config: CommentedMap = yaml.load(value)
104 try:
105 return self._configuration_class.from_commented_map(commented_map=config)
106 except ge_exceptions.InvalidBaseYamlConfigError:
107 # Just to be explicit about what we intended to catch
108 raise
109
110 @property
111 def overwrite_existing(self) -> bool:
112 return self._overwrite_existing
113
114 @overwrite_existing.setter
115 def overwrite_existing(self, overwrite_existing: bool):
116 self._overwrite_existing = overwrite_existing
117
118 @property
119 def config(self) -> dict:
120 return self._config
121
122 def self_check(self, pretty_print: bool = True) -> dict:
123 # Provide visibility into parameters that ConfigurationStore was instantiated with.
124 report_object: dict = {"config": self.config}
125
126 if pretty_print:
127 print("Checking for existing keys...")
128
129 report_object["keys"] = sorted(
130 key.configuration_key for key in self.list_keys()
131 )
132
133 report_object["len_keys"] = len(report_object["keys"])
134 len_keys: int = report_object["len_keys"]
135
136 if pretty_print:
137 if report_object["len_keys"] == 0:
138 print(f"\t{len_keys} keys found")
139 else:
140 print(f"\t{len_keys} keys found:")
141 for key in report_object["keys"][:10]:
142 print("\t\t" + str(key))
143 if len_keys > 10:
144 print("\t\t...")
145 print()
146
147 self.serialization_self_check(pretty_print=pretty_print)
148
149 return report_object
150
151 def serialization_self_check(self, pretty_print: bool):
152 raise NotImplementedError
153
[end of great_expectations/data_context/store/configuration_store.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/data_context/store/configuration_store.py b/great_expectations/data_context/store/configuration_store.py
--- a/great_expectations/data_context/store/configuration_store.py
+++ b/great_expectations/data_context/store/configuration_store.py
@@ -63,8 +63,8 @@
# Store Backend Class was loaded successfully; verify that it is of a correct subclass.
if issubclass(store_backend_class, TupleStoreBackend):
# Provide defaults for this common case
- store_backend["filepath_template"] = store_backend.get(
- "filepath_template", "{0}.yml"
+ store_backend["filepath_suffix"] = store_backend.get(
+ "filepath_suffix", ".yml"
)
super().__init__(
| {"golden_diff": "diff --git a/great_expectations/data_context/store/configuration_store.py b/great_expectations/data_context/store/configuration_store.py\n--- a/great_expectations/data_context/store/configuration_store.py\n+++ b/great_expectations/data_context/store/configuration_store.py\n@@ -63,8 +63,8 @@\n # Store Backend Class was loaded successfully; verify that it is of a correct subclass.\n if issubclass(store_backend_class, TupleStoreBackend):\n # Provide defaults for this common case\n- store_backend[\"filepath_template\"] = store_backend.get(\n- \"filepath_template\", \"{0}.yml\"\n+ store_backend[\"filepath_suffix\"] = store_backend.get(\n+ \"filepath_suffix\", \".yml\"\n )\n \n super().__init__(\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import logging\nfrom typing import Optional\n\nfrom ruamel.yaml import YAML\nfrom ruamel.yaml.comments import CommentedMap\n\nimport great_expectations.exceptions as ge_exceptions\nfrom great_expectations.data_context.store import GeCloudStoreBackend\nfrom great_expectations.data_context.store.store import Store\nfrom great_expectations.data_context.store.tuple_store_backend import TupleStoreBackend\nfrom great_expectations.data_context.types.base import BaseYamlConfig\nfrom great_expectations.data_context.types.resource_identifiers import (\n ConfigurationIdentifier,\n)\nfrom great_expectations.data_context.util import load_class\nfrom great_expectations.util import (\n filter_properties_dict,\n verify_dynamic_loading_support,\n)\n\nyaml = YAML()\n\nyaml.indent(mapping=2, sequence=4, offset=2)\nyaml.default_flow_style = False\n\nlogger = logging.getLogger(__name__)\n\n\nclass ConfigurationStore(Store):\n\n \"\"\"\n Configuration Store provides a way to store any Marshmallow Schema compatible Configuration (using the YAML format).\n \"\"\"\n\n _key_class = ConfigurationIdentifier\n\n _configuration_class = BaseYamlConfig\n\n def __init__(\n self,\n store_name: str,\n store_backend: Optional[dict] = None,\n overwrite_existing: bool = False,\n runtime_environment: Optional[dict] = None,\n ):\n if not issubclass(self._configuration_class, BaseYamlConfig):\n raise ge_exceptions.DataContextError(\n \"Invalid configuration: A configuration_class needs to inherit from the BaseYamlConfig class.\"\n )\n\n if store_backend is not None:\n store_backend_module_name = store_backend.get(\n \"module_name\", \"great_expectations.data_context.store\"\n )\n store_backend_class_name = store_backend.get(\n \"class_name\", \"InMemoryStoreBackend\"\n )\n verify_dynamic_loading_support(module_name=store_backend_module_name)\n store_backend_class = load_class(\n store_backend_class_name, store_backend_module_name\n )\n\n # Store Backend Class was loaded successfully; verify that it is of a correct subclass.\n if issubclass(store_backend_class, TupleStoreBackend):\n # Provide defaults for this common case\n store_backend[\"filepath_template\"] = store_backend.get(\n \"filepath_template\", \"{0}.yml\"\n )\n\n super().__init__(\n store_backend=store_backend,\n runtime_environment=runtime_environment,\n store_name=store_name,\n )\n\n # Gather the call arguments of the present function (include the \"module_name\" and add the \"class_name\"), filter\n # out the Falsy values, and set the instance \"_config\" variable equal to the resulting dictionary.\n self._config = {\n \"store_name\": store_name,\n \"store_backend\": store_backend,\n 
\"overwrite_existing\": overwrite_existing,\n \"runtime_environment\": runtime_environment,\n \"module_name\": self.__class__.__module__,\n \"class_name\": self.__class__.__name__,\n }\n filter_properties_dict(properties=self._config, clean_falsy=True, inplace=True)\n\n self._overwrite_existing = overwrite_existing\n\n def remove_key(self, key):\n return self.store_backend.remove_key(key)\n\n def serialize(self, key, value):\n if self.ge_cloud_mode:\n # GeCloudStoreBackend expects a json str\n config_schema = value.get_schema_class()()\n return config_schema.dump(value)\n return value.to_yaml_str()\n\n def deserialize(self, key, value):\n config = value\n if isinstance(value, str):\n config: CommentedMap = yaml.load(value)\n try:\n return self._configuration_class.from_commented_map(commented_map=config)\n except ge_exceptions.InvalidBaseYamlConfigError:\n # Just to be explicit about what we intended to catch\n raise\n\n @property\n def overwrite_existing(self) -> bool:\n return self._overwrite_existing\n\n @overwrite_existing.setter\n def overwrite_existing(self, overwrite_existing: bool):\n self._overwrite_existing = overwrite_existing\n\n @property\n def config(self) -> dict:\n return self._config\n\n def self_check(self, pretty_print: bool = True) -> dict:\n # Provide visibility into parameters that ConfigurationStore was instantiated with.\n report_object: dict = {\"config\": self.config}\n\n if pretty_print:\n print(\"Checking for existing keys...\")\n\n report_object[\"keys\"] = sorted(\n key.configuration_key for key in self.list_keys()\n )\n\n report_object[\"len_keys\"] = len(report_object[\"keys\"])\n len_keys: int = report_object[\"len_keys\"]\n\n if pretty_print:\n if report_object[\"len_keys\"] == 0:\n print(f\"\\t{len_keys} keys found\")\n else:\n print(f\"\\t{len_keys} keys found:\")\n for key in report_object[\"keys\"][:10]:\n print(\"\\t\\t\" + str(key))\n if len_keys > 10:\n print(\"\\t\\t...\")\n print()\n\n self.serialization_self_check(pretty_print=pretty_print)\n\n return report_object\n\n def serialization_self_check(self, pretty_print: bool):\n raise NotImplementedError\n", "path": "great_expectations/data_context/store/configuration_store.py"}]} | 2,006 | 159 |
gh_patches_debug_22283 | rasdani/github-patches | git_diff | ansible__molecule-1913 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
destroy steps do not run when tests fail and tags are used in provisioner options
# Issue Type
- Bug report
# Molecule and Ansible details
```
pipenv run ansible --version && pipenv run molecule --version
ansible 2.7.10
config file = /Users/deric/.ansible.cfg
configured module search path = ['/Users/deric/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/deric/.local/share/virtualenvs/molecule-destroy-cLlJrhaP/lib/python3.7/site-packages/ansible
executable location = /Users/deric/.local/share/virtualenvs/molecule-destroy-cLlJrhaP/bin/ansible
python version = 3.7.3 (default, Apr 7 2019, 11:57:41) [Clang 10.0.0 (clang-1000.11.45.5)]
molecule, version 2.20.1.dev18
```
Molecule installation method (one of):
- pipenv (see `Pipfile` in [example repo](https://github.com/dericcrago/molecule-destroy) and `README.md` for usage)
Ansible installation method (one of):
- pipenv (see `Pipfile` in [example repo](https://github.com/dericcrago/molecule-destroy) and `README.md` for usage)
Detail any linters or test runners used:
# Desired Behavior
Destroy steps run when tests fail and `tags` are used in `provisioner: options`.
# Actual Behaviour
Destroy steps do not run and PLAY RECAP details are missing when tests fail and `tags` are used in `provisioner: options`.
See [example repo](https://github.com/dericcrago/molecule-destroy).
```yaml
provisioner:
name: ansible
lint:
name: ansible-lint
options:
tags: no-op
```
```
PLAYBOOK: destroy.yml **********************************************************
1 plays in /Users/deric/.local/share/virtualenvs/molecule-destroy-cLlJrhaP/src/molecule/molecule/provisioner/ansible/playbooks/docker/destroy.yml
PLAY [Destroy] *****************************************************************
META: ran handlers
META: ran handlers
META: ran handlers
PLAY RECAP *********************************************************************
```
</issue>
<code>
[start of molecule/command/base.py]
1 # Copyright (c) 2015-2018 Cisco Systems, Inc.
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to
5 # deal in the Software without restriction, including without limitation the
6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
7 # sell copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
19 # DEALINGS IN THE SOFTWARE.
20
21 import abc
22 import collections
23 import glob
24 import os
25
26 import six
27
28 import molecule.command
29 import molecule.scenarios
30 from molecule import config
31 from molecule import logger
32 from molecule import util
33
34 LOG = logger.get_logger(__name__)
35 MOLECULE_GLOB = os.environ.get('MOLECULE_GLOB', 'molecule/*/molecule.yml')
36 MOLECULE_DEFAULT_SCENARIO_NAME = 'default'
37
38
39 @six.add_metaclass(abc.ABCMeta)
40 class Base(object):
41 """
42 An abstract base class used to define the command interface.
43 """
44
45 def __init__(self, c):
46 """
47 Base initializer for all :ref:`Command` classes.
48
49 :param c: An instance of a Molecule config.
50 :returns: None
51 """
52 self._config = c
53 self._setup()
54
55 @abc.abstractmethod
56 def execute(self): # pragma: no cover
57 pass
58
59 def print_info(self):
60 msg = "Scenario: '{}'".format(self._config.scenario.name)
61 LOG.info(msg)
62 msg = "Action: '{}'".format(util.underscore(self.__class__.__name__))
63 LOG.info(msg)
64
65 def _setup(self):
66 """
67 Prepare Molecule's provisioner and returns None.
68
69 :return: None
70 """
71 self._config.provisioner.write_config()
72 self._config.provisioner.manage_inventory()
73
74
75 def execute_cmdline_scenarios(scenario_name, args, command_args):
76 """
77 Execute scenario sequences based on parsed command-line arguments.
78
79 This is useful for subcommands that run scenario sequences, which
80 excludes subcommands such as ``list``, ``login``, and ``matrix``.
81
82 ``args`` and ``command_args`` are combined using :func:`get_configs`
83 to generate the scenario(s) configuration.
84
85 :param scenario_name: Name of scenario to run, or ``None`` to run all.
86 :param args: ``args`` dict from ``click`` command context
87 :param command_args: dict of command argumentss, including the target
88 subcommand to execute
89 :returns: None
90
91 """
92 scenarios = molecule.scenarios.Scenarios(
93 get_configs(args, command_args), scenario_name)
94 scenarios.print_matrix()
95 for scenario in scenarios:
96 try:
97 execute_scenario(scenario)
98 except SystemExit:
99 # if the command has a 'destroy' arg, like test does,
100 # handle that behavior here.
101 if command_args.get('destroy') == 'always':
102 msg = ('An error occurred during the {} sequence action: '
103 "'{}'. Cleaning up.").format(scenario.config.subcommand,
104 scenario.config.action)
105 LOG.warn(msg)
106 execute_subcommand(scenario.config, 'cleanup')
107 execute_subcommand(scenario.config, 'destroy')
108 # always prune ephemeral dir if destroying on failure
109 scenario.prune()
110 util.sysexit()
111 else:
112 raise
113
114
115 def execute_subcommand(config, subcommand):
116 command_module = getattr(molecule.command, subcommand)
117 command = getattr(command_module, util.camelize(subcommand))
118
119 return command(config).execute()
120
121
122 def execute_scenario(scenario):
123 """
124 Execute each command in the given scenario's configured sequence.
125
126 :param scenario: The scenario to execute.
127 :returns: None
128
129 """
130
131 for action in scenario.sequence:
132 # knowledge of the current action is used by some provisioners
133 # to ensure they behave correctly during certain sequence steps,
134 # and is also used for reporting in execute_cmdline_scenarios
135 scenario.config.action = action
136 execute_subcommand(scenario.config, action)
137
138 # pruning only if a 'destroy' step was in the sequence allows for normal
139 # debugging by manually stepping through a scenario sequence
140 if 'destroy' in scenario.sequence:
141 scenario.prune()
142
143
144 def get_configs(args, command_args, ansible_args=()):
145 """
146 Glob the current directory for Molecule config files, instantiate config
147 objects, and returns a list.
148
149 :param args: A dict of options, arguments and commands from the CLI.
150 :param command_args: A dict of options passed to the subcommand from
151 the CLI.
152 :param ansible_args: An optional tuple of arguments provided to the
153 `ansible-playbook` command.
154 :return: list
155 """
156 configs = [
157 config.Config(
158 molecule_file=util.abs_path(c),
159 args=args,
160 command_args=command_args,
161 ansible_args=ansible_args,
162 ) for c in glob.glob(MOLECULE_GLOB)
163 ]
164 _verify_configs(configs)
165
166 return configs
167
168
169 def _verify_configs(configs):
170 """
171 Verify a Molecule config was found and returns None.
172
173 :param configs: A list containing absolute paths to Molecule config files.
174 :return: None
175 """
176 if configs:
177 scenario_names = [c.scenario.name for c in configs]
178 for scenario_name, n in collections.Counter(scenario_names).items():
179 if n > 1:
180 msg = ("Duplicate scenario name '{}' found. "
181 'Exiting.').format(scenario_name)
182 util.sysexit_with_message(msg)
183
184 else:
185 msg = "'{}' glob failed. Exiting.".format(MOLECULE_GLOB)
186 util.sysexit_with_message(msg)
187
188
189 def _get_subcommand(string):
190 return string.split('.')[-1]
191
[end of molecule/command/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/molecule/command/base.py b/molecule/command/base.py
--- a/molecule/command/base.py
+++ b/molecule/command/base.py
@@ -115,6 +115,11 @@
def execute_subcommand(config, subcommand):
command_module = getattr(molecule.command, subcommand)
command = getattr(command_module, util.camelize(subcommand))
+ # knowledge of the current action is used by some provisioners
+ # to ensure they behave correctly during certain sequence steps,
+ # particulary the setting of ansible options in create/destroy,
+ # and is also used for reporting in execute_cmdline_scenarios
+ config.action = subcommand
return command(config).execute()
@@ -129,10 +134,6 @@
"""
for action in scenario.sequence:
- # knowledge of the current action is used by some provisioners
- # to ensure they behave correctly during certain sequence steps,
- # and is also used for reporting in execute_cmdline_scenarios
- scenario.config.action = action
execute_subcommand(scenario.config, action)
# pruning only if a 'destroy' step was in the sequence allows for normal
| {"golden_diff": "diff --git a/molecule/command/base.py b/molecule/command/base.py\n--- a/molecule/command/base.py\n+++ b/molecule/command/base.py\n@@ -115,6 +115,11 @@\n def execute_subcommand(config, subcommand):\n command_module = getattr(molecule.command, subcommand)\n command = getattr(command_module, util.camelize(subcommand))\n+ # knowledge of the current action is used by some provisioners\n+ # to ensure they behave correctly during certain sequence steps,\n+ # particulary the setting of ansible options in create/destroy,\n+ # and is also used for reporting in execute_cmdline_scenarios\n+ config.action = subcommand\n \n return command(config).execute()\n \n@@ -129,10 +134,6 @@\n \"\"\"\n \n for action in scenario.sequence:\n- # knowledge of the current action is used by some provisioners\n- # to ensure they behave correctly during certain sequence steps,\n- # and is also used for reporting in execute_cmdline_scenarios\n- scenario.config.action = action\n execute_subcommand(scenario.config, action)\n \n # pruning only if a 'destroy' step was in the sequence allows for normal\n", "issue": "destroy steps do not run when tests fail and tags are used in provisioner options\n# Issue Type\r\n\r\n- Bug report\r\n\r\n# Molecule and Ansible details\r\n\r\n```\r\npipenv run ansible --version && pipenv run molecule --version\r\nansible 2.7.10\r\n config file = /Users/deric/.ansible.cfg\r\n configured module search path = ['/Users/deric/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /Users/deric/.local/share/virtualenvs/molecule-destroy-cLlJrhaP/lib/python3.7/site-packages/ansible\r\n executable location = /Users/deric/.local/share/virtualenvs/molecule-destroy-cLlJrhaP/bin/ansible\r\n python version = 3.7.3 (default, Apr 7 2019, 11:57:41) [Clang 10.0.0 (clang-1000.11.45.5)]\r\nmolecule, version 2.20.1.dev18\r\n```\r\n\r\nMolecule installation method (one of):\r\n\r\n- pipenv (see `Pipfile` in [example repo](https://github.com/dericcrago/molecule-destroy) and `README.md` for usage)\r\n\r\nAnsible installation method (one of):\r\n\r\n- pipenv (see `Pipfile` in [example repo](https://github.com/dericcrago/molecule-destroy) and `README.md` for usage)\r\n\r\nDetail any linters or test runners used:\r\n\r\n# Desired Behavior\r\n\r\nDestroy steps run when tests fail and `tags` are used in `provisioner: options`.\r\n\r\n# Actual Behaviour\r\n\r\nDestroy steps do not run and PLAY RECAP details are missing when tests fail and `tags` are used in `provisioner: options`.\r\n\r\nSee [example repo](https://github.com/dericcrago/molecule-destroy).\r\n\r\n```yaml\r\nprovisioner:\r\n name: ansible\r\n lint:\r\n name: ansible-lint\r\n options:\r\n tags: no-op\r\n```\r\n\r\n```\r\n PLAYBOOK: destroy.yml **********************************************************\r\n 1 plays in /Users/deric/.local/share/virtualenvs/molecule-destroy-cLlJrhaP/src/molecule/molecule/provisioner/ansible/playbooks/docker/destroy.yml\r\n \r\n PLAY [Destroy] *****************************************************************\r\n META: ran handlers\r\n META: ran handlers\r\n META: ran handlers\r\n \r\n PLAY RECAP *********************************************************************\r\n```\n", "before_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without 
limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\nimport abc\nimport collections\nimport glob\nimport os\n\nimport six\n\nimport molecule.command\nimport molecule.scenarios\nfrom molecule import config\nfrom molecule import logger\nfrom molecule import util\n\nLOG = logger.get_logger(__name__)\nMOLECULE_GLOB = os.environ.get('MOLECULE_GLOB', 'molecule/*/molecule.yml')\nMOLECULE_DEFAULT_SCENARIO_NAME = 'default'\n\n\[email protected]_metaclass(abc.ABCMeta)\nclass Base(object):\n \"\"\"\n An abstract base class used to define the command interface.\n \"\"\"\n\n def __init__(self, c):\n \"\"\"\n Base initializer for all :ref:`Command` classes.\n\n :param c: An instance of a Molecule config.\n :returns: None\n \"\"\"\n self._config = c\n self._setup()\n\n @abc.abstractmethod\n def execute(self): # pragma: no cover\n pass\n\n def print_info(self):\n msg = \"Scenario: '{}'\".format(self._config.scenario.name)\n LOG.info(msg)\n msg = \"Action: '{}'\".format(util.underscore(self.__class__.__name__))\n LOG.info(msg)\n\n def _setup(self):\n \"\"\"\n Prepare Molecule's provisioner and returns None.\n\n :return: None\n \"\"\"\n self._config.provisioner.write_config()\n self._config.provisioner.manage_inventory()\n\n\ndef execute_cmdline_scenarios(scenario_name, args, command_args):\n \"\"\"\n Execute scenario sequences based on parsed command-line arguments.\n\n This is useful for subcommands that run scenario sequences, which\n excludes subcommands such as ``list``, ``login``, and ``matrix``.\n\n ``args`` and ``command_args`` are combined using :func:`get_configs`\n to generate the scenario(s) configuration.\n\n :param scenario_name: Name of scenario to run, or ``None`` to run all.\n :param args: ``args`` dict from ``click`` command context\n :param command_args: dict of command argumentss, including the target\n subcommand to execute\n :returns: None\n\n \"\"\"\n scenarios = molecule.scenarios.Scenarios(\n get_configs(args, command_args), scenario_name)\n scenarios.print_matrix()\n for scenario in scenarios:\n try:\n execute_scenario(scenario)\n except SystemExit:\n # if the command has a 'destroy' arg, like test does,\n # handle that behavior here.\n if command_args.get('destroy') == 'always':\n msg = ('An error occurred during the {} sequence action: '\n \"'{}'. 
Cleaning up.\").format(scenario.config.subcommand,\n scenario.config.action)\n LOG.warn(msg)\n execute_subcommand(scenario.config, 'cleanup')\n execute_subcommand(scenario.config, 'destroy')\n # always prune ephemeral dir if destroying on failure\n scenario.prune()\n util.sysexit()\n else:\n raise\n\n\ndef execute_subcommand(config, subcommand):\n command_module = getattr(molecule.command, subcommand)\n command = getattr(command_module, util.camelize(subcommand))\n\n return command(config).execute()\n\n\ndef execute_scenario(scenario):\n \"\"\"\n Execute each command in the given scenario's configured sequence.\n\n :param scenario: The scenario to execute.\n :returns: None\n\n \"\"\"\n\n for action in scenario.sequence:\n # knowledge of the current action is used by some provisioners\n # to ensure they behave correctly during certain sequence steps,\n # and is also used for reporting in execute_cmdline_scenarios\n scenario.config.action = action\n execute_subcommand(scenario.config, action)\n\n # pruning only if a 'destroy' step was in the sequence allows for normal\n # debugging by manually stepping through a scenario sequence\n if 'destroy' in scenario.sequence:\n scenario.prune()\n\n\ndef get_configs(args, command_args, ansible_args=()):\n \"\"\"\n Glob the current directory for Molecule config files, instantiate config\n objects, and returns a list.\n\n :param args: A dict of options, arguments and commands from the CLI.\n :param command_args: A dict of options passed to the subcommand from\n the CLI.\n :param ansible_args: An optional tuple of arguments provided to the\n `ansible-playbook` command.\n :return: list\n \"\"\"\n configs = [\n config.Config(\n molecule_file=util.abs_path(c),\n args=args,\n command_args=command_args,\n ansible_args=ansible_args,\n ) for c in glob.glob(MOLECULE_GLOB)\n ]\n _verify_configs(configs)\n\n return configs\n\n\ndef _verify_configs(configs):\n \"\"\"\n Verify a Molecule config was found and returns None.\n\n :param configs: A list containing absolute paths to Molecule config files.\n :return: None\n \"\"\"\n if configs:\n scenario_names = [c.scenario.name for c in configs]\n for scenario_name, n in collections.Counter(scenario_names).items():\n if n > 1:\n msg = (\"Duplicate scenario name '{}' found. \"\n 'Exiting.').format(scenario_name)\n util.sysexit_with_message(msg)\n\n else:\n msg = \"'{}' glob failed. Exiting.\".format(MOLECULE_GLOB)\n util.sysexit_with_message(msg)\n\n\ndef _get_subcommand(string):\n return string.split('.')[-1]\n", "path": "molecule/command/base.py"}]} | 2,956 | 262 |
gh_patches_debug_5486 | rasdani/github-patches | git_diff | mars-project__mars-216 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG][TENSOR] Chunks with same shape tiled by ones as well as zeros generate different key
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Chunks with same shape tiled by `ones` as well as `zeros` generate different keys, they should be the same.
**To Reproduce**
```
In [1]: import mars.tensor as mt
In [2]: a = mt.ones((100, 100), chunk_size=50)
In [3]: a.tiles()
Out[3]: Tensor <op=TensorOnes, shape=(100, 100), key=a4095842f813f99d58aa7cd330815190>
In [4]: [c.op.key for c in a.chunks]
Out[4]:
['1fc86c1b04351958eb2fda3ce467f5bf',
'0c710bf1f48424bca05d12ee833a1234',
'78900ebd5fa9c9baeafd17469eb9d757',
'77942146d3fc310de36f12e1809dc41a']
```
</issue>
<code>
[start of mars/tensor/expressions/datasource/core.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2018 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import itertools
18
19 import numpy as np
20
21 from .... import opcodes as OperandDef
22 from ....operands import DataSource
23 from ....compat import izip
24 from ....config import options
25 from ....serialize import StringField
26 from ....utils import to_str
27 from ..utils import normalize_shape, decide_chunk_sizes
28 from ..core import TensorOperandMixin
29
30
31 class TensorDataSource(DataSource, TensorOperandMixin):
32 """
33 Tensor data source base class, provide universal tile logic,
34 subclass can overwrite tile method.
35 """
36
37 __slots__ = ()
38
39 def to_chunk_op(self, *args):
40 chunk_shape, idx, chunk_size = args
41 chunk_op = self.copy().reset_key()
42 chunk_op.params = {'size': chunk_shape, 'index': idx} # to make op key different
43 return chunk_op
44
45 @classmethod
46 def tile(cls, op):
47 tensor = op.outputs[0]
48
49 chunk_size = tensor.params.raw_chunk_size or options.tensor.chunk_size
50 chunk_size = decide_chunk_sizes(tensor.shape, chunk_size, tensor.dtype.itemsize)
51 chunk_size_idxes = (range(len(size)) for size in chunk_size)
52
53 out_chunks = []
54 for chunk_shape, chunk_idx in izip(itertools.product(*chunk_size),
55 itertools.product(*chunk_size_idxes)):
56 chunk_op = op.to_chunk_op(chunk_shape, chunk_idx, chunk_size)
57 out_chunk = chunk_op.new_chunk(None, chunk_shape, index=chunk_idx)
58 out_chunks.append(out_chunk)
59
60 new_op = op.copy()
61 return new_op.new_tensors(op.inputs, tensor.shape, chunks=out_chunks, nsplits=chunk_size)
62
63
64 class TensorNoInput(TensorDataSource):
65 """
66 Tensor operand with no inputs.
67 """
68
69 def check_inputs(self, inputs):
70 # no inputs
71 if inputs and len(inputs) > 0:
72 raise ValueError("Tensor data source has no inputs")
73
74 def calc_shape(self, *inputs_shape):
75 return self.outputs[0].shape
76
77 def _new_chunks(self, inputs, shape, index=None, output_limit=None, kws=None, **kw):
78 self.params['shape'] = shape # set shape to make the operand key different
79 return super(TensorNoInput, self)._new_chunks(
80 inputs, shape, index=index, output_limit=output_limit, kws=kws, **kw)
81
82 def _new_entities(self, inputs, shape, chunks=None, nsplits=None, output_limit=None,
83 kws=None, **kw):
84 self.params['shape'] = shape # set shape to make the operand key different
85 return super(TensorNoInput, self)._new_entities(
86 inputs, shape, chunks=chunks, nsplits=nsplits, output_limit=output_limit,
87 kws=kws, **kw)
88
89 def __call__(self, shape, chunk_size=None):
90 shape = normalize_shape(shape)
91 return self.new_tensor(None, shape, raw_chunk_size=chunk_size)
92
93
94 class TensorHasInput(TensorDataSource):
95 """
96 Tensor operand with a single input.
97 """
98
99 @property
100 def input(self):
101 return self._input
102
103 def check_inputs(self, inputs):
104 # no inputs
105 if len(inputs) != 1:
106 raise ValueError("Tensor can only have 1 input")
107
108 def _set_inputs(self, inputs):
109 super(TensorHasInput, self)._set_inputs(inputs)
110 self._input = self._inputs[0]
111
112 @classmethod
113 def tile(cls, op):
114 out_chunks = []
115 for c in op.input.chunks:
116 out_chunk = op.copy().reset_key().new_chunk([c], c.shape, index=c.index)
117 out_chunks.append(out_chunk)
118
119 new_op = op.copy()
120 return new_op.new_tensors(op.inputs, op.outputs[0].shape, chunks=out_chunks,
121 nsplits=op.input.nsplits)
122
123 def calc_shape(self, *inputs_shape):
124 return inputs_shape[0]
125
126 def __call__(self, a):
127 return self.new_tensor([a], a.shape)
128
129
130 class TensorLike(TensorHasInput):
131 def _set_inputs(self, inputs):
132 super(TensorLike, self)._set_inputs(inputs)
133 if self.dtype is None:
134 self._dtype = self.input.dtype
135 if self.gpu is None:
136 self._gpu = self.input.op.gpu
137
138 # FIXME: remove when cupy supports other dtypes
139 if self._gpu and self._dtype not in (np.float32, np.float64):
140 raise NotImplementedError('Sparse tensor on GPU only supports float32 and float64')
141
142
143 class TensorFetch(TensorNoInput):
144 _op_type_ = OperandDef.FETCH
145
146 _to_fetch_key = StringField('to_fetch_key', on_serialize=to_str)
147
148 def __init__(self, dtype=None, to_fetch_key=None, **kw):
149 super(TensorFetch, self).__init__(
150 _dtype=dtype, _to_fetch_key=to_fetch_key, **kw)
151
152 def _new_chunks(self, inputs, shape, index=None, output_limit=None, kws=None, **kw):
153 if '_key' in kw and self._to_fetch_key is None:
154 self._to_fetch_key = kw['_key']
155 return super(TensorFetch, self)._new_chunks(
156 inputs, shape, index=index, output_limit=output_limit, kws=kws, **kw)
157
158 def _new_entities(self, inputs, shape, chunks=None, nsplits=None, output_limit=None,
159 kws=None, **kw):
160 if '_key' in kw and self._to_fetch_key is None:
161 self._to_fetch_key = kw['_key']
162 return super(TensorFetch, self)._new_entities(
163 inputs, shape, chunks=chunks, nsplits=nsplits,
164 output_limit=output_limit, kws=kws, **kw)
165
166 @classmethod
167 def tile(cls, op):
168 raise NotImplementedError('Fetch tile cannot be handled by operand itself')
169
[end of mars/tensor/expressions/datasource/core.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/tensor/expressions/datasource/core.py b/mars/tensor/expressions/datasource/core.py
--- a/mars/tensor/expressions/datasource/core.py
+++ b/mars/tensor/expressions/datasource/core.py
@@ -37,9 +37,9 @@
__slots__ = ()
def to_chunk_op(self, *args):
- chunk_shape, idx, chunk_size = args
+ chunk_shape, _, chunk_size = args
chunk_op = self.copy().reset_key()
- chunk_op.params = {'size': chunk_shape, 'index': idx} # to make op key different
+ chunk_op.params = {'size': chunk_shape} # to make op key different
return chunk_op
@classmethod
| {"golden_diff": "diff --git a/mars/tensor/expressions/datasource/core.py b/mars/tensor/expressions/datasource/core.py\n--- a/mars/tensor/expressions/datasource/core.py\n+++ b/mars/tensor/expressions/datasource/core.py\n@@ -37,9 +37,9 @@\n __slots__ = ()\n \n def to_chunk_op(self, *args):\n- chunk_shape, idx, chunk_size = args\n+ chunk_shape, _, chunk_size = args\n chunk_op = self.copy().reset_key()\n- chunk_op.params = {'size': chunk_shape, 'index': idx} # to make op key different\n+ chunk_op.params = {'size': chunk_shape} # to make op key different\n return chunk_op\n \n @classmethod\n", "issue": "[BUG][TENSOR] Chunks with same shape tiled by ones as well as zeros generate different key\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\n\r\nChunks with same shape tiled by `ones` as well as `zeros` generate different keys, they should be the same.\r\n\r\n**To Reproduce**\r\n\r\n```\r\nIn [1]: import mars.tensor as mt \r\n\r\nIn [2]: a = mt.ones((100, 100), chunk_size=50) \r\n\r\nIn [3]: a.tiles() \r\nOut[3]: Tensor <op=TensorOnes, shape=(100, 100), key=a4095842f813f99d58aa7cd330815190>\r\n\r\nIn [4]: [c.op.key for c in a.chunks] \r\nOut[4]: \r\n['1fc86c1b04351958eb2fda3ce467f5bf',\r\n '0c710bf1f48424bca05d12ee833a1234',\r\n '78900ebd5fa9c9baeafd17469eb9d757',\r\n '77942146d3fc310de36f12e1809dc41a']\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2018 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport itertools\n\nimport numpy as np\n\nfrom .... 
import opcodes as OperandDef\nfrom ....operands import DataSource\nfrom ....compat import izip\nfrom ....config import options\nfrom ....serialize import StringField\nfrom ....utils import to_str\nfrom ..utils import normalize_shape, decide_chunk_sizes\nfrom ..core import TensorOperandMixin\n\n\nclass TensorDataSource(DataSource, TensorOperandMixin):\n \"\"\"\n Tensor data source base class, provide universal tile logic,\n subclass can overwrite tile method.\n \"\"\"\n\n __slots__ = ()\n\n def to_chunk_op(self, *args):\n chunk_shape, idx, chunk_size = args\n chunk_op = self.copy().reset_key()\n chunk_op.params = {'size': chunk_shape, 'index': idx} # to make op key different\n return chunk_op\n\n @classmethod\n def tile(cls, op):\n tensor = op.outputs[0]\n\n chunk_size = tensor.params.raw_chunk_size or options.tensor.chunk_size\n chunk_size = decide_chunk_sizes(tensor.shape, chunk_size, tensor.dtype.itemsize)\n chunk_size_idxes = (range(len(size)) for size in chunk_size)\n\n out_chunks = []\n for chunk_shape, chunk_idx in izip(itertools.product(*chunk_size),\n itertools.product(*chunk_size_idxes)):\n chunk_op = op.to_chunk_op(chunk_shape, chunk_idx, chunk_size)\n out_chunk = chunk_op.new_chunk(None, chunk_shape, index=chunk_idx)\n out_chunks.append(out_chunk)\n\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, tensor.shape, chunks=out_chunks, nsplits=chunk_size)\n\n\nclass TensorNoInput(TensorDataSource):\n \"\"\"\n Tensor operand with no inputs.\n \"\"\"\n\n def check_inputs(self, inputs):\n # no inputs\n if inputs and len(inputs) > 0:\n raise ValueError(\"Tensor data source has no inputs\")\n\n def calc_shape(self, *inputs_shape):\n return self.outputs[0].shape\n\n def _new_chunks(self, inputs, shape, index=None, output_limit=None, kws=None, **kw):\n self.params['shape'] = shape # set shape to make the operand key different\n return super(TensorNoInput, self)._new_chunks(\n inputs, shape, index=index, output_limit=output_limit, kws=kws, **kw)\n\n def _new_entities(self, inputs, shape, chunks=None, nsplits=None, output_limit=None,\n kws=None, **kw):\n self.params['shape'] = shape # set shape to make the operand key different\n return super(TensorNoInput, self)._new_entities(\n inputs, shape, chunks=chunks, nsplits=nsplits, output_limit=output_limit,\n kws=kws, **kw)\n\n def __call__(self, shape, chunk_size=None):\n shape = normalize_shape(shape)\n return self.new_tensor(None, shape, raw_chunk_size=chunk_size)\n\n\nclass TensorHasInput(TensorDataSource):\n \"\"\"\n Tensor operand with a single input.\n \"\"\"\n\n @property\n def input(self):\n return self._input\n\n def check_inputs(self, inputs):\n # no inputs\n if len(inputs) != 1:\n raise ValueError(\"Tensor can only have 1 input\")\n\n def _set_inputs(self, inputs):\n super(TensorHasInput, self)._set_inputs(inputs)\n self._input = self._inputs[0]\n\n @classmethod\n def tile(cls, op):\n out_chunks = []\n for c in op.input.chunks:\n out_chunk = op.copy().reset_key().new_chunk([c], c.shape, index=c.index)\n out_chunks.append(out_chunk)\n\n new_op = op.copy()\n return new_op.new_tensors(op.inputs, op.outputs[0].shape, chunks=out_chunks,\n nsplits=op.input.nsplits)\n\n def calc_shape(self, *inputs_shape):\n return inputs_shape[0]\n\n def __call__(self, a):\n return self.new_tensor([a], a.shape)\n\n\nclass TensorLike(TensorHasInput):\n def _set_inputs(self, inputs):\n super(TensorLike, self)._set_inputs(inputs)\n if self.dtype is None:\n self._dtype = self.input.dtype\n if self.gpu is None:\n self._gpu = self.input.op.gpu\n\n # 
FIXME: remove when cupy supports other dtypes\n if self._gpu and self._dtype not in (np.float32, np.float64):\n raise NotImplementedError('Sparse tensor on GPU only supports float32 and float64')\n\n\nclass TensorFetch(TensorNoInput):\n _op_type_ = OperandDef.FETCH\n\n _to_fetch_key = StringField('to_fetch_key', on_serialize=to_str)\n\n def __init__(self, dtype=None, to_fetch_key=None, **kw):\n super(TensorFetch, self).__init__(\n _dtype=dtype, _to_fetch_key=to_fetch_key, **kw)\n\n def _new_chunks(self, inputs, shape, index=None, output_limit=None, kws=None, **kw):\n if '_key' in kw and self._to_fetch_key is None:\n self._to_fetch_key = kw['_key']\n return super(TensorFetch, self)._new_chunks(\n inputs, shape, index=index, output_limit=output_limit, kws=kws, **kw)\n\n def _new_entities(self, inputs, shape, chunks=None, nsplits=None, output_limit=None,\n kws=None, **kw):\n if '_key' in kw and self._to_fetch_key is None:\n self._to_fetch_key = kw['_key']\n return super(TensorFetch, self)._new_entities(\n inputs, shape, chunks=chunks, nsplits=nsplits,\n output_limit=output_limit, kws=kws, **kw)\n\n @classmethod\n def tile(cls, op):\n raise NotImplementedError('Fetch tile cannot be handled by operand itself')\n", "path": "mars/tensor/expressions/datasource/core.py"}]} | 2,726 | 176 |
gh_patches_debug_17708 | rasdani/github-patches | git_diff | saleor__saleor-459 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Geoip conflicting with Python 3.5?
It appears that python-geoip won't work with the current setup (utilizing Python 3.5).
I get this stack trace:
```
Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/
Django Version: 1.9.1
Python Version: 3.5.1
Installed Applications:
['offsite_storage',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.sitemaps',
'django.contrib.sites',
'django.contrib.staticfiles',
'django.contrib.admin',
'django.contrib.auth',
'saleor.userprofile',
'saleor.product',
'saleor.cart',
'saleor.checkout',
'saleor.core',
'saleor.order',
'saleor.registration',
'saleor.dashboard',
'saleor.shipping',
'versatileimagefield',
'babeldjango',
'bootstrap3',
'django_prices',
'emailit',
'mptt',
'payments',
'selectable',
'materializecssform',
'rest_framework']
Installed Middleware:
['django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.locale.LocaleMiddleware',
'babeldjango.middleware.LocaleMiddleware',
'saleor.cart.middleware.CartMiddleware',
'saleor.core.middleware.DiscountMiddleware',
'saleor.core.middleware.GoogleAnalytics',
'saleor.core.middleware.CountryMiddleware',
'saleor.core.middleware.CurrencyMiddleware']
Traceback:
File "/home/eightbit/saleor/env/lib/python3.5/site-packages/django/core/handlers/base.py" in get_response
123. response = middleware_method(request)
File "/home/eightbit/saleor/saleor/saleor/core/middleware.py" in process_request
37. request.country = get_country_by_ip(request.META['REMOTE_ADDR'])
File "/home/eightbit/saleor/saleor/saleor/core/__init__.py" in get_country_by_ip
14. geo_data = geolite2.lookup(ip_address)
File "/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py" in lookup
364. return self._get_actual_db().lookup(ip_addr)
File "/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py" in _get_actual_db
350. rv = self._load_database()
File "/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py" in _load_database
342. return mod.loader(self, sys.modules[__name__])
File "/home/eightbit/saleor/env/lib/python3.5/site-packages/_geoip_geolite2/__init__.py" in loader
9. return mod.open_database(filename)
File "/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py" in open_database
508. md = _read_mmdb_metadata(buf)
File "/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py" in _read_mmdb_metadata
382. buf.size() - MMDB_METADATA_BLOCK_MAX_SIZE)
Exception Type: TypeError at /
Exception Value: a bytes-like object is required, not 'str'
```
python-geoip seems to be abandoned from looking at its GitHub page. Perhaps it should be replaced with something else?
</issue>
<code>
[start of saleor/core/__init__.py]
1 from __future__ import unicode_literals
2
3 from babel.numbers import get_territory_currencies
4 from django.conf import settings
5 from django_countries import countries
6 from django_countries.fields import Country
7 from geoip import geolite2
8
9 TOKEN_PATTERN = ('(?P<token>[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}'
10 '-[0-9a-z]{12})')
11
12
13 def get_country_by_ip(ip_address):
14 geo_data = geolite2.lookup(ip_address)
15 if geo_data and geo_data.country in countries:
16 return Country(geo_data.country)
17
18
19 def get_currency_for_country(country):
20 currencies = get_territory_currencies(country.code)
21 if len(currencies):
22 main_currency = currencies[0]
23 if main_currency in settings.AVAILABLE_CURRENCIES:
24 return main_currency
25 return settings.DEFAULT_CURRENCY
26
[end of saleor/core/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/core/__init__.py b/saleor/core/__init__.py
--- a/saleor/core/__init__.py
+++ b/saleor/core/__init__.py
@@ -4,16 +4,20 @@
from django.conf import settings
from django_countries import countries
from django_countries.fields import Country
-from geoip import geolite2
+from geolite2 import geolite2
TOKEN_PATTERN = ('(?P<token>[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}'
'-[0-9a-z]{12})')
def get_country_by_ip(ip_address):
- geo_data = geolite2.lookup(ip_address)
- if geo_data and geo_data.country in countries:
- return Country(geo_data.country)
+ reader = geolite2.reader()
+ geo_data = reader.get(ip_address)
+ geolite2.close()
+ if geo_data and 'country' in geo_data and 'iso_code' in geo_data['country']:
+ country_iso_code = geo_data['country']['iso_code']
+ if country_iso_code in countries:
+ return Country(country_iso_code)
def get_currency_for_country(country):
| {"golden_diff": "diff --git a/saleor/core/__init__.py b/saleor/core/__init__.py\n--- a/saleor/core/__init__.py\n+++ b/saleor/core/__init__.py\n@@ -4,16 +4,20 @@\n from django.conf import settings\n from django_countries import countries\n from django_countries.fields import Country\n-from geoip import geolite2\n+from geolite2 import geolite2\n \n TOKEN_PATTERN = ('(?P<token>[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}'\n '-[0-9a-z]{12})')\n \n \n def get_country_by_ip(ip_address):\n- geo_data = geolite2.lookup(ip_address)\n- if geo_data and geo_data.country in countries:\n- return Country(geo_data.country)\n+ reader = geolite2.reader()\n+ geo_data = reader.get(ip_address)\n+ geolite2.close()\n+ if geo_data and 'country' in geo_data and 'iso_code' in geo_data['country']:\n+ country_iso_code = geo_data['country']['iso_code']\n+ if country_iso_code in countries:\n+ return Country(country_iso_code)\n \n \n def get_currency_for_country(country):\n", "issue": "Geoip conflicting with Python 3.5?\nIt appears that python-geoip won't work with the current setup (utilizing Python 3.5).\nI get this stack trace:\n\n```\nEnvironment:\n\nRequest Method: GET\nRequest URL: http://127.0.0.1:8000/\n\nDjango Version: 1.9.1\nPython Version: 3.5.1\nInstalled Applications:\n['offsite_storage',\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.messages',\n 'django.contrib.sitemaps',\n 'django.contrib.sites',\n 'django.contrib.staticfiles',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'saleor.userprofile',\n 'saleor.product',\n 'saleor.cart',\n 'saleor.checkout',\n 'saleor.core',\n 'saleor.order',\n 'saleor.registration',\n 'saleor.dashboard',\n 'saleor.shipping',\n 'versatileimagefield',\n 'babeldjango',\n 'bootstrap3',\n 'django_prices',\n 'emailit',\n 'mptt',\n 'payments',\n 'selectable',\n 'materializecssform',\n 'rest_framework']\nInstalled Middleware:\n['django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'babeldjango.middleware.LocaleMiddleware',\n 'saleor.cart.middleware.CartMiddleware',\n 'saleor.core.middleware.DiscountMiddleware',\n 'saleor.core.middleware.GoogleAnalytics',\n 'saleor.core.middleware.CountryMiddleware',\n 'saleor.core.middleware.CurrencyMiddleware']\n\n\n\nTraceback:\n\nFile \"/home/eightbit/saleor/env/lib/python3.5/site-packages/django/core/handlers/base.py\" in get_response\n 123. response = middleware_method(request)\n\nFile \"/home/eightbit/saleor/saleor/saleor/core/middleware.py\" in process_request\n 37. request.country = get_country_by_ip(request.META['REMOTE_ADDR'])\n\nFile \"/home/eightbit/saleor/saleor/saleor/core/__init__.py\" in get_country_by_ip\n 14. geo_data = geolite2.lookup(ip_address)\n\nFile \"/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py\" in lookup\n 364. return self._get_actual_db().lookup(ip_addr)\n\nFile \"/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py\" in _get_actual_db\n 350. rv = self._load_database()\n\nFile \"/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py\" in _load_database\n 342. return mod.loader(self, sys.modules[__name__])\n\nFile \"/home/eightbit/saleor/env/lib/python3.5/site-packages/_geoip_geolite2/__init__.py\" in loader\n 9. 
return mod.open_database(filename)\n\nFile \"/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py\" in open_database\n 508. md = _read_mmdb_metadata(buf)\n\nFile \"/home/eightbit/saleor/env/lib/python3.5/site-packages/geoip.py\" in _read_mmdb_metadata\n 382. buf.size() - MMDB_METADATA_BLOCK_MAX_SIZE)\n\nException Type: TypeError at /\nException Value: a bytes-like object is required, not 'str'\n```\n\npython-geoip seems to be abandoned from looking at its GitHub page. Perhaps it should be replaced with something else?\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom babel.numbers import get_territory_currencies\nfrom django.conf import settings\nfrom django_countries import countries\nfrom django_countries.fields import Country\nfrom geoip import geolite2\n\nTOKEN_PATTERN = ('(?P<token>[0-9a-z]{8}-[0-9a-z]{4}-[0-9a-z]{4}-[0-9a-z]{4}'\n '-[0-9a-z]{12})')\n\n\ndef get_country_by_ip(ip_address):\n geo_data = geolite2.lookup(ip_address)\n if geo_data and geo_data.country in countries:\n return Country(geo_data.country)\n\n\ndef get_currency_for_country(country):\n currencies = get_territory_currencies(country.code)\n if len(currencies):\n main_currency = currencies[0]\n if main_currency in settings.AVAILABLE_CURRENCIES:\n return main_currency\n return settings.DEFAULT_CURRENCY\n", "path": "saleor/core/__init__.py"}]} | 1,567 | 286 |
gh_patches_debug_10512 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-2244 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Model.load_from_checkpoint tries to open file path as URL and fail
## 🐛 Bug
load_from_checkpoint tries to classify whether the input is a file path or a URL; it detects the hard drive letter as a scheme and consequently misclassifies the input.
urllib.error.URLError: \<urlopen error unknown url type: d\>
My input:
D:\Prog\Projects\AceriNet\research_seed\checkpoints\acerinet\bnacerinet0_target=OVA_OK_penalized=None_loss_fn=ce_normalized=True_balanced=FalseFalse_seed=42_val_loss=0.374_val_auroc=0.9041_v0.ckpt
### To Reproduce
Steps to reproduce the behavior:
Use any path that starts with a hard drive letter (Windows formatting) for pl.LightningModule().load_from_checkpoint(path)
</issue>
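A minimal sketch of why the scheme check misfires on Windows-style paths (path shortened for illustration):

```python
from pathlib import Path
from urllib.parse import urlparse

ckpt = r"D:\Prog\Projects\AceriNet\research_seed\checkpoints\model.ckpt"  # illustrative path
print(urlparse(ckpt).scheme)  # 'd' -- the drive letter is parsed as a URL scheme
# Checking whether the string points at an existing local file sidesteps the URL branch:
print(Path(ckpt).is_file())   # True for a real checkpoint on disk
```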
<code>
[start of pytorch_lightning/utilities/cloud_io.py]
1 import torch
2
3 from urllib.parse import urlparse
4
5
6 def load(path_or_url: str, map_location=None):
7 parsed = urlparse(path_or_url)
8 if parsed.scheme == '':
9 # local file
10 return torch.load(path_or_url, map_location=map_location)
11 return torch.hub.load_state_dict_from_url(path_or_url, map_location=map_location)
12
[end of pytorch_lightning/utilities/cloud_io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_lightning/utilities/cloud_io.py b/pytorch_lightning/utilities/cloud_io.py
--- a/pytorch_lightning/utilities/cloud_io.py
+++ b/pytorch_lightning/utilities/cloud_io.py
@@ -1,11 +1,12 @@
import torch
+from pathlib import Path
from urllib.parse import urlparse
def load(path_or_url: str, map_location=None):
parsed = urlparse(path_or_url)
- if parsed.scheme == '':
- # local file
+ if parsed.scheme == '' or Path(path_or_url).is_file():
+ # no scheme or local file
return torch.load(path_or_url, map_location=map_location)
return torch.hub.load_state_dict_from_url(path_or_url, map_location=map_location)
| {"golden_diff": "diff --git a/pytorch_lightning/utilities/cloud_io.py b/pytorch_lightning/utilities/cloud_io.py\n--- a/pytorch_lightning/utilities/cloud_io.py\n+++ b/pytorch_lightning/utilities/cloud_io.py\n@@ -1,11 +1,12 @@\n import torch\n \n+from pathlib import Path\n from urllib.parse import urlparse\n \n \n def load(path_or_url: str, map_location=None):\n parsed = urlparse(path_or_url)\n- if parsed.scheme == '':\n- # local file\n+ if parsed.scheme == '' or Path(path_or_url).is_file():\n+ # no scheme or local file\n return torch.load(path_or_url, map_location=map_location)\n return torch.hub.load_state_dict_from_url(path_or_url, map_location=map_location)\n", "issue": "Model.load_from_checkpoint tries to open file path as URL and fail\n## \ud83d\udc1b Bug\r\n load_from_checkpoint tries to classify if the input is a file path or an URL and detects the hard drive letter as a scheme and then classify wrongly the input because of this.\r\nurllib.error.URLError: \\<urlopen error unknown url type: d\\>\r\n\r\nMy input:\r\nD:\\Prog\\Projects\\AceriNet\\research_seed\\checkpoints\\acerinet\\bnacerinet0_target=OVA_OK_penalized=None_loss_fn=ce_normalized=True_balanced=FalseFalse_seed=42_val_loss=0.374_val_auroc=0.9041_v0.ckpt\r\n\r\n### To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\nUse any path with a hard drive letter at the start (windows formatting) for pl.LightningModule().load_from_checkpoint(path)\r\n\n", "before_files": [{"content": "import torch\n\nfrom urllib.parse import urlparse\n\n\ndef load(path_or_url: str, map_location=None):\n parsed = urlparse(path_or_url)\n if parsed.scheme == '':\n # local file\n return torch.load(path_or_url, map_location=map_location)\n return torch.hub.load_state_dict_from_url(path_or_url, map_location=map_location)\n", "path": "pytorch_lightning/utilities/cloud_io.py"}]} | 820 | 168 |
gh_patches_debug_1341 | rasdani/github-patches | git_diff | ivy-llc__ivy-18341 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
leaky_relu
Paddle Frontend
</issue>
<code>
[start of ivy/functional/frontends/paddle/nn/functional/activation.py]
1 # local
2 import ivy
3 from ivy.func_wrapper import with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
5 from ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh
6
7
8 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
9 @to_ivy_arrays_and_back
10 def selu(
11 x,
12 /,
13 *,
14 alpha=1.6732632423543772848170429916717,
15 scale=1.0507009873554804934193349852946,
16 name=None,
17 ):
18 if scale <= 1.0:
19 raise ValueError(f"The scale must be greater than 1.0. Received: {scale}.")
20
21 if alpha < 0:
22 raise ValueError(f"The alpha must be no less than zero. Received: {alpha}.")
23
24 ret = ivy.where(x > 0, x, alpha * ivy.expm1(x))
25 arr = scale * ret
26 return ivy.astype(arr, x.dtype)
27
28
29 tanh = paddle_tanh
30
31
32 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
33 @to_ivy_arrays_and_back
34 def hardshrink(x, threshold=0.5, name=None):
35 mask = ivy.logical_or(ivy.greater(x, threshold), ivy.less(x, -threshold))
36 return ivy.where(mask, x, 0.0)
37
38
39 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
40 @to_ivy_arrays_and_back
41 def hardswish(x, name=None):
42 relu6_val = ivy.relu6(ivy.add(x, 3))
43 ret = ivy.multiply(x, ivy.divide(relu6_val, 6))
44 return ret
45
46
47 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
48 @to_ivy_arrays_and_back
49 def hardtanh(
50 x,
51 /,
52 *,
53 min=-1.0,
54 max=1.0,
55 name=None,
56 ):
57 less = ivy.where(ivy.less(x, min), min, x)
58 ret = ivy.where(ivy.greater(x, max), max, less).astype(x.dtype)
59 return ret
60
61
62 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
63 @to_ivy_arrays_and_back
64 def gelu(x, approximate=False, name=None):
65 return ivy.gelu(x, approximate=approximate)
66
67
68 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
69 @to_ivy_arrays_and_back
70 def hardsigmoid(x, slope=0.1666667, offset=0.5, name=None):
71 ret = ivy.minimum(ivy.maximum(ivy.add(ivy.multiply(x, slope), offset), 0), 1)
72 return ret
73
74
75 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
76 @to_ivy_arrays_and_back
77 def relu6(x, name=None):
78 return ivy.relu6(x)
79
80
81 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
82 @to_ivy_arrays_and_back
83 def softshrink(
84 x,
85 /,
86 *,
87 threshold=0.5,
88 name=None,
89 ):
90 low = ivy.where(ivy.less(x, -threshold), ivy.add(x, threshold), 0)
91 up = ivy.where(ivy.greater(x, threshold), ivy.subtract(x, threshold), 0)
92 add = ivy.add(low, up)
93 return ivy.astype(add, x.dtype)
94
95
96 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
97 @to_ivy_arrays_and_back
98 def softsign(
99 x,
100 /,
101 *,
102 name=None,
103 ):
104 return ivy.divide(x, ivy.add(1, ivy.abs(x)))
105
106
107 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
108 @to_ivy_arrays_and_back
109 def log_softmax(x, axis=-1, dtype=None, name=None):
110 x = ivy.astype(x, dtype) if dtype else x
111 ret = ivy.log_softmax(x, axis=axis)
112 ret = ivy.astype(ret, dtype) if dtype else ret
113 return ret
114
115
116 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
117 @to_ivy_arrays_and_back
118 def prelu(x, weight, data_format="NCHW", name=None):
119 return ivy.add(ivy.maximum(0, x), ivy.multiply(weight, ivy.minimum(0, x)))
120
121
122 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
123 @to_ivy_arrays_and_back
124 def celu(
125 x,
126 /,
127 *,
128 alpha=1.0,
129 name=None,
130 ):
131 prod = alpha * (ivy.exp(x / alpha) - 1)
132 ret = ivy.maximum(0, x) + ivy.minimum(0, prod)
133 return ret
134
135
136 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
137 @to_ivy_arrays_and_back
138 def rrelu(
139 x,
140 /,
141 *,
142 lower=0.125,
143 upper=0.3333333333333333,
144 training=False,
145 name=None,
146 ):
147 if lower < 0 or lower > 1:
148 raise ValueError(
149 "The lower value must be no less than zero or greater than one. Received:"
150 f" {lower}."
151 )
152
153 if upper < lower:
154 raise ValueError(
155 "The upper value must be greater than lower value. Received: lower"
156 f" {lower}, upper {upper}."
157 )
158
159 if upper > 1:
160 raise ValueError(
161 f"The upper value must be no greater than one. Received: {upper}."
162 )
163
164 is_test = not training
165 if is_test:
166 add = lower + upper
167 ret = add * x * 0.5
168 out = ivy.where(x >= 0, x, ret)
169 return out.astype(x.dtype)
170 # else:
171 # ToDo implement a correctly after fixing ivy.random_uniform
172 # a = ivy.random_normal(low=lower, high=upper)
173 # ret = ivy.where(x >= 0, x, ivy.multiply(a, x))
174 # return ret.astype(x.dtype)
175
176
177 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
178 @to_ivy_arrays_and_back
179 def tanhshrink(
180 x,
181 /,
182 *,
183 name=None,
184 ):
185 return ivy.subtract(x, ivy.tanh(x))
186
187
188 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
189 @to_ivy_arrays_and_back
190 def relu_(x, name=None):
191 ret = ivy.relu(x)
192 ivy.inplace_update(x, ret)
193 return x
194
195
196 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
197 @to_ivy_arrays_and_back
198 def elu(
199 x,
200 /,
201 *,
202 alpha=1.0,
203 name=None,
204 ):
205 return ivy.elu(x, alpha=alpha)
206
207
208 @with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
209 @to_ivy_arrays_and_back
210 def mish(x, name=None):
211 return ivy.mish(x)
212
[end of ivy/functional/frontends/paddle/nn/functional/activation.py]
</code>
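For illustration, a hedged sketch of what a `leaky_relu` frontend wrapper could look like, following the pattern of the other activations in that file (the decorator's dtype table and the default slope are assumptions):

```python
@with_supported_dtypes({"2.5.0 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def leaky_relu(x, negative_slope=0.01, name=None):
    # mirrors paddle.nn.functional.leaky_relu's signature and delegates to ivy
    return ivy.leaky_relu(x, alpha=negative_slope)
```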
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/paddle/nn/functional/activation.py b/ivy/functional/frontends/paddle/nn/functional/activation.py
--- a/ivy/functional/frontends/paddle/nn/functional/activation.py
+++ b/ivy/functional/frontends/paddle/nn/functional/activation.py
@@ -209,3 +209,8 @@
@to_ivy_arrays_and_back
def mish(x, name=None):
return ivy.mish(x)
+
+@with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
+@to_ivy_arrays_and_back
+def leaky_relu(x, negative_slope=0.01, name=None):
+ return ivy.leaky_relu(x)
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/nn/functional/activation.py b/ivy/functional/frontends/paddle/nn/functional/activation.py\n--- a/ivy/functional/frontends/paddle/nn/functional/activation.py\n+++ b/ivy/functional/frontends/paddle/nn/functional/activation.py\n@@ -209,3 +209,8 @@\n @to_ivy_arrays_and_back\n def mish(x, name=None):\n return ivy.mish(x)\n+\n+@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def leaky_relu(x, negative_slope=0.01, name=None):\n+ return ivy.leaky_relu(x)\n", "issue": "leaky_relu\nPaddle Frontend\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.functional.frontends.paddle.tensor.math import tanh as paddle_tanh\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef selu(\n x,\n /,\n *,\n alpha=1.6732632423543772848170429916717,\n scale=1.0507009873554804934193349852946,\n name=None,\n):\n if scale <= 1.0:\n raise ValueError(f\"The scale must be greater than 1.0. Received: {scale}.\")\n\n if alpha < 0:\n raise ValueError(f\"The alpha must be no less than zero. Received: {alpha}.\")\n\n ret = ivy.where(x > 0, x, alpha * ivy.expm1(x))\n arr = scale * ret\n return ivy.astype(arr, x.dtype)\n\n\ntanh = paddle_tanh\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef hardshrink(x, threshold=0.5, name=None):\n mask = ivy.logical_or(ivy.greater(x, threshold), ivy.less(x, -threshold))\n return ivy.where(mask, x, 0.0)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef hardswish(x, name=None):\n relu6_val = ivy.relu6(ivy.add(x, 3))\n ret = ivy.multiply(x, ivy.divide(relu6_val, 6))\n return ret\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef hardtanh(\n x,\n /,\n *,\n min=-1.0,\n max=1.0,\n name=None,\n):\n less = ivy.where(ivy.less(x, min), min, x)\n ret = ivy.where(ivy.greater(x, max), max, less).astype(x.dtype)\n return ret\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef gelu(x, approximate=False, name=None):\n return ivy.gelu(x, approximate=approximate)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef hardsigmoid(x, slope=0.1666667, offset=0.5, name=None):\n ret = ivy.minimum(ivy.maximum(ivy.add(ivy.multiply(x, slope), offset), 0), 1)\n return ret\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef relu6(x, name=None):\n return ivy.relu6(x)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef softshrink(\n x,\n /,\n *,\n threshold=0.5,\n name=None,\n):\n low = ivy.where(ivy.less(x, -threshold), ivy.add(x, threshold), 0)\n up = ivy.where(ivy.greater(x, threshold), ivy.subtract(x, threshold), 0)\n add = ivy.add(low, up)\n return ivy.astype(add, x.dtype)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef softsign(\n x,\n /,\n *,\n name=None,\n):\n return ivy.divide(x, ivy.add(1, 
ivy.abs(x)))\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log_softmax(x, axis=-1, dtype=None, name=None):\n x = ivy.astype(x, dtype) if dtype else x\n ret = ivy.log_softmax(x, axis=axis)\n ret = ivy.astype(ret, dtype) if dtype else ret\n return ret\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef prelu(x, weight, data_format=\"NCHW\", name=None):\n return ivy.add(ivy.maximum(0, x), ivy.multiply(weight, ivy.minimum(0, x)))\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef celu(\n x,\n /,\n *,\n alpha=1.0,\n name=None,\n):\n prod = alpha * (ivy.exp(x / alpha) - 1)\n ret = ivy.maximum(0, x) + ivy.minimum(0, prod)\n return ret\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef rrelu(\n x,\n /,\n *,\n lower=0.125,\n upper=0.3333333333333333,\n training=False,\n name=None,\n):\n if lower < 0 or lower > 1:\n raise ValueError(\n \"The lower value must be no less than zero or greater than one. Received:\"\n f\" {lower}.\"\n )\n\n if upper < lower:\n raise ValueError(\n \"The upper value must be greater than lower value. Received: lower\"\n f\" {lower}, upper {upper}.\"\n )\n\n if upper > 1:\n raise ValueError(\n f\"The upper value must be no greater than one. Received: {upper}.\"\n )\n\n is_test = not training\n if is_test:\n add = lower + upper\n ret = add * x * 0.5\n out = ivy.where(x >= 0, x, ret)\n return out.astype(x.dtype)\n # else:\n # ToDo implement a correctly after fixing ivy.random_uniform\n # a = ivy.random_normal(low=lower, high=upper)\n # ret = ivy.where(x >= 0, x, ivy.multiply(a, x))\n # return ret.astype(x.dtype)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanhshrink(\n x,\n /,\n *,\n name=None,\n):\n return ivy.subtract(x, ivy.tanh(x))\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef relu_(x, name=None):\n ret = ivy.relu(x)\n ivy.inplace_update(x, ret)\n return x\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef elu(\n x,\n /,\n *,\n alpha=1.0,\n name=None,\n):\n return ivy.elu(x, alpha=alpha)\n\n\n@with_supported_dtypes({\"2.5.0 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef mish(x, name=None):\n return ivy.mish(x)\n", "path": "ivy/functional/frontends/paddle/nn/functional/activation.py"}]} | 2,973 | 177 |
gh_patches_debug_34284 | rasdani/github-patches | git_diff | yt-dlp__yt-dlp-6025 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bfmtv.com is supported, rmc.bfmtv.com and rmcbfmplay.com are not.
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
France
### Example URLs
https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html
### Provide a description that is worded well enough to be understood
The site bfmtv.com is part of the supported sites.
e.g.: https://www.bfmtv.com/replay-emissions/l-interview/attal-la-reforme-des-retraites-est-pour-les-francais-de-classe-moyenne_VN-202301120207.html
There is now the subdomain (or "cousin domain") https://rmc.bfmtv.com, which is not supported
e.g.: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html
There may also be something to do for this one: https://www.rmcbfmplay.com
e.g.: https://www.rmcbfmplay.com/video/rmc-story/les-grandes-gueules/retraites-une-reforme-trop-light?contentId=Product::NEUF_NUM23_N23852986105527C&universe=PROVIDER
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.01.06 [6becd25] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.17763-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-93151-gff03418348, ffprobe N-93151-gff03418348
[debug] Optional libraries: Cryptodome-3.16.0, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4
[debug] Proxy map: {}
[debug] Loaded 1760 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.01.06, Current version: 2023.01.06
yt-dlp is up to date (2023.01.06)
[generic] Extracting URL: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html
[generic] retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1502, in wrapper
File "yt_dlp\YoutubeDL.py", line 1578, in __extract_info
File "yt_dlp\extractor\common.py", line 680, in extract
File "yt_dlp\extractor\generic.py", line 2523, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html
```
</issue>
<code>
[start of yt_dlp/extractor/bfmtv.py]
1 import re
2
3 from .common import InfoExtractor
4 from ..utils import extract_attributes
5
6
7 class BFMTVBaseIE(InfoExtractor):
8 _VALID_URL_BASE = r'https?://(?:www\.)?bfmtv\.com/'
9 _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\d{12})\.html'
10 _VIDEO_BLOCK_REGEX = r'(<div[^>]+class="video_block"[^>]*>)'
11 BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
12
13 def _brightcove_url_result(self, video_id, video_block):
14 account_id = video_block.get('accountid') or '876450612001'
15 player_id = video_block.get('playerid') or 'I2qBTln4u'
16 return self.url_result(
17 self.BRIGHTCOVE_URL_TEMPLATE % (account_id, player_id, video_id),
18 'BrightcoveNew', video_id)
19
20
21 class BFMTVIE(BFMTVBaseIE):
22 IE_NAME = 'bfmtv'
23 _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'V'
24 _TESTS = [{
25 'url': 'https://www.bfmtv.com/politique/emmanuel-macron-l-islam-est-une-religion-qui-vit-une-crise-aujourd-hui-partout-dans-le-monde_VN-202010020146.html',
26 'info_dict': {
27 'id': '6196747868001',
28 'ext': 'mp4',
29 'title': 'Emmanuel Macron: "L\'Islam est une religion qui vit une crise aujourd’hui, partout dans le monde"',
30 'description': 'Le Président s\'exprime sur la question du séparatisme depuis les Mureaux, dans les Yvelines.',
31 'uploader_id': '876450610001',
32 'upload_date': '20201002',
33 'timestamp': 1601629620,
34 },
35 }]
36
37 def _real_extract(self, url):
38 bfmtv_id = self._match_id(url)
39 webpage = self._download_webpage(url, bfmtv_id)
40 video_block = extract_attributes(self._search_regex(
41 self._VIDEO_BLOCK_REGEX, webpage, 'video block'))
42 return self._brightcove_url_result(video_block['videoid'], video_block)
43
44
45 class BFMTVLiveIE(BFMTVIE): # XXX: Do not subclass from concrete IE
46 IE_NAME = 'bfmtv:live'
47 _VALID_URL = BFMTVBaseIE._VALID_URL_BASE + '(?P<id>(?:[^/]+/)?en-direct)'
48 _TESTS = [{
49 'url': 'https://www.bfmtv.com/en-direct/',
50 'info_dict': {
51 'id': '5615950982001',
52 'ext': 'mp4',
53 'title': r're:^le direct BFMTV WEB \d{4}-\d{2}-\d{2} \d{2}:\d{2}$',
54 'uploader_id': '876450610001',
55 'upload_date': '20171018',
56 'timestamp': 1508329950,
57 },
58 'params': {
59 'skip_download': True,
60 },
61 }, {
62 'url': 'https://www.bfmtv.com/economie/en-direct/',
63 'only_matching': True,
64 }]
65
66
67 class BFMTVArticleIE(BFMTVBaseIE):
68 IE_NAME = 'bfmtv:article'
69 _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'A'
70 _TESTS = [{
71 'url': 'https://www.bfmtv.com/sante/covid-19-un-responsable-de-l-institut-pasteur-se-demande-quand-la-france-va-se-reconfiner_AV-202101060198.html',
72 'info_dict': {
73 'id': '202101060198',
74 'title': 'Covid-19: un responsable de l\'Institut Pasteur se demande "quand la France va se reconfiner"',
75 'description': 'md5:947974089c303d3ac6196670ae262843',
76 },
77 'playlist_count': 2,
78 }, {
79 'url': 'https://www.bfmtv.com/international/pour-bolsonaro-le-bresil-est-en-faillite-mais-il-ne-peut-rien-faire_AD-202101060232.html',
80 'only_matching': True,
81 }, {
82 'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',
83 'only_matching': True,
84 }]
85
86 def _real_extract(self, url):
87 bfmtv_id = self._match_id(url)
88 webpage = self._download_webpage(url, bfmtv_id)
89
90 entries = []
91 for video_block_el in re.findall(self._VIDEO_BLOCK_REGEX, webpage):
92 video_block = extract_attributes(video_block_el)
93 video_id = video_block.get('videoid')
94 if not video_id:
95 continue
96 entries.append(self._brightcove_url_result(video_id, video_block))
97
98 return self.playlist_result(
99 entries, bfmtv_id, self._og_search_title(webpage, fatal=False),
100 self._html_search_meta(['og:description', 'description'], webpage))
101
[end of yt_dlp/extractor/bfmtv.py]
</code>
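A quick, illustrative check of why the extractor's current `_VALID_URL_BASE` never matches the rmc subdomain:

```python
import re

base = r'https?://(?:www\.)?bfmtv\.com/'  # copied from BFMTVBaseIE above
print(re.match(base, 'https://www.bfmtv.com/politique/'))         # match object
print(re.match(base, 'https://rmc.bfmtv.com/replay-emissions/'))  # None: 'rmc.' is not allowed
# Widening the prefix, e.g. r'https?://(?:www\.|rmc\.)?bfmtv\.com/', would accept both hosts.
```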
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/yt_dlp/extractor/bfmtv.py b/yt_dlp/extractor/bfmtv.py
--- a/yt_dlp/extractor/bfmtv.py
+++ b/yt_dlp/extractor/bfmtv.py
@@ -5,7 +5,7 @@
class BFMTVBaseIE(InfoExtractor):
- _VALID_URL_BASE = r'https?://(?:www\.)?bfmtv\.com/'
+ _VALID_URL_BASE = r'https?://(?:www\.|rmc\.)?bfmtv\.com/'
_VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\d{12})\.html'
_VIDEO_BLOCK_REGEX = r'(<div[^>]+class="video_block"[^>]*>)'
BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'
@@ -31,6 +31,9 @@
'uploader_id': '876450610001',
'upload_date': '20201002',
'timestamp': 1601629620,
+ 'duration': 44.757,
+ 'tags': ['bfmactu', 'politique'],
+ 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876450610001/5041f4c1-bc48-4af8-a256-1b8300ad8ef0/cf2f9114-e8e2-4494-82b4-ab794ea4bc7d/1920x1080/match/image.jpg',
},
}]
@@ -81,6 +84,20 @@
}, {
'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',
'only_matching': True,
+ }, {
+ 'url': 'https://rmc.bfmtv.com/actualites/societe/transports/ce-n-est-plus-tout-rentable-le-bioethanol-e85-depasse-1eu-le-litre-des-automobilistes-regrettent_AV-202301100268.html',
+ 'info_dict': {
+ 'id': '6318445464112',
+ 'ext': 'mp4',
+ 'title': 'Le plein de bioéthanol fait de plus en plus mal à la pompe',
+ 'description': None,
+ 'uploader_id': '876630703001',
+ 'upload_date': '20230110',
+ 'timestamp': 1673341692,
+ 'duration': 109.269,
+ 'tags': ['rmc', 'show', 'apolline de malherbe', 'info', 'talk', 'matinale', 'radio'],
+ 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876630703001/5bef74b8-9d5e-4480-a21f-60c2e2480c46/96c88b74-f9db-45e1-8040-e199c5da216c/1920x1080/match/image.jpg'
+ }
}]
def _real_extract(self, url):
| {"golden_diff": "diff --git a/yt_dlp/extractor/bfmtv.py b/yt_dlp/extractor/bfmtv.py\n--- a/yt_dlp/extractor/bfmtv.py\n+++ b/yt_dlp/extractor/bfmtv.py\n@@ -5,7 +5,7 @@\n \n \n class BFMTVBaseIE(InfoExtractor):\n- _VALID_URL_BASE = r'https?://(?:www\\.)?bfmtv\\.com/'\n+ _VALID_URL_BASE = r'https?://(?:www\\.|rmc\\.)?bfmtv\\.com/'\n _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\\d{12})\\.html'\n _VIDEO_BLOCK_REGEX = r'(<div[^>]+class=\"video_block\"[^>]*>)'\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'\n@@ -31,6 +31,9 @@\n 'uploader_id': '876450610001',\n 'upload_date': '20201002',\n 'timestamp': 1601629620,\n+ 'duration': 44.757,\n+ 'tags': ['bfmactu', 'politique'],\n+ 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876450610001/5041f4c1-bc48-4af8-a256-1b8300ad8ef0/cf2f9114-e8e2-4494-82b4-ab794ea4bc7d/1920x1080/match/image.jpg',\n },\n }]\n \n@@ -81,6 +84,20 @@\n }, {\n 'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',\n 'only_matching': True,\n+ }, {\n+ 'url': 'https://rmc.bfmtv.com/actualites/societe/transports/ce-n-est-plus-tout-rentable-le-bioethanol-e85-depasse-1eu-le-litre-des-automobilistes-regrettent_AV-202301100268.html',\n+ 'info_dict': {\n+ 'id': '6318445464112',\n+ 'ext': 'mp4',\n+ 'title': 'Le plein de bio\u00e9thanol fait de plus en plus mal \u00e0 la pompe',\n+ 'description': None,\n+ 'uploader_id': '876630703001',\n+ 'upload_date': '20230110',\n+ 'timestamp': 1673341692,\n+ 'duration': 109.269,\n+ 'tags': ['rmc', 'show', 'apolline de malherbe', 'info', 'talk', 'matinale', 'radio'],\n+ 'thumbnail': 'https://cf-images.eu-west-1.prod.boltdns.net/v1/static/876630703001/5bef74b8-9d5e-4480-a21f-60c2e2480c46/96c88b74-f9db-45e1-8040-e199c5da216c/1920x1080/match/image.jpg'\n+ }\n }]\n \n def _real_extract(self, url):\n", "issue": "bfmtv.com is supported, rmc.bfmtv.com and rmcbfmplay.com are not. \n### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a new site support request\n- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required\n\n### Region\n\nFrance\n\n### Example URLs\n\nhttps://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html\r\n\n\n### Provide a description that is worded well enough to be understood\n\nThe site bfmtv.com is part of the supported sites. \r\n\r\ne.g.: https://www.bfmtv.com/replay-emissions/l-interview/attal-la-reforme-des-retraites-est-pour-les-francais-de-classe-moyenne_VN-202301120207.html\r\n\r\nThere is now the subdomain or \"cousin domain\" https://rmc.bfmtv.com which is not supported\r\n\r\ne.g.: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html\r\n\r\nMaybe there is something to do also with this one https://www.rmcbfmplay.com\r\n\r\ne.g.: https://www.rmcbfmplay.com/video/rmc-story/les-grandes-gueules/retraites-une-reforme-trop-light?contentId=Product::NEUF_NUM23_N23852986105527C&universe=PROVIDER\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', 'https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2023.01.06 [6becd25] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.17763-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg N-93151-gff03418348, ffprobe N-93151-gff03418348\r\n[debug] Optional libraries: Cryptodome-3.16.0, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1760 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2023.01.06, Current version: 2023.01.06\r\nyt-dlp is up to date (2023.01.06)\r\n[generic] Extracting URL: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html\r\n[generic] retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225: Extracting information\r\n[debug] Looking for embeds\r\nERROR: Unsupported URL: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\YoutubeDL.py\", line 1502, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1578, in __extract_info\r\n File \"yt_dlp\\extractor\\common.py\", 
line 680, in extract\r\n File \"yt_dlp\\extractor\\generic.py\", line 2523, in _real_extract\r\nyt_dlp.utils.UnsupportedError: Unsupported URL: https://rmc.bfmtv.com/replay-emissions/les-grandes-gueules/retraites-la-facon-dont-cette-loi-est-faite-ce-n-est-pas-bon-tacle-mehdi-ghezzar_VN-202301110225.html\n```\n\n", "before_files": [{"content": "import re\n\nfrom .common import InfoExtractor\nfrom ..utils import extract_attributes\n\n\nclass BFMTVBaseIE(InfoExtractor):\n _VALID_URL_BASE = r'https?://(?:www\\.)?bfmtv\\.com/'\n _VALID_URL_TMPL = _VALID_URL_BASE + r'(?:[^/]+/)*[^/?&#]+_%s[A-Z]-(?P<id>\\d{12})\\.html'\n _VIDEO_BLOCK_REGEX = r'(<div[^>]+class=\"video_block\"[^>]*>)'\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/%s_default/index.html?videoId=%s'\n\n def _brightcove_url_result(self, video_id, video_block):\n account_id = video_block.get('accountid') or '876450612001'\n player_id = video_block.get('playerid') or 'I2qBTln4u'\n return self.url_result(\n self.BRIGHTCOVE_URL_TEMPLATE % (account_id, player_id, video_id),\n 'BrightcoveNew', video_id)\n\n\nclass BFMTVIE(BFMTVBaseIE):\n IE_NAME = 'bfmtv'\n _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'V'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/politique/emmanuel-macron-l-islam-est-une-religion-qui-vit-une-crise-aujourd-hui-partout-dans-le-monde_VN-202010020146.html',\n 'info_dict': {\n 'id': '6196747868001',\n 'ext': 'mp4',\n 'title': 'Emmanuel Macron: \"L\\'Islam est une religion qui vit une crise aujourd\u2019hui, partout dans le monde\"',\n 'description': 'Le Pr\u00e9sident s\\'exprime sur la question du s\u00e9paratisme depuis les Mureaux, dans les Yvelines.',\n 'uploader_id': '876450610001',\n 'upload_date': '20201002',\n 'timestamp': 1601629620,\n },\n }]\n\n def _real_extract(self, url):\n bfmtv_id = self._match_id(url)\n webpage = self._download_webpage(url, bfmtv_id)\n video_block = extract_attributes(self._search_regex(\n self._VIDEO_BLOCK_REGEX, webpage, 'video block'))\n return self._brightcove_url_result(video_block['videoid'], video_block)\n\n\nclass BFMTVLiveIE(BFMTVIE): # XXX: Do not subclass from concrete IE\n IE_NAME = 'bfmtv:live'\n _VALID_URL = BFMTVBaseIE._VALID_URL_BASE + '(?P<id>(?:[^/]+/)?en-direct)'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/en-direct/',\n 'info_dict': {\n 'id': '5615950982001',\n 'ext': 'mp4',\n 'title': r're:^le direct BFMTV WEB \\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}$',\n 'uploader_id': '876450610001',\n 'upload_date': '20171018',\n 'timestamp': 1508329950,\n },\n 'params': {\n 'skip_download': True,\n },\n }, {\n 'url': 'https://www.bfmtv.com/economie/en-direct/',\n 'only_matching': True,\n }]\n\n\nclass BFMTVArticleIE(BFMTVBaseIE):\n IE_NAME = 'bfmtv:article'\n _VALID_URL = BFMTVBaseIE._VALID_URL_TMPL % 'A'\n _TESTS = [{\n 'url': 'https://www.bfmtv.com/sante/covid-19-un-responsable-de-l-institut-pasteur-se-demande-quand-la-france-va-se-reconfiner_AV-202101060198.html',\n 'info_dict': {\n 'id': '202101060198',\n 'title': 'Covid-19: un responsable de l\\'Institut Pasteur se demande \"quand la France va se reconfiner\"',\n 'description': 'md5:947974089c303d3ac6196670ae262843',\n },\n 'playlist_count': 2,\n }, {\n 'url': 'https://www.bfmtv.com/international/pour-bolsonaro-le-bresil-est-en-faillite-mais-il-ne-peut-rien-faire_AD-202101060232.html',\n 'only_matching': True,\n }, {\n 'url': 'https://www.bfmtv.com/sante/covid-19-oui-le-vaccin-de-pfizer-distribue-en-france-a-bien-ete-teste-sur-des-personnes-agees_AN-202101060275.html',\n 'only_matching': True,\n }]\n\n def 
_real_extract(self, url):\n bfmtv_id = self._match_id(url)\n webpage = self._download_webpage(url, bfmtv_id)\n\n entries = []\n for video_block_el in re.findall(self._VIDEO_BLOCK_REGEX, webpage):\n video_block = extract_attributes(video_block_el)\n video_id = video_block.get('videoid')\n if not video_id:\n continue\n entries.append(self._brightcove_url_result(video_id, video_block))\n\n return self.playlist_result(\n entries, bfmtv_id, self._og_search_title(webpage, fatal=False),\n self._html_search_meta(['og:description', 'description'], webpage))\n", "path": "yt_dlp/extractor/bfmtv.py"}]} | 3,769 | 897 |
gh_patches_debug_3956 | rasdani/github-patches | git_diff | facebookresearch__CompilerGym-736 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in llvm_rl benchmark when setting a max_benchmarks
https://github.com/facebookresearch/CompilerGym/blob/b803856257aa663b20a26e49125d5d603f84a9b9/examples/llvm_rl/model/benchmarks.py#L96-L98
should be
`n = min(self.max_benchmarks, n)`
</issue>
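To spell out the logic error (illustrative values): at that point `n` has just been set to `len(self.uris)`, so taking `min(len(self.uris), n)` changes nothing and the cap is never applied:

```python
uris = ["a", "b", "c", "d"]
max_benchmarks = 2

n = len(uris)               # 4
n = min(len(uris), n)       # still 4 -- the buggy line is a no-op
n = min(max_benchmarks, n)  # 2 -- the intended cap from the issue
```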
<code>
[start of examples/llvm_rl/model/benchmarks.py]
1 # Copyright (c) Facebook, Inc. and its affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5 from itertools import islice
6 from typing import Iterable, List, Union
7
8 from pydantic import BaseModel, Field, root_validator, validator
9
10 from compiler_gym.datasets import Benchmark, BenchmarkUri
11 from compiler_gym.envs import CompilerEnv
12
13
14 class Benchmarks(BaseModel):
15 """Represents a set of benchmarks to use for training/validation/testing.
16
17 There are two ways of describing benchmarks, either as a list of benchmark
18 URIs:
19
20 benchmarks:
21 uris:
22 - benchmark://cbench-v1/adpcm
23 - benchmark://cbench-v1/ghostscript
24
25 Or as a dataset to iterate over:
26
27 benchmarks:
28 dataset: benchmark://cbench-v1
29 max_benchmarks: 20
30 """
31
32 # === Start of fields list. ===
33
34 dataset: str = Field(default=None, allow_mutation=False)
35 """The name of a dataset to iterate over. If set, benchmarks are produced
36 by iterating over this dataset in order. If not set, the :code:`uris` list
37 must be provided.
38 """
39
40 uris: List[str] = Field(default=[], allow_mutation=False)
41 """A list of URIs to iterate over."""
42
43 max_benchmarks: int = Field(default=0, ge=0, allow_mutation=False)
44 """The maximum number of benchmarks to yield from the given dataset or URIs
45 list.
46 """
47
48 benchmarks_start_at: int = Field(default=0, ge=0, allow_mutation=False)
49 """An offset into the dataset or URIs list to start iterating from.
50
51 Note that using very large offsets will slow things down as the
52 implementation still has to iterate over the excluded benchmarks.
53 """
54
55 # === Start of public API. ===
56
57 def benchmarks_iterator(self, env: CompilerEnv) -> Iterable[Benchmark]:
58 """Return an iterator over the benchmarks."""
59 return self._benchmark_iterator(env)
60
61 def benchmark_uris_iterator(self, env: CompilerEnv) -> Iterable[str]:
62 """Return an iterator over the URIs of the benchmarks."""
63 return self._benchmark_iterator(env, uris=True)
64
65 # === Start of implementation details. ===
66
67 @root_validator
68 def check_that_either_dataset_or_uris_is_set(cls, values):
69 assert values.get("dataset") or values.get(
70 "uris"
71 ), "Neither dataset or uris given"
72 return values
73
74 @validator("uris", pre=True)
75 def validate_uris(cls, value, *, values, **kwargs):
76 del kwargs
77 for uri in value:
78 uri = BenchmarkUri.from_string(uri)
79 assert uri.scheme and uri.dataset, f"Invalid benchmark URI: {uri}"
80 return list(value)
81
82 def _benchmark_iterator(
83 self, env: CompilerEnv, uris: bool = False
84 ) -> Union[Iterable[Benchmark], Iterable[str]]:
85 return (
86 self._uris_iterator(env, uris)
87 if self.uris
88 else self._dataset_iterator(env, uris)
89 )
90
91 def _uris_iterator(
92 self, env: CompilerEnv, uris: bool = False
93 ) -> Union[Iterable[Benchmark], Iterable[str]]:
94 """Iterate from a URIs list."""
95 start = self.benchmarks_start_at
96 n = len(self.uris)
97 if self.max_benchmarks:
98 n = min(len(self.uris), n)
99
100 if uris:
101 # Shortcut in case we already have a list of URIs that we can slice
102 # rather than iterating over.
103 return iter(self.uris[start:n])
104
105 return islice((env.datasets.benchmark(u) for u in self.uris), start, start + n)
106
107 def _dataset_iterator(
108 self, env: CompilerEnv, uris: bool = False
109 ) -> Union[Iterable[Benchmark], Iterable[str]]:
110 """Iterate from a dataset name."""
111 dataset = env.datasets[self.dataset]
112 dataset.install()
113 n = dataset.size or self.max_benchmarks # dataset.size == 0 for inf
114 if self.max_benchmarks:
115 n = min(self.max_benchmarks, n)
116 start = self.benchmarks_start_at
117 iterator = dataset.benchmark_uris if uris else dataset.benchmarks
118 return islice(iterator(), start, start + n)
119
120 class Config:
121 validate_assignment = True
122
[end of examples/llvm_rl/model/benchmarks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/llvm_rl/model/benchmarks.py b/examples/llvm_rl/model/benchmarks.py
--- a/examples/llvm_rl/model/benchmarks.py
+++ b/examples/llvm_rl/model/benchmarks.py
@@ -95,7 +95,7 @@
start = self.benchmarks_start_at
n = len(self.uris)
if self.max_benchmarks:
- n = min(len(self.uris), n)
+ n = min(self.max_benchmarks, n)
if uris:
# Shortcut in case we already have a list of URIs that we can slice
| {"golden_diff": "diff --git a/examples/llvm_rl/model/benchmarks.py b/examples/llvm_rl/model/benchmarks.py\n--- a/examples/llvm_rl/model/benchmarks.py\n+++ b/examples/llvm_rl/model/benchmarks.py\n@@ -95,7 +95,7 @@\n start = self.benchmarks_start_at\n n = len(self.uris)\n if self.max_benchmarks:\n- n = min(len(self.uris), n)\n+ n = min(self.max_benchmarks, n)\n \n if uris:\n # Shortcut in case we already have a list of URIs that we can slice\n", "issue": "Bug in llvm_rl benchmark when setting a max_benchmarks\nhttps://github.com/facebookresearch/CompilerGym/blob/b803856257aa663b20a26e49125d5d603f84a9b9/examples/llvm_rl/model/benchmarks.py#L96-L98\r\n\r\nshould be\r\n\r\n`n = min(self.max_benchmarks, n)`\nBug in llvm_rl benchmark when setting a max_benchmarks\nhttps://github.com/facebookresearch/CompilerGym/blob/b803856257aa663b20a26e49125d5d603f84a9b9/examples/llvm_rl/model/benchmarks.py#L96-L98\r\n\r\nshould be\r\n\r\n`n = min(self.max_benchmarks, n)`\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\nfrom itertools import islice\nfrom typing import Iterable, List, Union\n\nfrom pydantic import BaseModel, Field, root_validator, validator\n\nfrom compiler_gym.datasets import Benchmark, BenchmarkUri\nfrom compiler_gym.envs import CompilerEnv\n\n\nclass Benchmarks(BaseModel):\n \"\"\"Represents a set of benchmarks to use for training/validation/testing.\n\n There are two ways of describing benchmarks, either as a list of benchmark\n URIs:\n\n benchmarks:\n uris:\n - benchmark://cbench-v1/adpcm\n - benchmark://cbench-v1/ghostscript\n\n Or as a dataset to iterate over:\n\n benchmarks:\n dataset: benchmark://cbench-v1\n max_benchmarks: 20\n \"\"\"\n\n # === Start of fields list. ===\n\n dataset: str = Field(default=None, allow_mutation=False)\n \"\"\"The name of a dataset to iterate over. If set, benchmarks are produced\n by iterating over this dataset in order. If not set, the :code:`uris` list\n must be provided.\n \"\"\"\n\n uris: List[str] = Field(default=[], allow_mutation=False)\n \"\"\"A list of URIs to iterate over.\"\"\"\n\n max_benchmarks: int = Field(default=0, ge=0, allow_mutation=False)\n \"\"\"The maximum number of benchmarks to yield from the given dataset or URIs\n list.\n \"\"\"\n\n benchmarks_start_at: int = Field(default=0, ge=0, allow_mutation=False)\n \"\"\"An offset into the dataset or URIs list to start iterating from.\n\n Note that using very large offsets will slow things down as the\n implementation still has to iterate over the excluded benchmarks.\n \"\"\"\n\n # === Start of public API. ===\n\n def benchmarks_iterator(self, env: CompilerEnv) -> Iterable[Benchmark]:\n \"\"\"Return an iterator over the benchmarks.\"\"\"\n return self._benchmark_iterator(env)\n\n def benchmark_uris_iterator(self, env: CompilerEnv) -> Iterable[str]:\n \"\"\"Return an iterator over the URIs of the benchmarks.\"\"\"\n return self._benchmark_iterator(env, uris=True)\n\n # === Start of implementation details. 
===\n\n @root_validator\n def check_that_either_dataset_or_uris_is_set(cls, values):\n assert values.get(\"dataset\") or values.get(\n \"uris\"\n ), \"Neither dataset or uris given\"\n return values\n\n @validator(\"uris\", pre=True)\n def validate_uris(cls, value, *, values, **kwargs):\n del kwargs\n for uri in value:\n uri = BenchmarkUri.from_string(uri)\n assert uri.scheme and uri.dataset, f\"Invalid benchmark URI: {uri}\"\n return list(value)\n\n def _benchmark_iterator(\n self, env: CompilerEnv, uris: bool = False\n ) -> Union[Iterable[Benchmark], Iterable[str]]:\n return (\n self._uris_iterator(env, uris)\n if self.uris\n else self._dataset_iterator(env, uris)\n )\n\n def _uris_iterator(\n self, env: CompilerEnv, uris: bool = False\n ) -> Union[Iterable[Benchmark], Iterable[str]]:\n \"\"\"Iterate from a URIs list.\"\"\"\n start = self.benchmarks_start_at\n n = len(self.uris)\n if self.max_benchmarks:\n n = min(len(self.uris), n)\n\n if uris:\n # Shortcut in case we already have a list of URIs that we can slice\n # rather than iterating over.\n return iter(self.uris[start:n])\n\n return islice((env.datasets.benchmark(u) for u in self.uris), start, start + n)\n\n def _dataset_iterator(\n self, env: CompilerEnv, uris: bool = False\n ) -> Union[Iterable[Benchmark], Iterable[str]]:\n \"\"\"Iterate from a dataset name.\"\"\"\n dataset = env.datasets[self.dataset]\n dataset.install()\n n = dataset.size or self.max_benchmarks # dataset.size == 0 for inf\n if self.max_benchmarks:\n n = min(self.max_benchmarks, n)\n start = self.benchmarks_start_at\n iterator = dataset.benchmark_uris if uris else dataset.benchmarks\n return islice(iterator(), start, start + n)\n\n class Config:\n validate_assignment = True\n", "path": "examples/llvm_rl/model/benchmarks.py"}]} | 1,985 | 135 |
gh_patches_debug_56909 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-920 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Tenseal necessary, makes installation impossible on Apple Silicon.
As discussed in #130, tenseal remains unavailable for Apple Silicon. The current NVFlare version (2.2.0) declares no optional extras, and tenseal is a required dependency, making installation impossible on Apple Silicon.
</issue>
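One common way to handle a platform-specific hard dependency like this is to move it into an optional extra so the base install succeeds without it. A minimal sketch of that approach, assuming an extra named `HE` and leaving the other pinned requirements untouched:

```python
# Hypothetical setup.py excerpt: tenseal moves out of install_requires into an
# optional extra, so `pip install nvflare` works on Apple Silicon while
# `pip install "nvflare[HE]"` still pulls in homomorphic-encryption support.
from setuptools import find_packages, setup

setup(
    name="nvflare",
    packages=find_packages(exclude=["tests", "tests.*"]),
    install_requires=[
        # ...the remaining pinned dependencies stay in the base install...
        "msgpack==1.0.3",
    ],
    extras_require={"HE": ["tenseal==0.3.0"]},
)
```

Code paths that need homomorphic encryption can then guard the import (`try: import tenseal / except ImportError`) and raise a clear message only when the feature is actually used.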
<code>
[start of setup.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import datetime
16 import os
17 import shutil
18
19 from setuptools import find_packages, setup
20
21 import versioneer
22
23 # read the contents of your README file
24 this_directory = os.path.abspath(os.path.dirname(__file__))
25 with open(os.path.join(this_directory, "README.md"), encoding="utf-8") as f:
26 long_description = f.read()
27
28 if os.path.exists(os.path.join(this_directory, "nvflare", "poc.zip")):
29 os.remove(os.path.join(this_directory, "nvflare", "poc.zip"))
30 shutil.make_archive(base_name="poc", format="zip", root_dir=os.path.join(this_directory, "nvflare"), base_dir="poc")
31 shutil.move("poc.zip", os.path.join(this_directory, "nvflare", "poc.zip"))
32
33 versions = versioneer.get_versions()
34 if versions["error"]:
35 today = datetime.date.today().timetuple()
36 year = today[0] % 1000
37 month = today[1]
38 day = today[2]
39 version = f"0.0.{year:02d}{month:02d}{day:02d}"
40 else:
41 version = versions["version"]
42
43 release = os.environ.get("NVFL_RELEASE")
44 if release == "1":
45 package_name = "nvflare"
46 else:
47 package_name = "nvflare-nightly"
48
49 setup(
50 name=package_name,
51 version=version,
52 cmdclass=versioneer.get_cmdclass(),
53 description="Federated Learning Application Runtime Environment",
54 url="https://github.com/NVIDIA/NVFlare",
55 package_dir={"nvflare": "nvflare"},
56 packages=find_packages(
57 where=".",
58 include=[
59 "*",
60 ],
61 exclude=["tests", "tests.*"],
62 ),
63 package_data={"": ["*.yml", "*.html", "poc.zip"]},
64 zip_safe=True,
65 license_files=("LICENSE",),
66 classifiers=[
67 "Programming Language :: Python :: 3.7",
68 "Programming Language :: Python :: 3.8",
69 "License :: OSI Approved :: Apache Software License",
70 "Operating System :: POSIX :: Linux",
71 ],
72 long_description=long_description,
73 long_description_content_type="text/markdown",
74 python_requires=">=3.7,<3.9",
75 install_requires=[
76 "cryptography>=36.0.0",
77 "Flask==2.1.2",
78 "Flask-JWT-Extended==4.4.3",
79 "Flask-SQLAlchemy==2.5.1",
80 "google-api-python-client==2.49.0",
81 "grpcio==1.46.3",
82 "gunicorn==20.1.0",
83 "numpy",
84 "protobuf==3.20.1",
85 "psutil==5.9.1",
86 "PyYAML==6.0",
87 "six>=1.15.0",
88 "tenseal==0.3.0",
89 "msgpack==1.0.3",
90 "docker>=6.0",
91 ],
92 entry_points={
93 "console_scripts": [
94 "provision=nvflare.lighter.provision:main",
95 "poc=nvflare.lighter.poc:main",
96 "nvflare=nvflare.cli:main",
97 "authz_preview=nvflare.fuel.hci.tools.authz_preview:main",
98 ],
99 },
100 )
101
102 os.remove(os.path.join(this_directory, "nvflare", "poc.zip"))
103
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -85,10 +85,10 @@
"psutil==5.9.1",
"PyYAML==6.0",
"six>=1.15.0",
- "tenseal==0.3.0",
"msgpack==1.0.3",
"docker>=6.0",
],
+ extras_require={"HE": ["tenseal==0.3.0"]},
entry_points={
"console_scripts": [
"provision=nvflare.lighter.provision:main",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -85,10 +85,10 @@\n \"psutil==5.9.1\",\n \"PyYAML==6.0\",\n \"six>=1.15.0\",\n- \"tenseal==0.3.0\",\n \"msgpack==1.0.3\",\n \"docker>=6.0\",\n ],\n+ extras_require={\"HE\": [\"tenseal==0.3.0\"]},\n entry_points={\n \"console_scripts\": [\n \"provision=nvflare.lighter.provision:main\",\n", "issue": "[BUG] Tenseal necessary, makes installation impossible on Apple Silicon.\n\r\nAs discussed in #130 tenseal remains unavailable for Apple Silicon. The current NVFlare version (2.2.0) has no optional features and tenseal is necessary, making installation impossible on Apple Silicon.\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport datetime\nimport os\nimport shutil\n\nfrom setuptools import find_packages, setup\n\nimport versioneer\n\n# read the contents of your README file\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nif os.path.exists(os.path.join(this_directory, \"nvflare\", \"poc.zip\")):\n os.remove(os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\nshutil.make_archive(base_name=\"poc\", format=\"zip\", root_dir=os.path.join(this_directory, \"nvflare\"), base_dir=\"poc\")\nshutil.move(\"poc.zip\", os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\n\nversions = versioneer.get_versions()\nif versions[\"error\"]:\n today = datetime.date.today().timetuple()\n year = today[0] % 1000\n month = today[1]\n day = today[2]\n version = f\"0.0.{year:02d}{month:02d}{day:02d}\"\nelse:\n version = versions[\"version\"]\n\nrelease = os.environ.get(\"NVFL_RELEASE\")\nif release == \"1\":\n package_name = \"nvflare\"\nelse:\n package_name = \"nvflare-nightly\"\n\nsetup(\n name=package_name,\n version=version,\n cmdclass=versioneer.get_cmdclass(),\n description=\"Federated Learning Application Runtime Environment\",\n url=\"https://github.com/NVIDIA/NVFlare\",\n package_dir={\"nvflare\": \"nvflare\"},\n packages=find_packages(\n where=\".\",\n include=[\n \"*\",\n ],\n exclude=[\"tests\", \"tests.*\"],\n ),\n package_data={\"\": [\"*.yml\", \"*.html\", \"poc.zip\"]},\n zip_safe=True,\n license_files=(\"LICENSE\",),\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX :: Linux\",\n ],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n python_requires=\">=3.7,<3.9\",\n install_requires=[\n \"cryptography>=36.0.0\",\n \"Flask==2.1.2\",\n \"Flask-JWT-Extended==4.4.3\",\n \"Flask-SQLAlchemy==2.5.1\",\n \"google-api-python-client==2.49.0\",\n \"grpcio==1.46.3\",\n \"gunicorn==20.1.0\",\n \"numpy\",\n \"protobuf==3.20.1\",\n \"psutil==5.9.1\",\n \"PyYAML==6.0\",\n \"six>=1.15.0\",\n \"tenseal==0.3.0\",\n \"msgpack==1.0.3\",\n 
\"docker>=6.0\",\n ],\n entry_points={\n \"console_scripts\": [\n \"provision=nvflare.lighter.provision:main\",\n \"poc=nvflare.lighter.poc:main\",\n \"nvflare=nvflare.cli:main\",\n \"authz_preview=nvflare.fuel.hci.tools.authz_preview:main\",\n ],\n },\n)\n\nos.remove(os.path.join(this_directory, \"nvflare\", \"poc.zip\"))\n", "path": "setup.py"}]} | 1,684 | 143 |
gh_patches_debug_38392 | rasdani/github-patches | git_diff | kivy__python-for-android-1343 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Numpy support w/ python3crystax
I'm trying to get my app to compile with Python 3 using the Crystax NDK. Everything is set up, and if I try to push my app to my device without specifying that numpy is needed, everything works fine. However, numpy raises a few issues:
1. (easily fixed) The recipe needs to be modified such that python3crystax is supported. Currently, it only claims python2 support, but that's overridden easily enough (`depends = ['python2']` -> `depends = [('python3crystax', 'python2')]` in the `__init__.py`).
2. The compiler claims it can't find -lcrystax:
``` sh
...
don't know how to compile Fortran code on platform 'posix'
C compiler: /usr/bin/ccache arm-linux-androideabi-gcc -DANDROID -mandroid -fomit-frame-pointer --sysroot /home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/platforms/android-19/arch-arm -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DANDROID -mandroid -fomit-frame-pointer --sysroot /home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/platforms/android-19/arch-arm -march=armv7-a -mfloat-abi=softfp -mfpu=vfp -mthumb -fPIC
compile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/home/wwoods/miniconda3/envs/kivy/include/python3.5m -c'
ccache: _configtest.c
/usr/bin/ccache arm-linux-androideabi-gcc -DANDROID -mandroid -fomit-frame-pointer --sysroot /home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/platforms/android-19/arch-arm _configtest.o -o _configtest
/home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/toolchains/arm-linux-androideabi-5/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/5.3/../../../../arm-linux-androideabi/bin/ld: error: cannot find -lcrystax
collect2: error: ld returned 1 exit status
/home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/toolchains/arm-linux-androideabi-5/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/5.3/../../../../arm-linux-androideabi/bin/ld: error: cannot find -lcrystax
collect2: error: ld returned 1 exit status
failure.
removing: _configtest.c _configtest.o
```
Everything except numpy works fine, so I think that it's just Numpy's recipe that needs a minor alteration. Oddly though, specifying a LIBRARY_PATH or LDFLAGS does not have any effect. In fact, if I add a `get_recipe_env` to the `NumpyRecipe` that prints out what `super().get_recipe_env(arch)` contains, it has LDFLAGS with the appropriate path in it. Running the linker by hand with the given flags finds the library fine (`ld $(LDFLAGS) -lcrystax --verbose`). In the context of python-for-android though, the linker always fails to find libcrystax.a.
Any help appreciated, thanks.
</issue>
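For the `-lcrystax` failure, the flags injected into `CC`/`LD` need an explicit `-L` (and `-I`) entry pointing at the CrystaX sources shipped with the NDK, since the platform sysroot alone does not contain `libcrystax`. A minimal sketch of composing such flags; the `sources/crystax/...` layout follows a stock CrystaX 10.3.2 install, and the helper name is illustrative rather than part of the recipe:

```python
import os


def crystax_flags(ndk_dir: str, abi: str = "armeabi-v7a") -> str:
    """Extra include/lib flags so the toolchain can resolve -lcrystax."""
    crystax_inc = os.path.join(ndk_dir, "sources", "crystax", "include")
    crystax_libs = os.path.join(ndk_dir, "sources", "crystax", "libs", abi)
    return " -I{} -L{}".format(crystax_inc, crystax_libs)


# Appending the same flags to both CC and LD mirrors what the recipe already
# does for the platform sysroot flags.
env = {"CC": "ccache arm-linux-androideabi-gcc", "LD": "arm-linux-androideabi-gcc"}
flags = crystax_flags("/path/to/crystax-ndk-10.3.2")
for var in ("CC", "LD"):
    if flags not in env[var]:
        env[var] += flags
```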
<code>
[start of pythonforandroid/recipes/numpy/__init__.py]
1 from pythonforandroid.recipe import CompiledComponentsPythonRecipe
2 from pythonforandroid.toolchain import warning
3
4
5 class NumpyRecipe(CompiledComponentsPythonRecipe):
6
7 version = '1.9.2'
8 url = 'https://pypi.python.org/packages/source/n/numpy/numpy-{version}.tar.gz'
9 site_packages_name= 'numpy'
10
11 depends = ['python2']
12
13 patches = ['patches/fix-numpy.patch',
14 'patches/prevent_libs_check.patch',
15 'patches/ar.patch',
16 'patches/lib.patch']
17
18 def get_recipe_env(self, arch):
19 """ looks like numpy has no proper -L flags. Code copied and adapted from
20 https://github.com/frmdstryr/p4a-numpy/
21 """
22
23 env = super(NumpyRecipe, self).get_recipe_env(arch)
24 #: Hack add path L to crystax as a CFLAG
25
26 py_ver = '3.5'
27 if {'python2crystax', 'python2'} & set(self.ctx.recipe_build_order):
28 py_ver = '2.7'
29
30 py_so = '2.7' if py_ver == '2.7' else '3.5m'
31
32 api_ver = self.ctx.android_api
33
34 platform = 'arm' if 'arm' in arch.arch else arch.arch
35 #: Not sure why but we have to inject these into the CC and LD env's for it to
36 #: use the correct arguments.
37 flags = " -L{ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}/usr/lib/" \
38 " --sysroot={ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}" \
39 .format(ctx=self.ctx, arch=arch, platform=platform, api_ver=api_ver,
40 py_so=py_so, py_ver=py_ver)
41 if flags not in env['CC']:
42 env['CC'] += flags
43 if flags not in env['LD']:
44 env['LD'] += flags + ' -shared'
45
46 return env
47
48 def prebuild_arch(self, arch):
49 super(NumpyRecipe, self).prebuild_arch(arch)
50
51 warning('Numpy is built assuming the archiver name is '
52 'arm-linux-androideabi-ar, which may not always be true!')
53
54
55 recipe = NumpyRecipe()
56
[end of pythonforandroid/recipes/numpy/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pythonforandroid/recipes/numpy/__init__.py b/pythonforandroid/recipes/numpy/__init__.py
--- a/pythonforandroid/recipes/numpy/__init__.py
+++ b/pythonforandroid/recipes/numpy/__init__.py
@@ -1,48 +1,48 @@
from pythonforandroid.recipe import CompiledComponentsPythonRecipe
from pythonforandroid.toolchain import warning
+from os.path import join
class NumpyRecipe(CompiledComponentsPythonRecipe):
- version = '1.9.2'
- url = 'https://pypi.python.org/packages/source/n/numpy/numpy-{version}.tar.gz'
- site_packages_name= 'numpy'
+ version = '1.15.1'
+ url = 'https://pypi.python.org/packages/source/n/numpy/numpy-{version}.zip'
+ site_packages_name = 'numpy'
- depends = ['python2']
+ depends = [('python2', 'python3crystax')]
- patches = ['patches/fix-numpy.patch',
- 'patches/prevent_libs_check.patch',
- 'patches/ar.patch',
- 'patches/lib.patch']
+ patches = [
+ join('patches', 'fix-numpy.patch'),
+ join('patches', 'prevent_libs_check.patch'),
+ join('patches', 'ar.patch'),
+ join('patches', 'lib.patch'),
+ join('patches', 'python2-fixes.patch')
+ ]
def get_recipe_env(self, arch):
- """ looks like numpy has no proper -L flags. Code copied and adapted from
- https://github.com/frmdstryr/p4a-numpy/
- """
-
env = super(NumpyRecipe, self).get_recipe_env(arch)
- #: Hack add path L to crystax as a CFLAG
-
- py_ver = '3.5'
- if {'python2crystax', 'python2'} & set(self.ctx.recipe_build_order):
- py_ver = '2.7'
- py_so = '2.7' if py_ver == '2.7' else '3.5m'
+ flags = " -L{} --sysroot={}".format(
+ join(self.ctx.ndk_platform, 'usr', 'lib'),
+ self.ctx.ndk_platform
+ )
+
+ if self.ctx.ndk == 'crystax':
+ py_ver = self.ctx.python_recipe.version[0:3]
+ src_dir = join(self.ctx.ndk_dir, 'sources')
+ py_inc_dir = join(src_dir, 'python', py_ver, 'include', 'python')
+ py_lib_dir = join(src_dir, 'python', py_ver, 'libs', arch.arch)
+ cry_inc_dir = join(src_dir, 'crystax', 'include')
+ cry_lib_dir = join(src_dir, 'crystax', 'libs', arch.arch)
+ flags += ' -I{}'.format(py_inc_dir)
+ flags += ' -L{} -lpython{}m'.format(py_lib_dir, py_ver)
+ flags += " -I{}".format(cry_inc_dir)
+ flags += " -L{}".format(cry_lib_dir)
- api_ver = self.ctx.android_api
-
- platform = 'arm' if 'arm' in arch.arch else arch.arch
- #: Not sure why but we have to inject these into the CC and LD env's for it to
- #: use the correct arguments.
- flags = " -L{ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}/usr/lib/" \
- " --sysroot={ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}" \
- .format(ctx=self.ctx, arch=arch, platform=platform, api_ver=api_ver,
- py_so=py_so, py_ver=py_ver)
if flags not in env['CC']:
env['CC'] += flags
if flags not in env['LD']:
env['LD'] += flags + ' -shared'
-
return env
def prebuild_arch(self, arch):
| {"golden_diff": "diff --git a/pythonforandroid/recipes/numpy/__init__.py b/pythonforandroid/recipes/numpy/__init__.py\n--- a/pythonforandroid/recipes/numpy/__init__.py\n+++ b/pythonforandroid/recipes/numpy/__init__.py\n@@ -1,48 +1,48 @@\n from pythonforandroid.recipe import CompiledComponentsPythonRecipe\n from pythonforandroid.toolchain import warning\n+from os.path import join\n \n \n class NumpyRecipe(CompiledComponentsPythonRecipe):\n \n- version = '1.9.2'\n- url = 'https://pypi.python.org/packages/source/n/numpy/numpy-{version}.tar.gz'\n- site_packages_name= 'numpy'\n+ version = '1.15.1'\n+ url = 'https://pypi.python.org/packages/source/n/numpy/numpy-{version}.zip'\n+ site_packages_name = 'numpy'\n \n- depends = ['python2']\n+ depends = [('python2', 'python3crystax')]\n \n- patches = ['patches/fix-numpy.patch',\n- 'patches/prevent_libs_check.patch',\n- 'patches/ar.patch',\n- 'patches/lib.patch']\n+ patches = [\n+ join('patches', 'fix-numpy.patch'),\n+ join('patches', 'prevent_libs_check.patch'),\n+ join('patches', 'ar.patch'),\n+ join('patches', 'lib.patch'),\n+ join('patches', 'python2-fixes.patch')\n+ ]\n \n def get_recipe_env(self, arch):\n- \"\"\" looks like numpy has no proper -L flags. Code copied and adapted from\n- https://github.com/frmdstryr/p4a-numpy/\n- \"\"\"\n-\n env = super(NumpyRecipe, self).get_recipe_env(arch)\n- #: Hack add path L to crystax as a CFLAG\n-\n- py_ver = '3.5'\n- if {'python2crystax', 'python2'} & set(self.ctx.recipe_build_order):\n- py_ver = '2.7'\n \n- py_so = '2.7' if py_ver == '2.7' else '3.5m'\n+ flags = \" -L{} --sysroot={}\".format(\n+ join(self.ctx.ndk_platform, 'usr', 'lib'),\n+ self.ctx.ndk_platform\n+ )\n+\n+ if self.ctx.ndk == 'crystax':\n+ py_ver = self.ctx.python_recipe.version[0:3]\n+ src_dir = join(self.ctx.ndk_dir, 'sources')\n+ py_inc_dir = join(src_dir, 'python', py_ver, 'include', 'python')\n+ py_lib_dir = join(src_dir, 'python', py_ver, 'libs', arch.arch)\n+ cry_inc_dir = join(src_dir, 'crystax', 'include')\n+ cry_lib_dir = join(src_dir, 'crystax', 'libs', arch.arch)\n+ flags += ' -I{}'.format(py_inc_dir)\n+ flags += ' -L{} -lpython{}m'.format(py_lib_dir, py_ver)\n+ flags += \" -I{}\".format(cry_inc_dir)\n+ flags += \" -L{}\".format(cry_lib_dir)\n \n- api_ver = self.ctx.android_api\n-\n- platform = 'arm' if 'arm' in arch.arch else arch.arch\n- #: Not sure why but we have to inject these into the CC and LD env's for it to\n- #: use the correct arguments.\n- flags = \" -L{ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}/usr/lib/\" \\\n- \" --sysroot={ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}\" \\\n- .format(ctx=self.ctx, arch=arch, platform=platform, api_ver=api_ver,\n- py_so=py_so, py_ver=py_ver)\n if flags not in env['CC']:\n env['CC'] += flags\n if flags not in env['LD']:\n env['LD'] += flags + ' -shared'\n-\n return env\n \n def prebuild_arch(self, arch):\n", "issue": "Numpy support w/ python3crystax\nI'm trying to get my app to compile with Python 3 using the Crystax NDK. Everything is set up, and if I try to push my app to my device without specifying that numpy is needed, everything works fine. However, numpy raises a few issues:\n1. (easily fixed) The recipe needs to be modified such that python3crystax is supported. Currently, it only claims python2 support, but that's overridden easily enough (`depends = ['python2']` -> `depends = [('python3crystax', 'python2)]` in the `__init__.py`).\n2. 
The compiler claims it can't find -lcrystax:\n\n``` sh\n...\ndon't know how to compile Fortran code on platform 'posix'\nC compiler: /usr/bin/ccache arm-linux-androideabi-gcc -DANDROID -mandroid -fomit-frame-pointer --sysroot /home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/platforms/android-19/arch-arm -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DANDROID -mandroid -fomit-frame-pointer --sysroot /home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/platforms/android-19/arch-arm -march=armv7-a -mfloat-abi=softfp -mfpu=vfp -mthumb -fPIC\n\ncompile options: '-Inumpy/core/src/private -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -Inumpy/core/include -I/home/wwoods/miniconda3/envs/kivy/include/python3.5m -c'\nccache: _configtest.c\n/usr/bin/ccache arm-linux-androideabi-gcc -DANDROID -mandroid -fomit-frame-pointer --sysroot /home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/platforms/android-19/arch-arm _configtest.o -o _configtest\n/home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/toolchains/arm-linux-androideabi-5/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/5.3/../../../../arm-linux-androideabi/bin/ld: error: cannot find -lcrystax\ncollect2: error: ld returned 1 exit status\n/home/wwoods/walt/dev/kivy/crystax-ndk-10.3.2/toolchains/arm-linux-androideabi-5/prebuilt/linux-x86_64/bin/../lib/gcc/arm-linux-androideabi/5.3/../../../../arm-linux-androideabi/bin/ld: error: cannot find -lcrystax\ncollect2: error: ld returned 1 exit status\nfailure.\nremoving: _configtest.c _configtest.o\n```\n\nEverything except numpy works fine, so I think that it's just Numpy's recipe that needs a minor alteration. Oddly though, specifying a LIBRARY_PATH or LDFLAGS does not have any effect. In fact, if I add a `get_recipe_env` to the `NumpyRecipe` that prints out what `super().get_recipe_env(arch)` contains, it has LDFLAGS with the appropriate path on there. Running the linker by hand with the given flags find the library fine (`ld $(LDFLAGS) -lcrystax --verbose`). In the context of python-for-android though, the linker always fails to find libcrystax.a.\n\nAny help appreciated, thanks.\n\n", "before_files": [{"content": "from pythonforandroid.recipe import CompiledComponentsPythonRecipe\nfrom pythonforandroid.toolchain import warning\n\n\nclass NumpyRecipe(CompiledComponentsPythonRecipe):\n\n version = '1.9.2'\n url = 'https://pypi.python.org/packages/source/n/numpy/numpy-{version}.tar.gz'\n site_packages_name= 'numpy'\n\n depends = ['python2']\n\n patches = ['patches/fix-numpy.patch',\n 'patches/prevent_libs_check.patch',\n 'patches/ar.patch',\n 'patches/lib.patch']\n\n def get_recipe_env(self, arch):\n \"\"\" looks like numpy has no proper -L flags. 
Code copied and adapted from\n https://github.com/frmdstryr/p4a-numpy/\n \"\"\"\n\n env = super(NumpyRecipe, self).get_recipe_env(arch)\n #: Hack add path L to crystax as a CFLAG\n\n py_ver = '3.5'\n if {'python2crystax', 'python2'} & set(self.ctx.recipe_build_order):\n py_ver = '2.7'\n\n py_so = '2.7' if py_ver == '2.7' else '3.5m'\n\n api_ver = self.ctx.android_api\n\n platform = 'arm' if 'arm' in arch.arch else arch.arch\n #: Not sure why but we have to inject these into the CC and LD env's for it to\n #: use the correct arguments.\n flags = \" -L{ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}/usr/lib/\" \\\n \" --sysroot={ctx.ndk_dir}/platforms/android-{api_ver}/arch-{platform}\" \\\n .format(ctx=self.ctx, arch=arch, platform=platform, api_ver=api_ver,\n py_so=py_so, py_ver=py_ver)\n if flags not in env['CC']:\n env['CC'] += flags\n if flags not in env['LD']:\n env['LD'] += flags + ' -shared'\n\n return env\n\n def prebuild_arch(self, arch):\n super(NumpyRecipe, self).prebuild_arch(arch)\n\n warning('Numpy is built assuming the archiver name is '\n 'arm-linux-androideabi-ar, which may not always be true!')\n\n\nrecipe = NumpyRecipe()\n", "path": "pythonforandroid/recipes/numpy/__init__.py"}]} | 1,982 | 905 |
gh_patches_debug_40248 | rasdani/github-patches | git_diff | AlexsLemonade__refinebio-1740 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[HOTFIX] Prevent compendia jobs from getting cleaned up and fix QN jobs
## Issue Number
#1728
#1726
#1727
</issue>
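Much of the fix comes down to two small guards: treating the `--quant-sf-only` option as a strict boolean and aggregating quantpendia per experiment rather than per species. A minimal, illustrative sketch (the function names are not part of the codebase):

```python
def parse_quant_sf_only(option) -> bool:
    """Only an explicit True enables quantpendium mode."""
    return option is True


def aggregation_for(quant_sf_only: bool) -> str:
    """Quantpendia are aggregated by experiment, full compendia by species."""
    return "EXPERIMENT" if quant_sf_only else "SPECIES"


assert aggregation_for(parse_quant_sf_only(True)) == "EXPERIMENT"
assert aggregation_for(parse_quant_sf_only(None)) == "SPECIES"
```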
<code>
[start of foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py]
1 import sys
2
3 from django.core.management.base import BaseCommand
4
5 from data_refinery_common.job_lookup import ProcessorPipeline
6 from data_refinery_common.logging import get_and_configure_logger
7 from data_refinery_common.message_queue import send_job
8 from data_refinery_common.models import (Dataset, Experiment, Organism,
9 ProcessorJob,
10 ProcessorJobDatasetAssociation)
11 from data_refinery_common.utils import queryset_iterator
12
13 logger = get_and_configure_logger(__name__)
14
15 def create_job_for_organism(organism=Organism, quant_sf_only=False, svd_algorithm='ARPACK'):
16 """Returns a compendia job for the provided organism.
17
18 Fetch all of the experiments and compile large but normally formated Dataset.
19 """
20 data = {}
21 experiments = Experiment.objects.filter(organisms=organism).prefetch_related('samples')
22
23 for experiment in queryset_iterator(experiments):
24 data[experiment.accession_code] = list(experiment.samples.filter(organism=organism).values_list('accession_code', flat=True))
25
26 job = ProcessorJob()
27 job.pipeline_applied = ProcessorPipeline.CREATE_COMPENDIA.value
28 job.save()
29
30 dset = Dataset()
31 dset.data = data
32 dset.scale_by = 'NONE'
33 dset.aggregate_by = 'SPECIES'
34 dset.quantile_normalize = False
35 dset.quant_sf_only = quant_sf_only
36 dset.svd_algorithm = svd_algorithm
37 dset.save()
38
39 pjda = ProcessorJobDatasetAssociation()
40 pjda.processor_job = job
41 pjda.dataset = dset
42 pjda.save()
43
44 return job
45
46
47 class Command(BaseCommand):
48
49 def add_arguments(self, parser):
50 parser.add_argument(
51 "--organisms",
52 type=str,
53 help=("Comma separated list of organism names."))
54
55 parser.add_argument(
56 "--quant-sf-only",
57 type=lambda x: x == "True",
58 help=("Whether to create a quantpendium or normal compendium."))
59
60 parser.add_argument(
61 "--svd-algorithm",
62 type=str,
63 help=("Specify SVD algorithm applied during imputation ARPACK, RANDOMIZED or NONE to skip."))
64
65 def handle(self, *args, **options):
66 """Create a compendium for one or more organisms.
67
68 If --organism is supplied will immediately create a compedium
69 for it. If not a new job will be dispatched for each organism
70 with enough microarray samples except for human and mouse.
71 """
72 if options["organisms"] is None:
73 all_organisms = Organism.objects.exclude(name__in=["HOMO_SAPIENS", "MUS_MUSCULUS"])
74 else:
75 organisms = options["organisms"].upper().replace(" ", "_").split(",")
76 all_organisms = Organism.objects.filter(name__in=organisms)
77
78 # I think we could just use options["quant_sf_only"] but I
79 # wanna make sure that values that are not True do not trigger
80 # a truthy evaluation.
81 quant_sf_only = False
82 if options["quant_sf_only"] is True:
83 quant_sf_only = True
84
85 # default algorithm to arpack until we decide that ranomized is preferred
86 svd_algorithm = 'NONE' if quant_sf_only else 'ARPACK'
87 if options["svd_algorithm"] in ['ARPACK', 'RANDOMIZED', 'NONE']:
88 svd_algorithm = options["svd_algorithm"]
89
90 logger.debug(all_organisms)
91
92 for organism in all_organisms:
93 logger.debug(organism)
94 job = create_job_for_organism(organism, quant_sf_only, svd_algorithm)
95 logger.info("Sending CREATE_COMPENDIA for Organism", job_id=str(job.pk), organism=str(organism))
96 send_job(ProcessorPipeline.CREATE_COMPENDIA, job)
97
98 sys.exit(0)
99
[end of foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py b/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py
--- a/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py
+++ b/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py
@@ -21,7 +21,8 @@
experiments = Experiment.objects.filter(organisms=organism).prefetch_related('samples')
for experiment in queryset_iterator(experiments):
- data[experiment.accession_code] = list(experiment.samples.filter(organism=organism).values_list('accession_code', flat=True))
+ data[experiment.accession_code] = list(experiment.samples.filter(organism=organism)\
+ .values_list('accession_code', flat=True))
job = ProcessorJob()
job.pipeline_applied = ProcessorPipeline.CREATE_COMPENDIA.value
@@ -30,7 +31,8 @@
dset = Dataset()
dset.data = data
dset.scale_by = 'NONE'
- dset.aggregate_by = 'SPECIES'
+ # The quantpendias should be aggregated by species
+ dset.aggregate_by = 'EXPERIMENT' if quant_sf_only else 'SPECIES'
dset.quantile_normalize = False
dset.quant_sf_only = quant_sf_only
dset.svd_algorithm = svd_algorithm
@@ -55,7 +57,7 @@
parser.add_argument(
"--quant-sf-only",
type=lambda x: x == "True",
- help=("Whether to create a quantpendium or normal compendium."))
+ help=("Whether to create a quantpendium or normal compendium. Quantpendium will be aggregated by EXPERIMENT"))
parser.add_argument(
"--svd-algorithm",
@@ -78,19 +80,16 @@
# I think we could just use options["quant_sf_only"] but I
# wanna make sure that values that are not True do not trigger
# a truthy evaluation.
- quant_sf_only = False
- if options["quant_sf_only"] is True:
- quant_sf_only = True
+ quant_sf_only = options["quant_sf_only"] is True
# default algorithm to arpack until we decide that ranomized is preferred
svd_algorithm = 'NONE' if quant_sf_only else 'ARPACK'
if options["svd_algorithm"] in ['ARPACK', 'RANDOMIZED', 'NONE']:
svd_algorithm = options["svd_algorithm"]
- logger.debug(all_organisms)
+ logger.debug('Generating compendia for organisms', organisms=all_organisms)
for organism in all_organisms:
- logger.debug(organism)
job = create_job_for_organism(organism, quant_sf_only, svd_algorithm)
logger.info("Sending CREATE_COMPENDIA for Organism", job_id=str(job.pk), organism=str(organism))
send_job(ProcessorPipeline.CREATE_COMPENDIA, job)
| {"golden_diff": "diff --git a/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py b/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py\n--- a/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py\n+++ b/foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py\n@@ -21,7 +21,8 @@\n experiments = Experiment.objects.filter(organisms=organism).prefetch_related('samples')\n \n for experiment in queryset_iterator(experiments):\n- data[experiment.accession_code] = list(experiment.samples.filter(organism=organism).values_list('accession_code', flat=True))\n+ data[experiment.accession_code] = list(experiment.samples.filter(organism=organism)\\\n+ .values_list('accession_code', flat=True))\n \n job = ProcessorJob()\n job.pipeline_applied = ProcessorPipeline.CREATE_COMPENDIA.value\n@@ -30,7 +31,8 @@\n dset = Dataset()\n dset.data = data\n dset.scale_by = 'NONE'\n- dset.aggregate_by = 'SPECIES'\n+ # The quantpendias should be aggregated by species\n+ dset.aggregate_by = 'EXPERIMENT' if quant_sf_only else 'SPECIES'\n dset.quantile_normalize = False\n dset.quant_sf_only = quant_sf_only\n dset.svd_algorithm = svd_algorithm\n@@ -55,7 +57,7 @@\n parser.add_argument(\n \"--quant-sf-only\",\n type=lambda x: x == \"True\",\n- help=(\"Whether to create a quantpendium or normal compendium.\"))\n+ help=(\"Whether to create a quantpendium or normal compendium. Quantpendium will be aggregated by EXPERIMENT\"))\n \n parser.add_argument(\n \"--svd-algorithm\",\n@@ -78,19 +80,16 @@\n # I think we could just use options[\"quant_sf_only\"] but I\n # wanna make sure that values that are not True do not trigger\n # a truthy evaluation.\n- quant_sf_only = False\n- if options[\"quant_sf_only\"] is True:\n- quant_sf_only = True\n+ quant_sf_only = options[\"quant_sf_only\"] is True\n \n # default algorithm to arpack until we decide that ranomized is preferred\n svd_algorithm = 'NONE' if quant_sf_only else 'ARPACK'\n if options[\"svd_algorithm\"] in ['ARPACK', 'RANDOMIZED', 'NONE']:\n svd_algorithm = options[\"svd_algorithm\"]\n \n- logger.debug(all_organisms)\n+ logger.debug('Generating compendia for organisms', organisms=all_organisms)\n \n for organism in all_organisms:\n- logger.debug(organism)\n job = create_job_for_organism(organism, quant_sf_only, svd_algorithm)\n logger.info(\"Sending CREATE_COMPENDIA for Organism\", job_id=str(job.pk), organism=str(organism))\n send_job(ProcessorPipeline.CREATE_COMPENDIA, job)\n", "issue": "[HOTFIX] Prevent compendia jobs from getting cleaned up and fix QN jobs\n## Issue Number\r\n\r\n#1728 \r\n#1726 \r\n#1727 \n", "before_files": [{"content": "import sys\n\nfrom django.core.management.base import BaseCommand\n\nfrom data_refinery_common.job_lookup import ProcessorPipeline\nfrom data_refinery_common.logging import get_and_configure_logger\nfrom data_refinery_common.message_queue import send_job\nfrom data_refinery_common.models import (Dataset, Experiment, Organism,\n ProcessorJob,\n ProcessorJobDatasetAssociation)\nfrom data_refinery_common.utils import queryset_iterator\n\nlogger = get_and_configure_logger(__name__)\n\ndef create_job_for_organism(organism=Organism, quant_sf_only=False, svd_algorithm='ARPACK'):\n \"\"\"Returns a compendia job for the provided organism.\n\n Fetch all of the experiments and compile large but normally formated Dataset.\n \"\"\"\n data = {}\n experiments = Experiment.objects.filter(organisms=organism).prefetch_related('samples')\n\n for experiment in 
queryset_iterator(experiments):\n data[experiment.accession_code] = list(experiment.samples.filter(organism=organism).values_list('accession_code', flat=True))\n\n job = ProcessorJob()\n job.pipeline_applied = ProcessorPipeline.CREATE_COMPENDIA.value\n job.save()\n\n dset = Dataset()\n dset.data = data\n dset.scale_by = 'NONE'\n dset.aggregate_by = 'SPECIES'\n dset.quantile_normalize = False\n dset.quant_sf_only = quant_sf_only\n dset.svd_algorithm = svd_algorithm\n dset.save()\n\n pjda = ProcessorJobDatasetAssociation()\n pjda.processor_job = job\n pjda.dataset = dset\n pjda.save()\n\n return job\n\n\nclass Command(BaseCommand):\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--organisms\",\n type=str,\n help=(\"Comma separated list of organism names.\"))\n\n parser.add_argument(\n \"--quant-sf-only\",\n type=lambda x: x == \"True\",\n help=(\"Whether to create a quantpendium or normal compendium.\"))\n\n parser.add_argument(\n \"--svd-algorithm\",\n type=str,\n help=(\"Specify SVD algorithm applied during imputation ARPACK, RANDOMIZED or NONE to skip.\"))\n\n def handle(self, *args, **options):\n \"\"\"Create a compendium for one or more organisms.\n\n If --organism is supplied will immediately create a compedium\n for it. If not a new job will be dispatched for each organism\n with enough microarray samples except for human and mouse.\n \"\"\"\n if options[\"organisms\"] is None:\n all_organisms = Organism.objects.exclude(name__in=[\"HOMO_SAPIENS\", \"MUS_MUSCULUS\"])\n else:\n organisms = options[\"organisms\"].upper().replace(\" \", \"_\").split(\",\")\n all_organisms = Organism.objects.filter(name__in=organisms)\n\n # I think we could just use options[\"quant_sf_only\"] but I\n # wanna make sure that values that are not True do not trigger\n # a truthy evaluation.\n quant_sf_only = False\n if options[\"quant_sf_only\"] is True:\n quant_sf_only = True\n\n # default algorithm to arpack until we decide that ranomized is preferred\n svd_algorithm = 'NONE' if quant_sf_only else 'ARPACK'\n if options[\"svd_algorithm\"] in ['ARPACK', 'RANDOMIZED', 'NONE']:\n svd_algorithm = options[\"svd_algorithm\"]\n\n logger.debug(all_organisms)\n\n for organism in all_organisms:\n logger.debug(organism)\n job = create_job_for_organism(organism, quant_sf_only, svd_algorithm)\n logger.info(\"Sending CREATE_COMPENDIA for Organism\", job_id=str(job.pk), organism=str(organism))\n send_job(ProcessorPipeline.CREATE_COMPENDIA, job)\n\n sys.exit(0)\n", "path": "foreman/data_refinery_foreman/foreman/management/commands/create_compendia.py"}]} | 1,620 | 685 |
gh_patches_debug_17545 | rasdani/github-patches | git_diff | sktime__sktime-769 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ThetaForecaster does not work with "initial_level" different from None
**Describe the bug**
When calling `forecaster = ThetaForecaster(initial_level=0.5, deseasonalize=True, sp=1)` and then fitting via `forecaster.fit(y_train)`, the following error is raised:
```python
Traceback (most recent call last):
File "<ipython-input-362-6ef348c52370>", line 1, in <module>
forecaster.fit(y_train)
File "/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/sktime/forecasting/theta.py", line 131, in fit
super(ThetaForecaster, self).fit(y, fh=fh)
File "/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/sktime/forecasting/base/adapters/_statsmodels.py", line 47, in fit
self._fit_forecaster(y, X)
File "/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/sktime/forecasting/exp_smoothing.py", line 79, in _fit_forecaster
self._forecaster = _ExponentialSmoothing(
File "/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/pandas/util/_decorators.py", line 199, in wrapper
return func(*args, **kwargs)
File "/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/statsmodels/tsa/holtwinters/model.py", line 291, in __init__
self._initialize()
File "/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/statsmodels/tsa/holtwinters/model.py", line 439, in _initialize
raise ValueError(msg.format("level"))
ValueError: initialization method is estimated but initial_level has been set.
```
**To Reproduce**
```python
from sktime.forecasting.theta import ThetaForecaster
from sktime.datasets import load_airline
y = load_airline()
forecaster = ThetaForecaster(initial_level=0.5).fit(y)
```
**Expected behavior**
It should fit the Theta model using the supplied initial smoothing level.
**Additional context**
I think the problem relates to the class ExponentialSmoothing and the fact that users cannot provide the `initialization_method` argument. See https://github.com/alan-turing-institute/sktime/blob/060af89907d135e41491d61f0966e285289a805b/sktime/forecasting/exp_smoothing.py#L10
</issue>
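statsmodels raises here because `ExponentialSmoothing` keeps its default `initialization_method="estimated"` even when an `initial_level` is supplied, and it treats that combination as invalid. The usual remedy is to switch the initialization method to `"known"` whenever a level is given. A minimal, illustrative sketch of that selection (the helper is not part of sktime):

```python
def initialization_method_for(initial_level) -> str:
    """Pick the statsmodels initialization mode from the user-supplied level."""
    # With a caller-provided level the state is "known"; otherwise let
    # statsmodels estimate it from the data.
    return "known" if initial_level is not None else "estimated"


assert initialization_method_for(0.5) == "known"
assert initialization_method_for(None) == "estimated"
```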
<code>
[start of sktime/forecasting/theta.py]
1 # -*- coding: utf-8 -*-
2 __all__ = ["ThetaForecaster"]
3 __author__ = ["@big-o", "Markus Löning"]
4
5 from warnings import warn
6
7 import numpy as np
8 import pandas as pd
9 from scipy.stats import norm
10
11 from sktime.forecasting.base._base import DEFAULT_ALPHA
12 from sktime.forecasting.exp_smoothing import ExponentialSmoothing
13 from sktime.transformations.series.detrend import Deseasonalizer
14 from sktime.utils.slope_and_trend import _fit_trend
15 from sktime.utils.validation.forecasting import check_sp
16 from sktime.utils.validation.forecasting import check_y_X
17
18
19 class ThetaForecaster(ExponentialSmoothing):
20 """
21 Theta method of forecasting.
22
23 The theta method as defined in [1]_ is equivalent to simple exponential
24 smoothing
25 (SES) with drift. This is demonstrated in [2]_.
26
27 The series is tested for seasonality using the test outlined in A&N. If
28 deemed
29 seasonal, the series is seasonally adjusted using a classical
30 multiplicative
31 decomposition before applying the theta method. The resulting forecasts
32 are then
33 reseasonalised.
34
35 In cases where SES results in a constant forecast, the theta forecaster
36 will revert
37 to predicting the SES constant plus a linear trend derived from the
38 training data.
39
40 Prediction intervals are computed using the underlying state space model.
41
42 Parameters
43 ----------
44
45 initial_level : float, optional
46 The alpha value of the simple exponential smoothing, if the value is
47 set then
48 this will be used, otherwise it will be estimated from the data.
49
50 deseasonalize : bool, optional (default=True)
51 If True, data is seasonally adjusted.
52
53 sp : int, optional (default=1)
54 The number of observations that constitute a seasonal period for a
55 multiplicative deseasonaliser, which is used if seasonality is
56 detected in the
57 training data. Ignored if a deseasonaliser transformer is provided.
58 Default is
59 1 (no seasonality).
60
61 Attributes
62 ----------
63
64 initial_level_ : float
65 The estimated alpha value of the SES fit.
66
67 drift_ : float
68 The estimated drift of the fitted model.
69
70 se_ : float
71 The standard error of the predictions. Used to calculate prediction
72 intervals.
73
74 References
75 ----------
76
77 .. [1] `Assimakopoulos, V. and Nikolopoulos, K. The theta model: a
78 decomposition
79 approach to forecasting. International Journal of Forecasting 16,
80 521-530,
81 2000.
82 <https://www.sciencedirect.com/science/article/pii
83 /S0169207000000662>`_
84
85 .. [2] `Hyndman, Rob J., and Billah, Baki. Unmasking the Theta method.
86 International J. Forecasting, 19, 287-290, 2003.
87 <https://www.sciencedirect.com/science/article/pii
88 /S0169207001001431>`_
89 """
90
91 _fitted_param_names = ("initial_level", "smoothing_level")
92
93 def __init__(self, initial_level=None, deseasonalize=True, sp=1):
94
95 self.sp = sp
96 self.deseasonalize = deseasonalize
97
98 self.deseasonalizer_ = None
99 self.trend_ = None
100 self.initial_level_ = None
101 self.drift_ = None
102 self.se_ = None
103 super(ThetaForecaster, self).__init__(initial_level=initial_level, sp=sp)
104
105 def fit(self, y, X=None, fh=None):
106 """Fit to training data.
107
108 Parameters
109 ----------
110 y : pd.Series
111 Target time series to which to fit the forecaster.
112 fh : int, list or np.array, optional (default=None)
113 The forecasters horizon with the steps ahead to to predict.
114 X : pd.DataFrame, optional (default=None)
115 Exogenous variables are ignored
116 Returns
117 -------
118 self : returns an instance of self.
119 """
120 y, _ = check_y_X(y, X)
121 sp = check_sp(self.sp)
122 if sp > 1 and not self.deseasonalize:
123 warn("`sp` is ignored when `deseasonalise`=False")
124
125 if self.deseasonalize:
126 self.deseasonalizer_ = Deseasonalizer(sp=self.sp, model="multiplicative")
127 y = self.deseasonalizer_.fit_transform(y)
128
129 # fit exponential smoothing forecaster
130 # find theta lines: Theta lines are just SES + drift
131 super(ThetaForecaster, self).fit(y, fh=fh)
132 self.initial_level_ = self._fitted_forecaster.params["smoothing_level"]
133
134 # compute trend
135 self.trend_ = self._compute_trend(y)
136 self._is_fitted = True
137 return self
138
139 def _predict(self, fh, X=None, return_pred_int=False, alpha=DEFAULT_ALPHA):
140 """
141 Make forecasts.
142
143 Parameters
144 ----------
145
146 fh : array-like
147 The forecasters horizon with the steps ahead to to predict.
148 Default is
149 one-step ahead forecast, i.e. np.array([1]).
150
151 Returns
152 -------
153
154 y_pred : pandas.Series
155 Returns series of predicted values.
156 """
157 y_pred = super(ThetaForecaster, self)._predict(
158 fh, X, return_pred_int=False, alpha=alpha
159 )
160
161 # Add drift.
162 drift = self._compute_drift()
163 y_pred += drift
164
165 if self.deseasonalize:
166 y_pred = self.deseasonalizer_.inverse_transform(y_pred)
167
168 if return_pred_int:
169 pred_int = self.compute_pred_int(y_pred=y_pred, alpha=alpha)
170 return y_pred, pred_int
171
172 return y_pred
173
174 @staticmethod
175 def _compute_trend(y):
176 # Trend calculated through least squares regression.
177 coefs = _fit_trend(y.values.reshape(1, -1), order=1)
178 return coefs[0, 0] / 2
179
180 def _compute_drift(self):
181 fh = self.fh.to_relative(self.cutoff)
182 if np.isclose(self.initial_level_, 0.0):
183 # SES was constant, so revert to simple trend
184 drift = self.trend_ * fh
185 else:
186 # Calculate drift from SES parameters
187 n_timepoints = len(self._y)
188 drift = self.trend_ * (
189 fh
190 + (1 - (1 - self.initial_level_) ** n_timepoints) / self.initial_level_
191 )
192
193 return drift
194
195 def _compute_pred_err(self, alphas):
196 """
197 Get the prediction errors for the forecast.
198 """
199 self.check_is_fitted()
200
201 n_timepoints = len(self._y)
202
203 self.sigma_ = np.sqrt(self._fitted_forecaster.sse / (n_timepoints - 1))
204 sem = self.sigma_ * np.sqrt(
205 self.fh.to_relative(self.cutoff) * self.initial_level_ ** 2 + 1
206 )
207
208 errors = []
209 for alpha in alphas:
210 z = _zscore(1 - alpha)
211 error = z * sem
212 errors.append(pd.Series(error, index=self.fh.to_absolute(self.cutoff)))
213
214 return errors
215
216 def update(self, y, X=None, update_params=True):
217 super(ThetaForecaster, self).update(
218 y, X, update_params=False
219 ) # use custom update_params routine
220 if update_params:
221 if self.deseasonalize:
222 y = self.deseasonalizer_.transform(self._y) # use updated y
223 self.initial_level_ = self._fitted_forecaster.params["smoothing_level"]
224 self.trend_ = self._compute_trend(y)
225 return self
226
227
228 def _zscore(level: float, two_tailed: bool = True) -> float:
229 """
230 Calculate a z-score from a confidence level.
231
232 Parameters
233 ----------
234
235 level : float
236 A confidence level, in the open interval (0, 1).
237
238 two_tailed : bool (default=True)
239 If True, return the two-tailed z score.
240
241 Returns
242 -------
243
244 z : float
245 The z score.
246 """
247 alpha = 1 - level
248 if two_tailed:
249 alpha /= 2
250
251 return -norm.ppf(alpha)
252
[end of sktime/forecasting/theta.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sktime/forecasting/theta.py b/sktime/forecasting/theta.py
--- a/sktime/forecasting/theta.py
+++ b/sktime/forecasting/theta.py
@@ -94,7 +94,6 @@
self.sp = sp
self.deseasonalize = deseasonalize
-
self.deseasonalizer_ = None
self.trend_ = None
self.initial_level_ = None
@@ -126,6 +125,7 @@
self.deseasonalizer_ = Deseasonalizer(sp=self.sp, model="multiplicative")
y = self.deseasonalizer_.fit_transform(y)
+ self.initialization_method = "known" if self.initial_level else "estimated"
# fit exponential smoothing forecaster
# find theta lines: Theta lines are just SES + drift
super(ThetaForecaster, self).fit(y, fh=fh)
| {"golden_diff": "diff --git a/sktime/forecasting/theta.py b/sktime/forecasting/theta.py\n--- a/sktime/forecasting/theta.py\n+++ b/sktime/forecasting/theta.py\n@@ -94,7 +94,6 @@\n \n self.sp = sp\n self.deseasonalize = deseasonalize\n-\n self.deseasonalizer_ = None\n self.trend_ = None\n self.initial_level_ = None\n@@ -126,6 +125,7 @@\n self.deseasonalizer_ = Deseasonalizer(sp=self.sp, model=\"multiplicative\")\n y = self.deseasonalizer_.fit_transform(y)\n \n+ self.initialization_method = \"known\" if self.initial_level else \"estimated\"\n # fit exponential smoothing forecaster\n # find theta lines: Theta lines are just SES + drift\n super(ThetaForecaster, self).fit(y, fh=fh)\n", "issue": "ThetaForecaster does not work with \"initial_level\" different from None\n**Describe the bug**\r\nWhen calling\r\n`forecaster = ThetaForecaster(initial_level=0.5, deseasonalize=True, sp=1)` then fitting via `forecaster.fit(y_train)` leads to the following error:\r\n\r\n```python\r\n`Traceback (most recent call last):\r\n\r\n File \"<ipython-input-362-6ef348c52370>\", line 1, in <module>\r\n forecaster.fit(y_train)\r\n\r\n File \"/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/sktime/forecasting/theta.py\", line 131, in fit\r\n super(ThetaForecaster, self).fit(y, fh=fh)\r\n\r\n File \"/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/sktime/forecasting/base/adapters/_statsmodels.py\", line 47, in fit\r\n self._fit_forecaster(y, X)\r\n\r\n File \"/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/sktime/forecasting/exp_smoothing.py\", line 79, in _fit_forecaster\r\n self._forecaster = _ExponentialSmoothing(\r\n\r\n File \"/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/pandas/util/_decorators.py\", line 199, in wrapper\r\n return func(*args, **kwargs)\r\n\r\n File \"/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/statsmodels/tsa/holtwinters/model.py\", line 291, in __init__\r\n self._initialize()\r\n\r\n File \"/Users/muhlbach/opt/anaconda3/envs/experimental/lib/python3.8/site-packages/statsmodels/tsa/holtwinters/model.py\", line 439, in _initialize\r\n raise ValueError(msg.format(\"level\"))\r\n\r\nValueError: initialization method is estimated but initial_level has been set.`\r\n```\r\n\r\n**To Reproduce**\r\n\r\n```python\r\nfrom sktime.forecasting.theta import ThetaForecaster\r\nfrom sktime.datasets import load_airline\r\ny = load_airline()\r\nforecaster = ThetaForecaster(initial_level=0.5).fit(y)\r\n```\r\n\r\n**Expected behavior**\r\nIt should estimate Theta with initial smoothing level.\r\n\r\n**Additional context**\r\nI think the problem relates to the class ExponentialSmoothing and the fact that users cannot provide the `initialization_method` argument. 
See https://github.com/alan-turing-institute/sktime/blob/060af89907d135e41491d61f0966e285289a805b/sktime/forecasting/exp_smoothing.py#L10 \r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n__all__ = [\"ThetaForecaster\"]\n__author__ = [\"@big-o\", \"Markus L\u00f6ning\"]\n\nfrom warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.stats import norm\n\nfrom sktime.forecasting.base._base import DEFAULT_ALPHA\nfrom sktime.forecasting.exp_smoothing import ExponentialSmoothing\nfrom sktime.transformations.series.detrend import Deseasonalizer\nfrom sktime.utils.slope_and_trend import _fit_trend\nfrom sktime.utils.validation.forecasting import check_sp\nfrom sktime.utils.validation.forecasting import check_y_X\n\n\nclass ThetaForecaster(ExponentialSmoothing):\n \"\"\"\n Theta method of forecasting.\n\n The theta method as defined in [1]_ is equivalent to simple exponential\n smoothing\n (SES) with drift. This is demonstrated in [2]_.\n\n The series is tested for seasonality using the test outlined in A&N. If\n deemed\n seasonal, the series is seasonally adjusted using a classical\n multiplicative\n decomposition before applying the theta method. The resulting forecasts\n are then\n reseasonalised.\n\n In cases where SES results in a constant forecast, the theta forecaster\n will revert\n to predicting the SES constant plus a linear trend derived from the\n training data.\n\n Prediction intervals are computed using the underlying state space model.\n\n Parameters\n ----------\n\n initial_level : float, optional\n The alpha value of the simple exponential smoothing, if the value is\n set then\n this will be used, otherwise it will be estimated from the data.\n\n deseasonalize : bool, optional (default=True)\n If True, data is seasonally adjusted.\n\n sp : int, optional (default=1)\n The number of observations that constitute a seasonal period for a\n multiplicative deseasonaliser, which is used if seasonality is\n detected in the\n training data. Ignored if a deseasonaliser transformer is provided.\n Default is\n 1 (no seasonality).\n\n Attributes\n ----------\n\n initial_level_ : float\n The estimated alpha value of the SES fit.\n\n drift_ : float\n The estimated drift of the fitted model.\n\n se_ : float\n The standard error of the predictions. Used to calculate prediction\n intervals.\n\n References\n ----------\n\n .. [1] `Assimakopoulos, V. and Nikolopoulos, K. The theta model: a\n decomposition\n approach to forecasting. International Journal of Forecasting 16,\n 521-530,\n 2000.\n <https://www.sciencedirect.com/science/article/pii\n /S0169207000000662>`_\n\n .. [2] `Hyndman, Rob J., and Billah, Baki. Unmasking the Theta method.\n International J. 
Forecasting, 19, 287-290, 2003.\n <https://www.sciencedirect.com/science/article/pii\n /S0169207001001431>`_\n \"\"\"\n\n _fitted_param_names = (\"initial_level\", \"smoothing_level\")\n\n def __init__(self, initial_level=None, deseasonalize=True, sp=1):\n\n self.sp = sp\n self.deseasonalize = deseasonalize\n\n self.deseasonalizer_ = None\n self.trend_ = None\n self.initial_level_ = None\n self.drift_ = None\n self.se_ = None\n super(ThetaForecaster, self).__init__(initial_level=initial_level, sp=sp)\n\n def fit(self, y, X=None, fh=None):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n y : pd.Series\n Target time series to which to fit the forecaster.\n fh : int, list or np.array, optional (default=None)\n The forecasters horizon with the steps ahead to to predict.\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n y, _ = check_y_X(y, X)\n sp = check_sp(self.sp)\n if sp > 1 and not self.deseasonalize:\n warn(\"`sp` is ignored when `deseasonalise`=False\")\n\n if self.deseasonalize:\n self.deseasonalizer_ = Deseasonalizer(sp=self.sp, model=\"multiplicative\")\n y = self.deseasonalizer_.fit_transform(y)\n\n # fit exponential smoothing forecaster\n # find theta lines: Theta lines are just SES + drift\n super(ThetaForecaster, self).fit(y, fh=fh)\n self.initial_level_ = self._fitted_forecaster.params[\"smoothing_level\"]\n\n # compute trend\n self.trend_ = self._compute_trend(y)\n self._is_fitted = True\n return self\n\n def _predict(self, fh, X=None, return_pred_int=False, alpha=DEFAULT_ALPHA):\n \"\"\"\n Make forecasts.\n\n Parameters\n ----------\n\n fh : array-like\n The forecasters horizon with the steps ahead to to predict.\n Default is\n one-step ahead forecast, i.e. 
np.array([1]).\n\n Returns\n -------\n\n y_pred : pandas.Series\n Returns series of predicted values.\n \"\"\"\n y_pred = super(ThetaForecaster, self)._predict(\n fh, X, return_pred_int=False, alpha=alpha\n )\n\n # Add drift.\n drift = self._compute_drift()\n y_pred += drift\n\n if self.deseasonalize:\n y_pred = self.deseasonalizer_.inverse_transform(y_pred)\n\n if return_pred_int:\n pred_int = self.compute_pred_int(y_pred=y_pred, alpha=alpha)\n return y_pred, pred_int\n\n return y_pred\n\n @staticmethod\n def _compute_trend(y):\n # Trend calculated through least squares regression.\n coefs = _fit_trend(y.values.reshape(1, -1), order=1)\n return coefs[0, 0] / 2\n\n def _compute_drift(self):\n fh = self.fh.to_relative(self.cutoff)\n if np.isclose(self.initial_level_, 0.0):\n # SES was constant, so revert to simple trend\n drift = self.trend_ * fh\n else:\n # Calculate drift from SES parameters\n n_timepoints = len(self._y)\n drift = self.trend_ * (\n fh\n + (1 - (1 - self.initial_level_) ** n_timepoints) / self.initial_level_\n )\n\n return drift\n\n def _compute_pred_err(self, alphas):\n \"\"\"\n Get the prediction errors for the forecast.\n \"\"\"\n self.check_is_fitted()\n\n n_timepoints = len(self._y)\n\n self.sigma_ = np.sqrt(self._fitted_forecaster.sse / (n_timepoints - 1))\n sem = self.sigma_ * np.sqrt(\n self.fh.to_relative(self.cutoff) * self.initial_level_ ** 2 + 1\n )\n\n errors = []\n for alpha in alphas:\n z = _zscore(1 - alpha)\n error = z * sem\n errors.append(pd.Series(error, index=self.fh.to_absolute(self.cutoff)))\n\n return errors\n\n def update(self, y, X=None, update_params=True):\n super(ThetaForecaster, self).update(\n y, X, update_params=False\n ) # use custom update_params routine\n if update_params:\n if self.deseasonalize:\n y = self.deseasonalizer_.transform(self._y) # use updated y\n self.initial_level_ = self._fitted_forecaster.params[\"smoothing_level\"]\n self.trend_ = self._compute_trend(y)\n return self\n\n\ndef _zscore(level: float, two_tailed: bool = True) -> float:\n \"\"\"\n Calculate a z-score from a confidence level.\n\n Parameters\n ----------\n\n level : float\n A confidence level, in the open interval (0, 1).\n\n two_tailed : bool (default=True)\n If True, return the two-tailed z score.\n\n Returns\n -------\n\n z : float\n The z score.\n \"\"\"\n alpha = 1 - level\n if two_tailed:\n alpha /= 2\n\n return -norm.ppf(alpha)\n", "path": "sktime/forecasting/theta.py"}]} | 3,731 | 216 |
gh_patches_debug_21621 | rasdani/github-patches | git_diff | mdn__kuma-6962 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
T - fix TypeError: in stripe hook
https://sentry.prod.mozaws.net/operations/mdn-prod/issues/7810756/
```
Resolver404: {'tried': [[<URLPattern '^media/(?:redesign/)?css/(?P<doc>.*)-min.css$'>], [<URLPattern '^media/(?:redesign/)?js/(?P<doc>.*)-min.js$'>], [<URLPattern '^media/(?:redesign/)?img(?P<suffix>.*)$'>], [<URLPattern '^media/(?:redesign/)?css(?P<suffix>.*)$'>], [<URLPattern '^media/(?:redesign/)?js(?P<suffix>.*)$'>], [<URLPattern '^media/(?:redesign/)?fonts(?P<suffix>.*)$'>], [<URLPattern '^media/uploads/demos/(?:.*)$'>], [<URLPattern '(?i)^(?P<one>.*)//(?P<two>.*)//(?P<three>.*)$'>], [<URLPattern '(?i)^(?P<one>.*)//(?P<two>.*)$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_1_canvas_rect.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_2_canvas_moveto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_3_canvas_lineto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_4_canvas_arc.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_5_canvas_quadraticcurveto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_6_canvas_beziercurveto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_1_canvas_drawimage.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_2_canvas_drawimage.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_3_canvas_drawimage.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_4_canvas_gallery.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_1_canvas_fillstyle.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_2_canvas_strokestyle.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_3_canvas_globalalpha.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_4_canvas_rgba.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_5_canvas_linewidth.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_6_canvas_linecap.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_7_canvas_linejoin.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_8_canvas_miterlimit.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_9_canvas_lineargradient.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_10_canvas_radialgradient.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_11_canvas_createpattern.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_1_canvas_savestate.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_2_canvas_translate.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_3_canvas_rotate.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_4_canvas_scale.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/6_1_canvas_composite.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/6_2_canvas_clipping.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/globalCompositeOperation.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/backdrop.png$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/bg_gallery.png$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_1.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_2.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_3.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_4.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_5.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_6.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_7.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_8.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/picture_frame.png$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/rhino.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/wallpaper.png$'>], [<URLPattern 
'(?i)^samples/domref/mozGetAsFile.html$'>], [<URLPattern '(?i)^samples/raycaster/input.js$'>], [<URLPattern '(?i)^samples/raycaster/Level.js$'>], [<URLPattern '(?i)^samples/raycaster/Player.js$'>], [<URLPattern '(?i)^samples/raycaster/RayCaster.html$'>], [<URLPattern '(?i)^samples/raycaster/RayCaster.js$'>], [<URLPattern '(?i)^samples/raycaster/trace.css$'>], [<URLPattern '(?i)^samples/raycaster/trace.js$'>], [<URLPattern '(?i)^samples/webgl/sample1$'>], [<URLPattern '(?i)^samples/webgl/sample1/index.html$'>], [<URLPattern '(?i)^samples/webgl/sample1/webgl-demo.js$'>], [<UR...
File "redirect_urls/middleware.py", line 14, in __call__
resolver_match = self.resolver.resolve(request.path_info)
File "newrelic/hooks/framework_django.py", line 600, in wrapper
return _wrapped(*args, **kwargs)
File "newrelic/hooks/framework_django.py", line 588, in _wrapped
result = wrapped(path)
File "django/urls/resolvers.py", line 567, in resolve
raise Resolver404({'tried': tried, 'path': new_path})
TypeError: 'NoneType' object is not subscriptable
(3 additional frame(s) were not displayed)
...
File "newrelic/hooks/framework_django.py", line 539, in wrapper
return wrapped(*args, **kwargs)
File "django/views/decorators/cache.py", line 44, in _wrapped_view_func
response = view_func(request, *args, **kwargs)
File "django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "kuma/users/views.py", line 915, in stripe_hooks
_send_payment_received_email(
File "kuma/users/views.py", line 944, in _send_payment_received_email
"next_payment_date": subscription_info["next_payment_at"],
TypeError: 'NoneType' object is not subscriptable
```
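The failing call at `kuma/users/views.py:944` indexes `subscription_info` without checking for `None`, so whatever fix lands has to tolerate users whose Stripe customer has no active subscription on the configured plan. A minimal call-site guard could look like the sketch below (the surrounding wiring is illustrative only; the real fix may belong in the Stripe helpers instead):

```python
subscription_info = retrieve_and_synchronize_subscription_info(user)
if subscription_info is None:
    # No active subscription found for this customer; skip the payment email
    # rather than raising TypeError on subscription_info["next_payment_at"].
    return
next_payment_date = subscription_info["next_payment_at"]
```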
</issue>
<code>
[start of kuma/users/stripe_utils.py]
1 from datetime import datetime
2
3 import stripe
4 from django.conf import settings
5 from django.utils import timezone
6
7 from kuma.core.urlresolvers import reverse
8 from kuma.wiki.templatetags.jinja_helpers import absolutify
9
10 from .models import UserSubscription
11
12
13 def retrieve_stripe_subscription(customer):
14 for subscription in customer.subscriptions.list().auto_paging_iter():
15 # We have to use array indexing syntax, as stripe uses dicts to
16 # represent its objects (dicts come with an .items method)
17 for item in subscription["items"].auto_paging_iter():
18 if item.plan.id == settings.STRIPE_PLAN_ID:
19 return subscription
20
21 return None
22
23
24 def retrieve_and_synchronize_subscription_info(user):
25 """For the given user, if it has as 'stripe_customer_id' retrieve the info
26 about the subscription if it's there. All packaged in a way that is
27 practical for the stripe_subscription.html template.
28
29 Also, whilst doing this check, we also verify that the UserSubscription record
30 for this user is right. Doing that check is a second-layer check in case
31 our webhooks have failed us.
32 """
33 subscription_info = None
34 stripe_customer = get_stripe_customer(user)
35 if stripe_customer:
36 stripe_subscription_info = get_stripe_subscription_info(stripe_customer)
37 if stripe_subscription_info:
38 source = stripe_customer.default_source
39 if source.object == "card":
40 card = source
41 elif source.object == "source":
42 card = source.card
43 else:
44 raise ValueError(
45 f"unexpected stripe customer default_source of type {source.object!r}"
46 )
47
48 subscription_info = {
49 "id": stripe_subscription_info.id,
50 "amount": stripe_subscription_info.plan.amount,
51 "brand": card.brand,
52 "expires_at": f"{card.exp_month}/{card.exp_year}",
53 "last4": card.last4,
54 # Cards that are part of a "source" don't have a zip
55 "zip": card.get("address_zip", None),
56 # TODO: Deprecated. Only used in the Edit Profile view
57 "next_payment_at": datetime.fromtimestamp(
58 stripe_subscription_info.current_period_end
59 ),
60 }
61
62 # To perfect the synchronization, take this opportunity to make sure
63 # we have an up-to-date record of this.
64 UserSubscription.set_active(user, stripe_subscription_info.id)
65 else:
66 # The user has a stripe_customer_id but no active subscription
67 # on the current settings.STRIPE_PLAN_ID! Perhaps it has been canceled
68 # and not updated in our own records.
69 for user_subscription in UserSubscription.objects.filter(
70 user=user, canceled__isnull=True
71 ):
72 user_subscription.canceled = timezone.now()
73 user_subscription.save()
74
75 return subscription_info
76
77
78 def create_stripe_customer_and_subscription_for_user(user, email, stripe_token):
79 customer = (
80 stripe.Customer.retrieve(user.stripe_customer_id)
81 if user.stripe_customer_id
82 else None
83 )
84 if not customer or customer.email != email:
85 customer = stripe.Customer.create(email=email, source=stripe_token)
86 user.stripe_customer_id = customer.id
87 user.save()
88
89 subscription = retrieve_stripe_subscription(customer)
90 if not subscription:
91 subscription = stripe.Subscription.create(
92 customer=customer.id, items=[{"plan": settings.STRIPE_PLAN_ID}],
93 )
94
95 UserSubscription.set_active(user, subscription.id)
96
97
98 def cancel_stripe_customer_subscriptions(user):
99 """Delete all subscriptions for a Stripe customer."""
100 assert user.stripe_customer_id
101 customer = stripe.Customer.retrieve(user.stripe_customer_id)
102 canceled = []
103 for sub in customer.subscriptions.data:
104 s = stripe.Subscription.retrieve(sub.id)
105 UserSubscription.set_canceled(user, s.id)
106 s.delete()
107 canceled.append(s)
108 return canceled
109
110
111 def get_stripe_customer(user):
112 if settings.STRIPE_PLAN_ID and user.stripe_customer_id:
113 return stripe.Customer.retrieve(
114 user.stripe_customer_id, expand=["default_source"]
115 )
116
117
118 def get_stripe_subscription_info(stripe_customer):
119 return retrieve_stripe_subscription(stripe_customer)
120
121
122 def create_missing_stripe_webhook():
123 url_path = reverse("users.stripe_hooks")
124 url = (
125 "https://" + settings.STRIPE_WEBHOOK_HOSTNAME + url_path
126 if settings.STRIPE_WEBHOOK_HOSTNAME
127 else absolutify(url_path)
128 )
129
130 # From https://stripe.com/docs/api/webhook_endpoints/create
131 events = (
132 # "Occurs whenever an invoice payment attempt succeeds."
133 "invoice.payment_succeeded",
134 # "Occurs whenever a customer’s subscription ends."
135 # Also, if you go into the Stripe Dashboard, click Billing, Subscriptions,
136 # and find a customer and click the "Cancel subscription" button, this
137 # triggers.
138 "customer.subscription.deleted",
139 )
140
141 for webhook in stripe.WebhookEndpoint.list().auto_paging_iter():
142 if webhook.url == url and set(events) == set(webhook.enabled_events):
143 return
144
145 stripe.WebhookEndpoint.create(
146 url=url, enabled_events=events,
147 )
148
[end of kuma/users/stripe_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kuma/users/stripe_utils.py b/kuma/users/stripe_utils.py
--- a/kuma/users/stripe_utils.py
+++ b/kuma/users/stripe_utils.py
@@ -11,14 +11,25 @@
def retrieve_stripe_subscription(customer):
+ """
+ Returns the first subscription it finds matching the configured stripe
+ plan or, if there's none, just the first it finds.
+ """
+ first_subscription = None
+
for subscription in customer.subscriptions.list().auto_paging_iter():
+ if first_subscription is None:
+ first_subscription = subscription
+
# We have to use array indexing syntax, as stripe uses dicts to
# represent its objects (dicts come with an .items method)
for item in subscription["items"].auto_paging_iter():
if item.plan.id == settings.STRIPE_PLAN_ID:
+ # If we find a subscription matching the selected plan we
+ # return that instead of whatever we found first
return subscription
- return None
+ return first_subscription
def retrieve_and_synchronize_subscription_info(user):
| {"golden_diff": "diff --git a/kuma/users/stripe_utils.py b/kuma/users/stripe_utils.py\n--- a/kuma/users/stripe_utils.py\n+++ b/kuma/users/stripe_utils.py\n@@ -11,14 +11,25 @@\n \n \n def retrieve_stripe_subscription(customer):\n+ \"\"\"\n+ Returns the first subscription it finds matching the configured stripe\n+ plan or, if there's none, just the first it finds.\n+ \"\"\"\n+ first_subscription = None\n+\n for subscription in customer.subscriptions.list().auto_paging_iter():\n+ if first_subscription is None:\n+ first_subscription = subscription\n+\n # We have to use array indexing syntax, as stripe uses dicts to\n # represent its objects (dicts come with an .items method)\n for item in subscription[\"items\"].auto_paging_iter():\n if item.plan.id == settings.STRIPE_PLAN_ID:\n+ # If we find a subscription matching the selected plan we\n+ # return that instead of whatever we found first\n return subscription\n \n- return None\n+ return first_subscription\n \n \n def retrieve_and_synchronize_subscription_info(user):\n", "issue": "T - fix TypeError: in stripe hook\nhttps://sentry.prod.mozaws.net/operations/mdn-prod/issues/7810756/\n\n```\nResolver404: {'tried': [[<URLPattern '^media/(?:redesign/)?css/(?P<doc>.*)-min.css$'>], [<URLPattern '^media/(?:redesign/)?js/(?P<doc>.*)-min.js$'>], [<URLPattern '^media/(?:redesign/)?img(?P<suffix>.*)$'>], [<URLPattern '^media/(?:redesign/)?css(?P<suffix>.*)$'>], [<URLPattern '^media/(?:redesign/)?js(?P<suffix>.*)$'>], [<URLPattern '^media/(?:redesign/)?fonts(?P<suffix>.*)$'>], [<URLPattern '^media/uploads/demos/(?:.*)$'>], [<URLPattern '(?i)^(?P<one>.*)//(?P<two>.*)//(?P<three>.*)$'>], [<URLPattern '(?i)^(?P<one>.*)//(?P<two>.*)$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_1_canvas_rect.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_2_canvas_moveto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_3_canvas_lineto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_4_canvas_arc.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_5_canvas_quadraticcurveto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/2_6_canvas_beziercurveto.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_1_canvas_drawimage.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_2_canvas_drawimage.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_3_canvas_drawimage.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/3_4_canvas_gallery.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_1_canvas_fillstyle.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_2_canvas_strokestyle.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_3_canvas_globalalpha.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_4_canvas_rgba.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_5_canvas_linewidth.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_6_canvas_linecap.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_7_canvas_linejoin.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_8_canvas_miterlimit.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_9_canvas_lineargradient.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_10_canvas_radialgradient.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/4_11_canvas_createpattern.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_1_canvas_savestate.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_2_canvas_translate.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_3_canvas_rotate.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/5_4_canvas_scale.html$'>], 
[<URLPattern '(?i)^samples/canvas-tutorial/6_1_canvas_composite.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/6_2_canvas_clipping.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/globalCompositeOperation.html$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/backdrop.png$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/bg_gallery.png$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_1.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_2.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_3.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_4.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_5.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_6.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_7.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/gallery_8.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/picture_frame.png$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/rhino.jpg$'>], [<URLPattern '(?i)^samples/canvas-tutorial/images/wallpaper.png$'>], [<URLPattern '(?i)^samples/domref/mozGetAsFile.html$'>], [<URLPattern '(?i)^samples/raycaster/input.js$'>], [<URLPattern '(?i)^samples/raycaster/Level.js$'>], [<URLPattern '(?i)^samples/raycaster/Player.js$'>], [<URLPattern '(?i)^samples/raycaster/RayCaster.html$'>], [<URLPattern '(?i)^samples/raycaster/RayCaster.js$'>], [<URLPattern '(?i)^samples/raycaster/trace.css$'>], [<URLPattern '(?i)^samples/raycaster/trace.js$'>], [<URLPattern '(?i)^samples/webgl/sample1$'>], [<URLPattern '(?i)^samples/webgl/sample1/index.html$'>], [<URLPattern '(?i)^samples/webgl/sample1/webgl-demo.js$'>], [<UR...\n File \"redirect_urls/middleware.py\", line 14, in __call__\n resolver_match = self.resolver.resolve(request.path_info)\n File \"newrelic/hooks/framework_django.py\", line 600, in wrapper\n return _wrapped(*args, **kwargs)\n File \"newrelic/hooks/framework_django.py\", line 588, in _wrapped\n result = wrapped(path)\n File \"django/urls/resolvers.py\", line 567, in resolve\n raise Resolver404({'tried': tried, 'path': new_path})\n\nTypeError: 'NoneType' object is not subscriptable\n(3 additional frame(s) were not displayed)\n...\n File \"newrelic/hooks/framework_django.py\", line 539, in wrapper\n return wrapped(*args, **kwargs)\n File \"django/views/decorators/cache.py\", line 44, in _wrapped_view_func\n response = view_func(request, *args, **kwargs)\n File \"django/views/decorators/csrf.py\", line 54, in wrapped_view\n return view_func(*args, **kwargs)\n File \"kuma/users/views.py\", line 915, in stripe_hooks\n _send_payment_received_email(\n File \"kuma/users/views.py\", line 944, in _send_payment_received_email\n \"next_payment_date\": subscription_info[\"next_payment_at\"],\n\nTypeError: 'NoneType' object is not subscriptable\n```\n", "before_files": [{"content": "from datetime import datetime\n\nimport stripe\nfrom django.conf import settings\nfrom django.utils import timezone\n\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.wiki.templatetags.jinja_helpers import absolutify\n\nfrom .models import UserSubscription\n\n\ndef retrieve_stripe_subscription(customer):\n for subscription in customer.subscriptions.list().auto_paging_iter():\n # We have to use array indexing syntax, as stripe uses dicts to\n # represent its objects (dicts come with an .items method)\n for item in subscription[\"items\"].auto_paging_iter():\n if item.plan.id == settings.STRIPE_PLAN_ID:\n return subscription\n\n return 
None\n\n\ndef retrieve_and_synchronize_subscription_info(user):\n \"\"\"For the given user, if it has as 'stripe_customer_id' retrieve the info\n about the subscription if it's there. All packaged in a way that is\n practical for the stripe_subscription.html template.\n\n Also, whilst doing this check, we also verify that the UserSubscription record\n for this user is right. Doing that check is a second-layer check in case\n our webhooks have failed us.\n \"\"\"\n subscription_info = None\n stripe_customer = get_stripe_customer(user)\n if stripe_customer:\n stripe_subscription_info = get_stripe_subscription_info(stripe_customer)\n if stripe_subscription_info:\n source = stripe_customer.default_source\n if source.object == \"card\":\n card = source\n elif source.object == \"source\":\n card = source.card\n else:\n raise ValueError(\n f\"unexpected stripe customer default_source of type {source.object!r}\"\n )\n\n subscription_info = {\n \"id\": stripe_subscription_info.id,\n \"amount\": stripe_subscription_info.plan.amount,\n \"brand\": card.brand,\n \"expires_at\": f\"{card.exp_month}/{card.exp_year}\",\n \"last4\": card.last4,\n # Cards that are part of a \"source\" don't have a zip\n \"zip\": card.get(\"address_zip\", None),\n # TODO: Deprecated. Only used in the Edit Profile view\n \"next_payment_at\": datetime.fromtimestamp(\n stripe_subscription_info.current_period_end\n ),\n }\n\n # To perfect the synchronization, take this opportunity to make sure\n # we have an up-to-date record of this.\n UserSubscription.set_active(user, stripe_subscription_info.id)\n else:\n # The user has a stripe_customer_id but no active subscription\n # on the current settings.STRIPE_PLAN_ID! Perhaps it has been canceled\n # and not updated in our own records.\n for user_subscription in UserSubscription.objects.filter(\n user=user, canceled__isnull=True\n ):\n user_subscription.canceled = timezone.now()\n user_subscription.save()\n\n return subscription_info\n\n\ndef create_stripe_customer_and_subscription_for_user(user, email, stripe_token):\n customer = (\n stripe.Customer.retrieve(user.stripe_customer_id)\n if user.stripe_customer_id\n else None\n )\n if not customer or customer.email != email:\n customer = stripe.Customer.create(email=email, source=stripe_token)\n user.stripe_customer_id = customer.id\n user.save()\n\n subscription = retrieve_stripe_subscription(customer)\n if not subscription:\n subscription = stripe.Subscription.create(\n customer=customer.id, items=[{\"plan\": settings.STRIPE_PLAN_ID}],\n )\n\n UserSubscription.set_active(user, subscription.id)\n\n\ndef cancel_stripe_customer_subscriptions(user):\n \"\"\"Delete all subscriptions for a Stripe customer.\"\"\"\n assert user.stripe_customer_id\n customer = stripe.Customer.retrieve(user.stripe_customer_id)\n canceled = []\n for sub in customer.subscriptions.data:\n s = stripe.Subscription.retrieve(sub.id)\n UserSubscription.set_canceled(user, s.id)\n s.delete()\n canceled.append(s)\n return canceled\n\n\ndef get_stripe_customer(user):\n if settings.STRIPE_PLAN_ID and user.stripe_customer_id:\n return stripe.Customer.retrieve(\n user.stripe_customer_id, expand=[\"default_source\"]\n )\n\n\ndef get_stripe_subscription_info(stripe_customer):\n return retrieve_stripe_subscription(stripe_customer)\n\n\ndef create_missing_stripe_webhook():\n url_path = reverse(\"users.stripe_hooks\")\n url = (\n \"https://\" + settings.STRIPE_WEBHOOK_HOSTNAME + url_path\n if settings.STRIPE_WEBHOOK_HOSTNAME\n else absolutify(url_path)\n )\n\n # From 
https://stripe.com/docs/api/webhook_endpoints/create\n events = (\n # \"Occurs whenever an invoice payment attempt succeeds.\"\n \"invoice.payment_succeeded\",\n # \"Occurs whenever a customer\u2019s subscription ends.\"\n # Also, if you go into the Stripe Dashboard, click Billing, Subscriptions,\n # and find a customer and click the \"Cancel subscription\" button, this\n # triggers.\n \"customer.subscription.deleted\",\n )\n\n for webhook in stripe.WebhookEndpoint.list().auto_paging_iter():\n if webhook.url == url and set(events) == set(webhook.enabled_events):\n return\n\n stripe.WebhookEndpoint.create(\n url=url, enabled_events=events,\n )\n", "path": "kuma/users/stripe_utils.py"}]} | 3,642 | 242 |
gh_patches_debug_13666 | rasdani/github-patches | git_diff | Flexget__Flexget-1985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incompatibility with Libtorrent version 1.1.3 and above
### Actual behaviour:
flexget crashes when converting magnet URI to a torrent file with libtorrent versions 1.1.3 and higher
### Steps to reproduce:
upgrade libtorrent to 1.1.3 or higher (I'm using 1.1.4 but the change occurred in 1.1.3)
download torrent with convert_magnet plugin
#### Config:
```
tasks:
stage:
convert_magnet: yes
rss: http://xxx.xxx/rss
accept_all: yes
download:
path: /home/flexget/test/
```
#### Log:
```
2017-10-09 15:36 VERBOSE task_queue There are 1 tasks to execute. Shutdown will commence when they have completed.
2017-10-09 15:36 VERBOSE details stage Produced 74 entries.
2017-10-09 15:36 INFO series stage identified_by has locked in to type `ep` for Lucifer
2017-10-09 15:36 VERBOSE task stage ACCEPTED: `Lucifer S03E02 1080p HDTV X264 DIMENSION` by series plugin because choosing best available quality
2017-10-09 15:36 VERBOSE task stage ACCEPTED: `Lucifer S03E01 1080p HDTV X264 DIMENSION` by series plugin because choosing best available quality
2017-10-09 15:36 INFO convert_magnet stage Converting entry Lucifer S03E02 1080p HDTV X264 DIMENSION magnet URI to a torrent file
2017-10-09 15:36 INFO convert_magnet stage lt_ver caught
2017-10-09 15:36 CRITICAL task stage BUG: Unhandled error in plugin convert_magnet: 'str' object has no attribute 'to_bytes'
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flexget/task.py", line 486, in __run_plugin
return method(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flexget/event.py", line 23, in __call__
return self.func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flexget/plugins/modify/convert_magnet.py", line 93, in on_task_download
torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)
File "/usr/local/lib/python2.7/dist-packages/flexget/plugins/modify/convert_magnet.py", line 41, in magnet_to_torrent
params['info_hash'] = params['info_hash'].to_bytes()
AttributeError: 'str' object has no attribute 'to_bytes'
```
### Additional information:
- FlexGet version: 2.10.97
- Python version: 2.7.13
- Installation method: pip
- Using daemon (yes/no): no
- OS and version: Debian Stretch 9.1
- Link to crash log: see above
flexget/plugins/modify/convert_magnet.py line 40 converts the info_hash to bytes if the libtorrent version is above 0.16.13. The comments seem to indicate this was done as a workaround because the info_hash value did not comply with the defined structure. According to the release notes for libtorrent v1.1.3 this has been corrected: info_hash is now a string value, which has no to_bytes attribute, and calling it is what causes the crash. Since the underlying issue in libtorrent has been corrected, the info_hash value can be used as-is, without converting to bytes, when the libtorrent version is 1.1.3 or above. Standby for PR.
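In other words, the existing check would presumably just need an upper bound so the workaround is skipped on 1.1.3+. A minimal sketch of that guard, reusing the version-list comparison the plugin already builds:

```python
lt_version = [int(v) for v in libtorrent.version.split('.')]
# Only apply the to_bytes() workaround for the libtorrent range that needs it;
# from 1.1.3 onwards info_hash is already a plain string.
if [0, 16, 13, 0] < lt_version < [1, 1, 3, 0]:
    params['info_hash'] = params['info_hash'].to_bytes()
```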
### References:
https://github.com/arvidn/libtorrent/releases/tag/libtorrent-1_1_3
https://github.com/arvidn/libtorrent/commit/2e367ea53b1ce36d42eec73d12056ca4cd8029a4
</issue>
<code>
[start of flexget/plugins/modify/convert_magnet.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3 import os
4 import time
5 import logging
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils.tools import parse_timedelta
10 from flexget.utils.pathscrub import pathscrub
11
12 log = logging.getLogger('convert_magnet')
13
14
15 class ConvertMagnet(object):
16 """Convert magnet only entries to a torrent file"""
17
18 schema = {
19 "oneOf": [
20 # Allow convert_magnet: no form to turn off plugin altogether
21 {"type": "boolean"},
22 {
23 "type": "object",
24 "properties": {
25 "timeout": {"type": "string", "format": "interval"},
26 "force": {"type": "boolean"}
27 },
28 "additionalProperties": False
29 }
30 ]
31 }
32
33 def magnet_to_torrent(self, magnet_uri, destination_folder, timeout):
34 import libtorrent
35 params = libtorrent.parse_magnet_uri(magnet_uri)
36 session = libtorrent.session()
37 lt_version = [int(v) for v in libtorrent.version.split('.')]
38 if lt_version > [0,16,13,0]:
39 # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash
40 params['info_hash'] = params['info_hash'].to_bytes()
41 handle = libtorrent.add_magnet_uri(session, magnet_uri, params)
42 log.debug('Acquiring torrent metadata for magnet %s', magnet_uri)
43 timeout_value = timeout
44 while not handle.has_metadata():
45 time.sleep(0.1)
46 timeout_value -= 0.1
47 if timeout_value <= 0:
48 raise plugin.PluginError('Timed out after {} seconds trying to magnetize'.format(timeout))
49 log.debug('Metadata acquired')
50 torrent_info = handle.get_torrent_info()
51 torrent_file = libtorrent.create_torrent(torrent_info)
52 torrent_path = pathscrub(os.path.join(destination_folder, torrent_info.name() + ".torrent"))
53 with open(torrent_path, "wb") as f:
54 f.write(libtorrent.bencode(torrent_file.generate()))
55 log.debug('Torrent file wrote to %s', torrent_path)
56 return torrent_path
57
58 def prepare_config(self, config):
59 if not isinstance(config, dict):
60 config = {}
61 config.setdefault('timeout', '30 seconds')
62 config.setdefault('force', False)
63 return config
64
65 @plugin.priority(255)
66 def on_task_start(self, task, config):
67 if config is False:
68 return
69 try:
70 import libtorrent # noqa
71 except ImportError:
72 raise plugin.DependencyError('convert_magnet', 'libtorrent', 'libtorrent package required', log)
73
74 @plugin.priority(130)
75 def on_task_download(self, task, config):
76 if config is False:
77 return
78 config = self.prepare_config(config)
79 # Create the conversion target directory
80 converted_path = os.path.join(task.manager.config_base, 'converted')
81
82 timeout = parse_timedelta(config['timeout']).total_seconds()
83
84 if not os.path.isdir(converted_path):
85 os.mkdir(converted_path)
86
87 for entry in task.accepted:
88 if entry['url'].startswith('magnet:'):
89 entry.setdefault('urls', [entry['url']])
90 try:
91 log.info('Converting entry {} magnet URI to a torrent file'.format(entry['title']))
92 torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)
93 except (plugin.PluginError, TypeError) as e:
94 log.error('Unable to convert Magnet URI for entry %s: %s', entry['title'], e)
95 if config['force']:
96 entry.fail('Magnet URI conversion failed')
97 continue
98 # Windows paths need an extra / prepended to them for url
99 if not torrent_file.startswith('/'):
100 torrent_file = '/' + torrent_file
101 entry['url'] = torrent_file
102 entry['file'] = torrent_file
103 # make sure it's first in the list because of how download plugin works
104 entry['urls'].insert(0, 'file://{}'.format(torrent_file))
105
106
107 @event('plugin.register')
108 def register_plugin():
109 plugin.register(ConvertMagnet, 'convert_magnet', api_ver=2)
110
[end of flexget/plugins/modify/convert_magnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/modify/convert_magnet.py b/flexget/plugins/modify/convert_magnet.py
--- a/flexget/plugins/modify/convert_magnet.py
+++ b/flexget/plugins/modify/convert_magnet.py
@@ -35,7 +35,7 @@
params = libtorrent.parse_magnet_uri(magnet_uri)
session = libtorrent.session()
lt_version = [int(v) for v in libtorrent.version.split('.')]
- if lt_version > [0,16,13,0]:
+ if lt_version > [0,16,13,0] and lt_version < [1,1,3,0]:
# for some reason the info_hash needs to be bytes but it's a struct called sha1_hash
params['info_hash'] = params['info_hash'].to_bytes()
handle = libtorrent.add_magnet_uri(session, magnet_uri, params)
| {"golden_diff": "diff --git a/flexget/plugins/modify/convert_magnet.py b/flexget/plugins/modify/convert_magnet.py\n--- a/flexget/plugins/modify/convert_magnet.py\n+++ b/flexget/plugins/modify/convert_magnet.py\n@@ -35,7 +35,7 @@\n params = libtorrent.parse_magnet_uri(magnet_uri)\n session = libtorrent.session()\n lt_version = [int(v) for v in libtorrent.version.split('.')]\n- if lt_version > [0,16,13,0]:\n+ if lt_version > [0,16,13,0] and lt_version < [1,1,3,0]:\n # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash\n params['info_hash'] = params['info_hash'].to_bytes()\n handle = libtorrent.add_magnet_uri(session, magnet_uri, params)\n", "issue": "Incompatibility with Libtorrent version 1.1.3 and above\n### Actual behaviour:\r\nflexget crashes when converting magnet URI to a torrent file with libtorrent versions 1.1.3 and higher\r\n\r\n### Steps to reproduce:\r\nupgrade libtorrent to 1.1.3 or higher (I'm using 1.1.4 but the change occurred in 1.1.3)\r\ndownload torrent with convert_magnet plugin \r\n\r\n#### Config:\r\n```\r\ntasks:\r\n stage:\r\n convert_magnet: yes\r\n rss: http://xxx.xxx/rss\r\n accept_all: yes\r\n download:\r\n path: /home/flexget/test/\r\n```\r\n \r\n#### Log:\r\n```\r\n2017-10-09 15:36 VERBOSE task_queue There are 1 tasks to execute. Shutdown will commence when they have completed.\r\n2017-10-09 15:36 VERBOSE details stage Produced 74 entries.\r\n2017-10-09 15:36 INFO series stage identified_by has locked in to type `ep` for Lucifer\r\n2017-10-09 15:36 VERBOSE task stage ACCEPTED: `Lucifer S03E02 1080p HDTV X264 DIMENSION` by series plugin because choosing best available quality\r\n2017-10-09 15:36 VERBOSE task stage ACCEPTED: `Lucifer S03E01 1080p HDTV X264 DIMENSION` by series plugin because choosing best available quality\r\n2017-10-09 15:36 INFO convert_magnet stage Converting entry Lucifer S03E02 1080p HDTV X264 DIMENSION magnet URI to a torrent file\r\n2017-10-09 15:36 INFO convert_magnet stage lt_ver caught\r\n2017-10-09 15:36 CRITICAL task stage BUG: Unhandled error in plugin convert_magnet: 'str' object has no attribute 'to_bytes'\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/flexget/task.py\", line 486, in __run_plugin\r\n return method(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/flexget/event.py\", line 23, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/usr/local/lib/python2.7/dist-packages/flexget/plugins/modify/convert_magnet.py\", line 93, in on_task_download\r\n torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)\r\n File \"/usr/local/lib/python2.7/dist-packages/flexget/plugins/modify/convert_magnet.py\", line 41, in magnet_to_torrent\r\n params['info_hash'] = params['info_hash'].to_bytes()\r\nAttributeError: 'str' object has no attribute 'to_bytes'\r\n```\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 2.10.97\r\n- Python version: 2.7.13\r\n- Installation method: pip\r\n- Using daemon (yes/no): no\r\n- OS and version: Debian Stretch 9.1\r\n- Link to crash log: see above\r\n\r\nflexget/plugins/modify/convert_magnet.py line 40 converts the info_hash to bytes if the libtorrent version is above 0.16.13. The comments seem to indicate this was done as a work around because the info_hash value did not comply with the defined structure. 
according to the release notes on libtorrent v1.1.3 this has been corrected, info_hash is now a string value which does not contain the attribute to_bytes, this causes the crash. Since the underlying issue in libtorrent has been corrected, the info_hash value can be used as is without converting to_bytes if the libtorrent versions above 1.1.3. Standby for PR\r\n\r\n\r\n### References: \r\nhttps://github.com/arvidn/libtorrent/releases/tag/libtorrent-1_1_3\r\nhttps://github.com/arvidn/libtorrent/commit/2e367ea53b1ce36d42eec73d12056ca4cd8029a4\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\nimport os\nimport time\nimport logging\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.tools import parse_timedelta\nfrom flexget.utils.pathscrub import pathscrub\n\nlog = logging.getLogger('convert_magnet')\n\n\nclass ConvertMagnet(object):\n \"\"\"Convert magnet only entries to a torrent file\"\"\"\n\n schema = {\n \"oneOf\": [\n # Allow convert_magnet: no form to turn off plugin altogether\n {\"type\": \"boolean\"},\n {\n \"type\": \"object\",\n \"properties\": {\n \"timeout\": {\"type\": \"string\", \"format\": \"interval\"},\n \"force\": {\"type\": \"boolean\"}\n },\n \"additionalProperties\": False\n }\n ]\n }\n\n def magnet_to_torrent(self, magnet_uri, destination_folder, timeout):\n import libtorrent\n params = libtorrent.parse_magnet_uri(magnet_uri)\n session = libtorrent.session()\n lt_version = [int(v) for v in libtorrent.version.split('.')]\n if lt_version > [0,16,13,0]:\n # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash\n params['info_hash'] = params['info_hash'].to_bytes()\n handle = libtorrent.add_magnet_uri(session, magnet_uri, params)\n log.debug('Acquiring torrent metadata for magnet %s', magnet_uri)\n timeout_value = timeout\n while not handle.has_metadata():\n time.sleep(0.1)\n timeout_value -= 0.1\n if timeout_value <= 0:\n raise plugin.PluginError('Timed out after {} seconds trying to magnetize'.format(timeout))\n log.debug('Metadata acquired')\n torrent_info = handle.get_torrent_info()\n torrent_file = libtorrent.create_torrent(torrent_info)\n torrent_path = pathscrub(os.path.join(destination_folder, torrent_info.name() + \".torrent\"))\n with open(torrent_path, \"wb\") as f:\n f.write(libtorrent.bencode(torrent_file.generate()))\n log.debug('Torrent file wrote to %s', torrent_path)\n return torrent_path\n\n def prepare_config(self, config):\n if not isinstance(config, dict):\n config = {}\n config.setdefault('timeout', '30 seconds')\n config.setdefault('force', False)\n return config\n\n @plugin.priority(255)\n def on_task_start(self, task, config):\n if config is False:\n return\n try:\n import libtorrent # noqa\n except ImportError:\n raise plugin.DependencyError('convert_magnet', 'libtorrent', 'libtorrent package required', log)\n\n @plugin.priority(130)\n def on_task_download(self, task, config):\n if config is False:\n return\n config = self.prepare_config(config)\n # Create the conversion target directory\n converted_path = os.path.join(task.manager.config_base, 'converted')\n\n timeout = parse_timedelta(config['timeout']).total_seconds()\n\n if not os.path.isdir(converted_path):\n os.mkdir(converted_path)\n\n for entry in task.accepted:\n if entry['url'].startswith('magnet:'):\n entry.setdefault('urls', [entry['url']])\n try:\n log.info('Converting entry {} magnet 
URI to a torrent file'.format(entry['title']))\n torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)\n except (plugin.PluginError, TypeError) as e:\n log.error('Unable to convert Magnet URI for entry %s: %s', entry['title'], e)\n if config['force']:\n entry.fail('Magnet URI conversion failed')\n continue\n # Windows paths need an extra / prepended to them for url\n if not torrent_file.startswith('/'):\n torrent_file = '/' + torrent_file\n entry['url'] = torrent_file\n entry['file'] = torrent_file\n # make sure it's first in the list because of how download plugin works\n entry['urls'].insert(0, 'file://{}'.format(torrent_file))\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(ConvertMagnet, 'convert_magnet', api_ver=2)\n", "path": "flexget/plugins/modify/convert_magnet.py"}]} | 2,678 | 205 |
gh_patches_debug_57712 | rasdani/github-patches | git_diff | activeloopai__deeplake-1998 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEATURE] Add license in PyPI packages
## 🚨🚨 Feature Request
- [x] A new implementation (Improvement)
### Is your feature request related to a problem?
deeplake, hub and libdeeplake currently use the [Mozilla Public License](https://www.mozilla.org/en-US/MPL/) (MPL). However, they do not advertise their license in their respective PyPI packages, which makes automatic license checks (`liccheck`) fail.
### Description of the possible solution
Add license and license classifiers in `setup.cfg` for hub, deeplake and libdeeplake.
Syntax:
> setup.cfg
```
[metadata]
license = MPL 2.0
license_file = LICENSE
classifiers =
License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)
```
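Since deeplake assembles its metadata in `setup.py` (the `config` dict shown further down) rather than a `setup.cfg`, the same information could presumably be passed as setup keywords; a minimal sketch:

```python
config = {
    # ... existing metadata keys ...
    "license": "MPL-2.0",
    "classifiers": [
        "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
    ],
}
```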
References:
- https://setuptools.pypa.io/en/latest/deprecated/distutils/setupscript.html#additional-meta-data
- https://pypi.org/classifiers/
- https://github.com/SethMMorton/natsort/blob/master/setup.cfg
</issue>
<code>
[start of setup.py]
1 import os
2 import sys
3 import re
4 import platform
5 from setuptools import find_packages, setup
6
7 project_name = "deeplake"
8
9
10 this_directory = os.path.abspath(os.path.dirname(__file__))
11
12 with open(os.path.join(this_directory, "deeplake/requirements/common.txt")) as f:
13 requirements = f.readlines()
14
15 with open(os.path.join(this_directory, "deeplake/requirements/tests.txt")) as f:
16 tests = f.readlines()
17
18 with open(os.path.join(this_directory, "README.md"), encoding="utf-8") as f:
19 long_description = f.read()
20
21
22 req_map = {
23 b: a
24 for a, b in (
25 re.findall(r"^(([^!=<>~]+)(?:[!=<>~].*)?$)", x.strip("\n"))[0]
26 for x in requirements
27 )
28 }
29
30 # Add optional dependencies to this dict without version. Version should be specified in requirements.txt
31 extras = {
32 "audio": ["av"],
33 "video": ["av"],
34 "av": ["av"],
35 "gcp": ["google-cloud-storage", "google-auth", "google-auth-oauthlib"],
36 "dicom": ["pydicom"],
37 "visualizer": ["IPython", "flask"],
38 "gdrive": [
39 "google-api-python-client",
40 "oauth2client",
41 "google-auth",
42 "google-auth-oauthlib",
43 ],
44 "point_cloud": ["laspy"],
45 }
46
47 all_extras = {r for v in extras.values() for r in v}
48 install_requires = [req_map[r] for r in req_map if r not in all_extras]
49 extras_require = {k: [req_map[r] for r in v] for k, v in extras.items()}
50 extras_require["all"] = [req_map[r] for r in all_extras]
51
52
53 init_file = os.path.join(project_name, "__init__.py")
54
55
56 def get_property(prop):
57 result = re.search(
58 # find variable with name `prop` in the __init__.py file
59 rf'{prop}\s*=\s*[\'"]([^\'"]*)[\'"]',
60 open(init_file).read(),
61 )
62 return result.group(1)
63
64
65 def libdeeplake_availabe():
66 py_ver = sys.version_info
67 if sys.platform == "linux":
68 if py_ver >= (3, 6) and py_ver <= (3, 10):
69 return True
70 if sys.platform == "darwin":
71 mac_ver = list(map(int, platform.mac_ver()[0].split(".")))
72 if (
73 (mac_ver[0] > 10 or mac_ver[0] == 10 and mac_ver[1] >= 12)
74 and py_ver >= (3, 7)
75 and py_ver <= (3, 10)
76 ):
77 return True
78 return False
79
80
81 if libdeeplake_availabe():
82 install_requires.insert(0, "libdeeplake==0.0.25")
83 install_requires.append("hub>=2.8.7")
84
85
86 config = {
87 "name": project_name,
88 "version": get_property("__version__"),
89 "description": "Activeloop Deep Lake",
90 "long_description": long_description,
91 "long_description_content_type": "text/markdown",
92 "author": "activeloop.ai",
93 "author_email": "[email protected]",
94 "packages": find_packages(),
95 "install_requires": install_requires,
96 "extras_require": extras_require,
97 "tests_require": tests,
98 "include_package_data": True,
99 "zip_safe": False,
100 "entry_points": {"console_scripts": ["activeloop = deeplake.cli.commands:cli"]},
101 "dependency_links": [],
102 "project_urls": {
103 "Documentation": "https://docs.activeloop.ai/",
104 "Source": "https://github.com/activeloopai/deeplake",
105 },
106 }
107
108 setup(**config)
109
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -103,6 +103,10 @@
"Documentation": "https://docs.activeloop.ai/",
"Source": "https://github.com/activeloopai/deeplake",
},
+ "license": "MPL-2.0",
+ "classifiers": [
+ "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
+ ],
}
setup(**config)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -103,6 +103,10 @@\n \"Documentation\": \"https://docs.activeloop.ai/\",\n \"Source\": \"https://github.com/activeloopai/deeplake\",\n },\n+ \"license\": \"MPL-2.0\",\n+ \"classifiers\": [\n+ \"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)\",\n+ ],\n }\n \n setup(**config)\n", "issue": "[FEATURE] Add license in PyPI packages\n## \ud83d\udea8\ud83d\udea8 Feature Request\r\n\r\n- [x] A new implementation (Improvement) \r\n\r\n### Is your feature request related to a problem?\r\n\r\ndeeplake, hub and libdeeplake currently use [Mozilla Public License](https://www.mozilla.org/en-US/MPL/) (MPL). However they do not advertise their license in their respective PyPI packages. This makes automatic license checks (`liccheck`) fail. \r\n\r\n### Description of the possible solution\r\n\r\nAdd license and license classifiers in `setup.cfg` for hub, deeplake and libdeeplake.\r\nSyntax:\r\n> setup.cfg\r\n```\r\n[metadata]\r\nlicense = MPL 2.0\r\nlicense_file = LICENSE\r\nclassifiers = \r\n\tLicense :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)\r\n```\r\n\r\nReferences:\r\n- https://setuptools.pypa.io/en/latest/deprecated/distutils/setupscript.html#additional-meta-data\r\n- https://pypi.org/classifiers/\r\n- https://github.com/SethMMorton/natsort/blob/master/setup.cfg\n", "before_files": [{"content": "import os\nimport sys\nimport re\nimport platform\nfrom setuptools import find_packages, setup\n\nproject_name = \"deeplake\"\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(this_directory, \"deeplake/requirements/common.txt\")) as f:\n requirements = f.readlines()\n\nwith open(os.path.join(this_directory, \"deeplake/requirements/tests.txt\")) as f:\n tests = f.readlines()\n\nwith open(os.path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\n\nreq_map = {\n b: a\n for a, b in (\n re.findall(r\"^(([^!=<>~]+)(?:[!=<>~].*)?$)\", x.strip(\"\\n\"))[0]\n for x in requirements\n )\n}\n\n# Add optional dependencies to this dict without version. 
Version should be specified in requirements.txt\nextras = {\n \"audio\": [\"av\"],\n \"video\": [\"av\"],\n \"av\": [\"av\"],\n \"gcp\": [\"google-cloud-storage\", \"google-auth\", \"google-auth-oauthlib\"],\n \"dicom\": [\"pydicom\"],\n \"visualizer\": [\"IPython\", \"flask\"],\n \"gdrive\": [\n \"google-api-python-client\",\n \"oauth2client\",\n \"google-auth\",\n \"google-auth-oauthlib\",\n ],\n \"point_cloud\": [\"laspy\"],\n}\n\nall_extras = {r for v in extras.values() for r in v}\ninstall_requires = [req_map[r] for r in req_map if r not in all_extras]\nextras_require = {k: [req_map[r] for r in v] for k, v in extras.items()}\nextras_require[\"all\"] = [req_map[r] for r in all_extras]\n\n\ninit_file = os.path.join(project_name, \"__init__.py\")\n\n\ndef get_property(prop):\n result = re.search(\n # find variable with name `prop` in the __init__.py file\n rf'{prop}\\s*=\\s*[\\'\"]([^\\'\"]*)[\\'\"]',\n open(init_file).read(),\n )\n return result.group(1)\n\n\ndef libdeeplake_availabe():\n py_ver = sys.version_info\n if sys.platform == \"linux\":\n if py_ver >= (3, 6) and py_ver <= (3, 10):\n return True\n if sys.platform == \"darwin\":\n mac_ver = list(map(int, platform.mac_ver()[0].split(\".\")))\n if (\n (mac_ver[0] > 10 or mac_ver[0] == 10 and mac_ver[1] >= 12)\n and py_ver >= (3, 7)\n and py_ver <= (3, 10)\n ):\n return True\n return False\n\n\nif libdeeplake_availabe():\n install_requires.insert(0, \"libdeeplake==0.0.25\")\ninstall_requires.append(\"hub>=2.8.7\")\n\n\nconfig = {\n \"name\": project_name,\n \"version\": get_property(\"__version__\"),\n \"description\": \"Activeloop Deep Lake\",\n \"long_description\": long_description,\n \"long_description_content_type\": \"text/markdown\",\n \"author\": \"activeloop.ai\",\n \"author_email\": \"[email protected]\",\n \"packages\": find_packages(),\n \"install_requires\": install_requires,\n \"extras_require\": extras_require,\n \"tests_require\": tests,\n \"include_package_data\": True,\n \"zip_safe\": False,\n \"entry_points\": {\"console_scripts\": [\"activeloop = deeplake.cli.commands:cli\"]},\n \"dependency_links\": [],\n \"project_urls\": {\n \"Documentation\": \"https://docs.activeloop.ai/\",\n \"Source\": \"https://github.com/activeloopai/deeplake\",\n },\n}\n\nsetup(**config)\n", "path": "setup.py"}]} | 1,835 | 121 |
gh_patches_debug_59137 | rasdani/github-patches | git_diff | pallets__werkzeug-1515 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ProfilerMiddleware Missing in Latest Release
### Environment
```
$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.4
BuildVersion: 18E226
$ python --version
Python 3.7.3
$ pip freeze
Werkzeug==0.15.2
```
### Observed Behavior
Inability to import the ProfilerMiddleware from werkzeug as described in [the documentation](https://werkzeug.palletsprojects.com/en/0.15.x/middleware/profiler/)
```
>>> from werkzeug.middleware.profile import ProfilerMiddleware
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'werkzeug.middleware.profile'
```
### Steps to Reproduce
```
[~/git] $ mkdir test_venv
[~/git] $ cd test_venv/
[~/git/test_venv] $ python3 -m venv venv
[~/git/test_venv] $ source venv/bin/activate
(venv) [~/git/test_venv] $ pip --version
pip 19.0.3 from /Users/cchapline/git/test_venv/venv/lib/python3.7/site-packages/pip (python 3.7)
(venv) [~/git/test_venv] $ pip install werkzeug
Collecting werkzeug
Using cached https://files.pythonhosted.org/packages/18/79/84f02539cc181cdbf5ff5a41b9f52cae870b6f632767e43ba6ac70132e92/Werkzeug-0.15.2-py2.py3-none-any.whl
Installing collected packages: werkzeug
Successfully installed werkzeug-0.15.2
(venv) [~/git/test_venv] $ python
Python 3.7.3 (default, Apr 4 2019, 10:56:22)
[Clang 10.0.1 (clang-1001.0.46.3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from werkzeug.middleware.profile import ProfilerMiddleware
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'werkzeug.middleware.profile'
>>> import werkzeug.middleware as mw
>>> dir(mw)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'dispatcher', 'http_proxy', 'shared_data']
```
### Expected Behavior
The `ImportError` should not occur as I can see the code in `site-packages`:
```
(venv) [~/git/test_venv] $ ls venv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py
venv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py
```
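As the `ls` output above shows, the module shipped in the wheel is `profiler.py`, so the import path in the docs appears to be off by one word; the import that does resolve is:

```python
# Module name is "profiler", not "profile" as the docs currently say:
from werkzeug.middleware.profiler import ProfilerMiddleware
```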
</issue>
<code>
[start of src/werkzeug/middleware/profiler.py]
1 """
2 Application Profiler
3 ====================
4
5 This module provides a middleware that profiles each request with the
6 :mod:`cProfile` module. This can help identify bottlenecks in your code
7 that may be slowing down your application.
8
9 .. autoclass:: ProfilerMiddleware
10
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
13 """
14 from __future__ import print_function
15
16 import os.path
17 import sys
18 import time
19 from pstats import Stats
20
21 try:
22 from cProfile import Profile
23 except ImportError:
24 from profile import Profile
25
26
27 class ProfilerMiddleware(object):
28 """Wrap a WSGI application and profile the execution of each
29 request. Responses are buffered so that timings are more exact.
30
31 If ``stream`` is given, :class:`pstats.Stats` are written to it
32 after each request. If ``profile_dir`` is given, :mod:`cProfile`
33 data files are saved to that directory, one file per request.
34
35 The filename can be customized by passing ``filename_format``. If
36 it is a string, it will be formatted using :meth:`str.format` with
37 the following fields available:
38
39 - ``{method}`` - The request method; GET, POST, etc.
40 - ``{path}`` - The request path or 'root' should one not exist.
41 - ``{elapsed}`` - The elapsed time of the request.
42 - ``{time}`` - The time of the request.
43
44 If it is a callable, it will be called with the WSGI ``environ``
45 dict and should return a filename.
46
47 :param app: The WSGI application to wrap.
48 :param stream: Write stats to this stream. Disable with ``None``.
49 :param sort_by: A tuple of columns to sort stats by. See
50 :meth:`pstats.Stats.sort_stats`.
51 :param restrictions: A tuple of restrictions to filter stats by. See
52 :meth:`pstats.Stats.print_stats`.
53 :param profile_dir: Save profile data files to this directory.
54 :param filename_format: Format string for profile data file names,
55 or a callable returning a name. See explanation above.
56
57 .. code-block:: python
58
59 from werkzeug.middleware.profile import ProfilerMiddleware
60 app = ProfilerMiddleware(app)
61
62 .. versionchanged:: 0.15
63 Stats are written even if ``profile_dir`` is given, and can be
64 disable by passing ``stream=None``.
65
66 .. versionadded:: 0.15
67 Added ``filename_format``.
68
69 .. versionadded:: 0.9
70 Added ``restrictions`` and ``profile_dir``.
71 """
72
73 def __init__(
74 self,
75 app,
76 stream=sys.stdout,
77 sort_by=("time", "calls"),
78 restrictions=(),
79 profile_dir=None,
80 filename_format="{method}.{path}.{elapsed:06d}ms.{time:d}.prof",
81 ):
82 self._app = app
83 self._stream = stream
84 self._sort_by = sort_by
85 self._restrictions = restrictions
86 self._profile_dir = profile_dir
87 self._filename_format = filename_format
88
89 def __call__(self, environ, start_response):
90 response_body = []
91
92 def catching_start_response(status, headers, exc_info=None):
93 start_response(status, headers, exc_info)
94 return response_body.append
95
96 def runapp():
97 app_iter = self._app(environ, catching_start_response)
98 response_body.extend(app_iter)
99
100 if hasattr(app_iter, "close"):
101 app_iter.close()
102
103 profile = Profile()
104 start = time.time()
105 profile.runcall(runapp)
106 body = b"".join(response_body)
107 elapsed = time.time() - start
108
109 if self._profile_dir is not None:
110 if callable(self._filename_format):
111 filename = self._filename_format(environ)
112 else:
113 filename = self._filename_format.format(
114 method=environ["REQUEST_METHOD"],
115 path=(
116 environ.get("PATH_INFO").strip("/").replace("/", ".") or "root"
117 ),
118 elapsed=elapsed * 1000.0,
119 time=time.time(),
120 )
121 filename = os.path.join(self._profile_dir, filename)
122 profile.dump_stats(filename)
123
124 if self._stream is not None:
125 stats = Stats(profile, stream=self._stream)
126 stats.sort_stats(*self._sort_by)
127 print("-" * 80, file=self._stream)
128 print("PATH: {!r}".format(environ.get("PATH_INFO", "")), file=self._stream)
129 stats.print_stats(*self._restrictions)
130 print("-" * 80 + "\n", file=self._stream)
131
132 return [body]
133
[end of src/werkzeug/middleware/profiler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py
--- a/src/werkzeug/middleware/profiler.py
+++ b/src/werkzeug/middleware/profiler.py
@@ -56,7 +56,7 @@
.. code-block:: python
- from werkzeug.middleware.profile import ProfilerMiddleware
+ from werkzeug.middleware.profiler import ProfilerMiddleware
app = ProfilerMiddleware(app)
.. versionchanged:: 0.15
| {"golden_diff": "diff --git a/src/werkzeug/middleware/profiler.py b/src/werkzeug/middleware/profiler.py\n--- a/src/werkzeug/middleware/profiler.py\n+++ b/src/werkzeug/middleware/profiler.py\n@@ -56,7 +56,7 @@\n \n .. code-block:: python\n \n- from werkzeug.middleware.profile import ProfilerMiddleware\n+ from werkzeug.middleware.profiler import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n \n .. versionchanged:: 0.15\n", "issue": "ProfilerMiddleware Missing in Latest Release\n### Environment\r\n\r\n```\r\n$ sw_vers\r\nProductName:\tMac OS X\r\nProductVersion:\t10.14.4\r\nBuildVersion:\t18E226\r\n\r\n$ python --version\r\nPython 3.7.3\r\n\r\n$ pip freeze\r\nWerkzeug==0.15.2\r\n```\r\n\r\n### Observed Behavior\r\nInability to import the ProfilerMiddleware from werkzeug as described in [the documentation](https://werkzeug.palletsprojects.com/en/0.15.x/middleware/profiler/)\r\n\r\n```\r\n>>> from werkzeug.middleware.profile import ProfilerMiddleware\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'werkzeug.middleware.profile'\r\n```\r\n\r\n### Steps to Reproduce\r\n```\r\n[~/git] $ mkdir test_venv\r\n[~/git] $ cd test_venv/\r\n[~/git/test_venv] $ python3 -m venv venv\r\n[~/git/test_venv] $ source venv/bin/activate\r\n(venv) [~/git/test_venv] $ pip --version\r\npip 19.0.3 from /Users/cchapline/git/test_venv/venv/lib/python3.7/site-packages/pip (python 3.7)\r\n(venv) [~/git/test_venv] $ pip install werkzeug\r\nCollecting werkzeug\r\n Using cached https://files.pythonhosted.org/packages/18/79/84f02539cc181cdbf5ff5a41b9f52cae870b6f632767e43ba6ac70132e92/Werkzeug-0.15.2-py2.py3-none-any.whl\r\nInstalling collected packages: werkzeug\r\nSuccessfully installed werkzeug-0.15.2\r\n(venv) [~/git/test_venv] $ python\r\nPython 3.7.3 (default, Apr 4 2019, 10:56:22)\r\n[Clang 10.0.1 (clang-1001.0.46.3)] on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from werkzeug.middleware.profile import ProfilerMiddleware\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'werkzeug.middleware.profile'\r\n>>> import werkzeug.middleware as mw\r\n>>> dir(mw)\r\n['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'dispatcher', 'http_proxy', 'shared_data']\r\n```\r\n\r\n### Expected Behavior\r\n\r\nThe `ImportError` should not occur as I can see the code in `site-packages`:\r\n\r\n```\r\n(venv) [~/git/test_venv] $ ls venv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py\r\nvenv/lib/python3.7/site-packages/werkzeug/middleware/profiler.py\r\n```\n", "before_files": [{"content": "\"\"\"\nApplication Profiler\n====================\n\nThis module provides a middleware that profiles each request with the\n:mod:`cProfile` module. This can help identify bottlenecks in your code\nthat may be slowing down your application.\n\n.. autoclass:: ProfilerMiddleware\n\n:copyright: 2007 Pallets\n:license: BSD-3-Clause\n\"\"\"\nfrom __future__ import print_function\n\nimport os.path\nimport sys\nimport time\nfrom pstats import Stats\n\ntry:\n from cProfile import Profile\nexcept ImportError:\n from profile import Profile\n\n\nclass ProfilerMiddleware(object):\n \"\"\"Wrap a WSGI application and profile the execution of each\n request. 
Responses are buffered so that timings are more exact.\n\n If ``stream`` is given, :class:`pstats.Stats` are written to it\n after each request. If ``profile_dir`` is given, :mod:`cProfile`\n data files are saved to that directory, one file per request.\n\n The filename can be customized by passing ``filename_format``. If\n it is a string, it will be formatted using :meth:`str.format` with\n the following fields available:\n\n - ``{method}`` - The request method; GET, POST, etc.\n - ``{path}`` - The request path or 'root' should one not exist.\n - ``{elapsed}`` - The elapsed time of the request.\n - ``{time}`` - The time of the request.\n\n If it is a callable, it will be called with the WSGI ``environ``\n dict and should return a filename.\n\n :param app: The WSGI application to wrap.\n :param stream: Write stats to this stream. Disable with ``None``.\n :param sort_by: A tuple of columns to sort stats by. See\n :meth:`pstats.Stats.sort_stats`.\n :param restrictions: A tuple of restrictions to filter stats by. See\n :meth:`pstats.Stats.print_stats`.\n :param profile_dir: Save profile data files to this directory.\n :param filename_format: Format string for profile data file names,\n or a callable returning a name. See explanation above.\n\n .. code-block:: python\n\n from werkzeug.middleware.profile import ProfilerMiddleware\n app = ProfilerMiddleware(app)\n\n .. versionchanged:: 0.15\n Stats are written even if ``profile_dir`` is given, and can be\n disable by passing ``stream=None``.\n\n .. versionadded:: 0.15\n Added ``filename_format``.\n\n .. versionadded:: 0.9\n Added ``restrictions`` and ``profile_dir``.\n \"\"\"\n\n def __init__(\n self,\n app,\n stream=sys.stdout,\n sort_by=(\"time\", \"calls\"),\n restrictions=(),\n profile_dir=None,\n filename_format=\"{method}.{path}.{elapsed:06d}ms.{time:d}.prof\",\n ):\n self._app = app\n self._stream = stream\n self._sort_by = sort_by\n self._restrictions = restrictions\n self._profile_dir = profile_dir\n self._filename_format = filename_format\n\n def __call__(self, environ, start_response):\n response_body = []\n\n def catching_start_response(status, headers, exc_info=None):\n start_response(status, headers, exc_info)\n return response_body.append\n\n def runapp():\n app_iter = self._app(environ, catching_start_response)\n response_body.extend(app_iter)\n\n if hasattr(app_iter, \"close\"):\n app_iter.close()\n\n profile = Profile()\n start = time.time()\n profile.runcall(runapp)\n body = b\"\".join(response_body)\n elapsed = time.time() - start\n\n if self._profile_dir is not None:\n if callable(self._filename_format):\n filename = self._filename_format(environ)\n else:\n filename = self._filename_format.format(\n method=environ[\"REQUEST_METHOD\"],\n path=(\n environ.get(\"PATH_INFO\").strip(\"/\").replace(\"/\", \".\") or \"root\"\n ),\n elapsed=elapsed * 1000.0,\n time=time.time(),\n )\n filename = os.path.join(self._profile_dir, filename)\n profile.dump_stats(filename)\n\n if self._stream is not None:\n stats = Stats(profile, stream=self._stream)\n stats.sort_stats(*self._sort_by)\n print(\"-\" * 80, file=self._stream)\n print(\"PATH: {!r}\".format(environ.get(\"PATH_INFO\", \"\")), file=self._stream)\n stats.print_stats(*self._restrictions)\n print(\"-\" * 80 + \"\\n\", file=self._stream)\n\n return [body]\n", "path": "src/werkzeug/middleware/profiler.py"}]} | 2,561 | 113 |
gh_patches_debug_56904 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-314 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Server admin port still vulnerable to DOS
There are 128 connections allowed, each limited to 512 MB; that leaves 64 GB of memory that can be acquired by a malicious actor.

There is also still the issue that it does not check whether the socket is closed.

If I understand the idea of that port, the only data sent through it are some JSON files? I don't think that justifies such a large maximum size.
---
I think this is a larger problem, though. Why is the socket being accessed directly? There are many similar gotchas that need to be considered when programming directly against a TCP socket, and there are many libraries that have already done the hard work of solving those problems.

gRPC is an option since it's already in your stack; Flask is an option, though it doesn't match the use case very well; ZeroMQ is another option.
</issue>
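Editor's note: the 64 GB figure is simply 128 allowed connections multiplied by a 512 MB per-connection buffer. The sketch below is generic, not NVFlare's actual code; it shows the shape of mitigation being discussed, refusing connections beyond a small cap instead of buffering them. `serve_connection` is a hypothetical per-connection handler.

```python
import threading

MAX_ADMIN_CONNECTIONS = 16  # small cap; the accompanying patch lowers NVFlare's constant to this value
_slots = threading.BoundedSemaphore(MAX_ADMIN_CONNECTIONS)


def handle(sock):
    if not _slots.acquire(blocking=False):
        sock.close()  # refuse outright rather than buffering yet another request
        return
    try:
        serve_connection(sock)  # hypothetical: read, validate, and answer one admin request
    finally:
        _slots.release()
```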
<code>
[start of nvflare/fuel/hci/server/hci.py]
1 # Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import logging
16 import socketserver
17 import ssl
18 import threading
19
20 from nvflare.fuel.hci.conn import Connection, receive_til_end
21 from nvflare.fuel.hci.proto import validate_proto
22 from nvflare.fuel.hci.security import get_certificate_common_name
23
24 from .reg import ServerCommandRegister
25
26 MAX_ADMIN_CONNECTIONS = 128
27
28
29 class _MsgHandler(socketserver.BaseRequestHandler):
30 """Message handler.
31
32 Used by the AdminServer to receive admin commands, validate, then process and do command through the
33 ServerCommandRegister.
34 """
35
36 connections = 0
37 lock = threading.Lock()
38
39 def __init__(self, request, client_address, server):
40 # handle() is called in the constructor so logger must be initialized first
41 self.logger = logging.getLogger(self.__class__.__name__)
42 super().__init__(request, client_address, server)
43
44 def handle(self):
45 try:
46 with _MsgHandler.lock:
47 _MsgHandler.connections += 1
48
49 self.logger.debug(f"Concurrent admin connections: {_MsgHandler.connections}")
50 if _MsgHandler.connections > MAX_ADMIN_CONNECTIONS:
51 raise ConnectionRefusedError(f"Admin connection limit ({MAX_ADMIN_CONNECTIONS}) reached")
52
53 conn = Connection(self.request, self.server)
54
55 if self.server.use_ssl:
56 cn = get_certificate_common_name(self.request.getpeercert())
57 conn.set_prop("_client_cn", cn)
58 valid = self.server.validate_client_cn(cn)
59 else:
60 valid = True
61
62 if not valid:
63 conn.append_error("authentication error")
64 else:
65 req = receive_til_end(self.request).strip()
66 command = None
67 req_json = validate_proto(req)
68 conn.request = req_json
69 if req_json is not None:
70 data = req_json["data"]
71 for item in data:
72 it = item["type"]
73 if it == "command":
74 command = item["data"]
75 break
76
77 if command is None:
78 conn.append_error("protocol violation")
79 else:
80 self.server.cmd_reg.process_command(conn, command)
81 else:
82 # not json encoded
83 conn.append_error("protocol violation")
84
85 if not conn.ended:
86 conn.close()
87 except BaseException as exc:
88 self.logger.error(f"Admin connection terminated due to exception: {str(exc)}")
89 if self.logger.getEffectiveLevel() <= logging.DEBUG:
90 self.logger.exception("Admin connection error")
91 finally:
92 with _MsgHandler.lock:
93 _MsgHandler.connections -= 1
94
95
96 def initialize_hci():
97 socketserver.TCPServer.allow_reuse_address = True
98
99
100 class AdminServer(socketserver.ThreadingTCPServer):
101 # faster re-binding
102 allow_reuse_address = True
103
104 # make this bigger than five
105 request_queue_size = 10
106
107 # kick connections when we exit
108 daemon_threads = True
109
110 def __init__(
111 self,
112 cmd_reg: ServerCommandRegister,
113 host,
114 port,
115 ca_cert=None,
116 server_cert=None,
117 server_key=None,
118 accepted_client_cns=None,
119 ):
120 """Base class of FedAdminServer to create a server that can receive commands.
121
122 Args:
123 cmd_reg: CommandRegister
124 host: the IP address of the admin server
125 port: port number of admin server
126 ca_cert: the root CA's cert file name
127 server_cert: server's cert, signed by the CA
128 server_key: server's private key file
129 accepted_client_cns: list of accepted Common Names from client, if specified
130 """
131 socketserver.TCPServer.__init__(self, (host, port), _MsgHandler, False)
132
133 self.use_ssl = False
134 if ca_cert and server_cert:
135 if accepted_client_cns:
136 assert isinstance(accepted_client_cns, list), "accepted_client_cns must be list but got {}.".format(
137 accepted_client_cns
138 )
139
140 ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
141 ctx.verify_mode = ssl.CERT_REQUIRED
142 ctx.load_verify_locations(ca_cert)
143 ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
144
145 # replace the socket with an SSL version of itself
146 self.socket = ctx.wrap_socket(self.socket, server_side=True)
147 self.use_ssl = True
148
149 # bind the socket and start the server
150 self.server_bind()
151 self.server_activate()
152
153 self._thread = None
154 self.host = host
155 self.port = port
156 self.accepted_client_cns = accepted_client_cns
157 self.cmd_reg = cmd_reg
158 cmd_reg.finalize()
159 self.logger = logging.getLogger(self.__class__.__name__)
160
161 def validate_client_cn(self, cn):
162 if self.accepted_client_cns:
163 return cn in self.accepted_client_cns
164 else:
165 return True
166
167 def stop(self):
168 self.shutdown()
169 self.cmd_reg.close()
170
171 if self._thread.is_alive():
172 self._thread.join()
173
174 self.logger.info(f"Admin Server {self.host} on Port {self.port} shutdown!")
175
176 def set_command_registry(self, cmd_reg: ServerCommandRegister):
177 if cmd_reg:
178 cmd_reg.finalize()
179
180 if self.cmd_reg:
181 self.cmd_reg.close()
182
183 self.cmd_reg = cmd_reg
184
185 def start(self):
186 if self._thread is None:
187 self._thread = threading.Thread(target=self._run, args=())
188
189 if not self._thread.is_alive():
190 self._thread.start()
191
192 def _run(self):
193 self.logger.info(f"Starting Admin Server {self.host} on Port {self.port}")
194 self.serve_forever()
195
[end of nvflare/fuel/hci/server/hci.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nvflare/fuel/hci/server/hci.py b/nvflare/fuel/hci/server/hci.py
--- a/nvflare/fuel/hci/server/hci.py
+++ b/nvflare/fuel/hci/server/hci.py
@@ -23,7 +23,7 @@
from .reg import ServerCommandRegister
-MAX_ADMIN_CONNECTIONS = 128
+MAX_ADMIN_CONNECTIONS = 16
class _MsgHandler(socketserver.BaseRequestHandler):
| {"golden_diff": "diff --git a/nvflare/fuel/hci/server/hci.py b/nvflare/fuel/hci/server/hci.py\n--- a/nvflare/fuel/hci/server/hci.py\n+++ b/nvflare/fuel/hci/server/hci.py\n@@ -23,7 +23,7 @@\n \n from .reg import ServerCommandRegister\n \n-MAX_ADMIN_CONNECTIONS = 128\n+MAX_ADMIN_CONNECTIONS = 16\n \n \n class _MsgHandler(socketserver.BaseRequestHandler):\n", "issue": "Server admin port still vulnerable to DOS\nThere are 128 connections allowed where each is limited to 512mb, this leaves 64GB of memory that can be acquired by a mal actor.\r\n\r\nThere is also still the issue where it is not checking if the socket is closed.\r\n\r\nIf I understand the idea of that port, the only data sent through it are some json files? I don't think it justifies such a large max size.\r\n\r\n---\r\n\r\nI think this is a larger problem though. Why is the the socket being accessed directly? There are many similar gotchas need to be considered when programming directly on a TCP socket and there are many libraries that have already done the hard work of solving those problems.\r\n\r\ngRPC is an option since its already in your stack, Flask is an option but it doesn't match the use case too well, zeromq is an option\n", "before_files": [{"content": "# Copyright (c) 2021-2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nimport socketserver\nimport ssl\nimport threading\n\nfrom nvflare.fuel.hci.conn import Connection, receive_til_end\nfrom nvflare.fuel.hci.proto import validate_proto\nfrom nvflare.fuel.hci.security import get_certificate_common_name\n\nfrom .reg import ServerCommandRegister\n\nMAX_ADMIN_CONNECTIONS = 128\n\n\nclass _MsgHandler(socketserver.BaseRequestHandler):\n \"\"\"Message handler.\n\n Used by the AdminServer to receive admin commands, validate, then process and do command through the\n ServerCommandRegister.\n \"\"\"\n\n connections = 0\n lock = threading.Lock()\n\n def __init__(self, request, client_address, server):\n # handle() is called in the constructor so logger must be initialized first\n self.logger = logging.getLogger(self.__class__.__name__)\n super().__init__(request, client_address, server)\n\n def handle(self):\n try:\n with _MsgHandler.lock:\n _MsgHandler.connections += 1\n\n self.logger.debug(f\"Concurrent admin connections: {_MsgHandler.connections}\")\n if _MsgHandler.connections > MAX_ADMIN_CONNECTIONS:\n raise ConnectionRefusedError(f\"Admin connection limit ({MAX_ADMIN_CONNECTIONS}) reached\")\n\n conn = Connection(self.request, self.server)\n\n if self.server.use_ssl:\n cn = get_certificate_common_name(self.request.getpeercert())\n conn.set_prop(\"_client_cn\", cn)\n valid = self.server.validate_client_cn(cn)\n else:\n valid = True\n\n if not valid:\n conn.append_error(\"authentication error\")\n else:\n req = receive_til_end(self.request).strip()\n command = None\n req_json = validate_proto(req)\n conn.request = req_json\n if req_json is not None:\n data = req_json[\"data\"]\n for item in data:\n it = 
item[\"type\"]\n if it == \"command\":\n command = item[\"data\"]\n break\n\n if command is None:\n conn.append_error(\"protocol violation\")\n else:\n self.server.cmd_reg.process_command(conn, command)\n else:\n # not json encoded\n conn.append_error(\"protocol violation\")\n\n if not conn.ended:\n conn.close()\n except BaseException as exc:\n self.logger.error(f\"Admin connection terminated due to exception: {str(exc)}\")\n if self.logger.getEffectiveLevel() <= logging.DEBUG:\n self.logger.exception(\"Admin connection error\")\n finally:\n with _MsgHandler.lock:\n _MsgHandler.connections -= 1\n\n\ndef initialize_hci():\n socketserver.TCPServer.allow_reuse_address = True\n\n\nclass AdminServer(socketserver.ThreadingTCPServer):\n # faster re-binding\n allow_reuse_address = True\n\n # make this bigger than five\n request_queue_size = 10\n\n # kick connections when we exit\n daemon_threads = True\n\n def __init__(\n self,\n cmd_reg: ServerCommandRegister,\n host,\n port,\n ca_cert=None,\n server_cert=None,\n server_key=None,\n accepted_client_cns=None,\n ):\n \"\"\"Base class of FedAdminServer to create a server that can receive commands.\n\n Args:\n cmd_reg: CommandRegister\n host: the IP address of the admin server\n port: port number of admin server\n ca_cert: the root CA's cert file name\n server_cert: server's cert, signed by the CA\n server_key: server's private key file\n accepted_client_cns: list of accepted Common Names from client, if specified\n \"\"\"\n socketserver.TCPServer.__init__(self, (host, port), _MsgHandler, False)\n\n self.use_ssl = False\n if ca_cert and server_cert:\n if accepted_client_cns:\n assert isinstance(accepted_client_cns, list), \"accepted_client_cns must be list but got {}.\".format(\n accepted_client_cns\n )\n\n ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)\n ctx.verify_mode = ssl.CERT_REQUIRED\n ctx.load_verify_locations(ca_cert)\n ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)\n\n # replace the socket with an SSL version of itself\n self.socket = ctx.wrap_socket(self.socket, server_side=True)\n self.use_ssl = True\n\n # bind the socket and start the server\n self.server_bind()\n self.server_activate()\n\n self._thread = None\n self.host = host\n self.port = port\n self.accepted_client_cns = accepted_client_cns\n self.cmd_reg = cmd_reg\n cmd_reg.finalize()\n self.logger = logging.getLogger(self.__class__.__name__)\n\n def validate_client_cn(self, cn):\n if self.accepted_client_cns:\n return cn in self.accepted_client_cns\n else:\n return True\n\n def stop(self):\n self.shutdown()\n self.cmd_reg.close()\n\n if self._thread.is_alive():\n self._thread.join()\n\n self.logger.info(f\"Admin Server {self.host} on Port {self.port} shutdown!\")\n\n def set_command_registry(self, cmd_reg: ServerCommandRegister):\n if cmd_reg:\n cmd_reg.finalize()\n\n if self.cmd_reg:\n self.cmd_reg.close()\n\n self.cmd_reg = cmd_reg\n\n def start(self):\n if self._thread is None:\n self._thread = threading.Thread(target=self._run, args=())\n\n if not self._thread.is_alive():\n self._thread.start()\n\n def _run(self):\n self.logger.info(f\"Starting Admin Server {self.host} on Port {self.port}\")\n self.serve_forever()\n", "path": "nvflare/fuel/hci/server/hci.py"}]} | 2,579 | 109 |
gh_patches_debug_1890 | rasdani/github-patches | git_diff | pydantic__pydantic-738 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
duplicated errors when validators raise ValidationError
# Bug
As a workaround for #619 I tried the following:
```py
from pydantic import VERSION, BaseModel, Union, validator
from typing_extensions import Literal
print('pydantic version:', VERSION)
class Foo(BaseModel):
model_type: Literal['foo']
f: int
class Bar(BaseModel):
model_type: Literal['bar']
b: int
class MyModel(BaseModel):
foobar: Union[Foo, Bar]
@validator('foobar', pre=True)
def check_action(cls, v):
if isinstance(v, dict):
model_type = v.get('model_type')
if model_type == 'foo':
return Foo(**v)
if model_type == 'var':
return Bar(**v)
return v
MyModel(foobar={'model_type': 'foo', 'f': 'x'})
```
Output:
```
pydantic version: 0.32.1
Traceback (most recent call last):
File "test.py", line 31, in <module>
MyModel(foobar={'model_type': 'foo', 'f': 'x'})
File "pydantic/main.py", line 275, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 785, in pydantic.main.validate_model
pydantic.error_wrappers.ValidationError: 2 validation errors for MyModel
foobar -> f
value is not a valid integer (type=type_error.integer)
foobar -> f
value is not a valid integer (type=type_error.integer)
```
When validators raise `ValidationError`, the errors are duplicated.
Won't be that common, but should be fixed.
Repeated error when validator raises an exception
# Bug
Please complete:
* OS: **Ubuntu**
* Python version `import sys; print(sys.version)`: **3.7.4**
* Pydantic version `import pydantic; print(pydantic.VERSION)`: **v0.32.1**
```py
from typing import Optional
from pydantic import BaseModel, validator
class Foobar(BaseModel):
foo: Optional[str] = None
@validator('foo', always=True)
def check_foo(cls, v):
if not v:
raise ValueError('custom error, foo is required')
return v
print(Foobar(foo='x'))
print(Foobar())
```
Outputs:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for Foobar
foo
none is not an allowed value (type=type_error.none.not_allowed)
foo
custom error, foo is required (type=value_error)
```
If I add `pre=True`, the error is even weirder:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for Foobar
foo
custom error, foo is required (type=value_error)
foo
custom error, foo is required (type=value_error)
```
</issue>
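Editor's note: the duplication is easiest to confirm on the structured error list rather than on the formatted message. The sketch below reuses the `Foobar` model from the second report and prints `ValidationError.errors()`, where the two entries for the same field show up as two dicts.

```python
from typing import Optional

from pydantic import BaseModel, ValidationError, validator


class Foobar(BaseModel):
    foo: Optional[str] = None

    @validator("foo", always=True)
    def check_foo(cls, v):
        if not v:
            raise ValueError("custom error, foo is required")
        return v


try:
    Foobar()
except ValidationError as exc:
    # On the affected version this list contains two dicts for the same "foo" location.
    print(exc.errors())
```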
<code>
[start of pydantic/error_wrappers.py]
1 import json
2 from functools import lru_cache
3 from typing import TYPE_CHECKING, Any, Dict, Generator, List, Optional, Sequence, Tuple, Type, Union
4
5 if TYPE_CHECKING: # pragma: no cover
6 from pydantic import BaseConfig # noqa: F401
7
8 __all__ = ('ErrorWrapper', 'ValidationError')
9
10
11 class ErrorWrapper:
12 __slots__ = 'exc', 'type_', 'loc', 'msg_template'
13
14 def __init__(
15 self, exc: Exception, *, loc: Union[Tuple[str, ...], str], config: Optional[Type['BaseConfig']] = None
16 ) -> None:
17 self.exc = exc
18 self.type_ = get_exc_type(type(exc))
19 self.loc: Tuple[str, ...] = loc if isinstance(loc, tuple) else (loc,) # type: ignore
20 self.msg_template = config.error_msg_templates.get(self.type_) if config else None
21
22 @property
23 def ctx(self) -> Dict[str, Any]:
24 return getattr(self.exc, 'ctx', None)
25
26 @property
27 def msg(self) -> str:
28 default_msg_template = getattr(self.exc, 'msg_template', None)
29 msg_template = self.msg_template or default_msg_template
30 if msg_template:
31 return msg_template.format(**self.ctx or {})
32
33 return str(self.exc)
34
35 def dict(self, *, loc_prefix: Optional[Tuple[str, ...]] = None) -> Dict[str, Any]:
36 loc = self.loc if loc_prefix is None else loc_prefix + self.loc
37
38 d: Dict[str, Any] = {'loc': loc, 'msg': self.msg, 'type': self.type_}
39
40 if self.ctx is not None:
41 d['ctx'] = self.ctx
42
43 return d
44
45
46 # ErrorList is something like Union[List[Union[List[ErrorWrapper], ErrorWrapper]], ErrorWrapper]
47 # but recursive, therefore just use:
48 ErrorList = Union[Sequence[Any], ErrorWrapper]
49
50
51 class ValidationError(ValueError):
52 __slots__ = ('raw_errors', 'model')
53
54 def __init__(self, errors: Sequence[ErrorList], model: Type[Any]) -> None:
55 self.raw_errors = errors
56 self.model = model
57
58 @lru_cache()
59 def errors(self) -> List[Dict[str, Any]]:
60 return list(flatten_errors(self.raw_errors))
61
62 def json(self, *, indent: Union[None, int, str] = 2) -> str:
63 return json.dumps(self.errors(), indent=indent)
64
65 def __str__(self) -> str:
66 errors = self.errors()
67 no_errors = len(errors)
68 return (
69 f'{no_errors} validation error{"" if no_errors == 1 else "s"} for {self.model.__name__}\n'
70 f'{display_errors(errors)}'
71 )
72
73
74 def display_errors(errors: List[Dict[str, Any]]) -> str:
75 return '\n'.join(f'{_display_error_loc(e)}\n {e["msg"]} ({_display_error_type_and_ctx(e)})' for e in errors)
76
77
78 def _display_error_loc(error: Dict[str, Any]) -> str:
79 return ' -> '.join(str(l) for l in error['loc'])
80
81
82 def _display_error_type_and_ctx(error: Dict[str, Any]) -> str:
83 t = 'type=' + error['type']
84 ctx = error.get('ctx')
85 if ctx:
86 return t + ''.join(f'; {k}={v}' for k, v in ctx.items())
87 else:
88 return t
89
90
91 def flatten_errors(
92 errors: Sequence[Any], *, loc: Optional[Tuple[str, ...]] = None
93 ) -> Generator[Dict[str, Any], None, None]:
94 for error in errors:
95 if isinstance(error, ErrorWrapper):
96 if isinstance(error.exc, ValidationError):
97 if loc is not None:
98 error_loc = loc + error.loc
99 else:
100 error_loc = error.loc
101 yield from flatten_errors(error.exc.raw_errors, loc=error_loc)
102 else:
103 yield error.dict(loc_prefix=loc)
104 elif isinstance(error, list):
105 yield from flatten_errors(error)
106 else:
107 raise RuntimeError(f'Unknown error object: {error}')
108
109
110 @lru_cache()
111 def get_exc_type(cls: Type[Exception]) -> str:
112
113 base_name = 'type_error' if issubclass(cls, TypeError) else 'value_error'
114 if cls in (TypeError, ValueError):
115 # just TypeError or ValueError, no extra code
116 return base_name
117
118 # if it's not a TypeError or ValueError, we just take the lowercase of the exception name
119 # no chaining or snake case logic, use "code" for more complex error types.
120 code = getattr(cls, 'code', None) or cls.__name__.replace('Error', '').lower()
121 return base_name + '.' + code
122
[end of pydantic/error_wrappers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pydantic/error_wrappers.py b/pydantic/error_wrappers.py
--- a/pydantic/error_wrappers.py
+++ b/pydantic/error_wrappers.py
@@ -42,6 +42,9 @@
return d
+ def __repr__(self) -> str:
+ return f'<ErrorWrapper {self.dict()}>'
+
# ErrorList is something like Union[List[Union[List[ErrorWrapper], ErrorWrapper]], ErrorWrapper]
# but recursive, therefore just use:
| {"golden_diff": "diff --git a/pydantic/error_wrappers.py b/pydantic/error_wrappers.py\n--- a/pydantic/error_wrappers.py\n+++ b/pydantic/error_wrappers.py\n@@ -42,6 +42,9 @@\n \n return d\n \n+ def __repr__(self) -> str:\n+ return f'<ErrorWrapper {self.dict()}>'\n+\n \n # ErrorList is something like Union[List[Union[List[ErrorWrapper], ErrorWrapper]], ErrorWrapper]\n # but recursive, therefore just use:\n", "issue": "duplicated errors when validators raise ValidationError\n# Bug\r\n\r\nAs a work around for #619 I tried the following\r\n\r\n```py\r\nfrom pydantic import VERSION, BaseModel, Union, validator\r\nfrom typing_extensions import Literal\r\nprint('pydantic version:', VERSION)\r\n\r\nclass Foo(BaseModel):\r\n model_type: Literal['foo']\r\n f: int\r\n\r\nclass Bar(BaseModel):\r\n model_type: Literal['bar']\r\n b: int\r\n\r\nclass MyModel(BaseModel):\r\n foobar: Union[Foo, Bar]\r\n\r\n @validator('foobar', pre=True)\r\n def check_action(cls, v):\r\n if isinstance(v, dict):\r\n model_type = v.get('model_type')\r\n if model_type == 'foo':\r\n return Foo(**v)\r\n if model_type == 'var':\r\n return Bar(**v)\r\n return v\r\n\r\nMyModel(foobar={'model_type': 'foo', 'f': 'x'})\r\n```\r\nOutput:\r\n```\r\npydantic version: 0.32.1\r\nTraceback (most recent call last):\r\n File \"test.py\", line 31, in <module>\r\n MyModel(foobar={'model_type': 'foo', 'f': 'x'})\r\n File \"pydantic/main.py\", line 275, in pydantic.main.BaseModel.__init__\r\n File \"pydantic/main.py\", line 785, in pydantic.main.validate_model\r\npydantic.error_wrappers.ValidationError: 2 validation errors for MyModel\r\nfoobar -> f\r\n value is not a valid integer (type=type_error.integer)\r\nfoobar -> f\r\n value is not a valid integer (type=type_error.integer)\r\n```\r\n\r\nWhen validators raise `ValidationError` the errors are duplicated.\r\n\r\nWon't be that common, but should be fixed.\r\n\r\n\nRepeated error when validator raises an exception\n# Bug\r\n\r\nPlease complete:\r\n* OS: **Ubuntu**\r\n* Python version `import sys; print(sys.version)`: **3.7.4**\r\n* Pydantic version `import pydantic; print(pydantic.VERSION)`: **v0.32.1**\r\n\r\n```py\r\nfrom typing import Optional\r\n\r\nfrom pydantic import BaseModel, validator\r\n\r\nclass Foobar(BaseModel):\r\n foo: Optional[str] = None\r\n\r\n @validator('foo', always=True)\r\n def check_foo(cls, v):\r\n if not v:\r\n raise ValueError('custom error, foo is required')\r\n return v\r\n\r\nprint(Foobar(foo='x'))\r\nprint(Foobar())\r\n```\r\n\r\nOutputs:\r\n```\r\npydantic.error_wrappers.ValidationError: 2 validation errors for Foobar\r\nfoo\r\n none is not an allowed value (type=type_error.none.not_allowed)\r\nfoo\r\n custom error, foo is required (type=value_error)\r\n```\r\n\r\nIf i add `pre=True`, the error is even weirder:\r\n```\r\npydantic.error_wrappers.ValidationError: 2 validation errors for Foobar\r\nfoo\r\n custom error, foo is required (type=value_error)\r\nfoo\r\n custom error, foo is required (type=value_error)\r\n```\n", "before_files": [{"content": "import json\nfrom functools import lru_cache\nfrom typing import TYPE_CHECKING, Any, Dict, Generator, List, Optional, Sequence, Tuple, Type, Union\n\nif TYPE_CHECKING: # pragma: no cover\n from pydantic import BaseConfig # noqa: F401\n\n__all__ = ('ErrorWrapper', 'ValidationError')\n\n\nclass ErrorWrapper:\n __slots__ = 'exc', 'type_', 'loc', 'msg_template'\n\n def __init__(\n self, exc: Exception, *, loc: Union[Tuple[str, ...], str], config: Optional[Type['BaseConfig']] = None\n ) -> None:\n self.exc = exc\n 
self.type_ = get_exc_type(type(exc))\n self.loc: Tuple[str, ...] = loc if isinstance(loc, tuple) else (loc,) # type: ignore\n self.msg_template = config.error_msg_templates.get(self.type_) if config else None\n\n @property\n def ctx(self) -> Dict[str, Any]:\n return getattr(self.exc, 'ctx', None)\n\n @property\n def msg(self) -> str:\n default_msg_template = getattr(self.exc, 'msg_template', None)\n msg_template = self.msg_template or default_msg_template\n if msg_template:\n return msg_template.format(**self.ctx or {})\n\n return str(self.exc)\n\n def dict(self, *, loc_prefix: Optional[Tuple[str, ...]] = None) -> Dict[str, Any]:\n loc = self.loc if loc_prefix is None else loc_prefix + self.loc\n\n d: Dict[str, Any] = {'loc': loc, 'msg': self.msg, 'type': self.type_}\n\n if self.ctx is not None:\n d['ctx'] = self.ctx\n\n return d\n\n\n# ErrorList is something like Union[List[Union[List[ErrorWrapper], ErrorWrapper]], ErrorWrapper]\n# but recursive, therefore just use:\nErrorList = Union[Sequence[Any], ErrorWrapper]\n\n\nclass ValidationError(ValueError):\n __slots__ = ('raw_errors', 'model')\n\n def __init__(self, errors: Sequence[ErrorList], model: Type[Any]) -> None:\n self.raw_errors = errors\n self.model = model\n\n @lru_cache()\n def errors(self) -> List[Dict[str, Any]]:\n return list(flatten_errors(self.raw_errors))\n\n def json(self, *, indent: Union[None, int, str] = 2) -> str:\n return json.dumps(self.errors(), indent=indent)\n\n def __str__(self) -> str:\n errors = self.errors()\n no_errors = len(errors)\n return (\n f'{no_errors} validation error{\"\" if no_errors == 1 else \"s\"} for {self.model.__name__}\\n'\n f'{display_errors(errors)}'\n )\n\n\ndef display_errors(errors: List[Dict[str, Any]]) -> str:\n return '\\n'.join(f'{_display_error_loc(e)}\\n {e[\"msg\"]} ({_display_error_type_and_ctx(e)})' for e in errors)\n\n\ndef _display_error_loc(error: Dict[str, Any]) -> str:\n return ' -> '.join(str(l) for l in error['loc'])\n\n\ndef _display_error_type_and_ctx(error: Dict[str, Any]) -> str:\n t = 'type=' + error['type']\n ctx = error.get('ctx')\n if ctx:\n return t + ''.join(f'; {k}={v}' for k, v in ctx.items())\n else:\n return t\n\n\ndef flatten_errors(\n errors: Sequence[Any], *, loc: Optional[Tuple[str, ...]] = None\n) -> Generator[Dict[str, Any], None, None]:\n for error in errors:\n if isinstance(error, ErrorWrapper):\n if isinstance(error.exc, ValidationError):\n if loc is not None:\n error_loc = loc + error.loc\n else:\n error_loc = error.loc\n yield from flatten_errors(error.exc.raw_errors, loc=error_loc)\n else:\n yield error.dict(loc_prefix=loc)\n elif isinstance(error, list):\n yield from flatten_errors(error)\n else:\n raise RuntimeError(f'Unknown error object: {error}')\n\n\n@lru_cache()\ndef get_exc_type(cls: Type[Exception]) -> str:\n\n base_name = 'type_error' if issubclass(cls, TypeError) else 'value_error'\n if cls in (TypeError, ValueError):\n # just TypeError or ValueError, no extra code\n return base_name\n\n # if it's not a TypeError or ValueError, we just take the lowercase of the exception name\n # no chaining or snake case logic, use \"code\" for more complex error types.\n code = getattr(cls, 'code', None) or cls.__name__.replace('Error', '').lower()\n return base_name + '.' + code\n", "path": "pydantic/error_wrappers.py"}]} | 2,498 | 112 |
gh_patches_debug_3411 | rasdani/github-patches | git_diff | rasterio__rasterio-1827 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
optimizing `transform_geom` for repeated transformations
Related to: https://github.com/Toblerity/Fiona/issues/799
Is there interest in adding this to rasterio as well?
</issue>
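Editor's note: the linked Fiona issue is about reusing a prepared transformer across many geometries instead of paying the CRS-parsing cost on every call. The sketch below shows the repeated-call pattern whose per-call setup the report wants to amortize; the `features` list is hypothetical sample data.

```python
from rasterio.warp import transform_geom

# Hypothetical GeoJSON-like features; real data would typically come from Fiona.
features = [
    {"geometry": {"type": "Point", "coordinates": [11.0, 48.0]}},
    {"geometry": {"type": "Point", "coordinates": [12.0, 49.0]}},
]

# Each call re-parses both CRS definitions; over thousands of features that
# repeated setup is the cost the report would like to avoid.
reprojected = [
    transform_geom("EPSG:4326", "EPSG:3857", feature["geometry"])
    for feature in features
]
```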
<code>
[start of rasterio/compat.py]
1 """Python 2-3 compatibility."""
2
3 import itertools
4 import sys
5 import warnings
6
7
8 if sys.version_info[0] >= 3: # pragma: no cover
9 string_types = str,
10 text_type = str
11 integer_types = int,
12 zip_longest = itertools.zip_longest
13 import configparser
14 from urllib.parse import urlparse
15 from collections import UserDict
16 from collections.abc import Iterable, Mapping
17 from inspect import getfullargspec as getargspec
18 else: # pragma: no cover
19 warnings.warn("Python 2 compatibility will be removed after version 1.1", DeprecationWarning)
20 string_types = basestring,
21 text_type = unicode
22 integer_types = int, long
23 zip_longest = itertools.izip_longest
24 import ConfigParser as configparser
25 from urlparse import urlparse
26 from UserDict import UserDict
27 from inspect import getargspec
28 from collections import Iterable, Mapping
29
[end of rasterio/compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/compat.py b/rasterio/compat.py
--- a/rasterio/compat.py
+++ b/rasterio/compat.py
@@ -26,3 +26,8 @@
from UserDict import UserDict
from inspect import getargspec
from collections import Iterable, Mapping
+
+# Users can pass in objects that subclass a few different objects
+# More specifically, rasterio has a CRS() class that subclasses UserDict()
+# In Python 2 UserDict() is in its own module and does not subclass Mapping()
+DICT_TYPES = (dict, Mapping, UserDict)
| {"golden_diff": "diff --git a/rasterio/compat.py b/rasterio/compat.py\n--- a/rasterio/compat.py\n+++ b/rasterio/compat.py\n@@ -26,3 +26,8 @@\n from UserDict import UserDict\n from inspect import getargspec\n from collections import Iterable, Mapping\n+\n+# Users can pass in objects that subclass a few different objects\n+# More specifically, rasterio has a CRS() class that subclasses UserDict()\n+# In Python 2 UserDict() is in its own module and does not subclass Mapping()\n+DICT_TYPES = (dict, Mapping, UserDict)\n", "issue": "optimizing `transform_geom` for repeated transformations\nRelated to: https://github.com/Toblerity/Fiona/issues/799\r\n\r\nIs there interest in adding this to rasterio as well?\n", "before_files": [{"content": "\"\"\"Python 2-3 compatibility.\"\"\"\n\nimport itertools\nimport sys\nimport warnings\n\n\nif sys.version_info[0] >= 3: # pragma: no cover\n string_types = str,\n text_type = str\n integer_types = int,\n zip_longest = itertools.zip_longest\n import configparser\n from urllib.parse import urlparse\n from collections import UserDict\n from collections.abc import Iterable, Mapping\n from inspect import getfullargspec as getargspec\nelse: # pragma: no cover\n warnings.warn(\"Python 2 compatibility will be removed after version 1.1\", DeprecationWarning)\n string_types = basestring,\n text_type = unicode\n integer_types = int, long\n zip_longest = itertools.izip_longest\n import ConfigParser as configparser\n from urlparse import urlparse\n from UserDict import UserDict\n from inspect import getargspec\n from collections import Iterable, Mapping\n", "path": "rasterio/compat.py"}]} | 834 | 134 |
gh_patches_debug_8726 | rasdani/github-patches | git_diff | gratipay__gratipay.com-3031 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
audit csrf_token for httponly
We've received a security disclosure about [not setting `httponly` for `csrf_token`](https://github.com/gratipay/gratipay.com/blob/1749/gratipay/security/csrf.py#L153). We should review that decision and leave a comment in the code explaining why it is necessary.
</issue>
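Editor's note: the handler code below explains why the cookie is deliberately left readable: the non-cookie token check falls back to an `X-CSRF-TOKEN` header, which browser-side script can only populate if it can read the `csrf_token` cookie, i.e. the cookie must not be `HttpOnly`. A hedged sketch of that round trip using `requests`; the URLs are placeholders, not Gratipay endpoints.

```python
import requests

session = requests.Session()
session.get("https://example.com/")        # server sets the csrf_token cookie
token = session.cookies.get("csrf_token")  # readable precisely because it is not HttpOnly

# Mimic what browser-side JavaScript does for AJAX: echo the cookie back in a header.
session.post(
    "https://example.com/endpoint",        # placeholder endpoint
    headers={"X-CSRF-TOKEN": token},
    data={},
)
```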
<code>
[start of gratipay/security/csrf.py]
1 """Cross Site Request Forgery middleware, borrowed from Django.
2
3 See also:
4
5 https://github.com/django/django/blob/master/django/middleware/csrf.py
6 https://docs.djangoproject.com/en/dev/ref/contrib/csrf/
7 https://github.com/gratipay/gratipay.com/issues/88
8
9 """
10
11 from datetime import timedelta
12 import re
13 import urlparse
14 from aspen import log_dammit
15
16
17 #from django.utils.cache import patch_vary_headers
18 cc_delim_re = re.compile(r'\s*,\s*')
19 def patch_vary_headers(response, newheaders):
20 """
21 Adds (or updates) the "Vary" header in the given HttpResponse object.
22 newheaders is a list of header names that should be in "Vary". Existing
23 headers in "Vary" aren't removed.
24 """
25 # Note that we need to keep the original order intact, because cache
26 # implementations may rely on the order of the Vary contents in, say,
27 # computing an MD5 hash.
28 if 'Vary' in response.headers:
29 vary_headers = cc_delim_re.split(response.headers['Vary'])
30 else:
31 vary_headers = []
32 # Use .lower() here so we treat headers as case-insensitive.
33 existing_headers = set([header.lower() for header in vary_headers])
34 additional_headers = [newheader for newheader in newheaders
35 if newheader.lower() not in existing_headers]
36 response.headers['Vary'] = ', '.join(vary_headers + additional_headers)
37
38
39 #from django.utils.http import same_origin
40 def same_origin(url1, url2):
41 """
42 Checks if two URLs are 'same-origin'
43 """
44 p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)
45 return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)
46
47
48 from aspen import Response
49 from crypto import constant_time_compare, get_random_string
50
51 REASON_NO_REFERER = "Referer checking failed - no Referer."
52 REASON_BAD_REFERER = "Referer checking failed - %s does not match %s."
53 REASON_NO_CSRF_COOKIE = "CSRF cookie not set."
54 REASON_BAD_TOKEN = "CSRF token missing or incorrect."
55
56 TOKEN_LENGTH = 32
57 CSRF_TIMEOUT = timedelta(days=7)
58
59
60 def _get_new_csrf_key():
61 return get_random_string(TOKEN_LENGTH)
62
63
64 def _sanitize_token(token):
65 # Allow only alphanum, and ensure we return a 'str' for the sake
66 # of the post processing middleware.
67 if len(token) > TOKEN_LENGTH:
68 return _get_new_csrf_key()
69 token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))
70 if token == "":
71 # In case the cookie has been truncated to nothing at some point.
72 return _get_new_csrf_key()
73 return token
74
75 def _is_secure(request):
76 import gratipay
77 return gratipay.canonical_scheme == 'https'
78
79 def _get_host(request):
80 """Returns the HTTP host using the request headers.
81 """
82 return request.headers.get('X-Forwarded-Host', request.headers['Host'])
83
84
85
86 def get_csrf_token_from_request(request):
87 """Given a Request object, reject it if it's a forgery.
88 """
89 if request.line.uri.startswith('/assets/'): return
90 if request.line.uri.startswith('/callbacks/'): return
91
92 try:
93 csrf_token = _sanitize_token(request.headers.cookie['csrf_token'].value)
94 except KeyError:
95 csrf_token = None
96
97 request.context['csrf_token'] = csrf_token or _get_new_csrf_key()
98
99 # Assume that anything not defined as 'safe' by RC2616 needs protection
100 if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):
101
102 if _is_secure(request):
103 # Suppose user visits http://example.com/
104 # An active network attacker (man-in-the-middle, MITM) sends a
105 # POST form that targets https://example.com/detonate-bomb/ and
106 # submits it via JavaScript.
107 #
108 # The attacker will need to provide a CSRF cookie and token, but
109 # that's no problem for a MITM and the session-independent
110 # nonce we're using. So the MITM can circumvent the CSRF
111 # protection. This is true for any HTTP connection, but anyone
112 # using HTTPS expects better! For this reason, for
113 # https://example.com/ we need additional protection that treats
114 # http://example.com/ as completely untrusted. Under HTTPS,
115 # Barth et al. found that the Referer header is missing for
116 # same-domain requests in only about 0.2% of cases or less, so
117 # we can use strict Referer checking.
118 referer = request.headers.get('Referer')
119 if referer is None:
120 raise Response(403, REASON_NO_REFERER)
121
122 good_referer = 'https://%s/' % _get_host(request)
123 if not same_origin(referer, good_referer):
124 reason = REASON_BAD_REFERER % (referer, good_referer)
125 log_dammit(reason)
126 raise Response(403, reason)
127
128 if csrf_token is None:
129 raise Response(403, REASON_NO_CSRF_COOKIE)
130
131 # Check non-cookie token for match.
132 request_csrf_token = ""
133 if request.line.method == "POST":
134 if isinstance(request.body, dict):
135 request_csrf_token = request.body.get('csrf_token', '')
136
137 if request_csrf_token == "":
138 # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,
139 # and possible for PUT/DELETE.
140 request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')
141
142 if not constant_time_compare(request_csrf_token, csrf_token):
143 raise Response(403, REASON_BAD_TOKEN)
144
145
146 def add_csrf_token_to_response(response, request=None):
147 """Store the latest CSRF token as a cookie.
148 """
149 if request is None:
150 return # early parsing must've failed
151 csrf_token = request.context.get('csrf_token')
152 if csrf_token:
153 response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)
154
155 # Content varies with the CSRF cookie, so set the Vary header.
156 patch_vary_headers(response, ('Cookie',))
157
[end of gratipay/security/csrf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/security/csrf.py b/gratipay/security/csrf.py
--- a/gratipay/security/csrf.py
+++ b/gratipay/security/csrf.py
@@ -150,6 +150,8 @@
return # early parsing must've failed
csrf_token = request.context.get('csrf_token')
if csrf_token:
+ # Don't set httponly so that we can POST using XHR.
+ # https://github.com/gratipay/gratipay.com/issues/3030
response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)
# Content varies with the CSRF cookie, so set the Vary header.
| {"golden_diff": "diff --git a/gratipay/security/csrf.py b/gratipay/security/csrf.py\n--- a/gratipay/security/csrf.py\n+++ b/gratipay/security/csrf.py\n@@ -150,6 +150,8 @@\n return # early parsing must've failed\n csrf_token = request.context.get('csrf_token')\n if csrf_token:\n+ # Don't set httponly so that we can POST using XHR.\n+ # https://github.com/gratipay/gratipay.com/issues/3030\n response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n \n # Content varies with the CSRF cookie, so set the Vary header.\n", "issue": "audit csrf_token for httponly\nWe've received a security disclosure about [not setting `httponly` for `csrf_token`](https://github.com/gratipay/gratipay.com/blob/1749/gratipay/security/csrf.py#L153). We should review that decision and leave a comment in the code where that is necessitated.\n\n", "before_files": [{"content": "\"\"\"Cross Site Request Forgery middleware, borrowed from Django.\n\nSee also:\n\n https://github.com/django/django/blob/master/django/middleware/csrf.py\n https://docs.djangoproject.com/en/dev/ref/contrib/csrf/\n https://github.com/gratipay/gratipay.com/issues/88\n\n\"\"\"\n\nfrom datetime import timedelta\nimport re\nimport urlparse\nfrom aspen import log_dammit\n\n\n#from django.utils.cache import patch_vary_headers\ncc_delim_re = re.compile(r'\\s*,\\s*')\ndef patch_vary_headers(response, newheaders):\n \"\"\"\n Adds (or updates) the \"Vary\" header in the given HttpResponse object.\n newheaders is a list of header names that should be in \"Vary\". Existing\n headers in \"Vary\" aren't removed.\n \"\"\"\n # Note that we need to keep the original order intact, because cache\n # implementations may rely on the order of the Vary contents in, say,\n # computing an MD5 hash.\n if 'Vary' in response.headers:\n vary_headers = cc_delim_re.split(response.headers['Vary'])\n else:\n vary_headers = []\n # Use .lower() here so we treat headers as case-insensitive.\n existing_headers = set([header.lower() for header in vary_headers])\n additional_headers = [newheader for newheader in newheaders\n if newheader.lower() not in existing_headers]\n response.headers['Vary'] = ', '.join(vary_headers + additional_headers)\n\n\n#from django.utils.http import same_origin\ndef same_origin(url1, url2):\n \"\"\"\n Checks if two URLs are 'same-origin'\n \"\"\"\n p1, p2 = urlparse.urlparse(url1), urlparse.urlparse(url2)\n return (p1.scheme, p1.hostname, p1.port) == (p2.scheme, p2.hostname, p2.port)\n\n\nfrom aspen import Response\nfrom crypto import constant_time_compare, get_random_string\n\nREASON_NO_REFERER = \"Referer checking failed - no Referer.\"\nREASON_BAD_REFERER = \"Referer checking failed - %s does not match %s.\"\nREASON_NO_CSRF_COOKIE = \"CSRF cookie not set.\"\nREASON_BAD_TOKEN = \"CSRF token missing or incorrect.\"\n\nTOKEN_LENGTH = 32\nCSRF_TIMEOUT = timedelta(days=7)\n\n\ndef _get_new_csrf_key():\n return get_random_string(TOKEN_LENGTH)\n\n\ndef _sanitize_token(token):\n # Allow only alphanum, and ensure we return a 'str' for the sake\n # of the post processing middleware.\n if len(token) > TOKEN_LENGTH:\n return _get_new_csrf_key()\n token = re.sub('[^a-zA-Z0-9]+', '', str(token.decode('ascii', 'ignore')))\n if token == \"\":\n # In case the cookie has been truncated to nothing at some point.\n return _get_new_csrf_key()\n return token\n\ndef _is_secure(request):\n import gratipay\n return gratipay.canonical_scheme == 'https'\n\ndef _get_host(request):\n \"\"\"Returns the HTTP host using the request headers.\n \"\"\"\n return 
request.headers.get('X-Forwarded-Host', request.headers['Host'])\n\n\n\ndef get_csrf_token_from_request(request):\n \"\"\"Given a Request object, reject it if it's a forgery.\n \"\"\"\n if request.line.uri.startswith('/assets/'): return\n if request.line.uri.startswith('/callbacks/'): return\n\n try:\n csrf_token = _sanitize_token(request.headers.cookie['csrf_token'].value)\n except KeyError:\n csrf_token = None\n\n request.context['csrf_token'] = csrf_token or _get_new_csrf_key()\n\n # Assume that anything not defined as 'safe' by RC2616 needs protection\n if request.line.method not in ('GET', 'HEAD', 'OPTIONS', 'TRACE'):\n\n if _is_secure(request):\n # Suppose user visits http://example.com/\n # An active network attacker (man-in-the-middle, MITM) sends a\n # POST form that targets https://example.com/detonate-bomb/ and\n # submits it via JavaScript.\n #\n # The attacker will need to provide a CSRF cookie and token, but\n # that's no problem for a MITM and the session-independent\n # nonce we're using. So the MITM can circumvent the CSRF\n # protection. This is true for any HTTP connection, but anyone\n # using HTTPS expects better! For this reason, for\n # https://example.com/ we need additional protection that treats\n # http://example.com/ as completely untrusted. Under HTTPS,\n # Barth et al. found that the Referer header is missing for\n # same-domain requests in only about 0.2% of cases or less, so\n # we can use strict Referer checking.\n referer = request.headers.get('Referer')\n if referer is None:\n raise Response(403, REASON_NO_REFERER)\n\n good_referer = 'https://%s/' % _get_host(request)\n if not same_origin(referer, good_referer):\n reason = REASON_BAD_REFERER % (referer, good_referer)\n log_dammit(reason)\n raise Response(403, reason)\n\n if csrf_token is None:\n raise Response(403, REASON_NO_CSRF_COOKIE)\n\n # Check non-cookie token for match.\n request_csrf_token = \"\"\n if request.line.method == \"POST\":\n if isinstance(request.body, dict):\n request_csrf_token = request.body.get('csrf_token', '')\n\n if request_csrf_token == \"\":\n # Fall back to X-CSRF-TOKEN, to make things easier for AJAX,\n # and possible for PUT/DELETE.\n request_csrf_token = request.headers.get('X-CSRF-TOKEN', '')\n\n if not constant_time_compare(request_csrf_token, csrf_token):\n raise Response(403, REASON_BAD_TOKEN)\n\n\ndef add_csrf_token_to_response(response, request=None):\n \"\"\"Store the latest CSRF token as a cookie.\n \"\"\"\n if request is None:\n return # early parsing must've failed\n csrf_token = request.context.get('csrf_token')\n if csrf_token:\n response.set_cookie('csrf_token', csrf_token, expires=CSRF_TIMEOUT, httponly=False)\n\n # Content varies with the CSRF cookie, so set the Vary header.\n patch_vary_headers(response, ('Cookie',))\n", "path": "gratipay/security/csrf.py"}]} | 2,386 | 158 |
gh_patches_debug_0 | rasdani/github-patches | git_diff | OpenEnergyPlatform__oeplatform-1353 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version in CITATION.cff is out of date
## Description of the issue
We have introduced the citation.cff file, which also contains a version. This should be updated every time a release is made. It would be great if the version could be imported automatically from the VERSION file so we don't have to maintain multiple version identifiers.
## Ideas of solution
- [x] add note to RELEASE_PROCEDURE.md (see #1228)
- [x] auto import a version update from the VERSION file
## Context and Environment
* Version used:
* Operating system:
* Environment setup and (python) version:
## Workflow checklist
- [x] I am aware of the workflow in [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md)
</issue>
<code>
[start of oeplatform/__init__.py]
[end of oeplatform/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/oeplatform/__init__.py b/oeplatform/__init__.py
--- a/oeplatform/__init__.py
+++ b/oeplatform/__init__.py
@@ -0,0 +1 @@
+__version__ = "0.14.1"
| {"golden_diff": "diff --git a/oeplatform/__init__.py b/oeplatform/__init__.py\n--- a/oeplatform/__init__.py\n+++ b/oeplatform/__init__.py\n@@ -0,0 +1 @@\n+__version__ = \"0.14.1\"\n", "issue": "Version in CITATION.cff is out of date\n## Description of the issue\r\n\r\nWe have introduced the citation.cff file, which also contains a version. This should be updated every time a release is made. It would be great if the version could be imported automatically from the VERSION file so we don't have to maintain multiple version identifiers.\r\n\r\n## Ideas of solution\r\n\r\n - [x] add note to RELEASE_PROCEDURE.md (see #1228)\r\n - [x] auto import a version update from the VERSION file\r\n\r\n## Context and Environment\r\n* Version used: \r\n* Operating system: \r\n* Environment setup and (python) version: \r\n\r\n## Workflow checklist\r\n- [x] I am aware of the workflow in [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md)\r\n\n", "before_files": [{"content": "", "path": "oeplatform/__init__.py"}]} | 714 | 61 |
gh_patches_debug_35354 | rasdani/github-patches | git_diff | sanic-org__sanic-2628 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
distutils.strtobool deprecation
**Describe the bug**
The distutils package is deprecated and slated for removal in Python 3.12. [PEP 632](https://www.python.org/dev/peps/pep-0632/)
**Code snippet**
[sanic/server/loop.py](https://github.com/sanic-org/sanic/blob/ac388d644b1e22156e228470fad8ea34932c080a/sanic/server/loop.py#L3)
**Expected behavior**
A lack of warnings about deprecation
**Environment (please complete the following information):**
- OS: All
- Version: All supported as of 2022/01/23
**Additional context**
This is housekeeping to clean up a core python deprecation.
</issue>
<code>
[start of setup.py]
1 """
2 Sanic
3 """
4 import codecs
5 import os
6 import re
7 import sys
8
9 from distutils.util import strtobool
10
11 from setuptools import find_packages, setup
12 from setuptools.command.test import test as TestCommand
13
14
15 class PyTest(TestCommand):
16 """
17 Provide a Test runner to be used from setup.py to run unit tests
18 """
19
20 user_options = [("pytest-args=", "a", "Arguments to pass to pytest")]
21
22 def initialize_options(self):
23 TestCommand.initialize_options(self)
24 self.pytest_args = ""
25
26 def run_tests(self):
27 import shlex
28
29 import pytest
30
31 errno = pytest.main(shlex.split(self.pytest_args))
32 sys.exit(errno)
33
34
35 def open_local(paths, mode="r", encoding="utf8"):
36 path = os.path.join(os.path.abspath(os.path.dirname(__file__)), *paths)
37
38 return codecs.open(path, mode, encoding)
39
40
41 with open_local(["sanic", "__version__.py"], encoding="latin1") as fp:
42 try:
43 version = re.findall(
44 r"^__version__ = \"([^']+)\"\r?$", fp.read(), re.M
45 )[0]
46 except IndexError:
47 raise RuntimeError("Unable to determine version.")
48
49 with open_local(["README.rst"]) as rm:
50 long_description = rm.read()
51
52 setup_kwargs = {
53 "name": "sanic",
54 "version": version,
55 "url": "http://github.com/sanic-org/sanic/",
56 "license": "MIT",
57 "author": "Sanic Community",
58 "author_email": "[email protected]",
59 "description": (
60 "A web server and web framework that's written to go fast. "
61 "Build fast. Run fast."
62 ),
63 "long_description": long_description,
64 "packages": find_packages(exclude=("tests", "tests.*")),
65 "package_data": {"sanic": ["py.typed"]},
66 "platforms": "any",
67 "python_requires": ">=3.7",
68 "classifiers": [
69 "Development Status :: 4 - Beta",
70 "Environment :: Web Environment",
71 "License :: OSI Approved :: MIT License",
72 "Programming Language :: Python :: 3.7",
73 "Programming Language :: Python :: 3.8",
74 "Programming Language :: Python :: 3.9",
75 "Programming Language :: Python :: 3.10",
76 ],
77 "entry_points": {"console_scripts": ["sanic = sanic.__main__:main"]},
78 }
79
80 env_dependency = (
81 '; sys_platform != "win32" ' 'and implementation_name == "cpython"'
82 )
83 ujson = "ujson>=1.35" + env_dependency
84 uvloop = "uvloop>=0.15.0" + env_dependency
85 types_ujson = "types-ujson" + env_dependency
86 requirements = [
87 "sanic-routing>=22.8.0",
88 "httptools>=0.0.10",
89 uvloop,
90 ujson,
91 "aiofiles>=0.6.0",
92 "websockets>=10.0",
93 "multidict>=5.0,<7.0",
94 ]
95
96 tests_require = [
97 "sanic-testing>=22.9.0",
98 "pytest==7.1.*",
99 "coverage",
100 "beautifulsoup4",
101 "pytest-sanic",
102 "pytest-benchmark",
103 "chardet==3.*",
104 "flake8",
105 "black",
106 "isort>=5.0.0",
107 "bandit",
108 "mypy>=0.901,<0.910",
109 "docutils",
110 "pygments",
111 "uvicorn<0.15.0",
112 "slotscheck>=0.8.0,<1",
113 types_ujson,
114 ]
115
116 docs_require = [
117 "sphinx>=2.1.2",
118 "sphinx_rtd_theme>=0.4.3",
119 "docutils",
120 "pygments",
121 "m2r2",
122 "enum-tools[sphinx]",
123 "mistune<2.0.0",
124 ]
125
126 dev_require = tests_require + [
127 "cryptography",
128 "tox",
129 "towncrier",
130 ]
131
132 all_require = list(set(dev_require + docs_require))
133
134 if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
135 print("Installing without uJSON")
136 requirements.remove(ujson)
137 tests_require.remove(types_ujson)
138
139 # 'nt' means windows OS
140 if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
141 print("Installing without uvLoop")
142 requirements.remove(uvloop)
143
144 extras_require = {
145 "test": tests_require,
146 "dev": dev_require,
147 "docs": docs_require,
148 "all": all_require,
149 "ext": ["sanic-ext"],
150 "http3": ["aioquic"],
151 }
152
153 setup_kwargs["install_requires"] = requirements
154 setup_kwargs["tests_require"] = tests_require
155 setup_kwargs["extras_require"] = extras_require
156 setup_kwargs["cmdclass"] = {"test": PyTest}
157 setup(**setup_kwargs)
158
[end of setup.py]
[start of sanic/server/loop.py]
1 import asyncio
2 import sys
3
4 from distutils.util import strtobool
5 from os import getenv
6
7 from sanic.compat import OS_IS_WINDOWS
8 from sanic.log import error_logger
9
10
11 def try_use_uvloop() -> None:
12 """
13 Use uvloop instead of the default asyncio loop.
14 """
15 if OS_IS_WINDOWS:
16 error_logger.warning(
17 "You are trying to use uvloop, but uvloop is not compatible "
18 "with your system. You can disable uvloop completely by setting "
19 "the 'USE_UVLOOP' configuration value to false, or simply not "
20 "defining it and letting Sanic handle it for you. Sanic will now "
21 "continue to run using the default event loop."
22 )
23 return
24
25 try:
26 import uvloop # type: ignore
27 except ImportError:
28 error_logger.warning(
29 "You are trying to use uvloop, but uvloop is not "
30 "installed in your system. In order to use uvloop "
31 "you must first install it. Otherwise, you can disable "
32 "uvloop completely by setting the 'USE_UVLOOP' "
33 "configuration value to false. Sanic will now continue "
34 "to run with the default event loop."
35 )
36 return
37
38 uvloop_install_removed = strtobool(getenv("SANIC_NO_UVLOOP", "no"))
39 if uvloop_install_removed:
40 error_logger.info(
41 "You are requesting to run Sanic using uvloop, but the "
42 "install-time 'SANIC_NO_UVLOOP' environment variable (used to "
43 "opt-out of installing uvloop with Sanic) is set to true. If "
44 "you want to prevent Sanic from overriding the event loop policy "
45 "during runtime, set the 'USE_UVLOOP' configuration value to "
46 "false."
47 )
48
49 if not isinstance(asyncio.get_event_loop_policy(), uvloop.EventLoopPolicy):
50 asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
51
52
53 def try_windows_loop():
54 if not OS_IS_WINDOWS:
55 error_logger.warning(
56 "You are trying to use an event loop policy that is not "
57 "compatible with your system. You can simply let Sanic handle "
58 "selecting the best loop for you. Sanic will now continue to run "
59 "using the default event loop."
60 )
61 return
62
63 if sys.version_info >= (3, 8) and not isinstance(
64 asyncio.get_event_loop_policy(), asyncio.WindowsSelectorEventLoopPolicy
65 ):
66 asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
67
[end of sanic/server/loop.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sanic/server/loop.py b/sanic/server/loop.py
--- a/sanic/server/loop.py
+++ b/sanic/server/loop.py
@@ -1,11 +1,11 @@
import asyncio
import sys
-from distutils.util import strtobool
from os import getenv
from sanic.compat import OS_IS_WINDOWS
from sanic.log import error_logger
+from sanic.utils import str_to_bool
def try_use_uvloop() -> None:
@@ -35,7 +35,7 @@
)
return
- uvloop_install_removed = strtobool(getenv("SANIC_NO_UVLOOP", "no"))
+ uvloop_install_removed = str_to_bool(getenv("SANIC_NO_UVLOOP", "no"))
if uvloop_install_removed:
error_logger.info(
"You are requesting to run Sanic using uvloop, but the "
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,8 +6,6 @@
import re
import sys
-from distutils.util import strtobool
-
from setuptools import find_packages, setup
from setuptools.command.test import test as TestCommand
@@ -37,6 +35,25 @@
return codecs.open(path, mode, encoding)
+def str_to_bool(val: str) -> bool:
+ val = val.lower()
+ if val in {
+ "y",
+ "yes",
+ "yep",
+ "yup",
+ "t",
+ "true",
+ "on",
+ "enable",
+ "enabled",
+ "1",
+ }:
+ return True
+ elif val in {"n", "no", "f", "false", "off", "disable", "disabled", "0"}:
+ return False
+ else:
+ raise ValueError(f"Invalid truth value {val}")
with open_local(["sanic", "__version__.py"], encoding="latin1") as fp:
try:
@@ -131,13 +148,13 @@
all_require = list(set(dev_require + docs_require))
-if strtobool(os.environ.get("SANIC_NO_UJSON", "no")):
+if str_to_bool(os.environ.get("SANIC_NO_UJSON", "no")):
print("Installing without uJSON")
requirements.remove(ujson)
tests_require.remove(types_ujson)
# 'nt' means windows OS
-if strtobool(os.environ.get("SANIC_NO_UVLOOP", "no")):
+if str_to_bool(os.environ.get("SANIC_NO_UVLOOP", "no")):
print("Installing without uvLoop")
requirements.remove(uvloop)
| {"golden_diff": "diff --git a/sanic/server/loop.py b/sanic/server/loop.py\n--- a/sanic/server/loop.py\n+++ b/sanic/server/loop.py\n@@ -1,11 +1,11 @@\n import asyncio\n import sys\n \n-from distutils.util import strtobool\n from os import getenv\n \n from sanic.compat import OS_IS_WINDOWS\n from sanic.log import error_logger\n+from sanic.utils import str_to_bool\n \n \n def try_use_uvloop() -> None:\n@@ -35,7 +35,7 @@\n )\n return\n \n- uvloop_install_removed = strtobool(getenv(\"SANIC_NO_UVLOOP\", \"no\"))\n+ uvloop_install_removed = str_to_bool(getenv(\"SANIC_NO_UVLOOP\", \"no\"))\n if uvloop_install_removed:\n error_logger.info(\n \"You are requesting to run Sanic using uvloop, but the \"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,8 +6,6 @@\n import re\n import sys\n \n-from distutils.util import strtobool\n-\n from setuptools import find_packages, setup\n from setuptools.command.test import test as TestCommand\n \n@@ -37,6 +35,25 @@\n \n return codecs.open(path, mode, encoding)\n \n+def str_to_bool(val: str) -> bool:\n+ val = val.lower()\n+ if val in {\n+ \"y\",\n+ \"yes\",\n+ \"yep\",\n+ \"yup\",\n+ \"t\",\n+ \"true\",\n+ \"on\",\n+ \"enable\",\n+ \"enabled\",\n+ \"1\",\n+ }:\n+ return True\n+ elif val in {\"n\", \"no\", \"f\", \"false\", \"off\", \"disable\", \"disabled\", \"0\"}:\n+ return False\n+ else:\n+ raise ValueError(f\"Invalid truth value {val}\")\n \n with open_local([\"sanic\", \"__version__.py\"], encoding=\"latin1\") as fp:\n try:\n@@ -131,13 +148,13 @@\n \n all_require = list(set(dev_require + docs_require))\n \n-if strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n+if str_to_bool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n tests_require.remove(types_ujson)\n \n # 'nt' means windows OS\n-if strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n+if str_to_bool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n", "issue": "distutils.strtobool deprecation \n**Describe the bug**\r\nhe distutils package is deprecated and slated for removal in Python 3.12. 
[PEP 632](https://www.python.org/dev/peps/pep-0632/)\r\n\r\n**Code snippet**\r\n[sanic/server/loop.py](https://github.com/sanic-org/sanic/blob/ac388d644b1e22156e228470fad8ea34932c080a/sanic/server/loop.py#L3)\r\n\r\n**Expected behavior**\r\nA lack of warnings about deprecation\r\n\r\n**Environment (please complete the following information):**\r\n - OS: All\r\n - Version: All supported as of 2022/01/23 \r\n\r\n**Additional context**\r\nThis is housekeeping to clean up a core python deprecation.\r\n\n", "before_files": [{"content": "\"\"\"\nSanic\n\"\"\"\nimport codecs\nimport os\nimport re\nimport sys\n\nfrom distutils.util import strtobool\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\n\nclass PyTest(TestCommand):\n \"\"\"\n Provide a Test runner to be used from setup.py to run unit tests\n \"\"\"\n\n user_options = [(\"pytest-args=\", \"a\", \"Arguments to pass to pytest\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.pytest_args = \"\"\n\n def run_tests(self):\n import shlex\n\n import pytest\n\n errno = pytest.main(shlex.split(self.pytest_args))\n sys.exit(errno)\n\n\ndef open_local(paths, mode=\"r\", encoding=\"utf8\"):\n path = os.path.join(os.path.abspath(os.path.dirname(__file__)), *paths)\n\n return codecs.open(path, mode, encoding)\n\n\nwith open_local([\"sanic\", \"__version__.py\"], encoding=\"latin1\") as fp:\n try:\n version = re.findall(\n r\"^__version__ = \\\"([^']+)\\\"\\r?$\", fp.read(), re.M\n )[0]\n except IndexError:\n raise RuntimeError(\"Unable to determine version.\")\n\nwith open_local([\"README.rst\"]) as rm:\n long_description = rm.read()\n\nsetup_kwargs = {\n \"name\": \"sanic\",\n \"version\": version,\n \"url\": \"http://github.com/sanic-org/sanic/\",\n \"license\": \"MIT\",\n \"author\": \"Sanic Community\",\n \"author_email\": \"[email protected]\",\n \"description\": (\n \"A web server and web framework that's written to go fast. \"\n \"Build fast. 
Run fast.\"\n ),\n \"long_description\": long_description,\n \"packages\": find_packages(exclude=(\"tests\", \"tests.*\")),\n \"package_data\": {\"sanic\": [\"py.typed\"]},\n \"platforms\": \"any\",\n \"python_requires\": \">=3.7\",\n \"classifiers\": [\n \"Development Status :: 4 - Beta\",\n \"Environment :: Web Environment\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n \"entry_points\": {\"console_scripts\": [\"sanic = sanic.__main__:main\"]},\n}\n\nenv_dependency = (\n '; sys_platform != \"win32\" ' 'and implementation_name == \"cpython\"'\n)\nujson = \"ujson>=1.35\" + env_dependency\nuvloop = \"uvloop>=0.15.0\" + env_dependency\ntypes_ujson = \"types-ujson\" + env_dependency\nrequirements = [\n \"sanic-routing>=22.8.0\",\n \"httptools>=0.0.10\",\n uvloop,\n ujson,\n \"aiofiles>=0.6.0\",\n \"websockets>=10.0\",\n \"multidict>=5.0,<7.0\",\n]\n\ntests_require = [\n \"sanic-testing>=22.9.0\",\n \"pytest==7.1.*\",\n \"coverage\",\n \"beautifulsoup4\",\n \"pytest-sanic\",\n \"pytest-benchmark\",\n \"chardet==3.*\",\n \"flake8\",\n \"black\",\n \"isort>=5.0.0\",\n \"bandit\",\n \"mypy>=0.901,<0.910\",\n \"docutils\",\n \"pygments\",\n \"uvicorn<0.15.0\",\n \"slotscheck>=0.8.0,<1\",\n types_ujson,\n]\n\ndocs_require = [\n \"sphinx>=2.1.2\",\n \"sphinx_rtd_theme>=0.4.3\",\n \"docutils\",\n \"pygments\",\n \"m2r2\",\n \"enum-tools[sphinx]\",\n \"mistune<2.0.0\",\n]\n\ndev_require = tests_require + [\n \"cryptography\",\n \"tox\",\n \"towncrier\",\n]\n\nall_require = list(set(dev_require + docs_require))\n\nif strtobool(os.environ.get(\"SANIC_NO_UJSON\", \"no\")):\n print(\"Installing without uJSON\")\n requirements.remove(ujson)\n tests_require.remove(types_ujson)\n\n# 'nt' means windows OS\nif strtobool(os.environ.get(\"SANIC_NO_UVLOOP\", \"no\")):\n print(\"Installing without uvLoop\")\n requirements.remove(uvloop)\n\nextras_require = {\n \"test\": tests_require,\n \"dev\": dev_require,\n \"docs\": docs_require,\n \"all\": all_require,\n \"ext\": [\"sanic-ext\"],\n \"http3\": [\"aioquic\"],\n}\n\nsetup_kwargs[\"install_requires\"] = requirements\nsetup_kwargs[\"tests_require\"] = tests_require\nsetup_kwargs[\"extras_require\"] = extras_require\nsetup_kwargs[\"cmdclass\"] = {\"test\": PyTest}\nsetup(**setup_kwargs)\n", "path": "setup.py"}, {"content": "import asyncio\nimport sys\n\nfrom distutils.util import strtobool\nfrom os import getenv\n\nfrom sanic.compat import OS_IS_WINDOWS\nfrom sanic.log import error_logger\n\n\ndef try_use_uvloop() -> None:\n \"\"\"\n Use uvloop instead of the default asyncio loop.\n \"\"\"\n if OS_IS_WINDOWS:\n error_logger.warning(\n \"You are trying to use uvloop, but uvloop is not compatible \"\n \"with your system. You can disable uvloop completely by setting \"\n \"the 'USE_UVLOOP' configuration value to false, or simply not \"\n \"defining it and letting Sanic handle it for you. Sanic will now \"\n \"continue to run using the default event loop.\"\n )\n return\n\n try:\n import uvloop # type: ignore\n except ImportError:\n error_logger.warning(\n \"You are trying to use uvloop, but uvloop is not \"\n \"installed in your system. In order to use uvloop \"\n \"you must first install it. Otherwise, you can disable \"\n \"uvloop completely by setting the 'USE_UVLOOP' \"\n \"configuration value to false. 
Sanic will now continue \"\n \"to run with the default event loop.\"\n )\n return\n\n uvloop_install_removed = strtobool(getenv(\"SANIC_NO_UVLOOP\", \"no\"))\n if uvloop_install_removed:\n error_logger.info(\n \"You are requesting to run Sanic using uvloop, but the \"\n \"install-time 'SANIC_NO_UVLOOP' environment variable (used to \"\n \"opt-out of installing uvloop with Sanic) is set to true. If \"\n \"you want to prevent Sanic from overriding the event loop policy \"\n \"during runtime, set the 'USE_UVLOOP' configuration value to \"\n \"false.\"\n )\n\n if not isinstance(asyncio.get_event_loop_policy(), uvloop.EventLoopPolicy):\n asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())\n\n\ndef try_windows_loop():\n if not OS_IS_WINDOWS:\n error_logger.warning(\n \"You are trying to use an event loop policy that is not \"\n \"compatible with your system. You can simply let Sanic handle \"\n \"selecting the best loop for you. Sanic will now continue to run \"\n \"using the default event loop.\"\n )\n return\n\n if sys.version_info >= (3, 8) and not isinstance(\n asyncio.get_event_loop_policy(), asyncio.WindowsSelectorEventLoopPolicy\n ):\n asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())\n", "path": "sanic/server/loop.py"}]} | 2,904 | 597 |
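
The Sanic patch above replaces `distutils.util.strtobool` (deprecated by PEP 632) with a small in-house helper. Shown standalone, with the accepted truthy/falsy strings mirroring the diff, it is simply:

```python
def str_to_bool(val: str) -> bool:
    """Drop-in replacement for distutils.util.strtobool, returning a real bool."""
    val = val.lower()
    if val in {"y", "yes", "yep", "yup", "t", "true", "on", "enable", "enabled", "1"}:
        return True
    if val in {"n", "no", "f", "false", "off", "disable", "disabled", "0"}:
        return False
    raise ValueError(f"Invalid truth value {val}")


assert str_to_bool("Yes") is True
assert str_to_bool("0") is False
```

Returning an actual `bool` (rather than `strtobool`'s 0/1) also reads more naturally at the call sites in `setup.py` and `sanic/server/loop.py`.
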
gh_patches_debug_44887 | rasdani/github-patches | git_diff | scikit-image__scikit-image-2924 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Odd keyword name `min_size` for morphology's `remove_small_holes`
Should the `min_size` parameter be `max_size`, or something else entirely? It's not a very intuitive name. At worst, we should document this well.
/cc @mcquin
</issue>
<code>
[start of skimage/morphology/misc.py]
1 import numpy as np
2 import functools
3 from scipy import ndimage as ndi
4 from .._shared.utils import warn
5 from .selem import _default_selem
6
7 # Our function names don't exactly correspond to ndimages.
8 # This dictionary translates from our names to scipy's.
9 funcs = ('erosion', 'dilation', 'opening', 'closing')
10 skimage2ndimage = dict((x, 'grey_' + x) for x in funcs)
11
12 # These function names are the same in ndimage.
13 funcs = ('binary_erosion', 'binary_dilation', 'binary_opening',
14 'binary_closing', 'black_tophat', 'white_tophat')
15 skimage2ndimage.update(dict((x, x) for x in funcs))
16
17
18 def default_selem(func):
19 """Decorator to add a default structuring element to morphology functions.
20
21 Parameters
22 ----------
23 func : function
24 A morphology function such as erosion, dilation, opening, closing,
25 white_tophat, or black_tophat.
26
27 Returns
28 -------
29 func_out : function
30 The function, using a default structuring element of same dimension
31 as the input image with connectivity 1.
32 """
33 @functools.wraps(func)
34 def func_out(image, selem=None, *args, **kwargs):
35 if selem is None:
36 selem = _default_selem(image.ndim)
37 return func(image, selem=selem, *args, **kwargs)
38
39 return func_out
40
41 def _check_dtype_supported(ar):
42 # Should use `issubdtype` for bool below, but there's a bug in numpy 1.7
43 if not (ar.dtype == bool or np.issubdtype(ar.dtype, np.integer)):
44 raise TypeError("Only bool or integer image types are supported. "
45 "Got %s." % ar.dtype)
46
47 def remove_small_objects(ar, min_size=64, connectivity=1, in_place=False):
48 """Remove connected components smaller than the specified size.
49
50 Parameters
51 ----------
52 ar : ndarray (arbitrary shape, int or bool type)
53 The array containing the connected components of interest. If the array
54 type is int, it is assumed that it contains already-labeled objects.
55 The ints must be non-negative.
56 min_size : int, optional (default: 64)
57 The smallest allowable connected component size.
58 connectivity : int, {1, 2, ..., ar.ndim}, optional (default: 1)
59 The connectivity defining the neighborhood of a pixel.
60 in_place : bool, optional (default: False)
61 If `True`, remove the connected components in the input array itself.
62 Otherwise, make a copy.
63
64 Raises
65 ------
66 TypeError
67 If the input array is of an invalid type, such as float or string.
68 ValueError
69 If the input array contains negative values.
70
71 Returns
72 -------
73 out : ndarray, same shape and type as input `ar`
74 The input array with small connected components removed.
75
76 Examples
77 --------
78 >>> from skimage import morphology
79 >>> a = np.array([[0, 0, 0, 1, 0],
80 ... [1, 1, 1, 0, 0],
81 ... [1, 1, 1, 0, 1]], bool)
82 >>> b = morphology.remove_small_objects(a, 6)
83 >>> b
84 array([[False, False, False, False, False],
85 [ True, True, True, False, False],
86 [ True, True, True, False, False]], dtype=bool)
87 >>> c = morphology.remove_small_objects(a, 7, connectivity=2)
88 >>> c
89 array([[False, False, False, True, False],
90 [ True, True, True, False, False],
91 [ True, True, True, False, False]], dtype=bool)
92 >>> d = morphology.remove_small_objects(a, 6, in_place=True)
93 >>> d is a
94 True
95 """
96 # Raising type error if not int or bool
97 _check_dtype_supported(ar)
98
99 if in_place:
100 out = ar
101 else:
102 out = ar.copy()
103
104 if min_size == 0: # shortcut for efficiency
105 return out
106
107 if out.dtype == bool:
108 selem = ndi.generate_binary_structure(ar.ndim, connectivity)
109 ccs = np.zeros_like(ar, dtype=np.int32)
110 ndi.label(ar, selem, output=ccs)
111 else:
112 ccs = out
113
114 try:
115 component_sizes = np.bincount(ccs.ravel())
116 except ValueError:
117 raise ValueError("Negative value labels are not supported. Try "
118 "relabeling the input with `scipy.ndimage.label` or "
119 "`skimage.morphology.label`.")
120
121 if len(component_sizes) == 2:
122 warn("Only one label was provided to `remove_small_objects`. "
123 "Did you mean to use a boolean array?")
124
125 too_small = component_sizes < min_size
126 too_small_mask = too_small[ccs]
127 out[too_small_mask] = 0
128
129 return out
130
131 def remove_small_holes(ar, min_size=64, connectivity=1, in_place=False):
132 """Remove continguous holes smaller than the specified size.
133
134 Parameters
135 ----------
136 ar : ndarray (arbitrary shape, int or bool type)
137 The array containing the connected components of interest.
138 min_size : int, optional (default: 64)
139 The hole component size.
140 connectivity : int, {1, 2, ..., ar.ndim}, optional (default: 1)
141 The connectivity defining the neighborhood of a pixel.
142 in_place : bool, optional (default: False)
143 If `True`, remove the connected components in the input array itself.
144 Otherwise, make a copy.
145
146 Raises
147 ------
148 TypeError
149 If the input array is of an invalid type, such as float or string.
150 ValueError
151 If the input array contains negative values.
152
153 Returns
154 -------
155 out : ndarray, same shape and type as input `ar`
156 The input array with small holes within connected components removed.
157
158 Examples
159 --------
160 >>> from skimage import morphology
161 >>> a = np.array([[1, 1, 1, 1, 1, 0],
162 ... [1, 1, 1, 0, 1, 0],
163 ... [1, 0, 0, 1, 1, 0],
164 ... [1, 1, 1, 1, 1, 0]], bool)
165 >>> b = morphology.remove_small_holes(a, 2)
166 >>> b
167 array([[ True, True, True, True, True, False],
168 [ True, True, True, True, True, False],
169 [ True, False, False, True, True, False],
170 [ True, True, True, True, True, False]], dtype=bool)
171 >>> c = morphology.remove_small_holes(a, 2, connectivity=2)
172 >>> c
173 array([[ True, True, True, True, True, False],
174 [ True, True, True, False, True, False],
175 [ True, False, False, True, True, False],
176 [ True, True, True, True, True, False]], dtype=bool)
177 >>> d = morphology.remove_small_holes(a, 2, in_place=True)
178 >>> d is a
179 True
180
181 Notes
182 -----
183
184 If the array type is int, it is assumed that it contains already-labeled
185 objects. The labels are not kept in the output image (this function always
186 outputs a bool image). It is suggested that labeling is completed after
187 using this function.
188 """
189 _check_dtype_supported(ar)
190
191 #Creates warning if image is an integer image
192 if ar.dtype != bool:
193 warn("Any labeled images will be returned as a boolean array. "
194 "Did you mean to use a boolean array?", UserWarning)
195
196 if in_place:
197 out = ar
198 else:
199 out = ar.copy()
200
201 #Creating the inverse of ar
202 if in_place:
203 out = np.logical_not(out,out)
204 else:
205 out = np.logical_not(out)
206
207 #removing small objects from the inverse of ar
208 out = remove_small_objects(out, min_size, connectivity, in_place)
209
210 if in_place:
211 out = np.logical_not(out,out)
212 else:
213 out = np.logical_not(out)
214
215 return out
216
[end of skimage/morphology/misc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/morphology/misc.py b/skimage/morphology/misc.py
--- a/skimage/morphology/misc.py
+++ b/skimage/morphology/misc.py
@@ -1,3 +1,4 @@
+"""Miscellaneous morphology functions."""
import numpy as np
import functools
from scipy import ndimage as ndi
@@ -29,6 +30,7 @@
func_out : function
The function, using a default structuring element of same dimension
as the input image with connectivity 1.
+
"""
@functools.wraps(func)
def func_out(image, selem=None, *args, **kwargs):
@@ -38,12 +40,14 @@
return func_out
+
def _check_dtype_supported(ar):
# Should use `issubdtype` for bool below, but there's a bug in numpy 1.7
if not (ar.dtype == bool or np.issubdtype(ar.dtype, np.integer)):
raise TypeError("Only bool or integer image types are supported. "
"Got %s." % ar.dtype)
+
def remove_small_objects(ar, min_size=64, connectivity=1, in_place=False):
"""Remove connected components smaller than the specified size.
@@ -92,6 +96,7 @@
>>> d = morphology.remove_small_objects(a, 6, in_place=True)
>>> d is a
True
+
"""
# Raising type error if not int or bool
_check_dtype_supported(ar)
@@ -128,21 +133,25 @@
return out
-def remove_small_holes(ar, min_size=64, connectivity=1, in_place=False):
+
+def remove_small_holes(ar, area_threshold=64, connectivity=1, in_place=False,
+ min_size=None):
"""Remove continguous holes smaller than the specified size.
Parameters
----------
ar : ndarray (arbitrary shape, int or bool type)
The array containing the connected components of interest.
- min_size : int, optional (default: 64)
- The hole component size.
+ area_threshold : int, optional (default: 64)
+ The maximum area, in pixels, of a contiguous hole that will be filled.
+ Replaces `min_size`.
connectivity : int, {1, 2, ..., ar.ndim}, optional (default: 1)
The connectivity defining the neighborhood of a pixel.
in_place : bool, optional (default: False)
If `True`, remove the connected components in the input array itself.
Otherwise, make a copy.
+
Raises
------
TypeError
@@ -180,35 +189,40 @@
Notes
-----
-
If the array type is int, it is assumed that it contains already-labeled
objects. The labels are not kept in the output image (this function always
outputs a bool image). It is suggested that labeling is completed after
using this function.
+
"""
_check_dtype_supported(ar)
- #Creates warning if image is an integer image
+ # Creates warning if image is an integer image
if ar.dtype != bool:
warn("Any labeled images will be returned as a boolean array. "
"Did you mean to use a boolean array?", UserWarning)
+ if min_size is not None:
+ warn("the min_size argument is deprecated and will be removed in " +
+ "0.16. Use area_threshold instead.")
+ area_threshold = min_size
+
if in_place:
out = ar
else:
out = ar.copy()
- #Creating the inverse of ar
+ # Creating the inverse of ar
if in_place:
- out = np.logical_not(out,out)
+ out = np.logical_not(out, out)
else:
out = np.logical_not(out)
- #removing small objects from the inverse of ar
- out = remove_small_objects(out, min_size, connectivity, in_place)
+ # removing small objects from the inverse of ar
+ out = remove_small_objects(out, area_threshold, connectivity, in_place)
if in_place:
- out = np.logical_not(out,out)
+ out = np.logical_not(out, out)
else:
out = np.logical_not(out)
| {"golden_diff": "diff --git a/skimage/morphology/misc.py b/skimage/morphology/misc.py\n--- a/skimage/morphology/misc.py\n+++ b/skimage/morphology/misc.py\n@@ -1,3 +1,4 @@\n+\"\"\"Miscellaneous morphology functions.\"\"\"\n import numpy as np\n import functools\n from scipy import ndimage as ndi\n@@ -29,6 +30,7 @@\n func_out : function\n The function, using a default structuring element of same dimension\n as the input image with connectivity 1.\n+\n \"\"\"\n @functools.wraps(func)\n def func_out(image, selem=None, *args, **kwargs):\n@@ -38,12 +40,14 @@\n \n return func_out\n \n+\n def _check_dtype_supported(ar):\n # Should use `issubdtype` for bool below, but there's a bug in numpy 1.7\n if not (ar.dtype == bool or np.issubdtype(ar.dtype, np.integer)):\n raise TypeError(\"Only bool or integer image types are supported. \"\n \"Got %s.\" % ar.dtype)\n \n+\n def remove_small_objects(ar, min_size=64, connectivity=1, in_place=False):\n \"\"\"Remove connected components smaller than the specified size.\n \n@@ -92,6 +96,7 @@\n >>> d = morphology.remove_small_objects(a, 6, in_place=True)\n >>> d is a\n True\n+\n \"\"\"\n # Raising type error if not int or bool\n _check_dtype_supported(ar)\n@@ -128,21 +133,25 @@\n \n return out\n \n-def remove_small_holes(ar, min_size=64, connectivity=1, in_place=False):\n+\n+def remove_small_holes(ar, area_threshold=64, connectivity=1, in_place=False,\n+ min_size=None):\n \"\"\"Remove continguous holes smaller than the specified size.\n \n Parameters\n ----------\n ar : ndarray (arbitrary shape, int or bool type)\n The array containing the connected components of interest.\n- min_size : int, optional (default: 64)\n- The hole component size.\n+ area_threshold : int, optional (default: 64)\n+ The maximum area, in pixels, of a contiguous hole that will be filled.\n+ Replaces `min_size`.\n connectivity : int, {1, 2, ..., ar.ndim}, optional (default: 1)\n The connectivity defining the neighborhood of a pixel.\n in_place : bool, optional (default: False)\n If `True`, remove the connected components in the input array itself.\n Otherwise, make a copy.\n \n+\n Raises\n ------\n TypeError\n@@ -180,35 +189,40 @@\n \n Notes\n -----\n-\n If the array type is int, it is assumed that it contains already-labeled\n objects. The labels are not kept in the output image (this function always\n outputs a bool image). It is suggested that labeling is completed after\n using this function.\n+\n \"\"\"\n _check_dtype_supported(ar)\n \n- #Creates warning if image is an integer image\n+ # Creates warning if image is an integer image\n if ar.dtype != bool:\n warn(\"Any labeled images will be returned as a boolean array. \"\n \"Did you mean to use a boolean array?\", UserWarning)\n \n+ if min_size is not None:\n+ warn(\"the min_size argument is deprecated and will be removed in \" +\n+ \"0.16. 
Use area_threshold instead.\")\n+ area_threshold = min_size\n+\n if in_place:\n out = ar\n else:\n out = ar.copy()\n \n- #Creating the inverse of ar\n+ # Creating the inverse of ar\n if in_place:\n- out = np.logical_not(out,out)\n+ out = np.logical_not(out, out)\n else:\n out = np.logical_not(out)\n \n- #removing small objects from the inverse of ar\n- out = remove_small_objects(out, min_size, connectivity, in_place)\n+ # removing small objects from the inverse of ar\n+ out = remove_small_objects(out, area_threshold, connectivity, in_place)\n \n if in_place:\n- out = np.logical_not(out,out)\n+ out = np.logical_not(out, out)\n else:\n out = np.logical_not(out)\n", "issue": "Odd keyword name `min_size` for morphology's `remove_small_holes`\nShould the `min_size` parameter be `max_size`, or something else entirely? It's not a very intuitive naming. At worst, we should document this well.\r\n\r\n/cc @mcquin\n", "before_files": [{"content": "import numpy as np\nimport functools\nfrom scipy import ndimage as ndi\nfrom .._shared.utils import warn\nfrom .selem import _default_selem\n\n# Our function names don't exactly correspond to ndimages.\n# This dictionary translates from our names to scipy's.\nfuncs = ('erosion', 'dilation', 'opening', 'closing')\nskimage2ndimage = dict((x, 'grey_' + x) for x in funcs)\n\n# These function names are the same in ndimage.\nfuncs = ('binary_erosion', 'binary_dilation', 'binary_opening',\n 'binary_closing', 'black_tophat', 'white_tophat')\nskimage2ndimage.update(dict((x, x) for x in funcs))\n\n\ndef default_selem(func):\n \"\"\"Decorator to add a default structuring element to morphology functions.\n\n Parameters\n ----------\n func : function\n A morphology function such as erosion, dilation, opening, closing,\n white_tophat, or black_tophat.\n\n Returns\n -------\n func_out : function\n The function, using a default structuring element of same dimension\n as the input image with connectivity 1.\n \"\"\"\n @functools.wraps(func)\n def func_out(image, selem=None, *args, **kwargs):\n if selem is None:\n selem = _default_selem(image.ndim)\n return func(image, selem=selem, *args, **kwargs)\n\n return func_out\n\ndef _check_dtype_supported(ar):\n # Should use `issubdtype` for bool below, but there's a bug in numpy 1.7\n if not (ar.dtype == bool or np.issubdtype(ar.dtype, np.integer)):\n raise TypeError(\"Only bool or integer image types are supported. \"\n \"Got %s.\" % ar.dtype)\n\ndef remove_small_objects(ar, min_size=64, connectivity=1, in_place=False):\n \"\"\"Remove connected components smaller than the specified size.\n\n Parameters\n ----------\n ar : ndarray (arbitrary shape, int or bool type)\n The array containing the connected components of interest. 
If the array\n type is int, it is assumed that it contains already-labeled objects.\n The ints must be non-negative.\n min_size : int, optional (default: 64)\n The smallest allowable connected component size.\n connectivity : int, {1, 2, ..., ar.ndim}, optional (default: 1)\n The connectivity defining the neighborhood of a pixel.\n in_place : bool, optional (default: False)\n If `True`, remove the connected components in the input array itself.\n Otherwise, make a copy.\n\n Raises\n ------\n TypeError\n If the input array is of an invalid type, such as float or string.\n ValueError\n If the input array contains negative values.\n\n Returns\n -------\n out : ndarray, same shape and type as input `ar`\n The input array with small connected components removed.\n\n Examples\n --------\n >>> from skimage import morphology\n >>> a = np.array([[0, 0, 0, 1, 0],\n ... [1, 1, 1, 0, 0],\n ... [1, 1, 1, 0, 1]], bool)\n >>> b = morphology.remove_small_objects(a, 6)\n >>> b\n array([[False, False, False, False, False],\n [ True, True, True, False, False],\n [ True, True, True, False, False]], dtype=bool)\n >>> c = morphology.remove_small_objects(a, 7, connectivity=2)\n >>> c\n array([[False, False, False, True, False],\n [ True, True, True, False, False],\n [ True, True, True, False, False]], dtype=bool)\n >>> d = morphology.remove_small_objects(a, 6, in_place=True)\n >>> d is a\n True\n \"\"\"\n # Raising type error if not int or bool\n _check_dtype_supported(ar)\n\n if in_place:\n out = ar\n else:\n out = ar.copy()\n\n if min_size == 0: # shortcut for efficiency\n return out\n\n if out.dtype == bool:\n selem = ndi.generate_binary_structure(ar.ndim, connectivity)\n ccs = np.zeros_like(ar, dtype=np.int32)\n ndi.label(ar, selem, output=ccs)\n else:\n ccs = out\n\n try:\n component_sizes = np.bincount(ccs.ravel())\n except ValueError:\n raise ValueError(\"Negative value labels are not supported. Try \"\n \"relabeling the input with `scipy.ndimage.label` or \"\n \"`skimage.morphology.label`.\")\n\n if len(component_sizes) == 2:\n warn(\"Only one label was provided to `remove_small_objects`. \"\n \"Did you mean to use a boolean array?\")\n\n too_small = component_sizes < min_size\n too_small_mask = too_small[ccs]\n out[too_small_mask] = 0\n\n return out\n\ndef remove_small_holes(ar, min_size=64, connectivity=1, in_place=False):\n \"\"\"Remove continguous holes smaller than the specified size.\n\n Parameters\n ----------\n ar : ndarray (arbitrary shape, int or bool type)\n The array containing the connected components of interest.\n min_size : int, optional (default: 64)\n The hole component size.\n connectivity : int, {1, 2, ..., ar.ndim}, optional (default: 1)\n The connectivity defining the neighborhood of a pixel.\n in_place : bool, optional (default: False)\n If `True`, remove the connected components in the input array itself.\n Otherwise, make a copy.\n\n Raises\n ------\n TypeError\n If the input array is of an invalid type, such as float or string.\n ValueError\n If the input array contains negative values.\n\n Returns\n -------\n out : ndarray, same shape and type as input `ar`\n The input array with small holes within connected components removed.\n\n Examples\n --------\n >>> from skimage import morphology\n >>> a = np.array([[1, 1, 1, 1, 1, 0],\n ... [1, 1, 1, 0, 1, 0],\n ... [1, 0, 0, 1, 1, 0],\n ... 
[1, 1, 1, 1, 1, 0]], bool)\n >>> b = morphology.remove_small_holes(a, 2)\n >>> b\n array([[ True, True, True, True, True, False],\n [ True, True, True, True, True, False],\n [ True, False, False, True, True, False],\n [ True, True, True, True, True, False]], dtype=bool)\n >>> c = morphology.remove_small_holes(a, 2, connectivity=2)\n >>> c\n array([[ True, True, True, True, True, False],\n [ True, True, True, False, True, False],\n [ True, False, False, True, True, False],\n [ True, True, True, True, True, False]], dtype=bool)\n >>> d = morphology.remove_small_holes(a, 2, in_place=True)\n >>> d is a\n True\n\n Notes\n -----\n\n If the array type is int, it is assumed that it contains already-labeled\n objects. The labels are not kept in the output image (this function always\n outputs a bool image). It is suggested that labeling is completed after\n using this function.\n \"\"\"\n _check_dtype_supported(ar)\n\n #Creates warning if image is an integer image\n if ar.dtype != bool:\n warn(\"Any labeled images will be returned as a boolean array. \"\n \"Did you mean to use a boolean array?\", UserWarning)\n\n if in_place:\n out = ar\n else:\n out = ar.copy()\n\n #Creating the inverse of ar\n if in_place:\n out = np.logical_not(out,out)\n else:\n out = np.logical_not(out)\n\n #removing small objects from the inverse of ar\n out = remove_small_objects(out, min_size, connectivity, in_place)\n\n if in_place:\n out = np.logical_not(out,out)\n else:\n out = np.logical_not(out)\n\n return out\n", "path": "skimage/morphology/misc.py"}]} | 3,029 | 963 |
gh_patches_debug_18747 | rasdani/github-patches | git_diff | CTFd__CTFd-1798 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CTFd pages route is relative when it shouldn't be
For some reason CTFd page routes are being generated in the navbar as relative when they shouldn't be. E.g. (`page` instead of `/page`).
</issue>
<code>
[start of CTFd/plugins/__init__.py]
1 import glob
2 import importlib
3 import os
4 from collections import namedtuple
5
6 from flask import current_app as app
7 from flask import send_file, send_from_directory
8
9 from CTFd.utils.config.pages import get_pages
10 from CTFd.utils.decorators import admins_only as admins_only_wrapper
11 from CTFd.utils.plugins import override_template as utils_override_template
12 from CTFd.utils.plugins import (
13 register_admin_script as utils_register_admin_plugin_script,
14 )
15 from CTFd.utils.plugins import (
16 register_admin_stylesheet as utils_register_admin_plugin_stylesheet,
17 )
18 from CTFd.utils.plugins import register_script as utils_register_plugin_script
19 from CTFd.utils.plugins import register_stylesheet as utils_register_plugin_stylesheet
20
21 Menu = namedtuple("Menu", ["title", "route"])
22
23
24 def register_plugin_assets_directory(app, base_path, admins_only=False, endpoint=None):
25 """
26 Registers a directory to serve assets
27
28 :param app: A CTFd application
29 :param string base_path: The path to the directory
30 :param boolean admins_only: Whether or not the assets served out of the directory should be accessible to the public
31 :return:
32 """
33 base_path = base_path.strip("/")
34 if endpoint is None:
35 endpoint = base_path.replace("/", ".")
36
37 def assets_handler(path):
38 return send_from_directory(base_path, path)
39
40 rule = "/" + base_path + "/<path:path>"
41 app.add_url_rule(rule=rule, endpoint=endpoint, view_func=assets_handler)
42
43
44 def register_plugin_asset(app, asset_path, admins_only=False, endpoint=None):
45 """
46 Registers an file path to be served by CTFd
47
48 :param app: A CTFd application
49 :param string asset_path: The path to the asset file
50 :param boolean admins_only: Whether or not this file should be accessible to the public
51 :return:
52 """
53 asset_path = asset_path.strip("/")
54 if endpoint is None:
55 endpoint = asset_path.replace("/", ".")
56
57 def asset_handler():
58 return send_file(asset_path)
59
60 if admins_only:
61 asset_handler = admins_only_wrapper(asset_handler)
62 rule = "/" + asset_path
63 app.add_url_rule(rule=rule, endpoint=endpoint, view_func=asset_handler)
64
65
66 def override_template(*args, **kwargs):
67 """
68 Overrides a template with the provided html content.
69
70 e.g. override_template('scoreboard.html', '<h1>scores</h1>')
71 """
72 utils_override_template(*args, **kwargs)
73
74
75 def register_plugin_script(*args, **kwargs):
76 """
77 Adds a given script to the base.html template which all pages inherit from
78 """
79 utils_register_plugin_script(*args, **kwargs)
80
81
82 def register_plugin_stylesheet(*args, **kwargs):
83 """
84 Adds a given stylesheet to the base.html template which all pages inherit from.
85 """
86 utils_register_plugin_stylesheet(*args, **kwargs)
87
88
89 def register_admin_plugin_script(*args, **kwargs):
90 """
91 Adds a given script to the base.html of the admin theme which all admin pages inherit from
92 :param args:
93 :param kwargs:
94 :return:
95 """
96 utils_register_admin_plugin_script(*args, **kwargs)
97
98
99 def register_admin_plugin_stylesheet(*args, **kwargs):
100 """
101 Adds a given stylesheet to the base.html of the admin theme which all admin pages inherit from
102 :param args:
103 :param kwargs:
104 :return:
105 """
106 utils_register_admin_plugin_stylesheet(*args, **kwargs)
107
108
109 def register_admin_plugin_menu_bar(title, route):
110 """
111 Registers links on the Admin Panel menubar/navbar
112
113 :param name: A string that is shown on the navbar HTML
114 :param route: A string that is the href used by the link
115 :return:
116 """
117 am = Menu(title=title, route=route)
118 app.admin_plugin_menu_bar.append(am)
119
120
121 def get_admin_plugin_menu_bar():
122 """
123 Access the list used to store the plugin menu bar
124
125 :return: Returns a list of Menu namedtuples. They have name, and route attributes.
126 """
127 return app.admin_plugin_menu_bar
128
129
130 def register_user_page_menu_bar(title, route):
131 """
132 Registers links on the User side menubar/navbar
133
134 :param name: A string that is shown on the navbar HTML
135 :param route: A string that is the href used by the link
136 :return:
137 """
138 p = Menu(title=title, route=route)
139 app.plugin_menu_bar.append(p)
140
141
142 def get_user_page_menu_bar():
143 """
144 Access the list used to store the user page menu bar
145
146 :return: Returns a list of Menu namedtuples. They have name, and route attributes.
147 """
148 return get_pages() + app.plugin_menu_bar
149
150
151 def bypass_csrf_protection(f):
152 """
153 Decorator that allows a route to bypass the need for a CSRF nonce on POST requests.
154
155 This should be considered beta and may change in future versions.
156
157 :param f: A function that needs to bypass CSRF protection
158 :return: Returns a function with the _bypass_csrf attribute set which tells CTFd to not require CSRF protection.
159 """
160 f._bypass_csrf = True
161 return f
162
163
164 def get_plugin_names():
165 modules = sorted(glob.glob(app.plugins_dir + "/*"))
166 blacklist = {"__pycache__"}
167 plugins = []
168 for module in modules:
169 module_name = os.path.basename(module)
170 if os.path.isdir(module) and module_name not in blacklist:
171 plugins.append(module_name)
172 return plugins
173
174
175 def init_plugins(app):
176 """
177 Searches for the load function in modules in the CTFd/plugins folder. This function is called with the current CTFd
178 app as a parameter. This allows CTFd plugins to modify CTFd's behavior.
179
180 :param app: A CTFd application
181 :return:
182 """
183 app.admin_plugin_scripts = []
184 app.admin_plugin_stylesheets = []
185 app.plugin_scripts = []
186 app.plugin_stylesheets = []
187
188 app.admin_plugin_menu_bar = []
189 app.plugin_menu_bar = []
190 app.plugins_dir = os.path.dirname(__file__)
191
192 if app.config.get("SAFE_MODE", False) is False:
193 for plugin in get_plugin_names():
194 module = "." + plugin
195 module = importlib.import_module(module, package="CTFd.plugins")
196 module.load(app)
197 print(" * Loaded module, %s" % module)
198
199 app.jinja_env.globals.update(get_admin_plugin_menu_bar=get_admin_plugin_menu_bar)
200 app.jinja_env.globals.update(get_user_page_menu_bar=get_user_page_menu_bar)
201
[end of CTFd/plugins/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/plugins/__init__.py b/CTFd/plugins/__init__.py
--- a/CTFd/plugins/__init__.py
+++ b/CTFd/plugins/__init__.py
@@ -4,7 +4,7 @@
from collections import namedtuple
from flask import current_app as app
-from flask import send_file, send_from_directory
+from flask import send_file, send_from_directory, url_for
from CTFd.utils.config.pages import get_pages
from CTFd.utils.decorators import admins_only as admins_only_wrapper
@@ -145,7 +145,15 @@
:return: Returns a list of Menu namedtuples. They have name, and route attributes.
"""
- return get_pages() + app.plugin_menu_bar
+ pages = []
+ for p in get_pages() + app.plugin_menu_bar:
+ if p.route.startswith("http"):
+ route = p.route
+ else:
+ route = url_for("views.static_html", route=p.route)
+ print(route)
+ pages.append(Menu(title=p.title, route=route))
+ return pages
def bypass_csrf_protection(f):
| {"golden_diff": "diff --git a/CTFd/plugins/__init__.py b/CTFd/plugins/__init__.py\n--- a/CTFd/plugins/__init__.py\n+++ b/CTFd/plugins/__init__.py\n@@ -4,7 +4,7 @@\n from collections import namedtuple\n \n from flask import current_app as app\n-from flask import send_file, send_from_directory\n+from flask import send_file, send_from_directory, url_for\n \n from CTFd.utils.config.pages import get_pages\n from CTFd.utils.decorators import admins_only as admins_only_wrapper\n@@ -145,7 +145,15 @@\n \n :return: Returns a list of Menu namedtuples. They have name, and route attributes.\n \"\"\"\n- return get_pages() + app.plugin_menu_bar\n+ pages = []\n+ for p in get_pages() + app.plugin_menu_bar:\n+ if p.route.startswith(\"http\"):\n+ route = p.route\n+ else:\n+ route = url_for(\"views.static_html\", route=p.route)\n+ print(route)\n+ pages.append(Menu(title=p.title, route=route))\n+ return pages\n \n \n def bypass_csrf_protection(f):\n", "issue": "CTFd pages route is relative when it shouldn't be\nFor some reason CTFd page routes are being generated in the navbar as relative when they shouldn't be. E.g. (`page` instead of `/page`).\n", "before_files": [{"content": "import glob\nimport importlib\nimport os\nfrom collections import namedtuple\n\nfrom flask import current_app as app\nfrom flask import send_file, send_from_directory\n\nfrom CTFd.utils.config.pages import get_pages\nfrom CTFd.utils.decorators import admins_only as admins_only_wrapper\nfrom CTFd.utils.plugins import override_template as utils_override_template\nfrom CTFd.utils.plugins import (\n register_admin_script as utils_register_admin_plugin_script,\n)\nfrom CTFd.utils.plugins import (\n register_admin_stylesheet as utils_register_admin_plugin_stylesheet,\n)\nfrom CTFd.utils.plugins import register_script as utils_register_plugin_script\nfrom CTFd.utils.plugins import register_stylesheet as utils_register_plugin_stylesheet\n\nMenu = namedtuple(\"Menu\", [\"title\", \"route\"])\n\n\ndef register_plugin_assets_directory(app, base_path, admins_only=False, endpoint=None):\n \"\"\"\n Registers a directory to serve assets\n\n :param app: A CTFd application\n :param string base_path: The path to the directory\n :param boolean admins_only: Whether or not the assets served out of the directory should be accessible to the public\n :return:\n \"\"\"\n base_path = base_path.strip(\"/\")\n if endpoint is None:\n endpoint = base_path.replace(\"/\", \".\")\n\n def assets_handler(path):\n return send_from_directory(base_path, path)\n\n rule = \"/\" + base_path + \"/<path:path>\"\n app.add_url_rule(rule=rule, endpoint=endpoint, view_func=assets_handler)\n\n\ndef register_plugin_asset(app, asset_path, admins_only=False, endpoint=None):\n \"\"\"\n Registers an file path to be served by CTFd\n\n :param app: A CTFd application\n :param string asset_path: The path to the asset file\n :param boolean admins_only: Whether or not this file should be accessible to the public\n :return:\n \"\"\"\n asset_path = asset_path.strip(\"/\")\n if endpoint is None:\n endpoint = asset_path.replace(\"/\", \".\")\n\n def asset_handler():\n return send_file(asset_path)\n\n if admins_only:\n asset_handler = admins_only_wrapper(asset_handler)\n rule = \"/\" + asset_path\n app.add_url_rule(rule=rule, endpoint=endpoint, view_func=asset_handler)\n\n\ndef override_template(*args, **kwargs):\n \"\"\"\n Overrides a template with the provided html content.\n\n e.g. 
override_template('scoreboard.html', '<h1>scores</h1>')\n \"\"\"\n utils_override_template(*args, **kwargs)\n\n\ndef register_plugin_script(*args, **kwargs):\n \"\"\"\n Adds a given script to the base.html template which all pages inherit from\n \"\"\"\n utils_register_plugin_script(*args, **kwargs)\n\n\ndef register_plugin_stylesheet(*args, **kwargs):\n \"\"\"\n Adds a given stylesheet to the base.html template which all pages inherit from.\n \"\"\"\n utils_register_plugin_stylesheet(*args, **kwargs)\n\n\ndef register_admin_plugin_script(*args, **kwargs):\n \"\"\"\n Adds a given script to the base.html of the admin theme which all admin pages inherit from\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n utils_register_admin_plugin_script(*args, **kwargs)\n\n\ndef register_admin_plugin_stylesheet(*args, **kwargs):\n \"\"\"\n Adds a given stylesheet to the base.html of the admin theme which all admin pages inherit from\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n utils_register_admin_plugin_stylesheet(*args, **kwargs)\n\n\ndef register_admin_plugin_menu_bar(title, route):\n \"\"\"\n Registers links on the Admin Panel menubar/navbar\n\n :param name: A string that is shown on the navbar HTML\n :param route: A string that is the href used by the link\n :return:\n \"\"\"\n am = Menu(title=title, route=route)\n app.admin_plugin_menu_bar.append(am)\n\n\ndef get_admin_plugin_menu_bar():\n \"\"\"\n Access the list used to store the plugin menu bar\n\n :return: Returns a list of Menu namedtuples. They have name, and route attributes.\n \"\"\"\n return app.admin_plugin_menu_bar\n\n\ndef register_user_page_menu_bar(title, route):\n \"\"\"\n Registers links on the User side menubar/navbar\n\n :param name: A string that is shown on the navbar HTML\n :param route: A string that is the href used by the link\n :return:\n \"\"\"\n p = Menu(title=title, route=route)\n app.plugin_menu_bar.append(p)\n\n\ndef get_user_page_menu_bar():\n \"\"\"\n Access the list used to store the user page menu bar\n\n :return: Returns a list of Menu namedtuples. They have name, and route attributes.\n \"\"\"\n return get_pages() + app.plugin_menu_bar\n\n\ndef bypass_csrf_protection(f):\n \"\"\"\n Decorator that allows a route to bypass the need for a CSRF nonce on POST requests.\n\n This should be considered beta and may change in future versions.\n\n :param f: A function that needs to bypass CSRF protection\n :return: Returns a function with the _bypass_csrf attribute set which tells CTFd to not require CSRF protection.\n \"\"\"\n f._bypass_csrf = True\n return f\n\n\ndef get_plugin_names():\n modules = sorted(glob.glob(app.plugins_dir + \"/*\"))\n blacklist = {\"__pycache__\"}\n plugins = []\n for module in modules:\n module_name = os.path.basename(module)\n if os.path.isdir(module) and module_name not in blacklist:\n plugins.append(module_name)\n return plugins\n\n\ndef init_plugins(app):\n \"\"\"\n Searches for the load function in modules in the CTFd/plugins folder. This function is called with the current CTFd\n app as a parameter. 
This allows CTFd plugins to modify CTFd's behavior.\n\n :param app: A CTFd application\n :return:\n \"\"\"\n app.admin_plugin_scripts = []\n app.admin_plugin_stylesheets = []\n app.plugin_scripts = []\n app.plugin_stylesheets = []\n\n app.admin_plugin_menu_bar = []\n app.plugin_menu_bar = []\n app.plugins_dir = os.path.dirname(__file__)\n\n if app.config.get(\"SAFE_MODE\", False) is False:\n for plugin in get_plugin_names():\n module = \".\" + plugin\n module = importlib.import_module(module, package=\"CTFd.plugins\")\n module.load(app)\n print(\" * Loaded module, %s\" % module)\n\n app.jinja_env.globals.update(get_admin_plugin_menu_bar=get_admin_plugin_menu_bar)\n app.jinja_env.globals.update(get_user_page_menu_bar=get_user_page_menu_bar)\n", "path": "CTFd/plugins/__init__.py"}]} | 2,513 | 253 |
gh_patches_debug_28343 | rasdani/github-patches | git_diff | sanic-org__sanic-2537 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enforce exception handler uniqueness
1. You should not be able to register the same exception more than once, or at least not on the same App/Blueprint.
2. Handlers should only be fetched in relation to the BP or App context of the matched route. This effectively means that some exceptions (such as `NotFound`) could only be registered at the app level.
_Originally posted by @ahopkins in https://github.com/sanic-org/sanic/issues/2121#issuecomment-827077284_
</issue>
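For context, the duplicate registration described in point 1 looks roughly like the snippet below from the application side. This is an illustrative sketch, not code from the Sanic repository; it assumes the usual `@app.exception` decorator and the `NotFound` exception.

```python
from sanic import Sanic
from sanic.exceptions import NotFound
from sanic.response import text

app = Sanic("DemoApp")


@app.exception(NotFound)
async def first_handler(request, exception):
    return text("custom 404 (first)", status=404)


# A second registration for the same exception is currently accepted silently;
# per ErrorHandler.add in the listing below, the later handler simply
# overwrites the earlier one in cached_handlers.
@app.exception(NotFound)
async def second_handler(request, exception):
    return text("custom 404 (second)", status=404)
```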
<code>
[start of sanic/handlers.py]
1 from __future__ import annotations
2
3 from typing import Dict, List, Optional, Tuple, Type
4
5 from sanic.errorpages import BaseRenderer, TextRenderer, exception_response
6 from sanic.exceptions import (
7 HeaderNotFound,
8 InvalidRangeType,
9 RangeNotSatisfiable,
10 )
11 from sanic.log import deprecation, error_logger
12 from sanic.models.handler_types import RouteHandler
13 from sanic.response import text
14
15
16 class ErrorHandler:
17 """
18 Provide :class:`sanic.app.Sanic` application with a mechanism to handle
19 and process any and all uncaught exceptions in a way the application
20 developer will set fit.
21
22 This error handling framework is built into the core that can be extended
23 by the developers to perform a wide range of tasks from recording the error
24 stats to reporting them to an external service that can be used for
25 realtime alerting system.
26
27 """
28
29 def __init__(
30 self,
31 base: Type[BaseRenderer] = TextRenderer,
32 ):
33 self.cached_handlers: Dict[
34 Tuple[Type[BaseException], Optional[str]], Optional[RouteHandler]
35 ] = {}
36 self.debug = False
37 self.base = base
38
39 @classmethod
40 def finalize(cls, *args, **kwargs):
41 deprecation(
42 "ErrorHandler.finalize is deprecated and no longer needed. "
43 "Please remove update your code to remove it. ",
44 22.12,
45 )
46
47 def _full_lookup(self, exception, route_name: Optional[str] = None):
48 return self.lookup(exception, route_name)
49
50 def add(self, exception, handler, route_names: Optional[List[str]] = None):
51 """
52 Add a new exception handler to an already existing handler object.
53
54 :param exception: Type of exception that need to be handled
55 :param handler: Reference to the method that will handle the exception
56
57 :type exception: :class:`sanic.exceptions.SanicException` or
58 :class:`Exception`
59 :type handler: ``function``
60
61 :return: None
62 """
63 if route_names:
64 for route in route_names:
65 self.cached_handlers[(exception, route)] = handler
66 else:
67 self.cached_handlers[(exception, None)] = handler
68
69 def lookup(self, exception, route_name: Optional[str] = None):
70 """
71 Lookup the existing instance of :class:`ErrorHandler` and fetch the
72 registered handler for a specific type of exception.
73
74 This method leverages a dict lookup to speedup the retrieval process.
75
76 :param exception: Type of exception
77
78 :type exception: :class:`sanic.exceptions.SanicException` or
79 :class:`Exception`
80
81 :return: Registered function if found ``None`` otherwise
82 """
83 exception_class = type(exception)
84
85 for name in (route_name, None):
86 exception_key = (exception_class, name)
87 handler = self.cached_handlers.get(exception_key)
88 if handler:
89 return handler
90
91 for name in (route_name, None):
92 for ancestor in type.mro(exception_class):
93 exception_key = (ancestor, name)
94 if exception_key in self.cached_handlers:
95 handler = self.cached_handlers[exception_key]
96 self.cached_handlers[
97 (exception_class, route_name)
98 ] = handler
99 return handler
100
101 if ancestor is BaseException:
102 break
103 self.cached_handlers[(exception_class, route_name)] = None
104 handler = None
105 return handler
106
107 _lookup = _full_lookup
108
109 def response(self, request, exception):
110 """Fetches and executes an exception handler and returns a response
111 object
112
113 :param request: Instance of :class:`sanic.request.Request`
114 :param exception: Exception to handle
115
116 :type request: :class:`sanic.request.Request`
117 :type exception: :class:`sanic.exceptions.SanicException` or
118 :class:`Exception`
119
120 :return: Wrap the return value obtained from :func:`default`
121 or registered handler for that type of exception.
122 """
123 route_name = request.name if request else None
124 handler = self._lookup(exception, route_name)
125 response = None
126 try:
127 if handler:
128 response = handler(request, exception)
129 if response is None:
130 response = self.default(request, exception)
131 except Exception:
132 try:
133 url = repr(request.url)
134 except AttributeError: # no cov
135 url = "unknown"
136 response_message = (
137 "Exception raised in exception handler " '"%s" for uri: %s'
138 )
139 error_logger.exception(response_message, handler.__name__, url)
140
141 if self.debug:
142 return text(response_message % (handler.__name__, url), 500)
143 else:
144 return text("An error occurred while handling an error", 500)
145 return response
146
147 def default(self, request, exception):
148 """
149 Provide a default behavior for the objects of :class:`ErrorHandler`.
150 If a developer chooses to extent the :class:`ErrorHandler` they can
151 provide a custom implementation for this method to behave in a way
152 they see fit.
153
154 :param request: Incoming request
155 :param exception: Exception object
156
157 :type request: :class:`sanic.request.Request`
158 :type exception: :class:`sanic.exceptions.SanicException` or
159 :class:`Exception`
160 :return:
161 """
162 self.log(request, exception)
163 fallback = request.app.config.FALLBACK_ERROR_FORMAT
164 return exception_response(
165 request,
166 exception,
167 debug=self.debug,
168 base=self.base,
169 fallback=fallback,
170 )
171
172 @staticmethod
173 def log(request, exception):
174 quiet = getattr(exception, "quiet", False)
175 noisy = getattr(request.app.config, "NOISY_EXCEPTIONS", False)
176 if quiet is False or noisy is True:
177 try:
178 url = repr(request.url)
179 except AttributeError: # no cov
180 url = "unknown"
181
182 error_logger.exception(
183 "Exception occurred while handling uri: %s", url
184 )
185
186
187 class ContentRangeHandler:
188 """
189 A mechanism to parse and process the incoming request headers to
190 extract the content range information.
191
192 :param request: Incoming api request
193 :param stats: Stats related to the content
194
195 :type request: :class:`sanic.request.Request`
196 :type stats: :class:`posix.stat_result`
197
198 :ivar start: Content Range start
199 :ivar end: Content Range end
200 :ivar size: Length of the content
201 :ivar total: Total size identified by the :class:`posix.stat_result`
202 instance
203 :ivar ContentRangeHandler.headers: Content range header ``dict``
204 """
205
206 __slots__ = ("start", "end", "size", "total", "headers")
207
208 def __init__(self, request, stats):
209 self.total = stats.st_size
210 _range = request.headers.getone("range", None)
211 if _range is None:
212 raise HeaderNotFound("Range Header Not Found")
213 unit, _, value = tuple(map(str.strip, _range.partition("=")))
214 if unit != "bytes":
215 raise InvalidRangeType(
216 "%s is not a valid Range Type" % (unit,), self
217 )
218 start_b, _, end_b = tuple(map(str.strip, value.partition("-")))
219 try:
220 self.start = int(start_b) if start_b else None
221 except ValueError:
222 raise RangeNotSatisfiable(
223 "'%s' is invalid for Content Range" % (start_b,), self
224 )
225 try:
226 self.end = int(end_b) if end_b else None
227 except ValueError:
228 raise RangeNotSatisfiable(
229 "'%s' is invalid for Content Range" % (end_b,), self
230 )
231 if self.end is None:
232 if self.start is None:
233 raise RangeNotSatisfiable(
234 "Invalid for Content Range parameters", self
235 )
236 else:
237 # this case represents `Content-Range: bytes 5-`
238 self.end = self.total - 1
239 else:
240 if self.start is None:
241 # this case represents `Content-Range: bytes -5`
242 self.start = self.total - self.end
243 self.end = self.total - 1
244 if self.start >= self.end:
245 raise RangeNotSatisfiable(
246 "Invalid for Content Range parameters", self
247 )
248 self.size = self.end - self.start + 1
249 self.headers = {
250 "Content-Range": "bytes %s-%s/%s"
251 % (self.start, self.end, self.total)
252 }
253
254 def __bool__(self):
255 return self.size > 0
256
[end of sanic/handlers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sanic/handlers.py b/sanic/handlers.py
--- a/sanic/handlers.py
+++ b/sanic/handlers.py
@@ -47,6 +47,28 @@
def _full_lookup(self, exception, route_name: Optional[str] = None):
return self.lookup(exception, route_name)
+ def _add(
+ self,
+ key: Tuple[Type[BaseException], Optional[str]],
+ handler: RouteHandler,
+ ) -> None:
+ if key in self.cached_handlers:
+ exc, name = key
+ if name is None:
+ name = "__ALL_ROUTES__"
+
+ error_logger.warning(
+ f"Duplicate exception handler definition on: route={name} "
+ f"and exception={exc}"
+ )
+ deprecation(
+ "A duplicate exception handler definition was discovered. "
+ "This may cause unintended consequences. A warning has been "
+ "issued now, but it will not be allowed starting in v23.3.",
+ 23.3,
+ )
+ self.cached_handlers[key] = handler
+
def add(self, exception, handler, route_names: Optional[List[str]] = None):
"""
Add a new exception handler to an already existing handler object.
@@ -62,9 +84,9 @@
"""
if route_names:
for route in route_names:
- self.cached_handlers[(exception, route)] = handler
+ self._add((exception, route), handler)
else:
- self.cached_handlers[(exception, None)] = handler
+ self._add((exception, None), handler)
def lookup(self, exception, route_name: Optional[str] = None):
"""
| {"golden_diff": "diff --git a/sanic/handlers.py b/sanic/handlers.py\n--- a/sanic/handlers.py\n+++ b/sanic/handlers.py\n@@ -47,6 +47,28 @@\n def _full_lookup(self, exception, route_name: Optional[str] = None):\n return self.lookup(exception, route_name)\n \n+ def _add(\n+ self,\n+ key: Tuple[Type[BaseException], Optional[str]],\n+ handler: RouteHandler,\n+ ) -> None:\n+ if key in self.cached_handlers:\n+ exc, name = key\n+ if name is None:\n+ name = \"__ALL_ROUTES__\"\n+\n+ error_logger.warning(\n+ f\"Duplicate exception handler definition on: route={name} \"\n+ f\"and exception={exc}\"\n+ )\n+ deprecation(\n+ \"A duplicate exception handler definition was discovered. \"\n+ \"This may cause unintended consequences. A warning has been \"\n+ \"issued now, but it will not be allowed starting in v23.3.\",\n+ 23.3,\n+ )\n+ self.cached_handlers[key] = handler\n+\n def add(self, exception, handler, route_names: Optional[List[str]] = None):\n \"\"\"\n Add a new exception handler to an already existing handler object.\n@@ -62,9 +84,9 @@\n \"\"\"\n if route_names:\n for route in route_names:\n- self.cached_handlers[(exception, route)] = handler\n+ self._add((exception, route), handler)\n else:\n- self.cached_handlers[(exception, None)] = handler\n+ self._add((exception, None), handler)\n \n def lookup(self, exception, route_name: Optional[str] = None):\n \"\"\"\n", "issue": "Enforce exception handler uniquness\n1. You should not be able to register the same exception more than once, or at least not on the same App/Blueprint.\r\n2. Handlers should only be fetched in relation to the BP or App context of the matched route. This effectively means that some exceptions (`NotFound` could only be registered app level).\r\n\r\n_Originally posted by @ahopkins in https://github.com/sanic-org/sanic/issues/2121#issuecomment-827077284_\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import Dict, List, Optional, Tuple, Type\n\nfrom sanic.errorpages import BaseRenderer, TextRenderer, exception_response\nfrom sanic.exceptions import (\n HeaderNotFound,\n InvalidRangeType,\n RangeNotSatisfiable,\n)\nfrom sanic.log import deprecation, error_logger\nfrom sanic.models.handler_types import RouteHandler\nfrom sanic.response import text\n\n\nclass ErrorHandler:\n \"\"\"\n Provide :class:`sanic.app.Sanic` application with a mechanism to handle\n and process any and all uncaught exceptions in a way the application\n developer will set fit.\n\n This error handling framework is built into the core that can be extended\n by the developers to perform a wide range of tasks from recording the error\n stats to reporting them to an external service that can be used for\n realtime alerting system.\n\n \"\"\"\n\n def __init__(\n self,\n base: Type[BaseRenderer] = TextRenderer,\n ):\n self.cached_handlers: Dict[\n Tuple[Type[BaseException], Optional[str]], Optional[RouteHandler]\n ] = {}\n self.debug = False\n self.base = base\n\n @classmethod\n def finalize(cls, *args, **kwargs):\n deprecation(\n \"ErrorHandler.finalize is deprecated and no longer needed. \"\n \"Please remove update your code to remove it. 
\",\n 22.12,\n )\n\n def _full_lookup(self, exception, route_name: Optional[str] = None):\n return self.lookup(exception, route_name)\n\n def add(self, exception, handler, route_names: Optional[List[str]] = None):\n \"\"\"\n Add a new exception handler to an already existing handler object.\n\n :param exception: Type of exception that need to be handled\n :param handler: Reference to the method that will handle the exception\n\n :type exception: :class:`sanic.exceptions.SanicException` or\n :class:`Exception`\n :type handler: ``function``\n\n :return: None\n \"\"\"\n if route_names:\n for route in route_names:\n self.cached_handlers[(exception, route)] = handler\n else:\n self.cached_handlers[(exception, None)] = handler\n\n def lookup(self, exception, route_name: Optional[str] = None):\n \"\"\"\n Lookup the existing instance of :class:`ErrorHandler` and fetch the\n registered handler for a specific type of exception.\n\n This method leverages a dict lookup to speedup the retrieval process.\n\n :param exception: Type of exception\n\n :type exception: :class:`sanic.exceptions.SanicException` or\n :class:`Exception`\n\n :return: Registered function if found ``None`` otherwise\n \"\"\"\n exception_class = type(exception)\n\n for name in (route_name, None):\n exception_key = (exception_class, name)\n handler = self.cached_handlers.get(exception_key)\n if handler:\n return handler\n\n for name in (route_name, None):\n for ancestor in type.mro(exception_class):\n exception_key = (ancestor, name)\n if exception_key in self.cached_handlers:\n handler = self.cached_handlers[exception_key]\n self.cached_handlers[\n (exception_class, route_name)\n ] = handler\n return handler\n\n if ancestor is BaseException:\n break\n self.cached_handlers[(exception_class, route_name)] = None\n handler = None\n return handler\n\n _lookup = _full_lookup\n\n def response(self, request, exception):\n \"\"\"Fetches and executes an exception handler and returns a response\n object\n\n :param request: Instance of :class:`sanic.request.Request`\n :param exception: Exception to handle\n\n :type request: :class:`sanic.request.Request`\n :type exception: :class:`sanic.exceptions.SanicException` or\n :class:`Exception`\n\n :return: Wrap the return value obtained from :func:`default`\n or registered handler for that type of exception.\n \"\"\"\n route_name = request.name if request else None\n handler = self._lookup(exception, route_name)\n response = None\n try:\n if handler:\n response = handler(request, exception)\n if response is None:\n response = self.default(request, exception)\n except Exception:\n try:\n url = repr(request.url)\n except AttributeError: # no cov\n url = \"unknown\"\n response_message = (\n \"Exception raised in exception handler \" '\"%s\" for uri: %s'\n )\n error_logger.exception(response_message, handler.__name__, url)\n\n if self.debug:\n return text(response_message % (handler.__name__, url), 500)\n else:\n return text(\"An error occurred while handling an error\", 500)\n return response\n\n def default(self, request, exception):\n \"\"\"\n Provide a default behavior for the objects of :class:`ErrorHandler`.\n If a developer chooses to extent the :class:`ErrorHandler` they can\n provide a custom implementation for this method to behave in a way\n they see fit.\n\n :param request: Incoming request\n :param exception: Exception object\n\n :type request: :class:`sanic.request.Request`\n :type exception: :class:`sanic.exceptions.SanicException` or\n :class:`Exception`\n :return:\n \"\"\"\n 
self.log(request, exception)\n fallback = request.app.config.FALLBACK_ERROR_FORMAT\n return exception_response(\n request,\n exception,\n debug=self.debug,\n base=self.base,\n fallback=fallback,\n )\n\n @staticmethod\n def log(request, exception):\n quiet = getattr(exception, \"quiet\", False)\n noisy = getattr(request.app.config, \"NOISY_EXCEPTIONS\", False)\n if quiet is False or noisy is True:\n try:\n url = repr(request.url)\n except AttributeError: # no cov\n url = \"unknown\"\n\n error_logger.exception(\n \"Exception occurred while handling uri: %s\", url\n )\n\n\nclass ContentRangeHandler:\n \"\"\"\n A mechanism to parse and process the incoming request headers to\n extract the content range information.\n\n :param request: Incoming api request\n :param stats: Stats related to the content\n\n :type request: :class:`sanic.request.Request`\n :type stats: :class:`posix.stat_result`\n\n :ivar start: Content Range start\n :ivar end: Content Range end\n :ivar size: Length of the content\n :ivar total: Total size identified by the :class:`posix.stat_result`\n instance\n :ivar ContentRangeHandler.headers: Content range header ``dict``\n \"\"\"\n\n __slots__ = (\"start\", \"end\", \"size\", \"total\", \"headers\")\n\n def __init__(self, request, stats):\n self.total = stats.st_size\n _range = request.headers.getone(\"range\", None)\n if _range is None:\n raise HeaderNotFound(\"Range Header Not Found\")\n unit, _, value = tuple(map(str.strip, _range.partition(\"=\")))\n if unit != \"bytes\":\n raise InvalidRangeType(\n \"%s is not a valid Range Type\" % (unit,), self\n )\n start_b, _, end_b = tuple(map(str.strip, value.partition(\"-\")))\n try:\n self.start = int(start_b) if start_b else None\n except ValueError:\n raise RangeNotSatisfiable(\n \"'%s' is invalid for Content Range\" % (start_b,), self\n )\n try:\n self.end = int(end_b) if end_b else None\n except ValueError:\n raise RangeNotSatisfiable(\n \"'%s' is invalid for Content Range\" % (end_b,), self\n )\n if self.end is None:\n if self.start is None:\n raise RangeNotSatisfiable(\n \"Invalid for Content Range parameters\", self\n )\n else:\n # this case represents `Content-Range: bytes 5-`\n self.end = self.total - 1\n else:\n if self.start is None:\n # this case represents `Content-Range: bytes -5`\n self.start = self.total - self.end\n self.end = self.total - 1\n if self.start >= self.end:\n raise RangeNotSatisfiable(\n \"Invalid for Content Range parameters\", self\n )\n self.size = self.end - self.start + 1\n self.headers = {\n \"Content-Range\": \"bytes %s-%s/%s\"\n % (self.start, self.end, self.total)\n }\n\n def __bool__(self):\n return self.size > 0\n", "path": "sanic/handlers.py"}]} | 3,189 | 391 |
gh_patches_debug_23950 | rasdani/github-patches | git_diff | cleanlab__cleanlab-514 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ensure unit tests work with termcolor v2.1.0
tests/test_token_classification.py currently fails after termcolor was upgraded to v2.1.0, specifically `test_color_sentence`.
- [ ] update the unit test code to make the tests pass with termcolor v2.1.0
- [ ] ensure new unit test code also works with older versions of termcolor pre v2.1.0 (suboptimal but ok if the unit test only works with versions post v2.0.0, as long as the package works with all termcolor versions currently supported).
- [ ] remove the version upper bound on termcolor if one has been added to the package in the meantime.
https://pypi.org/project/termcolor/
https://github.com/termcolor/termcolor/pull/25/files
https://github.com/cleanlab/cleanlab/actions/runs/3357515340/jobs/5563372689
</issue>
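One way to make the assertion in `test_color_sentence` independent of how a given termcolor release renders escape codes is to build the expected string with the same `colored` call the library itself uses. The sketch below is only an illustration of that idea, not the fix that was actually shipped, and it assumes the test imports `color_sentence` directly.

```python
from termcolor import colored

from cleanlab.internal.token_classification_utils import color_sentence


def test_color_sentence_version_agnostic():
    sentence = "This is a sentence."
    word = "sentence"
    # Build the expectation with the installed termcolor, so the test does not
    # hard-code the exact ANSI sequence emitted by any one version.
    expected = "This is a " + colored(word, "red") + "."
    assert color_sentence(sentence, word) == expected
```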
<code>
[start of cleanlab/internal/token_classification_utils.py]
1 # Copyright (C) 2017-2022 Cleanlab Inc.
2 # This file is part of cleanlab.
3 #
4 # cleanlab is free software: you can redistribute it and/or modify
5 # it under the terms of the GNU Affero General Public License as published
6 # by the Free Software Foundation, either version 3 of the License, or
7 # (at your option) any later version.
8 #
9 # cleanlab is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU Affero General Public License for more details.
13 #
14 # You should have received a copy of the GNU Affero General Public License
15 # along with cleanlab. If not, see <https://www.gnu.org/licenses/>.
16
17 """
18 Helper methods used internally in cleanlab.token_classification
19 """
20
21 import re
22 import string
23 import numpy as np
24 from termcolor import colored
25 from typing import List, Optional, Callable, Tuple
26
27
28 def get_sentence(words: List[str]) -> str:
29 """
30 Get sentence formed by a list of words with minor processing for readability
31
32 Parameters
33 ----------
34 words:
35 list of word-level tokens
36
37 Returns
38 ----------
39 sentence:
40 sentence formed by list of word-level tokens
41
42 Examples
43 --------
44 >>> from cleanlab.internal.token_classification_utils import get_sentence
45 >>> words = ["This", "is", "a", "sentence", "."]
46 >>> get_sentence(words)
47 'This is a sentence.'
48 """
49 sentence = ""
50 for word in words:
51 if word not in string.punctuation or word in ["-", "("]:
52 word = " " + word
53 sentence += word
54 sentence = sentence.replace(" '", "'").replace("( ", "(").strip()
55 return sentence
56
57
58 def filter_sentence(
59 sentences: List[str],
60 condition: Optional[Callable[[str], bool]] = None,
61 ) -> Tuple[List[str], List[bool]]:
62 """
63 Filter sentence based on some condition, and returns filter mask
64
65 Parameters
66 ----------
67 sentences:
68 list of sentences
69
70 condition:
71 sentence filtering condition
72
73 Returns
74 ---------
75 sentences:
76 list of sentences filtered
77
78 mask:
79 boolean mask such that `mask[i] == True` if the i'th sentence is included in the
80 filtered sentence, otherwise `mask[i] == False`
81
82 Examples
83 --------
84 >>> from cleanlab.internal.token_classification_utils import filter_sentence
85 >>> sentences = ["Short sentence.", "This is a longer sentence."]
86 >>> condition = lambda x: len(x.split()) > 2
87 >>> long_sentences, _ = filter_sentence(sentences, condition)
88 >>> long_sentences
89 ['This is a longer sentence.']
90 >>> document = ["# Headline", "Sentence 1.", "&", "Sentence 2."]
91 >>> sentences, mask = filter_sentence(document)
92 >>> sentences, mask
93 (['Sentence 1.', 'Sentence 2.'], [False, True, False, True])
94 """
95 if not condition:
96 condition = lambda sentence: len(sentence) > 1 and "#" not in sentence
97 mask = list(map(condition, sentences))
98 sentences = [sentence for m, sentence in zip(mask, sentences) if m]
99 return sentences, mask
100
101
102 def process_token(token: str, replace: List[Tuple[str, str]] = [("#", "")]) -> str:
103 """
104 Replaces special characters in the tokens
105
106 Parameters
107 ----------
108 token:
109 token which potentially contains special characters
110
111 replace:
112 list of tuples `(s1, s2)`, where all occurances of s1 are replaced by s2
113
114 Returns
115 ---------
116 processed_token:
117 processed token whose special character has been replaced
118
119 Note
120 ----
121 Only applies to characters in the original input token.
122
123 Examples
124 --------
125 >>> from cleanlab.internal.token_classification_utils import process_token
126 >>> token = "#Comment"
127 >>> process_token("#Comment")
128 'Comment'
129
130 Specify custom replacement rules
131
132 >>> replace = [("C", "a"), ("a", "C")]
133 >>> process_token("Cleanlab", replace)
134 'aleCnlCb'
135 """
136 replace_dict = {re.escape(k): v for (k, v) in replace}
137 pattern = "|".join(replace_dict.keys())
138 compiled_pattern = re.compile(pattern)
139 replacement = lambda match: replace_dict[re.escape(match.group(0))]
140 processed_token = compiled_pattern.sub(replacement, token)
141 return processed_token
142
143
144 def mapping(entities: List[int], maps: List[int]) -> List[int]:
145 """
146 Map a list of entities to its corresponding entities
147
148 Parameters
149 ----------
150 entities:
151 a list of given entities
152
153 maps:
154 a list of mapped entities, such that the i'th indexed token should be mapped to `maps[i]`
155
156 Returns
157 ---------
158 mapped_entities:
159 a list of mapped entities
160
161 Examples
162 --------
163 >>> unique_identities = [0, 1, 2, 3, 4] # ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
164 >>> maps = [0, 1, 1, 2, 2] # ["O", "PER", "PER", "LOC", "LOC"]
165 >>> mapping(unique_identities, maps)
166 [0, 1, 1, 2, 2] # ["O", "PER", "PER", "LOC", "LOC"]
167 >>> mapping([0, 0, 4, 4, 3, 4, 0, 2], maps)
168 [0, 0, 2, 2, 2, 2, 0, 1] # ["O", "O", "LOC", "LOC", "LOC", "LOC", "O", "PER"]
169 """
170 f = lambda x: maps[x]
171 return list(map(f, entities))
172
173
174 def merge_probs(probs: np.ndarray, maps: List[int]) -> np.ndarray:
175 """
176 Merges model-predictive probabilities with desired mapping
177
178 Parameters
179 ----------
180 probs:
181 np.array of shape `(N, K)`, where N is the number of tokens, and K is the number of classes for the model
182
183 maps:
184 a list of mapped index, such that the probability of the token being in the i'th class is mapped to the
185 `maps[i]` index. If `maps[i] == -1`, the i'th column of `probs` is ignored. If `np.any(maps == -1)`, the
186 returned probability is re-normalized.
187
188 Returns
189 ---------
190 probs_merged:
191 np.array of shape ``(N, K')``, where `K` is the number of new classes. Probabilities are merged and
192 re-normalized if necessary.
193
194 Examples
195 --------
196 >>> import numpy as np
197 >>> from cleanlab.internal.token_classification_utils import merge_probs
198 >>> probs = np.array([
199 ... [0.55, 0.0125, 0.0375, 0.1, 0.3],
200 ... [0.1, 0.8, 0, 0.075, 0.025],
201 ... ])
202 >>> maps = [0, 1, 1, 2, 2]
203 >>> merge_probs(probs, maps)
204 array([[0.55, 0.05, 0.4 ],
205 [0.1 , 0.8 , 0.1 ]])
206 """
207 old_classes = probs.shape[1]
208 map_size = np.max(maps) + 1
209 probs_merged = np.zeros([len(probs), map_size], dtype=probs.dtype.type)
210
211 for i in range(old_classes):
212 if maps[i] >= 0:
213 probs_merged[:, maps[i]] += probs[:, i]
214 if -1 in maps:
215 row_sums = probs_merged.sum(axis=1)
216 probs_merged /= row_sums[:, np.newaxis]
217 return probs_merged
218
219
220 def color_sentence(sentence: str, word: str) -> str:
221 """
222 Searches for a given token in the sentence and returns the sentence where the given token is colored red
223
224 Parameters
225 ----------
226 sentence:
227 a sentence where the word is searched
228
229 word:
230 keyword to find in `sentence`. Assumes the word exists in the sentence.
231 Returns
232 ---------
233 colored_sentence:
234 `sentence` where the every occurrence of the word is colored red, using ``termcolor.colored``
235
236 Examples
237 --------
238 >>> from cleanlab.internal.token_classification_utils import color_sentence
239 >>> sentence = "This is a sentence."
240 >>> word = "sentence"
241 >>> color_sentence(sentence, word)
242 'This is a \x1b[31msentence\x1b[0m.'
243
244 Also works for multiple occurrences of the word
245
246 >>> document = "This is a sentence. This is another sentence."
247 >>> word = "sentence"
248 >>> color_sentence(document, word)
249 'This is a \x1b[31msentence\x1b[0m. This is another \x1b[31msentence\x1b[0m.'
250 """
251 colored_word = colored(word, "red")
252 colored_sentence, number_of_substitions = re.subn(
253 r"\b{}\b".format(re.escape(word)), colored_word, sentence
254 )
255 if number_of_substitions == 0:
256 # Use basic string manipulation if regex fails
257 colored_sentence = sentence.replace(word, colored_word)
258 return colored_sentence
259
[end of cleanlab/internal/token_classification_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cleanlab/internal/token_classification_utils.py b/cleanlab/internal/token_classification_utils.py
--- a/cleanlab/internal/token_classification_utils.py
+++ b/cleanlab/internal/token_classification_utils.py
@@ -249,10 +249,35 @@
'This is a \x1b[31msentence\x1b[0m. This is another \x1b[31msentence\x1b[0m.'
"""
colored_word = colored(word, "red")
- colored_sentence, number_of_substitions = re.subn(
- r"\b{}\b".format(re.escape(word)), colored_word, sentence
+ return _replace_sentence(sentence=sentence, word=word, new_word=colored_word)
+
+
+def _replace_sentence(sentence: str, word: str, new_word: str) -> str:
+ """
+ Searches for a given token in the sentence and returns the sentence where the given token has been replaced by
+ `new_word`.
+
+ Parameters
+ ----------
+ sentence:
+ a sentence where the word is searched
+
+ word:
+ keyword to find in `sentence`. Assumes the word exists in the sentence.
+
+ new_word:
+ the word to replace the keyword with
+
+ Returns
+ ---------
+ new_sentence:
+ `sentence` where the every occurrence of the word is replaced by `colored_word`
+ """
+
+ new_sentence, number_of_substitions = re.subn(
+ r"\b{}\b".format(re.escape(word)), new_word, sentence
)
if number_of_substitions == 0:
# Use basic string manipulation if regex fails
- colored_sentence = sentence.replace(word, colored_word)
- return colored_sentence
+ new_sentence = sentence.replace(word, new_word)
+ return new_sentence
| {"golden_diff": "diff --git a/cleanlab/internal/token_classification_utils.py b/cleanlab/internal/token_classification_utils.py\n--- a/cleanlab/internal/token_classification_utils.py\n+++ b/cleanlab/internal/token_classification_utils.py\n@@ -249,10 +249,35 @@\n 'This is a \\x1b[31msentence\\x1b[0m. This is another \\x1b[31msentence\\x1b[0m.'\n \"\"\"\n colored_word = colored(word, \"red\")\n- colored_sentence, number_of_substitions = re.subn(\n- r\"\\b{}\\b\".format(re.escape(word)), colored_word, sentence\n+ return _replace_sentence(sentence=sentence, word=word, new_word=colored_word)\n+\n+\n+def _replace_sentence(sentence: str, word: str, new_word: str) -> str:\n+ \"\"\"\n+ Searches for a given token in the sentence and returns the sentence where the given token has been replaced by\n+ `new_word`.\n+\n+ Parameters\n+ ----------\n+ sentence:\n+ a sentence where the word is searched\n+\n+ word:\n+ keyword to find in `sentence`. Assumes the word exists in the sentence.\n+\n+ new_word:\n+ the word to replace the keyword with\n+\n+ Returns\n+ ---------\n+ new_sentence:\n+ `sentence` where the every occurrence of the word is replaced by `colored_word`\n+ \"\"\"\n+\n+ new_sentence, number_of_substitions = re.subn(\n+ r\"\\b{}\\b\".format(re.escape(word)), new_word, sentence\n )\n if number_of_substitions == 0:\n # Use basic string manipulation if regex fails\n- colored_sentence = sentence.replace(word, colored_word)\n- return colored_sentence\n+ new_sentence = sentence.replace(word, new_word)\n+ return new_sentence\n", "issue": "Ensure unit tests\u00a0work with termcolor v2.1.0\ntests/test_token_classification.py currently fails after termcolor just upgraded to v2.1.0, specifically `test_color_sentence` \n\n- [ ] update unit test code to make them pass with termcolor v2.1.0\n- [ ] ensure new unit test code also works with older versions of termcolor pre v2.1.0 (suboptimal but ok if the unit test only works with versions post v2.0.0, as long as the package works with all termcolor versions currently supported).\n- [ ] remove version upper bound on termcolor if it has been added to package in the meantime.\n\nhttps://pypi.org/project/termcolor/\nhttps://github.com/termcolor/termcolor/pull/25/files\nhttps://github.com/cleanlab/cleanlab/actions/runs/3357515340/jobs/5563372689\n\n", "before_files": [{"content": "# Copyright (C) 2017-2022 Cleanlab Inc.\n# This file is part of cleanlab.\n#\n# cleanlab is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Affero General Public License as published\n# by the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# cleanlab is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Affero General Public License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with cleanlab. 
If not, see <https://www.gnu.org/licenses/>.\n\n\"\"\"\nHelper methods used internally in cleanlab.token_classification\n\"\"\"\n\nimport re\nimport string\nimport numpy as np\nfrom termcolor import colored\nfrom typing import List, Optional, Callable, Tuple\n\n\ndef get_sentence(words: List[str]) -> str:\n \"\"\"\n Get sentence formed by a list of words with minor processing for readability\n\n Parameters\n ----------\n words:\n list of word-level tokens\n\n Returns\n ----------\n sentence:\n sentence formed by list of word-level tokens\n\n Examples\n --------\n >>> from cleanlab.internal.token_classification_utils import get_sentence\n >>> words = [\"This\", \"is\", \"a\", \"sentence\", \".\"]\n >>> get_sentence(words)\n 'This is a sentence.'\n \"\"\"\n sentence = \"\"\n for word in words:\n if word not in string.punctuation or word in [\"-\", \"(\"]:\n word = \" \" + word\n sentence += word\n sentence = sentence.replace(\" '\", \"'\").replace(\"( \", \"(\").strip()\n return sentence\n\n\ndef filter_sentence(\n sentences: List[str],\n condition: Optional[Callable[[str], bool]] = None,\n) -> Tuple[List[str], List[bool]]:\n \"\"\"\n Filter sentence based on some condition, and returns filter mask\n\n Parameters\n ----------\n sentences:\n list of sentences\n\n condition:\n sentence filtering condition\n\n Returns\n ---------\n sentences:\n list of sentences filtered\n\n mask:\n boolean mask such that `mask[i] == True` if the i'th sentence is included in the\n filtered sentence, otherwise `mask[i] == False`\n\n Examples\n --------\n >>> from cleanlab.internal.token_classification_utils import filter_sentence\n >>> sentences = [\"Short sentence.\", \"This is a longer sentence.\"]\n >>> condition = lambda x: len(x.split()) > 2\n >>> long_sentences, _ = filter_sentence(sentences, condition)\n >>> long_sentences\n ['This is a longer sentence.']\n >>> document = [\"# Headline\", \"Sentence 1.\", \"&\", \"Sentence 2.\"]\n >>> sentences, mask = filter_sentence(document)\n >>> sentences, mask\n (['Sentence 1.', 'Sentence 2.'], [False, True, False, True])\n \"\"\"\n if not condition:\n condition = lambda sentence: len(sentence) > 1 and \"#\" not in sentence\n mask = list(map(condition, sentences))\n sentences = [sentence for m, sentence in zip(mask, sentences) if m]\n return sentences, mask\n\n\ndef process_token(token: str, replace: List[Tuple[str, str]] = [(\"#\", \"\")]) -> str:\n \"\"\"\n Replaces special characters in the tokens\n\n Parameters\n ----------\n token:\n token which potentially contains special characters\n\n replace:\n list of tuples `(s1, s2)`, where all occurances of s1 are replaced by s2\n\n Returns\n ---------\n processed_token:\n processed token whose special character has been replaced\n\n Note\n ----\n Only applies to characters in the original input token.\n\n Examples\n --------\n >>> from cleanlab.internal.token_classification_utils import process_token\n >>> token = \"#Comment\"\n >>> process_token(\"#Comment\")\n 'Comment'\n\n Specify custom replacement rules\n\n >>> replace = [(\"C\", \"a\"), (\"a\", \"C\")]\n >>> process_token(\"Cleanlab\", replace)\n 'aleCnlCb'\n \"\"\"\n replace_dict = {re.escape(k): v for (k, v) in replace}\n pattern = \"|\".join(replace_dict.keys())\n compiled_pattern = re.compile(pattern)\n replacement = lambda match: replace_dict[re.escape(match.group(0))]\n processed_token = compiled_pattern.sub(replacement, token)\n return processed_token\n\n\ndef mapping(entities: List[int], maps: List[int]) -> List[int]:\n \"\"\"\n Map a list of entities to 
its corresponding entities\n\n Parameters\n ----------\n entities:\n a list of given entities\n\n maps:\n a list of mapped entities, such that the i'th indexed token should be mapped to `maps[i]`\n\n Returns\n ---------\n mapped_entities:\n a list of mapped entities\n\n Examples\n --------\n >>> unique_identities = [0, 1, 2, 3, 4] # [\"O\", \"B-PER\", \"I-PER\", \"B-LOC\", \"I-LOC\"]\n >>> maps = [0, 1, 1, 2, 2] # [\"O\", \"PER\", \"PER\", \"LOC\", \"LOC\"]\n >>> mapping(unique_identities, maps)\n [0, 1, 1, 2, 2] # [\"O\", \"PER\", \"PER\", \"LOC\", \"LOC\"]\n >>> mapping([0, 0, 4, 4, 3, 4, 0, 2], maps)\n [0, 0, 2, 2, 2, 2, 0, 1] # [\"O\", \"O\", \"LOC\", \"LOC\", \"LOC\", \"LOC\", \"O\", \"PER\"]\n \"\"\"\n f = lambda x: maps[x]\n return list(map(f, entities))\n\n\ndef merge_probs(probs: np.ndarray, maps: List[int]) -> np.ndarray:\n \"\"\"\n Merges model-predictive probabilities with desired mapping\n\n Parameters\n ----------\n probs:\n np.array of shape `(N, K)`, where N is the number of tokens, and K is the number of classes for the model\n\n maps:\n a list of mapped index, such that the probability of the token being in the i'th class is mapped to the\n `maps[i]` index. If `maps[i] == -1`, the i'th column of `probs` is ignored. If `np.any(maps == -1)`, the\n returned probability is re-normalized.\n\n Returns\n ---------\n probs_merged:\n np.array of shape ``(N, K')``, where `K` is the number of new classes. Probabilities are merged and\n re-normalized if necessary.\n\n Examples\n --------\n >>> import numpy as np\n >>> from cleanlab.internal.token_classification_utils import merge_probs\n >>> probs = np.array([\n ... [0.55, 0.0125, 0.0375, 0.1, 0.3],\n ... [0.1, 0.8, 0, 0.075, 0.025],\n ... ])\n >>> maps = [0, 1, 1, 2, 2]\n >>> merge_probs(probs, maps)\n array([[0.55, 0.05, 0.4 ],\n [0.1 , 0.8 , 0.1 ]])\n \"\"\"\n old_classes = probs.shape[1]\n map_size = np.max(maps) + 1\n probs_merged = np.zeros([len(probs), map_size], dtype=probs.dtype.type)\n\n for i in range(old_classes):\n if maps[i] >= 0:\n probs_merged[:, maps[i]] += probs[:, i]\n if -1 in maps:\n row_sums = probs_merged.sum(axis=1)\n probs_merged /= row_sums[:, np.newaxis]\n return probs_merged\n\n\ndef color_sentence(sentence: str, word: str) -> str:\n \"\"\"\n Searches for a given token in the sentence and returns the sentence where the given token is colored red\n\n Parameters\n ----------\n sentence:\n a sentence where the word is searched\n\n word:\n keyword to find in `sentence`. Assumes the word exists in the sentence.\n Returns\n ---------\n colored_sentence:\n `sentence` where the every occurrence of the word is colored red, using ``termcolor.colored``\n\n Examples\n --------\n >>> from cleanlab.internal.token_classification_utils import color_sentence\n >>> sentence = \"This is a sentence.\"\n >>> word = \"sentence\"\n >>> color_sentence(sentence, word)\n 'This is a \\x1b[31msentence\\x1b[0m.'\n\n Also works for multiple occurrences of the word\n\n >>> document = \"This is a sentence. This is another sentence.\"\n >>> word = \"sentence\"\n >>> color_sentence(document, word)\n 'This is a \\x1b[31msentence\\x1b[0m. 
This is another \\x1b[31msentence\\x1b[0m.'\n \"\"\"\n colored_word = colored(word, \"red\")\n colored_sentence, number_of_substitions = re.subn(\n r\"\\b{}\\b\".format(re.escape(word)), colored_word, sentence\n )\n if number_of_substitions == 0:\n # Use basic string manipulation if regex fails\n colored_sentence = sentence.replace(word, colored_word)\n return colored_sentence\n", "path": "cleanlab/internal/token_classification_utils.py"}]} | 3,539 | 411 |
gh_patches_debug_10644 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2593 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide better error for Union of scalars.
This keeps coming up in questions on Discord:
```py
union: Union[List[int], List[str]]
```
gives something like
```console
File "/root/backend/env/lib/python3.10/site-packages/strawberry/schema/name_converter.py", line 99, in from_union
assert hasattr(type_, "_type_definition")
AssertionError
The above exception was the direct cause of the following exception:
File "/root/backend/env/lib/python3.10/site-packages/graphql/type/definition.py", line 811, in fields
raise cls(f"{self.name} fields cannot be resolved. {error}") from error
TypeError: Query fields cannot be resolved
```
</issue>
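When `InvalidUnionTypeError` is constructed for a generic annotation such as `List[int]`, the offending object is a `StrawberryList` instance rather than a class, so the `invalid_type.__name__` lookup in its constructor fails before a readable message can be produced. A defensive name lookup along these lines would avoid that; this is an illustrative helper only, not necessarily how the fix lands in strawberry itself.

```python
def _display_name(invalid_type: object) -> str:
    # Plain classes expose __name__; instances such as StrawberryList do not,
    # so fall back to the name of their class.
    try:
        return invalid_type.__name__  # type: ignore[attr-defined]
    except AttributeError:
        return type(invalid_type).__name__
```

With something like this in place the user would at least see a "Type `StrawberryList` cannot be used in a GraphQL Union" style message instead of the opaque "Query fields cannot be resolved" failure.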
<code>
[start of strawberry/exceptions/invalid_union_type.py]
1 from __future__ import annotations
2
3 from inspect import getframeinfo, stack
4 from pathlib import Path
5 from typing import TYPE_CHECKING, Optional, Type
6
7 from strawberry.exceptions.utils.source_finder import SourceFinder
8 from strawberry.utils.cached_property import cached_property
9
10 from .exception import StrawberryException
11
12 if TYPE_CHECKING:
13 from strawberry.union import StrawberryUnion
14
15 from .exception_source import ExceptionSource
16
17
18 class InvalidUnionTypeError(StrawberryException):
19 """The union is constructed with an invalid type"""
20
21 invalid_type: object
22
23 def __init__(self, union_name: str, invalid_type: object) -> None:
24 from strawberry.custom_scalar import ScalarWrapper
25
26 self.union_name = union_name
27 self.invalid_type = invalid_type
28
29 # assuming that the exception happens two stack frames above the current one.
30 # one is our code checking for invalid types, the other is the caller
31 self.frame = getframeinfo(stack()[2][0])
32
33 if isinstance(invalid_type, ScalarWrapper):
34 type_name = invalid_type.wrap.__name__
35 else:
36 type_name = invalid_type.__name__ # type: ignore
37
38 self.message = f"Type `{type_name}` cannot be used in a GraphQL Union"
39 self.rich_message = (
40 f"Type `[underline]{type_name}[/]` cannot be used in a GraphQL Union"
41 )
42 self.suggestion = (
43 "To fix this error you should replace the type a strawberry.type"
44 )
45 self.annotation_message = "invalid type here"
46
47 @cached_property
48 def exception_source(self) -> Optional[ExceptionSource]:
49 path = Path(self.frame.filename)
50
51 source_finder = SourceFinder()
52
53 return source_finder.find_union_call(path, self.union_name, self.invalid_type)
54
55
56 class InvalidTypeForUnionMergeError(StrawberryException):
57 """A specialized version of InvalidUnionTypeError for when trying
58 to merge unions using the pipe operator."""
59
60 invalid_type: Type
61
62 def __init__(self, union: StrawberryUnion, other: object) -> None:
63 self.union = union
64 self.other = other
65
66 # assuming that the exception happens two stack frames above the current one.
67 # one is our code checking for invalid types, the other is the caller
68 self.frame = getframeinfo(stack()[2][0])
69
70 other_name = getattr(other, "__name__", str(other))
71
72 self.message = f"`{other_name}` cannot be used when merging GraphQL Unions"
73 self.rich_message = (
74 f"`[underline]{other_name}[/]` cannot be used when merging GraphQL Unions"
75 )
76 self.suggestion = ""
77 self.annotation_message = "invalid type here"
78
79 @cached_property
80 def exception_source(self) -> Optional[ExceptionSource]:
81 source_finder = SourceFinder()
82
83 return source_finder.find_union_merge(self.union, self.other, frame=self.frame)
84
[end of strawberry/exceptions/invalid_union_type.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/exceptions/invalid_union_type.py b/strawberry/exceptions/invalid_union_type.py
--- a/strawberry/exceptions/invalid_union_type.py
+++ b/strawberry/exceptions/invalid_union_type.py
@@ -33,7 +33,11 @@
if isinstance(invalid_type, ScalarWrapper):
type_name = invalid_type.wrap.__name__
else:
- type_name = invalid_type.__name__ # type: ignore
+ try:
+ type_name = invalid_type.__name__ # type: ignore
+ except AttributeError:
+ # might be StrawberryList instance
+ type_name = invalid_type.__class__.__name__
self.message = f"Type `{type_name}` cannot be used in a GraphQL Union"
self.rich_message = (
| {"golden_diff": "diff --git a/strawberry/exceptions/invalid_union_type.py b/strawberry/exceptions/invalid_union_type.py\n--- a/strawberry/exceptions/invalid_union_type.py\n+++ b/strawberry/exceptions/invalid_union_type.py\n@@ -33,7 +33,11 @@\n if isinstance(invalid_type, ScalarWrapper):\n type_name = invalid_type.wrap.__name__\n else:\n- type_name = invalid_type.__name__ # type: ignore\n+ try:\n+ type_name = invalid_type.__name__ # type: ignore\n+ except AttributeError:\n+ # might be StrawberryList instance\n+ type_name = invalid_type.__class__.__name__\n \n self.message = f\"Type `{type_name}` cannot be used in a GraphQL Union\"\n self.rich_message = (\n", "issue": "Provide better error for Union of scalars.\nThis keeps coming from questions on discord\r\n```py\r\nunion: Union[List[int], List[str]]\r\n```\r\ngives something like\r\n```console\r\n File \"/root/backend/env/lib/python3.10/site-packages/strawberry/schema/name_converter.py\", line 99, in from_union\r\n assert hasattr(type_, \"_type_definition\")\r\nAssertionError\r\n\r\nThe above exception was the direct cause of the following exception:\r\n File \"/root/backend/env/lib/python3.10/site-packages/graphql/type/definition.py\", line 811, in fields\r\n raise cls(f\"{self.name} fields cannot be resolved. {error}\") from error\r\nTypeError: Query fields cannot be resolved\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom inspect import getframeinfo, stack\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Optional, Type\n\nfrom strawberry.exceptions.utils.source_finder import SourceFinder\nfrom strawberry.utils.cached_property import cached_property\n\nfrom .exception import StrawberryException\n\nif TYPE_CHECKING:\n from strawberry.union import StrawberryUnion\n\n from .exception_source import ExceptionSource\n\n\nclass InvalidUnionTypeError(StrawberryException):\n \"\"\"The union is constructed with an invalid type\"\"\"\n\n invalid_type: object\n\n def __init__(self, union_name: str, invalid_type: object) -> None:\n from strawberry.custom_scalar import ScalarWrapper\n\n self.union_name = union_name\n self.invalid_type = invalid_type\n\n # assuming that the exception happens two stack frames above the current one.\n # one is our code checking for invalid types, the other is the caller\n self.frame = getframeinfo(stack()[2][0])\n\n if isinstance(invalid_type, ScalarWrapper):\n type_name = invalid_type.wrap.__name__\n else:\n type_name = invalid_type.__name__ # type: ignore\n\n self.message = f\"Type `{type_name}` cannot be used in a GraphQL Union\"\n self.rich_message = (\n f\"Type `[underline]{type_name}[/]` cannot be used in a GraphQL Union\"\n )\n self.suggestion = (\n \"To fix this error you should replace the type a strawberry.type\"\n )\n self.annotation_message = \"invalid type here\"\n\n @cached_property\n def exception_source(self) -> Optional[ExceptionSource]:\n path = Path(self.frame.filename)\n\n source_finder = SourceFinder()\n\n return source_finder.find_union_call(path, self.union_name, self.invalid_type)\n\n\nclass InvalidTypeForUnionMergeError(StrawberryException):\n \"\"\"A specialized version of InvalidUnionTypeError for when trying\n to merge unions using the pipe operator.\"\"\"\n\n invalid_type: Type\n\n def __init__(self, union: StrawberryUnion, other: object) -> None:\n self.union = union\n self.other = other\n\n # assuming that the exception happens two stack frames above the current one.\n # one is our code checking for invalid types, the other is the caller\n 
self.frame = getframeinfo(stack()[2][0])\n\n other_name = getattr(other, \"__name__\", str(other))\n\n self.message = f\"`{other_name}` cannot be used when merging GraphQL Unions\"\n self.rich_message = (\n f\"`[underline]{other_name}[/]` cannot be used when merging GraphQL Unions\"\n )\n self.suggestion = \"\"\n self.annotation_message = \"invalid type here\"\n\n @cached_property\n def exception_source(self) -> Optional[ExceptionSource]:\n source_finder = SourceFinder()\n\n return source_finder.find_union_merge(self.union, self.other, frame=self.frame)\n", "path": "strawberry/exceptions/invalid_union_type.py"}]} | 1,480 | 181 |
gh_patches_debug_32470 | rasdani/github-patches | git_diff | cocotb__cocotb-3568 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Thoughts on adding git rev to dev version numbers?
How would everyone feel about adding `git rev-parse --short HEAD` to `__version__`? Currently it is `2.0.0.dev0` and I'm not sure what the final `0` is for, but I'm guessing it's to manually increment as one feels like? What if instead it were `2.0.0.dev-d379318e`? I'd propose that we build the file on the fly in `setup.py` but only add the git rev when the non-suffixed version (stored as a variable in `setup.py`) includes `dev`.
I can take a stab at this if people are onboard.
</issue>
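A minimal sketch of how `setup.py` could derive such a version string is shown below. The names and suffix format are illustrative rather than a concrete proposal for the final implementation; note that PEP 440 does not allow a bare `-d379318e` suffix, so the revision would normally go into a local version label such as `2.0.0.dev0+d379318e`.

```python
import subprocess

BASE_VERSION = "2.0.0.dev0"


def get_version() -> str:
    if "dev" not in BASE_VERSION:
        return BASE_VERSION
    try:
        rev = (
            subprocess.check_output(["git", "rev-parse", "--short", "HEAD"])
            .decode("ascii")
            .strip()
        )
    except (OSError, subprocess.CalledProcessError):
        # Not building from a git checkout (e.g. an sdist install): keep the base.
        return BASE_VERSION
    return f"{BASE_VERSION}+{rev}"
```

The generated string could then be written into `src/cocotb/_version.py` at build time so that `cocotb.__version__` matches what pip reports.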
<code>
[start of src/cocotb/_version.py]
1 # Package versioning solution originally found here:
2 # http://stackoverflow.com/q/458550
3
4 # Store the version here so:
5 # 1) we don't load dependencies by storing it in __init__.py
6 # 2) we can import it in setup.py for the same reason
7 # 3) we can import it into your module
8 __version__ = "2.0.0.dev0"
9
[end of src/cocotb/_version.py]
[start of setup.py]
1 #!/usr/bin/env python
2 ###############################################################################
3 # Copyright (c) 2013 Potential Ventures Ltd
4 # Copyright (c) 2013 SolarFlare Communications Inc
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 # * Redistributions of source code must retain the above copyright
10 # notice, this list of conditions and the following disclaimer.
11 # * Redistributions in binary form must reproduce the above copyright
12 # notice, this list of conditions and the following disclaimer in the
13 # documentation and/or other materials provided with the distribution.
14 # * Neither the name of Potential Ventures Ltd,
15 # SolarFlare Communications Inc nor the
16 # names of its contributors may be used to endorse or promote products
17 # derived from this software without specific prior written permission.
18 #
19 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
20 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
21 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
22 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
23 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
24 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
25 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
26 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
27 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
28 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
29 ###############################################################################
30
31 import sys
32
33 if sys.version_info[:2] < (3, 6): # noqa: UP036 | bug in ruff
34 msg = [
35 "This version of cocotb requires at least Python 3.6,",
36 "you are running Python %d.%d.%d."
37 % (sys.version_info[0], sys.version_info[1], sys.version_info[2]),
38 ]
39 msg += [
40 "For more information please refer to the documentation at ",
41 "https://cocotb.readthedocs.io.",
42 ]
43
44 raise SystemExit("\n".join(msg))
45
46 import logging
47 from io import StringIO
48 from os import path, walk
49
50 from setuptools import find_packages, setup
51
52 # Note: cocotb is not installed properly yet and is missing dependencies and binaries
53 # We can still import other files next to setup.py, as long as they're in MANIFEST.in
54 # The below line is necessary for PEP517 support
55 sys.path.append(path.dirname(__file__))
56 from cocotb_build_libs import build_ext, get_ext # noqa: E402
57
58
59 def read_file(fname):
60 with open(path.join(path.dirname(__file__), fname), encoding="utf8") as f:
61 return f.read()
62
63
64 def package_files(directory):
65 paths = []
66 for fpath, directories, filenames in walk(directory):
67 for filename in filenames:
68 paths.append(path.join("..", "..", fpath, filename))
69 return paths
70
71
72 # this sets the __version__ variable
73 exec(read_file(path.join("src", "cocotb", "_version.py")))
74
75 # store log from build_libs and display at the end in verbose mode
76 # see https://github.com/pypa/pip/issues/6634
77 log_stream = StringIO()
78 handler = logging.StreamHandler(log_stream)
79 log = logging.getLogger("cocotb._build_libs")
80 log.setLevel(logging.INFO)
81 log.addHandler(handler)
82
83 setup(
84 name="cocotb",
85 cmdclass={"build_ext": build_ext},
86 version=__version__, # noqa: F821
87 description="cocotb is a coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python.",
88 url="https://www.cocotb.org",
89 license="BSD",
90 long_description=read_file("README.md"),
91 long_description_content_type="text/markdown",
92 author="Chris Higgs, Stuart Hodgson",
93 maintainer="cocotb contributors",
94 maintainer_email="[email protected]",
95 install_requires=[
96 "find_libpython",
97 ],
98 python_requires=">=3.6",
99 packages=find_packages(where="src"),
100 package_dir={"": "src"},
101 package_data={
102 "cocotb": (
103 package_files("src/cocotb/share/makefiles")
104 + package_files("src/cocotb/share/include") # noqa: W504
105 + package_files("src/cocotb/share/def") # noqa: W504
106 + package_files("src/cocotb/share/lib/verilator") # noqa: W504
107 )
108 },
109 ext_modules=get_ext(),
110 entry_points={
111 "console_scripts": [
112 "cocotb-config=cocotb.config:main",
113 ]
114 },
115 platforms="any",
116 classifiers=[
117 "Programming Language :: Python :: 3",
118 "Programming Language :: Python :: 3.6",
119 "Programming Language :: Python :: 3.7",
120 "Programming Language :: Python :: 3.8",
121 "Programming Language :: Python :: 3.9",
122 "Programming Language :: Python :: 3.10",
123 "Programming Language :: Python :: 3.11",
124 "License :: OSI Approved :: BSD License",
125 "Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)",
126 "Framework :: cocotb",
127 ],
128 # these appear in the sidebar on PyPI
129 project_urls={
130 "Bug Tracker": "https://github.com/cocotb/cocotb/issues",
131 "Source Code": "https://github.com/cocotb/cocotb",
132 "Documentation": "https://docs.cocotb.org",
133 },
134 )
135
136 print(log_stream.getvalue())
137
[end of setup.py]
</code>
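Editor's note, separate from the record itself: the request in this record is to append the output of `git rev-parse --short HEAD` to dev version strings. A minimal illustration of that idea is below; it assumes `git` is on `PATH` and the working directory is a git checkout, and the version string and fallback behaviour are placeholders rather than the project's final implementation.

```python
# Editor's sketch, not project code: append a short git revision to a
# dev version string, as proposed in this record's issue.
import subprocess

version = "2.0.0.dev0"
if "dev" in version:
    try:
        rev = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], universal_newlines=True
        ).strip()
        version += f"+{rev}"
    except Exception:
        pass  # e.g. installing from an sdist outside a git checkout
print(version)  # something like 2.0.0.dev0+1a2b3c4
```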
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -44,6 +44,7 @@
raise SystemExit("\n".join(msg))
import logging
+import subprocess
from io import StringIO
from os import path, walk
@@ -69,8 +70,26 @@
return paths
-# this sets the __version__ variable
-exec(read_file(path.join("src", "cocotb", "_version.py")))
+version_file_path = path.join("src", "cocotb", "_version.py")
+__version__ = "2.0.0.dev0"
+if "dev" in __version__:
+ try:
+ rev = subprocess.check_output(
+ ["git", "rev-parse", "--short", "HEAD"], universal_newlines=True
+ ).strip()
+ __version__ += f"+{rev}"
+ except Exception as e:
+ # if this is not a git repository and _version.py already exists,
+ # we are probably installing from an sdist, so use the existing _version.py
+ if path.exists(version_file_path):
+ exec(read_file(version_file_path))
+ else:
+ print(e, file=sys.stderr)
+with open(version_file_path, "w") as f:
+ f.write("# Package version\n")
+ f.write("# Generated by setup.py -- do not modify directly\n\n")
+ f.write(f'__version__ = "{__version__}"')
+
# store log from build_libs and display at the end in verbose mode
# see https://github.com/pypa/pip/issues/6634
@@ -83,7 +102,7 @@
setup(
name="cocotb",
cmdclass={"build_ext": build_ext},
- version=__version__, # noqa: F821
+ version=__version__,
description="cocotb is a coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python.",
url="https://www.cocotb.org",
license="BSD",
diff --git a/src/cocotb/_version.py b/src/cocotb/_version.py
deleted file mode 100644
--- a/src/cocotb/_version.py
+++ /dev/null
@@ -1,8 +0,0 @@
-# Package versioning solution originally found here:
-# http://stackoverflow.com/q/458550
-
-# Store the version here so:
-# 1) we don't load dependencies by storing it in __init__.py
-# 2) we can import it in setup.py for the same reason
-# 3) we can import it into your module
-__version__ = "2.0.0.dev0"
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -44,6 +44,7 @@\n raise SystemExit(\"\\n\".join(msg))\n \n import logging\n+import subprocess\n from io import StringIO\n from os import path, walk\n \n@@ -69,8 +70,26 @@\n return paths\n \n \n-# this sets the __version__ variable\n-exec(read_file(path.join(\"src\", \"cocotb\", \"_version.py\")))\n+version_file_path = path.join(\"src\", \"cocotb\", \"_version.py\")\n+__version__ = \"2.0.0.dev0\"\n+if \"dev\" in __version__:\n+ try:\n+ rev = subprocess.check_output(\n+ [\"git\", \"rev-parse\", \"--short\", \"HEAD\"], universal_newlines=True\n+ ).strip()\n+ __version__ += f\"+{rev}\"\n+ except Exception as e:\n+ # if this is not a git repository and _version.py already exists,\n+ # we are probably installing from an sdist, so use the existing _version.py\n+ if path.exists(version_file_path):\n+ exec(read_file(version_file_path))\n+ else:\n+ print(e, file=sys.stderr)\n+with open(version_file_path, \"w\") as f:\n+ f.write(\"# Package version\\n\")\n+ f.write(\"# Generated by setup.py -- do not modify directly\\n\\n\")\n+ f.write(f'__version__ = \"{__version__}\"')\n+\n \n # store log from build_libs and display at the end in verbose mode\n # see https://github.com/pypa/pip/issues/6634\n@@ -83,7 +102,7 @@\n setup(\n name=\"cocotb\",\n cmdclass={\"build_ext\": build_ext},\n- version=__version__, # noqa: F821\n+ version=__version__,\n description=\"cocotb is a coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python.\",\n url=\"https://www.cocotb.org\",\n license=\"BSD\",\ndiff --git a/src/cocotb/_version.py b/src/cocotb/_version.py\ndeleted file mode 100644\n--- a/src/cocotb/_version.py\n+++ /dev/null\n@@ -1,8 +0,0 @@\n-# Package versioning solution originally found here:\n-# http://stackoverflow.com/q/458550\n-\n-# Store the version here so:\n-# 1) we don't load dependencies by storing it in __init__.py\n-# 2) we can import it in setup.py for the same reason\n-# 3) we can import it into your module\n-__version__ = \"2.0.0.dev0\"\n", "issue": "Thoughts on adding git rev to dev version numbers?\nHow would everyone feel about adding `git rev-parse --short HEAD` to `__version__`? Currently it is `2.0.0.dev0` and I'm not sure what the final `0` is for, but I'm guessing it's to manually increment as one feels like? What if instead it were `2.0.0.dev-d379318e`? 
I'd propose that we build the file on the fly in `setup.py` but only add the git rev when the non-suffixed version (stored as a variable in `setup.py`) includes `dev`.\r\n\r\nI can take a stab at this if people are onboard.\n", "before_files": [{"content": "# Package versioning solution originally found here:\n# http://stackoverflow.com/q/458550\n\n# Store the version here so:\n# 1) we don't load dependencies by storing it in __init__.py\n# 2) we can import it in setup.py for the same reason\n# 3) we can import it into your module\n__version__ = \"2.0.0.dev0\"\n", "path": "src/cocotb/_version.py"}, {"content": "#!/usr/bin/env python\n###############################################################################\n# Copyright (c) 2013 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\n\nimport sys\n\nif sys.version_info[:2] < (3, 6): # noqa: UP036 | bug in ruff\n msg = [\n \"This version of cocotb requires at least Python 3.6,\",\n \"you are running Python %d.%d.%d.\"\n % (sys.version_info[0], sys.version_info[1], sys.version_info[2]),\n ]\n msg += [\n \"For more information please refer to the documentation at \",\n \"https://cocotb.readthedocs.io.\",\n ]\n\n raise SystemExit(\"\\n\".join(msg))\n\nimport logging\nfrom io import StringIO\nfrom os import path, walk\n\nfrom setuptools import find_packages, setup\n\n# Note: cocotb is not installed properly yet and is missing dependencies and binaries\n# We can still import other files next to setup.py, as long as they're in MANIFEST.in\n# The below line is necessary for PEP517 support\nsys.path.append(path.dirname(__file__))\nfrom cocotb_build_libs import build_ext, get_ext # noqa: E402\n\n\ndef read_file(fname):\n with open(path.join(path.dirname(__file__), fname), encoding=\"utf8\") as f:\n return f.read()\n\n\ndef package_files(directory):\n paths = []\n for fpath, directories, filenames in walk(directory):\n for filename in filenames:\n paths.append(path.join(\"..\", \"..\", fpath, filename))\n return paths\n\n\n# this sets the __version__ variable\nexec(read_file(path.join(\"src\", \"cocotb\", \"_version.py\")))\n\n# store log from build_libs and display at the end in verbose mode\n# see https://github.com/pypa/pip/issues/6634\nlog_stream = StringIO()\nhandler = logging.StreamHandler(log_stream)\nlog = logging.getLogger(\"cocotb._build_libs\")\nlog.setLevel(logging.INFO)\nlog.addHandler(handler)\n\nsetup(\n name=\"cocotb\",\n cmdclass={\"build_ext\": build_ext},\n version=__version__, # noqa: F821\n description=\"cocotb is a coroutine based cosimulation library for writing VHDL and Verilog testbenches in Python.\",\n url=\"https://www.cocotb.org\",\n license=\"BSD\",\n long_description=read_file(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n author=\"Chris Higgs, Stuart Hodgson\",\n maintainer=\"cocotb contributors\",\n maintainer_email=\"[email protected]\",\n install_requires=[\n \"find_libpython\",\n ],\n python_requires=\">=3.6\",\n packages=find_packages(where=\"src\"),\n package_dir={\"\": \"src\"},\n package_data={\n \"cocotb\": (\n package_files(\"src/cocotb/share/makefiles\")\n + package_files(\"src/cocotb/share/include\") # noqa: W504\n + package_files(\"src/cocotb/share/def\") # noqa: W504\n + package_files(\"src/cocotb/share/lib/verilator\") # noqa: W504\n )\n },\n ext_modules=get_ext(),\n entry_points={\n \"console_scripts\": [\n \"cocotb-config=cocotb.config:main\",\n ]\n },\n platforms=\"any\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming 
Language :: Python :: 3.11\",\n \"License :: OSI Approved :: BSD License\",\n \"Topic :: Scientific/Engineering :: Electronic Design Automation (EDA)\",\n \"Framework :: cocotb\",\n ],\n # these appear in the sidebar on PyPI\n project_urls={\n \"Bug Tracker\": \"https://github.com/cocotb/cocotb/issues\",\n \"Source Code\": \"https://github.com/cocotb/cocotb\",\n \"Documentation\": \"https://docs.cocotb.org\",\n },\n)\n\nprint(log_stream.getvalue())\n", "path": "setup.py"}]} | 2,381 | 616 |
gh_patches_debug_27239 | rasdani/github-patches | git_diff | Kinto__kinto-697 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash when querying `GET /buckets/collections`
<details>
<summary>
<code>ValueError: You cannot build children keys from its parent key.Trying to build type "collection" from object key "/buckets/collections"</code></summary>
```
return authz_policy.permits(context, principals, permission)
File "/home/mathieu/Code/Mozilla/kinto/kinto/core/authorization.py", line 76, in permits
get_bound_permissions=self.get_bound_permissions)
File "/home/mathieu/Code/Mozilla/kinto/kinto/core/authorization.py", line 169, in check_permission
return self._check_permission(self.permission_object_id, *args, **kw)
File "/home/mathieu/Code/Mozilla/kinto/kinto/core/permission/postgresql/__init__.py", line 237, in check_permission
perms = get_bound_permissions(object_id, permission)
File "/home/mathieu/Code/Mozilla/kinto/kinto/authorization.py", line 144, in get_bound_permissions
return build_permissions_set(*args, **kwargs)
File "/home/mathieu/Code/Mozilla/kinto/kinto/authorization.py", line 136, in build_permissions_set
granters.add(build_permission_tuple(obj, permission, obj_parts))
File "/home/mathieu/Code/Mozilla/kinto/kinto/authorization.py", line 105, in build_permission_tuple
obj_type, '/'.join(obj_parts)))
ValueError: You cannot build children keys from its parent key.Trying to build type "collection" from object key "/buckets/collections". lang=None uid=7447fd40326a7840a5135b3fa2e1acef4c516ece4635b6ebad0522aaa9331c3a
```
</details>
</issue>
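Editor's note, not part of the quoted report: the crash happens because a bucket whose id is literally `collections` is classified by substring matching as a collection, and the parts-length check then raises. The sketch below mirrors only the relevant logic of `get_object_type()` from the `kinto/authorization.py` listing that follows; it is illustrative, not the module itself.

```python
# Sketch only: simplified version of the substring-based type detection.
def get_object_type(object_uri):
    if 'records' in object_uri:
        return 'record'
    if 'collections' in object_uri:      # substring match, not a path match
        return 'collection'
    if 'groups' in object_uri:
        return 'group'
    if 'buckets' in object_uri:
        return 'bucket'
    return None

uri = '/buckets/collections'             # bucket id happens to be "collections"
obj_parts = uri.split('/')               # ['', 'buckets', 'collections'] -> 3 parts
print(get_object_type(uri))              # 'collection', although this is a bucket URI
# A collection URI needs 5 parts, so build_permission_tuple() raises the
# "You cannot build children keys from its parent key" ValueError quoted above.
```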
<code>
[start of kinto/authorization.py]
1 from kinto.core import authorization as core_authorization
2 from pyramid.security import IAuthorizationPolicy, Authenticated
3 from zope.interface import implementer
4
5
6 # Vocab really matters when you deal with permissions. Let's do a quick recap
7 # of the terms used here:
8 #
9 # Object URI:
10 # An unique identifier for an object.
11 # for instance, /buckets/blog/collections/articles/records/article1
12 #
13 # Object:
14 # A common denomination of an object (e.g. "collection" or "record")
15 #
16 # Unbound permission:
17 # A permission not bound to an object (e.g. "create")
18 #
19 # Bound permission:
20 # A permission bound to an object (e.g. "collection:create")
21
22
23 # Dictionary which list all permissions a given permission enables.
24 PERMISSIONS_INHERITANCE_TREE = {
25 'bucket:write': {
26 'bucket': ['write']
27 },
28 'bucket:read': {
29 'bucket': ['write', 'read']
30 },
31 'bucket:group:create': {
32 'bucket': ['write', 'group:create']
33 },
34 'bucket:collection:create': {
35 'bucket': ['write', 'collection:create']
36 },
37 'group:write': {
38 'bucket': ['write'],
39 'group': ['write']
40 },
41 'group:read': {
42 'bucket': ['write', 'read'],
43 'group': ['write', 'read']
44 },
45 'collection:write': {
46 'bucket': ['write'],
47 'collection': ['write'],
48 },
49 'collection:read': {
50 'bucket': ['write', 'read'],
51 'collection': ['write', 'read'],
52 },
53 'collection:record:create': {
54 'bucket': ['write'],
55 'collection': ['write', 'record:create']
56 },
57 'record:write': {
58 'bucket': ['write'],
59 'collection': ['write'],
60 'record': ['write']
61 },
62 'record:read': {
63 'bucket': ['write', 'read'],
64 'collection': ['write', 'read'],
65 'record': ['write', 'read']
66 }
67 }
68
69
70 def get_object_type(object_uri):
71 """Return the type of an object from its id."""
72
73 obj_parts = object_uri.split('/')
74 if len(obj_parts) % 2 == 0:
75 object_uri = '/'.join(obj_parts[:-1])
76
77 # Order matters here. More precise is tested first.
78 if 'records' in object_uri:
79 obj_type = 'record'
80 elif 'collections' in object_uri:
81 obj_type = 'collection'
82 elif 'groups' in object_uri:
83 obj_type = 'group'
84 elif 'buckets' in object_uri:
85 obj_type = 'bucket'
86 else:
87 obj_type = None
88 return obj_type
89
90
91 def build_permission_tuple(obj_type, unbound_permission, obj_parts):
92 """Returns a tuple of (object_uri, unbound_permission)"""
93 PARTS_LENGTH = {
94 'bucket': 3,
95 'collection': 5,
96 'group': 5,
97 'record': 7
98 }
99 if obj_type not in PARTS_LENGTH:
100 raise ValueError('Invalid object type: %s' % obj_type)
101
102 if PARTS_LENGTH[obj_type] > len(obj_parts):
103 raise ValueError('You cannot build children keys from its parent key.'
104 'Trying to build type "%s" from object key "%s".' % (
105 obj_type, '/'.join(obj_parts)))
106 length = PARTS_LENGTH[obj_type]
107 return ('/'.join(obj_parts[:length]), unbound_permission)
108
109
110 def build_permissions_set(object_uri, unbound_permission,
111 inheritance_tree=None):
112 """Build a set of all permissions that can grant access to the given
113 object URI and unbound permission.
114
115 >>> build_required_permissions('/buckets/blog', 'write')
116 set(('/buckets/blog', 'write'))
117
118 """
119
120 if inheritance_tree is None:
121 inheritance_tree = PERMISSIONS_INHERITANCE_TREE
122
123 obj_type = get_object_type(object_uri)
124
125 # Unknown object type, does not map the INHERITANCE_TREE.
126 # In that case, the set of related permissions is empty.
127 if obj_type is None:
128 return set()
129
130 bound_permission = '%s:%s' % (obj_type, unbound_permission)
131 granters = set()
132
133 obj_parts = object_uri.split('/')
134 for obj, permission_list in inheritance_tree[bound_permission].items():
135 for permission in permission_list:
136 granters.add(build_permission_tuple(obj, permission, obj_parts))
137
138 return granters
139
140
141 @implementer(IAuthorizationPolicy)
142 class AuthorizationPolicy(core_authorization.AuthorizationPolicy):
143 def get_bound_permissions(self, *args, **kwargs):
144 return build_permissions_set(*args, **kwargs)
145
146
147 class RouteFactory(core_authorization.RouteFactory):
148 pass
149
150
151 class BucketRouteFactory(RouteFactory):
152 def fetch_shared_records(self, perm, principals, get_bound_permissions):
153 """Buckets list is authorized even if no object is accessible for
154 the current principals.
155 """
156 shared = super(BucketRouteFactory, self).fetch_shared_records(
157 perm, principals, get_bound_permissions)
158 if shared is None and Authenticated in principals:
159 self.shared_ids = []
160 return self.shared_ids
161
[end of kinto/authorization.py]
</code>
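Editor's note: for context on the listing above, an illustrative call showing what `build_permissions_set` computes for a well-formed record URI, based on the `record:read` entry of `PERMISSIONS_INHERITANCE_TREE`. It assumes the module is importable as `kinto.authorization`; the expected-output comment is derived by hand from the tree, not captured from a run.

```python
from kinto.authorization import build_permissions_set

granters = build_permissions_set(
    '/buckets/blog/collections/articles/records/article1', 'read')
print(granters)
# Expected contents (set ordering may differ):
# {('/buckets/blog', 'write'), ('/buckets/blog', 'read'),
#  ('/buckets/blog/collections/articles', 'write'),
#  ('/buckets/blog/collections/articles', 'read'),
#  ('/buckets/blog/collections/articles/records/article1', 'write'),
#  ('/buckets/blog/collections/articles/records/article1', 'read')}
```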
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/authorization.py b/kinto/authorization.py
--- a/kinto/authorization.py
+++ b/kinto/authorization.py
@@ -1,7 +1,9 @@
-from kinto.core import authorization as core_authorization
+import re
+
from pyramid.security import IAuthorizationPolicy, Authenticated
from zope.interface import implementer
+from kinto.core import authorization as core_authorization
# Vocab really matters when you deal with permissions. Let's do a quick recap
# of the terms used here:
@@ -69,23 +71,15 @@
def get_object_type(object_uri):
"""Return the type of an object from its id."""
-
- obj_parts = object_uri.split('/')
- if len(obj_parts) % 2 == 0:
- object_uri = '/'.join(obj_parts[:-1])
-
- # Order matters here. More precise is tested first.
- if 'records' in object_uri:
- obj_type = 'record'
- elif 'collections' in object_uri:
- obj_type = 'collection'
- elif 'groups' in object_uri:
- obj_type = 'group'
- elif 'buckets' in object_uri:
- obj_type = 'bucket'
- else:
- obj_type = None
- return obj_type
+ if re.match(r'/buckets/(.+)/collections/(.+)/records/(.+)?', object_uri):
+ return 'record'
+ if re.match(r'/buckets/(.+)/collections/(.+)?', object_uri):
+ return 'collection'
+ if re.match(r'/buckets/(.+)/groups/(.+)?', object_uri):
+ return 'group'
+ if re.match(r'/buckets/(.+)?', object_uri):
+ return 'bucket'
+ return None
def build_permission_tuple(obj_type, unbound_permission, obj_parts):
| {"golden_diff": "diff --git a/kinto/authorization.py b/kinto/authorization.py\n--- a/kinto/authorization.py\n+++ b/kinto/authorization.py\n@@ -1,7 +1,9 @@\n-from kinto.core import authorization as core_authorization\n+import re\n+\n from pyramid.security import IAuthorizationPolicy, Authenticated\n from zope.interface import implementer\n \n+from kinto.core import authorization as core_authorization\n \n # Vocab really matters when you deal with permissions. Let's do a quick recap\n # of the terms used here:\n@@ -69,23 +71,15 @@\n \n def get_object_type(object_uri):\n \"\"\"Return the type of an object from its id.\"\"\"\n-\n- obj_parts = object_uri.split('/')\n- if len(obj_parts) % 2 == 0:\n- object_uri = '/'.join(obj_parts[:-1])\n-\n- # Order matters here. More precise is tested first.\n- if 'records' in object_uri:\n- obj_type = 'record'\n- elif 'collections' in object_uri:\n- obj_type = 'collection'\n- elif 'groups' in object_uri:\n- obj_type = 'group'\n- elif 'buckets' in object_uri:\n- obj_type = 'bucket'\n- else:\n- obj_type = None\n- return obj_type\n+ if re.match(r'/buckets/(.+)/collections/(.+)/records/(.+)?', object_uri):\n+ return 'record'\n+ if re.match(r'/buckets/(.+)/collections/(.+)?', object_uri):\n+ return 'collection'\n+ if re.match(r'/buckets/(.+)/groups/(.+)?', object_uri):\n+ return 'group'\n+ if re.match(r'/buckets/(.+)?', object_uri):\n+ return 'bucket'\n+ return None\n \n \n def build_permission_tuple(obj_type, unbound_permission, obj_parts):\n", "issue": "Crash when querying `GET /buckets/collections`\n<details>\n <summary>\n\n<code>ValueError: You cannot build children keys from its parent key.Trying to build type \"collection\" from object key \"/buckets/collections\"</code></summary>\n\n\n\n```\n return authz_policy.permits(context, principals, permission)\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/core/authorization.py\", line 76, in permits\n get_bound_permissions=self.get_bound_permissions)\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/core/authorization.py\", line 169, in check_permission\n return self._check_permission(self.permission_object_id, *args, **kw)\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/core/permission/postgresql/__init__.py\", line 237, in check_permission\n perms = get_bound_permissions(object_id, permission)\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/authorization.py\", line 144, in get_bound_permissions\n return build_permissions_set(*args, **kwargs)\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/authorization.py\", line 136, in build_permissions_set\n granters.add(build_permission_tuple(obj, permission, obj_parts))\n File \"/home/mathieu/Code/Mozilla/kinto/kinto/authorization.py\", line 105, in build_permission_tuple\n obj_type, '/'.join(obj_parts)))\nValueError: You cannot build children keys from its parent key.Trying to build type \"collection\" from object key \"/buckets/collections\". lang=None uid=7447fd40326a7840a5135b3fa2e1acef4c516ece4635b6ebad0522aaa9331c3a\n```\n\n</details>\n\n", "before_files": [{"content": "from kinto.core import authorization as core_authorization\nfrom pyramid.security import IAuthorizationPolicy, Authenticated\nfrom zope.interface import implementer\n\n\n# Vocab really matters when you deal with permissions. Let's do a quick recap\n# of the terms used here:\n#\n# Object URI:\n# An unique identifier for an object.\n# for instance, /buckets/blog/collections/articles/records/article1\n#\n# Object:\n# A common denomination of an object (e.g. 
\"collection\" or \"record\")\n#\n# Unbound permission:\n# A permission not bound to an object (e.g. \"create\")\n#\n# Bound permission:\n# A permission bound to an object (e.g. \"collection:create\")\n\n\n# Dictionary which list all permissions a given permission enables.\nPERMISSIONS_INHERITANCE_TREE = {\n 'bucket:write': {\n 'bucket': ['write']\n },\n 'bucket:read': {\n 'bucket': ['write', 'read']\n },\n 'bucket:group:create': {\n 'bucket': ['write', 'group:create']\n },\n 'bucket:collection:create': {\n 'bucket': ['write', 'collection:create']\n },\n 'group:write': {\n 'bucket': ['write'],\n 'group': ['write']\n },\n 'group:read': {\n 'bucket': ['write', 'read'],\n 'group': ['write', 'read']\n },\n 'collection:write': {\n 'bucket': ['write'],\n 'collection': ['write'],\n },\n 'collection:read': {\n 'bucket': ['write', 'read'],\n 'collection': ['write', 'read'],\n },\n 'collection:record:create': {\n 'bucket': ['write'],\n 'collection': ['write', 'record:create']\n },\n 'record:write': {\n 'bucket': ['write'],\n 'collection': ['write'],\n 'record': ['write']\n },\n 'record:read': {\n 'bucket': ['write', 'read'],\n 'collection': ['write', 'read'],\n 'record': ['write', 'read']\n }\n}\n\n\ndef get_object_type(object_uri):\n \"\"\"Return the type of an object from its id.\"\"\"\n\n obj_parts = object_uri.split('/')\n if len(obj_parts) % 2 == 0:\n object_uri = '/'.join(obj_parts[:-1])\n\n # Order matters here. More precise is tested first.\n if 'records' in object_uri:\n obj_type = 'record'\n elif 'collections' in object_uri:\n obj_type = 'collection'\n elif 'groups' in object_uri:\n obj_type = 'group'\n elif 'buckets' in object_uri:\n obj_type = 'bucket'\n else:\n obj_type = None\n return obj_type\n\n\ndef build_permission_tuple(obj_type, unbound_permission, obj_parts):\n \"\"\"Returns a tuple of (object_uri, unbound_permission)\"\"\"\n PARTS_LENGTH = {\n 'bucket': 3,\n 'collection': 5,\n 'group': 5,\n 'record': 7\n }\n if obj_type not in PARTS_LENGTH:\n raise ValueError('Invalid object type: %s' % obj_type)\n\n if PARTS_LENGTH[obj_type] > len(obj_parts):\n raise ValueError('You cannot build children keys from its parent key.'\n 'Trying to build type \"%s\" from object key \"%s\".' 
% (\n obj_type, '/'.join(obj_parts)))\n length = PARTS_LENGTH[obj_type]\n return ('/'.join(obj_parts[:length]), unbound_permission)\n\n\ndef build_permissions_set(object_uri, unbound_permission,\n inheritance_tree=None):\n \"\"\"Build a set of all permissions that can grant access to the given\n object URI and unbound permission.\n\n >>> build_required_permissions('/buckets/blog', 'write')\n set(('/buckets/blog', 'write'))\n\n \"\"\"\n\n if inheritance_tree is None:\n inheritance_tree = PERMISSIONS_INHERITANCE_TREE\n\n obj_type = get_object_type(object_uri)\n\n # Unknown object type, does not map the INHERITANCE_TREE.\n # In that case, the set of related permissions is empty.\n if obj_type is None:\n return set()\n\n bound_permission = '%s:%s' % (obj_type, unbound_permission)\n granters = set()\n\n obj_parts = object_uri.split('/')\n for obj, permission_list in inheritance_tree[bound_permission].items():\n for permission in permission_list:\n granters.add(build_permission_tuple(obj, permission, obj_parts))\n\n return granters\n\n\n@implementer(IAuthorizationPolicy)\nclass AuthorizationPolicy(core_authorization.AuthorizationPolicy):\n def get_bound_permissions(self, *args, **kwargs):\n return build_permissions_set(*args, **kwargs)\n\n\nclass RouteFactory(core_authorization.RouteFactory):\n pass\n\n\nclass BucketRouteFactory(RouteFactory):\n def fetch_shared_records(self, perm, principals, get_bound_permissions):\n \"\"\"Buckets list is authorized even if no object is accessible for\n the current principals.\n \"\"\"\n shared = super(BucketRouteFactory, self).fetch_shared_records(\n perm, principals, get_bound_permissions)\n if shared is None and Authenticated in principals:\n self.shared_ids = []\n return self.shared_ids\n", "path": "kinto/authorization.py"}]} | 2,458 | 408 |
gh_patches_debug_5763 | rasdani/github-patches | git_diff | huggingface__diffusers-6410 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ValueError when `to('cuda')` on Value-guided planning pipeline
### Describe the bug
When I run the Value-guided planning pipeline colab <https://colab.research.google.com/drive/1rXm8CX4ZdN5qivjJ2lhwhkOmt_m0CvU0> with `to('cuda')` to speed up the diffusion process. It gave me a ValueError.
### Reproduction
<https://colab.research.google.com/drive/1SFl7daLQxd8QyHJP6ndSPznSFX2rppQ2> Code block 24 `pipeline.to(DEVICE)`
### Logs
```shell
ValueError: ValueGuidedRLPipeline {
"_class_name": "ValueGuidedRLPipeline",
"_diffusers_version": "0.25.0.dev0",
"_name_or_path": "bglick13/hopper-medium-v2-value-function-hor32"
}
has been incorrectly initialized or <class 'diffusers.experimental.rl.value_guided_sampling.ValueGuidedRLPipeline'> is incorrectly implemented. Expected {'env', 'unet', 'value_function', 'scheduler'} to be defined, but dict_keys([]) are defined.
```
### System Info
colab
### Who can help?
@yiyixuxu
</issue>
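Editor's note, separate from the quoted report: `DiffusionPipeline.to()` only moves components that a pipeline has registered through `register_modules()`, which is the direction the patch in this record takes. A minimal sketch of that registration is below (assumes `diffusers` is installed; parameter names follow the pipeline listed after this note, and the snippet is illustrative rather than the exact upstream change).

```python
# Sketch only: register components in __init__ so that pipeline.to("cuda")
# can discover and move them.
from diffusers import DiffusionPipeline


class ValueGuidedRLPipeline(DiffusionPipeline):
    def __init__(self, value_function, unet, scheduler, env):
        super().__init__()
        self.register_modules(
            value_function=value_function, unet=unet, scheduler=scheduler, env=env
        )
```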
<code>
[start of src/diffusers/experimental/rl/value_guided_sampling.py]
1 # Copyright 2023 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numpy as np
16 import torch
17 import tqdm
18
19 from ...models.unet_1d import UNet1DModel
20 from ...pipelines import DiffusionPipeline
21 from ...utils.dummy_pt_objects import DDPMScheduler
22 from ...utils.torch_utils import randn_tensor
23
24
25 class ValueGuidedRLPipeline(DiffusionPipeline):
26 r"""
27 Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states.
28
29 This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
30 implemented for all pipelines (downloading, saving, running on a particular device, etc.).
31
32 Parameters:
33 value_function ([`UNet1DModel`]):
34 A specialized UNet for fine-tuning trajectories base on reward.
35 unet ([`UNet1DModel`]):
36 UNet architecture to denoise the encoded trajectories.
37 scheduler ([`SchedulerMixin`]):
38 A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this
39 application is [`DDPMScheduler`].
40 env ():
41 An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.
42 """
43
44 def __init__(
45 self,
46 value_function: UNet1DModel,
47 unet: UNet1DModel,
48 scheduler: DDPMScheduler,
49 env,
50 ):
51 super().__init__()
52 self.value_function = value_function
53 self.unet = unet
54 self.scheduler = scheduler
55 self.env = env
56 self.data = env.get_dataset()
57 self.means = {}
58 for key in self.data.keys():
59 try:
60 self.means[key] = self.data[key].mean()
61 except: # noqa: E722
62 pass
63 self.stds = {}
64 for key in self.data.keys():
65 try:
66 self.stds[key] = self.data[key].std()
67 except: # noqa: E722
68 pass
69 self.state_dim = env.observation_space.shape[0]
70 self.action_dim = env.action_space.shape[0]
71
72 def normalize(self, x_in, key):
73 return (x_in - self.means[key]) / self.stds[key]
74
75 def de_normalize(self, x_in, key):
76 return x_in * self.stds[key] + self.means[key]
77
78 def to_torch(self, x_in):
79 if isinstance(x_in, dict):
80 return {k: self.to_torch(v) for k, v in x_in.items()}
81 elif torch.is_tensor(x_in):
82 return x_in.to(self.unet.device)
83 return torch.tensor(x_in, device=self.unet.device)
84
85 def reset_x0(self, x_in, cond, act_dim):
86 for key, val in cond.items():
87 x_in[:, key, act_dim:] = val.clone()
88 return x_in
89
90 def run_diffusion(self, x, conditions, n_guide_steps, scale):
91 batch_size = x.shape[0]
92 y = None
93 for i in tqdm.tqdm(self.scheduler.timesteps):
94 # create batch of timesteps to pass into model
95 timesteps = torch.full((batch_size,), i, device=self.unet.device, dtype=torch.long)
96 for _ in range(n_guide_steps):
97 with torch.enable_grad():
98 x.requires_grad_()
99
100 # permute to match dimension for pre-trained models
101 y = self.value_function(x.permute(0, 2, 1), timesteps).sample
102 grad = torch.autograd.grad([y.sum()], [x])[0]
103
104 posterior_variance = self.scheduler._get_variance(i)
105 model_std = torch.exp(0.5 * posterior_variance)
106 grad = model_std * grad
107
108 grad[timesteps < 2] = 0
109 x = x.detach()
110 x = x + scale * grad
111 x = self.reset_x0(x, conditions, self.action_dim)
112
113 prev_x = self.unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1)
114
115 # TODO: verify deprecation of this kwarg
116 x = self.scheduler.step(prev_x, i, x)["prev_sample"]
117
118 # apply conditions to the trajectory (set the initial state)
119 x = self.reset_x0(x, conditions, self.action_dim)
120 x = self.to_torch(x)
121 return x, y
122
123 def __call__(self, obs, batch_size=64, planning_horizon=32, n_guide_steps=2, scale=0.1):
124 # normalize the observations and create batch dimension
125 obs = self.normalize(obs, "observations")
126 obs = obs[None].repeat(batch_size, axis=0)
127
128 conditions = {0: self.to_torch(obs)}
129 shape = (batch_size, planning_horizon, self.state_dim + self.action_dim)
130
131 # generate initial noise and apply our conditions (to make the trajectories start at current state)
132 x1 = randn_tensor(shape, device=self.unet.device)
133 x = self.reset_x0(x1, conditions, self.action_dim)
134 x = self.to_torch(x)
135
136 # run the diffusion process
137 x, y = self.run_diffusion(x, conditions, n_guide_steps, scale)
138
139 # sort output trajectories by value
140 sorted_idx = y.argsort(0, descending=True).squeeze()
141 sorted_values = x[sorted_idx]
142 actions = sorted_values[:, :, : self.action_dim]
143 actions = actions.detach().cpu().numpy()
144 denorm_actions = self.de_normalize(actions, key="actions")
145
146 # select the action with the highest value
147 if y is not None:
148 selected_index = 0
149 else:
150 # if we didn't run value guiding, select a random action
151 selected_index = np.random.randint(0, batch_size)
152
153 denorm_actions = denorm_actions[selected_index, 0]
154 return denorm_actions
155
[end of src/diffusers/experimental/rl/value_guided_sampling.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/diffusers/experimental/rl/value_guided_sampling.py b/src/diffusers/experimental/rl/value_guided_sampling.py
--- a/src/diffusers/experimental/rl/value_guided_sampling.py
+++ b/src/diffusers/experimental/rl/value_guided_sampling.py
@@ -49,10 +49,9 @@
env,
):
super().__init__()
- self.value_function = value_function
- self.unet = unet
- self.scheduler = scheduler
- self.env = env
+
+ self.register_modules(value_function=value_function, unet=unet, scheduler=scheduler, env=env)
+
self.data = env.get_dataset()
self.means = {}
for key in self.data.keys():
| {"golden_diff": "diff --git a/src/diffusers/experimental/rl/value_guided_sampling.py b/src/diffusers/experimental/rl/value_guided_sampling.py\n--- a/src/diffusers/experimental/rl/value_guided_sampling.py\n+++ b/src/diffusers/experimental/rl/value_guided_sampling.py\n@@ -49,10 +49,9 @@\n env,\n ):\n super().__init__()\n- self.value_function = value_function\n- self.unet = unet\n- self.scheduler = scheduler\n- self.env = env\n+\n+ self.register_modules(value_function=value_function, unet=unet, scheduler=scheduler, env=env)\n+\n self.data = env.get_dataset()\n self.means = {}\n for key in self.data.keys():\n", "issue": "ValueError when `to('cuda')` on Value-guided planning pipeline\n### Describe the bug\r\n\r\nWhen I run the Value-guided planning pipeline colab <https://colab.research.google.com/drive/1rXm8CX4ZdN5qivjJ2lhwhkOmt_m0CvU0> with `to('cuda')` to speed up the diffusion process. It gave me a ValueError.\r\n\r\n### Reproduction\r\n\r\n<https://colab.research.google.com/drive/1SFl7daLQxd8QyHJP6ndSPznSFX2rppQ2> Code block 24 `pipeline.to(DEVICE)`\r\n\r\n### Logs\r\n\r\n```shell\r\nValueError: ValueGuidedRLPipeline {\r\n \"_class_name\": \"ValueGuidedRLPipeline\",\r\n \"_diffusers_version\": \"0.25.0.dev0\",\r\n \"_name_or_path\": \"bglick13/hopper-medium-v2-value-function-hor32\"\r\n}\r\n has been incorrectly initialized or <class 'diffusers.experimental.rl.value_guided_sampling.ValueGuidedRLPipeline'> is incorrectly implemented. Expected {'env', 'unet', 'value_function', 'scheduler'} to be defined, but dict_keys([]) are defined.\r\n```\r\n\r\n### System Info\r\n\r\ncolab\r\n\r\n### Who can help?\r\n\r\n@yiyixuxu\nValueError when `to('cuda')` on Value-guided planning pipeline\n### Describe the bug\r\n\r\nWhen I run the Value-guided planning pipeline colab <https://colab.research.google.com/drive/1rXm8CX4ZdN5qivjJ2lhwhkOmt_m0CvU0> with `to('cuda')` to speed up the diffusion process. It gave me a ValueError.\r\n\r\n### Reproduction\r\n\r\n<https://colab.research.google.com/drive/1SFl7daLQxd8QyHJP6ndSPznSFX2rppQ2> Code block 24 `pipeline.to(DEVICE)`\r\n\r\n### Logs\r\n\r\n```shell\r\nValueError: ValueGuidedRLPipeline {\r\n \"_class_name\": \"ValueGuidedRLPipeline\",\r\n \"_diffusers_version\": \"0.25.0.dev0\",\r\n \"_name_or_path\": \"bglick13/hopper-medium-v2-value-function-hor32\"\r\n}\r\n has been incorrectly initialized or <class 'diffusers.experimental.rl.value_guided_sampling.ValueGuidedRLPipeline'> is incorrectly implemented. Expected {'env', 'unet', 'value_function', 'scheduler'} to be defined, but dict_keys([]) are defined.\r\n```\r\n\r\n### System Info\r\n\r\ncolab\r\n\r\n### Who can help?\r\n\r\n@yiyixuxu\n", "before_files": [{"content": "# Copyright 2023 The HuggingFace Team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport torch\nimport tqdm\n\nfrom ...models.unet_1d import UNet1DModel\nfrom ...pipelines import DiffusionPipeline\nfrom ...utils.dummy_pt_objects import DDPMScheduler\nfrom ...utils.torch_utils import randn_tensor\n\n\nclass ValueGuidedRLPipeline(DiffusionPipeline):\n r\"\"\"\n Pipeline for value-guided sampling from a diffusion model trained to predict sequences of states.\n\n This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods\n implemented for all pipelines (downloading, saving, running on a particular device, etc.).\n\n Parameters:\n value_function ([`UNet1DModel`]):\n A specialized UNet for fine-tuning trajectories base on reward.\n unet ([`UNet1DModel`]):\n UNet architecture to denoise the encoded trajectories.\n scheduler ([`SchedulerMixin`]):\n A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this\n application is [`DDPMScheduler`].\n env ():\n An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.\n \"\"\"\n\n def __init__(\n self,\n value_function: UNet1DModel,\n unet: UNet1DModel,\n scheduler: DDPMScheduler,\n env,\n ):\n super().__init__()\n self.value_function = value_function\n self.unet = unet\n self.scheduler = scheduler\n self.env = env\n self.data = env.get_dataset()\n self.means = {}\n for key in self.data.keys():\n try:\n self.means[key] = self.data[key].mean()\n except: # noqa: E722\n pass\n self.stds = {}\n for key in self.data.keys():\n try:\n self.stds[key] = self.data[key].std()\n except: # noqa: E722\n pass\n self.state_dim = env.observation_space.shape[0]\n self.action_dim = env.action_space.shape[0]\n\n def normalize(self, x_in, key):\n return (x_in - self.means[key]) / self.stds[key]\n\n def de_normalize(self, x_in, key):\n return x_in * self.stds[key] + self.means[key]\n\n def to_torch(self, x_in):\n if isinstance(x_in, dict):\n return {k: self.to_torch(v) for k, v in x_in.items()}\n elif torch.is_tensor(x_in):\n return x_in.to(self.unet.device)\n return torch.tensor(x_in, device=self.unet.device)\n\n def reset_x0(self, x_in, cond, act_dim):\n for key, val in cond.items():\n x_in[:, key, act_dim:] = val.clone()\n return x_in\n\n def run_diffusion(self, x, conditions, n_guide_steps, scale):\n batch_size = x.shape[0]\n y = None\n for i in tqdm.tqdm(self.scheduler.timesteps):\n # create batch of timesteps to pass into model\n timesteps = torch.full((batch_size,), i, device=self.unet.device, dtype=torch.long)\n for _ in range(n_guide_steps):\n with torch.enable_grad():\n x.requires_grad_()\n\n # permute to match dimension for pre-trained models\n y = self.value_function(x.permute(0, 2, 1), timesteps).sample\n grad = torch.autograd.grad([y.sum()], [x])[0]\n\n posterior_variance = self.scheduler._get_variance(i)\n model_std = torch.exp(0.5 * posterior_variance)\n grad = model_std * grad\n\n grad[timesteps < 2] = 0\n x = 
x.detach()\n x = x + scale * grad\n x = self.reset_x0(x, conditions, self.action_dim)\n\n prev_x = self.unet(x.permute(0, 2, 1), timesteps).sample.permute(0, 2, 1)\n\n # TODO: verify deprecation of this kwarg\n x = self.scheduler.step(prev_x, i, x)[\"prev_sample\"]\n\n # apply conditions to the trajectory (set the initial state)\n x = self.reset_x0(x, conditions, self.action_dim)\n x = self.to_torch(x)\n return x, y\n\n def __call__(self, obs, batch_size=64, planning_horizon=32, n_guide_steps=2, scale=0.1):\n # normalize the observations and create batch dimension\n obs = self.normalize(obs, \"observations\")\n obs = obs[None].repeat(batch_size, axis=0)\n\n conditions = {0: self.to_torch(obs)}\n shape = (batch_size, planning_horizon, self.state_dim + self.action_dim)\n\n # generate initial noise and apply our conditions (to make the trajectories start at current state)\n x1 = randn_tensor(shape, device=self.unet.device)\n x = self.reset_x0(x1, conditions, self.action_dim)\n x = self.to_torch(x)\n\n # run the diffusion process\n x, y = self.run_diffusion(x, conditions, n_guide_steps, scale)\n\n # sort output trajectories by value\n sorted_idx = y.argsort(0, descending=True).squeeze()\n sorted_values = x[sorted_idx]\n actions = sorted_values[:, :, : self.action_dim]\n actions = actions.detach().cpu().numpy()\n denorm_actions = self.de_normalize(actions, key=\"actions\")\n\n # select the action with the highest value\n if y is not None:\n selected_index = 0\n else:\n # if we didn't run value guiding, select a random action\n selected_index = np.random.randint(0, batch_size)\n\n denorm_actions = denorm_actions[selected_index, 0]\n return denorm_actions\n", "path": "src/diffusers/experimental/rl/value_guided_sampling.py"}]} | 2,916 | 167 |
gh_patches_debug_23702 | rasdani/github-patches | git_diff | sunpy__sunpy-1409 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Download of sample data is repeated for each server
Why do we have `sunpy.data.download_sample_data()` download all of the sample data files twice, once for each of the two servers (data.sunpy.org and hesperia.gsfc.nasa.gov)? This seems silly.
Lines 60–61 in `sunpy/data/_sample.py`:
``` python
for base_url in _base_urls:
for file_name in _files.itervalues():
```
Output:
```
>>> import sunpy.data
>>> sunpy.data.download_sample_data()
Downloading sample files to c:/Users/Albert\sunpy\data/sample_data
Downloading http://data.sunpy.org/sample-data/BIR_20110922_103000_01.fit
|===========================================| 760k/760k (100.00%) 4s
Downloading http://data.sunpy.org/sample-data/swap_lv1_20120101_001607.fits
|===========================================| 2.1M/2.1M (100.00%) 4s
Downloading http://data.sunpy.org/sample-data/eit_l1_20020625_100011.fits
|===========================================| 8.3M/8.3M (100.00%) 10s
Downloading http://data.sunpy.org/sample-data/aia.lev1.193A_2013-09-21T16_00_06.84Z.image_
lev1.fits.zip
|===========================================| 12M/ 12M (100.00%) 22s
Unpacking: aia.lev1.193A_2013-09-21T16_00_06.84Z.image_lev1.fits
Downloading http://data.sunpy.org/sample-data/hsi_calib_ev_20020220_1106_20020220_1106_25_
40.fits
|===========================================| 207k/207k (100.00%) 0s
Downloading http://data.sunpy.org/sample-data/AIA20110319_105400_0171.fits
|===========================================| 4.2M/4.2M (100.00%) 6s
Downloading http://data.sunpy.org/sample-data/hsi_image_20101016_191218.fits
|===========================================| 95k/ 95k (100.00%) 0s
Downloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/BIR_20110922_103000_
01.fit
|===========================================| 760k/760k (100.00%) 0s
Downloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/swap_lv1_20120101_00
1607.fits
|===========================================| 2.1M/2.1M (100.00%) 2s
Downloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/eit_l1_20020625_1000
11.fits
|===========================================| 8.3M/8.3M (100.00%) 6s
Downloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/aia.lev1.193A_2013-0
9-21T16_00_06.84Z.image_lev1.fits.zip
|===========================================| 12M/ 12M (100.00%) 10s
Unpacking: aia.lev1.193A_2013-09-21T16_00_06.84Z.image_lev1.fits
Downloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/hsi_calib_ev_2002022
0_1106_20020220_1106_25_40.fits
|===========================================| 207k/207k (100.00%) 0s
Downloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/AIA20110319_105400_0
171.fits
|===========================================| 4.2M/4.2M (100.00%) 3s
Downloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/hsi_image_20101016_1
91218.fits
|===========================================| 95k/ 95k (100.00%) 0s
```
</issue>
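Editor's note, separate from the quoted report: the repeated downloads come from iterating the servers in the outer loop. A small sketch of the intended behaviour, fetching each file once and falling back across mirrors, is below; the helper names and signature are illustrative, not SunPy's API.

```python
import os


def fetch_sample_files(files, base_urls, fetch_one):
    """Download each sample file once, trying mirrors in order."""
    fetched = 0
    for file_name, ext in files.values():   # e.g. ("BIR_20110922_103000_01.fit", "")
        for base_url in base_urls:           # mirrors, in order of preference
            url = os.path.join(base_url, file_name + ext)
            if fetch_one(url):               # caller downloads (and unpacks) one URL
                fetched += 1
                break                        # stop after the first mirror that works
    return fetched
```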
<code>
[start of sunpy/data/_sample.py]
1 # -*- coding: utf-8 -*-
2 """SunPy sample data files"""
3 from __future__ import absolute_import
4
5 from os import remove
6 import os.path
7 from zipfile import ZipFile
8 from urllib2 import URLError
9 from shutil import move
10
11 from astropy.utils.data import download_file
12
13 from sunpy.util.net import url_exists
14 from sunpy import config
15
16 __author__ = "Steven Christe"
17 __email__ = "[email protected]"
18
19
20 sampledata_dir = config.get("downloads", "sample_dir")
21
22 # urls to search for the sample data
23 _base_urls = (
24 'http://data.sunpy.org/sample-data/',
25 'http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/')
26
27 # keys are file shortcuts
28 # values consist of filename as well as optional file extension if files are
29 # hosted compressed. This extension is removed after download.
30 _files = {
31 "AIA_171_IMAGE": ("AIA20110319_105400_0171.fits", ""),
32 "RHESSI_IMAGE": ("hsi_image_20101016_191218.fits", ""),
33 "EIT_195_IMAGE": ("eit_l1_20020625_100011.fits", ""),
34 "CALLISTO_IMAGE": ("BIR_20110922_103000_01.fit", ""),
35 "RHESSI_EVENT_LIST": ("hsi_calib_ev_20020220_1106_20020220_1106_25_40.fits", ""),
36 "SWAP_LEVEL1_IMAGE": ("swap_lv1_20120101_001607.fits", ""),
37 "AIA_193_IMAGE": ("aia.lev1.193A_2013-09-21T16_00_06.84Z.image_lev1.fits", ".zip")
38 }
39
40 sample_files = {}
41 for key in _files:
42 sample_files[key] = os.path.abspath(os.path.join(sampledata_dir, _files[key][0]))
43
44
45 def download_sample_data(progress=True):
46 """
47 Download the sample data.
48
49 Parameters
50 ----------
51 progress: bool
52 Show a progress bar during download
53
54 Returns
55 -------
56 None
57 """
58 number_of_files_fetched = 0
59 print("Downloading sample files to " + sampledata_dir)
60 for base_url in _base_urls:
61 for file_name in _files.itervalues():
62 full_file_name = file_name[0] + file_name[1]
63 if url_exists(os.path.join(base_url, full_file_name)):
64 f = download_file(os.path.join(base_url, full_file_name))
65 real_name, ext = os.path.splitext(full_file_name)
66
67 if file_name[1] == '.zip':
68 print("Unpacking: %s" % real_name)
69 with ZipFile(f, 'r') as zip_file:
70 zip_file.extract(real_name, sampledata_dir)
71 remove(f)
72 else:
73 # move files to the data directory
74 move(f, os.path.join(sampledata_dir, file_name[0]))
75 # increment the number of files obtained to check later
76 number_of_files_fetched += 1
77
78 if number_of_files_fetched < len(_files.keys()):
79 raise URLError("Could not download all samples files. Problem with accessing sample data servers.")
80
[end of sunpy/data/_sample.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sunpy/data/_sample.py b/sunpy/data/_sample.py
--- a/sunpy/data/_sample.py
+++ b/sunpy/data/_sample.py
@@ -57,8 +57,8 @@
"""
number_of_files_fetched = 0
print("Downloading sample files to " + sampledata_dir)
- for base_url in _base_urls:
- for file_name in _files.itervalues():
+ for file_name in _files.itervalues():
+ for base_url in _base_urls:
full_file_name = file_name[0] + file_name[1]
if url_exists(os.path.join(base_url, full_file_name)):
f = download_file(os.path.join(base_url, full_file_name))
@@ -74,6 +74,7 @@
move(f, os.path.join(sampledata_dir, file_name[0]))
# increment the number of files obtained to check later
number_of_files_fetched += 1
+ break
if number_of_files_fetched < len(_files.keys()):
raise URLError("Could not download all samples files. Problem with accessing sample data servers.")
| {"golden_diff": "diff --git a/sunpy/data/_sample.py b/sunpy/data/_sample.py\n--- a/sunpy/data/_sample.py\n+++ b/sunpy/data/_sample.py\n@@ -57,8 +57,8 @@\n \"\"\"\n number_of_files_fetched = 0\n print(\"Downloading sample files to \" + sampledata_dir)\n- for base_url in _base_urls:\n- for file_name in _files.itervalues():\n+ for file_name in _files.itervalues():\n+ for base_url in _base_urls:\n full_file_name = file_name[0] + file_name[1]\n if url_exists(os.path.join(base_url, full_file_name)):\n f = download_file(os.path.join(base_url, full_file_name))\n@@ -74,6 +74,7 @@\n move(f, os.path.join(sampledata_dir, file_name[0]))\n # increment the number of files obtained to check later\n number_of_files_fetched += 1\n+ break\n \n if number_of_files_fetched < len(_files.keys()):\n raise URLError(\"Could not download all samples files. Problem with accessing sample data servers.\")\n", "issue": "Download of sample data is repeated for each server\nWhy do we have `sunpy.data.download_sample_data()` download all of the sample data files twice, once for each of the two servers (data.sunpy.org and hesperia.gsfc.nasa.gov)? This seems silly.\n\nLines 60\u201361 in `sunpy/data/_sample.py`:\n\n``` python\n for base_url in _base_urls:\n for file_name in _files.itervalues():\n```\n\nOutput:\n\n```\n>>> import sunpy.data\n>>> sunpy.data.download_sample_data()\nDownloading sample files to c:/Users/Albert\\sunpy\\data/sample_data\nDownloading http://data.sunpy.org/sample-data/BIR_20110922_103000_01.fit\n|===========================================| 760k/760k (100.00%) 4s\nDownloading http://data.sunpy.org/sample-data/swap_lv1_20120101_001607.fits\n|===========================================| 2.1M/2.1M (100.00%) 4s\nDownloading http://data.sunpy.org/sample-data/eit_l1_20020625_100011.fits\n|===========================================| 8.3M/8.3M (100.00%) 10s\nDownloading http://data.sunpy.org/sample-data/aia.lev1.193A_2013-09-21T16_00_06.84Z.image_\nlev1.fits.zip\n|===========================================| 12M/ 12M (100.00%) 22s\nUnpacking: aia.lev1.193A_2013-09-21T16_00_06.84Z.image_lev1.fits\nDownloading http://data.sunpy.org/sample-data/hsi_calib_ev_20020220_1106_20020220_1106_25_\n40.fits\n|===========================================| 207k/207k (100.00%) 0s\nDownloading http://data.sunpy.org/sample-data/AIA20110319_105400_0171.fits\n|===========================================| 4.2M/4.2M (100.00%) 6s\nDownloading http://data.sunpy.org/sample-data/hsi_image_20101016_191218.fits\n|===========================================| 95k/ 95k (100.00%) 0s\nDownloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/BIR_20110922_103000_\n01.fit\n|===========================================| 760k/760k (100.00%) 0s\nDownloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/swap_lv1_20120101_00\n1607.fits\n|===========================================| 2.1M/2.1M (100.00%) 2s\nDownloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/eit_l1_20020625_1000\n11.fits\n|===========================================| 8.3M/8.3M (100.00%) 6s\nDownloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/aia.lev1.193A_2013-0\n9-21T16_00_06.84Z.image_lev1.fits.zip\n|===========================================| 12M/ 12M (100.00%) 10s\nUnpacking: aia.lev1.193A_2013-09-21T16_00_06.84Z.image_lev1.fits\nDownloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/hsi_calib_ev_2002022\n0_1106_20020220_1106_25_40.fits\n|===========================================| 207k/207k (100.00%) 
0s\nDownloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/AIA20110319_105400_0\n171.fits\n|===========================================| 4.2M/4.2M (100.00%) 3s\nDownloading http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/hsi_image_20101016_1\n91218.fits\n|===========================================| 95k/ 95k (100.00%) 0s\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"SunPy sample data files\"\"\"\nfrom __future__ import absolute_import\n\nfrom os import remove\nimport os.path\nfrom zipfile import ZipFile\nfrom urllib2 import URLError\nfrom shutil import move\n\nfrom astropy.utils.data import download_file\n\nfrom sunpy.util.net import url_exists\nfrom sunpy import config\n\n__author__ = \"Steven Christe\"\n__email__ = \"[email protected]\"\n\n\nsampledata_dir = config.get(\"downloads\", \"sample_dir\")\n\n# urls to search for the sample data\n_base_urls = (\n 'http://data.sunpy.org/sample-data/',\n 'http://hesperia.gsfc.nasa.gov/~schriste/sunpy-sample-data/')\n\n# keys are file shortcuts\n# values consist of filename as well as optional file extension if files are\n# hosted compressed. This extension is removed after download.\n_files = {\n \"AIA_171_IMAGE\": (\"AIA20110319_105400_0171.fits\", \"\"),\n \"RHESSI_IMAGE\": (\"hsi_image_20101016_191218.fits\", \"\"),\n \"EIT_195_IMAGE\": (\"eit_l1_20020625_100011.fits\", \"\"),\n \"CALLISTO_IMAGE\": (\"BIR_20110922_103000_01.fit\", \"\"),\n \"RHESSI_EVENT_LIST\": (\"hsi_calib_ev_20020220_1106_20020220_1106_25_40.fits\", \"\"),\n \"SWAP_LEVEL1_IMAGE\": (\"swap_lv1_20120101_001607.fits\", \"\"),\n \"AIA_193_IMAGE\": (\"aia.lev1.193A_2013-09-21T16_00_06.84Z.image_lev1.fits\", \".zip\")\n}\n\nsample_files = {}\nfor key in _files:\n sample_files[key] = os.path.abspath(os.path.join(sampledata_dir, _files[key][0]))\n\n\ndef download_sample_data(progress=True):\n \"\"\"\n Download the sample data.\n\n Parameters\n ----------\n progress: bool\n Show a progress bar during download\n\n Returns\n -------\n None\n \"\"\"\n number_of_files_fetched = 0\n print(\"Downloading sample files to \" + sampledata_dir)\n for base_url in _base_urls:\n for file_name in _files.itervalues():\n full_file_name = file_name[0] + file_name[1]\n if url_exists(os.path.join(base_url, full_file_name)):\n f = download_file(os.path.join(base_url, full_file_name))\n real_name, ext = os.path.splitext(full_file_name)\n\n if file_name[1] == '.zip':\n print(\"Unpacking: %s\" % real_name)\n with ZipFile(f, 'r') as zip_file:\n zip_file.extract(real_name, sampledata_dir)\n remove(f)\n else:\n # move files to the data directory\n move(f, os.path.join(sampledata_dir, file_name[0]))\n # increment the number of files obtained to check later\n number_of_files_fetched += 1\n\n if number_of_files_fetched < len(_files.keys()):\n raise URLError(\"Could not download all samples files. Problem with accessing sample data servers.\")\n", "path": "sunpy/data/_sample.py"}]} | 2,656 | 252 |
gh_patches_debug_22897 | rasdani/github-patches | git_diff | NVIDIA__apex-490 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docker build fails on pytorch master since Aug 15, 2019 with "AssertionError: Found no NVIDIA driver on your system"
Pytorch recently added a CUDA architecture check when building with torch.utils.cpp_extension. From the [commit notes](https://github.com/pytorch/pytorch/commit/cd207737017db8c81584763207df20bc6110ed75):
> The old behavior was to always use `sm_30`. The new behavior is:
> - For building via a setup.py, check if `'arch'` is in `extra_compile_args`. If so, don't change anything.
> - If `TORCH_CUDA_ARCH_LIST` is set, respect that (can be 1 or more arches)
> - Otherwise, query device capability and use that.
Following this commit, when attempting to install apex via nvidia-docker, PyTorch falls back to querying device capability, which fails because no GPU is visible during the image build. The issue should probably be addressed in apex's setup.py, if not in nvidia-docker/PyTorch themselves. Currently, I work around the issue by setting the environment variable in the Dockerfile, e.g.:
`ENV TORCH_CUDA_ARCH_LIST=Volta;Turing;Kepler+Tesla`
`RUN pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" apex/`
</issue>
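To make the failure mode concrete: during `docker build` no GPU is visible, so the new device-capability query cannot succeed. Below is a minimal sketch of guarding for that case before building the extensions; the fallback arch list is an illustrative assumption, not the project's choice.

```python
# Hypothetical guard for a GPU-less build environment (e.g. during `docker build`):
# if Torch cannot see a GPU and no arch list was given, fall back to a fixed set of
# compute capabilities instead of letting cpp_extension query devices.
import os
import torch

if not torch.cuda.is_available() and os.environ.get("TORCH_CUDA_ARCH_LIST") is None:
    # Cross-compile targets are an assumption here; adjust to the GPUs you deploy on.
    os.environ["TORCH_CUDA_ARCH_LIST"] = "6.0;6.1;7.0;7.5"
```

Exporting `TORCH_CUDA_ARCH_LIST` in the Dockerfile, as shown in the issue, achieves the same effect from outside the build script.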
<code>
[start of setup.py]
1 import torch
2 from setuptools import setup, find_packages
3 import subprocess
4
5 from pip._internal import main as pipmain
6 import sys
7 import warnings
8
9 if not torch.cuda.is_available():
10 print("\nWarning: Torch did not find available GPUs on this system.\n",
11 "If your intention is to cross-compile, this is not an error.\n")
12
13 print("torch.__version__ = ", torch.__version__)
14 TORCH_MAJOR = int(torch.__version__.split('.')[0])
15 TORCH_MINOR = int(torch.__version__.split('.')[1])
16
17 if TORCH_MAJOR == 0 and TORCH_MINOR < 4:
18 raise RuntimeError("Apex requires Pytorch 0.4 or newer.\n" +
19 "The latest stable release can be obtained from https://pytorch.org/")
20
21 cmdclass = {}
22 ext_modules = []
23
24 if "--pyprof" in sys.argv:
25 with open('requirements.txt') as f:
26 required_packages = f.read().splitlines()
27 pipmain(["install"] + required_packages)
28 try:
29 sys.argv.remove("--pyprof")
30 except:
31 pass
32 else:
33 warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!")
34
35 if "--cpp_ext" in sys.argv or "--cuda_ext" in sys.argv:
36 if TORCH_MAJOR == 0:
37 raise RuntimeError("--cpp_ext requires Pytorch 1.0 or later, "
38 "found torch.__version__ = {}".format(torch.__version__))
39 from torch.utils.cpp_extension import BuildExtension
40 cmdclass['build_ext'] = BuildExtension
41
42 if "--cpp_ext" in sys.argv:
43 from torch.utils.cpp_extension import CppExtension
44 sys.argv.remove("--cpp_ext")
45 ext_modules.append(
46 CppExtension('apex_C',
47 ['csrc/flatten_unflatten.cpp',]))
48
49 def check_cuda_torch_binary_vs_bare_metal(cuda_dir):
50 raw_output = subprocess.check_output([cuda_dir + "/bin/nvcc", "-V"], universal_newlines=True)
51 output = raw_output.split()
52 release_idx = output.index("release") + 1
53 release = output[release_idx].split(".")
54 bare_metal_major = release[0]
55 bare_metal_minor = release[1][0]
56 torch_binary_major = torch.version.cuda.split(".")[0]
57 torch_binary_minor = torch.version.cuda.split(".")[1]
58
59 print("\nCompiling cuda extensions with")
60 print(raw_output + "from " + cuda_dir + "/bin\n")
61
62 if (bare_metal_major != torch_binary_major) or (bare_metal_minor != torch_binary_minor):
63 raise RuntimeError("Cuda extensions are being compiled with a version of Cuda that does " +
64 "not match the version used to compile Pytorch binaries. " +
65 "Pytorch binaries were compiled with Cuda {}.\n".format(torch.version.cuda) +
66 "In some cases, a minor-version mismatch will not cause later errors: " +
67 "https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. "
68 "You can try commenting out this check (at your own risk).")
69
70 # Set up macros for forward/backward compatibility hack around
71 # https://github.com/pytorch/pytorch/commit/4404762d7dd955383acee92e6f06b48144a0742e
72 # and
73 # https://github.com/NVIDIA/apex/issues/456
74 # https://github.com/pytorch/pytorch/commit/eb7b39e02f7d75c26d8a795ea8c7fd911334da7e#diff-4632522f237f1e4e728cb824300403ac
75 version_ge_1_1 = []
76 if (TORCH_MAJOR > 1) or (TORCH_MAJOR == 1 and TORCH_MINOR > 0):
77 version_ge_1_1 = ['-DVERSION_GE_1_1']
78 version_ge_1_3 = []
79 if (TORCH_MAJOR > 1) or (TORCH_MAJOR == 1 and TORCH_MINOR > 2):
80 version_ge_1_3 = ['-DVERSION_GE_1_3']
81 version_dependent_macros = version_ge_1_1 + version_ge_1_3
82
83 if "--cuda_ext" in sys.argv:
84 from torch.utils.cpp_extension import CUDAExtension
85 sys.argv.remove("--cuda_ext")
86
87 if torch.utils.cpp_extension.CUDA_HOME is None:
88 raise RuntimeError("--cuda_ext was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.")
89 else:
90 check_cuda_torch_binary_vs_bare_metal(torch.utils.cpp_extension.CUDA_HOME)
91
92 ext_modules.append(
93 CUDAExtension(name='amp_C',
94 sources=['csrc/amp_C_frontend.cpp',
95 'csrc/multi_tensor_sgd_kernel.cu',
96 'csrc/multi_tensor_scale_kernel.cu',
97 'csrc/multi_tensor_axpby_kernel.cu',
98 'csrc/multi_tensor_l2norm_kernel.cu',
99 'csrc/multi_tensor_lamb_stage_1.cu',
100 'csrc/multi_tensor_lamb_stage_2.cu',
101 'csrc/multi_tensor_adam.cu',
102 'csrc/multi_tensor_novograd.cu',
103 'csrc/multi_tensor_lamb.cu'],
104 extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,
105 'nvcc':['-lineinfo',
106 '-O3',
107 # '--resource-usage',
108 '--use_fast_math'] + version_dependent_macros}))
109 ext_modules.append(
110 CUDAExtension(name='fused_adam_cuda',
111 sources=['csrc/fused_adam_cuda.cpp',
112 'csrc/fused_adam_cuda_kernel.cu'],
113 extra_compile_args={'cxx': ['-O3',] + version_dependent_macros,
114 'nvcc':['-O3',
115 '--use_fast_math'] + version_dependent_macros}))
116 ext_modules.append(
117 CUDAExtension(name='syncbn',
118 sources=['csrc/syncbn.cpp',
119 'csrc/welford.cu'],
120 extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,
121 'nvcc':['-O3'] + version_dependent_macros}))
122
123 ext_modules.append(
124 CUDAExtension(name='fused_layer_norm_cuda',
125 sources=['csrc/layer_norm_cuda.cpp',
126 'csrc/layer_norm_cuda_kernel.cu'],
127 extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,
128 'nvcc':['-maxrregcount=50',
129 '-O3',
130 '--use_fast_math'] + version_dependent_macros}))
131
132 if "--bnp" in sys.argv:
133 from torch.utils.cpp_extension import CUDAExtension
134 sys.argv.remove("--bnp")
135
136 from torch.utils.cpp_extension import BuildExtension
137 cmdclass['build_ext'] = BuildExtension
138
139 if torch.utils.cpp_extension.CUDA_HOME is None:
140 raise RuntimeError("--bnp was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.")
141 else:
142 ext_modules.append(
143 CUDAExtension(name='bnp',
144 sources=['apex/contrib/csrc/groupbn/batch_norm.cu',
145 'apex/contrib/csrc/groupbn/ipc.cu',
146 'apex/contrib/csrc/groupbn/interface.cpp',
147 'apex/contrib/csrc/groupbn/batch_norm_add_relu.cu'],
148 include_dirs=['csrc'],
149 extra_compile_args={'cxx': [] + version_dependent_macros,
150 'nvcc':['-DCUDA_HAS_FP16=1',
151 '-D__CUDA_NO_HALF_OPERATORS__',
152 '-D__CUDA_NO_HALF_CONVERSIONS__',
153 '-D__CUDA_NO_HALF2_OPERATORS__',
154 '-gencode',
155 'arch=compute_70,code=sm_70'] + version_dependent_macros}))
156
157 if "--xentropy" in sys.argv:
158 from torch.utils.cpp_extension import CUDAExtension
159 sys.argv.remove("--xentropy")
160
161 from torch.utils.cpp_extension import BuildExtension
162 cmdclass['build_ext'] = BuildExtension
163
164 if torch.utils.cpp_extension.CUDA_HOME is None:
165 raise RuntimeError("--xentropy was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.")
166 else:
167 ext_modules.append(
168 CUDAExtension(name='xentropy_cuda',
169 sources=['apex/contrib/csrc/xentropy/interface.cpp',
170 'apex/contrib/csrc/xentropy/xentropy_kernel.cu'],
171 include_dirs=['csrc'],
172 extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,
173 'nvcc':['-O3'] + version_dependent_macros}))
174
175 setup(
176 name='apex',
177 version='0.1',
178 packages=find_packages(exclude=('build',
179 'csrc',
180 'include',
181 'tests',
182 'dist',
183 'docs',
184 'tests',
185 'examples',
186 'apex.egg-info',)),
187 description='PyTorch Extensions written by NVIDIA',
188 ext_modules=ext_modules,
189 cmdclass=cmdclass,
190 )
191
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,10 +5,20 @@
from pip._internal import main as pipmain
import sys
import warnings
+import os
if not torch.cuda.is_available():
- print("\nWarning: Torch did not find available GPUs on this system.\n",
- "If your intention is to cross-compile, this is not an error.\n")
+ # https://github.com/NVIDIA/apex/issues/486
+ # Extension builds after https://github.com/pytorch/pytorch/pull/23408 attempt to query torch.cuda.get_device_capability(),
+ # which will fail if you are compiling in an environment without visible GPUs (e.g. during an nvidia-docker build command).
+ print('\nWarning: Torch did not find available GPUs on this system.\n',
+ 'If your intention is to cross-compile, this is not an error.\n'
+ 'By default, Apex will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),\n'
+ 'Volta (compute capability 7.0), and Turing (compute capability 7.5).\n'
+ 'If you wish to cross-compile for a single specific architecture,\n'
+ 'export TORCH_CUDA_ARCH_LIST="compute capability" before running setup.py.\n')
+ if os.environ.get("TORCH_CUDA_ARCH_LIST", None) is None:
+ os.environ["TORCH_CUDA_ARCH_LIST"] = "6.0;6.1;6.2;7.0;7.5"
print("torch.__version__ = ", torch.__version__)
TORCH_MAJOR = int(torch.__version__.split('.')[0])
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,10 +5,20 @@\n from pip._internal import main as pipmain\n import sys\n import warnings\n+import os\n \n if not torch.cuda.is_available():\n- print(\"\\nWarning: Torch did not find available GPUs on this system.\\n\",\n- \"If your intention is to cross-compile, this is not an error.\\n\")\n+ # https://github.com/NVIDIA/apex/issues/486\n+ # Extension builds after https://github.com/pytorch/pytorch/pull/23408 attempt to query torch.cuda.get_device_capability(),\n+ # which will fail if you are compiling in an environment without visible GPUs (e.g. during an nvidia-docker build command).\n+ print('\\nWarning: Torch did not find available GPUs on this system.\\n',\n+ 'If your intention is to cross-compile, this is not an error.\\n'\n+ 'By default, Apex will cross-compile for Pascal (compute capabilities 6.0, 6.1, 6.2),\\n'\n+ 'Volta (compute capability 7.0), and Turing (compute capability 7.5).\\n'\n+ 'If you wish to cross-compile for a single specific architecture,\\n'\n+ 'export TORCH_CUDA_ARCH_LIST=\"compute capability\" before running setup.py.\\n')\n+ if os.environ.get(\"TORCH_CUDA_ARCH_LIST\", None) is None:\n+ os.environ[\"TORCH_CUDA_ARCH_LIST\"] = \"6.0;6.1;6.2;7.0;7.5\"\n \n print(\"torch.__version__ = \", torch.__version__)\n TORCH_MAJOR = int(torch.__version__.split('.')[0])\n", "issue": "Docker build fails on pytorch master since Aug 15, 2019 with \"AssertionError: Found no NVIDIA driver on your system\"\nPytorch recently added a CUDA architecture check when building with torch.utils.cpp_extension. From the [commit notes](https://github.com/pytorch/pytorch/commit/cd207737017db8c81584763207df20bc6110ed75):\r\n\r\n> The old behavior was to always use `sm_30`. The new behavior is:\r\n> - For building via a setup.py, check if `'arch'` is in `extra_compile_args`. If so, don't change anything.\r\n> - If `TORCH_CUDA_ARCH_LIST` is set, respect that (can be 1 or more arches)\r\n> - Otherwise, query device capability and use that.\r\n\r\nFollowing this commit, when attempting to install apex via nvidia-docker, pytorch reverts to querying device capability and fails. The issue probably should be addressed in apex's setup.py, if not nvidia-docker/pytorch themselves. 
Currently, I work around the issue by setting the environment variable in the dockerfile, eg:\r\n\r\n`ENV TORCH_CUDA_ARCH_LIST=Volta;Turing;Kepler+Tesla`\r\n`RUN pip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" apex/` \r\n\r\n\n", "before_files": [{"content": "import torch\nfrom setuptools import setup, find_packages\nimport subprocess\n\nfrom pip._internal import main as pipmain\nimport sys\nimport warnings\n\nif not torch.cuda.is_available():\n print(\"\\nWarning: Torch did not find available GPUs on this system.\\n\",\n \"If your intention is to cross-compile, this is not an error.\\n\")\n\nprint(\"torch.__version__ = \", torch.__version__)\nTORCH_MAJOR = int(torch.__version__.split('.')[0])\nTORCH_MINOR = int(torch.__version__.split('.')[1])\n\nif TORCH_MAJOR == 0 and TORCH_MINOR < 4:\n raise RuntimeError(\"Apex requires Pytorch 0.4 or newer.\\n\" +\n \"The latest stable release can be obtained from https://pytorch.org/\")\n\ncmdclass = {}\next_modules = []\n\nif \"--pyprof\" in sys.argv:\n with open('requirements.txt') as f:\n required_packages = f.read().splitlines()\n pipmain([\"install\"] + required_packages)\n try:\n sys.argv.remove(\"--pyprof\")\n except:\n pass\nelse:\n warnings.warn(\"Option --pyprof not specified. Not installing PyProf dependencies!\")\n\nif \"--cpp_ext\" in sys.argv or \"--cuda_ext\" in sys.argv:\n if TORCH_MAJOR == 0:\n raise RuntimeError(\"--cpp_ext requires Pytorch 1.0 or later, \"\n \"found torch.__version__ = {}\".format(torch.__version__))\n from torch.utils.cpp_extension import BuildExtension\n cmdclass['build_ext'] = BuildExtension\n\nif \"--cpp_ext\" in sys.argv:\n from torch.utils.cpp_extension import CppExtension\n sys.argv.remove(\"--cpp_ext\")\n ext_modules.append(\n CppExtension('apex_C',\n ['csrc/flatten_unflatten.cpp',]))\n\ndef check_cuda_torch_binary_vs_bare_metal(cuda_dir):\n raw_output = subprocess.check_output([cuda_dir + \"/bin/nvcc\", \"-V\"], universal_newlines=True)\n output = raw_output.split()\n release_idx = output.index(\"release\") + 1\n release = output[release_idx].split(\".\")\n bare_metal_major = release[0]\n bare_metal_minor = release[1][0]\n torch_binary_major = torch.version.cuda.split(\".\")[0]\n torch_binary_minor = torch.version.cuda.split(\".\")[1]\n\n print(\"\\nCompiling cuda extensions with\")\n print(raw_output + \"from \" + cuda_dir + \"/bin\\n\")\n\n if (bare_metal_major != torch_binary_major) or (bare_metal_minor != torch_binary_minor):\n raise RuntimeError(\"Cuda extensions are being compiled with a version of Cuda that does \" +\n \"not match the version used to compile Pytorch binaries. \" +\n \"Pytorch binaries were compiled with Cuda {}.\\n\".format(torch.version.cuda) +\n \"In some cases, a minor-version mismatch will not cause later errors: \" +\n \"https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. 
\"\n \"You can try commenting out this check (at your own risk).\")\n\n# Set up macros for forward/backward compatibility hack around\n# https://github.com/pytorch/pytorch/commit/4404762d7dd955383acee92e6f06b48144a0742e\n# and\n# https://github.com/NVIDIA/apex/issues/456\n# https://github.com/pytorch/pytorch/commit/eb7b39e02f7d75c26d8a795ea8c7fd911334da7e#diff-4632522f237f1e4e728cb824300403ac\nversion_ge_1_1 = []\nif (TORCH_MAJOR > 1) or (TORCH_MAJOR == 1 and TORCH_MINOR > 0):\n version_ge_1_1 = ['-DVERSION_GE_1_1']\nversion_ge_1_3 = []\nif (TORCH_MAJOR > 1) or (TORCH_MAJOR == 1 and TORCH_MINOR > 2):\n version_ge_1_3 = ['-DVERSION_GE_1_3']\nversion_dependent_macros = version_ge_1_1 + version_ge_1_3\n\nif \"--cuda_ext\" in sys.argv:\n from torch.utils.cpp_extension import CUDAExtension\n sys.argv.remove(\"--cuda_ext\")\n\n if torch.utils.cpp_extension.CUDA_HOME is None:\n raise RuntimeError(\"--cuda_ext was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.\")\n else:\n check_cuda_torch_binary_vs_bare_metal(torch.utils.cpp_extension.CUDA_HOME)\n\n ext_modules.append(\n CUDAExtension(name='amp_C',\n sources=['csrc/amp_C_frontend.cpp',\n 'csrc/multi_tensor_sgd_kernel.cu',\n 'csrc/multi_tensor_scale_kernel.cu',\n 'csrc/multi_tensor_axpby_kernel.cu',\n 'csrc/multi_tensor_l2norm_kernel.cu',\n 'csrc/multi_tensor_lamb_stage_1.cu',\n 'csrc/multi_tensor_lamb_stage_2.cu',\n 'csrc/multi_tensor_adam.cu',\n 'csrc/multi_tensor_novograd.cu',\n 'csrc/multi_tensor_lamb.cu'],\n extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,\n 'nvcc':['-lineinfo',\n '-O3',\n # '--resource-usage',\n '--use_fast_math'] + version_dependent_macros}))\n ext_modules.append(\n CUDAExtension(name='fused_adam_cuda',\n sources=['csrc/fused_adam_cuda.cpp',\n 'csrc/fused_adam_cuda_kernel.cu'],\n extra_compile_args={'cxx': ['-O3',] + version_dependent_macros,\n 'nvcc':['-O3',\n '--use_fast_math'] + version_dependent_macros}))\n ext_modules.append(\n CUDAExtension(name='syncbn',\n sources=['csrc/syncbn.cpp',\n 'csrc/welford.cu'],\n extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,\n 'nvcc':['-O3'] + version_dependent_macros}))\n\n ext_modules.append(\n CUDAExtension(name='fused_layer_norm_cuda',\n sources=['csrc/layer_norm_cuda.cpp',\n 'csrc/layer_norm_cuda_kernel.cu'],\n extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,\n 'nvcc':['-maxrregcount=50',\n '-O3',\n '--use_fast_math'] + version_dependent_macros}))\n\nif \"--bnp\" in sys.argv:\n from torch.utils.cpp_extension import CUDAExtension\n sys.argv.remove(\"--bnp\")\n\n from torch.utils.cpp_extension import BuildExtension\n cmdclass['build_ext'] = BuildExtension\n\n if torch.utils.cpp_extension.CUDA_HOME is None:\n raise RuntimeError(\"--bnp was requested, but nvcc was not found. Are you sure your environment has nvcc available? 
If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.\")\n else:\n ext_modules.append(\n CUDAExtension(name='bnp',\n sources=['apex/contrib/csrc/groupbn/batch_norm.cu',\n 'apex/contrib/csrc/groupbn/ipc.cu',\n 'apex/contrib/csrc/groupbn/interface.cpp',\n 'apex/contrib/csrc/groupbn/batch_norm_add_relu.cu'],\n include_dirs=['csrc'],\n extra_compile_args={'cxx': [] + version_dependent_macros,\n 'nvcc':['-DCUDA_HAS_FP16=1',\n '-D__CUDA_NO_HALF_OPERATORS__',\n '-D__CUDA_NO_HALF_CONVERSIONS__',\n '-D__CUDA_NO_HALF2_OPERATORS__',\n '-gencode',\n 'arch=compute_70,code=sm_70'] + version_dependent_macros}))\n\nif \"--xentropy\" in sys.argv:\n from torch.utils.cpp_extension import CUDAExtension\n sys.argv.remove(\"--xentropy\")\n\n from torch.utils.cpp_extension import BuildExtension\n cmdclass['build_ext'] = BuildExtension\n\n if torch.utils.cpp_extension.CUDA_HOME is None:\n raise RuntimeError(\"--xentropy was requested, but nvcc was not found. Are you sure your environment has nvcc available? If you're installing within a container from https://hub.docker.com/r/pytorch/pytorch, only images whose names contain 'devel' will provide nvcc.\")\n else:\n ext_modules.append(\n CUDAExtension(name='xentropy_cuda',\n sources=['apex/contrib/csrc/xentropy/interface.cpp',\n 'apex/contrib/csrc/xentropy/xentropy_kernel.cu'],\n include_dirs=['csrc'],\n extra_compile_args={'cxx': ['-O3'] + version_dependent_macros,\n 'nvcc':['-O3'] + version_dependent_macros}))\n\nsetup(\n name='apex',\n version='0.1',\n packages=find_packages(exclude=('build',\n 'csrc',\n 'include',\n 'tests',\n 'dist',\n 'docs',\n 'tests',\n 'examples',\n 'apex.egg-info',)),\n description='PyTorch Extensions written by NVIDIA',\n ext_modules=ext_modules,\n cmdclass=cmdclass,\n)\n", "path": "setup.py"}]} | 3,408 | 389 |
gh_patches_debug_1951 | rasdani/github-patches | git_diff | googleapis__python-bigquery-974 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python to construct CASE WHEN update SQL statement
I am trying to update 2K rows in BQ.
```
def update_bq_ads_status_failed(self, update_ads):
affected_rows = 0
for update_ads_chunk in split(update_ads, _UPDATE_CHUNK_SIZE):
ad_ids = [item["ad_id"] for item in update_ads_chunk]
removal_errors = [item["removal_error"] for item in update_ads_chunk]
update_removal_error = ""
for ad_id, removal_error in zip(ad_ids, removal_errors):
update_removal_error = update_removal_error + \
f''' WHEN ad_id = '{ad_id}' Then '{removal_error}' '''
affected_rows += self.update_bq_ads_status(f"""
UPDATE '{table_full_name}'
SET status = 'Failed Removing'
SET removal_error = CASE {update_removal_error} END
WHERE ad_id IN {str(ad_ids)}
""")
return affected_rows
```
I'm getting this error. I know it's too vague and not possible to debug like this.
> timeout=300.0, headers={'X-Server-Timeout': '300.0',
> 'Accept-Encoding': 'gzip', 'Content-Type': 'application/json',
> 'X-Goog-API-Client': 'gl-python/3.8.10 grpc/1.39.0 gax/2.0.0
> gapic/2.26.0 gccl/2.26.0', 'User-Agent': 'gl-python/3.8.10 grpc/1.39.0
> gax/2.0.0 gapic/2.26.0 gccl/2.26.0'})), last exception: ('Connection
> aborted.', RemoteDisconnected('Remote end closed connection without
> response'))
I'm trying to eliminate errors. Is my BQ update syntactically correct?
What's the BQ update timeout?
</issue>
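The questions above touch two separate knobs: the SQL itself (a standard UPDATE statement has a single SET clause, with additional assignments separated by commas) and the client-side timeout, which is governed by the retry/timeout defaults defined in the module listed below. As a rough illustration (hypothetical table and values; the `query()`/`result()` parameters are assumed from the public google-cloud-bigquery API):

```python
# Minimal sketch (not from this repository): raising both the retry deadline and the
# per-request timeout for a long-running UPDATE. The with_deadline() helper is
# documented in retry.py below.
from google.cloud import bigquery

client = bigquery.Client()
job = client.query(
    "UPDATE `project.dataset.table` SET status = 'Failed Removing' "
    "WHERE ad_id IN UNNEST(@ad_ids)",
    job_config=bigquery.QueryJobConfig(
        query_parameters=[bigquery.ArrayQueryParameter("ad_ids", "STRING", ["a1", "a2"])]
    ),
    retry=bigquery.DEFAULT_RETRY.with_deadline(600.0),  # total retry budget, in seconds
    timeout=600.0,  # per-request timeout, in seconds
)
job.result()  # block until the DML statement finishes
```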
<code>
[start of google/cloud/bigquery/retry.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from google.api_core import exceptions
16 from google.api_core import retry
17 from google.auth import exceptions as auth_exceptions
18 import requests.exceptions
19
20
21 _RETRYABLE_REASONS = frozenset(
22 ["rateLimitExceeded", "backendError", "internalError", "badGateway"]
23 )
24
25 _UNSTRUCTURED_RETRYABLE_TYPES = (
26 ConnectionError,
27 exceptions.TooManyRequests,
28 exceptions.InternalServerError,
29 exceptions.BadGateway,
30 requests.exceptions.ChunkedEncodingError,
31 requests.exceptions.ConnectionError,
32 requests.exceptions.Timeout,
33 auth_exceptions.TransportError,
34 )
35
36 _DEFAULT_JOB_DEADLINE = 60.0 * 10.0 # seconds
37
38
39 def _should_retry(exc):
40 """Predicate for determining when to retry.
41
42 We retry if and only if the 'reason' is 'backendError'
43 or 'rateLimitExceeded'.
44 """
45 if not hasattr(exc, "errors") or len(exc.errors) == 0:
46 # Check for unstructured error returns, e.g. from GFE
47 return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)
48
49 reason = exc.errors[0]["reason"]
50 return reason in _RETRYABLE_REASONS
51
52
53 DEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=600.0)
54 """The default retry object.
55
56 Any method with a ``retry`` parameter will be retried automatically,
57 with reasonable defaults. To disable retry, pass ``retry=None``.
58 To modify the default retry behavior, call a ``with_XXX`` method
59 on ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,
60 pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.
61 """
62
63 DEFAULT_TIMEOUT = 5.0 * 60.0
64 """The default API timeout.
65
66 This is the time to wait per request. To adjust the total wait time, set a
67 deadline on the retry object.
68 """
69
70 job_retry_reasons = "rateLimitExceeded", "backendError"
71
72
73 def _job_should_retry(exc):
74 if not hasattr(exc, "errors") or len(exc.errors) == 0:
75 return False
76
77 reason = exc.errors[0]["reason"]
78 return reason in job_retry_reasons
79
80
81 DEFAULT_JOB_RETRY = retry.Retry(
82 predicate=_job_should_retry, deadline=_DEFAULT_JOB_DEADLINE
83 )
84 """
85 The default job retry object.
86 """
87
[end of google/cloud/bigquery/retry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py
--- a/google/cloud/bigquery/retry.py
+++ b/google/cloud/bigquery/retry.py
@@ -60,7 +60,7 @@
pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.
"""
-DEFAULT_TIMEOUT = 5.0 * 60.0
+DEFAULT_TIMEOUT = None
"""The default API timeout.
This is the time to wait per request. To adjust the total wait time, set a
| {"golden_diff": "diff --git a/google/cloud/bigquery/retry.py b/google/cloud/bigquery/retry.py\n--- a/google/cloud/bigquery/retry.py\n+++ b/google/cloud/bigquery/retry.py\n@@ -60,7 +60,7 @@\n pass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n \"\"\"\n \n-DEFAULT_TIMEOUT = 5.0 * 60.0\n+DEFAULT_TIMEOUT = None\n \"\"\"The default API timeout.\n \n This is the time to wait per request. To adjust the total wait time, set a\n", "issue": "Python to construct CASE WHEN update SQL statement\nI try to update 2K rows in BQ \r\n\r\n\r\n\r\n\r\n```\r\ndef update_bq_ads_status_failed(self, update_ads):\r\n affected_rows = 0\r\n for update_ads_chunk in split(update_ads, _UPDATE_CHUNK_SIZE):\r\n ad_ids = [item[\"ad_id\"] for item in update_ads_chunk]\r\n removal_errors = [item[\"removal_error\"] for item in update_ads_chunk]\r\n\r\n update_removal_error = \"\"\r\n for ad_id, removal_error in zip(ad_ids, removal_errors):\r\n update_removal_error = update_removal_error + \\\r\n f''' WHEN ad_id = '{ad_id}' Then '{removal_error}' '''\r\n affected_rows += self.update_bq_ads_status(f\"\"\"\r\n UPDATE '{table_full_name}' \r\n SET status = 'Failed Removing' \r\n SET removal_error = CASE {update_removal_error} END \r\n WHERE ad_id IN {str(ad_ids)}\r\n \"\"\")\r\n return affected_rows\r\n```\r\n\r\n\r\nI'm getting this error. I know it's too vague and not possible to debug like this.\r\n\r\n> timeout=300.0, headers={'X-Server-Timeout': '300.0',\r\n> 'Accept-Encoding': 'gzip', 'Content-Type': 'application/json',\r\n> 'X-Goog-API-Client': 'gl-python/3.8.10 grpc/1.39.0 gax/2.0.0\r\n> gapic/2.26.0 gccl/2.26.0', 'User-Agent': 'gl-python/3.8.10 grpc/1.39.0\r\n> gax/2.0.0 gapic/2.26.0 gccl/2.26.0'})), last exception: ('Connection\r\n> aborted.', RemoteDisconnected('Remote end closed connection without\r\n> response'))\r\n\r\n\r\nI'm trying to eliminate errors. Is my BQ update syntactically correct? \r\n\r\nWhat's the BQ update timeout?\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom google.api_core import exceptions\nfrom google.api_core import retry\nfrom google.auth import exceptions as auth_exceptions\nimport requests.exceptions\n\n\n_RETRYABLE_REASONS = frozenset(\n [\"rateLimitExceeded\", \"backendError\", \"internalError\", \"badGateway\"]\n)\n\n_UNSTRUCTURED_RETRYABLE_TYPES = (\n ConnectionError,\n exceptions.TooManyRequests,\n exceptions.InternalServerError,\n exceptions.BadGateway,\n requests.exceptions.ChunkedEncodingError,\n requests.exceptions.ConnectionError,\n requests.exceptions.Timeout,\n auth_exceptions.TransportError,\n)\n\n_DEFAULT_JOB_DEADLINE = 60.0 * 10.0 # seconds\n\n\ndef _should_retry(exc):\n \"\"\"Predicate for determining when to retry.\n\n We retry if and only if the 'reason' is 'backendError'\n or 'rateLimitExceeded'.\n \"\"\"\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n # Check for unstructured error returns, e.g. 
from GFE\n return isinstance(exc, _UNSTRUCTURED_RETRYABLE_TYPES)\n\n reason = exc.errors[0][\"reason\"]\n return reason in _RETRYABLE_REASONS\n\n\nDEFAULT_RETRY = retry.Retry(predicate=_should_retry, deadline=600.0)\n\"\"\"The default retry object.\n\nAny method with a ``retry`` parameter will be retried automatically,\nwith reasonable defaults. To disable retry, pass ``retry=None``.\nTo modify the default retry behavior, call a ``with_XXX`` method\non ``DEFAULT_RETRY``. For example, to change the deadline to 30 seconds,\npass ``retry=bigquery.DEFAULT_RETRY.with_deadline(30)``.\n\"\"\"\n\nDEFAULT_TIMEOUT = 5.0 * 60.0\n\"\"\"The default API timeout.\n\nThis is the time to wait per request. To adjust the total wait time, set a\ndeadline on the retry object.\n\"\"\"\n\njob_retry_reasons = \"rateLimitExceeded\", \"backendError\"\n\n\ndef _job_should_retry(exc):\n if not hasattr(exc, \"errors\") or len(exc.errors) == 0:\n return False\n\n reason = exc.errors[0][\"reason\"]\n return reason in job_retry_reasons\n\n\nDEFAULT_JOB_RETRY = retry.Retry(\n predicate=_job_should_retry, deadline=_DEFAULT_JOB_DEADLINE\n)\n\"\"\"\nThe default job retry object.\n\"\"\"\n", "path": "google/cloud/bigquery/retry.py"}]} | 1,776 | 118 |
gh_patches_debug_28122 | rasdani/github-patches | git_diff | bridgecrewio__checkov-1985 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Checkov failure on F: drive on Windows
**Describe the bug**
When running checkov from the JetBrains plugin using this command:
```checkov -s --bc-api-key **-**-**-** --repo-id jetbrains/extension -f F:\Code\modules\test2\s3.tf -o json ```
We are getting this error:
File "c:\Python39\Scripts\checkov.cmd", line 53, in <module>
run()
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\main.py", line 232, in run
repo_root_for_plan_enrichment=config.repo_root_for_plan_enrichment)
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\common\runners\runner_registry.py", line 66, in run
for scan_report in reports:
File "C:\Python37\lib\concurrent\futures\_base.py", line 598, in result_iterator
yield fs.pop().result()
File "C:\Python37\lib\concurrent\futures\_base.py", line 428, in result
return self.__get_result()
File "C:\Python37\lib\concurrent\futures\_base.py", line 384, in __get_result
raise self._exception
File "C:\Python37\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\common\runners\runner_registry.py", line 63, in <lambda>
runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments),
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\terraform\runner.py", line 102, in run
self.check_tf_definition(report, root_folder, runner_filter, collect_skip_comments)
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\terraform\runner.py", line 195, in check_tf_definition
scanned_file, runner_filter, abs_referrer)
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\terraform\runner.py", line 206, in run_all_blocks
scanned_file, block_type, runner_filter, None, module_referrer)
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\terraform\runner.py", line 289, in run_block
caller_file_line_range=caller_file_line_range)
File "C:\Users\MESH User\AppData\Roaming\Python\Python37\site-packages\checkov\common\output\record.py", line 44, in __init__
self.repo_file_path = convert_to_unix_path(f'/{os.path.relpath(file_abs_path)}') # matches file paths given in the BC platform and should always be a unix path
File "C:\Python37\lib\ntpath.py", line 562, in relpath
path_drive, start_drive))
ValueError: path is on mount 'F:', start on mount 'C:'
**To Reproduce**
Steps to reproduce the behavior:
1. On Windows, run checkov against a file on a drive other than the one checkov is installed on.
**Additional context**
Add any other context about the problem here (e.g. code snippets).
</issue>
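For context on the traceback: `os.path.relpath()` on Windows cannot express a path on one drive relative to a working directory on another, which is exactly the F:-vs-C: situation above. A minimal sketch of a drive-aware fallback (hypothetical helper name, not the repository's implementation):

```python
# os.path.relpath() cannot relate paths across Windows drives, so only use it
# when the file shares the drive of the current working directory.
import os
from pathlib import Path

def repo_relative_path(file_abs_path: str) -> str:
    path = Path(file_abs_path)
    if path.drive == Path.cwd().drive:
        # same drive (or POSIX, where drive is ''): a relative path is safe
        return "/" + os.path.relpath(path).replace("\\", "/")
    # different drive: fall back to a drive-less, unix-style absolute path
    return "/" + "/".join(path.parts[1:])
```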
<code>
[start of checkov/common/output/record.py]
1 import os
2 import re
3
4 from colorama import init, Fore, Style
5 from termcolor import colored
6
7 from checkov.common.models.enums import CheckResult
8 from checkov.common.util.file_utils import convert_to_unix_path
9
10 init(autoreset=True)
11
12
13 class Record:
14 check_id = ""
15 check_name = ""
16 check_result = None
17 check_class = ""
18 code_block = ""
19 file_path = ""
20 file_line_range = []
21 caller_file_path = None # When created from a module
22 caller_file_line_range = None #
23 resource = ""
24 guideline = None
25 fixed_definition = None
26 entity_tags = None
27
28 def __init__(self, check_id, check_name, check_result, code_block, file_path, file_line_range, resource,
29 evaluations, check_class, file_abs_path, entity_tags=None,
30 caller_file_path=None, caller_file_line_range=None, bc_check_id=None):
31 """
32 :param evaluations: A dict with the key being the variable name, value being a dict containing:
33 - 'var_file'
34 - 'value'
35 - 'definitions', a list of dicts which contain 'definition_expression'
36 """
37 self.check_id = check_id
38 self.bc_check_id = bc_check_id
39 self.check_name = check_name
40 self.check_result = check_result
41 self.code_block = code_block
42 self.file_path = file_path
43 self.file_abs_path = file_abs_path
44 self.repo_file_path = convert_to_unix_path(f'/{os.path.relpath(file_abs_path)}') # matches file paths given in the BC platform and should always be a unix path
45 self.file_line_range = file_line_range
46 self.resource = resource
47 self.evaluations = evaluations
48 self.check_class = check_class
49 self.fixed_definition = None
50 self.entity_tags = entity_tags
51 self.caller_file_path = caller_file_path
52 self.caller_file_line_range = caller_file_line_range
53
54 def set_guideline(self, guideline):
55 self.guideline = guideline
56
57 @staticmethod
58 def _trim_special_chars(expression):
59 return "".join(re.findall(r'[^ ${\}]+', expression))
60
61 def _is_expression_in_code_lines(self, expression):
62 stripped_expression = self._trim_special_chars(expression)
63 return any(stripped_expression in self._trim_special_chars(line) for (_, line) in self.code_block)
64
65 @staticmethod
66 def _code_line_string(code_block):
67 string_block = ''
68 last_line_number, _ = code_block[-1]
69
70 for (line_num, line) in code_block:
71 spaces = ' ' * (len(str(last_line_number)) - len(str(line_num)))
72 if line.lstrip().startswith('#'):
73 string_block += "\t\t" + Fore.WHITE + str(line_num) + spaces + ' | ' + line
74 else:
75 string_block += "\t\t" + Fore.WHITE + str(line_num) + spaces + ' | ' + Fore.YELLOW + line
76 return string_block
77
78 def to_string(self, compact=False, use_bc_ids=False):
79 status = ''
80 evaluation_message = f''
81 status_color = "white"
82 if self.check_result['result'] == CheckResult.PASSED:
83 status = CheckResult.PASSED.name
84 status_color = "green"
85 elif self.check_result['result'] == CheckResult.FAILED:
86 status = CheckResult.FAILED.name
87 status_color = "red"
88 elif self.check_result['result'] == CheckResult.SKIPPED:
89 status = CheckResult.SKIPPED.name
90 status_color = 'blue'
91 suppress_comment = "\tSuppress comment: {}\n".format(self.check_result['suppress_comment'])
92
93 check_message = colored("Check: {}: \"{}\"\n".format(self.get_output_id(use_bc_ids), self.check_name), "white")
94 guideline_message = ''
95 if self.guideline:
96 guideline_message = "\tGuide: " + Style.BRIGHT + colored(f"{self.guideline}\n", 'blue', attrs=['underline']) + Style.RESET_ALL
97 file_details = colored(
98 "\tFile: {}:{}\n".format(self.file_path, "-".join([str(x) for x in self.file_line_range])),
99 "magenta")
100 code_lines = ""
101 if self.code_block:
102 code_lines = "\n{}\n".format("".join(
103 [self._code_line_string(self.code_block)]))
104 caller_file_details = ""
105 if self.caller_file_path and self.caller_file_line_range:
106 caller_file_details = colored(
107 "\tCalling File: {}:{}\n".format(self.caller_file_path,
108 "-".join([str(x) for x in self.caller_file_line_range])),
109 "magenta")
110 if self.evaluations:
111 for (var_name, var_evaluations) in self.evaluations.items():
112 var_file = var_evaluations['var_file']
113 var_definitions = var_evaluations['definitions']
114 for definition_obj in var_definitions:
115 definition_expression = definition_obj["definition_expression"]
116 if self._is_expression_in_code_lines(definition_expression):
117 evaluation_message = evaluation_message + colored(
118 f'\tVariable {colored(var_name, "yellow")} (of {var_file}) evaluated to value "{colored(var_evaluations["value"], "yellow")}" '
119 f'in expression: {colored(definition_obj["definition_name"] + " = ", "yellow")}{colored(definition_obj["definition_expression"], "yellow")}\n',
120 'white')
121 status_message = colored("\t{} for resource: {}\n".format(status, self.resource), status_color)
122 if self.check_result['result'] == CheckResult.FAILED and code_lines and not compact:
123 return check_message + status_message + file_details + caller_file_details + guideline_message + code_lines + evaluation_message
124
125 if self.check_result['result'] == CheckResult.SKIPPED:
126 return check_message + status_message + suppress_comment + file_details + caller_file_details + guideline_message
127 else:
128 return check_message + status_message + file_details + caller_file_details + evaluation_message + guideline_message
129
130 def __str__(self):
131 return self.to_string()
132
133 def get_output_id(self, use_bc_ids: bool) -> str:
134 return self.bc_check_id if self.bc_check_id and use_bc_ids else self.check_id
135
136 def get_unique_string(self):
137 return f"{self.check_id}.{self.check_result}.{self.file_abs_path}.{self.file_line_range}.{self.resource}"
138
[end of checkov/common/output/record.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/common/output/record.py b/checkov/common/output/record.py
--- a/checkov/common/output/record.py
+++ b/checkov/common/output/record.py
@@ -1,5 +1,7 @@
import os
import re
+from pathlib import Path
+from typing import Union
from colorama import init, Fore, Style
from termcolor import colored
@@ -41,7 +43,7 @@
self.code_block = code_block
self.file_path = file_path
self.file_abs_path = file_abs_path
- self.repo_file_path = convert_to_unix_path(f'/{os.path.relpath(file_abs_path)}') # matches file paths given in the BC platform and should always be a unix path
+ self.repo_file_path = self._determine_repo_file_path(file_abs_path)
self.file_line_range = file_line_range
self.resource = resource
self.evaluations = evaluations
@@ -51,6 +53,15 @@
self.caller_file_path = caller_file_path
self.caller_file_line_range = caller_file_line_range
+ @staticmethod
+ def _determine_repo_file_path(file_path: Union[str, "os.PathLike[str]"]) -> str:
+ # matches file paths given in the BC platform and should always be a unix path
+ repo_file_path = Path(file_path)
+ if Path.cwd().drive == repo_file_path.drive:
+ return convert_to_unix_path(f"/{os.path.relpath(repo_file_path)}").replace("/..", "")
+
+ return f"/{'/'.join(repo_file_path.parts[1:])}"
+
def set_guideline(self, guideline):
self.guideline = guideline
| {"golden_diff": "diff --git a/checkov/common/output/record.py b/checkov/common/output/record.py\n--- a/checkov/common/output/record.py\n+++ b/checkov/common/output/record.py\n@@ -1,5 +1,7 @@\n import os\n import re\n+from pathlib import Path\n+from typing import Union\n \n from colorama import init, Fore, Style\n from termcolor import colored\n@@ -41,7 +43,7 @@\n self.code_block = code_block\n self.file_path = file_path\n self.file_abs_path = file_abs_path\n- self.repo_file_path = convert_to_unix_path(f'/{os.path.relpath(file_abs_path)}') # matches file paths given in the BC platform and should always be a unix path\n+ self.repo_file_path = self._determine_repo_file_path(file_abs_path)\n self.file_line_range = file_line_range\n self.resource = resource\n self.evaluations = evaluations\n@@ -51,6 +53,15 @@\n self.caller_file_path = caller_file_path\n self.caller_file_line_range = caller_file_line_range\n \n+ @staticmethod\n+ def _determine_repo_file_path(file_path: Union[str, \"os.PathLike[str]\"]) -> str:\n+ # matches file paths given in the BC platform and should always be a unix path\n+ repo_file_path = Path(file_path)\n+ if Path.cwd().drive == repo_file_path.drive:\n+ return convert_to_unix_path(f\"/{os.path.relpath(repo_file_path)}\").replace(\"/..\", \"\")\n+\n+ return f\"/{'/'.join(repo_file_path.parts[1:])}\"\n+\n def set_guideline(self, guideline):\n self.guideline = guideline\n", "issue": "Checkov failure on F driver windows\n**Describe the bug**\r\nWhen running checkov from the jetbrains plugin using this command:\r\n```checkov -s --bc-api-key **-**-**-** --repo-id jetbrains/extension -f F:\\Code\\modules\\test2\\s3.tf -o json ```\r\nWe are getting this error:\r\n File \"c:\\Python39\\Scripts\\checkov.cmd\", line 53, in <module>\r\n run()\r\n File \"C:\\Users\\MESH User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\main.py\", line 232, in run\r\n repo_root_for_plan_enrichment=config.repo_root_for_plan_enrichment)\r\n File \"C:\\Users\\MESH User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\common\\runners\\runner_registry.py\", line 66, in run\r\n for scan_report in reports:\r\n File \"C:\\Python37\\lib\\concurrent\\futures\\_base.py\", line 598, in result_iterator\r\n yield fs.pop().result()\r\n File \"C:\\Python37\\lib\\concurrent\\futures\\_base.py\", line 428, in result\r\n return self.__get_result()\r\n File \"C:\\Python37\\lib\\concurrent\\futures\\_base.py\", line 384, in __get_result\r\n raise self._exception\r\n File \"C:\\Python37\\lib\\concurrent\\futures\\thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"C:\\Users\\MESH User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\common\\runners\\runner_registry.py\", line 63, in <lambda>\r\n runner_filter=self.runner_filter, collect_skip_comments=collect_skip_comments),\r\n File \"C:\\Users\\MESH User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\terraform\\runner.py\", line 102, in run\r\n self.check_tf_definition(report, root_folder, runner_filter, collect_skip_comments)\r\n File \"C:\\Users\\MESH User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\terraform\\runner.py\", line 195, in check_tf_definition\r\n scanned_file, runner_filter, abs_referrer)\r\n File \"C:\\Users\\MESH User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\terraform\\runner.py\", line 206, in run_all_blocks\r\n scanned_file, block_type, runner_filter, None, module_referrer)\r\n File \"C:\\Users\\MESH 
User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\terraform\\runner.py\", line 289, in run_block\r\n caller_file_line_range=caller_file_line_range)\r\n File \"C:\\Users\\MESH User\\AppData\\Roaming\\Python\\Python37\\site-packages\\checkov\\common\\output\\record.py\", line 44, in __init__\r\n self.repo_file_path = convert_to_unix_path(f'/{os.path.relpath(file_abs_path)}') # matches file paths given in the BC platform and should always be a unix path\r\n File \"C:\\Python37\\lib\\ntpath.py\", line 562, in relpath\r\n path_drive, start_drive))\r\nValueError: path is on mount 'F:', start on mount 'C:'\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Run checkov on a driver which is not the driver checkov is installed on windows.\r\n\r\n\r\n**Additional context**\r\nAdd any other context about the problem here (e.g. code snippets).\r\n\n", "before_files": [{"content": "import os\nimport re\n\nfrom colorama import init, Fore, Style\nfrom termcolor import colored\n\nfrom checkov.common.models.enums import CheckResult\nfrom checkov.common.util.file_utils import convert_to_unix_path\n\ninit(autoreset=True)\n\n\nclass Record:\n check_id = \"\"\n check_name = \"\"\n check_result = None\n check_class = \"\"\n code_block = \"\"\n file_path = \"\"\n file_line_range = []\n caller_file_path = None # When created from a module\n caller_file_line_range = None #\n resource = \"\"\n guideline = None\n fixed_definition = None\n entity_tags = None\n\n def __init__(self, check_id, check_name, check_result, code_block, file_path, file_line_range, resource,\n evaluations, check_class, file_abs_path, entity_tags=None,\n caller_file_path=None, caller_file_line_range=None, bc_check_id=None):\n \"\"\"\n :param evaluations: A dict with the key being the variable name, value being a dict containing:\n - 'var_file'\n - 'value'\n - 'definitions', a list of dicts which contain 'definition_expression'\n \"\"\"\n self.check_id = check_id\n self.bc_check_id = bc_check_id\n self.check_name = check_name\n self.check_result = check_result\n self.code_block = code_block\n self.file_path = file_path\n self.file_abs_path = file_abs_path\n self.repo_file_path = convert_to_unix_path(f'/{os.path.relpath(file_abs_path)}') # matches file paths given in the BC platform and should always be a unix path\n self.file_line_range = file_line_range\n self.resource = resource\n self.evaluations = evaluations\n self.check_class = check_class\n self.fixed_definition = None\n self.entity_tags = entity_tags\n self.caller_file_path = caller_file_path\n self.caller_file_line_range = caller_file_line_range\n\n def set_guideline(self, guideline):\n self.guideline = guideline\n\n @staticmethod\n def _trim_special_chars(expression):\n return \"\".join(re.findall(r'[^ ${\\}]+', expression))\n\n def _is_expression_in_code_lines(self, expression):\n stripped_expression = self._trim_special_chars(expression)\n return any(stripped_expression in self._trim_special_chars(line) for (_, line) in self.code_block)\n\n @staticmethod\n def _code_line_string(code_block):\n string_block = ''\n last_line_number, _ = code_block[-1]\n\n for (line_num, line) in code_block:\n spaces = ' ' * (len(str(last_line_number)) - len(str(line_num)))\n if line.lstrip().startswith('#'):\n string_block += \"\\t\\t\" + Fore.WHITE + str(line_num) + spaces + ' | ' + line\n else:\n string_block += \"\\t\\t\" + Fore.WHITE + str(line_num) + spaces + ' | ' + Fore.YELLOW + line\n return string_block\n\n def to_string(self, compact=False, use_bc_ids=False):\n 
status = ''\n evaluation_message = f''\n status_color = \"white\"\n if self.check_result['result'] == CheckResult.PASSED:\n status = CheckResult.PASSED.name\n status_color = \"green\"\n elif self.check_result['result'] == CheckResult.FAILED:\n status = CheckResult.FAILED.name\n status_color = \"red\"\n elif self.check_result['result'] == CheckResult.SKIPPED:\n status = CheckResult.SKIPPED.name\n status_color = 'blue'\n suppress_comment = \"\\tSuppress comment: {}\\n\".format(self.check_result['suppress_comment'])\n\n check_message = colored(\"Check: {}: \\\"{}\\\"\\n\".format(self.get_output_id(use_bc_ids), self.check_name), \"white\")\n guideline_message = ''\n if self.guideline:\n guideline_message = \"\\tGuide: \" + Style.BRIGHT + colored(f\"{self.guideline}\\n\", 'blue', attrs=['underline']) + Style.RESET_ALL\n file_details = colored(\n \"\\tFile: {}:{}\\n\".format(self.file_path, \"-\".join([str(x) for x in self.file_line_range])),\n \"magenta\")\n code_lines = \"\"\n if self.code_block:\n code_lines = \"\\n{}\\n\".format(\"\".join(\n [self._code_line_string(self.code_block)]))\n caller_file_details = \"\"\n if self.caller_file_path and self.caller_file_line_range:\n caller_file_details = colored(\n \"\\tCalling File: {}:{}\\n\".format(self.caller_file_path,\n \"-\".join([str(x) for x in self.caller_file_line_range])),\n \"magenta\")\n if self.evaluations:\n for (var_name, var_evaluations) in self.evaluations.items():\n var_file = var_evaluations['var_file']\n var_definitions = var_evaluations['definitions']\n for definition_obj in var_definitions:\n definition_expression = definition_obj[\"definition_expression\"]\n if self._is_expression_in_code_lines(definition_expression):\n evaluation_message = evaluation_message + colored(\n f'\\tVariable {colored(var_name, \"yellow\")} (of {var_file}) evaluated to value \"{colored(var_evaluations[\"value\"], \"yellow\")}\" '\n f'in expression: {colored(definition_obj[\"definition_name\"] + \" = \", \"yellow\")}{colored(definition_obj[\"definition_expression\"], \"yellow\")}\\n',\n 'white')\n status_message = colored(\"\\t{} for resource: {}\\n\".format(status, self.resource), status_color)\n if self.check_result['result'] == CheckResult.FAILED and code_lines and not compact:\n return check_message + status_message + file_details + caller_file_details + guideline_message + code_lines + evaluation_message\n\n if self.check_result['result'] == CheckResult.SKIPPED:\n return check_message + status_message + suppress_comment + file_details + caller_file_details + guideline_message\n else:\n return check_message + status_message + file_details + caller_file_details + evaluation_message + guideline_message\n\n def __str__(self):\n return self.to_string()\n\n def get_output_id(self, use_bc_ids: bool) -> str:\n return self.bc_check_id if self.bc_check_id and use_bc_ids else self.check_id\n\n def get_unique_string(self):\n return f\"{self.check_id}.{self.check_result}.{self.file_abs_path}.{self.file_line_range}.{self.resource}\"\n", "path": "checkov/common/output/record.py"}]} | 3,052 | 372 |
gh_patches_debug_49194 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-3093 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sorting in control panel
The items of the control panel are completely unsorted; they should be sorted in alphabetical order (depending on the current language in Plone).

</issue>
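For illustration, locale-aware sorting of the configlet entries could key on their translated titles; the tool below already imports `zope.i18n.translate`. A minimal sketch (assuming a list of action dicts with a `title` that may be a translatable Message, and a `request` carrying the user's locale; not the repository's implementation):

```python
# Sort configlet/action dicts alphabetically by their title as shown to the user.
from zope.i18n import translate

def sort_configlets_by_title(actions, request):
    def translated_title(action):
        # translate() resolves Message objects for the request's language;
        # plain strings pass through unchanged.
        return translate(action["title"], context=request).lower()
    return sorted(actions, key=translated_title)
```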
<code>
[start of Products/CMFPlone/PloneControlPanel.py]
1 # -*- coding: utf-8 -*-
2 from AccessControl import ClassSecurityInfo
3 from AccessControl.class_init import InitializeClass
4 from App.special_dtml import DTMLFile
5 from OFS.Folder import Folder
6 from OFS.PropertyManager import PropertyManager
7 from Products.CMFCore.ActionInformation import ActionInformation
8 from Products.CMFCore.ActionProviderBase import ActionProviderBase
9 from Products.CMFCore.Expression import Expression, createExprContext
10 from Products.CMFCore.permissions import ManagePortal, View
11 from Products.CMFCore.utils import _checkPermission
12 from Products.CMFCore.utils import getToolByName
13 from Products.CMFCore.utils import registerToolInterface
14 from Products.CMFCore.utils import UniqueObject
15 from Products.CMFPlone import PloneMessageFactory as _
16 from Products.CMFPlone.interfaces import IControlPanel
17 from Products.CMFPlone.PloneBaseTool import PloneBaseTool
18 from zope.component.hooks import getSite
19 from zope.i18n import translate
20 from zope.i18nmessageid import Message
21 from zope.interface import implementer
22
23 import six
24
25
26 class PloneConfiglet(ActionInformation):
27
28 def __init__(self, appId, **kwargs):
29 self.appId = appId
30 ActionInformation.__init__(self, **kwargs)
31
32 def getAppId(self):
33 return self.appId
34
35 def getDescription(self):
36 return self.description
37
38 def clone(self):
39 return self.__class__(**self.__dict__)
40
41 def getAction(self, ec):
42 res = ActionInformation.getAction(self, ec)
43 res['description'] = self.getDescription()
44 return res
45
46
47 @implementer(IControlPanel)
48 class PloneControlPanel(PloneBaseTool, UniqueObject,
49 Folder, ActionProviderBase, PropertyManager):
50 """Weave together the various sources of "actions" which
51 are apropos to the current user and context.
52 """
53
54 security = ClassSecurityInfo()
55
56 id = 'portal_controlpanel'
57 title = 'Control Panel'
58 toolicon = 'skins/plone_images/site_icon.png'
59 meta_type = 'Plone Control Panel Tool'
60 _actions_form = DTMLFile('www/editPloneConfiglets', globals())
61
62 manage_options = (ActionProviderBase.manage_options +
63 PropertyManager.manage_options)
64
65 group = dict(
66 member=[
67 ('Member', _(u'My Preferences')),
68 ],
69 site=[
70 ('plone-general', _(u'General')),
71 ('plone-content', _(u'Content')),
72 ('plone-users', _(u'Users')),
73 ('plone-security', _(u'Security')),
74 ('plone-advanced', _(u'Advanced')),
75 ('Plone', _(u'Plone Configuration')),
76 ('Products', _(u'Add-on Configuration')),
77 ]
78 )
79
80 def __init__(self, **kw):
81 if kw:
82 self.__dict__.update(**kw)
83
84 security.declareProtected(ManagePortal, 'registerConfiglets')
85
86 def registerConfiglets(self, configlets):
87 for conf in configlets:
88 self.registerConfiglet(**conf)
89
90 security.declareProtected(ManagePortal, 'getGroupIds')
91
92 def getGroupIds(self, category='site'):
93 groups = self.group.get(category, [])
94 return [g[0] for g in groups if g]
95
96 security.declareProtected(View, 'getGroups')
97
98 def getGroups(self, category='site'):
99 groups = self.group.get(category, [])
100 return [{'id': g[0], 'title': g[1]} for g in groups if g]
101
102 security.declarePrivate('listActions')
103
104 def listActions(self, info=None, object=None):
105 # This exists here to shut up a deprecation warning about old-style
106 # actions in CMFCore's ActionProviderBase. It was decided not to
107 # move configlets to be based on action tool categories for Plone 4
108 # (see PLIP #8804), but that (or an alternative) will have to happen
109 # before CMF 2.4 when support for old-style actions is removed.
110 return self._actions or ()
111
112 security.declarePublic('maySeeSomeConfiglets')
113
114 def maySeeSomeConfiglets(self):
115 groups = self.getGroups('site')
116
117 all = []
118 for group in groups:
119 all.extend(self.enumConfiglets(group=group['id']))
120 all = [item for item in all if item['visible']]
121 return len(all) != 0
122
123 security.declarePublic('enumConfiglets')
124
125 def enumConfiglets(self, group=None):
126 portal = getToolByName(self, 'portal_url').getPortalObject()
127 context = createExprContext(self, portal, self)
128 res = []
129 for a in self.listActions():
130 verified = 0
131 for permission in a.permissions:
132 if _checkPermission(permission, portal):
133 verified = 1
134 if verified and a.category == group and a.testCondition(context) \
135 and a.visible:
136 res.append(a.getAction(context))
137 # Translate the title for sorting
138 if getattr(self, 'REQUEST', None) is not None:
139 for a in res:
140 title = a['title']
141 if not isinstance(title, Message):
142 title = Message(title, domain='plone')
143 a['title'] = translate(title,
144 context=self.REQUEST)
145
146 def _id(v):
147 return v['id']
148 res.sort(key=_id)
149 return res
150
151 security.declareProtected(ManagePortal, 'unregisterConfiglet')
152
153 def unregisterConfiglet(self, id):
154 actids = [o.id for o in self.listActions()]
155 selection = [actids.index(a) for a in actids if a == id]
156 if not selection:
157 return
158 self.deleteActions(selection)
159
160 security.declareProtected(ManagePortal, 'unregisterApplication')
161
162 def unregisterApplication(self, appId):
163 acts = list(self.listActions())
164 selection = [acts.index(a) for a in acts if a.appId == appId]
165 if not selection:
166 return
167 self.deleteActions(selection)
168
169 def _extractAction(self, properties, index):
170 # Extract an ActionInformation from the funky form properties.
171 id = str(properties.get('id_%d' % index, ''))
172 name = str(properties.get('name_%d' % index, ''))
173 action = str(properties.get('action_%d' % index, ''))
174 condition = str(properties.get('condition_%d' % index, ''))
175 category = str(properties.get('category_%d' % index, ''))
176 visible = properties.get('visible_%d' % index, 0)
177 permissions = properties.get('permission_%d' % index, ())
178 appId = properties.get('appId_%d' % index, '')
179 description = properties.get('description_%d' % index, '')
180 icon_expr = properties.get('icon_expr_%d' % index, '')
181
182 if not name:
183 raise ValueError('A name is required.')
184
185 if action != '':
186 action = Expression(text=action)
187
188 if condition != '':
189 condition = Expression(text=condition)
190
191 if category == '':
192 category = 'object'
193
194 if not isinstance(visible, int):
195 try:
196 visible = int(visible)
197 except ValueError:
198 visible = 0
199
200 if isinstance(permissions, six.string_types):
201 permissions = (permissions, )
202
203 return PloneConfiglet(id=id,
204 title=name,
205 action=action,
206 condition=condition,
207 permissions=permissions,
208 category=category,
209 visible=visible,
210 appId=appId,
211 description=description,
212 icon_expr=icon_expr,
213 )
214
215 security.declareProtected(ManagePortal, 'addAction')
216
217 def addAction(self,
218 id,
219 name,
220 action,
221 condition='',
222 permission='',
223 category='Plone',
224 visible=1,
225 appId=None,
226 icon_expr='',
227 description='',
228 REQUEST=None,
229 ):
230 # Add an action to our list.
231 if not name:
232 raise ValueError('A name is required.')
233
234 a_expr = action and Expression(text=str(action)) or ''
235 c_expr = condition and Expression(text=str(condition)) or ''
236
237 if not isinstance(permission, tuple):
238 permission = permission and (str(permission), ) or ()
239
240 new_actions = self._cloneActions()
241
242 new_action = PloneConfiglet(id=str(id),
243 title=name,
244 action=a_expr,
245 condition=c_expr,
246 permissions=permission,
247 category=str(category),
248 visible=int(visible),
249 appId=appId,
250 description=description,
251 icon_expr=icon_expr,
252 )
253
254 new_actions.append(new_action)
255 self._actions = tuple(new_actions)
256
257 if REQUEST is not None:
258 return self.manage_editActionsForm(
259 REQUEST, manage_tabs_message='Added.')
260
261 security.declareProtected(ManagePortal, 'registerConfiglet')
262 registerConfiglet = addAction
263
264 security.declareProtected(ManagePortal, 'manage_editActionsForm')
265
266 def manage_editActionsForm(self, REQUEST, manage_tabs_message=None):
267 """ Show the 'Actions' management tab.
268 """
269 actions = []
270
271 for a in self.listActions():
272
273 a1 = {}
274 a1['id'] = a.getId()
275 a1['name'] = a.Title()
276 p = a.getPermissions()
277 if p:
278 a1['permission'] = p[0]
279 else:
280 a1['permission'] = ''
281 a1['category'] = a.getCategory() or 'object'
282 a1['visible'] = a.getVisibility()
283 a1['action'] = a.getActionExpression()
284 a1['condition'] = a.getCondition()
285 a1['appId'] = a.getAppId()
286 a1['description'] = a.getDescription()
287 a1['icon_expr'] = a.getIconExpression()
288 actions.append(a1)
289
290 # possible_permissions is in OFS.role.RoleManager.
291 pp = self.possible_permissions()
292 return self._actions_form(
293 self,
294 REQUEST,
295 actions=actions,
296 possible_permissions=pp,
297 management_view='Actions',
298 manage_tabs_message=manage_tabs_message,
299 )
300
301 @property
302 def site_url(self):
303 """Return the absolute URL to the current site, which is likely not
304 necessarily the portal root.
305 Used by ``portlet_prefs`` to construct the URL to
306 ``@@overview-controlpanel``.
307 """
308 return getSite().absolute_url()
309
310
311 InitializeClass(PloneControlPanel)
312 registerToolInterface('portal_controlpanel', IControlPanel)
313
[end of Products/CMFPlone/PloneControlPanel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/PloneControlPanel.py b/Products/CMFPlone/PloneControlPanel.py
--- a/Products/CMFPlone/PloneControlPanel.py
+++ b/Products/CMFPlone/PloneControlPanel.py
@@ -143,9 +143,10 @@
a['title'] = translate(title,
context=self.REQUEST)
- def _id(v):
- return v['id']
- res.sort(key=_id)
+ def _title(v):
+ return v['title']
+
+ res.sort(key=_title)
return res
security.declareProtected(ManagePortal, 'unregisterConfiglet')
| {"golden_diff": "diff --git a/Products/CMFPlone/PloneControlPanel.py b/Products/CMFPlone/PloneControlPanel.py\n--- a/Products/CMFPlone/PloneControlPanel.py\n+++ b/Products/CMFPlone/PloneControlPanel.py\n@@ -143,9 +143,10 @@\n a['title'] = translate(title,\n context=self.REQUEST)\n \n- def _id(v):\n- return v['id']\n- res.sort(key=_id)\n+ def _title(v):\n+ return v['title']\n+\n+ res.sort(key=_title)\n return res\n \n security.declareProtected(ManagePortal, 'unregisterConfiglet')\n", "issue": "sorting in control panel\nThe items of the control panel are completely unsorted (should be sorted in alphabetical order (depending on the current language in Plone).\n\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom AccessControl import ClassSecurityInfo\nfrom AccessControl.class_init import InitializeClass\nfrom App.special_dtml import DTMLFile\nfrom OFS.Folder import Folder\nfrom OFS.PropertyManager import PropertyManager\nfrom Products.CMFCore.ActionInformation import ActionInformation\nfrom Products.CMFCore.ActionProviderBase import ActionProviderBase\nfrom Products.CMFCore.Expression import Expression, createExprContext\nfrom Products.CMFCore.permissions import ManagePortal, View\nfrom Products.CMFCore.utils import _checkPermission\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFCore.utils import registerToolInterface\nfrom Products.CMFCore.utils import UniqueObject\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.interfaces import IControlPanel\nfrom Products.CMFPlone.PloneBaseTool import PloneBaseTool\nfrom zope.component.hooks import getSite\nfrom zope.i18n import translate\nfrom zope.i18nmessageid import Message\nfrom zope.interface import implementer\n\nimport six\n\n\nclass PloneConfiglet(ActionInformation):\n\n def __init__(self, appId, **kwargs):\n self.appId = appId\n ActionInformation.__init__(self, **kwargs)\n\n def getAppId(self):\n return self.appId\n\n def getDescription(self):\n return self.description\n\n def clone(self):\n return self.__class__(**self.__dict__)\n\n def getAction(self, ec):\n res = ActionInformation.getAction(self, ec)\n res['description'] = self.getDescription()\n return res\n\n\n@implementer(IControlPanel)\nclass PloneControlPanel(PloneBaseTool, UniqueObject,\n Folder, ActionProviderBase, PropertyManager):\n \"\"\"Weave together the various sources of \"actions\" which\n are apropos to the current user and context.\n \"\"\"\n\n security = ClassSecurityInfo()\n\n id = 'portal_controlpanel'\n title = 'Control Panel'\n toolicon = 'skins/plone_images/site_icon.png'\n meta_type = 'Plone Control Panel Tool'\n _actions_form = DTMLFile('www/editPloneConfiglets', globals())\n\n manage_options = (ActionProviderBase.manage_options +\n PropertyManager.manage_options)\n\n group = dict(\n member=[\n ('Member', _(u'My Preferences')),\n ],\n site=[\n ('plone-general', _(u'General')),\n ('plone-content', _(u'Content')),\n ('plone-users', _(u'Users')),\n ('plone-security', _(u'Security')),\n ('plone-advanced', _(u'Advanced')),\n ('Plone', _(u'Plone Configuration')),\n ('Products', _(u'Add-on Configuration')),\n ]\n )\n\n def __init__(self, **kw):\n if kw:\n self.__dict__.update(**kw)\n\n security.declareProtected(ManagePortal, 'registerConfiglets')\n\n def registerConfiglets(self, configlets):\n for conf in configlets:\n self.registerConfiglet(**conf)\n\n security.declareProtected(ManagePortal, 'getGroupIds')\n\n def getGroupIds(self, category='site'):\n groups = self.group.get(category, [])\n return [g[0] 
for g in groups if g]\n\n security.declareProtected(View, 'getGroups')\n\n def getGroups(self, category='site'):\n groups = self.group.get(category, [])\n return [{'id': g[0], 'title': g[1]} for g in groups if g]\n\n security.declarePrivate('listActions')\n\n def listActions(self, info=None, object=None):\n # This exists here to shut up a deprecation warning about old-style\n # actions in CMFCore's ActionProviderBase. It was decided not to\n # move configlets to be based on action tool categories for Plone 4\n # (see PLIP #8804), but that (or an alternative) will have to happen\n # before CMF 2.4 when support for old-style actions is removed.\n return self._actions or ()\n\n security.declarePublic('maySeeSomeConfiglets')\n\n def maySeeSomeConfiglets(self):\n groups = self.getGroups('site')\n\n all = []\n for group in groups:\n all.extend(self.enumConfiglets(group=group['id']))\n all = [item for item in all if item['visible']]\n return len(all) != 0\n\n security.declarePublic('enumConfiglets')\n\n def enumConfiglets(self, group=None):\n portal = getToolByName(self, 'portal_url').getPortalObject()\n context = createExprContext(self, portal, self)\n res = []\n for a in self.listActions():\n verified = 0\n for permission in a.permissions:\n if _checkPermission(permission, portal):\n verified = 1\n if verified and a.category == group and a.testCondition(context) \\\n and a.visible:\n res.append(a.getAction(context))\n # Translate the title for sorting\n if getattr(self, 'REQUEST', None) is not None:\n for a in res:\n title = a['title']\n if not isinstance(title, Message):\n title = Message(title, domain='plone')\n a['title'] = translate(title,\n context=self.REQUEST)\n\n def _id(v):\n return v['id']\n res.sort(key=_id)\n return res\n\n security.declareProtected(ManagePortal, 'unregisterConfiglet')\n\n def unregisterConfiglet(self, id):\n actids = [o.id for o in self.listActions()]\n selection = [actids.index(a) for a in actids if a == id]\n if not selection:\n return\n self.deleteActions(selection)\n\n security.declareProtected(ManagePortal, 'unregisterApplication')\n\n def unregisterApplication(self, appId):\n acts = list(self.listActions())\n selection = [acts.index(a) for a in acts if a.appId == appId]\n if not selection:\n return\n self.deleteActions(selection)\n\n def _extractAction(self, properties, index):\n # Extract an ActionInformation from the funky form properties.\n id = str(properties.get('id_%d' % index, ''))\n name = str(properties.get('name_%d' % index, ''))\n action = str(properties.get('action_%d' % index, ''))\n condition = str(properties.get('condition_%d' % index, ''))\n category = str(properties.get('category_%d' % index, ''))\n visible = properties.get('visible_%d' % index, 0)\n permissions = properties.get('permission_%d' % index, ())\n appId = properties.get('appId_%d' % index, '')\n description = properties.get('description_%d' % index, '')\n icon_expr = properties.get('icon_expr_%d' % index, '')\n\n if not name:\n raise ValueError('A name is required.')\n\n if action != '':\n action = Expression(text=action)\n\n if condition != '':\n condition = Expression(text=condition)\n\n if category == '':\n category = 'object'\n\n if not isinstance(visible, int):\n try:\n visible = int(visible)\n except ValueError:\n visible = 0\n\n if isinstance(permissions, six.string_types):\n permissions = (permissions, )\n\n return PloneConfiglet(id=id,\n title=name,\n action=action,\n condition=condition,\n permissions=permissions,\n category=category,\n visible=visible,\n appId=appId,\n 
description=description,\n icon_expr=icon_expr,\n )\n\n security.declareProtected(ManagePortal, 'addAction')\n\n def addAction(self,\n id,\n name,\n action,\n condition='',\n permission='',\n category='Plone',\n visible=1,\n appId=None,\n icon_expr='',\n description='',\n REQUEST=None,\n ):\n # Add an action to our list.\n if not name:\n raise ValueError('A name is required.')\n\n a_expr = action and Expression(text=str(action)) or ''\n c_expr = condition and Expression(text=str(condition)) or ''\n\n if not isinstance(permission, tuple):\n permission = permission and (str(permission), ) or ()\n\n new_actions = self._cloneActions()\n\n new_action = PloneConfiglet(id=str(id),\n title=name,\n action=a_expr,\n condition=c_expr,\n permissions=permission,\n category=str(category),\n visible=int(visible),\n appId=appId,\n description=description,\n icon_expr=icon_expr,\n )\n\n new_actions.append(new_action)\n self._actions = tuple(new_actions)\n\n if REQUEST is not None:\n return self.manage_editActionsForm(\n REQUEST, manage_tabs_message='Added.')\n\n security.declareProtected(ManagePortal, 'registerConfiglet')\n registerConfiglet = addAction\n\n security.declareProtected(ManagePortal, 'manage_editActionsForm')\n\n def manage_editActionsForm(self, REQUEST, manage_tabs_message=None):\n \"\"\" Show the 'Actions' management tab.\n \"\"\"\n actions = []\n\n for a in self.listActions():\n\n a1 = {}\n a1['id'] = a.getId()\n a1['name'] = a.Title()\n p = a.getPermissions()\n if p:\n a1['permission'] = p[0]\n else:\n a1['permission'] = ''\n a1['category'] = a.getCategory() or 'object'\n a1['visible'] = a.getVisibility()\n a1['action'] = a.getActionExpression()\n a1['condition'] = a.getCondition()\n a1['appId'] = a.getAppId()\n a1['description'] = a.getDescription()\n a1['icon_expr'] = a.getIconExpression()\n actions.append(a1)\n\n # possible_permissions is in OFS.role.RoleManager.\n pp = self.possible_permissions()\n return self._actions_form(\n self,\n REQUEST,\n actions=actions,\n possible_permissions=pp,\n management_view='Actions',\n manage_tabs_message=manage_tabs_message,\n )\n\n @property\n def site_url(self):\n \"\"\"Return the absolute URL to the current site, which is likely not\n necessarily the portal root.\n Used by ``portlet_prefs`` to construct the URL to\n ``@@overview-controlpanel``.\n \"\"\"\n return getSite().absolute_url()\n\n\nInitializeClass(PloneControlPanel)\nregisterToolInterface('portal_controlpanel', IControlPanel)\n", "path": "Products/CMFPlone/PloneControlPanel.py"}]} | 3,779 | 158 |
gh_patches_debug_7040 | rasdani/github-patches | git_diff | saleor__saleor-1208 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Superuser can change his status
In `StaffForm`:
https://github.com/mirumee/saleor/blob/master/saleor/dashboard/staff/forms.py#L12-L13
The widget is disabled with `self.fields['is_staff'].widget.attrs['disabled'] = True`,
but that won't prevent you from changing values on POST.
We should disable the field instead, like `self.fields['is_active'].disabled = True`.
Test covering this would be nice as well.
</issue>
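A self-contained sketch of the distinction the issue draws, using a generic Django form rather than the project's `StaffForm` (the form and field names here are placeholders):

```python
from django import forms

class ExampleForm(forms.Form):
    is_active = forms.BooleanField(required=False)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Only renders the HTML control as disabled; a hand-crafted POST can still change the value.
        self.fields['is_active'].widget.attrs['disabled'] = True
        # Tells Django to ignore submitted data for this field and keep its initial value.
        self.fields['is_active'].disabled = True
```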
<code>
[start of saleor/dashboard/staff/forms.py]
1 from django import forms
2
3 from ...userprofile.models import User
4
5
6 class StaffForm(forms.ModelForm):
7 def __init__(self, *args, **kwargs):
8 self.user = kwargs.pop('user', None)
9 kwargs.update(initial={'is_staff': True})
10 super(StaffForm, self).__init__(*args, **kwargs)
11 if self.user == self.instance:
12 self.fields['is_staff'].widget.attrs['disabled'] = True
13 self.fields['is_active'].widget.attrs['disabled'] = True
14
15 class Meta:
16 model = User
17 fields = ['email', 'groups', 'is_staff', 'is_active']
18
[end of saleor/dashboard/staff/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/dashboard/staff/forms.py b/saleor/dashboard/staff/forms.py
--- a/saleor/dashboard/staff/forms.py
+++ b/saleor/dashboard/staff/forms.py
@@ -9,8 +9,8 @@
kwargs.update(initial={'is_staff': True})
super(StaffForm, self).__init__(*args, **kwargs)
if self.user == self.instance:
- self.fields['is_staff'].widget.attrs['disabled'] = True
- self.fields['is_active'].widget.attrs['disabled'] = True
+ self.fields['is_staff'].disabled = True
+ self.fields['is_active'].disabled = True
class Meta:
model = User
| {"golden_diff": "diff --git a/saleor/dashboard/staff/forms.py b/saleor/dashboard/staff/forms.py\n--- a/saleor/dashboard/staff/forms.py\n+++ b/saleor/dashboard/staff/forms.py\n@@ -9,8 +9,8 @@\n kwargs.update(initial={'is_staff': True})\n super(StaffForm, self).__init__(*args, **kwargs)\n if self.user == self.instance:\n- self.fields['is_staff'].widget.attrs['disabled'] = True\n- self.fields['is_active'].widget.attrs['disabled'] = True\n+ self.fields['is_staff'].disabled = True\n+ self.fields['is_active'].disabled = True\n \n class Meta:\n model = User\n", "issue": "Superuser can change his status\nIn `StaffForm`:\r\nhttps://github.com/mirumee/saleor/blob/master/saleor/dashboard/staff/forms.py#L12-L13\r\nWidget is disabled `self.fields['is_staff'].widget.attrs['disabled'] = True` \r\nBut it won't prevent you from changing values on POST.\r\nWe should disable field instead, like `self.fields['is_active'].disabled = True` \r\nTest covering this would be nice as well.\r\n\n", "before_files": [{"content": "from django import forms\n\nfrom ...userprofile.models import User\n\n\nclass StaffForm(forms.ModelForm):\n def __init__(self, *args, **kwargs):\n self.user = kwargs.pop('user', None)\n kwargs.update(initial={'is_staff': True})\n super(StaffForm, self).__init__(*args, **kwargs)\n if self.user == self.instance:\n self.fields['is_staff'].widget.attrs['disabled'] = True\n self.fields['is_active'].widget.attrs['disabled'] = True\n\n class Meta:\n model = User\n fields = ['email', 'groups', 'is_staff', 'is_active']\n", "path": "saleor/dashboard/staff/forms.py"}]} | 805 | 152 |
gh_patches_debug_39158 | rasdani/github-patches | git_diff | quantumlib__Cirq-1668 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Class methods on GridQubit to create common lattices
Some obvious shapes
```GridQubit.square(3, 3)```
```GridQubit.rect(2, 4)```
returning a list of lists indexed by row and column. Default to (0, 0) as one corner, but with an option to set the corner.
Ability to construction from an ASCII diagram like xmon devices
```GridQubit.from_pic```
which operates on maps like
```
...A...
..AAA..
.AAAAA.
AAAAAAAA
```
</issue>
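A dependency-free sketch of the requested constructors, returning plain (row, col) tuples instead of GridQubit instances; the function names, defaults, and the set of "empty" characters are assumptions for illustration:

```python
def rect(rows, cols, top=0, left=0):
    """Hypothetical rectangle constructor: (row, col) pairs in row-major order."""
    return [(row, col)
            for row in range(top, top + rows)
            for col in range(left, left + cols)]

def square(diameter, top=0, left=0):
    """Hypothetical square constructor built on rect()."""
    return rect(diameter, diameter, top=top, left=left)

def from_diagram(diagram):
    """Hypothetical ASCII parser: any character other than '.', '-' or ' ' marks a qubit."""
    qubits = []
    for row, line in enumerate(diagram.strip().split("\n")):
        for col, char in enumerate(line.strip()):
            if char not in ".- ":
                qubits.append((row, col))
    return qubits
```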
<code>
[start of cirq/devices/grid_qubit.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 from typing import Dict
17
18 from cirq import ops
19
20
21 class GridQubit(ops.Qid):
22 """A qubit on a 2d square lattice.
23
24 GridQubits use row-major ordering:
25
26 GridQubit(0, 0) < GridQubit(0, 1) < GridQubit(1, 0) < GridQubit(1, 1)
27 """
28
29 def __init__(self, row: int, col: int):
30 self.row = row
31 self.col = col
32
33 def _comparison_key(self):
34 return self.row, self.col
35
36 def is_adjacent(self, other: ops.Qid) -> bool:
37 """Determines if two qubits are adjacent qubits."""
38 return (isinstance(other, GridQubit) and
39 abs(self.row - other.row) + abs(self.col - other.col) == 1)
40
41 def __repr__(self):
42 return 'cirq.GridQubit({}, {})'.format(self.row, self.col)
43
44 def __str__(self):
45 return '({}, {})'.format(self.row, self.col)
46
47 def to_proto_dict(self) -> Dict:
48 """Return the proto in dictionary form."""
49 return {
50 'row': self.row,
51 'col': self.col,
52 }
53
54 @staticmethod
55 def from_proto_dict(proto_dict: Dict) -> 'GridQubit':
56 """Proto dict must have 'row' and 'col' keys."""
57 if 'row' not in proto_dict or 'col' not in proto_dict:
58 raise ValueError(
59 'Proto dict does not contain row or col: {}'.format(proto_dict))
60 return GridQubit(row=proto_dict['row'], col=proto_dict['col'])
61
[end of cirq/devices/grid_qubit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/devices/grid_qubit.py b/cirq/devices/grid_qubit.py
--- a/cirq/devices/grid_qubit.py
+++ b/cirq/devices/grid_qubit.py
@@ -13,7 +13,7 @@
# limitations under the License.
-from typing import Dict
+from typing import Dict, List
from cirq import ops
@@ -38,6 +38,93 @@
return (isinstance(other, GridQubit) and
abs(self.row - other.row) + abs(self.col - other.col) == 1)
+ @staticmethod
+ def square(diameter: int, top: int = 0, left: int = 0) -> List['GridQubit']:
+ """Returns a square of GridQubits.
+
+ Args:
+ diameter: Length of a side of the square
+ top: Row number of the topmost row
+ left: Column number of the leftmost row
+
+ Returns:
+ A list of GridQubits filling in a square grid
+ """
+ return GridQubit.rect(diameter, diameter, top=top, left=left)
+
+ @staticmethod
+ def rect(rows: int, cols: int, top: int = 0,
+ left: int = 0) -> List['GridQubit']:
+ """Returns a rectangle of GridQubits.
+
+ Args:
+ rows: Number of rows in the rectangle
+ cols: Number of columns in the rectangle
+ top: Row number of the topmost row
+ left: Column number of the leftmost row
+
+ Returns:
+ A list of GridQubits filling in a rectangular grid
+ """
+ return [
+ GridQubit(row, col)
+ for row in range(top, top + rows)
+ for col in range(left, left + cols)
+ ]
+
+ @staticmethod
+ def from_diagram(diagram: str) -> List['GridQubit']:
+ """Parse ASCII art device layout into info about qubits and
+ connectivity. As an example, the below diagram will create a list of
+ GridQubits in a pyramid structure.
+ ---A---
+ --AAA--
+ -AAAAA-
+ AAAAAAA
+
+ You can use any character other than a hyphen to mark a qubit. As an
+ example, the qubits for the Bristlecone device could be represented by
+ the below diagram. This produces a diamond-shaped grid of qubits, and
+ qubits with the same letter correspond to the same readout line.
+
+ .....AB.....
+ ....ABCD....
+ ...ABCDEF...
+ ..ABCDEFGH..
+ .ABCDEFGHIJ.
+ ABCDEFGHIJKL
+ .CDEFGHIJKL.
+ ..EFGHIJKL..
+ ...GHIJKL...
+ ....IJKL....
+ .....KL.....
+
+ Args:
+ diagram: String representing the qubit layout. Each line represents
+ a row. Alphanumeric characters are assigned as qubits.
+ Dots ('.'), dashes ('-'), and spaces (' ') are treated as
+ empty locations in the grid. If diagram has characters other
+ than alphanumerics, spacers, and newlines ('\n'), an error will
+ be thrown. The top-left corner of the diagram will be have
+ coordinate (0,0).
+
+ Returns:
+ A list of GridQubits corresponding to the provided diagram
+
+ Raises:
+ ValueError: If the input string contains an invalid character.
+ """
+ lines = diagram.strip().split('\n')
+ no_qubit_characters = ['.', '-', ' ']
+ qubits = []
+ for row, line in enumerate(lines):
+ for col, c in enumerate(line.strip()):
+ if c not in no_qubit_characters:
+ if not c.isalnum():
+ raise ValueError("Input string has invalid character")
+ qubits.append(GridQubit(row, col))
+ return qubits
+
def __repr__(self):
return 'cirq.GridQubit({}, {})'.format(self.row, self.col)
| {"golden_diff": "diff --git a/cirq/devices/grid_qubit.py b/cirq/devices/grid_qubit.py\n--- a/cirq/devices/grid_qubit.py\n+++ b/cirq/devices/grid_qubit.py\n@@ -13,7 +13,7 @@\n # limitations under the License.\n \n \n-from typing import Dict\n+from typing import Dict, List\n \n from cirq import ops\n \n@@ -38,6 +38,93 @@\n return (isinstance(other, GridQubit) and\n abs(self.row - other.row) + abs(self.col - other.col) == 1)\n \n+ @staticmethod\n+ def square(diameter: int, top: int = 0, left: int = 0) -> List['GridQubit']:\n+ \"\"\"Returns a square of GridQubits.\n+\n+ Args:\n+ diameter: Length of a side of the square\n+ top: Row number of the topmost row\n+ left: Column number of the leftmost row\n+\n+ Returns:\n+ A list of GridQubits filling in a square grid\n+ \"\"\"\n+ return GridQubit.rect(diameter, diameter, top=top, left=left)\n+\n+ @staticmethod\n+ def rect(rows: int, cols: int, top: int = 0,\n+ left: int = 0) -> List['GridQubit']:\n+ \"\"\"Returns a rectangle of GridQubits.\n+\n+ Args:\n+ rows: Number of rows in the rectangle\n+ cols: Number of columns in the rectangle\n+ top: Row number of the topmost row\n+ left: Column number of the leftmost row\n+\n+ Returns:\n+ A list of GridQubits filling in a rectangular grid\n+ \"\"\"\n+ return [\n+ GridQubit(row, col)\n+ for row in range(top, top + rows)\n+ for col in range(left, left + cols)\n+ ]\n+\n+ @staticmethod\n+ def from_diagram(diagram: str) -> List['GridQubit']:\n+ \"\"\"Parse ASCII art device layout into info about qubits and\n+ connectivity. As an example, the below diagram will create a list of\n+ GridQubits in a pyramid structure.\n+ ---A---\n+ --AAA--\n+ -AAAAA-\n+ AAAAAAA\n+\n+ You can use any character other than a hyphen to mark a qubit. As an\n+ example, the qubits for the Bristlecone device could be represented by\n+ the below diagram. This produces a diamond-shaped grid of qubits, and\n+ qubits with the same letter correspond to the same readout line.\n+\n+ .....AB.....\n+ ....ABCD....\n+ ...ABCDEF...\n+ ..ABCDEFGH..\n+ .ABCDEFGHIJ.\n+ ABCDEFGHIJKL\n+ .CDEFGHIJKL.\n+ ..EFGHIJKL..\n+ ...GHIJKL...\n+ ....IJKL....\n+ .....KL.....\n+\n+ Args:\n+ diagram: String representing the qubit layout. Each line represents\n+ a row. Alphanumeric characters are assigned as qubits.\n+ Dots ('.'), dashes ('-'), and spaces (' ') are treated as\n+ empty locations in the grid. If diagram has characters other\n+ than alphanumerics, spacers, and newlines ('\\n'), an error will\n+ be thrown. The top-left corner of the diagram will be have\n+ coordinate (0,0).\n+\n+ Returns:\n+ A list of GridQubits corresponding to the provided diagram\n+\n+ Raises:\n+ ValueError: If the input string contains an invalid character.\n+ \"\"\"\n+ lines = diagram.strip().split('\\n')\n+ no_qubit_characters = ['.', '-', ' ']\n+ qubits = []\n+ for row, line in enumerate(lines):\n+ for col, c in enumerate(line.strip()):\n+ if c not in no_qubit_characters:\n+ if not c.isalnum():\n+ raise ValueError(\"Input string has invalid character\")\n+ qubits.append(GridQubit(row, col))\n+ return qubits\n+\n def __repr__(self):\n return 'cirq.GridQubit({}, {})'.format(self.row, self.col)\n", "issue": "Class methods on GridQubit to create common lattices\nSome obvious shapes\r\n```GridQubit.square(3, 3)```\r\n```GirdQubit.rect(2, 4)```\r\nreturning list of list with row and column. 
Default to 0,0 as one corner, but option to set corner.\r\n\r\nAbility to construction from an ASCII diagram like xmon devices\r\n```GridQubit.from_pic```\r\nwhich operations on maps like\r\n```\r\n...A...\r\n..AAA..\r\n.AAAAA.\r\nAAAAAAAA\r\n````\r\n\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom typing import Dict\n\nfrom cirq import ops\n\n\nclass GridQubit(ops.Qid):\n \"\"\"A qubit on a 2d square lattice.\n\n GridQubits use row-major ordering:\n\n GridQubit(0, 0) < GridQubit(0, 1) < GridQubit(1, 0) < GridQubit(1, 1)\n \"\"\"\n\n def __init__(self, row: int, col: int):\n self.row = row\n self.col = col\n\n def _comparison_key(self):\n return self.row, self.col\n\n def is_adjacent(self, other: ops.Qid) -> bool:\n \"\"\"Determines if two qubits are adjacent qubits.\"\"\"\n return (isinstance(other, GridQubit) and\n abs(self.row - other.row) + abs(self.col - other.col) == 1)\n\n def __repr__(self):\n return 'cirq.GridQubit({}, {})'.format(self.row, self.col)\n\n def __str__(self):\n return '({}, {})'.format(self.row, self.col)\n\n def to_proto_dict(self) -> Dict:\n \"\"\"Return the proto in dictionary form.\"\"\"\n return {\n 'row': self.row,\n 'col': self.col,\n }\n\n @staticmethod\n def from_proto_dict(proto_dict: Dict) -> 'GridQubit':\n \"\"\"Proto dict must have 'row' and 'col' keys.\"\"\"\n if 'row' not in proto_dict or 'col' not in proto_dict:\n raise ValueError(\n 'Proto dict does not contain row or col: {}'.format(proto_dict))\n return GridQubit(row=proto_dict['row'], col=proto_dict['col'])\n", "path": "cirq/devices/grid_qubit.py"}]} | 1,276 | 938 |
gh_patches_debug_2317 | rasdani/github-patches | git_diff | ivy-llc__ivy-23306 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rfftfreq
</issue>
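The issue title alone names the missing Paddle frontend function; a minimal pure-Python sketch of the semantics it is expected to follow (mirroring `numpy.fft.rfftfreq`; the reference name and example values are illustrative):

```python
def rfftfreq_reference(n, d=1.0):
    """Hypothetical reference: sample frequencies for a length-n real FFT with spacing d."""
    val = 1.0 / (n * d)
    return [i * val for i in range(n // 2 + 1)]

# rfftfreq_reference(8, d=0.5) -> [0.0, 0.25, 0.5, 0.75, 1.0]
```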
<code>
[start of ivy/functional/frontends/paddle/fft.py]
1 # global
2 import ivy
3 from ivy.func_wrapper import with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_supported_dtypes(
10 {"2.5.1 and below": ("complex64", "complex128")},
11 "paddle",
12 )
13 @to_ivy_arrays_and_back
14 def fft(x, n=None, axis=-1.0, norm="backward", name=None):
15 ret = ivy.fft(ivy.astype(x, "complex128"), axis, norm=norm, n=n)
16 return ivy.astype(ret, x.dtype)
17
18
19 @with_supported_dtypes(
20 {
21 "2.5.1 and below": (
22 "int32",
23 "int64",
24 "float32",
25 "float64",
26 "complex64",
27 "complex128",
28 )
29 },
30 "paddle",
31 )
32 @to_ivy_arrays_and_back
33 def fftshift(x, axes=None, name=None):
34 shape = x.shape
35
36 if axes is None:
37 axes = tuple(range(x.ndim))
38 shifts = [(dim // 2) for dim in shape]
39 elif isinstance(axes, int):
40 shifts = shape[axes] // 2
41 else:
42 shifts = ivy.concat([shape[ax] // 2 for ax in axes])
43
44 roll = ivy.roll(x, shifts, axis=axes)
45
46 return roll
47
48
49 @with_supported_dtypes(
50 {"2.5.1 and below": ("complex64", "complex128")},
51 "paddle",
52 )
53 @to_ivy_arrays_and_back
54 def hfft(x, n=None, axis=-1, norm="backward", name=None):
55 """Compute the FFT of a signal that has Hermitian symmetry, resulting in a real
56 spectrum."""
57 # Determine the input shape and axis length
58 input_shape = x.shape
59 input_len = input_shape[axis]
60
61 # Calculate n if not provided
62 if n is None:
63 n = 2 * (input_len - 1)
64
65 # Perform the FFT along the specified axis
66 result = ivy.fft(x, axis, n=n, norm=norm)
67
68 return ivy.real(result)
69
70
71 @with_supported_dtypes(
72 {"2.5.1 and below": ("complex64", "complex128")},
73 "paddle",
74 )
75 @to_ivy_arrays_and_back
76 def ifft(x, n=None, axis=-1.0, norm="backward", name=None):
77 ret = ivy.ifft(ivy.astype(x, "complex128"), axis, norm=norm, n=n)
78 return ivy.astype(ret, x.dtype)
79
80
81 @with_supported_dtypes(
82 {
83 "2.5.1 and below": (
84 "int32",
85 "int64",
86 "float32",
87 "float64",
88 )
89 },
90 "paddle",
91 )
92 @to_ivy_arrays_and_back
93 def ifftshift(x, axes=None, name=None):
94 shape = x.shape
95
96 if axes is None:
97 axes = tuple(range(x.ndim))
98 shifts = [-(dim // 2) for dim in shape]
99 elif isinstance(axes, int):
100 shifts = -(shape[axes] // 2)
101 else:
102 shifts = ivy.concat([-shape[ax] // 2 for ax in axes])
103
104 roll = ivy.roll(x, shifts, axis=axes)
105
106 return roll
107
108
109 @with_supported_dtypes(
110 {"2.5.1 and below": ("complex64", "complex128")},
111 "paddle",
112 )
113 @to_ivy_arrays_and_back
114 def irfft(x, n=None, axis=-1.0, norm="backward", name=None):
115 if n is None:
116 n = 2 * (x.shape[axis] - 1)
117
118 pos_freq_terms = ivy.take_along_axis(x, range(n // 2 + 1), axis)
119 neg_freq_terms = ivy.conj(pos_freq_terms[1:-1][::-1])
120 combined_freq_terms = ivy.concat((pos_freq_terms, neg_freq_terms), axis=axis)
121 time_domain = ivy.ifft(combined_freq_terms, axis, norm=norm, n=n)
122 if ivy.isreal(x):
123 time_domain = ivy.real(time_domain)
124 return time_domain
125
[end of ivy/functional/frontends/paddle/fft.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/paddle/fft.py b/ivy/functional/frontends/paddle/fft.py
--- a/ivy/functional/frontends/paddle/fft.py
+++ b/ivy/functional/frontends/paddle/fft.py
@@ -122,3 +122,12 @@
if ivy.isreal(x):
time_domain = ivy.real(time_domain)
return time_domain
+
+
+@to_ivy_arrays_and_back
+def rfftfreq(n, d=1.0, dtype=None, name=None):
+ dtype = ivy.default_dtype()
+ val = 1.0 / (n * d)
+ pos_max = n // 2 + 1
+ indices = ivy.arange(0, pos_max, dtype=dtype)
+ return indices * val
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/fft.py b/ivy/functional/frontends/paddle/fft.py\n--- a/ivy/functional/frontends/paddle/fft.py\n+++ b/ivy/functional/frontends/paddle/fft.py\n@@ -122,3 +122,12 @@\n if ivy.isreal(x):\n time_domain = ivy.real(time_domain)\n return time_domain\n+\n+\n+@to_ivy_arrays_and_back\n+def rfftfreq(n, d=1.0, dtype=None, name=None):\n+ dtype = ivy.default_dtype()\n+ val = 1.0 / (n * d)\n+ pos_max = n // 2 + 1\n+ indices = ivy.arange(0, pos_max, dtype=dtype)\n+ return indices * val\n", "issue": "rfftfreq\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef fft(x, n=None, axis=-1.0, norm=\"backward\", name=None):\n ret = ivy.fft(ivy.astype(x, \"complex128\"), axis, norm=norm, n=n)\n return ivy.astype(ret, x.dtype)\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef fftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = shape[axes] // 2\n else:\n shifts = ivy.concat([shape[ax] // 2 for ax in axes])\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef hfft(x, n=None, axis=-1, norm=\"backward\", name=None):\n \"\"\"Compute the FFT of a signal that has Hermitian symmetry, resulting in a real\n spectrum.\"\"\"\n # Determine the input shape and axis length\n input_shape = x.shape\n input_len = input_shape[axis]\n\n # Calculate n if not provided\n if n is None:\n n = 2 * (input_len - 1)\n\n # Perform the FFT along the specified axis\n result = ivy.fft(x, axis, n=n, norm=norm)\n\n return ivy.real(result)\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef ifft(x, n=None, axis=-1.0, norm=\"backward\", name=None):\n ret = ivy.ifft(ivy.astype(x, \"complex128\"), axis, norm=norm, n=n)\n return ivy.astype(ret, x.dtype)\n\n\n@with_supported_dtypes(\n {\n \"2.5.1 and below\": (\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef ifftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [-(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = -(shape[axes] // 2)\n else:\n shifts = ivy.concat([-shape[ax] // 2 for ax in axes])\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n\n\n@with_supported_dtypes(\n {\"2.5.1 and below\": (\"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef irfft(x, n=None, axis=-1.0, norm=\"backward\", name=None):\n if n is None:\n n = 2 * (x.shape[axis] - 1)\n\n pos_freq_terms = ivy.take_along_axis(x, range(n // 2 + 1), axis)\n neg_freq_terms = ivy.conj(pos_freq_terms[1:-1][::-1])\n combined_freq_terms = ivy.concat((pos_freq_terms, neg_freq_terms), axis=axis)\n time_domain = ivy.ifft(combined_freq_terms, axis, norm=norm, n=n)\n if ivy.isreal(x):\n time_domain = ivy.real(time_domain)\n return 
time_domain\n", "path": "ivy/functional/frontends/paddle/fft.py"}]} | 1,784 | 185 |
gh_patches_debug_22143 | rasdani/github-patches | git_diff | nautobot__nautobot-261 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cable path tracing does not work with data imported by nautobot-netbox-importer (KeyError)
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Python version: any
* Nautobot version: 1.0.0b3
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Install Nautobot 1.0.0b3
2. Install the `nautobot-netbox-importer` plugin
3. Import data from NetBox using the importer
4. In the Nautobot web UI, navigate to the Device detail view for a Device that has connected interfaces, and attempt to access the `Interfaces` tab.
<!-- What did you expect to happen? -->
### Expected Behavior
The page should load successfully, including cable traces for any connected interfaces.
<!-- What happened instead? -->
### Observed Behavior
Nautobot throws a KeyError:
```
File "/usr/local/lib/python3.7/site-packages/nautobot/dcim/models/cables.py", line 465, in get_path
path.append(prefetched[ct_id][object_id])
```
</issue>
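A minimal illustration of why `uuid.UUID(..., version=4)` can mangle identifiers that were not generated as version-4 UUIDs (the example UUID string is arbitrary, not taken from an actual import):

```python
import uuid

# Passing version=4 overwrites the parsed UUID's version/variant bits, so a
# non-version-4 key (as imported data may contain) no longer equals itself
# after the round trip, and dictionary lookups can raise KeyError.
imported_id = "a8098c1a-f86e-11da-bd1a-00112444be1e"  # a version-1 UUID
assert uuid.UUID(imported_id).version == 1
assert uuid.UUID(imported_id) != uuid.UUID(imported_id, version=4)
```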
<code>
[start of nautobot/dcim/utils.py]
1 import uuid
2
3 from django.contrib.contenttypes.models import ContentType
4
5 from nautobot.utilities.utils import hex_to_rgb, lighten_color, rgb_to_hex
6
7
8 def compile_path_node(ct_id, object_id):
9 return f"{ct_id}:{object_id}"
10
11
12 def decompile_path_node(repr):
13 ct_id, object_id = repr.split(":")
14 # The value is stored as a string, but the lookup later uses UUID objects as keys so we convert it now.
15 # Note that the content type ID is still an integer because we have no control over that model.
16 return int(ct_id), uuid.UUID(object_id, version=4)
17
18
19 def object_to_path_node(obj):
20 """
21 Return a representation of an object suitable for inclusion in a CablePath path. Node representation is in the
22 form <ContentType ID>:<Object ID>.
23 """
24 ct = ContentType.objects.get_for_model(obj)
25 return compile_path_node(ct.pk, obj.pk)
26
27
28 def path_node_to_object(repr):
29 """
30 Given the string representation of a path node, return the corresponding instance.
31 """
32 ct_id, object_id = decompile_path_node(repr)
33 ct = ContentType.objects.get_for_id(ct_id)
34 return ct.model_class().objects.get(pk=object_id)
35
36
37 def cable_status_color_css(record):
38 """
39 Given a record such as an Interface, return the CSS needed to apply appropriate coloring to it.
40 """
41 if not record.cable:
42 return ""
43 # The status colors are for use with labels and such, and tend to be quite bright.
44 # For this function we want a much milder, mellower color suitable as a row background.
45 base_color = record.cable.get_status_color().strip("#")
46 lighter_color = rgb_to_hex(*lighten_color(*hex_to_rgb(base_color), 0.75))
47 return f"background-color: #{lighter_color}"
48
[end of nautobot/dcim/utils.py]
[start of nautobot/core/api/serializers.py]
1 import uuid
2
3 from django.core.exceptions import (
4 FieldError,
5 MultipleObjectsReturned,
6 ObjectDoesNotExist,
7 )
8 from django.db.models import AutoField, ManyToManyField
9 from rest_framework import serializers
10 from rest_framework.exceptions import ValidationError
11
12 from nautobot.utilities.utils import dict_to_filter_params
13
14
15 class ValidatedModelSerializer(serializers.ModelSerializer):
16 """
17 Extends the built-in ModelSerializer to enforce calling full_clean() on a copy of the associated instance during
18 validation. (DRF does not do this by default; see https://github.com/encode/django-rest-framework/issues/3144)
19 """
20
21 def validate(self, data):
22
23 # Remove custom fields data and tags (if any) prior to model validation
24 attrs = data.copy()
25 attrs.pop("custom_fields", None)
26 attrs.pop("tags", None)
27
28 # Skip ManyToManyFields
29 for field in self.Meta.model._meta.get_fields():
30 if isinstance(field, ManyToManyField):
31 attrs.pop(field.name, None)
32
33 # Run clean() on an instance of the model
34 if self.instance is None:
35 instance = self.Meta.model(**attrs)
36 else:
37 instance = self.instance
38 for k, v in attrs.items():
39 setattr(instance, k, v)
40 instance.full_clean()
41
42 return data
43
44
45 class WritableNestedSerializer(serializers.ModelSerializer):
46 """
47 Returns a nested representation of an object on read, but accepts only a primary key on write.
48 """
49
50 def to_internal_value(self, data):
51
52 if data is None:
53 return None
54
55 # Dictionary of related object attributes
56 if isinstance(data, dict):
57 params = dict_to_filter_params(data)
58 queryset = self.Meta.model.objects
59 try:
60 return queryset.get(**params)
61 except ObjectDoesNotExist:
62 raise ValidationError("Related object not found using the provided attributes: {}".format(params))
63 except MultipleObjectsReturned:
64 raise ValidationError("Multiple objects match the provided attributes: {}".format(params))
65 except FieldError as e:
66 raise ValidationError(e)
67
68 queryset = self.Meta.model.objects
69 pk = None
70
71 if isinstance(self.Meta.model._meta.pk, AutoField):
72 # PK is an int for this model. This is usually the User model
73 try:
74 pk = int(data)
75 except (TypeError, ValueError):
76 raise ValidationError(
77 "Related objects must be referenced by ID or by dictionary of attributes. Received an "
78 "unrecognized value: {}".format(data)
79 )
80
81 else:
82 # We assume a type of UUIDField for all other models
83
84 # PK of related object
85 try:
86 # Ensure the pk is a valid UUID
87 pk = uuid.UUID(str(data), version=4)
88 except (TypeError, ValueError):
89 raise ValidationError(
90 "Related objects must be referenced by ID or by dictionary of attributes. Received an "
91 "unrecognized value: {}".format(data)
92 )
93
94 try:
95 return queryset.get(pk=pk)
96 except ObjectDoesNotExist:
97 raise ValidationError("Related object not found using the provided ID: {}".format(pk))
98
99
100 class BulkOperationSerializer(serializers.Serializer):
101 id = serializers.CharField() # This supports both UUIDs and numeric ID for the User model
102
103
104 #
105 # GraphQL, used by the openapi doc, not by the view
106 #
107
108
109 class GraphQLAPISerializer(serializers.Serializer):
110 query = serializers.CharField(required=True, help_text="GraphQL query")
111 variables = serializers.JSONField(required=False, help_text="Variables in JSON Format")
112
[end of nautobot/core/api/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nautobot/core/api/serializers.py b/nautobot/core/api/serializers.py
--- a/nautobot/core/api/serializers.py
+++ b/nautobot/core/api/serializers.py
@@ -84,7 +84,7 @@
# PK of related object
try:
# Ensure the pk is a valid UUID
- pk = uuid.UUID(str(data), version=4)
+ pk = uuid.UUID(str(data))
except (TypeError, ValueError):
raise ValidationError(
"Related objects must be referenced by ID or by dictionary of attributes. Received an "
diff --git a/nautobot/dcim/utils.py b/nautobot/dcim/utils.py
--- a/nautobot/dcim/utils.py
+++ b/nautobot/dcim/utils.py
@@ -13,7 +13,7 @@
ct_id, object_id = repr.split(":")
# The value is stored as a string, but the lookup later uses UUID objects as keys so we convert it now.
# Note that the content type ID is still an integer because we have no control over that model.
- return int(ct_id), uuid.UUID(object_id, version=4)
+ return int(ct_id), uuid.UUID(object_id)
def object_to_path_node(obj):
| {"golden_diff": "diff --git a/nautobot/core/api/serializers.py b/nautobot/core/api/serializers.py\n--- a/nautobot/core/api/serializers.py\n+++ b/nautobot/core/api/serializers.py\n@@ -84,7 +84,7 @@\n # PK of related object\n try:\n # Ensure the pk is a valid UUID\n- pk = uuid.UUID(str(data), version=4)\n+ pk = uuid.UUID(str(data))\n except (TypeError, ValueError):\n raise ValidationError(\n \"Related objects must be referenced by ID or by dictionary of attributes. Received an \"\ndiff --git a/nautobot/dcim/utils.py b/nautobot/dcim/utils.py\n--- a/nautobot/dcim/utils.py\n+++ b/nautobot/dcim/utils.py\n@@ -13,7 +13,7 @@\n ct_id, object_id = repr.split(\":\")\n # The value is stored as a string, but the lookup later uses UUID objects as keys so we convert it now.\n # Note that the content type ID is still an integer because we have no control over that model.\n- return int(ct_id), uuid.UUID(object_id, version=4)\n+ return int(ct_id), uuid.UUID(object_id)\n \n \n def object_to_path_node(obj):\n", "issue": "Cable path tracing does not work with data imported by nautobot-netbox-importer (KeyError)\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: any\r\n* Nautobot version: 1.0.0b3\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Install Nautobot 1.0.0b3\r\n2. Install the `nautobot-netbox-importer` plugin\r\n3. Import data from NetBox using the importer\r\n4. In the Nautobot web UI, navigate to the Device detail view for a Device that has connected interfaces, and attempt to access the `Interfaces` tab.\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\n\r\nThe page should load successfully, including cable traces for any connected interfaces.\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\n\r\nNautobot throws a KeyError:\r\n\r\n```\r\n File \"/usr/local/lib/python3.7/site-packages/nautobot/dcim/models/cables.py\", line 465, in get_path\r\n path.append(prefetched[ct_id][object_id])\r\n```\r\n\n", "before_files": [{"content": "import uuid\n\nfrom django.contrib.contenttypes.models import ContentType\n\nfrom nautobot.utilities.utils import hex_to_rgb, lighten_color, rgb_to_hex\n\n\ndef compile_path_node(ct_id, object_id):\n return f\"{ct_id}:{object_id}\"\n\n\ndef decompile_path_node(repr):\n ct_id, object_id = repr.split(\":\")\n # The value is stored as a string, but the lookup later uses UUID objects as keys so we convert it now.\n # Note that the content type ID is still an integer because we have no control over that model.\n return int(ct_id), uuid.UUID(object_id, version=4)\n\n\ndef object_to_path_node(obj):\n \"\"\"\n Return a representation of an object suitable for inclusion in a CablePath path. Node representation is in the\n form <ContentType ID>:<Object ID>.\n \"\"\"\n ct = ContentType.objects.get_for_model(obj)\n return compile_path_node(ct.pk, obj.pk)\n\n\ndef path_node_to_object(repr):\n \"\"\"\n Given the string representation of a path node, return the corresponding instance.\n \"\"\"\n ct_id, object_id = decompile_path_node(repr)\n ct = ContentType.objects.get_for_id(ct_id)\n return ct.model_class().objects.get(pk=object_id)\n\n\ndef cable_status_color_css(record):\n \"\"\"\n Given a record such as an Interface, return the CSS needed to apply appropriate coloring to it.\n \"\"\"\n if not record.cable:\n return \"\"\n # The status colors are for use with labels and such, and tend to be quite bright.\n # For this function we want a much milder, mellower color suitable as a row background.\n base_color = record.cable.get_status_color().strip(\"#\")\n lighter_color = rgb_to_hex(*lighten_color(*hex_to_rgb(base_color), 0.75))\n return f\"background-color: #{lighter_color}\"\n", "path": "nautobot/dcim/utils.py"}, {"content": "import uuid\n\nfrom django.core.exceptions import (\n FieldError,\n MultipleObjectsReturned,\n ObjectDoesNotExist,\n)\nfrom django.db.models import AutoField, ManyToManyField\nfrom rest_framework import serializers\nfrom rest_framework.exceptions import ValidationError\n\nfrom nautobot.utilities.utils import dict_to_filter_params\n\n\nclass ValidatedModelSerializer(serializers.ModelSerializer):\n \"\"\"\n Extends the built-in ModelSerializer to enforce calling full_clean() on a copy of the associated instance during\n validation. 
(DRF does not do this by default; see https://github.com/encode/django-rest-framework/issues/3144)\n \"\"\"\n\n def validate(self, data):\n\n # Remove custom fields data and tags (if any) prior to model validation\n attrs = data.copy()\n attrs.pop(\"custom_fields\", None)\n attrs.pop(\"tags\", None)\n\n # Skip ManyToManyFields\n for field in self.Meta.model._meta.get_fields():\n if isinstance(field, ManyToManyField):\n attrs.pop(field.name, None)\n\n # Run clean() on an instance of the model\n if self.instance is None:\n instance = self.Meta.model(**attrs)\n else:\n instance = self.instance\n for k, v in attrs.items():\n setattr(instance, k, v)\n instance.full_clean()\n\n return data\n\n\nclass WritableNestedSerializer(serializers.ModelSerializer):\n \"\"\"\n Returns a nested representation of an object on read, but accepts only a primary key on write.\n \"\"\"\n\n def to_internal_value(self, data):\n\n if data is None:\n return None\n\n # Dictionary of related object attributes\n if isinstance(data, dict):\n params = dict_to_filter_params(data)\n queryset = self.Meta.model.objects\n try:\n return queryset.get(**params)\n except ObjectDoesNotExist:\n raise ValidationError(\"Related object not found using the provided attributes: {}\".format(params))\n except MultipleObjectsReturned:\n raise ValidationError(\"Multiple objects match the provided attributes: {}\".format(params))\n except FieldError as e:\n raise ValidationError(e)\n\n queryset = self.Meta.model.objects\n pk = None\n\n if isinstance(self.Meta.model._meta.pk, AutoField):\n # PK is an int for this model. This is usually the User model\n try:\n pk = int(data)\n except (TypeError, ValueError):\n raise ValidationError(\n \"Related objects must be referenced by ID or by dictionary of attributes. Received an \"\n \"unrecognized value: {}\".format(data)\n )\n\n else:\n # We assume a type of UUIDField for all other models\n\n # PK of related object\n try:\n # Ensure the pk is a valid UUID\n pk = uuid.UUID(str(data), version=4)\n except (TypeError, ValueError):\n raise ValidationError(\n \"Related objects must be referenced by ID or by dictionary of attributes. Received an \"\n \"unrecognized value: {}\".format(data)\n )\n\n try:\n return queryset.get(pk=pk)\n except ObjectDoesNotExist:\n raise ValidationError(\"Related object not found using the provided ID: {}\".format(pk))\n\n\nclass BulkOperationSerializer(serializers.Serializer):\n id = serializers.CharField() # This supports both UUIDs and numeric ID for the User model\n\n\n#\n# GraphQL, used by the openapi doc, not by the view\n#\n\n\nclass GraphQLAPISerializer(serializers.Serializer):\n query = serializers.CharField(required=True, help_text=\"GraphQL query\")\n variables = serializers.JSONField(required=False, help_text=\"Variables in JSON Format\")\n", "path": "nautobot/core/api/serializers.py"}]} | 2,462 | 276 |
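For context on the golden diff above: passing `version=4` to `uuid.UUID()` does not validate the input; it silently overwrites the version and variant bits of the supplied value, so a UUID imported from NetBox whose bits differ no longer equals the stored key, which is what surfaces as the `KeyError` in `get_path`. A minimal sketch (the UUID value below is made up for illustration):

```python
import uuid

raw = "1df9a1a2-8d51-11eb-8dcd-0242ac130003"  # an illustrative version-1 style UUID

as_stored = uuid.UUID(raw)            # keeps the value intact; .version == 1
coerced = uuid.UUID(raw, version=4)   # rewrites the version/variant bits

print(as_stored == coerced)                # False
print(as_stored.version, coerced.version)  # 1 4
# A dict keyed by `as_stored` raises KeyError when looked up with `coerced`.
```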
gh_patches_debug_12679 | rasdani/github-patches | git_diff | saulpw__visidata-1824 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
YAML fails reading files with tuples
Example file:
```
foo: !!python/tuple
- 1
- 2
- 3
```
Error:
```
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
```
</issue>
<code>
[start of visidata/loaders/yaml.py]
1 from itertools import chain
2
3 from visidata import VisiData, Progress, JsonSheet, vd
4
5
6 @VisiData.api
7 def open_yml(vd, p):
8 return YamlSheet(p.name, source=p)
9
10 VisiData.open_yaml = VisiData.open_yml
11
12 class YamlSheet(JsonSheet):
13 def iterload(self):
14 yaml = vd.importExternal('yaml', 'PyYAML')
15 with self.source.open_text() as fp:
16 documents = yaml.safe_load_all(fp)
17
18 self.columns = []
19 self._knownKeys.clear()
20
21 # Peek at the document stream to determine how to best DWIM.
22 #
23 # This code is a bit verbose because it avoids slurping the generator
24 # all at once into memory.
25 try:
26 first = next(documents)
27 except StopIteration:
28 # Empty file‽
29 yield None
30 return
31
32 try:
33 second = next(documents)
34 except StopIteration:
35 if isinstance(first, list):
36 # A file with a single YAML list: yield one row per list item.
37 yield from Progress(first)
38 else:
39 # A file with a single YAML non-list value, e.g a dict.
40 yield first
41 else:
42 # A file containing multiple YAML documents: yield one row per document.
43 yield from Progress(chain([first, second], documents), total=0)
44
[end of visidata/loaders/yaml.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/visidata/loaders/yaml.py b/visidata/loaders/yaml.py
--- a/visidata/loaders/yaml.py
+++ b/visidata/loaders/yaml.py
@@ -12,8 +12,18 @@
class YamlSheet(JsonSheet):
def iterload(self):
yaml = vd.importExternal('yaml', 'PyYAML')
+
+ class PrettySafeLoader(yaml.SafeLoader):
+ def construct_python_tuple(self, node):
+ return tuple(self.construct_sequence(node))
+
+ PrettySafeLoader.add_constructor(
+ u'tag:yaml.org,2002:python/tuple',
+ PrettySafeLoader.construct_python_tuple
+ )
+
with self.source.open_text() as fp:
- documents = yaml.safe_load_all(fp)
+ documents = yaml.load_all(fp, PrettySafeLoader)
self.columns = []
self._knownKeys.clear()
| {"golden_diff": "diff --git a/visidata/loaders/yaml.py b/visidata/loaders/yaml.py\n--- a/visidata/loaders/yaml.py\n+++ b/visidata/loaders/yaml.py\n@@ -12,8 +12,18 @@\n class YamlSheet(JsonSheet):\n def iterload(self):\n yaml = vd.importExternal('yaml', 'PyYAML')\n+\n+ class PrettySafeLoader(yaml.SafeLoader):\n+ def construct_python_tuple(self, node):\n+ return tuple(self.construct_sequence(node))\n+\n+ PrettySafeLoader.add_constructor(\n+ u'tag:yaml.org,2002:python/tuple',\n+ PrettySafeLoader.construct_python_tuple\n+ )\n+\n with self.source.open_text() as fp:\n- documents = yaml.safe_load_all(fp)\n+ documents = yaml.load_all(fp, PrettySafeLoader)\n \n self.columns = []\n self._knownKeys.clear()\n", "issue": "YAML fails reading files with tuples\nExample file:\r\n\r\n```\r\nfoo: !!python/tuple\r\n- 1\r\n- 2\r\n- 3\r\n```\r\n\r\nError:\r\n\r\n```\r\nConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'\u00b7\r\n```\n", "before_files": [{"content": "from itertools import chain\n\nfrom visidata import VisiData, Progress, JsonSheet, vd\n\n\[email protected]\ndef open_yml(vd, p):\n return YamlSheet(p.name, source=p)\n\nVisiData.open_yaml = VisiData.open_yml\n\nclass YamlSheet(JsonSheet):\n def iterload(self):\n yaml = vd.importExternal('yaml', 'PyYAML')\n with self.source.open_text() as fp:\n documents = yaml.safe_load_all(fp)\n\n self.columns = []\n self._knownKeys.clear()\n\n # Peek at the document stream to determine how to best DWIM.\n #\n # This code is a bit verbose because it avoids slurping the generator\n # all at once into memory.\n try:\n first = next(documents)\n except StopIteration:\n # Empty file\u203d\n yield None\n return\n\n try:\n second = next(documents)\n except StopIteration:\n if isinstance(first, list):\n # A file with a single YAML list: yield one row per list item.\n yield from Progress(first)\n else:\n # A file with a single YAML non-list value, e.g a dict.\n yield first\n else:\n # A file containing multiple YAML documents: yield one row per document.\n yield from Progress(chain([first, second], documents), total=0)\n", "path": "visidata/loaders/yaml.py"}]} | 983 | 201 |
gh_patches_debug_19433 | rasdani/github-patches | git_diff | TheAlgorithms__Python-5833 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Merge insertion sort doesn't work
```
>>> merge_insertion_sort([0, 1, 2, 3, 4])
[0, 2, 1, 3, 4]
```
Cc @ulwlu #2211 [`sorts/merge_insertion_sort.py`](../blob/master/sorts/merge_insertion_sort.py)
</issue>
<code>
[start of sorts/merge_insertion_sort.py]
1 """
2 This is a pure Python implementation of the merge-insertion sort algorithm
3 Source: https://en.wikipedia.org/wiki/Merge-insertion_sort
4
5 For doctests run following command:
6 python3 -m doctest -v merge_insertion_sort.py
7 or
8 python -m doctest -v merge_insertion_sort.py
9
10 For manual testing run:
11 python3 merge_insertion_sort.py
12 """
13
14 from __future__ import annotations
15
16
17 def merge_insertion_sort(collection: list[int]) -> list[int]:
18 """Pure implementation of merge-insertion sort algorithm in Python
19
20 :param collection: some mutable ordered collection with heterogeneous
21 comparable items inside
22 :return: the same collection ordered by ascending
23
24 Examples:
25 >>> merge_insertion_sort([0, 5, 3, 2, 2])
26 [0, 2, 2, 3, 5]
27
28 >>> merge_insertion_sort([99])
29 [99]
30
31 >>> merge_insertion_sort([-2, -5, -45])
32 [-45, -5, -2]
33 """
34
35 def binary_search_insertion(sorted_list, item):
36 left = 0
37 right = len(sorted_list) - 1
38 while left <= right:
39 middle = (left + right) // 2
40 if left == right:
41 if sorted_list[middle] < item:
42 left = middle + 1
43 break
44 elif sorted_list[middle] < item:
45 left = middle + 1
46 else:
47 right = middle - 1
48 sorted_list.insert(left, item)
49 return sorted_list
50
51 def sortlist_2d(list_2d):
52 def merge(left, right):
53 result = []
54 while left and right:
55 if left[0][0] < right[0][0]:
56 result.append(left.pop(0))
57 else:
58 result.append(right.pop(0))
59 return result + left + right
60
61 length = len(list_2d)
62 if length <= 1:
63 return list_2d
64 middle = length // 2
65 return merge(sortlist_2d(list_2d[:middle]), sortlist_2d(list_2d[middle:]))
66
67 if len(collection) <= 1:
68 return collection
69
70 """
71 Group the items into two pairs, and leave one element if there is a last odd item.
72
73 Example: [999, 100, 75, 40, 10000]
74 -> [999, 100], [75, 40]. Leave 10000.
75 """
76 two_paired_list = []
77 has_last_odd_item = False
78 for i in range(0, len(collection), 2):
79 if i == len(collection) - 1:
80 has_last_odd_item = True
81 else:
82 """
83 Sort two-pairs in each groups.
84
85 Example: [999, 100], [75, 40]
86 -> [100, 999], [40, 75]
87 """
88 if collection[i] < collection[i + 1]:
89 two_paired_list.append([collection[i], collection[i + 1]])
90 else:
91 two_paired_list.append([collection[i + 1], collection[i]])
92
93 """
94 Sort two_paired_list.
95
96 Example: [100, 999], [40, 75]
97 -> [40, 75], [100, 999]
98 """
99 sorted_list_2d = sortlist_2d(two_paired_list)
100
101 """
102 40 < 100 is sure because it has already been sorted.
103 Generate the sorted_list of them so that you can avoid unnecessary comparison.
104
105 Example:
106 group0 group1
107 40 100
108 75 999
109 ->
110 group0 group1
111 [40, 100]
112 75 999
113 """
114 result = [i[0] for i in sorted_list_2d]
115
116 """
117 100 < 999 is sure because it has already been sorted.
118 Put 999 in last of the sorted_list so that you can avoid unnecessary comparison.
119
120 Example:
121 group0 group1
122 [40, 100]
123 75 999
124 ->
125 group0 group1
126 [40, 100, 999]
127 75
128 """
129 result.append(sorted_list_2d[-1][1])
130
131 """
132 Insert the last odd item left if there is.
133
134 Example:
135 group0 group1
136 [40, 100, 999]
137 75
138 ->
139 group0 group1
140 [40, 100, 999, 10000]
141 75
142 """
143 if has_last_odd_item:
144 pivot = collection[-1]
145 result = binary_search_insertion(result, pivot)
146
147 """
148 Insert the remaining items.
149 In this case, 40 < 75 is sure because it has already been sorted.
150 Therefore, you only need to insert 75 into [100, 999, 10000],
151 so that you can avoid unnecessary comparison.
152
153 Example:
154 group0 group1
155 [40, 100, 999, 10000]
156 ^ You don't need to compare with this as 40 < 75 is already sure.
157 75
158 ->
159 [40, 75, 100, 999, 10000]
160 """
161 is_last_odd_item_inserted_before_this_index = False
162 for i in range(len(sorted_list_2d) - 1):
163 if result[i] == collection[-i]:
164 is_last_odd_item_inserted_before_this_index = True
165 pivot = sorted_list_2d[i][1]
166 # If last_odd_item is inserted before the item's index,
167 # you should forward index one more.
168 if is_last_odd_item_inserted_before_this_index:
169 result = result[: i + 2] + binary_search_insertion(result[i + 2 :], pivot)
170 else:
171 result = result[: i + 1] + binary_search_insertion(result[i + 1 :], pivot)
172
173 return result
174
175
176 if __name__ == "__main__":
177 user_input = input("Enter numbers separated by a comma:\n").strip()
178 unsorted = [int(item) for item in user_input.split(",")]
179 print(merge_insertion_sort(unsorted))
180
[end of sorts/merge_insertion_sort.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sorts/merge_insertion_sort.py b/sorts/merge_insertion_sort.py
--- a/sorts/merge_insertion_sort.py
+++ b/sorts/merge_insertion_sort.py
@@ -30,6 +30,12 @@
>>> merge_insertion_sort([-2, -5, -45])
[-45, -5, -2]
+
+ Testing with all permutations on range(0,5):
+ >>> import itertools
+ >>> permutations = list(itertools.permutations([0, 1, 2, 3, 4]))
+ >>> all(merge_insertion_sort(p) == [0, 1, 2, 3, 4] for p in permutations)
+ True
"""
def binary_search_insertion(sorted_list, item):
@@ -160,7 +166,7 @@
"""
is_last_odd_item_inserted_before_this_index = False
for i in range(len(sorted_list_2d) - 1):
- if result[i] == collection[-i]:
+ if result[i] == collection[-1] and has_last_odd_item:
is_last_odd_item_inserted_before_this_index = True
pivot = sorted_list_2d[i][1]
# If last_odd_item is inserted before the item's index,
| {"golden_diff": "diff --git a/sorts/merge_insertion_sort.py b/sorts/merge_insertion_sort.py\n--- a/sorts/merge_insertion_sort.py\n+++ b/sorts/merge_insertion_sort.py\n@@ -30,6 +30,12 @@\n \n >>> merge_insertion_sort([-2, -5, -45])\n [-45, -5, -2]\n+\n+ Testing with all permutations on range(0,5):\n+ >>> import itertools\n+ >>> permutations = list(itertools.permutations([0, 1, 2, 3, 4]))\n+ >>> all(merge_insertion_sort(p) == [0, 1, 2, 3, 4] for p in permutations)\n+ True\n \"\"\"\n \n def binary_search_insertion(sorted_list, item):\n@@ -160,7 +166,7 @@\n \"\"\"\n is_last_odd_item_inserted_before_this_index = False\n for i in range(len(sorted_list_2d) - 1):\n- if result[i] == collection[-i]:\n+ if result[i] == collection[-1] and has_last_odd_item:\n is_last_odd_item_inserted_before_this_index = True\n pivot = sorted_list_2d[i][1]\n # If last_odd_item is inserted before the item's index,\n", "issue": "Merge insertion sort doesn't work\n```\r\n>>> merge_insertion_sort([0, 1, 2, 3, 4])\r\n[0, 2, 1, 3, 4]\r\n```\r\n\r\nCc @ulwlu #2211 [`sorts/merge_insertion_sort.py`](../blob/master/sorts/merge_insertion_sort.py)\n", "before_files": [{"content": "\"\"\"\nThis is a pure Python implementation of the merge-insertion sort algorithm\nSource: https://en.wikipedia.org/wiki/Merge-insertion_sort\n\nFor doctests run following command:\npython3 -m doctest -v merge_insertion_sort.py\nor\npython -m doctest -v merge_insertion_sort.py\n\nFor manual testing run:\npython3 merge_insertion_sort.py\n\"\"\"\n\nfrom __future__ import annotations\n\n\ndef merge_insertion_sort(collection: list[int]) -> list[int]:\n \"\"\"Pure implementation of merge-insertion sort algorithm in Python\n\n :param collection: some mutable ordered collection with heterogeneous\n comparable items inside\n :return: the same collection ordered by ascending\n\n Examples:\n >>> merge_insertion_sort([0, 5, 3, 2, 2])\n [0, 2, 2, 3, 5]\n\n >>> merge_insertion_sort([99])\n [99]\n\n >>> merge_insertion_sort([-2, -5, -45])\n [-45, -5, -2]\n \"\"\"\n\n def binary_search_insertion(sorted_list, item):\n left = 0\n right = len(sorted_list) - 1\n while left <= right:\n middle = (left + right) // 2\n if left == right:\n if sorted_list[middle] < item:\n left = middle + 1\n break\n elif sorted_list[middle] < item:\n left = middle + 1\n else:\n right = middle - 1\n sorted_list.insert(left, item)\n return sorted_list\n\n def sortlist_2d(list_2d):\n def merge(left, right):\n result = []\n while left and right:\n if left[0][0] < right[0][0]:\n result.append(left.pop(0))\n else:\n result.append(right.pop(0))\n return result + left + right\n\n length = len(list_2d)\n if length <= 1:\n return list_2d\n middle = length // 2\n return merge(sortlist_2d(list_2d[:middle]), sortlist_2d(list_2d[middle:]))\n\n if len(collection) <= 1:\n return collection\n\n \"\"\"\n Group the items into two pairs, and leave one element if there is a last odd item.\n\n Example: [999, 100, 75, 40, 10000]\n -> [999, 100], [75, 40]. 
Leave 10000.\n \"\"\"\n two_paired_list = []\n has_last_odd_item = False\n for i in range(0, len(collection), 2):\n if i == len(collection) - 1:\n has_last_odd_item = True\n else:\n \"\"\"\n Sort two-pairs in each groups.\n\n Example: [999, 100], [75, 40]\n -> [100, 999], [40, 75]\n \"\"\"\n if collection[i] < collection[i + 1]:\n two_paired_list.append([collection[i], collection[i + 1]])\n else:\n two_paired_list.append([collection[i + 1], collection[i]])\n\n \"\"\"\n Sort two_paired_list.\n\n Example: [100, 999], [40, 75]\n -> [40, 75], [100, 999]\n \"\"\"\n sorted_list_2d = sortlist_2d(two_paired_list)\n\n \"\"\"\n 40 < 100 is sure because it has already been sorted.\n Generate the sorted_list of them so that you can avoid unnecessary comparison.\n\n Example:\n group0 group1\n 40 100\n 75 999\n ->\n group0 group1\n [40, 100]\n 75 999\n \"\"\"\n result = [i[0] for i in sorted_list_2d]\n\n \"\"\"\n 100 < 999 is sure because it has already been sorted.\n Put 999 in last of the sorted_list so that you can avoid unnecessary comparison.\n\n Example:\n group0 group1\n [40, 100]\n 75 999\n ->\n group0 group1\n [40, 100, 999]\n 75\n \"\"\"\n result.append(sorted_list_2d[-1][1])\n\n \"\"\"\n Insert the last odd item left if there is.\n\n Example:\n group0 group1\n [40, 100, 999]\n 75\n ->\n group0 group1\n [40, 100, 999, 10000]\n 75\n \"\"\"\n if has_last_odd_item:\n pivot = collection[-1]\n result = binary_search_insertion(result, pivot)\n\n \"\"\"\n Insert the remaining items.\n In this case, 40 < 75 is sure because it has already been sorted.\n Therefore, you only need to insert 75 into [100, 999, 10000],\n so that you can avoid unnecessary comparison.\n\n Example:\n group0 group1\n [40, 100, 999, 10000]\n ^ You don't need to compare with this as 40 < 75 is already sure.\n 75\n ->\n [40, 75, 100, 999, 10000]\n \"\"\"\n is_last_odd_item_inserted_before_this_index = False\n for i in range(len(sorted_list_2d) - 1):\n if result[i] == collection[-i]:\n is_last_odd_item_inserted_before_this_index = True\n pivot = sorted_list_2d[i][1]\n # If last_odd_item is inserted before the item's index,\n # you should forward index one more.\n if is_last_odd_item_inserted_before_this_index:\n result = result[: i + 2] + binary_search_insertion(result[i + 2 :], pivot)\n else:\n result = result[: i + 1] + binary_search_insertion(result[i + 1 :], pivot)\n\n return result\n\n\nif __name__ == \"__main__\":\n user_input = input(\"Enter numbers separated by a comma:\\n\").strip()\n unsorted = [int(item) for item in user_input.split(\",\")]\n print(merge_insertion_sort(unsorted))\n", "path": "sorts/merge_insertion_sort.py"}]} | 2,566 | 292 |
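The golden diff above changes the faulty comparison `result[i] == collection[-i]` to `result[i] == collection[-1] and has_last_odd_item` and adds an exhaustive doctest. The same check can be run standalone; `merge_insertion_sort` is assumed to be the patched function from the listing above (for example, run this at the bottom of `sorts/merge_insertion_sort.py`):

```python
import itertools

# Assumes the patched merge_insertion_sort from the listing above is in scope.
assert all(
    merge_insertion_sort(list(p)) == [0, 1, 2, 3, 4]
    for p in itertools.permutations(range(5))
)
print("all 120 permutations of range(5) sort correctly")
```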
gh_patches_debug_18373 | rasdani/github-patches | git_diff | PlasmaPy__PlasmaPy-971 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Notebook for ExB drift
A Jupyter notebook needs to be created in our documentation that not only shows how to use `plasmapy.formulary.drifts.ExB_drift()` but also describes the physics behind the drift. The notebook has to be placed in `/docs/notebooks/` or one of its sub-directories. Then the notebook needs to be properly referenced in the documentation.
</issue>
<code>
[start of docs/conf.py]
1 #!/usr/bin/env python3.6
2 # -*- coding: utf-8 -*-
3 #
4 # PlasmaPy documentation build configuration file, created by
5 # sphinx-quickstart on Wed May 31 18:16:46 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20
21 import os
22 import sys
23
24 from datetime import datetime
25 from pkg_resources import parse_version
26 from sphinx.application import Sphinx
27
28 sys.path.insert(0, os.path.abspath(".."))
29
30 from plasmapy import __version__ as release
31
32 # -- General configuration ------------------------------------------------
33
34 # If your documentation needs a minimal Sphinx version, state it here.
35 #
36 # needs_sphinx = '1.0'
37
38 # Add any Sphinx extension module names here, as strings. They can be
39 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
40 # ones.
41 extensions = [
42 "sphinx.ext.autodoc",
43 "sphinx.ext.intersphinx",
44 "sphinx.ext.graphviz",
45 "sphinx.ext.mathjax",
46 "sphinx.ext.napoleon",
47 "sphinx_automodapi.automodapi",
48 "sphinx_automodapi.smart_resolver",
49 "nbsphinx",
50 "sphinx_copybutton",
51 "sphinx_gallery.load_style",
52 ]
53
54 intersphinx_mapping = {
55 "python": ("https://docs.python.org/3", None),
56 "numpy": ("https://numpy.org/doc/stable/", None),
57 "scipy": ("https://docs.scipy.org/doc/scipy/reference/", None),
58 "pandas": ("http://pandas.pydata.org/pandas-docs/stable/", None),
59 "astropy": ("http://docs.astropy.org/en/stable/", None),
60 }
61
62 autoclass_content = "both"
63
64 # Add any paths that contain templates here, relative to this directory.
65 templates_path = ["_templates"]
66
67 # The suffix(es) of source filenames.
68 # You can specify multiple suffix as a list of string:
69 #
70 # source_suffix = ['.rst', '.md']
71 source_suffix = ".rst"
72
73 # The master toctree document.
74 master_doc = "index"
75
76 # General information about the project.
77 project = "PlasmaPy"
78 author = "PlasmaPy Community"
79 copyright = f"2015-{datetime.utcnow().year}, {author}"
80
81
82 # The version info for the project you're documenting, acts as replacement for
83 # |version| and |release|, also used in various other places throughout the
84 # built documents.
85 #
86 # The full version, including alpha/beta/rc tags.
87 # Note: If plasmapy.__version__ can not be defined then it is set to 'unknown'.
88 # However, release needs to be a semantic style version number, so set
89 # the 'unknown' case to ''.
90 release = "" if release == "unknown" else release
91 if release == "unknown":
92 release = version = revision = ""
93 else:
94 pv = parse_version(release)
95 release = pv.public
96 version = ".".join(release.split(".")[:2]) # short X.Y version
97 if pv.local is not None:
98 revision = pv.local[1:] # revision number w/o the leading g
99 else:
100 revision = ""
101
102
103 # The language for content autogenerated by Sphinx. Refer to documentation
104 # for a list of supported languages.
105 #
106 # This is also used if you do content translation via gettext catalogs.
107 # Usually you set "language" from the command line for these cases.
108 language = None
109
110 # List of patterns, relative to source directory, that match files and
111 # directories to ignore when looking for source files.
112 # This patterns also effect to html_static_path and html_extra_path
113 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
114
115 # The name of the Pygments (syntax highlighting) style to use.
116 pygments_style = "sphinx"
117
118 # If true, `todo` and `todoList` produce output, else they produce nothing.
119 todo_include_todos = False
120
121 default_role = "obj"
122
123 # -- Options for HTML output ----------------------------------------------
124
125 # The theme to use for HTML and HTML Help pages. See the documentation for
126 # a list of builtin themes.
127 #
128 # html_theme = 'alabaster'
129 # html_theme = 'traditional'
130 # html_theme = 'agogo'
131 html_theme = "sphinx_rtd_theme"
132
133 # Theme options are theme-specific and customize the look and feel of a theme
134 # further. For a list of options available for each theme, see the
135 # documentation.
136 #
137 html_logo = "./_static/with-text-light-190px.png"
138 html_theme_options = {
139 "logo_only": True,
140 #
141 # TOC options
142 # https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html#theme-options
143 "includehidden": False,
144 }
145
146 # Add any paths that contain custom static files (such as style sheets) here,
147 # relative to this directory. They are copied after the builtin static files,
148 # so a file named "default.css" will overwrite the builtin "default.css".
149 html_static_path = ["_static"]
150
151 # A list of prefixes that are ignored for sorting the Python module
152 # index (e.g., if this is set to ['foo.'], then foo.bar is shown under
153 # B, not F).
154 modindex_common_prefix = ["plasmapy."]
155
156 # -- Options for HTMLHelp output ------------------------------------------
157
158 # Output file base name for HTML help builder.
159 htmlhelp_basename = "PlasmaPydoc"
160
161
162 # -- Options for LaTeX output ---------------------------------------------
163
164 latex_elements = {
165 # The paper size ('letterpaper' or 'a4paper').
166 # 'papersize': 'letterpaper',
167 #
168 # The font size ('10pt', '11pt' or '12pt').
169 # 'pointsize': '10pt',
170 #
171 # Additional stuff for the LaTeX preamble.
172 # 'preamble': '',
173 #
174 # Latex figure (float) alignment
175 # 'figure_align': 'htbp',
176 }
177
178 # Grouping the document tree into LaTeX files. List of tuples
179 # (source start file, target name, title,
180 # author, documentclass [howto, manual, or own class]).
181 latex_documents = [
182 (
183 master_doc,
184 "PlasmaPy.tex",
185 "PlasmaPy Documentation",
186 "PlasmaPy Community",
187 "manual",
188 )
189 ]
190
191
192 # -- Options for manual page output ---------------------------------------
193
194 # One entry per manual page. List of tuples
195 # (source start file, name, description, authors, manual section).
196 man_pages = [(master_doc, "plasmapy", "PlasmaPy Documentation", [author], 1)]
197
198
199 # -- Options for Texinfo output -------------------------------------------
200
201 # Grouping the document tree into Texinfo files. List of tuples
202 # (source start file, target name, title, author,
203 # dir menu entry, description, category)
204 texinfo_documents = [
205 (
206 master_doc,
207 "PlasmaPy",
208 "PlasmaPy Documentation",
209 author,
210 "PlasmaPy",
211 "Python package for plasma physics",
212 "Miscellaneous",
213 )
214 ]
215
216 html_favicon = "./_static/icon.ico"
217
218
219 # -- NBSphinx options
220
221 nbsphinx_thumbnails = {"notebooks/*": "_images/graphic-circular.png"}
222
223 # adapted from https://github.com/spatialaudio/nbsphinx/blob/58b8034dd9d7349c1b4ac3e7a7d6baa87ab2a6a9/doc/conf.py
224
225 # This is processed by Jinja2 and inserted before each notebook
226 nbsphinx_prolog = r"""
227 {% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}
228 {% set nb_base = 'tree' if env.config.revision else 'blob' %}
229 {% set nb_where = env.config.revision if env.config.revision else 'master' %}
230
231 .. raw:: html
232
233 <div class="admonition note">
234 <p style="margin-bottom:0px">
235 This page was generated by
236 <a href="https://nbsphinx.readthedocs.io/">nbsphinx</a> from
237 <a class="reference external" href="https://github.com/PlasmaPy/PlasmaPy/{{ nb_base|e }}/{{ nb_where|e }}/{{ docname|e }}">{{ docname|e }}</a>.
238 <br>
239 Interactive online version:
240 <a href="https://mybinder.org/v2/gh/PlasmaPy/PlasmaPy/{{ nb_where|e }}/?filepath={{ docname|e }}"><img alt="Binder badge" src="https://mybinder.org/badge_logo.svg" style="vertical-align:text-bottom"></a>.
241 </p>
242 </div>
243
244 .. raw:: latex
245
246 \nbsphinxstartnotebook{\scriptsize\noindent\strut
247 \textcolor{gray}{The following section was generated from
248 \sphinxcode{\sphinxupquote{\strut {{ docname | escape_latex }}}} \dotfill}}
249 """
250
251
252 def setup(app: Sphinx) -> None:
253 app.add_config_value("revision", "", True)
254 app.add_stylesheet("rtd_theme_overrides.css")
255
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -49,6 +49,7 @@
"nbsphinx",
"sphinx_copybutton",
"sphinx_gallery.load_style",
+ "IPython.sphinxext.ipython_console_highlighting",
]
intersphinx_mapping = {
@@ -110,10 +111,8 @@
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
-exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
+exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "**.ipynb_checkpoints"]
-# The name of the Pygments (syntax highlighting) style to use.
-pygments_style = "sphinx"
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -49,6 +49,7 @@\n \"nbsphinx\",\n \"sphinx_copybutton\",\n \"sphinx_gallery.load_style\",\n+ \"IPython.sphinxext.ipython_console_highlighting\",\n ]\n \n intersphinx_mapping = {\n@@ -110,10 +111,8 @@\n # List of patterns, relative to source directory, that match files and\n # directories to ignore when looking for source files.\n # This patterns also effect to html_static_path and html_extra_path\n-exclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n+exclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\", \"**.ipynb_checkpoints\"]\n \n-# The name of the Pygments (syntax highlighting) style to use.\n-pygments_style = \"sphinx\"\n \n # If true, `todo` and `todoList` produce output, else they produce nothing.\n todo_include_todos = False\n", "issue": "Notebook for ExB drift\nA Jupyter notebook needs to be created in our documentation that not only shows how to use `plamsapy.formulary.drifts.ExB_drift()` but that also describes the physics behind the drift. The notebook has to be placed in `/docs/notebooks/` or one of it's sub-directories. Then the notebook needs to be properly reference in the documentation.\n", "before_files": [{"content": "#!/usr/bin/env python3.6\n# -*- coding: utf-8 -*-\n#\n# PlasmaPy documentation build configuration file, created by\n# sphinx-quickstart on Wed May 31 18:16:46 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport sys\n\nfrom datetime import datetime\nfrom pkg_resources import parse_version\nfrom sphinx.application import Sphinx\n\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nfrom plasmapy import __version__ as release\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.graphviz\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.napoleon\",\n \"sphinx_automodapi.automodapi\",\n \"sphinx_automodapi.smart_resolver\",\n \"nbsphinx\",\n \"sphinx_copybutton\",\n \"sphinx_gallery.load_style\",\n]\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3\", None),\n \"numpy\": (\"https://numpy.org/doc/stable/\", None),\n \"scipy\": (\"https://docs.scipy.org/doc/scipy/reference/\", None),\n \"pandas\": (\"http://pandas.pydata.org/pandas-docs/stable/\", None),\n \"astropy\": (\"http://docs.astropy.org/en/stable/\", None),\n}\n\nautoclass_content = \"both\"\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"PlasmaPy\"\nauthor = \"PlasmaPy Community\"\ncopyright = f\"2015-{datetime.utcnow().year}, {author}\"\n\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The full version, including alpha/beta/rc tags.\n# Note: If plasmapy.__version__ can not be defined then it is set to 'unknown'.\n# However, release needs to be a semantic style version number, so set\n# the 'unknown' case to ''.\nrelease = \"\" if release == \"unknown\" else release\nif release == \"unknown\":\n release = version = revision = \"\"\nelse:\n pv = parse_version(release)\n release = pv.public\n version = \".\".join(release.split(\".\")[:2]) # short X.Y version\n if pv.local is not None:\n revision = pv.local[1:] # revision number w/o the leading g\n else:\n revision = \"\"\n\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\ndefault_role = \"obj\"\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\n# html_theme = 'traditional'\n# html_theme = 'agogo'\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. 
For a list of options available for each theme, see the\n# documentation.\n#\nhtml_logo = \"./_static/with-text-light-190px.png\"\nhtml_theme_options = {\n \"logo_only\": True,\n #\n # TOC options\n # https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html#theme-options\n \"includehidden\": False,\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# A list of prefixes that are ignored for sorting the Python module\n# index (e.g., if this is set to ['foo.'], then foo.bar is shown under\n# B, not F).\nmodindex_common_prefix = [\"plasmapy.\"]\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"PlasmaPydoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n #\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n #\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n #\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"PlasmaPy.tex\",\n \"PlasmaPy Documentation\",\n \"PlasmaPy Community\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"plasmapy\", \"PlasmaPy Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"PlasmaPy\",\n \"PlasmaPy Documentation\",\n author,\n \"PlasmaPy\",\n \"Python package for plasma physics\",\n \"Miscellaneous\",\n )\n]\n\nhtml_favicon = \"./_static/icon.ico\"\n\n\n# -- NBSphinx options\n\nnbsphinx_thumbnails = {\"notebooks/*\": \"_images/graphic-circular.png\"}\n\n# adapted from https://github.com/spatialaudio/nbsphinx/blob/58b8034dd9d7349c1b4ac3e7a7d6baa87ab2a6a9/doc/conf.py\n\n# This is processed by Jinja2 and inserted before each notebook\nnbsphinx_prolog = r\"\"\"\n{% set docname = 'docs/' + env.doc2path(env.docname, base=None) %}\n{% set nb_base = 'tree' if env.config.revision else 'blob' %}\n{% set nb_where = env.config.revision if env.config.revision else 'master' %}\n\n.. raw:: html\n\n <div class=\"admonition note\">\n <p style=\"margin-bottom:0px\">\n This page was generated by\n <a href=\"https://nbsphinx.readthedocs.io/\">nbsphinx</a> from\n <a class=\"reference external\" href=\"https://github.com/PlasmaPy/PlasmaPy/{{ nb_base|e }}/{{ nb_where|e }}/{{ docname|e }}\">{{ docname|e }}</a>.\n <br>\n Interactive online version:\n <a href=\"https://mybinder.org/v2/gh/PlasmaPy/PlasmaPy/{{ nb_where|e }}/?filepath={{ docname|e }}\"><img alt=\"Binder badge\" src=\"https://mybinder.org/badge_logo.svg\" style=\"vertical-align:text-bottom\"></a>.\n </p>\n </div>\n\n.. 
raw:: latex\n\n \\nbsphinxstartnotebook{\\scriptsize\\noindent\\strut\n \\textcolor{gray}{The following section was generated from\n \\sphinxcode{\\sphinxupquote{\\strut {{ docname | escape_latex }}}} \\dotfill}}\n\"\"\"\n\n\ndef setup(app: Sphinx) -> None:\n app.add_config_value(\"revision\", \"\", True)\n app.add_stylesheet(\"rtd_theme_overrides.css\")\n", "path": "docs/conf.py"}]} | 3,392 | 229 |
gh_patches_debug_3864 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1203 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
APM Agent Serialization Error
**Describe the bug**: APM Agent crashes when using Python AWS SDK's builder for DynamoDB Key condition expressions.
**To Reproduce**
1. Import `from boto3.dynamodb.conditions import Key` and make a DynamoDB query that includes the `KeyConditionExpression` option with a value generated by the `Key` class. Example Broken Code:
```
SESSION = boto3.Session(region_name="us-west-2",)
DYNAMODB_TABLE = SESSION.resource("dynamodb").Table("TableName")
options = {
"KeyConditionExpression": Key("primary_key").eq(1)
}
result = DYNAMODB_TABLE.query(**options)
```
2. If the APM Agent is running, it will crash with the following exception when the above code executes:
```
Exception in thread eapm event processor thread:
Traceback (most recent call last):
File "/var/lang/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/var/lang/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/var/task/elasticapm/transport/base.py", line 145, in _process_queue
buffer.write((self._json_serializer({event_type: data}) + "\n").encode("utf-8"))
File "/var/task/elasticapm/utils/json_encoder.py", line 63, in dumps
return json.dumps(value, cls=BetterJSONEncoder, **kwargs)
File "/var/lang/lib/python3.8/json/__init__.py", line 234, in dumps
return cls(
File "/var/lang/lib/python3.8/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/var/lang/lib/python3.8/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/var/task/elasticapm/utils/json_encoder.py", line 55, in default
return super(BetterJSONEncoder, self).default(obj)
File "/var/lang/lib/python3.8/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type Equals is not JSON serializable
```
3. Excluding the usage of the `Key` class for building Condition Expressions avoids the bug. Example Working Code:
```
SESSION = boto3.Session(region_name="us-west-2",)
DYNAMODB_TABLE = SESSION.resource("dynamodb").Table("TableName")
options = {
"KeyConditionExpression": "#PrimaryKey = :value",
"ExpressionAttributeNames": {"#PrimaryKey": "primary_key"},
"ExpressionAttributeValues": {":value": 1}
}
result = DYNAMODB_TABLE.query(**options)
```
**Environment (please complete the following information)**
- OS: Linux
- Python version: 3.8
- Framework and version [e.g. Django 2.1]: Not Applicable
- APM Server version: 7.13.4
- Agent version: 6.3.3
</issue>
<code>
[start of elasticapm/utils/json_encoder.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 import datetime
33 import decimal
34 import uuid
35
36 try:
37 import json
38 except ImportError:
39 import simplejson as json
40
41
42 class BetterJSONEncoder(json.JSONEncoder):
43 ENCODERS = {
44 set: list,
45 frozenset: list,
46 datetime.datetime: lambda obj: obj.strftime("%Y-%m-%dT%H:%M:%S.%fZ"),
47 uuid.UUID: lambda obj: obj.hex,
48 bytes: lambda obj: obj.decode("utf-8", errors="replace"),
49 decimal.Decimal: lambda obj: float(obj),
50 }
51
52 def default(self, obj):
53 if type(obj) in self.ENCODERS:
54 return self.ENCODERS[type(obj)](obj)
55 return super(BetterJSONEncoder, self).default(obj)
56
57
58 def better_decoder(data):
59 return data
60
61
62 def dumps(value, **kwargs):
63 return json.dumps(value, cls=BetterJSONEncoder, **kwargs)
64
65
66 def loads(value, **kwargs):
67 return json.loads(value, object_hook=better_decoder)
68
[end of elasticapm/utils/json_encoder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/utils/json_encoder.py b/elasticapm/utils/json_encoder.py
--- a/elasticapm/utils/json_encoder.py
+++ b/elasticapm/utils/json_encoder.py
@@ -52,7 +52,10 @@
def default(self, obj):
if type(obj) in self.ENCODERS:
return self.ENCODERS[type(obj)](obj)
- return super(BetterJSONEncoder, self).default(obj)
+ try:
+ return super(BetterJSONEncoder, self).default(obj)
+ except TypeError:
+ return str(obj)
def better_decoder(data):
| {"golden_diff": "diff --git a/elasticapm/utils/json_encoder.py b/elasticapm/utils/json_encoder.py\n--- a/elasticapm/utils/json_encoder.py\n+++ b/elasticapm/utils/json_encoder.py\n@@ -52,7 +52,10 @@\n def default(self, obj):\n if type(obj) in self.ENCODERS:\n return self.ENCODERS[type(obj)](obj)\n- return super(BetterJSONEncoder, self).default(obj)\n+ try:\n+ return super(BetterJSONEncoder, self).default(obj)\n+ except TypeError:\n+ return str(obj)\n \n \n def better_decoder(data):\n", "issue": "APM Agent Serialization Error\n**Describe the bug**: APM Agent crashes when using Python AWS SDK's builder for DynamoDB Key condition expressions.\r\n\r\n**To Reproduce**\r\n\r\n1. Import `from boto3.dynamodb.conditions import Key` and make a DynamoDB query that includes `KeyConditionExpression` option with a value generated by `Key` class. Example Broken Code:\r\n\r\n```\r\nSESSION = boto3.Session(region_name=\"us-west-2\",)\r\nDYNAMODB_TABLE = SESSION.resource(\"dynamodb\").Table(\"TableName\")\r\n\r\noptions = {\r\n \"KeyConditionExpression\": Key(\"primary_key\").eq(1)\r\n}\r\n\r\nresult = DYNAMODB_TABLE.query(**options)\r\n```\r\n\r\n2. If APM Agent is running it will crash with the following exception when above code executes:\r\n```\r\nException in thread eapm event processor thread:\r\nTraceback (most recent call last):\r\n File \"/var/lang/lib/python3.8/threading.py\", line 932, in _bootstrap_inner\r\n self.run()\r\n File \"/var/lang/lib/python3.8/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/var/task/elasticapm/transport/base.py\", line 145, in _process_queue\r\n buffer.write((self._json_serializer({event_type: data}) + \"\\n\").encode(\"utf-8\"))\r\n File \"/var/task/elasticapm/utils/json_encoder.py\", line 63, in dumps\r\n return json.dumps(value, cls=BetterJSONEncoder, **kwargs)\r\n File \"/var/lang/lib/python3.8/json/__init__.py\", line 234, in dumps\r\n return cls(\r\n File \"/var/lang/lib/python3.8/json/encoder.py\", line 199, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/var/lang/lib/python3.8/json/encoder.py\", line 257, in iterencode\r\n return _iterencode(o, 0)\r\n File \"/var/task/elasticapm/utils/json_encoder.py\", line 55, in default\r\n return super(BetterJSONEncoder, self).default(obj)\r\n File \"/var/lang/lib/python3.8/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type Equals is not JSON serializable\r\n```\r\n\r\n3. Excluding the usage of `Key` class for building Condition Expressions avoids the bug. Example Working Code:\r\n```\r\nSESSION = boto3.Session(region_name=\"us-west-2\",)\r\nDYNAMODB_TABLE = SESSION.resource(\"dynamodb\").Table(\"TableName\")\r\n\r\noptions = {\r\n \"KeyConditionExpression\": \"#PrimaryKey = :value\",\r\n \"ExpressionAttributeNames\": {\"#PrimaryKey\": \"primary_key\"},\r\n \"ExpressionAttributeValues\": {\":value\": 1}\r\n}\r\n\r\nresult = DYNAMODB_TABLE.query(**options)\r\n```\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux\r\n- Python version: 3.8\r\n- Framework and version [e.g. 
Django 2.1]: Not Applicable\r\n- APM Server version: 7.13.4\r\n- Agent version: 6.3.3\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nimport datetime\nimport decimal\nimport uuid\n\ntry:\n import json\nexcept ImportError:\n import simplejson as json\n\n\nclass BetterJSONEncoder(json.JSONEncoder):\n ENCODERS = {\n set: list,\n frozenset: list,\n datetime.datetime: lambda obj: obj.strftime(\"%Y-%m-%dT%H:%M:%S.%fZ\"),\n uuid.UUID: lambda obj: obj.hex,\n bytes: lambda obj: obj.decode(\"utf-8\", errors=\"replace\"),\n decimal.Decimal: lambda obj: float(obj),\n }\n\n def default(self, obj):\n if type(obj) in self.ENCODERS:\n return self.ENCODERS[type(obj)](obj)\n return super(BetterJSONEncoder, self).default(obj)\n\n\ndef better_decoder(data):\n return data\n\n\ndef dumps(value, **kwargs):\n return json.dumps(value, cls=BetterJSONEncoder, **kwargs)\n\n\ndef loads(value, **kwargs):\n return json.loads(value, object_hook=better_decoder)\n", "path": "elasticapm/utils/json_encoder.py"}]} | 1,929 | 138 |
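A trimmed sketch of the patched `default()` from the golden diff above, with a hypothetical stand-in for boto3's `Equals` condition object to show the string fallback:

```python
import json

class BetterJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        try:
            return super().default(obj)
        except TypeError:
            return str(obj)  # fall back instead of crashing the transport thread

class Equals:  # hypothetical stand-in for boto3.dynamodb.conditions.Equals
    def __repr__(self):
        return "Equals(Key('primary_key'), 1)"

print(json.dumps({"KeyConditionExpression": Equals()}, cls=BetterJSONEncoder))
# {"KeyConditionExpression": "Equals(Key('primary_key'), 1)"}
```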
gh_patches_debug_24746 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1115 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python agent should show database as a dependency, shows up in span
**Is your feature request related to a problem? Please describe.**
I can see Postgres under the **Time spent by span type** graph, but it is not shown as a dependency. The Rails agent is able to show this as PostgreSQL.
<img width="397" alt="Screenshot 2021-04-24 at 12 03 10 PM" src="https://user-images.githubusercontent.com/36618391/115949791-14edbe80-a4f5-11eb-999a-9b2880460abc.png">
I am using asyncpg==0.21.0
**Describe the solution you'd like**
The database (Postgres) should show up as a dependency, just like it does for the Ruby agent. Screenshot attached below.
<img width="979" alt="Screenshot 2021-04-24 at 11 40 09 AM" src="https://user-images.githubusercontent.com/36618391/115949754-d526d700-a4f4-11eb-9d43-bad114aad4eb.png">
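For context, the dependency view is typically built from destination metadata on the database exit span, which the asyncpg instrumentation does not set. A minimal, illustrative sketch of the span context shape — field names are assumed here, not authoritative; the accepted patch further down is the real change:

```python
# Illustrative only: a db exit span context that also carries destination info,
# which is what lets the APM UI draw Postgres as a downstream dependency.
query = "SELECT 1"
context = {
    "db": {"type": "sql", "statement": query},
    "destination": {
        "address": "localhost",  # assumed Postgres host
        "port": 5432,            # default Postgres port
        "service": {"name": "postgres", "resource": "postgres", "type": "db"},
    },
}
```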
</issue>
<code>
[start of elasticapm/instrumentation/packages/asyncio/asyncpg.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm.contrib.asyncio.traces import async_capture_span
32 from elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule
33 from elasticapm.instrumentation.packages.dbapi2 import extract_signature
34
35
36 class AsyncPGInstrumentation(AsyncAbstractInstrumentedModule):
37 """
38 Implement asyncpg instrumentation with two methods Connection.execute
39 and Connection.executemany since Connection._do_execute is not called
40 given a prepared query is passed to a connection. As in:
41 https://github.com/MagicStack/asyncpg/blob/master/asyncpg/connection.py#L294-L297
42 """
43
44 name = "asyncpg"
45
46 instrument_list = [
47 ("asyncpg.connection", "Connection.execute"),
48 ("asyncpg.connection", "Connection.executemany"),
49 ("asyncpg.connection", "Connection.fetch"),
50 ("asyncpg.connection", "Connection.fetchval"),
51 ("asyncpg.connection", "Connection.fetchrow"),
52 ]
53
54 async def call(self, module, method, wrapped, instance, args, kwargs):
55 query = args[0] if len(args) else kwargs["query"]
56 name = extract_signature(query)
57 context = {"db": {"type": "sql", "statement": query}}
58 action = "query"
59 async with async_capture_span(
60 name, leaf=True, span_type="db", span_subtype="postgres", span_action=action, extra=context
61 ):
62 return await wrapped(*args, **kwargs)
63
[end of elasticapm/instrumentation/packages/asyncio/asyncpg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/asyncio/asyncpg.py b/elasticapm/instrumentation/packages/asyncio/asyncpg.py
--- a/elasticapm/instrumentation/packages/asyncio/asyncpg.py
+++ b/elasticapm/instrumentation/packages/asyncio/asyncpg.py
@@ -31,6 +31,7 @@
from elasticapm.contrib.asyncio.traces import async_capture_span
from elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule
from elasticapm.instrumentation.packages.dbapi2 import extract_signature
+from elasticapm.utils import default_ports
class AsyncPGInstrumentation(AsyncAbstractInstrumentedModule):
@@ -56,6 +57,12 @@
name = extract_signature(query)
context = {"db": {"type": "sql", "statement": query}}
action = "query"
+ destination_info = {
+ "address": kwargs.get("host", "localhost"),
+ "port": int(kwargs.get("port", default_ports.get("postgresql"))),
+ "service": {"name": "postgres", "resource": "postgres", "type": "db"},
+ }
+ context["destination"] = destination_info
async with async_capture_span(
name, leaf=True, span_type="db", span_subtype="postgres", span_action=action, extra=context
):
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/asyncio/asyncpg.py b/elasticapm/instrumentation/packages/asyncio/asyncpg.py\n--- a/elasticapm/instrumentation/packages/asyncio/asyncpg.py\n+++ b/elasticapm/instrumentation/packages/asyncio/asyncpg.py\n@@ -31,6 +31,7 @@\n from elasticapm.contrib.asyncio.traces import async_capture_span\n from elasticapm.instrumentation.packages.asyncio.base import AsyncAbstractInstrumentedModule\n from elasticapm.instrumentation.packages.dbapi2 import extract_signature\n+from elasticapm.utils import default_ports\n \n \n class AsyncPGInstrumentation(AsyncAbstractInstrumentedModule):\n@@ -56,6 +57,12 @@\n name = extract_signature(query)\n context = {\"db\": {\"type\": \"sql\", \"statement\": query}}\n action = \"query\"\n+ destination_info = {\n+ \"address\": kwargs.get(\"host\", \"localhost\"),\n+ \"port\": int(kwargs.get(\"port\", default_ports.get(\"postgresql\"))),\n+ \"service\": {\"name\": \"postgres\", \"resource\": \"postgres\", \"type\": \"db\"},\n+ }\n+ context[\"destination\"] = destination_info\n async with async_capture_span(\n name, leaf=True, span_type=\"db\", span_subtype=\"postgres\", span_action=action, extra=context\n ):\n", "issue": "Python agent should show database as a dependency, shows up in span\n**Is your feature request related to a problem? Please describe.**\r\nI can see Postgres under **Time spent by span type** graph but it is not shown as a dependency. Rails agent is able to show this as Postgresql\r\n\r\n<img width=\"397\" alt=\"Screenshot 2021-04-24 at 12 03 10 PM\" src=\"https://user-images.githubusercontent.com/36618391/115949791-14edbe80-a4f5-11eb-999a-9b2880460abc.png\">\r\n\r\nI am using asyncpg==0.21.0\r\n\r\n**Describe the solution you'd like**\r\nThe database(Postgres) should show up as a dependency just like it shows up for ruby agent. Screenshot attached below.\r\n\r\n<img width=\"979\" alt=\"Screenshot 2021-04-24 at 11 40 09 AM\" src=\"https://user-images.githubusercontent.com/36618391/115949754-d526d700-a4f4-11eb-9d43-bad114aad4eb.png\">\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
gh_patches_debug_23294 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5275 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecate CKV_GCP_67
**Describe the issue**
CKV_GCP_67: https://docs.bridgecrew.io/docs/ensure-legacy-compute-engine-instance-metadata-apis-are-disabled
https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#protect_node_metadata
As per this article: `The v0.1 and v1beta1 Compute Engine metadata server endpoints were deprecated and shutdown on September 30, 2020.`
</issue>
<code>
[start of checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py]
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.common.util.type_forcers import force_float
3 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
4
5
6 class GKELegacyInstanceMetadataDisabled(BaseResourceValueCheck):
7
8 def __init__(self):
9 name = "Ensure legacy Compute Engine instance metadata APIs are Disabled"
10 id = "CKV_GCP_67"
11 supported_resources = ['google_container_cluster']
12 categories = [CheckCategories.KUBERNETES]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def scan_resource_conf(self, conf):
16 """
17 looks for min_master_version =1.12 which ensures that legacy metadata endpoints are disabled
18 https://www.terraform.io/docs/providers/google/r/compute_ssl_policy.html
19 :param conf: google_container_cluster configuration
20 :return: <CheckResult>
21 """
22 if 'min_master_version' in conf:
23 min_master_version = force_float(conf.get('min_master_version')[0])
24 if min_master_version and min_master_version >= 1.12:
25 return CheckResult.PASSED
26
27 return CheckResult.FAILED
28
29 def get_inspected_key(self):
30 return 'min_master_version'
31
32 def get_expected_value(self):
33 return "1.12"
34
35
36 check = GKELegacyInstanceMetadataDisabled()
37
[end of checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py b/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py
deleted file mode 100644
--- a/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py
+++ /dev/null
@@ -1,36 +0,0 @@
-from checkov.common.models.enums import CheckResult, CheckCategories
-from checkov.common.util.type_forcers import force_float
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
-
-
-class GKELegacyInstanceMetadataDisabled(BaseResourceValueCheck):
-
- def __init__(self):
- name = "Ensure legacy Compute Engine instance metadata APIs are Disabled"
- id = "CKV_GCP_67"
- supported_resources = ['google_container_cluster']
- categories = [CheckCategories.KUBERNETES]
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
-
- def scan_resource_conf(self, conf):
- """
- looks for min_master_version =1.12 which ensures that legacy metadata endpoints are disabled
- https://www.terraform.io/docs/providers/google/r/compute_ssl_policy.html
- :param conf: google_container_cluster configuration
- :return: <CheckResult>
- """
- if 'min_master_version' in conf:
- min_master_version = force_float(conf.get('min_master_version')[0])
- if min_master_version and min_master_version >= 1.12:
- return CheckResult.PASSED
-
- return CheckResult.FAILED
-
- def get_inspected_key(self):
- return 'min_master_version'
-
- def get_expected_value(self):
- return "1.12"
-
-
-check = GKELegacyInstanceMetadataDisabled()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py b/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py\ndeleted file mode 100644\n--- a/checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py\n+++ /dev/null\n@@ -1,36 +0,0 @@\n-from checkov.common.models.enums import CheckResult, CheckCategories\n-from checkov.common.util.type_forcers import force_float\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n-\n-\n-class GKELegacyInstanceMetadataDisabled(BaseResourceValueCheck):\n-\n- def __init__(self):\n- name = \"Ensure legacy Compute Engine instance metadata APIs are Disabled\"\n- id = \"CKV_GCP_67\"\n- supported_resources = ['google_container_cluster']\n- categories = [CheckCategories.KUBERNETES]\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n-\n- def scan_resource_conf(self, conf):\n- \"\"\"\n- looks for min_master_version =1.12 which ensures that legacy metadata endpoints are disabled\n- https://www.terraform.io/docs/providers/google/r/compute_ssl_policy.html\n- :param conf: google_container_cluster configuration\n- :return: <CheckResult>\n- \"\"\"\n- if 'min_master_version' in conf:\n- min_master_version = force_float(conf.get('min_master_version')[0])\n- if min_master_version and min_master_version >= 1.12:\n- return CheckResult.PASSED\n-\n- return CheckResult.FAILED\n-\n- def get_inspected_key(self):\n- return 'min_master_version'\n-\n- def get_expected_value(self):\n- return \"1.12\"\n-\n-\n-check = GKELegacyInstanceMetadataDisabled()\n", "issue": "Deprecate CKV_GCP_67\n**Describe the issue**\r\n\r\nCKV_GCP_67: https://docs.bridgecrew.io/docs/ensure-legacy-compute-engine-instance-metadata-apis-are-disabled\r\n\r\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster#protect_node_metadata\r\n\r\nAs per this article: `The v0.1 and v1beta1 Compute Engine metadata server endpoints were deprecated and shutdown on September 30, 2020.`\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.common.util.type_forcers import force_float\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass GKELegacyInstanceMetadataDisabled(BaseResourceValueCheck):\n\n def __init__(self):\n name = \"Ensure legacy Compute Engine instance metadata APIs are Disabled\"\n id = \"CKV_GCP_67\"\n supported_resources = ['google_container_cluster']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n \"\"\"\n looks for min_master_version =1.12 which ensures that legacy metadata endpoints are disabled\n https://www.terraform.io/docs/providers/google/r/compute_ssl_policy.html\n :param conf: google_container_cluster configuration\n :return: <CheckResult>\n \"\"\"\n if 'min_master_version' in conf:\n min_master_version = force_float(conf.get('min_master_version')[0])\n if min_master_version and min_master_version >= 1.12:\n return CheckResult.PASSED\n\n return CheckResult.FAILED\n\n def get_inspected_key(self):\n return 'min_master_version'\n\n def get_expected_value(self):\n return \"1.12\"\n\n\ncheck = GKELegacyInstanceMetadataDisabled()\n", "path": "checkov/terraform/checks/resource/gcp/GKELegacyInstanceMetadataDisabled.py"}]} | 1,038 | 418 |
gh_patches_debug_13425 | rasdani/github-patches | git_diff | python-poetry__poetry-7140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make config file relocation instructions more explicit
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
## Issue
After upgrading from `1.1` to `1.2` I received the following message:
```
Configuration file exists at /Users/xxx/Library/Application Support/pypoetry, reusing this directory.
Consider moving configuration to /Users/xxx/Library/Preferences/pypoetry, as support for the legacy directory will be removed in an upcoming release.
```
Similar to #6854, I (naively) assumed, based on the above message, that the entire directory was configuration-related and therefore moved it from `~/Library/Application Support/` to `~/Library/Preferences`.
Of course this led to poetry no longer functioning.
If an automatic move of the config file is not in the cards, the warning message at least needs to be explicit about which file(s) actually need to be moved.
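In practice only the two TOML files named in `locations.py` (`config.toml` and `auth.toml`) need to move. A small illustrative sketch of that move on macOS — the file and directory names come from the code below, everything else is assumed:

```python
from pathlib import Path
import shutil

legacy = Path.home() / "Library" / "Application Support" / "pypoetry"
new = Path.home() / "Library" / "Preferences" / "pypoetry"
new.mkdir(parents=True, exist_ok=True)

# Only the TOML configuration files are read from the legacy config directory.
for name in ("config.toml", "auth.toml"):
    src = legacy / name
    if src.exists():
        shutil.move(str(src), new / name)
```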
</issue>
<code>
[start of src/poetry/locations.py]
1 from __future__ import annotations
2
3 import logging
4 import os
5 import sys
6
7 from pathlib import Path
8
9 from platformdirs import user_cache_path
10 from platformdirs import user_config_path
11 from platformdirs import user_data_path
12
13
14 logger = logging.getLogger(__name__)
15
16 _APP_NAME = "pypoetry"
17
18 DEFAULT_CACHE_DIR = user_cache_path(_APP_NAME, appauthor=False)
19 CONFIG_DIR = Path(
20 os.getenv("POETRY_CONFIG_DIR")
21 or user_config_path(_APP_NAME, appauthor=False, roaming=True)
22 )
23
24 # platformdirs 2.0.0 corrected the OSX/macOS config directory from
25 # /Users/<user>/Library/Application Support/<appname> to
26 # /Users/<user>/Library/Preferences/<appname>.
27 #
28 # For now we only deprecate use of the old directory.
29 if sys.platform == "darwin":
30 _LEGACY_CONFIG_DIR = CONFIG_DIR.parent.parent / "Application Support" / _APP_NAME
31 config_toml = _LEGACY_CONFIG_DIR / "config.toml"
32 auth_toml = _LEGACY_CONFIG_DIR / "auth.toml"
33
34 if any(file.exists() for file in (auth_toml, config_toml)):
35 logger.warning(
36 (
37 "Configuration file exists at %s, reusing this directory.\n\nConsider"
38 " moving configuration to %s, as support for the legacy directory will"
39 " be removed in an upcoming release."
40 ),
41 _LEGACY_CONFIG_DIR,
42 CONFIG_DIR,
43 )
44 CONFIG_DIR = _LEGACY_CONFIG_DIR
45
46
47 def data_dir() -> Path:
48 poetry_home = os.getenv("POETRY_HOME")
49 if poetry_home:
50 return Path(poetry_home).expanduser()
51
52 return user_data_path(_APP_NAME, appauthor=False, roaming=True)
53
[end of src/poetry/locations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/poetry/locations.py b/src/poetry/locations.py
--- a/src/poetry/locations.py
+++ b/src/poetry/locations.py
@@ -34,9 +34,12 @@
if any(file.exists() for file in (auth_toml, config_toml)):
logger.warning(
(
- "Configuration file exists at %s, reusing this directory.\n\nConsider"
- " moving configuration to %s, as support for the legacy directory will"
- " be removed in an upcoming release."
+ (
+ "Configuration file exists at %s, reusing this"
+ " directory.\n\nConsider moving TOML configuration files to %s, as"
+ " support for the legacy directory will be removed in an upcoming"
+ " release."
+ ),
),
_LEGACY_CONFIG_DIR,
CONFIG_DIR,
| {"golden_diff": "diff --git a/src/poetry/locations.py b/src/poetry/locations.py\n--- a/src/poetry/locations.py\n+++ b/src/poetry/locations.py\n@@ -34,9 +34,12 @@\n if any(file.exists() for file in (auth_toml, config_toml)):\n logger.warning(\n (\n- \"Configuration file exists at %s, reusing this directory.\\n\\nConsider\"\n- \" moving configuration to %s, as support for the legacy directory will\"\n- \" be removed in an upcoming release.\"\n+ (\n+ \"Configuration file exists at %s, reusing this\"\n+ \" directory.\\n\\nConsider moving TOML configuration files to %s, as\"\n+ \" support for the legacy directory will be removed in an upcoming\"\n+ \" release.\"\n+ ),\n ),\n _LEGACY_CONFIG_DIR,\n CONFIG_DIR,\n", "issue": "Make config file relocation instructions more explicit\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n\r\n## Issue\r\n\r\nAfter upgrading from `1.1` to `1.2` I received the following message:\r\n```\r\nConfiguration file exists at /Users/xxx/Library/Application Support/pypoetry, reusing this directory.\r\n\r\nConsider moving configuration to /Users/xxx/Library/Preferences/pypoetry, as support for the legacy directory will be removed in an upcoming release.\r\n```\r\n\r\nSimilar to #6854 I (naively) assumed (based on above message) that the entire directory was configuration related and therefore moved it from `~/Library/Application Support/` to `~/Library/Preferences`.\r\n\r\nOf course this lead to poetry no longer functioning.\r\n\r\nIf an automatic move of the config file is not in the cards, at least the warning message needs to be more explicit what file(s) actually need to be moved.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport logging\nimport os\nimport sys\n\nfrom pathlib import Path\n\nfrom platformdirs import user_cache_path\nfrom platformdirs import user_config_path\nfrom platformdirs import user_data_path\n\n\nlogger = logging.getLogger(__name__)\n\n_APP_NAME = \"pypoetry\"\n\nDEFAULT_CACHE_DIR = user_cache_path(_APP_NAME, appauthor=False)\nCONFIG_DIR = Path(\n os.getenv(\"POETRY_CONFIG_DIR\")\n or user_config_path(_APP_NAME, appauthor=False, roaming=True)\n)\n\n# platformdirs 2.0.0 corrected the OSX/macOS config directory from\n# /Users/<user>/Library/Application Support/<appname> to\n# /Users/<user>/Library/Preferences/<appname>.\n#\n# For now we only deprecate use of the old directory.\nif sys.platform == \"darwin\":\n _LEGACY_CONFIG_DIR = CONFIG_DIR.parent.parent / \"Application Support\" / _APP_NAME\n config_toml = _LEGACY_CONFIG_DIR / \"config.toml\"\n auth_toml = _LEGACY_CONFIG_DIR / \"auth.toml\"\n\n if any(file.exists() for file in (auth_toml, config_toml)):\n logger.warning(\n (\n \"Configuration file exists at %s, reusing this directory.\\n\\nConsider\"\n \" moving configuration to %s, as support for the legacy directory will\"\n \" be removed in an upcoming release.\"\n ),\n _LEGACY_CONFIG_DIR,\n CONFIG_DIR,\n )\n CONFIG_DIR = _LEGACY_CONFIG_DIR\n\n\ndef data_dir() -> Path:\n poetry_home = os.getenv(\"POETRY_HOME\")\n if poetry_home:\n return Path(poetry_home).expanduser()\n\n return user_data_path(_APP_NAME, appauthor=False, roaming=True)\n", "path": "src/poetry/locations.py"}]} | 1,232 | 197 |
gh_patches_debug_17079 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-2953 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use CommandParser for configmodel.bind
In the `new-config` branch, there's a `CommandParser` in `commands/runners.py` which got split off from `CommandRunner` and can be used standalone. Various places doing their own command parsing got updated accordingly, but `configmodel.bind` is still doing its own parsing. An example of how it's used, from `:bind`:
https://github.com/qutebrowser/qutebrowser/blob/2117824cf9fdc47ea6fd9457c12cecbac117202e/qutebrowser/config/config.py#L179-L189
Split off from #2779, cc @rcorre - if you want to take a look at this, feel free to do a PR against `new-config`, or wait until that's merged and then do one against `master`.
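The requested change essentially replaces the manual `split()`/`cmd_dict` lookup in `configmodel.bind` with the shared parser. A rough sketch of the intended shape — the helper name is made up for illustration, and the accepted patch below is the authoritative version:

```python
from qutebrowser.commands import runners

def _bound_command(cmd_text):
    # Hypothetical helper: let the shared CommandParser resolve the command
    # instead of splitting the text on whitespace and looking the name up in
    # cmdutils.cmd_dict.
    parser = runners.CommandParser()
    return parser.parse(cmd_text).cmd
```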
</issue>
<code>
[start of qutebrowser/completion/models/configmodel.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Functions that return config-related completion models."""
21
22 from qutebrowser.config import configdata, configexc
23 from qutebrowser.completion.models import completionmodel, listcategory, util
24 from qutebrowser.commands import cmdutils
25
26
27 def option(*, info):
28 """A CompletionModel filled with settings and their descriptions."""
29 model = completionmodel.CompletionModel(column_widths=(20, 70, 10))
30 options = ((opt.name, opt.description, info.config.get_str(opt.name))
31 for opt in configdata.DATA.values())
32 model.add_category(listcategory.ListCategory("Options", sorted(options)))
33 return model
34
35
36 def value(optname, *_values, info):
37 """A CompletionModel filled with setting values.
38
39 Args:
40 optname: The name of the config option this model shows.
41 _values: The values already provided on the command line.
42 info: A CompletionInfo instance.
43 """
44 model = completionmodel.CompletionModel(column_widths=(30, 70, 0))
45
46 try:
47 current = info.config.get_str(optname) or '""'
48 except configexc.NoOptionError:
49 return None
50
51 opt = info.config.get_opt(optname)
52 default = opt.typ.to_str(opt.default)
53 cur_cat = listcategory.ListCategory("Current/Default",
54 [(current, "Current value"), (default, "Default value")])
55 model.add_category(cur_cat)
56
57 vals = opt.typ.complete()
58 if vals is not None:
59 model.add_category(listcategory.ListCategory("Completions",
60 sorted(vals)))
61 return model
62
63
64 def bind(key, *, info):
65 """A CompletionModel filled with all bindable commands and descriptions.
66
67 Args:
68 key: the key being bound.
69 """
70 model = completionmodel.CompletionModel(column_widths=(20, 60, 20))
71 cmd_text = info.keyconf.get_command(key, 'normal')
72
73 if cmd_text:
74 cmd_name = cmd_text.split(' ')[0]
75 cmd = cmdutils.cmd_dict.get(cmd_name)
76 data = [(cmd_text, cmd.desc, key)]
77 model.add_category(listcategory.ListCategory("Current", data))
78
79 cmdlist = util.get_cmd_completions(info, include_hidden=True,
80 include_aliases=True)
81 model.add_category(listcategory.ListCategory("Commands", cmdlist))
82 return model
83
[end of qutebrowser/completion/models/configmodel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutebrowser/completion/models/configmodel.py b/qutebrowser/completion/models/configmodel.py
--- a/qutebrowser/completion/models/configmodel.py
+++ b/qutebrowser/completion/models/configmodel.py
@@ -21,7 +21,7 @@
from qutebrowser.config import configdata, configexc
from qutebrowser.completion.models import completionmodel, listcategory, util
-from qutebrowser.commands import cmdutils
+from qutebrowser.commands import runners
def option(*, info):
@@ -71,8 +71,8 @@
cmd_text = info.keyconf.get_command(key, 'normal')
if cmd_text:
- cmd_name = cmd_text.split(' ')[0]
- cmd = cmdutils.cmd_dict.get(cmd_name)
+ parser = runners.CommandParser()
+ cmd = parser.parse(cmd_text).cmd
data = [(cmd_text, cmd.desc, key)]
model.add_category(listcategory.ListCategory("Current", data))
| {"golden_diff": "diff --git a/qutebrowser/completion/models/configmodel.py b/qutebrowser/completion/models/configmodel.py\n--- a/qutebrowser/completion/models/configmodel.py\n+++ b/qutebrowser/completion/models/configmodel.py\n@@ -21,7 +21,7 @@\n \n from qutebrowser.config import configdata, configexc\n from qutebrowser.completion.models import completionmodel, listcategory, util\n-from qutebrowser.commands import cmdutils\n+from qutebrowser.commands import runners\n \n \n def option(*, info):\n@@ -71,8 +71,8 @@\n cmd_text = info.keyconf.get_command(key, 'normal')\n \n if cmd_text:\n- cmd_name = cmd_text.split(' ')[0]\n- cmd = cmdutils.cmd_dict.get(cmd_name)\n+ parser = runners.CommandParser()\n+ cmd = parser.parse(cmd_text).cmd\n data = [(cmd_text, cmd.desc, key)]\n model.add_category(listcategory.ListCategory(\"Current\", data))\n", "issue": "Use CommandParser for configmodel.bind\nIn the `new-config` branch, there's a `CommandParser` in `commands/runners.py` which got split off from `CommandRunner` and can be used standalone. Various places doing their own command parsing got updated accordingly, but `configmodel.bind` is still doing its own parsing. An example how it's used, from `:bind`:\r\n\r\nhttps://github.com/qutebrowser/qutebrowser/blob/2117824cf9fdc47ea6fd9457c12cecbac117202e/qutebrowser/config/config.py#L179-L189\r\n\r\nSplit off from #2779, cc @rcorre - if you want to take a look at this, feel free to do a PR against `new-config`, or wait until that's merged and then do one against `master`.\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
gh_patches_debug_198 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-2900 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
E1017 Select does not find already supported function when using complex list with nested Selects
### CloudFormation Lint Version
cfn-lint 0.80.4
### What operating system are you using?
Mac
### Describe the bug
When linting a template that uses nested `Fn::Select` calls with a literal list of lists to extract a value from, cfn-lint reports E1017 even though it should not. The templates deploy correctly and work fine on my side.
Output from the command:
```
E1017 Select should use a supported function of Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::If, Fn::Split, Fn::Cidr, Ref
/file1.yml:3189:11
```
### Expected behavior
No E1017 reported by cfn-lint.
The template works fine in CloudFormation, so E1017 should not be reported.
### Reproduction template
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Build EC2 instance'
Resources:
MountTarget1:
Type: AWS::EFS::MountTarget
Properties:
FileSystemId: fs-1234567svsdabsf76s
# E1017 STARTS HERE
SubnetId: !Select
- 0
- !Select
- 0
- [
[
"subnet-0987sknlnsdoi9j76",
"subnet-875jgyjlpzj75j8k0",
"subnet-5447hnd6hI8js45js"
],
[
"subnet-0987sknlnsdoi9j76",
"subnet-875jgyjlpzj75j8k0",
"subnet-5447hnd6hI8js45js"
],
[
"subnet-0987sknlnsdoi9j76",
"subnet-875jgyjlpzj75j8k0",
"subnet-5447hnd6hI8js45js"
]
]
SecurityGroups: [sg-00qdqeef0a5c345gf]
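The nested `!Select` above is rejected because `Fn::Select` itself is missing from the rule's `supported_functions` list (see `Select.py` below). Allowing it is essentially a one-line addition, sketched here; the accepted patch below is the authoritative change:

```python
supported_functions = [
    "Fn::FindInMap",
    "Fn::GetAtt",
    "Fn::GetAZs",
    "Fn::If",
    "Fn::Split",
    "Fn::Cidr",
    "Fn::Select",  # permit nested Select expressions
    "Ref",
]
```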
</issue>
<code>
[start of src/cfnlint/rules/functions/Select.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from cfnlint.rules import CloudFormationLintRule, RuleMatch
6
7
8 class Select(CloudFormationLintRule):
9 """Check if Select values are correct"""
10
11 id = "E1017"
12 shortdesc = "Select validation of parameters"
13 description = "Making sure the Select function is properly configured"
14 source_url = "https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-select.html"
15 tags = ["functions", "select"]
16 supported_functions = [
17 "Fn::FindInMap",
18 "Fn::GetAtt",
19 "Fn::GetAZs",
20 "Fn::If",
21 "Fn::Split",
22 "Fn::Cidr",
23 "Ref",
24 ]
25
26 def _test_index_obj(self, index_obj, path):
27 matches = []
28 if isinstance(index_obj, dict):
29 if len(index_obj) == 1:
30 for index_key, _ in index_obj.items():
31 if index_key not in [
32 "Ref",
33 "Fn::FindInMap",
34 "Fn::Select",
35 ]:
36 message = "Select index should be an Integer or a function Ref, Fn::FindInMap, or Fn::Select for {0}"
37 matches.append(
38 RuleMatch(
39 path,
40 message.format("/".join(map(str, path))),
41 )
42 )
43 else:
44 message = "Select index should be an Integer or a function Ref, Fn::FindInMap, or Fn::Select for {0}"
45 matches.append(
46 RuleMatch(
47 path,
48 message.format("/".join(map(str, path))),
49 )
50 )
51 elif not isinstance(index_obj, int):
52 try:
53 int(index_obj)
54 except (ValueError, TypeError):
55 message = "Select index should be an Integer or a function of Ref, Fn::FindInMap, or Fn::Select for {0}"
56 matches.append(
57 RuleMatch(path, message.format("/".join(map(str, path))))
58 )
59
60 return matches
61
62 def _test_list_obj(self, list_obj, path):
63 matches = []
64 if isinstance(list_obj, dict):
65 if len(list_obj) == 1:
66 for key, _ in list_obj.items():
67 if key not in self.supported_functions:
68 message = "Select should use a supported function of {0}"
69 matches.append(
70 RuleMatch(
71 path,
72 message.format(
73 ", ".join(map(str, self.supported_functions))
74 ),
75 )
76 )
77 else:
78 message = "Select should use a supported function of {0}"
79 matches.append(
80 RuleMatch(
81 path,
82 message.format(", ".join(map(str, self.supported_functions))),
83 )
84 )
85 elif not isinstance(list_obj, list):
86 message = "Select should be an array of values for {0}"
87 matches.append(RuleMatch(path, message.format("/".join(map(str, path)))))
88
89 return matches
90
91 def _test_select_obj(self, select_obj, path):
92 matches = []
93 if not isinstance(select_obj, list):
94 message = "Select should be a list of 2 elements for {0}"
95 matches.append(RuleMatch(path, message.format("/".join(map(str, path)))))
96 return matches
97 if len(select_obj) != 2:
98 message = "Select should be a list of 2 elements for {0}"
99 matches.append(RuleMatch(path, message.format("/".join(map(str, path)))))
100 return matches
101
102 index_obj = select_obj[0]
103 list_of_objs = select_obj[1]
104 matches.extend(self._test_index_obj(index_obj, path[:] + [0]))
105 matches.extend(self._test_list_obj(list_of_objs, path[:] + [1]))
106
107 return matches
108
109 def match(self, cfn):
110 matches = []
111
112 select_objs = cfn.search_deep_keys("Fn::Select")
113
114 for select_obj in select_objs:
115 select_value_obj = select_obj[-1]
116 tree = select_obj[:-1]
117 matches.extend(self._test_select_obj(select_value_obj, tree[:]))
118
119 return matches
120
[end of src/cfnlint/rules/functions/Select.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/functions/Select.py b/src/cfnlint/rules/functions/Select.py
--- a/src/cfnlint/rules/functions/Select.py
+++ b/src/cfnlint/rules/functions/Select.py
@@ -20,6 +20,7 @@
"Fn::If",
"Fn::Split",
"Fn::Cidr",
+ "Fn::Select", # issue: 2895
"Ref",
]
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/Select.py b/src/cfnlint/rules/functions/Select.py\n--- a/src/cfnlint/rules/functions/Select.py\n+++ b/src/cfnlint/rules/functions/Select.py\n@@ -20,6 +20,7 @@\n \"Fn::If\",\n \"Fn::Split\",\n \"Fn::Cidr\",\n+ \"Fn::Select\", # issue: 2895\n \"Ref\",\n ]\n", "issue": "E1017 Select does not find already supported function when using complex list with nested Selects\n### CloudFormation Lint Version\n\ncfn-lint 0.80.4\n\n### What operating system are you using?\n\nMac\n\n### Describe the bug\n\nWhen launching a template with complex nested Selects and list to extract value from, it seems to be reporting E1017 while it should not. Templates are correctly deployed and work fine on my side.\r\n\r\nOutput from command\r\n```\r\nE1017 Select should use a supported function of Fn::FindInMap, Fn::GetAtt, Fn::GetAZs, Fn::If, Fn::Split, Fn::Cidr, Ref\r\n/file1.yml:3189:11\r\n```\n\n### Expected behavior\n\nNo E1017 reported by cfn-lint.\r\nTemplate is working fine in Cloudformation, E1017 should not be reported.\n\n### Reproduction template\n\nAWSTemplateFormatVersion: '2010-09-09'\r\nDescription: 'Build EC2 instance'\r\n\r\nResources:\r\n MountTarget1:\r\n Type: AWS::EFS::MountTarget\r\n Properties:\r\n FileSystemId: fs-1234567svsdabsf76s\r\n# E1017 STARTS HERE\r\n SubnetId: !Select\r\n - 0\r\n - !Select\r\n - 0\r\n - [\r\n [\r\n \"subnet-0987sknlnsdoi9j76\",\r\n \"subnet-875jgyjlpzj75j8k0\",\r\n \"subnet-5447hnd6hI8js45js\"\r\n ],\r\n [\r\n \"subnet-0987sknlnsdoi9j76\",\r\n \"subnet-875jgyjlpzj75j8k0\",\r\n \"subnet-5447hnd6hI8js45js\"\r\n ],\r\n [\r\n \"subnet-0987sknlnsdoi9j76\",\r\n \"subnet-875jgyjlpzj75j8k0\",\r\n \"subnet-5447hnd6hI8js45js\"\r\n ] \r\n ]\r\n SecurityGroups: [sg-00qdqeef0a5c345gf]\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
gh_patches_debug_37807 | rasdani/github-patches | git_diff | getsentry__sentry-55943 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add subscription for team in `models/groupsubscription`
Two updates we need to make here:
1) Update `subscribe_actor` (https://github.com/getsentry/sentry/blob/master/src/sentry/models/groupsubscription.py#L67) so that it no longer fans out to all of the team's users and instead subscribes the Team itself
2) Update `bulk_subscribe` to allow bulk subscribing teams, instead of just users (a rough signature sketch follows below)
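A rough sketch of the direction for item 2, with parameter names assumed and the feature-flag handling omitted; the accepted patch below is the authoritative version:

```python
# Hypothetical shape: bulk_subscribe gains an optional team_ids argument next
# to user_ids and creates team-based GroupSubscription rows for teams that are
# not already subscribed. reason=0 stands in for GroupSubscriptionReason.unknown.
def bulk_subscribe(self, group, user_ids=None, team_ids=None, reason=0):
    user_ids = set(user_ids or [])
    team_ids = set(team_ids or [])
    ...
```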
</issue>
<code>
[start of src/sentry/models/groupsubscription.py]
1 from __future__ import annotations
2
3 from typing import TYPE_CHECKING, Iterable, Optional, Sequence, Union
4
5 from django.conf import settings
6 from django.db import IntegrityError, models, router, transaction
7 from django.utils import timezone
8
9 from sentry.backup.scopes import RelocationScope
10 from sentry.db.models import (
11 BaseManager,
12 BoundedPositiveIntegerField,
13 FlexibleForeignKey,
14 Model,
15 region_silo_only_model,
16 sane_repr,
17 )
18 from sentry.db.models.fields.hybrid_cloud_foreign_key import HybridCloudForeignKey
19 from sentry.notifications.helpers import (
20 transform_to_notification_settings_by_recipient,
21 where_should_be_participating,
22 )
23 from sentry.notifications.types import GroupSubscriptionReason, NotificationSettingTypes
24 from sentry.services.hybrid_cloud.actor import RpcActor
25 from sentry.services.hybrid_cloud.notifications import notifications_service
26 from sentry.services.hybrid_cloud.user import RpcUser
27
28 if TYPE_CHECKING:
29 from sentry.models import Group, Team, User
30 from sentry.notifications.utils.participants import ParticipantMap
31
32
33 class GroupSubscriptionManager(BaseManager):
34 def subscribe(
35 self,
36 group: Group,
37 subscriber: User | RpcUser | Team,
38 reason: int = GroupSubscriptionReason.unknown,
39 ) -> bool:
40 """
41 Subscribe a user or team to an issue, but only if that user or team has not explicitly
42 unsubscribed.
43 """
44 from sentry.models import Team, User
45
46 try:
47 with transaction.atomic(router.db_for_write(GroupSubscription)):
48 if isinstance(subscriber, (User, RpcUser)):
49 self.create(
50 user_id=subscriber.id,
51 group=group,
52 project=group.project,
53 is_active=True,
54 reason=reason,
55 )
56 elif isinstance(subscriber, Team):
57 self.create(
58 team=subscriber,
59 group=group,
60 project=group.project,
61 is_active=True,
62 reason=reason,
63 )
64 except IntegrityError:
65 pass
66 return True
67
68 def subscribe_actor(
69 self,
70 group: Group,
71 actor: Union[Team, User, RpcUser],
72 reason: int = GroupSubscriptionReason.unknown,
73 ) -> Optional[bool]:
74 from sentry import features
75 from sentry.models import Team, User
76
77 if isinstance(actor, (RpcUser, User)):
78 return self.subscribe(group, actor, reason)
79 if isinstance(actor, Team):
80 if features.has("organizations:team-workflow-notifications", group.organization):
81 return self.subscribe(group, actor, reason)
82 else:
83 # subscribe the members of the team
84 team_users_ids = list(actor.member_set.values_list("user_id", flat=True))
85 return self.bulk_subscribe(group, team_users_ids, reason)
86
87 raise NotImplementedError("Unknown actor type: %r" % type(actor))
88
89 def bulk_subscribe(
90 self,
91 group: Group,
92 user_ids: Iterable[int],
93 reason: int = GroupSubscriptionReason.unknown,
94 ) -> bool:
95 """
96 Subscribe a list of user ids to an issue, but only if the users are not explicitly
97 unsubscribed.
98 """
99 # Unique the IDs.
100 user_ids = set(user_ids)
101
102 # 5 retries for race conditions where
103 # concurrent subscription attempts cause integrity errors
104 for i in range(4, -1, -1): # 4 3 2 1 0
105
106 existing_subscriptions = set(
107 GroupSubscription.objects.filter(
108 user_id__in=user_ids, group=group, project=group.project
109 ).values_list("user_id", flat=True)
110 )
111
112 subscriptions = [
113 GroupSubscription(
114 user_id=user_id,
115 group=group,
116 project=group.project,
117 is_active=True,
118 reason=reason,
119 )
120 for user_id in user_ids
121 if user_id not in existing_subscriptions
122 ]
123
124 try:
125 with transaction.atomic(router.db_for_write(GroupSubscription)):
126 self.bulk_create(subscriptions)
127 return True
128 except IntegrityError as e:
129 if i == 0:
130 raise e
131 return False
132
133 def get_participants(self, group: Group) -> ParticipantMap:
134 """
135 Identify all users who are participating with a given issue.
136 :param group: Group object
137 """
138 from sentry.notifications.utils.participants import ParticipantMap
139
140 all_possible_users = RpcActor.many_from_object(group.project.get_members_as_rpc_users())
141 active_and_disabled_subscriptions = self.filter(
142 group=group, user_id__in=[u.id for u in all_possible_users]
143 )
144
145 notification_settings = notifications_service.get_settings_for_recipient_by_parent(
146 type=NotificationSettingTypes.WORKFLOW,
147 recipients=all_possible_users,
148 parent_id=group.project_id,
149 )
150 subscriptions_by_user_id = {
151 subscription.user_id: subscription for subscription in active_and_disabled_subscriptions
152 }
153 notification_settings_by_recipient = transform_to_notification_settings_by_recipient(
154 notification_settings, all_possible_users
155 )
156
157 result = ParticipantMap()
158 for user in all_possible_users:
159 subscription_option = subscriptions_by_user_id.get(user.id)
160 providers = where_should_be_participating(
161 user,
162 subscription_option,
163 notification_settings_by_recipient,
164 )
165 for provider in providers:
166 reason = (
167 subscription_option
168 and subscription_option.reason
169 or GroupSubscriptionReason.implicit
170 )
171 result.add(provider, user, reason)
172
173 return result
174
175 @staticmethod
176 def get_participating_user_ids(group: Group) -> Sequence[int]:
177 """Return the list of user ids participating in this issue."""
178
179 return list(
180 GroupSubscription.objects.filter(group=group, is_active=True).values_list(
181 "user_id", flat=True
182 )
183 )
184
185
186 @region_silo_only_model
187 class GroupSubscription(Model):
188 """
189 Identifies a subscription relationship between a user and an issue.
190 """
191
192 __relocation_scope__ = RelocationScope.Excluded
193
194 project = FlexibleForeignKey("sentry.Project", related_name="subscription_set")
195 group = FlexibleForeignKey("sentry.Group", related_name="subscription_set")
196 user_id = HybridCloudForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete="CASCADE")
197 team = FlexibleForeignKey("sentry.Team", null=True, db_index=True, on_delete=models.CASCADE)
198 is_active = models.BooleanField(default=True)
199 reason = BoundedPositiveIntegerField(default=GroupSubscriptionReason.unknown)
200 date_added = models.DateTimeField(default=timezone.now, null=True)
201
202 objects = GroupSubscriptionManager()
203
204 class Meta:
205 app_label = "sentry"
206 db_table = "sentry_groupsubscription"
207 unique_together = (("group", "user_id"), ("group", "team"))
208 constraints = [
209 models.CheckConstraint(
210 check=models.Q(team_id__isnull=False, user_id__isnull=True)
211 | models.Q(team_id__isnull=True, user_id__isnull=False),
212 name="subscription_team_or_user_check",
213 )
214 ]
215
216 __repr__ = sane_repr("project_id", "group_id", "user_id")
217
[end of src/sentry/models/groupsubscription.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/sentry/models/groupsubscription.py b/src/sentry/models/groupsubscription.py
--- a/src/sentry/models/groupsubscription.py
+++ b/src/sentry/models/groupsubscription.py
@@ -82,22 +82,28 @@
else:
# subscribe the members of the team
team_users_ids = list(actor.member_set.values_list("user_id", flat=True))
- return self.bulk_subscribe(group, team_users_ids, reason)
+ return self.bulk_subscribe(group=group, user_ids=team_users_ids, reason=reason)
raise NotImplementedError("Unknown actor type: %r" % type(actor))
def bulk_subscribe(
self,
group: Group,
- user_ids: Iterable[int],
+ user_ids: Iterable[int] | None = None,
+ team_ids: Iterable[int] | None = None,
reason: int = GroupSubscriptionReason.unknown,
) -> bool:
"""
- Subscribe a list of user ids to an issue, but only if the users are not explicitly
+ Subscribe a list of user ids and/or teams to an issue, but only if the users/teams are not explicitly
unsubscribed.
"""
+ from sentry import features
+
# Unique the IDs.
- user_ids = set(user_ids)
+ user_ids = set(user_ids) if user_ids else set()
+
+ # Unique the teams.
+ team_ids = set(team_ids) if team_ids else set()
# 5 retries for race conditions where
# concurrent subscription attempts cause integrity errors
@@ -117,10 +123,29 @@
is_active=True,
reason=reason,
)
- for user_id in user_ids
- if user_id not in existing_subscriptions
+ for user_id in user_ids.difference(existing_subscriptions)
]
+ if features.has("organizations:team-workflow-notifications", group.organization):
+ existing_team_subscriptions = set(
+ GroupSubscription.objects.filter(
+ team_id__in=team_ids, group=group, project=group.project
+ ).values_list("team_id", flat=True)
+ )
+
+ subscriptions.extend(
+ [
+ GroupSubscription(
+ team_id=team_id,
+ group=group,
+ project=group.project,
+ is_active=True,
+ reason=reason,
+ )
+ for team_id in team_ids.difference(existing_team_subscriptions)
+ ]
+ )
+
try:
with transaction.atomic(router.db_for_write(GroupSubscription)):
self.bulk_create(subscriptions)
| {"golden_diff": "diff --git a/src/sentry/models/groupsubscription.py b/src/sentry/models/groupsubscription.py\n--- a/src/sentry/models/groupsubscription.py\n+++ b/src/sentry/models/groupsubscription.py\n@@ -82,22 +82,28 @@\n else:\n # subscribe the members of the team\n team_users_ids = list(actor.member_set.values_list(\"user_id\", flat=True))\n- return self.bulk_subscribe(group, team_users_ids, reason)\n+ return self.bulk_subscribe(group=group, user_ids=team_users_ids, reason=reason)\n \n raise NotImplementedError(\"Unknown actor type: %r\" % type(actor))\n \n def bulk_subscribe(\n self,\n group: Group,\n- user_ids: Iterable[int],\n+ user_ids: Iterable[int] | None = None,\n+ team_ids: Iterable[int] | None = None,\n reason: int = GroupSubscriptionReason.unknown,\n ) -> bool:\n \"\"\"\n- Subscribe a list of user ids to an issue, but only if the users are not explicitly\n+ Subscribe a list of user ids and/or teams to an issue, but only if the users/teams are not explicitly\n unsubscribed.\n \"\"\"\n+ from sentry import features\n+\n # Unique the IDs.\n- user_ids = set(user_ids)\n+ user_ids = set(user_ids) if user_ids else set()\n+\n+ # Unique the teams.\n+ team_ids = set(team_ids) if team_ids else set()\n \n # 5 retries for race conditions where\n # concurrent subscription attempts cause integrity errors\n@@ -117,10 +123,29 @@\n is_active=True,\n reason=reason,\n )\n- for user_id in user_ids\n- if user_id not in existing_subscriptions\n+ for user_id in user_ids.difference(existing_subscriptions)\n ]\n \n+ if features.has(\"organizations:team-workflow-notifications\", group.organization):\n+ existing_team_subscriptions = set(\n+ GroupSubscription.objects.filter(\n+ team_id__in=team_ids, group=group, project=group.project\n+ ).values_list(\"team_id\", flat=True)\n+ )\n+\n+ subscriptions.extend(\n+ [\n+ GroupSubscription(\n+ team_id=team_id,\n+ group=group,\n+ project=group.project,\n+ is_active=True,\n+ reason=reason,\n+ )\n+ for team_id in team_ids.difference(existing_team_subscriptions)\n+ ]\n+ )\n+\n try:\n with transaction.atomic(router.db_for_write(GroupSubscription)):\n self.bulk_create(subscriptions)\n", "issue": "Add subscription for team in `models/groupsubscription`\nTwo updates we need to make here:\r\n1) Update `subscribe_actor` (https://github.com/getsentry/sentry/blob/master/src/sentry/models/groupsubscription.py#L67) to no longer read all the team's users and just set subscribe the Team\r\n2) Update `bulk_subscribe` to allow for bulk subscribing teams, instead of just users\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom typing import TYPE_CHECKING, Iterable, Optional, Sequence, Union\n\nfrom django.conf import settings\nfrom django.db import IntegrityError, models, router, transaction\nfrom django.utils import timezone\n\nfrom sentry.backup.scopes import RelocationScope\nfrom sentry.db.models import (\n BaseManager,\n BoundedPositiveIntegerField,\n FlexibleForeignKey,\n Model,\n region_silo_only_model,\n sane_repr,\n)\nfrom sentry.db.models.fields.hybrid_cloud_foreign_key import HybridCloudForeignKey\nfrom sentry.notifications.helpers import (\n transform_to_notification_settings_by_recipient,\n where_should_be_participating,\n)\nfrom sentry.notifications.types import GroupSubscriptionReason, NotificationSettingTypes\nfrom sentry.services.hybrid_cloud.actor import RpcActor\nfrom sentry.services.hybrid_cloud.notifications import notifications_service\nfrom sentry.services.hybrid_cloud.user import RpcUser\n\nif TYPE_CHECKING:\n from sentry.models 
import Group, Team, User\n from sentry.notifications.utils.participants import ParticipantMap\n\n\nclass GroupSubscriptionManager(BaseManager):\n def subscribe(\n self,\n group: Group,\n subscriber: User | RpcUser | Team,\n reason: int = GroupSubscriptionReason.unknown,\n ) -> bool:\n \"\"\"\n Subscribe a user or team to an issue, but only if that user or team has not explicitly\n unsubscribed.\n \"\"\"\n from sentry.models import Team, User\n\n try:\n with transaction.atomic(router.db_for_write(GroupSubscription)):\n if isinstance(subscriber, (User, RpcUser)):\n self.create(\n user_id=subscriber.id,\n group=group,\n project=group.project,\n is_active=True,\n reason=reason,\n )\n elif isinstance(subscriber, Team):\n self.create(\n team=subscriber,\n group=group,\n project=group.project,\n is_active=True,\n reason=reason,\n )\n except IntegrityError:\n pass\n return True\n\n def subscribe_actor(\n self,\n group: Group,\n actor: Union[Team, User, RpcUser],\n reason: int = GroupSubscriptionReason.unknown,\n ) -> Optional[bool]:\n from sentry import features\n from sentry.models import Team, User\n\n if isinstance(actor, (RpcUser, User)):\n return self.subscribe(group, actor, reason)\n if isinstance(actor, Team):\n if features.has(\"organizations:team-workflow-notifications\", group.organization):\n return self.subscribe(group, actor, reason)\n else:\n # subscribe the members of the team\n team_users_ids = list(actor.member_set.values_list(\"user_id\", flat=True))\n return self.bulk_subscribe(group, team_users_ids, reason)\n\n raise NotImplementedError(\"Unknown actor type: %r\" % type(actor))\n\n def bulk_subscribe(\n self,\n group: Group,\n user_ids: Iterable[int],\n reason: int = GroupSubscriptionReason.unknown,\n ) -> bool:\n \"\"\"\n Subscribe a list of user ids to an issue, but only if the users are not explicitly\n unsubscribed.\n \"\"\"\n # Unique the IDs.\n user_ids = set(user_ids)\n\n # 5 retries for race conditions where\n # concurrent subscription attempts cause integrity errors\n for i in range(4, -1, -1): # 4 3 2 1 0\n\n existing_subscriptions = set(\n GroupSubscription.objects.filter(\n user_id__in=user_ids, group=group, project=group.project\n ).values_list(\"user_id\", flat=True)\n )\n\n subscriptions = [\n GroupSubscription(\n user_id=user_id,\n group=group,\n project=group.project,\n is_active=True,\n reason=reason,\n )\n for user_id in user_ids\n if user_id not in existing_subscriptions\n ]\n\n try:\n with transaction.atomic(router.db_for_write(GroupSubscription)):\n self.bulk_create(subscriptions)\n return True\n except IntegrityError as e:\n if i == 0:\n raise e\n return False\n\n def get_participants(self, group: Group) -> ParticipantMap:\n \"\"\"\n Identify all users who are participating with a given issue.\n :param group: Group object\n \"\"\"\n from sentry.notifications.utils.participants import ParticipantMap\n\n all_possible_users = RpcActor.many_from_object(group.project.get_members_as_rpc_users())\n active_and_disabled_subscriptions = self.filter(\n group=group, user_id__in=[u.id for u in all_possible_users]\n )\n\n notification_settings = notifications_service.get_settings_for_recipient_by_parent(\n type=NotificationSettingTypes.WORKFLOW,\n recipients=all_possible_users,\n parent_id=group.project_id,\n )\n subscriptions_by_user_id = {\n subscription.user_id: subscription for subscription in active_and_disabled_subscriptions\n }\n notification_settings_by_recipient = transform_to_notification_settings_by_recipient(\n notification_settings, all_possible_users\n 
)\n\n result = ParticipantMap()\n for user in all_possible_users:\n subscription_option = subscriptions_by_user_id.get(user.id)\n providers = where_should_be_participating(\n user,\n subscription_option,\n notification_settings_by_recipient,\n )\n for provider in providers:\n reason = (\n subscription_option\n and subscription_option.reason\n or GroupSubscriptionReason.implicit\n )\n result.add(provider, user, reason)\n\n return result\n\n @staticmethod\n def get_participating_user_ids(group: Group) -> Sequence[int]:\n \"\"\"Return the list of user ids participating in this issue.\"\"\"\n\n return list(\n GroupSubscription.objects.filter(group=group, is_active=True).values_list(\n \"user_id\", flat=True\n )\n )\n\n\n@region_silo_only_model\nclass GroupSubscription(Model):\n \"\"\"\n Identifies a subscription relationship between a user and an issue.\n \"\"\"\n\n __relocation_scope__ = RelocationScope.Excluded\n\n project = FlexibleForeignKey(\"sentry.Project\", related_name=\"subscription_set\")\n group = FlexibleForeignKey(\"sentry.Group\", related_name=\"subscription_set\")\n user_id = HybridCloudForeignKey(settings.AUTH_USER_MODEL, null=True, on_delete=\"CASCADE\")\n team = FlexibleForeignKey(\"sentry.Team\", null=True, db_index=True, on_delete=models.CASCADE)\n is_active = models.BooleanField(default=True)\n reason = BoundedPositiveIntegerField(default=GroupSubscriptionReason.unknown)\n date_added = models.DateTimeField(default=timezone.now, null=True)\n\n objects = GroupSubscriptionManager()\n\n class Meta:\n app_label = \"sentry\"\n db_table = \"sentry_groupsubscription\"\n unique_together = ((\"group\", \"user_id\"), (\"group\", \"team\"))\n constraints = [\n models.CheckConstraint(\n check=models.Q(team_id__isnull=False, user_id__isnull=True)\n | models.Q(team_id__isnull=True, user_id__isnull=False),\n name=\"subscription_team_or_user_check\",\n )\n ]\n\n __repr__ = sane_repr(\"project_id\", \"group_id\", \"user_id\")\n", "path": "src/sentry/models/groupsubscription.py"}]} | 2,673 | 560 |
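Editor's note: the golden diff in the record above widens `bulk_subscribe` so it accepts team IDs alongside user IDs and only creates rows for IDs that are not already subscribed. A framework-free sketch of just that dedup step (the ORM writes, the transaction/retry loop, and the `team-workflow-notifications` feature flag are omitted; names follow the snippet, everything else is assumed):

```python
def plan_bulk_subscribe(user_ids=None, team_ids=None,
                        existing_user_ids=frozenset(), existing_team_ids=frozenset()):
    """Return the user and team IDs that still need GroupSubscription rows."""
    user_ids = set(user_ids) if user_ids else set()    # unique the IDs
    team_ids = set(team_ids) if team_ids else set()    # unique the teams
    return (sorted(user_ids - set(existing_user_ids)),
            sorted(team_ids - set(existing_team_ids)))


if __name__ == "__main__":
    # user 2 is already subscribed, so only users 1 and 3 plus team 10 need rows
    print(plan_bulk_subscribe([1, 2, 2, 3], [10], existing_user_ids={2}))
```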
gh_patches_debug_21712 | rasdani/github-patches | git_diff | koxudaxi__datamodel-code-generator-1767 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
JSON Schema `const` value and type ignored when setting defaults for Pydantic V2
**Describe the bug**
Use of `--force-optional` clobbers `--use-one-literal-as-default`. In my opinion `--force-optional` should use defaults where they exist and only fall back to assigning to `None` where they don't exist.
### Input
```json
{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"title": "Force optional demo",
"properties": {
"foo": {
"const": "foo"
}
}
}
```
### Used command
```
datamodel-codegen --input force-optional-demo.json --output-model-type pydantic_v2.BaseModel --force-optional --use-one-literal-as-default
```
### Actual output
```python
# generated by datamodel-codegen:
# filename: force-optional-demo.json
# timestamp: 2023-12-05T15:06:01+00:00
from __future__ import annotations
from pydantic import BaseModel
from typing_extensions import Literal
class ForceOptionalDemo(BaseModel):
foo: Literal['foo'] = None
```
### Expected output
```python
# generated by datamodel-codegen:
# filename: force-optional-demo.json
# timestamp: 2023-12-05T15:06:01+00:00
from __future__ import annotations
from pydantic import BaseModel
from typing_extensions import Literal
class ForceOptionalDemo(BaseModel):
foo: Literal['foo'] = 'foo'
```
</issue>
<code>
[start of datamodel_code_generator/model/pydantic_v2/base_model.py]
1 from pathlib import Path
2 from typing import (
3 TYPE_CHECKING,
4 Any,
5 ClassVar,
6 DefaultDict,
7 Dict,
8 List,
9 NamedTuple,
10 Optional,
11 Set,
12 )
13
14 from pydantic import Field
15
16 from datamodel_code_generator.model.base import UNDEFINED, DataModelFieldBase
17 from datamodel_code_generator.model.pydantic.base_model import (
18 BaseModelBase,
19 )
20 from datamodel_code_generator.model.pydantic.base_model import (
21 Constraints as _Constraints,
22 )
23 from datamodel_code_generator.model.pydantic.base_model import (
24 DataModelField as DataModelFieldV1,
25 )
26 from datamodel_code_generator.model.pydantic_v2.imports import IMPORT_CONFIG_DICT
27 from datamodel_code_generator.reference import Reference
28 from datamodel_code_generator.util import field_validator, model_validator
29
30 if TYPE_CHECKING:
31 from typing_extensions import Literal
32 else:
33 try:
34 from typing import Literal
35 except ImportError:
36 from typing_extensions import Literal
37
38
39 class Constraints(_Constraints):
40 # To override existing pattern alias
41 regex: Optional[str] = Field(None, alias='regex')
42 pattern: Optional[str] = Field(None, alias='pattern')
43
44 @model_validator(mode='before')
45 def validate_min_max_items(cls, values: Any) -> Dict[str, Any]:
46 if not isinstance(values, dict): # pragma: no cover
47 return values
48 min_items = values.pop('minItems', None)
49 if min_items is not None:
50 values['minLength'] = min_items
51 max_items = values.pop('maxItems', None)
52 if max_items is not None:
53 values['maxLength'] = max_items
54 return values
55
56
57 class DataModelField(DataModelFieldV1):
58 _EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {
59 'alias',
60 'default',
61 'gt',
62 'ge',
63 'lt',
64 'le',
65 'multiple_of',
66 'min_length',
67 'max_length',
68 'pattern',
69 }
70 constraints: Optional[Constraints] = None
71 _PARSE_METHOD: ClassVar[str] = 'model_validate'
72
73 @field_validator('extras')
74 def validate_extras(cls, values: Any) -> Dict[str, Any]:
75 if not isinstance(values, dict):
76 return values
77 if 'examples' in values:
78 return values
79
80 if 'example' in values:
81 values['examples'] = [values.pop('example')]
82 return values
83
84 def process_const(self) -> None:
85 if 'const' not in self.extras:
86 return None
87 self.const = True
88 self.nullable = False
89 const = self.extras['const']
90 if self.data_type.type == 'str' and isinstance(
91 const, str
92 ): # pragma: no cover # Literal supports only str
93 self.data_type = self.data_type.__class__(literals=[const])
94
95 def _process_data_in_str(self, data: Dict[str, Any]) -> None:
96 if self.const:
97 # const is removed in pydantic 2.0
98 data.pop('const')
99
100 # unique_items is not supported in pydantic 2.0
101 data.pop('unique_items', None)
102
103 def _process_annotated_field_arguments(
104 self, field_arguments: List[str]
105 ) -> List[str]:
106 if not self.required:
107 if self.use_default_kwarg:
108 return [
109 f'default={repr(self.default)}',
110 *field_arguments,
111 ]
112 else:
113 # TODO: Allow '=' style default for v1?
114 return [f'{repr(self.default)}', *field_arguments]
115 return field_arguments
116
117
118 class ConfigAttribute(NamedTuple):
119 from_: str
120 to: str
121 invert: bool
122
123
124 class BaseModel(BaseModelBase):
125 TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic_v2/BaseModel.jinja2'
126 BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'
127 CONFIG_ATTRIBUTES: ClassVar[List[ConfigAttribute]] = [
128 ConfigAttribute('allow_population_by_field_name', 'populate_by_name', False),
129 ConfigAttribute('populate_by_name', 'populate_by_name', False),
130 ConfigAttribute('allow_mutation', 'frozen', True),
131 ConfigAttribute('frozen', 'frozen', False),
132 ]
133
134 def __init__(
135 self,
136 *,
137 reference: Reference,
138 fields: List[DataModelFieldBase],
139 decorators: Optional[List[str]] = None,
140 base_classes: Optional[List[Reference]] = None,
141 custom_base_class: Optional[str] = None,
142 custom_template_dir: Optional[Path] = None,
143 extra_template_data: Optional[DefaultDict[str, Any]] = None,
144 path: Optional[Path] = None,
145 description: Optional[str] = None,
146 default: Any = UNDEFINED,
147 nullable: bool = False,
148 ) -> None:
149 super().__init__(
150 reference=reference,
151 fields=fields,
152 decorators=decorators,
153 base_classes=base_classes,
154 custom_base_class=custom_base_class,
155 custom_template_dir=custom_template_dir,
156 extra_template_data=extra_template_data,
157 path=path,
158 description=description,
159 default=default,
160 nullable=nullable,
161 )
162 config_parameters: Dict[str, Any] = {}
163
164 extra = self._get_config_extra()
165 if extra:
166 config_parameters['extra'] = extra
167
168 for from_, to, invert in self.CONFIG_ATTRIBUTES:
169 if from_ in self.extra_template_data:
170 config_parameters[to] = (
171 not self.extra_template_data[from_]
172 if invert
173 else self.extra_template_data[from_]
174 )
175 for data_type in self.all_data_types:
176 if data_type.is_custom_type:
177 config_parameters['arbitrary_types_allowed'] = True
178 break
179
180 if isinstance(self.extra_template_data.get('config'), dict):
181 for key, value in self.extra_template_data['config'].items():
182 config_parameters[key] = value
183
184 if config_parameters:
185 from datamodel_code_generator.model.pydantic_v2 import ConfigDict
186
187 self.extra_template_data['config'] = ConfigDict.parse_obj(config_parameters)
188 self._additional_imports.append(IMPORT_CONFIG_DICT)
189
190 def _get_config_extra(self) -> Optional[Literal["'allow'", "'forbid'"]]:
191 additionalProperties = self.extra_template_data.get('additionalProperties')
192 allow_extra_fields = self.extra_template_data.get('allow_extra_fields')
193 if additionalProperties is not None or allow_extra_fields:
194 return (
195 "'allow'" if additionalProperties or allow_extra_fields else "'forbid'"
196 )
197 return None
198
[end of datamodel_code_generator/model/pydantic_v2/base_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datamodel_code_generator/model/pydantic_v2/base_model.py b/datamodel_code_generator/model/pydantic_v2/base_model.py
--- a/datamodel_code_generator/model/pydantic_v2/base_model.py
+++ b/datamodel_code_generator/model/pydantic_v2/base_model.py
@@ -87,10 +87,9 @@
self.const = True
self.nullable = False
const = self.extras['const']
- if self.data_type.type == 'str' and isinstance(
- const, str
- ): # pragma: no cover # Literal supports only str
- self.data_type = self.data_type.__class__(literals=[const])
+ self.data_type = self.data_type.__class__(literals=[const])
+ if not self.default:
+ self.default = const
def _process_data_in_str(self, data: Dict[str, Any]) -> None:
if self.const:
@@ -103,7 +102,7 @@
def _process_annotated_field_arguments(
self, field_arguments: List[str]
) -> List[str]:
- if not self.required:
+ if not self.required or self.const:
if self.use_default_kwarg:
return [
f'default={repr(self.default)}',
| {"golden_diff": "diff --git a/datamodel_code_generator/model/pydantic_v2/base_model.py b/datamodel_code_generator/model/pydantic_v2/base_model.py\n--- a/datamodel_code_generator/model/pydantic_v2/base_model.py\n+++ b/datamodel_code_generator/model/pydantic_v2/base_model.py\n@@ -87,10 +87,9 @@\n self.const = True\n self.nullable = False\n const = self.extras['const']\n- if self.data_type.type == 'str' and isinstance(\n- const, str\n- ): # pragma: no cover # Literal supports only str\n- self.data_type = self.data_type.__class__(literals=[const])\n+ self.data_type = self.data_type.__class__(literals=[const])\n+ if not self.default:\n+ self.default = const\n \n def _process_data_in_str(self, data: Dict[str, Any]) -> None:\n if self.const:\n@@ -103,7 +102,7 @@\n def _process_annotated_field_arguments(\n self, field_arguments: List[str]\n ) -> List[str]:\n- if not self.required:\n+ if not self.required or self.const:\n if self.use_default_kwarg:\n return [\n f'default={repr(self.default)}',\n", "issue": "JSON Schema `const` value and type ignored when setting defaults for Pydantic V2\n**Describe the bug**\r\nUse of `--force-optional` clobbers `--use-one-literal-as-default`. In my opinion `--force-optional` should use defaults where they exist and only fall back to assigning to `None` where they don't exist.\r\n\r\n### Input\r\n\r\n```json\r\n{\r\n \"$schema\": \"http://json-schema.org/draft-07/schema#\",\r\n \"type\": \"object\",\r\n \"title\": \"Force optional demo\",\r\n \"properties\": {\r\n \"foo\": {\r\n \"const\": \"foo\"\r\n }\r\n }\r\n}\r\n```\r\n\r\n### Used command\r\n```\r\ndatamodel-codegen --input force-optional-demo.json --output-model-type pydantic_v2.BaseModel --force-optional --use-one-literal-as-default\r\n```\r\n\r\n### Actual output\r\n\r\n```python\r\n# generated by datamodel-codegen:\r\n# filename: force-optional-demo.json\r\n# timestamp: 2023-12-05T15:06:01+00:00\r\n\r\nfrom __future__ import annotations\r\n\r\nfrom pydantic import BaseModel\r\nfrom typing_extensions import Literal\r\n\r\n\r\nclass ForceOptionalDemo(BaseModel):\r\n foo: Literal['foo'] = None\r\n```\r\n\r\n### Expected output\r\n\r\n```python\r\n# generated by datamodel-codegen:\r\n# filename: force-optional-demo.json\r\n# timestamp: 2023-12-05T15:06:01+00:00\r\n\r\nfrom __future__ import annotations\r\n\r\nfrom pydantic import BaseModel\r\nfrom typing_extensions import Literal\r\n\r\n\r\nclass ForceOptionalDemo(BaseModel):\r\n foo: Literal['foo'] = 'foo'\r\n```\n", "before_files": [{"content": "from pathlib import Path\nfrom typing import (\n TYPE_CHECKING,\n Any,\n ClassVar,\n DefaultDict,\n Dict,\n List,\n NamedTuple,\n Optional,\n Set,\n)\n\nfrom pydantic import Field\n\nfrom datamodel_code_generator.model.base import UNDEFINED, DataModelFieldBase\nfrom datamodel_code_generator.model.pydantic.base_model import (\n BaseModelBase,\n)\nfrom datamodel_code_generator.model.pydantic.base_model import (\n Constraints as _Constraints,\n)\nfrom datamodel_code_generator.model.pydantic.base_model import (\n DataModelField as DataModelFieldV1,\n)\nfrom datamodel_code_generator.model.pydantic_v2.imports import IMPORT_CONFIG_DICT\nfrom datamodel_code_generator.reference import Reference\nfrom datamodel_code_generator.util import field_validator, model_validator\n\nif TYPE_CHECKING:\n from typing_extensions import Literal\nelse:\n try:\n from typing import Literal\n except ImportError:\n from typing_extensions import Literal\n\n\nclass Constraints(_Constraints):\n # To override existing pattern alias\n regex: 
Optional[str] = Field(None, alias='regex')\n pattern: Optional[str] = Field(None, alias='pattern')\n\n @model_validator(mode='before')\n def validate_min_max_items(cls, values: Any) -> Dict[str, Any]:\n if not isinstance(values, dict): # pragma: no cover\n return values\n min_items = values.pop('minItems', None)\n if min_items is not None:\n values['minLength'] = min_items\n max_items = values.pop('maxItems', None)\n if max_items is not None:\n values['maxLength'] = max_items\n return values\n\n\nclass DataModelField(DataModelFieldV1):\n _EXCLUDE_FIELD_KEYS: ClassVar[Set[str]] = {\n 'alias',\n 'default',\n 'gt',\n 'ge',\n 'lt',\n 'le',\n 'multiple_of',\n 'min_length',\n 'max_length',\n 'pattern',\n }\n constraints: Optional[Constraints] = None\n _PARSE_METHOD: ClassVar[str] = 'model_validate'\n\n @field_validator('extras')\n def validate_extras(cls, values: Any) -> Dict[str, Any]:\n if not isinstance(values, dict):\n return values\n if 'examples' in values:\n return values\n\n if 'example' in values:\n values['examples'] = [values.pop('example')]\n return values\n\n def process_const(self) -> None:\n if 'const' not in self.extras:\n return None\n self.const = True\n self.nullable = False\n const = self.extras['const']\n if self.data_type.type == 'str' and isinstance(\n const, str\n ): # pragma: no cover # Literal supports only str\n self.data_type = self.data_type.__class__(literals=[const])\n\n def _process_data_in_str(self, data: Dict[str, Any]) -> None:\n if self.const:\n # const is removed in pydantic 2.0\n data.pop('const')\n\n # unique_items is not supported in pydantic 2.0\n data.pop('unique_items', None)\n\n def _process_annotated_field_arguments(\n self, field_arguments: List[str]\n ) -> List[str]:\n if not self.required:\n if self.use_default_kwarg:\n return [\n f'default={repr(self.default)}',\n *field_arguments,\n ]\n else:\n # TODO: Allow '=' style default for v1?\n return [f'{repr(self.default)}', *field_arguments]\n return field_arguments\n\n\nclass ConfigAttribute(NamedTuple):\n from_: str\n to: str\n invert: bool\n\n\nclass BaseModel(BaseModelBase):\n TEMPLATE_FILE_PATH: ClassVar[str] = 'pydantic_v2/BaseModel.jinja2'\n BASE_CLASS: ClassVar[str] = 'pydantic.BaseModel'\n CONFIG_ATTRIBUTES: ClassVar[List[ConfigAttribute]] = [\n ConfigAttribute('allow_population_by_field_name', 'populate_by_name', False),\n ConfigAttribute('populate_by_name', 'populate_by_name', False),\n ConfigAttribute('allow_mutation', 'frozen', True),\n ConfigAttribute('frozen', 'frozen', False),\n ]\n\n def __init__(\n self,\n *,\n reference: Reference,\n fields: List[DataModelFieldBase],\n decorators: Optional[List[str]] = None,\n base_classes: Optional[List[Reference]] = None,\n custom_base_class: Optional[str] = None,\n custom_template_dir: Optional[Path] = None,\n extra_template_data: Optional[DefaultDict[str, Any]] = None,\n path: Optional[Path] = None,\n description: Optional[str] = None,\n default: Any = UNDEFINED,\n nullable: bool = False,\n ) -> None:\n super().__init__(\n reference=reference,\n fields=fields,\n decorators=decorators,\n base_classes=base_classes,\n custom_base_class=custom_base_class,\n custom_template_dir=custom_template_dir,\n extra_template_data=extra_template_data,\n path=path,\n description=description,\n default=default,\n nullable=nullable,\n )\n config_parameters: Dict[str, Any] = {}\n\n extra = self._get_config_extra()\n if extra:\n config_parameters['extra'] = extra\n\n for from_, to, invert in self.CONFIG_ATTRIBUTES:\n if from_ in self.extra_template_data:\n 
config_parameters[to] = (\n not self.extra_template_data[from_]\n if invert\n else self.extra_template_data[from_]\n )\n for data_type in self.all_data_types:\n if data_type.is_custom_type:\n config_parameters['arbitrary_types_allowed'] = True\n break\n\n if isinstance(self.extra_template_data.get('config'), dict):\n for key, value in self.extra_template_data['config'].items():\n config_parameters[key] = value\n\n if config_parameters:\n from datamodel_code_generator.model.pydantic_v2 import ConfigDict\n\n self.extra_template_data['config'] = ConfigDict.parse_obj(config_parameters)\n self._additional_imports.append(IMPORT_CONFIG_DICT)\n\n def _get_config_extra(self) -> Optional[Literal[\"'allow'\", \"'forbid'\"]]:\n additionalProperties = self.extra_template_data.get('additionalProperties')\n allow_extra_fields = self.extra_template_data.get('allow_extra_fields')\n if additionalProperties is not None or allow_extra_fields:\n return (\n \"'allow'\" if additionalProperties or allow_extra_fields else \"'forbid'\"\n )\n return None\n", "path": "datamodel_code_generator/model/pydantic_v2/base_model.py"}]} | 2,840 | 283 |
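Editor's note: after the fix in this record, a JSON Schema `const` property is emitted as a `Literal` type whose const value is also reused as the field default. A minimal hand-written model showing the behaviour the issue expects (plain Pydantic, independent of datamodel-code-generator; `typing.Literal` is used here instead of `typing_extensions` for brevity):

```python
from typing import Literal

from pydantic import BaseModel


class ForceOptionalDemo(BaseModel):
    # the JSON Schema const "foo" becomes both the Literal type and the default
    foo: Literal["foo"] = "foo"


print(ForceOptionalDemo())            # foo='foo'  -- default applied
print(ForceOptionalDemo(foo="foo"))   # explicit value still validates
```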
gh_patches_debug_16863 | rasdani/github-patches | git_diff | TencentBlueKing__bk-user-1192 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feat: add stringify_pydantic_error util
</issue>
<code>
[start of src/bk-user/bkuser/common/passwd/__init__.py]
1 # -*- coding: utf-8 -*-
2 """
3 TencentBlueKing is pleased to support the open source community by making 蓝鲸智云-用户管理(Bk-User) available.
4 Copyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.
5 Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
6 You may obtain a copy of the License at http://opensource.org/licenses/MIT
7 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
8 an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
9 specific language governing permissions and limitations under the License.
10 """
11 from .exceptions import PasswordStrengthError
12 from .generator import PasswordGenerator
13 from .models import PasswordRule, ValidateResult
14 from .validator import PasswordValidator
15
16 __all__ = [
17 # 密码规则
18 "PasswordRule",
19 # 密码生成器
20 "PasswordGenerator",
21 # 密码强度校验器
22 "PasswordValidator",
23 # 密码校验结果
24 "ValidateResult",
25 # 密码强度过低异常
26 "PasswordStrengthError",
27 ]
28
[end of src/bk-user/bkuser/common/passwd/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/bk-user/bkuser/common/passwd/__init__.py b/src/bk-user/bkuser/common/passwd/__init__.py
--- a/src/bk-user/bkuser/common/passwd/__init__.py
+++ b/src/bk-user/bkuser/common/passwd/__init__.py
@@ -8,7 +8,7 @@
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
-from .exceptions import PasswordStrengthError
+from .exceptions import PasswordGenerateError, PasswordStrengthError
from .generator import PasswordGenerator
from .models import PasswordRule, ValidateResult
from .validator import PasswordValidator
@@ -24,4 +24,6 @@
"ValidateResult",
# 密码强度过低异常
"PasswordStrengthError",
+ # 不合理的规则导致生成密码失败
+ "PasswordGenerateError",
]
| {"golden_diff": "diff --git a/src/bk-user/bkuser/common/passwd/__init__.py b/src/bk-user/bkuser/common/passwd/__init__.py\n--- a/src/bk-user/bkuser/common/passwd/__init__.py\n+++ b/src/bk-user/bkuser/common/passwd/__init__.py\n@@ -8,7 +8,7 @@\n an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\n specific language governing permissions and limitations under the License.\n \"\"\"\n-from .exceptions import PasswordStrengthError\n+from .exceptions import PasswordGenerateError, PasswordStrengthError\n from .generator import PasswordGenerator\n from .models import PasswordRule, ValidateResult\n from .validator import PasswordValidator\n@@ -24,4 +24,6 @@\n \"ValidateResult\",\n # \u5bc6\u7801\u5f3a\u5ea6\u8fc7\u4f4e\u5f02\u5e38\n \"PasswordStrengthError\",\n+ # \u4e0d\u5408\u7406\u7684\u89c4\u5219\u5bfc\u81f4\u751f\u6210\u5bc6\u7801\u5931\u8d25\n+ \"PasswordGenerateError\",\n ]\n", "issue": "feat: add stringify_pydantic_error util\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nTencentBlueKing is pleased to support the open source community by making \u84dd\u9cb8\u667a\u4e91-\u7528\u6237\u7ba1\u7406(Bk-User) available.\nCopyright (C) 2017-2021 THL A29 Limited, a Tencent company. All rights reserved.\nLicensed under the MIT License (the \"License\"); you may not use this file except in compliance with the License.\nYou may obtain a copy of the License at http://opensource.org/licenses/MIT\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on\nan \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the\nspecific language governing permissions and limitations under the License.\n\"\"\"\nfrom .exceptions import PasswordStrengthError\nfrom .generator import PasswordGenerator\nfrom .models import PasswordRule, ValidateResult\nfrom .validator import PasswordValidator\n\n__all__ = [\n # \u5bc6\u7801\u89c4\u5219\n \"PasswordRule\",\n # \u5bc6\u7801\u751f\u6210\u5668\n \"PasswordGenerator\",\n # \u5bc6\u7801\u5f3a\u5ea6\u6821\u9a8c\u5668\n \"PasswordValidator\",\n # \u5bc6\u7801\u6821\u9a8c\u7ed3\u679c\n \"ValidateResult\",\n # \u5bc6\u7801\u5f3a\u5ea6\u8fc7\u4f4e\u5f02\u5e38\n \"PasswordStrengthError\",\n]\n", "path": "src/bk-user/bkuser/common/passwd/__init__.py"}]} | 882 | 212 |
gh_patches_debug_17775 | rasdani/github-patches | git_diff | ocf__ocfweb-131 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSRF token PR fails in prod due to referer checks
The cookies seem to be working properly, but referer checks are failing?
```
Jan 31 18:46:57 coma gunicorn[23653]: Forbidden (Referer checking failed - https://www.ocf.berkeley.edu/account/register/ does not match any trusted origins.): /account/register/
```
Explained in the docs:
https://docs.djangoproject.com/en/dev/ref/csrf/#how-it-works
Maybe the referer header is not what we think it is due to the proxying?
I reverted my change in 89a8931ff0fe9e511905780a42be24a63b1d5c9a
</issue>
<code>
[start of ocfweb/settings.py]
1 import configparser
2 import os
3 from getpass import getuser
4
5 from django.template.base import TemplateSyntaxError
6
7
8 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
9
10 SECRET_KEY = 'not_a_secret'
11 DEBUG = True
12
13 ALLOWED_HOSTS = [
14 'www.ocf.berkeley.edu',
15 'dev.ocf.berkeley.edu',
16 'dev-www.ocf.berkeley.edu',
17 'ocfweb.ocf.berkeley.edu',
18 ]
19
20 INSTALLED_APPS = (
21 'bootstrapform',
22 'django.contrib.humanize',
23 'django.contrib.messages',
24 'django.contrib.sessions',
25 'django.contrib.staticfiles',
26 'mathfilters',
27 'ocfweb',
28 'ocfweb.about',
29 'ocfweb.account',
30 'ocfweb.docs',
31 'ocfweb.login',
32 'ocfweb.main',
33 'ocfweb.middleware',
34 'ocfweb.stats',
35 'ocfweb.test',
36 )
37
38 MIDDLEWARE_CLASSES = (
39 'django.contrib.sessions.middleware.SessionMiddleware',
40 'django.middleware.common.CommonMiddleware',
41 'django.middleware.csrf.CsrfViewMiddleware',
42 'django.contrib.messages.middleware.MessageMiddleware',
43 'django.middleware.clickjacking.XFrameOptionsMiddleware',
44 'ocfweb.middleware.errors.OcflibErrorMiddleware',
45 )
46
47 SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
48 ROOT_URLCONF = 'ocfweb.urls'
49
50
51 class InvalidReferenceInTemplate(str):
52 """Raise exceptions on invalid references in templates.
53
54 By default Django just replaces references to undefined variables with
55 empty strings. This is a horrible idea, so we instead hack it to raise an
56 exception.
57 """
58
59 def __mod__(self, ref):
60 raise TemplateSyntaxError('Invalid reference in template: {}'.format(ref))
61
62
63 TEMPLATES = [{
64 'BACKEND': 'django.template.backends.django.DjangoTemplates',
65 'DIRS': [],
66 'APP_DIRS': True,
67 'OPTIONS': {
68 'context_processors': [
69 'django.template.context_processors.request',
70 'django.contrib.messages.context_processors.messages',
71 'ocfweb.context_processors.ocf_template_processor',
72 ],
73 'string_if_invalid': InvalidReferenceInTemplate('%s'),
74 },
75 }]
76
77 WSGI_APPLICATION = 'ocfweb.wsgi.application'
78
79 DATABASES = {}
80
81 # store sessions in the cache
82 SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
83
84 # XXX: DO NOT CHANGE
85 # Ensure cookies can't be read by JavaScript.
86 SESSION_COOKIE_HTTPONLY = True
87 SESSION_COOKIE_SECURE = False
88 SESSION_COOKIE_PATH = '/'
89 SESSION_COOKIE_NAME = 'OCFWEB_SESSIONID'
90
91 CACHES = { # sessions are stored here
92 'default': {
93 # on dev, we use a file-backed cache so that you don't get logged out
94 # every time you update code and the server restarts.
95 'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
96 'LOCATION': os.path.expanduser('~/.ocfweb-cache'),
97 },
98 'TIMEOUT': 60 * 60 * 12, # 12 hours
99 'OPTIONS': {
100 'MAX_ENTRIES': 1000,
101 },
102 }
103
104 LANGUAGE_CODE = 'en-us'
105 TIME_ZONE = 'America/Los_Angeles'
106 USE_I18N = False
107 USE_L10N = False
108 USE_TZ = True
109
110 STATIC_URL = '/static/'
111 os.environ.setdefault('OCFWEB_STATIC_ROOT', '')
112 STATIC_ROOT = os.environ['OCFWEB_STATIC_ROOT']
113
114 X_FRAME_OPTIONS = 'DENY'
115
116 # log exceptions to stderr
117 LOGGING = {
118 'version': 1,
119 'disable_existing_loggers': False,
120 'handlers': {
121 'console': {
122 'class': 'logging.StreamHandler',
123 },
124 },
125 'loggers': {
126 'django': {
127 'handlers': ['console'],
128 'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO'),
129 },
130 },
131 }
132
133 CELERY_BROKER = 'redis://create'
134 CELERY_BACKEND = 'redis://create'
135
136 if getuser() == 'ocfweb':
137 # not running in development, override options from config file
138 conf = configparser.ConfigParser()
139 conf.read('/etc/ocfweb/ocfweb.conf')
140
141 SECRET_KEY = conf.get('django', 'secret')
142 DEBUG = conf.getboolean('django', 'debug')
143
144 STATIC_URL = conf.get('django', 'static_url')
145 STATIC_ROOT = conf.get('django', 'static_root')
146
147 CELERY_BROKER = conf.get('celery', 'broker')
148 CELERY_BACKEND = conf.get('celery', 'backend')
149
150 # on prod, we use Redis as a cache
151 CACHES['default'] = {
152 'BACKEND': 'django_redis.cache.RedisCache',
153 'LOCATION': 'redis://localhost:6379/0',
154 'OPTIONS': {
155 'CLIENT_CLASS': 'django_redis.client.DefaultClient',
156 }
157 }
158
159 SESSION_COOKIE_SECURE = True
160 SESSION_COOKIE_DOMAIN = 'www.ocf.berkeley.edu'
161 else:
162 # running in development
163
164 # try to read celery values used by approve
165 # (only works on supernova by staff members)
166 try:
167 conf = configparser.ConfigParser()
168 conf.read('/etc/ocf-create/ocf-create.conf')
169 CELERY_BROKER = conf.get('celery', 'broker')
170 CELERY_BACKEND = conf.get('celery', 'backend')
171 except configparser.NoSectionError:
172 pass
173
[end of ocfweb/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ocfweb/settings.py b/ocfweb/settings.py
--- a/ocfweb/settings.py
+++ b/ocfweb/settings.py
@@ -82,7 +82,13 @@
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
# XXX: DO NOT CHANGE
-# Ensure cookies can't be read by JavaScript.
+# Ensure cookies can't be read by JavaScript or users.
+# Our proxy filters cookies starting with "OCFWEB_" when going to user sites,
+# so it's important our cookies match this pattern.
+CSRF_COOKIE_HTTPONLY = True
+CSRF_COOKIE_SECURE = False
+CSRF_COOKIE_PATH = '/'
+CSRF_COOKIE_NAME = 'OCFWEB_CSRF_TOKEN'
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SECURE = False
SESSION_COOKIE_PATH = '/'
@@ -156,6 +162,9 @@
}
}
+ CSRF_COOKIE_SECURE = True
+ CSRF_COOKIE_DOMAIN = 'www.ocf.berkeley.edu'
+ CSRF_TRUSTED_ORIGINS = ['www.ocf.berkeley.edu']
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_DOMAIN = 'www.ocf.berkeley.edu'
else:
| {"golden_diff": "diff --git a/ocfweb/settings.py b/ocfweb/settings.py\n--- a/ocfweb/settings.py\n+++ b/ocfweb/settings.py\n@@ -82,7 +82,13 @@\n SESSION_ENGINE = 'django.contrib.sessions.backends.cache'\n \n # XXX: DO NOT CHANGE\n-# Ensure cookies can't be read by JavaScript.\n+# Ensure cookies can't be read by JavaScript or users.\n+# Our proxy filters cookies starting with \"OCFWEB_\" when going to user sites,\n+# so it's important our cookies match this pattern.\n+CSRF_COOKIE_HTTPONLY = True\n+CSRF_COOKIE_SECURE = False\n+CSRF_COOKIE_PATH = '/'\n+CSRF_COOKIE_NAME = 'OCFWEB_CSRF_TOKEN'\n SESSION_COOKIE_HTTPONLY = True\n SESSION_COOKIE_SECURE = False\n SESSION_COOKIE_PATH = '/'\n@@ -156,6 +162,9 @@\n }\n }\n \n+ CSRF_COOKIE_SECURE = True\n+ CSRF_COOKIE_DOMAIN = 'www.ocf.berkeley.edu'\n+ CSRF_TRUSTED_ORIGINS = ['www.ocf.berkeley.edu']\n SESSION_COOKIE_SECURE = True\n SESSION_COOKIE_DOMAIN = 'www.ocf.berkeley.edu'\n else:\n", "issue": "CSRF token PR fails in prod due to referer checks\nThe cookies seem to be working properly, but referer checks are failing?\n\n```\nJan 31 18:46:57 coma gunicorn[23653]: Forbidden (Referer checking failed - https://www.ocf.berkeley.edu/account/register/ does not match any trusted origins.): /account/register/\n```\n\nExplained in the docs:\nhttps://docs.djangoproject.com/en/dev/ref/csrf/#how-it-works\n\nMaybe the referer header is not what we think it is due to the proxying?\n\nI reverted my change in 89a8931ff0fe9e511905780a42be24a63b1d5c9a\n\n", "before_files": [{"content": "import configparser\nimport os\nfrom getpass import getuser\n\nfrom django.template.base import TemplateSyntaxError\n\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nSECRET_KEY = 'not_a_secret'\nDEBUG = True\n\nALLOWED_HOSTS = [\n 'www.ocf.berkeley.edu',\n 'dev.ocf.berkeley.edu',\n 'dev-www.ocf.berkeley.edu',\n 'ocfweb.ocf.berkeley.edu',\n]\n\nINSTALLED_APPS = (\n 'bootstrapform',\n 'django.contrib.humanize',\n 'django.contrib.messages',\n 'django.contrib.sessions',\n 'django.contrib.staticfiles',\n 'mathfilters',\n 'ocfweb',\n 'ocfweb.about',\n 'ocfweb.account',\n 'ocfweb.docs',\n 'ocfweb.login',\n 'ocfweb.main',\n 'ocfweb.middleware',\n 'ocfweb.stats',\n 'ocfweb.test',\n)\n\nMIDDLEWARE_CLASSES = (\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n 'ocfweb.middleware.errors.OcflibErrorMiddleware',\n)\n\nSECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')\nROOT_URLCONF = 'ocfweb.urls'\n\n\nclass InvalidReferenceInTemplate(str):\n \"\"\"Raise exceptions on invalid references in templates.\n\n By default Django just replaces references to undefined variables with\n empty strings. 
This is a horrible idea, so we instead hack it to raise an\n exception.\n \"\"\"\n\n def __mod__(self, ref):\n raise TemplateSyntaxError('Invalid reference in template: {}'.format(ref))\n\n\nTEMPLATES = [{\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.request',\n 'django.contrib.messages.context_processors.messages',\n 'ocfweb.context_processors.ocf_template_processor',\n ],\n 'string_if_invalid': InvalidReferenceInTemplate('%s'),\n },\n}]\n\nWSGI_APPLICATION = 'ocfweb.wsgi.application'\n\nDATABASES = {}\n\n# store sessions in the cache\nSESSION_ENGINE = 'django.contrib.sessions.backends.cache'\n\n# XXX: DO NOT CHANGE\n# Ensure cookies can't be read by JavaScript.\nSESSION_COOKIE_HTTPONLY = True\nSESSION_COOKIE_SECURE = False\nSESSION_COOKIE_PATH = '/'\nSESSION_COOKIE_NAME = 'OCFWEB_SESSIONID'\n\nCACHES = { # sessions are stored here\n 'default': {\n # on dev, we use a file-backed cache so that you don't get logged out\n # every time you update code and the server restarts.\n 'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',\n 'LOCATION': os.path.expanduser('~/.ocfweb-cache'),\n },\n 'TIMEOUT': 60 * 60 * 12, # 12 hours\n 'OPTIONS': {\n 'MAX_ENTRIES': 1000,\n },\n}\n\nLANGUAGE_CODE = 'en-us'\nTIME_ZONE = 'America/Los_Angeles'\nUSE_I18N = False\nUSE_L10N = False\nUSE_TZ = True\n\nSTATIC_URL = '/static/'\nos.environ.setdefault('OCFWEB_STATIC_ROOT', '')\nSTATIC_ROOT = os.environ['OCFWEB_STATIC_ROOT']\n\nX_FRAME_OPTIONS = 'DENY'\n\n# log exceptions to stderr\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'handlers': {\n 'console': {\n 'class': 'logging.StreamHandler',\n },\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'level': os.getenv('DJANGO_LOG_LEVEL', 'INFO'),\n },\n },\n}\n\nCELERY_BROKER = 'redis://create'\nCELERY_BACKEND = 'redis://create'\n\nif getuser() == 'ocfweb':\n # not running in development, override options from config file\n conf = configparser.ConfigParser()\n conf.read('/etc/ocfweb/ocfweb.conf')\n\n SECRET_KEY = conf.get('django', 'secret')\n DEBUG = conf.getboolean('django', 'debug')\n\n STATIC_URL = conf.get('django', 'static_url')\n STATIC_ROOT = conf.get('django', 'static_root')\n\n CELERY_BROKER = conf.get('celery', 'broker')\n CELERY_BACKEND = conf.get('celery', 'backend')\n\n # on prod, we use Redis as a cache\n CACHES['default'] = {\n 'BACKEND': 'django_redis.cache.RedisCache',\n 'LOCATION': 'redis://localhost:6379/0',\n 'OPTIONS': {\n 'CLIENT_CLASS': 'django_redis.client.DefaultClient',\n }\n }\n\n SESSION_COOKIE_SECURE = True\n SESSION_COOKIE_DOMAIN = 'www.ocf.berkeley.edu'\nelse:\n # running in development\n\n # try to read celery values used by approve\n # (only works on supernova by staff members)\n try:\n conf = configparser.ConfigParser()\n conf.read('/etc/ocf-create/ocf-create.conf')\n CELERY_BROKER = conf.get('celery', 'broker')\n CELERY_BACKEND = conf.get('celery', 'backend')\n except configparser.NoSectionError:\n pass\n", "path": "ocfweb/settings.py"}]} | 2,318 | 259 |
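Editor's note: a condensed view of what the fix in this record adds to `ocfweb/settings.py` — the CSRF cookie is renamed so the proxy's `OCFWEB_` cookie filter covers it, and the referer failure is addressed with a trusted origin (values copied from the golden diff above; the scheme-less origin string matches the Django release in use there):

```python
# development defaults
CSRF_COOKIE_HTTPONLY = True
CSRF_COOKIE_SECURE = False
CSRF_COOKIE_PATH = '/'
CSRF_COOKIE_NAME = 'OCFWEB_CSRF_TOKEN'   # matches the OCFWEB_ prefix the proxy filters

# production-only overrides
CSRF_COOKIE_SECURE = True
CSRF_COOKIE_DOMAIN = 'www.ocf.berkeley.edu'
CSRF_TRUSTED_ORIGINS = ['www.ocf.berkeley.edu']   # silences the referer-check 403
```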
gh_patches_debug_23596 | rasdani/github-patches | git_diff | rasterio__rasterio-1851 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
WarpedVRT does not permit boundless reads (sample.py)
## Expected behavior and actual behavior.
```
def test_rasterio_vrt(self):
import rasterio
# tmp_file default crs is UTM: CRS({'init': 'epsg:32618'}
with create_tmp_geotiff() as (tmp_file, expected):
with rasterio.open(tmp_file) as src:
with rasterio.vrt.WarpedVRT(src, crs="epsg:4326") as vrt:
expected_shape = (vrt.width, vrt.height)
expected_crs = vrt.crs
expected_res = vrt.res
# Value of single pixel in center of image
lon, lat = vrt.xy(vrt.width // 2, vrt.height // 2)
> expected_val = next(vrt.sample([(lon, lat)]))
test/integration/test_integration__io.py:799:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../miniconda/envs/test/lib/python3.7/site-packages/rasterio/sample.py:43: in sample_gen
data = read(indexes, window=window, masked=masked, boundless=True)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E ValueError: WarpedVRT does not permit boundless reads
```
## Operating system
For example: Mac OS, Windows, Linux
## Rasterio version and provenance
1.1.1 from conda-forge
</issue>
<code>
[start of rasterio/sample.py]
1 # Workaround for issue #378. A pure Python generator.
2
3 import numpy
4
5 from rasterio.windows import Window
6
7
8 def sample_gen(dataset, xy, indexes=None, masked=False):
9 """Sample pixels from a dataset
10
11 Parameters
12 ----------
13 dataset : rasterio Dataset
14 Opened in "r" mode.
15 xy : iterable
16 Pairs of x, y coordinates in the dataset's reference system.
17 indexes : int or list of int
18 Indexes of dataset bands to sample.
19 masked : bool, default: False
20 Whether to mask samples that fall outside the extent of the
21 dataset.
22
23 Yields
24 ------
25 array
26 A array of length equal to the number of specified indexes
27 containing the dataset values for the bands corresponding to
28 those indexes.
29
30 """
31 index = dataset.index
32 read = dataset.read
33
34 if isinstance(indexes, int):
35 indexes = [indexes]
36
37 for x, y in xy:
38 row_off, col_off = index(x, y)
39 # if row_off < 0 or col_off < 0:
40 # yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata
41 # else:
42 window = Window(col_off, row_off, 1, 1)
43 data = read(indexes, window=window, masked=masked, boundless=True)
44 yield data[:, 0, 0]
45
[end of rasterio/sample.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/sample.py b/rasterio/sample.py
--- a/rasterio/sample.py
+++ b/rasterio/sample.py
@@ -2,6 +2,7 @@
import numpy
+from rasterio.enums import MaskFlags
from rasterio.windows import Window
@@ -31,14 +32,24 @@
index = dataset.index
read = dataset.read
- if isinstance(indexes, int):
+ if indexes is None:
+ indexes = dataset.indexes
+ elif isinstance(indexes, int):
indexes = [indexes]
for x, y in xy:
+
row_off, col_off = index(x, y)
-# if row_off < 0 or col_off < 0:
-# yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata
-# else:
- window = Window(col_off, row_off, 1, 1)
- data = read(indexes, window=window, masked=masked, boundless=True)
- yield data[:, 0, 0]
+
+ if row_off < 0 or col_off < 0 or row_off >= dataset.height or col_off >= dataset.width:
+ data = numpy.ones((len(indexes),), dtype=dataset.dtypes[0]) * (dataset.nodata or 0)
+ if masked:
+ mask = [False if MaskFlags.all_valid in dataset.mask_flag_enums[i - 1] else True for i in indexes]
+ yield numpy.ma.array(data, mask=mask)
+ else:
+ yield data
+
+ else:
+ window = Window(col_off, row_off, 1, 1)
+ data = read(indexes, window=window, masked=masked)
+ yield data[:, 0, 0]
| {"golden_diff": "diff --git a/rasterio/sample.py b/rasterio/sample.py\n--- a/rasterio/sample.py\n+++ b/rasterio/sample.py\n@@ -2,6 +2,7 @@\n \n import numpy\n \n+from rasterio.enums import MaskFlags\n from rasterio.windows import Window\n \n \n@@ -31,14 +32,24 @@\n index = dataset.index\n read = dataset.read\n \n- if isinstance(indexes, int):\n+ if indexes is None:\n+ indexes = dataset.indexes\n+ elif isinstance(indexes, int):\n indexes = [indexes]\n \n for x, y in xy:\n+\n row_off, col_off = index(x, y)\n-# if row_off < 0 or col_off < 0:\n-# yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata\n-# else:\n- window = Window(col_off, row_off, 1, 1)\n- data = read(indexes, window=window, masked=masked, boundless=True)\n- yield data[:, 0, 0]\n+\n+ if row_off < 0 or col_off < 0 or row_off >= dataset.height or col_off >= dataset.width:\n+ data = numpy.ones((len(indexes),), dtype=dataset.dtypes[0]) * (dataset.nodata or 0)\n+ if masked:\n+ mask = [False if MaskFlags.all_valid in dataset.mask_flag_enums[i - 1] else True for i in indexes]\n+ yield numpy.ma.array(data, mask=mask)\n+ else:\n+ yield data\n+\n+ else:\n+ window = Window(col_off, row_off, 1, 1)\n+ data = read(indexes, window=window, masked=masked)\n+ yield data[:, 0, 0]\n", "issue": "WarpedVRT does not permit boundless reads (sample.py)\n## Expected behavior and actual behavior.\r\n\r\n```\r\n def test_rasterio_vrt(self):\r\n\r\n import rasterio\r\n\r\n \r\n\r\n # tmp_file default crs is UTM: CRS({'init': 'epsg:32618'}\r\n\r\n with create_tmp_geotiff() as (tmp_file, expected):\r\n\r\n with rasterio.open(tmp_file) as src:\r\n\r\n with rasterio.vrt.WarpedVRT(src, crs=\"epsg:4326\") as vrt:\r\n\r\n expected_shape = (vrt.width, vrt.height)\r\n\r\n expected_crs = vrt.crs\r\n\r\n expected_res = vrt.res\r\n\r\n # Value of single pixel in center of image\r\n\r\n lon, lat = vrt.xy(vrt.width // 2, vrt.height // 2)\r\n\r\n> expected_val = next(vrt.sample([(lon, lat)]))\r\n\r\ntest/integration/test_integration__io.py:799: \r\n\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n../../../miniconda/envs/test/lib/python3.7/site-packages/rasterio/sample.py:43: in sample_gen\r\n\r\n data = read(indexes, window=window, masked=masked, boundless=True)\r\n\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\n> ???\r\n\r\nE ValueError: WarpedVRT does not permit boundless reads\r\n```\r\n\r\n## Operating system\r\n\r\nFor example: Mac OS, Windows, Linux\r\n\r\n## Rasterio version and provenance\r\n\r\n1.1.1 from conda-forge\n", "before_files": [{"content": "# Workaround for issue #378. 
A pure Python generator.\n\nimport numpy\n\nfrom rasterio.windows import Window\n\n\ndef sample_gen(dataset, xy, indexes=None, masked=False):\n \"\"\"Sample pixels from a dataset\n\n Parameters\n ----------\n dataset : rasterio Dataset\n Opened in \"r\" mode.\n xy : iterable\n Pairs of x, y coordinates in the dataset's reference system.\n indexes : int or list of int\n Indexes of dataset bands to sample.\n masked : bool, default: False\n Whether to mask samples that fall outside the extent of the\n dataset.\n\n Yields\n ------\n array\n A array of length equal to the number of specified indexes\n containing the dataset values for the bands corresponding to\n those indexes.\n\n \"\"\"\n index = dataset.index\n read = dataset.read\n\n if isinstance(indexes, int):\n indexes = [indexes]\n\n for x, y in xy:\n row_off, col_off = index(x, y)\n# if row_off < 0 or col_off < 0:\n# yield numpy.ones((dataset.count,), dtype=dataset.dtypes[0]) * dataset.nodata\n# else:\n window = Window(col_off, row_off, 1, 1)\n data = read(indexes, window=window, masked=masked, boundless=True)\n yield data[:, 0, 0]\n", "path": "rasterio/sample.py"}]} | 1,317 | 403 |
gh_patches_debug_38933 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2802 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider anthonys_restaurants is broken
During the global build at 2021-06-16-14-42-20, spider **anthonys_restaurants** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/anthonys_restaurants.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson))
</issue>
<code>
[start of locations/spiders/anthonys_restaurants.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 from locations.items import GeojsonPointItem
4
5
6 class AnthonysRestaurantsSpiders(scrapy.Spider):
7 name = "anthonys_restaurants"
8 item_attributes = { 'brand': "Anthony's" }
9 allowed_domains = ["www.anthonys.com"]
10 start_urls = (
11 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',
12 )
13
14 def parse(self, response):
15 for match in response.xpath("//markers/marker"):
16 fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')
17
18 # Accounts for cases with second address line
19 if(len(fullAddress.split(",")) == 4):
20 cityString = fullAddress.split(",")[2].strip()
21 stateString = fullAddress.split(",")[3].strip().split(" ")[0].strip()
22 postString = fullAddress.split(",")[3].strip().split(" ")[1].strip()
23 addrLineOne = fullAddress.split(",")[0].strip()
24 addrLineTwo = fullAddress.split(",")[1].strip()
25 addrString = addrLineOne + ", " + addrLineTwo
26 else:
27 cityString = fullAddress.split(",")[1].strip()
28 stateString = fullAddress.split(",")[2].strip().split(" ")[0].strip()
29 postString = fullAddress.split(",")[2].strip().split(" ")[1].strip()
30 addrString = fullAddress.split(",")[0]
31
32 yield GeojsonPointItem(
33 ref=match.xpath('.//@title').extract_first().strip(),
34 lat=match.xpath('.//@lat').extract_first().strip(),
35 lon=match.xpath('.//@lng').extract_first().strip(),
36 addr_full=addrString,
37 city=cityString,
38 state=stateString,
39 postcode=postString,
40 phone=match.xpath('.//@phone').extract_first().replace(" ", ""),
41 )
42
[end of locations/spiders/anthonys_restaurants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/anthonys_restaurants.py b/locations/spiders/anthonys_restaurants.py
--- a/locations/spiders/anthonys_restaurants.py
+++ b/locations/spiders/anthonys_restaurants.py
@@ -1,41 +1,49 @@
# -*- coding: utf-8 -*-
+import json
+import re
+
import scrapy
+
from locations.items import GeojsonPointItem
class AnthonysRestaurantsSpiders(scrapy.Spider):
name = "anthonys_restaurants"
- item_attributes = { 'brand': "Anthony's" }
+ item_attributes = {"brand": "Anthony's"}
allowed_domains = ["www.anthonys.com"]
- start_urls = (
- 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',
- )
+ start_urls = ("https://www.anthonys.com/restaurants/",)
def parse(self, response):
- for match in response.xpath("//markers/marker"):
- fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')
-
- # Accounts for cases with second address line
- if(len(fullAddress.split(",")) == 4):
- cityString = fullAddress.split(",")[2].strip()
- stateString = fullAddress.split(",")[3].strip().split(" ")[0].strip()
- postString = fullAddress.split(",")[3].strip().split(" ")[1].strip()
- addrLineOne = fullAddress.split(",")[0].strip()
- addrLineTwo = fullAddress.split(",")[1].strip()
- addrString = addrLineOne + ", " + addrLineTwo
- else:
- cityString = fullAddress.split(",")[1].strip()
- stateString = fullAddress.split(",")[2].strip().split(" ")[0].strip()
- postString = fullAddress.split(",")[2].strip().split(" ")[1].strip()
- addrString = fullAddress.split(",")[0]
-
- yield GeojsonPointItem(
- ref=match.xpath('.//@title').extract_first().strip(),
- lat=match.xpath('.//@lat').extract_first().strip(),
- lon=match.xpath('.//@lng').extract_first().strip(),
- addr_full=addrString,
- city=cityString,
- state=stateString,
- postcode=postString,
- phone=match.xpath('.//@phone').extract_first().replace(" ", ""),
- )
+ script = response.css("#acf-block-locations-map-script-js-extra::text").get()
+ j = json.loads(script[script.find("{") : 1 + script.rfind("}")])
+ for row in j["restaurants"]:
+ meta = {"json": row}
+ yield scrapy.Request(row["link"], meta=meta, callback=self.parse_location)
+
+ def parse_location(self, response):
+ json_data = response.meta["json"]
+ address = json_data["address"]
+ # decode entities
+ name = scrapy.Selector(text=json_data["name"]).xpath("//text()").get()
+
+ # These are weird enough that there's no hope of parsing them, but
+ # clean the text up
+ hours = response.xpath('//strong[text()="Hours:"]/../text()').extract()
+ hours = ';'.join(s.strip().replace('\xa0', ' ') for s in hours)
+
+ properties = {
+ "ref": re.search(r"postid-(\d+)", response.css("body").attrib["class"])[1],
+ "lat": address["latitude"],
+ "lon": address["longitude"],
+ "addr_full": address["address"],
+ "city": address["city"],
+ "state": address["state"],
+ "postcode": address["zip_code"],
+ "name": name,
+ "website": response.url,
+ "phone": (
+ response.xpath("//*[starts-with(@href, 'tel:')]/@href").get() or ""
+ )[4:],
+ "opening_hours": hours,
+ }
+ return GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/anthonys_restaurants.py b/locations/spiders/anthonys_restaurants.py\n--- a/locations/spiders/anthonys_restaurants.py\n+++ b/locations/spiders/anthonys_restaurants.py\n@@ -1,41 +1,49 @@\n # -*- coding: utf-8 -*-\n+import json\n+import re\n+\n import scrapy\n+\n from locations.items import GeojsonPointItem\n \n \n class AnthonysRestaurantsSpiders(scrapy.Spider):\n name = \"anthonys_restaurants\"\n- item_attributes = { 'brand': \"Anthony's\" }\n+ item_attributes = {\"brand\": \"Anthony's\"}\n allowed_domains = [\"www.anthonys.com\"]\n- start_urls = (\n- 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',\n- )\n+ start_urls = (\"https://www.anthonys.com/restaurants/\",)\n \n def parse(self, response):\n- for match in response.xpath(\"//markers/marker\"):\n- fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')\n-\n- # Accounts for cases with second address line\n- if(len(fullAddress.split(\",\")) == 4):\n- cityString = fullAddress.split(\",\")[2].strip()\n- stateString = fullAddress.split(\",\")[3].strip().split(\" \")[0].strip()\n- postString = fullAddress.split(\",\")[3].strip().split(\" \")[1].strip()\n- addrLineOne = fullAddress.split(\",\")[0].strip()\n- addrLineTwo = fullAddress.split(\",\")[1].strip()\n- addrString = addrLineOne + \", \" + addrLineTwo\n- else:\n- cityString = fullAddress.split(\",\")[1].strip()\n- stateString = fullAddress.split(\",\")[2].strip().split(\" \")[0].strip()\n- postString = fullAddress.split(\",\")[2].strip().split(\" \")[1].strip()\n- addrString = fullAddress.split(\",\")[0]\n-\n- yield GeojsonPointItem(\n- ref=match.xpath('.//@title').extract_first().strip(),\n- lat=match.xpath('.//@lat').extract_first().strip(),\n- lon=match.xpath('.//@lng').extract_first().strip(),\n- addr_full=addrString,\n- city=cityString,\n- state=stateString,\n- postcode=postString,\n- phone=match.xpath('.//@phone').extract_first().replace(\" \", \"\"),\n- )\n+ script = response.css(\"#acf-block-locations-map-script-js-extra::text\").get()\n+ j = json.loads(script[script.find(\"{\") : 1 + script.rfind(\"}\")])\n+ for row in j[\"restaurants\"]:\n+ meta = {\"json\": row}\n+ yield scrapy.Request(row[\"link\"], meta=meta, callback=self.parse_location)\n+\n+ def parse_location(self, response):\n+ json_data = response.meta[\"json\"]\n+ address = json_data[\"address\"]\n+ # decode entities\n+ name = scrapy.Selector(text=json_data[\"name\"]).xpath(\"//text()\").get()\n+\n+ # These are weird enough that there's no hope of parsing them, but\n+ # clean the text up\n+ hours = response.xpath('//strong[text()=\"Hours:\"]/../text()').extract()\n+ hours = ';'.join(s.strip().replace('\\xa0', ' ') for s in hours)\n+\n+ properties = {\n+ \"ref\": re.search(r\"postid-(\\d+)\", response.css(\"body\").attrib[\"class\"])[1],\n+ \"lat\": address[\"latitude\"],\n+ \"lon\": address[\"longitude\"],\n+ \"addr_full\": address[\"address\"],\n+ \"city\": address[\"city\"],\n+ \"state\": address[\"state\"],\n+ \"postcode\": address[\"zip_code\"],\n+ \"name\": name,\n+ \"website\": response.url,\n+ \"phone\": (\n+ response.xpath(\"//*[starts-with(@href, 'tel:')]/@href\").get() or \"\"\n+ )[4:],\n+ \"opening_hours\": hours,\n+ }\n+ return GeojsonPointItem(**properties)\n", "issue": "Spider anthonys_restaurants is broken\nDuring the global build at 2021-06-16-14-42-20, spider **anthonys_restaurants** failed with **0 features** and **0 errors**.\n\nHere's [the 
log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/anthonys_restaurants.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/anthonys_restaurants.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nfrom locations.items import GeojsonPointItem\n\n\nclass AnthonysRestaurantsSpiders(scrapy.Spider):\n name = \"anthonys_restaurants\"\n item_attributes = { 'brand': \"Anthony's\" }\n allowed_domains = [\"www.anthonys.com\"]\n start_urls = (\n 'https://www.anthonys.com/restaurants/search/47.6062095/-122.3320708/2000',\n )\n\n def parse(self, response):\n for match in response.xpath(\"//markers/marker\"):\n fullAddress=match.xpath('.//@address').extract_first().replace('<br />', ',')\n\n # Accounts for cases with second address line\n if(len(fullAddress.split(\",\")) == 4):\n cityString = fullAddress.split(\",\")[2].strip()\n stateString = fullAddress.split(\",\")[3].strip().split(\" \")[0].strip()\n postString = fullAddress.split(\",\")[3].strip().split(\" \")[1].strip()\n addrLineOne = fullAddress.split(\",\")[0].strip()\n addrLineTwo = fullAddress.split(\",\")[1].strip()\n addrString = addrLineOne + \", \" + addrLineTwo\n else:\n cityString = fullAddress.split(\",\")[1].strip()\n stateString = fullAddress.split(\",\")[2].strip().split(\" \")[0].strip()\n postString = fullAddress.split(\",\")[2].strip().split(\" \")[1].strip()\n addrString = fullAddress.split(\",\")[0]\n\n yield GeojsonPointItem(\n ref=match.xpath('.//@title').extract_first().strip(),\n lat=match.xpath('.//@lat').extract_first().strip(),\n lon=match.xpath('.//@lng').extract_first().strip(),\n addr_full=addrString,\n city=cityString,\n state=stateString,\n postcode=postString,\n phone=match.xpath('.//@phone').extract_first().replace(\" \", \"\"),\n )\n", "path": "locations/spiders/anthonys_restaurants.py"}]} | 1,247 | 927 |
gh_patches_debug_11635 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-4320 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG]: Multi-rank on same device
### 🐛 Describe the bug
When I use colossalai CLI with 2 node, I got an error "rank 8 and rank 0 both on CUDA device d000"
I have examined my scripts and command. And torchrun works well.
The error msg is:
```
Error: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.64, is localhost: True, exception: I/O operation on closed file
Error: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.189, is localhost: True, exception: I/O operation on closed file
```
### Environment
_No response_
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/cli/launcher/hostinfo.py]
1 import socket
2 from typing import List
3
4
5 class HostInfo:
6 """
7 A data class to store host connection-related data.
8
9 Args:
10 hostname (str): name or IP address of the host
11 port (str): the port for ssh connection
12 """
13
14 def __init__(
15 self,
16 hostname: str,
17 port: str = None,
18 ):
19 self.hostname = hostname
20 self.port = port
21 self.is_local_host = HostInfo.is_host_localhost(hostname, port)
22
23 @staticmethod
24 def is_host_localhost(hostname: str, port: str = None) -> None:
25 """
26 Check if the host refers to the local machine.
27
28 Args:
29 hostname (str): name or IP address of the host
30 port (str): the port for ssh connection
31
32 Returns:
33 bool: True if it is local, False otherwise
34 """
35
36 if port is None:
37 port = 22 # no port specified, lets just use the ssh port
38
39 # socket.getfqdn("127.0.0.1") does not return localhost
40 # on some users' machines
41 # thus, we directly return True if hostname is localhost, 127.0.0.1 or 0.0.0.0
42 if hostname in ("localhost", "127.0.0.1", "0.0.0.0"):
43 return True
44
45 hostname = socket.getfqdn(hostname)
46 localhost = socket.gethostname()
47 localaddrs = socket.getaddrinfo(localhost, port)
48 targetaddrs = socket.getaddrinfo(hostname, port)
49 for (family, socktype, proto, canonname, sockaddr) in localaddrs:
50 for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:
51 if rsockaddr[0] == sockaddr[0]:
52 return True
53 return False
54
55 def __str__(self):
56 return f'hostname: {self.hostname}, port: {self.port}'
57
58 def __repr__(self):
59 return self.__str__()
60
61
62 class HostInfoList:
63 """
64 A data class to store a list of HostInfo objects.
65 """
66
67 def __init__(self):
68 self.hostinfo_list = []
69
70 def append(self, hostinfo: HostInfo) -> None:
71 """
72 Add an HostInfo object to the list.
73
74 Args:
75 hostinfo (HostInfo): host information
76 """
77
78 self.hostinfo_list.append(hostinfo)
79
80 def remove(self, hostname: str) -> None:
81 """
82 Add an HostInfo object to the list.
83
84 Args:
85 hostname (str): the name of the host
86 """
87
88 hostinfo = self.get_hostinfo(hostname)
89 self.hostinfo_list.remove(hostinfo)
90
91 def get_hostinfo(self, hostname: str) -> HostInfo:
92 """
93 Return the HostInfo object which matches with the hostname.
94
95 Args:
96 hostname (str): the name of the host
97
98 Returns:
99 hostinfo (HostInfo): the HostInfo object which matches with the hostname
100 """
101
102 for hostinfo in self.hostinfo_list:
103 if hostinfo.hostname == hostname:
104 return hostinfo
105
106 raise Exception(f"Hostname {hostname} is not found")
107
108 def has(self, hostname: str) -> bool:
109 """
110 Check if the hostname has been added.
111
112 Args:
113 hostname (str): the name of the host
114
115 Returns:
116 bool: True if added, False otherwise
117 """
118 for hostinfo in self.hostinfo_list:
119 if hostinfo.hostname == hostname:
120 return True
121 return False
122
123 def __iter__(self):
124 return iter(self.hostinfo_list)
125
126 def __len__(self):
127 return len(self.hostinfo_list)
128
[end of colossalai/cli/launcher/hostinfo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/cli/launcher/hostinfo.py b/colossalai/cli/launcher/hostinfo.py
--- a/colossalai/cli/launcher/hostinfo.py
+++ b/colossalai/cli/launcher/hostinfo.py
@@ -46,11 +46,8 @@
localhost = socket.gethostname()
localaddrs = socket.getaddrinfo(localhost, port)
targetaddrs = socket.getaddrinfo(hostname, port)
- for (family, socktype, proto, canonname, sockaddr) in localaddrs:
- for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:
- if rsockaddr[0] == sockaddr[0]:
- return True
- return False
+
+ return localaddrs == targetaddrs
def __str__(self):
return f'hostname: {self.hostname}, port: {self.port}'
| {"golden_diff": "diff --git a/colossalai/cli/launcher/hostinfo.py b/colossalai/cli/launcher/hostinfo.py\n--- a/colossalai/cli/launcher/hostinfo.py\n+++ b/colossalai/cli/launcher/hostinfo.py\n@@ -46,11 +46,8 @@\n localhost = socket.gethostname()\n localaddrs = socket.getaddrinfo(localhost, port)\n targetaddrs = socket.getaddrinfo(hostname, port)\n- for (family, socktype, proto, canonname, sockaddr) in localaddrs:\n- for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:\n- if rsockaddr[0] == sockaddr[0]:\n- return True\n- return False\n+\n+ return localaddrs == targetaddrs\n \n def __str__(self):\n return f'hostname: {self.hostname}, port: {self.port}'\n", "issue": "[BUG]: Multi-rank on same device\n### \ud83d\udc1b Describe the bug\n\nWhen I use colossalai CLI with 2 node, I got an error \"rank 8 and rank 0 both on CUDA device d000\"\r\nI have examined my scripts and command. And torchrun works well.\r\n\r\nThe error msg is:\r\n```\r\nError: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.64, is localhost: True, exception: I/O operation on closed file\r\nError: failed to run torchrun --nproc_per_node=8 --nnodes=2 --node_rank=1 --master_addr=192.168.0.64 --master_port=29500 benchmark.py -c 7b --plugin zero --zero 1 -l 2048 -g -b 10 on 192.168.0.189, is localhost: True, exception: I/O operation on closed file\r\n```\r\n\n\n### Environment\n\n_No response_\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import socket\nfrom typing import List\n\n\nclass HostInfo:\n \"\"\"\n A data class to store host connection-related data.\n\n Args:\n hostname (str): name or IP address of the host\n port (str): the port for ssh connection\n \"\"\"\n\n def __init__(\n self,\n hostname: str,\n port: str = None,\n ):\n self.hostname = hostname\n self.port = port\n self.is_local_host = HostInfo.is_host_localhost(hostname, port)\n\n @staticmethod\n def is_host_localhost(hostname: str, port: str = None) -> None:\n \"\"\"\n Check if the host refers to the local machine.\n\n Args:\n hostname (str): name or IP address of the host\n port (str): the port for ssh connection\n\n Returns:\n bool: True if it is local, False otherwise\n \"\"\"\n\n if port is None:\n port = 22 # no port specified, lets just use the ssh port\n\n # socket.getfqdn(\"127.0.0.1\") does not return localhost\n # on some users' machines\n # thus, we directly return True if hostname is localhost, 127.0.0.1 or 0.0.0.0\n if hostname in (\"localhost\", \"127.0.0.1\", \"0.0.0.0\"):\n return True\n\n hostname = socket.getfqdn(hostname)\n localhost = socket.gethostname()\n localaddrs = socket.getaddrinfo(localhost, port)\n targetaddrs = socket.getaddrinfo(hostname, port)\n for (family, socktype, proto, canonname, sockaddr) in localaddrs:\n for (rfamily, rsocktype, rproto, rcanonname, rsockaddr) in targetaddrs:\n if rsockaddr[0] == sockaddr[0]:\n return True\n return False\n\n def __str__(self):\n return f'hostname: {self.hostname}, port: {self.port}'\n\n def __repr__(self):\n return self.__str__()\n\n\nclass HostInfoList:\n \"\"\"\n A data class to store a list of HostInfo objects.\n \"\"\"\n\n def __init__(self):\n self.hostinfo_list = []\n\n def append(self, hostinfo: HostInfo) -> None:\n \"\"\"\n Add an HostInfo object to the list.\n\n Args:\n hostinfo (HostInfo): host information\n \"\"\"\n\n self.hostinfo_list.append(hostinfo)\n\n def remove(self, 
hostname: str) -> None:\n \"\"\"\n Add an HostInfo object to the list.\n\n Args:\n hostname (str): the name of the host\n \"\"\"\n\n hostinfo = self.get_hostinfo(hostname)\n self.hostinfo_list.remove(hostinfo)\n\n def get_hostinfo(self, hostname: str) -> HostInfo:\n \"\"\"\n Return the HostInfo object which matches with the hostname.\n\n Args:\n hostname (str): the name of the host\n\n Returns:\n hostinfo (HostInfo): the HostInfo object which matches with the hostname\n \"\"\"\n\n for hostinfo in self.hostinfo_list:\n if hostinfo.hostname == hostname:\n return hostinfo\n\n raise Exception(f\"Hostname {hostname} is not found\")\n\n def has(self, hostname: str) -> bool:\n \"\"\"\n Check if the hostname has been added.\n\n Args:\n hostname (str): the name of the host\n\n Returns:\n bool: True if added, False otherwise\n \"\"\"\n for hostinfo in self.hostinfo_list:\n if hostinfo.hostname == hostname:\n return True\n return False\n\n def __iter__(self):\n return iter(self.hostinfo_list)\n\n def __len__(self):\n return len(self.hostinfo_list)\n", "path": "colossalai/cli/launcher/hostinfo.py"}]} | 1,961 | 207 |
gh_patches_debug_891 | rasdani/github-patches | git_diff | openvinotoolkit__datumaro-743 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong annotated return type in Registry class
https://github.com/openvinotoolkit/datumaro/blob/0d4a73d3bbe3a93585af7a0148a0e344fd1106b3/datumaro/components/environment.py#L41-L42
In the referenced code the return type of the method appears to be wrong.
Either it should be `Iterator[str]` since iteration over a dict returns its keys which are of type `str` or the return statement should be `return iter(self.items.values())`.
When using the library with static type checkers this annotation causes type check errors. When removing the annotation, type checkers correctly infer the type `Iterator[str]`.
Wrong annotated return type in Registry class
https://github.com/openvinotoolkit/datumaro/blob/0d4a73d3bbe3a93585af7a0148a0e344fd1106b3/datumaro/components/environment.py#L41-L42
In the referenced code the return type of the method appears to be wrong.
Either it should be `Iterator[str]` since iteration over a dict returns its keys which are of type `str` or the return statement should be `return iter(self.items.values())`.
When using the library with static type checkers this annotation causes type check errors. When removing the annotation, type checkers correctly infer the type `Iterator[str]`.
</issue>
<code>
[start of datumaro/components/environment.py]
1 # Copyright (C) 2020-2022 Intel Corporation
2 #
3 # SPDX-License-Identifier: MIT
4
5 import glob
6 import importlib
7 import logging as log
8 import os.path as osp
9 from functools import partial
10 from inspect import isclass
11 from typing import Callable, Dict, Generic, Iterable, Iterator, List, Optional, Type, TypeVar
12
13 from datumaro.components.cli_plugin import CliPlugin, plugin_types
14 from datumaro.components.format_detection import RejectionReason, detect_dataset_format
15 from datumaro.util.os_util import import_foreign_module, split_path
16
17 T = TypeVar("T")
18
19
20 class Registry(Generic[T]):
21 def __init__(self):
22 self.items: Dict[str, T] = {}
23
24 def register(self, name: str, value: T) -> T:
25 self.items[name] = value
26 return value
27
28 def unregister(self, name: str) -> Optional[T]:
29 return self.items.pop(name, None)
30
31 def get(self, key: str):
32 """Returns a class or a factory function"""
33 return self.items[key]
34
35 def __getitem__(self, key: str) -> T:
36 return self.get(key)
37
38 def __contains__(self, key) -> bool:
39 return key in self.items
40
41 def __iter__(self) -> Iterator[T]:
42 return iter(self.items)
43
44
45 class PluginRegistry(Registry[Type[CliPlugin]]):
46 def __init__(
47 self, filter: Callable[[Type[CliPlugin]], bool] = None
48 ): # pylint: disable=redefined-builtin
49 super().__init__()
50 self._filter = filter
51
52 def batch_register(self, values: Iterable[CliPlugin]):
53 for v in values:
54 if self._filter and not self._filter(v):
55 continue
56
57 self.register(v.NAME, v)
58
59
60 class Environment:
61 _builtin_plugins = None
62
63 @classmethod
64 def _make_filter(cls, accept, skip=None):
65 accept = (accept,) if isclass(accept) else tuple(accept)
66 skip = {skip} if isclass(skip) else set(skip or [])
67 skip = tuple(skip | set(accept))
68 return partial(cls._check_type, accept=accept, skip=skip)
69
70 @staticmethod
71 def _check_type(t, *, accept, skip):
72 return issubclass(t, accept) and t not in skip
73
74 def __init__(self):
75 from datumaro.components.converter import Converter
76 from datumaro.components.dataset_generator import DatasetGenerator
77 from datumaro.components.extractor import (
78 Extractor,
79 Importer,
80 ItemTransform,
81 SourceExtractor,
82 Transform,
83 )
84 from datumaro.components.launcher import Launcher
85 from datumaro.components.validator import Validator
86
87 _filter = self._make_filter
88 self._extractors = PluginRegistry(_filter(Extractor, skip=SourceExtractor))
89 self._importers = PluginRegistry(_filter(Importer))
90 self._launchers = PluginRegistry(_filter(Launcher))
91 self._converters = PluginRegistry(_filter(Converter))
92 self._generators = PluginRegistry(_filter(DatasetGenerator))
93 self._transforms = PluginRegistry(_filter(Transform, skip=ItemTransform))
94 self._validators = PluginRegistry(_filter(Validator))
95 self._builtins_initialized = False
96
97 def _get_plugin_registry(self, name):
98 if not self._builtins_initialized:
99 self._builtins_initialized = True
100 self._register_builtin_plugins()
101 return getattr(self, name)
102
103 @property
104 def extractors(self) -> PluginRegistry:
105 return self._get_plugin_registry("_extractors")
106
107 @property
108 def importers(self) -> PluginRegistry:
109 return self._get_plugin_registry("_importers")
110
111 @property
112 def launchers(self) -> PluginRegistry:
113 return self._get_plugin_registry("_launchers")
114
115 @property
116 def converters(self) -> PluginRegistry:
117 return self._get_plugin_registry("_converters")
118
119 @property
120 def generators(self) -> PluginRegistry:
121 return self._get_plugin_registry("_generators")
122
123 @property
124 def transforms(self) -> PluginRegistry:
125 return self._get_plugin_registry("_transforms")
126
127 @property
128 def validators(self) -> PluginRegistry:
129 return self._get_plugin_registry("_validators")
130
131 @staticmethod
132 def _find_plugins(plugins_dir):
133 plugins = []
134
135 for pattern in ("*.py", "*/*.py"):
136 for path in glob.glob(osp.join(glob.escape(plugins_dir), pattern)):
137 if not osp.isfile(path):
138 continue
139
140 path_rel = osp.relpath(path, plugins_dir)
141 name_parts = split_path(osp.splitext(path_rel)[0])
142
143 # a module with a dot in the name won't load correctly
144 if any("." in part for part in name_parts):
145 log.warning(
146 "Python file '%s' in directory '%s' can't be imported "
147 "due to a dot in the name; skipping.",
148 path_rel,
149 plugins_dir,
150 )
151 continue
152 plugins.append(".".join(name_parts))
153
154 return plugins
155
156 @classmethod
157 def _get_plugin_exports(cls, module, types):
158 exports = []
159 if hasattr(module, "exports"):
160 exports = module.exports
161 else:
162 for symbol in dir(module):
163 if symbol.startswith("_"):
164 continue
165 exports.append(getattr(module, symbol))
166
167 exports = [s for s in exports if isclass(s) and issubclass(s, types) and not s in types]
168
169 return exports
170
171 @classmethod
172 def _load_plugins(cls, module_names, *, importer, types=None):
173 types = tuple(types or plugin_types())
174
175 all_exports = []
176 for module_name in module_names:
177 try:
178 module = importer(module_name)
179 exports = cls._get_plugin_exports(module, types)
180 except Exception as e:
181 module_search_error = ModuleNotFoundError
182
183 message = ["Failed to import module '%s': %s", module_name, e]
184 if isinstance(e, module_search_error):
185 log.debug(*message)
186 else:
187 log.warning(*message)
188 continue
189
190 log.debug(
191 "Imported the following symbols from %s: %s"
192 % (module_name, ", ".join(s.__name__ for s in exports))
193 )
194 all_exports.extend(exports)
195
196 return all_exports
197
198 @classmethod
199 def _load_builtin_plugins(cls):
200 if cls._builtin_plugins is None:
201 import datumaro.plugins
202
203 plugins_dir = osp.dirname(datumaro.plugins.__file__)
204 module_names = [
205 datumaro.plugins.__name__ + "." + name for name in cls._find_plugins(plugins_dir)
206 ]
207 cls._builtin_plugins = cls._load_plugins(module_names, importer=importlib.import_module)
208 return cls._builtin_plugins
209
210 def load_plugins(self, plugins_dir):
211 module_names = self._find_plugins(plugins_dir)
212 plugins = self._load_plugins(
213 module_names, importer=partial(import_foreign_module, path=plugins_dir)
214 )
215 self._register_plugins(plugins)
216
217 def _register_builtin_plugins(self):
218 self._register_plugins(self._load_builtin_plugins())
219
220 def _register_plugins(self, plugins):
221 self.extractors.batch_register(plugins)
222 self.importers.batch_register(plugins)
223 self.launchers.batch_register(plugins)
224 self.converters.batch_register(plugins)
225 self.generators.batch_register(plugins)
226 self.transforms.batch_register(plugins)
227 self.validators.batch_register(plugins)
228
229 def make_extractor(self, name, *args, **kwargs):
230 return self.extractors.get(name)(*args, **kwargs)
231
232 def make_importer(self, name, *args, **kwargs):
233 return self.importers.get(name)(*args, **kwargs)
234
235 def make_launcher(self, name, *args, **kwargs):
236 return self.launchers.get(name)(*args, **kwargs)
237
238 def make_converter(self, name, *args, **kwargs):
239 result = self.converters.get(name)
240 if isclass(result):
241 result = result.convert
242 return partial(result, *args, **kwargs)
243
244 def make_transform(self, name, *args, **kwargs):
245 return partial(self.transforms.get(name), *args, **kwargs)
246
247 def is_format_known(self, name):
248 return name in self.importers or name in self.extractors
249
250 def detect_dataset(
251 self,
252 path: str,
253 depth: int = 1,
254 rejection_callback: Optional[Callable[[str, RejectionReason, str], None]] = None,
255 ) -> List[str]:
256 ignore_dirs = {"__MSOSX", "__MACOSX"}
257 matched_formats = set()
258 for _ in range(depth + 1):
259 detected_formats = detect_dataset_format(
260 (
261 (format_name, importer.detect)
262 for format_name, importer in self.importers.items.items()
263 ),
264 path,
265 rejection_callback=rejection_callback,
266 )
267
268 if detected_formats and len(detected_formats) == 1:
269 return detected_formats
270 elif detected_formats:
271 matched_formats |= set(detected_formats)
272
273 paths = glob.glob(osp.join(path, "*"))
274 path = "" if len(paths) != 1 else paths[0]
275 if not osp.isdir(path) or osp.basename(path) in ignore_dirs:
276 break
277
278 return list(matched_formats)
279
[end of datumaro/components/environment.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datumaro/components/environment.py b/datumaro/components/environment.py
--- a/datumaro/components/environment.py
+++ b/datumaro/components/environment.py
@@ -38,7 +38,7 @@
def __contains__(self, key) -> bool:
return key in self.items
- def __iter__(self) -> Iterator[T]:
+ def __iter__(self) -> Iterator[str]:
return iter(self.items)
| {"golden_diff": "diff --git a/datumaro/components/environment.py b/datumaro/components/environment.py\n--- a/datumaro/components/environment.py\n+++ b/datumaro/components/environment.py\n@@ -38,7 +38,7 @@\n def __contains__(self, key) -> bool:\n return key in self.items\n \n- def __iter__(self) -> Iterator[T]:\n+ def __iter__(self) -> Iterator[str]:\n return iter(self.items)\n", "issue": "Wrong annotated return type in Registry class\nhttps://github.com/openvinotoolkit/datumaro/blob/0d4a73d3bbe3a93585af7a0148a0e344fd1106b3/datumaro/components/environment.py#L41-L42\r\nIn the referenced code the return type of the method appears to be wrong. \r\n\r\nEither it should be `Iterator[str]` since iteration over a dict returns its keys which are of type `str` or the return statement should be `return iter(self.items.values())`.\r\n\r\nWhen using the library with static type checkers this annotation causes type check errors. When removing the annotation, type checkers correctly infer the type `Iterator[str]`.\nWrong annotated return type in Registry class\nhttps://github.com/openvinotoolkit/datumaro/blob/0d4a73d3bbe3a93585af7a0148a0e344fd1106b3/datumaro/components/environment.py#L41-L42\r\nIn the referenced code the return type of the method appears to be wrong. \r\n\r\nEither it should be `Iterator[str]` since iteration over a dict returns its keys which are of type `str` or the return statement should be `return iter(self.items.values())`.\r\n\r\nWhen using the library with static type checkers this annotation causes type check errors. When removing the annotation, type checkers correctly infer the type `Iterator[str]`.\n", "before_files": [{"content": "# Copyright (C) 2020-2022 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport glob\nimport importlib\nimport logging as log\nimport os.path as osp\nfrom functools import partial\nfrom inspect import isclass\nfrom typing import Callable, Dict, Generic, Iterable, Iterator, List, Optional, Type, TypeVar\n\nfrom datumaro.components.cli_plugin import CliPlugin, plugin_types\nfrom datumaro.components.format_detection import RejectionReason, detect_dataset_format\nfrom datumaro.util.os_util import import_foreign_module, split_path\n\nT = TypeVar(\"T\")\n\n\nclass Registry(Generic[T]):\n def __init__(self):\n self.items: Dict[str, T] = {}\n\n def register(self, name: str, value: T) -> T:\n self.items[name] = value\n return value\n\n def unregister(self, name: str) -> Optional[T]:\n return self.items.pop(name, None)\n\n def get(self, key: str):\n \"\"\"Returns a class or a factory function\"\"\"\n return self.items[key]\n\n def __getitem__(self, key: str) -> T:\n return self.get(key)\n\n def __contains__(self, key) -> bool:\n return key in self.items\n\n def __iter__(self) -> Iterator[T]:\n return iter(self.items)\n\n\nclass PluginRegistry(Registry[Type[CliPlugin]]):\n def __init__(\n self, filter: Callable[[Type[CliPlugin]], bool] = None\n ): # pylint: disable=redefined-builtin\n super().__init__()\n self._filter = filter\n\n def batch_register(self, values: Iterable[CliPlugin]):\n for v in values:\n if self._filter and not self._filter(v):\n continue\n\n self.register(v.NAME, v)\n\n\nclass Environment:\n _builtin_plugins = None\n\n @classmethod\n def _make_filter(cls, accept, skip=None):\n accept = (accept,) if isclass(accept) else tuple(accept)\n skip = {skip} if isclass(skip) else set(skip or [])\n skip = tuple(skip | set(accept))\n return partial(cls._check_type, accept=accept, skip=skip)\n\n @staticmethod\n def _check_type(t, *, accept, 
skip):\n return issubclass(t, accept) and t not in skip\n\n def __init__(self):\n from datumaro.components.converter import Converter\n from datumaro.components.dataset_generator import DatasetGenerator\n from datumaro.components.extractor import (\n Extractor,\n Importer,\n ItemTransform,\n SourceExtractor,\n Transform,\n )\n from datumaro.components.launcher import Launcher\n from datumaro.components.validator import Validator\n\n _filter = self._make_filter\n self._extractors = PluginRegistry(_filter(Extractor, skip=SourceExtractor))\n self._importers = PluginRegistry(_filter(Importer))\n self._launchers = PluginRegistry(_filter(Launcher))\n self._converters = PluginRegistry(_filter(Converter))\n self._generators = PluginRegistry(_filter(DatasetGenerator))\n self._transforms = PluginRegistry(_filter(Transform, skip=ItemTransform))\n self._validators = PluginRegistry(_filter(Validator))\n self._builtins_initialized = False\n\n def _get_plugin_registry(self, name):\n if not self._builtins_initialized:\n self._builtins_initialized = True\n self._register_builtin_plugins()\n return getattr(self, name)\n\n @property\n def extractors(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_extractors\")\n\n @property\n def importers(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_importers\")\n\n @property\n def launchers(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_launchers\")\n\n @property\n def converters(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_converters\")\n\n @property\n def generators(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_generators\")\n\n @property\n def transforms(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_transforms\")\n\n @property\n def validators(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_validators\")\n\n @staticmethod\n def _find_plugins(plugins_dir):\n plugins = []\n\n for pattern in (\"*.py\", \"*/*.py\"):\n for path in glob.glob(osp.join(glob.escape(plugins_dir), pattern)):\n if not osp.isfile(path):\n continue\n\n path_rel = osp.relpath(path, plugins_dir)\n name_parts = split_path(osp.splitext(path_rel)[0])\n\n # a module with a dot in the name won't load correctly\n if any(\".\" in part for part in name_parts):\n log.warning(\n \"Python file '%s' in directory '%s' can't be imported \"\n \"due to a dot in the name; skipping.\",\n path_rel,\n plugins_dir,\n )\n continue\n plugins.append(\".\".join(name_parts))\n\n return plugins\n\n @classmethod\n def _get_plugin_exports(cls, module, types):\n exports = []\n if hasattr(module, \"exports\"):\n exports = module.exports\n else:\n for symbol in dir(module):\n if symbol.startswith(\"_\"):\n continue\n exports.append(getattr(module, symbol))\n\n exports = [s for s in exports if isclass(s) and issubclass(s, types) and not s in types]\n\n return exports\n\n @classmethod\n def _load_plugins(cls, module_names, *, importer, types=None):\n types = tuple(types or plugin_types())\n\n all_exports = []\n for module_name in module_names:\n try:\n module = importer(module_name)\n exports = cls._get_plugin_exports(module, types)\n except Exception as e:\n module_search_error = ModuleNotFoundError\n\n message = [\"Failed to import module '%s': %s\", module_name, e]\n if isinstance(e, module_search_error):\n log.debug(*message)\n else:\n log.warning(*message)\n continue\n\n log.debug(\n \"Imported the following symbols from %s: %s\"\n % (module_name, \", \".join(s.__name__ for s in exports))\n )\n 
all_exports.extend(exports)\n\n return all_exports\n\n @classmethod\n def _load_builtin_plugins(cls):\n if cls._builtin_plugins is None:\n import datumaro.plugins\n\n plugins_dir = osp.dirname(datumaro.plugins.__file__)\n module_names = [\n datumaro.plugins.__name__ + \".\" + name for name in cls._find_plugins(plugins_dir)\n ]\n cls._builtin_plugins = cls._load_plugins(module_names, importer=importlib.import_module)\n return cls._builtin_plugins\n\n def load_plugins(self, plugins_dir):\n module_names = self._find_plugins(plugins_dir)\n plugins = self._load_plugins(\n module_names, importer=partial(import_foreign_module, path=plugins_dir)\n )\n self._register_plugins(plugins)\n\n def _register_builtin_plugins(self):\n self._register_plugins(self._load_builtin_plugins())\n\n def _register_plugins(self, plugins):\n self.extractors.batch_register(plugins)\n self.importers.batch_register(plugins)\n self.launchers.batch_register(plugins)\n self.converters.batch_register(plugins)\n self.generators.batch_register(plugins)\n self.transforms.batch_register(plugins)\n self.validators.batch_register(plugins)\n\n def make_extractor(self, name, *args, **kwargs):\n return self.extractors.get(name)(*args, **kwargs)\n\n def make_importer(self, name, *args, **kwargs):\n return self.importers.get(name)(*args, **kwargs)\n\n def make_launcher(self, name, *args, **kwargs):\n return self.launchers.get(name)(*args, **kwargs)\n\n def make_converter(self, name, *args, **kwargs):\n result = self.converters.get(name)\n if isclass(result):\n result = result.convert\n return partial(result, *args, **kwargs)\n\n def make_transform(self, name, *args, **kwargs):\n return partial(self.transforms.get(name), *args, **kwargs)\n\n def is_format_known(self, name):\n return name in self.importers or name in self.extractors\n\n def detect_dataset(\n self,\n path: str,\n depth: int = 1,\n rejection_callback: Optional[Callable[[str, RejectionReason, str], None]] = None,\n ) -> List[str]:\n ignore_dirs = {\"__MSOSX\", \"__MACOSX\"}\n matched_formats = set()\n for _ in range(depth + 1):\n detected_formats = detect_dataset_format(\n (\n (format_name, importer.detect)\n for format_name, importer in self.importers.items.items()\n ),\n path,\n rejection_callback=rejection_callback,\n )\n\n if detected_formats and len(detected_formats) == 1:\n return detected_formats\n elif detected_formats:\n matched_formats |= set(detected_formats)\n\n paths = glob.glob(osp.join(path, \"*\"))\n path = \"\" if len(paths) != 1 else paths[0]\n if not osp.isdir(path) or osp.basename(path) in ignore_dirs:\n break\n\n return list(matched_formats)\n", "path": "datumaro/components/environment.py"}]} | 3,607 | 97 |
gh_patches_debug_25454 | rasdani/github-patches | git_diff | buildbot__buildbot-3179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RolesFromEmails doesn't work with GitHub auth (and maybe others)
I've got a setup like this:
```
auth = util.GitHubAuth(CLIENT_ID, CLIENT_SECRET)
authz = util.Authz(
allowRules = [ util.AnyControlEndpointMatcher(role = "admins") ],
roleMatchers = [
util.RolesFromEmails(
admins = [ "[email protected]" ]
)
]
)
```
`[email protected]` is my primary email address. This doesn't work; I have to use my username, `samizzo`, as the email address in the `RolesFromEmails`.
Looking through the code, I can't see how this has ever worked. The authentication mechanism ends up calling [`UserInfoProviderBase.getUserInfo`](https://github.com/buildbot/buildbot/blob/master/master/buildbot/www/auth.py#L83) which returns the username as the email address in the user info.
I'm not sure what the right fix for this is; I don't know the buildbot code very well. I've switched over to using `RolesFromUsername` which is more convenient anyway.
</issue>
<code>
[start of master/buildbot/www/auth.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16 from __future__ import absolute_import
17 from __future__ import print_function
18
19 import re
20
21 from twisted.cred.checkers import FilePasswordDB
22 from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse
23 from twisted.cred.portal import IRealm
24 from twisted.cred.portal import Portal
25 from twisted.internet import defer
26 from twisted.web.error import Error
27 from twisted.web.guard import BasicCredentialFactory
28 from twisted.web.guard import DigestCredentialFactory
29 from twisted.web.guard import HTTPAuthSessionWrapper
30 from twisted.web.resource import IResource
31 from zope.interface import implementer
32
33 from buildbot.util import bytes2NativeString
34 from buildbot.util import config
35 from buildbot.www import resource
36
37
38 class AuthRootResource(resource.Resource):
39
40 def getChild(self, path, request):
41 # return dynamically generated resources
42 if path == b'login':
43 return self.master.www.auth.getLoginResource()
44 elif path == b'logout':
45 return self.master.www.auth.getLogoutResource()
46 return resource.Resource.getChild(self, path, request)
47
48
49 class AuthBase(config.ConfiguredMixin):
50
51 def __init__(self, userInfoProvider=None):
52 if userInfoProvider is None:
53 userInfoProvider = UserInfoProviderBase()
54 self.userInfoProvider = userInfoProvider
55
56 def reconfigAuth(self, master, new_config):
57 self.master = master
58
59 def maybeAutoLogin(self, request):
60 return defer.succeed(None)
61
62 def getLoginResource(self):
63 raise Error(501, "not implemented")
64
65 def getLogoutResource(self):
66 return LogoutResource(self.master)
67
68 @defer.inlineCallbacks
69 def updateUserInfo(self, request):
70 session = request.getSession()
71 if self.userInfoProvider is not None:
72 infos = yield self.userInfoProvider.getUserInfo(session.user_info['username'])
73 session.user_info.update(infos)
74 session.updateSession(request)
75
76 def getConfigDict(self):
77 return {'name': type(self).__name__}
78
79
80 class UserInfoProviderBase(config.ConfiguredMixin):
81 name = "noinfo"
82
83 def getUserInfo(self, username):
84 return defer.succeed({'email': username})
85
86
87 class LoginResource(resource.Resource):
88
89 def render_GET(self, request):
90 return self.asyncRenderHelper(request, self.renderLogin)
91
92 @defer.inlineCallbacks
93 def renderLogin(self, request):
94 raise NotImplementedError
95
96
97 class NoAuth(AuthBase):
98 pass
99
100
101 class RemoteUserAuth(AuthBase):
102 header = "REMOTE_USER"
103 headerRegex = re.compile(r"(?P<username>[^ @]+)@(?P<realm>[^ @]+)")
104
105 def __init__(self, header=None, headerRegex=None, **kwargs):
106 AuthBase.__init__(self, **kwargs)
107 if header is not None:
108 self.header = header
109 if headerRegex is not None:
110 self.headerRegex = re.compile(headerRegex)
111
112 @defer.inlineCallbacks
113 def maybeAutoLogin(self, request):
114 header = request.getHeader(self.header)
115 if header is None:
116 raise Error(403, "missing http header %s. Check your reverse proxy config!" % (
117 self.header))
118 res = self.headerRegex.match(header)
119 if res is None:
120 raise Error(
121 403, 'http header does not match regex! "%s" not matching %s' %
122 (header, self.headerRegex.pattern))
123 session = request.getSession()
124 if session.user_info != dict(res.groupdict()):
125 session.user_info = dict(res.groupdict())
126 yield self.updateUserInfo(request)
127
128
129 @implementer(IRealm)
130 class AuthRealm(object):
131
132 def __init__(self, master, auth):
133 self.auth = auth
134 self.master = master
135
136 def requestAvatar(self, avatarId, mind, *interfaces):
137 if IResource in interfaces:
138 return (IResource,
139 PreAuthenticatedLoginResource(self.master, avatarId),
140 lambda: None)
141 raise NotImplementedError()
142
143
144 class TwistedICredAuthBase(AuthBase):
145
146 def __init__(self, credentialFactories, checkers, **kwargs):
147 AuthBase.__init__(self, **kwargs)
148 self.credentialFactories = credentialFactories
149 self.checkers = checkers
150
151 def getLoginResource(self):
152 return HTTPAuthSessionWrapper(
153 Portal(AuthRealm(self.master, self), self.checkers),
154 self.credentialFactories)
155
156
157 class HTPasswdAuth(TwistedICredAuthBase):
158
159 def __init__(self, passwdFile, **kwargs):
160 TwistedICredAuthBase.__init__(
161 self,
162 [DigestCredentialFactory(b"md5", b"buildbot"),
163 BasicCredentialFactory(b"buildbot")],
164 [FilePasswordDB(passwdFile)],
165 **kwargs)
166
167
168 class UserPasswordAuth(TwistedICredAuthBase):
169
170 def __init__(self, users, **kwargs):
171 TwistedICredAuthBase.__init__(
172 self,
173 [DigestCredentialFactory(b"md5", b"buildbot"),
174 BasicCredentialFactory(b"buildbot")],
175 [InMemoryUsernamePasswordDatabaseDontUse(**dict(users))],
176 **kwargs)
177
178
179 def _redirect(master, request):
180 url = request.args.get("redirect", ["/"])[0]
181 return resource.Redirect(master.config.buildbotURL + "#" + url)
182
183
184 class PreAuthenticatedLoginResource(LoginResource):
185 # a LoginResource which is already authenticated via a
186 # HTTPAuthSessionWrapper
187
188 def __init__(self, master, username):
189 LoginResource.__init__(self, master)
190 self.username = username
191
192 @defer.inlineCallbacks
193 def renderLogin(self, request):
194 session = request.getSession()
195 session.user_info = dict(username=bytes2NativeString(self.username))
196 yield self.master.www.auth.updateUserInfo(request)
197 raise _redirect(self.master, request)
198
199
200 class LogoutResource(resource.Resource):
201
202 def render_GET(self, request):
203 session = request.getSession()
204 session.expire()
205 session.updateSession(request)
206 request.redirect(_redirect(self.master, request).url)
207 return b''
208
[end of master/buildbot/www/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/master/buildbot/www/auth.py b/master/buildbot/www/auth.py
--- a/master/buildbot/www/auth.py
+++ b/master/buildbot/www/auth.py
@@ -49,8 +49,6 @@
class AuthBase(config.ConfiguredMixin):
def __init__(self, userInfoProvider=None):
- if userInfoProvider is None:
- userInfoProvider = UserInfoProviderBase()
self.userInfoProvider = userInfoProvider
def reconfigAuth(self, master, new_config):
@@ -104,6 +102,8 @@
def __init__(self, header=None, headerRegex=None, **kwargs):
AuthBase.__init__(self, **kwargs)
+ if self.userInfoProvider is None:
+ self.userInfoProvider = UserInfoProviderBase()
if header is not None:
self.header = header
if headerRegex is not None:
@@ -145,6 +145,8 @@
def __init__(self, credentialFactories, checkers, **kwargs):
AuthBase.__init__(self, **kwargs)
+ if self.userInfoProvider is None:
+ self.userInfoProvider = UserInfoProviderBase()
self.credentialFactories = credentialFactories
self.checkers = checkers
| {"golden_diff": "diff --git a/master/buildbot/www/auth.py b/master/buildbot/www/auth.py\n--- a/master/buildbot/www/auth.py\n+++ b/master/buildbot/www/auth.py\n@@ -49,8 +49,6 @@\n class AuthBase(config.ConfiguredMixin):\n \n def __init__(self, userInfoProvider=None):\n- if userInfoProvider is None:\n- userInfoProvider = UserInfoProviderBase()\n self.userInfoProvider = userInfoProvider\n \n def reconfigAuth(self, master, new_config):\n@@ -104,6 +102,8 @@\n \n def __init__(self, header=None, headerRegex=None, **kwargs):\n AuthBase.__init__(self, **kwargs)\n+ if self.userInfoProvider is None:\n+ self.userInfoProvider = UserInfoProviderBase()\n if header is not None:\n self.header = header\n if headerRegex is not None:\n@@ -145,6 +145,8 @@\n \n def __init__(self, credentialFactories, checkers, **kwargs):\n AuthBase.__init__(self, **kwargs)\n+ if self.userInfoProvider is None:\n+ self.userInfoProvider = UserInfoProviderBase()\n self.credentialFactories = credentialFactories\n self.checkers = checkers\n", "issue": "RolesFromEmails doesn't work with GitHub auth (and maybe others)\nI've got a setup like this:\r\n\r\n```\r\nauth = util.GitHubAuth(CLIENT_ID, CLIENT_SECRET)\r\nauthz = util.Authz(\r\n allowRules = [ util.AnyControlEndpointMatcher(role = \"admins\") ],\r\n roleMatchers = [\r\n util.RolesFromEmails(\r\n admins = [ \"[email protected]\" ]\r\n )\r\n ]\r\n)\r\n```\r\n\r\n`[email protected]` is my primary email address. This doesn't work; I have to use my username, `samizzo`, as the email address in the `RolesFromEmails`.\r\n\r\nLooking through the code, I can't see how this has ever worked. The authentication mechanism ends up calling [`UserInfoProviderBase.getUserInfo`](https://github.com/buildbot/buildbot/blob/master/master/buildbot/www/auth.py#L83) which returns the username as the email address in the user info.\r\n\r\nI'm not sure what the right fix for this is; I don't know the buildbot code very well. I've switched over to using `RolesFromUsername` which is more convenient anyway.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport re\n\nfrom twisted.cred.checkers import FilePasswordDB\nfrom twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse\nfrom twisted.cred.portal import IRealm\nfrom twisted.cred.portal import Portal\nfrom twisted.internet import defer\nfrom twisted.web.error import Error\nfrom twisted.web.guard import BasicCredentialFactory\nfrom twisted.web.guard import DigestCredentialFactory\nfrom twisted.web.guard import HTTPAuthSessionWrapper\nfrom twisted.web.resource import IResource\nfrom zope.interface import implementer\n\nfrom buildbot.util import bytes2NativeString\nfrom buildbot.util import config\nfrom buildbot.www import resource\n\n\nclass AuthRootResource(resource.Resource):\n\n def getChild(self, path, request):\n # return dynamically generated resources\n if path == b'login':\n return self.master.www.auth.getLoginResource()\n elif path == b'logout':\n return self.master.www.auth.getLogoutResource()\n return resource.Resource.getChild(self, path, request)\n\n\nclass AuthBase(config.ConfiguredMixin):\n\n def __init__(self, userInfoProvider=None):\n if userInfoProvider is None:\n userInfoProvider = UserInfoProviderBase()\n self.userInfoProvider = userInfoProvider\n\n def reconfigAuth(self, master, new_config):\n self.master = master\n\n def maybeAutoLogin(self, request):\n return defer.succeed(None)\n\n def getLoginResource(self):\n raise Error(501, \"not implemented\")\n\n def getLogoutResource(self):\n return LogoutResource(self.master)\n\n @defer.inlineCallbacks\n def updateUserInfo(self, request):\n session = request.getSession()\n if self.userInfoProvider is not None:\n infos = yield self.userInfoProvider.getUserInfo(session.user_info['username'])\n session.user_info.update(infos)\n session.updateSession(request)\n\n def getConfigDict(self):\n return {'name': type(self).__name__}\n\n\nclass UserInfoProviderBase(config.ConfiguredMixin):\n name = \"noinfo\"\n\n def getUserInfo(self, username):\n return defer.succeed({'email': username})\n\n\nclass LoginResource(resource.Resource):\n\n def render_GET(self, request):\n return self.asyncRenderHelper(request, self.renderLogin)\n\n @defer.inlineCallbacks\n def renderLogin(self, request):\n raise NotImplementedError\n\n\nclass NoAuth(AuthBase):\n pass\n\n\nclass RemoteUserAuth(AuthBase):\n header = \"REMOTE_USER\"\n headerRegex = re.compile(r\"(?P<username>[^ @]+)@(?P<realm>[^ @]+)\")\n\n def __init__(self, header=None, headerRegex=None, **kwargs):\n AuthBase.__init__(self, **kwargs)\n if header is not None:\n self.header = header\n if headerRegex is not None:\n self.headerRegex = re.compile(headerRegex)\n\n @defer.inlineCallbacks\n def maybeAutoLogin(self, request):\n header = request.getHeader(self.header)\n if header is None:\n raise Error(403, \"missing http header %s. Check your reverse proxy config!\" % (\n self.header))\n res = self.headerRegex.match(header)\n if res is None:\n raise Error(\n 403, 'http header does not match regex! 
\"%s\" not matching %s' %\n (header, self.headerRegex.pattern))\n session = request.getSession()\n if session.user_info != dict(res.groupdict()):\n session.user_info = dict(res.groupdict())\n yield self.updateUserInfo(request)\n\n\n@implementer(IRealm)\nclass AuthRealm(object):\n\n def __init__(self, master, auth):\n self.auth = auth\n self.master = master\n\n def requestAvatar(self, avatarId, mind, *interfaces):\n if IResource in interfaces:\n return (IResource,\n PreAuthenticatedLoginResource(self.master, avatarId),\n lambda: None)\n raise NotImplementedError()\n\n\nclass TwistedICredAuthBase(AuthBase):\n\n def __init__(self, credentialFactories, checkers, **kwargs):\n AuthBase.__init__(self, **kwargs)\n self.credentialFactories = credentialFactories\n self.checkers = checkers\n\n def getLoginResource(self):\n return HTTPAuthSessionWrapper(\n Portal(AuthRealm(self.master, self), self.checkers),\n self.credentialFactories)\n\n\nclass HTPasswdAuth(TwistedICredAuthBase):\n\n def __init__(self, passwdFile, **kwargs):\n TwistedICredAuthBase.__init__(\n self,\n [DigestCredentialFactory(b\"md5\", b\"buildbot\"),\n BasicCredentialFactory(b\"buildbot\")],\n [FilePasswordDB(passwdFile)],\n **kwargs)\n\n\nclass UserPasswordAuth(TwistedICredAuthBase):\n\n def __init__(self, users, **kwargs):\n TwistedICredAuthBase.__init__(\n self,\n [DigestCredentialFactory(b\"md5\", b\"buildbot\"),\n BasicCredentialFactory(b\"buildbot\")],\n [InMemoryUsernamePasswordDatabaseDontUse(**dict(users))],\n **kwargs)\n\n\ndef _redirect(master, request):\n url = request.args.get(\"redirect\", [\"/\"])[0]\n return resource.Redirect(master.config.buildbotURL + \"#\" + url)\n\n\nclass PreAuthenticatedLoginResource(LoginResource):\n # a LoginResource which is already authenticated via a\n # HTTPAuthSessionWrapper\n\n def __init__(self, master, username):\n LoginResource.__init__(self, master)\n self.username = username\n\n @defer.inlineCallbacks\n def renderLogin(self, request):\n session = request.getSession()\n session.user_info = dict(username=bytes2NativeString(self.username))\n yield self.master.www.auth.updateUserInfo(request)\n raise _redirect(self.master, request)\n\n\nclass LogoutResource(resource.Resource):\n\n def render_GET(self, request):\n session = request.getSession()\n session.expire()\n session.updateSession(request)\n request.redirect(_redirect(self.master, request).url)\n return b''\n", "path": "master/buildbot/www/auth.py"}]} | 2,758 | 269 |
gh_patches_debug_39826 | rasdani/github-patches | git_diff | mozmeao__basket-508 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError when SFMC error is returned
SFMC response.content is bytes, but the sfmc.py module tries to concat that message with strings to throw NewsletterExceptions.
</issue>
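For context, the error described above is the usual Python 3 bytes/str mismatch: `requests` exposes `response.content` as `bytes`, and concatenating it with a `str` raises `TypeError`. The short, self-contained sketch below reproduces the failure and shows the decode-first remedy; the sample payload is made up, and `django.utils.encoding.force_str` (used in the patch at the end of this entry) performs essentially the same decoding.

```python
content = b'{"error": "boom"}'  # illustrative payload; requests always returns bytes here

# What sfmc.py effectively did, and what fails under Python 3:
try:
    msg = 'SFMC Error During Auth: ' + content
except TypeError as exc:
    print(exc)  # can only concatenate str (not "bytes") to str

# Decoding the payload first keeps the exception message usable:
msg = 'SFMC Error During Auth: ' + content.decode('utf-8', errors='replace')
print(msg)  # SFMC Error During Auth: {"error": "boom"}
```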
<code>
[start of basket/news/backends/sfmc.py]
1 """
2 API Client Library for Salesforce Marketing Cloud (SFMC)
3 Formerly ExactTarget
4 """
5 from random import randint
6 from time import time
7
8 from django.conf import settings
9 from django.core.cache import cache
10
11 import requests
12 from django_statsd.clients import statsd
13 from FuelSDK import ET_Client, ET_DataExtension_Row, ET_TriggeredSend
14
15 from basket.news.backends.common import get_timer_decorator, NewsletterException, \
16 NewsletterNoResultsException
17
18
19 time_request = get_timer_decorator('news.backends.sfmc')
20
21
22 HERD_TIMEOUT = 60
23 AUTH_BUFFER = 300 # 5 min
24 MAX_BUFFER = HERD_TIMEOUT + AUTH_BUFFER
25
26
27 class ETRefreshClient(ET_Client):
28 token_cache_key = 'backends:sfmc:auth:tokens'
29 authTokenExpiresIn = None
30 token_property_names = [
31 'authToken',
32 'authTokenExpiration',
33 'internalAuthToken',
34 'refreshKey',
35 ]
36 _old_authToken = None
37
38 def __init__(self, get_server_wsdl=False, debug=False, params=None):
39 # setting this manually as it has thrown errors and doesn't change
40 if settings.USE_SANDBOX_BACKEND:
41 self.endpoint = 'https://webservice.test.exacttarget.com/Service.asmx'
42 else:
43 self.endpoint = 'https://webservice.s4.exacttarget.com/Service.asmx'
44
45 super(ETRefreshClient, self).__init__(get_server_wsdl, debug, params)
46
47 def token_is_expired(self):
48 """Report token is expired between 5 and 6 minutes early
49
50 Having the expiration be random helps prevent multiple basket
51 instances simultaneously requesting a new token from SFMC,
52 a.k.a. the Thundering Herd problem.
53 """
54 if self.authTokenExpiration is None:
55 return True
56
57 time_buffer = randint(1, HERD_TIMEOUT) + AUTH_BUFFER
58 return time() + time_buffer > self.authTokenExpiration
59
60 def refresh_auth_tokens_from_cache(self):
61 """Refresh the auth token and other values from cache"""
62 if self.authToken is not None and time() + MAX_BUFFER < self.authTokenExpiration:
63 # no need to refresh if the current tokens are still good
64 return
65
66 tokens = cache.get(self.token_cache_key)
67 if tokens:
68 if not isinstance(tokens, dict):
69 # something wrong was cached
70 cache.delete(self.token_cache_key)
71 return
72
73 for prop, value in tokens.items():
74 if prop in self.token_property_names:
75 setattr(self, prop, value)
76
77 # set the value so we can detect if it changed later
78 self._old_authToken = self.authToken
79 self.build_soap_client()
80
81 def cache_auth_tokens(self):
82 if self.authToken is not None and self.authToken != self._old_authToken:
83 new_tokens = {prop: getattr(self, prop) for prop in self.token_property_names}
84 # 10 min longer than expiration so that refreshKey can be used
85 cache.set(self.token_cache_key, new_tokens, self.authTokenExpiresIn + 600)
86
87 def request_token(self, payload):
88 r = requests.post(self.auth_url, json=payload)
89 try:
90 token_response = r.json()
91 except ValueError:
92 raise NewsletterException('SFMC Error During Auth: ' + r.content,
93 status_code=r.status_code)
94
95 if 'accessToken' in token_response:
96 return token_response
97
98 # try again without refreshToken
99 if 'refreshToken' in payload:
100 # not strictly required, makes testing easier
101 payload = payload.copy()
102 del payload['refreshToken']
103 return self.request_token(payload)
104
105 raise NewsletterException('SFMC Error During Auth: ' + r.content,
106 status_code=r.status_code)
107
108 def refresh_token(self, force_refresh=False):
109 """
110 Called from many different places right before executing a SOAP call
111 """
112 # If we don't already have a token or the token expires within 5 min(300 seconds), get one
113 self.refresh_auth_tokens_from_cache()
114 if force_refresh or self.authToken is None or self.token_is_expired():
115 payload = {
116 'clientId': self.client_id,
117 'clientSecret': self.client_secret,
118 'accessType': 'offline',
119 }
120 if self.refreshKey:
121 payload['refreshToken'] = self.refreshKey
122
123 token_response = self.request_token(payload)
124 statsd.incr('news.backends.sfmc.auth_token_refresh')
125 self.authToken = token_response['accessToken']
126 self.authTokenExpiresIn = token_response['expiresIn']
127 self.authTokenExpiration = time() + self.authTokenExpiresIn
128 self.internalAuthToken = token_response['legacyToken']
129 if 'refreshToken' in token_response:
130 self.refreshKey = token_response['refreshToken']
131
132 self.build_soap_client()
133 self.cache_auth_tokens()
134
135
136 def assert_response(resp):
137 if not resp.status:
138 raise NewsletterException(str(resp.results))
139
140
141 def assert_results(resp):
142 assert_response(resp)
143 if not resp.results:
144 raise NewsletterNoResultsException()
145
146
147 def build_attributes(data):
148 return [{'Name': key, 'Value': value} for key, value in data.items()]
149
150
151 class SFMC(object):
152 _client = None
153 sms_api_url = 'https://www.exacttargetapis.com/sms/v1/messageContact/{}/send'
154 rowset_api_url = 'https://www.exacttargetapis.com/hub/v1/dataevents/key:{}/rowset'
155
156 @property
157 def client(self):
158 if self._client is None and 'clientid' in settings.SFMC_SETTINGS:
159 self._client = ETRefreshClient(False, settings.SFMC_DEBUG, settings.SFMC_SETTINGS)
160
161 return self._client
162
163 @property
164 def auth_header(self):
165 self.client.refresh_token()
166 return {'Authorization': 'Bearer {0}'.format(self.client.authToken)}
167
168 def _get_row_obj(self, de_name, props):
169 row = ET_DataExtension_Row()
170 row.auth_stub = self.client
171 row.CustomerKey = row.Name = de_name
172 row.props = props
173 return row
174
175 @time_request
176 def get_row(self, de_name, fields, token=None, email=None):
177 """
178 Get the values of `fields` from a data extension. Either token or email is required.
179
180 @param de_name: name of the data extension
181 @param fields: list of column names
182 @param token: the user's token
183 @param email: the user's email address
184 @return: dict of user data
185 """
186 assert token or email, 'token or email required'
187 row = self._get_row_obj(de_name, fields)
188 if token:
189 row.search_filter = {
190 'Property': 'TOKEN',
191 'SimpleOperator': 'equals',
192 'Value': token,
193 }
194 elif email:
195 row.search_filter = {
196 'Property': 'EMAIL_ADDRESS_',
197 'SimpleOperator': 'equals',
198 'Value': email,
199 }
200
201 resp = row.get()
202 assert_results(resp)
203 # TODO do something if more than 1 result is returned
204 return dict((p.Name, p.Value)
205 for p in resp.results[0].Properties.Property)
206
207 @time_request
208 def add_row(self, de_name, values):
209 """
210 Add a row to a data extension.
211
212 @param de_name: name of the data extension
213 @param values: dict containing the COLUMN: VALUE pairs
214 @return: None
215 """
216 row = self._get_row_obj(de_name, values)
217 resp = row.post()
218 assert_response(resp)
219
220 @time_request
221 def update_row(self, de_name, values):
222 """
223 Update a row in a data extension.
224
225 @param de_name: name of the data extension
226 @param values: dict containing the COLUMN: VALUE pairs.
227 Must contain TOKEN or EMAIL_ADDRESS_.
228 @return: None
229 """
230 row = self._get_row_obj(de_name, values)
231 resp = row.patch()
232 assert_response(resp)
233
234 @time_request
235 def upsert_row(self, de_name, values):
236 """
237 Add or update a row in a data extension.
238
239 @param de_name: name of the data extension
240 @param values: dict containing the COLUMN: VALUE pairs.
241 Must contain TOKEN or EMAIL_ADDRESS_.
242 @return: None
243 """
244 row = self._get_row_obj(de_name, values)
245 resp = row.patch(True)
246 assert_response(resp)
247
248 @time_request
249 def delete_row(self, de_name, column, value):
250 """
251 Delete a row from a data extension. Either token or email are required.
252
253 @param de_name: name of the data extension
254 @param token: user's token
255 @param email: user's email address
256 @return: None
257 """
258 row = self._get_row_obj(de_name, {column: value})
259 resp = row.delete()
260 assert_response(resp)
261
262 @time_request
263 def send_mail(self, ts_name, email, subscriber_key, token=None):
264 """
265 Send an email message to a user (Triggered Send).
266
267 @param ts_name: the name of the message to send
268 @param email: the email address of the user
269 @param subscriber_key: the key for the user in SFMC
270 @param format: T or H for Text or HTML
271 @param token: optional token if a recovery message
272 @return: None
273 """
274 ts = ET_TriggeredSend()
275 ts.auth_stub = self.client
276 ts.props = {'CustomerKey': ts_name}
277 subscriber = {
278 'EmailAddress': email,
279 'SubscriberKey': subscriber_key,
280 }
281 if token:
282 ts.attributes = build_attributes({
283 'Token__c': token,
284 })
285 subscriber['Attributes'] = ts.attributes
286 ts.subscribers = [subscriber]
287 resp = ts.send()
288 assert_response(resp)
289
290 @time_request
291 def send_sms(self, phone_numbers, message_id):
292 if isinstance(phone_numbers, str):
293 phone_numbers = [phone_numbers]
294
295 phone_numbers = [pn.lstrip('+') for pn in phone_numbers]
296 data = {
297 'mobileNumbers': phone_numbers,
298 'Subscribe': True,
299 'Resubscribe': True,
300 'keyword': 'FFDROID', # TODO: Set keyword in arguments.
301 }
302 url = self.sms_api_url.format(message_id)
303 response = requests.post(url, json=data, headers=self.auth_header, timeout=10)
304 if response.status_code >= 500:
305 raise NewsletterException('SFMC Server Error: {}'.format(response.content),
306 status_code=response.status_code)
307
308 if response.status_code >= 400:
309 raise NewsletterException('SFMC Request Error: {}'.format(response.content),
310 status_code=response.status_code)
311
312 @time_request
313 def bulk_upsert_rows(self, de_name, values):
314 url = self.rowset_api_url.format(de_name)
315 response = requests.post(url, json=values, headers=self.auth_header, timeout=30)
316 if response.status_code >= 500:
317 raise NewsletterException('SFMC Server Error: {}'.format(response.content),
318 status_code=response.status_code)
319
320 if response.status_code >= 400:
321 raise NewsletterException(response.content, status_code=response.status_code)
322
323
324 sfmc = SFMC()
325
[end of basket/news/backends/sfmc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/basket/news/backends/sfmc.py b/basket/news/backends/sfmc.py
--- a/basket/news/backends/sfmc.py
+++ b/basket/news/backends/sfmc.py
@@ -7,6 +7,7 @@
from django.conf import settings
from django.core.cache import cache
+from django.utils.encoding import force_str
import requests
from django_statsd.clients import statsd
@@ -89,7 +90,7 @@
try:
token_response = r.json()
except ValueError:
- raise NewsletterException('SFMC Error During Auth: ' + r.content,
+ raise NewsletterException('SFMC Error During Auth: ' + force_str(r.content),
status_code=r.status_code)
if 'accessToken' in token_response:
@@ -102,7 +103,7 @@
del payload['refreshToken']
return self.request_token(payload)
- raise NewsletterException('SFMC Error During Auth: ' + r.content,
+ raise NewsletterException('SFMC Error During Auth: ' + force_str(r.content),
status_code=r.status_code)
def refresh_token(self, force_refresh=False):
@@ -302,11 +303,11 @@
url = self.sms_api_url.format(message_id)
response = requests.post(url, json=data, headers=self.auth_header, timeout=10)
if response.status_code >= 500:
- raise NewsletterException('SFMC Server Error: {}'.format(response.content),
+ raise NewsletterException('SFMC Server Error: {}'.format(force_str(response.content)),
status_code=response.status_code)
if response.status_code >= 400:
- raise NewsletterException('SFMC Request Error: {}'.format(response.content),
+ raise NewsletterException('SFMC Request Error: {}'.format(force_str(response.content)),
status_code=response.status_code)
@time_request
@@ -314,11 +315,11 @@
url = self.rowset_api_url.format(de_name)
response = requests.post(url, json=values, headers=self.auth_header, timeout=30)
if response.status_code >= 500:
- raise NewsletterException('SFMC Server Error: {}'.format(response.content),
+ raise NewsletterException('SFMC Server Error: {}'.format(force_str(response.content)),
status_code=response.status_code)
if response.status_code >= 400:
- raise NewsletterException(response.content, status_code=response.status_code)
+ raise NewsletterException(force_str(response.content), status_code=response.status_code)
sfmc = SFMC()
| {"golden_diff": "diff --git a/basket/news/backends/sfmc.py b/basket/news/backends/sfmc.py\n--- a/basket/news/backends/sfmc.py\n+++ b/basket/news/backends/sfmc.py\n@@ -7,6 +7,7 @@\n \n from django.conf import settings\n from django.core.cache import cache\n+from django.utils.encoding import force_str\n \n import requests\n from django_statsd.clients import statsd\n@@ -89,7 +90,7 @@\n try:\n token_response = r.json()\n except ValueError:\n- raise NewsletterException('SFMC Error During Auth: ' + r.content,\n+ raise NewsletterException('SFMC Error During Auth: ' + force_str(r.content),\n status_code=r.status_code)\n \n if 'accessToken' in token_response:\n@@ -102,7 +103,7 @@\n del payload['refreshToken']\n return self.request_token(payload)\n \n- raise NewsletterException('SFMC Error During Auth: ' + r.content,\n+ raise NewsletterException('SFMC Error During Auth: ' + force_str(r.content),\n status_code=r.status_code)\n \n def refresh_token(self, force_refresh=False):\n@@ -302,11 +303,11 @@\n url = self.sms_api_url.format(message_id)\n response = requests.post(url, json=data, headers=self.auth_header, timeout=10)\n if response.status_code >= 500:\n- raise NewsletterException('SFMC Server Error: {}'.format(response.content),\n+ raise NewsletterException('SFMC Server Error: {}'.format(force_str(response.content)),\n status_code=response.status_code)\n \n if response.status_code >= 400:\n- raise NewsletterException('SFMC Request Error: {}'.format(response.content),\n+ raise NewsletterException('SFMC Request Error: {}'.format(force_str(response.content)),\n status_code=response.status_code)\n \n @time_request\n@@ -314,11 +315,11 @@\n url = self.rowset_api_url.format(de_name)\n response = requests.post(url, json=values, headers=self.auth_header, timeout=30)\n if response.status_code >= 500:\n- raise NewsletterException('SFMC Server Error: {}'.format(response.content),\n+ raise NewsletterException('SFMC Server Error: {}'.format(force_str(response.content)),\n status_code=response.status_code)\n \n if response.status_code >= 400:\n- raise NewsletterException(response.content, status_code=response.status_code)\n+ raise NewsletterException(force_str(response.content), status_code=response.status_code)\n \n \n sfmc = SFMC()\n", "issue": "TypeError when SFMC error is returned\nSFMC response.content is bytes, but the sfmc.py module tries to concat that message with strings to throw NewsletterExceptions. 
\n", "before_files": [{"content": "\"\"\"\nAPI Client Library for Salesforce Marketing Cloud (SFMC)\nFormerly ExactTarget\n\"\"\"\nfrom random import randint\nfrom time import time\n\nfrom django.conf import settings\nfrom django.core.cache import cache\n\nimport requests\nfrom django_statsd.clients import statsd\nfrom FuelSDK import ET_Client, ET_DataExtension_Row, ET_TriggeredSend\n\nfrom basket.news.backends.common import get_timer_decorator, NewsletterException, \\\n NewsletterNoResultsException\n\n\ntime_request = get_timer_decorator('news.backends.sfmc')\n\n\nHERD_TIMEOUT = 60\nAUTH_BUFFER = 300 # 5 min\nMAX_BUFFER = HERD_TIMEOUT + AUTH_BUFFER\n\n\nclass ETRefreshClient(ET_Client):\n token_cache_key = 'backends:sfmc:auth:tokens'\n authTokenExpiresIn = None\n token_property_names = [\n 'authToken',\n 'authTokenExpiration',\n 'internalAuthToken',\n 'refreshKey',\n ]\n _old_authToken = None\n\n def __init__(self, get_server_wsdl=False, debug=False, params=None):\n # setting this manually as it has thrown errors and doesn't change\n if settings.USE_SANDBOX_BACKEND:\n self.endpoint = 'https://webservice.test.exacttarget.com/Service.asmx'\n else:\n self.endpoint = 'https://webservice.s4.exacttarget.com/Service.asmx'\n\n super(ETRefreshClient, self).__init__(get_server_wsdl, debug, params)\n\n def token_is_expired(self):\n \"\"\"Report token is expired between 5 and 6 minutes early\n\n Having the expiration be random helps prevent multiple basket\n instances simultaneously requesting a new token from SFMC,\n a.k.a. the Thundering Herd problem.\n \"\"\"\n if self.authTokenExpiration is None:\n return True\n\n time_buffer = randint(1, HERD_TIMEOUT) + AUTH_BUFFER\n return time() + time_buffer > self.authTokenExpiration\n\n def refresh_auth_tokens_from_cache(self):\n \"\"\"Refresh the auth token and other values from cache\"\"\"\n if self.authToken is not None and time() + MAX_BUFFER < self.authTokenExpiration:\n # no need to refresh if the current tokens are still good\n return\n\n tokens = cache.get(self.token_cache_key)\n if tokens:\n if not isinstance(tokens, dict):\n # something wrong was cached\n cache.delete(self.token_cache_key)\n return\n\n for prop, value in tokens.items():\n if prop in self.token_property_names:\n setattr(self, prop, value)\n\n # set the value so we can detect if it changed later\n self._old_authToken = self.authToken\n self.build_soap_client()\n\n def cache_auth_tokens(self):\n if self.authToken is not None and self.authToken != self._old_authToken:\n new_tokens = {prop: getattr(self, prop) for prop in self.token_property_names}\n # 10 min longer than expiration so that refreshKey can be used\n cache.set(self.token_cache_key, new_tokens, self.authTokenExpiresIn + 600)\n\n def request_token(self, payload):\n r = requests.post(self.auth_url, json=payload)\n try:\n token_response = r.json()\n except ValueError:\n raise NewsletterException('SFMC Error During Auth: ' + r.content,\n status_code=r.status_code)\n\n if 'accessToken' in token_response:\n return token_response\n\n # try again without refreshToken\n if 'refreshToken' in payload:\n # not strictly required, makes testing easier\n payload = payload.copy()\n del payload['refreshToken']\n return self.request_token(payload)\n\n raise NewsletterException('SFMC Error During Auth: ' + r.content,\n status_code=r.status_code)\n\n def refresh_token(self, force_refresh=False):\n \"\"\"\n Called from many different places right before executing a SOAP call\n \"\"\"\n # If we don't already have a token or the token expires 
within 5 min(300 seconds), get one\n self.refresh_auth_tokens_from_cache()\n if force_refresh or self.authToken is None or self.token_is_expired():\n payload = {\n 'clientId': self.client_id,\n 'clientSecret': self.client_secret,\n 'accessType': 'offline',\n }\n if self.refreshKey:\n payload['refreshToken'] = self.refreshKey\n\n token_response = self.request_token(payload)\n statsd.incr('news.backends.sfmc.auth_token_refresh')\n self.authToken = token_response['accessToken']\n self.authTokenExpiresIn = token_response['expiresIn']\n self.authTokenExpiration = time() + self.authTokenExpiresIn\n self.internalAuthToken = token_response['legacyToken']\n if 'refreshToken' in token_response:\n self.refreshKey = token_response['refreshToken']\n\n self.build_soap_client()\n self.cache_auth_tokens()\n\n\ndef assert_response(resp):\n if not resp.status:\n raise NewsletterException(str(resp.results))\n\n\ndef assert_results(resp):\n assert_response(resp)\n if not resp.results:\n raise NewsletterNoResultsException()\n\n\ndef build_attributes(data):\n return [{'Name': key, 'Value': value} for key, value in data.items()]\n\n\nclass SFMC(object):\n _client = None\n sms_api_url = 'https://www.exacttargetapis.com/sms/v1/messageContact/{}/send'\n rowset_api_url = 'https://www.exacttargetapis.com/hub/v1/dataevents/key:{}/rowset'\n\n @property\n def client(self):\n if self._client is None and 'clientid' in settings.SFMC_SETTINGS:\n self._client = ETRefreshClient(False, settings.SFMC_DEBUG, settings.SFMC_SETTINGS)\n\n return self._client\n\n @property\n def auth_header(self):\n self.client.refresh_token()\n return {'Authorization': 'Bearer {0}'.format(self.client.authToken)}\n\n def _get_row_obj(self, de_name, props):\n row = ET_DataExtension_Row()\n row.auth_stub = self.client\n row.CustomerKey = row.Name = de_name\n row.props = props\n return row\n\n @time_request\n def get_row(self, de_name, fields, token=None, email=None):\n \"\"\"\n Get the values of `fields` from a data extension. 
Either token or email is required.\n\n @param de_name: name of the data extension\n @param fields: list of column names\n @param token: the user's token\n @param email: the user's email address\n @return: dict of user data\n \"\"\"\n assert token or email, 'token or email required'\n row = self._get_row_obj(de_name, fields)\n if token:\n row.search_filter = {\n 'Property': 'TOKEN',\n 'SimpleOperator': 'equals',\n 'Value': token,\n }\n elif email:\n row.search_filter = {\n 'Property': 'EMAIL_ADDRESS_',\n 'SimpleOperator': 'equals',\n 'Value': email,\n }\n\n resp = row.get()\n assert_results(resp)\n # TODO do something if more than 1 result is returned\n return dict((p.Name, p.Value)\n for p in resp.results[0].Properties.Property)\n\n @time_request\n def add_row(self, de_name, values):\n \"\"\"\n Add a row to a data extension.\n\n @param de_name: name of the data extension\n @param values: dict containing the COLUMN: VALUE pairs\n @return: None\n \"\"\"\n row = self._get_row_obj(de_name, values)\n resp = row.post()\n assert_response(resp)\n\n @time_request\n def update_row(self, de_name, values):\n \"\"\"\n Update a row in a data extension.\n\n @param de_name: name of the data extension\n @param values: dict containing the COLUMN: VALUE pairs.\n Must contain TOKEN or EMAIL_ADDRESS_.\n @return: None\n \"\"\"\n row = self._get_row_obj(de_name, values)\n resp = row.patch()\n assert_response(resp)\n\n @time_request\n def upsert_row(self, de_name, values):\n \"\"\"\n Add or update a row in a data extension.\n\n @param de_name: name of the data extension\n @param values: dict containing the COLUMN: VALUE pairs.\n Must contain TOKEN or EMAIL_ADDRESS_.\n @return: None\n \"\"\"\n row = self._get_row_obj(de_name, values)\n resp = row.patch(True)\n assert_response(resp)\n\n @time_request\n def delete_row(self, de_name, column, value):\n \"\"\"\n Delete a row from a data extension. 
Either token or email are required.\n\n @param de_name: name of the data extension\n @param token: user's token\n @param email: user's email address\n @return: None\n \"\"\"\n row = self._get_row_obj(de_name, {column: value})\n resp = row.delete()\n assert_response(resp)\n\n @time_request\n def send_mail(self, ts_name, email, subscriber_key, token=None):\n \"\"\"\n Send an email message to a user (Triggered Send).\n\n @param ts_name: the name of the message to send\n @param email: the email address of the user\n @param subscriber_key: the key for the user in SFMC\n @param format: T or H for Text or HTML\n @param token: optional token if a recovery message\n @return: None\n \"\"\"\n ts = ET_TriggeredSend()\n ts.auth_stub = self.client\n ts.props = {'CustomerKey': ts_name}\n subscriber = {\n 'EmailAddress': email,\n 'SubscriberKey': subscriber_key,\n }\n if token:\n ts.attributes = build_attributes({\n 'Token__c': token,\n })\n subscriber['Attributes'] = ts.attributes\n ts.subscribers = [subscriber]\n resp = ts.send()\n assert_response(resp)\n\n @time_request\n def send_sms(self, phone_numbers, message_id):\n if isinstance(phone_numbers, str):\n phone_numbers = [phone_numbers]\n\n phone_numbers = [pn.lstrip('+') for pn in phone_numbers]\n data = {\n 'mobileNumbers': phone_numbers,\n 'Subscribe': True,\n 'Resubscribe': True,\n 'keyword': 'FFDROID', # TODO: Set keyword in arguments.\n }\n url = self.sms_api_url.format(message_id)\n response = requests.post(url, json=data, headers=self.auth_header, timeout=10)\n if response.status_code >= 500:\n raise NewsletterException('SFMC Server Error: {}'.format(response.content),\n status_code=response.status_code)\n\n if response.status_code >= 400:\n raise NewsletterException('SFMC Request Error: {}'.format(response.content),\n status_code=response.status_code)\n\n @time_request\n def bulk_upsert_rows(self, de_name, values):\n url = self.rowset_api_url.format(de_name)\n response = requests.post(url, json=values, headers=self.auth_header, timeout=30)\n if response.status_code >= 500:\n raise NewsletterException('SFMC Server Error: {}'.format(response.content),\n status_code=response.status_code)\n\n if response.status_code >= 400:\n raise NewsletterException(response.content, status_code=response.status_code)\n\n\nsfmc = SFMC()\n", "path": "basket/news/backends/sfmc.py"}]} | 3,916 | 563 |
gh_patches_debug_22899 | rasdani/github-patches | git_diff | mlflow__mlflow-8879 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
clean and transparent store registration
These lines
https://github.com/mlflow/mlflow/blob/152063e0b6fbadfbb2baecaf7d0ac7ca1b304b31/mlflow/tracking/_tracking_service/utils.py#L194C1-L206C1
initialize a global variable holding important information about store endpoints.
But the style is not good:
- should be moved to the top and marked clearly
- perhaps it would be beneficial to wrap them to be recycled per user request (re-register / re-initialize)
</issue>
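One way to address the re-initialization point raised in the issue is to move the module-level `register` calls into a function that runs at import time but can also be called again later. The sketch below reuses the builder helpers defined in the module listing that follows and is only an illustration of the shape such a refactor could take (the patch at the end of this entry takes the same approach with a `_register_tracking_stores()` helper).

```python
_tracking_store_registry = TrackingStoreRegistry()


def _register_tracking_stores():
    """(Re)register the built-in tracking store builders on the module-level registry."""
    _tracking_store_registry.register("", _get_file_store)
    _tracking_store_registry.register("file", _get_file_store)
    _tracking_store_registry.register("databricks", _get_databricks_rest_store)
    _tracking_store_registry.register(
        _DATABRICKS_UNITY_CATALOG_SCHEME, _get_databricks_uc_rest_store
    )
    for scheme in ["http", "https"]:
        _tracking_store_registry.register(scheme, _get_rest_store)
    for scheme in DATABASE_ENGINES:
        _tracking_store_registry.register(scheme, _get_sqlalchemy_store)
    _tracking_store_registry.register_entrypoints()


# Registration still happens once at import time, but callers can now
# re-run it explicitly if the registry ever needs to be rebuilt.
_register_tracking_stores()
```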
<code>
[start of mlflow/tracking/_tracking_service/utils.py]
1 import os
2 from functools import partial
3 import logging
4 from pathlib import Path
5 from typing import Union
6 from contextlib import contextmanager
7
8 from mlflow.environment_variables import (
9 MLFLOW_TRACKING_AWS_SIGV4,
10 MLFLOW_TRACKING_URI,
11 MLFLOW_TRACKING_TOKEN,
12 MLFLOW_TRACKING_INSECURE_TLS,
13 MLFLOW_TRACKING_CLIENT_CERT_PATH,
14 MLFLOW_TRACKING_SERVER_CERT_PATH,
15 )
16 from mlflow.store.tracking import DEFAULT_LOCAL_FILE_AND_ARTIFACT_PATH
17 from mlflow.store.db.db_types import DATABASE_ENGINES
18 from mlflow.store.tracking.file_store import FileStore
19 from mlflow.store.tracking.rest_store import RestStore
20 from mlflow.tracking._tracking_service.registry import TrackingStoreRegistry
21 from mlflow.utils import rest_utils
22 from mlflow.utils.file_utils import path_to_local_file_uri
23 from mlflow.utils.databricks_utils import get_databricks_host_creds
24 from mlflow.utils.uri import _DATABRICKS_UNITY_CATALOG_SCHEME
25 from mlflow.utils.credentials import read_mlflow_creds
26
27 _logger = logging.getLogger(__name__)
28 _tracking_uri = None
29
30
31 def is_tracking_uri_set():
32 """Returns True if the tracking URI has been set, False otherwise."""
33 if _tracking_uri or MLFLOW_TRACKING_URI.get():
34 return True
35 return False
36
37
38 def set_tracking_uri(uri: Union[str, Path]) -> None:
39 """
40 Set the tracking server URI. This does not affect the
41 currently active run (if one exists), but takes effect for successive runs.
42
43 :param uri:
44
45 - An empty string, or a local file path, prefixed with ``file:/``. Data is stored
46 locally at the provided file (or ``./mlruns`` if empty).
47 - An HTTP URI like ``https://my-tracking-server:5000``.
48 - A Databricks workspace, provided as the string "databricks" or, to use a
49 Databricks CLI
50 `profile <https://github.com/databricks/databricks-cli#installation>`_,
51 "databricks://<profileName>".
52 - A :py:class:`pathlib.Path` instance
53
54 .. test-code-block:: python
55 :caption: Example
56
57 import mlflow
58
59 mlflow.set_tracking_uri("file:///tmp/my_tracking")
60 tracking_uri = mlflow.get_tracking_uri()
61 print("Current tracking uri: {}".format(tracking_uri))
62
63 .. code-block:: text
64 :caption: Output
65
66 Current tracking uri: file:///tmp/my_tracking
67 """
68 if isinstance(uri, Path):
69 # On Windows with Python3.8 (https://bugs.python.org/issue38671)
70 # .resolve() doesn't return the absolute path if the directory doesn't exist
71 # so we're calling .absolute() first to get the absolute path on Windows,
72 # then .resolve() to clean the path
73 uri = uri.absolute().resolve().as_uri()
74 global _tracking_uri
75 _tracking_uri = uri
76
77
78 @contextmanager
79 def _use_tracking_uri(uri: str, local_store_root_path: str = None) -> None:
80 """
81 Similar to `mlflow.tracking.set_tracking_uri` function but return a context manager.
82 :param uri: tracking URI to use.
83 :param local_store_root_path: the local store root path for the tracking URI.
84 """
85 global _tracking_uri
86 cwd = os.getcwd()
87 old_tracking_uri = _tracking_uri
88 try:
89 if local_store_root_path is not None:
90 os.chdir(local_store_root_path)
91 _tracking_uri = uri
92 yield
93 finally:
94 _tracking_uri = old_tracking_uri
95 os.chdir(cwd)
96
97
98 def _resolve_tracking_uri(tracking_uri=None):
99 return tracking_uri or get_tracking_uri()
100
101
102 def get_tracking_uri() -> str:
103 """
104 Get the current tracking URI. This may not correspond to the tracking URI of
105 the currently active run, since the tracking URI can be updated via ``set_tracking_uri``.
106
107 :return: The tracking URI.
108
109 .. code-block:: python
110 :caption: Example
111
112 import mlflow
113
114 # Get the current tracking uri
115 tracking_uri = mlflow.get_tracking_uri()
116 print("Current tracking uri: {}".format(tracking_uri))
117
118 .. code-block:: text
119 :caption: Output
120
121 Current tracking uri: file:///.../mlruns
122 """
123 global _tracking_uri
124 if _tracking_uri is not None:
125 return _tracking_uri
126 elif uri := MLFLOW_TRACKING_URI.get():
127 return uri
128 else:
129 return path_to_local_file_uri(os.path.abspath(DEFAULT_LOCAL_FILE_AND_ARTIFACT_PATH))
130
131
132 def _get_file_store(store_uri, **_):
133 return FileStore(store_uri, store_uri)
134
135
136 def _get_sqlalchemy_store(store_uri, artifact_uri):
137 from mlflow.store.tracking.sqlalchemy_store import SqlAlchemyStore
138
139 if artifact_uri is None:
140 artifact_uri = DEFAULT_LOCAL_FILE_AND_ARTIFACT_PATH
141 return SqlAlchemyStore(store_uri, artifact_uri)
142
143
144 def _get_default_host_creds(store_uri):
145 creds = read_mlflow_creds()
146 return rest_utils.MlflowHostCreds(
147 host=store_uri,
148 username=creds.username,
149 password=creds.password,
150 token=MLFLOW_TRACKING_TOKEN.get(),
151 aws_sigv4=MLFLOW_TRACKING_AWS_SIGV4.get(),
152 ignore_tls_verification=MLFLOW_TRACKING_INSECURE_TLS.get(),
153 client_cert_path=MLFLOW_TRACKING_CLIENT_CERT_PATH.get(),
154 server_cert_path=MLFLOW_TRACKING_SERVER_CERT_PATH.get(),
155 )
156
157
158 def _get_rest_store(store_uri, **_):
159 return RestStore(partial(_get_default_host_creds, store_uri))
160
161
162 def _get_databricks_rest_store(store_uri, **_):
163 return RestStore(partial(get_databricks_host_creds, store_uri))
164
165
166 def _get_databricks_uc_rest_store(store_uri, **_):
167 from mlflow.exceptions import MlflowException
168 from mlflow.version import VERSION
169
170 global _tracking_store_registry
171 supported_schemes = [
172 scheme
173 for scheme in _tracking_store_registry._registry
174 if scheme != _DATABRICKS_UNITY_CATALOG_SCHEME
175 ]
176 raise MlflowException(
177 f"Detected Unity Catalog tracking URI '{store_uri}'. "
178 "Setting the tracking URI to a Unity Catalog backend is not supported in the current "
179 f"version of the MLflow client ({VERSION}). "
180 "Please specify a different tracking URI via mlflow.set_tracking_uri, with "
181 "one of the supported schemes: "
182 f"{supported_schemes}. If you're trying to access models in the Unity "
183 "Catalog, please upgrade to the latest version of the MLflow Python "
184 "client, then specify a Unity Catalog model registry URI via "
185 f"mlflow.set_registry_uri('{_DATABRICKS_UNITY_CATALOG_SCHEME}') or "
186 f"mlflow.set_registry_uri('{_DATABRICKS_UNITY_CATALOG_SCHEME}://profile_name'), where "
187 "'profile_name' is the name of the Databricks CLI profile to use for "
188 "authentication. Be sure to leave the tracking URI configured to use "
189 "one of the supported schemes listed above."
190 )
191
192
193 _tracking_store_registry = TrackingStoreRegistry()
194 _tracking_store_registry.register("", _get_file_store)
195 _tracking_store_registry.register("file", _get_file_store)
196 _tracking_store_registry.register("databricks", _get_databricks_rest_store)
197 _tracking_store_registry.register(_DATABRICKS_UNITY_CATALOG_SCHEME, _get_databricks_uc_rest_store)
198
199 for scheme in ["http", "https"]:
200 _tracking_store_registry.register(scheme, _get_rest_store)
201
202 for scheme in DATABASE_ENGINES:
203 _tracking_store_registry.register(scheme, _get_sqlalchemy_store)
204
205 _tracking_store_registry.register_entrypoints()
206
207
208 def _get_store(store_uri=None, artifact_uri=None):
209 return _tracking_store_registry.get_store(store_uri, artifact_uri)
210
211
212 # TODO(sueann): move to a projects utils module
213 def _get_git_url_if_present(uri):
214 """
215 Return the path git_uri#sub_directory if the URI passed is a local path that's part of
216 a Git repo, or returns the original URI otherwise.
217 :param uri: The expanded uri
218 :return: The git_uri#sub_directory if the uri is part of a Git repo,
219 otherwise return the original uri
220 """
221 if "#" in uri:
222 # Already a URI in git repo format
223 return uri
224 try:
225 from git import Repo, InvalidGitRepositoryError, GitCommandNotFound, NoSuchPathError
226 except ImportError as e:
227 _logger.warning(
228 "Failed to import Git (the git executable is probably not on your PATH),"
229 " so Git SHA is not available. Error: %s",
230 e,
231 )
232 return uri
233 try:
234 # Check whether this is part of a git repo
235 repo = Repo(uri, search_parent_directories=True)
236
237 # Repo url
238 repo_url = "file://%s" % repo.working_tree_dir
239
240 # Sub directory
241 rlpath = uri.replace(repo.working_tree_dir, "")
242 if rlpath == "":
243 git_path = repo_url
244 elif rlpath[0] == "/":
245 git_path = repo_url + "#" + rlpath[1:]
246 else:
247 git_path = repo_url + "#" + rlpath
248 return git_path
249 except (InvalidGitRepositoryError, GitCommandNotFound, ValueError, NoSuchPathError):
250 return uri
251
[end of mlflow/tracking/_tracking_service/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/tracking/_tracking_service/utils.py b/mlflow/tracking/_tracking_service/utils.py
--- a/mlflow/tracking/_tracking_service/utils.py
+++ b/mlflow/tracking/_tracking_service/utils.py
@@ -191,18 +191,27 @@
_tracking_store_registry = TrackingStoreRegistry()
-_tracking_store_registry.register("", _get_file_store)
-_tracking_store_registry.register("file", _get_file_store)
-_tracking_store_registry.register("databricks", _get_databricks_rest_store)
-_tracking_store_registry.register(_DATABRICKS_UNITY_CATALOG_SCHEME, _get_databricks_uc_rest_store)
-for scheme in ["http", "https"]:
- _tracking_store_registry.register(scheme, _get_rest_store)
-for scheme in DATABASE_ENGINES:
- _tracking_store_registry.register(scheme, _get_sqlalchemy_store)
+def _register_tracking_stores():
+ global _tracking_store_registry
+ _tracking_store_registry.register("", _get_file_store)
+ _tracking_store_registry.register("file", _get_file_store)
+ _tracking_store_registry.register("databricks", _get_databricks_rest_store)
+ _tracking_store_registry.register(
+ _DATABRICKS_UNITY_CATALOG_SCHEME, _get_databricks_uc_rest_store
+ )
+
+ for scheme in ["http", "https"]:
+ _tracking_store_registry.register(scheme, _get_rest_store)
+
+ for scheme in DATABASE_ENGINES:
+ _tracking_store_registry.register(scheme, _get_sqlalchemy_store)
+
+ _tracking_store_registry.register_entrypoints()
+
-_tracking_store_registry.register_entrypoints()
+_register_tracking_stores()
def _get_store(store_uri=None, artifact_uri=None):
| {"golden_diff": "diff --git a/mlflow/tracking/_tracking_service/utils.py b/mlflow/tracking/_tracking_service/utils.py\n--- a/mlflow/tracking/_tracking_service/utils.py\n+++ b/mlflow/tracking/_tracking_service/utils.py\n@@ -191,18 +191,27 @@\n \n \n _tracking_store_registry = TrackingStoreRegistry()\n-_tracking_store_registry.register(\"\", _get_file_store)\n-_tracking_store_registry.register(\"file\", _get_file_store)\n-_tracking_store_registry.register(\"databricks\", _get_databricks_rest_store)\n-_tracking_store_registry.register(_DATABRICKS_UNITY_CATALOG_SCHEME, _get_databricks_uc_rest_store)\n \n-for scheme in [\"http\", \"https\"]:\n- _tracking_store_registry.register(scheme, _get_rest_store)\n \n-for scheme in DATABASE_ENGINES:\n- _tracking_store_registry.register(scheme, _get_sqlalchemy_store)\n+def _register_tracking_stores():\n+ global _tracking_store_registry\n+ _tracking_store_registry.register(\"\", _get_file_store)\n+ _tracking_store_registry.register(\"file\", _get_file_store)\n+ _tracking_store_registry.register(\"databricks\", _get_databricks_rest_store)\n+ _tracking_store_registry.register(\n+ _DATABRICKS_UNITY_CATALOG_SCHEME, _get_databricks_uc_rest_store\n+ )\n+\n+ for scheme in [\"http\", \"https\"]:\n+ _tracking_store_registry.register(scheme, _get_rest_store)\n+\n+ for scheme in DATABASE_ENGINES:\n+ _tracking_store_registry.register(scheme, _get_sqlalchemy_store)\n+\n+ _tracking_store_registry.register_entrypoints()\n+\n \n-_tracking_store_registry.register_entrypoints()\n+_register_tracking_stores()\n \n \n def _get_store(store_uri=None, artifact_uri=None):\n", "issue": "clean and transparent store registration\nThese lines\r\n\r\nhttps://github.com/mlflow/mlflow/blob/152063e0b6fbadfbb2baecaf7d0ac7ca1b304b31/mlflow/tracking/_tracking_service/utils.py#L194C1-L206C1\r\n\r\ninitialize a global variable holding important information about store endpoints. \r\n\r\nBut the style is not good:\r\n- should be moved to the top and marked clearly\r\n- perhaps it would be beneficial to wrap them to be recycled per user request (re-register / re-initialize)\n", "before_files": [{"content": "import os\nfrom functools import partial\nimport logging\nfrom pathlib import Path\nfrom typing import Union\nfrom contextlib import contextmanager\n\nfrom mlflow.environment_variables import (\n MLFLOW_TRACKING_AWS_SIGV4,\n MLFLOW_TRACKING_URI,\n MLFLOW_TRACKING_TOKEN,\n MLFLOW_TRACKING_INSECURE_TLS,\n MLFLOW_TRACKING_CLIENT_CERT_PATH,\n MLFLOW_TRACKING_SERVER_CERT_PATH,\n)\nfrom mlflow.store.tracking import DEFAULT_LOCAL_FILE_AND_ARTIFACT_PATH\nfrom mlflow.store.db.db_types import DATABASE_ENGINES\nfrom mlflow.store.tracking.file_store import FileStore\nfrom mlflow.store.tracking.rest_store import RestStore\nfrom mlflow.tracking._tracking_service.registry import TrackingStoreRegistry\nfrom mlflow.utils import rest_utils\nfrom mlflow.utils.file_utils import path_to_local_file_uri\nfrom mlflow.utils.databricks_utils import get_databricks_host_creds\nfrom mlflow.utils.uri import _DATABRICKS_UNITY_CATALOG_SCHEME\nfrom mlflow.utils.credentials import read_mlflow_creds\n\n_logger = logging.getLogger(__name__)\n_tracking_uri = None\n\n\ndef is_tracking_uri_set():\n \"\"\"Returns True if the tracking URI has been set, False otherwise.\"\"\"\n if _tracking_uri or MLFLOW_TRACKING_URI.get():\n return True\n return False\n\n\ndef set_tracking_uri(uri: Union[str, Path]) -> None:\n \"\"\"\n Set the tracking server URI. 
This does not affect the\n currently active run (if one exists), but takes effect for successive runs.\n\n :param uri:\n\n - An empty string, or a local file path, prefixed with ``file:/``. Data is stored\n locally at the provided file (or ``./mlruns`` if empty).\n - An HTTP URI like ``https://my-tracking-server:5000``.\n - A Databricks workspace, provided as the string \"databricks\" or, to use a\n Databricks CLI\n `profile <https://github.com/databricks/databricks-cli#installation>`_,\n \"databricks://<profileName>\".\n - A :py:class:`pathlib.Path` instance\n\n .. test-code-block:: python\n :caption: Example\n\n import mlflow\n\n mlflow.set_tracking_uri(\"file:///tmp/my_tracking\")\n tracking_uri = mlflow.get_tracking_uri()\n print(\"Current tracking uri: {}\".format(tracking_uri))\n\n .. code-block:: text\n :caption: Output\n\n Current tracking uri: file:///tmp/my_tracking\n \"\"\"\n if isinstance(uri, Path):\n # On Windows with Python3.8 (https://bugs.python.org/issue38671)\n # .resolve() doesn't return the absolute path if the directory doesn't exist\n # so we're calling .absolute() first to get the absolute path on Windows,\n # then .resolve() to clean the path\n uri = uri.absolute().resolve().as_uri()\n global _tracking_uri\n _tracking_uri = uri\n\n\n@contextmanager\ndef _use_tracking_uri(uri: str, local_store_root_path: str = None) -> None:\n \"\"\"\n Similar to `mlflow.tracking.set_tracking_uri` function but return a context manager.\n :param uri: tracking URI to use.\n :param local_store_root_path: the local store root path for the tracking URI.\n \"\"\"\n global _tracking_uri\n cwd = os.getcwd()\n old_tracking_uri = _tracking_uri\n try:\n if local_store_root_path is not None:\n os.chdir(local_store_root_path)\n _tracking_uri = uri\n yield\n finally:\n _tracking_uri = old_tracking_uri\n os.chdir(cwd)\n\n\ndef _resolve_tracking_uri(tracking_uri=None):\n return tracking_uri or get_tracking_uri()\n\n\ndef get_tracking_uri() -> str:\n \"\"\"\n Get the current tracking URI. This may not correspond to the tracking URI of\n the currently active run, since the tracking URI can be updated via ``set_tracking_uri``.\n\n :return: The tracking URI.\n\n .. code-block:: python\n :caption: Example\n\n import mlflow\n\n # Get the current tracking uri\n tracking_uri = mlflow.get_tracking_uri()\n print(\"Current tracking uri: {}\".format(tracking_uri))\n\n .. 
code-block:: text\n :caption: Output\n\n Current tracking uri: file:///.../mlruns\n \"\"\"\n global _tracking_uri\n if _tracking_uri is not None:\n return _tracking_uri\n elif uri := MLFLOW_TRACKING_URI.get():\n return uri\n else:\n return path_to_local_file_uri(os.path.abspath(DEFAULT_LOCAL_FILE_AND_ARTIFACT_PATH))\n\n\ndef _get_file_store(store_uri, **_):\n return FileStore(store_uri, store_uri)\n\n\ndef _get_sqlalchemy_store(store_uri, artifact_uri):\n from mlflow.store.tracking.sqlalchemy_store import SqlAlchemyStore\n\n if artifact_uri is None:\n artifact_uri = DEFAULT_LOCAL_FILE_AND_ARTIFACT_PATH\n return SqlAlchemyStore(store_uri, artifact_uri)\n\n\ndef _get_default_host_creds(store_uri):\n creds = read_mlflow_creds()\n return rest_utils.MlflowHostCreds(\n host=store_uri,\n username=creds.username,\n password=creds.password,\n token=MLFLOW_TRACKING_TOKEN.get(),\n aws_sigv4=MLFLOW_TRACKING_AWS_SIGV4.get(),\n ignore_tls_verification=MLFLOW_TRACKING_INSECURE_TLS.get(),\n client_cert_path=MLFLOW_TRACKING_CLIENT_CERT_PATH.get(),\n server_cert_path=MLFLOW_TRACKING_SERVER_CERT_PATH.get(),\n )\n\n\ndef _get_rest_store(store_uri, **_):\n return RestStore(partial(_get_default_host_creds, store_uri))\n\n\ndef _get_databricks_rest_store(store_uri, **_):\n return RestStore(partial(get_databricks_host_creds, store_uri))\n\n\ndef _get_databricks_uc_rest_store(store_uri, **_):\n from mlflow.exceptions import MlflowException\n from mlflow.version import VERSION\n\n global _tracking_store_registry\n supported_schemes = [\n scheme\n for scheme in _tracking_store_registry._registry\n if scheme != _DATABRICKS_UNITY_CATALOG_SCHEME\n ]\n raise MlflowException(\n f\"Detected Unity Catalog tracking URI '{store_uri}'. \"\n \"Setting the tracking URI to a Unity Catalog backend is not supported in the current \"\n f\"version of the MLflow client ({VERSION}). \"\n \"Please specify a different tracking URI via mlflow.set_tracking_uri, with \"\n \"one of the supported schemes: \"\n f\"{supported_schemes}. If you're trying to access models in the Unity \"\n \"Catalog, please upgrade to the latest version of the MLflow Python \"\n \"client, then specify a Unity Catalog model registry URI via \"\n f\"mlflow.set_registry_uri('{_DATABRICKS_UNITY_CATALOG_SCHEME}') or \"\n f\"mlflow.set_registry_uri('{_DATABRICKS_UNITY_CATALOG_SCHEME}://profile_name'), where \"\n \"'profile_name' is the name of the Databricks CLI profile to use for \"\n \"authentication. 
Be sure to leave the tracking URI configured to use \"\n \"one of the supported schemes listed above.\"\n )\n\n\n_tracking_store_registry = TrackingStoreRegistry()\n_tracking_store_registry.register(\"\", _get_file_store)\n_tracking_store_registry.register(\"file\", _get_file_store)\n_tracking_store_registry.register(\"databricks\", _get_databricks_rest_store)\n_tracking_store_registry.register(_DATABRICKS_UNITY_CATALOG_SCHEME, _get_databricks_uc_rest_store)\n\nfor scheme in [\"http\", \"https\"]:\n _tracking_store_registry.register(scheme, _get_rest_store)\n\nfor scheme in DATABASE_ENGINES:\n _tracking_store_registry.register(scheme, _get_sqlalchemy_store)\n\n_tracking_store_registry.register_entrypoints()\n\n\ndef _get_store(store_uri=None, artifact_uri=None):\n return _tracking_store_registry.get_store(store_uri, artifact_uri)\n\n\n# TODO(sueann): move to a projects utils module\ndef _get_git_url_if_present(uri):\n \"\"\"\n Return the path git_uri#sub_directory if the URI passed is a local path that's part of\n a Git repo, or returns the original URI otherwise.\n :param uri: The expanded uri\n :return: The git_uri#sub_directory if the uri is part of a Git repo,\n otherwise return the original uri\n \"\"\"\n if \"#\" in uri:\n # Already a URI in git repo format\n return uri\n try:\n from git import Repo, InvalidGitRepositoryError, GitCommandNotFound, NoSuchPathError\n except ImportError as e:\n _logger.warning(\n \"Failed to import Git (the git executable is probably not on your PATH),\"\n \" so Git SHA is not available. Error: %s\",\n e,\n )\n return uri\n try:\n # Check whether this is part of a git repo\n repo = Repo(uri, search_parent_directories=True)\n\n # Repo url\n repo_url = \"file://%s\" % repo.working_tree_dir\n\n # Sub directory\n rlpath = uri.replace(repo.working_tree_dir, \"\")\n if rlpath == \"\":\n git_path = repo_url\n elif rlpath[0] == \"/\":\n git_path = repo_url + \"#\" + rlpath[1:]\n else:\n git_path = repo_url + \"#\" + rlpath\n return git_path\n except (InvalidGitRepositoryError, GitCommandNotFound, ValueError, NoSuchPathError):\n return uri\n", "path": "mlflow/tracking/_tracking_service/utils.py"}]} | 3,414 | 386 |
gh_patches_debug_6423 | rasdani/github-patches | git_diff | pytorch__examples-229 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unused import of math in time_sequence_prediction example
The generate_sine_wave.py module imports math on the first line, but doesn't use it. This import should be removed.
</issue>
<code>
[start of time_sequence_prediction/generate_sine_wave.py]
1 import math
2 import numpy as np
3 import torch
4 T = 20
5 L = 1000
6 N = 100
7 np.random.seed(2)
8 x = np.empty((N, L), 'int64')
9 x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)
10 data = np.sin(x / 1.0 / T).astype('float64')
11 torch.save(data, open('traindata.pt', 'wb'))
12
13
[end of time_sequence_prediction/generate_sine_wave.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/time_sequence_prediction/generate_sine_wave.py b/time_sequence_prediction/generate_sine_wave.py
--- a/time_sequence_prediction/generate_sine_wave.py
+++ b/time_sequence_prediction/generate_sine_wave.py
@@ -1,12 +1,13 @@
-import math
import numpy as np
import torch
+
+np.random.seed(2)
+
T = 20
L = 1000
N = 100
-np.random.seed(2)
+
x = np.empty((N, L), 'int64')
-x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)
+x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)
data = np.sin(x / 1.0 / T).astype('float64')
torch.save(data, open('traindata.pt', 'wb'))
-
| {"golden_diff": "diff --git a/time_sequence_prediction/generate_sine_wave.py b/time_sequence_prediction/generate_sine_wave.py\n--- a/time_sequence_prediction/generate_sine_wave.py\n+++ b/time_sequence_prediction/generate_sine_wave.py\n@@ -1,12 +1,13 @@\n-import math\n import numpy as np\n import torch\n+\n+np.random.seed(2)\n+\n T = 20\n L = 1000\n N = 100\n-np.random.seed(2)\n+\n x = np.empty((N, L), 'int64')\n-x[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)\n+x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)\n data = np.sin(x / 1.0 / T).astype('float64')\n torch.save(data, open('traindata.pt', 'wb'))\n-\n", "issue": "Unused import of math in time_sequence_prediction example\nThe generate_sine_wave.py module imports math on the first line, but doesn't use it. This import should be removed.\n", "before_files": [{"content": "import math\nimport numpy as np\nimport torch\nT = 20\nL = 1000\nN = 100\nnp.random.seed(2)\nx = np.empty((N, L), 'int64')\nx[:] = np.array(range(L)) + np.random.randint(-4*T, 4*T, N).reshape(N, 1)\ndata = np.sin(x / 1.0 / T).astype('float64')\ntorch.save(data, open('traindata.pt', 'wb'))\n\n", "path": "time_sequence_prediction/generate_sine_wave.py"}]} | 709 | 214 |
gh_patches_debug_27771 | rasdani/github-patches | git_diff | translate__translate-3898 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nested YAML dictionary is serialized as OrderedDict in PO file
### Problem
If there is a nested dictionary in a YAML file, the resulting PO file contains a string starting with `OrderedDict`.
### Expectation
The nested keys are shown correctly in the PO file
### How to reproduce
The following YAML input
```
e1:
- s1: Subtag 1
```
given to `yaml2po -i test.yml -o test.po` results in the PO file content of
```
#: t1-%3E[0]
msgid "OrderedDict([('s1', 'Subtag 1')])"
msgstr ""
```
I would expect something like
```
#: t1-%3E[0]-%3Es1
msgid "Subtag 1"
msgstr ""
```
Perhaps it may be related to #3819.
</issue>
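The expected behavior above amounts to recursing into list items during flattening rather than stringifying them. A minimal standalone sketch of that idea (hypothetical helper, not the toolkit's actual `_flatten` implementation):

```python
# Sketch only: recurse into list entries so a dict nested under a list gets
# its own "parent->[i]->key" unit id instead of a stringified OrderedDict.
def flatten(data, prev=""):
    if isinstance(data, dict):
        for key, value in data.items():
            yield from flatten(value, "->".join((prev, key)) if prev else key)
    elif isinstance(data, list):
        for index, value in enumerate(data):
            # Recursing here is the key difference from str(value), which is
            # what produces the "OrderedDict([...])" msgid shown above.
            yield from flatten(value, "->".join((prev, "[%d]" % index)))
    elif data is not None:
        yield (prev, str(data))


assert list(flatten({"e1": [{"s1": "Subtag 1"}]})) == [("e1->[0]->s1", "Subtag 1")]
```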
<code>
[start of translate/storage/yaml.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2016 Michal Čihař
4 #
5 # This file is part of the Translate Toolkit.
6 #
7 # This program is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 r"""Class that manages YAML data files for translation
21 """
22
23 from __future__ import absolute_import
24 from __future__ import unicode_literals
25
26 import uuid
27
28 import six
29 from ruamel.yaml import YAML, YAMLError
30 from ruamel.yaml.comments import CommentedMap
31
32 from translate.lang.data import cldr_plural_categories, plural_tags
33 from translate.misc.deprecation import deprecated
34 from translate.misc.multistring import multistring
35 from translate.storage import base
36
37
38 class YAMLUnit(base.TranslationUnit):
39 """A YAML entry"""
40
41 def __init__(self, source=None, **kwargs):
42 self._id = None
43 if source:
44 self.source = source
45 super(YAMLUnit, self).__init__(source)
46
47 @property
48 def source(self):
49 return self.target
50
51 @source.setter
52 def source(self, source):
53 self.target = source
54
55 # Deprecated on 2.3.1
56 @deprecated("Use `source` property instead")
57 def getsource(self):
58 return self.source
59
60 def setid(self, value):
61 self._id = value
62
63 def getid(self):
64 # Ensure we have ID (for serialization)
65 if self._id is None:
66 self._id = str(uuid.uuid4())
67 return self._id
68
69 def getlocations(self):
70 return [self.getid()]
71
72
73 class YAMLFile(base.TranslationStore):
74 """A YAML file"""
75
76 UnitClass = YAMLUnit
77
78 def __init__(self, inputfile=None, **kwargs):
79 """construct a YAML file, optionally reading in from inputfile."""
80 super(YAMLFile, self).__init__(**kwargs)
81 self.filename = ''
82 self._file = u''
83 if inputfile is not None:
84 self.parse(inputfile)
85
86 def get_root_node(self, node):
87 """Returns root node for serialize"""
88 return node
89
90 def serialize_value(self, value):
91 return value
92
93 def serialize(self, out):
94 def nested_set(target, path, value):
95 value = self.serialize_value(value)
96 if len(path) > 1:
97 if len(path) == 2 and path[1] and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():
98 if path[0] not in target:
99 target[path[0]] = []
100 target[path[0]].append(value)
101 else:
102 # Add empty dict in case there is value and we
103 # expect dict
104 if path[0] not in target or not isinstance(target[path[0]], dict):
105 target[path[0]] = CommentedMap()
106 nested_set(target[path[0]], path[1:], value)
107 else:
108 target[path[0]] = value
109
110 units = CommentedMap()
111 for unit in self.unit_iter():
112 nested_set(units, unit.getid().split('->'), unit.target)
113 yaml = YAML()
114 yaml.default_flow_style = False
115 yaml.dump(self.get_root_node(units), out)
116
117 def _parse_dict(self, data, prev):
118 for k, v in six.iteritems(data):
119 if not isinstance(k, six.string_types):
120 raise base.ParseError(
121 'Key not string: {0}/{1} ({2})'.format(prev, k, type(k))
122 )
123
124 for x in self._flatten(v, '->'.join((prev, k)) if prev else k):
125 yield x
126
127 def _flatten(self, data, prev=""):
128 """Flatten YAML dictionary.
129 """
130 if isinstance(data, dict):
131 for x in self._parse_dict(data, prev):
132 yield x
133 else:
134 if isinstance(data, six.string_types):
135 yield (prev, data)
136 elif isinstance(data, (bool, int)):
137 yield (prev, str(data))
138 elif isinstance(data, list):
139 for k, v in enumerate(data):
140 key = '[{0}]'.format(k)
141 yield ('->'.join((prev, key)), six.text_type(v))
142 elif data is None:
143 pass
144 else:
145 raise ValueError("We don't handle these values:\n"
146 "Type: %s\n"
147 "Data: %s\n"
148 "Previous: %s" % (type(data), data, prev))
149
150 def preprocess(self, data):
151 """Preprocess hook for child formats"""
152 return data
153
154 def parse(self, input):
155 """parse the given file or file source string"""
156 if hasattr(input, 'name'):
157 self.filename = input.name
158 elif not getattr(self, 'filename', ''):
159 self.filename = ''
160 if hasattr(input, "read"):
161 src = input.read()
162 input.close()
163 input = src
164 if isinstance(input, bytes):
165 input = input.decode('utf-8')
166 yaml = YAML()
167 try:
168 self._file = yaml.load(input)
169 except YAMLError as e:
170 message = e.problem if hasattr(e, 'problem') else e.message
171 if hasattr(e, 'problem_mark'):
172 message += ' {0}'.format(e.problem_mark)
173 raise base.ParseError(message)
174
175 self._file = self.preprocess(self._file)
176
177 for k, data in self._flatten(self._file):
178 unit = self.UnitClass(data)
179 unit.setid(k)
180 self.addunit(unit)
181
182
183 class RubyYAMLFile(YAMLFile):
184 """Ruby YAML file, it has language code as first node."""
185
186 def preprocess(self, data):
187 if isinstance(data, CommentedMap) and len(data) == 1:
188 lang = list(data.keys())[0]
189 self.settargetlanguage(lang)
190 return data[lang]
191 return data
192
193 def get_root_node(self, node):
194 """Returns root node for serialize"""
195 if self.targetlanguage is not None:
196 result = CommentedMap()
197 result[self.targetlanguage] = node
198 return result
199 return node
200
201 def _parse_dict(self, data, prev):
202 # Does this look like a plural?
203 if data and all((x in cldr_plural_categories for x in data.keys())):
204 # Ensure we have correct plurals ordering.
205 values = [data[item] for item in cldr_plural_categories if item in data]
206 yield (prev, multistring(values))
207 return
208
209 # Handle normal dict
210 for x in super(RubyYAMLFile, self)._parse_dict(data, prev):
211 yield x
212
213 def serialize_value(self, value):
214 if not isinstance(value, multistring):
215 return value
216
217 tags = plural_tags.get(self.targetlanguage, plural_tags['en'])
218
219 strings = [six.text_type(s) for s in value.strings]
220
221 # Sync plural_strings elements to plural_tags count.
222 if len(strings) < len(tags):
223 strings += [''] * (len(tags) - len(strings))
224 strings = strings[:len(tags)]
225
226 return CommentedMap(zip(tags, strings))
227
[end of translate/storage/yaml.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/translate/storage/yaml.py b/translate/storage/yaml.py
--- a/translate/storage/yaml.py
+++ b/translate/storage/yaml.py
@@ -94,10 +94,15 @@
def nested_set(target, path, value):
value = self.serialize_value(value)
if len(path) > 1:
- if len(path) == 2 and path[1] and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():
+ if len(path) >= 2 and path[1] and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():
if path[0] not in target:
target[path[0]] = []
- target[path[0]].append(value)
+ if len(path) > 2:
+ new_value = CommentedMap()
+ nested_set(new_value, path[2:], value)
+ target[path[0]].append(new_value)
+ else:
+ target[path[0]].append(value)
else:
# Add empty dict in case there is value and we
# expect dict
@@ -138,7 +143,8 @@
elif isinstance(data, list):
for k, v in enumerate(data):
key = '[{0}]'.format(k)
- yield ('->'.join((prev, key)), six.text_type(v))
+ for value in self._flatten(v, '->'.join((prev, key))):
+ yield value
elif data is None:
pass
else:
| {"golden_diff": "diff --git a/translate/storage/yaml.py b/translate/storage/yaml.py\n--- a/translate/storage/yaml.py\n+++ b/translate/storage/yaml.py\n@@ -94,10 +94,15 @@\n def nested_set(target, path, value):\n value = self.serialize_value(value)\n if len(path) > 1:\n- if len(path) == 2 and path[1] and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():\n+ if len(path) >= 2 and path[1] and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():\n if path[0] not in target:\n target[path[0]] = []\n- target[path[0]].append(value)\n+ if len(path) > 2:\n+ new_value = CommentedMap()\n+ nested_set(new_value, path[2:], value)\n+ target[path[0]].append(new_value)\n+ else:\n+ target[path[0]].append(value)\n else:\n # Add empty dict in case there is value and we\n # expect dict\n@@ -138,7 +143,8 @@\n elif isinstance(data, list):\n for k, v in enumerate(data):\n key = '[{0}]'.format(k)\n- yield ('->'.join((prev, key)), six.text_type(v))\n+ for value in self._flatten(v, '->'.join((prev, key))):\n+ yield value\n elif data is None:\n pass\n else:\n", "issue": "Nested YAML dictionary is serialized as OrderedDict in PO file\n### Problem\r\nIf there is a nested dictionary in a YAML file, the resulting po file contains a string starting with `OrderedDict`\r\n### Expectation\r\nThe nested keys are shown correctly in the PO file\r\n### How to reproduce\r\nThe following YAML input\r\n```\r\ne1:\r\n- s1: Subtag 1\r\n```\r\ngiven to `yaml2po -i test.yml -o test.po` results in the PO file content of\r\n```\r\n#: t1-%3E[0]\r\nmsgid \"OrderedDict([('s1', 'Subtag 1')])\"\r\nmsgstr \"\"\r\n```\r\nI would expect something like\r\n```\r\n#: t1-%3E[0]-%3Es1\r\nmsgid \"Subtag 1\"\r\nmsgstr \"\"\r\n```\r\nPerhaps it may be related to #3819.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2016 Michal \u010ciha\u0159\n#\n# This file is part of the Translate Toolkit.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nr\"\"\"Class that manages YAML data files for translation\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import unicode_literals\n\nimport uuid\n\nimport six\nfrom ruamel.yaml import YAML, YAMLError\nfrom ruamel.yaml.comments import CommentedMap\n\nfrom translate.lang.data import cldr_plural_categories, plural_tags\nfrom translate.misc.deprecation import deprecated\nfrom translate.misc.multistring import multistring\nfrom translate.storage import base\n\n\nclass YAMLUnit(base.TranslationUnit):\n \"\"\"A YAML entry\"\"\"\n\n def __init__(self, source=None, **kwargs):\n self._id = None\n if source:\n self.source = source\n super(YAMLUnit, self).__init__(source)\n\n @property\n def source(self):\n return self.target\n\n @source.setter\n def source(self, source):\n self.target = source\n\n # Deprecated on 2.3.1\n @deprecated(\"Use `source` property instead\")\n def getsource(self):\n return self.source\n\n def setid(self, value):\n self._id = value\n\n def getid(self):\n # Ensure we have ID (for serialization)\n if self._id is None:\n self._id = str(uuid.uuid4())\n return self._id\n\n def getlocations(self):\n return [self.getid()]\n\n\nclass YAMLFile(base.TranslationStore):\n \"\"\"A YAML file\"\"\"\n\n UnitClass = YAMLUnit\n\n def __init__(self, inputfile=None, **kwargs):\n \"\"\"construct a YAML file, optionally reading in from inputfile.\"\"\"\n super(YAMLFile, self).__init__(**kwargs)\n self.filename = ''\n self._file = u''\n if inputfile is not None:\n self.parse(inputfile)\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n return node\n\n def serialize_value(self, value):\n return value\n\n def serialize(self, out):\n def nested_set(target, path, value):\n value = self.serialize_value(value)\n if len(path) > 1:\n if len(path) == 2 and path[1] and path[1][0] == '[' and path[1][-1] == ']' and path[1][1:-1].isdigit():\n if path[0] not in target:\n target[path[0]] = []\n target[path[0]].append(value)\n else:\n # Add empty dict in case there is value and we\n # expect dict\n if path[0] not in target or not isinstance(target[path[0]], dict):\n target[path[0]] = CommentedMap()\n nested_set(target[path[0]], path[1:], value)\n else:\n target[path[0]] = value\n\n units = CommentedMap()\n for unit in self.unit_iter():\n nested_set(units, unit.getid().split('->'), unit.target)\n yaml = YAML()\n yaml.default_flow_style = False\n yaml.dump(self.get_root_node(units), out)\n\n def _parse_dict(self, data, prev):\n for k, v in six.iteritems(data):\n if not isinstance(k, six.string_types):\n raise base.ParseError(\n 'Key not string: {0}/{1} ({2})'.format(prev, k, type(k))\n )\n\n for x in self._flatten(v, '->'.join((prev, k)) if prev else k):\n yield x\n\n def _flatten(self, data, prev=\"\"):\n \"\"\"Flatten YAML dictionary.\n \"\"\"\n if isinstance(data, dict):\n for x in self._parse_dict(data, prev):\n yield x\n else:\n if isinstance(data, six.string_types):\n yield (prev, data)\n elif isinstance(data, (bool, int)):\n yield (prev, str(data))\n elif isinstance(data, list):\n for k, v in enumerate(data):\n key = '[{0}]'.format(k)\n yield ('->'.join((prev, key)), six.text_type(v))\n elif data is None:\n pass\n else:\n raise ValueError(\"We don't handle these values:\\n\"\n \"Type: %s\\n\"\n \"Data: %s\\n\"\n \"Previous: %s\" % (type(data), data, prev))\n\n 
def preprocess(self, data):\n \"\"\"Preprocess hook for child formats\"\"\"\n return data\n\n def parse(self, input):\n \"\"\"parse the given file or file source string\"\"\"\n if hasattr(input, 'name'):\n self.filename = input.name\n elif not getattr(self, 'filename', ''):\n self.filename = ''\n if hasattr(input, \"read\"):\n src = input.read()\n input.close()\n input = src\n if isinstance(input, bytes):\n input = input.decode('utf-8')\n yaml = YAML()\n try:\n self._file = yaml.load(input)\n except YAMLError as e:\n message = e.problem if hasattr(e, 'problem') else e.message\n if hasattr(e, 'problem_mark'):\n message += ' {0}'.format(e.problem_mark)\n raise base.ParseError(message)\n\n self._file = self.preprocess(self._file)\n\n for k, data in self._flatten(self._file):\n unit = self.UnitClass(data)\n unit.setid(k)\n self.addunit(unit)\n\n\nclass RubyYAMLFile(YAMLFile):\n \"\"\"Ruby YAML file, it has language code as first node.\"\"\"\n\n def preprocess(self, data):\n if isinstance(data, CommentedMap) and len(data) == 1:\n lang = list(data.keys())[0]\n self.settargetlanguage(lang)\n return data[lang]\n return data\n\n def get_root_node(self, node):\n \"\"\"Returns root node for serialize\"\"\"\n if self.targetlanguage is not None:\n result = CommentedMap()\n result[self.targetlanguage] = node\n return result\n return node\n\n def _parse_dict(self, data, prev):\n # Does this look like a plural?\n if data and all((x in cldr_plural_categories for x in data.keys())):\n # Ensure we have correct plurals ordering.\n values = [data[item] for item in cldr_plural_categories if item in data]\n yield (prev, multistring(values))\n return\n\n # Handle normal dict\n for x in super(RubyYAMLFile, self)._parse_dict(data, prev):\n yield x\n\n def serialize_value(self, value):\n if not isinstance(value, multistring):\n return value\n\n tags = plural_tags.get(self.targetlanguage, plural_tags['en'])\n\n strings = [six.text_type(s) for s in value.strings]\n\n # Sync plural_strings elements to plural_tags count.\n if len(strings) < len(tags):\n strings += [''] * (len(tags) - len(strings))\n strings = strings[:len(tags)]\n\n return CommentedMap(zip(tags, strings))\n", "path": "translate/storage/yaml.py"}]} | 2,967 | 365 |
gh_patches_debug_8809 | rasdani/github-patches | git_diff | conan-io__conan-center-index-16928 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] xorg-makedepend/any: Homepage url incorrect
### Description
In the `xorg-makedepend` recipe the homepage URL is incorrectly set to "https://gitlab.freedesktop.org/xorg/util/cf", which is a different repository in the same group; the correct repository URL is "https://gitlab.freedesktop.org/xorg/util/makedepend". This should be changed accordingly.
To be fixed in https://github.com/conan-io/conan-center-index/blob/master/recipes/xorg-makedepend/all/conanfile.py
### Package and Environment Details
* Package Name/Version: xorg-makedepend/any
* Operating System+version: n/a
* Compiler+version: n/a
* Docker image: n/a
* Conan version: n/a
* Python version: n/a
### Conan profile
n/a
### Steps to reproduce
n/a
### Logs
n/a
</issue>
<code>
[start of recipes/xorg-makedepend/all/conanfile.py]
1 from conan import ConanFile
2 from conan.errors import ConanInvalidConfiguration
3 from conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, load, rmdir, save
4 from conan.tools.gnu import Autotools, AutotoolsToolchain, PkgConfigDeps
5 from conan.tools.layout import basic_layout
6 import os
7 import re
8
9 required_conan_version = ">=1.53.0"
10
11
12 class XorgMakedepend(ConanFile):
13 name = "xorg-makedepend"
14 description = "Utility to parse C source files to make dependency lists for Makefiles"
15 topics = ("xorg", "dependency", "obsolete")
16 license = "MIT"
17 homepage = "https://gitlab.freedesktop.org/xorg/util/cf"
18 url = "https://github.com/conan-io/conan-center-index"
19 settings = "os", "arch", "compiler", "build_type"
20
21 @property
22 def _settings_build(self):
23 return getattr(self, "settings_build", self.settings)
24
25 def export_sources(self):
26 export_conandata_patches(self)
27
28 def requirements(self):
29 self.requires("xorg-macros/1.19.3")
30 self.requires("xorg-proto/2022.2")
31
32 def build_requirements(self):
33 self.build_requires("pkgconf/1.7.4")
34
35 def validate(self):
36 if self.settings.os == "Windows":
37 raise ConanInvalidConfiguration("Windows is not supported by xorg-makedepend")
38
39 def configure(self):
40 self.settings.rm_safe("compiler.cppstd")
41 self.settings.rm_safe("compiler.libcxx")
42
43 def package_id(self):
44 del self.info.settings.compiler
45
46 def layout(self):
47 basic_layout(self, src_folder="src")
48
49 def source(self):
50 get(self, **self.conan_data["sources"][self.version],
51 destination=self.source_folder, strip_root=True)
52
53 @property
54 def _user_info_build(self):
55 return getattr(self, "user_info_build", self.deps_user_info)
56
57 def generate(self):
58 tc = AutotoolsToolchain(self)
59 tc.generate()
60
61 deps = PkgConfigDeps(self)
62 deps.generate()
63
64 def build(self):
65 apply_conandata_patches(self)
66 autotools = Autotools(self)
67 autotools.configure()
68 autotools.make()
69
70 def package(self):
71 copy(self, "COPYING", src=self.source_folder, dst=os.path.join(self.package_folder, "licenses"))
72 def_h_text = load(self, os.path.join(self.source_folder, "def.h"))
73 license_text = next(re.finditer(r"/\*([^*]+)\*/", def_h_text)).group(1)
74 save(self, os.path.join(self.package_folder, "licenses", "LICENSE"), license_text)
75
76 autotools = Autotools(self)
77 autotools.install()
78 rmdir(self, os.path.join(self.package_folder, "share"))
79
80 def package_info(self):
81 self.cpp_info.libdirs = []
82 self.cpp_info.includedirs = []
83
84 bin_path = os.path.join(self.package_folder, "bin")
85 self.output.info("Appending PATH environment variable: {}".format(bin_path))
86 self.env_info.PATH.append(bin_path)
87
[end of recipes/xorg-makedepend/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/xorg-makedepend/all/conanfile.py b/recipes/xorg-makedepend/all/conanfile.py
--- a/recipes/xorg-makedepend/all/conanfile.py
+++ b/recipes/xorg-makedepend/all/conanfile.py
@@ -14,7 +14,7 @@
description = "Utility to parse C source files to make dependency lists for Makefiles"
topics = ("xorg", "dependency", "obsolete")
license = "MIT"
- homepage = "https://gitlab.freedesktop.org/xorg/util/cf"
+ homepage = "https://gitlab.freedesktop.org/xorg/util/makedepend"
url = "https://github.com/conan-io/conan-center-index"
settings = "os", "arch", "compiler", "build_type"
| {"golden_diff": "diff --git a/recipes/xorg-makedepend/all/conanfile.py b/recipes/xorg-makedepend/all/conanfile.py\n--- a/recipes/xorg-makedepend/all/conanfile.py\n+++ b/recipes/xorg-makedepend/all/conanfile.py\n@@ -14,7 +14,7 @@\n description = \"Utility to parse C source files to make dependency lists for Makefiles\"\n topics = (\"xorg\", \"dependency\", \"obsolete\")\n license = \"MIT\"\n- homepage = \"https://gitlab.freedesktop.org/xorg/util/cf\"\n+ homepage = \"https://gitlab.freedesktop.org/xorg/util/makedepend\"\n url = \"https://github.com/conan-io/conan-center-index\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n", "issue": "[package] xorg-makedepend/any: Homepage url incorrect\n### Description\n\nIn the `xorg-makedepend` recipe the homepage url is incorrectly set to \"https://gitlab.freedesktop.org/xorg/util/cf\" which is a different repository in the same group, the correct repository url is \"https://gitlab.freedesktop.org/xorg/util/makedepend\". This should be changed accordingly.\r\n\r\nTo be fixed in https://github.com/conan-io/conan-center-index/blob/master/recipes/xorg-makedepend/all/conanfile.py\n\n### Package and Environment Details\n\n* Package Name/Version: xorg-makedepend/any\r\n* Operating System+version: n/a\r\n* Compiler+version: n/a\r\n* Docker image: n/a\r\n* Conan version: n/a\r\n* Python version: n/a\r\n\n\n### Conan profile\n\nn/a\n\n### Steps to reproduce\n\nn/a\n\n### Logs\n\nn/a\n", "before_files": [{"content": "from conan import ConanFile\nfrom conan.errors import ConanInvalidConfiguration\nfrom conan.tools.files import apply_conandata_patches, copy, export_conandata_patches, get, load, rmdir, save\nfrom conan.tools.gnu import Autotools, AutotoolsToolchain, PkgConfigDeps\nfrom conan.tools.layout import basic_layout\nimport os\nimport re\n\nrequired_conan_version = \">=1.53.0\"\n\n\nclass XorgMakedepend(ConanFile):\n name = \"xorg-makedepend\"\n description = \"Utility to parse C source files to make dependency lists for Makefiles\"\n topics = (\"xorg\", \"dependency\", \"obsolete\")\n license = \"MIT\"\n homepage = \"https://gitlab.freedesktop.org/xorg/util/cf\"\n url = \"https://github.com/conan-io/conan-center-index\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n\n @property\n def _settings_build(self):\n return getattr(self, \"settings_build\", self.settings)\n\n def export_sources(self):\n export_conandata_patches(self)\n\n def requirements(self):\n self.requires(\"xorg-macros/1.19.3\")\n self.requires(\"xorg-proto/2022.2\")\n\n def build_requirements(self):\n self.build_requires(\"pkgconf/1.7.4\")\n\n def validate(self):\n if self.settings.os == \"Windows\":\n raise ConanInvalidConfiguration(\"Windows is not supported by xorg-makedepend\")\n\n def configure(self):\n self.settings.rm_safe(\"compiler.cppstd\")\n self.settings.rm_safe(\"compiler.libcxx\")\n\n def package_id(self):\n del self.info.settings.compiler\n\n def layout(self):\n basic_layout(self, src_folder=\"src\")\n\n def source(self):\n get(self, **self.conan_data[\"sources\"][self.version],\n destination=self.source_folder, strip_root=True)\n\n @property\n def _user_info_build(self):\n return getattr(self, \"user_info_build\", self.deps_user_info)\n\n def generate(self):\n tc = AutotoolsToolchain(self)\n tc.generate()\n\n deps = PkgConfigDeps(self)\n deps.generate()\n\n def build(self):\n apply_conandata_patches(self)\n autotools = Autotools(self)\n autotools.configure()\n autotools.make()\n\n def package(self):\n copy(self, \"COPYING\", 
src=self.source_folder, dst=os.path.join(self.package_folder, \"licenses\"))\n def_h_text = load(self, os.path.join(self.source_folder, \"def.h\"))\n license_text = next(re.finditer(r\"/\\*([^*]+)\\*/\", def_h_text)).group(1)\n save(self, os.path.join(self.package_folder, \"licenses\", \"LICENSE\"), license_text)\n\n autotools = Autotools(self)\n autotools.install()\n rmdir(self, os.path.join(self.package_folder, \"share\"))\n\n def package_info(self):\n self.cpp_info.libdirs = []\n self.cpp_info.includedirs = []\n\n bin_path = os.path.join(self.package_folder, \"bin\")\n self.output.info(\"Appending PATH environment variable: {}\".format(bin_path))\n self.env_info.PATH.append(bin_path)\n", "path": "recipes/xorg-makedepend/all/conanfile.py"}]} | 1,625 | 180 |
gh_patches_debug_17607 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-4246 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot call sqlite3.backup(db) on a TracedSQLite object
Thanks for taking the time to report an issue!
Before reporting an issue on dd-trace-py, please be sure to provide all
necessary information.
If you're hitting a bug, make sure that you're using the latest version of this
library.
### Which version of dd-trace-py are you using?
1.5.0
### Which version of pip are you using?
21.1.1
_ddtrace requires pip>=18 to install one of our pre-built wheels_
### Which version of the libraries are you using?
You can copy/paste the output of `pip freeze` here.
```
ddtrace==1.5.0
```
### How can we reproduce your problem?
```
from ddtrace import config, patch_all
import sqlite3
config.env = "test" # the environment the application is in
config.service = "app" # name of your application
config.version = "v1" # version of your application
patch_all()
src = sqlite3.connect("1.db")
dst = sqlite3.connect("2.db")
with dst:
src.backup(dst, pages=1)
dst.close()
src.close()
```
### What is the result that you get?
The following TypeError
```
TypeError: backup() argument 1 must be sqlite3.Connection, not TracedSQLite
```
### What is the result that you expected?
The function should succeed without error.
</issue>
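Until the integration proxies `backup()` itself, a caller-side workaround is to hand the *underlying* `sqlite3.Connection` to `backup()`, since the C-level type check rejects the tracing proxy. The sketch below assumes the proxy exposes the real connection as `__wrapped__` (the usual wrapt convention); treat that attribute name as an assumption rather than documented API:

```python
import sqlite3

from ddtrace import patch_all

patch_all()

src = sqlite3.connect("1.db")
dst = sqlite3.connect("2.db")

# Unwrap if the objects are traced proxies; fall back to the object itself.
real_src = getattr(src, "__wrapped__", src)
real_dst = getattr(dst, "__wrapped__", dst)
with real_dst:
    real_src.backup(real_dst, pages=1)

dst.close()
src.close()
```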
<code>
[start of ddtrace/contrib/sqlite3/patch.py]
1 import os
2 import sqlite3
3 import sqlite3.dbapi2
4
5 from ddtrace import config
6 from ddtrace.vendor import wrapt
7
8 from ...contrib.dbapi import FetchTracedCursor
9 from ...contrib.dbapi import TracedConnection
10 from ...contrib.dbapi import TracedCursor
11 from ...internal.utils.formats import asbool
12 from ...pin import Pin
13
14
15 # Original connect method
16 _connect = sqlite3.connect
17
18 config._add(
19 "sqlite",
20 dict(
21 _default_service="sqlite",
22 _dbapi_span_name_prefix="sqlite",
23 trace_fetch_methods=asbool(os.getenv("DD_SQLITE_TRACE_FETCH_METHODS", default=False)),
24 ),
25 )
26
27
28 def patch():
29 wrapped = wrapt.FunctionWrapper(_connect, traced_connect)
30
31 setattr(sqlite3, "connect", wrapped)
32 setattr(sqlite3.dbapi2, "connect", wrapped)
33
34
35 def unpatch():
36 sqlite3.connect = _connect
37 sqlite3.dbapi2.connect = _connect
38
39
40 def traced_connect(func, _, args, kwargs):
41 conn = func(*args, **kwargs)
42 return patch_conn(conn)
43
44
45 def patch_conn(conn):
46 wrapped = TracedSQLite(conn)
47 Pin().onto(wrapped)
48 return wrapped
49
50
51 class TracedSQLiteCursor(TracedCursor):
52 def executemany(self, *args, **kwargs):
53 # DEV: SQLite3 Cursor.execute always returns back the cursor instance
54 super(TracedSQLiteCursor, self).executemany(*args, **kwargs)
55 return self
56
57 def execute(self, *args, **kwargs):
58 # DEV: SQLite3 Cursor.execute always returns back the cursor instance
59 super(TracedSQLiteCursor, self).execute(*args, **kwargs)
60 return self
61
62
63 class TracedSQLiteFetchCursor(TracedSQLiteCursor, FetchTracedCursor):
64 pass
65
66
67 class TracedSQLite(TracedConnection):
68 def __init__(self, conn, pin=None, cursor_cls=None):
69 if not cursor_cls:
70 # Do not trace `fetch*` methods by default
71 cursor_cls = TracedSQLiteFetchCursor if config.sqlite.trace_fetch_methods else TracedSQLiteCursor
72
73 super(TracedSQLite, self).__init__(conn, pin=pin, cfg=config.sqlite, cursor_cls=cursor_cls)
74
75 def execute(self, *args, **kwargs):
76 # sqlite has a few extra sugar functions
77 return self.cursor().execute(*args, **kwargs)
78
[end of ddtrace/contrib/sqlite3/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/sqlite3/patch.py b/ddtrace/contrib/sqlite3/patch.py
--- a/ddtrace/contrib/sqlite3/patch.py
+++ b/ddtrace/contrib/sqlite3/patch.py
@@ -1,6 +1,7 @@
import os
import sqlite3
import sqlite3.dbapi2
+import sys
from ddtrace import config
from ddtrace.vendor import wrapt
@@ -75,3 +76,13 @@
def execute(self, *args, **kwargs):
# sqlite has a few extra sugar functions
return self.cursor().execute(*args, **kwargs)
+
+ # backup was added in Python 3.7
+ if sys.version_info >= (3, 7, 0):
+
+ def backup(self, target, *args, **kwargs):
+ # sqlite3 checks the type of `target`, it cannot be a wrapped connection
+ # https://github.com/python/cpython/blob/4652093e1b816b78e9a585d671a807ce66427417/Modules/_sqlite/connection.c#L1897-L1899
+ if isinstance(target, TracedConnection):
+ target = target.__wrapped__
+ return self.__wrapped__.backup(target, *args, **kwargs)
| {"golden_diff": "diff --git a/ddtrace/contrib/sqlite3/patch.py b/ddtrace/contrib/sqlite3/patch.py\n--- a/ddtrace/contrib/sqlite3/patch.py\n+++ b/ddtrace/contrib/sqlite3/patch.py\n@@ -1,6 +1,7 @@\n import os\n import sqlite3\n import sqlite3.dbapi2\n+import sys\n \n from ddtrace import config\n from ddtrace.vendor import wrapt\n@@ -75,3 +76,13 @@\n def execute(self, *args, **kwargs):\n # sqlite has a few extra sugar functions\n return self.cursor().execute(*args, **kwargs)\n+\n+ # backup was added in Python 3.7\n+ if sys.version_info >= (3, 7, 0):\n+\n+ def backup(self, target, *args, **kwargs):\n+ # sqlite3 checks the type of `target`, it cannot be a wrapped connection\n+ # https://github.com/python/cpython/blob/4652093e1b816b78e9a585d671a807ce66427417/Modules/_sqlite/connection.c#L1897-L1899\n+ if isinstance(target, TracedConnection):\n+ target = target.__wrapped__\n+ return self.__wrapped__.backup(target, *args, **kwargs)\n", "issue": "Cannot call sqlite3.backup(db) on a TracedSQLite object\nThanks for taking the time for reporting an issue!\r\n\r\nBefore reporting an issue on dd-trace-py, please be sure to provide all\r\nnecessary information.\r\n\r\nIf you're hitting a bug, make sure that you're using the latest version of this\r\nlibrary.\r\n\r\n### Which version of dd-trace-py are you using?\r\n1.5.0\r\n### Which version of pip are you using?\r\n21.1.1\r\n_ddtrace requires pip>=18 to install one of our pre-built wheels_\r\n\r\n### Which version of the libraries are you using?\r\n\r\nYou can copy/paste the output of `pip freeze` here.\r\n\r\n```\r\nddtrace==1.5.0\r\n```\r\n\r\n### How can we reproduce your problem?\r\n\r\n```\r\nfrom ddtrace import config, patch_all\r\nimport sqlite3\r\n\r\nconfig.env = \"test\" # the environment the application is in\r\nconfig.service = \"app\" # name of your application\r\nconfig.version = \"v1\" # version of your application\r\npatch_all()\r\n\r\nsrc = sqlite3.connect(\"1.db\")\r\ndst = sqlite3.connect(\"2.db\")\r\nwith dst:\r\n src.backup(dst, pages=1)\r\ndst.close()\r\nsrc.close()\r\n```\r\n\r\n### What is the result that you get?\r\n\r\nThe following TypeError\r\n```\r\nTypeError: backup() argument 1 must be sqlite3.Connection, not TracedSQLite\r\n```\r\n\r\n### What is the result that you expected?\r\n\r\nThe function should succeed without error.\r\n\n", "before_files": [{"content": "import os\nimport sqlite3\nimport sqlite3.dbapi2\n\nfrom ddtrace import config\nfrom ddtrace.vendor import wrapt\n\nfrom ...contrib.dbapi import FetchTracedCursor\nfrom ...contrib.dbapi import TracedConnection\nfrom ...contrib.dbapi import TracedCursor\nfrom ...internal.utils.formats import asbool\nfrom ...pin import Pin\n\n\n# Original connect method\n_connect = sqlite3.connect\n\nconfig._add(\n \"sqlite\",\n dict(\n _default_service=\"sqlite\",\n _dbapi_span_name_prefix=\"sqlite\",\n trace_fetch_methods=asbool(os.getenv(\"DD_SQLITE_TRACE_FETCH_METHODS\", default=False)),\n ),\n)\n\n\ndef patch():\n wrapped = wrapt.FunctionWrapper(_connect, traced_connect)\n\n setattr(sqlite3, \"connect\", wrapped)\n setattr(sqlite3.dbapi2, \"connect\", wrapped)\n\n\ndef unpatch():\n sqlite3.connect = _connect\n sqlite3.dbapi2.connect = _connect\n\n\ndef traced_connect(func, _, args, kwargs):\n conn = func(*args, **kwargs)\n return patch_conn(conn)\n\n\ndef patch_conn(conn):\n wrapped = TracedSQLite(conn)\n Pin().onto(wrapped)\n return wrapped\n\n\nclass TracedSQLiteCursor(TracedCursor):\n def executemany(self, *args, **kwargs):\n # DEV: SQLite3 Cursor.execute always returns 
back the cursor instance\n super(TracedSQLiteCursor, self).executemany(*args, **kwargs)\n return self\n\n def execute(self, *args, **kwargs):\n # DEV: SQLite3 Cursor.execute always returns back the cursor instance\n super(TracedSQLiteCursor, self).execute(*args, **kwargs)\n return self\n\n\nclass TracedSQLiteFetchCursor(TracedSQLiteCursor, FetchTracedCursor):\n pass\n\n\nclass TracedSQLite(TracedConnection):\n def __init__(self, conn, pin=None, cursor_cls=None):\n if not cursor_cls:\n # Do not trace `fetch*` methods by default\n cursor_cls = TracedSQLiteFetchCursor if config.sqlite.trace_fetch_methods else TracedSQLiteCursor\n\n super(TracedSQLite, self).__init__(conn, pin=pin, cfg=config.sqlite, cursor_cls=cursor_cls)\n\n def execute(self, *args, **kwargs):\n # sqlite has a few extra sugar functions\n return self.cursor().execute(*args, **kwargs)\n", "path": "ddtrace/contrib/sqlite3/patch.py"}]} | 1,540 | 307 |
gh_patches_debug_21357 | rasdani/github-patches | git_diff | nextcloud__appstore-282 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API: registering an app id and uploading an app release needs to check revoked certificates
In order to prevent old or lost certificates from being abused, we need to check whether the certificate has been revoked. This has to be done before validating the certificate on app release upload and before registering a new app id.
</issue>
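One way to honor a revocation list with pyOpenSSL is to add the parsed CRL to the `X509Store` and switch on CRL checking via store flags, so a revoked certificate fails `verify_certificate()`. A standalone sketch (not the app store's validator; the function name and arguments are illustrative):

```python
from OpenSSL.crypto import (FILETYPE_PEM, X509Store, X509StoreContext,
                            X509StoreFlags, load_certificate, load_crl)


def verify_with_crl(cert_pem: str, ca_pem: str, crl_pem: str) -> None:
    store = X509Store()
    # For brevity this loads a single CA; a real chain would add each CA cert.
    store.add_cert(load_certificate(FILETYPE_PEM, ca_pem.encode()))
    store.add_crl(load_crl(FILETYPE_PEM, crl_pem.encode()))
    # Without this flag the CRL added above is ignored during verification.
    store.set_flags(X509StoreFlags.CRL_CHECK)
    cert = load_certificate(FILETYPE_PEM, cert_pem.encode())
    # Raises X509StoreContextError for an untrusted chain *or* a revoked cert.
    X509StoreContext(store, cert).verify_certificate()
```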
<code>
[start of nextcloudappstore/core/certificate/validator.py]
1 import logging
2 from base64 import b64decode
3
4 import pem
5 from OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \
6 X509Store, X509StoreContext, load_crl
7 from django.conf import settings # type: ignore
8 from rest_framework.exceptions import APIException
9
10 logger = logging.getLogger(__name__)
11
12
13 class CertificateConfiguration:
14 def __init__(self) -> None:
15 self.digest = settings.CERTIFICATE_DIGEST
16
17
18 class InvalidSignatureException(APIException):
19 pass
20
21
22 class InvalidCertificateException(APIException):
23 pass
24
25
26 class CertificateAppIdMismatchException(APIException):
27 pass
28
29
30 class CertificateValidator:
31 """
32 See https://pyopenssl.readthedocs.io/en/stable/api/crypto.html#signing
33 -and-verifying-signatures
34 """
35
36 def __init__(self, config: CertificateConfiguration) -> None:
37 self.config = config
38
39 def validate_signature(self, certificate: str, signature: str,
40 data: bytes) -> None:
41 """
42 Tests if a value is a valid certificate using SHA512
43 :param certificate: the certificate to use as string
44 :param signature: the signature base64 encoded string to test
45 :param data: the binary file content that was signed
46 :raises: InvalidSignatureException if the signature is invalid
47 :return: None
48 """
49 cert = self._to_cert(certificate)
50 err_msg = 'Signature is invalid'
51 try:
52 result = verify(cert, b64decode(signature.encode()), data,
53 self.config.digest)
54 if result is not None:
55 raise InvalidSignatureException(err_msg)
56 except Exception as e:
57 raise InvalidSignatureException('%s: %s' % (err_msg, str(e)))
58
59 def validate_certificate(self, certificate: str, chain: str,
60 crl: str = None) -> None:
61 """
62 Tests if a certificate has been signed by the chain, is not revoked
63 and has not yet been expired.
64 :param certificate: the certificate to test as string
65 :param chain: the certificate chain file content as string
66 :param crl: the certificate revocation list file content as string
67 :raises: InvalidCertificateException if the certificate is invalid
68 :return: None
69 """
70 # root and intermediary certificate need to be split
71 cas = pem.parse(chain.encode())
72 store = X509Store()
73 for ca in cas:
74 store.add_cert(self._to_cert(str(ca)))
75
76 cert = self._to_cert(certificate)
77 ctx = X509StoreContext(store, cert)
78 err_msg = 'Certificate is invalid'
79
80 if crl:
81 crl = load_crl(FILETYPE_PEM, crl)
82 store.add_crl(crl)
83
84 try:
85 result = ctx.verify_certificate()
86 if result is not None:
87 raise InvalidCertificateException(err_msg)
88 except Exception as e:
89 raise InvalidCertificateException('%s: %s' % (err_msg, str(e)))
90
91 def get_cn(self, certificate: str) -> str:
92 """
93 Extracts the CN from a certificate and removes the leading
94 slash, e.g. /news should return news
95 :param certificate: certificate
96 :return: the certificate's subject without the leading slash
97 """
98 cert = self._to_cert(certificate)
99 return cert.get_subject().CN
100
101 def validate_app_id(self, certificate: str, app_id: str) -> None:
102 """
103 Validates if the CN matches the app id
104 :param certificate: app certificate
105 :param app_id: the app id
106 :raises CertificateAppIdMismatchException: if the app id and cert CN do
107 not match
108 :return: None
109 """
110 cn = self.get_cn(certificate)
111 if cn != app_id:
112 msg = 'App id %s does not match cert CN %s' % (app_id, cn)
113 raise CertificateAppIdMismatchException(msg)
114
115 def _to_cert(self, certificate: str) -> X509:
116 return load_certificate(FILETYPE_PEM, certificate.encode())
117
[end of nextcloudappstore/core/certificate/validator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nextcloudappstore/core/certificate/validator.py b/nextcloudappstore/core/certificate/validator.py
--- a/nextcloudappstore/core/certificate/validator.py
+++ b/nextcloudappstore/core/certificate/validator.py
@@ -3,7 +3,7 @@
import pem
from OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \
- X509Store, X509StoreContext, load_crl
+ X509Store, X509StoreContext, load_crl, X509StoreFlags
from django.conf import settings # type: ignore
from rest_framework.exceptions import APIException
@@ -74,12 +74,14 @@
store.add_cert(self._to_cert(str(ca)))
cert = self._to_cert(certificate)
- ctx = X509StoreContext(store, cert)
- err_msg = 'Certificate is invalid'
if crl:
- crl = load_crl(FILETYPE_PEM, crl)
- store.add_crl(crl)
+ parsed_crl = load_crl(FILETYPE_PEM, crl)
+ store.set_flags(X509StoreFlags.CRL_CHECK)
+ store.add_crl(parsed_crl)
+
+ ctx = X509StoreContext(store, cert)
+ err_msg = 'Certificate is invalid'
try:
result = ctx.verify_certificate()
| {"golden_diff": "diff --git a/nextcloudappstore/core/certificate/validator.py b/nextcloudappstore/core/certificate/validator.py\n--- a/nextcloudappstore/core/certificate/validator.py\n+++ b/nextcloudappstore/core/certificate/validator.py\n@@ -3,7 +3,7 @@\n \n import pem\n from OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \\\n- X509Store, X509StoreContext, load_crl\n+ X509Store, X509StoreContext, load_crl, X509StoreFlags\n from django.conf import settings # type: ignore\n from rest_framework.exceptions import APIException\n \n@@ -74,12 +74,14 @@\n store.add_cert(self._to_cert(str(ca)))\n \n cert = self._to_cert(certificate)\n- ctx = X509StoreContext(store, cert)\n- err_msg = 'Certificate is invalid'\n \n if crl:\n- crl = load_crl(FILETYPE_PEM, crl)\n- store.add_crl(crl)\n+ parsed_crl = load_crl(FILETYPE_PEM, crl)\n+ store.set_flags(X509StoreFlags.CRL_CHECK)\n+ store.add_crl(parsed_crl)\n+\n+ ctx = X509StoreContext(store, cert)\n+ err_msg = 'Certificate is invalid'\n \n try:\n result = ctx.verify_certificate()\n", "issue": "API: registering an app id and uploading an app release needs to check revoked certificates\nIn order to prevent old or lost certificates from being abused we need to check if the certificate has been revoked. This has to be done before validating the certificate on app release upload and before registering a new app id.\n\n", "before_files": [{"content": "import logging\nfrom base64 import b64decode\n\nimport pem\nfrom OpenSSL.crypto import FILETYPE_PEM, load_certificate, verify, X509, \\\n X509Store, X509StoreContext, load_crl\nfrom django.conf import settings # type: ignore\nfrom rest_framework.exceptions import APIException\n\nlogger = logging.getLogger(__name__)\n\n\nclass CertificateConfiguration:\n def __init__(self) -> None:\n self.digest = settings.CERTIFICATE_DIGEST\n\n\nclass InvalidSignatureException(APIException):\n pass\n\n\nclass InvalidCertificateException(APIException):\n pass\n\n\nclass CertificateAppIdMismatchException(APIException):\n pass\n\n\nclass CertificateValidator:\n \"\"\"\n See https://pyopenssl.readthedocs.io/en/stable/api/crypto.html#signing\n -and-verifying-signatures\n \"\"\"\n\n def __init__(self, config: CertificateConfiguration) -> None:\n self.config = config\n\n def validate_signature(self, certificate: str, signature: str,\n data: bytes) -> None:\n \"\"\"\n Tests if a value is a valid certificate using SHA512\n :param certificate: the certificate to use as string\n :param signature: the signature base64 encoded string to test\n :param data: the binary file content that was signed\n :raises: InvalidSignatureException if the signature is invalid\n :return: None\n \"\"\"\n cert = self._to_cert(certificate)\n err_msg = 'Signature is invalid'\n try:\n result = verify(cert, b64decode(signature.encode()), data,\n self.config.digest)\n if result is not None:\n raise InvalidSignatureException(err_msg)\n except Exception as e:\n raise InvalidSignatureException('%s: %s' % (err_msg, str(e)))\n\n def validate_certificate(self, certificate: str, chain: str,\n crl: str = None) -> None:\n \"\"\"\n Tests if a certificate has been signed by the chain, is not revoked\n and has not yet been expired.\n :param certificate: the certificate to test as string\n :param chain: the certificate chain file content as string\n :param crl: the certificate revocation list file content as string\n :raises: InvalidCertificateException if the certificate is invalid\n :return: None\n \"\"\"\n # root and intermediary certificate need to be 
split\n cas = pem.parse(chain.encode())\n store = X509Store()\n for ca in cas:\n store.add_cert(self._to_cert(str(ca)))\n\n cert = self._to_cert(certificate)\n ctx = X509StoreContext(store, cert)\n err_msg = 'Certificate is invalid'\n\n if crl:\n crl = load_crl(FILETYPE_PEM, crl)\n store.add_crl(crl)\n\n try:\n result = ctx.verify_certificate()\n if result is not None:\n raise InvalidCertificateException(err_msg)\n except Exception as e:\n raise InvalidCertificateException('%s: %s' % (err_msg, str(e)))\n\n def get_cn(self, certificate: str) -> str:\n \"\"\"\n Extracts the CN from a certificate and removes the leading\n slash, e.g. /news should return news\n :param certificate: certificate\n :return: the certificate's subject without the leading slash\n \"\"\"\n cert = self._to_cert(certificate)\n return cert.get_subject().CN\n\n def validate_app_id(self, certificate: str, app_id: str) -> None:\n \"\"\"\n Validates if the CN matches the app id\n :param certificate: app certificate\n :param app_id: the app id\n :raises CertificateAppIdMismatchException: if the app id and cert CN do\n not match\n :return: None\n \"\"\"\n cn = self.get_cn(certificate)\n if cn != app_id:\n msg = 'App id %s does not match cert CN %s' % (app_id, cn)\n raise CertificateAppIdMismatchException(msg)\n\n def _to_cert(self, certificate: str) -> X509:\n return load_certificate(FILETYPE_PEM, certificate.encode())\n", "path": "nextcloudappstore/core/certificate/validator.py"}]} | 1,748 | 323 |
gh_patches_debug_5815 | rasdani/github-patches | git_diff | pulp__pulpcore-4722 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
file:// sync deletes files from directory
**Version**
Pulpcore 3.39
**Describe the bug**
When syncing file:// repositories, files are disappearing after the sync.
**To Reproduce**
1) Copy these two repositories to the FS:
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1
- https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2
2) Sync one, then the other
3) See that some files disappeared.
- In my case, file2 lost every file except PULP_MANIFEST
**Expected behavior**
No files disappear.
**Additional context**
This also occurred with RPM content type files.
</issue>
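A plausible reading of the symptom (an assumption about pulpcore internals, not something stated in the report) is that the file:// downloader reports the *source* file's path in its `DownloadResult`, and downstream handling then moves or cleans up that path as if it were a temporary download, deleting the original. The general pattern that avoids this is to copy into a scratch file the downloader owns and return only the scratch path; hypothetical sketch:

```python
import shutil
import tempfile


def pseudo_download(source_path: str) -> str:
    """Copy source_path into a scratch file and return the scratch path."""
    scratch = tempfile.NamedTemporaryFile(delete=False)
    with open(source_path, "rb") as src, scratch:
        shutil.copyfileobj(src, scratch)
    # Returning source_path here instead would let the caller relocate or
    # unlink the original file -- the disappearing-files symptom above.
    return scratch.name
```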
<code>
[start of pulpcore/download/file.py]
1 import os
2
3 from urllib.parse import urlparse
4
5 import aiofiles
6
7 from .base import BaseDownloader, DownloadResult
8
9
10 class FileDownloader(BaseDownloader):
11 """
12 A downloader for downloading files from the filesystem.
13
14 It provides digest and size validation along with computation of the digests needed to save the
15 file as an Artifact. It writes a new file to the disk and the return path is included in the
16 :class:`~pulpcore.plugin.download.DownloadResult`.
17
18 This downloader has all of the attributes of
19 :class:`~pulpcore.plugin.download.BaseDownloader`
20 """
21
22 def __init__(self, url, *args, **kwargs):
23 """
24 Download files from a url that starts with `file://`
25
26 Args:
27 url (str): The url to the file. This is expected to begin with `file://`
28 kwargs (dict): This accepts the parameters of
29 :class:`~pulpcore.plugin.download.BaseDownloader`.
30
31 Raises:
32 ValidationError: When the url starts with `file://`, but is not a subfolder of a path in
33 the ALLOWED_IMPORT_PATH setting.
34 """
35 from pulpcore.app.serializers import RemoteSerializer
36
37 RemoteSerializer().validate_url(url)
38 p = urlparse(url)
39 self._path = os.path.abspath(os.path.join(p.netloc, p.path))
40 super().__init__(url, *args, **kwargs)
41
42 async def _run(self, extra_data=None):
43 """
44 Read, validate, and compute digests on the `url`. This is a coroutine.
45
46 This method provides the same return object type and documented in
47 :meth:`~pulpcore.plugin.download.BaseDownloader._run`.
48
49 Args:
50 extra_data (dict): Extra data passed to the downloader.
51 """
52 async with aiofiles.open(self._path, "rb") as f_handle:
53 while True:
54 chunk = await f_handle.read(1048576) # 1 megabyte
55 if not chunk:
56 await self.finalize()
57 break # the reading is done
58 await self.handle_data(chunk)
59 return DownloadResult(
60 path=self._path,
61 artifact_attributes=self.artifact_attributes,
62 url=self.url,
63 headers=None,
64 )
65
[end of pulpcore/download/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py
--- a/pulpcore/download/file.py
+++ b/pulpcore/download/file.py
@@ -57,7 +57,7 @@
break # the reading is done
await self.handle_data(chunk)
return DownloadResult(
- path=self._path,
+ path=self.path,
artifact_attributes=self.artifact_attributes,
url=self.url,
headers=None,
| {"golden_diff": "diff --git a/pulpcore/download/file.py b/pulpcore/download/file.py\n--- a/pulpcore/download/file.py\n+++ b/pulpcore/download/file.py\n@@ -57,7 +57,7 @@\n break # the reading is done\n await self.handle_data(chunk)\n return DownloadResult(\n- path=self._path,\n+ path=self.path,\n artifact_attributes=self.artifact_attributes,\n url=self.url,\n headers=None,\n", "issue": "file:// sync deletes files from directory\n**Version**\r\nPulpcore 3.39\r\n\r\n**Describe the bug**\r\nWhen syncing file:// repositories, files are disappearing after the sync.\r\n\r\n**To Reproduce**\r\n1) Copy these two repositories to the FS:\r\n - https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file1\r\n - https://github.com/Katello/katello/tree/master/test/fixtures/test_repos/file2\r\n2) Sync one, then the other\r\n3) See that some files disappeared.\r\n - In my case, file2 lost every file except PULP_MANIFEST\r\n\r\n\r\n**Expected behavior**\r\nNo files disappear.\r\n\r\n**Additional context**\r\nThis also occurred with RPM content type files.\r\n\n", "before_files": [{"content": "import os\n\nfrom urllib.parse import urlparse\n\nimport aiofiles\n\nfrom .base import BaseDownloader, DownloadResult\n\n\nclass FileDownloader(BaseDownloader):\n \"\"\"\n A downloader for downloading files from the filesystem.\n\n It provides digest and size validation along with computation of the digests needed to save the\n file as an Artifact. It writes a new file to the disk and the return path is included in the\n :class:`~pulpcore.plugin.download.DownloadResult`.\n\n This downloader has all of the attributes of\n :class:`~pulpcore.plugin.download.BaseDownloader`\n \"\"\"\n\n def __init__(self, url, *args, **kwargs):\n \"\"\"\n Download files from a url that starts with `file://`\n\n Args:\n url (str): The url to the file. This is expected to begin with `file://`\n kwargs (dict): This accepts the parameters of\n :class:`~pulpcore.plugin.download.BaseDownloader`.\n\n Raises:\n ValidationError: When the url starts with `file://`, but is not a subfolder of a path in\n the ALLOWED_IMPORT_PATH setting.\n \"\"\"\n from pulpcore.app.serializers import RemoteSerializer\n\n RemoteSerializer().validate_url(url)\n p = urlparse(url)\n self._path = os.path.abspath(os.path.join(p.netloc, p.path))\n super().__init__(url, *args, **kwargs)\n\n async def _run(self, extra_data=None):\n \"\"\"\n Read, validate, and compute digests on the `url`. This is a coroutine.\n\n This method provides the same return object type and documented in\n :meth:`~pulpcore.plugin.download.BaseDownloader._run`.\n\n Args:\n extra_data (dict): Extra data passed to the downloader.\n \"\"\"\n async with aiofiles.open(self._path, \"rb\") as f_handle:\n while True:\n chunk = await f_handle.read(1048576) # 1 megabyte\n if not chunk:\n await self.finalize()\n break # the reading is done\n await self.handle_data(chunk)\n return DownloadResult(\n path=self._path,\n artifact_attributes=self.artifact_attributes,\n url=self.url,\n headers=None,\n )\n", "path": "pulpcore/download/file.py"}]} | 1,291 | 99 |
gh_patches_debug_17299 | rasdani/github-patches | git_diff | cal-itp__benefits-1056 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enrollment Success page: Final touches
Make the page look like the Figma mocks.
1. Desktop: Redesign the last line to match this. It's a 10-column-wide large div that is left-aligned.
<img width="837" alt="image" src="https://user-images.githubusercontent.com/3673236/195391617-3f48afbb-9b1a-4500-b727-087a43e5865a.png">
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/3673236/195273016-ca031b6e-673f-4bf1-943e-2fdd8f085255.png">
2. Mobile: Redesign this last line. The mobile alignment is trickier: we might have to write CSS that adds a margin-left and calculate exactly what that amount should be, based on the diameter of the icon plus the margin.
<img width="209" alt="image" src="https://user-images.githubusercontent.com/3673236/195392741-aba35117-a63f-457e-a8ec-3687b5ef7f30.png">
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/3673236/195273060-2df3fad3-045e-48ad-af27-8c4f9abb75b7.png">
3. Enrollment Success post-log out:
<img width="193" alt="image" src="https://user-images.githubusercontent.com/3673236/195393285-3eb7e077-b56d-4d11-9e2c-9d2a8858ad8d.png">
<img width="636" alt="image" src="https://user-images.githubusercontent.com/3673236/195393331-387c207b-c988-471a-9eca-ae31356ab360.png">
<img width="1512" alt="image" src="https://user-images.githubusercontent.com/3673236/195273236-40fe1650-f74c-4126-8559-d86bdb7811da.png">
</issue>
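
One way to give the post-logout confirmation its own layout (so the icon, headline, and spacing can follow the mocks) is to render it from a dedicated template instead of the generic page template; the accompanying diff for this record takes that route. The snippet below is only a sketch of that idea, not the project's verbatim fix: it assumes it would replace `logged_out()` inside `benefits/core/views.py`, where the `viewmodels` module is already imported.

```python
# Sketch only: render the logged-out confirmation from its own template.
# Assumes this lives in benefits/core/views.py, where `viewmodels` is imported.
from django.template.response import TemplateResponse
from django.utils.translation import pgettext, gettext as _

TEMPLATE_LOGGED_OUT = "core/logged_out.html"  # dedicated template for this page


def logged_out(request):
    """View handler for the final log out confirmation message."""
    page = viewmodels.Page(
        title=_("core.pages.logged_out.title"),
        # no headline here: the dedicated template controls the layout
        icon=viewmodels.Icon("happybus", pgettext("image alt text", "core.icons.happybus")),
    )
    return TemplateResponse(request, TEMPLATE_LOGGED_OUT, page.context_dict())
```

The column widths and margin calculations described above would then live in that template's markup and CSS rather than in the view code.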
<code>
[start of benefits/core/views.py]
1 """
2 The core application: view definition for the root of the webapp.
3 """
4 from django.http import HttpResponse, HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError
5 from django.shortcuts import redirect
6 from django.template import loader
7 from django.template.response import TemplateResponse
8 from django.urls import reverse
9 from django.utils.translation import pgettext, gettext as _
10
11 from . import models, session, viewmodels
12 from .middleware import pageview_decorator
13
14 ROUTE_INDEX = "core:index"
15 ROUTE_ELIGIBILITY = "eligibility:index"
16 ROUTE_HELP = "core:help"
17 ROUTE_LOGGED_OUT = "core:logged_out"
18
19 TEMPLATE_PAGE = "core/page.html"
20 TEMPLATE_AGENCY = "core/agency_index.html"
21 TEMPLATE_HELP = "core/help.html"
22
23
24 @pageview_decorator
25 def index(request):
26 """View handler for the main entry page."""
27 session.reset(request)
28
29 agencies = models.TransitAgency.all_active()
30
31 if len(agencies) == 1:
32 agency = agencies[0]
33 return redirect(agency.index_url)
34
35 # generate a button to the landing page for each active agency
36 buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]
37 buttons[0].classes.append("mt-3")
38 buttons[0].label = _("core.pages.index.chooseprovider")
39
40 page = viewmodels.Page(
41 title=_("core.pages.index.title"),
42 headline=_("core.pages.index.headline"),
43 buttons=buttons,
44 classes="home",
45 )
46
47 return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())
48
49
50 @pageview_decorator
51 def agency_index(request, agency):
52 """View handler for an agency entry page."""
53 session.reset(request)
54 session.update(request, agency=agency, origin=agency.index_url)
55
56 button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse(ROUTE_ELIGIBILITY))
57
58 page = viewmodels.Page(
59 title=_("core.pages.agency_index.title"),
60 headline=_("core.pages.agency_index.mst_cc.headline"),
61 button=button,
62 classes="home",
63 )
64
65 return TemplateResponse(request, TEMPLATE_AGENCY, page.context_dict())
66
67
68 @pageview_decorator
69 def agency_public_key(request, agency):
70 """View handler returns an agency's public key as plain text."""
71 return HttpResponse(agency.public_key_data, content_type="text/plain")
72
73
74 @pageview_decorator
75 def help(request):
76 """View handler for the help page."""
77 if session.active_agency(request):
78 agency = session.agency(request)
79 buttons = viewmodels.Button.agency_contact_links(agency)
80 else:
81 buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]
82
83 buttons.append(viewmodels.Button.home(request, _("core.buttons.back")))
84
85 page = viewmodels.Page(
86 title=_("core.buttons.help"),
87 headline=_("core.buttons.help"),
88 buttons=buttons,
89 )
90
91 return TemplateResponse(request, TEMPLATE_HELP, page.context_dict())
92
93
94 @pageview_decorator
95 def bad_request(request, exception, template_name="400.html"):
96 """View handler for HTTP 400 Bad Request responses."""
97 if session.active_agency(request):
98 session.update(request, origin=session.agency(request).index_url)
99 else:
100 session.update(request, origin=reverse(ROUTE_INDEX))
101
102 home = viewmodels.Button.home(request)
103 page = viewmodels.ErrorPage.server_error(button=home)
104 t = loader.get_template(template_name)
105
106 return HttpResponseBadRequest(t.render(page.context_dict()))
107
108
109 @pageview_decorator
110 def csrf_failure(request, reason):
111 """
112 View handler for CSRF_FAILURE_VIEW with custom data.
113 """
114 if session.active_agency(request):
115 session.update(request, origin=session.agency(request).index_url)
116 else:
117 session.update(request, origin=reverse(ROUTE_INDEX))
118
119 home = viewmodels.Button.home(request)
120 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
121 t = loader.get_template("400.html")
122
123 return HttpResponseNotFound(t.render(page.context_dict()))
124
125
126 @pageview_decorator
127 def page_not_found(request, exception, template_name="404.html"):
128 """View handler for HTTP 404 Not Found responses."""
129 if session.active_agency(request):
130 session.update(request, origin=session.agency(request).index_url)
131 else:
132 session.update(request, origin=reverse(ROUTE_INDEX))
133
134 home = viewmodels.Button.home(request)
135 # show a more user-friendly message instead of not_found
136 page = viewmodels.ErrorPage.user_error(button=home, path=request.path)
137 t = loader.get_template(template_name)
138
139 return HttpResponseNotFound(t.render(page.context_dict()))
140
141
142 @pageview_decorator
143 def server_error(request, template_name="500.html"):
144 """View handler for HTTP 500 Server Error responses."""
145 if session.active_agency(request):
146 session.update(request, origin=session.agency(request).index_url)
147 else:
148 session.update(request, origin=reverse(ROUTE_INDEX))
149
150 home = viewmodels.Button.home(request)
151 page = viewmodels.ErrorPage.server_error(button=home)
152 t = loader.get_template(template_name)
153
154 return HttpResponseServerError(t.render(page.context_dict()))
155
156
157 def logged_out(request):
158 """View handler for the final log out confirmation message."""
159 page = viewmodels.Page(
160 title=_("core.pages.logged_out.title"),
161 headline=_("core.pages.logged_out.headline"),
162 icon=viewmodels.Icon("happybus", pgettext("image alt text", "core.icons.happybus")),
163 )
164
165 return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())
166
[end of benefits/core/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/benefits/core/views.py b/benefits/core/views.py
--- a/benefits/core/views.py
+++ b/benefits/core/views.py
@@ -19,6 +19,7 @@
TEMPLATE_PAGE = "core/page.html"
TEMPLATE_AGENCY = "core/agency_index.html"
TEMPLATE_HELP = "core/help.html"
+TEMPLATE_LOGGED_OUT = "core/logged_out.html"
@pageview_decorator
@@ -158,8 +159,7 @@
"""View handler for the final log out confirmation message."""
page = viewmodels.Page(
title=_("core.pages.logged_out.title"),
- headline=_("core.pages.logged_out.headline"),
icon=viewmodels.Icon("happybus", pgettext("image alt text", "core.icons.happybus")),
)
- return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())
+ return TemplateResponse(request, TEMPLATE_LOGGED_OUT, page.context_dict())
| {"golden_diff": "diff --git a/benefits/core/views.py b/benefits/core/views.py\n--- a/benefits/core/views.py\n+++ b/benefits/core/views.py\n@@ -19,6 +19,7 @@\n TEMPLATE_PAGE = \"core/page.html\"\n TEMPLATE_AGENCY = \"core/agency_index.html\"\n TEMPLATE_HELP = \"core/help.html\"\n+TEMPLATE_LOGGED_OUT = \"core/logged_out.html\"\n \n \n @pageview_decorator\n@@ -158,8 +159,7 @@\n \"\"\"View handler for the final log out confirmation message.\"\"\"\n page = viewmodels.Page(\n title=_(\"core.pages.logged_out.title\"),\n- headline=_(\"core.pages.logged_out.headline\"),\n icon=viewmodels.Icon(\"happybus\", pgettext(\"image alt text\", \"core.icons.happybus\")),\n )\n \n- return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())\n+ return TemplateResponse(request, TEMPLATE_LOGGED_OUT, page.context_dict())\n", "issue": "Enrollment Success page: Final touches\nMake the page look like the Figma mocks. \r\n\r\n1. Desktop: Design the last line to this. It's a 10-column wide large div that is left-aligned.\r\n\r\n<img width=\"837\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/195391617-3f48afbb-9b1a-4500-b727-087a43e5865a.png\">\r\n\r\n\r\n\r\n<img width=\"1512\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/195273016-ca031b6e-673f-4bf1-943e-2fdd8f085255.png\">\r\n\r\n2. Mobile: Design this last line. This mobile alignment is trickier. Might have to write CSS to add a margin-left and then calculate exactly how much the amount would be, based on what is the diameter of the icon plus the margin.\r\n\r\n<img width=\"209\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/195392741-aba35117-a63f-457e-a8ec-3687b5ef7f30.png\">\r\n<img width=\"1512\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/195273060-2df3fad3-045e-48ad-af27-8c4f9abb75b7.png\">\r\n\r\n\r\n3. Enrollment Success post-log out:\r\n<img width=\"193\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/195393285-3eb7e077-b56d-4d11-9e2c-9d2a8858ad8d.png\">\r\n<img width=\"636\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/195393331-387c207b-c988-471a-9eca-ae31356ab360.png\">\r\n\r\n\r\n<img width=\"1512\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/195273236-40fe1650-f74c-4126-8559-d86bdb7811da.png\">\r\n\n", "before_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponse, HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.shortcuts import redirect\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom . 
import models, session, viewmodels\nfrom .middleware import pageview_decorator\n\nROUTE_INDEX = \"core:index\"\nROUTE_ELIGIBILITY = \"eligibility:index\"\nROUTE_HELP = \"core:help\"\nROUTE_LOGGED_OUT = \"core:logged_out\"\n\nTEMPLATE_PAGE = \"core/page.html\"\nTEMPLATE_AGENCY = \"core/agency_index.html\"\nTEMPLATE_HELP = \"core/help.html\"\n\n\n@pageview_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n agencies = models.TransitAgency.all_active()\n\n if len(agencies) == 1:\n agency = agencies[0]\n return redirect(agency.index_url)\n\n # generate a button to the landing page for each active agency\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.index.title\"),\n headline=_(\"core.pages.index.headline\"),\n buttons=buttons,\n classes=\"home\",\n )\n\n return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())\n\n\n@pageview_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(ROUTE_ELIGIBILITY))\n\n page = viewmodels.Page(\n title=_(\"core.pages.agency_index.title\"),\n headline=_(\"core.pages.agency_index.mst_cc.headline\"),\n button=button,\n classes=\"home\",\n )\n\n return TemplateResponse(request, TEMPLATE_AGENCY, page.context_dict())\n\n\n@pageview_decorator\ndef agency_public_key(request, agency):\n \"\"\"View handler returns an agency's public key as plain text.\"\"\"\n return HttpResponse(agency.public_key_data, content_type=\"text/plain\")\n\n\n@pageview_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n headline=_(\"core.buttons.help\"),\n buttons=buttons,\n )\n\n return TemplateResponse(request, TEMPLATE_HELP, page.context_dict())\n\n\n@pageview_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.server_error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return 
HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n # show a more user-friendly message instead of not_found\n page = viewmodels.ErrorPage.user_error(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.server_error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n\n\ndef logged_out(request):\n \"\"\"View handler for the final log out confirmation message.\"\"\"\n page = viewmodels.Page(\n title=_(\"core.pages.logged_out.title\"),\n headline=_(\"core.pages.logged_out.headline\"),\n icon=viewmodels.Icon(\"happybus\", pgettext(\"image alt text\", \"core.icons.happybus\")),\n )\n\n return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())\n", "path": "benefits/core/views.py"}]} | 2,741 | 203 |
gh_patches_debug_2426 | rasdani/github-patches | git_diff | kserve__kserve-864 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
explanations no longer working with 0.3.0
I am following the steps in https://github.com/kubeflow/kfserving/tree/master/docs/samples/explanation/alibi/income with 0.3.0 of kfserving.
When I execute the curl for the explain I get a 500 error and the container logs show the output below. I'm guessing the [update to master](https://github.com/kubeflow/kfserving/pull/803) means that the explainer models have also been updated, so they no longer work with 0.3.0 (the latest release version).
```
[E 200605 17:15:14 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)
HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/tornado/web.py", line 1701, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/kfserving/kfserving/handlers/http.py", line 61, in post
response = model.explain(request)
File "/alibiexplainer/alibiexplainer/explainer.py", line 74, in explain
explanation = self.wrapper.explain(request["instances"])
File "/alibiexplainer/alibiexplainer/anchor_tabular.py", line 89, in explain
anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)
File "/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py", line 803, in explain
for sampler in self.samplers:
AttributeError: 'AnchorTabular' object has no attribute 'samplers'
[E 200605 17:15:14 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 58.80ms
[I 200605 17:18:22 anchor_tabular:83] Arr shape ((1, 12),)
[E 200605 17:18:22 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)
HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/tornado/web.py", line 1701, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "/kfserving/kfserving/handlers/http.py", line 61, in post
response = model.explain(request)
File "/alibiexplainer/alibiexplainer/explainer.py", line 74, in explain
explanation = self.wrapper.explain(request["instances"])
File "/alibiexplainer/alibiexplainer/anchor_tabular.py", line 89, in explain
anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)
File "/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py", line 803, in explain
for sampler in self.samplers:
AttributeError: 'AnchorTabular' object has no attribute 'samplers'
[E 200605 17:18:22 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 31.17ms
```
Presumably it would work on master. Does that sound right, @cliveseldon? If so, maybe we should just close this.
</issue>
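
The traceback points to a version skew: the saved explainer artifact was built with one alibi release, while the serving image installs a newer one whose `AnchorTabular` expects a `samplers` attribute set up at fit time. The golden diff for this record simply pins `alibi==0.3.2` in the explainer's setup.py. As a complementary illustration, a defensive check like the one below could fail fast with a clearer message; the artifact path and expected version are assumptions made for the example, not values from the repository.

```python
# Illustrative sketch only: fail fast if the installed alibi release does not
# match the release the explainer artifact was saved with.
import alibi
import dill

EXPECTED_ALIBI_VERSION = "0.3.2"  # assumption: the release the artifact targets

if alibi.__version__ != EXPECTED_ALIBI_VERSION:
    raise RuntimeError(
        f"Installed alibi {alibi.__version__} does not match the explainer "
        f"artifact built for {EXPECTED_ALIBI_VERSION}; attributes such as "
        f"'samplers' may be missing after unpickling."
    )

with open("explainer.dill", "rb") as f:  # hypothetical artifact path
    explainer = dill.load(f)
```

Pinning the dependency in the image, as the diff does, remains the simpler fix; the check above only makes the mismatch visible at startup instead of at explain time.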
<code>
[start of python/alibiexplainer/setup.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import setup, find_packages
16
17 tests_require = [
18 'pytest',
19 'pytest-tornasync',
20 'mypy'
21 ]
22
23 setup(
24 name='alibiexplainer',
25 version='0.3.0',
26 author_email='[email protected]',
27 license='../../LICENSE.txt',
28 url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',
29 description='Model Explaination Server. \
30 Not intended for use outside KFServing Frameworks Images',
31 long_description=open('README.md').read(),
32 python_requires='>=3.6',
33 packages=find_packages("alibiexplainer"),
34 install_requires=[
35 "kfserving>=0.3.0",
36 "alibi>=0.3",
37 "scikit-learn>=0.20.3",
38 "argparse>=1.4.0",
39 "requests>=2.22.0",
40 "joblib>=0.13.2",
41 "pandas>=0.24.2",
42 "numpy>=1.16.3",
43 "dill>=0.3.0",
44 "spacy>=2.1.4"
45 ],
46 tests_require=tests_require,
47 extras_require={'test': tests_require}
48 )
49
[end of python/alibiexplainer/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py
--- a/python/alibiexplainer/setup.py
+++ b/python/alibiexplainer/setup.py
@@ -33,7 +33,7 @@
packages=find_packages("alibiexplainer"),
install_requires=[
"kfserving>=0.3.0",
- "alibi>=0.3",
+ "alibi==0.3.2",
"scikit-learn>=0.20.3",
"argparse>=1.4.0",
"requests>=2.22.0",
| {"golden_diff": "diff --git a/python/alibiexplainer/setup.py b/python/alibiexplainer/setup.py\n--- a/python/alibiexplainer/setup.py\n+++ b/python/alibiexplainer/setup.py\n@@ -33,7 +33,7 @@\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"kfserving>=0.3.0\",\n- \"alibi>=0.3\",\n+ \"alibi==0.3.2\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n", "issue": "explanations no longer working with 0.3.0\nAm following the steps in with 0.3.0 of kfserving: https://github.com/kubeflow/kfserving/tree/master/docs/samples/explanation/alibi/income\r\n\r\nWhen I execute the curl for the explain I get a 500 error and the container logs show the below. I'm guessing the [update to master](https://github.com/kubeflow/kfserving/pull/803) means that the explainer models have also been updated and so they no longer work with 0.3.0 (the latest release version)\r\n\r\n```\r\n[E 200605 17:15:14 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)\r\n HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')\r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/tornado/web.py\", line 1701, in _execute\r\n result = method(*self.path_args, **self.path_kwargs)\r\n File \"/kfserving/kfserving/handlers/http.py\", line 61, in post\r\n response = model.explain(request)\r\n File \"/alibiexplainer/alibiexplainer/explainer.py\", line 74, in explain\r\n explanation = self.wrapper.explain(request[\"instances\"])\r\n File \"/alibiexplainer/alibiexplainer/anchor_tabular.py\", line 89, in explain\r\n anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py\", line 803, in explain\r\n for sampler in self.samplers:\r\n AttributeError: 'AnchorTabular' object has no attribute 'samplers'\r\n[E 200605 17:15:14 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 58.80ms\r\n[I 200605 17:18:22 anchor_tabular:83] Arr shape ((1, 12),) \r\n[E 200605 17:18:22 web:1792] Uncaught exception POST /v1/models/income:explain (127.0.0.1)\r\n HTTPServerRequest(protocol='http', host='income-explainer-default.default.svc.cluster.local', method='POST', uri='/v1/models/income:explain', version='HTTP/1.1', remote_ip='127.0.0.1')\r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/tornado/web.py\", line 1701, in _execute\r\n result = method(*self.path_args, **self.path_kwargs)\r\n File \"/kfserving/kfserving/handlers/http.py\", line 61, in post\r\n response = model.explain(request)\r\n File \"/alibiexplainer/alibiexplainer/explainer.py\", line 74, in explain\r\n explanation = self.wrapper.explain(request[\"instances\"])\r\n File \"/alibiexplainer/alibiexplainer/anchor_tabular.py\", line 89, in explain\r\n anchor_exp = self.anchors_tabular.explain(arr[0], **self.kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/alibi/explainers/anchor_tabular.py\", line 803, in explain\r\n for sampler in self.samplers:\r\n AttributeError: 'AnchorTabular' object has no attribute 'samplers'\r\n[E 200605 17:18:22 web:2250] 500 POST /v1/models/income:explain (127.0.0.1) 31.17ms\r\n\r\n```\r\n\r\nPresumably it would work on master. Does that sound right @cliveseldon ? 
If so maybe we should just close this.\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'pytest',\n 'pytest-tornasync',\n 'mypy'\n]\n\nsetup(\n name='alibiexplainer',\n version='0.3.0',\n author_email='[email protected]',\n license='../../LICENSE.txt',\n url='https://github.com/kubeflow/kfserving/python/kfserving/alibiexplainer',\n description='Model Explaination Server. \\\n Not intended for use outside KFServing Frameworks Images',\n long_description=open('README.md').read(),\n python_requires='>=3.6',\n packages=find_packages(\"alibiexplainer\"),\n install_requires=[\n \"kfserving>=0.3.0\",\n \"alibi>=0.3\",\n \"scikit-learn>=0.20.3\",\n \"argparse>=1.4.0\",\n \"requests>=2.22.0\",\n \"joblib>=0.13.2\",\n \"pandas>=0.24.2\",\n \"numpy>=1.16.3\",\n \"dill>=0.3.0\",\n \"spacy>=2.1.4\"\n ],\n tests_require=tests_require,\n extras_require={'test': tests_require}\n)\n", "path": "python/alibiexplainer/setup.py"}]} | 1,990 | 136 |
gh_patches_debug_34547 | rasdani/github-patches | git_diff | keras-team__keras-nlp-340 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update our glue evaluation script to use tensorflow datasets
When it was originally written, glue for tfds was totally broken, so we used huggingface datasets.
This appears to be fixed, so let's used tfds instead.
We should update the line here as well to flip the dependency.
https://github.com/keras-team/keras-nlp/blob/master/setup.py#L55
</issue>
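
For orientation, loading a GLUE task through TensorFlow Datasets looks roughly like the sketch below; the golden diff for this record uses the same `tfds.load(f"glue/{task_name}", split=[...])` pattern. The task name `glue/mrpc` and the feature names are example choices, and the real script maps splits per task rather than hard-coding them.

```python
# Sketch: loading a GLUE task with TensorFlow Datasets instead of the
# huggingface `datasets` package. "glue/mrpc" is just an example task.
import tensorflow_datasets as tfds

train_ds, validation_ds, test_ds = tfds.load(
    "glue/mrpc",
    split=["train", "validation", "test"],
)

# Each element is a dict of features; keep the text fields plus the label.
feature_names = ("sentence1", "sentence2")
train_ds = train_ds.map(
    lambda x: ({name: x[name] for name in feature_names}, x["label"])
)
```

The tokenization and packing steps in the finetuning script stay unchanged; only the data-loading front end swaps from `datasets.load_dataset` to `tfds.load`.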
<code>
[start of setup.py]
1 # Copyright 2021 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script."""
16
17 import pathlib
18
19 from setuptools import find_packages
20 from setuptools import setup
21
22 HERE = pathlib.Path(__file__).parent
23 README = (HERE / "README.md").read_text()
24
25 setup(
26 name="keras-nlp",
27 description=(
28 "Industry-strength Natural Language Processing extensions for Keras."
29 ),
30 long_description=README,
31 long_description_content_type="text/markdown",
32 version="0.3.0",
33 url="https://github.com/keras-team/keras-nlp",
34 author="Keras team",
35 author_email="[email protected]",
36 license="Apache License 2.0",
37 install_requires=[
38 "absl-py",
39 "numpy",
40 "packaging",
41 "tensorflow",
42 "tensorflow-text",
43 ],
44 extras_require={
45 "tests": [
46 "black",
47 "flake8",
48 "isort",
49 "pytest",
50 "pytest-cov",
51 "rouge-score",
52 "sentencepiece",
53 ],
54 "examples": [
55 "datasets", # For GLUE in BERT example.
56 "nltk",
57 "wikiextractor",
58 "keras-tuner",
59 ],
60 },
61 classifiers=[
62 "Programming Language :: Python",
63 "Programming Language :: Python :: 3.7",
64 "Operating System :: Unix",
65 "Operating System :: Microsoft :: Windows",
66 "Operating System :: MacOS",
67 "Intended Audience :: Science/Research",
68 "Topic :: Scientific/Engineering",
69 "Topic :: Software Development",
70 ],
71 packages=find_packages(exclude=("*_test.py",)),
72 )
73
[end of setup.py]
[start of examples/bert/bert_finetune_glue.py]
1 # Copyright 2022 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Run finetuning on a GLUE task."""
15
16 import tempfile
17
18 import datasets
19 import keras_tuner
20 import tensorflow as tf
21 from absl import app
22 from absl import flags
23 from tensorflow import keras
24
25 import keras_nlp
26 from examples.bert.bert_config import FINETUNING_CONFIG
27 from examples.bert.bert_config import PREPROCESSING_CONFIG
28
29 FLAGS = flags.FLAGS
30
31 flags.DEFINE_string(
32 "vocab_file",
33 None,
34 "The vocabulary file for tokenization.",
35 )
36
37 flags.DEFINE_string(
38 "saved_model_input",
39 None,
40 "The directory to load the pretrained model.",
41 )
42
43 flags.DEFINE_string(
44 "saved_model_output",
45 None,
46 "The directory to save the finetuned model.",
47 )
48
49 flags.DEFINE_string(
50 "task_name",
51 "mrpc",
52 "The name of the GLUE task to finetune on.",
53 )
54
55 flags.DEFINE_bool(
56 "do_lower_case",
57 True,
58 "Whether to lower case the input text.",
59 )
60
61 flags.DEFINE_bool(
62 "do_evaluation",
63 True,
64 "Whether to run evaluation on test data.",
65 )
66
67
68 def load_data(task_name):
69 if task_name in ("cola", "sst2"):
70 feature_names = ("sentence",)
71 elif task_name in ("mrpc", "stsb", "rte", "wnli"):
72 feature_names = ("sentence1", "sentence2")
73 elif task_name in ("mnli", "mnli_matched", "mnli_mismatched"):
74 feature_names = ("premise", "hypothesis")
75 elif task_name in "qnli":
76 feature_names = ("question", "sentence")
77 elif task_name in "qqp":
78 feature_names = ("question1", "question2")
79 else:
80 raise ValueError(f"Unkown task_name {task_name}.")
81
82 test_suffix = ""
83 if task_name in ("mnli", "mnli_matched"):
84 # For "mnli", just run default to "mnli_matched".
85 task_name = "mnli"
86 test_suffix = "_matched"
87 elif task_name in ("mnli_mismatched",):
88 task_name = "mnli"
89 test_suffix = "_mismatched"
90
91 def to_tf_dataset(split):
92 # Format each sample as a tuple of string features and an int label.
93 features = tuple([split[f] for f in feature_names])
94 label = tf.cast(split["label"], tf.int32)
95 return tf.data.Dataset.from_tensor_slices((features, label))
96
97 data = datasets.load_dataset("glue", task_name)
98 data.set_format(type="tensorflow")
99 train_ds = to_tf_dataset(data["train"])
100 test_ds = to_tf_dataset(data["test" + test_suffix])
101 validation_ds = to_tf_dataset(data["validation" + test_suffix])
102 return train_ds, test_ds, validation_ds
103
104
105 class BertHyperModel(keras_tuner.HyperModel):
106 """Creates a hypermodel to help with the search space for finetuning."""
107
108 def build(self, hp):
109 model = keras.models.load_model(FLAGS.saved_model_input, compile=False)
110 finetuning_model = keras_nlp.models.BertClassifier(
111 base_model=model,
112 num_classes=3 if FLAGS.task_name in ("mnli", "ax") else 2,
113 )
114 finetuning_model.compile(
115 optimizer=keras.optimizers.Adam(
116 learning_rate=hp.Choice(
117 "lr", FINETUNING_CONFIG["learning_rates"]
118 ),
119 ),
120 loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
121 metrics=[keras.metrics.SparseCategoricalAccuracy()],
122 )
123 return finetuning_model
124
125
126 def main(_):
127 print(f"Reading input model from {FLAGS.saved_model_input}")
128
129 tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
130 vocabulary=FLAGS.vocab_file,
131 lowercase=FLAGS.do_lower_case,
132 )
133 packer = keras_nlp.layers.MultiSegmentPacker(
134 sequence_length=PREPROCESSING_CONFIG["max_seq_length"],
135 start_value=tokenizer.token_to_id("[CLS]"),
136 end_value=tokenizer.token_to_id("[SEP]"),
137 )
138
139 def preprocess_data(inputs, labels):
140 inputs = [tokenizer(x) for x in inputs]
141 token_ids, segment_ids = packer(inputs)
142 return {
143 "token_ids": token_ids,
144 "segment_ids": segment_ids,
145 "padding_mask": tf.cast(token_ids != 0, "int32"),
146 }, labels
147
148 # Read and preprocess GLUE task data.
149 train_ds, test_ds, validation_ds = load_data(FLAGS.task_name)
150
151 batch_size = FINETUNING_CONFIG["batch_size"]
152 train_ds = train_ds.batch(batch_size).map(
153 preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
154 )
155 validation_ds = validation_ds.batch(batch_size).map(
156 preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
157 )
158 test_ds = test_ds.batch(batch_size).map(
159 preprocess_data, num_parallel_calls=tf.data.AUTOTUNE
160 )
161
162 # Create a hypermodel object for a RandomSearch.
163 hypermodel = BertHyperModel()
164
165 # Initialize the random search over the 4 learning rate parameters, for 4
166 # trials and 3 epochs for each trial.
167 tuner = keras_tuner.RandomSearch(
168 hypermodel=hypermodel,
169 objective=keras_tuner.Objective("val_loss", direction="min"),
170 max_trials=4,
171 overwrite=True,
172 project_name="hyperparameter_tuner_results",
173 directory=tempfile.mkdtemp(),
174 )
175
176 tuner.search(
177 train_ds,
178 epochs=FINETUNING_CONFIG["epochs"],
179 validation_data=validation_ds,
180 )
181
182 # Extract the best hyperparameters after the search.
183 best_hp = tuner.get_best_hyperparameters()[0]
184 finetuning_model = tuner.get_best_models()[0]
185
186 print(
187 f"The best hyperparameters found are:\nLearning Rate: {best_hp['lr']}"
188 )
189
190 if FLAGS.do_evaluation:
191 print("Evaluating on test set.")
192 finetuning_model.evaluate(test_ds)
193
194 if FLAGS.saved_model_output:
195 print(f"Saving to {FLAGS.saved_model_output}")
196 finetuning_model.save(FLAGS.saved_model_output)
197
198
199 if __name__ == "__main__":
200 flags.mark_flag_as_required("vocab_file")
201 flags.mark_flag_as_required("saved_model_input")
202 app.run(main)
203
[end of examples/bert/bert_finetune_glue.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/bert/bert_finetune_glue.py b/examples/bert/bert_finetune_glue.py
--- a/examples/bert/bert_finetune_glue.py
+++ b/examples/bert/bert_finetune_glue.py
@@ -15,9 +15,9 @@
import tempfile
-import datasets
import keras_tuner
import tensorflow as tf
+import tensorflow_datasets as tfds
from absl import app
from absl import flags
from tensorflow import keras
@@ -77,7 +77,7 @@
elif task_name in "qqp":
feature_names = ("question1", "question2")
else:
- raise ValueError(f"Unkown task_name {task_name}.")
+ raise ValueError(f"Unknown task_name {task_name}.")
test_suffix = ""
if task_name in ("mnli", "mnli_matched"):
@@ -88,17 +88,18 @@
task_name = "mnli"
test_suffix = "_mismatched"
- def to_tf_dataset(split):
- # Format each sample as a tuple of string features and an int label.
- features = tuple([split[f] for f in feature_names])
- label = tf.cast(split["label"], tf.int32)
- return tf.data.Dataset.from_tensor_slices((features, label))
-
- data = datasets.load_dataset("glue", task_name)
- data.set_format(type="tensorflow")
- train_ds = to_tf_dataset(data["train"])
- test_ds = to_tf_dataset(data["test" + test_suffix])
- validation_ds = to_tf_dataset(data["validation" + test_suffix])
+ def split_features(x):
+ return {feature_name: x[feature_name] for feature_name in feature_names}
+
+ train_ds, test_ds, validation_ds = tfds.load(
+ f"glue/{task_name}",
+ split=["train", "test" + test_suffix, "validation" + test_suffix],
+ )
+ train_ds = train_ds.map(split_features, num_parallel_calls=tf.data.AUTOTUNE)
+ test_ds = test_ds.map(split_features, num_parallel_calls=tf.data.AUTOTUNE)
+ validation_ds = validation_ds.map(
+ split_features, num_parallel_calls=tf.data.AUTOTUNE
+ )
return train_ds, test_ds, validation_ds
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -52,7 +52,7 @@
"sentencepiece",
],
"examples": [
- "datasets", # For GLUE in BERT example.
+ "tensorflow_datasets", # For GLUE in BERT example.
"nltk",
"wikiextractor",
"keras-tuner",
| {"golden_diff": "diff --git a/examples/bert/bert_finetune_glue.py b/examples/bert/bert_finetune_glue.py\n--- a/examples/bert/bert_finetune_glue.py\n+++ b/examples/bert/bert_finetune_glue.py\n@@ -15,9 +15,9 @@\n \n import tempfile\n \n-import datasets\n import keras_tuner\n import tensorflow as tf\n+import tensorflow_datasets as tfds\n from absl import app\n from absl import flags\n from tensorflow import keras\n@@ -77,7 +77,7 @@\n elif task_name in \"qqp\":\n feature_names = (\"question1\", \"question2\")\n else:\n- raise ValueError(f\"Unkown task_name {task_name}.\")\n+ raise ValueError(f\"Unknown task_name {task_name}.\")\n \n test_suffix = \"\"\n if task_name in (\"mnli\", \"mnli_matched\"):\n@@ -88,17 +88,18 @@\n task_name = \"mnli\"\n test_suffix = \"_mismatched\"\n \n- def to_tf_dataset(split):\n- # Format each sample as a tuple of string features and an int label.\n- features = tuple([split[f] for f in feature_names])\n- label = tf.cast(split[\"label\"], tf.int32)\n- return tf.data.Dataset.from_tensor_slices((features, label))\n-\n- data = datasets.load_dataset(\"glue\", task_name)\n- data.set_format(type=\"tensorflow\")\n- train_ds = to_tf_dataset(data[\"train\"])\n- test_ds = to_tf_dataset(data[\"test\" + test_suffix])\n- validation_ds = to_tf_dataset(data[\"validation\" + test_suffix])\n+ def split_features(x):\n+ return {feature_name: x[feature_name] for feature_name in feature_names}\n+\n+ train_ds, test_ds, validation_ds = tfds.load(\n+ f\"glue/{task_name}\",\n+ split=[\"train\", \"test\" + test_suffix, \"validation\" + test_suffix],\n+ )\n+ train_ds = train_ds.map(split_features, num_parallel_calls=tf.data.AUTOTUNE)\n+ test_ds = test_ds.map(split_features, num_parallel_calls=tf.data.AUTOTUNE)\n+ validation_ds = validation_ds.map(\n+ split_features, num_parallel_calls=tf.data.AUTOTUNE\n+ )\n return train_ds, test_ds, validation_ds\n \n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -52,7 +52,7 @@\n \"sentencepiece\",\n ],\n \"examples\": [\n- \"datasets\", # For GLUE in BERT example.\n+ \"tensorflow_datasets\", # For GLUE in BERT example.\n \"nltk\",\n \"wikiextractor\",\n \"keras-tuner\",\n", "issue": "Update our glue evaluation script to use tensorflow datasets\nWhen it was originally written, glue for tfds was totally broken, so we used huggingface datasets.\r\n\r\nThis appears to be fixed, so let's used tfds instead.\r\n\r\nWe should update the line here as well to flip the dependency.\r\nhttps://github.com/keras-team/keras-nlp/blob/master/setup.py#L55\n", "before_files": [{"content": "# Copyright 2021 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script.\"\"\"\n\nimport pathlib\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nHERE = pathlib.Path(__file__).parent\nREADME = (HERE / \"README.md\").read_text()\n\nsetup(\n name=\"keras-nlp\",\n description=(\n \"Industry-strength Natural Language Processing extensions for Keras.\"\n ),\n long_description=README,\n 
long_description_content_type=\"text/markdown\",\n version=\"0.3.0\",\n url=\"https://github.com/keras-team/keras-nlp\",\n author=\"Keras team\",\n author_email=\"[email protected]\",\n license=\"Apache License 2.0\",\n install_requires=[\n \"absl-py\",\n \"numpy\",\n \"packaging\",\n \"tensorflow\",\n \"tensorflow-text\",\n ],\n extras_require={\n \"tests\": [\n \"black\",\n \"flake8\",\n \"isort\",\n \"pytest\",\n \"pytest-cov\",\n \"rouge-score\",\n \"sentencepiece\",\n ],\n \"examples\": [\n \"datasets\", # For GLUE in BERT example.\n \"nltk\",\n \"wikiextractor\",\n \"keras-tuner\",\n ],\n },\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3.7\",\n \"Operating System :: Unix\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Software Development\",\n ],\n packages=find_packages(exclude=(\"*_test.py\",)),\n)\n", "path": "setup.py"}, {"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Run finetuning on a GLUE task.\"\"\"\n\nimport tempfile\n\nimport datasets\nimport keras_tuner\nimport tensorflow as tf\nfrom absl import app\nfrom absl import flags\nfrom tensorflow import keras\n\nimport keras_nlp\nfrom examples.bert.bert_config import FINETUNING_CONFIG\nfrom examples.bert.bert_config import PREPROCESSING_CONFIG\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n \"vocab_file\",\n None,\n \"The vocabulary file for tokenization.\",\n)\n\nflags.DEFINE_string(\n \"saved_model_input\",\n None,\n \"The directory to load the pretrained model.\",\n)\n\nflags.DEFINE_string(\n \"saved_model_output\",\n None,\n \"The directory to save the finetuned model.\",\n)\n\nflags.DEFINE_string(\n \"task_name\",\n \"mrpc\",\n \"The name of the GLUE task to finetune on.\",\n)\n\nflags.DEFINE_bool(\n \"do_lower_case\",\n True,\n \"Whether to lower case the input text.\",\n)\n\nflags.DEFINE_bool(\n \"do_evaluation\",\n True,\n \"Whether to run evaluation on test data.\",\n)\n\n\ndef load_data(task_name):\n if task_name in (\"cola\", \"sst2\"):\n feature_names = (\"sentence\",)\n elif task_name in (\"mrpc\", \"stsb\", \"rte\", \"wnli\"):\n feature_names = (\"sentence1\", \"sentence2\")\n elif task_name in (\"mnli\", \"mnli_matched\", \"mnli_mismatched\"):\n feature_names = (\"premise\", \"hypothesis\")\n elif task_name in \"qnli\":\n feature_names = (\"question\", \"sentence\")\n elif task_name in \"qqp\":\n feature_names = (\"question1\", \"question2\")\n else:\n raise ValueError(f\"Unkown task_name {task_name}.\")\n\n test_suffix = \"\"\n if task_name in (\"mnli\", \"mnli_matched\"):\n # For \"mnli\", just run default to \"mnli_matched\".\n task_name = \"mnli\"\n test_suffix = \"_matched\"\n elif task_name in (\"mnli_mismatched\",):\n task_name = \"mnli\"\n test_suffix = \"_mismatched\"\n\n def to_tf_dataset(split):\n # Format each sample as a tuple of string features and an int label.\n 
features = tuple([split[f] for f in feature_names])\n label = tf.cast(split[\"label\"], tf.int32)\n return tf.data.Dataset.from_tensor_slices((features, label))\n\n data = datasets.load_dataset(\"glue\", task_name)\n data.set_format(type=\"tensorflow\")\n train_ds = to_tf_dataset(data[\"train\"])\n test_ds = to_tf_dataset(data[\"test\" + test_suffix])\n validation_ds = to_tf_dataset(data[\"validation\" + test_suffix])\n return train_ds, test_ds, validation_ds\n\n\nclass BertHyperModel(keras_tuner.HyperModel):\n \"\"\"Creates a hypermodel to help with the search space for finetuning.\"\"\"\n\n def build(self, hp):\n model = keras.models.load_model(FLAGS.saved_model_input, compile=False)\n finetuning_model = keras_nlp.models.BertClassifier(\n base_model=model,\n num_classes=3 if FLAGS.task_name in (\"mnli\", \"ax\") else 2,\n )\n finetuning_model.compile(\n optimizer=keras.optimizers.Adam(\n learning_rate=hp.Choice(\n \"lr\", FINETUNING_CONFIG[\"learning_rates\"]\n ),\n ),\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy()],\n )\n return finetuning_model\n\n\ndef main(_):\n print(f\"Reading input model from {FLAGS.saved_model_input}\")\n\n tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(\n vocabulary=FLAGS.vocab_file,\n lowercase=FLAGS.do_lower_case,\n )\n packer = keras_nlp.layers.MultiSegmentPacker(\n sequence_length=PREPROCESSING_CONFIG[\"max_seq_length\"],\n start_value=tokenizer.token_to_id(\"[CLS]\"),\n end_value=tokenizer.token_to_id(\"[SEP]\"),\n )\n\n def preprocess_data(inputs, labels):\n inputs = [tokenizer(x) for x in inputs]\n token_ids, segment_ids = packer(inputs)\n return {\n \"token_ids\": token_ids,\n \"segment_ids\": segment_ids,\n \"padding_mask\": tf.cast(token_ids != 0, \"int32\"),\n }, labels\n\n # Read and preprocess GLUE task data.\n train_ds, test_ds, validation_ds = load_data(FLAGS.task_name)\n\n batch_size = FINETUNING_CONFIG[\"batch_size\"]\n train_ds = train_ds.batch(batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n validation_ds = validation_ds.batch(batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n test_ds = test_ds.batch(batch_size).map(\n preprocess_data, num_parallel_calls=tf.data.AUTOTUNE\n )\n\n # Create a hypermodel object for a RandomSearch.\n hypermodel = BertHyperModel()\n\n # Initialize the random search over the 4 learning rate parameters, for 4\n # trials and 3 epochs for each trial.\n tuner = keras_tuner.RandomSearch(\n hypermodel=hypermodel,\n objective=keras_tuner.Objective(\"val_loss\", direction=\"min\"),\n max_trials=4,\n overwrite=True,\n project_name=\"hyperparameter_tuner_results\",\n directory=tempfile.mkdtemp(),\n )\n\n tuner.search(\n train_ds,\n epochs=FINETUNING_CONFIG[\"epochs\"],\n validation_data=validation_ds,\n )\n\n # Extract the best hyperparameters after the search.\n best_hp = tuner.get_best_hyperparameters()[0]\n finetuning_model = tuner.get_best_models()[0]\n\n print(\n f\"The best hyperparameters found are:\\nLearning Rate: {best_hp['lr']}\"\n )\n\n if FLAGS.do_evaluation:\n print(\"Evaluating on test set.\")\n finetuning_model.evaluate(test_ds)\n\n if FLAGS.saved_model_output:\n print(f\"Saving to {FLAGS.saved_model_output}\")\n finetuning_model.save(FLAGS.saved_model_output)\n\n\nif __name__ == \"__main__\":\n flags.mark_flag_as_required(\"vocab_file\")\n flags.mark_flag_as_required(\"saved_model_input\")\n app.run(main)\n", "path": "examples/bert/bert_finetune_glue.py"}]} | 3,278 | 612 |
gh_patches_debug_23599 | rasdani/github-patches | git_diff | svthalia__concrexit-1793 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Food order can be changed after paying
### Describe the bug
If you order a pizza and pay for it, you can still change the product. If you change the product through the API, the payment is not removed.
### How to reproduce
Steps to reproduce the behaviour:
1. Order a pizza
2. Pay with Thalia Pay
3. Change the order through the API
4. Get an expensive pizza for little money
### Expected behaviour
Either changing the order after paying should be impossible, or doing so should remove the payment. I think removing the payment (as the website currently seems to do) would be strange, and for event registrations we've decided not to enable this.
### Screenshots
<img width="569" alt="image" src="https://user-images.githubusercontent.com/41264528/123456318-01d59880-d5e3-11eb-86c8-9217e4720988.png">
There are probably no food events any time soon, so a hotfix may not be needed, though it might be good to double-check that similar stuff is not possible with registrations.
</issue>
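
Of the two behaviours discussed above, the "block the update once paid" option could look roughly like the mixin below. This is a hypothetical sketch, not the repository's fix (the accompanying diff instead deletes the payment via `payments.services.delete_payment` before applying the update); it assumes `FoodOrder` exposes a `payment` relation and that the mixin would sit in front of the DRF `UpdateAPIView` on the order detail view.

```python
# Hypothetical sketch of the "refuse changes after payment" option.
# Assumes FoodOrder has a `payment` relation and that this mixin is listed
# before UpdateAPIView in the order detail view's bases.
from rest_framework import status
from rest_framework.response import Response


class BlockPaidOrderUpdateMixin:
    """Refuse updates once an order has been paid (illustrative only)."""

    def update(self, request, *args, **kwargs):
        instance = self.get_object()
        if instance.payment:  # already paid: do not allow product changes
            return Response(
                "Paid orders can no longer be changed.",
                status=status.HTTP_403_FORBIDDEN,
            )
        return super().update(request, *args, **kwargs)
```

Whichever option is chosen, the same guard would also need to apply to the website form, not only the API endpoint.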
<code>
[start of website/pizzas/api/v2/views.py]
1 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
2 from rest_framework.generics import (
3 ListAPIView,
4 RetrieveAPIView,
5 get_object_or_404,
6 CreateAPIView,
7 DestroyAPIView,
8 UpdateAPIView,
9 )
10
11 from rest_framework import filters as framework_filters, status
12 from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
13 from rest_framework.response import Response
14
15 from pizzas.api.v2 import filters
16 from pizzas.api.v2.serializers import (
17 ProductSerializer,
18 FoodOrderSerializer,
19 FoodOrderUpdateSerializer,
20 FoodOrderCreateSerializer,
21 )
22 from pizzas.api.v2.serializers.food_event import FoodEventSerializer
23 from pizzas.models import FoodEvent, Product, FoodOrder
24 from thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod
25
26
27 class FoodEventListView(ListAPIView):
28 """Returns an overview of all food events."""
29
30 serializer_class = FoodEventSerializer
31 queryset = FoodEvent.objects.all()
32 filter_backends = (
33 framework_filters.OrderingFilter,
34 filters.FoodEventDateFilterBackend,
35 )
36 ordering_fields = ("start", "end")
37 permission_classes = [
38 IsAuthenticatedOrTokenHasScope,
39 DjangoModelPermissionsOrAnonReadOnly,
40 ]
41 required_scopes = ["food:read"]
42
43
44 class FoodEventDetailView(RetrieveAPIView):
45 """Returns one single food event."""
46
47 serializer_class = FoodEventSerializer
48 queryset = FoodEvent.objects.all()
49 permission_classes = [
50 IsAuthenticatedOrTokenHasScope,
51 DjangoModelPermissionsOrAnonReadOnly,
52 ]
53 required_scopes = ["food:read"]
54
55
56 class FoodEventProductsListView(ListAPIView):
57 """Returns an overview of all products."""
58
59 serializer_class = ProductSerializer
60 queryset = Product.available_products.all()
61 filter_backends = (framework_filters.SearchFilter,)
62 search_fields = ("name",)
63 permission_classes = [
64 IsAuthenticatedOrTokenHasScope,
65 DjangoModelPermissionsOrAnonReadOnly,
66 ]
67 required_scopes = ["food:read"]
68
69
70 class FoodEventOrderDetailView(
71 RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView
72 ):
73 """Returns details of a food order."""
74
75 permission_classes = [
76 IsAuthenticatedOrTokenHasScopeForMethod,
77 DjangoModelPermissionsOrAnonReadOnly,
78 ]
79 required_scopes_per_method = {
80 "GET": ["food:read"],
81 "POST": ["food:order"],
82 "PUT": ["food:order"],
83 "PATCH": ["food:order"],
84 "DELETE": ["food:order"],
85 }
86
87 def get_serializer_class(self):
88 if self.request.method.lower() == "get":
89 return FoodOrderSerializer
90 if self.request.method.lower() == "post":
91 return FoodOrderCreateSerializer
92 return FoodOrderUpdateSerializer
93
94 def get_queryset(self):
95 return FoodOrder.objects.filter(food_event=self.food_event)
96
97 def get_object(self):
98 queryset = self.filter_queryset(self.get_queryset())
99 obj = get_object_or_404(queryset, member=self.request.member)
100
101 # May raise a permission denied
102 self.check_object_permissions(self.request, obj)
103
104 return obj
105
106 def dispatch(self, request, *args, **kwargs):
107 self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get("pk"))
108 return super().dispatch(request, *args, **kwargs)
109
110 def update(self, request, *args, **kwargs):
111 super().update(request, *args, **kwargs)
112 instance = self.get_object()
113 return Response(
114 FoodOrderSerializer(instance, context=self.get_serializer_context()).data
115 )
116
117 def create(self, request, *args, **kwargs):
118 serializer = self.get_serializer(data=request.data)
119 serializer.is_valid(raise_exception=True)
120 self.perform_create(serializer)
121 return Response(
122 FoodOrderSerializer(
123 serializer.instance, context=self.get_serializer_context()
124 ).data,
125 status=status.HTTP_201_CREATED,
126 )
127
[end of website/pizzas/api/v2/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py
--- a/website/pizzas/api/v2/views.py
+++ b/website/pizzas/api/v2/views.py
@@ -12,6 +12,8 @@
from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
from rest_framework.response import Response
+from payments.exceptions import PaymentError
+from payments.services import delete_payment
from pizzas.api.v2 import filters
from pizzas.api.v2.serializers import (
ProductSerializer,
@@ -110,6 +112,18 @@
def update(self, request, *args, **kwargs):
super().update(request, *args, **kwargs)
instance = self.get_object()
+
+ if instance.payment:
+ try:
+ delete_payment(
+ instance, member=request.member, ignore_change_window=True
+ )
+ except PaymentError:
+ return Response(
+ "Your order could not be updated because it was already paid.",
+ status=status.HTTP_403_FORBIDDEN,
+ )
+
return Response(
FoodOrderSerializer(instance, context=self.get_serializer_context()).data
)
| {"golden_diff": "diff --git a/website/pizzas/api/v2/views.py b/website/pizzas/api/v2/views.py\n--- a/website/pizzas/api/v2/views.py\n+++ b/website/pizzas/api/v2/views.py\n@@ -12,6 +12,8 @@\n from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\n from rest_framework.response import Response\n \n+from payments.exceptions import PaymentError\n+from payments.services import delete_payment\n from pizzas.api.v2 import filters\n from pizzas.api.v2.serializers import (\n ProductSerializer,\n@@ -110,6 +112,18 @@\n def update(self, request, *args, **kwargs):\n super().update(request, *args, **kwargs)\n instance = self.get_object()\n+\n+ if instance.payment:\n+ try:\n+ delete_payment(\n+ instance, member=request.member, ignore_change_window=True\n+ )\n+ except PaymentError:\n+ return Response(\n+ \"Your order could not be updated because it was already paid.\",\n+ status=status.HTTP_403_FORBIDDEN,\n+ )\n+\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n", "issue": "Food order can be changed after paying\n### Describe the bug\r\nIf you order a pizza and pay it, you can still change the product. If you change the product through the api, the payment is not removed.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Order a pizza\r\n2. Pay with Thalia Pay\r\n3. Change the order through the api\r\n4. Get an expensive pizza for little money\r\n\r\n### Expected behaviour\r\nEither changing the order after paying is impossible, or it removes the payment. I think removing the payment (as the website currently seems to do) would be strange, and for event registration we've decided not to enable this.\r\n\r\n### Screenshots\r\n<img width=\"569\" alt=\"image\" src=\"https://user-images.githubusercontent.com/41264528/123456318-01d59880-d5e3-11eb-86c8-9217e4720988.png\">\r\n\r\nThere are probably no food events any time soon, so a hotfix may not be needed, though it might be good to double-check that similar stuff is not possible with registrations.\r\n\n", "before_files": [{"content": "from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework.generics import (\n ListAPIView,\n RetrieveAPIView,\n get_object_or_404,\n CreateAPIView,\n DestroyAPIView,\n UpdateAPIView,\n)\n\nfrom rest_framework import filters as framework_filters, status\nfrom rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly\nfrom rest_framework.response import Response\n\nfrom pizzas.api.v2 import filters\nfrom pizzas.api.v2.serializers import (\n ProductSerializer,\n FoodOrderSerializer,\n FoodOrderUpdateSerializer,\n FoodOrderCreateSerializer,\n)\nfrom pizzas.api.v2.serializers.food_event import FoodEventSerializer\nfrom pizzas.models import FoodEvent, Product, FoodOrder\nfrom thaliawebsite.api.v2.permissions import IsAuthenticatedOrTokenHasScopeForMethod\n\n\nclass FoodEventListView(ListAPIView):\n \"\"\"Returns an overview of all food events.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n filter_backends = (\n framework_filters.OrderingFilter,\n filters.FoodEventDateFilterBackend,\n )\n ordering_fields = (\"start\", \"end\")\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventDetailView(RetrieveAPIView):\n \"\"\"Returns one single food event.\"\"\"\n\n serializer_class = FoodEventSerializer\n queryset = FoodEvent.objects.all()\n 
permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventProductsListView(ListAPIView):\n \"\"\"Returns an overview of all products.\"\"\"\n\n serializer_class = ProductSerializer\n queryset = Product.available_products.all()\n filter_backends = (framework_filters.SearchFilter,)\n search_fields = (\"name\",)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes = [\"food:read\"]\n\n\nclass FoodEventOrderDetailView(\n RetrieveAPIView, CreateAPIView, UpdateAPIView, DestroyAPIView\n):\n \"\"\"Returns details of a food order.\"\"\"\n\n permission_classes = [\n IsAuthenticatedOrTokenHasScopeForMethod,\n DjangoModelPermissionsOrAnonReadOnly,\n ]\n required_scopes_per_method = {\n \"GET\": [\"food:read\"],\n \"POST\": [\"food:order\"],\n \"PUT\": [\"food:order\"],\n \"PATCH\": [\"food:order\"],\n \"DELETE\": [\"food:order\"],\n }\n\n def get_serializer_class(self):\n if self.request.method.lower() == \"get\":\n return FoodOrderSerializer\n if self.request.method.lower() == \"post\":\n return FoodOrderCreateSerializer\n return FoodOrderUpdateSerializer\n\n def get_queryset(self):\n return FoodOrder.objects.filter(food_event=self.food_event)\n\n def get_object(self):\n queryset = self.filter_queryset(self.get_queryset())\n obj = get_object_or_404(queryset, member=self.request.member)\n\n # May raise a permission denied\n self.check_object_permissions(self.request, obj)\n\n return obj\n\n def dispatch(self, request, *args, **kwargs):\n self.food_event = get_object_or_404(FoodEvent, pk=self.kwargs.get(\"pk\"))\n return super().dispatch(request, *args, **kwargs)\n\n def update(self, request, *args, **kwargs):\n super().update(request, *args, **kwargs)\n instance = self.get_object()\n return Response(\n FoodOrderSerializer(instance, context=self.get_serializer_context()).data\n )\n\n def create(self, request, *args, **kwargs):\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n self.perform_create(serializer)\n return Response(\n FoodOrderSerializer(\n serializer.instance, context=self.get_serializer_context()\n ).data,\n status=status.HTTP_201_CREATED,\n )\n", "path": "website/pizzas/api/v2/views.py"}]} | 1,917 | 259 |
gh_patches_debug_496 | rasdani/github-patches | git_diff | deepchecks__deepchecks-1494 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Add copy button to code snippets
We used to have this:

For all code snippets (currently both in sphinx-gallery files). It doesn't appear anymore.
</issue>
<code>
[start of docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py]
1 # -*- coding: utf-8 -*-
2 """
3 Data Integrity Suite on Avocado Sales Data - Quickstart
4 *******************************************************
5
6 The deepchecks integrity suite is relevant any time you have data that you wish to validate:
7 whether it's on a fresh batch of data, or right before splitting it or using it for training.
8 Here we'll use the avocado prices dataset, to demonstrate how you can run
9 the suite with only a few simple lines of code, and see which kind of insights it can find.
10
11 .. code-block:: bash
12
13 # Before we start, if you don't have deepchecks installed yet,
14 # make sure to run:
15 pip install deepchecks -U --quiet #--user
16 """
17
18 #%%
19 # Load and Prepare Data
20 # ====================================================
21
22 from deepchecks.tabular import datasets
23
24 # load data
25 data = datasets.regression.avocado.load_data(data_format='DataFrame', as_train_test=False)
26 #%%
27
28 # drop unused columns (remove after fix...)
29 data = data.drop(columns=['Unnamed: 0'])
30
31 #%%
32 # Insert a few typcial problems to dataset for demonstration.
33
34 import pandas as pd
35
36 def add_dirty_data(df):
37 # change strings
38 df.loc[df[df['type'] == 'organic'].sample(frac=0.18).index,'type'] = 'Organic'
39 df.loc[df[df['type'] == 'organic'].sample(frac=0.01).index,'type'] = 'ORGANIC'
40 # add duplicates
41 df = pd.concat([df, df.sample(frac=0.156)], axis=0, ignore_index=True)
42 # add column with single value
43 df['Is Ripe'] = True
44 return df
45
46
47 dirty_df = add_dirty_data(data)
48
49 #%%
50 # Run Deepchecks for Data Integrity
51 # ====================================
52 #
53 # Define a Dataset Object
54 # ------------------------
55 #
56 # Create a deepchecks Dataset, including the relevant metadata (label, date, index, etc.).
57 # Check out :class:`deepchecks.tabular.Dataset` to see all of the columns that can be declared.
58
59 from deepchecks.tabular import Dataset
60
61 # We explicitly state the categorical features,
62 # otherwise they will be automatically inferred, which may not work perfectly and is not recommended.
63 # The label can be passed as a column name or a separate pd.Series / pd.DataFrame
64 ds = Dataset(dirty_df, cat_features = ['type'], datetime_name='Date', label = 'AveragePrice')
65
66 #%%
67 # Run the Deepchecks Suite
68 # --------------------------
69 #
70 # Validate your data with the :class:`deepchecks.tabular.suites.single_dataset_integrity` suite.
71 # It runs on a single dataset, so you can run it on any batch of data (e.g. train data, test data, a new batch of data
72 # that recently arrived)
73 #
74 # Check out the :doc:`when should you use </getting-started/when_should_you_use>`
75 # deepchecks guide for some more info about the existing suites and when to use them.
76
77 from deepchecks.tabular.suites import data_integrity
78
79 # Run Suite:
80 integ_suite = data_integrity()
81 integ_suite.run(ds)
82
83 #%%
84 # We can inspect the suite outputs and see that there are a few problems we'd like to fix.
85 # We'll now fix them and check that they're resolved by re-running those specific checks.
86
87
88 #%%
89 # Run a Single Check
90 # -------------------
91 # We can run a single check on a dataset, and see the results.
92
93 from deepchecks.tabular.checks import IsSingleValue, DataDuplicates
94
95 # first let's see how the check runs:
96 IsSingleValue().run(ds)
97
98 #%%
99
100 # we can also add a condition:
101 single_value_with_condition = IsSingleValue().add_condition_not_single_value()
102 result = single_value_with_condition.run(ds)
103 result
104
105 #%%
106
107 # We can also inspect and use the result's value:
108 result.value
109
110 #%%
111 # Now let's remove the single value column and rerun (notice that we're using directly
112 # the ``data`` attribute that stores the dataframe inside the Dataset)
113
114 ds.data.drop('Is Ripe', axis=1, inplace=True)
115 result = single_value_with_condition.run(ds)
116 result
117
118 #%%
119
120 # Alternatively we can fix the dataframe directly, and create a new dataset.
121 # Let's fix also the duplicate values:
122 dirty_df.drop_duplicates(inplace=True)
123 dirty_df.drop('Is Ripe', axis=1, inplace=True)
124 ds = Dataset(dirty_df, cat_features=['type'], datetime_name='Date', label='AveragePrice')
125 result = DataDuplicates().add_condition_ratio_not_greater_than(0).run(ds)
126 result
127
128 #%%
129 # Rerun Suite on the Fixed Dataset
130 # ---------------------------------
131 # Finally, we'll choose to keep the "organic" multiple spellings as they represent different sources.
132 # So we'll customaize the suite by removing the condition from it (or delete check completely).
133 # Alternatively - we can customize it by creating a new Suite with the desired checks and conditions.
134 # See :doc:`/user-guide/general/customizations/examples/customizing-suites` for more info.
135
136 # let's inspect the suite's structure
137 integ_suite
138
139 #%%
140
141 # and remove the condition:
142 integ_suite[3].clean_conditions()
143
144 #%%
145 # Now we can re-run the suite using:
146 integ_suite.run(ds)
147
148 #%%
149 # and all of the conditions will pass.
150 #
151 # *Note: the check we manipulated will still run as part of the Suite, however
152 # it won't appear in the Conditions Summary since it no longer has any
153 # conditions defined on it. You can still see its display results in the
154 # Additional Outputs section*
155 #
156 # For more info about working with conditions, see the detailed
157 # :doc:`/user-guide/general/customizations/examples/plot_configure_checks_conditions' guide.
158
[end of docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py b/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py
--- a/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py
+++ b/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py
@@ -143,7 +143,7 @@
#%%
# Now we can re-run the suite using:
-integ_suite.run(ds)
+res = integ_suite.run(ds)
#%%
# and all of the conditions will pass.
| {"golden_diff": "diff --git a/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py b/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py\n--- a/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py\n+++ b/docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py\n@@ -143,7 +143,7 @@\n \n #%%\n # Now we can re-run the suite using:\n-integ_suite.run(ds)\n+res = integ_suite.run(ds)\n \n #%%\n # and all of the conditions will pass.\n", "issue": "[BUG] Add copy button to code snippets\nWe used to have this:\r\n\r\n\r\nFor all code snippets (currently both in sphinx-gallery files. Doesn't appear anymore.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nData Integrity Suite on Avocado Sales Data - Quickstart\n*******************************************************\n\nThe deepchecks integrity suite is relevant any time you have data that you wish to validate:\nwhether it's on a fresh batch of data, or right before splitting it or using it for training. \nHere we'll use the avocado prices dataset, to demonstrate how you can run\nthe suite with only a few simple lines of code, and see which kind of insights it can find.\n\n.. code-block:: bash\n\n # Before we start, if you don't have deepchecks installed yet,\n # make sure to run:\n pip install deepchecks -U --quiet #--user\n\"\"\"\n\n#%%\n# Load and Prepare Data\n# ====================================================\n\nfrom deepchecks.tabular import datasets\n\n# load data\ndata = datasets.regression.avocado.load_data(data_format='DataFrame', as_train_test=False)\n#%%\n\n# drop unused columns (remove after fix...)\ndata = data.drop(columns=['Unnamed: 0'])\n\n#%%\n# Insert a few typcial problems to dataset for demonstration.\n\nimport pandas as pd\n\ndef add_dirty_data(df):\n # change strings\n df.loc[df[df['type'] == 'organic'].sample(frac=0.18).index,'type'] = 'Organic'\n df.loc[df[df['type'] == 'organic'].sample(frac=0.01).index,'type'] = 'ORGANIC'\n # add duplicates\n df = pd.concat([df, df.sample(frac=0.156)], axis=0, ignore_index=True)\n # add column with single value\n df['Is Ripe'] = True\n return df\n\n\ndirty_df = add_dirty_data(data)\n\n#%%\n# Run Deepchecks for Data Integrity\n# ====================================\n#\n# Define a Dataset Object\n# ------------------------\n#\n# Create a deepchecks Dataset, including the relevant metadata (label, date, index, etc.).\n# Check out :class:`deepchecks.tabular.Dataset` to see all of the columns that can be declared.\n\nfrom deepchecks.tabular import Dataset\n\n# We explicitly state the categorical features,\n# otherwise they will be automatically inferred, which may not work perfectly and is not recommended.\n# The label can be passed as a column name or a separate pd.Series / pd.DataFrame\nds = Dataset(dirty_df, cat_features = ['type'], datetime_name='Date', label = 'AveragePrice')\n\n#%%\n# Run the Deepchecks Suite\n# --------------------------\n#\n# Validate your data with the :class:`deepchecks.tabular.suites.single_dataset_integrity` suite.\n# It runs on a single dataset, so you can run it on any batch of data (e.g. 
train data, test data, a new batch of data\n# that recently arrived)\n#\n# Check out the :doc:`when should you use </getting-started/when_should_you_use>`\n# deepchecks guide for some more info about the existing suites and when to use them.\n\nfrom deepchecks.tabular.suites import data_integrity\n\n# Run Suite:\ninteg_suite = data_integrity()\ninteg_suite.run(ds)\n\n#%%\n# We can inspect the suite outputs and see that there are a few problems we'd like to fix.\n# We'll now fix them and check that they're resolved by re-running those specific checks.\n\n\n#%%\n# Run a Single Check\n# -------------------\n# We can run a single check on a dataset, and see the results.\n\nfrom deepchecks.tabular.checks import IsSingleValue, DataDuplicates\n\n# first let's see how the check runs:\nIsSingleValue().run(ds)\n\n#%%\n\n# we can also add a condition:\nsingle_value_with_condition = IsSingleValue().add_condition_not_single_value()\nresult = single_value_with_condition.run(ds)\nresult\n\n#%%\n\n# We can also inspect and use the result's value:\nresult.value\n\n#%%\n# Now let's remove the single value column and rerun (notice that we're using directly \n# the ``data`` attribute that stores the dataframe inside the Dataset)\n\nds.data.drop('Is Ripe', axis=1, inplace=True)\nresult = single_value_with_condition.run(ds)\nresult\n\n#%%\n\n# Alternatively we can fix the dataframe directly, and create a new dataset.\n# Let's fix also the duplicate values:\ndirty_df.drop_duplicates(inplace=True)\ndirty_df.drop('Is Ripe', axis=1, inplace=True)\nds = Dataset(dirty_df, cat_features=['type'], datetime_name='Date', label='AveragePrice')\nresult = DataDuplicates().add_condition_ratio_not_greater_than(0).run(ds)\nresult\n\n#%%\n# Rerun Suite on the Fixed Dataset\n# ---------------------------------\n# Finally, we'll choose to keep the \"organic\" multiple spellings as they represent different sources.\n# So we'll customaize the suite by removing the condition from it (or delete check completely).\n# Alternatively - we can customize it by creating a new Suite with the desired checks and conditions.\n# See :doc:`/user-guide/general/customizations/examples/customizing-suites` for more info.\n\n# let's inspect the suite's structure\ninteg_suite\n\n#%%\n\n# and remove the condition:\ninteg_suite[3].clean_conditions()\n\n#%%\n# Now we can re-run the suite using:\ninteg_suite.run(ds)\n\n#%%\n# and all of the conditions will pass.\n#\n# *Note: the check we manipulated will still run as part of the Suite, however\n# it won't appear in the Conditions Summary since it no longer has any\n# conditions defined on it. You can still see its display results in the \n# Additional Outputs section*\n#\n# For more info about working with conditions, see the detailed\n# :doc:`/user-guide/general/customizations/examples/plot_configure_checks_conditions' guide.\n", "path": "docs/source/user-guide/tabular/tutorials/plot_quick_data_integrity.py"}]} | 2,254 | 125 |
gh_patches_debug_6893 | rasdani/github-patches | git_diff | pypa__setuptools-830 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Discrepancy between setuptools and distutils in sdist without MANIFEST.in
Originally reported by: **ikelos (Bitbucket: [ikelos](http://bitbucket.org/ikelos), GitHub: [ikelos](http://github.com/ikelos))**
---
[This upstream bug](http://bugs.python.org/issue2279) (fixed in 2010) resolved the need to create a MANIFEST.in file as well as list data_files in the setup.py. This does not appear to have been fixed in setuptools, and as such sdist and similar functions fail to include the appropriate files on machines that have setuptools, but not those that don't, when a fallback import system is used...
Would it be possible to align the two again so that setuptools is again a drop-in replacement for distutils?
---
- Bitbucket: https://bitbucket.org/pypa/setuptools/issue/274
Setuptools' sdist doesn't use data_files while distutils does
Originally reported by: **ionelmc (Bitbucket: [ionelmc](http://bitbucket.org/ionelmc), GitHub: [ionelmc](http://github.com/ionelmc))**
---
Distutils has this: https://github.com/python/cpython/blob/master/Lib/distutils/command/sdist.py#L270-L282
However that code is not copied in the sdist provided by setuptools.
This can be problematic if you don't include a file in MANIFEST.in but you have it in data_files - python's docs will most certainly mislead you as they say that sdist uses data_files: https://docs.python.org/3/distutils/sourcedist.html?highlight=data_files#specifying-the-files-to-distribute (last bullet)
---
- Bitbucket: https://bitbucket.org/pypa/setuptools/issue/521
</issue>
<code>
[start of setuptools/command/sdist.py]
1 from distutils import log
2 import distutils.command.sdist as orig
3 import os
4 import sys
5 import io
6 import contextlib
7
8 from setuptools.extern import six
9
10 from .py36compat import sdist_add_defaults
11
12 import pkg_resources
13
14 _default_revctrl = list
15
16
17 def walk_revctrl(dirname=''):
18 """Find all files under revision control"""
19 for ep in pkg_resources.iter_entry_points('setuptools.file_finders'):
20 for item in ep.load()(dirname):
21 yield item
22
23
24 class sdist(sdist_add_defaults, orig.sdist):
25 """Smart sdist that finds anything supported by revision control"""
26
27 user_options = [
28 ('formats=', None,
29 "formats for source distribution (comma-separated list)"),
30 ('keep-temp', 'k',
31 "keep the distribution tree around after creating " +
32 "archive file(s)"),
33 ('dist-dir=', 'd',
34 "directory to put the source distribution archive(s) in "
35 "[default: dist]"),
36 ]
37
38 negative_opt = {}
39
40 READMES = 'README', 'README.rst', 'README.txt'
41
42 def run(self):
43 self.run_command('egg_info')
44 ei_cmd = self.get_finalized_command('egg_info')
45 self.filelist = ei_cmd.filelist
46 self.filelist.append(os.path.join(ei_cmd.egg_info, 'SOURCES.txt'))
47 self.check_readme()
48
49 # Run sub commands
50 for cmd_name in self.get_sub_commands():
51 self.run_command(cmd_name)
52
53 # Call check_metadata only if no 'check' command
54 # (distutils <= 2.6)
55 import distutils.command
56
57 if 'check' not in distutils.command.__all__:
58 self.check_metadata()
59
60 self.make_distribution()
61
62 dist_files = getattr(self.distribution, 'dist_files', [])
63 for file in self.archive_files:
64 data = ('sdist', '', file)
65 if data not in dist_files:
66 dist_files.append(data)
67
68 def initialize_options(self):
69 orig.sdist.initialize_options(self)
70
71 self._default_to_gztar()
72
73 def _default_to_gztar(self):
74 # only needed on Python prior to 3.6.
75 if sys.version_info >= (3, 6, 0, 'beta', 1):
76 return
77 self.formats = ['gztar']
78
79 def make_distribution(self):
80 """
81 Workaround for #516
82 """
83 with self._remove_os_link():
84 orig.sdist.make_distribution(self)
85
86 @staticmethod
87 @contextlib.contextmanager
88 def _remove_os_link():
89 """
90 In a context, remove and restore os.link if it exists
91 """
92
93 class NoValue:
94 pass
95
96 orig_val = getattr(os, 'link', NoValue)
97 try:
98 del os.link
99 except Exception:
100 pass
101 try:
102 yield
103 finally:
104 if orig_val is not NoValue:
105 setattr(os, 'link', orig_val)
106
107 def __read_template_hack(self):
108 # This grody hack closes the template file (MANIFEST.in) if an
109 # exception occurs during read_template.
110 # Doing so prevents an error when easy_install attempts to delete the
111 # file.
112 try:
113 orig.sdist.read_template(self)
114 except Exception:
115 _, _, tb = sys.exc_info()
116 tb.tb_next.tb_frame.f_locals['template'].close()
117 raise
118
119 # Beginning with Python 2.7.2, 3.1.4, and 3.2.1, this leaky file handle
120 # has been fixed, so only override the method if we're using an earlier
121 # Python.
122 has_leaky_handle = (
123 sys.version_info < (2, 7, 2)
124 or (3, 0) <= sys.version_info < (3, 1, 4)
125 or (3, 2) <= sys.version_info < (3, 2, 1)
126 )
127 if has_leaky_handle:
128 read_template = __read_template_hack
129
130 def _add_defaults_python(self):
131 """getting python files"""
132 if self.distribution.has_pure_modules():
133 build_py = self.get_finalized_command('build_py')
134 self.filelist.extend(build_py.get_source_files())
135 # This functionality is incompatible with include_package_data, and
136 # will in fact create an infinite recursion if include_package_data
137 # is True. Use of include_package_data will imply that
138 # distutils-style automatic handling of package_data is disabled
139 if not self.distribution.include_package_data:
140 for _, src_dir, _, filenames in build_py.data_files:
141 self.filelist.extend([os.path.join(src_dir, filename)
142 for filename in filenames])
143
144 def _add_defaults_data_files(self):
145 """
146 Don't add any data files, but why?
147 """
148
149 def check_readme(self):
150 for f in self.READMES:
151 if os.path.exists(f):
152 return
153 else:
154 self.warn(
155 "standard file not found: should have one of " +
156 ', '.join(self.READMES)
157 )
158
159 def make_release_tree(self, base_dir, files):
160 orig.sdist.make_release_tree(self, base_dir, files)
161
162 # Save any egg_info command line options used to create this sdist
163 dest = os.path.join(base_dir, 'setup.cfg')
164 if hasattr(os, 'link') and os.path.exists(dest):
165 # unlink and re-copy, since it might be hard-linked, and
166 # we don't want to change the source version
167 os.unlink(dest)
168 self.copy_file('setup.cfg', dest)
169
170 self.get_finalized_command('egg_info').save_version_info(dest)
171
172 def _manifest_is_not_generated(self):
173 # check for special comment used in 2.7.1 and higher
174 if not os.path.isfile(self.manifest):
175 return False
176
177 with io.open(self.manifest, 'rb') as fp:
178 first_line = fp.readline()
179 return (first_line !=
180 '# file GENERATED by distutils, do NOT edit\n'.encode())
181
182 def read_manifest(self):
183 """Read the manifest file (named by 'self.manifest') and use it to
184 fill in 'self.filelist', the list of files to include in the source
185 distribution.
186 """
187 log.info("reading manifest file '%s'", self.manifest)
188 manifest = open(self.manifest, 'rb')
189 for line in manifest:
190 # The manifest must contain UTF-8. See #303.
191 if six.PY3:
192 try:
193 line = line.decode('UTF-8')
194 except UnicodeDecodeError:
195 log.warn("%r not UTF-8 decodable -- skipping" % line)
196 continue
197 # ignore comments and blank lines
198 line = line.strip()
199 if line.startswith('#') or not line:
200 continue
201 self.filelist.append(line)
202 manifest.close()
203
[end of setuptools/command/sdist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/command/sdist.py b/setuptools/command/sdist.py
--- a/setuptools/command/sdist.py
+++ b/setuptools/command/sdist.py
@@ -141,11 +141,6 @@
self.filelist.extend([os.path.join(src_dir, filename)
for filename in filenames])
- def _add_defaults_data_files(self):
- """
- Don't add any data files, but why?
- """
-
def check_readme(self):
for f in self.READMES:
if os.path.exists(f):
| {"golden_diff": "diff --git a/setuptools/command/sdist.py b/setuptools/command/sdist.py\n--- a/setuptools/command/sdist.py\n+++ b/setuptools/command/sdist.py\n@@ -141,11 +141,6 @@\n self.filelist.extend([os.path.join(src_dir, filename)\n for filename in filenames])\n \n- def _add_defaults_data_files(self):\n- \"\"\"\n- Don't add any data files, but why?\n- \"\"\"\n-\n def check_readme(self):\n for f in self.READMES:\n if os.path.exists(f):\n", "issue": "Discrepancy between setuptools and distutils in sdist without MANIFEST.in\nOriginally reported by: **ikelos (Bitbucket: [ikelos](http://bitbucket.org/ikelos), GitHub: [ikelos](http://github.com/ikelos))**\n\n---\n\n[This upstream bug](http://bugs.python.org/issue2279) (fixed in to 2010) resolved the need to create a MANIFEST.in file as well as list data_files in the setup.py. This does not appear to have been fixed in setuptools, and as such sdist and similar functions fail to include the appropriate files on machines that have setuptools, but not those that don't, when a fallback import system is used...\n\nWould it be possible to align the two again so that setuptools is again a drop-in replacement for distutils?\n\n---\n- Bitbucket: https://bitbucket.org/pypa/setuptools/issue/274\n\nSetuptools' sdist doesn't use data_files while distutils does\nOriginally reported by: **ionelmc (Bitbucket: [ionelmc](http://bitbucket.org/ionelmc), GitHub: [ionelmc](http://github.com/ionelmc))**\n\n---\n\nDistutils has this: https://github.com/python/cpython/blob/master/Lib/distutils/command/sdist.py#L270-L282\n\nHowever that code is not copies in the sdist provided by setuptools. \n\nThis can be problematic if you don't inlcude a file in MANIFEST.in but you have it in data_files - python's docs will most certainly mislead you as they say that sdist uses data_files: https://docs.python.org/3/distutils/sourcedist.html?highlight=data_files#specifying-the-files-to-distribute (last bullet)\n\n---\n- Bitbucket: https://bitbucket.org/pypa/setuptools/issue/521\n\n", "before_files": [{"content": "from distutils import log\nimport distutils.command.sdist as orig\nimport os\nimport sys\nimport io\nimport contextlib\n\nfrom setuptools.extern import six\n\nfrom .py36compat import sdist_add_defaults\n\nimport pkg_resources\n\n_default_revctrl = list\n\n\ndef walk_revctrl(dirname=''):\n \"\"\"Find all files under revision control\"\"\"\n for ep in pkg_resources.iter_entry_points('setuptools.file_finders'):\n for item in ep.load()(dirname):\n yield item\n\n\nclass sdist(sdist_add_defaults, orig.sdist):\n \"\"\"Smart sdist that finds anything supported by revision control\"\"\"\n\n user_options = [\n ('formats=', None,\n \"formats for source distribution (comma-separated list)\"),\n ('keep-temp', 'k',\n \"keep the distribution tree around after creating \" +\n \"archive file(s)\"),\n ('dist-dir=', 'd',\n \"directory to put the source distribution archive(s) in \"\n \"[default: dist]\"),\n ]\n\n negative_opt = {}\n\n READMES = 'README', 'README.rst', 'README.txt'\n\n def run(self):\n self.run_command('egg_info')\n ei_cmd = self.get_finalized_command('egg_info')\n self.filelist = ei_cmd.filelist\n self.filelist.append(os.path.join(ei_cmd.egg_info, 'SOURCES.txt'))\n self.check_readme()\n\n # Run sub commands\n for cmd_name in self.get_sub_commands():\n self.run_command(cmd_name)\n\n # Call check_metadata only if no 'check' command\n # (distutils <= 2.6)\n import distutils.command\n\n if 'check' not in distutils.command.__all__:\n self.check_metadata()\n\n 
self.make_distribution()\n\n dist_files = getattr(self.distribution, 'dist_files', [])\n for file in self.archive_files:\n data = ('sdist', '', file)\n if data not in dist_files:\n dist_files.append(data)\n\n def initialize_options(self):\n orig.sdist.initialize_options(self)\n\n self._default_to_gztar()\n\n def _default_to_gztar(self):\n # only needed on Python prior to 3.6.\n if sys.version_info >= (3, 6, 0, 'beta', 1):\n return\n self.formats = ['gztar']\n\n def make_distribution(self):\n \"\"\"\n Workaround for #516\n \"\"\"\n with self._remove_os_link():\n orig.sdist.make_distribution(self)\n\n @staticmethod\n @contextlib.contextmanager\n def _remove_os_link():\n \"\"\"\n In a context, remove and restore os.link if it exists\n \"\"\"\n\n class NoValue:\n pass\n\n orig_val = getattr(os, 'link', NoValue)\n try:\n del os.link\n except Exception:\n pass\n try:\n yield\n finally:\n if orig_val is not NoValue:\n setattr(os, 'link', orig_val)\n\n def __read_template_hack(self):\n # This grody hack closes the template file (MANIFEST.in) if an\n # exception occurs during read_template.\n # Doing so prevents an error when easy_install attempts to delete the\n # file.\n try:\n orig.sdist.read_template(self)\n except Exception:\n _, _, tb = sys.exc_info()\n tb.tb_next.tb_frame.f_locals['template'].close()\n raise\n\n # Beginning with Python 2.7.2, 3.1.4, and 3.2.1, this leaky file handle\n # has been fixed, so only override the method if we're using an earlier\n # Python.\n has_leaky_handle = (\n sys.version_info < (2, 7, 2)\n or (3, 0) <= sys.version_info < (3, 1, 4)\n or (3, 2) <= sys.version_info < (3, 2, 1)\n )\n if has_leaky_handle:\n read_template = __read_template_hack\n\n def _add_defaults_python(self):\n \"\"\"getting python files\"\"\"\n if self.distribution.has_pure_modules():\n build_py = self.get_finalized_command('build_py')\n self.filelist.extend(build_py.get_source_files())\n # This functionality is incompatible with include_package_data, and\n # will in fact create an infinite recursion if include_package_data\n # is True. 
Use of include_package_data will imply that\n # distutils-style automatic handling of package_data is disabled\n if not self.distribution.include_package_data:\n for _, src_dir, _, filenames in build_py.data_files:\n self.filelist.extend([os.path.join(src_dir, filename)\n for filename in filenames])\n\n def _add_defaults_data_files(self):\n \"\"\"\n Don't add any data files, but why?\n \"\"\"\n\n def check_readme(self):\n for f in self.READMES:\n if os.path.exists(f):\n return\n else:\n self.warn(\n \"standard file not found: should have one of \" +\n ', '.join(self.READMES)\n )\n\n def make_release_tree(self, base_dir, files):\n orig.sdist.make_release_tree(self, base_dir, files)\n\n # Save any egg_info command line options used to create this sdist\n dest = os.path.join(base_dir, 'setup.cfg')\n if hasattr(os, 'link') and os.path.exists(dest):\n # unlink and re-copy, since it might be hard-linked, and\n # we don't want to change the source version\n os.unlink(dest)\n self.copy_file('setup.cfg', dest)\n\n self.get_finalized_command('egg_info').save_version_info(dest)\n\n def _manifest_is_not_generated(self):\n # check for special comment used in 2.7.1 and higher\n if not os.path.isfile(self.manifest):\n return False\n\n with io.open(self.manifest, 'rb') as fp:\n first_line = fp.readline()\n return (first_line !=\n '# file GENERATED by distutils, do NOT edit\\n'.encode())\n\n def read_manifest(self):\n \"\"\"Read the manifest file (named by 'self.manifest') and use it to\n fill in 'self.filelist', the list of files to include in the source\n distribution.\n \"\"\"\n log.info(\"reading manifest file '%s'\", self.manifest)\n manifest = open(self.manifest, 'rb')\n for line in manifest:\n # The manifest must contain UTF-8. See #303.\n if six.PY3:\n try:\n line = line.decode('UTF-8')\n except UnicodeDecodeError:\n log.warn(\"%r not UTF-8 decodable -- skipping\" % line)\n continue\n # ignore comments and blank lines\n line = line.strip()\n if line.startswith('#') or not line:\n continue\n self.filelist.append(line)\n manifest.close()\n", "path": "setuptools/command/sdist.py"}]} | 2,944 | 123 |
gh_patches_debug_4097 | rasdani/github-patches | git_diff | mozilla__bugbug-130 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add bug alias as a feature
Not the alias by itself, but something like True if `CVE` is in bug['alias'], False otherwise.
</issue>
<code>
[start of bugbug/bug_features.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import re
7 from datetime import datetime
8 from datetime import timezone
9
10 import pandas as pd
11 from libmozdata import versions
12 from sklearn.base import BaseEstimator
13 from sklearn.base import TransformerMixin
14
15 from bugbug import bug_snapshot
16 from bugbug import repository
17
18
19 def field(bug, field):
20 if field in bug and bug[field] != '---':
21 return bug[field]
22
23 return None
24
25
26 class has_str(object):
27 def __call__(self, bug):
28 return field(bug, 'cf_has_str')
29
30
31 class has_regression_range(object):
32 def __call__(self, bug):
33 return field(bug, 'cf_has_regression_range')
34
35
36 class has_crash_signature(object):
37 def __call__(self, bug):
38 return 'cf_crash_signature' in bug and bug['cf_crash_signature'] != ''
39
40
41 class keywords(object):
42 def __init__(self, to_ignore=set()):
43 self.to_ignore = to_ignore
44
45 def __call__(self, bug):
46 keywords = []
47 subkeywords = []
48 for keyword in bug['keywords']:
49 if keyword in self.to_ignore:
50 continue
51
52 keywords.append(keyword)
53
54 if keyword.startswith('sec-'):
55 subkeywords.append('sec-')
56 elif keyword.startswith('csectype-'):
57 subkeywords.append('csectype-')
58 return keywords + subkeywords
59
60
61 class severity(object):
62 def __call__(self, bug):
63 return field(bug, 'severity')
64
65
66 class is_coverity_issue(object):
67 def __call__(self, bug):
68 return re.search('[CID ?[0-9]+]', bug['summary']) is not None or re.search('[CID ?[0-9]+]', bug['whiteboard']) is not None
69
70
71 class has_url(object):
72 def __call__(self, bug):
73 return bug['url'] != ''
74
75
76 class has_w3c_url(object):
77 def __call__(self, bug):
78 return 'w3c' in bug['url']
79
80
81 class has_github_url(object):
82 def __call__(self, bug):
83 return 'github' in bug['url']
84
85
86 class whiteboard(object):
87 def __call__(self, bug):
88
89 # Split by '['
90 paren_splits = bug['whiteboard'].lower().split('[')
91
92 # Split splits by space if they weren't in [ and ].
93 splits = []
94 for paren_split in paren_splits:
95 if ']' in paren_split:
96 paren_split = paren_split.split(']')
97 splits += paren_split
98 else:
99 splits += paren_split.split(' ')
100
101 # Remove empty splits and strip
102 splits = [split.strip() for split in splits if split.strip() != '']
103
104 # For splits which contain ':', return both the whole string and the string before ':'.
105 splits += [split.split(':', 1)[0] for split in splits if ':' in split]
106
107 return splits
108
109
110 class patches(object):
111 def __call__(self, bug):
112 return sum(1 for a in bug['attachments'] if a['is_patch'] or a['content_type'] in ['text/x-review-board-request', 'text/x-phabricator-request'])
113
114
115 class landings(object):
116 def __call__(self, bug):
117 return sum(1 for c in bug['comments'] if '://hg.mozilla.org/' in c['text'])
118
119
120 class title(object):
121 def __call__(self, bug):
122 ret = []
123
124 keywords = [
125 'fail',
126 ]
127 for keyword in keywords:
128 if keyword in bug['summary'].lower():
129 ret.append(keyword)
130
131 return ret
132
133
134 class product(object):
135 def __call__(self, bug):
136 return bug['product']
137
138
139 class component(object):
140 def __call__(self, bug):
141 return bug['component']
142
143
144 class is_mozillian(object):
145 def __call__(self, bug):
146 return any(bug['creator_detail']['email'].endswith(domain) for domain in ['@mozilla.com', '@mozilla.org'])
147
148
149 class delta_request_merge(object):
150 def __call__(self, bug):
151 for history in bug['history']:
152 for change in history['changes']:
153 if change['added'].startswith('approval-mozilla'):
154 uplift_request_datetime = datetime.strptime(history['when'], '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)
155 timedelta = versions.getCloserRelease(uplift_request_datetime)[1] - uplift_request_datetime
156 return timedelta.days + timedelta.seconds / (24 * 60 * 60)
157
158 return None
159
160
161 class commit_added(object):
162 def __call__(self, bug):
163 return sum(commit['added'] for commit in bug['commits'])
164
165
166 class commit_deleted(object):
167 def __call__(self, bug):
168 return sum(commit['deleted'] for commit in bug['commits'])
169
170
171 class commit_types(object):
172 def __call__(self, bug):
173 return sum((commit['types'] for commit in bug['commits']), [])
174
175
176 class blocked_bugs_number(object):
177 def __call__(self, bug):
178 return len(bug['blocks'])
179
180
181 class priority(object):
182 def __call__(self, bug):
183 return bug['priority']
184
185
186 class commit_files_modified_num(object):
187 def __call__(self, bug):
188 return sum(commit['files_modified_num'] for commit in bug['commits'])
189
190
191 class comment_count(object):
192 def __call__(self, bug):
193 return field(bug, 'comment_count')
194
195
196 class comment_length(object):
197 def __call__(self, bug):
198 return sum(len(x['text']) for x in bug['comments'])
199
200
201 def cleanup_url(text):
202 text = re.sub(r'http[s]?://(hg.mozilla|searchfox|dxr.mozilla)\S+', '__CODE_REFERENCE_URL__', text)
203 return re.sub(r'http\S+', '__URL__', text)
204
205
206 def cleanup_fileref(text):
207 return re.sub(r'\w+\.py\b|\w+\.json\b|\w+\.js\b|\w+\.jsm\b|\w+\.html\b|\w+\.css\b|\w+\.c\b|\w+\.cpp\b|\w+\.h\b', '__FILE_REFERENCE__', text)
208
209
210 def cleanup_responses(text):
211 return re.sub('>[^\n]+', ' ', text)
212
213
214 def cleanup_hex(text):
215 return re.sub(r'\b0[xX][0-9a-fA-F]+\b', '__HEX_NUMBER__', text)
216
217
218 def cleanup_dll(text):
219 return re.sub(r'\w+(\.dll|\.so|\.dylib)\b', '__DLL_NAME__', text)
220
221
222 def cleanup_synonyms(text):
223 synonyms = [
224 ('safemode', ['safemode', 'safe mode']),
225 ('str', ['str', 'steps to reproduce', 'repro steps']),
226 ('uaf', ['uaf', 'use after free', 'use-after-free']),
227 ('asan', ['asan', 'address sanitizer', 'addresssanitizer']),
228 ('permafailure', ['permafailure', 'permafailing', 'permafail', 'perma failure', 'perma failing', 'perma fail', 'perma-failure', 'perma-failing', 'perma-fail']),
229 ('spec', ['spec', 'specification']),
230 ]
231
232 for synonym_group, synonym_list in synonyms:
233 text = re.sub('|'.join(fr'\b{synonym}\b' for synonym in synonym_list), synonym_group, text, flags=re.IGNORECASE)
234
235 return text
236
237
238 def cleanup_crash(text):
239 return re.sub(r'bp-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{6}[0-9]{6}\b', '__CRASH_STATS_LINK__', text)
240
241
242 class BugExtractor(BaseEstimator, TransformerMixin):
243 def __init__(self, feature_extractors, cleanup_functions, rollback=False, rollback_when=None, commit_data=False):
244 self.feature_extractors = feature_extractors
245 self.cleanup_functions = cleanup_functions
246 self.rollback = rollback
247 self.rollback_when = rollback_when
248 self.commit_map = repository.get_commit_map() if commit_data else None
249
250 def fit(self, x, y=None):
251 return self
252
253 def transform(self, bugs):
254 results = []
255
256 for bug in bugs:
257 bug_id = bug['id']
258
259 if self.rollback:
260 bug = bug_snapshot.rollback(bug, self.rollback_when)
261
262 data = {}
263
264 if self.commit_map is not None:
265 if bug_id in self.commit_map:
266 bug['commits'] = self.commit_map[bug_id]
267 else:
268 bug['commits'] = []
269
270 for f in self.feature_extractors:
271 res = f(bug)
272
273 if res is None:
274 continue
275
276 if isinstance(res, list):
277 for item in res:
278 data[f.__class__.__name__ + '-' + item] = 'True'
279 continue
280
281 if isinstance(res, bool):
282 res = str(res)
283
284 data[f.__class__.__name__] = res
285
286 # TODO: Try simply using all possible fields instead of extracting features manually.
287
288 for cleanup_function in self.cleanup_functions:
289 bug['summary'] = cleanup_function(bug['summary'])
290 for c in bug['comments']:
291 c['text'] = cleanup_function(c['text'])
292
293 result = {
294 'data': data,
295 'title': bug['summary'],
296 'first_comment': bug['comments'][0]['text'],
297 'comments': ' '.join([c['text'] for c in bug['comments']]),
298 }
299
300 results.append(result)
301
302 return pd.DataFrame(results)
303
[end of bugbug/bug_features.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bugbug/bug_features.py b/bugbug/bug_features.py
--- a/bugbug/bug_features.py
+++ b/bugbug/bug_features.py
@@ -183,6 +183,11 @@
return bug['priority']
+class bug_has_cve_in_alias(object):
+ def __call__(self, bug):
+ return bug['alias'] is not None and 'CVE' in bug['alias']
+
+
class commit_files_modified_num(object):
def __call__(self, bug):
return sum(commit['files_modified_num'] for commit in bug['commits'])
| {"golden_diff": "diff --git a/bugbug/bug_features.py b/bugbug/bug_features.py\n--- a/bugbug/bug_features.py\n+++ b/bugbug/bug_features.py\n@@ -183,6 +183,11 @@\n return bug['priority']\n \n \n+class bug_has_cve_in_alias(object):\n+ def __call__(self, bug):\n+ return bug['alias'] is not None and 'CVE' in bug['alias']\n+\n+\n class commit_files_modified_num(object):\n def __call__(self, bug):\n return sum(commit['files_modified_num'] for commit in bug['commits'])\n", "issue": "Add bug alias as a feature\nNot the alias by itself, but something like True if `CVE` is in bug['alias'], False otherwise.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport re\nfrom datetime import datetime\nfrom datetime import timezone\n\nimport pandas as pd\nfrom libmozdata import versions\nfrom sklearn.base import BaseEstimator\nfrom sklearn.base import TransformerMixin\n\nfrom bugbug import bug_snapshot\nfrom bugbug import repository\n\n\ndef field(bug, field):\n if field in bug and bug[field] != '---':\n return bug[field]\n\n return None\n\n\nclass has_str(object):\n def __call__(self, bug):\n return field(bug, 'cf_has_str')\n\n\nclass has_regression_range(object):\n def __call__(self, bug):\n return field(bug, 'cf_has_regression_range')\n\n\nclass has_crash_signature(object):\n def __call__(self, bug):\n return 'cf_crash_signature' in bug and bug['cf_crash_signature'] != ''\n\n\nclass keywords(object):\n def __init__(self, to_ignore=set()):\n self.to_ignore = to_ignore\n\n def __call__(self, bug):\n keywords = []\n subkeywords = []\n for keyword in bug['keywords']:\n if keyword in self.to_ignore:\n continue\n\n keywords.append(keyword)\n\n if keyword.startswith('sec-'):\n subkeywords.append('sec-')\n elif keyword.startswith('csectype-'):\n subkeywords.append('csectype-')\n return keywords + subkeywords\n\n\nclass severity(object):\n def __call__(self, bug):\n return field(bug, 'severity')\n\n\nclass is_coverity_issue(object):\n def __call__(self, bug):\n return re.search('[CID ?[0-9]+]', bug['summary']) is not None or re.search('[CID ?[0-9]+]', bug['whiteboard']) is not None\n\n\nclass has_url(object):\n def __call__(self, bug):\n return bug['url'] != ''\n\n\nclass has_w3c_url(object):\n def __call__(self, bug):\n return 'w3c' in bug['url']\n\n\nclass has_github_url(object):\n def __call__(self, bug):\n return 'github' in bug['url']\n\n\nclass whiteboard(object):\n def __call__(self, bug):\n\n # Split by '['\n paren_splits = bug['whiteboard'].lower().split('[')\n\n # Split splits by space if they weren't in [ and ].\n splits = []\n for paren_split in paren_splits:\n if ']' in paren_split:\n paren_split = paren_split.split(']')\n splits += paren_split\n else:\n splits += paren_split.split(' ')\n\n # Remove empty splits and strip\n splits = [split.strip() for split in splits if split.strip() != '']\n\n # For splits which contain ':', return both the whole string and the string before ':'.\n splits += [split.split(':', 1)[0] for split in splits if ':' in split]\n\n return splits\n\n\nclass patches(object):\n def __call__(self, bug):\n return sum(1 for a in bug['attachments'] if a['is_patch'] or a['content_type'] in ['text/x-review-board-request', 'text/x-phabricator-request'])\n\n\nclass landings(object):\n def __call__(self, bug):\n return sum(1 for c in bug['comments'] if '://hg.mozilla.org/' in 
c['text'])\n\n\nclass title(object):\n def __call__(self, bug):\n ret = []\n\n keywords = [\n 'fail',\n ]\n for keyword in keywords:\n if keyword in bug['summary'].lower():\n ret.append(keyword)\n\n return ret\n\n\nclass product(object):\n def __call__(self, bug):\n return bug['product']\n\n\nclass component(object):\n def __call__(self, bug):\n return bug['component']\n\n\nclass is_mozillian(object):\n def __call__(self, bug):\n return any(bug['creator_detail']['email'].endswith(domain) for domain in ['@mozilla.com', '@mozilla.org'])\n\n\nclass delta_request_merge(object):\n def __call__(self, bug):\n for history in bug['history']:\n for change in history['changes']:\n if change['added'].startswith('approval-mozilla'):\n uplift_request_datetime = datetime.strptime(history['when'], '%Y-%m-%dT%H:%M:%SZ').replace(tzinfo=timezone.utc)\n timedelta = versions.getCloserRelease(uplift_request_datetime)[1] - uplift_request_datetime\n return timedelta.days + timedelta.seconds / (24 * 60 * 60)\n\n return None\n\n\nclass commit_added(object):\n def __call__(self, bug):\n return sum(commit['added'] for commit in bug['commits'])\n\n\nclass commit_deleted(object):\n def __call__(self, bug):\n return sum(commit['deleted'] for commit in bug['commits'])\n\n\nclass commit_types(object):\n def __call__(self, bug):\n return sum((commit['types'] for commit in bug['commits']), [])\n\n\nclass blocked_bugs_number(object):\n def __call__(self, bug):\n return len(bug['blocks'])\n\n\nclass priority(object):\n def __call__(self, bug):\n return bug['priority']\n\n\nclass commit_files_modified_num(object):\n def __call__(self, bug):\n return sum(commit['files_modified_num'] for commit in bug['commits'])\n\n\nclass comment_count(object):\n def __call__(self, bug):\n return field(bug, 'comment_count')\n\n\nclass comment_length(object):\n def __call__(self, bug):\n return sum(len(x['text']) for x in bug['comments'])\n\n\ndef cleanup_url(text):\n text = re.sub(r'http[s]?://(hg.mozilla|searchfox|dxr.mozilla)\\S+', '__CODE_REFERENCE_URL__', text)\n return re.sub(r'http\\S+', '__URL__', text)\n\n\ndef cleanup_fileref(text):\n return re.sub(r'\\w+\\.py\\b|\\w+\\.json\\b|\\w+\\.js\\b|\\w+\\.jsm\\b|\\w+\\.html\\b|\\w+\\.css\\b|\\w+\\.c\\b|\\w+\\.cpp\\b|\\w+\\.h\\b', '__FILE_REFERENCE__', text)\n\n\ndef cleanup_responses(text):\n return re.sub('>[^\\n]+', ' ', text)\n\n\ndef cleanup_hex(text):\n return re.sub(r'\\b0[xX][0-9a-fA-F]+\\b', '__HEX_NUMBER__', text)\n\n\ndef cleanup_dll(text):\n return re.sub(r'\\w+(\\.dll|\\.so|\\.dylib)\\b', '__DLL_NAME__', text)\n\n\ndef cleanup_synonyms(text):\n synonyms = [\n ('safemode', ['safemode', 'safe mode']),\n ('str', ['str', 'steps to reproduce', 'repro steps']),\n ('uaf', ['uaf', 'use after free', 'use-after-free']),\n ('asan', ['asan', 'address sanitizer', 'addresssanitizer']),\n ('permafailure', ['permafailure', 'permafailing', 'permafail', 'perma failure', 'perma failing', 'perma fail', 'perma-failure', 'perma-failing', 'perma-fail']),\n ('spec', ['spec', 'specification']),\n ]\n\n for synonym_group, synonym_list in synonyms:\n text = re.sub('|'.join(fr'\\b{synonym}\\b' for synonym in synonym_list), synonym_group, text, flags=re.IGNORECASE)\n\n return text\n\n\ndef cleanup_crash(text):\n return re.sub(r'bp-[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{6}[0-9]{6}\\b', '__CRASH_STATS_LINK__', text)\n\n\nclass BugExtractor(BaseEstimator, TransformerMixin):\n def __init__(self, feature_extractors, cleanup_functions, rollback=False, rollback_when=None, commit_data=False):\n 
self.feature_extractors = feature_extractors\n self.cleanup_functions = cleanup_functions\n self.rollback = rollback\n self.rollback_when = rollback_when\n self.commit_map = repository.get_commit_map() if commit_data else None\n\n def fit(self, x, y=None):\n return self\n\n def transform(self, bugs):\n results = []\n\n for bug in bugs:\n bug_id = bug['id']\n\n if self.rollback:\n bug = bug_snapshot.rollback(bug, self.rollback_when)\n\n data = {}\n\n if self.commit_map is not None:\n if bug_id in self.commit_map:\n bug['commits'] = self.commit_map[bug_id]\n else:\n bug['commits'] = []\n\n for f in self.feature_extractors:\n res = f(bug)\n\n if res is None:\n continue\n\n if isinstance(res, list):\n for item in res:\n data[f.__class__.__name__ + '-' + item] = 'True'\n continue\n\n if isinstance(res, bool):\n res = str(res)\n\n data[f.__class__.__name__] = res\n\n # TODO: Try simply using all possible fields instead of extracting features manually.\n\n for cleanup_function in self.cleanup_functions:\n bug['summary'] = cleanup_function(bug['summary'])\n for c in bug['comments']:\n c['text'] = cleanup_function(c['text'])\n\n result = {\n 'data': data,\n 'title': bug['summary'],\n 'first_comment': bug['comments'][0]['text'],\n 'comments': ' '.join([c['text'] for c in bug['comments']]),\n }\n\n results.append(result)\n\n return pd.DataFrame(results)\n", "path": "bugbug/bug_features.py"}]} | 3,551 | 136 |
gh_patches_debug_9032 | rasdani/github-patches | git_diff | scikit-hep__pyhf-101 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
speed up CI tests (do we need all conda packages?)
By using Conda, unfortunately the setup phase of the CI jobs have become a bit slower than without conda, maybe we can look into speeding them up again by checking whether we need all the packages that we install during CI
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 setup(
3 name = 'pyhf',
4 version = '0.0.8',
5 description = '(partial) pure python histfactory implementation',
6 url = '',
7 author = 'Lukas Heinrich',
8 author_email = '[email protected]',
9 packages = find_packages(),
10 include_package_data = True,
11 install_requires = [
12 'numpy',
13 'scipy'
14 ],
15 extras_require = {
16 'xmlimport': [
17 'uproot',
18 ],
19 'torch': [
20 'torch'
21 ],
22 'mxnet':[
23 'mxnet',
24 ],
25 'develop': [
26 'pyflakes',
27 'pytest>=3.2.0',
28 'pytest-cov>=2.5.1',
29 'pytest-benchmark[histogram]',
30 'python-coveralls',
31 'matplotlib',
32 'jupyter',
33 'uproot',
34 'papermill',
35 'torch',
36 'tensorflow',
37 'mxnet>=1.0.0',
38 'graphviz',
39 'sphinx',
40 'sphinxcontrib-napoleon',
41 'sphinx_rtd_theme',
42 'nbsphinx',
43 'jsonschema>=2.6.0'
44 ]
45 },
46 entry_points = {
47 },
48 dependency_links = [
49 ]
50 )
51
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -9,7 +9,7 @@
packages = find_packages(),
include_package_data = True,
install_requires = [
- 'numpy',
+ 'numpy>=1.14.3',
'scipy'
],
extras_require = {
@@ -24,7 +24,7 @@
],
'develop': [
'pyflakes',
- 'pytest>=3.2.0',
+ 'pytest>=3.5.1',
'pytest-cov>=2.5.1',
'pytest-benchmark[histogram]',
'python-coveralls',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -9,7 +9,7 @@\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n- 'numpy',\n+ 'numpy>=1.14.3',\n 'scipy'\n ],\n extras_require = {\n@@ -24,7 +24,7 @@\n ],\n 'develop': [\n 'pyflakes',\n- 'pytest>=3.2.0',\n+ 'pytest>=3.5.1',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'python-coveralls',\n", "issue": "speed up CI tests (do we need all conda packages?)\nBy using Conda, unfortunately the setup phase of the CI jobs have become a bit slower than without conda, maybe we can look into speeding them up again by checking whether we need all the packages that we install during CI\n", "before_files": [{"content": "from setuptools import setup, find_packages\nsetup(\n name = 'pyhf',\n version = '0.0.8',\n description = '(partial) pure python histfactory implementation',\n url = '',\n author = 'Lukas Heinrich',\n author_email = '[email protected]',\n packages = find_packages(),\n include_package_data = True,\n install_requires = [\n 'numpy',\n 'scipy'\n ],\n extras_require = {\n 'xmlimport': [\n 'uproot',\n ],\n 'torch': [\n 'torch'\n ],\n 'mxnet':[\n 'mxnet',\n ],\n 'develop': [\n 'pyflakes',\n 'pytest>=3.2.0',\n 'pytest-cov>=2.5.1',\n 'pytest-benchmark[histogram]',\n 'python-coveralls',\n 'matplotlib',\n 'jupyter',\n 'uproot',\n 'papermill',\n 'torch',\n 'tensorflow',\n 'mxnet>=1.0.0',\n 'graphviz',\n 'sphinx',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'jsonschema>=2.6.0'\n ]\n },\n entry_points = {\n },\n dependency_links = [\n ]\n)\n", "path": "setup.py"}]} | 969 | 151 |
gh_patches_debug_29080 | rasdani/github-patches | git_diff | matrix-org__synapse-7506 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement room version 6
Will contain additional features:
* #6898
* https://github.com/matrix-org/synapse/pull/7381
* #7501
Do not enable by default just yet.
</issue>
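The accepted change (see the diff later in this entry) folds the experimental behaviours tracked above into one stable room version 6: strict canonical JSON, the MSC2432 alias-auth change and the MSC2209 notifications check. A minimal, self-contained sketch of the resulting flag combination; the simplified class is an assumption so the snippet runs on its own (the real module builds RoomVersion with attr.s):
```python
class RoomVersion(object):
    def __init__(self, identifier, disposition, special_case_aliases_auth,
                 strict_canonicaljson, limit_notifications_power_levels):
        self.identifier = identifier
        self.disposition = disposition
        self.special_case_aliases_auth = special_case_aliases_auth
        self.strict_canonicaljson = strict_canonicaljson
        self.limit_notifications_power_levels = limit_notifications_power_levels

# Flag values mirror the golden diff for this entry.
V6 = RoomVersion("6", "stable", False, True, True)
KNOWN_ROOM_VERSIONS = {v.identifier: v for v in (V6,)}
print(KNOWN_ROOM_VERSIONS["6"].strict_canonicaljson)  # True
```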
<code>
[start of synapse/api/room_versions.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2019 New Vector Ltd
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from typing import Dict
17
18 import attr
19
20
21 class EventFormatVersions(object):
22 """This is an internal enum for tracking the version of the event format,
23 independently from the room version.
24 """
25
26 V1 = 1 # $id:server event id format
27 V2 = 2 # MSC1659-style $hash event id format: introduced for room v3
28 V3 = 3 # MSC1884-style $hash format: introduced for room v4
29
30
31 KNOWN_EVENT_FORMAT_VERSIONS = {
32 EventFormatVersions.V1,
33 EventFormatVersions.V2,
34 EventFormatVersions.V3,
35 }
36
37
38 class StateResolutionVersions(object):
39 """Enum to identify the state resolution algorithms"""
40
41 V1 = 1 # room v1 state res
42 V2 = 2 # MSC1442 state res: room v2 and later
43
44
45 class RoomDisposition(object):
46 STABLE = "stable"
47 UNSTABLE = "unstable"
48
49
50 @attr.s(slots=True, frozen=True)
51 class RoomVersion(object):
52 """An object which describes the unique attributes of a room version."""
53
54 identifier = attr.ib() # str; the identifier for this version
55 disposition = attr.ib() # str; one of the RoomDispositions
56 event_format = attr.ib() # int; one of the EventFormatVersions
57 state_res = attr.ib() # int; one of the StateResolutionVersions
58 enforce_key_validity = attr.ib() # bool
59
60 # bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
61 special_case_aliases_auth = attr.ib(type=bool)
62 # Strictly enforce canonicaljson, do not allow:
63 # * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
64 # * Floats
65 # * NaN, Infinity, -Infinity
66 strict_canonicaljson = attr.ib(type=bool)
67 # bool: MSC2209: Check 'notifications' key while verifying
68 # m.room.power_levels auth rules.
69 limit_notifications_power_levels = attr.ib(type=bool)
70
71
72 class RoomVersions(object):
73 V1 = RoomVersion(
74 "1",
75 RoomDisposition.STABLE,
76 EventFormatVersions.V1,
77 StateResolutionVersions.V1,
78 enforce_key_validity=False,
79 special_case_aliases_auth=True,
80 strict_canonicaljson=False,
81 limit_notifications_power_levels=False,
82 )
83 V2 = RoomVersion(
84 "2",
85 RoomDisposition.STABLE,
86 EventFormatVersions.V1,
87 StateResolutionVersions.V2,
88 enforce_key_validity=False,
89 special_case_aliases_auth=True,
90 strict_canonicaljson=False,
91 limit_notifications_power_levels=False,
92 )
93 V3 = RoomVersion(
94 "3",
95 RoomDisposition.STABLE,
96 EventFormatVersions.V2,
97 StateResolutionVersions.V2,
98 enforce_key_validity=False,
99 special_case_aliases_auth=True,
100 strict_canonicaljson=False,
101 limit_notifications_power_levels=False,
102 )
103 V4 = RoomVersion(
104 "4",
105 RoomDisposition.STABLE,
106 EventFormatVersions.V3,
107 StateResolutionVersions.V2,
108 enforce_key_validity=False,
109 special_case_aliases_auth=True,
110 strict_canonicaljson=False,
111 limit_notifications_power_levels=False,
112 )
113 V5 = RoomVersion(
114 "5",
115 RoomDisposition.STABLE,
116 EventFormatVersions.V3,
117 StateResolutionVersions.V2,
118 enforce_key_validity=True,
119 special_case_aliases_auth=True,
120 strict_canonicaljson=False,
121 limit_notifications_power_levels=False,
122 )
123 MSC2432_DEV = RoomVersion(
124 "org.matrix.msc2432",
125 RoomDisposition.UNSTABLE,
126 EventFormatVersions.V3,
127 StateResolutionVersions.V2,
128 enforce_key_validity=True,
129 special_case_aliases_auth=False,
130 strict_canonicaljson=False,
131 limit_notifications_power_levels=False,
132 )
133 STRICT_CANONICALJSON = RoomVersion(
134 "org.matrix.strict_canonicaljson",
135 RoomDisposition.UNSTABLE,
136 EventFormatVersions.V3,
137 StateResolutionVersions.V2,
138 enforce_key_validity=True,
139 special_case_aliases_auth=True,
140 strict_canonicaljson=True,
141 limit_notifications_power_levels=False,
142 )
143 MSC2209_DEV = RoomVersion(
144 "org.matrix.msc2209",
145 RoomDisposition.UNSTABLE,
146 EventFormatVersions.V3,
147 StateResolutionVersions.V2,
148 enforce_key_validity=True,
149 special_case_aliases_auth=True,
150 strict_canonicaljson=False,
151 limit_notifications_power_levels=True,
152 )
153
154
155 KNOWN_ROOM_VERSIONS = {
156 v.identifier: v
157 for v in (
158 RoomVersions.V1,
159 RoomVersions.V2,
160 RoomVersions.V3,
161 RoomVersions.V4,
162 RoomVersions.V5,
163 RoomVersions.MSC2432_DEV,
164 RoomVersions.STRICT_CANONICALJSON,
165 RoomVersions.MSC2209_DEV,
166 )
167 } # type: Dict[str, RoomVersion]
168
[end of synapse/api/room_versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/synapse/api/room_versions.py b/synapse/api/room_versions.py
--- a/synapse/api/room_versions.py
+++ b/synapse/api/room_versions.py
@@ -120,34 +120,14 @@
strict_canonicaljson=False,
limit_notifications_power_levels=False,
)
- MSC2432_DEV = RoomVersion(
- "org.matrix.msc2432",
- RoomDisposition.UNSTABLE,
+ V6 = RoomVersion(
+ "6",
+ RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
- strict_canonicaljson=False,
- limit_notifications_power_levels=False,
- )
- STRICT_CANONICALJSON = RoomVersion(
- "org.matrix.strict_canonicaljson",
- RoomDisposition.UNSTABLE,
- EventFormatVersions.V3,
- StateResolutionVersions.V2,
- enforce_key_validity=True,
- special_case_aliases_auth=True,
strict_canonicaljson=True,
- limit_notifications_power_levels=False,
- )
- MSC2209_DEV = RoomVersion(
- "org.matrix.msc2209",
- RoomDisposition.UNSTABLE,
- EventFormatVersions.V3,
- StateResolutionVersions.V2,
- enforce_key_validity=True,
- special_case_aliases_auth=True,
- strict_canonicaljson=False,
limit_notifications_power_levels=True,
)
@@ -160,8 +140,6 @@
RoomVersions.V3,
RoomVersions.V4,
RoomVersions.V5,
- RoomVersions.MSC2432_DEV,
- RoomVersions.STRICT_CANONICALJSON,
- RoomVersions.MSC2209_DEV,
+ RoomVersions.V6,
)
} # type: Dict[str, RoomVersion]
| {"golden_diff": "diff --git a/synapse/api/room_versions.py b/synapse/api/room_versions.py\n--- a/synapse/api/room_versions.py\n+++ b/synapse/api/room_versions.py\n@@ -120,34 +120,14 @@\n strict_canonicaljson=False,\n limit_notifications_power_levels=False,\n )\n- MSC2432_DEV = RoomVersion(\n- \"org.matrix.msc2432\",\n- RoomDisposition.UNSTABLE,\n+ V6 = RoomVersion(\n+ \"6\",\n+ RoomDisposition.STABLE,\n EventFormatVersions.V3,\n StateResolutionVersions.V2,\n enforce_key_validity=True,\n special_case_aliases_auth=False,\n- strict_canonicaljson=False,\n- limit_notifications_power_levels=False,\n- )\n- STRICT_CANONICALJSON = RoomVersion(\n- \"org.matrix.strict_canonicaljson\",\n- RoomDisposition.UNSTABLE,\n- EventFormatVersions.V3,\n- StateResolutionVersions.V2,\n- enforce_key_validity=True,\n- special_case_aliases_auth=True,\n strict_canonicaljson=True,\n- limit_notifications_power_levels=False,\n- )\n- MSC2209_DEV = RoomVersion(\n- \"org.matrix.msc2209\",\n- RoomDisposition.UNSTABLE,\n- EventFormatVersions.V3,\n- StateResolutionVersions.V2,\n- enforce_key_validity=True,\n- special_case_aliases_auth=True,\n- strict_canonicaljson=False,\n limit_notifications_power_levels=True,\n )\n \n@@ -160,8 +140,6 @@\n RoomVersions.V3,\n RoomVersions.V4,\n RoomVersions.V5,\n- RoomVersions.MSC2432_DEV,\n- RoomVersions.STRICT_CANONICALJSON,\n- RoomVersions.MSC2209_DEV,\n+ RoomVersions.V6,\n )\n } # type: Dict[str, RoomVersion]\n", "issue": "Implement room version 6\nWill contain additional features:\r\n* #6898\r\n* https://github.com/matrix-org/synapse/pull/7381\r\n* #7501\r\n\r\nDo not enable by default just yet.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2019 New Vector Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict\n\nimport attr\n\n\nclass EventFormatVersions(object):\n \"\"\"This is an internal enum for tracking the version of the event format,\n independently from the room version.\n \"\"\"\n\n V1 = 1 # $id:server event id format\n V2 = 2 # MSC1659-style $hash event id format: introduced for room v3\n V3 = 3 # MSC1884-style $hash format: introduced for room v4\n\n\nKNOWN_EVENT_FORMAT_VERSIONS = {\n EventFormatVersions.V1,\n EventFormatVersions.V2,\n EventFormatVersions.V3,\n}\n\n\nclass StateResolutionVersions(object):\n \"\"\"Enum to identify the state resolution algorithms\"\"\"\n\n V1 = 1 # room v1 state res\n V2 = 2 # MSC1442 state res: room v2 and later\n\n\nclass RoomDisposition(object):\n STABLE = \"stable\"\n UNSTABLE = \"unstable\"\n\n\[email protected](slots=True, frozen=True)\nclass RoomVersion(object):\n \"\"\"An object which describes the unique attributes of a room version.\"\"\"\n\n identifier = attr.ib() # str; the identifier for this version\n disposition = attr.ib() # str; one of the RoomDispositions\n event_format = attr.ib() # int; one of the EventFormatVersions\n state_res = attr.ib() # int; one of the StateResolutionVersions\n enforce_key_validity = attr.ib() # bool\n\n # bool: before MSC2261/MSC2432, m.room.aliases 
had special auth rules and redaction rules\n special_case_aliases_auth = attr.ib(type=bool)\n # Strictly enforce canonicaljson, do not allow:\n # * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]\n # * Floats\n # * NaN, Infinity, -Infinity\n strict_canonicaljson = attr.ib(type=bool)\n # bool: MSC2209: Check 'notifications' key while verifying\n # m.room.power_levels auth rules.\n limit_notifications_power_levels = attr.ib(type=bool)\n\n\nclass RoomVersions(object):\n V1 = RoomVersion(\n \"1\",\n RoomDisposition.STABLE,\n EventFormatVersions.V1,\n StateResolutionVersions.V1,\n enforce_key_validity=False,\n special_case_aliases_auth=True,\n strict_canonicaljson=False,\n limit_notifications_power_levels=False,\n )\n V2 = RoomVersion(\n \"2\",\n RoomDisposition.STABLE,\n EventFormatVersions.V1,\n StateResolutionVersions.V2,\n enforce_key_validity=False,\n special_case_aliases_auth=True,\n strict_canonicaljson=False,\n limit_notifications_power_levels=False,\n )\n V3 = RoomVersion(\n \"3\",\n RoomDisposition.STABLE,\n EventFormatVersions.V2,\n StateResolutionVersions.V2,\n enforce_key_validity=False,\n special_case_aliases_auth=True,\n strict_canonicaljson=False,\n limit_notifications_power_levels=False,\n )\n V4 = RoomVersion(\n \"4\",\n RoomDisposition.STABLE,\n EventFormatVersions.V3,\n StateResolutionVersions.V2,\n enforce_key_validity=False,\n special_case_aliases_auth=True,\n strict_canonicaljson=False,\n limit_notifications_power_levels=False,\n )\n V5 = RoomVersion(\n \"5\",\n RoomDisposition.STABLE,\n EventFormatVersions.V3,\n StateResolutionVersions.V2,\n enforce_key_validity=True,\n special_case_aliases_auth=True,\n strict_canonicaljson=False,\n limit_notifications_power_levels=False,\n )\n MSC2432_DEV = RoomVersion(\n \"org.matrix.msc2432\",\n RoomDisposition.UNSTABLE,\n EventFormatVersions.V3,\n StateResolutionVersions.V2,\n enforce_key_validity=True,\n special_case_aliases_auth=False,\n strict_canonicaljson=False,\n limit_notifications_power_levels=False,\n )\n STRICT_CANONICALJSON = RoomVersion(\n \"org.matrix.strict_canonicaljson\",\n RoomDisposition.UNSTABLE,\n EventFormatVersions.V3,\n StateResolutionVersions.V2,\n enforce_key_validity=True,\n special_case_aliases_auth=True,\n strict_canonicaljson=True,\n limit_notifications_power_levels=False,\n )\n MSC2209_DEV = RoomVersion(\n \"org.matrix.msc2209\",\n RoomDisposition.UNSTABLE,\n EventFormatVersions.V3,\n StateResolutionVersions.V2,\n enforce_key_validity=True,\n special_case_aliases_auth=True,\n strict_canonicaljson=False,\n limit_notifications_power_levels=True,\n )\n\n\nKNOWN_ROOM_VERSIONS = {\n v.identifier: v\n for v in (\n RoomVersions.V1,\n RoomVersions.V2,\n RoomVersions.V3,\n RoomVersions.V4,\n RoomVersions.V5,\n RoomVersions.MSC2432_DEV,\n RoomVersions.STRICT_CANONICALJSON,\n RoomVersions.MSC2209_DEV,\n )\n} # type: Dict[str, RoomVersion]\n", "path": "synapse/api/room_versions.py"}]} | 2,231 | 423 |
gh_patches_debug_15497 | rasdani/github-patches | git_diff | ipython__ipython-4363 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`?` may generate hundreds of cell
By mistake I have executed a cell like
```
for i in range(3):
x= range?
```
but with ~70 instead of 3
which generated 70 code cells with just `x= range` in them...
it was _really_ painful to clean; it would be nice to prevent something like that
</issue>
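The fix applied for this issue (diff at the end of this entry) makes write_payload overwrite an earlier payload that has the same source instead of appending another copy, so a loop that triggers `?` repeatedly produces one cell rather than dozens. A standalone sketch of that behaviour, with a simplified class and invented payload values:
```python
class PayloadManager(object):
    def __init__(self):
        self._payload = []

    def write_payload(self, data, single=True):
        if not isinstance(data, dict):
            raise TypeError('Each payload write must be a dict, got: %r' % data)
        if single and 'source' in data:
            for i, existing in enumerate(self._payload):
                if existing.get('source') == data['source']:
                    self._payload[i] = data  # overwrite instead of append
                    return
        self._payload.append(data)

pm = PayloadManager()
for _ in range(70):
    pm.write_payload({'source': 'set_next_input', 'text': 'x= range'})
print(len(pm._payload))  # 1
```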
<code>
[start of IPython/core/payload.py]
1 # -*- coding: utf-8 -*-
2 """Payload system for IPython.
3
4 Authors:
5
6 * Fernando Perez
7 * Brian Granger
8 """
9
10 #-----------------------------------------------------------------------------
11 # Copyright (C) 2008-2011 The IPython Development Team
12 #
13 # Distributed under the terms of the BSD License. The full license is in
14 # the file COPYING, distributed as part of this software.
15 #-----------------------------------------------------------------------------
16
17 #-----------------------------------------------------------------------------
18 # Imports
19 #-----------------------------------------------------------------------------
20
21 from IPython.config.configurable import Configurable
22 from IPython.utils.traitlets import List
23
24 #-----------------------------------------------------------------------------
25 # Main payload class
26 #-----------------------------------------------------------------------------
27
28 class PayloadManager(Configurable):
29
30 _payload = List([])
31
32 def write_payload(self, data):
33 if not isinstance(data, dict):
34 raise TypeError('Each payload write must be a dict, got: %r' % data)
35 self._payload.append(data)
36
37 def read_payload(self):
38 return self._payload
39
40 def clear_payload(self):
41 self._payload = []
42
[end of IPython/core/payload.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/IPython/core/payload.py b/IPython/core/payload.py
--- a/IPython/core/payload.py
+++ b/IPython/core/payload.py
@@ -29,9 +29,23 @@
_payload = List([])
- def write_payload(self, data):
+ def write_payload(self, data, single=True):
+ """Include or update the specified `data` payload in the PayloadManager.
+
+ If a previous payload with the same source exists and `single` is True,
+ it will be overwritten with the new one.
+ """
+
if not isinstance(data, dict):
raise TypeError('Each payload write must be a dict, got: %r' % data)
+
+ if single and 'source' in data:
+ source = data['source']
+ for i, pl in enumerate(self._payload):
+ if 'source' in pl and pl['source'] == source:
+ self._payload[i] = data
+ return
+
self._payload.append(data)
def read_payload(self):
| {"golden_diff": "diff --git a/IPython/core/payload.py b/IPython/core/payload.py\n--- a/IPython/core/payload.py\n+++ b/IPython/core/payload.py\n@@ -29,9 +29,23 @@\n \n _payload = List([])\n \n- def write_payload(self, data):\n+ def write_payload(self, data, single=True):\n+ \"\"\"Include or update the specified `data` payload in the PayloadManager.\n+\n+ If a previous payload with the same source exists and `single` is True,\n+ it will be overwritten with the new one.\n+ \"\"\"\n+\n if not isinstance(data, dict):\n raise TypeError('Each payload write must be a dict, got: %r' % data)\n+\n+ if single and 'source' in data:\n+ source = data['source']\n+ for i, pl in enumerate(self._payload):\n+ if 'source' in pl and pl['source'] == source:\n+ self._payload[i] = data\n+ return\n+\n self._payload.append(data)\n \n def read_payload(self):\n", "issue": "`?` may generate hundreds of cell \nBy mistake I have executed a cell like \r\n\r\n```\r\nfor i in range(3):\r\n x= range?\r\n```\r\n\r\nbut with ~70 instead of 3\r\nwhich generated 70 code cell with just `x= range` in it...\r\nit was _really_ painfull to clean, it would be nice to prevent something like that\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Payload system for IPython.\n\nAuthors:\n\n* Fernando Perez\n* Brian Granger\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (C) 2008-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\n\nfrom IPython.config.configurable import Configurable\nfrom IPython.utils.traitlets import List\n\n#-----------------------------------------------------------------------------\n# Main payload class\n#-----------------------------------------------------------------------------\n\nclass PayloadManager(Configurable):\n\n _payload = List([])\n\n def write_payload(self, data):\n if not isinstance(data, dict):\n raise TypeError('Each payload write must be a dict, got: %r' % data)\n self._payload.append(data)\n\n def read_payload(self):\n return self._payload\n\n def clear_payload(self):\n self._payload = []\n", "path": "IPython/core/payload.py"}]} | 912 | 234 |
gh_patches_debug_32362 | rasdani/github-patches | git_diff | ansible__ansible-23872 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ec2_facts throws str error in Python 3.5+
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
ec2_facts
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.3.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
```
# inventory file that I use via ansible-playbook -i
[all]
localhost
[all:vars]
ansible_python_interpreter=/usr/bin/python3
```
##### OS / ENVIRONMENT
Ubuntu 16.04.2 LTS
Python 3.5.2
##### SUMMARY
When trying to run `ec2_facts`, I get this error: `TypeError: a bytes-like object is required, not 'str'`
<!--- Explain the problem briefly -->
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
# in any playbook
- name: Gather facts
action: ec2_facts
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
That ec2_facts would work.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
It errors out with `a bytes-like object is required, not 'str'`
Just as a note, I thought this was connected to #17038, but perhaps not.
</issue>
<code>
[start of lib/ansible/modules/cloud/amazon/ec2_facts.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # This file is part of Ansible
5 #
6 # Ansible is free software: you can redistribute it and/or modify
7 # it under the terms of the GNU General Public License as published by
8 # the Free Software Foundation, either version 3 of the License, or
9 # (at your option) any later version.
10 #
11 # Ansible is distributed in the hope that it will be useful,
12 # but WITHOUT ANY WARRANTY; without even the implied warranty of
13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
14 # GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License
17 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
18
19 ANSIBLE_METADATA = {'metadata_version': '1.0',
20 'status': ['stableinterface'],
21 'supported_by': 'curated'}
22
23
24 DOCUMENTATION = '''
25 ---
26 module: ec2_facts
27 short_description: Gathers facts about remote hosts within ec2 (aws)
28 version_added: "1.0"
29 options:
30 validate_certs:
31 description:
32 - If C(no), SSL certificates will not be validated. This should only be used
33 on personally controlled sites using self-signed certificates.
34 required: false
35 default: 'yes'
36 choices: ['yes', 'no']
37 version_added: '1.5.1'
38 description:
39 - This module fetches data from the metadata servers in ec2 (aws) as per
40 http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html.
41 The module must be called from within the EC2 instance itself.
42 notes:
43 - Parameters to filter on ec2_facts may be added later.
44 author: "Silviu Dicu (@silviud) <[email protected]>"
45 '''
46
47 EXAMPLES = '''
48 # Conditional example
49 - name: Gather facts
50 ec2_facts:
51
52 - name: Conditional
53 debug:
54 msg: "This instance is a t1.micro"
55 when: ansible_ec2_instance_type == "t1.micro"
56 '''
57
58 import socket
59 import re
60
61 socket.setdefaulttimeout(5)
62
63
64 class Ec2Metadata(object):
65 ec2_metadata_uri = 'http://169.254.169.254/latest/meta-data/'
66 ec2_sshdata_uri = 'http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key'
67 ec2_userdata_uri = 'http://169.254.169.254/latest/user-data/'
68
69 AWS_REGIONS = ('ap-northeast-1',
70 'ap-northeast-2',
71 'ap-south-1',
72 'ap-southeast-1',
73 'ap-southeast-2',
74 'ca-central-1',
75 'eu-central-1',
76 'eu-west-1',
77 'eu-west-2',
78 'sa-east-1',
79 'us-east-1',
80 'us-east-2',
81 'us-west-1',
82 'us-west-2',
83 'us-gov-west-1',
84 )
85
86 def __init__(self, module, ec2_metadata_uri=None, ec2_sshdata_uri=None, ec2_userdata_uri=None):
87 self.module = module
88 self.uri_meta = ec2_metadata_uri or self.ec2_metadata_uri
89 self.uri_user = ec2_userdata_uri or self.ec2_userdata_uri
90 self.uri_ssh = ec2_sshdata_uri or self.ec2_sshdata_uri
91 self._data = {}
92 self._prefix = 'ansible_ec2_%s'
93
94 def _fetch(self, url):
95 (response, info) = fetch_url(self.module, url, force=True)
96 if response:
97 data = response.read()
98 else:
99 data = None
100 return data
101
102 def _mangle_fields(self, fields, uri, filter_patterns=['public-keys-0']):
103 new_fields = {}
104 for key, value in fields.items():
105 split_fields = key[len(uri):].split('/')
106 if len(split_fields) > 1 and split_fields[1]:
107 new_key = "-".join(split_fields)
108 new_fields[self._prefix % new_key] = value
109 else:
110 new_key = "".join(split_fields)
111 new_fields[self._prefix % new_key] = value
112 for pattern in filter_patterns:
113 for key in new_fields.keys():
114 match = re.search(pattern, key)
115 if match:
116 new_fields.pop(key)
117 return new_fields
118
119 def fetch(self, uri, recurse=True):
120 raw_subfields = self._fetch(uri)
121 if not raw_subfields:
122 return
123 subfields = raw_subfields.split('\n')
124 for field in subfields:
125 if field.endswith('/') and recurse:
126 self.fetch(uri + field)
127 if uri.endswith('/'):
128 new_uri = uri + field
129 else:
130 new_uri = uri + '/' + field
131 if new_uri not in self._data and not new_uri.endswith('/'):
132 content = self._fetch(new_uri)
133 if field == 'security-groups':
134 sg_fields = ",".join(content.split('\n'))
135 self._data['%s' % (new_uri)] = sg_fields
136 else:
137 self._data['%s' % (new_uri)] = content
138
139 def fix_invalid_varnames(self, data):
140 """Change ':'' and '-' to '_' to ensure valid template variable names"""
141 for (key, value) in data.items():
142 if ':' in key or '-' in key:
143 newkey = key.replace(':', '_').replace('-', '_')
144 del data[key]
145 data[newkey] = value
146
147 def add_ec2_region(self, data):
148 """Use the 'ansible_ec2_placement_availability_zone' key/value
149 pair to add 'ansible_ec2_placement_region' key/value pair with
150 the EC2 region name.
151 """
152
153 # Only add a 'ansible_ec2_placement_region' key if the
154 # 'ansible_ec2_placement_availability_zone' exists.
155 zone = data.get('ansible_ec2_placement_availability_zone')
156 if zone is not None:
157 # Use the zone name as the region name unless the zone
158 # name starts with a known AWS region name.
159 region = zone
160 for r in self.AWS_REGIONS:
161 if zone.startswith(r):
162 region = r
163 break
164 data['ansible_ec2_placement_region'] = region
165
166 def run(self):
167 self.fetch(self.uri_meta) # populate _data
168 data = self._mangle_fields(self._data, self.uri_meta)
169 data[self._prefix % 'user-data'] = self._fetch(self.uri_user)
170 data[self._prefix % 'public-key'] = self._fetch(self.uri_ssh)
171 self.fix_invalid_varnames(data)
172 self.add_ec2_region(data)
173 return data
174
175
176 def main():
177 argument_spec = url_argument_spec()
178
179 module = AnsibleModule(
180 argument_spec=argument_spec,
181 supports_check_mode=True,
182 )
183
184 ec2_facts = Ec2Metadata(module).run()
185 ec2_facts_result = dict(changed=False, ansible_facts=ec2_facts)
186
187 module.exit_json(**ec2_facts_result)
188
189
190 # import module snippets
191 from ansible.module_utils.basic import *
192 from ansible.module_utils.urls import *
193
194 if __name__ == '__main__':
195 main()
196
[end of lib/ansible/modules/cloud/amazon/ec2_facts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/modules/cloud/amazon/ec2_facts.py b/lib/ansible/modules/cloud/amazon/ec2_facts.py
--- a/lib/ansible/modules/cloud/amazon/ec2_facts.py
+++ b/lib/ansible/modules/cloud/amazon/ec2_facts.py
@@ -57,7 +57,7 @@
import socket
import re
-
+from ansible.module_utils._text import to_text
socket.setdefaulttimeout(5)
@@ -97,7 +97,7 @@
data = response.read()
else:
data = None
- return data
+ return to_text(data)
def _mangle_fields(self, fields, uri, filter_patterns=['public-keys-0']):
new_fields = {}
@@ -110,7 +110,7 @@
new_key = "".join(split_fields)
new_fields[self._prefix % new_key] = value
for pattern in filter_patterns:
- for key in new_fields.keys():
+ for key in dict(new_fields):
match = re.search(pattern, key)
if match:
new_fields.pop(key)
@@ -138,11 +138,10 @@
def fix_invalid_varnames(self, data):
"""Change ':'' and '-' to '_' to ensure valid template variable names"""
- for (key, value) in data.items():
+ for key in data:
if ':' in key or '-' in key:
newkey = key.replace(':', '_').replace('-', '_')
- del data[key]
- data[newkey] = value
+ data[newkey] = data.pop(key)
def add_ec2_region(self, data):
"""Use the 'ansible_ec2_placement_availability_zone' key/value
| {"golden_diff": "diff --git a/lib/ansible/modules/cloud/amazon/ec2_facts.py b/lib/ansible/modules/cloud/amazon/ec2_facts.py\n--- a/lib/ansible/modules/cloud/amazon/ec2_facts.py\n+++ b/lib/ansible/modules/cloud/amazon/ec2_facts.py\n@@ -57,7 +57,7 @@\n \n import socket\n import re\n-\n+from ansible.module_utils._text import to_text\n socket.setdefaulttimeout(5)\n \n \n@@ -97,7 +97,7 @@\n data = response.read()\n else:\n data = None\n- return data\n+ return to_text(data)\n \n def _mangle_fields(self, fields, uri, filter_patterns=['public-keys-0']):\n new_fields = {}\n@@ -110,7 +110,7 @@\n new_key = \"\".join(split_fields)\n new_fields[self._prefix % new_key] = value\n for pattern in filter_patterns:\n- for key in new_fields.keys():\n+ for key in dict(new_fields):\n match = re.search(pattern, key)\n if match:\n new_fields.pop(key)\n@@ -138,11 +138,10 @@\n \n def fix_invalid_varnames(self, data):\n \"\"\"Change ':'' and '-' to '_' to ensure valid template variable names\"\"\"\n- for (key, value) in data.items():\n+ for key in data:\n if ':' in key or '-' in key:\n newkey = key.replace(':', '_').replace('-', '_')\n- del data[key]\n- data[newkey] = value\n+ data[newkey] = data.pop(key)\n \n def add_ec2_region(self, data):\n \"\"\"Use the 'ansible_ec2_placement_availability_zone' key/value\n", "issue": "ec2_facts throws str error in Python 3.5+\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nec2_facts\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\r\n```\r\nansible 2.3.0.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n python version = 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\n```\r\n# inventory file that I use via ansible-playbook -i\r\n[all]\r\nlocalhost\r\n\r\n[all:vars]\r\nansible_python_interpreter=/usr/bin/python3\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 16.04.2 LTS\r\nPython 3.5.2\r\n\r\n##### SUMMARY\r\nWhen trying to run `ec2_facts`, I get this error: `TypeError: a bytes-like object is required, not 'str'`\r\n<!--- Explain the problem briefly -->\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n# in any playbook\r\n - name: Gather facts\r\n action: ec2_facts\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nThat ec2_facts would work.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? 
If possible run with extra verbosity (-vvvv) -->\r\nIt errors out with `a bytes-like object is required, not 'str'`\r\n\r\nJust as a note, I thought this was connected to #17038, but perhaps not.\nec2_facts throws str error in Python 3.5+\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nec2_facts\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \u201cansible --version\u201d between quotes below -->\r\n```\r\nansible 2.3.0.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n python version = 3.5.2 (default, Nov 17 2016, 17:05:23) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\n```\r\n# inventory file that I use via ansible-playbook -i\r\n[all]\r\nlocalhost\r\n\r\n[all:vars]\r\nansible_python_interpreter=/usr/bin/python3\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 16.04.2 LTS\r\nPython 3.5.2\r\n\r\n##### SUMMARY\r\nWhen trying to run `ec2_facts`, I get this error: `TypeError: a bytes-like object is required, not 'str'`\r\n<!--- Explain the problem briefly -->\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n# in any playbook\r\n - name: Gather facts\r\n action: ec2_facts\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\nThat ec2_facts would work.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\nIt errors out with `a bytes-like object is required, not 'str'`\r\n\r\nJust as a note, I thought this was connected to #17038, but perhaps not.\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nANSIBLE_METADATA = {'metadata_version': '1.0',\n 'status': ['stableinterface'],\n 'supported_by': 'curated'}\n\n\nDOCUMENTATION = '''\n---\nmodule: ec2_facts\nshort_description: Gathers facts about remote hosts within ec2 (aws)\nversion_added: \"1.0\"\noptions:\n validate_certs:\n description:\n - If C(no), SSL certificates will not be validated. 
This should only be used\n on personally controlled sites using self-signed certificates.\n required: false\n default: 'yes'\n choices: ['yes', 'no']\n version_added: '1.5.1'\ndescription:\n - This module fetches data from the metadata servers in ec2 (aws) as per\n http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html.\n The module must be called from within the EC2 instance itself.\nnotes:\n - Parameters to filter on ec2_facts may be added later.\nauthor: \"Silviu Dicu (@silviud) <[email protected]>\"\n'''\n\nEXAMPLES = '''\n# Conditional example\n- name: Gather facts\n ec2_facts:\n\n- name: Conditional\n debug:\n msg: \"This instance is a t1.micro\"\n when: ansible_ec2_instance_type == \"t1.micro\"\n'''\n\nimport socket\nimport re\n\nsocket.setdefaulttimeout(5)\n\n\nclass Ec2Metadata(object):\n ec2_metadata_uri = 'http://169.254.169.254/latest/meta-data/'\n ec2_sshdata_uri = 'http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key'\n ec2_userdata_uri = 'http://169.254.169.254/latest/user-data/'\n\n AWS_REGIONS = ('ap-northeast-1',\n 'ap-northeast-2',\n 'ap-south-1',\n 'ap-southeast-1',\n 'ap-southeast-2',\n 'ca-central-1',\n 'eu-central-1',\n 'eu-west-1',\n 'eu-west-2',\n 'sa-east-1',\n 'us-east-1',\n 'us-east-2',\n 'us-west-1',\n 'us-west-2',\n 'us-gov-west-1',\n )\n\n def __init__(self, module, ec2_metadata_uri=None, ec2_sshdata_uri=None, ec2_userdata_uri=None):\n self.module = module\n self.uri_meta = ec2_metadata_uri or self.ec2_metadata_uri\n self.uri_user = ec2_userdata_uri or self.ec2_userdata_uri\n self.uri_ssh = ec2_sshdata_uri or self.ec2_sshdata_uri\n self._data = {}\n self._prefix = 'ansible_ec2_%s'\n\n def _fetch(self, url):\n (response, info) = fetch_url(self.module, url, force=True)\n if response:\n data = response.read()\n else:\n data = None\n return data\n\n def _mangle_fields(self, fields, uri, filter_patterns=['public-keys-0']):\n new_fields = {}\n for key, value in fields.items():\n split_fields = key[len(uri):].split('/')\n if len(split_fields) > 1 and split_fields[1]:\n new_key = \"-\".join(split_fields)\n new_fields[self._prefix % new_key] = value\n else:\n new_key = \"\".join(split_fields)\n new_fields[self._prefix % new_key] = value\n for pattern in filter_patterns:\n for key in new_fields.keys():\n match = re.search(pattern, key)\n if match:\n new_fields.pop(key)\n return new_fields\n\n def fetch(self, uri, recurse=True):\n raw_subfields = self._fetch(uri)\n if not raw_subfields:\n return\n subfields = raw_subfields.split('\\n')\n for field in subfields:\n if field.endswith('/') and recurse:\n self.fetch(uri + field)\n if uri.endswith('/'):\n new_uri = uri + field\n else:\n new_uri = uri + '/' + field\n if new_uri not in self._data and not new_uri.endswith('/'):\n content = self._fetch(new_uri)\n if field == 'security-groups':\n sg_fields = \",\".join(content.split('\\n'))\n self._data['%s' % (new_uri)] = sg_fields\n else:\n self._data['%s' % (new_uri)] = content\n\n def fix_invalid_varnames(self, data):\n \"\"\"Change ':'' and '-' to '_' to ensure valid template variable names\"\"\"\n for (key, value) in data.items():\n if ':' in key or '-' in key:\n newkey = key.replace(':', '_').replace('-', '_')\n del data[key]\n data[newkey] = value\n\n def add_ec2_region(self, data):\n \"\"\"Use the 'ansible_ec2_placement_availability_zone' key/value\n pair to add 'ansible_ec2_placement_region' key/value pair with\n the EC2 region name.\n \"\"\"\n\n # Only add a 'ansible_ec2_placement_region' key if the\n # 
'ansible_ec2_placement_availability_zone' exists.\n zone = data.get('ansible_ec2_placement_availability_zone')\n if zone is not None:\n # Use the zone name as the region name unless the zone\n # name starts with a known AWS region name.\n region = zone\n for r in self.AWS_REGIONS:\n if zone.startswith(r):\n region = r\n break\n data['ansible_ec2_placement_region'] = region\n\n def run(self):\n self.fetch(self.uri_meta) # populate _data\n data = self._mangle_fields(self._data, self.uri_meta)\n data[self._prefix % 'user-data'] = self._fetch(self.uri_user)\n data[self._prefix % 'public-key'] = self._fetch(self.uri_ssh)\n self.fix_invalid_varnames(data)\n self.add_ec2_region(data)\n return data\n\n\ndef main():\n argument_spec = url_argument_spec()\n\n module = AnsibleModule(\n argument_spec=argument_spec,\n supports_check_mode=True,\n )\n\n ec2_facts = Ec2Metadata(module).run()\n ec2_facts_result = dict(changed=False, ansible_facts=ec2_facts)\n\n module.exit_json(**ec2_facts_result)\n\n\n# import module snippets\nfrom ansible.module_utils.basic import *\nfrom ansible.module_utils.urls import *\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/cloud/amazon/ec2_facts.py"}]} | 3,589 | 380 |
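The golden diff above boils down to two Python 3 fixes: decode the metadata HTTP response from bytes to text before calling str methods on it, and never mutate the facts dict while iterating over its live key view. A standalone sketch of both ideas; to_text below is a stand-in for ansible.module_utils._text.to_text, and the sample values are invented:
```python
def to_text(data, encoding='utf-8'):
    # Minimal stand-in: decode bytes, pass anything else through.
    return data.decode(encoding) if isinstance(data, bytes) else data

raw = b'instance-type\nlocal-ipv4\n'
print(to_text(raw).split('\n'))  # works on Python 3; raw.split('\n') would raise the TypeError from the issue

data = {'placement:availability-zone': 'us-east-1a'}
for key in list(data):  # snapshot the keys before renaming entries in place
    if ':' in key or '-' in key:
        data[key.replace(':', '_').replace('-', '_')] = data.pop(key)
print(data)  # {'placement_availability_zone': 'us-east-1a'}
```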
gh_patches_debug_66590 | rasdani/github-patches | git_diff | StackStorm__st2-3843 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Action 'linux.service' fails on Centos7
When I tried to restart a service on the Centos7 server, I got the following error:
```
Traceback (most recent call last):
File "/tmp/5a0459bc07ac686fb813a920/service.py", line 24, in <module>
subprocess.call(cmd, shell=True)
NameError: name 'cmd' is not defined
```
After investigation the resolution has been found:
in file /opt/stackstorm/packs/linux/actions/service.py the entry
`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):`
fixed to
`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or re.search(distro, 'CentOS Linux'):`
The issue has gone
</issue>
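The one-line fix quoted in the issue adds the CentOS distro string to the systemd branch. A standalone sketch of that dispatch, keeping the original script's re.search(distro, ...) argument order; the Ubuntu branch is simplified here and the nginx example is invented:
```python
import re

def build_command(distro, act, service):
    if re.search(distro, 'Ubuntu'):
        return "/etc/init.d/%s %s" % (service, act)
    elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or \
            re.search(distro, 'CentOS Linux'):
        return "systemctl %s %s" % (act, service)
    return None

print(build_command('CentOS Linux', 'restart', 'nginx'))  # systemctl restart nginx
```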
<code>
[start of contrib/linux/actions/service.py]
1 #!/usr/bin/env python
2
3 import re
4 import sys
5 import os
6 import platform
7 import subprocess
8
9 distro = platform.linux_distribution()[0]
10
11 args = {'act': sys.argv[1], 'service': sys.argv[2]}
12
13 if re.search(distro, 'Ubuntu'):
14 if os.path.isfile("/etc/init/%s.conf" % args['service']):
15 cmd = args['act'] + " " + args['service']
16 elif os.path.isfile("/etc/init.d/%s" % args['service']):
17 cmd = "/etc/init.d/%s %s" % (args['service'], args['act'])
18 else:
19 print("Unknown service")
20 sys.exit(2)
21 elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):
22 cmd = "systemctl %s %s" % (args['act'], args['service'])
23
24 subprocess.call(cmd, shell=True)
25
[end of contrib/linux/actions/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/contrib/linux/actions/service.py b/contrib/linux/actions/service.py
--- a/contrib/linux/actions/service.py
+++ b/contrib/linux/actions/service.py
@@ -18,7 +18,8 @@
else:
print("Unknown service")
sys.exit(2)
-elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):
+elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or \
+ re.search(distro, 'CentOS Linux'):
cmd = "systemctl %s %s" % (args['act'], args['service'])
subprocess.call(cmd, shell=True)
| {"golden_diff": "diff --git a/contrib/linux/actions/service.py b/contrib/linux/actions/service.py\n--- a/contrib/linux/actions/service.py\n+++ b/contrib/linux/actions/service.py\n@@ -18,7 +18,8 @@\n else:\n print(\"Unknown service\")\n sys.exit(2)\n-elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):\n+elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or \\\n+ re.search(distro, 'CentOS Linux'):\n cmd = \"systemctl %s %s\" % (args['act'], args['service'])\n \n subprocess.call(cmd, shell=True)\n", "issue": "Action 'linux.service' fails on Centos7\nWhen I tried to execute restart some service on the Centos7 server got the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/tmp/5a0459bc07ac686fb813a920/service.py\", line 24, in <module>\r\n subprocess.call(cmd, shell=True)\r\nNameError: name 'cmd' is not defined\r\n```\r\nAfter investigation the resolution has been found:\r\nin file /opt/stackstorm/packs/linux/actions/service.py the entry\r\n\r\n`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):`\r\n\r\nfixed to \r\n\r\n`elif re.search(distro, 'Redhat') or re.search(distro, 'Fedora') or re.search(distro, 'CentOS Linux'):`\r\n\r\nThe issue has gone\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport re\nimport sys\nimport os\nimport platform\nimport subprocess\n\ndistro = platform.linux_distribution()[0]\n\nargs = {'act': sys.argv[1], 'service': sys.argv[2]}\n\nif re.search(distro, 'Ubuntu'):\n if os.path.isfile(\"/etc/init/%s.conf\" % args['service']):\n cmd = args['act'] + \" \" + args['service']\n elif os.path.isfile(\"/etc/init.d/%s\" % args['service']):\n cmd = \"/etc/init.d/%s %s\" % (args['service'], args['act'])\n else:\n print(\"Unknown service\")\n sys.exit(2)\nelif re.search(distro, 'Redhat') or re.search(distro, 'Fedora'):\n cmd = \"systemctl %s %s\" % (args['act'], args['service'])\n\nsubprocess.call(cmd, shell=True)\n", "path": "contrib/linux/actions/service.py"}]} | 964 | 148 |
gh_patches_debug_29344 | rasdani/github-patches | git_diff | getnikola__nikola-2241 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
jinjify: convert a single mako template file to jinja?
It looks like jinjify requires a full theme to work. For example, how do you synchronize the two templates in the [projectpages plugin](https://github.com/getnikola/plugins/tree/master/v7/projectpages/templates) with jinjify?
</issue>
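On the question itself: mako2jinja() in the script below operates on a single open file object and returns the converted text, so it can be pointed at one template without a full theme. A hypothetical sketch, assuming the function is importable from the script and using invented paths; note the real jinjify() also applies the dumb_replacements/dumber_replacements post-processing, which is skipped here:
```python
import io
from jinjify import mako2jinja  # assumes scripts/jinjify.py is on the path

with io.open("templates/project.tmpl", "r", encoding="utf-8") as inf:
    converted = mako2jinja(inf)

with io.open("templates-jinja/project.tmpl", "w+", encoding="utf-8") as outf:
    outf.write(converted + "\n")
```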
<code>
[start of scripts/jinjify.py]
1 #!/usr/bin/env python
2 import io
3 import glob
4 import sys
5 import os
6 import re
7 import json
8 import shutil
9
10 import colorama
11 import jinja2
12
13 dumb_replacements = [
14 ["{% if any(post.is_mathjax for post in posts) %}", '{% if posts|selectattr("is_mathjax")|list %}'],
15 ["json.dumps(title)", "title|tojson"],
16 ["{{ parent.extra_head() }}", "{{ super() }}"],
17 ["{{ parent.content() }}", "{{ super() }}"],
18 ["prefix='\\", "prefix='"],
19 ["og: http://ogp.me/ns# \\", "og: http://ogp.me/ns#"],
20 ["article: http://ogp.me/ns/article# \\", "article: http://ogp.me/ns/article#"],
21 ["fb: http://ogp.me/ns/fb# \\", "fb: http://ogp.me/ns/fb#"],
22 ['dir="rtl" \\', 'dir="rtl"'],
23 ['sorted(translations)', 'translations|sort'],
24 ]
25
26 dumber_replacements = [
27 ['<%! import json %>\n\n', ''],
28 ["<html\n\\", "<html\n"],
29 ["\n'\\\n", "\n'\n"],
30 ["{% endif %}\n\\", "{% endif %}\n"]
31 ]
32
33
34 def jinjify(in_theme, out_theme):
35 """Convert in_theme into a jinja version and put it in out_theme"""
36
37 in_templates_path = os.path.join(in_theme, "templates")
38 out_templates_path = os.path.join(out_theme, "templates")
39 try:
40 os.makedirs(out_templates_path)
41 except:
42 pass
43 lookup = jinja2.Environment()
44 lookup.filters['tojson'] = json.dumps
45 lookup.loader = jinja2.FileSystemLoader([out_templates_path], encoding='utf-8')
46 for template in glob.glob(os.path.join(in_templates_path, "*.tmpl")):
47 out_template = os.path.join(out_templates_path, os.path.basename(template))
48 with io.open(template, "r", encoding="utf-8") as inf:
49 data = mako2jinja(inf)
50
51 lines = []
52 for line in data.splitlines():
53 for repl in dumb_replacements:
54 line = line.replace(*repl)
55 lines.append(line)
56 data = '\n'.join(lines)
57
58 for repl in dumber_replacements:
59 data = data.replace(*repl)
60
61 with io.open(out_template, "w+", encoding="utf-8") as outf:
62 outf.write(data + '\n')
63
64 # Syntax check output
65 source, filename = lookup.loader.get_source(lookup, os.path.basename(template))[:2]
66 try:
67 lookup.parse(source)
68 except Exception as e:
69 error("Syntax error in {0}:{1}".format(out_template, e.lineno))
70
71 parent = os.path.basename(in_theme.rstrip('/'))
72 child = os.path.basename(out_theme.rstrip('/'))
73 mappings = {
74 'base-jinja': 'base',
75 'bootstrap3-jinja': 'base-jinja',
76 }
77
78 if child in mappings:
79 parent = mappings[child]
80
81 with io.open(os.path.join(out_theme, "parent"), "w+", encoding='utf-8') as outf:
82 outf.write(u'{0}\n'.format(parent))
83
84 with io.open(os.path.join(out_theme, "engine"), "w+", encoding='utf-8') as outf:
85 outf.write(u"jinja\n")
86
87 # Copy assets in bootstrap/bootstrap3
88 if child == 'bootstrap3-jinja':
89 shutil.rmtree(os.path.join(out_theme, "assets"))
90 shutil.copytree(
91 os.path.join(in_theme, "assets"), os.path.join(out_theme, "assets"),
92 symlinks=True)
93
94 # Copy bundles
95 # shutil.copy(os.path.join(in_theme, "bundles"), os.path.join(out_theme, "bundles"))
96
97 # Copy README
98 if os.path.isfile(os.path.join(in_theme, "README.md")):
99 shutil.copy(os.path.join(in_theme, "README.md"), os.path.join(out_theme, "README.md"))
100
101
102 def error(msg):
103 print(colorama.Fore.RED + "ERROR:" + msg)
104
105
106 def mako2jinja(input_file):
107
108 output = ''
109
110 # TODO: OMG, this code is so horrible. Look at it; just look at it:
111
112 macro_start = re.compile(r'(.*)<%.*def name="(.*?)".*>(.*)', re.IGNORECASE)
113 macro_end = re.compile(r'(.*)</%def>(.*)', re.IGNORECASE)
114
115 if_start = re.compile(r'(.*)% *if (.*):(.*)', re.IGNORECASE)
116 if_else = re.compile(r'(.*)% *else.*:(.*)', re.IGNORECASE)
117 if_elif = re.compile(r'(.*)% *elif (.*):(.*)', re.IGNORECASE)
118 if_end = re.compile(r'(.*)% *endif(.*)', re.IGNORECASE)
119
120 for_start = re.compile(r'(.*)% *for (.*):(.*)', re.IGNORECASE)
121 for_end = re.compile(r'(.*)% *endfor(.*)', re.IGNORECASE)
122
123 namespace = re.compile(r'(.*)<% *namespace name="(.*?)".* file="(.*?)".*/>(.*)', re.IGNORECASE)
124 inherit = re.compile(r'(.*)<% *inherit file="(.*?)".*/>(.*)', re.IGNORECASE)
125
126 block_single_line = re.compile(r'(.*)<% *block.*name="(.*?)".*>(.*)</% *block>(.*)', re.IGNORECASE)
127 block_start = re.compile(r'(.*)<% *block.*name="(.*?)".*>(.*)', re.IGNORECASE)
128 block_end = re.compile(r'(.*)</%block>(.*)', re.IGNORECASE)
129
130 val = re.compile(r'\$\{(.*?)\}', re.IGNORECASE)
131 func_len = re.compile(r'len\((.*?)\)', re.IGNORECASE)
132 filter_h = re.compile(r'\|h', re.IGNORECASE)
133 filter_striphtml = re.compile(r'\|striphtml', re.IGNORECASE)
134 filter_u = re.compile(r'\|u', re.IGNORECASE)
135
136 comment_single_line = re.compile(r'^.*##(.*?)$', re.IGNORECASE)
137
138 for line in input_file:
139
140 # Process line for repeated inline replacements
141 m_val = val.search(line)
142 m_func_len = func_len.search(line)
143 m_filter_h = filter_h.search(line)
144 m_filter_striphtml = filter_striphtml.search(line)
145 m_filter_u = filter_u.search(line)
146
147 if m_val:
148 line = val.sub(r'{{ \1 }}', line)
149
150 if m_filter_h:
151 line = filter_h.sub(r'|e', line)
152
153 if m_filter_striphtml:
154 line = filter_striphtml.sub(r'|e', line)
155
156 if m_filter_u:
157 line = filter_u.sub(r'|urlencode', line)
158
159 if m_func_len:
160 line = func_len.sub(r'\1|length', line)
161
162 # Process line for single 'whole line' replacements
163 m_macro_start = macro_start.search(line)
164 m_macro_end = macro_end.search(line)
165 m_if_start = if_start.search(line)
166 m_if_else = if_else.search(line)
167 m_if_elif = if_elif.search(line)
168 m_if_end = if_end.search(line)
169 m_for_start = for_start.search(line)
170 m_for_end = for_end.search(line)
171 m_namspace = namespace.search(line)
172 m_inherit = inherit.search(line)
173 m_block_single_line = block_single_line.search(line)
174 m_block_start = block_start.search(line)
175 m_block_end = block_end.search(line)
176
177 m_comment_single_line = comment_single_line.search(line)
178
179 if m_comment_single_line:
180 output += m_comment_single_line.expand(r'{# \1 #}') + '\n'
181
182 elif m_macro_start:
183 output += m_macro_start.expand(r'\1{% macro \2 %}\3') + '\n'
184 elif m_macro_end:
185 output += m_macro_end.expand(r'\1{% endmacro %}\1') + '\n'
186
187 elif m_if_start:
188 output += m_if_start.expand(r'\1{% if \2 %}\3') + '\n'
189 elif m_if_else:
190 output += m_if_else.expand(r'\1{% else %}\2') + '\n'
191 elif m_if_elif:
192 output += m_if_elif.expand(r'\1{% elif \2 %}\3') + '\n'
193 elif m_if_end:
194 output += m_if_end.expand(r'\1{% endif %}\2') + '\n'
195
196 elif m_for_start:
197 output += m_for_start.expand(r'\1{% for \2 %}\3') + '\n'
198 elif m_for_end:
199 output += m_for_end.expand(r'\1{% endfor %}\2') + '\n'
200
201 elif m_namspace:
202 output += m_namspace.expand(r"\1{% import '\3' as \2 with context %}\4") + '\n'
203 elif m_inherit:
204 output += m_inherit.expand(r"{% extends '\2' %}\3") + '\n'
205
206 elif m_block_single_line:
207 output += m_block_single_line.expand(r'\1{% block \2 %}\3{% endblock %}\4') + '\n'
208 elif m_block_start:
209 output += m_block_start.expand(r'\1{% block \2 %}\3') + '\n'
210 elif m_block_end:
211 output += m_block_end.expand(r'\1{% endblock %}\2') + '\n'
212
213 else:
214 # Doesn't match anything we're going to process, pass though
215 output += line
216
217 return output
218
219
220 def jinjify_shortcodes(in_dir, out_dir):
221 for fname in os.listdir(in_dir):
222 if not fname.endswith('.tmpl'):
223 continue
224 in_file = os.path.join(in_dir, fname)
225 out_file = os.path.join(out_dir, fname)
226 with open(in_file) as inf:
227 data = mako2jinja(inf)
228 with open(out_file, 'w') as outf:
229 outf.write(data)
230
231
232 if __name__ == "__main__":
233 if len(sys.argv) == 1:
234 print('Performing standard conversions:')
235 for m, j in (
236 ('nikola/data/themes/base', 'nikola/data/themes/base-jinja'),
237 ('nikola/data/themes/bootstrap3', 'nikola/data/themes/bootstrap3-jinja')
238 ):
239 print(' {0} -> {1}'.format(m, j))
240 jinjify(m, j)
241 jinjify_shortcodes('nikola/data/shortcodes/mako', 'nikola/data/shortcodes/jinja')
242 elif len(sys.argv) != 3:
243 print('ERROR: needs input and output directory, or no arguments for default conversions.')
244 else:
245 jinjify(sys.argv[1], sys.argv[2])
246
[end of scripts/jinjify.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/jinjify.py b/scripts/jinjify.py
--- a/scripts/jinjify.py
+++ b/scripts/jinjify.py
@@ -6,6 +6,7 @@
import re
import json
import shutil
+import tempfile
import colorama
import jinja2
@@ -229,6 +230,11 @@
outf.write(data)
+def usage():
+ print("Usage: python {} [in-dir] [out-dir]".format(sys.argv[0]))
+ print("OR")
+ print("Usage: python {} [in-file] [out-file]".format(sys.argv[0]))
+
if __name__ == "__main__":
if len(sys.argv) == 1:
print('Performing standard conversions:')
@@ -240,6 +246,18 @@
jinjify(m, j)
jinjify_shortcodes('nikola/data/shortcodes/mako', 'nikola/data/shortcodes/jinja')
elif len(sys.argv) != 3:
- print('ERROR: needs input and output directory, or no arguments for default conversions.')
- else:
+ print('ERROR: needs input and output directory (file), or no arguments for default conversions.')
+ usage()
+ elif os.path.isdir(sys.argv[1]) and (os.path.isdir(sys.argv[2]) or not os.path.exists(sys.argv[2])):
jinjify(sys.argv[1], sys.argv[2])
+ elif os.path.isfile(sys.argv[1]) and (os.path.isfile(sys.argv[2]) or not os.path.exists(sys.argv[2])):
+ tmpdir = tempfile.mkdtemp()
+ indir = os.path.sep.join((tmpdir, 'in', 'templates'))
+ outdir = os.path.sep.join((tmpdir, 'out', 'templates'))
+ os.makedirs(indir)
+ shutil.copy(sys.argv[1], indir)
+ jinjify(os.path.dirname(indir), os.path.dirname(outdir))
+ shutil.move(os.path.sep.join((outdir, os.path.basename(sys.argv[1]))), sys.argv[2])
+ else:
+ print('ERROR: the two arguments must be both directories or files')
+ usage()
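Note: the single-file mode this diff adds simply routes a lone template through the existing directory-based converter. A minimal sketch of that round-trip, assuming the patch's temp-directory approach (the helper name `jinjify_single_file` is hypothetical; `jinjify` is the directory-based function defined in the script above):

```python
import os
import shutil
import tempfile

def jinjify_single_file(jinjify, in_file, out_file):
    # Stage the lone template as a throwaway theme layout (<tmp>/in/templates/),
    # convert the whole directory, then move the converted file to out_file.
    tmpdir = tempfile.mkdtemp()
    indir = os.path.join(tmpdir, 'in', 'templates')
    outdir = os.path.join(tmpdir, 'out', 'templates')
    os.makedirs(indir)
    shutil.copy(in_file, indir)
    jinjify(os.path.dirname(indir), os.path.dirname(outdir))
    shutil.move(os.path.join(outdir, os.path.basename(in_file)), out_file)
```

With the patch applied, invoking the script with an input file and an output file should follow this same path, while two directory arguments keep the old behaviour.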
| {"golden_diff": "diff --git a/scripts/jinjify.py b/scripts/jinjify.py\n--- a/scripts/jinjify.py\n+++ b/scripts/jinjify.py\n@@ -6,6 +6,7 @@\n import re\n import json\n import shutil\n+import tempfile\n \n import colorama\n import jinja2\n@@ -229,6 +230,11 @@\n outf.write(data)\n \n \n+def usage():\n+ print(\"Usage: python {} [in-dir] [out-dir]\".format(sys.argv[0]))\n+ print(\"OR\")\n+ print(\"Usage: python {} [in-file] [out-file]\".format(sys.argv[0]))\n+\n if __name__ == \"__main__\":\n if len(sys.argv) == 1:\n print('Performing standard conversions:')\n@@ -240,6 +246,18 @@\n jinjify(m, j)\n jinjify_shortcodes('nikola/data/shortcodes/mako', 'nikola/data/shortcodes/jinja')\n elif len(sys.argv) != 3:\n- print('ERROR: needs input and output directory, or no arguments for default conversions.')\n- else:\n+ print('ERROR: needs input and output directory (file), or no arguments for default conversions.')\n+ usage()\n+ elif os.path.isdir(sys.argv[1]) and (os.path.isdir(sys.argv[2]) or not os.path.exists(sys.argv[2])):\n jinjify(sys.argv[1], sys.argv[2])\n+ elif os.path.isfile(sys.argv[1]) and (os.path.isfile(sys.argv[2]) or not os.path.exists(sys.argv[2])):\n+ tmpdir = tempfile.mkdtemp()\n+ indir = os.path.sep.join((tmpdir, 'in', 'templates'))\n+ outdir = os.path.sep.join((tmpdir, 'out', 'templates'))\n+ os.makedirs(indir)\n+ shutil.copy(sys.argv[1], indir)\n+ jinjify(os.path.dirname(indir), os.path.dirname(outdir))\n+ shutil.move(os.path.sep.join((outdir, os.path.basename(sys.argv[1]))), sys.argv[2])\n+ else:\n+ print('ERROR: the two arguments must be both directories or files')\n+ usage()\n", "issue": "jinjify: convert a single mako template file to jinja?\nIt looks like jinjify requires a full theme to work. For example, how do you synchronize the two templates in the [projectpages plugin](https://github.com/getnikola/plugins/tree/master/v7/projectpages/templates) with jinjify?\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport io\nimport glob\nimport sys\nimport os\nimport re\nimport json\nimport shutil\n\nimport colorama\nimport jinja2\n\ndumb_replacements = [\n [\"{% if any(post.is_mathjax for post in posts) %}\", '{% if posts|selectattr(\"is_mathjax\")|list %}'],\n [\"json.dumps(title)\", \"title|tojson\"],\n [\"{{ parent.extra_head() }}\", \"{{ super() }}\"],\n [\"{{ parent.content() }}\", \"{{ super() }}\"],\n [\"prefix='\\\\\", \"prefix='\"],\n [\"og: http://ogp.me/ns# \\\\\", \"og: http://ogp.me/ns#\"],\n [\"article: http://ogp.me/ns/article# \\\\\", \"article: http://ogp.me/ns/article#\"],\n [\"fb: http://ogp.me/ns/fb# \\\\\", \"fb: http://ogp.me/ns/fb#\"],\n ['dir=\"rtl\" \\\\', 'dir=\"rtl\"'],\n ['sorted(translations)', 'translations|sort'],\n]\n\ndumber_replacements = [\n ['<%! 
import json %>\\n\\n', ''],\n [\"<html\\n\\\\\", \"<html\\n\"],\n [\"\\n'\\\\\\n\", \"\\n'\\n\"],\n [\"{% endif %}\\n\\\\\", \"{% endif %}\\n\"]\n]\n\n\ndef jinjify(in_theme, out_theme):\n \"\"\"Convert in_theme into a jinja version and put it in out_theme\"\"\"\n\n in_templates_path = os.path.join(in_theme, \"templates\")\n out_templates_path = os.path.join(out_theme, \"templates\")\n try:\n os.makedirs(out_templates_path)\n except:\n pass\n lookup = jinja2.Environment()\n lookup.filters['tojson'] = json.dumps\n lookup.loader = jinja2.FileSystemLoader([out_templates_path], encoding='utf-8')\n for template in glob.glob(os.path.join(in_templates_path, \"*.tmpl\")):\n out_template = os.path.join(out_templates_path, os.path.basename(template))\n with io.open(template, \"r\", encoding=\"utf-8\") as inf:\n data = mako2jinja(inf)\n\n lines = []\n for line in data.splitlines():\n for repl in dumb_replacements:\n line = line.replace(*repl)\n lines.append(line)\n data = '\\n'.join(lines)\n\n for repl in dumber_replacements:\n data = data.replace(*repl)\n\n with io.open(out_template, \"w+\", encoding=\"utf-8\") as outf:\n outf.write(data + '\\n')\n\n # Syntax check output\n source, filename = lookup.loader.get_source(lookup, os.path.basename(template))[:2]\n try:\n lookup.parse(source)\n except Exception as e:\n error(\"Syntax error in {0}:{1}\".format(out_template, e.lineno))\n\n parent = os.path.basename(in_theme.rstrip('/'))\n child = os.path.basename(out_theme.rstrip('/'))\n mappings = {\n 'base-jinja': 'base',\n 'bootstrap3-jinja': 'base-jinja',\n }\n\n if child in mappings:\n parent = mappings[child]\n\n with io.open(os.path.join(out_theme, \"parent\"), \"w+\", encoding='utf-8') as outf:\n outf.write(u'{0}\\n'.format(parent))\n\n with io.open(os.path.join(out_theme, \"engine\"), \"w+\", encoding='utf-8') as outf:\n outf.write(u\"jinja\\n\")\n\n # Copy assets in bootstrap/bootstrap3\n if child == 'bootstrap3-jinja':\n shutil.rmtree(os.path.join(out_theme, \"assets\"))\n shutil.copytree(\n os.path.join(in_theme, \"assets\"), os.path.join(out_theme, \"assets\"),\n symlinks=True)\n\n # Copy bundles\n # shutil.copy(os.path.join(in_theme, \"bundles\"), os.path.join(out_theme, \"bundles\"))\n\n # Copy README\n if os.path.isfile(os.path.join(in_theme, \"README.md\")):\n shutil.copy(os.path.join(in_theme, \"README.md\"), os.path.join(out_theme, \"README.md\"))\n\n\ndef error(msg):\n print(colorama.Fore.RED + \"ERROR:\" + msg)\n\n\ndef mako2jinja(input_file):\n\n output = ''\n\n # TODO: OMG, this code is so horrible. 
Look at it; just look at it:\n\n macro_start = re.compile(r'(.*)<%.*def name=\"(.*?)\".*>(.*)', re.IGNORECASE)\n macro_end = re.compile(r'(.*)</%def>(.*)', re.IGNORECASE)\n\n if_start = re.compile(r'(.*)% *if (.*):(.*)', re.IGNORECASE)\n if_else = re.compile(r'(.*)% *else.*:(.*)', re.IGNORECASE)\n if_elif = re.compile(r'(.*)% *elif (.*):(.*)', re.IGNORECASE)\n if_end = re.compile(r'(.*)% *endif(.*)', re.IGNORECASE)\n\n for_start = re.compile(r'(.*)% *for (.*):(.*)', re.IGNORECASE)\n for_end = re.compile(r'(.*)% *endfor(.*)', re.IGNORECASE)\n\n namespace = re.compile(r'(.*)<% *namespace name=\"(.*?)\".* file=\"(.*?)\".*/>(.*)', re.IGNORECASE)\n inherit = re.compile(r'(.*)<% *inherit file=\"(.*?)\".*/>(.*)', re.IGNORECASE)\n\n block_single_line = re.compile(r'(.*)<% *block.*name=\"(.*?)\".*>(.*)</% *block>(.*)', re.IGNORECASE)\n block_start = re.compile(r'(.*)<% *block.*name=\"(.*?)\".*>(.*)', re.IGNORECASE)\n block_end = re.compile(r'(.*)</%block>(.*)', re.IGNORECASE)\n\n val = re.compile(r'\\$\\{(.*?)\\}', re.IGNORECASE)\n func_len = re.compile(r'len\\((.*?)\\)', re.IGNORECASE)\n filter_h = re.compile(r'\\|h', re.IGNORECASE)\n filter_striphtml = re.compile(r'\\|striphtml', re.IGNORECASE)\n filter_u = re.compile(r'\\|u', re.IGNORECASE)\n\n comment_single_line = re.compile(r'^.*##(.*?)$', re.IGNORECASE)\n\n for line in input_file:\n\n # Process line for repeated inline replacements\n m_val = val.search(line)\n m_func_len = func_len.search(line)\n m_filter_h = filter_h.search(line)\n m_filter_striphtml = filter_striphtml.search(line)\n m_filter_u = filter_u.search(line)\n\n if m_val:\n line = val.sub(r'{{ \\1 }}', line)\n\n if m_filter_h:\n line = filter_h.sub(r'|e', line)\n\n if m_filter_striphtml:\n line = filter_striphtml.sub(r'|e', line)\n\n if m_filter_u:\n line = filter_u.sub(r'|urlencode', line)\n\n if m_func_len:\n line = func_len.sub(r'\\1|length', line)\n\n # Process line for single 'whole line' replacements\n m_macro_start = macro_start.search(line)\n m_macro_end = macro_end.search(line)\n m_if_start = if_start.search(line)\n m_if_else = if_else.search(line)\n m_if_elif = if_elif.search(line)\n m_if_end = if_end.search(line)\n m_for_start = for_start.search(line)\n m_for_end = for_end.search(line)\n m_namspace = namespace.search(line)\n m_inherit = inherit.search(line)\n m_block_single_line = block_single_line.search(line)\n m_block_start = block_start.search(line)\n m_block_end = block_end.search(line)\n\n m_comment_single_line = comment_single_line.search(line)\n\n if m_comment_single_line:\n output += m_comment_single_line.expand(r'{# \\1 #}') + '\\n'\n\n elif m_macro_start:\n output += m_macro_start.expand(r'\\1{% macro \\2 %}\\3') + '\\n'\n elif m_macro_end:\n output += m_macro_end.expand(r'\\1{% endmacro %}\\1') + '\\n'\n\n elif m_if_start:\n output += m_if_start.expand(r'\\1{% if \\2 %}\\3') + '\\n'\n elif m_if_else:\n output += m_if_else.expand(r'\\1{% else %}\\2') + '\\n'\n elif m_if_elif:\n output += m_if_elif.expand(r'\\1{% elif \\2 %}\\3') + '\\n'\n elif m_if_end:\n output += m_if_end.expand(r'\\1{% endif %}\\2') + '\\n'\n\n elif m_for_start:\n output += m_for_start.expand(r'\\1{% for \\2 %}\\3') + '\\n'\n elif m_for_end:\n output += m_for_end.expand(r'\\1{% endfor %}\\2') + '\\n'\n\n elif m_namspace:\n output += m_namspace.expand(r\"\\1{% import '\\3' as \\2 with context %}\\4\") + '\\n'\n elif m_inherit:\n output += m_inherit.expand(r\"{% extends '\\2' %}\\3\") + '\\n'\n\n elif m_block_single_line:\n output += m_block_single_line.expand(r'\\1{% block \\2 %}\\3{% 
endblock %}\\4') + '\\n'\n elif m_block_start:\n output += m_block_start.expand(r'\\1{% block \\2 %}\\3') + '\\n'\n elif m_block_end:\n output += m_block_end.expand(r'\\1{% endblock %}\\2') + '\\n'\n\n else:\n # Doesn't match anything we're going to process, pass though\n output += line\n\n return output\n\n\ndef jinjify_shortcodes(in_dir, out_dir):\n for fname in os.listdir(in_dir):\n if not fname.endswith('.tmpl'):\n continue\n in_file = os.path.join(in_dir, fname)\n out_file = os.path.join(out_dir, fname)\n with open(in_file) as inf:\n data = mako2jinja(inf)\n with open(out_file, 'w') as outf:\n outf.write(data)\n \n\nif __name__ == \"__main__\":\n if len(sys.argv) == 1:\n print('Performing standard conversions:')\n for m, j in (\n ('nikola/data/themes/base', 'nikola/data/themes/base-jinja'),\n ('nikola/data/themes/bootstrap3', 'nikola/data/themes/bootstrap3-jinja')\n ):\n print(' {0} -> {1}'.format(m, j))\n jinjify(m, j)\n jinjify_shortcodes('nikola/data/shortcodes/mako', 'nikola/data/shortcodes/jinja')\n elif len(sys.argv) != 3:\n print('ERROR: needs input and output directory, or no arguments for default conversions.')\n else:\n jinjify(sys.argv[1], sys.argv[2])\n", "path": "scripts/jinjify.py"}]} | 3,628 | 493 |
gh_patches_debug_13852 | rasdani/github-patches | git_diff | ESMCI__cime-2700 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue downloading data (wildcards not supported in HTTP)
I wanted to have a case download all of the data it needs. First I created an empty tmp inputdata directory and set the DIN env vars. However, I got the error below, which seems like a problem with wget and wildcards:
```
Refcase not found in /global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01, will attempt to download from inputdata
Model refcase missing file refdir = '/global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/'
wget failed with output: and errput Warning: wildcards not supported in HTTP.
--2018-06-29 14:11:00-- https://web.lcrc.anl.gov/public/e3sm/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/*
Resolving web.lcrc.anl.gov (web.lcrc.anl.gov)... 140.221.74.23
Connecting to web.lcrc.anl.gov (web.lcrc.anl.gov)|140.221.74.23|:443... connected.
HTTP request sent, awaiting response... 404 Not Found
2018-06-29 14:11:00 ERROR 404: Not Found.
```
The test I was using: `SMS_Ld2.ne30_oECv3_ICG.A_WCYCL1850S_CMIP6.cori-knl_intel.allactive-v1cmip6`
</issue>
<code>
[start of scripts/lib/CIME/Servers/wget.py]
1 """
2 WGET Server class. Interact with a server using WGET protocol
3 """
4 # pylint: disable=super-init-not-called
5 from CIME.XML.standard_module_setup import *
6 from CIME.Servers.generic_server import GenericServer
7
8 logger = logging.getLogger(__name__)
9
10 class WGET(GenericServer):
11 def __init__(self, address, user='', passwd=''):
12 self._args = ''
13 if user:
14 self._args += "--user {}".format(user)
15 if passwd:
16 self._args += "--password {}".format(passwd)
17
18 err = run_cmd("wget {} --spider {}".format(self._args, address))[0]
19 expect(err == 0,"Could not connect to repo '{0}'\nThis is most likely either a proxy, or network issue .")
20 self._server_loc = address
21
22 def fileexists(self, rel_path):
23 full_url = os.path.join(self._server_loc, rel_path)
24 stat, out, err = run_cmd("wget {} --spider {}".format(self._args, full_url))
25 if (stat != 0):
26 logging.warning("FAIL: Repo '{}' does not have file '{}'\nReason:{}\n{}\n".format(self._server_loc, full_url, out.encode('utf-8'), err.encode('utf-8')))
27 return False
28 return True
29
30 def getfile(self, rel_path, full_path):
31 full_url = os.path.join(self._server_loc, rel_path)
32 stat, output, errput = \
33 run_cmd("wget {} {} -nc --output-document {}".format(self._args, full_url, full_path))
34 if (stat != 0):
35 logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
36 # wget puts an empty file if it fails.
37 try:
38 os.remove(full_path)
39 except OSError:
40 pass
41 return False
42 else:
43 logging.info("SUCCESS\n")
44 return True
45
46 def getdirectory(self, rel_path, full_path):
47 full_url = os.path.join(self._server_loc, rel_path)
48 stat, output, errput = \
49 run_cmd("wget {} {} -P {}".format(self._args, full_url+os.sep+'*', full_path+os.sep))
50 if (stat != 0):
51 logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
52 # wget puts an empty file if it fails.
53 try:
54 os.remove(full_path)
55 except OSError:
56 pass
57 return False
58 else:
59 logging.info("SUCCESS\n")
60 return True
61
[end of scripts/lib/CIME/Servers/wget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/lib/CIME/Servers/wget.py b/scripts/lib/CIME/Servers/wget.py
--- a/scripts/lib/CIME/Servers/wget.py
+++ b/scripts/lib/CIME/Servers/wget.py
@@ -46,7 +46,9 @@
def getdirectory(self, rel_path, full_path):
full_url = os.path.join(self._server_loc, rel_path)
stat, output, errput = \
- run_cmd("wget {} {} -P {}".format(self._args, full_url+os.sep+'*', full_path+os.sep))
+ run_cmd("wget {} {} -r -N --no-directories ".format(self._args, full_url+os.sep), from_dir=full_path)
+ logger.debug(output)
+ logger.debug(errput)
if (stat != 0):
logging.warning("wget failed with output: {} and errput {}\n".format(output, errput))
# wget puts an empty file if it fails.
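Note: the fix sidesteps the fact that an HTTP server cannot expand a shell wildcard, so requesting `<dir>/*` can only return 404; instead the directory is mirrored with wget's recursive mode and flattened into the destination directory. A standalone sketch of the same idea (the `fetch_directory` helper is hypothetical and uses plain `subprocess` rather than CIME's `run_cmd`):

```python
import subprocess

def fetch_directory(full_url, full_path, extra_args=None):
    # Mirror every file under full_url into full_path: -r recurses over the
    # directory listing, -N skips files that are already up to date, and
    # --no-directories flattens the server's hierarchy into full_path.
    cmd = ["wget"] + list(extra_args or []) + \
          [full_url.rstrip("/") + "/", "-r", "-N", "--no-directories"]
    return subprocess.call(cmd, cwd=full_path)
```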
| {"golden_diff": "diff --git a/scripts/lib/CIME/Servers/wget.py b/scripts/lib/CIME/Servers/wget.py\n--- a/scripts/lib/CIME/Servers/wget.py\n+++ b/scripts/lib/CIME/Servers/wget.py\n@@ -46,7 +46,9 @@\n def getdirectory(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n- run_cmd(\"wget {} {} -P {}\".format(self._args, full_url+os.sep+'*', full_path+os.sep))\n+ run_cmd(\"wget {} {} -r -N --no-directories \".format(self._args, full_url+os.sep), from_dir=full_path)\n+ logger.debug(output)\n+ logger.debug(errput)\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n", "issue": "Issue downloading data (wildcards not supported in HTTP)\nI was wanting to have a case download all of the data it needs. First create an empty tmp inputdata directory, set the DIN env vars. However, I got error below which seems like a problem with wget and wildcards?\r\n\r\n```\r\n Refcase not found in /global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01, will attempt to download from inputdata\r\n Model refcase missing file refdir = '/global/cscratch1/sd/ndk/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/'\r\n wget failed with output: and errput Warning: wildcards not supported in HTTP.\r\n --2018-06-29 14:11:00-- https://web.lcrc.anl.gov/public/e3sm/inputdata/e3sm_init/20171228.beta3rc13_1850.ne30_oECv3_ICG.edison/0331-01-01/*\r\n Resolving web.lcrc.anl.gov (web.lcrc.anl.gov)... 140.221.74.23\r\n Connecting to web.lcrc.anl.gov (web.lcrc.anl.gov)|140.221.74.23|:443... connected.\r\n HTTP request sent, awaiting response... 404 Not Found\r\n 2018-06-29 14:11:00 ERROR 404: Not Found.\r\n```\r\n\r\nThe test I was using: `SMS_Ld2.ne30_oECv3_ICG.A_WCYCL1850S_CMIP6.cori-knl_intel.allactive-v1cmip6`\n", "before_files": [{"content": "\"\"\"\nWGET Server class. 
Interact with a server using WGET protocol\n\"\"\"\n# pylint: disable=super-init-not-called\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.Servers.generic_server import GenericServer\n\nlogger = logging.getLogger(__name__)\n\nclass WGET(GenericServer):\n def __init__(self, address, user='', passwd=''):\n self._args = ''\n if user:\n self._args += \"--user {}\".format(user)\n if passwd:\n self._args += \"--password {}\".format(passwd)\n\n err = run_cmd(\"wget {} --spider {}\".format(self._args, address))[0]\n expect(err == 0,\"Could not connect to repo '{0}'\\nThis is most likely either a proxy, or network issue .\")\n self._server_loc = address\n\n def fileexists(self, rel_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, out, err = run_cmd(\"wget {} --spider {}\".format(self._args, full_url))\n if (stat != 0):\n logging.warning(\"FAIL: Repo '{}' does not have file '{}'\\nReason:{}\\n{}\\n\".format(self._server_loc, full_url, out.encode('utf-8'), err.encode('utf-8')))\n return False\n return True\n\n def getfile(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -nc --output-document {}\".format(self._args, full_url, full_path))\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n\n def getdirectory(self, rel_path, full_path):\n full_url = os.path.join(self._server_loc, rel_path)\n stat, output, errput = \\\n run_cmd(\"wget {} {} -P {}\".format(self._args, full_url+os.sep+'*', full_path+os.sep))\n if (stat != 0):\n logging.warning(\"wget failed with output: {} and errput {}\\n\".format(output, errput))\n # wget puts an empty file if it fails.\n try:\n os.remove(full_path)\n except OSError:\n pass\n return False\n else:\n logging.info(\"SUCCESS\\n\")\n return True\n", "path": "scripts/lib/CIME/Servers/wget.py"}]} | 1,684 | 215 |
gh_patches_debug_19063 | rasdani/github-patches | git_diff | streamlink__streamlink-185 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plugin for Livestream.com not working right? Exits quickly for HLS and does not play at all for normal streams
I am trying to get a live stream on livestream.com to work and I can't get it to play for more than about 35 seconds.
When I run this command:
streamlink "http://livestream.com/Miraclenet/events/5004281" 270p --fifo --player omxplayer
it gives me an error about an SWF being needed. When I run this command:
streamlink "http://livestream.com/Miraclenet/events/5004281" 270p_hls --fifo --player omxplayer
it will play the stream, but only for about 35 seconds or so. I don't want to have to restart it every 35 seconds to keep watching this stream; I'd like it to run until I stop it myself.
Any help for this non-Python, non-Linux guy would be very welcome.
By the way, this is running on a Raspberry Pi. I just got a nice little 7-inch LCD for it and set it up on my desk so I can watch while I work, but I can't get it to play for long at a time.
(edited to correct commands used)
</issue>
<code>
[start of src/streamlink/plugins/livestream.py]
1 import re
2
3 from streamlink.compat import urljoin
4 from streamlink.plugin import Plugin
5 from streamlink.plugin.api import http, validate
6 from streamlink.plugin.api.utils import parse_json
7 from streamlink.stream import AkamaiHDStream, HLSStream
8
9 _url_re = re.compile("http(s)?://(www\.)?livestream.com/")
10 _stream_config_schema = validate.Schema({
11 "event": {
12 "stream_info": validate.any({
13 "is_live": bool,
14 "qualities": [{
15 "bitrate": int,
16 "height": int
17 }],
18 validate.optional("play_url"): validate.url(scheme="http"),
19 validate.optional("m3u8_url"): validate.url(
20 scheme="http",
21 path=validate.endswith(".m3u8")
22 ),
23 }, None)
24 },
25 validate.optional("playerUri"): validate.text
26 })
27 _smil_schema = validate.Schema(validate.union({
28 "http_base": validate.all(
29 validate.xml_find("{http://www.w3.org/2001/SMIL20/Language}head/"
30 "{http://www.w3.org/2001/SMIL20/Language}meta"
31 "[@name='httpBase']"),
32 validate.xml_element(attrib={
33 "content": validate.text
34 }),
35 validate.get("content")
36 ),
37 "videos": validate.all(
38 validate.xml_findall("{http://www.w3.org/2001/SMIL20/Language}body/"
39 "{http://www.w3.org/2001/SMIL20/Language}switch/"
40 "{http://www.w3.org/2001/SMIL20/Language}video"),
41 [
42 validate.all(
43 validate.xml_element(attrib={
44 "src": validate.text,
45 "system-bitrate": validate.all(
46 validate.text,
47 validate.transform(int)
48 )
49 }),
50 validate.transform(
51 lambda e: (e.attrib["src"], e.attrib["system-bitrate"])
52 )
53 )
54 ],
55 )
56 }))
57
58
59 class Livestream(Plugin):
60 @classmethod
61 def default_stream_types(cls, streams):
62 return ["akamaihd", "hls"]
63
64 @classmethod
65 def can_handle_url(self, url):
66 return _url_re.match(url)
67
68 def _get_stream_info(self):
69 res = http.get(self.url)
70 match = re.search("window.config = ({.+})", res.text)
71 if match:
72 config = match.group(1)
73 return parse_json(config, "config JSON",
74 schema=_stream_config_schema)
75
76 def _parse_smil(self, url, swf_url):
77 res = http.get(url)
78 smil = http.xml(res, "SMIL config", schema=_smil_schema)
79
80 for src, bitrate in smil["videos"]:
81 url = urljoin(smil["http_base"], src)
82 yield bitrate, AkamaiHDStream(self.session, url, swf=swf_url)
83
84 def _get_streams(self):
85 info = self._get_stream_info()
86 if not info:
87 return
88
89 stream_info = info["event"]["stream_info"]
90 if not (stream_info and stream_info["is_live"]):
91 # Stream is not live
92 return
93
94 play_url = stream_info.get("play_url")
95 if play_url:
96 swf_url = info.get("playerUri")
97 if swf_url:
98 if not swf_url.startswith("http"):
99 swf_url = "http://" + swf_url
100
101 # Work around broken SSL.
102 swf_url = swf_url.replace("https://", "http://")
103
104 qualities = stream_info["qualities"]
105 for bitrate, stream in self._parse_smil(play_url, swf_url):
106 name = "{0:d}k".format(int(bitrate / 1000))
107 for quality in qualities:
108 if quality["bitrate"] == bitrate:
109 name = "{0}p".format(quality["height"])
110
111 yield name, stream
112
113 m3u8_url = stream_info.get("m3u8_url")
114 if m3u8_url:
115 streams = HLSStream.parse_variant_playlist(self.session, m3u8_url,
116 namekey="pixels")
117 # TODO: Replace with "yield from" when dropping Python 2.
118 for stream in streams.items():
119 yield stream
120
121 __plugin__ = Livestream
122
[end of src/streamlink/plugins/livestream.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/livestream.py b/src/streamlink/plugins/livestream.py
--- a/src/streamlink/plugins/livestream.py
+++ b/src/streamlink/plugins/livestream.py
@@ -22,7 +22,10 @@
),
}, None)
},
- validate.optional("playerUri"): validate.text
+ validate.optional("playerUri"): validate.text,
+ validate.optional("viewerPlusSwfUrl"): validate.url(scheme="http"),
+ validate.optional("lsPlayerSwfUrl"): validate.text,
+ validate.optional("hdPlayerSwfUrl"): validate.text
})
_smil_schema = validate.Schema(validate.union({
"http_base": validate.all(
@@ -93,7 +96,7 @@
play_url = stream_info.get("play_url")
if play_url:
- swf_url = info.get("playerUri")
+ swf_url = info.get("playerUri") or info.get("hdPlayerSwfUrl") or info.get("lsPlayerSwfUrl") or info.get("viewerPlusSwfUrl")
if swf_url:
if not swf_url.startswith("http"):
swf_url = "http://" + swf_url
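Note: newer livestream.com player configs do not always expose `playerUri`, so the patch falls back to the other SWF URL keys it now accepts in the config schema. The selection logic reduces to roughly this (the helper name is hypothetical; `info` is the parsed `window.config` dict):

```python
def pick_swf_url(info):
    # First config key that is present wins; each one points at the player SWF
    # that the AkamaiHD streams need.
    swf_url = (info.get("playerUri")
               or info.get("hdPlayerSwfUrl")
               or info.get("lsPlayerSwfUrl")
               or info.get("viewerPlusSwfUrl"))
    if not swf_url:
        return None
    if not swf_url.startswith("http"):
        swf_url = "http://" + swf_url
    # Work around broken SSL, as the existing code already does.
    return swf_url.replace("https://", "http://")
```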
| {"golden_diff": "diff --git a/src/streamlink/plugins/livestream.py b/src/streamlink/plugins/livestream.py\n--- a/src/streamlink/plugins/livestream.py\n+++ b/src/streamlink/plugins/livestream.py\n@@ -22,7 +22,10 @@\n ),\n }, None)\n },\n- validate.optional(\"playerUri\"): validate.text\n+ validate.optional(\"playerUri\"): validate.text,\n+ validate.optional(\"viewerPlusSwfUrl\"): validate.url(scheme=\"http\"),\n+ validate.optional(\"lsPlayerSwfUrl\"): validate.text,\n+ validate.optional(\"hdPlayerSwfUrl\"): validate.text\n })\n _smil_schema = validate.Schema(validate.union({\n \"http_base\": validate.all(\n@@ -93,7 +96,7 @@\n \n play_url = stream_info.get(\"play_url\")\n if play_url:\n- swf_url = info.get(\"playerUri\")\n+ swf_url = info.get(\"playerUri\") or info.get(\"hdPlayerSwfUrl\") or info.get(\"lsPlayerSwfUrl\") or info.get(\"viewerPlusSwfUrl\")\n if swf_url:\n if not swf_url.startswith(\"http\"):\n swf_url = \"http://\" + swf_url\n", "issue": "Plugin for Livestream.com not working right? exit's quickly for hls and not at all for normal streams\nI am trying to get a live stream on livestreamer.com to work and i can't get it to play more then about 35 seconds...\r\n\r\nWhen I run this command:\r\nstreamlink \"http://livestream.com/Miraclenet/events/5004281\" 270p --fifo --player omxplayer\r\n\r\nit gives me an error about an swf being needed. When I run this command:\r\nstreamlink \"http://livestream.com/Miraclenet/events/5004281\" 270p_hls --fifo --player omxplayer\r\n\r\nit will play the stream but just for about 35 seconds or so... I kinda don't want to have to restart it every 35 seconds to watch this stream... I'd like it to run until I stop it myself...\r\n\r\nAny help for this non-python, non-linux guy would be very helpful...\r\n\r\nbtw, this is running on a Raspberry Pi. 
Just got a nice little 7 inch lcd for it and set it up on my desk to be able to watch it while I work, but can't get it to play for long at a time...\r\n\r\n(edited to correct commands used)\n", "before_files": [{"content": "import re\n\nfrom streamlink.compat import urljoin\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, validate\nfrom streamlink.plugin.api.utils import parse_json\nfrom streamlink.stream import AkamaiHDStream, HLSStream\n\n_url_re = re.compile(\"http(s)?://(www\\.)?livestream.com/\")\n_stream_config_schema = validate.Schema({\n \"event\": {\n \"stream_info\": validate.any({\n \"is_live\": bool,\n \"qualities\": [{\n \"bitrate\": int,\n \"height\": int\n }],\n validate.optional(\"play_url\"): validate.url(scheme=\"http\"),\n validate.optional(\"m3u8_url\"): validate.url(\n scheme=\"http\",\n path=validate.endswith(\".m3u8\")\n ),\n }, None)\n },\n validate.optional(\"playerUri\"): validate.text\n})\n_smil_schema = validate.Schema(validate.union({\n \"http_base\": validate.all(\n validate.xml_find(\"{http://www.w3.org/2001/SMIL20/Language}head/\"\n \"{http://www.w3.org/2001/SMIL20/Language}meta\"\n \"[@name='httpBase']\"),\n validate.xml_element(attrib={\n \"content\": validate.text\n }),\n validate.get(\"content\")\n ),\n \"videos\": validate.all(\n validate.xml_findall(\"{http://www.w3.org/2001/SMIL20/Language}body/\"\n \"{http://www.w3.org/2001/SMIL20/Language}switch/\"\n \"{http://www.w3.org/2001/SMIL20/Language}video\"),\n [\n validate.all(\n validate.xml_element(attrib={\n \"src\": validate.text,\n \"system-bitrate\": validate.all(\n validate.text,\n validate.transform(int)\n )\n }),\n validate.transform(\n lambda e: (e.attrib[\"src\"], e.attrib[\"system-bitrate\"])\n )\n )\n ],\n )\n}))\n\n\nclass Livestream(Plugin):\n @classmethod\n def default_stream_types(cls, streams):\n return [\"akamaihd\", \"hls\"]\n\n @classmethod\n def can_handle_url(self, url):\n return _url_re.match(url)\n\n def _get_stream_info(self):\n res = http.get(self.url)\n match = re.search(\"window.config = ({.+})\", res.text)\n if match:\n config = match.group(1)\n return parse_json(config, \"config JSON\",\n schema=_stream_config_schema)\n\n def _parse_smil(self, url, swf_url):\n res = http.get(url)\n smil = http.xml(res, \"SMIL config\", schema=_smil_schema)\n\n for src, bitrate in smil[\"videos\"]:\n url = urljoin(smil[\"http_base\"], src)\n yield bitrate, AkamaiHDStream(self.session, url, swf=swf_url)\n\n def _get_streams(self):\n info = self._get_stream_info()\n if not info:\n return\n\n stream_info = info[\"event\"][\"stream_info\"]\n if not (stream_info and stream_info[\"is_live\"]):\n # Stream is not live\n return\n\n play_url = stream_info.get(\"play_url\")\n if play_url:\n swf_url = info.get(\"playerUri\")\n if swf_url:\n if not swf_url.startswith(\"http\"):\n swf_url = \"http://\" + swf_url\n\n # Work around broken SSL.\n swf_url = swf_url.replace(\"https://\", \"http://\")\n\n qualities = stream_info[\"qualities\"]\n for bitrate, stream in self._parse_smil(play_url, swf_url):\n name = \"{0:d}k\".format(int(bitrate / 1000))\n for quality in qualities:\n if quality[\"bitrate\"] == bitrate:\n name = \"{0}p\".format(quality[\"height\"])\n\n yield name, stream\n\n m3u8_url = stream_info.get(\"m3u8_url\")\n if m3u8_url:\n streams = HLSStream.parse_variant_playlist(self.session, m3u8_url,\n namekey=\"pixels\")\n # TODO: Replace with \"yield from\" when dropping Python 2.\n for stream in streams.items():\n yield stream\n\n__plugin__ = Livestream\n", "path": 
"src/streamlink/plugins/livestream.py"}]} | 2,038 | 268 |
gh_patches_debug_20419 | rasdani/github-patches | git_diff | scrapy__scrapy-2091 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DEFAULT_REQUEST_HEADERS can't set User-Agent
If I use DEFAULT_REQUEST_HEADERS to set `User-Agent`, it doesn't work.
``` python
DEFAULT_REQUEST_HEADERS = {
'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
'Accept-Encoding': 'gzip, deflate, sdch',
'Accept-Language': 'en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4',
}
```
I know I can set `User-Agent` for the crawler by using the USER_AGENT setting key:
``` python
# settings.py
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36'
```
I think this behaviour does not correspond with the docs for [User Agent](http://doc.scrapy.org/en/latest/topics/settings.html#user-agent) and [DefaultHeadersMiddleware](http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware) (_This middleware sets all default requests headers specified in the DEFAULT_REQUEST_HEADERS setting._)
If this behaviour is by design, maybe the docs should be updated.
Thanks.
</issue>
<code>
[start of scrapy/settings/default_settings.py]
1 """
2 This module contains the default values for all settings used by Scrapy.
3
4 For more information about these settings you can read the settings
5 documentation in docs/topics/settings.rst
6
7 Scrapy developers, if you add a setting here remember to:
8
9 * add it in alphabetical order
10 * group similar settings without leaving blank lines
11 * add its documentation to the available settings documentation
12 (docs/topics/settings.rst)
13
14 """
15
16 import os
17 import sys
18 from importlib import import_module
19 from os.path import join, abspath, dirname
20
21 import six
22
23 AJAXCRAWL_ENABLED = False
24
25 AUTOTHROTTLE_ENABLED = False
26 AUTOTHROTTLE_DEBUG = False
27 AUTOTHROTTLE_MAX_DELAY = 60.0
28 AUTOTHROTTLE_START_DELAY = 5.0
29 AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
30
31 BOT_NAME = 'scrapybot'
32
33 CLOSESPIDER_TIMEOUT = 0
34 CLOSESPIDER_PAGECOUNT = 0
35 CLOSESPIDER_ITEMCOUNT = 0
36 CLOSESPIDER_ERRORCOUNT = 0
37
38 COMMANDS_MODULE = ''
39
40 COMPRESSION_ENABLED = True
41
42 CONCURRENT_ITEMS = 100
43
44 CONCURRENT_REQUESTS = 16
45 CONCURRENT_REQUESTS_PER_DOMAIN = 8
46 CONCURRENT_REQUESTS_PER_IP = 0
47
48 COOKIES_ENABLED = True
49 COOKIES_DEBUG = False
50
51 DEFAULT_ITEM_CLASS = 'scrapy.item.Item'
52
53 DEFAULT_REQUEST_HEADERS = {
54 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
55 'Accept-Language': 'en',
56 }
57
58 DEPTH_LIMIT = 0
59 DEPTH_STATS = True
60 DEPTH_PRIORITY = 0
61
62 DNSCACHE_ENABLED = True
63 DNSCACHE_SIZE = 10000
64 DNS_TIMEOUT = 60
65
66 DOWNLOAD_DELAY = 0
67
68 DOWNLOAD_HANDLERS = {}
69 DOWNLOAD_HANDLERS_BASE = {
70 'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',
71 'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
72 'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',
73 's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',
74 'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',
75 }
76
77 DOWNLOAD_TIMEOUT = 180 # 3mins
78
79 DOWNLOAD_MAXSIZE = 1024*1024*1024 # 1024m
80 DOWNLOAD_WARNSIZE = 32*1024*1024 # 32m
81
82 DOWNLOADER = 'scrapy.core.downloader.Downloader'
83
84 DOWNLOADER_HTTPCLIENTFACTORY = 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'
85 DOWNLOADER_CLIENTCONTEXTFACTORY = 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'
86 DOWNLOADER_CLIENT_TLS_METHOD = 'TLS' # Use highest TLS/SSL protocol version supported by the platform,
87 # also allowing negotiation
88
89 DOWNLOADER_MIDDLEWARES = {}
90
91 DOWNLOADER_MIDDLEWARES_BASE = {
92 # Engine side
93 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
94 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
95 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
96 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 400,
97 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 500,
98 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 550,
99 'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
100 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
101 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
102 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
103 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
104 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
105 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware': 830,
106 'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
107 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
108 # Downloader side
109 }
110
111 DOWNLOADER_STATS = True
112
113 DUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'
114
115 try:
116 EDITOR = os.environ['EDITOR']
117 except KeyError:
118 if sys.platform == 'win32':
119 EDITOR = '%s -m idlelib.idle'
120 else:
121 EDITOR = 'vi'
122
123 EXTENSIONS = {}
124
125 EXTENSIONS_BASE = {
126 'scrapy.extensions.corestats.CoreStats': 0,
127 'scrapy.extensions.telnet.TelnetConsole': 0,
128 'scrapy.extensions.memusage.MemoryUsage': 0,
129 'scrapy.extensions.memdebug.MemoryDebugger': 0,
130 'scrapy.extensions.closespider.CloseSpider': 0,
131 'scrapy.extensions.feedexport.FeedExporter': 0,
132 'scrapy.extensions.logstats.LogStats': 0,
133 'scrapy.extensions.spiderstate.SpiderState': 0,
134 'scrapy.extensions.throttle.AutoThrottle': 0,
135 }
136
137 FEED_TEMPDIR = None
138 FEED_URI = None
139 FEED_URI_PARAMS = None # a function to extend uri arguments
140 FEED_FORMAT = 'jsonlines'
141 FEED_STORE_EMPTY = False
142 FEED_EXPORT_FIELDS = None
143 FEED_STORAGES = {}
144 FEED_STORAGES_BASE = {
145 '': 'scrapy.extensions.feedexport.FileFeedStorage',
146 'file': 'scrapy.extensions.feedexport.FileFeedStorage',
147 'stdout': 'scrapy.extensions.feedexport.StdoutFeedStorage',
148 's3': 'scrapy.extensions.feedexport.S3FeedStorage',
149 'ftp': 'scrapy.extensions.feedexport.FTPFeedStorage',
150 }
151 FEED_EXPORTERS = {}
152 FEED_EXPORTERS_BASE = {
153 'json': 'scrapy.exporters.JsonItemExporter',
154 'jsonlines': 'scrapy.exporters.JsonLinesItemExporter',
155 'jl': 'scrapy.exporters.JsonLinesItemExporter',
156 'csv': 'scrapy.exporters.CsvItemExporter',
157 'xml': 'scrapy.exporters.XmlItemExporter',
158 'marshal': 'scrapy.exporters.MarshalItemExporter',
159 'pickle': 'scrapy.exporters.PickleItemExporter',
160 }
161
162 FILES_STORE_S3_ACL = 'private'
163 FILES_EXPIRES = 90
164 FILES_URLS_FIELD = 'file_urls'
165 FILES_RESULT_FIELD = 'files'
166
167 HTTPCACHE_ENABLED = False
168 HTTPCACHE_DIR = 'httpcache'
169 HTTPCACHE_IGNORE_MISSING = False
170 HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
171 HTTPCACHE_EXPIRATION_SECS = 0
172 HTTPCACHE_ALWAYS_STORE = False
173 HTTPCACHE_IGNORE_HTTP_CODES = []
174 HTTPCACHE_IGNORE_SCHEMES = ['file']
175 HTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = []
176 HTTPCACHE_DBM_MODULE = 'anydbm' if six.PY2 else 'dbm'
177 HTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'
178 HTTPCACHE_GZIP = False
179
180 HTTPPROXY_AUTH_ENCODING = 'latin-1'
181
182 IMAGES_MIN_WIDTH = 0
183 IMAGES_MIN_HEIGHT = 0
184 IMAGES_EXPIRES = 90
185 IMAGES_THUMBS = {}
186 IMAGES_URLS_FIELD = 'image_urls'
187 IMAGES_RESULT_FIELD = 'images'
188
189 ITEM_PROCESSOR = 'scrapy.pipelines.ItemPipelineManager'
190
191 ITEM_PIPELINES = {}
192 ITEM_PIPELINES_BASE = {}
193
194 LOG_ENABLED = True
195 LOG_ENCODING = 'utf-8'
196 LOG_FORMATTER = 'scrapy.logformatter.LogFormatter'
197 LOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'
198 LOG_DATEFORMAT = '%Y-%m-%d %H:%M:%S'
199 LOG_STDOUT = False
200 LOG_LEVEL = 'DEBUG'
201 LOG_FILE = None
202
203 LOG_UNSERIALIZABLE_REQUESTS = False
204
205 LOGSTATS_INTERVAL = 60.0
206
207 MAIL_HOST = 'localhost'
208 MAIL_PORT = 25
209 MAIL_FROM = 'scrapy@localhost'
210 MAIL_PASS = None
211 MAIL_USER = None
212
213 MEMDEBUG_ENABLED = False # enable memory debugging
214 MEMDEBUG_NOTIFY = [] # send memory debugging report by mail at engine shutdown
215
216 MEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0
217 MEMUSAGE_ENABLED = False
218 MEMUSAGE_LIMIT_MB = 0
219 MEMUSAGE_NOTIFY_MAIL = []
220 MEMUSAGE_REPORT = False
221 MEMUSAGE_WARNING_MB = 0
222
223 METAREFRESH_ENABLED = True
224 METAREFRESH_MAXDELAY = 100
225
226 NEWSPIDER_MODULE = ''
227
228 RANDOMIZE_DOWNLOAD_DELAY = True
229
230 REACTOR_THREADPOOL_MAXSIZE = 10
231
232 REDIRECT_ENABLED = True
233 REDIRECT_MAX_TIMES = 20 # uses Firefox default setting
234 REDIRECT_PRIORITY_ADJUST = +2
235
236 REFERER_ENABLED = True
237
238 RETRY_ENABLED = True
239 RETRY_TIMES = 2 # initial response + 2 retries = 3 requests
240 RETRY_HTTP_CODES = [500, 502, 503, 504, 408]
241 RETRY_PRIORITY_ADJUST = -1
242
243 ROBOTSTXT_OBEY = False
244
245 SCHEDULER = 'scrapy.core.scheduler.Scheduler'
246 SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleLifoDiskQueue'
247 SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.LifoMemoryQueue'
248 SCHEDULER_PRIORITY_QUEUE = 'queuelib.PriorityQueue'
249
250 SPIDER_LOADER_CLASS = 'scrapy.spiderloader.SpiderLoader'
251
252 SPIDER_MIDDLEWARES = {}
253
254 SPIDER_MIDDLEWARES_BASE = {
255 # Engine side
256 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,
257 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,
258 'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,
259 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,
260 'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,
261 # Spider side
262 }
263
264 SPIDER_MODULES = []
265
266 STATS_CLASS = 'scrapy.statscollectors.MemoryStatsCollector'
267 STATS_DUMP = True
268
269 STATSMAILER_RCPTS = []
270
271 TEMPLATES_DIR = abspath(join(dirname(__file__), '..', 'templates'))
272
273 URLLENGTH_LIMIT = 2083
274
275 USER_AGENT = 'Scrapy/%s (+http://scrapy.org)' % import_module('scrapy').__version__
276
277 TELNETCONSOLE_ENABLED = 1
278 TELNETCONSOLE_PORT = [6023, 6073]
279 TELNETCONSOLE_HOST = '127.0.0.1'
280
281 SPIDER_CONTRACTS = {}
282 SPIDER_CONTRACTS_BASE = {
283 'scrapy.contracts.default.UrlContract': 1,
284 'scrapy.contracts.default.ReturnsContract': 2,
285 'scrapy.contracts.default.ScrapesContract': 3,
286 }
287
[end of scrapy/settings/default_settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/settings/default_settings.py b/scrapy/settings/default_settings.py
--- a/scrapy/settings/default_settings.py
+++ b/scrapy/settings/default_settings.py
@@ -93,9 +93,9 @@
'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
- 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 400,
- 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 500,
- 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 550,
+ 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,
+ 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,
+ 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,
'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
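Note: both `DefaultHeadersMiddleware` and `UserAgentMiddleware` populate the header via `setdefault()`, so whichever middleware runs first wins. Moving `DefaultHeadersMiddleware` ahead of `UserAgentMiddleware` therefore lets a `User-Agent` given in `DEFAULT_REQUEST_HEADERS` take precedence over the `USER_AGENT` setting. Illustrated with a plain dict:

```python
headers = {}
# After the patch, DefaultHeadersMiddleware (order 400) touches the header first...
headers.setdefault('User-Agent', 'Mozilla/5.0 (from DEFAULT_REQUEST_HEADERS)')
# ...so UserAgentMiddleware (order 500) can no longer override it.
headers.setdefault('User-Agent', 'Scrapy/1.1 (+http://scrapy.org)')
assert headers['User-Agent'] == 'Mozilla/5.0 (from DEFAULT_REQUEST_HEADERS)'
```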
| {"golden_diff": "diff --git a/scrapy/settings/default_settings.py b/scrapy/settings/default_settings.py\n--- a/scrapy/settings/default_settings.py\n+++ b/scrapy/settings/default_settings.py\n@@ -93,9 +93,9 @@\n 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,\n 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,\n 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,\n- 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 400,\n- 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 500,\n- 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 550,\n+ 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 400,\n+ 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 500,\n+ 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 550,\n 'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,\n 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,\n 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,\n", "issue": "DEFAULT_REQUEST_HEADERS can't set User-Agent\nIf I use DEFAULT_REQUEST_HEADERS to set `User-Agent`, it doesn't work.\n\n``` python\nDEFAULT_REQUEST_HEADERS = {\n 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36',\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',\n 'Accept-Encoding': 'gzip, deflate, sdch',\n 'Accept-Language': 'en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4',\n}\n```\n\nI know I can set `User-Agent` for crawler by using USER_AGENT setting key:\n\n``` python\n# settings.py\nUSER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36'\n```\n\nI think this behaviour is not corresponding with the doc [User Agent](http://doc.scrapy.org/en/latest/topics/settings.html#user-agent) and [DefaultHeadersMiddleware](http://doc.scrapy.org/en/latest/topics/downloader-middleware.html#scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware) (_This middleware sets all default requests headers specified in the DEFAULT_REQUEST_HEADERS setting._)\n\nIf this behaviour is designed, maybe the doc should be modified.\n\nThanks.\n\n", "before_files": [{"content": "\"\"\"\nThis module contains the default values for all settings used by Scrapy.\n\nFor more information about these settings you can read the settings\ndocumentation in docs/topics/settings.rst\n\nScrapy developers, if you add a setting here remember to:\n\n* add it in alphabetical order\n* group similar settings without leaving blank lines\n* add its documentation to the available settings documentation\n (docs/topics/settings.rst)\n\n\"\"\"\n\nimport os\nimport sys\nfrom importlib import import_module\nfrom os.path import join, abspath, dirname\n\nimport six\n\nAJAXCRAWL_ENABLED = False\n\nAUTOTHROTTLE_ENABLED = False\nAUTOTHROTTLE_DEBUG = False\nAUTOTHROTTLE_MAX_DELAY = 60.0\nAUTOTHROTTLE_START_DELAY = 5.0\nAUTOTHROTTLE_TARGET_CONCURRENCY = 1.0\n\nBOT_NAME = 'scrapybot'\n\nCLOSESPIDER_TIMEOUT = 0\nCLOSESPIDER_PAGECOUNT = 0\nCLOSESPIDER_ITEMCOUNT = 0\nCLOSESPIDER_ERRORCOUNT = 0\n\nCOMMANDS_MODULE = ''\n\nCOMPRESSION_ENABLED = True\n\nCONCURRENT_ITEMS = 100\n\nCONCURRENT_REQUESTS = 16\nCONCURRENT_REQUESTS_PER_DOMAIN = 8\nCONCURRENT_REQUESTS_PER_IP = 0\n\nCOOKIES_ENABLED = True\nCOOKIES_DEBUG = False\n\nDEFAULT_ITEM_CLASS = 'scrapy.item.Item'\n\nDEFAULT_REQUEST_HEADERS = 
{\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',\n 'Accept-Language': 'en',\n}\n\nDEPTH_LIMIT = 0\nDEPTH_STATS = True\nDEPTH_PRIORITY = 0\n\nDNSCACHE_ENABLED = True\nDNSCACHE_SIZE = 10000\nDNS_TIMEOUT = 60\n\nDOWNLOAD_DELAY = 0\n\nDOWNLOAD_HANDLERS = {}\nDOWNLOAD_HANDLERS_BASE = {\n 'file': 'scrapy.core.downloader.handlers.file.FileDownloadHandler',\n 'http': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',\n 'https': 'scrapy.core.downloader.handlers.http.HTTPDownloadHandler',\n 's3': 'scrapy.core.downloader.handlers.s3.S3DownloadHandler',\n 'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',\n}\n\nDOWNLOAD_TIMEOUT = 180 # 3mins\n\nDOWNLOAD_MAXSIZE = 1024*1024*1024 # 1024m\nDOWNLOAD_WARNSIZE = 32*1024*1024 # 32m\n\nDOWNLOADER = 'scrapy.core.downloader.Downloader'\n\nDOWNLOADER_HTTPCLIENTFACTORY = 'scrapy.core.downloader.webclient.ScrapyHTTPClientFactory'\nDOWNLOADER_CLIENTCONTEXTFACTORY = 'scrapy.core.downloader.contextfactory.ScrapyClientContextFactory'\nDOWNLOADER_CLIENT_TLS_METHOD = 'TLS' # Use highest TLS/SSL protocol version supported by the platform,\n # also allowing negotiation\n\nDOWNLOADER_MIDDLEWARES = {}\n\nDOWNLOADER_MIDDLEWARES_BASE = {\n # Engine side\n 'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,\n 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,\n 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 400,\n 'scrapy.downloadermiddlewares.retry.RetryMiddleware': 500,\n 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 550,\n 'scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware': 560,\n 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,\n 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,\n 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,\n 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,\n 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,\n 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware': 830,\n 'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,\n 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,\n # Downloader side\n}\n\nDOWNLOADER_STATS = True\n\nDUPEFILTER_CLASS = 'scrapy.dupefilters.RFPDupeFilter'\n\ntry:\n EDITOR = os.environ['EDITOR']\nexcept KeyError:\n if sys.platform == 'win32':\n EDITOR = '%s -m idlelib.idle'\n else:\n EDITOR = 'vi'\n\nEXTENSIONS = {}\n\nEXTENSIONS_BASE = {\n 'scrapy.extensions.corestats.CoreStats': 0,\n 'scrapy.extensions.telnet.TelnetConsole': 0,\n 'scrapy.extensions.memusage.MemoryUsage': 0,\n 'scrapy.extensions.memdebug.MemoryDebugger': 0,\n 'scrapy.extensions.closespider.CloseSpider': 0,\n 'scrapy.extensions.feedexport.FeedExporter': 0,\n 'scrapy.extensions.logstats.LogStats': 0,\n 'scrapy.extensions.spiderstate.SpiderState': 0,\n 'scrapy.extensions.throttle.AutoThrottle': 0,\n}\n\nFEED_TEMPDIR = None\nFEED_URI = None\nFEED_URI_PARAMS = None # a function to extend uri arguments\nFEED_FORMAT = 'jsonlines'\nFEED_STORE_EMPTY = False\nFEED_EXPORT_FIELDS = None\nFEED_STORAGES = {}\nFEED_STORAGES_BASE = {\n '': 'scrapy.extensions.feedexport.FileFeedStorage',\n 'file': 'scrapy.extensions.feedexport.FileFeedStorage',\n 'stdout': 'scrapy.extensions.feedexport.StdoutFeedStorage',\n 's3': 'scrapy.extensions.feedexport.S3FeedStorage',\n 'ftp': 'scrapy.extensions.feedexport.FTPFeedStorage',\n}\nFEED_EXPORTERS = 
{}\nFEED_EXPORTERS_BASE = {\n 'json': 'scrapy.exporters.JsonItemExporter',\n 'jsonlines': 'scrapy.exporters.JsonLinesItemExporter',\n 'jl': 'scrapy.exporters.JsonLinesItemExporter',\n 'csv': 'scrapy.exporters.CsvItemExporter',\n 'xml': 'scrapy.exporters.XmlItemExporter',\n 'marshal': 'scrapy.exporters.MarshalItemExporter',\n 'pickle': 'scrapy.exporters.PickleItemExporter',\n}\n\nFILES_STORE_S3_ACL = 'private'\nFILES_EXPIRES = 90\nFILES_URLS_FIELD = 'file_urls'\nFILES_RESULT_FIELD = 'files'\n\nHTTPCACHE_ENABLED = False\nHTTPCACHE_DIR = 'httpcache'\nHTTPCACHE_IGNORE_MISSING = False\nHTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'\nHTTPCACHE_EXPIRATION_SECS = 0\nHTTPCACHE_ALWAYS_STORE = False\nHTTPCACHE_IGNORE_HTTP_CODES = []\nHTTPCACHE_IGNORE_SCHEMES = ['file']\nHTTPCACHE_IGNORE_RESPONSE_CACHE_CONTROLS = []\nHTTPCACHE_DBM_MODULE = 'anydbm' if six.PY2 else 'dbm'\nHTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'\nHTTPCACHE_GZIP = False\n\nHTTPPROXY_AUTH_ENCODING = 'latin-1'\n\nIMAGES_MIN_WIDTH = 0\nIMAGES_MIN_HEIGHT = 0\nIMAGES_EXPIRES = 90\nIMAGES_THUMBS = {}\nIMAGES_URLS_FIELD = 'image_urls'\nIMAGES_RESULT_FIELD = 'images'\n\nITEM_PROCESSOR = 'scrapy.pipelines.ItemPipelineManager'\n\nITEM_PIPELINES = {}\nITEM_PIPELINES_BASE = {}\n\nLOG_ENABLED = True\nLOG_ENCODING = 'utf-8'\nLOG_FORMATTER = 'scrapy.logformatter.LogFormatter'\nLOG_FORMAT = '%(asctime)s [%(name)s] %(levelname)s: %(message)s'\nLOG_DATEFORMAT = '%Y-%m-%d %H:%M:%S'\nLOG_STDOUT = False\nLOG_LEVEL = 'DEBUG'\nLOG_FILE = None\n\nLOG_UNSERIALIZABLE_REQUESTS = False\n\nLOGSTATS_INTERVAL = 60.0\n\nMAIL_HOST = 'localhost'\nMAIL_PORT = 25\nMAIL_FROM = 'scrapy@localhost'\nMAIL_PASS = None\nMAIL_USER = None\n\nMEMDEBUG_ENABLED = False # enable memory debugging\nMEMDEBUG_NOTIFY = [] # send memory debugging report by mail at engine shutdown\n\nMEMUSAGE_CHECK_INTERVAL_SECONDS = 60.0\nMEMUSAGE_ENABLED = False\nMEMUSAGE_LIMIT_MB = 0\nMEMUSAGE_NOTIFY_MAIL = []\nMEMUSAGE_REPORT = False\nMEMUSAGE_WARNING_MB = 0\n\nMETAREFRESH_ENABLED = True\nMETAREFRESH_MAXDELAY = 100\n\nNEWSPIDER_MODULE = ''\n\nRANDOMIZE_DOWNLOAD_DELAY = True\n\nREACTOR_THREADPOOL_MAXSIZE = 10\n\nREDIRECT_ENABLED = True\nREDIRECT_MAX_TIMES = 20 # uses Firefox default setting\nREDIRECT_PRIORITY_ADJUST = +2\n\nREFERER_ENABLED = True\n\nRETRY_ENABLED = True\nRETRY_TIMES = 2 # initial response + 2 retries = 3 requests\nRETRY_HTTP_CODES = [500, 502, 503, 504, 408]\nRETRY_PRIORITY_ADJUST = -1\n\nROBOTSTXT_OBEY = False\n\nSCHEDULER = 'scrapy.core.scheduler.Scheduler'\nSCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleLifoDiskQueue'\nSCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.LifoMemoryQueue'\nSCHEDULER_PRIORITY_QUEUE = 'queuelib.PriorityQueue'\n\nSPIDER_LOADER_CLASS = 'scrapy.spiderloader.SpiderLoader'\n\nSPIDER_MIDDLEWARES = {}\n\nSPIDER_MIDDLEWARES_BASE = {\n # Engine side\n 'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': 50,\n 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware': 500,\n 'scrapy.spidermiddlewares.referer.RefererMiddleware': 700,\n 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware': 800,\n 'scrapy.spidermiddlewares.depth.DepthMiddleware': 900,\n # Spider side\n}\n\nSPIDER_MODULES = []\n\nSTATS_CLASS = 'scrapy.statscollectors.MemoryStatsCollector'\nSTATS_DUMP = True\n\nSTATSMAILER_RCPTS = []\n\nTEMPLATES_DIR = abspath(join(dirname(__file__), '..', 'templates'))\n\nURLLENGTH_LIMIT = 2083\n\nUSER_AGENT = 'Scrapy/%s (+http://scrapy.org)' % import_module('scrapy').__version__\n\nTELNETCONSOLE_ENABLED = 1\nTELNETCONSOLE_PORT = 
[6023, 6073]\nTELNETCONSOLE_HOST = '127.0.0.1'\n\nSPIDER_CONTRACTS = {}\nSPIDER_CONTRACTS_BASE = {\n 'scrapy.contracts.default.UrlContract': 1,\n 'scrapy.contracts.default.ReturnsContract': 2,\n 'scrapy.contracts.default.ScrapesContract': 3,\n}\n", "path": "scrapy/settings/default_settings.py"}]} | 4,081 | 276 |
gh_patches_debug_24176 | rasdani/github-patches | git_diff | pystiche__pystiche-325 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Download buttons for all examples are broken for RTD

This is a bug in `sphinx-gallery` and should be fixed with sphinx-gallery/sphinx-gallery#706.
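For anyone hitting this before the upstream release: the breakage is visible directly in the generated `galleries/examples/index.rst`, where the `examples_python.zip` / `examples_jupyter.zip` download targets end up pointing at a relative path that does not exist in the built docs, so the buttons 404 on RTD. A quick, hypothetical check (run from the docs source directory; the file layout is assumed from this repo's docs build):

```python
# Hypothetical quick check for broken download targets; paths assumed from this repo's docs build.
import re

with open("galleries/examples/index.rst") as fh:
    for match in re.finditer(r"examples_(?:python|jupyter)\.zip <([^>]+)>", fh.read()):
        print("download target:", match.group(1))
```

If the printed targets are anything other than the bare zip file names, the download buttons will not resolve in the published docs.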
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full list see
4 # the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Imports ---------------------------------------------------------------------------
8
9 import os
10 import shutil
11 import warnings
12 from datetime import datetime
13 from distutils.util import strtobool
14 from os import path
15 from urllib.parse import urljoin
16
17 from sphinx_gallery.sorting import ExampleTitleSortKey, ExplicitOrder
18
19 import torch
20
21 import pystiche
22 from pystiche.misc import download_file
23
24 # -- Run config ------------------------------------------------------------------------
25
26
27 def get_bool_env_var(name, default=False):
28 try:
29 return bool(strtobool(os.environ[name]))
30 except KeyError:
31 return default
32
33
34 run_by_github_actions = get_bool_env_var("GITHUB_ACTIONS")
35 run_by_travis_ci = get_bool_env_var("TRAVIS")
36 run_by_appveyor = get_bool_env_var("APPVEYOR")
37 run_by_rtd = get_bool_env_var("READTHEDOCS")
38 run_by_ci = (
39 run_by_github_actions
40 or run_by_travis_ci
41 or run_by_appveyor
42 or run_by_rtd
43 or get_bool_env_var("CI")
44 )
45
46 # -- Path setup ------------------------------------------------------------------------
47
48 # If extensions (or modules to document with autodoc) are in another directory, add
49 # these directories to sys.path here. If the directory is relative to the documentation
50 # root, use os.path.abspath to make it absolute, like shown here.
51 #
52 # import os
53 # import sys
54 # sys.path.insert(0, os.path.abspath('.'))
55
56 PROJECT_ROOT = path.abspath(path.join(path.dirname(__file__), "..", ".."))
57
58
59 # -- Project information ---------------------------------------------------------------
60
61 project = pystiche.__name__
62 author = pystiche.__author__
63 copyright = f"2019 - {datetime.now().year}, {author}"
64 version = release = pystiche.__version__
65
66
67 # -- General configuration -------------------------------------------------------------
68
69 # Add any Sphinx extension module names here, as strings. They can be extensions coming
70 # with Sphinx (named 'sphinx.ext.*') or your custom ones.
71 extensions = [
72 "sphinx.ext.autodoc",
73 "sphinx.ext.napoleon",
74 "sphinx.ext.coverage",
75 "sphinx.ext.intersphinx",
76 "sphinxcontrib.bibtex",
77 "sphinx_gallery.gen_gallery",
78 "sphinx_autodoc_typehints",
79 ]
80
81 # Add any paths that contain templates here, relative to this directory.
82 templates_path = ["_templates"]
83
84 # List of patterns, relative to source directory, that match files and directories to
85 # ignore when looking for source files. This pattern also affects html_static_path and
86 # html_extra_path.
87 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
88
89 # -- intersphinx configuration ---------------------------------------------------------
90
91 intersphinx_mapping = {
92 "python": ("https://docs.python.org/3.6", None),
93 "torch": ("https://pytorch.org/docs/stable/", None),
94 "torchvision": ("https://pytorch.org/docs/stable/", None),
95 "PIL": ("https://pillow.readthedocs.io/en/stable/", None),
96 "numpy": ("https://numpy.org/doc/1.18/", None),
97 "requests": ("https://requests.readthedocs.io/en/stable/", None),
98 "matplotlib": ("https://matplotlib.org", None),
99 }
100
101
102 # -- sphinx-gallery configuration ------------------------------------------------------
103
104 plot_gallery = get_bool_env_var("PYSTICHE_PLOT_GALLERY", default=True) and not run_by_ci
105 download_gallery = get_bool_env_var("PYSTICHE_DOWNLOAD_GALLERY") or run_by_ci
106
107 if download_gallery:
108 base = "https://download.pystiche.org/galleries/"
109 file = (
110 "master.zip"
111 if pystiche.__is_dev_version__
112 else f"v{pystiche.__base_version__}.zip"
113 )
114
115 url = urljoin(base, file)
116 print(f"Downloading pre-built galleries from {url}")
117 download_file(url, file)
118
119 shutil.unpack_archive(file, extract_dir=".")
120 os.remove(file)
121
122 extensions.remove("sphinx_gallery.gen_gallery")
123 extensions.append("sphinx_gallery.load_style")
124 plot_gallery = False
125
126 if plot_gallery and not torch.cuda.is_available():
127 msg = (
128 "The galleries will be built, but CUDA is not available. "
129 "This will take a long time."
130 )
131 print(msg)
132
133
134 def show_cuda_memory(func):
135 torch.cuda.reset_peak_memory_stats()
136 out = func()
137
138 stats = torch.cuda.memory_stats()
139 peak_bytes_usage = stats["allocated_bytes.all.peak"]
140 memory = peak_bytes_usage / 1024 ** 2
141
142 return memory, out
143
144
145 class PysticheExampleTitleSortKey(ExampleTitleSortKey):
146 def __call__(self, filename):
147 # The beginner example *without* pystiche is placed before the example *with*
148 # to clarify the narrative.
149 if filename == "example_nst_without_pystiche.py":
150 return "1"
151 elif filename == "example_nst_with_pystiche.py":
152 return "2"
153 else:
154 return super().__call__(filename)
155
156
157 sphinx_gallery_conf = {
158 "examples_dirs": path.join(PROJECT_ROOT, "examples"),
159 "gallery_dirs": path.join("galleries", "examples"),
160 "filename_pattern": os.sep + "example_",
161 "line_numbers": True,
162 "remove_config_comments": True,
163 "plot_gallery": plot_gallery,
164 "subsection_order": ExplicitOrder(
165 [
166 path.join("..", "..", "examples", sub_gallery)
167 for sub_gallery in ("beginner", "advanced")
168 ]
169 ),
170 "within_subsection_order": PysticheExampleTitleSortKey,
171 "show_memory": show_cuda_memory if torch.cuda.is_available() else True,
172 }
173
174 # Remove matplotlib agg warnings from generated doc when using plt.show
175 warnings.filterwarnings(
176 "ignore",
177 category=UserWarning,
178 message=(
179 "Matplotlib is currently using agg, which is a non-GUI backend, so cannot show "
180 "the figure."
181 ),
182 )
183
184
185 # -- Options for HTML output -----------------------------------------------------------
186
187 # The theme to use for HTML and HTML Help pages. See the documentation for a list of
188 # builtin themes.
189 html_theme = "sphinx_rtd_theme"
190
191 # Add any paths that contain custom static files (such as style sheets) here, relative
192 # to this directory. They are copied after the builtin static files, so a file named
193 # "default.css" will overwrite the builtin "default.css".
194 # html_static_path = ["_static"]
195
196
197 # -- Latex / Mathjax config ------------------------------------------------------------
198
199 with open("custom_cmds.tex", "r") as fh:
200 custom_cmds = fh.read()
201
202 latex_elements = {"preamble": custom_cmds}
203
204 mathjax_inline = [r"\(" + custom_cmds, r"\)"]
205 mathjax_display = [r"\[" + custom_cmds, r"\]"]
206
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -7,6 +7,7 @@
# -- Imports ---------------------------------------------------------------------------
import os
+import re
import shutil
import warnings
from datetime import datetime
@@ -116,9 +117,26 @@
print(f"Downloading pre-built galleries from {url}")
download_file(url, file)
+ try:
+ shutil.rmtree("galleries")
+ except FileNotFoundError:
+ pass
shutil.unpack_archive(file, extract_dir=".")
os.remove(file)
+ # This is workaround for a bug in sphinx-gallery that replaces absolute with
+ # relative paths. See https://github.com/pmeier/pystiche/pull/325 for details.
+ index_file = path.join("galleries", "examples", "index.rst")
+ with open(index_file, "r") as fh:
+ content = fh.read()
+ content = re.sub(
+ r"(?P<file>examples_(python|jupyter)\.zip) <[\w/.]+>",
+ r"\g<file> <\g<file>>",
+ content,
+ )
+ with open(index_file, "w") as fh:
+ fh.write(content)
+
extensions.remove("sphinx_gallery.gen_gallery")
extensions.append("sphinx_gallery.load_style")
plot_gallery = False
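The heart of the workaround above is the `re.sub` call: it rewrites the two zip download links in the generated `galleries/examples/index.rst` so that their targets are the bare file names again. A self-contained sketch of that substitution on a sample line (the sample text is assumed for illustration; only the link shape matters):

```python
import re

# Assumed example of a broken link line; the real content lives in galleries/examples/index.rst.
line = "Download all examples in Python source code: examples_python.zip <stale/relative/path/examples_python.zip>"

fixed = re.sub(
    r"(?P<file>examples_(python|jupyter)\.zip) <[\w/.]+>",
    r"\g<file> <\g<file>>",
    line,
)
print(fixed)  # ... examples_python.zip <examples_python.zip>
```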
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -7,6 +7,7 @@\n # -- Imports ---------------------------------------------------------------------------\n \n import os\n+import re\n import shutil\n import warnings\n from datetime import datetime\n@@ -116,9 +117,26 @@\n print(f\"Downloading pre-built galleries from {url}\")\n download_file(url, file)\n \n+ try:\n+ shutil.rmtree(\"galleries\")\n+ except FileNotFoundError:\n+ pass\n shutil.unpack_archive(file, extract_dir=\".\")\n os.remove(file)\n \n+ # This is workaround for a bug in sphinx-gallery that replaces absolute with\n+ # relative paths. See https://github.com/pmeier/pystiche/pull/325 for details.\n+ index_file = path.join(\"galleries\", \"examples\", \"index.rst\")\n+ with open(index_file, \"r\") as fh:\n+ content = fh.read()\n+ content = re.sub(\n+ r\"(?P<file>examples_(python|jupyter)\\.zip) <[\\w/.]+>\",\n+ r\"\\g<file> <\\g<file>>\",\n+ content,\n+ )\n+ with open(index_file, \"w\") as fh:\n+ fh.write(content)\n+\n extensions.remove(\"sphinx_gallery.gen_gallery\")\n extensions.append(\"sphinx_gallery.load_style\")\n plot_gallery = False\n", "issue": "Download buttons for all examples are broken for RTD\n\r\n\r\nThis is a bug in `sphinx-gallery` and should be fixed with sphinx-gallery/sphinx-gallery#706.\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full list see\n# the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Imports ---------------------------------------------------------------------------\n\nimport os\nimport shutil\nimport warnings\nfrom datetime import datetime\nfrom distutils.util import strtobool\nfrom os import path\nfrom urllib.parse import urljoin\n\nfrom sphinx_gallery.sorting import ExampleTitleSortKey, ExplicitOrder\n\nimport torch\n\nimport pystiche\nfrom pystiche.misc import download_file\n\n# -- Run config ------------------------------------------------------------------------\n\n\ndef get_bool_env_var(name, default=False):\n try:\n return bool(strtobool(os.environ[name]))\n except KeyError:\n return default\n\n\nrun_by_github_actions = get_bool_env_var(\"GITHUB_ACTIONS\")\nrun_by_travis_ci = get_bool_env_var(\"TRAVIS\")\nrun_by_appveyor = get_bool_env_var(\"APPVEYOR\")\nrun_by_rtd = get_bool_env_var(\"READTHEDOCS\")\nrun_by_ci = (\n run_by_github_actions\n or run_by_travis_ci\n or run_by_appveyor\n or run_by_rtd\n or get_bool_env_var(\"CI\")\n)\n\n# -- Path setup ------------------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory, add\n# these directories to sys.path here. If the directory is relative to the documentation\n# root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nPROJECT_ROOT = path.abspath(path.join(path.dirname(__file__), \"..\", \"..\"))\n\n\n# -- Project information ---------------------------------------------------------------\n\nproject = pystiche.__name__\nauthor = pystiche.__author__\ncopyright = f\"2019 - {datetime.now().year}, {author}\"\nversion = release = pystiche.__version__\n\n\n# -- General configuration -------------------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be extensions coming\n# with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.intersphinx\",\n \"sphinxcontrib.bibtex\",\n \"sphinx_gallery.gen_gallery\",\n \"sphinx_autodoc_typehints\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and directories to\n# ignore when looking for source files. This pattern also affects html_static_path and\n# html_extra_path.\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# -- intersphinx configuration ---------------------------------------------------------\n\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/3.6\", None),\n \"torch\": (\"https://pytorch.org/docs/stable/\", None),\n \"torchvision\": (\"https://pytorch.org/docs/stable/\", None),\n \"PIL\": (\"https://pillow.readthedocs.io/en/stable/\", None),\n \"numpy\": (\"https://numpy.org/doc/1.18/\", None),\n \"requests\": (\"https://requests.readthedocs.io/en/stable/\", None),\n \"matplotlib\": (\"https://matplotlib.org\", None),\n}\n\n\n# -- sphinx-gallery configuration ------------------------------------------------------\n\nplot_gallery = get_bool_env_var(\"PYSTICHE_PLOT_GALLERY\", default=True) and not run_by_ci\ndownload_gallery = get_bool_env_var(\"PYSTICHE_DOWNLOAD_GALLERY\") or run_by_ci\n\nif download_gallery:\n base = \"https://download.pystiche.org/galleries/\"\n file = (\n \"master.zip\"\n if pystiche.__is_dev_version__\n else f\"v{pystiche.__base_version__}.zip\"\n )\n\n url = urljoin(base, file)\n print(f\"Downloading pre-built galleries from {url}\")\n download_file(url, file)\n\n shutil.unpack_archive(file, extract_dir=\".\")\n os.remove(file)\n\n extensions.remove(\"sphinx_gallery.gen_gallery\")\n extensions.append(\"sphinx_gallery.load_style\")\n plot_gallery = False\n\nif plot_gallery and not torch.cuda.is_available():\n msg = (\n \"The galleries will be built, but CUDA is not available. 
\"\n \"This will take a long time.\"\n )\n print(msg)\n\n\ndef show_cuda_memory(func):\n torch.cuda.reset_peak_memory_stats()\n out = func()\n\n stats = torch.cuda.memory_stats()\n peak_bytes_usage = stats[\"allocated_bytes.all.peak\"]\n memory = peak_bytes_usage / 1024 ** 2\n\n return memory, out\n\n\nclass PysticheExampleTitleSortKey(ExampleTitleSortKey):\n def __call__(self, filename):\n # The beginner example *without* pystiche is placed before the example *with*\n # to clarify the narrative.\n if filename == \"example_nst_without_pystiche.py\":\n return \"1\"\n elif filename == \"example_nst_with_pystiche.py\":\n return \"2\"\n else:\n return super().__call__(filename)\n\n\nsphinx_gallery_conf = {\n \"examples_dirs\": path.join(PROJECT_ROOT, \"examples\"),\n \"gallery_dirs\": path.join(\"galleries\", \"examples\"),\n \"filename_pattern\": os.sep + \"example_\",\n \"line_numbers\": True,\n \"remove_config_comments\": True,\n \"plot_gallery\": plot_gallery,\n \"subsection_order\": ExplicitOrder(\n [\n path.join(\"..\", \"..\", \"examples\", sub_gallery)\n for sub_gallery in (\"beginner\", \"advanced\")\n ]\n ),\n \"within_subsection_order\": PysticheExampleTitleSortKey,\n \"show_memory\": show_cuda_memory if torch.cuda.is_available() else True,\n}\n\n# Remove matplotlib agg warnings from generated doc when using plt.show\nwarnings.filterwarnings(\n \"ignore\",\n category=UserWarning,\n message=(\n \"Matplotlib is currently using agg, which is a non-GUI backend, so cannot show \"\n \"the figure.\"\n ),\n)\n\n\n# -- Options for HTML output -----------------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for a list of\n# builtin themes.\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Add any paths that contain custom static files (such as style sheets) here, relative\n# to this directory. They are copied after the builtin static files, so a file named\n# \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = [\"_static\"]\n\n\n# -- Latex / Mathjax config ------------------------------------------------------------\n\nwith open(\"custom_cmds.tex\", \"r\") as fh:\n custom_cmds = fh.read()\n\nlatex_elements = {\"preamble\": custom_cmds}\n\nmathjax_inline = [r\"\\(\" + custom_cmds, r\"\\)\"]\nmathjax_display = [r\"\\[\" + custom_cmds, r\"\\]\"]\n", "path": "docs/source/conf.py"}]} | 2,683 | 316 |