problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_38796 | rasdani/github-patches | git_diff | facebookresearch__xformers-263 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[feat] Add smooth relu to the fused linear layer (triton) activations
# 🚀 Feature
Should be super easy to add [in there](https://github.com/facebookresearch/xformers/blob/main/xformers/triton/k_activations.py), would be interesting to see it benchmarked down the line
## Motivation
See [this](https://ai.googleblog.com/2022/04/reproducibility-in-deep-learning-and.html) and [that](https://arxiv.org/abs/2202.06499)
## Pitch
- easy thing to add
- triton should be fairly efficient there, vs. other options (naive pytorch)
## Alternatives
Not doing it
</issue>
<code>
[start of xformers/components/activations.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6
7 from enum import Enum
8 from typing import Optional
9
10 import torch
11 from torch import nn
12
13
14 class Activation(str, Enum):
15 SquaredReLU = "squared_relu"
16 GeLU = "gelu"
17 LeakyReLU = "leaky_relu"
18 ReLU = "relu"
19
20
21 # For unit testing / parity comparisons, probably not the fastest way
22 class SquaredReLU(nn.Module):
23 def __init__(self) -> None:
24 super().__init__()
25
26 def forward(self, x: torch.Tensor) -> torch.Tensor:
27 x_ = torch.nn.functional.relu(x)
28 return x_ * x_
29
30
31 class Passthrough(nn.Module):
32 def __init__(self) -> None:
33 super().__init__()
34
35 def forward(self, x: torch.Tensor) -> torch.Tensor:
36 return x
37
38
39 def build_activation(activation: Optional[Activation]):
40 if not activation:
41 return Passthrough()
42
43 return {
44 Activation.ReLU: nn.ReLU,
45 Activation.GeLU: nn.GELU,
46 Activation.LeakyReLU: nn.LeakyReLU,
47 Activation.SquaredReLU: SquaredReLU,
48 }[activation]()
49
[end of xformers/components/activations.py]
[start of xformers/triton/k_activations.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2 #
3 # This source code is licensed under the BSD license found in the
4 # LICENSE file in the root directory of this source tree.
5
6 import math
7 from typing import Optional
8
9 import triton
10 import triton.language as tl
11
12 from xformers.components import Activation
13
14 _kAlpha = math.sqrt(2.0 / math.pi)
15
16
17 def get_triton_activation_kernel(activation: Optional[Activation]):
18 return (
19 {
20 Activation.ReLU: relu,
21 Activation.LeakyReLU: leaky_relu,
22 Activation.GeLU: gelu,
23 Activation.SquaredReLU: squared_relu,
24 }[activation]
25 if activation
26 else None
27 )
28
29
30 def get_triton_activation_bwd_kernel(activation: Optional[Activation]):
31 return (
32 {
33 Activation.ReLU: relu_grad,
34 Activation.LeakyReLU: leaky_relu_grad,
35 Activation.GeLU: gelu_grad,
36 Activation.SquaredReLU: squared_relu_grad,
37 }[activation]
38 if activation
39 else None
40 )
41
42
43 @triton.jit
44 def tanh(x):
45 # Tanh is just a scaled sigmoid
46 return 2 * tl.sigmoid(2 * x) - 1
47
48
49 @triton.jit
50 def cosh(x):
51 exp_x = tl.exp(x)
52 return (exp_x + 1.0 / exp_x) * 0.5
53
54
55 # a Triton implementation of the most used activations
56 # See for instance http://arxiv.org/abs/1606.08415 for an overview
57
58 # ReLU
59 @triton.jit
60 def relu(x):
61 """
62 ReLU_ activation function
63
64 .. _ReLU: https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html
65 """
66 zero = 0.0
67 return tl.where(x >= 0, x, zero.to(x.dtype))
68
69
70 @triton.jit
71 def relu_grad(x):
72 # ReLU is different from other activations
73 # in that it does not require the input to retrospectively compute its gradient
74 # here the input is the downstream gradient, and we return the upstream gradient directly
75 zero = 0.0
76 one = 1.0
77 return tl.where(x >= 0, one.to(x.dtype), zero.to(x.dtype))
78
79
80 @triton.jit
81 def squared_relu(x):
82 """
83 Squared ReLU activation, as proposed in the Primer_ paper.
84
85 .. _Primer: https://arxiv.org/abs/2109.08668
86 """
87 x_ = relu(x)
88 return (x_ * x_).to(x.dtype)
89
90
91 @triton.jit
92 def squared_relu_grad(x):
93 return tl.where(x >= 0, 2.0 * x, 0.0)
94
95
96 # Leaky ReLU
97 @triton.jit
98 def leaky_relu(x):
99 """
100 LeakyReLU_ activation
101
102 .. _LeakyReLU: https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html
103 """
104 scale = 0.01 + 0.0
105 scale = scale.to(x.dtype)
106 return tl.where(x >= 0, x, scale * x)
107
108
109 @triton.jit
110 def leaky_relu_grad(x):
111 min_grad = 0.01
112 max_grad = 1
113
114 min_grad = min_grad.to(x.dtype)
115 max_grad = max_grad.to(x.dtype)
116
117 return tl.where(x >= 0, max_grad, min_grad)
118
119
120 @triton.jit
121 def gelu(x):
122 """
123 GeLU_ activation - Gaussian error linear unit
124
125 .. _GeLU: https://arxiv.org/pdf/1606.08415.pdf
126 """
127 return 0.5 * x * (1 + tanh(_kAlpha * (x + 0.044715 * x * x * x)))
128
129
130 @triton.jit
131 def gelu_grad(x):
132 # CREDITS: Fast implementation proposed in
133 # https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/fused_bias_gelu.py#L30
134 tanh_out = tanh(0.79788456 * x * (1 + 0.044715 * x * x))
135 return 0.5 * x * (
136 (1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)
137 ) + 0.5 * (1 + tanh_out)
138
[end of xformers/triton/k_activations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/xformers/components/activations.py b/xformers/components/activations.py
--- a/xformers/components/activations.py
+++ b/xformers/components/activations.py
@@ -16,6 +16,7 @@
GeLU = "gelu"
LeakyReLU = "leaky_relu"
ReLU = "relu"
+ SmeLU = "smelu"
# For unit testing / parity comparisons, probably not the fastest way
@@ -28,6 +29,24 @@
return x_ * x_
+class SmeLU(nn.Module):
+ def __init__(self, beta: float = 2.0) -> None:
+ super().__init__()
+ self.beta = beta
+
+ def forward(self, x: torch.Tensor) -> torch.Tensor:
+ relu = torch.where(
+ x >= self.beta,
+ x,
+ torch.tensor([0.0], device=x.device, dtype=x.dtype),
+ )
+ return torch.where(
+ torch.abs(x) <= self.beta,
+ ((x + self.beta) ** 2).type_as(x) / (4.0 * self.beta),
+ relu,
+ )
+
+
class Passthrough(nn.Module):
def __init__(self) -> None:
super().__init__()
@@ -45,4 +64,5 @@
Activation.GeLU: nn.GELU,
Activation.LeakyReLU: nn.LeakyReLU,
Activation.SquaredReLU: SquaredReLU,
+ Activation.SmeLU: SmeLU,
}[activation]()
diff --git a/xformers/triton/k_activations.py b/xformers/triton/k_activations.py
--- a/xformers/triton/k_activations.py
+++ b/xformers/triton/k_activations.py
@@ -21,6 +21,7 @@
Activation.LeakyReLU: leaky_relu,
Activation.GeLU: gelu,
Activation.SquaredReLU: squared_relu,
+ Activation.SmeLU: smelu,
}[activation]
if activation
else None
@@ -34,6 +35,7 @@
Activation.LeakyReLU: leaky_relu_grad,
Activation.GeLU: gelu_grad,
Activation.SquaredReLU: squared_relu_grad,
+ Activation.SmeLU: smelu_grad,
}[activation]
if activation
else None
@@ -135,3 +137,32 @@
return 0.5 * x * (
(1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)
) + 0.5 * (1 + tanh_out)
+
+
[email protected]
+def smelu(x):
+ """
+ SmeLU_ activation - Smooth ReLU with beta=2.0
+
+ .. _SmeLU: https://arxiv.org/pdf/2202.06499.pdf
+ """
+ zero = 0.0
+ four = 4.0
+ two = 2.0
+ beta = two.to(x.dtype)
+
+ output = (x + beta) * (x + beta) / (four.to(x.dtype) * beta)
+ relu = tl.where(x >= beta, x, zero.to(x.dtype))
+ return tl.where(tl.abs(x) <= beta, output, relu)
+
+
[email protected]
+def smelu_grad(x):
+ zero = 0.0
+ one = 1.0
+ two = 2.0
+ beta = two.to(x.dtype)
+
+ grad = (beta + x) / (two.to(x.dtype) * beta)
+ relu_grad = tl.where(x >= beta, one.to(x.dtype), zero.to(x.dtype))
+ return tl.where(tl.abs(x) <= beta, grad, relu_grad)
|
{"golden_diff": "diff --git a/xformers/components/activations.py b/xformers/components/activations.py\n--- a/xformers/components/activations.py\n+++ b/xformers/components/activations.py\n@@ -16,6 +16,7 @@\n GeLU = \"gelu\"\n LeakyReLU = \"leaky_relu\"\n ReLU = \"relu\"\n+ SmeLU = \"smelu\"\n \n \n # For unit testing / parity comparisons, probably not the fastest way\n@@ -28,6 +29,24 @@\n return x_ * x_\n \n \n+class SmeLU(nn.Module):\n+ def __init__(self, beta: float = 2.0) -> None:\n+ super().__init__()\n+ self.beta = beta\n+\n+ def forward(self, x: torch.Tensor) -> torch.Tensor:\n+ relu = torch.where(\n+ x >= self.beta,\n+ x,\n+ torch.tensor([0.0], device=x.device, dtype=x.dtype),\n+ )\n+ return torch.where(\n+ torch.abs(x) <= self.beta,\n+ ((x + self.beta) ** 2).type_as(x) / (4.0 * self.beta),\n+ relu,\n+ )\n+\n+\n class Passthrough(nn.Module):\n def __init__(self) -> None:\n super().__init__()\n@@ -45,4 +64,5 @@\n Activation.GeLU: nn.GELU,\n Activation.LeakyReLU: nn.LeakyReLU,\n Activation.SquaredReLU: SquaredReLU,\n+ Activation.SmeLU: SmeLU,\n }[activation]()\ndiff --git a/xformers/triton/k_activations.py b/xformers/triton/k_activations.py\n--- a/xformers/triton/k_activations.py\n+++ b/xformers/triton/k_activations.py\n@@ -21,6 +21,7 @@\n Activation.LeakyReLU: leaky_relu,\n Activation.GeLU: gelu,\n Activation.SquaredReLU: squared_relu,\n+ Activation.SmeLU: smelu,\n }[activation]\n if activation\n else None\n@@ -34,6 +35,7 @@\n Activation.LeakyReLU: leaky_relu_grad,\n Activation.GeLU: gelu_grad,\n Activation.SquaredReLU: squared_relu_grad,\n+ Activation.SmeLU: smelu_grad,\n }[activation]\n if activation\n else None\n@@ -135,3 +137,32 @@\n return 0.5 * x * (\n (1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)\n ) + 0.5 * (1 + tanh_out)\n+\n+\[email protected]\n+def smelu(x):\n+ \"\"\"\n+ SmeLU_ activation - Smooth ReLU with beta=2.0\n+\n+ .. _SmeLU: https://arxiv.org/pdf/2202.06499.pdf\n+ \"\"\"\n+ zero = 0.0\n+ four = 4.0\n+ two = 2.0\n+ beta = two.to(x.dtype)\n+\n+ output = (x + beta) * (x + beta) / (four.to(x.dtype) * beta)\n+ relu = tl.where(x >= beta, x, zero.to(x.dtype))\n+ return tl.where(tl.abs(x) <= beta, output, relu)\n+\n+\[email protected]\n+def smelu_grad(x):\n+ zero = 0.0\n+ one = 1.0\n+ two = 2.0\n+ beta = two.to(x.dtype)\n+\n+ grad = (beta + x) / (two.to(x.dtype) * beta)\n+ relu_grad = tl.where(x >= beta, one.to(x.dtype), zero.to(x.dtype))\n+ return tl.where(tl.abs(x) <= beta, grad, relu_grad)\n", "issue": "[feat] Add smooth relu to the fused linear layer (triton) activations\n# \ud83d\ude80 Feature\r\nShould be super easy to add [in there](https://github.com/facebookresearch/xformers/blob/main/xformers/triton/k_activations.py), would be interesting to see it benchmarked down the line \r\n\r\n## Motivation\r\nSee [this](https://ai.googleblog.com/2022/04/reproducibility-in-deep-learning-and.html) and [that](https://arxiv.org/abs/2202.06499)\r\n\r\n## Pitch\r\n- easy thing to add\r\n- triton should be fairly efficient there, vs. other options (naive pytorch)\r\n\r\n## Alternatives\r\nNot doing it\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\n\nfrom enum import Enum\nfrom typing import Optional\n\nimport torch\nfrom torch import nn\n\n\nclass Activation(str, Enum):\n SquaredReLU = \"squared_relu\"\n GeLU = \"gelu\"\n LeakyReLU = \"leaky_relu\"\n ReLU = \"relu\"\n\n\n# For unit testing / parity comparisons, probably not the fastest way\nclass SquaredReLU(nn.Module):\n def __init__(self) -> None:\n super().__init__()\n\n def forward(self, x: torch.Tensor) -> torch.Tensor:\n x_ = torch.nn.functional.relu(x)\n return x_ * x_\n\n\nclass Passthrough(nn.Module):\n def __init__(self) -> None:\n super().__init__()\n\n def forward(self, x: torch.Tensor) -> torch.Tensor:\n return x\n\n\ndef build_activation(activation: Optional[Activation]):\n if not activation:\n return Passthrough()\n\n return {\n Activation.ReLU: nn.ReLU,\n Activation.GeLU: nn.GELU,\n Activation.LeakyReLU: nn.LeakyReLU,\n Activation.SquaredReLU: SquaredReLU,\n }[activation]()\n", "path": "xformers/components/activations.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.\n#\n# This source code is licensed under the BSD license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport math\nfrom typing import Optional\n\nimport triton\nimport triton.language as tl\n\nfrom xformers.components import Activation\n\n_kAlpha = math.sqrt(2.0 / math.pi)\n\n\ndef get_triton_activation_kernel(activation: Optional[Activation]):\n return (\n {\n Activation.ReLU: relu,\n Activation.LeakyReLU: leaky_relu,\n Activation.GeLU: gelu,\n Activation.SquaredReLU: squared_relu,\n }[activation]\n if activation\n else None\n )\n\n\ndef get_triton_activation_bwd_kernel(activation: Optional[Activation]):\n return (\n {\n Activation.ReLU: relu_grad,\n Activation.LeakyReLU: leaky_relu_grad,\n Activation.GeLU: gelu_grad,\n Activation.SquaredReLU: squared_relu_grad,\n }[activation]\n if activation\n else None\n )\n\n\[email protected]\ndef tanh(x):\n # Tanh is just a scaled sigmoid\n return 2 * tl.sigmoid(2 * x) - 1\n\n\[email protected]\ndef cosh(x):\n exp_x = tl.exp(x)\n return (exp_x + 1.0 / exp_x) * 0.5\n\n\n# a Triton implementation of the most used activations\n# See for instance http://arxiv.org/abs/1606.08415 for an overview\n\n# ReLU\[email protected]\ndef relu(x):\n \"\"\"\n ReLU_ activation function\n\n .. _ReLU: https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html\n \"\"\"\n zero = 0.0\n return tl.where(x >= 0, x, zero.to(x.dtype))\n\n\[email protected]\ndef relu_grad(x):\n # ReLU is different from other activations\n # in that it does not require the input to retrospectively compute its gradient\n # here the input is the downstream gradient, and we return the upstream gradient directly\n zero = 0.0\n one = 1.0\n return tl.where(x >= 0, one.to(x.dtype), zero.to(x.dtype))\n\n\[email protected]\ndef squared_relu(x):\n \"\"\"\n Squared ReLU activation, as proposed in the Primer_ paper.\n\n .. _Primer: https://arxiv.org/abs/2109.08668\n \"\"\"\n x_ = relu(x)\n return (x_ * x_).to(x.dtype)\n\n\[email protected]\ndef squared_relu_grad(x):\n return tl.where(x >= 0, 2.0 * x, 0.0)\n\n\n# Leaky ReLU\[email protected]\ndef leaky_relu(x):\n \"\"\"\n LeakyReLU_ activation\n\n .. 
_LeakyReLU: https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html\n \"\"\"\n scale = 0.01 + 0.0\n scale = scale.to(x.dtype)\n return tl.where(x >= 0, x, scale * x)\n\n\[email protected]\ndef leaky_relu_grad(x):\n min_grad = 0.01\n max_grad = 1\n\n min_grad = min_grad.to(x.dtype)\n max_grad = max_grad.to(x.dtype)\n\n return tl.where(x >= 0, max_grad, min_grad)\n\n\[email protected]\ndef gelu(x):\n \"\"\"\n GeLU_ activation - Gaussian error linear unit\n\n .. _GeLU: https://arxiv.org/pdf/1606.08415.pdf\n \"\"\"\n return 0.5 * x * (1 + tanh(_kAlpha * (x + 0.044715 * x * x * x)))\n\n\[email protected]\ndef gelu_grad(x):\n # CREDITS: Fast implementation proposed in\n # https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/fused_bias_gelu.py#L30\n tanh_out = tanh(0.79788456 * x * (1 + 0.044715 * x * x))\n return 0.5 * x * (\n (1 - tanh_out * tanh_out) * (0.79788456 + 0.1070322243 * x * x)\n ) + 0.5 * (1 + tanh_out)\n", "path": "xformers/triton/k_activations.py"}]}
| 2,450 | 886 |
gh_patches_debug_5208 | rasdani/github-patches | git_diff | ansible__ansible-11146 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CLI become options are ultimately ignored
At this point I am not exactly sure where this is happening, however the become options provided on the CLI are ultimately ignored.
I have however determined that when `ConnectionInformation` is initiated, that the attributes are properly set via the 'set_options`method. Immediately afterwards,`set_play`is executed and the options are set to`None`.
Commenting out the call to `set_play`, the attributes on `ConnectionInformation` remain correct, but by the time that `make_become_cmd` is executed, `self.become` has been set to `False`.
Other than `set_play` overwriting the variables when it probably shouldn't, I haven't been able to track down what else is setting `ConnectionInformation.become` to `False` before `make_become_cmd`.
</issue>
<code>
[start of lib/ansible/playbook/become.py]
1 # (c) 2012-2014, Michael DeHaan <[email protected]>
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17
18 # Make coding more python3-ish
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 from ansible import constants as C
23 from ansible.errors import AnsibleError, AnsibleParserError
24 from ansible.playbook.attribute import Attribute, FieldAttribute
25 #from ansible.utils.display import deprecated
26
27 class Become:
28
29 # Privlege escalation
30 _become = FieldAttribute(isa='bool', default=False)
31 _become_method = FieldAttribute(isa='string')
32 _become_user = FieldAttribute(isa='string')
33 _become_pass = FieldAttribute(isa='string')
34
35 def __init__(self):
36 return super(Become, self).__init__()
37
38 def _detect_privilege_escalation_conflict(self, ds):
39
40 # Fail out if user specifies conflicting privilege escalations
41 has_become = 'become' in ds or 'become_user'in ds
42 has_sudo = 'sudo' in ds or 'sudo_user' in ds
43 has_su = 'su' in ds or 'su_user' in ds
44
45 if has_become:
46 msg = 'The become params ("become", "become_user") and'
47 if has_sudo:
48 raise AnsibleParserError('%s sudo params ("sudo", "sudo_user") cannot be used together' % msg)
49 elif has_su:
50 raise AnsibleParserError('%s su params ("su", "su_user") cannot be used together' % msg)
51 elif has_sudo and has_su:
52 raise AnsibleParserError('sudo params ("sudo", "sudo_user") and su params ("su", "su_user") cannot be used together')
53
54 def _preprocess_data_become(self, ds):
55 """Preprocess the playbook data for become attributes
56
57 This is called from the Base object's preprocess_data() method which
58 in turn is called pretty much anytime any sort of playbook object
59 (plays, tasks, blocks, etc) are created.
60 """
61
62 self._detect_privilege_escalation_conflict(ds)
63
64 # Setting user implies setting become/sudo/su to true
65 if 'become_user' in ds and not ds.get('become', False):
66 ds['become'] = True
67
68 # Privilege escalation, backwards compatibility for sudo/su
69 if 'sudo' in ds or 'sudo_user' in ds:
70 ds['become_method'] = 'sudo'
71 if 'sudo' in ds:
72 ds['become'] = ds['sudo']
73 del ds['sudo']
74 else:
75 ds['become'] = True
76 if 'sudo_user' in ds:
77 ds['become_user'] = ds['sudo_user']
78 del ds['sudo_user']
79
80 #deprecated("Instead of sudo/sudo_user, use become/become_user and set become_method to 'sudo' (default)")
81
82 elif 'su' in ds or 'su_user' in ds:
83 ds['become_method'] = 'su'
84 if 'su' in ds:
85 ds['become'] = ds['su']
86 del ds['su']
87 else:
88 ds['become'] = True
89 if 'su_user' in ds:
90 ds['become_user'] = ds['su_user']
91 del ds['su_user']
92
93 #deprecated("Instead of su/su_user, use become/become_user and set become_method to 'su' (default is sudo)")
94
95 # if we are becoming someone else, but some fields are unset,
96 # make sure they're initialized to the default config values
97 if ds.get('become', False):
98 if ds.get('become_method', None) is None:
99 ds['become_method'] = C.DEFAULT_BECOME_METHOD
100 if ds.get('become_user', None) is None:
101 ds['become_user'] = C.DEFAULT_BECOME_USER
102
103 return ds
104
105 def _get_attr_become(self):
106 '''
107 Override for the 'become' getattr fetcher, used from Base.
108 '''
109 if hasattr(self, '_get_parent_attribute'):
110 return self._get_parent_attribute('become')
111 else:
112 return self._attributes['become']
113
114 def _get_attr_become_method(self):
115 '''
116 Override for the 'become_method' getattr fetcher, used from Base.
117 '''
118 if hasattr(self, '_get_parent_attribute'):
119 return self._get_parent_attribute('become_method')
120 else:
121 return self._attributes['become_method']
122
123 def _get_attr_become_user(self):
124 '''
125 Override for the 'become_user' getattr fetcher, used from Base.
126 '''
127 if hasattr(self, '_get_parent_attribute'):
128 return self._get_parent_attribute('become_user')
129 else:
130 return self._attributes['become_user']
131
132 def _get_attr_become_password(self):
133 '''
134 Override for the 'become_password' getattr fetcher, used from Base.
135 '''
136 if hasattr(self, '_get_parent_attribute'):
137 return self._get_parent_attribute('become_password')
138 else:
139 return self._attributes['become_password']
140
141
142
[end of lib/ansible/playbook/become.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/ansible/playbook/become.py b/lib/ansible/playbook/become.py
--- a/lib/ansible/playbook/become.py
+++ b/lib/ansible/playbook/become.py
@@ -27,7 +27,7 @@
class Become:
# Privlege escalation
- _become = FieldAttribute(isa='bool', default=False)
+ _become = FieldAttribute(isa='bool')
_become_method = FieldAttribute(isa='string')
_become_user = FieldAttribute(isa='string')
_become_pass = FieldAttribute(isa='string')
|
{"golden_diff": "diff --git a/lib/ansible/playbook/become.py b/lib/ansible/playbook/become.py\n--- a/lib/ansible/playbook/become.py\n+++ b/lib/ansible/playbook/become.py\n@@ -27,7 +27,7 @@\n class Become:\n \n # Privlege escalation\n- _become = FieldAttribute(isa='bool', default=False)\n+ _become = FieldAttribute(isa='bool')\n _become_method = FieldAttribute(isa='string')\n _become_user = FieldAttribute(isa='string')\n _become_pass = FieldAttribute(isa='string')\n", "issue": "CLI become options are ultimately ignored\nAt this point I am not exactly sure where this is happening, however the become options provided on the CLI are ultimately ignored.\n\nI have however determined that when `ConnectionInformation` is initiated, that the attributes are properly set via the 'set_options`method. Immediately afterwards,`set_play`is executed and the options are set to`None`.\n\nCommenting out the call to `set_play`, the attributes on `ConnectionInformation` remain correct, but by the time that `make_become_cmd` is executed, `self.become` has been set to `False`.\n\nOther than `set_play` overwriting the variables when it probably shouldn't, I haven't been able to track down what else is setting `ConnectionInformation.become` to `False` before `make_become_cmd`.\n\n", "before_files": [{"content": "# (c) 2012-2014, Michael DeHaan <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom ansible import constants as C\nfrom ansible.errors import AnsibleError, AnsibleParserError\nfrom ansible.playbook.attribute import Attribute, FieldAttribute\n#from ansible.utils.display import deprecated\n\nclass Become:\n\n # Privlege escalation\n _become = FieldAttribute(isa='bool', default=False)\n _become_method = FieldAttribute(isa='string')\n _become_user = FieldAttribute(isa='string')\n _become_pass = FieldAttribute(isa='string')\n\n def __init__(self):\n return super(Become, self).__init__()\n\n def _detect_privilege_escalation_conflict(self, ds):\n\n # Fail out if user specifies conflicting privilege escalations\n has_become = 'become' in ds or 'become_user'in ds\n has_sudo = 'sudo' in ds or 'sudo_user' in ds\n has_su = 'su' in ds or 'su_user' in ds\n\n if has_become:\n msg = 'The become params (\"become\", \"become_user\") and'\n if has_sudo:\n raise AnsibleParserError('%s sudo params (\"sudo\", \"sudo_user\") cannot be used together' % msg)\n elif has_su:\n raise AnsibleParserError('%s su params (\"su\", \"su_user\") cannot be used together' % msg)\n elif has_sudo and has_su:\n raise AnsibleParserError('sudo params (\"sudo\", \"sudo_user\") and su params (\"su\", \"su_user\") cannot be used together')\n\n def _preprocess_data_become(self, ds):\n \"\"\"Preprocess the playbook data for become attributes\n\n This is called from the Base object's preprocess_data() method which\n in turn is called pretty much anytime any sort of playbook object\n (plays, tasks, blocks, etc) are created.\n \"\"\"\n\n self._detect_privilege_escalation_conflict(ds)\n\n # Setting user implies setting become/sudo/su to true\n if 'become_user' in ds and not ds.get('become', False):\n ds['become'] = True\n\n # Privilege escalation, backwards compatibility for sudo/su\n if 'sudo' in ds or 'sudo_user' in ds:\n ds['become_method'] = 'sudo'\n if 'sudo' in ds:\n ds['become'] = ds['sudo']\n del ds['sudo']\n else:\n ds['become'] = True\n if 'sudo_user' in ds:\n ds['become_user'] = ds['sudo_user']\n del ds['sudo_user']\n\n #deprecated(\"Instead of sudo/sudo_user, use become/become_user and set become_method to 'sudo' (default)\")\n\n elif 'su' in ds or 'su_user' in ds:\n ds['become_method'] = 'su'\n if 'su' in ds:\n ds['become'] = ds['su']\n del ds['su']\n else:\n ds['become'] = True\n if 'su_user' in ds:\n ds['become_user'] = ds['su_user']\n del ds['su_user']\n\n #deprecated(\"Instead of su/su_user, use become/become_user and set become_method to 'su' (default is sudo)\")\n\n # if we are becoming someone else, but some fields are unset,\n # make sure they're initialized to the default config values\n if ds.get('become', False):\n if ds.get('become_method', None) is None:\n ds['become_method'] = C.DEFAULT_BECOME_METHOD\n if ds.get('become_user', None) is None:\n ds['become_user'] = C.DEFAULT_BECOME_USER\n\n return ds\n\n def _get_attr_become(self):\n '''\n Override for the 'become' getattr fetcher, used from Base.\n '''\n if hasattr(self, '_get_parent_attribute'):\n return self._get_parent_attribute('become')\n else:\n return self._attributes['become']\n\n def _get_attr_become_method(self):\n '''\n Override for the 'become_method' getattr fetcher, used from Base.\n '''\n if hasattr(self, '_get_parent_attribute'):\n return self._get_parent_attribute('become_method')\n else:\n return self._attributes['become_method']\n\n def 
_get_attr_become_user(self):\n '''\n Override for the 'become_user' getattr fetcher, used from Base.\n '''\n if hasattr(self, '_get_parent_attribute'):\n return self._get_parent_attribute('become_user')\n else:\n return self._attributes['become_user']\n\n def _get_attr_become_password(self):\n '''\n Override for the 'become_password' getattr fetcher, used from Base.\n '''\n if hasattr(self, '_get_parent_attribute'):\n return self._get_parent_attribute('become_password')\n else:\n return self._attributes['become_password']\n\n\n", "path": "lib/ansible/playbook/become.py"}]}
| 2,327 | 139 |
gh_patches_debug_23733 | rasdani/github-patches | git_diff | e2nIEE__pandapower-857 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ConstControl: Modifying scaling of all loads based on input dataframe
Hello,
I am trying to give an input dataframe to control the scaling of all loads for each time_step in a timeseries simulation
At the end of the simulation, the res_bus[p_mw] is correctly varying according to the given dataframe.
Nevertheless, the results of bus voltages and lines loading appear constant; it sounds like the power flow results don't take into account the load scaling.
This is how I create the controller:
```python
def create_controllers(net, ds):
ConstControl(net, element='load', variable='scaling', element_index=net.load.index,
data_source=ds, profile_name=["Load_p"])
```
where `Load_p` is the dataframe column with the scaling factors.
I have the feeling that this approach should work, so I may be doing something wrong.
Do you have any suggestions about how to handle it?
Best regards,
Michele
ConstControl: Modifying scaling of all loads based on input dataframe
Hello,
I am trying to give an input dataframe to control the scaling of all loads for each time_step in a timeseries simulation
At the end of the simulation, the res_bus[p_mw] is correctly varying according to the given dataframe.
Nevertheless, the results of bus voltages and lines loading appear constant; it sounds like the power flow results don't take into account the load scaling.
This is how I create the controller:
```python
def create_controllers(net, ds):
ConstControl(net, element='load', variable='scaling', element_index=net.load.index,
data_source=ds, profile_name=["Load_p"])
```
where `Load_p` is the dataframe column with the scaling factors.
I have the feeling that this approach should work, so I may be doing something wrong.
Do you have any suggestions about how to handle it?
Best regards,
Michele
</issue>
<code>
[start of pandapower/control/controller/const_control.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6 import numpy as np
7 from pandas import Index
8 from pandapower.control.basic_controller import Controller
9
10 try:
11 import pplog as logging
12 except ImportError:
13 import logging
14
15 logger = logging.getLogger(__name__)
16
17
18 class ConstControl(Controller):
19 """
20 Class representing a generic time series controller for a specified element and variable
21 Control strategy: "No Control" -> just updates timeseries
22
23 INPUT:
24
25 **net** (attrdict) - The net in which the controller resides
26
27 **element** - element table ('sgen', 'load' etc.)
28
29 **variable** - variable ('p_mw', 'q_mvar', 'vm_pu', 'tap_pos' etc.)
30
31 **element_index** (int[]) - IDs of the controlled elements
32
33 **data_source** (obj) - The data source that provides profile data
34
35 **profile_name** (str[]) - The profile names of the elements in the data source
36
37
38 OPTIONAL:
39
40 **scale_factor** (real, 1.0) - Scaling factor for time series input values
41
42 **in_service** (bool, True) - Indicates if the controller is currently in_service
43
44 **recycle** (bool, True) - Re-use of internal-data in a time series loop.
45
46 **drop_same_existing_ctrl** (bool, False) - Indicates if already existing controllers of the same type and with the same matching parameters (e.g. at same element) should be dropped
47
48 .. note:: If multiple elements are represented with one controller, the data source must have integer columns. At the moment, only the DFData format is tested for the multiple const control.
49 """
50
51 def __init__(self, net, element, variable, element_index, profile_name=None, data_source=None,
52 scale_factor=1.0, in_service=True, recycle=True, order=0, level=0,
53 drop_same_existing_ctrl=False, set_q_from_cosphi=False, matching_params=None, initial_run=False,
54 **kwargs):
55 # just calling init of the parent
56 if matching_params is None:
57 matching_params = {"element": element, "variable": variable,
58 "element_index": element_index}
59 super().__init__(net, in_service=in_service, recycle=recycle, order=order, level=level,
60 drop_same_existing_ctrl=drop_same_existing_ctrl,
61 matching_params=matching_params, initial_run = initial_run,
62 **kwargs)
63 self.matching_params = {"element": element, "variable": variable,
64 "element_index": element_index}
65
66 # data source for time series values
67 self.data_source = data_source
68 # ids of sgens or loads
69 self.element_index = element_index
70 # element type
71 self.element = element
72 self.variable = variable
73 self.values = None
74 self.profile_name = profile_name
75 self.scale_factor = scale_factor
76 if set_q_from_cosphi:
77 logger.error("Parameter set_q_from_cosphi deprecated!")
78 raise ValueError
79 self.applied = False
80 self.initial_run = initial_run
81 # write functions faster, depending on type of self.element_index
82 if isinstance(self.element_index, int):
83 # use .at if element_index is integer for speedup
84 self.write = "single_index"
85 # commenting this out for now, see issue 609
86 # elif self.net[self.element].index.equals(Index(self.element_index)):
87 # # use : indexer if all elements are in index
88 # self.write = "all_index"
89 else:
90 # use common .loc
91 self.write = "loc"
92 self.set_recycle()
93
94 def set_recycle(self):
95 allowed_elements = ["load", "sgen", "storage", "gen", "ext_grid", "trafo", "trafo3w", "line"]
96 if self.recycle is False or self.element not in allowed_elements:
97 # if recycle is set to False by the user when creating the controller it is deactivated or when
98 # const control controls an element which is not able to be recycled
99 self.recycle = False
100 return
101 # these variables determine what is re-calculated during a time series run
102 recycle = dict(trafo=False, gen=False, bus_pq=False)
103 if self.element in ["sgen", "load", "storage"] and self.variable in ["p_mw", "q_mvar"]:
104 recycle["bus_pq"] = True
105 if self.element in ["gen"] and self.variable in ["p_mw", "vm_pu"] \
106 or self.element in ["ext_grid"] and self.variable in ["vm_pu", "va_degree"]:
107 recycle["gen"] = True
108 if self.element in ["trafo", "trafo3w", "line"]:
109 recycle["trafo"] = True
110 self.recycle = recycle
111
112 def write_to_net(self):
113 """
114 Writes to self.element at index self.element_index in the column self.variable the data
115 from self.values
116 """
117 # write functions faster, depending on type of self.element_index
118 if self.write == "single_index":
119 self._write_to_single_index()
120 elif self.write == "all_index":
121 self._write_to_all_index()
122 elif self.write == "loc":
123 self._write_with_loc()
124 else:
125 raise NotImplementedError("ConstControl: self.write must be one of "
126 "['single_index', 'all_index', 'loc']")
127
128 def time_step(self, time):
129 """
130 Get the values of the element from data source
131 """
132 self.values = self.data_source.get_time_step_value(time_step=time,
133 profile_name=self.profile_name,
134 scale_factor=self.scale_factor)
135 # self.write_to_net()
136
137 def initialize_control(self):
138 """
139 At the beginning of each run_control call reset applied-flag
140 """
141 #
142 if self.data_source is None:
143 self.values = self.net[self.element][self.variable].loc[self.element_index]
144 self.applied = False
145
146 def is_converged(self):
147 """
148 Actual implementation of the convergence criteria: If controller is applied, it can stop
149 """
150 return self.applied
151
152 def control_step(self):
153 """
154 Write to pandapower net by calling write_to_net()
155 """
156 if self.values is not None:
157 self.write_to_net()
158 self.applied = True
159
160 def _write_to_single_index(self):
161 self.net[self.element].at[self.element_index, self.variable] = self.values
162
163 def _write_to_all_index(self):
164 self.net[self.element].loc[:, self.variable] = self.values
165
166 def _write_with_loc(self):
167 self.net[self.element].loc[self.element_index, self.variable] = self.values
168
[end of pandapower/control/controller/const_control.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pandapower/control/controller/const_control.py b/pandapower/control/controller/const_control.py
--- a/pandapower/control/controller/const_control.py
+++ b/pandapower/control/controller/const_control.py
@@ -100,14 +100,16 @@
return
# these variables determine what is re-calculated during a time series run
recycle = dict(trafo=False, gen=False, bus_pq=False)
- if self.element in ["sgen", "load", "storage"] and self.variable in ["p_mw", "q_mvar"]:
+ if self.element in ["sgen", "load", "storage"] and self.variable in ["p_mw", "q_mvar", "scaling"]:
recycle["bus_pq"] = True
- if self.element in ["gen"] and self.variable in ["p_mw", "vm_pu"] \
+ if self.element in ["gen"] and self.variable in ["p_mw", "vm_pu", "scaling"] \
or self.element in ["ext_grid"] and self.variable in ["vm_pu", "va_degree"]:
recycle["gen"] = True
if self.element in ["trafo", "trafo3w", "line"]:
recycle["trafo"] = True
- self.recycle = recycle
+ # recycle is either the dict what should be recycled
+ # or False if the element + variable combination is not supported
+ self.recycle = recycle if not any(list(recycle.values())) else False
def write_to_net(self):
"""
|
{"golden_diff": "diff --git a/pandapower/control/controller/const_control.py b/pandapower/control/controller/const_control.py\n--- a/pandapower/control/controller/const_control.py\n+++ b/pandapower/control/controller/const_control.py\n@@ -100,14 +100,16 @@\n return\n # these variables determine what is re-calculated during a time series run\n recycle = dict(trafo=False, gen=False, bus_pq=False)\n- if self.element in [\"sgen\", \"load\", \"storage\"] and self.variable in [\"p_mw\", \"q_mvar\"]:\n+ if self.element in [\"sgen\", \"load\", \"storage\"] and self.variable in [\"p_mw\", \"q_mvar\", \"scaling\"]:\n recycle[\"bus_pq\"] = True\n- if self.element in [\"gen\"] and self.variable in [\"p_mw\", \"vm_pu\"] \\\n+ if self.element in [\"gen\"] and self.variable in [\"p_mw\", \"vm_pu\", \"scaling\"] \\\n or self.element in [\"ext_grid\"] and self.variable in [\"vm_pu\", \"va_degree\"]:\n recycle[\"gen\"] = True\n if self.element in [\"trafo\", \"trafo3w\", \"line\"]:\n recycle[\"trafo\"] = True\n- self.recycle = recycle\n+ # recycle is either the dict what should be recycled\n+ # or False if the element + variable combination is not supported\n+ self.recycle = recycle if not any(list(recycle.values())) else False\n \n def write_to_net(self):\n \"\"\"\n", "issue": "ConstControl: Modifying scaling of all loads based on input dataframe\nHello,\r\nI am trying to give an input dataframe to control the scaling of all loads for each time_step in a timeseries simulation\r\nAt the end of the simulation, the res_bus[p_mw] is correctly varying according to the given dataframe.\r\nNevertheless, the results of bus voltages and lines loading appear constant; it sounds like the power flow results don't take into account the load scaling.\r\n\r\nThis is how I create the controller:\r\n```python\r\ndef create_controllers(net, ds):\r\n ConstControl(net, element='load', variable='scaling', element_index=net.load.index,\r\n data_source=ds, profile_name=[\"Load_p\"])\r\n```\r\nwhere `Load_p` is the dataframe column with the scaling factors.\r\nI have the feeling that this approach should work, so I may be doing something wrong.\r\nDo you have any suggestions about how to handle it?\r\n\r\nBest regards,\r\nMichele\nConstControl: Modifying scaling of all loads based on input dataframe\nHello,\r\nI am trying to give an input dataframe to control the scaling of all loads for each time_step in a timeseries simulation\r\nAt the end of the simulation, the res_bus[p_mw] is correctly varying according to the given dataframe.\r\nNevertheless, the results of bus voltages and lines loading appear constant; it sounds like the power flow results don't take into account the load scaling.\r\n\r\nThis is how I create the controller:\r\n```python\r\ndef create_controllers(net, ds):\r\n ConstControl(net, element='load', variable='scaling', element_index=net.load.index,\r\n data_source=ds, profile_name=[\"Load_p\"])\r\n```\r\nwhere `Load_p` is the dataframe column with the scaling factors.\r\nI have the feeling that this approach should work, so I may be doing something wrong.\r\nDo you have any suggestions about how to handle it?\r\n\r\nBest regards,\r\nMichele\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. 
All rights reserved.\n\nimport numpy as np\nfrom pandas import Index\nfrom pandapower.control.basic_controller import Controller\n\ntry:\n import pplog as logging\nexcept ImportError:\n import logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass ConstControl(Controller):\n \"\"\"\n Class representing a generic time series controller for a specified element and variable\n Control strategy: \"No Control\" -> just updates timeseries\n\n INPUT:\n\n **net** (attrdict) - The net in which the controller resides\n\n **element** - element table ('sgen', 'load' etc.)\n\n **variable** - variable ('p_mw', 'q_mvar', 'vm_pu', 'tap_pos' etc.)\n\n **element_index** (int[]) - IDs of the controlled elements\n\n **data_source** (obj) - The data source that provides profile data\n\n **profile_name** (str[]) - The profile names of the elements in the data source\n\n\n OPTIONAL:\n\n **scale_factor** (real, 1.0) - Scaling factor for time series input values\n\n **in_service** (bool, True) - Indicates if the controller is currently in_service\n\n **recycle** (bool, True) - Re-use of internal-data in a time series loop.\n\n **drop_same_existing_ctrl** (bool, False) - Indicates if already existing controllers of the same type and with the same matching parameters (e.g. at same element) should be dropped\n\n .. note:: If multiple elements are represented with one controller, the data source must have integer columns. At the moment, only the DFData format is tested for the multiple const control.\n \"\"\"\n\n def __init__(self, net, element, variable, element_index, profile_name=None, data_source=None,\n scale_factor=1.0, in_service=True, recycle=True, order=0, level=0,\n drop_same_existing_ctrl=False, set_q_from_cosphi=False, matching_params=None, initial_run=False,\n **kwargs):\n # just calling init of the parent\n if matching_params is None:\n matching_params = {\"element\": element, \"variable\": variable,\n \"element_index\": element_index}\n super().__init__(net, in_service=in_service, recycle=recycle, order=order, level=level,\n drop_same_existing_ctrl=drop_same_existing_ctrl,\n matching_params=matching_params, initial_run = initial_run,\n **kwargs)\n self.matching_params = {\"element\": element, \"variable\": variable,\n \"element_index\": element_index}\n\n # data source for time series values\n self.data_source = data_source\n # ids of sgens or loads\n self.element_index = element_index\n # element type\n self.element = element\n self.variable = variable\n self.values = None\n self.profile_name = profile_name\n self.scale_factor = scale_factor\n if set_q_from_cosphi:\n logger.error(\"Parameter set_q_from_cosphi deprecated!\")\n raise ValueError\n self.applied = False\n self.initial_run = initial_run\n # write functions faster, depending on type of self.element_index\n if isinstance(self.element_index, int):\n # use .at if element_index is integer for speedup\n self.write = \"single_index\"\n # commenting this out for now, see issue 609\n # elif self.net[self.element].index.equals(Index(self.element_index)):\n # # use : indexer if all elements are in index\n # self.write = \"all_index\"\n else:\n # use common .loc\n self.write = \"loc\"\n self.set_recycle()\n\n def set_recycle(self):\n allowed_elements = [\"load\", \"sgen\", \"storage\", \"gen\", \"ext_grid\", \"trafo\", \"trafo3w\", \"line\"]\n if self.recycle is False or self.element not in allowed_elements:\n # if recycle is set to False by the user when creating the controller it is deactivated or when\n # const control controls an element which is 
not able to be recycled\n self.recycle = False\n return\n # these variables determine what is re-calculated during a time series run\n recycle = dict(trafo=False, gen=False, bus_pq=False)\n if self.element in [\"sgen\", \"load\", \"storage\"] and self.variable in [\"p_mw\", \"q_mvar\"]:\n recycle[\"bus_pq\"] = True\n if self.element in [\"gen\"] and self.variable in [\"p_mw\", \"vm_pu\"] \\\n or self.element in [\"ext_grid\"] and self.variable in [\"vm_pu\", \"va_degree\"]:\n recycle[\"gen\"] = True\n if self.element in [\"trafo\", \"trafo3w\", \"line\"]:\n recycle[\"trafo\"] = True\n self.recycle = recycle\n\n def write_to_net(self):\n \"\"\"\n Writes to self.element at index self.element_index in the column self.variable the data\n from self.values\n \"\"\"\n # write functions faster, depending on type of self.element_index\n if self.write == \"single_index\":\n self._write_to_single_index()\n elif self.write == \"all_index\":\n self._write_to_all_index()\n elif self.write == \"loc\":\n self._write_with_loc()\n else:\n raise NotImplementedError(\"ConstControl: self.write must be one of \"\n \"['single_index', 'all_index', 'loc']\")\n \n def time_step(self, time):\n \"\"\"\n Get the values of the element from data source\n \"\"\"\n self.values = self.data_source.get_time_step_value(time_step=time,\n profile_name=self.profile_name,\n scale_factor=self.scale_factor)\n # self.write_to_net()\n\n def initialize_control(self):\n \"\"\"\n At the beginning of each run_control call reset applied-flag\n \"\"\"\n #\n if self.data_source is None:\n self.values = self.net[self.element][self.variable].loc[self.element_index]\n self.applied = False\n\n def is_converged(self):\n \"\"\"\n Actual implementation of the convergence criteria: If controller is applied, it can stop\n \"\"\"\n return self.applied\n\n def control_step(self):\n \"\"\"\n Write to pandapower net by calling write_to_net()\n \"\"\"\n if self.values is not None:\n self.write_to_net()\n self.applied = True\n\n def _write_to_single_index(self):\n self.net[self.element].at[self.element_index, self.variable] = self.values\n\n def _write_to_all_index(self):\n self.net[self.element].loc[:, self.variable] = self.values\n\n def _write_with_loc(self):\n self.net[self.element].loc[self.element_index, self.variable] = self.values\n", "path": "pandapower/control/controller/const_control.py"}]}
| 2,842 | 346 |
gh_patches_debug_1532 | rasdani/github-patches | git_diff | mne-tools__mne-bids-259 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update text before release
The setup.py still states that this is experimental. I think it's grown up a fair bit :) this is the text that ends up on pypi. we should update it to reflect the package's aim
https://github.com/mne-tools/mne-bids/blob/f8f267b45ac36e1600ea9ceb5540299e1bf3ab21/setup.py#L17
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 """Setup MNE-BIDS."""
3 import os
4 from setuptools import setup, find_packages
5
6 # get the version
7 version = None
8 with open(os.path.join('mne_bids', '__init__.py'), 'r') as fid:
9 for line in (line.strip() for line in fid):
10 if line.startswith('__version__'):
11 version = line.split('=')[1].strip().strip('\'')
12 break
13 if version is None:
14 raise RuntimeError('Could not determine version')
15
16
17 descr = """Experimental code for BIDS using MNE."""
18
19 DISTNAME = 'mne-bids'
20 DESCRIPTION = descr
21 MAINTAINER = 'Mainak Jas'
22 MAINTAINER_EMAIL = '[email protected]'
23 URL = 'https://mne-tools.github.io/mne-bids/'
24 LICENSE = 'BSD (3-clause)'
25 DOWNLOAD_URL = 'http://github.com/mne-tools/mne-bids'
26 VERSION = version
27
28 if __name__ == "__main__":
29 setup(name=DISTNAME,
30 maintainer=MAINTAINER,
31 maintainer_email=MAINTAINER_EMAIL,
32 description=DESCRIPTION,
33 license=LICENSE,
34 url=URL,
35 version=VERSION,
36 download_url=DOWNLOAD_URL,
37 long_description=open('README.rst').read(),
38 long_description_content_type='text/x-rst',
39 classifiers=[
40 'Intended Audience :: Science/Research',
41 'Intended Audience :: Developers',
42 'License :: OSI Approved',
43 'Programming Language :: Python',
44 'Topic :: Software Development',
45 'Topic :: Scientific/Engineering',
46 'Operating System :: Microsoft :: Windows',
47 'Operating System :: POSIX',
48 'Operating System :: Unix',
49 'Operating System :: MacOS',
50 ],
51 platforms='any',
52 packages=find_packages(),
53 scripts=['bin/mne_bids']
54 )
55
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,7 +14,8 @@
raise RuntimeError('Could not determine version')
-descr = """Experimental code for BIDS using MNE."""
+descr = ('An MNE project for organizing and formatting MEG and EEG data '
+ 'according to the BIDS specification.')
DISTNAME = 'mne-bids'
DESCRIPTION = descr
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,7 +14,8 @@\n raise RuntimeError('Could not determine version')\n \n \n-descr = \"\"\"Experimental code for BIDS using MNE.\"\"\"\n+descr = ('An MNE project for organizing and formatting MEG and EEG data '\n+ 'according to the BIDS specification.')\n \n DISTNAME = 'mne-bids'\n DESCRIPTION = descr\n", "issue": "Update text before release\nThe setup.py still states that this is experimental. I think it's grown up a fair bit :) this is the text that ends up on pypi. we should update it to reflect the package's aim\r\n\r\nhttps://github.com/mne-tools/mne-bids/blob/f8f267b45ac36e1600ea9ceb5540299e1bf3ab21/setup.py#L17\n", "before_files": [{"content": "#! /usr/bin/env python\n\"\"\"Setup MNE-BIDS.\"\"\"\nimport os\nfrom setuptools import setup, find_packages\n\n# get the version\nversion = None\nwith open(os.path.join('mne_bids', '__init__.py'), 'r') as fid:\n for line in (line.strip() for line in fid):\n if line.startswith('__version__'):\n version = line.split('=')[1].strip().strip('\\'')\n break\nif version is None:\n raise RuntimeError('Could not determine version')\n\n\ndescr = \"\"\"Experimental code for BIDS using MNE.\"\"\"\n\nDISTNAME = 'mne-bids'\nDESCRIPTION = descr\nMAINTAINER = 'Mainak Jas'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'https://mne-tools.github.io/mne-bids/'\nLICENSE = 'BSD (3-clause)'\nDOWNLOAD_URL = 'http://github.com/mne-tools/mne-bids'\nVERSION = version\n\nif __name__ == \"__main__\":\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n ],\n platforms='any',\n packages=find_packages(),\n scripts=['bin/mne_bids']\n )\n", "path": "setup.py"}]}
| 1,120 | 97 |
gh_patches_debug_22071 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1919 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failure to get the container id
With #1888 in place, there's a regression when running inside a container. There's an assumption in https://github.com/pre-commit/pre-commit/blob/master/pre_commit/languages/docker.py#L32 that the hostname is the container ID which is not always the case (it's easy enough to set a different hostname with `docker run --hostname foo`). It causes the `docker inspect` command that follows to fail. A more reliable way to get the container id is from `/proc/1/cpuset` or from the first line in `/proc/1/cgroup` which is already checked in `_is_in_docker`.
Thanks @asottile and @okainov for your work on #1387 and pre-commit in general.
</issue>
<code>
[start of pre_commit/languages/docker.py]
1 import hashlib
2 import json
3 import os
4 import socket
5 from typing import Sequence
6 from typing import Tuple
7
8 import pre_commit.constants as C
9 from pre_commit.hook import Hook
10 from pre_commit.languages import helpers
11 from pre_commit.prefix import Prefix
12 from pre_commit.util import clean_path_on_failure
13 from pre_commit.util import cmd_output_b
14
15 ENVIRONMENT_DIR = 'docker'
16 PRE_COMMIT_LABEL = 'PRE_COMMIT'
17 get_default_version = helpers.basic_get_default_version
18 healthy = helpers.basic_healthy
19
20
21 def _is_in_docker() -> bool:
22 try:
23 with open('/proc/1/cgroup', 'rb') as f:
24 return b'docker' in f.read()
25 except FileNotFoundError:
26 return False
27
28
29 def _get_docker_path(path: str) -> str:
30 if not _is_in_docker():
31 return path
32 hostname = socket.gethostname()
33
34 _, out, _ = cmd_output_b('docker', 'inspect', hostname)
35
36 container, = json.loads(out)
37 for mount in container['Mounts']:
38 src_path = mount['Source']
39 to_path = mount['Destination']
40 if os.path.commonpath((path, to_path)) == to_path:
41 # So there is something in common,
42 # and we can proceed remapping it
43 return path.replace(to_path, src_path)
44 # we're in Docker, but the path is not mounted, cannot really do anything,
45 # so fall back to original path
46 return path
47
48
49 def md5(s: str) -> str: # pragma: win32 no cover
50 return hashlib.md5(s.encode()).hexdigest()
51
52
53 def docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover
54 md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()
55 return f'pre-commit-{md5sum}'
56
57
58 def build_docker_image(
59 prefix: Prefix,
60 *,
61 pull: bool,
62 ) -> None: # pragma: win32 no cover
63 cmd: Tuple[str, ...] = (
64 'docker', 'build',
65 '--tag', docker_tag(prefix),
66 '--label', PRE_COMMIT_LABEL,
67 )
68 if pull:
69 cmd += ('--pull',)
70 # This must come last for old versions of docker. See #477
71 cmd += ('.',)
72 helpers.run_setup_cmd(prefix, cmd)
73
74
75 def install_environment(
76 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
77 ) -> None: # pragma: win32 no cover
78 helpers.assert_version_default('docker', version)
79 helpers.assert_no_additional_deps('docker', additional_dependencies)
80
81 directory = prefix.path(
82 helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),
83 )
84
85 # Docker doesn't really have relevant disk environment, but pre-commit
86 # still needs to cleanup its state files on failure
87 with clean_path_on_failure(directory):
88 build_docker_image(prefix, pull=True)
89 os.mkdir(directory)
90
91
92 def get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover
93 try:
94 return ('-u', f'{os.getuid()}:{os.getgid()}')
95 except AttributeError:
96 return ()
97
98
99 def docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover
100 return (
101 'docker', 'run',
102 '--rm',
103 *get_docker_user(),
104 # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from
105 # The `Z` option tells Docker to label the content with a private
106 # unshared label. Only the current container can use a private volume.
107 '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',
108 '--workdir', '/src',
109 )
110
111
112 def run_hook(
113 hook: Hook,
114 file_args: Sequence[str],
115 color: bool,
116 ) -> Tuple[int, bytes]: # pragma: win32 no cover
117 # Rebuild the docker image in case it has gone missing, as many people do
118 # automated cleanup of docker images.
119 build_docker_image(hook.prefix, pull=False)
120
121 entry_exe, *cmd_rest = hook.cmd
122
123 entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))
124 cmd = (*docker_cmd(), *entry_tag, *cmd_rest)
125 return helpers.run_xargs(hook, cmd, file_args, color=color)
126
[end of pre_commit/languages/docker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py
--- a/pre_commit/languages/docker.py
+++ b/pre_commit/languages/docker.py
@@ -1,7 +1,6 @@
import hashlib
import json
import os
-import socket
from typing import Sequence
from typing import Tuple
@@ -26,12 +25,24 @@
return False
+def _get_container_id() -> str:
+ # It's assumed that we already check /proc/1/cgroup in _is_in_docker. The
+ # cpuset cgroup controller existed since cgroups were introduced so this
+ # way of getting the container ID is pretty reliable.
+ with open('/proc/1/cgroup', 'rb') as f:
+ for line in f.readlines():
+ if line.split(b':')[1] == b'cpuset':
+ return os.path.basename(line.split(b':')[2]).strip().decode()
+ raise RuntimeError('Failed to find the container ID in /proc/1/cgroup.')
+
+
def _get_docker_path(path: str) -> str:
if not _is_in_docker():
return path
- hostname = socket.gethostname()
- _, out, _ = cmd_output_b('docker', 'inspect', hostname)
+ container_id = _get_container_id()
+
+ _, out, _ = cmd_output_b('docker', 'inspect', container_id)
container, = json.loads(out)
for mount in container['Mounts']:
|
{"golden_diff": "diff --git a/pre_commit/languages/docker.py b/pre_commit/languages/docker.py\n--- a/pre_commit/languages/docker.py\n+++ b/pre_commit/languages/docker.py\n@@ -1,7 +1,6 @@\n import hashlib\n import json\n import os\n-import socket\n from typing import Sequence\n from typing import Tuple\n \n@@ -26,12 +25,24 @@\n return False\n \n \n+def _get_container_id() -> str:\n+ # It's assumed that we already check /proc/1/cgroup in _is_in_docker. The\n+ # cpuset cgroup controller existed since cgroups were introduced so this\n+ # way of getting the container ID is pretty reliable.\n+ with open('/proc/1/cgroup', 'rb') as f:\n+ for line in f.readlines():\n+ if line.split(b':')[1] == b'cpuset':\n+ return os.path.basename(line.split(b':')[2]).strip().decode()\n+ raise RuntimeError('Failed to find the container ID in /proc/1/cgroup.')\n+\n+\n def _get_docker_path(path: str) -> str:\n if not _is_in_docker():\n return path\n- hostname = socket.gethostname()\n \n- _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n+ container_id = _get_container_id()\n+\n+ _, out, _ = cmd_output_b('docker', 'inspect', container_id)\n \n container, = json.loads(out)\n for mount in container['Mounts']:\n", "issue": "Failure to get the container id\nWith #1888 in place, there's a regression when running inside a container. There's an assumption in https://github.com/pre-commit/pre-commit/blob/master/pre_commit/languages/docker.py#L32 that the hostname is the container ID which is not always the case (it's easy enough to set a different hostname with `docker run --hostname foo`). It causes the `docker inspect` command that follows to fail. A more reliable way to get the container id is from `/proc/1/cpuset` or from the first line in `/proc/1/cgroup` which is already checked in `_is_in_docker`.\r\n\r\nThanks @asottile and @okainov for your work on #1387 and pre-commit in general.\n", "before_files": [{"content": "import hashlib\nimport json\nimport os\nimport socket\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit.hook import Hook\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output_b\n\nENVIRONMENT_DIR = 'docker'\nPRE_COMMIT_LABEL = 'PRE_COMMIT'\nget_default_version = helpers.basic_get_default_version\nhealthy = helpers.basic_healthy\n\n\ndef _is_in_docker() -> bool:\n try:\n with open('/proc/1/cgroup', 'rb') as f:\n return b'docker' in f.read()\n except FileNotFoundError:\n return False\n\n\ndef _get_docker_path(path: str) -> str:\n if not _is_in_docker():\n return path\n hostname = socket.gethostname()\n\n _, out, _ = cmd_output_b('docker', 'inspect', hostname)\n\n container, = json.loads(out)\n for mount in container['Mounts']:\n src_path = mount['Source']\n to_path = mount['Destination']\n if os.path.commonpath((path, to_path)) == to_path:\n # So there is something in common,\n # and we can proceed remapping it\n return path.replace(to_path, src_path)\n # we're in Docker, but the path is not mounted, cannot really do anything,\n # so fall back to original path\n return path\n\n\ndef md5(s: str) -> str: # pragma: win32 no cover\n return hashlib.md5(s.encode()).hexdigest()\n\n\ndef docker_tag(prefix: Prefix) -> str: # pragma: win32 no cover\n md5sum = md5(os.path.basename(prefix.prefix_dir)).lower()\n return f'pre-commit-{md5sum}'\n\n\ndef build_docker_image(\n prefix: Prefix,\n *,\n pull: bool,\n) -> None: # pragma: win32 no 
cover\n cmd: Tuple[str, ...] = (\n 'docker', 'build',\n '--tag', docker_tag(prefix),\n '--label', PRE_COMMIT_LABEL,\n )\n if pull:\n cmd += ('--pull',)\n # This must come last for old versions of docker. See #477\n cmd += ('.',)\n helpers.run_setup_cmd(prefix, cmd)\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n helpers.assert_version_default('docker', version)\n helpers.assert_no_additional_deps('docker', additional_dependencies)\n\n directory = prefix.path(\n helpers.environment_dir(ENVIRONMENT_DIR, C.DEFAULT),\n )\n\n # Docker doesn't really have relevant disk environment, but pre-commit\n # still needs to cleanup its state files on failure\n with clean_path_on_failure(directory):\n build_docker_image(prefix, pull=True)\n os.mkdir(directory)\n\n\ndef get_docker_user() -> Tuple[str, ...]: # pragma: win32 no cover\n try:\n return ('-u', f'{os.getuid()}:{os.getgid()}')\n except AttributeError:\n return ()\n\n\ndef docker_cmd() -> Tuple[str, ...]: # pragma: win32 no cover\n return (\n 'docker', 'run',\n '--rm',\n *get_docker_user(),\n # https://docs.docker.com/engine/reference/commandline/run/#mount-volumes-from-container-volumes-from\n # The `Z` option tells Docker to label the content with a private\n # unshared label. Only the current container can use a private volume.\n '-v', f'{_get_docker_path(os.getcwd())}:/src:rw,Z',\n '--workdir', '/src',\n )\n\n\ndef run_hook(\n hook: Hook,\n file_args: Sequence[str],\n color: bool,\n) -> Tuple[int, bytes]: # pragma: win32 no cover\n # Rebuild the docker image in case it has gone missing, as many people do\n # automated cleanup of docker images.\n build_docker_image(hook.prefix, pull=False)\n\n entry_exe, *cmd_rest = hook.cmd\n\n entry_tag = ('--entrypoint', entry_exe, docker_tag(hook.prefix))\n cmd = (*docker_cmd(), *entry_tag, *cmd_rest)\n return helpers.run_xargs(hook, cmd, file_args, color=color)\n", "path": "pre_commit/languages/docker.py"}]}
| 1,935 | 332 |
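The golden diff in the record above derives the container ID by reading `/proc/1/cgroup` and taking the basename of the path on the `cpuset` controller line. The snippet below is a standalone sketch of that same parsing step; the sample cgroup text is an assumed, made-up example rather than output captured from a real container.

```python
import os


def container_id_from_cgroup(cgroup_text: bytes) -> str:
    # Each line has the form hierarchy-ID:controller:cgroup-path,
    # e.g. b"3:cpuset:/docker/<container-id>" inside a Docker container.
    for line in cgroup_text.splitlines():
        fields = line.split(b":")
        if len(fields) >= 3 and fields[1] == b"cpuset":
            return os.path.basename(fields[2]).strip().decode()
    raise RuntimeError("no cpuset entry found in cgroup data")


# Assumed sample input (hypothetical container ID, not real output).
sample = b"12:hugetlb:/docker/0123abcd4567\n3:cpuset:/docker/0123abcd4567\n"
print(container_id_from_cgroup(sample))  # -> 0123abcd4567
```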
gh_patches_debug_27339
|
rasdani/github-patches
|
git_diff
|
ansible__awx-8487
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
In Kubernetes container groups, user-supplied .metadata.labels is ignored
##### ISSUE TYPE
- Bug Report
##### SUMMARY
Due to `awx/main/scheduler/kubernetes.py` overriding the `.metadata.labels` of the pods it creates (instead of merging them with the user-supplied `pod_spec_override`), features such as pod anti-affinity between AWX runners cannot work.
##### ENVIRONMENT
* AWX version: 15.0.1 (also present in devel)
* AWX install method: openshift
* Ansible version: irrelevant
* Operating System: Linux (all versions)
* Web Browser: irrelevant
##### STEPS TO REPRODUCE
1. Create a Kubernetes container group with the below piece of YAML as the pod spec override
1. Run a job out of this instance group
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
deploymentconfig: ansible-runner
namespace: wwp-test
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchExpressions:
- key: deploymentconfig
operator: In
values:
- ansible-runner
# ...
```
##### EXPECTED RESULTS
The pods run by AWX as part of the container group should contain both the custom labels and the affinity structure.
##### ACTUAL RESULTS
Only the affinity structure shows up in the `Kind: pod` Kubernetes objects, rendering the `podAntiAffinity` clause inoperative (for lack of a label to match on under `metdata`)
##### ADDITIONAL INFORMATION
The cause is the equals sign on [this line](https://github.com/ansible/awx/blob/devel/awx/main/scheduler/kubernetes.py#L132).
</issue>
<code>
[start of awx/main/scheduler/kubernetes.py]
1 import collections
2 import time
3 import logging
4 from base64 import b64encode
5
6 from django.conf import settings
7 from kubernetes import client, config
8 from django.utils.functional import cached_property
9
10 from awx.main.utils.common import parse_yaml_or_json
11
12 logger = logging.getLogger('awx.main.scheduler')
13
14
15 class PodManager(object):
16
17 def __init__(self, task=None):
18 self.task = task
19
20 def deploy(self):
21 if not self.credential.kubernetes:
22 raise RuntimeError('Pod deployment cannot occur without a Kubernetes credential')
23
24 self.kube_api.create_namespaced_pod(body=self.pod_definition,
25 namespace=self.namespace,
26 _request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)
27
28 num_retries = settings.AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES
29 for retry_attempt in range(num_retries - 1):
30 logger.debug(f"Checking for pod {self.pod_name}. Attempt {retry_attempt + 1} of {num_retries}")
31 pod = self.kube_api.read_namespaced_pod(name=self.pod_name,
32 namespace=self.namespace,
33 _request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)
34 if pod.status.phase != 'Pending':
35 break
36 else:
37 logger.debug(f"Pod {self.pod_name} is Pending.")
38 time.sleep(settings.AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY)
39 continue
40
41 if pod.status.phase == 'Running':
42 logger.debug(f"Pod {self.pod_name} is online.")
43 return pod
44 else:
45 logger.warn(f"Pod {self.pod_name} did not start. Status is {pod.status.phase}.")
46
47 @classmethod
48 def list_active_jobs(self, instance_group):
49 task = collections.namedtuple('Task', 'id instance_group')(
50 id='',
51 instance_group=instance_group
52 )
53 pm = PodManager(task)
54 try:
55 for pod in pm.kube_api.list_namespaced_pod(
56 pm.namespace,
57 label_selector='ansible-awx={}'.format(settings.INSTALL_UUID)
58 ).to_dict().get('items', []):
59 job = pod['metadata'].get('labels', {}).get('ansible-awx-job-id')
60 if job:
61 try:
62 yield int(job)
63 except ValueError:
64 pass
65 except Exception:
66 logger.exception('Failed to list pods for container group {}'.format(instance_group))
67
68 def delete(self):
69 return self.kube_api.delete_namespaced_pod(name=self.pod_name,
70 namespace=self.namespace,
71 _request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)
72
73 @property
74 def namespace(self):
75 return self.pod_definition['metadata']['namespace']
76
77 @property
78 def credential(self):
79 return self.task.instance_group.credential
80
81 @cached_property
82 def kube_config(self):
83 return generate_tmp_kube_config(self.credential, self.namespace)
84
85 @cached_property
86 def kube_api(self):
87 # this feels a little janky, but it's what k8s' own code does
88 # internally when it reads kube config files from disk:
89 # https://github.com/kubernetes-client/python-base/blob/0b208334ef0247aad9afcaae8003954423b61a0d/config/kube_config.py#L643
90 loader = config.kube_config.KubeConfigLoader(
91 config_dict=self.kube_config
92 )
93 cfg = type.__call__(client.Configuration)
94 loader.load_and_set(cfg)
95 return client.CoreV1Api(api_client=client.ApiClient(
96 configuration=cfg
97 ))
98
99 @property
100 def pod_name(self):
101 return f"awx-job-{self.task.id}"
102
103 @property
104 def pod_definition(self):
105 default_pod_spec = {
106 "apiVersion": "v1",
107 "kind": "Pod",
108 "metadata": {
109 "namespace": settings.AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE
110 },
111 "spec": {
112 "containers": [{
113 "image": settings.AWX_CONTAINER_GROUP_DEFAULT_IMAGE,
114 "tty": True,
115 "stdin": True,
116 "imagePullPolicy": "Always",
117 "args": [
118 'sleep', 'infinity'
119 ]
120 }]
121 }
122 }
123
124 pod_spec_override = {}
125 if self.task and self.task.instance_group.pod_spec_override:
126 pod_spec_override = parse_yaml_or_json(
127 self.task.instance_group.pod_spec_override)
128 pod_spec = {**default_pod_spec, **pod_spec_override}
129
130 if self.task:
131 pod_spec['metadata']['name'] = self.pod_name
132 pod_spec['metadata']['labels'] = {
133 'ansible-awx': settings.INSTALL_UUID,
134 'ansible-awx-job-id': str(self.task.id)
135 }
136 pod_spec['spec']['containers'][0]['name'] = self.pod_name
137
138 return pod_spec
139
140
141 def generate_tmp_kube_config(credential, namespace):
142 host_input = credential.get_input('host')
143 config = {
144 "apiVersion": "v1",
145 "kind": "Config",
146 "preferences": {},
147 "clusters": [
148 {
149 "name": host_input,
150 "cluster": {
151 "server": host_input
152 }
153 }
154 ],
155 "users": [
156 {
157 "name": host_input,
158 "user": {
159 "token": credential.get_input('bearer_token')
160 }
161 }
162 ],
163 "contexts": [
164 {
165 "name": host_input,
166 "context": {
167 "cluster": host_input,
168 "user": host_input,
169 "namespace": namespace
170 }
171 }
172 ],
173 "current-context": host_input
174 }
175
176 if credential.get_input('verify_ssl') and 'ssl_ca_cert' in credential.inputs:
177 config["clusters"][0]["cluster"]["certificate-authority-data"] = b64encode(
178 credential.get_input('ssl_ca_cert').encode() # encode to bytes
179 ).decode() # decode the base64 data into a str
180 else:
181 config["clusters"][0]["cluster"]["insecure-skip-tls-verify"] = True
182 return config
183
[end of awx/main/scheduler/kubernetes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awx/main/scheduler/kubernetes.py b/awx/main/scheduler/kubernetes.py
--- a/awx/main/scheduler/kubernetes.py
+++ b/awx/main/scheduler/kubernetes.py
@@ -12,6 +12,24 @@
logger = logging.getLogger('awx.main.scheduler')
+def deepmerge(a, b):
+ """
+ Merge dict structures and return the result.
+
+ >>> a = {'first': {'all_rows': {'pass': 'dog', 'number': '1'}}}
+ >>> b = {'first': {'all_rows': {'fail': 'cat', 'number': '5'}}}
+ >>> import pprint; pprint.pprint(deepmerge(a, b))
+ {'first': {'all_rows': {'fail': 'cat', 'number': '5', 'pass': 'dog'}}}
+ """
+ if isinstance(a, dict) and isinstance(b, dict):
+ return dict([(k, deepmerge(a.get(k), b.get(k)))
+ for k in set(a.keys()).union(b.keys())])
+ elif b is None:
+ return a
+ else:
+ return b
+
+
class PodManager(object):
def __init__(self, task=None):
@@ -128,11 +146,13 @@
pod_spec = {**default_pod_spec, **pod_spec_override}
if self.task:
- pod_spec['metadata']['name'] = self.pod_name
- pod_spec['metadata']['labels'] = {
- 'ansible-awx': settings.INSTALL_UUID,
- 'ansible-awx-job-id': str(self.task.id)
- }
+ pod_spec['metadata'] = deepmerge(
+ pod_spec.get('metadata', {}),
+ dict(name=self.pod_name,
+ labels={
+ 'ansible-awx': settings.INSTALL_UUID,
+ 'ansible-awx-job-id': str(self.task.id)
+ }))
pod_spec['spec']['containers'][0]['name'] = self.pod_name
return pod_spec
|
{"golden_diff": "diff --git a/awx/main/scheduler/kubernetes.py b/awx/main/scheduler/kubernetes.py\n--- a/awx/main/scheduler/kubernetes.py\n+++ b/awx/main/scheduler/kubernetes.py\n@@ -12,6 +12,24 @@\n logger = logging.getLogger('awx.main.scheduler')\n \n \n+def deepmerge(a, b):\n+ \"\"\"\n+ Merge dict structures and return the result.\n+\n+ >>> a = {'first': {'all_rows': {'pass': 'dog', 'number': '1'}}}\n+ >>> b = {'first': {'all_rows': {'fail': 'cat', 'number': '5'}}}\n+ >>> import pprint; pprint.pprint(deepmerge(a, b))\n+ {'first': {'all_rows': {'fail': 'cat', 'number': '5', 'pass': 'dog'}}}\n+ \"\"\"\n+ if isinstance(a, dict) and isinstance(b, dict):\n+ return dict([(k, deepmerge(a.get(k), b.get(k)))\n+ for k in set(a.keys()).union(b.keys())])\n+ elif b is None:\n+ return a\n+ else:\n+ return b\n+\n+\n class PodManager(object):\n \n def __init__(self, task=None):\n@@ -128,11 +146,13 @@\n pod_spec = {**default_pod_spec, **pod_spec_override}\n \n if self.task:\n- pod_spec['metadata']['name'] = self.pod_name\n- pod_spec['metadata']['labels'] = {\n- 'ansible-awx': settings.INSTALL_UUID,\n- 'ansible-awx-job-id': str(self.task.id)\n- }\n+ pod_spec['metadata'] = deepmerge(\n+ pod_spec.get('metadata', {}),\n+ dict(name=self.pod_name,\n+ labels={\n+ 'ansible-awx': settings.INSTALL_UUID,\n+ 'ansible-awx-job-id': str(self.task.id)\n+ }))\n pod_spec['spec']['containers'][0]['name'] = self.pod_name\n \n return pod_spec\n", "issue": "In Kubernetes container groups, user-supplied .metadata.labels is ignored\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### SUMMARY\r\n\r\nDue to `awx/main/scheduler/kubernetes.py` overriding the `.metadata.labels` of the pods it creates (instead of merging them with the user-supplied `pod_spec_override`), features such as pod anti-affinity between AWX runners cannot work.\r\n\r\n##### ENVIRONMENT\r\n* AWX version: 15.0.1 (also present in devel)\r\n* AWX install method: openshift\r\n* Ansible version: irrelevant\r\n* Operating System: Linux (all versions)\r\n* Web Browser: irrelevant\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\n1. Create a Kubernetes container group with the below piece of YAML as the pod spec override\r\n1. 
Run a job out of this instance group\r\n\r\n```yaml\r\napiVersion: v1\r\nkind: Pod\r\nmetadata:\r\n labels:\r\n deploymentconfig: ansible-runner\r\n namespace: wwp-test\r\nspec:\r\n affinity:\r\n podAntiAffinity:\r\n requiredDuringSchedulingIgnoredDuringExecution:\r\n - topologyKey: kubernetes.io/hostname\r\n labelSelector:\r\n matchExpressions:\r\n - key: deploymentconfig\r\n operator: In\r\n values:\r\n - ansible-runner\r\n # ...\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n\r\nThe pods run by AWX as part of the container group should contain both the custom labels and the affinity structure.\r\n\r\n##### ACTUAL RESULTS\r\n\r\nOnly the affinity structure shows up in the `Kind: pod` Kubernetes objects, rendering the `podAntiAffinity` clause inoperative (for lack of a label to match on under `metdata`)\r\n\r\n##### ADDITIONAL INFORMATION\r\n\r\nThe cause is the equals sign on [this line](https://github.com/ansible/awx/blob/devel/awx/main/scheduler/kubernetes.py#L132).\n", "before_files": [{"content": "import collections\nimport time\nimport logging\nfrom base64 import b64encode\n\nfrom django.conf import settings\nfrom kubernetes import client, config\nfrom django.utils.functional import cached_property\n\nfrom awx.main.utils.common import parse_yaml_or_json\n\nlogger = logging.getLogger('awx.main.scheduler')\n\n\nclass PodManager(object):\n\n def __init__(self, task=None):\n self.task = task\n\n def deploy(self):\n if not self.credential.kubernetes:\n raise RuntimeError('Pod deployment cannot occur without a Kubernetes credential')\n\n self.kube_api.create_namespaced_pod(body=self.pod_definition,\n namespace=self.namespace,\n _request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)\n\n num_retries = settings.AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES\n for retry_attempt in range(num_retries - 1):\n logger.debug(f\"Checking for pod {self.pod_name}. Attempt {retry_attempt + 1} of {num_retries}\")\n pod = self.kube_api.read_namespaced_pod(name=self.pod_name,\n namespace=self.namespace,\n _request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)\n if pod.status.phase != 'Pending':\n break\n else:\n logger.debug(f\"Pod {self.pod_name} is Pending.\")\n time.sleep(settings.AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY)\n continue\n\n if pod.status.phase == 'Running':\n logger.debug(f\"Pod {self.pod_name} is online.\")\n return pod\n else:\n logger.warn(f\"Pod {self.pod_name} did not start. 
Status is {pod.status.phase}.\")\n\n @classmethod\n def list_active_jobs(self, instance_group):\n task = collections.namedtuple('Task', 'id instance_group')(\n id='',\n instance_group=instance_group\n )\n pm = PodManager(task)\n try:\n for pod in pm.kube_api.list_namespaced_pod(\n pm.namespace,\n label_selector='ansible-awx={}'.format(settings.INSTALL_UUID)\n ).to_dict().get('items', []):\n job = pod['metadata'].get('labels', {}).get('ansible-awx-job-id')\n if job:\n try:\n yield int(job)\n except ValueError:\n pass\n except Exception:\n logger.exception('Failed to list pods for container group {}'.format(instance_group))\n\n def delete(self):\n return self.kube_api.delete_namespaced_pod(name=self.pod_name,\n namespace=self.namespace,\n _request_timeout=settings.AWX_CONTAINER_GROUP_K8S_API_TIMEOUT)\n\n @property\n def namespace(self):\n return self.pod_definition['metadata']['namespace']\n\n @property\n def credential(self):\n return self.task.instance_group.credential\n\n @cached_property\n def kube_config(self):\n return generate_tmp_kube_config(self.credential, self.namespace)\n\n @cached_property\n def kube_api(self):\n # this feels a little janky, but it's what k8s' own code does\n # internally when it reads kube config files from disk:\n # https://github.com/kubernetes-client/python-base/blob/0b208334ef0247aad9afcaae8003954423b61a0d/config/kube_config.py#L643\n loader = config.kube_config.KubeConfigLoader(\n config_dict=self.kube_config\n )\n cfg = type.__call__(client.Configuration)\n loader.load_and_set(cfg)\n return client.CoreV1Api(api_client=client.ApiClient(\n configuration=cfg\n ))\n\n @property\n def pod_name(self):\n return f\"awx-job-{self.task.id}\"\n\n @property\n def pod_definition(self):\n default_pod_spec = {\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"namespace\": settings.AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE\n },\n \"spec\": {\n \"containers\": [{\n \"image\": settings.AWX_CONTAINER_GROUP_DEFAULT_IMAGE,\n \"tty\": True,\n \"stdin\": True,\n \"imagePullPolicy\": \"Always\",\n \"args\": [\n 'sleep', 'infinity'\n ]\n }]\n }\n }\n\n pod_spec_override = {}\n if self.task and self.task.instance_group.pod_spec_override:\n pod_spec_override = parse_yaml_or_json(\n self.task.instance_group.pod_spec_override)\n pod_spec = {**default_pod_spec, **pod_spec_override}\n\n if self.task:\n pod_spec['metadata']['name'] = self.pod_name\n pod_spec['metadata']['labels'] = {\n 'ansible-awx': settings.INSTALL_UUID,\n 'ansible-awx-job-id': str(self.task.id)\n }\n pod_spec['spec']['containers'][0]['name'] = self.pod_name\n\n return pod_spec\n\n\ndef generate_tmp_kube_config(credential, namespace):\n host_input = credential.get_input('host')\n config = {\n \"apiVersion\": \"v1\",\n \"kind\": \"Config\",\n \"preferences\": {},\n \"clusters\": [\n {\n \"name\": host_input,\n \"cluster\": {\n \"server\": host_input\n }\n }\n ],\n \"users\": [\n {\n \"name\": host_input,\n \"user\": {\n \"token\": credential.get_input('bearer_token')\n }\n }\n ],\n \"contexts\": [\n {\n \"name\": host_input,\n \"context\": {\n \"cluster\": host_input,\n \"user\": host_input,\n \"namespace\": namespace\n }\n }\n ],\n \"current-context\": host_input\n }\n\n if credential.get_input('verify_ssl') and 'ssl_ca_cert' in credential.inputs:\n config[\"clusters\"][0][\"cluster\"][\"certificate-authority-data\"] = b64encode(\n credential.get_input('ssl_ca_cert').encode() # encode to bytes\n ).decode() # decode the base64 data into a str\n else:\n 
config[\"clusters\"][0][\"cluster\"][\"insecure-skip-tls-verify\"] = True\n return config\n", "path": "awx/main/scheduler/kubernetes.py"}]}
| 2,690 | 452 |
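The patch in the record above stops overwriting the user's `metadata` block and instead deep-merges it with the labels AWX injects. The sketch below reproduces that `deepmerge` helper on its own and applies it to made-up metadata values, showing that the user-supplied `deploymentconfig` label and the injected job labels both survive the merge.

```python
def deepmerge(a, b):
    # Recursively merge two dict structures; values from b win on
    # conflicts, and None entries in b fall back to a's value.
    if isinstance(a, dict) and isinstance(b, dict):
        return {k: deepmerge(a.get(k), b.get(k)) for k in set(a) | set(b)}
    return a if b is None else b


# Assumed example values -- not taken from a real AWX deployment.
user_metadata = {
    "namespace": "wwp-test",
    "labels": {"deploymentconfig": "ansible-runner"},
}
injected_metadata = {
    "name": "awx-job-42",
    "labels": {"ansible-awx": "install-uuid", "ansible-awx-job-id": "42"},
}

merged = deepmerge(user_metadata, injected_metadata)
print(sorted(merged["labels"].items()))
# [('ansible-awx', 'install-uuid'), ('ansible-awx-job-id', '42'),
#  ('deploymentconfig', 'ansible-runner')]
print(merged["namespace"], merged["name"])  # wwp-test awx-job-42
```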
gh_patches_debug_15976
|
rasdani/github-patches
|
git_diff
|
WordPress__openverse-api-938
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use smaller image for generating thumbnails for SMK
## Description
<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->
<!-- For example: "I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page." -->
Images linked in the `image_url`s for the SMK provider are all ~2MB. These are large enough to overload our thumbnail service. Some of these requests timeout, such that the frontend falls back to downloading the entire full image. The result is unacceptable load times.
We should update the API to detect the SMK provider and modify the URL used for thumbnail generation.
`image_url` that we receive has the form: https://iip.smk.dk/iiif/jp2/KKSgb5100_34.TIF.jp2/full/!2048,/0/default.jpg Note the`2048`.
The modified URL should be identical, except for a reduced filesize: https://iip.smk.dk/iiif/jp2/KKSgb5100_34.TIF.jp2/full/!400,/0/default.jpg
## Additional context
<!-- Add any other context about the problem here; or delete the section entirely. -->
This change should be a temporary quick-fix that will allow us to re-enable the SMK provider, so we can make these images available again as soon as possible.
In the longer term, https://github.com/WordPress/openverse-catalog/issues/698 tracks updating the provider to provide links to smaller files for the purpose of thumbnail generation.
## Resolution
<!-- Replace the [ ] with [x] to check the box. -->
- [ ] 🙋 I would be interested in resolving this bug.
</issue>
<code>
[start of api/catalog/api/views/image_views.py]
1 import io
2 import struct
3
4 from django.conf import settings
5 from django.http.response import FileResponse, HttpResponse
6 from django.utils.decorators import method_decorator
7 from rest_framework.decorators import action
8 from rest_framework.exceptions import NotFound
9 from rest_framework.response import Response
10
11 import piexif
12 import requests
13 from drf_yasg.utils import swagger_auto_schema
14 from PIL import Image as PILImage
15
16 from catalog.api.constants.media_types import IMAGE_TYPE
17 from catalog.api.docs.image_docs import (
18 ImageComplain,
19 ImageDetail,
20 ImageOembed,
21 ImageRelated,
22 ImageSearch,
23 ImageStats,
24 ImageThumbnail,
25 )
26 from catalog.api.models import Image
27 from catalog.api.serializers.image_serializers import (
28 ImageReportRequestSerializer,
29 ImageSearchRequestSerializer,
30 ImageSerializer,
31 OembedRequestSerializer,
32 OembedSerializer,
33 WatermarkRequestSerializer,
34 )
35 from catalog.api.serializers.media_serializers import MediaThumbnailRequestSerializer
36 from catalog.api.utils.exceptions import get_api_exception
37 from catalog.api.utils.throttle import (
38 AnonThumbnailRateThrottle,
39 OAuth2IdThumbnailRateThrottle,
40 )
41 from catalog.api.utils.watermark import watermark
42 from catalog.api.views.media_views import MediaViewSet
43
44
45 @method_decorator(swagger_auto_schema(**ImageSearch.swagger_setup), "list")
46 @method_decorator(swagger_auto_schema(**ImageStats.swagger_setup), "stats")
47 @method_decorator(swagger_auto_schema(**ImageDetail.swagger_setup), "retrieve")
48 @method_decorator(swagger_auto_schema(**ImageRelated.swagger_setup), "related")
49 @method_decorator(swagger_auto_schema(**ImageComplain.swagger_setup), "report")
50 @method_decorator(swagger_auto_schema(**ImageOembed.swagger_setup), "oembed")
51 @method_decorator(swagger_auto_schema(**ImageThumbnail.swagger_setup), "thumbnail")
52 @method_decorator(swagger_auto_schema(auto_schema=None), "watermark")
53 class ImageViewSet(MediaViewSet):
54 """
55 Viewset for all endpoints pertaining to images.
56 """
57
58 model_class = Image
59 query_serializer_class = ImageSearchRequestSerializer
60 default_index = settings.MEDIA_INDEX_MAPPING[IMAGE_TYPE]
61 qa_index = "search-qa-image"
62
63 serializer_class = ImageSerializer
64
65 OEMBED_HEADERS = {
66 "User-Agent": settings.OUTBOUND_USER_AGENT_TEMPLATE.format(purpose="OEmbed"),
67 }
68
69 # Extra actions
70
71 @action(
72 detail=False,
73 url_path="oembed",
74 url_name="oembed",
75 serializer_class=OembedSerializer,
76 )
77 def oembed(self, request, *_, **__):
78 params = OembedRequestSerializer(data=request.query_params)
79 params.is_valid(raise_exception=True)
80
81 context = self.get_serializer_context()
82
83 url = params.validated_data["url"]
84 identifier = url.rsplit("/", 1)[1]
85 try:
86 image = self.get_queryset().get(identifier=identifier)
87 except Image.DoesNotExist:
88 return get_api_exception("Could not find image.", 404)
89 if not (image.height and image.width):
90 image_file = requests.get(image.url, headers=self.OEMBED_HEADERS)
91 width, height = PILImage.open(io.BytesIO(image_file.content)).size
92 context |= {
93 "width": width,
94 "height": height,
95 }
96
97 serializer = self.get_serializer(image, context=context)
98 return Response(data=serializer.data)
99
100 @action(
101 detail=True,
102 url_path="thumb",
103 url_name="thumb",
104 serializer_class=MediaThumbnailRequestSerializer,
105 throttle_classes=[AnonThumbnailRateThrottle, OAuth2IdThumbnailRateThrottle],
106 )
107 def thumbnail(self, request, *_, **__):
108 image = self.get_object()
109
110 image_url = image.url
111 if not image_url:
112 raise get_api_exception("Could not find image.", 404)
113
114 return super().thumbnail(image_url, request)
115
116 @action(detail=True, url_path="watermark", url_name="watermark")
117 def watermark(self, request, *_, **__):
118 if not settings.WATERMARK_ENABLED:
119 raise NotFound("The watermark feature is currently disabled.")
120
121 params = WatermarkRequestSerializer(data=request.query_params)
122 params.is_valid(raise_exception=True)
123
124 image = self.get_object()
125 image_url = image.url
126 image_info = {
127 attr: getattr(image, attr)
128 for attr in ["title", "creator", "license", "license_version"]
129 }
130
131 # Create the actual watermarked image.
132 watermarked, exif = watermark(image_url, image_info, params.data["watermark"])
133 # Re-insert EXIF metadata.
134 if exif:
135 # piexif dump raises InvalidImageDataError which is a child class
136 # of ValueError, and a struct error when the value is not
137 # between -2147483648 and 2147483647
138 # https://github.com/WordPress/openverse-api/issues/849
139 try:
140 exif_bytes = piexif.dump(exif)
141 except (struct.error, ValueError):
142 exif_bytes = None
143 else:
144 exif_bytes = None
145 img_bytes = io.BytesIO()
146 self._save_wrapper(watermarked, exif_bytes, img_bytes)
147
148 if params.data["embed_metadata"]:
149 # Embed ccREL metadata with XMP.
150 work_properties = {
151 "creator": image.creator,
152 "license_url": image.license_url,
153 "attribution": image.attribution,
154 "work_landing_page": image.foreign_landing_url,
155 "identifier": str(image.identifier),
156 }
157
158 # Import inside a function to allow server run without Exempi library
159 import libxmp
160
161 from catalog.api.utils import ccrel
162
163 try:
164 with_xmp = ccrel.embed_xmp_bytes(img_bytes, work_properties)
165 return FileResponse(with_xmp, content_type="image/jpeg")
166 except (libxmp.XMPError, AttributeError):
167 # Just send the EXIF-ified file if libxmp fails to add metadata
168 response = HttpResponse(content_type="image/jpeg")
169 self._save_wrapper(watermarked, exif_bytes, response)
170 return response
171 else:
172 response = HttpResponse(img_bytes, content_type="image/jpeg")
173 self._save_wrapper(watermarked, exif_bytes, response)
174 return response
175
176 @action(
177 detail=True,
178 methods=["post"],
179 serializer_class=ImageReportRequestSerializer,
180 )
181 def report(self, *args, **kwargs):
182 return super().report(*args, **kwargs)
183
184 # Helper functions
185
186 @staticmethod
187 def _save_wrapper(pil_img, exif_bytes, destination):
188 """
189 PIL crashes if exif_bytes=None, so we have to wrap it to avoid littering
190 the code with branches.
191 """
192 if exif_bytes:
193 pil_img.save(destination, "jpeg", exif=exif_bytes)
194 else:
195 pil_img.save(destination, "jpeg")
196
[end of api/catalog/api/views/image_views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/api/catalog/api/views/image_views.py b/api/catalog/api/views/image_views.py
--- a/api/catalog/api/views/image_views.py
+++ b/api/catalog/api/views/image_views.py
@@ -1,4 +1,5 @@
import io
+import re
import struct
from django.conf import settings
@@ -111,6 +112,13 @@
if not image_url:
raise get_api_exception("Could not find image.", 404)
+ # Hotfix to use scaled down version of the image from SMK
+ # TODO Remove when this issue is addressed:
+ # TODO https://github.com/WordPress/openverse-catalog/issues/698
+ if "iip.smk.dk" in image_url:
+ width = settings.THUMBNAIL_WIDTH_PX
+ image_url = re.sub(r"!\d+,", f"!{width},", image_url)
+
return super().thumbnail(image_url, request)
@action(detail=True, url_path="watermark", url_name="watermark")
|
{"golden_diff": "diff --git a/api/catalog/api/views/image_views.py b/api/catalog/api/views/image_views.py\n--- a/api/catalog/api/views/image_views.py\n+++ b/api/catalog/api/views/image_views.py\n@@ -1,4 +1,5 @@\n import io\n+import re\n import struct\n \n from django.conf import settings\n@@ -111,6 +112,13 @@\n if not image_url:\n raise get_api_exception(\"Could not find image.\", 404)\n \n+ # Hotfix to use scaled down version of the image from SMK\n+ # TODO Remove when this issue is addressed:\n+ # TODO https://github.com/WordPress/openverse-catalog/issues/698\n+ if \"iip.smk.dk\" in image_url:\n+ width = settings.THUMBNAIL_WIDTH_PX\n+ image_url = re.sub(r\"!\\d+,\", f\"!{width},\", image_url)\n+\n return super().thumbnail(image_url, request)\n \n @action(detail=True, url_path=\"watermark\", url_name=\"watermark\")\n", "issue": "Use smaller image for generating thumbnails for SMK\n## Description\r\n<!-- Concisely describe the bug. Compare your experience with what you expected to happen. -->\r\n<!-- For example: \"I clicked the 'submit' button and instead of seeing a thank you message, I saw a blank page.\" -->\r\nImages linked in the `image_url`s for the SMK provider are all ~2MB. These are large enough to overload our thumbnail service. Some of these requests timeout, such that the frontend falls back to downloading the entire full image. The result is unacceptable load times.\r\n\r\nWe should update the API to detect the SMK provider and modify the URL used for thumbnail generation.\r\n\r\n`image_url` that we receive has the form: https://iip.smk.dk/iiif/jp2/KKSgb5100_34.TIF.jp2/full/!2048,/0/default.jpg Note the`2048`.\r\n\r\nThe modified URL should be identical, except for a reduced filesize: https://iip.smk.dk/iiif/jp2/KKSgb5100_34.TIF.jp2/full/!400,/0/default.jpg\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem here; or delete the section entirely. -->\r\nThis change should be a temporary quick-fix that will allow us to re-enable the SMK provider, so we can make these images available again as soon as possible.\r\n\r\nIn the longer term, https://github.com/WordPress/openverse-catalog/issues/698 tracks updating the provider to provide links to smaller files for the purpose of thumbnail generation.\r\n\r\n## Resolution\r\n<!-- Replace the [ ] with [x] to check the box. 
-->\r\n- [ ] \ud83d\ude4b I would be interested in resolving this bug.\r\n\n", "before_files": [{"content": "import io\nimport struct\n\nfrom django.conf import settings\nfrom django.http.response import FileResponse, HttpResponse\nfrom django.utils.decorators import method_decorator\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.response import Response\n\nimport piexif\nimport requests\nfrom drf_yasg.utils import swagger_auto_schema\nfrom PIL import Image as PILImage\n\nfrom catalog.api.constants.media_types import IMAGE_TYPE\nfrom catalog.api.docs.image_docs import (\n ImageComplain,\n ImageDetail,\n ImageOembed,\n ImageRelated,\n ImageSearch,\n ImageStats,\n ImageThumbnail,\n)\nfrom catalog.api.models import Image\nfrom catalog.api.serializers.image_serializers import (\n ImageReportRequestSerializer,\n ImageSearchRequestSerializer,\n ImageSerializer,\n OembedRequestSerializer,\n OembedSerializer,\n WatermarkRequestSerializer,\n)\nfrom catalog.api.serializers.media_serializers import MediaThumbnailRequestSerializer\nfrom catalog.api.utils.exceptions import get_api_exception\nfrom catalog.api.utils.throttle import (\n AnonThumbnailRateThrottle,\n OAuth2IdThumbnailRateThrottle,\n)\nfrom catalog.api.utils.watermark import watermark\nfrom catalog.api.views.media_views import MediaViewSet\n\n\n@method_decorator(swagger_auto_schema(**ImageSearch.swagger_setup), \"list\")\n@method_decorator(swagger_auto_schema(**ImageStats.swagger_setup), \"stats\")\n@method_decorator(swagger_auto_schema(**ImageDetail.swagger_setup), \"retrieve\")\n@method_decorator(swagger_auto_schema(**ImageRelated.swagger_setup), \"related\")\n@method_decorator(swagger_auto_schema(**ImageComplain.swagger_setup), \"report\")\n@method_decorator(swagger_auto_schema(**ImageOembed.swagger_setup), \"oembed\")\n@method_decorator(swagger_auto_schema(**ImageThumbnail.swagger_setup), \"thumbnail\")\n@method_decorator(swagger_auto_schema(auto_schema=None), \"watermark\")\nclass ImageViewSet(MediaViewSet):\n \"\"\"\n Viewset for all endpoints pertaining to images.\n \"\"\"\n\n model_class = Image\n query_serializer_class = ImageSearchRequestSerializer\n default_index = settings.MEDIA_INDEX_MAPPING[IMAGE_TYPE]\n qa_index = \"search-qa-image\"\n\n serializer_class = ImageSerializer\n\n OEMBED_HEADERS = {\n \"User-Agent\": settings.OUTBOUND_USER_AGENT_TEMPLATE.format(purpose=\"OEmbed\"),\n }\n\n # Extra actions\n\n @action(\n detail=False,\n url_path=\"oembed\",\n url_name=\"oembed\",\n serializer_class=OembedSerializer,\n )\n def oembed(self, request, *_, **__):\n params = OembedRequestSerializer(data=request.query_params)\n params.is_valid(raise_exception=True)\n\n context = self.get_serializer_context()\n\n url = params.validated_data[\"url\"]\n identifier = url.rsplit(\"/\", 1)[1]\n try:\n image = self.get_queryset().get(identifier=identifier)\n except Image.DoesNotExist:\n return get_api_exception(\"Could not find image.\", 404)\n if not (image.height and image.width):\n image_file = requests.get(image.url, headers=self.OEMBED_HEADERS)\n width, height = PILImage.open(io.BytesIO(image_file.content)).size\n context |= {\n \"width\": width,\n \"height\": height,\n }\n\n serializer = self.get_serializer(image, context=context)\n return Response(data=serializer.data)\n\n @action(\n detail=True,\n url_path=\"thumb\",\n url_name=\"thumb\",\n serializer_class=MediaThumbnailRequestSerializer,\n throttle_classes=[AnonThumbnailRateThrottle, OAuth2IdThumbnailRateThrottle],\n 
)\n def thumbnail(self, request, *_, **__):\n image = self.get_object()\n\n image_url = image.url\n if not image_url:\n raise get_api_exception(\"Could not find image.\", 404)\n\n return super().thumbnail(image_url, request)\n\n @action(detail=True, url_path=\"watermark\", url_name=\"watermark\")\n def watermark(self, request, *_, **__):\n if not settings.WATERMARK_ENABLED:\n raise NotFound(\"The watermark feature is currently disabled.\")\n\n params = WatermarkRequestSerializer(data=request.query_params)\n params.is_valid(raise_exception=True)\n\n image = self.get_object()\n image_url = image.url\n image_info = {\n attr: getattr(image, attr)\n for attr in [\"title\", \"creator\", \"license\", \"license_version\"]\n }\n\n # Create the actual watermarked image.\n watermarked, exif = watermark(image_url, image_info, params.data[\"watermark\"])\n # Re-insert EXIF metadata.\n if exif:\n # piexif dump raises InvalidImageDataError which is a child class\n # of ValueError, and a struct error when the value is not\n # between -2147483648 and 2147483647\n # https://github.com/WordPress/openverse-api/issues/849\n try:\n exif_bytes = piexif.dump(exif)\n except (struct.error, ValueError):\n exif_bytes = None\n else:\n exif_bytes = None\n img_bytes = io.BytesIO()\n self._save_wrapper(watermarked, exif_bytes, img_bytes)\n\n if params.data[\"embed_metadata\"]:\n # Embed ccREL metadata with XMP.\n work_properties = {\n \"creator\": image.creator,\n \"license_url\": image.license_url,\n \"attribution\": image.attribution,\n \"work_landing_page\": image.foreign_landing_url,\n \"identifier\": str(image.identifier),\n }\n\n # Import inside a function to allow server run without Exempi library\n import libxmp\n\n from catalog.api.utils import ccrel\n\n try:\n with_xmp = ccrel.embed_xmp_bytes(img_bytes, work_properties)\n return FileResponse(with_xmp, content_type=\"image/jpeg\")\n except (libxmp.XMPError, AttributeError):\n # Just send the EXIF-ified file if libxmp fails to add metadata\n response = HttpResponse(content_type=\"image/jpeg\")\n self._save_wrapper(watermarked, exif_bytes, response)\n return response\n else:\n response = HttpResponse(img_bytes, content_type=\"image/jpeg\")\n self._save_wrapper(watermarked, exif_bytes, response)\n return response\n\n @action(\n detail=True,\n methods=[\"post\"],\n serializer_class=ImageReportRequestSerializer,\n )\n def report(self, *args, **kwargs):\n return super().report(*args, **kwargs)\n\n # Helper functions\n\n @staticmethod\n def _save_wrapper(pil_img, exif_bytes, destination):\n \"\"\"\n PIL crashes if exif_bytes=None, so we have to wrap it to avoid littering\n the code with branches.\n \"\"\"\n if exif_bytes:\n pil_img.save(destination, \"jpeg\", exif=exif_bytes)\n else:\n pil_img.save(destination, \"jpeg\")\n", "path": "api/catalog/api/views/image_views.py"}]}
| 2,877 | 232 |
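The fix above rewrites the width segment of SMK's IIIF URLs before thumbnailing, as the issue's example URLs show (`!2048,` becomes `!400,`). Below is a minimal sketch of that substitution, with the 400 px width treated as an assumed stand-in for the real setting.

```python
import re

THUMBNAIL_WIDTH_PX = 400  # assumed stand-in for the Django setting


def downscale_smk_url(image_url: str, width: int = THUMBNAIL_WIDTH_PX) -> str:
    # Only SMK's IIIF server needs the workaround; other URLs pass through.
    if "iip.smk.dk" in image_url:
        return re.sub(r"!\d+,", f"!{width},", image_url)
    return image_url


url = "https://iip.smk.dk/iiif/jp2/KKSgb5100_34.TIF.jp2/full/!2048,/0/default.jpg"
print(downscale_smk_url(url))
# -> https://iip.smk.dk/iiif/jp2/KKSgb5100_34.TIF.jp2/full/!400,/0/default.jpg
```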
gh_patches_debug_6310
|
rasdani/github-patches
|
git_diff
|
kornia__kornia-1421
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PyPI tarball missing required files
### Describe the bug
The tarball uploaded to PyPI does not contain `requirements/*` files which are required to run the `setup.py` file.
### Reproduction steps
```bash
$ wget https://files.pythonhosted.org/packages/source/k/kornia/kornia-0.6.0.tar.gz
$ tar zxf kornia-0.6.0.tar.gz
$ cd kornia-0.6.0
$ python setup.py install
...
Traceback (most recent call last):
File "setup.py", line 43, in <module>
"x": load_requirements("requirements/x.txt"),
File "setup.py", line 38, in load_requirements
with open(filename) as f:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements/x.txt'
```
### Expected behavior
I would expect the `setup.py` to function correctly. I believe there's a setuptools option to control which files get included in the upload tarball.
### Environment
```shell
- PyTorch Version (e.g., 1.0): 1.10
- OS (e.g., Linux): macOS
- How you installed PyTorch (`conda`, `pip`, source): `spack`
- Build command you used (if compiling from source): `python setup.py install`
- Python version: 3.8.11
- CUDA/cuDNN version: N/A
- GPU models and configuration: N/A
- Any other relevant information: N/A
```
### Additional context
_No response_
</issue>
<code>
[start of setup.py]
1 # Welcome to the Kornia setup.py.
2 #
3 import re
4 import sys
5
6 # Make sure that kornia is running on Python 3.6.0 or later
7 # (to avoid running into this bug: https://bugs.python.org/issue29246)
8
9 if sys.version_info < (3, 6, 0):
10 raise RuntimeError("Kornia requires Python 3.6.0 or later.")
11
12
13 from setuptools import find_packages, setup
14
15
16 def find_version(file_path: str) -> str:
17 version_file = open(file_path).read()
18 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M)
19 if not version_match:
20 raise RuntimeError(f"Unable to find version string in {file_path}")
21 return version_match.group(1)
22
23
24 VERSION = find_version("kornia/_version.py")
25
26
27 # NOTE: kornia MUST only require PyTorch
28 requirements = [
29 'torch>=1.8.1', 'packaging',
30 ]
31
32 # open readme file and set long description
33 with open("README.md", encoding="utf-8") as fh:
34 long_description = fh.read()
35
36
37 def load_requirements(filename: str):
38 with open(filename) as f:
39 return [x.strip() for x in f.readlines() if "-r" != x[0:2]]
40
41
42 requirements_extras = {
43 "x": load_requirements("requirements/x.txt"),
44 "dev": load_requirements("requirements/dev.txt")
45 }
46 requirements_extras["all"] = requirements_extras["x"] + requirements_extras["dev"]
47
48
49 if __name__ == '__main__':
50 setup(
51 name='kornia',
52 version=VERSION,
53 author='Edgar Riba',
54 author_email='[email protected]',
55 url='https://www.kornia.org',
56 download_url='https://github.com/kornia/kornia',
57 license='Apache License 2.0',
58 description='Open Source Differentiable Computer Vision Library for PyTorch',
59 long_description=long_description,
60 long_description_content_type='text/markdown',
61 python_requires='>=3.6',
62 setup_requires=['pytest-runner'],
63 tests_require=['pytest'],
64 packages=find_packages(exclude=('docs', 'test', 'examples')),
65 package_data={"kornia": ["py.typed"]},
66 zip_safe=True,
67 install_requires=requirements,
68 extras_require=requirements_extras,
69 keywords=['computer vision', 'deep learning', 'pytorch'],
70 project_urls={
71 "Bug Tracker": "https://github.com/kornia/kornia/issues",
72 "Documentation": "https://kornia.readthedocs.io/en/latest",
73 "Source Code": "https://github.com/kornia/kornia",
74 },
75 classifiers=[
76 'Environment :: GPU',
77 'Environment :: Console',
78 'Natural Language :: English',
79 # How mature is this project? Common values are
80 # 3 - Alpha, 4 - Beta, 5 - Production/Stable
81 'Development Status :: 4 - Beta',
82 # Indicate who your project is intended for
83 'Intended Audience :: Developers',
84 'Intended Audience :: Education',
85 'Intended Audience :: Science/Research',
86 'Intended Audience :: Information Technology',
87 'Topic :: Software Development :: Libraries',
88 'Topic :: Scientific/Engineering :: Artificial Intelligence',
89 'Topic :: Scientific/Engineering :: Image Processing',
90 # Pick your license as you wish
91 'License :: OSI Approved :: Apache Software License',
92 'Operating System :: OS Independent',
93 # Specify the Python versions you support here. In particular, ensure
94 # that you indicate whether you support Python 2, Python 3 or both.
95 'Programming Language :: Python :: 3',
96 'Programming Language :: Python :: 3.6',
97 'Programming Language :: Python :: 3.7',
98 'Programming Language :: Python :: 3.8',
99 ],
100 )
101
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -63,6 +63,7 @@
tests_require=['pytest'],
packages=find_packages(exclude=('docs', 'test', 'examples')),
package_data={"kornia": ["py.typed"]},
+ data_files=[('', ['requirements/x.txt', 'requirements/dev.txt'])],
zip_safe=True,
install_requires=requirements,
extras_require=requirements_extras,
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -63,6 +63,7 @@\n tests_require=['pytest'],\n packages=find_packages(exclude=('docs', 'test', 'examples')),\n package_data={\"kornia\": [\"py.typed\"]},\n+ data_files=[('', ['requirements/x.txt', 'requirements/dev.txt'])],\n zip_safe=True,\n install_requires=requirements,\n extras_require=requirements_extras,\n", "issue": "PyPI tarball missing required files\n### Describe the bug\r\n\r\nThe tarball uploaded to PyPI does not contain `requirements/*` files which are required to run the `setup.py` file.\r\n\r\n### Reproduction steps\r\n\r\n```bash\r\n$ wget https://files.pythonhosted.org/packages/source/k/kornia/kornia-0.6.0.tar.gz\r\n$ tar zxf kornia-0.6.0.tar.gz\r\n$ cd kornia-0.6.0\r\n$ python setup.py install\r\n...\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 43, in <module>\r\n \"x\": load_requirements(\"requirements/x.txt\"),\r\n File \"setup.py\", line 38, in load_requirements\r\n with open(filename) as f:\r\nFileNotFoundError: [Errno 2] No such file or directory: 'requirements/x.txt'\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nI would expect the `setup.py` to function correctly. I believe there's a setuptools option to control which files get included in the upload tarball.\r\n\r\n### Environment\r\n\r\n```shell\r\n- PyTorch Version (e.g., 1.0): 1.10\r\n- OS (e.g., Linux): macOS\r\n- How you installed PyTorch (`conda`, `pip`, source): `spack`\r\n- Build command you used (if compiling from source): `python setup.py install`\r\n- Python version: 3.8.11\r\n- CUDA/cuDNN version: N/A\r\n- GPU models and configuration: N/A\r\n- Any other relevant information: N/A\r\n```\r\n\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "# Welcome to the Kornia setup.py.\n#\nimport re\nimport sys\n\n# Make sure that kornia is running on Python 3.6.0 or later\n# (to avoid running into this bug: https://bugs.python.org/issue29246)\n\nif sys.version_info < (3, 6, 0):\n raise RuntimeError(\"Kornia requires Python 3.6.0 or later.\")\n\n\nfrom setuptools import find_packages, setup\n\n\ndef find_version(file_path: str) -> str:\n version_file = open(file_path).read()\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M)\n if not version_match:\n raise RuntimeError(f\"Unable to find version string in {file_path}\")\n return version_match.group(1)\n\n\nVERSION = find_version(\"kornia/_version.py\")\n\n\n# NOTE: kornia MUST only require PyTorch\nrequirements = [\n 'torch>=1.8.1', 'packaging',\n]\n\n# open readme file and set long description\nwith open(\"README.md\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\n\ndef load_requirements(filename: str):\n with open(filename) as f:\n return [x.strip() for x in f.readlines() if \"-r\" != x[0:2]]\n\n\nrequirements_extras = {\n \"x\": load_requirements(\"requirements/x.txt\"),\n \"dev\": load_requirements(\"requirements/dev.txt\")\n}\nrequirements_extras[\"all\"] = requirements_extras[\"x\"] + requirements_extras[\"dev\"]\n\n\nif __name__ == '__main__':\n setup(\n name='kornia',\n version=VERSION,\n author='Edgar Riba',\n author_email='[email protected]',\n url='https://www.kornia.org',\n download_url='https://github.com/kornia/kornia',\n license='Apache License 2.0',\n description='Open Source Differentiable Computer Vision Library for PyTorch',\n long_description=long_description,\n long_description_content_type='text/markdown',\n python_requires='>=3.6',\n 
setup_requires=['pytest-runner'],\n tests_require=['pytest'],\n packages=find_packages(exclude=('docs', 'test', 'examples')),\n package_data={\"kornia\": [\"py.typed\"]},\n zip_safe=True,\n install_requires=requirements,\n extras_require=requirements_extras,\n keywords=['computer vision', 'deep learning', 'pytorch'],\n project_urls={\n \"Bug Tracker\": \"https://github.com/kornia/kornia/issues\",\n \"Documentation\": \"https://kornia.readthedocs.io/en/latest\",\n \"Source Code\": \"https://github.com/kornia/kornia\",\n },\n classifiers=[\n 'Environment :: GPU',\n 'Environment :: Console',\n 'Natural Language :: English',\n # How mature is this project? Common values are\n # 3 - Alpha, 4 - Beta, 5 - Production/Stable\n 'Development Status :: 4 - Beta',\n # Indicate who your project is intended for\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Information Technology',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Image Processing',\n # Pick your license as you wish\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n # Specify the Python versions you support here. In particular, ensure\n # that you indicate whether you support Python 2, Python 3 or both.\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n )\n", "path": "setup.py"}]}
| 1,927 | 103 |
gh_patches_debug_16698
|
rasdani/github-patches
|
git_diff
|
GeotrekCE__Geotrek-admin-2462
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Outdoor - Orientations
- [x] Ajouter les noms des champs dans les filtres (add the field names to the filters)
</issue>
<code>
[start of mapentity/filters.py]
1 from django.db.models.fields.related import ManyToOneRel
2 from django.conf import settings
3
4 from django_filters import FilterSet, Filter
5 from django_filters.filterset import get_model_field
6 from django.contrib.gis import forms
7
8 from .settings import app_settings, API_SRID
9 from .widgets import HiddenGeometryWidget
10
11
12 class PolygonFilter(Filter):
13
14 field_class = forms.PolygonField
15
16 def __init__(self, *args, **kwargs):
17 kwargs.setdefault('field_name', app_settings['GEOM_FIELD_NAME'])
18 kwargs.setdefault('widget', HiddenGeometryWidget)
19 kwargs.setdefault('lookup_expr', 'intersects')
20 super(PolygonFilter, self).__init__(*args, **kwargs)
21
22
23 class PythonPolygonFilter(PolygonFilter):
24
25 def filter(self, qs, value):
26 if not value:
27 return qs
28 if not value.srid:
29 value.srid = API_SRID
30 value.transform(settings.SRID)
31 filtered = []
32 for o in qs.all():
33 geom = getattr(o, self.field_name)
34 if geom and geom.valid and not geom.empty:
35 if getattr(geom, self.lookup_expr)(value):
36 filtered.append(o.pk)
37 else:
38 filtered.append(o.pk)
39 return qs.filter(pk__in=filtered)
40
41
42 class BaseMapEntityFilterSet(FilterSet):
43 def __init__(self, *args, **kwargs):
44 super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)
45 self.__bypass_labels()
46
47 def __bypass_labels(self):
48 """
49 These hacks allow to bypass field labels. Using either placeholders,
50 empty choices label, etc. This allows to greatly save space in form layout,
51 which is required for concise filter forms.
52 """
53 for fieldname in self.base_filters.keys():
54 field = self.form.fields[fieldname]
55 if isinstance(field, forms.MultiValueField):
56 for i, widget in enumerate(field.widget.widgets):
57 self.__set_placeholder(field.fields[i], widget)
58 elif isinstance(field, forms.ChoiceField):
59 field.empty_label = field.label
60 self.__set_placeholder(field, field.widget)
61 elif isinstance(field, forms.NullBooleanField):
62 choices = [(u'1', field.label)] + field.widget.choices[1:]
63 field.widget.choices = choices
64 self.__set_placeholder(field, field.widget)
65 else:
66 self.__set_placeholder(field, field.widget)
67
68 def __set_placeholder(self, field, widget):
69 field.help_text = '' # Hide help text
70 widget.attrs['placeholder'] = field.label
71 widget.attrs['data-placeholder'] = field.label
72 widget.attrs['title'] = field.label
73 widget.attrs['data-label'] = field.label
74
75 @classmethod
76 def add_filter(cls, name, filter_=None):
77 field = get_model_field(cls._meta.model, name)
78 if filter_ is None:
79 if isinstance(field, ManyToOneRel):
80 filter_ = cls.filter_for_reverse_field(field, name)
81 else:
82 filter_ = cls.filter_for_field(field, name)
83 cls.base_filters[name] = filter_
84
85 @classmethod
86 def add_filters(cls, filters):
87 for name, filter_ in filters.items():
88 filter_.field_name = name
89 cls.add_filter(name, filter_)
90
91
92 class MapEntityFilterSet(BaseMapEntityFilterSet):
93 bbox = PolygonFilter()
94
95 class Meta:
96 fields = ['bbox']
97
[end of mapentity/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mapentity/filters.py b/mapentity/filters.py
--- a/mapentity/filters.py
+++ b/mapentity/filters.py
@@ -2,6 +2,7 @@
from django.conf import settings
from django_filters import FilterSet, Filter
+from django_filters.fields import ChoiceField
from django_filters.filterset import get_model_field
from django.contrib.gis import forms
@@ -42,6 +43,9 @@
class BaseMapEntityFilterSet(FilterSet):
def __init__(self, *args, **kwargs):
super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)
+ for filter_ in self.filters.values():
+ if filter_.field_class == ChoiceField:
+ filter_.extra.setdefault('empty_label', filter_.label)
self.__bypass_labels()
def __bypass_labels(self):
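To make the effect of this patch concrete, here is a small hypothetical filterset; it assumes a configured Django project with `django-filter` and `mapentity` installed, and the filter name and choices are invented for illustration.

```python
import django_filters
from mapentity.filters import BaseMapEntityFilterSet  # module patched above


class OrientationFilterSet(BaseMapEntityFilterSet):  # hypothetical filterset
    orientation = django_filters.ChoiceFilter(
        label="Orientation",
        choices=[("N", "North"), ("S", "South")],
    )


# After the patch, the empty <select> option of "orientation" renders as
# "Orientation" (the filter label) instead of an anonymous dash, which is
# what the issue asks for: show the field names in the filters.
```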
|
{"golden_diff": "diff --git a/mapentity/filters.py b/mapentity/filters.py\n--- a/mapentity/filters.py\n+++ b/mapentity/filters.py\n@@ -2,6 +2,7 @@\n from django.conf import settings\n \n from django_filters import FilterSet, Filter\n+from django_filters.fields import ChoiceField\n from django_filters.filterset import get_model_field\n from django.contrib.gis import forms\n \n@@ -42,6 +43,9 @@\n class BaseMapEntityFilterSet(FilterSet):\n def __init__(self, *args, **kwargs):\n super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)\n+ for filter_ in self.filters.values():\n+ if filter_.field_class == ChoiceField:\n+ filter_.extra.setdefault('empty_label', filter_.label)\n self.__bypass_labels()\n \n def __bypass_labels(self):\n", "issue": "Outdoor - Orientations\n- [x] Ajouter les noms des champs dans les filtres\n", "before_files": [{"content": "from django.db.models.fields.related import ManyToOneRel\nfrom django.conf import settings\n\nfrom django_filters import FilterSet, Filter\nfrom django_filters.filterset import get_model_field\nfrom django.contrib.gis import forms\n\nfrom .settings import app_settings, API_SRID\nfrom .widgets import HiddenGeometryWidget\n\n\nclass PolygonFilter(Filter):\n\n field_class = forms.PolygonField\n\n def __init__(self, *args, **kwargs):\n kwargs.setdefault('field_name', app_settings['GEOM_FIELD_NAME'])\n kwargs.setdefault('widget', HiddenGeometryWidget)\n kwargs.setdefault('lookup_expr', 'intersects')\n super(PolygonFilter, self).__init__(*args, **kwargs)\n\n\nclass PythonPolygonFilter(PolygonFilter):\n\n def filter(self, qs, value):\n if not value:\n return qs\n if not value.srid:\n value.srid = API_SRID\n value.transform(settings.SRID)\n filtered = []\n for o in qs.all():\n geom = getattr(o, self.field_name)\n if geom and geom.valid and not geom.empty:\n if getattr(geom, self.lookup_expr)(value):\n filtered.append(o.pk)\n else:\n filtered.append(o.pk)\n return qs.filter(pk__in=filtered)\n\n\nclass BaseMapEntityFilterSet(FilterSet):\n def __init__(self, *args, **kwargs):\n super(BaseMapEntityFilterSet, self).__init__(*args, **kwargs)\n self.__bypass_labels()\n\n def __bypass_labels(self):\n \"\"\"\n These hacks allow to bypass field labels. Using either placeholders,\n empty choices label, etc. 
This allows to greatly save space in form layout,\n which is required for concise filter forms.\n \"\"\"\n for fieldname in self.base_filters.keys():\n field = self.form.fields[fieldname]\n if isinstance(field, forms.MultiValueField):\n for i, widget in enumerate(field.widget.widgets):\n self.__set_placeholder(field.fields[i], widget)\n elif isinstance(field, forms.ChoiceField):\n field.empty_label = field.label\n self.__set_placeholder(field, field.widget)\n elif isinstance(field, forms.NullBooleanField):\n choices = [(u'1', field.label)] + field.widget.choices[1:]\n field.widget.choices = choices\n self.__set_placeholder(field, field.widget)\n else:\n self.__set_placeholder(field, field.widget)\n\n def __set_placeholder(self, field, widget):\n field.help_text = '' # Hide help text\n widget.attrs['placeholder'] = field.label\n widget.attrs['data-placeholder'] = field.label\n widget.attrs['title'] = field.label\n widget.attrs['data-label'] = field.label\n\n @classmethod\n def add_filter(cls, name, filter_=None):\n field = get_model_field(cls._meta.model, name)\n if filter_ is None:\n if isinstance(field, ManyToOneRel):\n filter_ = cls.filter_for_reverse_field(field, name)\n else:\n filter_ = cls.filter_for_field(field, name)\n cls.base_filters[name] = filter_\n\n @classmethod\n def add_filters(cls, filters):\n for name, filter_ in filters.items():\n filter_.field_name = name\n cls.add_filter(name, filter_)\n\n\nclass MapEntityFilterSet(BaseMapEntityFilterSet):\n bbox = PolygonFilter()\n\n class Meta:\n fields = ['bbox']\n", "path": "mapentity/filters.py"}]}
| 1,464 | 186 |
gh_patches_debug_27824
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-976
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve Frequency
## 🚀 Feature
If we would like to log datapoints/second every 100 iterations, we would most probably do it like this:
```python
wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
wps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=100))
```
however, it seems like this won't take into account all the other iterations when computing the total number of tokens.
```python
class Frequency(Metric):
....
def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
engine.add_event_handler(Events.EPOCH_STARTED, self.started)
engine.add_event_handler(event_name, self.iteration_completed)
engine.add_event_handler(event_name, self.completed, name)
```
IMO, should be
```python
class Frequency(Metric):
....
def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
engine.add_event_handler(Events.EPOCH_STARTED, self.started)
engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
engine.add_event_handler(event_name, self.completed, name)
```
cc @erip
</issue>
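To make the proposal concrete, here is a minimal sketch of the desired wiring using ignite's public handler API; `train_step` and the `'ntokens'` key are placeholders. It reproduces by hand what the proposed `attach()` change would do.

```python
from ignite.engine import Engine, Events
from ignite.metrics import Frequency


def train_step(engine, batch):
    # placeholder step function returning the number of tokens in the batch
    return {"ntokens": len(batch)}


trainer = Engine(train_step)
wps = Frequency(output_transform=lambda out: out["ntokens"])

trainer.add_event_handler(Events.EPOCH_STARTED, wps.started)
# accumulate on every iteration so no tokens are missed ...
trainer.add_event_handler(Events.ITERATION_COMPLETED, wps.iteration_completed)
# ... but only publish engine.state.metrics['wps'] every 100 iterations
trainer.add_event_handler(Events.ITERATION_COMPLETED(every=100), wps.completed, "wps")
```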
<code>
[start of ignite/metrics/frequency.py]
1 import torch
2 import torch.distributed as dist
3
4 from ignite.engine import Events
5 from ignite.metrics import Metric
6 from ignite.handlers.timing import Timer
7 from ignite.metrics.metric import sync_all_reduce, reinit__is_reduced
8
9
10 class Frequency(Metric):
11 """Provides metrics for the number of examples processed per second.
12
13 Examples:
14
15 .. code-block:: python
16
17 # Compute number of tokens processed
18 wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
19 wps_metric.attach(trainer, name='wps')
20 # Logging with TQDM
21 ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
22 # Progress bar will looks like
23 # Epoch [2/10]: [12/24] 50%|█████ , wps=400 [00:17<1:23]
24 """
25
26 def __init__(self, output_transform=lambda x: x, device=None):
27 self._timer = None
28 self._acc = None
29 self._n = None
30 self._elapsed = None
31 super(Frequency, self).__init__(output_transform=output_transform, device=device)
32
33 @reinit__is_reduced
34 def reset(self):
35 self._timer = Timer()
36 self._acc = 0
37 self._n = 0
38 self._elapsed = 0.0
39 super(Frequency, self).reset()
40
41 @reinit__is_reduced
42 def update(self, output):
43 self._acc += output
44 self._n = self._acc
45 self._elapsed = torch.tensor(self._timer.value(), device=self._device)
46
47 @sync_all_reduce("_n", "_elapsed")
48 def compute(self):
49 time_divisor = 1.0
50
51 if dist.is_available() and dist.is_initialized():
52 time_divisor *= dist.get_world_size()
53
54 # Returns the average processed objects per second across all workers
55 return self._n / self._elapsed.item() * time_divisor
56
57 def completed(self, engine, name):
58 engine.state.metrics[name] = int(self.compute())
59
60 def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
61 engine.add_event_handler(Events.EPOCH_STARTED, self.started)
62 engine.add_event_handler(event_name, self.iteration_completed)
63 engine.add_event_handler(event_name, self.completed, name)
64
[end of ignite/metrics/frequency.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ignite/metrics/frequency.py b/ignite/metrics/frequency.py
--- a/ignite/metrics/frequency.py
+++ b/ignite/metrics/frequency.py
@@ -19,8 +19,21 @@
wps_metric.attach(trainer, name='wps')
# Logging with TQDM
ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
- # Progress bar will looks like
+ # Progress bar will look like
# Epoch [2/10]: [12/24] 50%|█████ , wps=400 [00:17<1:23]
+
+
+ To compute examples processed per second every 50th iteration:
+
+ .. code-block:: python
+
+ # Compute number of tokens processed
+ wps_metric = Frequency(output_transform=lambda x: x['ntokens'])
+ wps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=50))
+ # Logging with TQDM
+ ProgressBar(persist=True).attach(trainer, metric_names=['wps'])
+ # Progress bar will look like
+ # Epoch [2/10]: [50/100] 50%|█████ , wps=400 [00:17<00:35]
"""
def __init__(self, output_transform=lambda x: x, device=None):
@@ -59,5 +72,5 @@
def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):
engine.add_event_handler(Events.EPOCH_STARTED, self.started)
- engine.add_event_handler(event_name, self.iteration_completed)
+ engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)
engine.add_event_handler(event_name, self.completed, name)
|
{"golden_diff": "diff --git a/ignite/metrics/frequency.py b/ignite/metrics/frequency.py\n--- a/ignite/metrics/frequency.py\n+++ b/ignite/metrics/frequency.py\n@@ -19,8 +19,21 @@\n wps_metric.attach(trainer, name='wps')\n # Logging with TQDM\n ProgressBar(persist=True).attach(trainer, metric_names=['wps'])\n- # Progress bar will looks like\n+ # Progress bar will look like\n # Epoch [2/10]: [12/24] 50%|\u2588\u2588\u2588\u2588\u2588 , wps=400 [00:17<1:23]\n+\n+\n+ To compute examples processed per second every 50th iteration:\n+\n+ .. code-block:: python\n+\n+ # Compute number of tokens processed\n+ wps_metric = Frequency(output_transform=lambda x: x['ntokens'])\n+ wps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=50))\n+ # Logging with TQDM\n+ ProgressBar(persist=True).attach(trainer, metric_names=['wps'])\n+ # Progress bar will look like\n+ # Epoch [2/10]: [50/100] 50%|\u2588\u2588\u2588\u2588\u2588 , wps=400 [00:17<00:35]\n \"\"\"\n \n def __init__(self, output_transform=lambda x: x, device=None):\n@@ -59,5 +72,5 @@\n \n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n- engine.add_event_handler(event_name, self.iteration_completed)\n+ engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\n engine.add_event_handler(event_name, self.completed, name)\n", "issue": "Improve Frequency\n## \ud83d\ude80 Feature\r\n\r\nIf we would like to log datapoints/second every 100 iterations, we most probably do like this \r\n```python\r\nwps_metric = Frequency(output_transformer=lambda x: x['ntokens'])\r\nwps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=100))\r\n```\r\nhowever, seems like this wont take into account all other iterations while computing the total number of tokens.\r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\r\n engine.add_event_handler(event_name, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\nIMO, should be \r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\r\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\n\r\ncc @erip \r\n\nImprove Frequency\n## \ud83d\ude80 Feature\r\n\r\nIf we would like to log datapoints/second every 100 iterations, we most probably do like this \r\n```python\r\nwps_metric = Frequency(output_transformer=lambda x: x['ntokens'])\r\nwps_metric.attach(trainer, name='wps', event_name=Events.ITERATION_COMPLETED(every=100))\r\n```\r\nhowever, seems like this wont take into account all other iterations while computing the total number of tokens.\r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\r\n engine.add_event_handler(event_name, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\nIMO, should be \r\n```python\r\nclass Frequency(Metric):\r\n ....\r\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\r\n engine.add_event_handler(Events.EPOCH_STARTED, 
self.started)\r\n engine.add_event_handler(Events.ITERATION_COMPLETED, self.iteration_completed)\r\n engine.add_event_handler(event_name, self.completed, name)\r\n```\r\n\r\ncc @erip \r\n\n", "before_files": [{"content": "import torch\nimport torch.distributed as dist\n\nfrom ignite.engine import Events\nfrom ignite.metrics import Metric\nfrom ignite.handlers.timing import Timer\nfrom ignite.metrics.metric import sync_all_reduce, reinit__is_reduced\n\n\nclass Frequency(Metric):\n \"\"\"Provides metrics for the number of examples processed per second.\n\n Examples:\n\n .. code-block:: python\n\n # Compute number of tokens processed\n wps_metric = Frequency(output_transform=lambda x: x['ntokens'])\n wps_metric.attach(trainer, name='wps')\n # Logging with TQDM\n ProgressBar(persist=True).attach(trainer, metric_names=['wps'])\n # Progress bar will looks like\n # Epoch [2/10]: [12/24] 50%|\u2588\u2588\u2588\u2588\u2588 , wps=400 [00:17<1:23]\n \"\"\"\n\n def __init__(self, output_transform=lambda x: x, device=None):\n self._timer = None\n self._acc = None\n self._n = None\n self._elapsed = None\n super(Frequency, self).__init__(output_transform=output_transform, device=device)\n\n @reinit__is_reduced\n def reset(self):\n self._timer = Timer()\n self._acc = 0\n self._n = 0\n self._elapsed = 0.0\n super(Frequency, self).reset()\n\n @reinit__is_reduced\n def update(self, output):\n self._acc += output\n self._n = self._acc\n self._elapsed = torch.tensor(self._timer.value(), device=self._device)\n\n @sync_all_reduce(\"_n\", \"_elapsed\")\n def compute(self):\n time_divisor = 1.0\n\n if dist.is_available() and dist.is_initialized():\n time_divisor *= dist.get_world_size()\n\n # Returns the average processed objects per second across all workers\n return self._n / self._elapsed.item() * time_divisor\n\n def completed(self, engine, name):\n engine.state.metrics[name] = int(self.compute())\n\n def attach(self, engine, name, event_name=Events.ITERATION_COMPLETED):\n engine.add_event_handler(Events.EPOCH_STARTED, self.started)\n engine.add_event_handler(event_name, self.iteration_completed)\n engine.add_event_handler(event_name, self.completed, name)\n", "path": "ignite/metrics/frequency.py"}]}
| 1,698 | 425 |
gh_patches_debug_29420
|
rasdani/github-patches
|
git_diff
|
scipy__scipy-14478
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: n-d interpolation parameter provided to geometric_slerp
This was first reported by @BvB93 during addition of type hints to the related code.
In short, [`geometric_slerp()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.geometric_slerp.html) was intended to be a bit like `np.linspace` on the surface of an n-dimensional circle/sphere, interpolating between start and end points on that surface. However, it somehow slipped through that we allow arbitrary dimensions for the interpolation points/array, and this leads to weird/inconsistent/incorrect results.
Examples are below, where the "intended" 1-dimensional case produces the correct output--two points with 1 in the middle of the unit circle between the start/end. Notice the confusing discrepancy between the degenerate path output shapes vs. non-degenerate. While the non-degenerate outputs respect the input shape for `ndim > 1`, their numerical results are incorrect--the `x` coordinate isn't reflecting the movement along the unit circle to the halfway point between `[0, 1]` and `[1, 0]`.
What I'd like to do is raise an exception if `np.asarray(t).ndim > 1` and just move on, but as Bas points out we do have *de facto* support (no error raised) for arbitrary dimensions whether I like it or not, and I don't want my own view on it to exempt it from backwards compatibility concerns. That said, I wonder if we can basically say that this is a clear bug so it trumps backward compatibility?
```python
import numpy as np
import scipy
from scipy.spatial import geometric_slerp
print("scipy.__version__:", scipy.__version__)
arr1 = np.array([0, 1])
arr2 = np.array([1, 0])
for t in [[0, 0.5],
[[0, 0.5]],
[[[[[[[[[0, 0.5]]]]]]]]]]:
print("t dims:", np.asarray(t).ndim)
path = geometric_slerp(start=arr1,
end=arr2,
t=t)
print("path:\n", path)
degenerate_path = geometric_slerp(start=arr1,
end=arr1,
t=t)
print("degenerate_path:\n", degenerate_path)
```
```
scipy.__version__: 1.8.0.dev0+1472.130a1e6
t dims: 1
path:
[[0. 1. ]
[0.70710678 0.70710678]]
degenerate_path:
[[0. 1.]
[0. 1.]]
t dims: 2
path:
[[[0. 0.70710678]]]
degenerate_path:
[[0. 1.]
[0. 1.]]
t dims: 9
path:
[[[[[[[[[[0. 0.70710678]]]]]]]]]]
degenerate_path:
[[0. 1.]
[0. 1.]]
```
</issue>
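A minimal sketch of the guard proposed in the last paragraph (reject any `t` with more than one dimension); the error messages are illustrative, not necessarily what scipy ends up using.

```python
import numpy as np


def _validate_interpolation_param(t):
    """Sketch of the proposed check for geometric_slerp's ``t`` argument."""
    t = np.asarray(t, dtype=np.float64)
    if t.ndim > 1:
        raise ValueError("interpolation parameter t must be at most 1-dimensional")
    if t.size and (t.min() < 0 or t.max() > 1):
        raise ValueError("interpolation parameter must be in [0, 1]")
    return t
```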
<code>
[start of scipy/spatial/_geometric_slerp.py]
1 from __future__ import annotations
2
3 __all__ = ['geometric_slerp']
4
5 import warnings
6 from typing import TYPE_CHECKING
7
8 import numpy as np
9 from scipy.spatial.distance import euclidean
10
11 if TYPE_CHECKING:
12 import numpy.typing as npt
13
14
15 def _geometric_slerp(start, end, t):
16 # create an orthogonal basis using QR decomposition
17 basis = np.vstack([start, end])
18 Q, R = np.linalg.qr(basis.T)
19 signs = 2 * (np.diag(R) >= 0) - 1
20 Q = Q.T * signs.T[:, np.newaxis]
21 R = R.T * signs.T[:, np.newaxis]
22
23 # calculate the angle between `start` and `end`
24 c = np.dot(start, end)
25 s = np.linalg.det(R)
26 omega = np.arctan2(s, c)
27
28 # interpolate
29 start, end = Q
30 s = np.sin(t * omega)
31 c = np.cos(t * omega)
32 return start * c[:, np.newaxis] + end * s[:, np.newaxis]
33
34
35 def geometric_slerp(
36 start: npt.ArrayLike,
37 end: npt.ArrayLike,
38 t: npt.ArrayLike,
39 tol: float = 1e-7,
40 ) -> np.ndarray:
41 """
42 Geometric spherical linear interpolation.
43
44 The interpolation occurs along a unit-radius
45 great circle arc in arbitrary dimensional space.
46
47 Parameters
48 ----------
49 start : (n_dimensions, ) array-like
50 Single n-dimensional input coordinate in a 1-D array-like
51 object. `n` must be greater than 1.
52 end : (n_dimensions, ) array-like
53 Single n-dimensional input coordinate in a 1-D array-like
54 object. `n` must be greater than 1.
55 t: float or (n_points,) array-like
56 A float or array-like of doubles representing interpolation
57 parameters, with values required in the inclusive interval
58 between 0 and 1. A common approach is to generate the array
59 with ``np.linspace(0, 1, n_pts)`` for linearly spaced points.
60 Ascending, descending, and scrambled orders are permitted.
61 tol: float
62 The absolute tolerance for determining if the start and end
63 coordinates are antipodes.
64
65 Returns
66 -------
67 result : (t.size, D)
68 An array of doubles containing the interpolated
69 spherical path and including start and
70 end when 0 and 1 t are used. The
71 interpolated values should correspond to the
72 same sort order provided in the t array. The result
73 may be 1-dimensional if ``t`` is a float.
74
75 Raises
76 ------
77 ValueError
78 If ``start`` and ``end`` are antipodes, not on the
79 unit n-sphere, or for a variety of degenerate conditions.
80
81 Notes
82 -----
83 The implementation is based on the mathematical formula provided in [1]_,
84 and the first known presentation of this algorithm, derived from study of
85 4-D geometry, is credited to Glenn Davis in a footnote of the original
86 quaternion Slerp publication by Ken Shoemake [2]_.
87
88 .. versionadded:: 1.5.0
89
90 References
91 ----------
92 .. [1] https://en.wikipedia.org/wiki/Slerp#Geometric_Slerp
93 .. [2] Ken Shoemake (1985) Animating rotation with quaternion curves.
94 ACM SIGGRAPH Computer Graphics, 19(3): 245-254.
95
96 See Also
97 --------
98 scipy.spatial.transform.Slerp : 3-D Slerp that works with quaternions
99
100 Examples
101 --------
102 Interpolate four linearly-spaced values on the circumference of
103 a circle spanning 90 degrees:
104
105 >>> from scipy.spatial import geometric_slerp
106 >>> import matplotlib.pyplot as plt
107 >>> fig = plt.figure()
108 >>> ax = fig.add_subplot(111)
109 >>> start = np.array([1, 0])
110 >>> end = np.array([0, 1])
111 >>> t_vals = np.linspace(0, 1, 4)
112 >>> result = geometric_slerp(start,
113 ... end,
114 ... t_vals)
115
116 The interpolated results should be at 30 degree intervals
117 recognizable on the unit circle:
118
119 >>> ax.scatter(result[...,0], result[...,1], c='k')
120 >>> circle = plt.Circle((0, 0), 1, color='grey')
121 >>> ax.add_artist(circle)
122 >>> ax.set_aspect('equal')
123 >>> plt.show()
124
125 Attempting to interpolate between antipodes on a circle is
126 ambiguous because there are two possible paths, and on a
127 sphere there are infinite possible paths on the geodesic surface.
128 Nonetheless, one of the ambiguous paths is returned along
129 with a warning:
130
131 >>> opposite_pole = np.array([-1, 0])
132 >>> with np.testing.suppress_warnings() as sup:
133 ... sup.filter(UserWarning)
134 ... geometric_slerp(start,
135 ... opposite_pole,
136 ... t_vals)
137 array([[ 1.00000000e+00, 0.00000000e+00],
138 [ 5.00000000e-01, 8.66025404e-01],
139 [-5.00000000e-01, 8.66025404e-01],
140 [-1.00000000e+00, 1.22464680e-16]])
141
142 Extend the original example to a sphere and plot interpolation
143 points in 3D:
144
145 >>> from mpl_toolkits.mplot3d import proj3d
146 >>> fig = plt.figure()
147 >>> ax = fig.add_subplot(111, projection='3d')
148
149 Plot the unit sphere for reference (optional):
150
151 >>> u = np.linspace(0, 2 * np.pi, 100)
152 >>> v = np.linspace(0, np.pi, 100)
153 >>> x = np.outer(np.cos(u), np.sin(v))
154 >>> y = np.outer(np.sin(u), np.sin(v))
155 >>> z = np.outer(np.ones(np.size(u)), np.cos(v))
156 >>> ax.plot_surface(x, y, z, color='y', alpha=0.1)
157
158 Interpolating over a larger number of points
159 may provide the appearance of a smooth curve on
160 the surface of the sphere, which is also useful
161 for discretized integration calculations on a
162 sphere surface:
163
164 >>> start = np.array([1, 0, 0])
165 >>> end = np.array([0, 0, 1])
166 >>> t_vals = np.linspace(0, 1, 200)
167 >>> result = geometric_slerp(start,
168 ... end,
169 ... t_vals)
170 >>> ax.plot(result[...,0],
171 ... result[...,1],
172 ... result[...,2],
173 ... c='k')
174 >>> plt.show()
175 """
176
177 start = np.asarray(start, dtype=np.float64)
178 end = np.asarray(end, dtype=np.float64)
179
180 if start.ndim != 1 or end.ndim != 1:
181 raise ValueError("Start and end coordinates "
182 "must be one-dimensional")
183
184 if start.size != end.size:
185 raise ValueError("The dimensions of start and "
186 "end must match (have same size)")
187
188 if start.size < 2 or end.size < 2:
189 raise ValueError("The start and end coordinates must "
190 "both be in at least two-dimensional "
191 "space")
192
193 if np.array_equal(start, end):
194 return np.linspace(start, start, np.asarray(t).size)
195
196 # for points that violate equation for n-sphere
197 for coord in [start, end]:
198 if not np.allclose(np.linalg.norm(coord), 1.0,
199 rtol=1e-9,
200 atol=0):
201 raise ValueError("start and end are not"
202 " on a unit n-sphere")
203
204 if not isinstance(tol, float):
205 raise ValueError("tol must be a float")
206 else:
207 tol = np.fabs(tol)
208
209 coord_dist = euclidean(start, end)
210
211 # diameter of 2 within tolerance means antipodes, which is a problem
212 # for all unit n-spheres (even the 0-sphere would have an ambiguous path)
213 if np.allclose(coord_dist, 2.0, rtol=0, atol=tol):
214 warnings.warn("start and end are antipodes"
215 " using the specified tolerance;"
216 " this may cause ambiguous slerp paths")
217
218 t = np.asarray(t, dtype=np.float64)
219
220 if t.size == 0:
221 return np.empty((0, start.size))
222
223 if t.min() < 0 or t.max() > 1:
224 raise ValueError("interpolation parameter must be in [0, 1]")
225
226 if t.ndim == 0:
227 return _geometric_slerp(start,
228 end,
229 np.atleast_1d(t)).ravel()
230 else:
231 return _geometric_slerp(start,
232 end,
233 t)
234
[end of scipy/spatial/_geometric_slerp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scipy/spatial/_geometric_slerp.py b/scipy/spatial/_geometric_slerp.py
--- a/scipy/spatial/_geometric_slerp.py
+++ b/scipy/spatial/_geometric_slerp.py
@@ -52,8 +52,8 @@
end : (n_dimensions, ) array-like
Single n-dimensional input coordinate in a 1-D array-like
object. `n` must be greater than 1.
- t: float or (n_points,) array-like
- A float or array-like of doubles representing interpolation
+ t: float or (n_points,) 1D array-like
+ A float or 1D array-like of doubles representing interpolation
parameters, with values required in the inclusive interval
between 0 and 1. A common approach is to generate the array
with ``np.linspace(0, 1, n_pts)`` for linearly spaced points.
@@ -176,6 +176,11 @@
start = np.asarray(start, dtype=np.float64)
end = np.asarray(end, dtype=np.float64)
+ t = np.asarray(t)
+
+ if t.ndim > 1:
+ raise ValueError("The interpolation parameter "
+ "value must be one dimensional.")
if start.ndim != 1 or end.ndim != 1:
raise ValueError("Start and end coordinates "
@@ -191,7 +196,7 @@
"space")
if np.array_equal(start, end):
- return np.linspace(start, start, np.asarray(t).size)
+ return np.linspace(start, start, t.size)
# for points that violate equation for n-sphere
for coord in [start, end]:
|
{"golden_diff": "diff --git a/scipy/spatial/_geometric_slerp.py b/scipy/spatial/_geometric_slerp.py\n--- a/scipy/spatial/_geometric_slerp.py\n+++ b/scipy/spatial/_geometric_slerp.py\n@@ -52,8 +52,8 @@\n end : (n_dimensions, ) array-like\n Single n-dimensional input coordinate in a 1-D array-like\n object. `n` must be greater than 1.\n- t: float or (n_points,) array-like\n- A float or array-like of doubles representing interpolation\n+ t: float or (n_points,) 1D array-like\n+ A float or 1D array-like of doubles representing interpolation\n parameters, with values required in the inclusive interval\n between 0 and 1. A common approach is to generate the array\n with ``np.linspace(0, 1, n_pts)`` for linearly spaced points.\n@@ -176,6 +176,11 @@\n \n start = np.asarray(start, dtype=np.float64)\n end = np.asarray(end, dtype=np.float64)\n+ t = np.asarray(t)\n+\n+ if t.ndim > 1:\n+ raise ValueError(\"The interpolation parameter \"\n+ \"value must be one dimensional.\")\n \n if start.ndim != 1 or end.ndim != 1:\n raise ValueError(\"Start and end coordinates \"\n@@ -191,7 +196,7 @@\n \"space\")\n \n if np.array_equal(start, end):\n- return np.linspace(start, start, np.asarray(t).size)\n+ return np.linspace(start, start, t.size)\n \n # for points that violate equation for n-sphere\n for coord in [start, end]:\n", "issue": "BUG: n-d interpolation parameter provided to geometric_slerp\nThis was first reported by @BvB93 during addition of type hints to the related code.\r\n\r\nIn short, [`geometric_slerp()`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.geometric_slerp.html) was intended to be a bit like `np.linspace` on the surface of an n-dimensional circle/sphere, interpolating between start and end points on that surface. However, it somehow slipped through that we allow arbitrary dimensions for the interpolation points/array, and this leads to weird/inconsistent/incorrect results.\r\n\r\nExamples are below, where the \"intended\" 1-dimensional case produces the correct output--two points with 1 in the middle of the unit circle between the start/end. Notice the confusing discrepancy between the degenerate path output shapes vs. non-degenerate. While the non-degenerate outputs respect the input shape for `ndim > 1`, their numerical results are incorrect--the `x` coordinate isn't reflecting the movement along the unit circle to the halfway point between `[0, 1]` and `[1, 0]`.\r\n\r\nWhat I'd like to do is raise an exception if `np.asarray(t).ndim > 1` and just move on, but as Bas points out we do have *de facto* support (no error raised) for arbitrary dimensions whether I like it or not, and I don't want my own view on it to exempt it from backwards compatibility concerns. That said, I wonder if we can basically say that this is a clear bug so it trumps backward compatibility?\r\n\r\n```python\r\nimport numpy as np\r\nimport scipy\r\nfrom scipy.spatial import geometric_slerp\r\n\r\nprint(\"scipy.__version__:\", scipy.__version__)\r\n\r\narr1 = np.array([0, 1])\r\narr2 = np.array([1, 0])\r\n\r\nfor t in [[0, 0.5],\r\n [[0, 0.5]],\r\n [[[[[[[[[0, 0.5]]]]]]]]]]:\r\n print(\"t dims:\", np.asarray(t).ndim)\r\n path = geometric_slerp(start=arr1,\r\n end=arr2,\r\n t=t)\r\n print(\"path:\\n\", path)\r\n degenerate_path = geometric_slerp(start=arr1,\r\n end=arr1,\r\n t=t)\r\n print(\"degenerate_path:\\n\", degenerate_path)\r\n```\r\n\r\n```\r\nscipy.__version__: 1.8.0.dev0+1472.130a1e6\r\nt dims: 1\r\npath:\r\n [[0. 1. ]\r\n [0.70710678 0.70710678]]\r\ndegenerate_path:\r\n [[0. 
1.]\r\n [0. 1.]]\r\nt dims: 2\r\npath:\r\n [[[0. 0.70710678]]]\r\ndegenerate_path:\r\n [[0. 1.]\r\n [0. 1.]]\r\nt dims: 9\r\npath:\r\n [[[[[[[[[[0. 0.70710678]]]]]]]]]]\r\ndegenerate_path:\r\n [[0. 1.]\r\n [0. 1.]]\r\n\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\n__all__ = ['geometric_slerp']\n\nimport warnings\nfrom typing import TYPE_CHECKING\n\nimport numpy as np\nfrom scipy.spatial.distance import euclidean\n\nif TYPE_CHECKING:\n import numpy.typing as npt\n\n\ndef _geometric_slerp(start, end, t):\n # create an orthogonal basis using QR decomposition\n basis = np.vstack([start, end])\n Q, R = np.linalg.qr(basis.T)\n signs = 2 * (np.diag(R) >= 0) - 1\n Q = Q.T * signs.T[:, np.newaxis]\n R = R.T * signs.T[:, np.newaxis]\n\n # calculate the angle between `start` and `end`\n c = np.dot(start, end)\n s = np.linalg.det(R)\n omega = np.arctan2(s, c)\n\n # interpolate\n start, end = Q\n s = np.sin(t * omega)\n c = np.cos(t * omega)\n return start * c[:, np.newaxis] + end * s[:, np.newaxis]\n\n\ndef geometric_slerp(\n start: npt.ArrayLike,\n end: npt.ArrayLike,\n t: npt.ArrayLike,\n tol: float = 1e-7,\n) -> np.ndarray:\n \"\"\"\n Geometric spherical linear interpolation.\n\n The interpolation occurs along a unit-radius\n great circle arc in arbitrary dimensional space.\n\n Parameters\n ----------\n start : (n_dimensions, ) array-like\n Single n-dimensional input coordinate in a 1-D array-like\n object. `n` must be greater than 1.\n end : (n_dimensions, ) array-like\n Single n-dimensional input coordinate in a 1-D array-like\n object. `n` must be greater than 1.\n t: float or (n_points,) array-like\n A float or array-like of doubles representing interpolation\n parameters, with values required in the inclusive interval\n between 0 and 1. A common approach is to generate the array\n with ``np.linspace(0, 1, n_pts)`` for linearly spaced points.\n Ascending, descending, and scrambled orders are permitted.\n tol: float\n The absolute tolerance for determining if the start and end\n coordinates are antipodes.\n\n Returns\n -------\n result : (t.size, D)\n An array of doubles containing the interpolated\n spherical path and including start and\n end when 0 and 1 t are used. The\n interpolated values should correspond to the\n same sort order provided in the t array. The result\n may be 1-dimensional if ``t`` is a float.\n\n Raises\n ------\n ValueError\n If ``start`` and ``end`` are antipodes, not on the\n unit n-sphere, or for a variety of degenerate conditions.\n\n Notes\n -----\n The implementation is based on the mathematical formula provided in [1]_,\n and the first known presentation of this algorithm, derived from study of\n 4-D geometry, is credited to Glenn Davis in a footnote of the original\n quaternion Slerp publication by Ken Shoemake [2]_.\n\n .. versionadded:: 1.5.0\n\n References\n ----------\n .. [1] https://en.wikipedia.org/wiki/Slerp#Geometric_Slerp\n .. 
[2] Ken Shoemake (1985) Animating rotation with quaternion curves.\n ACM SIGGRAPH Computer Graphics, 19(3): 245-254.\n\n See Also\n --------\n scipy.spatial.transform.Slerp : 3-D Slerp that works with quaternions\n\n Examples\n --------\n Interpolate four linearly-spaced values on the circumference of\n a circle spanning 90 degrees:\n\n >>> from scipy.spatial import geometric_slerp\n >>> import matplotlib.pyplot as plt\n >>> fig = plt.figure()\n >>> ax = fig.add_subplot(111)\n >>> start = np.array([1, 0])\n >>> end = np.array([0, 1])\n >>> t_vals = np.linspace(0, 1, 4)\n >>> result = geometric_slerp(start,\n ... end,\n ... t_vals)\n\n The interpolated results should be at 30 degree intervals\n recognizable on the unit circle:\n\n >>> ax.scatter(result[...,0], result[...,1], c='k')\n >>> circle = plt.Circle((0, 0), 1, color='grey')\n >>> ax.add_artist(circle)\n >>> ax.set_aspect('equal')\n >>> plt.show()\n\n Attempting to interpolate between antipodes on a circle is\n ambiguous because there are two possible paths, and on a\n sphere there are infinite possible paths on the geodesic surface.\n Nonetheless, one of the ambiguous paths is returned along\n with a warning:\n\n >>> opposite_pole = np.array([-1, 0])\n >>> with np.testing.suppress_warnings() as sup:\n ... sup.filter(UserWarning)\n ... geometric_slerp(start,\n ... opposite_pole,\n ... t_vals)\n array([[ 1.00000000e+00, 0.00000000e+00],\n [ 5.00000000e-01, 8.66025404e-01],\n [-5.00000000e-01, 8.66025404e-01],\n [-1.00000000e+00, 1.22464680e-16]])\n\n Extend the original example to a sphere and plot interpolation\n points in 3D:\n\n >>> from mpl_toolkits.mplot3d import proj3d\n >>> fig = plt.figure()\n >>> ax = fig.add_subplot(111, projection='3d')\n\n Plot the unit sphere for reference (optional):\n\n >>> u = np.linspace(0, 2 * np.pi, 100)\n >>> v = np.linspace(0, np.pi, 100)\n >>> x = np.outer(np.cos(u), np.sin(v))\n >>> y = np.outer(np.sin(u), np.sin(v))\n >>> z = np.outer(np.ones(np.size(u)), np.cos(v))\n >>> ax.plot_surface(x, y, z, color='y', alpha=0.1)\n\n Interpolating over a larger number of points\n may provide the appearance of a smooth curve on\n the surface of the sphere, which is also useful\n for discretized integration calculations on a\n sphere surface:\n\n >>> start = np.array([1, 0, 0])\n >>> end = np.array([0, 0, 1])\n >>> t_vals = np.linspace(0, 1, 200)\n >>> result = geometric_slerp(start,\n ... end,\n ... t_vals)\n >>> ax.plot(result[...,0],\n ... result[...,1],\n ... result[...,2],\n ... 
c='k')\n >>> plt.show()\n \"\"\"\n\n start = np.asarray(start, dtype=np.float64)\n end = np.asarray(end, dtype=np.float64)\n\n if start.ndim != 1 or end.ndim != 1:\n raise ValueError(\"Start and end coordinates \"\n \"must be one-dimensional\")\n\n if start.size != end.size:\n raise ValueError(\"The dimensions of start and \"\n \"end must match (have same size)\")\n\n if start.size < 2 or end.size < 2:\n raise ValueError(\"The start and end coordinates must \"\n \"both be in at least two-dimensional \"\n \"space\")\n\n if np.array_equal(start, end):\n return np.linspace(start, start, np.asarray(t).size)\n\n # for points that violate equation for n-sphere\n for coord in [start, end]:\n if not np.allclose(np.linalg.norm(coord), 1.0,\n rtol=1e-9,\n atol=0):\n raise ValueError(\"start and end are not\"\n \" on a unit n-sphere\")\n\n if not isinstance(tol, float):\n raise ValueError(\"tol must be a float\")\n else:\n tol = np.fabs(tol)\n\n coord_dist = euclidean(start, end)\n\n # diameter of 2 within tolerance means antipodes, which is a problem\n # for all unit n-spheres (even the 0-sphere would have an ambiguous path)\n if np.allclose(coord_dist, 2.0, rtol=0, atol=tol):\n warnings.warn(\"start and end are antipodes\"\n \" using the specified tolerance;\"\n \" this may cause ambiguous slerp paths\")\n\n t = np.asarray(t, dtype=np.float64)\n\n if t.size == 0:\n return np.empty((0, start.size))\n\n if t.min() < 0 or t.max() > 1:\n raise ValueError(\"interpolation parameter must be in [0, 1]\")\n\n if t.ndim == 0:\n return _geometric_slerp(start,\n end,\n np.atleast_1d(t)).ravel()\n else:\n return _geometric_slerp(start,\n end,\n t)\n", "path": "scipy/spatial/_geometric_slerp.py"}]}
| 3,910 | 390 |
gh_patches_debug_24002
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-831
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_K8S_31 failure with RuntimeDefault configured for workloads
**Describe the bug**
Please see #710
**To Reproduce**
Please see #710
**Expected behavior**
Please see #710
**Additional context**
The bug reported in #710 needs to be fixed for workloads in https://github.com/bridgecrewio/checkov/blob/master/checkov/kubernetes/checks/Seccomp.py#L44:L48 as well.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
....
spec:
...
...
template:
...
...
spec:
....
....
securityContext:
allowPrivilegeEscalation: false
seccompProfile:
type: RuntimeDefault
```
**Related PRs**
#711
</issue>
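A minimal sketch of the missing lookup, in the same dpath style as the check quoted below; the path is taken from the Deployment manifest above, and other workload kinds would need the analogous template path.

```python
import dpath

WORKLOAD_TEMPLATE_PATH = "spec/template/spec/securityContext/seccompProfile/type"


def template_seccomp_type(conf):
    # Sketch: read seccompProfile.type from a workload's pod template, if set.
    if dpath.search(conf, WORKLOAD_TEMPLATE_PATH):
        return dpath.get(conf, WORKLOAD_TEMPLATE_PATH)
    return None


# For the Deployment manifest above this returns "RuntimeDefault",
# which CKV_K8S_31 should treat as a pass.
```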
<code>
[start of checkov/kubernetes/checks/Seccomp.py]
1 import dpath
2
3 from checkov.common.models.enums import CheckCategories, CheckResult
4 from checkov.kubernetes.base_spec_check import BaseK8Check
5
6
7 class Seccomp(BaseK8Check):
8
9 def __init__(self):
10 # CIS-1.5 5.7.2
11 name = "Ensure that the seccomp profile is set to docker/default or runtime/default"
12 id = "CKV_K8S_31"
13 # Location: Pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
14 # Location: CronJob.spec.jobTemplate.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
15 # Location: *.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod
16 # Location: *.spec.securityContext.seccompProfile.type
17 supported_kind = ['Pod', 'Deployment', 'DaemonSet', 'StatefulSet', 'ReplicaSet', 'ReplicationController', 'Job', 'CronJob']
18 categories = [CheckCategories.KUBERNETES]
19 super().__init__(name=name, id=id, categories=categories, supported_entities=supported_kind)
20
21 def get_resource_id(self, conf):
22 if "namespace" in conf["metadata"]:
23 return "{}.{}.{}".format(conf["kind"], conf["metadata"]["name"], conf["metadata"]["namespace"])
24 else:
25 return "{}.{}.default".format(conf["kind"], conf["metadata"]["name"])
26
27 def scan_spec_conf(self, conf):
28 metadata = {}
29
30 if conf['kind'] == 'Pod':
31 security_profile = dpath.search(conf, 'spec/securityContext/seccompProfile/type')
32 if security_profile:
33 security_profile = dpath.get(conf, 'spec/securityContext/seccompProfile/type')
34 return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
35 if "metadata" in conf:
36 metadata = conf["metadata"]
37 elif conf['kind'] == 'CronJob':
38 if "spec" in conf:
39 if "jobTemplate" in conf["spec"]:
40 if "spec" in conf["spec"]["jobTemplate"]:
41 if "template" in conf["spec"]["jobTemplate"]["spec"]:
42 if "metadata" in conf["spec"]["jobTemplate"]["spec"]["template"]:
43 metadata = conf["spec"]["jobTemplate"]["spec"]["template"]["metadata"]
44 else:
45 if "spec" in conf:
46 if "template" in conf["spec"]:
47 if "metadata" in conf["spec"]["template"]:
48 metadata = conf["spec"]["template"]["metadata"]
49
50 if metadata:
51 if "annotations" in metadata and isinstance(metadata['annotations'], dict):
52 if "seccomp.security.alpha.kubernetes.io/pod" in metadata["annotations"]:
53 if ("docker/default" in metadata["annotations"]["seccomp.security.alpha.kubernetes.io/pod"] or
54 "runtime/default" in metadata["annotations"]["seccomp.security.alpha.kubernetes.io/pod"]):
55 return CheckResult.PASSED
56 return CheckResult.FAILED
57
58
59 check = Seccomp()
60
[end of checkov/kubernetes/checks/Seccomp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/kubernetes/checks/Seccomp.py b/checkov/kubernetes/checks/Seccomp.py
--- a/checkov/kubernetes/checks/Seccomp.py
+++ b/checkov/kubernetes/checks/Seccomp.py
@@ -34,6 +34,20 @@
return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
if "metadata" in conf:
metadata = conf["metadata"]
+ if conf['kind'] == 'Deployment':
+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ if security_profile:
+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
+ if "metadata" in conf:
+ metadata = conf["metadata"]
+ if conf['kind'] == 'StatefulSet':
+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ if security_profile:
+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')
+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED
+ if "metadata" in conf:
+ metadata = conf["metadata"]
elif conf['kind'] == 'CronJob':
if "spec" in conf:
if "jobTemplate" in conf["spec"]:
|
{"golden_diff": "diff --git a/checkov/kubernetes/checks/Seccomp.py b/checkov/kubernetes/checks/Seccomp.py\n--- a/checkov/kubernetes/checks/Seccomp.py\n+++ b/checkov/kubernetes/checks/Seccomp.py\n@@ -34,6 +34,20 @@\n return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n if \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n+ if conf['kind'] == 'Deployment':\n+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ if security_profile:\n+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n+ if \"metadata\" in conf:\n+ metadata = conf[\"metadata\"]\n+ if conf['kind'] == 'StatefulSet':\n+ security_profile = dpath.search(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ if security_profile:\n+ security_profile = dpath.get(conf, 'spec/template/spec/securityContext/seccompProfile/type')\n+ return CheckResult.PASSED if security_profile == 'RuntimeDefault' else CheckResult.FAILED\n+ if \"metadata\" in conf:\n+ metadata = conf[\"metadata\"] \n elif conf['kind'] == 'CronJob':\n if \"spec\" in conf:\n if \"jobTemplate\" in conf[\"spec\"]:\n", "issue": "CKV_K8S_31 failure with RuntimeDefault configured for workloads\n**Describe the bug**\r\nPlease see #710\r\n\r\n**To Reproduce**\r\nPlease see #710\r\n\r\n**Expected behavior**\r\nPlease see #710\r\n\r\n**Additional context**\r\nThe bug reported in #710 needs to be fixed for workloads in https://github.com/bridgecrewio/checkov/blob/master/checkov/kubernetes/checks/Seccomp.py#L44:L48 as well.\r\n\r\n```yaml\r\napiVersion: apps/v1\r\nkind: Deployment\r\nmetadata:\r\n....\r\nspec:\r\n...\r\n...\r\n template:\r\n ...\r\n ...\r\n spec:\r\n ....\r\n ....\r\n securityContext:\r\n allowPrivilegeEscalation: false\r\n seccompProfile:\r\n type: RuntimeDefault\r\n```\r\n\r\n**Related PRs**\r\n#711 \r\n\n", "before_files": [{"content": "import dpath\n\nfrom checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.kubernetes.base_spec_check import BaseK8Check\n\n\nclass Seccomp(BaseK8Check):\n\n def __init__(self):\n # CIS-1.5 5.7.2\n name = \"Ensure that the seccomp profile is set to docker/default or runtime/default\"\n id = \"CKV_K8S_31\"\n # Location: Pod.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: CronJob.spec.jobTemplate.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.template.metadata.annotations.seccomp.security.alpha.kubernetes.io/pod\n # Location: *.spec.securityContext.seccompProfile.type\n supported_kind = ['Pod', 'Deployment', 'DaemonSet', 'StatefulSet', 'ReplicaSet', 'ReplicationController', 'Job', 'CronJob']\n categories = [CheckCategories.KUBERNETES]\n super().__init__(name=name, id=id, categories=categories, supported_entities=supported_kind)\n\n def get_resource_id(self, conf):\n if \"namespace\" in conf[\"metadata\"]:\n return \"{}.{}.{}\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"], conf[\"metadata\"][\"namespace\"])\n else:\n return \"{}.{}.default\".format(conf[\"kind\"], conf[\"metadata\"][\"name\"])\n\n def scan_spec_conf(self, conf):\n metadata = {}\n\n if conf['kind'] == 'Pod':\n security_profile = dpath.search(conf, 'spec/securityContext/seccompProfile/type')\n if security_profile:\n security_profile = dpath.get(conf, 'spec/securityContext/seccompProfile/type')\n return CheckResult.PASSED if 
security_profile == 'RuntimeDefault' else CheckResult.FAILED\n if \"metadata\" in conf:\n metadata = conf[\"metadata\"]\n elif conf['kind'] == 'CronJob':\n if \"spec\" in conf:\n if \"jobTemplate\" in conf[\"spec\"]:\n if \"spec\" in conf[\"spec\"][\"jobTemplate\"]:\n if \"template\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"]:\n if \"metadata\" in conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"]:\n metadata = conf[\"spec\"][\"jobTemplate\"][\"spec\"][\"template\"][\"metadata\"]\n else:\n if \"spec\" in conf:\n if \"template\" in conf[\"spec\"]:\n if \"metadata\" in conf[\"spec\"][\"template\"]:\n metadata = conf[\"spec\"][\"template\"][\"metadata\"]\n\n if metadata:\n if \"annotations\" in metadata and isinstance(metadata['annotations'], dict):\n if \"seccomp.security.alpha.kubernetes.io/pod\" in metadata[\"annotations\"]:\n if (\"docker/default\" in metadata[\"annotations\"][\"seccomp.security.alpha.kubernetes.io/pod\"] or\n \"runtime/default\" in metadata[\"annotations\"][\"seccomp.security.alpha.kubernetes.io/pod\"]):\n return CheckResult.PASSED\n return CheckResult.FAILED\n\n\ncheck = Seccomp()\n", "path": "checkov/kubernetes/checks/Seccomp.py"}]}
| 1,496 | 338 |
gh_patches_debug_30738
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-506
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
--associate-public-ip-address option with --security-group-ids
#501 #502
When I run the command with --subnet-id, it works fine, but when I add --security-group-ids, it does not work.
It seems that the same modifications are required.
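
For illustration, a hedged sketch of the kind of change implied (assumed from the pattern already used for `subnet_id` in the customization code below; the helper name is invented and this is not the project's actual patch):

```python
def _move_security_groups_into_eni(params):
    """Sketch only: fold security_group_ids into NetworkInterfaces the same
    way subnet_id is folded in, assuming the params layout used by the
    run-instances customization shown below."""
    if 'network_interfaces' in params:
        ni = params['network_interfaces']
        if 'AssociatePublicIpAddress' in ni[0]:
            if 'subnet_id' in params:
                ni[0]['SubnetId'] = params.pop('subnet_id')
            if 'security_group_ids' in params:
                # Same idea as subnet_id: once a NetworkInterfaces structure
                # is sent, the security group ids have to live inside it.
                ni[0]['Groups'] = params.pop('security_group_ids')
    return params


# Example: the top-level security_group_ids ends up inside the first ENI.
params = {
    'network_interfaces': [{'DeviceIndex': 0, 'AssociatePublicIpAddress': True}],
    'subnet_id': 'subnet-1234',
    'security_group_ids': ['sg-1234'],
}
print(_move_security_groups_into_eni(params))
```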
</issue>
<code>
[start of awscli/customizations/ec2runinstances.py]
1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 """
14 This customization adds two new parameters to the ``ec2 run-instance``
15 command. The first, ``--secondary-private-ip-addresses`` allows a list
16 of IP addresses within the specified subnet to be associated with the
17 new instance. The second, ``--secondary-ip-address-count`` allows you
18 to specify how many additional IP addresses you want but the actual
19 address will be assigned for you.
20
21 This functionality (and much more) is also available using the
22 ``--network-interfaces`` complex argument. This just makes two of
23 the most commonly used features available more easily.
24 """
25 from awscli.arguments import CustomArgument
26
27 # --secondary-private-ip-address
28 SECONDARY_PRIVATE_IP_ADDRESSES_DOCS = (
29 '[EC2-VPC] A secondary private IP address for the network interface '
30 'or instance. You can specify this multiple times to assign multiple '
31 'secondary IP addresses. If you want additional private IP addresses '
32 'but do not need a specific address, use the '
33 '--secondary-private-ip-address-count option.')
34
35 # --secondary-private-ip-address-count
36 SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS = (
37 '[EC2-VPC] The number of secondary IP addresses to assign to '
38 'the network interface or instance.')
39
40 # --associate-public-ip-address
41 ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS = (
42 '[EC2-VPC] If specified a public IP address will be assigned '
43 'to the new instance in a VPC.')
44
45 def _add_params(argument_table, operation, **kwargs):
46 arg = SecondaryPrivateIpAddressesArgument(
47 name='secondary-private-ip-addresses',
48 help_text=SECONDARY_PRIVATE_IP_ADDRESSES_DOCS)
49 argument_table['secondary-private-ip-addresses'] = arg
50 arg = SecondaryPrivateIpAddressCountArgument(
51 name='secondary-private-ip-address-count',
52 help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS)
53 argument_table['secondary-private-ip-address-count'] = arg
54 arg = AssociatePublicIpAddressArgument(
55 name='associate-public-ip-address',
56 help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,
57 action='store_true', group_name='associate_public_ip')
58 argument_table['associate-public-ip-address'] = arg
59 arg = NoAssociatePublicIpAddressArgument(
60 name='no-associate-public-ip-address',
61 help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,
62 action='store_false', group_name='associate_public_ip')
63 argument_table['no-associate-public-ip-address'] = arg
64
65
66 def _check_args(parsed_args, **kwargs):
67 # This function checks the parsed args. If the user specified
68 # the --network-interfaces option with any of the scalar options we
69 # raise an error.
70 arg_dict = vars(parsed_args)
71 if arg_dict['network_interfaces']:
72 for key in ('secondary_private_ip_addresses',
73 'secondary_private_ip_address_count',
74 'associate_public_ip_address'):
75 if arg_dict[key]:
76 msg = ('Mixing the --network-interfaces option '
77 'with the simple, scalar options is '
78 'not supported.')
79 raise ValueError(msg)
80
81
82 def _fix_subnet(operation, endpoint, params, **kwargs):
83 # If the user has supplied a --subnet-id option AND we also
84 # have inserted an AssociatePublicIpAddress into the network_interfaces
85 # structure, we need to move the subnetId value down into the
86 # network_interfaces structure or we will get a client error from EC2.
87 if 'network_interfaces' in params:
88 ni = params['network_interfaces']
89 if 'AssociatePublicIpAddress' in ni[0]:
90 if 'subnet_id' in params:
91 ni[0]['SubnetId'] = params['subnet_id']
92 del params['subnet_id']
93
94 EVENTS = [
95 ('building-argument-table.ec2.run-instances', _add_params),
96 ('operation-args-parsed.ec2.run-instances', _check_args),
97 ('before-parameter-build.ec2.RunInstances', _fix_subnet),
98 ]
99
100
101 def register_runinstances(event_handler):
102 # Register all of the events for customizing BundleInstance
103 for event, handler in EVENTS:
104 event_handler.register(event, handler)
105
106
107 def _build_network_interfaces(params, key, value):
108 # Build up the NetworkInterfaces data structure
109 if 'network_interfaces' not in params:
110 params['network_interfaces'] = [{'DeviceIndex': 0}]
111
112 if key == 'PrivateIpAddresses':
113 if 'PrivateIpAddresses' not in params['network_interfaces'][0]:
114 params['network_interfaces'][0]['PrivateIpAddresses'] = value
115 else:
116 params['network_interfaces'][0][key] = value
117
118
119 class SecondaryPrivateIpAddressesArgument(CustomArgument):
120
121 def add_to_parser(self, parser, cli_name=None):
122 parser.add_argument(self.cli_name, dest=self.py_name,
123 default=self._default, nargs='*')
124
125 def add_to_params(self, parameters, value):
126 if value:
127 value = [{'PrivateIpAddress': v, 'Primary': False} for
128 v in value]
129 _build_network_interfaces(parameters,
130 'PrivateIpAddresses',
131 value)
132
133
134 class SecondaryPrivateIpAddressCountArgument(CustomArgument):
135
136 def add_to_parser(self, parser, cli_name=None):
137 parser.add_argument(self.cli_name, dest=self.py_name,
138 default=self._default, type=int)
139
140 def add_to_params(self, parameters, value):
141 if value:
142 _build_network_interfaces(parameters,
143 'SecondaryPrivateIpAddressCount',
144 value)
145
146
147 class AssociatePublicIpAddressArgument(CustomArgument):
148
149 def add_to_params(self, parameters, value):
150 if value is True:
151 _build_network_interfaces(parameters,
152 'AssociatePublicIpAddress',
153 value)
154
155
156 class NoAssociatePublicIpAddressArgument(CustomArgument):
157
158 def add_to_params(self, parameters, value):
159 if value is False:
160 _build_network_interfaces(parameters,
161 'AssociatePublicIpAddress',
162 value)
163
[end of awscli/customizations/ec2runinstances.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/customizations/ec2runinstances.py b/awscli/customizations/ec2runinstances.py
--- a/awscli/customizations/ec2runinstances.py
+++ b/awscli/customizations/ec2runinstances.py
@@ -79,22 +79,33 @@
raise ValueError(msg)
-def _fix_subnet(operation, endpoint, params, **kwargs):
- # If the user has supplied a --subnet-id option AND we also
- # have inserted an AssociatePublicIpAddress into the network_interfaces
- # structure, we need to move the subnetId value down into the
- # network_interfaces structure or we will get a client error from EC2.
+def _fix_args(operation, endpoint, params, **kwargs):
+ # The RunInstances request provides some parameters
+ # such as --subnet-id and --security-group-id that can be specified
+ # as separate options only if the request DOES NOT include a
+ # NetworkInterfaces structure. In those cases, the values for
+ # these parameters must be specified inside the NetworkInterfaces
+ # structure. This function checks for those parameters
+ # and fixes them if necessary.
+ # NOTE: If the user is a default VPC customer, RunInstances
+ # allows them to specify the security group by name or by id.
+ # However, in this scenario we can only support id because
+ # we can't place a group name in the NetworkInterfaces structure.
if 'network_interfaces' in params:
ni = params['network_interfaces']
if 'AssociatePublicIpAddress' in ni[0]:
if 'subnet_id' in params:
ni[0]['SubnetId'] = params['subnet_id']
del params['subnet_id']
+ if 'security_group_ids' in params:
+ ni[0]['Groups'] = params['security_group_ids']
+ del params['security_group_ids']
+
EVENTS = [
('building-argument-table.ec2.run-instances', _add_params),
('operation-args-parsed.ec2.run-instances', _check_args),
- ('before-parameter-build.ec2.RunInstances', _fix_subnet),
+ ('before-parameter-build.ec2.RunInstances', _fix_args),
]
|
{"golden_diff": "diff --git a/awscli/customizations/ec2runinstances.py b/awscli/customizations/ec2runinstances.py\n--- a/awscli/customizations/ec2runinstances.py\n+++ b/awscli/customizations/ec2runinstances.py\n@@ -79,22 +79,33 @@\n raise ValueError(msg)\n \n \n-def _fix_subnet(operation, endpoint, params, **kwargs):\n- # If the user has supplied a --subnet-id option AND we also\n- # have inserted an AssociatePublicIpAddress into the network_interfaces\n- # structure, we need to move the subnetId value down into the\n- # network_interfaces structure or we will get a client error from EC2.\n+def _fix_args(operation, endpoint, params, **kwargs):\n+ # The RunInstances request provides some parameters\n+ # such as --subnet-id and --security-group-id that can be specified\n+ # as separate options only if the request DOES NOT include a\n+ # NetworkInterfaces structure. In those cases, the values for\n+ # these parameters must be specified inside the NetworkInterfaces\n+ # structure. This function checks for those parameters\n+ # and fixes them if necessary.\n+ # NOTE: If the user is a default VPC customer, RunInstances\n+ # allows them to specify the security group by name or by id.\n+ # However, in this scenario we can only support id because\n+ # we can't place a group name in the NetworkInterfaces structure.\n if 'network_interfaces' in params:\n ni = params['network_interfaces']\n if 'AssociatePublicIpAddress' in ni[0]:\n if 'subnet_id' in params:\n ni[0]['SubnetId'] = params['subnet_id']\n del params['subnet_id']\n+ if 'security_group_ids' in params:\n+ ni[0]['Groups'] = params['security_group_ids']\n+ del params['security_group_ids']\n+\n \n EVENTS = [\n ('building-argument-table.ec2.run-instances', _add_params),\n ('operation-args-parsed.ec2.run-instances', _check_args),\n- ('before-parameter-build.ec2.RunInstances', _fix_subnet),\n+ ('before-parameter-build.ec2.RunInstances', _fix_args),\n ]\n", "issue": "--associate-public-ip-address option with --security-group-ids\n#501 #502\n\nwhen I ran command with --subnet-id, it works fine but when I add --security-group-ids, it does not work.\nIt seems that same modifications are required.\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"\nThis customization adds two new parameters to the ``ec2 run-instance``\ncommand. The first, ``--secondary-private-ip-addresses`` allows a list\nof IP addresses within the specified subnet to be associated with the\nnew instance. The second, ``--secondary-ip-address-count`` allows you\nto specify how many additional IP addresses you want but the actual\naddress will be assigned for you.\n\nThis functionality (and much more) is also available using the\n``--network-interfaces`` complex argument. 
This just makes two of\nthe most commonly used features available more easily.\n\"\"\"\nfrom awscli.arguments import CustomArgument\n\n# --secondary-private-ip-address\nSECONDARY_PRIVATE_IP_ADDRESSES_DOCS = (\n '[EC2-VPC] A secondary private IP address for the network interface '\n 'or instance. You can specify this multiple times to assign multiple '\n 'secondary IP addresses. If you want additional private IP addresses '\n 'but do not need a specific address, use the '\n '--secondary-private-ip-address-count option.')\n\n# --secondary-private-ip-address-count\nSECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS = (\n '[EC2-VPC] The number of secondary IP addresses to assign to '\n 'the network interface or instance.')\n\n# --associate-public-ip-address\nASSOCIATE_PUBLIC_IP_ADDRESS_DOCS = (\n '[EC2-VPC] If specified a public IP address will be assigned '\n 'to the new instance in a VPC.')\n\ndef _add_params(argument_table, operation, **kwargs):\n arg = SecondaryPrivateIpAddressesArgument(\n name='secondary-private-ip-addresses',\n help_text=SECONDARY_PRIVATE_IP_ADDRESSES_DOCS)\n argument_table['secondary-private-ip-addresses'] = arg\n arg = SecondaryPrivateIpAddressCountArgument(\n name='secondary-private-ip-address-count',\n help_text=SECONDARY_PRIVATE_IP_ADDRESS_COUNT_DOCS)\n argument_table['secondary-private-ip-address-count'] = arg\n arg = AssociatePublicIpAddressArgument(\n name='associate-public-ip-address',\n help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,\n action='store_true', group_name='associate_public_ip')\n argument_table['associate-public-ip-address'] = arg\n arg = NoAssociatePublicIpAddressArgument(\n name='no-associate-public-ip-address',\n help_text=ASSOCIATE_PUBLIC_IP_ADDRESS_DOCS,\n action='store_false', group_name='associate_public_ip')\n argument_table['no-associate-public-ip-address'] = arg\n\n\ndef _check_args(parsed_args, **kwargs):\n # This function checks the parsed args. 
If the user specified\n # the --network-interfaces option with any of the scalar options we\n # raise an error.\n arg_dict = vars(parsed_args)\n if arg_dict['network_interfaces']:\n for key in ('secondary_private_ip_addresses',\n 'secondary_private_ip_address_count',\n 'associate_public_ip_address'):\n if arg_dict[key]:\n msg = ('Mixing the --network-interfaces option '\n 'with the simple, scalar options is '\n 'not supported.')\n raise ValueError(msg)\n\n\ndef _fix_subnet(operation, endpoint, params, **kwargs):\n # If the user has supplied a --subnet-id option AND we also\n # have inserted an AssociatePublicIpAddress into the network_interfaces\n # structure, we need to move the subnetId value down into the\n # network_interfaces structure or we will get a client error from EC2.\n if 'network_interfaces' in params:\n ni = params['network_interfaces']\n if 'AssociatePublicIpAddress' in ni[0]:\n if 'subnet_id' in params:\n ni[0]['SubnetId'] = params['subnet_id']\n del params['subnet_id']\n\nEVENTS = [\n ('building-argument-table.ec2.run-instances', _add_params),\n ('operation-args-parsed.ec2.run-instances', _check_args),\n ('before-parameter-build.ec2.RunInstances', _fix_subnet),\n ]\n\n\ndef register_runinstances(event_handler):\n # Register all of the events for customizing BundleInstance\n for event, handler in EVENTS:\n event_handler.register(event, handler)\n\n\ndef _build_network_interfaces(params, key, value):\n # Build up the NetworkInterfaces data structure\n if 'network_interfaces' not in params:\n params['network_interfaces'] = [{'DeviceIndex': 0}]\n\n if key == 'PrivateIpAddresses':\n if 'PrivateIpAddresses' not in params['network_interfaces'][0]:\n params['network_interfaces'][0]['PrivateIpAddresses'] = value\n else:\n params['network_interfaces'][0][key] = value\n\n\nclass SecondaryPrivateIpAddressesArgument(CustomArgument):\n\n def add_to_parser(self, parser, cli_name=None):\n parser.add_argument(self.cli_name, dest=self.py_name,\n default=self._default, nargs='*')\n\n def add_to_params(self, parameters, value):\n if value:\n value = [{'PrivateIpAddress': v, 'Primary': False} for\n v in value]\n _build_network_interfaces(parameters,\n 'PrivateIpAddresses',\n value)\n\n\nclass SecondaryPrivateIpAddressCountArgument(CustomArgument):\n\n def add_to_parser(self, parser, cli_name=None):\n parser.add_argument(self.cli_name, dest=self.py_name,\n default=self._default, type=int)\n\n def add_to_params(self, parameters, value):\n if value:\n _build_network_interfaces(parameters,\n 'SecondaryPrivateIpAddressCount',\n value)\n\n\nclass AssociatePublicIpAddressArgument(CustomArgument):\n\n def add_to_params(self, parameters, value):\n if value is True:\n _build_network_interfaces(parameters,\n 'AssociatePublicIpAddress',\n value)\n\n\nclass NoAssociatePublicIpAddressArgument(CustomArgument):\n\n def add_to_params(self, parameters, value):\n if value is False:\n _build_network_interfaces(parameters,\n 'AssociatePublicIpAddress',\n value)\n", "path": "awscli/customizations/ec2runinstances.py"}]}
| 2,375 | 485 |
gh_patches_debug_17707
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-974
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RecursionError with custom node
In a custom Node, I try to initialize with `CtrlNode.__init__(self, name, terminals=terminals)`, but I get a RecursionError:
``` python
Traceback (most recent call last):
File "/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Flowchart.py", line 871, in nodeMenuTriggered
self.chart.createNode(nodeType, pos=pos)
File "/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Flowchart.py", line 174, in createNode
node = self.library.getNodeType(nodeType)(name)
File "/data/libs.git/JML/python/TBA/nodes.py", line 37, in __init__
CtrlNode.__init__(self, name, terminals=terminals)
File "/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/library/common.py", line 89, in __init__
if hasattr(self, 'uiTemplate'):
File "/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Node.py", line 193, in __getattr__
if attr not in self.terminals:
File "/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Node.py", line 193, in __getattr__
if attr not in self.terminals:
(...)
File "/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Node.py", line 193, in __getattr__
if attr not in self.terminals:
RecursionError: maximum recursion depth exceeded while calling a Python object
```
The problem is that `__getattr__` checks for `self.terminals`, which is not yet defined, so `__getattr__` is called again, and so on.
I think putting the `if ui is None:` block after `Node.__init__` in `CtrlNode.__init__` would do the trick.
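
A minimal, runnable sketch of why that reordering helps, using stripped-down stand-ins for the two classes (simplified mocks, not pyqtgraph's real implementations):

``` python
class Node:
    """Stand-in for pyqtgraph.flowchart.Node (sketch only)."""

    def __init__(self, name, terminals=None):
        self.terminals = dict(terminals or {})

    def __getattr__(self, attr):
        # Reached only when normal attribute lookup fails.  If
        # self.terminals has not been assigned yet, the lookup below
        # fails too and re-enters __getattr__, hence the RecursionError.
        if attr not in self.terminals:
            raise AttributeError(attr)
        return self.terminals[attr]


class CtrlNode(Node):
    """Sketch of the proposed ordering: Node.__init__ runs first."""

    def __init__(self, name, ui=None, terminals=None):
        if terminals is None:
            terminals = {'In': {'io': 'in'}, 'Out': {'io': 'out'}}
        Node.__init__(self, name, terminals)  # defines self.terminals
        if ui is None:
            # Safe now: hasattr() may still call __getattr__, but
            # self.terminals exists, so the check terminates normally.
            ui = self.uiTemplate if hasattr(self, 'uiTemplate') else []
        self.ui = ui


CtrlNode('demo')  # completes without the recursion
```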
</issue>
<code>
[start of pyqtgraph/flowchart/library/common.py]
1 # -*- coding: utf-8 -*-
2 from ...Qt import QtCore, QtGui
3 from ...widgets.SpinBox import SpinBox
4 #from ...SignalProxy import SignalProxy
5 from ...WidgetGroup import WidgetGroup
6 #from ColorMapper import ColorMapper
7 from ..Node import Node
8 import numpy as np
9 from ...widgets.ColorButton import ColorButton
10 try:
11 import metaarray
12 HAVE_METAARRAY = True
13 except:
14 HAVE_METAARRAY = False
15
16
17 def generateUi(opts):
18 """Convenience function for generating common UI types"""
19 widget = QtGui.QWidget()
20 l = QtGui.QFormLayout()
21 l.setSpacing(0)
22 widget.setLayout(l)
23 ctrls = {}
24 row = 0
25 for opt in opts:
26 if len(opt) == 2:
27 k, t = opt
28 o = {}
29 elif len(opt) == 3:
30 k, t, o = opt
31 else:
32 raise Exception("Widget specification must be (name, type) or (name, type, {opts})")
33
34 ## clean out these options so they don't get sent to SpinBox
35 hidden = o.pop('hidden', False)
36 tip = o.pop('tip', None)
37
38 if t == 'intSpin':
39 w = QtGui.QSpinBox()
40 if 'max' in o:
41 w.setMaximum(o['max'])
42 if 'min' in o:
43 w.setMinimum(o['min'])
44 if 'value' in o:
45 w.setValue(o['value'])
46 elif t == 'doubleSpin':
47 w = QtGui.QDoubleSpinBox()
48 if 'max' in o:
49 w.setMaximum(o['max'])
50 if 'min' in o:
51 w.setMinimum(o['min'])
52 if 'value' in o:
53 w.setValue(o['value'])
54 elif t == 'spin':
55 w = SpinBox()
56 w.setOpts(**o)
57 elif t == 'check':
58 w = QtGui.QCheckBox()
59 if 'checked' in o:
60 w.setChecked(o['checked'])
61 elif t == 'combo':
62 w = QtGui.QComboBox()
63 for i in o['values']:
64 w.addItem(i)
65 #elif t == 'colormap':
66 #w = ColorMapper()
67 elif t == 'color':
68 w = ColorButton()
69 else:
70 raise Exception("Unknown widget type '%s'" % str(t))
71
72 if tip is not None:
73 w.setToolTip(tip)
74 w.setObjectName(k)
75 l.addRow(k, w)
76 if hidden:
77 w.hide()
78 label = l.labelForField(w)
79 label.hide()
80
81 ctrls[k] = w
82 w.rowNum = row
83 row += 1
84 group = WidgetGroup(widget)
85 return widget, group, ctrls
86
87
88 class CtrlNode(Node):
89 """Abstract class for nodes with auto-generated control UI"""
90
91 sigStateChanged = QtCore.Signal(object)
92
93 def __init__(self, name, ui=None, terminals=None):
94 if ui is None:
95 if hasattr(self, 'uiTemplate'):
96 ui = self.uiTemplate
97 else:
98 ui = []
99 if terminals is None:
100 terminals = {'In': {'io': 'in'}, 'Out': {'io': 'out', 'bypass': 'In'}}
101 Node.__init__(self, name=name, terminals=terminals)
102
103 self.ui, self.stateGroup, self.ctrls = generateUi(ui)
104 self.stateGroup.sigChanged.connect(self.changed)
105
106 def ctrlWidget(self):
107 return self.ui
108
109 def changed(self):
110 self.update()
111 self.sigStateChanged.emit(self)
112
113 def process(self, In, display=True):
114 out = self.processData(In)
115 return {'Out': out}
116
117 def saveState(self):
118 state = Node.saveState(self)
119 state['ctrl'] = self.stateGroup.state()
120 return state
121
122 def restoreState(self, state):
123 Node.restoreState(self, state)
124 if self.stateGroup is not None:
125 self.stateGroup.setState(state.get('ctrl', {}))
126
127 def hideRow(self, name):
128 w = self.ctrls[name]
129 l = self.ui.layout().labelForField(w)
130 w.hide()
131 l.hide()
132
133 def showRow(self, name):
134 w = self.ctrls[name]
135 l = self.ui.layout().labelForField(w)
136 w.show()
137 l.show()
138
139
140 class PlottingCtrlNode(CtrlNode):
141 """Abstract class for CtrlNodes that can connect to plots."""
142
143 def __init__(self, name, ui=None, terminals=None):
144 #print "PlottingCtrlNode.__init__ called."
145 CtrlNode.__init__(self, name, ui=ui, terminals=terminals)
146 self.plotTerminal = self.addOutput('plot', optional=True)
147
148 def connected(self, term, remote):
149 CtrlNode.connected(self, term, remote)
150 if term is not self.plotTerminal:
151 return
152 node = remote.node()
153 node.sigPlotChanged.connect(self.connectToPlot)
154 self.connectToPlot(node)
155
156 def disconnected(self, term, remote):
157 CtrlNode.disconnected(self, term, remote)
158 if term is not self.plotTerminal:
159 return
160 remote.node().sigPlotChanged.disconnect(self.connectToPlot)
161 self.disconnectFromPlot(remote.node().getPlot())
162
163 def connectToPlot(self, node):
164 """Define what happens when the node is connected to a plot"""
165 raise Exception("Must be re-implemented in subclass")
166
167 def disconnectFromPlot(self, plot):
168 """Define what happens when the node is disconnected from a plot"""
169 raise Exception("Must be re-implemented in subclass")
170
171 def process(self, In, display=True):
172 out = CtrlNode.process(self, In, display)
173 out['plot'] = None
174 return out
175
176
177 def metaArrayWrapper(fn):
178 def newFn(self, data, *args, **kargs):
179 if HAVE_METAARRAY and (hasattr(data, 'implements') and data.implements('MetaArray')):
180 d1 = fn(self, data.view(np.ndarray), *args, **kargs)
181 info = data.infoCopy()
182 if d1.shape != data.shape:
183 for i in range(data.ndim):
184 if 'values' in info[i]:
185 info[i]['values'] = info[i]['values'][:d1.shape[i]]
186 return metaarray.MetaArray(d1, info=info)
187 else:
188 return fn(self, data, *args, **kargs)
189 return newFn
190
191
[end of pyqtgraph/flowchart/library/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/flowchart/library/common.py b/pyqtgraph/flowchart/library/common.py
--- a/pyqtgraph/flowchart/library/common.py
+++ b/pyqtgraph/flowchart/library/common.py
@@ -91,14 +91,15 @@
sigStateChanged = QtCore.Signal(object)
def __init__(self, name, ui=None, terminals=None):
+ if terminals is None:
+ terminals = {'In': {'io': 'in'}, 'Out': {'io': 'out', 'bypass': 'In'}}
+ Node.__init__(self, name=name, terminals=terminals)
+
if ui is None:
if hasattr(self, 'uiTemplate'):
ui = self.uiTemplate
else:
ui = []
- if terminals is None:
- terminals = {'In': {'io': 'in'}, 'Out': {'io': 'out', 'bypass': 'In'}}
- Node.__init__(self, name=name, terminals=terminals)
self.ui, self.stateGroup, self.ctrls = generateUi(ui)
self.stateGroup.sigChanged.connect(self.changed)
|
{"golden_diff": "diff --git a/pyqtgraph/flowchart/library/common.py b/pyqtgraph/flowchart/library/common.py\n--- a/pyqtgraph/flowchart/library/common.py\n+++ b/pyqtgraph/flowchart/library/common.py\n@@ -91,14 +91,15 @@\n sigStateChanged = QtCore.Signal(object)\n \n def __init__(self, name, ui=None, terminals=None):\n+ if terminals is None:\n+ terminals = {'In': {'io': 'in'}, 'Out': {'io': 'out', 'bypass': 'In'}}\n+ Node.__init__(self, name=name, terminals=terminals)\n+ \n if ui is None:\n if hasattr(self, 'uiTemplate'):\n ui = self.uiTemplate\n else:\n ui = []\n- if terminals is None:\n- terminals = {'In': {'io': 'in'}, 'Out': {'io': 'out', 'bypass': 'In'}}\n- Node.__init__(self, name=name, terminals=terminals)\n \n self.ui, self.stateGroup, self.ctrls = generateUi(ui)\n self.stateGroup.sigChanged.connect(self.changed)\n", "issue": "RecursionError with custom node\nIn a custom Node, I try to initialize with `CtrlNode.__init__(self, name, terminals=terminals)`, but I get a RecursionError:\n\n``` python\nTraceback (most recent call last):\n File \"/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Flowchart.py\", line 871, in nodeMenuTriggered\n self.chart.createNode(nodeType, pos=pos)\n File \"/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Flowchart.py\", line 174, in createNode\n node = self.library.getNodeType(nodeType)(name)\n File \"/data/libs.git/JML/python/TBA/nodes.py\", line 37, in __init__\n CtrlNode.__init__(self, name, terminals=terminals)\n File \"/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/library/common.py\", line 89, in __init__\n if hasattr(self, 'uiTemplate'):\n File \"/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Node.py\", line 193, in __getattr__\n if attr not in self.terminals:\n File \"/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Node.py\", line 193, in __getattr__\n if attr not in self.terminals:\n(...)\n File \"/usr/lib/python3.5/site-packages/pyqtgraph/flowchart/Node.py\", line 193, in __getattr__\n if attr not in self.terminals:\nRecursionError: maximum recursion depth exceeded while calling a Python object\n```\n\nThe problem is that `__getattr__` checks for `self.terminals` which is not yet defined, so `__getattr__` is called again and so on.\n\nI think putting the `if ui is None:` block after `Node.__init__` in `CtrlNode.__init__` would do the trick.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom ...Qt import QtCore, QtGui\nfrom ...widgets.SpinBox import SpinBox\n#from ...SignalProxy import SignalProxy\nfrom ...WidgetGroup import WidgetGroup\n#from ColorMapper import ColorMapper\nfrom ..Node import Node\nimport numpy as np\nfrom ...widgets.ColorButton import ColorButton\ntry:\n import metaarray\n HAVE_METAARRAY = True\nexcept:\n HAVE_METAARRAY = False\n\n\ndef generateUi(opts):\n \"\"\"Convenience function for generating common UI types\"\"\"\n widget = QtGui.QWidget()\n l = QtGui.QFormLayout()\n l.setSpacing(0)\n widget.setLayout(l)\n ctrls = {}\n row = 0\n for opt in opts:\n if len(opt) == 2:\n k, t = opt\n o = {}\n elif len(opt) == 3:\n k, t, o = opt\n else:\n raise Exception(\"Widget specification must be (name, type) or (name, type, {opts})\")\n \n ## clean out these options so they don't get sent to SpinBox\n hidden = o.pop('hidden', False)\n tip = o.pop('tip', None)\n\n if t == 'intSpin':\n w = QtGui.QSpinBox()\n if 'max' in o:\n w.setMaximum(o['max'])\n if 'min' in o:\n w.setMinimum(o['min'])\n if 'value' in o:\n w.setValue(o['value'])\n elif t == 'doubleSpin':\n w = QtGui.QDoubleSpinBox()\n if 'max' in 
o:\n w.setMaximum(o['max'])\n if 'min' in o:\n w.setMinimum(o['min']) \n if 'value' in o:\n w.setValue(o['value'])\n elif t == 'spin':\n w = SpinBox()\n w.setOpts(**o)\n elif t == 'check':\n w = QtGui.QCheckBox()\n if 'checked' in o:\n w.setChecked(o['checked'])\n elif t == 'combo':\n w = QtGui.QComboBox()\n for i in o['values']:\n w.addItem(i)\n #elif t == 'colormap':\n #w = ColorMapper()\n elif t == 'color':\n w = ColorButton()\n else:\n raise Exception(\"Unknown widget type '%s'\" % str(t))\n\n if tip is not None:\n w.setToolTip(tip)\n w.setObjectName(k)\n l.addRow(k, w)\n if hidden:\n w.hide()\n label = l.labelForField(w)\n label.hide()\n \n ctrls[k] = w\n w.rowNum = row\n row += 1\n group = WidgetGroup(widget)\n return widget, group, ctrls\n\n\nclass CtrlNode(Node):\n \"\"\"Abstract class for nodes with auto-generated control UI\"\"\"\n \n sigStateChanged = QtCore.Signal(object)\n \n def __init__(self, name, ui=None, terminals=None):\n if ui is None:\n if hasattr(self, 'uiTemplate'):\n ui = self.uiTemplate\n else:\n ui = []\n if terminals is None:\n terminals = {'In': {'io': 'in'}, 'Out': {'io': 'out', 'bypass': 'In'}}\n Node.__init__(self, name=name, terminals=terminals)\n \n self.ui, self.stateGroup, self.ctrls = generateUi(ui)\n self.stateGroup.sigChanged.connect(self.changed)\n \n def ctrlWidget(self):\n return self.ui\n \n def changed(self):\n self.update()\n self.sigStateChanged.emit(self)\n\n def process(self, In, display=True):\n out = self.processData(In)\n return {'Out': out}\n \n def saveState(self):\n state = Node.saveState(self)\n state['ctrl'] = self.stateGroup.state()\n return state\n \n def restoreState(self, state):\n Node.restoreState(self, state)\n if self.stateGroup is not None:\n self.stateGroup.setState(state.get('ctrl', {}))\n \n def hideRow(self, name):\n w = self.ctrls[name]\n l = self.ui.layout().labelForField(w)\n w.hide()\n l.hide()\n \n def showRow(self, name):\n w = self.ctrls[name]\n l = self.ui.layout().labelForField(w)\n w.show()\n l.show()\n\n\nclass PlottingCtrlNode(CtrlNode):\n \"\"\"Abstract class for CtrlNodes that can connect to plots.\"\"\"\n \n def __init__(self, name, ui=None, terminals=None):\n #print \"PlottingCtrlNode.__init__ called.\"\n CtrlNode.__init__(self, name, ui=ui, terminals=terminals)\n self.plotTerminal = self.addOutput('plot', optional=True)\n \n def connected(self, term, remote):\n CtrlNode.connected(self, term, remote)\n if term is not self.plotTerminal:\n return\n node = remote.node()\n node.sigPlotChanged.connect(self.connectToPlot)\n self.connectToPlot(node) \n \n def disconnected(self, term, remote):\n CtrlNode.disconnected(self, term, remote)\n if term is not self.plotTerminal:\n return\n remote.node().sigPlotChanged.disconnect(self.connectToPlot)\n self.disconnectFromPlot(remote.node().getPlot()) \n \n def connectToPlot(self, node):\n \"\"\"Define what happens when the node is connected to a plot\"\"\"\n raise Exception(\"Must be re-implemented in subclass\")\n \n def disconnectFromPlot(self, plot):\n \"\"\"Define what happens when the node is disconnected from a plot\"\"\"\n raise Exception(\"Must be re-implemented in subclass\")\n\n def process(self, In, display=True):\n out = CtrlNode.process(self, In, display)\n out['plot'] = None\n return out\n\n\ndef metaArrayWrapper(fn):\n def newFn(self, data, *args, **kargs):\n if HAVE_METAARRAY and (hasattr(data, 'implements') and data.implements('MetaArray')):\n d1 = fn(self, data.view(np.ndarray), *args, **kargs)\n info = data.infoCopy()\n if d1.shape != data.shape:\n for i 
in range(data.ndim):\n if 'values' in info[i]:\n info[i]['values'] = info[i]['values'][:d1.shape[i]]\n return metaarray.MetaArray(d1, info=info)\n else:\n return fn(self, data, *args, **kargs)\n return newFn\n\n", "path": "pyqtgraph/flowchart/library/common.py"}]}
| 2,824 | 248 |
gh_patches_debug_36921
|
rasdani/github-patches
|
git_diff
|
dmlc__dgl-5147
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Sparse] Improve the efficiency of multiplication between SparseMatrix and DiagMatrix.
## 🔨Work Item
**IMPORTANT:**
* This template is only for the dev team to track project progress. For feature requests or bug reports, please use the corresponding issue templates.
* DO NOT create a new work item if the purpose is to fix an existing issue or feature request. We will directly use the issue in the project tracker.
Project tracker: https://github.com/orgs/dmlc/projects/2
## Description
<!-- short description of the work item -->
## Depending work items or issues
<!-- what must be done before this -->
</issue>
<code>
[start of python/dgl/sparse/matmul.py]
1 """Matmul ops for SparseMatrix"""
2 # pylint: disable=invalid-name
3 from typing import Union
4
5 import torch
6
7 from .diag_matrix import diag, DiagMatrix
8
9 from .sparse_matrix import SparseMatrix
10
11 __all__ = ["spmm", "bspmm", "spspmm", "mm"]
12
13
14 def spmm(A: Union[SparseMatrix, DiagMatrix], X: torch.Tensor) -> torch.Tensor:
15 """Multiply a sparse matrix by a dense matrix.
16
17 Parameters
18 ----------
19 A : SparseMatrix or DiagMatrix
20 Sparse matrix of shape (N, M) with values of shape (nnz)
21 X : torch.Tensor
22 Dense tensor of shape (M, F) or (M)
23
24 Returns
25 -------
26 torch.Tensor
27 The multiplication result of shape (N, F) or (N)
28
29 Examples
30 --------
31
32 >>> row = torch.tensor([0, 1, 1])
33 >>> col = torch.tensor([1, 0, 1])
34 >>> val = torch.randn(len(row))
35 >>> A = from_coo(row, col, val)
36 >>> X = torch.randn(2, 3)
37 >>> result = dgl.sparse.spmm(A, X)
38 >>> print(type(result))
39 <class 'torch.Tensor'>
40 >>> print(result.shape)
41 torch.Size([2, 3])
42 """
43 assert isinstance(
44 A, (SparseMatrix, DiagMatrix)
45 ), f"Expect arg1 to be a SparseMatrix or DiagMatrix object, got {type(A)}"
46 assert isinstance(
47 X, torch.Tensor
48 ), f"Expect arg2 to be a torch.Tensor, got {type(X)}"
49
50 # The input is a DiagMatrix. Cast it to SparseMatrix
51 if not isinstance(A, SparseMatrix):
52 A = A.as_sparse()
53 return torch.ops.dgl_sparse.spmm(A.c_sparse_matrix, X)
54
55
56 def bspmm(A: Union[SparseMatrix, DiagMatrix], X: torch.Tensor) -> torch.Tensor:
57 """Multiply a sparse matrix by a dense matrix by batches.
58
59 Parameters
60 ----------
61 A : SparseMatrix or DiagMatrix
62 Sparse matrix of shape (N, M, B) with values of shape (nnz)
63 X : torch.Tensor
64 Dense tensor of shape (M, F, B)
65
66 Returns
67 -------
68 torch.Tensor
69 The multiplication result of shape (N, F, B)
70
71 Examples
72 --------
73
74 >>> row = torch.tensor([0, 1, 1])
75 >>> col = torch.tensor([1, 0, 2])
76 >>> val = torch.randn(len(row), 2)
77 >>> A = from_coo(row, col, val, shape=(3, 3))
78 >>> X = torch.randn(3, 3, 2)
79 >>> result = dgl.sparse.bspmm(A, X)
80 >>> print(type(result))
81 <class 'torch.Tensor'>
82 >>> print(result.shape)
83 torch.Size([3, 3, 2])
84 """
85 assert isinstance(
86 A, (SparseMatrix, DiagMatrix)
87 ), f"Expect arg1 to be a SparseMatrix or DiagMatrix object, got {type(A)}"
88 assert isinstance(
89 X, torch.Tensor
90 ), f"Expect arg2 to be a torch.Tensor, got {type(X)}"
91 return spmm(A, X)
92
93
94 def _diag_diag_mm(A1: DiagMatrix, A2: DiagMatrix) -> DiagMatrix:
95 """Internal function for multiplying a diagonal matrix by a diagonal matrix
96
97 Parameters
98 ----------
99 A1 : DiagMatrix
100 Matrix of shape (N, M), with values of shape (nnz1)
101 A2 : DiagMatrix
102 Matrix of shape (M, P), with values of shape (nnz2)
103
104 Returns
105 -------
106 DiagMatrix
107 The result of multiplication.
108 """
109 M, N = A1.shape
110 N, P = A2.shape
111 common_diag_len = min(M, N, P)
112 new_diag_len = min(M, P)
113 diag_val = torch.zeros(new_diag_len)
114 diag_val[:common_diag_len] = (
115 A1.val[:common_diag_len] * A2.val[:common_diag_len]
116 )
117 return diag(diag_val.to(A1.device), (M, P))
118
119
120 def spspmm(
121 A1: Union[SparseMatrix, DiagMatrix], A2: Union[SparseMatrix, DiagMatrix]
122 ) -> Union[SparseMatrix, DiagMatrix]:
123 """Multiply a sparse matrix by a sparse matrix. The non-zero values of the
124 two sparse matrices must be 1D.
125
126 Parameters
127 ----------
128 A1 : SparseMatrix or DiagMatrix
129 Sparse matrix of shape (N, M) with values of shape (nnz)
130 A2 : SparseMatrix or DiagMatrix
131 Sparse matrix of shape (M, P) with values of shape (nnz)
132
133 Returns
134 -------
135 SparseMatrix or DiagMatrix
136 The result of multiplication. It is a DiagMatrix object if both matrices
137 are DiagMatrix objects. It is a SparseMatrix object otherwise.
138
139 Examples
140 --------
141
142 >>> row1 = torch.tensor([0, 1, 1])
143 >>> col1 = torch.tensor([1, 0, 1])
144 >>> val1 = torch.ones(len(row1))
145 >>> A1 = from_coo(row1, col1, val1)
146
147 >>> row2 = torch.tensor([0, 1, 1])
148 >>> col2 = torch.tensor([0, 2, 1])
149 >>> val2 = torch.ones(len(row2))
150 >>> A2 = from_coo(row2, col2, val2)
151 >>> result = dgl.sparse.spspmm(A1, A2)
152 >>> print(result)
153 SparseMatrix(indices=tensor([[0, 0, 1, 1, 1],
154 [1, 2, 0, 1, 2]]),
155 values=tensor([1., 1., 1., 1., 1.]),
156 shape=(2, 3), nnz=5)
157 """
158 assert isinstance(
159 A1, (SparseMatrix, DiagMatrix)
160 ), f"Expect A1 to be a SparseMatrix or DiagMatrix object, got {type(A1)}"
161 assert isinstance(
162 A2, (SparseMatrix, DiagMatrix)
163 ), f"Expect A2 to be a SparseMatrix or DiagMatrix object, got {type(A2)}"
164
165 if isinstance(A1, DiagMatrix) and isinstance(A2, DiagMatrix):
166 return _diag_diag_mm(A1, A2)
167 if isinstance(A1, DiagMatrix):
168 A1 = A1.as_sparse()
169 if isinstance(A2, DiagMatrix):
170 A2 = A2.as_sparse()
171 return SparseMatrix(
172 torch.ops.dgl_sparse.spspmm(A1.c_sparse_matrix, A2.c_sparse_matrix)
173 )
174
175
176 def mm(
177 A1: Union[SparseMatrix, DiagMatrix],
178 A2: Union[torch.Tensor, SparseMatrix, DiagMatrix],
179 ) -> Union[torch.Tensor, SparseMatrix, DiagMatrix]:
180 """Multiply a sparse/diagonal matrix by a dense/sparse/diagonal matrix.
181 If an input is a SparseMatrix or DiagMatrix, its non-zero values should
182 be 1-D.
183
184 Parameters
185 ----------
186 A1 : SparseMatrix or DiagMatrix
187 Matrix of shape (N, M), with values of shape (nnz1)
188 A2 : torch.Tensor, SparseMatrix, or DiagMatrix
189 Matrix of shape (M, P). If it is a SparseMatrix or DiagMatrix,
190 it should have values of shape (nnz2).
191
192 Returns
193 -------
194 torch.Tensor or DiagMatrix or SparseMatrix
195 The result of multiplication of shape (N, P)
196
197 * It is a dense torch tensor if :attr:`A2` is so.
198 * It is a DiagMatrix object if both :attr:`A1` and :attr:`A2` are so.
199 * It is a SparseMatrix object otherwise.
200
201 Examples
202 --------
203
204 >>> val = torch.randn(3)
205 >>> A1 = diag(val)
206 >>> A2 = torch.randn(3, 2)
207 >>> result = dgl.sparse.mm(A1, A2)
208 >>> print(type(result))
209 <class 'torch.Tensor'>
210 >>> print(result.shape)
211 torch.Size([3, 2])
212 """
213 assert isinstance(
214 A1, (SparseMatrix, DiagMatrix)
215 ), f"Expect arg1 to be a SparseMatrix, or DiagMatrix object, got {type(A1)}."
216 assert isinstance(A2, (torch.Tensor, SparseMatrix, DiagMatrix)), (
217 f"Expect arg2 to be a torch Tensor, SparseMatrix, or DiagMatrix"
218 f"object, got {type(A2)}."
219 )
220 if isinstance(A2, torch.Tensor):
221 return spmm(A1, A2)
222 if isinstance(A1, DiagMatrix) and isinstance(A2, DiagMatrix):
223 return _diag_diag_mm(A1, A2)
224 return spspmm(A1, A2)
225
226
227 SparseMatrix.__matmul__ = mm
228 DiagMatrix.__matmul__ = mm
229
[end of python/dgl/sparse/matmul.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/dgl/sparse/matmul.py b/python/dgl/sparse/matmul.py
--- a/python/dgl/sparse/matmul.py
+++ b/python/dgl/sparse/matmul.py
@@ -6,7 +6,7 @@
from .diag_matrix import diag, DiagMatrix
-from .sparse_matrix import SparseMatrix
+from .sparse_matrix import SparseMatrix, val_like
__all__ = ["spmm", "bspmm", "spspmm", "mm"]
@@ -117,6 +117,62 @@
return diag(diag_val.to(A1.device), (M, P))
+def _sparse_diag_mm(A, D):
+ """Internal function for multiplying a sparse matrix by a diagonal matrix.
+
+ Parameters
+ ----------
+ A : SparseMatrix
+ Matrix of shape (N, M), with values of shape (nnz1)
+ D : DiagMatrix
+ Matrix of shape (M, P), with values of shape (nnz2)
+
+ Returns
+ -------
+ SparseMatrix
+ SparseMatrix with shape (N, P)
+ """
+ assert (
+ A.shape[1] == D.shape[0]
+ ), f"The second dimension of SparseMatrix should be equal to the first \
+ dimension of DiagMatrix in matmul(SparseMatrix, DiagMatrix), but the \
+ shapes of SparseMatrix and DiagMatrix are {A.shape} and {D.shape} \
+ respectively."
+ assert (
+ D.shape[0] == D.shape[1]
+ ), f"The DiagMatrix should be a square in matmul(SparseMatrix, DiagMatrix) \
+ but got {D.shape}"
+ return val_like(A, D.val[A.col] * A.val)
+
+
+def _diag_sparse_mm(D, A):
+ """Internal function for multiplying a diag matrix by a sparse matrix.
+
+ Parameters
+ ----------
+ D : DiagMatrix
+ Matrix of shape (N, M), with values of shape (nnz1)
+ A : DiagMatrix
+ Matrix of shape (M, P), with values of shape (nnz2)
+
+ Returns
+ -------
+ SparseMatrix
+ SparseMatrix with shape (N, P)
+ """
+ assert (
+ D.shape[1] == A.shape[0]
+ ), f"The second dimension of DiagMatrix should be equal to the first \
+ dimension of SparseMatrix in matmul(DiagMatrix, SparseMatrix), but the \
+ shapes of DiagMatrix and SparseMatrix are {D.shape} and {A.shape} \
+ respectively."
+ assert (
+ D.shape[0] == D.shape[1]
+ ), f"The DiagMatrix should be a square in matmul(DiagMatrix, SparseMatrix) \
+ but got {D.shape}"
+ return val_like(A, D.val[A.row] * A.val)
+
+
def spspmm(
A1: Union[SparseMatrix, DiagMatrix], A2: Union[SparseMatrix, DiagMatrix]
) -> Union[SparseMatrix, DiagMatrix]:
@@ -165,9 +221,9 @@
if isinstance(A1, DiagMatrix) and isinstance(A2, DiagMatrix):
return _diag_diag_mm(A1, A2)
if isinstance(A1, DiagMatrix):
- A1 = A1.as_sparse()
+ return _diag_sparse_mm(A1, A2)
if isinstance(A2, DiagMatrix):
- A2 = A2.as_sparse()
+ return _sparse_diag_mm(A1, A2)
return SparseMatrix(
torch.ops.dgl_sparse.spspmm(A1.c_sparse_matrix, A2.c_sparse_matrix)
)
|
{"golden_diff": "diff --git a/python/dgl/sparse/matmul.py b/python/dgl/sparse/matmul.py\n--- a/python/dgl/sparse/matmul.py\n+++ b/python/dgl/sparse/matmul.py\n@@ -6,7 +6,7 @@\n \n from .diag_matrix import diag, DiagMatrix\n \n-from .sparse_matrix import SparseMatrix\n+from .sparse_matrix import SparseMatrix, val_like\n \n __all__ = [\"spmm\", \"bspmm\", \"spspmm\", \"mm\"]\n \n@@ -117,6 +117,62 @@\n return diag(diag_val.to(A1.device), (M, P))\n \n \n+def _sparse_diag_mm(A, D):\n+ \"\"\"Internal function for multiplying a sparse matrix by a diagonal matrix.\n+\n+ Parameters\n+ ----------\n+ A : SparseMatrix\n+ Matrix of shape (N, M), with values of shape (nnz1)\n+ D : DiagMatrix\n+ Matrix of shape (M, P), with values of shape (nnz2)\n+\n+ Returns\n+ -------\n+ SparseMatrix\n+ SparseMatrix with shape (N, P)\n+ \"\"\"\n+ assert (\n+ A.shape[1] == D.shape[0]\n+ ), f\"The second dimension of SparseMatrix should be equal to the first \\\n+ dimension of DiagMatrix in matmul(SparseMatrix, DiagMatrix), but the \\\n+ shapes of SparseMatrix and DiagMatrix are {A.shape} and {D.shape} \\\n+ respectively.\"\n+ assert (\n+ D.shape[0] == D.shape[1]\n+ ), f\"The DiagMatrix should be a square in matmul(SparseMatrix, DiagMatrix) \\\n+ but got {D.shape}\"\n+ return val_like(A, D.val[A.col] * A.val)\n+\n+\n+def _diag_sparse_mm(D, A):\n+ \"\"\"Internal function for multiplying a diag matrix by a sparse matrix.\n+\n+ Parameters\n+ ----------\n+ D : DiagMatrix\n+ Matrix of shape (N, M), with values of shape (nnz1)\n+ A : DiagMatrix\n+ Matrix of shape (M, P), with values of shape (nnz2)\n+\n+ Returns\n+ -------\n+ SparseMatrix\n+ SparseMatrix with shape (N, P)\n+ \"\"\"\n+ assert (\n+ D.shape[1] == A.shape[0]\n+ ), f\"The second dimension of DiagMatrix should be equal to the first \\\n+ dimension of SparseMatrix in matmul(DiagMatrix, SparseMatrix), but the \\\n+ shapes of DiagMatrix and SparseMatrix are {D.shape} and {A.shape} \\\n+ respectively.\"\n+ assert (\n+ D.shape[0] == D.shape[1]\n+ ), f\"The DiagMatrix should be a square in matmul(DiagMatrix, SparseMatrix) \\\n+ but got {D.shape}\"\n+ return val_like(A, D.val[A.row] * A.val)\n+\n+\n def spspmm(\n A1: Union[SparseMatrix, DiagMatrix], A2: Union[SparseMatrix, DiagMatrix]\n ) -> Union[SparseMatrix, DiagMatrix]:\n@@ -165,9 +221,9 @@\n if isinstance(A1, DiagMatrix) and isinstance(A2, DiagMatrix):\n return _diag_diag_mm(A1, A2)\n if isinstance(A1, DiagMatrix):\n- A1 = A1.as_sparse()\n+ return _diag_sparse_mm(A1, A2)\n if isinstance(A2, DiagMatrix):\n- A2 = A2.as_sparse()\n+ return _sparse_diag_mm(A1, A2)\n return SparseMatrix(\n torch.ops.dgl_sparse.spspmm(A1.c_sparse_matrix, A2.c_sparse_matrix)\n )\n", "issue": "[Sparse] Improve the efficiency of multiplication between SparseMatrix and DiagMatrix.\n## \ud83d\udd28Work Item\r\n\r\n**IMPORTANT:**\r\n* This template is only for dev team to track project progress. For feature request or bug report, please use the corresponding issue templates.\r\n* DO NOT create a new work item if the purpose is to fix an existing issue or feature request. 
We will directly use the issue in the project tracker.\r\n\r\nProject tracker: https://github.com/orgs/dmlc/projects/2\r\n\r\n## Description\r\n\r\n<!-- short description of the work item -->\r\n\r\n## Depending work items or issues\r\n\r\n<!-- what must be done before this -->\r\n\n", "before_files": [{"content": "\"\"\"Matmul ops for SparseMatrix\"\"\"\n# pylint: disable=invalid-name\nfrom typing import Union\n\nimport torch\n\nfrom .diag_matrix import diag, DiagMatrix\n\nfrom .sparse_matrix import SparseMatrix\n\n__all__ = [\"spmm\", \"bspmm\", \"spspmm\", \"mm\"]\n\n\ndef spmm(A: Union[SparseMatrix, DiagMatrix], X: torch.Tensor) -> torch.Tensor:\n \"\"\"Multiply a sparse matrix by a dense matrix.\n\n Parameters\n ----------\n A : SparseMatrix or DiagMatrix\n Sparse matrix of shape (N, M) with values of shape (nnz)\n X : torch.Tensor\n Dense tensor of shape (M, F) or (M)\n\n Returns\n -------\n torch.Tensor\n The multiplication result of shape (N, F) or (N)\n\n Examples\n --------\n\n >>> row = torch.tensor([0, 1, 1])\n >>> col = torch.tensor([1, 0, 1])\n >>> val = torch.randn(len(row))\n >>> A = from_coo(row, col, val)\n >>> X = torch.randn(2, 3)\n >>> result = dgl.sparse.spmm(A, X)\n >>> print(type(result))\n <class 'torch.Tensor'>\n >>> print(result.shape)\n torch.Size([2, 3])\n \"\"\"\n assert isinstance(\n A, (SparseMatrix, DiagMatrix)\n ), f\"Expect arg1 to be a SparseMatrix or DiagMatrix object, got {type(A)}\"\n assert isinstance(\n X, torch.Tensor\n ), f\"Expect arg2 to be a torch.Tensor, got {type(X)}\"\n\n # The input is a DiagMatrix. Cast it to SparseMatrix\n if not isinstance(A, SparseMatrix):\n A = A.as_sparse()\n return torch.ops.dgl_sparse.spmm(A.c_sparse_matrix, X)\n\n\ndef bspmm(A: Union[SparseMatrix, DiagMatrix], X: torch.Tensor) -> torch.Tensor:\n \"\"\"Multiply a sparse matrix by a dense matrix by batches.\n\n Parameters\n ----------\n A : SparseMatrix or DiagMatrix\n Sparse matrix of shape (N, M, B) with values of shape (nnz)\n X : torch.Tensor\n Dense tensor of shape (M, F, B)\n\n Returns\n -------\n torch.Tensor\n The multiplication result of shape (N, F, B)\n\n Examples\n --------\n\n >>> row = torch.tensor([0, 1, 1])\n >>> col = torch.tensor([1, 0, 2])\n >>> val = torch.randn(len(row), 2)\n >>> A = from_coo(row, col, val, shape=(3, 3))\n >>> X = torch.randn(3, 3, 2)\n >>> result = dgl.sparse.bspmm(A, X)\n >>> print(type(result))\n <class 'torch.Tensor'>\n >>> print(result.shape)\n torch.Size([3, 3, 2])\n \"\"\"\n assert isinstance(\n A, (SparseMatrix, DiagMatrix)\n ), f\"Expect arg1 to be a SparseMatrix or DiagMatrix object, got {type(A)}\"\n assert isinstance(\n X, torch.Tensor\n ), f\"Expect arg2 to be a torch.Tensor, got {type(X)}\"\n return spmm(A, X)\n\n\ndef _diag_diag_mm(A1: DiagMatrix, A2: DiagMatrix) -> DiagMatrix:\n \"\"\"Internal function for multiplying a diagonal matrix by a diagonal matrix\n\n Parameters\n ----------\n A1 : DiagMatrix\n Matrix of shape (N, M), with values of shape (nnz1)\n A2 : DiagMatrix\n Matrix of shape (M, P), with values of shape (nnz2)\n\n Returns\n -------\n DiagMatrix\n The result of multiplication.\n \"\"\"\n M, N = A1.shape\n N, P = A2.shape\n common_diag_len = min(M, N, P)\n new_diag_len = min(M, P)\n diag_val = torch.zeros(new_diag_len)\n diag_val[:common_diag_len] = (\n A1.val[:common_diag_len] * A2.val[:common_diag_len]\n )\n return diag(diag_val.to(A1.device), (M, P))\n\n\ndef spspmm(\n A1: Union[SparseMatrix, DiagMatrix], A2: Union[SparseMatrix, DiagMatrix]\n) -> Union[SparseMatrix, DiagMatrix]:\n \"\"\"Multiply a 
sparse matrix by a sparse matrix. The non-zero values of the\n two sparse matrices must be 1D.\n\n Parameters\n ----------\n A1 : SparseMatrix or DiagMatrix\n Sparse matrix of shape (N, M) with values of shape (nnz)\n A2 : SparseMatrix or DiagMatrix\n Sparse matrix of shape (M, P) with values of shape (nnz)\n\n Returns\n -------\n SparseMatrix or DiagMatrix\n The result of multiplication. It is a DiagMatrix object if both matrices\n are DiagMatrix objects. It is a SparseMatrix object otherwise.\n\n Examples\n --------\n\n >>> row1 = torch.tensor([0, 1, 1])\n >>> col1 = torch.tensor([1, 0, 1])\n >>> val1 = torch.ones(len(row1))\n >>> A1 = from_coo(row1, col1, val1)\n\n >>> row2 = torch.tensor([0, 1, 1])\n >>> col2 = torch.tensor([0, 2, 1])\n >>> val2 = torch.ones(len(row2))\n >>> A2 = from_coo(row2, col2, val2)\n >>> result = dgl.sparse.spspmm(A1, A2)\n >>> print(result)\n SparseMatrix(indices=tensor([[0, 0, 1, 1, 1],\n [1, 2, 0, 1, 2]]),\n values=tensor([1., 1., 1., 1., 1.]),\n shape=(2, 3), nnz=5)\n \"\"\"\n assert isinstance(\n A1, (SparseMatrix, DiagMatrix)\n ), f\"Expect A1 to be a SparseMatrix or DiagMatrix object, got {type(A1)}\"\n assert isinstance(\n A2, (SparseMatrix, DiagMatrix)\n ), f\"Expect A2 to be a SparseMatrix or DiagMatrix object, got {type(A2)}\"\n\n if isinstance(A1, DiagMatrix) and isinstance(A2, DiagMatrix):\n return _diag_diag_mm(A1, A2)\n if isinstance(A1, DiagMatrix):\n A1 = A1.as_sparse()\n if isinstance(A2, DiagMatrix):\n A2 = A2.as_sparse()\n return SparseMatrix(\n torch.ops.dgl_sparse.spspmm(A1.c_sparse_matrix, A2.c_sparse_matrix)\n )\n\n\ndef mm(\n A1: Union[SparseMatrix, DiagMatrix],\n A2: Union[torch.Tensor, SparseMatrix, DiagMatrix],\n) -> Union[torch.Tensor, SparseMatrix, DiagMatrix]:\n \"\"\"Multiply a sparse/diagonal matrix by a dense/sparse/diagonal matrix.\n If an input is a SparseMatrix or DiagMatrix, its non-zero values should\n be 1-D.\n\n Parameters\n ----------\n A1 : SparseMatrix or DiagMatrix\n Matrix of shape (N, M), with values of shape (nnz1)\n A2 : torch.Tensor, SparseMatrix, or DiagMatrix\n Matrix of shape (M, P). If it is a SparseMatrix or DiagMatrix,\n it should have values of shape (nnz2).\n\n Returns\n -------\n torch.Tensor or DiagMatrix or SparseMatrix\n The result of multiplication of shape (N, P)\n\n * It is a dense torch tensor if :attr:`A2` is so.\n * It is a DiagMatrix object if both :attr:`A1` and :attr:`A2` are so.\n * It is a SparseMatrix object otherwise.\n\n Examples\n --------\n\n >>> val = torch.randn(3)\n >>> A1 = diag(val)\n >>> A2 = torch.randn(3, 2)\n >>> result = dgl.sparse.mm(A1, A2)\n >>> print(type(result))\n <class 'torch.Tensor'>\n >>> print(result.shape)\n torch.Size([3, 2])\n \"\"\"\n assert isinstance(\n A1, (SparseMatrix, DiagMatrix)\n ), f\"Expect arg1 to be a SparseMatrix, or DiagMatrix object, got {type(A1)}.\"\n assert isinstance(A2, (torch.Tensor, SparseMatrix, DiagMatrix)), (\n f\"Expect arg2 to be a torch Tensor, SparseMatrix, or DiagMatrix\"\n f\"object, got {type(A2)}.\"\n )\n if isinstance(A2, torch.Tensor):\n return spmm(A1, A2)\n if isinstance(A1, DiagMatrix) and isinstance(A2, DiagMatrix):\n return _diag_diag_mm(A1, A2)\n return spspmm(A1, A2)\n\n\nSparseMatrix.__matmul__ = mm\nDiagMatrix.__matmul__ = mm\n", "path": "python/dgl/sparse/matmul.py"}]}
| 3,294 | 842 |
gh_patches_debug_18031
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-2725
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Internal server error when importing CSVs with long names
Follow the same steps as reported in #2634 and observe the error in the screenshot below:
API: `http://localhost/api/db/v0/tables/12/records/?limit=500&offset=0`
<img width="1512" alt="Screenshot 2023-03-20 at 5 29 52 AM" src="https://user-images.githubusercontent.com/11032856/226218521-75355de8-eee0-4b5e-9a9c-47aa3ff67da2.png">
</issue>
<code>
[start of db/identifiers.py]
1 import hashlib
2
3
4 def truncate_if_necessary(identifier):
5 """
6 Takes an identifier and returns it, truncating it, if it is too long. The truncated version
7 will end with a hash of the passed identifier, therefore column name collision should be very
8 rare.
9
10 Iteratively removes characters from the end of the identifier, until the resulting string, with
11 the suffix hash of the identifier appended, is short enough that it doesn't need to be truncated
12 anymore. Whitespace is trimmed from the truncated identifier before appending the suffix.
13 """
14 assert type(identifier) is str
15 if not is_identifier_too_long(identifier):
16 return identifier
17 right_side = "-" + _get_truncation_hash(identifier)
18 identifier_length = len(identifier)
19 assert len(right_side) < identifier_length # Sanity check
20 range_of_num_of_chars_to_remove = range(1, identifier_length)
21 for num_of_chars_to_remove in range_of_num_of_chars_to_remove:
22 left_side = identifier[:num_of_chars_to_remove * -1]
23 left_side = left_side.rstrip()
24 truncated_identifier = left_side + right_side
25 if not is_identifier_too_long(truncated_identifier):
26 return truncated_identifier
27 raise Exception(
28 "Acceptable truncation not found; should never happen."
29 )
30
31
32 def is_identifier_too_long(identifier):
33 postgres_identifier_size_limit = 63
34 size = _get_size_of_identifier_in_bytes(identifier)
35 return size > postgres_identifier_size_limit
36
37
38 def _get_truncation_hash(identifier):
39 """
40 Produces an 8-character string hash of the passed identifier.
41
42 Using hash function blake2s, because it seems fairly recommended and it seems to be better
43 suited for shorter digests than blake2b. We want short digests to not take up too much of the
44 truncated identifier in whose construction this will be used.
45 """
46 h = hashlib.blake2s(digest_size=4)
47 bytes = _get_identifier_in_bytes(identifier)
48 h.update(bytes)
49 return h.hexdigest()
50
51
52 def _get_size_of_identifier_in_bytes(s):
53 bytes = _get_identifier_in_bytes(s)
54 return len(bytes)
55
56
57 def _get_identifier_in_bytes(s):
58 """
59 Afaict, following Postgres doc [0] says that UTF-8 supports all languages; therefore, different
60 server locale configurations should not break this.
61
62 [0] https://www.postgresql.org/docs/13/multibyte.html
63 """
64 return s.encode('utf-8')
65
[end of db/identifiers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/db/identifiers.py b/db/identifiers.py
--- a/db/identifiers.py
+++ b/db/identifiers.py
@@ -1,6 +1,9 @@
import hashlib
+POSTGRES_IDENTIFIER_SIZE_LIMIT = 63
+
+
def truncate_if_necessary(identifier):
"""
Takes an identifier and returns it, truncating it, if it is too long. The truncated version
@@ -30,9 +33,13 @@
def is_identifier_too_long(identifier):
- postgres_identifier_size_limit = 63
+ # TODO we should support POSTGRES_IDENTIFIER_SIZE_LIMIT here;
+ # Our current limit due to an unknown bug that manifests at least
+ # when importing CSVs seems to be 57 bytes. Here we're setting it even
+ # lower just in case.
+ our_temporary_identifier_size_limit = 48
size = _get_size_of_identifier_in_bytes(identifier)
- return size > postgres_identifier_size_limit
+ return size > our_temporary_identifier_size_limit
def _get_truncation_hash(identifier):
|
{"golden_diff": "diff --git a/db/identifiers.py b/db/identifiers.py\n--- a/db/identifiers.py\n+++ b/db/identifiers.py\n@@ -1,6 +1,9 @@\n import hashlib\n \n \n+POSTGRES_IDENTIFIER_SIZE_LIMIT = 63\n+\n+\n def truncate_if_necessary(identifier):\n \"\"\"\n Takes an identifier and returns it, truncating it, if it is too long. The truncated version\n@@ -30,9 +33,13 @@\n \n \n def is_identifier_too_long(identifier):\n- postgres_identifier_size_limit = 63\n+ # TODO we should support POSTGRES_IDENTIFIER_SIZE_LIMIT here;\n+ # Our current limit due to an unknown bug that manifests at least\n+ # when importing CSVs seems to be 57 bytes. Here we're setting it even\n+ # lower just in case.\n+ our_temporary_identifier_size_limit = 48\n size = _get_size_of_identifier_in_bytes(identifier)\n- return size > postgres_identifier_size_limit\n+ return size > our_temporary_identifier_size_limit\n \n \n def _get_truncation_hash(identifier):\n", "issue": "Internal server error when importing CSVs with long names\nFollow the same steps as reported in #2634 and observer the error from the screenshot below: \r\n\r\nAPI: `http://localhost/api/db/v0/tables/12/records/?limit=500&offset=0`\r\n\r\n<img width=\"1512\" alt=\"Screenshot 2023-03-20 at 5 29 52 AM\" src=\"https://user-images.githubusercontent.com/11032856/226218521-75355de8-eee0-4b5e-9a9c-47aa3ff67da2.png\">\r\n\n", "before_files": [{"content": "import hashlib\n\n\ndef truncate_if_necessary(identifier):\n \"\"\"\n Takes an identifier and returns it, truncating it, if it is too long. The truncated version\n will end with a hash of the passed identifier, therefore column name collision should be very\n rare.\n\n Iteratively removes characters from the end of the identifier, until the resulting string, with\n the suffix hash of the identifier appended, is short enough that it doesn't need to be truncated\n anymore. Whitespace is trimmed from the truncated identifier before appending the suffix.\n \"\"\"\n assert type(identifier) is str\n if not is_identifier_too_long(identifier):\n return identifier\n right_side = \"-\" + _get_truncation_hash(identifier)\n identifier_length = len(identifier)\n assert len(right_side) < identifier_length # Sanity check\n range_of_num_of_chars_to_remove = range(1, identifier_length)\n for num_of_chars_to_remove in range_of_num_of_chars_to_remove:\n left_side = identifier[:num_of_chars_to_remove * -1]\n left_side = left_side.rstrip()\n truncated_identifier = left_side + right_side\n if not is_identifier_too_long(truncated_identifier):\n return truncated_identifier\n raise Exception(\n \"Acceptable truncation not found; should never happen.\"\n )\n\n\ndef is_identifier_too_long(identifier):\n postgres_identifier_size_limit = 63\n size = _get_size_of_identifier_in_bytes(identifier)\n return size > postgres_identifier_size_limit\n\n\ndef _get_truncation_hash(identifier):\n \"\"\"\n Produces an 8-character string hash of the passed identifier.\n\n Using hash function blake2s, because it seems fairly recommended and it seems to be better\n suited for shorter digests than blake2b. 
We want short digests to not take up too much of the\n truncated identifier in whose construction this will be used.\n \"\"\"\n h = hashlib.blake2s(digest_size=4)\n bytes = _get_identifier_in_bytes(identifier)\n h.update(bytes)\n return h.hexdigest()\n\n\ndef _get_size_of_identifier_in_bytes(s):\n bytes = _get_identifier_in_bytes(s)\n return len(bytes)\n\n\ndef _get_identifier_in_bytes(s):\n \"\"\"\n Afaict, following Postgres doc [0] says that UTF-8 supports all languages; therefore, different\n server locale configurations should not break this.\n\n [0] https://www.postgresql.org/docs/13/multibyte.html\n \"\"\"\n return s.encode('utf-8')\n", "path": "db/identifiers.py"}]}
| 1,357 | 240 |
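The truncation helper patched in the record above is small enough to mirror in isolation. The sketch below reproduces the same idea — trim the identifier, then append a dash plus an 8-character blake2s digest — against the 48-byte interim limit the patch introduces. It is an illustration, not the Mathesar module itself, and the column name is invented.

```python
import hashlib

LIMIT = 48  # interim limit adopted in the patch; Postgres itself allows 63 bytes


def _suffix(name: str) -> str:
    # 4-byte blake2s digest -> 8 hex characters, as in _get_truncation_hash
    return hashlib.blake2s(name.encode("utf-8"), digest_size=4).hexdigest()


def truncate(name: str, limit: int = LIMIT) -> str:
    """Standalone mirror of truncate_if_necessary: trim, then append '-<hash>'."""
    if len(name.encode("utf-8")) <= limit:
        return name
    suffix = "-" + _suffix(name)
    for cut in range(1, len(name)):
        candidate = name[:-cut].rstrip() + suffix
        if len(candidate.encode("utf-8")) <= limit:
            return candidate
    raise ValueError("no acceptable truncation found")


header = "monthly_average_precipitation_in_millimetres_by_weather_station_name"
short = truncate(header)
print(short, len(short.encode("utf-8")))   # stays within the 48-byte limit
```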
gh_patches_debug_63591
|
rasdani/github-patches
|
git_diff
|
openai__gym-1092
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImportError when installing on Windows 10 and WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>
Dears,
Would you please let me know how I could solve this warning and this error? (Windows 10)
Using TensorFlow backend.
WARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.
File "C:\Users\fi\Desktop\rl\code\3.6\stock_market_reinforcement_learning-master\environment.py", line 43, in __init__
self.reset()
File "C:\Users\fi\Anaconda30\envs\tensorflow\lib\site-packages\gym\core.py", line 70, in reset
raise NotImplementedError
NotImplementedError
</issue>
<code>
[start of gym/envs/mujoco/mujoco_env.py]
1 import os
2
3 from gym import error, spaces
4 from gym.utils import seeding
5 import numpy as np
6 from os import path
7 import gym
8 import six
9
10 try:
11 import mujoco_py
12 except ImportError as e:
13 raise error.DependencyNotInstalled("{}. (HINT: you need to install mujoco_py, and also perform the setup instructions here: https://github.com/openai/mujoco-py/.)".format(e))
14
15 DEFAULT_SIZE = 500
16
17 class MujocoEnv(gym.Env):
18 """Superclass for all MuJoCo environments.
19 """
20
21 def __init__(self, model_path, frame_skip):
22 if model_path.startswith("/"):
23 fullpath = model_path
24 else:
25 fullpath = os.path.join(os.path.dirname(__file__), "assets", model_path)
26 if not path.exists(fullpath):
27 raise IOError("File %s does not exist" % fullpath)
28 self.frame_skip = frame_skip
29 self.model = mujoco_py.load_model_from_path(fullpath)
30 self.sim = mujoco_py.MjSim(self.model)
31 self.data = self.sim.data
32 self.viewer = None
33 self._viewers = {}
34
35 self.metadata = {
36 'render.modes': ['human', 'rgb_array'],
37 'video.frames_per_second': int(np.round(1.0 / self.dt))
38 }
39
40 self.init_qpos = self.sim.data.qpos.ravel().copy()
41 self.init_qvel = self.sim.data.qvel.ravel().copy()
42 observation, _reward, done, _info = self.step(np.zeros(self.model.nu))
43 assert not done
44 self.obs_dim = observation.size
45
46 bounds = self.model.actuator_ctrlrange.copy()
47 low = bounds[:, 0]
48 high = bounds[:, 1]
49 self.action_space = spaces.Box(low=low, high=high)
50
51 high = np.inf*np.ones(self.obs_dim)
52 low = -high
53 self.observation_space = spaces.Box(low, high)
54
55 self.seed()
56
57 def seed(self, seed=None):
58 self.np_random, seed = seeding.np_random(seed)
59 return [seed]
60
61 # methods to override:
62 # ----------------------------
63
64 def reset_model(self):
65 """
66 Reset the robot degrees of freedom (qpos and qvel).
67 Implement this in each subclass.
68 """
69 raise NotImplementedError
70
71 def viewer_setup(self):
72 """
73 This method is called when the viewer is initialized and after every reset
74 Optionally implement this method, if you need to tinker with camera position
75 and so forth.
76 """
77 pass
78
79 # -----------------------------
80
81 def reset(self):
82 self.sim.reset()
83 ob = self.reset_model()
84 old_viewer = self.viewer
85 for v in self._viewers.values():
86 self.viewer = v
87 self.viewer_setup()
88 self.viewer = old_viewer
89 return ob
90
91 def set_state(self, qpos, qvel):
92 assert qpos.shape == (self.model.nq,) and qvel.shape == (self.model.nv,)
93 old_state = self.sim.get_state()
94 new_state = mujoco_py.MjSimState(old_state.time, qpos, qvel,
95 old_state.act, old_state.udd_state)
96 self.sim.set_state(new_state)
97 self.sim.forward()
98
99 @property
100 def dt(self):
101 return self.model.opt.timestep * self.frame_skip
102
103 def do_simulation(self, ctrl, n_frames):
104 self.sim.data.ctrl[:] = ctrl
105 for _ in range(n_frames):
106 self.sim.step()
107
108 def render(self, mode='human', width=DEFAULT_SIZE, height=DEFAULT_SIZE):
109 if mode == 'rgb_array':
110 self._get_viewer(mode).render(width, height)
111 # window size used for old mujoco-py:
112 data = self._get_viewer(mode).read_pixels(width, height, depth=False)
113 # original image is upside-down, so flip it
114 return data[::-1, :, :]
115 elif mode == 'human':
116 self._get_viewer(mode).render()
117
118 def close(self):
119 if self.viewer is not None:
120 # self.viewer.finish()
121 self.viewer = None
122 self._viewers = {}
123
124 def _get_viewer(self, mode):
125 self.viewer = self._viewers.get(mode)
126 if self.viewer is None:
127 if mode == 'human':
128 self.viewer = mujoco_py.MjViewer(self.sim)
129 elif mode == 'rgb_array':
130 self.viewer = mujoco_py.MjRenderContextOffscreen(self.sim, 0)
131 self.viewer_setup()
132 self._viewers[mode] = self.viewer
133 return self.viewer
134
135 def get_body_com(self, body_name):
136 return self.data.get_body_xpos(body_name)
137
138 def state_vector(self):
139 return np.concatenate([
140 self.sim.data.qpos.flat,
141 self.sim.data.qvel.flat
142 ])
143
[end of gym/envs/mujoco/mujoco_env.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gym/envs/mujoco/mujoco_env.py b/gym/envs/mujoco/mujoco_env.py
--- a/gym/envs/mujoco/mujoco_env.py
+++ b/gym/envs/mujoco/mujoco_env.py
@@ -46,7 +46,7 @@
bounds = self.model.actuator_ctrlrange.copy()
low = bounds[:, 0]
high = bounds[:, 1]
- self.action_space = spaces.Box(low=low, high=high)
+ self.action_space = spaces.Box(low=low, high=high, dtype=np.float32)
high = np.inf*np.ones(self.obs_dim)
low = -high
|
{"golden_diff": "diff --git a/gym/envs/mujoco/mujoco_env.py b/gym/envs/mujoco/mujoco_env.py\n--- a/gym/envs/mujoco/mujoco_env.py\n+++ b/gym/envs/mujoco/mujoco_env.py\n@@ -46,7 +46,7 @@\n bounds = self.model.actuator_ctrlrange.copy()\n low = bounds[:, 0]\n high = bounds[:, 1]\n- self.action_space = spaces.Box(low=low, high=high)\n+ self.action_space = spaces.Box(low=low, high=high, dtype=np.float32)\n \n high = np.inf*np.ones(self.obs_dim)\n low = -high\n", "issue": "ImportError when installing on Windows 10 and [33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>\nDears,\r\nWould you please let me know how I could solve this warning and this error? (Windows 10)\r\n\r\nUsing TensorFlow backend.\r\n\u001b[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.\u001b[0m\r\n\r\n File \"C:\\Users\\fi\\Desktop\\rl\\code\\3.6\\stock_market_reinforcement_learning-master\\environment.py\", line 43, in __init__\r\n self.reset()\r\n File \"C:\\Users\\fi\\Anaconda30\\envs\\tensorflow\\lib\\site-packages\\gym\\core.py\", line 70, in reset\r\n raise NotImplementedError\r\nNotImplementedErrorr\r\n\n", "before_files": [{"content": "import os\n\nfrom gym import error, spaces\nfrom gym.utils import seeding\nimport numpy as np\nfrom os import path\nimport gym\nimport six\n\ntry:\n import mujoco_py\nexcept ImportError as e:\n raise error.DependencyNotInstalled(\"{}. (HINT: you need to install mujoco_py, and also perform the setup instructions here: https://github.com/openai/mujoco-py/.)\".format(e))\n\nDEFAULT_SIZE = 500\n\nclass MujocoEnv(gym.Env):\n \"\"\"Superclass for all MuJoCo environments.\n \"\"\"\n\n def __init__(self, model_path, frame_skip):\n if model_path.startswith(\"/\"):\n fullpath = model_path\n else:\n fullpath = os.path.join(os.path.dirname(__file__), \"assets\", model_path)\n if not path.exists(fullpath):\n raise IOError(\"File %s does not exist\" % fullpath)\n self.frame_skip = frame_skip\n self.model = mujoco_py.load_model_from_path(fullpath)\n self.sim = mujoco_py.MjSim(self.model)\n self.data = self.sim.data\n self.viewer = None\n self._viewers = {}\n\n self.metadata = {\n 'render.modes': ['human', 'rgb_array'],\n 'video.frames_per_second': int(np.round(1.0 / self.dt))\n }\n\n self.init_qpos = self.sim.data.qpos.ravel().copy()\n self.init_qvel = self.sim.data.qvel.ravel().copy()\n observation, _reward, done, _info = self.step(np.zeros(self.model.nu))\n assert not done\n self.obs_dim = observation.size\n\n bounds = self.model.actuator_ctrlrange.copy()\n low = bounds[:, 0]\n high = bounds[:, 1]\n self.action_space = spaces.Box(low=low, high=high)\n\n high = np.inf*np.ones(self.obs_dim)\n low = -high\n self.observation_space = spaces.Box(low, high)\n\n self.seed()\n\n def seed(self, seed=None):\n self.np_random, seed = seeding.np_random(seed)\n return [seed]\n\n # methods to override:\n # ----------------------------\n\n def reset_model(self):\n \"\"\"\n Reset the robot degrees of freedom (qpos and qvel).\n Implement this in each subclass.\n \"\"\"\n raise NotImplementedError\n\n def viewer_setup(self):\n \"\"\"\n This method is called when the viewer is initialized and after every reset\n Optionally implement this method, if you need to tinker with camera position\n and so forth.\n \"\"\"\n pass\n\n # -----------------------------\n\n def reset(self):\n self.sim.reset()\n ob = self.reset_model()\n old_viewer = self.viewer\n for v in self._viewers.values():\n self.viewer = v\n self.viewer_setup()\n self.viewer = old_viewer\n 
return ob\n\n def set_state(self, qpos, qvel):\n assert qpos.shape == (self.model.nq,) and qvel.shape == (self.model.nv,)\n old_state = self.sim.get_state()\n new_state = mujoco_py.MjSimState(old_state.time, qpos, qvel,\n old_state.act, old_state.udd_state)\n self.sim.set_state(new_state)\n self.sim.forward()\n\n @property\n def dt(self):\n return self.model.opt.timestep * self.frame_skip\n\n def do_simulation(self, ctrl, n_frames):\n self.sim.data.ctrl[:] = ctrl\n for _ in range(n_frames):\n self.sim.step()\n\n def render(self, mode='human', width=DEFAULT_SIZE, height=DEFAULT_SIZE):\n if mode == 'rgb_array':\n self._get_viewer(mode).render(width, height)\n # window size used for old mujoco-py:\n data = self._get_viewer(mode).read_pixels(width, height, depth=False)\n # original image is upside-down, so flip it\n return data[::-1, :, :]\n elif mode == 'human':\n self._get_viewer(mode).render()\n\n def close(self):\n if self.viewer is not None:\n # self.viewer.finish()\n self.viewer = None\n self._viewers = {}\n\n def _get_viewer(self, mode):\n self.viewer = self._viewers.get(mode)\n if self.viewer is None:\n if mode == 'human':\n self.viewer = mujoco_py.MjViewer(self.sim)\n elif mode == 'rgb_array':\n self.viewer = mujoco_py.MjRenderContextOffscreen(self.sim, 0)\n self.viewer_setup()\n self._viewers[mode] = self.viewer\n return self.viewer\n\n def get_body_com(self, body_name):\n return self.data.get_body_xpos(body_name)\n\n def state_vector(self):\n return np.concatenate([\n self.sim.data.qpos.flat,\n self.sim.data.qvel.flat\n ])\n", "path": "gym/envs/mujoco/mujoco_env.py"}]}
| 2,109 | 155 |
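For the dtype warning quoted in the record above, the change amounts to passing an explicit `dtype` when constructing `gym.spaces.Box`. A minimal sketch, assuming a gym release old enough to emit the autodetection warning:

```python
import numpy as np
from gym import spaces

low = np.array([-1.0, -2.0])
high = np.array([1.0, 2.0])

implicit = spaces.Box(low=low, high=high)                    # warns: dtype autodetected
explicit = spaces.Box(low=low, high=high, dtype=np.float32)  # warning-free, as in the patch

print(implicit.dtype, explicit.dtype)
```

The `NotImplementedError` in the same report is a separate problem: the traceback shows it coming from the base `reset` in `gym/core.py`, which the user's custom environment never overrides, so it is unrelated to the dtype change.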
gh_patches_debug_7763
|
rasdani/github-patches
|
git_diff
|
plotly__dash-808
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Defer `pytest` import?
Looks like `pytest` isn't the safest dependency, causing issues with scikit-learn: https://community.plot.ly/t/pytest-transient-dependency/25383
Could we move the `import pytest` into the testing module/class/function itself and not require it upon install? We could even have a separate install with setup.py's "extras" feature (https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) like `pip install dash[testing]` or something.
</issue>
<code>
[start of setup.py]
1 import io
2 from setuptools import setup, find_packages
3
4 main_ns = {}
5 exec(open("dash/version.py").read(), main_ns) # pylint: disable=exec-used
6
7
8 def read_req_file(req_type):
9 with open("requires-{}.txt".format(req_type)) as fp:
10 requires = (line.strip() for line in fp)
11 return [req for req in requires if req and not req.startswith("#")]
12
13
14 setup(
15 name="dash",
16 version=main_ns["__version__"],
17 author="chris p",
18 author_email="[email protected]",
19 packages=find_packages(exclude=["tests*"]),
20 include_package_data=True,
21 license="MIT",
22 description=(
23 "A Python framework for building reactive web-apps. "
24 "Developed by Plotly."
25 ),
26 long_description=io.open("README.md", encoding="utf-8").read(),
27 long_description_content_type="text/markdown",
28 install_requires=read_req_file("install"),
29 extras_require={"ci": read_req_file("ci")},
30 entry_points={
31 "console_scripts": [
32 "dash-generate-components ="
33 " dash.development.component_generator:cli"
34 ],
35 "pytest11": ["dash = dash.testing.plugin"],
36 },
37 url="https://plot.ly/dash",
38 classifiers=[
39 "Development Status :: 5 - Production/Stable",
40 "Environment :: Web Environment",
41 "Framework :: Flask",
42 "Intended Audience :: Developers",
43 "Intended Audience :: Education",
44 "Intended Audience :: Financial and Insurance Industry",
45 "Intended Audience :: Healthcare Industry",
46 "Intended Audience :: Manufacturing",
47 "Intended Audience :: Science/Research",
48 "License :: OSI Approved :: MIT License",
49 "Programming Language :: Python",
50 "Programming Language :: Python :: 2",
51 "Programming Language :: Python :: 2.7",
52 "Programming Language :: Python :: 3",
53 "Programming Language :: Python :: 3.3",
54 "Programming Language :: Python :: 3.4",
55 "Programming Language :: Python :: 3.5",
56 "Programming Language :: Python :: 3.6",
57 "Programming Language :: Python :: 3.7",
58 "Topic :: Database :: Front-Ends",
59 "Topic :: Office/Business :: Financial :: Spreadsheet",
60 "Topic :: Scientific/Engineering :: Visualization",
61 "Topic :: Software Development :: Libraries :: Application Frameworks",
62 "Topic :: Software Development :: Widget Sets",
63 ],
64 )
65
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,10 @@
long_description=io.open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
install_requires=read_req_file("install"),
- extras_require={"ci": read_req_file("ci")},
+ extras_require={
+ "ci": read_req_file("ci"),
+ "testing": read_req_file("testing"),
+ },
entry_points={
"console_scripts": [
"dash-generate-components ="
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,10 @@\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n- extras_require={\"ci\": read_req_file(\"ci\")},\n+ extras_require={\n+ \"ci\": read_req_file(\"ci\"),\n+ \"testing\": read_req_file(\"testing\"),\n+ },\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components =\"\n", "issue": "Defer `pytest` import?\nLooks like `pytest` isn't the safest dependency, causing issues with scikit-learn: https://community.plot.ly/t/pytest-transient-dependency/25383\r\n\r\nCould we move the `import pytest` into the testing module/class/function itself and not require it upon install? We could even have a separate install with setup.py's \"extras\" feature (https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies) like `pip install dash[testing]` or something.\n", "before_files": [{"content": "import io\nfrom setuptools import setup, find_packages\n\nmain_ns = {}\nexec(open(\"dash/version.py\").read(), main_ns) # pylint: disable=exec-used\n\n\ndef read_req_file(req_type):\n with open(\"requires-{}.txt\".format(req_type)) as fp:\n requires = (line.strip() for line in fp)\n return [req for req in requires if req and not req.startswith(\"#\")]\n\n\nsetup(\n name=\"dash\",\n version=main_ns[\"__version__\"],\n author=\"chris p\",\n author_email=\"[email protected]\",\n packages=find_packages(exclude=[\"tests*\"]),\n include_package_data=True,\n license=\"MIT\",\n description=(\n \"A Python framework for building reactive web-apps. \"\n \"Developed by Plotly.\"\n ),\n long_description=io.open(\"README.md\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n install_requires=read_req_file(\"install\"),\n extras_require={\"ci\": read_req_file(\"ci\")},\n entry_points={\n \"console_scripts\": [\n \"dash-generate-components =\"\n \" dash.development.component_generator:cli\"\n ],\n \"pytest11\": [\"dash = dash.testing.plugin\"],\n },\n url=\"https://plot.ly/dash\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Web Environment\",\n \"Framework :: Flask\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Financial and Insurance Industry\",\n \"Intended Audience :: Healthcare Industry\",\n \"Intended Audience :: Manufacturing\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Database :: Front-Ends\",\n \"Topic :: Office/Business :: Financial :: Spreadsheet\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Topic :: Software Development :: Libraries :: Application Frameworks\",\n \"Topic :: Software Development :: Widget Sets\",\n ],\n)\n", "path": "setup.py"}]}
| 1,301 | 131 |
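The extras mechanism referenced in the record above keeps `pytest` out of a default install while still allowing `pip install dash[testing]`. A reduced sketch of the pattern, assuming the same `requires-<name>.txt` layout the project uses; the package name and version here are placeholders:

```python
from setuptools import setup


def read_req_file(req_type):
    # e.g. requires-install.txt, requires-ci.txt, requires-testing.txt
    with open("requires-{}.txt".format(req_type)) as fp:
        requires = (line.strip() for line in fp)
        return [req for req in requires if req and not req.startswith("#")]


setup(
    name="example-package",            # placeholder, not the real dash metadata
    version="0.0.1",
    install_requires=read_req_file("install"),
    extras_require={
        "ci": read_req_file("ci"),
        "testing": read_req_file("testing"),  # enables `pip install example-package[testing]`
    },
)
```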
gh_patches_debug_39246
|
rasdani/github-patches
|
git_diff
|
dask__distributed-1462
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add rejoin operation to rejoin thread pool
Currently the `secede` function allows a task to remove itself from the worker's current thread pool, opening up a space for more tasks.
We might consider an inverse operation, `rejoin` that blocks until a new spot in the thread pool has opened up. This would enable long-running task computations to avoid contention of many threads computing at once.
First suggested by @adamklein in https://github.com/dask/distributed/issues/1342
also cc @ogrisel
</issue>
<code>
[start of distributed/__init__.py]
1 from __future__ import print_function, division, absolute_import
2
3 from .config import config
4 from .core import connect, rpc
5 from .deploy import LocalCluster
6 from .diagnostics import progress
7 from .client import (Client, Executor, CompatibleExecutor,
8 wait, as_completed, default_client, fire_and_forget,
9 Future)
10 from .nanny import Nanny
11 from .queues import Queue
12 from .scheduler import Scheduler
13 from .utils import sync
14 from .variable import Variable
15 from .worker import Worker, get_worker, get_client, secede
16 from .worker_client import local_client, worker_client
17
18 from ._version import get_versions
19 versions = get_versions()
20 __version__ = versions['version']
21 __git_revision__ = versions['full-revisionid']
22 del get_versions, versions
23
[end of distributed/__init__.py]
[start of distributed/threadpoolexecutor.py]
1 """
2 Modified ThreadPoolExecutor to support threads leaving the thread pool
3
4 This includes a global `secede` method that a submitted function can call to
5 have its thread leave the ThreadPoolExecutor's thread pool. This allows the
6 thread pool to allocate another thread if necessary and so is useful when a
7 function realises that it is going to be a long-running job that doesn't want
8 to take up space. When the function finishes its thread will terminate
9 gracefully.
10
11 This code copies and modifies two functions from the
12 `concurrent.futures.thread` module, notably `_worker` and
13 ThreadPoolExecutor._adjust_thread_count` to allow for checking against a global
14 `threading.local` state. These functions are subject to the following license,
15 which is included as a comment at the end of this file:
16
17 https://docs.python.org/3/license.html
18
19 ... and are under copyright by the Python Software Foundation
20
21 Copyright 2001-2016 Python Software Foundation; All Rights Reserved
22 """
23 from __future__ import print_function, division, absolute_import
24
25 from . import _concurrent_futures_thread as thread
26 import logging
27 import threading
28
29 from .compatibility import get_thread_identity
30 from .metrics import time
31
32 logger = logging.getLogger(__name__)
33
34 thread_state = threading.local()
35
36
37 def _worker(executor, work_queue):
38 thread_state.proceed = True
39 thread_state.executor = executor
40
41 try:
42 while thread_state.proceed:
43 task = work_queue.get()
44 if task is not None: # sentinel
45 task.run()
46 del task
47 elif thread._shutdown or executor is None or executor._shutdown:
48 work_queue.put(None)
49 return
50 del executor
51 except BaseException:
52 logger.critical('Exception in worker', exc_info=True)
53 finally:
54 del thread_state.proceed
55 del thread_state.executor
56
57
58 class ThreadPoolExecutor(thread.ThreadPoolExecutor):
59 def _adjust_thread_count(self):
60 if len(self._threads) < self._max_workers:
61 t = threading.Thread(target=_worker,
62 name="ThreadPool worker %d" % len(self._threads,),
63 args=(self, self._work_queue))
64 t.daemon = True
65 self._threads.add(t)
66 t.start()
67
68 def shutdown(self, wait=True, timeout=None):
69 with threads_lock:
70 with self._shutdown_lock:
71 self._shutdown = True
72 self._work_queue.put(None)
73 if timeout is not None:
74 deadline = time() + timeout
75 for t in self._threads:
76 if timeout is not None:
77 timeout2 = max(deadline - time(), 0)
78 else:
79 timeout2 = None
80 t.join(timeout=timeout2)
81
82
83 def secede():
84 """ Have this thread secede from the ThreadPoolExecutor """
85 thread_state.proceed = False
86 ident = get_thread_identity()
87 with threads_lock:
88 for t in list(thread_state.executor._threads):
89 if t.ident == ident:
90 thread_state.executor._threads.remove(t)
91 break
92 thread_state.executor._adjust_thread_count()
93
94
95 threads_lock = threading.Lock()
96
97 """
98 PSF LICENSE AGREEMENT FOR PYTHON 3.5.2
99 ======================================
100
101 1. This LICENSE AGREEMENT is between the Python Software Foundation ("PSF"), and
102 the Individual or Organization ("Licensee") accessing and otherwise using Python
103 3.5.2 software in source or binary form and its associated documentation.
104
105 2. Subject to the terms and conditions of this License Agreement, PSF hereby
106 grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
107 analyze, test, perform and/or display publicly, prepare derivative works,
108 distribute, and otherwise use Python 3.5.2 alone or in any derivative
109 version, provided, however, that PSF's License Agreement and PSF's notice of
110 copyright, i.e., "Copyright c 2001-2016 Python Software Foundation; All Rights
111 Reserved" are retained in Python 3.5.2 alone or in any derivative version
112 prepared by Licensee.
113
114 3. In the event Licensee prepares a derivative work that is based on or
115 incorporates Python 3.5.2 or any part thereof, and wants to make the
116 derivative work available to others as provided herein, then Licensee hereby
117 agrees to include in any such work a brief summary of the changes made to Python
118 3.5.2.
119
120 4. PSF is making Python 3.5.2 available to Licensee on an "AS IS" basis.
121 PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF
122 EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR
123 WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE
124 USE OF PYTHON 3.5.2 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
125
126 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 3.5.2
127 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF
128 MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 3.5.2, OR ANY DERIVATIVE
129 THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
130
131 6. This License Agreement will automatically terminate upon a material breach of
132 its terms and conditions.
133
134 7. Nothing in this License Agreement shall be deemed to create any relationship
135 of agency, partnership, or joint venture between PSF and Licensee. This License
136 Agreement does not grant permission to use PSF trademarks or trade name in a
137 trademark sense to endorse or promote products or services of Licensee, or any
138 third party.
139
140 8. By copying, installing or otherwise using Python 3.5.2, Licensee agrees
141 to be bound by the terms and conditions of this License Agreement.
142 """
143
[end of distributed/threadpoolexecutor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/__init__.py b/distributed/__init__.py
--- a/distributed/__init__.py
+++ b/distributed/__init__.py
@@ -10,6 +10,7 @@
from .nanny import Nanny
from .queues import Queue
from .scheduler import Scheduler
+from .threadpoolexecutor import rejoin
from .utils import sync
from .variable import Variable
from .worker import Worker, get_worker, get_client, secede
diff --git a/distributed/threadpoolexecutor.py b/distributed/threadpoolexecutor.py
--- a/distributed/threadpoolexecutor.py
+++ b/distributed/threadpoolexecutor.py
@@ -26,7 +26,6 @@
import logging
import threading
-from .compatibility import get_thread_identity
from .metrics import time
logger = logging.getLogger(__name__)
@@ -40,6 +39,13 @@
try:
while thread_state.proceed:
+ with executor._rejoin_lock:
+ if executor._rejoin_list:
+ rejoin_thread, rejoin_event = executor._rejoin_list.pop()
+ executor._threads.add(rejoin_thread)
+ executor._threads.remove(threading.current_thread())
+ rejoin_event.set()
+ break
task = work_queue.get()
if task is not None: # sentinel
task.run()
@@ -56,6 +62,11 @@
class ThreadPoolExecutor(thread.ThreadPoolExecutor):
+ def __init__(self, *args, **kwargs):
+ super(ThreadPoolExecutor, self).__init__(*args, **kwargs)
+ self._rejoin_list = []
+ self._rejoin_lock = threading.Lock()
+
def _adjust_thread_count(self):
if len(self._threads) < self._max_workers:
t = threading.Thread(target=_worker,
@@ -80,16 +91,38 @@
t.join(timeout=timeout2)
-def secede():
- """ Have this thread secede from the ThreadPoolExecutor """
+def secede(adjust=True):
+ """ Have this thread secede from the ThreadPoolExecutor
+
+ See Also
+ --------
+ rejoin: rejoin the thread pool
+ """
thread_state.proceed = False
- ident = get_thread_identity()
with threads_lock:
- for t in list(thread_state.executor._threads):
- if t.ident == ident:
- thread_state.executor._threads.remove(t)
- break
- thread_state.executor._adjust_thread_count()
+ thread_state.executor._threads.remove(threading.current_thread())
+ if adjust:
+ thread_state.executor._adjust_thread_count()
+
+
+def rejoin():
+ """ Have this thread rejoin the ThreadPoolExecutor
+
+ This will block until a new slot opens up in the executor. The next thread
+ to finish a task will leave the pool to allow this one to join.
+
+ See Also
+ --------
+ secede: leave the thread pool
+ """
+ thread = threading.current_thread()
+ event = threading.Event()
+ e = thread_state.executor
+ with e._rejoin_lock:
+ e._rejoin_list.append((thread, event))
+ e.submit(lambda: None)
+ event.wait()
+ thread_state.proceed = True
threads_lock = threading.Lock()
|
{"golden_diff": "diff --git a/distributed/__init__.py b/distributed/__init__.py\n--- a/distributed/__init__.py\n+++ b/distributed/__init__.py\n@@ -10,6 +10,7 @@\n from .nanny import Nanny\n from .queues import Queue\n from .scheduler import Scheduler\n+from .threadpoolexecutor import rejoin\n from .utils import sync\n from .variable import Variable\n from .worker import Worker, get_worker, get_client, secede\ndiff --git a/distributed/threadpoolexecutor.py b/distributed/threadpoolexecutor.py\n--- a/distributed/threadpoolexecutor.py\n+++ b/distributed/threadpoolexecutor.py\n@@ -26,7 +26,6 @@\n import logging\n import threading\n \n-from .compatibility import get_thread_identity\n from .metrics import time\n \n logger = logging.getLogger(__name__)\n@@ -40,6 +39,13 @@\n \n try:\n while thread_state.proceed:\n+ with executor._rejoin_lock:\n+ if executor._rejoin_list:\n+ rejoin_thread, rejoin_event = executor._rejoin_list.pop()\n+ executor._threads.add(rejoin_thread)\n+ executor._threads.remove(threading.current_thread())\n+ rejoin_event.set()\n+ break\n task = work_queue.get()\n if task is not None: # sentinel\n task.run()\n@@ -56,6 +62,11 @@\n \n \n class ThreadPoolExecutor(thread.ThreadPoolExecutor):\n+ def __init__(self, *args, **kwargs):\n+ super(ThreadPoolExecutor, self).__init__(*args, **kwargs)\n+ self._rejoin_list = []\n+ self._rejoin_lock = threading.Lock()\n+\n def _adjust_thread_count(self):\n if len(self._threads) < self._max_workers:\n t = threading.Thread(target=_worker,\n@@ -80,16 +91,38 @@\n t.join(timeout=timeout2)\n \n \n-def secede():\n- \"\"\" Have this thread secede from the ThreadPoolExecutor \"\"\"\n+def secede(adjust=True):\n+ \"\"\" Have this thread secede from the ThreadPoolExecutor\n+\n+ See Also\n+ --------\n+ rejoin: rejoin the thread pool\n+ \"\"\"\n thread_state.proceed = False\n- ident = get_thread_identity()\n with threads_lock:\n- for t in list(thread_state.executor._threads):\n- if t.ident == ident:\n- thread_state.executor._threads.remove(t)\n- break\n- thread_state.executor._adjust_thread_count()\n+ thread_state.executor._threads.remove(threading.current_thread())\n+ if adjust:\n+ thread_state.executor._adjust_thread_count()\n+\n+\n+def rejoin():\n+ \"\"\" Have this thread rejoin the ThreadPoolExecutor\n+\n+ This will block until a new slot opens up in the executor. The next thread\n+ to finish a task will leave the pool to allow this one to join.\n+\n+ See Also\n+ --------\n+ secede: leave the thread pool\n+ \"\"\"\n+ thread = threading.current_thread()\n+ event = threading.Event()\n+ e = thread_state.executor\n+ with e._rejoin_lock:\n+ e._rejoin_list.append((thread, event))\n+ e.submit(lambda: None)\n+ event.wait()\n+ thread_state.proceed = True\n \n \n threads_lock = threading.Lock()\n", "issue": "Add rejoin operation to rejoin thread pool\nCurrently the `secede` function allows a task to remove itself from the worker's current thread pool, opening up a space for more tasks.\r\n\r\nWe might consider an inverse operation, `rejoin` that blocks until a new spot in the thread pool has opened up. 
This would enable long-running task computations to avoid contention of many threads computing at once.\r\n\r\nFirst suggested by @adamklein in https://github.com/dask/distributed/issues/1342\r\n\r\nalso cc @ogrisel\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nfrom .config import config\nfrom .core import connect, rpc\nfrom .deploy import LocalCluster\nfrom .diagnostics import progress\nfrom .client import (Client, Executor, CompatibleExecutor,\n wait, as_completed, default_client, fire_and_forget,\n Future)\nfrom .nanny import Nanny\nfrom .queues import Queue\nfrom .scheduler import Scheduler\nfrom .utils import sync\nfrom .variable import Variable\nfrom .worker import Worker, get_worker, get_client, secede\nfrom .worker_client import local_client, worker_client\n\nfrom ._version import get_versions\nversions = get_versions()\n__version__ = versions['version']\n__git_revision__ = versions['full-revisionid']\ndel get_versions, versions\n", "path": "distributed/__init__.py"}, {"content": "\"\"\"\nModified ThreadPoolExecutor to support threads leaving the thread pool\n\nThis includes a global `secede` method that a submitted function can call to\nhave its thread leave the ThreadPoolExecutor's thread pool. This allows the\nthread pool to allocate another thread if necessary and so is useful when a\nfunction realises that it is going to be a long-running job that doesn't want\nto take up space. When the function finishes its thread will terminate\ngracefully.\n\nThis code copies and modifies two functions from the\n`concurrent.futures.thread` module, notably `_worker` and\nThreadPoolExecutor._adjust_thread_count` to allow for checking against a global\n`threading.local` state. These functions are subject to the following license,\nwhich is included as a comment at the end of this file:\n\n https://docs.python.org/3/license.html\n\n... and are under copyright by the Python Software Foundation\n\n Copyright 2001-2016 Python Software Foundation; All Rights Reserved\n\"\"\"\nfrom __future__ import print_function, division, absolute_import\n\nfrom . 
import _concurrent_futures_thread as thread\nimport logging\nimport threading\n\nfrom .compatibility import get_thread_identity\nfrom .metrics import time\n\nlogger = logging.getLogger(__name__)\n\nthread_state = threading.local()\n\n\ndef _worker(executor, work_queue):\n thread_state.proceed = True\n thread_state.executor = executor\n\n try:\n while thread_state.proceed:\n task = work_queue.get()\n if task is not None: # sentinel\n task.run()\n del task\n elif thread._shutdown or executor is None or executor._shutdown:\n work_queue.put(None)\n return\n del executor\n except BaseException:\n logger.critical('Exception in worker', exc_info=True)\n finally:\n del thread_state.proceed\n del thread_state.executor\n\n\nclass ThreadPoolExecutor(thread.ThreadPoolExecutor):\n def _adjust_thread_count(self):\n if len(self._threads) < self._max_workers:\n t = threading.Thread(target=_worker,\n name=\"ThreadPool worker %d\" % len(self._threads,),\n args=(self, self._work_queue))\n t.daemon = True\n self._threads.add(t)\n t.start()\n\n def shutdown(self, wait=True, timeout=None):\n with threads_lock:\n with self._shutdown_lock:\n self._shutdown = True\n self._work_queue.put(None)\n if timeout is not None:\n deadline = time() + timeout\n for t in self._threads:\n if timeout is not None:\n timeout2 = max(deadline - time(), 0)\n else:\n timeout2 = None\n t.join(timeout=timeout2)\n\n\ndef secede():\n \"\"\" Have this thread secede from the ThreadPoolExecutor \"\"\"\n thread_state.proceed = False\n ident = get_thread_identity()\n with threads_lock:\n for t in list(thread_state.executor._threads):\n if t.ident == ident:\n thread_state.executor._threads.remove(t)\n break\n thread_state.executor._adjust_thread_count()\n\n\nthreads_lock = threading.Lock()\n\n\"\"\"\nPSF LICENSE AGREEMENT FOR PYTHON 3.5.2\n======================================\n\n1. This LICENSE AGREEMENT is between the Python Software Foundation (\"PSF\"), and\n the Individual or Organization (\"Licensee\") accessing and otherwise using Python\n 3.5.2 software in source or binary form and its associated documentation.\n\n2. Subject to the terms and conditions of this License Agreement, PSF hereby\n grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,\n analyze, test, perform and/or display publicly, prepare derivative works,\n distribute, and otherwise use Python 3.5.2 alone or in any derivative\n version, provided, however, that PSF's License Agreement and PSF's notice of\n copyright, i.e., \"Copyright c 2001-2016 Python Software Foundation; All Rights\n Reserved\" are retained in Python 3.5.2 alone or in any derivative version\n prepared by Licensee.\n\n3. In the event Licensee prepares a derivative work that is based on or\n incorporates Python 3.5.2 or any part thereof, and wants to make the\n derivative work available to others as provided herein, then Licensee hereby\n agrees to include in any such work a brief summary of the changes made to Python\n 3.5.2.\n\n4. PSF is making Python 3.5.2 available to Licensee on an \"AS IS\" basis.\n PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED. BY WAY OF\n EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND DISCLAIMS ANY REPRESENTATION OR\n WARRANTY OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE\n USE OF PYTHON 3.5.2 WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.\n\n5. 
PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON 3.5.2\n FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS A RESULT OF\n MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 3.5.2, OR ANY DERIVATIVE\n THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.\n\n6. This License Agreement will automatically terminate upon a material breach of\n its terms and conditions.\n\n7. Nothing in this License Agreement shall be deemed to create any relationship\n of agency, partnership, or joint venture between PSF and Licensee. This License\n Agreement does not grant permission to use PSF trademarks or trade name in a\n trademark sense to endorse or promote products or services of Licensee, or any\n third party.\n\n8. By copying, installing or otherwise using Python 3.5.2, Licensee agrees\n to be bound by the terms and conditions of this License Agreement.\n\"\"\"\n", "path": "distributed/threadpoolexecutor.py"}]}
| 2,469 | 737 |
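With the patch in the record above applied, `rejoin` becomes importable from the top-level `distributed` package alongside `secede`. A minimal usage sketch of the intended pattern — a long-running task gives up its slot, then blocks for a free one before returning — with `time.sleep` standing in for real work:

```python
import time
from distributed import Client, secede, rejoin


def long_task(x):
    secede()        # free this thread's slot so the worker can start other tasks
    time.sleep(2)   # stand-in for a long-running computation
    rejoin()        # block until a slot opens up, then rejoin the pool
    return x + 1


if __name__ == "__main__":
    client = Client()  # local cluster with the default worker thread pool
    futures = client.map(long_task, range(8))
    print(client.gather(futures))
```

Internally, `rejoin` appends the calling thread to the executor's rejoin list and submits a no-op task, so the next worker thread that comes back for new work hands over its slot and exits.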
gh_patches_debug_34466
|
rasdani/github-patches
|
git_diff
|
horovod__horovod-3074
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incremental build support
**Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet) TensorFlow
2. Framework version: TF 1.14.0
3. Horovod version: tip of master
4. MPI version:
5. CUDA version: 10.0
6. NCCL version: tip of master
7. Python version: 3.6.8
8. OS and version: Ubuntu 18.04
9. GCC version: 7.4.0
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes.
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? N/A
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? N/A
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes.
**Your question:**
Is there any way to do an incremental build of horovod? I can't figure out a way to build horovod from a local copy of the source code, except through `pip install .`, but that seems to build from scratch every time, regardless of changes to the source code.
</issue>
<code>
[start of setup.py]
1 # Copyright 2019 Uber Technologies, Inc. All Rights Reserved.
2 # Modifications copyright Microsoft
3 # Modifications copyright (C) 2020, NVIDIA CORPORATION. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 # ==============================================================================
17
18 import os
19 import subprocess
20 import sys
21 import textwrap
22
23 from setuptools import setup, Extension, find_packages
24 from setuptools.command.build_ext import build_ext
25
26 from horovod import __version__
27
28
29 class CMakeExtension(Extension):
30 def __init__(self, name, cmake_lists_dir='.', sources=[], **kwa):
31 Extension.__init__(self, name, sources=sources, **kwa)
32 self.cmake_lists_dir = os.path.abspath(cmake_lists_dir)
33
34
35 tensorflow_mpi_lib = CMakeExtension('horovod.tensorflow.mpi_lib',
36 cmake_lists_dir='.', sources=[])
37 torch_mpi_lib_v2 = CMakeExtension('horovod.torch.mpi_lib_v2',
38 cmake_lists_dir='.', sources=[])
39 mxnet_mpi_lib = CMakeExtension('horovod.mxnet.mpi_lib',
40 cmake_lists_dir='.', sources=[])
41
42 def is_build_action():
43 if len(sys.argv) <= 1:
44 return False
45
46 if sys.argv[1].startswith('build'):
47 return True
48
49 if sys.argv[1].startswith('bdist'):
50 return True
51
52 if sys.argv[1].startswith('install'):
53 return True
54
55
56 def get_cmake_bin():
57 return os.environ.get('HOROVOD_CMAKE', 'cmake')
58
59
60 class custom_build_ext(build_ext):
61 def build_extensions(self):
62 if os.getenv('HOROVOD_SKIP_COMPILE') == '1':
63 # Skip building extensions using CMake
64 print("Horovod is being installed without native libraries")
65 return
66
67 cmake_bin = get_cmake_bin()
68
69 config = 'Debug' if self.debug else 'RelWithDebInfo'
70
71 ext_name = self.extensions[0].name
72 build_dir = self.get_ext_fullpath(ext_name).replace(self.get_ext_filename(ext_name), '')
73 build_dir = os.path.abspath(build_dir)
74
75 cmake_args = ['-DCMAKE_BUILD_TYPE=' + config,
76 '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{}={}'.format(config.upper(), build_dir),
77 '-DPYTHON_EXECUTABLE:FILEPATH=' + sys.executable]
78
79 make_args = []
80 if self.verbose:
81 make_args.append('VERBOSE=1')
82
83 cmake_build_args = ['--config', config]
84 if make_args:
85 # -- specifies that these args are going to the native build tool: make
86 cmake_build_args += ['--'] + make_args
87
88 cmake_build_dir = os.path.join(self.build_temp, config)
89 if not os.path.exists(cmake_build_dir):
90 os.makedirs(cmake_build_dir)
91
92 # Config and build the extension
93 try:
94 subprocess.check_call([cmake_bin, self.extensions[0].cmake_lists_dir] + cmake_args,
95 cwd=cmake_build_dir)
96 subprocess.check_call([cmake_bin, '--build', '.'] + cmake_build_args,
97 cwd=cmake_build_dir)
98 except OSError as e:
99 raise RuntimeError('CMake failed: {}'.format(str(e)))
100
101
102 # python packages required to use horovod in general
103 require_list = ['cloudpickle', 'psutil', 'pyyaml', 'dataclasses;python_version<"3.7"']
104
105 # framework dependencies
106 tensorflow_require_list = ['tensorflow']
107 tensorflow_cpu_require_list = ['tensorflow-cpu']
108 tensorflow_gpu_require_list = ['tensorflow-gpu']
109 keras_require_list = ['keras>=2.0.8,!=2.0.9,!=2.1.0,!=2.1.1']
110 pytorch_require_list = ['torch', 'pytorch_lightning']
111 mxnet_require_list = ['mxnet>=1.4.1']
112 pyspark_require_list = ['pyspark>=2.3.2;python_version<"3.8"',
113 'pyspark>=3.0.0;python_version>="3.8"']
114 # Pin h5py: https://github.com/h5py/h5py/issues/1732
115 spark_require_list = ['h5py<3', 'numpy', 'petastorm>=0.11.0', 'pyarrow>=0.15.0', 'fsspec']
116 ray_require_list = ['ray']
117 pytorch_spark_require_list = pytorch_require_list + \
118 spark_require_list + \
119 pyspark_require_list
120
121 # all frameworks' dependencies
122 all_frameworks_require_list = tensorflow_require_list + \
123 keras_require_list + \
124 pytorch_require_list + \
125 mxnet_require_list + \
126 spark_require_list + \
127 pyspark_require_list
128
129 # python packages required / recommended to develop horovod
130 # these are the earliest versions to work with Python 3.8
131 # keep in sync with Dockerfile.test.cpu
132 # NOTE: do not use versions with +cpu or +gpu here as users would need to add --find-links to pip
133 dev_require_list = ['tensorflow-cpu==2.2.0',
134 'keras==2.3.1',
135 'torch==1.4.0',
136 'torchvision==0.5.0',
137 'pytorch_lightning>=1.2.9',
138 'mxnet==1.5.0',
139 'pyspark==3.0.1'] + spark_require_list
140 # torchvision 0.5.0 depends on torch==1.4.0
141
142 # python packages required only to run tests
143 # Pin h5py: https://github.com/h5py/h5py/issues/1732
144 test_require_list = ['mock', 'pytest', 'pytest-forked', 'parameterized', 'h5py<3']
145
146 # Skip cffi if pytorch extension explicitly disabled
147 if not os.environ.get('HOROVOD_WITHOUT_PYTORCH'):
148 require_list.append('cffi>=1.4.0')
149
150
151 def get_package_version():
152 return __version__ + "+" + os.environ['HOROVOD_LOCAL_VERSION'] if 'HOROVOD_LOCAL_VERSION' in os.environ else __version__
153
154
155 setup(name='horovod',
156 version=get_package_version(),
157 packages=find_packages(),
158 description='Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.',
159 author='The Horovod Authors',
160 license='Apache 2.0',
161 long_description=textwrap.dedent('''\
162 Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
163 The goal of Horovod is to make distributed Deep Learning fast and easy to use.'''),
164 url='https://github.com/horovod/horovod',
165 keywords=['deep learning', 'tensorflow', 'keras', 'pytorch', 'mxnet', 'spark', 'AI'],
166 classifiers=[
167 'License :: OSI Approved :: Apache Software License',
168 'Development Status :: 4 - Beta',
169 'Intended Audience :: Developers',
170 'Topic :: Scientific/Engineering :: Artificial Intelligence',
171 ],
172 ext_modules=[tensorflow_mpi_lib, torch_mpi_lib_v2, mxnet_mpi_lib],
173 cmdclass={'build_ext': custom_build_ext},
174 # cffi is required for PyTorch
175 # If cffi is specified in setup_requires, it will need libffi to be installed on the machine,
176 # which is undesirable. Luckily, `install` action will install cffi before executing build,
177 # so it's only necessary for `build*` or `bdist*` actions.
178 setup_requires=require_list if is_build_action() else [],
179 install_requires=require_list,
180 tests_require=test_require_list,
181 extras_require={
182 'all-frameworks': all_frameworks_require_list,
183 'tensorflow': tensorflow_require_list,
184 'tensorflow-cpu': tensorflow_cpu_require_list,
185 'tensorflow-gpu': tensorflow_gpu_require_list,
186 'keras': keras_require_list,
187 'pytorch': pytorch_require_list,
188 'mxnet': mxnet_require_list,
189 'spark': spark_require_list + pyspark_require_list,
190 'pytorch-spark': pytorch_spark_require_list,
191 'ray': ray_require_list,
192 'dev': dev_require_list,
193 'test': test_require_list,
194 },
195 python_requires='>=3.6',
196 zip_safe=False,
197 entry_points={
198 'console_scripts': [
199 'horovodrun = horovod.runner.launch:run_commandline'
200 ]
201 })
202
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -16,6 +16,7 @@
# ==============================================================================
import os
+import shutil
import subprocess
import sys
import textwrap
@@ -25,6 +26,7 @@
from horovod import __version__
+_FRAMEWORK_METADATA_FILE = 'horovod/metadata.json'
class CMakeExtension(Extension):
def __init__(self, name, cmake_lists_dir='.', sources=[], **kwa):
@@ -52,6 +54,8 @@
if sys.argv[1].startswith('install'):
return True
+ if sys.argv[1].startswith('develop'):
+ return True
def get_cmake_bin():
return os.environ.get('HOROVOD_CMAKE', 'cmake')
@@ -66,7 +70,7 @@
cmake_bin = get_cmake_bin()
- config = 'Debug' if self.debug else 'RelWithDebInfo'
+ config = 'Debug' if self.debug or os.environ.get('HOROVOD_DEBUG') == "1" else 'RelWithDebInfo'
ext_name = self.extensions[0].name
build_dir = self.get_ext_fullpath(ext_name).replace(self.get_ext_filename(ext_name), '')
@@ -98,6 +102,13 @@
except OSError as e:
raise RuntimeError('CMake failed: {}'.format(str(e)))
+ if sys.argv[1].startswith('develop'):
+ # Copy over metadata.json file from build directory
+ shutil.copyfile(os.path.join(build_dir, _FRAMEWORK_METADATA_FILE),
+ os.path.join(self.extensions[0].cmake_lists_dir, _FRAMEWORK_METADATA_FILE))
+ # Remove unfound frameworks, otherwise develop mode will fail the install
+ self.extensions = [x for x in self.extensions if os.path.exists(self.get_ext_fullpath(x.name))]
+
# python packages required to use horovod in general
require_list = ['cloudpickle', 'psutil', 'pyyaml', 'dataclasses;python_version<"3.7"']
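The net effect of the `is_build_action` change is that editable installs (`pip install -e .`, which runs the `develop` command) are now treated as build actions, and `HOROVOD_DEBUG=1` forces a Debug CMake configuration. A condensed sketch of the patched predicate (illustrative only; the real setup.py keeps the separate `if` statements and reads `sys.argv` directly):

```python
import sys

def is_build_action(argv=None):
    # Hypothetical condensation of the patched check: 'develop' now counts
    # as a build action alongside build*, bdist* and install*.
    argv = sys.argv if argv is None else argv
    if len(argv) <= 1:
        return False
    return argv[1].startswith(('build', 'bdist', 'install', 'develop'))

print(is_build_action(['setup.py', 'develop']))  # True after the patch
print(is_build_action(['setup.py', 'clean']))    # False, as before
```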
|
| 3,308 | 468 |
gh_patches_debug_4284 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-3544 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide an API to query the build status by commit
In order to do a check before release that everything is ok, I would like to have a way to obtain the current build status for a given commit. So, in addition to:
```
GET /api/v1/build/{id}/
```
it would be good to also have this:
```
GET /api/v1/commit/{sha1}/
```
or
```
GET /api/v1/{user}/{project}/commit/{sha1}/
```
Is this possible right now?
</issue>
<code>
[start of readthedocs/restapi/views/model_views.py]
1 """Endpoints for listing Projects, Versions, Builds, etc."""
2
3 from __future__ import absolute_import
4 import logging
5
6 from django.shortcuts import get_object_or_404
7 from rest_framework import decorators, permissions, viewsets, status
8 from rest_framework.decorators import detail_route
9 from rest_framework.renderers import JSONRenderer
10 from rest_framework.response import Response
11
12 from readthedocs.builds.constants import BRANCH
13 from readthedocs.builds.constants import TAG
14 from readthedocs.builds.models import Build, BuildCommandResult, Version
15 from readthedocs.core.utils import trigger_build
16 from readthedocs.core.utils.extend import SettingsOverrideObject
17 from readthedocs.oauth.services import GitHubService, registry
18 from readthedocs.oauth.models import RemoteOrganization, RemoteRepository
19 from readthedocs.projects.models import Project, EmailHook, Domain
20 from readthedocs.projects.version_handling import determine_stable_version
21
22 from ..permissions import (APIPermission, APIRestrictedPermission,
23 RelatedProjectIsOwner, IsOwner)
24 from ..serializers import (BuildSerializer, BuildAdminSerializer,
25 BuildCommandSerializer,
26 ProjectSerializer, ProjectAdminSerializer,
27 VersionSerializer, VersionAdminSerializer,
28 DomainSerializer, RemoteOrganizationSerializer,
29 RemoteRepositorySerializer)
30 from .. import utils as api_utils
31
32 log = logging.getLogger(__name__)
33
34
35 class UserSelectViewSet(viewsets.ModelViewSet):
36
37 """
38 View set that varies serializer class based on request user credentials.
39
40 Viewsets using this class should have an attribute `admin_serializer_class`,
41 which is a serializer that might have more fields that only admin/staff
42 users require. If the user is staff, this class will be returned instead.
43 """
44
45 def get_serializer_class(self):
46 try:
47 if self.request.user.is_staff and self.admin_serializer_class is not None:
48 return self.admin_serializer_class
49 except AttributeError:
50 pass
51 return self.serializer_class
52
53 def get_queryset(self):
54 """Use our API manager method to determine authorization on queryset."""
55 return self.model.objects.api(self.request.user)
56
57
58 class ProjectViewSet(UserSelectViewSet):
59
60 """List, filter, etc. Projects."""
61
62 permission_classes = [APIPermission]
63 renderer_classes = (JSONRenderer,)
64 serializer_class = ProjectSerializer
65 admin_serializer_class = ProjectAdminSerializer
66 model = Project
67 paginate_by = 100
68 paginate_by_param = 'page_size'
69 max_paginate_by = 1000
70
71 @decorators.detail_route()
72 def valid_versions(self, request, **kwargs):
73 """Maintain state of versions that are wanted."""
74 project = get_object_or_404(
75 Project.objects.api(request.user), pk=kwargs['pk'])
76 if not project.num_major or not project.num_minor or not project.num_point:
77 return Response(
78 {'error': 'Project does not support point version control'},
79 status=status.HTTP_400_BAD_REQUEST)
80 version_strings = project.supported_versions()
81 # Disable making old versions inactive for now.
82 # project.versions.exclude(verbose_name__in=version_strings).update(active=False)
83 project.versions.filter(
84 verbose_name__in=version_strings).update(active=True)
85 return Response({
86 'flat': version_strings,
87 })
88
89 @detail_route()
90 def translations(self, *_, **__):
91 translations = self.get_object().translations.all()
92 return Response({
93 'translations': ProjectSerializer(translations, many=True).data
94 })
95
96 @detail_route()
97 def subprojects(self, request, **kwargs):
98 project = get_object_or_404(
99 Project.objects.api(request.user), pk=kwargs['pk'])
100 rels = project.subprojects.all()
101 children = [rel.child for rel in rels]
102 return Response({
103 'subprojects': ProjectSerializer(children, many=True).data
104 })
105
106 @detail_route()
107 def active_versions(self, request, **kwargs):
108 project = get_object_or_404(
109 Project.objects.api(request.user), pk=kwargs['pk'])
110 versions = project.versions.filter(active=True)
111 return Response({
112 'versions': VersionSerializer(versions, many=True).data
113 })
114
115 @decorators.detail_route(permission_classes=[permissions.IsAdminUser])
116 def token(self, request, **kwargs):
117 project = get_object_or_404(
118 Project.objects.api(request.user), pk=kwargs['pk'])
119 token = GitHubService.get_token_for_project(project, force_local=True)
120 return Response({
121 'token': token
122 })
123
124 @decorators.detail_route()
125 def canonical_url(self, request, **kwargs):
126 project = get_object_or_404(
127 Project.objects.api(request.user), pk=kwargs['pk'])
128 return Response({
129 'url': project.get_docs_url()
130 })
131
132 @decorators.detail_route(permission_classes=[permissions.IsAdminUser], methods=['post'])
133 def sync_versions(self, request, **kwargs): # noqa: D205
134 """
135 Sync the version data in the repo (on the build server) with what we
136 have in the database.
137
138 Returns the identifiers for the versions that have been deleted.
139 """
140 project = get_object_or_404(
141 Project.objects.api(request.user), pk=kwargs['pk'])
142
143 # If the currently highest non-prerelease version is active, then make
144 # the new latest version active as well.
145 old_highest_version = determine_stable_version(project.versions.all())
146 if old_highest_version is not None:
147 activate_new_stable = old_highest_version.active
148 else:
149 activate_new_stable = False
150
151 try:
152 # Update All Versions
153 data = request.data
154 added_versions = set()
155 if 'tags' in data:
156 ret_set = api_utils.sync_versions(
157 project=project, versions=data['tags'], type=TAG)
158 added_versions.update(ret_set)
159 if 'branches' in data:
160 ret_set = api_utils.sync_versions(
161 project=project, versions=data['branches'], type=BRANCH)
162 added_versions.update(ret_set)
163 deleted_versions = api_utils.delete_versions(project, data)
164 except Exception as e:
165 log.exception("Sync Versions Error: %s", e.message)
166 return Response({'error': e.message}, status=status.HTTP_400_BAD_REQUEST)
167
168 promoted_version = project.update_stable_version()
169 if promoted_version:
170 new_stable = project.get_stable_version()
171 log.info(
172 "Triggering new stable build: {project}:{version}".format(
173 project=project.slug,
174 version=new_stable.identifier))
175 trigger_build(project=project, version=new_stable)
176
177 # Marking the tag that is considered the new stable version as
178 # active and building it if it was just added.
179 if (
180 activate_new_stable and
181 promoted_version.slug in added_versions):
182 promoted_version.active = True
183 promoted_version.save()
184 trigger_build(project=project, version=promoted_version)
185
186 return Response({
187 'added_versions': added_versions,
188 'deleted_versions': deleted_versions,
189 })
190
191
192 class VersionViewSet(UserSelectViewSet):
193
194 permission_classes = [APIRestrictedPermission]
195 renderer_classes = (JSONRenderer,)
196 serializer_class = VersionSerializer
197 admin_serializer_class = VersionAdminSerializer
198 model = Version
199
200
201 class BuildViewSetBase(UserSelectViewSet):
202 permission_classes = [APIRestrictedPermission]
203 renderer_classes = (JSONRenderer,)
204 serializer_class = BuildSerializer
205 admin_serializer_class = BuildAdminSerializer
206 model = Build
207
208
209 class BuildViewSet(SettingsOverrideObject):
210
211 """A pluggable class to allow for build cold storage."""
212
213 _default_class = BuildViewSetBase
214
215
216 class BuildCommandViewSet(UserSelectViewSet):
217 permission_classes = [APIRestrictedPermission]
218 renderer_classes = (JSONRenderer,)
219 serializer_class = BuildCommandSerializer
220 model = BuildCommandResult
221
222
223 class NotificationViewSet(viewsets.ReadOnlyModelViewSet):
224 permission_classes = (permissions.IsAuthenticated, RelatedProjectIsOwner)
225 renderer_classes = (JSONRenderer,)
226 model = EmailHook
227
228 def get_queryset(self):
229 return self.model.objects.api(self.request.user)
230
231
232 class DomainViewSet(UserSelectViewSet):
233 permission_classes = [APIRestrictedPermission]
234 renderer_classes = (JSONRenderer,)
235 serializer_class = DomainSerializer
236 model = Domain
237
238
239 class RemoteOrganizationViewSet(viewsets.ReadOnlyModelViewSet):
240 permission_classes = [IsOwner]
241 renderer_classes = (JSONRenderer,)
242 serializer_class = RemoteOrganizationSerializer
243 model = RemoteOrganization
244 paginate_by = 25
245
246 def get_queryset(self):
247 return (self.model.objects.api(self.request.user)
248 .filter(account__provider__in=[service.adapter.provider_id
249 for service in registry]))
250
251
252 class RemoteRepositoryViewSet(viewsets.ReadOnlyModelViewSet):
253 permission_classes = [IsOwner]
254 renderer_classes = (JSONRenderer,)
255 serializer_class = RemoteRepositorySerializer
256 model = RemoteRepository
257
258 def get_queryset(self):
259 query = self.model.objects.api(self.request.user)
260 org = self.request.query_params.get('org', None)
261 if org is not None:
262 query = query.filter(organization__pk=org)
263 query = query.filter(account__provider__in=[service.adapter.provider_id
264 for service in registry])
265 return query
266
267 def get_paginate_by(self):
268 return self.request.query_params.get('page_size', 25)
269
[end of readthedocs/restapi/views/model_views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/restapi/views/model_views.py b/readthedocs/restapi/views/model_views.py
--- a/readthedocs/restapi/views/model_views.py
+++ b/readthedocs/restapi/views/model_views.py
@@ -205,6 +205,13 @@
admin_serializer_class = BuildAdminSerializer
model = Build
+ def get_queryset(self):
+ query = super(BuildViewSetBase, self).get_queryset()
+ commit = self.request.query_params.get('commit', None)
+ if commit is not None:
+ query = query.filter(commit=commit)
+ return query
+
class BuildViewSet(SettingsOverrideObject):
|
{"golden_diff": "diff --git a/readthedocs/restapi/views/model_views.py b/readthedocs/restapi/views/model_views.py\n--- a/readthedocs/restapi/views/model_views.py\n+++ b/readthedocs/restapi/views/model_views.py\n@@ -205,6 +205,13 @@\n admin_serializer_class = BuildAdminSerializer\n model = Build\n \n+ def get_queryset(self):\n+ query = super(BuildViewSetBase, self).get_queryset()\n+ commit = self.request.query_params.get('commit', None)\n+ if commit is not None:\n+ query = query.filter(commit=commit)\n+ return query\n+\n \n class BuildViewSet(SettingsOverrideObject):\n", "issue": "Provide an API to query the build status by commit\nIn order to do a check before release that everything is ok, I would like to have a way to obtain the current build status for a given commit. So, in addition to:\n\n```\nGET /api/v1/build/{id}/\n```\n\nalso have this:\n\n```\nGET /api/v1/commit/{sha1}/\n```\n\nor \n\n```\nGET /api/v1/{user}/{project}/commit/{sha1}/\n```\n\nIs this possible right now?\n\n", "before_files": [{"content": "\"\"\"Endpoints for listing Projects, Versions, Builds, etc.\"\"\"\n\nfrom __future__ import absolute_import\nimport logging\n\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework import decorators, permissions, viewsets, status\nfrom rest_framework.decorators import detail_route\nfrom rest_framework.renderers import JSONRenderer\nfrom rest_framework.response import Response\n\nfrom readthedocs.builds.constants import BRANCH\nfrom readthedocs.builds.constants import TAG\nfrom readthedocs.builds.models import Build, BuildCommandResult, Version\nfrom readthedocs.core.utils import trigger_build\nfrom readthedocs.core.utils.extend import SettingsOverrideObject\nfrom readthedocs.oauth.services import GitHubService, registry\nfrom readthedocs.oauth.models import RemoteOrganization, RemoteRepository\nfrom readthedocs.projects.models import Project, EmailHook, Domain\nfrom readthedocs.projects.version_handling import determine_stable_version\n\nfrom ..permissions import (APIPermission, APIRestrictedPermission,\n RelatedProjectIsOwner, IsOwner)\nfrom ..serializers import (BuildSerializer, BuildAdminSerializer,\n BuildCommandSerializer,\n ProjectSerializer, ProjectAdminSerializer,\n VersionSerializer, VersionAdminSerializer,\n DomainSerializer, RemoteOrganizationSerializer,\n RemoteRepositorySerializer)\nfrom .. import utils as api_utils\n\nlog = logging.getLogger(__name__)\n\n\nclass UserSelectViewSet(viewsets.ModelViewSet):\n\n \"\"\"\n View set that varies serializer class based on request user credentials.\n\n Viewsets using this class should have an attribute `admin_serializer_class`,\n which is a serializer that might have more fields that only admin/staff\n users require. If the user is staff, this class will be returned instead.\n \"\"\"\n\n def get_serializer_class(self):\n try:\n if self.request.user.is_staff and self.admin_serializer_class is not None:\n return self.admin_serializer_class\n except AttributeError:\n pass\n return self.serializer_class\n\n def get_queryset(self):\n \"\"\"Use our API manager method to determine authorization on queryset.\"\"\"\n return self.model.objects.api(self.request.user)\n\n\nclass ProjectViewSet(UserSelectViewSet):\n\n \"\"\"List, filter, etc. 
Projects.\"\"\"\n\n permission_classes = [APIPermission]\n renderer_classes = (JSONRenderer,)\n serializer_class = ProjectSerializer\n admin_serializer_class = ProjectAdminSerializer\n model = Project\n paginate_by = 100\n paginate_by_param = 'page_size'\n max_paginate_by = 1000\n\n @decorators.detail_route()\n def valid_versions(self, request, **kwargs):\n \"\"\"Maintain state of versions that are wanted.\"\"\"\n project = get_object_or_404(\n Project.objects.api(request.user), pk=kwargs['pk'])\n if not project.num_major or not project.num_minor or not project.num_point:\n return Response(\n {'error': 'Project does not support point version control'},\n status=status.HTTP_400_BAD_REQUEST)\n version_strings = project.supported_versions()\n # Disable making old versions inactive for now.\n # project.versions.exclude(verbose_name__in=version_strings).update(active=False)\n project.versions.filter(\n verbose_name__in=version_strings).update(active=True)\n return Response({\n 'flat': version_strings,\n })\n\n @detail_route()\n def translations(self, *_, **__):\n translations = self.get_object().translations.all()\n return Response({\n 'translations': ProjectSerializer(translations, many=True).data\n })\n\n @detail_route()\n def subprojects(self, request, **kwargs):\n project = get_object_or_404(\n Project.objects.api(request.user), pk=kwargs['pk'])\n rels = project.subprojects.all()\n children = [rel.child for rel in rels]\n return Response({\n 'subprojects': ProjectSerializer(children, many=True).data\n })\n\n @detail_route()\n def active_versions(self, request, **kwargs):\n project = get_object_or_404(\n Project.objects.api(request.user), pk=kwargs['pk'])\n versions = project.versions.filter(active=True)\n return Response({\n 'versions': VersionSerializer(versions, many=True).data\n })\n\n @decorators.detail_route(permission_classes=[permissions.IsAdminUser])\n def token(self, request, **kwargs):\n project = get_object_or_404(\n Project.objects.api(request.user), pk=kwargs['pk'])\n token = GitHubService.get_token_for_project(project, force_local=True)\n return Response({\n 'token': token\n })\n\n @decorators.detail_route()\n def canonical_url(self, request, **kwargs):\n project = get_object_or_404(\n Project.objects.api(request.user), pk=kwargs['pk'])\n return Response({\n 'url': project.get_docs_url()\n })\n\n @decorators.detail_route(permission_classes=[permissions.IsAdminUser], methods=['post'])\n def sync_versions(self, request, **kwargs): # noqa: D205\n \"\"\"\n Sync the version data in the repo (on the build server) with what we\n have in the database.\n\n Returns the identifiers for the versions that have been deleted.\n \"\"\"\n project = get_object_or_404(\n Project.objects.api(request.user), pk=kwargs['pk'])\n\n # If the currently highest non-prerelease version is active, then make\n # the new latest version active as well.\n old_highest_version = determine_stable_version(project.versions.all())\n if old_highest_version is not None:\n activate_new_stable = old_highest_version.active\n else:\n activate_new_stable = False\n\n try:\n # Update All Versions\n data = request.data\n added_versions = set()\n if 'tags' in data:\n ret_set = api_utils.sync_versions(\n project=project, versions=data['tags'], type=TAG)\n added_versions.update(ret_set)\n if 'branches' in data:\n ret_set = api_utils.sync_versions(\n project=project, versions=data['branches'], type=BRANCH)\n added_versions.update(ret_set)\n deleted_versions = api_utils.delete_versions(project, data)\n except Exception as e:\n 
log.exception(\"Sync Versions Error: %s\", e.message)\n return Response({'error': e.message}, status=status.HTTP_400_BAD_REQUEST)\n\n promoted_version = project.update_stable_version()\n if promoted_version:\n new_stable = project.get_stable_version()\n log.info(\n \"Triggering new stable build: {project}:{version}\".format(\n project=project.slug,\n version=new_stable.identifier))\n trigger_build(project=project, version=new_stable)\n\n # Marking the tag that is considered the new stable version as\n # active and building it if it was just added.\n if (\n activate_new_stable and\n promoted_version.slug in added_versions):\n promoted_version.active = True\n promoted_version.save()\n trigger_build(project=project, version=promoted_version)\n\n return Response({\n 'added_versions': added_versions,\n 'deleted_versions': deleted_versions,\n })\n\n\nclass VersionViewSet(UserSelectViewSet):\n\n permission_classes = [APIRestrictedPermission]\n renderer_classes = (JSONRenderer,)\n serializer_class = VersionSerializer\n admin_serializer_class = VersionAdminSerializer\n model = Version\n\n\nclass BuildViewSetBase(UserSelectViewSet):\n permission_classes = [APIRestrictedPermission]\n renderer_classes = (JSONRenderer,)\n serializer_class = BuildSerializer\n admin_serializer_class = BuildAdminSerializer\n model = Build\n\n\nclass BuildViewSet(SettingsOverrideObject):\n\n \"\"\"A pluggable class to allow for build cold storage.\"\"\"\n\n _default_class = BuildViewSetBase\n\n\nclass BuildCommandViewSet(UserSelectViewSet):\n permission_classes = [APIRestrictedPermission]\n renderer_classes = (JSONRenderer,)\n serializer_class = BuildCommandSerializer\n model = BuildCommandResult\n\n\nclass NotificationViewSet(viewsets.ReadOnlyModelViewSet):\n permission_classes = (permissions.IsAuthenticated, RelatedProjectIsOwner)\n renderer_classes = (JSONRenderer,)\n model = EmailHook\n\n def get_queryset(self):\n return self.model.objects.api(self.request.user)\n\n\nclass DomainViewSet(UserSelectViewSet):\n permission_classes = [APIRestrictedPermission]\n renderer_classes = (JSONRenderer,)\n serializer_class = DomainSerializer\n model = Domain\n\n\nclass RemoteOrganizationViewSet(viewsets.ReadOnlyModelViewSet):\n permission_classes = [IsOwner]\n renderer_classes = (JSONRenderer,)\n serializer_class = RemoteOrganizationSerializer\n model = RemoteOrganization\n paginate_by = 25\n\n def get_queryset(self):\n return (self.model.objects.api(self.request.user)\n .filter(account__provider__in=[service.adapter.provider_id\n for service in registry]))\n\n\nclass RemoteRepositoryViewSet(viewsets.ReadOnlyModelViewSet):\n permission_classes = [IsOwner]\n renderer_classes = (JSONRenderer,)\n serializer_class = RemoteRepositorySerializer\n model = RemoteRepository\n\n def get_queryset(self):\n query = self.model.objects.api(self.request.user)\n org = self.request.query_params.get('org', None)\n if org is not None:\n query = query.filter(organization__pk=org)\n query = query.filter(account__provider__in=[service.adapter.provider_id\n for service in registry])\n return query\n\n def get_paginate_by(self):\n return self.request.query_params.get('page_size', 25)\n", "path": "readthedocs/restapi/views/model_views.py"}]}
| 3,354 | 144 |
gh_patches_debug_13155
|
rasdani/github-patches
|
git_diff
|
Zeroto521__my-data-toolkit-88
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: feature_union can't concat different dataframe well
https://github.com/Zeroto521/my-data-toolkit/blob/8b6ec3ce2658626f265bf5e1bab5a7c0897a0787/dtoolkit/transformer.py#L55-L56
When we use transformers to handle data frames, some rows would be deleted.
So use the feature union transformer would cause the following problem.
```python
0 1.0 0.0 0.0 1.0 0.0 ... 0.070607 0.0 1.0 1.0 1.0
1 0.0 1.0 0.0 1.0 0.0 ... 0.000000 0.0 1.0 1.0 1.0
2 0.0 0.0 1.0 0.0 1.0 ... 0.853865 1.0 1.0 1.0 1.0
3 0.0 0.0 1.0 0.0 1.0 ... 0.279593 0.0 0.0 1.0 0.0
4 0.0 0.0 1.0 1.0 0.0 ... 1.000000 0.0 1.0 1.0 0.0
5 1.0 0.0 0.0 0.0 1.0 ... 0.566105 0.0 0.0 1.0 0.0
6 0.0 1.0 0.0 1.0 0.0 ... 0.007911 0.0 1.0 0.0 1.0
7 0.0 1.0 0.0 1.0 0.0 ... 0.220168 0.0 1.0 0.0 1.0
8 0.0 1.0 0.0 1.0 0.0 ... 0.242736 0.0 1.0 0.0 1.0
9 1.0 0.0 0.0 1.0 0.0 ... 0.491557 0.0 1.0 0.0 1.0
10 1.0 0.0 0.0 0.0 1.0 ... NaN NaN NaN NaN NaN
11 NaN NaN NaN NaN NaN ... 0.184352 0.0 1.0 0.0 1.0
```
We could see, row index 10 and 11 data have NaN.
To fix this, there should add a parameter to ignore the index then concat data.
</issue>
<code>
[start of dtoolkit/transformer.py]
1 from __future__ import annotations
2
3 import numpy as np
4 import pandas as pd
5 from more_itertools import flatten
6 from scipy import sparse
7 from sklearn.base import TransformerMixin
8 from sklearn.pipeline import _name_estimators
9 from sklearn.pipeline import FeatureUnion as SKFeatureUnion
10 from sklearn.preprocessing import MinMaxScaler as SKMinMaxScaler
11 from sklearn.preprocessing import OneHotEncoder as SKOneHotEncoder
12
13 from ._checking import check_dataframe_type
14 from ._checking import istype
15 from ._typing import PandasTypeList
16 from .accessor import FilterInAccessor # noqa
17
18
19 class Transformer(TransformerMixin):
20 def __init__(self, *args, **kwargs):
21 self.args = args
22 self.kwargs = kwargs
23
24 def operate(self, X, *_, **__):
25 return X
26
27 def validate(self, *_, **__):
28 ...
29
30 def fit(self, *_):
31 return self
32
33 def transform(self, X, *_):
34 self.validate(X)
35
36 return self.operate(X, *self.args, **self.kwargs)
37
38 def fit_transform(self, X, *_):
39 return self.fit().transform(X)
40
41 def inverse_transform(self, X, *_):
42 return X
43
44
45 #
46 # Sklearn's operation
47 #
48
49
50 class FeatureUnion(SKFeatureUnion):
51 def _hstack(self, Xs):
52 if any(sparse.issparse(f) for f in Xs):
53 return sparse.hstack(Xs).tocsr()
54
55 if all(istype(i, PandasTypeList) for i in Xs):
56 return pd.concat(Xs, axis=1)
57
58 return np.hstack(Xs)
59
60
61 # make_union function ported with modifications from scikit-learn
62 # https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py
63
64
65 def make_union(*transformers, n_jobs=None, verbose=False):
66 return FeatureUnion(
67 _name_estimators(transformers),
68 n_jobs=n_jobs,
69 verbose=verbose,
70 )
71
72
73 def _change_data_to_df(
74 data: np.ndarray,
75 df: pd.DataFrame | np.ndarray,
76 ) -> pd.DataFrame | np.ndarray:
77 if isinstance(df, pd.DataFrame):
78 return pd.DataFrame(data, columns=df.columns, index=df.index)
79
80 return data
81
82
83 class MinMaxScaler(SKMinMaxScaler):
84 def transform(self, X, *_):
85 X_new = super().transform(X, *_)
86
87 return _change_data_to_df(X_new, X)
88
89 def inverse_transform(self, X, *_):
90 X_new = super().inverse_transform(X, *_)
91
92 return _change_data_to_df(X_new, X)
93
94
95 class OneHotEncoder(SKOneHotEncoder):
96 def __init__(
97 self,
98 categories="auto",
99 drop=None,
100 sparse=False,
101 dtype=np.float64,
102 handle_unknown="error",
103 ):
104 super().__init__(
105 categories=categories,
106 drop=drop,
107 sparse=sparse,
108 dtype=dtype,
109 handle_unknown=handle_unknown,
110 )
111
112 def transform(self, X, *_):
113 X_new = super().transform(X, *_)
114
115 if self.sparse is False:
116 categories = flatten(self.categories_)
117 return pd.DataFrame(X_new, columns=categories)
118
119 return X_new
120
121
122 #
123 # Pandas's operation
124 #
125
126
127 class DataFrameTF(Transformer):
128 def validate(self, *args, **kwargs):
129 return check_dataframe_type(*args, **kwargs)
130
131
132 class AssignTF(Transformer):
133 def operate(self, *args, **kwargs):
134 return pd.DataFrame.assign(*args, **kwargs)
135
136
137 class AppendTF(DataFrameTF):
138 def operate(self, *args, **kwargs):
139 return pd.DataFrame.append(*args, **kwargs)
140
141
142 class DropTF(DataFrameTF):
143 def operate(self, *args, **kwargs):
144 return pd.DataFrame.drop(*args, **kwargs)
145
146
147 class EvalTF(DataFrameTF):
148 def operate(self, *args, **kwargs):
149 return pd.DataFrame.eval(*args, **kwargs)
150
151
152 class FillnaTF(DataFrameTF):
153 def operate(self, *args, **kwargs):
154 return pd.DataFrame.fillna(*args, **kwargs)
155
156
157 class FilterInTF(DataFrameTF):
158 def transform(self, X, *_):
159 self.validate(X)
160
161 return X.filterin(*self.args, **self.kwargs)
162
163
164 class FilterTF(DataFrameTF):
165 def operate(self, *args, **kwargs):
166 return pd.DataFrame.filter(*args, **kwargs)
167
168
169 class GetTF(Transformer):
170 def operate(self, *args, **kwargs):
171 return pd.DataFrame.get(*args, **kwargs)
172
173
174 class QueryTF(DataFrameTF):
175 def operate(self, *args, **kwargs):
176 return pd.DataFrame.query(*args, **kwargs)
177
178
179 class ReplaceTF(DataFrameTF):
180 def operate(self, *args, **kwargs):
181 return pd.DataFrame.replace(*args, **kwargs)
182
183
184 #
185 # numpy's operation
186 #
187
188
189 class RavelTF(Transformer):
190 def operate(self, *args, **kwargs):
191 return np.ravel(*args, **kwargs)
192
[end of dtoolkit/transformer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dtoolkit/transformer.py b/dtoolkit/transformer.py
--- a/dtoolkit/transformer.py
+++ b/dtoolkit/transformer.py
@@ -13,6 +13,7 @@
from ._checking import check_dataframe_type
from ._checking import istype
from ._typing import PandasTypeList
+from .accessor import ColumnAccessor # noqa
from .accessor import FilterInAccessor # noqa
@@ -53,6 +54,7 @@
return sparse.hstack(Xs).tocsr()
if all(istype(i, PandasTypeList) for i in Xs):
+ Xs = (i.reset_index(drop=True) for i in Xs)
return pd.concat(Xs, axis=1)
return np.hstack(Xs)
|
{"golden_diff": "diff --git a/dtoolkit/transformer.py b/dtoolkit/transformer.py\n--- a/dtoolkit/transformer.py\n+++ b/dtoolkit/transformer.py\n@@ -13,6 +13,7 @@\n from ._checking import check_dataframe_type\n from ._checking import istype\n from ._typing import PandasTypeList\n+from .accessor import ColumnAccessor # noqa\n from .accessor import FilterInAccessor # noqa\n \n \n@@ -53,6 +54,7 @@\n return sparse.hstack(Xs).tocsr()\n \n if all(istype(i, PandasTypeList) for i in Xs):\n+ Xs = (i.reset_index(drop=True) for i in Xs)\n return pd.concat(Xs, axis=1)\n \n return np.hstack(Xs)\n", "issue": "BUG: feature_union can't concat different dataframe well\nhttps://github.com/Zeroto521/my-data-toolkit/blob/8b6ec3ce2658626f265bf5e1bab5a7c0897a0787/dtoolkit/transformer.py#L55-L56\r\n\r\nWhen we use transformers to handle data frames, some rows would be deleted.\r\nSo use the feature union transformer would cause the following problem.\r\n\r\n```python\r\n0 1.0 0.0 0.0 1.0 0.0 ... 0.070607 0.0 1.0 1.0 1.0\r\n1 0.0 1.0 0.0 1.0 0.0 ... 0.000000 0.0 1.0 1.0 1.0\r\n2 0.0 0.0 1.0 0.0 1.0 ... 0.853865 1.0 1.0 1.0 1.0\r\n3 0.0 0.0 1.0 0.0 1.0 ... 0.279593 0.0 0.0 1.0 0.0\r\n4 0.0 0.0 1.0 1.0 0.0 ... 1.000000 0.0 1.0 1.0 0.0\r\n5 1.0 0.0 0.0 0.0 1.0 ... 0.566105 0.0 0.0 1.0 0.0\r\n6 0.0 1.0 0.0 1.0 0.0 ... 0.007911 0.0 1.0 0.0 1.0\r\n7 0.0 1.0 0.0 1.0 0.0 ... 0.220168 0.0 1.0 0.0 1.0\r\n8 0.0 1.0 0.0 1.0 0.0 ... 0.242736 0.0 1.0 0.0 1.0\r\n9 1.0 0.0 0.0 1.0 0.0 ... 0.491557 0.0 1.0 0.0 1.0\r\n10 1.0 0.0 0.0 0.0 1.0 ... NaN NaN NaN NaN NaN\r\n11 NaN NaN NaN NaN NaN ... 0.184352 0.0 1.0 0.0 1.0\r\n```\r\n\r\nWe could see, row index 10 and 11 data have NaN.\r\n\r\nTo fix this, there should add a parameter to ignore the index then concat data.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport numpy as np\nimport pandas as pd\nfrom more_itertools import flatten\nfrom scipy import sparse\nfrom sklearn.base import TransformerMixin\nfrom sklearn.pipeline import _name_estimators\nfrom sklearn.pipeline import FeatureUnion as SKFeatureUnion\nfrom sklearn.preprocessing import MinMaxScaler as SKMinMaxScaler\nfrom sklearn.preprocessing import OneHotEncoder as SKOneHotEncoder\n\nfrom ._checking import check_dataframe_type\nfrom ._checking import istype\nfrom ._typing import PandasTypeList\nfrom .accessor import FilterInAccessor # noqa\n\n\nclass Transformer(TransformerMixin):\n def __init__(self, *args, **kwargs):\n self.args = args\n self.kwargs = kwargs\n\n def operate(self, X, *_, **__):\n return X\n\n def validate(self, *_, **__):\n ...\n\n def fit(self, *_):\n return self\n\n def transform(self, X, *_):\n self.validate(X)\n\n return self.operate(X, *self.args, **self.kwargs)\n\n def fit_transform(self, X, *_):\n return self.fit().transform(X)\n\n def inverse_transform(self, X, *_):\n return X\n\n\n#\n# Sklearn's operation\n#\n\n\nclass FeatureUnion(SKFeatureUnion):\n def _hstack(self, Xs):\n if any(sparse.issparse(f) for f in Xs):\n return sparse.hstack(Xs).tocsr()\n\n if all(istype(i, PandasTypeList) for i in Xs):\n return pd.concat(Xs, axis=1)\n\n return np.hstack(Xs)\n\n\n# make_union function ported with modifications from scikit-learn\n# https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/pipeline.py\n\n\ndef make_union(*transformers, n_jobs=None, verbose=False):\n return FeatureUnion(\n _name_estimators(transformers),\n n_jobs=n_jobs,\n verbose=verbose,\n )\n\n\ndef _change_data_to_df(\n data: np.ndarray,\n df: pd.DataFrame | np.ndarray,\n) -> pd.DataFrame | 
np.ndarray:\n if isinstance(df, pd.DataFrame):\n return pd.DataFrame(data, columns=df.columns, index=df.index)\n\n return data\n\n\nclass MinMaxScaler(SKMinMaxScaler):\n def transform(self, X, *_):\n X_new = super().transform(X, *_)\n\n return _change_data_to_df(X_new, X)\n\n def inverse_transform(self, X, *_):\n X_new = super().inverse_transform(X, *_)\n\n return _change_data_to_df(X_new, X)\n\n\nclass OneHotEncoder(SKOneHotEncoder):\n def __init__(\n self,\n categories=\"auto\",\n drop=None,\n sparse=False,\n dtype=np.float64,\n handle_unknown=\"error\",\n ):\n super().__init__(\n categories=categories,\n drop=drop,\n sparse=sparse,\n dtype=dtype,\n handle_unknown=handle_unknown,\n )\n\n def transform(self, X, *_):\n X_new = super().transform(X, *_)\n\n if self.sparse is False:\n categories = flatten(self.categories_)\n return pd.DataFrame(X_new, columns=categories)\n\n return X_new\n\n\n#\n# Pandas's operation\n#\n\n\nclass DataFrameTF(Transformer):\n def validate(self, *args, **kwargs):\n return check_dataframe_type(*args, **kwargs)\n\n\nclass AssignTF(Transformer):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.assign(*args, **kwargs)\n\n\nclass AppendTF(DataFrameTF):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.append(*args, **kwargs)\n\n\nclass DropTF(DataFrameTF):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.drop(*args, **kwargs)\n\n\nclass EvalTF(DataFrameTF):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.eval(*args, **kwargs)\n\n\nclass FillnaTF(DataFrameTF):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.fillna(*args, **kwargs)\n\n\nclass FilterInTF(DataFrameTF):\n def transform(self, X, *_):\n self.validate(X)\n\n return X.filterin(*self.args, **self.kwargs)\n\n\nclass FilterTF(DataFrameTF):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.filter(*args, **kwargs)\n\n\nclass GetTF(Transformer):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.get(*args, **kwargs)\n\n\nclass QueryTF(DataFrameTF):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.query(*args, **kwargs)\n\n\nclass ReplaceTF(DataFrameTF):\n def operate(self, *args, **kwargs):\n return pd.DataFrame.replace(*args, **kwargs)\n\n\n#\n# numpy's operation\n#\n\n\nclass RavelTF(Transformer):\n def operate(self, *args, **kwargs):\n return np.ravel(*args, **kwargs)\n", "path": "dtoolkit/transformer.py"}]}
| 2,925 | 181 |
gh_patches_debug_42715
|
rasdani/github-patches
|
git_diff
|
openai__gym-1878
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Box bound precision warning
I get this warning a lot when using Box environments:
```
.../gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32
```
This is particularly annoying, especially because the [default dtype for Box is](https://github.com/openai/gym/blob/master/gym/spaces/box.py#L24) `np.float(32)`
</issue>
<code>
[start of gym/spaces/box.py]
1 import numpy as np
2
3 from .space import Space
4 from gym import logger
5
6
7 class Box(Space):
8 """
9 A (possibly unbounded) box in R^n. Specifically, a Box represents the
10 Cartesian product of n closed intervals. Each interval has the form of one
11 of [a, b], (-oo, b], [a, oo), or (-oo, oo).
12
13 There are two common use cases:
14
15 * Identical bound for each dimension::
16 >>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)
17 Box(3, 4)
18
19 * Independent bound for each dimension::
20 >>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)
21 Box(2,)
22
23 """
24 def __init__(self, low, high, shape=None, dtype=np.float32):
25 assert dtype is not None, 'dtype must be explicitly provided. '
26 self.dtype = np.dtype(dtype)
27
28 if shape is None:
29 assert low.shape == high.shape, 'box dimension mismatch. '
30 self.shape = low.shape
31 self.low = low
32 self.high = high
33 else:
34 assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '
35 self.shape = tuple(shape)
36 self.low = np.full(self.shape, low)
37 self.high = np.full(self.shape, high)
38
39 def _get_precision(dtype):
40 if np.issubdtype(dtype, np.floating):
41 return np.finfo(dtype).precision
42 else:
43 return np.inf
44 low_precision = _get_precision(self.low.dtype)
45 high_precision = _get_precision(self.high.dtype)
46 dtype_precision = _get_precision(self.dtype)
47 if min(low_precision, high_precision) > dtype_precision:
48 logger.warn("Box bound precision lowered by casting to {}".format(self.dtype))
49 self.low = self.low.astype(self.dtype)
50 self.high = self.high.astype(self.dtype)
51
52 # Boolean arrays which indicate the interval type for each coordinate
53 self.bounded_below = -np.inf < self.low
54 self.bounded_above = np.inf > self.high
55
56 super(Box, self).__init__(self.shape, self.dtype)
57
58 def is_bounded(self, manner="both"):
59 below = np.all(self.bounded_below)
60 above = np.all(self.bounded_above)
61 if manner == "both":
62 return below and above
63 elif manner == "below":
64 return below
65 elif manner == "above":
66 return above
67 else:
68 raise ValueError("manner is not in {'below', 'above', 'both'}")
69
70 def sample(self):
71 """
72 Generates a single random sample inside of the Box.
73
74 In creating a sample of the box, each coordinate is sampled according to
75 the form of the interval:
76
77 * [a, b] : uniform distribution
78 * [a, oo) : shifted exponential distribution
79 * (-oo, b] : shifted negative exponential distribution
80 * (-oo, oo) : normal distribution
81 """
82 high = self.high if self.dtype.kind == 'f' \
83 else self.high.astype('int64') + 1
84 sample = np.empty(self.shape)
85
86 # Masking arrays which classify the coordinates according to interval
87 # type
88 unbounded = ~self.bounded_below & ~self.bounded_above
89 upp_bounded = ~self.bounded_below & self.bounded_above
90 low_bounded = self.bounded_below & ~self.bounded_above
91 bounded = self.bounded_below & self.bounded_above
92
93
94 # Vectorized sampling by interval type
95 sample[unbounded] = self.np_random.normal(
96 size=unbounded[unbounded].shape)
97
98 sample[low_bounded] = self.np_random.exponential(
99 size=low_bounded[low_bounded].shape) + self.low[low_bounded]
100
101 sample[upp_bounded] = -self.np_random.exponential(
102 size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]
103
104 sample[bounded] = self.np_random.uniform(low=self.low[bounded],
105 high=high[bounded],
106 size=bounded[bounded].shape)
107 if self.dtype.kind == 'i':
108 sample = np.floor(sample)
109
110 return sample.astype(self.dtype)
111
112 def contains(self, x):
113 if isinstance(x, list):
114 x = np.array(x) # Promote list to array for contains check
115 return x.shape == self.shape and np.all(x >= self.low) and np.all(x <= self.high)
116
117 def to_jsonable(self, sample_n):
118 return np.array(sample_n).tolist()
119
120 def from_jsonable(self, sample_n):
121 return [np.asarray(sample) for sample in sample_n]
122
123 def __repr__(self):
124 return "Box" + str(self.shape)
125
126 def __eq__(self, other):
127 return isinstance(other, Box) and (self.shape == other.shape) and np.allclose(self.low, other.low) and np.allclose(self.high, other.high)
128
[end of gym/spaces/box.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gym/spaces/box.py b/gym/spaces/box.py
--- a/gym/spaces/box.py
+++ b/gym/spaces/box.py
@@ -9,13 +9,13 @@
A (possibly unbounded) box in R^n. Specifically, a Box represents the
Cartesian product of n closed intervals. Each interval has the form of one
of [a, b], (-oo, b], [a, oo), or (-oo, oo).
-
+
There are two common use cases:
-
+
* Identical bound for each dimension::
>>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)
Box(3, 4)
-
+
* Independent bound for each dimension::
>>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)
Box(2,)
@@ -33,8 +33,8 @@
else:
assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '
self.shape = tuple(shape)
- self.low = np.full(self.shape, low)
- self.high = np.full(self.shape, high)
+ self.low = np.full(self.shape, low, dtype=dtype)
+ self.high = np.full(self.shape, high, dtype=dtype)
def _get_precision(dtype):
if np.issubdtype(dtype, np.floating):
@@ -69,12 +69,12 @@
def sample(self):
"""
- Generates a single random sample inside of the Box.
+ Generates a single random sample inside of the Box.
In creating a sample of the box, each coordinate is sampled according to
the form of the interval:
-
- * [a, b] : uniform distribution
+
+ * [a, b] : uniform distribution
* [a, oo) : shifted exponential distribution
* (-oo, b] : shifted negative exponential distribution
* (-oo, oo) : normal distribution
@@ -89,7 +89,7 @@
upp_bounded = ~self.bounded_below & self.bounded_above
low_bounded = self.bounded_below & ~self.bounded_above
bounded = self.bounded_below & self.bounded_above
-
+
# Vectorized sampling by interval type
sample[unbounded] = self.np_random.normal(
@@ -97,18 +97,18 @@
sample[low_bounded] = self.np_random.exponential(
size=low_bounded[low_bounded].shape) + self.low[low_bounded]
-
+
sample[upp_bounded] = -self.np_random.exponential(
size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]
-
- sample[bounded] = self.np_random.uniform(low=self.low[bounded],
+
+ sample[bounded] = self.np_random.uniform(low=self.low[bounded],
high=high[bounded],
size=bounded[bounded].shape)
if self.dtype.kind == 'i':
sample = np.floor(sample)
return sample.astype(self.dtype)
-
+
def contains(self, x):
if isinstance(x, list):
x = np.array(x) # Promote list to array for contains check
|
{"golden_diff": "diff --git a/gym/spaces/box.py b/gym/spaces/box.py\n--- a/gym/spaces/box.py\n+++ b/gym/spaces/box.py\n@@ -9,13 +9,13 @@\n A (possibly unbounded) box in R^n. Specifically, a Box represents the\n Cartesian product of n closed intervals. Each interval has the form of one\n of [a, b], (-oo, b], [a, oo), or (-oo, oo).\n- \n+\n There are two common use cases:\n- \n+\n * Identical bound for each dimension::\n >>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)\n Box(3, 4)\n- \n+\n * Independent bound for each dimension::\n >>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)\n Box(2,)\n@@ -33,8 +33,8 @@\n else:\n assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '\n self.shape = tuple(shape)\n- self.low = np.full(self.shape, low)\n- self.high = np.full(self.shape, high)\n+ self.low = np.full(self.shape, low, dtype=dtype)\n+ self.high = np.full(self.shape, high, dtype=dtype)\n \n def _get_precision(dtype):\n if np.issubdtype(dtype, np.floating):\n@@ -69,12 +69,12 @@\n \n def sample(self):\n \"\"\"\n- Generates a single random sample inside of the Box. \n+ Generates a single random sample inside of the Box.\n \n In creating a sample of the box, each coordinate is sampled according to\n the form of the interval:\n- \n- * [a, b] : uniform distribution \n+\n+ * [a, b] : uniform distribution\n * [a, oo) : shifted exponential distribution\n * (-oo, b] : shifted negative exponential distribution\n * (-oo, oo) : normal distribution\n@@ -89,7 +89,7 @@\n upp_bounded = ~self.bounded_below & self.bounded_above\n low_bounded = self.bounded_below & ~self.bounded_above\n bounded = self.bounded_below & self.bounded_above\n- \n+\n \n # Vectorized sampling by interval type\n sample[unbounded] = self.np_random.normal(\n@@ -97,18 +97,18 @@\n \n sample[low_bounded] = self.np_random.exponential(\n size=low_bounded[low_bounded].shape) + self.low[low_bounded]\n- \n+\n sample[upp_bounded] = -self.np_random.exponential(\n size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]\n- \n- sample[bounded] = self.np_random.uniform(low=self.low[bounded], \n+\n+ sample[bounded] = self.np_random.uniform(low=self.low[bounded],\n high=high[bounded],\n size=bounded[bounded].shape)\n if self.dtype.kind == 'i':\n sample = np.floor(sample)\n \n return sample.astype(self.dtype)\n- \n+\n def contains(self, x):\n if isinstance(x, list):\n x = np.array(x) # Promote list to array for contains check\n", "issue": "Box bound precision warning\nI get this warning a lot when using Box environments:\r\n\r\n```\r\n.../gym/logger.py:30: UserWarning: WARN: Box bound precision lowered by casting to float32\r\n```\r\nThis is particularly annoying, especially because the [default dtype for Box is](https://github.com/openai/gym/blob/master/gym/spaces/box.py#L24) `np.float(32)`\n", "before_files": [{"content": "import numpy as np\n\nfrom .space import Space\nfrom gym import logger\n\n\nclass Box(Space):\n \"\"\"\n A (possibly unbounded) box in R^n. Specifically, a Box represents the\n Cartesian product of n closed intervals. 
Each interval has the form of one\n of [a, b], (-oo, b], [a, oo), or (-oo, oo).\n \n There are two common use cases:\n \n * Identical bound for each dimension::\n >>> Box(low=-1.0, high=2.0, shape=(3, 4), dtype=np.float32)\n Box(3, 4)\n \n * Independent bound for each dimension::\n >>> Box(low=np.array([-1.0, -2.0]), high=np.array([2.0, 4.0]), dtype=np.float32)\n Box(2,)\n\n \"\"\"\n def __init__(self, low, high, shape=None, dtype=np.float32):\n assert dtype is not None, 'dtype must be explicitly provided. '\n self.dtype = np.dtype(dtype)\n\n if shape is None:\n assert low.shape == high.shape, 'box dimension mismatch. '\n self.shape = low.shape\n self.low = low\n self.high = high\n else:\n assert np.isscalar(low) and np.isscalar(high), 'box requires scalar bounds. '\n self.shape = tuple(shape)\n self.low = np.full(self.shape, low)\n self.high = np.full(self.shape, high)\n\n def _get_precision(dtype):\n if np.issubdtype(dtype, np.floating):\n return np.finfo(dtype).precision\n else:\n return np.inf\n low_precision = _get_precision(self.low.dtype)\n high_precision = _get_precision(self.high.dtype)\n dtype_precision = _get_precision(self.dtype)\n if min(low_precision, high_precision) > dtype_precision:\n logger.warn(\"Box bound precision lowered by casting to {}\".format(self.dtype))\n self.low = self.low.astype(self.dtype)\n self.high = self.high.astype(self.dtype)\n\n # Boolean arrays which indicate the interval type for each coordinate\n self.bounded_below = -np.inf < self.low\n self.bounded_above = np.inf > self.high\n\n super(Box, self).__init__(self.shape, self.dtype)\n\n def is_bounded(self, manner=\"both\"):\n below = np.all(self.bounded_below)\n above = np.all(self.bounded_above)\n if manner == \"both\":\n return below and above\n elif manner == \"below\":\n return below\n elif manner == \"above\":\n return above\n else:\n raise ValueError(\"manner is not in {'below', 'above', 'both'}\")\n\n def sample(self):\n \"\"\"\n Generates a single random sample inside of the Box. 
\n\n In creating a sample of the box, each coordinate is sampled according to\n the form of the interval:\n \n * [a, b] : uniform distribution \n * [a, oo) : shifted exponential distribution\n * (-oo, b] : shifted negative exponential distribution\n * (-oo, oo) : normal distribution\n \"\"\"\n high = self.high if self.dtype.kind == 'f' \\\n else self.high.astype('int64') + 1\n sample = np.empty(self.shape)\n\n # Masking arrays which classify the coordinates according to interval\n # type\n unbounded = ~self.bounded_below & ~self.bounded_above\n upp_bounded = ~self.bounded_below & self.bounded_above\n low_bounded = self.bounded_below & ~self.bounded_above\n bounded = self.bounded_below & self.bounded_above\n \n\n # Vectorized sampling by interval type\n sample[unbounded] = self.np_random.normal(\n size=unbounded[unbounded].shape)\n\n sample[low_bounded] = self.np_random.exponential(\n size=low_bounded[low_bounded].shape) + self.low[low_bounded]\n \n sample[upp_bounded] = -self.np_random.exponential(\n size=upp_bounded[upp_bounded].shape) + self.high[upp_bounded]\n \n sample[bounded] = self.np_random.uniform(low=self.low[bounded], \n high=high[bounded],\n size=bounded[bounded].shape)\n if self.dtype.kind == 'i':\n sample = np.floor(sample)\n\n return sample.astype(self.dtype)\n \n def contains(self, x):\n if isinstance(x, list):\n x = np.array(x) # Promote list to array for contains check\n return x.shape == self.shape and np.all(x >= self.low) and np.all(x <= self.high)\n\n def to_jsonable(self, sample_n):\n return np.array(sample_n).tolist()\n\n def from_jsonable(self, sample_n):\n return [np.asarray(sample) for sample in sample_n]\n\n def __repr__(self):\n return \"Box\" + str(self.shape)\n\n def __eq__(self, other):\n return isinstance(other, Box) and (self.shape == other.shape) and np.allclose(self.low, other.low) and np.allclose(self.high, other.high)\n", "path": "gym/spaces/box.py"}]}
| 2,041 | 769 |
gh_patches_debug_67479
|
rasdani/github-patches
|
git_diff
|
scverse__scanpy-783
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
get.rank_genes_groups() key argument not used
`rank_genes_groups_df` takes `key` as an argument, and the docs say it is the key the differential expression groups were stored under. However, the function does not use that key and fetches DE results from the default 'rank_genes_groups' key.
Line 55 under `rank_genes_groups_df()` in scanpy/get.py:
`d[k] = adata.uns["rank_genes_groups"][k][group]` should be changed to `d[k] = adata.uns[key][k][group]`
</issue>
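For orientation, a minimal sketch of how the ignored `key` argument shows up in practice; the dataset call and the key name `"wilcoxon"` are illustrative assumptions, not code from the repository:

```python
# Sketch only: store DE results under a custom key, then ask for them back.
import scanpy as sc

pbmc = sc.datasets.pbmc68k_reduced()
sc.tl.rank_genes_groups(pbmc, groupby="louvain", method="wilcoxon", key_added="wilcoxon")

# Because line 55 always reads adata.uns["rank_genes_groups"], this call ignores
# key="wilcoxon": it returns whatever sits under the default key, or raises
# KeyError if nothing was ever stored there.
dedf = sc.get.rank_genes_groups_df(pbmc, group="0", key="wilcoxon")
```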
<code>
[start of scanpy/get.py]
1 """This module contains helper functions for accessing data."""
2 from typing import Optional, Iterable, Tuple
3
4 import numpy as np
5 import pandas as pd
6 from scipy.sparse import spmatrix
7
8 from anndata import AnnData
9 # --------------------------------------------------------------------------------
10 # Plotting data helpers
11 # --------------------------------------------------------------------------------
12
13
14 # TODO: implement diffxpy method, make singledispatch
15 def rank_genes_groups_df(
16 adata: AnnData,
17 group: str, # Can this be something other than a str?
18 *,
19 key: str = "rank_genes_groups",
20 pval_cutoff: Optional[float] = None,
21 log2fc_min: Optional[float] = None,
22 log2fc_max: Optional[float] = None,
23 gene_symbols: Optional[str] = None
24 ) -> pd.DataFrame:
25 """
26 :func:`scanpy.tl.rank_genes_groups` results in the form of a :class:`pd.DataFrame`.
27
28 Params
29 ------
30 adata
31 Object to get results from.
32 group
33 Which group (as in :func:`scanpy.tl.rank_genes_groups`'s `groupby`
34 argument) to return results from.
35 key
36 Key differential expression groups were stored under.
37 pval_cutoff
38 Minimum adjusted pval to return.
39 log2fc_min
40 Minumum logfc to return.
41 log2fc_max
42 Maximum logfc to return.
43 gene_symbols
44 Column name in `.var` DataFrame that stores gene symbols. Specifying
45 this will add that column to the returned dataframe.
46
47 Example
48 -------
49 >>> pbmc = sc.datasets.pbmc68k_reduced()
50 >>> sc.tl.rank_genes_groups(pbmc, groupby="louvain", use_raw=True, n_genes=pbmc.shape[1])
51 >>> dedf = sc.get.rank_genes_groups_df(pbmc, group="0")
52 """
53 d = pd.DataFrame()
54 for k in ['scores', 'names', 'logfoldchanges', 'pvals', 'pvals_adj']:
55 d[k] = adata.uns["rank_genes_groups"][k][group]
56 if pval_cutoff is not None:
57 d = d[d["pvals_adj"] < pval_cutoff]
58 if log2fc_min is not None:
59 d = d[d["logfoldchanges"] > log2fc_min]
60 if log2fc_max is not None:
61 d = d[d["logfoldchanges"] < log2fc_max]
62 if gene_symbols is not None:
63 d = d.join(adata.var[gene_symbols], on="names")
64 return d
65
66
67 def obs_df(
68 adata: AnnData,
69 keys: Iterable[str] = (),
70 obsm_keys: Iterable[Tuple[str, int]] = (),
71 *,
72 layer: str = None,
73 gene_symbols: str = None,
74 use_raw: bool = False
75 ) -> pd.DataFrame:
76 """\
77 Return values for observations in adata.
78
79 Params
80 ------
81 adata
82 AnnData object to get values from.
83 keys
84 Keys from either `.var_names`, `.var[gene_symbols]`, or `.obs.columns`.
85 obsm_keys
86 Tuple of `(key from obsm, column index of obsm[key])`.
87 layer
88 Layer of `adata` to use as expression values.
89 gene_symbols
90 Column of `adata.var` to search for `keys` in.
91 use_raw
92 Whether to get expression values from `adata.raw`.
93
94 Returns
95 -------
96 A dataframe with `adata.obs_names` as index, and values specified by `keys`
97 and `obsm_keys`.
98
99 Examples
100 --------
101 Getting value for plotting:
102
103 >>> pbmc = sc.datasets.pbmc68k_reduced()
104 >>> plotdf = sc.get.obs_df(
105 pbmc,
106 keys=["CD8B", "n_genes"],
107 obsm_keys=[("X_umap", 0), ("X_umap", 1)]
108 )
109 >>> plotdf.plot.scatter("X_umap0", "X_umap1", c="CD8B")
110
111 Calculating mean expression for marker genes by cluster:
112
113 >>> pbmc = sc.datasets.pbmc68k_reduced()
114 >>> marker_genes = ['CD79A', 'MS4A1', 'CD8A', 'CD8B', 'LYZ']
115 >>> genedf = sc.get.obs_df(
116 pbmc,
117 keys=["louvain", *marker_genes]
118 )
119 >>> grouped = genedf.groupby("louvain")
120 >>> mean, var = grouped.mean(), grouped.var()
121 """
122 if use_raw:
123 assert layer is None, "Cannot specify use_raw=True and a layer at the same time."
124 if gene_symbols is not None:
125 gene_names = pd.Series(adata.raw.var_names, index=adata.raw.var[gene_symbols])
126 else:
127 gene_names = pd.Series(adata.raw.var_names, index=adata.raw.var_names)
128 else:
129 if gene_symbols is not None:
130 gene_names = pd.Series(adata.var_names, index=adata.var[gene_symbols])
131 else:
132 gene_names = pd.Series(adata.var_names, index=adata.var_names)
133 lookup_keys = []
134 not_found = []
135 for key in keys:
136 if key in adata.obs.columns:
137 lookup_keys.append(key)
138 elif key in gene_names.index:
139 lookup_keys.append(gene_names[key])
140 else:
141 not_found.append(key)
142 if len(not_found) > 0:
143 if use_raw:
144 if gene_symbols is None:
145 gene_error = "`adata.raw.var_names`"
146 else:
147 gene_error = "gene_symbols column `adata.raw.var[{}].values`".format(gene_symbols)
148 else:
149 if gene_symbols is None:
150 gene_error = "`adata.var_names`"
151 else:
152 gene_error = "gene_symbols column `adata.var[{}].values`".format(gene_symbols)
153 raise KeyError(
154 f"Could not find keys '{not_found}' in columns of `adata.obs` or in"
155 f" {gene_error}."
156 )
157
158 # Make df
159 df = pd.DataFrame(index=adata.obs_names)
160 for k, l in zip(keys, lookup_keys):
161 if not use_raw or k in adata.obs.columns:
162 df[k] = adata.obs_vector(l, layer=layer)
163 else:
164 df[k] = adata.raw.obs_vector(l)
165 for k, idx in obsm_keys:
166 added_k = f"{k}-{idx}"
167 val = adata.obsm[k]
168 if isinstance(val, np.ndarray):
169 df[added_k] = np.ravel(val[:, idx])
170 elif isinstance(val, spmatrix):
171 df[added_k] = np.ravel(val[:, idx].toarray())
172 elif isinstance(val, pd.DataFrame):
173 df[added_k] = val.loc[:, idx]
174 return df
175
176
177 def var_df(
178 adata: AnnData,
179 keys: Iterable[str] = (),
180 varm_keys: Iterable[Tuple[str, int]] = (),
181 *,
182 layer: str = None,
183 ) -> pd.DataFrame:
184 """\
185 Return values for observations in adata.
186
187 Params
188 ------
189 adata
190 AnnData object to get values from.
191 keys
192 Keys from either `.obs_names`, or `.var.columns`.
193 varm_keys
194 Tuple of `(key from varm, column index of varm[key])`.
195 layer
196 Layer of `adata` to use as expression values.
197
198 Returns
199 -------
200 A dataframe with `adata.var_names` as index, and values specified by `keys`
201 and `varm_keys`.
202 """
203 # Argument handling
204 lookup_keys = []
205 not_found = []
206 for key in keys:
207 if key in adata.var.columns:
208 lookup_keys.append(key)
209 elif key in adata.obs_names:
210 lookup_keys.append(key)
211 else:
212 not_found.append(key)
213 if len(not_found) > 0:
214 raise KeyError(
215 f"Could not find keys '{not_found}' in columns of `adata.var` or"
216 " in `adata.obs_names`."
217 )
218
219 # Make df
220 df = pd.DataFrame(index=adata.var_names)
221 for k, l in zip(keys, lookup_keys):
222 df[k] = adata.var_vector(l, layer=layer)
223 for k, idx in varm_keys:
224 added_k = f"{k}-{idx}"
225 val = adata.varm[k]
226 if isinstance(val, np.ndarray):
227 df[added_k] = np.ravel(val[:, idx])
228 elif isinstance(val, spmatrix):
229 df[added_k] = np.ravel(val[:, idx].toarray())
230 elif isinstance(val, pd.DataFrame):
231 df[added_k] = val.loc[:, idx]
232 return df
233
[end of scanpy/get.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scanpy/get.py b/scanpy/get.py
--- a/scanpy/get.py
+++ b/scanpy/get.py
@@ -52,7 +52,7 @@
"""
d = pd.DataFrame()
for k in ['scores', 'names', 'logfoldchanges', 'pvals', 'pvals_adj']:
- d[k] = adata.uns["rank_genes_groups"][k][group]
+ d[k] = adata.uns[key][k][group]
if pval_cutoff is not None:
d = d[d["pvals_adj"] < pval_cutoff]
if log2fc_min is not None:
|
{"golden_diff": "diff --git a/scanpy/get.py b/scanpy/get.py\n--- a/scanpy/get.py\n+++ b/scanpy/get.py\n@@ -52,7 +52,7 @@\n \"\"\"\n d = pd.DataFrame()\n for k in ['scores', 'names', 'logfoldchanges', 'pvals', 'pvals_adj']:\n- d[k] = adata.uns[\"rank_genes_groups\"][k][group]\n+ d[k] = adata.uns[key][k][group]\n if pval_cutoff is not None:\n d = d[d[\"pvals_adj\"] < pval_cutoff]\n if log2fc_min is not None:\n", "issue": "get.rank_genes_groups() key argument not used\n`rank_genes_groups_df` takes `key` as an argument and the docs says it is the key differential expression groups were stored under. However, the function does not use that key and fetches DE results from the default 'rank_genes_groups' key.\r\n\r\nline 55 under `rank_genes_groups_df() ` in scanpy/get.py\r\n`d[k] = adata.uns[\"rank_genes_groups\"][k][group]` should be changed to `d[k] = adata.uns[key][k][group]`\n", "before_files": [{"content": "\"\"\"This module contains helper functions for accessing data.\"\"\"\nfrom typing import Optional, Iterable, Tuple\n\nimport numpy as np\nimport pandas as pd\nfrom scipy.sparse import spmatrix\n\nfrom anndata import AnnData\n# --------------------------------------------------------------------------------\n# Plotting data helpers\n# --------------------------------------------------------------------------------\n\n\n# TODO: implement diffxpy method, make singledispatch\ndef rank_genes_groups_df(\n adata: AnnData,\n group: str, # Can this be something other than a str?\n *,\n key: str = \"rank_genes_groups\",\n pval_cutoff: Optional[float] = None,\n log2fc_min: Optional[float] = None,\n log2fc_max: Optional[float] = None,\n gene_symbols: Optional[str] = None\n) -> pd.DataFrame:\n \"\"\"\n :func:`scanpy.tl.rank_genes_groups` results in the form of a :class:`pd.DataFrame`.\n\n Params\n ------\n adata\n Object to get results from.\n group\n Which group (as in :func:`scanpy.tl.rank_genes_groups`'s `groupby`\n argument) to return results from.\n key\n Key differential expression groups were stored under.\n pval_cutoff\n Minimum adjusted pval to return.\n log2fc_min\n Minumum logfc to return.\n log2fc_max\n Maximum logfc to return.\n gene_symbols\n Column name in `.var` DataFrame that stores gene symbols. 
Specifying\n this will add that column to the returned dataframe.\n\n Example\n -------\n >>> pbmc = sc.datasets.pbmc68k_reduced()\n >>> sc.tl.rank_genes_groups(pbmc, groupby=\"louvain\", use_raw=True, n_genes=pbmc.shape[1])\n >>> dedf = sc.get.rank_genes_groups_df(pbmc, group=\"0\")\n \"\"\"\n d = pd.DataFrame()\n for k in ['scores', 'names', 'logfoldchanges', 'pvals', 'pvals_adj']:\n d[k] = adata.uns[\"rank_genes_groups\"][k][group]\n if pval_cutoff is not None:\n d = d[d[\"pvals_adj\"] < pval_cutoff]\n if log2fc_min is not None:\n d = d[d[\"logfoldchanges\"] > log2fc_min]\n if log2fc_max is not None:\n d = d[d[\"logfoldchanges\"] < log2fc_max]\n if gene_symbols is not None:\n d = d.join(adata.var[gene_symbols], on=\"names\")\n return d\n\n\ndef obs_df(\n adata: AnnData,\n keys: Iterable[str] = (),\n obsm_keys: Iterable[Tuple[str, int]] = (),\n *,\n layer: str = None,\n gene_symbols: str = None,\n use_raw: bool = False\n) -> pd.DataFrame:\n \"\"\"\\\n Return values for observations in adata.\n\n Params\n ------\n adata\n AnnData object to get values from.\n keys\n Keys from either `.var_names`, `.var[gene_symbols]`, or `.obs.columns`.\n obsm_keys\n Tuple of `(key from obsm, column index of obsm[key])`.\n layer\n Layer of `adata` to use as expression values.\n gene_symbols\n Column of `adata.var` to search for `keys` in.\n use_raw\n Whether to get expression values from `adata.raw`.\n\n Returns\n -------\n A dataframe with `adata.obs_names` as index, and values specified by `keys`\n and `obsm_keys`.\n\n Examples\n --------\n Getting value for plotting:\n\n >>> pbmc = sc.datasets.pbmc68k_reduced()\n >>> plotdf = sc.get.obs_df(\n pbmc,\n keys=[\"CD8B\", \"n_genes\"],\n obsm_keys=[(\"X_umap\", 0), (\"X_umap\", 1)]\n )\n >>> plotdf.plot.scatter(\"X_umap0\", \"X_umap1\", c=\"CD8B\")\n\n Calculating mean expression for marker genes by cluster:\n\n >>> pbmc = sc.datasets.pbmc68k_reduced()\n >>> marker_genes = ['CD79A', 'MS4A1', 'CD8A', 'CD8B', 'LYZ']\n >>> genedf = sc.get.obs_df(\n pbmc,\n keys=[\"louvain\", *marker_genes]\n )\n >>> grouped = genedf.groupby(\"louvain\")\n >>> mean, var = grouped.mean(), grouped.var()\n \"\"\"\n if use_raw:\n assert layer is None, \"Cannot specify use_raw=True and a layer at the same time.\"\n if gene_symbols is not None:\n gene_names = pd.Series(adata.raw.var_names, index=adata.raw.var[gene_symbols])\n else:\n gene_names = pd.Series(adata.raw.var_names, index=adata.raw.var_names)\n else:\n if gene_symbols is not None:\n gene_names = pd.Series(adata.var_names, index=adata.var[gene_symbols])\n else:\n gene_names = pd.Series(adata.var_names, index=adata.var_names)\n lookup_keys = []\n not_found = []\n for key in keys:\n if key in adata.obs.columns:\n lookup_keys.append(key)\n elif key in gene_names.index:\n lookup_keys.append(gene_names[key])\n else:\n not_found.append(key)\n if len(not_found) > 0:\n if use_raw:\n if gene_symbols is None:\n gene_error = \"`adata.raw.var_names`\"\n else:\n gene_error = \"gene_symbols column `adata.raw.var[{}].values`\".format(gene_symbols)\n else:\n if gene_symbols is None:\n gene_error = \"`adata.var_names`\"\n else:\n gene_error = \"gene_symbols column `adata.var[{}].values`\".format(gene_symbols)\n raise KeyError(\n f\"Could not find keys '{not_found}' in columns of `adata.obs` or in\"\n f\" {gene_error}.\"\n )\n\n # Make df\n df = pd.DataFrame(index=adata.obs_names)\n for k, l in zip(keys, lookup_keys):\n if not use_raw or k in adata.obs.columns:\n df[k] = adata.obs_vector(l, layer=layer)\n else:\n df[k] = 
adata.raw.obs_vector(l)\n for k, idx in obsm_keys:\n added_k = f\"{k}-{idx}\"\n val = adata.obsm[k]\n if isinstance(val, np.ndarray):\n df[added_k] = np.ravel(val[:, idx])\n elif isinstance(val, spmatrix):\n df[added_k] = np.ravel(val[:, idx].toarray())\n elif isinstance(val, pd.DataFrame):\n df[added_k] = val.loc[:, idx]\n return df\n\n\ndef var_df(\n adata: AnnData,\n keys: Iterable[str] = (),\n varm_keys: Iterable[Tuple[str, int]] = (),\n *,\n layer: str = None,\n) -> pd.DataFrame:\n \"\"\"\\\n Return values for observations in adata.\n\n Params\n ------\n adata\n AnnData object to get values from.\n keys\n Keys from either `.obs_names`, or `.var.columns`.\n varm_keys\n Tuple of `(key from varm, column index of varm[key])`.\n layer\n Layer of `adata` to use as expression values.\n\n Returns\n -------\n A dataframe with `adata.var_names` as index, and values specified by `keys`\n and `varm_keys`.\n \"\"\"\n # Argument handling\n lookup_keys = []\n not_found = []\n for key in keys:\n if key in adata.var.columns:\n lookup_keys.append(key)\n elif key in adata.obs_names:\n lookup_keys.append(key)\n else:\n not_found.append(key)\n if len(not_found) > 0:\n raise KeyError(\n f\"Could not find keys '{not_found}' in columns of `adata.var` or\"\n \" in `adata.obs_names`.\"\n )\n\n # Make df\n df = pd.DataFrame(index=adata.var_names)\n for k, l in zip(keys, lookup_keys):\n df[k] = adata.var_vector(l, layer=layer)\n for k, idx in varm_keys:\n added_k = f\"{k}-{idx}\"\n val = adata.varm[k]\n if isinstance(val, np.ndarray):\n df[added_k] = np.ravel(val[:, idx])\n elif isinstance(val, spmatrix):\n df[added_k] = np.ravel(val[:, idx].toarray())\n elif isinstance(val, pd.DataFrame):\n df[added_k] = val.loc[:, idx]\n return df\n", "path": "scanpy/get.py"}]}
| 3,139 | 145 |
gh_patches_debug_23301
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-2266
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
matplotlib.use('Agg') conflicts with user code
In this line in `chainer/chainer/training/extensions/plot_report.py` the `matplotlib` backend is changed [Source](https://github.com/pfnet/chainer/blob/master/chainer/training/extensions/plot_report.py#L16):
matplotlib.use('Agg')
Unfortunately, this can interfere with user code. For example, when the user sets the backend himself anywhere, it is not clear whether his setting or the Chainer setting wins (whichever is imported first).
The `plot_report` module gets imported when `extensions` is imported. For now, I just removed the import from the corresponding `__init__` file locally, which is definitely not a clean solution.
</issue>
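The patch recorded further down in this entry drops the hard-coded `use('Agg')` and documents the import order instead; a small sketch of that documented usage, where `'TkAgg'` is only an example backend:

```python
# Sketch of the usage pattern the patch documents ('TkAgg' is just an example).
import matplotlib
matplotlib.use('TkAgg')   # the user's backend choice has to come first ...

import chainer
from chainer.training import extensions
# ... because importing the extensions pulls in matplotlib.pyplot (via
# plot_report), after which further matplotlib.use() calls no longer take effect.
```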
<code>
[start of chainer/training/extensions/plot_report.py]
1 import json
2 from os import path
3 import warnings
4
5 import numpy
6 import six
7
8 from chainer import reporter
9 import chainer.serializer as serializer_module
10 from chainer.training import extension
11 import chainer.training.trigger as trigger_module
12
13 try:
14 import matplotlib
15
16 matplotlib.use('Agg')
17 from matplotlib import pyplot as plot
18
19 _available = True
20
21 except ImportError:
22 _available = False
23
24
25 def _check_available():
26 if not _available:
27 warnings.warn('matplotlib is not installed on your environment, '
28 'so nothing will be plotted at this time. '
29 'Please install matplotlib to plot figures.\n\n'
30 ' $ pip install matplotlib\n')
31
32
33 class PlotReport(extension.Extension):
34
35 """Trainer extension to output plots.
36
37 This extension accumulates the observations of the trainer to
38 :class:`~chainer.DictSummary` at a regular interval specified by a supplied
39 trigger, and plot a graph with using them.
40
41 There are two triggers to handle this extension. One is the trigger to
42 invoke this extension, which is used to handle the timing of accumulating
43 the results. It is set to ``1, 'iteration'`` by default. The other is the
44 trigger to determine when to emit the result. When this trigger returns
45 True, this extension appends the summary of accumulated values to the list
46 of past summaries, and writes the list to the log file. Then, this
47 extension makes a new fresh summary object which is used until the next
48 time that the trigger fires.
49
50 It also adds ``'epoch'`` and ``'iteration'`` entries to each result
51 dictionary, which are the epoch and iteration counts at the output.
52
53 Args:
54 y_keys (iterable of strs): Keys of values regarded as y. If this is
55 None, nothing is output to the graph.
56 x_key (str): Keys of values regarded as x. The default value is
57 'iteration'.
58 trigger: Trigger that decides when to aggregate the result and output
59 the values. This is distinct from the trigger of this extension
60 itself. If it is a tuple in the form ``<int>, 'epoch'`` or ``<int>,
61 'iteration'``, it is passed to :class:`IntervalTrigger`.
62 postprocess: Callback to postprocess the result dictionaries. Figure
63 object, Axes object, and all plot data are passed to this callback
64 in this order. This callback can modify the figure.
65 file_name (str): Name of the figure file under the output directory.
66 It can be a format string.
67 marker (str): The marker used to plot the graph. Default is ``'x'``. If
68 ``None`` is given, it draws with no markers.
69 grid (bool): Set the axis grid on if True. Default is True.
70
71 """
72
73 def __init__(self, y_keys, x_key='iteration', trigger=(1, 'epoch'),
74 postprocess=None, file_name='plot.png', marker='x',
75 grid=True):
76
77 _check_available()
78
79 if not _available:
80 return
81
82 self._x_key = x_key
83 if isinstance(y_keys, str):
84 y_keys = (y_keys,)
85
86 self._y_keys = y_keys
87 self._trigger = trigger_module.get_trigger(trigger)
88 self._file_name = file_name
89 self._marker = marker
90 self._grid = grid
91 self._postprocess = postprocess
92 self._init_summary()
93 self._data = {k: [] for k in y_keys}
94
95 def __call__(self, trainer):
96 if not _available:
97 return
98
99 keys = self._y_keys
100 observation = trainer.observation
101 summary = self._summary
102
103 if keys is None:
104 summary.add(observation)
105 else:
106 summary.add({k: observation[k] for k in keys if k in observation})
107
108 if self._trigger(trainer):
109 stats = self._summary.compute_mean()
110 stats_cpu = {}
111 for name, value in six.iteritems(stats):
112 stats_cpu[name] = float(value) # copy to CPU
113
114 updater = trainer.updater
115 stats_cpu['epoch'] = updater.epoch
116 stats_cpu['iteration'] = updater.iteration
117 x = stats_cpu[self._x_key]
118 data = self._data
119
120 for k in keys:
121 if k in stats_cpu:
122 data[k].append((x, stats_cpu[k]))
123
124 f = plot.figure()
125 a = f.add_subplot(111)
126 a.set_xlabel(self._x_key)
127 if self._grid:
128 a.grid()
129
130 for k in keys:
131 xy = data[k]
132 if len(xy) == 0:
133 continue
134
135 xy = numpy.array(xy)
136 a.plot(xy[:, 0], xy[:, 1], marker=self._marker, label=k)
137
138 if a.has_data():
139 if self._postprocess is not None:
140 self._postprocess(f, a, summary)
141 l = a.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
142 f.savefig(path.join(trainer.out, self._file_name),
143 bbox_extra_artists=(l,), bbox_inches='tight')
144
145 plot.close()
146 self._init_summary()
147
148 def serialize(self, serializer):
149 if isinstance(serializer, serializer_module.Serializer):
150 serializer('_plot_{}'.format(self._file_name),
151 json.dumps(self._data))
152
153 else:
154 self._data = json.loads(
155 serializer('_plot_{}'.format(self._file_name), ''))
156
157 def _init_summary(self):
158 self._summary = reporter.DictSummary()
159
[end of chainer/training/extensions/plot_report.py]
</code>
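Since the listing above only shows the extension's definition, here is a typical way it gets attached to a trainer — a sketch that assumes a `trainer` object is already set up and that the model reports the named keys:

```python
# Usage sketch (assumes `trainer` already exists and the model reports
# 'main/loss' and 'validation/main/loss').
from chainer.training import extensions

trainer.extend(extensions.PlotReport(
    ['main/loss', 'validation/main/loss'],
    x_key='epoch',
    file_name='loss.png',
))
```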
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/training/extensions/plot_report.py b/chainer/training/extensions/plot_report.py
--- a/chainer/training/extensions/plot_report.py
+++ b/chainer/training/extensions/plot_report.py
@@ -11,9 +11,6 @@
import chainer.training.trigger as trigger_module
try:
- import matplotlib
-
- matplotlib.use('Agg')
from matplotlib import pyplot as plot
_available = True
@@ -50,6 +47,25 @@
It also adds ``'epoch'`` and ``'iteration'`` entries to each result
dictionary, which are the epoch and iteration counts at the output.
+ .. warning::
+
+ If your environment needs to specify a backend of matplotlib
+ explicitly, please call ``matplotlib.use`` before importing Chainer.
+ For example:
+
+ .. code-block:: python
+
+ import matplotlib
+ matplotlib.use('Agg')
+
+ import chainer
+
+ Then, once ``chainer.training.extensions`` is imported,
+ ``matplotlib.use`` will have no effect.
+
+ For the details, please see here:
+ http://matplotlib.org/faq/usage_faq.html#what-is-a-backend
+
Args:
y_keys (iterable of strs): Keys of values regarded as y. If this is
None, nothing is output to the graph.
|
{"golden_diff": "diff --git a/chainer/training/extensions/plot_report.py b/chainer/training/extensions/plot_report.py\n--- a/chainer/training/extensions/plot_report.py\n+++ b/chainer/training/extensions/plot_report.py\n@@ -11,9 +11,6 @@\n import chainer.training.trigger as trigger_module\n \n try:\n- import matplotlib\n-\n- matplotlib.use('Agg')\n from matplotlib import pyplot as plot\n \n _available = True\n@@ -50,6 +47,25 @@\n It also adds ``'epoch'`` and ``'iteration'`` entries to each result\n dictionary, which are the epoch and iteration counts at the output.\n \n+ .. warning::\n+\n+ If your environment needs to specify a backend of matplotlib\n+ explicitly, please call ``matplotlib.use`` before importing Chainer.\n+ For example:\n+\n+ .. code-block:: python\n+\n+ import matplotlib\n+ matplotlib.use('Agg')\n+\n+ import chainer\n+\n+ Then, once ``chainer.training.extensions`` is imported,\n+ ``matplotlib.use`` will have no effect.\n+\n+ For the details, please see here:\n+ http://matplotlib.org/faq/usage_faq.html#what-is-a-backend\n+\n Args:\n y_keys (iterable of strs): Keys of values regarded as y. If this is\n None, nothing is output to the graph.\n", "issue": "matplotlib.use('Agg') conflicts with user code\nIn this line in `chainer/chainer/training/extensions/plot_report.py` the `matplotlib` backend is changed [Source](https://github.com/pfnet/chainer/blob/master/chainer/training/extensions/plot_report.py#L16):\r\n\r\n matplotlib.use('Agg')\r\n\r\nUnfortunately, this can interfere with users code. For example, when the user sets the backend himself anywhere, it is not known, whether his setting or the Chainer settings wins (is imported first).\r\n\r\nThe `plot_report` gets imported, when `extensions` is imported. For now, I just removed from the corresponding `__init__` file locally, which is definitely not a clean solution.\n", "before_files": [{"content": "import json\nfrom os import path\nimport warnings\n\nimport numpy\nimport six\n\nfrom chainer import reporter\nimport chainer.serializer as serializer_module\nfrom chainer.training import extension\nimport chainer.training.trigger as trigger_module\n\ntry:\n import matplotlib\n\n matplotlib.use('Agg')\n from matplotlib import pyplot as plot\n\n _available = True\n\nexcept ImportError:\n _available = False\n\n\ndef _check_available():\n if not _available:\n warnings.warn('matplotlib is not installed on your environment, '\n 'so nothing will be plotted at this time. '\n 'Please install matplotlib to plot figures.\\n\\n'\n ' $ pip install matplotlib\\n')\n\n\nclass PlotReport(extension.Extension):\n\n \"\"\"Trainer extension to output plots.\n\n This extension accumulates the observations of the trainer to\n :class:`~chainer.DictSummary` at a regular interval specified by a supplied\n trigger, and plot a graph with using them.\n\n There are two triggers to handle this extension. One is the trigger to\n invoke this extension, which is used to handle the timing of accumulating\n the results. It is set to ``1, 'iteration'`` by default. The other is the\n trigger to determine when to emit the result. When this trigger returns\n True, this extension appends the summary of accumulated values to the list\n of past summaries, and writes the list to the log file. 
Then, this\n extension makes a new fresh summary object which is used until the next\n time that the trigger fires.\n\n It also adds ``'epoch'`` and ``'iteration'`` entries to each result\n dictionary, which are the epoch and iteration counts at the output.\n\n Args:\n y_keys (iterable of strs): Keys of values regarded as y. If this is\n None, nothing is output to the graph.\n x_key (str): Keys of values regarded as x. The default value is\n 'iteration'.\n trigger: Trigger that decides when to aggregate the result and output\n the values. This is distinct from the trigger of this extension\n itself. If it is a tuple in the form ``<int>, 'epoch'`` or ``<int>,\n 'iteration'``, it is passed to :class:`IntervalTrigger`.\n postprocess: Callback to postprocess the result dictionaries. Figure\n object, Axes object, and all plot data are passed to this callback\n in this order. This callback can modify the figure.\n file_name (str): Name of the figure file under the output directory.\n It can be a format string.\n marker (str): The marker used to plot the graph. Default is ``'x'``. If\n ``None`` is given, it draws with no markers.\n grid (bool): Set the axis grid on if True. Default is True.\n\n \"\"\"\n\n def __init__(self, y_keys, x_key='iteration', trigger=(1, 'epoch'),\n postprocess=None, file_name='plot.png', marker='x',\n grid=True):\n\n _check_available()\n\n if not _available:\n return\n\n self._x_key = x_key\n if isinstance(y_keys, str):\n y_keys = (y_keys,)\n\n self._y_keys = y_keys\n self._trigger = trigger_module.get_trigger(trigger)\n self._file_name = file_name\n self._marker = marker\n self._grid = grid\n self._postprocess = postprocess\n self._init_summary()\n self._data = {k: [] for k in y_keys}\n\n def __call__(self, trainer):\n if not _available:\n return\n\n keys = self._y_keys\n observation = trainer.observation\n summary = self._summary\n\n if keys is None:\n summary.add(observation)\n else:\n summary.add({k: observation[k] for k in keys if k in observation})\n\n if self._trigger(trainer):\n stats = self._summary.compute_mean()\n stats_cpu = {}\n for name, value in six.iteritems(stats):\n stats_cpu[name] = float(value) # copy to CPU\n\n updater = trainer.updater\n stats_cpu['epoch'] = updater.epoch\n stats_cpu['iteration'] = updater.iteration\n x = stats_cpu[self._x_key]\n data = self._data\n\n for k in keys:\n if k in stats_cpu:\n data[k].append((x, stats_cpu[k]))\n\n f = plot.figure()\n a = f.add_subplot(111)\n a.set_xlabel(self._x_key)\n if self._grid:\n a.grid()\n\n for k in keys:\n xy = data[k]\n if len(xy) == 0:\n continue\n\n xy = numpy.array(xy)\n a.plot(xy[:, 0], xy[:, 1], marker=self._marker, label=k)\n\n if a.has_data():\n if self._postprocess is not None:\n self._postprocess(f, a, summary)\n l = a.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)\n f.savefig(path.join(trainer.out, self._file_name),\n bbox_extra_artists=(l,), bbox_inches='tight')\n\n plot.close()\n self._init_summary()\n\n def serialize(self, serializer):\n if isinstance(serializer, serializer_module.Serializer):\n serializer('_plot_{}'.format(self._file_name),\n json.dumps(self._data))\n\n else:\n self._data = json.loads(\n serializer('_plot_{}'.format(self._file_name), ''))\n\n def _init_summary(self):\n self._summary = reporter.DictSummary()\n", "path": "chainer/training/extensions/plot_report.py"}]}
| 2,294 | 306 |
gh_patches_debug_37064
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-2111
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`qtile --log-level=INFO` no longer works
My `.xsession` runs qtile using `qtile --log-level=INFO`.
This no longer works.
```
qtile: error: unrecognized arguments: --log-level=INFO
```
I'm guessing due to 908b910d00087ece13bb576f672c94bcf9e6fc43?
No big deal, but the changelog says
```
Running `qtile` without arguments will continue to work for the
forseeable future, but will be eventually deprecated.
```
</issue>
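A minimal argparse sketch of why the old spelling stopped working (the names are illustrative, not qtile's actual code): an option declared on the `start` subparser is only accepted after the subcommand token, and the backward-compat shim in `main()` only inserts `start` when qtile is run with no arguments at all:

```python
import argparse

parser = argparse.ArgumentParser(prog='qtile')
subparsers = parser.add_subparsers()
start = subparsers.add_parser('start')
start.add_argument('-l', '--log-level', default='WARNING')

print(parser.parse_args(['start', '--log-level=INFO']))  # Namespace(log_level='INFO')

try:
    parser.parse_args(['--log-level=INFO'])   # the old top-level spelling
except SystemExit:
    pass  # argparse exits with: qtile: error: unrecognized arguments: --log-level=INFO
```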
<code>
[start of libqtile/scripts/main.py]
1 import argparse
2 import sys
3
4 from libqtile.scripts import cmd_obj, run_cmd, shell, start, top
5
6 try:
7 import pkg_resources
8 VERSION = pkg_resources.require("qtile")[0].version
9 except (pkg_resources.DistributionNotFound, ImportError):
10 VERSION = 'dev'
11
12
13 def main():
14 # backward compat hack: `qtile` with no args (or non-subcommand args)
15 # should default to `qtile start`. it seems impolite for commands to do
16 # nothing when run with no args, so let's warn about this being deprecated.
17 if len(sys.argv) == 1:
18 print("please move to `qtile start` as your qtile invocation, "
19 "instead of just `qtile`; this default will be removed Soon(TM)")
20 sys.argv.insert(1, "start")
21
22 parser = argparse.ArgumentParser(
23 prog='qtile',
24 description='A full-featured, pure-Python tiling window manager.',
25 )
26 parser.add_argument(
27 '--version',
28 action='version',
29 version=VERSION,
30 )
31
32 subparsers = parser.add_subparsers()
33 start.add_subcommand(subparsers)
34 shell.add_subcommand(subparsers)
35 top.add_subcommand(subparsers)
36 run_cmd.add_subcommand(subparsers)
37 cmd_obj.add_subcommand(subparsers)
38
39 # `qtile help` should print help
40 def print_help(options):
41 parser.print_help()
42 help_ = subparsers.add_parser("help", help="Print help information and exit")
43 help_.set_defaults(func=print_help)
44
45 options = parser.parse_args()
46 options.func(options)
47
[end of libqtile/scripts/main.py]
[start of libqtile/scripts/start.py]
1 # Copyright (c) 2008, Aldo Cortesi. All rights reserved.
2 # Copyright (c) 2011, Florian Mounier
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a copy
5 # of this software and associated documentation files (the "Software"), to deal
6 # in the Software without restriction, including without limitation the rights
7 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 # copies of the Software, and to permit persons to whom the Software is
9 # furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
20 # SOFTWARE.
21
22 # Set the locale before any widgets or anything are imported, so any widget
23 # whose defaults depend on a reasonable locale sees something reasonable.
24 import locale
25 import logging
26 from os import getenv, makedirs, path
27 from sys import exit, stdout
28
29 import libqtile.backend
30 from libqtile import confreader
31 from libqtile.log_utils import init_log, logger
32
33
34 def rename_process():
35 """
36 Try to rename the qtile process if py-setproctitle is installed:
37
38 http://code.google.com/p/py-setproctitle/
39
40 Will fail silently if it's not installed. Setting the title lets you do
41 stuff like "killall qtile".
42 """
43 try:
44 import setproctitle
45 setproctitle.setproctitle("qtile")
46 except ImportError:
47 pass
48
49
50 def make_qtile(options):
51 log_level = getattr(logging, options.log_level)
52 init_log(log_level=log_level, log_color=stdout.isatty())
53 kore = libqtile.backend.get_core(options.backend)
54
55 if not path.isfile(options.configfile):
56 try:
57 makedirs(path.dirname(options.configfile), exist_ok=True)
58 from shutil import copyfile
59 default_config_path = path.join(path.dirname(__file__),
60 "..",
61 "resources",
62 "default_config.py")
63 copyfile(default_config_path, options.configfile)
64 logger.info('Copied default_config.py to %s', options.configfile)
65 except Exception as e:
66 logger.exception('Failed to copy default_config.py to %s: (%s)',
67 options.configfile, e)
68
69 config = confreader.Config(options.configfile, kore=kore)
70
71 # XXX: the import is here because we need to call init_log
72 # before start importing stuff
73 from libqtile.core.manager import Qtile
74 return Qtile(
75 kore,
76 config,
77 no_spawn=options.no_spawn,
78 state=options.state,
79 socket_path=options.socket,
80 )
81
82
83 def start(options):
84 try:
85 locale.setlocale(locale.LC_ALL, locale.getdefaultlocale()) # type: ignore
86 except locale.Error:
87 pass
88
89 rename_process()
90 q = make_qtile(options)
91 try:
92 q.loop()
93 except Exception:
94 logger.exception('Qtile crashed')
95 exit(1)
96 logger.info('Exiting...')
97
98
99 def add_subcommand(subparsers):
100 parser = subparsers.add_parser("start", help="Start the window manager")
101 parser.add_argument(
102 "-c", "--config",
103 action="store",
104 default=path.expanduser(path.join(
105 getenv('XDG_CONFIG_HOME', '~/.config'), 'qtile', 'config.py')),
106 dest="configfile",
107 help='Use the specified configuration file',
108 )
109 parser.add_argument(
110 "-s", "--socket",
111 action="store",
112 default=None,
113 dest="socket",
114 help='Path of the Qtile IPC socket.'
115 )
116 parser.add_argument(
117 "-n", "--no-spawn",
118 action="store_true",
119 default=False,
120 dest="no_spawn",
121 help='Avoid spawning apps. (Used for restart)'
122 )
123 parser.add_argument(
124 '-l', '--log-level',
125 default='WARNING',
126 dest='log_level',
127 choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),
128 help='Set qtile log level'
129 )
130 parser.add_argument(
131 '--with-state',
132 default=None,
133 dest='state',
134 help='Pickled QtileState object (typically used only internally)',
135 )
136 parser.add_argument(
137 '-b', '--backend',
138 default='x11',
139 dest='backend',
140 choices=libqtile.backend.CORES,
141 help='Use specified backend. Currently only x11 is implemented.',
142 )
143 parser.set_defaults(func=start)
144
[end of libqtile/scripts/start.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/scripts/main.py b/libqtile/scripts/main.py
--- a/libqtile/scripts/main.py
+++ b/libqtile/scripts/main.py
@@ -1,6 +1,8 @@
import argparse
+import logging
import sys
+from libqtile.log_utils import init_log
from libqtile.scripts import cmd_obj, run_cmd, shell, start, top
try:
@@ -28,6 +30,14 @@
action='version',
version=VERSION,
)
+ parser.add_argument(
+ '-l', '--log-level',
+ default='WARNING',
+ dest='log_level',
+ type=str.upper,
+ choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),
+ help='Set qtile log level'
+ )
subparsers = parser.add_subparsers()
start.add_subcommand(subparsers)
@@ -43,4 +53,6 @@
help_.set_defaults(func=print_help)
options = parser.parse_args()
+ log_level = getattr(logging, options.log_level)
+ init_log(log_level=log_level, log_color=sys.stdout.isatty())
options.func(options)
diff --git a/libqtile/scripts/start.py b/libqtile/scripts/start.py
--- a/libqtile/scripts/start.py
+++ b/libqtile/scripts/start.py
@@ -22,13 +22,12 @@
# Set the locale before any widgets or anything are imported, so any widget
# whose defaults depend on a reasonable locale sees something reasonable.
import locale
-import logging
from os import getenv, makedirs, path
-from sys import exit, stdout
+from sys import exit
import libqtile.backend
from libqtile import confreader
-from libqtile.log_utils import init_log, logger
+from libqtile.log_utils import logger
def rename_process():
@@ -48,8 +47,6 @@
def make_qtile(options):
- log_level = getattr(logging, options.log_level)
- init_log(log_level=log_level, log_color=stdout.isatty())
kore = libqtile.backend.get_core(options.backend)
if not path.isfile(options.configfile):
@@ -120,13 +117,6 @@
dest="no_spawn",
help='Avoid spawning apps. (Used for restart)'
)
- parser.add_argument(
- '-l', '--log-level',
- default='WARNING',
- dest='log_level',
- choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),
- help='Set qtile log level'
- )
parser.add_argument(
'--with-state',
default=None,
|
{"golden_diff": "diff --git a/libqtile/scripts/main.py b/libqtile/scripts/main.py\n--- a/libqtile/scripts/main.py\n+++ b/libqtile/scripts/main.py\n@@ -1,6 +1,8 @@\n import argparse\n+import logging\n import sys\n \n+from libqtile.log_utils import init_log\n from libqtile.scripts import cmd_obj, run_cmd, shell, start, top\n \n try:\n@@ -28,6 +30,14 @@\n action='version',\n version=VERSION,\n )\n+ parser.add_argument(\n+ '-l', '--log-level',\n+ default='WARNING',\n+ dest='log_level',\n+ type=str.upper,\n+ choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),\n+ help='Set qtile log level'\n+ )\n \n subparsers = parser.add_subparsers()\n start.add_subcommand(subparsers)\n@@ -43,4 +53,6 @@\n help_.set_defaults(func=print_help)\n \n options = parser.parse_args()\n+ log_level = getattr(logging, options.log_level)\n+ init_log(log_level=log_level, log_color=sys.stdout.isatty())\n options.func(options)\ndiff --git a/libqtile/scripts/start.py b/libqtile/scripts/start.py\n--- a/libqtile/scripts/start.py\n+++ b/libqtile/scripts/start.py\n@@ -22,13 +22,12 @@\n # Set the locale before any widgets or anything are imported, so any widget\n # whose defaults depend on a reasonable locale sees something reasonable.\n import locale\n-import logging\n from os import getenv, makedirs, path\n-from sys import exit, stdout\n+from sys import exit\n \n import libqtile.backend\n from libqtile import confreader\n-from libqtile.log_utils import init_log, logger\n+from libqtile.log_utils import logger\n \n \n def rename_process():\n@@ -48,8 +47,6 @@\n \n \n def make_qtile(options):\n- log_level = getattr(logging, options.log_level)\n- init_log(log_level=log_level, log_color=stdout.isatty())\n kore = libqtile.backend.get_core(options.backend)\n \n if not path.isfile(options.configfile):\n@@ -120,13 +117,6 @@\n dest=\"no_spawn\",\n help='Avoid spawning apps. (Used for restart)'\n )\n- parser.add_argument(\n- '-l', '--log-level',\n- default='WARNING',\n- dest='log_level',\n- choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),\n- help='Set qtile log level'\n- )\n parser.add_argument(\n '--with-state',\n default=None,\n", "issue": "`qtile --log-level=INFO` no longer works\nMy `.xsession` runs qtile using `qtile --log-level=INFO`.\r\n\r\nThis no longer works.\r\n\r\n```\r\nqtile: error: unrecognized arguments: --log-level=INFO\r\n```\r\n\r\nI'm guessing due to 908b910d00087ece13bb576f672c94bcf9e6fc43?\r\n\r\nNo big deal, but the changelog says\r\n\r\n```\r\n Running `qtile` without arguments will continue to work for the\r\n forseeable future, but will be eventually deprecated.\r\n```\n", "before_files": [{"content": "import argparse\nimport sys\n\nfrom libqtile.scripts import cmd_obj, run_cmd, shell, start, top\n\ntry:\n import pkg_resources\n VERSION = pkg_resources.require(\"qtile\")[0].version\nexcept (pkg_resources.DistributionNotFound, ImportError):\n VERSION = 'dev'\n\n\ndef main():\n # backward compat hack: `qtile` with no args (or non-subcommand args)\n # should default to `qtile start`. 
it seems impolite for commands to do\n # nothing when run with no args, so let's warn about this being deprecated.\n if len(sys.argv) == 1:\n print(\"please move to `qtile start` as your qtile invocation, \"\n \"instead of just `qtile`; this default will be removed Soon(TM)\")\n sys.argv.insert(1, \"start\")\n\n parser = argparse.ArgumentParser(\n prog='qtile',\n description='A full-featured, pure-Python tiling window manager.',\n )\n parser.add_argument(\n '--version',\n action='version',\n version=VERSION,\n )\n\n subparsers = parser.add_subparsers()\n start.add_subcommand(subparsers)\n shell.add_subcommand(subparsers)\n top.add_subcommand(subparsers)\n run_cmd.add_subcommand(subparsers)\n cmd_obj.add_subcommand(subparsers)\n\n # `qtile help` should print help\n def print_help(options):\n parser.print_help()\n help_ = subparsers.add_parser(\"help\", help=\"Print help information and exit\")\n help_.set_defaults(func=print_help)\n\n options = parser.parse_args()\n options.func(options)\n", "path": "libqtile/scripts/main.py"}, {"content": "# Copyright (c) 2008, Aldo Cortesi. All rights reserved.\n# Copyright (c) 2011, Florian Mounier\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\n# Set the locale before any widgets or anything are imported, so any widget\n# whose defaults depend on a reasonable locale sees something reasonable.\nimport locale\nimport logging\nfrom os import getenv, makedirs, path\nfrom sys import exit, stdout\n\nimport libqtile.backend\nfrom libqtile import confreader\nfrom libqtile.log_utils import init_log, logger\n\n\ndef rename_process():\n \"\"\"\n Try to rename the qtile process if py-setproctitle is installed:\n\n http://code.google.com/p/py-setproctitle/\n\n Will fail silently if it's not installed. 
Setting the title lets you do\n stuff like \"killall qtile\".\n \"\"\"\n try:\n import setproctitle\n setproctitle.setproctitle(\"qtile\")\n except ImportError:\n pass\n\n\ndef make_qtile(options):\n log_level = getattr(logging, options.log_level)\n init_log(log_level=log_level, log_color=stdout.isatty())\n kore = libqtile.backend.get_core(options.backend)\n\n if not path.isfile(options.configfile):\n try:\n makedirs(path.dirname(options.configfile), exist_ok=True)\n from shutil import copyfile\n default_config_path = path.join(path.dirname(__file__),\n \"..\",\n \"resources\",\n \"default_config.py\")\n copyfile(default_config_path, options.configfile)\n logger.info('Copied default_config.py to %s', options.configfile)\n except Exception as e:\n logger.exception('Failed to copy default_config.py to %s: (%s)',\n options.configfile, e)\n\n config = confreader.Config(options.configfile, kore=kore)\n\n # XXX: the import is here because we need to call init_log\n # before start importing stuff\n from libqtile.core.manager import Qtile\n return Qtile(\n kore,\n config,\n no_spawn=options.no_spawn,\n state=options.state,\n socket_path=options.socket,\n )\n\n\ndef start(options):\n try:\n locale.setlocale(locale.LC_ALL, locale.getdefaultlocale()) # type: ignore\n except locale.Error:\n pass\n\n rename_process()\n q = make_qtile(options)\n try:\n q.loop()\n except Exception:\n logger.exception('Qtile crashed')\n exit(1)\n logger.info('Exiting...')\n\n\ndef add_subcommand(subparsers):\n parser = subparsers.add_parser(\"start\", help=\"Start the window manager\")\n parser.add_argument(\n \"-c\", \"--config\",\n action=\"store\",\n default=path.expanduser(path.join(\n getenv('XDG_CONFIG_HOME', '~/.config'), 'qtile', 'config.py')),\n dest=\"configfile\",\n help='Use the specified configuration file',\n )\n parser.add_argument(\n \"-s\", \"--socket\",\n action=\"store\",\n default=None,\n dest=\"socket\",\n help='Path of the Qtile IPC socket.'\n )\n parser.add_argument(\n \"-n\", \"--no-spawn\",\n action=\"store_true\",\n default=False,\n dest=\"no_spawn\",\n help='Avoid spawning apps. (Used for restart)'\n )\n parser.add_argument(\n '-l', '--log-level',\n default='WARNING',\n dest='log_level',\n choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),\n help='Set qtile log level'\n )\n parser.add_argument(\n '--with-state',\n default=None,\n dest='state',\n help='Pickled QtileState object (typically used only internally)',\n )\n parser.add_argument(\n '-b', '--backend',\n default='x11',\n dest='backend',\n choices=libqtile.backend.CORES,\n help='Use specified backend. Currently only x11 is implemented.',\n )\n parser.set_defaults(func=start)\n", "path": "libqtile/scripts/start.py"}]}
| 2,534 | 587 |
gh_patches_debug_12420
|
rasdani/github-patches
|
git_diff
|
modin-project__modin-6173
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: Failed to pass storage_options parameter to the to_csv function of PandasOnUnidistIO class with fsspec
Similar to #6097.
</issue>
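For context: the report is that the `storage_options` argument to `to_csv` does not get passed through when fsspec handles the target path. A tiny, self-contained sketch of how such options normally travel (illustration only, not Modin code; a local-filesystem option is used so it actually runs, whereas for `s3://` paths the same dict would typically hold s3fs credentials such as `key`/`secret` — an assumption about the user's setup):

```python
# fsspec forwards extra keyword arguments to the filesystem behind the URL,
# which is how storage_options are meant to reach it.
import fsspec

storage_options = {"auto_mkdir": True}   # stands in for remote credentials
with fsspec.open("file:///tmp/modin_demo/out.csv", mode="wt", **storage_options) as f:
    f.write("a,b\n1,2\n")
```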
<code>
[start of modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """The module holds the factory which performs I/O using pandas on unidist."""
15
16 import io
17
18 import pandas
19
20 from modin.core.storage_formats.pandas.query_compiler import PandasQueryCompiler
21 from modin.core.execution.unidist.generic.io import UnidistIO
22 from modin.core.io import (
23 CSVDispatcher,
24 FWFDispatcher,
25 JSONDispatcher,
26 ParquetDispatcher,
27 FeatherDispatcher,
28 SQLDispatcher,
29 ExcelDispatcher,
30 )
31 from modin.core.storage_formats.pandas.parsers import (
32 PandasCSVParser,
33 PandasFWFParser,
34 PandasJSONParser,
35 PandasParquetParser,
36 PandasFeatherParser,
37 PandasSQLParser,
38 PandasExcelParser,
39 )
40 from modin.core.execution.unidist.common import UnidistWrapper, SignalActor
41 from ..dataframe import PandasOnUnidistDataframe
42 from ..partitioning import PandasOnUnidistDataframePartition
43
44
45 class PandasOnUnidistIO(UnidistIO):
46 """Factory providing methods for performing I/O operations using pandas as storage format on unidist as engine."""
47
48 frame_cls = PandasOnUnidistDataframe
49 query_compiler_cls = PandasQueryCompiler
50 build_args = dict(
51 frame_partition_cls=PandasOnUnidistDataframePartition,
52 query_compiler_cls=PandasQueryCompiler,
53 frame_cls=PandasOnUnidistDataframe,
54 base_io=UnidistIO,
55 )
56
57 def __make_read(*classes, build_args=build_args):
58 # used to reduce code duplication
59 return type("", (UnidistWrapper, *classes), build_args).read
60
61 def __make_write(*classes, build_args=build_args):
62 # used to reduce code duplication
63 return type("", (UnidistWrapper, *classes), build_args).write
64
65 read_csv = __make_read(PandasCSVParser, CSVDispatcher)
66 read_fwf = __make_read(PandasFWFParser, FWFDispatcher)
67 read_json = __make_read(PandasJSONParser, JSONDispatcher)
68 read_parquet = __make_read(PandasParquetParser, ParquetDispatcher)
69 to_parquet = __make_write(ParquetDispatcher)
70 # Blocked on pandas-dev/pandas#12236. It is faster to default to pandas.
71 # read_hdf = __make_read(PandasHDFParser, HDFReader)
72 read_feather = __make_read(PandasFeatherParser, FeatherDispatcher)
73 read_sql = __make_read(PandasSQLParser, SQLDispatcher)
74 to_sql = __make_write(SQLDispatcher)
75 read_excel = __make_read(PandasExcelParser, ExcelDispatcher)
76
77 del __make_read # to not pollute class namespace
78 del __make_write # to not pollute class namespace
79
80 @staticmethod
81 def _to_csv_check_support(kwargs):
82 """
83 Check if parallel version of ``to_csv`` could be used.
84
85 Parameters
86 ----------
87 kwargs : dict
88 Keyword arguments passed to ``.to_csv()``.
89
90 Returns
91 -------
92 bool
93 Whether parallel version of ``to_csv`` is applicable.
94 """
95 path_or_buf = kwargs["path_or_buf"]
96 compression = kwargs["compression"]
97 if not isinstance(path_or_buf, str):
98 return False
99 # case when the pointer is placed at the beginning of the file.
100 if "r" in kwargs["mode"] and "+" in kwargs["mode"]:
101 return False
102 # encodings with BOM don't support;
103 # instead of one mark in result bytes we will have them by the number of partitions
104 # so we should fallback in pandas for `utf-16`, `utf-32` with all aliases, in instance
105 # (`utf_32_be`, `utf_16_le` and so on)
106 if kwargs["encoding"] is not None:
107 encoding = kwargs["encoding"].lower()
108 if "u" in encoding or "utf" in encoding:
109 if "16" in encoding or "32" in encoding:
110 return False
111 if compression is None or not compression == "infer":
112 return False
113 if any((path_or_buf.endswith(ext) for ext in [".gz", ".bz2", ".zip", ".xz"])):
114 return False
115 return True
116
117 @classmethod
118 def to_csv(cls, qc, **kwargs):
119 """
120 Write records stored in the `qc` to a CSV file.
121
122 Parameters
123 ----------
124 qc : BaseQueryCompiler
125 The query compiler of the Modin dataframe that we want to run ``to_csv`` on.
126 **kwargs : dict
127 Parameters for ``pandas.to_csv(**kwargs)``.
128 """
129 if not cls._to_csv_check_support(kwargs):
130 return UnidistIO.to_csv(qc, **kwargs)
131
132 signals = SignalActor.remote(len(qc._modin_frame._partitions) + 1)
133
134 def func(df, **kw): # pragma: no cover
135 """
136 Dump a chunk of rows as csv, then save them to target maintaining order.
137
138 Parameters
139 ----------
140 df : pandas.DataFrame
141 A chunk of rows to write to a CSV file.
142 **kw : dict
143 Arguments to pass to ``pandas.to_csv(**kw)`` plus an extra argument
144 `partition_idx` serving as chunk index to maintain rows order.
145 """
146 partition_idx = kw["partition_idx"]
147 # the copy is made to not implicitly change the input parameters;
148 # to write to an intermediate buffer, we need to change `path_or_buf` in kwargs
149 csv_kwargs = kwargs.copy()
150 if partition_idx != 0:
151 # we need to create a new file only for first recording
152 # all the rest should be recorded in appending mode
153 if "w" in csv_kwargs["mode"]:
154 csv_kwargs["mode"] = csv_kwargs["mode"].replace("w", "a")
155 # It is enough to write the header for the first partition
156 csv_kwargs["header"] = False
157
158 # for parallelization purposes, each partition is written to an intermediate buffer
159 path_or_buf = csv_kwargs["path_or_buf"]
160 is_binary = "b" in csv_kwargs["mode"]
161 csv_kwargs["path_or_buf"] = io.BytesIO() if is_binary else io.StringIO()
162 df.to_csv(**csv_kwargs)
163 content = csv_kwargs["path_or_buf"].getvalue()
164 csv_kwargs["path_or_buf"].close()
165
166 # each process waits for its turn to write to a file
167 UnidistWrapper.materialize(signals.wait.remote(partition_idx))
168
169 # preparing to write data from the buffer to a file
170 with pandas.io.common.get_handle(
171 path_or_buf,
172 # in case when using URL in implicit text mode
173 # pandas try to open `path_or_buf` in binary mode
174 csv_kwargs["mode"] if is_binary else csv_kwargs["mode"] + "t",
175 encoding=kwargs["encoding"],
176 errors=kwargs["errors"],
177 compression=kwargs["compression"],
178 storage_options=kwargs.get("storage_options", None),
179 is_text=not is_binary,
180 ) as handles:
181 handles.handle.write(content)
182
183 # signal that the next process can start writing to the file
184 UnidistWrapper.materialize(signals.send.remote(partition_idx + 1))
185 # used for synchronization purposes
186 return pandas.DataFrame()
187
188 # signaling that the partition with id==0 can be written to the file
189 UnidistWrapper.materialize(signals.send.remote(0))
190 # Ensure that the metadata is syncrhonized
191 qc._modin_frame._propagate_index_objs(axis=None)
192 result = qc._modin_frame._partition_mgr_cls.map_axis_partitions(
193 axis=1,
194 partitions=qc._modin_frame._partitions,
195 map_func=func,
196 keep_partitioning=True,
197 lengths=None,
198 enumerate_partitions=True,
199 max_retries=0,
200 )
201 # pending completion
202 UnidistWrapper.materialize(
203 [part.list_of_blocks[0] for row in result for part in row]
204 )
205
[end of modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py b/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py
--- a/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py
+++ b/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py
@@ -159,7 +159,9 @@
path_or_buf = csv_kwargs["path_or_buf"]
is_binary = "b" in csv_kwargs["mode"]
csv_kwargs["path_or_buf"] = io.BytesIO() if is_binary else io.StringIO()
+ storage_options = csv_kwargs.pop("storage_options", None)
df.to_csv(**csv_kwargs)
+ csv_kwargs.update({"storage_options": storage_options})
content = csv_kwargs["path_or_buf"].getvalue()
csv_kwargs["path_or_buf"].close()
|
{"golden_diff": "diff --git a/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py b/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py\n--- a/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py\n+++ b/modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py\n@@ -159,7 +159,9 @@\n path_or_buf = csv_kwargs[\"path_or_buf\"]\n is_binary = \"b\" in csv_kwargs[\"mode\"]\n csv_kwargs[\"path_or_buf\"] = io.BytesIO() if is_binary else io.StringIO()\n+ storage_options = csv_kwargs.pop(\"storage_options\", None)\n df.to_csv(**csv_kwargs)\n+ csv_kwargs.update({\"storage_options\": storage_options})\n content = csv_kwargs[\"path_or_buf\"].getvalue()\n csv_kwargs[\"path_or_buf\"].close()\n", "issue": "BUG: Failed to pass storage_options parameter to the to_csv function of PandasOnUnidistIO class with fsspec\nSimilar to #6097.\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"The module holds the factory which performs I/O using pandas on unidist.\"\"\"\n\nimport io\n\nimport pandas\n\nfrom modin.core.storage_formats.pandas.query_compiler import PandasQueryCompiler\nfrom modin.core.execution.unidist.generic.io import UnidistIO\nfrom modin.core.io import (\n CSVDispatcher,\n FWFDispatcher,\n JSONDispatcher,\n ParquetDispatcher,\n FeatherDispatcher,\n SQLDispatcher,\n ExcelDispatcher,\n)\nfrom modin.core.storage_formats.pandas.parsers import (\n PandasCSVParser,\n PandasFWFParser,\n PandasJSONParser,\n PandasParquetParser,\n PandasFeatherParser,\n PandasSQLParser,\n PandasExcelParser,\n)\nfrom modin.core.execution.unidist.common import UnidistWrapper, SignalActor\nfrom ..dataframe import PandasOnUnidistDataframe\nfrom ..partitioning import PandasOnUnidistDataframePartition\n\n\nclass PandasOnUnidistIO(UnidistIO):\n \"\"\"Factory providing methods for performing I/O operations using pandas as storage format on unidist as engine.\"\"\"\n\n frame_cls = PandasOnUnidistDataframe\n query_compiler_cls = PandasQueryCompiler\n build_args = dict(\n frame_partition_cls=PandasOnUnidistDataframePartition,\n query_compiler_cls=PandasQueryCompiler,\n frame_cls=PandasOnUnidistDataframe,\n base_io=UnidistIO,\n )\n\n def __make_read(*classes, build_args=build_args):\n # used to reduce code duplication\n return type(\"\", (UnidistWrapper, *classes), build_args).read\n\n def __make_write(*classes, build_args=build_args):\n # used to reduce code duplication\n return type(\"\", (UnidistWrapper, *classes), build_args).write\n\n read_csv = __make_read(PandasCSVParser, CSVDispatcher)\n read_fwf = __make_read(PandasFWFParser, FWFDispatcher)\n read_json = __make_read(PandasJSONParser, JSONDispatcher)\n read_parquet = __make_read(PandasParquetParser, ParquetDispatcher)\n to_parquet = 
__make_write(ParquetDispatcher)\n # Blocked on pandas-dev/pandas#12236. It is faster to default to pandas.\n # read_hdf = __make_read(PandasHDFParser, HDFReader)\n read_feather = __make_read(PandasFeatherParser, FeatherDispatcher)\n read_sql = __make_read(PandasSQLParser, SQLDispatcher)\n to_sql = __make_write(SQLDispatcher)\n read_excel = __make_read(PandasExcelParser, ExcelDispatcher)\n\n del __make_read # to not pollute class namespace\n del __make_write # to not pollute class namespace\n\n @staticmethod\n def _to_csv_check_support(kwargs):\n \"\"\"\n Check if parallel version of ``to_csv`` could be used.\n\n Parameters\n ----------\n kwargs : dict\n Keyword arguments passed to ``.to_csv()``.\n\n Returns\n -------\n bool\n Whether parallel version of ``to_csv`` is applicable.\n \"\"\"\n path_or_buf = kwargs[\"path_or_buf\"]\n compression = kwargs[\"compression\"]\n if not isinstance(path_or_buf, str):\n return False\n # case when the pointer is placed at the beginning of the file.\n if \"r\" in kwargs[\"mode\"] and \"+\" in kwargs[\"mode\"]:\n return False\n # encodings with BOM don't support;\n # instead of one mark in result bytes we will have them by the number of partitions\n # so we should fallback in pandas for `utf-16`, `utf-32` with all aliases, in instance\n # (`utf_32_be`, `utf_16_le` and so on)\n if kwargs[\"encoding\"] is not None:\n encoding = kwargs[\"encoding\"].lower()\n if \"u\" in encoding or \"utf\" in encoding:\n if \"16\" in encoding or \"32\" in encoding:\n return False\n if compression is None or not compression == \"infer\":\n return False\n if any((path_or_buf.endswith(ext) for ext in [\".gz\", \".bz2\", \".zip\", \".xz\"])):\n return False\n return True\n\n @classmethod\n def to_csv(cls, qc, **kwargs):\n \"\"\"\n Write records stored in the `qc` to a CSV file.\n\n Parameters\n ----------\n qc : BaseQueryCompiler\n The query compiler of the Modin dataframe that we want to run ``to_csv`` on.\n **kwargs : dict\n Parameters for ``pandas.to_csv(**kwargs)``.\n \"\"\"\n if not cls._to_csv_check_support(kwargs):\n return UnidistIO.to_csv(qc, **kwargs)\n\n signals = SignalActor.remote(len(qc._modin_frame._partitions) + 1)\n\n def func(df, **kw): # pragma: no cover\n \"\"\"\n Dump a chunk of rows as csv, then save them to target maintaining order.\n\n Parameters\n ----------\n df : pandas.DataFrame\n A chunk of rows to write to a CSV file.\n **kw : dict\n Arguments to pass to ``pandas.to_csv(**kw)`` plus an extra argument\n `partition_idx` serving as chunk index to maintain rows order.\n \"\"\"\n partition_idx = kw[\"partition_idx\"]\n # the copy is made to not implicitly change the input parameters;\n # to write to an intermediate buffer, we need to change `path_or_buf` in kwargs\n csv_kwargs = kwargs.copy()\n if partition_idx != 0:\n # we need to create a new file only for first recording\n # all the rest should be recorded in appending mode\n if \"w\" in csv_kwargs[\"mode\"]:\n csv_kwargs[\"mode\"] = csv_kwargs[\"mode\"].replace(\"w\", \"a\")\n # It is enough to write the header for the first partition\n csv_kwargs[\"header\"] = False\n\n # for parallelization purposes, each partition is written to an intermediate buffer\n path_or_buf = csv_kwargs[\"path_or_buf\"]\n is_binary = \"b\" in csv_kwargs[\"mode\"]\n csv_kwargs[\"path_or_buf\"] = io.BytesIO() if is_binary else io.StringIO()\n df.to_csv(**csv_kwargs)\n content = csv_kwargs[\"path_or_buf\"].getvalue()\n csv_kwargs[\"path_or_buf\"].close()\n\n # each process waits for its turn to write to a file\n 
UnidistWrapper.materialize(signals.wait.remote(partition_idx))\n\n # preparing to write data from the buffer to a file\n with pandas.io.common.get_handle(\n path_or_buf,\n # in case when using URL in implicit text mode\n # pandas try to open `path_or_buf` in binary mode\n csv_kwargs[\"mode\"] if is_binary else csv_kwargs[\"mode\"] + \"t\",\n encoding=kwargs[\"encoding\"],\n errors=kwargs[\"errors\"],\n compression=kwargs[\"compression\"],\n storage_options=kwargs.get(\"storage_options\", None),\n is_text=not is_binary,\n ) as handles:\n handles.handle.write(content)\n\n # signal that the next process can start writing to the file\n UnidistWrapper.materialize(signals.send.remote(partition_idx + 1))\n # used for synchronization purposes\n return pandas.DataFrame()\n\n # signaling that the partition with id==0 can be written to the file\n UnidistWrapper.materialize(signals.send.remote(0))\n # Ensure that the metadata is syncrhonized\n qc._modin_frame._propagate_index_objs(axis=None)\n result = qc._modin_frame._partition_mgr_cls.map_axis_partitions(\n axis=1,\n partitions=qc._modin_frame._partitions,\n map_func=func,\n keep_partitioning=True,\n lengths=None,\n enumerate_partitions=True,\n max_retries=0,\n )\n # pending completion\n UnidistWrapper.materialize(\n [part.list_of_blocks[0] for row in result for part in row]\n )\n", "path": "modin/core/execution/unidist/implementations/pandas_on_unidist/io/io.py"}]}
| 3,020 | 209 |
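The golden diff for the row above comes down to one pattern: pandas rejects `storage_options` when `to_csv` targets an in-memory buffer, so the key is popped before the buffered write and restored afterwards for the real (possibly fsspec-backed) write through `get_handle`. Below is a minimal sketch of that pattern only; the function name `dump_chunk` and the standalone framing are assumptions for illustration, not Modin code:

```python
import io

import pandas as pd


def dump_chunk(df: pd.DataFrame, **csv_kwargs) -> str:
    # to_csv on an in-memory buffer rejects storage_options, so set it aside ...
    storage_options = csv_kwargs.pop("storage_options", None)
    buffer = io.StringIO()
    df.to_csv(buffer, **csv_kwargs)
    # ... and put it back so a later file-handle write can still reach fsspec targets.
    csv_kwargs["storage_options"] = storage_options
    return buffer.getvalue()


print(dump_chunk(pd.DataFrame({"a": [1, 2]}), index=False))
```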
gh_patches_debug_30451
|
rasdani/github-patches
|
git_diff
|
bids-standard__pybids-447
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update prep_zenodo.py to only count commits in grabbit up to 0.2.6
With #369, we're dropping the grabbit dependency, so changes there will no longer contribute to pybids.
</issue>
<code>
[start of tools/prep_zenodo.py]
1 #!/usr/bin/env python3
2 import git
3 import json
4 from subprocess import run, PIPE, CalledProcessError
5 from pathlib import Path
6 from tempfile import TemporaryDirectory
7
8
9 def decommify(name):
10 return ' '.join(name.split(', ')[::-1])
11
12
13 # List of repositories whose commits should be counted as contributions
14 codependents = ['https://github.com/grabbles/grabbit.git']
15
16 # Last shablona commit
17 origin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'
18
19 git_root = Path(git.Repo('.', search_parent_directories=True).working_dir)
20 zenodo_file = git_root / '.zenodo.json'
21
22 zenodo = json.loads(zenodo_file.read_text()) if zenodo_file.exists() else {}
23
24 orig_creators = zenodo.get('creators', [])
25 creator_map = {decommify(creator['name']): creator
26 for creator in orig_creators}
27
28 shortlog = run(['git', 'shortlog', '-ns', f'{origin_commit}..'], stdout=PIPE)
29 counts = [line.split('\t', 1)[::-1]
30 for line in shortlog.stdout.decode().split('\n') if line]
31
32 # Get additional commit counts from dependencies
33 with TemporaryDirectory() as tmpdir:
34 tmppath = Path(tmpdir)
35 for repo in codependents:
36 repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])
37 try:
38 clone = run(['git', 'clone', repo, repo_dir], check=True)
39 except CalledProcessError as err:
40 raise RuntimeError("Could not clone {}".format(repo)) from err
41 tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)
42 latest_tag = tag.stdout.decode().strip().rsplit('\n', 1)[1]
43 dep_shortlog = run(
44 ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],
45 stdout=PIPE)
46 counts.extend(line.split('\t', 1)[::-1]
47 for line in dep_shortlog.stdout.decode().split('\n')
48 if line)
49
50 commit_counts = {}
51 for committer, commits in counts:
52 commit_counts[committer] = commit_counts.get(committer, 0) + int(commits)
53
54 # Stable sort:
55 # Number of commits in reverse order
56 # Ties broken by alphabetical order of first name
57 committers = [committer
58 for committer, _ in sorted(commit_counts.items(),
59 key=lambda x: (-x[1], x[0]))]
60
61 # Tal to the top
62 first_author = 'Tal Yarkoni'
63 if committers[0] != first_author:
64 committers.remove(first_author)
65 committers.insert(0, first_author)
66
67 creators = [
68 creator_map.get(committer, {'name': committer})
69 for committer in committers
70 ]
71
72 zenodo['creators'] = creators
73 zenodo_file.write_text(json.dumps(zenodo, indent=2, sort_keys=True) + '\n')
74
[end of tools/prep_zenodo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/prep_zenodo.py b/tools/prep_zenodo.py
--- a/tools/prep_zenodo.py
+++ b/tools/prep_zenodo.py
@@ -11,7 +11,7 @@
# List of repositories whose commits should be counted as contributions
-codependents = ['https://github.com/grabbles/grabbit.git']
+codependents = [('https://github.com/grabbles/grabbit.git', '0.2.6')]
# Last shablona commit
origin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'
@@ -33,15 +33,23 @@
with TemporaryDirectory() as tmpdir:
tmppath = Path(tmpdir)
for repo in codependents:
+ try:
+ repo, ref = repo
+ except (TypeError, ValueError):
+ ref = None
repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])
try:
- clone = run(['git', 'clone', repo, repo_dir], check=True)
+ clone = run(['git', 'clone', '-q', repo, repo_dir], check=True)
except CalledProcessError as err:
raise RuntimeError("Could not clone {}".format(repo)) from err
- tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)
- latest_tag = tag.stdout.decode().strip().rsplit('\n', 1)[1]
+
+ if ref is None:
+ tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)
+ # latest tag
+ ref = tag.stdout.decode().strip().rsplit('\n', 1)[1]
+
dep_shortlog = run(
- ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],
+ ['git', '-C', repo_dir, 'shortlog', '-ns', ref],
stdout=PIPE)
counts.extend(line.split('\t', 1)[::-1]
for line in dep_shortlog.stdout.decode().split('\n')
|
{"golden_diff": "diff --git a/tools/prep_zenodo.py b/tools/prep_zenodo.py\n--- a/tools/prep_zenodo.py\n+++ b/tools/prep_zenodo.py\n@@ -11,7 +11,7 @@\n \n \n # List of repositories whose commits should be counted as contributions\n-codependents = ['https://github.com/grabbles/grabbit.git']\n+codependents = [('https://github.com/grabbles/grabbit.git', '0.2.6')]\n \n # Last shablona commit\n origin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'\n@@ -33,15 +33,23 @@\n with TemporaryDirectory() as tmpdir:\n tmppath = Path(tmpdir)\n for repo in codependents:\n+ try:\n+ repo, ref = repo\n+ except (TypeError, ValueError):\n+ ref = None\n repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])\n try:\n- clone = run(['git', 'clone', repo, repo_dir], check=True)\n+ clone = run(['git', 'clone', '-q', repo, repo_dir], check=True)\n except CalledProcessError as err:\n raise RuntimeError(\"Could not clone {}\".format(repo)) from err\n- tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)\n- latest_tag = tag.stdout.decode().strip().rsplit('\\n', 1)[1]\n+\n+ if ref is None:\n+ tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)\n+ # latest tag\n+ ref = tag.stdout.decode().strip().rsplit('\\n', 1)[1]\n+\n dep_shortlog = run(\n- ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],\n+ ['git', '-C', repo_dir, 'shortlog', '-ns', ref],\n stdout=PIPE)\n counts.extend(line.split('\\t', 1)[::-1]\n for line in dep_shortlog.stdout.decode().split('\\n')\n", "issue": "Update prep_zenodo.py to only count commits in grabbit up to 0.2.6\nWith #369, we're dropping the grabbit dependency, so changes there will no longer contribute to pybids.\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport git\nimport json\nfrom subprocess import run, PIPE, CalledProcessError\nfrom pathlib import Path\nfrom tempfile import TemporaryDirectory\n\n\ndef decommify(name):\n return ' '.join(name.split(', ')[::-1])\n\n\n# List of repositories whose commits should be counted as contributions\ncodependents = ['https://github.com/grabbles/grabbit.git']\n\n# Last shablona commit\norigin_commit = 'd72caaf5933907ed699d57faddaec7bfc836ce6f'\n\ngit_root = Path(git.Repo('.', search_parent_directories=True).working_dir)\nzenodo_file = git_root / '.zenodo.json'\n\nzenodo = json.loads(zenodo_file.read_text()) if zenodo_file.exists() else {}\n\norig_creators = zenodo.get('creators', [])\ncreator_map = {decommify(creator['name']): creator\n for creator in orig_creators}\n\nshortlog = run(['git', 'shortlog', '-ns', f'{origin_commit}..'], stdout=PIPE)\ncounts = [line.split('\\t', 1)[::-1]\n for line in shortlog.stdout.decode().split('\\n') if line]\n\n# Get additional commit counts from dependencies\nwith TemporaryDirectory() as tmpdir:\n tmppath = Path(tmpdir)\n for repo in codependents:\n repo_dir = str(tmppath / repo.rsplit('/', 1)[1].split('.', 1)[0])\n try:\n clone = run(['git', 'clone', repo, repo_dir], check=True)\n except CalledProcessError as err:\n raise RuntimeError(\"Could not clone {}\".format(repo)) from err\n tag = run(['git', '-C', repo_dir, 'tag'], stdout=PIPE)\n latest_tag = tag.stdout.decode().strip().rsplit('\\n', 1)[1]\n dep_shortlog = run(\n ['git', '-C', repo_dir, 'shortlog', '-ns', latest_tag],\n stdout=PIPE)\n counts.extend(line.split('\\t', 1)[::-1]\n for line in dep_shortlog.stdout.decode().split('\\n')\n if line)\n\ncommit_counts = {}\nfor committer, commits in counts:\n commit_counts[committer] = commit_counts.get(committer, 0) + int(commits)\n\n# Stable sort:\n# Number of commits in reverse 
order\n# Ties broken by alphabetical order of first name\ncommitters = [committer\n for committer, _ in sorted(commit_counts.items(),\n key=lambda x: (-x[1], x[0]))]\n\n# Tal to the top\nfirst_author = 'Tal Yarkoni'\nif committers[0] != first_author:\n committers.remove(first_author)\n committers.insert(0, first_author)\n\ncreators = [\n creator_map.get(committer, {'name': committer})\n for committer in committers\n ]\n\nzenodo['creators'] = creators\nzenodo_file.write_text(json.dumps(zenodo, indent=2, sort_keys=True) + '\\n')\n", "path": "tools/prep_zenodo.py"}]}
| 1,402 | 484 |
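The fix above changes the codependents list from bare URLs to optional `(url, ref)` pairs so that grabbit commits are only counted up to tag 0.2.6. The unpacking trick it relies on can be shown in isolation; the second, unpinned entry below is hypothetical and only exists to exercise the fallback branch:

```python
codependents = [
    ("https://github.com/grabbles/grabbit.git", "0.2.6"),  # pinned ref from the patch
    "https://github.com/example/unpinned.git",             # hypothetical bare URL
]

for entry in codependents:
    try:
        repo, ref = entry            # (url, pinned ref)
    except (TypeError, ValueError):
        repo, ref = entry, None      # bare URL: fall back to the latest tag later
    print(repo, ref or "<latest tag>")
```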
gh_patches_debug_10326
|
rasdani/github-patches
|
git_diff
|
biolab__orange3-text-361
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Guardian: Fix failing tests on Travis
<!--
This is an issue template. Please fill in the relevant details in the
sections below.
-->
##### Text version
<!-- From menu _Options→Add-ons→Orange3-Text_ or code `orangecontrib.text.version.full_version` -->
0.3.0
##### Orange version
<!-- From menu _Help→About→Version_ or code `Orange.version.full_version` -->
3.15.dev
##### Expected behavior
Tests pass.
##### Actual behavior
Guardian tests are failing.
##### Steps to reproduce the behavior
##### Additional info (worksheets, data, screenshots, ...)
Fix tests.
</issue>
<code>
[start of orangecontrib/text/guardian.py]
1 """ This module fetches data from The Guardian API.
2
3 To use first create :class:`TheGuardianCredentials`:
4
5 >>> from orangecontrib.text.guardian import TheGuardianCredentials
6 >>> credentials = TheGuardianCredentials('<your-api-key>')
7
8 Then create :class:`TheGuardianAPI` object and use it for searching:
9
10 >>> from orangecontrib.text.guardian import TheGuardianAPI
11 >>> api = TheGuardianAPI(credentials)
12 >>> corpus = api.search('Slovenia', max_documents=10)
13 >>> len(corpus)
14 10
15
16 """
17
18 import requests
19 import math
20 import json
21
22 from Orange import data
23
24 from orangecontrib.text.corpus import Corpus
25
26
27 BASE_URL = 'http://content.guardianapis.com/search'
28 ARTICLES_PER_PAGE = 10
29
30
31 class TheGuardianCredentials:
32 """ The Guardian API credentials. """
33 def __init__(self, key):
34 """
35 Args:
36 key (str): The Guardian API key. Use `test` for testing purposes.
37 """
38 self.key = key
39
40 @property
41 def valid(self):
42 """ Check if given API key is valid. """
43 response = requests.get(BASE_URL, {'api-key': self.key})
44 return response.status_code != 403 # 403 == Forbidden
45
46 def __eq__(self, other):
47 return self.key == other.key
48
49
50 class TheGuardianAPI:
51 attributes = []
52
53 class_vars = [
54 (data.DiscreteVariable('Section'), lambda doc: doc['sectionName']),
55 ]
56
57 tv = data.TimeVariable('Publication Date')
58 metas = [
59 (data.StringVariable('Headline'), lambda doc: doc['fields']['headline']),
60 (data.StringVariable('Content'), lambda doc: doc['fields']['bodyText']),
61 (data.StringVariable('Trail Text'), lambda doc: doc['fields']['trailText']),
62 (data.StringVariable('HTML'), lambda doc: doc['fields']['body']),
63 (tv, lambda doc: TheGuardianAPI.tv.parse(doc['webPublicationDate'])),
64 (data.DiscreteVariable('Type'), lambda doc: doc['type']),
65 (data.DiscreteVariable('Language'), lambda doc: doc['fields']['lang']),
66 (data.StringVariable('Tags'),
67 lambda doc: ', '.join(tag['webTitle'] for tag in doc['tags'])),
68 (data.StringVariable('URL'), lambda doc: doc['webUrl']),
69 (data.ContinuousVariable('Word Count', number_of_decimals=0),
70 lambda doc: doc['fields']['wordcount']),
71 ]
72
73 text_features = [metas[0][0], metas[1][0]] # Headline + Content
74 title_indices = [-1] # Headline
75
76 def __init__(self, credentials, on_progress=None, should_break=None):
77 """
78 Args:
79 credentials (:class:`TheGuardianCredentials`): The Guardian Creentials.
80 on_progress (callable): Function for progress reporting.
81 should_break (callable): Function for early stopping.
82 """
83 self.per_page = ARTICLES_PER_PAGE
84 self.pages = 0
85 self.credentials = credentials
86 self.on_progress = on_progress or (lambda x, y: None)
87 self.should_break = should_break or (lambda: False)
88
89 self.results = []
90
91 def _search(self, query, from_date, to_date, page=1):
92 data = self._build_query(query, from_date, to_date, page)
93
94 response = requests.get(BASE_URL, data)
95 parsed = json.loads(response.text)
96
97 if page == 1: # store number of pages
98 self.pages = parsed['response']['pages']
99
100 self.results.extend(parsed['response']['results'])
101
102 def _build_query(self, query, from_date=None, to_date=None, page=1):
103 data = {
104 'q': query,
105 'api-key': self.credentials.key,
106 'page': str(page),
107 'show-fields': 'headline,trailText,body,bodyText,lang,wordcount',
108 'show-tags': 'all',
109 }
110 if from_date is not None:
111 data['from-date'] = from_date
112 if to_date is not None:
113 data['to-date'] = to_date
114
115 return data
116
117 def search(self, query, from_date=None, to_date=None, max_documents=None,
118 accumulate=False):
119 """
120 Search The Guardian API for articles.
121
122 Args:
123 query (str): A query for searching the articles by
124 from_date (str): Search only articles newer than the date provided.
125 Date should be in ISO format; e.g. '2016-12-31'.
126 to_date (str): Search only articles older than the date provided.
127 Date should be in ISO format; e.g. '2016-12-31'.
128 max_documents (int): Maximum number of documents to retrieve.
129 When not given, retrieve all documents.
130 accumulate (bool): A flag indicating whether to accumulate results
131 of multiple consequent search calls.
132
133 Returns:
134 :ref:`Corpus`
135 """
136 if not accumulate:
137 self.results = []
138
139 self._search(query, from_date, to_date)
140
141 pages = math.ceil(max_documents/self.per_page) if max_documents else self.pages
142 self.on_progress(self.per_page, pages * self.per_page)
143
144 for p in range(2, pages+1): # to one based
145 if self.should_break():
146 break
147 self._search(query, from_date, to_date, p)
148 self.on_progress(p*self.per_page, pages * self.per_page)
149
150 c = Corpus.from_documents(
151 self.results, 'The Guardian', self.attributes, self.class_vars,
152 self.metas, title_indices=self.title_indices)
153 c.text_features = self.text_features
154 return c
155
156
157 if __name__ == '__main__':
158 credentials = TheGuardianCredentials('test')
159 print(credentials.valid)
160 api = TheGuardianAPI(credentials=credentials)
161 c = api.search('refugees', max_documents=10)
162 print(c)
163
[end of orangecontrib/text/guardian.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/orangecontrib/text/guardian.py b/orangecontrib/text/guardian.py
--- a/orangecontrib/text/guardian.py
+++ b/orangecontrib/text/guardian.py
@@ -18,6 +18,7 @@
import requests
import math
import json
+import os
from Orange import data
@@ -155,7 +156,8 @@
if __name__ == '__main__':
- credentials = TheGuardianCredentials('test')
+ key = os.getenv('THE_GUARDIAN_API_KEY', 'test')
+ credentials = TheGuardianCredentials(key)
print(credentials.valid)
api = TheGuardianAPI(credentials=credentials)
c = api.search('refugees', max_documents=10)
|
{"golden_diff": "diff --git a/orangecontrib/text/guardian.py b/orangecontrib/text/guardian.py\n--- a/orangecontrib/text/guardian.py\n+++ b/orangecontrib/text/guardian.py\n@@ -18,6 +18,7 @@\n import requests\n import math\n import json\n+import os\n \n from Orange import data\n \n@@ -155,7 +156,8 @@\n \n \n if __name__ == '__main__':\n- credentials = TheGuardianCredentials('test')\n+ key = os.getenv('THE_GUARDIAN_API_KEY', 'test')\n+ credentials = TheGuardianCredentials(key)\n print(credentials.valid)\n api = TheGuardianAPI(credentials=credentials)\n c = api.search('refugees', max_documents=10)\n", "issue": "Guardian: Fix failing tests on Travis\n<!--\r\nThis is an issue template. Please fill in the relevant details in the\r\nsections below.\r\n-->\r\n\r\n##### Text version\r\n<!-- From menu _Options\u2192Add-ons\u2192Orange3-Text_ or code `orangecontrib.text.version.full_version` -->\r\n0.3.0\r\n\r\n##### Orange version\r\n<!-- From menu _Help\u2192About\u2192Version_ or code `Orange.version.full_version` -->\r\n3.15.dev\r\n\r\n##### Expected behavior\r\nTests pass.\r\n\r\n\r\n##### Actual behavior\r\nGuardian tests is failing.\r\n\r\n\r\n##### Steps to reproduce the behavior\r\n\r\n\r\n\r\n##### Additional info (worksheets, data, screenshots, ...)\r\nFix tests.\r\n\r\n\n", "before_files": [{"content": "\"\"\" This module fetches data from The Guardian API.\n\nTo use first create :class:`TheGuardianCredentials`:\n\n >>> from orangecontrib.text.guardian import TheGuardianCredentials\n >>> credentials = TheGuardianCredentials('<your-api-key>')\n\nThen create :class:`TheGuardianAPI` object and use it for searching:\n\n >>> from orangecontrib.text.guardian import TheGuardianAPI\n >>> api = TheGuardianAPI(credentials)\n >>> corpus = api.search('Slovenia', max_documents=10)\n >>> len(corpus)\n 10\n\n\"\"\"\n\nimport requests\nimport math\nimport json\n\nfrom Orange import data\n\nfrom orangecontrib.text.corpus import Corpus\n\n\nBASE_URL = 'http://content.guardianapis.com/search'\nARTICLES_PER_PAGE = 10\n\n\nclass TheGuardianCredentials:\n \"\"\" The Guardian API credentials. \"\"\"\n def __init__(self, key):\n \"\"\"\n Args:\n key (str): The Guardian API key. Use `test` for testing purposes.\n \"\"\"\n self.key = key\n\n @property\n def valid(self):\n \"\"\" Check if given API key is valid. 
\"\"\"\n response = requests.get(BASE_URL, {'api-key': self.key})\n return response.status_code != 403 # 403 == Forbidden\n\n def __eq__(self, other):\n return self.key == other.key\n\n\nclass TheGuardianAPI:\n attributes = []\n\n class_vars = [\n (data.DiscreteVariable('Section'), lambda doc: doc['sectionName']),\n ]\n\n tv = data.TimeVariable('Publication Date')\n metas = [\n (data.StringVariable('Headline'), lambda doc: doc['fields']['headline']),\n (data.StringVariable('Content'), lambda doc: doc['fields']['bodyText']),\n (data.StringVariable('Trail Text'), lambda doc: doc['fields']['trailText']),\n (data.StringVariable('HTML'), lambda doc: doc['fields']['body']),\n (tv, lambda doc: TheGuardianAPI.tv.parse(doc['webPublicationDate'])),\n (data.DiscreteVariable('Type'), lambda doc: doc['type']),\n (data.DiscreteVariable('Language'), lambda doc: doc['fields']['lang']),\n (data.StringVariable('Tags'),\n lambda doc: ', '.join(tag['webTitle'] for tag in doc['tags'])),\n (data.StringVariable('URL'), lambda doc: doc['webUrl']),\n (data.ContinuousVariable('Word Count', number_of_decimals=0),\n lambda doc: doc['fields']['wordcount']),\n ]\n\n text_features = [metas[0][0], metas[1][0]] # Headline + Content\n title_indices = [-1] # Headline\n\n def __init__(self, credentials, on_progress=None, should_break=None):\n \"\"\"\n Args:\n credentials (:class:`TheGuardianCredentials`): The Guardian Creentials.\n on_progress (callable): Function for progress reporting.\n should_break (callable): Function for early stopping.\n \"\"\"\n self.per_page = ARTICLES_PER_PAGE\n self.pages = 0\n self.credentials = credentials\n self.on_progress = on_progress or (lambda x, y: None)\n self.should_break = should_break or (lambda: False)\n\n self.results = []\n\n def _search(self, query, from_date, to_date, page=1):\n data = self._build_query(query, from_date, to_date, page)\n\n response = requests.get(BASE_URL, data)\n parsed = json.loads(response.text)\n\n if page == 1: # store number of pages\n self.pages = parsed['response']['pages']\n\n self.results.extend(parsed['response']['results'])\n\n def _build_query(self, query, from_date=None, to_date=None, page=1):\n data = {\n 'q': query,\n 'api-key': self.credentials.key,\n 'page': str(page),\n 'show-fields': 'headline,trailText,body,bodyText,lang,wordcount',\n 'show-tags': 'all',\n }\n if from_date is not None:\n data['from-date'] = from_date\n if to_date is not None:\n data['to-date'] = to_date\n\n return data\n\n def search(self, query, from_date=None, to_date=None, max_documents=None,\n accumulate=False):\n \"\"\"\n Search The Guardian API for articles.\n\n Args:\n query (str): A query for searching the articles by\n from_date (str): Search only articles newer than the date provided.\n Date should be in ISO format; e.g. '2016-12-31'.\n to_date (str): Search only articles older than the date provided.\n Date should be in ISO format; e.g. 
'2016-12-31'.\n max_documents (int): Maximum number of documents to retrieve.\n When not given, retrieve all documents.\n accumulate (bool): A flag indicating whether to accumulate results\n of multiple consequent search calls.\n\n Returns:\n :ref:`Corpus`\n \"\"\"\n if not accumulate:\n self.results = []\n\n self._search(query, from_date, to_date)\n\n pages = math.ceil(max_documents/self.per_page) if max_documents else self.pages\n self.on_progress(self.per_page, pages * self.per_page)\n\n for p in range(2, pages+1): # to one based\n if self.should_break():\n break\n self._search(query, from_date, to_date, p)\n self.on_progress(p*self.per_page, pages * self.per_page)\n\n c = Corpus.from_documents(\n self.results, 'The Guardian', self.attributes, self.class_vars,\n self.metas, title_indices=self.title_indices)\n c.text_features = self.text_features\n return c\n\n\nif __name__ == '__main__':\n credentials = TheGuardianCredentials('test')\n print(credentials.valid)\n api = TheGuardianAPI(credentials=credentials)\n c = api.search('refugees', max_documents=10)\n print(c)\n", "path": "orangecontrib/text/guardian.py"}]}
| 2,387 | 167 |
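The one-line fix above swaps the hard-coded `'test'` key for an environment lookup, so CI can inject a real key while local runs keep the public test key. Roughly, the call site becomes the following; the `THE_GUARDIAN_API_KEY` variable name is the one introduced by the patch, and the rest mirrors the module's own docstring usage:

```python
import os

from orangecontrib.text.guardian import TheGuardianAPI, TheGuardianCredentials

key = os.getenv("THE_GUARDIAN_API_KEY", "test")   # real key on CI, 'test' otherwise
credentials = TheGuardianCredentials(key)
if credentials.valid:
    corpus = TheGuardianAPI(credentials).search("Slovenia", max_documents=10)
```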
gh_patches_debug_33722
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-1225
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
redis-py-cluster new API 2.0.0
### Which version of dd-trace-py are you using?
latest 0.34.0
### Which version of the libraries are you using?
redis-py-cluster 2.0.0
### How can we reproduce your problem?
Change the redis-py-cluster version in https://github.com/DataDog/dd-trace-py/blob/master/tox.ini
### What is the result that you get?
ERROR. It is no longer StrictRedisCluster but just RedisCluster.
### What is the result that you expected?
Moving to the new API.
</issue>
<code>
[start of ddtrace/contrib/rediscluster/patch.py]
1 # 3p
2 import rediscluster
3 from ddtrace.vendor import wrapt
4
5 # project
6 from ddtrace import config
7 from ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY
8 from ...pin import Pin
9 from ...ext import SpanTypes, redis as redisx
10 from ...utils.wrappers import unwrap
11 from ..redis.patch import traced_execute_command, traced_pipeline
12 from ..redis.util import format_command_args
13
14
15 def patch():
16 """Patch the instrumented methods
17 """
18 if getattr(rediscluster, '_datadog_patch', False):
19 return
20 setattr(rediscluster, '_datadog_patch', True)
21
22 _w = wrapt.wrap_function_wrapper
23 _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)
24 _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)
25 _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)
26 Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)
27
28
29 def unpatch():
30 if getattr(rediscluster, '_datadog_patch', False):
31 setattr(rediscluster, '_datadog_patch', False)
32 unwrap(rediscluster.StrictRedisCluster, 'execute_command')
33 unwrap(rediscluster.StrictRedisCluster, 'pipeline')
34 unwrap(rediscluster.StrictClusterPipeline, 'execute')
35
36
37 #
38 # tracing functions
39 #
40
41 def traced_execute_pipeline(func, instance, args, kwargs):
42 pin = Pin.get_from(instance)
43 if not pin or not pin.enabled():
44 return func(*args, **kwargs)
45
46 cmds = [format_command_args(c.args) for c in instance.command_stack]
47 resource = '\n'.join(cmds)
48 tracer = pin.tracer
49 with tracer.trace(redisx.CMD, resource=resource, service=pin.service, span_type=SpanTypes.REDIS) as s:
50 s.set_tag(SPAN_MEASURED_KEY)
51 s.set_tag(redisx.RAWCMD, resource)
52 s.set_metric(redisx.PIPELINE_LEN, len(instance.command_stack))
53
54 # set analytics sample rate if enabled
55 s.set_tag(
56 ANALYTICS_SAMPLE_RATE_KEY,
57 config.rediscluster.get_analytics_sample_rate()
58 )
59
60 return func(*args, **kwargs)
61
[end of ddtrace/contrib/rediscluster/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/contrib/rediscluster/patch.py b/ddtrace/contrib/rediscluster/patch.py
--- a/ddtrace/contrib/rediscluster/patch.py
+++ b/ddtrace/contrib/rediscluster/patch.py
@@ -12,6 +12,11 @@
from ..redis.util import format_command_args
+# DEV: In `2.0.0` `__version__` is a string and `VERSION` is a tuple,
+# but in `1.x.x` `__version__` is a tuple annd `VERSION` does not exist
+REDISCLUSTER_VERSION = getattr(rediscluster, 'VERSION', rediscluster.__version__)
+
+
def patch():
"""Patch the instrumented methods
"""
@@ -20,18 +25,30 @@
setattr(rediscluster, '_datadog_patch', True)
_w = wrapt.wrap_function_wrapper
- _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)
- _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)
- _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)
- Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)
+ if REDISCLUSTER_VERSION >= (2, 0, 0):
+ _w('rediscluster', 'RedisCluster.execute_command', traced_execute_command)
+ _w('rediscluster', 'RedisCluster.pipeline', traced_pipeline)
+ _w('rediscluster', 'ClusterPipeline.execute', traced_execute_pipeline)
+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.RedisCluster)
+ else:
+ _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)
+ _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)
+ _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)
+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)
def unpatch():
if getattr(rediscluster, '_datadog_patch', False):
setattr(rediscluster, '_datadog_patch', False)
- unwrap(rediscluster.StrictRedisCluster, 'execute_command')
- unwrap(rediscluster.StrictRedisCluster, 'pipeline')
- unwrap(rediscluster.StrictClusterPipeline, 'execute')
+
+ if REDISCLUSTER_VERSION >= (2, 0, 0):
+ unwrap(rediscluster.RedisCluster, 'execute_command')
+ unwrap(rediscluster.RedisCluster, 'pipeline')
+ unwrap(rediscluster.ClusterPipeline, 'execute')
+ else:
+ unwrap(rediscluster.StrictRedisCluster, 'execute_command')
+ unwrap(rediscluster.StrictRedisCluster, 'pipeline')
+ unwrap(rediscluster.StrictClusterPipeline, 'execute')
#
|
{"golden_diff": "diff --git a/ddtrace/contrib/rediscluster/patch.py b/ddtrace/contrib/rediscluster/patch.py\n--- a/ddtrace/contrib/rediscluster/patch.py\n+++ b/ddtrace/contrib/rediscluster/patch.py\n@@ -12,6 +12,11 @@\n from ..redis.util import format_command_args\n \n \n+# DEV: In `2.0.0` `__version__` is a string and `VERSION` is a tuple,\n+# but in `1.x.x` `__version__` is a tuple annd `VERSION` does not exist\n+REDISCLUSTER_VERSION = getattr(rediscluster, 'VERSION', rediscluster.__version__)\n+\n+\n def patch():\n \"\"\"Patch the instrumented methods\n \"\"\"\n@@ -20,18 +25,30 @@\n setattr(rediscluster, '_datadog_patch', True)\n \n _w = wrapt.wrap_function_wrapper\n- _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)\n- _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)\n- _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)\n- Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)\n+ if REDISCLUSTER_VERSION >= (2, 0, 0):\n+ _w('rediscluster', 'RedisCluster.execute_command', traced_execute_command)\n+ _w('rediscluster', 'RedisCluster.pipeline', traced_pipeline)\n+ _w('rediscluster', 'ClusterPipeline.execute', traced_execute_pipeline)\n+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.RedisCluster)\n+ else:\n+ _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)\n+ _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)\n+ _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)\n+ Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)\n \n \n def unpatch():\n if getattr(rediscluster, '_datadog_patch', False):\n setattr(rediscluster, '_datadog_patch', False)\n- unwrap(rediscluster.StrictRedisCluster, 'execute_command')\n- unwrap(rediscluster.StrictRedisCluster, 'pipeline')\n- unwrap(rediscluster.StrictClusterPipeline, 'execute')\n+\n+ if REDISCLUSTER_VERSION >= (2, 0, 0):\n+ unwrap(rediscluster.RedisCluster, 'execute_command')\n+ unwrap(rediscluster.RedisCluster, 'pipeline')\n+ unwrap(rediscluster.ClusterPipeline, 'execute')\n+ else:\n+ unwrap(rediscluster.StrictRedisCluster, 'execute_command')\n+ unwrap(rediscluster.StrictRedisCluster, 'pipeline')\n+ unwrap(rediscluster.StrictClusterPipeline, 'execute')\n \n \n #\n", "issue": "redis-py-cluster new API 2.0.0\n### Which version of dd-trace-py are you using?\r\n lastest 0.34.0\r\n\r\n### Which version of the libraries are you using?\r\n\r\nredis-py-cluster 2.0.0\r\n\r\n### How can we reproduce your problem?\r\n\r\nchange https://github.com/DataDog/dd-trace-py/blob/master/tox.ini redis-py-cluster version\r\n\r\n### What is the result that you get?\r\n\r\nERROR. 
It's no more StrictRedisCluster but just RedisCluster\r\n\r\n### What is result that you expected?\r\n\r\nmoving to new api\r\n\n", "before_files": [{"content": "# 3p\nimport rediscluster\nfrom ddtrace.vendor import wrapt\n\n# project\nfrom ddtrace import config\nfrom ...constants import ANALYTICS_SAMPLE_RATE_KEY, SPAN_MEASURED_KEY\nfrom ...pin import Pin\nfrom ...ext import SpanTypes, redis as redisx\nfrom ...utils.wrappers import unwrap\nfrom ..redis.patch import traced_execute_command, traced_pipeline\nfrom ..redis.util import format_command_args\n\n\ndef patch():\n \"\"\"Patch the instrumented methods\n \"\"\"\n if getattr(rediscluster, '_datadog_patch', False):\n return\n setattr(rediscluster, '_datadog_patch', True)\n\n _w = wrapt.wrap_function_wrapper\n _w('rediscluster', 'StrictRedisCluster.execute_command', traced_execute_command)\n _w('rediscluster', 'StrictRedisCluster.pipeline', traced_pipeline)\n _w('rediscluster', 'StrictClusterPipeline.execute', traced_execute_pipeline)\n Pin(service=redisx.DEFAULT_SERVICE, app=redisx.APP).onto(rediscluster.StrictRedisCluster)\n\n\ndef unpatch():\n if getattr(rediscluster, '_datadog_patch', False):\n setattr(rediscluster, '_datadog_patch', False)\n unwrap(rediscluster.StrictRedisCluster, 'execute_command')\n unwrap(rediscluster.StrictRedisCluster, 'pipeline')\n unwrap(rediscluster.StrictClusterPipeline, 'execute')\n\n\n#\n# tracing functions\n#\n\ndef traced_execute_pipeline(func, instance, args, kwargs):\n pin = Pin.get_from(instance)\n if not pin or not pin.enabled():\n return func(*args, **kwargs)\n\n cmds = [format_command_args(c.args) for c in instance.command_stack]\n resource = '\\n'.join(cmds)\n tracer = pin.tracer\n with tracer.trace(redisx.CMD, resource=resource, service=pin.service, span_type=SpanTypes.REDIS) as s:\n s.set_tag(SPAN_MEASURED_KEY)\n s.set_tag(redisx.RAWCMD, resource)\n s.set_metric(redisx.PIPELINE_LEN, len(instance.command_stack))\n\n # set analytics sample rate if enabled\n s.set_tag(\n ANALYTICS_SAMPLE_RATE_KEY,\n config.rediscluster.get_analytics_sample_rate()\n )\n\n return func(*args, **kwargs)\n", "path": "ddtrace/contrib/rediscluster/patch.py"}]}
| 1,271 | 613 |
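The patch above gates everything on the installed rediscluster version, using the detail (noted in its own comment) that 1.x ships `__version__` as a tuple while 2.x adds a `VERSION` tuple. The detection plus class selection reduces to a few lines; this assumes the rediscluster package is importable and only mirrors the patch, it is not dd-trace-py code:

```python
import rediscluster

# 2.x: VERSION is a tuple; 1.x: no VERSION, but __version__ is already a tuple.
REDISCLUSTER_VERSION = getattr(rediscluster, "VERSION", rediscluster.__version__)

if REDISCLUSTER_VERSION >= (2, 0, 0):
    cluster_cls, pipeline_cls = rediscluster.RedisCluster, rediscluster.ClusterPipeline
else:
    cluster_cls, pipeline_cls = rediscluster.StrictRedisCluster, rediscluster.StrictClusterPipeline
```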
gh_patches_debug_7855
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-1540
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: FieldMeta unexpected keyword argument 'constant'
### Description
After going from `polyfactory==2.0.0alpha1` => `2.0.0` I end up with `FieldMeta.__init__() got an unexpected keyword argument 'constant'`
Looks like the example generation for the openapi docs is broken because the `constant` boolean field is removed from 2.0.0
https://github.com/litestar-org/polyfactory/blob/v2.0.0/polyfactory/field_meta.py#L39-L48 (2.0.0)
vs
https://github.com/litestar-org/polyfactory/blob/v2.0.0alpha1/polyfactory/field_meta.py#L12-L21 (2.0.0a1)
And is set by https://github.com/litestar-org/litestar/blob/v2.0.0alpha4/litestar/_openapi/schema_generation/examples.py#L44 (2.0.0a4)
Running on docker `python:3.11-alpine`
### URL to code causing the issue
_No response_
### MCVE
```python
class TestController(Controller):
path = "/test"
@post(
path="/route",
summary="Test Route",
tags=["Test"],
responses={503: ResponseSpec(data_container=ServiceUnavailableModel, description="Device or service unavailable")},
)
async def test_route(self, data: SomeDataModel) -> SomeResponseModel:
return {"test": data}
```
The `responses=` line causes this error.
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py", line 149, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/litestar/routes/http.py", line 77, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/routes/http.py", line 129, in _get_response_for_request
response = await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/routes/http.py", line 158, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/routes/http.py", line 210, in _get_response_data
data = route_handler.fn.value(**parsed_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/openapi/controller.py", line 221, in root
return Response(content=render_method(request), media_type=MediaType.HTML)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/openapi/controller.py", line 397, in render_redoc
schema = self.get_schema_from_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/openapi/controller.py", line 105, in get_schema_from_request
return request.app.openapi_schema
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/app.py", line 510, in openapi_schema
self.update_openapi_schema()
File "/usr/local/lib/python3.11/site-packages/litestar/app.py", line 825, in update_openapi_schema
path_item, created_operation_ids = create_path_item(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/path_item.py", line 125, in create_path_item
responses=create_responses(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/responses.py", line 259, in create_responses
for status_code, response in create_additional_responses(
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/responses.py", line 226, in create_additional_responses
schema = create_schema(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 724, in create_schema
result = create_schema_for_pydantic_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 541, in create_schema_for_pydantic_model
properties={
^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 542, in <dictcomp>
(f.alias or f.name): create_schema(
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 769, in create_schema
return _process_schema_result(field=field, schema=result, generate_examples=generate_examples, schemas=schemas)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 680, in _process_schema_result
schema.examples = create_examples_for_field(field=field)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/examples.py", line 60, in create_examples_for_field
field_meta = _create_field_meta(field)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/examples.py", line 41, in _create_field_meta
return FieldMeta(
^^^^^^^^^^
TypeError: FieldMeta.__init__() got an unexpected keyword argument 'constant'
```
### Litestar Version
Litestar 2.0.0a4
polyfactory 2.0.0alpha1 (no error)
polyfactory 2.0.0 (error)
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above)
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
</issue>
<code>
[start of litestar/_openapi/schema_generation/examples.py]
1 from __future__ import annotations
2
3 from enum import Enum
4 from typing import TYPE_CHECKING, Any
5
6 from _decimal import Decimal
7 from polyfactory.exceptions import ParameterException
8 from polyfactory.field_meta import FieldMeta, Null
9
10 from litestar.openapi.spec import Example
11 from litestar.types import Empty
12 from litestar.utils import is_pydantic_model_instance
13
14 try:
15 from polyfactory.factories.pydantic_factory import ModelFactory as Factory
16 except ImportError:
17 from polyfactory.factories import DataclassFactory as Factory # type: ignore[assignment]
18
19
20 if TYPE_CHECKING:
21 from litestar._signature.field import SignatureField
22
23
24 def _normalize_example_value(value: Any) -> Any:
25 """Normalize the example value to make it look a bit prettier."""
26 if isinstance(value, (Decimal, float)):
27 value = round(float(value), 2)
28 if isinstance(value, Enum):
29 value = value.value
30 if is_pydantic_model_instance(value):
31 value = value.dict()
32 if isinstance(value, (list, set)):
33 value = [_normalize_example_value(v) for v in value]
34 if isinstance(value, dict):
35 for k, v in value.items():
36 value[k] = _normalize_example_value(v)
37 return value
38
39
40 def _create_field_meta(field: "SignatureField") -> FieldMeta:
41 return FieldMeta(
42 name=field.name,
43 annotation=field.field_type,
44 constant=field.is_const,
45 default=field.default_value if field.default_value is not Empty else Null,
46 children=[_create_field_meta(child) for child in field.children] if field.children else None,
47 )
48
49
50 def create_examples_for_field(field: "SignatureField") -> list["Example"]:
51 """Create an OpenAPI Example instance.
52
53 Args:
54 field: A signature field.
55
56 Returns:
57 A list including a single example.
58 """
59 try:
60 field_meta = _create_field_meta(field)
61 value = _normalize_example_value(Factory.get_field_value(field_meta))
62 return [Example(description=f"Example {field.name} value", value=value)]
63 except ParameterException: # pragma: no cover
64 return []
65
[end of litestar/_openapi/schema_generation/examples.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/litestar/_openapi/schema_generation/examples.py b/litestar/_openapi/schema_generation/examples.py
--- a/litestar/_openapi/schema_generation/examples.py
+++ b/litestar/_openapi/schema_generation/examples.py
@@ -41,7 +41,7 @@
return FieldMeta(
name=field.name,
annotation=field.field_type,
- constant=field.is_const,
+ constraints={"constant": field.is_const},
default=field.default_value if field.default_value is not Empty else Null,
children=[_create_field_meta(child) for child in field.children] if field.children else None,
)
|
{"golden_diff": "diff --git a/litestar/_openapi/schema_generation/examples.py b/litestar/_openapi/schema_generation/examples.py\n--- a/litestar/_openapi/schema_generation/examples.py\n+++ b/litestar/_openapi/schema_generation/examples.py\n@@ -41,7 +41,7 @@\n return FieldMeta(\n name=field.name,\n annotation=field.field_type,\n- constant=field.is_const,\n+ constraints={\"constant\": field.is_const},\n default=field.default_value if field.default_value is not Empty else Null,\n children=[_create_field_meta(child) for child in field.children] if field.children else None,\n )\n", "issue": "Bug: FieldMeta unexpected keyword argument 'constant'\n### Description\r\n\r\nAfter going from `polyfactory==2.0.0alpha1` => `2.0.0` I end up with `FieldMeta.__init__() got an unexpected keyword argument 'constant'`\r\n\r\nLooks like the example generation for the openapi docs is broken because the `constant` boolean field is removed from 2.0.0\r\n\r\nhttps://github.com/litestar-org/polyfactory/blob/v2.0.0/polyfactory/field_meta.py#L39-L48 (2.0.0)\r\n\r\nvs\r\n\r\nhttps://github.com/litestar-org/polyfactory/blob/v2.0.0alpha1/polyfactory/field_meta.py#L12-L21 (2.0.0a1)\r\n\r\nAnd is set by https://github.com/litestar-org/litestar/blob/v2.0.0alpha4/litestar/_openapi/schema_generation/examples.py#L44 (2.0.0a4)\r\n\r\nRunning on docker `python:3.11-alpine`\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\nclass TestController(Controller):\r\n path = \"/test\"\r\n\r\n @post(\r\n path=\"/route\",\r\n summary=\"Test Route\",\r\n tags=[\"Test\"],\r\n responses={503: ResponseSpec(data_container=ServiceUnavailableModel, description=\"Device or service unavailable\")},\r\n )\r\n async def test_route(self, data: SomeDataModel) -> SomeResponseModel:\r\n return {\"test\": data}\r\n```\r\n\r\nThe `responses=` line causes this error. 
\r\n\r\n\r\n### Steps to reproduce\r\n\r\n_No response_\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```bash\r\nTraceback (most recent call last):\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/middleware/exceptions/middleware.py\", line 149, in __call__\r\n await self.app(scope, receive, send)\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/routes/http.py\", line 77, in handle\r\n response = await self._get_response_for_request(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/routes/http.py\", line 129, in _get_response_for_request\r\n response = await self._call_handler_function(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/routes/http.py\", line 158, in _call_handler_function\r\n response_data, cleanup_group = await self._get_response_data(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/routes/http.py\", line 210, in _get_response_data\r\n data = route_handler.fn.value(**parsed_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/openapi/controller.py\", line 221, in root\r\n return Response(content=render_method(request), media_type=MediaType.HTML)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/openapi/controller.py\", line 397, in render_redoc\r\n schema = self.get_schema_from_request(request)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/openapi/controller.py\", line 105, in get_schema_from_request\r\n return request.app.openapi_schema\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/app.py\", line 510, in openapi_schema\r\n self.update_openapi_schema()\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/app.py\", line 825, in update_openapi_schema\r\n path_item, created_operation_ids = create_path_item(\r\n ^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/path_item.py\", line 125, in create_path_item\r\n responses=create_responses(\r\n ^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/responses.py\", line 259, in create_responses\r\n for status_code, response in create_additional_responses(\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/responses.py\", line 226, in create_additional_responses\r\n schema = create_schema(\r\n ^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py\", line 724, in create_schema\r\n result = create_schema_for_pydantic_model(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py\", line 541, in create_schema_for_pydantic_model\r\n properties={\r\n ^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py\", line 542, in <dictcomp>\r\n (f.alias or f.name): create_schema(\r\n ^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py\", line 769, in create_schema\r\n return _process_schema_result(field=field, schema=result, generate_examples=generate_examples, schemas=schemas)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile 
\"/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py\", line 680, in _process_schema_result\r\n schema.examples = create_examples_for_field(field=field)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/examples.py\", line 60, in create_examples_for_field\r\n field_meta = _create_field_meta(field)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFile \"/usr/local/lib/python3.11/site-packages/litestar/_openapi/schema_generation/examples.py\", line 41, in _create_field_meta\r\n return FieldMeta(\r\n ^^^^^^^^^^\r\nTypeError: FieldMeta.__init__() got an unexpected keyword argument 'constant'\r\n```\r\n\r\n\r\n### Litestar Version\r\n\r\nLitestar 2.0.0a4\r\npolyfactory 2.0.0alpha1 (no error)\r\npolyfactory 2.0.0 (error)\r\n\r\n### Platform\r\n\r\n- [ ] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [X] Other (Please specify in the description above)\nStaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom enum import Enum\nfrom typing import TYPE_CHECKING, Any\n\nfrom _decimal import Decimal\nfrom polyfactory.exceptions import ParameterException\nfrom polyfactory.field_meta import FieldMeta, Null\n\nfrom litestar.openapi.spec import Example\nfrom litestar.types import Empty\nfrom litestar.utils import is_pydantic_model_instance\n\ntry:\n from polyfactory.factories.pydantic_factory import ModelFactory as Factory\nexcept ImportError:\n from polyfactory.factories import DataclassFactory as Factory # type: ignore[assignment]\n\n\nif TYPE_CHECKING:\n from litestar._signature.field import SignatureField\n\n\ndef _normalize_example_value(value: Any) -> Any:\n \"\"\"Normalize the example value to make it look a bit prettier.\"\"\"\n if isinstance(value, (Decimal, float)):\n value = round(float(value), 2)\n if isinstance(value, Enum):\n value = value.value\n if is_pydantic_model_instance(value):\n value = value.dict()\n if isinstance(value, (list, set)):\n value = [_normalize_example_value(v) for v in value]\n if isinstance(value, dict):\n for k, v in value.items():\n value[k] = _normalize_example_value(v)\n return value\n\n\ndef _create_field_meta(field: \"SignatureField\") -> FieldMeta:\n return FieldMeta(\n name=field.name,\n annotation=field.field_type,\n constant=field.is_const,\n default=field.default_value if field.default_value is not Empty else Null,\n children=[_create_field_meta(child) for child in field.children] if field.children else None,\n )\n\n\ndef create_examples_for_field(field: \"SignatureField\") -> list[\"Example\"]:\n \"\"\"Create an OpenAPI Example instance.\n\n Args:\n field: A signature field.\n\n Returns:\n A list including a single example.\n \"\"\"\n try:\n field_meta = _create_field_meta(field)\n value = _normalize_example_value(Factory.get_field_value(field_meta))\n return [Example(description=f\"Example 
{field.name} value\", value=value)]\n except ParameterException: # pragma: no cover\n return []\n", "path": "litestar/_openapi/schema_generation/examples.py"}]}
| 2,782 | 139 |
gh_patches_debug_34609
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-285
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for Message Reactions to ActivityHandler
ActivityHandler should be extended to include Message Reactions. This has already been added to the C# and JavaScript SDKs.
Here is a pointer to the JavaScript implementation:
https://github.com/microsoft/botbuilder-js/pull/1038
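As a rough sketch of the shape the Python handlers could take (mirroring the JS PR above; the method names and dispatch order here are assumptions, not the final API):

```python
# Sketch of possible reaction handlers; not the committed botbuilder implementation.
from typing import Any, List


class MessageReactionHandlerSketch:
    async def on_message_reaction_activity(self, turn_context: Any) -> None:
        # Dispatch added and removed reactions separately, as the JS handler does.
        if turn_context.activity.reactions_added:
            await self.on_reactions_added(turn_context.activity.reactions_added, turn_context)
        if turn_context.activity.reactions_removed:
            await self.on_reactions_removed(turn_context.activity.reactions_removed, turn_context)

    async def on_reactions_added(self, reactions: List[Any], turn_context: Any) -> None:
        return  # override in a concrete bot

    async def on_reactions_removed(self, reactions: List[Any], turn_context: Any) -> None:
        return  # override in a concrete bot
```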
</issue>
<code>
[start of libraries/botbuilder-core/botbuilder/core/activity_handler.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 from botbuilder.schema import ActivityTypes, ChannelAccount
5 from .turn_context import TurnContext
6
7
8 class ActivityHandler:
9 async def on_turn(self, turn_context: TurnContext):
10 if turn_context is None:
11 raise TypeError("ActivityHandler.on_turn(): turn_context cannot be None.")
12
13 if hasattr(turn_context, "activity") and turn_context.activity is None:
14 raise TypeError(
15 "ActivityHandler.on_turn(): turn_context must have a non-None activity."
16 )
17
18 if (
19 hasattr(turn_context.activity, "type")
20 and turn_context.activity.type is None
21 ):
22 raise TypeError(
23 "ActivityHandler.on_turn(): turn_context activity must have a non-None type."
24 )
25
26 if turn_context.activity.type == ActivityTypes.message:
27 await self.on_message_activity(turn_context)
28 elif turn_context.activity.type == ActivityTypes.conversation_update:
29 await self.on_conversation_update_activity(turn_context)
30 elif turn_context.activity.type == ActivityTypes.event:
31 await self.on_event_activity(turn_context)
32 else:
33 await self.on_unrecognized_activity_type(turn_context)
34
35 async def on_message_activity( # pylint: disable=unused-argument
36 self, turn_context: TurnContext
37 ):
38 return
39
40 async def on_conversation_update_activity(self, turn_context: TurnContext):
41 if (
42 turn_context.activity.members_added is not None
43 and turn_context.activity.members_added
44 ):
45 return await self.on_members_added_activity(
46 turn_context.activity.members_added, turn_context
47 )
48 if (
49 turn_context.activity.members_removed is not None
50 and turn_context.activity.members_removed
51 ):
52 return await self.on_members_removed_activity(
53 turn_context.activity.members_removed, turn_context
54 )
55 return
56
57 async def on_members_added_activity(
58 self, members_added: ChannelAccount, turn_context: TurnContext
59 ): # pylint: disable=unused-argument
60 return
61
62 async def on_members_removed_activity(
63 self, members_removed: ChannelAccount, turn_context: TurnContext
64 ): # pylint: disable=unused-argument
65 return
66
67 async def on_event_activity(self, turn_context: TurnContext):
68 if turn_context.activity.name == "tokens/response":
69 return await self.on_token_response_event(turn_context)
70
71 return await self.on_event(turn_context)
72
73 async def on_token_response_event( # pylint: disable=unused-argument
74 self, turn_context: TurnContext
75 ):
76 return
77
78 async def on_event( # pylint: disable=unused-argument
79 self, turn_context: TurnContext
80 ):
81 return
82
83 async def on_unrecognized_activity_type( # pylint: disable=unused-argument
84 self, turn_context: TurnContext
85 ):
86 return
87
[end of libraries/botbuilder-core/botbuilder/core/activity_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libraries/botbuilder-core/botbuilder/core/activity_handler.py b/libraries/botbuilder-core/botbuilder/core/activity_handler.py
--- a/libraries/botbuilder-core/botbuilder/core/activity_handler.py
+++ b/libraries/botbuilder-core/botbuilder/core/activity_handler.py
@@ -1,7 +1,8 @@
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License.
+from typing import List
-from botbuilder.schema import ActivityTypes, ChannelAccount
+from botbuilder.schema import ActivityTypes, ChannelAccount, MessageReaction
from .turn_context import TurnContext
@@ -27,6 +28,8 @@
await self.on_message_activity(turn_context)
elif turn_context.activity.type == ActivityTypes.conversation_update:
await self.on_conversation_update_activity(turn_context)
+ elif turn_context.activity.type == ActivityTypes.message_reaction:
+ await self.on_message_reaction_activity(turn_context)
elif turn_context.activity.type == ActivityTypes.event:
await self.on_event_activity(turn_context)
else:
@@ -64,6 +67,27 @@
): # pylint: disable=unused-argument
return
+ async def on_message_reaction_activity(self, turn_context: TurnContext):
+ if turn_context.activity.reactions_added is not None:
+ await self.on_reactions_added(
+ turn_context.activity.reactions_added, turn_context
+ )
+
+ if turn_context.activity.reactions_removed is not None:
+ await self.on_reactions_removed(
+ turn_context.activity.reactions_removed, turn_context
+ )
+
+ async def on_reactions_added( # pylint: disable=unused-argument
+ self, message_reactions: List[MessageReaction], turn_context: TurnContext
+ ):
+ return
+
+ async def on_reactions_removed( # pylint: disable=unused-argument
+ self, message_reactions: List[MessageReaction], turn_context: TurnContext
+ ):
+ return
+
async def on_event_activity(self, turn_context: TurnContext):
if turn_context.activity.name == "tokens/response":
return await self.on_token_response_event(turn_context)
|
{"golden_diff": "diff --git a/libraries/botbuilder-core/botbuilder/core/activity_handler.py b/libraries/botbuilder-core/botbuilder/core/activity_handler.py\n--- a/libraries/botbuilder-core/botbuilder/core/activity_handler.py\n+++ b/libraries/botbuilder-core/botbuilder/core/activity_handler.py\n@@ -1,7 +1,8 @@\n # Copyright (c) Microsoft Corporation. All rights reserved.\n # Licensed under the MIT License.\n+from typing import List\n \n-from botbuilder.schema import ActivityTypes, ChannelAccount\n+from botbuilder.schema import ActivityTypes, ChannelAccount, MessageReaction\n from .turn_context import TurnContext\n \n \n@@ -27,6 +28,8 @@\n await self.on_message_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.conversation_update:\n await self.on_conversation_update_activity(turn_context)\n+ elif turn_context.activity.type == ActivityTypes.message_reaction:\n+ await self.on_message_reaction_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.event:\n await self.on_event_activity(turn_context)\n else:\n@@ -64,6 +67,27 @@\n ): # pylint: disable=unused-argument\n return\n \n+ async def on_message_reaction_activity(self, turn_context: TurnContext):\n+ if turn_context.activity.reactions_added is not None:\n+ await self.on_reactions_added(\n+ turn_context.activity.reactions_added, turn_context\n+ )\n+\n+ if turn_context.activity.reactions_removed is not None:\n+ await self.on_reactions_removed(\n+ turn_context.activity.reactions_removed, turn_context\n+ )\n+\n+ async def on_reactions_added( # pylint: disable=unused-argument\n+ self, message_reactions: List[MessageReaction], turn_context: TurnContext\n+ ):\n+ return\n+\n+ async def on_reactions_removed( # pylint: disable=unused-argument\n+ self, message_reactions: List[MessageReaction], turn_context: TurnContext\n+ ):\n+ return\n+\n async def on_event_activity(self, turn_context: TurnContext):\n if turn_context.activity.name == \"tokens/response\":\n return await self.on_token_response_event(turn_context)\n", "issue": "Add support for Message Reactions to ActivityHandler \nActivityHandler should be extended to include MessageReactions. This has now been added to the C# and The JavaScript.\r\n\r\nHere is a pointer to the JavaScript implementation:\r\n\r\nhttps://github.com/microsoft/botbuilder-js/pull/1038\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. 
All rights reserved.\n# Licensed under the MIT License.\n\nfrom botbuilder.schema import ActivityTypes, ChannelAccount\nfrom .turn_context import TurnContext\n\n\nclass ActivityHandler:\n async def on_turn(self, turn_context: TurnContext):\n if turn_context is None:\n raise TypeError(\"ActivityHandler.on_turn(): turn_context cannot be None.\")\n\n if hasattr(turn_context, \"activity\") and turn_context.activity is None:\n raise TypeError(\n \"ActivityHandler.on_turn(): turn_context must have a non-None activity.\"\n )\n\n if (\n hasattr(turn_context.activity, \"type\")\n and turn_context.activity.type is None\n ):\n raise TypeError(\n \"ActivityHandler.on_turn(): turn_context activity must have a non-None type.\"\n )\n\n if turn_context.activity.type == ActivityTypes.message:\n await self.on_message_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.conversation_update:\n await self.on_conversation_update_activity(turn_context)\n elif turn_context.activity.type == ActivityTypes.event:\n await self.on_event_activity(turn_context)\n else:\n await self.on_unrecognized_activity_type(turn_context)\n\n async def on_message_activity( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n\n async def on_conversation_update_activity(self, turn_context: TurnContext):\n if (\n turn_context.activity.members_added is not None\n and turn_context.activity.members_added\n ):\n return await self.on_members_added_activity(\n turn_context.activity.members_added, turn_context\n )\n if (\n turn_context.activity.members_removed is not None\n and turn_context.activity.members_removed\n ):\n return await self.on_members_removed_activity(\n turn_context.activity.members_removed, turn_context\n )\n return\n\n async def on_members_added_activity(\n self, members_added: ChannelAccount, turn_context: TurnContext\n ): # pylint: disable=unused-argument\n return\n\n async def on_members_removed_activity(\n self, members_removed: ChannelAccount, turn_context: TurnContext\n ): # pylint: disable=unused-argument\n return\n\n async def on_event_activity(self, turn_context: TurnContext):\n if turn_context.activity.name == \"tokens/response\":\n return await self.on_token_response_event(turn_context)\n\n return await self.on_event(turn_context)\n\n async def on_token_response_event( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n\n async def on_event( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n\n async def on_unrecognized_activity_type( # pylint: disable=unused-argument\n self, turn_context: TurnContext\n ):\n return\n", "path": "libraries/botbuilder-core/botbuilder/core/activity_handler.py"}]}
| 1,386 | 474 |
gh_patches_debug_8428
|
rasdani/github-patches
|
git_diff
|
mindsdb__mindsdb-1661
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add option to list tables in ClickHouse integration :bookmark_tabs:
When users create a connection to the ClickHouse database, it will be useful to show them tips with a list of its tables. To do this, we need a new method, `get_tables_list`, implemented in the ClickHouse integration class (a sketch follows the steps below).
## Steps :male_detective: :female_detective:
- Frok MindsDB repo
- Add new implementation in https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/clickhouse/clickhouse.py#L25
- Make a PR to staging branch
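For illustration, a minimal sketch of what such a method could look like, reusing the integration's existing `_query` helper (see the class in the code below); the choice of `system.tables` and the filtering are assumptions for the sketch, not the merged implementation:

```python
# Sketch: list "database.table" names via ClickHouse's system tables.
def get_tables_list(self):
    q = """
        SELECT database, name
        FROM system.tables
        WHERE database NOT IN ('system')
        ORDER BY database, name
        FORMAT TabSeparated
    """
    response = self._query(q)  # existing helper that POSTs SQL over HTTP
    rows = [line.split("\t") for line in response.text.strip().splitlines() if line]
    return [f"{db}.{table}" for db, table in rows]
```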
## Additional rewards :1st_place_medal:
Each code PR brings :three: points for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear:. For more info, check out https://mindsdb.com/hacktoberfest/
</issue>
<code>
[start of mindsdb/integrations/clickhouse/clickhouse.py]
1 import requests
2 from lightwood.api import dtype
3 from mindsdb.integrations.base import Integration
4 from mindsdb.utilities.log import log
5
6
7 class ClickhouseConnectionChecker:
8 def __init__(self, **kwargs):
9 self.host = kwargs.get("host")
10 self.port = kwargs.get("port")
11 self.user = kwargs.get("user")
12 self.password = kwargs.get("password")
13
14 def check_connection(self):
15 try:
16 res = requests.post(f"http://{self.host}:{self.port}",
17 data="select 1;",
18 params={'user': self.user, 'password': self.password})
19 connected = res.status_code == 200
20 except Exception:
21 connected = False
22 return connected
23
24
25 class Clickhouse(Integration, ClickhouseConnectionChecker):
26 def __init__(self, config, name, db_info):
27 super().__init__(config, name)
28 self.user = db_info.get('user', 'default')
29 self.password = db_info.get('password', None)
30 self.host = db_info.get('host')
31 self.port = db_info.get('port')
32
33 def _to_clickhouse_table(self, dtype_dict, predicted_cols, columns):
34 subtype_map = {
35 dtype.integer: 'Nullable(Int64)',
36 dtype.float: 'Nullable(Float64)',
37 dtype.binary: 'Nullable(UInt8)',
38 dtype.date: 'Nullable(Date)',
39 dtype.datetime: 'Nullable(Datetime)',
40 dtype.binary: 'Nullable(String)',
41 dtype.categorical: 'Nullable(String)',
42 dtype.tags: 'Nullable(String)',
43 dtype.image: 'Nullable(String)',
44 dtype.video: 'Nullable(String)',
45 dtype.audio: 'Nullable(String)',
46 dtype.short_text: 'Nullable(String)',
47 dtype.rich_text: 'Nullable(String)',
48 dtype.array: 'Nullable(String)'
49 }
50
51 column_declaration = []
52 for name in columns:
53 try:
54 col_subtype = dtype_dict[name]
55 new_type = subtype_map[col_subtype]
56 column_declaration.append(f' `{name}` {new_type} ')
57 if name in predicted_cols:
58 column_declaration.append(f' `{name}_original` {new_type} ')
59 except Exception as e:
60 log.error(f'Error: can not determine clickhouse data type for column {name}: {e}')
61
62 return column_declaration
63
64 def _query(self, query):
65 params = {'user': self.user}
66
67 if self.password is not None:
68 params['password'] = self.password
69
70 host = self.host
71 port = self.port
72
73 response = requests.post(f'http://{host}:{port}', data=query, params=params)
74
75 if response.status_code != 200:
76 raise Exception(f'Error: {response.content}\nQuery:{query}')
77
78 return response
79
80 def _get_mysql_user(self):
81 return f"{self.config['api']['mysql']['user']}_{self.name}"
82
83 def _escape_table_name(self, name):
84 return '`' + name.replace('`', '\\`') + '`'
85
86 def setup(self):
87 self._query(f'DROP DATABASE IF EXISTS {self.mindsdb_database}')
88 self._query(f'CREATE DATABASE IF NOT EXISTS {self.mindsdb_database}')
89
90 msqyl_conn = self.config['api']['mysql']['host'] + ':' + str(self.config['api']['mysql']['port'])
91 msqyl_pass = self.config['api']['mysql']['password']
92 msqyl_user = self._get_mysql_user()
93
94 q = f"""
95 CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (
96 name String,
97 status String,
98 accuracy String,
99 predict String,
100 select_data_query String,
101 external_datasource String,
102 training_options String
103 ) ENGINE=MySQL('{msqyl_conn}', 'mindsdb', 'predictors', '{msqyl_user}', '{msqyl_pass}')
104 """
105 self._query(q)
106 q = f"""
107 CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.commands (
108 command String
109 ) ENGINE=MySQL('{msqyl_conn}', 'mindsdb', 'commands', '{msqyl_user}', '{msqyl_pass}')
110 """
111 self._query(q)
112
113 def register_predictors(self, model_data_arr):
114 for model_meta in model_data_arr:
115 name = self._escape_table_name(model_meta['name'])
116
117 predict = model_meta['predict']
118 if not isinstance(predict, list):
119 predict = [predict]
120
121 columns_sql = ','.join(self._to_clickhouse_table(
122 model_meta['dtype_dict'],
123 predict,
124 list(model_meta['dtype_dict'].keys())
125 ))
126 columns_sql += ',`when_data` Nullable(String)'
127 columns_sql += ',`select_data_query` Nullable(String)'
128 columns_sql += ',`external_datasource` Nullable(String)'
129 for col in predict:
130 columns_sql += f',`{col}_confidence` Nullable(Float64)'
131
132 if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):
133 columns_sql += f',`{col}_min` Nullable(Float64)'
134 columns_sql += f',`{col}_max` Nullable(Float64)'
135 columns_sql += f',`{col}_explain` Nullable(String)'
136
137 msqyl_conn = self.config['api']['mysql']['host'] + ':' + str(self.config['api']['mysql']['port'])
138 msqyl_pass = self.config['api']['mysql']['password']
139 msqyl_user = self._get_mysql_user()
140
141 self.unregister_predictor(model_meta['name'])
142 q = f"""
143 CREATE TABLE {self.mindsdb_database}.{name}
144 ({columns_sql}
145 ) ENGINE=MySQL('{msqyl_conn}', 'mindsdb', {name}, '{msqyl_user}', '{msqyl_pass}')
146 """
147 self._query(q)
148
149 def unregister_predictor(self, name):
150 q = f"""
151 drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};
152 """
153 self._query(q)
154
[end of mindsdb/integrations/clickhouse/clickhouse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mindsdb/integrations/clickhouse/clickhouse.py b/mindsdb/integrations/clickhouse/clickhouse.py
--- a/mindsdb/integrations/clickhouse/clickhouse.py
+++ b/mindsdb/integrations/clickhouse/clickhouse.py
@@ -151,3 +151,13 @@
drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};
"""
self._query(q)
+
+ def get_tables_list(self):
+ q = f"""SELECT database, table
+ FROM system.parts
+ WHERE active and database NOT IN ('system', 'mdb_system')
+ GROUP BY database, table
+ ORDER BY database, table;"""
+ tables_list = self._query(q)
+ tables= [f"{table[0]}.{table[1]}" for table in tables_list]
+ return tables
\ No newline at end of file
|
{"golden_diff": "diff --git a/mindsdb/integrations/clickhouse/clickhouse.py b/mindsdb/integrations/clickhouse/clickhouse.py\n--- a/mindsdb/integrations/clickhouse/clickhouse.py\n+++ b/mindsdb/integrations/clickhouse/clickhouse.py\n@@ -151,3 +151,13 @@\n drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n+\n+ def get_tables_list(self):\n+ q = f\"\"\"SELECT database, table\n+ FROM system.parts\n+ WHERE active and database NOT IN ('system', 'mdb_system')\n+ GROUP BY database, table\n+ ORDER BY database, table;\"\"\"\n+ tables_list = self._query(q)\n+ tables= [f\"{table[0]}.{table[1]}\" for table in tables_list]\n+ return tables\n\\ No newline at end of file\n", "issue": "Add option to list tables in ClickHouse integration :bookmark_tabs: \nWhen users create a connection to the ClickHouse database it will be useful to show them tips with a list of tables. To be able to do this we need a new method `get_tables_list` implemented in the ClickHouse integration class.\r\n\r\n## Steps :male_detective: :female_detective: \r\n\r\n- Frok MindsDB repo\r\n- Add new implementation in https://github.com/mindsdb/mindsdb/blob/staging/mindsdb/integrations/clickhouse/clickhouse.py#L25\r\n- Make a PR to staging branch\r\n\r\n## Additional rewards :1st_place_medal: \r\n\r\nEach code PR brings :three: point for entry into the draw for a :computer: Deep Learning Laptop powered by the NVIDIA RTX 3080 Max-Q GPU or other swag :shirt: :bear: . For more info check out https://mindsdb.com/hacktoberfest/\n", "before_files": [{"content": "import requests\nfrom lightwood.api import dtype\nfrom mindsdb.integrations.base import Integration\nfrom mindsdb.utilities.log import log\n\n\nclass ClickhouseConnectionChecker:\n def __init__(self, **kwargs):\n self.host = kwargs.get(\"host\")\n self.port = kwargs.get(\"port\")\n self.user = kwargs.get(\"user\")\n self.password = kwargs.get(\"password\")\n\n def check_connection(self):\n try:\n res = requests.post(f\"http://{self.host}:{self.port}\",\n data=\"select 1;\",\n params={'user': self.user, 'password': self.password})\n connected = res.status_code == 200\n except Exception:\n connected = False\n return connected\n\n\nclass Clickhouse(Integration, ClickhouseConnectionChecker):\n def __init__(self, config, name, db_info):\n super().__init__(config, name)\n self.user = db_info.get('user', 'default')\n self.password = db_info.get('password', None)\n self.host = db_info.get('host')\n self.port = db_info.get('port')\n\n def _to_clickhouse_table(self, dtype_dict, predicted_cols, columns):\n subtype_map = {\n dtype.integer: 'Nullable(Int64)',\n dtype.float: 'Nullable(Float64)',\n dtype.binary: 'Nullable(UInt8)',\n dtype.date: 'Nullable(Date)',\n dtype.datetime: 'Nullable(Datetime)',\n dtype.binary: 'Nullable(String)',\n dtype.categorical: 'Nullable(String)',\n dtype.tags: 'Nullable(String)',\n dtype.image: 'Nullable(String)',\n dtype.video: 'Nullable(String)',\n dtype.audio: 'Nullable(String)',\n dtype.short_text: 'Nullable(String)',\n dtype.rich_text: 'Nullable(String)',\n dtype.array: 'Nullable(String)'\n }\n\n column_declaration = []\n for name in columns:\n try:\n col_subtype = dtype_dict[name]\n new_type = subtype_map[col_subtype]\n column_declaration.append(f' `{name}` {new_type} ')\n if name in predicted_cols:\n column_declaration.append(f' `{name}_original` {new_type} ')\n except Exception as e:\n log.error(f'Error: can not determine clickhouse data type for column {name}: {e}')\n\n return column_declaration\n\n def 
_query(self, query):\n params = {'user': self.user}\n\n if self.password is not None:\n params['password'] = self.password\n\n host = self.host\n port = self.port\n\n response = requests.post(f'http://{host}:{port}', data=query, params=params)\n\n if response.status_code != 200:\n raise Exception(f'Error: {response.content}\\nQuery:{query}')\n\n return response\n\n def _get_mysql_user(self):\n return f\"{self.config['api']['mysql']['user']}_{self.name}\"\n\n def _escape_table_name(self, name):\n return '`' + name.replace('`', '\\\\`') + '`'\n\n def setup(self):\n self._query(f'DROP DATABASE IF EXISTS {self.mindsdb_database}')\n self._query(f'CREATE DATABASE IF NOT EXISTS {self.mindsdb_database}')\n\n msqyl_conn = self.config['api']['mysql']['host'] + ':' + str(self.config['api']['mysql']['port'])\n msqyl_pass = self.config['api']['mysql']['password']\n msqyl_user = self._get_mysql_user()\n\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.predictors (\n name String,\n status String,\n accuracy String,\n predict String,\n select_data_query String,\n external_datasource String,\n training_options String\n ) ENGINE=MySQL('{msqyl_conn}', 'mindsdb', 'predictors', '{msqyl_user}', '{msqyl_pass}')\n \"\"\"\n self._query(q)\n q = f\"\"\"\n CREATE TABLE IF NOT EXISTS {self.mindsdb_database}.commands (\n command String\n ) ENGINE=MySQL('{msqyl_conn}', 'mindsdb', 'commands', '{msqyl_user}', '{msqyl_pass}')\n \"\"\"\n self._query(q)\n\n def register_predictors(self, model_data_arr):\n for model_meta in model_data_arr:\n name = self._escape_table_name(model_meta['name'])\n\n predict = model_meta['predict']\n if not isinstance(predict, list):\n predict = [predict]\n\n columns_sql = ','.join(self._to_clickhouse_table(\n model_meta['dtype_dict'],\n predict,\n list(model_meta['dtype_dict'].keys())\n ))\n columns_sql += ',`when_data` Nullable(String)'\n columns_sql += ',`select_data_query` Nullable(String)'\n columns_sql += ',`external_datasource` Nullable(String)'\n for col in predict:\n columns_sql += f',`{col}_confidence` Nullable(Float64)'\n\n if model_meta['dtype_dict'][col] in (dtype.integer, dtype.float):\n columns_sql += f',`{col}_min` Nullable(Float64)'\n columns_sql += f',`{col}_max` Nullable(Float64)'\n columns_sql += f',`{col}_explain` Nullable(String)'\n\n msqyl_conn = self.config['api']['mysql']['host'] + ':' + str(self.config['api']['mysql']['port'])\n msqyl_pass = self.config['api']['mysql']['password']\n msqyl_user = self._get_mysql_user()\n\n self.unregister_predictor(model_meta['name'])\n q = f\"\"\"\n CREATE TABLE {self.mindsdb_database}.{name}\n ({columns_sql}\n ) ENGINE=MySQL('{msqyl_conn}', 'mindsdb', {name}, '{msqyl_user}', '{msqyl_pass}')\n \"\"\"\n self._query(q)\n\n def unregister_predictor(self, name):\n q = f\"\"\"\n drop table if exists {self.mindsdb_database}.{self._escape_table_name(name)};\n \"\"\"\n self._query(q)\n", "path": "mindsdb/integrations/clickhouse/clickhouse.py"}]}
| 2,390 | 210 |
gh_patches_debug_3776
|
rasdani/github-patches
|
git_diff
|
esphome__esphome-docs-1150
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add airquality wp6003 + am312 tutorial
Add air quality + am312 tutorial
## Description:
**Related issue (if applicable):** fixes <link to issue>
**Pull request in [esphome](https://github.com/esphome/esphome) with YAML changes (if applicable):** esphome/esphome#<esphome PR number goes here>
## Checklist:
- [ ] Branch: `next` is for changes and new documentation that will go public with the next ESPHome release. Fixes, changes and adjustments for the current release should be created against `current`.
- [ ] Link added in `/index.rst` when creating new documents for new components or cookbook.
</issue>
<code>
[start of conf.py]
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # esphome documentation build configuration file, created by
5 # sphinx-quickstart on Mon Jan 22 21:44:07 2018.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 # import os
21 # import sys
22 # sys.path.insert(0, os.path.abspath('.'))
23 import hashlib
24 import os
25 import sys
26
27
28 sys.path.append(os.path.abspath("."))
29
30 # -- General configuration ------------------------------------------------
31
32 # If your documentation needs a minimal Sphinx version, state it here.
33 #
34 # needs_sphinx = '1.0'
35
36 # Add any Sphinx extension module names here, as strings. They can be
37 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
38 # ones.
39 extensions = [
40 "github",
41 "seo",
42 "sitemap",
43 "schema_doc",
44 ]
45
46 # Add any paths that contain templates here, relative to this directory.
47 templates_path = ["_templates"]
48
49 # The suffix(es) of source filenames.
50 # You can specify multiple suffix as a list of string:
51 #
52 # source_suffix = ['.rst', '.md']
53 source_suffix = ".rst"
54
55 # The master toctree document.
56 master_doc = "index"
57
58 # General information about the project.
59 project = "ESPHome"
60 copyright = "2019, Otto Winter"
61 html_show_copyright = False
62 html_show_sphinx = False
63 author = "Otto Winter"
64
65 # The version info for the project you're documenting, acts as replacement for
66 # |version| and |release|, also used in various other places throughout the
67 # built documents.
68 #
69 # The short X.Y version.
70 version = "1.17"
71 # The full version, including alpha/beta/rc tags.
72 release = "1.17.2"
73
74 # The language for content autogenerated by Sphinx. Refer to documentation
75 # for a list of supported languages.
76 #
77 # This is also used if you do content translation via gettext catalogs.
78 # Usually you set "language" from the command line for these cases.
79 language = "en"
80
81 # List of patterns, relative to source directory, that match files and
82 # directories to ignore when looking for source files.
83 # This patterns also effect to html_static_path and html_extra_path
84 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
85
86 # The reST default role (used for this markup: `text`) to use for all documents.
87 # default_role = 'cpp:any'
88
89 # The name of the Pygments (syntax highlighting) style to use.
90 pygments_style = "xcode"
91
92 highlight_language = "yaml"
93
94 primary_domain = None
95
96 # If true, `todo` and `todoList` produce output, else they produce nothing.
97 todo_include_todos = False
98
99
100 # -- Options for HTML output ----------------------------------------------
101
102 # The theme to use for HTML and HTML Help pages. See the documentation for
103 # a list of builtin themes.
104 #
105 html_theme = "alabaster"
106
107 # Theme options are theme-specific and customize the look and feel of a theme
108 # further. For a list of options available for each theme, see the
109 # documentation.
110 #
111 html_baseurl = os.getenv("BASE_URL", "https://esphome.io")
112 with open("_static/custom.css", "rb") as f:
113 custom_css_hash = hashlib.md5(f.read()).hexdigest()[:8]
114
115 html_theme_options = {
116 # 'logo': 'logo-full.png',
117 "logo_name": False,
118 "show_related": False,
119 "sidebar_collapse": True,
120 "fixed_sidebar": True,
121 "show_powered_by": False,
122 }
123
124 html_context = {
125 "custom_css_hash": custom_css_hash,
126 }
127
128 html_logo = "images/logo-text.svg"
129 html_copy_source = True
130 html_show_sourcelink = False
131 html_last_updated_fmt = None
132 html_use_smartypants = False
133 html_title = "ESPHome"
134
135 # Add any paths that contain custom static files (such as style sheets) here,
136 # relative to this directory. They are copied after the builtin static files,
137 # so a file named "default.css" will overwrite the builtin "default.css".
138 html_static_path = ["_static"]
139
140 # Custom sidebar templates, must be a dictionary that maps document names
141 # to template names.
142 #
143 # This is required for the alabaster theme
144 # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
145 html_sidebars = {
146 "**": [
147 # 'about.html',
148 "searchbox.html",
149 "localtoc.html",
150 ]
151 }
152
153
154 # -- Options for HTMLHelp output ------------------------------------------
155
156 # Output file base name for HTML help builder.
157 htmlhelp_basename = "esphomedoc"
158
159
160 # -- Options for LaTeX output ---------------------------------------------
161
162 latex_elements = {
163 # The paper size ('letterpaper' or 'a4paper').
164 #
165 # 'papersize': 'letterpaper',
166 # The font size ('10pt', '11pt' or '12pt').
167 #
168 # 'pointsize': '10pt',
169 # Additional stuff for the LaTeX preamble.
170 #
171 # 'preamble': '',
172 # Latex figure (float) alignment
173 #
174 # 'figure_align': 'htbp',
175 }
176
177 # Grouping the document tree into LaTeX files. List of tuples
178 # (source start file, target name, title,
179 # author, documentclass [howto, manual, or own class]).
180 latex_documents = [
181 (master_doc, "esphome.tex", "ESPHome Documentation", "Otto Winter", "manual"),
182 ]
183
184 latex_engine = "xelatex"
185
186
187 # -- Options for manual page output ---------------------------------------
188
189 # One entry per manual page. List of tuples
190 # (source start file, name, description, authors, manual section).
191 man_pages = [(master_doc, "esphome", "ESPHome Documentation", [author], 1)]
192
193
194 # -- Options for Texinfo output -------------------------------------------
195
196 # Grouping the document tree into Texinfo files. List of tuples
197 # (source start file, target name, title, author,
198 # dir menu entry, description, category)
199 texinfo_documents = [
200 (
201 master_doc,
202 "esphome",
203 "ESPHome Documentation",
204 author,
205 "esphome",
206 "One line description of project.",
207 "Miscellaneous",
208 ),
209 ]
210 linkcheck_ignore = [r"https://github.com/.*", r"https://discord.gg/.*"]
211
[end of conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conf.py b/conf.py
--- a/conf.py
+++ b/conf.py
@@ -67,9 +67,9 @@
# built documents.
#
# The short X.Y version.
-version = "1.17"
+version = "1.18"
# The full version, including alpha/beta/rc tags.
-release = "1.17.2"
+release = "1.18.0b1"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
|
{"golden_diff": "diff --git a/conf.py b/conf.py\n--- a/conf.py\n+++ b/conf.py\n@@ -67,9 +67,9 @@\n # built documents.\n #\n # The short X.Y version.\n-version = \"1.17\"\n+version = \"1.18\"\n # The full version, including alpha/beta/rc tags.\n-release = \"1.17.2\"\n+release = \"1.18.0b1\"\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\n", "issue": "Add airquality wp6003 + am312 tutorial\nAdd air quality + am312 tutorial\r\n\r\n## Description:\r\n\r\n\r\n**Related issue (if applicable):** fixes <link to issue>\r\n\r\n**Pull request in [esphome](https://github.com/esphome/esphome) with YAML changes (if applicable):** esphome/esphome#<esphome PR number goes here>\r\n\r\n## Checklist:\r\n\r\n - [ ] Branch: `next` is for changes and new documentation that will go public with the next ESPHome release. Fixes, changes and adjustments for the current release should be created against `current`.\r\n - [ ] Link added in `/index.rst` when creating new documents for new components or cookbook.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# esphome documentation build configuration file, created by\n# sphinx-quickstart on Mon Jan 22 21:44:07 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nimport hashlib\nimport os\nimport sys\n\n\nsys.path.append(os.path.abspath(\".\"))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"github\",\n \"seo\",\n \"sitemap\",\n \"schema_doc\",\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"ESPHome\"\ncopyright = \"2019, Otto Winter\"\nhtml_show_copyright = False\nhtml_show_sphinx = False\nauthor = \"Otto Winter\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"1.17\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"1.17.2\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = \"en\"\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n# default_role = 'cpp:any'\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"xcode\"\n\nhighlight_language = \"yaml\"\n\nprimary_domain = None\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"alabaster\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_baseurl = os.getenv(\"BASE_URL\", \"https://esphome.io\")\nwith open(\"_static/custom.css\", \"rb\") as f:\n custom_css_hash = hashlib.md5(f.read()).hexdigest()[:8]\n\nhtml_theme_options = {\n # 'logo': 'logo-full.png',\n \"logo_name\": False,\n \"show_related\": False,\n \"sidebar_collapse\": True,\n \"fixed_sidebar\": True,\n \"show_powered_by\": False,\n}\n\nhtml_context = {\n \"custom_css_hash\": custom_css_hash,\n}\n\nhtml_logo = \"images/logo-text.svg\"\nhtml_copy_source = True\nhtml_show_sourcelink = False\nhtml_last_updated_fmt = None\nhtml_use_smartypants = False\nhtml_title = \"ESPHome\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n \"**\": [\n # 'about.html',\n \"searchbox.html\",\n \"localtoc.html\",\n ]\n}\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"esphomedoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, \"esphome.tex\", \"ESPHome Documentation\", \"Otto Winter\", \"manual\"),\n]\n\nlatex_engine = \"xelatex\"\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"esphome\", \"ESPHome Documentation\", [author], 1)]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"esphome\",\n \"ESPHome Documentation\",\n author,\n \"esphome\",\n \"One line description of project.\",\n \"Miscellaneous\",\n ),\n]\nlinkcheck_ignore = [r\"https://github.com/.*\", r\"https://discord.gg/.*\"]\n", "path": "conf.py"}]}
| 2,718 | 118 |
gh_patches_debug_4811
|
rasdani/github-patches
|
git_diff
|
pytorch__text-254
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use getattr rather than __dict__ in Batch (adds support for __slots__ in Example subclasses)
This is a proposal to change [one line of code](https://github.com/pytorch/text/blob/c839a7934930819be7e240ea972e4d600966afdc/torchtext/data/batch.py#L27) in Batch.py
I suggest `[x.__dict__[name] for x in data]` should become `[getattr(x, name) for x in data]`
A major advantage to doing this is compatibility with `__slots__`. A class that is going to be instantiated for every data point is an ideal use-case for `__slots__`, which reduces per-instance memory overhead. It makes sense for specific projects to subclass Example using `__slots__` with the known fields of the project. If you do, the instances will have empty `__dicts__` but the slots can be accessed via `getattr`.
I don't _think_ this change would break anything...
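A small, self-contained illustration (plain Python, with a stand-in for torchtext's `Example` rather than the real class):

```python
# Sketch: slot-backed fields never show up in the instance __dict__,
# but getattr() reaches them all the same.
class Example:                     # stand-in base class without __slots__
    pass


class SlottedExample(Example):
    __slots__ = ("text", "label")  # fields live in slots

    def __init__(self, text, label):
        self.text = text
        self.label = label


ex = SlottedExample("hello world", 1)

print(getattr(ex, "text"))  # "hello world" -- works for slots and plain attributes
print(ex.__dict__)          # {} -- empty, because the values sit in slots
try:
    ex.__dict__["text"]
except KeyError:
    print("KeyError: __dict__ lookup misses slot-backed fields")
```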
</issue>
<code>
[start of torchtext/data/batch.py]
1 from torch import typename
2 from torch.tensor import _TensorBase
3
4
5 class Batch(object):
6 """Defines a batch of examples along with its Fields.
7
8 Attributes:
9 batch_size: Number of examples in the batch.
10 dataset: A reference to the dataset object the examples come from
11 (which itself contains the dataset's Field objects).
12 train: Whether the batch is from a training set.
13
14 Also stores the Variable for each column in the batch as an attribute.
15 """
16
17 def __init__(self, data=None, dataset=None, device=None, train=True):
18 """Create a Batch from a list of examples."""
19 if data is not None:
20 self.batch_size = len(data)
21 self.dataset = dataset
22 self.train = train
23 self.fields = dataset.fields.keys() # copy field names
24
25 for (name, field) in dataset.fields.items():
26 if field is not None:
27 batch = [x.__dict__[name] for x in data]
28 setattr(self, name, field.process(batch, device=device, train=train))
29
30 @classmethod
31 def fromvars(cls, dataset, batch_size, train=True, **kwargs):
32 """Create a Batch directly from a number of Variables."""
33 batch = cls()
34 batch.batch_size = batch_size
35 batch.dataset = dataset
36 batch.train = train
37 for k, v in kwargs.items():
38 setattr(batch, k, v)
39 return batch
40
41 def __repr__(self):
42 return str(self)
43
44 def __str__(self):
45 if not self.__dict__:
46 return 'Empty {} instance'.format(typename(self))
47
48 var_strs = '\n'.join(['\t[.' + name + ']' + ":" + _short_str(getattr(self, name))
49 for name in self.fields if hasattr(self, name)])
50
51 data_str = (' from {}'.format(self.dataset.name.upper())
52 if hasattr(self.dataset, 'name') and
53 isinstance(self.dataset.name, str) else '')
54
55 strt = '[{} of size {}{}]\n{}'.format(typename(self),
56 self.batch_size, data_str, var_strs)
57 return '\n' + strt
58
59 def __len__(self):
60 return self.batch_size
61
62
63 def _short_str(tensor):
64 # unwrap variable to tensor
65 if hasattr(tensor, 'data'):
66 tensor = tensor.data
67
68 # fallback in case of wrong argument type
69 if issubclass(type(tensor), _TensorBase) is False:
70 return str(tensor)
71
72 # copied from torch _tensor_str
73 size_str = 'x'.join(str(size) for size in tensor.size())
74 device_str = '' if not tensor.is_cuda else \
75 ' (GPU {})'.format(tensor.get_device())
76 strt = '[{} of size {}{}]'.format(typename(tensor),
77 size_str, device_str)
78 return strt
79
[end of torchtext/data/batch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py
--- a/torchtext/data/batch.py
+++ b/torchtext/data/batch.py
@@ -24,7 +24,7 @@
for (name, field) in dataset.fields.items():
if field is not None:
- batch = [x.__dict__[name] for x in data]
+ batch = [getattr(x, name) for x in data]
setattr(self, name, field.process(batch, device=device, train=train))
@classmethod
|
{"golden_diff": "diff --git a/torchtext/data/batch.py b/torchtext/data/batch.py\n--- a/torchtext/data/batch.py\n+++ b/torchtext/data/batch.py\n@@ -24,7 +24,7 @@\n \n for (name, field) in dataset.fields.items():\n if field is not None:\n- batch = [x.__dict__[name] for x in data]\n+ batch = [getattr(x, name) for x in data]\n setattr(self, name, field.process(batch, device=device, train=train))\n \n @classmethod\n", "issue": "Use getattr rather than __dict__ in Batch (adds support for __slots__ in Example subclasses)\nThis is a proposal to change [one line of code](https://github.com/pytorch/text/blob/c839a7934930819be7e240ea972e4d600966afdc/torchtext/data/batch.py#L27) in Batch.py\r\n\r\nI suggest `[x.__dict__[name] for x in data]` should become `[getattr(x, name) for x in data]`\r\n\r\nA major advantage to doing this is compatibility with `__slots__`. A class that is going to be instantiated for every data point is an ideal use-case for `__slots__`, which reduces per-instance memory overhead. It makes sense for specific projects to subclass Example using `__slots__` with the known fields of the project. If you do, the instances will have empty `__dicts__` but the slots can be accessed via `getattr`.\r\n\r\nI don't _think_ this change would break anything...\n", "before_files": [{"content": "from torch import typename\nfrom torch.tensor import _TensorBase\n\n\nclass Batch(object):\n \"\"\"Defines a batch of examples along with its Fields.\n\n Attributes:\n batch_size: Number of examples in the batch.\n dataset: A reference to the dataset object the examples come from\n (which itself contains the dataset's Field objects).\n train: Whether the batch is from a training set.\n\n Also stores the Variable for each column in the batch as an attribute.\n \"\"\"\n\n def __init__(self, data=None, dataset=None, device=None, train=True):\n \"\"\"Create a Batch from a list of examples.\"\"\"\n if data is not None:\n self.batch_size = len(data)\n self.dataset = dataset\n self.train = train\n self.fields = dataset.fields.keys() # copy field names\n\n for (name, field) in dataset.fields.items():\n if field is not None:\n batch = [x.__dict__[name] for x in data]\n setattr(self, name, field.process(batch, device=device, train=train))\n\n @classmethod\n def fromvars(cls, dataset, batch_size, train=True, **kwargs):\n \"\"\"Create a Batch directly from a number of Variables.\"\"\"\n batch = cls()\n batch.batch_size = batch_size\n batch.dataset = dataset\n batch.train = train\n for k, v in kwargs.items():\n setattr(batch, k, v)\n return batch\n\n def __repr__(self):\n return str(self)\n\n def __str__(self):\n if not self.__dict__:\n return 'Empty {} instance'.format(typename(self))\n\n var_strs = '\\n'.join(['\\t[.' 
+ name + ']' + \":\" + _short_str(getattr(self, name))\n for name in self.fields if hasattr(self, name)])\n\n data_str = (' from {}'.format(self.dataset.name.upper())\n if hasattr(self.dataset, 'name') and\n isinstance(self.dataset.name, str) else '')\n\n strt = '[{} of size {}{}]\\n{}'.format(typename(self),\n self.batch_size, data_str, var_strs)\n return '\\n' + strt\n\n def __len__(self):\n return self.batch_size\n\n\ndef _short_str(tensor):\n # unwrap variable to tensor\n if hasattr(tensor, 'data'):\n tensor = tensor.data\n\n # fallback in case of wrong argument type\n if issubclass(type(tensor), _TensorBase) is False:\n return str(tensor)\n\n # copied from torch _tensor_str\n size_str = 'x'.join(str(size) for size in tensor.size())\n device_str = '' if not tensor.is_cuda else \\\n ' (GPU {})'.format(tensor.get_device())\n strt = '[{} of size {}{}]'.format(typename(tensor),\n size_str, device_str)\n return strt\n", "path": "torchtext/data/batch.py"}]}
| 1,524 | 124 |
gh_patches_debug_16345
|
rasdani/github-patches
|
git_diff
|
Pyomo__pyomo-573
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ipopt has_capability('integer') returns True
```
>>> import pyomo.environ as pe
>>> pe.SolverFactory('ipopt').has_capability('integer')
True
```
I think this should return False. There is a comment in the code that says returning False might create headaches for some people, but I don't see how. Can I change this?
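As a small illustration of why the flag matters (the guard below is hypothetical, not taken from Pyomo itself), a caller that checks capabilities before solving a discrete model is currently told that Ipopt can handle it:
```python
import pyomo.environ as pe

opt = pe.SolverFactory('ipopt')

# Hypothetical pre-solve guard relying on the capability flag.
if opt.has_capability('integer'):
    print('solver advertises integer support')   # reached today, which is misleading for an NLP solver
else:
    print('choose a MIP-capable solver instead')  # the branch one would expect after the change
```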
</issue>
<code>
[start of pyomo/solvers/plugins/solvers/IPOPT.py]
1 # ___________________________________________________________________________
2 #
3 # Pyomo: Python Optimization Modeling Objects
4 # Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC
5 # Under the terms of Contract DE-NA0003525 with National Technology and
6 # Engineering Solutions of Sandia, LLC, the U.S. Government retains certain
7 # rights in this software.
8 # This software is distributed under the 3-clause BSD License.
9 # ___________________________________________________________________________
10
11 import os
12
13 import pyutilib.services
14 import pyutilib.misc
15
16 import pyomo.common.plugin
17 from pyomo.opt.base import *
18 from pyomo.opt.base.solvers import _extract_version
19 from pyomo.opt.results import *
20 from pyomo.opt.solver import *
21
22 import logging
23 logger = logging.getLogger('pyomo.solvers')
24
25 try:
26 unicode
27 except:
28 basestring = str
29
30 class IPOPT(SystemCallSolver):
31 """
32 An interface to the Ipopt optimizer that uses the AMPL Solver Library.
33 """
34
35 pyomo.common.plugin.alias('ipopt', doc='The Ipopt NLP solver')
36
37 def __init__(self, **kwds):
38 #
39 # Call base constructor
40 #
41 kwds["type"] = "ipopt"
42 super(IPOPT, self).__init__(**kwds)
43 #
44 # Setup valid problem formats, and valid results for each problem format
45 # Also set the default problem and results formats.
46 #
47 self._valid_problem_formats=[ProblemFormat.nl]
48 self._valid_result_formats = {}
49 self._valid_result_formats[ProblemFormat.nl] = [ResultsFormat.sol]
50 self.set_problem_format(ProblemFormat.nl)
51
52 # Note: Undefined capabilities default to 'None'
53 self._capabilities = pyutilib.misc.Options()
54 self._capabilities.linear = True
55 # Should we set this to False? Doing so might cause
56 # a headache for some folks.
57 self._capabilities.integer = True
58 self._capabilities.quadratic_objective = True
59 self._capabilities.quadratic_constraint = True
60 self._capabilities.sos1 = True
61 self._capabilities.sos2 = True
62
63 def _default_results_format(self, prob_format):
64 return ResultsFormat.sol
65
66 def _default_executable(self):
67 executable = pyutilib.services.registered_executable("ipopt")
68 if executable is None:
69 logger.warning("Could not locate the 'ipopt' executable, "
70 "which is required for solver %s" % self.name)
71 self.enable = False
72 return None
73 return executable.get_path()
74
75 def _get_version(self):
76 """
77 Returns a tuple describing the solver executable version.
78 """
79 solver_exec = self.executable()
80 if solver_exec is None:
81 return _extract_version('')
82 results = pyutilib.subprocess.run( [solver_exec,"-v"], timelimit=1 )
83 return _extract_version(results[1])
84
85 def create_command_line(self, executable, problem_files):
86
87 assert(self._problem_format == ProblemFormat.nl)
88 assert(self._results_format == ResultsFormat.sol)
89
90 #
91 # Define log file
92 #
93 if self._log_file is None:
94 self._log_file = pyutilib.services.TempfileManager.\
95 create_tempfile(suffix="_ipopt.log")
96
97 fname = problem_files[0]
98 if '.' in fname:
99 tmp = fname.split('.')
100 if len(tmp) > 2:
101 fname = '.'.join(tmp[:-1])
102 else:
103 fname = tmp[0]
104 self._soln_file = fname+".sol"
105
106 #
107 # Define results file (since an external parser is used)
108 #
109 self._results_file = self._soln_file
110
111 #
112 # Define command line
113 #
114 env=os.environ.copy()
115 #
116 # Merge the PYOMO_AMPLFUNC (externals defined within
117 # Pyomo/Pyomo) with any user-specified external function
118 # libraries
119 #
120 if 'PYOMO_AMPLFUNC' in env:
121 if 'AMPLFUNC' in env:
122 env['AMPLFUNC'] += "\n" + env['PYOMO_AMPLFUNC']
123 else:
124 env['AMPLFUNC'] = env['PYOMO_AMPLFUNC']
125
126 cmd = [executable, problem_files[0], '-AMPL']
127 if self._timer:
128 cmd.insert(0, self._timer)
129
130 env_opt = []
131 of_opt = []
132 ofn_option_used = False
133 for key in self.options:
134 if key == 'solver':
135 continue
136 elif key.startswith("OF_"):
137 assert len(key) > 3
138 of_opt.append((key[3:], self.options[key]))
139 else:
140 if key == "option_file_name":
141 ofn_option_used = True
142 if isinstance(self.options[key], basestring) and ' ' in self.options[key]:
143 env_opt.append(key+"=\""+str(self.options[key])+"\"")
144 cmd.append(str(key)+"="+str(self.options[key]))
145 else:
146 env_opt.append(key+"="+str(self.options[key]))
147 cmd.append(str(key)+"="+str(self.options[key]))
148
149 if len(of_opt) > 0:
150 # If the 'option_file_name' command-line option
151 # was used, we don't know if we should overwrite,
152 # merge it, or it is was a mistake, so raise an
153 # exception. Maybe this can be changed.
154 if ofn_option_used:
155 raise ValueError(
156 "The 'option_file_name' command-line "
157 "option for Ipopt can not be used "
158 "when specifying options for the "
159 "options file (i.e., options that "
160 "start with 'OF_'")
161
162 # Now check if an 'ipopt.opt' file exists in the
163 # current working directory. If so, we need to
164 # make it clear that this file will be ignored.
165 default_of_name = os.path.join(os.getcwd(), 'ipopt.opt')
166 if os.path.exists(default_of_name):
167 logger.warning("A file named '%s' exists in "
168 "the current working directory, but "
169 "Ipopt options file options (i.e., "
170 "options that start with 'OF_') were "
171 "provided. The options file '%s' will "
172 "be ignored." % (default_of_name,
173 default_of_name))
174
175 # Now write the new options file
176 options_filename = pyutilib.services.TempfileManager.\
177 create_tempfile(suffix="_ipopt.opt")
178 with open(options_filename, "w") as f:
179 for key, val in of_opt:
180 f.write(key+" "+str(val)+"\n")
181
182 # Now set the command-line option telling Ipopt
183 # to use this file
184 env_opt.append('option_file_name="'+str(options_filename)+'"')
185 cmd.append('option_file_name='+str(options_filename))
186
187 envstr = "%s_options" % self.options.solver
188 # Merge with any options coming in through the environment
189 env[envstr] = " ".join(env_opt)
190
191 return pyutilib.misc.Bunch(cmd=cmd, log_file=self._log_file, env=env)
192
193 def process_output(self, rc):
194 if os.path.exists(self._results_file):
195 return super(IPOPT, self).process_output(rc)
196 else:
197 res = SolverResults()
198 res.solver.status = SolverStatus.warning
199 res.solver.termination_condition = TerminationCondition.other
200 if os.path.exists(self._log_file):
201 with open(self._log_file) as f:
202 for line in f:
203 if "TOO_FEW_DEGREES_OF_FREEDOM" in line:
204 res.solver.message = line.split(':')[2].strip()
205 assert "degrees of freedom" in res.solver.message
206 return res
207
208 pyutilib.services.register_executable(name="ipopt")
209
[end of pyomo/solvers/plugins/solvers/IPOPT.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyomo/solvers/plugins/solvers/IPOPT.py b/pyomo/solvers/plugins/solvers/IPOPT.py
--- a/pyomo/solvers/plugins/solvers/IPOPT.py
+++ b/pyomo/solvers/plugins/solvers/IPOPT.py
@@ -52,13 +52,11 @@
# Note: Undefined capabilities default to 'None'
self._capabilities = pyutilib.misc.Options()
self._capabilities.linear = True
- # Should we set this to False? Doing so might cause
- # a headache for some folks.
- self._capabilities.integer = True
+ self._capabilities.integer = False
self._capabilities.quadratic_objective = True
self._capabilities.quadratic_constraint = True
- self._capabilities.sos1 = True
- self._capabilities.sos2 = True
+ self._capabilities.sos1 = False
+ self._capabilities.sos2 = False
def _default_results_format(self, prob_format):
return ResultsFormat.sol
|
{"golden_diff": "diff --git a/pyomo/solvers/plugins/solvers/IPOPT.py b/pyomo/solvers/plugins/solvers/IPOPT.py\n--- a/pyomo/solvers/plugins/solvers/IPOPT.py\n+++ b/pyomo/solvers/plugins/solvers/IPOPT.py\n@@ -52,13 +52,11 @@\n # Note: Undefined capabilities default to 'None'\n self._capabilities = pyutilib.misc.Options()\n self._capabilities.linear = True\n- # Should we set this to False? Doing so might cause\n- # a headache for some folks.\n- self._capabilities.integer = True\n+ self._capabilities.integer = False\n self._capabilities.quadratic_objective = True\n self._capabilities.quadratic_constraint = True\n- self._capabilities.sos1 = True\n- self._capabilities.sos2 = True\n+ self._capabilities.sos1 = False\n+ self._capabilities.sos2 = False\n \n def _default_results_format(self, prob_format):\n return ResultsFormat.sol\n", "issue": "Ipopt has_capability('integer') returns True\n```\r\n>>> import pyomo.environ as pe\r\n>>> pe.SolverFactory('ipopt').has_capability('integer')\r\nTrue\r\n```\r\n\r\nI think this should return False. There is a comment in the code that says returning False might create headaches for some people, but I don't see how. Can I change this?\n", "before_files": [{"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and \n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain \n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nimport os\n\nimport pyutilib.services\nimport pyutilib.misc\n\nimport pyomo.common.plugin\nfrom pyomo.opt.base import *\nfrom pyomo.opt.base.solvers import _extract_version\nfrom pyomo.opt.results import *\nfrom pyomo.opt.solver import *\n\nimport logging\nlogger = logging.getLogger('pyomo.solvers')\n\ntry:\n unicode\nexcept:\n basestring = str\n\nclass IPOPT(SystemCallSolver):\n \"\"\"\n An interface to the Ipopt optimizer that uses the AMPL Solver Library.\n \"\"\"\n\n pyomo.common.plugin.alias('ipopt', doc='The Ipopt NLP solver')\n\n def __init__(self, **kwds):\n #\n # Call base constructor\n #\n kwds[\"type\"] = \"ipopt\"\n super(IPOPT, self).__init__(**kwds)\n #\n # Setup valid problem formats, and valid results for each problem format\n # Also set the default problem and results formats.\n #\n self._valid_problem_formats=[ProblemFormat.nl]\n self._valid_result_formats = {}\n self._valid_result_formats[ProblemFormat.nl] = [ResultsFormat.sol]\n self.set_problem_format(ProblemFormat.nl)\n\n # Note: Undefined capabilities default to 'None'\n self._capabilities = pyutilib.misc.Options()\n self._capabilities.linear = True\n # Should we set this to False? 
Doing so might cause\n # a headache for some folks.\n self._capabilities.integer = True\n self._capabilities.quadratic_objective = True\n self._capabilities.quadratic_constraint = True\n self._capabilities.sos1 = True\n self._capabilities.sos2 = True\n\n def _default_results_format(self, prob_format):\n return ResultsFormat.sol\n\n def _default_executable(self):\n executable = pyutilib.services.registered_executable(\"ipopt\")\n if executable is None:\n logger.warning(\"Could not locate the 'ipopt' executable, \"\n \"which is required for solver %s\" % self.name)\n self.enable = False\n return None\n return executable.get_path()\n\n def _get_version(self):\n \"\"\"\n Returns a tuple describing the solver executable version.\n \"\"\"\n solver_exec = self.executable()\n if solver_exec is None:\n return _extract_version('')\n results = pyutilib.subprocess.run( [solver_exec,\"-v\"], timelimit=1 )\n return _extract_version(results[1])\n\n def create_command_line(self, executable, problem_files):\n\n assert(self._problem_format == ProblemFormat.nl)\n assert(self._results_format == ResultsFormat.sol)\n\n #\n # Define log file\n #\n if self._log_file is None:\n self._log_file = pyutilib.services.TempfileManager.\\\n create_tempfile(suffix=\"_ipopt.log\")\n\n fname = problem_files[0]\n if '.' in fname:\n tmp = fname.split('.')\n if len(tmp) > 2:\n fname = '.'.join(tmp[:-1])\n else:\n fname = tmp[0]\n self._soln_file = fname+\".sol\"\n\n #\n # Define results file (since an external parser is used)\n #\n self._results_file = self._soln_file\n\n #\n # Define command line\n #\n env=os.environ.copy()\n #\n # Merge the PYOMO_AMPLFUNC (externals defined within\n # Pyomo/Pyomo) with any user-specified external function\n # libraries\n #\n if 'PYOMO_AMPLFUNC' in env:\n if 'AMPLFUNC' in env:\n env['AMPLFUNC'] += \"\\n\" + env['PYOMO_AMPLFUNC']\n else:\n env['AMPLFUNC'] = env['PYOMO_AMPLFUNC']\n\n cmd = [executable, problem_files[0], '-AMPL']\n if self._timer:\n cmd.insert(0, self._timer)\n\n env_opt = []\n of_opt = []\n ofn_option_used = False\n for key in self.options:\n if key == 'solver':\n continue\n elif key.startswith(\"OF_\"):\n assert len(key) > 3\n of_opt.append((key[3:], self.options[key]))\n else:\n if key == \"option_file_name\":\n ofn_option_used = True\n if isinstance(self.options[key], basestring) and ' ' in self.options[key]:\n env_opt.append(key+\"=\\\"\"+str(self.options[key])+\"\\\"\")\n cmd.append(str(key)+\"=\"+str(self.options[key]))\n else:\n env_opt.append(key+\"=\"+str(self.options[key]))\n cmd.append(str(key)+\"=\"+str(self.options[key]))\n\n if len(of_opt) > 0:\n # If the 'option_file_name' command-line option\n # was used, we don't know if we should overwrite,\n # merge it, or it is was a mistake, so raise an\n # exception. Maybe this can be changed.\n if ofn_option_used:\n raise ValueError(\n \"The 'option_file_name' command-line \"\n \"option for Ipopt can not be used \"\n \"when specifying options for the \"\n \"options file (i.e., options that \"\n \"start with 'OF_'\")\n\n # Now check if an 'ipopt.opt' file exists in the\n # current working directory. If so, we need to\n # make it clear that this file will be ignored.\n default_of_name = os.path.join(os.getcwd(), 'ipopt.opt')\n if os.path.exists(default_of_name):\n logger.warning(\"A file named '%s' exists in \"\n \"the current working directory, but \"\n \"Ipopt options file options (i.e., \"\n \"options that start with 'OF_') were \"\n \"provided. 
The options file '%s' will \"\n \"be ignored.\" % (default_of_name,\n default_of_name))\n\n # Now write the new options file\n options_filename = pyutilib.services.TempfileManager.\\\n create_tempfile(suffix=\"_ipopt.opt\")\n with open(options_filename, \"w\") as f:\n for key, val in of_opt:\n f.write(key+\" \"+str(val)+\"\\n\")\n\n # Now set the command-line option telling Ipopt\n # to use this file\n env_opt.append('option_file_name=\"'+str(options_filename)+'\"')\n cmd.append('option_file_name='+str(options_filename))\n\n envstr = \"%s_options\" % self.options.solver\n # Merge with any options coming in through the environment\n env[envstr] = \" \".join(env_opt)\n\n return pyutilib.misc.Bunch(cmd=cmd, log_file=self._log_file, env=env)\n\n def process_output(self, rc):\n if os.path.exists(self._results_file):\n return super(IPOPT, self).process_output(rc)\n else:\n res = SolverResults()\n res.solver.status = SolverStatus.warning\n res.solver.termination_condition = TerminationCondition.other\n if os.path.exists(self._log_file):\n with open(self._log_file) as f:\n for line in f:\n if \"TOO_FEW_DEGREES_OF_FREEDOM\" in line:\n res.solver.message = line.split(':')[2].strip()\n assert \"degrees of freedom\" in res.solver.message\n return res\n\npyutilib.services.register_executable(name=\"ipopt\")\n", "path": "pyomo/solvers/plugins/solvers/IPOPT.py"}]}
| 2,846 | 224 |
gh_patches_debug_32524
|
rasdani/github-patches
|
git_diff
|
pymodbus-dev__pymodbus-1189
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
REPL server missing serial configuration
<!--
Please use the Pymodbus gitter channel at https://gitter.im/pymodbus_dev/Lobby or Stack Overflow(tag [pymodbus](https://stackoverflow.com/questions/tagged/pymodbus) for
support questions.
Before opening a new issue, make sure you do the following:
* check that your issue isn't already filed: https://github.com/riptideio/pymodbus/issues
* check the discussions forum https://github.com/riptideio/pymodbus/discussions
* prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus
-->
### Versions
* Python: Python 3.9.2 (default, Mar 12 2021, 04:06:34)
[GCC 10.2.1 20210110] on linux
* OS: PRETTY_NAME="Raspbian GNU/Linux 11 (bullseye)"
NAME="Raspbian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=raspbian
ID_LIKE=debian
* Pymodbus: 3.0.2 REPL
* Modbus Hardware (if used): Serial USB Adapter
### Pymodbus Specific
* Server: rtu - sync/async
* Client: rtu - sync/async
### Description
I am using the pymodbus REPL server and client for Modbus RTU communication with two USB adapters. Unfortunately I wasn't able to configure a different baudrate, parity, stop bits, or other serial settings for the server. I checked the serial port configuration with the stty command; in my example the server was not set to 38400 baud. I was able to set it manually while the server was running. The client settings are fine.
Are there command-line parameters to set those properties? I haven't found them in the example videos, docs, or code.
### Code and Logs
```python
pymodbus.server --verbose run -s serial -f rtu -p /dev/ttyUSB1 --baudrate 38400 -u 1 -r 2
pymodbus.console serial --method rtu --port /dev/ttyUSB0 --baudrate 38400
```
```
#serial settings and logs
pi@pi1:~/pymodbus-dev/pymodbus $ stty -F /dev/ttyUSB1 -a
#speed 9600 baud; rows 0; columns 0; line = 0;
#intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; #swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = #^V; discard = ^O; min = 0; time = 0;
#-parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts
#-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany #-imaxbel -iutf8
#-opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
#-isig -icanon -iexten -echo -echoe -echok -echonl -noflsh -xcase -tostop -echoprt -#echoctl -echoke -flusho -extproc
#manual edit interface while running the server
stty -F /dev/ttyUSB1 38400
```
</issue>
<code>
[start of pymodbus/repl/server/main.py]
1 """Repl server main."""
2 from __future__ import annotations
3
4 import asyncio
5 import json
6 import logging
7 from enum import Enum
8 from pathlib import Path
9 from typing import List
10
11 import typer
12
13 from pymodbus.framer.socket_framer import ModbusSocketFramer
14 from pymodbus.repl.server.cli import run_repl
15 from pymodbus.server.reactive.default_config import DEFAULT_CONFIG
16 from pymodbus.server.reactive.main import (
17 DEFAULT_FRAMER,
18 DEFUALT_HANDLERS,
19 ReactiveServer,
20 )
21
22
23 CANCELLED_ERROR = asyncio.exceptions.CancelledError
24
25 _logger = logging.getLogger(__name__)
26
27 CONTEXT_SETTING = {"allow_extra_args": True, "ignore_unknown_options": True}
28
29 # TBD class ModbusServerConfig:
30
31
32 class ModbusServerTypes(str, Enum):
33 """Server types."""
34
35 # ["tcp", "serial", "tls", "udp"]
36 tcp = "tcp" # pylint: disable=invalid-name
37 serial = "serial" # pylint: disable=invalid-name
38 tls = "tls" # pylint: disable=invalid-name
39 udp = "udp" # pylint: disable=invalid-name
40
41
42 class ModbusFramerTypes(str, Enum):
43 """Framer types."""
44
45 # ["socket", "rtu", "tls", "ascii", "binary"]
46 socket = "socket" # pylint: disable=invalid-name
47 rtu = "rtu" # pylint: disable=invalid-name
48 tls = "tls" # pylint: disable=invalid-name
49 ascii = "ascii" # pylint: disable=invalid-name
50 binary = "binary" # pylint: disable=invalid-name
51
52
53 def _completer(incomplete: str, valid_values: List[str]) -> List[str]:
54 """Complete value."""
55 completion = []
56 for name in valid_values:
57 if name.startswith(incomplete):
58 completion.append(name)
59 return completion
60
61
62 def framers(incomplete: str) -> List[str]:
63 """Return an autocompleted list of supported clouds."""
64 _framers = ["socket", "rtu", "tls", "ascii", "binary"]
65 return _completer(incomplete, _framers)
66
67
68 def servers(incomplete: str) -> List[str]:
69 """Return an autocompleted list of supported clouds."""
70 _servers = ["tcp", "serial", "tls", "udp"]
71 return _completer(incomplete, _servers)
72
73
74 app = typer.Typer(
75 no_args_is_help=True,
76 context_settings=CONTEXT_SETTING,
77 help="Reactive modebus server",
78 )
79
80
81 @app.callback()
82 def server(
83 ctx: typer.Context,
84 host: str = typer.Option("localhost", "--host", help="Host address"),
85 web_port: int = typer.Option(8080, "--web-port", help="Web app port"),
86 broadcast_support: bool = typer.Option(
87 False, "-b", help="Support broadcast messages"
88 ),
89 repl: bool = typer.Option(True, help="Enable/Disable repl for server"),
90 verbose: bool = typer.Option(
91 False, help="Run with debug logs enabled for pymodbus"
92 ),
93 ):
94 """Run server code."""
95 FORMAT = ( # pylint: disable=invalid-name
96 "%(asctime)-15s %(threadName)-15s"
97 " %(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s"
98 )
99 pymodbus_logger = logging.getLogger("pymodbus")
100 logging.basicConfig(format=FORMAT)
101 logger = logging.getLogger(__name__)
102 if verbose:
103 pymodbus_logger.setLevel(logging.DEBUG)
104 logger.setLevel(logging.DEBUG)
105 else:
106 pymodbus_logger.setLevel(logging.ERROR)
107 logger.setLevel(logging.ERROR)
108
109 ctx.obj = {
110 "repl": repl,
111 "host": host,
112 "web_port": web_port,
113 "broadcast": broadcast_support,
114 }
115
116
117 @app.command("run", context_settings=CONTEXT_SETTING)
118 def run(
119 ctx: typer.Context,
120 modbus_server: str = typer.Option(
121 ModbusServerTypes.tcp,
122 "--modbus-server",
123 "-s",
124 case_sensitive=False,
125 autocompletion=servers,
126 help="Modbus Server",
127 ),
128 modbus_framer: str = typer.Option(
129 ModbusFramerTypes.socket,
130 "--framer",
131 "-f",
132 case_sensitive=False,
133 autocompletion=framers,
134 help="Modbus framer to use",
135 ),
136 modbus_port: str = typer.Option("5020", "--modbus-port", "-p", help="Modbus port"),
137 modbus_unit_id: List[int] = typer.Option(
138 None, "--unit-id", "-u", help="Supported Modbus unit id's"
139 ),
140 modbus_config: Path = typer.Option(
141 None, help="Path to additional modbus server config"
142 ),
143 randomize: int = typer.Option(
144 0,
145 "--random",
146 "-r",
147 help="Randomize every `r` reads. 0=never, 1=always,2=every-second-read"
148 ", and so on. Applicable IR and DI.",
149 ),
150 ):
151 """Run Reactive Modbus server.
152
153 Exposing REST endpoint for response manipulation.
154 """
155 repl = ctx.obj.pop("repl")
156 # TBD extra_args = ctx.args
157 web_app_config = ctx.obj
158 loop = asyncio.get_event_loop()
159 framer = DEFAULT_FRAMER.get(modbus_framer, ModbusSocketFramer)
160 if modbus_config:
161 with open(modbus_config) as my_file: # pylint: disable=unspecified-encoding
162 modbus_config = json.load(my_file)
163 else:
164 modbus_config = DEFAULT_CONFIG
165
166 data_block_settings = modbus_config.pop("data_block_settings", {})
167 modbus_config = modbus_config.get(modbus_server, {})
168 if modbus_server != "serial":
169 modbus_port = int(modbus_port)
170 handler = modbus_config.pop("handler", "ModbusConnectedRequestHandler")
171 else:
172 handler = modbus_config.pop("handler", "ModbusSingleRequestHandler")
173 handler = DEFUALT_HANDLERS.get(handler.strip())
174
175 modbus_config["handler"] = handler
176 modbus_config["randomize"] = randomize
177 app = ReactiveServer.factory(
178 modbus_server,
179 framer,
180 modbus_port=modbus_port,
181 unit=modbus_unit_id,
182 loop=loop,
183 single=False,
184 data_block_settings=data_block_settings,
185 **web_app_config,
186 **modbus_config,
187 )
188 try:
189 loop.run_until_complete(app.run_async(repl))
190 if repl:
191 loop.run_until_complete(run_repl(app))
192 loop.run_forever()
193
194 except CANCELLED_ERROR:
195 print("Done!!!!!")
196
197
198 if __name__ == "__main__":
199 app()
200
[end of pymodbus/repl/server/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pymodbus/repl/server/main.py b/pymodbus/repl/server/main.py
--- a/pymodbus/repl/server/main.py
+++ b/pymodbus/repl/server/main.py
@@ -4,6 +4,7 @@
import asyncio
import json
import logging
+import sys
from enum import Enum
from pathlib import Path
from typing import List
@@ -26,6 +27,7 @@
CONTEXT_SETTING = {"allow_extra_args": True, "ignore_unknown_options": True}
+
# TBD class ModbusServerConfig:
@@ -71,6 +73,23 @@
return _completer(incomplete, _servers)
+def process_extra_args(extra_args: list[str], modbus_config: dict) -> dict:
+ """Process extra args passed to server."""
+ options_stripped = [x.strip().replace("--", "") for x in extra_args[::2]]
+ extra_args = dict(list(zip(options_stripped, extra_args[1::2])))
+ for option, value in extra_args.items():
+ if option in modbus_config:
+ try:
+ modbus_config[option] = type(modbus_config[option])(value)
+ except ValueError as err:
+ msg = (
+ f"Error parsing extra arg {option}' " f"with value '{value}'. {err}"
+ )
+ _logger.error(msg)
+ sys.exit(1)
+ return modbus_config
+
+
app = typer.Typer(
no_args_is_help=True,
context_settings=CONTEXT_SETTING,
@@ -163,8 +182,10 @@
else:
modbus_config = DEFAULT_CONFIG
+ extra_args = ctx.args
data_block_settings = modbus_config.pop("data_block_settings", {})
modbus_config = modbus_config.get(modbus_server, {})
+ modbus_config = process_extra_args(extra_args, modbus_config)
if modbus_server != "serial":
modbus_port = int(modbus_port)
handler = modbus_config.pop("handler", "ModbusConnectedRequestHandler")
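The `process_extra_args` helper introduced above lets unknown `--key value` pairs override matching keys in the serial config. A short sketch of that behaviour, with assumed default values used only for illustration:
```python
# Mirrors the parsing logic of process_extra_args added in the diff above (error handling omitted).
extra_args = ["--baudrate", "38400", "--parity", "N"]             # e.g. taken from ctx.args
modbus_config = {"baudrate": 9600, "parity": "N", "stopbits": 1}  # assumed serial defaults

options_stripped = [x.strip().replace("--", "") for x in extra_args[::2]]
overrides = dict(zip(options_stripped, extra_args[1::2]))
for option, value in overrides.items():
    if option in modbus_config:
        # Coerce to the type of the existing default, as the helper does.
        modbus_config[option] = type(modbus_config[option])(value)

print(modbus_config)  # {'baudrate': 38400, 'parity': 'N', 'stopbits': 1}
```
With a change like this in place, the invocation from the issue (`... -p /dev/ttyUSB1 --baudrate 38400`) should take effect, assuming a `baudrate` key is present in the serial section of the configuration.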
|
{"golden_diff": "diff --git a/pymodbus/repl/server/main.py b/pymodbus/repl/server/main.py\n--- a/pymodbus/repl/server/main.py\n+++ b/pymodbus/repl/server/main.py\n@@ -4,6 +4,7 @@\n import asyncio\n import json\n import logging\n+import sys\n from enum import Enum\n from pathlib import Path\n from typing import List\n@@ -26,6 +27,7 @@\n \n CONTEXT_SETTING = {\"allow_extra_args\": True, \"ignore_unknown_options\": True}\n \n+\n # TBD class ModbusServerConfig:\n \n \n@@ -71,6 +73,23 @@\n return _completer(incomplete, _servers)\n \n \n+def process_extra_args(extra_args: list[str], modbus_config: dict) -> dict:\n+ \"\"\"Process extra args passed to server.\"\"\"\n+ options_stripped = [x.strip().replace(\"--\", \"\") for x in extra_args[::2]]\n+ extra_args = dict(list(zip(options_stripped, extra_args[1::2])))\n+ for option, value in extra_args.items():\n+ if option in modbus_config:\n+ try:\n+ modbus_config[option] = type(modbus_config[option])(value)\n+ except ValueError as err:\n+ msg = (\n+ f\"Error parsing extra arg {option}' \" f\"with value '{value}'. {err}\"\n+ )\n+ _logger.error(msg)\n+ sys.exit(1)\n+ return modbus_config\n+\n+\n app = typer.Typer(\n no_args_is_help=True,\n context_settings=CONTEXT_SETTING,\n@@ -163,8 +182,10 @@\n else:\n modbus_config = DEFAULT_CONFIG\n \n+ extra_args = ctx.args\n data_block_settings = modbus_config.pop(\"data_block_settings\", {})\n modbus_config = modbus_config.get(modbus_server, {})\n+ modbus_config = process_extra_args(extra_args, modbus_config)\n if modbus_server != \"serial\":\n modbus_port = int(modbus_port)\n handler = modbus_config.pop(\"handler\", \"ModbusConnectedRequestHandler\")\n", "issue": "REPL server missing serial configuration\n<!--\r\nPlease use the Pymodbus gitter channel at https://gitter.im/pymodbus_dev/Lobby or Stack Overflow(tag [pymodbus](https://stackoverflow.com/questions/tagged/pymodbus) for\r\nsupport questions.\r\n\r\nBefore opening a new issue, make sure you do the following:\r\n * check that your issue isn't already filed: https://github.com/riptideio/pymodbus/issues\r\n * check the discussions forum https://github.com/riptideio/pymodbus/discussions\r\n * prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus\r\n-->\r\n\r\n### Versions\r\n\r\n* Python: Python 3.9.2 (default, Mar 12 2021, 04:06:34)\r\n[GCC 10.2.1 20210110] on linux\r\n* OS: PRETTY_NAME=\"Raspbian GNU/Linux 11 (bullseye)\"\r\nNAME=\"Raspbian GNU/Linux\"\r\nVERSION_ID=\"11\"\r\nVERSION=\"11 (bullseye)\"\r\nVERSION_CODENAME=bullseye\r\nID=raspbian\r\nID_LIKE=debian\r\n* Pymodbus: 3.0.2 REPL\r\n* Modbus Hardware (if used): Serial USB Adapter\r\n\r\n### Pymodbus Specific\r\n* Server: rtu - sync/async\r\n* Client: rtu - sync/async\r\n\r\n### Description\r\n\r\nI am using pymodbus REPL server and client for modbus rtu communication with two usb adapters. Unfortunately I wasn't able to configure another baudrate, parity, stop bits and other serial settings for the server. I checked the config of the serial port with the command stty. In my example the server wasn't set to 38400 baud. I was able to set it manually while running the server. The client settings are fine.\r\n\r\nAre there command line parameters to set those properties? I haven't found them in the example videos, docs or code. 
\r\n\r\n### Code and Logs\r\n\r\n```python\r\npymodbus.server --verbose run -s serial -f rtu -p /dev/ttyUSB1 --baudrate 38400 -u 1 -r 2\r\n\r\npymodbus.console serial --method rtu --port /dev/ttyUSB0 --baudrate 38400\r\n```\r\n```\r\n#serial settings and logs\r\npi@pi1:~/pymodbus-dev/pymodbus $ stty -F /dev/ttyUSB1 -a\r\n#speed 9600 baud; rows 0; columns 0; line = 0;\r\n#intr = ^C; quit = ^\\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; #swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = #^V; discard = ^O; min = 0; time = 0;\r\n#-parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts\r\n#-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon -ixoff -iuclc -ixany #-imaxbel -iutf8\r\n#-opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0\r\n#-isig -icanon -iexten -echo -echoe -echok -echonl -noflsh -xcase -tostop -echoprt -#echoctl -echoke -flusho -extproc\r\n\r\n#manual edit interface while running the server\r\nstty -F /dev/ttyUSB1 38400\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Repl server main.\"\"\"\nfrom __future__ import annotations\n\nimport asyncio\nimport json\nimport logging\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import List\n\nimport typer\n\nfrom pymodbus.framer.socket_framer import ModbusSocketFramer\nfrom pymodbus.repl.server.cli import run_repl\nfrom pymodbus.server.reactive.default_config import DEFAULT_CONFIG\nfrom pymodbus.server.reactive.main import (\n DEFAULT_FRAMER,\n DEFUALT_HANDLERS,\n ReactiveServer,\n)\n\n\nCANCELLED_ERROR = asyncio.exceptions.CancelledError\n\n_logger = logging.getLogger(__name__)\n\nCONTEXT_SETTING = {\"allow_extra_args\": True, \"ignore_unknown_options\": True}\n\n# TBD class ModbusServerConfig:\n\n\nclass ModbusServerTypes(str, Enum):\n \"\"\"Server types.\"\"\"\n\n # [\"tcp\", \"serial\", \"tls\", \"udp\"]\n tcp = \"tcp\" # pylint: disable=invalid-name\n serial = \"serial\" # pylint: disable=invalid-name\n tls = \"tls\" # pylint: disable=invalid-name\n udp = \"udp\" # pylint: disable=invalid-name\n\n\nclass ModbusFramerTypes(str, Enum):\n \"\"\"Framer types.\"\"\"\n\n # [\"socket\", \"rtu\", \"tls\", \"ascii\", \"binary\"]\n socket = \"socket\" # pylint: disable=invalid-name\n rtu = \"rtu\" # pylint: disable=invalid-name\n tls = \"tls\" # pylint: disable=invalid-name\n ascii = \"ascii\" # pylint: disable=invalid-name\n binary = \"binary\" # pylint: disable=invalid-name\n\n\ndef _completer(incomplete: str, valid_values: List[str]) -> List[str]:\n \"\"\"Complete value.\"\"\"\n completion = []\n for name in valid_values:\n if name.startswith(incomplete):\n completion.append(name)\n return completion\n\n\ndef framers(incomplete: str) -> List[str]:\n \"\"\"Return an autocompleted list of supported clouds.\"\"\"\n _framers = [\"socket\", \"rtu\", \"tls\", \"ascii\", \"binary\"]\n return _completer(incomplete, _framers)\n\n\ndef servers(incomplete: str) -> List[str]:\n \"\"\"Return an autocompleted list of supported clouds.\"\"\"\n _servers = [\"tcp\", \"serial\", \"tls\", \"udp\"]\n return _completer(incomplete, _servers)\n\n\napp = typer.Typer(\n no_args_is_help=True,\n context_settings=CONTEXT_SETTING,\n help=\"Reactive modebus server\",\n)\n\n\[email protected]()\ndef server(\n ctx: typer.Context,\n host: str = typer.Option(\"localhost\", \"--host\", help=\"Host address\"),\n web_port: int = typer.Option(8080, \"--web-port\", help=\"Web app port\"),\n broadcast_support: bool = typer.Option(\n False, \"-b\", 
help=\"Support broadcast messages\"\n ),\n repl: bool = typer.Option(True, help=\"Enable/Disable repl for server\"),\n verbose: bool = typer.Option(\n False, help=\"Run with debug logs enabled for pymodbus\"\n ),\n):\n \"\"\"Run server code.\"\"\"\n FORMAT = ( # pylint: disable=invalid-name\n \"%(asctime)-15s %(threadName)-15s\"\n \" %(levelname)-8s %(module)-15s:%(lineno)-8s %(message)s\"\n )\n pymodbus_logger = logging.getLogger(\"pymodbus\")\n logging.basicConfig(format=FORMAT)\n logger = logging.getLogger(__name__)\n if verbose:\n pymodbus_logger.setLevel(logging.DEBUG)\n logger.setLevel(logging.DEBUG)\n else:\n pymodbus_logger.setLevel(logging.ERROR)\n logger.setLevel(logging.ERROR)\n\n ctx.obj = {\n \"repl\": repl,\n \"host\": host,\n \"web_port\": web_port,\n \"broadcast\": broadcast_support,\n }\n\n\[email protected](\"run\", context_settings=CONTEXT_SETTING)\ndef run(\n ctx: typer.Context,\n modbus_server: str = typer.Option(\n ModbusServerTypes.tcp,\n \"--modbus-server\",\n \"-s\",\n case_sensitive=False,\n autocompletion=servers,\n help=\"Modbus Server\",\n ),\n modbus_framer: str = typer.Option(\n ModbusFramerTypes.socket,\n \"--framer\",\n \"-f\",\n case_sensitive=False,\n autocompletion=framers,\n help=\"Modbus framer to use\",\n ),\n modbus_port: str = typer.Option(\"5020\", \"--modbus-port\", \"-p\", help=\"Modbus port\"),\n modbus_unit_id: List[int] = typer.Option(\n None, \"--unit-id\", \"-u\", help=\"Supported Modbus unit id's\"\n ),\n modbus_config: Path = typer.Option(\n None, help=\"Path to additional modbus server config\"\n ),\n randomize: int = typer.Option(\n 0,\n \"--random\",\n \"-r\",\n help=\"Randomize every `r` reads. 0=never, 1=always,2=every-second-read\"\n \", and so on. Applicable IR and DI.\",\n ),\n):\n \"\"\"Run Reactive Modbus server.\n\n Exposing REST endpoint for response manipulation.\n \"\"\"\n repl = ctx.obj.pop(\"repl\")\n # TBD extra_args = ctx.args\n web_app_config = ctx.obj\n loop = asyncio.get_event_loop()\n framer = DEFAULT_FRAMER.get(modbus_framer, ModbusSocketFramer)\n if modbus_config:\n with open(modbus_config) as my_file: # pylint: disable=unspecified-encoding\n modbus_config = json.load(my_file)\n else:\n modbus_config = DEFAULT_CONFIG\n\n data_block_settings = modbus_config.pop(\"data_block_settings\", {})\n modbus_config = modbus_config.get(modbus_server, {})\n if modbus_server != \"serial\":\n modbus_port = int(modbus_port)\n handler = modbus_config.pop(\"handler\", \"ModbusConnectedRequestHandler\")\n else:\n handler = modbus_config.pop(\"handler\", \"ModbusSingleRequestHandler\")\n handler = DEFUALT_HANDLERS.get(handler.strip())\n\n modbus_config[\"handler\"] = handler\n modbus_config[\"randomize\"] = randomize\n app = ReactiveServer.factory(\n modbus_server,\n framer,\n modbus_port=modbus_port,\n unit=modbus_unit_id,\n loop=loop,\n single=False,\n data_block_settings=data_block_settings,\n **web_app_config,\n **modbus_config,\n )\n try:\n loop.run_until_complete(app.run_async(repl))\n if repl:\n loop.run_until_complete(run_repl(app))\n loop.run_forever()\n\n except CANCELLED_ERROR:\n print(\"Done!!!!!\")\n\n\nif __name__ == \"__main__\":\n app()\n", "path": "pymodbus/repl/server/main.py"}]}
| 3,422 | 465 |
gh_patches_debug_4743
|
rasdani/github-patches
|
git_diff
|
netket__netket-1112
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade `flakehell` in the pre-commit hook
It seems that `flakehell` is not actively maintained, and it is incompatible with `flake8` 4.x, released last October (see flakehell/flakehell#22). That issue has not been resolved after a few months. If a new developer of NetKet runs `pre-commit install-hooks`, it will just fail.
We may use [this fix](https://github.com/flakehell/flakehell/pull/23#issuecomment-985879201), or change it to [flakeheaven](https://github.com/flakeheaven/flakeheaven) which seems more actively maintained.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2
3 DEV_DEPENDENCIES = [
4 "pytest>=6",
5 "pytest-xdist>=2",
6 "coverage>=5",
7 "pytest-cov>=2.10.1",
8 "networkx~=2.4",
9 "flaky>=3.7",
10 "pre-commit",
11 "black==22.1.0",
12 "flakehell>=0.9",
13 ]
14 MPI_DEPENDENCIES = ["mpi4py>=3.0.1, <4", "mpi4jax~=0.3.1"]
15 EXTRA_DEPENDENCIES = ["tensorboardx>=2.0.0", "openfermion>=1.0.0"]
16 BASE_DEPENDENCIES = [
17 "numpy~=1.18",
18 "scipy>=1.5.3, <2",
19 "tqdm~=4.60",
20 "plum-dispatch~=1.5.1",
21 "numba>=0.52, <0.56",
22 "igraph~=0.9.8",
23 "jax>=0.2.23, <0.4",
24 "jaxlib>=0.1.69",
25 "flax>=0.3.5, <0.5",
26 "orjson~=3.4",
27 "optax>=0.1.1, <0.2",
28 "numba4jax>=0.0.3, <0.1",
29 ]
30
31 setup(
32 name="netket",
33 author="Giuseppe Carleo et al.",
34 url="http://github.com/netket/netket",
35 author_email="[email protected]",
36 license="Apache 2.0",
37 description="Netket : Machine Learning techniques for many-body quantum systems.",
38 long_description="""NetKet is an open-source project delivering cutting-edge
39 methods for the study of many-body quantum systems with artificial
40 neural networks and machine learning techniques.""",
41 classifiers=[
42 "Programming Language :: Python :: 3",
43 "Development Status :: 5 - Production/Stable",
44 "Intended Audience :: Science/Research",
45 "License :: OSI Approved :: Apache Software License",
46 "Operating System :: MacOS :: MacOS X",
47 "Operating System :: POSIX :: Linux",
48 "Operating System :: Unix",
49 "Topic :: Scientific/Engineering :: Physics",
50 ],
51 packages=find_packages(),
52 install_requires=BASE_DEPENDENCIES,
53 python_requires=">=3.7",
54 extras_require={
55 "dev": DEV_DEPENDENCIES,
56 "mpi": MPI_DEPENDENCIES,
57 "extra": EXTRA_DEPENDENCIES,
58 "all": MPI_DEPENDENCIES + DEV_DEPENDENCIES + EXTRA_DEPENDENCIES,
59 },
60 )
61
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,10 +6,9 @@
"coverage>=5",
"pytest-cov>=2.10.1",
"networkx~=2.4",
- "flaky>=3.7",
- "pre-commit",
+ "pre-commit>=2.7",
"black==22.1.0",
- "flakehell>=0.9",
+ "flake8==4.0.1",
]
MPI_DEPENDENCIES = ["mpi4py>=3.0.1, <4", "mpi4jax~=0.3.1"]
EXTRA_DEPENDENCIES = ["tensorboardx>=2.0.0", "openfermion>=1.0.0"]
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,10 +6,9 @@\n \"coverage>=5\",\n \"pytest-cov>=2.10.1\",\n \"networkx~=2.4\",\n- \"flaky>=3.7\",\n- \"pre-commit\",\n+ \"pre-commit>=2.7\",\n \"black==22.1.0\",\n- \"flakehell>=0.9\",\n+ \"flake8==4.0.1\",\n ]\n MPI_DEPENDENCIES = [\"mpi4py>=3.0.1, <4\", \"mpi4jax~=0.3.1\"]\n EXTRA_DEPENDENCIES = [\"tensorboardx>=2.0.0\", \"openfermion>=1.0.0\"]\n", "issue": "Upgrade `flakehell` in the pre-commit hook\nIt seems that `flakehell` is not actively maintained, and it is incompatible with `flake8 4.x` released in last October (see flakehell/flakehell#22). That issue is not resolved after a few months. If a new developer of NetKet runs `pre-commit install-hooks`, it will just fail.\r\n\r\nWe may use [this fix](https://github.com/flakehell/flakehell/pull/23#issuecomment-985879201), or change it to [flakeheaven](https://github.com/flakeheaven/flakeheaven) which seems more actively maintained.\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nDEV_DEPENDENCIES = [\n \"pytest>=6\",\n \"pytest-xdist>=2\",\n \"coverage>=5\",\n \"pytest-cov>=2.10.1\",\n \"networkx~=2.4\",\n \"flaky>=3.7\",\n \"pre-commit\",\n \"black==22.1.0\",\n \"flakehell>=0.9\",\n]\nMPI_DEPENDENCIES = [\"mpi4py>=3.0.1, <4\", \"mpi4jax~=0.3.1\"]\nEXTRA_DEPENDENCIES = [\"tensorboardx>=2.0.0\", \"openfermion>=1.0.0\"]\nBASE_DEPENDENCIES = [\n \"numpy~=1.18\",\n \"scipy>=1.5.3, <2\",\n \"tqdm~=4.60\",\n \"plum-dispatch~=1.5.1\",\n \"numba>=0.52, <0.56\",\n \"igraph~=0.9.8\",\n \"jax>=0.2.23, <0.4\",\n \"jaxlib>=0.1.69\",\n \"flax>=0.3.5, <0.5\",\n \"orjson~=3.4\",\n \"optax>=0.1.1, <0.2\",\n \"numba4jax>=0.0.3, <0.1\",\n]\n\nsetup(\n name=\"netket\",\n author=\"Giuseppe Carleo et al.\",\n url=\"http://github.com/netket/netket\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n description=\"Netket : Machine Learning techniques for many-body quantum systems.\",\n long_description=\"\"\"NetKet is an open-source project delivering cutting-edge\n methods for the study of many-body quantum systems with artificial\n neural networks and machine learning techniques.\"\"\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Unix\",\n \"Topic :: Scientific/Engineering :: Physics\",\n ],\n packages=find_packages(),\n install_requires=BASE_DEPENDENCIES,\n python_requires=\">=3.7\",\n extras_require={\n \"dev\": DEV_DEPENDENCIES,\n \"mpi\": MPI_DEPENDENCIES,\n \"extra\": EXTRA_DEPENDENCIES,\n \"all\": MPI_DEPENDENCIES + DEV_DEPENDENCIES + EXTRA_DEPENDENCIES,\n },\n)\n", "path": "setup.py"}]}
| 1,357 | 175 |
gh_patches_debug_12294
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2757
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
console: shift+tab is broken in WSL for some Windows terminals
See https://github.com/Microsoft/WSL/issues/1770. This seems to affect some but not all Windows terminals.
We use shift+tab by default to switch to the next pane in the console app. We can:
- Say that this is not our problem, and wait for upstream to fix it.
- Find a different binding for next pane - which would be a shame, because shift+tab is very natural (one possible alternative is sketched below).
@mhils what say you?
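One low-cost option, given the keymap API in `defaultkeys.py` below, would be to register an additional default binding for the same command; the key chosen here is purely hypothetical, not a decided fix:
```python
# Hypothetical extra binding alongside the existing "shift tab" entry inside defaultkeys.map(km).
km.add("shift tab", "console.panes.next", ["global"], "Focus next layout pane")
km.add("ctrl right", "console.panes.next", ["global"], "Focus next layout pane")  # assumed alternative key
```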
</issue>
<code>
[start of mitmproxy/tools/console/defaultkeys.py]
1
2 def map(km):
3 km.add(":", "console.command ", ["global"], "Command prompt")
4 km.add("?", "console.view.help", ["global"], "View help")
5 km.add("B", "browser.start", ["global"], "Start an attached browser")
6 km.add("C", "console.view.commands", ["global"], "View commands")
7 km.add("K", "console.view.keybindings", ["global"], "View key bindings")
8 km.add("O", "console.view.options", ["global"], "View options")
9 km.add("E", "console.view.eventlog", ["global"], "View event log")
10 km.add("Q", "console.exit", ["global"], "Exit immediately")
11 km.add("q", "console.view.pop", ["global"], "Exit the current view")
12 km.add("-", "console.layout.cycle", ["global"], "Cycle to next layout")
13 km.add("shift tab", "console.panes.next", ["global"], "Focus next layout pane")
14 km.add("P", "console.view.flow @focus", ["global"], "View flow details")
15
16 km.add("g", "console.nav.start", ["global"], "Go to start")
17 km.add("G", "console.nav.end", ["global"], "Go to end")
18 km.add("k", "console.nav.up", ["global"], "Up")
19 km.add("j", "console.nav.down", ["global"], "Down")
20 km.add("l", "console.nav.right", ["global"], "Right")
21 km.add("h", "console.nav.left", ["global"], "Left")
22 km.add("tab", "console.nav.next", ["global"], "Next")
23 km.add("enter", "console.nav.select", ["global"], "Select")
24 km.add("space", "console.nav.pagedown", ["global"], "Page down")
25 km.add("ctrl f", "console.nav.pagedown", ["global"], "Page down")
26 km.add("ctrl b", "console.nav.pageup", ["global"], "Page up")
27
28 km.add("I", "console.intercept.toggle", ["global"], "Toggle intercept")
29 km.add("i", "console.command.set intercept", ["global"], "Set intercept")
30 km.add("W", "console.command.set save_stream_file", ["global"], "Stream to file")
31 km.add("A", "flow.resume @all", ["flowlist", "flowview"], "Resume all intercepted flows")
32 km.add("a", "flow.resume @focus", ["flowlist", "flowview"], "Resume this intercepted flow")
33 km.add(
34 "b", "console.command cut.save @focus response.content ",
35 ["flowlist", "flowview"],
36 "Save response body to file"
37 )
38 km.add("d", "view.remove @focus", ["flowlist", "flowview"], "Delete flow from view")
39 km.add("D", "view.duplicate @focus", ["flowlist", "flowview"], "Duplicate flow")
40 km.add(
41 "e",
42 """
43 console.choose.cmd Format export.formats
44 console.command export.file {choice} @focus
45 """,
46 ["flowlist", "flowview"],
47 "Export this flow to file"
48 )
49 km.add("f", "console.command.set view_filter", ["flowlist"], "Set view filter")
50 km.add("F", "set console_focus_follow=toggle", ["flowlist"], "Set focus follow")
51 km.add(
52 "ctrl l",
53 "console.command cut.clip ",
54 ["flowlist", "flowview"],
55 "Send cuts to clipboard"
56 )
57 km.add("L", "console.command view.load ", ["flowlist"], "Load flows from file")
58 km.add("m", "flow.mark.toggle @focus", ["flowlist"], "Toggle mark on this flow")
59 km.add("M", "view.marked.toggle", ["flowlist"], "Toggle viewing marked flows")
60 km.add(
61 "n",
62 "console.command view.create get https://example.com/",
63 ["flowlist"],
64 "Create a new flow"
65 )
66 km.add(
67 "o",
68 """
69 console.choose.cmd Order view.order.options
70 set view_order={choice}
71 """,
72 ["flowlist"],
73 "Set flow list order"
74 )
75 km.add("r", "replay.client @focus", ["flowlist", "flowview"], "Replay this flow")
76 km.add("S", "console.command replay.server ", ["flowlist"], "Start server replay")
77 km.add("v", "set view_order_reversed=toggle", ["flowlist"], "Reverse flow list order")
78 km.add("U", "flow.mark @all false", ["flowlist"], "Un-set all marks")
79 km.add("w", "console.command save.file @shown ", ["flowlist"], "Save listed flows to file")
80 km.add("V", "flow.revert @focus", ["flowlist", "flowview"], "Revert changes to this flow")
81 km.add("X", "flow.kill @focus", ["flowlist"], "Kill this flow")
82 km.add("z", "view.remove @all", ["flowlist"], "Clear flow list")
83 km.add("Z", "view.remove @hidden", ["flowlist"], "Purge all flows not showing")
84 km.add(
85 "|",
86 "console.command script.run @focus ",
87 ["flowlist", "flowview"],
88 "Run a script on this flow"
89 )
90
91 km.add(
92 "e",
93 """
94 console.choose.cmd Part console.edit.focus.options
95 console.edit.focus {choice}
96 """,
97 ["flowview"],
98 "Edit a flow component"
99 )
100 km.add(
101 "f",
102 "view.setval.toggle @focus fullcontents",
103 ["flowview"],
104 "Toggle viewing full contents on this flow",
105 )
106 km.add("w", "console.command save.file @focus ", ["flowview"], "Save flow to file")
107 km.add("space", "view.focus.next", ["flowview"], "Go to next flow")
108
109 km.add(
110 "v",
111 """
112 console.choose "View Part" request,response
113 console.bodyview @focus {choice}
114 """,
115 ["flowview"],
116 "View flow body in an external viewer"
117 )
118 km.add("p", "view.focus.prev", ["flowview"], "Go to previous flow")
119 km.add(
120 "m",
121 """
122 console.choose.cmd Mode console.flowview.mode.options
123 console.flowview.mode.set {choice}
124 """,
125 ["flowview"],
126 "Set flow view mode"
127 )
128 km.add(
129 "z",
130 """
131 console.choose "Part" request,response
132 flow.encode.toggle @focus {choice}
133 """,
134 ["flowview"],
135 "Encode/decode flow body"
136 )
137
138 km.add("L", "console.command options.load ", ["options"], "Load from file")
139 km.add("S", "console.command options.save ", ["options"], "Save to file")
140 km.add("D", "options.reset", ["options"], "Reset all options")
141 km.add("d", "console.options.reset.focus", ["options"], "Reset this option")
142
143 km.add("a", "console.grideditor.add", ["grideditor"], "Add a row after cursor")
144 km.add("A", "console.grideditor.insert", ["grideditor"], "Insert a row before cursor")
145 km.add("d", "console.grideditor.delete", ["grideditor"], "Delete this row")
146 km.add(
147 "r",
148 "console.command console.grideditor.load",
149 ["grideditor"],
150 "Read unescaped data into the current cell from file"
151 )
152 km.add(
153 "R",
154 "console.command console.grideditor.load_escaped",
155 ["grideditor"],
156 "Load a Python-style escaped string into the current cell from file"
157 )
158 km.add("e", "console.grideditor.editor", ["grideditor"], "Edit in external editor")
159 km.add(
160 "w",
161 "console.command console.grideditor.save ",
162 ["grideditor"],
163 "Save data to file as CSV"
164 )
165
166 km.add("z", "eventstore.clear", ["eventlog"], "Clear")
167
168 km.add(
169 "a",
170 """
171 console.choose.cmd "Context" console.key.contexts
172 console.command console.key.bind {choice}
173 """,
174 ["keybindings"],
175 "Add a key binding"
176 )
177 km.add(
178 "d",
179 "console.key.unbind.focus",
180 ["keybindings"],
181 "Unbind the currently focused key binding"
182 )
183 km.add(
184 "x",
185 "console.key.execute.focus",
186 ["keybindings"],
187 "Execute the currently focused key binding"
188 )
189 km.add(
190 "enter",
191 "console.key.edit.focus",
192 ["keybindings"],
193 "Edit the currently focused key binding"
194 )
195
[end of mitmproxy/tools/console/defaultkeys.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mitmproxy/tools/console/defaultkeys.py b/mitmproxy/tools/console/defaultkeys.py
--- a/mitmproxy/tools/console/defaultkeys.py
+++ b/mitmproxy/tools/console/defaultkeys.py
@@ -11,6 +11,7 @@
km.add("q", "console.view.pop", ["global"], "Exit the current view")
km.add("-", "console.layout.cycle", ["global"], "Cycle to next layout")
km.add("shift tab", "console.panes.next", ["global"], "Focus next layout pane")
+ km.add("ctrl right", "console.panes.next", ["global"], "Focus next layout pane")
km.add("P", "console.view.flow @focus", ["global"], "View flow details")
km.add("g", "console.nav.start", ["global"], "Go to start")
|
{"golden_diff": "diff --git a/mitmproxy/tools/console/defaultkeys.py b/mitmproxy/tools/console/defaultkeys.py\n--- a/mitmproxy/tools/console/defaultkeys.py\n+++ b/mitmproxy/tools/console/defaultkeys.py\n@@ -11,6 +11,7 @@\n km.add(\"q\", \"console.view.pop\", [\"global\"], \"Exit the current view\")\n km.add(\"-\", \"console.layout.cycle\", [\"global\"], \"Cycle to next layout\")\n km.add(\"shift tab\", \"console.panes.next\", [\"global\"], \"Focus next layout pane\")\n+ km.add(\"ctrl right\", \"console.panes.next\", [\"global\"], \"Focus next layout pane\")\n km.add(\"P\", \"console.view.flow @focus\", [\"global\"], \"View flow details\")\n \n km.add(\"g\", \"console.nav.start\", [\"global\"], \"Go to start\")\n", "issue": "console: shift+tab is broken in WSL for some Windows terminals\nSee https://github.com/Microsoft/WSL/issues/1770. This seems to affect some but not all Windows terminals. \r\n\r\nWe use shift+tab by default to switch to the next pane in the console app. We can:\r\n\r\n- Say that this is not our problem, and wait for upstream to fix it.\r\n- Find a different binding for next pane - which would be a shame, because shit+tab is very natural.\r\n\r\n@mhils what say you?\n", "before_files": [{"content": "\ndef map(km):\n km.add(\":\", \"console.command \", [\"global\"], \"Command prompt\")\n km.add(\"?\", \"console.view.help\", [\"global\"], \"View help\")\n km.add(\"B\", \"browser.start\", [\"global\"], \"Start an attached browser\")\n km.add(\"C\", \"console.view.commands\", [\"global\"], \"View commands\")\n km.add(\"K\", \"console.view.keybindings\", [\"global\"], \"View key bindings\")\n km.add(\"O\", \"console.view.options\", [\"global\"], \"View options\")\n km.add(\"E\", \"console.view.eventlog\", [\"global\"], \"View event log\")\n km.add(\"Q\", \"console.exit\", [\"global\"], \"Exit immediately\")\n km.add(\"q\", \"console.view.pop\", [\"global\"], \"Exit the current view\")\n km.add(\"-\", \"console.layout.cycle\", [\"global\"], \"Cycle to next layout\")\n km.add(\"shift tab\", \"console.panes.next\", [\"global\"], \"Focus next layout pane\")\n km.add(\"P\", \"console.view.flow @focus\", [\"global\"], \"View flow details\")\n\n km.add(\"g\", \"console.nav.start\", [\"global\"], \"Go to start\")\n km.add(\"G\", \"console.nav.end\", [\"global\"], \"Go to end\")\n km.add(\"k\", \"console.nav.up\", [\"global\"], \"Up\")\n km.add(\"j\", \"console.nav.down\", [\"global\"], \"Down\")\n km.add(\"l\", \"console.nav.right\", [\"global\"], \"Right\")\n km.add(\"h\", \"console.nav.left\", [\"global\"], \"Left\")\n km.add(\"tab\", \"console.nav.next\", [\"global\"], \"Next\")\n km.add(\"enter\", \"console.nav.select\", [\"global\"], \"Select\")\n km.add(\"space\", \"console.nav.pagedown\", [\"global\"], \"Page down\")\n km.add(\"ctrl f\", \"console.nav.pagedown\", [\"global\"], \"Page down\")\n km.add(\"ctrl b\", \"console.nav.pageup\", [\"global\"], \"Page up\")\n\n km.add(\"I\", \"console.intercept.toggle\", [\"global\"], \"Toggle intercept\")\n km.add(\"i\", \"console.command.set intercept\", [\"global\"], \"Set intercept\")\n km.add(\"W\", \"console.command.set save_stream_file\", [\"global\"], \"Stream to file\")\n km.add(\"A\", \"flow.resume @all\", [\"flowlist\", \"flowview\"], \"Resume all intercepted flows\")\n km.add(\"a\", \"flow.resume @focus\", [\"flowlist\", \"flowview\"], \"Resume this intercepted flow\")\n km.add(\n \"b\", \"console.command cut.save @focus response.content \",\n [\"flowlist\", \"flowview\"],\n \"Save response body to file\"\n )\n km.add(\"d\", 
\"view.remove @focus\", [\"flowlist\", \"flowview\"], \"Delete flow from view\")\n km.add(\"D\", \"view.duplicate @focus\", [\"flowlist\", \"flowview\"], \"Duplicate flow\")\n km.add(\n \"e\",\n \"\"\"\n console.choose.cmd Format export.formats\n console.command export.file {choice} @focus\n \"\"\",\n [\"flowlist\", \"flowview\"],\n \"Export this flow to file\"\n )\n km.add(\"f\", \"console.command.set view_filter\", [\"flowlist\"], \"Set view filter\")\n km.add(\"F\", \"set console_focus_follow=toggle\", [\"flowlist\"], \"Set focus follow\")\n km.add(\n \"ctrl l\",\n \"console.command cut.clip \",\n [\"flowlist\", \"flowview\"],\n \"Send cuts to clipboard\"\n )\n km.add(\"L\", \"console.command view.load \", [\"flowlist\"], \"Load flows from file\")\n km.add(\"m\", \"flow.mark.toggle @focus\", [\"flowlist\"], \"Toggle mark on this flow\")\n km.add(\"M\", \"view.marked.toggle\", [\"flowlist\"], \"Toggle viewing marked flows\")\n km.add(\n \"n\",\n \"console.command view.create get https://example.com/\",\n [\"flowlist\"],\n \"Create a new flow\"\n )\n km.add(\n \"o\",\n \"\"\"\n console.choose.cmd Order view.order.options\n set view_order={choice}\n \"\"\",\n [\"flowlist\"],\n \"Set flow list order\"\n )\n km.add(\"r\", \"replay.client @focus\", [\"flowlist\", \"flowview\"], \"Replay this flow\")\n km.add(\"S\", \"console.command replay.server \", [\"flowlist\"], \"Start server replay\")\n km.add(\"v\", \"set view_order_reversed=toggle\", [\"flowlist\"], \"Reverse flow list order\")\n km.add(\"U\", \"flow.mark @all false\", [\"flowlist\"], \"Un-set all marks\")\n km.add(\"w\", \"console.command save.file @shown \", [\"flowlist\"], \"Save listed flows to file\")\n km.add(\"V\", \"flow.revert @focus\", [\"flowlist\", \"flowview\"], \"Revert changes to this flow\")\n km.add(\"X\", \"flow.kill @focus\", [\"flowlist\"], \"Kill this flow\")\n km.add(\"z\", \"view.remove @all\", [\"flowlist\"], \"Clear flow list\")\n km.add(\"Z\", \"view.remove @hidden\", [\"flowlist\"], \"Purge all flows not showing\")\n km.add(\n \"|\",\n \"console.command script.run @focus \",\n [\"flowlist\", \"flowview\"],\n \"Run a script on this flow\"\n )\n\n km.add(\n \"e\",\n \"\"\"\n console.choose.cmd Part console.edit.focus.options\n console.edit.focus {choice}\n \"\"\",\n [\"flowview\"],\n \"Edit a flow component\"\n )\n km.add(\n \"f\",\n \"view.setval.toggle @focus fullcontents\",\n [\"flowview\"],\n \"Toggle viewing full contents on this flow\",\n )\n km.add(\"w\", \"console.command save.file @focus \", [\"flowview\"], \"Save flow to file\")\n km.add(\"space\", \"view.focus.next\", [\"flowview\"], \"Go to next flow\")\n\n km.add(\n \"v\",\n \"\"\"\n console.choose \"View Part\" request,response\n console.bodyview @focus {choice}\n \"\"\",\n [\"flowview\"],\n \"View flow body in an external viewer\"\n )\n km.add(\"p\", \"view.focus.prev\", [\"flowview\"], \"Go to previous flow\")\n km.add(\n \"m\",\n \"\"\"\n console.choose.cmd Mode console.flowview.mode.options\n console.flowview.mode.set {choice}\n \"\"\",\n [\"flowview\"],\n \"Set flow view mode\"\n )\n km.add(\n \"z\",\n \"\"\"\n console.choose \"Part\" request,response\n flow.encode.toggle @focus {choice}\n \"\"\",\n [\"flowview\"],\n \"Encode/decode flow body\"\n )\n\n km.add(\"L\", \"console.command options.load \", [\"options\"], \"Load from file\")\n km.add(\"S\", \"console.command options.save \", [\"options\"], \"Save to file\")\n km.add(\"D\", \"options.reset\", [\"options\"], \"Reset all options\")\n km.add(\"d\", \"console.options.reset.focus\", 
[\"options\"], \"Reset this option\")\n\n km.add(\"a\", \"console.grideditor.add\", [\"grideditor\"], \"Add a row after cursor\")\n km.add(\"A\", \"console.grideditor.insert\", [\"grideditor\"], \"Insert a row before cursor\")\n km.add(\"d\", \"console.grideditor.delete\", [\"grideditor\"], \"Delete this row\")\n km.add(\n \"r\",\n \"console.command console.grideditor.load\",\n [\"grideditor\"],\n \"Read unescaped data into the current cell from file\"\n )\n km.add(\n \"R\",\n \"console.command console.grideditor.load_escaped\",\n [\"grideditor\"],\n \"Load a Python-style escaped string into the current cell from file\"\n )\n km.add(\"e\", \"console.grideditor.editor\", [\"grideditor\"], \"Edit in external editor\")\n km.add(\n \"w\",\n \"console.command console.grideditor.save \",\n [\"grideditor\"],\n \"Save data to file as CSV\"\n )\n\n km.add(\"z\", \"eventstore.clear\", [\"eventlog\"], \"Clear\")\n\n km.add(\n \"a\",\n \"\"\"\n console.choose.cmd \"Context\" console.key.contexts\n console.command console.key.bind {choice}\n \"\"\",\n [\"keybindings\"],\n \"Add a key binding\"\n )\n km.add(\n \"d\",\n \"console.key.unbind.focus\",\n [\"keybindings\"],\n \"Unbind the currently focused key binding\"\n )\n km.add(\n \"x\",\n \"console.key.execute.focus\",\n [\"keybindings\"],\n \"Execute the currently focused key binding\"\n )\n km.add(\n \"enter\",\n \"console.key.edit.focus\",\n [\"keybindings\"],\n \"Edit the currently focused key binding\"\n )\n", "path": "mitmproxy/tools/console/defaultkeys.py"}]}
| 3,025 | 180 |
gh_patches_debug_27640
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-6301
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wagtail should not change month name translations
### Issue Summary
Wagtail translations overrides month name translations (at least for Slovenian language) which changes how dates are formatted.
### Steps to Reproduce
With wagtail installed:
```python
>>> from django.utils.translation import activate
>>> activate("sl")
>>> from django.utils import formats
...
>>> from datetime import date
>>> formats.date_format(date.today())
'5. Avgust 2020'
```
It should be (and without wagtail installed it is) `5. avgust 2020`.
* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes / no)
yes
### Technical details
* Python version: Run `python --version`.
Python 3.7.1
Django version: Look in your requirements.txt, or run `pip show django | grep Version`.
Version: 2.2.14
* Wagtail version: Look at the bottom of the Settings menu in the Wagtail admin, or run `pip show wagtail | grep Version:`.
Version: 2.9.2
</issue>
<code>
[start of wagtail/admin/localization.py]
1 import pytz
2
3 from django.conf import settings
4 from django.utils.translation import gettext as _
5 from django.utils.translation import gettext_lazy
6
7
8 # Wagtail languages with >=90% coverage
9 # This list is manually maintained
10 WAGTAILADMIN_PROVIDED_LANGUAGES = [
11 ('ar', gettext_lazy('Arabic')),
12 ('ca', gettext_lazy('Catalan')),
13 ('cs', gettext_lazy('Czech')),
14 ('de', gettext_lazy('German')),
15 ('el', gettext_lazy('Greek')),
16 ('en', gettext_lazy('English')),
17 ('es', gettext_lazy('Spanish')),
18 ('fi', gettext_lazy('Finnish')),
19 ('fr', gettext_lazy('French')),
20 ('gl', gettext_lazy('Galician')),
21 ('hu', gettext_lazy('Hungarian')),
22 ('id-id', gettext_lazy('Indonesian')),
23 ('is-is', gettext_lazy('Icelandic')),
24 ('it', gettext_lazy('Italian')),
25 ('ja', gettext_lazy('Japanese')),
26 ('ko', gettext_lazy('Korean')),
27 ('lt', gettext_lazy('Lithuanian')),
28 ('mn', gettext_lazy('Mongolian')),
29 ('nb', gettext_lazy('Norwegian Bokmål')),
30 ('nl-nl', gettext_lazy('Netherlands Dutch')),
31 ('fa', gettext_lazy('Persian')),
32 ('pl', gettext_lazy('Polish')),
33 ('pt-br', gettext_lazy('Brazilian Portuguese')),
34 ('pt-pt', gettext_lazy('Portuguese')),
35 ('ro', gettext_lazy('Romanian')),
36 ('ru', gettext_lazy('Russian')),
37 ('sv', gettext_lazy('Swedish')),
38 ('sk-sk', gettext_lazy('Slovak')),
39 ('th', gettext_lazy('Thai')),
40 ('tr', gettext_lazy('Turkish')),
41 ('tr-tr', gettext_lazy('Turkish (Turkey)')),
42 ('uk', gettext_lazy('Ukrainian')),
43 ('zh-hans', gettext_lazy('Chinese (Simplified)')),
44 ('zh-hant', gettext_lazy('Chinese (Traditional)')),
45 ]
46
47
48 # Translatable strings to be made available to JavaScript code
49 # as the wagtailConfig.STRINGS object
50 def get_js_translation_strings():
51 return {
52 'DELETE': _('Delete'),
53 'EDIT': _('Edit'),
54 'PAGE': _('Page'),
55 'PAGES': _('Pages'),
56 'LOADING': _('Loading…'),
57 'NO_RESULTS': _('No results'),
58 'SERVER_ERROR': _('Server Error'),
59 'SEE_ALL': _('See all'),
60 'CLOSE_EXPLORER': _('Close explorer'),
61 'ALT_TEXT': _('Alt text'),
62 'WRITE_HERE': _('Write here…'),
63 'HORIZONTAL_LINE': _('Horizontal line'),
64 'LINE_BREAK': _('Line break'),
65 'UNDO': _('Undo'),
66 'REDO': _('Redo'),
67 'RELOAD_PAGE': _('Reload the page'),
68 'RELOAD_EDITOR': _('Reload saved content'),
69 'SHOW_LATEST_CONTENT': _('Show latest content'),
70 'SHOW_ERROR': _('Show error'),
71 'EDITOR_CRASH': _('The editor just crashed. Content has been reset to the last saved version.'),
72 'BROKEN_LINK': _('Broken link'),
73 'MISSING_DOCUMENT': _('Missing document'),
74 'CLOSE': _('Close'),
75 'EDIT_PAGE': _('Edit \'{title}\''),
76 'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \'{title}\''),
77 'PAGE_EXPLORER': _('Page explorer'),
78
79 'MONTHS': [
80 _('January'),
81 _('February'),
82 _('March'),
83 _('April'),
84 _('May'),
85 _('June'),
86 _('July'),
87 _('August'),
88 _('September'),
89 _('October'),
90 _('November'),
91 _('December')
92 ],
93 'WEEKDAYS': [
94 _('Sunday'),
95 _('Monday'),
96 _('Tuesday'),
97 _('Wednesday'),
98 _('Thursday'),
99 _('Friday'),
100 _('Saturday')
101 ],
102 'WEEKDAYS_SHORT': [
103 _('Sun'),
104 _('Mon'),
105 _('Tue'),
106 _('Wed'),
107 _('Thu'),
108 _('Fri'),
109 _('Sat')
110 ]
111 }
112
113
114 def get_available_admin_languages():
115 return getattr(settings, 'WAGTAILADMIN_PERMITTED_LANGUAGES', WAGTAILADMIN_PROVIDED_LANGUAGES)
116
117
118 def get_available_admin_time_zones():
119 return getattr(settings, 'WAGTAIL_USER_TIME_ZONES', pytz.common_timezones)
120
[end of wagtail/admin/localization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/admin/localization.py b/wagtail/admin/localization.py
--- a/wagtail/admin/localization.py
+++ b/wagtail/admin/localization.py
@@ -1,6 +1,7 @@
import pytz
from django.conf import settings
+from django.utils.dates import MONTHS, WEEKDAYS, WEEKDAYS_ABBR
from django.utils.translation import gettext as _
from django.utils.translation import gettext_lazy
@@ -76,38 +77,12 @@
'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \'{title}\''),
'PAGE_EXPLORER': _('Page explorer'),
- 'MONTHS': [
- _('January'),
- _('February'),
- _('March'),
- _('April'),
- _('May'),
- _('June'),
- _('July'),
- _('August'),
- _('September'),
- _('October'),
- _('November'),
- _('December')
- ],
- 'WEEKDAYS': [
- _('Sunday'),
- _('Monday'),
- _('Tuesday'),
- _('Wednesday'),
- _('Thursday'),
- _('Friday'),
- _('Saturday')
- ],
- 'WEEKDAYS_SHORT': [
- _('Sun'),
- _('Mon'),
- _('Tue'),
- _('Wed'),
- _('Thu'),
- _('Fri'),
- _('Sat')
- ]
+ 'MONTHS': [str(m) for m in MONTHS.values()],
+
+ # Django's WEEKDAYS list begins on Monday, but ours should start on Sunday, so start
+ # counting from -1 and use modulo 7 to get an array index
+ 'WEEKDAYS': [str(WEEKDAYS[d % 7]) for d in range(-1, 6)],
+ 'WEEKDAYS_SHORT': [str(WEEKDAYS_ABBR[d % 7]) for d in range(-1, 6)],
}
|
{"golden_diff": "diff --git a/wagtail/admin/localization.py b/wagtail/admin/localization.py\n--- a/wagtail/admin/localization.py\n+++ b/wagtail/admin/localization.py\n@@ -1,6 +1,7 @@\n import pytz\n \n from django.conf import settings\n+from django.utils.dates import MONTHS, WEEKDAYS, WEEKDAYS_ABBR\n from django.utils.translation import gettext as _\n from django.utils.translation import gettext_lazy\n \n@@ -76,38 +77,12 @@\n 'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \\'{title}\\''),\n 'PAGE_EXPLORER': _('Page explorer'),\n \n- 'MONTHS': [\n- _('January'),\n- _('February'),\n- _('March'),\n- _('April'),\n- _('May'),\n- _('June'),\n- _('July'),\n- _('August'),\n- _('September'),\n- _('October'),\n- _('November'),\n- _('December')\n- ],\n- 'WEEKDAYS': [\n- _('Sunday'),\n- _('Monday'),\n- _('Tuesday'),\n- _('Wednesday'),\n- _('Thursday'),\n- _('Friday'),\n- _('Saturday')\n- ],\n- 'WEEKDAYS_SHORT': [\n- _('Sun'),\n- _('Mon'),\n- _('Tue'),\n- _('Wed'),\n- _('Thu'),\n- _('Fri'),\n- _('Sat')\n- ]\n+ 'MONTHS': [str(m) for m in MONTHS.values()],\n+\n+ # Django's WEEKDAYS list begins on Monday, but ours should start on Sunday, so start\n+ # counting from -1 and use modulo 7 to get an array index\n+ 'WEEKDAYS': [str(WEEKDAYS[d % 7]) for d in range(-1, 6)],\n+ 'WEEKDAYS_SHORT': [str(WEEKDAYS_ABBR[d % 7]) for d in range(-1, 6)],\n }\n", "issue": "Wagtail should not change month name translations\n### Issue Summary\r\n\r\nWagtail translations overrides month name translations (at least for Slovenian language) which changes how dates are formatted.\r\n\r\n### Steps to Reproduce\r\n\r\nWith wagtail installed:\r\n\r\n```python\r\n>>> from django.utils.translation import activate \r\n>>> activate(\"sl\") \r\n>>> from django.utils import formats \r\n... \r\n>>> from datetime import date \r\n>>> formats.date_format(date.today()) \r\n'5. Avgust 2020'\r\n```\r\n\r\nIt should be (and without wagtail installed it is) `5. 
avgust 2020`.\r\n\r\n* I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (yes / no)\r\n\r\nyes\r\n\r\n### Technical details\r\n\r\n* Python version: Run `python --version`.\r\n\r\nPython 3.7.1\r\n\r\nDjango version: Look in your requirements.txt, or run `pip show django | grep Version`.\r\n\r\nVersion: 2.2.14\r\n\r\n* Wagtail version: Look at the bottom of the Settings menu in the Wagtail admin, or run `pip show wagtail | grep Version:`.\r\n\r\nVersion: 2.9.2\r\n\n", "before_files": [{"content": "import pytz\n\nfrom django.conf import settings\nfrom django.utils.translation import gettext as _\nfrom django.utils.translation import gettext_lazy\n\n\n# Wagtail languages with >=90% coverage\n# This list is manually maintained\nWAGTAILADMIN_PROVIDED_LANGUAGES = [\n ('ar', gettext_lazy('Arabic')),\n ('ca', gettext_lazy('Catalan')),\n ('cs', gettext_lazy('Czech')),\n ('de', gettext_lazy('German')),\n ('el', gettext_lazy('Greek')),\n ('en', gettext_lazy('English')),\n ('es', gettext_lazy('Spanish')),\n ('fi', gettext_lazy('Finnish')),\n ('fr', gettext_lazy('French')),\n ('gl', gettext_lazy('Galician')),\n ('hu', gettext_lazy('Hungarian')),\n ('id-id', gettext_lazy('Indonesian')),\n ('is-is', gettext_lazy('Icelandic')),\n ('it', gettext_lazy('Italian')),\n ('ja', gettext_lazy('Japanese')),\n ('ko', gettext_lazy('Korean')),\n ('lt', gettext_lazy('Lithuanian')),\n ('mn', gettext_lazy('Mongolian')),\n ('nb', gettext_lazy('Norwegian Bokm\u00e5l')),\n ('nl-nl', gettext_lazy('Netherlands Dutch')),\n ('fa', gettext_lazy('Persian')),\n ('pl', gettext_lazy('Polish')),\n ('pt-br', gettext_lazy('Brazilian Portuguese')),\n ('pt-pt', gettext_lazy('Portuguese')),\n ('ro', gettext_lazy('Romanian')),\n ('ru', gettext_lazy('Russian')),\n ('sv', gettext_lazy('Swedish')),\n ('sk-sk', gettext_lazy('Slovak')),\n ('th', gettext_lazy('Thai')),\n ('tr', gettext_lazy('Turkish')),\n ('tr-tr', gettext_lazy('Turkish (Turkey)')),\n ('uk', gettext_lazy('Ukrainian')),\n ('zh-hans', gettext_lazy('Chinese (Simplified)')),\n ('zh-hant', gettext_lazy('Chinese (Traditional)')),\n]\n\n\n# Translatable strings to be made available to JavaScript code\n# as the wagtailConfig.STRINGS object\ndef get_js_translation_strings():\n return {\n 'DELETE': _('Delete'),\n 'EDIT': _('Edit'),\n 'PAGE': _('Page'),\n 'PAGES': _('Pages'),\n 'LOADING': _('Loading\u2026'),\n 'NO_RESULTS': _('No results'),\n 'SERVER_ERROR': _('Server Error'),\n 'SEE_ALL': _('See all'),\n 'CLOSE_EXPLORER': _('Close explorer'),\n 'ALT_TEXT': _('Alt text'),\n 'WRITE_HERE': _('Write here\u2026'),\n 'HORIZONTAL_LINE': _('Horizontal line'),\n 'LINE_BREAK': _('Line break'),\n 'UNDO': _('Undo'),\n 'REDO': _('Redo'),\n 'RELOAD_PAGE': _('Reload the page'),\n 'RELOAD_EDITOR': _('Reload saved content'),\n 'SHOW_LATEST_CONTENT': _('Show latest content'),\n 'SHOW_ERROR': _('Show error'),\n 'EDITOR_CRASH': _('The editor just crashed. 
Content has been reset to the last saved version.'),\n 'BROKEN_LINK': _('Broken link'),\n 'MISSING_DOCUMENT': _('Missing document'),\n 'CLOSE': _('Close'),\n 'EDIT_PAGE': _('Edit \\'{title}\\''),\n 'VIEW_CHILD_PAGES_OF_PAGE': _('View child pages of \\'{title}\\''),\n 'PAGE_EXPLORER': _('Page explorer'),\n\n 'MONTHS': [\n _('January'),\n _('February'),\n _('March'),\n _('April'),\n _('May'),\n _('June'),\n _('July'),\n _('August'),\n _('September'),\n _('October'),\n _('November'),\n _('December')\n ],\n 'WEEKDAYS': [\n _('Sunday'),\n _('Monday'),\n _('Tuesday'),\n _('Wednesday'),\n _('Thursday'),\n _('Friday'),\n _('Saturday')\n ],\n 'WEEKDAYS_SHORT': [\n _('Sun'),\n _('Mon'),\n _('Tue'),\n _('Wed'),\n _('Thu'),\n _('Fri'),\n _('Sat')\n ]\n }\n\n\ndef get_available_admin_languages():\n return getattr(settings, 'WAGTAILADMIN_PERMITTED_LANGUAGES', WAGTAILADMIN_PROVIDED_LANGUAGES)\n\n\ndef get_available_admin_time_zones():\n return getattr(settings, 'WAGTAIL_USER_TIME_ZONES', pytz.common_timezones)\n", "path": "wagtail/admin/localization.py"}]}
| 1,972 | 431 |
gh_patches_debug_21832
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-2771
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
use scrypt by default
#2654 added scrypt support, but couldn't make it the default because [PyPy didn't support it at the time](https://foss.heptapod.net/pypy/pypy/-/issues/3921). Now PyPy has fixed that and [made a release](https://doc.pypy.org/en/latest/release-v7.3.12.html), I'm comfortable with making scrypt the default.
</issue>
<code>
[start of src/werkzeug/security.py]
1 from __future__ import annotations
2
3 import hashlib
4 import hmac
5 import os
6 import posixpath
7 import secrets
8
9 SALT_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
10 DEFAULT_PBKDF2_ITERATIONS = 600000
11
12 _os_alt_seps: list[str] = list(
13 sep for sep in [os.sep, os.path.altsep] if sep is not None and sep != "/"
14 )
15
16
17 def gen_salt(length: int) -> str:
18 """Generate a random string of SALT_CHARS with specified ``length``."""
19 if length <= 0:
20 raise ValueError("Salt length must be at least 1.")
21
22 return "".join(secrets.choice(SALT_CHARS) for _ in range(length))
23
24
25 def _hash_internal(method: str, salt: str, password: str) -> tuple[str, str]:
26 method, *args = method.split(":")
27 salt = salt.encode("utf-8")
28 password = password.encode("utf-8")
29
30 if method == "scrypt":
31 if not args:
32 n = 2**15
33 r = 8
34 p = 1
35 else:
36 try:
37 n, r, p = map(int, args)
38 except ValueError:
39 raise ValueError("'scrypt' takes 3 arguments.") from None
40
41 maxmem = 132 * n * r * p # ideally 128, but some extra seems needed
42 return (
43 hashlib.scrypt(password, salt=salt, n=n, r=r, p=p, maxmem=maxmem).hex(),
44 f"scrypt:{n}:{r}:{p}",
45 )
46 elif method == "pbkdf2":
47 len_args = len(args)
48
49 if len_args == 0:
50 hash_name = "sha256"
51 iterations = DEFAULT_PBKDF2_ITERATIONS
52 elif len_args == 1:
53 hash_name = args[0]
54 iterations = DEFAULT_PBKDF2_ITERATIONS
55 elif len_args == 2:
56 hash_name = args[0]
57 iterations = int(args[1])
58 else:
59 raise ValueError("'pbkdf2' takes 2 arguments.")
60
61 return (
62 hashlib.pbkdf2_hmac(hash_name, password, salt, iterations).hex(),
63 f"pbkdf2:{hash_name}:{iterations}",
64 )
65 else:
66 raise ValueError(f"Invalid hash method '{method}'.")
67
68
69 def generate_password_hash(
70 password: str, method: str = "pbkdf2", salt_length: int = 16
71 ) -> str:
72 """Securely hash a password for storage. A password can be compared to a stored hash
73 using :func:`check_password_hash`.
74
75 The following methods are supported:
76
77 - ``scrypt``, more secure but not available on PyPy. The parameters are ``n``,
78 ``r``, and ``p``, the default is ``scrypt:32768:8:1``. See
79 :func:`hashlib.scrypt`.
80 - ``pbkdf2``, the default. The parameters are ``hash_method`` and ``iterations``,
81 the default is ``pbkdf2:sha256:600000``. See :func:`hashlib.pbkdf2_hmac`.
82
83 Default parameters may be updated to reflect current guidelines, and methods may be
84 deprecated and removed if they are no longer considered secure. To migrate old
85 hashes, you may generate a new hash when checking an old hash, or you may contact
86 users with a link to reset their password.
87
88 :param password: The plaintext password.
89 :param method: The key derivation function and parameters.
90 :param salt_length: The number of characters to generate for the salt.
91
92 .. versionchanged:: 2.3
93 Scrypt support was added.
94
95 .. versionchanged:: 2.3
96 The default iterations for pbkdf2 was increased to 600,000.
97
98 .. versionchanged:: 2.3
99 All plain hashes are deprecated and will not be supported in Werkzeug 3.0.
100 """
101 salt = gen_salt(salt_length)
102 h, actual_method = _hash_internal(method, salt, password)
103 return f"{actual_method}${salt}${h}"
104
105
106 def check_password_hash(pwhash: str, password: str) -> bool:
107 """Securely check that the given stored password hash, previously generated using
108 :func:`generate_password_hash`, matches the given password.
109
110 Methods may be deprecated and removed if they are no longer considered secure. To
111 migrate old hashes, you may generate a new hash when checking an old hash, or you
112 may contact users with a link to reset their password.
113
114 :param pwhash: The hashed password.
115 :param password: The plaintext password.
116
117 .. versionchanged:: 2.3
118 All plain hashes are deprecated and will not be supported in Werkzeug 3.0.
119 """
120 try:
121 method, salt, hashval = pwhash.split("$", 2)
122 except ValueError:
123 return False
124
125 return hmac.compare_digest(_hash_internal(method, salt, password)[0], hashval)
126
127
128 def safe_join(directory: str, *pathnames: str) -> str | None:
129 """Safely join zero or more untrusted path components to a base
130 directory to avoid escaping the base directory.
131
132 :param directory: The trusted base directory.
133 :param pathnames: The untrusted path components relative to the
134 base directory.
135 :return: A safe path, otherwise ``None``.
136 """
137 if not directory:
138 # Ensure we end up with ./path if directory="" is given,
139 # otherwise the first untrusted part could become trusted.
140 directory = "."
141
142 parts = [directory]
143
144 for filename in pathnames:
145 if filename != "":
146 filename = posixpath.normpath(filename)
147
148 if (
149 any(sep in filename for sep in _os_alt_seps)
150 or os.path.isabs(filename)
151 or filename == ".."
152 or filename.startswith("../")
153 ):
154 return None
155
156 parts.append(filename)
157
158 return posixpath.join(*parts)
159
[end of src/werkzeug/security.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/werkzeug/security.py b/src/werkzeug/security.py
--- a/src/werkzeug/security.py
+++ b/src/werkzeug/security.py
@@ -67,17 +67,16 @@
def generate_password_hash(
- password: str, method: str = "pbkdf2", salt_length: int = 16
+ password: str, method: str = "scrypt", salt_length: int = 16
) -> str:
"""Securely hash a password for storage. A password can be compared to a stored hash
using :func:`check_password_hash`.
The following methods are supported:
- - ``scrypt``, more secure but not available on PyPy. The parameters are ``n``,
- ``r``, and ``p``, the default is ``scrypt:32768:8:1``. See
- :func:`hashlib.scrypt`.
- - ``pbkdf2``, the default. The parameters are ``hash_method`` and ``iterations``,
+ - ``scrypt``, the default. The parameters are ``n``, ``r``, and ``p``, the default
+ is ``scrypt:32768:8:1``. See :func:`hashlib.scrypt`.
+ - ``pbkdf2``, less secure. The parameters are ``hash_method`` and ``iterations``,
the default is ``pbkdf2:sha256:600000``. See :func:`hashlib.pbkdf2_hmac`.
Default parameters may be updated to reflect current guidelines, and methods may be
|
{"golden_diff": "diff --git a/src/werkzeug/security.py b/src/werkzeug/security.py\n--- a/src/werkzeug/security.py\n+++ b/src/werkzeug/security.py\n@@ -67,17 +67,16 @@\n \n \n def generate_password_hash(\n- password: str, method: str = \"pbkdf2\", salt_length: int = 16\n+ password: str, method: str = \"scrypt\", salt_length: int = 16\n ) -> str:\n \"\"\"Securely hash a password for storage. A password can be compared to a stored hash\n using :func:`check_password_hash`.\n \n The following methods are supported:\n \n- - ``scrypt``, more secure but not available on PyPy. The parameters are ``n``,\n- ``r``, and ``p``, the default is ``scrypt:32768:8:1``. See\n- :func:`hashlib.scrypt`.\n- - ``pbkdf2``, the default. The parameters are ``hash_method`` and ``iterations``,\n+ - ``scrypt``, the default. The parameters are ``n``, ``r``, and ``p``, the default\n+ is ``scrypt:32768:8:1``. See :func:`hashlib.scrypt`.\n+ - ``pbkdf2``, less secure. The parameters are ``hash_method`` and ``iterations``,\n the default is ``pbkdf2:sha256:600000``. See :func:`hashlib.pbkdf2_hmac`.\n \n Default parameters may be updated to reflect current guidelines, and methods may be\n", "issue": "use scrypt by default\n#2654 added scrypt support, but couldn't make it the default because [PyPy didn't support it at the time](https://foss.heptapod.net/pypy/pypy/-/issues/3921). Now PyPy has fixed that and [made a release](https://doc.pypy.org/en/latest/release-v7.3.12.html), I'm comfortable with making scrypt the default.\n", "before_files": [{"content": "from __future__ import annotations\n\nimport hashlib\nimport hmac\nimport os\nimport posixpath\nimport secrets\n\nSALT_CHARS = \"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789\"\nDEFAULT_PBKDF2_ITERATIONS = 600000\n\n_os_alt_seps: list[str] = list(\n sep for sep in [os.sep, os.path.altsep] if sep is not None and sep != \"/\"\n)\n\n\ndef gen_salt(length: int) -> str:\n \"\"\"Generate a random string of SALT_CHARS with specified ``length``.\"\"\"\n if length <= 0:\n raise ValueError(\"Salt length must be at least 1.\")\n\n return \"\".join(secrets.choice(SALT_CHARS) for _ in range(length))\n\n\ndef _hash_internal(method: str, salt: str, password: str) -> tuple[str, str]:\n method, *args = method.split(\":\")\n salt = salt.encode(\"utf-8\")\n password = password.encode(\"utf-8\")\n\n if method == \"scrypt\":\n if not args:\n n = 2**15\n r = 8\n p = 1\n else:\n try:\n n, r, p = map(int, args)\n except ValueError:\n raise ValueError(\"'scrypt' takes 3 arguments.\") from None\n\n maxmem = 132 * n * r * p # ideally 128, but some extra seems needed\n return (\n hashlib.scrypt(password, salt=salt, n=n, r=r, p=p, maxmem=maxmem).hex(),\n f\"scrypt:{n}:{r}:{p}\",\n )\n elif method == \"pbkdf2\":\n len_args = len(args)\n\n if len_args == 0:\n hash_name = \"sha256\"\n iterations = DEFAULT_PBKDF2_ITERATIONS\n elif len_args == 1:\n hash_name = args[0]\n iterations = DEFAULT_PBKDF2_ITERATIONS\n elif len_args == 2:\n hash_name = args[0]\n iterations = int(args[1])\n else:\n raise ValueError(\"'pbkdf2' takes 2 arguments.\")\n\n return (\n hashlib.pbkdf2_hmac(hash_name, password, salt, iterations).hex(),\n f\"pbkdf2:{hash_name}:{iterations}\",\n )\n else:\n raise ValueError(f\"Invalid hash method '{method}'.\")\n\n\ndef generate_password_hash(\n password: str, method: str = \"pbkdf2\", salt_length: int = 16\n) -> str:\n \"\"\"Securely hash a password for storage. 
A password can be compared to a stored hash\n using :func:`check_password_hash`.\n\n The following methods are supported:\n\n - ``scrypt``, more secure but not available on PyPy. The parameters are ``n``,\n ``r``, and ``p``, the default is ``scrypt:32768:8:1``. See\n :func:`hashlib.scrypt`.\n - ``pbkdf2``, the default. The parameters are ``hash_method`` and ``iterations``,\n the default is ``pbkdf2:sha256:600000``. See :func:`hashlib.pbkdf2_hmac`.\n\n Default parameters may be updated to reflect current guidelines, and methods may be\n deprecated and removed if they are no longer considered secure. To migrate old\n hashes, you may generate a new hash when checking an old hash, or you may contact\n users with a link to reset their password.\n\n :param password: The plaintext password.\n :param method: The key derivation function and parameters.\n :param salt_length: The number of characters to generate for the salt.\n\n .. versionchanged:: 2.3\n Scrypt support was added.\n\n .. versionchanged:: 2.3\n The default iterations for pbkdf2 was increased to 600,000.\n\n .. versionchanged:: 2.3\n All plain hashes are deprecated and will not be supported in Werkzeug 3.0.\n \"\"\"\n salt = gen_salt(salt_length)\n h, actual_method = _hash_internal(method, salt, password)\n return f\"{actual_method}${salt}${h}\"\n\n\ndef check_password_hash(pwhash: str, password: str) -> bool:\n \"\"\"Securely check that the given stored password hash, previously generated using\n :func:`generate_password_hash`, matches the given password.\n\n Methods may be deprecated and removed if they are no longer considered secure. To\n migrate old hashes, you may generate a new hash when checking an old hash, or you\n may contact users with a link to reset their password.\n\n :param pwhash: The hashed password.\n :param password: The plaintext password.\n\n .. versionchanged:: 2.3\n All plain hashes are deprecated and will not be supported in Werkzeug 3.0.\n \"\"\"\n try:\n method, salt, hashval = pwhash.split(\"$\", 2)\n except ValueError:\n return False\n\n return hmac.compare_digest(_hash_internal(method, salt, password)[0], hashval)\n\n\ndef safe_join(directory: str, *pathnames: str) -> str | None:\n \"\"\"Safely join zero or more untrusted path components to a base\n directory to avoid escaping the base directory.\n\n :param directory: The trusted base directory.\n :param pathnames: The untrusted path components relative to the\n base directory.\n :return: A safe path, otherwise ``None``.\n \"\"\"\n if not directory:\n # Ensure we end up with ./path if directory=\"\" is given,\n # otherwise the first untrusted part could become trusted.\n directory = \".\"\n\n parts = [directory]\n\n for filename in pathnames:\n if filename != \"\":\n filename = posixpath.normpath(filename)\n\n if (\n any(sep in filename for sep in _os_alt_seps)\n or os.path.isabs(filename)\n or filename == \"..\"\n or filename.startswith(\"../\")\n ):\n return None\n\n parts.append(filename)\n\n return posixpath.join(*parts)\n", "path": "src/werkzeug/security.py"}]}
| 2,358 | 366 |
gh_patches_debug_28835
|
rasdani/github-patches
|
git_diff
|
HypothesisWorks__hypothesis-2669
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing dependency on setuptools
When importing `hypothesis` in a container that only installs the declared dependencies (i.e. `sortedcontainers` and `attrs`) then we see the following error:
```
Traceback (most recent call last):
File "/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/entry_points.py", line 27, in <module>
from importlib import metadata as importlib_metadata
ImportError: cannot import name 'metadata'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/entry_points.py", line 29, in <module>
import importlib_metadata # type: ignore # mypy thinks this is a redefinition
ModuleNotFoundError: No module named 'importlib_metadata'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "common/python/runtime/python3_wrapper.py", line 36, in <module>
eval(compiled_code, module.__dict__)
File "/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/__main__/perception/cloud/proto_format_test.py", line 7, in <module>
import hypothesis as hyp
File "/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/__init__.py", line 27, in <module>
from hypothesis.entry_points import run
File "/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/entry_points.py", line 38, in <module>
import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
```
The `pkg_resources` module is provided by `setuptools`.
I think tweaking `setup.py` should fix it.
</issue>
<code>
[start of hypothesis-python/setup.py]
1 # This file is part of Hypothesis, which may be found at
2 # https://github.com/HypothesisWorks/hypothesis/
3 #
4 # Most of this work is copyright (C) 2013-2020 David R. MacIver
5 # ([email protected]), but it contains contributions by others. See
6 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
7 # consult the git log if you need to determine who owns an individual
8 # contribution.
9 #
10 # This Source Code Form is subject to the terms of the Mozilla Public License,
11 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
12 # obtain one at https://mozilla.org/MPL/2.0/.
13 #
14 # END HEADER
15
16 import os
17 import sys
18 import warnings
19
20 import setuptools
21
22 if sys.version_info[:2] < (3, 6):
23 raise Exception(
24 "This version of Python is too old to install new versions of Hypothesis. "
25 "Update `pip` and `setuptools`, try again, and you will automatically "
26 "get the latest compatible version of Hypothesis instead. "
27 "See also https://python3statement.org/practicalities/"
28 )
29
30
31 def local_file(name):
32 return os.path.relpath(os.path.join(os.path.dirname(__file__), name))
33
34
35 SOURCE = local_file("src")
36 README = local_file("README.rst")
37
38 setuptools_version = tuple(map(int, setuptools.__version__.split(".")[:2]))
39
40 if setuptools_version < (36, 2):
41 # Warning only - very bad if uploading bdist but fine if installing sdist.
42 warnings.warn(
43 "This version of setuptools is too old to correctly store "
44 "conditional dependencies in binary wheels. For more info, see: "
45 "https://hynek.me/articles/conditional-python-dependencies/"
46 )
47
48
49 # Assignment to placate pyflakes. The actual version is from the exec that
50 # follows.
51 __version__ = None
52
53 with open(local_file("src/hypothesis/version.py")) as o:
54 exec(o.read())
55
56 assert __version__ is not None
57
58
59 extras = {
60 "cli": ["click>=7.0", "black>=19.10b0"],
61 "ghostwriter": ["black>=19.10b0"],
62 "pytz": ["pytz>=2014.1"],
63 "dateutil": ["python-dateutil>=1.4"],
64 "lark": ["lark-parser>=0.6.5"],
65 "numpy": ["numpy>=1.9.0"],
66 "pandas": ["pandas>=0.19"],
67 "pytest": ["pytest>=4.3"],
68 "dpcontracts": ["dpcontracts>=0.4"],
69 "redis": ["redis>=3.0.0"],
70 # We only support Django versions with upstream support - see
71 # https://www.djangoproject.com/download/#supported-versions
72 "django": ["pytz>=2014.1", "django>=2.2"],
73 }
74
75 extras["all"] = sorted(set(sum(extras.values(), [])))
76
77
78 setuptools.setup(
79 name="hypothesis",
80 version=__version__,
81 author="David R. MacIver",
82 author_email="[email protected]",
83 packages=setuptools.find_packages(SOURCE),
84 package_dir={"": SOURCE},
85 package_data={"hypothesis": ["py.typed", "vendor/tlds-alpha-by-domain.txt"]},
86 url="https://github.com/HypothesisWorks/hypothesis/tree/master/hypothesis-python",
87 project_urls={
88 "Website": "https://hypothesis.works",
89 "Documentation": "https://hypothesis.readthedocs.io",
90 "Issues": "https://github.com/HypothesisWorks/hypothesis/issues",
91 },
92 license="MPL v2",
93 description="A library for property-based testing",
94 zip_safe=False,
95 extras_require=extras,
96 install_requires=["attrs>=19.2.0", "sortedcontainers>=2.1.0,<3.0.0"],
97 python_requires=">=3.6",
98 classifiers=[
99 "Development Status :: 5 - Production/Stable",
100 "Framework :: Hypothesis",
101 "Framework :: Pytest",
102 "Intended Audience :: Developers",
103 "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
104 "Operating System :: Unix",
105 "Operating System :: POSIX",
106 "Operating System :: Microsoft :: Windows",
107 "Programming Language :: Python",
108 "Programming Language :: Python :: 3",
109 "Programming Language :: Python :: 3 :: Only",
110 "Programming Language :: Python :: 3.6",
111 "Programming Language :: Python :: 3.7",
112 "Programming Language :: Python :: 3.8",
113 "Programming Language :: Python :: Implementation :: CPython",
114 "Programming Language :: Python :: Implementation :: PyPy",
115 "Topic :: Education :: Testing",
116 "Topic :: Software Development :: Testing",
117 "Typing :: Typed",
118 ],
119 entry_points={
120 "pytest11": ["hypothesispytest = hypothesis.extra.pytestplugin"],
121 "console_scripts": ["hypothesis = hypothesis.extra.cli:main"],
122 },
123 long_description=open(README).read(),
124 long_description_content_type="text/x-rst",
125 keywords="python testing fuzzing property-based-testing",
126 )
127
[end of hypothesis-python/setup.py]
[start of hypothesis-python/src/hypothesis/entry_points.py]
1 # This file is part of Hypothesis, which may be found at
2 # https://github.com/HypothesisWorks/hypothesis/
3 #
4 # Most of this work is copyright (C) 2013-2020 David R. MacIver
5 # ([email protected]), but it contains contributions by others. See
6 # CONTRIBUTING.rst for a full list of people who may hold copyright, and
7 # consult the git log if you need to determine who owns an individual
8 # contribution.
9 #
10 # This Source Code Form is subject to the terms of the Mozilla Public License,
11 # v. 2.0. If a copy of the MPL was not distributed with this file, You can
12 # obtain one at https://mozilla.org/MPL/2.0/.
13 #
14 # END HEADER
15
16 """Run all functions registered for the "hypothesis" entry point.
17
18 This can be used with `st.register_type_strategy` to register strategies for your
19 custom types, running the relevant code when *hypothesis* is imported instead of
20 your package.
21 """
22
23 try:
24 # We prefer to use importlib.metadata, or the backport on Python <= 3.7,
25 # because it's much faster than pkg_resources (200ms import time speedup).
26 try:
27 from importlib import metadata as importlib_metadata
28 except ImportError:
29 import importlib_metadata # type: ignore # mypy thinks this is a redefinition
30
31 def get_entry_points():
32 yield from importlib_metadata.entry_points().get("hypothesis", [])
33
34
35 except ImportError:
36 # But if we're not on Python >= 3.8 and the importlib_metadata backport
37 # is not installed, we fall back to pkg_resources anyway.
38 import pkg_resources
39
40 def get_entry_points():
41 yield from pkg_resources.iter_entry_points("hypothesis")
42
43
44 def run():
45 for entry in get_entry_points(): # pragma: no cover
46 hook = entry.load()
47 if callable(hook):
48 hook()
49
[end of hypothesis-python/src/hypothesis/entry_points.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/hypothesis-python/setup.py b/hypothesis-python/setup.py
--- a/hypothesis-python/setup.py
+++ b/hypothesis-python/setup.py
@@ -63,7 +63,7 @@
"dateutil": ["python-dateutil>=1.4"],
"lark": ["lark-parser>=0.6.5"],
"numpy": ["numpy>=1.9.0"],
- "pandas": ["pandas>=0.19"],
+ "pandas": ["pandas>=0.25"],
"pytest": ["pytest>=4.3"],
"dpcontracts": ["dpcontracts>=0.4"],
"redis": ["redis>=3.0.0"],
@@ -72,7 +72,9 @@
"django": ["pytz>=2014.1", "django>=2.2"],
}
-extras["all"] = sorted(set(sum(extras.values(), [])))
+extras["all"] = sorted(
+ set(sum(extras.values(), ["importlib_metadata ; python_version<'3.8'"]))
+)
setuptools.setup(
diff --git a/hypothesis-python/src/hypothesis/entry_points.py b/hypothesis-python/src/hypothesis/entry_points.py
--- a/hypothesis-python/src/hypothesis/entry_points.py
+++ b/hypothesis-python/src/hypothesis/entry_points.py
@@ -35,10 +35,26 @@
except ImportError:
# But if we're not on Python >= 3.8 and the importlib_metadata backport
# is not installed, we fall back to pkg_resources anyway.
- import pkg_resources
+ try:
+ import pkg_resources
+ except ImportError:
+ import warnings
- def get_entry_points():
- yield from pkg_resources.iter_entry_points("hypothesis")
+ from hypothesis.errors import HypothesisWarning
+
+ warnings.warn(
+ "Under Python <= 3.7, Hypothesis requires either the importlib_metadata "
+ "or setuptools package in order to load plugins via entrypoints.",
+ HypothesisWarning,
+ )
+
+ def get_entry_points():
+ yield from ()
+
+ else:
+
+ def get_entry_points():
+ yield from pkg_resources.iter_entry_points("hypothesis")
def run():
|
{"golden_diff": "diff --git a/hypothesis-python/setup.py b/hypothesis-python/setup.py\n--- a/hypothesis-python/setup.py\n+++ b/hypothesis-python/setup.py\n@@ -63,7 +63,7 @@\n \"dateutil\": [\"python-dateutil>=1.4\"],\n \"lark\": [\"lark-parser>=0.6.5\"],\n \"numpy\": [\"numpy>=1.9.0\"],\n- \"pandas\": [\"pandas>=0.19\"],\n+ \"pandas\": [\"pandas>=0.25\"],\n \"pytest\": [\"pytest>=4.3\"],\n \"dpcontracts\": [\"dpcontracts>=0.4\"],\n \"redis\": [\"redis>=3.0.0\"],\n@@ -72,7 +72,9 @@\n \"django\": [\"pytz>=2014.1\", \"django>=2.2\"],\n }\n \n-extras[\"all\"] = sorted(set(sum(extras.values(), [])))\n+extras[\"all\"] = sorted(\n+ set(sum(extras.values(), [\"importlib_metadata ; python_version<'3.8'\"]))\n+)\n \n \n setuptools.setup(\ndiff --git a/hypothesis-python/src/hypothesis/entry_points.py b/hypothesis-python/src/hypothesis/entry_points.py\n--- a/hypothesis-python/src/hypothesis/entry_points.py\n+++ b/hypothesis-python/src/hypothesis/entry_points.py\n@@ -35,10 +35,26 @@\n except ImportError:\n # But if we're not on Python >= 3.8 and the importlib_metadata backport\n # is not installed, we fall back to pkg_resources anyway.\n- import pkg_resources\n+ try:\n+ import pkg_resources\n+ except ImportError:\n+ import warnings\n \n- def get_entry_points():\n- yield from pkg_resources.iter_entry_points(\"hypothesis\")\n+ from hypothesis.errors import HypothesisWarning\n+\n+ warnings.warn(\n+ \"Under Python <= 3.7, Hypothesis requires either the importlib_metadata \"\n+ \"or setuptools package in order to load plugins via entrypoints.\",\n+ HypothesisWarning,\n+ )\n+\n+ def get_entry_points():\n+ yield from ()\n+\n+ else:\n+\n+ def get_entry_points():\n+ yield from pkg_resources.iter_entry_points(\"hypothesis\")\n \n \n def run():\n", "issue": "Missing dependency on setuptools\nWhen importing `hypothesis` in a container that only installs the declared dependencies (i.e. 
`sortedcontainers` and `attrs`) then we see the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/entry_points.py\", line 27, in <module>\r\n from importlib import metadata as importlib_metadata\r\nImportError: cannot import name 'metadata'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/entry_points.py\", line 29, in <module>\r\n import importlib_metadata # type: ignore # mypy thinks this is a redefinition\r\nModuleNotFoundError: No module named 'importlib_metadata'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"common/python/runtime/python3_wrapper.py\", line 36, in <module>\r\n eval(compiled_code, module.__dict__)\r\n File \"/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/__main__/perception/cloud/proto_format_test.py\", line 7, in <module>\r\n import hypothesis as hyp\r\n File \"/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/__init__.py\", line 27, in <module>\r\n from hypothesis.entry_points import run\r\n File \"/run/shm/bazel-sandbox.58afe50bce693a45c42be3530f1579c8f12116963ef7e303ad72c1a2b06ed6f2/processwrapper-sandbox/4962/execroot/__main__/bazel-out/k8-fastbuild/bin/perception/cloud/proto_format_test.runfiles/internal_pip_dependency_hypothesis/pypi__hypothesis/hypothesis/entry_points.py\", line 38, in <module>\r\n import pkg_resources\r\nModuleNotFoundError: No module named 'pkg_resources'\r\n```\r\nThe `pkg_resources` module is provided by `setuptools`.\r\n\r\nI think tweaking `setup.py` should fix it.\n", "before_files": [{"content": "# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2020 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\nimport os\nimport sys\nimport warnings\n\nimport setuptools\n\nif sys.version_info[:2] < (3, 6):\n raise Exception(\n \"This version of Python is too old to install new versions of Hypothesis. \"\n \"Update `pip` and `setuptools`, try again, and you will automatically \"\n \"get the latest compatible version of Hypothesis instead. 
\"\n \"See also https://python3statement.org/practicalities/\"\n )\n\n\ndef local_file(name):\n return os.path.relpath(os.path.join(os.path.dirname(__file__), name))\n\n\nSOURCE = local_file(\"src\")\nREADME = local_file(\"README.rst\")\n\nsetuptools_version = tuple(map(int, setuptools.__version__.split(\".\")[:2]))\n\nif setuptools_version < (36, 2):\n # Warning only - very bad if uploading bdist but fine if installing sdist.\n warnings.warn(\n \"This version of setuptools is too old to correctly store \"\n \"conditional dependencies in binary wheels. For more info, see: \"\n \"https://hynek.me/articles/conditional-python-dependencies/\"\n )\n\n\n# Assignment to placate pyflakes. The actual version is from the exec that\n# follows.\n__version__ = None\n\nwith open(local_file(\"src/hypothesis/version.py\")) as o:\n exec(o.read())\n\nassert __version__ is not None\n\n\nextras = {\n \"cli\": [\"click>=7.0\", \"black>=19.10b0\"],\n \"ghostwriter\": [\"black>=19.10b0\"],\n \"pytz\": [\"pytz>=2014.1\"],\n \"dateutil\": [\"python-dateutil>=1.4\"],\n \"lark\": [\"lark-parser>=0.6.5\"],\n \"numpy\": [\"numpy>=1.9.0\"],\n \"pandas\": [\"pandas>=0.19\"],\n \"pytest\": [\"pytest>=4.3\"],\n \"dpcontracts\": [\"dpcontracts>=0.4\"],\n \"redis\": [\"redis>=3.0.0\"],\n # We only support Django versions with upstream support - see\n # https://www.djangoproject.com/download/#supported-versions\n \"django\": [\"pytz>=2014.1\", \"django>=2.2\"],\n}\n\nextras[\"all\"] = sorted(set(sum(extras.values(), [])))\n\n\nsetuptools.setup(\n name=\"hypothesis\",\n version=__version__,\n author=\"David R. MacIver\",\n author_email=\"[email protected]\",\n packages=setuptools.find_packages(SOURCE),\n package_dir={\"\": SOURCE},\n package_data={\"hypothesis\": [\"py.typed\", \"vendor/tlds-alpha-by-domain.txt\"]},\n url=\"https://github.com/HypothesisWorks/hypothesis/tree/master/hypothesis-python\",\n project_urls={\n \"Website\": \"https://hypothesis.works\",\n \"Documentation\": \"https://hypothesis.readthedocs.io\",\n \"Issues\": \"https://github.com/HypothesisWorks/hypothesis/issues\",\n },\n license=\"MPL v2\",\n description=\"A library for property-based testing\",\n zip_safe=False,\n extras_require=extras,\n install_requires=[\"attrs>=19.2.0\", \"sortedcontainers>=2.1.0,<3.0.0\"],\n python_requires=\">=3.6\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Hypothesis\",\n \"Framework :: Pytest\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)\",\n \"Operating System :: Unix\",\n \"Operating System :: POSIX\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Education :: Testing\",\n \"Topic :: Software Development :: Testing\",\n \"Typing :: Typed\",\n ],\n entry_points={\n \"pytest11\": [\"hypothesispytest = hypothesis.extra.pytestplugin\"],\n \"console_scripts\": [\"hypothesis = hypothesis.extra.cli:main\"],\n },\n long_description=open(README).read(),\n long_description_content_type=\"text/x-rst\",\n keywords=\"python testing fuzzing property-based-testing\",\n)\n", "path": "hypothesis-python/setup.py"}, {"content": 
"# This file is part of Hypothesis, which may be found at\n# https://github.com/HypothesisWorks/hypothesis/\n#\n# Most of this work is copyright (C) 2013-2020 David R. MacIver\n# ([email protected]), but it contains contributions by others. See\n# CONTRIBUTING.rst for a full list of people who may hold copyright, and\n# consult the git log if you need to determine who owns an individual\n# contribution.\n#\n# This Source Code Form is subject to the terms of the Mozilla Public License,\n# v. 2.0. If a copy of the MPL was not distributed with this file, You can\n# obtain one at https://mozilla.org/MPL/2.0/.\n#\n# END HEADER\n\n\"\"\"Run all functions registered for the \"hypothesis\" entry point.\n\nThis can be used with `st.register_type_strategy` to register strategies for your\ncustom types, running the relevant code when *hypothesis* is imported instead of\nyour package.\n\"\"\"\n\ntry:\n # We prefer to use importlib.metadata, or the backport on Python <= 3.7,\n # because it's much faster than pkg_resources (200ms import time speedup).\n try:\n from importlib import metadata as importlib_metadata\n except ImportError:\n import importlib_metadata # type: ignore # mypy thinks this is a redefinition\n\n def get_entry_points():\n yield from importlib_metadata.entry_points().get(\"hypothesis\", [])\n\n\nexcept ImportError:\n # But if we're not on Python >= 3.8 and the importlib_metadata backport\n # is not installed, we fall back to pkg_resources anyway.\n import pkg_resources\n\n def get_entry_points():\n yield from pkg_resources.iter_entry_points(\"hypothesis\")\n\n\ndef run():\n for entry in get_entry_points(): # pragma: no cover\n hook = entry.load()\n if callable(hook):\n hook()\n", "path": "hypothesis-python/src/hypothesis/entry_points.py"}]}
| 3,392 | 511 |
gh_patches_debug_766
|
rasdani/github-patches
|
git_diff
|
kivy__kivy-4149
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ModalView background size is not updated
Since https://github.com/kivy/kivy/pull/4136 the ModalView background is not resized when the window size changes, run `kivy/uix/modalview.py`, then resize the window.

</issue>
<code>
[start of kivy/uix/modalview.py]
1 '''
2 ModalView
3 =========
4
5 .. versionadded:: 1.4.0
6
7 The :class:`ModalView` widget is used to create modal views. By default, the
8 view will cover the whole "parent" window.
9
10 Remember that the default size of a Widget is size_hint=(1, 1). If you don't
11 want your view to be fullscreen, either use size hints with values lower than
12 1 (for instance size_hint=(.8, .8)) or deactivate the size_hint and use fixed
13 size attributes.
14
15 Examples
16 --------
17
18 Example of a simple 400x400 Hello world view::
19
20 view = ModalView(size_hint=(None, None), size=(400, 400))
21 view.add_widget(Label(text='Hello world'))
22
23 By default, any click outside the view will dismiss it. If you don't
24 want that, you can set :attr:`ModalView.auto_dismiss` to False::
25
26 view = ModalView(auto_dismiss=False)
27 view.add_widget(Label(text='Hello world'))
28 view.open()
29
30 To manually dismiss/close the view, use the :meth:`ModalView.dismiss` method of
31 the ModalView instance::
32
33 view.dismiss()
34
35 Both :meth:`ModalView.open` and :meth:`ModalView.dismiss` are bindable. That
36 means you can directly bind the function to an action, e.g. to a button's
37 on_press ::
38
39 # create content and add it to the view
40 content = Button(text='Close me!')
41 view = ModalView(auto_dismiss=False)
42 view.add_widget(content)
43
44 # bind the on_press event of the button to the dismiss function
45 content.bind(on_press=view.dismiss)
46
47 # open the view
48 view.open()
49
50
51 ModalView Events
52 ----------------
53
54 There are two events available: `on_open` which is raised when the view is
55 opening, and `on_dismiss` which is raised when the view is closed.
56 For `on_dismiss`, you can prevent the view from closing by explictly returning
57 True from your callback. ::
58
59 def my_callback(instance):
60 print('ModalView', instance, 'is being dismissed, but is prevented!')
61 return True
62 view = ModalView()
63 view.add_widget(Label(text='Hello world'))
64 view.bind(on_dismiss=my_callback)
65 view.open()
66
67
68 .. versionchanged:: 1.5.0
69 The ModalView can be closed by hitting the escape key on the
70 keyboard if the :attr:`ModalView.auto_dismiss` property is True (the
71 default).
72
73 '''
74
75 __all__ = ('ModalView', )
76
77 from kivy.logger import Logger
78 from kivy.animation import Animation
79 from kivy.uix.anchorlayout import AnchorLayout
80 from kivy.properties import StringProperty, BooleanProperty, ObjectProperty, \
81 NumericProperty, ListProperty
82
83
84 class ModalView(AnchorLayout):
85 '''ModalView class. See module documentation for more information.
86
87 :Events:
88 `on_open`:
89 Fired when the ModalView is opened.
90 `on_dismiss`:
91 Fired when the ModalView is closed. If the callback returns True,
92 the dismiss will be canceled.
93 '''
94
95 auto_dismiss = BooleanProperty(True)
96 '''This property determines if the view is automatically
97 dismissed when the user clicks outside it.
98
99 :attr:`auto_dismiss` is a :class:`~kivy.properties.BooleanProperty` and
100 defaults to True.
101 '''
102
103 attach_to = ObjectProperty(None)
104 '''If a widget is set on attach_to, the view will attach to the nearest
105 parent window of the widget. If none is found, it will attach to the
106 main/global Window.
107
108 :attr:`attach_to` is an :class:`~kivy.properties.ObjectProperty` and
109 defaults to None.
110 '''
111
112 background_color = ListProperty([0, 0, 0, .7])
113 '''Background color in the format (r, g, b, a).
114
115 :attr:`background_color` is a :class:`~kivy.properties.ListProperty` and
116 defaults to [0, 0, 0, .7].
117 '''
118
119 background = StringProperty(
120 'atlas://data/images/defaulttheme/modalview-background')
121 '''Background image of the view used for the view background.
122
123 :attr:`background` is a :class:`~kivy.properties.StringProperty` and
124 defaults to 'atlas://data/images/defaulttheme/modalview-background'.
125 '''
126
127 border = ListProperty([16, 16, 16, 16])
128 '''Border used for :class:`~kivy.graphics.vertex_instructions.BorderImage`
129 graphics instruction. Used for the :attr:`background_normal` and the
130 :attr:`background_down` properties. Can be used when using custom
131 backgrounds.
132
133 It must be a list of four values: (top, right, bottom, left). Read the
134 BorderImage instructions for more information about how to use it.
135
136 :attr:`border` is a :class:`~kivy.properties.ListProperty` and defaults to
137 (16, 16, 16, 16).
138 '''
139
140 # Internals properties used for graphical representation.
141
142 _anim_alpha = NumericProperty(0)
143
144 _anim_duration = NumericProperty(.1)
145
146 _window = ObjectProperty(None, allownone=True)
147
148 __events__ = ('on_open', 'on_dismiss')
149
150 def __init__(self, **kwargs):
151 self._parent = None
152 super(ModalView, self).__init__(**kwargs)
153
154 def _search_window(self):
155 # get window to attach to
156 window = None
157 if self.attach_to is not None:
158 window = self.attach_to.get_parent_window()
159 if not window:
160 window = self.attach_to.get_root_window()
161 if not window:
162 from kivy.core.window import Window
163 window = Window
164 return window
165
166 def open(self, *largs):
167 '''Show the view window from the :attr:`attach_to` widget. If set, it
168 will attach to the nearest window. If the widget is not attached to any
169 window, the view will attach to the global
170 :class:`~kivy.core.window.Window`.
171 '''
172 if self._window is not None:
173 Logger.warning('ModalView: you can only open once.')
174 return self
175 # search window
176 self._window = self._search_window()
177 if not self._window:
178 Logger.warning('ModalView: cannot open view, no window found.')
179 return self
180 self._window.add_widget(self)
181 self._window.bind(
182 on_resize=self._align_center,
183 on_keyboard=self._handle_keyboard)
184 self.center = self._window.center
185 self.fbind('center', self._align_center)
186 a = Animation(_anim_alpha=1., d=self._anim_duration)
187 a.bind(on_complete=lambda *x: self.dispatch('on_open'))
188 a.start(self)
189 return self
190
191 def dismiss(self, *largs, **kwargs):
192 '''Close the view if it is open. If you really want to close the
193 view, whatever the on_dismiss event returns, you can use the *force*
194 argument:
195 ::
196
197 view = ModalView(...)
198 view.dismiss(force=True)
199
200 When the view is dismissed, it will be faded out before being
201 removed from the parent. If you don't want animation, use::
202
203 view.dismiss(animation=False)
204
205 '''
206 if self._window is None:
207 return self
208 if self.dispatch('on_dismiss') is True:
209 if kwargs.get('force', False) is not True:
210 return self
211 if kwargs.get('animation', True):
212 Animation(_anim_alpha=0., d=self._anim_duration).start(self)
213 else:
214 self._anim_alpha = 0
215 self._real_remove_widget()
216 return self
217
218 def _align_center(self, *l):
219 if self._window:
220 self.center = self._window.center
221
222 def on_touch_down(self, touch):
223 if not self.collide_point(*touch.pos):
224 if self.auto_dismiss:
225 self.dismiss()
226 return True
227 super(ModalView, self).on_touch_down(touch)
228 return True
229
230 def on_touch_move(self, touch):
231 super(ModalView, self).on_touch_move(touch)
232 return True
233
234 def on_touch_up(self, touch):
235 super(ModalView, self).on_touch_up(touch)
236 return True
237
238 def on__anim_alpha(self, instance, value):
239 if value == 0 and self._window is not None:
240 self._real_remove_widget()
241
242 def _real_remove_widget(self):
243 if self._window is None:
244 return
245 self._window.remove_widget(self)
246 self._window.unbind(
247 on_resize=self._align_center,
248 on_keyboard=self._handle_keyboard)
249 self._window = None
250
251 def on_open(self):
252 pass
253
254 def on_dismiss(self):
255 pass
256
257 def _handle_keyboard(self, window, key, *largs):
258 if key == 27 and self.auto_dismiss:
259 self.dismiss()
260 return True
261
262
263 if __name__ == '__main__':
264 from kivy.base import runTouchApp
265 from kivy.uix.button import Button
266 from kivy.uix.label import Label
267 from kivy.uix.gridlayout import GridLayout
268 from kivy.core.window import Window
269
270 # add view
271 content = GridLayout(cols=1)
272 content.add_widget(Label(text='This is a hello world'))
273 view = ModalView(size_hint=(None, None), size=(256, 256),
274 auto_dismiss=True)
275 view.add_widget(content)
276
277 def open_view(btn):
278 view.open()
279
280 layout = GridLayout(cols=3)
281 for x in range(9):
282 btn = Button(text='click me %s' % x)
283 btn.bind(on_release=view.open)
284 layout.add_widget(btn)
285 Window.add_widget(layout)
286
287 view.open()
288
289 runTouchApp()
290
[end of kivy/uix/modalview.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kivy/uix/modalview.py b/kivy/uix/modalview.py
--- a/kivy/uix/modalview.py
+++ b/kivy/uix/modalview.py
@@ -143,7 +143,7 @@
_anim_duration = NumericProperty(.1)
- _window = ObjectProperty(None, allownone=True)
+ _window = ObjectProperty(None, allownone=True, rebind=True)
__events__ = ('on_open', 'on_dismiss')
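For context on the one-line change: `rebind=True` on a Kivy `ObjectProperty` lets kv rules that read *through* the property (the ModalView style presumably draws its background from `self._window`'s geometry) re-attach to the target object's attributes once `_window` is assigned in `open()`, so later window resizes propagate to the background again. A minimal, hypothetical sketch of the mechanism (class names are illustrative, not from the patch):

```python
from kivy.event import EventDispatcher
from kivy.properties import ObjectProperty, NumericProperty


class FakeWindow(EventDispatcher):
    width = NumericProperty(100)


class FakeView(EventDispatcher):
    # Without rebind=True, a kv rule such as "width: self._window.width",
    # evaluated while _window is still None, never binds to the real
    # window's width property, so later changes are missed.
    _window = ObjectProperty(None, allownone=True, rebind=True)
```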
|
{"golden_diff": "diff --git a/kivy/uix/modalview.py b/kivy/uix/modalview.py\n--- a/kivy/uix/modalview.py\n+++ b/kivy/uix/modalview.py\n@@ -143,7 +143,7 @@\n \n _anim_duration = NumericProperty(.1)\n \n- _window = ObjectProperty(None, allownone=True)\n+ _window = ObjectProperty(None, allownone=True, rebind=True)\n \n __events__ = ('on_open', 'on_dismiss')\n", "issue": "ModalView background size is not updated\nSince https://github.com/kivy/kivy/pull/4136 the ModalView background is not resized when the window size changes, run `kivy/uix/modalview.py`, then resize the window.\n\n\n\n", "before_files": [{"content": "'''\nModalView\n=========\n\n.. versionadded:: 1.4.0\n\nThe :class:`ModalView` widget is used to create modal views. By default, the\nview will cover the whole \"parent\" window.\n\nRemember that the default size of a Widget is size_hint=(1, 1). If you don't\nwant your view to be fullscreen, either use size hints with values lower than\n1 (for instance size_hint=(.8, .8)) or deactivate the size_hint and use fixed\nsize attributes.\n\nExamples\n--------\n\nExample of a simple 400x400 Hello world view::\n\n view = ModalView(size_hint=(None, None), size=(400, 400))\n view.add_widget(Label(text='Hello world'))\n\nBy default, any click outside the view will dismiss it. If you don't\nwant that, you can set :attr:`ModalView.auto_dismiss` to False::\n\n view = ModalView(auto_dismiss=False)\n view.add_widget(Label(text='Hello world'))\n view.open()\n\nTo manually dismiss/close the view, use the :meth:`ModalView.dismiss` method of\nthe ModalView instance::\n\n view.dismiss()\n\nBoth :meth:`ModalView.open` and :meth:`ModalView.dismiss` are bindable. That\nmeans you can directly bind the function to an action, e.g. to a button's\non_press ::\n\n # create content and add it to the view\n content = Button(text='Close me!')\n view = ModalView(auto_dismiss=False)\n view.add_widget(content)\n\n # bind the on_press event of the button to the dismiss function\n content.bind(on_press=view.dismiss)\n\n # open the view\n view.open()\n\n\nModalView Events\n----------------\n\nThere are two events available: `on_open` which is raised when the view is\nopening, and `on_dismiss` which is raised when the view is closed.\nFor `on_dismiss`, you can prevent the view from closing by explictly returning\nTrue from your callback. ::\n\n def my_callback(instance):\n print('ModalView', instance, 'is being dismissed, but is prevented!')\n return True\n view = ModalView()\n view.add_widget(Label(text='Hello world'))\n view.bind(on_dismiss=my_callback)\n view.open()\n\n\n.. versionchanged:: 1.5.0\n The ModalView can be closed by hitting the escape key on the\n keyboard if the :attr:`ModalView.auto_dismiss` property is True (the\n default).\n\n'''\n\n__all__ = ('ModalView', )\n\nfrom kivy.logger import Logger\nfrom kivy.animation import Animation\nfrom kivy.uix.anchorlayout import AnchorLayout\nfrom kivy.properties import StringProperty, BooleanProperty, ObjectProperty, \\\n NumericProperty, ListProperty\n\n\nclass ModalView(AnchorLayout):\n '''ModalView class. See module documentation for more information.\n\n :Events:\n `on_open`:\n Fired when the ModalView is opened.\n `on_dismiss`:\n Fired when the ModalView is closed. 
If the callback returns True,\n the dismiss will be canceled.\n '''\n\n auto_dismiss = BooleanProperty(True)\n '''This property determines if the view is automatically\n dismissed when the user clicks outside it.\n\n :attr:`auto_dismiss` is a :class:`~kivy.properties.BooleanProperty` and\n defaults to True.\n '''\n\n attach_to = ObjectProperty(None)\n '''If a widget is set on attach_to, the view will attach to the nearest\n parent window of the widget. If none is found, it will attach to the\n main/global Window.\n\n :attr:`attach_to` is an :class:`~kivy.properties.ObjectProperty` and\n defaults to None.\n '''\n\n background_color = ListProperty([0, 0, 0, .7])\n '''Background color in the format (r, g, b, a).\n\n :attr:`background_color` is a :class:`~kivy.properties.ListProperty` and\n defaults to [0, 0, 0, .7].\n '''\n\n background = StringProperty(\n 'atlas://data/images/defaulttheme/modalview-background')\n '''Background image of the view used for the view background.\n\n :attr:`background` is a :class:`~kivy.properties.StringProperty` and\n defaults to 'atlas://data/images/defaulttheme/modalview-background'.\n '''\n\n border = ListProperty([16, 16, 16, 16])\n '''Border used for :class:`~kivy.graphics.vertex_instructions.BorderImage`\n graphics instruction. Used for the :attr:`background_normal` and the\n :attr:`background_down` properties. Can be used when using custom\n backgrounds.\n\n It must be a list of four values: (top, right, bottom, left). Read the\n BorderImage instructions for more information about how to use it.\n\n :attr:`border` is a :class:`~kivy.properties.ListProperty` and defaults to\n (16, 16, 16, 16).\n '''\n\n # Internals properties used for graphical representation.\n\n _anim_alpha = NumericProperty(0)\n\n _anim_duration = NumericProperty(.1)\n\n _window = ObjectProperty(None, allownone=True)\n\n __events__ = ('on_open', 'on_dismiss')\n\n def __init__(self, **kwargs):\n self._parent = None\n super(ModalView, self).__init__(**kwargs)\n\n def _search_window(self):\n # get window to attach to\n window = None\n if self.attach_to is not None:\n window = self.attach_to.get_parent_window()\n if not window:\n window = self.attach_to.get_root_window()\n if not window:\n from kivy.core.window import Window\n window = Window\n return window\n\n def open(self, *largs):\n '''Show the view window from the :attr:`attach_to` widget. If set, it\n will attach to the nearest window. If the widget is not attached to any\n window, the view will attach to the global\n :class:`~kivy.core.window.Window`.\n '''\n if self._window is not None:\n Logger.warning('ModalView: you can only open once.')\n return self\n # search window\n self._window = self._search_window()\n if not self._window:\n Logger.warning('ModalView: cannot open view, no window found.')\n return self\n self._window.add_widget(self)\n self._window.bind(\n on_resize=self._align_center,\n on_keyboard=self._handle_keyboard)\n self.center = self._window.center\n self.fbind('center', self._align_center)\n a = Animation(_anim_alpha=1., d=self._anim_duration)\n a.bind(on_complete=lambda *x: self.dispatch('on_open'))\n a.start(self)\n return self\n\n def dismiss(self, *largs, **kwargs):\n '''Close the view if it is open. If you really want to close the\n view, whatever the on_dismiss event returns, you can use the *force*\n argument:\n ::\n\n view = ModalView(...)\n view.dismiss(force=True)\n\n When the view is dismissed, it will be faded out before being\n removed from the parent. 
If you don't want animation, use::\n\n view.dismiss(animation=False)\n\n '''\n if self._window is None:\n return self\n if self.dispatch('on_dismiss') is True:\n if kwargs.get('force', False) is not True:\n return self\n if kwargs.get('animation', True):\n Animation(_anim_alpha=0., d=self._anim_duration).start(self)\n else:\n self._anim_alpha = 0\n self._real_remove_widget()\n return self\n\n def _align_center(self, *l):\n if self._window:\n self.center = self._window.center\n\n def on_touch_down(self, touch):\n if not self.collide_point(*touch.pos):\n if self.auto_dismiss:\n self.dismiss()\n return True\n super(ModalView, self).on_touch_down(touch)\n return True\n\n def on_touch_move(self, touch):\n super(ModalView, self).on_touch_move(touch)\n return True\n\n def on_touch_up(self, touch):\n super(ModalView, self).on_touch_up(touch)\n return True\n\n def on__anim_alpha(self, instance, value):\n if value == 0 and self._window is not None:\n self._real_remove_widget()\n\n def _real_remove_widget(self):\n if self._window is None:\n return\n self._window.remove_widget(self)\n self._window.unbind(\n on_resize=self._align_center,\n on_keyboard=self._handle_keyboard)\n self._window = None\n\n def on_open(self):\n pass\n\n def on_dismiss(self):\n pass\n\n def _handle_keyboard(self, window, key, *largs):\n if key == 27 and self.auto_dismiss:\n self.dismiss()\n return True\n\n\nif __name__ == '__main__':\n from kivy.base import runTouchApp\n from kivy.uix.button import Button\n from kivy.uix.label import Label\n from kivy.uix.gridlayout import GridLayout\n from kivy.core.window import Window\n\n # add view\n content = GridLayout(cols=1)\n content.add_widget(Label(text='This is a hello world'))\n view = ModalView(size_hint=(None, None), size=(256, 256),\n auto_dismiss=True)\n view.add_widget(content)\n\n def open_view(btn):\n view.open()\n\n layout = GridLayout(cols=3)\n for x in range(9):\n btn = Button(text='click me %s' % x)\n btn.bind(on_release=view.open)\n layout.add_widget(btn)\n Window.add_widget(layout)\n\n view.open()\n\n runTouchApp()\n", "path": "kivy/uix/modalview.py"}]}
| 3,605 | 113 |
gh_patches_debug_29474
|
rasdani/github-patches
|
git_diff
|
borgbackup__borg-1193
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
borgbackup build failure when using OpenSSL 1.1.0
https://groups.google.com/d/msg/linux.debian.devel/53fq9S-Qpp4/V_0pPtdzBQAJ
</issue>
<code>
[start of borg/testsuite/crypto.py]
1 from binascii import hexlify
2
3 from ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes
4 from . import BaseTestCase
5
6
7 class CryptoTestCase(BaseTestCase):
8
9 def test_bytes_to_int(self):
10 self.assert_equal(bytes_to_int(b'\0\0\0\1'), 1)
11
12 def test_bytes_to_long(self):
13 self.assert_equal(bytes_to_long(b'\0\0\0\0\0\0\0\1'), 1)
14 self.assert_equal(long_to_bytes(1), b'\0\0\0\0\0\0\0\1')
15
16 def test_aes(self):
17 key = b'X' * 32
18 data = b'foo' * 10
19 # encrypt
20 aes = AES(is_encrypt=True, key=key)
21 self.assert_equal(bytes_to_long(aes.iv, 8), 0)
22 cdata = aes.encrypt(data)
23 self.assert_equal(hexlify(cdata), b'c6efb702de12498f34a2c2bbc8149e759996d08bf6dc5c610aefc0c3a466')
24 self.assert_equal(bytes_to_long(aes.iv, 8), 2)
25 # decrypt
26 aes = AES(is_encrypt=False, key=key)
27 self.assert_equal(bytes_to_long(aes.iv, 8), 0)
28 pdata = aes.decrypt(cdata)
29 self.assert_equal(data, pdata)
30 self.assert_equal(bytes_to_long(aes.iv, 8), 2)
31
[end of borg/testsuite/crypto.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/borg/testsuite/crypto.py b/borg/testsuite/crypto.py
--- a/borg/testsuite/crypto.py
+++ b/borg/testsuite/crypto.py
@@ -1,6 +1,7 @@
from binascii import hexlify
from ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes
+from ..crypto import increment_iv, bytes16_to_int, int_to_bytes16
from . import BaseTestCase
@@ -13,6 +14,27 @@
self.assert_equal(bytes_to_long(b'\0\0\0\0\0\0\0\1'), 1)
self.assert_equal(long_to_bytes(1), b'\0\0\0\0\0\0\0\1')
+ def test_bytes16_to_int(self):
+ self.assert_equal(bytes16_to_int(b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1'), 1)
+ self.assert_equal(int_to_bytes16(1), b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1')
+ self.assert_equal(bytes16_to_int(b'\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0'), 2 ** 64)
+ self.assert_equal(int_to_bytes16(2 ** 64), b'\0\0\0\0\0\0\0\1\0\0\0\0\0\0\0\0')
+
+ def test_increment_iv(self):
+ iv0 = b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0'
+ iv1 = b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\1'
+ iv2 = b'\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\2'
+ self.assert_equal(increment_iv(iv0, 0), iv0)
+ self.assert_equal(increment_iv(iv0, 1), iv1)
+ self.assert_equal(increment_iv(iv0, 2), iv2)
+ iva = b'\0\0\0\0\0\0\0\0\xff\xff\xff\xff\xff\xff\xff\xff'
+ ivb = b'\0\0\0\0\0\0\0\1\x00\x00\x00\x00\x00\x00\x00\x00'
+ ivc = b'\0\0\0\0\0\0\0\1\x00\x00\x00\x00\x00\x00\x00\x01'
+ self.assert_equal(increment_iv(iva, 0), iva)
+ self.assert_equal(increment_iv(iva, 1), ivb)
+ self.assert_equal(increment_iv(iva, 2), ivc)
+ self.assert_equal(increment_iv(iv0, 2**64), ivb)
+
def test_aes(self):
key = b'X' * 32
data = b'foo' * 10
|
{"golden_diff": "diff --git a/borg/testsuite/crypto.py b/borg/testsuite/crypto.py\n--- a/borg/testsuite/crypto.py\n+++ b/borg/testsuite/crypto.py\n@@ -1,6 +1,7 @@\n from binascii import hexlify\n \n from ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes\n+from ..crypto import increment_iv, bytes16_to_int, int_to_bytes16\n from . import BaseTestCase\n \n \n@@ -13,6 +14,27 @@\n self.assert_equal(bytes_to_long(b'\\0\\0\\0\\0\\0\\0\\0\\1'), 1)\n self.assert_equal(long_to_bytes(1), b'\\0\\0\\0\\0\\0\\0\\0\\1')\n \n+ def test_bytes16_to_int(self):\n+ self.assert_equal(bytes16_to_int(b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\1'), 1)\n+ self.assert_equal(int_to_bytes16(1), b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\1')\n+ self.assert_equal(bytes16_to_int(b'\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\0'), 2 ** 64)\n+ self.assert_equal(int_to_bytes16(2 ** 64), b'\\0\\0\\0\\0\\0\\0\\0\\1\\0\\0\\0\\0\\0\\0\\0\\0')\n+\n+ def test_increment_iv(self):\n+ iv0 = b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0'\n+ iv1 = b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\1'\n+ iv2 = b'\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\2'\n+ self.assert_equal(increment_iv(iv0, 0), iv0)\n+ self.assert_equal(increment_iv(iv0, 1), iv1)\n+ self.assert_equal(increment_iv(iv0, 2), iv2)\n+ iva = b'\\0\\0\\0\\0\\0\\0\\0\\0\\xff\\xff\\xff\\xff\\xff\\xff\\xff\\xff'\n+ ivb = b'\\0\\0\\0\\0\\0\\0\\0\\1\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\n+ ivc = b'\\0\\0\\0\\0\\0\\0\\0\\1\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x01'\n+ self.assert_equal(increment_iv(iva, 0), iva)\n+ self.assert_equal(increment_iv(iva, 1), ivb)\n+ self.assert_equal(increment_iv(iva, 2), ivc)\n+ self.assert_equal(increment_iv(iv0, 2**64), ivb)\n+\n def test_aes(self):\n key = b'X' * 32\n data = b'foo' * 10\n", "issue": "borgbackup build failure when using OpenSSL 1.1.0\nhttps://groups.google.com/d/msg/linux.debian.devel/53fq9S-Qpp4/V_0pPtdzBQAJ\n\n", "before_files": [{"content": "from binascii import hexlify\n\nfrom ..crypto import AES, bytes_to_long, bytes_to_int, long_to_bytes\nfrom . import BaseTestCase\n\n\nclass CryptoTestCase(BaseTestCase):\n\n def test_bytes_to_int(self):\n self.assert_equal(bytes_to_int(b'\\0\\0\\0\\1'), 1)\n\n def test_bytes_to_long(self):\n self.assert_equal(bytes_to_long(b'\\0\\0\\0\\0\\0\\0\\0\\1'), 1)\n self.assert_equal(long_to_bytes(1), b'\\0\\0\\0\\0\\0\\0\\0\\1')\n\n def test_aes(self):\n key = b'X' * 32\n data = b'foo' * 10\n # encrypt\n aes = AES(is_encrypt=True, key=key)\n self.assert_equal(bytes_to_long(aes.iv, 8), 0)\n cdata = aes.encrypt(data)\n self.assert_equal(hexlify(cdata), b'c6efb702de12498f34a2c2bbc8149e759996d08bf6dc5c610aefc0c3a466')\n self.assert_equal(bytes_to_long(aes.iv, 8), 2)\n # decrypt\n aes = AES(is_encrypt=False, key=key)\n self.assert_equal(bytes_to_long(aes.iv, 8), 0)\n pdata = aes.decrypt(cdata)\n self.assert_equal(data, pdata)\n self.assert_equal(bytes_to_long(aes.iv, 8), 2)\n", "path": "borg/testsuite/crypto.py"}]}
| 987 | 779 |
gh_patches_debug_7561
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-6693
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
T - TypeError 'count.toLocaleString' in SSR
https://sentry.prod.mozaws.net/operations/mdn-prod/issues/7090931/
```
TypeError: Cannot read property 'toLocaleString' of undefined
File "/app/kuma/javascript/dist/ssr.js", line 22, in kt
'{snip} ocuments found for "%(query)s" in %(locale)s.',a),{count:a.toLocaleString(),locale:n,query:c})," ",t))}function jt(e){var t=e.locale;return( {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in a
'{snip} .state);null!=d&&(f.state=r({},f.state,d))}}else if(D={},f=a(o.props,i,s),null==(f=G(a,o.props,f,i))||null==f.render)return void he(e=f,a);i {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in ye
'{snip} lement(e);){var i=e,c=i.type;if("function"!=typeof c)break;a(i,c)}return{child:e,context:t}}var ve=function(){function e(t,n){if(!(this inst {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in e.render
'{snip} -- --\x3e"+I(n):(this.previousWasTextNode=!0,I(n));if(e=(t=ye(e,t,this.threadID)).child,t=t.context,null===e||!1===e)return"";if(!o.isValidE {snip}
File "/app/kuma/javascript/dist/ssr.js", line 22, in e.read
'{snip} +=c}else{var f=i.children[i.childIndex++],m="";try{m+=this.render(f,i.context,i.domNamespace)}catch(e){throw e}r.length<=this.suspenseDepth& {snip}
...
(5 additional frame(s) were not displayed)
```
</issue>
<code>
[start of kuma/search/views.py]
1 from django.shortcuts import render
2 from django.urls import reverse_lazy
3 from django.views.decorators.cache import never_cache
4 from django.views.decorators.http import require_GET
5 from django.views.generic import RedirectView
6 from ratelimit.decorators import ratelimit
7
8 from kuma.api.v1.views import search as search_api
9 from kuma.core.decorators import shared_cache_control
10 from kuma.core.utils import is_wiki
11
12 from .search import SearchView
13
14 # Since the search endpoint accepts user input (via query parameters) and its
15 # response is compressed, use rate limiting to mitigate the BREACH attack
16 # (see http://breachattack.com/). It still needs to allow a user to click
17 # the filter switches (bug 1426968).
18 # Alternate: forbid gzip by setting Content-Encoding: identity
19 @never_cache
20 @require_GET
21 @ratelimit(key="user_or_ip", rate="25/m", block=True)
22 def search(request, *args, **kwargs):
23 """
24 The search view.
25 """
26 if is_wiki(request):
27 return wiki_search(request, *args, **kwargs)
28
29 results = search_api(request, *args, **kwargs).data
30
31 # Determine if there were validation errors
32 error = results.get("error") or results.get("q")
33 # If q is returned in the data, there was a validation error for that field,
34 # so return 400 status.
35 status = 200 if results.get("q") is None else 400
36
37 context = {"results": {"results": None if error else results, "error": error}}
38
39 return render(request, "search/react.html", context, status=status)
40
41
42 wiki_search = SearchView.as_view()
43
44
45 class SearchRedirectView(RedirectView):
46 permanent = True
47
48 def get_redirect_url(self, *args, **kwargs):
49 query_string = self.request.META.get("QUERY_STRING")
50 url = reverse_lazy(
51 "api.v1.search", kwargs={"locale": self.request.LANGUAGE_CODE}
52 )
53 if query_string:
54 url += "?" + query_string
55 return url
56
57
58 @shared_cache_control(s_maxage=60 * 60 * 24 * 7)
59 def plugin(request):
60 """Render an OpenSearch Plugin."""
61 return render(
62 request,
63 "search/plugin.html",
64 {"locale": request.LANGUAGE_CODE},
65 content_type="application/opensearchdescription+xml",
66 )
67
[end of kuma/search/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kuma/search/views.py b/kuma/search/views.py
--- a/kuma/search/views.py
+++ b/kuma/search/views.py
@@ -33,6 +33,10 @@
# If q is returned in the data, there was a validation error for that field,
# so return 400 status.
status = 200 if results.get("q") is None else 400
+ # If there was an error with the pagination you'll get...
+ if results.get("detail"):
+ error = str(results["detail"])
+ status = 400
context = {"results": {"results": None if error else results, "error": error}}
|
{"golden_diff": "diff --git a/kuma/search/views.py b/kuma/search/views.py\n--- a/kuma/search/views.py\n+++ b/kuma/search/views.py\n@@ -33,6 +33,10 @@\n # If q is returned in the data, there was a validation error for that field,\n # so return 400 status.\n status = 200 if results.get(\"q\") is None else 400\n+ # If there was an error with the pagination you'll get...\n+ if results.get(\"detail\"):\n+ error = str(results[\"detail\"])\n+ status = 400\n \n context = {\"results\": {\"results\": None if error else results, \"error\": error}}\n", "issue": "T - TypeError 'count.toLocaleString' in SSR\nhttps://sentry.prod.mozaws.net/operations/mdn-prod/issues/7090931/\n\n```\nTypeError: Cannot read property 'toLocaleString' of undefined\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in kt\n '{snip} ocuments found for \"%(query)s\" in %(locale)s.',a),{count:a.toLocaleString(),locale:n,query:c}),\" \",t))}function jt(e){var t=e.locale;return( {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in a\n '{snip} .state);null!=d&&(f.state=r({},f.state,d))}}else if(D={},f=a(o.props,i,s),null==(f=G(a,o.props,f,i))||null==f.render)return void he(e=f,a);i {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in ye\n '{snip} lement(e);){var i=e,c=i.type;if(\"function\"!=typeof c)break;a(i,c)}return{child:e,context:t}}var ve=function(){function e(t,n){if(!(this inst {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in e.render\n '{snip} -- --\\x3e\"+I(n):(this.previousWasTextNode=!0,I(n));if(e=(t=ye(e,t,this.threadID)).child,t=t.context,null===e||!1===e)return\"\";if(!o.isValidE {snip}\n File \"/app/kuma/javascript/dist/ssr.js\", line 22, in e.read\n '{snip} +=c}else{var f=i.children[i.childIndex++],m=\"\";try{m+=this.render(f,i.context,i.domNamespace)}catch(e){throw e}r.length<=this.suspenseDepth& {snip}\n...\n(5 additional frame(s) were not displayed)\n```\n", "before_files": [{"content": "from django.shortcuts import render\nfrom django.urls import reverse_lazy\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.http import require_GET\nfrom django.views.generic import RedirectView\nfrom ratelimit.decorators import ratelimit\n\nfrom kuma.api.v1.views import search as search_api\nfrom kuma.core.decorators import shared_cache_control\nfrom kuma.core.utils import is_wiki\n\nfrom .search import SearchView\n\n# Since the search endpoint accepts user input (via query parameters) and its\n# response is compressed, use rate limiting to mitigate the BREACH attack\n# (see http://breachattack.com/). 
It still needs to allow a user to click\n# the filter switches (bug 1426968).\n# Alternate: forbid gzip by setting Content-Encoding: identity\n@never_cache\n@require_GET\n@ratelimit(key=\"user_or_ip\", rate=\"25/m\", block=True)\ndef search(request, *args, **kwargs):\n \"\"\"\n The search view.\n \"\"\"\n if is_wiki(request):\n return wiki_search(request, *args, **kwargs)\n\n results = search_api(request, *args, **kwargs).data\n\n # Determine if there were validation errors\n error = results.get(\"error\") or results.get(\"q\")\n # If q is returned in the data, there was a validation error for that field,\n # so return 400 status.\n status = 200 if results.get(\"q\") is None else 400\n\n context = {\"results\": {\"results\": None if error else results, \"error\": error}}\n\n return render(request, \"search/react.html\", context, status=status)\n\n\nwiki_search = SearchView.as_view()\n\n\nclass SearchRedirectView(RedirectView):\n permanent = True\n\n def get_redirect_url(self, *args, **kwargs):\n query_string = self.request.META.get(\"QUERY_STRING\")\n url = reverse_lazy(\n \"api.v1.search\", kwargs={\"locale\": self.request.LANGUAGE_CODE}\n )\n if query_string:\n url += \"?\" + query_string\n return url\n\n\n@shared_cache_control(s_maxage=60 * 60 * 24 * 7)\ndef plugin(request):\n \"\"\"Render an OpenSearch Plugin.\"\"\"\n return render(\n request,\n \"search/plugin.html\",\n {\"locale\": request.LANGUAGE_CODE},\n content_type=\"application/opensearchdescription+xml\",\n )\n", "path": "kuma/search/views.py"}]}
| 1,633 | 156 |
gh_patches_debug_40463
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-1371
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django Celery trace.id integration
**Is your feature request related to a problem? Please describe.**
As of now, it is impossible to keep the same trace id between a Django view and a Celery task launched from the same view.
**Describe the solution you'd like**
Provide a way to easily pass a trace parent string to the Celery task. Preferably via it's headers field (introduced in Celery 3.1).
### What would it looks likes
**User side code (Django view)**
```python
def get(self, request):
transaction = execution_context.get_transaction()
trace_parent = transaction.trace_parent
trace_parent_string = trace_parent.to_string()
my_celery_task.apply_async(headers={"elasticapm": {"trace_parent_string": trace_parent_string} })
```
**Library side code (`elasticapm.contrib.celery.__init__.py`), rewrite of `begin_transaction()`, naïve implementation**
```python
def begin_transaction(*args, **kwargs):
trace_parent = None
try:
trace_parent_string = kwargs["task"].request.headers["elasticapm"]["trace_parent_string"]
trace_parent = TraceParent.from_string(trace_parent_string)
except:
pass
client.begin_transaction("celery", trace_parent=trace_parent)
```
- **Why using Celery headers field ?** It seems the most unobstrusive way of doing it.
- **Why using a nested field (["elasticapm"]["trace_parent_string"]) ?** Seems "future proof", usefull future fields for elasticapm could be added under the "elasticapm" key. Users of the API shouldn't see their code break as they are aware that using this library, the headers Celery field has a reserved key "elasticapm" used for this integration.
**Additional context**
**For anyone wanting to try it, BEWARE !!** There is a Celery [bug](https://github.com/celery/celery/issues/4875) concerning it's headers field.
You might have to do this:
```python
my_celery_task.apply_async(headers={"headers": {"elasticapm": {"trace_parent_string": trace_parent_string} } })
```
Edits: fixed code error/typos
</issue>
<code>
[start of elasticapm/contrib/celery/__init__.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 from celery import signals, states
33
34 import elasticapm
35 from elasticapm.conf import constants
36 from elasticapm.utils import get_name_from_func
37
38
39 class CeleryFilter(object):
40 def filter(self, record):
41 if record.funcName in ("_log_error",):
42 return 0
43 else:
44 return 1
45
46
47 def register_exception_tracking(client):
48 dispatch_uid = "elasticapm-exc-tracking"
49
50 def process_failure_signal(sender, task_id, exception, args, kwargs, traceback, einfo, **kw):
51 client.capture_exception(
52 extra={"task_id": task_id, "task": sender, "args": args, "kwargs": kwargs}, handled=False
53 )
54
55 signals.task_failure.disconnect(process_failure_signal, dispatch_uid=dispatch_uid)
56 signals.task_failure.connect(process_failure_signal, weak=False, dispatch_uid=dispatch_uid)
57 _register_worker_signals(client)
58
59
60 def register_instrumentation(client):
61 def begin_transaction(*args, **kwargs):
62 client.begin_transaction("celery")
63
64 def end_transaction(task_id, task, *args, **kwargs):
65 name = get_name_from_func(task)
66 state = kwargs.get("state", "None")
67 if state == states.SUCCESS:
68 outcome = constants.OUTCOME.SUCCESS
69 elif state in states.EXCEPTION_STATES:
70 outcome = constants.OUTCOME.FAILURE
71 else:
72 outcome = constants.OUTCOME.UNKNOWN
73 elasticapm.set_transaction_outcome(outcome, override=False)
74 client.end_transaction(name, state)
75
76 dispatch_uid = "elasticapm-tracing-%s"
77
78 # unregister any existing clients
79 signals.task_prerun.disconnect(begin_transaction, dispatch_uid=dispatch_uid % "prerun")
80 signals.task_postrun.disconnect(end_transaction, dispatch_uid=dispatch_uid % "postrun")
81
82 # register for this client
83 signals.task_prerun.connect(begin_transaction, dispatch_uid=dispatch_uid % "prerun", weak=False)
84 signals.task_postrun.connect(end_transaction, weak=False, dispatch_uid=dispatch_uid % "postrun")
85 _register_worker_signals(client)
86
87
88 def _register_worker_signals(client):
89 def worker_shutdown(*args, **kwargs):
90 client.close()
91
92 def connect_worker_process_init(*args, **kwargs):
93 signals.worker_process_shutdown.connect(worker_shutdown, dispatch_uid="elasticapm-shutdown-worker", weak=False)
94
95 signals.worker_init.connect(
96 connect_worker_process_init, dispatch_uid="elasticapm-connect-start-threads", weak=False
97 )
98
[end of elasticapm/contrib/celery/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticapm/contrib/celery/__init__.py b/elasticapm/contrib/celery/__init__.py
--- a/elasticapm/contrib/celery/__init__.py
+++ b/elasticapm/contrib/celery/__init__.py
@@ -27,13 +27,15 @@
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-
+from contextlib import suppress
from celery import signals, states
import elasticapm
from elasticapm.conf import constants
+from elasticapm.traces import execution_context
from elasticapm.utils import get_name_from_func
+from elasticapm.utils.disttracing import TraceParent
class CeleryFilter(object):
@@ -57,9 +59,41 @@
_register_worker_signals(client)
+def set_celery_headers(headers=None, **kwargs):
+ """
+ Add elasticapm specific information to celery headers
+ """
+ headers = {} if headers is None else headers
+
+ transaction = execution_context.get_transaction()
+ if transaction is not None:
+ trace_parent = transaction.trace_parent
+ trace_parent_string = trace_parent.to_string()
+
+ headers.update({"elasticapm": {"trace_parent_string": trace_parent_string}})
+
+
+def get_trace_parent(celery_task):
+ """
+ Return a trace parent contained in the request headers of a Celery Task object or None
+ """
+ trace_parent = None
+ with suppress(AttributeError, KeyError, TypeError):
+ if celery_task.request.headers is not None:
+ trace_parent_string = celery_task.request.headers["elasticapm"]["trace_parent_string"]
+ trace_parent = TraceParent.from_string(trace_parent_string)
+ else:
+ trace_parent_string = celery_task.request.elasticapm["trace_parent_string"]
+ trace_parent = TraceParent.from_string(trace_parent_string)
+ return trace_parent
+
+
def register_instrumentation(client):
def begin_transaction(*args, **kwargs):
- client.begin_transaction("celery")
+ task = kwargs["task"]
+
+ trace_parent = get_trace_parent(task)
+ client.begin_transaction("celery", trace_parent=trace_parent)
def end_transaction(task_id, task, *args, **kwargs):
name = get_name_from_func(task)
@@ -76,10 +110,12 @@
dispatch_uid = "elasticapm-tracing-%s"
# unregister any existing clients
+ signals.before_task_publish.disconnect(set_celery_headers, dispatch_uid=dispatch_uid % "before-publish")
signals.task_prerun.disconnect(begin_transaction, dispatch_uid=dispatch_uid % "prerun")
signals.task_postrun.disconnect(end_transaction, dispatch_uid=dispatch_uid % "postrun")
# register for this client
+ signals.before_task_publish.connect(set_celery_headers, dispatch_uid=dispatch_uid % "before-publish")
signals.task_prerun.connect(begin_transaction, dispatch_uid=dispatch_uid % "prerun", weak=False)
signals.task_postrun.connect(end_transaction, weak=False, dispatch_uid=dispatch_uid % "postrun")
_register_worker_signals(client)
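Relative to the original feature request, note that hooking `before_task_publish` means callers no longer pass the trace parent through `apply_async(headers=...)` by hand. A short sketch of how the worker side recovers the propagated value; the task/request objects are stand-ins, and the header layout is the one written by `set_celery_headers` above:

```python
from types import SimpleNamespace

from elasticapm.utils.disttracing import TraceParent

# Stand-in for a Celery task whose publisher ran set_celery_headers()
trace_parent_string = "00-" + "ab" * 16 + "-" + "cd" * 8 + "-01"
task = SimpleNamespace(
    request=SimpleNamespace(
        headers={"elasticapm": {"trace_parent_string": trace_parent_string}}
    )
)

trace_parent = TraceParent.from_string(
    task.request.headers["elasticapm"]["trace_parent_string"]
)
# begin_transaction() above hands this to client.begin_transaction(), so the
# Celery transaction shares the publishing Django view's trace id.
print(trace_parent.trace_id)
```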
|
{"golden_diff": "diff --git a/elasticapm/contrib/celery/__init__.py b/elasticapm/contrib/celery/__init__.py\n--- a/elasticapm/contrib/celery/__init__.py\n+++ b/elasticapm/contrib/celery/__init__.py\n@@ -27,13 +27,15 @@\n # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n-\n+from contextlib import suppress\n \n from celery import signals, states\n \n import elasticapm\n from elasticapm.conf import constants\n+from elasticapm.traces import execution_context\n from elasticapm.utils import get_name_from_func\n+from elasticapm.utils.disttracing import TraceParent\n \n \n class CeleryFilter(object):\n@@ -57,9 +59,41 @@\n _register_worker_signals(client)\n \n \n+def set_celery_headers(headers=None, **kwargs):\n+ \"\"\"\n+ Add elasticapm specific information to celery headers\n+ \"\"\"\n+ headers = {} if headers is None else headers\n+\n+ transaction = execution_context.get_transaction()\n+ if transaction is not None:\n+ trace_parent = transaction.trace_parent\n+ trace_parent_string = trace_parent.to_string()\n+\n+ headers.update({\"elasticapm\": {\"trace_parent_string\": trace_parent_string}})\n+\n+\n+def get_trace_parent(celery_task):\n+ \"\"\"\n+ Return a trace parent contained in the request headers of a Celery Task object or None\n+ \"\"\"\n+ trace_parent = None\n+ with suppress(AttributeError, KeyError, TypeError):\n+ if celery_task.request.headers is not None:\n+ trace_parent_string = celery_task.request.headers[\"elasticapm\"][\"trace_parent_string\"]\n+ trace_parent = TraceParent.from_string(trace_parent_string)\n+ else:\n+ trace_parent_string = celery_task.request.elasticapm[\"trace_parent_string\"]\n+ trace_parent = TraceParent.from_string(trace_parent_string)\n+ return trace_parent\n+\n+\n def register_instrumentation(client):\n def begin_transaction(*args, **kwargs):\n- client.begin_transaction(\"celery\")\n+ task = kwargs[\"task\"]\n+\n+ trace_parent = get_trace_parent(task)\n+ client.begin_transaction(\"celery\", trace_parent=trace_parent)\n \n def end_transaction(task_id, task, *args, **kwargs):\n name = get_name_from_func(task)\n@@ -76,10 +110,12 @@\n dispatch_uid = \"elasticapm-tracing-%s\"\n \n # unregister any existing clients\n+ signals.before_task_publish.disconnect(set_celery_headers, dispatch_uid=dispatch_uid % \"before-publish\")\n signals.task_prerun.disconnect(begin_transaction, dispatch_uid=dispatch_uid % \"prerun\")\n signals.task_postrun.disconnect(end_transaction, dispatch_uid=dispatch_uid % \"postrun\")\n \n # register for this client\n+ signals.before_task_publish.connect(set_celery_headers, dispatch_uid=dispatch_uid % \"before-publish\")\n signals.task_prerun.connect(begin_transaction, dispatch_uid=dispatch_uid % \"prerun\", weak=False)\n signals.task_postrun.connect(end_transaction, weak=False, dispatch_uid=dispatch_uid % \"postrun\")\n _register_worker_signals(client)\n", "issue": "Django Celery trace.id integration\n**Is your feature request related to a problem? Please describe.**\r\nAs of now, it is impossible to keep the same trace id between a Django view and a Celery task launched from the same view.\r\n\r\n**Describe the solution you'd like**\r\nProvide a way to easily pass a trace parent string to the Celery task. 
Preferably via it's headers field (introduced in Celery 3.1).\r\n\r\n### What would it looks likes\r\n**User side code (Django view)**\r\n```python\r\ndef get(self, request):\r\n transaction = execution_context.get_transaction()\r\n trace_parent = transaction.trace_parent\r\n trace_parent_string = trace_parent.to_string()\r\n my_celery_task.apply_async(headers={\"elasticapm\": {\"trace_parent_string\": trace_parent_string} })\r\n```\r\n\r\n**Library side code (`elasticapm.contrib.celery.__init__.py`), rewrite of `begin_transaction()`, na\u00efve implementation**\r\n```python\r\ndef begin_transaction(*args, **kwargs):\r\n trace_parent = None\r\n try:\r\n trace_parent_string = kwargs[\"task\"].request.headers[\"elasticapm\"][\"trace_parent_string\"]\r\n trace_parent = TraceParent.from_string(trace_parent_string)\r\n except:\r\n pass\r\n client.begin_transaction(\"celery\", trace_parent=trace_parent)\r\n```\r\n\r\n- **Why using Celery headers field ?** It seems the most unobstrusive way of doing it.\r\n- **Why using a nested field ([\"elasticapm\"][\"trace_parent_string\"]) ?** Seems \"future proof\", usefull future fields for elasticapm could be added under the \"elasticapm\" key. Users of the API shouldn't see their code break as they are aware that using this library, the headers Celery field has a reserved key \"elasticapm\" used for this integration.\r\n\r\n**Additional context**\r\n**For anyone wanting to try it, BEWARE !!** There is a Celery [bug](https://github.com/celery/celery/issues/4875) concerning it's headers field.\r\nYou might have to do this:\r\n```python\r\nmy_celery_task.apply_async(headers={\"headers\": {\"elasticapm\": {\"trace_parent_string\": trace_parent_string} } })\r\n```\r\n\r\nEdits: fixed code error/typos\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom celery import signals, states\n\nimport elasticapm\nfrom elasticapm.conf import constants\nfrom elasticapm.utils import get_name_from_func\n\n\nclass CeleryFilter(object):\n def filter(self, record):\n if record.funcName in (\"_log_error\",):\n return 0\n else:\n return 1\n\n\ndef register_exception_tracking(client):\n dispatch_uid = \"elasticapm-exc-tracking\"\n\n def process_failure_signal(sender, task_id, exception, args, kwargs, traceback, einfo, **kw):\n client.capture_exception(\n extra={\"task_id\": task_id, \"task\": sender, \"args\": args, \"kwargs\": kwargs}, handled=False\n )\n\n signals.task_failure.disconnect(process_failure_signal, dispatch_uid=dispatch_uid)\n signals.task_failure.connect(process_failure_signal, weak=False, dispatch_uid=dispatch_uid)\n _register_worker_signals(client)\n\n\ndef register_instrumentation(client):\n def begin_transaction(*args, **kwargs):\n client.begin_transaction(\"celery\")\n\n def end_transaction(task_id, task, *args, **kwargs):\n name = get_name_from_func(task)\n state = kwargs.get(\"state\", \"None\")\n if state == states.SUCCESS:\n outcome = constants.OUTCOME.SUCCESS\n elif state in states.EXCEPTION_STATES:\n outcome = constants.OUTCOME.FAILURE\n else:\n outcome = constants.OUTCOME.UNKNOWN\n elasticapm.set_transaction_outcome(outcome, override=False)\n client.end_transaction(name, state)\n\n dispatch_uid = \"elasticapm-tracing-%s\"\n\n # unregister any existing clients\n signals.task_prerun.disconnect(begin_transaction, dispatch_uid=dispatch_uid % \"prerun\")\n signals.task_postrun.disconnect(end_transaction, dispatch_uid=dispatch_uid % \"postrun\")\n\n # register for this client\n signals.task_prerun.connect(begin_transaction, dispatch_uid=dispatch_uid % \"prerun\", weak=False)\n signals.task_postrun.connect(end_transaction, weak=False, dispatch_uid=dispatch_uid % \"postrun\")\n _register_worker_signals(client)\n\n\ndef _register_worker_signals(client):\n def worker_shutdown(*args, **kwargs):\n client.close()\n\n def connect_worker_process_init(*args, **kwargs):\n signals.worker_process_shutdown.connect(worker_shutdown, dispatch_uid=\"elasticapm-shutdown-worker\", weak=False)\n\n signals.worker_init.connect(\n connect_worker_process_init, dispatch_uid=\"elasticapm-connect-start-threads\", weak=False\n )\n", "path": "elasticapm/contrib/celery/__init__.py"}]}
| 2,092 | 738 |
gh_patches_debug_46195
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-459
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support to replace original keys in post-transform
**Is your feature request related to a problem? Please describe.**
If the `output_postfix` is None, the post transform should use the original keys to save memory.
</issue>
<code>
[start of monai/transforms/post/dictionary.py]
1 # Copyright 2020 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11 """
12 A collection of dictionary-based wrappers around the "vanilla" transforms for model output tensors
13 defined in :py:class:`monai.transforms.utility.array`.
14
15 Class names are ended with 'd' to denote dictionary-based transforms.
16 """
17
18 from monai.utils.misc import ensure_tuple_rep
19 from monai.transforms.compose import MapTransform
20 from monai.transforms.post.array import SplitChannel, Activations, AsDiscrete, KeepLargestConnectedComponent
21
22
23 class SplitChanneld(MapTransform):
24 """
25 Dictionary-based wrapper of :py:class:`monai.transforms.SplitChannel`.
26 All the input specified by `keys` should be splitted into same count of data.
27
28 """
29
30 def __init__(self, keys, output_postfixes, to_onehot=False, num_classes=None):
31 """
32 Args:
33 keys (hashable items): keys of the corresponding items to be transformed.
34 See also: :py:class:`monai.transforms.compose.MapTransform`
35 output_postfixes (list, tuple): the postfixes to construct keys to store splitted data.
36 for example: if the key of input data is `pred` and split 2 classes, the output
37 data keys will be: pred_(output_postfixes[0]), pred_(output_postfixes[1])
38 to_onehot (bool or list of bool): whether to convert the data to One-Hot format, default is False.
39 num_classes (int or list of int): the class number used to convert to One-Hot format
40 if `to_onehot` is True.
41 """
42 super().__init__(keys)
43 if not isinstance(output_postfixes, (list, tuple)):
44 raise ValueError("must specify key postfixes to store splitted data.")
45 self.output_postfixes = output_postfixes
46 self.to_onehot = ensure_tuple_rep(to_onehot, len(self.keys))
47 self.num_classes = ensure_tuple_rep(num_classes, len(self.keys))
48 self.splitter = SplitChannel()
49
50 def __call__(self, data):
51 d = dict(data)
52 for idx, key in enumerate(self.keys):
53 rets = self.splitter(d[key], self.to_onehot[idx], self.num_classes[idx])
54 assert len(self.output_postfixes) == len(rets), "count of splitted results must match output_postfixes."
55 for i, r in enumerate(rets):
56 d[f"{key}_{self.output_postfixes[i]}"] = r
57 return d
58
59
60 class Activationsd(MapTransform):
61 """
62 Dictionary-based wrapper of :py:class:`monai.transforms.AddActivations`.
63 Add activation layers to the input data specified by `keys`.
64 """
65
66 def __init__(self, keys, output_postfix="act", sigmoid=False, softmax=False, other=None):
67 """
68 Args:
69 keys (hashable items): keys of the corresponding items to model output and label.
70 See also: :py:class:`monai.transforms.compose.MapTransform`
71 output_postfix (str): the postfix string to construct keys to store converted data.
72 for example: if the keys of input data is `pred` and `label`, output_postfix is `act`,
73 the output data keys will be: `pred_act`, `label_act`.
74 sigmoid (bool, tuple or list of bool): whether to execute sigmoid function on model
75 output before transform.
76 softmax (bool, tuple or list of bool): whether to execute softmax function on model
77 output before transform.
78 other (Callable, tuple or list of Callables): callable function to execute other activation layers,
79 for example: `other = lambda x: torch.tanh(x)`
80 """
81 super().__init__(keys)
82 if not isinstance(output_postfix, str):
83 raise ValueError("output_postfix must be a string.")
84 self.output_postfix = output_postfix
85 self.sigmoid = ensure_tuple_rep(sigmoid, len(self.keys))
86 self.softmax = ensure_tuple_rep(softmax, len(self.keys))
87 self.other = ensure_tuple_rep(other, len(self.keys))
88 self.converter = Activations()
89
90 def __call__(self, data):
91 d = dict(data)
92 for idx, key in enumerate(self.keys):
93 ret = self.converter(d[key], self.sigmoid[idx], self.softmax[idx], self.other[idx])
94 d[f"{key}_{self.output_postfix}"] = ret
95 return d
96
97
98 class AsDiscreted(MapTransform):
99 """
100 Dictionary-based wrapper of :py:class:`monai.transforms.AsDiscrete`.
101 """
102
103 def __init__(
104 self,
105 keys,
106 output_postfix="discreted",
107 argmax=False,
108 to_onehot=False,
109 n_classes=None,
110 threshold_values=False,
111 logit_thresh=0.5,
112 ):
113 """
114 Args:
115 keys (hashable items): keys of the corresponding items to model output and label.
116 See also: :py:class:`monai.transforms.compose.MapTransform`
117 output_postfix (str): the postfix string to construct keys to store converted data.
118 for example: if the keys of input data is `pred` and `label`, output_postfix is `discreted`,
119 the output data keys will be: `pred_discreted`, `label_discreted`.
120 argmax (bool): whether to execute argmax function on input data before transform.
121 to_onehot (bool): whether to convert input data into the one-hot format. Defaults to False.
122 n_classes (bool): the number of classes to convert to One-Hot format.
123 threshold_values (bool): whether threshold the float value to int number 0 or 1, default is False.
124 logit_thresh (float): the threshold value for thresholding operation, default is 0.5.
125 """
126 super().__init__(keys)
127 if not isinstance(output_postfix, str):
128 raise ValueError("output_postfix must be a string.")
129 self.output_postfix = output_postfix
130 self.argmax = ensure_tuple_rep(argmax, len(self.keys))
131 self.to_onehot = ensure_tuple_rep(to_onehot, len(self.keys))
132 self.n_classes = ensure_tuple_rep(n_classes, len(self.keys))
133 self.threshold_values = ensure_tuple_rep(threshold_values, len(self.keys))
134 self.logit_thresh = ensure_tuple_rep(logit_thresh, len(self.keys))
135 self.converter = AsDiscrete()
136
137 def __call__(self, data):
138 d = dict(data)
139 for idx, key in enumerate(self.keys):
140 d[f"{key}_{self.output_postfix}"] = self.converter(
141 d[key],
142 self.argmax[idx],
143 self.to_onehot[idx],
144 self.n_classes[idx],
145 self.threshold_values[idx],
146 self.logit_thresh[idx],
147 )
148 return d
149
150
151 class KeepLargestConnectedComponentd(MapTransform):
152 """
153 dictionary-based wrapper of :py:class:monai.transforms.utility.array.KeepLargestConnectedComponent.
154 """
155
156 def __init__(
157 self, keys, applied_values, independent=True, background=0, connectivity=None, output_postfix="largestcc",
158 ):
159 """
160 Args:
161 keys (hashable items): keys of the corresponding items to be transformed.
162 See also: :py:class:`monai.transforms.compose.MapTransform`
163 applied_values (list or tuple of int): number list for applying the connected component on.
164 The pixel whose value is not in this list will remain unchanged.
165 independent (bool): consider several labels as a whole or independent, default is `True`.
166 Example use case would be segment label 1 is liver and label 2 is liver tumor, in that case
167 you want this "independent" to be specified as False.
168 background (int): Background pixel value. The over-segmented pixels will be set as this value.
169 connectivity (int): Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor.
170 Accepted values are ranging from 1 to input.ndim. If ``None``, a full
171 connectivity of ``input.ndim`` is used.
172 output_postfix (str): the postfix string to construct keys to store converted data.
173 for example: if the keys of input data is `label`, output_postfix is `largestcc`,
174 the output data keys will be: `label_largestcc`.
175 """
176 super().__init__(keys)
177 if not isinstance(output_postfix, str):
178 raise ValueError("output_postfix must be a string.")
179 self.output_postfix = output_postfix
180 self.converter = KeepLargestConnectedComponent(applied_values, independent, background, connectivity)
181
182 def __call__(self, data):
183 d = dict(data)
184 for idx, key in enumerate(self.keys):
185 d[f"{key}_{self.output_postfix}"] = self.converter(d[key])
186 return d
187
188
189 SplitChannelD = SplitChannelDict = SplitChanneld
190 ActivationsD = ActivationsDict = Activationsd
191 AsDiscreteD = AsDiscreteDict = AsDiscreted
192 KeepLargestConnectedComponentD = KeepLargestConnectedComponentDict = KeepLargestConnectedComponentd
193
[end of monai/transforms/post/dictionary.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/monai/transforms/post/dictionary.py b/monai/transforms/post/dictionary.py
--- a/monai/transforms/post/dictionary.py
+++ b/monai/transforms/post/dictionary.py
@@ -71,6 +71,7 @@
output_postfix (str): the postfix string to construct keys to store converted data.
for example: if the keys of input data is `pred` and `label`, output_postfix is `act`,
the output data keys will be: `pred_act`, `label_act`.
+ if set to None, will replace the original data with the same key.
sigmoid (bool, tuple or list of bool): whether to execute sigmoid function on model
output before transform.
softmax (bool, tuple or list of bool): whether to execute softmax function on model
@@ -79,7 +80,7 @@
for example: `other = lambda x: torch.tanh(x)`
"""
super().__init__(keys)
- if not isinstance(output_postfix, str):
+ if output_postfix is not None and not isinstance(output_postfix, str):
raise ValueError("output_postfix must be a string.")
self.output_postfix = output_postfix
self.sigmoid = ensure_tuple_rep(sigmoid, len(self.keys))
@@ -91,7 +92,8 @@
d = dict(data)
for idx, key in enumerate(self.keys):
ret = self.converter(d[key], self.sigmoid[idx], self.softmax[idx], self.other[idx])
- d[f"{key}_{self.output_postfix}"] = ret
+ output_key = key if self.output_postfix is None else f"{key}_{self.output_postfix}"
+ d[output_key] = ret
return d
@@ -117,6 +119,7 @@
output_postfix (str): the postfix string to construct keys to store converted data.
for example: if the keys of input data is `pred` and `label`, output_postfix is `discreted`,
the output data keys will be: `pred_discreted`, `label_discreted`.
+ if set to None, will replace the original data with the same key.
argmax (bool): whether to execute argmax function on input data before transform.
to_onehot (bool): whether to convert input data into the one-hot format. Defaults to False.
n_classes (bool): the number of classes to convert to One-Hot format.
@@ -124,7 +127,7 @@
logit_thresh (float): the threshold value for thresholding operation, default is 0.5.
"""
super().__init__(keys)
- if not isinstance(output_postfix, str):
+ if output_postfix is not None and not isinstance(output_postfix, str):
raise ValueError("output_postfix must be a string.")
self.output_postfix = output_postfix
self.argmax = ensure_tuple_rep(argmax, len(self.keys))
@@ -137,7 +140,8 @@
def __call__(self, data):
d = dict(data)
for idx, key in enumerate(self.keys):
- d[f"{key}_{self.output_postfix}"] = self.converter(
+ output_key = key if self.output_postfix is None else f"{key}_{self.output_postfix}"
+ d[output_key] = self.converter(
d[key],
self.argmax[idx],
self.to_onehot[idx],
@@ -172,9 +176,10 @@
output_postfix (str): the postfix string to construct keys to store converted data.
for example: if the keys of input data is `label`, output_postfix is `largestcc`,
the output data keys will be: `label_largestcc`.
+ if set to None, will replace the original data with the same key.
"""
super().__init__(keys)
- if not isinstance(output_postfix, str):
+ if output_postfix is not None and not isinstance(output_postfix, str):
raise ValueError("output_postfix must be a string.")
self.output_postfix = output_postfix
self.converter = KeepLargestConnectedComponent(applied_values, independent, background, connectivity)
@@ -182,7 +187,8 @@
def __call__(self, data):
d = dict(data)
for idx, key in enumerate(self.keys):
- d[f"{key}_{self.output_postfix}"] = self.converter(d[key])
+ output_key = key if self.output_postfix is None else f"{key}_{self.output_postfix}"
+ d[output_key] = self.converter(d[key])
return d
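
For illustration, the key-selection rule introduced by this patch can be sketched in isolation. This is a minimal standalone example, not code from MONAI itself, and the helper name is made up for the sketch:

```python
from typing import Optional

def resolve_output_key(key: str, output_postfix: Optional[str]) -> str:
    # With a postfix, the converted data is stored under a new key;
    # with None, the original key is reused and the input is replaced in place.
    return key if output_postfix is None else f"{key}_{output_postfix}"

assert resolve_output_key("pred", "act") == "pred_act"  # postfix given
assert resolve_output_key("pred", None) == "pred"       # replace original data
```

Reusing the original key is what lets `output_postfix=None` avoid holding two copies of the converted array in the data dictionary, which is the memory saving requested in the issue.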
|
{"golden_diff": "diff --git a/monai/transforms/post/dictionary.py b/monai/transforms/post/dictionary.py\n--- a/monai/transforms/post/dictionary.py\n+++ b/monai/transforms/post/dictionary.py\n@@ -71,6 +71,7 @@\n output_postfix (str): the postfix string to construct keys to store converted data.\n for example: if the keys of input data is `pred` and `label`, output_postfix is `act`,\n the output data keys will be: `pred_act`, `label_act`.\n+ if set to None, will replace the original data with the same key.\n sigmoid (bool, tuple or list of bool): whether to execute sigmoid function on model\n output before transform.\n softmax (bool, tuple or list of bool): whether to execute softmax function on model\n@@ -79,7 +80,7 @@\n for example: `other = lambda x: torch.tanh(x)`\n \"\"\"\n super().__init__(keys)\n- if not isinstance(output_postfix, str):\n+ if output_postfix is not None and not isinstance(output_postfix, str):\n raise ValueError(\"output_postfix must be a string.\")\n self.output_postfix = output_postfix\n self.sigmoid = ensure_tuple_rep(sigmoid, len(self.keys))\n@@ -91,7 +92,8 @@\n d = dict(data)\n for idx, key in enumerate(self.keys):\n ret = self.converter(d[key], self.sigmoid[idx], self.softmax[idx], self.other[idx])\n- d[f\"{key}_{self.output_postfix}\"] = ret\n+ output_key = key if self.output_postfix is None else f\"{key}_{self.output_postfix}\"\n+ d[output_key] = ret\n return d\n \n \n@@ -117,6 +119,7 @@\n output_postfix (str): the postfix string to construct keys to store converted data.\n for example: if the keys of input data is `pred` and `label`, output_postfix is `discreted`,\n the output data keys will be: `pred_discreted`, `label_discreted`.\n+ if set to None, will replace the original data with the same key.\n argmax (bool): whether to execute argmax function on input data before transform.\n to_onehot (bool): whether to convert input data into the one-hot format. 
Defaults to False.\n n_classes (bool): the number of classes to convert to One-Hot format.\n@@ -124,7 +127,7 @@\n logit_thresh (float): the threshold value for thresholding operation, default is 0.5.\n \"\"\"\n super().__init__(keys)\n- if not isinstance(output_postfix, str):\n+ if output_postfix is not None and not isinstance(output_postfix, str):\n raise ValueError(\"output_postfix must be a string.\")\n self.output_postfix = output_postfix\n self.argmax = ensure_tuple_rep(argmax, len(self.keys))\n@@ -137,7 +140,8 @@\n def __call__(self, data):\n d = dict(data)\n for idx, key in enumerate(self.keys):\n- d[f\"{key}_{self.output_postfix}\"] = self.converter(\n+ output_key = key if self.output_postfix is None else f\"{key}_{self.output_postfix}\"\n+ d[output_key] = self.converter(\n d[key],\n self.argmax[idx],\n self.to_onehot[idx],\n@@ -172,9 +176,10 @@\n output_postfix (str): the postfix string to construct keys to store converted data.\n for example: if the keys of input data is `label`, output_postfix is `largestcc`,\n the output data keys will be: `label_largestcc`.\n+ if set to None, will replace the original data with the same key.\n \"\"\"\n super().__init__(keys)\n- if not isinstance(output_postfix, str):\n+ if output_postfix is not None and not isinstance(output_postfix, str):\n raise ValueError(\"output_postfix must be a string.\")\n self.output_postfix = output_postfix\n self.converter = KeepLargestConnectedComponent(applied_values, independent, background, connectivity)\n@@ -182,7 +187,8 @@\n def __call__(self, data):\n d = dict(data)\n for idx, key in enumerate(self.keys):\n- d[f\"{key}_{self.output_postfix}\"] = self.converter(d[key])\n+ output_key = key if self.output_postfix is None else f\"{key}_{self.output_postfix}\"\n+ d[output_key] = self.converter(d[key])\n return d\n", "issue": "Support to replace original keys in post-transform\n**Is your feature request related to a problem? 
Please describe.**\r\nIf the `output_postfix` is None, the post transform should use the original keys to save memory.\r\n\n", "before_files": [{"content": "# Copyright 2020 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nA collection of dictionary-based wrappers around the \"vanilla\" transforms for model output tensors\ndefined in :py:class:`monai.transforms.utility.array`.\n\nClass names are ended with 'd' to denote dictionary-based transforms.\n\"\"\"\n\nfrom monai.utils.misc import ensure_tuple_rep\nfrom monai.transforms.compose import MapTransform\nfrom monai.transforms.post.array import SplitChannel, Activations, AsDiscrete, KeepLargestConnectedComponent\n\n\nclass SplitChanneld(MapTransform):\n \"\"\"\n Dictionary-based wrapper of :py:class:`monai.transforms.SplitChannel`.\n All the input specified by `keys` should be splitted into same count of data.\n\n \"\"\"\n\n def __init__(self, keys, output_postfixes, to_onehot=False, num_classes=None):\n \"\"\"\n Args:\n keys (hashable items): keys of the corresponding items to be transformed.\n See also: :py:class:`monai.transforms.compose.MapTransform`\n output_postfixes (list, tuple): the postfixes to construct keys to store splitted data.\n for example: if the key of input data is `pred` and split 2 classes, the output\n data keys will be: pred_(output_postfixes[0]), pred_(output_postfixes[1])\n to_onehot (bool or list of bool): whether to convert the data to One-Hot format, default is False.\n num_classes (int or list of int): the class number used to convert to One-Hot format\n if `to_onehot` is True.\n \"\"\"\n super().__init__(keys)\n if not isinstance(output_postfixes, (list, tuple)):\n raise ValueError(\"must specify key postfixes to store splitted data.\")\n self.output_postfixes = output_postfixes\n self.to_onehot = ensure_tuple_rep(to_onehot, len(self.keys))\n self.num_classes = ensure_tuple_rep(num_classes, len(self.keys))\n self.splitter = SplitChannel()\n\n def __call__(self, data):\n d = dict(data)\n for idx, key in enumerate(self.keys):\n rets = self.splitter(d[key], self.to_onehot[idx], self.num_classes[idx])\n assert len(self.output_postfixes) == len(rets), \"count of splitted results must match output_postfixes.\"\n for i, r in enumerate(rets):\n d[f\"{key}_{self.output_postfixes[i]}\"] = r\n return d\n\n\nclass Activationsd(MapTransform):\n \"\"\"\n Dictionary-based wrapper of :py:class:`monai.transforms.AddActivations`.\n Add activation layers to the input data specified by `keys`.\n \"\"\"\n\n def __init__(self, keys, output_postfix=\"act\", sigmoid=False, softmax=False, other=None):\n \"\"\"\n Args:\n keys (hashable items): keys of the corresponding items to model output and label.\n See also: :py:class:`monai.transforms.compose.MapTransform`\n output_postfix (str): the postfix string to construct keys to store converted data.\n for example: if the keys of input data is `pred` and `label`, output_postfix is `act`,\n the output data keys will be: `pred_act`, `label_act`.\n sigmoid (bool, tuple or 
list of bool): whether to execute sigmoid function on model\n output before transform.\n softmax (bool, tuple or list of bool): whether to execute softmax function on model\n output before transform.\n other (Callable, tuple or list of Callables): callable function to execute other activation layers,\n for example: `other = lambda x: torch.tanh(x)`\n \"\"\"\n super().__init__(keys)\n if not isinstance(output_postfix, str):\n raise ValueError(\"output_postfix must be a string.\")\n self.output_postfix = output_postfix\n self.sigmoid = ensure_tuple_rep(sigmoid, len(self.keys))\n self.softmax = ensure_tuple_rep(softmax, len(self.keys))\n self.other = ensure_tuple_rep(other, len(self.keys))\n self.converter = Activations()\n\n def __call__(self, data):\n d = dict(data)\n for idx, key in enumerate(self.keys):\n ret = self.converter(d[key], self.sigmoid[idx], self.softmax[idx], self.other[idx])\n d[f\"{key}_{self.output_postfix}\"] = ret\n return d\n\n\nclass AsDiscreted(MapTransform):\n \"\"\"\n Dictionary-based wrapper of :py:class:`monai.transforms.AsDiscrete`.\n \"\"\"\n\n def __init__(\n self,\n keys,\n output_postfix=\"discreted\",\n argmax=False,\n to_onehot=False,\n n_classes=None,\n threshold_values=False,\n logit_thresh=0.5,\n ):\n \"\"\"\n Args:\n keys (hashable items): keys of the corresponding items to model output and label.\n See also: :py:class:`monai.transforms.compose.MapTransform`\n output_postfix (str): the postfix string to construct keys to store converted data.\n for example: if the keys of input data is `pred` and `label`, output_postfix is `discreted`,\n the output data keys will be: `pred_discreted`, `label_discreted`.\n argmax (bool): whether to execute argmax function on input data before transform.\n to_onehot (bool): whether to convert input data into the one-hot format. 
Defaults to False.\n n_classes (bool): the number of classes to convert to One-Hot format.\n threshold_values (bool): whether threshold the float value to int number 0 or 1, default is False.\n logit_thresh (float): the threshold value for thresholding operation, default is 0.5.\n \"\"\"\n super().__init__(keys)\n if not isinstance(output_postfix, str):\n raise ValueError(\"output_postfix must be a string.\")\n self.output_postfix = output_postfix\n self.argmax = ensure_tuple_rep(argmax, len(self.keys))\n self.to_onehot = ensure_tuple_rep(to_onehot, len(self.keys))\n self.n_classes = ensure_tuple_rep(n_classes, len(self.keys))\n self.threshold_values = ensure_tuple_rep(threshold_values, len(self.keys))\n self.logit_thresh = ensure_tuple_rep(logit_thresh, len(self.keys))\n self.converter = AsDiscrete()\n\n def __call__(self, data):\n d = dict(data)\n for idx, key in enumerate(self.keys):\n d[f\"{key}_{self.output_postfix}\"] = self.converter(\n d[key],\n self.argmax[idx],\n self.to_onehot[idx],\n self.n_classes[idx],\n self.threshold_values[idx],\n self.logit_thresh[idx],\n )\n return d\n\n\nclass KeepLargestConnectedComponentd(MapTransform):\n \"\"\"\n dictionary-based wrapper of :py:class:monai.transforms.utility.array.KeepLargestConnectedComponent.\n \"\"\"\n\n def __init__(\n self, keys, applied_values, independent=True, background=0, connectivity=None, output_postfix=\"largestcc\",\n ):\n \"\"\"\n Args:\n keys (hashable items): keys of the corresponding items to be transformed.\n See also: :py:class:`monai.transforms.compose.MapTransform`\n applied_values (list or tuple of int): number list for applying the connected component on.\n The pixel whose value is not in this list will remain unchanged.\n independent (bool): consider several labels as a whole or independent, default is `True`.\n Example use case would be segment label 1 is liver and label 2 is liver tumor, in that case\n you want this \"independent\" to be specified as False.\n background (int): Background pixel value. The over-segmented pixels will be set as this value.\n connectivity (int): Maximum number of orthogonal hops to consider a pixel/voxel as a neighbor.\n Accepted values are ranging from 1 to input.ndim. If ``None``, a full\n connectivity of ``input.ndim`` is used.\n output_postfix (str): the postfix string to construct keys to store converted data.\n for example: if the keys of input data is `label`, output_postfix is `largestcc`,\n the output data keys will be: `label_largestcc`.\n \"\"\"\n super().__init__(keys)\n if not isinstance(output_postfix, str):\n raise ValueError(\"output_postfix must be a string.\")\n self.output_postfix = output_postfix\n self.converter = KeepLargestConnectedComponent(applied_values, independent, background, connectivity)\n\n def __call__(self, data):\n d = dict(data)\n for idx, key in enumerate(self.keys):\n d[f\"{key}_{self.output_postfix}\"] = self.converter(d[key])\n return d\n\n\nSplitChannelD = SplitChannelDict = SplitChanneld\nActivationsD = ActivationsDict = Activationsd\nAsDiscreteD = AsDiscreteDict = AsDiscreted\nKeepLargestConnectedComponentD = KeepLargestConnectedComponentDict = KeepLargestConnectedComponentd\n", "path": "monai/transforms/post/dictionary.py"}]}
| 3,102 | 1,016 |
gh_patches_debug_22130
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6596
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feature: support glob patterns for paths in file bundle plugin `update` extra
### Feature scope
CLI (options, error messages, logging, etc.)
### Description
### Overview
File bundle plugins can specify a number of files to update with the `update` extra, when `meltano upgrade files` is run.
`meltano.yml`
```yml
version: 1
default_environment: dev
environments:
- name: dev
- name: staging
- name: prod
project_id: fefc3baf-ebb0-4f68-87d1-fe5b3afbe6e8
plugins:
files:
- name: files-dbt
pip_url: git+https://github.com/meltano/files-dbt
update:
transform/models/.gitkeep: true
transform/profile/profiles.yml: true
transform/.gitignore: true
transform/dbt_project.yml: true
```
Currently, each file that can be updated by `meltano upgrade files` must have its path declared under `update` individually. This can lead to bloated `meltano.yml` file bundle definitions that specify many files within common directories as upgradable.
### Proposal
Support glob patterns to reduce the number of paths needed to specify all files required for upgrade.
All bundle files:
```yml
update:
'**/*': true
```
All bundle `.yml` files:
```yml
update:
'**/*.yml': true
```
All bundle `.yml` files under the `transform` directory:
```yml
update:
'transform/**/*.yml': true
```
</issue>
<code>
[start of src/meltano/core/plugin/file.py]
1 """Meltano file plugin type."""
2
3 from __future__ import annotations
4
5 from typing import TYPE_CHECKING
6
7 import structlog
8
9 from meltano.core.behavior.hookable import hook
10 from meltano.core.plugin import BasePlugin, PluginType
11 from meltano.core.plugin.project_plugin import ProjectPlugin
12 from meltano.core.plugin.settings_service import PluginSettingsService
13 from meltano.core.plugin_install_service import (
14 PluginInstallReason,
15 PluginInstallService,
16 )
17 from meltano.core.setting_definition import SettingDefinition, SettingKind
18 from meltano.core.venv_service import VirtualEnv
19
20 if TYPE_CHECKING:
21 from os import PathLike
22 from pathlib import Path
23
24 from meltano.core.project import Project
25
26
27 logger = structlog.getLogger(__name__)
28
29
30 class FilePlugin(BasePlugin):
31 """Meltano file plugin type."""
32
33 __plugin_type__ = PluginType.FILES
34
35 EXTRA_SETTINGS = [
36 SettingDefinition(
37 name="_update", kind=SettingKind.OBJECT, aliases=["update"], value={}
38 )
39 ]
40
41 def is_invokable(self) -> bool:
42 """Return whether the plugin is invokable.
43
44 Returns:
45 True if the plugin is invokable, False otherwise.
46 """
47 return False
48
49 def should_add_to_file(self) -> bool:
50 """Return whether the plugin should be added to `meltano.yml`.
51
52 Returns:
53 True if the plugin should be added to `meltano.yml`, False otherwise.
54 """
55 return len(self.extras.get("update", [])) > 0
56
57 def file_contents(self, project: Project) -> dict[Path, str]:
58 """Return the contents of the files to be created or updated.
59
60 Args:
61 project: The Meltano project.
62
63 Returns:
64 A dictionary of file names and their contents.
65 """
66 venv = VirtualEnv(project.plugin_dir(self, "venv"))
67 bundle_dir = venv.site_packages_dir.joinpath("bundle")
68
69 return {
70 path.relative_to(bundle_dir): path.read_text()
71 for path in bundle_dir.glob("**/*")
72 if path.is_file()
73 and "__pycache__" not in path.parts
74 and path != bundle_dir.joinpath("__init__.py")
75 }
76
77 def update_file_header(self, relative_path: PathLike) -> str:
78 """Return the header to be added to the top of the file.
79
80 Args:
81 relative_path: The relative path of the file.
82
83 Returns:
84 The header to be added to the top of the file.
85 """
86 return "\n".join(
87 (
88 f"# This file is managed by the '{self.name}' {self.type.descriptor} and updated automatically when `meltano upgrade` is run.",
89 f"# To prevent any manual changes from being overwritten, remove the {self.type.descriptor} from `meltano.yml` or disable automatic updates:",
90 f"# meltano config --plugin-type={self.type} {self.name} set _update {relative_path} false",
91 )
92 )
93
94 def project_file_contents(
95 self,
96 project: Project,
97 paths_to_update: list[str],
98 ) -> dict[Path, str]:
99 """Return the contents of the files to be created or updated in the project.
100
101 Args:
102 project: The Meltano project.
103 paths_to_update: The paths of the files to be updated.
104
105 Returns:
106 A dictionary of file names and their contents.
107 """
108
109 def with_update_header(content: str, relative_path: PathLike):
110 if str(relative_path) in paths_to_update:
111 content = "\n\n".join([self.update_file_header(relative_path), content])
112
113 return content
114
115 return {
116 relative_path: with_update_header(content, relative_path)
117 for relative_path, content in self.file_contents(project).items()
118 }
119
120 def write_file(
121 self,
122 project: Project,
123 relative_path: PathLike,
124 content: str,
125 ) -> bool:
126 """Write the file to the project.
127
128 Args:
129 project: The Meltano project.
130 relative_path: The relative path of the file.
131 content: The contents of the file.
132
133 Returns:
134 True if the file was written, False otherwise.
135 """
136 project_path = project.root_dir(relative_path)
137 if project_path.exists() and project_path.read_text() == content:
138 return False
139
140 project_path.parent.mkdir(parents=True, exist_ok=True)
141 project_path.write_text(content)
142
143 return True
144
145 def write_files(
146 self,
147 project: Project,
148 files_content: dict[Path, str],
149 ) -> list[Path]:
150 """Write the files to the project.
151
152 Args:
153 project: The Meltano project.
154 files_content: A dictionary of file names and their contents.
155
156 Returns:
157 A list of the paths of the files that were written.
158 """
159 return [
160 relative_path
161 for relative_path, content in files_content.items()
162 if self.write_file(project, relative_path, content)
163 ]
164
165 def files_to_create(
166 self,
167 project: Project,
168 paths_to_update: list[str],
169 ) -> dict[Path, str]:
170 """Return the contents of the files to be created in the project.
171
172 Args:
173 project: The Meltano project.
174 paths_to_update: The paths of the files to be updated.
175
176 Returns:
177 A dictionary of file names and their contents.
178 """
179
180 def rename_if_exists(relative_path: Path):
181 if not project.root_dir(relative_path).exists():
182 return relative_path
183
184 logger.info(
185 f"File {str(relative_path)!r} already exists, keeping both versions"
186 )
187 return relative_path.with_name(
188 f"{relative_path.stem} ({self.name}){relative_path.suffix}"
189 )
190
191 return {
192 rename_if_exists(relative_path): content
193 for relative_path, content in self.project_file_contents(
194 project, paths_to_update
195 ).items()
196 }
197
198 def files_to_update(
199 self,
200 project: Project,
201 paths_to_update: list[str],
202 ) -> dict[Path, str]:
203 """Return the contents of the files to be updated in the project.
204
205 Args:
206 project: The Meltano project.
207 paths_to_update: The paths of the files to be updated.
208
209 Returns:
210 A dictionary of file names and their contents.
211 """
212 return {
213 relative_path: content
214 for relative_path, content in self.project_file_contents(
215 project, paths_to_update
216 ).items()
217 if str(relative_path) in paths_to_update
218 }
219
220 def create_files(
221 self,
222 project: Project,
223 paths_to_update: list[str] | None = None,
224 ) -> list[Path]:
225 """Create the files in the project.
226
227 Args:
228 project: The Meltano project.
229 paths_to_update: Optional paths of the files to be updated.
230
231 Returns:
232 A list of the paths of the files that were created.
233 """
234 return self.write_files(
235 project,
236 self.files_to_create(
237 project, [] if paths_to_update is None else paths_to_update
238 ),
239 )
240
241 def update_files(
242 self,
243 project: Project,
244 paths_to_update: list[str] | None = None,
245 ) -> list[Path]:
246 """Update the files in the project.
247
248 Args:
249 project: The Meltano project.
250 paths_to_update: Optional paths of the files to be updated.
251
252 Returns:
253 A list of the paths of the files that were updated.
254 """
255 return self.write_files(
256 project,
257 self.files_to_update(
258 project, [] if paths_to_update is None else paths_to_update
259 ),
260 )
261
262 @hook("after_install")
263 async def after_install(
264 self,
265 installer: PluginInstallService,
266 plugin: ProjectPlugin,
267 reason: PluginInstallReason,
268 ):
269 """Trigger after install tasks.
270
271 Args:
272 installer: The plugin installer.
273 plugin: The installed plugin.
274 reason: The reason for the installation.
275 """
276 project = installer.project
277 plugins_service = installer.plugins_service
278
279 plugin_settings_service = PluginSettingsService(
280 project, plugin, plugins_service=plugins_service
281 )
282 update_config = plugin_settings_service.get("_update")
283 paths_to_update = [
284 path for path, to_update in update_config.items() if to_update
285 ]
286
287 if reason is PluginInstallReason.ADD:
288 logger.info(f"Adding '{plugin.name}' files to project...")
289
290 for path in self.create_files(project, paths_to_update):
291 logger.info(f"Created {path}")
292 elif reason is PluginInstallReason.UPGRADE:
293 logger.info(f"Updating '{plugin.name}' files in project...")
294
295 updated_paths = self.update_files(project, paths_to_update)
296 if not updated_paths:
297 logger.info("Nothing to update")
298 return
299
300 for path in updated_paths:
301 logger.info(f"Updated {path}")
302 else:
303 logger.info(
304 f"Run `meltano upgrade files` to update your project's '{plugin.name}' files."
305 )
306
[end of src/meltano/core/plugin/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/core/plugin/file.py b/src/meltano/core/plugin/file.py
--- a/src/meltano/core/plugin/file.py
+++ b/src/meltano/core/plugin/file.py
@@ -107,7 +107,7 @@
"""
def with_update_header(content: str, relative_path: PathLike):
- if str(relative_path) in paths_to_update:
+ if any(relative_path.match(path) for path in paths_to_update):
content = "\n\n".join([self.update_file_header(relative_path), content])
return content
@@ -209,12 +209,11 @@
Returns:
A dictionary of file names and their contents.
"""
+ file_contents = self.project_file_contents(project, paths_to_update)
return {
relative_path: content
- for relative_path, content in self.project_file_contents(
- project, paths_to_update
- ).items()
- if str(relative_path) in paths_to_update
+ for relative_path, content in file_contents.items()
+ if any(relative_path.match(path) for path in paths_to_update)
}
def create_files(
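
As a rough illustration of how the `relative_path.match(path)` check above treats glob patterns like those proposed in the issue (a standalone sketch, not part of the patch), `pathlib.PurePath.match` compares a relative pattern against the path from the right:

```python
from pathlib import PurePath

path = PurePath("transform/dbt_project.yml")

print(path.match("*.yml"))            # True  - matches the last component
print(path.match("transform/*.yml"))  # True  - the whole relative path matches
print(path.match("models/*.yml"))     # False - parent directory differs
```

Note that handling of the recursive `**` wildcard in `PurePath.match` has varied across Python versions (in some releases it behaves like a single `*`), so patterns such as `'**/*.yml'` are worth verifying against the interpreter in use.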
|
{"golden_diff": "diff --git a/src/meltano/core/plugin/file.py b/src/meltano/core/plugin/file.py\n--- a/src/meltano/core/plugin/file.py\n+++ b/src/meltano/core/plugin/file.py\n@@ -107,7 +107,7 @@\n \"\"\"\n \n def with_update_header(content: str, relative_path: PathLike):\n- if str(relative_path) in paths_to_update:\n+ if any(relative_path.match(path) for path in paths_to_update):\n content = \"\\n\\n\".join([self.update_file_header(relative_path), content])\n \n return content\n@@ -209,12 +209,11 @@\n Returns:\n A dictionary of file names and their contents.\n \"\"\"\n+ file_contents = self.project_file_contents(project, paths_to_update)\n return {\n relative_path: content\n- for relative_path, content in self.project_file_contents(\n- project, paths_to_update\n- ).items()\n- if str(relative_path) in paths_to_update\n+ for relative_path, content in file_contents.items()\n+ if any(relative_path.match(path) for path in paths_to_update)\n }\n \n def create_files(\n", "issue": "feature: support glob patterns for paths in file bundle plugin `update` extra\n### Feature scope\n\nCLI (options, error messages, logging, etc.)\n\n### Description\n\n### Overview\r\nFile bundle plugins can specify a number of files to update with the `update` extra, when `meltano upgrade files` is run.\r\n\r\n`meltano.yml`\r\n```yml\r\nversion: 1\r\ndefault_environment: dev\r\nenvironments:\r\n- name: dev\r\n- name: staging\r\n- name: prod\r\nproject_id: fefc3baf-ebb0-4f68-87d1-fe5b3afbe6e8\r\nplugins:\r\n files:\r\n - name: files-dbt\r\n pip_url: git+https://github.com/meltano/files-dbt\r\n update:\r\n transform/models/.gitkeep: true\r\n transform/profile/profiles.yml: true\r\n transform/.gitignore: true\r\n transform/dbt_project.yml: true\r\n```\r\n\r\nCurrently, each file than can be updated by `meltano upgrade files` must have its path declared under `update` individually. 
This can lead to bloated `meltano.yml` file bundle definitions that specify many files within common directories as upgradable.\r\n\r\n### Proposal\r\nSupport glob patterns to reduce the number of paths needed to specify all files required for upgrade.\r\n\r\nAll bundle files:\r\n```yml\r\n update:\r\n '**/*': true\r\n```\r\n\r\nAll bundle `.yml` files:\r\n```yml\r\n update:\r\n '**/*.yml': true\r\n```\r\n\r\nAll bundle `.yml` files under the `transform` directory:\r\n```yml\r\n update:\r\n 'transform/**/*.yml': true\r\n```\n", "before_files": [{"content": "\"\"\"Meltano file plugin type.\"\"\"\n\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nimport structlog\n\nfrom meltano.core.behavior.hookable import hook\nfrom meltano.core.plugin import BasePlugin, PluginType\nfrom meltano.core.plugin.project_plugin import ProjectPlugin\nfrom meltano.core.plugin.settings_service import PluginSettingsService\nfrom meltano.core.plugin_install_service import (\n PluginInstallReason,\n PluginInstallService,\n)\nfrom meltano.core.setting_definition import SettingDefinition, SettingKind\nfrom meltano.core.venv_service import VirtualEnv\n\nif TYPE_CHECKING:\n from os import PathLike\n from pathlib import Path\n\n from meltano.core.project import Project\n\n\nlogger = structlog.getLogger(__name__)\n\n\nclass FilePlugin(BasePlugin):\n \"\"\"Meltano file plugin type.\"\"\"\n\n __plugin_type__ = PluginType.FILES\n\n EXTRA_SETTINGS = [\n SettingDefinition(\n name=\"_update\", kind=SettingKind.OBJECT, aliases=[\"update\"], value={}\n )\n ]\n\n def is_invokable(self) -> bool:\n \"\"\"Return whether the plugin is invokable.\n\n Returns:\n True if the plugin is invokable, False otherwise.\n \"\"\"\n return False\n\n def should_add_to_file(self) -> bool:\n \"\"\"Return whether the plugin should be added to `meltano.yml`.\n\n Returns:\n True if the plugin should be added to `meltano.yml`, False otherwise.\n \"\"\"\n return len(self.extras.get(\"update\", [])) > 0\n\n def file_contents(self, project: Project) -> dict[Path, str]:\n \"\"\"Return the contents of the files to be created or updated.\n\n Args:\n project: The Meltano project.\n\n Returns:\n A dictionary of file names and their contents.\n \"\"\"\n venv = VirtualEnv(project.plugin_dir(self, \"venv\"))\n bundle_dir = venv.site_packages_dir.joinpath(\"bundle\")\n\n return {\n path.relative_to(bundle_dir): path.read_text()\n for path in bundle_dir.glob(\"**/*\")\n if path.is_file()\n and \"__pycache__\" not in path.parts\n and path != bundle_dir.joinpath(\"__init__.py\")\n }\n\n def update_file_header(self, relative_path: PathLike) -> str:\n \"\"\"Return the header to be added to the top of the file.\n\n Args:\n relative_path: The relative path of the file.\n\n Returns:\n The header to be added to the top of the file.\n \"\"\"\n return \"\\n\".join(\n (\n f\"# This file is managed by the '{self.name}' {self.type.descriptor} and updated automatically when `meltano upgrade` is run.\",\n f\"# To prevent any manual changes from being overwritten, remove the {self.type.descriptor} from `meltano.yml` or disable automatic updates:\",\n f\"# meltano config --plugin-type={self.type} {self.name} set _update {relative_path} false\",\n )\n )\n\n def project_file_contents(\n self,\n project: Project,\n paths_to_update: list[str],\n ) -> dict[Path, str]:\n \"\"\"Return the contents of the files to be created or updated in the project.\n\n Args:\n project: The Meltano project.\n paths_to_update: The paths of the files to be updated.\n\n Returns:\n A 
dictionary of file names and their contents.\n \"\"\"\n\n def with_update_header(content: str, relative_path: PathLike):\n if str(relative_path) in paths_to_update:\n content = \"\\n\\n\".join([self.update_file_header(relative_path), content])\n\n return content\n\n return {\n relative_path: with_update_header(content, relative_path)\n for relative_path, content in self.file_contents(project).items()\n }\n\n def write_file(\n self,\n project: Project,\n relative_path: PathLike,\n content: str,\n ) -> bool:\n \"\"\"Write the file to the project.\n\n Args:\n project: The Meltano project.\n relative_path: The relative path of the file.\n content: The contents of the file.\n\n Returns:\n True if the file was written, False otherwise.\n \"\"\"\n project_path = project.root_dir(relative_path)\n if project_path.exists() and project_path.read_text() == content:\n return False\n\n project_path.parent.mkdir(parents=True, exist_ok=True)\n project_path.write_text(content)\n\n return True\n\n def write_files(\n self,\n project: Project,\n files_content: dict[Path, str],\n ) -> list[Path]:\n \"\"\"Write the files to the project.\n\n Args:\n project: The Meltano project.\n files_content: A dictionary of file names and their contents.\n\n Returns:\n A list of the paths of the files that were written.\n \"\"\"\n return [\n relative_path\n for relative_path, content in files_content.items()\n if self.write_file(project, relative_path, content)\n ]\n\n def files_to_create(\n self,\n project: Project,\n paths_to_update: list[str],\n ) -> dict[Path, str]:\n \"\"\"Return the contents of the files to be created in the project.\n\n Args:\n project: The Meltano project.\n paths_to_update: The paths of the files to be updated.\n\n Returns:\n A dictionary of file names and their contents.\n \"\"\"\n\n def rename_if_exists(relative_path: Path):\n if not project.root_dir(relative_path).exists():\n return relative_path\n\n logger.info(\n f\"File {str(relative_path)!r} already exists, keeping both versions\"\n )\n return relative_path.with_name(\n f\"{relative_path.stem} ({self.name}){relative_path.suffix}\"\n )\n\n return {\n rename_if_exists(relative_path): content\n for relative_path, content in self.project_file_contents(\n project, paths_to_update\n ).items()\n }\n\n def files_to_update(\n self,\n project: Project,\n paths_to_update: list[str],\n ) -> dict[Path, str]:\n \"\"\"Return the contents of the files to be updated in the project.\n\n Args:\n project: The Meltano project.\n paths_to_update: The paths of the files to be updated.\n\n Returns:\n A dictionary of file names and their contents.\n \"\"\"\n return {\n relative_path: content\n for relative_path, content in self.project_file_contents(\n project, paths_to_update\n ).items()\n if str(relative_path) in paths_to_update\n }\n\n def create_files(\n self,\n project: Project,\n paths_to_update: list[str] | None = None,\n ) -> list[Path]:\n \"\"\"Create the files in the project.\n\n Args:\n project: The Meltano project.\n paths_to_update: Optional paths of the files to be updated.\n\n Returns:\n A list of the paths of the files that were created.\n \"\"\"\n return self.write_files(\n project,\n self.files_to_create(\n project, [] if paths_to_update is None else paths_to_update\n ),\n )\n\n def update_files(\n self,\n project: Project,\n paths_to_update: list[str] | None = None,\n ) -> list[Path]:\n \"\"\"Update the files in the project.\n\n Args:\n project: The Meltano project.\n paths_to_update: Optional paths of the files to be updated.\n\n Returns:\n A list 
of the paths of the files that were updated.\n \"\"\"\n return self.write_files(\n project,\n self.files_to_update(\n project, [] if paths_to_update is None else paths_to_update\n ),\n )\n\n @hook(\"after_install\")\n async def after_install(\n self,\n installer: PluginInstallService,\n plugin: ProjectPlugin,\n reason: PluginInstallReason,\n ):\n \"\"\"Trigger after install tasks.\n\n Args:\n installer: The plugin installer.\n plugin: The installed plugin.\n reason: The reason for the installation.\n \"\"\"\n project = installer.project\n plugins_service = installer.plugins_service\n\n plugin_settings_service = PluginSettingsService(\n project, plugin, plugins_service=plugins_service\n )\n update_config = plugin_settings_service.get(\"_update\")\n paths_to_update = [\n path for path, to_update in update_config.items() if to_update\n ]\n\n if reason is PluginInstallReason.ADD:\n logger.info(f\"Adding '{plugin.name}' files to project...\")\n\n for path in self.create_files(project, paths_to_update):\n logger.info(f\"Created {path}\")\n elif reason is PluginInstallReason.UPGRADE:\n logger.info(f\"Updating '{plugin.name}' files in project...\")\n\n updated_paths = self.update_files(project, paths_to_update)\n if not updated_paths:\n logger.info(\"Nothing to update\")\n return\n\n for path in updated_paths:\n logger.info(f\"Updated {path}\")\n else:\n logger.info(\n f\"Run `meltano upgrade files` to update your project's '{plugin.name}' files.\"\n )\n", "path": "src/meltano/core/plugin/file.py"}]}
| 3,674 | 256 |
gh_patches_debug_66285
|
rasdani/github-patches
|
git_diff
|
python-poetry__poetry-578
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Poetry run: ModuleOrPackageNotFound with implicit namespace packages (PEP420)
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).
- **OS version and name**: Arch Linux 4.18.16
- **Poetry version**: 0.12.5
- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/Kazy/692963f6a41c64d38f38ac9a3f95619a
## Issue
I'm using implicit namespace packages to organize the packages at work, which works well by specifying the `packages` value in the `pyproject.toml` like that:
```toml
packages = [
{ include = "org" }
]
```
The file structure is like that:
```
├── org
│ └── subpackage
│ ├── __init__.py
│ └── command
│ └── cli.py
└── pyproject.toml
```
The issue is when you try to do `poetry run my-command`, you get:
```
[ModuleOrPackageNotFound]
No file/folder found for package org-subpackage-command
```
I already found how to fix it and will follow with a PR, but I wanted to create the issue in case my fix isn't the right one, and to make organization easier on your side as well.
</issue>
<code>
[start of poetry/console/commands/run.py]
1 from .env_command import EnvCommand
2
3
4 class RunCommand(EnvCommand):
5 """
6 Runs a command in the appropriate environment.
7
8 run
9 { args* : The command and arguments/options to run. }
10 """
11
12 def handle(self):
13 args = self.argument("args")
14 script = args[0]
15 scripts = self.poetry.local_config.get("scripts")
16
17 if scripts and script in scripts:
18 return self.run_script(scripts[script], args)
19
20 return self.env.execute(*args)
21
22 def run_script(self, script, args):
23 if isinstance(script, dict):
24 script = script["callable"]
25
26 module, callable_ = script.split(":")
27
28 src_in_sys_path = "sys.path.append('src'); " if self._module.is_in_src() else ""
29
30 cmd = ["python", "-c"]
31
32 cmd += [
33 '"import sys; '
34 "from importlib import import_module; "
35 "sys.argv = {!r}; {}"
36 "import_module('{}').{}()\"".format(
37 args, src_in_sys_path, module, callable_
38 )
39 ]
40
41 return self.env.run(*cmd, shell=True, call=True)
42
43 @property
44 def _module(self):
45 from ...masonry.utils.module import Module
46
47 poetry = self.poetry
48 package = poetry.package
49 path = poetry.file.parent
50 module = Module(package.name, path.as_posix())
51 return module
52
53 def merge_application_definition(self, merge_args=True):
54 if self._application is None or (
55 self._application_definition_merged
56 and (self._application_definition_merged_with_args or not merge_args)
57 ):
58 return
59
60 if merge_args:
61 current_arguments = self._definition.get_arguments()
62 self._definition.set_arguments(
63 self._application.get_definition().get_arguments()
64 )
65 self._definition.add_arguments(current_arguments)
66
67 self._application_definition_merged = True
68 if merge_args:
69 self._application_definition_merged_with_args = True
70
[end of poetry/console/commands/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/poetry/console/commands/run.py b/poetry/console/commands/run.py
--- a/poetry/console/commands/run.py
+++ b/poetry/console/commands/run.py
@@ -47,7 +47,7 @@
poetry = self.poetry
package = poetry.package
path = poetry.file.parent
- module = Module(package.name, path.as_posix())
+ module = Module(package.name, path.as_posix(), package.packages)
return module
def merge_application_definition(self, merge_args=True):
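
The fix forwards the explicitly configured `package.packages` to `Module`, presumably so that module discovery is not limited to deriving a single directory from the distribution name. For context, a minimal standalone check of the PEP 420 behaviour involved, assuming the directory layout shown in the issue and running from the project root:

```python
import importlib

ns = importlib.import_module("org")              # namespace package: no __init__.py
print(list(ns.__path__))                         # directories contributing to "org"
print(getattr(ns, "__file__", None))             # None/absent for namespace packages

sub = importlib.import_module("org.subpackage")  # regular subpackage with __init__.py
print(sub.__file__)                              # .../org/subpackage/__init__.py
```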
|
{"golden_diff": "diff --git a/poetry/console/commands/run.py b/poetry/console/commands/run.py\n--- a/poetry/console/commands/run.py\n+++ b/poetry/console/commands/run.py\n@@ -47,7 +47,7 @@\n poetry = self.poetry\n package = poetry.package\n path = poetry.file.parent\n- module = Module(package.name, path.as_posix())\n+ module = Module(package.name, path.as_posix(), package.packages)\n return module\n \n def merge_application_definition(self, merge_args=True):\n", "issue": "Poetry run: ModuleOrPackageNotFound with implicit namespace packages (PEP420)\n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option).\r\n\r\n- **OS version and name**: Arch Linux 4.18.16\r\n- **Poetry version**: 0.12.5\r\n- **Link of a [Gist](https://gist.github.com/) with the contents of your pyproject.toml file**: https://gist.github.com/Kazy/692963f6a41c64d38f38ac9a3f95619a\r\n\r\n## Issue\r\nI'm using implicit namespace packages to organize the packages at work, which works well by specifying the `packages` value in the `pyproject.toml` like that:\r\n```toml\r\npackages = [\r\n { include = \"org\" }\r\n]\r\n```\r\nThe file structure is like that:\r\n```\r\n\u251c\u2500\u2500 org\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 subpackage\r\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 command\r\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 cli.py\r\n\u2514\u2500\u2500 pyproject.toml\r\n```\r\n\r\nThe issue is when you try to do `poetry run my-command`, you get:\r\n```\r\n[ModuleOrPackageNotFound]\r\nNo file/folder found for package org-subpackage-command\r\n```\r\n\r\nI already found how to fix it and will follow with a PR, but I wanted to create the issue in case my fix isn't the right one, and to make organization easier on your side as well.\r\n\n", "before_files": [{"content": "from .env_command import EnvCommand\n\n\nclass RunCommand(EnvCommand):\n \"\"\"\n Runs a command in the appropriate environment.\n\n run\n { args* : The command and arguments/options to run. 
}\n \"\"\"\n\n def handle(self):\n args = self.argument(\"args\")\n script = args[0]\n scripts = self.poetry.local_config.get(\"scripts\")\n\n if scripts and script in scripts:\n return self.run_script(scripts[script], args)\n\n return self.env.execute(*args)\n\n def run_script(self, script, args):\n if isinstance(script, dict):\n script = script[\"callable\"]\n\n module, callable_ = script.split(\":\")\n\n src_in_sys_path = \"sys.path.append('src'); \" if self._module.is_in_src() else \"\"\n\n cmd = [\"python\", \"-c\"]\n\n cmd += [\n '\"import sys; '\n \"from importlib import import_module; \"\n \"sys.argv = {!r}; {}\"\n \"import_module('{}').{}()\\\"\".format(\n args, src_in_sys_path, module, callable_\n )\n ]\n\n return self.env.run(*cmd, shell=True, call=True)\n\n @property\n def _module(self):\n from ...masonry.utils.module import Module\n\n poetry = self.poetry\n package = poetry.package\n path = poetry.file.parent\n module = Module(package.name, path.as_posix())\n return module\n\n def merge_application_definition(self, merge_args=True):\n if self._application is None or (\n self._application_definition_merged\n and (self._application_definition_merged_with_args or not merge_args)\n ):\n return\n\n if merge_args:\n current_arguments = self._definition.get_arguments()\n self._definition.set_arguments(\n self._application.get_definition().get_arguments()\n )\n self._definition.add_arguments(current_arguments)\n\n self._application_definition_merged = True\n if merge_args:\n self._application_definition_merged_with_args = True\n", "path": "poetry/console/commands/run.py"}]}
| 1,514 | 121 |
gh_patches_debug_25695
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-6665
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
export_csv example under 0.12.7dev11 doesn't resize table
Running example export_csv from https://github.com/bokeh/bokeh/tree/master/examples/app/export_csv
under 0.12.7dev11 the table doesn't resize and extra rows are filled with 'undefined', '$NaN'.
The number of rows is 248 and doesn't change when moving the slider.
The rows after 248 are not shown.
Under 0.12.6 everything works perfectly.
</issue>
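The behaviour described suggests the rendered table keeps its old row count even though `source.data` is replaced. As a minimal, hedged sketch (using only the Bokeh objects already present in this example app, with made-up column data), this is the update pattern whose result the report says is not reflected in the widget:

```python
# Sketch only: replacing source.data wholesale is what should shrink or grow
# the DataTable; the report says the rendered rows do not follow it in 0.12.7dev11.
from bokeh.models import ColumnDataSource
from bokeh.models.widgets import DataTable, TableColumn

source = ColumnDataSource(data=dict(name=[], salary=[]))
table = DataTable(source=source, columns=[
    TableColumn(field="name", title="Employee Name"),
    TableColumn(field="salary", title="Income"),
], width=400)

# After a filtered update, the table is expected to show exactly len(new_data["name"]) rows.
new_data = {"name": ["a", "b"], "salary": [1, 2]}
source.data = new_data
```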
<code>
[start of examples/app/export_csv/main.py]
1 from os.path import dirname, join
2
3 import pandas as pd
4
5 from bokeh.layouts import row, widgetbox
6 from bokeh.models import ColumnDataSource, CustomJS
7 from bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter
8 from bokeh.io import curdoc
9
10 df = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))
11
12 source = ColumnDataSource(data=dict())
13
14 def update():
15 current = df[df['salary'] <= slider.value].dropna()
16 source.data = {
17 'name' : current.name,
18 'salary' : current.salary,
19 'years_experience' : current.years_experience,
20 }
21
22 slider = Slider(title="Max Salary", start=10000, end=250000, value=150000, step=1000)
23 slider.on_change('value', lambda attr, old, new: update())
24
25 button = Button(label="Download", button_type="success")
26 button.callback = CustomJS(args=dict(source=source),
27 code=open(join(dirname(__file__), "download.js")).read())
28
29 columns = [
30 TableColumn(field="name", title="Employee Name"),
31 TableColumn(field="salary", title="Income", formatter=NumberFormatter(format="$0,0.00")),
32 TableColumn(field="years_experience", title="Experience (years)")
33 ]
34
35 data_table = DataTable(source=source, columns=columns, width=800)
36
37 controls = widgetbox(slider, button)
38 table = widgetbox(data_table)
39
40 curdoc().add_root(row(controls, table))
41 curdoc().title = "Export CSV"
42
43 update()
44
[end of examples/app/export_csv/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/app/export_csv/main.py b/examples/app/export_csv/main.py
--- a/examples/app/export_csv/main.py
+++ b/examples/app/export_csv/main.py
@@ -4,7 +4,7 @@
from bokeh.layouts import row, widgetbox
from bokeh.models import ColumnDataSource, CustomJS
-from bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter
+from bokeh.models.widgets import RangeSlider, Button, DataTable, TableColumn, NumberFormatter
from bokeh.io import curdoc
df = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))
@@ -12,14 +12,14 @@
source = ColumnDataSource(data=dict())
def update():
- current = df[df['salary'] <= slider.value].dropna()
+ current = df[(df['salary'] >= slider.value[0]) & (df['salary'] <= slider.value[1])].dropna()
source.data = {
'name' : current.name,
'salary' : current.salary,
'years_experience' : current.years_experience,
}
-slider = Slider(title="Max Salary", start=10000, end=250000, value=150000, step=1000)
+slider = RangeSlider(title="Max Salary", start=10000, end=110000, value=(10000, 50000), step=1000, format="0,0")
slider.on_change('value', lambda attr, old, new: update())
button = Button(label="Download", button_type="success")
|
{"golden_diff": "diff --git a/examples/app/export_csv/main.py b/examples/app/export_csv/main.py\n--- a/examples/app/export_csv/main.py\n+++ b/examples/app/export_csv/main.py\n@@ -4,7 +4,7 @@\n \n from bokeh.layouts import row, widgetbox\n from bokeh.models import ColumnDataSource, CustomJS\n-from bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter\n+from bokeh.models.widgets import RangeSlider, Button, DataTable, TableColumn, NumberFormatter\n from bokeh.io import curdoc\n \n df = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))\n@@ -12,14 +12,14 @@\n source = ColumnDataSource(data=dict())\n \n def update():\n- current = df[df['salary'] <= slider.value].dropna()\n+ current = df[(df['salary'] >= slider.value[0]) & (df['salary'] <= slider.value[1])].dropna()\n source.data = {\n 'name' : current.name,\n 'salary' : current.salary,\n 'years_experience' : current.years_experience,\n }\n \n-slider = Slider(title=\"Max Salary\", start=10000, end=250000, value=150000, step=1000)\n+slider = RangeSlider(title=\"Max Salary\", start=10000, end=110000, value=(10000, 50000), step=1000, format=\"0,0\")\n slider.on_change('value', lambda attr, old, new: update())\n \n button = Button(label=\"Download\", button_type=\"success\")\n", "issue": "export_csv example under 0.12.7dev11 doesn't resize table\nRunning example export_csv from https://github.com/bokeh/bokeh/tree/master/examples/app/export_csv\r\n\r\nunder 0.12.7dev11 the table doesn't resize and extra rows are filled with 'undefined', '$NaN'.\r\nThe number of rows is 248 and doesn't change when moving the slider.\r\nThe rows after 248 are not shown.\r\n\r\nUnder 0.12.6 everything works perfect.\n", "before_files": [{"content": "from os.path import dirname, join\n\nimport pandas as pd\n\nfrom bokeh.layouts import row, widgetbox\nfrom bokeh.models import ColumnDataSource, CustomJS\nfrom bokeh.models.widgets import Slider, Button, DataTable, TableColumn, NumberFormatter\nfrom bokeh.io import curdoc\n\ndf = pd.read_csv(join(dirname(__file__), 'salary_data.csv'))\n\nsource = ColumnDataSource(data=dict())\n\ndef update():\n current = df[df['salary'] <= slider.value].dropna()\n source.data = {\n 'name' : current.name,\n 'salary' : current.salary,\n 'years_experience' : current.years_experience,\n }\n\nslider = Slider(title=\"Max Salary\", start=10000, end=250000, value=150000, step=1000)\nslider.on_change('value', lambda attr, old, new: update())\n\nbutton = Button(label=\"Download\", button_type=\"success\")\nbutton.callback = CustomJS(args=dict(source=source),\n code=open(join(dirname(__file__), \"download.js\")).read())\n\ncolumns = [\n TableColumn(field=\"name\", title=\"Employee Name\"),\n TableColumn(field=\"salary\", title=\"Income\", formatter=NumberFormatter(format=\"$0,0.00\")),\n TableColumn(field=\"years_experience\", title=\"Experience (years)\")\n]\n\ndata_table = DataTable(source=source, columns=columns, width=800)\n\ncontrols = widgetbox(slider, button)\ntable = widgetbox(data_table)\n\ncurdoc().add_root(row(controls, table))\ncurdoc().title = \"Export CSV\"\n\nupdate()\n", "path": "examples/app/export_csv/main.py"}]}
| 1,074 | 361 |
gh_patches_debug_22499
|
rasdani/github-patches
|
git_diff
|
AUTOMATIC1111__stable-diffusion-webui-737
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GFPGAN restore faces error
Using GFPGAN restore faces gives the following error:
Traceback (most recent call last):
File "/home/x/stable-diff/stable-diffusion-webui/modules/ui.py", line 128, in f
res = list(func(*args, **kwargs))
File "/home/x/stable-diff/stable-diffusion-webui/webui.py", line 55, in f
res = func(*args, **kwargs)
File "/home/x/stable-diff/stable-diffusion-webui/modules/txt2img.py", line 39, in txt2img
processed = process_images(p)
File "/home/x/stable-diff/stable-diffusion-webui/modules/processing.py", line 314, in process_images
x_sample = modules.face_restoration.restore_faces(x_sample)
File "/home/x/stable-diff/stable-diffusion-webui/modules/face_restoration.py", line 19, in restore_faces
return face_restorer.restore(np_image)
File "/home/x/stable-diff/stable-diffusion-webui/modules/codeformer_model.py", line 79, in restore
self.face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5)
File "/home/x/stable-diff/stable-diffusion-webui/repositories/CodeFormer/facelib/utils/face_restoration_helper.py", line 151, in get_face_landmarks_5
bboxes = self.face_det.detect_faces(input_img)
File "/home/x/stable-diff/stable-diffusion-webui/repositories/CodeFormer/facelib/detection/retinaface/retinaface.py", line 231, in detect_faces
keep = py_cpu_nms(bounding_boxes, nms_threshold)
File "/home/x/stable-diff/stable-diffusion-webui/repositories/CodeFormer/facelib/detection/retinaface/retinaface_utils.py", line 41, in py_cpu_nms
keep = torchvision.ops.nms(
File "/home/x/.local/lib/python3.10/site-packages/torchvision/ops/boxes.py", line 40, in nms
_assert_has_ops()
File "/home/x/.local/lib/python3.10/site-packages/torchvision/extension.py", line 33, in _assert_has_ops
raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
Running: python -c "import torch; import torchvision; print(torch.__version__); print(torchvision.__version__)"
Gives the following results:
1.12.1+cu113
0.13.1+cu102
on Latest Arch Linux.
GFPGAN works without issues in this similar tool: https://github.com/cmdr2/stable-diffusion-ui
</issue>
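The reported versions (`torch 1.12.1+cu113`, `torchvision 0.13.1+cu102`) carry different CUDA build tags, which is exactly the mismatch the torchvision error message warns about. A small diagnostic one could run before launching is sketched below; the `+cuXXX` suffix comparison is a heuristic of mine, not an official compatibility check.

```python
# Diagnostic sketch: compare the local build tags of torch and torchvision.
import torch
import torchvision

def local_build_tag(version: str) -> str:
    # "1.12.1+cu113" -> "cu113"; versions without a suffix are treated as unknown/CPU
    return version.split("+", 1)[1] if "+" in version else "cpu/unknown"

t_tag = local_build_tag(torch.__version__)
tv_tag = local_build_tag(torchvision.__version__)
print(f"torch {torch.__version__} ({t_tag}) / torchvision {torchvision.__version__} ({tv_tag})")
if t_tag != tv_tag:
    print("Mismatched builds; reinstall torchvision so it matches the torch build.")
```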
<code>
[start of launch.py]
1 # this scripts installs necessary requirements and launches main program in webui.py
2
3 import subprocess
4 import os
5 import sys
6 import importlib.util
7 import shlex
8
9 dir_repos = "repositories"
10 dir_tmp = "tmp"
11
12 python = sys.executable
13 git = os.environ.get('GIT', "git")
14 torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
15 requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
16 commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
17
18 k_diffusion_package = os.environ.get('K_DIFFUSION_PACKAGE', "git+https://github.com/crowsonkb/k-diffusion.git@1a0703dfb7d24d8806267c3e7ccc4caf67fd1331")
19 gfpgan_package = os.environ.get('GFPGAN_PACKAGE', "git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379")
20
21 stable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', "69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc")
22 taming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', "24268930bf1dce879235a7fddd0b2355b84d7ea6")
23 codeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', "c5b4593074ba6214284d6acd5f1719b6c5d739af")
24 blip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', "48211a1594f1321b00f14c9f7a5b4813144b2fb9")
25
26 args = shlex.split(commandline_args)
27
28
29 def extract_arg(args, name):
30 return [x for x in args if x != name], name in args
31
32
33 args, skip_torch_cuda_test = extract_arg(args, '--skip-torch-cuda-test')
34
35
36 def repo_dir(name):
37 return os.path.join(dir_repos, name)
38
39
40 def run(command, desc=None, errdesc=None):
41 if desc is not None:
42 print(desc)
43
44 result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
45
46 if result.returncode != 0:
47
48 message = f"""{errdesc or 'Error running command'}.
49 Command: {command}
50 Error code: {result.returncode}
51 stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else '<empty>'}
52 stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else '<empty>'}
53 """
54 raise RuntimeError(message)
55
56 return result.stdout.decode(encoding="utf8", errors="ignore")
57
58
59 def run_python(code, desc=None, errdesc=None):
60 return run(f'"{python}" -c "{code}"', desc, errdesc)
61
62
63 def run_pip(args, desc=None):
64 return run(f'"{python}" -m pip {args} --prefer-binary', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
65
66
67 def check_run(command):
68 result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
69 return result.returncode == 0
70
71
72 def check_run_python(code):
73 return check_run(f'"{python}" -c "{code}"')
74
75
76 def is_installed(package):
77 try:
78 spec = importlib.util.find_spec(package)
79 except ModuleNotFoundError:
80 return False
81
82 return spec is not None
83
84
85 def git_clone(url, dir, name, commithash=None):
86 # TODO clone into temporary dir and move if successful
87
88 if os.path.exists(dir):
89 return
90
91 run(f'"{git}" clone "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}")
92
93 if commithash is not None:
94 run(f'"{git}" -C {dir} checkout {commithash}', None, "Couldn't checkout {name}'s hash: {commithash}")
95
96
97 try:
98 commit = run(f"{git} rev-parse HEAD").strip()
99 except Exception:
100 commit = "<none>"
101
102 print(f"Python {sys.version}")
103 print(f"Commit hash: {commit}")
104
105 if not is_installed("torch"):
106 run(f'"{python}" -m {torch_command}', "Installing torch", "Couldn't install torch")
107
108 if not skip_torch_cuda_test:
109 run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDINE_ARGS variable to disable this check'")
110
111 if not is_installed("k_diffusion.sampling"):
112 run_pip(f"install {k_diffusion_package}", "k-diffusion")
113
114 if not is_installed("gfpgan"):
115 run_pip(f"install {gfpgan_package}", "gfpgan")
116
117 os.makedirs(dir_repos, exist_ok=True)
118
119 git_clone("https://github.com/CompVis/stable-diffusion.git", repo_dir('stable-diffusion'), "Stable Diffusion", stable_diffusion_commit_hash)
120 git_clone("https://github.com/CompVis/taming-transformers.git", repo_dir('taming-transformers'), "Taming Transformers", taming_transformers_commit_hash)
121 git_clone("https://github.com/sczhou/CodeFormer.git", repo_dir('CodeFormer'), "CodeFormer", codeformer_commit_hash)
122 git_clone("https://github.com/salesforce/BLIP.git", repo_dir('BLIP'), "BLIP", blip_commit_hash)
123
124 if not is_installed("lpips"):
125 run_pip(f"install -r {os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}", "requirements for CodeFormer")
126
127 run_pip(f"install -r {requirements_file}", "requirements for Web UI")
128
129 sys.argv += args
130
131
132 def start_webui():
133 print(f"Launching Web UI with arguments: {' '.join(sys.argv[1:])}")
134 import webui
135 webui.webui()
136
137 if __name__ == "__main__":
138 start_webui()
139
[end of launch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/launch.py b/launch.py
--- a/launch.py
+++ b/launch.py
@@ -11,7 +11,7 @@
python = sys.executable
git = os.environ.get('GIT', "git")
-torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
+torch_command = os.environ.get('TORCH_COMMAND', "pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113")
requirements_file = os.environ.get('REQS_FILE', "requirements_versions.txt")
commandline_args = os.environ.get('COMMANDLINE_ARGS', "")
@@ -92,8 +92,9 @@
print(f"Python {sys.version}")
print(f"Commit hash: {commit}")
-if not is_installed("torch"):
- run(f'"{python}" -m {torch_command}', "Installing torch", "Couldn't install torch")
+
+if not is_installed("torch") or not is_installed("torchvision"):
+ run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch")
run_python("import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'")
|
{"golden_diff": "diff --git a/launch.py b/launch.py\n--- a/launch.py\n+++ b/launch.py\n@@ -11,7 +11,7 @@\n \r\n python = sys.executable\r\n git = os.environ.get('GIT', \"git\")\r\n-torch_command = os.environ.get('TORCH_COMMAND', \"pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113\")\r\n+torch_command = os.environ.get('TORCH_COMMAND', \"pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113\")\r\n requirements_file = os.environ.get('REQS_FILE', \"requirements_versions.txt\")\r\n commandline_args = os.environ.get('COMMANDLINE_ARGS', \"\")\r\n \r\n@@ -92,8 +92,9 @@\n print(f\"Python {sys.version}\")\r\n print(f\"Commit hash: {commit}\")\r\n \r\n-if not is_installed(\"torch\"):\r\n- run(f'\"{python}\" -m {torch_command}', \"Installing torch\", \"Couldn't install torch\")\r\n+\r\n+if not is_installed(\"torch\") or not is_installed(\"torchvision\"):\r\n+ run(f'\"{python}\" -m {torch_command}', \"Installing torch and torchvision\", \"Couldn't install torch\")\r\n \r\n run_python(\"import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU'\")\n", "issue": "GFPGAN restore faces error\nUsing GFPGAN restore faces gives following error\r\n\r\nTraceback (most recent call last):\r\n File \"/home/x/stable-diff/stable-diffusion-webui/modules/ui.py\", line 128, in f\r\n res = list(func(*args, **kwargs))\r\n File \"/home/x/stable-diff/stable-diffusion-webui/webui.py\", line 55, in f\r\n res = func(*args, **kwargs)\r\n File \"/home/x/stable-diff/stable-diffusion-webui/modules/txt2img.py\", line 39, in txt2img\r\n processed = process_images(p)\r\n File \"/home/x/stable-diff/stable-diffusion-webui/modules/processing.py\", line 314, in process_images\r\n x_sample = modules.face_restoration.restore_faces(x_sample)\r\n File \"/home/x/stable-diff/stable-diffusion-webui/modules/face_restoration.py\", line 19, in restore_faces\r\n return face_restorer.restore(np_image)\r\n File \"/home/x/stable-diff/stable-diffusion-webui/modules/codeformer_model.py\", line 79, in restore\r\n self.face_helper.get_face_landmarks_5(only_center_face=False, resize=640, eye_dist_threshold=5)\r\n File \"/home/x/stable-diff/stable-diffusion-webui/repositories/CodeFormer/facelib/utils/face_restoration_helper.py\", line 151, in get_face_landmarks_5\r\n bboxes = self.face_det.detect_faces(input_img)\r\n File \"/home/x/stable-diff/stable-diffusion-webui/repositories/CodeFormer/facelib/detection/retinaface/retinaface.py\", line 231, in detect_faces\r\n keep = py_cpu_nms(bounding_boxes, nms_threshold)\r\n File \"/home/x/stable-diff/stable-diffusion-webui/repositories/CodeFormer/facelib/detection/retinaface/retinaface_utils.py\", line 41, in py_cpu_nms\r\n keep = torchvision.ops.nms(\r\n File \"/home/x/.local/lib/python3.10/site-packages/torchvision/ops/boxes.py\", line 40, in nms\r\n _assert_has_ops()\r\n File \"/home/x/.local/lib/python3.10/site-packages/torchvision/extension.py\", line 33, in _assert_has_ops\r\n raise RuntimeError(\r\nRuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. 
Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.\r\n\r\n\r\nRunning: python -c \"import torch; import torchvision; print(torch.__version__); print(torchvision.__version__)\"\r\n\r\nGives the following results:\r\n1.12.1+cu113\r\n0.13.1+cu102\r\n\r\n\r\non Latest Arch Linux.\r\n\r\nGFPGAN works without issues in this similar tool: https://github.com/cmdr2/stable-diffusion-ui\r\n\r\n\n", "before_files": [{"content": "# this scripts installs necessary requirements and launches main program in webui.py\r\n\r\nimport subprocess\r\nimport os\r\nimport sys\r\nimport importlib.util\r\nimport shlex\r\n\r\ndir_repos = \"repositories\"\r\ndir_tmp = \"tmp\"\r\n\r\npython = sys.executable\r\ngit = os.environ.get('GIT', \"git\")\r\ntorch_command = os.environ.get('TORCH_COMMAND', \"pip install torch==1.12.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113\")\r\nrequirements_file = os.environ.get('REQS_FILE', \"requirements_versions.txt\")\r\ncommandline_args = os.environ.get('COMMANDLINE_ARGS', \"\")\r\n\r\nk_diffusion_package = os.environ.get('K_DIFFUSION_PACKAGE', \"git+https://github.com/crowsonkb/k-diffusion.git@1a0703dfb7d24d8806267c3e7ccc4caf67fd1331\")\r\ngfpgan_package = os.environ.get('GFPGAN_PACKAGE', \"git+https://github.com/TencentARC/GFPGAN.git@8d2447a2d918f8eba5a4a01463fd48e45126a379\")\r\n\r\nstable_diffusion_commit_hash = os.environ.get('STABLE_DIFFUSION_COMMIT_HASH', \"69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc\")\r\ntaming_transformers_commit_hash = os.environ.get('TAMING_TRANSFORMERS_COMMIT_HASH', \"24268930bf1dce879235a7fddd0b2355b84d7ea6\")\r\ncodeformer_commit_hash = os.environ.get('CODEFORMER_COMMIT_HASH', \"c5b4593074ba6214284d6acd5f1719b6c5d739af\")\r\nblip_commit_hash = os.environ.get('BLIP_COMMIT_HASH', \"48211a1594f1321b00f14c9f7a5b4813144b2fb9\")\r\n\r\nargs = shlex.split(commandline_args)\r\n\r\n\r\ndef extract_arg(args, name):\r\n return [x for x in args if x != name], name in args\r\n\r\n\r\nargs, skip_torch_cuda_test = extract_arg(args, '--skip-torch-cuda-test')\r\n\r\n\r\ndef repo_dir(name):\r\n return os.path.join(dir_repos, name)\r\n\r\n\r\ndef run(command, desc=None, errdesc=None):\r\n if desc is not None:\r\n print(desc)\r\n\r\n result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\r\n\r\n if result.returncode != 0:\r\n\r\n message = f\"\"\"{errdesc or 'Error running command'}.\r\nCommand: {command}\r\nError code: {result.returncode}\r\nstdout: {result.stdout.decode(encoding=\"utf8\", errors=\"ignore\") if len(result.stdout)>0 else '<empty>'}\r\nstderr: {result.stderr.decode(encoding=\"utf8\", errors=\"ignore\") if len(result.stderr)>0 else '<empty>'}\r\n\"\"\"\r\n raise RuntimeError(message)\r\n\r\n return result.stdout.decode(encoding=\"utf8\", errors=\"ignore\")\r\n\r\n\r\ndef run_python(code, desc=None, errdesc=None):\r\n return run(f'\"{python}\" -c \"{code}\"', desc, errdesc)\r\n\r\n\r\ndef run_pip(args, desc=None):\r\n return run(f'\"{python}\" -m pip {args} --prefer-binary', desc=f\"Installing {desc}\", errdesc=f\"Couldn't install {desc}\")\r\n\r\n\r\ndef check_run(command):\r\n result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)\r\n return result.returncode == 0\r\n\r\n\r\ndef check_run_python(code):\r\n return check_run(f'\"{python}\" -c \"{code}\"')\r\n\r\n\r\ndef 
is_installed(package):\r\n try:\r\n spec = importlib.util.find_spec(package)\r\n except ModuleNotFoundError:\r\n return False\r\n\r\n return spec is not None\r\n\r\n\r\ndef git_clone(url, dir, name, commithash=None):\r\n # TODO clone into temporary dir and move if successful\r\n\r\n if os.path.exists(dir):\r\n return\r\n\r\n run(f'\"{git}\" clone \"{url}\" \"{dir}\"', f\"Cloning {name} into {dir}...\", f\"Couldn't clone {name}\")\r\n\r\n if commithash is not None:\r\n run(f'\"{git}\" -C {dir} checkout {commithash}', None, \"Couldn't checkout {name}'s hash: {commithash}\")\r\n\r\n\r\ntry:\r\n commit = run(f\"{git} rev-parse HEAD\").strip()\r\nexcept Exception:\r\n commit = \"<none>\"\r\n\r\nprint(f\"Python {sys.version}\")\r\nprint(f\"Commit hash: {commit}\")\r\n\r\nif not is_installed(\"torch\"):\r\n run(f'\"{python}\" -m {torch_command}', \"Installing torch\", \"Couldn't install torch\")\r\n\r\nif not skip_torch_cuda_test:\r\n run_python(\"import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDINE_ARGS variable to disable this check'\")\r\n\r\nif not is_installed(\"k_diffusion.sampling\"):\r\n run_pip(f\"install {k_diffusion_package}\", \"k-diffusion\")\r\n\r\nif not is_installed(\"gfpgan\"):\r\n run_pip(f\"install {gfpgan_package}\", \"gfpgan\")\r\n\r\nos.makedirs(dir_repos, exist_ok=True)\r\n\r\ngit_clone(\"https://github.com/CompVis/stable-diffusion.git\", repo_dir('stable-diffusion'), \"Stable Diffusion\", stable_diffusion_commit_hash)\r\ngit_clone(\"https://github.com/CompVis/taming-transformers.git\", repo_dir('taming-transformers'), \"Taming Transformers\", taming_transformers_commit_hash)\r\ngit_clone(\"https://github.com/sczhou/CodeFormer.git\", repo_dir('CodeFormer'), \"CodeFormer\", codeformer_commit_hash)\r\ngit_clone(\"https://github.com/salesforce/BLIP.git\", repo_dir('BLIP'), \"BLIP\", blip_commit_hash)\r\n\r\nif not is_installed(\"lpips\"):\r\n run_pip(f\"install -r {os.path.join(repo_dir('CodeFormer'), 'requirements.txt')}\", \"requirements for CodeFormer\")\r\n\r\nrun_pip(f\"install -r {requirements_file}\", \"requirements for Web UI\")\r\n\r\nsys.argv += args\r\n\r\n\r\ndef start_webui():\r\n print(f\"Launching Web UI with arguments: {' '.join(sys.argv[1:])}\")\r\n import webui\r\n webui.webui()\r\n\r\nif __name__ == \"__main__\":\r\n start_webui()\r\n", "path": "launch.py"}]}
| 3,032 | 319 |
gh_patches_debug_9720
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-21138
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
triu_indices
Working on (#8431 -> #18033)
</issue>
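As a reference point for what a `triu_indices`-style function returns (using PyTorch's own `torch.triu_indices`, not the Paddle frontend being added here): it yields the row/column coordinates of the upper-triangular part of a `row x col` matrix, shifted by `offset`.

```python
# Reference sketch, PyTorch API: coordinates of the upper triangle of a 3x3 matrix.
import torch

coords = torch.triu_indices(3, 3, offset=0)  # shape (2, 6): 6 entries on/above the diagonal
print(coords)
# tensor([[0, 0, 0, 1, 1, 2],
#         [0, 1, 2, 1, 2, 2]])
```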
<code>
[start of ivy/functional/frontends/paddle/tensor/creation.py]
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from .tensor import Tensor
5 from ivy.functional.frontends.paddle.func_wrapper import (
6 to_ivy_arrays_and_back,
7 )
8
9
10 @to_ivy_arrays_and_back
11 def to_tensor(data, /, *, dtype=None, place=None, stop_gradient=True):
12 array = ivy.array(data, dtype=dtype, device=place)
13 return Tensor(array, dtype=dtype, place=place)
14
15
16 @with_unsupported_dtypes({"2.5.1 and below": "int8"}, "paddle")
17 @to_ivy_arrays_and_back
18 def ones(shape, /, *, dtype=None, name=None):
19 dtype = "float32" if dtype is None else dtype
20 return ivy.ones(shape, dtype=dtype)
21
22
23 @with_unsupported_dtypes(
24 {"2.5.1 and below": ("uint8", "int8", "complex64", "complex128")}, "paddle"
25 )
26 @to_ivy_arrays_and_back
27 def ones_like(x, /, *, dtype=None, name=None):
28 dtype = x.dtype if dtype is None else dtype
29 return ivy.ones_like(x, dtype=dtype)
30
31
32 @with_unsupported_dtypes({"2.5.1 and below": "int8"}, "paddle")
33 @to_ivy_arrays_and_back
34 def zeros(shape, /, *, dtype=None, name=None):
35 dtype = "float32" if dtype is None else dtype
36 return ivy.zeros(shape, dtype=dtype)
37
38
39 @with_unsupported_dtypes(
40 {"2.5.1 and below": ("uint8", "int8", "complex64", "complex128")}, "paddle"
41 )
42 @to_ivy_arrays_and_back
43 def zeros_like(x, /, *, dtype=None, name=None):
44 dtype = x.dtype if dtype is None else dtype
45 return ivy.zeros_like(x, dtype=dtype)
46
47
48 @to_ivy_arrays_and_back
49 def full(shape, fill_value, /, *, dtype=None, name=None):
50 dtype = "float32" if dtype is None else dtype
51 return ivy.full(shape, fill_value, dtype=dtype)
52
53
54 @to_ivy_arrays_and_back
55 def full_like(x, fill_value, /, *, dtype=None, name=None):
56 dtype = x.dtype if dtype is None else dtype
57 return ivy.full_like(x, fill_value, dtype=dtype)
58
59
60 @with_unsupported_dtypes({"2.5.1 and below": ("float16", "bfloat16")}, "paddle")
61 @to_ivy_arrays_and_back
62 def arange(start, end=None, step=1, dtype=None, name=None):
63 return ivy.arange(start, end, step=step, dtype=dtype)
64
65
66 @to_ivy_arrays_and_back
67 def empty(shape, dtype=None):
68 return ivy.empty(shape=shape, dtype=dtype)
69
70
71 @to_ivy_arrays_and_back
72 def eye(num_rows, num_columns=None, dtype=None, name=None):
73 return ivy.eye(num_rows, num_columns, dtype=dtype)
74
75
76 @to_ivy_arrays_and_back
77 def empty_like(x, dtype=None, name=None):
78 return ivy.empty_like(x, dtype=dtype)
79
80
81 @with_unsupported_dtypes(
82 {
83 "2.5.1 and below": (
84 "uint8",
85 "int8",
86 "int16",
87 "float16",
88 "complex64",
89 "complex128",
90 "bool",
91 )
92 },
93 "paddle",
94 )
95 @to_ivy_arrays_and_back
96 def tril(x, diagonal=0, name=None):
97 return ivy.tril(x, k=diagonal)
98
99
100 @with_unsupported_dtypes(
101 {
102 "2.5.1 and below": (
103 "uint8",
104 "int8",
105 "int16",
106 "float16",
107 "complex64",
108 "complex128",
109 "bool",
110 )
111 },
112 "paddle",
113 )
114 @to_ivy_arrays_and_back
115 def triu(x, diagonal=0, name=None):
116 return ivy.triu(x, k=diagonal)
117
118
119 @with_supported_dtypes(
120 {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle"
121 )
122 @to_ivy_arrays_and_back
123 def diagflat(x, offset=0, name=None):
124 arr = ivy.diagflat(x, offset=offset)
125 return arr
126
127
128 @with_supported_dtypes(
129 {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle"
130 )
131 @to_ivy_arrays_and_back
132 def meshgrid(*args, **kwargs):
133 return ivy.meshgrid(*args, indexing="ij")
134
135
136 @with_supported_dtypes({"2.5.1 and below": ("int32", "int64")}, "paddle")
137 @to_ivy_arrays_and_back
138 def tril_indices(row, col, offset=0, dtype="int64"):
139 arr = ivy.tril_indices(row, col, offset)
140 arr = ivy.astype(arr, dtype)
141 return arr
142
143
144 @with_supported_dtypes(
145 {"2.5.1 and below": ("float16", "float32", "float64", "int32", "int64", "bool")},
146 "paddle",
147 )
148 @to_ivy_arrays_and_back
149 def assign(x, output=None):
150 if len(ivy.shape(x)) == 0:
151 x = ivy.reshape(ivy.Array(x), (1,))
152 if ivy.exists(output):
153 output = ivy.reshape(ivy.Array(output), (1,))
154 else:
155 x = ivy.reshape(x, ivy.shape(x))
156 ret = ivy.copy_array(x, to_ivy_array=False, out=output)
157 return ret
158
159
160 @with_supported_dtypes(
161 {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle"
162 )
163 @to_ivy_arrays_and_back
164 def diag(x, offset=0, padding_value=0, name=None):
165 if len(x.shape) == 1:
166 padding_value = ivy.astype(padding_value, ivy.dtype(x))
167 ret = ivy.diagflat(x, offset=offset, padding_value=padding_value)
168 if len(ret.shape) != 2:
169 ret = ivy.reshape(ret, (1, 1))
170 else:
171 ret = ivy.diag(x, k=offset)
172 return ret
173
174
175 @with_supported_dtypes(
176 {"2.5.1 and below": ("float32", "float64", "int32", "int64")}, "paddle"
177 )
178 @to_ivy_arrays_and_back
179 def logspace(start, stop, num, base=10.0, dtype=None, name=None):
180 return ivy.logspace(start, stop, num=num, base=base, dtype=dtype)
181
182
183 @with_supported_dtypes(
184 {"2.5.1 and below": ("float32", "float64")},
185 "paddle",
186 )
187 @to_ivy_arrays_and_back
188 def complex(real, imag, name=None):
189 assert real.dtype == imag.dtype, (
190 "(InvalidArgument) The type of data we are trying to retrieve does not match"
191 " the type of data currently contained in the container."
192 )
193 complex_dtype = "complex64" if real.dtype == "float32" else "complex128"
194 imag_cmplx = ivy.astype(imag, complex_dtype) * 1j
195 complex_array = real + imag_cmplx
196 return complex_array
197
[end of ivy/functional/frontends/paddle/tensor/creation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ivy/functional/frontends/paddle/tensor/creation.py b/ivy/functional/frontends/paddle/tensor/creation.py
--- a/ivy/functional/frontends/paddle/tensor/creation.py
+++ b/ivy/functional/frontends/paddle/tensor/creation.py
@@ -180,6 +180,16 @@
return ivy.logspace(start, stop, num=num, base=base, dtype=dtype)
+@with_supported_dtypes({"2.5.1 and below": ("int32", "int64")}, "paddle")
+@to_ivy_arrays_and_back
+def triu_indices(row, col=None, offset=0, dtype="int64"):
+ arr = ivy.triu_indices(row, col, offset)
+ if not ivy.to_scalar(ivy.shape(arr[0], as_array=True)):
+ return arr
+ arr = ivy.astype(arr, dtype)
+ return arr
+
+
@with_supported_dtypes(
{"2.5.1 and below": ("float32", "float64")},
"paddle",
|
{"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/creation.py b/ivy/functional/frontends/paddle/tensor/creation.py\n--- a/ivy/functional/frontends/paddle/tensor/creation.py\n+++ b/ivy/functional/frontends/paddle/tensor/creation.py\n@@ -180,6 +180,16 @@\n return ivy.logspace(start, stop, num=num, base=base, dtype=dtype)\r\n \r\n \r\n+@with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\r\n+@to_ivy_arrays_and_back\r\n+def triu_indices(row, col=None, offset=0, dtype=\"int64\"):\r\n+ arr = ivy.triu_indices(row, col, offset)\r\n+ if not ivy.to_scalar(ivy.shape(arr[0], as_array=True)):\r\n+ return arr\r\n+ arr = ivy.astype(arr, dtype)\r\n+ return arr\r\n+\r\n+\r\n @with_supported_dtypes(\r\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\r\n \"paddle\",\n", "issue": "triu_indices\nWorking on (#8431 -> #18033)\n", "before_files": [{"content": "# global\r\nimport ivy\r\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\r\nfrom .tensor import Tensor\r\nfrom ivy.functional.frontends.paddle.func_wrapper import (\r\n to_ivy_arrays_and_back,\r\n)\r\n\r\n\r\n@to_ivy_arrays_and_back\r\ndef to_tensor(data, /, *, dtype=None, place=None, stop_gradient=True):\r\n array = ivy.array(data, dtype=dtype, device=place)\r\n return Tensor(array, dtype=dtype, place=place)\r\n\r\n\r\n@with_unsupported_dtypes({\"2.5.1 and below\": \"int8\"}, \"paddle\")\r\n@to_ivy_arrays_and_back\r\ndef ones(shape, /, *, dtype=None, name=None):\r\n dtype = \"float32\" if dtype is None else dtype\r\n return ivy.ones(shape, dtype=dtype)\r\n\r\n\r\n@with_unsupported_dtypes(\r\n {\"2.5.1 and below\": (\"uint8\", \"int8\", \"complex64\", \"complex128\")}, \"paddle\"\r\n)\r\n@to_ivy_arrays_and_back\r\ndef ones_like(x, /, *, dtype=None, name=None):\r\n dtype = x.dtype if dtype is None else dtype\r\n return ivy.ones_like(x, dtype=dtype)\r\n\r\n\r\n@with_unsupported_dtypes({\"2.5.1 and below\": \"int8\"}, \"paddle\")\r\n@to_ivy_arrays_and_back\r\ndef zeros(shape, /, *, dtype=None, name=None):\r\n dtype = \"float32\" if dtype is None else dtype\r\n return ivy.zeros(shape, dtype=dtype)\r\n\r\n\r\n@with_unsupported_dtypes(\r\n {\"2.5.1 and below\": (\"uint8\", \"int8\", \"complex64\", \"complex128\")}, \"paddle\"\r\n)\r\n@to_ivy_arrays_and_back\r\ndef zeros_like(x, /, *, dtype=None, name=None):\r\n dtype = x.dtype if dtype is None else dtype\r\n return ivy.zeros_like(x, dtype=dtype)\r\n\r\n\r\n@to_ivy_arrays_and_back\r\ndef full(shape, fill_value, /, *, dtype=None, name=None):\r\n dtype = \"float32\" if dtype is None else dtype\r\n return ivy.full(shape, fill_value, dtype=dtype)\r\n\r\n\r\n@to_ivy_arrays_and_back\r\ndef full_like(x, fill_value, /, *, dtype=None, name=None):\r\n dtype = x.dtype if dtype is None else dtype\r\n return ivy.full_like(x, fill_value, dtype=dtype)\r\n\r\n\r\n@with_unsupported_dtypes({\"2.5.1 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\r\n@to_ivy_arrays_and_back\r\ndef arange(start, end=None, step=1, dtype=None, name=None):\r\n return ivy.arange(start, end, step=step, dtype=dtype)\r\n\r\n\r\n@to_ivy_arrays_and_back\r\ndef empty(shape, dtype=None):\r\n return ivy.empty(shape=shape, dtype=dtype)\r\n\r\n\r\n@to_ivy_arrays_and_back\r\ndef eye(num_rows, num_columns=None, dtype=None, name=None):\r\n return ivy.eye(num_rows, num_columns, dtype=dtype)\r\n\r\n\r\n@to_ivy_arrays_and_back\r\ndef empty_like(x, dtype=None, name=None):\r\n return ivy.empty_like(x, dtype=dtype)\r\n\r\n\r\n@with_unsupported_dtypes(\r\n {\r\n \"2.5.1 and below\": (\r\n 
\"uint8\",\r\n \"int8\",\r\n \"int16\",\r\n \"float16\",\r\n \"complex64\",\r\n \"complex128\",\r\n \"bool\",\r\n )\r\n },\r\n \"paddle\",\r\n)\r\n@to_ivy_arrays_and_back\r\ndef tril(x, diagonal=0, name=None):\r\n return ivy.tril(x, k=diagonal)\r\n\r\n\r\n@with_unsupported_dtypes(\r\n {\r\n \"2.5.1 and below\": (\r\n \"uint8\",\r\n \"int8\",\r\n \"int16\",\r\n \"float16\",\r\n \"complex64\",\r\n \"complex128\",\r\n \"bool\",\r\n )\r\n },\r\n \"paddle\",\r\n)\r\n@to_ivy_arrays_and_back\r\ndef triu(x, diagonal=0, name=None):\r\n return ivy.triu(x, k=diagonal)\r\n\r\n\r\n@with_supported_dtypes(\r\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\r\n)\r\n@to_ivy_arrays_and_back\r\ndef diagflat(x, offset=0, name=None):\r\n arr = ivy.diagflat(x, offset=offset)\r\n return arr\r\n\r\n\r\n@with_supported_dtypes(\r\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\r\n)\r\n@to_ivy_arrays_and_back\r\ndef meshgrid(*args, **kwargs):\r\n return ivy.meshgrid(*args, indexing=\"ij\")\r\n\r\n\r\n@with_supported_dtypes({\"2.5.1 and below\": (\"int32\", \"int64\")}, \"paddle\")\r\n@to_ivy_arrays_and_back\r\ndef tril_indices(row, col, offset=0, dtype=\"int64\"):\r\n arr = ivy.tril_indices(row, col, offset)\r\n arr = ivy.astype(arr, dtype)\r\n return arr\r\n\r\n\r\n@with_supported_dtypes(\r\n {\"2.5.1 and below\": (\"float16\", \"float32\", \"float64\", \"int32\", \"int64\", \"bool\")},\r\n \"paddle\",\r\n)\r\n@to_ivy_arrays_and_back\r\ndef assign(x, output=None):\r\n if len(ivy.shape(x)) == 0:\r\n x = ivy.reshape(ivy.Array(x), (1,))\r\n if ivy.exists(output):\r\n output = ivy.reshape(ivy.Array(output), (1,))\r\n else:\r\n x = ivy.reshape(x, ivy.shape(x))\r\n ret = ivy.copy_array(x, to_ivy_array=False, out=output)\r\n return ret\r\n\r\n\r\n@with_supported_dtypes(\r\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\r\n)\r\n@to_ivy_arrays_and_back\r\ndef diag(x, offset=0, padding_value=0, name=None):\r\n if len(x.shape) == 1:\r\n padding_value = ivy.astype(padding_value, ivy.dtype(x))\r\n ret = ivy.diagflat(x, offset=offset, padding_value=padding_value)\r\n if len(ret.shape) != 2:\r\n ret = ivy.reshape(ret, (1, 1))\r\n else:\r\n ret = ivy.diag(x, k=offset)\r\n return ret\r\n\r\n\r\n@with_supported_dtypes(\r\n {\"2.5.1 and below\": (\"float32\", \"float64\", \"int32\", \"int64\")}, \"paddle\"\r\n)\r\n@to_ivy_arrays_and_back\r\ndef logspace(start, stop, num, base=10.0, dtype=None, name=None):\r\n return ivy.logspace(start, stop, num=num, base=base, dtype=dtype)\r\n\r\n\r\n@with_supported_dtypes(\r\n {\"2.5.1 and below\": (\"float32\", \"float64\")},\r\n \"paddle\",\r\n)\r\n@to_ivy_arrays_and_back\r\ndef complex(real, imag, name=None):\r\n assert real.dtype == imag.dtype, (\r\n \"(InvalidArgument) The type of data we are trying to retrieve does not match\"\r\n \" the type of data currently contained in the container.\"\r\n )\r\n complex_dtype = \"complex64\" if real.dtype == \"float32\" else \"complex128\"\r\n imag_cmplx = ivy.astype(imag, complex_dtype) * 1j\r\n complex_array = real + imag_cmplx\r\n return complex_array\r\n", "path": "ivy/functional/frontends/paddle/tensor/creation.py"}]}
| 2,778 | 251 |
gh_patches_debug_21709
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__pytorch-lightning-706
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TensorBoardLogger and ModelCheckpoint are not using the same folder by default
## 🐛 Bug
(master branch)
By default, the TensorBoardLogger writes logs into `lightning_logs/0`, but ModelCheckpoint writes checkpoints into `lightning_logs/version_0`.
</issue>
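The mismatch can be seen directly from the two path-building conventions in the code below: the logger joins `str(self.version)` into the log dir, while the `'version_%s'` convention (also used in `save()` below, and by ModelCheckpoint) produces `version_0`. A tiny illustrative sketch, with made-up values:

```python
# Illustration only, mirroring the two joins in tensorboard.py shown below
# (the real code also inserts the experiment name between the two components).
import os

save_dir, version = "lightning_logs", 0  # hypothetical values
tb_log_dir = os.path.join(save_dir, str(version))           # -> lightning_logs/0
ckpt_dir = os.path.join(save_dir, "version_%s" % version)   # -> lightning_logs/version_0
print(tb_log_dir, ckpt_dir)
```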
<code>
[start of pytorch_lightning/logging/tensorboard.py]
1 import os
2 from warnings import warn
3 from argparse import Namespace
4 from pkg_resources import parse_version
5
6 import torch
7 import pandas as pd
8 from torch.utils.tensorboard import SummaryWriter
9
10 from .base import LightningLoggerBase, rank_zero_only
11
12
13 class TensorBoardLogger(LightningLoggerBase):
14 r"""
15
16 Log to local file system in TensorBoard format
17
18 Implemented using :class:`torch.utils.tensorboard.SummaryWriter`. Logs are saved to
19 `os.path.join(save_dir, name, version)`
20
21 Example
22 --------
23
24 .. code-block:: python
25
26 logger = TensorBoardLogger("tb_logs", name="my_model")
27 trainer = Trainer(logger=logger)
28 trainer.train(model)
29
30 Args:
31 save_dir (str): Save directory
32 name (str): Experiment name. Defaults to "default".
33 version (int): Experiment version. If version is not specified the logger inspects the save
34 directory for existing versions, then automatically assigns the next available version.
35 \**kwargs (dict): Other arguments are passed directly to the :class:`SummaryWriter` constructor.
36
37 """
38 NAME_CSV_TAGS = 'meta_tags.csv'
39
40 def __init__(self, save_dir, name="default", version=None, **kwargs):
41 super().__init__()
42 self.save_dir = save_dir
43 self._name = name
44 self._version = version
45
46 self._experiment = None
47 self.tags = {}
48 self.kwargs = kwargs
49
50 @property
51 def experiment(self):
52 r"""
53
54 Actual tensorboard object. To use tensorboard features do the following.
55
56 Example::
57
58 self.logger.experiment.some_tensorboard_function()
59
60 """
61 if self._experiment is not None:
62 return self._experiment
63
64 root_dir = os.path.join(self.save_dir, self.name)
65 os.makedirs(root_dir, exist_ok=True)
66 log_dir = os.path.join(root_dir, str(self.version))
67 self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)
68 return self._experiment
69
70 @rank_zero_only
71 def log_hyperparams(self, params):
72 if params is None:
73 return
74
75 # in case converting from namespace
76 if isinstance(params, Namespace):
77 params = vars(params)
78 params = dict(params)
79
80 if parse_version(torch.__version__) < parse_version("1.3.0"):
81 warn(
82 f"Hyperparameter logging is not available for Torch version {torch.__version__}."
83 " Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable"
84 " hyperparameter logging."
85 )
86 else:
87 # `add_hparams` requires both - hparams and metric
88 self.experiment.add_hparams(hparam_dict=params, metric_dict={})
89 # some alternative should be added
90 self.tags.update(params)
91
92 @rank_zero_only
93 def log_metrics(self, metrics, step=None):
94 for k, v in metrics.items():
95 if isinstance(v, torch.Tensor):
96 v = v.item()
97 self.experiment.add_scalar(k, v, step)
98
99 @rank_zero_only
100 def save(self):
101 try:
102 self.experiment.flush()
103 except AttributeError:
104 # you are using PT version (<v1.2) which does not have implemented flush
105 self.experiment._get_file_writer().flush()
106
107 # create a preudo standard path ala test-tube
108 dir_path = os.path.join(self.save_dir, self.name, 'version_%s' % self.version)
109 if not os.path.isdir(dir_path):
110 dir_path = self.save_dir
111 # prepare the file path
112 meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)
113 # save the metatags file
114 df = pd.DataFrame({'key': list(self.tags.keys()),
115 'value': list(self.tags.values())})
116 df.to_csv(meta_tags_path, index=False)
117
118 @rank_zero_only
119 def finalize(self, status):
120 self.save()
121
122 @property
123 def name(self):
124 return self._name
125
126 @property
127 def version(self):
128 if self._version is None:
129 self._version = self._get_next_version()
130 return self._version
131
132 def _get_next_version(self):
133 root_dir = os.path.join(self.save_dir, self.name)
134 existing_versions = [
135 int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()
136 ]
137 if len(existing_versions) == 0:
138 return 0
139 else:
140 return max(existing_versions) + 1
141
[end of pytorch_lightning/logging/tensorboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pytorch_lightning/logging/tensorboard.py b/pytorch_lightning/logging/tensorboard.py
--- a/pytorch_lightning/logging/tensorboard.py
+++ b/pytorch_lightning/logging/tensorboard.py
@@ -63,7 +63,7 @@
root_dir = os.path.join(self.save_dir, self.name)
os.makedirs(root_dir, exist_ok=True)
- log_dir = os.path.join(root_dir, str(self.version))
+ log_dir = os.path.join(root_dir, "version_" + str(self.version))
self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)
return self._experiment
@@ -131,9 +131,11 @@
def _get_next_version(self):
root_dir = os.path.join(self.save_dir, self.name)
- existing_versions = [
- int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()
- ]
+ existing_versions = []
+ for d in os.listdir(root_dir):
+ if os.path.isdir(os.path.join(root_dir, d)) and d.startswith("version_"):
+ existing_versions.append(int(d.split("_")[1]))
+
if len(existing_versions) == 0:
return 0
else:
|
{"golden_diff": "diff --git a/pytorch_lightning/logging/tensorboard.py b/pytorch_lightning/logging/tensorboard.py\n--- a/pytorch_lightning/logging/tensorboard.py\n+++ b/pytorch_lightning/logging/tensorboard.py\n@@ -63,7 +63,7 @@\n \n root_dir = os.path.join(self.save_dir, self.name)\n os.makedirs(root_dir, exist_ok=True)\n- log_dir = os.path.join(root_dir, str(self.version))\n+ log_dir = os.path.join(root_dir, \"version_\" + str(self.version))\n self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)\n return self._experiment\n \n@@ -131,9 +131,11 @@\n \n def _get_next_version(self):\n root_dir = os.path.join(self.save_dir, self.name)\n- existing_versions = [\n- int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()\n- ]\n+ existing_versions = []\n+ for d in os.listdir(root_dir):\n+ if os.path.isdir(os.path.join(root_dir, d)) and d.startswith(\"version_\"):\n+ existing_versions.append(int(d.split(\"_\")[1]))\n+\n if len(existing_versions) == 0:\n return 0\n else:\n", "issue": "TensorBoardLogger and ModelCheckpoint are not using the same folder by default\n## \ud83d\udc1b Bug\r\n(master branch)\r\nBy default, the TensorBoardLogger writes logs into `lightning_logs/0` but ModelCheckpoint writes checkpoint into `lightning_logs/version_0`.\n", "before_files": [{"content": "import os\nfrom warnings import warn\nfrom argparse import Namespace\nfrom pkg_resources import parse_version\n\nimport torch\nimport pandas as pd\nfrom torch.utils.tensorboard import SummaryWriter\n\nfrom .base import LightningLoggerBase, rank_zero_only\n\n\nclass TensorBoardLogger(LightningLoggerBase):\n r\"\"\"\n\n Log to local file system in TensorBoard format\n\n Implemented using :class:`torch.utils.tensorboard.SummaryWriter`. Logs are saved to\n `os.path.join(save_dir, name, version)`\n\n Example\n --------\n\n .. code-block:: python\n\n logger = TensorBoardLogger(\"tb_logs\", name=\"my_model\")\n trainer = Trainer(logger=logger)\n trainer.train(model)\n\n Args:\n save_dir (str): Save directory\n name (str): Experiment name. Defaults to \"default\".\n version (int): Experiment version. If version is not specified the logger inspects the save\n directory for existing versions, then automatically assigns the next available version.\n \\**kwargs (dict): Other arguments are passed directly to the :class:`SummaryWriter` constructor.\n\n \"\"\"\n NAME_CSV_TAGS = 'meta_tags.csv'\n\n def __init__(self, save_dir, name=\"default\", version=None, **kwargs):\n super().__init__()\n self.save_dir = save_dir\n self._name = name\n self._version = version\n\n self._experiment = None\n self.tags = {}\n self.kwargs = kwargs\n\n @property\n def experiment(self):\n r\"\"\"\n\n Actual tensorboard object. 
To use tensorboard features do the following.\n\n Example::\n\n self.logger.experiment.some_tensorboard_function()\n\n \"\"\"\n if self._experiment is not None:\n return self._experiment\n\n root_dir = os.path.join(self.save_dir, self.name)\n os.makedirs(root_dir, exist_ok=True)\n log_dir = os.path.join(root_dir, str(self.version))\n self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)\n return self._experiment\n\n @rank_zero_only\n def log_hyperparams(self, params):\n if params is None:\n return\n\n # in case converting from namespace\n if isinstance(params, Namespace):\n params = vars(params)\n params = dict(params)\n\n if parse_version(torch.__version__) < parse_version(\"1.3.0\"):\n warn(\n f\"Hyperparameter logging is not available for Torch version {torch.__version__}.\"\n \" Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable\"\n \" hyperparameter logging.\"\n )\n else:\n # `add_hparams` requires both - hparams and metric\n self.experiment.add_hparams(hparam_dict=params, metric_dict={})\n # some alternative should be added\n self.tags.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics, step=None):\n for k, v in metrics.items():\n if isinstance(v, torch.Tensor):\n v = v.item()\n self.experiment.add_scalar(k, v, step)\n\n @rank_zero_only\n def save(self):\n try:\n self.experiment.flush()\n except AttributeError:\n # you are using PT version (<v1.2) which does not have implemented flush\n self.experiment._get_file_writer().flush()\n\n # create a preudo standard path ala test-tube\n dir_path = os.path.join(self.save_dir, self.name, 'version_%s' % self.version)\n if not os.path.isdir(dir_path):\n dir_path = self.save_dir\n # prepare the file path\n meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)\n # save the metatags file\n df = pd.DataFrame({'key': list(self.tags.keys()),\n 'value': list(self.tags.values())})\n df.to_csv(meta_tags_path, index=False)\n\n @rank_zero_only\n def finalize(self, status):\n self.save()\n\n @property\n def name(self):\n return self._name\n\n @property\n def version(self):\n if self._version is None:\n self._version = self._get_next_version()\n return self._version\n\n def _get_next_version(self):\n root_dir = os.path.join(self.save_dir, self.name)\n existing_versions = [\n int(d) for d in os.listdir(root_dir) if os.path.isdir(os.path.join(root_dir, d)) and d.isdigit()\n ]\n if len(existing_versions) == 0:\n return 0\n else:\n return max(existing_versions) + 1\n", "path": "pytorch_lightning/logging/tensorboard.py"}]}
| 1,907 | 285 |
gh_patches_debug_17184
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-33
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Do not print page title above each page
Having an h1 HOME on your home page looks stupid. Either remove this completely and show the currently selected page in the menu, or put the page title at the top of the content by default, so it is there by default but can be edited away if needed
</issue>
<code>
[start of django/comicsite/views.py]
1 '''
2 Created on Jun 18, 2012
3
4 Testing views. Each of these views is referenced in urls.py
5
6 @author: Sjoerd
7 '''
8
9 from django.http import HttpResponse
10 from django.http import Http404
11 from django.shortcuts import render_to_response
12 from django.template import RequestContext
13
14 from comicsite.models import ComicSite,Page,ComicSiteException
15 from dataproviders import FileSystemDataProvider
16
17
18 def index(request):
19 return HttpResponse("ComicSite index page.",context_instance=RequestContext(request))
20
21
22 def site(request, site_short_name):
23 """ show a single COMIC site, default start page """
24 #TODO: Is it bad to use site name here, which is not the specified key?
25
26 site = getSite(site_short_name)
27
28 pages = getPages(site_short_name)
29
30 return render_to_response('page.html', {'site': site, 'page': pages[0], "pages":pages },context_instance=RequestContext(request))
31
32
33 def page(request, site_short_name, page_title):
34 """ show a single page on a site """
35
36 try:
37 p = Page.objects.get(ComicSite__short_name=site_short_name, title=page_title)
38 except Page.DoesNotExist:
39 raise Http404
40 pages = getPages(site_short_name)
41
42 return render_to_response('page.html', {'site': p.ComicSite, 'page': p, "pages":pages },context_instance=RequestContext(request))
43
44
45
46
47 def dataPage(request):
48 """ test function for data provider. Just get some files from provider and show them as list"""
49 #= r"D:\userdata\Sjoerd\Aptana Studio 3 Workspace\comic-django\django\static\files"
50
51 path = r"D:\userdata\Sjoerd\Aptana Studio 3 Workspace\comic-django\django\static\files"
52 dp = FileSystemDataProvider.FileSystemDataProvider(path)
53 images = dp.getImages()
54
55 htmlOut = "available files:"+", ".join(images)
56 p = createTestPage(html=htmlOut)
57 pages = [p]
58
59 return render_to_response('page.html', {'site': p.ComicSite, 'page': p, "pages":pages },context_instance=RequestContext(request))
60
61 # ======================================== not called directly from urls.py =========================================
62
63 def getSite(site_short_name):
64 try:
65 site = ComicSite.objects.get(short_name=site_short_name)
66 except ComicSite.DoesNotExist:
67 raise Http404
68 return site
69
70
71 def getPages(site_short_name):
72 """ get all pages of the given site from db"""
73 try:
74 pages = Page.objects.filter(ComicSite__short_name=site_short_name)
75 except Page.DoesNotExist:
76 raise Http404
77 return pages
78
79 # trying to follow pep 0008 here, finally.
80 def site_exists(site_short_name):
81 try:
82 site = ComicSite.objects.get(short_name=site_short_name)
83 return True
84 except ComicSite.DoesNotExist:
85 return False
86
87
88 # ====================================================== debug and test ==================================================
89 def createTestPage(title="testPage",html=""):
90 """ Create a quick mockup on the ComicSite 'Test'"""
91
92 if site_exists("test"):
93 #TODO log a warning here, no exception.
94 raise ComicSiteException("I am creating a spoof ComicSite called 'test' on the fly, by a project called 'test' was already defined in DB. This message should be a warning instead of an exception")
95
96 # if no site exists by that name, create it on the fly.
97 site = ComicSite()
98 site.short_name = "test"
99 site.name = "Test Page"
100 site.skin = ""
101
102 return Page(ComicSite=site,title=title,html=html)
103
104
105 def givePageHTML(page):
106 return "<h1>%s</h1> <p>%s</p>" %(page.title ,page.html)
[end of django/comicsite/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/django/comicsite/views.py b/django/comicsite/views.py
--- a/django/comicsite/views.py
+++ b/django/comicsite/views.py
@@ -27,7 +27,7 @@
pages = getPages(site_short_name)
- return render_to_response('page.html', {'site': site, 'page': pages[0], "pages":pages },context_instance=RequestContext(request))
+ return render_to_response('page.html', {'site': site, 'currentpage': pages[0], "pages":pages },context_instance=RequestContext(request))
def page(request, site_short_name, page_title):
@@ -39,7 +39,7 @@
raise Http404
pages = getPages(site_short_name)
- return render_to_response('page.html', {'site': p.ComicSite, 'page': p, "pages":pages },context_instance=RequestContext(request))
+ return render_to_response('page.html', {'site': p.ComicSite, 'currentpage': p, "pages":pages },context_instance=RequestContext(request))
|
{"golden_diff": "diff --git a/django/comicsite/views.py b/django/comicsite/views.py\n--- a/django/comicsite/views.py\n+++ b/django/comicsite/views.py\n@@ -27,7 +27,7 @@\n \n pages = getPages(site_short_name)\n \n- return render_to_response('page.html', {'site': site, 'page': pages[0], \"pages\":pages },context_instance=RequestContext(request))\n+ return render_to_response('page.html', {'site': site, 'currentpage': pages[0], \"pages\":pages },context_instance=RequestContext(request))\n \n \n def page(request, site_short_name, page_title):\n@@ -39,7 +39,7 @@\n raise Http404\n pages = getPages(site_short_name)\n \n- return render_to_response('page.html', {'site': p.ComicSite, 'page': p, \"pages\":pages },context_instance=RequestContext(request))\n+ return render_to_response('page.html', {'site': p.ComicSite, 'currentpage': p, \"pages\":pages },context_instance=RequestContext(request))\n", "issue": "Do not print page title above each page\nHaving a h1 HOME on your home page looks stupid. Either remove this completely and show currently selected page in menu, or put page title at top of content by default, so it is there by default but can be edited away if needed \n\n", "before_files": [{"content": "'''\nCreated on Jun 18, 2012\n\nTesting views. Each of these views is referenced in urls.py \n\n@author: Sjoerd\n'''\n\nfrom django.http import HttpResponse\nfrom django.http import Http404\nfrom django.shortcuts import render_to_response\nfrom django.template import RequestContext\n\nfrom comicsite.models import ComicSite,Page,ComicSiteException\nfrom dataproviders import FileSystemDataProvider\n\n\ndef index(request):\n return HttpResponse(\"ComicSite index page.\",context_instance=RequestContext(request))\n\n\ndef site(request, site_short_name):\n \"\"\" show a single COMIC site, default start page \"\"\"\n #TODO: Is it bad to use site name here, which is not the specified key?\n \n site = getSite(site_short_name)\n \n pages = getPages(site_short_name)\n \n return render_to_response('page.html', {'site': site, 'page': pages[0], \"pages\":pages },context_instance=RequestContext(request))\n \n\ndef page(request, site_short_name, page_title):\n \"\"\" show a single page on a site \"\"\"\n \n try:\n p = Page.objects.get(ComicSite__short_name=site_short_name, title=page_title)\n except Page.DoesNotExist: \n raise Http404\n pages = getPages(site_short_name)\n \n return render_to_response('page.html', {'site': p.ComicSite, 'page': p, \"pages\":pages },context_instance=RequestContext(request))\n \n \n \n\ndef dataPage(request):\n \"\"\" test function for data provider. 
Just get some files from provider and show them as list\"\"\"\n #= r\"D:\\userdata\\Sjoerd\\Aptana Studio 3 Workspace\\comic-django\\django\\static\\files\"\n \n path = r\"D:\\userdata\\Sjoerd\\Aptana Studio 3 Workspace\\comic-django\\django\\static\\files\"\n dp = FileSystemDataProvider.FileSystemDataProvider(path)\n images = dp.getImages()\n \n htmlOut = \"available files:\"+\", \".join(images)\n p = createTestPage(html=htmlOut)\n pages = [p]\n \n return render_to_response('page.html', {'site': p.ComicSite, 'page': p, \"pages\":pages },context_instance=RequestContext(request))\n\n# ======================================== not called directly from urls.py =========================================\n\ndef getSite(site_short_name):\n try:\n site = ComicSite.objects.get(short_name=site_short_name)\n except ComicSite.DoesNotExist: \n raise Http404 \n return site \n \n \ndef getPages(site_short_name):\n \"\"\" get all pages of the given site from db\"\"\"\n try:\n pages = Page.objects.filter(ComicSite__short_name=site_short_name)\n except Page.DoesNotExist: \n raise Http404\n return pages\n\n# trying to follow pep 0008 here, finally.\ndef site_exists(site_short_name):\n try:\n site = ComicSite.objects.get(short_name=site_short_name)\n return True\n except ComicSite.DoesNotExist: \n return False\n \n \n# ====================================================== debug and test ==================================================\ndef createTestPage(title=\"testPage\",html=\"\"):\n \"\"\" Create a quick mockup on the ComicSite 'Test'\"\"\"\n \n if site_exists(\"test\"):\n #TODO log a warning here, no exception.\n raise ComicSiteException(\"I am creating a spoof ComicSite called 'test' on the fly, by a project called 'test' was already defined in DB. This message should be a warning instead of an exception\") \n \n # if no site exists by that name, create it on the fly.\n site = ComicSite()\n site.short_name = \"test\"\n site.name = \"Test Page\"\n site.skin = \"\"\n \n return Page(ComicSite=site,title=title,html=html)\n \n\ndef givePageHTML(page):\n return \"<h1>%s</h1> <p>%s</p>\" %(page.title ,page.html)", "path": "django/comicsite/views.py"}]}
| 1,654 | 245 |
gh_patches_debug_19590 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1913
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change order of the blog posts
Currently all blog posts that are published on grand-challenge are sorted by the date the post was initially created. We would like to change this to the date the post was published, so that the most recent posts are shown on top. I had contact with @jmsmkn on Slack, after which he suggested a few changes I could make in the code to achieve this. After discussing this with Kiran we thought it might be best to first create an issue here.
</issue>
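Editor's illustration (not part of the original record): the requested ordering can be approximated by refreshing the timestamp used for ordering at the moment a draft is first published. A minimal sketch, assuming Django and a model shaped like the Post model shown later in this record:

from django.db import models
from django.utils import timezone


class Post(models.Model):
    title = models.CharField(max_length=1024)
    created = models.DateTimeField(auto_now_add=True)
    published = models.BooleanField(default=False)

    class Meta:
        ordering = ("-created",)  # newest first

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Remember the value loaded from the database so save() can detect
        # the moment a draft flips to published.
        self._published_orig = self.published

    def save(self, *args, **kwargs):
        # On the False -> True transition, reset the timestamp used for
        # ordering so the freshly published post appears on top.
        if self._published_orig is False and self.published is True:
            self.created = timezone.now()
        super().save(*args, **kwargs)

The existing ordering = ("-created",) then keeps working unchanged; the timestamp is only reset on the False-to-True transition of published, which is the approach taken by the golden diff at the end of this record.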
<code>
[start of app/grandchallenge/blogs/models.py]
1 from django.conf import settings
2 from django.contrib.auth import get_user_model
3 from django.db import models
4 from django_extensions.db.fields import AutoSlugField
5 from simple_history.models import HistoricalRecords
6 from stdimage import JPEGField
7
8 from grandchallenge.core.storage import get_logo_path, public_s3_storage
9 from grandchallenge.subdomains.utils import reverse
10
11
12 class Tag(models.Model):
13 name = models.CharField(max_length=200, unique=True)
14 slug = AutoSlugField(populate_from="name", max_length=200)
15
16 def __str__(self):
17 return self.name
18
19
20 class Post(models.Model):
21 created = models.DateTimeField(auto_now_add=True)
22 modified = models.DateTimeField(auto_now=True)
23
24 title = models.CharField(max_length=1024)
25 slug = AutoSlugField(populate_from="title", max_length=1024)
26 description = models.TextField()
27 content = models.TextField()
28
29 authors = models.ManyToManyField(
30 to=get_user_model(), related_name="blog_authors"
31 )
32
33 logo = JPEGField(
34 upload_to=get_logo_path,
35 storage=public_s3_storage,
36 variations=settings.STDIMAGE_SOCIAL_VARIATIONS,
37 )
38
39 tags = models.ManyToManyField(to=Tag, blank=True, related_name="posts")
40
41 published = models.BooleanField(default=False)
42
43 history = HistoricalRecords()
44
45 class Meta:
46 ordering = ("-created",)
47
48 def __str__(self):
49 return self.title
50
51 def get_absolute_url(self):
52 return reverse("blogs:detail", kwargs={"slug": self.slug})
53
54 @property
55 def public(self):
56 return self.published
57
[end of app/grandchallenge/blogs/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/blogs/models.py b/app/grandchallenge/blogs/models.py
--- a/app/grandchallenge/blogs/models.py
+++ b/app/grandchallenge/blogs/models.py
@@ -1,6 +1,7 @@
from django.conf import settings
from django.contrib.auth import get_user_model
from django.db import models
+from django.utils import timezone
from django_extensions.db.fields import AutoSlugField
from simple_history.models import HistoricalRecords
from stdimage import JPEGField
@@ -48,6 +49,16 @@
def __str__(self):
return self.title
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self._published_orig = self.published
+
+ def save(self, *args, **kwargs):
+ if self._published_orig is False and self.published is True:
+ self.created = timezone.now()
+
+ super().save(*args, **kwargs)
+
def get_absolute_url(self):
return reverse("blogs:detail", kwargs={"slug": self.slug})
|
{"golden_diff": "diff --git a/app/grandchallenge/blogs/models.py b/app/grandchallenge/blogs/models.py\n--- a/app/grandchallenge/blogs/models.py\n+++ b/app/grandchallenge/blogs/models.py\n@@ -1,6 +1,7 @@\n from django.conf import settings\n from django.contrib.auth import get_user_model\n from django.db import models\n+from django.utils import timezone\n from django_extensions.db.fields import AutoSlugField\n from simple_history.models import HistoricalRecords\n from stdimage import JPEGField\n@@ -48,6 +49,16 @@\n def __str__(self):\n return self.title\n \n+ def __init__(self, *args, **kwargs):\n+ super().__init__(*args, **kwargs)\n+ self._published_orig = self.published\n+\n+ def save(self, *args, **kwargs):\n+ if self._published_orig is False and self.published is True:\n+ self.created = timezone.now()\n+\n+ super().save(*args, **kwargs)\n+\n def get_absolute_url(self):\n return reverse(\"blogs:detail\", kwargs={\"slug\": self.slug})\n", "issue": "Change order of the blog posts\nCurrently all blog posts that are published on grand-challenge are sorted based on the date the post was initially created. We would like to change this to the date the post was published such that the most recent post are shown on top. I had contact with @jmsmkn on slack after which he suggested a few changes I could make in the code to change this. After discussing this with Kiran we thought it might be best to first create an issue here. \r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.db import models\nfrom django_extensions.db.fields import AutoSlugField\nfrom simple_history.models import HistoricalRecords\nfrom stdimage import JPEGField\n\nfrom grandchallenge.core.storage import get_logo_path, public_s3_storage\nfrom grandchallenge.subdomains.utils import reverse\n\n\nclass Tag(models.Model):\n name = models.CharField(max_length=200, unique=True)\n slug = AutoSlugField(populate_from=\"name\", max_length=200)\n\n def __str__(self):\n return self.name\n\n\nclass Post(models.Model):\n created = models.DateTimeField(auto_now_add=True)\n modified = models.DateTimeField(auto_now=True)\n\n title = models.CharField(max_length=1024)\n slug = AutoSlugField(populate_from=\"title\", max_length=1024)\n description = models.TextField()\n content = models.TextField()\n\n authors = models.ManyToManyField(\n to=get_user_model(), related_name=\"blog_authors\"\n )\n\n logo = JPEGField(\n upload_to=get_logo_path,\n storage=public_s3_storage,\n variations=settings.STDIMAGE_SOCIAL_VARIATIONS,\n )\n\n tags = models.ManyToManyField(to=Tag, blank=True, related_name=\"posts\")\n\n published = models.BooleanField(default=False)\n\n history = HistoricalRecords()\n\n class Meta:\n ordering = (\"-created\",)\n\n def __str__(self):\n return self.title\n\n def get_absolute_url(self):\n return reverse(\"blogs:detail\", kwargs={\"slug\": self.slug})\n\n @property\n def public(self):\n return self.published\n", "path": "app/grandchallenge/blogs/models.py"}]}
| 1,102 | 238 |
gh_patches_debug_32645 | rasdani/github-patches | git_diff | ddionrails__ddionrails-707
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove "namespace" fields in search index
Do we need a "namespace" field in the search index?
https://github.com/ddionrails/ddionrails/blob/f4f4c9356d2b23b596b02e9ca921106635a20282/templates/elastic/help.html#L13
https://github.com/ddionrails/ddionrails/blob/4f50e614f95c26c0625243a66608ea8ea0c52d84/ddionrails/instruments/imports.py#L53
https://github.com/ddionrails/ddionrails/blob/4f50e614f95c26c0625243a66608ea8ea0c52d84/ddionrails/data/imports.py#L66
</issue>
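Editor's illustration (not part of the original record): since "namespace" always carries the same value as "study", the document sent to Elasticsearch can simply omit it. Below is a sketch of the per-variable _source payload built in the variables() generator from this record, with the redundant key dropped; the helper name is hypothetical.

def build_variable_source(variable, period, analysis_unit, sub_type):
    """Sketch only: same fields as the record's variables() generator,
    minus the redundant "namespace" entry (it duplicated "study")."""
    return {
        "name": variable.name,
        "variable": variable.name,
        "label": variable.label,
        "label_de": variable.label_de,
        "dataset": variable.dataset.name,
        "period": period,
        "sub_type": sub_type,
        "analysis_unit": analysis_unit,
        "study": variable.dataset.study.name,
        "categories": variable.categories,
    }

The questions() generator carries the same duplicated field and would get the analogous treatment, as the golden diff at the end of this record shows.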
<code>
[start of ddionrails/imports/management/commands/index.py]
1 # -*- coding: utf-8 -*-
2
3 """ "Index" management command for ddionrails project """
4
5 import json
6 import pathlib
7
8 import djclick as click
9 from django.conf import settings
10 from elasticsearch import Elasticsearch, helpers
11
12 from ddionrails.concepts.models import Concept
13 from ddionrails.data.models import Variable
14 from ddionrails.instruments.models import Question
15 from ddionrails.publications.models import Publication
16 from ddionrails.studies.models import Study
17
18 elasticsearch_client = Elasticsearch(hosts=[settings.INDEX_HOST])
19
20
21 def create(mapping_file: str) -> None:
22 """ Create an Elasticsearch index
23
24 using:
25 - settings.INDEX_HOST
26 - settings.INDEX_NAME
27 - mapping_file
28 """
29 if elasticsearch_client.indices.exists(settings.INDEX_NAME):
30 click.secho(f'Index "{settings.INDEX_NAME}" already exists.', fg="red")
31 exit(1)
32 else:
33 if pathlib.Path(mapping_file).exists() is False:
34 click.secho(f'Mapping file "{mapping_file}" not found.', fg="red")
35 exit(1)
36 else:
37 with open(mapping_file, "r") as infile:
38 mapping = json.load(infile)
39 click.secho(
40 f'Creating index "{settings.INDEX_NAME}" with maping from "{mapping_file}"',
41 fg="green",
42 )
43 result = elasticsearch_client.indices.create(
44 index=settings.INDEX_NAME, body=mapping
45 )
46 click.secho(str(result), fg="green")
47
48
49 def delete() -> None:
50 """ Delete an Elasticsearch index
51
52 using:
53 - settings.INDEX_HOST
54 - settings.INDEX_NAME
55 """
56 if elasticsearch_client.indices.exists(settings.INDEX_NAME):
57 click.secho(f'Deleting index "{settings.INDEX_NAME}"', fg="green")
58 result = elasticsearch_client.indices.delete(index=settings.INDEX_NAME)
59 click.secho(str(result), fg="green")
60 else:
61 click.secho(f'Index "{settings.INDEX_NAME}" does not exist.', fg="red")
62 exit(1)
63
64
65 def reset(mapping_file: str) -> None:
66 """ Reset an Elasticsearch index
67
68 using:
69 - settings.INDEX_HOST
70 - settings.INDEX_NAME
71 - mapping_file
72 """
73 if pathlib.Path(mapping_file).exists() is False:
74 click.secho(f'Mapping file "{mapping_file}" not found.', fg="red")
75 exit(1)
76 delete()
77 create(mapping_file)
78
79
80 def concepts():
81 """ Iterate over all concepts in the database """
82
83 queryset = Concept.objects.prefetch_related("variables").all()
84 for concept in queryset:
85 study = list(
86 Study.objects.filter(datasets__variables__concept_id=concept.id)
87 .values_list("name", flat=True)
88 .distinct()
89 )
90 yield {
91 "_index": settings.INDEX_NAME,
92 "_type": concept.DOC_TYPE,
93 "_id": str(concept.id),
94 "_source": {"name": concept.name, "label": concept.label, "study": study},
95 }
96
97
98 def publications():
99 """ Iterate over all publications in the database """
100
101 queryset = Publication.objects.select_related("study").all()
102 for publication in queryset:
103 yield {
104 "_index": settings.INDEX_NAME,
105 "_type": publication.DOC_TYPE,
106 "_id": str(publication.id),
107 "_source": publication.to_elastic_dict(),
108 }
109
110
111 def questions():
112 """ Iterate over all questions in the database """
113
114 queryset = Question.objects.select_related(
115 "instrument",
116 "instrument__analysis_unit",
117 "instrument__period",
118 "instrument__study",
119 ).all()
120 for question in queryset:
121 period = question.get_period(period_id="name")
122 try:
123 analysis_unit = question.instrument.analysis_unit.name
124 except AttributeError:
125 analysis_unit = None
126 yield {
127 "_index": settings.INDEX_NAME,
128 "_type": question.DOC_TYPE,
129 "_id": str(question.id),
130 "_source": {
131 "period": period,
132 "analysis_unit": analysis_unit,
133 "question": question.name,
134 "name": question.name,
135 "label": question.label,
136 "items": question.items,
137 "sort_id": question.sort_id,
138 "instrument": question.instrument.name,
139 "study": question.instrument.study.name,
140 "namespace": question.instrument.study.name,
141 },
142 }
143
144
145 def variables():
146 """ Iterate over all variables in the database """
147
148 queryset = Variable.objects.select_related(
149 "dataset",
150 "dataset__study",
151 "dataset__analysis_unit",
152 "dataset__conceptual_dataset",
153 "dataset__period",
154 ).all()
155 for variable in queryset:
156 period = variable.get_period(period_id="name")
157 try:
158 analysis_unit = variable.dataset.analysis_unit.name
159 except AttributeError:
160 analysis_unit = None
161 try:
162 sub_type = variable.dataset.conceptual_dataset.name
163 except AttributeError:
164 sub_type = None
165
166 yield {
167 "_index": settings.INDEX_NAME,
168 "_type": variable.DOC_TYPE,
169 "_id": str(variable.id),
170 "_source": {
171 "name": variable.name,
172 "variable": variable.name,
173 "label": variable.label,
174 "label_de": variable.label_de,
175 "dataset": variable.dataset.name,
176 "period": period,
177 "sub_type": sub_type,
178 "analysis_unit": analysis_unit,
179 "study": variable.dataset.study.name,
180 "namespace": variable.dataset.study.name,
181 "categories": variable.categories,
182 },
183 }
184
185
186 def populate():
187 """ Workaround """
188 print(f"Indexing {Publication.objects.count()} publications into Elasticsearch")
189 result = helpers.bulk(elasticsearch_client, publications())
190 print(result)
191
192 print(f"Indexing {Concept.objects.count()} concepts into Elasticsearch")
193 result = helpers.bulk(elasticsearch_client, concepts())
194 print(result)
195
196 print(f"Indexing {Question.objects.count()} questions into Elasticsearch")
197 result = helpers.bulk(elasticsearch_client, questions())
198 print(result)
199
200 print(f"Indexing {Variable.objects.count()} variables into Elasticsearch")
201 result = helpers.bulk(elasticsearch_client, variables())
202 print(result)
203
204
205 @click.group()
206 def command():
207 """ddionrails: Elasticsearch index creation/deletion/reset tool."""
208
209
210 @command.command(
211 "create",
212 short_help='Create the index defined in "settings.INDEX_NAME" and the given "mapping_file"',
213 )
214 @click.option(
215 "-f",
216 "--file",
217 "mapping_file",
218 default="ddionrails/elastic/mapping.json",
219 help='Elasticsearch mapping file in JSON format (defaults to "ddionrails/elastic/mapping.json")',
220 )
221 def create_command(mapping_file: str) -> None:
222 """ Create an Elasticsearch index
223
224 using:\n
225 - settings.INDEX_HOST\n
226 - settings.INDEX_NAME\n
227 - mapping_file
228 """
229 create(mapping_file)
230
231
232 @command.command("delete", short_help='Delete the index defined in "settings.INDEX_NAME"')
233 def delete_command():
234 """ Delete an Elasticsearch index
235
236 using:\n
237 - settings.INDEX_HOST\n
238 - settings.INDEX_NAME
239 """
240 delete()
241
242
243 @command.command(
244 "reset",
245 short_help='Reset the index defined in "settings.INDEX_NAME" and the given "mapping_file"',
246 )
247 @click.option(
248 "-f",
249 "--file",
250 "mapping_file",
251 default="ddionrails/elastic/mapping.json",
252 help='Elasticsearch mapping file in JSON format (defaults to "ddionrails/elastic/mapping.json")',
253 )
254 def reset(mapping_file: str) -> None:
255 """ Reset an Elasticsearch index
256
257 using:\n
258 - settings.INDEX_HOST\n
259 - settings.INDEX_NAME\n
260 - mapping_file
261 """
262 delete()
263 create(mapping_file)
264
265
266 @command.command("populate", short_help="Populate the Elasticsearch index")
267 def populate_command() -> None:
268 """ Populate the Elasticsearch index """
269 populate()
270
271
272 # remove "verbosity", "settings", "pythonpath", "traceback", "color" options from django-click
273 command.params = command.params[:2] + command.params[7:]
274
[end of ddionrails/imports/management/commands/index.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddionrails/imports/management/commands/index.py b/ddionrails/imports/management/commands/index.py
--- a/ddionrails/imports/management/commands/index.py
+++ b/ddionrails/imports/management/commands/index.py
@@ -122,7 +122,7 @@
try:
analysis_unit = question.instrument.analysis_unit.name
except AttributeError:
- analysis_unit = None
+ analysis_unit = "None"
yield {
"_index": settings.INDEX_NAME,
"_type": question.DOC_TYPE,
@@ -137,7 +137,6 @@
"sort_id": question.sort_id,
"instrument": question.instrument.name,
"study": question.instrument.study.name,
- "namespace": question.instrument.study.name,
},
}
@@ -157,11 +156,11 @@
try:
analysis_unit = variable.dataset.analysis_unit.name
except AttributeError:
- analysis_unit = None
+ analysis_unit = "None"
try:
sub_type = variable.dataset.conceptual_dataset.name
except AttributeError:
- sub_type = None
+ sub_type = "None"
yield {
"_index": settings.INDEX_NAME,
@@ -177,7 +176,6 @@
"sub_type": sub_type,
"analysis_unit": analysis_unit,
"study": variable.dataset.study.name,
- "namespace": variable.dataset.study.name,
"categories": variable.categories,
},
}
|
{"golden_diff": "diff --git a/ddionrails/imports/management/commands/index.py b/ddionrails/imports/management/commands/index.py\n--- a/ddionrails/imports/management/commands/index.py\n+++ b/ddionrails/imports/management/commands/index.py\n@@ -122,7 +122,7 @@\n try:\n analysis_unit = question.instrument.analysis_unit.name\n except AttributeError:\n- analysis_unit = None\n+ analysis_unit = \"None\"\n yield {\n \"_index\": settings.INDEX_NAME,\n \"_type\": question.DOC_TYPE,\n@@ -137,7 +137,6 @@\n \"sort_id\": question.sort_id,\n \"instrument\": question.instrument.name,\n \"study\": question.instrument.study.name,\n- \"namespace\": question.instrument.study.name,\n },\n }\n \n@@ -157,11 +156,11 @@\n try:\n analysis_unit = variable.dataset.analysis_unit.name\n except AttributeError:\n- analysis_unit = None\n+ analysis_unit = \"None\"\n try:\n sub_type = variable.dataset.conceptual_dataset.name\n except AttributeError:\n- sub_type = None\n+ sub_type = \"None\"\n \n yield {\n \"_index\": settings.INDEX_NAME,\n@@ -177,7 +176,6 @@\n \"sub_type\": sub_type,\n \"analysis_unit\": analysis_unit,\n \"study\": variable.dataset.study.name,\n- \"namespace\": variable.dataset.study.name,\n \"categories\": variable.categories,\n },\n }\n", "issue": "Remove \"namespace\" fields in search index\nDo we need a \"namespace\" field in the search index?\r\n\r\nhttps://github.com/ddionrails/ddionrails/blob/f4f4c9356d2b23b596b02e9ca921106635a20282/templates/elastic/help.html#L13\r\nhttps://github.com/ddionrails/ddionrails/blob/4f50e614f95c26c0625243a66608ea8ea0c52d84/ddionrails/instruments/imports.py#L53\r\nhttps://github.com/ddionrails/ddionrails/blob/4f50e614f95c26c0625243a66608ea8ea0c52d84/ddionrails/data/imports.py#L66\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" \"Index\" management command for ddionrails project \"\"\"\n\nimport json\nimport pathlib\n\nimport djclick as click\nfrom django.conf import settings\nfrom elasticsearch import Elasticsearch, helpers\n\nfrom ddionrails.concepts.models import Concept\nfrom ddionrails.data.models import Variable\nfrom ddionrails.instruments.models import Question\nfrom ddionrails.publications.models import Publication\nfrom ddionrails.studies.models import Study\n\nelasticsearch_client = Elasticsearch(hosts=[settings.INDEX_HOST])\n\n\ndef create(mapping_file: str) -> None:\n \"\"\" Create an Elasticsearch index\n\n using:\n - settings.INDEX_HOST\n - settings.INDEX_NAME\n - mapping_file\n \"\"\"\n if elasticsearch_client.indices.exists(settings.INDEX_NAME):\n click.secho(f'Index \"{settings.INDEX_NAME}\" already exists.', fg=\"red\")\n exit(1)\n else:\n if pathlib.Path(mapping_file).exists() is False:\n click.secho(f'Mapping file \"{mapping_file}\" not found.', fg=\"red\")\n exit(1)\n else:\n with open(mapping_file, \"r\") as infile:\n mapping = json.load(infile)\n click.secho(\n f'Creating index \"{settings.INDEX_NAME}\" with maping from \"{mapping_file}\"',\n fg=\"green\",\n )\n result = elasticsearch_client.indices.create(\n index=settings.INDEX_NAME, body=mapping\n )\n click.secho(str(result), fg=\"green\")\n\n\ndef delete() -> None:\n \"\"\" Delete an Elasticsearch index\n\n using:\n - settings.INDEX_HOST\n - settings.INDEX_NAME\n \"\"\"\n if elasticsearch_client.indices.exists(settings.INDEX_NAME):\n click.secho(f'Deleting index \"{settings.INDEX_NAME}\"', fg=\"green\")\n result = elasticsearch_client.indices.delete(index=settings.INDEX_NAME)\n click.secho(str(result), fg=\"green\")\n else:\n click.secho(f'Index \"{settings.INDEX_NAME}\" 
does not exist.', fg=\"red\")\n exit(1)\n\n\ndef reset(mapping_file: str) -> None:\n \"\"\" Reset an Elasticsearch index\n\n using:\n - settings.INDEX_HOST\n - settings.INDEX_NAME\n - mapping_file\n \"\"\"\n if pathlib.Path(mapping_file).exists() is False:\n click.secho(f'Mapping file \"{mapping_file}\" not found.', fg=\"red\")\n exit(1)\n delete()\n create(mapping_file)\n\n\ndef concepts():\n \"\"\" Iterate over all concepts in the database \"\"\"\n\n queryset = Concept.objects.prefetch_related(\"variables\").all()\n for concept in queryset:\n study = list(\n Study.objects.filter(datasets__variables__concept_id=concept.id)\n .values_list(\"name\", flat=True)\n .distinct()\n )\n yield {\n \"_index\": settings.INDEX_NAME,\n \"_type\": concept.DOC_TYPE,\n \"_id\": str(concept.id),\n \"_source\": {\"name\": concept.name, \"label\": concept.label, \"study\": study},\n }\n\n\ndef publications():\n \"\"\" Iterate over all publications in the database \"\"\"\n\n queryset = Publication.objects.select_related(\"study\").all()\n for publication in queryset:\n yield {\n \"_index\": settings.INDEX_NAME,\n \"_type\": publication.DOC_TYPE,\n \"_id\": str(publication.id),\n \"_source\": publication.to_elastic_dict(),\n }\n\n\ndef questions():\n \"\"\" Iterate over all questions in the database \"\"\"\n\n queryset = Question.objects.select_related(\n \"instrument\",\n \"instrument__analysis_unit\",\n \"instrument__period\",\n \"instrument__study\",\n ).all()\n for question in queryset:\n period = question.get_period(period_id=\"name\")\n try:\n analysis_unit = question.instrument.analysis_unit.name\n except AttributeError:\n analysis_unit = None\n yield {\n \"_index\": settings.INDEX_NAME,\n \"_type\": question.DOC_TYPE,\n \"_id\": str(question.id),\n \"_source\": {\n \"period\": period,\n \"analysis_unit\": analysis_unit,\n \"question\": question.name,\n \"name\": question.name,\n \"label\": question.label,\n \"items\": question.items,\n \"sort_id\": question.sort_id,\n \"instrument\": question.instrument.name,\n \"study\": question.instrument.study.name,\n \"namespace\": question.instrument.study.name,\n },\n }\n\n\ndef variables():\n \"\"\" Iterate over all variables in the database \"\"\"\n\n queryset = Variable.objects.select_related(\n \"dataset\",\n \"dataset__study\",\n \"dataset__analysis_unit\",\n \"dataset__conceptual_dataset\",\n \"dataset__period\",\n ).all()\n for variable in queryset:\n period = variable.get_period(period_id=\"name\")\n try:\n analysis_unit = variable.dataset.analysis_unit.name\n except AttributeError:\n analysis_unit = None\n try:\n sub_type = variable.dataset.conceptual_dataset.name\n except AttributeError:\n sub_type = None\n\n yield {\n \"_index\": settings.INDEX_NAME,\n \"_type\": variable.DOC_TYPE,\n \"_id\": str(variable.id),\n \"_source\": {\n \"name\": variable.name,\n \"variable\": variable.name,\n \"label\": variable.label,\n \"label_de\": variable.label_de,\n \"dataset\": variable.dataset.name,\n \"period\": period,\n \"sub_type\": sub_type,\n \"analysis_unit\": analysis_unit,\n \"study\": variable.dataset.study.name,\n \"namespace\": variable.dataset.study.name,\n \"categories\": variable.categories,\n },\n }\n\n\ndef populate():\n \"\"\" Workaround \"\"\"\n print(f\"Indexing {Publication.objects.count()} publications into Elasticsearch\")\n result = helpers.bulk(elasticsearch_client, publications())\n print(result)\n\n print(f\"Indexing {Concept.objects.count()} concepts into Elasticsearch\")\n result = helpers.bulk(elasticsearch_client, concepts())\n 
print(result)\n\n print(f\"Indexing {Question.objects.count()} questions into Elasticsearch\")\n result = helpers.bulk(elasticsearch_client, questions())\n print(result)\n\n print(f\"Indexing {Variable.objects.count()} variables into Elasticsearch\")\n result = helpers.bulk(elasticsearch_client, variables())\n print(result)\n\n\[email protected]()\ndef command():\n \"\"\"ddionrails: Elasticsearch index creation/deletion/reset tool.\"\"\"\n\n\[email protected](\n \"create\",\n short_help='Create the index defined in \"settings.INDEX_NAME\" and the given \"mapping_file\"',\n)\[email protected](\n \"-f\",\n \"--file\",\n \"mapping_file\",\n default=\"ddionrails/elastic/mapping.json\",\n help='Elasticsearch mapping file in JSON format (defaults to \"ddionrails/elastic/mapping.json\")',\n)\ndef create_command(mapping_file: str) -> None:\n \"\"\" Create an Elasticsearch index\n\n using:\\n\n - settings.INDEX_HOST\\n\n - settings.INDEX_NAME\\n\n - mapping_file\n \"\"\"\n create(mapping_file)\n\n\[email protected](\"delete\", short_help='Delete the index defined in \"settings.INDEX_NAME\"')\ndef delete_command():\n \"\"\" Delete an Elasticsearch index\n\n using:\\n\n - settings.INDEX_HOST\\n\n - settings.INDEX_NAME\n \"\"\"\n delete()\n\n\[email protected](\n \"reset\",\n short_help='Reset the index defined in \"settings.INDEX_NAME\" and the given \"mapping_file\"',\n)\[email protected](\n \"-f\",\n \"--file\",\n \"mapping_file\",\n default=\"ddionrails/elastic/mapping.json\",\n help='Elasticsearch mapping file in JSON format (defaults to \"ddionrails/elastic/mapping.json\")',\n)\ndef reset(mapping_file: str) -> None:\n \"\"\" Reset an Elasticsearch index\n\n using:\\n\n - settings.INDEX_HOST\\n\n - settings.INDEX_NAME\\n\n - mapping_file\n \"\"\"\n delete()\n create(mapping_file)\n\n\[email protected](\"populate\", short_help=\"Populate the Elasticsearch index\")\ndef populate_command() -> None:\n \"\"\" Populate the Elasticsearch index \"\"\"\n populate()\n\n\n# remove \"verbosity\", \"settings\", \"pythonpath\", \"traceback\", \"color\" options from django-click\ncommand.params = command.params[:2] + command.params[7:]\n", "path": "ddionrails/imports/management/commands/index.py"}]}
| 3,225 | 333 |
gh_patches_debug_42716 | rasdani/github-patches | git_diff | pallets__click-1061
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New upcoming hidden option in Click.Option is not hidden from bash completion
Thanks for the wonderful package.
I don't know whether this is intended behavior or not, but I was trying out the upcoming 7.0 release's hidden option and autocompletion support for Click.Option: a hidden option is hidden from the help message, but it still shows up in bash completion.
However, it would be good if we could hide it from bash completion as well, so that a hidden option is actually hidden everywhere.
</issue>
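Editor's illustration (not part of the original report): the completion code shown later in this record collects option completions from every Option on the command, so honouring the new flag means skipping parameters whose hidden attribute is set. A minimal sketch, assuming Click's Option.hidden attribute from the 7.0 development line; the function name is hypothetical.

from click import Option


def visible_option_completions(ctx, all_args, incomplete):
    """Sketch only: like the option branch of get_choices() in this record,
    but skipping any Option marked hidden=True."""
    completions = []
    for param in ctx.command.params:
        if isinstance(param, Option) and not param.hidden:
            opts = [o for o in param.opts + param.secondary_opts
                    if o not in all_args or param.multiple]
            completions.extend((o, param.help) for o in opts if o.startswith(incomplete))
    return completions

The same kind of test (skipping commands whose hidden flag is set) applies when listing subcommands of a MultiCommand, which is how the golden diff at the end of this record handles it.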
<code>
[start of click/_bashcomplete.py]
1 import collections
2 import copy
3 import os
4 import re
5
6 from .utils import echo
7 from .parser import split_arg_string
8 from .core import MultiCommand, Option, Argument
9 from .types import Choice
10
11 WORDBREAK = '='
12
13 # Note, only BASH version 4.4 and later have the nosort option.
14 COMPLETION_SCRIPT_BASH = '''
15 %(complete_func)s() {
16 local IFS=$'\n'
17 COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\
18 COMP_CWORD=$COMP_CWORD \\
19 %(autocomplete_var)s=complete $1 ) )
20 return 0
21 }
22
23 %(complete_func)setup() {
24 local COMPLETION_OPTIONS=""
25 local BASH_VERSION_ARR=(${BASH_VERSION//./ })
26 if [ ${BASH_VERSION_ARR[0]} -ge 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ];then
27 COMPLETION_OPTIONS="-o nosort"
28 fi
29
30 complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s
31 }
32
33 %(complete_func)setup
34 '''
35
36 COMPLETION_SCRIPT_ZSH = '''
37 %(complete_func)s() {
38 local -a completions
39 local -a completions_with_descriptions
40 local -a response
41 response=("${(@f)$( env COMP_WORDS=\"${words[*]}\" \\
42 COMP_CWORD=$((CURRENT-1)) \\
43 %(autocomplete_var)s=\"complete_zsh\" \\
44 %(script_names)s )}")
45
46 for key descr in ${(kv)response}; do
47 if [[ "$descr" == "_" ]]; then
48 completions+=("$key")
49 else
50 completions_with_descriptions+=("$key":"$descr")
51 fi
52 done
53
54 if [ -n "$completions_with_descriptions" ]; then
55 _describe -V unsorted completions_with_descriptions -U -Q
56 fi
57
58 if [ -n "$completions" ]; then
59 compadd -U -V unsorted -Q -a completions
60 fi
61 compstate[insert]="automenu"
62 }
63
64 compdef %(complete_func)s %(script_names)s
65 '''
66
67 _invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')
68
69
70 def get_completion_script(prog_name, complete_var, shell):
71 cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))
72 script = COMPLETION_SCRIPT_ZSH if shell == 'zsh' else COMPLETION_SCRIPT_BASH
73 return (script % {
74 'complete_func': '_%s_completion' % cf_name,
75 'script_names': prog_name,
76 'autocomplete_var': complete_var,
77 }).strip() + ';'
78
79
80 def resolve_ctx(cli, prog_name, args):
81 """
82 Parse into a hierarchy of contexts. Contexts are connected through the parent variable.
83 :param cli: command definition
84 :param prog_name: the program that is running
85 :param args: full list of args
86 :return: the final context/command parsed
87 """
88 ctx = cli.make_context(prog_name, args, resilient_parsing=True)
89 args = ctx.protected_args + ctx.args
90 while args:
91 if isinstance(ctx.command, MultiCommand):
92 if not ctx.command.chain:
93 cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
94 if cmd is None:
95 return ctx
96 ctx = cmd.make_context(cmd_name, args, parent=ctx,
97 resilient_parsing=True)
98 args = ctx.protected_args + ctx.args
99 else:
100 # Walk chained subcommand contexts saving the last one.
101 while args:
102 cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)
103 if cmd is None:
104 return ctx
105 sub_ctx = cmd.make_context(cmd_name, args, parent=ctx,
106 allow_extra_args=True,
107 allow_interspersed_args=False,
108 resilient_parsing=True)
109 args = sub_ctx.args
110 ctx = sub_ctx
111 args = sub_ctx.protected_args + sub_ctx.args
112 else:
113 break
114 return ctx
115
116
117 def start_of_option(param_str):
118 """
119 :param param_str: param_str to check
120 :return: whether or not this is the start of an option declaration (i.e. starts "-" or "--")
121 """
122 return param_str and param_str[:1] == '-'
123
124
125 def is_incomplete_option(all_args, cmd_param):
126 """
127 :param all_args: the full original list of args supplied
128 :param cmd_param: the current command paramter
129 :return: whether or not the last option declaration (i.e. starts "-" or "--") is incomplete and
130 corresponds to this cmd_param. In other words whether this cmd_param option can still accept
131 values
132 """
133 if not isinstance(cmd_param, Option):
134 return False
135 if cmd_param.is_flag:
136 return False
137 last_option = None
138 for index, arg_str in enumerate(reversed([arg for arg in all_args if arg != WORDBREAK])):
139 if index + 1 > cmd_param.nargs:
140 break
141 if start_of_option(arg_str):
142 last_option = arg_str
143
144 return True if last_option and last_option in cmd_param.opts else False
145
146
147 def is_incomplete_argument(current_params, cmd_param):
148 """
149 :param current_params: the current params and values for this argument as already entered
150 :param cmd_param: the current command parameter
151 :return: whether or not the last argument is incomplete and corresponds to this cmd_param. In
152 other words whether or not the this cmd_param argument can still accept values
153 """
154 if not isinstance(cmd_param, Argument):
155 return False
156 current_param_values = current_params[cmd_param.name]
157 if current_param_values is None:
158 return True
159 if cmd_param.nargs == -1:
160 return True
161 if isinstance(current_param_values, collections.Iterable) \
162 and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:
163 return True
164 return False
165
166
167 def get_user_autocompletions(ctx, args, incomplete, cmd_param):
168 """
169 :param ctx: context associated with the parsed command
170 :param args: full list of args
171 :param incomplete: the incomplete text to autocomplete
172 :param cmd_param: command definition
173 :return: all the possible user-specified completions for the param
174 """
175 results = []
176 if isinstance(cmd_param.type, Choice):
177 # Choices don't support descriptions.
178 results = [(c, None)
179 for c in cmd_param.type.choices if c.startswith(incomplete)]
180 elif cmd_param.autocompletion is not None:
181 dynamic_completions = cmd_param.autocompletion(ctx=ctx,
182 args=args,
183 incomplete=incomplete)
184 results = [c if isinstance(c, tuple) else (c, None)
185 for c in dynamic_completions]
186 return results
187
188
189 def add_subcommand_completions(ctx, incomplete, completions_out):
190 # Add subcommand completions.
191 if isinstance(ctx.command, MultiCommand):
192 completions_out.extend(
193 [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in ctx.command.list_commands(ctx) if c.startswith(incomplete)])
194
195 # Walk up the context list and add any other completion possibilities from chained commands
196 while ctx.parent is not None:
197 ctx = ctx.parent
198 if isinstance(ctx.command, MultiCommand) and ctx.command.chain:
199 remaining_commands = sorted(
200 set(ctx.command.list_commands(ctx)) - set(ctx.protected_args))
201 completions_out.extend(
202 [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in remaining_commands if c.startswith(incomplete)])
203
204
205 def get_choices(cli, prog_name, args, incomplete):
206 """
207 :param cli: command definition
208 :param prog_name: the program that is running
209 :param args: full list of args
210 :param incomplete: the incomplete text to autocomplete
211 :return: all the possible completions for the incomplete
212 """
213 all_args = copy.deepcopy(args)
214
215 ctx = resolve_ctx(cli, prog_name, args)
216 if ctx is None:
217 return []
218
219 # In newer versions of bash long opts with '='s are partitioned, but it's easier to parse
220 # without the '='
221 if start_of_option(incomplete) and WORDBREAK in incomplete:
222 partition_incomplete = incomplete.partition(WORDBREAK)
223 all_args.append(partition_incomplete[0])
224 incomplete = partition_incomplete[2]
225 elif incomplete == WORDBREAK:
226 incomplete = ''
227
228 completions = []
229 if start_of_option(incomplete):
230 # completions for partial options
231 for param in ctx.command.params:
232 if isinstance(param, Option):
233 param_opts = [param_opt for param_opt in param.opts +
234 param.secondary_opts if param_opt not in all_args or param.multiple]
235 completions.extend(
236 [(o, param.help) for o in param_opts if o.startswith(incomplete)])
237 return completions
238 # completion for option values from user supplied values
239 for param in ctx.command.params:
240 if is_incomplete_option(all_args, param):
241 return get_user_autocompletions(ctx, all_args, incomplete, param)
242 # completion for argument values from user supplied values
243 for param in ctx.command.params:
244 if is_incomplete_argument(ctx.params, param):
245 return get_user_autocompletions(ctx, all_args, incomplete, param)
246
247 add_subcommand_completions(ctx, incomplete, completions)
248 # Sort before returning so that proper ordering can be enforced in custom types.
249 return sorted(completions)
250
251
252 def do_complete(cli, prog_name, include_descriptions):
253 cwords = split_arg_string(os.environ['COMP_WORDS'])
254 cword = int(os.environ['COMP_CWORD'])
255 args = cwords[1:cword]
256 try:
257 incomplete = cwords[cword]
258 except IndexError:
259 incomplete = ''
260
261 for item in get_choices(cli, prog_name, args, incomplete):
262 echo(item[0])
263 if include_descriptions:
264 # ZSH has trouble dealing with empty array parameters when returned from commands, so use a well defined character '_' to indicate no description is present.
265 echo(item[1] if item[1] else '_')
266
267 return True
268
269
270 def bashcomplete(cli, prog_name, complete_var, complete_instr):
271 if complete_instr.startswith('source'):
272 shell = 'zsh' if complete_instr == 'source_zsh' else 'bash'
273 echo(get_completion_script(prog_name, complete_var, shell))
274 return True
275 elif complete_instr == 'complete' or complete_instr == 'complete_zsh':
276 return do_complete(cli, prog_name, complete_instr == 'complete_zsh')
277 return False
278
[end of click/_bashcomplete.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py
--- a/click/_bashcomplete.py
+++ b/click/_bashcomplete.py
@@ -23,7 +23,8 @@
%(complete_func)setup() {
local COMPLETION_OPTIONS=""
local BASH_VERSION_ARR=(${BASH_VERSION//./ })
- if [ ${BASH_VERSION_ARR[0]} -ge 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ];then
+ # Only BASH version 4.4 and later have the nosort option.
+ if [ ${BASH_VERSION_ARR[0]} -gt 4 ] || ([ ${BASH_VERSION_ARR[0]} -eq 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ]); then
COMPLETION_OPTIONS="-o nosort"
fi
@@ -176,7 +177,7 @@
if isinstance(cmd_param.type, Choice):
# Choices don't support descriptions.
results = [(c, None)
- for c in cmd_param.type.choices if c.startswith(incomplete)]
+ for c in cmd_param.type.choices if str(c).startswith(incomplete)]
elif cmd_param.autocompletion is not None:
dynamic_completions = cmd_param.autocompletion(ctx=ctx,
args=args,
@@ -186,20 +187,32 @@
return results
+def get_visible_commands_starting_with(ctx, starts_with):
+ """
+ :param ctx: context associated with the parsed command
+ :starts_with: string that visible commands must start with.
+ :return: all visible (not hidden) commands that start with starts_with.
+ """
+ for c in ctx.command.list_commands(ctx):
+ if c.startswith(starts_with):
+ command = ctx.command.get_command(ctx, c)
+ if not command.hidden:
+ yield command
+
+
def add_subcommand_completions(ctx, incomplete, completions_out):
# Add subcommand completions.
if isinstance(ctx.command, MultiCommand):
completions_out.extend(
- [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in ctx.command.list_commands(ctx) if c.startswith(incomplete)])
+ [(c.name, c.get_short_help_str()) for c in get_visible_commands_starting_with(ctx, incomplete)])
# Walk up the context list and add any other completion possibilities from chained commands
while ctx.parent is not None:
ctx = ctx.parent
if isinstance(ctx.command, MultiCommand) and ctx.command.chain:
- remaining_commands = sorted(
- set(ctx.command.list_commands(ctx)) - set(ctx.protected_args))
- completions_out.extend(
- [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in remaining_commands if c.startswith(incomplete)])
+ remaining_commands = [c for c in get_visible_commands_starting_with(ctx, incomplete)
+ if c.name not in ctx.protected_args]
+ completions_out.extend([(c.name, c.get_short_help_str()) for c in remaining_commands])
def get_choices(cli, prog_name, args, incomplete):
@@ -229,11 +242,10 @@
if start_of_option(incomplete):
# completions for partial options
for param in ctx.command.params:
- if isinstance(param, Option):
+ if isinstance(param, Option) and not param.hidden:
param_opts = [param_opt for param_opt in param.opts +
param.secondary_opts if param_opt not in all_args or param.multiple]
- completions.extend(
- [(o, param.help) for o in param_opts if o.startswith(incomplete)])
+ completions.extend([(o, param.help) for o in param_opts if o.startswith(incomplete)])
return completions
# completion for option values from user supplied values
for param in ctx.command.params:
|
{"golden_diff": "diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py\n--- a/click/_bashcomplete.py\n+++ b/click/_bashcomplete.py\n@@ -23,7 +23,8 @@\n %(complete_func)setup() {\n local COMPLETION_OPTIONS=\"\"\n local BASH_VERSION_ARR=(${BASH_VERSION//./ })\n- if [ ${BASH_VERSION_ARR[0]} -ge 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ];then\n+ # Only BASH version 4.4 and later have the nosort option.\n+ if [ ${BASH_VERSION_ARR[0]} -gt 4 ] || ([ ${BASH_VERSION_ARR[0]} -eq 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ]); then\n COMPLETION_OPTIONS=\"-o nosort\"\n fi\n \n@@ -176,7 +177,7 @@\n if isinstance(cmd_param.type, Choice):\n # Choices don't support descriptions.\n results = [(c, None)\n- for c in cmd_param.type.choices if c.startswith(incomplete)]\n+ for c in cmd_param.type.choices if str(c).startswith(incomplete)]\n elif cmd_param.autocompletion is not None:\n dynamic_completions = cmd_param.autocompletion(ctx=ctx,\n args=args,\n@@ -186,20 +187,32 @@\n return results\n \n \n+def get_visible_commands_starting_with(ctx, starts_with):\n+ \"\"\"\n+ :param ctx: context associated with the parsed command\n+ :starts_with: string that visible commands must start with.\n+ :return: all visible (not hidden) commands that start with starts_with.\n+ \"\"\"\n+ for c in ctx.command.list_commands(ctx):\n+ if c.startswith(starts_with):\n+ command = ctx.command.get_command(ctx, c)\n+ if not command.hidden:\n+ yield command\n+\n+\n def add_subcommand_completions(ctx, incomplete, completions_out):\n # Add subcommand completions.\n if isinstance(ctx.command, MultiCommand):\n completions_out.extend(\n- [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in ctx.command.list_commands(ctx) if c.startswith(incomplete)])\n+ [(c.name, c.get_short_help_str()) for c in get_visible_commands_starting_with(ctx, incomplete)])\n \n # Walk up the context list and add any other completion possibilities from chained commands\n while ctx.parent is not None:\n ctx = ctx.parent\n if isinstance(ctx.command, MultiCommand) and ctx.command.chain:\n- remaining_commands = sorted(\n- set(ctx.command.list_commands(ctx)) - set(ctx.protected_args))\n- completions_out.extend(\n- [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in remaining_commands if c.startswith(incomplete)])\n+ remaining_commands = [c for c in get_visible_commands_starting_with(ctx, incomplete)\n+ if c.name not in ctx.protected_args]\n+ completions_out.extend([(c.name, c.get_short_help_str()) for c in remaining_commands])\n \n \n def get_choices(cli, prog_name, args, incomplete):\n@@ -229,11 +242,10 @@\n if start_of_option(incomplete):\n # completions for partial options\n for param in ctx.command.params:\n- if isinstance(param, Option):\n+ if isinstance(param, Option) and not param.hidden:\n param_opts = [param_opt for param_opt in param.opts +\n param.secondary_opts if param_opt not in all_args or param.multiple]\n- completions.extend(\n- [(o, param.help) for o in param_opts if o.startswith(incomplete)])\n+ completions.extend([(o, param.help) for o in param_opts if o.startswith(incomplete)])\n return completions\n # completion for option values from user supplied values\n for param in ctx.command.params:\n", "issue": "New upcoming hidden option in Click.Option is not hidden from bash completion\nThanks for wonderful package.\r\n\r\nI don't know whether this is intended behavior or not but I was just trying out new upcoming release 7.0 for hidden option and autocompletion to Click.Option and hidden option hides option from help message but 
it shows up in bash completion.\r\nHowever it will be good if we can hide it from bash completion also, so that hidden option is actually hidden from everywhere.\n", "before_files": [{"content": "import collections\nimport copy\nimport os\nimport re\n\nfrom .utils import echo\nfrom .parser import split_arg_string\nfrom .core import MultiCommand, Option, Argument\nfrom .types import Choice\n\nWORDBREAK = '='\n\n# Note, only BASH version 4.4 and later have the nosort option.\nCOMPLETION_SCRIPT_BASH = '''\n%(complete_func)s() {\n local IFS=$'\\n'\n COMPREPLY=( $( env COMP_WORDS=\"${COMP_WORDS[*]}\" \\\\\n COMP_CWORD=$COMP_CWORD \\\\\n %(autocomplete_var)s=complete $1 ) )\n return 0\n}\n\n%(complete_func)setup() {\n local COMPLETION_OPTIONS=\"\"\n local BASH_VERSION_ARR=(${BASH_VERSION//./ })\n if [ ${BASH_VERSION_ARR[0]} -ge 4 ] && [ ${BASH_VERSION_ARR[1]} -ge 4 ];then\n COMPLETION_OPTIONS=\"-o nosort\"\n fi\n\n complete $COMPLETION_OPTIONS -F %(complete_func)s %(script_names)s\n}\n\n%(complete_func)setup\n'''\n\nCOMPLETION_SCRIPT_ZSH = '''\n%(complete_func)s() {\n local -a completions\n local -a completions_with_descriptions\n local -a response\n response=(\"${(@f)$( env COMP_WORDS=\\\"${words[*]}\\\" \\\\\n COMP_CWORD=$((CURRENT-1)) \\\\\n %(autocomplete_var)s=\\\"complete_zsh\\\" \\\\\n %(script_names)s )}\")\n\n for key descr in ${(kv)response}; do\n if [[ \"$descr\" == \"_\" ]]; then\n completions+=(\"$key\")\n else\n completions_with_descriptions+=(\"$key\":\"$descr\")\n fi\n done\n\n if [ -n \"$completions_with_descriptions\" ]; then\n _describe -V unsorted completions_with_descriptions -U -Q\n fi\n\n if [ -n \"$completions\" ]; then\n compadd -U -V unsorted -Q -a completions\n fi\n compstate[insert]=\"automenu\"\n}\n\ncompdef %(complete_func)s %(script_names)s\n'''\n\n_invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')\n\n\ndef get_completion_script(prog_name, complete_var, shell):\n cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))\n script = COMPLETION_SCRIPT_ZSH if shell == 'zsh' else COMPLETION_SCRIPT_BASH\n return (script % {\n 'complete_func': '_%s_completion' % cf_name,\n 'script_names': prog_name,\n 'autocomplete_var': complete_var,\n }).strip() + ';'\n\n\ndef resolve_ctx(cli, prog_name, args):\n \"\"\"\n Parse into a hierarchy of contexts. Contexts are connected through the parent variable.\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :return: the final context/command parsed\n \"\"\"\n ctx = cli.make_context(prog_name, args, resilient_parsing=True)\n args = ctx.protected_args + ctx.args\n while args:\n if isinstance(ctx.command, MultiCommand):\n if not ctx.command.chain:\n cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)\n if cmd is None:\n return ctx\n ctx = cmd.make_context(cmd_name, args, parent=ctx,\n resilient_parsing=True)\n args = ctx.protected_args + ctx.args\n else:\n # Walk chained subcommand contexts saving the last one.\n while args:\n cmd_name, cmd, args = ctx.command.resolve_command(ctx, args)\n if cmd is None:\n return ctx\n sub_ctx = cmd.make_context(cmd_name, args, parent=ctx,\n allow_extra_args=True,\n allow_interspersed_args=False,\n resilient_parsing=True)\n args = sub_ctx.args\n ctx = sub_ctx\n args = sub_ctx.protected_args + sub_ctx.args\n else:\n break\n return ctx\n\n\ndef start_of_option(param_str):\n \"\"\"\n :param param_str: param_str to check\n :return: whether or not this is the start of an option declaration (i.e. 
starts \"-\" or \"--\")\n \"\"\"\n return param_str and param_str[:1] == '-'\n\n\ndef is_incomplete_option(all_args, cmd_param):\n \"\"\"\n :param all_args: the full original list of args supplied\n :param cmd_param: the current command paramter\n :return: whether or not the last option declaration (i.e. starts \"-\" or \"--\") is incomplete and\n corresponds to this cmd_param. In other words whether this cmd_param option can still accept\n values\n \"\"\"\n if not isinstance(cmd_param, Option):\n return False\n if cmd_param.is_flag:\n return False\n last_option = None\n for index, arg_str in enumerate(reversed([arg for arg in all_args if arg != WORDBREAK])):\n if index + 1 > cmd_param.nargs:\n break\n if start_of_option(arg_str):\n last_option = arg_str\n\n return True if last_option and last_option in cmd_param.opts else False\n\n\ndef is_incomplete_argument(current_params, cmd_param):\n \"\"\"\n :param current_params: the current params and values for this argument as already entered\n :param cmd_param: the current command parameter\n :return: whether or not the last argument is incomplete and corresponds to this cmd_param. In\n other words whether or not the this cmd_param argument can still accept values\n \"\"\"\n if not isinstance(cmd_param, Argument):\n return False\n current_param_values = current_params[cmd_param.name]\n if current_param_values is None:\n return True\n if cmd_param.nargs == -1:\n return True\n if isinstance(current_param_values, collections.Iterable) \\\n and cmd_param.nargs > 1 and len(current_param_values) < cmd_param.nargs:\n return True\n return False\n\n\ndef get_user_autocompletions(ctx, args, incomplete, cmd_param):\n \"\"\"\n :param ctx: context associated with the parsed command\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :param cmd_param: command definition\n :return: all the possible user-specified completions for the param\n \"\"\"\n results = []\n if isinstance(cmd_param.type, Choice):\n # Choices don't support descriptions.\n results = [(c, None)\n for c in cmd_param.type.choices if c.startswith(incomplete)]\n elif cmd_param.autocompletion is not None:\n dynamic_completions = cmd_param.autocompletion(ctx=ctx,\n args=args,\n incomplete=incomplete)\n results = [c if isinstance(c, tuple) else (c, None)\n for c in dynamic_completions]\n return results\n\n\ndef add_subcommand_completions(ctx, incomplete, completions_out):\n # Add subcommand completions.\n if isinstance(ctx.command, MultiCommand):\n completions_out.extend(\n [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in ctx.command.list_commands(ctx) if c.startswith(incomplete)])\n\n # Walk up the context list and add any other completion possibilities from chained commands\n while ctx.parent is not None:\n ctx = ctx.parent\n if isinstance(ctx.command, MultiCommand) and ctx.command.chain:\n remaining_commands = sorted(\n set(ctx.command.list_commands(ctx)) - set(ctx.protected_args))\n completions_out.extend(\n [(c, ctx.command.get_command(ctx, c).get_short_help_str()) for c in remaining_commands if c.startswith(incomplete)])\n\n\ndef get_choices(cli, prog_name, args, incomplete):\n \"\"\"\n :param cli: command definition\n :param prog_name: the program that is running\n :param args: full list of args\n :param incomplete: the incomplete text to autocomplete\n :return: all the possible completions for the incomplete\n \"\"\"\n all_args = copy.deepcopy(args)\n\n ctx = resolve_ctx(cli, prog_name, args)\n if ctx is None:\n return []\n\n # In 
newer versions of bash long opts with '='s are partitioned, but it's easier to parse\n # without the '='\n if start_of_option(incomplete) and WORDBREAK in incomplete:\n partition_incomplete = incomplete.partition(WORDBREAK)\n all_args.append(partition_incomplete[0])\n incomplete = partition_incomplete[2]\n elif incomplete == WORDBREAK:\n incomplete = ''\n\n completions = []\n if start_of_option(incomplete):\n # completions for partial options\n for param in ctx.command.params:\n if isinstance(param, Option):\n param_opts = [param_opt for param_opt in param.opts +\n param.secondary_opts if param_opt not in all_args or param.multiple]\n completions.extend(\n [(o, param.help) for o in param_opts if o.startswith(incomplete)])\n return completions\n # completion for option values from user supplied values\n for param in ctx.command.params:\n if is_incomplete_option(all_args, param):\n return get_user_autocompletions(ctx, all_args, incomplete, param)\n # completion for argument values from user supplied values\n for param in ctx.command.params:\n if is_incomplete_argument(ctx.params, param):\n return get_user_autocompletions(ctx, all_args, incomplete, param)\n\n add_subcommand_completions(ctx, incomplete, completions)\n # Sort before returning so that proper ordering can be enforced in custom types.\n return sorted(completions)\n\n\ndef do_complete(cli, prog_name, include_descriptions):\n cwords = split_arg_string(os.environ['COMP_WORDS'])\n cword = int(os.environ['COMP_CWORD'])\n args = cwords[1:cword]\n try:\n incomplete = cwords[cword]\n except IndexError:\n incomplete = ''\n\n for item in get_choices(cli, prog_name, args, incomplete):\n echo(item[0])\n if include_descriptions:\n # ZSH has trouble dealing with empty array parameters when returned from commands, so use a well defined character '_' to indicate no description is present.\n echo(item[1] if item[1] else '_')\n\n return True\n\n\ndef bashcomplete(cli, prog_name, complete_var, complete_instr):\n if complete_instr.startswith('source'):\n shell = 'zsh' if complete_instr == 'source_zsh' else 'bash'\n echo(get_completion_script(prog_name, complete_var, shell))\n return True\n elif complete_instr == 'complete' or complete_instr == 'complete_zsh':\n return do_complete(cli, prog_name, complete_instr == 'complete_zsh')\n return False\n", "path": "click/_bashcomplete.py"}]}
| 3,686 | 847 |
gh_patches_debug_16154
|
rasdani/github-patches
|
git_diff
|
bornhack__bornhack-website-378
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
backoffice: show quantity of products ordered after scanning qr code
backoffice: show quantity of products ordered after scanning qr code
</issue>
<code>
[start of src/tickets/models.py]
1 import io
2 import hashlib
3 import base64
4 import qrcode
5 from django.conf import settings
6 from django.urls import reverse_lazy
7 from django.utils.translation import ugettext_lazy as _
8
9 from shop.models import OrderProductRelation
10 from utils.models import UUIDModel, CampRelatedModel
11 from utils.pdf import generate_pdf_letter
12 from django.db import models
13 import logging
14
15 logger = logging.getLogger("bornhack.%s" % __name__)
16
17
18 # TicketType can be full week, one day. etc.
19 class TicketType(CampRelatedModel, UUIDModel):
20 name = models.TextField()
21 camp = models.ForeignKey("camps.Camp", on_delete=models.PROTECT)
22 includes_badge = models.BooleanField(default=False)
23 single_ticket_per_product = models.BooleanField(
24 default=False,
25 help_text=(
26 "Only create one ticket for a product/order pair no matter the quantity. "
27 "Useful for products which are bought in larger quantity (ie. village chairs)"
28 ),
29 )
30
31 def __str__(self):
32 return "{} ({})".format(self.name, self.camp.title)
33
34
35 def create_ticket_token(string):
36 return hashlib.sha256(string).hexdigest()
37
38
39 def qr_code_base64(token):
40 qr = qrcode.make(
41 token, version=1, error_correction=qrcode.constants.ERROR_CORRECT_H
42 ).resize((250, 250))
43 file_like = io.BytesIO()
44 qr.save(file_like, format="png")
45 qrcode_base64 = base64.b64encode(file_like.getvalue())
46 return qrcode_base64
47
48
49 class BaseTicket(CampRelatedModel, UUIDModel):
50 ticket_type = models.ForeignKey("TicketType", on_delete=models.PROTECT)
51 used = models.BooleanField(default=False)
52 badge_handed_out = models.BooleanField(default=False)
53 token = models.CharField(max_length=64, blank=True)
54 badge_token = models.CharField(max_length=64, blank=True)
55
56 class Meta:
57 abstract = True
58
59 @property
60 def camp(self):
61 return self.ticket_type.camp
62
63 def save(self, **kwargs):
64 self.token = self._get_token()
65 self.badge_token = self._get_badge_token()
66 super().save(**kwargs)
67
68 def _get_token(self):
69 return create_ticket_token(
70 "{_id}{secret_key}".format(
71 _id=self.uuid, secret_key=settings.SECRET_KEY
72 ).encode("utf-8")
73 )
74
75 def _get_badge_token(self):
76 return create_ticket_token(
77 "{_id}{secret_key}-badge".format(
78 _id=self.uuid, secret_key=settings.SECRET_KEY
79 ).encode("utf-8")
80 )
81
82 def get_qr_code_url(self):
83 return "data:image/png;base64,{}".format(
84 qr_code_base64(self._get_token()).decode("utf-8")
85 )
86
87 def get_qr_badge_code_url(self):
88 return "data:image/png;base64,{}".format(
89 qr_code_base64(self._get_badge_token()).decode("utf-8")
90 )
91
92 def generate_pdf(self):
93 formatdict = {"ticket": self}
94
95 if self.ticket_type.single_ticket_per_product and self.shortname == "shop":
96 orp = self.get_orp()
97 formatdict["quantity"] = orp.quantity
98
99 return generate_pdf_letter(
100 filename="{}_ticket_{}.pdf".format(self.shortname, self.pk),
101 formatdict=formatdict,
102 template="pdf/ticket.html",
103 )
104
105
106 class SponsorTicket(BaseTicket):
107 sponsor = models.ForeignKey("sponsors.Sponsor", on_delete=models.PROTECT)
108
109 def __str__(self):
110 return "SponsorTicket: {}".format(self.pk)
111
112 @property
113 def shortname(self):
114 return "sponsor"
115
116
117 class DiscountTicket(BaseTicket):
118 price = models.IntegerField(
119 help_text=_("Price of the discounted ticket (in DKK, including VAT).")
120 )
121
122 def __str__(self):
123 return "DiscountTicket: {}".format(self.pk)
124
125 @property
126 def shortname(self):
127 return "discount"
128
129
130 class ShopTicket(BaseTicket):
131 order = models.ForeignKey(
132 "shop.Order", related_name="shoptickets", on_delete=models.PROTECT
133 )
134 product = models.ForeignKey("shop.Product", on_delete=models.PROTECT)
135
136 name = models.CharField(
137 max_length=100,
138 help_text=(
139 "Name of the person this ticket belongs to. "
140 "This can be different from the buying user."
141 ),
142 null=True,
143 blank=True,
144 )
145
146 email = models.EmailField(null=True, blank=True)
147
148 # overwrite the _get_token method because old tickets use the user_id
149 def _get_token(self):
150 return hashlib.sha256(
151 "{_id}{user_id}{secret_key}".format(
152 _id=self.pk, user_id=self.order.user.pk, secret_key=settings.SECRET_KEY
153 ).encode("utf-8")
154 ).hexdigest()
155
156 def __str__(self):
157 return "Ticket {user} {product}".format(
158 user=self.order.user, product=self.product
159 )
160
161 def get_absolute_url(self):
162 return str(reverse_lazy("tickets:shopticket_edit", kwargs={"pk": self.pk}))
163
164 @property
165 def shortname(self):
166 return "shop"
167
168 def get_orp(self):
169 return OrderProductRelation.objects.get(product=self.product, order=self.order)
170
[end of src/tickets/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/tickets/models.py b/src/tickets/models.py
--- a/src/tickets/models.py
+++ b/src/tickets/models.py
@@ -93,8 +93,7 @@
formatdict = {"ticket": self}
if self.ticket_type.single_ticket_per_product and self.shortname == "shop":
- orp = self.get_orp()
- formatdict["quantity"] = orp.quantity
+ formatdict["quantity"] = self.orp.quantity
return generate_pdf_letter(
filename="{}_ticket_{}.pdf".format(self.shortname, self.pk),
@@ -165,5 +164,6 @@
def shortname(self):
return "shop"
- def get_orp(self):
+ @property
+ def orp(self):
return OrderProductRelation.objects.get(product=self.product, order=self.order)
|
{"golden_diff": "diff --git a/src/tickets/models.py b/src/tickets/models.py\n--- a/src/tickets/models.py\n+++ b/src/tickets/models.py\n@@ -93,8 +93,7 @@\n formatdict = {\"ticket\": self}\n \n if self.ticket_type.single_ticket_per_product and self.shortname == \"shop\":\n- orp = self.get_orp()\n- formatdict[\"quantity\"] = orp.quantity\n+ formatdict[\"quantity\"] = self.orp.quantity\n \n return generate_pdf_letter(\n filename=\"{}_ticket_{}.pdf\".format(self.shortname, self.pk),\n@@ -165,5 +164,6 @@\n def shortname(self):\n return \"shop\"\n \n- def get_orp(self):\n+ @property\n+ def orp(self):\n return OrderProductRelation.objects.get(product=self.product, order=self.order)\n", "issue": "backoffice: show quantity of products ordered after scanning qr code\n\nbackoffice: show quantity of products ordered after scanning qr code\n\n", "before_files": [{"content": "import io\nimport hashlib\nimport base64\nimport qrcode\nfrom django.conf import settings\nfrom django.urls import reverse_lazy\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom shop.models import OrderProductRelation\nfrom utils.models import UUIDModel, CampRelatedModel\nfrom utils.pdf import generate_pdf_letter\nfrom django.db import models\nimport logging\n\nlogger = logging.getLogger(\"bornhack.%s\" % __name__)\n\n\n# TicketType can be full week, one day. etc.\nclass TicketType(CampRelatedModel, UUIDModel):\n name = models.TextField()\n camp = models.ForeignKey(\"camps.Camp\", on_delete=models.PROTECT)\n includes_badge = models.BooleanField(default=False)\n single_ticket_per_product = models.BooleanField(\n default=False,\n help_text=(\n \"Only create one ticket for a product/order pair no matter the quantity. \"\n \"Useful for products which are bought in larger quantity (ie. 
village chairs)\"\n ),\n )\n\n def __str__(self):\n return \"{} ({})\".format(self.name, self.camp.title)\n\n\ndef create_ticket_token(string):\n return hashlib.sha256(string).hexdigest()\n\n\ndef qr_code_base64(token):\n qr = qrcode.make(\n token, version=1, error_correction=qrcode.constants.ERROR_CORRECT_H\n ).resize((250, 250))\n file_like = io.BytesIO()\n qr.save(file_like, format=\"png\")\n qrcode_base64 = base64.b64encode(file_like.getvalue())\n return qrcode_base64\n\n\nclass BaseTicket(CampRelatedModel, UUIDModel):\n ticket_type = models.ForeignKey(\"TicketType\", on_delete=models.PROTECT)\n used = models.BooleanField(default=False)\n badge_handed_out = models.BooleanField(default=False)\n token = models.CharField(max_length=64, blank=True)\n badge_token = models.CharField(max_length=64, blank=True)\n\n class Meta:\n abstract = True\n\n @property\n def camp(self):\n return self.ticket_type.camp\n\n def save(self, **kwargs):\n self.token = self._get_token()\n self.badge_token = self._get_badge_token()\n super().save(**kwargs)\n\n def _get_token(self):\n return create_ticket_token(\n \"{_id}{secret_key}\".format(\n _id=self.uuid, secret_key=settings.SECRET_KEY\n ).encode(\"utf-8\")\n )\n\n def _get_badge_token(self):\n return create_ticket_token(\n \"{_id}{secret_key}-badge\".format(\n _id=self.uuid, secret_key=settings.SECRET_KEY\n ).encode(\"utf-8\")\n )\n\n def get_qr_code_url(self):\n return \"data:image/png;base64,{}\".format(\n qr_code_base64(self._get_token()).decode(\"utf-8\")\n )\n\n def get_qr_badge_code_url(self):\n return \"data:image/png;base64,{}\".format(\n qr_code_base64(self._get_badge_token()).decode(\"utf-8\")\n )\n\n def generate_pdf(self):\n formatdict = {\"ticket\": self}\n\n if self.ticket_type.single_ticket_per_product and self.shortname == \"shop\":\n orp = self.get_orp()\n formatdict[\"quantity\"] = orp.quantity\n\n return generate_pdf_letter(\n filename=\"{}_ticket_{}.pdf\".format(self.shortname, self.pk),\n formatdict=formatdict,\n template=\"pdf/ticket.html\",\n )\n\n\nclass SponsorTicket(BaseTicket):\n sponsor = models.ForeignKey(\"sponsors.Sponsor\", on_delete=models.PROTECT)\n\n def __str__(self):\n return \"SponsorTicket: {}\".format(self.pk)\n\n @property\n def shortname(self):\n return \"sponsor\"\n\n\nclass DiscountTicket(BaseTicket):\n price = models.IntegerField(\n help_text=_(\"Price of the discounted ticket (in DKK, including VAT).\")\n )\n\n def __str__(self):\n return \"DiscountTicket: {}\".format(self.pk)\n\n @property\n def shortname(self):\n return \"discount\"\n\n\nclass ShopTicket(BaseTicket):\n order = models.ForeignKey(\n \"shop.Order\", related_name=\"shoptickets\", on_delete=models.PROTECT\n )\n product = models.ForeignKey(\"shop.Product\", on_delete=models.PROTECT)\n\n name = models.CharField(\n max_length=100,\n help_text=(\n \"Name of the person this ticket belongs to. 
\"\n \"This can be different from the buying user.\"\n ),\n null=True,\n blank=True,\n )\n\n email = models.EmailField(null=True, blank=True)\n\n # overwrite the _get_token method because old tickets use the user_id\n def _get_token(self):\n return hashlib.sha256(\n \"{_id}{user_id}{secret_key}\".format(\n _id=self.pk, user_id=self.order.user.pk, secret_key=settings.SECRET_KEY\n ).encode(\"utf-8\")\n ).hexdigest()\n\n def __str__(self):\n return \"Ticket {user} {product}\".format(\n user=self.order.user, product=self.product\n )\n\n def get_absolute_url(self):\n return str(reverse_lazy(\"tickets:shopticket_edit\", kwargs={\"pk\": self.pk}))\n\n @property\n def shortname(self):\n return \"shop\"\n\n def get_orp(self):\n return OrderProductRelation.objects.get(product=self.product, order=self.order)\n", "path": "src/tickets/models.py"}]}
| 2,149 | 190 |
gh_patches_debug_643
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-1925
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.107
On the docket:
+ [x] `git` username replaced with `****` redaction in lockfile for `git+ssh` direct references #1918
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.106"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.106"
+__version__ = "2.1.107"
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.106\"\n+__version__ = \"2.1.107\"\n", "issue": "Release 2.1.107\nOn the docket:\r\n+ [x] `git` username replaced with `****` redaction in lockfile for `git+ssh` direct references #1918\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.106\"\n", "path": "pex/version.py"}]}
| 630 | 98 |
gh_patches_debug_37259
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-25990
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cloud/vmware/vmware_vswitch.py nic_name should be optional
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
cloud/vmware/vmware_vswitch.py
##### ANSIBLE VERSION
```
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
n/a
##### OS / ENVIRONMENT
n/a
##### SUMMARY
The vmware_vswitch module erroneously assumes that 'nic_name' is required. It is valid (and sometimes desired) to make a vmware virtual switch that does not have any uplink nics at all - the use case is multiple isolated port-groups for isolated networking.
After the vswitch is created, we create a port-group with VLAN 4095 (all vlans), with network policy permitting mac changes, forged transmit, and promiscuous all enabled.
In /ansible/modules/cloud/vmware/vmware_vswitch.py, we can omit this line if nic_name is not specified, and the port-group is created as desired.
```python
if self.nic_name:
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])
```
##### STEPS TO REPRODUCE
Run ansible-playbook against a task using the vmware_vswitch module, omitting nic_name.
```
- name: add test_switch
local_action:
module: vmware_vswitch
hostname: esxi_host
username: esxi_username
password: esxi_password
switch_name: item
mtu: 9000
validate_certs: no
number_of_ports: 8
#nic_name: 'null'
```
##### EXPECTED RESULTS
I expect the vmware vswitch to be created, but without any uplink nics.
##### ACTUAL RESULTS
```
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "missing required arguments: nic_name"}
```
</issue>
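A minimal sketch of the adjustment the issue describes, reusing the pyVmomi names from the module listed in the code section below. The helper name `build_vswitch_spec` is invented here for illustration and is not part of the module; the real change amounts to guarding the existing bridge assignment and relaxing `nic_name` in the argument spec.

```python
from pyVmomi import vim


def build_vswitch_spec(number_of_ports, mtu, nic_name=None):
    """Build a standard vSwitch spec, attaching an uplink only when one is given."""
    vss_spec = vim.host.VirtualSwitch.Specification()
    vss_spec.numPorts = number_of_ports
    vss_spec.mtu = mtu
    # A vSwitch with no uplink NICs is valid (e.g. for isolated port-groups),
    # so the bond bridge is only set when a nic_name was actually supplied.
    if nic_name:
        vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[nic_name])
    return vss_spec
```

On the module side this also implies `nic_name=dict(required=False, type='str')` in the argument spec, so a playbook that omits the parameter is no longer rejected.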
<code>
[start of lib/ansible/modules/cloud/vmware/vmware_vswitch.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # (c) 2015, Joseph Callen <jcallen () csc.com>
5 #
6 # This file is part of Ansible
7 #
8 # Ansible is free software: you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation, either version 3 of the License, or
11 # (at your option) any later version.
12 #
13 # Ansible is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
20
21 ANSIBLE_METADATA = {'metadata_version': '1.0',
22 'status': ['preview'],
23 'supported_by': 'community'}
24
25
26 DOCUMENTATION = '''
27 ---
28 module: vmware_vswitch
29 short_description: Add a VMware Standard Switch to an ESXi host
30 description:
31 - Add a VMware Standard Switch to an ESXi host
32 version_added: 2.0
33 author: "Joseph Callen (@jcpowermac), Russell Teague (@mtnbikenc)"
34 notes:
35 - Tested on vSphere 5.5
36 requirements:
37 - "python >= 2.6"
38 - PyVmomi
39 options:
40 switch_name:
41 description:
42 - vSwitch name to add
43 required: True
44 nic_name:
45 description:
46 - vmnic name to attach to vswitch
47 required: True
48 number_of_ports:
49 description:
50 - Number of port to configure on vswitch
51 default: 128
52 required: False
53 mtu:
54 description:
55 - MTU to configure on vswitch
56 required: False
57 state:
58 description:
59 - Add or remove the switch
60 default: 'present'
61 choices:
62 - 'present'
63 - 'absent'
64 required: False
65 extends_documentation_fragment: vmware.documentation
66 '''
67
68 EXAMPLES = '''
69 # Example from Ansible playbook
70
71 - name: Add a VMware vSwitch
72 local_action:
73 module: vmware_vswitch
74 hostname: esxi_hostname
75 username: esxi_username
76 password: esxi_password
77 switch_name: vswitch_name
78 nic_name: vmnic_name
79 mtu: 9000
80 '''
81
82 try:
83 from pyVmomi import vim, vmodl
84 HAS_PYVMOMI = True
85 except ImportError:
86 HAS_PYVMOMI = False
87
88
89 def find_vswitch_by_name(host, vswitch_name):
90 for vss in host.config.network.vswitch:
91 if vss.name == vswitch_name:
92 return vss
93 return None
94
95
96 class VMwareHostVirtualSwitch(object):
97
98 def __init__(self, module):
99 self.host_system = None
100 self.content = None
101 self.vss = None
102 self.module = module
103 self.switch_name = module.params['switch_name']
104 self.number_of_ports = module.params['number_of_ports']
105 self.nic_name = module.params['nic_name']
106 self.mtu = module.params['mtu']
107 self.state = module.params['state']
108 self.content = connect_to_api(self.module)
109
110 def process_state(self):
111 try:
112 vswitch_states = {
113 'absent': {
114 'present': self.state_destroy_vswitch,
115 'absent': self.state_exit_unchanged,
116 },
117 'present': {
118 'update': self.state_update_vswitch,
119 'present': self.state_exit_unchanged,
120 'absent': self.state_create_vswitch,
121 }
122 }
123
124 vswitch_states[self.state][self.check_vswitch_configuration()]()
125
126 except vmodl.RuntimeFault as runtime_fault:
127 self.module.fail_json(msg=runtime_fault.msg)
128 except vmodl.MethodFault as method_fault:
129 self.module.fail_json(msg=method_fault.msg)
130 except Exception as e:
131 self.module.fail_json(msg=str(e))
132
133
134 # Source from
135 # https://github.com/rreubenur/pyvmomi-community-samples/blob/patch-1/samples/create_vswitch.py
136
137 def state_create_vswitch(self):
138 vss_spec = vim.host.VirtualSwitch.Specification()
139 vss_spec.numPorts = self.number_of_ports
140 vss_spec.mtu = self.mtu
141 vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])
142 self.host_system.configManager.networkSystem.AddVirtualSwitch(vswitchName=self.switch_name, spec=vss_spec)
143 self.module.exit_json(changed=True)
144
145 def state_exit_unchanged(self):
146 self.module.exit_json(changed=False)
147
148 def state_destroy_vswitch(self):
149 config = vim.host.NetworkConfig()
150
151 for portgroup in self.host_system.configManager.networkSystem.networkInfo.portgroup:
152 if portgroup.spec.vswitchName == self.vss.name:
153 portgroup_config = vim.host.PortGroup.Config()
154 portgroup_config.changeOperation = "remove"
155 portgroup_config.spec = vim.host.PortGroup.Specification()
156 portgroup_config.spec.name = portgroup.spec.name
157 portgroup_config.spec.name = portgroup.spec.name
158 portgroup_config.spec.vlanId = portgroup.spec.vlanId
159 portgroup_config.spec.vswitchName = portgroup.spec.vswitchName
160 portgroup_config.spec.policy = vim.host.NetworkPolicy()
161 config.portgroup.append(portgroup_config)
162
163 self.host_system.configManager.networkSystem.UpdateNetworkConfig(config, "modify")
164 self.host_system.configManager.networkSystem.RemoveVirtualSwitch(self.vss.name)
165 self.module.exit_json(changed=True)
166
167 def state_update_vswitch(self):
168 self.module.exit_json(changed=False, msg="Currently not implemented.")
169
170 def check_vswitch_configuration(self):
171 host = get_all_objs(self.content, [vim.HostSystem])
172 if not host:
173 self.module.fail_json(msg="Unable to find host")
174
175 self.host_system = host.keys()[0]
176 self.vss = find_vswitch_by_name(self.host_system, self.switch_name)
177
178 if self.vss is None:
179 return 'absent'
180 else:
181 return 'present'
182
183
184 def main():
185 argument_spec = vmware_argument_spec()
186 argument_spec.update(dict(switch_name=dict(required=True, type='str'),
187 nic_name=dict(required=True, type='str'),
188 number_of_ports=dict(required=False, type='int', default=128),
189 mtu=dict(required=False, type='int', default=1500),
190 state=dict(default='present', choices=['present', 'absent'], type='str')))
191
192 module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)
193
194 if not HAS_PYVMOMI:
195 module.fail_json(msg='pyvmomi is required for this module')
196
197 host_virtual_switch = VMwareHostVirtualSwitch(module)
198 host_virtual_switch.process_state()
199
200 from ansible.module_utils.vmware import *
201 from ansible.module_utils.basic import *
202
203 if __name__ == '__main__':
204 main()
205
[end of lib/ansible/modules/cloud/vmware/vmware_vswitch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/ansible/modules/cloud/vmware/vmware_vswitch.py b/lib/ansible/modules/cloud/vmware/vmware_vswitch.py
--- a/lib/ansible/modules/cloud/vmware/vmware_vswitch.py
+++ b/lib/ansible/modules/cloud/vmware/vmware_vswitch.py
@@ -44,7 +44,7 @@
nic_name:
description:
- vmnic name to attach to vswitch
- required: True
+ required: False
number_of_ports:
description:
- Number of port to configure on vswitch
@@ -66,17 +66,24 @@
'''
EXAMPLES = '''
-# Example from Ansible playbook
-
- - name: Add a VMware vSwitch
- local_action:
- module: vmware_vswitch
- hostname: esxi_hostname
- username: esxi_username
- password: esxi_password
- switch_name: vswitch_name
- nic_name: vmnic_name
- mtu: 9000
+- name: Add a VMware vSwitch
+ local_action:
+ module: vmware_vswitch
+ hostname: esxi_hostname
+ username: esxi_username
+ password: esxi_password
+ switch_name: vswitch_name
+ nic_name: vmnic_name
+ mtu: 9000
+
+- name: Add a VMWare vSwitch without any physical NIC attached
+ vmware_vswitch:
+ hostname: 192.168.10.1
+ username: admin
+ password: password123
+ switch_name: vswitch_0001
+ mtu: 9000
+
'''
try:
@@ -138,7 +145,8 @@
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = self.number_of_ports
vss_spec.mtu = self.mtu
- vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])
+ if self.nic_name:
+ vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])
self.host_system.configManager.networkSystem.AddVirtualSwitch(vswitchName=self.switch_name, spec=vss_spec)
self.module.exit_json(changed=True)
@@ -184,7 +192,7 @@
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(dict(switch_name=dict(required=True, type='str'),
- nic_name=dict(required=True, type='str'),
+ nic_name=dict(required=False, type='str'),
number_of_ports=dict(required=False, type='int', default=128),
mtu=dict(required=False, type='int', default=1500),
state=dict(default='present', choices=['present', 'absent'], type='str')))
|
{"golden_diff": "diff --git a/lib/ansible/modules/cloud/vmware/vmware_vswitch.py b/lib/ansible/modules/cloud/vmware/vmware_vswitch.py\n--- a/lib/ansible/modules/cloud/vmware/vmware_vswitch.py\n+++ b/lib/ansible/modules/cloud/vmware/vmware_vswitch.py\n@@ -44,7 +44,7 @@\n nic_name:\n description:\n - vmnic name to attach to vswitch\n- required: True\n+ required: False\n number_of_ports:\n description:\n - Number of port to configure on vswitch\n@@ -66,17 +66,24 @@\n '''\n \n EXAMPLES = '''\n-# Example from Ansible playbook\n-\n- - name: Add a VMware vSwitch\n- local_action:\n- module: vmware_vswitch\n- hostname: esxi_hostname\n- username: esxi_username\n- password: esxi_password\n- switch_name: vswitch_name\n- nic_name: vmnic_name\n- mtu: 9000\n+- name: Add a VMware vSwitch\n+ local_action:\n+ module: vmware_vswitch\n+ hostname: esxi_hostname\n+ username: esxi_username\n+ password: esxi_password\n+ switch_name: vswitch_name\n+ nic_name: vmnic_name\n+ mtu: 9000\n+\n+- name: Add a VMWare vSwitch without any physical NIC attached\n+ vmware_vswitch:\n+ hostname: 192.168.10.1\n+ username: admin\n+ password: password123\n+ switch_name: vswitch_0001\n+ mtu: 9000\n+\n '''\n \n try:\n@@ -138,7 +145,8 @@\n vss_spec = vim.host.VirtualSwitch.Specification()\n vss_spec.numPorts = self.number_of_ports\n vss_spec.mtu = self.mtu\n- vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])\n+ if self.nic_name:\n+ vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])\n self.host_system.configManager.networkSystem.AddVirtualSwitch(vswitchName=self.switch_name, spec=vss_spec)\n self.module.exit_json(changed=True)\n \n@@ -184,7 +192,7 @@\n def main():\n argument_spec = vmware_argument_spec()\n argument_spec.update(dict(switch_name=dict(required=True, type='str'),\n- nic_name=dict(required=True, type='str'),\n+ nic_name=dict(required=False, type='str'),\n number_of_ports=dict(required=False, type='int', default=128),\n mtu=dict(required=False, type='int', default=1500),\n state=dict(default='present', choices=['present', 'absent'], type='str')))\n", "issue": "cloud/vmware/vmware_vswitch.py nic_name should be optional\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\ncloud/vmware/vmware_vswitch.py\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.3.1.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = Default w/o overrides\r\n python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]\r\n\r\n```\r\n\r\n##### CONFIGURATION\r\nn/a\r\n##### OS / ENVIRONMENT\r\nn/a\r\n##### SUMMARY\r\nThe vmware_vswitch module erroneously assumes that 'nic_name' is required. It is valid (and sometimes desired) to make a vmware virtual switch that does not have any uplink nics at all - the use case is multiple isolated port-groups for isolated networking. 
\r\n\r\nAfter the vswitch is created, we create a port-group with VLAN 4095 (all vlans), with network policy permitting mac changes, forged transmit, and promiscuous all enabled.\r\n\r\nIn /ansible/modules/cloud/vmware/vmware_vswitch.py , we can omit this line if nic_name is not specified and the port-group is created as desired.\r\n\r\n```python\r\nif self.nic_name:\r\n vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nRun ansible-playbook against a task using the vmware_vswitch module, omitting nic_name.\r\n```\r\n - name: add test_switch\r\n local_action:\r\n module: vmware_vswitch\r\n hostname: esxi_host\r\n username: esxi_username\r\n password: esxi_password\r\n switch_name: item\r\n mtu: 9000\r\n validate_certs: no\r\n number_of_ports: 8\r\n #nic_name: 'null'\r\n\r\n```\r\n\r\n##### EXPECTED RESULTS\r\nI expect the vmware vswitch to be created, but without any uplink nics.\r\n\r\n##### ACTUAL RESULTS\r\n```\r\nfatal: [localhost -> localhost]: FAILED! => {\"changed\": false, \"failed\": true, \"msg\": \"missing required arguments: nic_name\"}\r\n```\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2015, Joseph Callen <jcallen () csc.com>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.\n\nANSIBLE_METADATA = {'metadata_version': '1.0',\n 'status': ['preview'],\n 'supported_by': 'community'}\n\n\nDOCUMENTATION = '''\n---\nmodule: vmware_vswitch\nshort_description: Add a VMware Standard Switch to an ESXi host\ndescription:\n - Add a VMware Standard Switch to an ESXi host\nversion_added: 2.0\nauthor: \"Joseph Callen (@jcpowermac), Russell Teague (@mtnbikenc)\"\nnotes:\n - Tested on vSphere 5.5\nrequirements:\n - \"python >= 2.6\"\n - PyVmomi\noptions:\n switch_name:\n description:\n - vSwitch name to add\n required: True\n nic_name:\n description:\n - vmnic name to attach to vswitch\n required: True\n number_of_ports:\n description:\n - Number of port to configure on vswitch\n default: 128\n required: False\n mtu:\n description:\n - MTU to configure on vswitch\n required: False\n state:\n description:\n - Add or remove the switch\n default: 'present'\n choices:\n - 'present'\n - 'absent'\n required: False\nextends_documentation_fragment: vmware.documentation\n'''\n\nEXAMPLES = '''\n# Example from Ansible playbook\n\n - name: Add a VMware vSwitch\n local_action:\n module: vmware_vswitch\n hostname: esxi_hostname\n username: esxi_username\n password: esxi_password\n switch_name: vswitch_name\n nic_name: vmnic_name\n mtu: 9000\n'''\n\ntry:\n from pyVmomi import vim, vmodl\n HAS_PYVMOMI = True\nexcept ImportError:\n HAS_PYVMOMI = False\n\n\ndef find_vswitch_by_name(host, vswitch_name):\n for vss in host.config.network.vswitch:\n if vss.name == vswitch_name:\n return vss\n return None\n\n\nclass VMwareHostVirtualSwitch(object):\n\n def __init__(self, module):\n self.host_system = None\n self.content = None\n self.vss = None\n self.module = module\n self.switch_name = module.params['switch_name']\n self.number_of_ports = module.params['number_of_ports']\n self.nic_name = module.params['nic_name']\n self.mtu = module.params['mtu']\n self.state = module.params['state']\n self.content = connect_to_api(self.module)\n\n def process_state(self):\n try:\n vswitch_states = {\n 'absent': {\n 'present': self.state_destroy_vswitch,\n 'absent': self.state_exit_unchanged,\n },\n 'present': {\n 'update': self.state_update_vswitch,\n 'present': self.state_exit_unchanged,\n 'absent': self.state_create_vswitch,\n }\n }\n\n vswitch_states[self.state][self.check_vswitch_configuration()]()\n\n except vmodl.RuntimeFault as runtime_fault:\n self.module.fail_json(msg=runtime_fault.msg)\n except vmodl.MethodFault as method_fault:\n self.module.fail_json(msg=method_fault.msg)\n except Exception as e:\n self.module.fail_json(msg=str(e))\n\n\n # Source from\n # https://github.com/rreubenur/pyvmomi-community-samples/blob/patch-1/samples/create_vswitch.py\n\n def state_create_vswitch(self):\n vss_spec = vim.host.VirtualSwitch.Specification()\n vss_spec.numPorts = self.number_of_ports\n vss_spec.mtu = self.mtu\n vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=[self.nic_name])\n self.host_system.configManager.networkSystem.AddVirtualSwitch(vswitchName=self.switch_name, spec=vss_spec)\n self.module.exit_json(changed=True)\n\n def state_exit_unchanged(self):\n self.module.exit_json(changed=False)\n\n def state_destroy_vswitch(self):\n config = vim.host.NetworkConfig()\n\n for portgroup in self.host_system.configManager.networkSystem.networkInfo.portgroup:\n if portgroup.spec.vswitchName == self.vss.name:\n portgroup_config = vim.host.PortGroup.Config()\n portgroup_config.changeOperation = \"remove\"\n portgroup_config.spec = 
vim.host.PortGroup.Specification()\n portgroup_config.spec.name = portgroup.spec.name\n portgroup_config.spec.name = portgroup.spec.name\n portgroup_config.spec.vlanId = portgroup.spec.vlanId\n portgroup_config.spec.vswitchName = portgroup.spec.vswitchName\n portgroup_config.spec.policy = vim.host.NetworkPolicy()\n config.portgroup.append(portgroup_config)\n\n self.host_system.configManager.networkSystem.UpdateNetworkConfig(config, \"modify\")\n self.host_system.configManager.networkSystem.RemoveVirtualSwitch(self.vss.name)\n self.module.exit_json(changed=True)\n\n def state_update_vswitch(self):\n self.module.exit_json(changed=False, msg=\"Currently not implemented.\")\n\n def check_vswitch_configuration(self):\n host = get_all_objs(self.content, [vim.HostSystem])\n if not host:\n self.module.fail_json(msg=\"Unable to find host\")\n\n self.host_system = host.keys()[0]\n self.vss = find_vswitch_by_name(self.host_system, self.switch_name)\n\n if self.vss is None:\n return 'absent'\n else:\n return 'present'\n\n\ndef main():\n argument_spec = vmware_argument_spec()\n argument_spec.update(dict(switch_name=dict(required=True, type='str'),\n nic_name=dict(required=True, type='str'),\n number_of_ports=dict(required=False, type='int', default=128),\n mtu=dict(required=False, type='int', default=1500),\n state=dict(default='present', choices=['present', 'absent'], type='str')))\n\n module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=False)\n\n if not HAS_PYVMOMI:\n module.fail_json(msg='pyvmomi is required for this module')\n\n host_virtual_switch = VMwareHostVirtualSwitch(module)\n host_virtual_switch.process_state()\n\nfrom ansible.module_utils.vmware import *\nfrom ansible.module_utils.basic import *\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/cloud/vmware/vmware_vswitch.py"}]}
| 3,120 | 654 |
gh_patches_debug_1877
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-2921
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
local cache inconsistent after enabling short_paths in a recipe
To help us debug your issue please explain:
- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).
- [x] I've specified the Conan version, operating system version and any tool that can be relevant.
- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.
Conan Version 1.3.3
Windows 10
With a package in the local cache whose recipe does NOT have `short_paths=True`, modify the recipe in the normal development folder to set `short_paths=True` and run `conan create`.
Folders in the local cache become inconsistent, showing both the folders from the previous `conan create` run and `.conan_link` files pointing to the short-path folders.
This seems not to affect conan tool behavior when running commands, and it works well if `short_paths` is removed once again.
</issue>
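A minimal recipe that reproduces the toggle described above; the package name and version are placeholders, and `short_paths` is the standard ConanFile attribute. Nothing else in the recipe matters for the reproduction.

```python
from conans import ConanFile


class DemoConan(ConanFile):
    # Placeholder name/version; any package that already exists in the local
    # cache works. Flipping short_paths to True on a recipe that was
    # previously created without it is what leaves the cache showing both the
    # old long-path folders and the new .conan_link redirections.
    name = "demo"
    version = "0.1"
    short_paths = True
```

The resolution visible in the diff further down clears any pre-existing long-path folder before the `.conan_link` redirection is written, which keeps the two layouts from coexisting.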
<code>
[start of conans/util/windows.py]
1 import os
2 import subprocess
3
4 from conans.util.files import load, mkdir, save, rmdir
5 import tempfile
6
7
8 CONAN_LINK = ".conan_link"
9
10
11 def conan_expand_user(path):
12 """ wrapper to the original expanduser function, to workaround python returning
13 verbatim %USERPROFILE% when some other app (git for windows) sets HOME envvar
14 """
15 # In win these variables should exist and point to user directory, which
16 # must exist. Using context to avoid permanent modification of os.environ
17 old_env = dict(os.environ)
18 try:
19 home = os.environ.get("HOME")
20 # Problematic cases of wrong HOME variable
21 # - HOME = %USERPROFILE% verbatim, as messed by some other tools
22 # - MSYS console, that defines a different user home in /c/mingw/msys/users/xxx
23 # In these cases, it is safe to remove it and rely on USERPROFILE directly
24 if home and (not os.path.exists(home) or
25 (os.getenv("MSYSTEM") and os.getenv("USERPROFILE"))):
26 del os.environ["HOME"]
27 result = os.path.expanduser(path)
28 finally:
29 os.environ.clear()
30 os.environ.update(old_env)
31 return result
32
33
34 def path_shortener(path, short_paths):
35 """ short_paths is 4-state:
36 False: Never shorten the path
37 True: Always shorten the path, create link if not existing
38 None: Use shorten path only if already exists, not create
39 """
40 if short_paths is False or os.getenv("CONAN_USER_HOME_SHORT") == "None":
41 return path
42 link = os.path.join(path, CONAN_LINK)
43 if os.path.exists(link):
44 return load(link)
45 elif short_paths is None:
46 return path
47
48 short_home = os.getenv("CONAN_USER_HOME_SHORT")
49 if not short_home:
50 drive = os.path.splitdrive(path)[0]
51 short_home = drive + "/.conan"
52 mkdir(short_home)
53
54 # Workaround for short_home living in NTFS file systems. Give full control permission to current user to avoid
55 # access problems in cygwin/msys2 windows subsystems when using short_home folder
56 try:
57 username = os.getenv("USERDOMAIN")
58 domainname = "%s\%s" % (username, os.environ["USERNAME"]) if username else os.environ["USERNAME"]
59 cmd = r'cacls %s /E /G "%s":F' % (short_home, domainname)
60 subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet
61 except subprocess.CalledProcessError:
62 # cmd can fail if trying to set ACL in non NTFS drives, ignoring it.
63 pass
64
65 redirect = tempfile.mkdtemp(dir=short_home, prefix="")
66 # This "1" is the way to have a non-existing directory, so commands like
67 # shutil.copytree() to it, works. It can be removed without compromising the
68 # temp folder generator and conan-links consistency
69 redirect = os.path.join(redirect, "1")
70 save(link, redirect)
71 return redirect
72
73
74 def ignore_long_path_files(src_folder, build_folder, output):
75 def _filter(src, files):
76 filtered_files = []
77 for the_file in files:
78 source_path = os.path.join(src, the_file)
79 # Without storage path, just relative
80 rel_path = os.path.relpath(source_path, src_folder)
81 dest_path = os.path.normpath(os.path.join(build_folder, rel_path))
82 # it is NOT that "/" is counted as "\\" so it counts double
83 # seems a bug in python, overflows paths near the limit of 260,
84 if len(dest_path) >= 249:
85 filtered_files.append(the_file)
86 output.warn("Filename too long, file excluded: %s" % dest_path)
87 return filtered_files
88 return _filter
89
90
91 def rm_conandir(path):
92 """removal of a directory that might contain a link to a short path"""
93 link = os.path.join(path, CONAN_LINK)
94 if os.path.exists(link):
95 short_path = load(link)
96 rmdir(os.path.dirname(short_path))
97 rmdir(path)
98
[end of conans/util/windows.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/util/windows.py b/conans/util/windows.py
--- a/conans/util/windows.py
+++ b/conans/util/windows.py
@@ -45,6 +45,9 @@
elif short_paths is None:
return path
+ if os.path.exists(path):
+ rmdir(path)
+
short_home = os.getenv("CONAN_USER_HOME_SHORT")
if not short_home:
drive = os.path.splitdrive(path)[0]
|
{"golden_diff": "diff --git a/conans/util/windows.py b/conans/util/windows.py\n--- a/conans/util/windows.py\n+++ b/conans/util/windows.py\n@@ -45,6 +45,9 @@\n elif short_paths is None:\n return path\n \n+ if os.path.exists(path):\n+ rmdir(path)\n+\n short_home = os.getenv(\"CONAN_USER_HOME_SHORT\")\n if not short_home:\n drive = os.path.splitdrive(path)[0]\n", "issue": "local cache inconsistent after enabling short_paths in a recipe\nTo help us debug your issue please explain:\r\n\r\n- [x] I've read the [CONTRIBUTING guide](https://raw.githubusercontent.com/conan-io/conan/develop/.github/CONTRIBUTING.md).\r\n- [x] I've specified the Conan version, operating system version and any tool that can be relevant.\r\n- [x] I've explained the steps to reproduce the error or the motivation/use case of the question/suggestion.\r\n\r\nConan Version 1.3.3\r\nWindows 10\r\n\r\nWith a package in local cache whose recipe does NOT have `short_paths=True`, modify in normal development folder the recipe and set `short_paths=True` and run conan create.\r\n\r\nFolders in local cache become inconsistent showing both folders from previous conan create run and .conan_link files pointing to the short paths folders.\r\n\r\nThis seems no not affect conan tool behavior when running commands and works well if `short_paths` is removed once again.\r\n\r\n\n", "before_files": [{"content": "import os\nimport subprocess\n\nfrom conans.util.files import load, mkdir, save, rmdir\nimport tempfile\n\n\nCONAN_LINK = \".conan_link\"\n\n\ndef conan_expand_user(path):\n \"\"\" wrapper to the original expanduser function, to workaround python returning\n verbatim %USERPROFILE% when some other app (git for windows) sets HOME envvar\n \"\"\"\n # In win these variables should exist and point to user directory, which\n # must exist. Using context to avoid permanent modification of os.environ\n old_env = dict(os.environ)\n try:\n home = os.environ.get(\"HOME\")\n # Problematic cases of wrong HOME variable\n # - HOME = %USERPROFILE% verbatim, as messed by some other tools\n # - MSYS console, that defines a different user home in /c/mingw/msys/users/xxx\n # In these cases, it is safe to remove it and rely on USERPROFILE directly\n if home and (not os.path.exists(home) or\n (os.getenv(\"MSYSTEM\") and os.getenv(\"USERPROFILE\"))):\n del os.environ[\"HOME\"]\n result = os.path.expanduser(path)\n finally:\n os.environ.clear()\n os.environ.update(old_env)\n return result\n\n\ndef path_shortener(path, short_paths):\n \"\"\" short_paths is 4-state:\n False: Never shorten the path\n True: Always shorten the path, create link if not existing\n None: Use shorten path only if already exists, not create\n \"\"\"\n if short_paths is False or os.getenv(\"CONAN_USER_HOME_SHORT\") == \"None\":\n return path\n link = os.path.join(path, CONAN_LINK)\n if os.path.exists(link):\n return load(link)\n elif short_paths is None:\n return path\n\n short_home = os.getenv(\"CONAN_USER_HOME_SHORT\")\n if not short_home:\n drive = os.path.splitdrive(path)[0]\n short_home = drive + \"/.conan\"\n mkdir(short_home)\n\n # Workaround for short_home living in NTFS file systems. 
Give full control permission to current user to avoid\n # access problems in cygwin/msys2 windows subsystems when using short_home folder\n try:\n username = os.getenv(\"USERDOMAIN\")\n domainname = \"%s\\%s\" % (username, os.environ[\"USERNAME\"]) if username else os.environ[\"USERNAME\"]\n cmd = r'cacls %s /E /G \"%s\":F' % (short_home, domainname)\n subprocess.check_output(cmd, stderr=subprocess.STDOUT) # Ignoring any returned output, make command quiet\n except subprocess.CalledProcessError:\n # cmd can fail if trying to set ACL in non NTFS drives, ignoring it.\n pass\n\n redirect = tempfile.mkdtemp(dir=short_home, prefix=\"\")\n # This \"1\" is the way to have a non-existing directory, so commands like\n # shutil.copytree() to it, works. It can be removed without compromising the\n # temp folder generator and conan-links consistency\n redirect = os.path.join(redirect, \"1\")\n save(link, redirect)\n return redirect\n\n\ndef ignore_long_path_files(src_folder, build_folder, output):\n def _filter(src, files):\n filtered_files = []\n for the_file in files:\n source_path = os.path.join(src, the_file)\n # Without storage path, just relative\n rel_path = os.path.relpath(source_path, src_folder)\n dest_path = os.path.normpath(os.path.join(build_folder, rel_path))\n # it is NOT that \"/\" is counted as \"\\\\\" so it counts double\n # seems a bug in python, overflows paths near the limit of 260,\n if len(dest_path) >= 249:\n filtered_files.append(the_file)\n output.warn(\"Filename too long, file excluded: %s\" % dest_path)\n return filtered_files\n return _filter\n\n\ndef rm_conandir(path):\n \"\"\"removal of a directory that might contain a link to a short path\"\"\"\n link = os.path.join(path, CONAN_LINK)\n if os.path.exists(link):\n short_path = load(link)\n rmdir(os.path.dirname(short_path))\n rmdir(path)\n", "path": "conans/util/windows.py"}]}
| 1,852 | 100 |
gh_patches_debug_28256
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-8355
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
feature: Install multiple plugins of any type
### Feature scope
CLI (options, error messages, logging, etc.)
### Description
Currently, you can only leverage `meltano install` in parallel for all plugin types or all plugins of a specific type:
```sh
# all plugins
meltano install
# all extractors
meltano install [extractor|extractors]
# all loaders
meltano install [loader|loaders]
```
It would be great if you could install multiple plugins of any type - something like:
```sh
meltano install <extractor> <loader> <transformer> <utility>
```
This change would remove the need to specify a plugin type at all, since a plugin name is already unique within a Meltano project. However, this is currently not possible without a breaking change, since a plugin type is required as the first argument to `meltano install` when specifying plugin names. #8228 introduced the `--from-file` option for `meltano config <plugin> set`, which accepts the special character `-` to refer to stdin; `meltano install` could reuse this or a similar concept to skip the plugin type argument and still leverage parallel install:
```sh
meltano install - <extractor> <loader> <transformer> <utility>
```
Once a convention is established, this feature could be extended to `meltano remove` as well.
I have a POC of this working for `meltano install` locally, so happy to open a PR.
</issue>
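One possible shape of the proposal, written against the `install` command body shown in the code below; the `-` sentinel and the exact branching are assumptions for illustration and not necessarily what the project will ship.

```python
# Treat "-" (or no argument at all) as "no plugin type", then filter the full
# plugin list by whatever names were passed on the command line.
if plugin_type and plugin_type != "-":
    plugin_type = PluginType.from_cli_argument(plugin_type)
    plugins = project.plugins.get_plugins_of_type(plugin_type)
else:
    plugins = list(project.plugins.plugins())

if plugin_name:
    plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
```

For this to parse, the `plugin_type` argument would also need `-` added to its allowed values, for example `click.Choice((*PluginType.cli_arguments(), "-"))`.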
<code>
[start of src/meltano/cli/install.py]
1 """CLI command `meltano install`."""
2
3 from __future__ import annotations
4
5 import typing as t
6
7 import click
8 import structlog
9
10 from meltano.cli.params import pass_project
11 from meltano.cli.utils import CliError, PartialInstrumentedCmd, install_plugins
12 from meltano.core.block.parser import BlockParser
13 from meltano.core.plugin import PluginType
14 from meltano.core.schedule_service import ScheduleService
15 from meltano.core.tracking.contexts import CliEvent, PluginsTrackingContext
16
17 if t.TYPE_CHECKING:
18 from meltano.core.project import Project
19 from meltano.core.tracking import Tracker
20
21 logger = structlog.getLogger(__name__)
22
23
24 @click.command(cls=PartialInstrumentedCmd, short_help="Install project dependencies.")
25 @click.argument(
26 "plugin_type",
27 type=click.Choice(PluginType.cli_arguments()),
28 required=False,
29 )
30 @click.argument("plugin_name", nargs=-1, required=False)
31 @click.option(
32 "--clean",
33 is_flag=True,
34 help="Completely reinstall a plugin rather than simply upgrading if necessary.",
35 )
36 @click.option(
37 "--parallelism",
38 "-p",
39 type=click.INT,
40 default=None,
41 help=(
42 "Limit the number of plugins to install in parallel. "
43 "Defaults to the number of cores."
44 ),
45 )
46 @click.option(
47 "--force",
48 "-f",
49 is_flag=True,
50 help="Ignore the required Python version declared by the plugins.",
51 )
52 @click.option(
53 "--schedule",
54 "-s",
55 "schedule_name",
56 help="Install all plugins from the given schedule.",
57 )
58 @click.pass_context
59 @pass_project(migrate=True)
60 def install( # noqa: C901
61 project: Project,
62 ctx: click.Context,
63 plugin_type: str,
64 plugin_name: str,
65 clean: bool,
66 parallelism: int,
67 force: bool,
68 schedule_name: str,
69 ):
70 """
71 Install all the dependencies of your project based on the meltano.yml file.
72
73 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#install
74 """
75 tracker: Tracker = ctx.obj["tracker"]
76 try:
77 if plugin_type:
78 plugin_type = PluginType.from_cli_argument(plugin_type)
79 plugins = project.plugins.get_plugins_of_type(plugin_type)
80 if plugin_name:
81 plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
82 else:
83 plugins = list(project.plugins.plugins())
84
85 if schedule_name:
86 schedule_plugins = _get_schedule_plugins(
87 ctx.obj["project"],
88 schedule_name,
89 )
90 plugins = list(set(plugins) & set(schedule_plugins))
91 except Exception:
92 tracker.track_command_event(CliEvent.aborted)
93 raise
94
95 click.echo(f"Installing {len(plugins)} plugins...")
96 tracker.add_contexts(
97 PluginsTrackingContext([(candidate, None) for candidate in plugins]),
98 )
99 tracker.track_command_event(CliEvent.inflight)
100
101 success = install_plugins(
102 project,
103 plugins,
104 parallelism=parallelism,
105 clean=clean,
106 force=force,
107 )
108 if not success:
109 tracker.track_command_event(CliEvent.failed)
110 raise CliError("Failed to install plugin(s)") # noqa: EM101
111 tracker.track_command_event(CliEvent.completed)
112
113
114 def _get_schedule_plugins(project: Project, schedule_name: str):
115 schedule_service = ScheduleService(project)
116 schedule_obj = schedule_service.find_schedule(schedule_name)
117 schedule_plugins = set()
118 if schedule_obj.elt_schedule:
119 for plugin_name in (schedule_obj.extractor, schedule_obj.loader):
120 schedule_plugins.add(project.plugins.find_plugin(plugin_name))
121 else:
122 task_sets = schedule_service.task_sets_service.get(schedule_obj.job)
123 for blocks in task_sets.flat_args_per_set:
124 parser = BlockParser(logger, project, blocks)
125 for plugin in parser.plugins:
126 schedule_plugins.add(
127 project.plugins.find_plugin(plugin.info.get("name"))
128 if plugin.type == PluginType.MAPPERS
129 else plugin,
130 )
131 return schedule_plugins
132
[end of src/meltano/cli/install.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/cli/install.py b/src/meltano/cli/install.py
--- a/src/meltano/cli/install.py
+++ b/src/meltano/cli/install.py
@@ -18,13 +18,15 @@
from meltano.core.project import Project
from meltano.core.tracking import Tracker
+ANY = "-"
+
logger = structlog.getLogger(__name__)
@click.command(cls=PartialInstrumentedCmd, short_help="Install project dependencies.")
@click.argument(
"plugin_type",
- type=click.Choice(PluginType.cli_arguments()),
+ type=click.Choice((*PluginType.cli_arguments(), ANY)),
required=False,
)
@click.argument("plugin_name", nargs=-1, required=False)
@@ -74,14 +76,15 @@
"""
tracker: Tracker = ctx.obj["tracker"]
try:
- if plugin_type:
+ if plugin_type and plugin_type != ANY:
plugin_type = PluginType.from_cli_argument(plugin_type)
plugins = project.plugins.get_plugins_of_type(plugin_type)
- if plugin_name:
- plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
else:
plugins = list(project.plugins.plugins())
+ if plugin_name:
+ plugins = [plugin for plugin in plugins if plugin.name in plugin_name]
+
if schedule_name:
schedule_plugins = _get_schedule_plugins(
ctx.obj["project"],
|
{"golden_diff": "diff --git a/src/meltano/cli/install.py b/src/meltano/cli/install.py\n--- a/src/meltano/cli/install.py\n+++ b/src/meltano/cli/install.py\n@@ -18,13 +18,15 @@\n from meltano.core.project import Project\n from meltano.core.tracking import Tracker\n \n+ANY = \"-\"\n+\n logger = structlog.getLogger(__name__)\n \n \n @click.command(cls=PartialInstrumentedCmd, short_help=\"Install project dependencies.\")\n @click.argument(\n \"plugin_type\",\n- type=click.Choice(PluginType.cli_arguments()),\n+ type=click.Choice((*PluginType.cli_arguments(), ANY)),\n required=False,\n )\n @click.argument(\"plugin_name\", nargs=-1, required=False)\n@@ -74,14 +76,15 @@\n \"\"\"\n tracker: Tracker = ctx.obj[\"tracker\"]\n try:\n- if plugin_type:\n+ if plugin_type and plugin_type != ANY:\n plugin_type = PluginType.from_cli_argument(plugin_type)\n plugins = project.plugins.get_plugins_of_type(plugin_type)\n- if plugin_name:\n- plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n else:\n plugins = list(project.plugins.plugins())\n \n+ if plugin_name:\n+ plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n+\n if schedule_name:\n schedule_plugins = _get_schedule_plugins(\n ctx.obj[\"project\"],\n", "issue": "feature: Install multiple plugins of any type\n### Feature scope\r\n\r\nCLI (options, error messages, logging, etc.)\r\n\r\n### Description\r\n\r\nCurrently, you can only leverage `meltano install` in parallel for all plugin types or all plugins of a specific type:\r\n\r\n```sh\r\n# all plugins\r\nmeltano install\r\n\r\n# all extractors\r\nmeltano install [extractor|extractors]\r\n\r\n# all loaders\r\nmeltano install [loader|loaders]\r\n```\r\n\r\nIt would be great if you could install multiple plugins of any type - something like:\r\n\r\n```sh\r\nmeltano install <extractor> <loader> <transformer> <utility>\r\n```\r\n\r\nThis change would remove the need to specify a plugin type at all, since a plugin name is already unique to a Meltano project. This is currently not possible without a breaking change however, since a plugin type is required as the first argument to `meltano install` when specifying plugin names. 
#8228 introduced the `--from-file` option for `meltano config <plugin> set`, which accepts a special character `-` to refer to stdin - `meltano install` could reuse this or a similar concept to skip the plugin type argument and leverage parallel install:\r\n\r\n```sh\r\nmeltano install - <extractor> <loader> <transformer> <utility>\r\n```\r\n\r\nOnce a convention is established, this feature could be extended to `meltano remove` as well.\r\n\r\nI have a POC of this working for `meltano install` locally, so happy to open a PR.\n", "before_files": [{"content": "\"\"\"CLI command `meltano install`.\"\"\"\n\nfrom __future__ import annotations\n\nimport typing as t\n\nimport click\nimport structlog\n\nfrom meltano.cli.params import pass_project\nfrom meltano.cli.utils import CliError, PartialInstrumentedCmd, install_plugins\nfrom meltano.core.block.parser import BlockParser\nfrom meltano.core.plugin import PluginType\nfrom meltano.core.schedule_service import ScheduleService\nfrom meltano.core.tracking.contexts import CliEvent, PluginsTrackingContext\n\nif t.TYPE_CHECKING:\n from meltano.core.project import Project\n from meltano.core.tracking import Tracker\n\nlogger = structlog.getLogger(__name__)\n\n\[email protected](cls=PartialInstrumentedCmd, short_help=\"Install project dependencies.\")\[email protected](\n \"plugin_type\",\n type=click.Choice(PluginType.cli_arguments()),\n required=False,\n)\[email protected](\"plugin_name\", nargs=-1, required=False)\[email protected](\n \"--clean\",\n is_flag=True,\n help=\"Completely reinstall a plugin rather than simply upgrading if necessary.\",\n)\[email protected](\n \"--parallelism\",\n \"-p\",\n type=click.INT,\n default=None,\n help=(\n \"Limit the number of plugins to install in parallel. \"\n \"Defaults to the number of cores.\"\n ),\n)\[email protected](\n \"--force\",\n \"-f\",\n is_flag=True,\n help=\"Ignore the required Python version declared by the plugins.\",\n)\[email protected](\n \"--schedule\",\n \"-s\",\n \"schedule_name\",\n help=\"Install all plugins from the given schedule.\",\n)\[email protected]_context\n@pass_project(migrate=True)\ndef install( # noqa: C901\n project: Project,\n ctx: click.Context,\n plugin_type: str,\n plugin_name: str,\n clean: bool,\n parallelism: int,\n force: bool,\n schedule_name: str,\n):\n \"\"\"\n Install all the dependencies of your project based on the meltano.yml file.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#install\n \"\"\"\n tracker: Tracker = ctx.obj[\"tracker\"]\n try:\n if plugin_type:\n plugin_type = PluginType.from_cli_argument(plugin_type)\n plugins = project.plugins.get_plugins_of_type(plugin_type)\n if plugin_name:\n plugins = [plugin for plugin in plugins if plugin.name in plugin_name]\n else:\n plugins = list(project.plugins.plugins())\n\n if schedule_name:\n schedule_plugins = _get_schedule_plugins(\n ctx.obj[\"project\"],\n schedule_name,\n )\n plugins = list(set(plugins) & set(schedule_plugins))\n except Exception:\n tracker.track_command_event(CliEvent.aborted)\n raise\n\n click.echo(f\"Installing {len(plugins)} plugins...\")\n tracker.add_contexts(\n PluginsTrackingContext([(candidate, None) for candidate in plugins]),\n )\n tracker.track_command_event(CliEvent.inflight)\n\n success = install_plugins(\n project,\n plugins,\n parallelism=parallelism,\n clean=clean,\n force=force,\n )\n if not success:\n tracker.track_command_event(CliEvent.failed)\n raise CliError(\"Failed to install plugin(s)\") # noqa: EM101\n 
tracker.track_command_event(CliEvent.completed)\n\n\ndef _get_schedule_plugins(project: Project, schedule_name: str):\n schedule_service = ScheduleService(project)\n schedule_obj = schedule_service.find_schedule(schedule_name)\n schedule_plugins = set()\n if schedule_obj.elt_schedule:\n for plugin_name in (schedule_obj.extractor, schedule_obj.loader):\n schedule_plugins.add(project.plugins.find_plugin(plugin_name))\n else:\n task_sets = schedule_service.task_sets_service.get(schedule_obj.job)\n for blocks in task_sets.flat_args_per_set:\n parser = BlockParser(logger, project, blocks)\n for plugin in parser.plugins:\n schedule_plugins.add(\n project.plugins.find_plugin(plugin.info.get(\"name\"))\n if plugin.type == PluginType.MAPPERS\n else plugin,\n )\n return schedule_plugins\n", "path": "src/meltano/cli/install.py"}]}
| 2,020 | 309 |
gh_patches_debug_31034
|
rasdani/github-patches
|
git_diff
|
python-telegram-bot__python-telegram-bot-688
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Encode error when downloading files with non-ASCII filenames.
<!--
Thanks for reporting issues of python-telegram-bot!
To make it easier for us to help you please enter detailed information below.
Please note, we only support the latest version of python-telegram-bot and
master branch. Please make sure to upgrade & recreate the issue on the latest
version prior to opening an issue.
-->
### Steps to reproduce
1. `head /dev/random > 凵冂工匚口わ巨` and send the file to a bot.
2.
```python
import telegram
b = telegram.Bot(TOKEN)
file_id = b.getUpdates()[0].message.document.file_id
b.getFile(file_id).download("./storage")
```
### Expected behaviour
Tell us what should happen
Download the file to specified directory.
### Actual behaviour
Tell us what happens instead
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/telegram/file.py", line 106, in download
self.bot.request.download(url, filename, timeout=timeout)
File "/usr/local/lib/python3.6/site-packages/telegram/utils/request.py", line 284, in download
buf = self.retrieve(url, timeout=timeout)
File "/usr/local/lib/python3.6/site-packages/telegram/utils/request.py", line 270, in retrieve
return self._request_wrapper('GET', url, **urlopen_kwargs)
File "/usr/local/lib/python3.6/site-packages/telegram/utils/request.py", line 174, in _request_wrapper
resp = self._con_pool.request(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/request.py", line 66, in request
**urlopen_kw)
File "/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/request.py", line 87, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/poolmanager.py", line 244, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 617, in urlopen
chunked=chunked)
File "/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py", line 390, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1250, in _send_request
self.putrequest(method, url, **skips)
File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1117, in putrequest
self._output(request.encode('ascii'))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 69-75: ordinal not in range(128)
'ascii' codec can't encode characters in position 69-75: ordinal not in range(128)
```
### Configuration
**Operating System:**
Tested on:
- Mac OS X 10.11
- Ubuntu 16.04
**Version of Python, python-telegram-bot & dependencies:**
``$ python -m telegram``
```
python-telegram-bot 6.0.3
urllib3 1.21.1
certifi 2017.04.17
future 0.16.0
Python 3.6.1 (default, Mar 23 2017, 16:49:01) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]
```
### Possible Solution
URL escape the "download path" given by `getFile`, then download.
</issue>
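A rough Python 3 sketch of the percent-encoding idea from the possible solution above: split the URL, quote only the path component, and reassemble it. The library itself would need a Python 2/3-compatible import, and the URL below is made up for illustration.

```python
from urllib.parse import quote, urlsplit, urlunsplit

def ascii_safe_url(url):
    """Percent-encode the path so http.client can emit an ASCII request line,
    leaving the scheme, host, query and fragment untouched."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, quote(parts.path),
                       parts.query, parts.fragment))

print(ascii_safe_url("https://api.telegram.org/file/botTOKEN/documents/凵冂工匚口わ巨"))
# https://api.telegram.org/file/botTOKEN/documents/%E5%87%B5%E5%86%82%E5%B7%A5%E5%8C%9A%E5%8F%A3%E3%82%8F%E5%B7%A8
```

Note that the local filename passed to `basename` should still come from the unescaped path, otherwise the downloaded file ends up saved under a percent-encoded name.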
<code>
[start of telegram/files/file.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2017
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains an object that represents a Telegram File."""
20
21 from os.path import basename
22
23 from telegram import TelegramObject
24
25
26 class File(TelegramObject):
27 """This object represents a Telegram File.
28
29 Attributes:
30 file_id (str):
31 file_size (str):
32 file_path (str):
33
34 Args:
35 file_id (str):
36 bot (telegram.Bot):
37 file_size (Optional[int]):
38 file_path (Optional[str]):
39 **kwargs (dict): Arbitrary keyword arguments.
40
41 """
42
43 def __init__(self, file_id, bot, file_size=None, file_path=None, **kwargs):
44 # Required
45 self.file_id = str(file_id)
46
47 # Optionals
48 self.file_size = file_size
49 if file_path:
50 self.file_path = str(file_path)
51
52 self.bot = bot
53
54 self._id_attrs = (self.file_id,)
55
56 @staticmethod
57 def de_json(data, bot):
58 """
59 Args:
60 data (dict):
61 bot (telegram.Bot):
62
63 Returns:
64 telegram.File:
65 """
66 if not data:
67 return None
68
69 return File(bot=bot, **data)
70
71 def download(self, custom_path=None, out=None, timeout=None):
72 """
73 Download this file. By default, the file is saved in the current working directory with its
74 original filename as reported by Telegram. If a ``custom_path`` is supplied, it will be
75 saved to that path instead. If ``out`` is defined, the file contents will be saved to that
76 object using the ``out.write`` method. ``custom_path`` and ``out`` are mutually exclusive.
77
78 Args:
79 custom_path (Optional[str]): Custom path.
80 out (Optional[object]): A file-like object. Must be opened in binary mode, if
81 applicable.
82 timeout (Optional[int|float]): If this value is specified, use it as the read timeout
83 from the server (instead of the one specified during creation of the connection
84 pool).
85
86 Raises:
87 ValueError: If both ``custom_path`` and ``out`` are passed.
88
89 """
90
91 if custom_path is not None and out is not None:
92 raise ValueError('custom_path and out are mutually exclusive')
93
94 url = self.file_path
95
96 if out:
97 buf = self.bot.request.retrieve(url)
98 out.write(buf)
99
100 else:
101 if custom_path:
102 filename = custom_path
103 else:
104 filename = basename(url)
105
106 self.bot.request.download(url, filename, timeout=timeout)
107
[end of telegram/files/file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram/files/file.py b/telegram/files/file.py
--- a/telegram/files/file.py
+++ b/telegram/files/file.py
@@ -17,9 +17,10 @@
# You should have received a copy of the GNU Lesser Public License
# along with this program. If not, see [http://www.gnu.org/licenses/].
"""This module contains an object that represents a Telegram File."""
-
from os.path import basename
+from future.backports.urllib import parse as urllib_parse
+
from telegram import TelegramObject
@@ -46,8 +47,7 @@
# Optionals
self.file_size = file_size
- if file_path:
- self.file_path = str(file_path)
+ self.file_path = file_path
self.bot = bot
@@ -91,7 +91,10 @@
if custom_path is not None and out is not None:
raise ValueError('custom_path and out are mutually exclusive')
- url = self.file_path
+ # Convert any UTF-8 char into a url encoded ASCII string.
+ sres = urllib_parse.urlsplit(self.file_path)
+ url = urllib_parse.urlunsplit(urllib_parse.SplitResult(
+ sres.scheme, sres.netloc, urllib_parse.quote(sres.path), sres.query, sres.fragment))
if out:
buf = self.bot.request.retrieve(url)
@@ -101,6 +104,6 @@
if custom_path:
filename = custom_path
else:
- filename = basename(url)
+ filename = basename(self.file_path)
self.bot.request.download(url, filename, timeout=timeout)
|
{"golden_diff": "diff --git a/telegram/files/file.py b/telegram/files/file.py\n--- a/telegram/files/file.py\n+++ b/telegram/files/file.py\n@@ -17,9 +17,10 @@\n # You should have received a copy of the GNU Lesser Public License\n # along with this program. If not, see [http://www.gnu.org/licenses/].\n \"\"\"This module contains an object that represents a Telegram File.\"\"\"\n-\n from os.path import basename\n \n+from future.backports.urllib import parse as urllib_parse\n+\n from telegram import TelegramObject\n \n \n@@ -46,8 +47,7 @@\n \n # Optionals\n self.file_size = file_size\n- if file_path:\n- self.file_path = str(file_path)\n+ self.file_path = file_path\n \n self.bot = bot\n \n@@ -91,7 +91,10 @@\n if custom_path is not None and out is not None:\n raise ValueError('custom_path and out are mutually exclusive')\n \n- url = self.file_path\n+ # Convert any UTF-8 char into a url encoded ASCII string.\n+ sres = urllib_parse.urlsplit(self.file_path)\n+ url = urllib_parse.urlunsplit(urllib_parse.SplitResult(\n+ sres.scheme, sres.netloc, urllib_parse.quote(sres.path), sres.query, sres.fragment))\n \n if out:\n buf = self.bot.request.retrieve(url)\n@@ -101,6 +104,6 @@\n if custom_path:\n filename = custom_path\n else:\n- filename = basename(url)\n+ filename = basename(self.file_path)\n \n self.bot.request.download(url, filename, timeout=timeout)\n", "issue": "Encode error when downloading files with non-ASCII filenames.\n<!--\r\nThanks for reporting issues of python-telegram-bot!\r\nTo make it easier for us to help you please enter detailed information below.\r\n\r\nPlease note, we only support the latest version of python-telegram-bot and\r\nmaster branch. Please make sure to upgrade & recreate the issue on the latest\r\nversion prior to opening an issue.\r\n-->\r\n### Steps to reproduce\r\n1. `head /dev/random > \u51f5\u5182\u5de5\u531a\u53e3\u308f\u5de8` and send the file to a bot.\r\n2. 
\r\n```python\r\nimport telegram\r\nb = telegram.Bot(TOKEN)\r\nfile_id = b.getUpdates()[0].message.document.file_id\r\nb.getFile(file_id).download(\"./storage\")\r\n```\r\n### Expected behaviour\r\nTell us what should happen\r\n\r\nDownload the file to specified directory.\r\n\r\n### Actual behaviour\r\nTell us what happens instead\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/file.py\", line 106, in download\r\n self.bot.request.download(url, filename, timeout=timeout)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/utils/request.py\", line 284, in download\r\n buf = self.retrieve(url, timeout=timeout)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/utils/request.py\", line 270, in retrieve\r\n return self._request_wrapper('GET', url, **urlopen_kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/utils/request.py\", line 174, in _request_wrapper\r\n resp = self._con_pool.request(*args, **kwargs)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/request.py\", line 66, in request\r\n **urlopen_kw)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/request.py\", line 87, in request_encode_url\r\n return self.urlopen(method, url, **extra_kw)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/poolmanager.py\", line 244, in urlopen\r\n response = conn.urlopen(method, u.request_uri, **kw)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py\", line 617, in urlopen\r\n chunked=chunked)\r\n File \"/usr/local/lib/python3.6/site-packages/telegram/vendor/ptb_urllib3/urllib3/connectionpool.py\", line 390, in _make_request\r\n conn.request(method, url, **httplib_request_kw)\r\n File \"/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py\", line 1239, in request\r\n self._send_request(method, url, body, headers, encode_chunked)\r\n File \"/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py\", line 1250, in _send_request\r\n self.putrequest(method, url, **skips)\r\n File \"/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py\", line 1117, in putrequest\r\n self._output(request.encode('ascii'))\r\nUnicodeEncodeError: 'ascii' codec can't encode characters in position 69-75: ordinal not in range(128)\r\n'ascii' codec can't encode characters in position 69-75: ordinal not in range(128)\r\n```\r\n\r\n### Configuration\r\n**Operating System:**\r\nTested on:\r\n- Mac OS X 10.11\r\n- Ubuntu 16.04\r\n\r\n**Version of Python, python-telegram-bot & dependencies:**\r\n\r\n``$ python -m telegram``\r\n\r\n```\r\npython-telegram-bot 6.0.3\r\nurllib3 1.21.1\r\ncertifi 2017.04.17\r\nfuture 0.16.0\r\nPython 3.6.1 (default, Mar 23 2017, 16:49:01) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]\r\n```\r\n\r\n### Possible Solution\r\nURL escape the \"download path\" given by `getFile`, then download.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2017\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, 
either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram File.\"\"\"\n\nfrom os.path import basename\n\nfrom telegram import TelegramObject\n\n\nclass File(TelegramObject):\n \"\"\"This object represents a Telegram File.\n\n Attributes:\n file_id (str):\n file_size (str):\n file_path (str):\n\n Args:\n file_id (str):\n bot (telegram.Bot):\n file_size (Optional[int]):\n file_path (Optional[str]):\n **kwargs (dict): Arbitrary keyword arguments.\n\n \"\"\"\n\n def __init__(self, file_id, bot, file_size=None, file_path=None, **kwargs):\n # Required\n self.file_id = str(file_id)\n\n # Optionals\n self.file_size = file_size\n if file_path:\n self.file_path = str(file_path)\n\n self.bot = bot\n\n self._id_attrs = (self.file_id,)\n\n @staticmethod\n def de_json(data, bot):\n \"\"\"\n Args:\n data (dict):\n bot (telegram.Bot):\n\n Returns:\n telegram.File:\n \"\"\"\n if not data:\n return None\n\n return File(bot=bot, **data)\n\n def download(self, custom_path=None, out=None, timeout=None):\n \"\"\"\n Download this file. By default, the file is saved in the current working directory with its\n original filename as reported by Telegram. If a ``custom_path`` is supplied, it will be\n saved to that path instead. If ``out`` is defined, the file contents will be saved to that\n object using the ``out.write`` method. ``custom_path`` and ``out`` are mutually exclusive.\n\n Args:\n custom_path (Optional[str]): Custom path.\n out (Optional[object]): A file-like object. Must be opened in binary mode, if\n applicable.\n timeout (Optional[int|float]): If this value is specified, use it as the read timeout\n from the server (instead of the one specified during creation of the connection\n pool).\n\n Raises:\n ValueError: If both ``custom_path`` and ``out`` are passed.\n\n \"\"\"\n\n if custom_path is not None and out is not None:\n raise ValueError('custom_path and out are mutually exclusive')\n\n url = self.file_path\n\n if out:\n buf = self.bot.request.retrieve(url)\n out.write(buf)\n\n else:\n if custom_path:\n filename = custom_path\n else:\n filename = basename(url)\n\n self.bot.request.download(url, filename, timeout=timeout)\n", "path": "telegram/files/file.py"}]}
| 2,520 | 365 |
gh_patches_debug_27992
|
rasdani/github-patches
|
git_diff
|
jazzband__pip-tools-1919
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip-sync 7.0.0 reinstalls packages due to case difference
#### Description
pip-sync 7.0.0 reinstalls all packages which have non-lowercase letters in their names
For instance in `*.in` and `*.txt` files we have the following data:
```
django
```
and
```
django==4.2.0
...
```
When we run pip sync, it will uninstall and install django because in `pip-freeze` the same package is mentioned as `Django`
#### Environment Versions
| Required | Information |
| ---------- | -------------- |
| OS Type | MacOs/Linux |
| Python version: `$ python -V` | Python 3.11.4 |
| pip version: `$ pip --version`: |pip 23.2 from /home/user/venv/lib/python3.11/site-packages/pip (python 3.11) |
| pip-tools version: `$ pip-compile --version` | pip-compile, version 7.0.0 |
#### Steps to replicate
1. Create `*.in` file with `django` requirement
2. Compile requirements to generate `*.txt` file
3. Run pip-sync on the txt file
4. Run pip-sync on the txt file again
#### Expected result
The output should be `Everything up-to-date`
#### Actual result
The requirement is reinstalled every time you run `pip-sync`
</issue>
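The mismatch comes down to name normalization: PEP 503 treats `Django` and `django` as the same project, and the `packaging` library exposes that rule directly. A tiny sketch (assumes the third-party `packaging` package is installed; the version string is illustrative):

```python
from packaging.utils import canonicalize_name

# "Django" as reported by `pip freeze` and "django" as written in the
# requirements file normalize to the same key, so they compare equal.
installed = {canonicalize_name("Django"): "4.2.0"}
required = {canonicalize_name("django"): "4.2.0"}

print(canonicalize_name("Django"))  # django
print(installed == required)        # True
```

Comparing canonicalized keys on both sides of the diff avoids spurious uninstall/reinstall cycles for any package whose published name contains capitals, dots, or underscores.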
<code>
[start of piptools/sync.py]
1 from __future__ import annotations
2
3 import collections
4 import os
5 import sys
6 import tempfile
7 from subprocess import run # nosec
8 from typing import Deque, Iterable, Mapping, ValuesView
9
10 import click
11 from pip._internal.models.direct_url import ArchiveInfo
12 from pip._internal.req import InstallRequirement
13 from pip._internal.utils.compat import stdlib_pkgs
14 from pip._internal.utils.direct_url_helpers import (
15 direct_url_as_pep440_direct_reference,
16 direct_url_from_link,
17 )
18
19 from ._compat import Distribution, get_dev_pkgs
20 from .exceptions import IncompatibleRequirements
21 from .logging import log
22 from .utils import (
23 flat_map,
24 format_requirement,
25 get_hashes_from_ireq,
26 is_url_requirement,
27 key_from_ireq,
28 key_from_req,
29 )
30
31 PACKAGES_TO_IGNORE = [
32 "-markerlib",
33 "pip",
34 "pip-tools",
35 "pip-review",
36 "pkg-resources",
37 *stdlib_pkgs,
38 *get_dev_pkgs(),
39 ]
40
41
42 def dependency_tree(
43 installed_keys: Mapping[str, Distribution], root_key: str
44 ) -> set[str]:
45 """
46 Calculate the dependency tree for the package `root_key` and return
47 a collection of all its dependencies. Uses a DFS traversal algorithm.
48
49 `installed_keys` should be a {key: requirement} mapping, e.g.
50 {'django': from_line('django==1.8')}
51 `root_key` should be the key to return the dependency tree for.
52 """
53 dependencies = set()
54 queue: Deque[Distribution] = collections.deque()
55
56 if root_key in installed_keys:
57 dep = installed_keys[root_key]
58 queue.append(dep)
59
60 while queue:
61 v = queue.popleft()
62 key = v.key
63 if key in dependencies:
64 continue
65
66 dependencies.add(key)
67
68 for dep_specifier in v.requires:
69 dep_name = key_from_req(dep_specifier)
70 if dep_name in installed_keys:
71 dep = installed_keys[dep_name]
72
73 if dep_specifier.specifier.contains(dep.version):
74 queue.append(dep)
75
76 return dependencies
77
78
79 def get_dists_to_ignore(installed: Iterable[Distribution]) -> list[str]:
80 """
81 Returns a collection of package names to ignore when performing pip-sync,
82 based on the currently installed environment. For example, when pip-tools
83 is installed in the local environment, it should be ignored, including all
84 of its dependencies (e.g. click). When pip-tools is not installed
85 locally, click should also be installed/uninstalled depending on the given
86 requirements.
87 """
88 installed_keys = {r.key: r for r in installed}
89 return list(
90 flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE)
91 )
92
93
94 def merge(
95 requirements: Iterable[InstallRequirement], ignore_conflicts: bool
96 ) -> ValuesView[InstallRequirement]:
97 by_key: dict[str, InstallRequirement] = {}
98
99 for ireq in requirements:
100 # Limitation: URL requirements are merged by precise string match, so
101 # "file:///example.zip#egg=example", "file:///example.zip", and
102 # "example==1.0" will not merge with each other
103 if ireq.match_markers():
104 key = key_from_ireq(ireq)
105
106 if not ignore_conflicts:
107 existing_ireq = by_key.get(key)
108 if existing_ireq:
109 # NOTE: We check equality here since we can assume that the
110 # requirements are all pinned
111 if (
112 ireq.req
113 and existing_ireq.req
114 and ireq.specifier != existing_ireq.specifier
115 ):
116 raise IncompatibleRequirements(ireq, existing_ireq)
117
118 # TODO: Always pick the largest specifier in case of a conflict
119 by_key[key] = ireq
120 return by_key.values()
121
122
123 def diff_key_from_ireq(ireq: InstallRequirement) -> str:
124 """
125 Calculate a key for comparing a compiled requirement with installed modules.
126 For URL requirements, only provide a useful key if the url includes
127 a hash, e.g. #sha1=..., in any of the supported hash algorithms.
128 Otherwise return ireq.link so the key will not match and the package will
129 reinstall. Reinstall is necessary to ensure that packages will reinstall
130 if the contents at the URL have changed but the version has not.
131 """
132 if is_url_requirement(ireq):
133 if getattr(ireq.req, "name", None) and ireq.link.has_hash:
134 return str(
135 direct_url_as_pep440_direct_reference(
136 direct_url_from_link(ireq.link), ireq.req.name
137 )
138 )
139 # TODO: Also support VCS and editable installs.
140 return str(ireq.link)
141 return key_from_ireq(ireq)
142
143
144 def diff_key_from_req(req: Distribution) -> str:
145 """Get a unique key for the requirement."""
146 key = req.key
147 if (
148 req.direct_url
149 and isinstance(req.direct_url.info, ArchiveInfo)
150 and req.direct_url.info.hash
151 ):
152 key = direct_url_as_pep440_direct_reference(req.direct_url, key)
153 # TODO: Also support VCS and editable installs.
154 return key
155
156
157 def diff(
158 compiled_requirements: Iterable[InstallRequirement],
159 installed_dists: Iterable[Distribution],
160 ) -> tuple[set[InstallRequirement], set[str]]:
161 """
162 Calculate which packages should be installed or uninstalled, given a set
163 of compiled requirements and a list of currently installed modules.
164 """
165 requirements_lut = {diff_key_from_ireq(r): r for r in compiled_requirements}
166
167 satisfied = set() # holds keys
168 to_install = set() # holds InstallRequirement objects
169 to_uninstall = set() # holds keys
170
171 pkgs_to_ignore = get_dists_to_ignore(installed_dists)
172 for dist in installed_dists:
173 key = diff_key_from_req(dist)
174 if key not in requirements_lut or not requirements_lut[key].match_markers():
175 to_uninstall.add(key)
176 elif requirements_lut[key].specifier.contains(dist.version):
177 satisfied.add(key)
178
179 for key, requirement in requirements_lut.items():
180 if key not in satisfied and requirement.match_markers():
181 to_install.add(requirement)
182
183 # Make sure to not uninstall any packages that should be ignored
184 to_uninstall -= set(pkgs_to_ignore)
185
186 return (to_install, to_uninstall)
187
188
189 def sync(
190 to_install: Iterable[InstallRequirement],
191 to_uninstall: Iterable[InstallRequirement],
192 dry_run: bool = False,
193 install_flags: list[str] | None = None,
194 ask: bool = False,
195 python_executable: str | None = None,
196 ) -> int:
197 """
198 Install and uninstalls the given sets of modules.
199 """
200 exit_code = 0
201
202 python_executable = python_executable or sys.executable
203
204 if not to_uninstall and not to_install:
205 log.info("Everything up-to-date", err=False)
206 return exit_code
207
208 pip_flags = []
209 if log.verbosity < 0:
210 pip_flags += ["-q"]
211
212 if ask:
213 dry_run = True
214
215 if dry_run:
216 if to_uninstall:
217 click.echo("Would uninstall:")
218 for pkg in sorted(to_uninstall):
219 click.echo(f" {pkg}")
220
221 if to_install:
222 click.echo("Would install:")
223 for ireq in sorted(to_install, key=key_from_ireq):
224 click.echo(f" {format_requirement(ireq)}")
225
226 exit_code = 1
227
228 if ask and click.confirm("Would you like to proceed with these changes?"):
229 dry_run = False
230 exit_code = 0
231
232 if not dry_run:
233 if to_uninstall:
234 run( # nosec
235 [
236 python_executable,
237 "-m",
238 "pip",
239 "uninstall",
240 "-y",
241 *pip_flags,
242 *sorted(to_uninstall),
243 ],
244 check=True,
245 )
246
247 if to_install:
248 if install_flags is None:
249 install_flags = []
250 # prepare requirement lines
251 req_lines = []
252 for ireq in sorted(to_install, key=key_from_ireq):
253 ireq_hashes = get_hashes_from_ireq(ireq)
254 req_lines.append(format_requirement(ireq, hashes=ireq_hashes))
255
256 # save requirement lines to a temporary file
257 tmp_req_file = tempfile.NamedTemporaryFile(mode="wt", delete=False)
258 tmp_req_file.write("\n".join(req_lines))
259 tmp_req_file.close()
260
261 try:
262 run( # nosec
263 [
264 python_executable,
265 "-m",
266 "pip",
267 "install",
268 "-r",
269 tmp_req_file.name,
270 *pip_flags,
271 *install_flags,
272 ],
273 check=True,
274 )
275 finally:
276 os.unlink(tmp_req_file.name)
277
278 return exit_code
279
[end of piptools/sync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/piptools/sync.py b/piptools/sync.py
--- a/piptools/sync.py
+++ b/piptools/sync.py
@@ -15,6 +15,7 @@
direct_url_as_pep440_direct_reference,
direct_url_from_link,
)
+from pip._vendor.packaging.utils import canonicalize_name
from ._compat import Distribution, get_dev_pkgs
from .exceptions import IncompatibleRequirements
@@ -59,7 +60,7 @@
while queue:
v = queue.popleft()
- key = v.key
+ key = str(canonicalize_name(v.key))
if key in dependencies:
continue
@@ -85,7 +86,7 @@
locally, click should also be installed/uninstalled depending on the given
requirements.
"""
- installed_keys = {r.key: r for r in installed}
+ installed_keys = {str(canonicalize_name(r.key)): r for r in installed}
return list(
flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE)
)
@@ -143,7 +144,7 @@
def diff_key_from_req(req: Distribution) -> str:
"""Get a unique key for the requirement."""
- key = req.key
+ key = str(canonicalize_name(req.key))
if (
req.direct_url
and isinstance(req.direct_url.info, ArchiveInfo)
|
{"golden_diff": "diff --git a/piptools/sync.py b/piptools/sync.py\n--- a/piptools/sync.py\n+++ b/piptools/sync.py\n@@ -15,6 +15,7 @@\n direct_url_as_pep440_direct_reference,\n direct_url_from_link,\n )\n+from pip._vendor.packaging.utils import canonicalize_name\n \n from ._compat import Distribution, get_dev_pkgs\n from .exceptions import IncompatibleRequirements\n@@ -59,7 +60,7 @@\n \n while queue:\n v = queue.popleft()\n- key = v.key\n+ key = str(canonicalize_name(v.key))\n if key in dependencies:\n continue\n \n@@ -85,7 +86,7 @@\n locally, click should also be installed/uninstalled depending on the given\n requirements.\n \"\"\"\n- installed_keys = {r.key: r for r in installed}\n+ installed_keys = {str(canonicalize_name(r.key)): r for r in installed}\n return list(\n flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE)\n )\n@@ -143,7 +144,7 @@\n \n def diff_key_from_req(req: Distribution) -> str:\n \"\"\"Get a unique key for the requirement.\"\"\"\n- key = req.key\n+ key = str(canonicalize_name(req.key))\n if (\n req.direct_url\n and isinstance(req.direct_url.info, ArchiveInfo)\n", "issue": "pip-sync 7.0.0 reinstalls packages due to case difference\n#### Description\r\n\r\npip-sync 7.0.0 reinstalls all packages which have non-lowercase letters in their names\r\n\r\nFor instance in `*.in` and `*.txt` files we have the following data:\r\n```\r\ndjango\r\n```\r\nand\r\n```\r\ndjango==4.2.0\r\n ...\r\n```\r\nWhen we run pip sync, it will uninstall and install django because in `pip-freeze` the same package mentioned as `Django`\r\n\r\n#### Environment Versions\r\n\r\n| Required | Information |\r\n| ---------- | -------------- |\r\n| OS Type | MacOs/Linux |\r\n| Python version: `$ python -V` | Python 3.11.4 |\r\n| pip version: `$ pip --version`: |pip 23.2 from /home/user/venv/lib/python3.11/site-packages/pip (python 3.11) |\r\n| pip-tools version: `$ pip-compile --version` | pip-compile, version 7.0.0 |\r\n\r\n#### Steps to replicate\r\n\r\n1. Create `*.in` file with `django` requirement\r\n2. Compile requirements to generate `*.txt` file\r\n3. Run pip-sync on the txt file\r\n4. Run pip-sync on the txt file again\r\n\r\n#### Expected result\r\n\r\nThe output should be `Everything up-to-date`\r\n\r\n#### Actual result\r\n\r\nThe requirement is reinstalled every time you run `pip-sync`\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport collections\nimport os\nimport sys\nimport tempfile\nfrom subprocess import run # nosec\nfrom typing import Deque, Iterable, Mapping, ValuesView\n\nimport click\nfrom pip._internal.models.direct_url import ArchiveInfo\nfrom pip._internal.req import InstallRequirement\nfrom pip._internal.utils.compat import stdlib_pkgs\nfrom pip._internal.utils.direct_url_helpers import (\n direct_url_as_pep440_direct_reference,\n direct_url_from_link,\n)\n\nfrom ._compat import Distribution, get_dev_pkgs\nfrom .exceptions import IncompatibleRequirements\nfrom .logging import log\nfrom .utils import (\n flat_map,\n format_requirement,\n get_hashes_from_ireq,\n is_url_requirement,\n key_from_ireq,\n key_from_req,\n)\n\nPACKAGES_TO_IGNORE = [\n \"-markerlib\",\n \"pip\",\n \"pip-tools\",\n \"pip-review\",\n \"pkg-resources\",\n *stdlib_pkgs,\n *get_dev_pkgs(),\n]\n\n\ndef dependency_tree(\n installed_keys: Mapping[str, Distribution], root_key: str\n) -> set[str]:\n \"\"\"\n Calculate the dependency tree for the package `root_key` and return\n a collection of all its dependencies. 
Uses a DFS traversal algorithm.\n\n `installed_keys` should be a {key: requirement} mapping, e.g.\n {'django': from_line('django==1.8')}\n `root_key` should be the key to return the dependency tree for.\n \"\"\"\n dependencies = set()\n queue: Deque[Distribution] = collections.deque()\n\n if root_key in installed_keys:\n dep = installed_keys[root_key]\n queue.append(dep)\n\n while queue:\n v = queue.popleft()\n key = v.key\n if key in dependencies:\n continue\n\n dependencies.add(key)\n\n for dep_specifier in v.requires:\n dep_name = key_from_req(dep_specifier)\n if dep_name in installed_keys:\n dep = installed_keys[dep_name]\n\n if dep_specifier.specifier.contains(dep.version):\n queue.append(dep)\n\n return dependencies\n\n\ndef get_dists_to_ignore(installed: Iterable[Distribution]) -> list[str]:\n \"\"\"\n Returns a collection of package names to ignore when performing pip-sync,\n based on the currently installed environment. For example, when pip-tools\n is installed in the local environment, it should be ignored, including all\n of its dependencies (e.g. click). When pip-tools is not installed\n locally, click should also be installed/uninstalled depending on the given\n requirements.\n \"\"\"\n installed_keys = {r.key: r for r in installed}\n return list(\n flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE)\n )\n\n\ndef merge(\n requirements: Iterable[InstallRequirement], ignore_conflicts: bool\n) -> ValuesView[InstallRequirement]:\n by_key: dict[str, InstallRequirement] = {}\n\n for ireq in requirements:\n # Limitation: URL requirements are merged by precise string match, so\n # \"file:///example.zip#egg=example\", \"file:///example.zip\", and\n # \"example==1.0\" will not merge with each other\n if ireq.match_markers():\n key = key_from_ireq(ireq)\n\n if not ignore_conflicts:\n existing_ireq = by_key.get(key)\n if existing_ireq:\n # NOTE: We check equality here since we can assume that the\n # requirements are all pinned\n if (\n ireq.req\n and existing_ireq.req\n and ireq.specifier != existing_ireq.specifier\n ):\n raise IncompatibleRequirements(ireq, existing_ireq)\n\n # TODO: Always pick the largest specifier in case of a conflict\n by_key[key] = ireq\n return by_key.values()\n\n\ndef diff_key_from_ireq(ireq: InstallRequirement) -> str:\n \"\"\"\n Calculate a key for comparing a compiled requirement with installed modules.\n For URL requirements, only provide a useful key if the url includes\n a hash, e.g. #sha1=..., in any of the supported hash algorithms.\n Otherwise return ireq.link so the key will not match and the package will\n reinstall. 
Reinstall is necessary to ensure that packages will reinstall\n if the contents at the URL have changed but the version has not.\n \"\"\"\n if is_url_requirement(ireq):\n if getattr(ireq.req, \"name\", None) and ireq.link.has_hash:\n return str(\n direct_url_as_pep440_direct_reference(\n direct_url_from_link(ireq.link), ireq.req.name\n )\n )\n # TODO: Also support VCS and editable installs.\n return str(ireq.link)\n return key_from_ireq(ireq)\n\n\ndef diff_key_from_req(req: Distribution) -> str:\n \"\"\"Get a unique key for the requirement.\"\"\"\n key = req.key\n if (\n req.direct_url\n and isinstance(req.direct_url.info, ArchiveInfo)\n and req.direct_url.info.hash\n ):\n key = direct_url_as_pep440_direct_reference(req.direct_url, key)\n # TODO: Also support VCS and editable installs.\n return key\n\n\ndef diff(\n compiled_requirements: Iterable[InstallRequirement],\n installed_dists: Iterable[Distribution],\n) -> tuple[set[InstallRequirement], set[str]]:\n \"\"\"\n Calculate which packages should be installed or uninstalled, given a set\n of compiled requirements and a list of currently installed modules.\n \"\"\"\n requirements_lut = {diff_key_from_ireq(r): r for r in compiled_requirements}\n\n satisfied = set() # holds keys\n to_install = set() # holds InstallRequirement objects\n to_uninstall = set() # holds keys\n\n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n for dist in installed_dists:\n key = diff_key_from_req(dist)\n if key not in requirements_lut or not requirements_lut[key].match_markers():\n to_uninstall.add(key)\n elif requirements_lut[key].specifier.contains(dist.version):\n satisfied.add(key)\n\n for key, requirement in requirements_lut.items():\n if key not in satisfied and requirement.match_markers():\n to_install.add(requirement)\n\n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n\n return (to_install, to_uninstall)\n\n\ndef sync(\n to_install: Iterable[InstallRequirement],\n to_uninstall: Iterable[InstallRequirement],\n dry_run: bool = False,\n install_flags: list[str] | None = None,\n ask: bool = False,\n python_executable: str | None = None,\n) -> int:\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n exit_code = 0\n\n python_executable = python_executable or sys.executable\n\n if not to_uninstall and not to_install:\n log.info(\"Everything up-to-date\", err=False)\n return exit_code\n\n pip_flags = []\n if log.verbosity < 0:\n pip_flags += [\"-q\"]\n\n if ask:\n dry_run = True\n\n if dry_run:\n if to_uninstall:\n click.echo(\"Would uninstall:\")\n for pkg in sorted(to_uninstall):\n click.echo(f\" {pkg}\")\n\n if to_install:\n click.echo(\"Would install:\")\n for ireq in sorted(to_install, key=key_from_ireq):\n click.echo(f\" {format_requirement(ireq)}\")\n\n exit_code = 1\n\n if ask and click.confirm(\"Would you like to proceed with these changes?\"):\n dry_run = False\n exit_code = 0\n\n if not dry_run:\n if to_uninstall:\n run( # nosec\n [\n python_executable,\n \"-m\",\n \"pip\",\n \"uninstall\",\n \"-y\",\n *pip_flags,\n *sorted(to_uninstall),\n ],\n check=True,\n )\n\n if to_install:\n if install_flags is None:\n install_flags = []\n # prepare requirement lines\n req_lines = []\n for ireq in sorted(to_install, key=key_from_ireq):\n ireq_hashes = get_hashes_from_ireq(ireq)\n req_lines.append(format_requirement(ireq, hashes=ireq_hashes))\n\n # save requirement lines to a temporary file\n tmp_req_file = tempfile.NamedTemporaryFile(mode=\"wt\", delete=False)\n 
tmp_req_file.write(\"\\n\".join(req_lines))\n tmp_req_file.close()\n\n try:\n run( # nosec\n [\n python_executable,\n \"-m\",\n \"pip\",\n \"install\",\n \"-r\",\n tmp_req_file.name,\n *pip_flags,\n *install_flags,\n ],\n check=True,\n )\n finally:\n os.unlink(tmp_req_file.name)\n\n return exit_code\n", "path": "piptools/sync.py"}]}
| 3,542 | 318 |
gh_patches_debug_11206
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-3656
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PermissionError not defined in Python 2.7
### System information
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 16
- **Ray installed from (source or binary)**: binary
- **Ray version**: 0.6.1
- **Python version**: 2.7
- **Exact command to reproduce**:
I don't have access to `/tmp`, and I get the following error:
```
cluster_tests.py:55: in _start_new_cluster
"num_heartbeats_timeout": 10
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/test/cluster_utils.py:43: in __init__
self.add_node(**head_node_args)
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/test/cluster_utils.py:86: in add_node
**node_kwargs)
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:1777: in start_ray_head
_internal_config=_internal_config)
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:1436: in start_ray_processes
redis_max_memory=redis_max_memory)
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:458: in start_redis
redis_stdout_file, redis_stderr_file = new_redis_log_file(redirect_output)
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:182: in new_redis_log_file
"redis", redirect_output)
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:166: in new_log_files
try_to_create_directory("/tmp/ray")
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
directory_path = '/tmp/ray'
def try_to_create_directory(directory_path):
"""Attempt to create a directory that is globally readable/writable.
Args:
directory_path: The path of the directory to create.
"""
directory_path = os.path.expanduser(directory_path)
if not os.path.exists(directory_path):
try:
os.makedirs(directory_path)
except OSError as e:
if e.errno != os.errno.EEXIST:
raise e
logger.warning(
"Attempted to create '{}', but the directory already "
"exists.".format(directory_path))
# Change the log directory permissions so others can use it. This is
# important when multiple people are using the same machine.
try:
os.chmod(directory_path, 0o0777)
> except PermissionError:
E NameError: global name 'PermissionError' is not defined
/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:69: NameError
```
</issue>
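Since `PermissionError` only exists on Python 3, one conventional 2/3-compatible pattern (a sketch of the general idea, not necessarily the fix that landed) is to catch `OSError` and inspect its errno: on both versions a denied `chmod` surfaces as an `OSError` carrying `EPERM` or `EACCES`, and on Python 3 `PermissionError` is itself an `OSError` subclass. The helper name below is made up.

```python
import errno
import os

def chmod_best_effort(path, mode=0o0777):
    """Try to make the directory world-writable, but tolerate not owning it."""
    try:
        os.chmod(path, mode)
    except OSError as exc:  # covers PermissionError on Python 3 as well
        if exc.errno not in (errno.EPERM, errno.EACCES):
            raise
```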
<code>
[start of python/ray/tempfile_services.py]
1 import binascii
2 import collections
3 import datetime
4 import errno
5 import logging
6 import os
7 import shutil
8 import tempfile
9
10 import ray.utils
11
12 logger = logging.getLogger(__name__)
13 _incremental_dict = collections.defaultdict(lambda: 0)
14 _temp_root = None
15
16
17 def make_inc_temp(suffix="", prefix="", directory_name="/tmp/ray"):
18 """Return a incremental temporary file name. The file is not created.
19
20 Args:
21 suffix (str): The suffix of the temp file.
22 prefix (str): The prefix of the temp file.
23 directory_name (str) : The base directory of the temp file.
24
25 Returns:
26 A string of file name. If there existing a file having the same name,
27 the returned name will look like
28 "{directory_name}/{prefix}.{unique_index}{suffix}"
29 """
30 directory_name = os.path.expanduser(directory_name)
31 index = _incremental_dict[suffix, prefix, directory_name]
32 # `tempfile.TMP_MAX` could be extremely large,
33 # so using `range` in Python2.x should be avoided.
34 while index < tempfile.TMP_MAX:
35 if index == 0:
36 filename = os.path.join(directory_name, prefix + suffix)
37 else:
38 filename = os.path.join(directory_name,
39 prefix + "." + str(index) + suffix)
40 index += 1
41 if not os.path.exists(filename):
42 _incremental_dict[suffix, prefix,
43 directory_name] = index # Save the index.
44 return filename
45
46 raise FileExistsError(errno.EEXIST, "No usable temporary filename found")
47
48
49 def try_to_create_directory(directory_path):
50 """Attempt to create a directory that is globally readable/writable.
51
52 Args:
53 directory_path: The path of the directory to create.
54 """
55 directory_path = os.path.expanduser(directory_path)
56 if not os.path.exists(directory_path):
57 try:
58 os.makedirs(directory_path)
59 except OSError as e:
60 if e.errno != os.errno.EEXIST:
61 raise e
62 logger.warning(
63 "Attempted to create '{}', but the directory already "
64 "exists.".format(directory_path))
65 # Change the log directory permissions so others can use it. This is
66 # important when multiple people are using the same machine.
67 try:
68 os.chmod(directory_path, 0o0777)
69 except PermissionError:
70 pass
71
72
73 def get_temp_root():
74 """Get the path of the temporary root. If not existing, it will be created.
75 """
76 global _temp_root
77
78 date_str = datetime.datetime.today().strftime("%Y-%m-%d_%H-%M-%S")
79
80 # Lazy creation. Avoid creating directories never used.
81 if _temp_root is None:
82 _temp_root = make_inc_temp(
83 prefix="session_{date_str}_{pid}".format(
84 pid=os.getpid(), date_str=date_str),
85 directory_name="/tmp/ray")
86 try_to_create_directory(_temp_root)
87 return _temp_root
88
89
90 def set_temp_root(path):
91 """Set the path of the temporary root. It will be created lazily."""
92 global _temp_root
93 _temp_root = path
94
95
96 def get_logs_dir_path():
97 """Get a temp dir for logging."""
98 logs_dir = os.path.join(get_temp_root(), "logs")
99 try_to_create_directory(logs_dir)
100 return logs_dir
101
102
103 def get_sockets_dir_path():
104 """Get a temp dir for sockets."""
105 sockets_dir = os.path.join(get_temp_root(), "sockets")
106 try_to_create_directory(sockets_dir)
107 return sockets_dir
108
109
110 def get_raylet_socket_name(suffix=""):
111 """Get a socket name for raylet."""
112 sockets_dir = get_sockets_dir_path()
113
114 raylet_socket_name = make_inc_temp(
115 prefix="raylet", directory_name=sockets_dir, suffix=suffix)
116 return raylet_socket_name
117
118
119 def get_object_store_socket_name():
120 """Get a socket name for plasma object store."""
121 sockets_dir = get_sockets_dir_path()
122 return make_inc_temp(prefix="plasma_store", directory_name=sockets_dir)
123
124
125 def get_ipython_notebook_path(port):
126 """Get a new ipython notebook path"""
127
128 notebook_filepath = os.path.join(
129 os.path.dirname(os.path.abspath(__file__)), "WebUI.ipynb")
130 # We copy the notebook file so that the original doesn't get modified by
131 # the user.
132 notebook_name = make_inc_temp(
133 suffix=".ipynb", prefix="ray_ui", directory_name=get_temp_root())
134 shutil.copy(notebook_filepath, notebook_name)
135 new_notebook_directory = os.path.dirname(notebook_name)
136 token = ray.utils.decode(binascii.hexlify(os.urandom(24)))
137 webui_url = ("http://localhost:{}/notebooks/{}?token={}".format(
138 port, os.path.basename(notebook_name), token))
139 return new_notebook_directory, webui_url, token
140
141
142 def new_log_files(name, redirect_output):
143 """Generate partially randomized filenames for log files.
144
145 Args:
146 name (str): descriptive string for this log file.
147 redirect_output (bool): True if files should be generated for logging
148 stdout and stderr and false if stdout and stderr should not be
149 redirected.
150
151 Returns:
152 If redirect_output is true, this will return a tuple of two
153 filehandles. The first is for redirecting stdout and the second is
154 for redirecting stderr. If redirect_output is false, this will
155 return a tuple of two None objects.
156 """
157 if not redirect_output:
158 return None, None
159
160 # Create a directory to be used for process log files.
161 logs_dir = get_logs_dir_path()
162 # Create another directory that will be used by some of the RL algorithms.
163
164 # TODO(suquark): This is done by the old code.
165 # We should be able to control its path later.
166 try_to_create_directory("/tmp/ray")
167
168 log_stdout = make_inc_temp(
169 suffix=".out", prefix=name, directory_name=logs_dir)
170 log_stderr = make_inc_temp(
171 suffix=".err", prefix=name, directory_name=logs_dir)
172 # Line-buffer the output (mode 1)
173 log_stdout_file = open(log_stdout, "a", buffering=1)
174 log_stderr_file = open(log_stderr, "a", buffering=1)
175 return log_stdout_file, log_stderr_file
176
177
178 def new_redis_log_file(redirect_output, shard_number=None):
179 """Create new logging files for redis"""
180 if shard_number is None:
181 redis_stdout_file, redis_stderr_file = new_log_files(
182 "redis", redirect_output)
183 else:
184 redis_stdout_file, redis_stderr_file = new_log_files(
185 "redis-shard_{}".format(shard_number), redirect_output)
186 return redis_stdout_file, redis_stderr_file
187
188
189 def new_raylet_log_file(local_scheduler_index, redirect_output):
190 """Create new logging files for raylet."""
191 raylet_stdout_file, raylet_stderr_file = new_log_files(
192 "raylet_{}".format(local_scheduler_index),
193 redirect_output=redirect_output)
194 return raylet_stdout_file, raylet_stderr_file
195
196
197 def new_webui_log_file():
198 """Create new logging files for web ui."""
199 ui_stdout_file, ui_stderr_file = new_log_files(
200 "webui", redirect_output=True)
201 return ui_stdout_file, ui_stderr_file
202
203
204 def new_worker_redirected_log_file(worker_id):
205 """Create new logging files for workers to redirect its output."""
206 worker_stdout_file, worker_stderr_file = (new_log_files(
207 "worker-" + ray.utils.binary_to_hex(worker_id), True))
208 return worker_stdout_file, worker_stderr_file
209
210
211 def new_log_monitor_log_file():
212 """Create new logging files for the log monitor."""
213 log_monitor_stdout_file, log_monitor_stderr_file = new_log_files(
214 "log_monitor", redirect_output=True)
215 return log_monitor_stdout_file, log_monitor_stderr_file
216
217
218 def new_plasma_store_log_file(local_scheduler_index, redirect_output):
219 """Create new logging files for the plasma store."""
220 plasma_store_stdout_file, plasma_store_stderr_file = new_log_files(
221 "plasma_store_{}".format(local_scheduler_index), redirect_output)
222 return plasma_store_stdout_file, plasma_store_stderr_file
223
224
225 def new_monitor_log_file(redirect_output):
226 """Create new logging files for the monitor."""
227 monitor_stdout_file, monitor_stderr_file = new_log_files(
228 "monitor", redirect_output)
229 return monitor_stdout_file, monitor_stderr_file
230
[end of python/ray/tempfile_services.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/ray/tempfile_services.py b/python/ray/tempfile_services.py
--- a/python/ray/tempfile_services.py
+++ b/python/ray/tempfile_services.py
@@ -66,8 +66,16 @@
# important when multiple people are using the same machine.
try:
os.chmod(directory_path, 0o0777)
- except PermissionError:
- pass
+ except OSError as e:
+ # Silently suppress the PermissionError that is thrown by the chmod.
+ # This is done because the user attempting to change the permissions
+ # on a directory may not own it. The chmod is attempted whether the
+ # directory is new or not to avoid race conditions.
+ # ray-project/ray/#3591
+ if e.errno in [errno.EACCES, errno.EPERM]:
+ pass
+ else:
+ raise
def get_temp_root():
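As a standalone sketch of the pattern the patch above applies (illustrative only; the helper name is invented and not part of the repository):

```python
import errno
import os


def chmod_world_writable_if_possible(path):
    """Try to make `path` globally writable, ignoring permission failures."""
    try:
        os.chmod(path, 0o0777)
    except OSError as e:
        # Python 2.7 has no PermissionError, so catch OSError and inspect
        # errno instead; only permission-related failures are ignored.
        if e.errno not in (errno.EACCES, errno.EPERM):
            raise
```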
|
{"golden_diff": "diff --git a/python/ray/tempfile_services.py b/python/ray/tempfile_services.py\n--- a/python/ray/tempfile_services.py\n+++ b/python/ray/tempfile_services.py\n@@ -66,8 +66,16 @@\n # important when multiple people are using the same machine.\n try:\n os.chmod(directory_path, 0o0777)\n- except PermissionError:\n- pass\n+ except OSError as e:\n+ # Silently suppress the PermissionError that is thrown by the chmod.\n+ # This is done because the user attempting to change the permissions\n+ # on a directory may not own it. The chmod is attempted whether the\n+ # directory is new or not to avoid race conditions.\n+ # ray-project/ray/#3591\n+ if e.errno in [errno.EACCES, errno.EPERM]:\n+ pass\n+ else:\n+ raise\n \n \n def get_temp_root():\n", "issue": "PermissionError not defined in Python 2.7\n### System information\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: Ubuntu 16\r\n- **Ray installed from (source or binary)**: binary\r\n- **Ray version**: 0.6.1\r\n- **Python version**: 2.7\r\n- **Exact command to reproduce**:\r\n\r\nI don't have access to `/tmp`, and I get this following error:\r\n\r\n```\r\ncluster_tests.py:55: in _start_new_cluster\r\n \"num_heartbeats_timeout\": 10\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/test/cluster_utils.py:43: in __init__\r\n self.add_node(**head_node_args)\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/test/cluster_utils.py:86: in add_node\r\n **node_kwargs)\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:1777: in start_ray_head\r\n _internal_config=_internal_config)\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:1436: in start_ray_processes\r\n redis_max_memory=redis_max_memory)\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/services.py:458: in start_redis\r\n redis_stdout_file, redis_stderr_file = new_redis_log_file(redirect_output)\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:182: in new_redis_log_file\r\n \"redis\", redirect_output)\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:166: in new_log_files\r\n try_to_create_directory(\"/tmp/ray\")\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ndirectory_path = '/tmp/ray'\r\n\r\n def try_to_create_directory(directory_path):\r\n \"\"\"Attempt to create a directory that is globally readable/writable.\r\n\r\n Args:\r\n directory_path: The path of the directory to create.\r\n \"\"\"\r\n directory_path = os.path.expanduser(directory_path)\r\n if not os.path.exists(directory_path):\r\n try:\r\n os.makedirs(directory_path)\r\n except OSError as e:\r\n if e.errno != os.errno.EEXIST:\r\n raise e\r\n logger.warning(\r\n \"Attempted to create '{}', but the directory already \"\r\n \"exists.\".format(directory_path))\r\n # Change the log directory permissions so others can use it. 
This is\r\n # important when multiple people are using the same machine.\r\n try:\r\n os.chmod(directory_path, 0o0777)\r\n> except PermissionError:\r\nE NameError: global name 'PermissionError' is not defined\r\n\r\n/data/rliaw/miniconda3/envs/py2/lib/python2.7/site-packages/ray/tempfile_services.py:69: NameError\r\n```\n", "before_files": [{"content": "import binascii\nimport collections\nimport datetime\nimport errno\nimport logging\nimport os\nimport shutil\nimport tempfile\n\nimport ray.utils\n\nlogger = logging.getLogger(__name__)\n_incremental_dict = collections.defaultdict(lambda: 0)\n_temp_root = None\n\n\ndef make_inc_temp(suffix=\"\", prefix=\"\", directory_name=\"/tmp/ray\"):\n \"\"\"Return a incremental temporary file name. The file is not created.\n\n Args:\n suffix (str): The suffix of the temp file.\n prefix (str): The prefix of the temp file.\n directory_name (str) : The base directory of the temp file.\n\n Returns:\n A string of file name. If there existing a file having the same name,\n the returned name will look like\n \"{directory_name}/{prefix}.{unique_index}{suffix}\"\n \"\"\"\n directory_name = os.path.expanduser(directory_name)\n index = _incremental_dict[suffix, prefix, directory_name]\n # `tempfile.TMP_MAX` could be extremely large,\n # so using `range` in Python2.x should be avoided.\n while index < tempfile.TMP_MAX:\n if index == 0:\n filename = os.path.join(directory_name, prefix + suffix)\n else:\n filename = os.path.join(directory_name,\n prefix + \".\" + str(index) + suffix)\n index += 1\n if not os.path.exists(filename):\n _incremental_dict[suffix, prefix,\n directory_name] = index # Save the index.\n return filename\n\n raise FileExistsError(errno.EEXIST, \"No usable temporary filename found\")\n\n\ndef try_to_create_directory(directory_path):\n \"\"\"Attempt to create a directory that is globally readable/writable.\n\n Args:\n directory_path: The path of the directory to create.\n \"\"\"\n directory_path = os.path.expanduser(directory_path)\n if not os.path.exists(directory_path):\n try:\n os.makedirs(directory_path)\n except OSError as e:\n if e.errno != os.errno.EEXIST:\n raise e\n logger.warning(\n \"Attempted to create '{}', but the directory already \"\n \"exists.\".format(directory_path))\n # Change the log directory permissions so others can use it. This is\n # important when multiple people are using the same machine.\n try:\n os.chmod(directory_path, 0o0777)\n except PermissionError:\n pass\n\n\ndef get_temp_root():\n \"\"\"Get the path of the temporary root. If not existing, it will be created.\n \"\"\"\n global _temp_root\n\n date_str = datetime.datetime.today().strftime(\"%Y-%m-%d_%H-%M-%S\")\n\n # Lazy creation. Avoid creating directories never used.\n if _temp_root is None:\n _temp_root = make_inc_temp(\n prefix=\"session_{date_str}_{pid}\".format(\n pid=os.getpid(), date_str=date_str),\n directory_name=\"/tmp/ray\")\n try_to_create_directory(_temp_root)\n return _temp_root\n\n\ndef set_temp_root(path):\n \"\"\"Set the path of the temporary root. 
It will be created lazily.\"\"\"\n global _temp_root\n _temp_root = path\n\n\ndef get_logs_dir_path():\n \"\"\"Get a temp dir for logging.\"\"\"\n logs_dir = os.path.join(get_temp_root(), \"logs\")\n try_to_create_directory(logs_dir)\n return logs_dir\n\n\ndef get_sockets_dir_path():\n \"\"\"Get a temp dir for sockets.\"\"\"\n sockets_dir = os.path.join(get_temp_root(), \"sockets\")\n try_to_create_directory(sockets_dir)\n return sockets_dir\n\n\ndef get_raylet_socket_name(suffix=\"\"):\n \"\"\"Get a socket name for raylet.\"\"\"\n sockets_dir = get_sockets_dir_path()\n\n raylet_socket_name = make_inc_temp(\n prefix=\"raylet\", directory_name=sockets_dir, suffix=suffix)\n return raylet_socket_name\n\n\ndef get_object_store_socket_name():\n \"\"\"Get a socket name for plasma object store.\"\"\"\n sockets_dir = get_sockets_dir_path()\n return make_inc_temp(prefix=\"plasma_store\", directory_name=sockets_dir)\n\n\ndef get_ipython_notebook_path(port):\n \"\"\"Get a new ipython notebook path\"\"\"\n\n notebook_filepath = os.path.join(\n os.path.dirname(os.path.abspath(__file__)), \"WebUI.ipynb\")\n # We copy the notebook file so that the original doesn't get modified by\n # the user.\n notebook_name = make_inc_temp(\n suffix=\".ipynb\", prefix=\"ray_ui\", directory_name=get_temp_root())\n shutil.copy(notebook_filepath, notebook_name)\n new_notebook_directory = os.path.dirname(notebook_name)\n token = ray.utils.decode(binascii.hexlify(os.urandom(24)))\n webui_url = (\"http://localhost:{}/notebooks/{}?token={}\".format(\n port, os.path.basename(notebook_name), token))\n return new_notebook_directory, webui_url, token\n\n\ndef new_log_files(name, redirect_output):\n \"\"\"Generate partially randomized filenames for log files.\n\n Args:\n name (str): descriptive string for this log file.\n redirect_output (bool): True if files should be generated for logging\n stdout and stderr and false if stdout and stderr should not be\n redirected.\n\n Returns:\n If redirect_output is true, this will return a tuple of two\n filehandles. The first is for redirecting stdout and the second is\n for redirecting stderr. 
If redirect_output is false, this will\n return a tuple of two None objects.\n \"\"\"\n if not redirect_output:\n return None, None\n\n # Create a directory to be used for process log files.\n logs_dir = get_logs_dir_path()\n # Create another directory that will be used by some of the RL algorithms.\n\n # TODO(suquark): This is done by the old code.\n # We should be able to control its path later.\n try_to_create_directory(\"/tmp/ray\")\n\n log_stdout = make_inc_temp(\n suffix=\".out\", prefix=name, directory_name=logs_dir)\n log_stderr = make_inc_temp(\n suffix=\".err\", prefix=name, directory_name=logs_dir)\n # Line-buffer the output (mode 1)\n log_stdout_file = open(log_stdout, \"a\", buffering=1)\n log_stderr_file = open(log_stderr, \"a\", buffering=1)\n return log_stdout_file, log_stderr_file\n\n\ndef new_redis_log_file(redirect_output, shard_number=None):\n \"\"\"Create new logging files for redis\"\"\"\n if shard_number is None:\n redis_stdout_file, redis_stderr_file = new_log_files(\n \"redis\", redirect_output)\n else:\n redis_stdout_file, redis_stderr_file = new_log_files(\n \"redis-shard_{}\".format(shard_number), redirect_output)\n return redis_stdout_file, redis_stderr_file\n\n\ndef new_raylet_log_file(local_scheduler_index, redirect_output):\n \"\"\"Create new logging files for raylet.\"\"\"\n raylet_stdout_file, raylet_stderr_file = new_log_files(\n \"raylet_{}\".format(local_scheduler_index),\n redirect_output=redirect_output)\n return raylet_stdout_file, raylet_stderr_file\n\n\ndef new_webui_log_file():\n \"\"\"Create new logging files for web ui.\"\"\"\n ui_stdout_file, ui_stderr_file = new_log_files(\n \"webui\", redirect_output=True)\n return ui_stdout_file, ui_stderr_file\n\n\ndef new_worker_redirected_log_file(worker_id):\n \"\"\"Create new logging files for workers to redirect its output.\"\"\"\n worker_stdout_file, worker_stderr_file = (new_log_files(\n \"worker-\" + ray.utils.binary_to_hex(worker_id), True))\n return worker_stdout_file, worker_stderr_file\n\n\ndef new_log_monitor_log_file():\n \"\"\"Create new logging files for the log monitor.\"\"\"\n log_monitor_stdout_file, log_monitor_stderr_file = new_log_files(\n \"log_monitor\", redirect_output=True)\n return log_monitor_stdout_file, log_monitor_stderr_file\n\n\ndef new_plasma_store_log_file(local_scheduler_index, redirect_output):\n \"\"\"Create new logging files for the plasma store.\"\"\"\n plasma_store_stdout_file, plasma_store_stderr_file = new_log_files(\n \"plasma_store_{}\".format(local_scheduler_index), redirect_output)\n return plasma_store_stdout_file, plasma_store_stderr_file\n\n\ndef new_monitor_log_file(redirect_output):\n \"\"\"Create new logging files for the monitor.\"\"\"\n monitor_stdout_file, monitor_stderr_file = new_log_files(\n \"monitor\", redirect_output)\n return monitor_stdout_file, monitor_stderr_file\n", "path": "python/ray/tempfile_services.py"}]}
| 3,729 | 211 |
gh_patches_debug_42792
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-1064
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support env variables for Zipkin exporter
The spec describes environment variables that should be supported to configure the Zipkin exporter; this feature request is to add support for them in the current implementation.
https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/sdk-environment-variables.md
</issue>
<code>
[start of exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This library allows to export tracing data to `Zipkin <https://zipkin.io/>`_.
17
18 Usage
19 -----
20
21 The **OpenTelemetry Zipkin Exporter** allows to export `OpenTelemetry`_ traces to `Zipkin`_.
22 This exporter always send traces to the configured Zipkin collector using HTTP.
23
24
25 .. _Zipkin: https://zipkin.io/
26 .. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
27
28 .. code:: python
29
30 from opentelemetry import trace
31 from opentelemetry.exporter import zipkin
32 from opentelemetry.sdk.trace import TracerProvider
33 from opentelemetry.sdk.trace.export import BatchExportSpanProcessor
34
35 trace.set_tracer_provider(TracerProvider())
36 tracer = trace.get_tracer(__name__)
37
38 # create a ZipkinSpanExporter
39 zipkin_exporter = zipkin.ZipkinSpanExporter(
40 service_name="my-helloworld-service",
41 # optional:
42 # host_name="localhost",
43 # port=9411,
44 # endpoint="/api/v2/spans",
45 # protocol="http",
46 # ipv4="",
47 # ipv6="",
48 # retry=False,
49 )
50
51 # Create a BatchExportSpanProcessor and add the exporter to it
52 span_processor = BatchExportSpanProcessor(zipkin_exporter)
53
54 # add to the tracer
55 trace.get_tracer_provider().add_span_processor(span_processor)
56
57 with tracer.start_as_current_span("foo"):
58 print("Hello world!")
59
60 API
61 ---
62 """
63
64 import json
65 import logging
66 from typing import Optional, Sequence
67
68 import requests
69
70 from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
71 from opentelemetry.trace import Span, SpanContext, SpanKind
72
73 DEFAULT_ENDPOINT = "/api/v2/spans"
74 DEFAULT_HOST_NAME = "localhost"
75 DEFAULT_PORT = 9411
76 DEFAULT_PROTOCOL = "http"
77 DEFAULT_RETRY = False
78 ZIPKIN_HEADERS = {"Content-Type": "application/json"}
79
80 SPAN_KIND_MAP = {
81 SpanKind.INTERNAL: None,
82 SpanKind.SERVER: "SERVER",
83 SpanKind.CLIENT: "CLIENT",
84 SpanKind.PRODUCER: "PRODUCER",
85 SpanKind.CONSUMER: "CONSUMER",
86 }
87
88 SUCCESS_STATUS_CODES = (200, 202)
89
90 logger = logging.getLogger(__name__)
91
92
93 class ZipkinSpanExporter(SpanExporter):
94 """Zipkin span exporter for OpenTelemetry.
95
96 Args:
97 service_name: Service that logged an annotation in a trace.Classifier
98 when query for spans.
99 host_name: The host name of the Zipkin server
100 port: The port of the Zipkin server
101 endpoint: The endpoint of the Zipkin server
102 protocol: The protocol used for the request.
103 ipv4: Primary IPv4 address associated with this connection.
104 ipv6: Primary IPv6 address associated with this connection.
105 retry: Set to True to configure the exporter to retry on failure.
106 """
107
108 def __init__(
109 self,
110 service_name: str,
111 host_name: str = DEFAULT_HOST_NAME,
112 port: int = DEFAULT_PORT,
113 endpoint: str = DEFAULT_ENDPOINT,
114 protocol: str = DEFAULT_PROTOCOL,
115 ipv4: Optional[str] = None,
116 ipv6: Optional[str] = None,
117 retry: Optional[str] = DEFAULT_RETRY,
118 ):
119 self.service_name = service_name
120 self.host_name = host_name
121 self.port = port
122 self.endpoint = endpoint
123 self.protocol = protocol
124 self.url = "{}://{}:{}{}".format(
125 self.protocol, self.host_name, self.port, self.endpoint
126 )
127 self.ipv4 = ipv4
128 self.ipv6 = ipv6
129 self.retry = retry
130
131 def export(self, spans: Sequence[Span]) -> SpanExportResult:
132 zipkin_spans = self._translate_to_zipkin(spans)
133 result = requests.post(
134 url=self.url, data=json.dumps(zipkin_spans), headers=ZIPKIN_HEADERS
135 )
136
137 if result.status_code not in SUCCESS_STATUS_CODES:
138 logger.error(
139 "Traces cannot be uploaded; status code: %s, message %s",
140 result.status_code,
141 result.text,
142 )
143
144 if self.retry:
145 return SpanExportResult.FAILURE
146 return SpanExportResult.FAILURE
147 return SpanExportResult.SUCCESS
148
149 def _translate_to_zipkin(self, spans: Sequence[Span]):
150
151 local_endpoint = {"serviceName": self.service_name, "port": self.port}
152
153 if self.ipv4 is not None:
154 local_endpoint["ipv4"] = self.ipv4
155
156 if self.ipv6 is not None:
157 local_endpoint["ipv6"] = self.ipv6
158
159 zipkin_spans = []
160 for span in spans:
161 context = span.get_context()
162 trace_id = context.trace_id
163 span_id = context.span_id
164
165 # Timestamp in zipkin spans is int of microseconds.
166 # see: https://zipkin.io/pages/instrumenting.html
167 start_timestamp_mus = _nsec_to_usec_round(span.start_time)
168 duration_mus = _nsec_to_usec_round(span.end_time - span.start_time)
169
170 zipkin_span = {
171 # Ensure left-zero-padding of traceId, spanId, parentId
172 "traceId": format(trace_id, "032x"),
173 "id": format(span_id, "016x"),
174 "name": span.name,
175 "timestamp": start_timestamp_mus,
176 "duration": duration_mus,
177 "localEndpoint": local_endpoint,
178 "kind": SPAN_KIND_MAP[span.kind],
179 "tags": _extract_tags_from_span(span),
180 "annotations": _extract_annotations_from_events(span.events),
181 }
182
183 if context.trace_flags.sampled:
184 zipkin_span["debug"] = True
185
186 if isinstance(span.parent, Span):
187 zipkin_span["parentId"] = format(
188 span.parent.get_context().span_id, "016x"
189 )
190 elif isinstance(span.parent, SpanContext):
191 zipkin_span["parentId"] = format(span.parent.span_id, "016x")
192
193 zipkin_spans.append(zipkin_span)
194 return zipkin_spans
195
196 def shutdown(self) -> None:
197 pass
198
199
200 def _extract_tags_from_dict(tags_dict):
201 tags = {}
202 if not tags_dict:
203 return tags
204 for attribute_key, attribute_value in tags_dict.items():
205 if isinstance(attribute_value, (int, bool, float)):
206 value = str(attribute_value)
207 elif isinstance(attribute_value, str):
208 value = attribute_value[:128]
209 else:
210 logger.warning("Could not serialize tag %s", attribute_key)
211 continue
212 tags[attribute_key] = value
213 return tags
214
215
216 def _extract_tags_from_span(span: Span):
217 tags = _extract_tags_from_dict(getattr(span, "attributes", None))
218 if span.resource:
219 tags.update(_extract_tags_from_dict(span.resource.labels))
220 return tags
221
222
223 def _extract_annotations_from_events(events):
224 return (
225 [
226 {"timestamp": _nsec_to_usec_round(e.timestamp), "value": e.name}
227 for e in events
228 ]
229 if events
230 else None
231 )
232
233
234 def _nsec_to_usec_round(nsec):
235 """Round nanoseconds to microseconds"""
236 return (nsec + 500) // 10 ** 3
237
[end of exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py b/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py
--- a/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py
+++ b/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py
@@ -24,6 +24,7 @@
.. _Zipkin: https://zipkin.io/
.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/
+.. _Specification: https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/sdk-environment-variables.md#zipkin-exporter
.. code:: python
@@ -39,10 +40,7 @@
zipkin_exporter = zipkin.ZipkinSpanExporter(
service_name="my-helloworld-service",
# optional:
- # host_name="localhost",
- # port=9411,
- # endpoint="/api/v2/spans",
- # protocol="http",
+ # url="http://localhost:9411/api/v2/spans",
# ipv4="",
# ipv6="",
# retry=False,
@@ -57,24 +55,25 @@
with tracer.start_as_current_span("foo"):
print("Hello world!")
+The exporter supports endpoint configuration via the OTEL_EXPORTER_ZIPKIN_ENDPOINT environment variables as defined in the `Specification`_
+
API
---
"""
import json
import logging
+import os
from typing import Optional, Sequence
+from urllib.parse import urlparse
import requests
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult
from opentelemetry.trace import Span, SpanContext, SpanKind
-DEFAULT_ENDPOINT = "/api/v2/spans"
-DEFAULT_HOST_NAME = "localhost"
-DEFAULT_PORT = 9411
-DEFAULT_PROTOCOL = "http"
DEFAULT_RETRY = False
+DEFAULT_URL = "http://localhost:9411/api/v2/spans"
ZIPKIN_HEADERS = {"Content-Type": "application/json"}
SPAN_KIND_MAP = {
@@ -96,10 +95,7 @@
Args:
service_name: Service that logged an annotation in a trace.Classifier
when query for spans.
- host_name: The host name of the Zipkin server
- port: The port of the Zipkin server
- endpoint: The endpoint of the Zipkin server
- protocol: The protocol used for the request.
+ url: The Zipkin endpoint URL
ipv4: Primary IPv4 address associated with this connection.
ipv6: Primary IPv6 address associated with this connection.
retry: Set to True to configure the exporter to retry on failure.
@@ -108,22 +104,21 @@
def __init__(
self,
service_name: str,
- host_name: str = DEFAULT_HOST_NAME,
- port: int = DEFAULT_PORT,
- endpoint: str = DEFAULT_ENDPOINT,
- protocol: str = DEFAULT_PROTOCOL,
+ url: str = None,
ipv4: Optional[str] = None,
ipv6: Optional[str] = None,
retry: Optional[str] = DEFAULT_RETRY,
):
self.service_name = service_name
- self.host_name = host_name
- self.port = port
- self.endpoint = endpoint
- self.protocol = protocol
- self.url = "{}://{}:{}{}".format(
- self.protocol, self.host_name, self.port, self.endpoint
- )
+ if url is None:
+ self.url = os.environ.get(
+ "OTEL_EXPORTER_ZIPKIN_ENDPOINT", DEFAULT_URL
+ )
+ else:
+ self.url = url
+
+ self.port = urlparse(self.url).port
+
self.ipv4 = ipv4
self.ipv6 = ipv6
self.retry = retry
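A usage-level sketch of the configuration precedence the patch above introduces, with an invented helper name for illustration: an explicit `url` argument wins, otherwise the `OTEL_EXPORTER_ZIPKIN_ENDPOINT` environment variable, otherwise the built-in default.

```python
import os
from urllib.parse import urlparse

DEFAULT_URL = "http://localhost:9411/api/v2/spans"


def resolve_zipkin_endpoint(url=None):
    # Same fallback chain as the patched exporter: explicit argument,
    # then OTEL_EXPORTER_ZIPKIN_ENDPOINT, then the default endpoint.
    if url is None:
        url = os.environ.get("OTEL_EXPORTER_ZIPKIN_ENDPOINT", DEFAULT_URL)
    return url, urlparse(url).port


# Example: with OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://zipkin:9411/api/v2/spans set,
# resolve_zipkin_endpoint() returns ("http://zipkin:9411/api/v2/spans", 9411).
```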
|
{"golden_diff": "diff --git a/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py b/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py\n--- a/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py\n+++ b/exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py\n@@ -24,6 +24,7 @@\n \n .. _Zipkin: https://zipkin.io/\n .. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n+.. _Specification: https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/sdk-environment-variables.md#zipkin-exporter\n \n .. code:: python\n \n@@ -39,10 +40,7 @@\n zipkin_exporter = zipkin.ZipkinSpanExporter(\n service_name=\"my-helloworld-service\",\n # optional:\n- # host_name=\"localhost\",\n- # port=9411,\n- # endpoint=\"/api/v2/spans\",\n- # protocol=\"http\",\n+ # url=\"http://localhost:9411/api/v2/spans\",\n # ipv4=\"\",\n # ipv6=\"\",\n # retry=False,\n@@ -57,24 +55,25 @@\n with tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n \n+The exporter supports endpoint configuration via the OTEL_EXPORTER_ZIPKIN_ENDPOINT environment variables as defined in the `Specification`_\n+\n API\n ---\n \"\"\"\n \n import json\n import logging\n+import os\n from typing import Optional, Sequence\n+from urllib.parse import urlparse\n \n import requests\n \n from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult\n from opentelemetry.trace import Span, SpanContext, SpanKind\n \n-DEFAULT_ENDPOINT = \"/api/v2/spans\"\n-DEFAULT_HOST_NAME = \"localhost\"\n-DEFAULT_PORT = 9411\n-DEFAULT_PROTOCOL = \"http\"\n DEFAULT_RETRY = False\n+DEFAULT_URL = \"http://localhost:9411/api/v2/spans\"\n ZIPKIN_HEADERS = {\"Content-Type\": \"application/json\"}\n \n SPAN_KIND_MAP = {\n@@ -96,10 +95,7 @@\n Args:\n service_name: Service that logged an annotation in a trace.Classifier\n when query for spans.\n- host_name: The host name of the Zipkin server\n- port: The port of the Zipkin server\n- endpoint: The endpoint of the Zipkin server\n- protocol: The protocol used for the request.\n+ url: The Zipkin endpoint URL\n ipv4: Primary IPv4 address associated with this connection.\n ipv6: Primary IPv6 address associated with this connection.\n retry: Set to True to configure the exporter to retry on failure.\n@@ -108,22 +104,21 @@\n def __init__(\n self,\n service_name: str,\n- host_name: str = DEFAULT_HOST_NAME,\n- port: int = DEFAULT_PORT,\n- endpoint: str = DEFAULT_ENDPOINT,\n- protocol: str = DEFAULT_PROTOCOL,\n+ url: str = None,\n ipv4: Optional[str] = None,\n ipv6: Optional[str] = None,\n retry: Optional[str] = DEFAULT_RETRY,\n ):\n self.service_name = service_name\n- self.host_name = host_name\n- self.port = port\n- self.endpoint = endpoint\n- self.protocol = protocol\n- self.url = \"{}://{}:{}{}\".format(\n- self.protocol, self.host_name, self.port, self.endpoint\n- )\n+ if url is None:\n+ self.url = os.environ.get(\n+ \"OTEL_EXPORTER_ZIPKIN_ENDPOINT\", DEFAULT_URL\n+ )\n+ else:\n+ self.url = url\n+\n+ self.port = urlparse(self.url).port\n+\n self.ipv4 = ipv4\n self.ipv6 = ipv6\n self.retry = retry\n", "issue": "Support env variables for Zipkin exporter\nThe spec describes environment variables that should be supported to configure the Zipkin exporter, this feature request is to add support in the current 
implementation.\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/sdk-environment-variables.md\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis library allows to export tracing data to `Zipkin <https://zipkin.io/>`_.\n\nUsage\n-----\n\nThe **OpenTelemetry Zipkin Exporter** allows to export `OpenTelemetry`_ traces to `Zipkin`_.\nThis exporter always send traces to the configured Zipkin collector using HTTP.\n\n\n.. _Zipkin: https://zipkin.io/\n.. _OpenTelemetry: https://github.com/open-telemetry/opentelemetry-python/\n\n.. code:: python\n\n from opentelemetry import trace\n from opentelemetry.exporter import zipkin\n from opentelemetry.sdk.trace import TracerProvider\n from opentelemetry.sdk.trace.export import BatchExportSpanProcessor\n\n trace.set_tracer_provider(TracerProvider())\n tracer = trace.get_tracer(__name__)\n\n # create a ZipkinSpanExporter\n zipkin_exporter = zipkin.ZipkinSpanExporter(\n service_name=\"my-helloworld-service\",\n # optional:\n # host_name=\"localhost\",\n # port=9411,\n # endpoint=\"/api/v2/spans\",\n # protocol=\"http\",\n # ipv4=\"\",\n # ipv6=\"\",\n # retry=False,\n )\n\n # Create a BatchExportSpanProcessor and add the exporter to it\n span_processor = BatchExportSpanProcessor(zipkin_exporter)\n\n # add to the tracer\n trace.get_tracer_provider().add_span_processor(span_processor)\n\n with tracer.start_as_current_span(\"foo\"):\n print(\"Hello world!\")\n\nAPI\n---\n\"\"\"\n\nimport json\nimport logging\nfrom typing import Optional, Sequence\n\nimport requests\n\nfrom opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult\nfrom opentelemetry.trace import Span, SpanContext, SpanKind\n\nDEFAULT_ENDPOINT = \"/api/v2/spans\"\nDEFAULT_HOST_NAME = \"localhost\"\nDEFAULT_PORT = 9411\nDEFAULT_PROTOCOL = \"http\"\nDEFAULT_RETRY = False\nZIPKIN_HEADERS = {\"Content-Type\": \"application/json\"}\n\nSPAN_KIND_MAP = {\n SpanKind.INTERNAL: None,\n SpanKind.SERVER: \"SERVER\",\n SpanKind.CLIENT: \"CLIENT\",\n SpanKind.PRODUCER: \"PRODUCER\",\n SpanKind.CONSUMER: \"CONSUMER\",\n}\n\nSUCCESS_STATUS_CODES = (200, 202)\n\nlogger = logging.getLogger(__name__)\n\n\nclass ZipkinSpanExporter(SpanExporter):\n \"\"\"Zipkin span exporter for OpenTelemetry.\n\n Args:\n service_name: Service that logged an annotation in a trace.Classifier\n when query for spans.\n host_name: The host name of the Zipkin server\n port: The port of the Zipkin server\n endpoint: The endpoint of the Zipkin server\n protocol: The protocol used for the request.\n ipv4: Primary IPv4 address associated with this connection.\n ipv6: Primary IPv6 address associated with this connection.\n retry: Set to True to configure the exporter to retry on failure.\n \"\"\"\n\n def __init__(\n self,\n service_name: str,\n host_name: str = DEFAULT_HOST_NAME,\n port: int = DEFAULT_PORT,\n endpoint: str = DEFAULT_ENDPOINT,\n protocol: str = DEFAULT_PROTOCOL,\n ipv4: Optional[str] = 
None,\n ipv6: Optional[str] = None,\n retry: Optional[str] = DEFAULT_RETRY,\n ):\n self.service_name = service_name\n self.host_name = host_name\n self.port = port\n self.endpoint = endpoint\n self.protocol = protocol\n self.url = \"{}://{}:{}{}\".format(\n self.protocol, self.host_name, self.port, self.endpoint\n )\n self.ipv4 = ipv4\n self.ipv6 = ipv6\n self.retry = retry\n\n def export(self, spans: Sequence[Span]) -> SpanExportResult:\n zipkin_spans = self._translate_to_zipkin(spans)\n result = requests.post(\n url=self.url, data=json.dumps(zipkin_spans), headers=ZIPKIN_HEADERS\n )\n\n if result.status_code not in SUCCESS_STATUS_CODES:\n logger.error(\n \"Traces cannot be uploaded; status code: %s, message %s\",\n result.status_code,\n result.text,\n )\n\n if self.retry:\n return SpanExportResult.FAILURE\n return SpanExportResult.FAILURE\n return SpanExportResult.SUCCESS\n\n def _translate_to_zipkin(self, spans: Sequence[Span]):\n\n local_endpoint = {\"serviceName\": self.service_name, \"port\": self.port}\n\n if self.ipv4 is not None:\n local_endpoint[\"ipv4\"] = self.ipv4\n\n if self.ipv6 is not None:\n local_endpoint[\"ipv6\"] = self.ipv6\n\n zipkin_spans = []\n for span in spans:\n context = span.get_context()\n trace_id = context.trace_id\n span_id = context.span_id\n\n # Timestamp in zipkin spans is int of microseconds.\n # see: https://zipkin.io/pages/instrumenting.html\n start_timestamp_mus = _nsec_to_usec_round(span.start_time)\n duration_mus = _nsec_to_usec_round(span.end_time - span.start_time)\n\n zipkin_span = {\n # Ensure left-zero-padding of traceId, spanId, parentId\n \"traceId\": format(trace_id, \"032x\"),\n \"id\": format(span_id, \"016x\"),\n \"name\": span.name,\n \"timestamp\": start_timestamp_mus,\n \"duration\": duration_mus,\n \"localEndpoint\": local_endpoint,\n \"kind\": SPAN_KIND_MAP[span.kind],\n \"tags\": _extract_tags_from_span(span),\n \"annotations\": _extract_annotations_from_events(span.events),\n }\n\n if context.trace_flags.sampled:\n zipkin_span[\"debug\"] = True\n\n if isinstance(span.parent, Span):\n zipkin_span[\"parentId\"] = format(\n span.parent.get_context().span_id, \"016x\"\n )\n elif isinstance(span.parent, SpanContext):\n zipkin_span[\"parentId\"] = format(span.parent.span_id, \"016x\")\n\n zipkin_spans.append(zipkin_span)\n return zipkin_spans\n\n def shutdown(self) -> None:\n pass\n\n\ndef _extract_tags_from_dict(tags_dict):\n tags = {}\n if not tags_dict:\n return tags\n for attribute_key, attribute_value in tags_dict.items():\n if isinstance(attribute_value, (int, bool, float)):\n value = str(attribute_value)\n elif isinstance(attribute_value, str):\n value = attribute_value[:128]\n else:\n logger.warning(\"Could not serialize tag %s\", attribute_key)\n continue\n tags[attribute_key] = value\n return tags\n\n\ndef _extract_tags_from_span(span: Span):\n tags = _extract_tags_from_dict(getattr(span, \"attributes\", None))\n if span.resource:\n tags.update(_extract_tags_from_dict(span.resource.labels))\n return tags\n\n\ndef _extract_annotations_from_events(events):\n return (\n [\n {\"timestamp\": _nsec_to_usec_round(e.timestamp), \"value\": e.name}\n for e in events\n ]\n if events\n else None\n )\n\n\ndef _nsec_to_usec_round(nsec):\n \"\"\"Round nanoseconds to microseconds\"\"\"\n return (nsec + 500) // 10 ** 3\n", "path": "exporter/opentelemetry-exporter-zipkin/src/opentelemetry/exporter/zipkin/__init__.py"}]}
| 2,963 | 889 |
gh_patches_debug_1751
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-245
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
glsl-sandbox-cube GL_DEPTH issue (Linux Python 2.7.6)
I get the following issue when running glsl-sandbox-cube; setting `GL_DEPTH` doesn't seem to work.
```
Traceback (most recent call last):
File "glsl-sandbox-cube.py", line 82, in on_initialize
gloo.set_state(depth=True)
File "/usr/local/lib/python2.7/dist-packages/vispy-0.2.1-py2.7.egg/vispy/gloo/wrappers.py", line 531, in set_state
func(_gl_attr(key))
File "/usr/local/lib/python2.7/dist-packages/vispy-0.2.1-py2.7.egg/vispy/gloo/wrappers.py", line 43, in _gl_attr
% (x, y))
ValueError: gl has no attribute corresponding to name depth (GL_DEPTH)
```
However when I check `PyOpenGL`:
```
import OpenGL.GL as gl
print gl.GL_DEPTH
>> GL_DEPTH (6145)
```
</issue>
<code>
[start of examples/glsl-sandbox-cube.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 A GLSL sandbox application based on the spinning cube. Requires PySide
5 or PyQt4.
6 """
7
8 import numpy as np
9 from vispy import app, gloo, dataio
10 from vispy.util.transforms import perspective, translate, rotate
11
12 # Force using qt and take QtCore+QtGui from backend module,
13 # since we do not know whether PySide or PyQt4 is used
14 app.use('qt')
15 QtCore = app.default_app.backend_module.QtCore,
16 QtGui = app.default_app.backend_module.QtGui
17
18
19 VERT_CODE = """
20 uniform mat4 u_model;
21 uniform mat4 u_view;
22 uniform mat4 u_projection;
23
24 attribute vec3 a_position;
25 attribute vec2 a_texcoord;
26
27 varying vec2 v_texcoord;
28
29 void main()
30 {
31 v_texcoord = a_texcoord;
32 gl_Position = u_projection * u_view * u_model * vec4(a_position,1.0);
33 //gl_Position = vec4(a_position,1.0);
34 }
35 """
36
37
38 FRAG_CODE = """
39 uniform sampler2D u_texture;
40 varying vec2 v_texcoord;
41
42 void main()
43 {
44 float ty = v_texcoord.y;
45 float tx = sin(ty*50.0)*0.01 + v_texcoord.x;
46 gl_FragColor = texture2D(u_texture, vec2(tx, ty));
47
48 }
49 """
50
51
52 # Read cube data
53 positions, faces, normals, texcoords = dataio.read_mesh('cube.obj')
54 colors = np.random.uniform(0, 1, positions.shape).astype('float32')
55
56 faces_buffer = gloo.IndexBuffer(faces.astype(np.uint16))
57
58
59 class Canvas(app.Canvas):
60
61 def __init__(self, **kwargs):
62 app.Canvas.__init__(self, **kwargs)
63 self.geometry = 0, 0, 400, 400
64
65 self.program = gloo.Program(VERT_CODE, FRAG_CODE)
66
67 # Set attributes
68 self.program['a_position'] = gloo.VertexBuffer(positions)
69 self.program['a_texcoord'] = gloo.VertexBuffer(texcoords)
70
71 self.program['u_texture'] = gloo.Texture2D(dataio.crate())
72
73 # Handle transformations
74 self.init_transforms()
75
76 self.timer = app.Timer(1.0 / 60)
77 self.timer.connect(self.update_transforms)
78 self.timer.start()
79
80 def on_initialize(self, event):
81 gloo.set_clear_color((1, 1, 1, 1))
82 gloo.set_state(depth=True)
83
84 def on_resize(self, event):
85 width, height = event.size
86 gloo.set_viewport(0, 0, width, height)
87 self.projection = perspective(45.0, width / float(height), 2.0, 10.0)
88 self.program['u_projection'] = self.projection
89
90 def on_paint(self, event):
91
92 gloo.clear()
93 self.program.draw('triangles', faces_buffer)
94
95 def init_transforms(self):
96 self.view = np.eye(4, dtype=np.float32)
97 self.model = np.eye(4, dtype=np.float32)
98 self.projection = np.eye(4, dtype=np.float32)
99
100 self.theta = 0
101 self.phi = 0
102
103 translate(self.view, 0, 0, -5)
104 self.program['u_model'] = self.model
105 self.program['u_view'] = self.view
106
107 def update_transforms(self, event):
108 self.theta += .5
109 self.phi += .5
110 self.model = np.eye(4, dtype=np.float32)
111 rotate(self.model, self.theta, 0, 0, 1)
112 rotate(self.model, self.phi, 0, 1, 0)
113 self.program['u_model'] = self.model
114 self.update()
115
116
117 class TextField(QtGui.QPlainTextEdit):
118
119 def __init__(self, parent):
120 QtGui.QPlainTextEdit.__init__(self, parent)
121 # Set font to monospaced (TypeWriter)
122 font = QtGui.QFont('')
123 font.setStyleHint(font.TypeWriter, font.PreferDefault)
124 font.setPointSize(8)
125 self.setFont(font)
126
127
128 class MainWindow(QtGui.QWidget):
129
130 def __init__(self):
131 QtGui.QWidget.__init__(self, None)
132
133 self.setMinimumSize(600, 400)
134
135 # Create two labels and a button
136 self.vertLabel = QtGui.QLabel("Vertex code", self)
137 self.fragLabel = QtGui.QLabel("Fragment code", self)
138 self.theButton = QtGui.QPushButton("Compile!", self)
139 self.theButton.clicked.connect(self.on_compile)
140
141 # Create two editors
142 self.vertEdit = TextField(self)
143 self.vertEdit.setPlainText(VERT_CODE)
144 self.fragEdit = TextField(self)
145 self.fragEdit.setPlainText(FRAG_CODE)
146
147 # Create a canvas
148 self.canvas = Canvas()
149 self.canvas.create_native()
150 self.canvas.native.setParent(self)
151
152 # Layout
153 hlayout = QtGui.QHBoxLayout(self)
154 self.setLayout(hlayout)
155 vlayout = QtGui.QVBoxLayout()
156 #
157 hlayout.addLayout(vlayout, 1)
158 hlayout.addWidget(self.canvas.native, 1)
159 #
160 vlayout.addWidget(self.vertLabel, 0)
161 vlayout.addWidget(self.vertEdit, 1)
162 vlayout.addWidget(self.fragLabel, 0)
163 vlayout.addWidget(self.fragEdit, 1)
164 vlayout.addWidget(self.theButton, 0)
165
166 def on_compile(self):
167 vert_code = str(self.vertEdit.toPlainText())
168 frag_code = str(self.fragEdit.toPlainText())
169 self.canvas.program.shaders[0].code = vert_code
170 self.canvas.program.shaders[1].code = frag_code
171
172
173 if __name__ == '__main__':
174 app.create()
175 m = MainWindow()
176 m.show()
177 app.run()
178
[end of examples/glsl-sandbox-cube.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/glsl-sandbox-cube.py b/examples/glsl-sandbox-cube.py
--- a/examples/glsl-sandbox-cube.py
+++ b/examples/glsl-sandbox-cube.py
@@ -79,7 +79,7 @@
def on_initialize(self, event):
gloo.set_clear_color((1, 1, 1, 1))
- gloo.set_state(depth=True)
+ gloo.set_state(depth_test=True)
def on_resize(self, event):
width, height = event.size
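For context on why `depth=True` fails while `depth_test=True` works: the traceback in the issue shows the wrapper mapping each state key onto a `GL_*` attribute name, and its `gl` namespace has no `GL_DEPTH` attribute to enable. A rough raw-OpenGL equivalent of what the corrected call turns on, assuming PyOpenGL (illustrative sketch, not vispy's actual implementation):

```python
import OpenGL.GL as gl


def enable_depth_testing():
    # gloo.set_state(depth_test=True) presumably boils down to enabling
    # GL_DEPTH_TEST; GL_DEPTH itself is a different constant (a buffer
    # selector), which is why the wrapper rejected the key "depth".
    gl.glEnable(gl.GL_DEPTH_TEST)
    gl.glDepthFunc(gl.GL_LESS)  # default depth comparison
```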
|
{"golden_diff": "diff --git a/examples/glsl-sandbox-cube.py b/examples/glsl-sandbox-cube.py\n--- a/examples/glsl-sandbox-cube.py\n+++ b/examples/glsl-sandbox-cube.py\n@@ -79,7 +79,7 @@\n \n def on_initialize(self, event):\n gloo.set_clear_color((1, 1, 1, 1))\n- gloo.set_state(depth=True)\n+ gloo.set_state(depth_test=True)\n \n def on_resize(self, event):\n width, height = event.size\n", "issue": "glsl-sandbox-cube GL_DEPTH issue (Linux Python 2.7.6)\nI get the following issue when running glsl-sanbox-cube; setting `GL_DEPTH` doesn't seem to work. \n\n```\nTraceback (most recent call last):\n File \"glsl-sandbox-cube.py\", line 82, in on_initialize\n gloo.set_state(depth=True)\n File \"/usr/local/lib/python2.7/dist-packages/vispy-0.2.1-py2.7.egg/vispy/gloo/wrappers.py\", line 531, in set_state\n func(_gl_attr(key))\n File \"/usr/local/lib/python2.7/dist-packages/vispy-0.2.1-py2.7.egg/vispy/gloo/wrappers.py\", line 43, in _gl_attr\n % (x, y))\nValueError: gl has no attribute corresponding to name depth (GL_DEPTH)\n```\n\nHowever when I check `PyOpenGL`:\n\n```\nimport OpenGL.GL as gl\nprint gl.GL_DEPTH\n>> GL_DEPTH (6145)\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nA GLSL sandbox application based on the spinning cube. Requires PySide\nor PyQt4.\n\"\"\"\n\nimport numpy as np\nfrom vispy import app, gloo, dataio\nfrom vispy.util.transforms import perspective, translate, rotate\n\n# Force using qt and take QtCore+QtGui from backend module,\n# since we do not know whether PySide or PyQt4 is used\napp.use('qt')\nQtCore = app.default_app.backend_module.QtCore,\nQtGui = app.default_app.backend_module.QtGui\n\n\nVERT_CODE = \"\"\"\nuniform mat4 u_model;\nuniform mat4 u_view;\nuniform mat4 u_projection;\n\nattribute vec3 a_position;\nattribute vec2 a_texcoord;\n\nvarying vec2 v_texcoord;\n\nvoid main()\n{\n v_texcoord = a_texcoord;\n gl_Position = u_projection * u_view * u_model * vec4(a_position,1.0);\n //gl_Position = vec4(a_position,1.0);\n}\n\"\"\"\n\n\nFRAG_CODE = \"\"\"\nuniform sampler2D u_texture;\nvarying vec2 v_texcoord;\n\nvoid main()\n{\n float ty = v_texcoord.y;\n float tx = sin(ty*50.0)*0.01 + v_texcoord.x;\n gl_FragColor = texture2D(u_texture, vec2(tx, ty));\n \n}\n\"\"\"\n\n\n# Read cube data\npositions, faces, normals, texcoords = dataio.read_mesh('cube.obj')\ncolors = np.random.uniform(0, 1, positions.shape).astype('float32')\n\nfaces_buffer = gloo.IndexBuffer(faces.astype(np.uint16))\n\n\nclass Canvas(app.Canvas):\n\n def __init__(self, **kwargs):\n app.Canvas.__init__(self, **kwargs)\n self.geometry = 0, 0, 400, 400\n\n self.program = gloo.Program(VERT_CODE, FRAG_CODE)\n\n # Set attributes\n self.program['a_position'] = gloo.VertexBuffer(positions)\n self.program['a_texcoord'] = gloo.VertexBuffer(texcoords)\n\n self.program['u_texture'] = gloo.Texture2D(dataio.crate())\n\n # Handle transformations\n self.init_transforms()\n\n self.timer = app.Timer(1.0 / 60)\n self.timer.connect(self.update_transforms)\n self.timer.start()\n\n def on_initialize(self, event):\n gloo.set_clear_color((1, 1, 1, 1))\n gloo.set_state(depth=True)\n\n def on_resize(self, event):\n width, height = event.size\n gloo.set_viewport(0, 0, width, height)\n self.projection = perspective(45.0, width / float(height), 2.0, 10.0)\n self.program['u_projection'] = self.projection\n\n def on_paint(self, event):\n\n gloo.clear()\n self.program.draw('triangles', faces_buffer)\n\n def init_transforms(self):\n self.view = np.eye(4, dtype=np.float32)\n self.model = 
np.eye(4, dtype=np.float32)\n self.projection = np.eye(4, dtype=np.float32)\n\n self.theta = 0\n self.phi = 0\n\n translate(self.view, 0, 0, -5)\n self.program['u_model'] = self.model\n self.program['u_view'] = self.view\n\n def update_transforms(self, event):\n self.theta += .5\n self.phi += .5\n self.model = np.eye(4, dtype=np.float32)\n rotate(self.model, self.theta, 0, 0, 1)\n rotate(self.model, self.phi, 0, 1, 0)\n self.program['u_model'] = self.model\n self.update()\n\n\nclass TextField(QtGui.QPlainTextEdit):\n\n def __init__(self, parent):\n QtGui.QPlainTextEdit.__init__(self, parent)\n # Set font to monospaced (TypeWriter)\n font = QtGui.QFont('')\n font.setStyleHint(font.TypeWriter, font.PreferDefault)\n font.setPointSize(8)\n self.setFont(font)\n\n\nclass MainWindow(QtGui.QWidget):\n\n def __init__(self):\n QtGui.QWidget.__init__(self, None)\n\n self.setMinimumSize(600, 400)\n\n # Create two labels and a button\n self.vertLabel = QtGui.QLabel(\"Vertex code\", self)\n self.fragLabel = QtGui.QLabel(\"Fragment code\", self)\n self.theButton = QtGui.QPushButton(\"Compile!\", self)\n self.theButton.clicked.connect(self.on_compile)\n\n # Create two editors\n self.vertEdit = TextField(self)\n self.vertEdit.setPlainText(VERT_CODE)\n self.fragEdit = TextField(self)\n self.fragEdit.setPlainText(FRAG_CODE)\n\n # Create a canvas\n self.canvas = Canvas()\n self.canvas.create_native()\n self.canvas.native.setParent(self)\n\n # Layout\n hlayout = QtGui.QHBoxLayout(self)\n self.setLayout(hlayout)\n vlayout = QtGui.QVBoxLayout()\n #\n hlayout.addLayout(vlayout, 1)\n hlayout.addWidget(self.canvas.native, 1)\n #\n vlayout.addWidget(self.vertLabel, 0)\n vlayout.addWidget(self.vertEdit, 1)\n vlayout.addWidget(self.fragLabel, 0)\n vlayout.addWidget(self.fragEdit, 1)\n vlayout.addWidget(self.theButton, 0)\n\n def on_compile(self):\n vert_code = str(self.vertEdit.toPlainText())\n frag_code = str(self.fragEdit.toPlainText())\n self.canvas.program.shaders[0].code = vert_code\n self.canvas.program.shaders[1].code = frag_code\n\n\nif __name__ == '__main__':\n app.create()\n m = MainWindow()\n m.show()\n app.run()\n", "path": "examples/glsl-sandbox-cube.py"}]}
| 2,508 | 119 |
gh_patches_debug_42384
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-1500
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feedback] - Mail text is borked
The feedback notification mail is broken.
This probably started after the Python upgrade.
</issue>
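A hedged aside on the suspected root cause: `mommy.py` below encodes strftime output to bytes and then interpolates it into the mail body, which on Python 3 leaves literal `b'...'` markers in the text. A minimal standalone reproduction of that behaviour (an assumption about the breakage, not something stated in the issue):

```python
# Minimal reproduction of the suspected bytes-vs-str mix-up (Python 3).
deadline = "14. mars".encode("utf-8")  # bytes, mirroring strftime(...).encode("utf-8")
message = "Fristen for å svare på skjema er %s innen kl 23:59." % deadline
print(message)
# Fristen for å svare på skjema er b'14. mars' innen kl 23:59.
```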
<code>
[start of apps/feedback/mommy.py]
1 # -*- coding: utf-8 -*-
2
3 import locale
4 import logging
5
6 from django.conf import settings
7 from django.core.mail import EmailMessage
8 from django.utils import timezone
9
10 from apps.feedback.models import FeedbackRelation
11 from apps.marks.models import Mark, MarkUser
12 from apps.mommy import schedule
13 from apps.mommy.registry import Task
14
15
16 class FeedbackMail(Task):
17
18 @staticmethod
19 def run():
20 logger = logging.getLogger("feedback")
21 logger.info("Feedback job started")
22 locale.setlocale(locale.LC_ALL, "nb_NO.UTF-8")
23 active_feedbacks = FeedbackRelation.objects.filter(active=True)
24
25 for feedback in active_feedbacks:
26 message = FeedbackMail.generate_message(feedback, logger)
27 logger.info("Status: " + message.status)
28
29 if message.send:
30 EmailMessage(
31 message.subject,
32 str(message),
33 message.committee_mail,
34 [],
35 message.attended_mails
36 ).send()
37 logger.info('Emails sent to: ' + str(message.attended_mails))
38
39 if message.results_message:
40 EmailMessage(
41 "Feedback resultat",
42 message.results_message,
43 "[email protected]",
44 [message.committee_mail]
45 ).send()
46 logger.info('Results mail sent to :' + message.committee_mail)
47
48 @staticmethod
49 def generate_message(feedback, logger):
50 logger.info('Processing: "' + feedback.content_title() + '"')
51
52 today = timezone.now().date()
53 end_date = feedback.content_end_date()
54
55 message = Message()
56
57 if not end_date:
58 message.status = "Content object has no date"
59 return message
60
61 # Return if the event has not yet happened
62 if end_date.date() >= today:
63 message.status = "Event not done"
64 return message
65
66 not_responded = FeedbackMail.get_users(feedback)
67 logger.info('Not responded: ' + str(not_responded))
68
69 # Return if everyone has answered
70 if not not_responded:
71 feedback.active = False
72 feedback.save()
73 message.status = 'Everyone has answered'
74 return message
75
76 message.attended_mails = FeedbackMail.get_user_mails(not_responded)
77
78 message.committee_mail = FeedbackMail.get_committee_email(feedback)
79 deadline = feedback.deadline.strftime("%d. %B").encode("utf-8")
80 title = FeedbackMail.get_title(feedback)
81 message.link = str("\n\n" + FeedbackMail.get_link(feedback)).encode()
82 results_link = str(FeedbackMail.get_link(feedback) + "results").encode()
83
84 deadline_diff = (feedback.deadline - today).days
85
86 message.subject = "Feedback: " + title
87 message.intro = "Hei, vi ønsker tilbakemelding på \"" + title + "\""
88 message.mark = FeedbackMail.mark_message(feedback)
89 message.contact = "\n\nEventuelle spørsmål sendes til %s " % message.committee_mail
90 message.date = FeedbackMail.date_message(end_date)
91
92 if deadline_diff < 0: # Deadline passed
93 feedback.active = False
94 feedback.save()
95 logger.info("Deadline passed feedback set to inactive")
96 message.status = "Deadine passed"
97
98 if feedback.gives_mark:
99 FeedbackMail.set_marks(title, not_responded)
100
101 message.intro = "Fristen for å svare på \"%s\" har gått ut og du har fått en prikk." % title
102 message.mark = ""
103 message.date = ""
104 message.link = ""
105 message.send = True
106
107 logger.info("Marks given to: " + str(not_responded))
108
109 elif deadline_diff < 1: # Last warning
110 message.deadline = "\n\nI dag innen 23:59 er siste frist til å svare på skjemaet."
111
112 message.results_message = """
113 Hei, siste purremail på feedback skjema har blitt sendt til alle
114 gjenværende deltagere på \"{}\".\nDere kan se feedback-resultatene på:\n{}\n
115 """.format(title, results_link)
116 message.send = True
117 message.status = "Last warning"
118 elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline
119 message.deadline = "\n\nFristen for å svare på skjema er %s innen kl 23:59." % deadline
120 message.send = True
121 message.status = "Warning message"
122 elif not feedback.first_mail_sent:
123 message.deadline = "\n\nFristen for å svare på skjema er %s innen kl 23:59." % deadline
124
125 message.results_message = """
126 Hei, nå har feedbackmail blitt sendt til alle
127 deltagere på \"{}\".\nDere kan se feedback-resultatene på:\n{}\n
128 """.format(title, results_link)
129 message.send = True
130 message.status = "First message"
131 feedback.first_mail_sent = True
132 feedback.save()
133 logger.info("first_mail_sent set")
134 else:
135 message.status = "No message generated"
136 return message
137
138 @staticmethod
139 def end_date(feedback):
140 end_date = feedback.content_end_date()
141
142 if end_date:
143 return end_date.date()
144 else:
145 return False
146
147 @staticmethod
148 def date_message(date):
149 # If the object(event) doesnt have start date it will send
150 # The first notification the day after the feedbackrelation is made
151 if date:
152 date_string = date.strftime("%d. %B").encode("utf-8")
153 message_date = "som du var med på den %s:" % date_string
154 else:
155 message_date = ""
156
157 return message_date
158
159 @staticmethod
160 def get_users(feedback):
161 return feedback.not_answered()
162
163 @staticmethod
164 def get_user_mails(not_responded):
165 return [user.email for user in not_responded]
166
167 @staticmethod
168 def get_link(feedback):
169 return str(settings.BASE_URL + feedback.get_absolute_url())
170
171 @staticmethod
172 def get_title(feedback):
173 return str(feedback.content_title())
174
175 @staticmethod
176 def get_committee_email(feedback):
177 return feedback.content_email()
178
179 @staticmethod
180 def mark_message(feedback):
181 if feedback.gives_mark:
182 return """
183 \nVær oppmerksom på at du får prikk dersom du ikke svarer
184 på disse spørsmålene innen fristen.
185 """
186 else:
187 return ""
188
189 @staticmethod
190 def set_marks(title, not_responded):
191 mark = Mark()
192 mark.title = "Manglende tilbakemelding på %s" % title
193 mark.category = 4 # Missed feedback
194 mark.description = "Du har fått en prikk fordi du ikke har levert tilbakemelding."
195 mark.save()
196
197 for user in not_responded:
198 user_entry = MarkUser()
199 user_entry.user = user
200 user_entry.mark = mark
201 user_entry.save()
202
203
204 class Message(object):
205 subject = ""
206 intro = ""
207 date = ""
208 deadline = ""
209 mark = ""
210 contact = ""
211 link = ""
212 send = False
213 end = "\n\nMvh\nLinjeforeningen Online"
214 results_message = False
215 status = "-"
216
217 committee_mail = ""
218 attended_mails = False
219
220 def __str__(self):
221 message = "%s %s %s %s %s %s %s" % (
222 self.intro,
223 self.date,
224 self.link,
225 self.deadline,
226 self.mark,
227 self.contact,
228 self.end
229 )
230 return message
231
232 schedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)
233
[end of apps/feedback/mommy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/feedback/mommy.py b/apps/feedback/mommy.py
--- a/apps/feedback/mommy.py
+++ b/apps/feedback/mommy.py
@@ -76,10 +76,10 @@
message.attended_mails = FeedbackMail.get_user_mails(not_responded)
message.committee_mail = FeedbackMail.get_committee_email(feedback)
- deadline = feedback.deadline.strftime("%d. %B").encode("utf-8")
+ deadline = feedback.deadline.strftime("%d. %B")
title = FeedbackMail.get_title(feedback)
- message.link = str("\n\n" + FeedbackMail.get_link(feedback)).encode()
- results_link = str(FeedbackMail.get_link(feedback) + "results").encode()
+ message.link = str("\n\n" + FeedbackMail.get_link(feedback))
+ results_link = str(FeedbackMail.get_link(feedback) + "results")
deadline_diff = (feedback.deadline - today).days
@@ -109,10 +109,8 @@
elif deadline_diff < 1: # Last warning
message.deadline = "\n\nI dag innen 23:59 er siste frist til å svare på skjemaet."
- message.results_message = """
- Hei, siste purremail på feedback skjema har blitt sendt til alle
- gjenværende deltagere på \"{}\".\nDere kan se feedback-resultatene på:\n{}\n
- """.format(title, results_link)
+ message.results_message = "Hei, siste purremail på feedback skjema har blitt sendt til alle gjenværende " \
+ "deltagere på \"{}\".\nDere kan se feedback-resultatene på:\n{}\n".format(title, results_link)
message.send = True
message.status = "Last warning"
elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline
@@ -121,13 +119,11 @@
message.status = "Warning message"
elif not feedback.first_mail_sent:
message.deadline = "\n\nFristen for å svare på skjema er %s innen kl 23:59." % deadline
-
- message.results_message = """
- Hei, nå har feedbackmail blitt sendt til alle
- deltagere på \"{}\".\nDere kan se feedback-resultatene på:\n{}\n
- """.format(title, results_link)
+ message.results_message = "Hei, nå har feedbackmail blitt sendt til alle deltagere på \"{}\"." \
+ "\nDere kan se resultatene på:\n{}\n".format(title, results_link)
message.send = True
message.status = "First message"
+
feedback.first_mail_sent = True
feedback.save()
logger.info("first_mail_sent set")
@@ -149,7 +145,7 @@
# If the object(event) doesnt have start date it will send
# The first notification the day after the feedbackrelation is made
if date:
- date_string = date.strftime("%d. %B").encode("utf-8")
+ date_string = date.strftime("%d. %B")
message_date = "som du var med på den %s:" % date_string
else:
message_date = ""
@@ -179,10 +175,8 @@
@staticmethod
def mark_message(feedback):
if feedback.gives_mark:
- return """
- \nVær oppmerksom på at du får prikk dersom du ikke svarer
- på disse spørsmålene innen fristen.
- """
+ return "\nVær oppmerksom på at du får prikk dersom du ikke svarer " \
+ "på disse spørsmålene innen fristen."
else:
return ""
|
{"golden_diff": "diff --git a/apps/feedback/mommy.py b/apps/feedback/mommy.py\n--- a/apps/feedback/mommy.py\n+++ b/apps/feedback/mommy.py\n@@ -76,10 +76,10 @@\n message.attended_mails = FeedbackMail.get_user_mails(not_responded)\n \n message.committee_mail = FeedbackMail.get_committee_email(feedback)\n- deadline = feedback.deadline.strftime(\"%d. %B\").encode(\"utf-8\")\n+ deadline = feedback.deadline.strftime(\"%d. %B\")\n title = FeedbackMail.get_title(feedback)\n- message.link = str(\"\\n\\n\" + FeedbackMail.get_link(feedback)).encode()\n- results_link = str(FeedbackMail.get_link(feedback) + \"results\").encode()\n+ message.link = str(\"\\n\\n\" + FeedbackMail.get_link(feedback))\n+ results_link = str(FeedbackMail.get_link(feedback) + \"results\")\n \n deadline_diff = (feedback.deadline - today).days\n \n@@ -109,10 +109,8 @@\n elif deadline_diff < 1: # Last warning\n message.deadline = \"\\n\\nI dag innen 23:59 er siste frist til \u00e5 svare p\u00e5 skjemaet.\"\n \n- message.results_message = \"\"\"\n- Hei, siste purremail p\u00e5 feedback skjema har blitt sendt til alle\n- gjenv\u00e6rende deltagere p\u00e5 \\\"{}\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n{}\\n\n- \"\"\".format(title, results_link)\n+ message.results_message = \"Hei, siste purremail p\u00e5 feedback skjema har blitt sendt til alle gjenv\u00e6rende \" \\\n+ \"deltagere p\u00e5 \\\"{}\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n{}\\n\".format(title, results_link)\n message.send = True\n message.status = \"Last warning\"\n elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline\n@@ -121,13 +119,11 @@\n message.status = \"Warning message\"\n elif not feedback.first_mail_sent:\n message.deadline = \"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % deadline\n-\n- message.results_message = \"\"\"\n- Hei, n\u00e5 har feedbackmail blitt sendt til alle\n- deltagere p\u00e5 \\\"{}\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n{}\\n\n- \"\"\".format(title, results_link)\n+ message.results_message = \"Hei, n\u00e5 har feedbackmail blitt sendt til alle deltagere p\u00e5 \\\"{}\\\".\" \\\n+ \"\\nDere kan se resultatene p\u00e5:\\n{}\\n\".format(title, results_link)\n message.send = True\n message.status = \"First message\"\n+\n feedback.first_mail_sent = True\n feedback.save()\n logger.info(\"first_mail_sent set\")\n@@ -149,7 +145,7 @@\n # If the object(event) doesnt have start date it will send\n # The first notification the day after the feedbackrelation is made\n if date:\n- date_string = date.strftime(\"%d. %B\").encode(\"utf-8\")\n+ date_string = date.strftime(\"%d. 
%B\")\n message_date = \"som du var med p\u00e5 den %s:\" % date_string\n else:\n message_date = \"\"\n@@ -179,10 +175,8 @@\n @staticmethod\n def mark_message(feedback):\n if feedback.gives_mark:\n- return \"\"\"\n- \\nV\u00e6r oppmerksom p\u00e5 at du f\u00e5r prikk dersom du ikke svarer\n- p\u00e5 disse sp\u00f8rsm\u00e5lene innen fristen.\n- \"\"\"\n+ return \"\\nV\u00e6r oppmerksom p\u00e5 at du f\u00e5r prikk dersom du ikke svarer \" \\\n+ \"p\u00e5 disse sp\u00f8rsm\u00e5lene innen fristen.\"\n else:\n return \"\"\n", "issue": "[Feedback] - Mail text is borked\nThe feedback notification mail is broken.\nProbably after the python upgrade.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport locale\nimport logging\n\nfrom django.conf import settings\nfrom django.core.mail import EmailMessage\nfrom django.utils import timezone\n\nfrom apps.feedback.models import FeedbackRelation\nfrom apps.marks.models import Mark, MarkUser\nfrom apps.mommy import schedule\nfrom apps.mommy.registry import Task\n\n\nclass FeedbackMail(Task):\n\n @staticmethod\n def run():\n logger = logging.getLogger(\"feedback\")\n logger.info(\"Feedback job started\")\n locale.setlocale(locale.LC_ALL, \"nb_NO.UTF-8\")\n active_feedbacks = FeedbackRelation.objects.filter(active=True)\n\n for feedback in active_feedbacks:\n message = FeedbackMail.generate_message(feedback, logger)\n logger.info(\"Status: \" + message.status)\n\n if message.send:\n EmailMessage(\n message.subject,\n str(message),\n message.committee_mail,\n [],\n message.attended_mails\n ).send()\n logger.info('Emails sent to: ' + str(message.attended_mails))\n\n if message.results_message:\n EmailMessage(\n \"Feedback resultat\",\n message.results_message,\n \"[email protected]\",\n [message.committee_mail]\n ).send()\n logger.info('Results mail sent to :' + message.committee_mail)\n\n @staticmethod\n def generate_message(feedback, logger):\n logger.info('Processing: \"' + feedback.content_title() + '\"')\n\n today = timezone.now().date()\n end_date = feedback.content_end_date()\n\n message = Message()\n\n if not end_date:\n message.status = \"Content object has no date\"\n return message\n\n # Return if the event has not yet happened\n if end_date.date() >= today:\n message.status = \"Event not done\"\n return message\n\n not_responded = FeedbackMail.get_users(feedback)\n logger.info('Not responded: ' + str(not_responded))\n\n # Return if everyone has answered\n if not not_responded:\n feedback.active = False\n feedback.save()\n message.status = 'Everyone has answered'\n return message\n\n message.attended_mails = FeedbackMail.get_user_mails(not_responded)\n\n message.committee_mail = FeedbackMail.get_committee_email(feedback)\n deadline = feedback.deadline.strftime(\"%d. 
%B\").encode(\"utf-8\")\n title = FeedbackMail.get_title(feedback)\n message.link = str(\"\\n\\n\" + FeedbackMail.get_link(feedback)).encode()\n results_link = str(FeedbackMail.get_link(feedback) + \"results\").encode()\n\n deadline_diff = (feedback.deadline - today).days\n\n message.subject = \"Feedback: \" + title\n message.intro = \"Hei, vi \u00f8nsker tilbakemelding p\u00e5 \\\"\" + title + \"\\\"\"\n message.mark = FeedbackMail.mark_message(feedback)\n message.contact = \"\\n\\nEventuelle sp\u00f8rsm\u00e5l sendes til %s \" % message.committee_mail\n message.date = FeedbackMail.date_message(end_date)\n\n if deadline_diff < 0: # Deadline passed\n feedback.active = False\n feedback.save()\n logger.info(\"Deadline passed feedback set to inactive\")\n message.status = \"Deadine passed\"\n\n if feedback.gives_mark:\n FeedbackMail.set_marks(title, not_responded)\n\n message.intro = \"Fristen for \u00e5 svare p\u00e5 \\\"%s\\\" har g\u00e5tt ut og du har f\u00e5tt en prikk.\" % title\n message.mark = \"\"\n message.date = \"\"\n message.link = \"\"\n message.send = True\n\n logger.info(\"Marks given to: \" + str(not_responded))\n\n elif deadline_diff < 1: # Last warning\n message.deadline = \"\\n\\nI dag innen 23:59 er siste frist til \u00e5 svare p\u00e5 skjemaet.\"\n\n message.results_message = \"\"\"\n Hei, siste purremail p\u00e5 feedback skjema har blitt sendt til alle\n gjenv\u00e6rende deltagere p\u00e5 \\\"{}\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n{}\\n\n \"\"\".format(title, results_link)\n message.send = True\n message.status = \"Last warning\"\n elif deadline_diff < 3 and feedback.gives_mark: # 3 days from the deadline\n message.deadline = \"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % deadline\n message.send = True\n message.status = \"Warning message\"\n elif not feedback.first_mail_sent:\n message.deadline = \"\\n\\nFristen for \u00e5 svare p\u00e5 skjema er %s innen kl 23:59.\" % deadline\n\n message.results_message = \"\"\"\n Hei, n\u00e5 har feedbackmail blitt sendt til alle\n deltagere p\u00e5 \\\"{}\\\".\\nDere kan se feedback-resultatene p\u00e5:\\n{}\\n\n \"\"\".format(title, results_link)\n message.send = True\n message.status = \"First message\"\n feedback.first_mail_sent = True\n feedback.save()\n logger.info(\"first_mail_sent set\")\n else:\n message.status = \"No message generated\"\n return message\n\n @staticmethod\n def end_date(feedback):\n end_date = feedback.content_end_date()\n\n if end_date:\n return end_date.date()\n else:\n return False\n\n @staticmethod\n def date_message(date):\n # If the object(event) doesnt have start date it will send\n # The first notification the day after the feedbackrelation is made\n if date:\n date_string = date.strftime(\"%d. 
%B\").encode(\"utf-8\")\n message_date = \"som du var med p\u00e5 den %s:\" % date_string\n else:\n message_date = \"\"\n\n return message_date\n\n @staticmethod\n def get_users(feedback):\n return feedback.not_answered()\n\n @staticmethod\n def get_user_mails(not_responded):\n return [user.email for user in not_responded]\n\n @staticmethod\n def get_link(feedback):\n return str(settings.BASE_URL + feedback.get_absolute_url())\n\n @staticmethod\n def get_title(feedback):\n return str(feedback.content_title())\n\n @staticmethod\n def get_committee_email(feedback):\n return feedback.content_email()\n\n @staticmethod\n def mark_message(feedback):\n if feedback.gives_mark:\n return \"\"\"\n \\nV\u00e6r oppmerksom p\u00e5 at du f\u00e5r prikk dersom du ikke svarer\n p\u00e5 disse sp\u00f8rsm\u00e5lene innen fristen.\n \"\"\"\n else:\n return \"\"\n\n @staticmethod\n def set_marks(title, not_responded):\n mark = Mark()\n mark.title = \"Manglende tilbakemelding p\u00e5 %s\" % title\n mark.category = 4 # Missed feedback\n mark.description = \"Du har f\u00e5tt en prikk fordi du ikke har levert tilbakemelding.\"\n mark.save()\n\n for user in not_responded:\n user_entry = MarkUser()\n user_entry.user = user\n user_entry.mark = mark\n user_entry.save()\n\n\nclass Message(object):\n subject = \"\"\n intro = \"\"\n date = \"\"\n deadline = \"\"\n mark = \"\"\n contact = \"\"\n link = \"\"\n send = False\n end = \"\\n\\nMvh\\nLinjeforeningen Online\"\n results_message = False\n status = \"-\"\n\n committee_mail = \"\"\n attended_mails = False\n\n def __str__(self):\n message = \"%s %s %s %s %s %s %s\" % (\n self.intro,\n self.date,\n self.link,\n self.deadline,\n self.mark,\n self.contact,\n self.end\n )\n return message\n\nschedule.register(FeedbackMail, day_of_week='mon-sun', hour=8, minute=00)\n", "path": "apps/feedback/mommy.py"}]}
| 2,882 | 899 |
gh_patches_debug_9613
|
rasdani/github-patches
|
git_diff
|
azavea__raster-vision-550
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Changelog
We need a changelog in the docs that we can update for every PR that adds a fix or a feature.
</issue>
<code>
[start of docs/conf.py]
1 from pallets_sphinx_themes import ProjectLink, get_version
2
3 # -*- coding: utf-8 -*-
4 #
5 # Configuration file for the Sphinx documentation builder.
6 #
7 # This file does only contain a selection of the most common options. For a
8 # full list see the documentation:
9 # http://www.sphinx-doc.org/en/stable/config
10
11 # -- Path setup --------------------------------------------------------------
12
13 # If extensions (or modules to document with autodoc) are in another directory,
14 # add these directories to sys.path here. If the directory is relative to the
15 # documentation root, use os.path.abspath to make it absolute, like shown here.
16 #
17 # import os
18 # import sys
19 # sys.path.insert(0, os.path.abspath('.'))
20
21
22 # -- Project information -----------------------------------------------------
23
24 project = 'Raster Vision'
25 copyright = '2018, Azavea'
26 author = 'Azavea'
27
28 # The short X.Y version
29 version = '0.8'
30 # The full version, including alpha/beta/rc tags
31 release = '0.8.0'
32
33
34 # -- General configuration ---------------------------------------------------
35
36 # If your documentation needs a minimal Sphinx version, state it here.
37 #
38 # needs_sphinx = '1.0'
39
40 # Add any Sphinx extension module names here, as strings. They can be
41 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
42 # ones.
43 extensions = [
44 'sphinx.ext.autodoc',
45 'sphinx.ext.intersphinx',
46 'pallets_sphinx_themes',
47 'sphinxcontrib.programoutput'
48 ]
49
50 intersphinx_mapping = {'python': ('https://docs.python.org/3/', None)}
51
52 # Add any paths that contain templates here, relative to this directory.
53 templates_path = ['_templates']
54
55 # The suffix(es) of source filenames.
56 # You can specify multiple suffix as a list of string:
57 #
58 # source_suffix = ['.rst', '.md']
59 source_suffix = '.rst'
60
61 # The master toctree document.
62 master_doc = 'index'
63
64 # The language for content autogenerated by Sphinx. Refer to documentation
65 # for a list of supported languages.
66 #
67 # This is also used if you do content translation via gettext catalogs.
68 # Usually you set "language" from the command line for these cases.
69 language = None
70
71 # List of patterns, relative to source directory, that match files and
72 # directories to ignore when looking for source files.
73 # This pattern also affects html_static_path and html_extra_path .
74 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'README.md']
75
76 # The name of the Pygments (syntax highlighting) style to use.
77 # pygments_style = 'sphinx'
78
79 # HTML -----------------------------------------------------------------
80
81 html_theme = 'click'
82 html_theme_options = {'index_sidebar_logo': False}
83 html_context = {
84 'project_links': [
85 ProjectLink('Documentation TOC', 'index.html#documentation'),
86 ProjectLink('API Reference TOC', 'index.html#api-reference'),
87 ProjectLink('Project Website', 'https://rastervision.io/'),
88 ProjectLink('PyPI releases', 'https://pypi.org/project/rastervision/'),
89 ProjectLink('GitHub', 'https://github.com/azavea/raster-vision'),
90 ProjectLink('Gitter Channel', 'https://gitter.im/azavea/raster-vision'),
91 ProjectLink('Raster Vision Examples', 'https://github.com/azavea/raster-vision-examples'),
92 ProjectLink('QGIS Plugin', 'https://github.com/azavea/raster-vision-qgis'),
93 ProjectLink('AWS Batch Setup', 'https://github.com/azavea/raster-vision-aws'),
94 ProjectLink('Issue Tracker', 'https://github.com/azavea/raster-vision/issues/'),
95 ProjectLink('Azavea', 'https://www.azavea.com/'),
96 ],
97 'css_files': [
98 '_static/rastervision.css',
99 'https://media.readthedocs.org/css/badge_only.css'
100 ]
101 }
102 html_sidebars = {
103 'index': ['project.html', 'versions.html', 'searchbox.html'],
104 '**': ['project.html', 'localtoc.html', 'relations.html', 'versions.html', 'searchbox.html'],
105 }
106 singlehtml_sidebars = {'index': ['project.html', 'versions.html', 'localtoc.html']}
107 html_static_path = ['_static']
108 html_favicon = '_static/raster-vision-icon.png'
109 html_logo = '_static/raster-vision-logo.png'
110 html_title = 'Raster Vision Documentation ({})'.format(version)
111 html_show_sourcelink = False
112 html_domain_indices = False
113 html_experimental_html5_writer = True
114
115 # -- Options for HTMLHelp output ---------------------------------------------
116
117 # Output file base name for HTML help builder.
118 htmlhelp_basename = 'RasterVisiondoc'
119
120
121 # -- Options for LaTeX output ------------------------------------------------
122
123 latex_elements = {
124 # The paper size ('letterpaper' or 'a4paper').
125 #
126 # 'papersize': 'letterpaper',
127
128 # The font size ('10pt', '11pt' or '12pt').
129 #
130 # 'pointsize': '10pt',
131
132 # Additional stuff for the LaTeX preamble.
133 #
134 # 'preamble': '',
135
136 # Latex figure (float) alignment
137 #
138 # 'figure_align': 'htbp',
139 }
140
141 # Grouping the document tree into LaTeX files. List of tuples
142 # (source start file, target name, title,
143 # author, documentclass [howto, manual, or own class]).
144 latex_documents = [
145 (master_doc, 'RasterVision.tex', 'Raster Vision Documentation',
146 'Azavea', 'manual'),
147 ]
148
149
150 # -- Options for manual page output ------------------------------------------
151
152 # One entry per manual page. List of tuples
153 # (source start file, name, description, authors, manual section).
154 man_pages = [
155 (master_doc, 'RasterVisoin-{}.tex', html_title,
156 [author], 'manual')
157 ]
158
159
160 # -- Options for Texinfo output ----------------------------------------------
161
162 # Grouping the document tree into Texinfo files. List of tuples
163 # (source start file, target name, title, author,
164 # dir menu entry, description, category)
165 texinfo_documents = [
166 (master_doc, 'RasterVision', 'Raster Vision Documentation',
167 author, 'RasterVision', 'One line description of project.',
168 'Miscellaneous'),
169 ]
170
171
172 # -- Extension configuration -------------------------------------------------
173
174 programoutput_prompt_template = '> {command}\n{output}'
175
176 # -- Options for todo extension ----------------------------------------------
177
178 # If true, `todo` and `todoList` produce output, else they produce nothing.
179 todo_include_todos = True
180
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -92,6 +92,7 @@
ProjectLink('QGIS Plugin', 'https://github.com/azavea/raster-vision-qgis'),
ProjectLink('AWS Batch Setup', 'https://github.com/azavea/raster-vision-aws'),
ProjectLink('Issue Tracker', 'https://github.com/azavea/raster-vision/issues/'),
+ ProjectLink('CHANGELOG', 'changelog.html'),
ProjectLink('Azavea', 'https://www.azavea.com/'),
],
'css_files': [
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -92,6 +92,7 @@\n ProjectLink('QGIS Plugin', 'https://github.com/azavea/raster-vision-qgis'),\n ProjectLink('AWS Batch Setup', 'https://github.com/azavea/raster-vision-aws'),\n ProjectLink('Issue Tracker', 'https://github.com/azavea/raster-vision/issues/'),\n+ ProjectLink('CHANGELOG', 'changelog.html'),\n ProjectLink('Azavea', 'https://www.azavea.com/'),\n ],\n 'css_files': [\n", "issue": "Add Changelog\nWe need a changelog in the docs that we can update for every PR that adds a fix or a feature.\n", "before_files": [{"content": "from pallets_sphinx_themes import ProjectLink, get_version\n\n# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Raster Vision'\ncopyright = '2018, Azavea'\nauthor = 'Azavea'\n\n# The short X.Y version\nversion = '0.8'\n# The full version, including alpha/beta/rc tags\nrelease = '0.8.0'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'pallets_sphinx_themes',\n 'sphinxcontrib.programoutput'\n]\n\nintersphinx_mapping = {'python': ('https://docs.python.org/3/', None)}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store', 'README.md']\n\n# The name of the Pygments (syntax highlighting) style to use.\n# pygments_style = 'sphinx'\n\n# HTML -----------------------------------------------------------------\n\nhtml_theme = 'click'\nhtml_theme_options = {'index_sidebar_logo': False}\nhtml_context = {\n 'project_links': [\n ProjectLink('Documentation TOC', 'index.html#documentation'),\n ProjectLink('API Reference TOC', 'index.html#api-reference'),\n ProjectLink('Project Website', 'https://rastervision.io/'),\n ProjectLink('PyPI releases', 'https://pypi.org/project/rastervision/'),\n ProjectLink('GitHub', 'https://github.com/azavea/raster-vision'),\n ProjectLink('Gitter Channel', 'https://gitter.im/azavea/raster-vision'),\n ProjectLink('Raster Vision Examples', 'https://github.com/azavea/raster-vision-examples'),\n ProjectLink('QGIS Plugin', 'https://github.com/azavea/raster-vision-qgis'),\n ProjectLink('AWS Batch Setup', 'https://github.com/azavea/raster-vision-aws'),\n ProjectLink('Issue Tracker', 'https://github.com/azavea/raster-vision/issues/'),\n ProjectLink('Azavea', 'https://www.azavea.com/'),\n ],\n 'css_files': [\n '_static/rastervision.css',\n 'https://media.readthedocs.org/css/badge_only.css'\n ]\n}\nhtml_sidebars = {\n 'index': ['project.html', 'versions.html', 'searchbox.html'],\n '**': ['project.html', 'localtoc.html', 'relations.html', 'versions.html', 'searchbox.html'],\n}\nsinglehtml_sidebars = {'index': ['project.html', 'versions.html', 'localtoc.html']}\nhtml_static_path = ['_static']\nhtml_favicon = '_static/raster-vision-icon.png'\nhtml_logo = '_static/raster-vision-logo.png'\nhtml_title = 'Raster Vision Documentation ({})'.format(version)\nhtml_show_sourcelink = False\nhtml_domain_indices = False\nhtml_experimental_html5_writer = True\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'RasterVisiondoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'RasterVision.tex', 'Raster Vision Documentation',\n 'Azavea', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'RasterVisoin-{}.tex', html_title,\n [author], 'manual')\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'RasterVision', 'Raster Vision Documentation',\n author, 'RasterVision', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\nprogramoutput_prompt_template = '> {command}\\n{output}'\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/conf.py"}]}
| 2,425 | 147 |
gh_patches_debug_3785
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-744
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Why a print statement in this code?
@kaiyou, according to git blame this is yours. I wonder if it was included for debugging purposes and not taken out anymore?
https://github.com/Mailu/Mailu/blob/5c2439011413a114c470862f95206f8487fa4568/core/admin/mailu/internal/views/dovecot.py#L18
</issue>
<code>
[start of core/admin/mailu/internal/views/dovecot.py]
1 from mailu import models
2 from mailu.internal import internal
3 from flask import current_app as app
4
5 import flask
6 import socket
7 import os
8
9 @internal.route("/dovecot/passdb/<user_email>")
10 def dovecot_passdb_dict(user_email):
11 user = models.User.query.get(user_email) or flask.abort(404)
12 allow_nets = []
13 allow_nets.append(app.config["SUBNET"])
14 if app.config["POD_ADDRESS_RANGE"]:
15 allow_nets.append(app.config["POD_ADDRESS_RANGE"])
16 print(allow_nets)
17 return flask.jsonify({
18 "password": None,
19 "nopassword": "Y",
20 "allow_nets": ",".join(allow_nets)
21 })
22
23
24 @internal.route("/dovecot/userdb/<user_email>")
25 def dovecot_userdb_dict(user_email):
26 user = models.User.query.get(user_email) or flask.abort(404)
27 return flask.jsonify({
28 "quota_rule": "*:bytes={}".format(user.quota_bytes)
29 })
30
31
32 @internal.route("/dovecot/quota/<ns>/<user_email>", methods=["POST"])
33 def dovecot_quota(ns, user_email):
34 user = models.User.query.get(user_email) or flask.abort(404)
35 if ns == "storage":
36 user.quota_bytes_used = flask.request.get_json()
37 models.db.session.commit()
38 return flask.jsonify(None)
39
40
41 @internal.route("/dovecot/sieve/name/<script>/<user_email>")
42 def dovecot_sieve_name(script, user_email):
43 return flask.jsonify(script)
44
45
46 @internal.route("/dovecot/sieve/data/default/<user_email>")
47 def dovecot_sieve_data(user_email):
48 user = models.User.query.get(user_email) or flask.abort(404)
49 return flask.jsonify(flask.render_template("default.sieve", user=user))
50
[end of core/admin/mailu/internal/views/dovecot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py
--- a/core/admin/mailu/internal/views/dovecot.py
+++ b/core/admin/mailu/internal/views/dovecot.py
@@ -13,7 +13,6 @@
allow_nets.append(app.config["SUBNET"])
if app.config["POD_ADDRESS_RANGE"]:
allow_nets.append(app.config["POD_ADDRESS_RANGE"])
- print(allow_nets)
return flask.jsonify({
"password": None,
"nopassword": "Y",
|
{"golden_diff": "diff --git a/core/admin/mailu/internal/views/dovecot.py b/core/admin/mailu/internal/views/dovecot.py\n--- a/core/admin/mailu/internal/views/dovecot.py\n+++ b/core/admin/mailu/internal/views/dovecot.py\n@@ -13,7 +13,6 @@\n allow_nets.append(app.config[\"SUBNET\"])\n if app.config[\"POD_ADDRESS_RANGE\"]:\n allow_nets.append(app.config[\"POD_ADDRESS_RANGE\"])\n- print(allow_nets)\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n", "issue": "Why a print statement in this code?\n@kaiyou, according git blame this is yours. I wonder if it was included for debugging purposes and not taken out anymore?\r\n\r\nhttps://github.com/Mailu/Mailu/blob/5c2439011413a114c470862f95206f8487fa4568/core/admin/mailu/internal/views/dovecot.py#L18\n", "before_files": [{"content": "from mailu import models\nfrom mailu.internal import internal\nfrom flask import current_app as app\n\nimport flask\nimport socket\nimport os\n\[email protected](\"/dovecot/passdb/<user_email>\")\ndef dovecot_passdb_dict(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n allow_nets = []\n allow_nets.append(app.config[\"SUBNET\"])\n if app.config[\"POD_ADDRESS_RANGE\"]:\n allow_nets.append(app.config[\"POD_ADDRESS_RANGE\"])\n print(allow_nets)\n return flask.jsonify({\n \"password\": None,\n \"nopassword\": \"Y\",\n \"allow_nets\": \",\".join(allow_nets)\n })\n\n\[email protected](\"/dovecot/userdb/<user_email>\")\ndef dovecot_userdb_dict(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n return flask.jsonify({\n \"quota_rule\": \"*:bytes={}\".format(user.quota_bytes)\n })\n\n\[email protected](\"/dovecot/quota/<ns>/<user_email>\", methods=[\"POST\"])\ndef dovecot_quota(ns, user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n if ns == \"storage\":\n user.quota_bytes_used = flask.request.get_json()\n models.db.session.commit()\n return flask.jsonify(None)\n\n\[email protected](\"/dovecot/sieve/name/<script>/<user_email>\")\ndef dovecot_sieve_name(script, user_email):\n return flask.jsonify(script)\n\n\[email protected](\"/dovecot/sieve/data/default/<user_email>\")\ndef dovecot_sieve_data(user_email):\n user = models.User.query.get(user_email) or flask.abort(404)\n return flask.jsonify(flask.render_template(\"default.sieve\", user=user))\n", "path": "core/admin/mailu/internal/views/dovecot.py"}]}
| 1,141 | 126 |
gh_patches_debug_10468
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-70
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Breadcrumbs type error
We're having an issue with v0.3.2 where `popleft` is called on a list type:
```
File "/srv/frontend/project/lib/python3.5/site-packages/sentry_sdk/hub.py" in add_breadcrumb
209. scope._breadcrumbs.popleft()
Exception Type: AttributeError at /
Exception Value: 'list' object has no attribute 'popleft'
```
</issue>
<code>
[start of sentry_sdk/hub.py]
1 import sys
2 import copy
3 from datetime import datetime
4 from contextlib import contextmanager
5
6 from sentry_sdk._compat import with_metaclass
7 from sentry_sdk.scope import Scope
8 from sentry_sdk.utils import (
9 exc_info_from_error,
10 event_from_exception,
11 logger,
12 ContextVar,
13 )
14
15
16 _local = ContextVar("sentry_current_hub")
17
18
19 def _get_client_options():
20 hub = Hub.current
21 if hub and hub.client:
22 return hub.client.options
23
24
25 def _should_send_default_pii():
26 client = Hub.current.client
27 if not client:
28 return False
29 return client.options["send_default_pii"]
30
31
32 class HubMeta(type):
33 @property
34 def current(self):
35 """Returns the current instance of the hub."""
36 rv = _local.get(None)
37 if rv is None:
38 rv = Hub(GLOBAL_HUB)
39 _local.set(rv)
40 return rv
41
42 @property
43 def main(self):
44 """Returns the main instance of the hub."""
45 return GLOBAL_HUB
46
47
48 class _HubManager(object):
49 def __init__(self, hub):
50 self._old = Hub.current
51 _local.set(hub)
52
53 def __exit__(self, exc_type, exc_value, tb):
54 _local.set(self._old)
55
56
57 class _ScopeManager(object):
58 def __init__(self, hub, layer):
59 self._hub = hub
60 self._layer = layer
61
62 def __enter__(self):
63 scope = self._layer[1]
64 if scope is None:
65 scope = Scope()
66 return scope
67
68 def __exit__(self, exc_type, exc_value, tb):
69 assert self._hub.pop_scope_unsafe() == self._layer, "popped wrong scope"
70
71
72 class Hub(with_metaclass(HubMeta)):
73 """The hub wraps the concurrency management of the SDK. Each thread has
74 its own hub but the hub might transfer with the flow of execution if
75 context vars are available.
76
77 If the hub is used with a with statement it's temporarily activated.
78 """
79
80 def __init__(self, client_or_hub=None, scope=None):
81 if isinstance(client_or_hub, Hub):
82 hub = client_or_hub
83 client, other_scope = hub._stack[-1]
84 if scope is None:
85 scope = copy.copy(other_scope)
86 else:
87 client = client_or_hub
88 if scope is None:
89 scope = Scope()
90 self._stack = [(client, scope)]
91 self._last_event_id = None
92 self._old_hubs = []
93
94 def __enter__(self):
95 self._old_hubs.append(Hub.current)
96 _local.set(self)
97 return self
98
99 def __exit__(self, exc_type, exc_value, tb):
100 old = self._old_hubs.pop()
101 _local.set(old)
102
103 def run(self, callback):
104 """Runs a callback in the context of the hub. Alternatively the
105 with statement can be used on the hub directly.
106 """
107 with self:
108 return callback()
109
110 @property
111 def client(self):
112 """Returns the current client on the hub."""
113 return self._stack[-1][0]
114
115 def last_event_id(self):
116 """Returns the last event ID."""
117 return self._last_event_id
118
119 def bind_client(self, new):
120 """Binds a new client to the hub."""
121 top = self._stack[-1]
122 self._stack[-1] = (new, top[1])
123
124 def capture_event(self, event, hint=None):
125 """Captures an event. The return value is the ID of the event.
126
127 The event is a dictionary following the Sentry v7/v8 protocol
128 specification. Optionally an event hint dict can be passed that
129 is used by processors to extract additional information from it.
130 Typically the event hint object would contain exception information.
131 """
132 client, scope = self._stack[-1]
133 if client is not None:
134 rv = client.capture_event(event, hint, scope)
135 if rv is not None:
136 self._last_event_id = rv
137 return rv
138
139 def capture_message(self, message, level=None):
140 """Captures a message. The message is just a string. If no level
141 is provided the default level is `info`.
142 """
143 if self.client is None:
144 return
145 if level is None:
146 level = "info"
147 return self.capture_event({"message": message, "level": level})
148
149 def capture_exception(self, error=None):
150 """Captures an exception.
151
152 The argument passed can be `None` in which case the last exception
153 will be reported, otherwise an exception object or an `exc_info`
154 tuple.
155 """
156 client = self.client
157 if client is None:
158 return
159 if error is None:
160 exc_info = sys.exc_info()
161 else:
162 exc_info = exc_info_from_error(error)
163
164 event, hint = event_from_exception(
165 exc_info, with_locals=client.options["with_locals"]
166 )
167 try:
168 return self.capture_event(event, hint=hint)
169 except Exception:
170 self._capture_internal_exception(sys.exc_info())
171
172 def _capture_internal_exception(self, exc_info):
173 """Capture an exception that is likely caused by a bug in the SDK
174 itself."""
175 logger.debug("Internal error in sentry_sdk", exc_info=exc_info)
176
177 def add_breadcrumb(self, crumb=None, hint=None, **kwargs):
178 """Adds a breadcrumb. The breadcrumbs are a dictionary with the
179 data as the sentry v7/v8 protocol expects. `hint` is an optional
180 value that can be used by `before_breadcrumb` to customize the
181 breadcrumbs that are emitted.
182 """
183 client, scope = self._stack[-1]
184 if client is None:
185 logger.info("Dropped breadcrumb because no client bound")
186 return
187
188 crumb = dict(crumb or ())
189 crumb.update(kwargs)
190 if not crumb:
191 return
192
193 hint = dict(hint or ())
194
195 if crumb.get("timestamp") is None:
196 crumb["timestamp"] = datetime.utcnow()
197 if crumb.get("type") is None:
198 crumb["type"] = "default"
199
200 original_crumb = crumb
201 if client.options["before_breadcrumb"] is not None:
202 crumb = client.options["before_breadcrumb"](crumb, hint)
203
204 if crumb is not None:
205 scope._breadcrumbs.append(crumb)
206 else:
207 logger.info("before breadcrumb dropped breadcrumb (%s)", original_crumb)
208 while len(scope._breadcrumbs) >= client.options["max_breadcrumbs"]:
209 scope._breadcrumbs.popleft()
210
211 def push_scope(self, callback=None):
212 """Pushes a new layer on the scope stack. Returns a context manager
213 that should be used to pop the scope again. Alternatively a callback
214 can be provided that is executed in the context of the scope.
215 """
216 client, scope = self._stack[-1]
217 new_layer = (client, copy.copy(scope))
218 self._stack.append(new_layer)
219
220 if callback is not None:
221 if client is not None:
222 callback(scope)
223 else:
224 return _ScopeManager(self, new_layer)
225
226 def pop_scope_unsafe(self):
227 """Pops a scope layer from the stack. Try to use the context manager
228 `push_scope()` instead."""
229 rv = self._stack.pop()
230 assert self._stack
231 return rv
232
233 def configure_scope(self, callback=None):
234 """Reconfigures the scope."""
235 client, scope = self._stack[-1]
236 if callback is not None:
237 if client is not None:
238 callback(scope)
239 return
240
241 @contextmanager
242 def inner():
243 if client is not None:
244 yield scope
245 else:
246 yield Scope()
247
248 return inner()
249
250 def scope(self, callback=None):
251 """Pushes a new scope and yields it for configuration.
252
253 The scope is dropped at the end of the with statement. Alternatively
254 a callback can be provided similar to `configure_scope`.
255 """
256 with self.push_scope():
257 client, scope = self._stack[-1]
258 return self.configure_scope(callback)
259
260
261 GLOBAL_HUB = Hub()
262
[end of sentry_sdk/hub.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/hub.py b/sentry_sdk/hub.py
--- a/sentry_sdk/hub.py
+++ b/sentry_sdk/hub.py
@@ -205,8 +205,8 @@
scope._breadcrumbs.append(crumb)
else:
logger.info("before breadcrumb dropped breadcrumb (%s)", original_crumb)
- while len(scope._breadcrumbs) >= client.options["max_breadcrumbs"]:
- scope._breadcrumbs.popleft()
+ while len(scope._breadcrumbs) > client.options["max_breadcrumbs"]:
+ scope._breadcrumbs.pop(0)
def push_scope(self, callback=None):
"""Pushes a new layer on the scope stack. Returns a context manager
|
{"golden_diff": "diff --git a/sentry_sdk/hub.py b/sentry_sdk/hub.py\n--- a/sentry_sdk/hub.py\n+++ b/sentry_sdk/hub.py\n@@ -205,8 +205,8 @@\n scope._breadcrumbs.append(crumb)\n else:\n logger.info(\"before breadcrumb dropped breadcrumb (%s)\", original_crumb)\n- while len(scope._breadcrumbs) >= client.options[\"max_breadcrumbs\"]:\n- scope._breadcrumbs.popleft()\n+ while len(scope._breadcrumbs) > client.options[\"max_breadcrumbs\"]:\n+ scope._breadcrumbs.pop(0)\n \n def push_scope(self, callback=None):\n \"\"\"Pushes a new layer on the scope stack. Returns a context manager\n", "issue": "Breadcrumbs type error\nWe're having an issue with v0.3.2 where `popleft` is called on a list type:\r\n\r\n```\r\nFile \"/srv/frontend/project/lib/python3.5/site-packages/sentry_sdk/hub.py\" in add_breadcrumb\r\n 209. scope._breadcrumbs.popleft()\r\n\r\nException Type: AttributeError at /\r\nException Value: 'list' object has no attribute 'popleft'\r\n```\n", "before_files": [{"content": "import sys\nimport copy\nfrom datetime import datetime\nfrom contextlib import contextmanager\n\nfrom sentry_sdk._compat import with_metaclass\nfrom sentry_sdk.scope import Scope\nfrom sentry_sdk.utils import (\n exc_info_from_error,\n event_from_exception,\n logger,\n ContextVar,\n)\n\n\n_local = ContextVar(\"sentry_current_hub\")\n\n\ndef _get_client_options():\n hub = Hub.current\n if hub and hub.client:\n return hub.client.options\n\n\ndef _should_send_default_pii():\n client = Hub.current.client\n if not client:\n return False\n return client.options[\"send_default_pii\"]\n\n\nclass HubMeta(type):\n @property\n def current(self):\n \"\"\"Returns the current instance of the hub.\"\"\"\n rv = _local.get(None)\n if rv is None:\n rv = Hub(GLOBAL_HUB)\n _local.set(rv)\n return rv\n\n @property\n def main(self):\n \"\"\"Returns the main instance of the hub.\"\"\"\n return GLOBAL_HUB\n\n\nclass _HubManager(object):\n def __init__(self, hub):\n self._old = Hub.current\n _local.set(hub)\n\n def __exit__(self, exc_type, exc_value, tb):\n _local.set(self._old)\n\n\nclass _ScopeManager(object):\n def __init__(self, hub, layer):\n self._hub = hub\n self._layer = layer\n\n def __enter__(self):\n scope = self._layer[1]\n if scope is None:\n scope = Scope()\n return scope\n\n def __exit__(self, exc_type, exc_value, tb):\n assert self._hub.pop_scope_unsafe() == self._layer, \"popped wrong scope\"\n\n\nclass Hub(with_metaclass(HubMeta)):\n \"\"\"The hub wraps the concurrency management of the SDK. Each thread has\n its own hub but the hub might transfer with the flow of execution if\n context vars are available.\n\n If the hub is used with a with statement it's temporarily activated.\n \"\"\"\n\n def __init__(self, client_or_hub=None, scope=None):\n if isinstance(client_or_hub, Hub):\n hub = client_or_hub\n client, other_scope = hub._stack[-1]\n if scope is None:\n scope = copy.copy(other_scope)\n else:\n client = client_or_hub\n if scope is None:\n scope = Scope()\n self._stack = [(client, scope)]\n self._last_event_id = None\n self._old_hubs = []\n\n def __enter__(self):\n self._old_hubs.append(Hub.current)\n _local.set(self)\n return self\n\n def __exit__(self, exc_type, exc_value, tb):\n old = self._old_hubs.pop()\n _local.set(old)\n\n def run(self, callback):\n \"\"\"Runs a callback in the context of the hub. 
Alternatively the\n with statement can be used on the hub directly.\n \"\"\"\n with self:\n return callback()\n\n @property\n def client(self):\n \"\"\"Returns the current client on the hub.\"\"\"\n return self._stack[-1][0]\n\n def last_event_id(self):\n \"\"\"Returns the last event ID.\"\"\"\n return self._last_event_id\n\n def bind_client(self, new):\n \"\"\"Binds a new client to the hub.\"\"\"\n top = self._stack[-1]\n self._stack[-1] = (new, top[1])\n\n def capture_event(self, event, hint=None):\n \"\"\"Captures an event. The return value is the ID of the event.\n\n The event is a dictionary following the Sentry v7/v8 protocol\n specification. Optionally an event hint dict can be passed that\n is used by processors to extract additional information from it.\n Typically the event hint object would contain exception information.\n \"\"\"\n client, scope = self._stack[-1]\n if client is not None:\n rv = client.capture_event(event, hint, scope)\n if rv is not None:\n self._last_event_id = rv\n return rv\n\n def capture_message(self, message, level=None):\n \"\"\"Captures a message. The message is just a string. If no level\n is provided the default level is `info`.\n \"\"\"\n if self.client is None:\n return\n if level is None:\n level = \"info\"\n return self.capture_event({\"message\": message, \"level\": level})\n\n def capture_exception(self, error=None):\n \"\"\"Captures an exception.\n\n The argument passed can be `None` in which case the last exception\n will be reported, otherwise an exception object or an `exc_info`\n tuple.\n \"\"\"\n client = self.client\n if client is None:\n return\n if error is None:\n exc_info = sys.exc_info()\n else:\n exc_info = exc_info_from_error(error)\n\n event, hint = event_from_exception(\n exc_info, with_locals=client.options[\"with_locals\"]\n )\n try:\n return self.capture_event(event, hint=hint)\n except Exception:\n self._capture_internal_exception(sys.exc_info())\n\n def _capture_internal_exception(self, exc_info):\n \"\"\"Capture an exception that is likely caused by a bug in the SDK\n itself.\"\"\"\n logger.debug(\"Internal error in sentry_sdk\", exc_info=exc_info)\n\n def add_breadcrumb(self, crumb=None, hint=None, **kwargs):\n \"\"\"Adds a breadcrumb. The breadcrumbs are a dictionary with the\n data as the sentry v7/v8 protocol expects. `hint` is an optional\n value that can be used by `before_breadcrumb` to customize the\n breadcrumbs that are emitted.\n \"\"\"\n client, scope = self._stack[-1]\n if client is None:\n logger.info(\"Dropped breadcrumb because no client bound\")\n return\n\n crumb = dict(crumb or ())\n crumb.update(kwargs)\n if not crumb:\n return\n\n hint = dict(hint or ())\n\n if crumb.get(\"timestamp\") is None:\n crumb[\"timestamp\"] = datetime.utcnow()\n if crumb.get(\"type\") is None:\n crumb[\"type\"] = \"default\"\n\n original_crumb = crumb\n if client.options[\"before_breadcrumb\"] is not None:\n crumb = client.options[\"before_breadcrumb\"](crumb, hint)\n\n if crumb is not None:\n scope._breadcrumbs.append(crumb)\n else:\n logger.info(\"before breadcrumb dropped breadcrumb (%s)\", original_crumb)\n while len(scope._breadcrumbs) >= client.options[\"max_breadcrumbs\"]:\n scope._breadcrumbs.popleft()\n\n def push_scope(self, callback=None):\n \"\"\"Pushes a new layer on the scope stack. Returns a context manager\n that should be used to pop the scope again. 
Alternatively a callback\n can be provided that is executed in the context of the scope.\n \"\"\"\n client, scope = self._stack[-1]\n new_layer = (client, copy.copy(scope))\n self._stack.append(new_layer)\n\n if callback is not None:\n if client is not None:\n callback(scope)\n else:\n return _ScopeManager(self, new_layer)\n\n def pop_scope_unsafe(self):\n \"\"\"Pops a scope layer from the stack. Try to use the context manager\n `push_scope()` instead.\"\"\"\n rv = self._stack.pop()\n assert self._stack\n return rv\n\n def configure_scope(self, callback=None):\n \"\"\"Reconfigures the scope.\"\"\"\n client, scope = self._stack[-1]\n if callback is not None:\n if client is not None:\n callback(scope)\n return\n\n @contextmanager\n def inner():\n if client is not None:\n yield scope\n else:\n yield Scope()\n\n return inner()\n\n def scope(self, callback=None):\n \"\"\"Pushes a new scope and yields it for configuration.\n\n The scope is dropped at the end of the with statement. Alternatively\n a callback can be provided similar to `configure_scope`.\n \"\"\"\n with self.push_scope():\n client, scope = self._stack[-1]\n return self.configure_scope(callback)\n\n\nGLOBAL_HUB = Hub()\n", "path": "sentry_sdk/hub.py"}]}
| 3,103 | 152 |
gh_patches_debug_28595
|
rasdani/github-patches
|
git_diff
|
liqd__a4-opin-1900
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Offline events and more info of private projects visible even if not logged in
If I type in or follow a link to the URL of an offline event, I can see the event’s content and the info tab’s content of a private project.
For example, if you take this URL:
https://opin-stage.liqd.net/de/offlineevents/53/
you can access the information even if you are not logged in with an account.
</issue>
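A hedged sketch of the kind of view permission the report suggests is missing (an assumption for illustration, not necessarily the project's eventual fix), following the same `django-rules` pattern used in the `euth/offlinephases/rules.py` listing below:

```python
# Sketch only: declare a separate "view" permission so private-project
# offline events are not world-readable. The predicate choice is illustrative.
import rules
from rules.predicates import is_authenticated, is_superuser

rules.add_perm(
    'euth_offlinephases.view_offlineevent',
    is_authenticated | is_superuser)
```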
<code>
[start of euth/offlinephases/rules.py]
1 import rules
2 from rules.predicates import is_superuser
3
4 from .predicates import is_offlinephase_moderator
5
6 rules.add_perm(
7 'euth_offlinephases.modify_offlinephase',
8 is_offlinephase_moderator | is_superuser)
9
[end of euth/offlinephases/rules.py]
[start of euth/offlinephases/views.py]
1 from django.contrib import messages
2 from django.db import transaction
3 from django.shortcuts import redirect, render
4 from django.urls import reverse
5 from django.utils.translation import ugettext_lazy as _
6 from django.views import generic
7
8 from adhocracy4.dashboard import mixins
9 from adhocracy4.projects.mixins import ProjectMixin
10
11 from . import forms, models
12 from .mixins import OfflineEventFormMixin
13
14
15 class OfflineEventDetailView(
16 generic.DetailView
17 ):
18 model = models.OfflineEvent
19
20 @property
21 def project(self):
22 return self.object.project
23
24
25 class OfflineEventListView(ProjectMixin,
26 mixins.DashboardBaseMixin,
27 mixins.DashboardComponentMixin,
28 generic.ListView):
29
30 model = models.OfflineEvent
31 template_name = 'euth_offlinephases/offlineevent_list.html'
32 permission_required = 'a4projects.change_project'
33
34 def get_queryset(self):
35 return super().get_queryset().filter(project=self.project)
36
37 def get_permission_object(self):
38 return self.project
39
40
41 class OfflineEventCreateView(
42 ProjectMixin,
43 mixins.DashboardBaseMixin,
44 mixins.DashboardComponentMixin,
45 generic.TemplateView,
46 OfflineEventFormMixin
47 ):
48 template_name = 'euth_offlinephases/offlineevent_form.html'
49 permission_required = 'a4projects.change_project'
50 project_url_kwarg = 'project_slug'
51
52 def get_permission_object(self):
53 return self.project
54
55 def get_success_url(self):
56 return reverse(
57 'a4dashboard:offlineevent-list',
58 kwargs={'project_slug': self.project.slug})
59
60 def get_context_data(self, form=None, upload_forms=None, **kwargs):
61 context = super().get_context_data(**kwargs)
62 if not form:
63 form = forms.OfflineEventForm()
64 if not upload_forms:
65 upload_forms = self.empty_upload_formset()
66 context['form'] = form
67 context['upload_forms'] = upload_forms
68 return context
69
70 def _process_formdata(self, form, upload_forms):
71 form.instance.project = self.project
72 with transaction.atomic():
73 object = form.save()
74 intstances = upload_forms.save(commit=False)
75 for instance in intstances:
76 instance.offlineevent = object
77 instance.save()
78
79 def post(self, request, *args, **kwargs):
80 form = forms.OfflineEventForm(request.POST)
81 upload_forms = self.filled_upload_formset(request)
82 if form.is_valid() and upload_forms.is_valid():
83 self._process_formdata(form, upload_forms)
84 messages.add_message(request,
85 messages.SUCCESS,
86 _('Offline events '
87 'have been updated'))
88 response = redirect(self.get_success_url())
89 else:
90 response = render(request,
91 self.template_name,
92 self.get_context_data(form=form,
93 upload_forms=upload_forms))
94 return response
95
96
97 class OfflineEventUpdateView(ProjectMixin,
98 mixins.DashboardBaseMixin,
99 mixins.DashboardComponentMixin,
100 generic.detail.SingleObjectMixin,
101 generic.TemplateView,
102 OfflineEventFormMixin):
103
104 model = models.OfflineEvent
105 permission_required = 'a4projects.change_project'
106 template_name = 'euth_offlinephases/offlineevent_form.html'
107 get_context_from_object = True
108
109 def dispatch(self, *args, **kwargs):
110 self.object = self.get_object()
111 return super().dispatch(*args, **kwargs)
112
113 def get_context_data(self, form=None, upload_forms=None, **kwargs):
114 context = super().get_context_data(**kwargs)
115 if not form:
116 form = forms.OfflineEventForm(instance=self.get_object())
117 if not upload_forms:
118 queryset = \
119 models.OfflineEventFileUpload\
120 .objects.filter(offlineevent=self.get_object())
121 upload_forms = self.update_upload_formset(queryset)
122 context['form'] = form
123 context['upload_forms'] = upload_forms
124 return context
125
126 def get_success_url(self):
127 return reverse(
128 'a4dashboard:offlineevent-list',
129 kwargs={'project_slug': self.project.slug})
130
131 def get_permission_object(self):
132 return self.project
133
134 def _process_formdata(self, form, upload_forms):
135 with transaction.atomic():
136 form.save()
137 intstances = upload_forms.save(commit=False)
138 for obj in upload_forms.deleted_objects:
139 obj.delete()
140 for instance in intstances:
141 instance.offlineevent = self.object
142 instance.save()
143
144 def post(self, request, *args, **kwargs):
145 upload_forms = self.filled_upload_formset(request)
146 form = forms.OfflineEventForm(request.POST, instance=self.object)
147 if upload_forms.is_valid() and form.is_valid():
148 self._process_formdata(form, upload_forms)
149 messages.add_message(request,
150 messages.SUCCESS,
151 _('Offline events '
152 'have been updated'))
153 response = redirect(self.get_success_url())
154 else:
155 response = render(request,
156 self.template_name,
157 self.get_context_data(
158 form=form, upload_forms=upload_forms))
159 return response
160
161
162 class OfflineEventDeleteView(ProjectMixin,
163 mixins.DashboardBaseMixin,
164 mixins.DashboardComponentMixin,
165 mixins.DashboardComponentDeleteSignalMixin,
166 generic.DeleteView):
167 model = models.OfflineEvent
168 success_message = _('The offline event has been deleted')
169 permission_required = ''
170 template_name = 'euth_offlinephases/offlineevent_confirm_delete.html'
171 get_context_from_object = True
172
173 def delete(self, request, *args, **kwargs):
174 messages.success(self.request, self.success_message)
175 return super().delete(request, *args, **kwargs)
176
177 def get_success_url(self):
178 return reverse(
179 'a4dashboard:offlineevent-list',
180 kwargs={'project_slug': self.project.slug})
181
182 @property
183 def organisation(self):
184 return self.project.organisation
185
186 def get_permission_object(self):
187 return self.project
188
[end of euth/offlinephases/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/euth/offlinephases/rules.py b/euth/offlinephases/rules.py
--- a/euth/offlinephases/rules.py
+++ b/euth/offlinephases/rules.py
@@ -1,8 +1,17 @@
import rules
from rules.predicates import is_superuser
+from adhocracy4.modules.predicates import (is_context_initiator,
+ is_context_member,
+ is_context_moderator,
+ is_public_context)
+
from .predicates import is_offlinephase_moderator
rules.add_perm(
'euth_offlinephases.modify_offlinephase',
is_offlinephase_moderator | is_superuser)
+
+rules.add_perm('euth_offlinephases.view_offlineevent',
+ is_superuser | is_context_moderator | is_context_initiator |
+ is_context_member | is_public_context)
diff --git a/euth/offlinephases/views.py b/euth/offlinephases/views.py
--- a/euth/offlinephases/views.py
+++ b/euth/offlinephases/views.py
@@ -4,6 +4,7 @@
from django.urls import reverse
from django.utils.translation import ugettext_lazy as _
from django.views import generic
+from rules.contrib.views import PermissionRequiredMixin
from adhocracy4.dashboard import mixins
from adhocracy4.projects.mixins import ProjectMixin
@@ -12,10 +13,10 @@
from .mixins import OfflineEventFormMixin
-class OfflineEventDetailView(
- generic.DetailView
-):
+class OfflineEventDetailView(PermissionRequiredMixin,
+ generic.DetailView):
model = models.OfflineEvent
+ permission_required = 'euth_offlinephases.view_offlineevent'
@property
def project(self):
|
{"golden_diff": "diff --git a/euth/offlinephases/rules.py b/euth/offlinephases/rules.py\n--- a/euth/offlinephases/rules.py\n+++ b/euth/offlinephases/rules.py\n@@ -1,8 +1,17 @@\n import rules\n from rules.predicates import is_superuser\n \n+from adhocracy4.modules.predicates import (is_context_initiator,\n+ is_context_member,\n+ is_context_moderator,\n+ is_public_context)\n+\n from .predicates import is_offlinephase_moderator\n \n rules.add_perm(\n 'euth_offlinephases.modify_offlinephase',\n is_offlinephase_moderator | is_superuser)\n+\n+rules.add_perm('euth_offlinephases.view_offlineevent',\n+ is_superuser | is_context_moderator | is_context_initiator |\n+ is_context_member | is_public_context)\ndiff --git a/euth/offlinephases/views.py b/euth/offlinephases/views.py\n--- a/euth/offlinephases/views.py\n+++ b/euth/offlinephases/views.py\n@@ -4,6 +4,7 @@\n from django.urls import reverse\n from django.utils.translation import ugettext_lazy as _\n from django.views import generic\n+from rules.contrib.views import PermissionRequiredMixin\n \n from adhocracy4.dashboard import mixins\n from adhocracy4.projects.mixins import ProjectMixin\n@@ -12,10 +13,10 @@\n from .mixins import OfflineEventFormMixin\n \n \n-class OfflineEventDetailView(\n- generic.DetailView\n-):\n+class OfflineEventDetailView(PermissionRequiredMixin,\n+ generic.DetailView):\n model = models.OfflineEvent\n+ permission_required = 'euth_offlinephases.view_offlineevent'\n \n @property\n def project(self):\n", "issue": "Offline events an more info of private projects visible even if not logged in\nIf I type in/ link to the URL of an offline event, I can see the event\u2019s content and the info tab\u2019s content of a private project.\r\n\r\nFor example, if you take this URL:\r\nhttps://opin-stage.liqd.net/de/offlineevents/53/\r\n\r\nyou can access the information even if you are not logged in with an account.\n", "before_files": [{"content": "import rules\nfrom rules.predicates import is_superuser\n\nfrom .predicates import is_offlinephase_moderator\n\nrules.add_perm(\n 'euth_offlinephases.modify_offlinephase',\n is_offlinephase_moderator | is_superuser)\n", "path": "euth/offlinephases/rules.py"}, {"content": "from django.contrib import messages\nfrom django.db import transaction\nfrom django.shortcuts import redirect, render\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views import generic\n\nfrom adhocracy4.dashboard import mixins\nfrom adhocracy4.projects.mixins import ProjectMixin\n\nfrom . 
import forms, models\nfrom .mixins import OfflineEventFormMixin\n\n\nclass OfflineEventDetailView(\n generic.DetailView\n):\n model = models.OfflineEvent\n\n @property\n def project(self):\n return self.object.project\n\n\nclass OfflineEventListView(ProjectMixin,\n mixins.DashboardBaseMixin,\n mixins.DashboardComponentMixin,\n generic.ListView):\n\n model = models.OfflineEvent\n template_name = 'euth_offlinephases/offlineevent_list.html'\n permission_required = 'a4projects.change_project'\n\n def get_queryset(self):\n return super().get_queryset().filter(project=self.project)\n\n def get_permission_object(self):\n return self.project\n\n\nclass OfflineEventCreateView(\n ProjectMixin,\n mixins.DashboardBaseMixin,\n mixins.DashboardComponentMixin,\n generic.TemplateView,\n OfflineEventFormMixin\n):\n template_name = 'euth_offlinephases/offlineevent_form.html'\n permission_required = 'a4projects.change_project'\n project_url_kwarg = 'project_slug'\n\n def get_permission_object(self):\n return self.project\n\n def get_success_url(self):\n return reverse(\n 'a4dashboard:offlineevent-list',\n kwargs={'project_slug': self.project.slug})\n\n def get_context_data(self, form=None, upload_forms=None, **kwargs):\n context = super().get_context_data(**kwargs)\n if not form:\n form = forms.OfflineEventForm()\n if not upload_forms:\n upload_forms = self.empty_upload_formset()\n context['form'] = form\n context['upload_forms'] = upload_forms\n return context\n\n def _process_formdata(self, form, upload_forms):\n form.instance.project = self.project\n with transaction.atomic():\n object = form.save()\n intstances = upload_forms.save(commit=False)\n for instance in intstances:\n instance.offlineevent = object\n instance.save()\n\n def post(self, request, *args, **kwargs):\n form = forms.OfflineEventForm(request.POST)\n upload_forms = self.filled_upload_formset(request)\n if form.is_valid() and upload_forms.is_valid():\n self._process_formdata(form, upload_forms)\n messages.add_message(request,\n messages.SUCCESS,\n _('Offline events '\n 'have been updated'))\n response = redirect(self.get_success_url())\n else:\n response = render(request,\n self.template_name,\n self.get_context_data(form=form,\n upload_forms=upload_forms))\n return response\n\n\nclass OfflineEventUpdateView(ProjectMixin,\n mixins.DashboardBaseMixin,\n mixins.DashboardComponentMixin,\n generic.detail.SingleObjectMixin,\n generic.TemplateView,\n OfflineEventFormMixin):\n\n model = models.OfflineEvent\n permission_required = 'a4projects.change_project'\n template_name = 'euth_offlinephases/offlineevent_form.html'\n get_context_from_object = True\n\n def dispatch(self, *args, **kwargs):\n self.object = self.get_object()\n return super().dispatch(*args, **kwargs)\n\n def get_context_data(self, form=None, upload_forms=None, **kwargs):\n context = super().get_context_data(**kwargs)\n if not form:\n form = forms.OfflineEventForm(instance=self.get_object())\n if not upload_forms:\n queryset = \\\n models.OfflineEventFileUpload\\\n .objects.filter(offlineevent=self.get_object())\n upload_forms = self.update_upload_formset(queryset)\n context['form'] = form\n context['upload_forms'] = upload_forms\n return context\n\n def get_success_url(self):\n return reverse(\n 'a4dashboard:offlineevent-list',\n kwargs={'project_slug': self.project.slug})\n\n def get_permission_object(self):\n return self.project\n\n def _process_formdata(self, form, upload_forms):\n with transaction.atomic():\n form.save()\n intstances = upload_forms.save(commit=False)\n for obj 
in upload_forms.deleted_objects:\n obj.delete()\n for instance in intstances:\n instance.offlineevent = self.object\n instance.save()\n\n def post(self, request, *args, **kwargs):\n upload_forms = self.filled_upload_formset(request)\n form = forms.OfflineEventForm(request.POST, instance=self.object)\n if upload_forms.is_valid() and form.is_valid():\n self._process_formdata(form, upload_forms)\n messages.add_message(request,\n messages.SUCCESS,\n _('Offline events '\n 'have been updated'))\n response = redirect(self.get_success_url())\n else:\n response = render(request,\n self.template_name,\n self.get_context_data(\n form=form, upload_forms=upload_forms))\n return response\n\n\nclass OfflineEventDeleteView(ProjectMixin,\n mixins.DashboardBaseMixin,\n mixins.DashboardComponentMixin,\n mixins.DashboardComponentDeleteSignalMixin,\n generic.DeleteView):\n model = models.OfflineEvent\n success_message = _('The offline event has been deleted')\n permission_required = ''\n template_name = 'euth_offlinephases/offlineevent_confirm_delete.html'\n get_context_from_object = True\n\n def delete(self, request, *args, **kwargs):\n messages.success(self.request, self.success_message)\n return super().delete(request, *args, **kwargs)\n\n def get_success_url(self):\n return reverse(\n 'a4dashboard:offlineevent-list',\n kwargs={'project_slug': self.project.slug})\n\n @property\n def organisation(self):\n return self.project.organisation\n\n def get_permission_object(self):\n return self.project\n", "path": "euth/offlinephases/views.py"}]}
| 2,425 | 382 |
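The golden diff above gates OfflineEventDetailView behind a new 'euth_offlinephases.view_offlineevent' permission backed by django-rules predicates. A sketch of how that object-level check is expected to behave once the patch is applied — assuming a configured Django project with the rules authentication backend enabled — might be:

def can_view_offline_event(user, event):
    # with the patch applied, this delegates to the registered predicates
    # (is_superuser | is_context_moderator | is_context_initiator |
    #  is_context_member | is_public_context); the event is passed as the
    # permission object so the check can reach its project
    return user.has_perm("euth_offlinephases.view_offlineevent", event)

# expected behaviour: False for an anonymous visitor when the event belongs to
# a private project; True for members, moderators, initiators and superusers.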
gh_patches_debug_23963
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-3182
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve visualization tutorial
<!-- Please write a clear and concise description of what content in https://optuna.readthedocs.io/ is an issue. -->
I suggest updating the [visualization tutorial](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/005_visualization.html) as follows
- Add missing [`visualization.plot_pareto_front`](https://optuna.readthedocs.io/en/stable/reference/visualization/generated/optuna.visualization.plot_pareto_front.html#optuna.visualization.plot_pareto_front) example; since this function needs multi-objective function unlike other visualization examples, we might need to define such an objective function after the other examples. If adding such an example is not appropriate, at least we need to mention the existence of `visualization.plot_pareto_front`.
- Mention the availability of matplotlib version in the first paragraph.
</issue>
<code>
[start of tutorial/10_key_features/005_visualization.py]
1 """
2 .. _visualization:
3
4 Quick Visualization for Hyperparameter Optimization Analysis
5 ============================================================
6
7 Optuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.
8
9 This tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.
10 """
11
12 ###################################################################################################
13 import lightgbm as lgb
14 import numpy as np
15 import sklearn.datasets
16 import sklearn.metrics
17 from sklearn.model_selection import train_test_split
18
19 import optuna
20 from optuna.visualization import plot_contour
21 from optuna.visualization import plot_edf
22 from optuna.visualization import plot_intermediate_values
23 from optuna.visualization import plot_optimization_history
24 from optuna.visualization import plot_parallel_coordinate
25 from optuna.visualization import plot_param_importances
26 from optuna.visualization import plot_slice
27
28 SEED = 42
29
30 np.random.seed(SEED)
31
32
33 ###################################################################################################
34 # Define the objective function.
35 def objective(trial):
36 data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
37 train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
38 dtrain = lgb.Dataset(train_x, label=train_y)
39 dvalid = lgb.Dataset(valid_x, label=valid_y)
40
41 param = {
42 "objective": "binary",
43 "metric": "auc",
44 "verbosity": -1,
45 "boosting_type": "gbdt",
46 "bagging_fraction": trial.suggest_float("bagging_fraction", 0.4, 1.0),
47 "bagging_freq": trial.suggest_int("bagging_freq", 1, 7),
48 "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
49 }
50
51 # Add a callback for pruning.
52 pruning_callback = optuna.integration.LightGBMPruningCallback(trial, "auc")
53 gbm = lgb.train(
54 param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]
55 )
56
57 preds = gbm.predict(valid_x)
58 pred_labels = np.rint(preds)
59 accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)
60 return accuracy
61
62
63 ###################################################################################################
64 study = optuna.create_study(
65 direction="maximize",
66 sampler=optuna.samplers.TPESampler(seed=SEED),
67 pruner=optuna.pruners.MedianPruner(n_warmup_steps=10),
68 )
69 study.optimize(objective, n_trials=100, timeout=600)
70
71 ###################################################################################################
72 # Plot functions
73 # --------------
74 # Visualize the optimization history. See :func:`~optuna.visualization.plot_optimization_history` for the details.
75 plot_optimization_history(study)
76
77 ###################################################################################################
78 # Visualize the learning curves of the trials. See :func:`~optuna.visualization.plot_intermediate_values` for the details.
79 plot_intermediate_values(study)
80
81 ###################################################################################################
82 # Visualize high-dimensional parameter relationships. See :func:`~optuna.visualization.plot_parallel_coordinate` for the details.
83 plot_parallel_coordinate(study)
84
85 ###################################################################################################
86 # Select parameters to visualize.
87 plot_parallel_coordinate(study, params=["bagging_freq", "bagging_fraction"])
88
89 ###################################################################################################
90 # Visualize hyperparameter relationships. See :func:`~optuna.visualization.plot_contour` for the details.
91 plot_contour(study)
92
93 ###################################################################################################
94 # Select parameters to visualize.
95 plot_contour(study, params=["bagging_freq", "bagging_fraction"])
96
97 ###################################################################################################
98 # Visualize individual hyperparameters as slice plot. See :func:`~optuna.visualization.plot_slice` for the details.
99 plot_slice(study)
100
101 ###################################################################################################
102 # Select parameters to visualize.
103 plot_slice(study, params=["bagging_freq", "bagging_fraction"])
104
105 ###################################################################################################
106 # Visualize parameter importances. See :func:`~optuna.visualization.plot_param_importances` for the details.
107 plot_param_importances(study)
108
109 ###################################################################################################
110 # Learn which hyperparameters are affecting the trial duration with hyperparameter importance.
111 optuna.visualization.plot_param_importances(
112 study, target=lambda t: t.duration.total_seconds(), target_name="duration"
113 )
114
115 ###################################################################################################
116 # Visualize empirical distribution function. See :func:`~optuna.visualization.plot_edf` for the details.
117 plot_edf(study)
118
[end of tutorial/10_key_features/005_visualization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tutorial/10_key_features/005_visualization.py b/tutorial/10_key_features/005_visualization.py
--- a/tutorial/10_key_features/005_visualization.py
+++ b/tutorial/10_key_features/005_visualization.py
@@ -7,6 +7,9 @@
Optuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.
This tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.
+
+For visualizing multi-objective optimization (i.e., the usage of :func:`optuna.visualization.plot_pareto_front`),
+please refer to the tutorial of :ref:`multi_objective`.
"""
###################################################################################################
@@ -17,6 +20,9 @@
from sklearn.model_selection import train_test_split
import optuna
+
+# You can use Matplotlib instead of Plotly for visualization by simply replacing `optuna.visualization` with
+# `optuna.visualization.matplotlib` in the following examples.
from optuna.visualization import plot_contour
from optuna.visualization import plot_edf
from optuna.visualization import plot_intermediate_values
|
{"golden_diff": "diff --git a/tutorial/10_key_features/005_visualization.py b/tutorial/10_key_features/005_visualization.py\n--- a/tutorial/10_key_features/005_visualization.py\n+++ b/tutorial/10_key_features/005_visualization.py\n@@ -7,6 +7,9 @@\n Optuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.\n \n This tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.\n+\n+For visualizing multi-objective optimization (i.e., the usage of :func:`optuna.visualization.plot_pareto_front`),\n+please refer to the tutorial of :ref:`multi_objective`.\n \"\"\"\n \n ###################################################################################################\n@@ -17,6 +20,9 @@\n from sklearn.model_selection import train_test_split\n \n import optuna\n+\n+# You can use Matplotlib instead of Plotly for visualization by simply replacing `optuna.visualization` with\n+# `optuna.visualization.matplotlib` in the following examples.\n from optuna.visualization import plot_contour\n from optuna.visualization import plot_edf\n from optuna.visualization import plot_intermediate_values\n", "issue": "Improve visualization tutorial\n<!-- Please write a clear and concise description of what content in https://optuna.readthedocs.io/ is an issue. -->\r\n\r\nI suggest updating the [visualization tutorial](https://optuna.readthedocs.io/en/stable/tutorial/10_key_features/005_visualization.html) as follows\r\n\r\n- Add missing [`visualization.plot_pareto_front`](https://optuna.readthedocs.io/en/stable/reference/visualization/generated/optuna.visualization.plot_pareto_front.html#optuna.visualization.plot_pareto_front) example; since this function needs multi-objective function unlike other visualization examples, we might need to define such an objective function after the other examples. If adding such an example is not appropriate, at least we need to mention the existence of `visualization.plot_pareto_front`.\r\n- Mention the availability of matplotlib version in the first paragraph. \r\n\n", "before_files": [{"content": "\"\"\"\n.. 
_visualization:\n\nQuick Visualization for Hyperparameter Optimization Analysis\n============================================================\n\nOptuna provides various visualization features in :mod:`optuna.visualization` to analyze optimization results visually.\n\nThis tutorial walks you through this module by visualizing the history of lightgbm model for breast cancer dataset.\n\"\"\"\n\n###################################################################################################\nimport lightgbm as lgb\nimport numpy as np\nimport sklearn.datasets\nimport sklearn.metrics\nfrom sklearn.model_selection import train_test_split\n\nimport optuna\nfrom optuna.visualization import plot_contour\nfrom optuna.visualization import plot_edf\nfrom optuna.visualization import plot_intermediate_values\nfrom optuna.visualization import plot_optimization_history\nfrom optuna.visualization import plot_parallel_coordinate\nfrom optuna.visualization import plot_param_importances\nfrom optuna.visualization import plot_slice\n\nSEED = 42\n\nnp.random.seed(SEED)\n\n\n###################################################################################################\n# Define the objective function.\ndef objective(trial):\n data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)\n train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)\n dtrain = lgb.Dataset(train_x, label=train_y)\n dvalid = lgb.Dataset(valid_x, label=valid_y)\n\n param = {\n \"objective\": \"binary\",\n \"metric\": \"auc\",\n \"verbosity\": -1,\n \"boosting_type\": \"gbdt\",\n \"bagging_fraction\": trial.suggest_float(\"bagging_fraction\", 0.4, 1.0),\n \"bagging_freq\": trial.suggest_int(\"bagging_freq\", 1, 7),\n \"min_child_samples\": trial.suggest_int(\"min_child_samples\", 5, 100),\n }\n\n # Add a callback for pruning.\n pruning_callback = optuna.integration.LightGBMPruningCallback(trial, \"auc\")\n gbm = lgb.train(\n param, dtrain, valid_sets=[dvalid], verbose_eval=False, callbacks=[pruning_callback]\n )\n\n preds = gbm.predict(valid_x)\n pred_labels = np.rint(preds)\n accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)\n return accuracy\n\n\n###################################################################################################\nstudy = optuna.create_study(\n direction=\"maximize\",\n sampler=optuna.samplers.TPESampler(seed=SEED),\n pruner=optuna.pruners.MedianPruner(n_warmup_steps=10),\n)\nstudy.optimize(objective, n_trials=100, timeout=600)\n\n###################################################################################################\n# Plot functions\n# --------------\n# Visualize the optimization history. See :func:`~optuna.visualization.plot_optimization_history` for the details.\nplot_optimization_history(study)\n\n###################################################################################################\n# Visualize the learning curves of the trials. See :func:`~optuna.visualization.plot_intermediate_values` for the details.\nplot_intermediate_values(study)\n\n###################################################################################################\n# Visualize high-dimensional parameter relationships. 
See :func:`~optuna.visualization.plot_parallel_coordinate` for the details.\nplot_parallel_coordinate(study)\n\n###################################################################################################\n# Select parameters to visualize.\nplot_parallel_coordinate(study, params=[\"bagging_freq\", \"bagging_fraction\"])\n\n###################################################################################################\n# Visualize hyperparameter relationships. See :func:`~optuna.visualization.plot_contour` for the details.\nplot_contour(study)\n\n###################################################################################################\n# Select parameters to visualize.\nplot_contour(study, params=[\"bagging_freq\", \"bagging_fraction\"])\n\n###################################################################################################\n# Visualize individual hyperparameters as slice plot. See :func:`~optuna.visualization.plot_slice` for the details.\nplot_slice(study)\n\n###################################################################################################\n# Select parameters to visualize.\nplot_slice(study, params=[\"bagging_freq\", \"bagging_fraction\"])\n\n###################################################################################################\n# Visualize parameter importances. See :func:`~optuna.visualization.plot_param_importances` for the details.\nplot_param_importances(study)\n\n###################################################################################################\n# Learn which hyperparameters are affecting the trial duration with hyperparameter importance.\noptuna.visualization.plot_param_importances(\n study, target=lambda t: t.duration.total_seconds(), target_name=\"duration\"\n)\n\n###################################################################################################\n# Visualize empirical distribution function. See :func:`~optuna.visualization.plot_edf` for the details.\nplot_edf(study)\n", "path": "tutorial/10_key_features/005_visualization.py"}]}
| 1,897 | 253 |
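The issue above asks for a plot_pareto_front example; since that function needs a multi-objective study, the kind of snippet the tutorial could add — illustrative only, with an arbitrary two-objective toy function rather than anything from the existing tutorial — might look like:

import optuna
from optuna.visualization import plot_pareto_front

def objective(trial):
    x = trial.suggest_float("x", 0.0, 5.0)
    y = trial.suggest_float("y", 0.0, 3.0)
    # two objectives to minimize, so the study has a Pareto front to plot
    return x ** 2 + y, (x - 2) ** 2 + (y - 1) ** 2

study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)
plot_pareto_front(study)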
gh_patches_debug_17011
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-1778
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add explicit support for Python 3.9
Start testing on Python 3.9, and add testing on Python 3.8 now that there is finally support for it.
</issue>
<code>
[start of noxfile.py]
1 import os
2 import shutil
3
4 import nox
5
6
7 def tests_impl(session, extras="socks,secure,brotli"):
8 # Install deps and the package itself.
9 session.install("-r", "dev-requirements.txt")
10 session.install(".[{extras}]".format(extras=extras))
11
12 # Show the pip version.
13 session.run("pip", "--version")
14 # Print the Python version and bytesize.
15 session.run("python", "--version")
16 session.run("python", "-c", "import struct; print(struct.calcsize('P') * 8)")
17 # Print OpenSSL information.
18 session.run("python", "-m", "OpenSSL.debug")
19
20 # Inspired from https://github.com/pyca/cryptography
21 # We use parallel mode and then combine here so that coverage.py will take
22 # the paths like .tox/pyXY/lib/pythonX.Y/site-packages/urllib3/__init__.py
23 # and collapse them into src/urllib3/__init__.py.
24
25 session.run(
26 "coverage",
27 "run",
28 "--parallel-mode",
29 "-m",
30 "pytest",
31 "-r",
32 "a",
33 "--tb=native",
34 "--no-success-flaky-report",
35 *(session.posargs or ("test/",)),
36 env={"PYTHONWARNINGS": "always::DeprecationWarning"}
37 )
38 session.run("coverage", "combine")
39 session.run("coverage", "report", "-m")
40
41
42 @nox.session(python=["2.7", "3.5", "3.6", "3.7", "3.8", "pypy"])
43 def test(session):
44 tests_impl(session)
45
46
47 @nox.session(python=["2", "3"])
48 def google_brotli(session):
49 # https://pypi.org/project/Brotli/ is the Google version of brotli, so
50 # install it separately and don't install our brotli extra (which installs
51 # brotlipy).
52 session.install("brotli")
53 tests_impl(session, extras="socks,secure")
54
55
56 @nox.session(python="2.7")
57 def app_engine(session):
58 session.install("-r", "dev-requirements.txt")
59 session.install(".")
60 session.run(
61 "coverage",
62 "run",
63 "--parallel-mode",
64 "-m",
65 "pytest",
66 "-r",
67 "sx",
68 "test/appengine",
69 *session.posargs
70 )
71 session.run("coverage", "combine")
72 session.run("coverage", "report", "-m")
73
74
75 @nox.session()
76 def blacken(session):
77 """Run black code formatter."""
78 session.install("black")
79 session.run("black", "src", "dummyserver", "test", "noxfile.py", "setup.py")
80
81 lint(session)
82
83
84 @nox.session
85 def lint(session):
86 session.install("flake8", "black")
87 session.run("flake8", "--version")
88 session.run("black", "--version")
89 session.run(
90 "black", "--check", "src", "dummyserver", "test", "noxfile.py", "setup.py"
91 )
92 session.run("flake8", "setup.py", "docs", "dummyserver", "src", "test")
93
94
95 @nox.session
96 def docs(session):
97 session.install("-r", "docs/requirements.txt")
98 session.install(".[socks,secure,brotli]")
99
100 session.chdir("docs")
101 if os.path.exists("_build"):
102 shutil.rmtree("_build")
103 session.run("sphinx-build", "-W", ".", "_build/html")
104
[end of noxfile.py]
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup
4
5 import os
6 import re
7 import codecs
8
9 base_path = os.path.dirname(__file__)
10
11 # Get the version (borrowed from SQLAlchemy)
12 with open(os.path.join(base_path, "src", "urllib3", "__init__.py")) as fp:
13 VERSION = (
14 re.compile(r""".*__version__ = ["'](.*?)['"]""", re.S).match(fp.read()).group(1)
15 )
16
17
18 with codecs.open("README.rst", encoding="utf-8") as fp:
19 readme = fp.read()
20
21 with codecs.open("CHANGES.rst", encoding="utf-8") as fp:
22 changes = fp.read()
23
24 version = VERSION
25
26 setup(
27 name="urllib3",
28 version=version,
29 description="HTTP library with thread-safe connection pooling, file post, and more.",
30 long_description=u"\n\n".join([readme, changes]),
31 classifiers=[
32 "Environment :: Web Environment",
33 "Intended Audience :: Developers",
34 "License :: OSI Approved :: MIT License",
35 "Operating System :: OS Independent",
36 "Programming Language :: Python",
37 "Programming Language :: Python :: 2",
38 "Programming Language :: Python :: 2.7",
39 "Programming Language :: Python :: 3",
40 "Programming Language :: Python :: 3.5",
41 "Programming Language :: Python :: 3.6",
42 "Programming Language :: Python :: 3.7",
43 "Programming Language :: Python :: 3.8",
44 "Programming Language :: Python :: Implementation :: CPython",
45 "Programming Language :: Python :: Implementation :: PyPy",
46 "Topic :: Internet :: WWW/HTTP",
47 "Topic :: Software Development :: Libraries",
48 ],
49 keywords="urllib httplib threadsafe filepost http https ssl pooling",
50 author="Andrey Petrov",
51 author_email="[email protected]",
52 url="https://urllib3.readthedocs.io/",
53 project_urls={
54 "Documentation": "https://urllib3.readthedocs.io/",
55 "Code": "https://github.com/urllib3/urllib3",
56 "Issue tracker": "https://github.com/urllib3/urllib3/issues",
57 },
58 license="MIT",
59 packages=[
60 "urllib3",
61 "urllib3.packages",
62 "urllib3.packages.ssl_match_hostname",
63 "urllib3.packages.backports",
64 "urllib3.contrib",
65 "urllib3.contrib._securetransport",
66 "urllib3.util",
67 ],
68 package_dir={"": "src"},
69 requires=[],
70 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4",
71 tests_require=[
72 # These are a less-specific subset of dev-requirements.txt, for the
73 # convenience of distro package maintainers.
74 "pytest",
75 "mock",
76 "tornado",
77 ],
78 test_suite="test",
79 extras_require={
80 "brotli": ["brotlipy>=0.6.0"],
81 "secure": [
82 "pyOpenSSL>=0.14",
83 "cryptography>=1.3.4",
84 "idna>=2.0.0",
85 "certifi",
86 "ipaddress; python_version=='2.7'",
87 ],
88 "socks": ["PySocks>=1.5.6,<2.0,!=1.5.7"],
89 },
90 )
91
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -39,7 +39,7 @@
session.run("coverage", "report", "-m")
[email protected](python=["2.7", "3.5", "3.6", "3.7", "3.8", "pypy"])
[email protected](python=["2.7", "3.5", "3.6", "3.7", "3.8", "3.9", "pypy"])
def test(session):
tests_impl(session)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -41,6 +41,7 @@
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
+ "Programming Language :: Python :: 3.9",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
"Topic :: Internet :: WWW/HTTP",
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -39,7 +39,7 @@\n session.run(\"coverage\", \"report\", \"-m\")\n \n \[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"pypy\"])\[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"pypy\"])\n def test(session):\n tests_impl(session)\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -41,6 +41,7 @@\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n+ \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n", "issue": "Add explicit support for Python 3.9\nStart testing on 3.9, add testing on Python 3.8 where there is finally support.\n", "before_files": [{"content": "import os\nimport shutil\n\nimport nox\n\n\ndef tests_impl(session, extras=\"socks,secure,brotli\"):\n # Install deps and the package itself.\n session.install(\"-r\", \"dev-requirements.txt\")\n session.install(\".[{extras}]\".format(extras=extras))\n\n # Show the pip version.\n session.run(\"pip\", \"--version\")\n # Print the Python version and bytesize.\n session.run(\"python\", \"--version\")\n session.run(\"python\", \"-c\", \"import struct; print(struct.calcsize('P') * 8)\")\n # Print OpenSSL information.\n session.run(\"python\", \"-m\", \"OpenSSL.debug\")\n\n # Inspired from https://github.com/pyca/cryptography\n # We use parallel mode and then combine here so that coverage.py will take\n # the paths like .tox/pyXY/lib/pythonX.Y/site-packages/urllib3/__init__.py\n # and collapse them into src/urllib3/__init__.py.\n\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel-mode\",\n \"-m\",\n \"pytest\",\n \"-r\",\n \"a\",\n \"--tb=native\",\n \"--no-success-flaky-report\",\n *(session.posargs or (\"test/\",)),\n env={\"PYTHONWARNINGS\": \"always::DeprecationWarning\"}\n )\n session.run(\"coverage\", \"combine\")\n session.run(\"coverage\", \"report\", \"-m\")\n\n\[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"pypy\"])\ndef test(session):\n tests_impl(session)\n\n\[email protected](python=[\"2\", \"3\"])\ndef google_brotli(session):\n # https://pypi.org/project/Brotli/ is the Google version of brotli, so\n # install it separately and don't install our brotli extra (which installs\n # brotlipy).\n session.install(\"brotli\")\n tests_impl(session, extras=\"socks,secure\")\n\n\[email protected](python=\"2.7\")\ndef app_engine(session):\n session.install(\"-r\", \"dev-requirements.txt\")\n session.install(\".\")\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel-mode\",\n \"-m\",\n \"pytest\",\n \"-r\",\n \"sx\",\n \"test/appengine\",\n *session.posargs\n )\n session.run(\"coverage\", \"combine\")\n session.run(\"coverage\", \"report\", \"-m\")\n\n\[email protected]()\ndef blacken(session):\n \"\"\"Run black code formatter.\"\"\"\n session.install(\"black\")\n session.run(\"black\", \"src\", \"dummyserver\", \"test\", \"noxfile.py\", \"setup.py\")\n\n lint(session)\n\n\[email protected]\ndef lint(session):\n session.install(\"flake8\", \"black\")\n session.run(\"flake8\", \"--version\")\n session.run(\"black\", \"--version\")\n session.run(\n \"black\", \"--check\", \"src\", \"dummyserver\", \"test\", \"noxfile.py\", \"setup.py\"\n )\n session.run(\"flake8\", \"setup.py\", \"docs\", 
\"dummyserver\", \"src\", \"test\")\n\n\[email protected]\ndef docs(session):\n session.install(\"-r\", \"docs/requirements.txt\")\n session.install(\".[socks,secure,brotli]\")\n\n session.chdir(\"docs\")\n if os.path.exists(\"_build\"):\n shutil.rmtree(\"_build\")\n session.run(\"sphinx-build\", \"-W\", \".\", \"_build/html\")\n", "path": "noxfile.py"}, {"content": "#!/usr/bin/env python\n\nfrom setuptools import setup\n\nimport os\nimport re\nimport codecs\n\nbase_path = os.path.dirname(__file__)\n\n# Get the version (borrowed from SQLAlchemy)\nwith open(os.path.join(base_path, \"src\", \"urllib3\", \"__init__.py\")) as fp:\n VERSION = (\n re.compile(r\"\"\".*__version__ = [\"'](.*?)['\"]\"\"\", re.S).match(fp.read()).group(1)\n )\n\n\nwith codecs.open(\"README.rst\", encoding=\"utf-8\") as fp:\n readme = fp.read()\n\nwith codecs.open(\"CHANGES.rst\", encoding=\"utf-8\") as fp:\n changes = fp.read()\n\nversion = VERSION\n\nsetup(\n name=\"urllib3\",\n version=version,\n description=\"HTTP library with thread-safe connection pooling, file post, and more.\",\n long_description=u\"\\n\\n\".join([readme, changes]),\n classifiers=[\n \"Environment :: Web Environment\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Software Development :: Libraries\",\n ],\n keywords=\"urllib httplib threadsafe filepost http https ssl pooling\",\n author=\"Andrey Petrov\",\n author_email=\"[email protected]\",\n url=\"https://urllib3.readthedocs.io/\",\n project_urls={\n \"Documentation\": \"https://urllib3.readthedocs.io/\",\n \"Code\": \"https://github.com/urllib3/urllib3\",\n \"Issue tracker\": \"https://github.com/urllib3/urllib3/issues\",\n },\n license=\"MIT\",\n packages=[\n \"urllib3\",\n \"urllib3.packages\",\n \"urllib3.packages.ssl_match_hostname\",\n \"urllib3.packages.backports\",\n \"urllib3.contrib\",\n \"urllib3.contrib._securetransport\",\n \"urllib3.util\",\n ],\n package_dir={\"\": \"src\"},\n requires=[],\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4\",\n tests_require=[\n # These are a less-specific subset of dev-requirements.txt, for the\n # convenience of distro package maintainers.\n \"pytest\",\n \"mock\",\n \"tornado\",\n ],\n test_suite=\"test\",\n extras_require={\n \"brotli\": [\"brotlipy>=0.6.0\"],\n \"secure\": [\n \"pyOpenSSL>=0.14\",\n \"cryptography>=1.3.4\",\n \"idna>=2.0.0\",\n \"certifi\",\n \"ipaddress; python_version=='2.7'\",\n ],\n \"socks\": [\"PySocks>=1.5.6,<2.0,!=1.5.7\"],\n },\n)\n", "path": "setup.py"}]}
| 2,481 | 253 |
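The change above only extends the version lists. For context, a minimal standalone sketch (hypothetical, not the project's own noxfile) of how a parametrized nox session picks up the new interpreter once "3.9" is added:

import nox

@nox.session(python=["3.8", "3.9"])
def test(session):
    # nox generates one session per interpreter, e.g. "test-3.8" and
    # "test-3.9", each runnable individually with `nox -s test-3.9`
    session.install(".")
    session.run("pytest")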
gh_patches_debug_5287
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-1155
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extra field in rating results
Feedback rating results display an extra blank field after the change made in #1129
</issue>
<code>
[start of apps/feedback/views.py]
1 #-*- coding: utf-8 -*-
2 import json
3
4 from collections import namedtuple, defaultdict
5
6 from django.http import Http404, HttpResponse
7 from django.shortcuts import render, redirect, get_object_or_404
8 from django.template import RequestContext
9 from django.contrib.contenttypes.models import ContentType
10 from django.contrib import messages
11 from django.contrib.admin.views.decorators import staff_member_required
12 from django.contrib.auth.decorators import login_required
13 from django.core.exceptions import ObjectDoesNotExist
14 from django.utils.translation import ugettext_lazy as _
15 from django.utils.safestring import SafeString
16
17 from apps.feedback.models import FeedbackRelation, FieldOfStudyAnswer, RATING_CHOICES, TextQuestion, TextAnswer, RegisterToken
18 from apps.feedback.forms import create_answer_forms
19 from apps.events.models import Event
20
21 @login_required
22 def feedback(request, applabel, appmodel, object_id, feedback_id):
23 fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)
24
25 if not fbr.can_answer(request.user):
26 messages.error(request, fbr.answer_error_message(request.user))
27 return redirect("home")
28
29 if request.method == "POST":
30 answers = create_answer_forms(fbr, post_data=request.POST)
31 if all([a.is_valid() for a in answers]):
32 for a in answers:
33 a.save()
34
35 # mark that the user has answered
36 fbr.answered.add(request.user)
37 fbr.save()
38
39 # Set field of study automaticly
40 fosa = FieldOfStudyAnswer(feedback_relation = fbr, answer = request.user.field_of_study)
41 fosa.save()
42
43 messages.success(request, _(u"Takk for at du svarte."))
44 return redirect("home")
45 else:
46 messages.error(request, _(u"Du må svare på alle påkrevde felt."))
47 else:
48 answers = create_answer_forms(fbr)
49
50 description = fbr.description
51
52 return render(request, 'feedback/answer.html',
53 {'answers': answers, 'description':description})
54
55 @staff_member_required
56 def result(request, applabel, appmodel, object_id, feedback_id):
57 return feedback_results(request, applabel, appmodel, object_id, feedback_id)
58
59 def results_token(request, applabel, appmodel, object_id, feedback_id, token):
60 fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)
61 rt = get_object_or_404(RegisterToken, token = token)
62
63 if rt.is_valid(fbr):
64 return feedback_results(request, applabel, appmodel, object_id, feedback_id, True)
65 else:
66 return HttpResponse('Unauthorized', status=401)
67
68 def feedback_results(request, applabel, appmodel, object_id, feedback_id, token=False):
69 fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)
70
71 Qa = namedtuple("Qa", "question, answers")
72 question_and_answers = []
73
74 for question in fbr.questions:
75 if (question.display or not token) and isinstance(question, TextQuestion):
76 question_and_answers.append(Qa(question, fbr.answers_to_question(question)))
77
78 info = None
79
80 if(fbr.feedback.display_info or not token):
81 info = fbr.content_info()
82 info[_(u'Besvarelser')] = fbr.answered.count()
83
84
85 rt = get_object_or_404(RegisterToken, fbr=fbr)
86
87 token_url = u"%s%sresults/%s" % (request.META['HTTP_HOST'], fbr.get_absolute_url(), rt.token)
88
89 return render(request, 'feedback/results.html',{'question_and_answers': question_and_answers,
90 'description': fbr.description, 'token_url' : token_url,'token' : token, 'info': info})
91
92 @staff_member_required
93 def chart_data(request, applabel, appmodel, object_id, feedback_id):
94 return get_chart_data(request, applabel, appmodel, object_id, feedback_id)
95
96 def chart_data_token(request, applabel, appmodel, object_id, feedback_id, token):
97 fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)
98 rt = get_object_or_404(RegisterToken, token = token)
99
100 if rt.is_valid(fbr):
101 return get_chart_data(request, applabel, appmodel, object_id, feedback_id, True)
102 else:
103 return HttpResponse('Unauthorized', status=401)
104
105 def get_chart_data(request, applabel, appmodel, object_id, feedback_id, token=False):
106 fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)
107
108 rating_answers = []
109 rating_titles = []
110 answer_collection = dict()
111 answer_collection['replies'] = dict()
112 answer_length = int(len(RATING_CHOICES) +1)
113 for question in fbr.ratingquestion:
114 if question.display or not token:
115 rating_titles.append(str(question))
116 answers = fbr.answers_to_question(question)
117 answer_count = [0] * answer_length
118 for answer in answers:
119 answer_count[int(answer.answer)] += 1
120 rating_answers.append(answer_count[1:])
121
122 fos_answer_count = defaultdict(int)
123
124 if fbr.feedback.display_field_of_study or not token:
125 fos = fbr.field_of_study_answers.all()
126 for answer in fos:
127 fos_answer_count[str(answer)] += 1
128
129
130 mc_questions = []
131 mc_answer_count = []
132
133 for question in fbr.multiple_choice_question:
134 if question.display or not token:
135 mc_questions.append(unicode(question))
136 answer_count = defaultdict(int)
137 for answer in fbr.answers_to_question(question):
138 answer_count[str(answer)] += 1
139 mc_answer_count.append(answer_count.items())
140
141 answer_collection['replies']['ratings'] = rating_answers
142 answer_collection['replies']['titles'] = rating_titles
143 answer_collection['replies']['mc_questions'] = mc_questions
144 answer_collection['replies']['mc_answers'] = mc_answer_count
145 answer_collection['replies']['fos'] = fos_answer_count.items()
146
147 return HttpResponse(json.dumps(answer_collection), content_type='application/json')
148
149 @staff_member_required
150 def index(request):
151 feedbacks = FeedbackRelation.objects.all()
152 return render(request, 'feedback/index.html', {'feedbacks': feedbacks})
153
154 @staff_member_required
155 def delete_answer(request):
156 if request.method == 'POST':
157 answer_id = request.POST.get('answer_id')
158 answer = get_object_or_404(TextAnswer, pk=answer_id)
159 answer.delete()
160 return HttpResponse(status = 200)
161 return HttpResponse(status=401)
162
163 def _get_fbr_or_404(app_label, app_model, object_id, feedback_id):
164 """
165 Get FeedbackRelation or raise Http404
166 """
167 try:
168 ct = ContentType.objects.get(app_label=app_label, model=app_model)
169 fbr = FeedbackRelation.objects.get(content_type=ct,
170 object_id=object_id,
171 feedback_id=feedback_id)
172 except ObjectDoesNotExist:
173 raise Http404
174
175 return fbr
176
[end of apps/feedback/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/feedback/views.py b/apps/feedback/views.py
--- a/apps/feedback/views.py
+++ b/apps/feedback/views.py
@@ -109,7 +109,7 @@
rating_titles = []
answer_collection = dict()
answer_collection['replies'] = dict()
- answer_length = int(len(RATING_CHOICES) +1)
+ answer_length = int(len(RATING_CHOICES))
for question in fbr.ratingquestion:
if question.display or not token:
rating_titles.append(str(question))
|
{"golden_diff": "diff --git a/apps/feedback/views.py b/apps/feedback/views.py\n--- a/apps/feedback/views.py\n+++ b/apps/feedback/views.py\n@@ -109,7 +109,7 @@\n rating_titles = []\n answer_collection = dict()\n answer_collection['replies'] = dict()\n- answer_length = int(len(RATING_CHOICES) +1)\n+ answer_length = int(len(RATING_CHOICES))\n for question in fbr.ratingquestion:\n if question.display or not token:\n rating_titles.append(str(question))\n", "issue": "Extra field in rating results\nFeedback rating results display the extra blank field after it was changed in #1129 \n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\nimport json\n\nfrom collections import namedtuple, defaultdict\n\nfrom django.http import Http404, HttpResponse\nfrom django.shortcuts import render, redirect, get_object_or_404\nfrom django.template import RequestContext\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.contrib import messages\nfrom django.contrib.admin.views.decorators import staff_member_required\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.utils.safestring import SafeString\n\nfrom apps.feedback.models import FeedbackRelation, FieldOfStudyAnswer, RATING_CHOICES, TextQuestion, TextAnswer, RegisterToken\nfrom apps.feedback.forms import create_answer_forms\nfrom apps.events.models import Event\n\n@login_required\ndef feedback(request, applabel, appmodel, object_id, feedback_id):\n fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)\n\n if not fbr.can_answer(request.user):\n messages.error(request, fbr.answer_error_message(request.user))\n return redirect(\"home\")\n\n if request.method == \"POST\":\n answers = create_answer_forms(fbr, post_data=request.POST)\n if all([a.is_valid() for a in answers]):\n for a in answers:\n a.save()\n\n # mark that the user has answered\n fbr.answered.add(request.user)\n fbr.save()\n\n # Set field of study automaticly\n fosa = FieldOfStudyAnswer(feedback_relation = fbr, answer = request.user.field_of_study)\n fosa.save()\n\n messages.success(request, _(u\"Takk for at du svarte.\"))\n return redirect(\"home\")\n else:\n messages.error(request, _(u\"Du m\u00e5 svare p\u00e5 alle p\u00e5krevde felt.\"))\n else:\n answers = create_answer_forms(fbr)\n\n description = fbr.description\n\n return render(request, 'feedback/answer.html',\n {'answers': answers, 'description':description})\n\n@staff_member_required\ndef result(request, applabel, appmodel, object_id, feedback_id):\n return feedback_results(request, applabel, appmodel, object_id, feedback_id)\n\ndef results_token(request, applabel, appmodel, object_id, feedback_id, token):\n fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)\n rt = get_object_or_404(RegisterToken, token = token)\n\n if rt.is_valid(fbr):\n return feedback_results(request, applabel, appmodel, object_id, feedback_id, True)\n else:\n return HttpResponse('Unauthorized', status=401)\n\ndef feedback_results(request, applabel, appmodel, object_id, feedback_id, token=False):\n fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)\n\n Qa = namedtuple(\"Qa\", \"question, answers\")\n question_and_answers = []\n\n for question in fbr.questions:\n if (question.display or not token) and isinstance(question, TextQuestion):\n question_and_answers.append(Qa(question, fbr.answers_to_question(question)))\n \n info = None\n\n if(fbr.feedback.display_info or not 
token):\n info = fbr.content_info()\n info[_(u'Besvarelser')] = fbr.answered.count()\n \n \n rt = get_object_or_404(RegisterToken, fbr=fbr)\n\n token_url = u\"%s%sresults/%s\" % (request.META['HTTP_HOST'], fbr.get_absolute_url(), rt.token)\n \n return render(request, 'feedback/results.html',{'question_and_answers': question_and_answers, \n 'description': fbr.description, 'token_url' : token_url,'token' : token, 'info': info})\n\n@staff_member_required\ndef chart_data(request, applabel, appmodel, object_id, feedback_id):\n return get_chart_data(request, applabel, appmodel, object_id, feedback_id)\n\ndef chart_data_token(request, applabel, appmodel, object_id, feedback_id, token):\n fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)\n rt = get_object_or_404(RegisterToken, token = token)\n\n if rt.is_valid(fbr):\n return get_chart_data(request, applabel, appmodel, object_id, feedback_id, True)\n else:\n return HttpResponse('Unauthorized', status=401)\n\ndef get_chart_data(request, applabel, appmodel, object_id, feedback_id, token=False):\n fbr = _get_fbr_or_404(applabel, appmodel, object_id, feedback_id)\n \n rating_answers = []\n rating_titles = []\n answer_collection = dict()\n answer_collection['replies'] = dict()\n answer_length = int(len(RATING_CHOICES) +1)\n for question in fbr.ratingquestion:\n if question.display or not token:\n rating_titles.append(str(question))\n answers = fbr.answers_to_question(question)\n answer_count = [0] * answer_length\n for answer in answers:\n answer_count[int(answer.answer)] += 1\n rating_answers.append(answer_count[1:])\n\n fos_answer_count = defaultdict(int)\n \n if fbr.feedback.display_field_of_study or not token:\n fos = fbr.field_of_study_answers.all()\n for answer in fos:\n fos_answer_count[str(answer)] += 1\n \n\n mc_questions = []\n mc_answer_count = []\n \n for question in fbr.multiple_choice_question:\n if question.display or not token:\n mc_questions.append(unicode(question))\n answer_count = defaultdict(int)\n for answer in fbr.answers_to_question(question):\n answer_count[str(answer)] += 1\n mc_answer_count.append(answer_count.items())\n\n answer_collection['replies']['ratings'] = rating_answers\n answer_collection['replies']['titles'] = rating_titles\n answer_collection['replies']['mc_questions'] = mc_questions\n answer_collection['replies']['mc_answers'] = mc_answer_count\n answer_collection['replies']['fos'] = fos_answer_count.items()\n \n return HttpResponse(json.dumps(answer_collection), content_type='application/json')\n\n@staff_member_required\ndef index(request):\n feedbacks = FeedbackRelation.objects.all()\n return render(request, 'feedback/index.html', {'feedbacks': feedbacks})\n\n@staff_member_required\ndef delete_answer(request):\n if request.method == 'POST':\n answer_id = request.POST.get('answer_id')\n answer = get_object_or_404(TextAnswer, pk=answer_id)\n answer.delete()\n return HttpResponse(status = 200)\n return HttpResponse(status=401)\n\ndef _get_fbr_or_404(app_label, app_model, object_id, feedback_id):\n \"\"\"\n Get FeedbackRelation or raise Http404\n \"\"\"\n try:\n ct = ContentType.objects.get(app_label=app_label, model=app_model)\n fbr = FeedbackRelation.objects.get(content_type=ct,\n object_id=object_id,\n feedback_id=feedback_id)\n except ObjectDoesNotExist:\n raise Http404\n\n return fbr\n", "path": "apps/feedback/views.py"}]}
| 2,549 | 116 |
gh_patches_debug_58736
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-6081
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not sure that OAuth2 client source should use authorization header
I've been testing authentik using an Auth0 OpenID Connect source as well as a Google source. I have gotten both to work, but Auth0 was not working before a fix to prevent adding the access token to the authorization Bearer header. Google auth works fine with or without this fix. 
https://auth0.com/blog/id-token-access-token-what-is-the-difference/ suggests that many endpoints should not be given the access token. Not sure this is relevant.
I think Auth0 is less permissive and prefers the access_token param rather than the Authorization Bearer token
in sources/oauth/clients/oauth2.py
```
class OAuth2Client(BaseOAuthClient):
"""OAuth2 Client"""
...
def do_request(self, method: str, url: str, **kwargs) -> Response:
"""Build remote url request. Constructs necessary auth."""
if "token" in kwargs:
token = kwargs.pop("token")
params = kwargs.get("params", {})
params["access_token"] = token["access_token"]
kwargs["params"] = params
headers = kwargs.get("headers", {})
# Note this fix
# headers["Authorization"] = f"{token['token_type']} {token['access_token']}"
kwargs["headers"] = headers
return super().do_request(method, url, **kwargs)
```
</issue>
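A minimal sketch of the workaround described above, using plain `requests` rather than authentik's actual `OAuth2Client`; the `userinfo_url` argument and the token dict shape are assumptions for illustration only:
```
import requests


def fetch_userinfo(userinfo_url: str, token: dict) -> dict:
    """Send the access token as a query parameter only, with no
    Authorization header, mirroring the behaviour the reporter needed
    for Auth0 (illustrative sketch, not authentik code)."""
    params = {"access_token": token["access_token"]}
    response = requests.get(userinfo_url, params=params, timeout=10)
    response.raise_for_status()
    return response.json()
```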
<code>
[start of authentik/sources/oauth/types/oidc.py]
1 """OpenID Connect OAuth Views"""
2 from typing import Any
3
4 from authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient
5 from authentik.sources.oauth.models import OAuthSource
6 from authentik.sources.oauth.types.registry import SourceType, registry
7 from authentik.sources.oauth.views.callback import OAuthCallback
8 from authentik.sources.oauth.views.redirect import OAuthRedirect
9
10
11 class OpenIDConnectOAuthRedirect(OAuthRedirect):
12 """OpenIDConnect OAuth2 Redirect"""
13
14 def get_additional_parameters(self, source: OAuthSource): # pragma: no cover
15 return {
16 "scope": ["openid", "email", "profile"],
17 }
18
19
20 class OpenIDConnectOAuth2Callback(OAuthCallback):
21 """OpenIDConnect OAuth2 Callback"""
22
23 client_class: UserprofileHeaderAuthClient
24
25 def get_user_id(self, info: dict[str, str]) -> str:
26 return info.get("sub", "")
27
28 def get_user_enroll_context(
29 self,
30 info: dict[str, Any],
31 ) -> dict[str, Any]:
32 return {
33 "username": info.get("nickname", info.get("preferred_username")),
34 "email": info.get("email"),
35 "name": info.get("name"),
36 }
37
38
39 @registry.register()
40 class OpenIDConnectType(SourceType):
41 """OpenIDConnect Type definition"""
42
43 callback_view = OpenIDConnectOAuth2Callback
44 redirect_view = OpenIDConnectOAuthRedirect
45 name = "OpenID Connect"
46 slug = "openidconnect"
47
48 urls_customizable = True
49
[end of authentik/sources/oauth/types/oidc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/sources/oauth/types/oidc.py b/authentik/sources/oauth/types/oidc.py
--- a/authentik/sources/oauth/types/oidc.py
+++ b/authentik/sources/oauth/types/oidc.py
@@ -20,7 +20,7 @@
class OpenIDConnectOAuth2Callback(OAuthCallback):
"""OpenIDConnect OAuth2 Callback"""
- client_class: UserprofileHeaderAuthClient
+ client_class = UserprofileHeaderAuthClient
def get_user_id(self, info: dict[str, str]) -> str:
return info.get("sub", "")
|
{"golden_diff": "diff --git a/authentik/sources/oauth/types/oidc.py b/authentik/sources/oauth/types/oidc.py\n--- a/authentik/sources/oauth/types/oidc.py\n+++ b/authentik/sources/oauth/types/oidc.py\n@@ -20,7 +20,7 @@\n class OpenIDConnectOAuth2Callback(OAuthCallback):\n \"\"\"OpenIDConnect OAuth2 Callback\"\"\"\n \n- client_class: UserprofileHeaderAuthClient\n+ client_class = UserprofileHeaderAuthClient\n \n def get_user_id(self, info: dict[str, str]) -> str:\n return info.get(\"sub\", \"\")\n", "issue": "Not sure that OAuth2 client source should use authorization header\nI've been testing authentik using an Auth0 openIDC source as well as a google source. I have gotten both to work, but Auth0 was not working before a fix to prevent adding the access token to the authorizaton Bearer headers. Google auth works fine with or without this fix. \r\n\r\nhttps://auth0.com/blog/id-token-access-token-what-is-the-difference/ suggests that many endpoints should not be given the access token. Not sure this is relevant.\r\n\r\nI think Auth0 is less permissive and prefers the access_token param rather than the Authorization Bearer token\r\n\r\nin sources/oauth/clients/oauth2.py\r\n```\r\nclass OAuth2Client(BaseOAuthClient):\r\n \"\"\"OAuth2 Client\"\"\"\r\n\r\n ...\r\n\r\n def do_request(self, method: str, url: str, **kwargs) -> Response:\r\n \"\"\"Build remote url request. Constructs necessary auth.\"\"\"\r\n if \"token\" in kwargs:\r\n token = kwargs.pop(\"token\")\r\n\r\n params = kwargs.get(\"params\", {})\r\n params[\"access_token\"] = token[\"access_token\"]\r\n kwargs[\"params\"] = params\r\n\r\n headers = kwargs.get(\"headers\", {})\r\n # Note this fix\r\n # headers[\"Authorization\"] = f\"{token['token_type']} {token['access_token']}\"\r\n kwargs[\"headers\"] = headers\r\n return super().do_request(method, url, **kwargs)\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"OpenID Connect OAuth Views\"\"\"\nfrom typing import Any\n\nfrom authentik.sources.oauth.clients.oauth2 import UserprofileHeaderAuthClient\nfrom authentik.sources.oauth.models import OAuthSource\nfrom authentik.sources.oauth.types.registry import SourceType, registry\nfrom authentik.sources.oauth.views.callback import OAuthCallback\nfrom authentik.sources.oauth.views.redirect import OAuthRedirect\n\n\nclass OpenIDConnectOAuthRedirect(OAuthRedirect):\n \"\"\"OpenIDConnect OAuth2 Redirect\"\"\"\n\n def get_additional_parameters(self, source: OAuthSource): # pragma: no cover\n return {\n \"scope\": [\"openid\", \"email\", \"profile\"],\n }\n\n\nclass OpenIDConnectOAuth2Callback(OAuthCallback):\n \"\"\"OpenIDConnect OAuth2 Callback\"\"\"\n\n client_class: UserprofileHeaderAuthClient\n\n def get_user_id(self, info: dict[str, str]) -> str:\n return info.get(\"sub\", \"\")\n\n def get_user_enroll_context(\n self,\n info: dict[str, Any],\n ) -> dict[str, Any]:\n return {\n \"username\": info.get(\"nickname\", info.get(\"preferred_username\")),\n \"email\": info.get(\"email\"),\n \"name\": info.get(\"name\"),\n }\n\n\[email protected]()\nclass OpenIDConnectType(SourceType):\n \"\"\"OpenIDConnect Type definition\"\"\"\n\n callback_view = OpenIDConnectOAuth2Callback\n redirect_view = OpenIDConnectOAuthRedirect\n name = \"OpenID Connect\"\n slug = \"openidconnect\"\n\n urls_customizable = True\n", "path": "authentik/sources/oauth/types/oidc.py"}]}
| 1,264 | 132 |
gh_patches_debug_18320
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-1453
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tornado 5.0 raises error on install with older Python versions.
Changed to `"tornado>=4.1,<5.0"` in `setup.py`.
This broke installation via pip for me. 
</issue>
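For reference, a sketch of two ways such a constraint can be expressed in packaging metadata. This is illustrative only; the marker-based variant is an assumption for the example, not what MkDocs shipped:
```
# Option 1: keep an older Tornado only on interpreters that Tornado 5 dropped,
# using PEP 508 environment markers inside install_requires.
install_requires = [
    'tornado>=4.1,<5.0; python_version < "3.4"',
    'tornado>=5.0; python_version >= "3.4"',
]

# Option 2: require a modern interpreter for the whole package instead,
# so pip never tries to install it on Pythons that Tornado 5 does not support.
python_requires = '>=2.7.9,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*'
```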
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3
4 from __future__ import print_function
5 from setuptools import setup
6 import re
7 import os
8 import sys
9
10
11 long_description = (
12 "MkDocs is a fast, simple and downright gorgeous static site generator "
13 "that's geared towards building project documentation. Documentation "
14 "source files are written in Markdown, and configured with a single YAML "
15 "configuration file."
16 )
17
18
19 def get_version(package):
20 """Return package version as listed in `__version__` in `init.py`."""
21 init_py = open(os.path.join(package, '__init__.py')).read()
22 return re.search("__version__ = ['\"]([^'\"]+)['\"]", init_py).group(1)
23
24
25 def get_packages(package):
26 """Return root package and all sub-packages."""
27 return [dirpath
28 for dirpath, dirnames, filenames in os.walk(package)
29 if os.path.exists(os.path.join(dirpath, '__init__.py'))]
30
31
32 if sys.argv[-1] == 'publish':
33 if os.system("pip freeze | grep wheel"):
34 print("wheel not installed.\nUse `pip install wheel`.\nExiting.")
35 sys.exit()
36 if os.system("pip freeze | grep twine"):
37 print("twine not installed.\nUse `pip install twine`.\nExiting.")
38 sys.exit()
39 os.system("python setup.py sdist bdist_wheel")
40 os.system("twine upload dist/*")
41 print("You probably want to also tag the version now:")
42 print(" git tag -a {0} -m 'version {0}'".format(get_version("mkdocs")))
43 print(" git push --tags")
44 sys.exit()
45
46
47 setup(
48 name="mkdocs",
49 version=get_version("mkdocs"),
50 url='http://www.mkdocs.org',
51 license='BSD',
52 description='Project documentation with Markdown.',
53 long_description=long_description,
54 author='Tom Christie',
55 author_email='[email protected]', # SEE NOTE BELOW (*)
56 packages=get_packages("mkdocs"),
57 include_package_data=True,
58 install_requires=[
59 'click>=3.3',
60 'Jinja2>=2.7.1',
61 'livereload>=2.5.1',
62 'Markdown>=2.3.1',
63 'PyYAML>=3.10',
64 'tornado>=4.1',
65 ],
66 entry_points={
67 'console_scripts': [
68 'mkdocs = mkdocs.__main__:cli',
69 ],
70 'mkdocs.themes': [
71 'mkdocs = mkdocs.themes.mkdocs',
72 'readthedocs = mkdocs.themes.readthedocs',
73 ],
74 'mkdocs.plugins': [
75 'search = mkdocs.contrib.search:SearchPlugin',
76 ],
77 },
78 classifiers=[
79 'Development Status :: 5 - Production/Stable',
80 'Environment :: Console',
81 'Environment :: Web Environment',
82 'Intended Audience :: Developers',
83 'License :: OSI Approved :: BSD License',
84 'Operating System :: OS Independent',
85 'Programming Language :: Python',
86 'Programming Language :: Python :: 2',
87 'Programming Language :: Python :: 2.7',
88 'Programming Language :: Python :: 3',
89 'Programming Language :: Python :: 3.3',
90 'Programming Language :: Python :: 3.4',
91 'Programming Language :: Python :: 3.5',
92 'Programming Language :: Python :: 3.6',
93 "Programming Language :: Python :: Implementation :: CPython",
94 "Programming Language :: Python :: Implementation :: PyPy",
95 'Topic :: Documentation',
96 'Topic :: Text Processing',
97 ],
98 zip_safe=False,
99 )
100
101 # (*) Please direct queries to the discussion group:
102 # https://groups.google.com/forum/#!forum/mkdocs
103
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,8 +61,9 @@
'livereload>=2.5.1',
'Markdown>=2.3.1',
'PyYAML>=3.10',
- 'tornado>=4.1',
+ 'tornado>=5.0',
],
+ python_requires='>=2.7.9,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
entry_points={
'console_scripts': [
'mkdocs = mkdocs.__main__:cli',
@@ -86,7 +87,6 @@
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,8 +61,9 @@\n 'livereload>=2.5.1',\n 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n- 'tornado>=4.1',\n+ 'tornado>=5.0',\n ],\n+ python_requires='>=2.7.9,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n@@ -86,7 +87,6 @@\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n", "issue": "Tornado 5.0 raises error on install with older Python versions.\nchanged to `\"tornado>=4.1,<5.0\"` in setup.py\r\n\r\nThis broke installation via pip for me. \n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n\nlong_description = (\n \"MkDocs is a fast, simple and downright gorgeous static site generator \"\n \"that's geared towards building project documentation. Documentation \"\n \"source files are written in Markdown, and configured with a single YAML \"\n \"configuration file.\"\n)\n\n\ndef get_version(package):\n \"\"\"Return package version as listed in `__version__` in `init.py`.\"\"\"\n init_py = open(os.path.join(package, '__init__.py')).read()\n return re.search(\"__version__ = ['\\\"]([^'\\\"]+)['\\\"]\", init_py).group(1)\n\n\ndef get_packages(package):\n \"\"\"Return root package and all sub-packages.\"\"\"\n return [dirpath\n for dirpath, dirnames, filenames in os.walk(package)\n if os.path.exists(os.path.join(dirpath, '__init__.py'))]\n\n\nif sys.argv[-1] == 'publish':\n if os.system(\"pip freeze | grep wheel\"):\n print(\"wheel not installed.\\nUse `pip install wheel`.\\nExiting.\")\n sys.exit()\n if os.system(\"pip freeze | grep twine\"):\n print(\"twine not installed.\\nUse `pip install twine`.\\nExiting.\")\n sys.exit()\n os.system(\"python setup.py sdist bdist_wheel\")\n os.system(\"twine upload dist/*\")\n print(\"You probably want to also tag the version now:\")\n print(\" git tag -a {0} -m 'version {0}'\".format(get_version(\"mkdocs\")))\n print(\" git push --tags\")\n sys.exit()\n\n\nsetup(\n name=\"mkdocs\",\n version=get_version(\"mkdocs\"),\n url='http://www.mkdocs.org',\n license='BSD',\n description='Project documentation with Markdown.',\n long_description=long_description,\n author='Tom Christie',\n author_email='[email protected]', # SEE NOTE BELOW (*)\n packages=get_packages(\"mkdocs\"),\n include_package_data=True,\n install_requires=[\n 'click>=3.3',\n 'Jinja2>=2.7.1',\n 'livereload>=2.5.1',\n 'Markdown>=2.3.1',\n 'PyYAML>=3.10',\n 'tornado>=4.1',\n ],\n entry_points={\n 'console_scripts': [\n 'mkdocs = mkdocs.__main__:cli',\n ],\n 'mkdocs.themes': [\n 'mkdocs = mkdocs.themes.mkdocs',\n 'readthedocs = mkdocs.themes.readthedocs',\n ],\n 'mkdocs.plugins': [\n 'search = mkdocs.contrib.search:SearchPlugin',\n ],\n },\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 
3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n 'Topic :: Documentation',\n 'Topic :: Text Processing',\n ],\n zip_safe=False,\n)\n\n# (*) Please direct queries to the discussion group:\n# https://groups.google.com/forum/#!forum/mkdocs\n", "path": "setup.py"}]}
| 1,593 | 235 |
gh_patches_debug_38298
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-121
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
redirect loop on invalid single-user auth token
When the single-user server's API request fails with 403, it's handled as a failed login (302), causing a redirect loop, when it should be handled as "500: oh noes, I can't do anything!"
</issue>
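A minimal sketch of the status handling the report is asking for, using Tornado's `HTTPError`; the function name and exact messages are illustrative, not JupyterHub's real implementation:
```
from tornado.web import HTTPError


def interpret_hub_status(status_code: int) -> dict:
    """Map the Hub API response status onto local behaviour (sketch)."""
    if status_code == 404:
        # Unknown cookie: behave as "not logged in", which may redirect.
        return {"user": ""}
    if status_code == 403:
        # Our own API token is rejected; redirecting the client cannot fix it.
        raise HTTPError(500, "Permission failure checking authorization")
    if status_code >= 500:
        raise HTTPError(502, "Failed to check authorization (upstream problem)")
    if status_code >= 400:
        raise HTTPError(500, "Failed to check authorization")
    # 2xx: the caller should parse the JSON body instead.
    return {}
```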
<code>
[start of jupyterhub/singleuser.py]
1 #!/usr/bin/env python
2 """Extend regular notebook server to be aware of multiuser things."""
3
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 import os
8
9 import requests
10
11 from tornado import ioloop
12
13 from IPython.utils.traitlets import Unicode
14
15 from IPython.html.notebookapp import NotebookApp
16 from IPython.html.auth.login import LoginHandler
17 from IPython.html.auth.logout import LogoutHandler
18
19 from IPython.html.utils import url_path_join
20
21
22 from distutils.version import LooseVersion as V
23
24 import IPython
25 if V(IPython.__version__) < V('3.0'):
26 raise ImportError("JupyterHub Requires IPython >= 3.0, found %s" % IPython.__version__)
27
28 # Define two methods to attach to AuthenticatedHandler,
29 # which authenticate via the central auth server.
30
31 class JupyterHubLoginHandler(LoginHandler):
32 @staticmethod
33 def login_available(settings):
34 return True
35
36 @staticmethod
37 def verify_token(self, cookie_name, encrypted_cookie):
38 """monkeypatch method for token verification"""
39 cookie_cache = self.settings['cookie_cache']
40 if encrypted_cookie in cookie_cache:
41 # we've seen this token before, don't ask upstream again
42 return cookie_cache[encrypted_cookie]
43
44 hub_api_url = self.settings['hub_api_url']
45 hub_api_key = self.settings['hub_api_key']
46 r = requests.get(url_path_join(
47 hub_api_url, "authorizations/cookie", cookie_name,
48 ),
49 headers = {'Authorization' : 'token %s' % hub_api_key},
50 data=encrypted_cookie,
51 )
52 if r.status_code == 404:
53 data = {'user' : ''}
54 elif r.status_code >= 400:
55 self.log.warn("Failed to check authorization: [%i] %s", r.status_code, r.reason)
56 data = None
57 else:
58 data = r.json()
59 cookie_cache[encrypted_cookie] = data
60 return data
61
62 @staticmethod
63 def get_user(self):
64 """alternative get_current_user to query the central server"""
65 my_user = self.settings['user']
66 encrypted_cookie = self.get_cookie(self.cookie_name)
67 if encrypted_cookie:
68 auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)
69 if not auth_data:
70 # treat invalid token the same as no token
71 return None
72 user = auth_data['user']
73 if user == my_user:
74 return user
75 else:
76 return None
77 else:
78 self.log.debug("No token cookie")
79 return None
80
81
82 class JupyterHubLogoutHandler(LogoutHandler):
83 def get(self):
84 self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))
85
86
87 # register new hub related command-line aliases
88 aliases = NotebookApp.aliases.get_default_value()
89 aliases.update({
90 'user' : 'SingleUserNotebookApp.user',
91 'cookie-name': 'SingleUserNotebookApp.cookie_name',
92 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
93 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
94 'base-url': 'SingleUserNotebookApp.base_url',
95 })
96
97 class SingleUserNotebookApp(NotebookApp):
98 """A Subclass of the regular NotebookApp that is aware of the parent multiuser context."""
99 user = Unicode(config=True)
100 cookie_name = Unicode(config=True)
101 hub_prefix = Unicode(config=True)
102 hub_api_url = Unicode(config=True)
103 aliases = aliases
104 open_browser = False
105 login_handler_class = JupyterHubLoginHandler
106 logout_handler_class = JupyterHubLogoutHandler
107
108 def _confirm_exit(self):
109 # disable the exit confirmation for background notebook processes
110 ioloop.IOLoop.instance().stop()
111
112 def init_webapp(self):
113 # load the hub related settings into the tornado settings dict
114 env = os.environ
115 s = self.tornado_settings
116 s['cookie_cache'] = {}
117 s['user'] = self.user
118 s['hub_api_key'] = env.pop('JPY_API_TOKEN')
119 s['hub_prefix'] = self.hub_prefix
120 s['cookie_name'] = self.cookie_name
121 s['login_url'] = url_path_join(self.hub_prefix, 'login')
122 s['hub_api_url'] = self.hub_api_url
123 super(SingleUserNotebookApp, self).init_webapp()
124
125
126 def main():
127 return SingleUserNotebookApp.launch_instance()
128
129
130 if __name__ == "__main__":
131 main()
132
[end of jupyterhub/singleuser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py
--- a/jupyterhub/singleuser.py
+++ b/jupyterhub/singleuser.py
@@ -9,6 +9,7 @@
import requests
from tornado import ioloop
+from tornado.web import HTTPError
from IPython.utils.traitlets import Unicode
@@ -35,7 +36,7 @@
@staticmethod
def verify_token(self, cookie_name, encrypted_cookie):
- """monkeypatch method for token verification"""
+ """method for token verification"""
cookie_cache = self.settings['cookie_cache']
if encrypted_cookie in cookie_cache:
# we've seen this token before, don't ask upstream again
@@ -51,9 +52,15 @@
)
if r.status_code == 404:
data = {'user' : ''}
+ if r.status_code == 403:
+ self.log.error("I don't have permission to verify cookies, my auth token may have expired: [%i] %s", r.status_code, r.reason)
+ raise HTTPError(500, "Permission failure checking authorization, I may need to be restarted")
+ elif r.status_code >= 500:
+ self.log.error("Upstream failure verifying auth token: [%i] %s", r.status_code, r.reason)
+ raise HTTPError(502, "Failed to check authorization (upstream problem)")
elif r.status_code >= 400:
self.log.warn("Failed to check authorization: [%i] %s", r.status_code, r.reason)
- data = None
+ raise HTTPError(500, "Failed to check authorization")
else:
data = r.json()
cookie_cache[encrypted_cookie] = data
@@ -62,6 +69,13 @@
@staticmethod
def get_user(self):
"""alternative get_current_user to query the central server"""
+ # only allow this to be called once per handler
+ # avoids issues if an error is raised,
+ # since this may be called again when trying to render the error page
+ if hasattr(self, '_cached_user'):
+ return self._cached_user
+
+ self._cached_user = None
my_user = self.settings['user']
encrypted_cookie = self.get_cookie(self.cookie_name)
if encrypted_cookie:
@@ -71,6 +85,7 @@
return None
user = auth_data['user']
if user == my_user:
+ self._cached_user = user
return user
else:
return None
|
{"golden_diff": "diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py\n--- a/jupyterhub/singleuser.py\n+++ b/jupyterhub/singleuser.py\n@@ -9,6 +9,7 @@\n import requests\n \n from tornado import ioloop\n+from tornado.web import HTTPError\n \n from IPython.utils.traitlets import Unicode\n \n@@ -35,7 +36,7 @@\n \n @staticmethod\n def verify_token(self, cookie_name, encrypted_cookie):\n- \"\"\"monkeypatch method for token verification\"\"\"\n+ \"\"\"method for token verification\"\"\"\n cookie_cache = self.settings['cookie_cache']\n if encrypted_cookie in cookie_cache:\n # we've seen this token before, don't ask upstream again\n@@ -51,9 +52,15 @@\n )\n if r.status_code == 404:\n data = {'user' : ''}\n+ if r.status_code == 403:\n+ self.log.error(\"I don't have permission to verify cookies, my auth token may have expired: [%i] %s\", r.status_code, r.reason)\n+ raise HTTPError(500, \"Permission failure checking authorization, I may need to be restarted\")\n+ elif r.status_code >= 500:\n+ self.log.error(\"Upstream failure verifying auth token: [%i] %s\", r.status_code, r.reason)\n+ raise HTTPError(502, \"Failed to check authorization (upstream problem)\")\n elif r.status_code >= 400:\n self.log.warn(\"Failed to check authorization: [%i] %s\", r.status_code, r.reason)\n- data = None\n+ raise HTTPError(500, \"Failed to check authorization\")\n else:\n data = r.json()\n cookie_cache[encrypted_cookie] = data\n@@ -62,6 +69,13 @@\n @staticmethod\n def get_user(self):\n \"\"\"alternative get_current_user to query the central server\"\"\"\n+ # only allow this to be called once per handler\n+ # avoids issues if an error is raised,\n+ # since this may be called again when trying to render the error page\n+ if hasattr(self, '_cached_user'):\n+ return self._cached_user\n+ \n+ self._cached_user = None\n my_user = self.settings['user']\n encrypted_cookie = self.get_cookie(self.cookie_name)\n if encrypted_cookie:\n@@ -71,6 +85,7 @@\n return None\n user = auth_data['user']\n if user == my_user:\n+ self._cached_user = user\n return user\n else:\n return None\n", "issue": "redirect loop on invalid single-user auth token\nwhen the single-user server's API request fails with 403, it's handled as failed login (302) causing a redirect loop, when it should be handled as \"500: oh noes, I can't do anything!\"\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"Extend regular notebook server to be aware of multiuser things.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\n\nimport requests\n\nfrom tornado import ioloop\n\nfrom IPython.utils.traitlets import Unicode\n\nfrom IPython.html.notebookapp import NotebookApp\nfrom IPython.html.auth.login import LoginHandler\nfrom IPython.html.auth.logout import LogoutHandler\n\nfrom IPython.html.utils import url_path_join\n\n\nfrom distutils.version import LooseVersion as V\n\nimport IPython\nif V(IPython.__version__) < V('3.0'):\n raise ImportError(\"JupyterHub Requires IPython >= 3.0, found %s\" % IPython.__version__)\n\n# Define two methods to attach to AuthenticatedHandler,\n# which authenticate via the central auth server.\n\nclass JupyterHubLoginHandler(LoginHandler):\n @staticmethod\n def login_available(settings):\n return True\n \n @staticmethod\n def verify_token(self, cookie_name, encrypted_cookie):\n \"\"\"monkeypatch method for token verification\"\"\"\n cookie_cache = self.settings['cookie_cache']\n if encrypted_cookie in cookie_cache:\n # we've seen this token 
before, don't ask upstream again\n return cookie_cache[encrypted_cookie]\n \n hub_api_url = self.settings['hub_api_url']\n hub_api_key = self.settings['hub_api_key']\n r = requests.get(url_path_join(\n hub_api_url, \"authorizations/cookie\", cookie_name,\n ),\n headers = {'Authorization' : 'token %s' % hub_api_key},\n data=encrypted_cookie,\n )\n if r.status_code == 404:\n data = {'user' : ''}\n elif r.status_code >= 400:\n self.log.warn(\"Failed to check authorization: [%i] %s\", r.status_code, r.reason)\n data = None\n else:\n data = r.json()\n cookie_cache[encrypted_cookie] = data\n return data\n \n @staticmethod\n def get_user(self):\n \"\"\"alternative get_current_user to query the central server\"\"\"\n my_user = self.settings['user']\n encrypted_cookie = self.get_cookie(self.cookie_name)\n if encrypted_cookie:\n auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)\n if not auth_data:\n # treat invalid token the same as no token\n return None\n user = auth_data['user']\n if user == my_user:\n return user\n else:\n return None\n else:\n self.log.debug(\"No token cookie\")\n return None\n\n\nclass JupyterHubLogoutHandler(LogoutHandler):\n def get(self):\n self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))\n\n\n# register new hub related command-line aliases\naliases = NotebookApp.aliases.get_default_value()\naliases.update({\n 'user' : 'SingleUserNotebookApp.user',\n 'cookie-name': 'SingleUserNotebookApp.cookie_name',\n 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',\n 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',\n 'base-url': 'SingleUserNotebookApp.base_url',\n})\n\nclass SingleUserNotebookApp(NotebookApp):\n \"\"\"A Subclass of the regular NotebookApp that is aware of the parent multiuser context.\"\"\"\n user = Unicode(config=True)\n cookie_name = Unicode(config=True)\n hub_prefix = Unicode(config=True)\n hub_api_url = Unicode(config=True)\n aliases = aliases\n open_browser = False\n login_handler_class = JupyterHubLoginHandler\n logout_handler_class = JupyterHubLogoutHandler\n \n def _confirm_exit(self):\n # disable the exit confirmation for background notebook processes\n ioloop.IOLoop.instance().stop()\n \n def init_webapp(self):\n # load the hub related settings into the tornado settings dict\n env = os.environ\n s = self.tornado_settings\n s['cookie_cache'] = {}\n s['user'] = self.user\n s['hub_api_key'] = env.pop('JPY_API_TOKEN')\n s['hub_prefix'] = self.hub_prefix\n s['cookie_name'] = self.cookie_name\n s['login_url'] = url_path_join(self.hub_prefix, 'login')\n s['hub_api_url'] = self.hub_api_url\n super(SingleUserNotebookApp, self).init_webapp()\n\n\ndef main():\n return SingleUserNotebookApp.launch_instance()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "jupyterhub/singleuser.py"}]}
| 1,878 | 573 |
gh_patches_debug_19594
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-2778
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Traceback appears, when trying to set mitmproxy's address as upstream server for reverse/upstream mode
##### Steps to reproduce the problem:
1. Run mitmproxy in **reverse** or **upstream** mode, using its own address as upstream server address:
`mitmproxy --mode reverse:http://127.0.0.1:8080` or
`mitmproxy --mode upstream:http://127.0.0.1:8080`
2. Make a request using pathoc `pathoc 127.0.0.1:8080 "get:/"` or a browser.
I am seeing:

##### Any other comments? What have you tried so far?
The exception raised at https://github.com/mitmproxy/mitmproxy/blob/master/mitmproxy/proxy/protocol/base.py#L115
should be handled.
##### System information
Mitmproxy: 3.0.0.dev1101 (commit d9d4d15) binary
Python: 3.5.2
OpenSSL: OpenSSL 1.1.0g 2 Nov 2017
Platform: Linux-4.4.0-104-generic-x86_64-with-debian-stretch-sid
</issue>
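A small sketch of the guard pattern such a fix typically needs: build the layer inside the `try` block and only notify about objects that were actually created. The `handler` object and its methods here are hypothetical stand-ins, not mitmproxy's API:
```
def handle(handler):
    """Illustrative only: 'handler' and its methods are hypothetical."""
    root_layer = None
    try:
        root_layer = handler.create_root_layer()  # may raise before a layer exists
        root_layer()
    except Exception as exc:
        handler.log(repr(exc), "warn")
    finally:
        # Never report a disconnect for a layer that was never created.
        if root_layer is not None:
            handler.notify("clientdisconnect", root_layer)
```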
<code>
[start of mitmproxy/proxy/server.py]
1 import sys
2 import traceback
3
4 from mitmproxy import exceptions
5 from mitmproxy import connections
6 from mitmproxy import controller # noqa
7 from mitmproxy import http
8 from mitmproxy import log
9 from mitmproxy import platform
10 from mitmproxy.proxy import config
11 from mitmproxy.proxy import modes
12 from mitmproxy.proxy import root_context
13 from mitmproxy.net import tcp
14 from mitmproxy.net.http import http1
15 from mitmproxy.utils import human
16
17
18 class DummyServer:
19 bound = False
20
21 def __init__(self, config=None):
22 self.config = config
23 self.address = "dummy"
24
25 def set_channel(self, channel):
26 pass
27
28 def serve_forever(self):
29 pass
30
31 def shutdown(self):
32 pass
33
34
35 class ProxyServer(tcp.TCPServer):
36 allow_reuse_address = True
37 bound = True
38
39 def __init__(self, config: config.ProxyConfig) -> None:
40 """
41 Raises ServerException if there's a startup problem.
42 """
43 self.config = config
44 try:
45 super().__init__(
46 (config.options.listen_host, config.options.listen_port)
47 )
48 if config.options.mode == "transparent":
49 platform.init_transparent_mode()
50 except Exception as e:
51 if self.socket:
52 self.socket.close()
53 raise exceptions.ServerException(
54 'Error starting proxy server: ' + repr(e)
55 ) from e
56 self.channel = None # type: controller.Channel
57
58 def set_channel(self, channel):
59 self.channel = channel
60
61 def handle_client_connection(self, conn, client_address):
62 h = ConnectionHandler(
63 conn,
64 client_address,
65 self.config,
66 self.channel
67 )
68 h.handle()
69
70
71 class ConnectionHandler:
72
73 def __init__(self, client_conn, client_address, config, channel):
74 self.config = config # type: config.ProxyConfig
75 self.client_conn = connections.ClientConnection(
76 client_conn,
77 client_address,
78 None)
79 """@type: mitmproxy.proxy.connection.ClientConnection"""
80 self.channel = channel
81 """@type: mitmproxy.controller.Channel"""
82
83 def _create_root_layer(self):
84 root_ctx = root_context.RootContext(
85 self.client_conn,
86 self.config,
87 self.channel
88 )
89
90 mode = self.config.options.mode
91 if mode.startswith("upstream:"):
92 return modes.HttpUpstreamProxy(
93 root_ctx,
94 self.config.upstream_server.address
95 )
96 elif mode == "transparent":
97 return modes.TransparentProxy(root_ctx)
98 elif mode.startswith("reverse:"):
99 server_tls = self.config.upstream_server.scheme == "https"
100 return modes.ReverseProxy(
101 root_ctx,
102 self.config.upstream_server.address,
103 server_tls
104 )
105 elif mode == "socks5":
106 return modes.Socks5Proxy(root_ctx)
107 elif mode == "regular":
108 return modes.HttpProxy(root_ctx)
109 elif callable(mode): # pragma: no cover
110 return mode(root_ctx)
111 else: # pragma: no cover
112 raise ValueError("Unknown proxy mode: %s" % mode)
113
114 def handle(self):
115 self.log("clientconnect", "info")
116
117 root_layer = self._create_root_layer()
118
119 try:
120 root_layer = self.channel.ask("clientconnect", root_layer)
121 root_layer()
122 except exceptions.Kill:
123 self.log("Connection killed", "info")
124 except exceptions.ProtocolException as e:
125 if isinstance(e, exceptions.ClientHandshakeException):
126 self.log(
127 "Client Handshake failed. "
128 "The client may not trust the proxy's certificate for {}.".format(e.server),
129 "warn"
130 )
131 self.log(repr(e), "debug")
132 elif isinstance(e, exceptions.InvalidServerCertificate):
133 self.log(str(e), "warn")
134 self.log("Invalid certificate, closing connection. Pass --insecure to disable validation.", "warn")
135 else:
136 self.log(str(e), "warn")
137
138 self.log(repr(e), "debug")
139 # If an error propagates to the topmost level,
140 # we send an HTTP error response, which is both
141 # understandable by HTTP clients and humans.
142 try:
143 error_response = http.make_error_response(502, repr(e))
144 self.client_conn.send(http1.assemble_response(error_response))
145 except exceptions.TcpException:
146 pass
147 except Exception:
148 self.log(traceback.format_exc(), "error")
149 print(traceback.format_exc(), file=sys.stderr)
150 print("mitmproxy has crashed!", file=sys.stderr)
151 print("Please lodge a bug report at: https://github.com/mitmproxy/mitmproxy", file=sys.stderr)
152
153 self.log("clientdisconnect", "info")
154 self.channel.tell("clientdisconnect", root_layer)
155 self.client_conn.finish()
156
157 def log(self, msg, level):
158 msg = "{}: {}".format(human.format_address(self.client_conn.address), msg)
159 self.channel.tell("log", log.LogEntry(msg, level))
160
[end of mitmproxy/proxy/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mitmproxy/proxy/server.py b/mitmproxy/proxy/server.py
--- a/mitmproxy/proxy/server.py
+++ b/mitmproxy/proxy/server.py
@@ -114,9 +114,9 @@
def handle(self):
self.log("clientconnect", "info")
- root_layer = self._create_root_layer()
-
+ root_layer = None
try:
+ root_layer = self._create_root_layer()
root_layer = self.channel.ask("clientconnect", root_layer)
root_layer()
except exceptions.Kill:
@@ -151,7 +151,8 @@
print("Please lodge a bug report at: https://github.com/mitmproxy/mitmproxy", file=sys.stderr)
self.log("clientdisconnect", "info")
- self.channel.tell("clientdisconnect", root_layer)
+ if root_layer is not None:
+ self.channel.tell("clientdisconnect", root_layer)
self.client_conn.finish()
def log(self, msg, level):
|
{"golden_diff": "diff --git a/mitmproxy/proxy/server.py b/mitmproxy/proxy/server.py\n--- a/mitmproxy/proxy/server.py\n+++ b/mitmproxy/proxy/server.py\n@@ -114,9 +114,9 @@\n def handle(self):\n self.log(\"clientconnect\", \"info\")\n \n- root_layer = self._create_root_layer()\n-\n+ root_layer = None\n try:\n+ root_layer = self._create_root_layer()\n root_layer = self.channel.ask(\"clientconnect\", root_layer)\n root_layer()\n except exceptions.Kill:\n@@ -151,7 +151,8 @@\n print(\"Please lodge a bug report at: https://github.com/mitmproxy/mitmproxy\", file=sys.stderr)\n \n self.log(\"clientdisconnect\", \"info\")\n- self.channel.tell(\"clientdisconnect\", root_layer)\n+ if root_layer is not None:\n+ self.channel.tell(\"clientdisconnect\", root_layer)\n self.client_conn.finish()\n \n def log(self, msg, level):\n", "issue": "Traceback appears, when trying to set mitmproxy's address as upstream server for reverse/upstream mode\n##### Steps to reproduce the problem:\r\n\r\n1. Run mitmproxy in **reverse** or **upstream** mode, using its own address as upstream server address:\r\n`mitmproxy --mode reverse:http://127.0.0.1:8080` or\r\n`mitmproxy --mode upstream:http://127.0.0.1:8080`\r\n2. Make a request using pathoc `pathoc 127.0.0.1:8080 \"get:/\"` or a browser.\r\n\r\nI am seeing:\r\n\r\n\r\n##### Any other comments? What have you tried so far?\r\nhttps://github.com/mitmproxy/mitmproxy/blob/master/mitmproxy/proxy/protocol/base.py#L115\r\nshould be handled.\r\n\r\n##### System information\r\n\r\nMitmproxy: 3.0.0.dev1101 (commit d9d4d15) binary\r\nPython: 3.5.2\r\nOpenSSL: OpenSSL 1.1.0g 2 Nov 2017\r\nPlatform: Linux-4.4.0-104-generic-x86_64-with-debian-stretch-sid\r\n\n", "before_files": [{"content": "import sys\nimport traceback\n\nfrom mitmproxy import exceptions\nfrom mitmproxy import connections\nfrom mitmproxy import controller # noqa\nfrom mitmproxy import http\nfrom mitmproxy import log\nfrom mitmproxy import platform\nfrom mitmproxy.proxy import config\nfrom mitmproxy.proxy import modes\nfrom mitmproxy.proxy import root_context\nfrom mitmproxy.net import tcp\nfrom mitmproxy.net.http import http1\nfrom mitmproxy.utils import human\n\n\nclass DummyServer:\n bound = False\n\n def __init__(self, config=None):\n self.config = config\n self.address = \"dummy\"\n\n def set_channel(self, channel):\n pass\n\n def serve_forever(self):\n pass\n\n def shutdown(self):\n pass\n\n\nclass ProxyServer(tcp.TCPServer):\n allow_reuse_address = True\n bound = True\n\n def __init__(self, config: config.ProxyConfig) -> None:\n \"\"\"\n Raises ServerException if there's a startup problem.\n \"\"\"\n self.config = config\n try:\n super().__init__(\n (config.options.listen_host, config.options.listen_port)\n )\n if config.options.mode == \"transparent\":\n platform.init_transparent_mode()\n except Exception as e:\n if self.socket:\n self.socket.close()\n raise exceptions.ServerException(\n 'Error starting proxy server: ' + repr(e)\n ) from e\n self.channel = None # type: controller.Channel\n\n def set_channel(self, channel):\n self.channel = channel\n\n def handle_client_connection(self, conn, client_address):\n h = ConnectionHandler(\n conn,\n client_address,\n self.config,\n self.channel\n )\n h.handle()\n\n\nclass ConnectionHandler:\n\n def __init__(self, client_conn, client_address, config, channel):\n self.config = config # type: config.ProxyConfig\n self.client_conn = connections.ClientConnection(\n client_conn,\n client_address,\n None)\n \"\"\"@type: mitmproxy.proxy.connection.ClientConnection\"\"\"\n 
self.channel = channel\n \"\"\"@type: mitmproxy.controller.Channel\"\"\"\n\n def _create_root_layer(self):\n root_ctx = root_context.RootContext(\n self.client_conn,\n self.config,\n self.channel\n )\n\n mode = self.config.options.mode\n if mode.startswith(\"upstream:\"):\n return modes.HttpUpstreamProxy(\n root_ctx,\n self.config.upstream_server.address\n )\n elif mode == \"transparent\":\n return modes.TransparentProxy(root_ctx)\n elif mode.startswith(\"reverse:\"):\n server_tls = self.config.upstream_server.scheme == \"https\"\n return modes.ReverseProxy(\n root_ctx,\n self.config.upstream_server.address,\n server_tls\n )\n elif mode == \"socks5\":\n return modes.Socks5Proxy(root_ctx)\n elif mode == \"regular\":\n return modes.HttpProxy(root_ctx)\n elif callable(mode): # pragma: no cover\n return mode(root_ctx)\n else: # pragma: no cover\n raise ValueError(\"Unknown proxy mode: %s\" % mode)\n\n def handle(self):\n self.log(\"clientconnect\", \"info\")\n\n root_layer = self._create_root_layer()\n\n try:\n root_layer = self.channel.ask(\"clientconnect\", root_layer)\n root_layer()\n except exceptions.Kill:\n self.log(\"Connection killed\", \"info\")\n except exceptions.ProtocolException as e:\n if isinstance(e, exceptions.ClientHandshakeException):\n self.log(\n \"Client Handshake failed. \"\n \"The client may not trust the proxy's certificate for {}.\".format(e.server),\n \"warn\"\n )\n self.log(repr(e), \"debug\")\n elif isinstance(e, exceptions.InvalidServerCertificate):\n self.log(str(e), \"warn\")\n self.log(\"Invalid certificate, closing connection. Pass --insecure to disable validation.\", \"warn\")\n else:\n self.log(str(e), \"warn\")\n\n self.log(repr(e), \"debug\")\n # If an error propagates to the topmost level,\n # we send an HTTP error response, which is both\n # understandable by HTTP clients and humans.\n try:\n error_response = http.make_error_response(502, repr(e))\n self.client_conn.send(http1.assemble_response(error_response))\n except exceptions.TcpException:\n pass\n except Exception:\n self.log(traceback.format_exc(), \"error\")\n print(traceback.format_exc(), file=sys.stderr)\n print(\"mitmproxy has crashed!\", file=sys.stderr)\n print(\"Please lodge a bug report at: https://github.com/mitmproxy/mitmproxy\", file=sys.stderr)\n\n self.log(\"clientdisconnect\", \"info\")\n self.channel.tell(\"clientdisconnect\", root_layer)\n self.client_conn.finish()\n\n def log(self, msg, level):\n msg = \"{}: {}\".format(human.format_address(self.client_conn.address), msg)\n self.channel.tell(\"log\", log.LogEntry(msg, level))\n", "path": "mitmproxy/proxy/server.py"}]}
| 2,349 | 226 |
gh_patches_debug_40215
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2869
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider regis_uk is broken
During the global build at 2021-05-26-14-42-23, spider **regis_uk** failed with **33 features** and **35 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/regis_uk.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson))
</issue>
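A tiny sketch of the kind of defensive time parsing this usually requires, assuming (as the patch later in this record suggests) that some salon pages use a dot rather than a colon between hours and minutes; this helper is illustrative and not part of the spider:
```
import re


def split_hour_minute(value: str) -> list:
    """Accept both '9:00' and '9.00' style times (illustrative helper)."""
    return re.split(r"[:.]", value.strip())


print(split_hour_minute("9:00 "))  # ['9', '00']
print(split_hour_minute("9.00"))   # ['9', '00']
```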
<code>
[start of locations/spiders/regis_salon_uk.py]
1 import scrapy
2 from locations.items import GeojsonPointItem
3 import re
4
5 regex_am = r"\s?([Aa][Mm])"
6 regex_pm = r"\s?([Pp][Mm])"
7
8
9 class RegisUKSpider(scrapy.Spider):
10 name = "regis_uk"
11 item_attributes = { 'brand': "Regis Salon" }
12 allowed_domains = ["www.regissalons.co.uk"]
13 start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']
14
15 def convert_hours(self, hours):
16 hours = [x.strip() for x in hours]
17 hours = [x for x in hours if x]
18 for i in range(len(hours)):
19 converted_times = ''
20 if hours[i] != "Closed":
21 from_hr, to_hr = [hr.strip() for hr in hours[i].split('–')]
22 if re.search(regex_am, from_hr):
23 from_hr = re.sub(regex_am, '', from_hr)
24 hour_min = from_hr.split(':')
25 if len(hour_min[0]) < 2:
26 hour_min[0].zfill(2)
27 converted_times += (":".join(hour_min)) + ' - '
28 else:
29 from_hr = re.sub(regex_pm, '', from_hr)
30 hour_min = from_hr.split(':')
31 if int(hour_min[0]) < 12:
32 hour_min[0] = str(12 + int(hour_min[0]))
33 converted_times += (":".join(hour_min)) + ' - '
34
35 if re.search(regex_am, to_hr):
36 to_hr = re.sub(regex_am, '', to_hr)
37 hour_min = to_hr.split(':')
38 if len(hour_min[0]) < 2:
39 hour_min[0].zfill(2)
40 if int(hour_min[0]) == 12:
41 hour_min[0] = '00'
42 converted_times += (":".join(hour_min))
43 else:
44 to_hr = re.sub(regex_pm, '', to_hr)
45 hour_min = to_hr.split(':')
46 if int(hour_min[0]) < 12:
47 hour_min[0] = str(12 + int(hour_min[0]))
48 converted_times += (":".join(hour_min))
49 else:
50 converted_times += "off"
51 hours[i] = converted_times
52 days = ["Mo", "Tu", "We", "Th", "Fr", "Sa", "Su"]
53 hours = ''.join('{} {} '.format(*t) for t in zip(days, hours))
54 return hours
55
56 def parse_store(self, response):
57 phone = response.xpath(
58 '//a[@class="phone-tracked-link"]/text()').extract_first().strip()
59 lat = response.xpath(
60 '//div[@id="map-aside"]/@data-lat').extract_first()
61 lon = response.xpath(
62 '//div[@id="map-aside"]/@data-lng').extract_first()
63 hours = response.xpath(
64 '//div[@class="container"]//p[contains(., "am")'
65 ' or contains(., "Closed")]/text()').extract()
66 hours = self.convert_hours(hours)
67
68 yield GeojsonPointItem(
69 ref=response.url,
70 phone=phone,
71 lat=lat,
72 lon=lon,
73 opening_hours=hours,
74 website=response.url
75 )
76
77 def parse(self, response):
78 stores = response.xpath('//ul[@class="list"]//a/@href').extract()
79 for store in stores:
80 yield scrapy.Request(store, callback=self.parse_store)
81
[end of locations/spiders/regis_salon_uk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/regis_salon_uk.py b/locations/spiders/regis_salon_uk.py
--- a/locations/spiders/regis_salon_uk.py
+++ b/locations/spiders/regis_salon_uk.py
@@ -11,6 +11,7 @@
item_attributes = { 'brand': "Regis Salon" }
allowed_domains = ["www.regissalons.co.uk"]
start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']
+ download_delay = 4.0
def convert_hours(self, hours):
hours = [x.strip() for x in hours]
@@ -21,20 +22,20 @@
from_hr, to_hr = [hr.strip() for hr in hours[i].split('–')]
if re.search(regex_am, from_hr):
from_hr = re.sub(regex_am, '', from_hr)
- hour_min = from_hr.split(':')
+ hour_min = re.split('[:.]', from_hr)
if len(hour_min[0]) < 2:
hour_min[0].zfill(2)
converted_times += (":".join(hour_min)) + ' - '
else:
from_hr = re.sub(regex_pm, '', from_hr)
- hour_min = from_hr.split(':')
+ hour_min = re.split('[:.]', from_hr)
if int(hour_min[0]) < 12:
hour_min[0] = str(12 + int(hour_min[0]))
converted_times += (":".join(hour_min)) + ' - '
if re.search(regex_am, to_hr):
to_hr = re.sub(regex_am, '', to_hr)
- hour_min = to_hr.split(':')
+ hour_min = re.split('[:.]', to_hr)
if len(hour_min[0]) < 2:
hour_min[0].zfill(2)
if int(hour_min[0]) == 12:
@@ -42,7 +43,7 @@
converted_times += (":".join(hour_min))
else:
to_hr = re.sub(regex_pm, '', to_hr)
- hour_min = to_hr.split(':')
+ hour_min = re.split('[:.]', to_hr)
if int(hour_min[0]) < 12:
hour_min[0] = str(12 + int(hour_min[0]))
converted_times += (":".join(hour_min))
@@ -77,4 +78,6 @@
def parse(self, response):
stores = response.xpath('//ul[@class="list"]//a/@href').extract()
for store in stores:
+ if '/salon-region/' in store:
+ continue
yield scrapy.Request(store, callback=self.parse_store)
|
{"golden_diff": "diff --git a/locations/spiders/regis_salon_uk.py b/locations/spiders/regis_salon_uk.py\n--- a/locations/spiders/regis_salon_uk.py\n+++ b/locations/spiders/regis_salon_uk.py\n@@ -11,6 +11,7 @@\n item_attributes = { 'brand': \"Regis Salon\" }\n allowed_domains = [\"www.regissalons.co.uk\"]\n start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']\n+ download_delay = 4.0\n \n def convert_hours(self, hours):\n hours = [x.strip() for x in hours]\n@@ -21,20 +22,20 @@\n from_hr, to_hr = [hr.strip() for hr in hours[i].split('\u2013')]\n if re.search(regex_am, from_hr):\n from_hr = re.sub(regex_am, '', from_hr)\n- hour_min = from_hr.split(':')\n+ hour_min = re.split('[:.]', from_hr)\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n converted_times += (\":\".join(hour_min)) + ' - '\n else:\n from_hr = re.sub(regex_pm, '', from_hr)\n- hour_min = from_hr.split(':')\n+ hour_min = re.split('[:.]', from_hr)\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min)) + ' - '\n \n if re.search(regex_am, to_hr):\n to_hr = re.sub(regex_am, '', to_hr)\n- hour_min = to_hr.split(':')\n+ hour_min = re.split('[:.]', to_hr)\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n if int(hour_min[0]) == 12:\n@@ -42,7 +43,7 @@\n converted_times += (\":\".join(hour_min))\n else:\n to_hr = re.sub(regex_pm, '', to_hr)\n- hour_min = to_hr.split(':')\n+ hour_min = re.split('[:.]', to_hr)\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min))\n@@ -77,4 +78,6 @@\n def parse(self, response):\n stores = response.xpath('//ul[@class=\"list\"]//a/@href').extract()\n for store in stores:\n+ if '/salon-region/' in store:\n+ continue\n yield scrapy.Request(store, callback=self.parse_store)\n", "issue": "Spider regis_uk is broken\nDuring the global build at 2021-05-26-14-42-23, spider **regis_uk** failed with **33 features** and **35 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/regis_uk.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/regis_uk.geojson))\n", "before_files": [{"content": "import scrapy\nfrom locations.items import GeojsonPointItem\nimport re\n\nregex_am = r\"\\s?([Aa][Mm])\"\nregex_pm = r\"\\s?([Pp][Mm])\"\n\n\nclass RegisUKSpider(scrapy.Spider):\n name = \"regis_uk\"\n item_attributes = { 'brand': \"Regis Salon\" }\n allowed_domains = [\"www.regissalons.co.uk\"]\n start_urls = ['https://www.regissalons.co.uk/salon-locator?show-all=yes']\n\n def convert_hours(self, hours):\n hours = [x.strip() for x in hours]\n hours = [x for x in hours if x]\n for i in range(len(hours)):\n converted_times = ''\n if hours[i] != \"Closed\":\n from_hr, to_hr = [hr.strip() for hr in hours[i].split('\u2013')]\n if re.search(regex_am, from_hr):\n from_hr = re.sub(regex_am, '', from_hr)\n hour_min = from_hr.split(':')\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n converted_times += (\":\".join(hour_min)) + ' - '\n else:\n from_hr = re.sub(regex_pm, '', from_hr)\n hour_min = from_hr.split(':')\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min)) + ' - '\n\n if re.search(regex_am, to_hr):\n to_hr = re.sub(regex_am, '', to_hr)\n hour_min = to_hr.split(':')\n if len(hour_min[0]) < 2:\n hour_min[0].zfill(2)\n if 
int(hour_min[0]) == 12:\n hour_min[0] = '00'\n converted_times += (\":\".join(hour_min))\n else:\n to_hr = re.sub(regex_pm, '', to_hr)\n hour_min = to_hr.split(':')\n if int(hour_min[0]) < 12:\n hour_min[0] = str(12 + int(hour_min[0]))\n converted_times += (\":\".join(hour_min))\n else:\n converted_times += \"off\"\n hours[i] = converted_times\n days = [\"Mo\", \"Tu\", \"We\", \"Th\", \"Fr\", \"Sa\", \"Su\"]\n hours = ''.join('{} {} '.format(*t) for t in zip(days, hours))\n return hours\n\n def parse_store(self, response):\n phone = response.xpath(\n '//a[@class=\"phone-tracked-link\"]/text()').extract_first().strip()\n lat = response.xpath(\n '//div[@id=\"map-aside\"]/@data-lat').extract_first()\n lon = response.xpath(\n '//div[@id=\"map-aside\"]/@data-lng').extract_first()\n hours = response.xpath(\n '//div[@class=\"container\"]//p[contains(., \"am\")'\n ' or contains(., \"Closed\")]/text()').extract()\n hours = self.convert_hours(hours)\n\n yield GeojsonPointItem(\n ref=response.url,\n phone=phone,\n lat=lat,\n lon=lon,\n opening_hours=hours,\n website=response.url\n )\n\n def parse(self, response):\n stores = response.xpath('//ul[@class=\"list\"]//a/@href').extract()\n for store in stores:\n yield scrapy.Request(store, callback=self.parse_store)\n", "path": "locations/spiders/regis_salon_uk.py"}]}
| 1,660 | 613 |
gh_patches_debug_27291
|
rasdani/github-patches
|
git_diff
|
uccser__cs-unplugged-302
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django Debug Toolbar doesn't display in local Docker development environment
Created from work in #193.
</issue>
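For context (not part of the quoted issue): Django Debug Toolbar only renders when its `SHOW_TOOLBAR_CALLBACK` returns true, and the default callback requires the request's `REMOTE_ADDR` to appear in `INTERNAL_IPS`; when the site runs inside a Docker container that address is usually the container network's gateway rather than `127.0.0.1`. A minimal sketch of the callback-based workaround for a local-only settings module (the function name is illustrative):

```python
# Local-development settings sketch: always show the toolbar, bypassing the
# INTERNAL_IPS check that fails for requests arriving from the Docker network.
def show_django_debug_toolbar(request):
    """Show the Django Debug Toolbar for every request when developing locally."""
    return True


DEBUG_TOOLBAR_CONFIG = {
    "SHOW_TOOLBAR_CALLBACK": show_django_debug_toolbar,
}
```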
<code>
[start of csunplugged/config/settings/local.py]
1 # -*- coding: utf-8 -*-
2 """
3 Django settings for local development environment.
4
5 - Run in Debug mode
6 - Add custom dev application
7 - Add Django Debug Toolbar
8 - Add django-extensions
9 - Use console backend for emails
10 """
11
12 import socket
13 import os
14 from .base import * # noqa: F403
15
16 # DATABASE CONFIGURATION
17 # ----------------------------------------------------------------------------
18 # See: https://docs.djangoproject.com/en/dev/ref/settings/#databases
19 DATABASES = {
20 'default': env.db('DATABASE_URL'), # noqa: F405
21 }
22 DATABASES['default']['ATOMIC_REQUESTS'] = True
23
24 # DEBUG
25 # ----------------------------------------------------------------------------
26 DEBUG = env.bool('DJANGO_DEBUG', default=True) # noqa: F405
27 TEMPLATES[0]['OPTIONS']['debug'] = DEBUG # noqa: F405
28
29 # SECRET CONFIGURATION
30 # ----------------------------------------------------------------------------
31 # See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key
32 # Note: This key only used for development and testing.
33 SECRET_KEY = env('DJANGO_SECRET_KEY', default='l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f') # noqa: F405
34
35 # Mail settings
36 # ----------------------------------------------------------------------------
37
38 EMAIL_PORT = 1025
39
40 EMAIL_HOST = 'localhost'
41 EMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend') # noqa: F405
42
43
44 # CACHING
45 # ----------------------------------------------------------------------------
46 CACHES = {
47 'default': {
48 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
49 'LOCATION': ''
50 }
51 }
52
53 # django-debug-toolbar
54 # ----------------------------------------------------------------------------
55 MIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware', ] # noqa: F405
56 INSTALLED_APPS += ['debug_toolbar', ] # noqa: F405
57
58 INTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]
59 # tricks to have debug toolbar when developing with docker
60 if os.environ.get('USE_DOCKER') == 'yes':
61 ip = socket.gethostbyname(socket.gethostname())
62 INTERNAL_IPS += [ip[:-1] + '1']
63
64 DEBUG_TOOLBAR_CONFIG = {
65 'DISABLE_PANELS': [
66 'debug_toolbar.panels.redirects.RedirectsPanel',
67 ],
68 'SHOW_TEMPLATE_CONTEXT': True,
69 }
70
71 # django-extensions
72 # ----------------------------------------------------------------------------
73 INSTALLED_APPS += ['django_extensions', ]
74
75 # TESTING
76 # ----------------------------------------------------------------------------
77 TEST_RUNNER = 'django.test.runner.DiscoverRunner'
78
79
80 # Your local stuff: Below this line define 3rd party library settings
81 # ----------------------------------------------------------------------------
82 LOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405
83
[end of csunplugged/config/settings/local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/csunplugged/config/settings/local.py b/csunplugged/config/settings/local.py
--- a/csunplugged/config/settings/local.py
+++ b/csunplugged/config/settings/local.py
@@ -9,8 +9,6 @@
- Use console backend for emails
"""
-import socket
-import os
from .base import * # noqa: F403
# DATABASE CONFIGURATION
@@ -56,16 +54,23 @@
INSTALLED_APPS += ['debug_toolbar', ] # noqa: F405
INTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]
-# tricks to have debug toolbar when developing with docker
-if os.environ.get('USE_DOCKER') == 'yes':
- ip = socket.gethostbyname(socket.gethostname())
- INTERNAL_IPS += [ip[:-1] + '1']
+
+
+def show_django_debug_toolbar(request):
+ """Show Django Debug Toolbar in every request when running locally.
+
+ Args:
+ request: The request object.
+ """
+ return True
+
DEBUG_TOOLBAR_CONFIG = {
'DISABLE_PANELS': [
'debug_toolbar.panels.redirects.RedirectsPanel',
],
'SHOW_TEMPLATE_CONTEXT': True,
+ "SHOW_TOOLBAR_CALLBACK": show_django_debug_toolbar,
}
# django-extensions
@@ -79,4 +84,4 @@
# Your local stuff: Below this line define 3rd party library settings
# ----------------------------------------------------------------------------
-LOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405
+INSTALLED_APPS += ['dev.apps.DevConfig'] # noqa: F405
|
{"golden_diff": "diff --git a/csunplugged/config/settings/local.py b/csunplugged/config/settings/local.py\n--- a/csunplugged/config/settings/local.py\n+++ b/csunplugged/config/settings/local.py\n@@ -9,8 +9,6 @@\n - Use console backend for emails\n \"\"\"\n \n-import socket\n-import os\n from .base import * # noqa: F403\n \n # DATABASE CONFIGURATION\n@@ -56,16 +54,23 @@\n INSTALLED_APPS += ['debug_toolbar', ] # noqa: F405\n \n INTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]\n-# tricks to have debug toolbar when developing with docker\n-if os.environ.get('USE_DOCKER') == 'yes':\n- ip = socket.gethostbyname(socket.gethostname())\n- INTERNAL_IPS += [ip[:-1] + '1']\n+\n+\n+def show_django_debug_toolbar(request):\n+ \"\"\"Show Django Debug Toolbar in every request when running locally.\n+\n+ Args:\n+ request: The request object.\n+ \"\"\"\n+ return True\n+\n \n DEBUG_TOOLBAR_CONFIG = {\n 'DISABLE_PANELS': [\n 'debug_toolbar.panels.redirects.RedirectsPanel',\n ],\n 'SHOW_TEMPLATE_CONTEXT': True,\n+ \"SHOW_TOOLBAR_CALLBACK\": show_django_debug_toolbar,\n }\n \n # django-extensions\n@@ -79,4 +84,4 @@\n \n # Your local stuff: Below this line define 3rd party library settings\n # ----------------------------------------------------------------------------\n-LOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405\n+INSTALLED_APPS += ['dev.apps.DevConfig'] # noqa: F405\n", "issue": "Django Debug Toolbar doesn't display in local Docker development environment\nCreated from work in #193.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDjango settings for local development environment.\n\n- Run in Debug mode\n- Add custom dev application\n- Add Django Debug Toolbar\n- Add django-extensions\n- Use console backend for emails\n\"\"\"\n\nimport socket\nimport os\nfrom .base import * # noqa: F403\n\n# DATABASE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASES = {\n 'default': env.db('DATABASE_URL'), # noqa: F405\n}\nDATABASES['default']['ATOMIC_REQUESTS'] = True\n\n# DEBUG\n# ----------------------------------------------------------------------------\nDEBUG = env.bool('DJANGO_DEBUG', default=True) # noqa: F405\nTEMPLATES[0]['OPTIONS']['debug'] = DEBUG # noqa: F405\n\n# SECRET CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#secret-key\n# Note: This key only used for development and testing.\nSECRET_KEY = env('DJANGO_SECRET_KEY', default='l@@)w&&%&u37+sjz^lsx^+29y_333oid3ygxzucar^8o(axo*f') # noqa: F405\n\n# Mail settings\n# ----------------------------------------------------------------------------\n\nEMAIL_PORT = 1025\n\nEMAIL_HOST = 'localhost'\nEMAIL_BACKEND = env('DJANGO_EMAIL_BACKEND', default='django.core.mail.backends.console.EmailBackend') # noqa: F405\n\n\n# CACHING\n# ----------------------------------------------------------------------------\nCACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n 'LOCATION': ''\n }\n}\n\n# django-debug-toolbar\n# ----------------------------------------------------------------------------\nMIDDLEWARE += ['debug_toolbar.middleware.DebugToolbarMiddleware', ] # noqa: F405\nINSTALLED_APPS += ['debug_toolbar', ] # noqa: F405\n\nINTERNAL_IPS = ['127.0.0.1', '10.0.2.2', ]\n# tricks to have debug toolbar when developing with docker\nif os.environ.get('USE_DOCKER') == 'yes':\n ip = 
socket.gethostbyname(socket.gethostname())\n INTERNAL_IPS += [ip[:-1] + '1']\n\nDEBUG_TOOLBAR_CONFIG = {\n 'DISABLE_PANELS': [\n 'debug_toolbar.panels.redirects.RedirectsPanel',\n ],\n 'SHOW_TEMPLATE_CONTEXT': True,\n}\n\n# django-extensions\n# ----------------------------------------------------------------------------\nINSTALLED_APPS += ['django_extensions', ]\n\n# TESTING\n# ----------------------------------------------------------------------------\nTEST_RUNNER = 'django.test.runner.DiscoverRunner'\n\n\n# Your local stuff: Below this line define 3rd party library settings\n# ----------------------------------------------------------------------------\nLOCAL_APPS + ['dev.apps.DevConfig'] # noqa: F405\n", "path": "csunplugged/config/settings/local.py"}]}
| 1,325 | 373 |
gh_patches_debug_32436
|
rasdani/github-patches
|
git_diff
|
translate__pootle-4193
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Uploads perform poorly when some non-localisation error is picked up
On a Mac you could have stray `.DS_Store` files. Also, when using Poedit, it will create a `.mo` file whenever you save the `.po` file.
Errors such as `Unknown filetype (en_ZA/firefox/browser/chrome/overrides/.DS_Store)` are reported in this case.
Whenever such an error occurs, the upload reports the error and fails to complete. I think we should ignore errors unrelated to the translation files we are uploading, and at least complete correctly for the files that have no errors.
</issue>
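For context (not part of the quoted issue): the failure comes from trying to import every member of an uploaded zip, including files that are not translation files at all. A minimal sketch of filtering zip members by extension before importing them — the allowed extensions here are illustrative assumptions; a real fix would take them from the project's configured file types:

```python
import os
from zipfile import ZipFile


def iter_translation_members(zf: ZipFile, allowed_exts=("po", "pot")):
    """Yield zip member paths that look like translation files, skipping
    directory entries and stray files such as .DS_Store or compiled .mo files."""
    for path in zf.namelist():
        if path.endswith("/"):  # directory entry
            continue
        ext = os.path.splitext(path)[1].lstrip(".").lower()
        if ext in allowed_exts:
            yield path
```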
<code>
[start of pootle/apps/pootle_translationproject/views.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 import json
11 from urllib import quote, unquote
12
13 from django.conf import settings
14 from django.shortcuts import render
15 from django.utils import dateformat
16
17 from pootle.core.browser import (get_children, get_table_headings, get_parent,
18 get_vfolders)
19 from pootle.core.decorators import (get_path_obj, get_resource,
20 permission_required)
21 from pootle.core.helpers import (get_export_view_context, get_browser_context,
22 get_translation_context)
23 from pootle.core.utils.json import jsonify
24 from pootle_app.models.permissions import check_permission
25 from pootle_app.views.admin.permissions import admin_permissions as admin_perms
26 from staticpages.models import StaticPage
27
28
29 SIDEBAR_COOKIE_NAME = 'pootle-browser-sidebar'
30
31
32 @get_path_obj
33 @permission_required('administrate')
34 def admin_permissions(request, translation_project):
35 language = translation_project.language
36 project = translation_project.project
37
38 ctx = {
39 'page': 'admin-permissions',
40
41 'translation_project': translation_project,
42 'project': project,
43 'language': language,
44 'directory': translation_project.directory,
45 }
46
47 return admin_perms(request, translation_project.directory,
48 'translation_projects/admin/permissions.html', ctx)
49
50
51 def get_sidebar_announcements_context(request, project_code, language_code):
52 announcements = []
53 new_cookie_data = {}
54 cookie_data = {}
55
56 if SIDEBAR_COOKIE_NAME in request.COOKIES:
57 json_str = unquote(request.COOKIES[SIDEBAR_COOKIE_NAME])
58 cookie_data = json.loads(json_str)
59
60 is_sidebar_open = cookie_data.get('isOpen', True)
61
62 def _get_announcement(language_code=None, project_code=None):
63 if language_code is None:
64 virtual_path = u'announcements/projects/%s' % project_code
65 else:
66 path = u'/'.join(filter(None, [language_code, project_code]))
67 virtual_path = u'announcements/%s' % path
68
69 try:
70 return StaticPage.objects.live(request.user).get(
71 virtual_path=virtual_path,
72 )
73 except StaticPage.DoesNotExist:
74 return None
75
76 args_list = [
77 (None, project_code),
78 (language_code, None),
79 (language_code, project_code),
80 ]
81
82 for args in args_list:
83 announcement = _get_announcement(*args)
84
85 if announcement is None:
86 continue
87
88 announcements.append(announcement)
89 # The virtual_path cannot be used as is for JSON.
90 ann_key = announcement.virtual_path.replace('/', '_')
91 ann_mtime = dateformat.format(announcement.modified_on, 'U')
92 stored_mtime = cookie_data.get(ann_key, None)
93
94 if ann_mtime != stored_mtime:
95 new_cookie_data[ann_key] = ann_mtime
96
97 if new_cookie_data:
98 # Some announcement has been changed or was never displayed before, so
99 # display sidebar and save the changed mtimes in the cookie to not
100 # display it next time unless it is necessary.
101 is_sidebar_open = True
102 cookie_data.update(new_cookie_data)
103 new_cookie_data = quote(json.dumps(cookie_data))
104
105 ctx = {
106 'announcements': announcements,
107 'is_sidebar_open': is_sidebar_open,
108 'has_sidebar': len(announcements) > 0,
109 }
110
111 return ctx, new_cookie_data
112
113
114 @get_path_obj
115 @permission_required('view')
116 @get_resource
117 def browse(request, translation_project, dir_path, filename=None):
118 project = translation_project.project
119 language = translation_project.language
120
121 directory = request.directory
122 store = request.store
123 is_admin = check_permission('administrate', request)
124
125 ctx, cookie_data = get_sidebar_announcements_context(request, project.code,
126 language.code)
127
128 ctx.update(get_browser_context(request))
129
130 # TODO improve plugin logic
131 if "import_export" in settings.INSTALLED_APPS and request.user.is_authenticated():
132 from import_export.views import handle_upload_form
133
134 ctx.update(handle_upload_form(request))
135
136 has_download = (not translation_project.is_terminology_project and
137 (check_permission('translate', request) or
138 check_permission('suggest', request)))
139 ctx.update({
140 'display_download': has_download,
141 'has_sidebar': True,
142 })
143
144 stats = request.resource_obj.get_stats()
145
146 if store is None:
147 table_fields = ['name', 'progress', 'total', 'need-translation',
148 'suggestions', 'critical', 'last-updated', 'activity']
149 ctx.update({
150 'table': {
151 'id': 'tp',
152 'fields': table_fields,
153 'headings': get_table_headings(table_fields),
154 'items': get_children(directory),
155 }
156 })
157
158 if 'virtualfolder' in settings.INSTALLED_APPS:
159 vfolders = get_vfolders(directory, all_vfolders=is_admin)
160 if len(vfolders) > 0:
161 table_fields = ['name', 'priority', 'progress', 'total',
162 'need-translation', 'suggestions', 'critical',
163 'last-updated', 'activity']
164 ctx.update({
165 'vfolders': {
166 'id': 'vfolders',
167 'fields': table_fields,
168 'headings': get_table_headings(table_fields),
169 'items': vfolders,
170 },
171 })
172
173 #FIXME: set vfolders stats in the resource, don't inject them here.
174 stats['vfolders'] = {}
175
176 for vfolder_treeitem in directory.vf_treeitems.iterator():
177 if request.user.is_superuser or vfolder_treeitem.is_visible:
178 stats['vfolders'][vfolder_treeitem.code] = \
179 vfolder_treeitem.get_stats(include_children=False)
180
181 ctx.update({
182 'parent': get_parent(directory if store is None else store),
183 'translation_project': translation_project,
184 'project': project,
185 'language': language,
186 'stats': jsonify(stats),
187 'is_admin': is_admin,
188 'is_store': store is not None,
189
190 'browser_extends': 'translation_projects/base.html',
191 })
192
193 response = render(request, 'browser/index.html', ctx)
194
195 if cookie_data:
196 response.set_cookie(SIDEBAR_COOKIE_NAME, cookie_data)
197
198 return response
199
200
201 @get_path_obj
202 @permission_required('view')
203 @get_resource
204 def translate(request, translation_project, dir_path, filename):
205 project = translation_project.project
206
207 ctx = get_translation_context(request)
208
209 ctx.update({
210 'language': translation_project.language,
211 'project': project,
212 'translation_project': translation_project,
213
214 'editor_extends': 'translation_projects/base.html',
215 })
216
217 return render(request, "editor/main.html", ctx)
218
219
220 @get_path_obj
221 @permission_required('view')
222 @get_resource
223 def export_view(request, translation_project, dir_path, filename=None):
224 """Displays a list of units with filters applied."""
225 ctx = get_export_view_context(request)
226 ctx.update({
227 'source_language': translation_project.project.source_language,
228 'language': translation_project.language,
229 'project': translation_project.project,
230 })
231
232 return render(request, 'editor/export_view.html', ctx)
233
[end of pootle/apps/pootle_translationproject/views.py]
[start of pootle/apps/import_export/views.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 import logging
11 import os
12 from io import BytesIO
13 from zipfile import ZipFile, is_zipfile
14
15 from django.http import Http404, HttpResponse
16
17 from pootle_store.models import Store
18
19 from .forms import UploadForm
20 from .utils import import_file
21
22
23 def download(contents, name, content_type):
24 response = HttpResponse(contents, content_type=content_type)
25 response["Content-Disposition"] = "attachment; filename=%s" % (name)
26 return response
27
28
29 def export(request):
30 path = request.GET.get("path")
31 if not path:
32 raise Http404
33
34 stores = Store.objects.live().filter(pootle_path__startswith=path)
35 num_items = stores.count()
36
37 if not num_items:
38 raise Http404
39
40 if num_items == 1:
41 store = stores.get()
42 contents = BytesIO(store.serialize())
43 name = os.path.basename(store.pootle_path)
44 contents.seek(0)
45 return download(contents.read(), name, "application/octet-stream")
46
47 # zip all the stores together
48 f = BytesIO()
49 prefix = path.strip("/").replace("/", "-")
50 if not prefix:
51 prefix = "export"
52 with BytesIO() as f:
53 with ZipFile(f, "w") as zf:
54 for store in stores:
55 if store.is_terminology:
56 continue
57 try:
58 data = store.serialize()
59 except Exception as e:
60 logging.error("Could not serialize %r: %s", store.pootle_path, e)
61 continue
62 zf.writestr(prefix + store.pootle_path, data)
63
64 return download(f.getvalue(), "%s.zip" % (prefix), "application/zip")
65
66
67 def handle_upload_form(request):
68 """Process the upload form."""
69 if request.method == "POST" and "file" in request.FILES:
70 upload_form = UploadForm(request.POST, request.FILES)
71
72 if upload_form.is_valid():
73 django_file = request.FILES["file"]
74 try:
75 if is_zipfile(django_file):
76 with ZipFile(django_file, "r") as zf:
77 for path in zf.namelist():
78 if path.endswith("/"):
79 # is a directory
80 continue
81 with zf.open(path, "r") as f:
82 import_file(f, user=request.user)
83 else:
84 # It is necessary to seek to the beginning because
85 # is_zipfile fucks the file, and thus cannot be read.
86 django_file.seek(0)
87 import_file(django_file, user=request.user)
88 except Exception as e:
89 upload_form.add_error("file", e.message)
90 return {
91 "upload_form": upload_form,
92 }
93
94 # Always return a blank upload form unless the upload form is not valid.
95 return {
96 "upload_form": UploadForm(),
97 }
98
[end of pootle/apps/import_export/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pootle/apps/import_export/views.py b/pootle/apps/import_export/views.py
--- a/pootle/apps/import_export/views.py
+++ b/pootle/apps/import_export/views.py
@@ -64,10 +64,12 @@
return download(f.getvalue(), "%s.zip" % (prefix), "application/zip")
-def handle_upload_form(request):
+def handle_upload_form(request, project):
"""Process the upload form."""
if request.method == "POST" and "file" in request.FILES:
upload_form = UploadForm(request.POST, request.FILES)
+ project_filetypes = [project.localfiletype,
+ project.get_template_filetype()]
if upload_form.is_valid():
django_file = request.FILES["file"]
@@ -78,6 +80,9 @@
if path.endswith("/"):
# is a directory
continue
+ ext = os.path.splitext(path)[1].strip(".")
+ if ext not in project_filetypes:
+ continue
with zf.open(path, "r") as f:
import_file(f, user=request.user)
else:
diff --git a/pootle/apps/pootle_translationproject/views.py b/pootle/apps/pootle_translationproject/views.py
--- a/pootle/apps/pootle_translationproject/views.py
+++ b/pootle/apps/pootle_translationproject/views.py
@@ -131,7 +131,7 @@
if "import_export" in settings.INSTALLED_APPS and request.user.is_authenticated():
from import_export.views import handle_upload_form
- ctx.update(handle_upload_form(request))
+ ctx.update(handle_upload_form(request, project))
has_download = (not translation_project.is_terminology_project and
(check_permission('translate', request) or
|
{"golden_diff": "diff --git a/pootle/apps/import_export/views.py b/pootle/apps/import_export/views.py\n--- a/pootle/apps/import_export/views.py\n+++ b/pootle/apps/import_export/views.py\n@@ -64,10 +64,12 @@\n return download(f.getvalue(), \"%s.zip\" % (prefix), \"application/zip\")\n \n \n-def handle_upload_form(request):\n+def handle_upload_form(request, project):\n \"\"\"Process the upload form.\"\"\"\n if request.method == \"POST\" and \"file\" in request.FILES:\n upload_form = UploadForm(request.POST, request.FILES)\n+ project_filetypes = [project.localfiletype,\n+ project.get_template_filetype()]\n \n if upload_form.is_valid():\n django_file = request.FILES[\"file\"]\n@@ -78,6 +80,9 @@\n if path.endswith(\"/\"):\n # is a directory\n continue\n+ ext = os.path.splitext(path)[1].strip(\".\")\n+ if ext not in project_filetypes:\n+ continue\n with zf.open(path, \"r\") as f:\n import_file(f, user=request.user)\n else:\ndiff --git a/pootle/apps/pootle_translationproject/views.py b/pootle/apps/pootle_translationproject/views.py\n--- a/pootle/apps/pootle_translationproject/views.py\n+++ b/pootle/apps/pootle_translationproject/views.py\n@@ -131,7 +131,7 @@\n if \"import_export\" in settings.INSTALLED_APPS and request.user.is_authenticated():\n from import_export.views import handle_upload_form\n \n- ctx.update(handle_upload_form(request))\n+ ctx.update(handle_upload_form(request, project))\n \n has_download = (not translation_project.is_terminology_project and\n (check_permission('translate', request) or\n", "issue": "Uploads perform poorly when some non localisation error is picked up\nOn a Mac you could have stray `.DS_Store` files. Also when using POedit it will create a `.mo` files whenever you save the `.po` file.\n\nErrors such as `Unknown filetype (en_ZA/firefox/browser/chrome/overrides/.DS_Store)` are reported in this case.\n\nWhenever such an error occurs then the upload reports the error and fails to complete. We should I think ignore errors unrelated to the translations files we are uploading. And at least execute correctly for those where there are no errors.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport json\nfrom urllib import quote, unquote\n\nfrom django.conf import settings\nfrom django.shortcuts import render\nfrom django.utils import dateformat\n\nfrom pootle.core.browser import (get_children, get_table_headings, get_parent,\n get_vfolders)\nfrom pootle.core.decorators import (get_path_obj, get_resource,\n permission_required)\nfrom pootle.core.helpers import (get_export_view_context, get_browser_context,\n get_translation_context)\nfrom pootle.core.utils.json import jsonify\nfrom pootle_app.models.permissions import check_permission\nfrom pootle_app.views.admin.permissions import admin_permissions as admin_perms\nfrom staticpages.models import StaticPage\n\n\nSIDEBAR_COOKIE_NAME = 'pootle-browser-sidebar'\n\n\n@get_path_obj\n@permission_required('administrate')\ndef admin_permissions(request, translation_project):\n language = translation_project.language\n project = translation_project.project\n\n ctx = {\n 'page': 'admin-permissions',\n\n 'translation_project': translation_project,\n 'project': project,\n 'language': language,\n 'directory': translation_project.directory,\n }\n\n return admin_perms(request, translation_project.directory,\n 'translation_projects/admin/permissions.html', ctx)\n\n\ndef get_sidebar_announcements_context(request, project_code, language_code):\n announcements = []\n new_cookie_data = {}\n cookie_data = {}\n\n if SIDEBAR_COOKIE_NAME in request.COOKIES:\n json_str = unquote(request.COOKIES[SIDEBAR_COOKIE_NAME])\n cookie_data = json.loads(json_str)\n\n is_sidebar_open = cookie_data.get('isOpen', True)\n\n def _get_announcement(language_code=None, project_code=None):\n if language_code is None:\n virtual_path = u'announcements/projects/%s' % project_code\n else:\n path = u'/'.join(filter(None, [language_code, project_code]))\n virtual_path = u'announcements/%s' % path\n\n try:\n return StaticPage.objects.live(request.user).get(\n virtual_path=virtual_path,\n )\n except StaticPage.DoesNotExist:\n return None\n\n args_list = [\n (None, project_code),\n (language_code, None),\n (language_code, project_code),\n ]\n\n for args in args_list:\n announcement = _get_announcement(*args)\n\n if announcement is None:\n continue\n\n announcements.append(announcement)\n # The virtual_path cannot be used as is for JSON.\n ann_key = announcement.virtual_path.replace('/', '_')\n ann_mtime = dateformat.format(announcement.modified_on, 'U')\n stored_mtime = cookie_data.get(ann_key, None)\n\n if ann_mtime != stored_mtime:\n new_cookie_data[ann_key] = ann_mtime\n\n if new_cookie_data:\n # Some announcement has been changed or was never displayed before, so\n # display sidebar and save the changed mtimes in the cookie to not\n # display it next time unless it is necessary.\n is_sidebar_open = True\n cookie_data.update(new_cookie_data)\n new_cookie_data = quote(json.dumps(cookie_data))\n\n ctx = {\n 'announcements': announcements,\n 'is_sidebar_open': is_sidebar_open,\n 'has_sidebar': len(announcements) > 0,\n }\n\n return ctx, new_cookie_data\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef browse(request, translation_project, dir_path, filename=None):\n project = translation_project.project\n language = translation_project.language\n\n directory = request.directory\n store = request.store\n is_admin = check_permission('administrate', request)\n\n ctx, cookie_data = get_sidebar_announcements_context(request, project.code,\n 
language.code)\n\n ctx.update(get_browser_context(request))\n\n # TODO improve plugin logic\n if \"import_export\" in settings.INSTALLED_APPS and request.user.is_authenticated():\n from import_export.views import handle_upload_form\n\n ctx.update(handle_upload_form(request))\n\n has_download = (not translation_project.is_terminology_project and\n (check_permission('translate', request) or\n check_permission('suggest', request)))\n ctx.update({\n 'display_download': has_download,\n 'has_sidebar': True,\n })\n\n stats = request.resource_obj.get_stats()\n\n if store is None:\n table_fields = ['name', 'progress', 'total', 'need-translation',\n 'suggestions', 'critical', 'last-updated', 'activity']\n ctx.update({\n 'table': {\n 'id': 'tp',\n 'fields': table_fields,\n 'headings': get_table_headings(table_fields),\n 'items': get_children(directory),\n }\n })\n\n if 'virtualfolder' in settings.INSTALLED_APPS:\n vfolders = get_vfolders(directory, all_vfolders=is_admin)\n if len(vfolders) > 0:\n table_fields = ['name', 'priority', 'progress', 'total',\n 'need-translation', 'suggestions', 'critical',\n 'last-updated', 'activity']\n ctx.update({\n 'vfolders': {\n 'id': 'vfolders',\n 'fields': table_fields,\n 'headings': get_table_headings(table_fields),\n 'items': vfolders,\n },\n })\n\n #FIXME: set vfolders stats in the resource, don't inject them here.\n stats['vfolders'] = {}\n\n for vfolder_treeitem in directory.vf_treeitems.iterator():\n if request.user.is_superuser or vfolder_treeitem.is_visible:\n stats['vfolders'][vfolder_treeitem.code] = \\\n vfolder_treeitem.get_stats(include_children=False)\n\n ctx.update({\n 'parent': get_parent(directory if store is None else store),\n 'translation_project': translation_project,\n 'project': project,\n 'language': language,\n 'stats': jsonify(stats),\n 'is_admin': is_admin,\n 'is_store': store is not None,\n\n 'browser_extends': 'translation_projects/base.html',\n })\n\n response = render(request, 'browser/index.html', ctx)\n\n if cookie_data:\n response.set_cookie(SIDEBAR_COOKIE_NAME, cookie_data)\n\n return response\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef translate(request, translation_project, dir_path, filename):\n project = translation_project.project\n\n ctx = get_translation_context(request)\n\n ctx.update({\n 'language': translation_project.language,\n 'project': project,\n 'translation_project': translation_project,\n\n 'editor_extends': 'translation_projects/base.html',\n })\n\n return render(request, \"editor/main.html\", ctx)\n\n\n@get_path_obj\n@permission_required('view')\n@get_resource\ndef export_view(request, translation_project, dir_path, filename=None):\n \"\"\"Displays a list of units with filters applied.\"\"\"\n ctx = get_export_view_context(request)\n ctx.update({\n 'source_language': translation_project.project.source_language,\n 'language': translation_project.language,\n 'project': translation_project.project,\n })\n\n return render(request, 'editor/export_view.html', ctx)\n", "path": "pootle/apps/pootle_translationproject/views.py"}, {"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport logging\nimport os\nfrom io import BytesIO\nfrom zipfile import ZipFile, is_zipfile\n\nfrom django.http import Http404, HttpResponse\n\nfrom pootle_store.models import Store\n\nfrom .forms import UploadForm\nfrom .utils import import_file\n\n\ndef download(contents, name, content_type):\n response = HttpResponse(contents, content_type=content_type)\n response[\"Content-Disposition\"] = \"attachment; filename=%s\" % (name)\n return response\n\n\ndef export(request):\n path = request.GET.get(\"path\")\n if not path:\n raise Http404\n\n stores = Store.objects.live().filter(pootle_path__startswith=path)\n num_items = stores.count()\n\n if not num_items:\n raise Http404\n\n if num_items == 1:\n store = stores.get()\n contents = BytesIO(store.serialize())\n name = os.path.basename(store.pootle_path)\n contents.seek(0)\n return download(contents.read(), name, \"application/octet-stream\")\n\n # zip all the stores together\n f = BytesIO()\n prefix = path.strip(\"/\").replace(\"/\", \"-\")\n if not prefix:\n prefix = \"export\"\n with BytesIO() as f:\n with ZipFile(f, \"w\") as zf:\n for store in stores:\n if store.is_terminology:\n continue\n try:\n data = store.serialize()\n except Exception as e:\n logging.error(\"Could not serialize %r: %s\", store.pootle_path, e)\n continue\n zf.writestr(prefix + store.pootle_path, data)\n\n return download(f.getvalue(), \"%s.zip\" % (prefix), \"application/zip\")\n\n\ndef handle_upload_form(request):\n \"\"\"Process the upload form.\"\"\"\n if request.method == \"POST\" and \"file\" in request.FILES:\n upload_form = UploadForm(request.POST, request.FILES)\n\n if upload_form.is_valid():\n django_file = request.FILES[\"file\"]\n try:\n if is_zipfile(django_file):\n with ZipFile(django_file, \"r\") as zf:\n for path in zf.namelist():\n if path.endswith(\"/\"):\n # is a directory\n continue\n with zf.open(path, \"r\") as f:\n import_file(f, user=request.user)\n else:\n # It is necessary to seek to the beginning because\n # is_zipfile fucks the file, and thus cannot be read.\n django_file.seek(0)\n import_file(django_file, user=request.user)\n except Exception as e:\n upload_form.add_error(\"file\", e.message)\n return {\n \"upload_form\": upload_form,\n }\n\n # Always return a blank upload form unless the upload form is not valid.\n return {\n \"upload_form\": UploadForm(),\n }\n", "path": "pootle/apps/import_export/views.py"}]}
| 3,764 | 389 |
gh_patches_debug_3526
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-4487
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check documentation links as part of docs linting
## Description
Sphinx [linkcheck](https://www.sphinx-doc.org/en/master/usage/builders/index.html#sphinx.builders.linkcheck.CheckExternalLinksBuilder) allows the verification of links with the `requests` library to ensure that the links are still valid and active. It might be useful to run this regularly or as part of CI to catch dead or broken links.
## User Stories
As a user, clicking on a link and getting a 404 can be a frustrating experience.
</issue>
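For context (not part of the quoted issue): linkcheck is a separate Sphinx builder, typically run as `sphinx-build -b linkcheck docs docs/_build/linkcheck` (or `make linkcheck` where the standard Sphinx Makefile is in use). A minimal sketch of the related `conf.py` options — the option names are real Sphinx settings, the values below are illustrative:

```python
# conf.py additions for the linkcheck builder (illustrative values).
linkcheck_retries = 3  # retry flaky links a few times before reporting them broken
linkcheck_ignore = [
    r"http://localhost(:\d+)?/?",   # local-only URLs that will never resolve in CI
    r"http://127.0.0.1(:\d+)?/?",
]
```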
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # SecureDrop documentation build configuration file, created by
4 # sphinx-quickstart on Tue Oct 13 12:08:52 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import os
16
17 # Detect if we're being built by Read the Docs
18 # https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs
19 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
20
21 # If extensions (or modules to document with autodoc) are in another directory,
22 # add these directories to sys.path here. If the directory is relative to the
23 # documentation root, use os.path.abspath to make it absolute, like shown here.
24 # sys.path.insert(0, os.path.abspath('.'))
25
26 # -- General configuration ------------------------------------------------
27
28 # If your documentation needs a minimal Sphinx version, state it here.
29 # needs_sphinx = '1.0'
30
31 # Add any Sphinx extension module names here, as strings. They can be
32 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 # ones.
34 extensions = ['sphinx.ext.todo', ]
35
36 # Add any paths that contain templates here, relative to this directory.
37 templates_path = ['_templates']
38
39 # The suffix(es) of source filenames.
40 # You can specify multiple suffix as a list of string:
41 # source_suffix = ['.rst', '.md']
42 source_suffix = '.rst'
43
44 # The encoding of source files.
45 # source_encoding = 'utf-8-sig'
46
47 # The master toctree document.
48 master_doc = 'index'
49
50 # General information about the project.
51 project = u'SecureDrop'
52 copyright = u'2017, Freedom of the Press Foundation'
53 author = u'SecureDrop Team and Contributors'
54
55 # The version info for the project you're documenting, acts as replacement for
56 # |version| and |release|, also used in various other places throughout the
57 # built documents.
58 #
59 # The short X.Y version.
60 version = '0.13.0'
61 # The full version, including alpha/beta/rc tags.
62 release = '0.13.0'
63
64 # The language for content autogenerated by Sphinx. Refer to documentation
65 # for a list of supported languages.
66 #
67 # This is also used if you do content translation via gettext catalogs.
68 # Usually you set "language" from the command line for these cases.
69 language = None
70
71 # There are two options for replacing |today|: either, you set today to some
72 # non-false value, then it is used:
73 # today = ''
74 # Else, today_fmt is used as the format for a strftime call.
75 # today_fmt = '%B %d, %Y'
76
77 # List of patterns, relative to source directory, that match files and
78 # directories to ignore when looking for source files.
79 exclude_patterns = ['_build']
80
81 # The reST default role (used for this markup: `text`) to use for all
82 # documents.
83 # default_role = None
84
85 # If true, '()' will be appended to :func: etc. cross-reference text.
86 # add_function_parentheses = True
87
88 # If true, the current module name will be prepended to all description
89 # unit titles (such as .. function::).
90 # add_module_names = True
91
92 # If true, sectionauthor and moduleauthor directives will be shown in the
93 # output. They are ignored by default.
94 # show_authors = False
95
96 # The name of the Pygments (syntax highlighting) style to use.
97 pygments_style = 'sphinx'
98
99 # A list of ignored prefixes for module index sorting.
100 # modindex_common_prefix = []
101
102 # If true, keep warnings as "system message" paragraphs in the built documents.
103 # keep_warnings = False
104
105 # If true, `todo` and `todoList` produce output, else they produce nothing.
106 todo_include_todos = False
107
108
109 # -- Options for HTML output ----------------------------------------------
110
111 # The theme to use for HTML and HTML Help pages. See the documentation for
112 # a list of builtin themes.
113 if on_rtd:
114 html_theme = 'default'
115 else:
116 try:
117 # If you want to build the docs locally using the RTD theme,
118 # you may need to install it: ``pip install sphinx_rtd_theme``.
119 # https://github.com/snide/sphinx_rtd_theme#via-package
120 import sphinx_rtd_theme
121 html_theme = "sphinx_rtd_theme"
122 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
123 except ImportError:
124 # This theme is included with Sphinx and is quite nice (based
125 # on the Pocoo themes), but since we're using the RTD theme
126 # for the production docs, it's best to use that to avoid
127 # issues due to discrepancies between the themes.
128 html_theme = 'alabaster'
129
130 # Theme options are theme-specific and customize the look and feel of a theme
131 # further. For a list of options available for each theme, see the
132 # documentation.
133 # html_theme_options = {}
134
135 # Add any paths that contain custom themes here, relative to this directory.
136 # html_theme_path = []
137
138 # The name for this set of Sphinx documents. If None, it defaults to
139 # "<project> v<release> documentation".
140 # html_title = None
141
142 # A shorter title for the navigation bar. Default is the same as html_title.
143 # html_short_title = None
144
145 # The name of an image file (relative to this directory) to place at the top
146 # of the sidebar.
147 html_logo = '../securedrop/static/i/favicon.png'
148
149 # The name of an image file (within the static path) to use as favicon of the
150 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
151 # pixels large.
152 # html_favicon = None
153
154 # Add any paths that contain custom static files (such as style sheets) here,
155 # relative to this directory. They are copied after the builtin static files,
156 # so a file named "default.css" will overwrite the builtin "default.css".
157 # html_static_path = ['_static']
158
159 # Add any extra paths that contain custom files (such as robots.txt or
160 # .htaccess) here, relative to this directory. These files are copied
161 # directly to the root of the documentation.
162 # html_extra_path = []
163
164 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
165 # using the given strftime format.
166 # html_last_updated_fmt = '%b %d, %Y'
167
168 # If true, SmartyPants will be used to convert quotes and dashes to
169 # typographically correct entities.
170 # html_use_smartypants = True
171
172 # Custom sidebar templates, maps document names to template names.
173 # html_sidebars = {}
174
175 # Additional templates that should be rendered to pages, maps page names to
176 # template names.
177 # html_additional_pages = {}
178
179 # If false, no module index is generated.
180 # html_domain_indices = True
181
182 # If false, no index is generated.
183 # html_use_index = True
184
185 # If true, the index is split into individual pages for each letter.
186 # html_split_index = False
187
188 # If true, links to the reST sources are added to the pages.
189 # html_show_sourcelink = True
190
191 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
192 # html_show_sphinx = True
193
194 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
195 # html_show_copyright = True
196
197 # If true, an OpenSearch description file will be output, and all pages will
198 # contain a <link> tag referring to it. The value of this option must be the
199 # base URL from which the finished HTML is served.
200 # html_use_opensearch = ''
201
202 # This is the file name suffix for HTML files (e.g. ".xhtml").
203 # html_file_suffix = None
204
205 # Language to be used for generating the HTML full-text search index.
206 # Sphinx supports the following languages:
207 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
208 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
209 # html_search_language = 'en'
210
211 # A dictionary with options for the search language support, empty by default.
212 # Now only 'ja' uses this config value
213 # html_search_options = {'type': 'default'}
214
215 # The name of a javascript file (relative to the configuration directory) that
216 # implements a search results scorer. If empty, the default will be used.
217 # html_search_scorer = 'scorer.js'
218
219 # Output file base name for HTML help builder.
220 htmlhelp_basename = 'SecureDropdoc'
221
222 # -- Options for LaTeX output ---------------------------------------------
223
224 latex_elements = {
225 # The paper size ('letterpaper' or 'a4paper').
226 # 'papersize': 'letterpaper',
227
228 # The font size ('10pt', '11pt' or '12pt').
229 # 'pointsize': '10pt',
230
231 # Additional stuff for the LaTeX preamble.
232 # 'preamble': '',
233
234 # Latex figure (float) alignment
235 # 'figure_align': 'htbp',
236 }
237
238 # Grouping the document tree into LaTeX files. List of tuples
239 # (source start file, target name, title,
240 # author, documentclass [howto, manual, or own class]).
241 latex_documents = [
242 (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',
243 author, 'manual'),
244 ]
245
246 # The name of an image file (relative to this directory) to place at the top of
247 # the title page.
248 # latex_logo = None
249
250 # For "manual" documents, if this is true, then toplevel headings are parts,
251 # not chapters.
252 # latex_use_parts = False
253
254 # If true, show page references after internal links.
255 # latex_show_pagerefs = False
256
257 # If true, show URL addresses after external links.
258 # latex_show_urls = False
259
260 # Documents to append as an appendix to all manuals.
261 # latex_appendices = []
262
263 # If false, no module index is generated.
264 # latex_domain_indices = True
265
266
267 # -- Options for manual page output ---------------------------------------
268
269 # One entry per manual page. List of tuples
270 # (source start file, name, description, authors, manual section).
271 man_pages = [
272 (master_doc, 'securedrop', u'SecureDrop Documentation',
273 [author], 1)
274 ]
275
276 # If true, show URL addresses after external links.
277 # man_show_urls = False
278
279
280 # -- Options for Texinfo output -------------------------------------------
281
282 # Grouping the document tree into Texinfo files. List of tuples
283 # (source start file, target name, title, author,
284 # dir menu entry, description, category)
285 texinfo_documents = [
286 (master_doc, 'SecureDrop', u'SecureDrop Documentation',
287 author, 'SecureDrop', 'One line description of project.',
288 'Miscellaneous'),
289 ]
290
291 # Documents to append as an appendix to all manuals.
292 # texinfo_appendices = []
293
294 # If false, no module index is generated.
295 # texinfo_domain_indices = True
296
297 # How to display URL addresses: 'footnote', 'no', or 'inline'.
298 # texinfo_show_urls = 'footnote'
299
300 # If true, do not generate a @detailmenu in the "Top" node's menu.
301 # texinfo_no_detailmenu = False
302
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -299,3 +299,14 @@
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
+
+# -- Options for linkcheck --
+
+linkcheck_retries = 3
+
+linkcheck_ignore = [
+ r'http://127.0.0.1(:\d+)?/?',
+ r'http://localhost(:\d+)?/?',
+ 'https://forum.securedrop.org/admin/users/list/active',
+ 'https://weblate.securedrop.org/projects/securedrop/securedrop/#repository',
+]
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -299,3 +299,14 @@\n \n # If true, do not generate a @detailmenu in the \"Top\" node's menu.\n # texinfo_no_detailmenu = False\n+\n+# -- Options for linkcheck --\n+\n+linkcheck_retries = 3\n+\n+linkcheck_ignore = [\n+ r'http://127.0.0.1(:\\d+)?/?',\n+ r'http://localhost(:\\d+)?/?',\n+ 'https://forum.securedrop.org/admin/users/list/active',\n+ 'https://weblate.securedrop.org/projects/securedrop/securedrop/#repository',\n+]\n", "issue": "Check documentation links as part of docs linting\n## Description\r\n\r\nSphinx [linkcheck](https://www.sphinx-doc.org/en/master/usage/builders/index.html#sphinx.builders.linkcheck.CheckExternalLinksBuilder) allows the verification of links with the `requests` library to ensure that the links are still valid and active. It might be useful to run this regularly or as part of CI to catch dead or broken links.\r\n## User Stories\r\nAs a user, clicking on a link and getting a 404 can be a a frustrating experience.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# SecureDrop documentation build configuration file, created by\n# sphinx-quickstart on Tue Oct 13 12:08:52 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport os\n\n# Detect if we're being built by Read the Docs\n# https://docs.readthedocs.org/en/latest/faq.html#how-do-i-change-behavior-for-read-the-docs\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.todo', ]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'SecureDrop'\ncopyright = u'2017, Freedom of the Press Foundation'\nauthor = u'SecureDrop Team and Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '0.13.0'\n# The full version, including alpha/beta/rc tags.\nrelease = '0.13.0'\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nif on_rtd:\n html_theme = 'default'\nelse:\n try:\n # If you want to build the docs locally using the RTD theme,\n # you may need to install it: ``pip install sphinx_rtd_theme``.\n # https://github.com/snide/sphinx_rtd_theme#via-package\n import sphinx_rtd_theme\n html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n except ImportError:\n # This theme is included with Sphinx and is quite nice (based\n # on the Pocoo themes), but since we're using the RTD theme\n # for the production docs, it's best to use that to avoid\n # issues due to discrepancies between the themes.\n html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = '../securedrop/static/i/favicon.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'SecureDropdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n\n # Latex figure (float) alignment\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'SecureDrop.tex', u'SecureDrop Documentation',\n author, 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'securedrop', u'SecureDrop Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'SecureDrop', u'SecureDrop Documentation',\n author, 'SecureDrop', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}]}
| 4,010 | 168 |
gh_patches_debug_7664
|
rasdani/github-patches
|
git_diff
|
google__jax-11307
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Line-search x64 type promotion bug
There seems to be a bug in the line-search when enabling x64 mode but optimizing a purely float32 function.
```python
import jax.numpy as jnp
import jax.scipy.optimize
jax.config.update("jax_enable_x64", True)
def f(x):
return jnp.sum(x ** 2)
x0 = jnp.zeros(2, dtype=jnp.float32)
jax.scipy.optimize.minimize(f, x0, method='BFGS')
```
```
TypeError: body_fun output and input must have identical types, got
_ZoomState(done=ShapedArray(bool[]), failed=ShapedArray(bool[]), j=ShapedArray(int64[], weak_type=True), a_lo=ShapedArray(float64[]), phi_lo=ShapedArray(float64[]), dphi_lo=ShapedArray(float64[]), a_hi=ShapedArray(float64[]), phi_hi=ShapedArray(float64[]), dphi_hi=ShapedArray(float64[]), a_rec=ShapedArray(float64[]), phi_rec=ShapedArray(float64[]), a_star=ShapedArray(float64[]), phi_star=ShapedArray(float64[]), dphi_star=ShapedArray(float64[]), g_star=ShapedArray(float64[2]), nfev=ShapedArray(int64[], weak_type=True), ngev=ShapedArray(int64[], weak_type=True))
and
_ZoomState(done=ShapedArray(bool[], weak_type=True), failed=ShapedArray(bool[], weak_type=True), j=ShapedArray(int64[], weak_type=True), a_lo=ShapedArray(float64[], weak_type=True), phi_lo=ShapedArray(float32[]), dphi_lo=ShapedArray(float64[]), a_hi=ShapedArray(float64[], weak_type=True), phi_hi=ShapedArray(float64[]), dphi_hi=ShapedArray(float64[]), a_rec=ShapedArray(float64[], weak_type=True), phi_rec=ShapedArray(float64[]), a_star=ShapedArray(float64[], weak_type=True), phi_star=ShapedArray(float32[]), dphi_star=ShapedArray(float64[]), g_star=ShapedArray(float32[2]), nfev=ShapedArray(int64[], weak_type=True), ngev=ShapedArray(int64[], weak_type=True)).
```
-> `g_star` type differs
Is this expected behavior or a bug?
</issue>
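Note (illustrative, not part of the original issue report): the promotion happens because `jnp.eye` falls back to JAX's default float dtype, which becomes float64 once x64 mode is enabled. A minimal sketch, assuming a working JAX installation:

```python
import jax
jax.config.update("jax_enable_x64", True)
import jax.numpy as jnp

H = jnp.eye(2, dtype=jnp.float32)       # the caller works in float32
w = jnp.eye(2)                          # default dtype is float64 once x64 is enabled
print((w @ H).dtype)                    # float64: the float32 operand gets promoted
print(jnp.eye(2, dtype=H.dtype).dtype)  # float32: an explicit dtype keeps the precision
```

Passing an explicit `dtype` is the usual way to keep such helper constants in the caller's precision.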
<code>
[start of jax/_src/scipy/optimize/bfgs.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """The Broyden-Fletcher-Goldfarb-Shanno minimization algorithm."""
15 from functools import partial
16 from typing import Callable, NamedTuple, Optional, Union
17
18 import jax
19 import jax.numpy as jnp
20 from jax import lax
21 from jax._src.scipy.optimize.line_search import line_search
22
23
24 class _BFGSResults(NamedTuple):
25 """Results from BFGS optimization.
26
27 Parameters:
28 converged: True if minimization converged.
29 failed: True if line search failed.
30 k: integer the number of iterations of the BFGS update.
31 nfev: integer total number of objective evaluations performed.
32 ngev: integer total number of jacobian evaluations
33 nhev: integer total number of hessian evaluations
34 x_k: array containing the last argument value found during the search. If
35 the search converged, then this value is the argmin of the objective
36 function.
37 f_k: array containing the value of the objective function at `x_k`. If the
38 search converged, then this is the (local) minimum of the objective
39 function.
40 g_k: array containing the gradient of the objective function at `x_k`. If
41 the search converged the l2-norm of this tensor should be below the
42 tolerance.
43 H_k: array containing the inverse of the estimated Hessian.
44 status: int describing end state.
45 line_search_status: int describing line search end state (only means
46 something if line search fails).
47 """
48 converged: Union[bool, jnp.ndarray]
49 failed: Union[bool, jnp.ndarray]
50 k: Union[int, jnp.ndarray]
51 nfev: Union[int, jnp.ndarray]
52 ngev: Union[int, jnp.ndarray]
53 nhev: Union[int, jnp.ndarray]
54 x_k: jnp.ndarray
55 f_k: jnp.ndarray
56 g_k: jnp.ndarray
57 H_k: jnp.ndarray
58 old_old_fval: jnp.ndarray
59 status: Union[int, jnp.ndarray]
60 line_search_status: Union[int, jnp.ndarray]
61
62
63 _dot = partial(jnp.dot, precision=lax.Precision.HIGHEST)
64 _einsum = partial(jnp.einsum, precision=lax.Precision.HIGHEST)
65
66
67 def minimize_bfgs(
68 fun: Callable,
69 x0: jnp.ndarray,
70 maxiter: Optional[int] = None,
71 norm=jnp.inf,
72 gtol: float = 1e-5,
73 line_search_maxiter: int = 10,
74 ) -> _BFGSResults:
75 """Minimize a function using BFGS.
76
77 Implements the BFGS algorithm from
78 Algorithm 6.1 from Wright and Nocedal, 'Numerical Optimization', 1999, pg.
79 136-143.
80
81 Args:
82 fun: function of the form f(x) where x is a flat ndarray and returns a real
83 scalar. The function should be composed of operations with vjp defined.
84 x0: initial guess.
85 maxiter: maximum number of iterations.
86 norm: order of norm for convergence check. Default inf.
87 gtol: terminates minimization when |grad|_norm < g_tol.
88 line_search_maxiter: maximum number of linesearch iterations.
89
90 Returns:
91 Optimization result.
92 """
93
94 if maxiter is None:
95 maxiter = jnp.size(x0) * 200
96
97 d = x0.shape[0]
98
99 initial_H = jnp.eye(d, dtype=x0.dtype)
100 f_0, g_0 = jax.value_and_grad(fun)(x0)
101 state = _BFGSResults(
102 converged=jnp.linalg.norm(g_0, ord=norm) < gtol,
103 failed=False,
104 k=0,
105 nfev=1,
106 ngev=1,
107 nhev=0,
108 x_k=x0,
109 f_k=f_0,
110 g_k=g_0,
111 H_k=initial_H,
112 old_old_fval=f_0 + jnp.linalg.norm(g_0) / 2,
113 status=0,
114 line_search_status=0,
115 )
116
117 def cond_fun(state):
118 return (jnp.logical_not(state.converged)
119 & jnp.logical_not(state.failed)
120 & (state.k < maxiter))
121
122 def body_fun(state):
123 p_k = -_dot(state.H_k, state.g_k)
124 line_search_results = line_search(
125 fun,
126 state.x_k,
127 p_k,
128 old_fval=state.f_k,
129 old_old_fval=state.old_old_fval,
130 gfk=state.g_k,
131 maxiter=line_search_maxiter,
132 )
133 state = state._replace(
134 nfev=state.nfev + line_search_results.nfev,
135 ngev=state.ngev + line_search_results.ngev,
136 failed=line_search_results.failed,
137 line_search_status=line_search_results.status,
138 )
139 s_k = line_search_results.a_k * p_k
140 x_kp1 = state.x_k + s_k
141 f_kp1 = line_search_results.f_k
142 g_kp1 = line_search_results.g_k
143 y_k = g_kp1 - state.g_k
144 rho_k = jnp.reciprocal(_dot(y_k, s_k))
145
146 sy_k = s_k[:, jnp.newaxis] * y_k[jnp.newaxis, :]
147 w = jnp.eye(d) - rho_k * sy_k
148 H_kp1 = (_einsum('ij,jk,lk', w, state.H_k, w)
149 + rho_k * s_k[:, jnp.newaxis] * s_k[jnp.newaxis, :])
150 H_kp1 = jnp.where(jnp.isfinite(rho_k), H_kp1, state.H_k)
151 converged = jnp.linalg.norm(g_kp1, ord=norm) < gtol
152
153 state = state._replace(
154 converged=converged,
155 k=state.k + 1,
156 x_k=x_kp1,
157 f_k=f_kp1,
158 g_k=g_kp1,
159 H_k=H_kp1,
160 old_old_fval=state.f_k,
161 )
162 return state
163
164 state = lax.while_loop(cond_fun, body_fun, state)
165 status = jnp.where(
166 state.converged,
167 0, # converged
168 jnp.where(
169 state.k == maxiter,
170 1, # max iters reached
171 jnp.where(
172 state.failed,
173 2 + state.line_search_status, # ls failed (+ reason)
174 -1, # undefined
175 )
176 )
177 )
178 state = state._replace(status=status)
179 return state
180
[end of jax/_src/scipy/optimize/bfgs.py]
</code>
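As background for the error above (a generic sketch, not code from the JAX repository): `lax.while_loop` requires `body_fun` to return a carry with exactly the same shapes, dtypes, and weak-type flags as its input, so any accidental promotion inside the loop body surfaces as the mismatch reported in the issue. Keeping increments in the carry's own dtype avoids it:

```python
import jax.numpy as jnp
from jax import lax

def cond(c):
    return c < 3.0

def body(c):
    # cast the increment to the carry's dtype so the carry type is preserved
    return c + jnp.asarray(1.0, dtype=c.dtype)

out = lax.while_loop(cond, body, jnp.zeros((), jnp.float32))
print(out.dtype)  # float32
```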
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jax/_src/scipy/optimize/bfgs.py b/jax/_src/scipy/optimize/bfgs.py
--- a/jax/_src/scipy/optimize/bfgs.py
+++ b/jax/_src/scipy/optimize/bfgs.py
@@ -144,7 +144,7 @@
rho_k = jnp.reciprocal(_dot(y_k, s_k))
sy_k = s_k[:, jnp.newaxis] * y_k[jnp.newaxis, :]
- w = jnp.eye(d) - rho_k * sy_k
+ w = jnp.eye(d, dtype=rho_k.dtype) - rho_k * sy_k
H_kp1 = (_einsum('ij,jk,lk', w, state.H_k, w)
+ rho_k * s_k[:, jnp.newaxis] * s_k[jnp.newaxis, :])
H_kp1 = jnp.where(jnp.isfinite(rho_k), H_kp1, state.H_k)
|
{"golden_diff": "diff --git a/jax/_src/scipy/optimize/bfgs.py b/jax/_src/scipy/optimize/bfgs.py\n--- a/jax/_src/scipy/optimize/bfgs.py\n+++ b/jax/_src/scipy/optimize/bfgs.py\n@@ -144,7 +144,7 @@\n rho_k = jnp.reciprocal(_dot(y_k, s_k))\n \n sy_k = s_k[:, jnp.newaxis] * y_k[jnp.newaxis, :]\n- w = jnp.eye(d) - rho_k * sy_k\n+ w = jnp.eye(d, dtype=rho_k.dtype) - rho_k * sy_k\n H_kp1 = (_einsum('ij,jk,lk', w, state.H_k, w)\n + rho_k * s_k[:, jnp.newaxis] * s_k[jnp.newaxis, :])\n H_kp1 = jnp.where(jnp.isfinite(rho_k), H_kp1, state.H_k)\n", "issue": "Line-search x64 type promotion bug\nThere seems to be a bug in the line-search when enabling x64 mode but optimizing a purely float32 function.\r\n\r\n```python\r\nimport jax.numpy as jnp\r\nimport jax.scipy.optimize\r\n\r\njax.config.update(\"jax_enable_x64\", True)\r\n\r\n\r\ndef f(x):\r\n return jnp.sum(x ** 2)\r\n\r\n\r\nx0 = jnp.zeros(2, dtype=jnp.float32)\r\njax.scipy.optimize.minimize(f, x0, method='BFGS')\r\n```\r\n\r\n```\r\nTypeError: body_fun output and input must have identical types, got\r\n_ZoomState(done=ShapedArray(bool[]), failed=ShapedArray(bool[]), j=ShapedArray(int64[], weak_type=True), a_lo=ShapedArray(float64[]), phi_lo=ShapedArray(float64[]), dphi_lo=ShapedArray(float64[]), a_hi=ShapedArray(float64[]), phi_hi=ShapedArray(float64[]), dphi_hi=ShapedArray(float64[]), a_rec=ShapedArray(float64[]), phi_rec=ShapedArray(float64[]), a_star=ShapedArray(float64[]), phi_star=ShapedArray(float64[]), dphi_star=ShapedArray(float64[]), g_star=ShapedArray(float64[2]), nfev=ShapedArray(int64[], weak_type=True), ngev=ShapedArray(int64[], weak_type=True))\r\nand\r\n_ZoomState(done=ShapedArray(bool[], weak_type=True), failed=ShapedArray(bool[], weak_type=True), j=ShapedArray(int64[], weak_type=True), a_lo=ShapedArray(float64[], weak_type=True), phi_lo=ShapedArray(float32[]), dphi_lo=ShapedArray(float64[]), a_hi=ShapedArray(float64[], weak_type=True), phi_hi=ShapedArray(float64[]), dphi_hi=ShapedArray(float64[]), a_rec=ShapedArray(float64[], weak_type=True), phi_rec=ShapedArray(float64[]), a_star=ShapedArray(float64[], weak_type=True), phi_star=ShapedArray(float32[]), dphi_star=ShapedArray(float64[]), g_star=ShapedArray(float32[2]), nfev=ShapedArray(int64[], weak_type=True), ngev=ShapedArray(int64[], weak_type=True)).\r\n```\r\n\r\n-> `g_star` type differs\r\n\r\nIs this expected behavior or a bug?\r\n\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"The Broyden-Fletcher-Goldfarb-Shanno minimization algorithm.\"\"\"\nfrom functools import partial\nfrom typing import Callable, NamedTuple, Optional, Union\n\nimport jax\nimport jax.numpy as jnp\nfrom jax import lax\nfrom jax._src.scipy.optimize.line_search import line_search\n\n\nclass _BFGSResults(NamedTuple):\n \"\"\"Results from BFGS optimization.\n\n Parameters:\n converged: True if minimization converged.\n failed: True if line search failed.\n k: integer the number of iterations of the BFGS update.\n nfev: integer total number 
of objective evaluations performed.\n ngev: integer total number of jacobian evaluations\n nhev: integer total number of hessian evaluations\n x_k: array containing the last argument value found during the search. If\n the search converged, then this value is the argmin of the objective\n function.\n f_k: array containing the value of the objective function at `x_k`. If the\n search converged, then this is the (local) minimum of the objective\n function.\n g_k: array containing the gradient of the objective function at `x_k`. If\n the search converged the l2-norm of this tensor should be below the\n tolerance.\n H_k: array containing the inverse of the estimated Hessian.\n status: int describing end state.\n line_search_status: int describing line search end state (only means\n something if line search fails).\n \"\"\"\n converged: Union[bool, jnp.ndarray]\n failed: Union[bool, jnp.ndarray]\n k: Union[int, jnp.ndarray]\n nfev: Union[int, jnp.ndarray]\n ngev: Union[int, jnp.ndarray]\n nhev: Union[int, jnp.ndarray]\n x_k: jnp.ndarray\n f_k: jnp.ndarray\n g_k: jnp.ndarray\n H_k: jnp.ndarray\n old_old_fval: jnp.ndarray\n status: Union[int, jnp.ndarray]\n line_search_status: Union[int, jnp.ndarray]\n\n\n_dot = partial(jnp.dot, precision=lax.Precision.HIGHEST)\n_einsum = partial(jnp.einsum, precision=lax.Precision.HIGHEST)\n\n\ndef minimize_bfgs(\n fun: Callable,\n x0: jnp.ndarray,\n maxiter: Optional[int] = None,\n norm=jnp.inf,\n gtol: float = 1e-5,\n line_search_maxiter: int = 10,\n) -> _BFGSResults:\n \"\"\"Minimize a function using BFGS.\n\n Implements the BFGS algorithm from\n Algorithm 6.1 from Wright and Nocedal, 'Numerical Optimization', 1999, pg.\n 136-143.\n\n Args:\n fun: function of the form f(x) where x is a flat ndarray and returns a real\n scalar. The function should be composed of operations with vjp defined.\n x0: initial guess.\n maxiter: maximum number of iterations.\n norm: order of norm for convergence check. 
Default inf.\n gtol: terminates minimization when |grad|_norm < g_tol.\n line_search_maxiter: maximum number of linesearch iterations.\n\n Returns:\n Optimization result.\n \"\"\"\n\n if maxiter is None:\n maxiter = jnp.size(x0) * 200\n\n d = x0.shape[0]\n\n initial_H = jnp.eye(d, dtype=x0.dtype)\n f_0, g_0 = jax.value_and_grad(fun)(x0)\n state = _BFGSResults(\n converged=jnp.linalg.norm(g_0, ord=norm) < gtol,\n failed=False,\n k=0,\n nfev=1,\n ngev=1,\n nhev=0,\n x_k=x0,\n f_k=f_0,\n g_k=g_0,\n H_k=initial_H,\n old_old_fval=f_0 + jnp.linalg.norm(g_0) / 2,\n status=0,\n line_search_status=0,\n )\n\n def cond_fun(state):\n return (jnp.logical_not(state.converged)\n & jnp.logical_not(state.failed)\n & (state.k < maxiter))\n\n def body_fun(state):\n p_k = -_dot(state.H_k, state.g_k)\n line_search_results = line_search(\n fun,\n state.x_k,\n p_k,\n old_fval=state.f_k,\n old_old_fval=state.old_old_fval,\n gfk=state.g_k,\n maxiter=line_search_maxiter,\n )\n state = state._replace(\n nfev=state.nfev + line_search_results.nfev,\n ngev=state.ngev + line_search_results.ngev,\n failed=line_search_results.failed,\n line_search_status=line_search_results.status,\n )\n s_k = line_search_results.a_k * p_k\n x_kp1 = state.x_k + s_k\n f_kp1 = line_search_results.f_k\n g_kp1 = line_search_results.g_k\n y_k = g_kp1 - state.g_k\n rho_k = jnp.reciprocal(_dot(y_k, s_k))\n\n sy_k = s_k[:, jnp.newaxis] * y_k[jnp.newaxis, :]\n w = jnp.eye(d) - rho_k * sy_k\n H_kp1 = (_einsum('ij,jk,lk', w, state.H_k, w)\n + rho_k * s_k[:, jnp.newaxis] * s_k[jnp.newaxis, :])\n H_kp1 = jnp.where(jnp.isfinite(rho_k), H_kp1, state.H_k)\n converged = jnp.linalg.norm(g_kp1, ord=norm) < gtol\n\n state = state._replace(\n converged=converged,\n k=state.k + 1,\n x_k=x_kp1,\n f_k=f_kp1,\n g_k=g_kp1,\n H_k=H_kp1,\n old_old_fval=state.f_k,\n )\n return state\n\n state = lax.while_loop(cond_fun, body_fun, state)\n status = jnp.where(\n state.converged,\n 0, # converged\n jnp.where(\n state.k == maxiter,\n 1, # max iters reached\n jnp.where(\n state.failed,\n 2 + state.line_search_status, # ls failed (+ reason)\n -1, # undefined\n )\n )\n )\n state = state._replace(status=status)\n return state\n", "path": "jax/_src/scipy/optimize/bfgs.py"}]}
| 3,159 | 220 |
gh_patches_debug_25154
|
rasdani/github-patches
|
git_diff
|
falconry__falcon-1988
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CPython 3.10 support
CPython 3.10 has been released.
Although it may already work out of the box, we need to add official first class support anyway:
- [x] Add a CPython 3.10 CI gate: (https://github.com/falconry/falcon/pull/1922).
- [x] Build CPython 3.10 wheels.
- [x] Advertise support using ["trove classifiers"](https://pypi.org/classifiers/).
- [x] Check if anything needs an update in `CONTRIBUTING.md`.
In addition, check for any new warnings emitted when running tests, e.g., whether we are relying on any deprecated functionality that will be removed in future Python versions:
- [x] Multiple `DeprecationWarning`: non-integer arguments to randrange() have been deprecated since Python 3.10 and will be removed in a subsequent version https://github.com/falconry/falcon/pull/1972
- [x] `falcon/util/sync.py`:224: `DeprecationWarning`: There is no current event loop
loop = asyncio.get_event_loop()
[`asyncio.get_event_loop()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_event_loop): _Deprecated since version 3.10:_ Deprecation warning is emitted if there is no running event loop. In future Python releases, this function will be an alias of [`get_running_loop()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_running_loop).
- [x] `tests/asgi/test_ws.py`:344: `DeprecationWarning`: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11.
- [x] Anything else?
</issue>
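Background for the `get_event_loop` checklist item above (a minimal sketch assuming Python 3.10 semantics; it is not taken from the Falcon codebase): the warning only fires when no loop is running, and the two common replacements behave differently:

```python
import asyncio

async def answer():
    return 42

# Going through the policy object sidesteps the DeprecationWarning that the
# module-level asyncio.get_event_loop() emits on 3.10 when no loop is running.
loop = asyncio.get_event_loop_policy().get_event_loop()
print(loop.run_until_complete(answer()))

# asyncio.run() is the usual modern replacement, but it creates and closes a
# fresh event loop on every call, which is a behaviour change for callers that
# rely on reusing a single loop.
print(asyncio.run(answer()))
```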
<code>
[start of falcon/util/sync.py]
1 import asyncio
2 from concurrent.futures import ThreadPoolExecutor
3 from functools import partial
4 from functools import wraps
5 import inspect
6 import os
7 from typing import Callable
8
9
10 __all__ = [
11 'async_to_sync',
12 'create_task',
13 'get_running_loop',
14 'runs_sync',
15 'sync_to_async',
16 'wrap_sync_to_async',
17 'wrap_sync_to_async_unsafe',
18 ]
19
20
21 _one_thread_to_rule_them_all = ThreadPoolExecutor(max_workers=1)
22
23
24 try:
25 get_running_loop = asyncio.get_running_loop
26 except AttributeError: # pragma: nocover
27 # NOTE(kgriffs): This branch is definitely covered under py35 and py36
28 # but for some reason the codecov gate doesn't pick this up, hence
29 # the pragma above.
30
31 get_running_loop = asyncio.get_event_loop
32
33
34 try:
35 create_task = asyncio.create_task
36 except AttributeError: # pragma: nocover
37 # NOTE(kgriffs): This branch is definitely covered under py35 and py36
38 # but for some reason the codecov gate doesn't pick this up, hence
39 # the pragma above.
40
41 def create_task(coro, name=None):
42 return asyncio.ensure_future(coro)
43
44
45 def wrap_sync_to_async_unsafe(func) -> Callable:
46 """Wrap a callable in a coroutine that executes the callable directly.
47
48 This helper makes it easier to use synchronous callables with ASGI
49 apps. However, it is considered "unsafe" because it calls the wrapped
50 function directly in the same thread as the asyncio loop. Generally, you
51 should use :func:`~.wrap_sync_to_async` instead.
52
53 Warning:
54 This helper is only to be used for functions that do not perform any
55 blocking I/O or lengthy CPU-bound operations, since the entire async
56 loop will be blocked while the wrapped function is executed.
57 For a safer, non-blocking alternative that runs the function in a
58 thread pool executor, use :func:`~.sync_to_async` instead.
59
60 Arguments:
61 func (callable): Function, method, or other callable to wrap
62
63 Returns:
64 function: An awaitable coroutine function that wraps the
65 synchronous callable.
66 """
67
68 @wraps(func)
69 async def wrapper(*args, **kwargs):
70 return func(*args, **kwargs)
71
72 return wrapper
73
74
75 def wrap_sync_to_async(func, threadsafe=None) -> Callable:
76 """Wrap a callable in a coroutine that executes the callable in the background.
77
78 This helper makes it easier to call functions that can not be
79 ported to use async natively (e.g., functions exported by a database
80 library that does not yet support asyncio).
81
82 To execute blocking operations safely, without stalling the async
83 loop, the wrapped callable is scheduled to run in the background, on a
84 separate thread, when the wrapper is called.
85
86 Normally, the default executor for the running loop is used to schedule the
87 synchronous callable. If the callable is not thread-safe, it can be
88 scheduled serially in a global single-threaded executor.
89
90 Warning:
91 Wrapping a synchronous function safely adds a fair amount of overhead
92 to the function call, and should only be used when a native async
93 library is not available for the operation you wish to perform.
94
95 Arguments:
96 func (callable): Function, method, or other callable to wrap
97
98 Keyword Arguments:
99 threadsafe (bool): Set to ``False`` when the callable is not
100 thread-safe (default ``True``). When this argument is ``False``,
101 the wrapped callable will be scheduled to run serially in a
102 global single-threaded executor.
103
104 Returns:
105 function: An awaitable coroutine function that wraps the
106 synchronous callable.
107 """
108
109 if threadsafe is None or threadsafe:
110 executor = None # Use default
111 else:
112 executor = _one_thread_to_rule_them_all
113
114 @wraps(func)
115 async def wrapper(*args, **kwargs):
116 return await get_running_loop().run_in_executor(
117 executor, partial(func, *args, **kwargs)
118 )
119
120 return wrapper
121
122
123 async def sync_to_async(func, *args, **kwargs):
124 """Schedule a synchronous callable on the loop's default executor and await the result.
125
126 This helper makes it easier to call functions that can not be
127 ported to use async natively (e.g., functions exported by a database
128 library that does not yet support asyncio).
129
130 To execute blocking operations safely, without stalling the async
131 loop, the wrapped callable is scheduled to run in the background, on a
132 separate thread, when the wrapper is called.
133
134 The default executor for the running loop is used to schedule the
135 synchronous callable.
136
137 Warning:
138 This helper can only be used to execute thread-safe callables. If
139 the callable is not thread-safe, it can be executed serially
140 by first wrapping it with :func:`~.wrap_sync_to_async`, and then
141 executing the wrapper directly.
142
143 Warning:
144 Calling a synchronous function safely from an asyncio event loop
145 adds a fair amount of overhead to the function call, and should
146 only be used when a native async library is not available for the
147 operation you wish to perform.
148
149 Arguments:
150 func (callable): Function, method, or other callable to wrap
151 *args: All additional arguments are passed through to the callable.
152
153 Keyword Arguments:
154 **kwargs: All keyword arguments are passed through to the callable.
155
156 Returns:
157 function: An awaitable coroutine function that wraps the
158 synchronous callable.
159 """
160
161 return await get_running_loop().run_in_executor(
162 None, partial(func, *args, **kwargs)
163 )
164
165
166 def _should_wrap_non_coroutines() -> bool:
167 """Return ``True`` IFF ``FALCON_ASGI_WRAP_NON_COROUTINES`` is set in the environ.
168
169 This should only be used for Falcon's own test suite.
170 """
171 return 'FALCON_ASGI_WRAP_NON_COROUTINES' in os.environ
172
173
174 def _wrap_non_coroutine_unsafe(func):
175 """Wrap a coroutine using ``wrap_sync_to_async_unsafe()`` for internal test cases.
176
177 This method is intended for Falcon's own test suite and should not be
178 used by apps themselves. It provides a convenient way to reuse sync
179 methods for ASGI test cases when it is safe to do so.
180
181 Arguments:
182 func (callable): Function, method, or other callable to wrap
183 Returns:
184 When not in test mode, this function simply returns the callable
185 unchanged. Otherwise, if the callable is not a coroutine function,
186 it will be wrapped using ``wrap_sync_to_async_unsafe()``.
187 """
188
189 if func is None:
190 return func
191
192 if not _should_wrap_non_coroutines():
193 return func
194
195 if inspect.iscoroutinefunction(func):
196 return func
197
198 return wrap_sync_to_async_unsafe(func)
199
200
201 def async_to_sync(coroutine, *args, **kwargs):
202 """Invoke a coroutine function from a synchronous caller.
203
204 This method can be used to invoke an asynchronous task from a synchronous
205 context. The coroutine will be scheduled to run on the current event
206 loop for the current OS thread. If an event loop is not already running,
207 one will be created.
208
209 Warning:
210 This method is very inefficient and is intended primarily for testing
211 and prototyping.
212
213 Additional arguments not mentioned below are bound to the given
214 coroutine function via :any:`functools.partial`.
215
216 Args:
217 coroutine: A coroutine function to invoke.
218 *args: Additional args are passed through to the coroutine function.
219
220 Keyword Args:
221 **kwargs: Additional args are passed through to the coroutine function.
222 """
223
224 loop = asyncio.get_event_loop()
225 return loop.run_until_complete(partial(coroutine, *args, **kwargs)())
226
227
228 def runs_sync(coroutine):
229 """Transform a coroutine function into a synchronous method.
230
231 This is achieved by always invoking the decorated coroutine function via
232 :meth:`async_to_sync`.
233
234 Warning:
235 This decorator is very inefficient and should only be used for adapting
236 asynchronous test functions for use with synchronous test runners such
237 as ``pytest`` or the ``unittest`` module.
238
239 It will create an event loop for the current thread if one is not
240 already running.
241
242 Args:
243 coroutine: A coroutine function to masquerade as a synchronous one.
244
245 Returns:
246 callable: A synchronous function.
247 """
248
249 @wraps(coroutine)
250 def invoke(*args, **kwargs):
251 return async_to_sync(coroutine, *args, **kwargs)
252
253 return invoke
254
[end of falcon/util/sync.py]
</code>
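A short usage sketch of the helpers defined above (illustrative only; it assumes Falcon is installed so that `falcon.util.sync` is importable, and the blocking function used here is made up for the example):

```python
import time

from falcon.util.sync import async_to_sync, sync_to_async, wrap_sync_to_async

def blocking_lookup(key):
    time.sleep(0.1)              # stands in for blocking I/O
    return {"key": key}

async def handler():
    # schedule the blocking callable on the loop's default executor
    direct = await sync_to_async(blocking_lookup, "a")
    # or wrap it once and await the wrapper wherever it is needed
    lookup = wrap_sync_to_async(blocking_lookup)
    wrapped = await lookup("b")
    return direct, wrapped

print(async_to_sync(handler))
```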
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/falcon/util/sync.py b/falcon/util/sync.py
--- a/falcon/util/sync.py
+++ b/falcon/util/sync.py
@@ -210,9 +210,6 @@
This method is very inefficient and is intended primarily for testing
and prototyping.
- Additional arguments not mentioned below are bound to the given
- coroutine function via :any:`functools.partial`.
-
Args:
coroutine: A coroutine function to invoke.
*args: Additional args are passed through to the coroutine function.
@@ -221,8 +218,16 @@
**kwargs: Additional args are passed through to the coroutine function.
"""
- loop = asyncio.get_event_loop()
- return loop.run_until_complete(partial(coroutine, *args, **kwargs)())
+ # TODO(vytas): The canonical way of doing this for simple use cases is
+ # asyncio.run(), but that would be a breaking change wrt the above
+ # documented behaviour; breaking enough to break some of our own tests.
+
+ # NOTE(vytas): Work around get_event_loop deprecation in 3.10 by going via
+ # get_event_loop_policy(). This should be equivalent for async_to_sync's
+ # use case as it is currently impossible to invoke run_until_complete()
+ # from a running loop anyway.
+ loop = asyncio.get_event_loop_policy().get_event_loop()
+ return loop.run_until_complete(coroutine(*args, **kwargs))
def runs_sync(coroutine):
|
{"golden_diff": "diff --git a/falcon/util/sync.py b/falcon/util/sync.py\n--- a/falcon/util/sync.py\n+++ b/falcon/util/sync.py\n@@ -210,9 +210,6 @@\n This method is very inefficient and is intended primarily for testing\n and prototyping.\n \n- Additional arguments not mentioned below are bound to the given\n- coroutine function via :any:`functools.partial`.\n-\n Args:\n coroutine: A coroutine function to invoke.\n *args: Additional args are passed through to the coroutine function.\n@@ -221,8 +218,16 @@\n **kwargs: Additional args are passed through to the coroutine function.\n \"\"\"\n \n- loop = asyncio.get_event_loop()\n- return loop.run_until_complete(partial(coroutine, *args, **kwargs)())\n+ # TODO(vytas): The canonical way of doing this for simple use cases is\n+ # asyncio.run(), but that would be a breaking change wrt the above\n+ # documented behaviour; breaking enough to break some of our own tests.\n+\n+ # NOTE(vytas): Work around get_event_loop deprecation in 3.10 by going via\n+ # get_event_loop_policy(). This should be equivalent for async_to_sync's\n+ # use case as it is currently impossible to invoke run_until_complete()\n+ # from a running loop anyway.\n+ loop = asyncio.get_event_loop_policy().get_event_loop()\n+ return loop.run_until_complete(coroutine(*args, **kwargs))\n \n \n def runs_sync(coroutine):\n", "issue": "CPython 3.10 support\nCPython 3.10 has been released.\r\n\r\nAlthough it may already work out of the box, we need to add official first class support anyway:\r\n- [x] Add a CPython 3.10 CI gate: (https://github.com/falconry/falcon/pull/1922).\r\n- [x] Build CPython 3.10 wheels.\r\n- [x] Advertise support using [\"trove classifiers\"](https://pypi.org/classifiers/).\r\n- [x] Check if anything needs an update in `CONTRIBUTING.md`.\r\n\r\nIn addition, check for any new warnings emitted when running tests, e.g., whether we are relying on any deprecated functionality that will be removed in future Python versions:\r\n- [x] Multiple `DeprecationWarning`: non-integer arguments to randrange() have been deprecated since Python 3.10 and will be removed in a subsequent version https://github.com/falconry/falcon/pull/1972\r\n- [x] `falcon/util/sync.py`:224: `DeprecationWarning`: There is no current event loop\r\n loop = asyncio.get_event_loop()\r\n [`asyncio.get_event_loop()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_event_loop): _Deprecated since version 3.10:_ Deprecation warning is emitted if there is no running event loop. 
In future Python releases, this function will be an alias of [`get_running_loop()`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.get_running_loop).\r\n- [x] `tests/asgi/test_ws.py`:344: `DeprecationWarning`: The explicit passing of coroutine objects to asyncio.wait() is deprecated since Python 3.8, and scheduled for removal in Python 3.11.\r\n- [x] Anything else?\n", "before_files": [{"content": "import asyncio\nfrom concurrent.futures import ThreadPoolExecutor\nfrom functools import partial\nfrom functools import wraps\nimport inspect\nimport os\nfrom typing import Callable\n\n\n__all__ = [\n 'async_to_sync',\n 'create_task',\n 'get_running_loop',\n 'runs_sync',\n 'sync_to_async',\n 'wrap_sync_to_async',\n 'wrap_sync_to_async_unsafe',\n]\n\n\n_one_thread_to_rule_them_all = ThreadPoolExecutor(max_workers=1)\n\n\ntry:\n get_running_loop = asyncio.get_running_loop\nexcept AttributeError: # pragma: nocover\n # NOTE(kgriffs): This branch is definitely covered under py35 and py36\n # but for some reason the codecov gate doesn't pick this up, hence\n # the pragma above.\n\n get_running_loop = asyncio.get_event_loop\n\n\ntry:\n create_task = asyncio.create_task\nexcept AttributeError: # pragma: nocover\n # NOTE(kgriffs): This branch is definitely covered under py35 and py36\n # but for some reason the codecov gate doesn't pick this up, hence\n # the pragma above.\n\n def create_task(coro, name=None):\n return asyncio.ensure_future(coro)\n\n\ndef wrap_sync_to_async_unsafe(func) -> Callable:\n \"\"\"Wrap a callable in a coroutine that executes the callable directly.\n\n This helper makes it easier to use synchronous callables with ASGI\n apps. However, it is considered \"unsafe\" because it calls the wrapped\n function directly in the same thread as the asyncio loop. Generally, you\n should use :func:`~.wrap_sync_to_async` instead.\n\n Warning:\n This helper is only to be used for functions that do not perform any\n blocking I/O or lengthy CPU-bound operations, since the entire async\n loop will be blocked while the wrapped function is executed.\n For a safer, non-blocking alternative that runs the function in a\n thread pool executor, use :func:`~.sync_to_async` instead.\n\n Arguments:\n func (callable): Function, method, or other callable to wrap\n\n Returns:\n function: An awaitable coroutine function that wraps the\n synchronous callable.\n \"\"\"\n\n @wraps(func)\n async def wrapper(*args, **kwargs):\n return func(*args, **kwargs)\n\n return wrapper\n\n\ndef wrap_sync_to_async(func, threadsafe=None) -> Callable:\n \"\"\"Wrap a callable in a coroutine that executes the callable in the background.\n\n This helper makes it easier to call functions that can not be\n ported to use async natively (e.g., functions exported by a database\n library that does not yet support asyncio).\n\n To execute blocking operations safely, without stalling the async\n loop, the wrapped callable is scheduled to run in the background, on a\n separate thread, when the wrapper is called.\n\n Normally, the default executor for the running loop is used to schedule the\n synchronous callable. 
If the callable is not thread-safe, it can be\n scheduled serially in a global single-threaded executor.\n\n Warning:\n Wrapping a synchronous function safely adds a fair amount of overhead\n to the function call, and should only be used when a native async\n library is not available for the operation you wish to perform.\n\n Arguments:\n func (callable): Function, method, or other callable to wrap\n\n Keyword Arguments:\n threadsafe (bool): Set to ``False`` when the callable is not\n thread-safe (default ``True``). When this argument is ``False``,\n the wrapped callable will be scheduled to run serially in a\n global single-threaded executor.\n\n Returns:\n function: An awaitable coroutine function that wraps the\n synchronous callable.\n \"\"\"\n\n if threadsafe is None or threadsafe:\n executor = None # Use default\n else:\n executor = _one_thread_to_rule_them_all\n\n @wraps(func)\n async def wrapper(*args, **kwargs):\n return await get_running_loop().run_in_executor(\n executor, partial(func, *args, **kwargs)\n )\n\n return wrapper\n\n\nasync def sync_to_async(func, *args, **kwargs):\n \"\"\"Schedule a synchronous callable on the loop's default executor and await the result.\n\n This helper makes it easier to call functions that can not be\n ported to use async natively (e.g., functions exported by a database\n library that does not yet support asyncio).\n\n To execute blocking operations safely, without stalling the async\n loop, the wrapped callable is scheduled to run in the background, on a\n separate thread, when the wrapper is called.\n\n The default executor for the running loop is used to schedule the\n synchronous callable.\n\n Warning:\n This helper can only be used to execute thread-safe callables. If\n the callable is not thread-safe, it can be executed serially\n by first wrapping it with :func:`~.wrap_sync_to_async`, and then\n executing the wrapper directly.\n\n Warning:\n Calling a synchronous function safely from an asyncio event loop\n adds a fair amount of overhead to the function call, and should\n only be used when a native async library is not available for the\n operation you wish to perform.\n\n Arguments:\n func (callable): Function, method, or other callable to wrap\n *args: All additional arguments are passed through to the callable.\n\n Keyword Arguments:\n **kwargs: All keyword arguments are passed through to the callable.\n\n Returns:\n function: An awaitable coroutine function that wraps the\n synchronous callable.\n \"\"\"\n\n return await get_running_loop().run_in_executor(\n None, partial(func, *args, **kwargs)\n )\n\n\ndef _should_wrap_non_coroutines() -> bool:\n \"\"\"Return ``True`` IFF ``FALCON_ASGI_WRAP_NON_COROUTINES`` is set in the environ.\n\n This should only be used for Falcon's own test suite.\n \"\"\"\n return 'FALCON_ASGI_WRAP_NON_COROUTINES' in os.environ\n\n\ndef _wrap_non_coroutine_unsafe(func):\n \"\"\"Wrap a coroutine using ``wrap_sync_to_async_unsafe()`` for internal test cases.\n\n This method is intended for Falcon's own test suite and should not be\n used by apps themselves. It provides a convenient way to reuse sync\n methods for ASGI test cases when it is safe to do so.\n\n Arguments:\n func (callable): Function, method, or other callable to wrap\n Returns:\n When not in test mode, this function simply returns the callable\n unchanged. 
Otherwise, if the callable is not a coroutine function,\n it will be wrapped using ``wrap_sync_to_async_unsafe()``.\n \"\"\"\n\n if func is None:\n return func\n\n if not _should_wrap_non_coroutines():\n return func\n\n if inspect.iscoroutinefunction(func):\n return func\n\n return wrap_sync_to_async_unsafe(func)\n\n\ndef async_to_sync(coroutine, *args, **kwargs):\n \"\"\"Invoke a coroutine function from a synchronous caller.\n\n This method can be used to invoke an asynchronous task from a synchronous\n context. The coroutine will be scheduled to run on the current event\n loop for the current OS thread. If an event loop is not already running,\n one will be created.\n\n Warning:\n This method is very inefficient and is intended primarily for testing\n and prototyping.\n\n Additional arguments not mentioned below are bound to the given\n coroutine function via :any:`functools.partial`.\n\n Args:\n coroutine: A coroutine function to invoke.\n *args: Additional args are passed through to the coroutine function.\n\n Keyword Args:\n **kwargs: Additional args are passed through to the coroutine function.\n \"\"\"\n\n loop = asyncio.get_event_loop()\n return loop.run_until_complete(partial(coroutine, *args, **kwargs)())\n\n\ndef runs_sync(coroutine):\n \"\"\"Transform a coroutine function into a synchronous method.\n\n This is achieved by always invoking the decorated coroutine function via\n :meth:`async_to_sync`.\n\n Warning:\n This decorator is very inefficient and should only be used for adapting\n asynchronous test functions for use with synchronous test runners such\n as ``pytest`` or the ``unittest`` module.\n\n It will create an event loop for the current thread if one is not\n already running.\n\n Args:\n coroutine: A coroutine function to masquerade as a synchronous one.\n\n Returns:\n callable: A synchronous function.\n \"\"\"\n\n @wraps(coroutine)\n def invoke(*args, **kwargs):\n return async_to_sync(coroutine, *args, **kwargs)\n\n return invoke\n", "path": "falcon/util/sync.py"}]}
| 3,492 | 346 |
gh_patches_debug_43977
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-3186
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/zero/init_ctx/init_context.py]
1 import contextlib
2 import functools
3 from typing import Optional
4 from contextlib import AbstractContextManager
5
6 import torch
7 import torch.nn as nn
8 import torch.distributed as dist
9
10 from colossalai.context.parallel_mode import ParallelMode
11 from colossalai.core import global_context as gpc
12 from colossalai.context.singleton_meta import SingletonMeta
13 from colossalai.logging import get_dist_logger
14 from colossalai.zero.shard_utils import BaseShardStrategy
15 from colossalai.zero.sharded_model._utils import cast_tensor_to_fp16
16 from colossalai.zero.sharded_model.sharded_model_v2 import ShardedModelV2
17 from colossalai.zero.sharded_param import ShardedParamV2
18 from colossalai.utils.model.utils import InsertPostInitMethodToModuleSubClasses
19
20
21 class ZeroContextConfig(object):
22 """The configuration used to control zero context initialization.
23
24 Args:
25 target_device (torch.device): The device where param data are after exiting the context.
26 replicated (bool, optional): Whether the param is replicated across data parallel group.
27 Some parameters are not replicated, e.g. parameters in MOE experts.
28 shard_param (bool, optional): Is param sharded after exiting the context. Defaults to False.
29 """
30
31 def __init__(self, target_device: torch.device, replicated: bool = True, shard_param: bool = False):
32 super().__init__()
33
34 if shard_param:
35 assert replicated, "Non-replicated parameters can't be sharded."
36
37 # replicated no-shard parameters should locate in cuda, since we will broadcast them soon
38 if replicated and not shard_param:
39 assert target_device.type == 'cuda', "Replicated no-shard paramters should locate in cuda."
40
41 self.target_device = target_device
42 self.is_replicated: bool = replicated
43 self.shard_param: bool = shard_param
44
45
46 class ZeroInitContext(InsertPostInitMethodToModuleSubClasses):
47 """A context to initialize model.
48
49 1. Convert the model to fp16.
50     2. The parameters of the module are adapted to type ShardedParameter.
51 3. Shard the param and grad according to flags.
52
53 Args:
54 target_device (torch.device): The device where param data are after exiting the context.
55 shard_strategy (BaseShardStrategy): Shard strategy instance.
56 seed (int, optional): Random seed for weight initialization
57 shard_param (bool, optional): Is param sharded after exiting the context. Defaults to False.
58 default_dtype (torch.dtype, optional): If it's not None, parameters will be initialized as ``default_dtype`` then converted to fp16.
59 model_numel_tensor (torch.Tensor, optional): A tensor which will store the number of elements of model. Defaults to torch.zeros(1, dtype=torch.int).
60 """
61
62 def __init__(self,
63 target_device: torch.device,
64 shard_strategy: BaseShardStrategy,
65 seed: int = 2**10 - 1,
66 shard_param: bool = False,
67 default_dtype: Optional[torch.dtype] = None,
68 model_numel_tensor: torch.Tensor = torch.zeros(1, dtype=torch.long)):
69
70 super().__init__(default_dtype=default_dtype)
71 self.shard_strategy = shard_strategy
72 self.param_list = []
73 self.model_numel_tensor = model_numel_tensor
74 self.seed = seed
75 self.dp_process_group = gpc.get_group(ParallelMode.DATA)
76
77 self.config = ZeroContextConfig(target_device=target_device, replicated=True, shard_param=shard_param)
78
79 ZeroContextMgr().current_context = self
80
81 self.param_numel = {}
82 self.top_module = None
83
84 @property
85 def target_device(self):
86 return self.config.target_device
87
88 @property
89 def is_replicated(self):
90 return self.config.is_replicated
91
92 @property
93 def shard_param(self):
94 return self.config.shard_param
95
96 @staticmethod
97 def calc_fanin_fanout(tensor: torch.Tensor):
98 """We use this function to substitute fan-in and fan-out calculation in torch.nn.init.
99 This can help us get correct fan-in and fan-out for sharded tensor.
100 """
101         assert isinstance(tensor, nn.Parameter), "Sharded tensor initialization is only allowed for parameters"
102
103 # get correct shape of input tensor
104 if not hasattr(tensor, 'colo_attr') or not tensor.colo_attr.param_is_sharded:
105 tensor_shape = tensor.shape
106 else:
107 tensor_shape = tensor.colo_attr.sharded_data_tensor.origin_shape
108
109 dimensions = len(tensor_shape)
110 if dimensions < 2:
111 raise ValueError("Fan in and fan out can not be computed for tensor with fewer than 2 dimensions")
112
113 num_input_fmaps = tensor_shape[1]
114 num_output_fmaps = tensor_shape[0]
115 receptive_field_size = 1
116 if dimensions > 2:
117 # math.prod is not always available, accumulate the product manually
118 # we could use functools.reduce but that is not supported by TorchScript
119 for s in tensor_shape[2:]:
120 receptive_field_size *= s
121 fan_in = num_input_fmaps * receptive_field_size
122 fan_out = num_output_fmaps * receptive_field_size
123
124 return fan_in, fan_out
125
126 def _pre_context_exec(self):
127 """
128 The Callback function when entering the context
129 """
130 self.logger = get_dist_logger("ZeroInitContext")
131
132 # substitute fan-in and fan-out calculation
133 self.nn_fanin_fanout = nn.init._calculate_fan_in_and_fan_out
134 nn.init._calculate_fan_in_and_fan_out = self.calc_fanin_fanout
135
136 self.module_load_from_state_dict = nn.Module._load_from_state_dict
137 shard_strategy = self.shard_strategy if self.config.shard_param else None
138 nn.Module._load_from_state_dict = functools.partialmethod(ShardedModelV2._colo_load_from_state_dict,
139 shard_strategy=shard_strategy)
140 self.module_state_dict = nn.Module.state_dict
141 nn.Module.state_dict = functools.partialmethod(ShardedModelV2._colo_state_dict,
142 shard_strategy=shard_strategy,
143 state_dict_func=self.module_state_dict,
144 process_group=self.dp_process_group)
145
146 # reserve rng states
147 self.cpu_rng_state = torch.get_rng_state()
148 self.cuda_rng_state = torch.cuda.get_rng_state()
149
150 # set new seed for initialization, since we initialize sharded tensor separately
151 # we don't want all processes have the same seed
152 # otherwise all sharded tensors are same after init
153 offset = self.seed + 1 # we want to have more 1 in binary format seed
154 torch.manual_seed(self.seed + offset * dist.get_rank())
155
156 def _post_context_exec(self):
157 """The callback function when exiting context.
158 """
159 # broadcast replicated no-shard parameters
160 src_rank = gpc.get_ranks_in_group(ParallelMode.DATA)[0]
161 for param in self.param_list:
162 assert hasattr(param, 'colo_attr')
163 if not param.colo_attr.param_is_sharded and param.colo_attr.is_replicated:
164 dist.broadcast(tensor=param.data, src=src_rank, group=self.dp_process_group)
165 param.colo_attr.set_data_none()
166
167 del self.param_list
168
169 nn.init._calculate_fan_in_and_fan_out = self.nn_fanin_fanout
170 nn.Module.load_state_dict = self.module_load_from_state_dict
171 nn.Module.state_dict = self.module_state_dict
172 torch.set_rng_state(self.cpu_rng_state)
173 torch.cuda.set_rng_state(self.cuda_rng_state)
174
175 params = frozenset(self.top_module.parameters())
176 for param in self.param_numel.keys():
177 if param not in params:
178 self.param_numel[param] = 0
179 self.model_numel_tensor.fill_(sum(self.param_numel.values()))
180
181 def _post_init_method(self, module: torch.nn.Module, *args, **kwargs):
182 """
183 The function to call at the end of the constructor of each module.
184 NOTE() The module may be passed to this function multiple times.
185 """
186 self.top_module = module
187
188 def half_fn(t: torch.Tensor):
189 return t.half() if t.is_floating_point() else t
190
191 for param in module.parameters(recurse=False):
192 # avoid adapting a param to ShardedParam twice
193 if hasattr(param, 'colo_attr'):
194 continue
195
196 self.param_numel[param] = param.numel()
197
198 # convert parameters to half
199 param_half = half_fn(param)
200 param.data = param_half
201 if param.grad is not None:
202 grad_half = half_fn(param.grad)
203 param.grad.data = grad_half
204
205 # move torch parameters to the target device
206 target_device = self.target_device
207 param.data = param.data.to(target_device)
208 if param.grad is not None:
209 param.grad = param.grad.to(target_device)
210
211 param.colo_attr = ShardedParamV2(param, set_data_none=True)
212
213 if self.shard_param:
214 self.shard_strategy.shard([param.colo_attr.sharded_data_tensor], self.dp_process_group)
215
216 param.data = param.colo_attr.data_payload # set param.data to payload
217
218 # mark whether the param is replicated
219 param.colo_attr.is_replicated = self.is_replicated
220
221 # mark whether the param should keep not sharded
222 # if True, the param is used as Zero stage 2
223 param.colo_attr.keep_not_shard = not self.shard_param
224
225 self.param_list.append(param)
226
227 # We must cast buffers
228 # If we use BN, buffers may be on CPU and Float
229 # We must cast them
230 for buffer in module.buffers(recurse=False):
231 buffer.data = buffer.data.to(device=torch.cuda.current_device())
232 buffer.data = cast_tensor_to_fp16(buffer.data)
233
234
235 class ZeroContextMgr(metaclass=SingletonMeta):
236 current_context: Optional[ZeroInitContext] = None
237
238 @contextlib.contextmanager
239 def hijack_context_config(self, **kwargs):
240 if self.current_context is None:
241 yield
242 else:
243 old_config = self.current_context.config
244 self.current_context.config = ZeroContextConfig(**kwargs)
245 yield
246 self.current_context.config = old_config
247
248
249 def no_shard_zero_context(is_replicated: bool = True) -> AbstractContextManager:
250 return ZeroContextMgr().hijack_context_config(target_device=torch.device('cuda', torch.cuda.current_device()),
251 replicated=is_replicated,
252 shard_param=False)
253
254
255 def no_shard_zero_decrator(is_replicated: bool = True):
256
257 def _wrapper(init_func):
258
259 def _no_shard(*args, **kwargs):
260 with no_shard_zero_context(is_replicated):
261 ret = init_func(*args, **kwargs)
262 return ret
263
264 return _no_shard
265
266 return _wrapper
267
[end of colossalai/zero/init_ctx/init_context.py]
</code>
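A rough usage sketch of the context manager above, pieced together from the constructor signature in the listing. The import paths and the `TensorShardStrategy` class are assumptions about the surrounding package and may differ between ColossalAI versions; the snippet also presumes a distributed environment has already been initialized, since the constructor queries the data-parallel process group:

```python
import torch
import torch.nn as nn

from colossalai.zero.init_ctx import ZeroInitContext          # assumed re-export of the class above
from colossalai.zero.shard_utils import TensorShardStrategy   # assumed concrete BaseShardStrategy

# Requires an initialized distributed environment (e.g. colossalai.launch),
# because the context looks up the data-parallel group on construction.
with ZeroInitContext(target_device=torch.device('cuda', torch.cuda.current_device()),
                     shard_strategy=TensorShardStrategy(),
                     shard_param=True):
    model = nn.Linear(1024, 1024)   # params exit the context as fp16 sharded parameters
```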
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/colossalai/zero/init_ctx/init_context.py b/colossalai/zero/init_ctx/init_context.py
--- a/colossalai/zero/init_ctx/init_context.py
+++ b/colossalai/zero/init_ctx/init_context.py
@@ -1,46 +1,45 @@
import contextlib
import functools
-from typing import Optional
from contextlib import AbstractContextManager
+from dataclasses import dataclass
+from typing import Optional
import torch
-import torch.nn as nn
import torch.distributed as dist
+import torch.nn as nn
from colossalai.context.parallel_mode import ParallelMode
-from colossalai.core import global_context as gpc
from colossalai.context.singleton_meta import SingletonMeta
+from colossalai.core import global_context as gpc
from colossalai.logging import get_dist_logger
+from colossalai.utils.model.utils import InsertPostInitMethodToModuleSubClasses
from colossalai.zero.shard_utils import BaseShardStrategy
from colossalai.zero.sharded_model._utils import cast_tensor_to_fp16
from colossalai.zero.sharded_model.sharded_model_v2 import ShardedModelV2
from colossalai.zero.sharded_param import ShardedParamV2
-from colossalai.utils.model.utils import InsertPostInitMethodToModuleSubClasses
-class ZeroContextConfig(object):
+@dataclass
+class ZeroContextConfig:
"""The configuration used to control zero context initialization.
Args:
target_device (torch.device): The device where param data are after exiting the context.
- replicated (bool, optional): Whether the param is replicated across data parallel group.
+ is_replicated (bool, optional): Whether the param is replicated across data parallel group.
Some parameters are not replicated, e.g. parameters in MOE experts.
shard_param (bool, optional): Is param sharded after exiting the context. Defaults to False.
"""
- def __init__(self, target_device: torch.device, replicated: bool = True, shard_param: bool = False):
- super().__init__()
+ target_device: torch.device
+ is_replicated: bool = True
+ shard_param: bool = False
- if shard_param:
- assert replicated, "Non-replicated parameters can't be sharded."
+ def __post_init__(self):
+ if self.shard_param:
+ assert self.is_replicated, "Non-replicated parameters can't be sharded."
- # replicated no-shard parameters should locate in cuda, since we will broadcast them soon
- if replicated and not shard_param:
- assert target_device.type == 'cuda', "Replicated no-shard paramters should locate in cuda."
-
- self.target_device = target_device
- self.is_replicated: bool = replicated
- self.shard_param: bool = shard_param
+ if self.is_replicated and not self.shard_param:
+ assert self.target_device.type == 'cuda', "Replicated no-shard parameters should be located in cuda."
class ZeroInitContext(InsertPostInitMethodToModuleSubClasses):
@@ -74,7 +73,7 @@
self.seed = seed
self.dp_process_group = gpc.get_group(ParallelMode.DATA)
- self.config = ZeroContextConfig(target_device=target_device, replicated=True, shard_param=shard_param)
+ self.config = ZeroContextConfig(target_device=target_device, is_replicated=True, shard_param=shard_param)
ZeroContextMgr().current_context = self
@@ -124,7 +123,7 @@
return fan_in, fan_out
def _pre_context_exec(self):
- """
+ """
The Callback function when entering the context
"""
self.logger = get_dist_logger("ZeroInitContext")
@@ -248,7 +247,7 @@
def no_shard_zero_context(is_replicated: bool = True) -> AbstractContextManager:
return ZeroContextMgr().hijack_context_config(target_device=torch.device('cuda', torch.cuda.current_device()),
- replicated=is_replicated,
+ is_replicated=is_replicated,
shard_param=False)
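The diff above replaces the hand-written `__init__` of `ZeroContextConfig` with a `@dataclass` whose validation lives in `__post_init__`. A minimal, self-contained sketch of that pattern (hypothetical field values, not the actual Colossal-AI class):

```python
from dataclasses import dataclass

@dataclass
class Config:
    target_device: str
    is_replicated: bool = True
    shard_param: bool = False

    def __post_init__(self):
        # runs right after the generated __init__ assigns the fields
        if self.shard_param:
            assert self.is_replicated, "Non-replicated parameters can't be sharded."

cfg = Config(target_device="cuda", shard_param=True)
print(cfg)  # Config(target_device='cuda', is_replicated=True, shard_param=True)
```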
|
{"golden_diff": "diff --git a/colossalai/zero/init_ctx/init_context.py b/colossalai/zero/init_ctx/init_context.py\n--- a/colossalai/zero/init_ctx/init_context.py\n+++ b/colossalai/zero/init_ctx/init_context.py\n@@ -1,46 +1,45 @@\n import contextlib\n import functools\n-from typing import Optional\n from contextlib import AbstractContextManager\n+from dataclasses import dataclass\n+from typing import Optional\n \n import torch\n-import torch.nn as nn\n import torch.distributed as dist\n+import torch.nn as nn\n \n from colossalai.context.parallel_mode import ParallelMode\n-from colossalai.core import global_context as gpc\n from colossalai.context.singleton_meta import SingletonMeta\n+from colossalai.core import global_context as gpc\n from colossalai.logging import get_dist_logger\n+from colossalai.utils.model.utils import InsertPostInitMethodToModuleSubClasses\n from colossalai.zero.shard_utils import BaseShardStrategy\n from colossalai.zero.sharded_model._utils import cast_tensor_to_fp16\n from colossalai.zero.sharded_model.sharded_model_v2 import ShardedModelV2\n from colossalai.zero.sharded_param import ShardedParamV2\n-from colossalai.utils.model.utils import InsertPostInitMethodToModuleSubClasses\n \n \n-class ZeroContextConfig(object):\n+@dataclass\n+class ZeroContextConfig:\n \"\"\"The configuration used to control zero context initialization.\n \n Args:\n target_device (torch.device): The device where param data are after exiting the context.\n- replicated (bool, optional): Whether the param is replicated across data parallel group.\n+ is_replicated (bool, optional): Whether the param is replicated across data parallel group.\n Some parameters are not replicated, e.g. parameters in MOE experts.\n shard_param (bool, optional): Is param sharded after exiting the context. 
Defaults to False.\n \"\"\"\n \n- def __init__(self, target_device: torch.device, replicated: bool = True, shard_param: bool = False):\n- super().__init__()\n+ target_device: torch.device\n+ is_replicated: bool = True\n+ shard_param: bool = False\n \n- if shard_param:\n- assert replicated, \"Non-replicated parameters can't be sharded.\"\n+ def __post_init__(self):\n+ if self.shard_param:\n+ assert self.is_replicated, \"Non-replicated parameters can't be sharded.\"\n \n- # replicated no-shard parameters should locate in cuda, since we will broadcast them soon\n- if replicated and not shard_param:\n- assert target_device.type == 'cuda', \"Replicated no-shard paramters should locate in cuda.\"\n-\n- self.target_device = target_device\n- self.is_replicated: bool = replicated\n- self.shard_param: bool = shard_param\n+ if self.is_replicated and not self.shard_param:\n+ assert self.target_device.type == 'cuda', \"Replicated no-shard parameters should be located in cuda.\"\n \n \n class ZeroInitContext(InsertPostInitMethodToModuleSubClasses):\n@@ -74,7 +73,7 @@\n self.seed = seed\n self.dp_process_group = gpc.get_group(ParallelMode.DATA)\n \n- self.config = ZeroContextConfig(target_device=target_device, replicated=True, shard_param=shard_param)\n+ self.config = ZeroContextConfig(target_device=target_device, is_replicated=True, shard_param=shard_param)\n \n ZeroContextMgr().current_context = self\n \n@@ -124,7 +123,7 @@\n return fan_in, fan_out\n \n def _pre_context_exec(self):\n- \"\"\" \n+ \"\"\"\n The Callback function when entering the context\n \"\"\"\n self.logger = get_dist_logger(\"ZeroInitContext\")\n@@ -248,7 +247,7 @@\n \n def no_shard_zero_context(is_replicated: bool = True) -> AbstractContextManager:\n return ZeroContextMgr().hijack_context_config(target_device=torch.device('cuda', torch.cuda.current_device()),\n- replicated=is_replicated,\n+ is_replicated=is_replicated,\n shard_param=False)\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "import contextlib\nimport functools\nfrom typing import Optional\nfrom contextlib import AbstractContextManager\n\nimport torch\nimport torch.nn as nn\nimport torch.distributed as dist\n\nfrom colossalai.context.parallel_mode import ParallelMode\nfrom colossalai.core import global_context as gpc\nfrom colossalai.context.singleton_meta import SingletonMeta\nfrom colossalai.logging import get_dist_logger\nfrom colossalai.zero.shard_utils import BaseShardStrategy\nfrom colossalai.zero.sharded_model._utils import cast_tensor_to_fp16\nfrom colossalai.zero.sharded_model.sharded_model_v2 import ShardedModelV2\nfrom colossalai.zero.sharded_param import ShardedParamV2\nfrom colossalai.utils.model.utils import InsertPostInitMethodToModuleSubClasses\n\n\nclass ZeroContextConfig(object):\n \"\"\"The configuration used to control zero context initialization.\n\n Args:\n target_device (torch.device): The device where param data are after exiting the context.\n replicated (bool, optional): Whether the param is replicated across data parallel group.\n Some parameters are not replicated, e.g. parameters in MOE experts.\n shard_param (bool, optional): Is param sharded after exiting the context. 
Defaults to False.\n \"\"\"\n\n def __init__(self, target_device: torch.device, replicated: bool = True, shard_param: bool = False):\n super().__init__()\n\n if shard_param:\n assert replicated, \"Non-replicated parameters can't be sharded.\"\n\n # replicated no-shard parameters should locate in cuda, since we will broadcast them soon\n if replicated and not shard_param:\n assert target_device.type == 'cuda', \"Replicated no-shard paramters should locate in cuda.\"\n\n self.target_device = target_device\n self.is_replicated: bool = replicated\n self.shard_param: bool = shard_param\n\n\nclass ZeroInitContext(InsertPostInitMethodToModuleSubClasses):\n \"\"\"A context to initialize model.\n\n 1. Convert the model to fp16.\n 2. The paramaters of the module are adapted to type ShardedParameter.\n 3. Shard the param and grad according to flags.\n\n Args:\n target_device (torch.device): The device where param data are after exiting the context.\n shard_strategy (BaseShardStrategy): Shard strategy instance.\n seed (int, optional): Random seed for weight initialization\n shard_param (bool, optional): Is param sharded after exiting the context. Defaults to False.\n default_dtype (torch.dtype, optional): If it's not None, parameters will be initialized as ``default_dtype`` then converted to fp16.\n model_numel_tensor (torch.Tensor, optional): A tensor which will store the number of elements of model. Defaults to torch.zeros(1, dtype=torch.int).\n \"\"\"\n\n def __init__(self,\n target_device: torch.device,\n shard_strategy: BaseShardStrategy,\n seed: int = 2**10 - 1,\n shard_param: bool = False,\n default_dtype: Optional[torch.dtype] = None,\n model_numel_tensor: torch.Tensor = torch.zeros(1, dtype=torch.long)):\n\n super().__init__(default_dtype=default_dtype)\n self.shard_strategy = shard_strategy\n self.param_list = []\n self.model_numel_tensor = model_numel_tensor\n self.seed = seed\n self.dp_process_group = gpc.get_group(ParallelMode.DATA)\n\n self.config = ZeroContextConfig(target_device=target_device, replicated=True, shard_param=shard_param)\n\n ZeroContextMgr().current_context = self\n\n self.param_numel = {}\n self.top_module = None\n\n @property\n def target_device(self):\n return self.config.target_device\n\n @property\n def is_replicated(self):\n return self.config.is_replicated\n\n @property\n def shard_param(self):\n return self.config.shard_param\n\n @staticmethod\n def calc_fanin_fanout(tensor: torch.Tensor):\n \"\"\"We use this function to substitute fan-in and fan-out calculation in torch.nn.init.\n This can help us get correct fan-in and fan-out for sharded tensor.\n \"\"\"\n assert isinstance(tensor, nn.Parameter), \"Sharded tensor initilization is only allowed for paramters\"\n\n # get correct shape of input tensor\n if not hasattr(tensor, 'colo_attr') or not tensor.colo_attr.param_is_sharded:\n tensor_shape = tensor.shape\n else:\n tensor_shape = tensor.colo_attr.sharded_data_tensor.origin_shape\n\n dimensions = len(tensor_shape)\n if dimensions < 2:\n raise ValueError(\"Fan in and fan out can not be computed for tensor with fewer than 2 dimensions\")\n\n num_input_fmaps = tensor_shape[1]\n num_output_fmaps = tensor_shape[0]\n receptive_field_size = 1\n if dimensions > 2:\n # math.prod is not always available, accumulate the product manually\n # we could use functools.reduce but that is not supported by TorchScript\n for s in tensor_shape[2:]:\n receptive_field_size *= s\n fan_in = num_input_fmaps * receptive_field_size\n fan_out = num_output_fmaps * receptive_field_size\n\n 
return fan_in, fan_out\n\n def _pre_context_exec(self):\n \"\"\" \n The Callback function when entering the context\n \"\"\"\n self.logger = get_dist_logger(\"ZeroInitContext\")\n\n # substitute fan-in and fan-out calculation\n self.nn_fanin_fanout = nn.init._calculate_fan_in_and_fan_out\n nn.init._calculate_fan_in_and_fan_out = self.calc_fanin_fanout\n\n self.module_load_from_state_dict = nn.Module._load_from_state_dict\n shard_strategy = self.shard_strategy if self.config.shard_param else None\n nn.Module._load_from_state_dict = functools.partialmethod(ShardedModelV2._colo_load_from_state_dict,\n shard_strategy=shard_strategy)\n self.module_state_dict = nn.Module.state_dict\n nn.Module.state_dict = functools.partialmethod(ShardedModelV2._colo_state_dict,\n shard_strategy=shard_strategy,\n state_dict_func=self.module_state_dict,\n process_group=self.dp_process_group)\n\n # reserve rng states\n self.cpu_rng_state = torch.get_rng_state()\n self.cuda_rng_state = torch.cuda.get_rng_state()\n\n # set new seed for initialization, since we initialize sharded tensor separately\n # we don't want all processes have the same seed\n # otherwise all sharded tensors are same after init\n offset = self.seed + 1 # we want to have more 1 in binary format seed\n torch.manual_seed(self.seed + offset * dist.get_rank())\n\n def _post_context_exec(self):\n \"\"\"The callback function when exiting context.\n \"\"\"\n # broadcast replicated no-shard parameters\n src_rank = gpc.get_ranks_in_group(ParallelMode.DATA)[0]\n for param in self.param_list:\n assert hasattr(param, 'colo_attr')\n if not param.colo_attr.param_is_sharded and param.colo_attr.is_replicated:\n dist.broadcast(tensor=param.data, src=src_rank, group=self.dp_process_group)\n param.colo_attr.set_data_none()\n\n del self.param_list\n\n nn.init._calculate_fan_in_and_fan_out = self.nn_fanin_fanout\n nn.Module.load_state_dict = self.module_load_from_state_dict\n nn.Module.state_dict = self.module_state_dict\n torch.set_rng_state(self.cpu_rng_state)\n torch.cuda.set_rng_state(self.cuda_rng_state)\n\n params = frozenset(self.top_module.parameters())\n for param in self.param_numel.keys():\n if param not in params:\n self.param_numel[param] = 0\n self.model_numel_tensor.fill_(sum(self.param_numel.values()))\n\n def _post_init_method(self, module: torch.nn.Module, *args, **kwargs):\n \"\"\"\n The function to call at the end of the constructor of each module.\n NOTE() The module may be passed to this function multiple times.\n \"\"\"\n self.top_module = module\n\n def half_fn(t: torch.Tensor):\n return t.half() if t.is_floating_point() else t\n\n for param in module.parameters(recurse=False):\n # avoid adapting a param to ShardedParam twice\n if hasattr(param, 'colo_attr'):\n continue\n\n self.param_numel[param] = param.numel()\n\n # convert parameters to half\n param_half = half_fn(param)\n param.data = param_half\n if param.grad is not None:\n grad_half = half_fn(param.grad)\n param.grad.data = grad_half\n\n # move torch parameters to the target device\n target_device = self.target_device\n param.data = param.data.to(target_device)\n if param.grad is not None:\n param.grad = param.grad.to(target_device)\n\n param.colo_attr = ShardedParamV2(param, set_data_none=True)\n\n if self.shard_param:\n self.shard_strategy.shard([param.colo_attr.sharded_data_tensor], self.dp_process_group)\n\n param.data = param.colo_attr.data_payload # set param.data to payload\n\n # mark whether the param is replicated\n param.colo_attr.is_replicated = self.is_replicated\n\n # 
mark whether the param should keep not sharded\n # if True, the param is used as Zero stage 2\n param.colo_attr.keep_not_shard = not self.shard_param\n\n self.param_list.append(param)\n\n # We must cast buffers\n # If we use BN, buffers may be on CPU and Float\n # We must cast them\n for buffer in module.buffers(recurse=False):\n buffer.data = buffer.data.to(device=torch.cuda.current_device())\n buffer.data = cast_tensor_to_fp16(buffer.data)\n\n\nclass ZeroContextMgr(metaclass=SingletonMeta):\n current_context: Optional[ZeroInitContext] = None\n\n @contextlib.contextmanager\n def hijack_context_config(self, **kwargs):\n if self.current_context is None:\n yield\n else:\n old_config = self.current_context.config\n self.current_context.config = ZeroContextConfig(**kwargs)\n yield\n self.current_context.config = old_config\n\n\ndef no_shard_zero_context(is_replicated: bool = True) -> AbstractContextManager:\n return ZeroContextMgr().hijack_context_config(target_device=torch.device('cuda', torch.cuda.current_device()),\n replicated=is_replicated,\n shard_param=False)\n\n\ndef no_shard_zero_decrator(is_replicated: bool = True):\n\n def _wrapper(init_func):\n\n def _no_shard(*args, **kwargs):\n with no_shard_zero_context(is_replicated):\n ret = init_func(*args, **kwargs)\n return ret\n\n return _no_shard\n\n return _wrapper\n", "path": "colossalai/zero/init_ctx/init_context.py"}]}
| 3,616 | 892 |
gh_patches_debug_12025
|
rasdani/github-patches
|
git_diff
|
Showndarya__Hacktoberfest-435
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Travis test ignores first letter of filename for some reason
I'll try and figure out why. I thought about simply renaming every file in the travis script, but that requires a lot of work and overhead for little gain; it is certainly doable, you have to configure git on the travis instance, make a new commit, etc.
Might as well have a cron job or something run it recursively and periodically over the entirety of the repo and make a single commit...
</issue>
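For context, the capitalization check in the script below (`.travis.py`, line 28) looks at the first character of the full path returned by `git diff --name-only`, so a JSON file inside a subdirectory is judged by the directory's first letter rather than the file's. A minimal sketch of the difference, using a hypothetical path:

```python
import os.path

# hypothetical path as reported by `git diff --name-only`
changed_file_json = "some_dir/Apple.json"

print(changed_file_json[0].isupper())                    # False: tests the directory's first letter
print(os.path.basename(changed_file_json)[0].isupper())  # True: tests the file name itself
```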
<code>
[start of .travis.py]
1 import json
2 import os
3 import re
4 import subprocess
5
6 # Get a diff between master and current.
7 try:
8 commit_range = os.environ["TRAVIS_COMMIT_RANGE"]
9 changed_files = subprocess.check_output(["git", "diff", "--name-only", commit_range])
10 except KeyError:
11 print("🔥 This should be run on Travis. Otherwise make sure TRAVIS_BRANCH is set.")
12 exit(1)
13
14 # Filter JSON files only.
15 changed_files_json = []
16 if changed_files:
17 changed_files = changed_files.decode()
18 for changed_file in changed_files.split('\n'):
19 if re.search(r"\.json$", changed_file):
20 changed_files_json.append(changed_file)
21
22
23 # Iterate over list of changed JSON files.
24 for changed_file_json in changed_files_json:
25 print(f"Checking file {changed_file_json}...")
26 there_was_an_error = False
27
28 if not changed_file_json[0].isupper():
29 there_was_an_error = True
30 print("🔥 File name not capitalized.")
31
32 try:
33 with open(changed_file_json) as data_file:
34 file_content = json.loads(data_file.read())
35 except json.decoder.JSONDecodeError:
36 there_was_an_error = True
37 print("🔥 JSON could not be parsed.")
38
39 if 'word' not in file_content:
40 there_was_an_error = True
41 print("🔥 Key 'word' not found.")
42
43 if not file_content["word"]:
44 there_was_an_error = True
45 print("🔥 Value for 'word' appears to be empty.")
46
47 if 'definitions' not in file_content:
48 there_was_an_error = True
49 print("🔥 Key 'definitions' not found.")
50
51 if not file_content["definitions"]:
52 there_was_an_error = True
53 print("🔥 Value for 'definitions' appears to be empty.")
54
55 if 'parts-of-speech' not in file_content:
56 there_was_an_error = True
57 print("🔥 Key 'parts-of-speech' not found.")
58
59 if not file_content["parts-of-speech"]:
60 there_was_an_error = True
61 print("🔥 Value for 'parts-of-speech' appears to be empty.")
62
63 if there_was_an_error:
64 exit(1)
65
[end of .travis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/.travis.py b/.travis.py
--- a/.travis.py
+++ b/.travis.py
@@ -19,13 +19,13 @@
if re.search(r"\.json$", changed_file):
changed_files_json.append(changed_file)
-
+
# Iterate over list of changed JSON files.
for changed_file_json in changed_files_json:
print(f"Checking file {changed_file_json}...")
there_was_an_error = False
- if not changed_file_json[0].isupper():
+ if not os.path.basename(changed_file_json)[0].isupper():
there_was_an_error = True
print("🔥 File name not capitalized.")
|
{"golden_diff": "diff --git a/.travis.py b/.travis.py\n--- a/.travis.py\n+++ b/.travis.py\n@@ -19,13 +19,13 @@\n if re.search(r\"\\.json$\", changed_file):\n changed_files_json.append(changed_file)\n \n-\n+ \n # Iterate over list of changed JSON files.\n for changed_file_json in changed_files_json:\n print(f\"Checking file {changed_file_json}...\")\n there_was_an_error = False\n \n- if not changed_file_json[0].isupper():\n+ if not os.path.basename(changed_file_json)[0].isupper():\n there_was_an_error = True\n print(\"\ud83d\udd25 File name not capitalized.\")\n", "issue": "Travis test ignore first letter of filename for some reason\nI'll try and figure out why, thought about simply renaming every file in the travis script but that requires alot of work and overhead for little gain, it is certainly doable, you have to configure the git on the travis instance and make a new commit etc.\r\nMight as well have a cron job or something to it recursively and periodically over the entirety of the repo and make a single commit...\n", "before_files": [{"content": "import json\nimport os\nimport re\nimport subprocess\n\n# Get a diff between master and current.\ntry:\n commit_range = os.environ[\"TRAVIS_COMMIT_RANGE\"]\n changed_files = subprocess.check_output([\"git\", \"diff\", \"--name-only\", commit_range])\nexcept KeyError:\n print(\"\ud83d\udd25 This should be run on Travis. Otherwise make sure TRAVIS_BRANCH is set.\")\n exit(1)\n\n# Filter JSON files only.\nchanged_files_json = []\nif changed_files:\n changed_files = changed_files.decode()\n for changed_file in changed_files.split('\\n'):\n if re.search(r\"\\.json$\", changed_file):\n changed_files_json.append(changed_file)\n\n\n# Iterate over list of changed JSON files.\nfor changed_file_json in changed_files_json:\n print(f\"Checking file {changed_file_json}...\")\n there_was_an_error = False\n\n if not changed_file_json[0].isupper():\n there_was_an_error = True\n print(\"\ud83d\udd25 File name not capitalized.\")\n\n try:\n with open(changed_file_json) as data_file:\n file_content = json.loads(data_file.read())\n except json.decoder.JSONDecodeError:\n there_was_an_error = True\n print(\"\ud83d\udd25 JSON could not be parsed.\")\n\n if 'word' not in file_content:\n there_was_an_error = True\n print(\"\ud83d\udd25 Key 'word' not found.\")\n\n if not file_content[\"word\"]:\n there_was_an_error = True\n print(\"\ud83d\udd25 Value for 'word' appears to be empty.\")\n\n if 'definitions' not in file_content:\n there_was_an_error = True\n print(\"\ud83d\udd25 Key 'definitions' not found.\")\n\n if not file_content[\"definitions\"]:\n there_was_an_error = True\n print(\"\ud83d\udd25 Value for 'definitions' appears to be empty.\")\n\n if 'parts-of-speech' not in file_content:\n there_was_an_error = True\n print(\"\ud83d\udd25 Key 'parts-of-speech' not found.\")\n\n if not file_content[\"parts-of-speech\"]:\n there_was_an_error = True\n print(\"\ud83d\udd25 Value for 'parts-of-speech' appears to be empty.\")\n\n if there_was_an_error:\n exit(1)\n", "path": ".travis.py"}]}
| 1,218 | 152 |
gh_patches_debug_1870
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-1743
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for Snowflake Secure Views
### Adding support for Secure View in Snowflake
When using the Materialize feature to set the type of materialization, it would be beneficial to be able to request a secure view via {{ config(materialized='secure-view') }}.
### Current Work-around
Currently the solution for Snowflake secure views is running post-hook events to set the targeted views as secure, example: `alter view sv_mySecureTest set secure;`
This works, and each view that needs to be secured will need to be added to the post-hook event.
### Affects only Snowflake
This feature is specific to the Snowflake Cloud Data warehouse.
[https://docs.snowflake.net/manuals/user-guide/views-secure.html](url)
### This will help DBT Snowflake Developer / Non Developers
When creating a secure view in Snowflake, a developer can use 2 syntax commands
1. CREATE OR REPLACE SECURE VIEW...
2. Alter view <view_name> Set Secure
The first method allows a non-dbt user to render the DDL with the secure declaration as part of the DDL. The second statement is added to the end of the generated DDL; however, it may be overlooked by developers unfamiliar with Snowflake syntax, causing possible security issues by allowing unauthorized access to the view DDL by read-only roles in Snowflake.
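To make the two variants concrete, here is a small illustrative sketch in plain Python (hypothetical helper names, string-building only; this is not dbt's actual macro code):

```python
def render_view_ddl(name, select_sql, secure=False, inline_secure=True):
    """Build Snowflake view DDL, optionally marking the view as secure."""
    if secure and inline_secure:
        # Option 1: declare SECURE directly in the CREATE statement
        return f"create or replace secure view {name} as {select_sql};"
    ddl = f"create or replace view {name} as {select_sql};"
    if secure:
        # Option 2: append the ALTER afterwards (the current post-hook work-around)
        ddl += f"\nalter view {name} set secure;"
    return ddl

print(render_view_ddl("sv_mySecureTest", "select 1 as id", secure=True))
```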
</issue>
<code>
[start of plugins/snowflake/dbt/adapters/snowflake/impl.py]
1 from dbt.adapters.sql import SQLAdapter
2 from dbt.adapters.snowflake import SnowflakeConnectionManager
3 from dbt.adapters.snowflake import SnowflakeRelation
4 from dbt.utils import filter_null_values
5
6
7 class SnowflakeAdapter(SQLAdapter):
8 Relation = SnowflakeRelation
9 ConnectionManager = SnowflakeConnectionManager
10
11 AdapterSpecificConfigs = frozenset(
12 {"transient", "cluster_by", "automatic_clustering"}
13 )
14
15 @classmethod
16 def date_function(cls):
17 return "CURRENT_TIMESTAMP()"
18
19 @classmethod
20 def _catalog_filter_table(cls, table, manifest):
21 # On snowflake, users can set QUOTED_IDENTIFIERS_IGNORE_CASE, so force
22 # the column names to their lowercased forms.
23 lowered = table.rename(
24 column_names=[c.lower() for c in table.column_names]
25 )
26 return super()._catalog_filter_table(lowered, manifest)
27
28 def _make_match_kwargs(self, database, schema, identifier):
29 quoting = self.config.quoting
30 if identifier is not None and quoting["identifier"] is False:
31 identifier = identifier.upper()
32
33 if schema is not None and quoting["schema"] is False:
34 schema = schema.upper()
35
36 if database is not None and quoting["database"] is False:
37 database = database.upper()
38
39 return filter_null_values(
40 {"identifier": identifier, "schema": schema, "database": database}
41 )
42
[end of plugins/snowflake/dbt/adapters/snowflake/impl.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/snowflake/dbt/adapters/snowflake/impl.py b/plugins/snowflake/dbt/adapters/snowflake/impl.py
--- a/plugins/snowflake/dbt/adapters/snowflake/impl.py
+++ b/plugins/snowflake/dbt/adapters/snowflake/impl.py
@@ -9,7 +9,7 @@
ConnectionManager = SnowflakeConnectionManager
AdapterSpecificConfigs = frozenset(
- {"transient", "cluster_by", "automatic_clustering"}
+ {"transient", "cluster_by", "automatic_clustering", "secure"}
)
@classmethod
|
{"golden_diff": "diff --git a/plugins/snowflake/dbt/adapters/snowflake/impl.py b/plugins/snowflake/dbt/adapters/snowflake/impl.py\n--- a/plugins/snowflake/dbt/adapters/snowflake/impl.py\n+++ b/plugins/snowflake/dbt/adapters/snowflake/impl.py\n@@ -9,7 +9,7 @@\n ConnectionManager = SnowflakeConnectionManager\n \n AdapterSpecificConfigs = frozenset(\n- {\"transient\", \"cluster_by\", \"automatic_clustering\"}\n+ {\"transient\", \"cluster_by\", \"automatic_clustering\", \"secure\"}\n )\n \n @classmethod\n", "issue": "Support for Snowflake Secure Views\n### Adding support for Secure View in Snowflake\r\nWhen using the Materialize feature where setting the type of materialization, adding secure-view to the {{ config(materialized='secure-view') }} would be beneficial.\r\n\r\n### Current Work-around\r\nCurrently the solution for Snowflake secure views is running post-hook events to set the targeted views as secure, example: `alter view sv_mySecureTest set secure;`\r\nThis works, and each view that needs to be secured will need to be added to the post-hook event.\r\n\r\n### Affects only Snowflake\r\nThis feature is specific to the Snowflake Cloud Data warehouse.\r\n[https://docs.snowflake.net/manuals/user-guide/views-secure.html](url)\r\n\r\n### This will help DBT Snowflake Developer / Non Developers\r\nWhen creating a secure view in Snowflake, a developer can use 2 syntax commands\r\n\r\n1. CREATE OR REPLACE SECURE VIEW...\r\n2. Alter view <view_name> Set Secure\r\n\r\nThe first method will allow non-dbt user to render the DDL with the secure declaration as part of the DDL, the second statement is added to the end of the generated DDL however it may be ignored by developers unfamiliar with Snowflake Syntax, causing possible security issues, allowing unauthorized access to the View DDL by Read-Only roles in Snowflake.\n", "before_files": [{"content": "from dbt.adapters.sql import SQLAdapter\nfrom dbt.adapters.snowflake import SnowflakeConnectionManager\nfrom dbt.adapters.snowflake import SnowflakeRelation\nfrom dbt.utils import filter_null_values\n\n\nclass SnowflakeAdapter(SQLAdapter):\n Relation = SnowflakeRelation\n ConnectionManager = SnowflakeConnectionManager\n\n AdapterSpecificConfigs = frozenset(\n {\"transient\", \"cluster_by\", \"automatic_clustering\"}\n )\n\n @classmethod\n def date_function(cls):\n return \"CURRENT_TIMESTAMP()\"\n\n @classmethod\n def _catalog_filter_table(cls, table, manifest):\n # On snowflake, users can set QUOTED_IDENTIFIERS_IGNORE_CASE, so force\n # the column names to their lowercased forms.\n lowered = table.rename(\n column_names=[c.lower() for c in table.column_names]\n )\n return super()._catalog_filter_table(lowered, manifest)\n\n def _make_match_kwargs(self, database, schema, identifier):\n quoting = self.config.quoting\n if identifier is not None and quoting[\"identifier\"] is False:\n identifier = identifier.upper()\n\n if schema is not None and quoting[\"schema\"] is False:\n schema = schema.upper()\n\n if database is not None and quoting[\"database\"] is False:\n database = database.upper()\n\n return filter_null_values(\n {\"identifier\": identifier, \"schema\": schema, \"database\": database}\n )\n", "path": "plugins/snowflake/dbt/adapters/snowflake/impl.py"}]}
| 1,206 | 136 |
gh_patches_debug_57002
|
rasdani/github-patches
|
git_diff
|
Gallopsled__pwntools-1129
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to create a shell shellcraft for MIPS
The problem is as follows:
```py
>>> from pwnlib.shellcraft import mips
>>> mips.sh()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 8, in sh
File "/usr/lib64/python2.7/site-packages/mako/template.py", line 462, in render
return runtime._render(self, self.callable_, args, data)
File "/usr/lib64/python2.7/site-packages/mako/runtime.py", line 838, in _render
**_kwargs_for_callable(callable_, data))
File "/usr/lib64/python2.7/site-packages/mako/runtime.py", line 873, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/usr/lib64/python2.7/site-packages/mako/runtime.py", line 899, in _exec_template
callable_(context, *args, **kwargs)
File "/home/are/.pwntools-cache/mako/mips/linux/sh.asm.py", line 28, in render_body
__M_writer(unicode(mips.execve('//bin/sh', ['sh'], {})))
File "<string>", line 8, in execve
File "/usr/lib64/python2.7/site-packages/mako/template.py", line 462, in render
return runtime._render(self, self.callable_, args, data)
File "/usr/lib64/python2.7/site-packages/mako/runtime.py", line 838, in _render
**_kwargs_for_callable(callable_, data))
File "/usr/lib64/python2.7/site-packages/mako/runtime.py", line 873, in _render_context
_exec_template(inherit, lclcontext, args=args, kwargs=kwargs)
File "/usr/lib64/python2.7/site-packages/mako/runtime.py", line 899, in _exec_template
callable_(context, *args, **kwargs)
File "/home/are/.pwntools-cache/mako/mips/linux/syscalls/execve.asm.py", line 69, in render_body
if arg in allregs:
TypeError: unhashable type: 'list'
```
But it can be fixed by making sure that `shellcraft.registers.current()` returns a list rather than a dict (mips is the only architecture for which this happens, since `type(shellcraft.registers.mips)==dict`). A pull request is on its way.
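For anyone wondering why the dict triggers the error while a list does not: membership tests against a dict hash the probe object, and a Python list is unhashable, whereas membership tests against a list only use equality. A standalone reproduction (no pwntools required):

```python
allregs_dict = {'$2': 2, '$v0': 2}   # mips-style mapping, like shellcraft.registers.mips
allregs_list = list(allregs_dict)    # just the register names

arg = ['sh']                         # an argv list passed to execve()

print(arg in allregs_list)           # False: list membership only needs equality checks
try:
    arg in allregs_dict              # dict membership must hash the probe object
except TypeError as e:
    print(e)                         # unhashable type: 'list'
```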
</issue>
<code>
[start of pwnlib/shellcraft/registers.py]
1 from __future__ import absolute_import
2
3 import re
4
5 from pwnlib.context import context
6 from pwnlib.util.misc import register_sizes
7
8 mips = {
9 '$0' : 0, '$zero': 0,
10 '$1' : 1, '$at': 1,
11 '$2' : 2, '$v0': 2,
12 '$3' : 3, '$v1': 3,
13 '$4' : 4, '$a0': 4,
14 '$5' : 5, '$a1': 5,
15 '$6' : 6, '$a2': 6,
16 '$7' : 7, '$a3': 7,
17 '$8' : 8, '$t0': 8,
18 '$9' : 9, '$t1': 9,
19 '$10': 10, '$t2': 10,
20 '$11': 11, '$t3': 11,
21 '$12': 12, '$t4': 12,
22 '$13': 13, '$t5': 13,
23 '$14': 14, '$t6': 14,
24 '$15': 15, '$t7': 15,
25 '$16': 16, '$s0': 16,
26 '$17': 17, '$s1': 17,
27 '$18': 18, '$s2': 18,
28 '$19': 19, '$s3': 19,
29 '$20': 20, '$s4': 20,
30 '$21': 21, '$s5': 21,
31 '$22': 22, '$s6': 22,
32 '$23': 23, '$s7': 23,
33 '$24': 24, '$t8': 24,
34 '$25': 25, '$t9': 25,
35 '$26': 26, '$k0': 26,
36 '$27': 27, '$k1': 27,
37 '$28': 28, '$gp': 28,
38 '$29': 29, '$sp': 29,
39 '$30': 30, '$s8': 30,
40 '$31': 31, '$ra': 31,
41 }
42
43 arm = map('r{}'.format, range(13))
44 arm += ["sp", "lr", "pc", "cpsr"]
45
46 thumb = arm
47
48 aarch64 = map('x{}'.format, range(32))
49 aarch64 += ["sp", "lr", "pc", "cpsr"]
50
51 i386_baseregs = [ "ax", "cx", "dx", "bx", "sp", "bp", "si", "di", "ip"]
52
53 i386 = map('e{}'.format, i386_baseregs)
54 i386 += i386_baseregs
55 i386 += [ "eflags", "cs", "ss", "ds", "es", "fs", "gs", ]
56
57 amd64 = map('r{}'.format, i386_baseregs)
58 amd64 += map('r{}'.format, range(8,16))
59 amd64 += map('r{}d'.format, range(8,16))
60 amd64 += i386
61
62 powerpc = map('r{}'.format, range(32))
63 powerpc += ["pc", "msr", "cr", "lr", "ctr", "xer", "orig_r3", "trap" ]
64 powerpc = map('%{}'.format, powerpc)
65
66 sparc = map('g{}'.format, range(8))
67 sparc += map('o{}'.format, range(5))
68 sparc += map('l{}'.format, range(8))
69 sparc += map('i{}'.format, range(5))
70 sparc += ["pc", "sp", "fp", "psr" ]
71 sparc = map('%{}'.format, sparc)
72
73
74
75 # x86/amd64 registers in decreasing size
76 i386_ordered = [
77 ['rax', 'eax', 'ax', 'al'],
78 ['rbx', 'ebx', 'bx', 'bl'],
79 ['rcx', 'ecx', 'cx', 'cl'],
80 ['rdx', 'edx', 'dx', 'dl'],
81 ['rdi', 'edi', 'di'],
82 ['rsi', 'esi', 'si'],
83 ['rbp', 'ebp', 'bp'],
84 ['rsp', 'esp', 'sp'],
85 ['r8', 'r8d', 'r8w', 'r8b'],
86 ['r9', 'r9d', 'r9w', 'r9b'],
87 ['r10', 'r10d', 'r10w', 'r10b'],
88 ['r11', 'r11d', 'r11w', 'r11b'],
89 ['r12', 'r12d', 'r12w', 'r12b'],
90 ['r13', 'r13d', 'r13w', 'r13b'],
91 ['r14', 'r14d', 'r14w', 'r14b'],
92 ['r15', 'r15d', 'r15w', 'r15b']
93 ]
94
95 all_regs, sizes, bigger, smaller = register_sizes(i386_ordered, [64, 32, 16, 8, 8])
96 native64 = {k:v[0] for k,v in bigger.items()}
97 native32 = {k:v[1] for k,v in bigger.items() if not k.startswith('r')}
98
99 class Register(object):
100 #: Register name
101 name = None
102
103 #: List of larger registers, in order from largest to smallest
104 bigger = None
105
106 #: List of smaller regsters, in order from smallest to largest
107 smaller = None
108
109 #: Size of the register, in bits
110 size = None
111
112 #: Does this register have a 'high' register for mask 0xff00
113 ff00 = None
114
115 #: Flags for 64-bit mode.64-bit
116 #: The first bit is set, if the register can be used with a REX-mode
117 #: The second bit is set, if the register can be used without a REX-prefix
118 rex_mode = 0
119
120 #: Is this a 64-bit only register?
121 is64bit = False
122
123 #: Name of the native 64-bit register
124 native64 = None
125
126 #: Name of the native 32-bit register
127 native32 = None
128
129 #: Name of the register which should be used to clear
130 #: this register, e.g. xor REG, REG.
131 #: Useful for AMD64 for xor eax, eax is shorter than
132 #: xor rax, rax and has the same effect.
133 xor = None
134
135 def __init__(self, name, size):
136 self.name = name
137 self.size = size
138
139 for row in i386_ordered:
140 if name in row:
141 self.bigger = row[0:row.index(name)]
142 self.smaller = row[row.index(name)+1:]
143 self.sizes = {64>>i:r for i,r in enumerate(row)}
144 self.native64 = row[0]
145 self.native32 = row[1]
146 self.xor = self.sizes[min(self.size, 32)]
147
148 if self.size >= 32 and name.endswith('x'):
149 self.ff00 = name[1] + 'h'
150
151 if name[-1] != 'h':
152 self.rex_mode |= 1
153
154 if name[0] != 'r':
155 self.rex_mode |= 2
156
157 if name.startswith('r') or name[1:3].isdigit():
158 self.is64bit = True
159
160 @property
161 def bits(self):
162 return self.size
163
164 @property
165 def bytes(self):
166 return self.bits / 8
167
168 def fits(self, value):
169 return self.size >= bits_required(value)
170
171 def __str__(self):
172 return self.name
173
174 def __repr__(self):
175 return "Register(%r)" % self.name
176
177 intel = {}
178
179 for row in i386_ordered:
180 for i, reg in enumerate(row):
181 intel[reg] = Register(reg, 64 >> i)
182
183 def get_register(name):
184 if isinstance(name, Register):
185 return name
186 if isinstance(name, str):
187 return intel.get(name, None)
188 return None
189
190 def is_register(obj):
191 if isinstance(obj, Register):
192 return True
193 return get_register(obj)
194
195
196 def bits_required(value):
197 bits = 0
198
199 if value < 0:
200 value = -(value)
201
202 while value:
203 value >>= 8
204 bits += 8
205 return bits
206
207 def current():
208 return {
209 'i386': i386,
210 'amd64': amd64,
211 'arm': arm,
212 'thumb': arm,
213 'aarch64': aarch64,
214 'mips': mips,
215 'powerpc': powerpc
216 }[context.arch]
217
218 # def is_register(sz):
219 # try:
220 # sz = sz.lower()
221 # return sz.lower() in {
222 # 'i386': i386,
223 # 'amd64': amd64,
224 # 'powerpc': powerpc,
225 # 'sparc': sparc,
226 # 'arm': arm,
227 # 'aarch64': arm,
228 # 'thumb': arm,
229 # 'mips': mips,
230 # 'mips64': mips
231 # }[context.arch]
232 # except:
233 # return False
234
235 def register_size(reg):
236 return sizes[reg]
237
238 def fits_in_register(reg, value):
239 return register_size(reg) >= bits_required(value)
240
[end of pwnlib/shellcraft/registers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwnlib/shellcraft/registers.py b/pwnlib/shellcraft/registers.py
--- a/pwnlib/shellcraft/registers.py
+++ b/pwnlib/shellcraft/registers.py
@@ -211,7 +211,7 @@
'arm': arm,
'thumb': arm,
'aarch64': aarch64,
- 'mips': mips,
+ 'mips': list(mips),
'powerpc': powerpc
}[context.arch]
|
{"golden_diff": "diff --git a/pwnlib/shellcraft/registers.py b/pwnlib/shellcraft/registers.py\n--- a/pwnlib/shellcraft/registers.py\n+++ b/pwnlib/shellcraft/registers.py\n@@ -211,7 +211,7 @@\n 'arm': arm,\n 'thumb': arm,\n 'aarch64': aarch64,\n- 'mips': mips,\n+ 'mips': list(mips),\n 'powerpc': powerpc\n }[context.arch]\n", "issue": "Unable to create a shell shellcraft for MIPS\nThe problem is as follows:\r\n```py\r\n>>> from pwnlib.shellcraft import mips\r\n>>> mips.sh()\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"<string>\", line 8, in sh\r\n File \"/usr/lib64/python2.7/site-packages/mako/template.py\", line 462, in render\r\n return runtime._render(self, self.callable_, args, data)\r\n File \"/usr/lib64/python2.7/site-packages/mako/runtime.py\", line 838, in _render\r\n **_kwargs_for_callable(callable_, data))\r\n File \"/usr/lib64/python2.7/site-packages/mako/runtime.py\", line 873, in _render_context\r\n _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)\r\n File \"/usr/lib64/python2.7/site-packages/mako/runtime.py\", line 899, in _exec_template\r\n callable_(context, *args, **kwargs)\r\n File \"/home/are/.pwntools-cache/mako/mips/linux/sh.asm.py\", line 28, in render_body\r\n __M_writer(unicode(mips.execve('//bin/sh', ['sh'], {})))\r\n File \"<string>\", line 8, in execve\r\n File \"/usr/lib64/python2.7/site-packages/mako/template.py\", line 462, in render\r\n return runtime._render(self, self.callable_, args, data)\r\n File \"/usr/lib64/python2.7/site-packages/mako/runtime.py\", line 838, in _render\r\n **_kwargs_for_callable(callable_, data))\r\n File \"/usr/lib64/python2.7/site-packages/mako/runtime.py\", line 873, in _render_context\r\n _exec_template(inherit, lclcontext, args=args, kwargs=kwargs)\r\n File \"/usr/lib64/python2.7/site-packages/mako/runtime.py\", line 899, in _exec_template\r\n callable_(context, *args, **kwargs)\r\n File \"/home/are/.pwntools-cache/mako/mips/linux/syscalls/execve.asm.py\", line 69, in render_body\r\n if arg in allregs:\r\nTypeError: unhashable type: 'list'\r\n```\r\nBut it can be fixed by making sure that `shellcraft.registers.current()` returns a list rather than a dict (mips is the only architecture that it happens for, since `type(shellcraft.registers.mips)==dict`). 
A pull request is on its way.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport re\n\nfrom pwnlib.context import context\nfrom pwnlib.util.misc import register_sizes\n\nmips = {\n '$0' : 0, '$zero': 0,\n '$1' : 1, '$at': 1,\n '$2' : 2, '$v0': 2,\n '$3' : 3, '$v1': 3,\n '$4' : 4, '$a0': 4,\n '$5' : 5, '$a1': 5,\n '$6' : 6, '$a2': 6,\n '$7' : 7, '$a3': 7,\n '$8' : 8, '$t0': 8,\n '$9' : 9, '$t1': 9,\n '$10': 10, '$t2': 10,\n '$11': 11, '$t3': 11,\n '$12': 12, '$t4': 12,\n '$13': 13, '$t5': 13,\n '$14': 14, '$t6': 14,\n '$15': 15, '$t7': 15,\n '$16': 16, '$s0': 16,\n '$17': 17, '$s1': 17,\n '$18': 18, '$s2': 18,\n '$19': 19, '$s3': 19,\n '$20': 20, '$s4': 20,\n '$21': 21, '$s5': 21,\n '$22': 22, '$s6': 22,\n '$23': 23, '$s7': 23,\n '$24': 24, '$t8': 24,\n '$25': 25, '$t9': 25,\n '$26': 26, '$k0': 26,\n '$27': 27, '$k1': 27,\n '$28': 28, '$gp': 28,\n '$29': 29, '$sp': 29,\n '$30': 30, '$s8': 30,\n '$31': 31, '$ra': 31,\n}\n\narm = map('r{}'.format, range(13))\narm += [\"sp\", \"lr\", \"pc\", \"cpsr\"]\n\nthumb = arm\n\naarch64 = map('x{}'.format, range(32))\naarch64 += [\"sp\", \"lr\", \"pc\", \"cpsr\"]\n\ni386_baseregs = [ \"ax\", \"cx\", \"dx\", \"bx\", \"sp\", \"bp\", \"si\", \"di\", \"ip\"]\n\ni386 = map('e{}'.format, i386_baseregs)\ni386 += i386_baseregs\ni386 += [ \"eflags\", \"cs\", \"ss\", \"ds\", \"es\", \"fs\", \"gs\", ]\n\namd64 = map('r{}'.format, i386_baseregs)\namd64 += map('r{}'.format, range(8,16))\namd64 += map('r{}d'.format, range(8,16))\namd64 += i386\n\npowerpc = map('r{}'.format, range(32))\npowerpc += [\"pc\", \"msr\", \"cr\", \"lr\", \"ctr\", \"xer\", \"orig_r3\", \"trap\" ]\npowerpc = map('%{}'.format, powerpc)\n\nsparc = map('g{}'.format, range(8))\nsparc += map('o{}'.format, range(5))\nsparc += map('l{}'.format, range(8))\nsparc += map('i{}'.format, range(5))\nsparc += [\"pc\", \"sp\", \"fp\", \"psr\" ]\nsparc = map('%{}'.format, sparc)\n\n\n\n# x86/amd64 registers in decreasing size\ni386_ordered = [\n ['rax', 'eax', 'ax', 'al'],\n ['rbx', 'ebx', 'bx', 'bl'],\n ['rcx', 'ecx', 'cx', 'cl'],\n ['rdx', 'edx', 'dx', 'dl'],\n ['rdi', 'edi', 'di'],\n ['rsi', 'esi', 'si'],\n ['rbp', 'ebp', 'bp'],\n ['rsp', 'esp', 'sp'],\n ['r8', 'r8d', 'r8w', 'r8b'],\n ['r9', 'r9d', 'r9w', 'r9b'],\n ['r10', 'r10d', 'r10w', 'r10b'],\n ['r11', 'r11d', 'r11w', 'r11b'],\n ['r12', 'r12d', 'r12w', 'r12b'],\n ['r13', 'r13d', 'r13w', 'r13b'],\n ['r14', 'r14d', 'r14w', 'r14b'],\n ['r15', 'r15d', 'r15w', 'r15b']\n]\n\nall_regs, sizes, bigger, smaller = register_sizes(i386_ordered, [64, 32, 16, 8, 8])\nnative64 = {k:v[0] for k,v in bigger.items()}\nnative32 = {k:v[1] for k,v in bigger.items() if not k.startswith('r')}\n\nclass Register(object):\n #: Register name\n name = None\n\n #: List of larger registers, in order from largest to smallest\n bigger = None\n\n #: List of smaller regsters, in order from smallest to largest\n smaller = None\n\n #: Size of the register, in bits\n size = None\n\n #: Does this register have a 'high' register for mask 0xff00\n ff00 = None\n\n #: Flags for 64-bit mode.64-bit\n #: The first bit is set, if the register can be used with a REX-mode\n #: The second bit is set, if the register can be used without a REX-prefix\n rex_mode = 0\n\n #: Is this a 64-bit only register?\n is64bit = False\n\n #: Name of the native 64-bit register\n native64 = None\n\n #: Name of the native 32-bit register\n native32 = None\n\n #: Name of the register which should be used to clear\n #: this register, e.g. 
xor REG, REG.\n #: Useful for AMD64 for xor eax, eax is shorter than\n #: xor rax, rax and has the same effect.\n xor = None\n\n def __init__(self, name, size):\n self.name = name\n self.size = size\n\n for row in i386_ordered:\n if name in row:\n self.bigger = row[0:row.index(name)]\n self.smaller = row[row.index(name)+1:]\n self.sizes = {64>>i:r for i,r in enumerate(row)}\n self.native64 = row[0]\n self.native32 = row[1]\n self.xor = self.sizes[min(self.size, 32)]\n\n if self.size >= 32 and name.endswith('x'):\n self.ff00 = name[1] + 'h'\n\n if name[-1] != 'h':\n self.rex_mode |= 1\n\n if name[0] != 'r':\n self.rex_mode |= 2\n\n if name.startswith('r') or name[1:3].isdigit():\n self.is64bit = True\n\n @property\n def bits(self):\n return self.size\n\n @property\n def bytes(self):\n return self.bits / 8\n\n def fits(self, value):\n return self.size >= bits_required(value)\n\n def __str__(self):\n return self.name\n\n def __repr__(self):\n return \"Register(%r)\" % self.name\n\nintel = {}\n\nfor row in i386_ordered:\n for i, reg in enumerate(row):\n intel[reg] = Register(reg, 64 >> i)\n\ndef get_register(name):\n if isinstance(name, Register):\n return name\n if isinstance(name, str):\n return intel.get(name, None)\n return None\n\ndef is_register(obj):\n if isinstance(obj, Register):\n return True\n return get_register(obj)\n\n\ndef bits_required(value):\n bits = 0\n\n if value < 0:\n value = -(value)\n\n while value:\n value >>= 8\n bits += 8\n return bits\n\ndef current():\n return {\n 'i386': i386,\n 'amd64': amd64,\n 'arm': arm,\n 'thumb': arm,\n 'aarch64': aarch64,\n 'mips': mips,\n 'powerpc': powerpc\n }[context.arch]\n\n# def is_register(sz):\n# try:\n# sz = sz.lower()\n# return sz.lower() in {\n# 'i386': i386,\n# 'amd64': amd64,\n# 'powerpc': powerpc,\n# 'sparc': sparc,\n# 'arm': arm,\n# 'aarch64': arm,\n# 'thumb': arm,\n# 'mips': mips,\n# 'mips64': mips\n# }[context.arch]\n# except:\n# return False\n\ndef register_size(reg):\n return sizes[reg]\n\ndef fits_in_register(reg, value):\n return register_size(reg) >= bits_required(value)\n", "path": "pwnlib/shellcraft/registers.py"}]}
| 4,028 | 119 |
gh_patches_debug_1805
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-840
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document the new setup utility
Title says it all.
</issue>
<code>
[start of setup/server.py]
1 import flask
2 import flask_bootstrap
3 import redis
4 import json
5 import os
6 import jinja2
7 import uuid
8 import string
9 import random
10 import ipaddress
11 import hashlib
12
13
14 version = os.getenv("this_version")
15 static_url_path = "/" + version + "/static"
16 app = flask.Flask(__name__, static_url_path=static_url_path)
17 flask_bootstrap.Bootstrap(app)
18 db = redis.StrictRedis(host='redis', port=6379, db=0)
19
20
21 def render_flavor(flavor, template, data):
22 return flask.render_template(
23 os.path.join(flavor, template),
24 **data
25 )
26
27
28 @app.add_template_global
29 def secret(length=16):
30 charset = string.ascii_uppercase + string.digits
31 return ''.join(
32 random.SystemRandom().choice(charset)
33 for _ in range(length)
34 )
35
36
37 def build_app(path):
38
39 app.jinja_env.trim_blocks = True
40 app.jinja_env.lstrip_blocks = True
41
42 @app.context_processor
43 def app_context():
44 return dict(versions=os.getenv("VERSIONS","master").split(','))
45
46 prefix_bp = flask.Blueprint(version, __name__)
47 prefix_bp.jinja_loader = jinja2.ChoiceLoader([
48 jinja2.FileSystemLoader(os.path.join(path, "templates")),
49 jinja2.FileSystemLoader(os.path.join(path, "flavors"))
50 ])
51
52 root_bp = flask.Blueprint("root", __name__)
53 root_bp.jinja_loader = jinja2.ChoiceLoader([
54 jinja2.FileSystemLoader(os.path.join(path, "templates")),
55 jinja2.FileSystemLoader(os.path.join(path, "flavors"))
56 ])
57
58 @prefix_bp.context_processor
59 @root_bp.context_processor
60 def bp_context(version=version):
61 return dict(version=version)
62
63 @prefix_bp.route("/")
64 @root_bp.route("/")
65 def wizard():
66 return flask.render_template('wizard.html')
67
68 @prefix_bp.route("/submit_flavor", methods=["POST"])
69 @root_bp.route("/submit_flavor", methods=["POST"])
70 def submit_flavor():
71 data = flask.request.form.copy()
72 steps = sorted(os.listdir(os.path.join(path, "templates", "steps", data["flavor"])))
73 return flask.render_template('wizard.html', flavor=data["flavor"], steps=steps)
74
75 @prefix_bp.route("/submit", methods=["POST"])
76 @root_bp.route("/submit", methods=["POST"])
77 def submit():
78 data = flask.request.form.copy()
79 data['uid'] = str(uuid.uuid4())
80 data['dns'] = str(ipaddress.IPv4Network(data['subnet'])[-2])
81 db.set(data['uid'], json.dumps(data))
82 return flask.redirect(flask.url_for('.setup', uid=data['uid']))
83
84 @prefix_bp.route("/setup/<uid>", methods=["GET"])
85 @root_bp.route("/setup/<uid>", methods=["GET"])
86 def setup(uid):
87 data = json.loads(db.get(uid))
88 flavor = data.get("flavor", "compose")
89 rendered = render_flavor(flavor, "setup.html", data)
90 return flask.render_template("setup.html", contents=rendered)
91
92 @prefix_bp.route("/file/<uid>/<filepath>", methods=["GET"])
93 @root_bp.route("/file/<uid>/<filepath>", methods=["GET"])
94 def file(uid, filepath):
95 data = json.loads(db.get(uid))
96 flavor = data.get("flavor", "compose")
97 return flask.Response(
98 render_flavor(flavor, filepath, data),
99 mimetype="application/text"
100 )
101
102 app.register_blueprint(prefix_bp, url_prefix="/{}".format(version))
103 app.register_blueprint(root_bp)
104
105
106 if __name__ == "__main__":
107 build_app("/tmp/mailutest")
108 app.run(debug=True)
109
[end of setup/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup/server.py b/setup/server.py
--- a/setup/server.py
+++ b/setup/server.py
@@ -11,7 +11,7 @@
import hashlib
-version = os.getenv("this_version")
+version = os.getenv("this_version", "master")
static_url_path = "/" + version + "/static"
app = flask.Flask(__name__, static_url_path=static_url_path)
flask_bootstrap.Bootstrap(app)
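A short note on why this one-line change matters (my reading of the diff, not stated in the issue): `os.getenv` returns `None` when the variable is unset, and the string concatenation on the next line then raises, so the fallback keeps the setup app importable without `this_version` being provided. Minimal sketch:

```python
import os

version = os.getenv("this_version")              # None when the variable is not set
# "/" + version + "/static"                      # would raise TypeError (cannot concatenate str and None)

version = os.getenv("this_version", "master")    # falls back to "master"
static_url_path = "/" + version + "/static"
print(static_url_path)                           # /master/static
```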
|
{"golden_diff": "diff --git a/setup/server.py b/setup/server.py\n--- a/setup/server.py\n+++ b/setup/server.py\n@@ -11,7 +11,7 @@\n import hashlib\n \n \n-version = os.getenv(\"this_version\")\n+version = os.getenv(\"this_version\", \"master\")\n static_url_path = \"/\" + version + \"/static\"\n app = flask.Flask(__name__, static_url_path=static_url_path)\n flask_bootstrap.Bootstrap(app)\n", "issue": "Document the new setup utility\nTitle says all\n", "before_files": [{"content": "import flask\nimport flask_bootstrap\nimport redis\nimport json\nimport os\nimport jinja2\nimport uuid\nimport string\nimport random\nimport ipaddress\nimport hashlib\n\n\nversion = os.getenv(\"this_version\")\nstatic_url_path = \"/\" + version + \"/static\"\napp = flask.Flask(__name__, static_url_path=static_url_path)\nflask_bootstrap.Bootstrap(app)\ndb = redis.StrictRedis(host='redis', port=6379, db=0)\n\n\ndef render_flavor(flavor, template, data):\n return flask.render_template(\n os.path.join(flavor, template),\n **data\n )\n\n\[email protected]_template_global\ndef secret(length=16):\n charset = string.ascii_uppercase + string.digits\n return ''.join(\n random.SystemRandom().choice(charset)\n for _ in range(length)\n )\n\n\ndef build_app(path):\n\n app.jinja_env.trim_blocks = True\n app.jinja_env.lstrip_blocks = True\n\n @app.context_processor\n def app_context():\n return dict(versions=os.getenv(\"VERSIONS\",\"master\").split(','))\n\n prefix_bp = flask.Blueprint(version, __name__)\n prefix_bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n\n root_bp = flask.Blueprint(\"root\", __name__)\n root_bp.jinja_loader = jinja2.ChoiceLoader([\n jinja2.FileSystemLoader(os.path.join(path, \"templates\")),\n jinja2.FileSystemLoader(os.path.join(path, \"flavors\"))\n ])\n\n @prefix_bp.context_processor\n @root_bp.context_processor\n def bp_context(version=version):\n return dict(version=version)\n\n @prefix_bp.route(\"/\")\n @root_bp.route(\"/\")\n def wizard():\n return flask.render_template('wizard.html')\n\n @prefix_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n @root_bp.route(\"/submit_flavor\", methods=[\"POST\"])\n def submit_flavor():\n data = flask.request.form.copy()\n steps = sorted(os.listdir(os.path.join(path, \"templates\", \"steps\", data[\"flavor\"])))\n return flask.render_template('wizard.html', flavor=data[\"flavor\"], steps=steps)\n\n @prefix_bp.route(\"/submit\", methods=[\"POST\"])\n @root_bp.route(\"/submit\", methods=[\"POST\"])\n def submit():\n data = flask.request.form.copy()\n data['uid'] = str(uuid.uuid4())\n data['dns'] = str(ipaddress.IPv4Network(data['subnet'])[-2])\n db.set(data['uid'], json.dumps(data))\n return flask.redirect(flask.url_for('.setup', uid=data['uid']))\n\n @prefix_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n @root_bp.route(\"/setup/<uid>\", methods=[\"GET\"])\n def setup(uid):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n rendered = render_flavor(flavor, \"setup.html\", data)\n return flask.render_template(\"setup.html\", contents=rendered)\n\n @prefix_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n @root_bp.route(\"/file/<uid>/<filepath>\", methods=[\"GET\"])\n def file(uid, filepath):\n data = json.loads(db.get(uid))\n flavor = data.get(\"flavor\", \"compose\")\n return flask.Response(\n render_flavor(flavor, filepath, data),\n mimetype=\"application/text\"\n )\n\n app.register_blueprint(prefix_bp, 
url_prefix=\"/{}\".format(version))\n app.register_blueprint(root_bp)\n\n\nif __name__ == \"__main__\":\n build_app(\"/tmp/mailutest\")\n app.run(debug=True)\n", "path": "setup/server.py"}]}
| 1,570 | 93 |
gh_patches_debug_8556
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-7503
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dns-rfc2136 is unusable if packet loss is present
The certbot_dns_rfc2136 authenticator is not robust on a lossy network.
During authentication, the plugin has to send several SOA queries directly to the authoritative nameserver. The number of queries depends on the specific configuration. In my case, it sends out 21 queries for a single certificate with 6 DNS alt names.
Currently, it sends them out using UDP **without a timeout** and without a retry mechanism. Thus, any single packet lost on these queries will cause certbot to hang forever.
https://github.com/certbot/certbot/blob/3c24ff88cc0106ac39e5b0f5bd6bf0f29572201e/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py#L209-L210
In my case, the network between my webserver and the nameserver regularly sees a ~20% packet loss rate (since one of them is in China). The chance that I pass authentication is around 0.8 ^ 21 < 1%.
## Proposed solution
1. Add a timeout with proper retry mechanism; or
2. Simply use TCP and let the OS handle it for us.
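A minimal sketch of what either option could look like with dnspython (hypothetical helper, not the plugin's actual code; `dns.query.udp` and `dns.query.tcp` accept a `timeout` argument):

```python
import dns.exception
import dns.query

def query_soa_with_retry(request, server, port=53, attempts=3, timeout=5):
    # Option 1: bounded UDP attempts instead of waiting forever
    for _ in range(attempts):
        try:
            return dns.query.udp(request, server, timeout=timeout, port=port)
        except dns.exception.Timeout:
            continue
    # Option 2: fall back to TCP and let the OS handle retransmission
    return dns.query.tcp(request, server, timeout=timeout, port=port)
```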
</issue>
<code>
[start of certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py]
1 """DNS Authenticator using RFC 2136 Dynamic Updates."""
2 import logging
3
4 import dns.flags
5 import dns.message
6 import dns.name
7 import dns.query
8 import dns.rdataclass
9 import dns.rdatatype
10 import dns.tsig
11 import dns.tsigkeyring
12 import dns.update
13 import zope.interface
14
15 from certbot import errors
16 from certbot import interfaces
17 from certbot.plugins import dns_common
18
19 logger = logging.getLogger(__name__)
20
21
22 @zope.interface.implementer(interfaces.IAuthenticator)
23 @zope.interface.provider(interfaces.IPluginFactory)
24 class Authenticator(dns_common.DNSAuthenticator):
25 """DNS Authenticator using RFC 2136 Dynamic Updates
26
27 This Authenticator uses RFC 2136 Dynamic Updates to fulfull a dns-01 challenge.
28 """
29
30 ALGORITHMS = {
31 'HMAC-MD5': dns.tsig.HMAC_MD5,
32 'HMAC-SHA1': dns.tsig.HMAC_SHA1,
33 'HMAC-SHA224': dns.tsig.HMAC_SHA224,
34 'HMAC-SHA256': dns.tsig.HMAC_SHA256,
35 'HMAC-SHA384': dns.tsig.HMAC_SHA384,
36 'HMAC-SHA512': dns.tsig.HMAC_SHA512
37 }
38
39 PORT = 53
40
41 description = 'Obtain certificates using a DNS TXT record (if you are using BIND for DNS).'
42 ttl = 120
43
44 def __init__(self, *args, **kwargs):
45 super(Authenticator, self).__init__(*args, **kwargs)
46 self.credentials = None
47
48 @classmethod
49 def add_parser_arguments(cls, add): # pylint: disable=arguments-differ
50 super(Authenticator, cls).add_parser_arguments(add, default_propagation_seconds=60)
51 add('credentials', help='RFC 2136 credentials INI file.')
52
53 def more_info(self): # pylint: disable=missing-docstring,no-self-use
54 return 'This plugin configures a DNS TXT record to respond to a dns-01 challenge using ' + \
55 'RFC 2136 Dynamic Updates.'
56
57 def _validate_algorithm(self, credentials):
58 algorithm = credentials.conf('algorithm')
59 if algorithm:
60 if not self.ALGORITHMS.get(algorithm.upper()):
61 raise errors.PluginError("Unknown algorithm: {0}.".format(algorithm))
62
63 def _setup_credentials(self):
64 self.credentials = self._configure_credentials(
65 'credentials',
66 'RFC 2136 credentials INI file',
67 {
68 'name': 'TSIG key name',
69 'secret': 'TSIG key secret',
70 'server': 'The target DNS server'
71 },
72 self._validate_algorithm
73 )
74
75 def _perform(self, _domain, validation_name, validation):
76 self._get_rfc2136_client().add_txt_record(validation_name, validation, self.ttl)
77
78 def _cleanup(self, _domain, validation_name, validation):
79 self._get_rfc2136_client().del_txt_record(validation_name, validation)
80
81 def _get_rfc2136_client(self):
82 return _RFC2136Client(self.credentials.conf('server'),
83 int(self.credentials.conf('port') or self.PORT),
84 self.credentials.conf('name'),
85 self.credentials.conf('secret'),
86 self.ALGORITHMS.get(self.credentials.conf('algorithm'),
87 dns.tsig.HMAC_MD5))
88
89
90 class _RFC2136Client(object):
91 """
92 Encapsulates all communication with the target DNS server.
93 """
94 def __init__(self, server, port, key_name, key_secret, key_algorithm):
95 self.server = server
96 self.port = port
97 self.keyring = dns.tsigkeyring.from_text({
98 key_name: key_secret
99 })
100 self.algorithm = key_algorithm
101
102 def add_txt_record(self, record_name, record_content, record_ttl):
103 """
104 Add a TXT record using the supplied information.
105
106 :param str record_name: The record name (typically beginning with '_acme-challenge.').
107 :param str record_content: The record content (typically the challenge validation).
108 :param int record_ttl: The record TTL (number of seconds that the record may be cached).
109 :raises certbot.errors.PluginError: if an error occurs communicating with the DNS server
110 """
111
112 domain = self._find_domain(record_name)
113
114 n = dns.name.from_text(record_name)
115 o = dns.name.from_text(domain)
116 rel = n.relativize(o)
117
118 update = dns.update.Update(
119 domain,
120 keyring=self.keyring,
121 keyalgorithm=self.algorithm)
122 update.add(rel, record_ttl, dns.rdatatype.TXT, record_content)
123
124 try:
125 response = dns.query.tcp(update, self.server, port=self.port)
126 except Exception as e:
127 raise errors.PluginError('Encountered error adding TXT record: {0}'
128 .format(e))
129 rcode = response.rcode()
130
131 if rcode == dns.rcode.NOERROR:
132 logger.debug('Successfully added TXT record')
133 else:
134 raise errors.PluginError('Received response from server: {0}'
135 .format(dns.rcode.to_text(rcode)))
136
137 def del_txt_record(self, record_name, record_content):
138 """
139 Delete a TXT record using the supplied information.
140
141 :param str record_name: The record name (typically beginning with '_acme-challenge.').
142 :param str record_content: The record content (typically the challenge validation).
143 :param int record_ttl: The record TTL (number of seconds that the record may be cached).
144 :raises certbot.errors.PluginError: if an error occurs communicating with the DNS server
145 """
146
147 domain = self._find_domain(record_name)
148
149 n = dns.name.from_text(record_name)
150 o = dns.name.from_text(domain)
151 rel = n.relativize(o)
152
153 update = dns.update.Update(
154 domain,
155 keyring=self.keyring,
156 keyalgorithm=self.algorithm)
157 update.delete(rel, dns.rdatatype.TXT, record_content)
158
159 try:
160 response = dns.query.tcp(update, self.server, port=self.port)
161 except Exception as e:
162 raise errors.PluginError('Encountered error deleting TXT record: {0}'
163 .format(e))
164 rcode = response.rcode()
165
166 if rcode == dns.rcode.NOERROR:
167 logger.debug('Successfully deleted TXT record')
168 else:
169 raise errors.PluginError('Received response from server: {0}'
170 .format(dns.rcode.to_text(rcode)))
171
172 def _find_domain(self, record_name):
173 """
174 Find the closest domain with an SOA record for a given domain name.
175
176 :param str record_name: The record name for which to find the closest SOA record.
177 :returns: The domain, if found.
178 :rtype: str
179 :raises certbot.errors.PluginError: if no SOA record can be found.
180 """
181
182 domain_name_guesses = dns_common.base_domain_name_guesses(record_name)
183
184 # Loop through until we find an authoritative SOA record
185 for guess in domain_name_guesses:
186 if self._query_soa(guess):
187 return guess
188
189 raise errors.PluginError('Unable to determine base domain for {0} using names: {1}.'
190 .format(record_name, domain_name_guesses))
191
192 def _query_soa(self, domain_name):
193 """
194 Query a domain name for an authoritative SOA record.
195
196 :param str domain_name: The domain name to query for an SOA record.
197 :returns: True if found, False otherwise.
198 :rtype: bool
199 :raises certbot.errors.PluginError: if no response is received.
200 """
201
202 domain = dns.name.from_text(domain_name)
203
204 request = dns.message.make_query(domain, dns.rdatatype.SOA, dns.rdataclass.IN)
205 # Turn off Recursion Desired bit in query
206 request.flags ^= dns.flags.RD
207
208 try:
209 response = dns.query.udp(request, self.server, port=self.port)
210 rcode = response.rcode()
211
212 # Authoritative Answer bit should be set
213 if (rcode == dns.rcode.NOERROR and response.get_rrset(response.answer,
214 domain, dns.rdataclass.IN, dns.rdatatype.SOA) and response.flags & dns.flags.AA):
215 logger.debug('Received authoritative SOA response for %s', domain_name)
216 return True
217
218 logger.debug('No authoritative SOA record found for %s', domain_name)
219 return False
220 except Exception as e:
221 raise errors.PluginError('Encountered error when making query: {0}'
222 .format(e))
223
[end of certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py
--- a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py
+++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py
@@ -206,7 +206,11 @@
request.flags ^= dns.flags.RD
try:
- response = dns.query.udp(request, self.server, port=self.port)
+ try:
+ response = dns.query.tcp(request, self.server, port=self.port)
+ except OSError as e:
+ logger.debug('TCP query failed, fallback to UDP: %s', e)
+ response = dns.query.udp(request, self.server, port=self.port)
rcode = response.rcode()
# Authoritative Answer bit should be set
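For reference, the patched lookup can be exercised in isolation roughly like this (the server address and TSIG key below are placeholders, not values from the report):
```python
import dns.tsig
from certbot_dns_rfc2136.dns_rfc2136 import _RFC2136Client

# Placeholder credentials: substitute a reachable authoritative server and a real TSIG key.
client = _RFC2136Client("192.0.2.1", 53, "keyname.", "c2VjcmV0", dns.tsig.HMAC_SHA512)
print(client._query_soa("example.com"))  # True once the SOA query succeeds (now tried over TCP first)
```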
|
{"golden_diff": "diff --git a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py\n--- a/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py\n+++ b/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py\n@@ -206,7 +206,11 @@\n request.flags ^= dns.flags.RD\n \n try:\n- response = dns.query.udp(request, self.server, port=self.port)\n+ try:\n+ response = dns.query.tcp(request, self.server, port=self.port)\n+ except OSError as e:\n+ logger.debug('TCP query failed, fallback to UDP: %s', e)\n+ response = dns.query.udp(request, self.server, port=self.port)\n rcode = response.rcode()\n \n # Authoritative Answer bit should be set\n", "issue": "dns-rfc2136 is unusable if packet loss present\ncertbot_dns_rfc2136 authenticator is not robust on a losing network.\r\n\r\nDuring authenticating, the plugin have to send several SOA queries directly to the authoritative nameserver. The number of queries is depended on the specific configuration. In my case, it sends out 21 queries for a single certification with 6 dns-alt names.\r\n\r\nCurrently, it sends them out using UDP **without timeout** and without retry mechanism. Thus, any single packet lost on these queries will cause the certbot stuck forever.\r\n\r\nhttps://github.com/certbot/certbot/blob/3c24ff88cc0106ac39e5b0f5bd6bf0f29572201e/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py#L209-L210\r\n\r\nIn my case, the network between my webserver and the nameserver regularly get a ~20% packet loss rate (since one of them is in China). The chance I pass the authentication is around 0.8 ^ 21 < 1%.\r\n\r\n## Proposed solution\r\n\r\n1. Add a timeout with proper retry mechanism; or\r\n2. Simply use TCP and let the OS handle it for us.\ndns-rfc2136 is unusable if packet loss present\ncertbot_dns_rfc2136 authenticator is not robust on a losing network.\r\n\r\nDuring authenticating, the plugin have to send several SOA queries directly to the authoritative nameserver. The number of queries is depended on the specific configuration. In my case, it sends out 21 queries for a single certification with 6 dns-alt names.\r\n\r\nCurrently, it sends them out using UDP **without timeout** and without retry mechanism. Thus, any single packet lost on these queries will cause the certbot stuck forever.\r\n\r\nhttps://github.com/certbot/certbot/blob/3c24ff88cc0106ac39e5b0f5bd6bf0f29572201e/certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py#L209-L210\r\n\r\nIn my case, the network between my webserver and the nameserver regularly get a ~20% packet loss rate (since one of them is in China). The chance I pass the authentication is around 0.8 ^ 21 < 1%.\r\n\r\n## Proposed solution\r\n\r\n1. Add a timeout with proper retry mechanism; or\r\n2. 
Simply use TCP and let the OS handle it for us.\n", "before_files": [{"content": "\"\"\"DNS Authenticator using RFC 2136 Dynamic Updates.\"\"\"\nimport logging\n\nimport dns.flags\nimport dns.message\nimport dns.name\nimport dns.query\nimport dns.rdataclass\nimport dns.rdatatype\nimport dns.tsig\nimport dns.tsigkeyring\nimport dns.update\nimport zope.interface\n\nfrom certbot import errors\nfrom certbot import interfaces\nfrom certbot.plugins import dns_common\n\nlogger = logging.getLogger(__name__)\n\n\[email protected](interfaces.IAuthenticator)\[email protected](interfaces.IPluginFactory)\nclass Authenticator(dns_common.DNSAuthenticator):\n \"\"\"DNS Authenticator using RFC 2136 Dynamic Updates\n\n This Authenticator uses RFC 2136 Dynamic Updates to fulfull a dns-01 challenge.\n \"\"\"\n\n ALGORITHMS = {\n 'HMAC-MD5': dns.tsig.HMAC_MD5,\n 'HMAC-SHA1': dns.tsig.HMAC_SHA1,\n 'HMAC-SHA224': dns.tsig.HMAC_SHA224,\n 'HMAC-SHA256': dns.tsig.HMAC_SHA256,\n 'HMAC-SHA384': dns.tsig.HMAC_SHA384,\n 'HMAC-SHA512': dns.tsig.HMAC_SHA512\n }\n\n PORT = 53\n\n description = 'Obtain certificates using a DNS TXT record (if you are using BIND for DNS).'\n ttl = 120\n\n def __init__(self, *args, **kwargs):\n super(Authenticator, self).__init__(*args, **kwargs)\n self.credentials = None\n\n @classmethod\n def add_parser_arguments(cls, add): # pylint: disable=arguments-differ\n super(Authenticator, cls).add_parser_arguments(add, default_propagation_seconds=60)\n add('credentials', help='RFC 2136 credentials INI file.')\n\n def more_info(self): # pylint: disable=missing-docstring,no-self-use\n return 'This plugin configures a DNS TXT record to respond to a dns-01 challenge using ' + \\\n 'RFC 2136 Dynamic Updates.'\n\n def _validate_algorithm(self, credentials):\n algorithm = credentials.conf('algorithm')\n if algorithm:\n if not self.ALGORITHMS.get(algorithm.upper()):\n raise errors.PluginError(\"Unknown algorithm: {0}.\".format(algorithm))\n\n def _setup_credentials(self):\n self.credentials = self._configure_credentials(\n 'credentials',\n 'RFC 2136 credentials INI file',\n {\n 'name': 'TSIG key name',\n 'secret': 'TSIG key secret',\n 'server': 'The target DNS server'\n },\n self._validate_algorithm\n )\n\n def _perform(self, _domain, validation_name, validation):\n self._get_rfc2136_client().add_txt_record(validation_name, validation, self.ttl)\n\n def _cleanup(self, _domain, validation_name, validation):\n self._get_rfc2136_client().del_txt_record(validation_name, validation)\n\n def _get_rfc2136_client(self):\n return _RFC2136Client(self.credentials.conf('server'),\n int(self.credentials.conf('port') or self.PORT),\n self.credentials.conf('name'),\n self.credentials.conf('secret'),\n self.ALGORITHMS.get(self.credentials.conf('algorithm'),\n dns.tsig.HMAC_MD5))\n\n\nclass _RFC2136Client(object):\n \"\"\"\n Encapsulates all communication with the target DNS server.\n \"\"\"\n def __init__(self, server, port, key_name, key_secret, key_algorithm):\n self.server = server\n self.port = port\n self.keyring = dns.tsigkeyring.from_text({\n key_name: key_secret\n })\n self.algorithm = key_algorithm\n\n def add_txt_record(self, record_name, record_content, record_ttl):\n \"\"\"\n Add a TXT record using the supplied information.\n\n :param str record_name: The record name (typically beginning with '_acme-challenge.').\n :param str record_content: The record content (typically the challenge validation).\n :param int record_ttl: The record TTL (number of seconds that the record may be cached).\n :raises 
certbot.errors.PluginError: if an error occurs communicating with the DNS server\n \"\"\"\n\n domain = self._find_domain(record_name)\n\n n = dns.name.from_text(record_name)\n o = dns.name.from_text(domain)\n rel = n.relativize(o)\n\n update = dns.update.Update(\n domain,\n keyring=self.keyring,\n keyalgorithm=self.algorithm)\n update.add(rel, record_ttl, dns.rdatatype.TXT, record_content)\n\n try:\n response = dns.query.tcp(update, self.server, port=self.port)\n except Exception as e:\n raise errors.PluginError('Encountered error adding TXT record: {0}'\n .format(e))\n rcode = response.rcode()\n\n if rcode == dns.rcode.NOERROR:\n logger.debug('Successfully added TXT record')\n else:\n raise errors.PluginError('Received response from server: {0}'\n .format(dns.rcode.to_text(rcode)))\n\n def del_txt_record(self, record_name, record_content):\n \"\"\"\n Delete a TXT record using the supplied information.\n\n :param str record_name: The record name (typically beginning with '_acme-challenge.').\n :param str record_content: The record content (typically the challenge validation).\n :param int record_ttl: The record TTL (number of seconds that the record may be cached).\n :raises certbot.errors.PluginError: if an error occurs communicating with the DNS server\n \"\"\"\n\n domain = self._find_domain(record_name)\n\n n = dns.name.from_text(record_name)\n o = dns.name.from_text(domain)\n rel = n.relativize(o)\n\n update = dns.update.Update(\n domain,\n keyring=self.keyring,\n keyalgorithm=self.algorithm)\n update.delete(rel, dns.rdatatype.TXT, record_content)\n\n try:\n response = dns.query.tcp(update, self.server, port=self.port)\n except Exception as e:\n raise errors.PluginError('Encountered error deleting TXT record: {0}'\n .format(e))\n rcode = response.rcode()\n\n if rcode == dns.rcode.NOERROR:\n logger.debug('Successfully deleted TXT record')\n else:\n raise errors.PluginError('Received response from server: {0}'\n .format(dns.rcode.to_text(rcode)))\n\n def _find_domain(self, record_name):\n \"\"\"\n Find the closest domain with an SOA record for a given domain name.\n\n :param str record_name: The record name for which to find the closest SOA record.\n :returns: The domain, if found.\n :rtype: str\n :raises certbot.errors.PluginError: if no SOA record can be found.\n \"\"\"\n\n domain_name_guesses = dns_common.base_domain_name_guesses(record_name)\n\n # Loop through until we find an authoritative SOA record\n for guess in domain_name_guesses:\n if self._query_soa(guess):\n return guess\n\n raise errors.PluginError('Unable to determine base domain for {0} using names: {1}.'\n .format(record_name, domain_name_guesses))\n\n def _query_soa(self, domain_name):\n \"\"\"\n Query a domain name for an authoritative SOA record.\n\n :param str domain_name: The domain name to query for an SOA record.\n :returns: True if found, False otherwise.\n :rtype: bool\n :raises certbot.errors.PluginError: if no response is received.\n \"\"\"\n\n domain = dns.name.from_text(domain_name)\n\n request = dns.message.make_query(domain, dns.rdatatype.SOA, dns.rdataclass.IN)\n # Turn off Recursion Desired bit in query\n request.flags ^= dns.flags.RD\n\n try:\n response = dns.query.udp(request, self.server, port=self.port)\n rcode = response.rcode()\n\n # Authoritative Answer bit should be set\n if (rcode == dns.rcode.NOERROR and response.get_rrset(response.answer,\n domain, dns.rdataclass.IN, dns.rdatatype.SOA) and response.flags & dns.flags.AA):\n logger.debug('Received authoritative SOA response for %s', 
domain_name)\n return True\n\n logger.debug('No authoritative SOA record found for %s', domain_name)\n return False\n except Exception as e:\n raise errors.PluginError('Encountered error when making query: {0}'\n .format(e))\n", "path": "certbot-dns-rfc2136/certbot_dns_rfc2136/dns_rfc2136.py"}]}
| 3,620 | 257 |
gh_patches_debug_35731
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-3029
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for `_repr_html_` property on objects
From a user post here: https://discuss.streamlit.io/t/look-for-html-repr-on-objects-generally-a-method-called-repr-html/1939
> I have just started looking into streamlit after working on and using Panel. I am not sure if I missed this, but I was expecting an object with a _repr_html_ method to be automatically renderable with streamlit.
>
> If streamlit looked for that method, it would be easy for other libraries to make themselves renderable. Additionally, many libraries already have HTML reprs since they are renderable in notebooks. See this blog post for examples of libraries that comply with this de facto standard.
This seems like a good thing to add alongside `st.iframe` (re #686)
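For context, the `_repr_html_` convention the poster refers to is just a method returning an HTML string, e.g. (toy example, not taken from any of the libraries mentioned):
```python
class ColorSwatch:
    """Minimal object following the notebook `_repr_html_` convention."""

    def __init__(self, hex_code):
        self.hex_code = hex_code

    def _repr_html_(self):
        # Jupyter (and libraries such as pandas) return an HTML fragment from this hook.
        return f'<div style="background:{self.hex_code};width:4em;height:2em"></div>'
```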
</issue>
<code>
[start of lib/streamlit/elements/write.py]
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json as json
16 import types
17 from typing import cast, Any, List, Tuple, Type
18
19 import numpy as np
20
21 import streamlit
22 from streamlit import type_util
23 from streamlit.errors import StreamlitAPIException
24
25 # Special methods:
26
27 HELP_TYPES = (
28 types.BuiltinFunctionType,
29 types.BuiltinMethodType,
30 types.FunctionType,
31 types.MethodType,
32 types.ModuleType,
33 ) # type: Tuple[Type[Any], ...]
34
35
36 class WriteMixin:
37 def write(self, *args, **kwargs):
38 """Write arguments to the app.
39
40 This is the Swiss Army knife of Streamlit commands: it does different
41 things depending on what you throw at it. Unlike other Streamlit commands,
42 write() has some unique properties:
43
44 1. You can pass in multiple arguments, all of which will be written.
45 2. Its behavior depends on the input types as follows.
46 3. It returns None, so it's "slot" in the App cannot be reused.
47
48 Parameters
49 ----------
50 *args : any
51 One or many objects to print to the App.
52
53 Arguments are handled as follows:
54
55 - write(string) : Prints the formatted Markdown string, with
56 support for LaTeX expression and emoji shortcodes.
57 See docs for st.markdown for more.
58 - write(data_frame) : Displays the DataFrame as a table.
59 - write(error) : Prints an exception specially.
60 - write(func) : Displays information about a function.
61 - write(module) : Displays information about the module.
62 - write(dict) : Displays dict in an interactive widget.
63 - write(obj) : The default is to print str(obj).
64 - write(mpl_fig) : Displays a Matplotlib figure.
65 - write(altair) : Displays an Altair chart.
66 - write(keras) : Displays a Keras model.
67 - write(graphviz) : Displays a Graphviz graph.
68 - write(plotly_fig) : Displays a Plotly figure.
69 - write(bokeh_fig) : Displays a Bokeh figure.
70 - write(sympy_expr) : Prints SymPy expression using LaTeX.
71
72 unsafe_allow_html : bool
73 This is a keyword-only argument that defaults to False.
74
75 By default, any HTML tags found in strings will be escaped and
76 therefore treated as pure text. This behavior may be turned off by
77 setting this argument to True.
78
79 That said, *we strongly advise* against it*. It is hard to write secure
80 HTML, so by using this argument you may be compromising your users'
81 security. For more information, see:
82
83 https://github.com/streamlit/streamlit/issues/152
84
85 **Also note that `unsafe_allow_html` is a temporary measure and may be
86 removed from Streamlit at any time.**
87
88 If you decide to turn on HTML anyway, we ask you to please tell us your
89 exact use case here:
90 https://discuss.streamlit.io/t/96 .
91
92 This will help us come up with safe APIs that allow you to do what you
93 want.
94
95 Example
96 -------
97
98 Its basic use case is to draw Markdown-formatted text, whenever the
99 input is a string:
100
101 >>> write('Hello, *World!* :sunglasses:')
102
103 .. output::
104 https://static.streamlit.io/0.50.2-ZWk9/index.html?id=Pn5sjhgNs4a8ZbiUoSTRxE
105 height: 50px
106
107 As mentioned earlier, `st.write()` also accepts other data formats, such as
108 numbers, data frames, styled data frames, and assorted objects:
109
110 >>> st.write(1234)
111 >>> st.write(pd.DataFrame({
112 ... 'first column': [1, 2, 3, 4],
113 ... 'second column': [10, 20, 30, 40],
114 ... }))
115
116 .. output::
117 https://static.streamlit.io/0.25.0-2JkNY/index.html?id=FCp9AMJHwHRsWSiqMgUZGD
118 height: 250px
119
120 Finally, you can pass in multiple arguments to do things like:
121
122 >>> st.write('1 + 1 = ', 2)
123 >>> st.write('Below is a DataFrame:', data_frame, 'Above is a dataframe.')
124
125 .. output::
126 https://static.streamlit.io/0.25.0-2JkNY/index.html?id=DHkcU72sxYcGarkFbf4kK1
127 height: 300px
128
129 Oh, one more thing: `st.write` accepts chart objects too! For example:
130
131 >>> import pandas as pd
132 >>> import numpy as np
133 >>> import altair as alt
134 >>>
135 >>> df = pd.DataFrame(
136 ... np.random.randn(200, 3),
137 ... columns=['a', 'b', 'c'])
138 ...
139 >>> c = alt.Chart(df).mark_circle().encode(
140 ... x='a', y='b', size='c', color='c', tooltip=['a', 'b', 'c'])
141 >>>
142 >>> st.write(c)
143
144 .. output::
145 https://static.streamlit.io/0.25.0-2JkNY/index.html?id=8jmmXR8iKoZGV4kXaKGYV5
146 height: 200px
147
148 """
149 string_buffer = [] # type: List[str]
150 unsafe_allow_html = kwargs.get("unsafe_allow_html", False)
151
152 # This bans some valid cases like: e = st.empty(); e.write("a", "b").
153 # BUT: 1) such cases are rare, 2) this rule is easy to understand,
154 # and 3) this rule should be removed once we have st.container()
155 if not self.dg._is_top_level and len(args) > 1:
156 raise StreamlitAPIException(
157 "Cannot replace a single element with multiple elements.\n\n"
158 "The `write()` method only supports multiple elements when "
159 "inserting elements rather than replacing. That is, only "
160 "when called as `st.write()` or `st.sidebar.write()`."
161 )
162
163 def flush_buffer():
164 if string_buffer:
165 self.dg.markdown(
166 " ".join(string_buffer),
167 unsafe_allow_html=unsafe_allow_html,
168 )
169 string_buffer[:] = []
170
171 for arg in args:
172 # Order matters!
173 if isinstance(arg, str):
174 string_buffer.append(arg)
175 elif type_util.is_dataframe_like(arg):
176 flush_buffer()
177 if len(np.shape(arg)) > 2:
178 self.dg.text(arg)
179 else:
180 self.dg.dataframe(arg)
181 elif isinstance(arg, Exception):
182 flush_buffer()
183 self.dg.exception(arg)
184 elif isinstance(arg, HELP_TYPES):
185 flush_buffer()
186 self.dg.help(arg)
187 elif type_util.is_altair_chart(arg):
188 flush_buffer()
189 self.dg.altair_chart(arg)
190 elif type_util.is_type(arg, "matplotlib.figure.Figure"):
191 flush_buffer()
192 self.dg.pyplot(arg)
193 elif type_util.is_plotly_chart(arg):
194 flush_buffer()
195 self.dg.plotly_chart(arg)
196 elif type_util.is_type(arg, "bokeh.plotting.figure.Figure"):
197 flush_buffer()
198 self.dg.bokeh_chart(arg)
199 elif type_util.is_graphviz_chart(arg):
200 flush_buffer()
201 self.dg.graphviz_chart(arg)
202 elif type_util.is_sympy_expession(arg):
203 flush_buffer()
204 self.dg.latex(arg)
205 elif type_util.is_keras_model(arg):
206 from tensorflow.python.keras.utils import vis_utils
207
208 flush_buffer()
209 dot = vis_utils.model_to_dot(arg)
210 self.dg.graphviz_chart(dot.to_string())
211 elif isinstance(arg, (dict, list)):
212 flush_buffer()
213 self.dg.json(arg)
214 elif type_util.is_namedtuple(arg):
215 flush_buffer()
216 self.dg.json(json.dumps(arg._asdict()))
217 elif type_util.is_pydeck(arg):
218 flush_buffer()
219 self.dg.pydeck_chart(arg)
220 else:
221 string_buffer.append("`%s`" % str(arg).replace("`", "\\`"))
222
223 flush_buffer()
224
225 @property
226 def dg(self) -> "streamlit.delta_generator.DeltaGenerator":
227 """Get our DeltaGenerator."""
228 return cast("streamlit.delta_generator.DeltaGenerator", self)
229
[end of lib/streamlit/elements/write.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/streamlit/elements/write.py b/lib/streamlit/elements/write.py
--- a/lib/streamlit/elements/write.py
+++ b/lib/streamlit/elements/write.py
@@ -43,7 +43,7 @@
1. You can pass in multiple arguments, all of which will be written.
2. Its behavior depends on the input types as follows.
- 3. It returns None, so it's "slot" in the App cannot be reused.
+ 3. It returns None, so its "slot" in the App cannot be reused.
Parameters
----------
@@ -60,7 +60,6 @@
- write(func) : Displays information about a function.
- write(module) : Displays information about the module.
- write(dict) : Displays dict in an interactive widget.
- - write(obj) : The default is to print str(obj).
- write(mpl_fig) : Displays a Matplotlib figure.
- write(altair) : Displays an Altair chart.
- write(keras) : Displays a Keras model.
@@ -68,6 +67,8 @@
- write(plotly_fig) : Displays a Plotly figure.
- write(bokeh_fig) : Displays a Bokeh figure.
- write(sympy_expr) : Prints SymPy expression using LaTeX.
+ - write(htmlable) : Prints _repr_html_() for the object if available.
+ - write(obj) : Prints str(obj) if otherwise unknown.
unsafe_allow_html : bool
This is a keyword-only argument that defaults to False.
@@ -217,6 +218,11 @@
elif type_util.is_pydeck(arg):
flush_buffer()
self.dg.pydeck_chart(arg)
+ elif hasattr(arg, "_repr_html_"):
+ self.dg.markdown(
+ arg._repr_html_(),
+ unsafe_allow_html=True,
+ )
else:
string_buffer.append("`%s`" % str(arg).replace("`", "\\`"))
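With the change above, such an object can be passed straight to `st.write` (sketch; `Swatch` is a stand-in for any library object exposing `_repr_html_`):
```python
import streamlit as st


class Swatch:
    def _repr_html_(self):
        return '<div style="background:#f63366;width:4em;height:2em"></div>'


# st.write now falls back to the object's _repr_html_() and renders it
# via st.markdown(..., unsafe_allow_html=True).
st.write(Swatch())
```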
|
{"golden_diff": "diff --git a/lib/streamlit/elements/write.py b/lib/streamlit/elements/write.py\n--- a/lib/streamlit/elements/write.py\n+++ b/lib/streamlit/elements/write.py\n@@ -43,7 +43,7 @@\n \n 1. You can pass in multiple arguments, all of which will be written.\n 2. Its behavior depends on the input types as follows.\n- 3. It returns None, so it's \"slot\" in the App cannot be reused.\n+ 3. It returns None, so its \"slot\" in the App cannot be reused.\n \n Parameters\n ----------\n@@ -60,7 +60,6 @@\n - write(func) : Displays information about a function.\n - write(module) : Displays information about the module.\n - write(dict) : Displays dict in an interactive widget.\n- - write(obj) : The default is to print str(obj).\n - write(mpl_fig) : Displays a Matplotlib figure.\n - write(altair) : Displays an Altair chart.\n - write(keras) : Displays a Keras model.\n@@ -68,6 +67,8 @@\n - write(plotly_fig) : Displays a Plotly figure.\n - write(bokeh_fig) : Displays a Bokeh figure.\n - write(sympy_expr) : Prints SymPy expression using LaTeX.\n+ - write(htmlable) : Prints _repr_html_() for the object if available.\n+ - write(obj) : Prints str(obj) if otherwise unknown.\n \n unsafe_allow_html : bool\n This is a keyword-only argument that defaults to False.\n@@ -217,6 +218,11 @@\n elif type_util.is_pydeck(arg):\n flush_buffer()\n self.dg.pydeck_chart(arg)\n+ elif hasattr(arg, \"_repr_html_\"):\n+ self.dg.markdown(\n+ arg._repr_html_(),\n+ unsafe_allow_html=True,\n+ )\n else:\n string_buffer.append(\"`%s`\" % str(arg).replace(\"`\", \"\\\\`\"))\n", "issue": "Support for `_repr_html_` property on objects\nFrom a user post here: https://discuss.streamlit.io/t/look-for-html-repr-on-objects-generally-a-method-called-repr-html/1939\r\n\r\n> I have just started looking into streamlit after working on and using Panel 1. I am not sure if I missed this, but I was expecting an object with a _repr_html_ method to be automatically renderable with streamlit.\r\n> \r\n> If streamlit looked for that method, it would be easy for other libraries to make themselves renderable. Additionally, many libraries already have html reprs since they are renderable in notebooks. 
See this blog post 2 for examples of libraries that comply with this defacto standard.\r\n\r\nThis seems like a good thing to add alongside `st.iframe` (re #686)\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport json as json\nimport types\nfrom typing import cast, Any, List, Tuple, Type\n\nimport numpy as np\n\nimport streamlit\nfrom streamlit import type_util\nfrom streamlit.errors import StreamlitAPIException\n\n# Special methods:\n\nHELP_TYPES = (\n types.BuiltinFunctionType,\n types.BuiltinMethodType,\n types.FunctionType,\n types.MethodType,\n types.ModuleType,\n) # type: Tuple[Type[Any], ...]\n\n\nclass WriteMixin:\n def write(self, *args, **kwargs):\n \"\"\"Write arguments to the app.\n\n This is the Swiss Army knife of Streamlit commands: it does different\n things depending on what you throw at it. Unlike other Streamlit commands,\n write() has some unique properties:\n\n 1. You can pass in multiple arguments, all of which will be written.\n 2. Its behavior depends on the input types as follows.\n 3. It returns None, so it's \"slot\" in the App cannot be reused.\n\n Parameters\n ----------\n *args : any\n One or many objects to print to the App.\n\n Arguments are handled as follows:\n\n - write(string) : Prints the formatted Markdown string, with\n support for LaTeX expression and emoji shortcodes.\n See docs for st.markdown for more.\n - write(data_frame) : Displays the DataFrame as a table.\n - write(error) : Prints an exception specially.\n - write(func) : Displays information about a function.\n - write(module) : Displays information about the module.\n - write(dict) : Displays dict in an interactive widget.\n - write(obj) : The default is to print str(obj).\n - write(mpl_fig) : Displays a Matplotlib figure.\n - write(altair) : Displays an Altair chart.\n - write(keras) : Displays a Keras model.\n - write(graphviz) : Displays a Graphviz graph.\n - write(plotly_fig) : Displays a Plotly figure.\n - write(bokeh_fig) : Displays a Bokeh figure.\n - write(sympy_expr) : Prints SymPy expression using LaTeX.\n\n unsafe_allow_html : bool\n This is a keyword-only argument that defaults to False.\n\n By default, any HTML tags found in strings will be escaped and\n therefore treated as pure text. This behavior may be turned off by\n setting this argument to True.\n\n That said, *we strongly advise* against it*. It is hard to write secure\n HTML, so by using this argument you may be compromising your users'\n security. 
For more information, see:\n\n https://github.com/streamlit/streamlit/issues/152\n\n **Also note that `unsafe_allow_html` is a temporary measure and may be\n removed from Streamlit at any time.**\n\n If you decide to turn on HTML anyway, we ask you to please tell us your\n exact use case here:\n https://discuss.streamlit.io/t/96 .\n\n This will help us come up with safe APIs that allow you to do what you\n want.\n\n Example\n -------\n\n Its basic use case is to draw Markdown-formatted text, whenever the\n input is a string:\n\n >>> write('Hello, *World!* :sunglasses:')\n\n .. output::\n https://static.streamlit.io/0.50.2-ZWk9/index.html?id=Pn5sjhgNs4a8ZbiUoSTRxE\n height: 50px\n\n As mentioned earlier, `st.write()` also accepts other data formats, such as\n numbers, data frames, styled data frames, and assorted objects:\n\n >>> st.write(1234)\n >>> st.write(pd.DataFrame({\n ... 'first column': [1, 2, 3, 4],\n ... 'second column': [10, 20, 30, 40],\n ... }))\n\n .. output::\n https://static.streamlit.io/0.25.0-2JkNY/index.html?id=FCp9AMJHwHRsWSiqMgUZGD\n height: 250px\n\n Finally, you can pass in multiple arguments to do things like:\n\n >>> st.write('1 + 1 = ', 2)\n >>> st.write('Below is a DataFrame:', data_frame, 'Above is a dataframe.')\n\n .. output::\n https://static.streamlit.io/0.25.0-2JkNY/index.html?id=DHkcU72sxYcGarkFbf4kK1\n height: 300px\n\n Oh, one more thing: `st.write` accepts chart objects too! For example:\n\n >>> import pandas as pd\n >>> import numpy as np\n >>> import altair as alt\n >>>\n >>> df = pd.DataFrame(\n ... np.random.randn(200, 3),\n ... columns=['a', 'b', 'c'])\n ...\n >>> c = alt.Chart(df).mark_circle().encode(\n ... x='a', y='b', size='c', color='c', tooltip=['a', 'b', 'c'])\n >>>\n >>> st.write(c)\n\n .. output::\n https://static.streamlit.io/0.25.0-2JkNY/index.html?id=8jmmXR8iKoZGV4kXaKGYV5\n height: 200px\n\n \"\"\"\n string_buffer = [] # type: List[str]\n unsafe_allow_html = kwargs.get(\"unsafe_allow_html\", False)\n\n # This bans some valid cases like: e = st.empty(); e.write(\"a\", \"b\").\n # BUT: 1) such cases are rare, 2) this rule is easy to understand,\n # and 3) this rule should be removed once we have st.container()\n if not self.dg._is_top_level and len(args) > 1:\n raise StreamlitAPIException(\n \"Cannot replace a single element with multiple elements.\\n\\n\"\n \"The `write()` method only supports multiple elements when \"\n \"inserting elements rather than replacing. 
That is, only \"\n \"when called as `st.write()` or `st.sidebar.write()`.\"\n )\n\n def flush_buffer():\n if string_buffer:\n self.dg.markdown(\n \" \".join(string_buffer),\n unsafe_allow_html=unsafe_allow_html,\n )\n string_buffer[:] = []\n\n for arg in args:\n # Order matters!\n if isinstance(arg, str):\n string_buffer.append(arg)\n elif type_util.is_dataframe_like(arg):\n flush_buffer()\n if len(np.shape(arg)) > 2:\n self.dg.text(arg)\n else:\n self.dg.dataframe(arg)\n elif isinstance(arg, Exception):\n flush_buffer()\n self.dg.exception(arg)\n elif isinstance(arg, HELP_TYPES):\n flush_buffer()\n self.dg.help(arg)\n elif type_util.is_altair_chart(arg):\n flush_buffer()\n self.dg.altair_chart(arg)\n elif type_util.is_type(arg, \"matplotlib.figure.Figure\"):\n flush_buffer()\n self.dg.pyplot(arg)\n elif type_util.is_plotly_chart(arg):\n flush_buffer()\n self.dg.plotly_chart(arg)\n elif type_util.is_type(arg, \"bokeh.plotting.figure.Figure\"):\n flush_buffer()\n self.dg.bokeh_chart(arg)\n elif type_util.is_graphviz_chart(arg):\n flush_buffer()\n self.dg.graphviz_chart(arg)\n elif type_util.is_sympy_expession(arg):\n flush_buffer()\n self.dg.latex(arg)\n elif type_util.is_keras_model(arg):\n from tensorflow.python.keras.utils import vis_utils\n\n flush_buffer()\n dot = vis_utils.model_to_dot(arg)\n self.dg.graphviz_chart(dot.to_string())\n elif isinstance(arg, (dict, list)):\n flush_buffer()\n self.dg.json(arg)\n elif type_util.is_namedtuple(arg):\n flush_buffer()\n self.dg.json(json.dumps(arg._asdict()))\n elif type_util.is_pydeck(arg):\n flush_buffer()\n self.dg.pydeck_chart(arg)\n else:\n string_buffer.append(\"`%s`\" % str(arg).replace(\"`\", \"\\\\`\"))\n\n flush_buffer()\n\n @property\n def dg(self) -> \"streamlit.delta_generator.DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"streamlit.delta_generator.DeltaGenerator\", self)\n", "path": "lib/streamlit/elements/write.py"}]}
| 3,306 | 458 |
gh_patches_debug_1293
|
rasdani/github-patches
|
git_diff
|
CTPUG__wafer-643
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for Django 4.0
Currently failing tests (See #632)
</issue>
<code>
[start of setup.py]
1 from glob import glob
2 import subprocess
3
4 from setuptools import find_packages, setup
5
6 REQUIRES = [
7 'Django>=2.2,<4',
8 'bleach',
9 'bleach-allowlist',
10 'diff-match-patch',
11 'django-bakery>=0.12.0',
12 'django-crispy-forms',
13 'django-markitup>=4.0.0',
14 'django-registration-redux',
15 'django-reversion',
16 'django-select2',
17 'djangorestframework',
18 'drf-extensions>=0.5.0',
19 'icalendar>=4.0',
20 'jsonfield',
21 'markdown>=2.5',
22 'pillow',
23 'py3dns',
24 'pyLibravatar',
25 'pytz',
26 'requests',
27 ]
28
29 SOURCES = []
30
31
32 with open('README.rst', 'r') as f:
33 long_description = f.read()
34
35
36 def compile_translations():
37 try:
38 subprocess.check_call(['./manage.py', 'compilemessages'])
39 except subprocess.CalledProcessError:
40 print("WARNING: cannot compile translations.")
41 return glob('wafer/locale/*/LC_MESSAGES/django.mo')
42
43
44 setup(
45 name="wafer",
46 version="0.13.1a",
47 url='http://github.com/CTPUG/wafer',
48 license='ISC',
49 description="A wafer-thin Django library for running small conferences.",
50 long_description=long_description,
51 long_description_content_type="text/x-rst",
52 author='CTPUG',
53 author_email='[email protected]',
54 packages=find_packages(),
55 include_package_data=True,
56 install_requires=REQUIRES,
57 dependency_links=SOURCES,
58 data_files=[
59 ('locale', compile_translations()),
60 ],
61 setup_requires=[
62 # Add setuptools-git, so we get correct behaviour for
63 # include_package_data
64 'setuptools_git >= 1.0',
65 ],
66 classifiers=[
67 'Development Status :: 4 - Beta',
68 'Intended Audience :: Developers',
69 'License :: OSI Approved :: ISC License (ISCL)',
70 'Operating System :: POSIX',
71 'Programming Language :: Python :: 3',
72 'Programming Language :: Python :: 3.6',
73 'Programming Language :: Python :: 3.7',
74 'Programming Language :: Python :: 3.8',
75 'Framework :: Django',
76 'Topic :: Software Development :: Libraries :: Python Modules',
77 'Topic :: Internet :: WWW/HTTP',
78 ],
79 )
80
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -8,7 +8,7 @@
'bleach',
'bleach-allowlist',
'diff-match-patch',
- 'django-bakery>=0.12.0',
+ 'django-bakery>=0.13.0',
'django-crispy-forms',
'django-markitup>=4.0.0',
'django-registration-redux',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -8,7 +8,7 @@\n 'bleach',\n 'bleach-allowlist',\n 'diff-match-patch',\n- 'django-bakery>=0.12.0',\n+ 'django-bakery>=0.13.0',\n 'django-crispy-forms',\n 'django-markitup>=4.0.0',\n 'django-registration-redux',\n", "issue": "Add support for Django 4.0\nCurrently failing tests (See #632)\n", "before_files": [{"content": "from glob import glob\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\nREQUIRES = [\n 'Django>=2.2,<4',\n 'bleach',\n 'bleach-allowlist',\n 'diff-match-patch',\n 'django-bakery>=0.12.0',\n 'django-crispy-forms',\n 'django-markitup>=4.0.0',\n 'django-registration-redux',\n 'django-reversion',\n 'django-select2',\n 'djangorestframework',\n 'drf-extensions>=0.5.0',\n 'icalendar>=4.0',\n 'jsonfield',\n 'markdown>=2.5',\n 'pillow',\n 'py3dns',\n 'pyLibravatar',\n 'pytz',\n 'requests',\n]\n\nSOURCES = []\n\n\nwith open('README.rst', 'r') as f:\n long_description = f.read()\n\n\ndef compile_translations():\n try:\n subprocess.check_call(['./manage.py', 'compilemessages'])\n except subprocess.CalledProcessError:\n print(\"WARNING: cannot compile translations.\")\n return glob('wafer/locale/*/LC_MESSAGES/django.mo')\n\n\nsetup(\n name=\"wafer\",\n version=\"0.13.1a\",\n url='http://github.com/CTPUG/wafer',\n license='ISC',\n description=\"A wafer-thin Django library for running small conferences.\",\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n author='CTPUG',\n author_email='[email protected]',\n packages=find_packages(),\n include_package_data=True,\n install_requires=REQUIRES,\n dependency_links=SOURCES,\n data_files=[\n ('locale', compile_translations()),\n ],\n setup_requires=[\n # Add setuptools-git, so we get correct behaviour for\n # include_package_data\n 'setuptools_git >= 1.0',\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: ISC License (ISCL)',\n 'Operating System :: POSIX',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Framework :: Django',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}]}
| 1,242 | 108 |
gh_patches_debug_6105
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-348
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DotDict isn't JSON-serializable
`DotDicts` *feel* like dicts, until you try to ship them as JSON:
```python
In [1]: import json
In [2]: from prefect.utilities.collections import DotDict
In [3]: json.dumps(DotDict(x=1, y=2))
```
Results in the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-b595d5c6cfdf> in <module>()
----> 1 json.dumps(DotDict(x=1, y=2))
/anaconda3/lib/python3.6/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
229 cls is None and indent is None and separators is None and
230 default is None and not sort_keys and not kw):
--> 231 return _default_encoder.encode(obj)
232 if cls is None:
233 cls = JSONEncoder
/anaconda3/lib/python3.6/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
/anaconda3/lib/python3.6/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
/anaconda3/lib/python3.6/json/encoder.py in default(self, o)
178 """
179 raise TypeError("Object of type '%s' is not JSON serializable" %
--> 180 o.__class__.__name__)
181
182 def encode(self, o):
TypeError: Object of type 'DotDict' is not JSON serializable
```
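A common stopgap, pending a fix, is to give `json.dumps` a `default` hook that unwraps mapping-like objects (sketch; `_unwrap` is not an existing Prefect helper):
```python
import json
from collections.abc import MutableMapping

from prefect.utilities.collections import DotDict


def _unwrap(obj):
    # json.dumps only calls this for objects it cannot serialize on its own.
    if isinstance(obj, MutableMapping):
        return dict(obj)
    raise TypeError("Object of type '%s' is not JSON serializable" % type(obj).__name__)


print(json.dumps(DotDict(x=1, y=2), default=_unwrap))  # {"x": 1, "y": 2}
```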
</issue>
<code>
[start of src/prefect/utilities/collections.py]
1 # Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula
2 import collections
3 import json
4 from collections.abc import MutableMapping
5 from typing import Any, Generator, Iterable, Iterator, Union
6
7 DictLike = Union[dict, "DotDict"]
8
9
10 def flatten_seq(seq: Iterable) -> Generator:
11 """
12 Generator that returns a flattened list from a possibly nested list-of-lists
13 (or any sequence type).
14
15 Example:
16 ```python
17 flatten_seq([1, 2, [3, 4], 5, [6, [7]]])
18 >>> [1, 2, 3, 4, 5, 6, 7]
19 ```
20 Args:
21 - seq (Iterable): the sequence to flatten
22
23 Returns:
24 - generator: a generator that yields the flattened sequence
25 """
26 for item in seq:
27 if isinstance(item, collections.Iterable) and not isinstance(
28 item, (str, bytes)
29 ):
30 yield from flatten_seq(item)
31 else:
32 yield item
33
34
35 class DotDict(MutableMapping):
36 """
37 A `dict` that also supports attribute ("dot") access. Think of this as an extension
38 to the standard python `dict` object. **Note**: while any hashable object can be added to
39 a `DotDict`, _only_ valid Python identifiers can be accessed with the dot syntax; this excludes
40 strings which begin in numbers, special characters, or double underscores.
41
42 Args:
43 - init_dict (dict, optional): dictionary to initialize the `DotDict`
44 with
45 - **kwargs (optional): key, value pairs with which to initialize the
46 `DotDict`
47
48 **Example**:
49 ```python
50 dotdict = DotDict({'a': 34}, b=56, c=set())
51 dotdict.a # 34
52 dotdict['b'] # 56
53 dotdict.c # set()
54 ```
55 """
56
57 def __init__(self, init_dict: DictLike = None, **kwargs: Any) -> None:
58 if init_dict:
59 self.update(init_dict)
60 self.update(kwargs)
61
62 def __getitem__(self, key: str) -> Any:
63 return self.__dict__[key] # __dict__ expects string keys
64
65 def __setitem__(self, key: str, value: Any) -> None:
66 # prevent overwriting any critical attributes
67 if isinstance(key, str) and hasattr(MutableMapping, key):
68 raise ValueError('Invalid key: "{}"'.format(key))
69 self.__dict__[key] = value
70
71 def __setattr__(self, attr: str, value: Any) -> None:
72 self[attr] = value
73
74 def __iter__(self) -> Iterator[str]:
75 return iter(self.__dict__.keys())
76
77 def __delitem__(self, key: str) -> None:
78 del self.__dict__[key]
79
80 def __len__(self) -> int:
81 return len(self.__dict__)
82
83 def __repr__(self) -> str:
84 if len(self) > 0:
85 return "<{}: {}>".format(
86 type(self).__name__, ", ".join(sorted(repr(k) for k in self.keys()))
87 )
88 else:
89 return "<{}>".format(type(self).__name__)
90
91 def copy(self) -> "DotDict":
92 """Creates and returns a shallow copy of the current DotDict"""
93 return type(self)(self.__dict__.copy())
94
95 def __json__(self) -> dict:
96 return dict(self)
97
98
99 class GraphQLResult(DotDict):
100 def __repr__(self) -> str:
101 return json.dumps(as_nested_dict(self, dict), indent=4)
102
103
104 def merge_dicts(d1: DictLike, d2: DictLike) -> DictLike:
105 """
106 Updates `d1` from `d2` by replacing each `(k, v1)` pair in `d1` with the
107 corresponding `(k, v2)` pair in `d2`.
108
109 If the value of each pair is itself a dict, then the value is updated
110 recursively.
111
112 Args:
113 - d1 (MutableMapping): A dictionary to be replaced
114 - d2 (MutableMapping): A dictionary used for replacement
115
116 Returns:
117 - A `MutableMapping` with the two dictionary contents merged
118 """
119
120 new_dict = d1.copy()
121
122 for k, v in d2.items():
123 if isinstance(new_dict.get(k), MutableMapping) and isinstance(
124 v, MutableMapping
125 ):
126 new_dict[k] = merge_dicts(new_dict[k], d2[k])
127 else:
128 new_dict[k] = d2[k]
129 return new_dict
130
131
132 def as_nested_dict(
133 obj: Union[DictLike, Iterable[DictLike]], dct_class: type = DotDict
134 ) -> Union[DictLike, Iterable[DictLike]]:
135 """
136 Given a obj formatted as a dictionary, transforms it (and any nested dictionaries)
137 into the provided dct_class
138
139 Args:
140 - obj (Any): An object that is formatted as a `dict`
141 - dct_class (type): the `dict` class to use (defaults to DotDict)
142
143 Returns:
144 - A `dict_class` representation of the object passed in
145 ```
146 """
147 if isinstance(obj, (list, tuple, set)):
148 return type(obj)([as_nested_dict(d, dct_class) for d in obj])
149 elif isinstance(obj, (dict, DotDict)):
150 return dct_class({k: as_nested_dict(v, dct_class) for k, v in obj.items()})
151 return obj
152
153
154 class CompoundKey(tuple):
155 pass
156
157
158 def dict_to_flatdict(dct: dict, parent: CompoundKey = None) -> dict:
159 """Converts a (nested) dictionary to a flattened representation.
160
161 Each key of the flat dict will be a CompoundKey tuple containing the "chain of keys"
162 for the corresponding value.
163
164 Args:
165 - dct (dict): The dictionary to flatten
166 - parent (CompoundKey, optional): Defaults to `None`. The parent key
167 (you shouldn't need to set this)
168
169 Returns:
170 - dict: A flattened dict
171 """
172
173 items = [] # type: list
174 parent = parent or CompoundKey()
175 for k, v in dct.items():
176 k_parent = CompoundKey(parent + (k,))
177 if isinstance(v, dict):
178 items.extend(dict_to_flatdict(v, parent=k_parent).items())
179 else:
180 items.append((k_parent, v))
181 return dict(items)
182
183
184 def flatdict_to_dict(dct: dict, dct_class: type = None) -> MutableMapping:
185 """Converts a flattened dictionary back to a nested dictionary.
186
187 Args:
188 - dct (dict): The dictionary to be nested. Each key should be a
189 `CompoundKey`, as generated by `dict_to_flatdict()`
190 - dct_class (type, optional): the type of the result; defaults to `dict`
191
192 Returns:
193 - MutableMapping: A `MutableMapping` used to represent a nested dictionary
194 """
195
196 result = (dct_class or dict)()
197 for k, v in dct.items():
198 if isinstance(k, CompoundKey):
199 current_dict = result
200 for ki in k[:-1]:
201 current_dict = current_dict.setdefault( # type: ignore
202 ki, (dct_class or dict)()
203 )
204 current_dict[k[-1]] = v
205 else:
206 result[k] = v
207
208 return result
209
[end of src/prefect/utilities/collections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/prefect/utilities/collections.py b/src/prefect/utilities/collections.py
--- a/src/prefect/utilities/collections.py
+++ b/src/prefect/utilities/collections.py
@@ -92,8 +92,9 @@
"""Creates and returns a shallow copy of the current DotDict"""
return type(self)(self.__dict__.copy())
- def __json__(self) -> dict:
- return dict(self)
+ def to_dict(self) -> dict:
+ """Converts current `DotDict` (and any `DotDict`s contained within) to an appropriate nested dictionary."""
+ return as_nested_dict(self, dct_class=dict)
class GraphQLResult(DotDict):
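With this change applied, callers convert explicitly before serializing; a short usage sketch:
```python
import json

from prefect.utilities.collections import DotDict

dd = DotDict(a=1, nested=DotDict(b=2))
print(json.dumps(dd.to_dict()))  # {"a": 1, "nested": {"b": 2}}
```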
|
{"golden_diff": "diff --git a/src/prefect/utilities/collections.py b/src/prefect/utilities/collections.py\n--- a/src/prefect/utilities/collections.py\n+++ b/src/prefect/utilities/collections.py\n@@ -92,8 +92,9 @@\n \"\"\"Creates and returns a shallow copy of the current DotDict\"\"\"\n return type(self)(self.__dict__.copy())\n \n- def __json__(self) -> dict:\n- return dict(self)\n+ def to_dict(self) -> dict:\n+ \"\"\"Converts current `DotDict` (and any `DotDict`s contained within) to an appropriate nested dictionary.\"\"\"\n+ return as_nested_dict(self, dct_class=dict)\n \n \n class GraphQLResult(DotDict):\n", "issue": "DotDict isn't JSON-serializable\n`DotDicts` *feel* like dicts, until you try to ship them as JSON:\r\n\r\n```python\r\nIn [1]: import json\r\n\r\nIn [2]: from prefect.utilities.collections import DotDict\r\n\r\nIn [3]: json.dumps(DotDict(x=1, y=2))\r\n```\r\nResults in the following error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-3-b595d5c6cfdf> in <module>()\r\n----> 1 json.dumps(DotDict(x=1, y=2))\r\n\r\n/anaconda3/lib/python3.6/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)\r\n 229 cls is None and indent is None and separators is None and\r\n 230 default is None and not sort_keys and not kw):\r\n--> 231 return _default_encoder.encode(obj)\r\n 232 if cls is None:\r\n 233 cls = JSONEncoder\r\n\r\n/anaconda3/lib/python3.6/json/encoder.py in encode(self, o)\r\n 197 # exceptions aren't as detailed. The list call should be roughly\r\n 198 # equivalent to the PySequence_Fast that ''.join() would do.\r\n--> 199 chunks = self.iterencode(o, _one_shot=True)\r\n 200 if not isinstance(chunks, (list, tuple)):\r\n 201 chunks = list(chunks)\r\n\r\n/anaconda3/lib/python3.6/json/encoder.py in iterencode(self, o, _one_shot)\r\n 255 self.key_separator, self.item_separator, self.sort_keys,\r\n 256 self.skipkeys, _one_shot)\r\n--> 257 return _iterencode(o, 0)\r\n 258\r\n 259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,\r\n\r\n/anaconda3/lib/python3.6/json/encoder.py in default(self, o)\r\n 178 \"\"\"\r\n 179 raise TypeError(\"Object of type '%s' is not JSON serializable\" %\r\n--> 180 o.__class__.__name__)\r\n 181\r\n 182 def encode(self, o):\r\n\r\nTypeError: Object of type 'DotDict' is not JSON serializable\r\n```\n", "before_files": [{"content": "# Licensed under LICENSE.md; also available at https://www.prefect.io/licenses/alpha-eula\nimport collections\nimport json\nfrom collections.abc import MutableMapping\nfrom typing import Any, Generator, Iterable, Iterator, Union\n\nDictLike = Union[dict, \"DotDict\"]\n\n\ndef flatten_seq(seq: Iterable) -> Generator:\n \"\"\"\n Generator that returns a flattened list from a possibly nested list-of-lists\n (or any sequence type).\n\n Example:\n ```python\n flatten_seq([1, 2, [3, 4], 5, [6, [7]]])\n >>> [1, 2, 3, 4, 5, 6, 7]\n ```\n Args:\n - seq (Iterable): the sequence to flatten\n\n Returns:\n - generator: a generator that yields the flattened sequence\n \"\"\"\n for item in seq:\n if isinstance(item, collections.Iterable) and not isinstance(\n item, (str, bytes)\n ):\n yield from flatten_seq(item)\n else:\n yield item\n\n\nclass DotDict(MutableMapping):\n \"\"\"\n A `dict` that also supports attribute (\"dot\") access. Think of this as an extension\n to the standard python `dict` object. 
**Note**: while any hashable object can be added to\n a `DotDict`, _only_ valid Python identifiers can be accessed with the dot syntax; this excludes\n strings which begin in numbers, special characters, or double underscores.\n\n Args:\n - init_dict (dict, optional): dictionary to initialize the `DotDict`\n with\n - **kwargs (optional): key, value pairs with which to initialize the\n `DotDict`\n\n **Example**:\n ```python\n dotdict = DotDict({'a': 34}, b=56, c=set())\n dotdict.a # 34\n dotdict['b'] # 56\n dotdict.c # set()\n ```\n \"\"\"\n\n def __init__(self, init_dict: DictLike = None, **kwargs: Any) -> None:\n if init_dict:\n self.update(init_dict)\n self.update(kwargs)\n\n def __getitem__(self, key: str) -> Any:\n return self.__dict__[key] # __dict__ expects string keys\n\n def __setitem__(self, key: str, value: Any) -> None:\n # prevent overwriting any critical attributes\n if isinstance(key, str) and hasattr(MutableMapping, key):\n raise ValueError('Invalid key: \"{}\"'.format(key))\n self.__dict__[key] = value\n\n def __setattr__(self, attr: str, value: Any) -> None:\n self[attr] = value\n\n def __iter__(self) -> Iterator[str]:\n return iter(self.__dict__.keys())\n\n def __delitem__(self, key: str) -> None:\n del self.__dict__[key]\n\n def __len__(self) -> int:\n return len(self.__dict__)\n\n def __repr__(self) -> str:\n if len(self) > 0:\n return \"<{}: {}>\".format(\n type(self).__name__, \", \".join(sorted(repr(k) for k in self.keys()))\n )\n else:\n return \"<{}>\".format(type(self).__name__)\n\n def copy(self) -> \"DotDict\":\n \"\"\"Creates and returns a shallow copy of the current DotDict\"\"\"\n return type(self)(self.__dict__.copy())\n\n def __json__(self) -> dict:\n return dict(self)\n\n\nclass GraphQLResult(DotDict):\n def __repr__(self) -> str:\n return json.dumps(as_nested_dict(self, dict), indent=4)\n\n\ndef merge_dicts(d1: DictLike, d2: DictLike) -> DictLike:\n \"\"\"\n Updates `d1` from `d2` by replacing each `(k, v1)` pair in `d1` with the\n corresponding `(k, v2)` pair in `d2`.\n\n If the value of each pair is itself a dict, then the value is updated\n recursively.\n\n Args:\n - d1 (MutableMapping): A dictionary to be replaced\n - d2 (MutableMapping): A dictionary used for replacement\n\n Returns:\n - A `MutableMapping` with the two dictionary contents merged\n \"\"\"\n\n new_dict = d1.copy()\n\n for k, v in d2.items():\n if isinstance(new_dict.get(k), MutableMapping) and isinstance(\n v, MutableMapping\n ):\n new_dict[k] = merge_dicts(new_dict[k], d2[k])\n else:\n new_dict[k] = d2[k]\n return new_dict\n\n\ndef as_nested_dict(\n obj: Union[DictLike, Iterable[DictLike]], dct_class: type = DotDict\n) -> Union[DictLike, Iterable[DictLike]]:\n \"\"\"\n Given a obj formatted as a dictionary, transforms it (and any nested dictionaries)\n into the provided dct_class\n\n Args:\n - obj (Any): An object that is formatted as a `dict`\n - dct_class (type): the `dict` class to use (defaults to DotDict)\n\n Returns:\n - A `dict_class` representation of the object passed in\n ```\n \"\"\"\n if isinstance(obj, (list, tuple, set)):\n return type(obj)([as_nested_dict(d, dct_class) for d in obj])\n elif isinstance(obj, (dict, DotDict)):\n return dct_class({k: as_nested_dict(v, dct_class) for k, v in obj.items()})\n return obj\n\n\nclass CompoundKey(tuple):\n pass\n\n\ndef dict_to_flatdict(dct: dict, parent: CompoundKey = None) -> dict:\n \"\"\"Converts a (nested) dictionary to a flattened representation.\n\n Each key of the flat dict will be a CompoundKey tuple containing the 
\"chain of keys\"\n for the corresponding value.\n\n Args:\n - dct (dict): The dictionary to flatten\n - parent (CompoundKey, optional): Defaults to `None`. The parent key\n (you shouldn't need to set this)\n\n Returns:\n - dict: A flattened dict\n \"\"\"\n\n items = [] # type: list\n parent = parent or CompoundKey()\n for k, v in dct.items():\n k_parent = CompoundKey(parent + (k,))\n if isinstance(v, dict):\n items.extend(dict_to_flatdict(v, parent=k_parent).items())\n else:\n items.append((k_parent, v))\n return dict(items)\n\n\ndef flatdict_to_dict(dct: dict, dct_class: type = None) -> MutableMapping:\n \"\"\"Converts a flattened dictionary back to a nested dictionary.\n\n Args:\n - dct (dict): The dictionary to be nested. Each key should be a\n `CompoundKey`, as generated by `dict_to_flatdict()`\n - dct_class (type, optional): the type of the result; defaults to `dict`\n\n Returns:\n - MutableMapping: A `MutableMapping` used to represent a nested dictionary\n \"\"\"\n\n result = (dct_class or dict)()\n for k, v in dct.items():\n if isinstance(k, CompoundKey):\n current_dict = result\n for ki in k[:-1]:\n current_dict = current_dict.setdefault( # type: ignore\n ki, (dct_class or dict)()\n )\n current_dict[k[-1]] = v\n else:\n result[k] = v\n\n return result\n", "path": "src/prefect/utilities/collections.py"}]}
| 3,257 | 159 |
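The golden diff for the entry above replaces `__json__` with a `to_dict()` that routes through `as_nested_dict(self, dct_class=dict)`. The following minimal sketch illustrates the underlying idea, why a `MutableMapping`-based mapping trips `json.dumps` and how a recursive conversion to plain dicts resolves it; the toy `DotDict` here is illustrative only and is not Prefect's actual class.

```
import json
from collections.abc import MutableMapping

class DotDict(MutableMapping):
    """Toy attribute-access mapping; json.dumps rejects it because it is not a plain dict."""
    def __init__(self, **kwargs): self.__dict__.update(kwargs)
    def __getitem__(self, key): return self.__dict__[key]
    def __setitem__(self, key, value): self.__dict__[key] = value
    def __delitem__(self, key): del self.__dict__[key]
    def __iter__(self): return iter(self.__dict__)
    def __len__(self): return len(self.__dict__)

def as_nested_dict(obj, dct_class=dict):
    # Rebuild mappings (and containers of mappings) as plain dicts, recursively.
    if isinstance(obj, (list, tuple, set)):
        return type(obj)(as_nested_dict(v, dct_class) for v in obj)
    if isinstance(obj, MutableMapping):
        return dct_class({k: as_nested_dict(v, dct_class) for k, v in obj.items()})
    return obj

d = DotDict(x=1, nested=DotDict(y=2))
print(json.dumps(as_nested_dict(d)))  # {"x": 1, "nested": {"y": 2}}
```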
gh_patches_debug_20182
|
rasdani/github-patches
|
git_diff
|
pymodbus-dev__pymodbus-351
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Registers in Database DataStore can't be read from remote client
I am trying to implement a database-backed TCP server so that my registers are kept in a database. For that task I adapted the "Updating Server" example. When I run the code I can query registers from inside the process, but I can't read the same registers from a client located on another computer. With a normal ModbusSlaveContext I was able to do that. Results are attached, and here is my code:
```
'''
Pymodbus Server With Updating Thread
--------------------------------------------------------------------------
This is an example of having a background thread updating the
context while the server is operating. This can also be done with
a python thread::
from threading import Thread
thread = Thread(target=updating_writer, args=(context,))
thread.start()
'''
#---------------------------------------------------------------------------#
# import the modbus libraries we need
#---------------------------------------------------------------------------#
from pymodbus.server.async import StartTcpServer
from pymodbus.device import ModbusDeviceIdentification
from pymodbus.datastore import ModbusSequentialDataBlock
from pymodbus.datastore import ModbusServerContext
from database_store import DatabaseSlaveContext
from pymodbus.transaction import ModbusRtuFramer, ModbusAsciiFramer
#---------------------------------------------------------------------------#
# import the twisted libraries we need
#---------------------------------------------------------------------------#
from twisted.internet.task import LoopingCall
#---------------------------------------------------------------------------#
# configure the service logging
#---------------------------------------------------------------------------#
import logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)
#---------------------------------------------------------------------------#
# define your callback process
#---------------------------------------------------------------------------#
def updating_writer(a):
''' A worker process that runs every so often and
updates live values of the context. It should be noted
that there is a race condition for the update.
:param arguments: The input arguments to the call
'''
log.debug("updating the context")
context = a[0]
readfunction = 0x03 # read holding registers
writefunction = 0x10 # wrıte holding registers
slave_id = 0x00 # slave address
address = 16 # adress : 400017
values = context[slave_id].getValues(readfunction, address, count=3)
log.debug("new values: " + str(values))
#---------------------------------------------------------------------------#
# initialize your data store
#---------------------------------------------------------------------------#
block = ModbusSequentialDataBlock(0x00, [0]*0xff)
store = DatabaseSlaveContext(block)
context = ModbusServerContext(slaves=store, single=True)
#---------------------------------------------------------------------------#
# initialize the server information
#---------------------------------------------------------------------------#
identity = ModbusDeviceIdentification()
identity.VendorName = 'pymodbus'
identity.ProductCode = 'PM'
identity.VendorUrl = 'http://github.com/bashwork/pymodbus/'
identity.ProductName = 'pymodbus Server'
identity.ModelName = 'pymodbus Server'
identity.MajorMinorRevision = '1.0'
#---------------------------------------------------------------------------#
# run the server you want
#---------------------------------------------------------------------------#
time = 5 # 5 seconds delay
loop = LoopingCall(f=updating_writer, a=(context,))
loop.start(time, now=False) # initially delay by time
StartTcpServer(context, identity=identity, address=("", 5007))
```


</issue>
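The report above can read registers in-process through `context[slave_id].getValues(...)` but not from another machine. A short client script such as the following sketch is enough to reproduce the remote side; it assumes the pymodbus 1.x synchronous client (`pymodbus.client.sync.ModbusTcpClient`, matching the `pymodbus.server.async` import used by the server), and the host address is a placeholder.

```
from pymodbus.client.sync import ModbusTcpClient  # assumed pymodbus 1.x layout

client = ModbusTcpClient('192.0.2.10', port=5007)  # placeholder server address
client.connect()
# Same registers the updating_writer polls: holding registers at offset 16.
rr = client.read_holding_registers(address=16, count=3, unit=0x00)
print(getattr(rr, 'registers', rr))  # error responses carry no .registers attribute
client.close()
```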
<code>
[start of pymodbus/datastore/database/sql_datastore.py]
1 import sqlalchemy
2 import sqlalchemy.types as sqltypes
3 from sqlalchemy.sql import and_
4 from sqlalchemy.schema import UniqueConstraint
5 from sqlalchemy.sql.expression import bindparam
6
7 from pymodbus.exceptions import NotImplementedException
8 from pymodbus.interfaces import IModbusSlaveContext
9
10 #---------------------------------------------------------------------------#
11 # Logging
12 #---------------------------------------------------------------------------#
13 import logging
14 _logger = logging.getLogger(__name__)
15
16
17 #---------------------------------------------------------------------------#
18 # Context
19 #---------------------------------------------------------------------------#
20 class SqlSlaveContext(IModbusSlaveContext):
21 '''
22 This creates a modbus data model with each data access
23 stored in its own personal block
24 '''
25
26 def __init__(self, *args, **kwargs):
27 ''' Initializes the datastores
28
29 :param kwargs: Each element is a ModbusDataBlock
30 '''
31 self.table = kwargs.get('table', 'pymodbus')
32 self.database = kwargs.get('database', 'sqlite:///pymodbus.db')
33 self._db_create(self.table, self.database)
34
35 def __str__(self):
36 ''' Returns a string representation of the context
37
38 :returns: A string representation of the context
39 '''
40 return "Modbus Slave Context"
41
42 def reset(self):
43 ''' Resets all the datastores to their default values '''
44 self._metadata.drop_all()
45 self._db_create(self.table, self.database)
46
47 def validate(self, fx, address, count=1):
48 ''' Validates the request to make sure it is in range
49
50 :param fx: The function we are working with
51 :param address: The starting address
52 :param count: The number of values to test
53 :returns: True if the request in within range, False otherwise
54 '''
55 address = address + 1 # section 4.4 of specification
56 _logger.debug("validate[%d] %d:%d" % (fx, address, count))
57 return self._validate(self.decode(fx), address, count)
58
59 def getValues(self, fx, address, count=1):
60 ''' Get `count` values from datastore
61
62 :param fx: The function we are working with
63 :param address: The starting address
64 :param count: The number of values to retrieve
65 :returns: The requested values from a:a+c
66 '''
67 address = address + 1 # section 4.4 of specification
68 _logger.debug("get-values[%d] %d:%d" % (fx, address, count))
69 return self._get(self.decode(fx), address, count)
70
71 def setValues(self, fx, address, values):
72 ''' Sets the datastore with the supplied values
73
74 :param fx: The function we are working with
75 :param address: The starting address
76 :param values: The new values to be set
77 '''
78 address = address + 1 # section 4.4 of specification
79 _logger.debug("set-values[%d] %d:%d" % (fx, address, len(values)))
80 self._set(self.decode(fx), address, values)
81
82 #--------------------------------------------------------------------------#
83 # Sqlite Helper Methods
84 #--------------------------------------------------------------------------#
85 def _db_create(self, table, database):
86 ''' A helper method to initialize the database and handles
87
88 :param table: The table name to create
89 :param database: The database uri to use
90 '''
91 self._engine = sqlalchemy.create_engine(database, echo=False)
92 self._metadata = sqlalchemy.MetaData(self._engine)
93 self._table = sqlalchemy.Table(table, self._metadata,
94 sqlalchemy.Column('type', sqltypes.String(1)),
95 sqlalchemy.Column('index', sqltypes.Integer),
96 sqlalchemy.Column('value', sqltypes.Integer),
97 UniqueConstraint('type', 'index', name='key'))
98 self._table.create(checkfirst=True)
99 self._connection = self._engine.connect()
100
101 def _get(self, type, offset, count):
102 '''
103 :param type: The key prefix to use
104 :param offset: The address offset to start at
105 :param count: The number of bits to read
106 :returns: The resulting values
107 '''
108 query = self._table.select(and_(
109 self._table.c.type == type,
110 self._table.c.index >= offset,
111 self._table.c.index <= offset + count)
112 )
113 query = query.order_by(self._table.c.index.asc())
114 result = self._connection.execute(query).fetchall()
115 return [row.value for row in result]
116
117 def _build_set(self, type, offset, values, prefix=''):
118 ''' A helper method to generate the sql update context
119
120 :param type: The key prefix to use
121 :param offset: The address offset to start at
122 :param values: The values to set
123 :param prefix: Prefix fields index and type, defaults to empty string
124 '''
125 result = []
126 for index, value in enumerate(values):
127 result.append({
128 prefix + 'type': type,
129 prefix + 'index': offset + index,
130 'value': value
131 })
132 return result
133
134 def _check(self, type, offset, values):
135 result = self._get(type, offset, count=1)
136 return False if len(result) > 0 else True
137
138 def _set(self, type, offset, values):
139 '''
140
141 :param key: The type prefix to use
142 :param offset: The address offset to start at
143 :param values: The values to set
144 '''
145 if self._check(type, offset, values):
146 context = self._build_set(type, offset, values)
147 query = self._table.insert()
148 result = self._connection.execute(query, context)
149 return result.rowcount == len(values)
150 else:
151 return False
152
153 def _update(self, type, offset, values):
154 '''
155
156 :param type: The type prefix to use
157 :param offset: The address offset to start at
158 :param values: The values to set
159 '''
160 context = self._build_set(type, offset, values, prefix='x_')
161 query = self._table.update().values(name='value')
162 query = query.where(and_(
163 self._table.c.type == bindparam('x_type'),
164 self._table.c.index == bindparam('x_index')))
165 result = self._connection.execute(query, context)
166 return result.rowcount == len(values)
167
168 def _validate(self, type, offset, count):
169 '''
170 :param key: The key prefix to use
171 :param offset: The address offset to start at
172 :param count: The number of bits to read
173 :returns: The result of the validation
174 '''
175 query = self._table.select(and_(
176 self._table.c.type == type,
177 self._table.c.index >= offset,
178 self._table.c.index <= offset + count))
179 result = self._connection.execute(query)
180 return result.rowcount == count
181
[end of pymodbus/datastore/database/sql_datastore.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pymodbus/datastore/database/sql_datastore.py b/pymodbus/datastore/database/sql_datastore.py
--- a/pymodbus/datastore/database/sql_datastore.py
+++ b/pymodbus/datastore/database/sql_datastore.py
@@ -108,7 +108,7 @@
query = self._table.select(and_(
self._table.c.type == type,
self._table.c.index >= offset,
- self._table.c.index <= offset + count)
+ self._table.c.index <= offset + count - 1)
)
query = query.order_by(self._table.c.index.asc())
result = self._connection.execute(query).fetchall()
@@ -175,6 +175,6 @@
query = self._table.select(and_(
self._table.c.type == type,
self._table.c.index >= offset,
- self._table.c.index <= offset + count))
- result = self._connection.execute(query)
- return result.rowcount == count
+ self._table.c.index <= offset + count - 1))
+ result = self._connection.execute(query).fetchall()
+ return len(result) == count
|
{"golden_diff": "diff --git a/pymodbus/datastore/database/sql_datastore.py b/pymodbus/datastore/database/sql_datastore.py\n--- a/pymodbus/datastore/database/sql_datastore.py\n+++ b/pymodbus/datastore/database/sql_datastore.py\n@@ -108,7 +108,7 @@\n query = self._table.select(and_(\n self._table.c.type == type,\n self._table.c.index >= offset,\n- self._table.c.index <= offset + count)\n+ self._table.c.index <= offset + count - 1)\n )\n query = query.order_by(self._table.c.index.asc())\n result = self._connection.execute(query).fetchall()\n@@ -175,6 +175,6 @@\n query = self._table.select(and_(\n self._table.c.type == type,\n self._table.c.index >= offset,\n- self._table.c.index <= offset + count))\n- result = self._connection.execute(query)\n- return result.rowcount == count\n+ self._table.c.index <= offset + count - 1))\n+ result = self._connection.execute(query).fetchall()\n+ return len(result) == count\n", "issue": "Registers in Database DataStore cant be read from remote client\nI try to implement a database backed TCP server so that my registers will be kept in database. For that task I modulated \"Updating Server\" example . When I run the code I can query reg\u0131sters from inside. But I cant read same registers from a client located on another computer. With normal ModbusSlaveContext I was able to do that. Results are attached. and here is my code:\r\n\r\n```\r\n'''\r\nPymodbus Server With Updating Thread\r\n--------------------------------------------------------------------------\r\nThis is an example of having a background thread updating the\r\ncontext while the server is operating. This can also be done with\r\na python thread::\r\n from threading import Thread\r\n thread = Thread(target=updating_writer, args=(context,))\r\n thread.start()\r\n'''\r\n#---------------------------------------------------------------------------# \r\n# import the modbus libraries we need\r\n#---------------------------------------------------------------------------# \r\nfrom pymodbus.server.async import StartTcpServer\r\nfrom pymodbus.device import ModbusDeviceIdentification\r\nfrom pymodbus.datastore import ModbusSequentialDataBlock\r\nfrom pymodbus.datastore import ModbusServerContext\r\nfrom database_store import DatabaseSlaveContext\r\nfrom pymodbus.transaction import ModbusRtuFramer, ModbusAsciiFramer\r\n\r\n#---------------------------------------------------------------------------# \r\n# import the twisted libraries we need\r\n#---------------------------------------------------------------------------# \r\nfrom twisted.internet.task import LoopingCall\r\n\r\n#---------------------------------------------------------------------------# \r\n# configure the service logging\r\n#---------------------------------------------------------------------------# \r\nimport logging\r\nlogging.basicConfig()\r\nlog = logging.getLogger()\r\nlog.setLevel(logging.DEBUG)\r\n\r\n#---------------------------------------------------------------------------# \r\n# define your callback process\r\n#---------------------------------------------------------------------------# \r\ndef updating_writer(a):\r\n ''' A worker process that runs every so often and\r\n updates live values of the context. 
It should be noted\r\n that there is a race condition for the update.\r\n :param arguments: The input arguments to the call\r\n '''\r\n log.debug(\"updating the context\")\r\n context = a[0]\r\n readfunction = 0x03 # read holding registers\r\n writefunction = 0x10 # wr\u0131te holding registers\r\n slave_id = 0x00 # slave address\r\n address = 16 # adress : 400017\r\n\r\n\r\n values = context[slave_id].getValues(readfunction, address, count=3)\r\n\r\n log.debug(\"new values: \" + str(values))\r\n\r\n\r\n#---------------------------------------------------------------------------# \r\n# initialize your data store\r\n#---------------------------------------------------------------------------#\r\nblock = ModbusSequentialDataBlock(0x00, [0]*0xff)\r\nstore = DatabaseSlaveContext(block)\r\n\r\ncontext = ModbusServerContext(slaves=store, single=True)\r\n\r\n\r\n#---------------------------------------------------------------------------# \r\n# initialize the server information\r\n#---------------------------------------------------------------------------# \r\nidentity = ModbusDeviceIdentification()\r\nidentity.VendorName = 'pymodbus'\r\nidentity.ProductCode = 'PM'\r\nidentity.VendorUrl = 'http://github.com/bashwork/pymodbus/'\r\nidentity.ProductName = 'pymodbus Server'\r\nidentity.ModelName = 'pymodbus Server'\r\nidentity.MajorMinorRevision = '1.0'\r\n\r\n#---------------------------------------------------------------------------# \r\n# run the server you want\r\n#---------------------------------------------------------------------------# \r\ntime = 5 # 5 seconds delay\r\nloop = LoopingCall(f=updating_writer, a=(context,))\r\nloop.start(time, now=False) # initially delay by time\r\nStartTcpServer(context, identity=identity, address=(\"\", 5007))\r\n\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n \n", "before_files": [{"content": "import sqlalchemy\nimport sqlalchemy.types as sqltypes\nfrom sqlalchemy.sql import and_\nfrom sqlalchemy.schema import UniqueConstraint\nfrom sqlalchemy.sql.expression import bindparam\n\nfrom pymodbus.exceptions import NotImplementedException\nfrom pymodbus.interfaces import IModbusSlaveContext\n\n#---------------------------------------------------------------------------#\n# Logging\n#---------------------------------------------------------------------------#\nimport logging\n_logger = logging.getLogger(__name__)\n\n\n#---------------------------------------------------------------------------#\n# Context\n#---------------------------------------------------------------------------#\nclass SqlSlaveContext(IModbusSlaveContext):\n '''\n This creates a modbus data model with each data access\n stored in its own personal block\n '''\n\n def __init__(self, *args, **kwargs):\n ''' Initializes the datastores\n\n :param kwargs: Each element is a ModbusDataBlock\n '''\n self.table = kwargs.get('table', 'pymodbus')\n self.database = kwargs.get('database', 'sqlite:///pymodbus.db')\n self._db_create(self.table, self.database)\n\n def __str__(self):\n ''' Returns a string representation of the context\n\n :returns: A string representation of the context\n '''\n return \"Modbus Slave Context\"\n\n def reset(self):\n ''' Resets all the datastores to their default values '''\n self._metadata.drop_all()\n self._db_create(self.table, self.database)\n\n def validate(self, fx, address, count=1):\n ''' Validates the request to make sure it is in range\n\n :param fx: The function we are working with\n :param address: The starting address\n :param count: The number of values to test\n 
:returns: True if the request in within range, False otherwise\n '''\n address = address + 1 # section 4.4 of specification\n _logger.debug(\"validate[%d] %d:%d\" % (fx, address, count))\n return self._validate(self.decode(fx), address, count)\n\n def getValues(self, fx, address, count=1):\n ''' Get `count` values from datastore\n\n :param fx: The function we are working with\n :param address: The starting address\n :param count: The number of values to retrieve\n :returns: The requested values from a:a+c\n '''\n address = address + 1 # section 4.4 of specification\n _logger.debug(\"get-values[%d] %d:%d\" % (fx, address, count))\n return self._get(self.decode(fx), address, count)\n\n def setValues(self, fx, address, values):\n ''' Sets the datastore with the supplied values\n\n :param fx: The function we are working with\n :param address: The starting address\n :param values: The new values to be set\n '''\n address = address + 1 # section 4.4 of specification\n _logger.debug(\"set-values[%d] %d:%d\" % (fx, address, len(values)))\n self._set(self.decode(fx), address, values)\n\n #--------------------------------------------------------------------------#\n # Sqlite Helper Methods\n #--------------------------------------------------------------------------#\n def _db_create(self, table, database):\n ''' A helper method to initialize the database and handles\n\n :param table: The table name to create\n :param database: The database uri to use\n '''\n self._engine = sqlalchemy.create_engine(database, echo=False)\n self._metadata = sqlalchemy.MetaData(self._engine)\n self._table = sqlalchemy.Table(table, self._metadata,\n sqlalchemy.Column('type', sqltypes.String(1)),\n sqlalchemy.Column('index', sqltypes.Integer),\n sqlalchemy.Column('value', sqltypes.Integer),\n UniqueConstraint('type', 'index', name='key'))\n self._table.create(checkfirst=True)\n self._connection = self._engine.connect()\n\n def _get(self, type, offset, count):\n '''\n :param type: The key prefix to use\n :param offset: The address offset to start at\n :param count: The number of bits to read\n :returns: The resulting values\n '''\n query = self._table.select(and_(\n self._table.c.type == type,\n self._table.c.index >= offset,\n self._table.c.index <= offset + count)\n )\n query = query.order_by(self._table.c.index.asc())\n result = self._connection.execute(query).fetchall()\n return [row.value for row in result]\n\n def _build_set(self, type, offset, values, prefix=''):\n ''' A helper method to generate the sql update context\n\n :param type: The key prefix to use\n :param offset: The address offset to start at\n :param values: The values to set\n :param prefix: Prefix fields index and type, defaults to empty string\n '''\n result = []\n for index, value in enumerate(values):\n result.append({\n prefix + 'type': type,\n prefix + 'index': offset + index,\n 'value': value\n })\n return result\n\n def _check(self, type, offset, values):\n result = self._get(type, offset, count=1)\n return False if len(result) > 0 else True\n\n def _set(self, type, offset, values):\n '''\n\n :param key: The type prefix to use\n :param offset: The address offset to start at\n :param values: The values to set\n '''\n if self._check(type, offset, values):\n context = self._build_set(type, offset, values)\n query = self._table.insert()\n result = self._connection.execute(query, context)\n return result.rowcount == len(values)\n else:\n return False\n\n def _update(self, type, offset, values):\n '''\n\n :param type: The type prefix to use\n :param 
offset: The address offset to start at\n :param values: The values to set\n '''\n context = self._build_set(type, offset, values, prefix='x_')\n query = self._table.update().values(name='value')\n query = query.where(and_(\n self._table.c.type == bindparam('x_type'),\n self._table.c.index == bindparam('x_index')))\n result = self._connection.execute(query, context)\n return result.rowcount == len(values)\n\n def _validate(self, type, offset, count):\n '''\n :param key: The key prefix to use\n :param offset: The address offset to start at\n :param count: The number of bits to read\n :returns: The result of the validation\n '''\n query = self._table.select(and_(\n self._table.c.type == type,\n self._table.c.index >= offset,\n self._table.c.index <= offset + count))\n result = self._connection.execute(query)\n return result.rowcount == count\n", "path": "pymodbus/datastore/database/sql_datastore.py"}]}
| 3,297 | 262 |
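The golden diff above tightens the SQL range filter from `index <= offset + count` to `index <= offset + count - 1` and counts fetched rows instead of trusting the cursor's `rowcount`. The following plain-Python sketch (no database involved) shows the off-by-one that the inclusive upper bound introduces.

```
# For an inclusive filter index >= offset AND index <= upper, the number of
# matching values is upper - offset + 1, so `count` rows need upper = offset + count - 1.
offset, count = 17, 3              # e.g. address 16 after the spec's +1 shift
indexes = list(range(0, 256))      # stand-in for the table's 'index' column

too_many = [i for i in indexes if offset <= i <= offset + count]      # 4 values
exact    = [i for i in indexes if offset <= i <= offset + count - 1]  # 3 values

print(too_many)  # [17, 18, 19, 20]
print(exact)     # [17, 18, 19]
# The original _validate compared a row count against `count`; the patch both
# narrows the upper bound and counts rows via fetchall() instead of rowcount.
```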
gh_patches_debug_36691
|
rasdani/github-patches
|
git_diff
|
AnalogJ__lexicon-147
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Route53: Public and Private Zones can't be distinguished.
I've been testing out lexicon for updating DNS records via Route53, and I have a Public and Private Zone with the same domain name.
I noticed that lexicon searches for the hosted zone by domain name only; in my case the internal (private) zone was created first, so it is the only zone lexicon finds for my domain name.
I was going to have it update a record for my home IP address to work around dynamic IP issues, but it only updates the Private zone's record. I've specified --identifier with the ZoneID of the Public Zone, but that does not work either.
I didn't even have a record for home.mydomain.com in my Private Zone, and lexicon ended up creating that record just to fulfill the update. I do see both private and public zones in lexicon's output, including the <PrivateZone>true|false</PrivateZone> element that specifically identifies each zone as private or not.
I'd like to be able to update both, differently as needed.
</issue>
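The report boils down to selecting the right hosted zone when a public and a private zone share the same name. A minimal selection sketch with boto3 follows; it assumes the ListHostedZonesByName response shape in which each zone carries `Config['PrivateZone']` (consistent with the `<PrivateZone>` element mentioned above), and the domain name is a placeholder.

```
import boto3

def find_zone_id(client, domain, private):
    """Return the Id of the hosted zone matching both the name and the visibility."""
    zones = client.list_hosted_zones_by_name()['HostedZones']
    for hz in zones:
        if hz['Name'] == domain + '.' and hz.get('Config', {}).get('PrivateZone') == private:
            return hz['Id']
    raise LookupError('no %s zone found for %s' % ('private' if private else 'public', domain))

r53 = boto3.client('route53')
public_id = find_zone_id(r53, 'mydomain.com', private=False)
private_id = find_zone_id(r53, 'mydomain.com', private=True)
```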
<code>
[start of lexicon/providers/route53.py]
1 """Provide support to Lexicon for AWS Route 53 DNS changes."""
2 from __future__ import absolute_import
3 from __future__ import print_function
4
5 import logging
6
7 from .base import Provider as BaseProvider
8
9 try:
10 import boto3 #optional dep
11 import botocore #optional dep
12 except ImportError:
13 pass
14
15 logger = logging.getLogger(__name__)
16
17
18 def ProviderParser(subparser):
19 """Specify arguments for AWS Route 53 Lexicon Provider."""
20 subparser.add_argument("--auth-access-key", help="specify ACCESS_KEY used to authenticate")
21 subparser.add_argument("--auth-access-secret", help="specify ACCESS_SECRET used authenticate")
22
23 #TODO: these are only required for testing, we should figure out a way to remove them & update the integration tests
24 # to dynamically populate the auth credentials that are required.
25 subparser.add_argument("--auth-username", help="alternative way to specify ACCESS_KEY used to authenticate")
26 subparser.add_argument("--auth-token", help="alternative way to specify ACCESS_SECRET used authenticate")
27
28
29 class RecordSetPaginator(object):
30 """Paginate through complete list of record sets."""
31
32 def __init__(self, r53_client, hosted_zone_id, max_items=None):
33 """Initialize paginator."""
34 self.r53_client = r53_client
35 self.hosted_zone_id = hosted_zone_id
36 self.max_items = max_items
37
38 def get_record_sets(self, **kwargs):
39 """Retrieve a page from API."""
40 return self.r53_client.list_resource_record_sets(**kwargs)
41
42 def get_base_kwargs(self):
43 """Get base kwargs for API call."""
44 kwargs = {
45 'HostedZoneId': self.hosted_zone_id
46 }
47 if self.max_items is not None:
48 kwargs.update({
49 'MaxItems': str(self.max_items)
50 })
51 return kwargs
52
53 def all_record_sets(self):
54 """Generator to loop through current record set.
55
56 Call next page if it exists.
57 """
58 is_truncated = True
59 start_record_name = None
60 start_record_type = None
61 kwargs = self.get_base_kwargs()
62 while is_truncated:
63 if start_record_name is not None:
64 kwargs.update({
65 'StartRecordName': start_record_name,
66 'StartRecordType': start_record_type
67 })
68 result = self.get_record_sets(**kwargs)
69 for record_set in result.get('ResourceRecordSets', []):
70 yield record_set
71
72 is_truncated = result.get('IsTruncated', False)
73
74 start_record_name = result.get('NextRecordName', None)
75 start_record_type = result.get('NextRecordType', None)
76
77
78 class Provider(BaseProvider):
79 """Provide AWS Route 53 implementation of Lexicon Provider interface."""
80
81 def __init__(self, options, engine_overrides=None):
82 """Initialize AWS Route 53 DNS provider."""
83 super(Provider, self).__init__(options, engine_overrides)
84 self.domain_id = None
85 # instantiate the client
86 self.r53_client = boto3.client(
87 'route53',
88 aws_access_key_id=self.options.get('auth_access_key', self.options.get('auth_username')),
89 aws_secret_access_key=self.options.get('auth_access_secret', self.options.get('auth_token'))
90 )
91
92 def authenticate(self):
93 """Determine the hosted zone id for the domain."""
94 try:
95 hosted_zones = self.r53_client.list_hosted_zones_by_name()[
96 'HostedZones'
97 ]
98 hosted_zone = next(
99 hz for hz in hosted_zones
100 if hz['Name'] == '{0}.'.format(self.options['domain'])
101 )
102 self.domain_id = hosted_zone['Id']
103 except StopIteration:
104 raise Exception('No domain found')
105
106 def _change_record_sets(self, action, type, name, content):
107 ttl = self.options['ttl']
108 value = '"{0}"'.format(content) if type in ['TXT', 'SPF'] else content
109 try:
110 self.r53_client.change_resource_record_sets(
111 HostedZoneId=self.domain_id,
112 ChangeBatch={
113 'Comment': '{0} using lexicon Route 53 provider'.format(
114 action
115 ),
116 'Changes': [
117 {
118 'Action': action,
119 'ResourceRecordSet': {
120 'Name': self._fqdn_name(name),
121 'Type': type,
122 'TTL': ttl if ttl is not None else 300,
123 'ResourceRecords': [
124 {
125 'Value': value
126 }
127 ]
128 }
129 }
130 ]
131 }
132 )
133 return True
134 except botocore.exceptions.ClientError as e:
135 logger.debug(e.message, exc_info=True)
136
137 def create_record(self, type, name, content):
138 """Create a record in the hosted zone."""
139 return self._change_record_sets('CREATE', type, name, content)
140
141 def update_record(self, identifier=None, type=None, name=None, content=None):
142 """Update a record from the hosted zone."""
143 return self._change_record_sets('UPSERT', type, name, content)
144
145 def delete_record(self, identifier=None, type=None, name=None, content=None):
146 """Delete a record from the hosted zone."""
147 return self._change_record_sets('DELETE', type, name, content)
148
149 def _format_content(self, type, content):
150 return content[1:-1] if type in ['TXT', 'SPF'] else content
151
152 def list_records(self, type=None, name=None, content=None):
153 """List all records for the hosted zone."""
154 records = []
155 paginator = RecordSetPaginator(self.r53_client, self.domain_id)
156 for record in paginator.all_record_sets():
157 if type is not None and record['Type'] != type:
158 continue
159 if name is not None and record['Name'] != self._fqdn_name(name):
160 continue
161 if record.get('AliasTarget', None) is not None:
162 record_content = [record['AliasTarget'].get('DNSName', None)]
163 if record.get('ResourceRecords', None) is not None:
164 record_content = [self._format_content(record['Type'], value['Value']) for value
165 in record['ResourceRecords']]
166 if content is not None and content not in record_content:
167 continue
168 logger.debug('record: %s', record)
169 records.append({
170 'type': record['Type'],
171 'name': self._full_name(record['Name']),
172 'ttl': record.get('TTL', None),
173 'content': record_content[0] if len(record_content) == 1 else record_content,
174 })
175 logger.debug('list_records: %s', records)
176 return records
177
[end of lexicon/providers/route53.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lexicon/providers/route53.py b/lexicon/providers/route53.py
--- a/lexicon/providers/route53.py
+++ b/lexicon/providers/route53.py
@@ -19,6 +19,7 @@
"""Specify arguments for AWS Route 53 Lexicon Provider."""
subparser.add_argument("--auth-access-key", help="specify ACCESS_KEY used to authenticate")
subparser.add_argument("--auth-access-secret", help="specify ACCESS_SECRET used authenticate")
+ subparser.add_argument("--private-zone", help="indicates what kind of hosted zone to use, if true, use only private zones, if false, use only public zones")
#TODO: these are only required for testing, we should figure out a way to remove them & update the integration tests
# to dynamically populate the auth credentials that are required.
@@ -82,6 +83,7 @@
"""Initialize AWS Route 53 DNS provider."""
super(Provider, self).__init__(options, engine_overrides)
self.domain_id = None
+ self.private_zone = options.get('private_zone', None)
# instantiate the client
self.r53_client = boto3.client(
'route53',
@@ -89,6 +91,20 @@
aws_secret_access_key=self.options.get('auth_access_secret', self.options.get('auth_token'))
)
+ def filter_zone(self, hz):
+ if self.private_zone is not None:
+ if hz['Config']['PrivateZone'] != self.str2bool(self.private_zone):
+ return False
+
+ if hz['Name'] != '{0}.'.format(self.options['domain']):
+ return False
+
+ return True
+
+ @staticmethod
+ def str2bool(input_string):
+ return input_string.lower() in ('true', 'yes')
+
def authenticate(self):
"""Determine the hosted zone id for the domain."""
try:
@@ -97,7 +113,7 @@
]
hosted_zone = next(
hz for hz in hosted_zones
- if hz['Name'] == '{0}.'.format(self.options['domain'])
+ if self.filter_zone(hz)
)
self.domain_id = hosted_zone['Id']
except StopIteration:
|
{"golden_diff": "diff --git a/lexicon/providers/route53.py b/lexicon/providers/route53.py\n--- a/lexicon/providers/route53.py\n+++ b/lexicon/providers/route53.py\n@@ -19,6 +19,7 @@\n \"\"\"Specify arguments for AWS Route 53 Lexicon Provider.\"\"\"\n subparser.add_argument(\"--auth-access-key\", help=\"specify ACCESS_KEY used to authenticate\")\n subparser.add_argument(\"--auth-access-secret\", help=\"specify ACCESS_SECRET used authenticate\")\n+ subparser.add_argument(\"--private-zone\", help=\"indicates what kind of hosted zone to use, if true, use only private zones, if false, use only public zones\")\n \n #TODO: these are only required for testing, we should figure out a way to remove them & update the integration tests\n # to dynamically populate the auth credentials that are required.\n@@ -82,6 +83,7 @@\n \"\"\"Initialize AWS Route 53 DNS provider.\"\"\"\n super(Provider, self).__init__(options, engine_overrides)\n self.domain_id = None\n+ self.private_zone = options.get('private_zone', None)\n # instantiate the client\n self.r53_client = boto3.client(\n 'route53',\n@@ -89,6 +91,20 @@\n aws_secret_access_key=self.options.get('auth_access_secret', self.options.get('auth_token'))\n )\n \n+ def filter_zone(self, hz):\n+ if self.private_zone is not None:\n+ if hz['Config']['PrivateZone'] != self.str2bool(self.private_zone):\n+ return False\n+\n+ if hz['Name'] != '{0}.'.format(self.options['domain']):\n+ return False\n+\n+ return True\n+\n+ @staticmethod\n+ def str2bool(input_string):\n+ return input_string.lower() in ('true', 'yes')\n+\n def authenticate(self):\n \"\"\"Determine the hosted zone id for the domain.\"\"\"\n try:\n@@ -97,7 +113,7 @@\n ]\n hosted_zone = next(\n hz for hz in hosted_zones\n- if hz['Name'] == '{0}.'.format(self.options['domain'])\n+ if self.filter_zone(hz)\n )\n self.domain_id = hosted_zone['Id']\n except StopIteration:\n", "issue": "Route53: Public and Private Zones can't be distinguished.\nI've been testing out lexicon for updating DNS records via Route53, and I have a Public and Private Zone with the same domain name.\r\nI noticed that lexicon is only searching for the domain name by name, so in my case, my internal zone was the first created and so it's the only thing lexicon itself finds for my domain name.\r\n\r\nI was going to have it update a record for my home IP address for dynamic IP issues, but what's happening is it is only updating the Private zone's record. I've specified --identifier with the ZoneID of the Public Zone, but that is not working either.\r\n\r\nI didn't even have a record for home.mydomain.com in my Private Zone, and it ended up creating the record just to fullfill the update. 
I do see in the output of lexicon both private and public zones, including the <PrivateZone>true|false</PrivateZone> specifically identifying it as private or not.\r\n\r\nI'd like to be able to update both, differently as needed.\n", "before_files": [{"content": "\"\"\"Provide support to Lexicon for AWS Route 53 DNS changes.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\nimport logging\n\nfrom .base import Provider as BaseProvider\n\ntry:\n import boto3 #optional dep\n import botocore #optional dep\nexcept ImportError:\n pass\n\nlogger = logging.getLogger(__name__)\n\n\ndef ProviderParser(subparser):\n \"\"\"Specify arguments for AWS Route 53 Lexicon Provider.\"\"\"\n subparser.add_argument(\"--auth-access-key\", help=\"specify ACCESS_KEY used to authenticate\")\n subparser.add_argument(\"--auth-access-secret\", help=\"specify ACCESS_SECRET used authenticate\")\n\n #TODO: these are only required for testing, we should figure out a way to remove them & update the integration tests\n # to dynamically populate the auth credentials that are required.\n subparser.add_argument(\"--auth-username\", help=\"alternative way to specify ACCESS_KEY used to authenticate\")\n subparser.add_argument(\"--auth-token\", help=\"alternative way to specify ACCESS_SECRET used authenticate\")\n\n\nclass RecordSetPaginator(object):\n \"\"\"Paginate through complete list of record sets.\"\"\"\n\n def __init__(self, r53_client, hosted_zone_id, max_items=None):\n \"\"\"Initialize paginator.\"\"\"\n self.r53_client = r53_client\n self.hosted_zone_id = hosted_zone_id\n self.max_items = max_items\n\n def get_record_sets(self, **kwargs):\n \"\"\"Retrieve a page from API.\"\"\"\n return self.r53_client.list_resource_record_sets(**kwargs)\n\n def get_base_kwargs(self):\n \"\"\"Get base kwargs for API call.\"\"\"\n kwargs = {\n 'HostedZoneId': self.hosted_zone_id\n }\n if self.max_items is not None:\n kwargs.update({\n 'MaxItems': str(self.max_items)\n })\n return kwargs\n\n def all_record_sets(self):\n \"\"\"Generator to loop through current record set.\n\n Call next page if it exists.\n \"\"\"\n is_truncated = True\n start_record_name = None\n start_record_type = None\n kwargs = self.get_base_kwargs()\n while is_truncated:\n if start_record_name is not None:\n kwargs.update({\n 'StartRecordName': start_record_name,\n 'StartRecordType': start_record_type\n })\n result = self.get_record_sets(**kwargs)\n for record_set in result.get('ResourceRecordSets', []):\n yield record_set\n\n is_truncated = result.get('IsTruncated', False)\n\n start_record_name = result.get('NextRecordName', None)\n start_record_type = result.get('NextRecordType', None)\n\n\nclass Provider(BaseProvider):\n \"\"\"Provide AWS Route 53 implementation of Lexicon Provider interface.\"\"\"\n\n def __init__(self, options, engine_overrides=None):\n \"\"\"Initialize AWS Route 53 DNS provider.\"\"\"\n super(Provider, self).__init__(options, engine_overrides)\n self.domain_id = None\n # instantiate the client\n self.r53_client = boto3.client(\n 'route53',\n aws_access_key_id=self.options.get('auth_access_key', self.options.get('auth_username')),\n aws_secret_access_key=self.options.get('auth_access_secret', self.options.get('auth_token'))\n )\n\n def authenticate(self):\n \"\"\"Determine the hosted zone id for the domain.\"\"\"\n try:\n hosted_zones = self.r53_client.list_hosted_zones_by_name()[\n 'HostedZones'\n ]\n hosted_zone = next(\n hz for hz in hosted_zones\n if hz['Name'] == '{0}.'.format(self.options['domain'])\n 
)\n self.domain_id = hosted_zone['Id']\n except StopIteration:\n raise Exception('No domain found')\n\n def _change_record_sets(self, action, type, name, content):\n ttl = self.options['ttl']\n value = '\"{0}\"'.format(content) if type in ['TXT', 'SPF'] else content\n try:\n self.r53_client.change_resource_record_sets(\n HostedZoneId=self.domain_id,\n ChangeBatch={\n 'Comment': '{0} using lexicon Route 53 provider'.format(\n action\n ),\n 'Changes': [\n {\n 'Action': action,\n 'ResourceRecordSet': {\n 'Name': self._fqdn_name(name),\n 'Type': type,\n 'TTL': ttl if ttl is not None else 300,\n 'ResourceRecords': [\n {\n 'Value': value\n }\n ]\n }\n }\n ]\n }\n )\n return True\n except botocore.exceptions.ClientError as e:\n logger.debug(e.message, exc_info=True)\n\n def create_record(self, type, name, content):\n \"\"\"Create a record in the hosted zone.\"\"\"\n return self._change_record_sets('CREATE', type, name, content)\n\n def update_record(self, identifier=None, type=None, name=None, content=None):\n \"\"\"Update a record from the hosted zone.\"\"\"\n return self._change_record_sets('UPSERT', type, name, content)\n\n def delete_record(self, identifier=None, type=None, name=None, content=None):\n \"\"\"Delete a record from the hosted zone.\"\"\"\n return self._change_record_sets('DELETE', type, name, content)\n\n def _format_content(self, type, content):\n return content[1:-1] if type in ['TXT', 'SPF'] else content\n\n def list_records(self, type=None, name=None, content=None):\n \"\"\"List all records for the hosted zone.\"\"\"\n records = []\n paginator = RecordSetPaginator(self.r53_client, self.domain_id)\n for record in paginator.all_record_sets():\n if type is not None and record['Type'] != type:\n continue\n if name is not None and record['Name'] != self._fqdn_name(name):\n continue\n if record.get('AliasTarget', None) is not None:\n record_content = [record['AliasTarget'].get('DNSName', None)]\n if record.get('ResourceRecords', None) is not None:\n record_content = [self._format_content(record['Type'], value['Value']) for value\n in record['ResourceRecords']]\n if content is not None and content not in record_content:\n continue\n logger.debug('record: %s', record)\n records.append({\n 'type': record['Type'],\n 'name': self._full_name(record['Name']),\n 'ttl': record.get('TTL', None),\n 'content': record_content[0] if len(record_content) == 1 else record_content,\n })\n logger.debug('list_records: %s', records)\n return records\n", "path": "lexicon/providers/route53.py"}]}
| 2,630 | 510 |
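The accepted patch above keys zone selection off a new `--private-zone` option compared against `Config['PrivateZone']`. The following sketch exercises that selection logic in isolation against hand-written sample zones (the IDs and names are made up), which is a cheap way to sanity-check the `str2bool` handling.

```
def str2bool(value):
    return value.lower() in ('true', 'yes')

def filter_zone(hz, domain, private_zone=None):
    if private_zone is not None and hz['Config']['PrivateZone'] != str2bool(private_zone):
        return False
    return hz['Name'] == '{0}.'.format(domain)

zones = [
    {'Id': 'Z-PRIVATE', 'Name': 'mydomain.com.', 'Config': {'PrivateZone': True}},
    {'Id': 'Z-PUBLIC', 'Name': 'mydomain.com.', 'Config': {'PrivateZone': False}},
]

picked = next(hz for hz in zones if filter_zone(hz, 'mydomain.com', private_zone='false'))
print(picked['Id'])  # Z-PUBLIC
```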
gh_patches_debug_1390
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-758
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ca_cert_dir keyword argument may be passed to HTTPConnectionPool by accident.
Seems like as part of #701 I missed the `SSL_KEYWORDS` block in `poolmanager.py`. This means that `ca_cert_dir` may accidentally be passed to the `HTTPConnectionPool`. This leads to the following error when attempting to use `ca_cert_dir` with a `PoolManager` and then making a plaintext HTTP connection:
```
>>> import urllib3
>>> p = urllib3.PoolManager(ca_cert_dir='/usr/local/etc/openssl')
>>> p.urlopen('GET', 'http://http2bin.org/get')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "urllib3/poolmanager.py", line 162, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "urllib3/connectionpool.py", line 548, in urlopen
conn = self._get_conn(timeout=pool_timeout)
File "urllib3/connectionpool.py", line 250, in _get_conn
return conn or self._new_conn()
File "urllib3/connectionpool.py", line 211, in _new_conn
strict=self.strict, **self.conn_kw)
File "urllib3/connection.py", line 121, in __init__
_HTTPConnection.__init__(self, *args, **kw)
TypeError: __init__() got an unexpected keyword argument 'ca_cert_dir'
```
</issue>
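The fix recorded later in this entry adds `'ca_cert_dir'` to the `SSL_KEYWORDS` tuple that `PoolManager._new_pool` strips before building a plain-HTTP pool. The following stripped-down sketch of that filtering step runs outside urllib3; the names mirror the code shown in the prompt.

```
# TLS-only keywords that must never reach HTTPConnectionPool; 'ca_cert_dir'
# is the entry the patch adds.
SSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs',
                'ssl_version', 'ca_cert_dir')

def http_pool_kwargs(connection_pool_kw):
    kwargs = connection_pool_kw.copy()
    for kw in SSL_KEYWORDS:
        kwargs.pop(kw, None)   # silently drop TLS-only options for http:// pools
    return kwargs

pool_kw = {'ca_cert_dir': '/usr/local/etc/openssl', 'maxsize': 10}
print(http_pool_kwargs(pool_kw))  # {'maxsize': 10}
```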
<code>
[start of urllib3/poolmanager.py]
1 from __future__ import absolute_import
2 import logging
3
4 try: # Python 3
5 from urllib.parse import urljoin
6 except ImportError:
7 from urlparse import urljoin
8
9 from ._collections import RecentlyUsedContainer
10 from .connectionpool import HTTPConnectionPool, HTTPSConnectionPool
11 from .connectionpool import port_by_scheme
12 from .exceptions import LocationValueError, MaxRetryError, ProxySchemeUnknown
13 from .request import RequestMethods
14 from .util.url import parse_url
15 from .util.retry import Retry
16
17
18 __all__ = ['PoolManager', 'ProxyManager', 'proxy_from_url']
19
20
21 pool_classes_by_scheme = {
22 'http': HTTPConnectionPool,
23 'https': HTTPSConnectionPool,
24 }
25
26 log = logging.getLogger(__name__)
27
28 SSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs',
29 'ssl_version')
30
31
32 class PoolManager(RequestMethods):
33 """
34 Allows for arbitrary requests while transparently keeping track of
35 necessary connection pools for you.
36
37 :param num_pools:
38 Number of connection pools to cache before discarding the least
39 recently used pool.
40
41 :param headers:
42 Headers to include with all requests, unless other headers are given
43 explicitly.
44
45 :param \**connection_pool_kw:
46 Additional parameters are used to create fresh
47 :class:`urllib3.connectionpool.ConnectionPool` instances.
48
49 Example::
50
51 >>> manager = PoolManager(num_pools=2)
52 >>> r = manager.request('GET', 'http://google.com/')
53 >>> r = manager.request('GET', 'http://google.com/mail')
54 >>> r = manager.request('GET', 'http://yahoo.com/')
55 >>> len(manager.pools)
56 2
57
58 """
59
60 proxy = None
61
62 def __init__(self, num_pools=10, headers=None, **connection_pool_kw):
63 RequestMethods.__init__(self, headers)
64 self.connection_pool_kw = connection_pool_kw
65 self.pools = RecentlyUsedContainer(num_pools,
66 dispose_func=lambda p: p.close())
67
68 def __enter__(self):
69 return self
70
71 def __exit__(self, exc_type, exc_val, exc_tb):
72 self.clear()
73 # Return False to re-raise any potential exceptions
74 return False
75
76 def _new_pool(self, scheme, host, port):
77 """
78 Create a new :class:`ConnectionPool` based on host, port and scheme.
79
80 This method is used to actually create the connection pools handed out
81 by :meth:`connection_from_url` and companion methods. It is intended
82 to be overridden for customization.
83 """
84 pool_cls = pool_classes_by_scheme[scheme]
85 kwargs = self.connection_pool_kw
86 if scheme == 'http':
87 kwargs = self.connection_pool_kw.copy()
88 for kw in SSL_KEYWORDS:
89 kwargs.pop(kw, None)
90
91 return pool_cls(host, port, **kwargs)
92
93 def clear(self):
94 """
95 Empty our store of pools and direct them all to close.
96
97 This will not affect in-flight connections, but they will not be
98 re-used after completion.
99 """
100 self.pools.clear()
101
102 def connection_from_host(self, host, port=None, scheme='http'):
103 """
104 Get a :class:`ConnectionPool` based on the host, port, and scheme.
105
106 If ``port`` isn't given, it will be derived from the ``scheme`` using
107 ``urllib3.connectionpool.port_by_scheme``.
108 """
109
110 if not host:
111 raise LocationValueError("No host specified.")
112
113 scheme = scheme or 'http'
114 port = port or port_by_scheme.get(scheme, 80)
115 pool_key = (scheme, host, port)
116
117 with self.pools.lock:
118 # If the scheme, host, or port doesn't match existing open
119 # connections, open a new ConnectionPool.
120 pool = self.pools.get(pool_key)
121 if pool:
122 return pool
123
124 # Make a fresh ConnectionPool of the desired type
125 pool = self._new_pool(scheme, host, port)
126 self.pools[pool_key] = pool
127
128 return pool
129
130 def connection_from_url(self, url):
131 """
132 Similar to :func:`urllib3.connectionpool.connection_from_url` but
133 doesn't pass any additional parameters to the
134 :class:`urllib3.connectionpool.ConnectionPool` constructor.
135
136 Additional parameters are taken from the :class:`.PoolManager`
137 constructor.
138 """
139 u = parse_url(url)
140 return self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
141
142 def urlopen(self, method, url, redirect=True, **kw):
143 """
144 Same as :meth:`urllib3.connectionpool.HTTPConnectionPool.urlopen`
145 with custom cross-host redirect logic and only sends the request-uri
146 portion of the ``url``.
147
148 The given ``url`` parameter must be absolute, such that an appropriate
149 :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.
150 """
151 u = parse_url(url)
152 conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
153
154 kw['assert_same_host'] = False
155 kw['redirect'] = False
156 if 'headers' not in kw:
157 kw['headers'] = self.headers
158
159 if self.proxy is not None and u.scheme == "http":
160 response = conn.urlopen(method, url, **kw)
161 else:
162 response = conn.urlopen(method, u.request_uri, **kw)
163
164 redirect_location = redirect and response.get_redirect_location()
165 if not redirect_location:
166 return response
167
168 # Support relative URLs for redirecting.
169 redirect_location = urljoin(url, redirect_location)
170
171 # RFC 7231, Section 6.4.4
172 if response.status == 303:
173 method = 'GET'
174
175 retries = kw.get('retries')
176 if not isinstance(retries, Retry):
177 retries = Retry.from_int(retries, redirect=redirect)
178
179 try:
180 retries = retries.increment(method, url, response=response, _pool=conn)
181 except MaxRetryError:
182 if retries.raise_on_redirect:
183 raise
184 return response
185
186 kw['retries'] = retries
187 kw['redirect'] = redirect
188
189 log.info("Redirecting %s -> %s" % (url, redirect_location))
190 return self.urlopen(method, redirect_location, **kw)
191
192
193 class ProxyManager(PoolManager):
194 """
195 Behaves just like :class:`PoolManager`, but sends all requests through
196 the defined proxy, using the CONNECT method for HTTPS URLs.
197
198 :param proxy_url:
199 The URL of the proxy to be used.
200
201 :param proxy_headers:
202 A dictionary contaning headers that will be sent to the proxy. In case
203 of HTTP they are being sent with each request, while in the
204 HTTPS/CONNECT case they are sent only once. Could be used for proxy
205 authentication.
206
207 Example:
208 >>> proxy = urllib3.ProxyManager('http://localhost:3128/')
209 >>> r1 = proxy.request('GET', 'http://google.com/')
210 >>> r2 = proxy.request('GET', 'http://httpbin.org/')
211 >>> len(proxy.pools)
212 1
213 >>> r3 = proxy.request('GET', 'https://httpbin.org/')
214 >>> r4 = proxy.request('GET', 'https://twitter.com/')
215 >>> len(proxy.pools)
216 3
217
218 """
219
220 def __init__(self, proxy_url, num_pools=10, headers=None,
221 proxy_headers=None, **connection_pool_kw):
222
223 if isinstance(proxy_url, HTTPConnectionPool):
224 proxy_url = '%s://%s:%i' % (proxy_url.scheme, proxy_url.host,
225 proxy_url.port)
226 proxy = parse_url(proxy_url)
227 if not proxy.port:
228 port = port_by_scheme.get(proxy.scheme, 80)
229 proxy = proxy._replace(port=port)
230
231 if proxy.scheme not in ("http", "https"):
232 raise ProxySchemeUnknown(proxy.scheme)
233
234 self.proxy = proxy
235 self.proxy_headers = proxy_headers or {}
236
237 connection_pool_kw['_proxy'] = self.proxy
238 connection_pool_kw['_proxy_headers'] = self.proxy_headers
239
240 super(ProxyManager, self).__init__(
241 num_pools, headers, **connection_pool_kw)
242
243 def connection_from_host(self, host, port=None, scheme='http'):
244 if scheme == "https":
245 return super(ProxyManager, self).connection_from_host(
246 host, port, scheme)
247
248 return super(ProxyManager, self).connection_from_host(
249 self.proxy.host, self.proxy.port, self.proxy.scheme)
250
251 def _set_proxy_headers(self, url, headers=None):
252 """
253 Sets headers needed by proxies: specifically, the Accept and Host
254 headers. Only sets headers not provided by the user.
255 """
256 headers_ = {'Accept': '*/*'}
257
258 netloc = parse_url(url).netloc
259 if netloc:
260 headers_['Host'] = netloc
261
262 if headers:
263 headers_.update(headers)
264 return headers_
265
266 def urlopen(self, method, url, redirect=True, **kw):
267 "Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute."
268 u = parse_url(url)
269
270 if u.scheme == "http":
271 # For proxied HTTPS requests, httplib sets the necessary headers
272 # on the CONNECT to the proxy. For HTTP, we'll definitely
273 # need to set 'Host' at the very least.
274 headers = kw.get('headers', self.headers)
275 kw['headers'] = self._set_proxy_headers(url, headers)
276
277 return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw)
278
279
280 def proxy_from_url(url, **kw):
281 return ProxyManager(proxy_url=url, **kw)
282
[end of urllib3/poolmanager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/urllib3/poolmanager.py b/urllib3/poolmanager.py
--- a/urllib3/poolmanager.py
+++ b/urllib3/poolmanager.py
@@ -26,7 +26,7 @@
log = logging.getLogger(__name__)
SSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs',
- 'ssl_version')
+ 'ssl_version', 'ca_cert_dir')
class PoolManager(RequestMethods):
|
{"golden_diff": "diff --git a/urllib3/poolmanager.py b/urllib3/poolmanager.py\n--- a/urllib3/poolmanager.py\n+++ b/urllib3/poolmanager.py\n@@ -26,7 +26,7 @@\n log = logging.getLogger(__name__)\n \n SSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs',\n- 'ssl_version')\n+ 'ssl_version', 'ca_cert_dir')\n \n \n class PoolManager(RequestMethods):\n", "issue": "ca_cert_dir keyword argument may be passed to HTTPConnectionPool by accident.\nSeems like as part of #701 I missed the `SSL_KEYWORDS` block in `poolmanager.py`. This means that `ca_cert_dir` may accidentally be passed to the `HTTPConnectionPool`. This leads to the following error when attempting to use `ca_cert_dir` with a `PoolManager` and then making a plaintext HTTP connection:\n\n```\n>>> import urllib3\n>>> p = urllib3.PoolManager(ca_cert_dir='/usr/local/etc/openssl')\n>>> p.urlopen('GET', 'http://http2bin.org/get')\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"urllib3/poolmanager.py\", line 162, in urlopen\n response = conn.urlopen(method, u.request_uri, **kw)\n File \"urllib3/connectionpool.py\", line 548, in urlopen\n conn = self._get_conn(timeout=pool_timeout)\n File \"urllib3/connectionpool.py\", line 250, in _get_conn\n return conn or self._new_conn()\n File \"urllib3/connectionpool.py\", line 211, in _new_conn\n strict=self.strict, **self.conn_kw)\n File \"urllib3/connection.py\", line 121, in __init__\n _HTTPConnection.__init__(self, *args, **kw)\nTypeError: __init__() got an unexpected keyword argument 'ca_cert_dir'\n```\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport logging\n\ntry: # Python 3\n from urllib.parse import urljoin\nexcept ImportError:\n from urlparse import urljoin\n\nfrom ._collections import RecentlyUsedContainer\nfrom .connectionpool import HTTPConnectionPool, HTTPSConnectionPool\nfrom .connectionpool import port_by_scheme\nfrom .exceptions import LocationValueError, MaxRetryError, ProxySchemeUnknown\nfrom .request import RequestMethods\nfrom .util.url import parse_url\nfrom .util.retry import Retry\n\n\n__all__ = ['PoolManager', 'ProxyManager', 'proxy_from_url']\n\n\npool_classes_by_scheme = {\n 'http': HTTPConnectionPool,\n 'https': HTTPSConnectionPool,\n}\n\nlog = logging.getLogger(__name__)\n\nSSL_KEYWORDS = ('key_file', 'cert_file', 'cert_reqs', 'ca_certs',\n 'ssl_version')\n\n\nclass PoolManager(RequestMethods):\n \"\"\"\n Allows for arbitrary requests while transparently keeping track of\n necessary connection pools for you.\n\n :param num_pools:\n Number of connection pools to cache before discarding the least\n recently used pool.\n\n :param headers:\n Headers to include with all requests, unless other headers are given\n explicitly.\n\n :param \\**connection_pool_kw:\n Additional parameters are used to create fresh\n :class:`urllib3.connectionpool.ConnectionPool` instances.\n\n Example::\n\n >>> manager = PoolManager(num_pools=2)\n >>> r = manager.request('GET', 'http://google.com/')\n >>> r = manager.request('GET', 'http://google.com/mail')\n >>> r = manager.request('GET', 'http://yahoo.com/')\n >>> len(manager.pools)\n 2\n\n \"\"\"\n\n proxy = None\n\n def __init__(self, num_pools=10, headers=None, **connection_pool_kw):\n RequestMethods.__init__(self, headers)\n self.connection_pool_kw = connection_pool_kw\n self.pools = RecentlyUsedContainer(num_pools,\n dispose_func=lambda p: p.close())\n\n def __enter__(self):\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n self.clear()\n # Return False 
to re-raise any potential exceptions\n return False\n\n def _new_pool(self, scheme, host, port):\n \"\"\"\n Create a new :class:`ConnectionPool` based on host, port and scheme.\n\n This method is used to actually create the connection pools handed out\n by :meth:`connection_from_url` and companion methods. It is intended\n to be overridden for customization.\n \"\"\"\n pool_cls = pool_classes_by_scheme[scheme]\n kwargs = self.connection_pool_kw\n if scheme == 'http':\n kwargs = self.connection_pool_kw.copy()\n for kw in SSL_KEYWORDS:\n kwargs.pop(kw, None)\n\n return pool_cls(host, port, **kwargs)\n\n def clear(self):\n \"\"\"\n Empty our store of pools and direct them all to close.\n\n This will not affect in-flight connections, but they will not be\n re-used after completion.\n \"\"\"\n self.pools.clear()\n\n def connection_from_host(self, host, port=None, scheme='http'):\n \"\"\"\n Get a :class:`ConnectionPool` based on the host, port, and scheme.\n\n If ``port`` isn't given, it will be derived from the ``scheme`` using\n ``urllib3.connectionpool.port_by_scheme``.\n \"\"\"\n\n if not host:\n raise LocationValueError(\"No host specified.\")\n\n scheme = scheme or 'http'\n port = port or port_by_scheme.get(scheme, 80)\n pool_key = (scheme, host, port)\n\n with self.pools.lock:\n # If the scheme, host, or port doesn't match existing open\n # connections, open a new ConnectionPool.\n pool = self.pools.get(pool_key)\n if pool:\n return pool\n\n # Make a fresh ConnectionPool of the desired type\n pool = self._new_pool(scheme, host, port)\n self.pools[pool_key] = pool\n\n return pool\n\n def connection_from_url(self, url):\n \"\"\"\n Similar to :func:`urllib3.connectionpool.connection_from_url` but\n doesn't pass any additional parameters to the\n :class:`urllib3.connectionpool.ConnectionPool` constructor.\n\n Additional parameters are taken from the :class:`.PoolManager`\n constructor.\n \"\"\"\n u = parse_url(url)\n return self.connection_from_host(u.host, port=u.port, scheme=u.scheme)\n\n def urlopen(self, method, url, redirect=True, **kw):\n \"\"\"\n Same as :meth:`urllib3.connectionpool.HTTPConnectionPool.urlopen`\n with custom cross-host redirect logic and only sends the request-uri\n portion of the ``url``.\n\n The given ``url`` parameter must be absolute, such that an appropriate\n :class:`urllib3.connectionpool.ConnectionPool` can be chosen for it.\n \"\"\"\n u = parse_url(url)\n conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)\n\n kw['assert_same_host'] = False\n kw['redirect'] = False\n if 'headers' not in kw:\n kw['headers'] = self.headers\n\n if self.proxy is not None and u.scheme == \"http\":\n response = conn.urlopen(method, url, **kw)\n else:\n response = conn.urlopen(method, u.request_uri, **kw)\n\n redirect_location = redirect and response.get_redirect_location()\n if not redirect_location:\n return response\n\n # Support relative URLs for redirecting.\n redirect_location = urljoin(url, redirect_location)\n\n # RFC 7231, Section 6.4.4\n if response.status == 303:\n method = 'GET'\n\n retries = kw.get('retries')\n if not isinstance(retries, Retry):\n retries = Retry.from_int(retries, redirect=redirect)\n\n try:\n retries = retries.increment(method, url, response=response, _pool=conn)\n except MaxRetryError:\n if retries.raise_on_redirect:\n raise\n return response\n\n kw['retries'] = retries\n kw['redirect'] = redirect\n\n log.info(\"Redirecting %s -> %s\" % (url, redirect_location))\n return self.urlopen(method, redirect_location, **kw)\n\n\nclass 
ProxyManager(PoolManager):\n \"\"\"\n Behaves just like :class:`PoolManager`, but sends all requests through\n the defined proxy, using the CONNECT method for HTTPS URLs.\n\n :param proxy_url:\n The URL of the proxy to be used.\n\n :param proxy_headers:\n A dictionary contaning headers that will be sent to the proxy. In case\n of HTTP they are being sent with each request, while in the\n HTTPS/CONNECT case they are sent only once. Could be used for proxy\n authentication.\n\n Example:\n >>> proxy = urllib3.ProxyManager('http://localhost:3128/')\n >>> r1 = proxy.request('GET', 'http://google.com/')\n >>> r2 = proxy.request('GET', 'http://httpbin.org/')\n >>> len(proxy.pools)\n 1\n >>> r3 = proxy.request('GET', 'https://httpbin.org/')\n >>> r4 = proxy.request('GET', 'https://twitter.com/')\n >>> len(proxy.pools)\n 3\n\n \"\"\"\n\n def __init__(self, proxy_url, num_pools=10, headers=None,\n proxy_headers=None, **connection_pool_kw):\n\n if isinstance(proxy_url, HTTPConnectionPool):\n proxy_url = '%s://%s:%i' % (proxy_url.scheme, proxy_url.host,\n proxy_url.port)\n proxy = parse_url(proxy_url)\n if not proxy.port:\n port = port_by_scheme.get(proxy.scheme, 80)\n proxy = proxy._replace(port=port)\n\n if proxy.scheme not in (\"http\", \"https\"):\n raise ProxySchemeUnknown(proxy.scheme)\n\n self.proxy = proxy\n self.proxy_headers = proxy_headers or {}\n\n connection_pool_kw['_proxy'] = self.proxy\n connection_pool_kw['_proxy_headers'] = self.proxy_headers\n\n super(ProxyManager, self).__init__(\n num_pools, headers, **connection_pool_kw)\n\n def connection_from_host(self, host, port=None, scheme='http'):\n if scheme == \"https\":\n return super(ProxyManager, self).connection_from_host(\n host, port, scheme)\n\n return super(ProxyManager, self).connection_from_host(\n self.proxy.host, self.proxy.port, self.proxy.scheme)\n\n def _set_proxy_headers(self, url, headers=None):\n \"\"\"\n Sets headers needed by proxies: specifically, the Accept and Host\n headers. Only sets headers not provided by the user.\n \"\"\"\n headers_ = {'Accept': '*/*'}\n\n netloc = parse_url(url).netloc\n if netloc:\n headers_['Host'] = netloc\n\n if headers:\n headers_.update(headers)\n return headers_\n\n def urlopen(self, method, url, redirect=True, **kw):\n \"Same as HTTP(S)ConnectionPool.urlopen, ``url`` must be absolute.\"\n u = parse_url(url)\n\n if u.scheme == \"http\":\n # For proxied HTTPS requests, httplib sets the necessary headers\n # on the CONNECT to the proxy. For HTTP, we'll definitely\n # need to set 'Host' at the very least.\n headers = kw.get('headers', self.headers)\n kw['headers'] = self._set_proxy_headers(url, headers)\n\n return super(ProxyManager, self).urlopen(method, url, redirect=redirect, **kw)\n\n\ndef proxy_from_url(url, **kw):\n return ProxyManager(proxy_url=url, **kw)\n", "path": "urllib3/poolmanager.py"}]}
| 3,757 | 110 |
gh_patches_debug_23755 | rasdani/github-patches | git_diff | akvo__akvo-rsr-2111 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Budget without value makes IATI export crash
## Test plan
GIVEN the IATI export (bottom right on project page)
WHEN a budget of the project has no value, but does have a value date / currency / label
THEN the `value` tag of the budget item should not be generated
AND no internal server error should be shown
## Issue description
See http://sentry.support.akvo-ops.org/rsr/live/group/832/
</issue>
<code>
[start of akvo/iati/exports/elements/budget.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from lxml import etree
8
9
10 def budget(project):
11 """
12 Generate the budget elements.
13
14 :param project: Project object
15 :return: A list of Etree elements
16 """
17 budget_elements = []
18
19 for budget_item in project.budget_items.all():
20 if budget_item.amount or budget_item.period_start or budget_item.period_end or \
21 budget_item.type or budget_item.status or budget_item.value_date or \
22 budget_item.currency or budget_item.other_extra or budget_item.label:
23 element = etree.Element("budget")
24
25 if budget_item.type:
26 element.attrib['type'] = budget_item.type
27
28 if budget_item.status:
29 element.attrib['status'] = budget_item.status
30
31 if budget_item.period_start:
32 period_start_element = etree.SubElement(element, "period-start")
33 period_start_element.attrib['iso-date'] = str(budget_item.period_start)
34
35 if budget_item.period_end:
36 period_end_element = etree.SubElement(element, "period-end")
37 period_end_element.attrib['iso-date'] = str(budget_item.period_end)
38
39 if budget_item.amount == 0 or budget_item.amount:
40 value_element = etree.SubElement(element, "value")
41 value_element.text = str(budget_item.amount)
42
43 if budget_item.value_date:
44 value_element.attrib['value-date'] = str(budget_item.value_date)
45
46 if budget_item.currency:
47 value_element.attrib['currency'] = budget_item.currency
48
49 if budget_item.other_extra:
50 value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\
51 other_extra
52 elif budget_item.label and budget_item.label.label:
53 value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\
54 label
55
56 budget_elements.append(element)
57
58 return budget_elements
59
[end of akvo/iati/exports/elements/budget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/iati/exports/elements/budget.py b/akvo/iati/exports/elements/budget.py
--- a/akvo/iati/exports/elements/budget.py
+++ b/akvo/iati/exports/elements/budget.py
@@ -40,18 +40,17 @@
value_element = etree.SubElement(element, "value")
value_element.text = str(budget_item.amount)
- if budget_item.value_date:
- value_element.attrib['value-date'] = str(budget_item.value_date)
-
- if budget_item.currency:
- value_element.attrib['currency'] = budget_item.currency
-
- if budget_item.other_extra:
- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\
- other_extra
- elif budget_item.label and budget_item.label.label:
- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\
- label
+ if budget_item.value_date:
+ value_element.attrib['value-date'] = str(budget_item.value_date)
+
+ if budget_item.currency:
+ value_element.attrib['currency'] = budget_item.currency
+
+ akvo_label = '{http://akvo.org/iati-activities}label'
+ if budget_item.other_extra:
+ value_element.attrib[akvo_label] = budget_item.other_extra
+ elif budget_item.label and budget_item.label.label:
+ value_element.attrib[akvo_label] = budget_item.label.label
budget_elements.append(element)
|
{"golden_diff": "diff --git a/akvo/iati/exports/elements/budget.py b/akvo/iati/exports/elements/budget.py\n--- a/akvo/iati/exports/elements/budget.py\n+++ b/akvo/iati/exports/elements/budget.py\n@@ -40,18 +40,17 @@\n value_element = etree.SubElement(element, \"value\")\n value_element.text = str(budget_item.amount)\n \n- if budget_item.value_date:\n- value_element.attrib['value-date'] = str(budget_item.value_date)\n-\n- if budget_item.currency:\n- value_element.attrib['currency'] = budget_item.currency\n-\n- if budget_item.other_extra:\n- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\\\n- other_extra\n- elif budget_item.label and budget_item.label.label:\n- value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\\\n- label\n+ if budget_item.value_date:\n+ value_element.attrib['value-date'] = str(budget_item.value_date)\n+\n+ if budget_item.currency:\n+ value_element.attrib['currency'] = budget_item.currency\n+\n+ akvo_label = '{http://akvo.org/iati-activities}label'\n+ if budget_item.other_extra:\n+ value_element.attrib[akvo_label] = budget_item.other_extra\n+ elif budget_item.label and budget_item.label.label:\n+ value_element.attrib[akvo_label] = budget_item.label.label\n \n budget_elements.append(element)\n", "issue": "Budget without value makes IATI export crash\n## Test plan\n\nGIVEN the IATI export (bottom right on project page)\nWHEN a budget of the project has no value, but does have a value date / currency / label\nTHEN the `value` tag of the budget item should not be generated\nAND no internal server error should be shown\n## Issue description\n\nSee http://sentry.support.akvo-ops.org/rsr/live/group/832/\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom lxml import etree\n\n\ndef budget(project):\n \"\"\"\n Generate the budget elements.\n\n :param project: Project object\n :return: A list of Etree elements\n \"\"\"\n budget_elements = []\n\n for budget_item in project.budget_items.all():\n if budget_item.amount or budget_item.period_start or budget_item.period_end or \\\n budget_item.type or budget_item.status or budget_item.value_date or \\\n budget_item.currency or budget_item.other_extra or budget_item.label:\n element = etree.Element(\"budget\")\n\n if budget_item.type:\n element.attrib['type'] = budget_item.type\n\n if budget_item.status:\n element.attrib['status'] = budget_item.status\n\n if budget_item.period_start:\n period_start_element = etree.SubElement(element, \"period-start\")\n period_start_element.attrib['iso-date'] = str(budget_item.period_start)\n\n if budget_item.period_end:\n period_end_element = etree.SubElement(element, \"period-end\")\n period_end_element.attrib['iso-date'] = str(budget_item.period_end)\n\n if budget_item.amount == 0 or budget_item.amount:\n value_element = etree.SubElement(element, \"value\")\n value_element.text = str(budget_item.amount)\n\n if budget_item.value_date:\n value_element.attrib['value-date'] = str(budget_item.value_date)\n\n if budget_item.currency:\n value_element.attrib['currency'] = budget_item.currency\n\n if budget_item.other_extra:\n value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.\\\n other_extra\n elif budget_item.label and budget_item.label.label:\n 
value_element.attrib['{http://akvo.org/iati-activities}label'] = budget_item.label.\\\n label\n\n budget_elements.append(element)\n\n return budget_elements\n", "path": "akvo/iati/exports/elements/budget.py"}]}
| 1,212 | 346 |
gh_patches_debug_1884 | rasdani/github-patches | git_diff | mlflow__mlflow-11463 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC-FIX] Document that attribute RunInfo.lifecycle_stage is of type LifecycleStage
### Willingness to contribute
No. I cannot contribute a documentation fix at this time.
### URL(s) with the issue
https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.RunInfo.lifecycle_stage
### Description of proposal (what needs changing)
For [documentation on RunInfo](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.RunInfo) class.
For the `RunInfo.lifecycle_stage` attribute we should mention that it's type is enum LifecycleStage. Analogous to the documentation for the `RunInfo.stage` attribute.
Should be
```
property lifecycle_stage[source]
One of the values in mlflow.entities.lifecycle_stage.LifecycleStage describing the lifecycle stage of the run.
```
similar to the existing
```
property status[source]
One of the values in mlflow.entities.RunStatus describing the status of the run.
```
</issue>
<code>
[start of mlflow/entities/run_info.py]
1 from mlflow.entities._mlflow_object import _MLflowObject
2 from mlflow.entities.lifecycle_stage import LifecycleStage
3 from mlflow.entities.run_status import RunStatus
4 from mlflow.exceptions import MlflowException
5 from mlflow.protos.databricks_pb2 import INVALID_PARAMETER_VALUE
6 from mlflow.protos.service_pb2 import RunInfo as ProtoRunInfo
7
8
9 def check_run_is_active(run_info):
10 if run_info.lifecycle_stage != LifecycleStage.ACTIVE:
11 raise MlflowException(
12 f"The run {run_info.run_id} must be in 'active' lifecycle_stage.",
13 error_code=INVALID_PARAMETER_VALUE,
14 )
15
16
17 class searchable_attribute(property):
18 # Wrapper class over property to designate some of the properties as searchable
19 # run attributes
20 pass
21
22
23 class orderable_attribute(property):
24 # Wrapper class over property to designate some of the properties as orderable
25 # run attributes
26 pass
27
28
29 class RunInfo(_MLflowObject):
30 """
31 Metadata about a run.
32 """
33
34 def __init__(
35 self,
36 run_uuid,
37 experiment_id,
38 user_id,
39 status,
40 start_time,
41 end_time,
42 lifecycle_stage,
43 artifact_uri=None,
44 run_id=None,
45 run_name=None,
46 ):
47 if experiment_id is None:
48 raise Exception("experiment_id cannot be None")
49 if user_id is None:
50 raise Exception("user_id cannot be None")
51 if status is None:
52 raise Exception("status cannot be None")
53 if start_time is None:
54 raise Exception("start_time cannot be None")
55 actual_run_id = run_id or run_uuid
56 if actual_run_id is None:
57 raise Exception("run_id and run_uuid cannot both be None")
58 self._run_uuid = actual_run_id
59 self._run_id = actual_run_id
60 self._experiment_id = experiment_id
61 self._user_id = user_id
62 self._status = status
63 self._start_time = start_time
64 self._end_time = end_time
65 self._lifecycle_stage = lifecycle_stage
66 self._artifact_uri = artifact_uri
67 self._run_name = run_name
68
69 def __eq__(self, other):
70 if type(other) is type(self):
71 # TODO deep equality here?
72 return self.__dict__ == other.__dict__
73 return False
74
75 def _copy_with_overrides(self, status=None, end_time=None, lifecycle_stage=None, run_name=None):
76 """A copy of the RunInfo with certain attributes modified."""
77 proto = self.to_proto()
78 if status:
79 proto.status = status
80 if end_time:
81 proto.end_time = end_time
82 if lifecycle_stage:
83 proto.lifecycle_stage = lifecycle_stage
84 if run_name:
85 proto.run_name = run_name
86 return RunInfo.from_proto(proto)
87
88 @property
89 def run_uuid(self):
90 """[Deprecated, use run_id instead] String containing run UUID."""
91 return self._run_uuid
92
93 @searchable_attribute
94 def run_id(self):
95 """String containing run id."""
96 return self._run_id
97
98 @property
99 def experiment_id(self):
100 """String ID of the experiment for the current run."""
101 return self._experiment_id
102
103 @searchable_attribute
104 def run_name(self):
105 """String containing run name."""
106 return self._run_name
107
108 def _set_run_name(self, new_name):
109 self._run_name = new_name
110
111 @searchable_attribute
112 def user_id(self):
113 """String ID of the user who initiated this run."""
114 return self._user_id
115
116 @searchable_attribute
117 def status(self):
118 """
119 One of the values in :py:class:`mlflow.entities.RunStatus`
120 describing the status of the run.
121 """
122 return self._status
123
124 @searchable_attribute
125 def start_time(self):
126 """Start time of the run, in number of milliseconds since the UNIX epoch."""
127 return self._start_time
128
129 @searchable_attribute
130 def end_time(self):
131 """End time of the run, in number of milliseconds since the UNIX epoch."""
132 return self._end_time
133
134 @searchable_attribute
135 def artifact_uri(self):
136 """String root artifact URI of the run."""
137 return self._artifact_uri
138
139 @property
140 def lifecycle_stage(self):
141 return self._lifecycle_stage
142
143 def to_proto(self):
144 proto = ProtoRunInfo()
145 proto.run_uuid = self.run_uuid
146 proto.run_id = self.run_id
147 if self.run_name is not None:
148 proto.run_name = self.run_name
149 proto.experiment_id = self.experiment_id
150 proto.user_id = self.user_id
151 proto.status = RunStatus.from_string(self.status)
152 proto.start_time = self.start_time
153 if self.end_time:
154 proto.end_time = self.end_time
155 if self.artifact_uri:
156 proto.artifact_uri = self.artifact_uri
157 proto.lifecycle_stage = self.lifecycle_stage
158 return proto
159
160 @classmethod
161 def from_proto(cls, proto):
162 end_time = proto.end_time
163 # The proto2 default scalar value of zero indicates that the run's end time is absent.
164 # An absent end time is represented with a NoneType in the `RunInfo` class
165 if end_time == 0:
166 end_time = None
167 return cls(
168 run_uuid=proto.run_uuid,
169 run_id=proto.run_id,
170 run_name=proto.run_name,
171 experiment_id=proto.experiment_id,
172 user_id=proto.user_id,
173 status=RunStatus.to_string(proto.status),
174 start_time=proto.start_time,
175 end_time=end_time,
176 lifecycle_stage=proto.lifecycle_stage,
177 artifact_uri=proto.artifact_uri,
178 )
179
180 @classmethod
181 def get_searchable_attributes(cls):
182 return sorted(
183 [p for p in cls.__dict__ if isinstance(getattr(cls, p), searchable_attribute)]
184 )
185
186 @classmethod
187 def get_orderable_attributes(cls):
188 # Note that all searchable attributes are also orderable.
189 return sorted(
190 [
191 p
192 for p in cls.__dict__
193 if isinstance(getattr(cls, p), (searchable_attribute, orderable_attribute))
194 ]
195 )
196
[end of mlflow/entities/run_info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mlflow/entities/run_info.py b/mlflow/entities/run_info.py
--- a/mlflow/entities/run_info.py
+++ b/mlflow/entities/run_info.py
@@ -138,6 +138,10 @@
@property
def lifecycle_stage(self):
+ """
+ One of the values in :py:class:`mlflow.entities.lifecycle_stage.LifecycleStage`
+ describing the lifecycle stage of the run.
+ """
return self._lifecycle_stage
def to_proto(self):
|
{"golden_diff": "diff --git a/mlflow/entities/run_info.py b/mlflow/entities/run_info.py\n--- a/mlflow/entities/run_info.py\n+++ b/mlflow/entities/run_info.py\n@@ -138,6 +138,10 @@\n \n @property\n def lifecycle_stage(self):\n+ \"\"\"\n+ One of the values in :py:class:`mlflow.entities.lifecycle_stage.LifecycleStage`\n+ describing the lifecycle stage of the run.\n+ \"\"\"\n return self._lifecycle_stage\n \n def to_proto(self):\n", "issue": "[DOC-FIX] Document that attribute RunInfo.lifecycle_stage is of type LifecycleStage\n### Willingness to contribute\n\nNo. I cannot contribute a documentation fix at this time.\n\n### URL(s) with the issue\n\nhttps://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.RunInfo.lifecycle_stage\n\n### Description of proposal (what needs changing)\n\nFor [documentation on RunInfo](https://mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.RunInfo) class.\r\n\r\nFor the `RunInfo.lifecycle_stage` attribute we should mention that it's type is enum LifecycleStage. Analogous to the documentation for the `RunInfo.stage` attribute.\r\n\r\nShould be\r\n```\r\nproperty lifecycle_stage[source]\r\n One of the values in mlflow.entities.lifecycle_stage.LifecycleStage describing the lifecycle stage of the run.\r\n```\r\nsimilar to the existing\r\n```\r\nproperty status[source]\r\n One of the values in mlflow.entities.RunStatus describing the status of the run.\r\n```\r\n\n", "before_files": [{"content": "from mlflow.entities._mlflow_object import _MLflowObject\nfrom mlflow.entities.lifecycle_stage import LifecycleStage\nfrom mlflow.entities.run_status import RunStatus\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.protos.databricks_pb2 import INVALID_PARAMETER_VALUE\nfrom mlflow.protos.service_pb2 import RunInfo as ProtoRunInfo\n\n\ndef check_run_is_active(run_info):\n if run_info.lifecycle_stage != LifecycleStage.ACTIVE:\n raise MlflowException(\n f\"The run {run_info.run_id} must be in 'active' lifecycle_stage.\",\n error_code=INVALID_PARAMETER_VALUE,\n )\n\n\nclass searchable_attribute(property):\n # Wrapper class over property to designate some of the properties as searchable\n # run attributes\n pass\n\n\nclass orderable_attribute(property):\n # Wrapper class over property to designate some of the properties as orderable\n # run attributes\n pass\n\n\nclass RunInfo(_MLflowObject):\n \"\"\"\n Metadata about a run.\n \"\"\"\n\n def __init__(\n self,\n run_uuid,\n experiment_id,\n user_id,\n status,\n start_time,\n end_time,\n lifecycle_stage,\n artifact_uri=None,\n run_id=None,\n run_name=None,\n ):\n if experiment_id is None:\n raise Exception(\"experiment_id cannot be None\")\n if user_id is None:\n raise Exception(\"user_id cannot be None\")\n if status is None:\n raise Exception(\"status cannot be None\")\n if start_time is None:\n raise Exception(\"start_time cannot be None\")\n actual_run_id = run_id or run_uuid\n if actual_run_id is None:\n raise Exception(\"run_id and run_uuid cannot both be None\")\n self._run_uuid = actual_run_id\n self._run_id = actual_run_id\n self._experiment_id = experiment_id\n self._user_id = user_id\n self._status = status\n self._start_time = start_time\n self._end_time = end_time\n self._lifecycle_stage = lifecycle_stage\n self._artifact_uri = artifact_uri\n self._run_name = run_name\n\n def __eq__(self, other):\n if type(other) is type(self):\n # TODO deep equality here?\n return self.__dict__ == other.__dict__\n return False\n\n def _copy_with_overrides(self, status=None, 
end_time=None, lifecycle_stage=None, run_name=None):\n \"\"\"A copy of the RunInfo with certain attributes modified.\"\"\"\n proto = self.to_proto()\n if status:\n proto.status = status\n if end_time:\n proto.end_time = end_time\n if lifecycle_stage:\n proto.lifecycle_stage = lifecycle_stage\n if run_name:\n proto.run_name = run_name\n return RunInfo.from_proto(proto)\n\n @property\n def run_uuid(self):\n \"\"\"[Deprecated, use run_id instead] String containing run UUID.\"\"\"\n return self._run_uuid\n\n @searchable_attribute\n def run_id(self):\n \"\"\"String containing run id.\"\"\"\n return self._run_id\n\n @property\n def experiment_id(self):\n \"\"\"String ID of the experiment for the current run.\"\"\"\n return self._experiment_id\n\n @searchable_attribute\n def run_name(self):\n \"\"\"String containing run name.\"\"\"\n return self._run_name\n\n def _set_run_name(self, new_name):\n self._run_name = new_name\n\n @searchable_attribute\n def user_id(self):\n \"\"\"String ID of the user who initiated this run.\"\"\"\n return self._user_id\n\n @searchable_attribute\n def status(self):\n \"\"\"\n One of the values in :py:class:`mlflow.entities.RunStatus`\n describing the status of the run.\n \"\"\"\n return self._status\n\n @searchable_attribute\n def start_time(self):\n \"\"\"Start time of the run, in number of milliseconds since the UNIX epoch.\"\"\"\n return self._start_time\n\n @searchable_attribute\n def end_time(self):\n \"\"\"End time of the run, in number of milliseconds since the UNIX epoch.\"\"\"\n return self._end_time\n\n @searchable_attribute\n def artifact_uri(self):\n \"\"\"String root artifact URI of the run.\"\"\"\n return self._artifact_uri\n\n @property\n def lifecycle_stage(self):\n return self._lifecycle_stage\n\n def to_proto(self):\n proto = ProtoRunInfo()\n proto.run_uuid = self.run_uuid\n proto.run_id = self.run_id\n if self.run_name is not None:\n proto.run_name = self.run_name\n proto.experiment_id = self.experiment_id\n proto.user_id = self.user_id\n proto.status = RunStatus.from_string(self.status)\n proto.start_time = self.start_time\n if self.end_time:\n proto.end_time = self.end_time\n if self.artifact_uri:\n proto.artifact_uri = self.artifact_uri\n proto.lifecycle_stage = self.lifecycle_stage\n return proto\n\n @classmethod\n def from_proto(cls, proto):\n end_time = proto.end_time\n # The proto2 default scalar value of zero indicates that the run's end time is absent.\n # An absent end time is represented with a NoneType in the `RunInfo` class\n if end_time == 0:\n end_time = None\n return cls(\n run_uuid=proto.run_uuid,\n run_id=proto.run_id,\n run_name=proto.run_name,\n experiment_id=proto.experiment_id,\n user_id=proto.user_id,\n status=RunStatus.to_string(proto.status),\n start_time=proto.start_time,\n end_time=end_time,\n lifecycle_stage=proto.lifecycle_stage,\n artifact_uri=proto.artifact_uri,\n )\n\n @classmethod\n def get_searchable_attributes(cls):\n return sorted(\n [p for p in cls.__dict__ if isinstance(getattr(cls, p), searchable_attribute)]\n )\n\n @classmethod\n def get_orderable_attributes(cls):\n # Note that all searchable attributes are also orderable.\n return sorted(\n [\n p\n for p in cls.__dict__\n if isinstance(getattr(cls, p), (searchable_attribute, orderable_attribute))\n ]\n )\n", "path": "mlflow/entities/run_info.py"}]}
| 2,566 | 112 |
gh_patches_debug_14561 | rasdani/github-patches | git_diff | meltano__meltano-6276 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Address warning in Airflow plugin version check
> not super urgent, but as we move into supporting newer Python versions
> (https://github.com/meltano/meltano/pull/6135) and bumping Meltano's dependencies (https://github.com/meltano/meltano/issues/6264), we might break Airflow support.
>
> It's also probably a very quick (< 1 hour) fix by replacing distutils.StrictVersion with the packaging.version equivalent.
>
```
src/meltano/core/plugin/airflow.py:110: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if StrictVersion(version) < StrictVersion("2.0.0")
```
</issue>
<code>
[start of src/meltano/core/plugin/airflow.py]
1 """Plugin glue code for Airflow."""
2 import configparser
3 import logging
4 import os
5 import subprocess
6 from distutils.version import StrictVersion
7
8 from meltano.core.behavior.hookable import hook
9 from meltano.core.error import AsyncSubprocessError
10 from meltano.core.plugin_invoker import PluginInvoker
11 from meltano.core.utils import nest
12
13 from . import BasePlugin, PluginType
14
15
16 class AirflowInvoker(PluginInvoker):
17 """Invoker that prepares env for Airflow."""
18
19 def env(self):
20 """Environment variables for Airflow.
21
22 Returns:
23 Dictionary of environment variables.
24 """
25 env = super().env()
26
27 env["AIRFLOW_HOME"] = str(self.plugin_config_service.run_dir)
28 env["AIRFLOW_CONFIG"] = str(self.files["config"])
29
30 return env
31
32
33 class Airflow(BasePlugin):
34 """Plugin glue code for Airflow."""
35
36 __plugin_type__ = PluginType.ORCHESTRATORS
37
38 invoker_class = AirflowInvoker
39
40 @property
41 def config_files(self):
42 """Return the configuration files required by the plugin.
43
44 Returns:
45 Dictionary of config file identifiers and filenames
46 """
47 return {"config": "airflow.cfg"}
48
49 def process_config(self, flat_config):
50 """Unflatten the config.
51
52 Args:
53 flat_config: the flat config
54
55 Returns:
56 unflattened config
57 """
58 config = {}
59 for key, value in flat_config.items():
60 nest(config, key, str(value))
61 return config
62
63 @staticmethod
64 def update_config_file(invoker: AirflowInvoker) -> None:
65 """Update airflow.cfg with plugin configuration.
66
67 Args:
68 invoker: the active PluginInvoker
69 """
70 airflow_cfg_path = invoker.files["config"]
71 logging.debug(f"Generated default '{str(airflow_cfg_path)}'")
72
73 # open the configuration and update it
74 # now we let's update the config to use our stubs
75 airflow_cfg = configparser.ConfigParser()
76
77 with airflow_cfg_path.open() as airflow_cfg_file_to_read:
78 airflow_cfg.read_file(airflow_cfg_file_to_read)
79 logging.debug(f"Loaded '{str(airflow_cfg_path)}'")
80
81 config = invoker.plugin_config_processed
82 for section, section_config in config.items():
83 airflow_cfg[section].update(section_config)
84 logging.debug(f"\tUpdated section [{section}] with {section_config}")
85
86 with airflow_cfg_path.open("w") as airflow_cfg_file_to_write:
87 airflow_cfg.write(airflow_cfg_file_to_write)
88 logging.debug(f"Saved '{str(airflow_cfg_path)}'")
89
90 @hook("before_install")
91 async def setup_env(self, *args, **kwargs):
92 """Configure the env to make airflow installable without GPL deps.
93
94 Args:
95 args: Arbitrary args
96 kwargs: Arbitrary kwargs
97 """
98 os.environ["SLUGIFY_USES_TEXT_UNIDECODE"] = "yes"
99
100 @hook("before_configure")
101 async def before_configure(self, invoker: AirflowInvoker, session): # noqa: WPS217
102 """Generate config file and keep metadata database up-to-date.
103
104 Args:
105 invoker: the active PluginInvoker
106 session: metadata database session
107
108 Raises:
109 AsyncSubprocessError: if command failed to run
110 """
111 # generate the default `airflow.cfg`
112 handle = await invoker.invoke_async(
113 "--help",
114 require_preparation=False,
115 stdout=subprocess.DEVNULL,
116 stderr=subprocess.PIPE,
117 )
118 exit_code = await handle.wait()
119
120 if exit_code:
121 raise AsyncSubprocessError(
122 "Command `airflow --help` failed", process=handle
123 )
124
125 # Read and update airflow.cfg
126 self.update_config_file(invoker)
127
128 # we've changed the configuration here, so we need to call
129 # prepare again on the invoker so it re-reads the configuration
130 # for the Airflow plugin
131 await invoker.prepare(session)
132
133 # make sure we use correct db init
134 handle = await invoker.invoke_async(
135 "version",
136 stdout=subprocess.PIPE,
137 stderr=subprocess.PIPE,
138 )
139
140 stdout, stderr = await handle.communicate()
141
142 if handle.returncode:
143 raise AsyncSubprocessError(
144 "Command `airflow version` failed", process=handle
145 )
146
147 version = stdout.decode()
148 init_db_cmd = (
149 ["initdb"]
150 if StrictVersion(version) < StrictVersion("2.0.0")
151 else ["db", "init"]
152 )
153
154 handle = await invoker.invoke_async(
155 *init_db_cmd,
156 stdout=subprocess.PIPE,
157 stderr=subprocess.PIPE,
158 )
159 exit_code = await handle.wait()
160
161 if exit_code:
162 raise AsyncSubprocessError(
163 "Airflow metadata database could not be initialized: `airflow initdb` failed",
164 handle,
165 )
166
167 logging.debug("Completed `airflow initdb`")
168
169 @hook("before_cleanup")
170 async def before_cleanup(self, invoker: PluginInvoker):
171 """Delete the config file.
172
173 Args:
174 invoker: the active PluginInvoker
175 """
176 config_file = invoker.files["config"]
177 try:
178 config_file.unlink()
179 logging.debug(f"Deleted configuration at {config_file}")
180 except FileNotFoundError:
181 pass
182
[end of src/meltano/core/plugin/airflow.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/core/plugin/airflow.py b/src/meltano/core/plugin/airflow.py
--- a/src/meltano/core/plugin/airflow.py
+++ b/src/meltano/core/plugin/airflow.py
@@ -3,7 +3,8 @@
import logging
import os
import subprocess
-from distutils.version import StrictVersion
+
+from packaging.version import Version
from meltano.core.behavior.hookable import hook
from meltano.core.error import AsyncSubprocessError
@@ -146,9 +147,7 @@
version = stdout.decode()
init_db_cmd = (
- ["initdb"]
- if StrictVersion(version) < StrictVersion("2.0.0")
- else ["db", "init"]
+ ["initdb"] if Version(version) < Version("2.0.0") else ["db", "init"]
)
handle = await invoker.invoke_async(
|
{"golden_diff": "diff --git a/src/meltano/core/plugin/airflow.py b/src/meltano/core/plugin/airflow.py\n--- a/src/meltano/core/plugin/airflow.py\n+++ b/src/meltano/core/plugin/airflow.py\n@@ -3,7 +3,8 @@\n import logging\n import os\n import subprocess\n-from distutils.version import StrictVersion\n+\n+from packaging.version import Version\n \n from meltano.core.behavior.hookable import hook\n from meltano.core.error import AsyncSubprocessError\n@@ -146,9 +147,7 @@\n \n version = stdout.decode()\n init_db_cmd = (\n- [\"initdb\"]\n- if StrictVersion(version) < StrictVersion(\"2.0.0\")\n- else [\"db\", \"init\"]\n+ [\"initdb\"] if Version(version) < Version(\"2.0.0\") else [\"db\", \"init\"]\n )\n \n handle = await invoker.invoke_async(\n", "issue": "Address warning in Airflow plugin version check\n> not super urgent, but as we move into supporting newer Python versions \r\n> (https://github.com/meltano/meltano/pull/6135) and bumping Meltano's dependencies (https://github.com/meltano/meltano/issues/6264), we might break Airflow support.\r\n> \r\n> It's also probably a very quick (< 1 hour) fix by replacing distutils.StrictVersion with the packaging.version equivalent.\r\n> \r\n\r\n```\r\nsrc/meltano/core/plugin/airflow.py:110: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.\r\n if StrictVersion(version) < StrictVersion(\"2.0.0\")\r\n```\n", "before_files": [{"content": "\"\"\"Plugin glue code for Airflow.\"\"\"\nimport configparser\nimport logging\nimport os\nimport subprocess\nfrom distutils.version import StrictVersion\n\nfrom meltano.core.behavior.hookable import hook\nfrom meltano.core.error import AsyncSubprocessError\nfrom meltano.core.plugin_invoker import PluginInvoker\nfrom meltano.core.utils import nest\n\nfrom . 
import BasePlugin, PluginType\n\n\nclass AirflowInvoker(PluginInvoker):\n \"\"\"Invoker that prepares env for Airflow.\"\"\"\n\n def env(self):\n \"\"\"Environment variables for Airflow.\n\n Returns:\n Dictionary of environment variables.\n \"\"\"\n env = super().env()\n\n env[\"AIRFLOW_HOME\"] = str(self.plugin_config_service.run_dir)\n env[\"AIRFLOW_CONFIG\"] = str(self.files[\"config\"])\n\n return env\n\n\nclass Airflow(BasePlugin):\n \"\"\"Plugin glue code for Airflow.\"\"\"\n\n __plugin_type__ = PluginType.ORCHESTRATORS\n\n invoker_class = AirflowInvoker\n\n @property\n def config_files(self):\n \"\"\"Return the configuration files required by the plugin.\n\n Returns:\n Dictionary of config file identifiers and filenames\n \"\"\"\n return {\"config\": \"airflow.cfg\"}\n\n def process_config(self, flat_config):\n \"\"\"Unflatten the config.\n\n Args:\n flat_config: the flat config\n\n Returns:\n unflattened config\n \"\"\"\n config = {}\n for key, value in flat_config.items():\n nest(config, key, str(value))\n return config\n\n @staticmethod\n def update_config_file(invoker: AirflowInvoker) -> None:\n \"\"\"Update airflow.cfg with plugin configuration.\n\n Args:\n invoker: the active PluginInvoker\n \"\"\"\n airflow_cfg_path = invoker.files[\"config\"]\n logging.debug(f\"Generated default '{str(airflow_cfg_path)}'\")\n\n # open the configuration and update it\n # now we let's update the config to use our stubs\n airflow_cfg = configparser.ConfigParser()\n\n with airflow_cfg_path.open() as airflow_cfg_file_to_read:\n airflow_cfg.read_file(airflow_cfg_file_to_read)\n logging.debug(f\"Loaded '{str(airflow_cfg_path)}'\")\n\n config = invoker.plugin_config_processed\n for section, section_config in config.items():\n airflow_cfg[section].update(section_config)\n logging.debug(f\"\\tUpdated section [{section}] with {section_config}\")\n\n with airflow_cfg_path.open(\"w\") as airflow_cfg_file_to_write:\n airflow_cfg.write(airflow_cfg_file_to_write)\n logging.debug(f\"Saved '{str(airflow_cfg_path)}'\")\n\n @hook(\"before_install\")\n async def setup_env(self, *args, **kwargs):\n \"\"\"Configure the env to make airflow installable without GPL deps.\n\n Args:\n args: Arbitrary args\n kwargs: Arbitrary kwargs\n \"\"\"\n os.environ[\"SLUGIFY_USES_TEXT_UNIDECODE\"] = \"yes\"\n\n @hook(\"before_configure\")\n async def before_configure(self, invoker: AirflowInvoker, session): # noqa: WPS217\n \"\"\"Generate config file and keep metadata database up-to-date.\n\n Args:\n invoker: the active PluginInvoker\n session: metadata database session\n\n Raises:\n AsyncSubprocessError: if command failed to run\n \"\"\"\n # generate the default `airflow.cfg`\n handle = await invoker.invoke_async(\n \"--help\",\n require_preparation=False,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.PIPE,\n )\n exit_code = await handle.wait()\n\n if exit_code:\n raise AsyncSubprocessError(\n \"Command `airflow --help` failed\", process=handle\n )\n\n # Read and update airflow.cfg\n self.update_config_file(invoker)\n\n # we've changed the configuration here, so we need to call\n # prepare again on the invoker so it re-reads the configuration\n # for the Airflow plugin\n await invoker.prepare(session)\n\n # make sure we use correct db init\n handle = await invoker.invoke_async(\n \"version\",\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n )\n\n stdout, stderr = await handle.communicate()\n\n if handle.returncode:\n raise AsyncSubprocessError(\n \"Command `airflow version` failed\", process=handle\n )\n\n version = 
stdout.decode()\n init_db_cmd = (\n [\"initdb\"]\n if StrictVersion(version) < StrictVersion(\"2.0.0\")\n else [\"db\", \"init\"]\n )\n\n handle = await invoker.invoke_async(\n *init_db_cmd,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n )\n exit_code = await handle.wait()\n\n if exit_code:\n raise AsyncSubprocessError(\n \"Airflow metadata database could not be initialized: `airflow initdb` failed\",\n handle,\n )\n\n logging.debug(\"Completed `airflow initdb`\")\n\n @hook(\"before_cleanup\")\n async def before_cleanup(self, invoker: PluginInvoker):\n \"\"\"Delete the config file.\n\n Args:\n invoker: the active PluginInvoker\n \"\"\"\n config_file = invoker.files[\"config\"]\n try:\n config_file.unlink()\n logging.debug(f\"Deleted configuration at {config_file}\")\n except FileNotFoundError:\n pass\n", "path": "src/meltano/core/plugin/airflow.py"}]}
| 2,294 | 207 |