| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_14762 | rasdani/github-patches | git_diff | huggingface__peft-1320 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
An error occurs when using LoftQ. IndexError
A problem occurred while applying LoftQ to T5.
`IndexError: tensors used as indices must be long, byte or bool tensors`
The problem was in [this line](https://github.com/huggingface/peft/blob/main/src/peft/utils/loftq_utils.py#L158), I replaced int with long and it worked fine.
`lookup_table_idx = lookup_table_idx.to(torch.long)`
</issue>
<code>
[start of src/peft/utils/loftq_utils.py]
1 # coding=utf-8
2 # Copyright 2023-present the HuggingFace Inc. team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 # Reference code: https://github.com/yxli2123/LoftQ/blob/main/utils.py
17 # Reference paper: https://arxiv.org/abs/2310.08659
18
19 import logging
20 from typing import Union
21
22 import torch
23
24 from peft.import_utils import is_bnb_4bit_available, is_bnb_available
25
26
27 if is_bnb_available():
28 import bitsandbytes as bnb
29
30
31 class NFQuantizer:
32 def __init__(self, num_bits=2, device="cuda", method="normal", block_size=64, *args, **kwargs):
33 super().__init__(*args, **kwargs)
34 self.num_bits = num_bits
35 self.device = device
36 self.method = method
37 self.block_size = block_size
38 if self.method == "normal":
39 self.norm_lookup_table = self.create_normal_map(num_bits=self.num_bits)
40 self.norm_lookup_table = self.norm_lookup_table.to(device)
41 elif self.method == "uniform":
42 self.norm_lookup_table = self.create_uniform_map(num_bits=self.num_bits)
43 self.norm_lookup_table = self.norm_lookup_table.to(device)
44 else:
45 raise NotImplementedError("Other quantization methods not supported yet.")
46
47 @staticmethod
48 def create_uniform_map(symmetric=False, num_bits=4):
49 if symmetric:
50 # print("symmetric uniform quantization")
51 negative = torch.linspace(-1, 0, 2 ** (num_bits - 1))
52 positive = torch.linspace(0, 1, 2 ** (num_bits - 1))
53 table = torch.cat([negative, positive[1:]])
54 else:
55 # print("asymmetric uniform quantization")
56 table = torch.linspace(-1, 1, 2**num_bits)
57 return table
58
59 @staticmethod
60 def create_normal_map(offset=0.9677083, symmetric=False, num_bits=2):
61 try:
62 from scipy.stats import norm
63 except ImportError:
64 raise ImportError("The required package 'scipy' is not installed. Please install it to continue.")
65
66 variations = 2**num_bits
67 if symmetric:
68 v = norm.ppf(torch.linspace(1 - offset, offset, variations + 1)).tolist()
69 values = []
70 for index in range(len(v) - 1):
71 values.append(0.5 * v[index] + 0.5 * v[index + 1])
72 v = values
73 else:
74 # one more positive value, this is an asymmetric type
75 v1 = norm.ppf(torch.linspace(offset, 0.5, variations // 2 + 1)[:-1]).tolist()
76 v2 = [0]
77 v3 = (-norm.ppf(torch.linspace(offset, 0.5, variations // 2)[:-1])).tolist()
78 v = v1 + v2 + v3
79
80 values = torch.Tensor(v)
81 values = values.sort().values
82 values /= values.max()
83 return values
84
85 def quantize_tensor(self, weight):
86 max_abs = torch.abs(weight).max()
87 weight_normed = weight / max_abs
88
89 weight_normed_expanded = weight_normed.unsqueeze(-1)
90
91 # Reshape L to have the same number of dimensions as X_expanded
92 L_reshaped = torch.tensor(self.norm_lookup_table).reshape(1, -1)
93
94 # Calculate the absolute difference between X_expanded and L_reshaped
95 abs_diff = torch.abs(weight_normed_expanded - L_reshaped)
96
97 # Find the index of the minimum absolute difference for each element
98 qweight = torch.argmin(abs_diff, dim=-1)
99 return qweight, max_abs
100
101 def dequantize_tensor(self, qweight, max_abs):
102 qweight_flatten = qweight.flatten()
103
104 weight_normed = self.norm_lookup_table[qweight_flatten]
105 weight = weight_normed * max_abs
106
107 weight = weight.reshape(qweight.shape)
108
109 return weight
110
111 def quantize_block(self, weight):
112 if len(weight.shape) != 2:
113 raise ValueError(f"Only support 2D matrix, but your input has {len(weight.shape)} dimensions.")
114 if weight.shape[0] * weight.shape[1] % self.block_size != 0:
115 raise ValueError(
116 f"Weight with shape ({weight.shape[0]} x {weight.shape[1]}) "
117 f"is not dividable by block size {self.block_size}."
118 )
119
120 M, N = weight.shape
121 device = weight.device
122
123 # Quantization
124 weight_flatten = weight.flatten() # (M*N, )
125 weight_block = weight_flatten.reshape(-1, self.block_size) # (L, B), L = M * N / B
126 if self.method == "normal":
127 weight_max = weight_block.abs().max(dim=-1)[0] # (L, 1)
128 elif self.method == "uniform":
129 weight_max = weight_block.mean(dim=-1) + 2.5 * weight_block.std(dim=-1)
130 else:
131 raise NotImplementedError("Method not supported yet.")
132 weight_max = weight_max.unsqueeze(-1)
133 weight_divabs = weight_block / weight_max # (L, B)
134 weight_divabs = weight_divabs.unsqueeze(-1) # (L, B, 1)
135 L_reshaped = self.norm_lookup_table.reshape(1, -1) # (1, 2**K)
136
137 abs_diff = torch.abs(weight_divabs - L_reshaped) # (L, B, 2**K)
138 qweight = torch.argmin(abs_diff, dim=-1) # (L, B)
139
140 # Pack multiple k-bit into uint8
141 qweight = qweight.reshape(-1, 8 // self.num_bits)
142 qweight_pack = torch.zeros((M * N // 8 * self.num_bits, 1), dtype=torch.uint8, device=device)
143
144 # data format example:
145 # [1, 0, 3, 2] or [01, 00, 11, 10] -> [10110001], LIFO
146 for i in range(8 // self.num_bits):
147 qweight[:, i] = qweight[:, i] << i * self.num_bits
148 qweight_pack[:, 0] |= qweight[:, i]
149
150 return qweight_pack, weight_max, weight.shape
151
152 def dequantize_block(self, qweight, weight_max, weight_shape):
153 # unpack weight
154 device = qweight.device
155 weight = torch.zeros((qweight.shape[0], 8 // self.num_bits), dtype=torch.float32, device=device)
156 for i in range(8 // self.num_bits):
157 lookup_table_idx = qweight.to(torch.long) % 2**self.num_bits # get the most right 2 bits
158 lookup_table_idx = lookup_table_idx.to(torch.int)
159 weight[:, i] = self.norm_lookup_table[lookup_table_idx].squeeze()
160 qweight = qweight >> self.num_bits # right shift 2 bits of the original data
161
162 weight_block = weight.reshape(-1, self.block_size)
163 weight = weight_block * weight_max
164 weight = weight.reshape(weight_shape)
165
166 return weight
167
168
169 def _low_rank_decomposition(weight, reduced_rank=32):
170 """
171 :param weight: The matrix to decompose, of shape (H, W) :param reduced_rank: the final rank :return:
172 """
173 matrix_dimension = len(weight.size())
174 if matrix_dimension != 2:
175 raise ValueError(f"Only support 2D matrix, but your input has {matrix_dimension} dimensions.")
176
177 # Use SVD to decompose a matrix, default full_matrices is False to save parameters
178 U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
179
180 L = U @ (torch.sqrt(torch.diag(S)[:, 0:reduced_rank]))
181 R = torch.sqrt(torch.diag(S)[0:reduced_rank, :]) @ Vh
182
183 return {"L": L, "R": R, "U": U, "S": S, "Vh": Vh, "reduced_rank": reduced_rank}
184
185
186 @torch.no_grad()
187 def loftq_init(weight: Union[torch.Tensor, torch.nn.Parameter], num_bits: int, reduced_rank: int, num_iter=1):
188 if num_bits not in [2, 4, 8]:
189 raise ValueError("Only support 2, 4, 8 bits quantization")
190 if num_iter <= 0:
191 raise ValueError("Number of iterations must be greater than 0")
192
193 out_feature, in_feature = weight.size()
194 device = weight.device
195 dtype = weight.dtype
196
197 logging.info(
198 f"Weight: ({out_feature}, {in_feature}) | Rank: {reduced_rank} "
199 f"| Num Iter: {num_iter} | Num Bits: {num_bits}"
200 )
201 if not is_bnb_4bit_available() or num_bits in [2, 8]:
202 quantizer = NFQuantizer(num_bits=num_bits, device=device, method="normal", block_size=64)
203 compute_device = device
204 else:
205 compute_device = "cuda"
206
207 weight = weight.to(device=compute_device, dtype=torch.float32)
208 res = weight.clone()
209 for i in range(num_iter):
210 torch.cuda.empty_cache()
211 # Quantization
212 if num_bits == 4 and is_bnb_4bit_available():
213 qweight = bnb.nn.Params4bit(
214 res.to("cpu"), requires_grad=False, compress_statistics=False, quant_type="nf4"
215 ).to(compute_device)
216 dequantized_weight = bnb.functional.dequantize_4bit(qweight.data, qweight.quant_state)
217 else:
218 quantized_weight, max_abs, shape = quantizer.quantize_block(res)
219 dequantized_weight = quantizer.dequantize_block(quantized_weight, max_abs, shape)
220
221 res = weight - dequantized_weight
222
223 # Decompose the residual by SVD
224 output = _low_rank_decomposition(res, reduced_rank=reduced_rank)
225 L, R, reduced_rank = output["L"], output["R"], output["reduced_rank"]
226 res = weight - torch.mm(L, R)
227
228 lora_A, lora_B = R, L
229
230 return dequantized_weight.to(device=device, dtype=dtype), lora_A, lora_B
231
[end of src/peft/utils/loftq_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/peft/utils/loftq_utils.py b/src/peft/utils/loftq_utils.py
--- a/src/peft/utils/loftq_utils.py
+++ b/src/peft/utils/loftq_utils.py
@@ -155,7 +155,7 @@
weight = torch.zeros((qweight.shape[0], 8 // self.num_bits), dtype=torch.float32, device=device)
for i in range(8 // self.num_bits):
lookup_table_idx = qweight.to(torch.long) % 2**self.num_bits # get the most right 2 bits
- lookup_table_idx = lookup_table_idx.to(torch.int)
+ lookup_table_idx = lookup_table_idx.to(torch.long)
weight[:, i] = self.norm_lookup_table[lookup_table_idx].squeeze()
qweight = qweight >> self.num_bits # right shift 2 bits of the original data
|
{"golden_diff": "diff --git a/src/peft/utils/loftq_utils.py b/src/peft/utils/loftq_utils.py\n--- a/src/peft/utils/loftq_utils.py\n+++ b/src/peft/utils/loftq_utils.py\n@@ -155,7 +155,7 @@\n weight = torch.zeros((qweight.shape[0], 8 // self.num_bits), dtype=torch.float32, device=device)\n for i in range(8 // self.num_bits):\n lookup_table_idx = qweight.to(torch.long) % 2**self.num_bits # get the most right 2 bits\n- lookup_table_idx = lookup_table_idx.to(torch.int)\n+ lookup_table_idx = lookup_table_idx.to(torch.long)\n weight[:, i] = self.norm_lookup_table[lookup_table_idx].squeeze()\n qweight = qweight >> self.num_bits # right shift 2 bits of the original data\n", "issue": "An error occurs when using LoftQ. IndexError\nA problem occurred while applying LoftQ to T5.\r\n`IndexError: tensors used as indices must be long, byte or bool tensors`\r\nThe problem was in [this line](https://github.com/huggingface/peft/blob/main/src/peft/utils/loftq_utils.py#L158), I replaced int with long and it worked fine.\r\n`lookup_table_idx = lookup_table_idx.to(torch.long)`\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2023-present the HuggingFace Inc. team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Reference code: https://github.com/yxli2123/LoftQ/blob/main/utils.py\n# Reference paper: https://arxiv.org/abs/2310.08659\n\nimport logging\nfrom typing import Union\n\nimport torch\n\nfrom peft.import_utils import is_bnb_4bit_available, is_bnb_available\n\n\nif is_bnb_available():\n import bitsandbytes as bnb\n\n\nclass NFQuantizer:\n def __init__(self, num_bits=2, device=\"cuda\", method=\"normal\", block_size=64, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.num_bits = num_bits\n self.device = device\n self.method = method\n self.block_size = block_size\n if self.method == \"normal\":\n self.norm_lookup_table = self.create_normal_map(num_bits=self.num_bits)\n self.norm_lookup_table = self.norm_lookup_table.to(device)\n elif self.method == \"uniform\":\n self.norm_lookup_table = self.create_uniform_map(num_bits=self.num_bits)\n self.norm_lookup_table = self.norm_lookup_table.to(device)\n else:\n raise NotImplementedError(\"Other quantization methods not supported yet.\")\n\n @staticmethod\n def create_uniform_map(symmetric=False, num_bits=4):\n if symmetric:\n # print(\"symmetric uniform quantization\")\n negative = torch.linspace(-1, 0, 2 ** (num_bits - 1))\n positive = torch.linspace(0, 1, 2 ** (num_bits - 1))\n table = torch.cat([negative, positive[1:]])\n else:\n # print(\"asymmetric uniform quantization\")\n table = torch.linspace(-1, 1, 2**num_bits)\n return table\n\n @staticmethod\n def create_normal_map(offset=0.9677083, symmetric=False, num_bits=2):\n try:\n from scipy.stats import norm\n except ImportError:\n raise ImportError(\"The required package 'scipy' is not installed. 
Please install it to continue.\")\n\n variations = 2**num_bits\n if symmetric:\n v = norm.ppf(torch.linspace(1 - offset, offset, variations + 1)).tolist()\n values = []\n for index in range(len(v) - 1):\n values.append(0.5 * v[index] + 0.5 * v[index + 1])\n v = values\n else:\n # one more positive value, this is an asymmetric type\n v1 = norm.ppf(torch.linspace(offset, 0.5, variations // 2 + 1)[:-1]).tolist()\n v2 = [0]\n v3 = (-norm.ppf(torch.linspace(offset, 0.5, variations // 2)[:-1])).tolist()\n v = v1 + v2 + v3\n\n values = torch.Tensor(v)\n values = values.sort().values\n values /= values.max()\n return values\n\n def quantize_tensor(self, weight):\n max_abs = torch.abs(weight).max()\n weight_normed = weight / max_abs\n\n weight_normed_expanded = weight_normed.unsqueeze(-1)\n\n # Reshape L to have the same number of dimensions as X_expanded\n L_reshaped = torch.tensor(self.norm_lookup_table).reshape(1, -1)\n\n # Calculate the absolute difference between X_expanded and L_reshaped\n abs_diff = torch.abs(weight_normed_expanded - L_reshaped)\n\n # Find the index of the minimum absolute difference for each element\n qweight = torch.argmin(abs_diff, dim=-1)\n return qweight, max_abs\n\n def dequantize_tensor(self, qweight, max_abs):\n qweight_flatten = qweight.flatten()\n\n weight_normed = self.norm_lookup_table[qweight_flatten]\n weight = weight_normed * max_abs\n\n weight = weight.reshape(qweight.shape)\n\n return weight\n\n def quantize_block(self, weight):\n if len(weight.shape) != 2:\n raise ValueError(f\"Only support 2D matrix, but your input has {len(weight.shape)} dimensions.\")\n if weight.shape[0] * weight.shape[1] % self.block_size != 0:\n raise ValueError(\n f\"Weight with shape ({weight.shape[0]} x {weight.shape[1]}) \"\n f\"is not dividable by block size {self.block_size}.\"\n )\n\n M, N = weight.shape\n device = weight.device\n\n # Quantization\n weight_flatten = weight.flatten() # (M*N, )\n weight_block = weight_flatten.reshape(-1, self.block_size) # (L, B), L = M * N / B\n if self.method == \"normal\":\n weight_max = weight_block.abs().max(dim=-1)[0] # (L, 1)\n elif self.method == \"uniform\":\n weight_max = weight_block.mean(dim=-1) + 2.5 * weight_block.std(dim=-1)\n else:\n raise NotImplementedError(\"Method not supported yet.\")\n weight_max = weight_max.unsqueeze(-1)\n weight_divabs = weight_block / weight_max # (L, B)\n weight_divabs = weight_divabs.unsqueeze(-1) # (L, B, 1)\n L_reshaped = self.norm_lookup_table.reshape(1, -1) # (1, 2**K)\n\n abs_diff = torch.abs(weight_divabs - L_reshaped) # (L, B, 2**K)\n qweight = torch.argmin(abs_diff, dim=-1) # (L, B)\n\n # Pack multiple k-bit into uint8\n qweight = qweight.reshape(-1, 8 // self.num_bits)\n qweight_pack = torch.zeros((M * N // 8 * self.num_bits, 1), dtype=torch.uint8, device=device)\n\n # data format example:\n # [1, 0, 3, 2] or [01, 00, 11, 10] -> [10110001], LIFO\n for i in range(8 // self.num_bits):\n qweight[:, i] = qweight[:, i] << i * self.num_bits\n qweight_pack[:, 0] |= qweight[:, i]\n\n return qweight_pack, weight_max, weight.shape\n\n def dequantize_block(self, qweight, weight_max, weight_shape):\n # unpack weight\n device = qweight.device\n weight = torch.zeros((qweight.shape[0], 8 // self.num_bits), dtype=torch.float32, device=device)\n for i in range(8 // self.num_bits):\n lookup_table_idx = qweight.to(torch.long) % 2**self.num_bits # get the most right 2 bits\n lookup_table_idx = lookup_table_idx.to(torch.int)\n weight[:, i] = self.norm_lookup_table[lookup_table_idx].squeeze()\n qweight = qweight >> 
self.num_bits # right shift 2 bits of the original data\n\n weight_block = weight.reshape(-1, self.block_size)\n weight = weight_block * weight_max\n weight = weight.reshape(weight_shape)\n\n return weight\n\n\ndef _low_rank_decomposition(weight, reduced_rank=32):\n \"\"\"\n :param weight: The matrix to decompose, of shape (H, W) :param reduced_rank: the final rank :return:\n \"\"\"\n matrix_dimension = len(weight.size())\n if matrix_dimension != 2:\n raise ValueError(f\"Only support 2D matrix, but your input has {matrix_dimension} dimensions.\")\n\n # Use SVD to decompose a matrix, default full_matrices is False to save parameters\n U, S, Vh = torch.linalg.svd(weight, full_matrices=False)\n\n L = U @ (torch.sqrt(torch.diag(S)[:, 0:reduced_rank]))\n R = torch.sqrt(torch.diag(S)[0:reduced_rank, :]) @ Vh\n\n return {\"L\": L, \"R\": R, \"U\": U, \"S\": S, \"Vh\": Vh, \"reduced_rank\": reduced_rank}\n\n\[email protected]_grad()\ndef loftq_init(weight: Union[torch.Tensor, torch.nn.Parameter], num_bits: int, reduced_rank: int, num_iter=1):\n if num_bits not in [2, 4, 8]:\n raise ValueError(\"Only support 2, 4, 8 bits quantization\")\n if num_iter <= 0:\n raise ValueError(\"Number of iterations must be greater than 0\")\n\n out_feature, in_feature = weight.size()\n device = weight.device\n dtype = weight.dtype\n\n logging.info(\n f\"Weight: ({out_feature}, {in_feature}) | Rank: {reduced_rank} \"\n f\"| Num Iter: {num_iter} | Num Bits: {num_bits}\"\n )\n if not is_bnb_4bit_available() or num_bits in [2, 8]:\n quantizer = NFQuantizer(num_bits=num_bits, device=device, method=\"normal\", block_size=64)\n compute_device = device\n else:\n compute_device = \"cuda\"\n\n weight = weight.to(device=compute_device, dtype=torch.float32)\n res = weight.clone()\n for i in range(num_iter):\n torch.cuda.empty_cache()\n # Quantization\n if num_bits == 4 and is_bnb_4bit_available():\n qweight = bnb.nn.Params4bit(\n res.to(\"cpu\"), requires_grad=False, compress_statistics=False, quant_type=\"nf4\"\n ).to(compute_device)\n dequantized_weight = bnb.functional.dequantize_4bit(qweight.data, qweight.quant_state)\n else:\n quantized_weight, max_abs, shape = quantizer.quantize_block(res)\n dequantized_weight = quantizer.dequantize_block(quantized_weight, max_abs, shape)\n\n res = weight - dequantized_weight\n\n # Decompose the residual by SVD\n output = _low_rank_decomposition(res, reduced_rank=reduced_rank)\n L, R, reduced_rank = output[\"L\"], output[\"R\"], output[\"reduced_rank\"]\n res = weight - torch.mm(L, R)\n\n lora_A, lora_B = R, L\n\n return dequantized_weight.to(device=device, dtype=dtype), lora_A, lora_B\n", "path": "src/peft/utils/loftq_utils.py"}]}
| 3,665 | 203 |
gh_patches_debug_10163 | rasdani/github-patches | git_diff | pytorch__pytorch-2200 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DataParallel tests are currently broken
https://github.com/pytorch/pytorch/pull/2121/commits/d69669efcfe4333c223f53249185c2e22f76ed73 has broken DataParallel tests. Now that device_ids are explicitly sent to parallel_apply, this assert https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/parallel_apply.py#L30 gets triggered if inputs are not big enough to be on all devices (e.g. batch size of 20 on 8 GPUs gets chunked into 6*3+2, so that 8-th GPU is idle, and assert gets triggered).
</issue>
<code>
[start of torch/nn/parallel/data_parallel.py]
1 import torch
2 from ..modules import Module
3 from .scatter_gather import scatter_kwargs, gather
4 from .replicate import replicate
5 from .parallel_apply import parallel_apply
6
7
8 class DataParallel(Module):
9 """Implements data parallelism at the module level.
10
11 This container parallelizes the application of the given module by
12 splitting the input across the specified devices by chunking in the batch
13 dimension. In the forward pass, the module is replicated on each device,
14 and each replica handles a portion of the input. During the backwards
15 pass, gradients from each replica are summed into the original module.
16
17 The batch size should be larger than the number of GPUs used. It should
18 also be an integer multiple of the number of GPUs so that each chunk is the
19 same size (so that each GPU processes the same number of samples).
20
21 See also: :ref:`cuda-nn-dataparallel-instead`
22
23 Arbitrary positional and keyword inputs are allowed to be passed into
24 DataParallel EXCEPT Tensors. All variables will be scattered on dim
25 specified (default 0). Primitive types will be broadcasted, but all
26 other types will be a shallow copy and can be corrupted if written to in
27 the model's forward pass.
28
29 Args:
30 module: module to be parallelized
31 device_ids: CUDA devices (default: all devices)
32 output_device: device location of output (default: device_ids[0])
33
34 Example::
35
36 >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
37 >>> output = net(input_var)
38 """
39
40 # TODO: update notes/cuda.rst when this class handles 8+ GPUs well
41
42 def __init__(self, module, device_ids=None, output_device=None, dim=0):
43 super(DataParallel, self).__init__()
44 if device_ids is None:
45 device_ids = list(range(torch.cuda.device_count()))
46 if output_device is None:
47 output_device = device_ids[0]
48 self.dim = dim
49 self.module = module
50 self.device_ids = device_ids
51 self.output_device = output_device
52 if len(self.device_ids) == 1:
53 self.module.cuda(device_ids[0])
54
55 def forward(self, *inputs, **kwargs):
56 inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
57 if len(self.device_ids) == 1:
58 return self.module(*inputs[0], **kwargs[0])
59 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
60 outputs = self.parallel_apply(replicas, inputs, kwargs)
61 return self.gather(outputs, self.output_device)
62
63 def replicate(self, module, device_ids):
64 return replicate(module, device_ids)
65
66 def scatter(self, inputs, kwargs, device_ids):
67 return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)
68
69 def parallel_apply(self, replicas, inputs, kwargs):
70 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
71
72 def gather(self, outputs, output_device):
73 return gather(outputs, output_device, dim=self.dim)
74
75
76 def data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):
77 """Evaluates module(input) in parallel across the GPUs given in device_ids.
78
79 This is the functional version of the DataParallel module.
80
81 Args:
82 module: the module to evaluate in parallel
83 inputs: inputs to the module
84 device_ids: GPU ids on which to replicate module
85 output_device: GPU location of the output Use -1 to indicate the CPU.
86 (default: device_ids[0])
87 Returns:
88 a Variable containing the result of module(input) located on
89 output_device
90 """
91 if not isinstance(inputs, tuple):
92 inputs = (inputs,)
93
94 if device_ids is None:
95 device_ids = list(range(torch.cuda.device_count()))
96
97 if output_device is None:
98 output_device = device_ids[0]
99
100 inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)
101 if len(device_ids) == 1:
102 return module(*inputs[0], **module_kwargs[0])
103 replicas = replicate(module, device_ids[:len(inputs)])
104 outputs = parallel_apply(replicas, inputs, module_kwargs, device_ids)
105 return gather(outputs, output_device, dim)
106
[end of torch/nn/parallel/data_parallel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py
--- a/torch/nn/parallel/data_parallel.py
+++ b/torch/nn/parallel/data_parallel.py
@@ -100,6 +100,7 @@
inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)
if len(device_ids) == 1:
return module(*inputs[0], **module_kwargs[0])
- replicas = replicate(module, device_ids[:len(inputs)])
- outputs = parallel_apply(replicas, inputs, module_kwargs, device_ids)
+ used_device_ids = device_ids[:len(inputs)]
+ replicas = replicate(module, used_device_ids)
+ outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)
return gather(outputs, output_device, dim)
|
{"golden_diff": "diff --git a/torch/nn/parallel/data_parallel.py b/torch/nn/parallel/data_parallel.py\n--- a/torch/nn/parallel/data_parallel.py\n+++ b/torch/nn/parallel/data_parallel.py\n@@ -100,6 +100,7 @@\n inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)\n if len(device_ids) == 1:\n return module(*inputs[0], **module_kwargs[0])\n- replicas = replicate(module, device_ids[:len(inputs)])\n- outputs = parallel_apply(replicas, inputs, module_kwargs, device_ids)\n+ used_device_ids = device_ids[:len(inputs)]\n+ replicas = replicate(module, used_device_ids)\n+ outputs = parallel_apply(replicas, inputs, module_kwargs, used_device_ids)\n return gather(outputs, output_device, dim)\n", "issue": "DataParallel tests are currently broken \nhttps://github.com/pytorch/pytorch/pull/2121/commits/d69669efcfe4333c223f53249185c2e22f76ed73 has broken DataParallel tests. Now that device_ids are explicitly sent to parallel_apply, this assert https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/parallel_apply.py#L30 gets triggered if inputs are not big enough to be on all devices (e.g. batch size of 20 on 8 GPUs gets chunked into 6*3+2, so that 8-th GPU is idle, and assert gets triggered). \r\n\n", "before_files": [{"content": "import torch\nfrom ..modules import Module\nfrom .scatter_gather import scatter_kwargs, gather\nfrom .replicate import replicate\nfrom .parallel_apply import parallel_apply\n\n\nclass DataParallel(Module):\n \"\"\"Implements data parallelism at the module level.\n\n This container parallelizes the application of the given module by\n splitting the input across the specified devices by chunking in the batch\n dimension. In the forward pass, the module is replicated on each device,\n and each replica handles a portion of the input. During the backwards\n pass, gradients from each replica are summed into the original module.\n\n The batch size should be larger than the number of GPUs used. It should\n also be an integer multiple of the number of GPUs so that each chunk is the\n same size (so that each GPU processes the same number of samples).\n\n See also: :ref:`cuda-nn-dataparallel-instead`\n\n Arbitrary positional and keyword inputs are allowed to be passed into\n DataParallel EXCEPT Tensors. All variables will be scattered on dim\n specified (default 0). 
Primitive types will be broadcasted, but all\n other types will be a shallow copy and can be corrupted if written to in\n the model's forward pass.\n\n Args:\n module: module to be parallelized\n device_ids: CUDA devices (default: all devices)\n output_device: device location of output (default: device_ids[0])\n\n Example::\n\n >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])\n >>> output = net(input_var)\n \"\"\"\n\n # TODO: update notes/cuda.rst when this class handles 8+ GPUs well\n\n def __init__(self, module, device_ids=None, output_device=None, dim=0):\n super(DataParallel, self).__init__()\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n if output_device is None:\n output_device = device_ids[0]\n self.dim = dim\n self.module = module\n self.device_ids = device_ids\n self.output_device = output_device\n if len(self.device_ids) == 1:\n self.module.cuda(device_ids[0])\n\n def forward(self, *inputs, **kwargs):\n inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)\n if len(self.device_ids) == 1:\n return self.module(*inputs[0], **kwargs[0])\n replicas = self.replicate(self.module, self.device_ids[:len(inputs)])\n outputs = self.parallel_apply(replicas, inputs, kwargs)\n return self.gather(outputs, self.output_device)\n\n def replicate(self, module, device_ids):\n return replicate(module, device_ids)\n\n def scatter(self, inputs, kwargs, device_ids):\n return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim)\n\n def parallel_apply(self, replicas, inputs, kwargs):\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\n\n def gather(self, outputs, output_device):\n return gather(outputs, output_device, dim=self.dim)\n\n\ndef data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None):\n \"\"\"Evaluates module(input) in parallel across the GPUs given in device_ids.\n\n This is the functional version of the DataParallel module.\n\n Args:\n module: the module to evaluate in parallel\n inputs: inputs to the module\n device_ids: GPU ids on which to replicate module\n output_device: GPU location of the output Use -1 to indicate the CPU.\n (default: device_ids[0])\n Returns:\n a Variable containing the result of module(input) located on\n output_device\n \"\"\"\n if not isinstance(inputs, tuple):\n inputs = (inputs,)\n\n if device_ids is None:\n device_ids = list(range(torch.cuda.device_count()))\n\n if output_device is None:\n output_device = device_ids[0]\n\n inputs, module_kwargs = scatter_kwargs(inputs, module_kwargs, device_ids, dim)\n if len(device_ids) == 1:\n return module(*inputs[0], **module_kwargs[0])\n replicas = replicate(module, device_ids[:len(inputs)])\n outputs = parallel_apply(replicas, inputs, module_kwargs, device_ids)\n return gather(outputs, output_device, dim)\n", "path": "torch/nn/parallel/data_parallel.py"}]}
| 1,854 | 185 |
gh_patches_debug_34564 | rasdani/github-patches | git_diff | pyinstaller__pyinstaller-7259 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fixes for use of pyinstaller with Django 4.x and custom management commands.
PROBLEM:
This feature aims to solve the problem of the custom app level management commands being missed out from hidden imports alongside issues with imports of apps listed within INSTALLED_APPS failing due to erroneous execution of 'eval_script' function. Specifically when the hidden imports of the INSTALLED_APPS are evaluated the logging outputs generated by 'collect_submodules' when called in django_import_finder.py are captured in the STDOUT regardless of the --log-level. Also any additional management commands provided by one of the INSTALLED_APPS are ignored as the 'get_commands' function has a hardcoded referenced to Django 1.8 command set. Django's currently implementation of command collection will not complain of missing commands at runtime thereby rendering the patch of this function that is currently implemented irrelevant.
SOLUTION:
The solution to this issue is to remove several redundant parts of the code alongside adding additional overrides for decluttering STDOUT.
The following is a list of measures taken to resolve the problem
- remove the monkey patching of Django's 'get_commands' method in pyi_rth_django.py
- modify the collect static code to have a boolean input parameter 'log' which when the relevant calls to logging within this function are wrapped in a conditional will serve to prevent logs being inappropriately raised.
</issue>
<code>
[start of PyInstaller/hooks/rthooks/pyi_rth_django.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2022, PyInstaller Development Team.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #
9 # SPDX-License-Identifier: Apache-2.0
10 #-----------------------------------------------------------------------------
11
12 # This Django rthook was tested with Django 1.8.3.
13
14 import django.core.management
15 import django.utils.autoreload
16
17
18 def _get_commands():
19 # Django groupss commands by app. This returns static dict() as it is for django 1.8 and the default project.
20 commands = {
21 'changepassword': 'django.contrib.auth',
22 'check': 'django.core',
23 'clearsessions': 'django.contrib.sessions',
24 'collectstatic': 'django.contrib.staticfiles',
25 'compilemessages': 'django.core',
26 'createcachetable': 'django.core',
27 'createsuperuser': 'django.contrib.auth',
28 'dbshell': 'django.core',
29 'diffsettings': 'django.core',
30 'dumpdata': 'django.core',
31 'findstatic': 'django.contrib.staticfiles',
32 'flush': 'django.core',
33 'inspectdb': 'django.core',
34 'loaddata': 'django.core',
35 'makemessages': 'django.core',
36 'makemigrations': 'django.core',
37 'migrate': 'django.core',
38 'runfcgi': 'django.core',
39 'runserver': 'django.core',
40 'shell': 'django.core',
41 'showmigrations': 'django.core',
42 'sql': 'django.core',
43 'sqlall': 'django.core',
44 'sqlclear': 'django.core',
45 'sqlcustom': 'django.core',
46 'sqldropindexes': 'django.core',
47 'sqlflush': 'django.core',
48 'sqlindexes': 'django.core',
49 'sqlmigrate': 'django.core',
50 'sqlsequencereset': 'django.core',
51 'squashmigrations': 'django.core',
52 'startapp': 'django.core',
53 'startproject': 'django.core',
54 'syncdb': 'django.core',
55 'test': 'django.core',
56 'testserver': 'django.core',
57 'validate': 'django.core'
58 }
59 return commands
60
61
62 _old_restart_with_reloader = django.utils.autoreload.restart_with_reloader
63
64
65 def _restart_with_reloader(*args):
66 import sys
67 a0 = sys.argv.pop(0)
68 try:
69 return _old_restart_with_reloader(*args)
70 finally:
71 sys.argv.insert(0, a0)
72
73
74 # Override get_commands() function otherwise the app will complain that there are no commands.
75 django.core.management.get_commands = _get_commands
76 # Override restart_with_reloader() function, otherwise the app might complain that some commands do not exist;
77 # e.g., runserver.
78 django.utils.autoreload.restart_with_reloader = _restart_with_reloader
79
[end of PyInstaller/hooks/rthooks/pyi_rth_django.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/PyInstaller/hooks/rthooks/pyi_rth_django.py b/PyInstaller/hooks/rthooks/pyi_rth_django.py
--- a/PyInstaller/hooks/rthooks/pyi_rth_django.py
+++ b/PyInstaller/hooks/rthooks/pyi_rth_django.py
@@ -11,54 +11,8 @@
# This Django rthook was tested with Django 1.8.3.
-import django.core.management
import django.utils.autoreload
-
-def _get_commands():
- # Django groupss commands by app. This returns static dict() as it is for django 1.8 and the default project.
- commands = {
- 'changepassword': 'django.contrib.auth',
- 'check': 'django.core',
- 'clearsessions': 'django.contrib.sessions',
- 'collectstatic': 'django.contrib.staticfiles',
- 'compilemessages': 'django.core',
- 'createcachetable': 'django.core',
- 'createsuperuser': 'django.contrib.auth',
- 'dbshell': 'django.core',
- 'diffsettings': 'django.core',
- 'dumpdata': 'django.core',
- 'findstatic': 'django.contrib.staticfiles',
- 'flush': 'django.core',
- 'inspectdb': 'django.core',
- 'loaddata': 'django.core',
- 'makemessages': 'django.core',
- 'makemigrations': 'django.core',
- 'migrate': 'django.core',
- 'runfcgi': 'django.core',
- 'runserver': 'django.core',
- 'shell': 'django.core',
- 'showmigrations': 'django.core',
- 'sql': 'django.core',
- 'sqlall': 'django.core',
- 'sqlclear': 'django.core',
- 'sqlcustom': 'django.core',
- 'sqldropindexes': 'django.core',
- 'sqlflush': 'django.core',
- 'sqlindexes': 'django.core',
- 'sqlmigrate': 'django.core',
- 'sqlsequencereset': 'django.core',
- 'squashmigrations': 'django.core',
- 'startapp': 'django.core',
- 'startproject': 'django.core',
- 'syncdb': 'django.core',
- 'test': 'django.core',
- 'testserver': 'django.core',
- 'validate': 'django.core'
- }
- return commands
-
-
_old_restart_with_reloader = django.utils.autoreload.restart_with_reloader
@@ -71,8 +25,6 @@
sys.argv.insert(0, a0)
-# Override get_commands() function otherwise the app will complain that there are no commands.
-django.core.management.get_commands = _get_commands
# Override restart_with_reloader() function, otherwise the app might complain that some commands do not exist;
# e.g., runserver.
django.utils.autoreload.restart_with_reloader = _restart_with_reloader
|
{"golden_diff": "diff --git a/PyInstaller/hooks/rthooks/pyi_rth_django.py b/PyInstaller/hooks/rthooks/pyi_rth_django.py\n--- a/PyInstaller/hooks/rthooks/pyi_rth_django.py\n+++ b/PyInstaller/hooks/rthooks/pyi_rth_django.py\n@@ -11,54 +11,8 @@\n \n # This Django rthook was tested with Django 1.8.3.\n \n-import django.core.management\n import django.utils.autoreload\n \n-\n-def _get_commands():\n- # Django groupss commands by app. This returns static dict() as it is for django 1.8 and the default project.\n- commands = {\n- 'changepassword': 'django.contrib.auth',\n- 'check': 'django.core',\n- 'clearsessions': 'django.contrib.sessions',\n- 'collectstatic': 'django.contrib.staticfiles',\n- 'compilemessages': 'django.core',\n- 'createcachetable': 'django.core',\n- 'createsuperuser': 'django.contrib.auth',\n- 'dbshell': 'django.core',\n- 'diffsettings': 'django.core',\n- 'dumpdata': 'django.core',\n- 'findstatic': 'django.contrib.staticfiles',\n- 'flush': 'django.core',\n- 'inspectdb': 'django.core',\n- 'loaddata': 'django.core',\n- 'makemessages': 'django.core',\n- 'makemigrations': 'django.core',\n- 'migrate': 'django.core',\n- 'runfcgi': 'django.core',\n- 'runserver': 'django.core',\n- 'shell': 'django.core',\n- 'showmigrations': 'django.core',\n- 'sql': 'django.core',\n- 'sqlall': 'django.core',\n- 'sqlclear': 'django.core',\n- 'sqlcustom': 'django.core',\n- 'sqldropindexes': 'django.core',\n- 'sqlflush': 'django.core',\n- 'sqlindexes': 'django.core',\n- 'sqlmigrate': 'django.core',\n- 'sqlsequencereset': 'django.core',\n- 'squashmigrations': 'django.core',\n- 'startapp': 'django.core',\n- 'startproject': 'django.core',\n- 'syncdb': 'django.core',\n- 'test': 'django.core',\n- 'testserver': 'django.core',\n- 'validate': 'django.core'\n- }\n- return commands\n-\n-\n _old_restart_with_reloader = django.utils.autoreload.restart_with_reloader\n \n \n@@ -71,8 +25,6 @@\n sys.argv.insert(0, a0)\n \n \n-# Override get_commands() function otherwise the app will complain that there are no commands.\n-django.core.management.get_commands = _get_commands\n # Override restart_with_reloader() function, otherwise the app might complain that some commands do not exist;\n # e.g., runserver.\n django.utils.autoreload.restart_with_reloader = _restart_with_reloader\n", "issue": "Fixes for use of pyinstaller with Django 4.x and custom management commands.\nPROBLEM:\r\nThis feature aims to solve the problem of the custom app level management commands being missed out from hidden imports alongside issues with imports of apps listed within INSTALLED_APPS failing due to erroneous execution of 'eval_script' function. Specifically when the hidden imports of the INSTALLED_APPS are evaluated the logging outputs generated by 'collect_submodules' when called in django_import_finder.py are captured in the STDOUT regardless of the --log-level. Also any additional management commands provided by one of the INSTALLED_APPS are ignored as the 'get_commands' function has a hardcoded referenced to Django 1.8 command set. Django's currently implementation of command collection will not complain of missing commands at runtime thereby rendering the patch of this function that is currently implemented irrelevant.\r\n\r\nSOLUTION:\r\nThe solution to this issue is to remove several redundant parts of the code alongside adding additional overrides for decluttering STDOUT. 
\r\n\r\nThe following is a list of measures taken to resolve the problem\r\n- remove the monkey patching of Django's 'get_commands' method in pyi_rth_django.py\r\n- modify the collect static code to have a boolean input parameter 'log' which when the relevant calls to logging within this function are wrapped in a conditional will serve to prevent logs being inappropriately raised.\r\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2022, PyInstaller Development Team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#\n# SPDX-License-Identifier: Apache-2.0\n#-----------------------------------------------------------------------------\n\n# This Django rthook was tested with Django 1.8.3.\n\nimport django.core.management\nimport django.utils.autoreload\n\n\ndef _get_commands():\n # Django groupss commands by app. This returns static dict() as it is for django 1.8 and the default project.\n commands = {\n 'changepassword': 'django.contrib.auth',\n 'check': 'django.core',\n 'clearsessions': 'django.contrib.sessions',\n 'collectstatic': 'django.contrib.staticfiles',\n 'compilemessages': 'django.core',\n 'createcachetable': 'django.core',\n 'createsuperuser': 'django.contrib.auth',\n 'dbshell': 'django.core',\n 'diffsettings': 'django.core',\n 'dumpdata': 'django.core',\n 'findstatic': 'django.contrib.staticfiles',\n 'flush': 'django.core',\n 'inspectdb': 'django.core',\n 'loaddata': 'django.core',\n 'makemessages': 'django.core',\n 'makemigrations': 'django.core',\n 'migrate': 'django.core',\n 'runfcgi': 'django.core',\n 'runserver': 'django.core',\n 'shell': 'django.core',\n 'showmigrations': 'django.core',\n 'sql': 'django.core',\n 'sqlall': 'django.core',\n 'sqlclear': 'django.core',\n 'sqlcustom': 'django.core',\n 'sqldropindexes': 'django.core',\n 'sqlflush': 'django.core',\n 'sqlindexes': 'django.core',\n 'sqlmigrate': 'django.core',\n 'sqlsequencereset': 'django.core',\n 'squashmigrations': 'django.core',\n 'startapp': 'django.core',\n 'startproject': 'django.core',\n 'syncdb': 'django.core',\n 'test': 'django.core',\n 'testserver': 'django.core',\n 'validate': 'django.core'\n }\n return commands\n\n\n_old_restart_with_reloader = django.utils.autoreload.restart_with_reloader\n\n\ndef _restart_with_reloader(*args):\n import sys\n a0 = sys.argv.pop(0)\n try:\n return _old_restart_with_reloader(*args)\n finally:\n sys.argv.insert(0, a0)\n\n\n# Override get_commands() function otherwise the app will complain that there are no commands.\ndjango.core.management.get_commands = _get_commands\n# Override restart_with_reloader() function, otherwise the app might complain that some commands do not exist;\n# e.g., runserver.\ndjango.utils.autoreload.restart_with_reloader = _restart_with_reloader\n", "path": "PyInstaller/hooks/rthooks/pyi_rth_django.py"}]}
| 1,633 | 663 |
gh_patches_debug_24655 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-3217 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make entry_points behave the same across Python versions
The recently introduced `entry_points` function does not behave the same across Python versions and it is not possible to get all entry points in Python 3.8 and 3.9.
</issue>
<code>
[start of opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from sys import version_info
16
17 # FIXME remove this when support for 3.7 is dropped.
18 if version_info.minor == 7:
19 # pylint: disable=import-error
20 from importlib_metadata import entry_points, version # type: ignore
21
22 # FIXME remove this file when support for 3.9 is dropped.
23 elif version_info.minor in (8, 9):
24 # pylint: disable=import-error
25 from importlib.metadata import (
26 entry_points as importlib_metadata_entry_points,
27 )
28 from importlib.metadata import version
29
30 def entry_points(group: str, name: str): # type: ignore
31 for entry_point in importlib_metadata_entry_points()[group]:
32 if entry_point.name == name:
33 yield entry_point
34
35 else:
36 from importlib.metadata import entry_points, version
37
38 __all__ = ["entry_points", "version"]
39
[end of opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py b/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py
--- a/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py
+++ b/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py
@@ -12,27 +12,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from sys import version_info
+# FIXME: Use importlib.metadata when support for 3.11 is dropped if the rest of
+# the supported versions at that time have the same API.
+from importlib_metadata import ( # type: ignore
+ EntryPoint,
+ EntryPoints,
+ entry_points,
+ version,
+)
-# FIXME remove this when support for 3.7 is dropped.
-if version_info.minor == 7:
- # pylint: disable=import-error
- from importlib_metadata import entry_points, version # type: ignore
+# The importlib-metadata library has introduced breaking changes before to its
+# API, this module is kept just to act as a layer between the
+# importlib-metadata library and our project if in any case it is necessary to
+# do so.
-# FIXME remove this file when support for 3.9 is dropped.
-elif version_info.minor in (8, 9):
- # pylint: disable=import-error
- from importlib.metadata import (
- entry_points as importlib_metadata_entry_points,
- )
- from importlib.metadata import version
-
- def entry_points(group: str, name: str): # type: ignore
- for entry_point in importlib_metadata_entry_points()[group]:
- if entry_point.name == name:
- yield entry_point
-
-else:
- from importlib.metadata import entry_points, version
-
-__all__ = ["entry_points", "version"]
+__all__ = ["entry_points", "version", "EntryPoint", "EntryPoints"]
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py b/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py\n--- a/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py\n+++ b/opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py\n@@ -12,27 +12,18 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-from sys import version_info\n+# FIXME: Use importlib.metadata when support for 3.11 is dropped if the rest of\n+# the supported versions at that time have the same API.\n+from importlib_metadata import ( # type: ignore\n+ EntryPoint,\n+ EntryPoints,\n+ entry_points,\n+ version,\n+)\n \n-# FIXME remove this when support for 3.7 is dropped.\n-if version_info.minor == 7:\n- # pylint: disable=import-error\n- from importlib_metadata import entry_points, version # type: ignore\n+# The importlib-metadata library has introduced breaking changes before to its\n+# API, this module is kept just to act as a layer between the\n+# importlib-metadata library and our project if in any case it is necessary to\n+# do so.\n \n-# FIXME remove this file when support for 3.9 is dropped.\n-elif version_info.minor in (8, 9):\n- # pylint: disable=import-error\n- from importlib.metadata import (\n- entry_points as importlib_metadata_entry_points,\n- )\n- from importlib.metadata import version\n-\n- def entry_points(group: str, name: str): # type: ignore\n- for entry_point in importlib_metadata_entry_points()[group]:\n- if entry_point.name == name:\n- yield entry_point\n-\n-else:\n- from importlib.metadata import entry_points, version\n-\n-__all__ = [\"entry_points\", \"version\"]\n+__all__ = [\"entry_points\", \"version\", \"EntryPoint\", \"EntryPoints\"]\n", "issue": "Make entry_points behave the same across Python versions\nThe recently introduced `entry_points` function does not behave the same across Python versions and it is not possible to get all entry points in Python 3.8 and 3.9.\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom sys import version_info\n\n# FIXME remove this when support for 3.7 is dropped.\nif version_info.minor == 7:\n # pylint: disable=import-error\n from importlib_metadata import entry_points, version # type: ignore\n\n# FIXME remove this file when support for 3.9 is dropped.\nelif version_info.minor in (8, 9):\n # pylint: disable=import-error\n from importlib.metadata import (\n entry_points as importlib_metadata_entry_points,\n )\n from importlib.metadata import version\n\n def entry_points(group: str, name: str): # type: ignore\n for entry_point in importlib_metadata_entry_points()[group]:\n if entry_point.name == name:\n yield entry_point\n\nelse:\n from importlib.metadata import entry_points, version\n\n__all__ = [\"entry_points\", \"version\"]\n", "path": "opentelemetry-api/src/opentelemetry/util/_importlib_metadata.py"}]}
| 989 | 445 |
gh_patches_debug_8511 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-1067 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
upload command on windows: backslashes on server, invalid dataset
A user created a valid dataset on a windows machine with the `webknossos convert` command, then called `webknossos upload` with a valid token. The upload went through, but the directory structure got lost: the files on the server had backslashes in the paths, like `'color\2-2-1\z0\y7\x1.wkw'`. Instead, when sending files to upload, the client should always replace the client’s path separator by `/`.
</issue>
<code>
[start of webknossos/webknossos/client/_upload_dataset.py]
1 import os
2 import warnings
3 from functools import lru_cache
4 from pathlib import Path
5 from tempfile import TemporaryDirectory
6 from time import gmtime, strftime
7 from typing import Iterator, List, NamedTuple, Optional, Tuple
8 from uuid import uuid4
9
10 import httpx
11
12 from ..dataset import Dataset, Layer, RemoteDataset
13 from ..utils import get_rich_progress
14 from ._resumable import Resumable
15 from .api_client.models import (
16 ApiDatasetUploadInformation,
17 ApiLinkedLayerIdentifier,
18 ApiReserveDatasetUploadInformation,
19 )
20 from .context import _get_context, _WebknossosContext
21
22 DEFAULT_SIMULTANEOUS_UPLOADS = 5
23 MAXIMUM_RETRY_COUNT = 4
24
25
26 class LayerToLink(NamedTuple):
27 dataset_name: str
28 layer_name: str
29 new_layer_name: Optional[str] = None
30 organization_id: Optional[str] = (
31 None # defaults to the user's organization before uploading
32 )
33
34 @classmethod
35 def from_remote_layer(
36 cls,
37 layer: Layer,
38 new_layer_name: Optional[str] = None,
39 organization_id: Optional[str] = None,
40 ) -> "LayerToLink":
41 ds = layer.dataset
42 assert isinstance(
43 ds, RemoteDataset
44 ), f"The passed layer must belong to a RemoteDataset, but belongs to {ds}"
45 return cls(ds._dataset_name, layer.name, new_layer_name, organization_id)
46
47 def as_api_linked_layer_identifier(self) -> ApiLinkedLayerIdentifier:
48 context = _get_context()
49 return ApiLinkedLayerIdentifier(
50 self.organization_id or context.organization_id,
51 self.dataset_name,
52 self.layer_name,
53 self.new_layer_name,
54 )
55
56
57 @lru_cache(maxsize=None)
58 def _cached_get_upload_datastore(context: _WebknossosContext) -> str:
59 datastores = context.api_client_with_auth.datastore_list()
60 for datastore in datastores:
61 if datastore.allows_upload:
62 return datastore.url
63 raise ValueError("No datastore found where datasets can be uploaded.")
64
65
66 def _walk(
67 path: Path,
68 base_path: Optional[Path] = None,
69 ) -> Iterator[Tuple[Path, Path, int]]:
70 if base_path is None:
71 base_path = path
72 if path.is_dir():
73 for p in path.iterdir():
74 yield from _walk(p, base_path)
75 else:
76 yield (path.resolve(), path.relative_to(base_path), path.stat().st_size)
77
78
79 def upload_dataset(
80 dataset: Dataset,
81 new_dataset_name: Optional[str] = None,
82 layers_to_link: Optional[List[LayerToLink]] = None,
83 jobs: Optional[int] = None,
84 ) -> str:
85 if new_dataset_name is None:
86 new_dataset_name = dataset.name
87 if layers_to_link is None:
88 layers_to_link = []
89 context = _get_context()
90 layer_names_to_link = set(i.new_layer_name or i.layer_name for i in layers_to_link)
91 if len(layer_names_to_link.intersection(dataset.layers.keys())) > 0:
92 warnings.warn(
93 "[INFO] Excluding the following layers from upload, since they will be linked: "
94 + f"{layer_names_to_link.intersection(dataset.layers.keys())}"
95 )
96 with TemporaryDirectory() as tmpdir:
97 tmp_ds = dataset.shallow_copy_dataset(
98 tmpdir, name=dataset.name, layers_to_ignore=layer_names_to_link
99 )
100 return upload_dataset(
101 tmp_ds,
102 new_dataset_name=new_dataset_name,
103 layers_to_link=layers_to_link,
104 jobs=jobs,
105 )
106
107 file_infos = list(_walk(dataset.path))
108 total_file_size = sum(size for _, _, size in file_infos)
109 # replicates https://github.com/scalableminds/webknossos/blob/master/frontend/javascripts/admin/dataset/dataset_upload_view.js
110 time_str = strftime("%Y-%m-%dT%H-%M-%S", gmtime())
111 upload_id = f"{time_str}__{uuid4()}"
112 datastore_token = context.datastore_required_token
113 datastore_url = _cached_get_upload_datastore(context)
114 datastore_api_client = context.get_datastore_api_client(datastore_url)
115 simultaneous_uploads = jobs if jobs is not None else DEFAULT_SIMULTANEOUS_UPLOADS
116 if "PYTEST_CURRENT_TEST" in os.environ:
117 simultaneous_uploads = 1
118 is_valid_new_name_response = context.api_client_with_auth.dataset_is_valid_new_name(
119 context.organization_id, new_dataset_name
120 )
121 if not is_valid_new_name_response.is_valid:
122 problems_str = ""
123 if is_valid_new_name_response.errors is not None:
124 problems_str = f" Problems: {is_valid_new_name_response.errors}"
125 raise Exception(
126 f"Dataset name {context.organization_id}/{new_dataset_name} is not a valid new dataset name.{problems_str}"
127 )
128
129 datastore_api_client.dataset_reserve_upload(
130 ApiReserveDatasetUploadInformation(
131 upload_id,
132 new_dataset_name,
133 context.organization_id,
134 total_file_count=len(file_infos),
135 layers_to_link=[
136 layer.as_api_linked_layer_identifier() for layer in layers_to_link
137 ],
138 folder_id=None,
139 initial_teams=[],
140 ),
141 token=datastore_token,
142 retry_count=MAXIMUM_RETRY_COUNT,
143 )
144 with get_rich_progress() as progress:
145 with Resumable(
146 f"{datastore_url}/data/datasets?token={datastore_token}",
147 simultaneous_uploads=simultaneous_uploads,
148 query={
149 "owningOrganization": context.organization_id,
150 "name": new_dataset_name,
151 "totalFileCount": len(file_infos),
152 },
153 chunk_size=100 * 1024 * 1024, # 100 MiB
154 generate_unique_identifier=lambda _,
155 relative_path: f"{upload_id}/{relative_path}",
156 test_chunks=False,
157 permanent_errors=[400, 403, 404, 409, 415, 500, 501],
158 client=httpx.Client(timeout=None),
159 ) as session:
160 progress_task = progress.add_task("Dataset Upload", total=total_file_size)
161 for file_path, relative_path, _ in file_infos:
162 resumable_file = session.add_file(file_path, relative_path)
163 resumable_file.chunk_completed.register(
164 lambda chunk: progress.advance(progress_task, chunk.size)
165 )
166
167 datastore_api_client.dataset_finish_upload(
168 ApiDatasetUploadInformation(upload_id),
169 datastore_token,
170 retry_count=MAXIMUM_RETRY_COUNT,
171 )
172
173 return new_dataset_name
174
[end of webknossos/webknossos/client/_upload_dataset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/webknossos/webknossos/client/_upload_dataset.py b/webknossos/webknossos/client/_upload_dataset.py
--- a/webknossos/webknossos/client/_upload_dataset.py
+++ b/webknossos/webknossos/client/_upload_dataset.py
@@ -152,7 +152,7 @@
},
chunk_size=100 * 1024 * 1024, # 100 MiB
generate_unique_identifier=lambda _,
- relative_path: f"{upload_id}/{relative_path}",
+ relative_path: f"{upload_id}/{relative_path.as_posix()}",
test_chunks=False,
permanent_errors=[400, 403, 404, 409, 415, 500, 501],
client=httpx.Client(timeout=None),
|
{"golden_diff": "diff --git a/webknossos/webknossos/client/_upload_dataset.py b/webknossos/webknossos/client/_upload_dataset.py\n--- a/webknossos/webknossos/client/_upload_dataset.py\n+++ b/webknossos/webknossos/client/_upload_dataset.py\n@@ -152,7 +152,7 @@\n },\n chunk_size=100 * 1024 * 1024, # 100 MiB\n generate_unique_identifier=lambda _,\n- relative_path: f\"{upload_id}/{relative_path}\",\n+ relative_path: f\"{upload_id}/{relative_path.as_posix()}\",\n test_chunks=False,\n permanent_errors=[400, 403, 404, 409, 415, 500, 501],\n client=httpx.Client(timeout=None),\n", "issue": "upload command on windows: backslashes on server, invalid dataset\nA user created a valid dataset on a windows machine with the `webknossos convert` command, then called `webknossos upload` with a valid token. The upload went through, but the directory structure got lost: the files on the server had backslashes in the paths, like `'color\\2-2-1\\z0\\y7\\x1.wkw'`. Instead, when sending files to upload, the client should always replace the client\u2019s path separator by `/`.\n", "before_files": [{"content": "import os\nimport warnings\nfrom functools import lru_cache\nfrom pathlib import Path\nfrom tempfile import TemporaryDirectory\nfrom time import gmtime, strftime\nfrom typing import Iterator, List, NamedTuple, Optional, Tuple\nfrom uuid import uuid4\n\nimport httpx\n\nfrom ..dataset import Dataset, Layer, RemoteDataset\nfrom ..utils import get_rich_progress\nfrom ._resumable import Resumable\nfrom .api_client.models import (\n ApiDatasetUploadInformation,\n ApiLinkedLayerIdentifier,\n ApiReserveDatasetUploadInformation,\n)\nfrom .context import _get_context, _WebknossosContext\n\nDEFAULT_SIMULTANEOUS_UPLOADS = 5\nMAXIMUM_RETRY_COUNT = 4\n\n\nclass LayerToLink(NamedTuple):\n dataset_name: str\n layer_name: str\n new_layer_name: Optional[str] = None\n organization_id: Optional[str] = (\n None # defaults to the user's organization before uploading\n )\n\n @classmethod\n def from_remote_layer(\n cls,\n layer: Layer,\n new_layer_name: Optional[str] = None,\n organization_id: Optional[str] = None,\n ) -> \"LayerToLink\":\n ds = layer.dataset\n assert isinstance(\n ds, RemoteDataset\n ), f\"The passed layer must belong to a RemoteDataset, but belongs to {ds}\"\n return cls(ds._dataset_name, layer.name, new_layer_name, organization_id)\n\n def as_api_linked_layer_identifier(self) -> ApiLinkedLayerIdentifier:\n context = _get_context()\n return ApiLinkedLayerIdentifier(\n self.organization_id or context.organization_id,\n self.dataset_name,\n self.layer_name,\n self.new_layer_name,\n )\n\n\n@lru_cache(maxsize=None)\ndef _cached_get_upload_datastore(context: _WebknossosContext) -> str:\n datastores = context.api_client_with_auth.datastore_list()\n for datastore in datastores:\n if datastore.allows_upload:\n return datastore.url\n raise ValueError(\"No datastore found where datasets can be uploaded.\")\n\n\ndef _walk(\n path: Path,\n base_path: Optional[Path] = None,\n) -> Iterator[Tuple[Path, Path, int]]:\n if base_path is None:\n base_path = path\n if path.is_dir():\n for p in path.iterdir():\n yield from _walk(p, base_path)\n else:\n yield (path.resolve(), path.relative_to(base_path), path.stat().st_size)\n\n\ndef upload_dataset(\n dataset: Dataset,\n new_dataset_name: Optional[str] = None,\n layers_to_link: Optional[List[LayerToLink]] = None,\n jobs: Optional[int] = None,\n) -> str:\n if new_dataset_name is None:\n new_dataset_name = dataset.name\n if layers_to_link is None:\n layers_to_link = []\n context = 
_get_context()\n layer_names_to_link = set(i.new_layer_name or i.layer_name for i in layers_to_link)\n if len(layer_names_to_link.intersection(dataset.layers.keys())) > 0:\n warnings.warn(\n \"[INFO] Excluding the following layers from upload, since they will be linked: \"\n + f\"{layer_names_to_link.intersection(dataset.layers.keys())}\"\n )\n with TemporaryDirectory() as tmpdir:\n tmp_ds = dataset.shallow_copy_dataset(\n tmpdir, name=dataset.name, layers_to_ignore=layer_names_to_link\n )\n return upload_dataset(\n tmp_ds,\n new_dataset_name=new_dataset_name,\n layers_to_link=layers_to_link,\n jobs=jobs,\n )\n\n file_infos = list(_walk(dataset.path))\n total_file_size = sum(size for _, _, size in file_infos)\n # replicates https://github.com/scalableminds/webknossos/blob/master/frontend/javascripts/admin/dataset/dataset_upload_view.js\n time_str = strftime(\"%Y-%m-%dT%H-%M-%S\", gmtime())\n upload_id = f\"{time_str}__{uuid4()}\"\n datastore_token = context.datastore_required_token\n datastore_url = _cached_get_upload_datastore(context)\n datastore_api_client = context.get_datastore_api_client(datastore_url)\n simultaneous_uploads = jobs if jobs is not None else DEFAULT_SIMULTANEOUS_UPLOADS\n if \"PYTEST_CURRENT_TEST\" in os.environ:\n simultaneous_uploads = 1\n is_valid_new_name_response = context.api_client_with_auth.dataset_is_valid_new_name(\n context.organization_id, new_dataset_name\n )\n if not is_valid_new_name_response.is_valid:\n problems_str = \"\"\n if is_valid_new_name_response.errors is not None:\n problems_str = f\" Problems: {is_valid_new_name_response.errors}\"\n raise Exception(\n f\"Dataset name {context.organization_id}/{new_dataset_name} is not a valid new dataset name.{problems_str}\"\n )\n\n datastore_api_client.dataset_reserve_upload(\n ApiReserveDatasetUploadInformation(\n upload_id,\n new_dataset_name,\n context.organization_id,\n total_file_count=len(file_infos),\n layers_to_link=[\n layer.as_api_linked_layer_identifier() for layer in layers_to_link\n ],\n folder_id=None,\n initial_teams=[],\n ),\n token=datastore_token,\n retry_count=MAXIMUM_RETRY_COUNT,\n )\n with get_rich_progress() as progress:\n with Resumable(\n f\"{datastore_url}/data/datasets?token={datastore_token}\",\n simultaneous_uploads=simultaneous_uploads,\n query={\n \"owningOrganization\": context.organization_id,\n \"name\": new_dataset_name,\n \"totalFileCount\": len(file_infos),\n },\n chunk_size=100 * 1024 * 1024, # 100 MiB\n generate_unique_identifier=lambda _,\n relative_path: f\"{upload_id}/{relative_path}\",\n test_chunks=False,\n permanent_errors=[400, 403, 404, 409, 415, 500, 501],\n client=httpx.Client(timeout=None),\n ) as session:\n progress_task = progress.add_task(\"Dataset Upload\", total=total_file_size)\n for file_path, relative_path, _ in file_infos:\n resumable_file = session.add_file(file_path, relative_path)\n resumable_file.chunk_completed.register(\n lambda chunk: progress.advance(progress_task, chunk.size)\n )\n\n datastore_api_client.dataset_finish_upload(\n ApiDatasetUploadInformation(upload_id),\n datastore_token,\n retry_count=MAXIMUM_RETRY_COUNT,\n )\n\n return new_dataset_name\n", "path": "webknossos/webknossos/client/_upload_dataset.py"}]}
| 2,500 | 197 |
gh_patches_debug_3318 | rasdani/github-patches | git_diff | feast-dev__feast-2753 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to access data in Feast UI when deployed to remote instance
## Expected Behavior
Should be able to view registry data when launching UI with `feast ui` on remote instances (like EC2).
## Current Behavior
I’ve tried setting the host to `0.0.0.0` and the static assets get loaded and can accessed via the public IP. But the requests to the registry (`http://0.0.0.0:8888/registry`) fails, so no data shows up.
I've also tried setting the host to the private IP, but the request to `/registry` times out.
## Steps to reproduce
Run `feast ui --host <instance private ip>` in EC2 instance.
### Specifications
- Version:`0.21.2`
- Platform: EC2
- Subsystem:
## Possible Solution
Potential CORS issue that needs to be fixed?
Unable to access data in Feast UI when deployed to remote instance
## Expected Behavior
Should be able to view registry data when launching UI with `feast ui` on remote instances (like EC2).
## Current Behavior
I’ve tried setting the host to `0.0.0.0` and the static assets get loaded and can accessed via the public IP. But the requests to the registry (`http://0.0.0.0:8888/registry`) fails, so no data shows up.
I've also tried setting the host to the private IP, but the request to `/registry` times out.
## Steps to reproduce
Run `feast ui --host <instance private ip>` in EC2 instance.
### Specifications
- Version:`0.21.2`
- Platform: EC2
- Subsystem:
## Possible Solution
Potential CORS issue that needs to be fixed?
</issue>
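A plausible reading of the symptom (an assumption based on this report, not on the Feast source): a registry URL built from the bind host only works when the browser runs on the same machine, while a path relative to the page origin resolves to whatever public address actually served the UI. The snippet below only illustrates that resolution with the standard library; the EC2 address is made up.

```python
# Illustration only: how a relative "/registry" path resolves in the browser.
from urllib.parse import urljoin

page_origin = "http://203.0.113.10:8888/p/my-project"      # hypothetical public address
print(urljoin(page_origin, "/registry"))                   # http://203.0.113.10:8888/registry

print(urljoin("http://0.0.0.0:8888/", "/registry"))        # http://0.0.0.0:8888/registry
# 0.0.0.0 is a bind address, not something a remote browser can reach.
```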
<code>
[start of sdk/python/feast/ui_server.py]
1 import json
2 import threading
3 from typing import Callable, Optional
4
5 import pkg_resources
6 import uvicorn
7 from fastapi import FastAPI, Response
8 from fastapi.middleware.cors import CORSMiddleware
9 from fastapi.staticfiles import StaticFiles
10
11 import feast
12
13
14 def get_app(
15 store: "feast.FeatureStore",
16 get_registry_dump: Callable,
17 project_id: str,
18 registry_ttl_secs: int,
19 host: str,
20 port: int,
21 ):
22 app = FastAPI()
23
24 app.add_middleware(
25 CORSMiddleware,
26 allow_origins=["*"],
27 allow_credentials=True,
28 allow_methods=["*"],
29 allow_headers=["*"],
30 )
31
32 # Asynchronously refresh registry, notifying shutdown and canceling the active timer if the app is shutting down
33 registry_json = ""
34 shutting_down = False
35 active_timer: Optional[threading.Timer] = None
36
37 def async_refresh():
38 store.refresh_registry()
39 nonlocal registry_json
40 registry_json = get_registry_dump(store.config, store.repo_path)
41 if shutting_down:
42 return
43 nonlocal active_timer
44 active_timer = threading.Timer(registry_ttl_secs, async_refresh)
45 active_timer.start()
46
47 @app.on_event("shutdown")
48 def shutdown_event():
49 nonlocal shutting_down
50 shutting_down = True
51 if active_timer:
52 active_timer.cancel()
53
54 async_refresh()
55
56 ui_dir = pkg_resources.resource_filename(__name__, "ui/build/")
57 # Initialize with the projects-list.json file
58 with open(ui_dir + "projects-list.json", mode="w") as f:
59 projects_dict = {
60 "projects": [
61 {
62 "name": "Project",
63 "description": "Test project",
64 "id": project_id,
65 "registryPath": f"http://{host}:{port}/registry",
66 }
67 ]
68 }
69 f.write(json.dumps(projects_dict))
70
71 @app.get("/registry")
72 def read_registry():
73 return json.loads(registry_json)
74
75 # For all other paths (such as paths that would otherwise be handled by react router), pass to React
76 @app.api_route("/p/{path_name:path}", methods=["GET"])
77 def catch_all():
78 filename = ui_dir + "index.html"
79
80 with open(filename) as f:
81 content = f.read()
82
83 return Response(content, media_type="text/html")
84
85 app.mount(
86 "/", StaticFiles(directory=ui_dir, html=True), name="site",
87 )
88
89 return app
90
91
92 def start_server(
93 store: "feast.FeatureStore",
94 host: str,
95 port: int,
96 get_registry_dump: Callable,
97 project_id: str,
98 registry_ttl_sec: int,
99 ):
100 app = get_app(store, get_registry_dump, project_id, registry_ttl_sec, host, port)
101 uvicorn.run(app, host=host, port=port)
102
[end of sdk/python/feast/ui_server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/feast/ui_server.py b/sdk/python/feast/ui_server.py
--- a/sdk/python/feast/ui_server.py
+++ b/sdk/python/feast/ui_server.py
@@ -62,7 +62,7 @@
"name": "Project",
"description": "Test project",
"id": project_id,
- "registryPath": f"http://{host}:{port}/registry",
+ "registryPath": "/registry",
}
]
}
|
{"golden_diff": "diff --git a/sdk/python/feast/ui_server.py b/sdk/python/feast/ui_server.py\n--- a/sdk/python/feast/ui_server.py\n+++ b/sdk/python/feast/ui_server.py\n@@ -62,7 +62,7 @@\n \"name\": \"Project\",\n \"description\": \"Test project\",\n \"id\": project_id,\n- \"registryPath\": f\"http://{host}:{port}/registry\",\n+ \"registryPath\": \"/registry\",\n }\n ]\n }\n", "issue": "Unable to access data in Feast UI when deployed to remote instance\n## Expected Behavior \r\nShould be able to view registry data when launching UI with `feast ui` on remote instances (like EC2).\r\n\r\n## Current Behavior\r\nI\u2019ve tried setting the host to `0.0.0.0` and the static assets get loaded and can accessed via the public IP. But the requests to the registry (`http://0.0.0.0:8888/registry`) fails, so no data shows up.\r\n\r\nI've also tried setting the host to the private IP, but the request to `/registry` times out.\r\n\r\n## Steps to reproduce\r\nRun `feast ui --host <instance private ip>` in EC2 instance.\r\n\r\n### Specifications\r\n\r\n- Version:`0.21.2`\r\n- Platform: EC2\r\n- Subsystem:\r\n\r\n## Possible Solution\r\nPotential CORS issue that needs to be fixed?\nUnable to access data in Feast UI when deployed to remote instance\n## Expected Behavior \r\nShould be able to view registry data when launching UI with `feast ui` on remote instances (like EC2).\r\n\r\n## Current Behavior\r\nI\u2019ve tried setting the host to `0.0.0.0` and the static assets get loaded and can accessed via the public IP. But the requests to the registry (`http://0.0.0.0:8888/registry`) fails, so no data shows up.\r\n\r\nI've also tried setting the host to the private IP, but the request to `/registry` times out.\r\n\r\n## Steps to reproduce\r\nRun `feast ui --host <instance private ip>` in EC2 instance.\r\n\r\n### Specifications\r\n\r\n- Version:`0.21.2`\r\n- Platform: EC2\r\n- Subsystem:\r\n\r\n## Possible Solution\r\nPotential CORS issue that needs to be fixed?\n", "before_files": [{"content": "import json\nimport threading\nfrom typing import Callable, Optional\n\nimport pkg_resources\nimport uvicorn\nfrom fastapi import FastAPI, Response\nfrom fastapi.middleware.cors import CORSMiddleware\nfrom fastapi.staticfiles import StaticFiles\n\nimport feast\n\n\ndef get_app(\n store: \"feast.FeatureStore\",\n get_registry_dump: Callable,\n project_id: str,\n registry_ttl_secs: int,\n host: str,\n port: int,\n):\n app = FastAPI()\n\n app.add_middleware(\n CORSMiddleware,\n allow_origins=[\"*\"],\n allow_credentials=True,\n allow_methods=[\"*\"],\n allow_headers=[\"*\"],\n )\n\n # Asynchronously refresh registry, notifying shutdown and canceling the active timer if the app is shutting down\n registry_json = \"\"\n shutting_down = False\n active_timer: Optional[threading.Timer] = None\n\n def async_refresh():\n store.refresh_registry()\n nonlocal registry_json\n registry_json = get_registry_dump(store.config, store.repo_path)\n if shutting_down:\n return\n nonlocal active_timer\n active_timer = threading.Timer(registry_ttl_secs, async_refresh)\n active_timer.start()\n\n @app.on_event(\"shutdown\")\n def shutdown_event():\n nonlocal shutting_down\n shutting_down = True\n if active_timer:\n active_timer.cancel()\n\n async_refresh()\n\n ui_dir = pkg_resources.resource_filename(__name__, \"ui/build/\")\n # Initialize with the projects-list.json file\n with open(ui_dir + \"projects-list.json\", mode=\"w\") as f:\n projects_dict = {\n \"projects\": [\n {\n \"name\": \"Project\",\n \"description\": \"Test project\",\n \"id\": 
project_id,\n \"registryPath\": f\"http://{host}:{port}/registry\",\n }\n ]\n }\n f.write(json.dumps(projects_dict))\n\n @app.get(\"/registry\")\n def read_registry():\n return json.loads(registry_json)\n\n # For all other paths (such as paths that would otherwise be handled by react router), pass to React\n @app.api_route(\"/p/{path_name:path}\", methods=[\"GET\"])\n def catch_all():\n filename = ui_dir + \"index.html\"\n\n with open(filename) as f:\n content = f.read()\n\n return Response(content, media_type=\"text/html\")\n\n app.mount(\n \"/\", StaticFiles(directory=ui_dir, html=True), name=\"site\",\n )\n\n return app\n\n\ndef start_server(\n store: \"feast.FeatureStore\",\n host: str,\n port: int,\n get_registry_dump: Callable,\n project_id: str,\n registry_ttl_sec: int,\n):\n app = get_app(store, get_registry_dump, project_id, registry_ttl_sec, host, port)\n uvicorn.run(app, host=host, port=port)\n", "path": "sdk/python/feast/ui_server.py"}]}
| 1,733 | 108 |
gh_patches_debug_24195 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-2543 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
testing #2252 follow mail: mail participation ends soon
Where: Mail "participation ends soon"
* in a single module project link should go to project view and not to a module view that does not regularly exist in this case. Is that possible?
* As in the other mails: paragraph between two sentences probably looks better.
</issue>
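A minimal sketch of the linking rule being asked for is shown below. The function name is hypothetical and the `initialSlide` query parameter is taken from the fix later in this entry; this is not the project's actual code.

```python
# Hypothetical sketch: offline events link to the project view, optionally
# preselecting the matching timeline slide, instead of a module/event view.
def offline_event_link(project_url: str, display_timeline: bool, slide_index: int = 0) -> str:
    if display_timeline:
        return f"{project_url}?initialSlide={slide_index}"
    return project_url


print(offline_event_link("/projects/example/", display_timeline=False))
# /projects/example/
print(offline_event_link("/projects/example/", display_timeline=True, slide_index=3))
# /projects/example/?initialSlide=3
```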
<code>
[start of meinberlin/apps/offlineevents/models.py]
1 from datetime import timedelta
2
3 from autoslug import AutoSlugField
4 from ckeditor_uploader.fields import RichTextUploadingField
5 from django.db import models
6 from django.urls import reverse
7 from django.utils import timezone
8 from django.utils.translation import ugettext_lazy as _
9
10 from adhocracy4 import transforms
11 from adhocracy4.models.base import UserGeneratedContentModel
12 from adhocracy4.projects import models as project_models
13
14
15 class OfflineEventsQuerySet(models.QuerySet):
16
17 def starts_within(self, hours=72):
18 """All offlineevents starting within the given time."""
19 now = timezone.now()
20 return self.filter(date__gt=now,
21 date__lt=(now + timedelta(hours=hours)))
22
23
24 class OfflineEvent(UserGeneratedContentModel):
25 slug = AutoSlugField(populate_from='name', unique=True)
26 name = models.CharField(max_length=120, verbose_name=_('Name of event'))
27 event_type = models.CharField(
28 max_length=30, verbose_name=_('Event type'),
29 help_text=_('The content of this field is shown in the timeline. It '
30 'should have no more than 30 characters e.g. Information '
31 'event or 3rd public workshop.'))
32 date = models.DateTimeField(
33 verbose_name=_('Date'))
34 description = RichTextUploadingField(
35 config_name='image-editor',
36 verbose_name=_('Description'))
37 project = models.ForeignKey(
38 project_models.Project, on_delete=models.CASCADE)
39
40 objects = OfflineEventsQuerySet.as_manager()
41
42 class Meta:
43 ordering = ['-date']
44
45 def __str__(self):
46 return self.name
47
48 def save(self, *args, **kwargs):
49 self.description = transforms.clean_html_field(
50 self.description, 'image-editor')
51 super().save(*args, **kwargs)
52
53 def get_absolute_url(self):
54 return reverse('meinberlin_offlineevents:offlineevent-detail',
55 args=[str(self.slug)])
56
[end of meinberlin/apps/offlineevents/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/offlineevents/models.py b/meinberlin/apps/offlineevents/models.py
--- a/meinberlin/apps/offlineevents/models.py
+++ b/meinberlin/apps/offlineevents/models.py
@@ -3,8 +3,8 @@
from autoslug import AutoSlugField
from ckeditor_uploader.fields import RichTextUploadingField
from django.db import models
-from django.urls import reverse
from django.utils import timezone
+from django.utils.functional import cached_property
from django.utils.translation import ugettext_lazy as _
from adhocracy4 import transforms
@@ -50,6 +50,16 @@
self.description, 'image-editor')
super().save(*args, **kwargs)
+ @cached_property
+ def get_timeline_index(self):
+ if self.project.display_timeline:
+ for count, cluster in enumerate(self.project.participation_dates):
+ if 'event_type' in cluster and self.slug == cluster['slug']:
+ return count
+ return 0
+
def get_absolute_url(self):
- return reverse('meinberlin_offlineevents:offlineevent-detail',
- args=[str(self.slug)])
+ if self.project.display_timeline:
+ return '{}?initialSlide={}'.format(self.project.get_absolute_url(),
+ self.get_timeline_index)
+ return self.project.get_absolute_url()
|
{"golden_diff": "diff --git a/meinberlin/apps/offlineevents/models.py b/meinberlin/apps/offlineevents/models.py\n--- a/meinberlin/apps/offlineevents/models.py\n+++ b/meinberlin/apps/offlineevents/models.py\n@@ -3,8 +3,8 @@\n from autoslug import AutoSlugField\n from ckeditor_uploader.fields import RichTextUploadingField\n from django.db import models\n-from django.urls import reverse\n from django.utils import timezone\n+from django.utils.functional import cached_property\n from django.utils.translation import ugettext_lazy as _\n \n from adhocracy4 import transforms\n@@ -50,6 +50,16 @@\n self.description, 'image-editor')\n super().save(*args, **kwargs)\n \n+ @cached_property\n+ def get_timeline_index(self):\n+ if self.project.display_timeline:\n+ for count, cluster in enumerate(self.project.participation_dates):\n+ if 'event_type' in cluster and self.slug == cluster['slug']:\n+ return count\n+ return 0\n+\n def get_absolute_url(self):\n- return reverse('meinberlin_offlineevents:offlineevent-detail',\n- args=[str(self.slug)])\n+ if self.project.display_timeline:\n+ return '{}?initialSlide={}'.format(self.project.get_absolute_url(),\n+ self.get_timeline_index)\n+ return self.project.get_absolute_url()\n", "issue": "testing #2252 follow mail: mail participation ends soon\nWhere: Mail \"participation ends soon\"\r\n\r\n* in a single module project link should go to project view and not to a module view that does not regularly exist in this case. Is that possible?\r\n* As in the other mails: paragraph between two sentences probably looks better.\n", "before_files": [{"content": "from datetime import timedelta\n\nfrom autoslug import AutoSlugField\nfrom ckeditor_uploader.fields import RichTextUploadingField\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4 import transforms\nfrom adhocracy4.models.base import UserGeneratedContentModel\nfrom adhocracy4.projects import models as project_models\n\n\nclass OfflineEventsQuerySet(models.QuerySet):\n\n def starts_within(self, hours=72):\n \"\"\"All offlineevents starting within the given time.\"\"\"\n now = timezone.now()\n return self.filter(date__gt=now,\n date__lt=(now + timedelta(hours=hours)))\n\n\nclass OfflineEvent(UserGeneratedContentModel):\n slug = AutoSlugField(populate_from='name', unique=True)\n name = models.CharField(max_length=120, verbose_name=_('Name of event'))\n event_type = models.CharField(\n max_length=30, verbose_name=_('Event type'),\n help_text=_('The content of this field is shown in the timeline. It '\n 'should have no more than 30 characters e.g. Information '\n 'event or 3rd public workshop.'))\n date = models.DateTimeField(\n verbose_name=_('Date'))\n description = RichTextUploadingField(\n config_name='image-editor',\n verbose_name=_('Description'))\n project = models.ForeignKey(\n project_models.Project, on_delete=models.CASCADE)\n\n objects = OfflineEventsQuerySet.as_manager()\n\n class Meta:\n ordering = ['-date']\n\n def __str__(self):\n return self.name\n\n def save(self, *args, **kwargs):\n self.description = transforms.clean_html_field(\n self.description, 'image-editor')\n super().save(*args, **kwargs)\n\n def get_absolute_url(self):\n return reverse('meinberlin_offlineevents:offlineevent-detail',\n args=[str(self.slug)])\n", "path": "meinberlin/apps/offlineevents/models.py"}]}
| 1,129 | 298 |
gh_patches_debug_43164 | rasdani/github-patches | git_diff | Textualize__textual-3659 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSS error reporting sometimes off by one
See https://github.com/Textualize/textual/pull/3582#issuecomment-1787507687.
Running the app at the bottom produces the error below, where the error reporting is off by one.
See the line above the panel and the code lines in the code snippet printed.
```
Error in stylesheet:
/Users/davep/develop/python/textual-upstream/sandbox/foo.py, CSSErrorApp.CSS:1:4
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ 1 │ │
│ ❱ 2 │ : │
│ 3 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
• Expected one of 'comment line', 'comment start', 'selector start', 'selector start class', 'selector start id', 'selector start universal', 'variable name', or 'whitespace'.
• Did you forget a semicolon at the end of a line?
```
```py
from textual.app import App, ComposeResult
from textual.widgets import Label

class CSSErrorApp(App[None]):

    CSS = """
    :
    """

    def compose(self) -> ComposeResult:
        yield Label()

if __name__ == "__main__":
    CSSErrorApp().run()
```
</issue>
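The off-by-one can be seen without running Textual at all. The snippet below is a stand-alone illustration under the assumption that the tokenizer tracks 0-based line/column offsets while the report should print 1-based ones; the reported `1:4` corresponds to the 0-based location of the stray `:`.

```python
# Stand-alone illustration of the 0-based vs 1-based mismatch.
code = "\n    :\n    "                 # same shape as the CSS in the repro app
lines = code.splitlines(keepends=True)

line_no, col_no = 1, 4                 # 0-based location of the ':'
assert lines[line_no][col_no] == ":"

# The human-readable report should convert exactly once to 1-based values:
print(f"token starts at {line_no + 1}:{col_no + 1}")   # 2:5, matching the highlighted line
```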
<code>
[start of src/textual/css/tokenizer.py]
1 from __future__ import annotations
2
3 import re
4 from typing import TYPE_CHECKING, NamedTuple
5
6 import rich.repr
7 from rich.console import Group, RenderableType
8 from rich.highlighter import ReprHighlighter
9 from rich.padding import Padding
10 from rich.panel import Panel
11 from rich.syntax import Syntax
12 from rich.text import Text
13
14 from ..suggestions import get_suggestion
15 from ._error_tools import friendly_list
16 from .constants import VALID_PSEUDO_CLASSES
17
18 if TYPE_CHECKING:
19 from .types import CSSLocation
20
21
22 class TokenError(Exception):
23 """Error raised when the CSS cannot be tokenized (syntax error)."""
24
25 def __init__(
26 self,
27 read_from: CSSLocation,
28 code: str,
29 start: tuple[int, int],
30 message: str,
31 end: tuple[int, int] | None = None,
32 ) -> None:
33 """
34 Args:
35 read_from: The location where the CSS was read from.
36 code: The code being parsed.
37 start: Line number of the error.
38 message: A message associated with the error.
39 end: End location of token, or None if not known.
40 """
41
42 self.read_from = read_from
43 self.code = code
44 self.start = start
45 self.end = end or start
46 super().__init__(message)
47
48 def _get_snippet(self) -> Panel:
49 """Get a short snippet of code around a given line number.
50
51 Returns:
52 A renderable.
53 """
54 line_no = self.start[0]
55 # TODO: Highlight column number
56 syntax = Syntax(
57 self.code,
58 lexer="scss",
59 theme="ansi_light",
60 line_numbers=True,
61 indent_guides=True,
62 line_range=(max(0, line_no - 2), line_no + 2),
63 highlight_lines={line_no + 1},
64 )
65 syntax.stylize_range("reverse bold", self.start, self.end)
66 return Panel(syntax, border_style="red")
67
68 def __rich__(self) -> RenderableType:
69 highlighter = ReprHighlighter()
70 errors: list[RenderableType] = []
71
72 message = str(self)
73 errors.append(Text(" Error in stylesheet:", style="bold red"))
74
75 line_no, col_no = self.start
76
77 path, widget_variable = self.read_from
78 if widget_variable:
79 css_location = f" {path}, {widget_variable}:{line_no}:{col_no}"
80 else:
81 css_location = f" {path}:{line_no}:{col_no}"
82 errors.append(highlighter(css_location))
83 errors.append(self._get_snippet())
84
85 final_message = "\n".join(
86 f"• {message_part.strip()}" for message_part in message.split(";")
87 )
88 errors.append(
89 Padding(
90 highlighter(
91 Text(final_message, "red"),
92 ),
93 pad=(0, 1),
94 )
95 )
96
97 return Group(*errors)
98
99
100 class EOFError(TokenError):
101 pass
102
103
104 class Expect:
105 def __init__(self, **tokens: str) -> None:
106 self.names = list(tokens.keys())
107 self.regexes = list(tokens.values())
108 self._regex = re.compile(
109 "("
110 + "|".join(f"(?P<{name}>{regex})" for name, regex in tokens.items())
111 + ")"
112 )
113 self.match = self._regex.match
114 self.search = self._regex.search
115 self._expect_eof = False
116
117 def expect_eof(self, eof: bool) -> Expect:
118 self._expect_eof = eof
119 return self
120
121 def __rich_repr__(self) -> rich.repr.Result:
122 yield from zip(self.names, self.regexes)
123
124
125 class ReferencedBy(NamedTuple):
126 name: str
127 location: tuple[int, int]
128 length: int
129 code: str
130
131
132 @rich.repr.auto
133 class Token(NamedTuple):
134 name: str
135 value: str
136 read_from: CSSLocation
137 code: str
138 location: tuple[int, int]
139 referenced_by: ReferencedBy | None = None
140
141 @property
142 def start(self) -> tuple[int, int]:
143 """Start line and column (1 indexed)."""
144 line, offset = self.location
145 return (line + 1, offset)
146
147 @property
148 def end(self) -> tuple[int, int]:
149 """End line and column (1 indexed)."""
150 line, offset = self.location
151 return (line + 1, offset + len(self.value))
152
153 def with_reference(self, by: ReferencedBy | None) -> "Token":
154 """Return a copy of the Token, with reference information attached.
155 This is used for variable substitution, where a variable reference
156 can refer to tokens which were defined elsewhere. With the additional
157 ReferencedBy data attached, we can track where the token we are referring
158 to is used.
159 """
160 return Token(
161 name=self.name,
162 value=self.value,
163 read_from=self.read_from,
164 code=self.code,
165 location=self.location,
166 referenced_by=by,
167 )
168
169 def __str__(self) -> str:
170 return self.value
171
172 def __rich_repr__(self) -> rich.repr.Result:
173 yield "name", self.name
174 yield "value", self.value
175 yield (
176 "read_from",
177 self.read_from[0] if not self.read_from[1] else self.read_from,
178 )
179 yield "code", self.code if len(self.code) < 40 else self.code[:40] + "..."
180 yield "location", self.location
181 yield "referenced_by", self.referenced_by, None
182
183
184 class Tokenizer:
185 def __init__(self, text: str, read_from: CSSLocation = ("", "")) -> None:
186 self.read_from = read_from
187 self.code = text
188 self.lines = text.splitlines(keepends=True)
189 self.line_no = 0
190 self.col_no = 0
191
192 def get_token(self, expect: Expect) -> Token:
193 line_no = self.line_no
194 col_no = self.col_no
195 if line_no >= len(self.lines):
196 if expect._expect_eof:
197 return Token(
198 "eof",
199 "",
200 self.read_from,
201 self.code,
202 (line_no + 1, col_no + 1),
203 None,
204 )
205 else:
206 raise EOFError(
207 self.read_from,
208 self.code,
209 (line_no + 1, col_no + 1),
210 "Unexpected end of file",
211 )
212 line = self.lines[line_no]
213 match = expect.match(line, col_no)
214 if match is None:
215 expected = friendly_list(" ".join(name.split("_")) for name in expect.names)
216 message = f"Expected one of {expected}.; Did you forget a semicolon at the end of a line?"
217 raise TokenError(
218 self.read_from,
219 self.code,
220 (line_no, col_no),
221 message,
222 )
223 iter_groups = iter(match.groups())
224
225 next(iter_groups)
226
227 for name, value in zip(expect.names, iter_groups):
228 if value is not None:
229 break
230 else:
231 # For MyPy's benefit
232 raise AssertionError("can't reach here")
233
234 token = Token(
235 name,
236 value,
237 self.read_from,
238 self.code,
239 (line_no, col_no),
240 referenced_by=None,
241 )
242
243 if (
244 token.name == "pseudo_class"
245 and token.value.strip(":") not in VALID_PSEUDO_CLASSES
246 ):
247 pseudo_class = token.value.strip(":")
248 suggestion = get_suggestion(pseudo_class, list(VALID_PSEUDO_CLASSES))
249 all_valid = f"must be one of {friendly_list(VALID_PSEUDO_CLASSES)}"
250 if suggestion:
251 raise TokenError(
252 self.read_from,
253 self.code,
254 (line_no, col_no),
255 f"unknown pseudo-class {pseudo_class!r}; did you mean {suggestion!r}?; {all_valid}",
256 )
257 else:
258 raise TokenError(
259 self.read_from,
260 self.code,
261 (line_no, col_no),
262 f"unknown pseudo-class {pseudo_class!r}; {all_valid}",
263 )
264
265 col_no += len(value)
266 if col_no >= len(line):
267 line_no += 1
268 col_no = 0
269 self.line_no = line_no
270 self.col_no = col_no
271 return token
272
273 def skip_to(self, expect: Expect) -> Token:
274 line_no = self.line_no
275 col_no = self.col_no
276
277 while True:
278 if line_no >= len(self.lines):
279 raise EOFError(
280 self.read_from,
281 self.code,
282 (line_no, col_no),
283 "Unexpected end of file",
284 )
285 line = self.lines[line_no]
286 match = expect.search(line, col_no)
287
288 if match is None:
289 line_no += 1
290 col_no = 0
291 else:
292 self.line_no = line_no
293 self.col_no = match.span(0)[0]
294 return self.get_token(expect)
295
[end of src/textual/css/tokenizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/textual/css/tokenizer.py b/src/textual/css/tokenizer.py
--- a/src/textual/css/tokenizer.py
+++ b/src/textual/css/tokenizer.py
@@ -34,9 +34,9 @@
Args:
read_from: The location where the CSS was read from.
code: The code being parsed.
- start: Line number of the error.
+ start: Line and column number of the error (1-indexed).
message: A message associated with the error.
- end: End location of token, or None if not known.
+ end: End location of token (1-indexed), or None if not known.
"""
self.read_from = read_from
@@ -60,9 +60,13 @@
line_numbers=True,
indent_guides=True,
line_range=(max(0, line_no - 2), line_no + 2),
- highlight_lines={line_no + 1},
+ highlight_lines={line_no},
+ )
+ syntax.stylize_range(
+ "reverse bold",
+ (self.start[0], self.start[1] - 1),
+ (self.end[0], self.end[1] - 1),
)
- syntax.stylize_range("reverse bold", self.start, self.end)
return Panel(syntax, border_style="red")
def __rich__(self) -> RenderableType:
@@ -136,19 +140,20 @@
read_from: CSSLocation
code: str
location: tuple[int, int]
+ """Token starting location, 0-indexed."""
referenced_by: ReferencedBy | None = None
@property
def start(self) -> tuple[int, int]:
- """Start line and column (1 indexed)."""
+ """Start line and column (1-indexed)."""
line, offset = self.location
- return (line + 1, offset)
+ return (line + 1, offset + 1)
@property
def end(self) -> tuple[int, int]:
- """End line and column (1 indexed)."""
+ """End line and column (1-indexed)."""
line, offset = self.location
- return (line + 1, offset + len(self.value))
+ return (line + 1, offset + len(self.value) + 1)
def with_reference(self, by: ReferencedBy | None) -> "Token":
"""Return a copy of the Token, with reference information attached.
@@ -199,7 +204,7 @@
"",
self.read_from,
self.code,
- (line_no + 1, col_no + 1),
+ (line_no, col_no),
None,
)
else:
@@ -217,7 +222,7 @@
raise TokenError(
self.read_from,
self.code,
- (line_no, col_no),
+ (line_no + 1, col_no + 1),
message,
)
iter_groups = iter(match.groups())
@@ -251,14 +256,14 @@
raise TokenError(
self.read_from,
self.code,
- (line_no, col_no),
+ (line_no + 1, col_no + 1),
f"unknown pseudo-class {pseudo_class!r}; did you mean {suggestion!r}?; {all_valid}",
)
else:
raise TokenError(
self.read_from,
self.code,
- (line_no, col_no),
+ (line_no + 1, col_no + 1),
f"unknown pseudo-class {pseudo_class!r}; {all_valid}",
)
|
{"golden_diff": "diff --git a/src/textual/css/tokenizer.py b/src/textual/css/tokenizer.py\n--- a/src/textual/css/tokenizer.py\n+++ b/src/textual/css/tokenizer.py\n@@ -34,9 +34,9 @@\n Args:\n read_from: The location where the CSS was read from.\n code: The code being parsed.\n- start: Line number of the error.\n+ start: Line and column number of the error (1-indexed).\n message: A message associated with the error.\n- end: End location of token, or None if not known.\n+ end: End location of token (1-indexed), or None if not known.\n \"\"\"\n \n self.read_from = read_from\n@@ -60,9 +60,13 @@\n line_numbers=True,\n indent_guides=True,\n line_range=(max(0, line_no - 2), line_no + 2),\n- highlight_lines={line_no + 1},\n+ highlight_lines={line_no},\n+ )\n+ syntax.stylize_range(\n+ \"reverse bold\",\n+ (self.start[0], self.start[1] - 1),\n+ (self.end[0], self.end[1] - 1),\n )\n- syntax.stylize_range(\"reverse bold\", self.start, self.end)\n return Panel(syntax, border_style=\"red\")\n \n def __rich__(self) -> RenderableType:\n@@ -136,19 +140,20 @@\n read_from: CSSLocation\n code: str\n location: tuple[int, int]\n+ \"\"\"Token starting location, 0-indexed.\"\"\"\n referenced_by: ReferencedBy | None = None\n \n @property\n def start(self) -> tuple[int, int]:\n- \"\"\"Start line and column (1 indexed).\"\"\"\n+ \"\"\"Start line and column (1-indexed).\"\"\"\n line, offset = self.location\n- return (line + 1, offset)\n+ return (line + 1, offset + 1)\n \n @property\n def end(self) -> tuple[int, int]:\n- \"\"\"End line and column (1 indexed).\"\"\"\n+ \"\"\"End line and column (1-indexed).\"\"\"\n line, offset = self.location\n- return (line + 1, offset + len(self.value))\n+ return (line + 1, offset + len(self.value) + 1)\n \n def with_reference(self, by: ReferencedBy | None) -> \"Token\":\n \"\"\"Return a copy of the Token, with reference information attached.\n@@ -199,7 +204,7 @@\n \"\",\n self.read_from,\n self.code,\n- (line_no + 1, col_no + 1),\n+ (line_no, col_no),\n None,\n )\n else:\n@@ -217,7 +222,7 @@\n raise TokenError(\n self.read_from,\n self.code,\n- (line_no, col_no),\n+ (line_no + 1, col_no + 1),\n message,\n )\n iter_groups = iter(match.groups())\n@@ -251,14 +256,14 @@\n raise TokenError(\n self.read_from,\n self.code,\n- (line_no, col_no),\n+ (line_no + 1, col_no + 1),\n f\"unknown pseudo-class {pseudo_class!r}; did you mean {suggestion!r}?; {all_valid}\",\n )\n else:\n raise TokenError(\n self.read_from,\n self.code,\n- (line_no, col_no),\n+ (line_no + 1, col_no + 1),\n f\"unknown pseudo-class {pseudo_class!r}; {all_valid}\",\n )\n", "issue": "CSS error reporting sometimes off by one\nSee https://github.com/Textualize/textual/pull/3582#issuecomment-1787507687.\r\n\r\nRunning the app at the bottom produces the error below, where the error reporting is off by one.\r\nSee the line above the panel and the code lines in the code snippet printed.\r\n\r\n```\r\n Error in stylesheet:\r\n /Users/davep/develop/python/textual-upstream/sandbox/foo.py, 
CSSErrorApp.CSS:1:4\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 1 \u2502 \u2502\r\n\u2502 \u2771 2 \u2502 : \u2502\r\n\u2502 3 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n \u2022 Expected one of 'comment line', 'comment start', 'selector start', 'selector start class', 'selector start id', 'selector start universal', 'variable name', or 'whitespace'. \r\n \u2022 Did you forget a semicolon at the end of a line? 
\r\n```\r\n\r\n```py\r\nfrom textual.app import App, ComposeResult\r\nfrom textual.widgets import Label\r\n\r\nclass CSSErrorApp(App[None]):\r\n\r\n CSS = \"\"\"\r\n :\r\n \"\"\"\r\n\r\n def compose(self) -> ComposeResult:\r\n yield Label()\r\n\r\nif __name__ == \"__main__\":\r\n CSSErrorApp().run()\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\nimport re\nfrom typing import TYPE_CHECKING, NamedTuple\n\nimport rich.repr\nfrom rich.console import Group, RenderableType\nfrom rich.highlighter import ReprHighlighter\nfrom rich.padding import Padding\nfrom rich.panel import Panel\nfrom rich.syntax import Syntax\nfrom rich.text import Text\n\nfrom ..suggestions import get_suggestion\nfrom ._error_tools import friendly_list\nfrom .constants import VALID_PSEUDO_CLASSES\n\nif TYPE_CHECKING:\n from .types import CSSLocation\n\n\nclass TokenError(Exception):\n \"\"\"Error raised when the CSS cannot be tokenized (syntax error).\"\"\"\n\n def __init__(\n self,\n read_from: CSSLocation,\n code: str,\n start: tuple[int, int],\n message: str,\n end: tuple[int, int] | None = None,\n ) -> None:\n \"\"\"\n Args:\n read_from: The location where the CSS was read from.\n code: The code being parsed.\n start: Line number of the error.\n message: A message associated with the error.\n end: End location of token, or None if not known.\n \"\"\"\n\n self.read_from = read_from\n self.code = code\n self.start = start\n self.end = end or start\n super().__init__(message)\n\n def _get_snippet(self) -> Panel:\n \"\"\"Get a short snippet of code around a given line number.\n\n Returns:\n A renderable.\n \"\"\"\n line_no = self.start[0]\n # TODO: Highlight column number\n syntax = Syntax(\n self.code,\n lexer=\"scss\",\n theme=\"ansi_light\",\n line_numbers=True,\n indent_guides=True,\n line_range=(max(0, line_no - 2), line_no + 2),\n highlight_lines={line_no + 1},\n )\n syntax.stylize_range(\"reverse bold\", self.start, self.end)\n return Panel(syntax, border_style=\"red\")\n\n def __rich__(self) -> RenderableType:\n highlighter = ReprHighlighter()\n errors: list[RenderableType] = []\n\n message = str(self)\n errors.append(Text(\" Error in stylesheet:\", style=\"bold red\"))\n\n line_no, col_no = self.start\n\n path, widget_variable = self.read_from\n if widget_variable:\n css_location = f\" {path}, {widget_variable}:{line_no}:{col_no}\"\n else:\n css_location = f\" {path}:{line_no}:{col_no}\"\n errors.append(highlighter(css_location))\n errors.append(self._get_snippet())\n\n final_message = \"\\n\".join(\n f\"\u2022 {message_part.strip()}\" for message_part in message.split(\";\")\n )\n errors.append(\n Padding(\n highlighter(\n Text(final_message, \"red\"),\n ),\n pad=(0, 1),\n )\n )\n\n return Group(*errors)\n\n\nclass EOFError(TokenError):\n pass\n\n\nclass Expect:\n def __init__(self, **tokens: str) -> None:\n self.names = list(tokens.keys())\n self.regexes = list(tokens.values())\n self._regex = re.compile(\n \"(\"\n + \"|\".join(f\"(?P<{name}>{regex})\" for name, regex in tokens.items())\n + \")\"\n )\n self.match = self._regex.match\n self.search = self._regex.search\n self._expect_eof = False\n\n def expect_eof(self, eof: bool) -> Expect:\n self._expect_eof = eof\n return self\n\n def __rich_repr__(self) -> rich.repr.Result:\n yield from zip(self.names, self.regexes)\n\n\nclass ReferencedBy(NamedTuple):\n name: str\n location: tuple[int, int]\n length: int\n code: str\n\n\[email protected]\nclass Token(NamedTuple):\n name: str\n value: str\n read_from: CSSLocation\n code: str\n 
location: tuple[int, int]\n referenced_by: ReferencedBy | None = None\n\n @property\n def start(self) -> tuple[int, int]:\n \"\"\"Start line and column (1 indexed).\"\"\"\n line, offset = self.location\n return (line + 1, offset)\n\n @property\n def end(self) -> tuple[int, int]:\n \"\"\"End line and column (1 indexed).\"\"\"\n line, offset = self.location\n return (line + 1, offset + len(self.value))\n\n def with_reference(self, by: ReferencedBy | None) -> \"Token\":\n \"\"\"Return a copy of the Token, with reference information attached.\n This is used for variable substitution, where a variable reference\n can refer to tokens which were defined elsewhere. With the additional\n ReferencedBy data attached, we can track where the token we are referring\n to is used.\n \"\"\"\n return Token(\n name=self.name,\n value=self.value,\n read_from=self.read_from,\n code=self.code,\n location=self.location,\n referenced_by=by,\n )\n\n def __str__(self) -> str:\n return self.value\n\n def __rich_repr__(self) -> rich.repr.Result:\n yield \"name\", self.name\n yield \"value\", self.value\n yield (\n \"read_from\",\n self.read_from[0] if not self.read_from[1] else self.read_from,\n )\n yield \"code\", self.code if len(self.code) < 40 else self.code[:40] + \"...\"\n yield \"location\", self.location\n yield \"referenced_by\", self.referenced_by, None\n\n\nclass Tokenizer:\n def __init__(self, text: str, read_from: CSSLocation = (\"\", \"\")) -> None:\n self.read_from = read_from\n self.code = text\n self.lines = text.splitlines(keepends=True)\n self.line_no = 0\n self.col_no = 0\n\n def get_token(self, expect: Expect) -> Token:\n line_no = self.line_no\n col_no = self.col_no\n if line_no >= len(self.lines):\n if expect._expect_eof:\n return Token(\n \"eof\",\n \"\",\n self.read_from,\n self.code,\n (line_no + 1, col_no + 1),\n None,\n )\n else:\n raise EOFError(\n self.read_from,\n self.code,\n (line_no + 1, col_no + 1),\n \"Unexpected end of file\",\n )\n line = self.lines[line_no]\n match = expect.match(line, col_no)\n if match is None:\n expected = friendly_list(\" \".join(name.split(\"_\")) for name in expect.names)\n message = f\"Expected one of {expected}.; Did you forget a semicolon at the end of a line?\"\n raise TokenError(\n self.read_from,\n self.code,\n (line_no, col_no),\n message,\n )\n iter_groups = iter(match.groups())\n\n next(iter_groups)\n\n for name, value in zip(expect.names, iter_groups):\n if value is not None:\n break\n else:\n # For MyPy's benefit\n raise AssertionError(\"can't reach here\")\n\n token = Token(\n name,\n value,\n self.read_from,\n self.code,\n (line_no, col_no),\n referenced_by=None,\n )\n\n if (\n token.name == \"pseudo_class\"\n and token.value.strip(\":\") not in VALID_PSEUDO_CLASSES\n ):\n pseudo_class = token.value.strip(\":\")\n suggestion = get_suggestion(pseudo_class, list(VALID_PSEUDO_CLASSES))\n all_valid = f\"must be one of {friendly_list(VALID_PSEUDO_CLASSES)}\"\n if suggestion:\n raise TokenError(\n self.read_from,\n self.code,\n (line_no, col_no),\n f\"unknown pseudo-class {pseudo_class!r}; did you mean {suggestion!r}?; {all_valid}\",\n )\n else:\n raise TokenError(\n self.read_from,\n self.code,\n (line_no, col_no),\n f\"unknown pseudo-class {pseudo_class!r}; {all_valid}\",\n )\n\n col_no += len(value)\n if col_no >= len(line):\n line_no += 1\n col_no = 0\n self.line_no = line_no\n self.col_no = col_no\n return token\n\n def skip_to(self, expect: Expect) -> Token:\n line_no = self.line_no\n col_no = self.col_no\n\n while True:\n if line_no >= 
len(self.lines):\n raise EOFError(\n self.read_from,\n self.code,\n (line_no, col_no),\n \"Unexpected end of file\",\n )\n line = self.lines[line_no]\n match = expect.search(line, col_no)\n\n if match is None:\n line_no += 1\n col_no = 0\n else:\n self.line_no = line_no\n self.col_no = match.span(0)[0]\n return self.get_token(expect)\n", "path": "src/textual/css/tokenizer.py"}]}
| 3,627 | 826 |
gh_patches_debug_22114
|
rasdani/github-patches
|
git_diff
|
deepchecks__deepchecks-550
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Receiving FutureWarning for each label on Calibration Score check
**Describe the bug**
Receiving FutureWarning for each label on Calibration Score
**To Reproduce**
Run a categorical Dataset on Calibration Score check
**Expected behavior**
No warnings
**Screenshots**

**Environment (please complete the following information):**
- OS: mac
- Python Version: 3.8
- Deepchecks Version: 0.2.1
</issue>
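The accepted fix (shown in the diff further down) removes the deprecated `pos_label` argument by converting the targets to booleans before calling `brier_score_loss`. As a rough, self-contained illustration of that warning-free pattern — the helper name and sample data below are invented for the example and are not part of the repository:

```python
# Sketch only: per-class Brier scores without the deprecated ``pos_label`` arg.
import numpy as np
from sklearn.metrics import brier_score_loss


def per_class_brier_scores(y_true, y_proba, classes):
    """Return a {class_name: brier_score} mapping for a multiclass problem.

    Columns of ``y_proba`` are assumed to follow the order of ``classes``.
    """
    y_true = np.asarray(y_true)
    scores = {}
    for class_index, class_name in enumerate(classes):
        # Boolean targets make the positive class unambiguous, so no
        # ``pos_label`` argument (and no FutureWarning) is needed.
        is_positive = y_true == class_name
        scores[class_name] = brier_score_loss(is_positive, y_proba[:, class_index])
    return scores


if __name__ == "__main__":
    labels = ["cat", "dog", "cat", "bird"]
    probabilities = np.array(
        [
            [0.7, 0.2, 0.1],
            [0.1, 0.8, 0.1],
            [0.6, 0.3, 0.1],
            [0.2, 0.2, 0.6],
        ]
    )
    print(per_class_brier_scores(labels, probabilities, ["cat", "dog", "bird"]))
```

The same boolean-conversion idea is what the patch applies inside the check for both the binary and the multiclass branches.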
<code>
[start of deepchecks/checks/performance/calibration_score.py]
1 # ----------------------------------------------------------------------------
2 # Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)
3 #
4 # This file is part of Deepchecks.
5 # Deepchecks is distributed under the terms of the GNU Affero General
6 # Public License (version 3 or later).
7 # You should have received a copy of the GNU Affero General Public License
8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.
9 # ----------------------------------------------------------------------------
10 #
11 """The calibration score check module."""
12 from sklearn.base import BaseEstimator
13 from sklearn.calibration import calibration_curve
14 from sklearn.metrics import brier_score_loss
15 import plotly.graph_objects as go
16
17 from deepchecks import Dataset, CheckResult, SingleDatasetBaseCheck
18 from deepchecks.utils.metrics import ModelType, task_type_validation
19
20
21 __all__ = ['CalibrationScore']
22
23
24 class CalibrationScore(SingleDatasetBaseCheck):
25 """Calculate the calibration curve with brier score for each class."""
26
27 def run(self, dataset: Dataset, model: BaseEstimator) -> CheckResult:
28 """Run check.
29
30 Args:
31 model (BaseEstimator): A scikit-learn-compatible fitted estimator instance
32 dataset: a Dataset object
33 Returns:
34 CheckResult: value is dictionary of class and it's brier score, displays the calibration curve
35 graph with each class
36
37 Raises:
38 DeepchecksValueError: If the object is not a Dataset instance with a label
39 """
40 return self._calibration_score(dataset, model)
41
42 def _calibration_score(self, dataset: Dataset, model):
43 Dataset.validate_dataset(dataset)
44 dataset.validate_label()
45 task_type_validation(model, dataset, [ModelType.MULTICLASS, ModelType.BINARY])
46
47 ds_x = dataset.features_columns
48 ds_y = dataset.label_col
49 # Expect predict_proba to return in order of the sorted classes.
50 y_pred = model.predict_proba(ds_x)
51
52 briers_scores = {}
53
54 if len(dataset.classes) == 2:
55 briers_scores[0] = brier_score_loss(ds_y, y_pred[:, 1], pos_label=dataset.classes[1])
56 else:
57 for class_index, class_name in enumerate(dataset.classes):
58 prob_pos = y_pred[:, class_index]
59 clf_score = brier_score_loss(ds_y == class_name, prob_pos, pos_label=class_name)
60 briers_scores[class_name] = clf_score
61
62 fig = go.Figure()
63
64 fig.add_trace(go.Scatter(
65 x=[0, 1],
66 y=[0, 1],
67 line_width=2, line_dash='dash',
68 name='Perfectly calibrated',
69 ))
70
71 if len(dataset.classes) == 2:
72 fraction_of_positives, mean_predicted_value = calibration_curve(ds_y, y_pred[:, 1], n_bins=10)
73
74 fig.add_trace(go.Scatter(
75 x=mean_predicted_value,
76 y=fraction_of_positives,
77 mode='lines+markers',
78 name=f'(brier:{briers_scores[0]:9.4f})',
79 ))
80 else:
81 for class_index, class_name in enumerate(dataset.classes):
82 prob_pos = y_pred[:, class_index]
83
84 fraction_of_positives, mean_predicted_value = \
85 calibration_curve(ds_y == class_name, prob_pos, n_bins=10)
86
87 fig.add_trace(go.Scatter(
88 x=mean_predicted_value,
89 y=fraction_of_positives,
90 mode='lines+markers',
91 name=f'{class_name} (brier:{briers_scores[class_name]:9.4f})',
92 ))
93
94 fig.update_layout(title_text='Calibration plots (reliability curve)',
95 width=700, height=500)
96 fig.update_yaxes(title='Fraction of positives')
97 fig.update_xaxes(title='Mean predicted value')
98
99 calibration_text = 'Calibration curves (also known as reliability diagrams) compare how well the ' \
100 'probabilistic predictions of a binary classifier are calibrated. It plots the true ' \
101 'frequency of the positive label against its predicted probability, for binned predictions.'
102 brier_text = 'The Brier score metric may be used to assess how well a classifier is calibrated. For more ' \
103 'info, please visit https://en.wikipedia.org/wiki/Brier_score'
104 return CheckResult(briers_scores, header='Calibration Metric',
105 display=[calibration_text, fig, brier_text])
106
[end of deepchecks/checks/performance/calibration_score.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/deepchecks/checks/performance/calibration_score.py b/deepchecks/checks/performance/calibration_score.py
--- a/deepchecks/checks/performance/calibration_score.py
+++ b/deepchecks/checks/performance/calibration_score.py
@@ -17,7 +17,6 @@
from deepchecks import Dataset, CheckResult, SingleDatasetBaseCheck
from deepchecks.utils.metrics import ModelType, task_type_validation
-
__all__ = ['CalibrationScore']
@@ -52,11 +51,11 @@
briers_scores = {}
if len(dataset.classes) == 2:
- briers_scores[0] = brier_score_loss(ds_y, y_pred[:, 1], pos_label=dataset.classes[1])
+ briers_scores[0] = brier_score_loss(ds_y == dataset.classes[1], y_pred[:, 1])
else:
for class_index, class_name in enumerate(dataset.classes):
prob_pos = y_pred[:, class_index]
- clf_score = brier_score_loss(ds_y == class_name, prob_pos, pos_label=class_name)
+ clf_score = brier_score_loss(ds_y == class_name, prob_pos)
briers_scores[class_name] = clf_score
fig = go.Figure()
|
{"golden_diff": "diff --git a/deepchecks/checks/performance/calibration_score.py b/deepchecks/checks/performance/calibration_score.py\n--- a/deepchecks/checks/performance/calibration_score.py\n+++ b/deepchecks/checks/performance/calibration_score.py\n@@ -17,7 +17,6 @@\n from deepchecks import Dataset, CheckResult, SingleDatasetBaseCheck\n from deepchecks.utils.metrics import ModelType, task_type_validation\n \n-\n __all__ = ['CalibrationScore']\n \n \n@@ -52,11 +51,11 @@\n briers_scores = {}\n \n if len(dataset.classes) == 2:\n- briers_scores[0] = brier_score_loss(ds_y, y_pred[:, 1], pos_label=dataset.classes[1])\n+ briers_scores[0] = brier_score_loss(ds_y == dataset.classes[1], y_pred[:, 1])\n else:\n for class_index, class_name in enumerate(dataset.classes):\n prob_pos = y_pred[:, class_index]\n- clf_score = brier_score_loss(ds_y == class_name, prob_pos, pos_label=class_name)\n+ clf_score = brier_score_loss(ds_y == class_name, prob_pos)\n briers_scores[class_name] = clf_score\n \n fig = go.Figure()\n", "issue": "[BUG] Receiving FutureWarning for each label on Calibration Score check\n**Describe the bug**\r\nReceiving FutureWarning for each label on Calibration Score\r\n\r\n**To Reproduce**\r\nRun a categorical Dataset on Calibration Score check\r\n\r\n**Expected behavior**\r\nNo warnings\r\n\r\n**Screenshots**\r\n\r\n\r\n\r\n**Environment (please complete the following information):**\r\n - OS: mac\r\n - Python Version: 3.8\r\n - Deepchecks Version: 0.2.1\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. 
If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"The calibration score check module.\"\"\"\nfrom sklearn.base import BaseEstimator\nfrom sklearn.calibration import calibration_curve\nfrom sklearn.metrics import brier_score_loss\nimport plotly.graph_objects as go\n\nfrom deepchecks import Dataset, CheckResult, SingleDatasetBaseCheck\nfrom deepchecks.utils.metrics import ModelType, task_type_validation\n\n\n__all__ = ['CalibrationScore']\n\n\nclass CalibrationScore(SingleDatasetBaseCheck):\n \"\"\"Calculate the calibration curve with brier score for each class.\"\"\"\n\n def run(self, dataset: Dataset, model: BaseEstimator) -> CheckResult:\n \"\"\"Run check.\n\n Args:\n model (BaseEstimator): A scikit-learn-compatible fitted estimator instance\n dataset: a Dataset object\n Returns:\n CheckResult: value is dictionary of class and it's brier score, displays the calibration curve\n graph with each class\n\n Raises:\n DeepchecksValueError: If the object is not a Dataset instance with a label\n \"\"\"\n return self._calibration_score(dataset, model)\n\n def _calibration_score(self, dataset: Dataset, model):\n Dataset.validate_dataset(dataset)\n dataset.validate_label()\n task_type_validation(model, dataset, [ModelType.MULTICLASS, ModelType.BINARY])\n\n ds_x = dataset.features_columns\n ds_y = dataset.label_col\n # Expect predict_proba to return in order of the sorted classes.\n y_pred = model.predict_proba(ds_x)\n\n briers_scores = {}\n\n if len(dataset.classes) == 2:\n briers_scores[0] = brier_score_loss(ds_y, y_pred[:, 1], pos_label=dataset.classes[1])\n else:\n for class_index, class_name in enumerate(dataset.classes):\n prob_pos = y_pred[:, class_index]\n clf_score = brier_score_loss(ds_y == class_name, prob_pos, pos_label=class_name)\n briers_scores[class_name] = clf_score\n\n fig = go.Figure()\n\n fig.add_trace(go.Scatter(\n x=[0, 1],\n y=[0, 1],\n line_width=2, line_dash='dash',\n name='Perfectly calibrated',\n ))\n\n if len(dataset.classes) == 2:\n fraction_of_positives, mean_predicted_value = calibration_curve(ds_y, y_pred[:, 1], n_bins=10)\n\n fig.add_trace(go.Scatter(\n x=mean_predicted_value,\n y=fraction_of_positives,\n mode='lines+markers',\n name=f'(brier:{briers_scores[0]:9.4f})',\n ))\n else:\n for class_index, class_name in enumerate(dataset.classes):\n prob_pos = y_pred[:, class_index]\n\n fraction_of_positives, mean_predicted_value = \\\n calibration_curve(ds_y == class_name, prob_pos, n_bins=10)\n\n fig.add_trace(go.Scatter(\n x=mean_predicted_value,\n y=fraction_of_positives,\n mode='lines+markers',\n name=f'{class_name} (brier:{briers_scores[class_name]:9.4f})',\n ))\n\n fig.update_layout(title_text='Calibration plots (reliability curve)',\n width=700, height=500)\n fig.update_yaxes(title='Fraction of positives')\n fig.update_xaxes(title='Mean predicted value')\n\n calibration_text = 'Calibration curves (also known as reliability diagrams) compare how well the ' \\\n 'probabilistic predictions of a binary classifier are calibrated. It plots the true ' \\\n 'frequency of the positive label against its predicted probability, for binned predictions.'\n brier_text = 'The Brier score metric may be used to assess how well a classifier is calibrated. 
For more ' \\\n 'info, please visit https://en.wikipedia.org/wiki/Brier_score'\n return CheckResult(briers_scores, header='Calibration Metric',\n display=[calibration_text, fig, brier_text])\n", "path": "deepchecks/checks/performance/calibration_score.py"}]}
| 1,875 | 279 |
gh_patches_debug_5367
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-2344
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'NoneType' object has no attribute 'values'
Sentry issue https://sentry.io/organizations/grand-challenge/issues/3127690895/?project=303639&query=is%3Aunresolved
```
slugs = {slug for viewport in mapping.values() for slug in viewport}
```
Added in https://github.com/comic/grand-challenge.org/pull/2322
</issue>
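The traceback points at `mapping.values()` being evaluated while the cleaned `view_content` value is `None`. A minimal standalone sketch of the guard the accepted patch applies — the function below is illustrative and not the project's real form method:

```python
# Sketch only: guard a cleaned JSON field that may legitimately be ``None``.
def collect_slugs(cleaned_data: dict) -> set:
    # ``or {}`` turns a null/missing value into an empty mapping, so the set
    # comprehension below never calls ``.values()`` on ``None``.
    mapping = cleaned_data.get("view_content") or {}
    return {slug for viewport in mapping.values() for slug in viewport}


if __name__ == "__main__":
    print(collect_slugs({"view_content": None}))                      # set()
    print(collect_slugs({"view_content": {"main": ["interface1"]}}))  # {'interface1'}
```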
<code>
[start of app/grandchallenge/hanging_protocols/forms.py]
1 from django import forms
2
3 from grandchallenge.components.models import ComponentInterface
4 from grandchallenge.core.forms import SaveFormInitMixin
5 from grandchallenge.core.widgets import JSONEditorWidget
6 from grandchallenge.hanging_protocols.models import (
7 HANGING_PROTOCOL_SCHEMA,
8 VIEW_CONTENT_SCHEMA,
9 HangingProtocol,
10 )
11
12
13 class HangingProtocolForm(SaveFormInitMixin, forms.ModelForm):
14 class Meta:
15 model = HangingProtocol
16 fields = ("title", "description", "json")
17 widgets = {"json": JSONEditorWidget(schema=HANGING_PROTOCOL_SCHEMA)}
18 help_texts = {
19 "json": (
20 "To display a single image in full size, define the "
21 "protocol as follows: "
22 '[{"viewport_name": "main", "x": 0,"y": 0,"w": 1,"h": 1,'
23 '"fullsizable": true,"draggable": false,"selectable": true,'
24 '"order": 0}]'
25 )
26 }
27
28
29 class ViewContentMixin:
30 def clean_view_content(self):
31 mapping = self.cleaned_data["view_content"]
32 hanging_protocol = self.cleaned_data["hanging_protocol"]
33 if mapping and not hanging_protocol:
34 self.add_error(
35 error="Please select a hanging protocol before filling this field.",
36 field="view_content",
37 )
38
39 if mapping and hanging_protocol:
40 if set(mapping.keys()) != {
41 x["viewport_name"] for x in hanging_protocol.json
42 }:
43 self.add_error(
44 error=(
45 "Image ports in view_content do not match "
46 "those in the selected hanging protocol."
47 ),
48 field="view_content",
49 )
50
51 slugs = {slug for viewport in mapping.values() for slug in viewport}
52 unknown = []
53 for slug in slugs:
54 if not ComponentInterface.objects.filter(slug=slug).exists():
55 unknown.append(slug)
56 if len(unknown) > 0:
57 self.add_error(
58 error=f"Unkown slugs in view_content: {', '.join(unknown)}",
59 field="view_content",
60 )
61
62 return mapping
63
64 class Meta:
65 widgets = {
66 "view_content": JSONEditorWidget(schema=VIEW_CONTENT_SCHEMA),
67 }
68 help_texts = {
69 "view_content": (
70 "Indicate which Component Interfaces need to be displayed in "
71 'which image port. E.g. {"main": ["interface1"]}. The first '
72 "item in the list of interfaces will be the main image in "
73 "the image port. The first overlay type interface thereafter "
74 "will be rendered as an overlay. For now, any other items "
75 "will be ignored by the viewer."
76 )
77 }
78
[end of app/grandchallenge/hanging_protocols/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/hanging_protocols/forms.py b/app/grandchallenge/hanging_protocols/forms.py
--- a/app/grandchallenge/hanging_protocols/forms.py
+++ b/app/grandchallenge/hanging_protocols/forms.py
@@ -28,7 +28,7 @@
class ViewContentMixin:
def clean_view_content(self):
- mapping = self.cleaned_data["view_content"]
+ mapping = self.cleaned_data["view_content"] or {}
hanging_protocol = self.cleaned_data["hanging_protocol"]
if mapping and not hanging_protocol:
self.add_error(
|
{"golden_diff": "diff --git a/app/grandchallenge/hanging_protocols/forms.py b/app/grandchallenge/hanging_protocols/forms.py\n--- a/app/grandchallenge/hanging_protocols/forms.py\n+++ b/app/grandchallenge/hanging_protocols/forms.py\n@@ -28,7 +28,7 @@\n \r\n class ViewContentMixin:\r\n def clean_view_content(self):\r\n- mapping = self.cleaned_data[\"view_content\"]\r\n+ mapping = self.cleaned_data[\"view_content\"] or {}\r\n hanging_protocol = self.cleaned_data[\"hanging_protocol\"]\r\n if mapping and not hanging_protocol:\r\n self.add_error(\n", "issue": "'NoneType' object has no attribute 'values'\nSentry issue https://sentry.io/organizations/grand-challenge/issues/3127690895/?project=303639&query=is%3Aunresolved\r\n\r\n```\r\nslugs = {slug for viewport in mapping.values() for slug in viewport}\r\n```\r\n\r\nAdded in https://github.com/comic/grand-challenge.org/pull/2322\n", "before_files": [{"content": "from django import forms\r\n\r\nfrom grandchallenge.components.models import ComponentInterface\r\nfrom grandchallenge.core.forms import SaveFormInitMixin\r\nfrom grandchallenge.core.widgets import JSONEditorWidget\r\nfrom grandchallenge.hanging_protocols.models import (\r\n HANGING_PROTOCOL_SCHEMA,\r\n VIEW_CONTENT_SCHEMA,\r\n HangingProtocol,\r\n)\r\n\r\n\r\nclass HangingProtocolForm(SaveFormInitMixin, forms.ModelForm):\r\n class Meta:\r\n model = HangingProtocol\r\n fields = (\"title\", \"description\", \"json\")\r\n widgets = {\"json\": JSONEditorWidget(schema=HANGING_PROTOCOL_SCHEMA)}\r\n help_texts = {\r\n \"json\": (\r\n \"To display a single image in full size, define the \"\r\n \"protocol as follows: \"\r\n '[{\"viewport_name\": \"main\", \"x\": 0,\"y\": 0,\"w\": 1,\"h\": 1,'\r\n '\"fullsizable\": true,\"draggable\": false,\"selectable\": true,'\r\n '\"order\": 0}]'\r\n )\r\n }\r\n\r\n\r\nclass ViewContentMixin:\r\n def clean_view_content(self):\r\n mapping = self.cleaned_data[\"view_content\"]\r\n hanging_protocol = self.cleaned_data[\"hanging_protocol\"]\r\n if mapping and not hanging_protocol:\r\n self.add_error(\r\n error=\"Please select a hanging protocol before filling this field.\",\r\n field=\"view_content\",\r\n )\r\n\r\n if mapping and hanging_protocol:\r\n if set(mapping.keys()) != {\r\n x[\"viewport_name\"] for x in hanging_protocol.json\r\n }:\r\n self.add_error(\r\n error=(\r\n \"Image ports in view_content do not match \"\r\n \"those in the selected hanging protocol.\"\r\n ),\r\n field=\"view_content\",\r\n )\r\n\r\n slugs = {slug for viewport in mapping.values() for slug in viewport}\r\n unknown = []\r\n for slug in slugs:\r\n if not ComponentInterface.objects.filter(slug=slug).exists():\r\n unknown.append(slug)\r\n if len(unknown) > 0:\r\n self.add_error(\r\n error=f\"Unkown slugs in view_content: {', '.join(unknown)}\",\r\n field=\"view_content\",\r\n )\r\n\r\n return mapping\r\n\r\n class Meta:\r\n widgets = {\r\n \"view_content\": JSONEditorWidget(schema=VIEW_CONTENT_SCHEMA),\r\n }\r\n help_texts = {\r\n \"view_content\": (\r\n \"Indicate which Component Interfaces need to be displayed in \"\r\n 'which image port. E.g. {\"main\": [\"interface1\"]}. The first '\r\n \"item in the list of interfaces will be the main image in \"\r\n \"the image port. The first overlay type interface thereafter \"\r\n \"will be rendered as an overlay. For now, any other items \"\r\n \"will be ignored by the viewer.\"\r\n )\r\n }\r\n", "path": "app/grandchallenge/hanging_protocols/forms.py"}]}
| 1,361 | 126 |
gh_patches_debug_12704
|
rasdani/github-patches
|
git_diff
|
pypa__pip-6725
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
manpage documentation is missing all subcommands
Initially reported in Arch Linux as: https://bugs.archlinux.org/task/63223
To reproduce:
```
git clone https://github.com/pypa/pip/
cd pip/docs
PYTHONPATH=$PWD/../src/ sphinx-build -W -b man -d build/doctrees/man man build/man -c html
```
Look in build/man and you will see only one manpage: pip.1
Really quick reproducer: look at a recent Travis CI build for the TOXENV=docs results, for example https://travis-ci.org/pypa/pip/jobs/559973823#L388, and see only one file being written out.
Expectation: There should be lots of manpages, one for each pip subcommand, and linux distro packages which install the docs/build/man/ directory to /usr/share/man/man1/ should be able to read all about pip's many excellent features in their offline documentation reader.
The cause of this breakage is https://github.com/pypa/pip/pull/5724, which reorganized the conf.py layout due to https://github.com/readthedocs/readthedocs.org/issues/1543 but did not adapt the somewhat hacky code to automatically add new entries.
</issue>
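The relative `glob.glob('man/commands/*.rst')` call depends on whatever working directory the build happens to run from, so it can silently match nothing and leave only the top-level `pip.1` page. A rough sketch of the repair — paths and messages below are illustrative, not the repository's exact code — is to glob against an absolute directory derived from the conf file and fail loudly when no subcommand sources are found:

```python
# Sketch only: build Sphinx ``man_pages`` entries from an absolute glob.
import glob
import os

# Resolve the man-page source directory relative to this file, not the cwd.
docs_dir = os.path.dirname(os.path.abspath(__file__))
man_dir = os.path.join(docs_dir, "man")

man_pages = [
    ("index", "pip", "package manager for Python packages", "pip developers", 1),
]

subcommand_sources = glob.glob(os.path.join(man_dir, "commands", "*.rst"))
if not subcommand_sources:
    raise FileNotFoundError("no per-subcommand .rst files found under man/commands/")

for fname in subcommand_sources:
    fname_base = os.path.relpath(fname, man_dir)[: -len(".rst")]  # e.g. commands/install
    outname = "pip-" + os.path.basename(fname_base)               # e.g. pip-install
    description = "description of {} command".format(outname.replace("-", " "))
    man_pages.append((fname_base, outname, description, "pip developers", 1))
```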
<code>
[start of docs/html/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # pip documentation build configuration file, created by
4 # sphinx-quickstart on Tue Apr 22 22:08:49 2008
5 #
6 # This file is execfile()d with the current directory set to its containing dir
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import glob
15 import os
16 import re
17 import sys
18
19 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
20
21 docs_dir = os.path.dirname(os.path.dirname(__file__))
22 # If extensions (or modules to document with autodoc) are in another directory,
23 # add these directories to sys.path here. If the directory is relative to the
24 # documentation root, use os.path.abspath to make it absolute, like shown here.
25 sys.path.insert(0, docs_dir)
26 # sys.path.append(os.path.join(os.path.dirname(__file__), '../'))
27
28 # -- General configuration ----------------------------------------------------
29
30 # Add any Sphinx extension module names here, as strings. They can be
31 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
32 # extensions = ['sphinx.ext.autodoc']
33 extensions = ['sphinx.ext.extlinks', 'pip_sphinxext', 'sphinx.ext.intersphinx']
34
35 # intersphinx
36 intersphinx_cache_limit = 0
37 intersphinx_mapping = {
38 'pypug': ('https://packaging.python.org/', None),
39 'pypa': ('https://www.pypa.io/en/latest/', None),
40 }
41
42
43 # Add any paths that contain templates here, relative to this directory.
44 templates_path = []
45
46 # The suffix of source filenames.
47 source_suffix = '.rst'
48
49 # The encoding of source files.
50 # source_encoding = 'utf-8'
51
52 # The master toctree document.
53 master_doc = 'index'
54
55 # General information about the project.
56 project = 'pip'
57 copyright = '2008-2017, PyPA'
58
59 # The version info for the project you're documenting, acts as replacement for
60 # |version| and |release|, also used in various other places throughout the
61 # built documents.
62 #
63 # The short X.Y version.
64
65 version = release = 'dev'
66
67 # Readthedocs seems to install pip as an egg (via setup.py install) which
68 # is somehow resulting in "import pip" picking up an older copy of pip.
69 # Rather than trying to force RTD to install pip properly, we'll simply
70 # read the version direct from the __init__.py file. (Yes, this is
71 # fragile, but it works...)
72
73 pip_init = os.path.join(docs_dir, '..', 'src', 'pip', '__init__.py')
74 with open(pip_init) as f:
75 for line in f:
76 m = re.match(r'__version__ = "(.*)"', line)
77 if m:
78 __version__ = m.group(1)
79 # The short X.Y version.
80 version = '.'.join(__version__.split('.')[:2])
81 # The full version, including alpha/beta/rc tags.
82 release = __version__
83 break
84
85 # We have this here because readthedocs plays tricks sometimes and there seems
86 # to be a heisenbug, related to the version of pip discovered. This is here to
87 # help debug that if someone decides to do that in the future.
88 print(version)
89
90 # The language for content autogenerated by Sphinx. Refer to documentation
91 # for a list of supported languages.
92 # language = None
93
94 # There are two options for replacing |today|: either, you set today to some
95 # non-false value, then it is used:
96 # today = ''
97 # Else, today_fmt is used as the format for a strftime call.
98 today_fmt = '%B %d, %Y'
99
100 # List of documents that shouldn't be included in the build.
101 # unused_docs = []
102
103 # List of directories, relative to source directory, that shouldn't be searched
104 # for source files.
105 exclude_patterns = ['build/']
106
107 # The reST default role (used for this markup: `text`) to use for all documents
108 # default_role = None
109
110 # If true, '()' will be appended to :func: etc. cross-reference text.
111 # add_function_parentheses = True
112
113 # If true, the current module name will be prepended to all description
114 # unit titles (such as .. function::).
115 # add_module_names = True
116
117 # If true, sectionauthor and moduleauthor directives will be shown in the
118 # output. They are ignored by default.
119 # show_authors = False
120
121 # The name of the Pygments (syntax highlighting) style to use.
122 pygments_style = 'sphinx'
123
124 # A list of ignored prefixes for module index sorting.
125 # modindex_common_prefix = []
126
127 extlinks = {
128 'issue': ('https://github.com/pypa/pip/issues/%s', '#'),
129 'pull': ('https://github.com/pypa/pip/pull/%s', 'PR #'),
130 'pypi': ('https://pypi.org/project/%s', ''),
131 }
132
133 # -- Options for HTML output --------------------------------------------------
134
135 # The theme to use for HTML and HTML Help pages. Major themes that come with
136 # Sphinx are currently 'default' and 'sphinxdoc'.
137 html_theme = "pypa_theme"
138
139 # Theme options are theme-specific and customize the look and feel of a theme
140 # further. For a list of options available for each theme, see the
141 # documentation.
142 html_theme_options = {
143 'collapsiblesidebar': True,
144 'externalrefs': True,
145 'navigation_depth': 3,
146 'issues_url': 'https://github.com/pypa/pip/issues'
147 }
148
149 # Add any paths that contain custom themes here, relative to this directory.
150
151 # The name for this set of Sphinx documents. If None, it defaults to
152 # "<project> v<release> documentation".
153 # html_title = None
154
155 # A shorter title for the navigation bar. Default is the same as html_title.
156 # html_short_title = None
157
158 # The name of an image file (relative to this directory) to place at the top
159 # of the sidebar.
160 # html_logo = '_static/piplogo.png'
161
162 # The name of an image file (within the static path) to use as favicon of the
163 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
164 # pixels large.
165 # html_favicon = 'favicon.png'
166
167 # Add any paths that contain custom static files (such as style sheets) here,
168 # relative to this directory. They are copied after the builtin static files,
169 # so a file named "default.css" will overwrite the builtin "default.css".
170 html_static_path = []
171
172 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
173 # using the given strftime format.
174 html_last_updated_fmt = '%b %d, %Y'
175
176 # If true, the Docutils Smart Quotes transform (originally based on
177 # SmartyPants) will be used to convert characters like quotes and dashes
178 # to typographically correct entities. The default is True.
179 smartquotes = True
180
181 # This string, for use with Docutils 0.14 or later, customizes the
182 # SmartQuotes transform. The default of "qDe" converts normal quote
183 # characters ('"' and "'"), en and em dashes ("--" and "---"), and
184 # ellipses "...".
185 # For now, we disable the conversion of dashes so that long options
186 # like "--find-links" won't render as "-find-links" if included in the
187 # text in places where monospaced type can't be used. For example, backticks
188 # can't be used inside roles like :ref:`--no-index <--no-index>` because
189 # of nesting.
190 smartquotes_action = "qe"
191
192 # Custom sidebar templates, maps document names to template names.
193 html_sidebars = {
194 '**': ['localtoc.html', 'relations.html'],
195 'index': ['localtoc.html']
196 }
197
198 # Additional templates that should be rendered to pages, maps page names to
199 # template names.
200 # html_additional_pages = {}
201
202 # If false, no module index is generated.
203 html_use_modindex = False
204
205 # If false, no index is generated.
206 html_use_index = False
207
208 # If true, the index is split into individual pages for each letter.
209 # html_split_index = False
210
211 # If true, links to the reST sources are added to the pages.
212 html_show_sourcelink = False
213
214 # If true, an OpenSearch description file will be output, and all pages will
215 # contain a <link> tag referring to it. The value of this option must be the
216 # base URL from which the finished HTML is served.
217 # html_use_opensearch = ''
218
219 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
220 # html_file_suffix = ''
221
222 # Output file base name for HTML help builder.
223 htmlhelp_basename = 'pipdocs'
224
225
226 # -- Options for LaTeX output -------------------------------------------------
227
228 # The paper size ('letter' or 'a4').
229 # latex_paper_size = 'letter'
230
231 # The font size ('10pt', '11pt' or '12pt').
232 # latex_font_size = '10pt'
233
234 # Grouping the document tree into LaTeX files. List of tuples
235 # (source start file, target name, title, author, documentclass [howto/manual])
236 latex_documents = [
237 (
238 'index',
239 'pip.tex',
240 u'pip Documentation',
241 u'pip developers',
242 'manual',
243 ),
244 ]
245
246 # The name of an image file (relative to this directory) to place at the top of
247 # the title page.
248 # latex_logo = None
249
250 # For "manual" documents, if this is true, then toplevel headings are parts,
251 # not chapters.
252 # latex_use_parts = False
253
254 # Additional stuff for the LaTeX preamble.
255 # latex_preamble = ''
256
257 # Documents to append as an appendix to all manuals.
258 # latex_appendices = []
259
260 # If false, no module index is generated.
261 # latex_use_modindex = True
262
263 # -- Options for Manual Pages -------------------------------------------------
264
265 # List of manual pages generated
266 man_pages = [
267 (
268 'index',
269 'pip',
270 u'package manager for Python packages',
271 u'pip developers',
272 1
273 )
274 ]
275
276 # Here, we crawl the entire man/commands/ directory and list every file with
277 # appropriate name and details
278 for fname in glob.glob('man/commands/*.rst'):
279 fname_base = fname[4:-4]
280 outname = 'pip-' + fname_base[13:]
281 description = u'description of {} command'.format(
282 outname.replace('-', ' ')
283 )
284
285 man_pages.append((fname_base, outname, description, u'pip developers', 1))
286
[end of docs/html/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/html/conf.py b/docs/html/conf.py
--- a/docs/html/conf.py
+++ b/docs/html/conf.py
@@ -275,9 +275,13 @@
# Here, we crawl the entire man/commands/ directory and list every file with
# appropriate name and details
-for fname in glob.glob('man/commands/*.rst'):
- fname_base = fname[4:-4]
- outname = 'pip-' + fname_base[13:]
+man_dir = os.path.join(docs_dir, 'man/')
+raw_subcommands = glob.glob(os.path.join(man_dir, 'commands/*.rst'))
+if not raw_subcommands:
+ raise FileNotFoundError('The individual subcommand manpages could not be found!')
+for fname in raw_subcommands:
+ fname_base = fname[len(man_dir):-4]
+ outname = 'pip-' + fname_base[9:]
description = u'description of {} command'.format(
outname.replace('-', ' ')
)
|
{"golden_diff": "diff --git a/docs/html/conf.py b/docs/html/conf.py\n--- a/docs/html/conf.py\n+++ b/docs/html/conf.py\n@@ -275,9 +275,13 @@\n \n # Here, we crawl the entire man/commands/ directory and list every file with\n # appropriate name and details\n-for fname in glob.glob('man/commands/*.rst'):\n- fname_base = fname[4:-4]\n- outname = 'pip-' + fname_base[13:]\n+man_dir = os.path.join(docs_dir, 'man/')\n+raw_subcommands = glob.glob(os.path.join(man_dir, 'commands/*.rst'))\n+if not raw_subcommands:\n+ raise FileNotFoundError('The individual subcommand manpages could not be found!')\n+for fname in raw_subcommands:\n+ fname_base = fname[len(man_dir):-4]\n+ outname = 'pip-' + fname_base[9:]\n description = u'description of {} command'.format(\n outname.replace('-', ' ')\n )\n", "issue": "manpage documentation is missing all subcommands\nInitially reported in Arch Linux as: https://bugs.archlinux.org/task/63223\r\n\r\nTo reproduce:\r\n```\r\ngit clone https://github.com/pypa/pip/\r\ncd pip/docs\r\nPYTHONPATH=$PWD/../src/ sphinx-build -W -b man -d build/doctrees/man man build/man -c html\r\n```\r\n\r\nLook in build/man and you will see only one manpage: pip.1\r\n\r\nReally quick reproducer: look at a recent Travis CI build for the TOXENV=docs results, for example https://travis-ci.org/pypa/pip/jobs/559973823#L388, and see only one file being written out.\r\n\r\nExpectation: There should be lots of manpages, one for each pip subcommand, and linux distro packages which install the docs/build/man/ directory to /usr/share/man/man1/ should be able to read all about pip's many excellent features in their offline documentation reader.\r\n\r\nThe cause of this breakage is https://github.com/pypa/pip/pull/5724, which reorganized the conf.py layout due to https://github.com/readthedocs/readthedocs.org/issues/1543 but did not adapt the somewhat hacky code to automatically add new entries.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# pip documentation build configuration file, created by\n# sphinx-quickstart on Tue Apr 22 22:08:49 2008\n#\n# This file is execfile()d with the current directory set to its containing dir\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport glob\nimport os\nimport re\nimport sys\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\ndocs_dir = os.path.dirname(os.path.dirname(__file__))\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, docs_dir)\n# sys.path.append(os.path.join(os.path.dirname(__file__), '../'))\n\n# -- General configuration ----------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\n# extensions = ['sphinx.ext.autodoc']\nextensions = ['sphinx.ext.extlinks', 'pip_sphinxext', 'sphinx.ext.intersphinx']\n\n# intersphinx\nintersphinx_cache_limit = 0\nintersphinx_mapping = {\n 'pypug': ('https://packaging.python.org/', None),\n 'pypa': ('https://www.pypa.io/en/latest/', None),\n}\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'pip'\ncopyright = '2008-2017, PyPA'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\n\nversion = release = 'dev'\n\n# Readthedocs seems to install pip as an egg (via setup.py install) which\n# is somehow resulting in \"import pip\" picking up an older copy of pip.\n# Rather than trying to force RTD to install pip properly, we'll simply\n# read the version direct from the __init__.py file. (Yes, this is\n# fragile, but it works...)\n\npip_init = os.path.join(docs_dir, '..', 'src', 'pip', '__init__.py')\nwith open(pip_init) as f:\n for line in f:\n m = re.match(r'__version__ = \"(.*)\"', line)\n if m:\n __version__ = m.group(1)\n # The short X.Y version.\n version = '.'.join(__version__.split('.')[:2])\n # The full version, including alpha/beta/rc tags.\n release = __version__\n break\n\n# We have this here because readthedocs plays tricks sometimes and there seems\n# to be a heisenbug, related to the version of pip discovered. This is here to\n# help debug that if someone decides to do that in the future.\nprint(version)\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n# language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\ntoday_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n# unused_docs = []\n\n# List of directories, relative to source directory, that shouldn't be searched\n# for source files.\nexclude_patterns = ['build/']\n\n# The reST default role (used for this markup: `text`) to use for all documents\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\nextlinks = {\n 'issue': ('https://github.com/pypa/pip/issues/%s', '#'),\n 'pull': ('https://github.com/pypa/pip/pull/%s', 'PR #'),\n 'pypi': ('https://pypi.org/project/%s', ''),\n}\n\n# -- Options for HTML output --------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
Major themes that come with\n# Sphinx are currently 'default' and 'sphinxdoc'.\nhtml_theme = \"pypa_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\nhtml_theme_options = {\n 'collapsiblesidebar': True,\n 'externalrefs': True,\n 'navigation_depth': 3,\n 'issues_url': 'https://github.com/pypa/pip/issues'\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = '_static/piplogo.png'\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = 'favicon.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\nhtml_last_updated_fmt = '%b %d, %Y'\n\n# If true, the Docutils Smart Quotes transform (originally based on\n# SmartyPants) will be used to convert characters like quotes and dashes\n# to typographically correct entities. The default is True.\nsmartquotes = True\n\n# This string, for use with Docutils 0.14 or later, customizes the\n# SmartQuotes transform. The default of \"qDe\" converts normal quote\n# characters ('\"' and \"'\"), en and em dashes (\"--\" and \"---\"), and\n# ellipses \"...\".\n# For now, we disable the conversion of dashes so that long options\n# like \"--find-links\" won't render as \"-find-links\" if included in the\n# text in places where monospaced type can't be used. For example, backticks\n# can't be used inside roles like :ref:`--no-index <--no-index>` because\n# of nesting.\nsmartquotes_action = \"qe\"\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n '**': ['localtoc.html', 'relations.html'],\n 'index': ['localtoc.html']\n}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\nhtml_use_modindex = False\n\n# If false, no index is generated.\nhtml_use_index = False\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# If nonempty, this is the file name suffix for HTML files (e.g. 
\".xhtml\").\n# html_file_suffix = ''\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'pipdocs'\n\n\n# -- Options for LaTeX output -------------------------------------------------\n\n# The paper size ('letter' or 'a4').\n# latex_paper_size = 'letter'\n\n# The font size ('10pt', '11pt' or '12pt').\n# latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual])\nlatex_documents = [\n (\n 'index',\n 'pip.tex',\n u'pip Documentation',\n u'pip developers',\n 'manual',\n ),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# Additional stuff for the LaTeX preamble.\n# latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_use_modindex = True\n\n# -- Options for Manual Pages -------------------------------------------------\n\n# List of manual pages generated\nman_pages = [\n (\n 'index',\n 'pip',\n u'package manager for Python packages',\n u'pip developers',\n 1\n )\n]\n\n# Here, we crawl the entire man/commands/ directory and list every file with\n# appropriate name and details\nfor fname in glob.glob('man/commands/*.rst'):\n fname_base = fname[4:-4]\n outname = 'pip-' + fname_base[13:]\n description = u'description of {} command'.format(\n outname.replace('-', ' ')\n )\n\n man_pages.append((fname_base, outname, description, u'pip developers', 1))\n", "path": "docs/html/conf.py"}]}
| 3,953 | 215 |
gh_patches_debug_24549
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-8031
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
EL log files should not contain secrets
The full database URI is shown in debug logs:
```console
$ export MELTANO_CLI_LOG_LEVEL=debug
$ meltano invoke my-tap
2022-09-07T16:34:57.234152Z [info ] Environment 'dev' is active
2022-09-07T16:34:57.338859Z [debug ] Creating engine <meltano.core.project.Project object at 0x10e9702e0>@postgresql://***********
```
I redacted the username, password, and the other connection details from the Postgres URI above.
The full environment variables mapping log message may also contain secrets:
```console
2022-09-07T16:35:01.443284Z [debug ] Env: {'USER': ...
```
_Raised by Tomas B in Office Hours._
</issue>
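One way to keep the log line useful without leaking credentials — sketched here as a standalone helper with an invented name, but following the same `urllib.parse` approach as the patch further down — is to mask the userinfo portion of the URI before formatting it:

```python
# Sketch only: mask credentials in a database URI before logging it.
from urllib.parse import urlparse


def sanitize_db_uri(uri: str) -> str:
    parsed = urlparse(uri)
    if parsed.password:          # user:password@host
        netloc = f"{parsed.username}:********@{parsed.hostname or ''}"
    elif parsed.username:        # token-style auth, e.g. token@host
        netloc = f"********@{parsed.hostname or ''}"
    else:                        # no credentials at all
        netloc = parsed.hostname or ""
    if parsed.port:
        netloc = f"{netloc}:{parsed.port}"
    # ParseResult is a namedtuple, so ``_replace`` swaps in the masked netloc.
    return parsed._replace(netloc=netloc).geturl()


if __name__ == "__main__":
    print(sanitize_db_uri("postgresql://meltano:s3cr3t@db.example.com:5432/warehouse"))
    # -> postgresql://meltano:********@db.example.com:5432/warehouse
```

Only the sanitized form would then be interpolated into the debug message; the raw URI is still passed to the engine factory unchanged.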
<code>
[start of src/meltano/core/db.py]
1 """Defines helpers related to the system database."""
2
3 from __future__ import annotations
4
5 import logging
6 import time
7
8 from sqlalchemy import create_engine
9 from sqlalchemy.engine import Connection, Engine
10 from sqlalchemy.exc import OperationalError
11 from sqlalchemy.orm import sessionmaker
12 from sqlalchemy.pool import NullPool
13 from sqlalchemy.sql import text
14
15 from meltano.core.error import MeltanoError
16 from meltano.core.project import Project
17
18 # Keep a Project → Engine mapping to serve
19 # the same engine for the same Project
20 _engines = {}
21
22
23 class MeltanoDatabaseCompatibilityError(MeltanoError):
24 """Raised when the database is not compatible with Meltano."""
25
26 INSTRUCTION = (
27 "Upgrade your database to be compatible with Meltano or use a different "
28 "database"
29 )
30
31 def __init__(self, reason: str):
32 """Initialize the error with a reason.
33
34 Args:
35 reason: The reason why the database is not compatible.
36 """
37 super().__init__(reason, self.INSTRUCTION)
38
39
40 class NullConnectionStringError(MeltanoError):
41 """Raised when the database is not compatible with Meltano."""
42
43 REASON = "The `database_uri` setting has a null value"
44 INSTRUCTION = (
45 "Verify that the `database_uri` setting points to a valid database connection "
46 "URI, or use `MELTANO_FF_STRICT_ENV_VAR_MODE=1 meltano config meltano list` "
47 "to check for missing environment variables"
48 )
49
50 def __init__(self):
51 """Initialize the exception."""
52 super().__init__(self.REASON, self.INSTRUCTION)
53
54
55 def project_engine(
56 project: Project,
57 default: bool = False,
58 ) -> tuple[Engine, sessionmaker]:
59 """Create and register a SQLAlchemy engine for a Meltano project instance.
60
61 Args:
62 project: The Meltano project that the engine will be connected to.
63 default: Whether the engine created should be stored as the default
64 engine for this project.
65
66 Returns:
67 The engine, and a session maker bound to the engine.
68
69 Raises:
70 NullConnectionStringError: The `database_uri` setting has a null value.
71 """
72 existing_engine = _engines.get(project)
73 if existing_engine:
74 return existing_engine
75
76 engine_uri = project.settings.get("database_uri")
77 logging.debug(f"Creating engine '{project}@{engine_uri}'")
78
79 if engine_uri is None:
80 raise NullConnectionStringError
81
82 engine = create_engine(engine_uri, poolclass=NullPool)
83
84 # Connect to the database to ensure it is available.
85 connect(
86 engine,
87 max_retries=project.settings.get("database_max_retries"),
88 retry_timeout=project.settings.get("database_retry_timeout"),
89 )
90
91 check_database_compatibility(engine)
92 init_hook(engine)
93
94 engine_session = (engine, sessionmaker(bind=engine))
95
96 if default:
97 # register the default engine
98 _engines[project] = engine_session
99
100 return engine_session
101
102
103 def connect(
104 engine: Engine,
105 max_retries: int,
106 retry_timeout: float,
107 ) -> Connection:
108 """Connect to the database.
109
110 Args:
111 engine: The DB engine with which the check will be performed.
112 max_retries: The maximum number of retries that will be attempted.
113 retry_timeout: The number of seconds to wait between retries.
114
115 Raises:
116 OperationalError: Error during DB connection - max retries exceeded.
117
118 Returns:
119 A connection to the database.
120 """
121 attempt = 0
122 while True:
123 try:
124 return engine.connect()
125 except OperationalError:
126 if attempt >= max_retries:
127 logging.error(
128 f"Could not connect to the database after {attempt} "
129 "attempts. Max retries exceeded.",
130 )
131 raise
132 attempt += 1
133 logging.info(
134 f"DB connection failed. Will retry after {retry_timeout}s. "
135 f"Attempt {attempt}/{max_retries}",
136 )
137 time.sleep(retry_timeout)
138
139
140 init_hooks = {
141 "sqlite": lambda x: x.execute("PRAGMA journal_mode=WAL"),
142 }
143
144
145 def init_hook(engine: Engine) -> None:
146 """Run the initialization hook for the provided DB engine.
147
148 The initialization hooks are taken from the `meltano.core.db.init_hooks`
149 dictionary, which maps the dialect name of the engine to a unary function
150 which will be called with the provided DB engine.
151
152 Args:
153 engine: The engine for which the init hook will be run.
154
155 Raises:
156 Exception: The init hook raised an exception.
157 """
158 try:
159 hook = init_hooks[engine.dialect.name]
160 except KeyError:
161 return
162
163 try:
164 hook(engine)
165 except Exception as ex:
166 raise Exception(f"Failed to initialize database: {ex!s}") from ex
167
168
169 def ensure_schema_exists(
170 engine: Engine,
171 schema_name: str,
172 grant_roles: tuple[str] = (),
173 ) -> None:
174 """Ensure the specified `schema_name` exists in the database.
175
176 Args:
177 engine: The DB engine to be used.
178 schema_name: The name of the schema.
179 grant_roles: Roles to grant to the specified schema.
180 """
181 group_identifiers = ",".join(grant_roles)
182
183 create_schema = text(f"CREATE SCHEMA IF NOT EXISTS {schema_name}")
184 grant_select_schema = text(
185 f"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema_name} GRANT SELECT ON "
186 f"TABLES TO {group_identifiers}",
187 )
188 grant_usage_schema = text(
189 f"GRANT USAGE ON SCHEMA {schema_name} TO {group_identifiers}",
190 )
191
192 with engine.connect() as conn, conn.begin():
193 conn.execute(create_schema)
194 if grant_roles:
195 conn.execute(grant_select_schema)
196 conn.execute(grant_usage_schema)
197
198 logging.info(f"Schema {schema_name} has been created successfully.")
199 for role in grant_roles:
200 logging.info(f"Usage has been granted for role: {role}.")
201
202
203 def check_database_compatibility(engine: Engine) -> None:
204 """Check that the database is compatible with Meltano.
205
206 Args:
207 engine: The DB engine to be used. This should already be connected to
208 the database.
209
210 Raises:
211 MeltanoDatabaseCompatibilityError: The database is not compatible with
212 Meltano.
213 """
214 dialect = engine.dialect.name
215 version = engine.dialect.server_version_info
216
217 if dialect == "sqlite" and version < (3, 25, 1):
218 version_string = ".".join(map(str, version))
219 reason = (
220 f"Detected SQLite {version_string}, but Meltano requires at least 3.25.1"
221 )
222 raise MeltanoDatabaseCompatibilityError(reason)
223
[end of src/meltano/core/db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/core/db.py b/src/meltano/core/db.py
--- a/src/meltano/core/db.py
+++ b/src/meltano/core/db.py
@@ -4,6 +4,7 @@
import logging
import time
+from urllib.parse import urlparse
from sqlalchemy import create_engine
from sqlalchemy.engine import Connection, Engine
@@ -73,13 +74,27 @@
if existing_engine:
return existing_engine
- engine_uri = project.settings.get("database_uri")
- logging.debug(f"Creating engine '{project}@{engine_uri}'")
+ database_uri = project.settings.get("database_uri")
+ parsed_db_uri = urlparse(database_uri)
+ sanitized_db_uri = parsed_db_uri._replace( # noqa: WPS437
+ netloc=(
+ f"{parsed_db_uri.username}:********@" # user:pass auth case
+ if parsed_db_uri.password
+ else "********@" # token auth case
+ if parsed_db_uri.username
+ else "" # no auth case
+ )
+ + (parsed_db_uri.hostname or ""),
+ ).geturl()
+ logging.debug(
+ f"Creating DB engine for project at {str(project.root)!r} "
+ f"with DB URI {sanitized_db_uri!r}",
+ )
- if engine_uri is None:
+ if database_uri is None:
raise NullConnectionStringError
- engine = create_engine(engine_uri, poolclass=NullPool)
+ engine = create_engine(database_uri, poolclass=NullPool)
# Connect to the database to ensure it is available.
connect(
|
{"golden_diff": "diff --git a/src/meltano/core/db.py b/src/meltano/core/db.py\n--- a/src/meltano/core/db.py\n+++ b/src/meltano/core/db.py\n@@ -4,6 +4,7 @@\n \n import logging\n import time\n+from urllib.parse import urlparse\n \n from sqlalchemy import create_engine\n from sqlalchemy.engine import Connection, Engine\n@@ -73,13 +74,27 @@\n if existing_engine:\n return existing_engine\n \n- engine_uri = project.settings.get(\"database_uri\")\n- logging.debug(f\"Creating engine '{project}@{engine_uri}'\")\n+ database_uri = project.settings.get(\"database_uri\")\n+ parsed_db_uri = urlparse(database_uri)\n+ sanitized_db_uri = parsed_db_uri._replace( # noqa: WPS437\n+ netloc=(\n+ f\"{parsed_db_uri.username}:********@\" # user:pass auth case\n+ if parsed_db_uri.password\n+ else \"********@\" # token auth case\n+ if parsed_db_uri.username\n+ else \"\" # no auth case\n+ )\n+ + (parsed_db_uri.hostname or \"\"),\n+ ).geturl()\n+ logging.debug(\n+ f\"Creating DB engine for project at {str(project.root)!r} \"\n+ f\"with DB URI {sanitized_db_uri!r}\",\n+ )\n \n- if engine_uri is None:\n+ if database_uri is None:\n raise NullConnectionStringError\n \n- engine = create_engine(engine_uri, poolclass=NullPool)\n+ engine = create_engine(database_uri, poolclass=NullPool)\n \n # Connect to the database to ensure it is available.\n connect(\n", "issue": "EL log files should not contain secrets\nThe full database URI is shown in debug logs:\r\n\r\n```console\r\n$ export MELTANO_CLI_LOG_LEVEL=debug\r\n$ meltano invoke my-tap\r\n2022-09-07T16:34:57.234152Z [info ] Environment 'dev' is active\r\n2022-09-07T16:34:57.338859Z [debug ] Creating engine <meltano.core.project.Project object at 0x10e9702e0>@postgresql://***********\r\n```\r\n\r\nWhere I redacted the username, password, etc. 
from the Postgres URI.\r\n\r\nThe full environment variables mapping log message may also contain secrets:\r\n\r\n```console\r\n2022-09-07T16:35:01.443284Z [debug ] Env: {'USER': ...\r\n```\r\n\r\n_Raised by Tomas B in Office Hours._\r\n\n", "before_files": [{"content": "\"\"\"Defines helpers related to the system database.\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport time\n\nfrom sqlalchemy import create_engine\nfrom sqlalchemy.engine import Connection, Engine\nfrom sqlalchemy.exc import OperationalError\nfrom sqlalchemy.orm import sessionmaker\nfrom sqlalchemy.pool import NullPool\nfrom sqlalchemy.sql import text\n\nfrom meltano.core.error import MeltanoError\nfrom meltano.core.project import Project\n\n# Keep a Project \u2192 Engine mapping to serve\n# the same engine for the same Project\n_engines = {}\n\n\nclass MeltanoDatabaseCompatibilityError(MeltanoError):\n \"\"\"Raised when the database is not compatible with Meltano.\"\"\"\n\n INSTRUCTION = (\n \"Upgrade your database to be compatible with Meltano or use a different \"\n \"database\"\n )\n\n def __init__(self, reason: str):\n \"\"\"Initialize the error with a reason.\n\n Args:\n reason: The reason why the database is not compatible.\n \"\"\"\n super().__init__(reason, self.INSTRUCTION)\n\n\nclass NullConnectionStringError(MeltanoError):\n \"\"\"Raised when the database is not compatible with Meltano.\"\"\"\n\n REASON = \"The `database_uri` setting has a null value\"\n INSTRUCTION = (\n \"Verify that the `database_uri` setting points to a valid database connection \"\n \"URI, or use `MELTANO_FF_STRICT_ENV_VAR_MODE=1 meltano config meltano list` \"\n \"to check for missing environment variables\"\n )\n\n def __init__(self):\n \"\"\"Initialize the exception.\"\"\"\n super().__init__(self.REASON, self.INSTRUCTION)\n\n\ndef project_engine(\n project: Project,\n default: bool = False,\n) -> tuple[Engine, sessionmaker]:\n \"\"\"Create and register a SQLAlchemy engine for a Meltano project instance.\n\n Args:\n project: The Meltano project that the engine will be connected to.\n default: Whether the engine created should be stored as the default\n engine for this project.\n\n Returns:\n The engine, and a session maker bound to the engine.\n\n Raises:\n NullConnectionStringError: The `database_uri` setting has a null value.\n \"\"\"\n existing_engine = _engines.get(project)\n if existing_engine:\n return existing_engine\n\n engine_uri = project.settings.get(\"database_uri\")\n logging.debug(f\"Creating engine '{project}@{engine_uri}'\")\n\n if engine_uri is None:\n raise NullConnectionStringError\n\n engine = create_engine(engine_uri, poolclass=NullPool)\n\n # Connect to the database to ensure it is available.\n connect(\n engine,\n max_retries=project.settings.get(\"database_max_retries\"),\n retry_timeout=project.settings.get(\"database_retry_timeout\"),\n )\n\n check_database_compatibility(engine)\n init_hook(engine)\n\n engine_session = (engine, sessionmaker(bind=engine))\n\n if default:\n # register the default engine\n _engines[project] = engine_session\n\n return engine_session\n\n\ndef connect(\n engine: Engine,\n max_retries: int,\n retry_timeout: float,\n) -> Connection:\n \"\"\"Connect to the database.\n\n Args:\n engine: The DB engine with which the check will be performed.\n max_retries: The maximum number of retries that will be attempted.\n retry_timeout: The number of seconds to wait between retries.\n\n Raises:\n OperationalError: Error during DB connection - max retries exceeded.\n\n Returns:\n 
A connection to the database.\n \"\"\"\n attempt = 0\n while True:\n try:\n return engine.connect()\n except OperationalError:\n if attempt >= max_retries:\n logging.error(\n f\"Could not connect to the database after {attempt} \"\n \"attempts. Max retries exceeded.\",\n )\n raise\n attempt += 1\n logging.info(\n f\"DB connection failed. Will retry after {retry_timeout}s. \"\n f\"Attempt {attempt}/{max_retries}\",\n )\n time.sleep(retry_timeout)\n\n\ninit_hooks = {\n \"sqlite\": lambda x: x.execute(\"PRAGMA journal_mode=WAL\"),\n}\n\n\ndef init_hook(engine: Engine) -> None:\n \"\"\"Run the initialization hook for the provided DB engine.\n\n The initialization hooks are taken from the `meltano.core.db.init_hooks`\n dictionary, which maps the dialect name of the engine to a unary function\n which will be called with the provided DB engine.\n\n Args:\n engine: The engine for which the init hook will be run.\n\n Raises:\n Exception: The init hook raised an exception.\n \"\"\"\n try:\n hook = init_hooks[engine.dialect.name]\n except KeyError:\n return\n\n try:\n hook(engine)\n except Exception as ex:\n raise Exception(f\"Failed to initialize database: {ex!s}\") from ex\n\n\ndef ensure_schema_exists(\n engine: Engine,\n schema_name: str,\n grant_roles: tuple[str] = (),\n) -> None:\n \"\"\"Ensure the specified `schema_name` exists in the database.\n\n Args:\n engine: The DB engine to be used.\n schema_name: The name of the schema.\n grant_roles: Roles to grant to the specified schema.\n \"\"\"\n group_identifiers = \",\".join(grant_roles)\n\n create_schema = text(f\"CREATE SCHEMA IF NOT EXISTS {schema_name}\")\n grant_select_schema = text(\n f\"ALTER DEFAULT PRIVILEGES IN SCHEMA {schema_name} GRANT SELECT ON \"\n f\"TABLES TO {group_identifiers}\",\n )\n grant_usage_schema = text(\n f\"GRANT USAGE ON SCHEMA {schema_name} TO {group_identifiers}\",\n )\n\n with engine.connect() as conn, conn.begin():\n conn.execute(create_schema)\n if grant_roles:\n conn.execute(grant_select_schema)\n conn.execute(grant_usage_schema)\n\n logging.info(f\"Schema {schema_name} has been created successfully.\")\n for role in grant_roles:\n logging.info(f\"Usage has been granted for role: {role}.\")\n\n\ndef check_database_compatibility(engine: Engine) -> None:\n \"\"\"Check that the database is compatible with Meltano.\n\n Args:\n engine: The DB engine to be used. This should already be connected to\n the database.\n\n Raises:\n MeltanoDatabaseCompatibilityError: The database is not compatible with\n Meltano.\n \"\"\"\n dialect = engine.dialect.name\n version = engine.dialect.server_version_info\n\n if dialect == \"sqlite\" and version < (3, 25, 1):\n version_string = \".\".join(map(str, version))\n reason = (\n f\"Detected SQLite {version_string}, but Meltano requires at least 3.25.1\"\n )\n raise MeltanoDatabaseCompatibilityError(reason)\n", "path": "src/meltano/core/db.py"}]}
| 2,799 | 365 |
gh_patches_debug_914
|
rasdani/github-patches
|
git_diff
|
wemake-services__wemake-python-styleguide-204
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature: ignore async function definitions from jones complexity check
Currently we only ignore `ClassDef` and `FunctionDef`: https://github.com/wemake-services/wemake-python-styleguide/blob/master/wemake_python_styleguide/visitors/ast/complexity/jones.py#L38-L41
What needs to be done:
1. ignore `AsyncFunctionDef` in the check as well
2. we do not currently have a dedicated test case for ignored nodes. It should be added; we can call it `test_that_some_nodes_are_ignored`. It should cover all three ignored nodes: with the lowest complexity threshold there should be no errors: https://github.com/wemake-services/wemake-python-styleguide/blob/master/tests/test_visitors/test_ast/test_complexity/test_jones/test_line_complexity.py
</issue>
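
The code fix is the one-line addition of `ast.AsyncFunctionDef` to `_ignored_nodes`, which is exactly what the diff further down does. For the requested test, a rough sketch follows; it is only a sketch: the `parse_ast_tree`, `assert_errors`, and `options` fixtures, and the `tree=`/`options=` constructor arguments, are assumed to match how this repository's other complexity tests are written and may need adjusting. The samples give each definition line exactly one extra counted node (a base class or an argument), so the async case fails before the fix and passes after it at `max_line_complexity=1`.

```python
import pytest

from wemake_python_styleguide.visitors.ast.complexity.jones import (
    JonesComplexityVisitor,
)

# One base / one argument each: the definition line then holds exactly one
# counted node once the definition itself is ignored, and two if it is not.
class_def = 'class SomeClass(FirstParent):\n    pass'
function_def = 'def some_function(argument):\n    pass'
async_function_def = 'async def some_function(argument):\n    pass'


@pytest.mark.parametrize('code', [class_def, function_def, async_function_def])
def test_that_some_nodes_are_ignored(assert_errors, parse_ast_tree, options, code):
    """Class and (async) function definitions do not count towards complexity."""
    tree = parse_ast_tree(code)
    option_values = options(max_line_complexity=1)

    visitor = JonesComplexityVisitor(tree=tree, options=option_values)
    visitor.run()

    assert_errors(visitor, [])
```
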
<code>
[start of wemake_python_styleguide/visitors/ast/complexity/jones.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 Jones Complexity to count inline complexity.
5
6 Based on the original `jones-complexity` project:
7 https://github.com/Miserlou/JonesComplexity
8
9 Original project is licensed under MIT.
10 """
11
12 import ast
13 from collections import defaultdict
14 from statistics import median
15 from typing import DefaultDict, List
16
17 from wemake_python_styleguide.logics.nodes import is_subtype_of_any
18 from wemake_python_styleguide.violations.complexity import (
19 JonesScoreViolation,
20 LineComplexityViolation,
21 )
22 from wemake_python_styleguide.visitors.base import BaseNodeVisitor
23
24
25 class JonesComplexityVisitor(BaseNodeVisitor): # TODO: consider `logical_line`
26 """
27 This visitor is used to find complex lines in the code.
28
29 Calculates the number of AST nodes per line of code.
30 Also calculates the median nodes/line score.
31 Then compares these numbers to the given tressholds.
32
33 Some nodes are ignored because there's no sense in analyzing them.
34 Some nodes like type annotations are not affecting line complexity,
35 so we do not count them.
36 """
37
38 _ignored_nodes = (
39 ast.FunctionDef,
40 ast.ClassDef,
41 )
42
43 def __init__(self, *args, **kwargs) -> None:
44 """Initializes line number counter."""
45 super().__init__(*args, **kwargs)
46 self._lines: DefaultDict[int, List[ast.AST]] = defaultdict(list)
47 self._to_ignore: List[ast.AST] = []
48
49 def _post_visit(self) -> None:
50 """
51 Triggers after the whole module was processed.
52
53 Checks each line for its complexity, compares it to the tresshold.
54 We also calculate the final Jones score for the whole module.
55 """
56 for line_nodes in self._lines.values():
57 complexity = len(line_nodes)
58 if complexity > self.options.max_line_complexity:
59 self.add_violation(LineComplexityViolation(
60 line_nodes[0], text=str(complexity),
61 ))
62
63 node_counts = [len(nodes) for nodes in self._lines.values()]
64 total_count = median(node_counts) if node_counts else 0
65 if total_count > self.options.max_jones_score:
66 self.add_violation(JonesScoreViolation())
67
68 def _maybe_ignore_child(self, node: ast.AST) -> bool:
69 if isinstance(node, ast.AnnAssign):
70 self._to_ignore.append(node.annotation)
71
72 return node in self._to_ignore
73
74 def visit(self, node: ast.AST) -> None:
75 """
76 Visits all nodes, sums the number of nodes per line.
77
78 Then calculates the median value of all line results.
79
80 Raises:
81 JonesScoreViolation
82 LineComplexityViolation
83
84 """
85 line_number = getattr(node, 'lineno', None)
86 is_ignored = is_subtype_of_any(node, self._ignored_nodes)
87 if line_number is not None and not is_ignored:
88 if not self._maybe_ignore_child(node):
89 self._lines[line_number].append(node)
90
91 self.generic_visit(node)
92
[end of wemake_python_styleguide/visitors/ast/complexity/jones.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wemake_python_styleguide/visitors/ast/complexity/jones.py b/wemake_python_styleguide/visitors/ast/complexity/jones.py
--- a/wemake_python_styleguide/visitors/ast/complexity/jones.py
+++ b/wemake_python_styleguide/visitors/ast/complexity/jones.py
@@ -38,6 +38,7 @@
_ignored_nodes = (
ast.FunctionDef,
ast.ClassDef,
+ ast.AsyncFunctionDef,
)
def __init__(self, *args, **kwargs) -> None:
|
{"golden_diff": "diff --git a/wemake_python_styleguide/visitors/ast/complexity/jones.py b/wemake_python_styleguide/visitors/ast/complexity/jones.py\n--- a/wemake_python_styleguide/visitors/ast/complexity/jones.py\n+++ b/wemake_python_styleguide/visitors/ast/complexity/jones.py\n@@ -38,6 +38,7 @@\n _ignored_nodes = (\n ast.FunctionDef,\n ast.ClassDef,\n+ ast.AsyncFunctionDef,\n )\n \n def __init__(self, *args, **kwargs) -> None:\n", "issue": "Feature: ignore async function definitions from jones complexity check\nCurrently we only ignore `ClassDef` and `FunctionDef`: https://github.com/wemake-services/wemake-python-styleguide/blob/master/wemake_python_styleguide/visitors/ast/complexity/jones.py#L38-L41\r\n\r\nWhat needs to be done:\r\n1. ignore `AsyncFunctionDef` from the check\r\n2. we do not have a special test case for ignoring nodes for now. It should be added. We can call it `test_that_some_nodes_are_ignored`. It should test all three ignored nodes: with the lowest complexity threshold there should be no errors: https://github.com/wemake-services/wemake-python-styleguide/blob/master/tests/test_visitors/test_ast/test_complexity/test_jones/test_line_complexity.py\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nJones Complexity to count inline complexity.\n\nBased on the original `jones-complexity` project:\nhttps://github.com/Miserlou/JonesComplexity\n\nOriginal project is licensed under MIT.\n\"\"\"\n\nimport ast\nfrom collections import defaultdict\nfrom statistics import median\nfrom typing import DefaultDict, List\n\nfrom wemake_python_styleguide.logics.nodes import is_subtype_of_any\nfrom wemake_python_styleguide.violations.complexity import (\n JonesScoreViolation,\n LineComplexityViolation,\n)\nfrom wemake_python_styleguide.visitors.base import BaseNodeVisitor\n\n\nclass JonesComplexityVisitor(BaseNodeVisitor): # TODO: consider `logical_line`\n \"\"\"\n This visitor is used to find complex lines in the code.\n\n Calculates the number of AST nodes per line of code.\n Also calculates the median nodes/line score.\n Then compares these numbers to the given tressholds.\n\n Some nodes are ignored because there's no sense in analyzing them.\n Some nodes like type annotations are not affecting line complexity,\n so we do not count them.\n \"\"\"\n\n _ignored_nodes = (\n ast.FunctionDef,\n ast.ClassDef,\n )\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initializes line number counter.\"\"\"\n super().__init__(*args, **kwargs)\n self._lines: DefaultDict[int, List[ast.AST]] = defaultdict(list)\n self._to_ignore: List[ast.AST] = []\n\n def _post_visit(self) -> None:\n \"\"\"\n Triggers after the whole module was processed.\n\n Checks each line for its complexity, compares it to the tresshold.\n We also calculate the final Jones score for the whole module.\n \"\"\"\n for line_nodes in self._lines.values():\n complexity = len(line_nodes)\n if complexity > self.options.max_line_complexity:\n self.add_violation(LineComplexityViolation(\n line_nodes[0], text=str(complexity),\n ))\n\n node_counts = [len(nodes) for nodes in self._lines.values()]\n total_count = median(node_counts) if node_counts else 0\n if total_count > self.options.max_jones_score:\n self.add_violation(JonesScoreViolation())\n\n def _maybe_ignore_child(self, node: ast.AST) -> bool:\n if isinstance(node, ast.AnnAssign):\n self._to_ignore.append(node.annotation)\n\n return node in self._to_ignore\n\n def visit(self, node: ast.AST) -> None:\n \"\"\"\n Visits all nodes, sums the number of nodes per 
line.\n\n Then calculates the median value of all line results.\n\n Raises:\n JonesScoreViolation\n LineComplexityViolation\n\n \"\"\"\n line_number = getattr(node, 'lineno', None)\n is_ignored = is_subtype_of_any(node, self._ignored_nodes)\n if line_number is not None and not is_ignored:\n if not self._maybe_ignore_child(node):\n self._lines[line_number].append(node)\n\n self.generic_visit(node)\n", "path": "wemake_python_styleguide/visitors/ast/complexity/jones.py"}]}
| 1,572 | 134 |
gh_patches_debug_28303
|
rasdani/github-patches
|
git_diff
|
wright-group__WrightTools-517
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
quick2D returns transposed plot
`artists.quick2D` uses the overloaded `ax.pcolor` and `ax.contourf` to plot arrays; see [here](https://github.com/wright-group/WrightTools/blob/development/WrightTools/artists/_quick.py#L252).
However, `ax.contourf` does not handle the arrays properly. The problem appears to be [line 255](https://github.com/wright-group/WrightTools/blob/development/WrightTools/artists/_base.py#L255): `zi` should instead be `zi.T`, to match the way matplotlib indexes arrays.
</issue>
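
For context on why the transpose matters: plain matplotlib's `contourf(x, y, Z)` expects `Z[row, column]`, i.e. an array of shape `(len(y), len(x))`, while the array here is built the other way around. A minimal standalone sketch of that convention, using only numpy and matplotlib (no WrightTools wrappers):

```python
import numpy as np
import matplotlib.pyplot as plt

xi = np.linspace(0, 4, 5)  # 5 points along the horizontal axis
yi = np.linspace(0, 2, 3)  # 3 points along the vertical axis

# Data built as zi[x_index, y_index], i.e. shape (len(xi), len(yi)) == (5, 3).
zi = np.fromfunction(lambda i, j: i + 10 * j, (xi.size, yi.size))

fig, ax = plt.subplots()
# contourf wants Z[y_index, x_index] (shape (3, 5) here), hence the .T:
ax.contourf(xi, yi, zi.T)
plt.show()
```

When the two axes happen to have the same length, as in a square scan, no shape error is raised and the plot simply comes out transposed, which matches the reported symptom.
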
<code>
[start of WrightTools/artists/_quick.py]
1 """Quick plotting."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import os
8
9 import numpy as np
10
11 import matplotlib.pyplot as plt
12
13 from ._helpers import create_figure, plot_colorbar, savefig
14 from ._colors import colormaps
15 from .. import kit as wt_kit
16
17
18 # --- define --------------------------------------------------------------------------------------
19
20
21 __all__ = ['quick1D', 'quick2D']
22
23
24 # --- general purpose plotting functions ----------------------------------------------------------
25
26
27 def quick1D(data, axis=0, at={}, channel=0, *, local=False, autosave=False, save_directory=None,
28 fname=None, verbose=True):
29 """Quickly plot 1D slice(s) of data.
30
31 Parameters
32 ----------
33 data : WrightTools.Data object
34 Data to plot.
35 axis : string or integer (optional)
36 Expression or index of axis. Default is 0.
37 at : dictionary (optional)
38 Dictionary of parameters in non-plotted dimension(s). If not
39 provided, plots will be made at each coordinate.
40 channel : string or integer (optional)
41 Name or index of channel to plot. Default is 0.
42 local : boolean (optional)
43 Toggle plotting locally. Default is False.
44 autosave : boolean (optional)
45 Toggle autosave. Default is False.
46 save_directory : string (optional)
47 Location to save image(s). Default is None (auto-generated).
48 fname : string (optional)
49 File name. If None, data name is used. Default is None.
50 verbose : boolean (optional)
51 Toggle talkback. Default is True.
52
53 Returns
54 -------
55 list of strings
56 List of saved image files (if any).
57 """
58 # prepare data
59 chopped = data.chop(axis, at=at, verbose=False)
60 # channel index
61 channel_index = wt_kit.get_index(data.channel_names, channel)
62 # prepare figure
63 fig = None
64 if len(chopped) > 10:
65 if not autosave:
66 print('more than 10 images will be generated: forcing autosave')
67 autosave = True
68 # prepare output folders
69 if autosave:
70 if save_directory:
71 pass
72 else:
73 if len(chopped) == 1:
74 save_directory = os.getcwd()
75 if fname:
76 pass
77 else:
78 fname = data.natural_name
79 else:
80 folder_name = 'mpl_1D ' + wt_kit.TimeStamp().path
81 os.mkdir(folder_name)
82 save_directory = folder_name
83 # chew through image generation
84 out = []
85 for i, d in enumerate(chopped.values()):
86 # unpack data -----------------------------------------------------------------------------
87 axis = d.axes[0]
88 xi = axis.full
89 channel = d.channels[channel_index]
90 zi = channel[:]
91 # create figure ---------------------------------------------------------------------------
92 aspects = [[[0, 0], 0.5]]
93 fig, gs = create_figure(width='single', nrows=1, cols=[1], aspects=aspects)
94 ax = plt.subplot(gs[0, 0])
95 # plot ------------------------------------------------------------------------------------
96 plt.plot(xi, zi, lw=2)
97 plt.scatter(xi, zi, color='grey', alpha=0.5, edgecolor='none')
98 # decoration ------------------------------------------------------------------------------
99 plt.grid()
100 # limits
101 if local:
102 pass
103 else:
104 data_channel = data.channels[channel_index]
105 plt.ylim(data_channel.min(), data_channel.max())
106 # label axes
107 ax.set_xlabel(axis.label, fontsize=18)
108 ax.set_ylabel(channel.name, fontsize=18)
109 plt.xticks(rotation=45)
110 plt.xlim(xi.min(), xi.max())
111 # save ------------------------------------------------------------------------------------
112 if autosave:
113 if fname:
114 file_name = fname + ' ' + str(i).zfill(3)
115 else:
116 file_name = str(i).zfill(3)
117 fpath = os.path.join(save_directory, file_name + '.png')
118 savefig(fpath, fig=fig)
119 plt.close()
120 if verbose:
121 print('image saved at', fpath)
122 out.append(fpath)
123 return out
124
125
126 def quick2D(data, xaxis=1, yaxis=0, at={}, channel=0, *, contours=0, pixelated=True,
127 dynamic_range=False, local=False, contours_local=True, autosave=False,
128 save_directory=None, fname=None, verbose=True):
129 """Quickly plot 2D slice(s) of data.
130
131 Parameters
132 ----------
133 data : WrightTools.Data object.
134 Data to plot.
135 xaxis : string or integer (optional)
136 Expression or index of horizontal axis. Default is 1.
137 yaxis : string or integer (optional)
138 Expression or index of vertical axis. Default is 0.
139 at : dictionary (optional)
140 Dictionary of parameters in non-plotted dimension(s). If not
141 provided, plots will be made at each coordinate.
142 channel : string or integer (optional)
143 Name or index of channel to plot. Default is 0.
144 contours : integer (optional)
145 The number of black contour lines to add to the plot. Default is 0.
146 pixelated : boolean (optional)
147 Toggle between pcolor and contourf (deulaney) plotting backends.
148 Default is True (pcolor).
149 dynamic_range : boolean (optional)
150 Force the colorbar to use all of its colors. Only changes behavior
151 for signed channels. Default is False.
152 local : boolean (optional)
153 Toggle plotting locally. Default is False.
154 contours_local : boolean (optional)
155 Toggle plotting black contour lines locally. Default is True.
156 autosave : boolean (optional)
157 Toggle autosave. Default is False.
158 save_directory : string (optional)
159 Location to save image(s). Default is None (auto-generated).
160 fname : string (optional)
161 File name. If None, data name is used. Default is None.
162 verbose : boolean (optional)
163 Toggle talkback. Default is True.
164
165 Returns
166 -------
167 list of strings
168 List of saved image files (if any).
169 """
170 # prepare data
171 chopped = data.chop(xaxis, yaxis, at=at, verbose=False)
172 # channel index
173 channel_index = wt_kit.get_index(data.channel_names, channel)
174 # colormap
175 # get colormap
176 if data.channels[channel_index].signed:
177 cmap = 'signed'
178 else:
179 cmap = 'default'
180 cmap = colormaps[cmap]
181 cmap.set_bad([0.75] * 3, 1.)
182 cmap.set_under([0.75] * 3, 1.)
183 # fname
184 if fname is None:
185 fname = data.natural_name
186 # autosave
187 if len(chopped) > 10:
188 if not autosave:
189 print('more than 10 images will be generated: forcing autosave')
190 autosave = True
191 # output folder
192 if autosave:
193 if save_directory:
194 pass
195 else:
196 if len(chopped) == 1:
197 save_directory = os.getcwd()
198 else:
199 folder_name = 'quick2D ' + wt_kit.TimeStamp().path
200 os.mkdir(folder_name)
201 save_directory = folder_name
202 # loop through image generation
203 out = []
204 for i, d in enumerate(chopped.values()):
205 # unpack data -----------------------------------------------------------------------------
206 xaxis = d.axes[0]
207 xlim = xaxis.min(), xaxis.max()
208 yaxis = d.axes[1]
209 ylim = xaxis.min(), yaxis.max()
210 channel = d.channels[channel_index]
211 zi = channel[:]
212 zi = np.ma.masked_invalid(zi)
213 # create figure ---------------------------------------------------------------------------
214 if xaxis.units == yaxis.units:
215 xr = xlim[1] - xlim[0]
216 yr = ylim[1] - ylim[0]
217 aspect = np.abs(yr / xr)
218 if 3 < aspect or aspect < 1 / 3.:
219 # TODO: raise warning here
220 aspect = np.clip(aspect, 1 / 3., 3.)
221 else:
222 aspect = 1
223 fig, gs = create_figure(width='single', nrows=1, cols=[1, 'cbar'],
224 aspects=[[[0, 0], aspect]])
225 ax = plt.subplot(gs[0])
226 ax.patch.set_facecolor('w')
227 # levels ----------------------------------------------------------------------------------
228 if channel.signed:
229 if local:
230 limit = channel.mag
231 else:
232 data_channel = data.channels[channel_index]
233 if dynamic_range:
234 limit = min(abs(data_channel.null - data_channel.min()),
235 abs(data_channel.null - data_channel.max()))
236 else:
237 limit = data_channel.mag
238 if np.isnan(limit):
239 limit = 1.
240 if limit is np.ma.masked:
241 limit = 1.
242 levels = np.linspace(-limit + channel.null, limit + channel.null, 200)
243 else:
244 if local:
245 levels = np.linspace(channel.null, np.nanmax(zi), 200)
246 else:
247 data_channel = data.channels[channel_index]
248 if data_channel.max() < data_channel.null:
249 levels = np.linspace(data_channel.min(), data_channel.null, 200)
250 else:
251 levels = np.linspace(data_channel.null, data_channel.max(), 200)
252 # colors ----------------------------------------------------------------------------------
253 if pixelated:
254 ax.pcolor(d, cmap=cmap, vmin=levels.min(), vmax=levels.max())
255 else:
256 ax.contourf(d, cmap=cmap, levels=levels)
257 # contour lines ---------------------------------------------------------------------------
258 if contours:
259 raise NotImplementedError
260 # decoration ------------------------------------------------------------------------------
261 plt.xticks(rotation=45, fontsize=14)
262 plt.yticks(fontsize=14)
263 ax.set_xlabel(xaxis.label, fontsize=18)
264 ax.set_ylabel(yaxis.label, fontsize=18)
265 # colorbar
266 cax = plt.subplot(gs[1])
267 cbar_ticks = np.linspace(levels.min(), levels.max(), 11)
268 plot_colorbar(cax=cax, ticks=cbar_ticks, label=channel.name, cmap=cmap)
269 # save figure -----------------------------------------------------------------------------
270 if autosave:
271 if fname:
272 file_name = fname + ' ' + str(i).zfill(3)
273 else:
274 file_name = str(i).zfill(3)
275 fpath = os.path.join(save_directory, file_name + '.png')
276 savefig(fpath, fig=fig)
277 plt.close()
278 if verbose:
279 print('image saved at', fpath)
280 out.append(fpath)
281 return out
282
[end of WrightTools/artists/_quick.py]
[start of examples/fringes_transform.py]
1 # -*- coding: utf-8 -*-
2 """
3 Fringes transform
4 =================
5
6 An example of transform on a dataset containing fringes.
7 """
8
9 import matplotlib.pyplot as plt
10
11 import WrightTools as wt
12 from WrightTools import datasets
13
14 p = datasets.PyCMDS.w2_w1_000
15 data = wt.data.from_PyCMDS(p)
16
17 data.signal_mean.symmetric_root(0.5) # to amplitude level
18
19 fig, gs = wt.artists.create_figure(width='double', cols=[1, 1, 'cbar'])
20
21 # as taken
22 ax = plt.subplot(gs[0, 0])
23 ax.pcolor(data)
24 wt.artists.set_ax_labels(xlabel=data.w2.label, ylabel=data.w1.label)
25 ax.grid()
26 ax.set_title('as taken', fontsize=20)
27
28 # transformed
29 ax = plt.subplot(gs[0, 1])
30 data.transform(['wm', 'w1'])
31 data.convert('wn')
32 ax.pcolor(data)
33 wt.artists.set_ax_labels(xlabel=data.wm.label, yticks=False)
34 ax.grid()
35 ax.set_title('transformed', fontsize=20)
36
37 # colorbar
38 cax = plt.subplot(gs[0, -1])
39 wt.artists.plot_colorbar(cax, label='amplitude')
40
[end of examples/fringes_transform.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/WrightTools/artists/_quick.py b/WrightTools/artists/_quick.py
--- a/WrightTools/artists/_quick.py
+++ b/WrightTools/artists/_quick.py
@@ -123,7 +123,7 @@
return out
-def quick2D(data, xaxis=1, yaxis=0, at={}, channel=0, *, contours=0, pixelated=True,
+def quick2D(data, xaxis=0, yaxis=1, at={}, channel=0, *, contours=0, pixelated=True,
dynamic_range=False, local=False, contours_local=True, autosave=False,
save_directory=None, fname=None, verbose=True):
"""Quickly plot 2D slice(s) of data.
@@ -133,9 +133,9 @@
data : WrightTools.Data object.
Data to plot.
xaxis : string or integer (optional)
- Expression or index of horizontal axis. Default is 1.
+ Expression or index of horizontal axis. Default is 0.
yaxis : string or integer (optional)
- Expression or index of vertical axis. Default is 0.
+ Expression or index of vertical axis. Default is 1.
at : dictionary (optional)
Dictionary of parameters in non-plotted dimension(s). If not
provided, plots will be made at each coordinate.
diff --git a/examples/fringes_transform.py b/examples/fringes_transform.py
--- a/examples/fringes_transform.py
+++ b/examples/fringes_transform.py
@@ -15,6 +15,7 @@
data = wt.data.from_PyCMDS(p)
data.signal_mean.symmetric_root(0.5) # to amplitude level
+data.convert('wn')
fig, gs = wt.artists.create_figure(width='double', cols=[1, 1, 'cbar'])
|
{"golden_diff": "diff --git a/WrightTools/artists/_quick.py b/WrightTools/artists/_quick.py\n--- a/WrightTools/artists/_quick.py\n+++ b/WrightTools/artists/_quick.py\n@@ -123,7 +123,7 @@\n return out\n \n \n-def quick2D(data, xaxis=1, yaxis=0, at={}, channel=0, *, contours=0, pixelated=True,\n+def quick2D(data, xaxis=0, yaxis=1, at={}, channel=0, *, contours=0, pixelated=True,\n dynamic_range=False, local=False, contours_local=True, autosave=False,\n save_directory=None, fname=None, verbose=True):\n \"\"\"Quickly plot 2D slice(s) of data.\n@@ -133,9 +133,9 @@\n data : WrightTools.Data object.\n Data to plot.\n xaxis : string or integer (optional)\n- Expression or index of horizontal axis. Default is 1.\n+ Expression or index of horizontal axis. Default is 0.\n yaxis : string or integer (optional)\n- Expression or index of vertical axis. Default is 0.\n+ Expression or index of vertical axis. Default is 1.\n at : dictionary (optional)\n Dictionary of parameters in non-plotted dimension(s). If not\n provided, plots will be made at each coordinate.\ndiff --git a/examples/fringes_transform.py b/examples/fringes_transform.py\n--- a/examples/fringes_transform.py\n+++ b/examples/fringes_transform.py\n@@ -15,6 +15,7 @@\n data = wt.data.from_PyCMDS(p)\n \n data.signal_mean.symmetric_root(0.5) # to amplitude level\n+data.convert('wn')\n \n fig, gs = wt.artists.create_figure(width='double', cols=[1, 1, 'cbar'])\n", "issue": "quick2D returns transposed plot\n`artists.quick2D` uses the overloaded `ax.pcolor` and `ax.contourf` to plot arrays--- see [here](https://github.com/wright-group/WrightTools/blob/development/WrightTools/artists/_quick.py#L252). \r\n\r\nHowever, `ax.contourf` does not properly handle the arrays. I think the problem is [line255](https://github.com/wright-group/WrightTools/blob/development/WrightTools/artists/_base.py#L255). I think `zi` should instead be `zi.T` to handle the way matplotlib indexes arrays. \r\n \r\n\n", "before_files": [{"content": "\"\"\"Quick plotting.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport os\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\nfrom ._helpers import create_figure, plot_colorbar, savefig\nfrom ._colors import colormaps\nfrom .. import kit as wt_kit\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = ['quick1D', 'quick2D']\n\n\n# --- general purpose plotting functions ----------------------------------------------------------\n\n\ndef quick1D(data, axis=0, at={}, channel=0, *, local=False, autosave=False, save_directory=None,\n fname=None, verbose=True):\n \"\"\"Quickly plot 1D slice(s) of data.\n\n Parameters\n ----------\n data : WrightTools.Data object\n Data to plot.\n axis : string or integer (optional)\n Expression or index of axis. Default is 0.\n at : dictionary (optional)\n Dictionary of parameters in non-plotted dimension(s). If not\n provided, plots will be made at each coordinate.\n channel : string or integer (optional)\n Name or index of channel to plot. Default is 0.\n local : boolean (optional)\n Toggle plotting locally. Default is False.\n autosave : boolean (optional)\n Toggle autosave. Default is False.\n save_directory : string (optional)\n Location to save image(s). Default is None (auto-generated).\n fname : string (optional)\n File name. If None, data name is used. Default is None.\n verbose : boolean (optional)\n Toggle talkback. 
Default is True.\n\n Returns\n -------\n list of strings\n List of saved image files (if any).\n \"\"\"\n # prepare data\n chopped = data.chop(axis, at=at, verbose=False)\n # channel index\n channel_index = wt_kit.get_index(data.channel_names, channel)\n # prepare figure\n fig = None\n if len(chopped) > 10:\n if not autosave:\n print('more than 10 images will be generated: forcing autosave')\n autosave = True\n # prepare output folders\n if autosave:\n if save_directory:\n pass\n else:\n if len(chopped) == 1:\n save_directory = os.getcwd()\n if fname:\n pass\n else:\n fname = data.natural_name\n else:\n folder_name = 'mpl_1D ' + wt_kit.TimeStamp().path\n os.mkdir(folder_name)\n save_directory = folder_name\n # chew through image generation\n out = []\n for i, d in enumerate(chopped.values()):\n # unpack data -----------------------------------------------------------------------------\n axis = d.axes[0]\n xi = axis.full\n channel = d.channels[channel_index]\n zi = channel[:]\n # create figure ---------------------------------------------------------------------------\n aspects = [[[0, 0], 0.5]]\n fig, gs = create_figure(width='single', nrows=1, cols=[1], aspects=aspects)\n ax = plt.subplot(gs[0, 0])\n # plot ------------------------------------------------------------------------------------\n plt.plot(xi, zi, lw=2)\n plt.scatter(xi, zi, color='grey', alpha=0.5, edgecolor='none')\n # decoration ------------------------------------------------------------------------------\n plt.grid()\n # limits\n if local:\n pass\n else:\n data_channel = data.channels[channel_index]\n plt.ylim(data_channel.min(), data_channel.max())\n # label axes\n ax.set_xlabel(axis.label, fontsize=18)\n ax.set_ylabel(channel.name, fontsize=18)\n plt.xticks(rotation=45)\n plt.xlim(xi.min(), xi.max())\n # save ------------------------------------------------------------------------------------\n if autosave:\n if fname:\n file_name = fname + ' ' + str(i).zfill(3)\n else:\n file_name = str(i).zfill(3)\n fpath = os.path.join(save_directory, file_name + '.png')\n savefig(fpath, fig=fig)\n plt.close()\n if verbose:\n print('image saved at', fpath)\n out.append(fpath)\n return out\n\n\ndef quick2D(data, xaxis=1, yaxis=0, at={}, channel=0, *, contours=0, pixelated=True,\n dynamic_range=False, local=False, contours_local=True, autosave=False,\n save_directory=None, fname=None, verbose=True):\n \"\"\"Quickly plot 2D slice(s) of data.\n\n Parameters\n ----------\n data : WrightTools.Data object.\n Data to plot.\n xaxis : string or integer (optional)\n Expression or index of horizontal axis. Default is 1.\n yaxis : string or integer (optional)\n Expression or index of vertical axis. Default is 0.\n at : dictionary (optional)\n Dictionary of parameters in non-plotted dimension(s). If not\n provided, plots will be made at each coordinate.\n channel : string or integer (optional)\n Name or index of channel to plot. Default is 0.\n contours : integer (optional)\n The number of black contour lines to add to the plot. Default is 0.\n pixelated : boolean (optional)\n Toggle between pcolor and contourf (deulaney) plotting backends.\n Default is True (pcolor).\n dynamic_range : boolean (optional)\n Force the colorbar to use all of its colors. Only changes behavior\n for signed channels. Default is False.\n local : boolean (optional)\n Toggle plotting locally. Default is False.\n contours_local : boolean (optional)\n Toggle plotting black contour lines locally. Default is True.\n autosave : boolean (optional)\n Toggle autosave. 
Default is False.\n save_directory : string (optional)\n Location to save image(s). Default is None (auto-generated).\n fname : string (optional)\n File name. If None, data name is used. Default is None.\n verbose : boolean (optional)\n Toggle talkback. Default is True.\n\n Returns\n -------\n list of strings\n List of saved image files (if any).\n \"\"\"\n # prepare data\n chopped = data.chop(xaxis, yaxis, at=at, verbose=False)\n # channel index\n channel_index = wt_kit.get_index(data.channel_names, channel)\n # colormap\n # get colormap\n if data.channels[channel_index].signed:\n cmap = 'signed'\n else:\n cmap = 'default'\n cmap = colormaps[cmap]\n cmap.set_bad([0.75] * 3, 1.)\n cmap.set_under([0.75] * 3, 1.)\n # fname\n if fname is None:\n fname = data.natural_name\n # autosave\n if len(chopped) > 10:\n if not autosave:\n print('more than 10 images will be generated: forcing autosave')\n autosave = True\n # output folder\n if autosave:\n if save_directory:\n pass\n else:\n if len(chopped) == 1:\n save_directory = os.getcwd()\n else:\n folder_name = 'quick2D ' + wt_kit.TimeStamp().path\n os.mkdir(folder_name)\n save_directory = folder_name\n # loop through image generation\n out = []\n for i, d in enumerate(chopped.values()):\n # unpack data -----------------------------------------------------------------------------\n xaxis = d.axes[0]\n xlim = xaxis.min(), xaxis.max()\n yaxis = d.axes[1]\n ylim = xaxis.min(), yaxis.max()\n channel = d.channels[channel_index]\n zi = channel[:]\n zi = np.ma.masked_invalid(zi)\n # create figure ---------------------------------------------------------------------------\n if xaxis.units == yaxis.units:\n xr = xlim[1] - xlim[0]\n yr = ylim[1] - ylim[0]\n aspect = np.abs(yr / xr)\n if 3 < aspect or aspect < 1 / 3.:\n # TODO: raise warning here\n aspect = np.clip(aspect, 1 / 3., 3.)\n else:\n aspect = 1\n fig, gs = create_figure(width='single', nrows=1, cols=[1, 'cbar'],\n aspects=[[[0, 0], aspect]])\n ax = plt.subplot(gs[0])\n ax.patch.set_facecolor('w')\n # levels ----------------------------------------------------------------------------------\n if channel.signed:\n if local:\n limit = channel.mag\n else:\n data_channel = data.channels[channel_index]\n if dynamic_range:\n limit = min(abs(data_channel.null - data_channel.min()),\n abs(data_channel.null - data_channel.max()))\n else:\n limit = data_channel.mag\n if np.isnan(limit):\n limit = 1.\n if limit is np.ma.masked:\n limit = 1.\n levels = np.linspace(-limit + channel.null, limit + channel.null, 200)\n else:\n if local:\n levels = np.linspace(channel.null, np.nanmax(zi), 200)\n else:\n data_channel = data.channels[channel_index]\n if data_channel.max() < data_channel.null:\n levels = np.linspace(data_channel.min(), data_channel.null, 200)\n else:\n levels = np.linspace(data_channel.null, data_channel.max(), 200)\n # colors ----------------------------------------------------------------------------------\n if pixelated:\n ax.pcolor(d, cmap=cmap, vmin=levels.min(), vmax=levels.max())\n else:\n ax.contourf(d, cmap=cmap, levels=levels)\n # contour lines ---------------------------------------------------------------------------\n if contours:\n raise NotImplementedError\n # decoration ------------------------------------------------------------------------------\n plt.xticks(rotation=45, fontsize=14)\n plt.yticks(fontsize=14)\n ax.set_xlabel(xaxis.label, fontsize=18)\n ax.set_ylabel(yaxis.label, fontsize=18)\n # colorbar\n cax = plt.subplot(gs[1])\n cbar_ticks = np.linspace(levels.min(), levels.max(), 
11)\n plot_colorbar(cax=cax, ticks=cbar_ticks, label=channel.name, cmap=cmap)\n # save figure -----------------------------------------------------------------------------\n if autosave:\n if fname:\n file_name = fname + ' ' + str(i).zfill(3)\n else:\n file_name = str(i).zfill(3)\n fpath = os.path.join(save_directory, file_name + '.png')\n savefig(fpath, fig=fig)\n plt.close()\n if verbose:\n print('image saved at', fpath)\n out.append(fpath)\n return out\n", "path": "WrightTools/artists/_quick.py"}, {"content": "# -*- coding: utf-8 -*-\n\"\"\"\nFringes transform\n=================\n\nAn example of transform on a dataset containing fringes.\n\"\"\"\n\nimport matplotlib.pyplot as plt\n\nimport WrightTools as wt\nfrom WrightTools import datasets\n\np = datasets.PyCMDS.w2_w1_000\ndata = wt.data.from_PyCMDS(p)\n\ndata.signal_mean.symmetric_root(0.5) # to amplitude level\n\nfig, gs = wt.artists.create_figure(width='double', cols=[1, 1, 'cbar'])\n\n# as taken\nax = plt.subplot(gs[0, 0])\nax.pcolor(data)\nwt.artists.set_ax_labels(xlabel=data.w2.label, ylabel=data.w1.label)\nax.grid()\nax.set_title('as taken', fontsize=20)\n\n# transformed\nax = plt.subplot(gs[0, 1])\ndata.transform(['wm', 'w1'])\ndata.convert('wn')\nax.pcolor(data)\nwt.artists.set_ax_labels(xlabel=data.wm.label, yticks=False)\nax.grid()\nax.set_title('transformed', fontsize=20)\n\n# colorbar\ncax = plt.subplot(gs[0, -1])\nwt.artists.plot_colorbar(cax, label='amplitude')\n", "path": "examples/fringes_transform.py"}]}
| 4,048 | 405 |
gh_patches_debug_5671
|
rasdani/github-patches
|
git_diff
|
projectmesa__mesa-539
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
epstein_civil_violence box doesn't fit grid
<img width="431" alt="screen shot 2018-04-01 at 10 05 11 pm" src="https://user-images.githubusercontent.com/166734/38180219-de2decf8-35f8-11e8-8d9b-562d2fb7c58b.png">
^^ Fix the outline grid on this model. The grid should be the same size as the outline.
</issue>
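
A likely explanation for the gap: 500 px spread over 40 cells is 12.5 px per cell, which does not render cleanly, so the drawn cells fall short of the 500 px outline. Any canvas size that 40 divides evenly avoids this; the patch below uses 480 px, giving exactly 12 px per cell:

```python
# 40 columns x 40 rows on a 480 x 480 px canvas -> exactly 12 px per cell,
# so the drawn cells line up with the canvas outline (500 px gave 12.5 px).
canvas_element = CanvasGrid(citizen_cop_portrayal, 40, 40, 480, 480)
```
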
<code>
[start of examples/epstein_civil_violence/civil_violence/server.py]
1 from mesa.visualization.ModularVisualization import ModularServer
2 from mesa.visualization.modules import CanvasGrid
3
4 from .model import CivilViolenceModel
5 from .agent import Citizen, Cop
6
7
8 COP_COLOR = "#000000"
9 AGENT_QUIET_COLOR = "#0066CC"
10 AGENT_REBEL_COLOR = "#CC0000"
11 JAIL_COLOR = "#757575"
12
13
14 def citizen_cop_portrayal(agent):
15 if agent is None:
16 return
17
18 portrayal = {"Shape": "circle",
19 "x": agent.pos[0], "y": agent.pos[1],
20 "Filled": "true"}
21
22 if type(agent) is Citizen:
23 color = AGENT_QUIET_COLOR if agent.condition == "Quiescent" else \
24 AGENT_REBEL_COLOR
25 color = JAIL_COLOR if agent.jail_sentence else color
26 portrayal["Color"] = color
27 portrayal["r"] = 0.8
28 portrayal["Layer"] = 0
29
30 elif type(agent) is Cop:
31 portrayal["Color"] = COP_COLOR
32 portrayal["r"] = 0.5
33 portrayal["Layer"] = 1
34 return portrayal
35
36
37 model_params = dict(height=40,
38 width=40,
39 citizen_density=.7,
40 cop_density=.074,
41 citizen_vision=7,
42 cop_vision=7,
43 legitimacy=.8,
44 max_jail_term=1000)
45
46 canvas_element = CanvasGrid(citizen_cop_portrayal, 40, 40, 500, 500)
47 server = ModularServer(CivilViolenceModel, [canvas_element],
48 "Epstein Civil Violence", model_params)
49
[end of examples/epstein_civil_violence/civil_violence/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/epstein_civil_violence/civil_violence/server.py b/examples/epstein_civil_violence/civil_violence/server.py
--- a/examples/epstein_civil_violence/civil_violence/server.py
+++ b/examples/epstein_civil_violence/civil_violence/server.py
@@ -43,6 +43,6 @@
legitimacy=.8,
max_jail_term=1000)
-canvas_element = CanvasGrid(citizen_cop_portrayal, 40, 40, 500, 500)
+canvas_element = CanvasGrid(citizen_cop_portrayal, 40, 40, 480, 480)
server = ModularServer(CivilViolenceModel, [canvas_element],
"Epstein Civil Violence", model_params)
|
{"golden_diff": "diff --git a/examples/epstein_civil_violence/civil_violence/server.py b/examples/epstein_civil_violence/civil_violence/server.py\n--- a/examples/epstein_civil_violence/civil_violence/server.py\n+++ b/examples/epstein_civil_violence/civil_violence/server.py\n@@ -43,6 +43,6 @@\n legitimacy=.8,\n max_jail_term=1000)\n \n-canvas_element = CanvasGrid(citizen_cop_portrayal, 40, 40, 500, 500)\n+canvas_element = CanvasGrid(citizen_cop_portrayal, 40, 40, 480, 480)\n server = ModularServer(CivilViolenceModel, [canvas_element],\n \"Epstein Civil Violence\", model_params)\n", "issue": "epstein_civil_violence box doesn't fit grid\n<img width=\"431\" alt=\"screen shot 2018-04-01 at 10 05 11 pm\" src=\"https://user-images.githubusercontent.com/166734/38180219-de2decf8-35f8-11e8-8d9b-562d2fb7c58b.png\">\r\n\r\n^^ Fix the outline grid on this model. The grid should be the same size as the outline.\n", "before_files": [{"content": "from mesa.visualization.ModularVisualization import ModularServer\nfrom mesa.visualization.modules import CanvasGrid\n\nfrom .model import CivilViolenceModel\nfrom .agent import Citizen, Cop\n\n\nCOP_COLOR = \"#000000\"\nAGENT_QUIET_COLOR = \"#0066CC\"\nAGENT_REBEL_COLOR = \"#CC0000\"\nJAIL_COLOR = \"#757575\"\n\n\ndef citizen_cop_portrayal(agent):\n if agent is None:\n return\n\n portrayal = {\"Shape\": \"circle\",\n \"x\": agent.pos[0], \"y\": agent.pos[1],\n \"Filled\": \"true\"}\n\n if type(agent) is Citizen:\n color = AGENT_QUIET_COLOR if agent.condition == \"Quiescent\" else \\\n AGENT_REBEL_COLOR\n color = JAIL_COLOR if agent.jail_sentence else color\n portrayal[\"Color\"] = color\n portrayal[\"r\"] = 0.8\n portrayal[\"Layer\"] = 0\n\n elif type(agent) is Cop:\n portrayal[\"Color\"] = COP_COLOR\n portrayal[\"r\"] = 0.5\n portrayal[\"Layer\"] = 1\n return portrayal\n\n\nmodel_params = dict(height=40,\n width=40,\n citizen_density=.7,\n cop_density=.074,\n citizen_vision=7,\n cop_vision=7,\n legitimacy=.8,\n max_jail_term=1000)\n\ncanvas_element = CanvasGrid(citizen_cop_portrayal, 40, 40, 500, 500)\nserver = ModularServer(CivilViolenceModel, [canvas_element],\n \"Epstein Civil Violence\", model_params)\n", "path": "examples/epstein_civil_violence/civil_violence/server.py"}]}
| 1,150 | 193 |
gh_patches_debug_9767
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-7096
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add deprecation warning to `meltano ui` command
Meltano UI is deprecated, and is scheduled for removal in Meltano v3. We should make that known to all who currently use the UI by printing a clear warning message when they run `meltano ui` that states that:
- it is *currently* deprecated
- it will be removed in Meltano v3
Relates to https://github.com/meltano/internal-general/discussions/460
Migration strategies likely fall outside the scope of this issue. From a chat with @aaronsteers on 2022-12-12 the priority will be providing feature-parity with equal-or-better UX via the CLI.
A UI as part of Meltano Cloud (possibly accessible after `meltano login` even if not running workloads on Meltano Cloud) may be available in the future for users who absolutely love UIs and have no interest in moving away from the Meltano UI to the CLI, but that's more or less entirely unplanned, so no promises at this point.
Because we cannot make promises about what we'll be doing to replace the UI, at this point I recommend we keep the deprecation warning minimal and fact-based. Doing so may lead to users asking (many) questions about the impending removal on Slack. This will be a good opportunity for us to discuss with them to figure out what the best path forward will be, i.e. we can ask them why they like the UI, if they'd be happy with the CLI, etc.
Once we've got a more concrete idea for what comes next (and likely after it has been implemented and released), we can update the deprecation warning to advertise it.
@sbalnojan @afolson @tayloramurphy
</issue>
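
One minimal shape for such a warning (the record's actual patch, shown further down, draws a red boxed banner instead) is a small helper called as the first statement of the `ui` group callback, so every `meltano ui` invocation prints it. The helper name below is ours, not part of Meltano; this is a sketch, not the final wording:

```python
import click


def _warn_ui_deprecated() -> None:
    """Print the UI deprecation notice before any `meltano ui` subcommand runs."""
    click.secho(
        "Meltano UI is deprecated and is scheduled for removal in Meltano v3.",
        fg="yellow",
        err=True,
    )


# Called at the top of the `ui(ctx, project)` group callback defined in
# src/meltano/cli/ui.py, before `ctx.obj["project"] = project`.
```
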
<code>
[start of src/meltano/cli/ui.py]
1 """Meltano UI CLI."""
2
3 from __future__ import annotations
4
5 import logging
6 import os
7 import secrets
8 import signal
9
10 import click
11
12 from meltano.api.workers import APIWorker, UIAvailableWorker
13 from meltano.cli import cli
14 from meltano.cli.params import pass_project
15 from meltano.cli.utils import CliError, InstrumentedCmd, InstrumentedDefaultGroup
16 from meltano.core.project import Project
17 from meltano.core.project_settings_service import (
18 ProjectSettingsService,
19 SettingValueStore,
20 )
21
22 logger = logging.getLogger(__name__)
23
24
25 def ensure_secure_setup(project: Project):
26 """Verify UI security settings."""
27 settings_service = ProjectSettingsService(project)
28
29 if not settings_service.get("ui.authentication"):
30 return
31
32 facts = []
33 if (
34 settings_service.get("ui.server_name") is None
35 and settings_service.get("ui.session_cookie_domain") is None
36 ):
37 facts.append(
38 "- Neither the 'ui.server_name' or 'ui.session_cookie_domain' setting has been set"
39 )
40
41 secure_settings = ["ui.secret_key", "ui.password_salt"]
42 for setting_name in secure_settings:
43 value, source = settings_service.get_with_source(setting_name)
44 if source is SettingValueStore.DEFAULT:
45 facts.append(
46 f"- The '{setting_name}' setting has not been changed from the default test value"
47 )
48
49 if facts:
50 click.secho(
51 "Authentication is enabled, but your configuration is currently insecure:",
52 fg="red",
53 )
54 for fact in facts:
55 click.echo(fact)
56 click.echo(
57 "For more information about these settings and how to set them, visit "
58 "https://docs.meltano.com/reference/settings#uiauthentication"
59 )
60 click.echo()
61
62
63 def start_workers(workers):
64 """Start UI background workers."""
65
66 def stop_all():
67 logger.info("Stopping all background workers...")
68 for worker in workers:
69 worker.stop()
70
71 # start all workers
72 for worker in workers:
73 worker.start()
74
75 return stop_all
76
77
78 @cli.group(
79 cls=InstrumentedDefaultGroup,
80 default="start",
81 default_if_no_args=True,
82 short_help="Start the Meltano UI webserver.",
83 )
84 @pass_project(migrate=True)
85 @click.pass_context
86 def ui(ctx, project: Project):
87 """
88 Start the Meltano UI webserver.
89
90 \b\nRead more at https://docs.meltano.com/reference/command-line-interface#ui
91 """
92 ctx.obj["project"] = project
93
94
95 @ui.command(cls=InstrumentedCmd, short_help="Start the Meltano UI webserver.")
96 @click.option("--reload", is_flag=True, default=False)
97 @click.option("--bind", help="The hostname (or IP address) to bind on")
98 @click.option("--bind-port", help="Port to run webserver on", type=int)
99 @click.pass_context
100 def start(ctx, reload, bind, bind_port):
101 """Start the Meltano UI webserver."""
102 if bind:
103 ProjectSettingsService.config_override["ui.bind_host"] = bind
104 if bind_port:
105 ProjectSettingsService.config_override["ui.bind_port"] = bind_port
106
107 project: Project = ctx.obj["project"]
108 ensure_secure_setup(project)
109
110 workers = []
111
112 workers.append(UIAvailableWorker(project))
113 workers.append(
114 APIWorker(project, reload=reload or os.getenv("FLASK_ENV") == "development")
115 )
116
117 cleanup = start_workers(workers)
118
119 def handle_terminate(signal, frame): # noqa: WPS442
120 cleanup()
121
122 signal.signal(signal.SIGTERM, handle_terminate)
123 logger.info("All workers started.")
124
125
126 @ui.command(
127 cls=InstrumentedCmd, short_help="Generate and store server name and secrets."
128 )
129 @click.argument("server_name")
130 @click.option(
131 "--bits",
132 default=256, # noqa: WPS432
133 help="Specify the size of secrets in bits in the system DB (default 256)",
134 )
135 @click.pass_context
136 def setup(ctx, server_name, **flags):
137 """
138 Generate and store server name and secrets.
139
140 WARNING\n
141 Regenerating secrets will cause the following:
142
143 - All passwords will be invalid\n
144 - All sessions will be expired\n
145
146 Use with caution!
147 """
148 project = ctx.obj["project"]
149 settings_service = ProjectSettingsService(project)
150
151 def set_setting_env(setting_name, value):
152 settings_service.set(setting_name, value, store=SettingValueStore.DOTENV)
153
154 set_setting_env("ui.server_name", server_name)
155
156 ui_cfg_path = project.root_dir("ui.cfg")
157 if ui_cfg_path.exists():
158 raise CliError(
159 f"Found existing secrets in file '{ui_cfg_path}'. Please delete this file and rerun this command to regenerate the secrets."
160 )
161
162 def generate_secret():
163 return secrets.token_hex(int(flags["bits"] / 8)) # in bytes
164
165 secret_settings = ["ui.secret_key", "ui.password_salt"]
166 for setting_name in secret_settings:
167 value, source = settings_service.get_with_source(setting_name)
168 if source is not SettingValueStore.DEFAULT:
169 click.echo(
170 f"Setting '{setting_name}' has already been set in {source.label}. Please unset it manually and rerun this command to regenerate this secret."
171 )
172 else:
173 set_setting_env(setting_name, generate_secret())
174
175 click.echo(
176 "The server name and generated secrets have been stored in your project's `.env` file."
177 )
178 click.echo(
179 "In production, you will likely want to move these settings to actual environment variables, since `.env` is in `.gitignore` by default."
180 )
181
[end of src/meltano/cli/ui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/meltano/cli/ui.py b/src/meltano/cli/ui.py
--- a/src/meltano/cli/ui.py
+++ b/src/meltano/cli/ui.py
@@ -90,6 +90,13 @@
\b\nRead more at https://docs.meltano.com/reference/command-line-interface#ui
"""
ctx.obj["project"] = project
+ click.secho(
+ "\n"
+ "┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓\n"
+ "┃ The Meltano UI is deprecated, and scheduled for removal in Meltano 3.0 ┃\n"
+ "┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛\n",
+ fg="red",
+ )
@ui.command(cls=InstrumentedCmd, short_help="Start the Meltano UI webserver.")
|
{"golden_diff": "diff --git a/src/meltano/cli/ui.py b/src/meltano/cli/ui.py\n--- a/src/meltano/cli/ui.py\n+++ b/src/meltano/cli/ui.py\n@@ -90,6 +90,13 @@\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#ui\n \"\"\"\n ctx.obj[\"project\"] = project\n+ click.secho(\n+ \"\\n\"\n+ \"\u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\\n\"\n+ \"\u2503 The Meltano UI is deprecated, and scheduled for removal in Meltano 3.0 \u2503\\n\"\n+ \"\u2517\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u251b\\n\",\n+ fg=\"red\",\n+ )\n \n \n @ui.command(cls=InstrumentedCmd, short_help=\"Start the Meltano UI webserver.\")\n", "issue": "Add deprecation warning to `meltano ui` command\nMeltano UI is deprecated, and is scheduled for removal in Meltano v3. We should make that known to all who currently use the UI by printing a clear warning message when they run `meltano ui` that states that:\r\n- it is *currently* deprecated\r\n- it will be removed in Meltano v3\r\n\r\nRelates to https://github.com/meltano/internal-general/discussions/460\r\n\r\nMigration strategies likely fall outside the scope of this issue. From a chat with @aaronsteers on 2022-12-12 the priority will be providing feature-parity with equal-or-better UX via the CLI.\r\n\r\nA UI as part of Meltano Cloud (possibly accessible after `meltano login` even if not running workloads on Meltano Cloud) may be available in the future for users who absolutely love UIs and have no interest in moving away from the Meltano UI to the CLI, but that's more or less entirely unplanned, so no promises at this point.\r\n\r\nBecause we cannot make promises about what we'll be doing to replace the UI, at this point I recommend we keep the deprecation warning minimal and fact-based. Doing so may lead to users asking (many) questions about the impending removal on Slack. This will be a good opportunity for us to discuss with them to figure out what the best path forward will be, i.e. 
we can ask them why they like the UI, if they'd be happy with the CLI, etc.\r\n\r\nOnce we've got a more concrete idea for what comes next (and likely after it has been implemented and released), we can update the deprecation warning to advertise it.\r\n\r\n@sbalnojan @afolson @tayloramurphy \n", "before_files": [{"content": "\"\"\"Meltano UI CLI.\"\"\"\n\nfrom __future__ import annotations\n\nimport logging\nimport os\nimport secrets\nimport signal\n\nimport click\n\nfrom meltano.api.workers import APIWorker, UIAvailableWorker\nfrom meltano.cli import cli\nfrom meltano.cli.params import pass_project\nfrom meltano.cli.utils import CliError, InstrumentedCmd, InstrumentedDefaultGroup\nfrom meltano.core.project import Project\nfrom meltano.core.project_settings_service import (\n ProjectSettingsService,\n SettingValueStore,\n)\n\nlogger = logging.getLogger(__name__)\n\n\ndef ensure_secure_setup(project: Project):\n \"\"\"Verify UI security settings.\"\"\"\n settings_service = ProjectSettingsService(project)\n\n if not settings_service.get(\"ui.authentication\"):\n return\n\n facts = []\n if (\n settings_service.get(\"ui.server_name\") is None\n and settings_service.get(\"ui.session_cookie_domain\") is None\n ):\n facts.append(\n \"- Neither the 'ui.server_name' or 'ui.session_cookie_domain' setting has been set\"\n )\n\n secure_settings = [\"ui.secret_key\", \"ui.password_salt\"]\n for setting_name in secure_settings:\n value, source = settings_service.get_with_source(setting_name)\n if source is SettingValueStore.DEFAULT:\n facts.append(\n f\"- The '{setting_name}' setting has not been changed from the default test value\"\n )\n\n if facts:\n click.secho(\n \"Authentication is enabled, but your configuration is currently insecure:\",\n fg=\"red\",\n )\n for fact in facts:\n click.echo(fact)\n click.echo(\n \"For more information about these settings and how to set them, visit \"\n \"https://docs.meltano.com/reference/settings#uiauthentication\"\n )\n click.echo()\n\n\ndef start_workers(workers):\n \"\"\"Start UI background workers.\"\"\"\n\n def stop_all():\n logger.info(\"Stopping all background workers...\")\n for worker in workers:\n worker.stop()\n\n # start all workers\n for worker in workers:\n worker.start()\n\n return stop_all\n\n\[email protected](\n cls=InstrumentedDefaultGroup,\n default=\"start\",\n default_if_no_args=True,\n short_help=\"Start the Meltano UI webserver.\",\n)\n@pass_project(migrate=True)\[email protected]_context\ndef ui(ctx, project: Project):\n \"\"\"\n Start the Meltano UI webserver.\n\n \\b\\nRead more at https://docs.meltano.com/reference/command-line-interface#ui\n \"\"\"\n ctx.obj[\"project\"] = project\n\n\[email protected](cls=InstrumentedCmd, short_help=\"Start the Meltano UI webserver.\")\[email protected](\"--reload\", is_flag=True, default=False)\[email protected](\"--bind\", help=\"The hostname (or IP address) to bind on\")\[email protected](\"--bind-port\", help=\"Port to run webserver on\", type=int)\[email protected]_context\ndef start(ctx, reload, bind, bind_port):\n \"\"\"Start the Meltano UI webserver.\"\"\"\n if bind:\n ProjectSettingsService.config_override[\"ui.bind_host\"] = bind\n if bind_port:\n ProjectSettingsService.config_override[\"ui.bind_port\"] = bind_port\n\n project: Project = ctx.obj[\"project\"]\n ensure_secure_setup(project)\n\n workers = []\n\n workers.append(UIAvailableWorker(project))\n workers.append(\n APIWorker(project, reload=reload or os.getenv(\"FLASK_ENV\") == \"development\")\n )\n\n cleanup = 
start_workers(workers)\n\n def handle_terminate(signal, frame): # noqa: WPS442\n cleanup()\n\n signal.signal(signal.SIGTERM, handle_terminate)\n logger.info(\"All workers started.\")\n\n\[email protected](\n cls=InstrumentedCmd, short_help=\"Generate and store server name and secrets.\"\n)\[email protected](\"server_name\")\[email protected](\n \"--bits\",\n default=256, # noqa: WPS432\n help=\"Specify the size of secrets in bits in the system DB (default 256)\",\n)\[email protected]_context\ndef setup(ctx, server_name, **flags):\n \"\"\"\n Generate and store server name and secrets.\n\n WARNING\\n\n Regenerating secrets will cause the following:\n\n - All passwords will be invalid\\n\n - All sessions will be expired\\n\n\n Use with caution!\n \"\"\"\n project = ctx.obj[\"project\"]\n settings_service = ProjectSettingsService(project)\n\n def set_setting_env(setting_name, value):\n settings_service.set(setting_name, value, store=SettingValueStore.DOTENV)\n\n set_setting_env(\"ui.server_name\", server_name)\n\n ui_cfg_path = project.root_dir(\"ui.cfg\")\n if ui_cfg_path.exists():\n raise CliError(\n f\"Found existing secrets in file '{ui_cfg_path}'. Please delete this file and rerun this command to regenerate the secrets.\"\n )\n\n def generate_secret():\n return secrets.token_hex(int(flags[\"bits\"] / 8)) # in bytes\n\n secret_settings = [\"ui.secret_key\", \"ui.password_salt\"]\n for setting_name in secret_settings:\n value, source = settings_service.get_with_source(setting_name)\n if source is not SettingValueStore.DEFAULT:\n click.echo(\n f\"Setting '{setting_name}' has already been set in {source.label}. Please unset it manually and rerun this command to regenerate this secret.\"\n )\n else:\n set_setting_env(setting_name, generate_secret())\n\n click.echo(\n \"The server name and generated secrets have been stored in your project's `.env` file.\"\n )\n click.echo(\n \"In production, you will likely want to move these settings to actual environment variables, since `.env` is in `.gitignore` by default.\"\n )\n", "path": "src/meltano/cli/ui.py"}]}
| 2,584 | 244 |
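For illustration of the banner pattern used in the patch above: a minimal, self-contained sketch built on `click.secho`. The command name and warning text here are hypothetical stand-ins rather than Meltano's actual wording.

```python
import click


@click.group(invoke_without_command=True)
def ui():
    """Hypothetical command group mirroring the banner pattern in the patch above."""
    # secho colors the output; fg="red" matches the style used in the golden diff.
    click.secho(
        "WARNING: this command is deprecated and is scheduled for removal.",
        fg="red",
    )


if __name__ == "__main__":
    ui()
```

Because of `invoke_without_command=True`, the group callback runs even when no subcommand is given, which is one way to guarantee the warning appears on every invocation.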
gh_patches_debug_39517
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-55
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for interfaces
We should be able to define interfaces with strawberry, something like this:
```python
@strawberry.interface
class Node:
id: strawberry.ID
```
</issue>
<code>
[start of strawberry/type.py]
1 import typing
2 from functools import partial
3
4 from dataclasses import dataclass
5 from graphql import (
6 GraphQLField,
7 GraphQLInputField,
8 GraphQLInputObjectType,
9 GraphQLObjectType,
10 )
11 from graphql.utilities.schema_printer import print_type
12
13 from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT
14 from .type_converter import REGISTRY, get_graphql_type_for_annotation
15 from .utils.str_converters import to_camel_case
16
17
18 def _get_resolver(cls, field_name):
19 def _resolver(obj, info):
20 # TODO: can we make this nicer?
21 # does it work in all the cases?
22
23 field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)
24
25 if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):
26 return field_resolver(obj, info)
27
28 return field_resolver
29
30 return _resolver
31
32
33 def _convert_annotations_fields(cls, *, is_input=False):
34 FieldClass = GraphQLInputField if is_input else GraphQLField
35 annotations = typing.get_type_hints(cls, None, REGISTRY)
36
37 fields = {}
38
39 for key, annotation in annotations.items():
40 field_name = to_camel_case(key)
41 class_field = getattr(cls, key, None)
42
43 description = getattr(class_field, "description", None)
44
45 fields[field_name] = FieldClass(
46 get_graphql_type_for_annotation(annotation, key),
47 description=description,
48 **({} if is_input else {"resolve": _get_resolver(cls, key)})
49 )
50
51 return fields
52
53
54 def _process_type(cls, *, is_input=False, description=None):
55 name = cls.__name__
56 REGISTRY[name] = cls
57
58 def repr_(self):
59 return print_type(self.field)
60
61 setattr(cls, "__repr__", repr_)
62
63 def _get_fields():
64 fields = _convert_annotations_fields(cls, is_input=is_input)
65
66 fields.update(
67 {
68 to_camel_case(key): value.field
69 for key, value in cls.__dict__.items()
70 if getattr(value, IS_STRAWBERRY_FIELD, False)
71 }
72 )
73
74 return fields
75
76 if is_input:
77 setattr(cls, IS_STRAWBERRY_INPUT, True)
78
79 extra_kwargs = {"description": description or cls.__doc__}
80
81 TypeClass = GraphQLInputObjectType if is_input else GraphQLObjectType
82 cls.field = TypeClass(name, lambda: _get_fields(), **extra_kwargs)
83
84 return dataclass(cls, repr=False)
85
86
87 def type(cls=None, *, is_input=False, description=None):
88 """Annotates a class as a GraphQL type.
89
90 Example usage:
91
92 >>> @strawberry.type:
93 >>> class X:
94 >>> field_abc: str = "ABC"
95 """
96
97 def wrap(cls):
98 return _process_type(cls, is_input=is_input, description=description)
99
100 if cls is None:
101 return wrap
102
103 return wrap(cls)
104
105
106 input = partial(type, is_input=True)
107
[end of strawberry/type.py]
[start of strawberry/constants.py]
1 IS_STRAWBERRY_FIELD = "_is_strawberry_field"
2 IS_STRAWBERRY_INPUT = "_is_strawberry_input"
3
[end of strawberry/constants.py]
[start of strawberry/__init__.py]
1 __version__ = "0.1.0"
2
3
4 from .enum import enum # noqa
5 from .field import field # noqa
6 from .mutation import mutation, subscription # noqa
7 from .scalars import ID # noqa
8 from .schema import Schema # noqa
9 from .type import input, type # noqa
10
[end of strawberry/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/strawberry/__init__.py b/strawberry/__init__.py
--- a/strawberry/__init__.py
+++ b/strawberry/__init__.py
@@ -6,4 +6,4 @@
from .mutation import mutation, subscription # noqa
from .scalars import ID # noqa
from .schema import Schema # noqa
-from .type import input, type # noqa
+from .type import input, type, interface # noqa
diff --git a/strawberry/constants.py b/strawberry/constants.py
--- a/strawberry/constants.py
+++ b/strawberry/constants.py
@@ -1,2 +1,3 @@
IS_STRAWBERRY_FIELD = "_is_strawberry_field"
IS_STRAWBERRY_INPUT = "_is_strawberry_input"
+IS_STRAWBERRY_INTERFACE = "_is_strawberry_interface"
diff --git a/strawberry/type.py b/strawberry/type.py
--- a/strawberry/type.py
+++ b/strawberry/type.py
@@ -6,11 +6,12 @@
GraphQLField,
GraphQLInputField,
GraphQLInputObjectType,
+ GraphQLInterfaceType,
GraphQLObjectType,
)
from graphql.utilities.schema_printer import print_type
-from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT
+from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE
from .type_converter import REGISTRY, get_graphql_type_for_annotation
from .utils.str_converters import to_camel_case
@@ -51,7 +52,7 @@
return fields
-def _process_type(cls, *, is_input=False, description=None):
+def _process_type(cls, *, is_input=False, is_interface=False, description=None):
name = cls.__name__
REGISTRY[name] = cls
@@ -75,16 +76,30 @@
if is_input:
setattr(cls, IS_STRAWBERRY_INPUT, True)
+ elif is_interface:
+ setattr(cls, IS_STRAWBERRY_INTERFACE, True)
extra_kwargs = {"description": description or cls.__doc__}
- TypeClass = GraphQLInputObjectType if is_input else GraphQLObjectType
+ if is_input:
+ TypeClass = GraphQLInputObjectType
+ elif is_interface:
+ TypeClass = GraphQLInterfaceType
+ else:
+ TypeClass = GraphQLObjectType
+
+ extra_kwargs["interfaces"] = [
+ klass.field
+ for klass in cls.__bases__
+ if hasattr(klass, IS_STRAWBERRY_INTERFACE)
+ ]
+
cls.field = TypeClass(name, lambda: _get_fields(), **extra_kwargs)
return dataclass(cls, repr=False)
-def type(cls=None, *, is_input=False, description=None):
+def type(cls=None, *, is_input=False, is_interface=False, description=None):
"""Annotates a class as a GraphQL type.
Example usage:
@@ -95,7 +110,9 @@
"""
def wrap(cls):
- return _process_type(cls, is_input=is_input, description=description)
+ return _process_type(
+ cls, is_input=is_input, is_interface=is_interface, description=description
+ )
if cls is None:
return wrap
@@ -104,3 +121,4 @@
input = partial(type, is_input=True)
+interface = partial(type, is_interface=True)
|
{"golden_diff": "diff --git a/strawberry/__init__.py b/strawberry/__init__.py\n--- a/strawberry/__init__.py\n+++ b/strawberry/__init__.py\n@@ -6,4 +6,4 @@\n from .mutation import mutation, subscription # noqa\n from .scalars import ID # noqa\n from .schema import Schema # noqa\n-from .type import input, type # noqa\n+from .type import input, type, interface # noqa\ndiff --git a/strawberry/constants.py b/strawberry/constants.py\n--- a/strawberry/constants.py\n+++ b/strawberry/constants.py\n@@ -1,2 +1,3 @@\n IS_STRAWBERRY_FIELD = \"_is_strawberry_field\"\n IS_STRAWBERRY_INPUT = \"_is_strawberry_input\"\n+IS_STRAWBERRY_INTERFACE = \"_is_strawberry_interface\"\ndiff --git a/strawberry/type.py b/strawberry/type.py\n--- a/strawberry/type.py\n+++ b/strawberry/type.py\n@@ -6,11 +6,12 @@\n GraphQLField,\n GraphQLInputField,\n GraphQLInputObjectType,\n+ GraphQLInterfaceType,\n GraphQLObjectType,\n )\n from graphql.utilities.schema_printer import print_type\n \n-from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT\n+from .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT, IS_STRAWBERRY_INTERFACE\n from .type_converter import REGISTRY, get_graphql_type_for_annotation\n from .utils.str_converters import to_camel_case\n \n@@ -51,7 +52,7 @@\n return fields\n \n \n-def _process_type(cls, *, is_input=False, description=None):\n+def _process_type(cls, *, is_input=False, is_interface=False, description=None):\n name = cls.__name__\n REGISTRY[name] = cls\n \n@@ -75,16 +76,30 @@\n \n if is_input:\n setattr(cls, IS_STRAWBERRY_INPUT, True)\n+ elif is_interface:\n+ setattr(cls, IS_STRAWBERRY_INTERFACE, True)\n \n extra_kwargs = {\"description\": description or cls.__doc__}\n \n- TypeClass = GraphQLInputObjectType if is_input else GraphQLObjectType\n+ if is_input:\n+ TypeClass = GraphQLInputObjectType\n+ elif is_interface:\n+ TypeClass = GraphQLInterfaceType\n+ else:\n+ TypeClass = GraphQLObjectType\n+\n+ extra_kwargs[\"interfaces\"] = [\n+ klass.field\n+ for klass in cls.__bases__\n+ if hasattr(klass, IS_STRAWBERRY_INTERFACE)\n+ ]\n+\n cls.field = TypeClass(name, lambda: _get_fields(), **extra_kwargs)\n \n return dataclass(cls, repr=False)\n \n \n-def type(cls=None, *, is_input=False, description=None):\n+def type(cls=None, *, is_input=False, is_interface=False, description=None):\n \"\"\"Annotates a class as a GraphQL type.\n \n Example usage:\n@@ -95,7 +110,9 @@\n \"\"\"\n \n def wrap(cls):\n- return _process_type(cls, is_input=is_input, description=description)\n+ return _process_type(\n+ cls, is_input=is_input, is_interface=is_interface, description=description\n+ )\n \n if cls is None:\n return wrap\n@@ -104,3 +121,4 @@\n \n \n input = partial(type, is_input=True)\n+interface = partial(type, is_interface=True)\n", "issue": "Add support for interfaces\nWe should be able to define interfaces with strawberry, something like this:\r\n\r\n```python\r\n\r\[email protected]\r\nclass Node:\r\n id: strawberry.ID\r\n```\n", "before_files": [{"content": "import typing\nfrom functools import partial\n\nfrom dataclasses import dataclass\nfrom graphql import (\n GraphQLField,\n GraphQLInputField,\n GraphQLInputObjectType,\n GraphQLObjectType,\n)\nfrom graphql.utilities.schema_printer import print_type\n\nfrom .constants import IS_STRAWBERRY_FIELD, IS_STRAWBERRY_INPUT\nfrom .type_converter import REGISTRY, get_graphql_type_for_annotation\nfrom .utils.str_converters import to_camel_case\n\n\ndef _get_resolver(cls, field_name):\n def _resolver(obj, info):\n # TODO: can we make this nicer?\n # does it work 
in all the cases?\n\n field_resolver = getattr(cls(**(obj.__dict__ if obj else {})), field_name)\n\n if getattr(field_resolver, IS_STRAWBERRY_FIELD, False):\n return field_resolver(obj, info)\n\n return field_resolver\n\n return _resolver\n\n\ndef _convert_annotations_fields(cls, *, is_input=False):\n FieldClass = GraphQLInputField if is_input else GraphQLField\n annotations = typing.get_type_hints(cls, None, REGISTRY)\n\n fields = {}\n\n for key, annotation in annotations.items():\n field_name = to_camel_case(key)\n class_field = getattr(cls, key, None)\n\n description = getattr(class_field, \"description\", None)\n\n fields[field_name] = FieldClass(\n get_graphql_type_for_annotation(annotation, key),\n description=description,\n **({} if is_input else {\"resolve\": _get_resolver(cls, key)})\n )\n\n return fields\n\n\ndef _process_type(cls, *, is_input=False, description=None):\n name = cls.__name__\n REGISTRY[name] = cls\n\n def repr_(self):\n return print_type(self.field)\n\n setattr(cls, \"__repr__\", repr_)\n\n def _get_fields():\n fields = _convert_annotations_fields(cls, is_input=is_input)\n\n fields.update(\n {\n to_camel_case(key): value.field\n for key, value in cls.__dict__.items()\n if getattr(value, IS_STRAWBERRY_FIELD, False)\n }\n )\n\n return fields\n\n if is_input:\n setattr(cls, IS_STRAWBERRY_INPUT, True)\n\n extra_kwargs = {\"description\": description or cls.__doc__}\n\n TypeClass = GraphQLInputObjectType if is_input else GraphQLObjectType\n cls.field = TypeClass(name, lambda: _get_fields(), **extra_kwargs)\n\n return dataclass(cls, repr=False)\n\n\ndef type(cls=None, *, is_input=False, description=None):\n \"\"\"Annotates a class as a GraphQL type.\n\n Example usage:\n\n >>> @strawberry.type:\n >>> class X:\n >>> field_abc: str = \"ABC\"\n \"\"\"\n\n def wrap(cls):\n return _process_type(cls, is_input=is_input, description=description)\n\n if cls is None:\n return wrap\n\n return wrap(cls)\n\n\ninput = partial(type, is_input=True)\n", "path": "strawberry/type.py"}, {"content": "IS_STRAWBERRY_FIELD = \"_is_strawberry_field\"\nIS_STRAWBERRY_INPUT = \"_is_strawberry_input\"\n", "path": "strawberry/constants.py"}, {"content": "__version__ = \"0.1.0\"\n\n\nfrom .enum import enum # noqa\nfrom .field import field # noqa\nfrom .mutation import mutation, subscription # noqa\nfrom .scalars import ID # noqa\nfrom .schema import Schema # noqa\nfrom .type import input, type # noqa\n", "path": "strawberry/__init__.py"}]}
| 1,576 | 771 |
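For illustration of how the new decorator would be used once this patch is applied: a short sketch assuming the patched 0.1.0 code shown in this record, where `interface` is `partial(type, is_interface=True)` and `_process_type` scans a class's bases for `IS_STRAWBERRY_INTERFACE`. `Node` comes from the original issue; the `Post` type is a hypothetical addition.

```python
import strawberry


@strawberry.interface          # becomes a GraphQLInterfaceType under the patch
class Node:
    id: strawberry.ID


@strawberry.type
class Post(Node):              # inheriting from Node fills extra_kwargs["interfaces"]
    # `id` is inherited from Node via typing.get_type_hints; only new fields go here.
    title: str
```

Under the patched `_process_type`, `Post.__bases__` contains `Node`, which carries the `IS_STRAWBERRY_INTERFACE` marker, so the generated `GraphQLObjectType` for `Post` lists `Node` among its interfaces.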
gh_patches_debug_16
|
rasdani/github-patches
|
git_diff
|
OCHA-DAP__hdx-ckan-1401
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The MailChimp subscribe field could use a little bit more padding-left
Right now the input text is too close to the left border. It would be nice to add some padding there.

</issue>
<code>
[start of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
1 hdx_version = 'v0.3.9'
2
[end of ckanext-hdx_theme/ckanext/hdx_theme/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py
+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py
@@ -1 +1 @@
-hdx_version = 'v0.3.9'
+hdx_version = 'v0.3.10'
|
{"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.3.9'\n+hdx_version = 'v0.3.10'\n", "issue": "The MailChimp subscribe field could use a little bit more padding-left\nRight now the input text is too close to the left border. It would be nice to add some padding there. \n\n\n\n", "before_files": [{"content": "hdx_version = 'v0.3.9'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]}
| 691 | 107 |
gh_patches_debug_25906
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-341
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Return type ambiguous when extracting image features
The return type of the extracted image features (if the features file is present or not) is different. If the file is present then it returns a numpy.ndarray type object otherwise it returns a torch.autograd.variable.Variable object.
( https://github.com/facebookresearch/ParlAI/blob/3d86ccdbb4d87002cc6c4782afd0ee5277e742f1/parlai/core/image_featurizers.py#L149 )
</issue>
<code>
[start of parlai/tasks/vqa_v2/agents.py]
1 # Copyright (c) 2017-present, Facebook, Inc.
2 # All rights reserved.
3 # This source code is licensed under the BSD-style license found in the
4 # LICENSE file in the root directory of this source tree. An additional grant
5 # of patent rights can be found in the PATENTS file in the same directory.
6
7 from parlai.core.agents import Teacher
8 from parlai.core.image_featurizers import ImageLoader
9 from .build import build, buildImage
10
11 import json
12 import random
13 import os
14
15
16 def _path(opt):
17 build(opt)
18 buildImage(opt)
19 dt = opt['datatype'].split(':')[0]
20
21 if dt == 'train':
22 ques_suffix = 'v2_OpenEnded_mscoco_train2014'
23 annotation_suffix = 'v2_mscoco_train2014'
24 img_suffix = os.path.join('train2014', 'COCO_train2014_')
25 elif dt == 'valid':
26 ques_suffix = 'v2_OpenEnded_mscoco_val2014'
27 annotation_suffix = 'v2_mscoco_val2014'
28 img_suffix = os.path.join('val2014', 'COCO_val2014_')
29 elif dt == 'test':
30 ques_suffix = 'v2_OpenEnded_mscoco_test2015'
31 annotation_suffix = 'None'
32 img_suffix = os.path.join('test2015', 'COCO_test2015_')
33 else:
34 raise RuntimeError('Not valid datatype.')
35
36 data_path = os.path.join(opt['datapath'], 'VQA-v2',
37 ques_suffix + '_questions.json')
38
39 annotation_path = os.path.join(opt['datapath'], 'VQA-v2',
40 annotation_suffix + '_annotations.json')
41
42 image_path = os.path.join(opt['datapath'], 'COCO-IMG', img_suffix)
43
44 return data_path, annotation_path, image_path
45
46
47 class OeTeacher(Teacher):
48 """VQA v2.0 Open-Ended teacher, which loads the json VQA data and
49 implements its own `act` method for interacting with student agent.
50 agent.
51 """
52 def __init__(self, opt, shared=None):
53 super().__init__(opt)
54 self.datatype = opt['datatype']
55 data_path, annotation_path, self.image_path = _path(opt)
56
57 if shared and 'ques' in shared:
58 self.ques = shared['ques']
59 if 'annotation' in shared:
60 self.annotation = shared['annotation']
61 else:
62 self._setup_data(data_path, annotation_path)
63 self.len = len(self.ques['questions'])
64
65 # for ordered data in batch mode (especially, for validation and
66 # testing), each teacher in the batch gets a start index and a step
67 # size so they all process disparate sets of the data
68 self.step_size = opt.get('batchsize', 1)
69 self.data_offset = opt.get('batchindex', 0)
70 self.image_loader = ImageLoader(opt)
71
72 self.reset()
73
74 def __len__(self):
75 return self.len
76
77 def reset(self):
78 # Reset the dialog so that it is at the start of the epoch,
79 # and all metrics are reset.
80 super().reset()
81 self.lastY = None
82 self.episode_idx = self.data_offset - self.step_size
83
84 def observe(self, observation):
85 """Process observation for metrics."""
86 if self.lastY is not None:
87 self.metrics.update(observation, self.lastY)
88 self.lastY = None
89 return observation
90
91 def act(self):
92 if self.datatype == 'train':
93 self.episode_idx = random.randrange(self.len)
94 else:
95 self.episode_idx = (self.episode_idx + self.step_size) % len(self)
96 if self.episode_idx == len(self) - self.step_size:
97 self.epochDone = True
98
99 qa = self.ques['questions'][self.episode_idx]
100 question = qa['question']
101 image_id = qa['image_id']
102
103 img_path = self.image_path + '%012d.jpg' % (image_id)
104
105 action = {
106 'image': self.image_loader.load(img_path),
107 'text': question,
108 'episode_done': True
109 }
110
111 if not self.datatype.startswith('test'):
112 anno = self.annotation['annotations'][self.episode_idx]
113 self.lastY = [ans['answer'] for ans in anno['answers']]
114
115 if self.datatype.startswith('train'):
116 action['labels'] = self.lastY
117
118 return action
119
120 def share(self):
121 shared = super().share()
122 shared['ques'] = self.ques
123 if hasattr(self, 'annotation'):
124 shared['annotation'] = self.annotation
125 return shared
126
127 def _setup_data(self, data_path, annotation_path):
128 print('loading: ' + data_path)
129 with open(data_path) as data_file:
130 self.ques = json.load(data_file)
131
132 if self.datatype != 'test':
133 print('loading: ' + annotation_path)
134 with open(annotation_path) as data_file:
135 self.annotation = json.load(data_file)
136
137
138 class DefaultTeacher(OeTeacher):
139 pass
140
[end of parlai/tasks/vqa_v2/agents.py]
[start of parlai/tasks/vqa_v1/agents.py]
1 # Copyright (c) 2017-present, Facebook, Inc.
2 # All rights reserved.
3 # This source code is licensed under the BSD-style license found in the
4 # LICENSE file in the root directory of this source tree. An additional grant
5 # of patent rights can be found in the PATENTS file in the same directory.
6
7 from parlai.core.agents import Teacher
8 from parlai.core.image_featurizers import ImageLoader
9 from .build import build, buildImage
10
11 import json
12 import random
13 import os
14
15
16 def _path(opt):
17 build(opt)
18 buildImage(opt)
19 dt = opt['datatype'].split(':')[0]
20
21 if dt == 'train':
22 ques_suffix = 'MultipleChoice_mscoco_train2014'
23 annotation_suffix = 'mscoco_train2014'
24 img_suffix = os.path.join('train2014', 'COCO_train2014_')
25 elif dt == 'valid':
26 ques_suffix = 'MultipleChoice_mscoco_val2014'
27 annotation_suffix = 'mscoco_val2014'
28 img_suffix = os.path.join('val2014', 'COCO_val2014_')
29 elif dt == 'test':
30 ques_suffix = 'MultipleChoice_mscoco_test2015'
31 annotation_suffix = 'None'
32 img_suffix = os.path.join('test2015', 'COCO_test2015_')
33 else:
34 raise RuntimeError('Not valid datatype.')
35
36 data_path = os.path.join(opt['datapath'], 'VQA-v1',
37 ques_suffix + '_questions.json')
38
39 annotation_path = os.path.join(opt['datapath'], 'VQA-v1',
40 annotation_suffix + '_annotations.json')
41
42 image_path = os.path.join(opt['datapath'], 'COCO-IMG', img_suffix)
43
44 return data_path, annotation_path, image_path
45
46
47 class OeTeacher(Teacher):
48 """
49 VQA Open-Ended teacher, which loads the json vqa data and implements its
50 own `act` method for interacting with student agent.
51 """
52 def __init__(self, opt, shared=None):
53 super().__init__(opt, shared)
54 self.datatype = opt['datatype']
55 data_path, annotation_path, self.image_path = _path(opt)
56
57 if shared and 'ques' in shared:
58 self.ques = shared['ques']
59 if 'annotation' in shared:
60 self.annotation = shared['annotation']
61 else:
62 self._setup_data(data_path, annotation_path)
63
64 # for ordered data in batch mode (especially, for validation and
65 # testing), each teacher in the batch gets a start index and a step
66 # size so they all process disparate sets of the data
67 self.step_size = opt.get('batchsize', 1)
68 self.data_offset = opt.get('batchindex', 0)
69 self.image_loader = ImageLoader(opt)
70 self.reset()
71
72 def __len__(self):
73 return len(self.ques['questions'])
74
75 def reset(self):
76 # Reset the dialog so that it is at the start of the epoch,
77 # and all metrics are reset.
78 super().reset()
79 self.lastY = None
80 self.episode_idx = self.data_offset - self.step_size
81
82 def observe(self, observation):
83 """Process observation for metrics."""
84 if self.lastY is not None:
85 self.metrics.update(observation, self.lastY)
86 self.lastY = None
87 return observation
88
89 def act(self):
90 if self.datatype == 'train':
91 self.episode_idx = random.randrange(len(self))
92 else:
93 self.episode_idx = (self.episode_idx + self.step_size) % len(self)
94 if self.episode_idx == len(self) - self.step_size:
95 self.epochDone = True
96
97 qa = self.ques['questions'][self.episode_idx]
98 question = qa['question']
99 image_id = qa['image_id']
100
101 img_path = self.image_path + '%012d.jpg' % (image_id)
102
103 action = {
104 'image': self.image_loader.load(img_path),
105 'text': question,
106 'episode_done': True
107 }
108
109 if not self.datatype.startswith('test'):
110 anno = self.annotation['annotations'][self.episode_idx]
111 self.lastY = [ans['answer'] for ans in anno['answers']]
112
113 if self.datatype.startswith('train'):
114 action['labels'] = self.lastY
115
116 return action
117
118 def share(self):
119 shared = super().share()
120 shared['ques'] = self.ques
121 if hasattr(self, 'annotation'):
122 shared['annotation'] = self.annotation
123 return shared
124
125 def _setup_data(self, data_path, annotation_path):
126 print('loading: ' + data_path)
127 with open(data_path) as data_file:
128 self.ques = json.load(data_file)
129
130 if self.datatype != 'test':
131 print('loading: ' + annotation_path)
132 with open(annotation_path) as data_file:
133 self.annotation = json.load(data_file)
134
135
136 class McTeacher(OeTeacher):
137 """
138 VQA Multiple-Choice teacher, which inherits from OeTeacher but overrides
139 the label and label_candidates fields with multiple choice data.
140 """
141
142 def act(self):
143 action = super().act()
144
145 qa = self.ques['questions'][self.episode_idx]
146 multiple_choices = qa['multiple_choices']
147
148 action['label_candidates'] = multiple_choices
149
150 if not self.datatype.startswith('test'):
151 anno = self.annotation['annotations'][self.episode_idx]
152 self.lastY = [anno['multiple_choice_answer']]
153
154 if self.datatype.startswith('train'):
155 action['labels'] = self.lastY
156
157 return action
158
159
160 class DefaultTeacher(McTeacher):
161 # default to Multiple-Choice Teacher
162 pass
163
[end of parlai/tasks/vqa_v1/agents.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parlai/tasks/vqa_v1/agents.py b/parlai/tasks/vqa_v1/agents.py
--- a/parlai/tasks/vqa_v1/agents.py
+++ b/parlai/tasks/vqa_v1/agents.py
@@ -157,6 +157,30 @@
return action
+class AllTeacher(OeTeacher):
+ """
+ VQA Teacher, which inherits from OeTeacher and gives access to
+ the multiple choices and the multiple choice answer.
+ """
+
+ def act(self):
+ action = super().act()
+
+ qa = self.ques['questions'][self.episode_idx]
+ multiple_choices = qa['multiple_choices']
+
+ action['label_candidates'] = multiple_choices
+
+ if not self.datatype.startswith('test'):
+ anno = self.annotation['annotations'][self.episode_idx]
+ self.mclabel = [anno['multiple_choice_answer']]
+
+ if self.datatype.startswith('train'):
+ action['mc_label'] = self.mclabel
+
+ return action
+
+
class DefaultTeacher(McTeacher):
# default to Multiple-Choice Teacher
pass
diff --git a/parlai/tasks/vqa_v2/agents.py b/parlai/tasks/vqa_v2/agents.py
--- a/parlai/tasks/vqa_v2/agents.py
+++ b/parlai/tasks/vqa_v2/agents.py
@@ -135,5 +135,24 @@
self.annotation = json.load(data_file)
+class AllTeacher(OeTeacher):
+ """
+ VQA v2.0 Open-Ended teacher, which inherits from OeTeacher and
+ gives access to the multiple choice answer.
+ """
+
+ def act(self):
+ action = super().act()
+
+ if not self.datatype.startswith('test'):
+ anno = self.annotation['annotations'][self.episode_idx]
+ self.mclabel = [anno['multiple_choice_answer']]
+
+ if self.datatype.startswith('train'):
+ action['mc_label'] = self.mclabel
+
+ return action
+
+
class DefaultTeacher(OeTeacher):
pass
|
{"golden_diff": "diff --git a/parlai/tasks/vqa_v1/agents.py b/parlai/tasks/vqa_v1/agents.py\n--- a/parlai/tasks/vqa_v1/agents.py\n+++ b/parlai/tasks/vqa_v1/agents.py\n@@ -157,6 +157,30 @@\n return action\n \n \n+class AllTeacher(OeTeacher):\n+ \"\"\"\n+ VQA Teacher, which inherits from OeTeacher and gives access to\n+ the multiple choices and the multiple choice answer.\n+ \"\"\"\n+\n+ def act(self):\n+ action = super().act()\n+\n+ qa = self.ques['questions'][self.episode_idx]\n+ multiple_choices = qa['multiple_choices']\n+\n+ action['label_candidates'] = multiple_choices\n+\n+ if not self.datatype.startswith('test'):\n+ anno = self.annotation['annotations'][self.episode_idx]\n+ self.mclabel = [anno['multiple_choice_answer']]\n+\n+ if self.datatype.startswith('train'):\n+ action['mc_label'] = self.mclabel\n+\n+ return action\n+\n+\n class DefaultTeacher(McTeacher):\n # default to Multiple-Choice Teacher\n pass\ndiff --git a/parlai/tasks/vqa_v2/agents.py b/parlai/tasks/vqa_v2/agents.py\n--- a/parlai/tasks/vqa_v2/agents.py\n+++ b/parlai/tasks/vqa_v2/agents.py\n@@ -135,5 +135,24 @@\n self.annotation = json.load(data_file)\n \n \n+class AllTeacher(OeTeacher):\n+ \"\"\"\n+ VQA v2.0 Open-Ended teacher, which inherits from OeTeacher and \n+ gives access to the multiple choice answer.\n+ \"\"\"\n+\n+ def act(self):\n+ action = super().act()\n+\n+ if not self.datatype.startswith('test'):\n+ anno = self.annotation['annotations'][self.episode_idx]\n+ self.mclabel = [anno['multiple_choice_answer']]\n+\n+ if self.datatype.startswith('train'):\n+ action['mc_label'] = self.mclabel\n+\n+ return action\n+\n+\n class DefaultTeacher(OeTeacher):\n pass\n", "issue": "Return type ambiguous when extracting image features\nThe return type of the extracted image features (if the features file is present or not) is different. If the file is present then it returns a numpy.ndarray type object otherwise it returns a torch.autograd.variable.Variable object.\r\n( https://github.com/facebookresearch/ParlAI/blob/3d86ccdbb4d87002cc6c4782afd0ee5277e742f1/parlai/core/image_featurizers.py#L149 )\n", "before_files": [{"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. 
An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\nfrom parlai.core.agents import Teacher\nfrom parlai.core.image_featurizers import ImageLoader\nfrom .build import build, buildImage\n\nimport json\nimport random\nimport os\n\n\ndef _path(opt):\n build(opt)\n buildImage(opt)\n dt = opt['datatype'].split(':')[0]\n\n if dt == 'train':\n ques_suffix = 'v2_OpenEnded_mscoco_train2014'\n annotation_suffix = 'v2_mscoco_train2014'\n img_suffix = os.path.join('train2014', 'COCO_train2014_')\n elif dt == 'valid':\n ques_suffix = 'v2_OpenEnded_mscoco_val2014'\n annotation_suffix = 'v2_mscoco_val2014'\n img_suffix = os.path.join('val2014', 'COCO_val2014_')\n elif dt == 'test':\n ques_suffix = 'v2_OpenEnded_mscoco_test2015'\n annotation_suffix = 'None'\n img_suffix = os.path.join('test2015', 'COCO_test2015_')\n else:\n raise RuntimeError('Not valid datatype.')\n\n data_path = os.path.join(opt['datapath'], 'VQA-v2',\n ques_suffix + '_questions.json')\n\n annotation_path = os.path.join(opt['datapath'], 'VQA-v2',\n annotation_suffix + '_annotations.json')\n\n image_path = os.path.join(opt['datapath'], 'COCO-IMG', img_suffix)\n\n return data_path, annotation_path, image_path\n\n\nclass OeTeacher(Teacher):\n \"\"\"VQA v2.0 Open-Ended teacher, which loads the json VQA data and\n implements its own `act` method for interacting with student agent.\n agent.\n \"\"\"\n def __init__(self, opt, shared=None):\n super().__init__(opt)\n self.datatype = opt['datatype']\n data_path, annotation_path, self.image_path = _path(opt)\n\n if shared and 'ques' in shared:\n self.ques = shared['ques']\n if 'annotation' in shared:\n self.annotation = shared['annotation']\n else:\n self._setup_data(data_path, annotation_path)\n self.len = len(self.ques['questions'])\n\n # for ordered data in batch mode (especially, for validation and\n # testing), each teacher in the batch gets a start index and a step\n # size so they all process disparate sets of the data\n self.step_size = opt.get('batchsize', 1)\n self.data_offset = opt.get('batchindex', 0)\n self.image_loader = ImageLoader(opt)\n\n self.reset()\n\n def __len__(self):\n return self.len\n\n def reset(self):\n # Reset the dialog so that it is at the start of the epoch,\n # and all metrics are reset.\n super().reset()\n self.lastY = None\n self.episode_idx = self.data_offset - self.step_size\n\n def observe(self, observation):\n \"\"\"Process observation for metrics.\"\"\"\n if self.lastY is not None:\n self.metrics.update(observation, self.lastY)\n self.lastY = None\n return observation\n\n def act(self):\n if self.datatype == 'train':\n self.episode_idx = random.randrange(self.len)\n else:\n self.episode_idx = (self.episode_idx + self.step_size) % len(self)\n if self.episode_idx == len(self) - self.step_size:\n self.epochDone = True\n\n qa = self.ques['questions'][self.episode_idx]\n question = qa['question']\n image_id = qa['image_id']\n\n img_path = self.image_path + '%012d.jpg' % (image_id)\n\n action = {\n 'image': self.image_loader.load(img_path),\n 'text': question,\n 'episode_done': True\n }\n\n if not self.datatype.startswith('test'):\n anno = self.annotation['annotations'][self.episode_idx]\n self.lastY = [ans['answer'] for ans in anno['answers']]\n\n if self.datatype.startswith('train'):\n action['labels'] = self.lastY\n\n return action\n\n def share(self):\n shared = super().share()\n shared['ques'] = self.ques\n if hasattr(self, 'annotation'):\n shared['annotation'] = self.annotation\n return shared\n\n def 
_setup_data(self, data_path, annotation_path):\n print('loading: ' + data_path)\n with open(data_path) as data_file:\n self.ques = json.load(data_file)\n\n if self.datatype != 'test':\n print('loading: ' + annotation_path)\n with open(annotation_path) as data_file:\n self.annotation = json.load(data_file)\n\n\nclass DefaultTeacher(OeTeacher):\n pass\n", "path": "parlai/tasks/vqa_v2/agents.py"}, {"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\nfrom parlai.core.agents import Teacher\nfrom parlai.core.image_featurizers import ImageLoader\nfrom .build import build, buildImage\n\nimport json\nimport random\nimport os\n\n\ndef _path(opt):\n build(opt)\n buildImage(opt)\n dt = opt['datatype'].split(':')[0]\n\n if dt == 'train':\n ques_suffix = 'MultipleChoice_mscoco_train2014'\n annotation_suffix = 'mscoco_train2014'\n img_suffix = os.path.join('train2014', 'COCO_train2014_')\n elif dt == 'valid':\n ques_suffix = 'MultipleChoice_mscoco_val2014'\n annotation_suffix = 'mscoco_val2014'\n img_suffix = os.path.join('val2014', 'COCO_val2014_')\n elif dt == 'test':\n ques_suffix = 'MultipleChoice_mscoco_test2015'\n annotation_suffix = 'None'\n img_suffix = os.path.join('test2015', 'COCO_test2015_')\n else:\n raise RuntimeError('Not valid datatype.')\n\n data_path = os.path.join(opt['datapath'], 'VQA-v1',\n ques_suffix + '_questions.json')\n\n annotation_path = os.path.join(opt['datapath'], 'VQA-v1',\n annotation_suffix + '_annotations.json')\n\n image_path = os.path.join(opt['datapath'], 'COCO-IMG', img_suffix)\n\n return data_path, annotation_path, image_path\n\n\nclass OeTeacher(Teacher):\n \"\"\"\n VQA Open-Ended teacher, which loads the json vqa data and implements its\n own `act` method for interacting with student agent.\n \"\"\"\n def __init__(self, opt, shared=None):\n super().__init__(opt, shared)\n self.datatype = opt['datatype']\n data_path, annotation_path, self.image_path = _path(opt)\n\n if shared and 'ques' in shared:\n self.ques = shared['ques']\n if 'annotation' in shared:\n self.annotation = shared['annotation']\n else:\n self._setup_data(data_path, annotation_path)\n\n # for ordered data in batch mode (especially, for validation and\n # testing), each teacher in the batch gets a start index and a step\n # size so they all process disparate sets of the data\n self.step_size = opt.get('batchsize', 1)\n self.data_offset = opt.get('batchindex', 0)\n self.image_loader = ImageLoader(opt)\n self.reset()\n\n def __len__(self):\n return len(self.ques['questions'])\n\n def reset(self):\n # Reset the dialog so that it is at the start of the epoch,\n # and all metrics are reset.\n super().reset()\n self.lastY = None\n self.episode_idx = self.data_offset - self.step_size\n\n def observe(self, observation):\n \"\"\"Process observation for metrics.\"\"\"\n if self.lastY is not None:\n self.metrics.update(observation, self.lastY)\n self.lastY = None\n return observation\n\n def act(self):\n if self.datatype == 'train':\n self.episode_idx = random.randrange(len(self))\n else:\n self.episode_idx = (self.episode_idx + self.step_size) % len(self)\n if self.episode_idx == len(self) - self.step_size:\n self.epochDone = True\n\n qa = self.ques['questions'][self.episode_idx]\n question = qa['question']\n image_id = qa['image_id']\n\n img_path = 
self.image_path + '%012d.jpg' % (image_id)\n\n action = {\n 'image': self.image_loader.load(img_path),\n 'text': question,\n 'episode_done': True\n }\n\n if not self.datatype.startswith('test'):\n anno = self.annotation['annotations'][self.episode_idx]\n self.lastY = [ans['answer'] for ans in anno['answers']]\n\n if self.datatype.startswith('train'):\n action['labels'] = self.lastY\n\n return action\n\n def share(self):\n shared = super().share()\n shared['ques'] = self.ques\n if hasattr(self, 'annotation'):\n shared['annotation'] = self.annotation\n return shared\n\n def _setup_data(self, data_path, annotation_path):\n print('loading: ' + data_path)\n with open(data_path) as data_file:\n self.ques = json.load(data_file)\n\n if self.datatype != 'test':\n print('loading: ' + annotation_path)\n with open(annotation_path) as data_file:\n self.annotation = json.load(data_file)\n\n\nclass McTeacher(OeTeacher):\n \"\"\"\n VQA Multiple-Choice teacher, which inherits from OeTeacher but overrides\n the label and label_candidates fields with multiple choice data.\n \"\"\"\n\n def act(self):\n action = super().act()\n\n qa = self.ques['questions'][self.episode_idx]\n multiple_choices = qa['multiple_choices']\n\n action['label_candidates'] = multiple_choices\n\n if not self.datatype.startswith('test'):\n anno = self.annotation['annotations'][self.episode_idx]\n self.lastY = [anno['multiple_choice_answer']]\n\n if self.datatype.startswith('train'):\n action['labels'] = self.lastY\n\n return action\n\n\nclass DefaultTeacher(McTeacher):\n # default to Multiple-Choice Teacher\n pass\n", "path": "parlai/tasks/vqa_v1/agents.py"}]}
| 3,805 | 493 |
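For illustration of the type ambiguity described in the issue (the golden diff adds `AllTeacher` variants rather than changing `image_featurizers.py`): a consumer-side normalization sketch. The `to_numpy` helper is hypothetical and assumes the legacy `torch.autograd.Variable` API referenced in the issue.

```python
import numpy as np
from torch.autograd import Variable


def to_numpy(image_features):
    """Hypothetical helper: coerce ImageLoader.load() output to a numpy array."""
    if isinstance(image_features, Variable):
        # Legacy torch path described in the issue: unwrap, move to CPU, convert.
        return image_features.data.cpu().numpy()
    # Already a numpy.ndarray when the cached features file was found.
    return np.asarray(image_features)
```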
gh_patches_debug_40712
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-5973
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] Propagation of cpp_info.name in cmake_find_package_multi
The `cmake_find_package_multi` correctly generates *XXXConfig.cmake* files using the dependency's `cpp_info.name` for the `XXX` part, but doesn't always correctly use `cpp_info.name` inside the generated file.
### Environment Details
* Operating System+version: Ubuntu 18.04
* Conan version: 1.19.2
* Python version: 3.6.8
### Steps to reproduce
I have got an ITK recipe that requires both `zlib` and `hdf5` and uses `cmake_find_package_multi`, the `hdf5` recipe also requires `zlib`. Everything worked smoothly up to yesterday when the `cpp_info.name` of `zlib` was changed to `ZLIB`: now `cmake_find_package_multi` correctly generates *HDF5Config.cmake* and *ZLIBConfig.cmake* as expected, but the *HDF5Config.cmake* file contains the following lines:
```cmake
include(CMakeFindDependencyMacro)
if(${CMAKE_VERSION} VERSION_LESS "3.9.0")
find_package(zlib REQUIRED NO_MODULE)
else()
find_dependency(zlib REQUIRED NO_MODULE)
endif()
get_target_property(tmp zlib::zlib INTERFACE_LINK_LIBRARIES)
```
When the `find_dependency` above is called, it searches a file called *zlibConfig.cmake* instead of the generated *ZLIBConfig.cmake*. I can't tell for sure, but I believe that `cmake_find_package_multi` doesn't correctly propagate the `cpp_info.name` in to the `find_dependency` and subsequent target names when it should.
### Logs (Executed commands with output) (Include/Attach if Applicable)
The relevant logs are the following ones:
```
CMake Error at /usr/share/cmake-3.10/Modules/CMakeFindDependencyMacro.cmake:48 (find_package):
Could not find a package configuration file provided by "zlib" with any of
the following names:
zlibConfig.cmake
zlib-config.cmake
Add the installation prefix of "zlib" to CMAKE_PREFIX_PATH or set
"zlib_DIR" to a directory containing one of the above files. If "zlib"
provides a separate development package or SDK, be sure it has been
installed.
Call Stack (most recent call first):
HDF5Config.cmake:43 (find_dependency)
source_subfolder/Modules/ThirdParty/HDF5/itk-module-init.cmake:5 (find_package)
source_subfolder/CMake/ITKModuleEnablement.cmake:318 (include)
source_subfolder/CMakeLists.txt:433 (include)
-- Configuring incomplete, errors occurred!
```
</issue>
<code>
[start of conans/client/generators/cmake_find_package.py]
1 from conans.client.generators.cmake import DepsCppCmake
2 from conans.client.generators.cmake_find_package_common import target_template
3 from conans.model import Generator
4
5 find_package_header = """
6 include(FindPackageHandleStandardArgs)
7
8 message(STATUS "Conan: Using autogenerated Find{name}.cmake")
9 # Global approach
10 set({name}_FOUND 1)
11 set({name}_VERSION "{version}")
12
13 find_package_handle_standard_args({name} REQUIRED_VARS {name}_VERSION VERSION_VAR {name}_VERSION)
14 mark_as_advanced({name}_FOUND {name}_VERSION)
15
16 """
17
18
19 assign_target_properties = """
20 if({name}_INCLUDE_DIRS)
21 set_target_properties({name}::{name} PROPERTIES INTERFACE_INCLUDE_DIRECTORIES "${{{name}_INCLUDE_DIRS}}")
22 endif()
23 set_property(TARGET {name}::{name} PROPERTY INTERFACE_LINK_LIBRARIES ${{{name}_LIBRARIES_TARGETS}} "${{{name}_LINKER_FLAGS_LIST}}")
24 set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_DEFINITIONS ${{{name}_COMPILE_DEFINITIONS}})
25 set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_OPTIONS "${{{name}_COMPILE_OPTIONS_LIST}}")
26 """
27
28
29 class CMakeFindPackageGenerator(Generator):
30 template = """
31 {find_package_header_block}
32 {find_libraries_block}
33 if(NOT ${{CMAKE_VERSION}} VERSION_LESS "3.0")
34 # Target approach
35 if(NOT TARGET {name}::{name})
36 add_library({name}::{name} INTERFACE IMPORTED)
37 {assign_target_properties_block}
38 {find_dependencies_block}
39 endif()
40 endif()
41 """
42
43 @property
44 def filename(self):
45 pass
46
47 @property
48 def content(self):
49 ret = {}
50 for depname, cpp_info in self.deps_build_info.dependencies:
51 ret["Find%s.cmake" % cpp_info.name] = self._find_for_dep(cpp_info.name, cpp_info)
52 return ret
53
54 def _find_for_dep(self, name, cpp_info):
55 deps = DepsCppCmake(cpp_info)
56 lines = []
57 if cpp_info.public_deps:
58 # Here we are generating FindXXX, so find_modules=True
59 lines = find_dependency_lines(name, cpp_info, find_modules=True)
60 find_package_header_block = find_package_header.format(name=name, version=cpp_info.version)
61 find_libraries_block = target_template.format(name=name, deps=deps, build_type_suffix="")
62 target_props = assign_target_properties.format(name=name, deps=deps)
63 tmp = self.template.format(name=name, deps=deps,
64 version=cpp_info.version,
65 find_dependencies_block="\n".join(lines),
66 find_libraries_block=find_libraries_block,
67 find_package_header_block=find_package_header_block,
68 assign_target_properties_block=target_props)
69 return tmp
70
71
72 def find_dependency_lines(name, cpp_info, find_modules):
73 lines = ["", "# Library dependencies", "include(CMakeFindDependencyMacro)"]
74 for dep in cpp_info.public_deps:
75 def property_lines(prop):
76 lib_t = "%s::%s" % (name, name)
77 dep_t = "%s::%s" % (dep, dep)
78 return ["get_target_property(tmp %s %s)" % (dep_t, prop),
79 "if(tmp)",
80 " set_property(TARGET %s APPEND PROPERTY %s ${tmp})" % (lib_t, prop),
81 'endif()']
82
83 if find_modules:
84 lines.append("find_dependency(%s REQUIRED)" % dep)
85 else:
86 # https://github.com/conan-io/conan/issues/4994
87 # https://github.com/conan-io/conan/issues/5040
88 lines.append('if(${CMAKE_VERSION} VERSION_LESS "3.9.0")')
89 lines.append(' find_package(%s REQUIRED NO_MODULE)' % dep)
90 lines.append("else()")
91 lines.append(' find_dependency(%s REQUIRED NO_MODULE)' % dep)
92 lines.append("endif()")
93
94 lines.extend(property_lines("INTERFACE_LINK_LIBRARIES"))
95 lines.extend(property_lines("INTERFACE_COMPILE_DEFINITIONS"))
96 lines.extend(property_lines("INTERFACE_INCLUDE_DIRECTORIES"))
97 return [" {}".format(l) for l in lines]
98
[end of conans/client/generators/cmake_find_package.py]
[start of conans/client/generators/cmake_find_package_multi.py]
1 from conans.client.generators.cmake import DepsCppCmake
2 from conans.client.generators.cmake_find_package import find_dependency_lines
3 from conans.client.generators.cmake_find_package_common import target_template
4 from conans.model import Generator
5
6
7 class CMakeFindPackageMultiGenerator(Generator):
8 config_xxx_template = """
9
10 # Requires CMake > 3.0
11 if(${{CMAKE_VERSION}} VERSION_LESS "3.0")
12 message(FATAL_ERROR "The 'cmake_find_package_multi' only works with CMake > 3.0" )
13 endif()
14
15 include(${{CMAKE_CURRENT_LIST_DIR}}/{name}Targets.cmake)
16
17 {target_props_block}
18 {find_dependencies_block}
19 """
20
21 targets_file = """
22 if(NOT TARGET {name}::{name})
23 add_library({name}::{name} INTERFACE IMPORTED)
24 endif()
25
26 # Load the debug and release library finders
27 get_filename_component(_DIR "${{CMAKE_CURRENT_LIST_FILE}}" PATH)
28 file(GLOB CONFIG_FILES "${{_DIR}}/{name}Target-*.cmake")
29
30 foreach(f ${{CONFIG_FILES}})
31 include(${{f}})
32 endforeach()
33
34 """
35
36 target_properties = """
37 # Assign target properties
38 set_property(TARGET {name}::{name}
39 PROPERTY INTERFACE_LINK_LIBRARIES
40 $<$<CONFIG:Release>:${{{name}_LIBRARIES_TARGETS_RELEASE}} ${{{name}_LINKER_FLAGS_RELEASE_LIST}}>
41 $<$<CONFIG:RelWithDebInfo>:${{{name}_LIBRARIES_TARGETS_RELWITHDEBINFO}} ${{{name}_LINKER_FLAGS_RELWITHDEBINFO_LIST}}>
42 $<$<CONFIG:MinSizeRel>:${{{name}_LIBRARIES_TARGETS_MINSIZEREL}} ${{{name}_LINKER_FLAGS_MINSIZEREL_LIST}}>
43 $<$<CONFIG:Debug>:${{{name}_LIBRARIES_TARGETS_DEBUG}} ${{{name}_LINKER_FLAGS_DEBUG_LIST}}>)
44 set_property(TARGET {name}::{name}
45 PROPERTY INTERFACE_INCLUDE_DIRECTORIES
46 $<$<CONFIG:Release>:${{{name}_INCLUDE_DIRS_RELEASE}}>
47 $<$<CONFIG:RelWithDebInfo>:${{{name}_INCLUDE_DIRS_RELWITHDEBINFO}}>
48 $<$<CONFIG:MinSizeRel>:${{{name}_INCLUDE_DIRS_MINSIZEREL}}>
49 $<$<CONFIG:Debug>:${{{name}_INCLUDE_DIRS_DEBUG}}>)
50 set_property(TARGET {name}::{name}
51 PROPERTY INTERFACE_COMPILE_DEFINITIONS
52 $<$<CONFIG:Release>:${{{name}_COMPILE_DEFINITIONS_RELEASE}}>
53 $<$<CONFIG:RelWithDebInfo>:${{{name}_COMPILE_DEFINITIONS_RELWITHDEBINFO}}>
54 $<$<CONFIG:MinSizeRel>:${{{name}_COMPILE_DEFINITIONS_MINSIZEREL}}>
55 $<$<CONFIG:Debug>:${{{name}_COMPILE_DEFINITIONS_DEBUG}}>)
56 set_property(TARGET {name}::{name}
57 PROPERTY INTERFACE_COMPILE_OPTIONS
58 $<$<CONFIG:Release>:${{{name}_COMPILE_OPTIONS_RELEASE_LIST}}>
59 $<$<CONFIG:RelWithDebInfo>:${{{name}_COMPILE_OPTIONS_RELWITHDEBINFO_LIST}}>
60 $<$<CONFIG:MinSizeRel>:${{{name}_COMPILE_OPTIONS_MINSIZEREL_LIST}}>
61 $<$<CONFIG:Debug>:${{{name}_COMPILE_OPTIONS_DEBUG_LIST}}>)
62 """
63
64 @property
65 def filename(self):
66 pass
67
68 @property
69 def content(self):
70 ret = {}
71 build_type = self.conanfile.settings.get_safe("build_type")
72 build_type_suffix = "_{}".format(build_type.upper()) if build_type else ""
73 for _, cpp_info in self.deps_build_info.dependencies:
74 depname = cpp_info.name
75 deps = DepsCppCmake(cpp_info)
76 ret["{}Config.cmake".format(depname)] = self._find_for_dep(depname, cpp_info)
77
78 find_lib = target_template.format(name=depname, deps=deps,
79 build_type_suffix=build_type_suffix)
80 ret["{}Targets.cmake".format(depname)] = self.targets_file.format(name=depname)
81 ret["{}Target-{}.cmake".format(depname, build_type.lower())] = find_lib
82 return ret
83
84 def _build_type_suffix(self, build_type):
85 return
86
87 def _find_for_dep(self, name, cpp_info):
88 lines = []
89 if cpp_info.public_deps:
90 # Here we are generating only Config files, so do not search for FindXXX modules
91 lines = find_dependency_lines(name, cpp_info, find_modules=False)
92
93 targets_props = self.target_properties.format(name=name)
94
95 tmp = self.config_xxx_template.format(name=name,
96 version=cpp_info.version,
97 find_dependencies_block="\n".join(lines),
98 target_props_block=targets_props)
99
100 return tmp
101
[end of conans/client/generators/cmake_find_package_multi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/client/generators/cmake_find_package.py b/conans/client/generators/cmake_find_package.py
--- a/conans/client/generators/cmake_find_package.py
+++ b/conans/client/generators/cmake_find_package.py
@@ -56,7 +56,8 @@
lines = []
if cpp_info.public_deps:
# Here we are generating FindXXX, so find_modules=True
- lines = find_dependency_lines(name, cpp_info, find_modules=True)
+ public_deps_names = [self.deps_build_info[dep].name for dep in cpp_info.public_deps]
+ lines = find_dependency_lines(name, public_deps_names, find_modules=True)
find_package_header_block = find_package_header.format(name=name, version=cpp_info.version)
find_libraries_block = target_template.format(name=name, deps=deps, build_type_suffix="")
target_props = assign_target_properties.format(name=name, deps=deps)
@@ -69,26 +70,26 @@
return tmp
-def find_dependency_lines(name, cpp_info, find_modules):
+def find_dependency_lines(name, public_deps_names, find_modules):
lines = ["", "# Library dependencies", "include(CMakeFindDependencyMacro)"]
- for dep in cpp_info.public_deps:
+ for dep_name in public_deps_names:
def property_lines(prop):
lib_t = "%s::%s" % (name, name)
- dep_t = "%s::%s" % (dep, dep)
+ dep_t = "%s::%s" % (dep_name, dep_name)
return ["get_target_property(tmp %s %s)" % (dep_t, prop),
"if(tmp)",
" set_property(TARGET %s APPEND PROPERTY %s ${tmp})" % (lib_t, prop),
'endif()']
if find_modules:
- lines.append("find_dependency(%s REQUIRED)" % dep)
+ lines.append("find_dependency(%s REQUIRED)" % dep_name)
else:
# https://github.com/conan-io/conan/issues/4994
# https://github.com/conan-io/conan/issues/5040
lines.append('if(${CMAKE_VERSION} VERSION_LESS "3.9.0")')
- lines.append(' find_package(%s REQUIRED NO_MODULE)' % dep)
+ lines.append(' find_package(%s REQUIRED NO_MODULE)' % dep_name)
lines.append("else()")
- lines.append(' find_dependency(%s REQUIRED NO_MODULE)' % dep)
+ lines.append(' find_dependency(%s REQUIRED NO_MODULE)' % dep_name)
lines.append("endif()")
lines.extend(property_lines("INTERFACE_LINK_LIBRARIES"))
diff --git a/conans/client/generators/cmake_find_package_multi.py b/conans/client/generators/cmake_find_package_multi.py
--- a/conans/client/generators/cmake_find_package_multi.py
+++ b/conans/client/generators/cmake_find_package_multi.py
@@ -88,7 +88,8 @@
lines = []
if cpp_info.public_deps:
# Here we are generating only Config files, so do not search for FindXXX modules
- lines = find_dependency_lines(name, cpp_info, find_modules=False)
+ public_deps_names = [self.deps_build_info[dep].name for dep in cpp_info.public_deps]
+ lines = find_dependency_lines(name, public_deps_names, find_modules=False)
targets_props = self.target_properties.format(name=name)
|
{"golden_diff": "diff --git a/conans/client/generators/cmake_find_package.py b/conans/client/generators/cmake_find_package.py\n--- a/conans/client/generators/cmake_find_package.py\n+++ b/conans/client/generators/cmake_find_package.py\n@@ -56,7 +56,8 @@\n lines = []\n if cpp_info.public_deps:\n # Here we are generating FindXXX, so find_modules=True\n- lines = find_dependency_lines(name, cpp_info, find_modules=True)\n+ public_deps_names = [self.deps_build_info[dep].name for dep in cpp_info.public_deps]\n+ lines = find_dependency_lines(name, public_deps_names, find_modules=True)\n find_package_header_block = find_package_header.format(name=name, version=cpp_info.version)\n find_libraries_block = target_template.format(name=name, deps=deps, build_type_suffix=\"\")\n target_props = assign_target_properties.format(name=name, deps=deps)\n@@ -69,26 +70,26 @@\n return tmp\n \n \n-def find_dependency_lines(name, cpp_info, find_modules):\n+def find_dependency_lines(name, public_deps_names, find_modules):\n lines = [\"\", \"# Library dependencies\", \"include(CMakeFindDependencyMacro)\"]\n- for dep in cpp_info.public_deps:\n+ for dep_name in public_deps_names:\n def property_lines(prop):\n lib_t = \"%s::%s\" % (name, name)\n- dep_t = \"%s::%s\" % (dep, dep)\n+ dep_t = \"%s::%s\" % (dep_name, dep_name)\n return [\"get_target_property(tmp %s %s)\" % (dep_t, prop),\n \"if(tmp)\",\n \" set_property(TARGET %s APPEND PROPERTY %s ${tmp})\" % (lib_t, prop),\n 'endif()']\n \n if find_modules:\n- lines.append(\"find_dependency(%s REQUIRED)\" % dep)\n+ lines.append(\"find_dependency(%s REQUIRED)\" % dep_name)\n else:\n # https://github.com/conan-io/conan/issues/4994\n # https://github.com/conan-io/conan/issues/5040\n lines.append('if(${CMAKE_VERSION} VERSION_LESS \"3.9.0\")')\n- lines.append(' find_package(%s REQUIRED NO_MODULE)' % dep)\n+ lines.append(' find_package(%s REQUIRED NO_MODULE)' % dep_name)\n lines.append(\"else()\")\n- lines.append(' find_dependency(%s REQUIRED NO_MODULE)' % dep)\n+ lines.append(' find_dependency(%s REQUIRED NO_MODULE)' % dep_name)\n lines.append(\"endif()\")\n \n lines.extend(property_lines(\"INTERFACE_LINK_LIBRARIES\"))\ndiff --git a/conans/client/generators/cmake_find_package_multi.py b/conans/client/generators/cmake_find_package_multi.py\n--- a/conans/client/generators/cmake_find_package_multi.py\n+++ b/conans/client/generators/cmake_find_package_multi.py\n@@ -88,7 +88,8 @@\n lines = []\n if cpp_info.public_deps:\n # Here we are generating only Config files, so do not search for FindXXX modules\n- lines = find_dependency_lines(name, cpp_info, find_modules=False)\n+ public_deps_names = [self.deps_build_info[dep].name for dep in cpp_info.public_deps]\n+ lines = find_dependency_lines(name, public_deps_names, find_modules=False)\n \n targets_props = self.target_properties.format(name=name)\n", "issue": "[bug] Propagation of cpp_info.name in cmake_find_package_multi\nThe `cmake_find_package_multi` correctly generates *XXXConfig.cmake* files using the dependency's `cpp_info.name` for the `XXX` part, but doesn't always correctly use `cpp_info.name` inside the generated file.\r\n\r\n### Environment Details\r\n * Operating System+version: Ubuntu 18.04\r\n * Conan version: 1.19.2\r\n * Python version: 3.6.8\r\n\r\n### Steps to reproduce\r\n\r\nI have got an ITK recipe that requires both `zlib` and `hdf5` and uses `cmake_find_package_multi`, the `hdf5` recipe also requires `zlib`. 
Everything worked smoothly up to yesterday when the `cpp_info.name` of `zlib` was changed to `ZLIB`: now `cmake_find_package_multi` correctly generates *HDF5Config.cmake* and *ZLIBConfig.cmake* as expected, but the *HDF5Config.cmake* file contains the following lines:\r\n\r\n```cmake\r\ninclude(CMakeFindDependencyMacro)\r\nif(${CMAKE_VERSION} VERSION_LESS \"3.9.0\")\r\n find_package(zlib REQUIRED NO_MODULE)\r\nelse()\r\n find_dependency(zlib REQUIRED NO_MODULE)\r\nendif()\r\nget_target_property(tmp zlib::zlib INTERFACE_LINK_LIBRARIES)\r\n```\r\n\r\nWhen the `find_dependency` above is called, it searches a file called *zlibConfig.cmake* instead of the generated *ZLIBConfig.cmake*. I can't tell for sure, but I believe that `cmake_find_package_multi` doesn't correctly propagate the `cpp_info.name` in to the `find_dependency` and subsequent target names when it should.\r\n\r\n### Logs (Executed commands with output) (Include/Attach if Applicable)\r\n\r\nThe relevant logs are the following ones:\r\n\r\n```\r\nCMake Error at /usr/share/cmake-3.10/Modules/CMakeFindDependencyMacro.cmake:48 (find_package):\r\n Could not find a package configuration file provided by \"zlib\" with any of\r\n the following names:\r\n\r\n zlibConfig.cmake\r\n zlib-config.cmake\r\n\r\n Add the installation prefix of \"zlib\" to CMAKE_PREFIX_PATH or set\r\n \"zlib_DIR\" to a directory containing one of the above files. If \"zlib\"\r\n provides a separate development package or SDK, be sure it has been\r\n installed.\r\nCall Stack (most recent call first):\r\n HDF5Config.cmake:43 (find_dependency)\r\n source_subfolder/Modules/ThirdParty/HDF5/itk-module-init.cmake:5 (find_package)\r\n source_subfolder/CMake/ITKModuleEnablement.cmake:318 (include)\r\n source_subfolder/CMakeLists.txt:433 (include)\r\n\r\n\r\n-- Configuring incomplete, errors occurred!\r\n```\r\n\n", "before_files": [{"content": "from conans.client.generators.cmake import DepsCppCmake\nfrom conans.client.generators.cmake_find_package_common import target_template\nfrom conans.model import Generator\n\nfind_package_header = \"\"\"\ninclude(FindPackageHandleStandardArgs)\n\nmessage(STATUS \"Conan: Using autogenerated Find{name}.cmake\")\n# Global approach\nset({name}_FOUND 1)\nset({name}_VERSION \"{version}\")\n\nfind_package_handle_standard_args({name} REQUIRED_VARS {name}_VERSION VERSION_VAR {name}_VERSION)\nmark_as_advanced({name}_FOUND {name}_VERSION)\n\n\"\"\"\n\n\nassign_target_properties = \"\"\"\n if({name}_INCLUDE_DIRS)\n set_target_properties({name}::{name} PROPERTIES INTERFACE_INCLUDE_DIRECTORIES \"${{{name}_INCLUDE_DIRS}}\")\n endif()\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_LINK_LIBRARIES ${{{name}_LIBRARIES_TARGETS}} \"${{{name}_LINKER_FLAGS_LIST}}\")\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_DEFINITIONS ${{{name}_COMPILE_DEFINITIONS}})\n set_property(TARGET {name}::{name} PROPERTY INTERFACE_COMPILE_OPTIONS \"${{{name}_COMPILE_OPTIONS_LIST}}\")\n\"\"\"\n\n\nclass CMakeFindPackageGenerator(Generator):\n template = \"\"\"\n{find_package_header_block}\n{find_libraries_block}\nif(NOT ${{CMAKE_VERSION}} VERSION_LESS \"3.0\")\n # Target approach\n if(NOT TARGET {name}::{name})\n add_library({name}::{name} INTERFACE IMPORTED)\n {assign_target_properties_block}\n {find_dependencies_block}\n endif()\nendif()\n\"\"\"\n\n @property\n def filename(self):\n pass\n\n @property\n def content(self):\n ret = {}\n for depname, cpp_info in self.deps_build_info.dependencies:\n ret[\"Find%s.cmake\" % cpp_info.name] = 
self._find_for_dep(cpp_info.name, cpp_info)\n return ret\n\n def _find_for_dep(self, name, cpp_info):\n deps = DepsCppCmake(cpp_info)\n lines = []\n if cpp_info.public_deps:\n # Here we are generating FindXXX, so find_modules=True\n lines = find_dependency_lines(name, cpp_info, find_modules=True)\n find_package_header_block = find_package_header.format(name=name, version=cpp_info.version)\n find_libraries_block = target_template.format(name=name, deps=deps, build_type_suffix=\"\")\n target_props = assign_target_properties.format(name=name, deps=deps)\n tmp = self.template.format(name=name, deps=deps,\n version=cpp_info.version,\n find_dependencies_block=\"\\n\".join(lines),\n find_libraries_block=find_libraries_block,\n find_package_header_block=find_package_header_block,\n assign_target_properties_block=target_props)\n return tmp\n\n\ndef find_dependency_lines(name, cpp_info, find_modules):\n lines = [\"\", \"# Library dependencies\", \"include(CMakeFindDependencyMacro)\"]\n for dep in cpp_info.public_deps:\n def property_lines(prop):\n lib_t = \"%s::%s\" % (name, name)\n dep_t = \"%s::%s\" % (dep, dep)\n return [\"get_target_property(tmp %s %s)\" % (dep_t, prop),\n \"if(tmp)\",\n \" set_property(TARGET %s APPEND PROPERTY %s ${tmp})\" % (lib_t, prop),\n 'endif()']\n\n if find_modules:\n lines.append(\"find_dependency(%s REQUIRED)\" % dep)\n else:\n # https://github.com/conan-io/conan/issues/4994\n # https://github.com/conan-io/conan/issues/5040\n lines.append('if(${CMAKE_VERSION} VERSION_LESS \"3.9.0\")')\n lines.append(' find_package(%s REQUIRED NO_MODULE)' % dep)\n lines.append(\"else()\")\n lines.append(' find_dependency(%s REQUIRED NO_MODULE)' % dep)\n lines.append(\"endif()\")\n\n lines.extend(property_lines(\"INTERFACE_LINK_LIBRARIES\"))\n lines.extend(property_lines(\"INTERFACE_COMPILE_DEFINITIONS\"))\n lines.extend(property_lines(\"INTERFACE_INCLUDE_DIRECTORIES\"))\n return [\" {}\".format(l) for l in lines]\n", "path": "conans/client/generators/cmake_find_package.py"}, {"content": "from conans.client.generators.cmake import DepsCppCmake\nfrom conans.client.generators.cmake_find_package import find_dependency_lines\nfrom conans.client.generators.cmake_find_package_common import target_template\nfrom conans.model import Generator\n\n\nclass CMakeFindPackageMultiGenerator(Generator):\n config_xxx_template = \"\"\"\n\n# Requires CMake > 3.0\nif(${{CMAKE_VERSION}} VERSION_LESS \"3.0\")\n message(FATAL_ERROR \"The 'cmake_find_package_multi' only works with CMake > 3.0\" )\nendif()\n\ninclude(${{CMAKE_CURRENT_LIST_DIR}}/{name}Targets.cmake)\n\n{target_props_block}\n{find_dependencies_block}\n\"\"\"\n\n targets_file = \"\"\"\nif(NOT TARGET {name}::{name})\n add_library({name}::{name} INTERFACE IMPORTED)\nendif()\n\n# Load the debug and release library finders\nget_filename_component(_DIR \"${{CMAKE_CURRENT_LIST_FILE}}\" PATH)\nfile(GLOB CONFIG_FILES \"${{_DIR}}/{name}Target-*.cmake\")\n\nforeach(f ${{CONFIG_FILES}})\n include(${{f}})\nendforeach()\n \n\"\"\"\n\n target_properties = \"\"\"\n# Assign target properties\nset_property(TARGET {name}::{name} \n PROPERTY INTERFACE_LINK_LIBRARIES \n $<$<CONFIG:Release>:${{{name}_LIBRARIES_TARGETS_RELEASE}} ${{{name}_LINKER_FLAGS_RELEASE_LIST}}>\n $<$<CONFIG:RelWithDebInfo>:${{{name}_LIBRARIES_TARGETS_RELWITHDEBINFO}} ${{{name}_LINKER_FLAGS_RELWITHDEBINFO_LIST}}>\n $<$<CONFIG:MinSizeRel>:${{{name}_LIBRARIES_TARGETS_MINSIZEREL}} ${{{name}_LINKER_FLAGS_MINSIZEREL_LIST}}>\n $<$<CONFIG:Debug>:${{{name}_LIBRARIES_TARGETS_DEBUG}} 
${{{name}_LINKER_FLAGS_DEBUG_LIST}}>)\nset_property(TARGET {name}::{name} \n PROPERTY INTERFACE_INCLUDE_DIRECTORIES \n $<$<CONFIG:Release>:${{{name}_INCLUDE_DIRS_RELEASE}}>\n $<$<CONFIG:RelWithDebInfo>:${{{name}_INCLUDE_DIRS_RELWITHDEBINFO}}>\n $<$<CONFIG:MinSizeRel>:${{{name}_INCLUDE_DIRS_MINSIZEREL}}>\n $<$<CONFIG:Debug>:${{{name}_INCLUDE_DIRS_DEBUG}}>)\nset_property(TARGET {name}::{name} \n PROPERTY INTERFACE_COMPILE_DEFINITIONS \n $<$<CONFIG:Release>:${{{name}_COMPILE_DEFINITIONS_RELEASE}}>\n $<$<CONFIG:RelWithDebInfo>:${{{name}_COMPILE_DEFINITIONS_RELWITHDEBINFO}}>\n $<$<CONFIG:MinSizeRel>:${{{name}_COMPILE_DEFINITIONS_MINSIZEREL}}>\n $<$<CONFIG:Debug>:${{{name}_COMPILE_DEFINITIONS_DEBUG}}>)\nset_property(TARGET {name}::{name} \n PROPERTY INTERFACE_COMPILE_OPTIONS \n $<$<CONFIG:Release>:${{{name}_COMPILE_OPTIONS_RELEASE_LIST}}>\n $<$<CONFIG:RelWithDebInfo>:${{{name}_COMPILE_OPTIONS_RELWITHDEBINFO_LIST}}>\n $<$<CONFIG:MinSizeRel>:${{{name}_COMPILE_OPTIONS_MINSIZEREL_LIST}}>\n $<$<CONFIG:Debug>:${{{name}_COMPILE_OPTIONS_DEBUG_LIST}}>) \n \"\"\"\n\n @property\n def filename(self):\n pass\n\n @property\n def content(self):\n ret = {}\n build_type = self.conanfile.settings.get_safe(\"build_type\")\n build_type_suffix = \"_{}\".format(build_type.upper()) if build_type else \"\"\n for _, cpp_info in self.deps_build_info.dependencies:\n depname = cpp_info.name\n deps = DepsCppCmake(cpp_info)\n ret[\"{}Config.cmake\".format(depname)] = self._find_for_dep(depname, cpp_info)\n\n find_lib = target_template.format(name=depname, deps=deps,\n build_type_suffix=build_type_suffix)\n ret[\"{}Targets.cmake\".format(depname)] = self.targets_file.format(name=depname)\n ret[\"{}Target-{}.cmake\".format(depname, build_type.lower())] = find_lib\n return ret\n\n def _build_type_suffix(self, build_type):\n return\n\n def _find_for_dep(self, name, cpp_info):\n lines = []\n if cpp_info.public_deps:\n # Here we are generating only Config files, so do not search for FindXXX modules\n lines = find_dependency_lines(name, cpp_info, find_modules=False)\n\n targets_props = self.target_properties.format(name=name)\n\n tmp = self.config_xxx_template.format(name=name,\n version=cpp_info.version,\n find_dependencies_block=\"\\n\".join(lines),\n target_props_block=targets_props)\n\n return tmp\n", "path": "conans/client/generators/cmake_find_package_multi.py"}]}
| 3,509 | 759 |
gh_patches_debug_24750
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-7636
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-2479] replace all instances of set-output and node16
Details in https://github.com/dbt-labs/actions/issues/39.
### Acceptance Criteria
- [ ] Verified there are no workflows to update
_or_
- [ ] removed all uses of `set-output` - either directly or by updating any marketplace actions we reference
- [ ] removed all references to node16 - either directly or by updating any marketplace actions we reference
- [ ] backport changes
</issue>
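For reference, a minimal sketch of the replacement this asks for, similar in spirit to the accepted patch shown further down for this entry: append `key=value` lines to the file named by the `GITHUB_OUTPUT` environment variable instead of printing `::set-output` workflow commands. The helper name and the demo values below are illustrative only and are not part of the repository.
```python
import os


def write_github_outputs(outputs):
    """Append key=value pairs to the step-output file used by GitHub Actions.

    The Actions runner sets GITHUB_OUTPUT to a writable file path; each
    output goes on its own line, replacing `::set-output name=key::value`.
    """
    github_output = os.environ.get("GITHUB_OUTPUT")
    if github_output is None:
        raise RuntimeError("GITHUB_OUTPUT is not set; not running under GitHub Actions?")
    with open(github_output, "at", encoding="utf-8") as gh_output:
        for key, value in outputs.items():
            gh_output.write(f"{key}={value}\n")


if __name__ == "__main__":
    # Illustrative values only; the real wrangler script computes these flags.
    write_github_outputs({"latest": True, "minor_latest": False})
```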
<code>
[start of .github/actions/latest-wrangler/main.py]
1 import os
2 import sys
3 import requests
4 from distutils.util import strtobool
5 from typing import Union
6 from packaging.version import parse, Version
7
8 if __name__ == "__main__":
9
10 # get inputs
11 package = os.environ["INPUT_PACKAGE"]
12 new_version = parse(os.environ["INPUT_NEW_VERSION"])
13 gh_token = os.environ["INPUT_GH_TOKEN"]
14 halt_on_missing = strtobool(os.environ.get("INPUT_HALT_ON_MISSING", "False"))
15
16 # get package metadata from github
17 package_request = requests.get(
18 f"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions",
19 auth=("", gh_token),
20 )
21 package_meta = package_request.json()
22
23 # Log info if we don't get a 200
24 if package_request.status_code != 200:
25 print(f"Call to GH API failed: {package_request.status_code} {package_meta['message']}")
26
27 # Make an early exit if there is no matching package in github
28 if package_request.status_code == 404:
29 if halt_on_missing:
30 sys.exit(1)
31 else:
32 # everything is the latest if the package doesn't exist
33 print(f"::set-output name=latest::{True}")
34 print(f"::set-output name=minor_latest::{True}")
35 sys.exit(0)
36
37 # TODO: verify package meta is "correct"
38 # https://github.com/dbt-labs/dbt-core/issues/4640
39
40 # map versions and tags
41 version_tag_map = {
42 version["id"]: version["metadata"]["container"]["tags"] for version in package_meta
43 }
44
45 # is pre-release
46 pre_rel = True if any(x in str(new_version) for x in ["a", "b", "rc"]) else False
47
48 # semver of current latest
49 for version, tags in version_tag_map.items():
50 if "latest" in tags:
51 # N.B. This seems counterintuitive, but we expect any version tagged
52 # 'latest' to have exactly three associated tags:
53 # latest, major.minor.latest, and major.minor.patch.
54 # Subtracting everything that contains the string 'latest' gets us
55 # the major.minor.patch which is what's needed for comparison.
56 current_latest = parse([tag for tag in tags if "latest" not in tag][0])
57 else:
58 current_latest = False
59
60 # semver of current_minor_latest
61 for version, tags in version_tag_map.items():
62 if f"{new_version.major}.{new_version.minor}.latest" in tags:
63 # Similar to above, only now we expect exactly two tags:
64 # major.minor.patch and major.minor.latest
65 current_minor_latest = parse([tag for tag in tags if "latest" not in tag][0])
66 else:
67 current_minor_latest = False
68
69 def is_latest(
70 pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]
71 ) -> bool:
72 """Determine if a given contaier should be tagged 'latest' based on:
73 - it's pre-release status
74 - it's version
75 - the version of a previously identified container tagged 'latest'
76
77 :param pre_rel: Wether or not the version of the new container is a pre-release
78 :param new_version: The version of the new container
79 :param remote_latest: The version of the previously identified container that's
80 already tagged latest or False
81 """
82 # is a pre-release = not latest
83 if pre_rel:
84 return False
85 # + no latest tag found = is latest
86 if not remote_latest:
87 return True
88 # + if remote version is lower than current = is latest, else not latest
89 return True if remote_latest <= new_version else False
90
91 latest = is_latest(pre_rel, new_version, current_latest)
92 minor_latest = is_latest(pre_rel, new_version, current_minor_latest)
93
94 print(f"::set-output name=latest::{latest}")
95 print(f"::set-output name=minor_latest::{minor_latest}")
96
[end of .github/actions/latest-wrangler/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/.github/actions/latest-wrangler/main.py b/.github/actions/latest-wrangler/main.py
--- a/.github/actions/latest-wrangler/main.py
+++ b/.github/actions/latest-wrangler/main.py
@@ -28,11 +28,12 @@
if package_request.status_code == 404:
if halt_on_missing:
sys.exit(1)
- else:
- # everything is the latest if the package doesn't exist
- print(f"::set-output name=latest::{True}")
- print(f"::set-output name=minor_latest::{True}")
- sys.exit(0)
+ # everything is the latest if the package doesn't exist
+ github_output = os.environ.get("GITHUB_OUTPUT")
+ with open(github_output, "at", encoding="utf-8") as gh_output:
+ gh_output.write("latest=True")
+ gh_output.write("minor_latest=True")
+ sys.exit(0)
# TODO: verify package meta is "correct"
# https://github.com/dbt-labs/dbt-core/issues/4640
@@ -91,5 +92,7 @@
latest = is_latest(pre_rel, new_version, current_latest)
minor_latest = is_latest(pre_rel, new_version, current_minor_latest)
- print(f"::set-output name=latest::{latest}")
- print(f"::set-output name=minor_latest::{minor_latest}")
+ github_output = os.environ.get("GITHUB_OUTPUT")
+ with open(github_output, "at", encoding="utf-8") as gh_output:
+ gh_output.write(f"latest={latest}")
+ gh_output.write(f"minor_latest={minor_latest}")
|
{"golden_diff": "diff --git a/.github/actions/latest-wrangler/main.py b/.github/actions/latest-wrangler/main.py\n--- a/.github/actions/latest-wrangler/main.py\n+++ b/.github/actions/latest-wrangler/main.py\n@@ -28,11 +28,12 @@\n if package_request.status_code == 404:\n if halt_on_missing:\n sys.exit(1)\n- else:\n- # everything is the latest if the package doesn't exist\n- print(f\"::set-output name=latest::{True}\")\n- print(f\"::set-output name=minor_latest::{True}\")\n- sys.exit(0)\n+ # everything is the latest if the package doesn't exist\n+ github_output = os.environ.get(\"GITHUB_OUTPUT\")\n+ with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n+ gh_output.write(\"latest=True\")\n+ gh_output.write(\"minor_latest=True\")\n+ sys.exit(0)\n \n # TODO: verify package meta is \"correct\"\n # https://github.com/dbt-labs/dbt-core/issues/4640\n@@ -91,5 +92,7 @@\n latest = is_latest(pre_rel, new_version, current_latest)\n minor_latest = is_latest(pre_rel, new_version, current_minor_latest)\n \n- print(f\"::set-output name=latest::{latest}\")\n- print(f\"::set-output name=minor_latest::{minor_latest}\")\n+ github_output = os.environ.get(\"GITHUB_OUTPUT\")\n+ with open(github_output, \"at\", encoding=\"utf-8\") as gh_output:\n+ gh_output.write(f\"latest={latest}\")\n+ gh_output.write(f\"minor_latest={minor_latest}\")\n", "issue": "[CT-2479] replace all instances of set-output and node16\nDetails in https://github.com/dbt-labs/actions/issues/39.\r\n\r\n### Acceptance Criteria\r\n- [ ] Verified there are no workflows to update\r\n_or_\r\n- [ ] removed all uses of `set-output` - either directly or up updating any marketplace actions we reference\r\n- [ ] removed all references to node16 - either directly or up updating any marketplace actions we reference\r\n- [ ] backport changes\n", "before_files": [{"content": "import os\nimport sys\nimport requests\nfrom distutils.util import strtobool\nfrom typing import Union\nfrom packaging.version import parse, Version\n\nif __name__ == \"__main__\":\n\n # get inputs\n package = os.environ[\"INPUT_PACKAGE\"]\n new_version = parse(os.environ[\"INPUT_NEW_VERSION\"])\n gh_token = os.environ[\"INPUT_GH_TOKEN\"]\n halt_on_missing = strtobool(os.environ.get(\"INPUT_HALT_ON_MISSING\", \"False\"))\n\n # get package metadata from github\n package_request = requests.get(\n f\"https://api.github.com/orgs/dbt-labs/packages/container/{package}/versions\",\n auth=(\"\", gh_token),\n )\n package_meta = package_request.json()\n\n # Log info if we don't get a 200\n if package_request.status_code != 200:\n print(f\"Call to GH API failed: {package_request.status_code} {package_meta['message']}\")\n\n # Make an early exit if there is no matching package in github\n if package_request.status_code == 404:\n if halt_on_missing:\n sys.exit(1)\n else:\n # everything is the latest if the package doesn't exist\n print(f\"::set-output name=latest::{True}\")\n print(f\"::set-output name=minor_latest::{True}\")\n sys.exit(0)\n\n # TODO: verify package meta is \"correct\"\n # https://github.com/dbt-labs/dbt-core/issues/4640\n\n # map versions and tags\n version_tag_map = {\n version[\"id\"]: version[\"metadata\"][\"container\"][\"tags\"] for version in package_meta\n }\n\n # is pre-release\n pre_rel = True if any(x in str(new_version) for x in [\"a\", \"b\", \"rc\"]) else False\n\n # semver of current latest\n for version, tags in version_tag_map.items():\n if \"latest\" in tags:\n # N.B. 
This seems counterintuitive, but we expect any version tagged\n # 'latest' to have exactly three associated tags:\n # latest, major.minor.latest, and major.minor.patch.\n # Subtracting everything that contains the string 'latest' gets us\n # the major.minor.patch which is what's needed for comparison.\n current_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_latest = False\n\n # semver of current_minor_latest\n for version, tags in version_tag_map.items():\n if f\"{new_version.major}.{new_version.minor}.latest\" in tags:\n # Similar to above, only now we expect exactly two tags:\n # major.minor.patch and major.minor.latest\n current_minor_latest = parse([tag for tag in tags if \"latest\" not in tag][0])\n else:\n current_minor_latest = False\n\n def is_latest(\n pre_rel: bool, new_version: Version, remote_latest: Union[bool, Version]\n ) -> bool:\n \"\"\"Determine if a given contaier should be tagged 'latest' based on:\n - it's pre-release status\n - it's version\n - the version of a previously identified container tagged 'latest'\n\n :param pre_rel: Wether or not the version of the new container is a pre-release\n :param new_version: The version of the new container\n :param remote_latest: The version of the previously identified container that's\n already tagged latest or False\n \"\"\"\n # is a pre-release = not latest\n if pre_rel:\n return False\n # + no latest tag found = is latest\n if not remote_latest:\n return True\n # + if remote version is lower than current = is latest, else not latest\n return True if remote_latest <= new_version else False\n\n latest = is_latest(pre_rel, new_version, current_latest)\n minor_latest = is_latest(pre_rel, new_version, current_minor_latest)\n\n print(f\"::set-output name=latest::{latest}\")\n print(f\"::set-output name=minor_latest::{minor_latest}\")\n", "path": ".github/actions/latest-wrangler/main.py"}]}
| 1,722 | 377 |
gh_patches_debug_13866
|
rasdani/github-patches
|
git_diff
|
ESMCI__cime-4326
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "set_value" method to CIME.BuildTools.configure.FakeCase so that user can add needed variables
In principle this is a simple change, but I want to make sure it's acceptable. I'd like to add a set_value method to the FakeCase class in CIME.BuildTools.configure so that the user can add the variables they want to have in their FakeCase. The utility I see is for unit testing of buildnml. But I think there may be other cases where you want to create a simpler fake case without doing all the normal setup needed to create a real case. There are some offline scripts where we are thinking about using this kind of utility: we know we need to set up some type of fake case, but we want to hide it from the user, and it doesn't need to be fully functional like a real case.
``` python
diff --git a/scripts/lib/CIME/BuildTools/configure.py b/scripts/lib/CIME/BuildTools/configure.py
index a74c087a5..fd7067ed7 100644
--- a/scripts/lib/CIME/BuildTools/configure.py
+++ b/scripts/lib/CIME/BuildTools/configure.py
@@ -74,6 +74,9 @@ def get_value(self, attrib):
expect(attrib in self._vals, "FakeCase does not support getting value of '%s'" % attrib)
return self._vals[attrib]
+ def set_value(self, attrib, value):
+ self._vals[attrib] = value
+
def _generate_env_mach_specific(output_dir, machobj, compiler, mpilib, debug,
sysos, unit_testing):
"""
```
@jedwards4b does this sound acceptable? If so I'll submit a simple PR for it.
@billsacks would this kind of thing possibly make LILAC easier to setup?
Add "set_value" method to CIME.BuildTools.configure.FakeCase so that user can add needed variables
In principle this is a simple change, but I want to make sure it's acceptable. I'd like to add a set_value method to the FakeCase class in CIME.BuiltTools.configure so that the user can add the variables they want to have in their FakeCase. The utility I see is for unit-testing of buildnml. But, I think there may be other cases where you want to create a simpler Fake case without doing all the normal needed setup to create a real case. There are some offline scripts we are thinking about having this kind of utility where we know we need to setup some type of fake case, but we want to hide it from the user, and it doesn't need to be fully functional like a real case.
``` python
diff --git a/scripts/lib/CIME/BuildTools/configure.py b/scripts/lib/CIME/BuildTools/configure.py
index a74c087a5..fd7067ed7 100644
--- a/scripts/lib/CIME/BuildTools/configure.py
+++ b/scripts/lib/CIME/BuildTools/configure.py
@@ -74,6 +74,9 @@ def get_value(self, attrib):
expect(attrib in self._vals, "FakeCase does not support getting value of '%s'" % attrib)
return self._vals[attrib]
+ def set_value(self, attrib, value):
+ self._vals[attrib] = value
+
def _generate_env_mach_specific(output_dir, machobj, compiler, mpilib, debug,
sysos, unit_testing):
"""
```
@jedwards4b does this sound acceptable? If so I'll submit a simple PR for it.
@billsacks would this kind of thing possibly make LILAC easier to setup?
</issue>
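For reference, a short self-contained sketch of how the proposed `set_value` could be exercised from a buildnml-style unit test. `FakeCase` is stubbed below so the snippet stands alone; the attribute names and values are made up for illustration and are not CIME API.
```python
class FakeCase:
    """Minimal stand-in mirroring CIME.BuildTools.configure.FakeCase (illustrative)."""

    def __init__(self, compiler, mpilib, debug, comp_interface):
        self._vals = {
            "COMPILER": compiler,
            "MPILIB": mpilib,
            "DEBUG": debug,
            "COMP_INTERFACE": comp_interface,
        }

    def get_value(self, attrib):
        # Mirrors the real class: only attributes that were stored may be read.
        assert attrib in self._vals, "unsupported attribute '%s'" % attrib
        return self._vals[attrib]

    def set_value(self, attrib, value):
        # The proposed addition: callers register any extra case variables they need.
        self._vals[attrib] = value


# Hypothetical buildnml test setup: seed only the variables the test needs.
case = FakeCase("intel", "mpich", False, "nuopc")
case.set_value("CASEROOT", "/tmp/fake_caseroot")
case.set_value("RUN_TYPE", "startup")
assert case.get_value("RUN_TYPE") == "startup"
```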
<code>
[start of CIME/BuildTools/configure.py]
1 #!/usr/bin/env python3
2
3 """This script writes CIME build information to a directory.
4
5 The pieces of information that will be written include:
6
7 1. Machine-specific build settings (i.e. the "Macros" file).
8 2. File-specific build settings (i.e. "Depends" files).
9 3. Environment variable loads (i.e. the env_mach_specific files).
10
11 The .env_mach_specific.sh and .env_mach_specific.csh files are specific to a
12 given compiler, MPI library, and DEBUG setting. By default, these will be the
13 machine's default compiler, the machine's default MPI library, and FALSE,
14 respectively. These can be changed by setting the environment variables
15 COMPILER, MPILIB, and DEBUG, respectively.
16 """
17
18 from CIME.XML.standard_module_setup import *
19 from CIME.utils import (
20 expect,
21 safe_copy,
22 get_model,
23 get_src_root,
24 stringify_bool,
25 copy_local_macros_to_dir,
26 )
27 from CIME.XML.env_mach_specific import EnvMachSpecific
28 from CIME.XML.files import Files
29 from CIME.build import CmakeTmpBuildDir
30
31 import shutil
32
33 logger = logging.getLogger(__name__)
34
35
36 def configure(
37 machobj,
38 output_dir,
39 macros_format,
40 compiler,
41 mpilib,
42 debug,
43 comp_interface,
44 sysos,
45 unit_testing=False,
46 noenv=False,
47 threaded=False,
48 extra_machines_dir=None,
49 ):
50 """Add Macros, Depends, and env_mach_specific files to a directory.
51
52 Arguments:
53 machobj - Machines argument for this machine.
54 output_dir - Directory in which to place output.
55 macros_format - Container containing the string 'Makefile' to produce
56 Makefile Macros output, and/or 'CMake' for CMake output.
57 compiler - String containing the compiler vendor to configure for.
58 mpilib - String containing the MPI implementation to configure for.
59 debug - Boolean specifying whether debugging options are enabled.
60 unit_testing - Boolean specifying whether we're running unit tests (as
61 opposed to a system run)
62 extra_machines_dir - String giving path to an additional directory that will be
63 searched for cmake_macros.
64 """
65 new_cmake_macros_dir = Files(comp_interface=comp_interface).get_value(
66 "CMAKE_MACROS_DIR"
67 )
68 for form in macros_format:
69
70 if not os.path.isfile(os.path.join(output_dir, "Macros.cmake")):
71 safe_copy(os.path.join(new_cmake_macros_dir, "Macros.cmake"), output_dir)
72 output_cmake_macros_dir = os.path.join(output_dir, "cmake_macros")
73 if not os.path.exists(output_cmake_macros_dir):
74 shutil.copytree(new_cmake_macros_dir, output_cmake_macros_dir)
75
76 copy_local_macros_to_dir(
77 output_cmake_macros_dir, extra_machdir=extra_machines_dir
78 )
79
80 if form == "Makefile":
81 # Use the cmake macros to generate the make macros
82 cmake_args = " -DOS={} -DMACH={} -DCOMPILER={} -DDEBUG={} -DMPILIB={} -Dcompile_threaded={} -DCASEROOT={}".format(
83 sysos,
84 machobj.get_machine_name(),
85 compiler,
86 stringify_bool(debug),
87 mpilib,
88 stringify_bool(threaded),
89 output_dir,
90 )
91
92 with CmakeTmpBuildDir(macroloc=output_dir) as cmaketmp:
93 output = cmaketmp.get_makefile_vars(cmake_args=cmake_args)
94
95 with open(os.path.join(output_dir, "Macros.make"), "w") as fd:
96 fd.write(output)
97
98 copy_depends_files(
99 machobj.get_machine_name(), machobj.machines_dir, output_dir, compiler
100 )
101 generate_env_mach_specific(
102 output_dir,
103 machobj,
104 compiler,
105 mpilib,
106 debug,
107 comp_interface,
108 sysos,
109 unit_testing,
110 threaded,
111 noenv=noenv,
112 )
113
114
115 def copy_depends_files(machine_name, machines_dir, output_dir, compiler):
116 """
117 Copy any system or compiler Depends files if they do not exist in the output directory
118 If there is a match for Depends.machine_name.compiler copy that and ignore the others
119 """
120 # Note, the cmake build system does not stop if Depends.mach.compiler.cmake is found
121 makefiles_done = False
122 both = "{}.{}".format(machine_name, compiler)
123 for suffix in [both, machine_name, compiler]:
124 for extra_suffix in ["", ".cmake"]:
125 if extra_suffix == "" and makefiles_done:
126 continue
127
128 basename = "Depends.{}{}".format(suffix, extra_suffix)
129 dfile = os.path.join(machines_dir, basename)
130 outputdfile = os.path.join(output_dir, basename)
131 if os.path.isfile(dfile):
132 if suffix == both and extra_suffix == "":
133 makefiles_done = True
134 if not os.path.exists(outputdfile):
135 safe_copy(dfile, outputdfile)
136
137
138 class FakeCase(object):
139 def __init__(self, compiler, mpilib, debug, comp_interface, threading=False):
140 # PIO_VERSION is needed to parse config_machines.xml but isn't otherwise used
141 # by FakeCase
142 self._vals = {
143 "COMPILER": compiler,
144 "MPILIB": mpilib,
145 "DEBUG": debug,
146 "COMP_INTERFACE": comp_interface,
147 "PIO_VERSION": 2,
148 "SMP_PRESENT": threading,
149 "MODEL": get_model(),
150 "SRCROOT": get_src_root(),
151 }
152
153 def get_build_threaded(self):
154 return self.get_value("SMP_PRESENT")
155
156 def get_value(self, attrib):
157 expect(
158 attrib in self._vals,
159 "FakeCase does not support getting value of '%s'" % attrib,
160 )
161 return self._vals[attrib]
162
163
164 def generate_env_mach_specific(
165 output_dir,
166 machobj,
167 compiler,
168 mpilib,
169 debug,
170 comp_interface,
171 sysos,
172 unit_testing,
173 threaded,
174 noenv=False,
175 ):
176 """
177 env_mach_specific generation.
178 """
179 ems_path = os.path.join(output_dir, "env_mach_specific.xml")
180 if os.path.exists(ems_path):
181 logger.warning("{} already exists, delete to replace".format(ems_path))
182 return
183
184 ems_file = EnvMachSpecific(
185 output_dir, unit_testing=unit_testing, standalone_configure=True
186 )
187 ems_file.populate(
188 machobj,
189 attributes={"mpilib": mpilib, "compiler": compiler, "threaded": threaded},
190 )
191 ems_file.write()
192
193 if noenv:
194 return
195
196 fake_case = FakeCase(compiler, mpilib, debug, comp_interface)
197 ems_file.load_env(fake_case)
198 for shell in ("sh", "csh"):
199 ems_file.make_env_mach_specific_file(shell, fake_case, output_dir=output_dir)
200 shell_path = os.path.join(output_dir, ".env_mach_specific." + shell)
201 with open(shell_path, "a") as shell_file:
202 if shell == "sh":
203 shell_file.write("\nexport COMPILER={}\n".format(compiler))
204 shell_file.write("export MPILIB={}\n".format(mpilib))
205 shell_file.write("export DEBUG={}\n".format(repr(debug).upper()))
206 shell_file.write("export OS={}\n".format(sysos))
207 else:
208 shell_file.write("\nsetenv COMPILER {}\n".format(compiler))
209 shell_file.write("setenv MPILIB {}\n".format(mpilib))
210 shell_file.write("setenv DEBUG {}\n".format(repr(debug).upper()))
211 shell_file.write("setenv OS {}\n".format(sysos))
212
[end of CIME/BuildTools/configure.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/CIME/BuildTools/configure.py b/CIME/BuildTools/configure.py
--- a/CIME/BuildTools/configure.py
+++ b/CIME/BuildTools/configure.py
@@ -153,6 +153,10 @@
def get_build_threaded(self):
return self.get_value("SMP_PRESENT")
+ def get_case_root(self):
+ """Returns the root directory for this case."""
+ return self.get_value("CASEROOT")
+
def get_value(self, attrib):
expect(
attrib in self._vals,
@@ -160,6 +164,10 @@
)
return self._vals[attrib]
+ def set_value(self, attrib, value):
+ """Sets a given variable value for the case"""
+ self._vals[attrib] = value
+
def generate_env_mach_specific(
output_dir,
|
{"golden_diff": "diff --git a/CIME/BuildTools/configure.py b/CIME/BuildTools/configure.py\n--- a/CIME/BuildTools/configure.py\n+++ b/CIME/BuildTools/configure.py\n@@ -153,6 +153,10 @@\n def get_build_threaded(self):\n return self.get_value(\"SMP_PRESENT\")\n \n+ def get_case_root(self):\n+ \"\"\"Returns the root directory for this case.\"\"\"\n+ return self.get_value(\"CASEROOT\")\n+\n def get_value(self, attrib):\n expect(\n attrib in self._vals,\n@@ -160,6 +164,10 @@\n )\n return self._vals[attrib]\n \n+ def set_value(self, attrib, value):\n+ \"\"\"Sets a given variable value for the case\"\"\"\n+ self._vals[attrib] = value\n+\n \n def generate_env_mach_specific(\n output_dir,\n", "issue": "Add \"set_value\" method to CIME.BuildTools.configure.FakeCase so that user can add needed variables\nIn principle this is a simple change, but I want to make sure it's acceptable. I'd like to add a set_value method to the FakeCase class in CIME.BuiltTools.configure so that the user can add the variables they want to have in their FakeCase. The utility I see is for unit-testing of buildnml. But, I think there may be other cases where you want to create a simpler Fake case without doing all the normal needed setup to create a real case. There are some offline scripts we are thinking about having this kind of utility where we know we need to setup some type of fake case, but we want to hide it from the user, and it doesn't need to be fully functional like a real case.\r\n\r\n``` python\r\ndiff --git a/scripts/lib/CIME/BuildTools/configure.py b/scripts/lib/CIME/BuildTools/configure.py\r\nindex a74c087a5..fd7067ed7 100644\r\n--- a/scripts/lib/CIME/BuildTools/configure.py\r\n+++ b/scripts/lib/CIME/BuildTools/configure.py\r\n@@ -74,6 +74,9 @@ def get_value(self, attrib):\r\n expect(attrib in self._vals, \"FakeCase does not support getting value of '%s'\" % attrib)\r\n return self._vals[attrib]\r\n \r\n+ def set_value(self, attrib, value):\r\n+ self._vals[attrib] = value\r\n+\r\n def _generate_env_mach_specific(output_dir, machobj, compiler, mpilib, debug,\r\n sysos, unit_testing):\r\n \"\"\"\r\n```\r\n\r\n@jedwards4b does this sound acceptable? If so I'll submit a simple PR for it.\r\n@billsacks would this kind of thing possibly make LILAC easier to setup?\nAdd \"set_value\" method to CIME.BuildTools.configure.FakeCase so that user can add needed variables\nIn principle this is a simple change, but I want to make sure it's acceptable. I'd like to add a set_value method to the FakeCase class in CIME.BuiltTools.configure so that the user can add the variables they want to have in their FakeCase. The utility I see is for unit-testing of buildnml. But, I think there may be other cases where you want to create a simpler Fake case without doing all the normal needed setup to create a real case. 
There are some offline scripts we are thinking about having this kind of utility where we know we need to setup some type of fake case, but we want to hide it from the user, and it doesn't need to be fully functional like a real case.\r\n\r\n``` python\r\ndiff --git a/scripts/lib/CIME/BuildTools/configure.py b/scripts/lib/CIME/BuildTools/configure.py\r\nindex a74c087a5..fd7067ed7 100644\r\n--- a/scripts/lib/CIME/BuildTools/configure.py\r\n+++ b/scripts/lib/CIME/BuildTools/configure.py\r\n@@ -74,6 +74,9 @@ def get_value(self, attrib):\r\n expect(attrib in self._vals, \"FakeCase does not support getting value of '%s'\" % attrib)\r\n return self._vals[attrib]\r\n \r\n+ def set_value(self, attrib, value):\r\n+ self._vals[attrib] = value\r\n+\r\n def _generate_env_mach_specific(output_dir, machobj, compiler, mpilib, debug,\r\n sysos, unit_testing):\r\n \"\"\"\r\n```\r\n\r\n@jedwards4b does this sound acceptable? If so I'll submit a simple PR for it.\r\n@billsacks would this kind of thing possibly make LILAC easier to setup?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"This script writes CIME build information to a directory.\n\nThe pieces of information that will be written include:\n\n1. Machine-specific build settings (i.e. the \"Macros\" file).\n2. File-specific build settings (i.e. \"Depends\" files).\n3. Environment variable loads (i.e. the env_mach_specific files).\n\nThe .env_mach_specific.sh and .env_mach_specific.csh files are specific to a\ngiven compiler, MPI library, and DEBUG setting. By default, these will be the\nmachine's default compiler, the machine's default MPI library, and FALSE,\nrespectively. These can be changed by setting the environment variables\nCOMPILER, MPILIB, and DEBUG, respectively.\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import (\n expect,\n safe_copy,\n get_model,\n get_src_root,\n stringify_bool,\n copy_local_macros_to_dir,\n)\nfrom CIME.XML.env_mach_specific import EnvMachSpecific\nfrom CIME.XML.files import Files\nfrom CIME.build import CmakeTmpBuildDir\n\nimport shutil\n\nlogger = logging.getLogger(__name__)\n\n\ndef configure(\n machobj,\n output_dir,\n macros_format,\n compiler,\n mpilib,\n debug,\n comp_interface,\n sysos,\n unit_testing=False,\n noenv=False,\n threaded=False,\n extra_machines_dir=None,\n):\n \"\"\"Add Macros, Depends, and env_mach_specific files to a directory.\n\n Arguments:\n machobj - Machines argument for this machine.\n output_dir - Directory in which to place output.\n macros_format - Container containing the string 'Makefile' to produce\n Makefile Macros output, and/or 'CMake' for CMake output.\n compiler - String containing the compiler vendor to configure for.\n mpilib - String containing the MPI implementation to configure for.\n debug - Boolean specifying whether debugging options are enabled.\n unit_testing - Boolean specifying whether we're running unit tests (as\n opposed to a system run)\n extra_machines_dir - String giving path to an additional directory that will be\n searched for cmake_macros.\n \"\"\"\n new_cmake_macros_dir = Files(comp_interface=comp_interface).get_value(\n \"CMAKE_MACROS_DIR\"\n )\n for form in macros_format:\n\n if not os.path.isfile(os.path.join(output_dir, \"Macros.cmake\")):\n safe_copy(os.path.join(new_cmake_macros_dir, \"Macros.cmake\"), output_dir)\n output_cmake_macros_dir = os.path.join(output_dir, \"cmake_macros\")\n if not os.path.exists(output_cmake_macros_dir):\n shutil.copytree(new_cmake_macros_dir, output_cmake_macros_dir)\n\n 
copy_local_macros_to_dir(\n output_cmake_macros_dir, extra_machdir=extra_machines_dir\n )\n\n if form == \"Makefile\":\n # Use the cmake macros to generate the make macros\n cmake_args = \" -DOS={} -DMACH={} -DCOMPILER={} -DDEBUG={} -DMPILIB={} -Dcompile_threaded={} -DCASEROOT={}\".format(\n sysos,\n machobj.get_machine_name(),\n compiler,\n stringify_bool(debug),\n mpilib,\n stringify_bool(threaded),\n output_dir,\n )\n\n with CmakeTmpBuildDir(macroloc=output_dir) as cmaketmp:\n output = cmaketmp.get_makefile_vars(cmake_args=cmake_args)\n\n with open(os.path.join(output_dir, \"Macros.make\"), \"w\") as fd:\n fd.write(output)\n\n copy_depends_files(\n machobj.get_machine_name(), machobj.machines_dir, output_dir, compiler\n )\n generate_env_mach_specific(\n output_dir,\n machobj,\n compiler,\n mpilib,\n debug,\n comp_interface,\n sysos,\n unit_testing,\n threaded,\n noenv=noenv,\n )\n\n\ndef copy_depends_files(machine_name, machines_dir, output_dir, compiler):\n \"\"\"\n Copy any system or compiler Depends files if they do not exist in the output directory\n If there is a match for Depends.machine_name.compiler copy that and ignore the others\n \"\"\"\n # Note, the cmake build system does not stop if Depends.mach.compiler.cmake is found\n makefiles_done = False\n both = \"{}.{}\".format(machine_name, compiler)\n for suffix in [both, machine_name, compiler]:\n for extra_suffix in [\"\", \".cmake\"]:\n if extra_suffix == \"\" and makefiles_done:\n continue\n\n basename = \"Depends.{}{}\".format(suffix, extra_suffix)\n dfile = os.path.join(machines_dir, basename)\n outputdfile = os.path.join(output_dir, basename)\n if os.path.isfile(dfile):\n if suffix == both and extra_suffix == \"\":\n makefiles_done = True\n if not os.path.exists(outputdfile):\n safe_copy(dfile, outputdfile)\n\n\nclass FakeCase(object):\n def __init__(self, compiler, mpilib, debug, comp_interface, threading=False):\n # PIO_VERSION is needed to parse config_machines.xml but isn't otherwise used\n # by FakeCase\n self._vals = {\n \"COMPILER\": compiler,\n \"MPILIB\": mpilib,\n \"DEBUG\": debug,\n \"COMP_INTERFACE\": comp_interface,\n \"PIO_VERSION\": 2,\n \"SMP_PRESENT\": threading,\n \"MODEL\": get_model(),\n \"SRCROOT\": get_src_root(),\n }\n\n def get_build_threaded(self):\n return self.get_value(\"SMP_PRESENT\")\n\n def get_value(self, attrib):\n expect(\n attrib in self._vals,\n \"FakeCase does not support getting value of '%s'\" % attrib,\n )\n return self._vals[attrib]\n\n\ndef generate_env_mach_specific(\n output_dir,\n machobj,\n compiler,\n mpilib,\n debug,\n comp_interface,\n sysos,\n unit_testing,\n threaded,\n noenv=False,\n):\n \"\"\"\n env_mach_specific generation.\n \"\"\"\n ems_path = os.path.join(output_dir, \"env_mach_specific.xml\")\n if os.path.exists(ems_path):\n logger.warning(\"{} already exists, delete to replace\".format(ems_path))\n return\n\n ems_file = EnvMachSpecific(\n output_dir, unit_testing=unit_testing, standalone_configure=True\n )\n ems_file.populate(\n machobj,\n attributes={\"mpilib\": mpilib, \"compiler\": compiler, \"threaded\": threaded},\n )\n ems_file.write()\n\n if noenv:\n return\n\n fake_case = FakeCase(compiler, mpilib, debug, comp_interface)\n ems_file.load_env(fake_case)\n for shell in (\"sh\", \"csh\"):\n ems_file.make_env_mach_specific_file(shell, fake_case, output_dir=output_dir)\n shell_path = os.path.join(output_dir, \".env_mach_specific.\" + shell)\n with open(shell_path, \"a\") as shell_file:\n if shell == \"sh\":\n shell_file.write(\"\\nexport 
COMPILER={}\\n\".format(compiler))\n shell_file.write(\"export MPILIB={}\\n\".format(mpilib))\n shell_file.write(\"export DEBUG={}\\n\".format(repr(debug).upper()))\n shell_file.write(\"export OS={}\\n\".format(sysos))\n else:\n shell_file.write(\"\\nsetenv COMPILER {}\\n\".format(compiler))\n shell_file.write(\"setenv MPILIB {}\\n\".format(mpilib))\n shell_file.write(\"setenv DEBUG {}\\n\".format(repr(debug).upper()))\n shell_file.write(\"setenv OS {}\\n\".format(sysos))\n", "path": "CIME/BuildTools/configure.py"}]}
| 3,546 | 200 |
gh_patches_debug_37590
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-840
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
urllib3 attempts to use IPv6 even when IPv6 is disabled
This is an issue when running on a server without IPv6 (IPv6 must be disabled because the network does not support it). Example of what happens when connecting to https://graph.facebook.com using requests and the IPv4 attempt fails:
```
HTTPSConnectionPool(host='graph.facebook.com', port=443): Max retries exceeded with url: /v2.5/me/feed (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f4dbd158518>: Failed to establish a new connection: [Errno 97] Address family not supported by protocol',))
Traceback (most recent call last):
File "/home/lib/python3.4/site-packages/requests/packages/urllib3/connection.py", line 137, in _new_conn
(self.host, self.port), self.timeout, **extra_kw)
File "/home/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 91, in create_connection
raise err
File "/home/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py", line 71, in create_connection
sock = socket.socket(af, socktype, proto)
File "/usr/lib/python3.4/socket.py", line 126, in __init__
_socket.socket.__init__(self, family, type, proto, fileno)
OSError: [Errno 97] Address family not supported by protocol
```
urllib3 should throw an exception after exhausting all IPv4 options instead of trying (and invariably failing) IPv6.
See closed issue https://github.com/kennethreitz/requests/issues/3084.
</issue>
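For reference, a standalone sketch of the behaviour requested here, close in spirit to the accepted fix shown at the end of this entry: probe once whether the host can actually bind an IPv6 socket, and restrict `getaddrinfo` to IPv4 when it cannot. The names below are made up for the illustration and are not urllib3 API.
```python
import socket


def _system_has_ipv6():
    """Return True only if an IPv6 socket can actually be bound on this host."""
    if not socket.has_ipv6:  # interpreter built without IPv6 support
        return False
    sock = None
    try:
        sock = socket.socket(socket.AF_INET6)
        sock.bind(("::1", 0))
        return True
    except OSError:
        return False
    finally:
        if sock is not None:
            sock.close()


HAS_IPV6 = _system_has_ipv6()


def allowed_address_family():
    """Use AF_UNSPEC (IPv4 + IPv6 lookups) only when IPv6 is usable, else AF_INET."""
    return socket.AF_UNSPEC if HAS_IPV6 else socket.AF_INET


if __name__ == "__main__":
    # On an IPv6-less host this never yields AF_INET6 records, so the connect
    # loop cannot die with "Address family not supported by protocol".
    for res in socket.getaddrinfo("graph.facebook.com", 443,
                                  allowed_address_family(), socket.SOCK_STREAM):
        print(res[0], res[4])
```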
<code>
[start of urllib3/util/connection.py]
1 from __future__ import absolute_import
2 import socket
3 try:
4 from select import poll, POLLIN
5 except ImportError: # `poll` doesn't exist on OSX and other platforms
6 poll = False
7 try:
8 from select import select
9 except ImportError: # `select` doesn't exist on AppEngine.
10 select = False
11
12
13 def is_connection_dropped(conn): # Platform-specific
14 """
15 Returns True if the connection is dropped and should be closed.
16
17 :param conn:
18 :class:`httplib.HTTPConnection` object.
19
20 Note: For platforms like AppEngine, this will always return ``False`` to
21 let the platform handle connection recycling transparently for us.
22 """
23 sock = getattr(conn, 'sock', False)
24 if sock is False: # Platform-specific: AppEngine
25 return False
26 if sock is None: # Connection already closed (such as by httplib).
27 return True
28
29 if not poll:
30 if not select: # Platform-specific: AppEngine
31 return False
32
33 try:
34 return select([sock], [], [], 0.0)[0]
35 except socket.error:
36 return True
37
38 # This version is better on platforms that support it.
39 p = poll()
40 p.register(sock, POLLIN)
41 for (fno, ev) in p.poll(0.0):
42 if fno == sock.fileno():
43 # Either data is buffered (bad), or the connection is dropped.
44 return True
45
46
47 # This function is copied from socket.py in the Python 2.7 standard
48 # library test suite. Added to its signature is only `socket_options`.
49 def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
50 source_address=None, socket_options=None):
51 """Connect to *address* and return the socket object.
52
53 Convenience function. Connect to *address* (a 2-tuple ``(host,
54 port)``) and return the socket object. Passing the optional
55 *timeout* parameter will set the timeout on the socket instance
56 before attempting to connect. If no *timeout* is supplied, the
57 global default timeout setting returned by :func:`getdefaulttimeout`
58 is used. If *source_address* is set it must be a tuple of (host, port)
59 for the socket to bind as a source address before making the connection.
60 An host of '' or port 0 tells the OS to use the default.
61 """
62
63 host, port = address
64 if host.startswith('['):
65 host = host.strip('[]')
66 err = None
67 for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
68 af, socktype, proto, canonname, sa = res
69 sock = None
70 try:
71 sock = socket.socket(af, socktype, proto)
72
73 # If provided, set socket level options before connecting.
74 # This is the only addition urllib3 makes to this function.
75 _set_socket_options(sock, socket_options)
76
77 if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
78 sock.settimeout(timeout)
79 if source_address:
80 sock.bind(source_address)
81 sock.connect(sa)
82 return sock
83
84 except socket.error as e:
85 err = e
86 if sock is not None:
87 sock.close()
88 sock = None
89
90 if err is not None:
91 raise err
92
93 raise socket.error("getaddrinfo returns an empty list")
94
95
96 def _set_socket_options(sock, options):
97 if options is None:
98 return
99
100 for opt in options:
101 sock.setsockopt(*opt)
102
[end of urllib3/util/connection.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/urllib3/util/connection.py b/urllib3/util/connection.py
--- a/urllib3/util/connection.py
+++ b/urllib3/util/connection.py
@@ -46,6 +46,8 @@
# This function is copied from socket.py in the Python 2.7 standard
# library test suite. Added to its signature is only `socket_options`.
+# One additional modification is that we avoid binding to IPv6 servers
+# discovered in DNS if the system doesn't have IPv6 functionality.
def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
source_address=None, socket_options=None):
"""Connect to *address* and return the socket object.
@@ -64,14 +66,19 @@
if host.startswith('['):
host = host.strip('[]')
err = None
- for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):
+
+ # Using the value from allowed_gai_family() in the context of getaddrinfo lets
+ # us select whether to work with IPv4 DNS records, IPv6 records, or both.
+ # The original create_connection function always returns all records.
+ family = allowed_gai_family()
+
+ for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
af, socktype, proto, canonname, sa = res
sock = None
try:
sock = socket.socket(af, socktype, proto)
# If provided, set socket level options before connecting.
- # This is the only addition urllib3 makes to this function.
_set_socket_options(sock, socket_options)
if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:
@@ -99,3 +106,39 @@
for opt in options:
sock.setsockopt(*opt)
+
+
+def allowed_gai_family():
+ """This function is designed to work in the context of
+ getaddrinfo, where family=socket.AF_UNSPEC is the default and
+ will perform a DNS search for both IPv6 and IPv4 records."""
+
+ family = socket.AF_INET
+ if HAS_IPV6:
+ family = socket.AF_UNSPEC
+ return family
+
+
+def _has_ipv6(host):
+ """ Returns True if the system can bind an IPv6 address. """
+ sock = None
+ has_ipv6 = False
+
+ if socket.has_ipv6:
+ # has_ipv6 returns true if cPython was compiled with IPv6 support.
+ # It does not tell us if the system has IPv6 support enabled. To
+ # determine that we must bind to an IPv6 address.
+ # https://github.com/shazow/urllib3/pull/611
+ # https://bugs.python.org/issue658327
+ try:
+ sock = socket.socket(socket.AF_INET6)
+ sock.bind((host, 0))
+ has_ipv6 = True
+ except Exception:
+ pass
+
+ if sock:
+ sock.close()
+ return has_ipv6
+
+HAS_IPV6 = _has_ipv6('::1')
|
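The diff above boils down to one idea: probe once whether the host can actually bind an IPv6 socket, and if it cannot, pass `socket.AF_INET` instead of `AF_UNSPEC` to `getaddrinfo` so AAAA records never reach the connect loop. A minimal self-contained sketch of that gating (the hostname `example.com` and port 443 are placeholders, not part of the patch):

```
import socket

def _can_bind_ipv6(host="::1"):
    # Mirrors the patch's probe: compiled-in IPv6 support alone is not enough,
    # the process must actually manage to bind an IPv6 socket.
    if not socket.has_ipv6:
        return False
    try:
        with socket.socket(socket.AF_INET6) as sock:
            sock.bind((host, 0))
        return True
    except OSError:
        return False

family = socket.AF_UNSPEC if _can_bind_ipv6() else socket.AF_INET
# With AF_INET, getaddrinfo only returns IPv4 records, so the connect loop can
# no longer fail with "Address family not supported by protocol".
for res in socket.getaddrinfo("example.com", 443, family, socket.SOCK_STREAM):
    print(res[4])
```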
{"golden_diff": "diff --git a/urllib3/util/connection.py b/urllib3/util/connection.py\n--- a/urllib3/util/connection.py\n+++ b/urllib3/util/connection.py\n@@ -46,6 +46,8 @@\n \n # This function is copied from socket.py in the Python 2.7 standard\n # library test suite. Added to its signature is only `socket_options`.\n+# One additional modification is that we avoid binding to IPv6 servers\n+# discovered in DNS if the system doesn't have IPv6 functionality.\n def create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,\n source_address=None, socket_options=None):\n \"\"\"Connect to *address* and return the socket object.\n@@ -64,14 +66,19 @@\n if host.startswith('['):\n host = host.strip('[]')\n err = None\n- for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):\n+\n+ # Using the value from allowed_gai_family() in the context of getaddrinfo lets\n+ # us select whether to work with IPv4 DNS records, IPv6 records, or both.\n+ # The original create_connection function always returns all records.\n+ family = allowed_gai_family()\n+\n+ for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n af, socktype, proto, canonname, sa = res\n sock = None\n try:\n sock = socket.socket(af, socktype, proto)\n \n # If provided, set socket level options before connecting.\n- # This is the only addition urllib3 makes to this function.\n _set_socket_options(sock, socket_options)\n \n if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:\n@@ -99,3 +106,39 @@\n \n for opt in options:\n sock.setsockopt(*opt)\n+\n+\n+def allowed_gai_family():\n+ \"\"\"This function is designed to work in the context of\n+ getaddrinfo, where family=socket.AF_UNSPEC is the default and\n+ will perform a DNS search for both IPv6 and IPv4 records.\"\"\"\n+\n+ family = socket.AF_INET\n+ if HAS_IPV6:\n+ family = socket.AF_UNSPEC\n+ return family\n+\n+\n+def _has_ipv6(host):\n+ \"\"\" Returns True if the system can bind an IPv6 address. \"\"\"\n+ sock = None\n+ has_ipv6 = False\n+\n+ if socket.has_ipv6:\n+ # has_ipv6 returns true if cPython was compiled with IPv6 support.\n+ # It does not tell us if the system has IPv6 support enabled. To\n+ # determine that we must bind to an IPv6 address.\n+ # https://github.com/shazow/urllib3/pull/611\n+ # https://bugs.python.org/issue658327\n+ try:\n+ sock = socket.socket(socket.AF_INET6)\n+ sock.bind((host, 0))\n+ has_ipv6 = True\n+ except Exception:\n+ pass\n+\n+ if sock:\n+ sock.close()\n+ return has_ipv6\n+\n+HAS_IPV6 = _has_ipv6('::1')\n", "issue": "urllib3 attempts to use IPv6 even when IPv6 is disabled\nThis is an issue when running on a server without IPv6 (must be disabled because the network does not support it). 
Example when connecting to https://graph.facebook.com using requests and IPv4 happens to fail:\n\n```\nHTTPSConnectionPool(host='graph.facebook.com', port=443): Max retries exceeded with url: /v2.5/me/feed (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f4dbd158518>: Failed to establish a new connection: [Errno 97] Address family not supported by protocol',))\nTraceback (most recent call last):\n File \"/home/lib/python3.4/site-packages/requests/packages/urllib3/connection.py\", line 137, in _new_conn\n (self.host, self.port), self.timeout, **extra_kw)\n File \"/home/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py\", line 91, in create_connection\n raise err\n File \"/home/lib/python3.4/site-packages/requests/packages/urllib3/util/connection.py\", line 71, in create_connection\n sock = socket.socket(af, socktype, proto)\n File \"/usr/lib/python3.4/socket.py\", line 126, in __init__\n _socket.socket.__init__(self, family, type, proto, fileno)\nOSError: [Errno 97] Address family not supported by protocol\n```\n\nurllib3 should throw an exception after exhausting all IPv4 options instead of trying (and invariably failing) IPv6.\n\nSee closed issue https://github.com/kennethreitz/requests/issues/3084.\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport socket\ntry:\n from select import poll, POLLIN\nexcept ImportError: # `poll` doesn't exist on OSX and other platforms\n poll = False\n try:\n from select import select\n except ImportError: # `select` doesn't exist on AppEngine.\n select = False\n\n\ndef is_connection_dropped(conn): # Platform-specific\n \"\"\"\n Returns True if the connection is dropped and should be closed.\n\n :param conn:\n :class:`httplib.HTTPConnection` object.\n\n Note: For platforms like AppEngine, this will always return ``False`` to\n let the platform handle connection recycling transparently for us.\n \"\"\"\n sock = getattr(conn, 'sock', False)\n if sock is False: # Platform-specific: AppEngine\n return False\n if sock is None: # Connection already closed (such as by httplib).\n return True\n\n if not poll:\n if not select: # Platform-specific: AppEngine\n return False\n\n try:\n return select([sock], [], [], 0.0)[0]\n except socket.error:\n return True\n\n # This version is better on platforms that support it.\n p = poll()\n p.register(sock, POLLIN)\n for (fno, ev) in p.poll(0.0):\n if fno == sock.fileno():\n # Either data is buffered (bad), or the connection is dropped.\n return True\n\n\n# This function is copied from socket.py in the Python 2.7 standard\n# library test suite. Added to its signature is only `socket_options`.\ndef create_connection(address, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,\n source_address=None, socket_options=None):\n \"\"\"Connect to *address* and return the socket object.\n\n Convenience function. Connect to *address* (a 2-tuple ``(host,\n port)``) and return the socket object. Passing the optional\n *timeout* parameter will set the timeout on the socket instance\n before attempting to connect. If no *timeout* is supplied, the\n global default timeout setting returned by :func:`getdefaulttimeout`\n is used. 
If *source_address* is set it must be a tuple of (host, port)\n for the socket to bind as a source address before making the connection.\n An host of '' or port 0 tells the OS to use the default.\n \"\"\"\n\n host, port = address\n if host.startswith('['):\n host = host.strip('[]')\n err = None\n for res in socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM):\n af, socktype, proto, canonname, sa = res\n sock = None\n try:\n sock = socket.socket(af, socktype, proto)\n\n # If provided, set socket level options before connecting.\n # This is the only addition urllib3 makes to this function.\n _set_socket_options(sock, socket_options)\n\n if timeout is not socket._GLOBAL_DEFAULT_TIMEOUT:\n sock.settimeout(timeout)\n if source_address:\n sock.bind(source_address)\n sock.connect(sa)\n return sock\n\n except socket.error as e:\n err = e\n if sock is not None:\n sock.close()\n sock = None\n\n if err is not None:\n raise err\n\n raise socket.error(\"getaddrinfo returns an empty list\")\n\n\ndef _set_socket_options(sock, options):\n if options is None:\n return\n\n for opt in options:\n sock.setsockopt(*opt)\n", "path": "urllib3/util/connection.py"}]}
| 1,895 | 700 |
gh_patches_debug_57104
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-1704
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
make test: no attribute 'optim' error in 'examples/contrib/oed/ab_test.py'
### Issue Description
On the latest dev branch, `make test` gives the following error:
_
examples/contrib/oed/ab_test.py:12: in <module>
from gp_bayes_opt import GPBayesOptimizer
examples/contrib/oed/gp_bayes_opt.py:11: in <module>
class GPBayesOptimizer(pyro.optim.multi.MultiOptimizer):
E AttributeError: module 'pyro' has no attribute 'optim'
### Environment
For any bugs, please provide the following:
- OS and python version: CentOS Linux 7 (Core); Python 3.7.1
- PyTorch version, or if relevant, output of `pip freeze`: PyTorch 1.0.0
- Pyro version: output of `python -c 'import pyro; print pyro.__version__'`: pyro 0.3.0+9adbdb7
### Code Snippet
```
make install
make format
make test
```
</issue>
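The traceback is a plain Python import-system effect rather than anything specific to the OED example: `import pyro` only binds whatever submodules `pyro/__init__.py` itself imports, so `pyro.optim.multi` is not reachable as an attribute until some module imports it explicitly, which is exactly what the accepted patch below adds. A stdlib illustration of the same behaviour, using `xml`/`xml.dom` purely as a stand-in:

```
import importlib
import xml

print(hasattr(xml, "dom"))          # False: importing the package alone does not bind the submodule
importlib.import_module("xml.dom")  # the equivalent of the patch's `import pyro.optim`
print(hasattr(xml, "dom"))          # True: the submodule is now an attribute of its parent package
```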
<code>
[start of examples/contrib/oed/gp_bayes_opt.py]
1 import torch
2 import torch.autograd as autograd
3 import torch.optim as optim
4 from torch.distributions import transform_to
5
6 import pyro
7 import pyro.contrib.gp as gp
8 from pyro.infer import TraceEnum_ELBO
9
10
11 class GPBayesOptimizer(pyro.optim.multi.MultiOptimizer):
12 """Performs Bayesian Optimization using a Gaussian Process as an
13 emulator for the unknown function.
14 """
15
16 def __init__(self, constraints, gpmodel, num_acquisitions, acquisition_func=None):
17 """
18 :param torch.constraint constraints: constraints defining the domain of `f`
19 :param gp.models.GPRegression gpmodel: a (possibly initialized) GP
20 regression model. The kernel, etc is specified via `gpmodel`.
21 :param int num_acquisitions: number of points to acquire at each step
22 :param function acquisition_func: a function to generate acquisitions.
23 It should return a torch.Tensor of new points to query.
24 """
25 if acquisition_func is None:
26 acquisition_func = self.acquire_thompson
27
28 self.constraints = constraints
29 self.gpmodel = gpmodel
30 self.num_acquisitions = num_acquisitions
31 self.acquisition_func = acquisition_func
32
33 def update_posterior(self, X, y):
34 X = torch.cat([self.gpmodel.X, X])
35 y = torch.cat([self.gpmodel.y, y])
36 self.gpmodel.set_data(X, y)
37 optimizer = torch.optim.Adam(self.gpmodel.parameters(), lr=0.001)
38 gp.util.train(self.gpmodel, optimizer,
39 loss_fn=TraceEnum_ELBO(strict_enumeration_warning=False).differentiable_loss,
40 retain_graph=True)
41
42 def find_a_candidate(self, differentiable, x_init):
43 """Given a starting point, `x_init`, takes one LBFGS step
44 to optimize the differentiable function.
45
46 :param function differentiable: a function amenable to torch
47 autograd
48 :param torch.Tensor x_init: the initial point
49
50 """
51 # transform x to an unconstrained domain
52 unconstrained_x_init = transform_to(self.constraints).inv(x_init)
53 unconstrained_x = unconstrained_x_init.detach().clone().requires_grad_(True)
54 # TODO: Use LBFGS with line search by pytorch #8824 merged
55 minimizer = optim.LBFGS([unconstrained_x], max_eval=20)
56
57 def closure():
58 minimizer.zero_grad()
59 if (torch.log(torch.abs(unconstrained_x)) > 25.).any():
60 return torch.tensor(float('inf'))
61 x = transform_to(self.constraints)(unconstrained_x)
62 y = differentiable(x)
63 autograd.backward(unconstrained_x,
64 autograd.grad(y, unconstrained_x, retain_graph=True))
65 return y
66
67 minimizer.step(closure)
68 # after finding a candidate in the unconstrained domain,
69 # convert it back to original domain.
70 x = transform_to(self.constraints)(unconstrained_x)
71 opt_y = differentiable(x)
72 return x.detach(), opt_y.detach()
73
74 def opt_differentiable(self, differentiable, num_candidates=5):
75 """Optimizes a differentiable function by choosing `num_candidates`
76 initial points at random and calling :func:`find_a_candidate` on
77 each. The best candidate is returned with its function value.
78
79 :param function differentiable: a function amenable to torch autograd
80 :param int num_candidates: the number of random starting points to
81 use
82 :return: the minimiser and its function value
83 :rtype: tuple
84 """
85
86 candidates = []
87 values = []
88 for j in range(num_candidates):
89 x_init = self.gpmodel.X.new_empty(1).uniform_(
90 self.constraints.lower_bound, self.constraints.upper_bound)
91 x, y = self.find_a_candidate(differentiable, x_init)
92 if torch.isnan(y):
93 continue
94 candidates.append(x)
95 values.append(y)
96
97 mvalue, argmin = torch.min(torch.cat(values), dim=0)
98 return candidates[argmin.item()], mvalue
99
100 def acquire_thompson(self, num_acquisitions=1, **opt_params):
101 """Selects `num_acquisitions` query points at which to query the
102 original function by Thompson sampling.
103
104 :param int num_acquisitions: the number of points to generate
105 :param dict opt_params: additional parameters for optimization
106 routines
107 :return: a tensor of points to evaluate `loss` at
108 :rtype: torch.Tensor
109 """
110
111 # Initialize the return tensor
112 X = self.gpmodel.X.new_empty(num_acquisitions, *self.gpmodel.X.shape[1:])
113
114 for i in range(num_acquisitions):
115 sampler = self.gpmodel.iter_sample(noiseless=False)
116 x, _ = self.opt_differentiable(sampler, **opt_params)
117 X[i, ...] = x
118
119 return X
120
121 def get_step(self, loss, params, verbose=False):
122 X = self.acquisition_func(num_acquisitions=self.num_acquisitions)
123 y = loss(X)
124 if verbose:
125 print("Acquire at: X")
126 print(X)
127 print("y")
128 print(y)
129 self.update_posterior(X, y)
130 return self.opt_differentiable(lambda x: self.gpmodel(x)[0])
131
[end of examples/contrib/oed/gp_bayes_opt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/contrib/oed/gp_bayes_opt.py b/examples/contrib/oed/gp_bayes_opt.py
--- a/examples/contrib/oed/gp_bayes_opt.py
+++ b/examples/contrib/oed/gp_bayes_opt.py
@@ -3,9 +3,9 @@
import torch.optim as optim
from torch.distributions import transform_to
-import pyro
import pyro.contrib.gp as gp
from pyro.infer import TraceEnum_ELBO
+import pyro.optim
class GPBayesOptimizer(pyro.optim.multi.MultiOptimizer):
|
{"golden_diff": "diff --git a/examples/contrib/oed/gp_bayes_opt.py b/examples/contrib/oed/gp_bayes_opt.py\n--- a/examples/contrib/oed/gp_bayes_opt.py\n+++ b/examples/contrib/oed/gp_bayes_opt.py\n@@ -3,9 +3,9 @@\n import torch.optim as optim\n from torch.distributions import transform_to\n \n-import pyro\n import pyro.contrib.gp as gp\n from pyro.infer import TraceEnum_ELBO\n+import pyro.optim\n \n \n class GPBayesOptimizer(pyro.optim.multi.MultiOptimizer):\n", "issue": "make test: no attribute 'optim' error in 'examples/contrib/oed/ab_test.py' \n### Issue Description\r\nOn the latest dev branch, `make test` gives the following error:\r\n\r\n_\r\nexamples/contrib/oed/ab_test.py:12: in <module>\r\n from gp_bayes_opt import GPBayesOptimizer\r\nexamples/contrib/oed/gp_bayes_opt.py:11: in <module>\r\n class GPBayesOptimizer(pyro.optim.multi.MultiOptimizer):\r\nE AttributeError: module 'pyro' has no attribute 'optim'\r\n\r\n### Environment\r\nFor any bugs, please provide the following:\r\n - OS and python version: CentOS Linux 7 (Core); Python 3.7.1\r\n - PyTorch version, or if relevant, output of `pip freeze`: PyTorch 1.0.0\r\n - Pyro version: output of `python -c 'import pyro; print pyro.__version__'`: pyro 0.3.0+9adbdb7\r\n\r\n### Code Snippet\r\n\r\n```\r\nmake install\r\nmake format\r\nmake test\r\n```\r\n\n", "before_files": [{"content": "import torch\nimport torch.autograd as autograd\nimport torch.optim as optim\nfrom torch.distributions import transform_to\n\nimport pyro\nimport pyro.contrib.gp as gp\nfrom pyro.infer import TraceEnum_ELBO\n\n\nclass GPBayesOptimizer(pyro.optim.multi.MultiOptimizer):\n \"\"\"Performs Bayesian Optimization using a Gaussian Process as an\n emulator for the unknown function.\n \"\"\"\n\n def __init__(self, constraints, gpmodel, num_acquisitions, acquisition_func=None):\n \"\"\"\n :param torch.constraint constraints: constraints defining the domain of `f`\n :param gp.models.GPRegression gpmodel: a (possibly initialized) GP\n regression model. 
The kernel, etc is specified via `gpmodel`.\n :param int num_acquisitions: number of points to acquire at each step\n :param function acquisition_func: a function to generate acquisitions.\n It should return a torch.Tensor of new points to query.\n \"\"\"\n if acquisition_func is None:\n acquisition_func = self.acquire_thompson\n\n self.constraints = constraints\n self.gpmodel = gpmodel\n self.num_acquisitions = num_acquisitions\n self.acquisition_func = acquisition_func\n\n def update_posterior(self, X, y):\n X = torch.cat([self.gpmodel.X, X])\n y = torch.cat([self.gpmodel.y, y])\n self.gpmodel.set_data(X, y)\n optimizer = torch.optim.Adam(self.gpmodel.parameters(), lr=0.001)\n gp.util.train(self.gpmodel, optimizer,\n loss_fn=TraceEnum_ELBO(strict_enumeration_warning=False).differentiable_loss,\n retain_graph=True)\n\n def find_a_candidate(self, differentiable, x_init):\n \"\"\"Given a starting point, `x_init`, takes one LBFGS step\n to optimize the differentiable function.\n\n :param function differentiable: a function amenable to torch\n autograd\n :param torch.Tensor x_init: the initial point\n\n \"\"\"\n # transform x to an unconstrained domain\n unconstrained_x_init = transform_to(self.constraints).inv(x_init)\n unconstrained_x = unconstrained_x_init.detach().clone().requires_grad_(True)\n # TODO: Use LBFGS with line search by pytorch #8824 merged\n minimizer = optim.LBFGS([unconstrained_x], max_eval=20)\n\n def closure():\n minimizer.zero_grad()\n if (torch.log(torch.abs(unconstrained_x)) > 25.).any():\n return torch.tensor(float('inf'))\n x = transform_to(self.constraints)(unconstrained_x)\n y = differentiable(x)\n autograd.backward(unconstrained_x,\n autograd.grad(y, unconstrained_x, retain_graph=True))\n return y\n\n minimizer.step(closure)\n # after finding a candidate in the unconstrained domain,\n # convert it back to original domain.\n x = transform_to(self.constraints)(unconstrained_x)\n opt_y = differentiable(x)\n return x.detach(), opt_y.detach()\n\n def opt_differentiable(self, differentiable, num_candidates=5):\n \"\"\"Optimizes a differentiable function by choosing `num_candidates`\n initial points at random and calling :func:`find_a_candidate` on\n each. 
The best candidate is returned with its function value.\n\n :param function differentiable: a function amenable to torch autograd\n :param int num_candidates: the number of random starting points to\n use\n :return: the minimiser and its function value\n :rtype: tuple\n \"\"\"\n\n candidates = []\n values = []\n for j in range(num_candidates):\n x_init = self.gpmodel.X.new_empty(1).uniform_(\n self.constraints.lower_bound, self.constraints.upper_bound)\n x, y = self.find_a_candidate(differentiable, x_init)\n if torch.isnan(y):\n continue\n candidates.append(x)\n values.append(y)\n\n mvalue, argmin = torch.min(torch.cat(values), dim=0)\n return candidates[argmin.item()], mvalue\n\n def acquire_thompson(self, num_acquisitions=1, **opt_params):\n \"\"\"Selects `num_acquisitions` query points at which to query the\n original function by Thompson sampling.\n\n :param int num_acquisitions: the number of points to generate\n :param dict opt_params: additional parameters for optimization\n routines\n :return: a tensor of points to evaluate `loss` at\n :rtype: torch.Tensor\n \"\"\"\n\n # Initialize the return tensor\n X = self.gpmodel.X.new_empty(num_acquisitions, *self.gpmodel.X.shape[1:])\n\n for i in range(num_acquisitions):\n sampler = self.gpmodel.iter_sample(noiseless=False)\n x, _ = self.opt_differentiable(sampler, **opt_params)\n X[i, ...] = x\n\n return X\n\n def get_step(self, loss, params, verbose=False):\n X = self.acquisition_func(num_acquisitions=self.num_acquisitions)\n y = loss(X)\n if verbose:\n print(\"Acquire at: X\")\n print(X)\n print(\"y\")\n print(y)\n self.update_posterior(X, y)\n return self.opt_differentiable(lambda x: self.gpmodel(x)[0])\n", "path": "examples/contrib/oed/gp_bayes_opt.py"}]}
| 2,225 | 127 |
gh_patches_debug_23563
|
rasdani/github-patches
|
git_diff
|
getsentry__snuba-558
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support Redis Authentication
I'm trying to install Snuba on my Kubernetes instance alongside Sentry.
Sentry's Helm chart installs Redis with a password (It generates a secret), and there was no option for me to specify that password for Snuba.
I opened up the source code and it looks like a simple solution:
Another setting (REDIS_PASSWORD) that would be passed to startup_nodes and to StrictRedis' constructor on the snuba/redis.py module.
</issue>
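For the non-cluster path this really is a one-keyword change, since redis-py's client already accepts a `password` argument. A rough sketch of the wiring the reporter describes, with the `REDIS_PASSWORD` environment variable read in the same style as the existing `REDIS_*` settings (the exact variable name is the reporter's suggestion, not an established setting):

```
import os
from redis import StrictRedis

redis_client = StrictRedis(
    host=os.environ.get("REDIS_HOST", "localhost"),
    port=int(os.environ.get("REDIS_PORT", 6379)),
    db=int(os.environ.get("REDIS_DB", 1)),
    password=os.environ.get("REDIS_PASSWORD"),  # None preserves today's unauthenticated behaviour
    socket_keepalive=True,
)
```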
<code>
[start of snuba/settings_docker.py]
1 import os
2 from snuba.settings_base import *
3
4 env = os.environ.get
5
6 DEBUG = env('DEBUG', '0').lower() in ('1', 'true')
7
8 DEFAULT_BROKERS = env('DEFAULT_BROKERS', 'localhost:9092').split(',')
9
10 REDIS_HOST = env('REDIS_HOST', 'localhost')
11 REDIS_PORT = int(env('REDIS_PORT', 6379))
12 REDIS_DB = int(env('REDIS_DB', 1))
13 USE_REDIS_CLUSTER = False
14
[end of snuba/settings_docker.py]
[start of snuba/settings_base.py]
1 import os
2
3 LOG_LEVEL = os.environ.get('LOG_LEVEL', 'INFO')
4
5 TESTING = False
6 DEBUG = True
7
8 PORT = 1218
9
10 DEFAULT_DATASET_NAME = 'events'
11 DISABLED_DATASETS = {}
12 DATASET_MODE = 'local'
13
14 # Clickhouse Options
15 # TODO: Warn about using `CLICKHOUSE_SERVER`, users should use the new settings instead.
16 [default_clickhouse_host, default_clickhouse_port] = os.environ.get('CLICKHOUSE_SERVER', 'localhost:9000').split(':', 1)
17 CLICKHOUSE_HOST = os.environ.get('CLICKHOUSE_HOST', default_clickhouse_host)
18 CLICKHOUSE_PORT = int(os.environ.get('CLICKHOUSE_PORT', default_clickhouse_port))
19 CLICKHOUSE_HTTP_PORT = int(os.environ.get('CLICKHOUSE_HTTP_PORT', 8123))
20 CLICKHOUSE_MAX_POOL_SIZE = 25
21
22 # Dogstatsd Options
23 DOGSTATSD_HOST = 'localhost'
24 DOGSTATSD_PORT = 8125
25
26 # Redis Options
27 USE_REDIS_CLUSTER = False
28 REDIS_CLUSTER_STARTUP_NODES = None
29 REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
30 REDIS_PORT = 6379
31 REDIS_DB = 1
32
33 # Query Recording Options
34 RECORD_QUERIES = False
35 QUERIES_TOPIC = 'snuba-queries'
36
37 # Runtime Config Options
38 CONFIG_MEMOIZE_TIMEOUT = 10
39
40 # Sentry Options
41 SENTRY_DSN = None
42
43 # Snuba Options
44
45 SNAPSHOT_LOAD_PRODUCT = 'snuba'
46
47 SNAPSHOT_CONTROL_TOPIC_INIT_TIMEOUT = 30
48 BULK_CLICKHOUSE_BUFFER = 10000
49
50 # Processor/Writer Options
51 DEFAULT_BROKERS = ['localhost:9092']
52 DEFAULT_DATASET_BROKERS = {}
53
54 DEFAULT_MAX_BATCH_SIZE = 50000
55 DEFAULT_MAX_BATCH_TIME_MS = 2 * 1000
56 DEFAULT_QUEUED_MAX_MESSAGE_KBYTES = 10000
57 DEFAULT_QUEUED_MIN_MESSAGES = 10000
58 DISCARD_OLD_EVENTS = True
59
60 DEFAULT_RETENTION_DAYS = 90
61 RETENTION_OVERRIDES = {}
62
63 MAX_PREWHERE_CONDITIONS = 1
64
65 STATS_IN_RESPONSE = False
66
67 PAYLOAD_DATETIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
68
69 REPLACER_MAX_BLOCK_SIZE = 512
70 REPLACER_MAX_MEMORY_USAGE = 10 * (1024**3) # 10GB
71 # TLL of Redis key that denotes whether a project had replacements
72 # run recently. Useful for decidig whether or not to add FINAL clause
73 # to queries.
74 REPLACER_KEY_TTL = 12 * 60 * 60
75 REPLACER_MAX_GROUP_IDS_TO_EXCLUDE = 256
76
77 TURBO_SAMPLE_RATE = 0.1
78
[end of snuba/settings_base.py]
[start of snuba/redis.py]
1 from __future__ import absolute_import
2
3 from redis.client import StrictRedis
4 from redis.exceptions import BusyLoadingError, ConnectionError
5 from rediscluster import StrictRedisCluster
6
7 from snuba import settings
8
9
10 class RetryingStrictRedisCluster(StrictRedisCluster):
11 """
12 Execute a command with cluster reinitialization retry logic.
13 Should a cluster respond with a ConnectionError or BusyLoadingError the
14 cluster nodes list will be reinitialized and the command will be executed
15 again with the most up to date view of the world.
16 """
17
18 def execute_command(self, *args, **kwargs):
19 try:
20 return super(self.__class__, self).execute_command(*args, **kwargs)
21 except (
22 ConnectionError,
23 BusyLoadingError,
24 KeyError # see: https://github.com/Grokzen/redis-py-cluster/issues/287
25 ):
26 self.connection_pool.nodes.reset()
27 return super(self.__class__, self).execute_command(*args, **kwargs)
28
29
30 if settings.USE_REDIS_CLUSTER:
31 startup_nodes = settings.REDIS_CLUSTER_STARTUP_NODES
32 if startup_nodes is None:
33 startup_nodes = [{
34 'host': settings.REDIS_HOST,
35 'port': settings.REDIS_PORT,
36 }]
37 redis_client = RetryingStrictRedisCluster(
38 startup_nodes=startup_nodes,
39 socket_keepalive=True,
40 )
41 else:
42 redis_client = StrictRedis(
43 host=settings.REDIS_HOST,
44 port=settings.REDIS_PORT,
45 db=settings.REDIS_DB,
46 socket_keepalive=True,
47 )
48
[end of snuba/redis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snuba/redis.py b/snuba/redis.py
--- a/snuba/redis.py
+++ b/snuba/redis.py
@@ -37,11 +37,13 @@
redis_client = RetryingStrictRedisCluster(
startup_nodes=startup_nodes,
socket_keepalive=True,
+ password=settings.REDIS_PASSWORD,
)
else:
redis_client = StrictRedis(
host=settings.REDIS_HOST,
port=settings.REDIS_PORT,
+ password=settings.REDIS_PASSWORD,
db=settings.REDIS_DB,
socket_keepalive=True,
)
diff --git a/snuba/settings_base.py b/snuba/settings_base.py
--- a/snuba/settings_base.py
+++ b/snuba/settings_base.py
@@ -28,6 +28,7 @@
REDIS_CLUSTER_STARTUP_NODES = None
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
REDIS_PORT = 6379
+REDIS_PASSWORD = None
REDIS_DB = 1
# Query Recording Options
diff --git a/snuba/settings_docker.py b/snuba/settings_docker.py
--- a/snuba/settings_docker.py
+++ b/snuba/settings_docker.py
@@ -9,5 +9,6 @@
REDIS_HOST = env('REDIS_HOST', 'localhost')
REDIS_PORT = int(env('REDIS_PORT', 6379))
+REDIS_PASSWORD = env('REDIS_PASSWORD')
REDIS_DB = int(env('REDIS_DB', 1))
USE_REDIS_CLUSTER = False
|
{"golden_diff": "diff --git a/snuba/redis.py b/snuba/redis.py\n--- a/snuba/redis.py\n+++ b/snuba/redis.py\n@@ -37,11 +37,13 @@\n redis_client = RetryingStrictRedisCluster(\n startup_nodes=startup_nodes,\n socket_keepalive=True,\n+ password=settings.REDIS_PASSWORD,\n )\n else:\n redis_client = StrictRedis(\n host=settings.REDIS_HOST,\n port=settings.REDIS_PORT,\n+ password=settings.REDIS_PASSWORD,\n db=settings.REDIS_DB,\n socket_keepalive=True,\n )\ndiff --git a/snuba/settings_base.py b/snuba/settings_base.py\n--- a/snuba/settings_base.py\n+++ b/snuba/settings_base.py\n@@ -28,6 +28,7 @@\n REDIS_CLUSTER_STARTUP_NODES = None\n REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')\n REDIS_PORT = 6379\n+REDIS_PASSWORD = None\n REDIS_DB = 1\n \n # Query Recording Options\ndiff --git a/snuba/settings_docker.py b/snuba/settings_docker.py\n--- a/snuba/settings_docker.py\n+++ b/snuba/settings_docker.py\n@@ -9,5 +9,6 @@\n \n REDIS_HOST = env('REDIS_HOST', 'localhost')\n REDIS_PORT = int(env('REDIS_PORT', 6379))\n+REDIS_PASSWORD = env('REDIS_PASSWORD')\n REDIS_DB = int(env('REDIS_DB', 1))\n USE_REDIS_CLUSTER = False\n", "issue": "Support Redis Authentication\nI'm trying to install Snuba on my Kubernetes instance alongside Sentry.\r\nSentry's Helm chart installs Redis with a password (It generates a secret), and there was no option for me to specify that password for Snuba.\r\n\r\nI opened up the source code and it looks like a simple solution: \r\nAnother setting (REDIS_PASSWORD) that would be passed to startup_nodes and to StrictRedis' constructor on the snuba/redis.py module.\n", "before_files": [{"content": "import os\nfrom snuba.settings_base import *\n\nenv = os.environ.get\n\nDEBUG = env('DEBUG', '0').lower() in ('1', 'true')\n\nDEFAULT_BROKERS = env('DEFAULT_BROKERS', 'localhost:9092').split(',')\n\nREDIS_HOST = env('REDIS_HOST', 'localhost')\nREDIS_PORT = int(env('REDIS_PORT', 6379))\nREDIS_DB = int(env('REDIS_DB', 1))\nUSE_REDIS_CLUSTER = False\n", "path": "snuba/settings_docker.py"}, {"content": "import os\n\nLOG_LEVEL = os.environ.get('LOG_LEVEL', 'INFO')\n\nTESTING = False\nDEBUG = True\n\nPORT = 1218\n\nDEFAULT_DATASET_NAME = 'events'\nDISABLED_DATASETS = {}\nDATASET_MODE = 'local'\n\n# Clickhouse Options\n# TODO: Warn about using `CLICKHOUSE_SERVER`, users should use the new settings instead.\n[default_clickhouse_host, default_clickhouse_port] = os.environ.get('CLICKHOUSE_SERVER', 'localhost:9000').split(':', 1)\nCLICKHOUSE_HOST = os.environ.get('CLICKHOUSE_HOST', default_clickhouse_host)\nCLICKHOUSE_PORT = int(os.environ.get('CLICKHOUSE_PORT', default_clickhouse_port))\nCLICKHOUSE_HTTP_PORT = int(os.environ.get('CLICKHOUSE_HTTP_PORT', 8123))\nCLICKHOUSE_MAX_POOL_SIZE = 25\n\n# Dogstatsd Options\nDOGSTATSD_HOST = 'localhost'\nDOGSTATSD_PORT = 8125\n\n# Redis Options\nUSE_REDIS_CLUSTER = False\nREDIS_CLUSTER_STARTUP_NODES = None\nREDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')\nREDIS_PORT = 6379\nREDIS_DB = 1\n\n# Query Recording Options\nRECORD_QUERIES = False\nQUERIES_TOPIC = 'snuba-queries'\n\n# Runtime Config Options\nCONFIG_MEMOIZE_TIMEOUT = 10\n\n# Sentry Options\nSENTRY_DSN = None\n\n# Snuba Options\n\nSNAPSHOT_LOAD_PRODUCT = 'snuba'\n\nSNAPSHOT_CONTROL_TOPIC_INIT_TIMEOUT = 30\nBULK_CLICKHOUSE_BUFFER = 10000\n\n# Processor/Writer Options\nDEFAULT_BROKERS = ['localhost:9092']\nDEFAULT_DATASET_BROKERS = {}\n\nDEFAULT_MAX_BATCH_SIZE = 50000\nDEFAULT_MAX_BATCH_TIME_MS = 2 * 1000\nDEFAULT_QUEUED_MAX_MESSAGE_KBYTES = 10000\nDEFAULT_QUEUED_MIN_MESSAGES = 10000\nDISCARD_OLD_EVENTS 
= True\n\nDEFAULT_RETENTION_DAYS = 90\nRETENTION_OVERRIDES = {}\n\nMAX_PREWHERE_CONDITIONS = 1\n\nSTATS_IN_RESPONSE = False\n\nPAYLOAD_DATETIME_FORMAT = \"%Y-%m-%dT%H:%M:%S.%fZ\"\n\nREPLACER_MAX_BLOCK_SIZE = 512\nREPLACER_MAX_MEMORY_USAGE = 10 * (1024**3) # 10GB\n# TLL of Redis key that denotes whether a project had replacements\n# run recently. Useful for decidig whether or not to add FINAL clause\n# to queries.\nREPLACER_KEY_TTL = 12 * 60 * 60\nREPLACER_MAX_GROUP_IDS_TO_EXCLUDE = 256\n\nTURBO_SAMPLE_RATE = 0.1\n", "path": "snuba/settings_base.py"}, {"content": "from __future__ import absolute_import\n\nfrom redis.client import StrictRedis\nfrom redis.exceptions import BusyLoadingError, ConnectionError\nfrom rediscluster import StrictRedisCluster\n\nfrom snuba import settings\n\n\nclass RetryingStrictRedisCluster(StrictRedisCluster):\n \"\"\"\n Execute a command with cluster reinitialization retry logic.\n Should a cluster respond with a ConnectionError or BusyLoadingError the\n cluster nodes list will be reinitialized and the command will be executed\n again with the most up to date view of the world.\n \"\"\"\n\n def execute_command(self, *args, **kwargs):\n try:\n return super(self.__class__, self).execute_command(*args, **kwargs)\n except (\n ConnectionError,\n BusyLoadingError,\n KeyError # see: https://github.com/Grokzen/redis-py-cluster/issues/287\n ):\n self.connection_pool.nodes.reset()\n return super(self.__class__, self).execute_command(*args, **kwargs)\n\n\nif settings.USE_REDIS_CLUSTER:\n startup_nodes = settings.REDIS_CLUSTER_STARTUP_NODES\n if startup_nodes is None:\n startup_nodes = [{\n 'host': settings.REDIS_HOST,\n 'port': settings.REDIS_PORT,\n }]\n redis_client = RetryingStrictRedisCluster(\n startup_nodes=startup_nodes,\n socket_keepalive=True,\n )\nelse:\n redis_client = StrictRedis(\n host=settings.REDIS_HOST,\n port=settings.REDIS_PORT,\n db=settings.REDIS_DB,\n socket_keepalive=True,\n )\n", "path": "snuba/redis.py"}]}
| 1,977 | 324 |
gh_patches_debug_3670
|
rasdani/github-patches
|
git_diff
|
wright-group__WrightTools-753
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kit leastsq should not except BaseException
https://github.com/wright-group/WrightTools/blob/f22920579f45632b4123661d9832ff0cc1b614c4/WrightTools/kit/_leastsq.py#L74
The exception caught should be limited to those known to be raised inside.
</issue>
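The point of the report is to catch only what the `pcov[i][i]` lookup can actually raise instead of swallowing everything up to `KeyboardInterrupt`. A toy sketch of that narrowing follows; the exception tuple chosen here is illustrative, not necessarily the one the maintainers settled on:

```
import numpy as np

for pcov in (np.array([[4.0, 0.0], [0.0, 9.0]]), np.inf):  # covariance matrix vs. the np.inf fallback
    error = []
    for i in range(2):
        try:
            error.append(np.absolute(pcov[i][i]) ** 0.5)
        except (IndexError, TypeError):  # out-of-range index / indexing a bare float
            error.append(0.00)
    print(error)
```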
<code>
[start of WrightTools/kit/_leastsq.py]
1 """Least-square fitting tools."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 from ._utilities import Timer
8
9 import numpy as np
10
11 from scipy import optimize as scipy_optimize
12
13
14 # --- define --------------------------------------------------------------------------------------
15
16
17 __all__ = ["leastsqfitter"]
18
19
20 # --- functions -----------------------------------------------------------------------------------
21
22
23 def leastsqfitter(p0, datax, datay, function, verbose=False, cov_verbose=False):
24 """Conveniently call scipy.optmize.leastsq().
25
26 Returns fit parameters and their errors.
27
28 Parameters
29 ----------
30 p0 : list
31 list of guess parameters to pass to function
32 datax : array
33 array of independent values
34 datay : array
35 array of dependent values
36 function : function
37 function object to fit data to. Must be of the callable form function(p, x)
38 verbose : bool
39 toggles printing of fit time, fit params, and fit param errors
40 cov_verbose : bool
41 toggles printing of covarience matrix
42
43 Returns
44 -------
45 pfit_leastsq : list
46 list of fit parameters. s.t. the error between datay and function(p, datax) is minimized
47 perr_leastsq : list
48 list of fit parameter errors (1 std)
49 """
50 timer = Timer(verbose=False)
51 with timer:
52 # define error function
53 def errfunc(p, x, y):
54 return y - function(p, x)
55
56 # run optimization
57 pfit_leastsq, pcov, infodict, errmsg, success = scipy_optimize.leastsq(
58 errfunc, p0, args=(datax, datay), full_output=1, epsfcn=0.0001
59 )
60 # calculate covarience matrix
61 # original idea https://stackoverflow.com/a/21844726
62 if (len(datay) > len(p0)) and pcov is not None:
63 s_sq = (errfunc(pfit_leastsq, datax, datay) ** 2).sum() / (len(datay) - len(p0))
64 pcov = pcov * s_sq
65 if cov_verbose:
66 print(pcov)
67 else:
68 pcov = np.inf
69 # calculate and write errors
70 error = []
71 for i in range(len(pfit_leastsq)):
72 try:
73 error.append(np.absolute(pcov[i][i]) ** 0.5)
74 except BaseException:
75 error.append(0.00)
76 perr_leastsq = np.array(error)
77 # exit
78 if verbose:
79 print("fit params: ", pfit_leastsq)
80 print("fit params error: ", perr_leastsq)
81 print("fitting done in %f seconds" % timer.interval)
82 return pfit_leastsq, perr_leastsq
83
[end of WrightTools/kit/_leastsq.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/WrightTools/kit/_leastsq.py b/WrightTools/kit/_leastsq.py
--- a/WrightTools/kit/_leastsq.py
+++ b/WrightTools/kit/_leastsq.py
@@ -71,7 +71,7 @@
for i in range(len(pfit_leastsq)):
try:
error.append(np.absolute(pcov[i][i]) ** 0.5)
- except BaseException:
+ except IndexError:
error.append(0.00)
perr_leastsq = np.array(error)
# exit
|
{"golden_diff": "diff --git a/WrightTools/kit/_leastsq.py b/WrightTools/kit/_leastsq.py\n--- a/WrightTools/kit/_leastsq.py\n+++ b/WrightTools/kit/_leastsq.py\n@@ -71,7 +71,7 @@\n for i in range(len(pfit_leastsq)):\n try:\n error.append(np.absolute(pcov[i][i]) ** 0.5)\n- except BaseException:\n+ except IndexError:\n error.append(0.00)\n perr_leastsq = np.array(error)\n # exit\n", "issue": "Kit leastsq should not except BaseException\nhttps://github.com/wright-group/WrightTools/blob/f22920579f45632b4123661d9832ff0cc1b614c4/WrightTools/kit/_leastsq.py#L74\r\n\r\nThe exception caught should be limited to those known to be raised inside.\n", "before_files": [{"content": "\"\"\"Least-square fitting tools.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nfrom ._utilities import Timer\n\nimport numpy as np\n\nfrom scipy import optimize as scipy_optimize\n\n\n# --- define --------------------------------------------------------------------------------------\n\n\n__all__ = [\"leastsqfitter\"]\n\n\n# --- functions -----------------------------------------------------------------------------------\n\n\ndef leastsqfitter(p0, datax, datay, function, verbose=False, cov_verbose=False):\n \"\"\"Conveniently call scipy.optmize.leastsq().\n\n Returns fit parameters and their errors.\n\n Parameters\n ----------\n p0 : list\n list of guess parameters to pass to function\n datax : array\n array of independent values\n datay : array\n array of dependent values\n function : function\n function object to fit data to. Must be of the callable form function(p, x)\n verbose : bool\n toggles printing of fit time, fit params, and fit param errors\n cov_verbose : bool\n toggles printing of covarience matrix\n\n Returns\n -------\n pfit_leastsq : list\n list of fit parameters. s.t. the error between datay and function(p, datax) is minimized\n perr_leastsq : list\n list of fit parameter errors (1 std)\n \"\"\"\n timer = Timer(verbose=False)\n with timer:\n # define error function\n def errfunc(p, x, y):\n return y - function(p, x)\n\n # run optimization\n pfit_leastsq, pcov, infodict, errmsg, success = scipy_optimize.leastsq(\n errfunc, p0, args=(datax, datay), full_output=1, epsfcn=0.0001\n )\n # calculate covarience matrix\n # original idea https://stackoverflow.com/a/21844726\n if (len(datay) > len(p0)) and pcov is not None:\n s_sq = (errfunc(pfit_leastsq, datax, datay) ** 2).sum() / (len(datay) - len(p0))\n pcov = pcov * s_sq\n if cov_verbose:\n print(pcov)\n else:\n pcov = np.inf\n # calculate and write errors\n error = []\n for i in range(len(pfit_leastsq)):\n try:\n error.append(np.absolute(pcov[i][i]) ** 0.5)\n except BaseException:\n error.append(0.00)\n perr_leastsq = np.array(error)\n # exit\n if verbose:\n print(\"fit params: \", pfit_leastsq)\n print(\"fit params error: \", perr_leastsq)\n print(\"fitting done in %f seconds\" % timer.interval)\n return pfit_leastsq, perr_leastsq\n", "path": "WrightTools/kit/_leastsq.py"}]}
| 1,410 | 131 |
gh_patches_debug_5878
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-2593
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Preview without embed broken in jupyter notebooks.
## Description
The embed feature seems to have broken preview in jupyter notebooks.
default:
<img width="1115" alt="Screenshot 2022-03-05 at 3 24 09 PM" src="https://user-images.githubusercontent.com/90276965/156878243-224900ac-8da2-46e5-8ee2-cc9f1852bec8.png">
with media embed:
<img width="1115" alt="Screenshot 2022-03-05 at 3 24 30 PM" src="https://user-images.githubusercontent.com/90276965/156878267-b5d77f95-7953-45e9-b155-baac8f19db35.png">
## Expected Behaviour
Should preview images even without `media_embed`
_Originally posted by @Kiran-Raj-Dev in https://github.com/ManimCommunity/manim/issues/2442#issuecomment-1059732017_
</issue>
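The accepted fix (see the diff further down) simply stops passing the `embed` flag when the output is an image and lets `IPython.display.Image` use its defaults. A sketch of the two call styles involved, with placeholder paths standing in for a file Manim has already rendered:

```
from IPython.display import Image, display

display(Image(filename="media/images/MyScene.png"))              # let IPython pick the default handling
display(Image(filename="media/images/MyScene.png", embed=True))  # force the image bytes into the notebook itself
```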
<code>
[start of manim/utils/ipython_magic.py]
1 """Utilities for using Manim with IPython (in particular: Jupyter notebooks)"""
2
3 from __future__ import annotations
4
5 import mimetypes
6 import os
7 import shutil
8 from datetime import datetime
9 from pathlib import Path
10 from typing import Any
11
12 from manim import Group, config, logger, tempconfig
13 from manim.__main__ import main
14 from manim.renderer.shader import shader_program_cache
15
16 try:
17 from IPython import get_ipython
18 from IPython.core.interactiveshell import InteractiveShell
19 from IPython.core.magic import (
20 Magics,
21 line_cell_magic,
22 magics_class,
23 needs_local_scope,
24 )
25 from IPython.display import Image, Video, display
26 except ImportError:
27 pass
28 else:
29
30 @magics_class
31 class ManimMagic(Magics):
32 def __init__(self, shell: InteractiveShell) -> None:
33 super().__init__(shell)
34 self.rendered_files = {}
35
36 @needs_local_scope
37 @line_cell_magic
38 def manim(
39 self,
40 line: str,
41 cell: str = None,
42 local_ns: dict[str, Any] = None,
43 ) -> None:
44 r"""Render Manim scenes contained in IPython cells.
45 Works as a line or cell magic.
46
47 .. hint::
48
49 This line and cell magic works best when used in a JupyterLab
50 environment: while all of the functionality is available for
51 classic Jupyter notebooks as well, it is possible that videos
52 sometimes don't update on repeated execution of the same cell
53 if the scene name stays the same.
54
55 This problem does not occur when using JupyterLab.
56
57 Please refer to `<https://jupyter.org/>`_ for more information about JupyterLab
58 and Jupyter notebooks.
59
60 Usage in line mode::
61
62 %manim [CLI options] MyAwesomeScene
63
64 Usage in cell mode::
65
66 %%manim [CLI options] MyAwesomeScene
67
68 class MyAweseomeScene(Scene):
69 def construct(self):
70 ...
71
72 Run ``%manim --help`` and ``%manim render --help`` for possible command line interface options.
73
74 .. note::
75
76 The maximal width of the rendered videos that are displayed in the notebook can be
77 configured via the ``media_width`` configuration option. The default is set to ``25vw``,
78 which is 25% of your current viewport width. To allow the output to become as large
79 as possible, set ``config.media_width = "100%"``.
80
81 The ``media_embed`` option will embed the image/video output in the notebook. This is
82 generally undesirable as it makes the notebooks very large, but is required on some
83 platforms (notably Google's CoLab, where it is automatically enabled unless suppressed
84 by ``config.embed = False``) and needed in cases when the notebook (or converted HTML
85 file) will be moved relative to the video locations. Use-cases include building
86 documentation with Sphinx and JupyterBook. See also the :mod:`manim directive for Sphinx
87 <manim.utils.docbuild.manim_directive>`.
88
89 Examples
90 --------
91
92 First make sure to put ``import manim``, or even ``from manim import *``
93 in a cell and evaluate it. Then, a typical Jupyter notebook cell for Manim
94 could look as follows::
95
96 %%manim -v WARNING --disable_caching -qm BannerExample
97
98 config.media_width = "75%"
99 config.media_embed = True
100
101 class BannerExample(Scene):
102 def construct(self):
103 self.camera.background_color = "#ece6e2"
104 banner_large = ManimBanner(dark_theme=False).scale(0.7)
105 self.play(banner_large.create())
106 self.play(banner_large.expand())
107
108 Evaluating this cell will render and display the ``BannerExample`` scene defined in the body of the cell.
109
110 .. note::
111
112 In case you want to hide the red box containing the output progress bar, the ``progress_bar`` config
113 option should be set to ``None``. This can also be done by passing ``--progress_bar None`` as a
114 CLI flag.
115
116 """
117 if cell:
118 exec(cell, local_ns)
119
120 args = line.split()
121 if not len(args) or "-h" in args or "--help" in args or "--version" in args:
122 main(args, standalone_mode=False, prog_name="manim")
123 return
124
125 modified_args = self.add_additional_args(args)
126 args = main(modified_args, standalone_mode=False, prog_name="manim")
127 with tempconfig(local_ns.get("config", {})):
128 config.digest_args(args)
129
130 renderer = None
131 if config.renderer == "opengl":
132 # Check if the imported mobjects extend the OpenGLMobject class
133 # meaning ConvertToOpenGL did its job
134 if "OpenGLMobject" in map(lambda cls: cls.__name__, Group.mro()):
135 from manim.renderer.opengl_renderer import OpenGLRenderer
136
137 renderer = OpenGLRenderer()
138 else:
139 logger.warning(
140 "Renderer must be set to OpenGL in the configuration file "
141 "before importing Manim! Using cairo renderer instead.",
142 )
143 config.renderer = "cairo"
144
145 try:
146 SceneClass = local_ns[config["scene_names"][0]]
147 scene = SceneClass(renderer=renderer)
148 scene.render()
149 finally:
150 # Shader cache becomes invalid as the context is destroyed
151 shader_program_cache.clear()
152
153 # Close OpenGL window here instead of waiting for the main thread to
154 # finish causing the window to stay open and freeze
155 if renderer is not None and renderer.window is not None:
156 renderer.window.close()
157
158 if config["output_file"] is None:
159 logger.info("No output file produced")
160 return
161
162 local_path = Path(config["output_file"]).relative_to(Path.cwd())
163 tmpfile = (
164 Path(config["media_dir"])
165 / "jupyter"
166 / f"{_generate_file_name()}{local_path.suffix}"
167 )
168
169 if local_path in self.rendered_files:
170 self.rendered_files[local_path].unlink()
171 self.rendered_files[local_path] = tmpfile
172 os.makedirs(tmpfile.parent, exist_ok=True)
173 shutil.copy(local_path, tmpfile)
174
175 file_type = mimetypes.guess_type(config["output_file"])[0]
176 embed = config["media_embed"]
177 if embed is None:
178 # videos need to be embedded when running in google colab.
179 # do this automatically in case config.media_embed has not been
180 # set explicitly.
181 embed = "google.colab" in str(get_ipython())
182
183 if file_type.startswith("image"):
184 result = Image(filename=config["output_file"], embed=embed)
185 else:
186 result = Video(
187 tmpfile,
188 html_attributes=f'controls autoplay loop style="max-width: {config["media_width"]};"',
189 embed=embed,
190 )
191
192 display(result)
193
194 def add_additional_args(self, args: list[str]) -> list[str]:
195 additional_args = ["--jupyter"]
196 # Use webm to support transparency
197 if "-t" in args and "--format" not in args:
198 additional_args += ["--format", "webm"]
199 return additional_args + args[:-1] + [""] + [args[-1]]
200
201
202 def _generate_file_name() -> str:
203 return config["scene_names"][0] + "@" + datetime.now().strftime("%Y-%m-%d@%H-%M-%S")
204
[end of manim/utils/ipython_magic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/manim/utils/ipython_magic.py b/manim/utils/ipython_magic.py
--- a/manim/utils/ipython_magic.py
+++ b/manim/utils/ipython_magic.py
@@ -181,7 +181,7 @@
embed = "google.colab" in str(get_ipython())
if file_type.startswith("image"):
- result = Image(filename=config["output_file"], embed=embed)
+ result = Image(filename=config["output_file"])
else:
result = Video(
tmpfile,
|
{"golden_diff": "diff --git a/manim/utils/ipython_magic.py b/manim/utils/ipython_magic.py\n--- a/manim/utils/ipython_magic.py\n+++ b/manim/utils/ipython_magic.py\n@@ -181,7 +181,7 @@\n embed = \"google.colab\" in str(get_ipython())\n \n if file_type.startswith(\"image\"):\n- result = Image(filename=config[\"output_file\"], embed=embed)\n+ result = Image(filename=config[\"output_file\"])\n else:\n result = Video(\n tmpfile,\n", "issue": "Preview without embed broken in jupyter notebooks.\n## Description\r\n\r\nThe embed feature seems to have broken preview in jupyter notebooks.\r\n\r\ndefault:\r\n<img width=\"1115\" alt=\"Screenshot 2022-03-05 at 3 24 09 PM\" src=\"https://user-images.githubusercontent.com/90276965/156878243-224900ac-8da2-46e5-8ee2-cc9f1852bec8.png\">\r\n\r\nwith media embed:\r\n<img width=\"1115\" alt=\"Screenshot 2022-03-05 at 3 24 30 PM\" src=\"https://user-images.githubusercontent.com/90276965/156878267-b5d77f95-7953-45e9-b155-baac8f19db35.png\">\r\n\r\n## Expected Behaviour\r\nShould preview images even without `media_embed`\r\n\r\n\r\n_Originally posted by @Kiran-Raj-Dev in https://github.com/ManimCommunity/manim/issues/2442#issuecomment-1059732017_\n", "before_files": [{"content": "\"\"\"Utilities for using Manim with IPython (in particular: Jupyter notebooks)\"\"\"\n\nfrom __future__ import annotations\n\nimport mimetypes\nimport os\nimport shutil\nfrom datetime import datetime\nfrom pathlib import Path\nfrom typing import Any\n\nfrom manim import Group, config, logger, tempconfig\nfrom manim.__main__ import main\nfrom manim.renderer.shader import shader_program_cache\n\ntry:\n from IPython import get_ipython\n from IPython.core.interactiveshell import InteractiveShell\n from IPython.core.magic import (\n Magics,\n line_cell_magic,\n magics_class,\n needs_local_scope,\n )\n from IPython.display import Image, Video, display\nexcept ImportError:\n pass\nelse:\n\n @magics_class\n class ManimMagic(Magics):\n def __init__(self, shell: InteractiveShell) -> None:\n super().__init__(shell)\n self.rendered_files = {}\n\n @needs_local_scope\n @line_cell_magic\n def manim(\n self,\n line: str,\n cell: str = None,\n local_ns: dict[str, Any] = None,\n ) -> None:\n r\"\"\"Render Manim scenes contained in IPython cells.\n Works as a line or cell magic.\n\n .. hint::\n\n This line and cell magic works best when used in a JupyterLab\n environment: while all of the functionality is available for\n classic Jupyter notebooks as well, it is possible that videos\n sometimes don't update on repeated execution of the same cell\n if the scene name stays the same.\n\n This problem does not occur when using JupyterLab.\n\n Please refer to `<https://jupyter.org/>`_ for more information about JupyterLab\n and Jupyter notebooks.\n\n Usage in line mode::\n\n %manim [CLI options] MyAwesomeScene\n\n Usage in cell mode::\n\n %%manim [CLI options] MyAwesomeScene\n\n class MyAweseomeScene(Scene):\n def construct(self):\n ...\n\n Run ``%manim --help`` and ``%manim render --help`` for possible command line interface options.\n\n .. note::\n\n The maximal width of the rendered videos that are displayed in the notebook can be\n configured via the ``media_width`` configuration option. The default is set to ``25vw``,\n which is 25% of your current viewport width. To allow the output to become as large\n as possible, set ``config.media_width = \"100%\"``.\n\n The ``media_embed`` option will embed the image/video output in the notebook. 
This is\n generally undesirable as it makes the notebooks very large, but is required on some\n platforms (notably Google's CoLab, where it is automatically enabled unless suppressed\n by ``config.embed = False``) and needed in cases when the notebook (or converted HTML\n file) will be moved relative to the video locations. Use-cases include building\n documentation with Sphinx and JupyterBook. See also the :mod:`manim directive for Sphinx\n <manim.utils.docbuild.manim_directive>`.\n\n Examples\n --------\n\n First make sure to put ``import manim``, or even ``from manim import *``\n in a cell and evaluate it. Then, a typical Jupyter notebook cell for Manim\n could look as follows::\n\n %%manim -v WARNING --disable_caching -qm BannerExample\n\n config.media_width = \"75%\"\n config.media_embed = True\n\n class BannerExample(Scene):\n def construct(self):\n self.camera.background_color = \"#ece6e2\"\n banner_large = ManimBanner(dark_theme=False).scale(0.7)\n self.play(banner_large.create())\n self.play(banner_large.expand())\n\n Evaluating this cell will render and display the ``BannerExample`` scene defined in the body of the cell.\n\n .. note::\n\n In case you want to hide the red box containing the output progress bar, the ``progress_bar`` config\n option should be set to ``None``. This can also be done by passing ``--progress_bar None`` as a\n CLI flag.\n\n \"\"\"\n if cell:\n exec(cell, local_ns)\n\n args = line.split()\n if not len(args) or \"-h\" in args or \"--help\" in args or \"--version\" in args:\n main(args, standalone_mode=False, prog_name=\"manim\")\n return\n\n modified_args = self.add_additional_args(args)\n args = main(modified_args, standalone_mode=False, prog_name=\"manim\")\n with tempconfig(local_ns.get(\"config\", {})):\n config.digest_args(args)\n\n renderer = None\n if config.renderer == \"opengl\":\n # Check if the imported mobjects extend the OpenGLMobject class\n # meaning ConvertToOpenGL did its job\n if \"OpenGLMobject\" in map(lambda cls: cls.__name__, Group.mro()):\n from manim.renderer.opengl_renderer import OpenGLRenderer\n\n renderer = OpenGLRenderer()\n else:\n logger.warning(\n \"Renderer must be set to OpenGL in the configuration file \"\n \"before importing Manim! 
Using cairo renderer instead.\",\n )\n config.renderer = \"cairo\"\n\n try:\n SceneClass = local_ns[config[\"scene_names\"][0]]\n scene = SceneClass(renderer=renderer)\n scene.render()\n finally:\n # Shader cache becomes invalid as the context is destroyed\n shader_program_cache.clear()\n\n # Close OpenGL window here instead of waiting for the main thread to\n # finish causing the window to stay open and freeze\n if renderer is not None and renderer.window is not None:\n renderer.window.close()\n\n if config[\"output_file\"] is None:\n logger.info(\"No output file produced\")\n return\n\n local_path = Path(config[\"output_file\"]).relative_to(Path.cwd())\n tmpfile = (\n Path(config[\"media_dir\"])\n / \"jupyter\"\n / f\"{_generate_file_name()}{local_path.suffix}\"\n )\n\n if local_path in self.rendered_files:\n self.rendered_files[local_path].unlink()\n self.rendered_files[local_path] = tmpfile\n os.makedirs(tmpfile.parent, exist_ok=True)\n shutil.copy(local_path, tmpfile)\n\n file_type = mimetypes.guess_type(config[\"output_file\"])[0]\n embed = config[\"media_embed\"]\n if embed is None:\n # videos need to be embedded when running in google colab.\n # do this automatically in case config.media_embed has not been\n # set explicitly.\n embed = \"google.colab\" in str(get_ipython())\n\n if file_type.startswith(\"image\"):\n result = Image(filename=config[\"output_file\"], embed=embed)\n else:\n result = Video(\n tmpfile,\n html_attributes=f'controls autoplay loop style=\"max-width: {config[\"media_width\"]};\"',\n embed=embed,\n )\n\n display(result)\n\n def add_additional_args(self, args: list[str]) -> list[str]:\n additional_args = [\"--jupyter\"]\n # Use webm to support transparency\n if \"-t\" in args and \"--format\" not in args:\n additional_args += [\"--format\", \"webm\"]\n return additional_args + args[:-1] + [\"\"] + [args[-1]]\n\n\ndef _generate_file_name() -> str:\n return config[\"scene_names\"][0] + \"@\" + datetime.now().strftime(\"%Y-%m-%d@%H-%M-%S\")\n", "path": "manim/utils/ipython_magic.py"}]}
| 2,954 | 119 |
gh_patches_debug_32137
|
rasdani/github-patches
|
git_diff
|
kartoza__prj.app-391
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot add new sponsor
I tried adding new sponsors to the QGIS sponsors list, but got the following error:
500
Ooops. Something broke....
The powers that be have been informed.
In the meantime, please form an orderly queue and head to the home page.
If you need assistance, you may reference this error as 1f96dfdab8c642789dcb40867a97e90d.
I used foreign characters, f.e. "ü" and "é" in the contact names, if that matters.
</issue>
<code>
[start of django_project/base/models/project.py]
1 # coding=utf-8
2 """Project model used by all apps."""
3 import os
4 import logging
5 import string
6 import re
7 from django.core.urlresolvers import reverse
8 from django.utils.text import slugify
9 from django.conf.global_settings import MEDIA_ROOT
10 from django.db import models
11 from django.utils.translation import ugettext_lazy as _
12 from changes.models.version import Version
13 from core.settings.contrib import STOP_WORDS
14 from django.contrib.auth.models import User
15 from django.conf import settings
16 from django.core.exceptions import ValidationError
17
18 logger = logging.getLogger(__name__)
19
20
21 class ApprovedProjectManager(models.Manager):
22 """Custom project manager that shows only approved records."""
23
24 def get_queryset(self):
25 """Query set generator"""
26 return super(
27 ApprovedProjectManager, self).get_queryset().filter(
28 approved=True)
29
30
31 class UnapprovedProjectManager(models.Manager):
32 """Custom project manager that shows only unapproved records."""
33
34 def get_queryset(self):
35 """Query set generator"""
36 return super(
37 UnapprovedProjectManager, self).get_queryset().filter(
38 approved=False)
39
40
41 class PublicProjectManager(models.Manager):
42 """Custom project manager that shows only public and approved projects."""
43
44 def get_queryset(self):
45 """Query set generator"""
46 return super(
47 PublicProjectManager, self).get_queryset().filter(
48 private=False).filter(approved=True)
49
50
51 def validate_gitter_room_name(value):
52 """Ensure user enter proper gitter room name
53
54 :param value: string input
55 :raises: ValidationError
56 """
57 invalid_chars = set(string.punctuation.replace('/', ''))
58 pattern = re.compile('^(\w+\/\w+)$')
59 if any(char in invalid_chars for char in value) \
60 or not pattern.match(value):
61 raise ValidationError(
62 _('%(value)s is not proper gitter room name'),
63 params={'value': value},
64 )
65
66
67 class Project(models.Model):
68 """A project model e.g. QGIS, InaSAFE etc."""
69 name = models.CharField(
70 help_text=_('Name of this project.'),
71 max_length=255,
72 null=False,
73 blank=False,
74 unique=True)
75
76 description = models.CharField(
77 help_text=_('A description for the project'),
78 max_length=500,
79 blank=True,
80 null=True
81 )
82
83 image_file = models.ImageField(
84 help_text=_('A logo image for this project. '
85 'Most browsers support dragging the image directly on to '
86 'the "Choose File" button above.'),
87 upload_to=os.path.join(MEDIA_ROOT, 'images/projects'),
88 blank=True
89 )
90
91 approved = models.BooleanField(
92 help_text=_('Whether this project has been approved for use yet.'),
93 default=False
94 )
95
96 private = models.BooleanField(
97 help_text=_('Only visible to logged-in users?'),
98 default=False
99 )
100
101 owner = models.ForeignKey(User)
102 slug = models.SlugField(unique=True)
103 objects = models.Manager()
104 approved_objects = ApprovedProjectManager()
105 unapproved_objects = UnapprovedProjectManager()
106 public_objects = PublicProjectManager()
107
108 gitter_room = models.CharField(
109 help_text=_('Gitter room name, e.g. gitterhq/sandbox'),
110 max_length=255,
111 null=True,
112 blank=True,
113 validators=[validate_gitter_room_name]
114 )
115
116 # noinspection PyClassicStyleClass
117 class Meta:
118 """Meta class for project."""
119 app_label = 'base'
120 ordering = ['name']
121
122 def save(self, *args, **kwargs):
123 """Overloaded save method.
124
125 :param args:
126 :param kwargs:
127 """
128 if not self.pk:
129 words = self.name.split()
130 filtered_words = [t for t in words if t.lower() not in STOP_WORDS]
131 new_list = unicode(' '.join(filtered_words))
132 self.slug = slugify(new_list)[:50]
133
134 super(Project, self).save(*args, **kwargs)
135
136 def __unicode__(self):
137 return u'%s' % self.name
138
139 def get_absolute_url(self):
140 """Return URL to project detail page
141
142 :return: URL
143 :rtype: str
144
145 """
146 return reverse('project-detail', kwargs={'slug': self.slug})
147
148 def versions(self):
149 """Get all the versions for this project."""
150 qs = Version.objects.filter(project=self).order_by('-padded_version')
151 return qs
152
153 def latest_versions(self):
154 """Get the latest version.
155
156 How many versions returned is determined by the pagination threshold.
157
158 :returns: List of versions.
159 :rtype: list"""
160 return self.versions()[:settings.PROJECT_VERSION_LIST_SIZE]
161
162 @staticmethod
163 def pagination_threshold(self):
164 """Find out how many versions to list per page.
165
166 :returns: The count of items to show per page as defined in
167 settings.PROJECT_VERSION_LIST_SIZE.
168 :rtype: int
169 """
170 return settings.PROJECT_VERSION_LIST_SIZE
171
172 def pagination_threshold_exceeded(self):
173 """Check if project version count exceeds pagination threshold.
174
175 :returns: Flag indicating if there are more versions than
176 self.threshold.
177 :rtype: bool
178 """
179 if self.versions().count() >= settings.PROJECT_VERSION_LIST_SIZE:
180 return True
181 else:
182 return False
183
[end of django_project/base/models/project.py]
[start of django_project/changes/models/sponsor.py]
1 # coding=utf-8
2 import os
3 import pytz
4 import logging
5 from django.core.urlresolvers import reverse
6 from django.utils.text import slugify
7 from core.settings.contrib import STOP_WORDS
8 from django.conf.global_settings import MEDIA_ROOT
9 from django.db import models
10 from django.utils.translation import ugettext_lazy as _
11 from django.contrib.auth.models import User
12 from django_countries.fields import CountryField
13
14 __author__ = 'rischan'
15
16 logger = logging.getLogger(__name__)
17 utc = pytz.UTC
18
19
20 class ApprovedSponsorManager(models.Manager):
21 """Custom sponsor manager that shows only approved records."""
22
23 def get_queryset(self):
24 """Query set generator"""
25 return super(
26 ApprovedSponsorManager, self).get_queryset().filter(
27 approved=True)
28
29
30 class UnapprovedSponsorManager(models.Manager):
31 """Custom sponsor manager that shows only unapproved records."""
32
33 def get_queryset(self):
34 """Query set generator"""
35 return super(
36 UnapprovedSponsorManager, self).get_queryset().filter(
37 approved=False)
38
39
40 # noinspection PyUnresolvedReferences
41 class Sponsor(models.Model):
42 """A sponsor model e.g. gui, backend, web site etc."""
43 name = models.CharField(
44 help_text=_('Name of sponsor.'),
45 max_length=255,
46 null=False,
47 blank=False,
48 unique=False) # there is a unique together rule in meta class below
49
50 sponsor_url = models.CharField(
51 help_text='Input the sponsor URL.',
52 max_length=255,
53 null=True,
54 blank=True)
55
56 contact_person = models.CharField(
57 help_text='Input the contact person of sponsor.',
58 max_length=255,
59 null=True,
60 blank=True)
61
62 address = models.TextField(
63 help_text=(
64 'Enter the complete street address for this sponsor. '
65 'Use line breaks to separate address elements and use '
66 'the country field to specify the country.'
67 ),
68 null=True,
69 blank=True)
70
71 country = CountryField(
72 help_text='Select the country for this sponsor',
73 null=True,
74 blank=True)
75
76 sponsor_email = models.CharField(
77 help_text='Input an email of sponsor.',
78 max_length=255,
79 null=True,
80 blank=True)
81
82 agreement = models.FileField(
83 help_text='Attach sponsor agreement',
84 upload_to=os.path.join(MEDIA_ROOT, 'docs'),
85 blank=True)
86
87 logo = models.ImageField(
88 help_text=(
89 'An image of sponsor logo e.g. a splashscreen. '
90 'Most browsers support dragging the image directly on to the '
91 '"Choose File" button above.'),
92 upload_to=os.path.join(MEDIA_ROOT, 'images/projects'),
93 blank=False)
94
95 approved = models.BooleanField(
96 help_text=_(
97 'Whether this sponsor has been approved for use by the '
98 'project owner.'),
99 default=False
100 )
101
102 author = models.ForeignKey(User)
103 slug = models.SlugField()
104 project = models.ForeignKey('base.Project')
105 objects = models.Manager()
106 approved_objects = ApprovedSponsorManager()
107 unapproved_objects = UnapprovedSponsorManager()
108
109 # noinspection PyClassicStyleClass
110 class Meta:
111 """Meta options for the sponsor class."""
112 unique_together = (
113 ('name', 'project'),
114 ('project', 'slug')
115 )
116 app_label = 'changes'
117 ordering = ['name']
118
119 def save(self, *args, **kwargs):
120 if not self.pk:
121 words = self.name.split()
122 filtered_words = [t for t in words if t.lower() not in STOP_WORDS]
123 new_list = ' '.join(filtered_words)
124 self.slug = slugify(new_list)[:50]
125 super(Sponsor, self).save(*args, **kwargs)
126
127 def __unicode__(self):
128 return u'%s' % (self.name)
129
130 def get_absolute_url(self):
131 return reverse('sponsor-detail', kwargs={
132 'slug': self.slug,
133 'project_slug': self.project.slug
134 })
135
[end of django_project/changes/models/sponsor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/django_project/base/models/project.py b/django_project/base/models/project.py
--- a/django_project/base/models/project.py
+++ b/django_project/base/models/project.py
@@ -14,6 +14,7 @@
from django.contrib.auth.models import User
from django.conf import settings
from django.core.exceptions import ValidationError
+from unidecode import unidecode
logger = logging.getLogger(__name__)
@@ -128,7 +129,8 @@
if not self.pk:
words = self.name.split()
filtered_words = [t for t in words if t.lower() not in STOP_WORDS]
- new_list = unicode(' '.join(filtered_words))
+ # unidecode() represents special characters (unicode data) in ASCII
+ new_list = unidecode(' '.join(filtered_words))
self.slug = slugify(new_list)[:50]
super(Project, self).save(*args, **kwargs)
diff --git a/django_project/changes/models/sponsor.py b/django_project/changes/models/sponsor.py
--- a/django_project/changes/models/sponsor.py
+++ b/django_project/changes/models/sponsor.py
@@ -10,6 +10,8 @@
from django.utils.translation import ugettext_lazy as _
from django.contrib.auth.models import User
from django_countries.fields import CountryField
+from unidecode import unidecode
+
__author__ = 'rischan'
@@ -120,7 +122,8 @@
if not self.pk:
words = self.name.split()
filtered_words = [t for t in words if t.lower() not in STOP_WORDS]
- new_list = ' '.join(filtered_words)
+ # unidecode() represents special characters (unicode data) in ASCII
+ new_list = unidecode(' '.join(filtered_words))
self.slug = slugify(new_list)[:50]
super(Sponsor, self).save(*args, **kwargs)
|
{"golden_diff": "diff --git a/django_project/base/models/project.py b/django_project/base/models/project.py\n--- a/django_project/base/models/project.py\n+++ b/django_project/base/models/project.py\n@@ -14,6 +14,7 @@\n from django.contrib.auth.models import User\n from django.conf import settings\n from django.core.exceptions import ValidationError\n+from unidecode import unidecode\n \n logger = logging.getLogger(__name__)\n \n@@ -128,7 +129,8 @@\n if not self.pk:\n words = self.name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n- new_list = unicode(' '.join(filtered_words))\n+ # unidecode() represents special characters (unicode data) in ASCII\n+ new_list = unidecode(' '.join(filtered_words))\n self.slug = slugify(new_list)[:50]\n \n super(Project, self).save(*args, **kwargs)\ndiff --git a/django_project/changes/models/sponsor.py b/django_project/changes/models/sponsor.py\n--- a/django_project/changes/models/sponsor.py\n+++ b/django_project/changes/models/sponsor.py\n@@ -10,6 +10,8 @@\n from django.utils.translation import ugettext_lazy as _\n from django.contrib.auth.models import User\n from django_countries.fields import CountryField\n+from unidecode import unidecode\n+\n \n __author__ = 'rischan'\n \n@@ -120,7 +122,8 @@\n if not self.pk:\n words = self.name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n- new_list = ' '.join(filtered_words)\n+ # unidecode() represents special characters (unicode data) in ASCII\n+ new_list = unidecode(' '.join(filtered_words))\n self.slug = slugify(new_list)[:50]\n super(Sponsor, self).save(*args, **kwargs)\n", "issue": "Cannot add new sponsor\nI tried adding new sponsors to the QGIS sponsors list, but got the following error:\n\n500\nOoops. Something broke....\n\nThe powers that be have been informed.\nIn the meantime, please form an orderly queue and head to the home page.\n\nIf you need assistance, you may reference this error as 1f96dfdab8c642789dcb40867a97e90d.\n\nI used foreign characters, f.e. 
\"\u00fc\" and \"\u00e9\" in the contact names, if that matters.\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"Project model used by all apps.\"\"\"\nimport os\nimport logging\nimport string\nimport re\nfrom django.core.urlresolvers import reverse\nfrom django.utils.text import slugify\nfrom django.conf.global_settings import MEDIA_ROOT\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom changes.models.version import Version\nfrom core.settings.contrib import STOP_WORDS\nfrom django.contrib.auth.models import User\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\n\nlogger = logging.getLogger(__name__)\n\n\nclass ApprovedProjectManager(models.Manager):\n \"\"\"Custom project manager that shows only approved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n ApprovedProjectManager, self).get_queryset().filter(\n approved=True)\n\n\nclass UnapprovedProjectManager(models.Manager):\n \"\"\"Custom project manager that shows only unapproved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n UnapprovedProjectManager, self).get_queryset().filter(\n approved=False)\n\n\nclass PublicProjectManager(models.Manager):\n \"\"\"Custom project manager that shows only public and approved projects.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n PublicProjectManager, self).get_queryset().filter(\n private=False).filter(approved=True)\n\n\ndef validate_gitter_room_name(value):\n \"\"\"Ensure user enter proper gitter room name\n\n :param value: string input\n :raises: ValidationError\n \"\"\"\n invalid_chars = set(string.punctuation.replace('/', ''))\n pattern = re.compile('^(\\w+\\/\\w+)$')\n if any(char in invalid_chars for char in value) \\\n or not pattern.match(value):\n raise ValidationError(\n _('%(value)s is not proper gitter room name'),\n params={'value': value},\n )\n\n\nclass Project(models.Model):\n \"\"\"A project model e.g. QGIS, InaSAFE etc.\"\"\"\n name = models.CharField(\n help_text=_('Name of this project.'),\n max_length=255,\n null=False,\n blank=False,\n unique=True)\n\n description = models.CharField(\n help_text=_('A description for the project'),\n max_length=500,\n blank=True,\n null=True\n )\n\n image_file = models.ImageField(\n help_text=_('A logo image for this project. '\n 'Most browsers support dragging the image directly on to '\n 'the \"Choose File\" button above.'),\n upload_to=os.path.join(MEDIA_ROOT, 'images/projects'),\n blank=True\n )\n\n approved = models.BooleanField(\n help_text=_('Whether this project has been approved for use yet.'),\n default=False\n )\n\n private = models.BooleanField(\n help_text=_('Only visible to logged-in users?'),\n default=False\n )\n\n owner = models.ForeignKey(User)\n slug = models.SlugField(unique=True)\n objects = models.Manager()\n approved_objects = ApprovedProjectManager()\n unapproved_objects = UnapprovedProjectManager()\n public_objects = PublicProjectManager()\n\n gitter_room = models.CharField(\n help_text=_('Gitter room name, e.g. 
gitterhq/sandbox'),\n max_length=255,\n null=True,\n blank=True,\n validators=[validate_gitter_room_name]\n )\n\n # noinspection PyClassicStyleClass\n class Meta:\n \"\"\"Meta class for project.\"\"\"\n app_label = 'base'\n ordering = ['name']\n\n def save(self, *args, **kwargs):\n \"\"\"Overloaded save method.\n\n :param args:\n :param kwargs:\n \"\"\"\n if not self.pk:\n words = self.name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n new_list = unicode(' '.join(filtered_words))\n self.slug = slugify(new_list)[:50]\n\n super(Project, self).save(*args, **kwargs)\n\n def __unicode__(self):\n return u'%s' % self.name\n\n def get_absolute_url(self):\n \"\"\"Return URL to project detail page\n\n :return: URL\n :rtype: str\n\n \"\"\"\n return reverse('project-detail', kwargs={'slug': self.slug})\n\n def versions(self):\n \"\"\"Get all the versions for this project.\"\"\"\n qs = Version.objects.filter(project=self).order_by('-padded_version')\n return qs\n\n def latest_versions(self):\n \"\"\"Get the latest version.\n\n How many versions returned is determined by the pagination threshold.\n\n :returns: List of versions.\n :rtype: list\"\"\"\n return self.versions()[:settings.PROJECT_VERSION_LIST_SIZE]\n\n @staticmethod\n def pagination_threshold(self):\n \"\"\"Find out how many versions to list per page.\n\n :returns: The count of items to show per page as defined in\n settings.PROJECT_VERSION_LIST_SIZE.\n :rtype: int\n \"\"\"\n return settings.PROJECT_VERSION_LIST_SIZE\n\n def pagination_threshold_exceeded(self):\n \"\"\"Check if project version count exceeds pagination threshold.\n\n :returns: Flag indicating if there are more versions than\n self.threshold.\n :rtype: bool\n \"\"\"\n if self.versions().count() >= settings.PROJECT_VERSION_LIST_SIZE:\n return True\n else:\n return False\n", "path": "django_project/base/models/project.py"}, {"content": "# coding=utf-8\nimport os\nimport pytz\nimport logging\nfrom django.core.urlresolvers import reverse\nfrom django.utils.text import slugify\nfrom core.settings.contrib import STOP_WORDS\nfrom django.conf.global_settings import MEDIA_ROOT\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.contrib.auth.models import User\nfrom django_countries.fields import CountryField\n\n__author__ = 'rischan'\n\nlogger = logging.getLogger(__name__)\nutc = pytz.UTC\n\n\nclass ApprovedSponsorManager(models.Manager):\n \"\"\"Custom sponsor manager that shows only approved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n ApprovedSponsorManager, self).get_queryset().filter(\n approved=True)\n\n\nclass UnapprovedSponsorManager(models.Manager):\n \"\"\"Custom sponsor manager that shows only unapproved records.\"\"\"\n\n def get_queryset(self):\n \"\"\"Query set generator\"\"\"\n return super(\n UnapprovedSponsorManager, self).get_queryset().filter(\n approved=False)\n\n\n# noinspection PyUnresolvedReferences\nclass Sponsor(models.Model):\n \"\"\"A sponsor model e.g. 
gui, backend, web site etc.\"\"\"\n name = models.CharField(\n help_text=_('Name of sponsor.'),\n max_length=255,\n null=False,\n blank=False,\n unique=False) # there is a unique together rule in meta class below\n\n sponsor_url = models.CharField(\n help_text='Input the sponsor URL.',\n max_length=255,\n null=True,\n blank=True)\n\n contact_person = models.CharField(\n help_text='Input the contact person of sponsor.',\n max_length=255,\n null=True,\n blank=True)\n\n address = models.TextField(\n help_text=(\n 'Enter the complete street address for this sponsor. '\n 'Use line breaks to separate address elements and use '\n 'the country field to specify the country.'\n ),\n null=True,\n blank=True)\n\n country = CountryField(\n help_text='Select the country for this sponsor',\n null=True,\n blank=True)\n\n sponsor_email = models.CharField(\n help_text='Input an email of sponsor.',\n max_length=255,\n null=True,\n blank=True)\n\n agreement = models.FileField(\n help_text='Attach sponsor agreement',\n upload_to=os.path.join(MEDIA_ROOT, 'docs'),\n blank=True)\n\n logo = models.ImageField(\n help_text=(\n 'An image of sponsor logo e.g. a splashscreen. '\n 'Most browsers support dragging the image directly on to the '\n '\"Choose File\" button above.'),\n upload_to=os.path.join(MEDIA_ROOT, 'images/projects'),\n blank=False)\n\n approved = models.BooleanField(\n help_text=_(\n 'Whether this sponsor has been approved for use by the '\n 'project owner.'),\n default=False\n )\n\n author = models.ForeignKey(User)\n slug = models.SlugField()\n project = models.ForeignKey('base.Project')\n objects = models.Manager()\n approved_objects = ApprovedSponsorManager()\n unapproved_objects = UnapprovedSponsorManager()\n\n # noinspection PyClassicStyleClass\n class Meta:\n \"\"\"Meta options for the sponsor class.\"\"\"\n unique_together = (\n ('name', 'project'),\n ('project', 'slug')\n )\n app_label = 'changes'\n ordering = ['name']\n\n def save(self, *args, **kwargs):\n if not self.pk:\n words = self.name.split()\n filtered_words = [t for t in words if t.lower() not in STOP_WORDS]\n new_list = ' '.join(filtered_words)\n self.slug = slugify(new_list)[:50]\n super(Sponsor, self).save(*args, **kwargs)\n\n def __unicode__(self):\n return u'%s' % (self.name)\n\n def get_absolute_url(self):\n return reverse('sponsor-detail', kwargs={\n 'slug': self.slug,\n 'project_slug': self.project.slug\n })\n", "path": "django_project/changes/models/sponsor.py"}]}
| 3,411 | 427 |
gh_patches_debug_20609
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-3484
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Merchandise page redesign
### Is your feature request related to a problem? Please describe.
The current merchandise page is a long list with lot's of text. Part of this problem is the board not hiding the merchandise items that are sold out, but I think some other layout than a list would help to improve the look.
### Describe the solution you'd like
View more images side by side, and make the images larger. The text is not very important for the merch at all, so it can be pushed to the background.
### Motivation
The board is getting new merch and would like the page to look better to get people interested in the merch.
### Describe alternatives you've considered
Keep the page as is, because people will buy merch anyway through whatsapp promotion etc.
</issue>
<code>
[start of website/merchandise/urls.py]
1 """Defines the routes provided in this package."""
2 from django.urls import include, path
3
4 from . import views
5
6 #: the name of the application
7 app_name = "merchandise"
8
9 #: the urls provided by this package
10 urlpatterns = [
11 path(
12 "association/merchandise/",
13 include(
14 [
15 path("", views.index, name="index"),
16 ]
17 ),
18 )
19 ]
20
[end of website/merchandise/urls.py]
[start of website/merchandise/views.py]
1 """The views for the merchandise package."""
2 from django.shortcuts import render
3
4 from merchandise.models import MerchandiseItem
5
6
7 def index(request):
8 """Render the index view.
9
10 :param request: the request object
11 :return: the response
12 """
13 items = MerchandiseItem.objects.all()
14
15 return render(request, "merchandise/index.html", {"items": items})
16
[end of website/merchandise/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/merchandise/urls.py b/website/merchandise/urls.py
--- a/website/merchandise/urls.py
+++ b/website/merchandise/urls.py
@@ -15,5 +15,6 @@
path("", views.index, name="index"),
]
),
- )
+ ),
+ path("association/merchandise/<int:id>/", views.product_page, name="product"),
]
diff --git a/website/merchandise/views.py b/website/merchandise/views.py
--- a/website/merchandise/views.py
+++ b/website/merchandise/views.py
@@ -1,4 +1,5 @@
"""The views for the merchandise package."""
+from django.http import Http404
from django.shortcuts import render
from merchandise.models import MerchandiseItem
@@ -13,3 +14,14 @@
items = MerchandiseItem.objects.all()
return render(request, "merchandise/index.html", {"items": items})
+
+
+def product_page(request, id):
+ try:
+ product = MerchandiseItem.objects.get(pk=id)
+ except MerchandiseItem.DoesNotExist:
+ raise Http404(
+ "This item may not exists, or is removed. Please check if the link is correct!"
+ )
+
+ return render(request, "merchandise/product_page.html", {"product": product})
|
{"golden_diff": "diff --git a/website/merchandise/urls.py b/website/merchandise/urls.py\n--- a/website/merchandise/urls.py\n+++ b/website/merchandise/urls.py\n@@ -15,5 +15,6 @@\n path(\"\", views.index, name=\"index\"),\n ]\n ),\n- )\n+ ),\n+ path(\"association/merchandise/<int:id>/\", views.product_page, name=\"product\"),\n ]\ndiff --git a/website/merchandise/views.py b/website/merchandise/views.py\n--- a/website/merchandise/views.py\n+++ b/website/merchandise/views.py\n@@ -1,4 +1,5 @@\n \"\"\"The views for the merchandise package.\"\"\"\n+from django.http import Http404\n from django.shortcuts import render\n \n from merchandise.models import MerchandiseItem\n@@ -13,3 +14,14 @@\n items = MerchandiseItem.objects.all()\n \n return render(request, \"merchandise/index.html\", {\"items\": items})\n+\n+\n+def product_page(request, id):\n+ try:\n+ product = MerchandiseItem.objects.get(pk=id)\n+ except MerchandiseItem.DoesNotExist:\n+ raise Http404(\n+ \"This item may not exists, or is removed. Please check if the link is correct!\"\n+ )\n+\n+ return render(request, \"merchandise/product_page.html\", {\"product\": product})\n", "issue": "Merchandise page redesign\n### Is your feature request related to a problem? Please describe.\r\n\r\nThe current merchandise page is a long list with lot's of text. Part of this problem is the board not hiding the merchandise items that are sold out, but I think some other layout than a list would help to improve the look.\r\n\r\n### Describe the solution you'd like\r\n\r\nView more images side by side, and make the images larger. The text is not very important for the merch at all, so it can be pushed to the background.\r\n\r\n### Motivation\r\n\r\nThe board is getting new merch and would like the page to look better to get people interested in the merch.\r\n\r\n### Describe alternatives you've considered\r\n\r\nKeep the page as is, because people will buy merch anyway through whatsapp promotion etc.\r\n\n", "before_files": [{"content": "\"\"\"Defines the routes provided in this package.\"\"\"\nfrom django.urls import include, path\n\nfrom . import views\n\n#: the name of the application\napp_name = \"merchandise\"\n\n#: the urls provided by this package\nurlpatterns = [\n path(\n \"association/merchandise/\",\n include(\n [\n path(\"\", views.index, name=\"index\"),\n ]\n ),\n )\n]\n", "path": "website/merchandise/urls.py"}, {"content": "\"\"\"The views for the merchandise package.\"\"\"\nfrom django.shortcuts import render\n\nfrom merchandise.models import MerchandiseItem\n\n\ndef index(request):\n \"\"\"Render the index view.\n\n :param request: the request object\n :return: the response\n \"\"\"\n items = MerchandiseItem.objects.all()\n\n return render(request, \"merchandise/index.html\", {\"items\": items})\n", "path": "website/merchandise/views.py"}]}
| 942 | 315 |
gh_patches_debug_11616
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-7320
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-2391] [Bug] Following the Python API example.py example raises AttributeError
### Is this a new bug in dbt-core?
- [X] I believe this is a new bug in dbt-core
- [X] I have searched the existing issues, and I could not find an existing issue for this bug
### Current Behavior
I'm trying to work with the python APIs in dbt version 1.5, per the given example.py and hitting some error
`AttributeError: 'Namespace' object has no attribute 'MACRO_DEBUGGING'`
### Expected Behavior
Not throw AttributeError, profile getting created properly
### Steps To Reproduce
1. run these lines (project_dir is a path to a dbt project)
```python
from dbt.cli.main import dbtRunner
from dbt.config.runtime import load_profile, load_project
project_dir = "/Users/roitabach/workspace/small_fix/dbt-data-reliability/integration_tests/"
profile = load_profile(project_dir, {}, target_override="snowflake")
```
### Relevant log output
```shell
This exception is raised:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[16], line 1
----> 1 profile = load_profile(project_dir, cli_vars={"MACRO_DEBUGGING":False},target_override="snowflake")
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/config/runtime.py:68, in load_profile(project_root, cli_vars, profile_name_override, target_override, threads_override)
66 raw_profile_name = raw_project.get("profile")
67 profile_renderer = ProfileRenderer(cli_vars)
---> 68 profile_name = profile_renderer.render_value(raw_profile_name)
69 profile = Profile.render(
70 profile_renderer, profile_name, profile_name_override, target_override, threads_override
71 )
72 # Save env_vars encountered in rendering for partial parsing
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/config/renderer.py:185, in SecretRenderer.render_value(self, value, keypath)
181 def render_value(self, value: Any, keypath: Optional[Keypath] = None) -> Any:
182 # First, standard Jinja rendering, with special handling for 'secret' environment variables
183 # "{{ env_var('DBT_SECRET_ENV_VAR') }}" -> "$$$DBT_SECRET_START$$$DBT_SECRET_ENV_{VARIABLE_NAME}$$$DBT_SECRET_END$$$"
184 # This prevents Jinja manipulation of secrets via macros/filters that might leak partial/modified values in logs
--> 185 rendered = super().render_value(value, keypath)
186 # Now, detect instances of the placeholder value ($$$DBT_SECRET_START...DBT_SECRET_END$$$)
187 # and replace them with the actual secret value
188 if SECRET_ENV_PREFIX in str(rendered):
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/config/renderer.py:42, in BaseRenderer.render_value(self, value, keypath)
40 try:
41 with catch_jinja():
---> 42 return get_rendered(value, self.context, native=True)
43 except CompilationError as exc:
44 msg = f"Could not render {value}: {exc.msg}"
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/clients/jinja.py:584, in get_rendered(string, ctx, node, capture_macros, native)
582 if not native and isinstance(string, str) and _HAS_RENDER_CHARS_PAT.search(string) is None:
583 return string
--> 584 template = get_template(
585 string,
586 ctx,
587 node,
588 capture_macros=capture_macros,
589 native=native,
590 )
591 return render_template(template, ctx, node)
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/clients/jinja.py:541, in get_template(string, ctx, node, capture_macros, native)
538 env = get_environment(node, capture_macros, native=native)
540 template_source = str(string)
--> 541 return env.from_string(template_source, globals=ctx)
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/Jinja2-3.1.2-py3.10.egg/jinja2/environment.py:1105, in Environment.from_string(self, source, globals, template_class)
1103 gs = self.make_globals(globals)
1104 cls = template_class or self.template_class
-> 1105 return cls.from_code(self, self.compile(source), gs, None)
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/Jinja2-3.1.2-py3.10.egg/jinja2/environment.py:766, in Environment.compile(self, source, name, filename, raw, defer_init)
764 if filename is None:
765 filename = "<template>"
--> 766 return self._compile(source, filename)
767 except TemplateSyntaxError:
768 self.handle_exception(source=source_hint)
File ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/clients/jinja.py:102, in MacroFuzzEnvironment._compile(self, source, filename)
94 def _compile(self, source, filename):
95 """Override jinja's compilation to stash the rendered source inside
96 the python linecache for debugging when the appropriate environment
97 variable is set.
(...)
100 WARNING: This can write a ton of data if you aren't careful.
101 """
--> 102 macro_debugging = get_flags().MACRO_DEBUGGING
103 if filename == "<template>" and macro_debugging:
104 write = macro_debugging == "write"
AttributeError: 'Namespace' object has no attribute 'MACRO_DEBUGGING'
```
### Environment
```markdown
- OS: MacOS Monterey 12.5.1 (21G83)
- Python: 3.11.2
- dbt-core: 1.5.0-b5
```
### Which database adapter are you using with dbt?
snowflake
### Additional Context
I tried to comment out the `if` block just to see where I get- got another argparse error-
```
AttributeError: 'Namespace' object has no attribute 'PROFILES_DIR'
```
from this row in `.../site-packages/dbt/config/profile.py`
```
--> 434 raw_profiles = read_profile(flags.PROFILES_DIR)
```
</issue>
<code>
[start of core/dbt/cli/example.py]
1 from dbt.cli.main import dbtRunner
2 from dbt.config.runtime import load_profile, load_project
3
4 if __name__ == "__main__":
5 project_dir = "/Users/chenyuli/git/jaffle_shop"
6 cli_args = ["run", "--project-dir", project_dir]
7
8 # initialize the dbt runner
9 dbt = dbtRunner()
10 # run the command
11 res = dbt.invoke(cli_args)
12
13 # preload profile and project
14 profile = load_profile(project_dir, {}, "testing-postgres")
15 project = load_project(project_dir, False, profile, {})
16
17 # initialize the runner with pre-loaded profile and project, you can also pass in a preloaded manifest
18 dbt = dbtRunner(profile=profile, project=project)
19 # run the command, this will use the pre-loaded profile and project instead of loading
20 res = dbt.invoke(cli_args)
21
[end of core/dbt/cli/example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/dbt/cli/example.py b/core/dbt/cli/example.py
deleted file mode 100644
--- a/core/dbt/cli/example.py
+++ /dev/null
@@ -1,20 +0,0 @@
-from dbt.cli.main import dbtRunner
-from dbt.config.runtime import load_profile, load_project
-
-if __name__ == "__main__":
- project_dir = "/Users/chenyuli/git/jaffle_shop"
- cli_args = ["run", "--project-dir", project_dir]
-
- # initialize the dbt runner
- dbt = dbtRunner()
- # run the command
- res = dbt.invoke(cli_args)
-
- # preload profile and project
- profile = load_profile(project_dir, {}, "testing-postgres")
- project = load_project(project_dir, False, profile, {})
-
- # initialize the runner with pre-loaded profile and project, you can also pass in a preloaded manifest
- dbt = dbtRunner(profile=profile, project=project)
- # run the command, this will use the pre-loaded profile and project instead of loading
- res = dbt.invoke(cli_args)
|
{"golden_diff": "diff --git a/core/dbt/cli/example.py b/core/dbt/cli/example.py\ndeleted file mode 100644\n--- a/core/dbt/cli/example.py\n+++ /dev/null\n@@ -1,20 +0,0 @@\n-from dbt.cli.main import dbtRunner\n-from dbt.config.runtime import load_profile, load_project\n-\n-if __name__ == \"__main__\":\n- project_dir = \"/Users/chenyuli/git/jaffle_shop\"\n- cli_args = [\"run\", \"--project-dir\", project_dir]\n-\n- # initialize the dbt runner\n- dbt = dbtRunner()\n- # run the command\n- res = dbt.invoke(cli_args)\n-\n- # preload profile and project\n- profile = load_profile(project_dir, {}, \"testing-postgres\")\n- project = load_project(project_dir, False, profile, {})\n-\n- # initialize the runner with pre-loaded profile and project, you can also pass in a preloaded manifest\n- dbt = dbtRunner(profile=profile, project=project)\n- # run the command, this will use the pre-loaded profile and project instead of loading\n- res = dbt.invoke(cli_args)\n", "issue": "[CT-2391] [Bug] Following the Python API example.py example raises AttributeError\n### Is this a new bug in dbt-core?\r\n\r\n- [X] I believe this is a new bug in dbt-core\r\n- [X] I have searched the existing issues, and I could not find an existing issue for this bug\r\n\r\n### Current Behavior\r\n\r\n I'm trying to work with the python APIs in dbt version 1.5, per the given example.py and hitting some error\r\n\r\n`AttributeError: 'Namespace' object has no attribute 'MACRO_DEBUGGING'`\r\n\r\n\r\n### Expected Behavior\r\n\r\nNot throw AttributeError, profile getting created properly\r\n\r\n### Steps To Reproduce\r\n\r\n1. run these lines (project_dir is a path to a dbt project)\r\n```python\r\nfrom dbt.cli.main import dbtRunner\r\nfrom dbt.config.runtime import load_profile, load_project\r\nproject_dir = \"/Users/roitabach/workspace/small_fix/dbt-data-reliability/integration_tests/\"\r\nprofile = load_profile(project_dir, {}, target_override=\"snowflake\")\r\n```\r\n\r\n\r\n### Relevant log output\r\n\r\n```shell\r\nThis exception is raised:\r\n\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nCell In[16], line 1\r\n----> 1 profile = load_profile(project_dir, cli_vars={\"MACRO_DEBUGGING\":False},target_override=\"snowflake\")\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/config/runtime.py:68, in load_profile(project_root, cli_vars, profile_name_override, target_override, threads_override)\r\n 66 raw_profile_name = raw_project.get(\"profile\")\r\n 67 profile_renderer = ProfileRenderer(cli_vars)\r\n---> 68 profile_name = profile_renderer.render_value(raw_profile_name)\r\n 69 profile = Profile.render(\r\n 70 profile_renderer, profile_name, profile_name_override, target_override, threads_override\r\n 71 )\r\n 72 # Save env_vars encountered in rendering for partial parsing\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/config/renderer.py:185, in SecretRenderer.render_value(self, value, keypath)\r\n 181 def render_value(self, value: Any, keypath: Optional[Keypath] = None) -> Any:\r\n 182 # First, standard Jinja rendering, with special handling for 'secret' environment variables\r\n 183 # \"{{ env_var('DBT_SECRET_ENV_VAR') }}\" -> \"$$$DBT_SECRET_START$$$DBT_SECRET_ENV_{VARIABLE_NAME}$$$DBT_SECRET_END$$$\"\r\n 184 # This prevents Jinja manipulation of secrets via macros/filters that might leak partial/modified values in logs\r\n--> 185 rendered = super().render_value(value, keypath)\r\n 186 # Now, 
detect instances of the placeholder value ($$$DBT_SECRET_START...DBT_SECRET_END$$$)\r\n 187 # and replace them with the actual secret value\r\n 188 if SECRET_ENV_PREFIX in str(rendered):\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/config/renderer.py:42, in BaseRenderer.render_value(self, value, keypath)\r\n 40 try:\r\n 41 with catch_jinja():\r\n---> 42 return get_rendered(value, self.context, native=True)\r\n 43 except CompilationError as exc:\r\n 44 msg = f\"Could not render {value}: {exc.msg}\"\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/clients/jinja.py:584, in get_rendered(string, ctx, node, capture_macros, native)\r\n 582 if not native and isinstance(string, str) and _HAS_RENDER_CHARS_PAT.search(string) is None:\r\n 583 return string\r\n--> 584 template = get_template(\r\n 585 string,\r\n 586 ctx,\r\n 587 node,\r\n 588 capture_macros=capture_macros,\r\n 589 native=native,\r\n 590 )\r\n 591 return render_template(template, ctx, node)\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/clients/jinja.py:541, in get_template(string, ctx, node, capture_macros, native)\r\n 538 env = get_environment(node, capture_macros, native=native)\r\n 540 template_source = str(string)\r\n--> 541 return env.from_string(template_source, globals=ctx)\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/Jinja2-3.1.2-py3.10.egg/jinja2/environment.py:1105, in Environment.from_string(self, source, globals, template_class)\r\n 1103 gs = self.make_globals(globals)\r\n 1104 cls = template_class or self.template_class\r\n-> 1105 return cls.from_code(self, self.compile(source), gs, None)\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/Jinja2-3.1.2-py3.10.egg/jinja2/environment.py:766, in Environment.compile(self, source, name, filename, raw, defer_init)\r\n 764 if filename is None:\r\n 765 filename = \"<template>\"\r\n--> 766 return self._compile(source, filename)\r\n 767 except TemplateSyntaxError:\r\n 768 self.handle_exception(source=source_hint)\r\n\r\nFile ~/workspace/env/elementary/venv/lib/python3.10/site-packages/dbt/clients/jinja.py:102, in MacroFuzzEnvironment._compile(self, source, filename)\r\n 94 def _compile(self, source, filename):\r\n 95 \"\"\"Override jinja's compilation to stash the rendered source inside\r\n 96 the python linecache for debugging when the appropriate environment\r\n 97 variable is set.\r\n (...)\r\n 100 WARNING: This can write a ton of data if you aren't careful.\r\n 101 \"\"\"\r\n--> 102 macro_debugging = get_flags().MACRO_DEBUGGING\r\n 103 if filename == \"<template>\" and macro_debugging:\r\n 104 write = macro_debugging == \"write\"\r\n\r\nAttributeError: 'Namespace' object has no attribute 'MACRO_DEBUGGING'\r\n```\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: MacOS Monterey 12.5.1 (21G83)\r\n- Python: 3.11.2\r\n- dbt-core: 1.5.0-b5\r\n```\r\n\r\n### Which database adapter are you using with dbt?\r\n\r\nsnowflake\r\n\r\n### Additional Context\r\n\r\nI tried to comment out the `if` block just to see where I get- got another argparse error- \r\n```\r\nAttributeError: 'Namespace' object has no attribute 'PROFILES_DIR'\r\n```\r\nfrom this row in `.../site-packages/dbt/config/profile.py`\r\n```\r\n--> 434 raw_profiles = read_profile(flags.PROFILES_DIR)\r\n```\n", "before_files": [{"content": "from dbt.cli.main import dbtRunner\nfrom dbt.config.runtime import load_profile, load_project\n\nif __name__ == \"__main__\":\n project_dir = 
\"/Users/chenyuli/git/jaffle_shop\"\n cli_args = [\"run\", \"--project-dir\", project_dir]\n\n # initialize the dbt runner\n dbt = dbtRunner()\n # run the command\n res = dbt.invoke(cli_args)\n\n # preload profile and project\n profile = load_profile(project_dir, {}, \"testing-postgres\")\n project = load_project(project_dir, False, profile, {})\n\n # initialize the runner with pre-loaded profile and project, you can also pass in a preloaded manifest\n dbt = dbtRunner(profile=profile, project=project)\n # run the command, this will use the pre-loaded profile and project instead of loading\n res = dbt.invoke(cli_args)\n", "path": "core/dbt/cli/example.py"}]}
| 2,349 | 260 |
gh_patches_debug_59564
|
rasdani/github-patches
|
git_diff
|
saulpw__visidata-1921
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pasting from Clipboard in Windows 10
**Small description**
I cannot get any results pasted from the clipboard onto a visidata cell, selected cell or range of cells.
On the most basic test:
- Create new sheet
- Create 1 cell
- I copy a short string from another app to the clipboard
- trying:
- syspaste-cells or
- syspaste-cells-selected (while selecting a single cell), or
- zP
- I get the following message "INFO: Type "CLIP /?" for usage."
This is a message from Windows regarding its CLIP application.
I have tried this, having multiple clipboard history activated, or deactivated. In both cases I get the same result.
Copying from visidata onto clipboard works alright with zY
**Expected result**
Pasted content in cell
**Actual result with screenshot**
If you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.

No error captured (no ctrl-E)
**Steps to reproduce with sample data and a .vd**
First try reproducing without any user configuration by using the flag `-N`.
e.g. `echo "abc" | vd -f txt -N`
Same result.
Please attach the commandlog (saved with `Ctrl-D`)
[paste_problem_win.zip](https://github.com/saulpw/visidata/files/11748544/paste_problem_win.zip)
to show the steps that led to the issue.
See [here](http://visidata.org/docs/save-restore/) for more details.
**Additional context**
Please include the version of VisiData and Python.
Visidata v2.11
Python 3.9.7
</issue>
<code>
[start of visidata/clipboard.py]
1 from copy import copy, deepcopy
2 import shutil
3 import subprocess
4 import io
5 import sys
6 import tempfile
7 import functools
8 import os
9
10 from visidata import VisiData, vd, asyncthread
11 from visidata import Sheet, Path
12
13 if sys.platform == 'win32':
14 syscopy_cmd_default = 'clip.exe'
15 syspaste_cmd_default = 'clip.exe'
16 elif sys.platform == 'darwin':
17 syscopy_cmd_default = 'pbcopy w'
18 syspaste_cmd_default = 'pbpaste'
19 else:
20 if 'WAYLAND_DISPLAY' in os.environ:
21 syscopy_cmd_default = 'wl-copy'
22 syspaste_cmd_default = 'wl-paste'
23 else:
24 syscopy_cmd_default = 'xclip -selection clipboard -filter' # xsel --clipboard --input
25 syspaste_cmd_default = 'xclip -selection clipboard -o' # xsel --clipboard
26
27 vd.option('clipboard_copy_cmd', syscopy_cmd_default, 'command to copy stdin to system clipboard', sheettype=None)
28 vd.option('clipboard_paste_cmd', syspaste_cmd_default, 'command to send contents of system clipboard to stdout', sheettype=None)
29
30
31 @Sheet.api
32 def copyRows(sheet, rows):
33 vd.memory.cliprows = rows
34 vd.memory.clipcols = list(sheet.visibleCols)
35 if not rows:
36 vd.warning('no %s selected; clipboard emptied' % sheet.rowtype)
37 else:
38 vd.status('copied %d %s to clipboard' % (len(rows), sheet.rowtype))
39
40 @Sheet.api
41 def copyCells(sheet, col, rows):
42 vd.memory.clipcells = [col.getTypedValue(r) for r in rows]
43 if not rows:
44 vd.warning('no %s selected; clipboard emptied' % sheet.rowtype)
45 return
46 vd.status('copied %d %s.%s to clipboard' % (len(rows), sheet.rowtype, col.name))
47
48
49 @Sheet.api
50 def syscopyValue(sheet, val):
51 # pipe val to stdin of clipboard command
52
53 p = subprocess.run(
54 sheet.options.clipboard_copy_cmd.split(),
55 input=val,
56 encoding=sheet.options.encoding,
57 stdout=subprocess.DEVNULL)
58
59 vd.status('copied value to system clipboard')
60
61
62 @Sheet.api
63 def syscopyCells(sheet, cols, rows, filetype=None):
64 filetype = filetype or vd.input("copy %d %s as filetype: " % (len(rows), sheet.rowtype), value=sheet.options.save_filetype or 'tsv')
65 sheet.syscopyCells_async(cols, rows, filetype)
66
67
68 @Sheet.api
69 @asyncthread
70 def syscopyCells_async(sheet, cols, rows, filetype):
71 vs = copy(sheet)
72 vs.rows = rows or vd.fail('no %s selected' % sheet.rowtype)
73 vs.columns = cols
74
75 vd.status(f'copying {vs.nRows} {vs.rowtype} to system clipboard as {filetype}')
76
77 with io.StringIO() as buf:
78 vd.sync(vd.saveSheets(Path(sheet.name+'.'+filetype, fptext=buf), vs))
79 subprocess.run(
80 sheet.options.clipboard_copy_cmd.split(),
81 input=buf.getvalue(),
82 encoding=sheet.options.encoding,
83 stdout=subprocess.DEVNULL)
84
85
86 @VisiData.api
87 def sysclipValue(vd):
88 cmd = vd.options.clipboard_paste_cmd
89 return subprocess.check_output(vd.options.clipboard_paste_cmd.split()).decode('utf-8')
90
91
92 @VisiData.api
93 @asyncthread
94 def pasteFromClipboard(vd, cols, rows):
95 text = vd.getLastArgs() or vd.sysclipValue().strip() or vd.fail('system clipboard is empty')
96
97 vd.addUndoSetValues(cols, rows)
98 lines = text.split('\n')
99 if not lines:
100 vd.warning('nothing to paste')
101 return
102
103 vs = cols[0].sheet
104 newrows = [vs.newRow() for i in range(len(lines)-len(rows))]
105 if newrows:
106 rows.extend(newrows)
107 vs.addRows(newrows)
108
109 for line, r in zip(lines, rows):
110 for v, c in zip(line.split('\t'), cols):
111 c.setValue(r, v)
112
113
114 @Sheet.api
115 def delete_row(sheet, rowidx):
116 if not sheet.defer:
117 oldrow = sheet.rows.pop(rowidx)
118 vd.addUndo(sheet.rows.insert, rowidx, oldrow)
119 # clear the deleted row from selected rows
120 if sheet.isSelected(oldrow):
121 sheet.addUndoSelection()
122 sheet.unselectRow(oldrow)
123 else:
124 oldrow = sheet.rows[rowidx]
125 sheet.rowDeleted(oldrow)
126
127 sheet.setModified()
128 return oldrow
129
130 @Sheet.api
131 def paste_after(sheet, rowidx):
132 if not vd.memory.cliprows: #1793
133 vd.warning('nothing to paste')
134 return
135 to_paste = list(deepcopy(r) for r in reversed(vd.memory.cliprows))
136 sheet.addRows(to_paste, index=rowidx)
137
138
139
140 Sheet.addCommand('y', 'copy-row', 'copyRows([cursorRow])', 'yank (copy) current row to clipboard')
141
142 Sheet.addCommand('p', 'paste-after', 'paste_after(cursorRowIndex)', 'paste clipboard rows after current row')
143 Sheet.addCommand('P', 'paste-before', 'paste_after(cursorRowIndex-1)', 'paste clipboard rows before current row')
144
145 Sheet.addCommand('gy', 'copy-selected', 'copyRows(onlySelectedRows)', 'yank (copy) selected rows to clipboard')
146
147 Sheet.addCommand('zy', 'copy-cell', 'copyCells(cursorCol, [cursorRow]); vd.memo("clipval", cursorCol, cursorRow)', 'yank (copy) current cell to clipboard')
148 Sheet.addCommand('zp', 'paste-cell', 'cursorCol.setValuesTyped([cursorRow], vd.memory.clipval)', 'set contents of current cell to last clipboard value')
149
150 Sheet.addCommand('d', 'delete-row', 'delete_row(cursorRowIndex); defer and cursorDown(1)', 'delete current row')
151 Sheet.addCommand('gd', 'delete-selected', 'deleteSelected()', 'delete selected rows')
152 Sheet.addCommand('zd', 'delete-cell', 'cursorCol.setValues([cursorRow], options.null_value)', 'delete current cell (set to None)')
153 Sheet.addCommand('gzd', 'delete-cells', 'cursorCol.setValues(onlySelectedRows, options.null_value)', 'delete contents of current column for selected rows (set to None)')
154
155 Sheet.bindkey('BUTTON2_PRESSED', 'go-mouse')
156 Sheet.addCommand('BUTTON2_RELEASED', 'syspaste-cells', 'pasteFromClipboard(visibleCols[cursorVisibleColIndex:], rows[cursorRowIndex:])', 'paste from system clipboard to region starting at cursor')
157 Sheet.bindkey('BUTTON2_CLICKED', 'go-mouse')
158 Sheet.bindkey('zP', 'syspaste-cells')
159 Sheet.addCommand('gzP', 'syspaste-cells-selected', 'pasteFromClipboard(visibleCols[cursorVisibleColIndex:], someSelectedRows)', 'paste from system clipboard to selected cells')
160
161 Sheet.addCommand('gzy', 'copy-cells', 'copyCells(cursorCol, onlySelectedRows)', 'yank (copy) contents of current column for selected rows to clipboard')
162 Sheet.addCommand('gzp', 'setcol-clipboard', 'for r, v in zip(onlySelectedRows, itertools.cycle(vd.memory.clipcells or [None])): cursorCol.setValuesTyped([r], v)', 'set cells of current column for selected rows to last clipboard value')
163
164 Sheet.addCommand('Y', 'syscopy-row', 'syscopyCells(visibleCols, [cursorRow])', 'yank (copy) current row to system clipboard (using options.clipboard_copy_cmd)')
165
166 Sheet.addCommand('gY', 'syscopy-selected', 'syscopyCells(visibleCols, onlySelectedRows)', 'yank (copy) selected rows to system clipboard (using options.clipboard_copy_cmd)')
167 Sheet.addCommand('zY', 'syscopy-cell', 'syscopyValue(cursorDisplay)', 'yank (copy) current cell to system clipboard (using options.clipboard_copy_cmd)')
168 Sheet.addCommand('gzY', 'syscopy-cells', 'syscopyCells([cursorCol], onlySelectedRows, filetype="txt")', 'yank (copy) contents of current column from selected rows to system clipboard (using options.clipboard_copy_cmd')
169
170 Sheet.addCommand('x', 'cut-row', 'copyRows([sheet.delete_row(cursorRowIndex)]); defer and cursorDown(1)', 'delete (cut) current row and move it to clipboard')
171 Sheet.addCommand('gx', 'cut-selected', 'copyRows(onlySelectedRows); deleteSelected()', 'delete (cut) selected rows and move them to clipboard')
172 Sheet.addCommand('zx', 'cut-cell', 'copyCells(cursorCol, [cursorRow]); cursorCol.setValues([cursorRow], None)', 'delete (cut) current cell and move it to clipboard')
173 Sheet.addCommand('gzx', 'cut-cells', 'copyCells(cursorCol, onlySelectedRows); cursorCol.setValues(onlySelectedRows, None)', 'delete (cut) contents of current column for selected rows and move them to clipboard')
174
175
176 Sheet.bindkey('KEY_DC', 'delete-cell'),
177 Sheet.bindkey('gKEY_DC', 'delete-cells'),
178
179 vd.addMenuItems('''
180 Edit > Delete > current row > delete-row
181 Edit > Delete > current cell > delete-cell
182 Edit > Delete > selected rows > delete-selected
183 Edit > Delete > selected cells > delete-cells
184 Edit > Copy > current cell > copy-cell
185 Edit > Copy > current row > copy-row
186 Edit > Copy > selected cells > copy-cells
187 Edit > Copy > selected rows > copy-selected
188 Edit > Copy > to system clipboard > current cell > syscopy-cell
189 Edit > Copy > to system clipboard > current row > syscopy-row
190 Edit > Copy > to system clipboard > selected cells > syscopy-cells
191 Edit > Copy > to system clipboard > selected rows > syscopy-selected
192 Edit > Cut > current row > cut-row
193 Edit > Cut > selected cells > cut-selected
194 Edit > Cut > current cell > cut-cell
195 Edit > Paste > row after > paste-after
196 Edit > Paste > row before > paste-before
197 Edit > Paste > into selected cells > setcol-clipboard
198 Edit > Paste > into current cell > paste-cell
199 Edit > Paste > from system clipboard > cells at cursor > syspaste-cells
200 Edit > Paste > from system clipboard > selected cells > syspaste-cells-selected
201 ''')
202
[end of visidata/clipboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/visidata/clipboard.py b/visidata/clipboard.py
--- a/visidata/clipboard.py
+++ b/visidata/clipboard.py
@@ -12,7 +12,7 @@
if sys.platform == 'win32':
syscopy_cmd_default = 'clip.exe'
- syspaste_cmd_default = 'clip.exe'
+ syspaste_cmd_default = 'powershell -command Get-Clipboard'
elif sys.platform == 'darwin':
syscopy_cmd_default = 'pbcopy w'
syspaste_cmd_default = 'pbpaste'
|
{"golden_diff": "diff --git a/visidata/clipboard.py b/visidata/clipboard.py\n--- a/visidata/clipboard.py\n+++ b/visidata/clipboard.py\n@@ -12,7 +12,7 @@\n \n if sys.platform == 'win32':\n syscopy_cmd_default = 'clip.exe'\n- syspaste_cmd_default = 'clip.exe'\n+ syspaste_cmd_default = 'powershell -command Get-Clipboard'\n elif sys.platform == 'darwin':\n syscopy_cmd_default = 'pbcopy w'\n syspaste_cmd_default = 'pbpaste'\n", "issue": "Pasting from Clipboard in Windows 10\n**Small description**\r\nI cannot get any results pasted from the clipboard onto a visidata cell, selected cell or range of cells. \r\n\r\nOn the most basic test:\r\n- Create new sheet\r\n- Create 1 cell\r\n- I copy a short string from another app to the clipboard\r\n- trying:\r\n - syspaste-cells or \r\n - syspaste-cells-selected (while selecting a single cell), or\r\n - zP\r\n- I get the following message \"INFO: Type \"CLIP /?\" for usage.\"\r\n\r\nThis is a message from Windows regarding its CLIP application.\r\n\r\nI have tried this, having multiple clipboard history activated, or deactivated. In both cases I get the same result.\r\n\r\nCopying from visidata onto clipboard works alright with zY\r\n\r\n**Expected result**\r\nPasted content in cell\r\n\r\n**Actual result with screenshot**\r\nIf you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.\r\n\r\n\r\nNo error captured (no ctrl-E)\r\n\r\n**Steps to reproduce with sample data and a .vd**\r\nFirst try reproducing without any user configuration by using the flag `-N`.\r\ne.g. `echo \"abc\" | vd -f txt -N`\r\n\r\nSame result.\r\n\r\nPlease attach the commandlog (saved with `Ctrl-D`)\r\n[paste_problem_win.zip](https://github.com/saulpw/visidata/files/11748544/paste_problem_win.zip)\r\n\r\n to show the steps that led to the issue.\r\nSee [here](http://visidata.org/docs/save-restore/) for more details.\r\n\r\n**Additional context**\r\nPlease include the version of VisiData and Python.\r\nVisidata v2.11\r\nPython 3.9.7\n", "before_files": [{"content": "from copy import copy, deepcopy\nimport shutil\nimport subprocess\nimport io\nimport sys\nimport tempfile\nimport functools\nimport os\n\nfrom visidata import VisiData, vd, asyncthread\nfrom visidata import Sheet, Path\n\nif sys.platform == 'win32':\n syscopy_cmd_default = 'clip.exe'\n syspaste_cmd_default = 'clip.exe'\nelif sys.platform == 'darwin':\n syscopy_cmd_default = 'pbcopy w'\n syspaste_cmd_default = 'pbpaste'\nelse:\n if 'WAYLAND_DISPLAY' in os.environ:\n syscopy_cmd_default = 'wl-copy'\n syspaste_cmd_default = 'wl-paste'\n else:\n syscopy_cmd_default = 'xclip -selection clipboard -filter' # xsel --clipboard --input\n syspaste_cmd_default = 'xclip -selection clipboard -o' # xsel --clipboard\n\nvd.option('clipboard_copy_cmd', syscopy_cmd_default, 'command to copy stdin to system clipboard', sheettype=None)\nvd.option('clipboard_paste_cmd', syspaste_cmd_default, 'command to send contents of system clipboard to stdout', sheettype=None)\n\n\[email protected]\ndef copyRows(sheet, rows):\n vd.memory.cliprows = rows\n vd.memory.clipcols = list(sheet.visibleCols)\n if not rows:\n vd.warning('no %s selected; clipboard emptied' % sheet.rowtype)\n else:\n vd.status('copied %d %s to clipboard' % (len(rows), sheet.rowtype))\n\[email protected]\ndef copyCells(sheet, col, rows):\n vd.memory.clipcells = [col.getTypedValue(r) for r in rows]\n if not rows:\n vd.warning('no %s selected; clipboard emptied' % sheet.rowtype)\n return\n vd.status('copied %d %s.%s to clipboard' % (len(rows), 
sheet.rowtype, col.name))\n\n\[email protected]\ndef syscopyValue(sheet, val):\n # pipe val to stdin of clipboard command\n\n p = subprocess.run(\n sheet.options.clipboard_copy_cmd.split(),\n input=val,\n encoding=sheet.options.encoding,\n stdout=subprocess.DEVNULL)\n\n vd.status('copied value to system clipboard')\n\n\[email protected]\ndef syscopyCells(sheet, cols, rows, filetype=None):\n filetype = filetype or vd.input(\"copy %d %s as filetype: \" % (len(rows), sheet.rowtype), value=sheet.options.save_filetype or 'tsv')\n sheet.syscopyCells_async(cols, rows, filetype)\n\n\[email protected]\n@asyncthread\ndef syscopyCells_async(sheet, cols, rows, filetype):\n vs = copy(sheet)\n vs.rows = rows or vd.fail('no %s selected' % sheet.rowtype)\n vs.columns = cols\n\n vd.status(f'copying {vs.nRows} {vs.rowtype} to system clipboard as {filetype}')\n\n with io.StringIO() as buf:\n vd.sync(vd.saveSheets(Path(sheet.name+'.'+filetype, fptext=buf), vs))\n subprocess.run(\n sheet.options.clipboard_copy_cmd.split(),\n input=buf.getvalue(),\n encoding=sheet.options.encoding,\n stdout=subprocess.DEVNULL)\n\n\[email protected]\ndef sysclipValue(vd):\n cmd = vd.options.clipboard_paste_cmd\n return subprocess.check_output(vd.options.clipboard_paste_cmd.split()).decode('utf-8')\n\n\[email protected]\n@asyncthread\ndef pasteFromClipboard(vd, cols, rows):\n text = vd.getLastArgs() or vd.sysclipValue().strip() or vd.fail('system clipboard is empty')\n\n vd.addUndoSetValues(cols, rows)\n lines = text.split('\\n')\n if not lines:\n vd.warning('nothing to paste')\n return\n\n vs = cols[0].sheet\n newrows = [vs.newRow() for i in range(len(lines)-len(rows))]\n if newrows:\n rows.extend(newrows)\n vs.addRows(newrows)\n\n for line, r in zip(lines, rows):\n for v, c in zip(line.split('\\t'), cols):\n c.setValue(r, v)\n\n\[email protected]\ndef delete_row(sheet, rowidx):\n if not sheet.defer:\n oldrow = sheet.rows.pop(rowidx)\n vd.addUndo(sheet.rows.insert, rowidx, oldrow)\n # clear the deleted row from selected rows\n if sheet.isSelected(oldrow):\n sheet.addUndoSelection()\n sheet.unselectRow(oldrow)\n else:\n oldrow = sheet.rows[rowidx]\n sheet.rowDeleted(oldrow)\n\n sheet.setModified()\n return oldrow\n\[email protected]\ndef paste_after(sheet, rowidx):\n if not vd.memory.cliprows: #1793\n vd.warning('nothing to paste')\n return\n to_paste = list(deepcopy(r) for r in reversed(vd.memory.cliprows))\n sheet.addRows(to_paste, index=rowidx)\n\n\n\nSheet.addCommand('y', 'copy-row', 'copyRows([cursorRow])', 'yank (copy) current row to clipboard')\n\nSheet.addCommand('p', 'paste-after', 'paste_after(cursorRowIndex)', 'paste clipboard rows after current row')\nSheet.addCommand('P', 'paste-before', 'paste_after(cursorRowIndex-1)', 'paste clipboard rows before current row')\n\nSheet.addCommand('gy', 'copy-selected', 'copyRows(onlySelectedRows)', 'yank (copy) selected rows to clipboard')\n\nSheet.addCommand('zy', 'copy-cell', 'copyCells(cursorCol, [cursorRow]); vd.memo(\"clipval\", cursorCol, cursorRow)', 'yank (copy) current cell to clipboard')\nSheet.addCommand('zp', 'paste-cell', 'cursorCol.setValuesTyped([cursorRow], vd.memory.clipval)', 'set contents of current cell to last clipboard value')\n\nSheet.addCommand('d', 'delete-row', 'delete_row(cursorRowIndex); defer and cursorDown(1)', 'delete current row')\nSheet.addCommand('gd', 'delete-selected', 'deleteSelected()', 'delete selected rows')\nSheet.addCommand('zd', 'delete-cell', 'cursorCol.setValues([cursorRow], options.null_value)', 'delete current cell (set to 
None)')\nSheet.addCommand('gzd', 'delete-cells', 'cursorCol.setValues(onlySelectedRows, options.null_value)', 'delete contents of current column for selected rows (set to None)')\n\nSheet.bindkey('BUTTON2_PRESSED', 'go-mouse')\nSheet.addCommand('BUTTON2_RELEASED', 'syspaste-cells', 'pasteFromClipboard(visibleCols[cursorVisibleColIndex:], rows[cursorRowIndex:])', 'paste from system clipboard to region starting at cursor')\nSheet.bindkey('BUTTON2_CLICKED', 'go-mouse')\nSheet.bindkey('zP', 'syspaste-cells')\nSheet.addCommand('gzP', 'syspaste-cells-selected', 'pasteFromClipboard(visibleCols[cursorVisibleColIndex:], someSelectedRows)', 'paste from system clipboard to selected cells')\n\nSheet.addCommand('gzy', 'copy-cells', 'copyCells(cursorCol, onlySelectedRows)', 'yank (copy) contents of current column for selected rows to clipboard')\nSheet.addCommand('gzp', 'setcol-clipboard', 'for r, v in zip(onlySelectedRows, itertools.cycle(vd.memory.clipcells or [None])): cursorCol.setValuesTyped([r], v)', 'set cells of current column for selected rows to last clipboard value')\n\nSheet.addCommand('Y', 'syscopy-row', 'syscopyCells(visibleCols, [cursorRow])', 'yank (copy) current row to system clipboard (using options.clipboard_copy_cmd)')\n\nSheet.addCommand('gY', 'syscopy-selected', 'syscopyCells(visibleCols, onlySelectedRows)', 'yank (copy) selected rows to system clipboard (using options.clipboard_copy_cmd)')\nSheet.addCommand('zY', 'syscopy-cell', 'syscopyValue(cursorDisplay)', 'yank (copy) current cell to system clipboard (using options.clipboard_copy_cmd)')\nSheet.addCommand('gzY', 'syscopy-cells', 'syscopyCells([cursorCol], onlySelectedRows, filetype=\"txt\")', 'yank (copy) contents of current column from selected rows to system clipboard (using options.clipboard_copy_cmd')\n\nSheet.addCommand('x', 'cut-row', 'copyRows([sheet.delete_row(cursorRowIndex)]); defer and cursorDown(1)', 'delete (cut) current row and move it to clipboard')\nSheet.addCommand('gx', 'cut-selected', 'copyRows(onlySelectedRows); deleteSelected()', 'delete (cut) selected rows and move them to clipboard')\nSheet.addCommand('zx', 'cut-cell', 'copyCells(cursorCol, [cursorRow]); cursorCol.setValues([cursorRow], None)', 'delete (cut) current cell and move it to clipboard')\nSheet.addCommand('gzx', 'cut-cells', 'copyCells(cursorCol, onlySelectedRows); cursorCol.setValues(onlySelectedRows, None)', 'delete (cut) contents of current column for selected rows and move them to clipboard')\n\n\nSheet.bindkey('KEY_DC', 'delete-cell'),\nSheet.bindkey('gKEY_DC', 'delete-cells'),\n\nvd.addMenuItems('''\n Edit > Delete > current row > delete-row\n Edit > Delete > current cell > delete-cell\n Edit > Delete > selected rows > delete-selected\n Edit > Delete > selected cells > delete-cells\n Edit > Copy > current cell > copy-cell\n Edit > Copy > current row > copy-row\n Edit > Copy > selected cells > copy-cells\n Edit > Copy > selected rows > copy-selected\n Edit > Copy > to system clipboard > current cell > syscopy-cell\n Edit > Copy > to system clipboard > current row > syscopy-row\n Edit > Copy > to system clipboard > selected cells > syscopy-cells\n Edit > Copy > to system clipboard > selected rows > syscopy-selected\n Edit > Cut > current row > cut-row\n Edit > Cut > selected cells > cut-selected\n Edit > Cut > current cell > cut-cell\n Edit > Paste > row after > paste-after\n Edit > Paste > row before > paste-before\n Edit > Paste > into selected cells > setcol-clipboard\n Edit > Paste > into current cell > paste-cell\n Edit > Paste > from 
system clipboard > cells at cursor > syspaste-cells\n Edit > Paste > from system clipboard > selected cells > syspaste-cells-selected\n''')\n", "path": "visidata/clipboard.py"}]}
| 3,732 | 123 |
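Editor's note (illustrative sketch, not part of the dataset records): the golden_diff in the record above only swaps the default Windows paste command, because `clip.exe` can write to the clipboard but cannot read it back, while `powershell -command Get-Clipboard` can. The snippet below is an assumption-laden sketch rather than VisiData code: the helper name `read_windows_clipboard` is hypothetical, and the subprocess call simply mirrors the `sysclipValue` helper visible in the record's `before_files`.

```python
import subprocess

def read_windows_clipboard() -> str:
    # The command string the golden_diff installs as the new Windows default;
    # .split() yields ["powershell", "-command", "Get-Clipboard"].
    cmd = "powershell -command Get-Clipboard"
    return subprocess.check_output(cmd.split()).decode("utf-8")

if __name__ == "__main__":
    print(read_windows_clipboard())  # requires a Windows host with PowerShell
```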
gh_patches_debug_33854
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1551
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use a different colour for waiting list events
### Is your feature request related to a problem? Please describe.
In the event overview calendar, the dot in front of the event is colored as if you are registered. And this is confusing.
### Describe the solution you'd like
Use a different colour for events you're on the waiting list for.
### Motivation
Better user experience.
### Describe alternatives you've considered
There is not really any alternative
### Additional context
Created based on #1007
</issue>
<code>
[start of website/events/api/calendarjs/serializers.py]
1 from rest_framework.reverse import reverse
2
3 from events import services
4 from events.models import Event
5 from thaliawebsite.api.calendarjs.serializers import CalenderJSSerializer
6
7
8 class EventsCalenderJSSerializer(CalenderJSSerializer):
9 class Meta(CalenderJSSerializer.Meta):
10 model = Event
11
12 def _url(self, instance):
13 return reverse("events:event", kwargs={"pk": instance.id})
14
15 def _class_names(self, instance):
16 class_names = ["regular-event"]
17 if self.context["member"] and services.is_user_registered(
18 self.context["member"], instance
19 ):
20 class_names.append("has-registration")
21 return class_names
22
23
24 class UnpublishedEventsCalenderJSSerializer(CalenderJSSerializer):
25 """See CalenderJSSerializer, customised classes."""
26
27 class Meta(CalenderJSSerializer.Meta):
28 model = Event
29
30 def _class_names(self, instance):
31 return ["unpublished-event"]
32
33 def _url(self, instance):
34 return reverse("admin:events_event_details", kwargs={"pk": instance.id})
35
[end of website/events/api/calendarjs/serializers.py]
[start of website/events/services.py]
1 from collections import OrderedDict
2
3 from django.utils import timezone
4 from django.utils.datetime_safe import date
5 from django.utils.translation import gettext_lazy as _, get_language
6
7 from events import emails
8 from events.exceptions import RegistrationError
9 from events.models import EventRegistration, RegistrationInformationField, Event
10 from payments.api.v1.fields import PaymentTypeField
11 from payments.services import create_payment, delete_payment
12 from utils.snippets import datetime_to_lectureyear
13
14
15 def is_user_registered(member, event):
16 """Return if the user is registered for the specified event.
17
18 :param member: the user
19 :param event: the event
20 :return: None if registration is not required or no member else True/False
21 """
22 if not event.registration_required or not member.is_authenticated:
23 return None
24
25 return event.registrations.filter(member=member, date_cancelled=None).count() > 0
26
27
28 def is_user_present(member, event):
29 if not event.registration_required or not member.is_authenticated:
30 return None
31
32 return (
33 event.registrations.filter(
34 member=member, date_cancelled=None, present=True
35 ).count()
36 > 0
37 )
38
39
40 def event_permissions(member, event, name=None):
41 """Return a dictionary with the available event permissions of the user.
42
43 :param member: the user
44 :param event: the event
45 :param name: the name of a non member registration
46 :return: the permission dictionary
47 """
48 perms = {
49 "create_registration": False,
50 "cancel_registration": False,
51 "update_registration": False,
52 "manage_event": is_organiser(member, event),
53 }
54 if not member:
55 return perms
56 if not (member.is_authenticated or name):
57 return perms
58
59 registration = None
60 try:
61 registration = EventRegistration.objects.get(
62 event=event, member=member, name=name
63 )
64 except EventRegistration.DoesNotExist:
65 pass
66
67 perms["create_registration"] = (
68 (registration is None or registration.date_cancelled is not None)
69 and (event.registration_allowed or not event.registration_required)
70 and (name or member.can_attend_events)
71 )
72 perms["cancel_registration"] = (
73 registration is not None
74 and registration.date_cancelled is None
75 and (event.cancellation_allowed or name or not event.registration_required)
76 and registration.payment is None
77 )
78 perms["update_registration"] = (
79 registration is not None
80 and registration.date_cancelled is None
81 and event.has_fields
82 and (event.registration_allowed or not event.registration_required)
83 and (name or member.can_attend_events)
84 )
85 return perms
86
87
88 def is_organiser(member, event):
89 if member and member.is_authenticated:
90 if member.is_superuser or member.has_perm("events.override_organiser"):
91 return True
92
93 if event:
94 return member.get_member_groups().filter(pk=event.organiser.pk).count() != 0
95
96 return False
97
98
99 def create_registration(member, event):
100 """Create a new user registration for an event.
101
102 :param member: the user
103 :param event: the event
104 :return: Return the registration if successful
105 """
106 if event_permissions(member, event)["create_registration"]:
107 registration = None
108 try:
109 registration = EventRegistration.objects.get(event=event, member=member)
110 except EventRegistration.DoesNotExist:
111 pass
112
113 if registration is None:
114 return EventRegistration.objects.create(event=event, member=member)
115 if registration.date_cancelled is not None:
116 if registration.is_late_cancellation():
117 raise RegistrationError(
118 _(
119 "You cannot re-register anymore "
120 "since you've cancelled after the "
121 "deadline."
122 )
123 )
124 registration.date = timezone.now()
125 registration.date_cancelled = None
126 registration.save()
127
128 return registration
129 if event_permissions(member, event)["cancel_registration"]:
130 raise RegistrationError(_("You were already registered."))
131 raise RegistrationError(_("You may not register."))
132
133
134 def cancel_registration(member, event):
135 """Cancel a user registration for an event.
136
137 :param member: the user
138 :param event: the event
139 """
140 registration = None
141 try:
142 registration = EventRegistration.objects.get(event=event, member=member)
143 except EventRegistration.DoesNotExist:
144 pass
145
146 if event_permissions(member, event)["cancel_registration"] and registration:
147 if not registration.queue_position:
148 emails.notify_first_waiting(event)
149
150 if event.send_cancel_email and event.after_cancel_deadline:
151 emails.notify_organiser(event, registration)
152
153 # Note that this doesn"t remove the values for the
154 # information fields that the user entered upon registering.
155 # But this is regarded as a feature, not a bug. Especially
156 # since the values will still appear in the backend.
157 registration.date_cancelled = timezone.now()
158 registration.save()
159 else:
160 raise RegistrationError(_("You are not allowed to deregister for this event."))
161
162
163 def update_registration(
164 member=None, event=None, name=None, registration=None, field_values=None
165 ):
166 """Update a user registration of an event.
167
168 :param request: http request
169 :param member: the user
170 :param event: the event
171 :param name: the name of a registration not associated with a user
172 :param registration: the registration
173 :param field_values: values for the information fields
174 """
175 if not registration:
176 try:
177 registration = EventRegistration.objects.get(
178 event=event, member=member, name=name
179 )
180 except EventRegistration.DoesNotExist as error:
181 raise RegistrationError(
182 _("You are not registered for this event.")
183 ) from error
184 else:
185 member = registration.member
186 event = registration.event
187 name = registration.name
188
189 if (
190 not event_permissions(member, event, name)["update_registration"]
191 or not field_values
192 ):
193 return
194
195 for field_id, field_value in field_values:
196 field = RegistrationInformationField.objects.get(
197 id=field_id.replace("info_field_", "")
198 )
199
200 if (
201 field.type == RegistrationInformationField.INTEGER_FIELD
202 and field_value is None
203 ):
204 field_value = 0
205 elif (
206 field.type == RegistrationInformationField.BOOLEAN_FIELD
207 and field_value is None
208 ):
209 field_value = False
210 elif (
211 field.type == RegistrationInformationField.TEXT_FIELD
212 and field_value is None
213 ):
214 field_value = ""
215
216 field.set_value_for(registration, field_value)
217
218
219 def registration_fields(request, member=None, event=None, registration=None, name=None):
220 """Return information about the registration fields of a registration.
221
222 :param member: the user (optional if registration provided)
223 :param name: the name of a non member registration
224 (optional if registration provided)
225 :param event: the event (optional if registration provided)
226 :param registration: the registration (optional if member & event provided)
227 :return: the fields
228 """
229 if registration is None:
230 try:
231 registration = EventRegistration.objects.get(
232 event=event, member=member, name=name
233 )
234 except EventRegistration.DoesNotExist as error:
235 raise RegistrationError(
236 _("You are not registered for this event.")
237 ) from error
238 except EventRegistration.MultipleObjectsReturned as error:
239 raise RegistrationError(
240 _("Unable to find the right registration.")
241 ) from error
242
243 member = registration.member
244 event = registration.event
245 name = registration.name
246
247 perms = event_permissions(member, event, name)[
248 "update_registration"
249 ] or is_organiser(request.member, event)
250 if perms and registration:
251 information_fields = registration.information_fields
252 fields = OrderedDict()
253
254 for information_field in information_fields:
255 field = information_field["field"]
256
257 fields["info_field_{}".format(field.id)] = {
258 "type": field.type,
259 "label": getattr(field, "{}_{}".format("name", get_language())),
260 "description": getattr(
261 field, "{}_{}".format("description", get_language())
262 ),
263 "value": information_field["value"],
264 "required": field.required,
265 }
266
267 return fields
268 raise RegistrationError(_("You are not allowed to update this registration."))
269
270
271 def update_registration_by_organiser(registration, member, data):
272 if not is_organiser(member, registration.event):
273 raise RegistrationError(_("You are not allowed to update this registration."))
274
275 if "payment" in data:
276 if data["payment"]["type"] == PaymentTypeField.NO_PAYMENT:
277 if registration.payment is not None:
278 delete_payment(registration)
279 else:
280 registration.payment = create_payment(
281 payable=registration,
282 processed_by=member,
283 pay_type=data["payment"]["type"],
284 )
285
286 if "present" in data:
287 registration.present = data["present"]
288
289 registration.save()
290
291
292 def generate_category_statistics():
293 """Generate statistics about events, number of events per category.
294
295 :return: Dict with key, value resp. being category, event count.
296 """
297 year = datetime_to_lectureyear(timezone.now())
298
299 data = {}
300 for i in range(5):
301 year_start = date(year=year - i, month=9, day=1)
302 year_end = date(year=year - i + 1, month=9, day=1)
303 data[str(year - i)] = {
304 str(display): Event.objects.filter(
305 category=key, start__gte=year_start, end__lte=year_end
306 ).count()
307 for key, display in Event.EVENT_CATEGORIES
308 }
309
310 return data
311
[end of website/events/services.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/events/api/calendarjs/serializers.py b/website/events/api/calendarjs/serializers.py
--- a/website/events/api/calendarjs/serializers.py
+++ b/website/events/api/calendarjs/serializers.py
@@ -13,12 +13,18 @@
return reverse("events:event", kwargs={"pk": instance.id})
def _class_names(self, instance):
- class_names = ["regular-event"]
if self.context["member"] and services.is_user_registered(
self.context["member"], instance
):
- class_names.append("has-registration")
- return class_names
+ if services.user_registration_pending(self.context["member"], instance):
+ return ["regular-event-pending-registration"]
+ else:
+ return ["regular-event-has-registration"]
+ elif (not instance.registration_required) or instance.registration_allowed:
+ return ["regular-event-registration-open"]
+ else:
+ # I think this handles the case that registration is needed, but not yet possible
+ return ["regular-event-registration-closed"]
class UnpublishedEventsCalenderJSSerializer(CalenderJSSerializer):
diff --git a/website/events/services.py b/website/events/services.py
--- a/website/events/services.py
+++ b/website/events/services.py
@@ -19,12 +19,28 @@
:param event: the event
:return: None if registration is not required or no member else True/False
"""
- if not event.registration_required or not member.is_authenticated:
+ if not member.is_authenticated:
return None
return event.registrations.filter(member=member, date_cancelled=None).count() > 0
+def user_registration_pending(member, event):
+ """Return if the user is in the queue, but not yet registered for, the specific event.
+
+ :param member: the user
+ :param event: the event
+ :return: None if registration is not required or no member else True/False
+ """
+ if not event.registration_required:
+ return False
+ if not member.is_authenticated:
+ return None
+
+ reg = event.registrations.filter(member=member, date_cancelled=None)
+ return len(list(filter(lambda r: r.queue_position, reg))) > 0
+
+
def is_user_present(member, event):
if not event.registration_required or not member.is_authenticated:
return None
|
{"golden_diff": "diff --git a/website/events/api/calendarjs/serializers.py b/website/events/api/calendarjs/serializers.py\n--- a/website/events/api/calendarjs/serializers.py\n+++ b/website/events/api/calendarjs/serializers.py\n@@ -13,12 +13,18 @@\n return reverse(\"events:event\", kwargs={\"pk\": instance.id})\n \n def _class_names(self, instance):\n- class_names = [\"regular-event\"]\n if self.context[\"member\"] and services.is_user_registered(\n self.context[\"member\"], instance\n ):\n- class_names.append(\"has-registration\")\n- return class_names\n+ if services.user_registration_pending(self.context[\"member\"], instance):\n+ return [\"regular-event-pending-registration\"]\n+ else:\n+ return [\"regular-event-has-registration\"]\n+ elif (not instance.registration_required) or instance.registration_allowed:\n+ return [\"regular-event-registration-open\"]\n+ else:\n+ # I think this handles the case that registration is needed, but not yet possible\n+ return [\"regular-event-registration-closed\"]\n \n \n class UnpublishedEventsCalenderJSSerializer(CalenderJSSerializer):\ndiff --git a/website/events/services.py b/website/events/services.py\n--- a/website/events/services.py\n+++ b/website/events/services.py\n@@ -19,12 +19,28 @@\n :param event: the event\n :return: None if registration is not required or no member else True/False\n \"\"\"\n- if not event.registration_required or not member.is_authenticated:\n+ if not member.is_authenticated:\n return None\n \n return event.registrations.filter(member=member, date_cancelled=None).count() > 0\n \n \n+def user_registration_pending(member, event):\n+ \"\"\"Return if the user is in the queue, but not yet registered for, the specific event.\n+\n+ :param member: the user\n+ :param event: the event\n+ :return: None if registration is not required or no member else True/False\n+ \"\"\"\n+ if not event.registration_required:\n+ return False\n+ if not member.is_authenticated:\n+ return None\n+\n+ reg = event.registrations.filter(member=member, date_cancelled=None)\n+ return len(list(filter(lambda r: r.queue_position, reg))) > 0\n+\n+\n def is_user_present(member, event):\n if not event.registration_required or not member.is_authenticated:\n return None\n", "issue": "Use a different colour for waiting list events\n### Is your feature request related to a problem? Please describe.\r\nIn the event overview calendar, the dot in front of the event is colored as if you are registered. 
And this is confusing.\r\n\r\n### Describe the solution you'd like\r\nUse a different colour for events you're on the waiting list for.\r\n\r\n### Motivation\r\nBetter user experience.\r\n\r\n### Describe alternatives you've considered\r\nThere is not really any alternative\r\n\r\n### Additional context\r\nCreated based on #1007\r\n\n", "before_files": [{"content": "from rest_framework.reverse import reverse\n\nfrom events import services\nfrom events.models import Event\nfrom thaliawebsite.api.calendarjs.serializers import CalenderJSSerializer\n\n\nclass EventsCalenderJSSerializer(CalenderJSSerializer):\n class Meta(CalenderJSSerializer.Meta):\n model = Event\n\n def _url(self, instance):\n return reverse(\"events:event\", kwargs={\"pk\": instance.id})\n\n def _class_names(self, instance):\n class_names = [\"regular-event\"]\n if self.context[\"member\"] and services.is_user_registered(\n self.context[\"member\"], instance\n ):\n class_names.append(\"has-registration\")\n return class_names\n\n\nclass UnpublishedEventsCalenderJSSerializer(CalenderJSSerializer):\n \"\"\"See CalenderJSSerializer, customised classes.\"\"\"\n\n class Meta(CalenderJSSerializer.Meta):\n model = Event\n\n def _class_names(self, instance):\n return [\"unpublished-event\"]\n\n def _url(self, instance):\n return reverse(\"admin:events_event_details\", kwargs={\"pk\": instance.id})\n", "path": "website/events/api/calendarjs/serializers.py"}, {"content": "from collections import OrderedDict\n\nfrom django.utils import timezone\nfrom django.utils.datetime_safe import date\nfrom django.utils.translation import gettext_lazy as _, get_language\n\nfrom events import emails\nfrom events.exceptions import RegistrationError\nfrom events.models import EventRegistration, RegistrationInformationField, Event\nfrom payments.api.v1.fields import PaymentTypeField\nfrom payments.services import create_payment, delete_payment\nfrom utils.snippets import datetime_to_lectureyear\n\n\ndef is_user_registered(member, event):\n \"\"\"Return if the user is registered for the specified event.\n\n :param member: the user\n :param event: the event\n :return: None if registration is not required or no member else True/False\n \"\"\"\n if not event.registration_required or not member.is_authenticated:\n return None\n\n return event.registrations.filter(member=member, date_cancelled=None).count() > 0\n\n\ndef is_user_present(member, event):\n if not event.registration_required or not member.is_authenticated:\n return None\n\n return (\n event.registrations.filter(\n member=member, date_cancelled=None, present=True\n ).count()\n > 0\n )\n\n\ndef event_permissions(member, event, name=None):\n \"\"\"Return a dictionary with the available event permissions of the user.\n\n :param member: the user\n :param event: the event\n :param name: the name of a non member registration\n :return: the permission dictionary\n \"\"\"\n perms = {\n \"create_registration\": False,\n \"cancel_registration\": False,\n \"update_registration\": False,\n \"manage_event\": is_organiser(member, event),\n }\n if not member:\n return perms\n if not (member.is_authenticated or name):\n return perms\n\n registration = None\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist:\n pass\n\n perms[\"create_registration\"] = (\n (registration is None or registration.date_cancelled is not None)\n and (event.registration_allowed or not event.registration_required)\n and (name or 
member.can_attend_events)\n )\n perms[\"cancel_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and (event.cancellation_allowed or name or not event.registration_required)\n and registration.payment is None\n )\n perms[\"update_registration\"] = (\n registration is not None\n and registration.date_cancelled is None\n and event.has_fields\n and (event.registration_allowed or not event.registration_required)\n and (name or member.can_attend_events)\n )\n return perms\n\n\ndef is_organiser(member, event):\n if member and member.is_authenticated:\n if member.is_superuser or member.has_perm(\"events.override_organiser\"):\n return True\n\n if event:\n return member.get_member_groups().filter(pk=event.organiser.pk).count() != 0\n\n return False\n\n\ndef create_registration(member, event):\n \"\"\"Create a new user registration for an event.\n\n :param member: the user\n :param event: the event\n :return: Return the registration if successful\n \"\"\"\n if event_permissions(member, event)[\"create_registration\"]:\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if registration is None:\n return EventRegistration.objects.create(event=event, member=member)\n if registration.date_cancelled is not None:\n if registration.is_late_cancellation():\n raise RegistrationError(\n _(\n \"You cannot re-register anymore \"\n \"since you've cancelled after the \"\n \"deadline.\"\n )\n )\n registration.date = timezone.now()\n registration.date_cancelled = None\n registration.save()\n\n return registration\n if event_permissions(member, event)[\"cancel_registration\"]:\n raise RegistrationError(_(\"You were already registered.\"))\n raise RegistrationError(_(\"You may not register.\"))\n\n\ndef cancel_registration(member, event):\n \"\"\"Cancel a user registration for an event.\n\n :param member: the user\n :param event: the event\n \"\"\"\n registration = None\n try:\n registration = EventRegistration.objects.get(event=event, member=member)\n except EventRegistration.DoesNotExist:\n pass\n\n if event_permissions(member, event)[\"cancel_registration\"] and registration:\n if not registration.queue_position:\n emails.notify_first_waiting(event)\n\n if event.send_cancel_email and event.after_cancel_deadline:\n emails.notify_organiser(event, registration)\n\n # Note that this doesn\"t remove the values for the\n # information fields that the user entered upon registering.\n # But this is regarded as a feature, not a bug. 
Especially\n # since the values will still appear in the backend.\n registration.date_cancelled = timezone.now()\n registration.save()\n else:\n raise RegistrationError(_(\"You are not allowed to deregister for this event.\"))\n\n\ndef update_registration(\n member=None, event=None, name=None, registration=None, field_values=None\n):\n \"\"\"Update a user registration of an event.\n\n :param request: http request\n :param member: the user\n :param event: the event\n :param name: the name of a registration not associated with a user\n :param registration: the registration\n :param field_values: values for the information fields\n \"\"\"\n if not registration:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n else:\n member = registration.member\n event = registration.event\n name = registration.name\n\n if (\n not event_permissions(member, event, name)[\"update_registration\"]\n or not field_values\n ):\n return\n\n for field_id, field_value in field_values:\n field = RegistrationInformationField.objects.get(\n id=field_id.replace(\"info_field_\", \"\")\n )\n\n if (\n field.type == RegistrationInformationField.INTEGER_FIELD\n and field_value is None\n ):\n field_value = 0\n elif (\n field.type == RegistrationInformationField.BOOLEAN_FIELD\n and field_value is None\n ):\n field_value = False\n elif (\n field.type == RegistrationInformationField.TEXT_FIELD\n and field_value is None\n ):\n field_value = \"\"\n\n field.set_value_for(registration, field_value)\n\n\ndef registration_fields(request, member=None, event=None, registration=None, name=None):\n \"\"\"Return information about the registration fields of a registration.\n\n :param member: the user (optional if registration provided)\n :param name: the name of a non member registration\n (optional if registration provided)\n :param event: the event (optional if registration provided)\n :param registration: the registration (optional if member & event provided)\n :return: the fields\n \"\"\"\n if registration is None:\n try:\n registration = EventRegistration.objects.get(\n event=event, member=member, name=name\n )\n except EventRegistration.DoesNotExist as error:\n raise RegistrationError(\n _(\"You are not registered for this event.\")\n ) from error\n except EventRegistration.MultipleObjectsReturned as error:\n raise RegistrationError(\n _(\"Unable to find the right registration.\")\n ) from error\n\n member = registration.member\n event = registration.event\n name = registration.name\n\n perms = event_permissions(member, event, name)[\n \"update_registration\"\n ] or is_organiser(request.member, event)\n if perms and registration:\n information_fields = registration.information_fields\n fields = OrderedDict()\n\n for information_field in information_fields:\n field = information_field[\"field\"]\n\n fields[\"info_field_{}\".format(field.id)] = {\n \"type\": field.type,\n \"label\": getattr(field, \"{}_{}\".format(\"name\", get_language())),\n \"description\": getattr(\n field, \"{}_{}\".format(\"description\", get_language())\n ),\n \"value\": information_field[\"value\"],\n \"required\": field.required,\n }\n\n return fields\n raise RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n\ndef update_registration_by_organiser(registration, member, data):\n if not is_organiser(member, registration.event):\n raise 
RegistrationError(_(\"You are not allowed to update this registration.\"))\n\n if \"payment\" in data:\n if data[\"payment\"][\"type\"] == PaymentTypeField.NO_PAYMENT:\n if registration.payment is not None:\n delete_payment(registration)\n else:\n registration.payment = create_payment(\n payable=registration,\n processed_by=member,\n pay_type=data[\"payment\"][\"type\"],\n )\n\n if \"present\" in data:\n registration.present = data[\"present\"]\n\n registration.save()\n\n\ndef generate_category_statistics():\n \"\"\"Generate statistics about events, number of events per category.\n\n :return: Dict with key, value resp. being category, event count.\n \"\"\"\n year = datetime_to_lectureyear(timezone.now())\n\n data = {}\n for i in range(5):\n year_start = date(year=year - i, month=9, day=1)\n year_end = date(year=year - i + 1, month=9, day=1)\n data[str(year - i)] = {\n str(display): Event.objects.filter(\n category=key, start__gte=year_start, end__lte=year_end\n ).count()\n for key, display in Event.EVENT_CATEGORIES\n }\n\n return data\n", "path": "website/events/services.py"}]}
| 3,861 | 531 |
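Editor's note (illustrative sketch, not part of the dataset records): the golden_diff in the record above decides the calendar colour by asking whether a member's uncancelled registration still has a truthy `queue_position`, i.e. whether they are queued rather than confirmed. The sketch below restates that rule with plain dataclasses; the class and field names are simplified assumptions, not concrexit's actual Django models.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Registration:
    member_id: int
    date_cancelled: Optional[str] = None
    queue_position: Optional[int] = None  # truthy -> still on the waiting list

def user_registration_pending(registrations: List[Registration], member_id: int) -> bool:
    # Mirrors the diff: keep uncancelled registrations for this member,
    # then report True if any of them still carries a queue position.
    active = [r for r in registrations
              if r.member_id == member_id and r.date_cancelled is None]
    return any(r.queue_position for r in active)

regs = [Registration(member_id=1, queue_position=3), Registration(member_id=2)]
assert user_registration_pending(regs, 1)      # queued -> "pending" colour
assert not user_registration_pending(regs, 2)  # confirmed -> "has-registration" colour
```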
gh_patches_debug_33960
|
rasdani/github-patches
|
git_diff
|
tensorflow__addons-187
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dense Image Warp tests are flaky
Recently we've seen that #53 is causing flaky failures in the CI. See:
https://source.cloud.google.com/results/invocations/8f31faef-505a-440e-b75f-e6edf1071269/targets/tensorflow_addons%2Fubuntu%2Fgpu%2Fpy3%2Fpresubmit/log
Do you mind taking a look when time allows @WindQAQ ?
</issue>
<code>
[start of tensorflow_addons/image/dense_image_warp.py]
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Image warping using per-pixel flow vectors."""
16 from __future__ import absolute_import
17 from __future__ import division
18 from __future__ import print_function
19
20 import numpy as np
21 import tensorflow as tf
22
23
24 @tf.function
25 def interpolate_bilinear(grid, query_points, indexing="ij", name=None):
26 """Similar to Matlab's interp2 function.
27
28 Finds values for query points on a grid using bilinear interpolation.
29
30 Args:
31 grid: a 4-D float `Tensor` of shape `[batch, height, width, channels]`.
32 query_points: a 3-D float `Tensor` of N points with shape
33 `[batch, N, 2]`.
34 indexing: whether the query points are specified as row and column (ij),
35 or Cartesian coordinates (xy).
36 name: a name for the operation (optional).
37
38 Returns:
39 values: a 3-D `Tensor` with shape `[batch, N, channels]`
40
41 Raises:
42 ValueError: if the indexing mode is invalid, or if the shape of the
43 inputs invalid.
44 """
45 if indexing != "ij" and indexing != "xy":
46 raise ValueError("Indexing mode must be \'ij\' or \'xy\'")
47
48 with tf.name_scope(name or "interpolate_bilinear"):
49 grid = tf.convert_to_tensor(grid)
50 query_points = tf.convert_to_tensor(query_points)
51 shape = grid.get_shape().as_list()
52 if len(shape) != 4:
53 msg = "Grid must be 4 dimensional. Received size: "
54 raise ValueError(msg + str(grid.get_shape()))
55
56 batch_size, height, width, channels = (tf.shape(grid)[0],
57 tf.shape(grid)[1],
58 tf.shape(grid)[2],
59 tf.shape(grid)[3])
60
61 shape = [batch_size, height, width, channels]
62 query_type = query_points.dtype
63 grid_type = grid.dtype
64
65 tf.debugging.assert_equal(
66 len(query_points.get_shape()),
67 3,
68 message="Query points must be 3 dimensional.")
69 tf.debugging.assert_equal(
70 tf.shape(query_points)[2],
71 2,
72 message="Query points must be size 2 in dim 2.")
73
74 num_queries = tf.shape(query_points)[1]
75
76 tf.debugging.assert_greater_equal(
77 height, 2, message="Grid height must be at least 2."),
78 tf.debugging.assert_greater_equal(
79 width, 2, message="Grid width must be at least 2.")
80
81 alphas = []
82 floors = []
83 ceils = []
84 index_order = [0, 1] if indexing == "ij" else [1, 0]
85 unstacked_query_points = tf.unstack(query_points, axis=2)
86
87 for dim in index_order:
88 with tf.name_scope("dim-" + str(dim)):
89 queries = unstacked_query_points[dim]
90
91 size_in_indexing_dimension = shape[dim + 1]
92
93 # max_floor is size_in_indexing_dimension - 2 so that max_floor + 1
94 # is still a valid index into the grid.
95 max_floor = tf.cast(size_in_indexing_dimension - 2, query_type)
96 min_floor = tf.constant(0.0, dtype=query_type)
97 floor = tf.math.minimum(
98 tf.math.maximum(min_floor, tf.math.floor(queries)),
99 max_floor)
100 int_floor = tf.cast(floor, tf.dtypes.int32)
101 floors.append(int_floor)
102 ceil = int_floor + 1
103 ceils.append(ceil)
104
105 # alpha has the same type as the grid, as we will directly use alpha
106 # when taking linear combinations of pixel values from the image.
107 alpha = tf.cast(queries - floor, grid_type)
108 min_alpha = tf.constant(0.0, dtype=grid_type)
109 max_alpha = tf.constant(1.0, dtype=grid_type)
110 alpha = tf.math.minimum(
111 tf.math.maximum(min_alpha, alpha), max_alpha)
112
113 # Expand alpha to [b, n, 1] so we can use broadcasting
114 # (since the alpha values don't depend on the channel).
115 alpha = tf.expand_dims(alpha, 2)
116 alphas.append(alpha)
117
118 tf.debugging.assert_less_equal(
119 tf.cast(batch_size * height * width, dtype=tf.dtypes.float32),
120 np.iinfo(np.int32).max / 8.0,
121 message="The image size or batch size is sufficiently large "
122 "that the linearized addresses used by tf.gather "
123 "may exceed the int32 limit.")
124 flattened_grid = tf.reshape(grid,
125 [batch_size * height * width, channels])
126 batch_offsets = tf.reshape(
127 tf.range(batch_size) * height * width, [batch_size, 1])
128
129 # This wraps tf.gather. We reshape the image data such that the
130 # batch, y, and x coordinates are pulled into the first dimension.
131 # Then we gather. Finally, we reshape the output back. It's possible this
132 # code would be made simpler by using tf.gather_nd.
133 def gather(y_coords, x_coords, name):
134 with tf.name_scope("gather-" + name):
135 linear_coordinates = (
136 batch_offsets + y_coords * width + x_coords)
137 gathered_values = tf.gather(flattened_grid, linear_coordinates)
138 return tf.reshape(gathered_values,
139 [batch_size, num_queries, channels])
140
141 # grab the pixel values in the 4 corners around each query point
142 top_left = gather(floors[0], floors[1], "top_left")
143 top_right = gather(floors[0], ceils[1], "top_right")
144 bottom_left = gather(ceils[0], floors[1], "bottom_left")
145 bottom_right = gather(ceils[0], ceils[1], "bottom_right")
146
147 # now, do the actual interpolation
148 with tf.name_scope("interpolate"):
149 interp_top = alphas[1] * (top_right - top_left) + top_left
150 interp_bottom = alphas[1] * (
151 bottom_right - bottom_left) + bottom_left
152 interp = alphas[0] * (interp_bottom - interp_top) + interp_top
153
154 return interp
155
156
157 @tf.function
158 def dense_image_warp(image, flow, name=None):
159 """Image warping using per-pixel flow vectors.
160
161 Apply a non-linear warp to the image, where the warp is specified by a
162 dense flow field of offset vectors that define the correspondences of
163 pixel values in the output image back to locations in the source image.
164 Specifically, the pixel value at output[b, j, i, c] is
165 images[b, j - flow[b, j, i, 0], i - flow[b, j, i, 1], c].
166
167 The locations specified by this formula do not necessarily map to an int
168 index. Therefore, the pixel value is obtained by bilinear
169 interpolation of the 4 nearest pixels around
170 (b, j - flow[b, j, i, 0], i - flow[b, j, i, 1]). For locations outside
171 of the image, we use the nearest pixel values at the image boundary.
172
173 Args:
174 image: 4-D float `Tensor` with shape `[batch, height, width, channels]`.
175 flow: A 4-D float `Tensor` with shape `[batch, height, width, 2]`.
176 name: A name for the operation (optional).
177
178 Note that image and flow can be of type tf.half, tf.float32, or
179 tf.float64, and do not necessarily have to be the same type.
180
181 Returns:
182 A 4-D float `Tensor` with shape`[batch, height, width, channels]`
183 and same type as input image.
184
185 Raises:
186 ValueError: if height < 2 or width < 2 or the inputs have the wrong
187 number of dimensions.
188 """
189 with tf.name_scope(name or "dense_image_warp"):
190 image = tf.convert_to_tensor(image)
191 flow = tf.convert_to_tensor(flow)
192 batch_size, height, width, channels = (tf.shape(image)[0],
193 tf.shape(image)[1],
194 tf.shape(image)[2],
195 tf.shape(image)[3])
196
197 # The flow is defined on the image grid. Turn the flow into a list of query
198 # points in the grid space.
199 grid_x, grid_y = tf.meshgrid(tf.range(width), tf.range(height))
200 stacked_grid = tf.cast(tf.stack([grid_y, grid_x], axis=2), flow.dtype)
201 batched_grid = tf.expand_dims(stacked_grid, axis=0)
202 query_points_on_grid = batched_grid - flow
203 query_points_flattened = tf.reshape(query_points_on_grid,
204 [batch_size, height * width, 2])
205 # Compute values at the query points, then reshape the result back to the
206 # image grid.
207 interpolated = interpolate_bilinear(image, query_points_flattened)
208 interpolated = tf.reshape(interpolated,
209 [batch_size, height, width, channels])
210 return interpolated
211
[end of tensorflow_addons/image/dense_image_warp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tensorflow_addons/image/dense_image_warp.py b/tensorflow_addons/image/dense_image_warp.py
--- a/tensorflow_addons/image/dense_image_warp.py
+++ b/tensorflow_addons/image/dense_image_warp.py
@@ -21,7 +21,6 @@
import tensorflow as tf
[email protected]
def interpolate_bilinear(grid, query_points, indexing="ij", name=None):
"""Similar to Matlab's interp2 function.
@@ -48,30 +47,28 @@
with tf.name_scope(name or "interpolate_bilinear"):
grid = tf.convert_to_tensor(grid)
query_points = tf.convert_to_tensor(query_points)
- shape = grid.get_shape().as_list()
- if len(shape) != 4:
+
+ if len(grid.shape) != 4:
msg = "Grid must be 4 dimensional. Received size: "
- raise ValueError(msg + str(grid.get_shape()))
+ raise ValueError(msg + str(grid.shape))
+
+ if len(query_points.shape) != 3:
+ raise ValueError("Query points must be 3 dimensional.")
+
+ grid_shape = tf.shape(grid)
+ query_shape = tf.shape(query_points)
- batch_size, height, width, channels = (tf.shape(grid)[0],
- tf.shape(grid)[1],
- tf.shape(grid)[2],
- tf.shape(grid)[3])
+ batch_size, height, width, channels = (grid_shape[0], grid_shape[1],
+ grid_shape[2], grid_shape[3])
shape = [batch_size, height, width, channels]
+ num_queries = query_shape[1]
+
query_type = query_points.dtype
grid_type = grid.dtype
tf.debugging.assert_equal(
- len(query_points.get_shape()),
- 3,
- message="Query points must be 3 dimensional.")
- tf.debugging.assert_equal(
- tf.shape(query_points)[2],
- 2,
- message="Query points must be size 2 in dim 2.")
-
- num_queries = tf.shape(query_points)[1]
+ query_shape[2], 2, message="Query points must be size 2 in dim 2.")
tf.debugging.assert_greater_equal(
height, 2, message="Grid height must be at least 2."),
|
{"golden_diff": "diff --git a/tensorflow_addons/image/dense_image_warp.py b/tensorflow_addons/image/dense_image_warp.py\n--- a/tensorflow_addons/image/dense_image_warp.py\n+++ b/tensorflow_addons/image/dense_image_warp.py\n@@ -21,7 +21,6 @@\n import tensorflow as tf\n \n \[email protected]\n def interpolate_bilinear(grid, query_points, indexing=\"ij\", name=None):\n \"\"\"Similar to Matlab's interp2 function.\n \n@@ -48,30 +47,28 @@\n with tf.name_scope(name or \"interpolate_bilinear\"):\n grid = tf.convert_to_tensor(grid)\n query_points = tf.convert_to_tensor(query_points)\n- shape = grid.get_shape().as_list()\n- if len(shape) != 4:\n+\n+ if len(grid.shape) != 4:\n msg = \"Grid must be 4 dimensional. Received size: \"\n- raise ValueError(msg + str(grid.get_shape()))\n+ raise ValueError(msg + str(grid.shape))\n+\n+ if len(query_points.shape) != 3:\n+ raise ValueError(\"Query points must be 3 dimensional.\")\n+\n+ grid_shape = tf.shape(grid)\n+ query_shape = tf.shape(query_points)\n \n- batch_size, height, width, channels = (tf.shape(grid)[0],\n- tf.shape(grid)[1],\n- tf.shape(grid)[2],\n- tf.shape(grid)[3])\n+ batch_size, height, width, channels = (grid_shape[0], grid_shape[1],\n+ grid_shape[2], grid_shape[3])\n \n shape = [batch_size, height, width, channels]\n+ num_queries = query_shape[1]\n+\n query_type = query_points.dtype\n grid_type = grid.dtype\n \n tf.debugging.assert_equal(\n- len(query_points.get_shape()),\n- 3,\n- message=\"Query points must be 3 dimensional.\")\n- tf.debugging.assert_equal(\n- tf.shape(query_points)[2],\n- 2,\n- message=\"Query points must be size 2 in dim 2.\")\n-\n- num_queries = tf.shape(query_points)[1]\n+ query_shape[2], 2, message=\"Query points must be size 2 in dim 2.\")\n \n tf.debugging.assert_greater_equal(\n height, 2, message=\"Grid height must be at least 2.\"),\n", "issue": "Dense Image Warp tests are flaky\nRecently we've seen that #53 is causing flaky failures in the CI. See:\r\nhttps://source.cloud.google.com/results/invocations/8f31faef-505a-440e-b75f-e6edf1071269/targets/tensorflow_addons%2Fubuntu%2Fgpu%2Fpy3%2Fpresubmit/log\r\n\r\nDo you mind taking a look when time allows @WindQAQ ?\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Image warping using per-pixel flow vectors.\"\"\"\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport numpy as np\nimport tensorflow as tf\n\n\[email protected]\ndef interpolate_bilinear(grid, query_points, indexing=\"ij\", name=None):\n \"\"\"Similar to Matlab's interp2 function.\n\n Finds values for query points on a grid using bilinear interpolation.\n\n Args:\n grid: a 4-D float `Tensor` of shape `[batch, height, width, channels]`.\n query_points: a 3-D float `Tensor` of N points with shape\n `[batch, N, 2]`.\n indexing: whether the query points are specified as row and column (ij),\n or Cartesian coordinates (xy).\n name: a name for the operation (optional).\n\n Returns:\n values: a 3-D `Tensor` with shape `[batch, N, channels]`\n\n Raises:\n ValueError: if the indexing mode is invalid, or if the shape of the\n inputs invalid.\n \"\"\"\n if indexing != \"ij\" and indexing != \"xy\":\n raise ValueError(\"Indexing mode must be \\'ij\\' or \\'xy\\'\")\n\n with tf.name_scope(name or \"interpolate_bilinear\"):\n grid = tf.convert_to_tensor(grid)\n query_points = tf.convert_to_tensor(query_points)\n shape = grid.get_shape().as_list()\n if len(shape) != 4:\n msg = \"Grid must be 4 dimensional. 
Received size: \"\n raise ValueError(msg + str(grid.get_shape()))\n\n batch_size, height, width, channels = (tf.shape(grid)[0],\n tf.shape(grid)[1],\n tf.shape(grid)[2],\n tf.shape(grid)[3])\n\n shape = [batch_size, height, width, channels]\n query_type = query_points.dtype\n grid_type = grid.dtype\n\n tf.debugging.assert_equal(\n len(query_points.get_shape()),\n 3,\n message=\"Query points must be 3 dimensional.\")\n tf.debugging.assert_equal(\n tf.shape(query_points)[2],\n 2,\n message=\"Query points must be size 2 in dim 2.\")\n\n num_queries = tf.shape(query_points)[1]\n\n tf.debugging.assert_greater_equal(\n height, 2, message=\"Grid height must be at least 2.\"),\n tf.debugging.assert_greater_equal(\n width, 2, message=\"Grid width must be at least 2.\")\n\n alphas = []\n floors = []\n ceils = []\n index_order = [0, 1] if indexing == \"ij\" else [1, 0]\n unstacked_query_points = tf.unstack(query_points, axis=2)\n\n for dim in index_order:\n with tf.name_scope(\"dim-\" + str(dim)):\n queries = unstacked_query_points[dim]\n\n size_in_indexing_dimension = shape[dim + 1]\n\n # max_floor is size_in_indexing_dimension - 2 so that max_floor + 1\n # is still a valid index into the grid.\n max_floor = tf.cast(size_in_indexing_dimension - 2, query_type)\n min_floor = tf.constant(0.0, dtype=query_type)\n floor = tf.math.minimum(\n tf.math.maximum(min_floor, tf.math.floor(queries)),\n max_floor)\n int_floor = tf.cast(floor, tf.dtypes.int32)\n floors.append(int_floor)\n ceil = int_floor + 1\n ceils.append(ceil)\n\n # alpha has the same type as the grid, as we will directly use alpha\n # when taking linear combinations of pixel values from the image.\n alpha = tf.cast(queries - floor, grid_type)\n min_alpha = tf.constant(0.0, dtype=grid_type)\n max_alpha = tf.constant(1.0, dtype=grid_type)\n alpha = tf.math.minimum(\n tf.math.maximum(min_alpha, alpha), max_alpha)\n\n # Expand alpha to [b, n, 1] so we can use broadcasting\n # (since the alpha values don't depend on the channel).\n alpha = tf.expand_dims(alpha, 2)\n alphas.append(alpha)\n\n tf.debugging.assert_less_equal(\n tf.cast(batch_size * height * width, dtype=tf.dtypes.float32),\n np.iinfo(np.int32).max / 8.0,\n message=\"The image size or batch size is sufficiently large \"\n \"that the linearized addresses used by tf.gather \"\n \"may exceed the int32 limit.\")\n flattened_grid = tf.reshape(grid,\n [batch_size * height * width, channels])\n batch_offsets = tf.reshape(\n tf.range(batch_size) * height * width, [batch_size, 1])\n\n # This wraps tf.gather. We reshape the image data such that the\n # batch, y, and x coordinates are pulled into the first dimension.\n # Then we gather. Finally, we reshape the output back. 
It's possible this\n # code would be made simpler by using tf.gather_nd.\n def gather(y_coords, x_coords, name):\n with tf.name_scope(\"gather-\" + name):\n linear_coordinates = (\n batch_offsets + y_coords * width + x_coords)\n gathered_values = tf.gather(flattened_grid, linear_coordinates)\n return tf.reshape(gathered_values,\n [batch_size, num_queries, channels])\n\n # grab the pixel values in the 4 corners around each query point\n top_left = gather(floors[0], floors[1], \"top_left\")\n top_right = gather(floors[0], ceils[1], \"top_right\")\n bottom_left = gather(ceils[0], floors[1], \"bottom_left\")\n bottom_right = gather(ceils[0], ceils[1], \"bottom_right\")\n\n # now, do the actual interpolation\n with tf.name_scope(\"interpolate\"):\n interp_top = alphas[1] * (top_right - top_left) + top_left\n interp_bottom = alphas[1] * (\n bottom_right - bottom_left) + bottom_left\n interp = alphas[0] * (interp_bottom - interp_top) + interp_top\n\n return interp\n\n\[email protected]\ndef dense_image_warp(image, flow, name=None):\n \"\"\"Image warping using per-pixel flow vectors.\n\n Apply a non-linear warp to the image, where the warp is specified by a\n dense flow field of offset vectors that define the correspondences of\n pixel values in the output image back to locations in the source image.\n Specifically, the pixel value at output[b, j, i, c] is\n images[b, j - flow[b, j, i, 0], i - flow[b, j, i, 1], c].\n\n The locations specified by this formula do not necessarily map to an int\n index. Therefore, the pixel value is obtained by bilinear\n interpolation of the 4 nearest pixels around\n (b, j - flow[b, j, i, 0], i - flow[b, j, i, 1]). For locations outside\n of the image, we use the nearest pixel values at the image boundary.\n\n Args:\n image: 4-D float `Tensor` with shape `[batch, height, width, channels]`.\n flow: A 4-D float `Tensor` with shape `[batch, height, width, 2]`.\n name: A name for the operation (optional).\n\n Note that image and flow can be of type tf.half, tf.float32, or\n tf.float64, and do not necessarily have to be the same type.\n\n Returns:\n A 4-D float `Tensor` with shape`[batch, height, width, channels]`\n and same type as input image.\n\n Raises:\n ValueError: if height < 2 or width < 2 or the inputs have the wrong\n number of dimensions.\n \"\"\"\n with tf.name_scope(name or \"dense_image_warp\"):\n image = tf.convert_to_tensor(image)\n flow = tf.convert_to_tensor(flow)\n batch_size, height, width, channels = (tf.shape(image)[0],\n tf.shape(image)[1],\n tf.shape(image)[2],\n tf.shape(image)[3])\n\n # The flow is defined on the image grid. Turn the flow into a list of query\n # points in the grid space.\n grid_x, grid_y = tf.meshgrid(tf.range(width), tf.range(height))\n stacked_grid = tf.cast(tf.stack([grid_y, grid_x], axis=2), flow.dtype)\n batched_grid = tf.expand_dims(stacked_grid, axis=0)\n query_points_on_grid = batched_grid - flow\n query_points_flattened = tf.reshape(query_points_on_grid,\n [batch_size, height * width, 2])\n # Compute values at the query points, then reshape the result back to the\n # image grid.\n interpolated = interpolate_bilinear(image, query_points_flattened)\n interpolated = tf.reshape(interpolated,\n [batch_size, height, width, channels])\n return interpolated\n", "path": "tensorflow_addons/image/dense_image_warp.py"}]}
| 3,304 | 522 |
gh_patches_debug_24019
|
rasdani/github-patches
|
git_diff
|
nilearn__nilearn-2096
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scipy.misc.imread() replaced by scipy.imageio.imread() in v1.2
`scipy.misc.imread()` was deprecated in SciPy 1.0 & replaced in SciPy 1.2 by `scipy.imageio.imread()`
https://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.misc.imread.html
This is causing failures in CircleCI.
I will work on this once the PR #2076 doctest problem has been addressed, since we need this issue to be resolved before it can be merged. I intend to do so today.
</issue>
<code>
[start of examples/02_decoding/plot_haxby_stimuli.py]
1 """
2 Show stimuli of Haxby et al. dataset
3 ===============================================================================
4
5 In this script we plot an overview of the stimuli used in "Distributed
6 and Overlapping Representations of Faces and Objects in Ventral Temporal
7 Cortex" (Science 2001)
8 """
9
10 from scipy.misc import imread
11 import matplotlib.pyplot as plt
12
13 from nilearn import datasets
14 from nilearn.plotting import show
15
16 haxby_dataset = datasets.fetch_haxby(subjects=[], fetch_stimuli=True)
17 stimulus_information = haxby_dataset.stimuli
18
19 for stim_type in sorted(stimulus_information.keys()):
20 if stim_type == b'controls':
21 # skip control images, there are too many
22 continue
23
24 file_names = stimulus_information[stim_type]
25
26 plt.figure()
27 for i in range(48):
28 plt.subplot(6, 8, i + 1)
29 try:
30 plt.imshow(imread(file_names[i]), cmap=plt.cm.gray)
31 except:
32 # just go to the next one if the file is not present
33 pass
34 plt.axis("off")
35 plt.suptitle(stim_type)
36
37 show()
38
[end of examples/02_decoding/plot_haxby_stimuli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/02_decoding/plot_haxby_stimuli.py b/examples/02_decoding/plot_haxby_stimuli.py
--- a/examples/02_decoding/plot_haxby_stimuli.py
+++ b/examples/02_decoding/plot_haxby_stimuli.py
@@ -7,7 +7,6 @@
Cortex" (Science 2001)
"""
-from scipy.misc import imread
import matplotlib.pyplot as plt
from nilearn import datasets
@@ -16,22 +15,19 @@
haxby_dataset = datasets.fetch_haxby(subjects=[], fetch_stimuli=True)
stimulus_information = haxby_dataset.stimuli
-for stim_type in sorted(stimulus_information.keys()):
- if stim_type == b'controls':
- # skip control images, there are too many
- continue
-
- file_names = stimulus_information[stim_type]
-
- plt.figure()
- for i in range(48):
- plt.subplot(6, 8, i + 1)
- try:
- plt.imshow(imread(file_names[i]), cmap=plt.cm.gray)
- except:
- # just go to the next one if the file is not present
- pass
- plt.axis("off")
- plt.suptitle(stim_type)
+for stim_type in stimulus_information:
+ # skip control images, there are too many
+ if stim_type != 'controls':
+
+ file_names = stimulus_information[stim_type]
+
+ fig, axes = plt.subplots(6, 8)
+ fig.suptitle(stim_type)
+
+ for img_path, ax in zip(file_names, axes.ravel()):
+ ax.imshow(plt.imread(img_path), cmap=plt.cm.gray)
+
+ for ax in axes.ravel():
+ ax.axis("off")
show()
|
{"golden_diff": "diff --git a/examples/02_decoding/plot_haxby_stimuli.py b/examples/02_decoding/plot_haxby_stimuli.py\n--- a/examples/02_decoding/plot_haxby_stimuli.py\n+++ b/examples/02_decoding/plot_haxby_stimuli.py\n@@ -7,7 +7,6 @@\n Cortex\" (Science 2001)\n \"\"\"\n \n-from scipy.misc import imread\n import matplotlib.pyplot as plt\n \n from nilearn import datasets\n@@ -16,22 +15,19 @@\n haxby_dataset = datasets.fetch_haxby(subjects=[], fetch_stimuli=True)\n stimulus_information = haxby_dataset.stimuli\n \n-for stim_type in sorted(stimulus_information.keys()):\n- if stim_type == b'controls':\n- # skip control images, there are too many\n- continue\n-\n- file_names = stimulus_information[stim_type]\n-\n- plt.figure()\n- for i in range(48):\n- plt.subplot(6, 8, i + 1)\n- try:\n- plt.imshow(imread(file_names[i]), cmap=plt.cm.gray)\n- except:\n- # just go to the next one if the file is not present\n- pass\n- plt.axis(\"off\")\n- plt.suptitle(stim_type)\n+for stim_type in stimulus_information:\n+ # skip control images, there are too many\n+ if stim_type != 'controls':\n+\n+ file_names = stimulus_information[stim_type]\n+\n+ fig, axes = plt.subplots(6, 8)\n+ fig.suptitle(stim_type)\n+\n+ for img_path, ax in zip(file_names, axes.ravel()):\n+ ax.imshow(plt.imread(img_path), cmap=plt.cm.gray)\n+\n+ for ax in axes.ravel():\n+ ax.axis(\"off\")\n \n show()\n", "issue": "scipy.misc.imread() replaced by scipy.imageio.imread() in v1.2\n`scipy.misc.imread()` was deprecatd in SciPy 1.0 & replaced in SciPy 1.2 by `scipy.imageio.imread()`\r\n\r\nhttps://docs.scipy.org/doc/scipy-1.2.1/reference/generated/scipy.misc.imread.html\r\n\r\nThis is causing failures in CircleCI. \r\n\r\nI will work on this once PR #2076 doctest problem has been addressed, since we need this issue to be resolved before it can be merged. Intended today.\n", "before_files": [{"content": "\"\"\"\nShow stimuli of Haxby et al. dataset\n===============================================================================\n\nIn this script we plot an overview of the stimuli used in \"Distributed\nand Overlapping Representations of Faces and Objects in Ventral Temporal\nCortex\" (Science 2001)\n\"\"\"\n\nfrom scipy.misc import imread\nimport matplotlib.pyplot as plt\n\nfrom nilearn import datasets\nfrom nilearn.plotting import show\n\nhaxby_dataset = datasets.fetch_haxby(subjects=[], fetch_stimuli=True)\nstimulus_information = haxby_dataset.stimuli\n\nfor stim_type in sorted(stimulus_information.keys()):\n if stim_type == b'controls':\n # skip control images, there are too many\n continue\n\n file_names = stimulus_information[stim_type]\n\n plt.figure()\n for i in range(48):\n plt.subplot(6, 8, i + 1)\n try:\n plt.imshow(imread(file_names[i]), cmap=plt.cm.gray)\n except:\n # just go to the next one if the file is not present\n pass\n plt.axis(\"off\")\n plt.suptitle(stim_type)\n\nshow()\n", "path": "examples/02_decoding/plot_haxby_stimuli.py"}]}
| 994 | 413 |
gh_patches_debug_4779
|
rasdani/github-patches
|
git_diff
|
zestedesavoir__zds-site-4960
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Inserting the >_< smiley
We get `>_<`
Solution 1: Modify the code here: https://github.com/zestedesavoir/zds-site/blob/4ae0431bbf199e318dd6f2b1301ac7b6adc40198/assets/js/editor.js#L132 Check that there is no bug/failure with ">" and "<".
Solution 2: We could add the `X/` alias for this smiley and replace the code in the editor. https://github.com/zestedesavoir/zds-site/blob/56a5b2e8b524848efa2d328c0a46365a44c1d43e/zds/utils/templatetags/smileys_def.py#L26
</issue>
<code>
[start of zds/utils/templatetags/smileys_def.py]
1 import os
2 from django.conf import settings
3
4 SMILEYS_BASE_PATH = os.path.join(settings.BASE_DIR, 'dist/smileys')
5 LICENSES_BASE_PATH = os.path.join(settings.BASE_DIR, 'dist/licenses')
6 SMILEYS_BASE_URL = os.path.join(settings.STATIC_URL, 'smileys')
7
8 SMILEYS_BASE = {
9 'smile.png': (':)', ':-)', ),
10 'heureux.png': (':D', ':-D', ),
11 'clin.png': (';)', ';-)', ),
12 'langue.png': (':p', ':P', ':-p', ':-P', ),
13 'rire.gif': (':lol:', ),
14 'unsure.gif': (':euh:', ),
15 'triste.png': (':(', ':-(', ),
16 'huh.png': (':o', ':-o', ':O', ':-O', ),
17 'mechant.png': (':colere2:', ),
18 'blink.gif': ('o_O', 'O_o', ),
19 'hihi.png': ('^^', ),
20 'siffle.png': (':-°', ':°', ),
21 'ange.png': (':ange:', ),
22 'angry.gif': (':colere:', ),
23 'diable.png': (':diable:', ),
24 'magicien.png': (':magicien:', ),
25 'ninja.gif': (':ninja:', ),
26 'pinch.png': ('>_<', ),
27 'pirate.png': (':pirate:', ),
28 'pleure.png': (":'(", ),
29 'rouge.png': (':honte:', ),
30 'soleil.png': (':soleil:', ),
31 'waw.png': (':waw:', ),
32 'zorro.png': (':zorro:', ),
33 'cthulhu.png': ('^(;,;)^', ),
34 }
35
36 smileys = {}
37 for image_file, symbols in SMILEYS_BASE.items():
38 for symbol in symbols:
39 smileys[symbol] = os.path.join(SMILEYS_BASE_URL, image_file)
40
[end of zds/utils/templatetags/smileys_def.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zds/utils/templatetags/smileys_def.py b/zds/utils/templatetags/smileys_def.py
--- a/zds/utils/templatetags/smileys_def.py
+++ b/zds/utils/templatetags/smileys_def.py
@@ -23,7 +23,7 @@
'diable.png': (':diable:', ),
'magicien.png': (':magicien:', ),
'ninja.gif': (':ninja:', ),
- 'pinch.png': ('>_<', ),
+ 'pinch.png': ('>_<', 'X/'),
'pirate.png': (':pirate:', ),
'pleure.png': (":'(", ),
'rouge.png': (':honte:', ),
|
{"golden_diff": "diff --git a/zds/utils/templatetags/smileys_def.py b/zds/utils/templatetags/smileys_def.py\n--- a/zds/utils/templatetags/smileys_def.py\n+++ b/zds/utils/templatetags/smileys_def.py\n@@ -23,7 +23,7 @@\n 'diable.png': (':diable:', ),\n 'magicien.png': (':magicien:', ),\n 'ninja.gif': (':ninja:', ),\n- 'pinch.png': ('>_<', ),\n+ 'pinch.png': ('>_<', 'X/'),\n 'pirate.png': (':pirate:', ),\n 'pleure.png': (\":'(\", ),\n 'rouge.png': (':honte:', ),\n", "issue": " Insertion du smiley >_<\nOn obtient `>_<`\r\n\r\nSolution 1 : Modifier le code ici : https://github.com/zestedesavoir/zds-site/blob/4ae0431bbf199e318dd6f2b1301ac7b6adc40198/assets/js/editor.js#L132 V\u00e9rifier qu'il n'y a pas un bug/fail avec \">\" et \"<\".\r\n\r\nSolution 2 : On peut ajouter l'alias `X/` pour ce smiley et remplacer le code dans l'\u00e9diteur. https://github.com/zestedesavoir/zds-site/blob/56a5b2e8b524848efa2d328c0a46365a44c1d43e/zds/utils/templatetags/smileys_def.py#L26\n", "before_files": [{"content": "import os\nfrom django.conf import settings\n\nSMILEYS_BASE_PATH = os.path.join(settings.BASE_DIR, 'dist/smileys')\nLICENSES_BASE_PATH = os.path.join(settings.BASE_DIR, 'dist/licenses')\nSMILEYS_BASE_URL = os.path.join(settings.STATIC_URL, 'smileys')\n\nSMILEYS_BASE = {\n 'smile.png': (':)', ':-)', ),\n 'heureux.png': (':D', ':-D', ),\n 'clin.png': (';)', ';-)', ),\n 'langue.png': (':p', ':P', ':-p', ':-P', ),\n 'rire.gif': (':lol:', ),\n 'unsure.gif': (':euh:', ),\n 'triste.png': (':(', ':-(', ),\n 'huh.png': (':o', ':-o', ':O', ':-O', ),\n 'mechant.png': (':colere2:', ),\n 'blink.gif': ('o_O', 'O_o', ),\n 'hihi.png': ('^^', ),\n 'siffle.png': (':-\u00b0', ':\u00b0', ),\n 'ange.png': (':ange:', ),\n 'angry.gif': (':colere:', ),\n 'diable.png': (':diable:', ),\n 'magicien.png': (':magicien:', ),\n 'ninja.gif': (':ninja:', ),\n 'pinch.png': ('>_<', ),\n 'pirate.png': (':pirate:', ),\n 'pleure.png': (\":'(\", ),\n 'rouge.png': (':honte:', ),\n 'soleil.png': (':soleil:', ),\n 'waw.png': (':waw:', ),\n 'zorro.png': (':zorro:', ),\n 'cthulhu.png': ('^(;,;)^', ),\n}\n\nsmileys = {}\nfor image_file, symbols in SMILEYS_BASE.items():\n for symbol in symbols:\n smileys[symbol] = os.path.join(SMILEYS_BASE_URL, image_file)\n", "path": "zds/utils/templatetags/smileys_def.py"}]}
| 1,271 | 176 |
gh_patches_debug_25265
|
rasdani/github-patches
|
git_diff
|
tinygrad__tinygrad-1562
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tensor.__eq__() with two bool tensors raises error on Torch backend
This was introduced from #1493
To reproduce:
```
In [24]: (Tensor([1], dtype=dtypes.bool, device="TORCH") == Tensor([1], dtype=dtypes.bool, device="TORCH")).realize()
RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
```
The RuntimeError is raised by PyTorch.
</issue>
<code>
[start of tinygrad/runtime/ops_torch.py]
1 import torch
2 from typing import Dict, Callable, Optional
3 from tinygrad.ops import UnaryOps, BinaryOps, MovementOps, TernaryOps, Op, Interpreted
4 from tinygrad.helpers import getenv, dtypes, prod, DType
5 from tinygrad.runtime.ops_cpu import base_fxn_for_op, einsum_mulacc
6 from tinygrad.runtime.lib import RawBuffer
7
8 device = torch.device("cuda:0" if torch.cuda.is_available() else ("mps" if getenv("MPS", 0) else "cpu"))
9 type_map = {torch.float64: dtypes.float64, torch.float16: dtypes.float16, torch.float32: dtypes.float32, torch.int8: dtypes.int8, torch.int32: dtypes.int32, torch.int64: dtypes.int64, torch.uint8: dtypes.uint8, torch.bool: dtypes.bool}
10 inverse_type_map = {v:k for k,v in type_map.items()}
11
12 def as_strided(x, arg):
13 if any(i < 0 for i in arg[1]):
14 return torch.as_strided(x.contiguous(), arg[0], tuple(abs(i) for i in arg[1]),
15 arg[2] + sum((s-1)*a if a < 0 else 0 for (s,a) in zip(arg[0], arg[1]))).flip([i for i,a in enumerate(arg[1]) if a < 0])
16 return torch.as_strided(x.contiguous(), arg[0], arg[1], arg[2])
17
18 torch_fxn_for_op: Dict[Op, Callable] = {**base_fxn_for_op, **{
19 UnaryOps.NOOP: lambda x: x.contiguous(), UnaryOps.SQRT: lambda x: x.sqrt(), UnaryOps.EXP2: lambda x: x.exp2(), UnaryOps.LOG2: lambda x: x.log2(), UnaryOps.SIN: torch.sin,
20 UnaryOps.CAST: lambda x,y: (x.view if y[1] else x.type)(next(k for k,v in type_map.items() if v==y[0])),
21 BinaryOps.MAX: torch.maximum, BinaryOps.CMPLT: lambda x,y: (x<y).type(torch.promote_types(x.dtype, y.dtype)),
22 MovementOps.PAD: lambda x, padding: torch.nn.functional.pad(x, [item for sublist in padding[::-1] for item in sublist]),
23 TernaryOps.MULACC: einsum_mulacc(lambda s,a,b: torch.einsum(s, a.float(), b.float()).type(torch.promote_types(a.dtype, b.dtype)), lambda x: x.stride(), lambda x,s: x.expand(s)),
24 TernaryOps.WHERE: lambda x, y, z: torch.where(x != 0, y, z),
25 MovementOps.STRIDE: lambda x, arg: x[tuple(slice(None, None, abs(i)) for i in arg)].flip([i for i,a in enumerate(arg) if a < 0]),
26 MovementOps.EXPAND: lambda x, arg: x.expand(arg), MovementOps.PERMUTE: lambda x, arg: x.permute(arg),
27 MovementOps.AS_STRIDED: as_strided
28 }}
29
30 class RawTorchBuffer(RawBuffer):
31 def __init__(self, size:int, dtype:DType, buf:Optional[torch.Tensor]=None): super().__init__(size, dtype, buf if buf is not None else torch.empty([size], dtype=inverse_type_map[dtype]))
32 @classmethod
33 def fromCPU(cls, x):
34 buf = torch.from_numpy(x if all(s>=0 for s in x.strides) else x.copy()).requires_grad_(False).to(device)
35 return cls(prod(x.shape), type_map[buf.dtype], buf)
36 def toCPU(self): return self._buf.cpu().numpy()
37 TorchBuffer = Interpreted(RawTorchBuffer, torch_fxn_for_op, from_underlying=lambda x: RawTorchBuffer(prod(x.shape), type_map[x.dtype], x))
38
[end of tinygrad/runtime/ops_torch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tinygrad/runtime/ops_torch.py b/tinygrad/runtime/ops_torch.py
--- a/tinygrad/runtime/ops_torch.py
+++ b/tinygrad/runtime/ops_torch.py
@@ -18,7 +18,7 @@
torch_fxn_for_op: Dict[Op, Callable] = {**base_fxn_for_op, **{
UnaryOps.NOOP: lambda x: x.contiguous(), UnaryOps.SQRT: lambda x: x.sqrt(), UnaryOps.EXP2: lambda x: x.exp2(), UnaryOps.LOG2: lambda x: x.log2(), UnaryOps.SIN: torch.sin,
UnaryOps.CAST: lambda x,y: (x.view if y[1] else x.type)(next(k for k,v in type_map.items() if v==y[0])),
- BinaryOps.MAX: torch.maximum, BinaryOps.CMPLT: lambda x,y: (x<y).type(torch.promote_types(x.dtype, y.dtype)),
+ BinaryOps.MAX: torch.maximum, BinaryOps.CMPLT: lambda x,y: (x<y).type(torch.promote_types(x.dtype, y.dtype)), BinaryOps.SUB: lambda x,y: torch.logical_xor(x, y) if y.dtype is torch.bool else torch.sub(x, y),
MovementOps.PAD: lambda x, padding: torch.nn.functional.pad(x, [item for sublist in padding[::-1] for item in sublist]),
TernaryOps.MULACC: einsum_mulacc(lambda s,a,b: torch.einsum(s, a.float(), b.float()).type(torch.promote_types(a.dtype, b.dtype)), lambda x: x.stride(), lambda x,s: x.expand(s)),
TernaryOps.WHERE: lambda x, y, z: torch.where(x != 0, y, z),
|
{"golden_diff": "diff --git a/tinygrad/runtime/ops_torch.py b/tinygrad/runtime/ops_torch.py\n--- a/tinygrad/runtime/ops_torch.py\n+++ b/tinygrad/runtime/ops_torch.py\n@@ -18,7 +18,7 @@\n torch_fxn_for_op: Dict[Op, Callable] = {**base_fxn_for_op, **{\n UnaryOps.NOOP: lambda x: x.contiguous(), UnaryOps.SQRT: lambda x: x.sqrt(), UnaryOps.EXP2: lambda x: x.exp2(), UnaryOps.LOG2: lambda x: x.log2(), UnaryOps.SIN: torch.sin,\n UnaryOps.CAST: lambda x,y: (x.view if y[1] else x.type)(next(k for k,v in type_map.items() if v==y[0])),\n- BinaryOps.MAX: torch.maximum, BinaryOps.CMPLT: lambda x,y: (x<y).type(torch.promote_types(x.dtype, y.dtype)),\n+ BinaryOps.MAX: torch.maximum, BinaryOps.CMPLT: lambda x,y: (x<y).type(torch.promote_types(x.dtype, y.dtype)), BinaryOps.SUB: lambda x,y: torch.logical_xor(x, y) if y.dtype is torch.bool else torch.sub(x, y),\n MovementOps.PAD: lambda x, padding: torch.nn.functional.pad(x, [item for sublist in padding[::-1] for item in sublist]),\n TernaryOps.MULACC: einsum_mulacc(lambda s,a,b: torch.einsum(s, a.float(), b.float()).type(torch.promote_types(a.dtype, b.dtype)), lambda x: x.stride(), lambda x,s: x.expand(s)),\n TernaryOps.WHERE: lambda x, y, z: torch.where(x != 0, y, z),\n", "issue": "Tensor.__eq__() with two bool tensors raises error on Torch backend\nThis was introduced from #1493\r\n\r\nTo reproduce:\r\n```\r\nIn [24]: (Tensor([1], dtype=dtypes.bool, device=\"TORCH\") == Tensor([1], dtype=dtypes.bool, device=\"TORCH\")).realize()\r\nRuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.\r\n```\r\nRuntimeError is from pytorch\r\n\r\n\n", "before_files": [{"content": "import torch\nfrom typing import Dict, Callable, Optional\nfrom tinygrad.ops import UnaryOps, BinaryOps, MovementOps, TernaryOps, Op, Interpreted\nfrom tinygrad.helpers import getenv, dtypes, prod, DType\nfrom tinygrad.runtime.ops_cpu import base_fxn_for_op, einsum_mulacc\nfrom tinygrad.runtime.lib import RawBuffer\n\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else (\"mps\" if getenv(\"MPS\", 0) else \"cpu\"))\ntype_map = {torch.float64: dtypes.float64, torch.float16: dtypes.float16, torch.float32: dtypes.float32, torch.int8: dtypes.int8, torch.int32: dtypes.int32, torch.int64: dtypes.int64, torch.uint8: dtypes.uint8, torch.bool: dtypes.bool}\ninverse_type_map = {v:k for k,v in type_map.items()}\n\ndef as_strided(x, arg):\n if any(i < 0 for i in arg[1]):\n return torch.as_strided(x.contiguous(), arg[0], tuple(abs(i) for i in arg[1]),\n arg[2] + sum((s-1)*a if a < 0 else 0 for (s,a) in zip(arg[0], arg[1]))).flip([i for i,a in enumerate(arg[1]) if a < 0])\n return torch.as_strided(x.contiguous(), arg[0], arg[1], arg[2])\n\ntorch_fxn_for_op: Dict[Op, Callable] = {**base_fxn_for_op, **{\n UnaryOps.NOOP: lambda x: x.contiguous(), UnaryOps.SQRT: lambda x: x.sqrt(), UnaryOps.EXP2: lambda x: x.exp2(), UnaryOps.LOG2: lambda x: x.log2(), UnaryOps.SIN: torch.sin,\n UnaryOps.CAST: lambda x,y: (x.view if y[1] else x.type)(next(k for k,v in type_map.items() if v==y[0])),\n BinaryOps.MAX: torch.maximum, BinaryOps.CMPLT: lambda x,y: (x<y).type(torch.promote_types(x.dtype, y.dtype)),\n MovementOps.PAD: lambda x, padding: torch.nn.functional.pad(x, [item for sublist in padding[::-1] for item in sublist]),\n TernaryOps.MULACC: einsum_mulacc(lambda s,a,b: torch.einsum(s, a.float(), b.float()).type(torch.promote_types(a.dtype, b.dtype)), lambda x: x.stride(), lambda x,s: x.expand(s)),\n 
TernaryOps.WHERE: lambda x, y, z: torch.where(x != 0, y, z),\n MovementOps.STRIDE: lambda x, arg: x[tuple(slice(None, None, abs(i)) for i in arg)].flip([i for i,a in enumerate(arg) if a < 0]),\n MovementOps.EXPAND: lambda x, arg: x.expand(arg), MovementOps.PERMUTE: lambda x, arg: x.permute(arg),\n MovementOps.AS_STRIDED: as_strided\n}}\n\nclass RawTorchBuffer(RawBuffer):\n def __init__(self, size:int, dtype:DType, buf:Optional[torch.Tensor]=None): super().__init__(size, dtype, buf if buf is not None else torch.empty([size], dtype=inverse_type_map[dtype]))\n @classmethod\n def fromCPU(cls, x):\n buf = torch.from_numpy(x if all(s>=0 for s in x.strides) else x.copy()).requires_grad_(False).to(device)\n return cls(prod(x.shape), type_map[buf.dtype], buf)\n def toCPU(self): return self._buf.cpu().numpy()\nTorchBuffer = Interpreted(RawTorchBuffer, torch_fxn_for_op, from_underlying=lambda x: RawTorchBuffer(prod(x.shape), type_map[x.dtype], x))\n", "path": "tinygrad/runtime/ops_torch.py"}]}
| 1,592 | 385 |
gh_patches_debug_32291
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-contrib-1253
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add more features for adding HTTP request / response headers to spans.
I already have https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1172 open for this, and I'll be breaking it into smaller pieces at @lzchen's request.
**Is your feature request related to a problem?**
Currently, you can only provide a list of full HTTP request / response header names to be added to the span.
There is also no capacity for header value redaction.
**Describe the solution you'd like**
It would be nice to be able to specify a regex or "all" to get all headers.
Header value redaction is also a must-have for us.
**Describe alternatives you've considered**
I considered doing this in my application, but it makes more sense to add it here.
</issue>
<code>
[start of util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from os import environ
16 from re import compile as re_compile
17 from re import search
18 from typing import Iterable, List
19 from urllib.parse import urlparse, urlunparse
20
21 from opentelemetry.semconv.trace import SpanAttributes
22
23 OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST = (
24 "OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST"
25 )
26 OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE = (
27 "OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE"
28 )
29
30 # List of recommended metrics attributes
31 _duration_attrs = {
32 SpanAttributes.HTTP_METHOD,
33 SpanAttributes.HTTP_HOST,
34 SpanAttributes.HTTP_SCHEME,
35 SpanAttributes.HTTP_STATUS_CODE,
36 SpanAttributes.HTTP_FLAVOR,
37 SpanAttributes.HTTP_SERVER_NAME,
38 SpanAttributes.NET_HOST_NAME,
39 SpanAttributes.NET_HOST_PORT,
40 }
41
42 _active_requests_count_attrs = {
43 SpanAttributes.HTTP_METHOD,
44 SpanAttributes.HTTP_HOST,
45 SpanAttributes.HTTP_SCHEME,
46 SpanAttributes.HTTP_FLAVOR,
47 SpanAttributes.HTTP_SERVER_NAME,
48 }
49
50
51 class ExcludeList:
52 """Class to exclude certain paths (given as a list of regexes) from tracing requests"""
53
54 def __init__(self, excluded_urls: Iterable[str]):
55 self._excluded_urls = excluded_urls
56 if self._excluded_urls:
57 self._regex = re_compile("|".join(excluded_urls))
58
59 def url_disabled(self, url: str) -> bool:
60 return bool(self._excluded_urls and search(self._regex, url))
61
62
63 _root = r"OTEL_PYTHON_{}"
64
65
66 def get_traced_request_attrs(instrumentation):
67 traced_request_attrs = environ.get(
68 _root.format(f"{instrumentation}_TRACED_REQUEST_ATTRS"), []
69 )
70
71 if traced_request_attrs:
72 traced_request_attrs = [
73 traced_request_attr.strip()
74 for traced_request_attr in traced_request_attrs.split(",")
75 ]
76
77 return traced_request_attrs
78
79
80 def get_excluded_urls(instrumentation: str) -> ExcludeList:
81 # Get instrumentation-specific excluded URLs. If not set, retrieve them
82 # from generic variable.
83 excluded_urls = environ.get(
84 _root.format(f"{instrumentation}_EXCLUDED_URLS"),
85 environ.get(_root.format("EXCLUDED_URLS"), ""),
86 )
87
88 return parse_excluded_urls(excluded_urls)
89
90
91 def parse_excluded_urls(excluded_urls: str) -> ExcludeList:
92 """
93 Small helper to put an arbitrary url list inside of ExcludeList
94 """
95 if excluded_urls:
96 excluded_url_list = [
97 excluded_url.strip() for excluded_url in excluded_urls.split(",")
98 ]
99 else:
100 excluded_url_list = []
101
102 return ExcludeList(excluded_url_list)
103
104
105 def remove_url_credentials(url: str) -> str:
106 """Given a string url, remove the username and password only if it is a valid url"""
107
108 try:
109 parsed = urlparse(url)
110 if all([parsed.scheme, parsed.netloc]): # checks for valid url
111 parsed_url = urlparse(url)
112 netloc = (
113 (":".join(((parsed_url.hostname or ""), str(parsed_url.port))))
114 if parsed_url.port
115 else (parsed_url.hostname or "")
116 )
117 return urlunparse(
118 (
119 parsed_url.scheme,
120 netloc,
121 parsed_url.path,
122 parsed_url.params,
123 parsed_url.query,
124 parsed_url.fragment,
125 )
126 )
127 except ValueError: # an unparsable url was passed
128 pass
129 return url
130
131
132 def normalise_request_header_name(header: str) -> str:
133 key = header.lower().replace("-", "_")
134 return f"http.request.header.{key}"
135
136
137 def normalise_response_header_name(header: str) -> str:
138 key = header.lower().replace("-", "_")
139 return f"http.response.header.{key}"
140
141
142 def get_custom_headers(env_var: str) -> List[str]:
143 custom_headers = environ.get(env_var, [])
144 if custom_headers:
145 custom_headers = [
146 custom_headers.strip()
147 for custom_headers in custom_headers.split(",")
148 ]
149 return custom_headers
150
151
152 def _parse_active_request_count_attrs(req_attrs):
153 active_requests_count_attrs = {
154 key: req_attrs[key]
155 for key in _active_requests_count_attrs.intersection(req_attrs.keys())
156 }
157 return active_requests_count_attrs
158
159
160 def _parse_duration_attrs(req_attrs):
161 duration_attrs = {
162 key: req_attrs[key]
163 for key in _duration_attrs.intersection(req_attrs.keys())
164 }
165 return duration_attrs
166
[end of util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py b/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py
--- a/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py
+++ b/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py
@@ -13,6 +13,7 @@
# limitations under the License.
from os import environ
+from re import IGNORECASE as RE_IGNORECASE
from re import compile as re_compile
from re import search
from typing import Iterable, List
@@ -20,6 +21,9 @@
from opentelemetry.semconv.trace import SpanAttributes
+OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SANITIZE_FIELDS = (
+ "OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SANITIZE_FIELDS"
+)
OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST = (
"OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST"
)
@@ -60,6 +64,22 @@
return bool(self._excluded_urls and search(self._regex, url))
+class SanitizeValue:
+ """Class to sanitize (remove sensitive data from) certain headers (given as a list of regexes)"""
+
+ def __init__(self, sanitized_fields: Iterable[str]):
+ self._sanitized_fields = sanitized_fields
+ if self._sanitized_fields:
+ self._regex = re_compile("|".join(sanitized_fields), RE_IGNORECASE)
+
+ def sanitize_header_value(self, header: str, value: str) -> str:
+ return (
+ "[REDACTED]"
+ if (self._sanitized_fields and search(self._regex, header))
+ else value
+ )
+
+
_root = r"OTEL_PYTHON_{}"
@@ -90,7 +110,7 @@
def parse_excluded_urls(excluded_urls: str) -> ExcludeList:
"""
- Small helper to put an arbitrary url list inside of ExcludeList
+ Small helper to put an arbitrary url list inside an ExcludeList
"""
if excluded_urls:
excluded_url_list = [
|
{"golden_diff": "diff --git a/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py b/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py\n--- a/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py\n+++ b/util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py\n@@ -13,6 +13,7 @@\n # limitations under the License.\n \n from os import environ\n+from re import IGNORECASE as RE_IGNORECASE\n from re import compile as re_compile\n from re import search\n from typing import Iterable, List\n@@ -20,6 +21,9 @@\n \n from opentelemetry.semconv.trace import SpanAttributes\n \n+OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SANITIZE_FIELDS = (\n+ \"OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SANITIZE_FIELDS\"\n+)\n OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST = (\n \"OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST\"\n )\n@@ -60,6 +64,22 @@\n return bool(self._excluded_urls and search(self._regex, url))\n \n \n+class SanitizeValue:\n+ \"\"\"Class to sanitize (remove sensitive data from) certain headers (given as a list of regexes)\"\"\"\n+\n+ def __init__(self, sanitized_fields: Iterable[str]):\n+ self._sanitized_fields = sanitized_fields\n+ if self._sanitized_fields:\n+ self._regex = re_compile(\"|\".join(sanitized_fields), RE_IGNORECASE)\n+\n+ def sanitize_header_value(self, header: str, value: str) -> str:\n+ return (\n+ \"[REDACTED]\"\n+ if (self._sanitized_fields and search(self._regex, header))\n+ else value\n+ )\n+\n+\n _root = r\"OTEL_PYTHON_{}\"\n \n \n@@ -90,7 +110,7 @@\n \n def parse_excluded_urls(excluded_urls: str) -> ExcludeList:\n \"\"\"\n- Small helper to put an arbitrary url list inside of ExcludeList\n+ Small helper to put an arbitrary url list inside an ExcludeList\n \"\"\"\n if excluded_urls:\n excluded_url_list = [\n", "issue": "Add more features for adding HTTP request / response headers to spans.\nI already have https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1172 open for this, and I'll be breaking it in to smaller pieces at @lzchen 's request.\r\n\r\n**Is your feature request related to a problem?**\r\nCurrently, you can only provide a list of full HTTP request / response header names to be added to the span.\r\n\r\nThere is also no capacity for header value redaction.\r\n\r\n**Describe the solution you'd like**\r\nIt would be nice to be able to specify a regex or \"all\" to get all headers.\r\n\r\nHeader value redaction is also a must-have for us.\r\n\r\n**Describe alternatives you've considered**\r\nI considered doing this in my application, but it makes more sense to add it here.\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom os import environ\nfrom re import compile as re_compile\nfrom re import search\nfrom typing import Iterable, List\nfrom urllib.parse import urlparse, urlunparse\n\nfrom opentelemetry.semconv.trace import SpanAttributes\n\nOTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST 
= (\n \"OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_REQUEST\"\n)\nOTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE = (\n \"OTEL_INSTRUMENTATION_HTTP_CAPTURE_HEADERS_SERVER_RESPONSE\"\n)\n\n# List of recommended metrics attributes\n_duration_attrs = {\n SpanAttributes.HTTP_METHOD,\n SpanAttributes.HTTP_HOST,\n SpanAttributes.HTTP_SCHEME,\n SpanAttributes.HTTP_STATUS_CODE,\n SpanAttributes.HTTP_FLAVOR,\n SpanAttributes.HTTP_SERVER_NAME,\n SpanAttributes.NET_HOST_NAME,\n SpanAttributes.NET_HOST_PORT,\n}\n\n_active_requests_count_attrs = {\n SpanAttributes.HTTP_METHOD,\n SpanAttributes.HTTP_HOST,\n SpanAttributes.HTTP_SCHEME,\n SpanAttributes.HTTP_FLAVOR,\n SpanAttributes.HTTP_SERVER_NAME,\n}\n\n\nclass ExcludeList:\n \"\"\"Class to exclude certain paths (given as a list of regexes) from tracing requests\"\"\"\n\n def __init__(self, excluded_urls: Iterable[str]):\n self._excluded_urls = excluded_urls\n if self._excluded_urls:\n self._regex = re_compile(\"|\".join(excluded_urls))\n\n def url_disabled(self, url: str) -> bool:\n return bool(self._excluded_urls and search(self._regex, url))\n\n\n_root = r\"OTEL_PYTHON_{}\"\n\n\ndef get_traced_request_attrs(instrumentation):\n traced_request_attrs = environ.get(\n _root.format(f\"{instrumentation}_TRACED_REQUEST_ATTRS\"), []\n )\n\n if traced_request_attrs:\n traced_request_attrs = [\n traced_request_attr.strip()\n for traced_request_attr in traced_request_attrs.split(\",\")\n ]\n\n return traced_request_attrs\n\n\ndef get_excluded_urls(instrumentation: str) -> ExcludeList:\n # Get instrumentation-specific excluded URLs. If not set, retrieve them\n # from generic variable.\n excluded_urls = environ.get(\n _root.format(f\"{instrumentation}_EXCLUDED_URLS\"),\n environ.get(_root.format(\"EXCLUDED_URLS\"), \"\"),\n )\n\n return parse_excluded_urls(excluded_urls)\n\n\ndef parse_excluded_urls(excluded_urls: str) -> ExcludeList:\n \"\"\"\n Small helper to put an arbitrary url list inside of ExcludeList\n \"\"\"\n if excluded_urls:\n excluded_url_list = [\n excluded_url.strip() for excluded_url in excluded_urls.split(\",\")\n ]\n else:\n excluded_url_list = []\n\n return ExcludeList(excluded_url_list)\n\n\ndef remove_url_credentials(url: str) -> str:\n \"\"\"Given a string url, remove the username and password only if it is a valid url\"\"\"\n\n try:\n parsed = urlparse(url)\n if all([parsed.scheme, parsed.netloc]): # checks for valid url\n parsed_url = urlparse(url)\n netloc = (\n (\":\".join(((parsed_url.hostname or \"\"), str(parsed_url.port))))\n if parsed_url.port\n else (parsed_url.hostname or \"\")\n )\n return urlunparse(\n (\n parsed_url.scheme,\n netloc,\n parsed_url.path,\n parsed_url.params,\n parsed_url.query,\n parsed_url.fragment,\n )\n )\n except ValueError: # an unparsable url was passed\n pass\n return url\n\n\ndef normalise_request_header_name(header: str) -> str:\n key = header.lower().replace(\"-\", \"_\")\n return f\"http.request.header.{key}\"\n\n\ndef normalise_response_header_name(header: str) -> str:\n key = header.lower().replace(\"-\", \"_\")\n return f\"http.response.header.{key}\"\n\n\ndef get_custom_headers(env_var: str) -> List[str]:\n custom_headers = environ.get(env_var, [])\n if custom_headers:\n custom_headers = [\n custom_headers.strip()\n for custom_headers in custom_headers.split(\",\")\n ]\n return custom_headers\n\n\ndef _parse_active_request_count_attrs(req_attrs):\n active_requests_count_attrs = {\n key: req_attrs[key]\n for key in _active_requests_count_attrs.intersection(req_attrs.keys())\n }\n 
return active_requests_count_attrs\n\n\ndef _parse_duration_attrs(req_attrs):\n duration_attrs = {\n key: req_attrs[key]\n for key in _duration_attrs.intersection(req_attrs.keys())\n }\n return duration_attrs\n", "path": "util/opentelemetry-util-http/src/opentelemetry/util/http/__init__.py"}]}
| 2,199 | 475 |
gh_patches_debug_2683
|
rasdani/github-patches
|
git_diff
|
huggingface__huggingface_hub-790
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support python=3.10
Python 3.10 has been out for a while, but we don't seem to test for it. What are the roadblocks to supporting 3.10 and maybe deprecating 3.6? (Many packages now support 3.8-3.10, and older versions are no longer supported.)
Ping @LysandreJik @osanseviero maybe?
</issue>
<code>
[start of setup.py]
1 from setuptools import find_packages, setup
2
3
4 def get_version() -> str:
5 rel_path = "src/huggingface_hub/__init__.py"
6 with open(rel_path, "r") as fp:
7 for line in fp.read().splitlines():
8 if line.startswith("__version__"):
9 delim = '"' if '"' in line else "'"
10 return line.split(delim)[1]
11 raise RuntimeError("Unable to find version string.")
12
13
14 install_requires = [
15 "filelock",
16 "requests",
17 "tqdm",
18 "pyyaml",
19 "typing-extensions>=3.7.4.3", # to be able to import TypeAlias
20 "importlib_metadata;python_version<'3.8'",
21 "packaging>=20.9",
22 ]
23
24 extras = {}
25
26 extras["torch"] = [
27 "torch",
28 ]
29
30 extras["tensorflow"] = [
31 "tensorflow",
32 "pydot",
33 "graphviz"
34 ]
35
36 extras["testing"] = [
37 "pytest",
38 "datasets",
39 "soundfile",
40 ]
41
42 extras["quality"] = [
43 "black~=22.0",
44 "isort>=5.5.4",
45 "flake8>=3.8.3",
46 ]
47
48 extras["all"] = extras["testing"] + extras["quality"]
49
50 extras["dev"] = extras["all"]
51
52
53 setup(
54 name="huggingface_hub",
55 version=get_version(),
56 author="Hugging Face, Inc.",
57 author_email="[email protected]",
58 description="Client library to download and publish models on the huggingface.co hub",
59 long_description=open("README.md", "r", encoding="utf-8").read(),
60 long_description_content_type="text/markdown",
61 keywords="model-hub machine-learning models natural-language-processing deep-learning pytorch pretrained-models",
62 license="Apache",
63 url="https://github.com/huggingface/huggingface_hub",
64 package_dir={"": "src"},
65 packages=find_packages("src"),
66 extras_require=extras,
67 entry_points={
68 "console_scripts": [
69 "huggingface-cli=huggingface_hub.commands.huggingface_cli:main"
70 ]
71 },
72 python_requires=">=3.6.0",
73 install_requires=install_requires,
74 classifiers=[
75 "Intended Audience :: Developers",
76 "Intended Audience :: Education",
77 "Intended Audience :: Science/Research",
78 "License :: OSI Approved :: Apache Software License",
79 "Operating System :: OS Independent",
80 "Programming Language :: Python :: 3",
81 "Topic :: Scientific/Engineering :: Artificial Intelligence",
82 ],
83 )
84
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -69,7 +69,7 @@
"huggingface-cli=huggingface_hub.commands.huggingface_cli:main"
]
},
- python_requires=">=3.6.0",
+ python_requires=">=3.7.0",
install_requires=install_requires,
classifiers=[
"Intended Audience :: Developers",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -69,7 +69,7 @@\n \"huggingface-cli=huggingface_hub.commands.huggingface_cli:main\"\n ]\n },\n- python_requires=\">=3.6.0\",\n+ python_requires=\">=3.7.0\",\n install_requires=install_requires,\n classifiers=[\n \"Intended Audience :: Developers\",\n", "issue": "Support python=3.10\nPython 3.10 has been out for a while but we seem to not test for it. What are the roadblocks for us to support 3.10 and maybe deprecate 3.6? (Many packages now support 3.8-3.10 and older versions are not supported anymore).\r\n\r\nPing @LysandreJik @osanseviero maybe?\n", "before_files": [{"content": "from setuptools import find_packages, setup\n\n\ndef get_version() -> str:\n rel_path = \"src/huggingface_hub/__init__.py\"\n with open(rel_path, \"r\") as fp:\n for line in fp.read().splitlines():\n if line.startswith(\"__version__\"):\n delim = '\"' if '\"' in line else \"'\"\n return line.split(delim)[1]\n raise RuntimeError(\"Unable to find version string.\")\n\n\ninstall_requires = [\n \"filelock\",\n \"requests\",\n \"tqdm\",\n \"pyyaml\",\n \"typing-extensions>=3.7.4.3\", # to be able to import TypeAlias\n \"importlib_metadata;python_version<'3.8'\",\n \"packaging>=20.9\",\n]\n\nextras = {}\n\nextras[\"torch\"] = [\n \"torch\",\n]\n\nextras[\"tensorflow\"] = [\n \"tensorflow\",\n \"pydot\",\n \"graphviz\"\n]\n\nextras[\"testing\"] = [\n \"pytest\",\n \"datasets\",\n \"soundfile\",\n]\n\nextras[\"quality\"] = [\n \"black~=22.0\",\n \"isort>=5.5.4\",\n \"flake8>=3.8.3\",\n]\n\nextras[\"all\"] = extras[\"testing\"] + extras[\"quality\"]\n\nextras[\"dev\"] = extras[\"all\"]\n\n\nsetup(\n name=\"huggingface_hub\",\n version=get_version(),\n author=\"Hugging Face, Inc.\",\n author_email=\"[email protected]\",\n description=\"Client library to download and publish models on the huggingface.co hub\",\n long_description=open(\"README.md\", \"r\", encoding=\"utf-8\").read(),\n long_description_content_type=\"text/markdown\",\n keywords=\"model-hub machine-learning models natural-language-processing deep-learning pytorch pretrained-models\",\n license=\"Apache\",\n url=\"https://github.com/huggingface/huggingface_hub\",\n package_dir={\"\": \"src\"},\n packages=find_packages(\"src\"),\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"huggingface-cli=huggingface_hub.commands.huggingface_cli:main\"\n ]\n },\n python_requires=\">=3.6.0\",\n install_requires=install_requires,\n classifiers=[\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python :: 3\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n", "path": "setup.py"}]}
| 1,336 | 96 |
gh_patches_debug_36073
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-5711
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.wasd: service gone
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
6.4.2
### Description
A few days ago, the service was [shut down](https://mts.ru/personal/novosti/2023-12-05/vstrechajte-polzovatelskuyu-videoplatformu-nuum). It is now [nuum.ru](https://nuum.ru).
Though we could easily replace the plugin, I'm not sure it's worth adding it to upstream, because the new service is still a beta version.
<details>
```diff
diff --git a/src/streamlink/plugins/wasd.py b/src/streamlink/plugins/wasd.py
index 7d61304e..656a16eb 100644
--- a/src/streamlink/plugins/wasd.py
+++ b/src/streamlink/plugins/wasd.py
@@ -16,7 +16,7 @@ log = logging.getLogger(__name__)
@pluginmatcher(re.compile(
- r"https?://(?:www\.)?wasd\.tv/(?P<nickname>[^/]+)/?$",
+ r"https?://(?:www\.)?nuum\.ru/channel/(?P<nickname>[^/]+)/?$",
))
class WASD(Plugin):
_media_schema = validate.Schema({
@@ -53,11 +53,11 @@ class WASD(Plugin):
def _get_streams(self):
nickname = self.match.group("nickname")
- res = self.session.http.get(f"https://wasd.tv/api/channels/nicknames/{nickname}")
+ res = self.session.http.get(f"https://nuum.ru/api/channels/nicknames/{nickname}")
channel_id = self.session.http.json(res, schema=self._api_nicknames_schema)
res = self.session.http.get(
- "https://wasd.tv/api/v2/media-containers",
+ "https://nuum.ru/api/v2/media-containers",
params={
"media_container_status": "RUNNING",
"limit": "1",
```
</details>
### Debug log
</issue>
<code>
[start of src/streamlink/plugins/wasd.py]
1 """
2 $description Russian live-streaming social platform.
3 $url wasd.tv
4 $type live
5 """
6
7 import logging
8 import re
9
10 from streamlink.plugin import Plugin, PluginError, pluginmatcher
11 from streamlink.plugin.api import validate
12 from streamlink.stream.hls import HLSStream
13
14
15 log = logging.getLogger(__name__)
16
17
18 @pluginmatcher(re.compile(
19 r"https?://(?:www\.)?wasd\.tv/(?P<nickname>[^/]+)/?$",
20 ))
21 class WASD(Plugin):
22 _media_schema = validate.Schema({
23 "user_id": int,
24 "media_container_online_status": str,
25 "media_container_status": str,
26 "media_container_streams": [{
27 "stream_media": [{
28 "media_id": int,
29 "media_meta": {
30 "media_url": validate.any(str, None),
31 "media_archive_url": validate.any(str, None),
32 },
33 "media_status": validate.any("STOPPED", "RUNNING"),
34 "media_type": "HLS",
35 }],
36 }],
37 })
38 _api_schema = validate.Schema({
39 "result":
40 validate.any(
41 _media_schema,
42 validate.all(list,
43 validate.get(0),
44 _media_schema),
45 [],
46 ),
47 }, validate.get("result"))
48 _api_nicknames_schema = validate.Schema({
49 "result": {
50 "channel_id": int,
51 },
52 }, validate.get("result"), validate.get("channel_id"))
53
54 def _get_streams(self):
55 nickname = self.match.group("nickname")
56 res = self.session.http.get(f"https://wasd.tv/api/channels/nicknames/{nickname}")
57 channel_id = self.session.http.json(res, schema=self._api_nicknames_schema)
58
59 res = self.session.http.get(
60 "https://wasd.tv/api/v2/media-containers",
61 params={
62 "media_container_status": "RUNNING",
63 "limit": "1",
64 "offset": "0",
65 "channel_id": channel_id,
66 "media_container_type": "SINGLE,COOP",
67 },
68 )
69
70 json_res = self.session.http.json(res, schema=self._api_schema)
71 log.trace("{0!r}".format(json_res))
72 if not json_res:
73 raise PluginError("No data returned from URL={0}".format(res.url))
74
75 for stream in json_res["media_container_streams"]:
76 log.debug("media_container_status: {0}, media_container_online_status: {1}".format(
77 json_res["media_container_status"], json_res["media_container_online_status"]))
78 for stream_media in stream["stream_media"]:
79 if stream_media["media_status"] == "STOPPED":
80 hls_url = stream_media["media_meta"]["media_archive_url"]
81 else:
82 hls_url = stream_media["media_meta"]["media_url"]
83
84 yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()
85
86
87 __plugin__ = WASD
88
[end of src/streamlink/plugins/wasd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/wasd.py b/src/streamlink/plugins/wasd.py
deleted file mode 100644
--- a/src/streamlink/plugins/wasd.py
+++ /dev/null
@@ -1,87 +0,0 @@
-"""
-$description Russian live-streaming social platform.
-$url wasd.tv
-$type live
-"""
-
-import logging
-import re
-
-from streamlink.plugin import Plugin, PluginError, pluginmatcher
-from streamlink.plugin.api import validate
-from streamlink.stream.hls import HLSStream
-
-
-log = logging.getLogger(__name__)
-
-
-@pluginmatcher(re.compile(
- r"https?://(?:www\.)?wasd\.tv/(?P<nickname>[^/]+)/?$",
-))
-class WASD(Plugin):
- _media_schema = validate.Schema({
- "user_id": int,
- "media_container_online_status": str,
- "media_container_status": str,
- "media_container_streams": [{
- "stream_media": [{
- "media_id": int,
- "media_meta": {
- "media_url": validate.any(str, None),
- "media_archive_url": validate.any(str, None),
- },
- "media_status": validate.any("STOPPED", "RUNNING"),
- "media_type": "HLS",
- }],
- }],
- })
- _api_schema = validate.Schema({
- "result":
- validate.any(
- _media_schema,
- validate.all(list,
- validate.get(0),
- _media_schema),
- [],
- ),
- }, validate.get("result"))
- _api_nicknames_schema = validate.Schema({
- "result": {
- "channel_id": int,
- },
- }, validate.get("result"), validate.get("channel_id"))
-
- def _get_streams(self):
- nickname = self.match.group("nickname")
- res = self.session.http.get(f"https://wasd.tv/api/channels/nicknames/{nickname}")
- channel_id = self.session.http.json(res, schema=self._api_nicknames_schema)
-
- res = self.session.http.get(
- "https://wasd.tv/api/v2/media-containers",
- params={
- "media_container_status": "RUNNING",
- "limit": "1",
- "offset": "0",
- "channel_id": channel_id,
- "media_container_type": "SINGLE,COOP",
- },
- )
-
- json_res = self.session.http.json(res, schema=self._api_schema)
- log.trace("{0!r}".format(json_res))
- if not json_res:
- raise PluginError("No data returned from URL={0}".format(res.url))
-
- for stream in json_res["media_container_streams"]:
- log.debug("media_container_status: {0}, media_container_online_status: {1}".format(
- json_res["media_container_status"], json_res["media_container_online_status"]))
- for stream_media in stream["stream_media"]:
- if stream_media["media_status"] == "STOPPED":
- hls_url = stream_media["media_meta"]["media_archive_url"]
- else:
- hls_url = stream_media["media_meta"]["media_url"]
-
- yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()
-
-
-__plugin__ = WASD
|
{"golden_diff": "diff --git a/src/streamlink/plugins/wasd.py b/src/streamlink/plugins/wasd.py\ndeleted file mode 100644\n--- a/src/streamlink/plugins/wasd.py\n+++ /dev/null\n@@ -1,87 +0,0 @@\n-\"\"\"\n-$description Russian live-streaming social platform.\n-$url wasd.tv\n-$type live\n-\"\"\"\n-\n-import logging\n-import re\n-\n-from streamlink.plugin import Plugin, PluginError, pluginmatcher\n-from streamlink.plugin.api import validate\n-from streamlink.stream.hls import HLSStream\n-\n-\n-log = logging.getLogger(__name__)\n-\n-\n-@pluginmatcher(re.compile(\n- r\"https?://(?:www\\.)?wasd\\.tv/(?P<nickname>[^/]+)/?$\",\n-))\n-class WASD(Plugin):\n- _media_schema = validate.Schema({\n- \"user_id\": int,\n- \"media_container_online_status\": str,\n- \"media_container_status\": str,\n- \"media_container_streams\": [{\n- \"stream_media\": [{\n- \"media_id\": int,\n- \"media_meta\": {\n- \"media_url\": validate.any(str, None),\n- \"media_archive_url\": validate.any(str, None),\n- },\n- \"media_status\": validate.any(\"STOPPED\", \"RUNNING\"),\n- \"media_type\": \"HLS\",\n- }],\n- }],\n- })\n- _api_schema = validate.Schema({\n- \"result\":\n- validate.any(\n- _media_schema,\n- validate.all(list,\n- validate.get(0),\n- _media_schema),\n- [],\n- ),\n- }, validate.get(\"result\"))\n- _api_nicknames_schema = validate.Schema({\n- \"result\": {\n- \"channel_id\": int,\n- },\n- }, validate.get(\"result\"), validate.get(\"channel_id\"))\n-\n- def _get_streams(self):\n- nickname = self.match.group(\"nickname\")\n- res = self.session.http.get(f\"https://wasd.tv/api/channels/nicknames/{nickname}\")\n- channel_id = self.session.http.json(res, schema=self._api_nicknames_schema)\n-\n- res = self.session.http.get(\n- \"https://wasd.tv/api/v2/media-containers\",\n- params={\n- \"media_container_status\": \"RUNNING\",\n- \"limit\": \"1\",\n- \"offset\": \"0\",\n- \"channel_id\": channel_id,\n- \"media_container_type\": \"SINGLE,COOP\",\n- },\n- )\n-\n- json_res = self.session.http.json(res, schema=self._api_schema)\n- log.trace(\"{0!r}\".format(json_res))\n- if not json_res:\n- raise PluginError(\"No data returned from URL={0}\".format(res.url))\n-\n- for stream in json_res[\"media_container_streams\"]:\n- log.debug(\"media_container_status: {0}, media_container_online_status: {1}\".format(\n- json_res[\"media_container_status\"], json_res[\"media_container_online_status\"]))\n- for stream_media in stream[\"stream_media\"]:\n- if stream_media[\"media_status\"] == \"STOPPED\":\n- hls_url = stream_media[\"media_meta\"][\"media_archive_url\"]\n- else:\n- hls_url = stream_media[\"media_meta\"][\"media_url\"]\n-\n- yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()\n-\n-\n-__plugin__ = WASD\n", "issue": "plugins.wasd: service gone\n### Checklist\r\n\r\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\n6.4.2\r\n\r\n### Description\r\n\r\nA few days ago, the service 
[gone](https://mts.ru/personal/novosti/2023-12-05/vstrechajte-polzovatelskuyu-videoplatformu-nuum). Now this [nuum.ru](https://nuum.ru).\r\n\r\nThough we could easily replace the plugin, but I'm not sure it's worth adding it to upstream, because it's a beta version.\r\n<details>\r\n\r\n```diff\r\ndiff --git a/src/streamlink/plugins/wasd.py b/src/streamlink/plugins/wasd.py\r\nindex 7d61304e..656a16eb 100644\r\n--- a/src/streamlink/plugins/wasd.py\r\n+++ b/src/streamlink/plugins/wasd.py\r\n@@ -16,7 +16,7 @@ log = logging.getLogger(__name__)\r\n \r\n \r\n @pluginmatcher(re.compile(\r\n- r\"https?://(?:www\\.)?wasd\\.tv/(?P<nickname>[^/]+)/?$\",\r\n+ r\"https?://(?:www\\.)?nuum\\.ru/channel/(?P<nickname>[^/]+)/?$\",\r\n ))\r\n class WASD(Plugin):\r\n _media_schema = validate.Schema({\r\n@@ -53,11 +53,11 @@ class WASD(Plugin):\r\n \r\n def _get_streams(self):\r\n nickname = self.match.group(\"nickname\")\r\n- res = self.session.http.get(f\"https://wasd.tv/api/channels/nicknames/{nickname}\")\r\n+ res = self.session.http.get(f\"https://nuum.ru/api/channels/nicknames/{nickname}\")\r\n channel_id = self.session.http.json(res, schema=self._api_nicknames_schema)\r\n \r\n res = self.session.http.get(\r\n- \"https://wasd.tv/api/v2/media-containers\",\r\n+ \"https://nuum.ru/api/v2/media-containers\",\r\n params={\r\n \"media_container_status\": \"RUNNING\",\r\n \"limit\": \"1\",\r\n```\r\n</details>\r\n\r\n### Debug log\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n$description Russian live-streaming social platform.\n$url wasd.tv\n$type live\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, PluginError, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.hls import HLSStream\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?wasd\\.tv/(?P<nickname>[^/]+)/?$\",\n))\nclass WASD(Plugin):\n _media_schema = validate.Schema({\n \"user_id\": int,\n \"media_container_online_status\": str,\n \"media_container_status\": str,\n \"media_container_streams\": [{\n \"stream_media\": [{\n \"media_id\": int,\n \"media_meta\": {\n \"media_url\": validate.any(str, None),\n \"media_archive_url\": validate.any(str, None),\n },\n \"media_status\": validate.any(\"STOPPED\", \"RUNNING\"),\n \"media_type\": \"HLS\",\n }],\n }],\n })\n _api_schema = validate.Schema({\n \"result\":\n validate.any(\n _media_schema,\n validate.all(list,\n validate.get(0),\n _media_schema),\n [],\n ),\n }, validate.get(\"result\"))\n _api_nicknames_schema = validate.Schema({\n \"result\": {\n \"channel_id\": int,\n },\n }, validate.get(\"result\"), validate.get(\"channel_id\"))\n\n def _get_streams(self):\n nickname = self.match.group(\"nickname\")\n res = self.session.http.get(f\"https://wasd.tv/api/channels/nicknames/{nickname}\")\n channel_id = self.session.http.json(res, schema=self._api_nicknames_schema)\n\n res = self.session.http.get(\n \"https://wasd.tv/api/v2/media-containers\",\n params={\n \"media_container_status\": \"RUNNING\",\n \"limit\": \"1\",\n \"offset\": \"0\",\n \"channel_id\": channel_id,\n \"media_container_type\": \"SINGLE,COOP\",\n },\n )\n\n json_res = self.session.http.json(res, schema=self._api_schema)\n log.trace(\"{0!r}\".format(json_res))\n if not json_res:\n raise PluginError(\"No data returned from URL={0}\".format(res.url))\n\n for stream in json_res[\"media_container_streams\"]:\n log.debug(\"media_container_status: {0}, media_container_online_status: {1}\".format(\n json_res[\"media_container_status\"], 
json_res[\"media_container_online_status\"]))\n for stream_media in stream[\"stream_media\"]:\n if stream_media[\"media_status\"] == \"STOPPED\":\n hls_url = stream_media[\"media_meta\"][\"media_archive_url\"]\n else:\n hls_url = stream_media[\"media_meta\"][\"media_url\"]\n\n yield from HLSStream.parse_variant_playlist(self.session, hls_url).items()\n\n\n__plugin__ = WASD\n", "path": "src/streamlink/plugins/wasd.py"}]}
| 1,936 | 742 |
gh_patches_debug_26333
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-333
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create VerifierSessionRequired middleware that expects a verifier to exist in session
## Background
Once the session tracks the selected verifier in #321, we can make use of that infrastructure to put guards on certain view functions that require a verifier to be selected. The first step is to create a new middleware class that enforces the requirement.
This is similar to how the [`AgencySessionRequired`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/middleware.py#L20) and [`EligibleSessionRequired`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/middleware.py#L68) middleware are used.
## Tasks
- [x] Create a new middleware class like `VerifierSessionRequired` inheriting from `MiddlewareMixin`; see the other `*SessionRequired` classes as examples
- [x] In `process_request()`, check `session.verifier()` for the request.
- If `None`, raise an error to stop the request
- Otherwise return `None` to allow the request to continue
- [x] Apply this middleware to the following views to enforce that a verifier is needed:
- [x] [`eligibility:index`](https://github.com/cal-itp/benefits/blob/dev/benefits/eligibility/views.py#L16)
- [x] [`eligibility:confirm`](https://github.com/cal-itp/benefits/blob/dev/benefits/eligibility/views.py#L46)
</issue>
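A minimal sketch of what the tasks above describe, modeled on the existing `*SessionRequired` middleware in `benefits/core/middleware.py`; the class name follows the issue, while the log message and exception type are assumptions for illustration rather than the project's final wording:

```python
# Sketch only: mirrors the AgencySessionRequired / EligibleSessionRequired pattern.
import logging

from django.utils.deprecation import MiddlewareMixin

from . import session  # benefits.core.session, as in the existing middleware module

logger = logging.getLogger(__name__)


class VerifierSessionRequired(MiddlewareMixin):
    """Raise an error for sessions lacking an eligibility verifier configuration."""

    def process_request(self, request):
        if session.verifier(request):
            logger.debug("Session configured with eligibility verifier")
            return None
        raise AttributeError("Session not configured with eligibility verifier")
```

Views would then opt in the same way the other guards are applied, e.g. stacking `@decorator_from_middleware(middleware.VerifierSessionRequired)` above `eligibility:index` and `eligibility:confirm`.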
<code>
[start of benefits/core/middleware.py]
1 """
2 The core application: middleware definitions for request/response cycle.
3 """
4 import logging
5 import time
6
7 from django.http import HttpResponse, HttpResponseBadRequest
8 from django.template import loader
9 from django.utils.decorators import decorator_from_middleware
10 from django.utils.deprecation import MiddlewareMixin
11 from django.views import i18n
12
13 from benefits.settings import RATE_LIMIT, RATE_LIMIT_METHODS, RATE_LIMIT_PERIOD, DEBUG
14 from . import analytics, session, viewmodels
15
16
17 logger = logging.getLogger(__name__)
18
19
20 class AgencySessionRequired(MiddlewareMixin):
21 """Middleware raises an exception for sessions lacking an agency configuration."""
22
23 def process_request(self, request):
24 if session.active_agency(request):
25 logger.debug("Session configured with agency")
26 return None
27 else:
28 raise AttributeError("Session not configured with agency")
29
30
31 class RateLimit(MiddlewareMixin):
32 """Middleware checks settings and session to ensure rate limit is respected."""
33
34 def process_request(self, request):
35 if any((RATE_LIMIT < 1, len(RATE_LIMIT_METHODS) < 1, RATE_LIMIT_PERIOD < 1)):
36 logger.debug("RATE_LIMIT, RATE_LIMIT_METHODS, or RATE_LIMIT_PERIOD are not configured")
37 return None
38
39 if request.method in RATE_LIMIT_METHODS:
40 session.increment_rate_limit_counter(request)
41 else:
42 # bail early if the request method doesn't match
43 return None
44
45 counter = session.rate_limit_counter(request)
46 reset_time = session.rate_limit_time(request)
47 now = int(time.time())
48
49 if counter > RATE_LIMIT:
50 if reset_time > now:
51 logger.warn("Rate limit exceeded")
52 home = viewmodels.Button.home(request)
53 page = viewmodels.ErrorPage.error(
54 title="Rate limit error",
55 content_title="Rate limit error",
56 paragraphs=["You have reached the rate limit. Please try again."],
57 button=home,
58 )
59 t = loader.get_template("400.html")
60 return HttpResponseBadRequest(t.render(page.context_dict()))
61 else:
62 # enough time has passed, reset the rate limit
63 session.reset_rate_limit(request)
64
65 return None
66
67
68 class EligibleSessionRequired(MiddlewareMixin):
69 """Middleware raises an exception for sessions lacking confirmed eligibility."""
70
71 def process_request(self, request):
72 if session.eligible(request):
73 logger.debug("Session has confirmed eligibility")
74 return None
75 else:
76 raise AttributeError("Session has no confirmed eligibility")
77
78
79 class DebugSession(MiddlewareMixin):
80 """Middleware to configure debug context in the request session."""
81
82 def process_request(self, request):
83 session.update(request, debug=DEBUG)
84 return None
85
86
87 class Healthcheck:
88 """Middleware intercepts and accepts /healthcheck requests."""
89
90 def __init__(self, get_response):
91 self.get_response = get_response
92
93 def __call__(self, request):
94 if request.path == "/healthcheck":
95 return HttpResponse("Healthy", content_type="text/plain")
96 return self.get_response(request)
97
98
99 class ViewedPageEvent(MiddlewareMixin):
100 """Middleware sends an analytics event for page views."""
101
102 def process_response(self, request, response):
103 event = analytics.ViewedPageEvent(request)
104 try:
105 analytics.send_event(event)
106 except Exception:
107 logger.warning(f"Failed to send event: {event}")
108 finally:
109 return response
110
111
112 pageview_decorator = decorator_from_middleware(ViewedPageEvent)
113
114
115 class ChangedLanguageEvent(MiddlewareMixin):
116 """Middleware hooks into django.views.i18n.set_language to send an analytics event."""
117
118 def process_view(self, request, view_func, view_args, view_kwargs):
119 if view_func == i18n.set_language:
120 new_lang = request.POST["language"]
121 event = analytics.ChangedLanguageEvent(request, new_lang)
122 analytics.send_event(event)
123 return None
124
[end of benefits/core/middleware.py]
[start of benefits/eligibility/views.py]
1 """
2 The eligibility application: view definitions for the eligibility verification flow.
3 """
4 from django.contrib import messages
5 from django.shortcuts import redirect
6 from django.urls import reverse
7 from django.utils.decorators import decorator_from_middleware
8 from django.utils.translation import pgettext, gettext as _
9
10 from benefits.core import middleware, recaptcha, session, viewmodels
11 from benefits.core.views import PageTemplateResponse, _index_image
12 from . import analytics, api, forms
13
14
15 @decorator_from_middleware(middleware.AgencySessionRequired)
16 def index(request):
17 """View handler for the eligibility verification getting started screen."""
18
19 session.update(request, eligibility_types=[], origin=reverse("eligibility:index"))
20
21 page = viewmodels.Page(
22 title=_("eligibility.pages.index.title"),
23 content_title=_("eligibility.pages.index.content_title"),
24 media=[
25 viewmodels.MediaItem(
26 icon=viewmodels.Icon("idcardcheck", pgettext("image alt text", "core.icons.idcardcheck")),
27 heading=_("eligibility.pages.index.items[0].title"),
28 details=_("eligibility.pages.index.items[0].text"),
29 ),
30 viewmodels.MediaItem(
31 icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
32 heading=_("eligibility.pages.index.items[1].title"),
33 details=_("eligibility.pages.index.items[1].text"),
34 ),
35 ],
36 paragraphs=[_("eligibility.pages.index.p[0]")],
37 image=_index_image(),
38 button=viewmodels.Button.primary(text=_("eligibility.pages.index.button"), url=reverse("eligibility:confirm")),
39 )
40
41 return PageTemplateResponse(request, page)
42
43
44 @decorator_from_middleware(middleware.AgencySessionRequired)
45 @decorator_from_middleware(middleware.RateLimit)
46 def confirm(request):
47 """View handler for the eligibility verification form."""
48
49 page = viewmodels.Page(
50 title=_("eligibility.pages.confirm.title"),
51 content_title=_("eligibility.pages.confirm.content_title"),
52 paragraphs=[_("eligibility.pages.confirm.p[0]")],
53 form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=""),
54 classes="text-lg-center",
55 )
56
57 if request.method == "POST":
58 analytics.started_eligibility(request)
59
60 form = forms.EligibilityVerificationForm(request.POST)
61 response = _verify(request, form)
62
63 if response is None:
64 # form was not valid, allow for correction/resubmission
65 analytics.returned_error(request, form.errors)
66 page.forms = [form]
67 response = PageTemplateResponse(request, page)
68 elif session.eligible(request):
69 eligibility = session.eligibility(request)
70 response = verified(request, [eligibility.name])
71 else:
72 response = PageTemplateResponse(request, page)
73
74 return response
75
76
77 def _verify(request, form):
78 """Helper calls the eligibility verification API with user input."""
79
80 if not form.is_valid():
81 if recaptcha.has_error(form):
82 messages.error(request, "Recaptcha failed. Please try again.")
83 return None
84
85 sub, name = form.cleaned_data.get("sub"), form.cleaned_data.get("name")
86
87 agency = session.agency(request)
88 verifier = session.verifier(request)
89 client = api.Client(agency, verifier)
90
91 response = client.verify(sub, name)
92
93 if response.error and any(response.error):
94 form.add_api_errors(response.error)
95 return None
96 elif any(response.eligibility):
97 return verified(request, response.eligibility)
98 else:
99 return unverified(request)
100
101
102 @decorator_from_middleware(middleware.AgencySessionRequired)
103 def verified(request, verified_types):
104 """View handler for the verified eligibility page."""
105
106 analytics.returned_success(request)
107
108 enrollment_index = reverse("enrollment:index")
109 session.update(request, eligibility_types=verified_types, origin=enrollment_index)
110
111 return redirect(enrollment_index)
112
113
114 @decorator_from_middleware(middleware.AgencySessionRequired)
115 def unverified(request):
116 """View handler for the unverified eligibility page."""
117
118 analytics.returned_fail(request)
119
120 # tel: link to agency phone number
121 agency = session.agency(request)
122 buttons = viewmodels.Button.agency_contact_links(agency)
123
124 page = viewmodels.Page(
125 title=_("eligibility.pages.unverified.title"),
126 content_title=_("eligibility.pages.unverified.content_title"),
127 icon=viewmodels.Icon("idcardquestion", pgettext("image alt text", "core.icons.idcardquestion")),
128 paragraphs=[_("eligibility.pages.unverified.p[0]"), _("eligibility.pages.unverified.p[1]")],
129 buttons=buttons,
130 classes="text-lg-center",
131 )
132
133 return PageTemplateResponse(request, page)
134
[end of benefits/eligibility/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benefits/core/middleware.py b/benefits/core/middleware.py
--- a/benefits/core/middleware.py
+++ b/benefits/core/middleware.py
@@ -96,6 +96,17 @@
return self.get_response(request)
+class VerifierSessionRequired(MiddlewareMixin):
+ """Middleware raises an exception for sessions lacking an eligibility verifier configuration."""
+
+ def process_request(self, request):
+ if session.verifier(request):
+ logger.debug("Session configured with eligibility verifier")
+ return None
+ else:
+ raise AttributeError("Session not configured with eligibility verifier")
+
+
class ViewedPageEvent(MiddlewareMixin):
"""Middleware sends an analytics event for page views."""
diff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py
--- a/benefits/eligibility/views.py
+++ b/benefits/eligibility/views.py
@@ -13,6 +13,7 @@
@decorator_from_middleware(middleware.AgencySessionRequired)
+@decorator_from_middleware(middleware.VerifierSessionRequired)
def index(request):
"""View handler for the eligibility verification getting started screen."""
@@ -43,6 +44,7 @@
@decorator_from_middleware(middleware.AgencySessionRequired)
@decorator_from_middleware(middleware.RateLimit)
+@decorator_from_middleware(middleware.VerifierSessionRequired)
def confirm(request):
"""View handler for the eligibility verification form."""
|
{"golden_diff": "diff --git a/benefits/core/middleware.py b/benefits/core/middleware.py\n--- a/benefits/core/middleware.py\n+++ b/benefits/core/middleware.py\n@@ -96,6 +96,17 @@\n return self.get_response(request)\n \n \n+class VerifierSessionRequired(MiddlewareMixin):\n+ \"\"\"Middleware raises an exception for sessions lacking an eligibility verifier configuration.\"\"\"\n+\n+ def process_request(self, request):\n+ if session.verifier(request):\n+ logger.debug(\"Session configured with eligibility verifier\")\n+ return None\n+ else:\n+ raise AttributeError(\"Session not configured with eligibility verifier\")\n+\n+\n class ViewedPageEvent(MiddlewareMixin):\n \"\"\"Middleware sends an analytics event for page views.\"\"\"\n \ndiff --git a/benefits/eligibility/views.py b/benefits/eligibility/views.py\n--- a/benefits/eligibility/views.py\n+++ b/benefits/eligibility/views.py\n@@ -13,6 +13,7 @@\n \n \n @decorator_from_middleware(middleware.AgencySessionRequired)\n+@decorator_from_middleware(middleware.VerifierSessionRequired)\n def index(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n \n@@ -43,6 +44,7 @@\n \n @decorator_from_middleware(middleware.AgencySessionRequired)\n @decorator_from_middleware(middleware.RateLimit)\n+@decorator_from_middleware(middleware.VerifierSessionRequired)\n def confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n", "issue": "Create VerifierSessionRequired middleware that expects a verifier to exist in session\n## Background\r\n\r\nOnce the session tracks the selected verifier in #321, we can make use of that infrastructure to put guards on certain view functions that require a verifier to be selected. The first step is to create a new middleware class that enforces the requirement.\r\n\r\nThis is similar to how the [`AgencySessionRequired`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/middleware.py#L20) and [`EligibileSessionRequired`](https://github.com/cal-itp/benefits/blob/dev/benefits/core/middleware.py#L68) middleware are used.\r\n\r\n## Tasks\r\n\r\n- [x] Create a new middleware class like `VerifierSessionRequired` inheriting from `MiddlewareMixin`, see the other `*SessionRequired` as examples\r\n- [x] In `process_request()`, check `session.verifier()` for the request.\r\n - If `None`, raise an error to stop the request\r\n - Otherwise return `None` to allow the request to continue\r\n- [x] Apply this middleware to the following views to enforce that a verifier is needed:\r\n - [x] [`eligibility:index`](https://github.com/cal-itp/benefits/blob/dev/benefits/eligibility/views.py#L16)\r\n - [x] [`eligibility:confirm`](https://github.com/cal-itp/benefits/blob/dev/benefits/eligibility/views.py#L46)\r\n\n", "before_files": [{"content": "\"\"\"\nThe core application: middleware definitions for request/response cycle.\n\"\"\"\nimport logging\nimport time\n\nfrom django.http import HttpResponse, HttpResponseBadRequest\nfrom django.template import loader\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.deprecation import MiddlewareMixin\nfrom django.views import i18n\n\nfrom benefits.settings import RATE_LIMIT, RATE_LIMIT_METHODS, RATE_LIMIT_PERIOD, DEBUG\nfrom . 
import analytics, session, viewmodels\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass AgencySessionRequired(MiddlewareMixin):\n \"\"\"Middleware raises an exception for sessions lacking an agency configuration.\"\"\"\n\n def process_request(self, request):\n if session.active_agency(request):\n logger.debug(\"Session configured with agency\")\n return None\n else:\n raise AttributeError(\"Session not configured with agency\")\n\n\nclass RateLimit(MiddlewareMixin):\n \"\"\"Middleware checks settings and session to ensure rate limit is respected.\"\"\"\n\n def process_request(self, request):\n if any((RATE_LIMIT < 1, len(RATE_LIMIT_METHODS) < 1, RATE_LIMIT_PERIOD < 1)):\n logger.debug(\"RATE_LIMIT, RATE_LIMIT_METHODS, or RATE_LIMIT_PERIOD are not configured\")\n return None\n\n if request.method in RATE_LIMIT_METHODS:\n session.increment_rate_limit_counter(request)\n else:\n # bail early if the request method doesn't match\n return None\n\n counter = session.rate_limit_counter(request)\n reset_time = session.rate_limit_time(request)\n now = int(time.time())\n\n if counter > RATE_LIMIT:\n if reset_time > now:\n logger.warn(\"Rate limit exceeded\")\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.error(\n title=\"Rate limit error\",\n content_title=\"Rate limit error\",\n paragraphs=[\"You have reached the rate limit. Please try again.\"],\n button=home,\n )\n t = loader.get_template(\"400.html\")\n return HttpResponseBadRequest(t.render(page.context_dict()))\n else:\n # enough time has passed, reset the rate limit\n session.reset_rate_limit(request)\n\n return None\n\n\nclass EligibleSessionRequired(MiddlewareMixin):\n \"\"\"Middleware raises an exception for sessions lacking confirmed eligibility.\"\"\"\n\n def process_request(self, request):\n if session.eligible(request):\n logger.debug(\"Session has confirmed eligibility\")\n return None\n else:\n raise AttributeError(\"Session has no confirmed eligibility\")\n\n\nclass DebugSession(MiddlewareMixin):\n \"\"\"Middleware to configure debug context in the request session.\"\"\"\n\n def process_request(self, request):\n session.update(request, debug=DEBUG)\n return None\n\n\nclass Healthcheck:\n \"\"\"Middleware intercepts and accepts /healthcheck requests.\"\"\"\n\n def __init__(self, get_response):\n self.get_response = get_response\n\n def __call__(self, request):\n if request.path == \"/healthcheck\":\n return HttpResponse(\"Healthy\", content_type=\"text/plain\")\n return self.get_response(request)\n\n\nclass ViewedPageEvent(MiddlewareMixin):\n \"\"\"Middleware sends an analytics event for page views.\"\"\"\n\n def process_response(self, request, response):\n event = analytics.ViewedPageEvent(request)\n try:\n analytics.send_event(event)\n except Exception:\n logger.warning(f\"Failed to send event: {event}\")\n finally:\n return response\n\n\npageview_decorator = decorator_from_middleware(ViewedPageEvent)\n\n\nclass ChangedLanguageEvent(MiddlewareMixin):\n \"\"\"Middleware hooks into django.views.i18n.set_language to send an analytics event.\"\"\"\n\n def process_view(self, request, view_func, view_args, view_kwargs):\n if view_func == i18n.set_language:\n new_lang = request.POST[\"language\"]\n event = analytics.ChangedLanguageEvent(request, new_lang)\n analytics.send_event(event)\n return None\n", "path": "benefits/core/middleware.py"}, {"content": "\"\"\"\nThe eligibility application: view definitions for the eligibility verification flow.\n\"\"\"\nfrom django.contrib import messages\nfrom django.shortcuts 
import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import middleware, recaptcha, session, viewmodels\nfrom benefits.core.views import PageTemplateResponse, _index_image\nfrom . import analytics, api, forms\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef index(request):\n \"\"\"View handler for the eligibility verification getting started screen.\"\"\"\n\n session.update(request, eligibility_types=[], origin=reverse(\"eligibility:index\"))\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.index.title\"),\n content_title=_(\"eligibility.pages.index.content_title\"),\n media=[\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"idcardcheck\", pgettext(\"image alt text\", \"core.icons.idcardcheck\")),\n heading=_(\"eligibility.pages.index.items[0].title\"),\n details=_(\"eligibility.pages.index.items[0].text\"),\n ),\n viewmodels.MediaItem(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"eligibility.pages.index.items[1].title\"),\n details=_(\"eligibility.pages.index.items[1].text\"),\n ),\n ],\n paragraphs=[_(\"eligibility.pages.index.p[0]\")],\n image=_index_image(),\n button=viewmodels.Button.primary(text=_(\"eligibility.pages.index.button\"), url=reverse(\"eligibility:confirm\")),\n )\n\n return PageTemplateResponse(request, page)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\n@decorator_from_middleware(middleware.RateLimit)\ndef confirm(request):\n \"\"\"View handler for the eligibility verification form.\"\"\"\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.confirm.title\"),\n content_title=_(\"eligibility.pages.confirm.content_title\"),\n paragraphs=[_(\"eligibility.pages.confirm.p[0]\")],\n form=forms.EligibilityVerificationForm(auto_id=True, label_suffix=\"\"),\n classes=\"text-lg-center\",\n )\n\n if request.method == \"POST\":\n analytics.started_eligibility(request)\n\n form = forms.EligibilityVerificationForm(request.POST)\n response = _verify(request, form)\n\n if response is None:\n # form was not valid, allow for correction/resubmission\n analytics.returned_error(request, form.errors)\n page.forms = [form]\n response = PageTemplateResponse(request, page)\n elif session.eligible(request):\n eligibility = session.eligibility(request)\n response = verified(request, [eligibility.name])\n else:\n response = PageTemplateResponse(request, page)\n\n return response\n\n\ndef _verify(request, form):\n \"\"\"Helper calls the eligibility verification API with user input.\"\"\"\n\n if not form.is_valid():\n if recaptcha.has_error(form):\n messages.error(request, \"Recaptcha failed. 
Please try again.\")\n return None\n\n sub, name = form.cleaned_data.get(\"sub\"), form.cleaned_data.get(\"name\")\n\n agency = session.agency(request)\n verifier = session.verifier(request)\n client = api.Client(agency, verifier)\n\n response = client.verify(sub, name)\n\n if response.error and any(response.error):\n form.add_api_errors(response.error)\n return None\n elif any(response.eligibility):\n return verified(request, response.eligibility)\n else:\n return unverified(request)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef verified(request, verified_types):\n \"\"\"View handler for the verified eligibility page.\"\"\"\n\n analytics.returned_success(request)\n\n enrollment_index = reverse(\"enrollment:index\")\n session.update(request, eligibility_types=verified_types, origin=enrollment_index)\n\n return redirect(enrollment_index)\n\n\n@decorator_from_middleware(middleware.AgencySessionRequired)\ndef unverified(request):\n \"\"\"View handler for the unverified eligibility page.\"\"\"\n\n analytics.returned_fail(request)\n\n # tel: link to agency phone number\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n\n page = viewmodels.Page(\n title=_(\"eligibility.pages.unverified.title\"),\n content_title=_(\"eligibility.pages.unverified.content_title\"),\n icon=viewmodels.Icon(\"idcardquestion\", pgettext(\"image alt text\", \"core.icons.idcardquestion\")),\n paragraphs=[_(\"eligibility.pages.unverified.p[0]\"), _(\"eligibility.pages.unverified.p[1]\")],\n buttons=buttons,\n classes=\"text-lg-center\",\n )\n\n return PageTemplateResponse(request, page)\n", "path": "benefits/eligibility/views.py"}]}
| 3,230 | 326 |
gh_patches_debug_27898
|
rasdani/github-patches
|
git_diff
|
pypa__pip-4046
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip freeze --requirement doesn't accept inline comments
- Pip version: 8.1.2
- Python version: 2.7.11
- Operating System: Mac OS X
### Description:
pip freeze --requirement doesn't accept inline comments
### What I've run:
```
pip freeze -r requirements.txt
```
Output:
```
Invalid requirement: 'alembic==0.8.6 # MIT license'
Traceback (most recent call last):
File ".../site-packages/pip/req/req_install.py", line 78, in __init__
req = Requirement(req)
File ".../site-packages/pip/_vendor/packaging/requirements.py", line 96, in __init__
requirement_string[e.loc:e.loc + 8]))
InvalidRequirement: Invalid requirement, parse error at "'# MIT li'"
```
requirements.txt:
```
alembic==0.8.6 # MIT license
Babel==2.3.4 # BSD license
```
`pip install -r` works for this requirements.txt file.
Documentation states:
> Whitespace followed by a # causes the # and the remainder of the line to be treated as a comment.
</issue>
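The install path strips trailing comments before parsing each requirement, while the freeze path shown below passes the raw line straight to `InstallRequirement.from_line`, which is why the parser chokes on `# MIT license`. A rough sketch of the stripping step (the regex here is written for illustration and follows the documented "whitespace followed by #" rule; the actual fix reuses the existing comment pattern from `pip/req/req_file.py`):

```python
import re

# Assumption for this sketch: drop "#" and the rest of the line when the "#" is at
# the start of the line or preceded by whitespace, per the requirements-file docs.
COMMENT_RE = re.compile(r"(^|\s+)#.*$")


def strip_inline_comment(line):
    """Return the requirement line with any trailing comment removed."""
    return COMMENT_RE.sub("", line).strip()


print(strip_inline_comment("alembic==0.8.6  # MIT license"))  # -> alembic==0.8.6
print(strip_inline_comment("# a whole-line comment"))         # -> (empty string)
print(strip_inline_comment("Babel==2.3.4"))                   # -> Babel==2.3.4
```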
<code>
[start of pip/operations/freeze.py]
1 from __future__ import absolute_import
2
3 import logging
4 import re
5
6 import pip
7 from pip.req import InstallRequirement
8 from pip.utils import get_installed_distributions
9 from pip._vendor import pkg_resources
10 from pip._vendor.packaging.utils import canonicalize_name
11 from pip._vendor.pkg_resources import RequirementParseError
12
13
14 logger = logging.getLogger(__name__)
15
16
17 def freeze(
18 requirement=None,
19 find_links=None, local_only=None, user_only=None, skip_regex=None,
20 default_vcs=None,
21 isolated=False,
22 wheel_cache=None,
23 skip=()):
24 find_links = find_links or []
25 skip_match = None
26
27 if skip_regex:
28 skip_match = re.compile(skip_regex).search
29
30 dependency_links = []
31
32 for dist in pkg_resources.working_set:
33 if dist.has_metadata('dependency_links.txt'):
34 dependency_links.extend(
35 dist.get_metadata_lines('dependency_links.txt')
36 )
37 for link in find_links:
38 if '#egg=' in link:
39 dependency_links.append(link)
40 for link in find_links:
41 yield '-f %s' % link
42 installations = {}
43 for dist in get_installed_distributions(local_only=local_only,
44 skip=(),
45 user_only=user_only):
46 try:
47 req = pip.FrozenRequirement.from_dist(
48 dist,
49 dependency_links
50 )
51 except RequirementParseError:
52 logger.warning(
53 "Could not parse requirement: %s",
54 dist.project_name
55 )
56 continue
57 installations[req.name] = req
58
59 if requirement:
60 # the options that don't get turned into an InstallRequirement
61 # should only be emitted once, even if the same option is in multiple
62 # requirements files, so we need to keep track of what has been emitted
63 # so that we don't emit it again if it's seen again
64 emitted_options = set()
65 for req_file_path in requirement:
66 with open(req_file_path) as req_file:
67 for line in req_file:
68 if (not line.strip() or
69 line.strip().startswith('#') or
70 (skip_match and skip_match(line)) or
71 line.startswith((
72 '-r', '--requirement',
73 '-Z', '--always-unzip',
74 '-f', '--find-links',
75 '-i', '--index-url',
76 '--pre',
77 '--trusted-host',
78 '--process-dependency-links',
79 '--extra-index-url'))):
80 line = line.rstrip()
81 if line not in emitted_options:
82 emitted_options.add(line)
83 yield line
84 continue
85
86 if line.startswith('-e') or line.startswith('--editable'):
87 if line.startswith('-e'):
88 line = line[2:].strip()
89 else:
90 line = line[len('--editable'):].strip().lstrip('=')
91 line_req = InstallRequirement.from_editable(
92 line,
93 default_vcs=default_vcs,
94 isolated=isolated,
95 wheel_cache=wheel_cache,
96 )
97 else:
98 line_req = InstallRequirement.from_line(
99 line,
100 isolated=isolated,
101 wheel_cache=wheel_cache,
102 )
103
104 if not line_req.name:
105 logger.info(
106 "Skipping line in requirement file [%s] because "
107 "it's not clear what it would install: %s",
108 req_file_path, line.strip(),
109 )
110 logger.info(
111 " (add #egg=PackageName to the URL to avoid"
112 " this warning)"
113 )
114 elif line_req.name not in installations:
115 logger.warning(
116 "Requirement file [%s] contains %s, but that "
117 "package is not installed",
118 req_file_path, line.strip(),
119 )
120 else:
121 yield str(installations[line_req.name]).rstrip()
122 del installations[line_req.name]
123
124 yield(
125 '## The following requirements were added by '
126 'pip freeze:'
127 )
128 for installation in sorted(
129 installations.values(), key=lambda x: x.name.lower()):
130 if canonicalize_name(installation.name) not in skip:
131 yield str(installation).rstrip()
132
[end of pip/operations/freeze.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pip/operations/freeze.py b/pip/operations/freeze.py
--- a/pip/operations/freeze.py
+++ b/pip/operations/freeze.py
@@ -5,6 +5,7 @@
import pip
from pip.req import InstallRequirement
+from pip.req.req_file import COMMENT_RE
from pip.utils import get_installed_distributions
from pip._vendor import pkg_resources
from pip._vendor.packaging.utils import canonicalize_name
@@ -96,7 +97,7 @@
)
else:
line_req = InstallRequirement.from_line(
- line,
+ COMMENT_RE.sub('', line).strip(),
isolated=isolated,
wheel_cache=wheel_cache,
)
@@ -115,7 +116,7 @@
logger.warning(
"Requirement file [%s] contains %s, but that "
"package is not installed",
- req_file_path, line.strip(),
+ req_file_path, COMMENT_RE.sub('', line).strip(),
)
else:
yield str(installations[line_req.name]).rstrip()
|
{"golden_diff": "diff --git a/pip/operations/freeze.py b/pip/operations/freeze.py\n--- a/pip/operations/freeze.py\n+++ b/pip/operations/freeze.py\n@@ -5,6 +5,7 @@\n \n import pip\n from pip.req import InstallRequirement\n+from pip.req.req_file import COMMENT_RE\n from pip.utils import get_installed_distributions\n from pip._vendor import pkg_resources\n from pip._vendor.packaging.utils import canonicalize_name\n@@ -96,7 +97,7 @@\n )\n else:\n line_req = InstallRequirement.from_line(\n- line,\n+ COMMENT_RE.sub('', line).strip(),\n isolated=isolated,\n wheel_cache=wheel_cache,\n )\n@@ -115,7 +116,7 @@\n logger.warning(\n \"Requirement file [%s] contains %s, but that \"\n \"package is not installed\",\n- req_file_path, line.strip(),\n+ req_file_path, COMMENT_RE.sub('', line).strip(),\n )\n else:\n yield str(installations[line_req.name]).rstrip()\n", "issue": "pip freeze --requirement doesn't accept inline comments\n- Pip version: 8.1.2\n- Python version: 2.7.11\n- Operating System: Mac OS X\n### Description:\n\npip freeze --requirement doesn't accept inline comments\n### What I've run:\n\n```\npip freeze -r requirements.txt\n```\n\nOutput:\n\n```\nInvalid requirement: 'alembic==0.8.6 # MIT license'\nTraceback (most recent call last):\n File \".../site-packages/pip/req/req_install.py\", line 78, in __init__\n req = Requirement(req)\n File \".../site-packages/pip/_vendor/packaging/requirements.py\", line 96, in __init__\n requirement_string[e.loc:e.loc + 8]))\nInvalidRequirement: Invalid requirement, parse error at \"'# MIT li'\"\n```\n\nrequirements.txt:\n\n```\nalembic==0.8.6 # MIT license\nBabel==2.3.4 # BSD license\n```\n\n`pip install -r` works for this requirements.txt file.\n\nDocumentation states:\n\n> Whitespace followed by a # causes the # and the remainder of the line to be treated as a comment.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport re\n\nimport pip\nfrom pip.req import InstallRequirement\nfrom pip.utils import get_installed_distributions\nfrom pip._vendor import pkg_resources\nfrom pip._vendor.packaging.utils import canonicalize_name\nfrom pip._vendor.pkg_resources import RequirementParseError\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef freeze(\n requirement=None,\n find_links=None, local_only=None, user_only=None, skip_regex=None,\n default_vcs=None,\n isolated=False,\n wheel_cache=None,\n skip=()):\n find_links = find_links or []\n skip_match = None\n\n if skip_regex:\n skip_match = re.compile(skip_regex).search\n\n dependency_links = []\n\n for dist in pkg_resources.working_set:\n if dist.has_metadata('dependency_links.txt'):\n dependency_links.extend(\n dist.get_metadata_lines('dependency_links.txt')\n )\n for link in find_links:\n if '#egg=' in link:\n dependency_links.append(link)\n for link in find_links:\n yield '-f %s' % link\n installations = {}\n for dist in get_installed_distributions(local_only=local_only,\n skip=(),\n user_only=user_only):\n try:\n req = pip.FrozenRequirement.from_dist(\n dist,\n dependency_links\n )\n except RequirementParseError:\n logger.warning(\n \"Could not parse requirement: %s\",\n dist.project_name\n )\n continue\n installations[req.name] = req\n\n if requirement:\n # the options that don't get turned into an InstallRequirement\n # should only be emitted once, even if the same option is in multiple\n # requirements files, so we need to keep track of what has been emitted\n # so that we don't emit it again if it's seen again\n emitted_options = set()\n for req_file_path in 
requirement:\n with open(req_file_path) as req_file:\n for line in req_file:\n if (not line.strip() or\n line.strip().startswith('#') or\n (skip_match and skip_match(line)) or\n line.startswith((\n '-r', '--requirement',\n '-Z', '--always-unzip',\n '-f', '--find-links',\n '-i', '--index-url',\n '--pre',\n '--trusted-host',\n '--process-dependency-links',\n '--extra-index-url'))):\n line = line.rstrip()\n if line not in emitted_options:\n emitted_options.add(line)\n yield line\n continue\n\n if line.startswith('-e') or line.startswith('--editable'):\n if line.startswith('-e'):\n line = line[2:].strip()\n else:\n line = line[len('--editable'):].strip().lstrip('=')\n line_req = InstallRequirement.from_editable(\n line,\n default_vcs=default_vcs,\n isolated=isolated,\n wheel_cache=wheel_cache,\n )\n else:\n line_req = InstallRequirement.from_line(\n line,\n isolated=isolated,\n wheel_cache=wheel_cache,\n )\n\n if not line_req.name:\n logger.info(\n \"Skipping line in requirement file [%s] because \"\n \"it's not clear what it would install: %s\",\n req_file_path, line.strip(),\n )\n logger.info(\n \" (add #egg=PackageName to the URL to avoid\"\n \" this warning)\"\n )\n elif line_req.name not in installations:\n logger.warning(\n \"Requirement file [%s] contains %s, but that \"\n \"package is not installed\",\n req_file_path, line.strip(),\n )\n else:\n yield str(installations[line_req.name]).rstrip()\n del installations[line_req.name]\n\n yield(\n '## The following requirements were added by '\n 'pip freeze:'\n )\n for installation in sorted(\n installations.values(), key=lambda x: x.name.lower()):\n if canonicalize_name(installation.name) not in skip:\n yield str(installation).rstrip()\n", "path": "pip/operations/freeze.py"}]}
| 1,939 | 234 |
gh_patches_debug_33248
|
rasdani/github-patches
|
git_diff
|
WeblateOrg__weblate-9101
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Checking "Needs editing" on a translated entry trigger "Has been translated" warning
**Describe the bug**
After an entry has already been translated (even if it is already marked as "Needs editing"), modifying the translation while checking (or keeping) "Needs editing" triggers the "Has been translated" warning.
I think it shouldn't trigger that warning: the message is misleading, and in any case the report already marks entries that need editing in red.
**To Reproduce the bug**
1. Go to an entry for a component (.po in my case)
2. Translate for the first time the entry and click Save.
3. Go to that entry again, click on "Needs editing" and then Save.
4. The warning will appear.
**Expected behavior**
This specific warning shouldn't appear every time a translation is saved with "Needs editing" checked. It adds nothing actionable here, since the user is already marking the entry as needing further work.
**Additional context**
See also #2935
</issue>
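The check reports the most recent differing target from the unit's change history, so any later save of an unfinished string resurfaces it. The patch further down in this record addresses that by stopping the history walk at an intentional "needs editing" mark and skipping automatic-translation entries; a condensed sketch of those two guards is below (action names come from Weblate's `Change` model, and the exact rule set should be treated as illustrative):

```python
# Sketch of the guards described above; assumes Weblate's Change/STATE constants.
from weblate.trans.models import Change
from weblate.utils.state import STATE_TRANSLATED


def should_break_changes(change):
    """Stop scanning history on a source change or an intentional 'needs editing' mark."""
    return change.action in (Change.ACTION_SOURCE_CHANGE, Change.ACTION_MARKED_EDIT)


def should_skip_change(change):
    """Ignore automatic-translation entries that merely stored a 'needs editing' state."""
    return (
        change.action == Change.ACTION_AUTO
        and change.details.get("state", STATE_TRANSLATED) < STATE_TRANSLATED
    )
```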
<code>
[start of weblate/checks/consistency.py]
1 # Copyright © Michal Čihař <[email protected]>
2 #
3 # SPDX-License-Identifier: GPL-3.0-or-later
4
5 from functools import reduce
6
7 from django.db.models import Count, Prefetch, Q
8 from django.utils.translation import gettext_lazy as _
9
10 from weblate.checks.base import TargetCheck
11 from weblate.utils.state import STATE_TRANSLATED
12
13
14 class PluralsCheck(TargetCheck):
15 """Check for incomplete plural forms."""
16
17 check_id = "plurals"
18 name = _("Missing plurals")
19 description = _("Some plural forms are untranslated")
20
21 def should_skip(self, unit):
22 if unit.translation.component.is_multivalue:
23 return True
24 return super().should_skip(unit)
25
26 def check_target_unit(self, sources, targets, unit):
27 # Is this plural?
28 if len(sources) == 1:
29 return False
30 # Is at least something translated?
31 if targets == len(targets) * [""]:
32 return False
33 # Check for empty translation
34 return "" in targets
35
36 def check_single(self, source, target, unit):
37 """We don't check target strings here."""
38 return False
39
40
41 class SamePluralsCheck(TargetCheck):
42 """Check for same plural forms."""
43
44 check_id = "same-plurals"
45 name = _("Same plurals")
46 description = _("Some plural forms are translated in the same way")
47
48 def check_target_unit(self, sources, targets, unit):
49 # Is this plural?
50 if len(sources) == 1 or len(targets) == 1:
51 return False
52 if not targets[0]:
53 return False
54 return len(set(targets)) == 1
55
56 def check_single(self, source, target, unit):
57 """We don't check target strings here."""
58 return False
59
60
61 class ConsistencyCheck(TargetCheck):
62 """Check for inconsistent translations."""
63
64 check_id = "inconsistent"
65 name = _("Inconsistent")
66 description = _(
67 "This string has more than one translation in this project "
68 "or is untranslated in some components."
69 )
70 ignore_untranslated = False
71 propagates = True
72 batch_project_wide = True
73 skip_suggestions = True
74
75 def check_target_unit(self, sources, targets, unit):
76 component = unit.translation.component
77 if not component.allow_translation_propagation:
78 return False
79
80 # Use last result if checks are batched
81 if component.batch_checks:
82 return self.handle_batch(unit, component)
83
84 for other in unit.same_source_units:
85 if unit.target == other.target:
86 continue
87 if unit.translated or other.translated:
88 return True
89 return False
90
91 def check_single(self, source, target, unit):
92 """We don't check target strings here."""
93 return False
94
95 def check_component(self, component):
96 from weblate.trans.models import Unit
97
98 units = Unit.objects.filter(
99 translation__component__project=component.project,
100 translation__component__allow_translation_propagation=True,
101 )
102
103 # List strings with different targets
104 # Limit this to 100 strings, otherwise the resulting query is way too complex
105 matches = (
106 units.values("id_hash", "translation__language", "translation__plural")
107 .annotate(Count("target", distinct=True))
108 .filter(target__count__gt=1)
109 .order_by("id_hash")[:100]
110 )
111
112 if not matches:
113 return []
114
115 return (
116 units.filter(
117 reduce(
118 lambda x, y: x
119 | (
120 Q(id_hash=y["id_hash"])
121 & Q(translation__language=y["translation__language"])
122 & Q(translation__plural=y["translation__plural"])
123 ),
124 matches,
125 Q(),
126 )
127 )
128 .prefetch()
129 .prefetch_bulk()
130 )
131
132
133 class TranslatedCheck(TargetCheck):
134 """Check for inconsistent translations."""
135
136 check_id = "translated"
137 name = _("Has been translated")
138 description = _("This string has been translated in the past")
139 ignore_untranslated = False
140 skip_suggestions = True
141
142 def get_description(self, check_obj):
143 unit = check_obj.unit
144 target = self.check_target_unit(unit.source, unit.target, unit)
145 if not target:
146 return super().get_description(check_obj)
147 return _('Previous translation was "%s".') % target
148
149 def check_target_unit(self, sources, targets, unit):
150 if unit.translated:
151 return False
152
153 component = unit.translation.component
154
155 if component.batch_checks:
156 return self.handle_batch(unit, component)
157
158 from weblate.trans.models import Change
159
160 changes = unit.change_set.filter(action__in=Change.ACTIONS_CONTENT).order()
161
162 for action, target in changes.values_list("action", "target"):
163 if action == Change.ACTION_SOURCE_CHANGE:
164 break
165 if target and target != unit.target:
166 return target
167
168 return False
169
170 def check_single(self, source, target, unit):
171 """We don't check target strings here."""
172 return False
173
174 def get_fixup(self, unit):
175 target = self.check_target_unit(unit.source, unit.target, unit)
176 if not target:
177 return None
178 return [(".*", target, "u")]
179
180 def check_component(self, component):
181 from weblate.trans.models import Change, Unit
182
183 units = (
184 Unit.objects.filter(
185 translation__component=component,
186 change__action__in=Change.ACTIONS_CONTENT,
187 state__lt=STATE_TRANSLATED,
188 )
189 .prefetch_related(
190 Prefetch(
191 "change_set",
192 queryset=Change.objects.filter(
193 action__in=Change.ACTIONS_CONTENT,
194 ).order(),
195 to_attr="recent_consistency_changes",
196 )
197 )
198 .prefetch()
199 .prefetch_bulk()
200 )
201
202 for unit in units:
203 for change in unit.recent_consistency_changes:
204 if change.action == Change.ACTION_SOURCE_CHANGE:
205 break
206 if change.target:
207 yield unit
208
[end of weblate/checks/consistency.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/weblate/checks/consistency.py b/weblate/checks/consistency.py
--- a/weblate/checks/consistency.py
+++ b/weblate/checks/consistency.py
@@ -146,6 +146,23 @@
return super().get_description(check_obj)
return _('Previous translation was "%s".') % target
+ def should_skip_change(self, change, unit):
+ from weblate.trans.models import Change
+
+ # Skip automatic translation entries adding needs editing string
+ return (
+ change.action == Change.ACTION_AUTO
+ and change.details.get("state", STATE_TRANSLATED) < STATE_TRANSLATED
+ )
+
+ @staticmethod
+ def should_break_changes(change):
+ from weblate.trans.models import Change
+
+ # Stop changes processin on source string change or on
+ # intentional marking as needing edit
+ return change.action in (Change.ACTION_SOURCE_CHANGE, Change.ACTION_MARKED_EDIT)
+
def check_target_unit(self, sources, targets, unit):
if unit.translated:
return False
@@ -159,11 +176,13 @@
changes = unit.change_set.filter(action__in=Change.ACTIONS_CONTENT).order()
- for action, target in changes.values_list("action", "target"):
- if action == Change.ACTION_SOURCE_CHANGE:
+ for change in changes:
+ if self.should_break_changes(change):
break
- if target and target != unit.target:
- return target
+ if self.should_skip_change(change, unit):
+ continue
+ if change.target and change.target != unit.target:
+ return change.target
return False
@@ -201,7 +220,9 @@
for unit in units:
for change in unit.recent_consistency_changes:
- if change.action == Change.ACTION_SOURCE_CHANGE:
+ if self.should_break_changes(change):
break
+ if self.should_skip_change(change, unit):
+ continue
if change.target:
yield unit
|
{"golden_diff": "diff --git a/weblate/checks/consistency.py b/weblate/checks/consistency.py\n--- a/weblate/checks/consistency.py\n+++ b/weblate/checks/consistency.py\n@@ -146,6 +146,23 @@\n return super().get_description(check_obj)\n return _('Previous translation was \"%s\".') % target\n \n+ def should_skip_change(self, change, unit):\n+ from weblate.trans.models import Change\n+\n+ # Skip automatic translation entries adding needs editing string\n+ return (\n+ change.action == Change.ACTION_AUTO\n+ and change.details.get(\"state\", STATE_TRANSLATED) < STATE_TRANSLATED\n+ )\n+\n+ @staticmethod\n+ def should_break_changes(change):\n+ from weblate.trans.models import Change\n+\n+ # Stop changes processin on source string change or on\n+ # intentional marking as needing edit\n+ return change.action in (Change.ACTION_SOURCE_CHANGE, Change.ACTION_MARKED_EDIT)\n+\n def check_target_unit(self, sources, targets, unit):\n if unit.translated:\n return False\n@@ -159,11 +176,13 @@\n \n changes = unit.change_set.filter(action__in=Change.ACTIONS_CONTENT).order()\n \n- for action, target in changes.values_list(\"action\", \"target\"):\n- if action == Change.ACTION_SOURCE_CHANGE:\n+ for change in changes:\n+ if self.should_break_changes(change):\n break\n- if target and target != unit.target:\n- return target\n+ if self.should_skip_change(change, unit):\n+ continue\n+ if change.target and change.target != unit.target:\n+ return change.target\n \n return False\n \n@@ -201,7 +220,9 @@\n \n for unit in units:\n for change in unit.recent_consistency_changes:\n- if change.action == Change.ACTION_SOURCE_CHANGE:\n+ if self.should_break_changes(change):\n break\n+ if self.should_skip_change(change, unit):\n+ continue\n if change.target:\n yield unit\n", "issue": "Checking \"Needs editing\" on a translated entry trigger \"Has been translated\" warning \n**Describe the bug**\r\n\r\nAfter an entry has been already translated (even if it's already marked as \"Need editing\"), if the translation is modified and the user adds (or keeps) the \"Need editing\" checked, it will trigger the warning \"Has been translated\".\r\n\r\nI think it shouldn't trigger that warning at least, the message is misleading and in any case the report already marks the entry that needs editing as red.\r\n\r\n**To Reproduce the bug**\r\n\r\n1. Go to an entry for a component (.po in my case)\r\n2. Translate for the first time the entry and click Save.\r\n3. Go to that entry again, click on \"Needs editing\" and then Save.\r\n4. The warning will appear.\r\n\r\n**Expected behavior**\r\n\r\nThis specific warning shouldn't show every time a translation is made and Needs editing is there. 
It's not a warning and the user is already marking as needing some action.\r\n\r\n**Additional context**\r\n\r\nSee also #2935\r\n\n", "before_files": [{"content": "# Copyright \u00a9 Michal \u010ciha\u0159 <[email protected]>\n#\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom functools import reduce\n\nfrom django.db.models import Count, Prefetch, Q\nfrom django.utils.translation import gettext_lazy as _\n\nfrom weblate.checks.base import TargetCheck\nfrom weblate.utils.state import STATE_TRANSLATED\n\n\nclass PluralsCheck(TargetCheck):\n \"\"\"Check for incomplete plural forms.\"\"\"\n\n check_id = \"plurals\"\n name = _(\"Missing plurals\")\n description = _(\"Some plural forms are untranslated\")\n\n def should_skip(self, unit):\n if unit.translation.component.is_multivalue:\n return True\n return super().should_skip(unit)\n\n def check_target_unit(self, sources, targets, unit):\n # Is this plural?\n if len(sources) == 1:\n return False\n # Is at least something translated?\n if targets == len(targets) * [\"\"]:\n return False\n # Check for empty translation\n return \"\" in targets\n\n def check_single(self, source, target, unit):\n \"\"\"We don't check target strings here.\"\"\"\n return False\n\n\nclass SamePluralsCheck(TargetCheck):\n \"\"\"Check for same plural forms.\"\"\"\n\n check_id = \"same-plurals\"\n name = _(\"Same plurals\")\n description = _(\"Some plural forms are translated in the same way\")\n\n def check_target_unit(self, sources, targets, unit):\n # Is this plural?\n if len(sources) == 1 or len(targets) == 1:\n return False\n if not targets[0]:\n return False\n return len(set(targets)) == 1\n\n def check_single(self, source, target, unit):\n \"\"\"We don't check target strings here.\"\"\"\n return False\n\n\nclass ConsistencyCheck(TargetCheck):\n \"\"\"Check for inconsistent translations.\"\"\"\n\n check_id = \"inconsistent\"\n name = _(\"Inconsistent\")\n description = _(\n \"This string has more than one translation in this project \"\n \"or is untranslated in some components.\"\n )\n ignore_untranslated = False\n propagates = True\n batch_project_wide = True\n skip_suggestions = True\n\n def check_target_unit(self, sources, targets, unit):\n component = unit.translation.component\n if not component.allow_translation_propagation:\n return False\n\n # Use last result if checks are batched\n if component.batch_checks:\n return self.handle_batch(unit, component)\n\n for other in unit.same_source_units:\n if unit.target == other.target:\n continue\n if unit.translated or other.translated:\n return True\n return False\n\n def check_single(self, source, target, unit):\n \"\"\"We don't check target strings here.\"\"\"\n return False\n\n def check_component(self, component):\n from weblate.trans.models import Unit\n\n units = Unit.objects.filter(\n translation__component__project=component.project,\n translation__component__allow_translation_propagation=True,\n )\n\n # List strings with different targets\n # Limit this to 100 strings, otherwise the resulting query is way too complex\n matches = (\n units.values(\"id_hash\", \"translation__language\", \"translation__plural\")\n .annotate(Count(\"target\", distinct=True))\n .filter(target__count__gt=1)\n .order_by(\"id_hash\")[:100]\n )\n\n if not matches:\n return []\n\n return (\n units.filter(\n reduce(\n lambda x, y: x\n | (\n Q(id_hash=y[\"id_hash\"])\n & Q(translation__language=y[\"translation__language\"])\n & Q(translation__plural=y[\"translation__plural\"])\n ),\n matches,\n Q(),\n )\n )\n .prefetch()\n 
.prefetch_bulk()\n )\n\n\nclass TranslatedCheck(TargetCheck):\n \"\"\"Check for inconsistent translations.\"\"\"\n\n check_id = \"translated\"\n name = _(\"Has been translated\")\n description = _(\"This string has been translated in the past\")\n ignore_untranslated = False\n skip_suggestions = True\n\n def get_description(self, check_obj):\n unit = check_obj.unit\n target = self.check_target_unit(unit.source, unit.target, unit)\n if not target:\n return super().get_description(check_obj)\n return _('Previous translation was \"%s\".') % target\n\n def check_target_unit(self, sources, targets, unit):\n if unit.translated:\n return False\n\n component = unit.translation.component\n\n if component.batch_checks:\n return self.handle_batch(unit, component)\n\n from weblate.trans.models import Change\n\n changes = unit.change_set.filter(action__in=Change.ACTIONS_CONTENT).order()\n\n for action, target in changes.values_list(\"action\", \"target\"):\n if action == Change.ACTION_SOURCE_CHANGE:\n break\n if target and target != unit.target:\n return target\n\n return False\n\n def check_single(self, source, target, unit):\n \"\"\"We don't check target strings here.\"\"\"\n return False\n\n def get_fixup(self, unit):\n target = self.check_target_unit(unit.source, unit.target, unit)\n if not target:\n return None\n return [(\".*\", target, \"u\")]\n\n def check_component(self, component):\n from weblate.trans.models import Change, Unit\n\n units = (\n Unit.objects.filter(\n translation__component=component,\n change__action__in=Change.ACTIONS_CONTENT,\n state__lt=STATE_TRANSLATED,\n )\n .prefetch_related(\n Prefetch(\n \"change_set\",\n queryset=Change.objects.filter(\n action__in=Change.ACTIONS_CONTENT,\n ).order(),\n to_attr=\"recent_consistency_changes\",\n )\n )\n .prefetch()\n .prefetch_bulk()\n )\n\n for unit in units:\n for change in unit.recent_consistency_changes:\n if change.action == Change.ACTION_SOURCE_CHANGE:\n break\n if change.target:\n yield unit\n", "path": "weblate/checks/consistency.py"}]}
| 2,597 | 458 |
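The weblate `consistency.py` diff in the row above hinges on two helper predicates: skip automatic-translation changes that only added a string still needing editing, and stop walking the history on a source change or an intentional marked-edit. Below is a minimal, self-contained sketch of that filtering logic; the action/state constants and the `SimpleNamespace` change objects are stand-ins for Weblate's real `Change` model, not its actual API.

```python
from types import SimpleNamespace

# Stand-in constants; Weblate defines these on its Change model and in
# weblate.utils.state.
ACTION_SOURCE_CHANGE = 1
ACTION_MARKED_EDIT = 2
ACTION_AUTO = 3
STATE_TRANSLATED = 20


def should_break_changes(change):
    # Stop processing history on a source change or an intentional
    # "needs editing" marking.
    return change.action in (ACTION_SOURCE_CHANGE, ACTION_MARKED_EDIT)


def should_skip_change(change):
    # Skip automatic-translation entries that only added a string
    # still marked as needing editing.
    return (
        change.action == ACTION_AUTO
        and change.details.get("state", STATE_TRANSLATED) < STATE_TRANSLATED
    )


def previous_translation(changes, current_target):
    # Walk recent content changes, newest first, and return the previous
    # differing target if one survives the filters above.
    for change in changes:
        if should_break_changes(change):
            break
        if should_skip_change(change):
            continue
        if change.target and change.target != current_target:
            return change.target
    return None


changes = [
    SimpleNamespace(action=ACTION_AUTO, details={"state": 10}, target="machine draft"),
    SimpleNamespace(action=ACTION_AUTO, details={}, target="older translation"),
]
print(previous_translation(changes, "current text"))  # -> older translation
```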
| gh_patches_debug_12666 | rasdani/github-patches | git_diff | openshift__openshift-ansible-3887 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[healthchecks] the package_version check always checks for master/node packages regardless of host group
#### Description
When running `playbooks/byo/openshift-preflight/check.yml`, the `package_version` check reports failures on hosts that can't access the `atomic-openshift-{master,node}` packages even when this is expected, e.g. on etcd or lb hosts.
##### Version
```
openshift-ansible-3.5.3-1-521-g3125e72
```
##### Steps To Reproduce
1. Have a cluster with `[etcd]`, `[lb]` and/or additional "auxiliary" host groups
2. Run the `playbooks/byo/openshift-preflight/check.yml` playbook
##### Expected Results
Hosts would not report a failure when they have access to the packages they need.
##### Observed Results
Hosts that don't have access to `atomic-openshift-{master,node}` packages in their configured repos are reported as failed, even when the hosts don't need these packages.
Describe what is actually happening.
```
$ ansible-playbook playbooks/byo/openshift-preflight/check.yml
[...]
Failure summary:
1. Host: etcd2.example.com
Play: run OpenShift health checks
Task: openshift_health_check
Message: One or more checks failed
Details: {'package_availability': {'_ansible_parsed': True,
u'changed': False,
u'invocation': {u'module_args': {u'packages': []}}},
'package_update': {'_ansible_parsed': True,
u'changed': False,
u'invocation': {u'module_args': {u'packages': []}}},
'package_version': {'_ansible_parsed': True,
u'failed': True,
u'invocation': {u'module_args': {u'prefix': u'atomic-openshift',
u'version': u'v3.4'}},
u'msg': u'Not all of the required packages are available at requested version 3.4:\n atomic-openshift\n atomic-openshift-master\n atomic-openshift-node\nPlease check your subscriptions and enabled repositories.'}}
```
##### Additional Information
The inventory file used here has:
```
[OSEv3:children]
masters
nodes
etcd
lb
dns
# [...]
[etcd]
etcd2.example.com
# [...]
[lb]
lb.example.com
```
the hosts in *etcd*, *lb* and *dns* groups all fail the check.
</issue>
<code>
[start of roles/openshift_health_checker/openshift_checks/package_version.py]
1 # pylint: disable=missing-docstring
2 from openshift_checks import OpenShiftCheck, get_var
3 from openshift_checks.mixins import NotContainerizedMixin
4
5
6 class PackageVersion(NotContainerizedMixin, OpenShiftCheck):
7 """Check that available RPM packages match the required versions."""
8
9 name = "package_version"
10 tags = ["preflight"]
11
12 def run(self, tmp, task_vars):
13 rpm_prefix = get_var(task_vars, "openshift", "common", "service_type")
14 openshift_release = get_var(task_vars, "openshift_release")
15
16 args = {
17 "prefix": rpm_prefix,
18 "version": openshift_release,
19 }
20 return self.execute_module("aos_version", args, tmp, task_vars)
21
[end of roles/openshift_health_checker/openshift_checks/package_version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/roles/openshift_health_checker/openshift_checks/package_version.py b/roles/openshift_health_checker/openshift_checks/package_version.py
--- a/roles/openshift_health_checker/openshift_checks/package_version.py
+++ b/roles/openshift_health_checker/openshift_checks/package_version.py
@@ -9,6 +9,13 @@
name = "package_version"
tags = ["preflight"]
+ @classmethod
+ def is_active(cls, task_vars):
+ """Skip hosts that do not have package requirements."""
+ group_names = get_var(task_vars, "group_names", default=[])
+ master_or_node = 'masters' in group_names or 'nodes' in group_names
+ return super(PackageVersion, cls).is_active(task_vars) and master_or_node
+
def run(self, tmp, task_vars):
rpm_prefix = get_var(task_vars, "openshift", "common", "service_type")
openshift_release = get_var(task_vars, "openshift_release")
|
{"golden_diff": "diff --git a/roles/openshift_health_checker/openshift_checks/package_version.py b/roles/openshift_health_checker/openshift_checks/package_version.py\n--- a/roles/openshift_health_checker/openshift_checks/package_version.py\n+++ b/roles/openshift_health_checker/openshift_checks/package_version.py\n@@ -9,6 +9,13 @@\n name = \"package_version\"\n tags = [\"preflight\"]\n \n+ @classmethod\n+ def is_active(cls, task_vars):\n+ \"\"\"Skip hosts that do not have package requirements.\"\"\"\n+ group_names = get_var(task_vars, \"group_names\", default=[])\n+ master_or_node = 'masters' in group_names or 'nodes' in group_names\n+ return super(PackageVersion, cls).is_active(task_vars) and master_or_node\n+\n def run(self, tmp, task_vars):\n rpm_prefix = get_var(task_vars, \"openshift\", \"common\", \"service_type\")\n openshift_release = get_var(task_vars, \"openshift_release\")\n", "issue": "[healthchecks] the package_version check always checks for master/node packages regardless of host group\n#### Description\r\n\r\nWhen running `playbooks/byo/openshift-preflight/check.yml`, the `package_version` check reports failures on hosts that can't access the `atomic-openshift-{master,node}` packages even when this is expected, e.g. on etcd or lb hosts.\r\n\r\n\r\n##### Version\r\n\r\n```\r\nopenshift-ansible-3.5.3-1-521-g3125e72\r\n```\r\n\r\n##### Steps To Reproduce\r\n1. Have a cluster with `[etcd]`, `[lb]` and/or additional \"auxiliary\" host groups\r\n2. Run the `playbooks/byo/openshift-preflight/check.yml` playbook\r\n\r\n\r\n##### Expected Results\r\nHosts would not report a failure when they have access to the packages they need.\r\n\r\n##### Observed Results\r\nHosts that don't have access to `atomic-openshift-{master,node}` packages in their configured repos are reported as failed, even when the hosts don't need these packages.\r\nDescribe what is actually happening.\r\n\r\n```\r\n$ ansible-playbook playbooks/byo/openshift-preflight/check.yml\r\n[...]\r\nFailure summary:\r\n\r\n 1. 
Host: etcd2.example.com\r\n Play: run OpenShift health checks\r\n Task: openshift_health_check\r\n Message: One or more checks failed\r\n Details: {'package_availability': {'_ansible_parsed': True,\r\n u'changed': False,\r\n u'invocation': {u'module_args': {u'packages': []}}},\r\n 'package_update': {'_ansible_parsed': True,\r\n u'changed': False,\r\n u'invocation': {u'module_args': {u'packages': []}}},\r\n 'package_version': {'_ansible_parsed': True,\r\n u'failed': True,\r\n u'invocation': {u'module_args': {u'prefix': u'atomic-openshift',\r\n u'version': u'v3.4'}},\r\n u'msg': u'Not all of the required packages are available at requested version 3.4:\\n atomic-openshift\\n atomic-openshift-master\\n atomic-openshift-node\\nPlease check your subscriptions and enabled repositories.'}}\r\n```\r\n\r\n##### Additional Information\r\n\r\nThe inventory file used here has:\r\n\r\n```\r\n[OSEv3:children]\r\nmasters\r\nnodes\r\netcd\r\nlb\r\ndns\r\n\r\n# [...]\r\n\r\n[etcd]\r\netcd2.example.com\r\n# [...]\r\n\r\n[lb]\r\nlb.example.com\r\n```\r\n\r\nthe hosts in *etcd*, *lb* and *dns* groups all fail the check.\r\n\r\n\r\n\n", "before_files": [{"content": "# pylint: disable=missing-docstring\nfrom openshift_checks import OpenShiftCheck, get_var\nfrom openshift_checks.mixins import NotContainerizedMixin\n\n\nclass PackageVersion(NotContainerizedMixin, OpenShiftCheck):\n \"\"\"Check that available RPM packages match the required versions.\"\"\"\n\n name = \"package_version\"\n tags = [\"preflight\"]\n\n def run(self, tmp, task_vars):\n rpm_prefix = get_var(task_vars, \"openshift\", \"common\", \"service_type\")\n openshift_release = get_var(task_vars, \"openshift_release\")\n\n args = {\n \"prefix\": rpm_prefix,\n \"version\": openshift_release,\n }\n return self.execute_module(\"aos_version\", args, tmp, task_vars)\n", "path": "roles/openshift_health_checker/openshift_checks/package_version.py"}]}
| 1,315 | 224 |
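The openshift-ansible fix in the row above boils down to a host-group gate: only hosts in the `masters` or `nodes` groups need the `atomic-openshift-{master,node}` packages, so the check's `is_active` hook should return False elsewhere. A rough standalone sketch of that decision, with a simplified `get_var` stand-in for the helper in `openshift_checks` and a plain dict in place of Ansible task vars:

```python
def get_var(task_vars, key, default=None):
    # Simplified stand-in for openshift_checks.get_var.
    return task_vars.get(key, default)


def package_version_is_active(task_vars):
    # Only masters and nodes need the atomic-openshift master/node RPMs,
    # so hosts that are only in etcd/lb/dns groups should be skipped.
    group_names = get_var(task_vars, "group_names", default=[])
    return "masters" in group_names or "nodes" in group_names


print(package_version_is_active({"group_names": ["masters", "etcd"]}))  # True
print(package_version_is_active({"group_names": ["lb"]}))               # False
```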
| gh_patches_debug_28149 | rasdani/github-patches | git_diff | pypi__warehouse-1407 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove unused classifiers from filter list
We currently show all trove classifiers in the search filter panel, despite the fact that some are not applied to any projects in the DB.
It would be better to only show those classifiers that are actually applied to projects, so we avoid filtering by a classifier and returning an empty result.
</issue>
<code>
[start of warehouse/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import collections
14
15 from pyramid.httpexceptions import (
16 HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,
17 HTTPBadRequest,
18 )
19 from pyramid.view import (
20 notfound_view_config, forbidden_view_config, view_config,
21 )
22 from elasticsearch_dsl import Q
23 from sqlalchemy import func
24 from sqlalchemy.orm import aliased, joinedload
25
26 from warehouse.accounts import REDIRECT_FIELD_NAME
27 from warehouse.accounts.models import User
28 from warehouse.cache.origin import origin_cache
29 from warehouse.cache.http import cache_control
30 from warehouse.classifiers.models import Classifier
31 from warehouse.packaging.models import Project, Release, File
32 from warehouse.utils.row_counter import RowCount
33 from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory
34
35
36 SEARCH_FIELDS = [
37 "author", "author_email", "description", "download_url", "home_page",
38 "keywords", "license", "maintainer", "maintainer_email", "normalized_name",
39 "platform", "summary",
40 ]
41 SEARCH_BOOSTS = {
42 "normalized_name": 10,
43 "description": 5,
44 "keywords": 5,
45 "summary": 5,
46 }
47 SEARCH_FILTER_ORDER = (
48 "Programming Language",
49 "License",
50 "Framework",
51 "Topic",
52 "Intended Audience",
53 "Environment",
54 "Operating System",
55 "Natural Language",
56 "Development Status",
57 )
58
59
60 @view_config(context=HTTPException)
61 @notfound_view_config(append_slash=HTTPMovedPermanently)
62 def httpexception_view(exc, request):
63 return exc
64
65
66 @forbidden_view_config()
67 def forbidden(exc, request):
68 # If the forbidden error is because the user isn't logged in, then we'll
69 # redirect them to the log in page.
70 if request.authenticated_userid is None:
71 url = request.route_url(
72 "accounts.login",
73 _query={REDIRECT_FIELD_NAME: request.path_qs},
74 )
75 return HTTPSeeOther(url)
76
77 # If we've reached here, then the user is logged in and they are genuinely
78 # not allowed to access this page.
79 # TODO: Style the forbidden page.
80 return exc
81
82
83 @view_config(
84 route_name="robots.txt",
85 renderer="robots.txt",
86 decorator=[
87 cache_control(1 * 24 * 60 * 60), # 1 day
88 origin_cache(
89 1 * 24 * 60 * 60, # 1 day
90 stale_while_revalidate=6 * 60 * 60, # 6 hours
91 stale_if_error=1 * 24 * 60 * 60, # 1 day
92 ),
93 ],
94 )
95 def robotstxt(request):
96 request.response.content_type = "text/plain"
97 return {}
98
99
100 @view_config(
101 route_name="index",
102 renderer="index.html",
103 decorator=[
104 origin_cache(
105 1 * 60 * 60, # 1 hour
106 stale_while_revalidate=10 * 60, # 10 minutes
107 stale_if_error=1 * 24 * 60 * 60, # 1 day
108 keys=["all-projects"],
109 ),
110 ]
111 )
112 def index(request):
113 project_names = [
114 r[0] for r in (
115 request.db.query(File.name)
116 .group_by(File.name)
117 .order_by(func.sum(File.downloads).desc())
118 .limit(5)
119 .all())
120 ]
121 release_a = aliased(
122 Release,
123 request.db.query(Release)
124 .distinct(Release.name)
125 .filter(Release.name.in_(project_names))
126 .order_by(Release.name, Release._pypi_ordering.desc())
127 .subquery(),
128 )
129 top_projects = (
130 request.db.query(release_a)
131 .options(joinedload(release_a.project))
132 .order_by(func.array_idx(project_names, release_a.name))
133 .all()
134 )
135
136 latest_releases = (
137 request.db.query(Release)
138 .options(joinedload(Release.project))
139 .order_by(Release.created.desc())
140 .limit(5)
141 .all()
142 )
143
144 counts = dict(
145 request.db.query(RowCount.table_name, RowCount.count)
146 .filter(
147 RowCount.table_name.in_([
148 Project.__tablename__,
149 Release.__tablename__,
150 File.__tablename__,
151 User.__tablename__,
152 ]))
153 .all()
154 )
155
156 return {
157 "latest_releases": latest_releases,
158 "top_projects": top_projects,
159 "num_projects": counts.get(Project.__tablename__, 0),
160 "num_releases": counts.get(Release.__tablename__, 0),
161 "num_files": counts.get(File.__tablename__, 0),
162 "num_users": counts.get(User.__tablename__, 0),
163 }
164
165
166 @view_config(
167 route_name="search",
168 renderer="search/results.html",
169 decorator=[
170 origin_cache(
171 1 * 60 * 60, # 1 hour
172 stale_while_revalidate=10 * 60, # 10 minutes
173 stale_if_error=1 * 24 * 60 * 60, # 1 day
174 keys=["all-projects"],
175 )
176 ],
177 )
178 def search(request):
179
180 q = request.params.get("q", '')
181
182 if q:
183 should = []
184 for field in SEARCH_FIELDS:
185 kw = {"query": q}
186 if field in SEARCH_BOOSTS:
187 kw["boost"] = SEARCH_BOOSTS[field]
188 should.append(Q("match", **{field: kw}))
189
190 # Add a prefix query if ``q`` is longer than one character.
191 if len(q) > 1:
192 should.append(Q('prefix', normalized_name=q))
193
194 query = request.es.query("dis_max", queries=should)
195 query = query.suggest("name_suggestion", q, term={"field": "name"})
196 else:
197 query = request.es.query()
198
199 if request.params.get("o"):
200 query = query.sort(request.params["o"])
201
202 if request.params.getall("c"):
203 query = query.filter("terms", classifiers=request.params.getall("c"))
204
205 try:
206 page_num = int(request.params.get("page", 1))
207 except ValueError:
208 raise HTTPBadRequest("'page' must be an integer.")
209
210 page = ElasticsearchPage(
211 query,
212 page=page_num,
213 url_maker=paginate_url_factory(request),
214 )
215
216 if page.page_count and page_num > page.page_count:
217 return HTTPNotFound()
218
219 available_filters = collections.defaultdict(list)
220
221 for cls in request.db.query(Classifier).order_by(Classifier.classifier):
222 first, *_ = cls.classifier.split(' :: ')
223 available_filters[first].append(cls.classifier)
224
225 def filter_key(item):
226 try:
227 return 0, SEARCH_FILTER_ORDER.index(item[0]), item[0]
228 except ValueError:
229 return 1, 0, item[0]
230
231 return {
232 "page": page,
233 "term": q,
234 "order": request.params.get("o", ''),
235 "available_filters": sorted(available_filters.items(), key=filter_key),
236 "applied_filters": request.params.getall("c"),
237 }
238
239
240 @view_config(
241 route_name="includes.current-user-indicator",
242 renderer="includes/current-user-indicator.html",
243 uses_session=True,
244 )
245 def current_user_indicator(request):
246 return {}
247
248
249 @view_config(route_name="health", renderer="string")
250 def health(request):
251 # This will ensure that we can access the database and run queries against
252 # it without doing anything that will take a lock or block other queries.
253 request.db.execute("SELECT 1")
254
255 # Nothing will actually check this, but it's a little nicer to have
256 # something to return besides an empty body.
257 return "OK"
258
[end of warehouse/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/views.py b/warehouse/views.py
--- a/warehouse/views.py
+++ b/warehouse/views.py
@@ -22,13 +22,16 @@
from elasticsearch_dsl import Q
from sqlalchemy import func
from sqlalchemy.orm import aliased, joinedload
+from sqlalchemy.sql import exists
from warehouse.accounts import REDIRECT_FIELD_NAME
from warehouse.accounts.models import User
from warehouse.cache.origin import origin_cache
from warehouse.cache.http import cache_control
from warehouse.classifiers.models import Classifier
-from warehouse.packaging.models import Project, Release, File
+from warehouse.packaging.models import (
+ Project, Release, File, release_classifiers,
+)
from warehouse.utils.row_counter import RowCount
from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory
@@ -218,7 +221,17 @@
available_filters = collections.defaultdict(list)
- for cls in request.db.query(Classifier).order_by(Classifier.classifier):
+ classifiers_q = (
+ request.db.query(Classifier)
+ .with_entities(Classifier.classifier)
+ .filter(
+ exists([release_classifiers.c.trove_id])
+ .where(release_classifiers.c.trove_id == Classifier.id)
+ )
+ .order_by(Classifier.classifier)
+ )
+
+ for cls in classifiers_q:
first, *_ = cls.classifier.split(' :: ')
available_filters[first].append(cls.classifier)
|
{"golden_diff": "diff --git a/warehouse/views.py b/warehouse/views.py\n--- a/warehouse/views.py\n+++ b/warehouse/views.py\n@@ -22,13 +22,16 @@\n from elasticsearch_dsl import Q\n from sqlalchemy import func\n from sqlalchemy.orm import aliased, joinedload\n+from sqlalchemy.sql import exists\n \n from warehouse.accounts import REDIRECT_FIELD_NAME\n from warehouse.accounts.models import User\n from warehouse.cache.origin import origin_cache\n from warehouse.cache.http import cache_control\n from warehouse.classifiers.models import Classifier\n-from warehouse.packaging.models import Project, Release, File\n+from warehouse.packaging.models import (\n+ Project, Release, File, release_classifiers,\n+)\n from warehouse.utils.row_counter import RowCount\n from warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n \n@@ -218,7 +221,17 @@\n \n available_filters = collections.defaultdict(list)\n \n- for cls in request.db.query(Classifier).order_by(Classifier.classifier):\n+ classifiers_q = (\n+ request.db.query(Classifier)\n+ .with_entities(Classifier.classifier)\n+ .filter(\n+ exists([release_classifiers.c.trove_id])\n+ .where(release_classifiers.c.trove_id == Classifier.id)\n+ )\n+ .order_by(Classifier.classifier)\n+ )\n+\n+ for cls in classifiers_q:\n first, *_ = cls.classifier.split(' :: ')\n available_filters[first].append(cls.classifier)\n", "issue": "Remove unused classifiers from filter list\nWe currently show all trove classifiers in the search filter panel, despite the fact that some are not applied to any projects in the DB.\n\nIt would be better to only show those classifiers that are actually applied to projects, so we avoid filtering by a classifier and returning an empty result.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport collections\n\nfrom pyramid.httpexceptions import (\n HTTPException, HTTPSeeOther, HTTPMovedPermanently, HTTPNotFound,\n HTTPBadRequest,\n)\nfrom pyramid.view import (\n notfound_view_config, forbidden_view_config, view_config,\n)\nfrom elasticsearch_dsl import Q\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import aliased, joinedload\n\nfrom warehouse.accounts import REDIRECT_FIELD_NAME\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.cache.http import cache_control\nfrom warehouse.classifiers.models import Classifier\nfrom warehouse.packaging.models import Project, Release, File\nfrom warehouse.utils.row_counter import RowCount\nfrom warehouse.utils.paginate import ElasticsearchPage, paginate_url_factory\n\n\nSEARCH_FIELDS = [\n \"author\", \"author_email\", \"description\", \"download_url\", \"home_page\",\n \"keywords\", \"license\", \"maintainer\", \"maintainer_email\", \"normalized_name\",\n \"platform\", \"summary\",\n]\nSEARCH_BOOSTS = {\n \"normalized_name\": 10,\n \"description\": 5,\n \"keywords\": 5,\n \"summary\": 5,\n}\nSEARCH_FILTER_ORDER = (\n \"Programming Language\",\n \"License\",\n \"Framework\",\n \"Topic\",\n \"Intended 
Audience\",\n \"Environment\",\n \"Operating System\",\n \"Natural Language\",\n \"Development Status\",\n)\n\n\n@view_config(context=HTTPException)\n@notfound_view_config(append_slash=HTTPMovedPermanently)\ndef httpexception_view(exc, request):\n return exc\n\n\n@forbidden_view_config()\ndef forbidden(exc, request):\n # If the forbidden error is because the user isn't logged in, then we'll\n # redirect them to the log in page.\n if request.authenticated_userid is None:\n url = request.route_url(\n \"accounts.login\",\n _query={REDIRECT_FIELD_NAME: request.path_qs},\n )\n return HTTPSeeOther(url)\n\n # If we've reached here, then the user is logged in and they are genuinely\n # not allowed to access this page.\n # TODO: Style the forbidden page.\n return exc\n\n\n@view_config(\n route_name=\"robots.txt\",\n renderer=\"robots.txt\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=6 * 60 * 60, # 6 hours\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef robotstxt(request):\n request.response.content_type = \"text/plain\"\n return {}\n\n\n@view_config(\n route_name=\"index\",\n renderer=\"index.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n ),\n ]\n)\ndef index(request):\n project_names = [\n r[0] for r in (\n request.db.query(File.name)\n .group_by(File.name)\n .order_by(func.sum(File.downloads).desc())\n .limit(5)\n .all())\n ]\n release_a = aliased(\n Release,\n request.db.query(Release)\n .distinct(Release.name)\n .filter(Release.name.in_(project_names))\n .order_by(Release.name, Release._pypi_ordering.desc())\n .subquery(),\n )\n top_projects = (\n request.db.query(release_a)\n .options(joinedload(release_a.project))\n .order_by(func.array_idx(project_names, release_a.name))\n .all()\n )\n\n latest_releases = (\n request.db.query(Release)\n .options(joinedload(Release.project))\n .order_by(Release.created.desc())\n .limit(5)\n .all()\n )\n\n counts = dict(\n request.db.query(RowCount.table_name, RowCount.count)\n .filter(\n RowCount.table_name.in_([\n Project.__tablename__,\n Release.__tablename__,\n File.__tablename__,\n User.__tablename__,\n ]))\n .all()\n )\n\n return {\n \"latest_releases\": latest_releases,\n \"top_projects\": top_projects,\n \"num_projects\": counts.get(Project.__tablename__, 0),\n \"num_releases\": counts.get(Release.__tablename__, 0),\n \"num_files\": counts.get(File.__tablename__, 0),\n \"num_users\": counts.get(User.__tablename__, 0),\n }\n\n\n@view_config(\n route_name=\"search\",\n renderer=\"search/results.html\",\n decorator=[\n origin_cache(\n 1 * 60 * 60, # 1 hour\n stale_while_revalidate=10 * 60, # 10 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n keys=[\"all-projects\"],\n )\n ],\n)\ndef search(request):\n\n q = request.params.get(\"q\", '')\n\n if q:\n should = []\n for field in SEARCH_FIELDS:\n kw = {\"query\": q}\n if field in SEARCH_BOOSTS:\n kw[\"boost\"] = SEARCH_BOOSTS[field]\n should.append(Q(\"match\", **{field: kw}))\n\n # Add a prefix query if ``q`` is longer than one character.\n if len(q) > 1:\n should.append(Q('prefix', normalized_name=q))\n\n query = request.es.query(\"dis_max\", queries=should)\n query = query.suggest(\"name_suggestion\", q, term={\"field\": \"name\"})\n else:\n query = request.es.query()\n\n if request.params.get(\"o\"):\n query = query.sort(request.params[\"o\"])\n\n if 
request.params.getall(\"c\"):\n query = query.filter(\"terms\", classifiers=request.params.getall(\"c\"))\n\n try:\n page_num = int(request.params.get(\"page\", 1))\n except ValueError:\n raise HTTPBadRequest(\"'page' must be an integer.\")\n\n page = ElasticsearchPage(\n query,\n page=page_num,\n url_maker=paginate_url_factory(request),\n )\n\n if page.page_count and page_num > page.page_count:\n return HTTPNotFound()\n\n available_filters = collections.defaultdict(list)\n\n for cls in request.db.query(Classifier).order_by(Classifier.classifier):\n first, *_ = cls.classifier.split(' :: ')\n available_filters[first].append(cls.classifier)\n\n def filter_key(item):\n try:\n return 0, SEARCH_FILTER_ORDER.index(item[0]), item[0]\n except ValueError:\n return 1, 0, item[0]\n\n return {\n \"page\": page,\n \"term\": q,\n \"order\": request.params.get(\"o\", ''),\n \"available_filters\": sorted(available_filters.items(), key=filter_key),\n \"applied_filters\": request.params.getall(\"c\"),\n }\n\n\n@view_config(\n route_name=\"includes.current-user-indicator\",\n renderer=\"includes/current-user-indicator.html\",\n uses_session=True,\n)\ndef current_user_indicator(request):\n return {}\n\n\n@view_config(route_name=\"health\", renderer=\"string\")\ndef health(request):\n # This will ensure that we can access the database and run queries against\n # it without doing anything that will take a lock or block other queries.\n request.db.execute(\"SELECT 1\")\n\n # Nothing will actually check this, but it's a little nicer to have\n # something to return besides an empty body.\n return \"OK\"\n", "path": "warehouse/views.py"}]}
| 3,088 | 316 |
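The warehouse fix in the row above filters the classifier list with a correlated EXISTS against the release/classifier association table, so only classifiers attached to at least one release are offered as search filters. A hedged sketch of the same SQLAlchemy pattern on an illustrative schema (the table and model names below mirror the diff but are not warehouse's actual models):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
from sqlalchemy.orm import Session, declarative_base
from sqlalchemy.sql import exists

Base = declarative_base()

# Illustrative association table between releases and trove classifiers.
release_classifiers = Table(
    "release_classifiers",
    Base.metadata,
    Column("release_id", Integer),
    Column("trove_id", Integer, ForeignKey("trove_classifiers.id")),
)


class Classifier(Base):
    __tablename__ = "trove_classifiers"
    id = Column(Integer, primary_key=True)
    classifier = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all(
        [Classifier(id=1, classifier="Topic :: Utilities"),
         Classifier(id=2, classifier="Framework :: Flask")]
    )
    session.execute(release_classifiers.insert().values(release_id=10, trove_id=1))
    session.commit()

    # Only classifiers actually applied to at least one release.
    used = (
        session.query(Classifier.classifier)
        .filter(exists().where(release_classifiers.c.trove_id == Classifier.id))
        .order_by(Classifier.classifier)
        .all()
    )
    print(used)  # [('Topic :: Utilities',)]
```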
| gh_patches_debug_30542 | rasdani/github-patches | git_diff | internetarchive__openlibrary-7202 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
partner_batch_imports.py should not import books published in a future year
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Per the [Open-Mic topics](https://docs.google.com/document/d/1LEbzsLZ1F9_YIQOoZzO7GoZnG1z-rudhZ9HNtsameTc/edit#heading=h.swvutwwydubf) for the Open Library Community call on 2022-11-29, we should not import partner data for books purporting to be published in a future year, as this is resulting in bad records of books that may never exist.
### Describe the problem that you'd like solved
<!-- A clear and concise description of what you want to happen. -->
When importing books, `partner_batch_imports.py` does not currently check if the `publish_date` is in a future year when importing. It should. E.g. if an import is attempted in the year 2022, it should not import a book purported to be published in 2023.
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
The proposed solution is to add a check to `batch_import()` in `partner_batch_imports.py` to ensure a book isn't purported to be published in a future year.
<!-- Which suggestions or requirements should be considered for how feature needs to appear or be implemented? -->
### Additional context
<!-- Add any other context or screenshots about the feature request here. -->
I will submit a PR to address this.
### Stakeholders
<!-- @ tag stakeholders of this bug -->
@mekarpeles, @cdrini
</issue>
<code>
[start of scripts/partner_batch_imports.py]
1 """
2 Process partner bibliographic csv data into importable json book
3 records and then batch submit into the ImportBot
4 `import_item` table (http://openlibrary.org/admin/imports)
5 which queues items to be imported via the
6 Open Library JSON import API: https://openlibrary.org/api/import
7
8 To Run:
9
10 PYTHONPATH=. python ./scripts/partner_batch_imports.py /olsystem/etc/openlibrary.yml
11 """
12
13 import datetime
14 import logging
15 import os
16 import re
17
18 import requests
19
20 from infogami import config # noqa: F401
21 from openlibrary.config import load_config
22 from openlibrary.core.imports import Batch
23 from scripts.solr_builder.solr_builder.fn_to_cli import FnToCLI
24
25 logger = logging.getLogger("openlibrary.importer.bwb")
26
27 EXCLUDED_AUTHORS = {
28 x.casefold()
29 for x in (
30 "1570 publishing",
31 "bahija",
32 "bruna murino",
33 "creative elegant edition",
34 "delsee notebooks",
35 "grace garcia",
36 "holo",
37 "jeryx publishing",
38 "mado",
39 "mazzo",
40 "mikemix",
41 "mitch allison",
42 "pickleball publishing",
43 "pizzelle passion",
44 "punny cuaderno",
45 "razal koraya",
46 "t. d. publishing",
47 "tobias publishing",
48 )
49 }
50
51 EXCLUDED_INDEPENDENTLY_PUBLISHED_TITLES = {
52 x.casefold()
53 for x in (
54 # Noisy classic re-prints
55 'annotated',
56 'annoté',
57 'classic',
58 'classics',
59 'illustarted', # Some books have typos in their titles!
60 'illustrated',
61 'Illustrée',
62 'original',
63 'summary',
64 'version',
65 # Not a book
66 'calendar',
67 'diary',
68 'journal',
69 'logbook',
70 'notebook',
71 'notizbuch',
72 'planner',
73 'sketchbook',
74 )
75 }
76
77 SCHEMA_URL = (
78 "https://raw.githubusercontent.com/internetarchive"
79 "/openlibrary-client/master/olclient/schemata/import.schema.json"
80 )
81
82
83 class Biblio:
84
85 ACTIVE_FIELDS = [
86 'title',
87 'isbn_13',
88 'publish_date',
89 'publishers',
90 'weight',
91 'authors',
92 'lc_classifications',
93 'pagination',
94 'languages',
95 'subjects',
96 'source_records',
97 ]
98 INACTIVE_FIELDS = [
99 "copyright",
100 "issn",
101 "doi",
102 "lccn",
103 "dewey",
104 "length",
105 "width",
106 "height",
107 ]
108 REQUIRED_FIELDS = requests.get(SCHEMA_URL).json()['required']
109
110 NONBOOK = """A2 AA AB AJ AVI AZ BK BM C3 CD CE CF CR CRM CRW CX D3 DA DD DF DI DL
111 DO DR DRM DRW DS DV EC FC FI FM FR FZ GB GC GM GR H3 H5 L3 L5 LP MAC MC MF MG MH ML
112 MS MSX MZ N64 NGA NGB NGC NGE NT OR OS PC PP PRP PS PSC PY QU RE RV SA SD SG SH SK
113 SL SMD SN SO SO1 SO2 SR SU TA TB TR TS TY UX V35 V8 VC VD VE VF VK VM VN VO VP VS
114 VU VY VZ WA WC WI WL WM WP WT WX XL XZ ZF ZZ""".split()
115
116 def __init__(self, data):
117 self.isbn = data[124]
118 self.source_id = f'bwb:{self.isbn}'
119 self.isbn_13 = [self.isbn]
120 self.title = data[10]
121 self.primary_format = data[6]
122 self.publish_date = data[20][:4] # YYYY, YYYYMMDD
123 self.publishers = [data[135]]
124 self.weight = data[39]
125 self.authors = self.contributors(data)
126 self.lc_classifications = [data[147]] if data[147] else []
127 self.pagination = data[36]
128 self.languages = [data[37].lower()]
129 self.source_records = [self.source_id]
130 self.subjects = [
131 s.capitalize().replace('_', ', ')
132 for s in data[91:100]
133 # + data[101:120]
134 # + data[153:158]
135 if s
136 ]
137
138 # Inactive fields
139 self.copyright = data[19]
140 self.issn = data[54]
141 self.doi = data[145]
142 self.lccn = data[146]
143 self.dewey = data[49]
144 # physical_dimensions
145 # e.g. "5.4 x 4.7 x 0.2 inches"
146 self.length, self.width, self.height = data[40:43]
147
148 # Assert importable
149 for field in self.REQUIRED_FIELDS + ['isbn_13']:
150 assert getattr(self, field), field
151 assert (
152 self.primary_format not in self.NONBOOK
153 ), f"{self.primary_format} is NONBOOK"
154
155 @staticmethod
156 def contributors(data):
157 def make_author(name, _, typ):
158 author = {'name': name}
159 if typ == 'X':
160 # set corporate contributor
161 author['entity_type'] = 'org'
162 # TODO: sort out contributor types
163 # AU = author
164 # ED = editor
165 return author
166
167 contributors = (
168 (data[21 + i * 3], data[22 + i * 3], data[23 + i * 3]) for i in range(5)
169 )
170
171 # form list of author dicts
172 authors = [make_author(*c) for c in contributors if c[0]]
173 return authors
174
175 def json(self):
176 return {
177 field: getattr(self, field)
178 for field in self.ACTIVE_FIELDS
179 if getattr(self, field)
180 }
181
182
183 def load_state(path, logfile):
184 """Retrieves starting point from logfile, if log exists
185
186 Takes as input a path which expands to an ordered candidate list
187 of bettworldbks* filenames to process, the location of the
188 logfile, and determines which of those files are remaining, as
189 well as what our offset is in that file.
190
191 e.g. if we request path containing f1, f2, f3 and our log
192 says f2,100 then we start our processing at f2 at the 100th line.
193
194 This assumes the script is being called w/ e.g.:
195 /1/var/tmp/imports/2021-08/Bibliographic/*/
196 """
197 filenames = sorted(
198 os.path.join(path, f) for f in os.listdir(path) if f.startswith("bettworldbks")
199 )
200 try:
201 with open(logfile) as fin:
202 active_fname, offset = next(fin).strip().split(',')
203 unfinished_filenames = filenames[filenames.index(active_fname) :]
204 return unfinished_filenames, int(offset)
205 except (ValueError, OSError):
206 return filenames, 0
207
208
209 def update_state(logfile, fname, line_num=0):
210 """Records the last file we began processing and the current line"""
211 with open(logfile, 'w') as fout:
212 fout.write(f'{fname},{line_num}\n')
213
214
215 def csv_to_ol_json_item(line):
216 """converts a line to a book item"""
217 try:
218 data = line.decode().strip().split('|')
219 except UnicodeDecodeError:
220 data = line.decode('ISO-8859-1').strip().split('|')
221
222 b = Biblio(data)
223 return {'ia_id': b.source_id, 'data': b.json()}
224
225
226 def is_low_quality_book(book_item) -> bool:
227 """
228 Check if a book item is of low quality which means that 1) one of its authors
229 (regardless of case) is in the set of excluded authors.
230 """
231 authors = {a['name'].casefold() for a in book_item.get('authors') or []}
232 if authors & EXCLUDED_AUTHORS: # Leverage Python set intersection for speed.
233 return True
234
235 # A recent independently published book with excluded key words in its title
236 # (regardless of case) is also considered a low quality book.
237 title_words = set(re.split(r'\W+', book_item["title"].casefold()))
238 publishers = {p.casefold() for p in book_item.get('publishers') or []}
239 publish_year = int(book_item.get("publish_date", "0")[:4]) # YYYY
240 return bool(
241 "independently published" in publishers
242 and publish_year >= 2018
243 and title_words & EXCLUDED_INDEPENDENTLY_PUBLISHED_TITLES
244 )
245
246
247 def batch_import(path, batch, batch_size=5000):
248 logfile = os.path.join(path, 'import.log')
249 filenames, offset = load_state(path, logfile)
250
251 for fname in filenames:
252 book_items = []
253 with open(fname, 'rb') as f:
254 logger.info(f"Processing: {fname} from line {offset}")
255 for line_num, line in enumerate(f):
256
257 # skip over already processed records
258 if offset:
259 if offset > line_num:
260 continue
261 offset = 0
262
263 try:
264 book_item = csv_to_ol_json_item(line)
265 if not is_low_quality_book(book_item["data"]):
266 book_items.append(book_item)
267 except (AssertionError, IndexError) as e:
268 logger.info(f"Error: {e} from {line}")
269
270 # If we have enough items, submit a batch
271 if not ((line_num + 1) % batch_size):
272 batch.add_items(book_items)
273 update_state(logfile, fname, line_num)
274 book_items = [] # clear added items
275
276 # Add any remaining book_items to batch
277 if book_items:
278 batch.add_items(book_items)
279 update_state(logfile, fname, line_num)
280
281
282 def main(ol_config: str, batch_path: str):
283 load_config(ol_config)
284
285 # Partner data is offset ~15 days from start of month
286 date = datetime.date.today() - datetime.timedelta(days=15)
287 batch_name = "%s-%04d%02d" % ('bwb', date.year, date.month)
288 batch = Batch.find(batch_name) or Batch.new(batch_name)
289 batch_import(batch_path, batch)
290
291
292 if __name__ == '__main__':
293 FnToCLI(main).run()
294
[end of scripts/partner_batch_imports.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/partner_batch_imports.py b/scripts/partner_batch_imports.py
--- a/scripts/partner_batch_imports.py
+++ b/scripts/partner_batch_imports.py
@@ -10,10 +10,12 @@
PYTHONPATH=. python ./scripts/partner_batch_imports.py /olsystem/etc/openlibrary.yml
"""
+from collections.abc import Mapping
import datetime
import logging
import os
import re
+from typing import TypedDict, cast
import requests
@@ -244,6 +246,18 @@
)
+def is_published_in_future_year(book_item: Mapping[str, str | list]) -> bool:
+ """
+ Prevent import of books with a publication after the current year.
+
+ Some import sources have publication dates in a future year, and the likelihood
+ is high that this is bad data. So we don't want to import these.
+ """
+ publish_year = int(cast(str, book_item.get("publish_date", "0")[:4])) # YYYY
+ this_year = datetime.datetime.now().year
+ return publish_year > this_year
+
+
def batch_import(path, batch, batch_size=5000):
logfile = os.path.join(path, 'import.log')
filenames, offset = load_state(path, logfile)
@@ -262,7 +276,12 @@
try:
book_item = csv_to_ol_json_item(line)
- if not is_low_quality_book(book_item["data"]):
+ if not any(
+ [
+ is_low_quality_book(book_item["data"]),
+ is_published_in_future_year(book_item["data"]),
+ ]
+ ):
book_items.append(book_item)
except (AssertionError, IndexError) as e:
logger.info(f"Error: {e} from {line}")
|
{"golden_diff": "diff --git a/scripts/partner_batch_imports.py b/scripts/partner_batch_imports.py\n--- a/scripts/partner_batch_imports.py\n+++ b/scripts/partner_batch_imports.py\n@@ -10,10 +10,12 @@\n PYTHONPATH=. python ./scripts/partner_batch_imports.py /olsystem/etc/openlibrary.yml\n \"\"\"\n \n+from collections.abc import Mapping\n import datetime\n import logging\n import os\n import re\n+from typing import TypedDict, cast\n \n import requests\n \n@@ -244,6 +246,18 @@\n )\n \n \n+def is_published_in_future_year(book_item: Mapping[str, str | list]) -> bool:\n+ \"\"\"\n+ Prevent import of books with a publication after the current year.\n+\n+ Some import sources have publication dates in a future year, and the likelihood\n+ is high that this is bad data. So we don't want to import these.\n+ \"\"\"\n+ publish_year = int(cast(str, book_item.get(\"publish_date\", \"0\")[:4])) # YYYY\n+ this_year = datetime.datetime.now().year\n+ return publish_year > this_year\n+\n+\n def batch_import(path, batch, batch_size=5000):\n logfile = os.path.join(path, 'import.log')\n filenames, offset = load_state(path, logfile)\n@@ -262,7 +276,12 @@\n \n try:\n book_item = csv_to_ol_json_item(line)\n- if not is_low_quality_book(book_item[\"data\"]):\n+ if not any(\n+ [\n+ is_low_quality_book(book_item[\"data\"]),\n+ is_published_in_future_year(book_item[\"data\"]),\n+ ]\n+ ):\n book_items.append(book_item)\n except (AssertionError, IndexError) as e:\n logger.info(f\"Error: {e} from {line}\")\n", "issue": "partner_batch_imports.py should not import books published in a future year\n<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->\r\nPer the [Open-Mic topics](https://docs.google.com/document/d/1LEbzsLZ1F9_YIQOoZzO7GoZnG1z-rudhZ9HNtsameTc/edit#heading=h.swvutwwydubf) for the Open Library Community call on 2022-11-29, we should not import partner data for books purporting to be published in a future year, as this is resulting in bad records of books that may never exist.\r\n\r\n### Describe the problem that you'd like solved\r\n<!-- A clear and concise description of what you want to happen. -->\r\nWhen importing books, `partner_batch_imports.py` does not currently check if the `publish_date` is in a future year when importing. It should. E.g. if an import is attempted in the year 2022, it should not import a book purported to be published in 2023.\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\nThe proposed solution is to add a check to `batch_import()` in `partner_batch_imports.py` to ensure a book isn't purported to be published in a future year.\r\n\r\n<!-- Which suggestions or requirements should be considered for how feature needs to appear or be implemented? -->\r\n\r\n### Additional context\r\n<!-- Add any other context or screenshots about the feature request here. -->\r\nI will submit a PR to address this.\r\n\r\n### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n@mekarpeles, @cdrini \r\n\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nProcess partner bibliographic csv data into importable json book\nrecords and then batch submit into the ImportBot\n`import_item` table (http://openlibrary.org/admin/imports)\nwhich queues items to be imported via the\nOpen Library JSON import API: https://openlibrary.org/api/import\n\nTo Run:\n\nPYTHONPATH=. 
python ./scripts/partner_batch_imports.py /olsystem/etc/openlibrary.yml\n\"\"\"\n\nimport datetime\nimport logging\nimport os\nimport re\n\nimport requests\n\nfrom infogami import config # noqa: F401\nfrom openlibrary.config import load_config\nfrom openlibrary.core.imports import Batch\nfrom scripts.solr_builder.solr_builder.fn_to_cli import FnToCLI\n\nlogger = logging.getLogger(\"openlibrary.importer.bwb\")\n\nEXCLUDED_AUTHORS = {\n x.casefold()\n for x in (\n \"1570 publishing\",\n \"bahija\",\n \"bruna murino\",\n \"creative elegant edition\",\n \"delsee notebooks\",\n \"grace garcia\",\n \"holo\",\n \"jeryx publishing\",\n \"mado\",\n \"mazzo\",\n \"mikemix\",\n \"mitch allison\",\n \"pickleball publishing\",\n \"pizzelle passion\",\n \"punny cuaderno\",\n \"razal koraya\",\n \"t. d. publishing\",\n \"tobias publishing\",\n )\n}\n\nEXCLUDED_INDEPENDENTLY_PUBLISHED_TITLES = {\n x.casefold()\n for x in (\n # Noisy classic re-prints\n 'annotated',\n 'annot\u00e9',\n 'classic',\n 'classics',\n 'illustarted', # Some books have typos in their titles!\n 'illustrated',\n 'Illustr\u00e9e',\n 'original',\n 'summary',\n 'version',\n # Not a book\n 'calendar',\n 'diary',\n 'journal',\n 'logbook',\n 'notebook',\n 'notizbuch',\n 'planner',\n 'sketchbook',\n )\n}\n\nSCHEMA_URL = (\n \"https://raw.githubusercontent.com/internetarchive\"\n \"/openlibrary-client/master/olclient/schemata/import.schema.json\"\n)\n\n\nclass Biblio:\n\n ACTIVE_FIELDS = [\n 'title',\n 'isbn_13',\n 'publish_date',\n 'publishers',\n 'weight',\n 'authors',\n 'lc_classifications',\n 'pagination',\n 'languages',\n 'subjects',\n 'source_records',\n ]\n INACTIVE_FIELDS = [\n \"copyright\",\n \"issn\",\n \"doi\",\n \"lccn\",\n \"dewey\",\n \"length\",\n \"width\",\n \"height\",\n ]\n REQUIRED_FIELDS = requests.get(SCHEMA_URL).json()['required']\n\n NONBOOK = \"\"\"A2 AA AB AJ AVI AZ BK BM C3 CD CE CF CR CRM CRW CX D3 DA DD DF DI DL\n DO DR DRM DRW DS DV EC FC FI FM FR FZ GB GC GM GR H3 H5 L3 L5 LP MAC MC MF MG MH ML\n MS MSX MZ N64 NGA NGB NGC NGE NT OR OS PC PP PRP PS PSC PY QU RE RV SA SD SG SH SK\n SL SMD SN SO SO1 SO2 SR SU TA TB TR TS TY UX V35 V8 VC VD VE VF VK VM VN VO VP VS\n VU VY VZ WA WC WI WL WM WP WT WX XL XZ ZF ZZ\"\"\".split()\n\n def __init__(self, data):\n self.isbn = data[124]\n self.source_id = f'bwb:{self.isbn}'\n self.isbn_13 = [self.isbn]\n self.title = data[10]\n self.primary_format = data[6]\n self.publish_date = data[20][:4] # YYYY, YYYYMMDD\n self.publishers = [data[135]]\n self.weight = data[39]\n self.authors = self.contributors(data)\n self.lc_classifications = [data[147]] if data[147] else []\n self.pagination = data[36]\n self.languages = [data[37].lower()]\n self.source_records = [self.source_id]\n self.subjects = [\n s.capitalize().replace('_', ', ')\n for s in data[91:100]\n # + data[101:120]\n # + data[153:158]\n if s\n ]\n\n # Inactive fields\n self.copyright = data[19]\n self.issn = data[54]\n self.doi = data[145]\n self.lccn = data[146]\n self.dewey = data[49]\n # physical_dimensions\n # e.g. 
\"5.4 x 4.7 x 0.2 inches\"\n self.length, self.width, self.height = data[40:43]\n\n # Assert importable\n for field in self.REQUIRED_FIELDS + ['isbn_13']:\n assert getattr(self, field), field\n assert (\n self.primary_format not in self.NONBOOK\n ), f\"{self.primary_format} is NONBOOK\"\n\n @staticmethod\n def contributors(data):\n def make_author(name, _, typ):\n author = {'name': name}\n if typ == 'X':\n # set corporate contributor\n author['entity_type'] = 'org'\n # TODO: sort out contributor types\n # AU = author\n # ED = editor\n return author\n\n contributors = (\n (data[21 + i * 3], data[22 + i * 3], data[23 + i * 3]) for i in range(5)\n )\n\n # form list of author dicts\n authors = [make_author(*c) for c in contributors if c[0]]\n return authors\n\n def json(self):\n return {\n field: getattr(self, field)\n for field in self.ACTIVE_FIELDS\n if getattr(self, field)\n }\n\n\ndef load_state(path, logfile):\n \"\"\"Retrieves starting point from logfile, if log exists\n\n Takes as input a path which expands to an ordered candidate list\n of bettworldbks* filenames to process, the location of the\n logfile, and determines which of those files are remaining, as\n well as what our offset is in that file.\n\n e.g. if we request path containing f1, f2, f3 and our log\n says f2,100 then we start our processing at f2 at the 100th line.\n\n This assumes the script is being called w/ e.g.:\n /1/var/tmp/imports/2021-08/Bibliographic/*/\n \"\"\"\n filenames = sorted(\n os.path.join(path, f) for f in os.listdir(path) if f.startswith(\"bettworldbks\")\n )\n try:\n with open(logfile) as fin:\n active_fname, offset = next(fin).strip().split(',')\n unfinished_filenames = filenames[filenames.index(active_fname) :]\n return unfinished_filenames, int(offset)\n except (ValueError, OSError):\n return filenames, 0\n\n\ndef update_state(logfile, fname, line_num=0):\n \"\"\"Records the last file we began processing and the current line\"\"\"\n with open(logfile, 'w') as fout:\n fout.write(f'{fname},{line_num}\\n')\n\n\ndef csv_to_ol_json_item(line):\n \"\"\"converts a line to a book item\"\"\"\n try:\n data = line.decode().strip().split('|')\n except UnicodeDecodeError:\n data = line.decode('ISO-8859-1').strip().split('|')\n\n b = Biblio(data)\n return {'ia_id': b.source_id, 'data': b.json()}\n\n\ndef is_low_quality_book(book_item) -> bool:\n \"\"\"\n Check if a book item is of low quality which means that 1) one of its authors\n (regardless of case) is in the set of excluded authors.\n \"\"\"\n authors = {a['name'].casefold() for a in book_item.get('authors') or []}\n if authors & EXCLUDED_AUTHORS: # Leverage Python set intersection for speed.\n return True\n\n # A recent independently published book with excluded key words in its title\n # (regardless of case) is also considered a low quality book.\n title_words = set(re.split(r'\\W+', book_item[\"title\"].casefold()))\n publishers = {p.casefold() for p in book_item.get('publishers') or []}\n publish_year = int(book_item.get(\"publish_date\", \"0\")[:4]) # YYYY\n return bool(\n \"independently published\" in publishers\n and publish_year >= 2018\n and title_words & EXCLUDED_INDEPENDENTLY_PUBLISHED_TITLES\n )\n\n\ndef batch_import(path, batch, batch_size=5000):\n logfile = os.path.join(path, 'import.log')\n filenames, offset = load_state(path, logfile)\n\n for fname in filenames:\n book_items = []\n with open(fname, 'rb') as f:\n logger.info(f\"Processing: {fname} from line {offset}\")\n for line_num, line in enumerate(f):\n\n # skip over already processed 
records\n if offset:\n if offset > line_num:\n continue\n offset = 0\n\n try:\n book_item = csv_to_ol_json_item(line)\n if not is_low_quality_book(book_item[\"data\"]):\n book_items.append(book_item)\n except (AssertionError, IndexError) as e:\n logger.info(f\"Error: {e} from {line}\")\n\n # If we have enough items, submit a batch\n if not ((line_num + 1) % batch_size):\n batch.add_items(book_items)\n update_state(logfile, fname, line_num)\n book_items = [] # clear added items\n\n # Add any remaining book_items to batch\n if book_items:\n batch.add_items(book_items)\n update_state(logfile, fname, line_num)\n\n\ndef main(ol_config: str, batch_path: str):\n load_config(ol_config)\n\n # Partner data is offset ~15 days from start of month\n date = datetime.date.today() - datetime.timedelta(days=15)\n batch_name = \"%s-%04d%02d\" % ('bwb', date.year, date.month)\n batch = Batch.find(batch_name) or Batch.new(batch_name)\n batch_import(batch_path, batch)\n\n\nif __name__ == '__main__':\n FnToCLI(main).run()\n", "path": "scripts/partner_batch_imports.py"}]}
| 4,053 | 405 |
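Editor's note: the `load_state`/`update_state` pair in `scripts/partner_batch_imports.py` (row above) is easiest to understand by running the bookkeeping on its own. The sketch below is illustrative only: it takes an explicit list of filenames instead of globbing a directory, and the file names are invented, but the one-line `fname,line_num` log format matches the script.

```python
import os

def update_state(logfile, fname, line_num=0):
    # Record the file currently being processed and the last line reached.
    with open(logfile, 'w') as fout:
        fout.write(f'{fname},{line_num}\n')

def load_state(filenames, logfile):
    # Resume from the logged file/offset; fall back to the full list at line 0.
    try:
        with open(logfile) as fin:
            active_fname, offset = next(fin).strip().split(',')
        return filenames[filenames.index(active_fname):], int(offset)
    except (ValueError, OSError):
        return filenames, 0

files = ['bettworldbks001', 'bettworldbks002', 'bettworldbks003']
update_state('import.log', 'bettworldbks002', 100)   # pretend we stopped here
print(load_state(files, 'import.log'))               # (['bettworldbks002', 'bettworldbks003'], 100)
os.remove('import.log')
```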
gh_patches_debug_20166
|
rasdani/github-patches
|
git_diff
|
marshmallow-code__webargs-680
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
typing issue with __version_info__ += __parsed_version__.pre
mypy issue:
```
__version_info__ += __parsed_version__.pre
```
```
src/webargs/__init__.py:14: error: Unsupported operand types for + ("Tuple[int, ...]" and "Tuple[str, int]")
```
Not sure what the problem is. I'm tempted to just add a `# type: ignore`. Any better idea, anyone?
</issue>
<code>
[start of src/webargs/__init__.py]
1 from packaging.version import Version
2 from marshmallow.utils import missing
3
4 # Make marshmallow's validation functions importable from webargs
5 from marshmallow import validate
6
7 from webargs.core import ValidationError
8 from webargs import fields
9
10 __version__ = "8.0.1"
11 __parsed_version__ = Version(__version__)
12 __version_info__ = __parsed_version__.release
13 if __parsed_version__.pre:
14 __version_info__ += __parsed_version__.pre
15 __all__ = ("ValidationError", "fields", "missing", "validate")
16
[end of src/webargs/__init__.py]
[start of setup.py]
1 import re
2 from setuptools import setup, find_packages
3
4 FRAMEWORKS = [
5 "Flask>=0.12.5",
6 "Django>=2.2.0",
7 "bottle>=0.12.13",
8 "tornado>=4.5.2",
9 "pyramid>=1.9.1",
10 "falcon>=2.0.0",
11 "aiohttp>=3.0.8",
12 ]
13 EXTRAS_REQUIRE = {
14 "frameworks": FRAMEWORKS,
15 "tests": [
16 "pytest",
17 "webtest==3.0.0",
18 "webtest-aiohttp==2.0.0",
19 "pytest-aiohttp>=0.3.0",
20 ]
21 + FRAMEWORKS,
22 "lint": [
23 "mypy==0.910",
24 "flake8==4.0.1",
25 "flake8-bugbear==21.11.29",
26 "pre-commit~=2.4",
27 ],
28 "docs": [
29 "Sphinx==4.3.2",
30 "sphinx-issues==2.0.0",
31 "furo==2022.1.2",
32 ]
33 + FRAMEWORKS,
34 }
35 EXTRAS_REQUIRE["dev"] = EXTRAS_REQUIRE["tests"] + EXTRAS_REQUIRE["lint"] + ["tox"]
36
37
38 def find_version(fname):
39 """Attempts to find the version number in the file names fname.
40 Raises RuntimeError if not found.
41 """
42 version = ""
43 with open(fname) as fp:
44 reg = re.compile(r'__version__ = [\'"]([^\'"]*)[\'"]')
45 for line in fp:
46 m = reg.match(line)
47 if m:
48 version = m.group(1)
49 break
50 if not version:
51 raise RuntimeError("Cannot find version information")
52 return version
53
54
55 def read(fname):
56 with open(fname) as fp:
57 content = fp.read()
58 return content
59
60
61 setup(
62 name="webargs",
63 version=find_version("src/webargs/__init__.py"),
64 description=(
65 "Declarative parsing and validation of HTTP request objects, "
66 "with built-in support for popular web frameworks, including "
67 "Flask, Django, Bottle, Tornado, Pyramid, Falcon, and aiohttp."
68 ),
69 long_description=read("README.rst"),
70 author="Steven Loria",
71 author_email="[email protected]",
72 url="https://github.com/marshmallow-code/webargs",
73 packages=find_packages("src"),
74 package_dir={"": "src"},
75 package_data={"webargs": ["py.typed"]},
76 install_requires=["marshmallow>=3.0.0", "packaging"],
77 extras_require=EXTRAS_REQUIRE,
78 license="MIT",
79 zip_safe=False,
80 keywords=(
81 "webargs",
82 "http",
83 "flask",
84 "django",
85 "bottle",
86 "tornado",
87 "aiohttp",
88 "request",
89 "arguments",
90 "validation",
91 "parameters",
92 "rest",
93 "api",
94 "marshmallow",
95 ),
96 python_requires=">=3.7",
97 classifiers=[
98 "Development Status :: 5 - Production/Stable",
99 "Intended Audience :: Developers",
100 "License :: OSI Approved :: MIT License",
101 "Natural Language :: English",
102 "Programming Language :: Python :: 3",
103 "Programming Language :: Python :: 3.7",
104 "Programming Language :: Python :: 3.8",
105 "Programming Language :: Python :: 3.9",
106 "Programming Language :: Python :: 3.10",
107 "Programming Language :: Python :: 3 :: Only",
108 "Topic :: Internet :: WWW/HTTP :: Dynamic Content",
109 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
110 ],
111 test_suite="tests",
112 project_urls={
113 "Changelog": "https://webargs.readthedocs.io/en/latest/changelog.html",
114 "Issues": "https://github.com/marshmallow-code/webargs/issues",
115 "Funding": "https://opencollective.com/marshmallow",
116 "Tidelift": "https://tidelift.com/subscription/pkg/pypi-webargs?utm_source=pypi-marshmallow&utm_medium=pypi", # noqa
117 },
118 )
119
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -20,7 +20,7 @@
]
+ FRAMEWORKS,
"lint": [
- "mypy==0.910",
+ "mypy==0.930",
"flake8==4.0.1",
"flake8-bugbear==21.11.29",
"pre-commit~=2.4",
diff --git a/src/webargs/__init__.py b/src/webargs/__init__.py
--- a/src/webargs/__init__.py
+++ b/src/webargs/__init__.py
@@ -1,3 +1,5 @@
+from __future__ import annotations
+
from packaging.version import Version
from marshmallow.utils import missing
@@ -9,7 +11,9 @@
__version__ = "8.0.1"
__parsed_version__ = Version(__version__)
-__version_info__ = __parsed_version__.release
+__version_info__: tuple[int, int, int] | tuple[
+ int, int, int, str, int
+] = __parsed_version__.release # type: ignore[assignment]
if __parsed_version__.pre:
- __version_info__ += __parsed_version__.pre
+ __version_info__ += __parsed_version__.pre # type: ignore[assignment]
__all__ = ("ValidationError", "fields", "missing", "validate")
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -20,7 +20,7 @@\n ]\n + FRAMEWORKS,\n \"lint\": [\n- \"mypy==0.910\",\n+ \"mypy==0.930\",\n \"flake8==4.0.1\",\n \"flake8-bugbear==21.11.29\",\n \"pre-commit~=2.4\",\ndiff --git a/src/webargs/__init__.py b/src/webargs/__init__.py\n--- a/src/webargs/__init__.py\n+++ b/src/webargs/__init__.py\n@@ -1,3 +1,5 @@\n+from __future__ import annotations\n+\n from packaging.version import Version\n from marshmallow.utils import missing\n \n@@ -9,7 +11,9 @@\n \n __version__ = \"8.0.1\"\n __parsed_version__ = Version(__version__)\n-__version_info__ = __parsed_version__.release\n+__version_info__: tuple[int, int, int] | tuple[\n+ int, int, int, str, int\n+] = __parsed_version__.release # type: ignore[assignment]\n if __parsed_version__.pre:\n- __version_info__ += __parsed_version__.pre\n+ __version_info__ += __parsed_version__.pre # type: ignore[assignment]\n __all__ = (\"ValidationError\", \"fields\", \"missing\", \"validate\")\n", "issue": "typing issue with __version_info__ += __parsed_version__.pre\nmypy issue:\r\n\r\n```\r\n __version_info__ += __parsed_version__.pre\r\n```\r\n\r\n```\r\nsrc/webargs/__init__.py:14: error: Unsupported operand types for + (\"Tuple[int, ...]\" and \"Tuple[str, int]\")\r\n```\r\n\r\nNot sure what the problem is. I'm tempted to just add a `# type: ignore`. Any better idea, anyone?\n", "before_files": [{"content": "from packaging.version import Version\nfrom marshmallow.utils import missing\n\n# Make marshmallow's validation functions importable from webargs\nfrom marshmallow import validate\n\nfrom webargs.core import ValidationError\nfrom webargs import fields\n\n__version__ = \"8.0.1\"\n__parsed_version__ = Version(__version__)\n__version_info__ = __parsed_version__.release\nif __parsed_version__.pre:\n __version_info__ += __parsed_version__.pre\n__all__ = (\"ValidationError\", \"fields\", \"missing\", \"validate\")\n", "path": "src/webargs/__init__.py"}, {"content": "import re\nfrom setuptools import setup, find_packages\n\nFRAMEWORKS = [\n \"Flask>=0.12.5\",\n \"Django>=2.2.0\",\n \"bottle>=0.12.13\",\n \"tornado>=4.5.2\",\n \"pyramid>=1.9.1\",\n \"falcon>=2.0.0\",\n \"aiohttp>=3.0.8\",\n]\nEXTRAS_REQUIRE = {\n \"frameworks\": FRAMEWORKS,\n \"tests\": [\n \"pytest\",\n \"webtest==3.0.0\",\n \"webtest-aiohttp==2.0.0\",\n \"pytest-aiohttp>=0.3.0\",\n ]\n + FRAMEWORKS,\n \"lint\": [\n \"mypy==0.910\",\n \"flake8==4.0.1\",\n \"flake8-bugbear==21.11.29\",\n \"pre-commit~=2.4\",\n ],\n \"docs\": [\n \"Sphinx==4.3.2\",\n \"sphinx-issues==2.0.0\",\n \"furo==2022.1.2\",\n ]\n + FRAMEWORKS,\n}\nEXTRAS_REQUIRE[\"dev\"] = EXTRAS_REQUIRE[\"tests\"] + EXTRAS_REQUIRE[\"lint\"] + [\"tox\"]\n\n\ndef find_version(fname):\n \"\"\"Attempts to find the version number in the file names fname.\n Raises RuntimeError if not found.\n \"\"\"\n version = \"\"\n with open(fname) as fp:\n reg = re.compile(r'__version__ = [\\'\"]([^\\'\"]*)[\\'\"]')\n for line in fp:\n m = reg.match(line)\n if m:\n version = m.group(1)\n break\n if not version:\n raise RuntimeError(\"Cannot find version information\")\n return version\n\n\ndef read(fname):\n with open(fname) as fp:\n content = fp.read()\n return content\n\n\nsetup(\n name=\"webargs\",\n version=find_version(\"src/webargs/__init__.py\"),\n description=(\n \"Declarative parsing and validation of HTTP request objects, \"\n \"with built-in support for popular web frameworks, including \"\n \"Flask, Django, Bottle, Tornado, Pyramid, Falcon, and aiohttp.\"\n ),\n 
long_description=read(\"README.rst\"),\n author=\"Steven Loria\",\n author_email=\"[email protected]\",\n url=\"https://github.com/marshmallow-code/webargs\",\n packages=find_packages(\"src\"),\n package_dir={\"\": \"src\"},\n package_data={\"webargs\": [\"py.typed\"]},\n install_requires=[\"marshmallow>=3.0.0\", \"packaging\"],\n extras_require=EXTRAS_REQUIRE,\n license=\"MIT\",\n zip_safe=False,\n keywords=(\n \"webargs\",\n \"http\",\n \"flask\",\n \"django\",\n \"bottle\",\n \"tornado\",\n \"aiohttp\",\n \"request\",\n \"arguments\",\n \"validation\",\n \"parameters\",\n \"rest\",\n \"api\",\n \"marshmallow\",\n ),\n python_requires=\">=3.7\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Natural Language :: English\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Topic :: Internet :: WWW/HTTP :: Dynamic Content\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n test_suite=\"tests\",\n project_urls={\n \"Changelog\": \"https://webargs.readthedocs.io/en/latest/changelog.html\",\n \"Issues\": \"https://github.com/marshmallow-code/webargs/issues\",\n \"Funding\": \"https://opencollective.com/marshmallow\",\n \"Tidelift\": \"https://tidelift.com/subscription/pkg/pypi-webargs?utm_source=pypi-marshmallow&utm_medium=pypi\", # noqa\n },\n)\n", "path": "setup.py"}]}
| 1,975 | 320 |
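Editor's note: the webargs typing fix above can be tried outside the package. The snippet below mirrors the pattern from the patch (a widened annotation plus targeted `type: ignore[assignment]`); the pre-release version string is invented for the demo, and `packaging` must be installed.

```python
from __future__ import annotations

from packaging.version import Version

__version__ = "8.0.1b1"          # any pre-release version triggers the branch below
__parsed_version__ = Version(__version__)

# mypy infers Tuple[int, ...] for `release`, so concatenating the (str, int)
# pre-release segment needs both the widened annotation and the ignore comment.
__version_info__: tuple[int, int, int] | tuple[
    int, int, int, str, int
] = __parsed_version__.release  # type: ignore[assignment]
if __parsed_version__.pre:
    __version_info__ += __parsed_version__.pre  # type: ignore[assignment]

print(__version_info__)          # (8, 0, 1, 'b', 1)
```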
gh_patches_debug_50867
|
rasdani/github-patches
|
git_diff
|
spyder-ide__spyder-8896
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
spyder 3.3.3 icon theme Spyder 3 problem with PyQt 5.12
## Problem Description
After updating to Spyder 3.3.3 (on Linux, with Python 3.6.7 64-bit | | Qt 5.12.1 | PyQt5 5.12 ) spyder icon theme "Spyder 3" stopped working (because of coming with this version PyQt upgrade probably) . Only the "Spyder 2" icon theme is working.
Below the look of Spyder3 icon theme

After reverting to PyQt 5.9.2 the icon set Spyder3 is working again.
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """
8 Spyder
9 ======
10
11 The Scientific Python Development Environment
12
13 Spyder is a powerful scientific environment written in Python, for Python,
14 and designed by and for scientists, engineers and data analysts.
15
16 It features a unique combination of the advanced editing, analysis, debugging
17 and profiling functionality of a comprehensive development tool with the data
18 exploration, interactive execution, deep inspection and beautiful visualization
19 capabilities of a scientific package.
20 """
21
22 from __future__ import print_function
23
24 import os
25 import os.path as osp
26 import subprocess
27 import sys
28 import shutil
29
30 from distutils.core import setup
31 from distutils.command.install_data import install_data
32
33
34 #==============================================================================
35 # Check for Python 3
36 #==============================================================================
37 PY3 = sys.version_info[0] == 3
38
39
40 #==============================================================================
41 # Minimal Python version sanity check
42 # Taken from the notebook setup.py -- Modified BSD License
43 #==============================================================================
44 v = sys.version_info
45 if v[:2] < (2, 7) or (v[0] >= 3 and v[:2] < (3, 4)):
46 error = "ERROR: Spyder requires Python version 2.7 or 3.4 and above."
47 print(error, file=sys.stderr)
48 sys.exit(1)
49
50
51 #==============================================================================
52 # Constants
53 #==============================================================================
54 NAME = 'spyder'
55 LIBNAME = 'spyder'
56 from spyder import __version__, __website_url__ #analysis:ignore
57
58
59 #==============================================================================
60 # Auxiliary functions
61 #==============================================================================
62 def get_package_data(name, extlist):
63 """Return data files for package *name* with extensions in *extlist*"""
64 flist = []
65 # Workaround to replace os.path.relpath (not available until Python 2.6):
66 offset = len(name)+len(os.pathsep)
67 for dirpath, _dirnames, filenames in os.walk(name):
68 for fname in filenames:
69 if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:
70 flist.append(osp.join(dirpath, fname)[offset:])
71 return flist
72
73
74 def get_subpackages(name):
75 """Return subpackages of package *name*"""
76 splist = []
77 for dirpath, _dirnames, _filenames in os.walk(name):
78 if osp.isfile(osp.join(dirpath, '__init__.py')):
79 splist.append(".".join(dirpath.split(os.sep)))
80 return splist
81
82
83 def get_data_files():
84 """Return data_files in a platform dependent manner"""
85 if sys.platform.startswith('linux'):
86 if PY3:
87 data_files = [('share/applications', ['scripts/spyder3.desktop']),
88 ('share/icons', ['img_src/spyder3.png']),
89 ('share/metainfo', ['scripts/spyder3.appdata.xml'])]
90 else:
91 data_files = [('share/applications', ['scripts/spyder.desktop']),
92 ('share/icons', ['img_src/spyder.png'])]
93 elif os.name == 'nt':
94 data_files = [('scripts', ['img_src/spyder.ico',
95 'img_src/spyder_reset.ico'])]
96 else:
97 data_files = []
98 return data_files
99
100
101 def get_packages():
102 """Return package list"""
103 packages = (
104 get_subpackages(LIBNAME)
105 + get_subpackages('spyder_breakpoints')
106 + get_subpackages('spyder_profiler')
107 + get_subpackages('spyder_pylint')
108 + get_subpackages('spyder_io_dcm')
109 + get_subpackages('spyder_io_hdf5')
110 )
111 return packages
112
113
114 #==============================================================================
115 # Make Linux detect Spyder desktop file
116 #==============================================================================
117 class MyInstallData(install_data):
118 def run(self):
119 install_data.run(self)
120 if sys.platform.startswith('linux'):
121 try:
122 subprocess.call(['update-desktop-database'])
123 except:
124 print("ERROR: unable to update desktop database",
125 file=sys.stderr)
126 CMDCLASS = {'install_data': MyInstallData}
127
128
129 #==============================================================================
130 # Main scripts
131 #==============================================================================
132 # NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows
133 # platforms due to a bug in pip installation process (see Issue 1158)
134 SCRIPTS = ['%s_win_post_install.py' % NAME]
135 if PY3 and sys.platform.startswith('linux'):
136 SCRIPTS.append('spyder3')
137 else:
138 SCRIPTS.append('spyder')
139
140
141 #==============================================================================
142 # Files added to the package
143 #==============================================================================
144 EXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',
145 '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',
146 '.md', '.R', '.csv', '.pyx', '.ipynb', '.xml']
147 if os.name == 'nt':
148 SCRIPTS += ['spyder.bat']
149 EXTLIST += ['.ico']
150
151
152 #==============================================================================
153 # Setup arguments
154 #==============================================================================
155 setup_args = dict(
156 name=NAME,
157 version=__version__,
158 description='The Scientific Python Development Environment',
159 long_description=(
160 """Spyder is a powerful scientific environment written in Python, for Python,
161 and designed by and for scientists, engineers and data analysts.
162 It features a unique combination of the advanced editing, analysis, debugging
163 and profiling functionality of a comprehensive development tool with the data
164 exploration, interactive execution, deep inspection and beautiful visualization
165 capabilities of a scientific package.\n
166 Furthermore, Spyder offers built-in integration with many popular
167 scientific packages, including NumPy, SciPy, Pandas, IPython, QtConsole,
168 Matplotlib, SymPy, and more.\n
169 Beyond its many built-in features, Spyder's abilities can be extended even
170 further via first- and third-party plugins.\n
171 Spyder can also be used as a PyQt5 extension library, allowing you to build
172 upon its functionality and embed its components, such as the interactive
173 console or advanced editor, in your own software.
174 """),
175 download_url=__website_url__ + "#fh5co-download",
176 author="The Spyder Project Contributors",
177 author_email="[email protected]",
178 url=__website_url__,
179 license='MIT',
180 keywords='PyQt5 editor console widgets IDE science data analysis IPython',
181 platforms=["Windows", "Linux", "Mac OS-X"],
182 packages=get_packages(),
183 package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),
184 'spyder_breakpoints': get_package_data('spyder_breakpoints',
185 EXTLIST),
186 'spyder_profiler': get_package_data('spyder_profiler',
187 EXTLIST),
188 'spyder_pylint': get_package_data('spyder_pylint',
189 EXTLIST),
190 'spyder_io_dcm': get_package_data('spyder_io_dcm',
191 EXTLIST),
192 'spyder_io_hdf5': get_package_data('spyder_io_hdf5',
193 EXTLIST),
194 },
195 scripts=[osp.join('scripts', fname) for fname in SCRIPTS],
196 data_files=get_data_files(),
197 classifiers=['License :: OSI Approved :: MIT License',
198 'Operating System :: MacOS',
199 'Operating System :: Microsoft :: Windows',
200 'Operating System :: POSIX :: Linux',
201 'Programming Language :: Python :: 2',
202 'Programming Language :: Python :: 2.7',
203 'Programming Language :: Python :: 3',
204 'Programming Language :: Python :: 3.4',
205 'Programming Language :: Python :: 3.5',
206 'Programming Language :: Python :: 3.6',
207 'Programming Language :: Python :: 3.7',
208 'Development Status :: 5 - Production/Stable',
209 'Intended Audience :: Education',
210 'Intended Audience :: Science/Research',
211 'Intended Audience :: Developers',
212 'Topic :: Scientific/Engineering',
213 'Topic :: Software Development :: Widget Sets'],
214 cmdclass=CMDCLASS)
215
216
217 #==============================================================================
218 # Setuptools deps
219 #==============================================================================
220 if any(arg == 'bdist_wheel' for arg in sys.argv):
221 import setuptools # analysis:ignore
222
223 install_requires = [
224 'cloudpickle',
225 'rope>=0.10.5',
226 'jedi>=0.9.0',
227 'pyflakes',
228 'pygments>=2.0',
229 'qtconsole>=4.2.0',
230 'nbconvert',
231 'sphinx',
232 'pycodestyle',
233 'pylint',
234 'psutil',
235 'qtawesome>=0.4.1',
236 'qtpy>=1.5.0',
237 'pickleshare',
238 'pyzmq',
239 'chardet>=2.0.0',
240 'numpydoc',
241 'spyder-kernels>=0.4.2,<1.0',
242 # Don't require keyring for Python 2 and Linux
243 # because it depends on system packages
244 'keyring;sys_platform!="linux2"',
245 # Packages for pyqt5 are only available in
246 # Python 3
247 'pyqt5<5.13;python_version>="3"',
248 # pyqt5 5.12 split WebEngine into the
249 # pyqtwebengine module
250 'pyqtwebengine<5.13'
251 ]
252
253 extras_require = {
254 'test:python_version == "2.7"': ['mock'],
255 'test': ['pytest<4.1',
256 'pytest-qt',
257 'pytest-mock',
258 'pytest-cov',
259 'pytest-xvfb',
260 'mock',
261 'flaky',
262 'pandas',
263 'scipy',
264 'sympy',
265 'pillow',
266 'matplotlib',
267 'cython'],
268 }
269
270 if 'setuptools' in sys.modules:
271 setup_args['install_requires'] = install_requires
272 setup_args['extras_require'] = extras_require
273
274 setup_args['entry_points'] = {
275 'gui_scripts': [
276 '{} = spyder.app.start:main'.format(
277 'spyder3' if PY3 else 'spyder')
278 ]
279 }
280
281 setup_args.pop('scripts', None)
282
283
284 #==============================================================================
285 # Main setup
286 #==============================================================================
287 setup(**setup_args)
288
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -232,7 +232,7 @@
'pycodestyle',
'pylint',
'psutil',
- 'qtawesome>=0.4.1',
+ 'qtawesome>=0.5.7',
'qtpy>=1.5.0',
'pickleshare',
'pyzmq',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -232,7 +232,7 @@\n 'pycodestyle',\n 'pylint',\n 'psutil',\n- 'qtawesome>=0.4.1',\n+ 'qtawesome>=0.5.7',\n 'qtpy>=1.5.0',\n 'pickleshare',\n 'pyzmq',\n", "issue": "spyder 3.3.3 icon theme Spyder 3 problem with PyQt 5.12\n## Problem Description\r\nAfter updating to Spyder 3.3.3 (on Linux, with Python 3.6.7 64-bit | | Qt 5.12.1 | PyQt5 5.12 ) spyder icon theme \"Spyder 3\" stopped working (because of coming with this version PyQt upgrade probably) . Only the \"Spyder 2\" icon theme is working.\r\nBelow the look of Spyder3 icon theme\r\n\r\n\r\nAfter reverting to PyQt 5.9.2 the icon set Spyder3 is working again.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nSpyder\n======\n\nThe Scientific Python Development Environment\n\nSpyder is a powerful scientific environment written in Python, for Python,\nand designed by and for scientists, engineers and data analysts.\n\nIt features a unique combination of the advanced editing, analysis, debugging\nand profiling functionality of a comprehensive development tool with the data\nexploration, interactive execution, deep inspection and beautiful visualization\ncapabilities of a scientific package.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport os.path as osp\nimport subprocess\nimport sys\nimport shutil\n\nfrom distutils.core import setup\nfrom distutils.command.install_data import install_data\n\n\n#==============================================================================\n# Check for Python 3\n#==============================================================================\nPY3 = sys.version_info[0] == 3\n\n\n#==============================================================================\n# Minimal Python version sanity check\n# Taken from the notebook setup.py -- Modified BSD License\n#==============================================================================\nv = sys.version_info\nif v[:2] < (2, 7) or (v[0] >= 3 and v[:2] < (3, 4)):\n error = \"ERROR: Spyder requires Python version 2.7 or 3.4 and above.\"\n print(error, file=sys.stderr)\n sys.exit(1)\n\n\n#==============================================================================\n# Constants\n#==============================================================================\nNAME = 'spyder'\nLIBNAME = 'spyder'\nfrom spyder import __version__, __website_url__ #analysis:ignore\n\n\n#==============================================================================\n# Auxiliary functions\n#==============================================================================\ndef get_package_data(name, extlist):\n \"\"\"Return data files for package *name* with extensions in *extlist*\"\"\"\n flist = []\n # Workaround to replace os.path.relpath (not available until Python 2.6):\n offset = len(name)+len(os.pathsep)\n for dirpath, _dirnames, filenames in os.walk(name):\n for fname in filenames:\n if not fname.startswith('.') and osp.splitext(fname)[1] in extlist:\n flist.append(osp.join(dirpath, fname)[offset:])\n return flist\n\n\ndef get_subpackages(name):\n \"\"\"Return subpackages of package *name*\"\"\"\n splist = []\n for dirpath, _dirnames, _filenames in os.walk(name):\n if osp.isfile(osp.join(dirpath, '__init__.py')):\n splist.append(\".\".join(dirpath.split(os.sep)))\n return splist\n\n\ndef get_data_files():\n \"\"\"Return 
data_files in a platform dependent manner\"\"\"\n if sys.platform.startswith('linux'):\n if PY3:\n data_files = [('share/applications', ['scripts/spyder3.desktop']),\n ('share/icons', ['img_src/spyder3.png']),\n ('share/metainfo', ['scripts/spyder3.appdata.xml'])]\n else:\n data_files = [('share/applications', ['scripts/spyder.desktop']),\n ('share/icons', ['img_src/spyder.png'])]\n elif os.name == 'nt':\n data_files = [('scripts', ['img_src/spyder.ico',\n 'img_src/spyder_reset.ico'])]\n else:\n data_files = []\n return data_files\n\n\ndef get_packages():\n \"\"\"Return package list\"\"\"\n packages = (\n get_subpackages(LIBNAME)\n + get_subpackages('spyder_breakpoints')\n + get_subpackages('spyder_profiler')\n + get_subpackages('spyder_pylint')\n + get_subpackages('spyder_io_dcm')\n + get_subpackages('spyder_io_hdf5')\n )\n return packages\n\n\n#==============================================================================\n# Make Linux detect Spyder desktop file\n#==============================================================================\nclass MyInstallData(install_data):\n def run(self):\n install_data.run(self)\n if sys.platform.startswith('linux'):\n try:\n subprocess.call(['update-desktop-database'])\n except:\n print(\"ERROR: unable to update desktop database\",\n file=sys.stderr)\nCMDCLASS = {'install_data': MyInstallData}\n\n\n#==============================================================================\n# Main scripts\n#==============================================================================\n# NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows\n# platforms due to a bug in pip installation process (see Issue 1158)\nSCRIPTS = ['%s_win_post_install.py' % NAME]\nif PY3 and sys.platform.startswith('linux'):\n SCRIPTS.append('spyder3')\nelse:\n SCRIPTS.append('spyder')\n\n\n#==============================================================================\n# Files added to the package\n#==============================================================================\nEXTLIST = ['.mo', '.svg', '.png', '.css', '.html', '.js', '.chm', '.ini',\n '.txt', '.rst', '.qss', '.ttf', '.json', '.c', '.cpp', '.java',\n '.md', '.R', '.csv', '.pyx', '.ipynb', '.xml']\nif os.name == 'nt':\n SCRIPTS += ['spyder.bat']\n EXTLIST += ['.ico']\n\n\n#==============================================================================\n# Setup arguments\n#==============================================================================\nsetup_args = dict(\n name=NAME,\n version=__version__,\n description='The Scientific Python Development Environment',\n long_description=(\n\"\"\"Spyder is a powerful scientific environment written in Python, for Python,\nand designed by and for scientists, engineers and data analysts.\nIt features a unique combination of the advanced editing, analysis, debugging\nand profiling functionality of a comprehensive development tool with the data\nexploration, interactive execution, deep inspection and beautiful visualization\ncapabilities of a scientific package.\\n\nFurthermore, Spyder offers built-in integration with many popular\nscientific packages, including NumPy, SciPy, Pandas, IPython, QtConsole,\nMatplotlib, SymPy, and more.\\n\nBeyond its many built-in features, Spyder's abilities can be extended even\nfurther via first- and third-party plugins.\\n\nSpyder can also be used as a PyQt5 extension library, allowing you to build\nupon its functionality and embed its components, such as the interactive\nconsole or advanced editor, in your own 
software.\n\"\"\"),\n download_url=__website_url__ + \"#fh5co-download\",\n author=\"The Spyder Project Contributors\",\n author_email=\"[email protected]\",\n url=__website_url__,\n license='MIT',\n keywords='PyQt5 editor console widgets IDE science data analysis IPython',\n platforms=[\"Windows\", \"Linux\", \"Mac OS-X\"],\n packages=get_packages(),\n package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST),\n 'spyder_breakpoints': get_package_data('spyder_breakpoints',\n EXTLIST),\n 'spyder_profiler': get_package_data('spyder_profiler',\n EXTLIST),\n 'spyder_pylint': get_package_data('spyder_pylint',\n EXTLIST),\n 'spyder_io_dcm': get_package_data('spyder_io_dcm',\n EXTLIST),\n 'spyder_io_hdf5': get_package_data('spyder_io_hdf5',\n EXTLIST),\n },\n scripts=[osp.join('scripts', fname) for fname in SCRIPTS],\n data_files=get_data_files(),\n classifiers=['License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development :: Widget Sets'],\n cmdclass=CMDCLASS)\n\n\n#==============================================================================\n# Setuptools deps\n#==============================================================================\nif any(arg == 'bdist_wheel' for arg in sys.argv):\n import setuptools # analysis:ignore\n\ninstall_requires = [\n 'cloudpickle',\n 'rope>=0.10.5',\n 'jedi>=0.9.0',\n 'pyflakes',\n 'pygments>=2.0',\n 'qtconsole>=4.2.0',\n 'nbconvert',\n 'sphinx',\n 'pycodestyle',\n 'pylint',\n 'psutil',\n 'qtawesome>=0.4.1',\n 'qtpy>=1.5.0',\n 'pickleshare',\n 'pyzmq',\n 'chardet>=2.0.0',\n 'numpydoc',\n 'spyder-kernels>=0.4.2,<1.0',\n # Don't require keyring for Python 2 and Linux\n # because it depends on system packages\n 'keyring;sys_platform!=\"linux2\"',\n # Packages for pyqt5 are only available in\n # Python 3\n 'pyqt5<5.13;python_version>=\"3\"',\n # pyqt5 5.12 split WebEngine into the\n # pyqtwebengine module\n 'pyqtwebengine<5.13'\n]\n\nextras_require = {\n 'test:python_version == \"2.7\"': ['mock'],\n 'test': ['pytest<4.1',\n 'pytest-qt',\n 'pytest-mock',\n 'pytest-cov',\n 'pytest-xvfb',\n 'mock',\n 'flaky',\n 'pandas',\n 'scipy',\n 'sympy',\n 'pillow',\n 'matplotlib',\n 'cython'],\n}\n\nif 'setuptools' in sys.modules:\n setup_args['install_requires'] = install_requires\n setup_args['extras_require'] = extras_require\n\n setup_args['entry_points'] = {\n 'gui_scripts': [\n '{} = spyder.app.start:main'.format(\n 'spyder3' if PY3 else 'spyder')\n ]\n }\n\n setup_args.pop('scripts', None)\n\n\n#==============================================================================\n# Main setup\n#==============================================================================\nsetup(**setup_args)\n", "path": "setup.py"}]}
| 3,745 | 98 |
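Editor's note: the Spyder fix above only raises a dependency floor, so its effect is easiest to show as a runtime check. The helper below is hypothetical (it is not part of Spyder) and assumes QtAwesome exposes `__version__`; it simply reports whether the installed version meets the `0.5.7` floor introduced by the patch.

```python
from packaging.version import Version

MINIMUM_QTAWESOME = "0.5.7"   # floor raised by the patch above

def qtawesome_is_compatible() -> bool:
    # Returns False when QtAwesome is missing or older than the required floor.
    try:
        import qtawesome
    except ImportError:
        return False
    return Version(qtawesome.__version__) >= Version(MINIMUM_QTAWESOME)

if __name__ == "__main__":
    print("QtAwesome new enough for the Spyder 3 icon theme:",
          qtawesome_is_compatible())
```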
gh_patches_debug_11347
|
rasdani/github-patches
|
git_diff
|
plotly__dash-999
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] + in version string breaks fingerprint system
**Describe your context**
- replace the result of `pip list | grep dash` below
```
dash 1.5.1
dash-core-components 1.4.0
dash-daq 0.2.2
dash-html-components 1.0.1
dash-renderer 1.2.0
dash-table 4.5.0
```
**Describe the bug**
When going from `dash==1.4` to `dash==1.5`, we experienced a breaking change in the custom Dash components we use.
It took some hours to debug, but the reason was found to be related to the new "fingerprint" system in Dash. In our project, we use the [setuptools_scm](https://github.com/pypa/setuptools_scm) package (by the Python Packaging Authority) in order to have a versioning system that automatically is linked to the git repo tags. This makes continuous deployment to e.g. Pypi easy and robust wrt. keeping versions consistent.
I.e. instead of
```python
__version__ = package['version']
```
in the component package, we use something like
```
__version__ = get_distribution(__name__).version
```
This worked until `dash==1.5`, then it broke on non-release-versions due to automatic tags of the type
`1.0.0.dev5+af4304c.d20191103`, where the tag includes a `+`. See [the default tag formats](https://github.com/pypa/setuptools_scm#default-versioning-scheme).
Changing the line above to
```
__version__ = get_distribution(__name__).version.replace("+", ".")
```
is one workaround that gets the third party components to also work on `dash==1.5`
**Expected behavior**
`setuptools_scm` provided versions to work also in `dash>=1.5`.
**Suggested solution**
Change [this line](https://github.com/plotly/dash/blob/40b5357f262ac207f94ac980e6cb928d94df65b7/dash/fingerprint.py#L12) in Dash's `build_fingerprint` to also replace `+` with `_`?
</issue>
<code>
[start of dash/fingerprint.py]
1 import re
2
3 cache_regex = re.compile(r"^v[\w-]+m[0-9a-fA-F]+$")
4
5
6 def build_fingerprint(path, version, hash_value):
7 path_parts = path.split("/")
8 filename, extension = path_parts[-1].split(".", 1)
9
10 return "{}.v{}m{}.{}".format(
11 "/".join(path_parts[:-1] + [filename]),
12 str(version).replace(".", "_"),
13 hash_value,
14 extension,
15 )
16
17
18 def check_fingerprint(path):
19 path_parts = path.split("/")
20 name_parts = path_parts[-1].split(".")
21
22 # Check if the resource has a fingerprint
23 if len(name_parts) > 2 and cache_regex.match(name_parts[1]):
24 original_name = ".".join([name_parts[0]] + name_parts[2:])
25 return "/".join(path_parts[:-1] + [original_name]), True
26
27 return path, False
28
[end of dash/fingerprint.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dash/fingerprint.py b/dash/fingerprint.py
--- a/dash/fingerprint.py
+++ b/dash/fingerprint.py
@@ -1,7 +1,7 @@
import re
cache_regex = re.compile(r"^v[\w-]+m[0-9a-fA-F]+$")
-
+version_clean = re.compile(r"[^\w-]")
def build_fingerprint(path, version, hash_value):
path_parts = path.split("/")
@@ -9,7 +9,7 @@
return "{}.v{}m{}.{}".format(
"/".join(path_parts[:-1] + [filename]),
- str(version).replace(".", "_"),
+ re.sub(version_clean, "_", str(version)),
hash_value,
extension,
)
|
{"golden_diff": "diff --git a/dash/fingerprint.py b/dash/fingerprint.py\n--- a/dash/fingerprint.py\n+++ b/dash/fingerprint.py\n@@ -1,7 +1,7 @@\n import re\n \n cache_regex = re.compile(r\"^v[\\w-]+m[0-9a-fA-F]+$\")\n-\n+version_clean = re.compile(r\"[^\\w-]\")\n \n def build_fingerprint(path, version, hash_value):\n path_parts = path.split(\"/\")\n@@ -9,7 +9,7 @@\n \n return \"{}.v{}m{}.{}\".format(\n \"/\".join(path_parts[:-1] + [filename]),\n- str(version).replace(\".\", \"_\"),\n+ re.sub(version_clean, \"_\", str(version)),\n hash_value,\n extension,\n )\n", "issue": "[BUG] + in version string breaks fingerprint system\n**Describe your context**\r\n- replace the result of `pip list | grep dash` below\r\n```\r\ndash 1.5.1 \r\ndash-core-components 1.4.0 \r\ndash-daq 0.2.2 \r\ndash-html-components 1.0.1 \r\ndash-renderer 1.2.0 \r\ndash-table 4.5.0 \r\n```\r\n\r\n**Describe the bug**\r\n\r\nWhen going from `dash==1.4` to `dash==1.5`, we experienced a breaking change in the custom Dash components we use.\r\n\r\nIt took some hours to debug, but the reason was found to be related to the new \"fingerprint\" system in Dash. In our project, we use the [setuptools_scm](https://github.com/pypa/setuptools_scm) package (by the Python Packaging Authority) in order to have a versioning system that automatically is linked to the git repo tags. This makes continuous deployment to e.g. Pypi easy and robust wrt. keeping versions consistent.\r\n\r\nI.e. instead of\r\n```python\r\n__version__ = package['version']\r\n```\r\nin the component package, we use something like\r\n```\r\n__version__ = get_distribution(__name__).version\r\n```\r\nThis worked until `dash==1.5`, then it broke on non-release-versions due to automatic tags of the type\r\n`1.0.0.dev5+af4304c.d20191103`, where the tag includes a `+`. See [the default tag formats](https://github.com/pypa/setuptools_scm#default-versioning-scheme).\r\n\r\nChanging the line above to\r\n```\r\n__version__ = get_distribution(__name__).version.replace(\"+\", \".\")\r\n```\r\nis one workaround that gets the third party components to also work on `dash==1.5`\r\n\r\n**Expected behavior**\r\n\r\n`setuptools_scm` provided versions to work also in `dash>=1.5`.\r\n\r\n**Suggested solution**\r\n\r\nChange [this line](https://github.com/plotly/dash/blob/40b5357f262ac207f94ac980e6cb928d94df65b7/dash/fingerprint.py#L12) in Dash's `build_fingerprint` to also replace `+` with `_`?\n", "before_files": [{"content": "import re\n\ncache_regex = re.compile(r\"^v[\\w-]+m[0-9a-fA-F]+$\")\n\n\ndef build_fingerprint(path, version, hash_value):\n path_parts = path.split(\"/\")\n filename, extension = path_parts[-1].split(\".\", 1)\n\n return \"{}.v{}m{}.{}\".format(\n \"/\".join(path_parts[:-1] + [filename]),\n str(version).replace(\".\", \"_\"),\n hash_value,\n extension,\n )\n\n\ndef check_fingerprint(path):\n path_parts = path.split(\"/\")\n name_parts = path_parts[-1].split(\".\")\n\n # Check if the resource has a fingerprint\n if len(name_parts) > 2 and cache_regex.match(name_parts[1]):\n original_name = \".\".join([name_parts[0]] + name_parts[2:])\n return \"/\".join(path_parts[:-1] + [original_name]), True\n\n return path, False\n", "path": "dash/fingerprint.py"}]}
| 1,306 | 165 |
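Editor's note: the Dash fingerprint fix above is easy to sanity-check in isolation. The snippet below is only an illustration: it applies the old `str.replace` line from `fingerprint.py` and the character-class substitution from the patch to the setuptools_scm-style version quoted in the issue.

```python
import re

version = "1.0.0.dev5+af4304c.d20191103"   # tag format quoted in the issue

old = version.replace(".", "_")            # original line in build_fingerprint
new = re.sub(r"[^\w-]", "_", version)      # substitution used by the patch

print(old)   # 1_0_0_dev5+af4304c_d20191103  -> "+" leaks into the fingerprint
print(new)   # 1_0_0_dev5_af4304c_d20191103  -> only word chars and "-" remain
```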
gh_patches_debug_175
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmengine-684
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
config/utils.py doesn't have mmyolo

</issue>
<code>
[start of mmengine/config/utils.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import ast
3 import os.path as osp
4 import re
5 import warnings
6 from typing import Tuple
7
8 from mmengine.fileio import load
9 from mmengine.utils import check_file_exist
10
11 PKG2PROJECT = {
12 'mmcls': 'mmcls',
13 'mmdet': 'mmdet',
14 'mmdet3d': 'mmdet3d',
15 'mmseg': 'mmsegmentation',
16 'mmaction2': 'mmaction2',
17 'mmtrack': 'mmtrack',
18 'mmpose': 'mmpose',
19 'mmedit': 'mmedit',
20 'mmocr': 'mmocr',
21 'mmgen': 'mmgen',
22 'mmfewshot': 'mmfewshot',
23 'mmrazor': 'mmrazor',
24 'mmflow': 'mmflow',
25 'mmhuman3d': 'mmhuman3d',
26 'mmrotate': 'mmrotate',
27 'mmselfsup': 'mmselfsup',
28 }
29
30
31 def _get_cfg_metainfo(package_path: str, cfg_path: str) -> dict:
32 """Get target meta information from all 'metafile.yml' defined in `mode-
33 index.yml` of external package.
34
35 Args:
36 package_path (str): Path of external package.
37 cfg_path (str): Name of experiment config.
38
39 Returns:
40 dict: Meta information of target experiment.
41 """
42 meta_index_path = osp.join(package_path, '.mim', 'model-index.yml')
43 meta_index = load(meta_index_path)
44 cfg_dict = dict()
45 for meta_path in meta_index['Import']:
46 meta_path = osp.join(package_path, '.mim', meta_path)
47 cfg_meta = load(meta_path)
48 for model_cfg in cfg_meta['Models']:
49 if 'Config' not in model_cfg:
50 warnings.warn(f'There is not `Config` define in {model_cfg}')
51 continue
52 cfg_name = model_cfg['Config'].partition('/')[-1]
53 # Some config could have multiple weights, we only pick the
54 # first one.
55 if cfg_name in cfg_dict:
56 continue
57 cfg_dict[cfg_name] = model_cfg
58 if cfg_path not in cfg_dict:
59 raise ValueError(f'Expected configs: {cfg_dict.keys()}, but got '
60 f'{cfg_path}')
61 return cfg_dict[cfg_path]
62
63
64 def _get_external_cfg_path(package_path: str, cfg_file: str) -> str:
65 """Get config path of external package.
66
67 Args:
68 package_path (str): Path of external package.
69 cfg_file (str): Name of experiment config.
70
71 Returns:
72 str: Absolute config path from external package.
73 """
74 cfg_file = cfg_file.split('.')[0]
75 model_cfg = _get_cfg_metainfo(package_path, cfg_file)
76 cfg_path = osp.join(package_path, model_cfg['Config'])
77 check_file_exist(cfg_path)
78 return cfg_path
79
80
81 def _get_external_cfg_base_path(package_path: str, cfg_name: str) -> str:
82 """Get base config path of external package.
83
84 Args:
85 package_path (str): Path of external package.
86 cfg_name (str): External relative config path with 'package::'.
87
88 Returns:
89 str: Absolute config path from external package.
90 """
91 cfg_path = osp.join(package_path, '.mim', 'configs', cfg_name)
92 check_file_exist(cfg_path)
93 return cfg_path
94
95
96 def _get_package_and_cfg_path(cfg_path: str) -> Tuple[str, str]:
97 """Get package name and relative config path.
98
99 Args:
100 cfg_path (str): External relative config path with 'package::'.
101
102 Returns:
103 Tuple[str, str]: Package name and config path.
104 """
105 if re.match(r'\w*::\w*/\w*', cfg_path) is None:
106 raise ValueError(
107 '`_get_package_and_cfg_path` is used for get external package, '
108 'please specify the package name and relative config path, just '
109 'like `mmdet::faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py`')
110 package_cfg = cfg_path.split('::')
111 if len(package_cfg) > 2:
112 raise ValueError('`::` should only be used to separate package and '
113 'config name, but found multiple `::` in '
114 f'{cfg_path}')
115 package, cfg_path = package_cfg
116 assert package in PKG2PROJECT, 'mmengine does not support to load ' \
117 f'{package} config.'
118 package = PKG2PROJECT[package]
119 return package, cfg_path
120
121
122 class RemoveAssignFromAST(ast.NodeTransformer):
123 """Remove Assign node if the target's name match the key.
124
125 Args:
126 key (str): The target name of the Assign node.
127 """
128
129 def __init__(self, key):
130 self.key = key
131
132 def visit_Assign(self, node):
133 if (isinstance(node.targets[0], ast.Name)
134 and node.targets[0].id == self.key):
135 return None
136 else:
137 return node
138
[end of mmengine/config/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmengine/config/utils.py b/mmengine/config/utils.py
--- a/mmengine/config/utils.py
+++ b/mmengine/config/utils.py
@@ -25,6 +25,7 @@
'mmhuman3d': 'mmhuman3d',
'mmrotate': 'mmrotate',
'mmselfsup': 'mmselfsup',
+ 'mmyolo': 'mmyolo',
}
|
{"golden_diff": "diff --git a/mmengine/config/utils.py b/mmengine/config/utils.py\n--- a/mmengine/config/utils.py\n+++ b/mmengine/config/utils.py\n@@ -25,6 +25,7 @@\n 'mmhuman3d': 'mmhuman3d',\n 'mmrotate': 'mmrotate',\n 'mmselfsup': 'mmselfsup',\n+ 'mmyolo': 'mmyolo',\n }\n", "issue": "config/utils.py haven't mmyolo\n\r\n\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nimport ast\nimport os.path as osp\nimport re\nimport warnings\nfrom typing import Tuple\n\nfrom mmengine.fileio import load\nfrom mmengine.utils import check_file_exist\n\nPKG2PROJECT = {\n 'mmcls': 'mmcls',\n 'mmdet': 'mmdet',\n 'mmdet3d': 'mmdet3d',\n 'mmseg': 'mmsegmentation',\n 'mmaction2': 'mmaction2',\n 'mmtrack': 'mmtrack',\n 'mmpose': 'mmpose',\n 'mmedit': 'mmedit',\n 'mmocr': 'mmocr',\n 'mmgen': 'mmgen',\n 'mmfewshot': 'mmfewshot',\n 'mmrazor': 'mmrazor',\n 'mmflow': 'mmflow',\n 'mmhuman3d': 'mmhuman3d',\n 'mmrotate': 'mmrotate',\n 'mmselfsup': 'mmselfsup',\n}\n\n\ndef _get_cfg_metainfo(package_path: str, cfg_path: str) -> dict:\n \"\"\"Get target meta information from all 'metafile.yml' defined in `mode-\n index.yml` of external package.\n\n Args:\n package_path (str): Path of external package.\n cfg_path (str): Name of experiment config.\n\n Returns:\n dict: Meta information of target experiment.\n \"\"\"\n meta_index_path = osp.join(package_path, '.mim', 'model-index.yml')\n meta_index = load(meta_index_path)\n cfg_dict = dict()\n for meta_path in meta_index['Import']:\n meta_path = osp.join(package_path, '.mim', meta_path)\n cfg_meta = load(meta_path)\n for model_cfg in cfg_meta['Models']:\n if 'Config' not in model_cfg:\n warnings.warn(f'There is not `Config` define in {model_cfg}')\n continue\n cfg_name = model_cfg['Config'].partition('/')[-1]\n # Some config could have multiple weights, we only pick the\n # first one.\n if cfg_name in cfg_dict:\n continue\n cfg_dict[cfg_name] = model_cfg\n if cfg_path not in cfg_dict:\n raise ValueError(f'Expected configs: {cfg_dict.keys()}, but got '\n f'{cfg_path}')\n return cfg_dict[cfg_path]\n\n\ndef _get_external_cfg_path(package_path: str, cfg_file: str) -> str:\n \"\"\"Get config path of external package.\n\n Args:\n package_path (str): Path of external package.\n cfg_file (str): Name of experiment config.\n\n Returns:\n str: Absolute config path from external package.\n \"\"\"\n cfg_file = cfg_file.split('.')[0]\n model_cfg = _get_cfg_metainfo(package_path, cfg_file)\n cfg_path = osp.join(package_path, model_cfg['Config'])\n check_file_exist(cfg_path)\n return cfg_path\n\n\ndef _get_external_cfg_base_path(package_path: str, cfg_name: str) -> str:\n \"\"\"Get base config path of external package.\n\n Args:\n package_path (str): Path of external package.\n cfg_name (str): External relative config path with 'package::'.\n\n Returns:\n str: Absolute config path from external package.\n \"\"\"\n cfg_path = osp.join(package_path, '.mim', 'configs', cfg_name)\n check_file_exist(cfg_path)\n return cfg_path\n\n\ndef _get_package_and_cfg_path(cfg_path: str) -> Tuple[str, str]:\n \"\"\"Get package name and relative config path.\n\n Args:\n cfg_path (str): External relative config path with 'package::'.\n\n Returns:\n Tuple[str, str]: Package name and config path.\n \"\"\"\n if re.match(r'\\w*::\\w*/\\w*', cfg_path) is None:\n raise ValueError(\n '`_get_package_and_cfg_path` is used for get external package, '\n 'please specify the package name and relative config path, just '\n 'like `mmdet::faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py`')\n 
package_cfg = cfg_path.split('::')\n if len(package_cfg) > 2:\n raise ValueError('`::` should only be used to separate package and '\n 'config name, but found multiple `::` in '\n f'{cfg_path}')\n package, cfg_path = package_cfg\n assert package in PKG2PROJECT, 'mmengine does not support to load ' \\\n f'{package} config.'\n package = PKG2PROJECT[package]\n return package, cfg_path\n\n\nclass RemoveAssignFromAST(ast.NodeTransformer):\n \"\"\"Remove Assign node if the target's name match the key.\n\n Args:\n key (str): The target name of the Assign node.\n \"\"\"\n\n def __init__(self, key):\n self.key = key\n\n def visit_Assign(self, node):\n if (isinstance(node.targets[0], ast.Name)\n and node.targets[0].id == self.key):\n return None\n else:\n return node\n", "path": "mmengine/config/utils.py"}]}
| 2,053 | 90 |
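Editor's note: the mmengine patch above is a one-line dictionary entry, but the failure happens in `_get_package_and_cfg_path`, which asserts that the `package::` prefix is a known project. The toy reproduction below is illustrative only (the dictionary is truncated, the validation regex is omitted, and the `mmyolo::...` config path is made up); it just shows why the lookup fails until the entry exists.

```python
PKG2PROJECT = {
    'mmdet': 'mmdet',
    'mmyolo': 'mmyolo',   # entry added by the patch; remove it to see the assert fire
}

def get_package_and_cfg_path(cfg_path: str):
    # Simplified version of mmengine's helper: split "package::relative/path".
    package, _, rel_path = cfg_path.partition('::')
    assert package in PKG2PROJECT, f'unsupported external package: {package}'
    return PKG2PROJECT[package], rel_path

print(get_package_and_cfg_path('mmyolo::yolov5/some_config.py'))
```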
gh_patches_debug_18337
|
rasdani/github-patches
|
git_diff
|
microsoft__botbuilder-python-1220
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Complete the aiohttp ApplicationInsights implementation
See also #673
</issue>
<code>
[start of libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3 """Application Insights Telemetry Client for Bots."""
4
5 import traceback
6 from typing import Dict, Callable
7
8 from applicationinsights import TelemetryClient # pylint: disable=no-name-in-module
9 from botbuilder.core.bot_telemetry_client import (
10 BotTelemetryClient,
11 Severity,
12 TelemetryDataPointType,
13 )
14
15 from .bot_telemetry_processor import BotTelemetryProcessor
16
17
18 def bot_telemetry_processor(data, context) -> bool:
19 """Bot Telemetry Processor as a method for backward compatibility. Refer to
20 callable object :class:`BotTelemetryProcessor` for details.
21
22 :param data: Data from Application Insights
23 :type data: telemetry item
24 :param context: Context from Application Insights
25 :type context: context object
26 :return: determines if the event is passed to the server (False = Filtered).
27 :rtype: bool
28 """
29 processor = BotTelemetryProcessor()
30 return processor(data, context)
31
32
33 class ApplicationInsightsTelemetryClient(BotTelemetryClient):
34 """Application Insights Telemetry Client."""
35
36 def __init__(
37 self,
38 instrumentation_key: str,
39 telemetry_client: TelemetryClient = None,
40 telemetry_processor: Callable[[object, object], bool] = None,
41 ):
42 self._instrumentation_key = instrumentation_key
43 self._client = (
44 telemetry_client
45 if telemetry_client is not None
46 else TelemetryClient(self._instrumentation_key)
47 )
48 # Telemetry Processor
49 processor = (
50 telemetry_processor
51 if telemetry_processor is not None
52 else bot_telemetry_processor
53 )
54 self._client.add_telemetry_processor(processor)
55
56 def track_pageview(
57 self,
58 name: str,
59 url: str,
60 duration: int = 0,
61 properties: Dict[str, object] = None,
62 measurements: Dict[str, object] = None,
63 ) -> None:
64 """
65 Send information about the page viewed in the application (a web page for instance).
66 :param name: the name of the page that was viewed.
67 :param url: the URL of the page that was viewed.
68 :param duration: the duration of the page view in milliseconds. (defaults to: 0)
69 :param properties: the set of custom properties the client wants attached to this data item.
70 (defaults to: None)
71 :param measurements: the set of custom measurements the client wants to attach to this data item.
72 (defaults to: None)
73 """
74 self._client.track_pageview(name, url, duration, properties, measurements)
75
76 def track_exception(
77 self,
78 exception_type: type = None,
79 value: Exception = None,
80 trace: traceback = None,
81 properties: Dict[str, object] = None,
82 measurements: Dict[str, object] = None,
83 ) -> None:
84 """
85 Send information about a single exception that occurred in the application.
86 :param exception_type: the type of the exception that was thrown.
87 :param value: the exception that the client wants to send.
88 :param trace: the traceback information as returned by :func:`sys.exc_info`.
89 :param properties: the set of custom properties the client wants attached to this data item.
90 (defaults to: None)
91 :param measurements: the set of custom measurements the client wants to attach to this data item.
92 (defaults to: None)
93 """
94 self._client.track_exception(
95 exception_type, value, trace, properties, measurements
96 )
97
98 def track_event(
99 self,
100 name: str,
101 properties: Dict[str, object] = None,
102 measurements: Dict[str, object] = None,
103 ) -> None:
104 """
105 Send information about a single event that has occurred in the context of the application.
106 :param name: the data to associate to this event.
107 :param properties: the set of custom properties the client wants attached to this data item.
108 (defaults to: None)
109 :param measurements: the set of custom measurements the client wants to attach to this data item.
110 (defaults to: None)
111 """
112 self._client.track_event(name, properties=properties, measurements=measurements)
113
114 def track_metric(
115 self,
116 name: str,
117 value: float,
118 tel_type: TelemetryDataPointType = None,
119 count: int = None,
120 min_val: float = None,
121 max_val: float = None,
122 std_dev: float = None,
123 properties: Dict[str, object] = None,
124 ) -> NotImplemented:
125 """
126 Send information about a single metric data point that was captured for the application.
127 :param name: The name of the metric that was captured.
128 :param value: The value of the metric that was captured.
129 :param tel_type: The type of the metric. (defaults to: TelemetryDataPointType.aggregation`)
130 :param count: the number of metrics that were aggregated into this data point. (defaults to: None)
131 :param min_val: the minimum of all metrics collected that were aggregated into this data point.
132 (defaults to: None)
133 :param max_val: the maximum of all metrics collected that were aggregated into this data point.
134 (defaults to: None)
135 :param std_dev: the standard deviation of all metrics collected that were aggregated into this data point.
136 (defaults to: None)
137 :param properties: the set of custom properties the client wants attached to this data item.
138 (defaults to: None)
139 """
140 self._client.track_metric(
141 name, value, tel_type, count, min_val, max_val, std_dev, properties
142 )
143
144 def track_trace(
145 self, name: str, properties: Dict[str, object] = None, severity: Severity = None
146 ):
147 """
148 Sends a single trace statement.
149 :param name: the trace statement.
150 :param properties: the set of custom properties the client wants attached to this data item. (defaults to: None)
151 :param severity: the severity level of this trace, one of DEBUG, INFO, WARNING, ERROR, CRITICAL
152 """
153 self._client.track_trace(name, properties, severity)
154
155 def track_request(
156 self,
157 name: str,
158 url: str,
159 success: bool,
160 start_time: str = None,
161 duration: int = None,
162 response_code: str = None,
163 http_method: str = None,
164 properties: Dict[str, object] = None,
165 measurements: Dict[str, object] = None,
166 request_id: str = None,
167 ):
168 """
169 Sends a single request that was captured for the application.
170 :param name: The name for this request. All requests with the same name will be grouped together.
171 :param url: The actual URL for this request (to show in individual request instances).
172 :param success: True if the request ended in success, False otherwise.
173 :param start_time: the start time of the request. The value should look the same as the one returned by
174 :func:`datetime.isoformat`. (defaults to: None)
175 :param duration: the number of milliseconds that this request lasted. (defaults to: None)
176 :param response_code: the response code that this request returned. (defaults to: None)
177 :param http_method: the HTTP method that triggered this request. (defaults to: None)
178 :param properties: the set of custom properties the client wants attached to this data item.
179 (defaults to: None)
180 :param measurements: the set of custom measurements the client wants to attach to this data item.
181 (defaults to: None)
182 :param request_id: the id for this request. If None, a new uuid will be generated. (defaults to: None)
183 """
184 self._client.track_request(
185 name,
186 url,
187 success,
188 start_time,
189 duration,
190 response_code,
191 http_method,
192 properties,
193 measurements,
194 request_id,
195 )
196
197 def track_dependency(
198 self,
199 name: str,
200 data: str,
201 type_name: str = None,
202 target: str = None,
203 duration: int = None,
204 success: bool = None,
205 result_code: str = None,
206 properties: Dict[str, object] = None,
207 measurements: Dict[str, object] = None,
208 dependency_id: str = None,
209 ):
210 """
211 Sends a single dependency telemetry that was captured for the application.
212 :param name: the name of the command initiated with this dependency call. Low cardinality value.
213 Examples are stored procedure name and URL path template.
214 :param data: the command initiated by this dependency call.
215 Examples are SQL statement and HTTP URL with all query parameters.
216 :param type_name: the dependency type name. Low cardinality value for logical grouping of dependencies and
217 interpretation of other fields like commandName and resultCode. Examples are SQL, Azure table, and HTTP.
218 (default to: None)
219 :param target: the target site of a dependency call. Examples are server name, host address.
220 (default to: None)
221 :param duration: the number of milliseconds that this dependency call lasted.
222 (defaults to: None)
223 :param success: true if the dependency call ended in success, false otherwise.
224 (defaults to: None)
225 :param result_code: the result code of a dependency call. Examples are SQL error code and HTTP status code.
226 (defaults to: None)
227 :param properties: the set of custom properties the client wants attached to this data item. (defaults to: None)
228 :param measurements: the set of custom measurements the client wants to attach to this data item.
229 (defaults to: None)
230 :param id: the id for this dependency call. If None, a new uuid will be generated. (defaults to: None)
231 """
232 self._client.track_dependency(
233 name,
234 data,
235 type_name,
236 target,
237 duration,
238 success,
239 result_code,
240 properties,
241 measurements,
242 dependency_id,
243 )
244
245 def flush(self):
246 """Flushes data in the queue. Data in the queue will be sent either immediately irrespective of what sender is
247 being used.
248 """
249 self._client.flush()
250
[end of libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py b/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py
--- a/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py
+++ b/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py
@@ -38,13 +38,18 @@
instrumentation_key: str,
telemetry_client: TelemetryClient = None,
telemetry_processor: Callable[[object, object], bool] = None,
+ client_queue_size: int = None,
):
self._instrumentation_key = instrumentation_key
+
self._client = (
telemetry_client
if telemetry_client is not None
else TelemetryClient(self._instrumentation_key)
)
+ if client_queue_size:
+ self._client.channel.queue.max_queue_length = client_queue_size
+
# Telemetry Processor
processor = (
telemetry_processor
|
{"golden_diff": "diff --git a/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py b/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py\n--- a/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py\n+++ b/libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py\n@@ -38,13 +38,18 @@\n instrumentation_key: str,\n telemetry_client: TelemetryClient = None,\n telemetry_processor: Callable[[object, object], bool] = None,\n+ client_queue_size: int = None,\n ):\n self._instrumentation_key = instrumentation_key\n+\n self._client = (\n telemetry_client\n if telemetry_client is not None\n else TelemetryClient(self._instrumentation_key)\n )\n+ if client_queue_size:\n+ self._client.channel.queue.max_queue_length = client_queue_size\n+\n # Telemetry Processor\n processor = (\n telemetry_processor\n", "issue": "Complete the aiohttp ApplicationInsights implementation\nSee also #673 \n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\"\"\"Application Insights Telemetry Client for Bots.\"\"\"\n\nimport traceback\nfrom typing import Dict, Callable\n\nfrom applicationinsights import TelemetryClient # pylint: disable=no-name-in-module\nfrom botbuilder.core.bot_telemetry_client import (\n BotTelemetryClient,\n Severity,\n TelemetryDataPointType,\n)\n\nfrom .bot_telemetry_processor import BotTelemetryProcessor\n\n\ndef bot_telemetry_processor(data, context) -> bool:\n \"\"\"Bot Telemetry Processor as a method for backward compatibility. Refer to\n callable object :class:`BotTelemetryProcessor` for details.\n\n :param data: Data from Application Insights\n :type data: telemetry item\n :param context: Context from Application Insights\n :type context: context object\n :return: determines if the event is passed to the server (False = Filtered).\n :rtype: bool\n \"\"\"\n processor = BotTelemetryProcessor()\n return processor(data, context)\n\n\nclass ApplicationInsightsTelemetryClient(BotTelemetryClient):\n \"\"\"Application Insights Telemetry Client.\"\"\"\n\n def __init__(\n self,\n instrumentation_key: str,\n telemetry_client: TelemetryClient = None,\n telemetry_processor: Callable[[object, object], bool] = None,\n ):\n self._instrumentation_key = instrumentation_key\n self._client = (\n telemetry_client\n if telemetry_client is not None\n else TelemetryClient(self._instrumentation_key)\n )\n # Telemetry Processor\n processor = (\n telemetry_processor\n if telemetry_processor is not None\n else bot_telemetry_processor\n )\n self._client.add_telemetry_processor(processor)\n\n def track_pageview(\n self,\n name: str,\n url: str,\n duration: int = 0,\n properties: Dict[str, object] = None,\n measurements: Dict[str, object] = None,\n ) -> None:\n \"\"\"\n Send information about the page viewed in the application (a web page for instance).\n :param name: the name of the page that was viewed.\n :param url: the URL of the page that was viewed.\n :param duration: the duration of the page view in milliseconds. 
(defaults to: 0)\n :param properties: the set of custom properties the client wants attached to this data item.\n (defaults to: None)\n :param measurements: the set of custom measurements the client wants to attach to this data item.\n (defaults to: None)\n \"\"\"\n self._client.track_pageview(name, url, duration, properties, measurements)\n\n def track_exception(\n self,\n exception_type: type = None,\n value: Exception = None,\n trace: traceback = None,\n properties: Dict[str, object] = None,\n measurements: Dict[str, object] = None,\n ) -> None:\n \"\"\"\n Send information about a single exception that occurred in the application.\n :param exception_type: the type of the exception that was thrown.\n :param value: the exception that the client wants to send.\n :param trace: the traceback information as returned by :func:`sys.exc_info`.\n :param properties: the set of custom properties the client wants attached to this data item.\n (defaults to: None)\n :param measurements: the set of custom measurements the client wants to attach to this data item.\n (defaults to: None)\n \"\"\"\n self._client.track_exception(\n exception_type, value, trace, properties, measurements\n )\n\n def track_event(\n self,\n name: str,\n properties: Dict[str, object] = None,\n measurements: Dict[str, object] = None,\n ) -> None:\n \"\"\"\n Send information about a single event that has occurred in the context of the application.\n :param name: the data to associate to this event.\n :param properties: the set of custom properties the client wants attached to this data item.\n (defaults to: None)\n :param measurements: the set of custom measurements the client wants to attach to this data item.\n (defaults to: None)\n \"\"\"\n self._client.track_event(name, properties=properties, measurements=measurements)\n\n def track_metric(\n self,\n name: str,\n value: float,\n tel_type: TelemetryDataPointType = None,\n count: int = None,\n min_val: float = None,\n max_val: float = None,\n std_dev: float = None,\n properties: Dict[str, object] = None,\n ) -> NotImplemented:\n \"\"\"\n Send information about a single metric data point that was captured for the application.\n :param name: The name of the metric that was captured.\n :param value: The value of the metric that was captured.\n :param tel_type: The type of the metric. (defaults to: TelemetryDataPointType.aggregation`)\n :param count: the number of metrics that were aggregated into this data point. (defaults to: None)\n :param min_val: the minimum of all metrics collected that were aggregated into this data point.\n (defaults to: None)\n :param max_val: the maximum of all metrics collected that were aggregated into this data point.\n (defaults to: None)\n :param std_dev: the standard deviation of all metrics collected that were aggregated into this data point.\n (defaults to: None)\n :param properties: the set of custom properties the client wants attached to this data item.\n (defaults to: None)\n \"\"\"\n self._client.track_metric(\n name, value, tel_type, count, min_val, max_val, std_dev, properties\n )\n\n def track_trace(\n self, name: str, properties: Dict[str, object] = None, severity: Severity = None\n ):\n \"\"\"\n Sends a single trace statement.\n :param name: the trace statement.\n :param properties: the set of custom properties the client wants attached to this data item. 
(defaults to: None)\n :param severity: the severity level of this trace, one of DEBUG, INFO, WARNING, ERROR, CRITICAL\n \"\"\"\n self._client.track_trace(name, properties, severity)\n\n def track_request(\n self,\n name: str,\n url: str,\n success: bool,\n start_time: str = None,\n duration: int = None,\n response_code: str = None,\n http_method: str = None,\n properties: Dict[str, object] = None,\n measurements: Dict[str, object] = None,\n request_id: str = None,\n ):\n \"\"\"\n Sends a single request that was captured for the application.\n :param name: The name for this request. All requests with the same name will be grouped together.\n :param url: The actual URL for this request (to show in individual request instances).\n :param success: True if the request ended in success, False otherwise.\n :param start_time: the start time of the request. The value should look the same as the one returned by\n :func:`datetime.isoformat`. (defaults to: None)\n :param duration: the number of milliseconds that this request lasted. (defaults to: None)\n :param response_code: the response code that this request returned. (defaults to: None)\n :param http_method: the HTTP method that triggered this request. (defaults to: None)\n :param properties: the set of custom properties the client wants attached to this data item.\n (defaults to: None)\n :param measurements: the set of custom measurements the client wants to attach to this data item.\n (defaults to: None)\n :param request_id: the id for this request. If None, a new uuid will be generated. (defaults to: None)\n \"\"\"\n self._client.track_request(\n name,\n url,\n success,\n start_time,\n duration,\n response_code,\n http_method,\n properties,\n measurements,\n request_id,\n )\n\n def track_dependency(\n self,\n name: str,\n data: str,\n type_name: str = None,\n target: str = None,\n duration: int = None,\n success: bool = None,\n result_code: str = None,\n properties: Dict[str, object] = None,\n measurements: Dict[str, object] = None,\n dependency_id: str = None,\n ):\n \"\"\"\n Sends a single dependency telemetry that was captured for the application.\n :param name: the name of the command initiated with this dependency call. Low cardinality value.\n Examples are stored procedure name and URL path template.\n :param data: the command initiated by this dependency call.\n Examples are SQL statement and HTTP URL with all query parameters.\n :param type_name: the dependency type name. Low cardinality value for logical grouping of dependencies and\n interpretation of other fields like commandName and resultCode. Examples are SQL, Azure table, and HTTP.\n (default to: None)\n :param target: the target site of a dependency call. Examples are server name, host address.\n (default to: None)\n :param duration: the number of milliseconds that this dependency call lasted.\n (defaults to: None)\n :param success: true if the dependency call ended in success, false otherwise.\n (defaults to: None)\n :param result_code: the result code of a dependency call. Examples are SQL error code and HTTP status code.\n (defaults to: None)\n :param properties: the set of custom properties the client wants attached to this data item. (defaults to: None)\n :param measurements: the set of custom measurements the client wants to attach to this data item.\n (defaults to: None)\n :param id: the id for this dependency call. If None, a new uuid will be generated. 
(defaults to: None)\n \"\"\"\n self._client.track_dependency(\n name,\n data,\n type_name,\n target,\n duration,\n success,\n result_code,\n properties,\n measurements,\n dependency_id,\n )\n\n def flush(self):\n \"\"\"Flushes data in the queue. Data in the queue will be sent either immediately irrespective of what sender is\n being used.\n \"\"\"\n self._client.flush()\n", "path": "libraries/botbuilder-applicationinsights/botbuilder/applicationinsights/application_insights_telemetry_client.py"}]}
| 3,425 | 236 |
gh_patches_debug_22464
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-5371
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Task cleanup must not delete content nor artifacts
Deleting content or artifacts outside of orphan cleanup is breaking the rules.
And no, we cannot get away with that.
</issue>
<code>
[start of pulpcore/tasking/_util.py]
1 import asyncio
2 import importlib
3 import logging
4 import os
5 import resource
6 import signal
7 import sys
8 import threading
9 import time
10 from gettext import gettext as _
11
12 from django.conf import settings
13 from django.db import connection, transaction
14 from django.db.models import Q
15 from django.utils import timezone
16 from django_guid import set_guid
17 from django_guid.utils import generate_guid
18 from pulpcore.app.models import Task, TaskSchedule
19 from pulpcore.app.role_util import get_users_with_perms
20 from pulpcore.app.util import set_current_user, set_domain, configure_analytics, configure_cleanup
21 from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP
22 from pulpcore.exceptions import AdvisoryLockError
23 from pulpcore.tasking.tasks import dispatch, execute_task
24
25 _logger = logging.getLogger(__name__)
26
27
28 class PGAdvisoryLock:
29 """
30 A context manager that will hold a postgres advisory lock non-blocking.
31
32 The locks can be chosen from a lock group to avoid collisions. They will never collide with the
33 locks used for tasks.
34 """
35
36 def __init__(self, lock, lock_group=0):
37 self.lock_group = lock_group
38 self.lock = lock
39
40 def __enter__(self):
41 with connection.cursor() as cursor:
42 cursor.execute("SELECT pg_try_advisory_lock(%s, %s)", [self.lock_group, self.lock])
43 acquired = cursor.fetchone()[0]
44 if not acquired:
45 raise AdvisoryLockError("Could not acquire lock.")
46 return self
47
48 def __exit__(self, exc_type, exc_value, traceback):
49 with connection.cursor() as cursor:
50 cursor.execute("SELECT pg_advisory_unlock(%s, %s)", [self.lock_group, self.lock])
51 released = cursor.fetchone()[0]
52 if not released:
53 raise RuntimeError("Lock not held.")
54
55
56 def startup_hook():
57 configure_analytics()
58 configure_cleanup()
59
60
61 def delete_incomplete_resources(task):
62 """
63 Delete all incomplete created-resources on a canceled task.
64
65 Args:
66 task (Task): A task.
67 """
68 if task.state != TASK_STATES.CANCELING:
69 raise RuntimeError(_("Task must be canceling."))
70 for model in (r.content_object for r in task.created_resources.all()):
71 try:
72 if model.complete:
73 continue
74 except AttributeError:
75 continue
76 try:
77 with transaction.atomic():
78 model.delete()
79 except Exception as error:
80 _logger.error(_("Delete created resource, failed: {}").format(str(error)))
81
82
83 def write_memory_usage(path):
84 _logger.info("Writing task memory data to {}".format(path))
85
86 with open(path, "w") as file:
87 file.write("# Seconds\tMemory in MB\n")
88 seconds = 0
89 while True:
90 current_mb_in_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024
91 file.write(f"{seconds}\t{current_mb_in_use:.2f}\n")
92 file.flush()
93 time.sleep(5)
94 seconds += 5
95
96
97 def child_signal_handler(sig, frame):
98 _logger.debug("Signal %s recieved by %s.", sig, os.getpid())
99 # Reset signal handlers to default
100 # If you kill the process a second time it's not graceful anymore.
101 signal.signal(signal.SIGINT, signal.SIG_DFL)
102 signal.signal(signal.SIGTERM, signal.SIG_DFL)
103 signal.signal(signal.SIGHUP, signal.SIG_DFL)
104 signal.signal(signal.SIGUSR1, signal.SIG_DFL)
105
106 if sig == signal.SIGUSR1:
107 sys.exit()
108
109
110 def perform_task(task_pk, task_working_dir_rel_path):
111 """Setup the environment to handle a task and execute it.
112 This must be called as a subprocess, while the parent holds the advisory lock of the task."""
113 signal.signal(signal.SIGINT, child_signal_handler)
114 signal.signal(signal.SIGTERM, child_signal_handler)
115 signal.signal(signal.SIGHUP, child_signal_handler)
116 signal.signal(signal.SIGUSR1, child_signal_handler)
117 if settings.TASK_DIAGNOSTICS:
118 diagnostics_dir = VAR_TMP_PULP / str(task_pk)
119 diagnostics_dir.mkdir(parents=True, exist_ok=True)
120 mem_diagnostics_path = diagnostics_dir / "memory.datum"
121 # It would be better to have this recording happen in the parent process instead of here
122 # https://github.com/pulp/pulpcore/issues/2337
123 mem_diagnostics_thread = threading.Thread(
124 target=write_memory_usage, args=(mem_diagnostics_path,), daemon=True
125 )
126 mem_diagnostics_thread.start()
127 # All processes need to create their own postgres connection
128 connection.connection = None
129 task = Task.objects.select_related("pulp_domain").get(pk=task_pk)
130 user = get_users_with_perms(task, with_group_users=False).first()
131 # Isolate from the parent asyncio.
132 asyncio.set_event_loop(asyncio.new_event_loop())
133 # Set current contexts
134 set_guid(task.logging_cid)
135 set_current_user(user)
136 set_domain(task.pulp_domain)
137 os.chdir(task_working_dir_rel_path)
138
139 # set up profiling
140 if settings.TASK_DIAGNOSTICS and importlib.util.find_spec("pyinstrument") is not None:
141 from pyinstrument import Profiler
142
143 with Profiler() as profiler:
144 execute_task(task)
145
146 profile_file = diagnostics_dir / "pyinstrument.html"
147 _logger.info("Writing task profile data to {}".format(profile_file))
148 with open(profile_file, "w+") as f:
149 f.write(profiler.output_html())
150 else:
151 execute_task(task)
152
153
154 def dispatch_scheduled_tasks():
155 # Warning, dispatch_scheduled_tasks is not race condition free!
156 now = timezone.now()
157 # Dispatch all tasks old enough and not still running
158 for task_schedule in TaskSchedule.objects.filter(next_dispatch__lte=now).filter(
159 Q(last_task=None) | Q(last_task__state__in=TASK_FINAL_STATES)
160 ):
161 try:
162 if task_schedule.dispatch_interval is None:
163 # This was a timed one shot task schedule
164 task_schedule.next_dispatch = None
165 else:
166 # This is a recurring task schedule
167 while task_schedule.next_dispatch < now:
168 # Do not schedule in the past
169 task_schedule.next_dispatch += task_schedule.dispatch_interval
170 set_guid(generate_guid())
171 with transaction.atomic():
172 task_schedule.last_task = dispatch(
173 task_schedule.task_name,
174 )
175 task_schedule.save(update_fields=["next_dispatch", "last_task"])
176
177 _logger.info(
178 "Dispatched scheduled task {task_name} as task id {task_id}".format(
179 task_name=task_schedule.task_name, task_id=task_schedule.last_task.pk
180 )
181 )
182 except Exception as e:
183 _logger.warning(
184 "Dispatching scheduled task {task_name} failed. {error}".format(
185 task_name=task_schedule.task_name, error=str(e)
186 )
187 )
188
[end of pulpcore/tasking/_util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py
--- a/pulpcore/tasking/_util.py
+++ b/pulpcore/tasking/_util.py
@@ -15,7 +15,7 @@
from django.utils import timezone
from django_guid import set_guid
from django_guid.utils import generate_guid
-from pulpcore.app.models import Task, TaskSchedule
+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule
from pulpcore.app.role_util import get_users_with_perms
from pulpcore.app.util import set_current_user, set_domain, configure_analytics, configure_cleanup
from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP
@@ -68,6 +68,8 @@
if task.state != TASK_STATES.CANCELING:
raise RuntimeError(_("Task must be canceling."))
for model in (r.content_object for r in task.created_resources.all()):
+ if isinstance(model, (Artifact, Content)):
+ continue
try:
if model.complete:
continue
|
{"golden_diff": "diff --git a/pulpcore/tasking/_util.py b/pulpcore/tasking/_util.py\n--- a/pulpcore/tasking/_util.py\n+++ b/pulpcore/tasking/_util.py\n@@ -15,7 +15,7 @@\n from django.utils import timezone\n from django_guid import set_guid\n from django_guid.utils import generate_guid\n-from pulpcore.app.models import Task, TaskSchedule\n+from pulpcore.app.models import Artifact, Content, Task, TaskSchedule\n from pulpcore.app.role_util import get_users_with_perms\n from pulpcore.app.util import set_current_user, set_domain, configure_analytics, configure_cleanup\n from pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP\n@@ -68,6 +68,8 @@\n if task.state != TASK_STATES.CANCELING:\n raise RuntimeError(_(\"Task must be canceling.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n+ if isinstance(model, (Artifact, Content)):\n+ continue\n try:\n if model.complete:\n continue\n", "issue": "Task cleanup must not delete content nor artifacts\nDeleting content or artifacts outside of orphan cleanup is breaking the rules.\r\nAnd no, we cannot get away with that.\r\n\n", "before_files": [{"content": "import asyncio\nimport importlib\nimport logging\nimport os\nimport resource\nimport signal\nimport sys\nimport threading\nimport time\nfrom gettext import gettext as _\n\nfrom django.conf import settings\nfrom django.db import connection, transaction\nfrom django.db.models import Q\nfrom django.utils import timezone\nfrom django_guid import set_guid\nfrom django_guid.utils import generate_guid\nfrom pulpcore.app.models import Task, TaskSchedule\nfrom pulpcore.app.role_util import get_users_with_perms\nfrom pulpcore.app.util import set_current_user, set_domain, configure_analytics, configure_cleanup\nfrom pulpcore.constants import TASK_FINAL_STATES, TASK_STATES, VAR_TMP_PULP\nfrom pulpcore.exceptions import AdvisoryLockError\nfrom pulpcore.tasking.tasks import dispatch, execute_task\n\n_logger = logging.getLogger(__name__)\n\n\nclass PGAdvisoryLock:\n \"\"\"\n A context manager that will hold a postgres advisory lock non-blocking.\n\n The locks can be chosen from a lock group to avoid collisions. 
They will never collide with the\n locks used for tasks.\n \"\"\"\n\n def __init__(self, lock, lock_group=0):\n self.lock_group = lock_group\n self.lock = lock\n\n def __enter__(self):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_try_advisory_lock(%s, %s)\", [self.lock_group, self.lock])\n acquired = cursor.fetchone()[0]\n if not acquired:\n raise AdvisoryLockError(\"Could not acquire lock.\")\n return self\n\n def __exit__(self, exc_type, exc_value, traceback):\n with connection.cursor() as cursor:\n cursor.execute(\"SELECT pg_advisory_unlock(%s, %s)\", [self.lock_group, self.lock])\n released = cursor.fetchone()[0]\n if not released:\n raise RuntimeError(\"Lock not held.\")\n\n\ndef startup_hook():\n configure_analytics()\n configure_cleanup()\n\n\ndef delete_incomplete_resources(task):\n \"\"\"\n Delete all incomplete created-resources on a canceled task.\n\n Args:\n task (Task): A task.\n \"\"\"\n if task.state != TASK_STATES.CANCELING:\n raise RuntimeError(_(\"Task must be canceling.\"))\n for model in (r.content_object for r in task.created_resources.all()):\n try:\n if model.complete:\n continue\n except AttributeError:\n continue\n try:\n with transaction.atomic():\n model.delete()\n except Exception as error:\n _logger.error(_(\"Delete created resource, failed: {}\").format(str(error)))\n\n\ndef write_memory_usage(path):\n _logger.info(\"Writing task memory data to {}\".format(path))\n\n with open(path, \"w\") as file:\n file.write(\"# Seconds\\tMemory in MB\\n\")\n seconds = 0\n while True:\n current_mb_in_use = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024\n file.write(f\"{seconds}\\t{current_mb_in_use:.2f}\\n\")\n file.flush()\n time.sleep(5)\n seconds += 5\n\n\ndef child_signal_handler(sig, frame):\n _logger.debug(\"Signal %s recieved by %s.\", sig, os.getpid())\n # Reset signal handlers to default\n # If you kill the process a second time it's not graceful anymore.\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n signal.signal(signal.SIGTERM, signal.SIG_DFL)\n signal.signal(signal.SIGHUP, signal.SIG_DFL)\n signal.signal(signal.SIGUSR1, signal.SIG_DFL)\n\n if sig == signal.SIGUSR1:\n sys.exit()\n\n\ndef perform_task(task_pk, task_working_dir_rel_path):\n \"\"\"Setup the environment to handle a task and execute it.\n This must be called as a subprocess, while the parent holds the advisory lock of the task.\"\"\"\n signal.signal(signal.SIGINT, child_signal_handler)\n signal.signal(signal.SIGTERM, child_signal_handler)\n signal.signal(signal.SIGHUP, child_signal_handler)\n signal.signal(signal.SIGUSR1, child_signal_handler)\n if settings.TASK_DIAGNOSTICS:\n diagnostics_dir = VAR_TMP_PULP / str(task_pk)\n diagnostics_dir.mkdir(parents=True, exist_ok=True)\n mem_diagnostics_path = diagnostics_dir / \"memory.datum\"\n # It would be better to have this recording happen in the parent process instead of here\n # https://github.com/pulp/pulpcore/issues/2337\n mem_diagnostics_thread = threading.Thread(\n target=write_memory_usage, args=(mem_diagnostics_path,), daemon=True\n )\n mem_diagnostics_thread.start()\n # All processes need to create their own postgres connection\n connection.connection = None\n task = Task.objects.select_related(\"pulp_domain\").get(pk=task_pk)\n user = get_users_with_perms(task, with_group_users=False).first()\n # Isolate from the parent asyncio.\n asyncio.set_event_loop(asyncio.new_event_loop())\n # Set current contexts\n set_guid(task.logging_cid)\n set_current_user(user)\n set_domain(task.pulp_domain)\n 
os.chdir(task_working_dir_rel_path)\n\n # set up profiling\n if settings.TASK_DIAGNOSTICS and importlib.util.find_spec(\"pyinstrument\") is not None:\n from pyinstrument import Profiler\n\n with Profiler() as profiler:\n execute_task(task)\n\n profile_file = diagnostics_dir / \"pyinstrument.html\"\n _logger.info(\"Writing task profile data to {}\".format(profile_file))\n with open(profile_file, \"w+\") as f:\n f.write(profiler.output_html())\n else:\n execute_task(task)\n\n\ndef dispatch_scheduled_tasks():\n # Warning, dispatch_scheduled_tasks is not race condition free!\n now = timezone.now()\n # Dispatch all tasks old enough and not still running\n for task_schedule in TaskSchedule.objects.filter(next_dispatch__lte=now).filter(\n Q(last_task=None) | Q(last_task__state__in=TASK_FINAL_STATES)\n ):\n try:\n if task_schedule.dispatch_interval is None:\n # This was a timed one shot task schedule\n task_schedule.next_dispatch = None\n else:\n # This is a recurring task schedule\n while task_schedule.next_dispatch < now:\n # Do not schedule in the past\n task_schedule.next_dispatch += task_schedule.dispatch_interval\n set_guid(generate_guid())\n with transaction.atomic():\n task_schedule.last_task = dispatch(\n task_schedule.task_name,\n )\n task_schedule.save(update_fields=[\"next_dispatch\", \"last_task\"])\n\n _logger.info(\n \"Dispatched scheduled task {task_name} as task id {task_id}\".format(\n task_name=task_schedule.task_name, task_id=task_schedule.last_task.pk\n )\n )\n except Exception as e:\n _logger.warning(\n \"Dispatching scheduled task {task_name} failed. {error}\".format(\n task_name=task_schedule.task_name, error=str(e)\n )\n )\n", "path": "pulpcore/tasking/_util.py"}]}
| 2,509 | 230 |
gh_patches_debug_1507
|
rasdani/github-patches
|
git_diff
|
keras-team__autokeras-1285
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
How use multiple gpu?
### Feature Description
I want to use a single machine with multiple GPUs for training, but it seems to have no actual effect.
### Code Example
```python
with strategy.scope():
```
### Reason
Speed up the calculation of toxins
### Solution
<!---
Please tell us how to implement the feature,
if you have one in mind.
-->
</issue>
<code>
[start of autokeras/graph.py]
1 # Copyright 2020 The AutoKeras Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import kerastuner
16 import tensorflow as tf
17 from tensorflow.python.util import nest
18
19 from autokeras import blocks as blocks_module
20 from autokeras import nodes as nodes_module
21 from autokeras.engine import head as head_module
22 from autokeras.engine import serializable
23 from autokeras.utils import utils
24
25
26 def feature_encoding_input(block):
27 """Fetch the column_types and column_names.
28
29 The values are fetched for FeatureEncoding from StructuredDataInput.
30 """
31 if not isinstance(block.inputs[0], nodes_module.StructuredDataInput):
32 raise TypeError(
33 "CategoricalToNumerical can only be used with StructuredDataInput."
34 )
35 block.column_types = block.inputs[0].column_types
36 block.column_names = block.inputs[0].column_names
37
38
39 # Compile the graph.
40 COMPILE_FUNCTIONS = {
41 blocks_module.StructuredDataBlock: [feature_encoding_input],
42 blocks_module.CategoricalToNumerical: [feature_encoding_input],
43 }
44
45
46 def load_graph(filepath, custom_objects=None):
47 if custom_objects is None:
48 custom_objects = {}
49 with tf.keras.utils.custom_object_scope(custom_objects):
50 return Graph.from_config(utils.load_json(filepath))
51
52
53 class Graph(kerastuner.HyperModel, serializable.Serializable):
54 """A graph consists of connected Blocks, or Heads.
55
56 # Arguments
57 inputs: A list of input node(s) for the Graph.
58 outputs: A list of output node(s) for the Graph.
59 override_hps: A list of HyperParameters. The predefined HyperParameters that
60 will override the space of the Hyperparameters defined in the Hypermodels
61 with the same names.
62 """
63
64 def __init__(self, inputs=None, outputs=None, override_hps=None):
65 super().__init__()
66 self.inputs = nest.flatten(inputs)
67 self.outputs = nest.flatten(outputs)
68 self._node_to_id = {}
69 self._nodes = []
70 self.blocks = []
71 self._block_to_id = {}
72 if inputs and outputs:
73 self._build_network()
74 self.override_hps = override_hps or []
75
76 def compile(self):
77 """Share the information between blocks."""
78 for block in self.blocks:
79 for func in COMPILE_FUNCTIONS.get(block.__class__, []):
80 func(block)
81
82 def _register_hps(self, hp):
83 """Register the override HyperParameters for current HyperParameters."""
84 for single_hp in self.override_hps:
85 name = single_hp.name
86 if name not in hp.values:
87 hp._register(single_hp)
88 hp.values[name] = single_hp.default
89
90 def _build_network(self):
91 self._node_to_id = {}
92
93 # Recursively find all the interested nodes.
94 for input_node in self.inputs:
95 self._search_network(input_node, self.outputs, set(), set())
96 self._nodes = sorted(
97 list(self._node_to_id.keys()), key=lambda x: self._node_to_id[x]
98 )
99
100 for node in self.inputs + self.outputs:
101 if node not in self._node_to_id:
102 raise ValueError("Inputs and outputs not connected.")
103
104 # Find the blocks.
105 blocks = []
106 for input_node in self._nodes:
107 for block in input_node.out_blocks:
108 if (
109 any(
110 [
111 output_node in self._node_to_id
112 for output_node in block.outputs
113 ]
114 )
115 and block not in blocks
116 ):
117 blocks.append(block)
118
119 # Check if all the inputs of the blocks are set as inputs.
120 for block in blocks:
121 for input_node in block.inputs:
122 if input_node not in self._node_to_id:
123 raise ValueError(
124 "A required input is missing for HyperModel "
125 "{name}.".format(name=block.name)
126 )
127
128 # Calculate the in degree of all the nodes
129 in_degree = [0] * len(self._nodes)
130 for node_id, node in enumerate(self._nodes):
131 in_degree[node_id] = len(
132 [block for block in node.in_blocks if block in blocks]
133 )
134
135 # Add the blocks in topological order.
136 self.blocks = []
137 self._block_to_id = {}
138 while len(blocks) != 0:
139 new_added = []
140
141 # Collect blocks with in degree 0.
142 for block in blocks:
143 if any([in_degree[self._node_to_id[node]] for node in block.inputs]):
144 continue
145 new_added.append(block)
146
147 # Remove the collected blocks from blocks.
148 for block in new_added:
149 blocks.remove(block)
150
151 for block in new_added:
152 # Add the collected blocks to the Graph.
153 self._add_block(block)
154
155 # Decrease the in degree of the output nodes.
156 for output_node in block.outputs:
157 output_node_id = self._node_to_id[output_node]
158 in_degree[output_node_id] -= 1
159
160 def _search_network(self, input_node, outputs, in_stack_nodes, visited_nodes):
161 visited_nodes.add(input_node)
162 in_stack_nodes.add(input_node)
163
164 outputs_reached = False
165 if input_node in outputs:
166 outputs_reached = True
167
168 for block in input_node.out_blocks:
169 for output_node in block.outputs:
170 if output_node in in_stack_nodes:
171 raise ValueError("The network has a cycle.")
172 if output_node not in visited_nodes:
173 self._search_network(
174 output_node, outputs, in_stack_nodes, visited_nodes
175 )
176 if output_node in self._node_to_id.keys():
177 outputs_reached = True
178
179 if outputs_reached:
180 self._add_node(input_node)
181
182 in_stack_nodes.remove(input_node)
183
184 def _add_block(self, block):
185 if block not in self.blocks:
186 block_id = len(self.blocks)
187 self._block_to_id[block] = block_id
188 self.blocks.append(block)
189
190 def _add_node(self, input_node):
191 if input_node not in self._node_to_id:
192 self._node_to_id[input_node] = len(self._node_to_id)
193
194 def get_config(self):
195 blocks = [blocks_module.serialize(block) for block in self.blocks]
196 nodes = {
197 str(self._node_to_id[node]): nodes_module.serialize(node)
198 for node in self.inputs
199 }
200 override_hps = [
201 kerastuner.engine.hyperparameters.serialize(hp)
202 for hp in self.override_hps
203 ]
204 block_inputs = {
205 str(block_id): [self._node_to_id[node] for node in block.inputs]
206 for block_id, block in enumerate(self.blocks)
207 }
208 block_outputs = {
209 str(block_id): [self._node_to_id[node] for node in block.outputs]
210 for block_id, block in enumerate(self.blocks)
211 }
212
213 outputs = [self._node_to_id[node] for node in self.outputs]
214
215 return {
216 "override_hps": override_hps, # List [serialized].
217 "blocks": blocks, # Dict {id: serialized}.
218 "nodes": nodes, # Dict {id: serialized}.
219 "outputs": outputs, # List of node_ids.
220 "block_inputs": block_inputs, # Dict {id: List of node_ids}.
221 "block_outputs": block_outputs, # Dict {id: List of node_ids}.
222 }
223
224 @classmethod
225 def from_config(cls, config):
226 blocks = [blocks_module.deserialize(block) for block in config["blocks"]]
227 nodes = {
228 int(node_id): nodes_module.deserialize(node)
229 for node_id, node in config["nodes"].items()
230 }
231 override_hps = [
232 kerastuner.engine.hyperparameters.deserialize(config)
233 for config in config["override_hps"]
234 ]
235
236 inputs = [nodes[node_id] for node_id in nodes]
237 for block_id, block in enumerate(blocks):
238 input_nodes = [
239 nodes[node_id] for node_id in config["block_inputs"][str(block_id)]
240 ]
241 output_nodes = nest.flatten(block(input_nodes))
242 for output_node, node_id in zip(
243 output_nodes, config["block_outputs"][str(block_id)]
244 ):
245 nodes[node_id] = output_node
246
247 outputs = [nodes[node_id] for node_id in config["outputs"]]
248 return cls(inputs=inputs, outputs=outputs, override_hps=override_hps)
249
250 def build(self, hp):
251 """Build the HyperModel into a Keras Model."""
252 tf.keras.backend.clear_session()
253 self._register_hps(hp)
254 self.compile()
255 real_nodes = {}
256 for input_node in self.inputs:
257 node_id = self._node_to_id[input_node]
258 real_nodes[node_id] = input_node.build()
259 for block in self.blocks:
260 temp_inputs = [
261 real_nodes[self._node_to_id[input_node]]
262 for input_node in block.inputs
263 ]
264 outputs = block.build(hp, inputs=temp_inputs)
265 outputs = nest.flatten(outputs)
266 for output_node, real_output_node in zip(block.outputs, outputs):
267 real_nodes[self._node_to_id[output_node]] = real_output_node
268 model = tf.keras.Model(
269 [real_nodes[self._node_to_id[input_node]] for input_node in self.inputs],
270 [
271 real_nodes[self._node_to_id[output_node]]
272 for output_node in self.outputs
273 ],
274 )
275
276 return self._compile_keras_model(hp, model)
277
278 def _get_metrics(self):
279 metrics = {}
280 for output_node in self.outputs:
281 block = output_node.in_blocks[0]
282 if isinstance(block, head_module.Head):
283 metrics[block.name] = block.metrics
284 return metrics
285
286 def _get_loss(self):
287 loss = {}
288 for output_node in self.outputs:
289 block = output_node.in_blocks[0]
290 if isinstance(block, head_module.Head):
291 loss[block.name] = block.loss
292 return loss
293
294 def _compile_keras_model(self, hp, model):
295 # Specify hyperparameters from compile(...)
296 optimizer_name = hp.Choice(
297 "optimizer", ["adam", "adadelta", "sgd"], default="adam"
298 )
299 learning_rate = hp.Choice(
300 "learning_rate", [1e-1, 1e-2, 1e-3, 1e-4, 1e-5], default=1e-3
301 )
302
303 if optimizer_name == "adam":
304 optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
305 elif optimizer_name == "adadelta":
306 optimizer = tf.keras.optimizers.Adadelta(learning_rate=learning_rate)
307 elif optimizer_name == "sgd":
308 optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)
309
310 model.compile(
311 optimizer=optimizer, metrics=self._get_metrics(), loss=self._get_loss()
312 )
313
314 return model
315
316 def save(self, filepath):
317 utils.save_json(filepath, self.get_config())
318
[end of autokeras/graph.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/autokeras/graph.py b/autokeras/graph.py
--- a/autokeras/graph.py
+++ b/autokeras/graph.py
@@ -249,7 +249,6 @@
def build(self, hp):
"""Build the HyperModel into a Keras Model."""
- tf.keras.backend.clear_session()
self._register_hps(hp)
self.compile()
real_nodes = {}
|
{"golden_diff": "diff --git a/autokeras/graph.py b/autokeras/graph.py\n--- a/autokeras/graph.py\n+++ b/autokeras/graph.py\n@@ -249,7 +249,6 @@\n \n def build(self, hp):\n \"\"\"Build the HyperModel into a Keras Model.\"\"\"\n- tf.keras.backend.clear_session()\n self._register_hps(hp)\n self.compile()\n real_nodes = {}\n", "issue": "How use multiple gpu?\n### Feature Description\r\nI want to use a single machine with multiple gpu for training, but it seems to have no actual effect### Code Example\r\n\r\n```python\r\nwith strategy.scope():\r\n```\r\n\r\n### Reason\r\nSpeed up the calculation of toxins\r\n\r\n### Solution\r\n<!---\r\nPlease tell us how to implement the feature,\r\nif you have one in mind.\r\n-->\r\n\n", "before_files": [{"content": "# Copyright 2020 The AutoKeras Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport kerastuner\nimport tensorflow as tf\nfrom tensorflow.python.util import nest\n\nfrom autokeras import blocks as blocks_module\nfrom autokeras import nodes as nodes_module\nfrom autokeras.engine import head as head_module\nfrom autokeras.engine import serializable\nfrom autokeras.utils import utils\n\n\ndef feature_encoding_input(block):\n \"\"\"Fetch the column_types and column_names.\n\n The values are fetched for FeatureEncoding from StructuredDataInput.\n \"\"\"\n if not isinstance(block.inputs[0], nodes_module.StructuredDataInput):\n raise TypeError(\n \"CategoricalToNumerical can only be used with StructuredDataInput.\"\n )\n block.column_types = block.inputs[0].column_types\n block.column_names = block.inputs[0].column_names\n\n\n# Compile the graph.\nCOMPILE_FUNCTIONS = {\n blocks_module.StructuredDataBlock: [feature_encoding_input],\n blocks_module.CategoricalToNumerical: [feature_encoding_input],\n}\n\n\ndef load_graph(filepath, custom_objects=None):\n if custom_objects is None:\n custom_objects = {}\n with tf.keras.utils.custom_object_scope(custom_objects):\n return Graph.from_config(utils.load_json(filepath))\n\n\nclass Graph(kerastuner.HyperModel, serializable.Serializable):\n \"\"\"A graph consists of connected Blocks, or Heads.\n\n # Arguments\n inputs: A list of input node(s) for the Graph.\n outputs: A list of output node(s) for the Graph.\n override_hps: A list of HyperParameters. 
The predefined HyperParameters that\n will override the space of the Hyperparameters defined in the Hypermodels\n with the same names.\n \"\"\"\n\n def __init__(self, inputs=None, outputs=None, override_hps=None):\n super().__init__()\n self.inputs = nest.flatten(inputs)\n self.outputs = nest.flatten(outputs)\n self._node_to_id = {}\n self._nodes = []\n self.blocks = []\n self._block_to_id = {}\n if inputs and outputs:\n self._build_network()\n self.override_hps = override_hps or []\n\n def compile(self):\n \"\"\"Share the information between blocks.\"\"\"\n for block in self.blocks:\n for func in COMPILE_FUNCTIONS.get(block.__class__, []):\n func(block)\n\n def _register_hps(self, hp):\n \"\"\"Register the override HyperParameters for current HyperParameters.\"\"\"\n for single_hp in self.override_hps:\n name = single_hp.name\n if name not in hp.values:\n hp._register(single_hp)\n hp.values[name] = single_hp.default\n\n def _build_network(self):\n self._node_to_id = {}\n\n # Recursively find all the interested nodes.\n for input_node in self.inputs:\n self._search_network(input_node, self.outputs, set(), set())\n self._nodes = sorted(\n list(self._node_to_id.keys()), key=lambda x: self._node_to_id[x]\n )\n\n for node in self.inputs + self.outputs:\n if node not in self._node_to_id:\n raise ValueError(\"Inputs and outputs not connected.\")\n\n # Find the blocks.\n blocks = []\n for input_node in self._nodes:\n for block in input_node.out_blocks:\n if (\n any(\n [\n output_node in self._node_to_id\n for output_node in block.outputs\n ]\n )\n and block not in blocks\n ):\n blocks.append(block)\n\n # Check if all the inputs of the blocks are set as inputs.\n for block in blocks:\n for input_node in block.inputs:\n if input_node not in self._node_to_id:\n raise ValueError(\n \"A required input is missing for HyperModel \"\n \"{name}.\".format(name=block.name)\n )\n\n # Calculate the in degree of all the nodes\n in_degree = [0] * len(self._nodes)\n for node_id, node in enumerate(self._nodes):\n in_degree[node_id] = len(\n [block for block in node.in_blocks if block in blocks]\n )\n\n # Add the blocks in topological order.\n self.blocks = []\n self._block_to_id = {}\n while len(blocks) != 0:\n new_added = []\n\n # Collect blocks with in degree 0.\n for block in blocks:\n if any([in_degree[self._node_to_id[node]] for node in block.inputs]):\n continue\n new_added.append(block)\n\n # Remove the collected blocks from blocks.\n for block in new_added:\n blocks.remove(block)\n\n for block in new_added:\n # Add the collected blocks to the Graph.\n self._add_block(block)\n\n # Decrease the in degree of the output nodes.\n for output_node in block.outputs:\n output_node_id = self._node_to_id[output_node]\n in_degree[output_node_id] -= 1\n\n def _search_network(self, input_node, outputs, in_stack_nodes, visited_nodes):\n visited_nodes.add(input_node)\n in_stack_nodes.add(input_node)\n\n outputs_reached = False\n if input_node in outputs:\n outputs_reached = True\n\n for block in input_node.out_blocks:\n for output_node in block.outputs:\n if output_node in in_stack_nodes:\n raise ValueError(\"The network has a cycle.\")\n if output_node not in visited_nodes:\n self._search_network(\n output_node, outputs, in_stack_nodes, visited_nodes\n )\n if output_node in self._node_to_id.keys():\n outputs_reached = True\n\n if outputs_reached:\n self._add_node(input_node)\n\n in_stack_nodes.remove(input_node)\n\n def _add_block(self, block):\n if block not in self.blocks:\n block_id = len(self.blocks)\n 
self._block_to_id[block] = block_id\n self.blocks.append(block)\n\n def _add_node(self, input_node):\n if input_node not in self._node_to_id:\n self._node_to_id[input_node] = len(self._node_to_id)\n\n def get_config(self):\n blocks = [blocks_module.serialize(block) for block in self.blocks]\n nodes = {\n str(self._node_to_id[node]): nodes_module.serialize(node)\n for node in self.inputs\n }\n override_hps = [\n kerastuner.engine.hyperparameters.serialize(hp)\n for hp in self.override_hps\n ]\n block_inputs = {\n str(block_id): [self._node_to_id[node] for node in block.inputs]\n for block_id, block in enumerate(self.blocks)\n }\n block_outputs = {\n str(block_id): [self._node_to_id[node] for node in block.outputs]\n for block_id, block in enumerate(self.blocks)\n }\n\n outputs = [self._node_to_id[node] for node in self.outputs]\n\n return {\n \"override_hps\": override_hps, # List [serialized].\n \"blocks\": blocks, # Dict {id: serialized}.\n \"nodes\": nodes, # Dict {id: serialized}.\n \"outputs\": outputs, # List of node_ids.\n \"block_inputs\": block_inputs, # Dict {id: List of node_ids}.\n \"block_outputs\": block_outputs, # Dict {id: List of node_ids}.\n }\n\n @classmethod\n def from_config(cls, config):\n blocks = [blocks_module.deserialize(block) for block in config[\"blocks\"]]\n nodes = {\n int(node_id): nodes_module.deserialize(node)\n for node_id, node in config[\"nodes\"].items()\n }\n override_hps = [\n kerastuner.engine.hyperparameters.deserialize(config)\n for config in config[\"override_hps\"]\n ]\n\n inputs = [nodes[node_id] for node_id in nodes]\n for block_id, block in enumerate(blocks):\n input_nodes = [\n nodes[node_id] for node_id in config[\"block_inputs\"][str(block_id)]\n ]\n output_nodes = nest.flatten(block(input_nodes))\n for output_node, node_id in zip(\n output_nodes, config[\"block_outputs\"][str(block_id)]\n ):\n nodes[node_id] = output_node\n\n outputs = [nodes[node_id] for node_id in config[\"outputs\"]]\n return cls(inputs=inputs, outputs=outputs, override_hps=override_hps)\n\n def build(self, hp):\n \"\"\"Build the HyperModel into a Keras Model.\"\"\"\n tf.keras.backend.clear_session()\n self._register_hps(hp)\n self.compile()\n real_nodes = {}\n for input_node in self.inputs:\n node_id = self._node_to_id[input_node]\n real_nodes[node_id] = input_node.build()\n for block in self.blocks:\n temp_inputs = [\n real_nodes[self._node_to_id[input_node]]\n for input_node in block.inputs\n ]\n outputs = block.build(hp, inputs=temp_inputs)\n outputs = nest.flatten(outputs)\n for output_node, real_output_node in zip(block.outputs, outputs):\n real_nodes[self._node_to_id[output_node]] = real_output_node\n model = tf.keras.Model(\n [real_nodes[self._node_to_id[input_node]] for input_node in self.inputs],\n [\n real_nodes[self._node_to_id[output_node]]\n for output_node in self.outputs\n ],\n )\n\n return self._compile_keras_model(hp, model)\n\n def _get_metrics(self):\n metrics = {}\n for output_node in self.outputs:\n block = output_node.in_blocks[0]\n if isinstance(block, head_module.Head):\n metrics[block.name] = block.metrics\n return metrics\n\n def _get_loss(self):\n loss = {}\n for output_node in self.outputs:\n block = output_node.in_blocks[0]\n if isinstance(block, head_module.Head):\n loss[block.name] = block.loss\n return loss\n\n def _compile_keras_model(self, hp, model):\n # Specify hyperparameters from compile(...)\n optimizer_name = hp.Choice(\n \"optimizer\", [\"adam\", \"adadelta\", \"sgd\"], default=\"adam\"\n )\n learning_rate = hp.Choice(\n 
\"learning_rate\", [1e-1, 1e-2, 1e-3, 1e-4, 1e-5], default=1e-3\n )\n\n if optimizer_name == \"adam\":\n optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)\n elif optimizer_name == \"adadelta\":\n optimizer = tf.keras.optimizers.Adadelta(learning_rate=learning_rate)\n elif optimizer_name == \"sgd\":\n optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate)\n\n model.compile(\n optimizer=optimizer, metrics=self._get_metrics(), loss=self._get_loss()\n )\n\n return model\n\n def save(self, filepath):\n utils.save_json(filepath, self.get_config())\n", "path": "autokeras/graph.py"}]}
| 3,926 | 96 |
gh_patches_debug_58655
|
rasdani/github-patches
|
git_diff
|
Anselmoo__spectrafit-715
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature]: Add python 3.11 support
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Missing Feature
Add python 3.11 support
### Possible Solution
_No response_
### Anything else?
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
</issue>
<code>
[start of spectrafit/__init__.py]
1 """SpectraFit, fast command line tool for fitting data."""
2 __version__ = "0.16.6"
3
[end of spectrafit/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py
--- a/spectrafit/__init__.py
+++ b/spectrafit/__init__.py
@@ -1,2 +1,2 @@
"""SpectraFit, fast command line tool for fitting data."""
-__version__ = "0.16.6"
+__version__ = "0.16.7"
|
{"golden_diff": "diff --git a/spectrafit/__init__.py b/spectrafit/__init__.py\n--- a/spectrafit/__init__.py\n+++ b/spectrafit/__init__.py\n@@ -1,2 +1,2 @@\n \"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n-__version__ = \"0.16.6\"\n+__version__ = \"0.16.7\"\n", "issue": "[Feature]: Add python 3.11 support\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Missing Feature\n\nAdd python 3.11 support\n\n### Possible Solution\n\n_No response_\n\n### Anything else?\n\n_No response_\n\n### Code of Conduct\n\n- [X] I agree to follow this project's Code of Conduct\n", "before_files": [{"content": "\"\"\"SpectraFit, fast command line tool for fitting data.\"\"\"\n__version__ = \"0.16.6\"\n", "path": "spectrafit/__init__.py"}]}
| 646 | 94 |
gh_patches_debug_2025 | rasdani/github-patches | git_diff | pre-commit__pre-commit-2836 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Alternative to stashing files for testing
Are there any plans to implement alternatives to stashing the worktree?
Ideally this would be hook/scriptable, like some 'prepare-worktree' and 'restore-worktree' options (which default to the current stash behavior) but can also yield some new directory where the tests are run. The rationale here is that my editor reverts files changed on disk and I'd like to add notes to source files while the commit is in progress.
In my own pre-commit hooks I use something like:
git archive "$(git write-tree)" --prefix="$test_dir/" | tar xf -
To create a pristine source tree (actually, I also prime it with `cp -rl` with build artifacts from the previous build to speed up incremental builds). 'git-worktree' and other tools could be used as well...
Eventually I have the idea to run some (more expensive) pre-commit checks in the background while one types the commit message. Then in the commit-msg hook wait for the background results and abort the commit there. This should reduce the turn around times significantly.
</issue>
<code>
[start of pre_commit/languages/swift.py]
1 from __future__ import annotations
2
3 import contextlib
4 import os
5 from typing import Generator
6 from typing import Sequence
7
8 from pre_commit import lang_base
9 from pre_commit.envcontext import envcontext
10 from pre_commit.envcontext import PatchesT
11 from pre_commit.envcontext import Var
12 from pre_commit.prefix import Prefix
13 from pre_commit.util import cmd_output_b
14
15 BUILD_DIR = '.build'
16 BUILD_CONFIG = 'release'
17
18 ENVIRONMENT_DIR = 'swift_env'
19 get_default_version = lang_base.basic_get_default_version
20 health_check = lang_base.basic_health_check
21 run_hook = lang_base.basic_run_hook
22
23
24 def get_env_patch(venv: str) -> PatchesT: # pragma: win32 no cover
25 bin_path = os.path.join(venv, BUILD_DIR, BUILD_CONFIG)
26 return (('PATH', (bin_path, os.pathsep, Var('PATH'))),)
27
28
29 @contextlib.contextmanager # pragma: win32 no cover
30 def in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:
31 envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)
32 with envcontext(get_env_patch(envdir)):
33 yield
34
35
36 def install_environment(
37 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
38 ) -> None: # pragma: win32 no cover
39 lang_base.assert_version_default('swift', version)
40 lang_base.assert_no_additional_deps('swift', additional_dependencies)
41 envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)
42
43 # Build the swift package
44 os.mkdir(envdir)
45 cmd_output_b(
46 'swift', 'build',
47 '-C', prefix.prefix_dir,
48 '-c', BUILD_CONFIG,
49 '--build-path', os.path.join(envdir, BUILD_DIR),
50 )
51
[end of pre_commit/languages/swift.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/swift.py b/pre_commit/languages/swift.py
--- a/pre_commit/languages/swift.py
+++ b/pre_commit/languages/swift.py
@@ -44,7 +44,7 @@
os.mkdir(envdir)
cmd_output_b(
'swift', 'build',
- '-C', prefix.prefix_dir,
+ '--package-path', prefix.prefix_dir,
'-c', BUILD_CONFIG,
'--build-path', os.path.join(envdir, BUILD_DIR),
)
|
{"golden_diff": "diff --git a/pre_commit/languages/swift.py b/pre_commit/languages/swift.py\n--- a/pre_commit/languages/swift.py\n+++ b/pre_commit/languages/swift.py\n@@ -44,7 +44,7 @@\n os.mkdir(envdir)\n cmd_output_b(\n 'swift', 'build',\n- '-C', prefix.prefix_dir,\n+ '--package-path', prefix.prefix_dir,\n '-c', BUILD_CONFIG,\n '--build-path', os.path.join(envdir, BUILD_DIR),\n )\n", "issue": "Alternative to stashing files for testing\nAre there any plans to implement alternatives to stashing the worktree?\r\n\r\nIdeally this would be hook/scriptable, like some 'prepare-worktree' and 'restore-worktree' options (which default to the current stash behavior) but can also yield some new directory where the tests are run. The rationale here is that my editor reverts files changed on disk and I'd like to add notes to source files while the commit is in progress.\r\n\r\nIn my own pre-commit hooks I use something like:\r\n\r\n git archive \"$(git write-tree)\" --prefix=\"$test_dir/\" | tar xf -\r\n\r\nTo create a pristine source tree (actually, I also prime it with `cp -rl` with build artifacts from the previous build to speed up incremental builds). 'git-worktree' and other tools could be used as well...\r\n\r\nEventually I have the idea to run some (more expensive) pre-commit checks in the background while one types the commit message. Then in the commit-msg hook wait for the background results and abort the commit there. This should reduce the turn around times significantly.\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport contextlib\nimport os\nfrom typing import Generator\nfrom typing import Sequence\n\nfrom pre_commit import lang_base\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import Var\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import cmd_output_b\n\nBUILD_DIR = '.build'\nBUILD_CONFIG = 'release'\n\nENVIRONMENT_DIR = 'swift_env'\nget_default_version = lang_base.basic_get_default_version\nhealth_check = lang_base.basic_health_check\nrun_hook = lang_base.basic_run_hook\n\n\ndef get_env_patch(venv: str) -> PatchesT: # pragma: win32 no cover\n bin_path = os.path.join(venv, BUILD_DIR, BUILD_CONFIG)\n return (('PATH', (bin_path, os.pathsep, Var('PATH'))),)\n\n\[email protected] # pragma: win32 no cover\ndef in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n with envcontext(get_env_patch(envdir)):\n yield\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: Sequence[str],\n) -> None: # pragma: win32 no cover\n lang_base.assert_version_default('swift', version)\n lang_base.assert_no_additional_deps('swift', additional_dependencies)\n envdir = lang_base.environment_dir(prefix, ENVIRONMENT_DIR, version)\n\n # Build the swift package\n os.mkdir(envdir)\n cmd_output_b(\n 'swift', 'build',\n '-C', prefix.prefix_dir,\n '-c', BUILD_CONFIG,\n '--build-path', os.path.join(envdir, BUILD_DIR),\n )\n", "path": "pre_commit/languages/swift.py"}]}
| 1,250 | 112 |
gh_patches_debug_16451 | rasdani/github-patches | git_diff | getredash__redash-602 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
API keys should be supported in the HTTP headers
Currently it seems that all API calls must include the `api_key` in the query string. Ideally the HTTP headers could also be used (e.g. `Authorization: Key XXXX` or `X-Api-Key`) so that Web server logs don't log the API key in the clear.
</issue>
<code>
[start of redash/authentication.py]
1 import hashlib
2 import hmac
3 import time
4 import logging
5
6 from flask.ext.login import LoginManager
7 from flask.ext.login import user_logged_in
8
9 from redash import models, settings, google_oauth, saml_auth
10 from redash.tasks import record_event
11
12 login_manager = LoginManager()
13 logger = logging.getLogger('authentication')
14
15
16 def sign(key, path, expires):
17 if not key:
18 return None
19
20 h = hmac.new(str(key), msg=path, digestmod=hashlib.sha1)
21 h.update(str(expires))
22
23 return h.hexdigest()
24
25
26 @login_manager.user_loader
27 def load_user(user_id):
28 return models.User.get_by_id(user_id)
29
30
31 def hmac_load_user_from_request(request):
32 signature = request.args.get('signature')
33 expires = float(request.args.get('expires') or 0)
34 query_id = request.view_args.get('query_id', None)
35 user_id = request.args.get('user_id', None)
36
37 # TODO: 3600 should be a setting
38 if signature and time.time() < expires <= time.time() + 3600:
39 if user_id:
40 user = models.User.get_by_id(user_id)
41 calculated_signature = sign(user.api_key, request.path, expires)
42
43 if user.api_key and signature == calculated_signature:
44 return user
45
46 if query_id:
47 query = models.Query.get(models.Query.id == query_id)
48 calculated_signature = sign(query.api_key, request.path, expires)
49
50 if query.api_key and signature == calculated_signature:
51 return models.ApiUser(query.api_key)
52
53 return None
54
55 def get_user_from_api_key(api_key, query_id):
56 if not api_key:
57 return None
58
59 user = None
60 try:
61 user = models.User.get_by_api_key(api_key)
62 except models.User.DoesNotExist:
63 if query_id:
64 query = models.Query.get_by_id(query_id)
65 if query and query.api_key == api_key:
66 user = models.ApiUser(api_key)
67
68 return user
69
70 def api_key_load_user_from_request(request):
71 api_key = request.args.get('api_key', None)
72 query_id = request.view_args.get('query_id', None)
73
74 user = get_user_from_api_key(api_key, query_id)
75 return user
76
77
78 def log_user_logged_in(app, user):
79 event = {
80 'user_id': user.id,
81 'action': 'login',
82 'object_type': 'redash',
83 'timestamp': int(time.time()),
84 }
85
86 record_event.delay(event)
87
88
89 def setup_authentication(app):
90 login_manager.init_app(app)
91 login_manager.anonymous_user = models.AnonymousUser
92 login_manager.login_view = 'login'
93 app.secret_key = settings.COOKIE_SECRET
94 app.register_blueprint(google_oauth.blueprint)
95 app.register_blueprint(saml_auth.blueprint)
96
97 user_logged_in.connect(log_user_logged_in)
98
99 if settings.AUTH_TYPE == 'hmac':
100 login_manager.request_loader(hmac_load_user_from_request)
101 elif settings.AUTH_TYPE == 'api_key':
102 login_manager.request_loader(api_key_load_user_from_request)
103 else:
104 logger.warning("Unknown authentication type ({}). Using default (HMAC).".format(settings.AUTH_TYPE))
105 login_manager.request_loader(hmac_load_user_from_request)
106
107
108
[end of redash/authentication.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/redash/authentication.py b/redash/authentication.py
--- a/redash/authentication.py
+++ b/redash/authentication.py
@@ -52,6 +52,7 @@
return None
+
def get_user_from_api_key(api_key, query_id):
if not api_key:
return None
@@ -67,8 +68,19 @@
return user
-def api_key_load_user_from_request(request):
+
+def get_api_key_from_request(request):
api_key = request.args.get('api_key', None)
+
+ if api_key is None and request.headers.get('Authorization'):
+ auth_header = request.headers.get('Authorization')
+ api_key = auth_header.replace('Key ', '', 1)
+
+ return api_key
+
+
+def api_key_load_user_from_request(request):
+ api_key = get_api_key_from_request(request)
query_id = request.view_args.get('query_id', None)
user = get_user_from_api_key(api_key, query_id)
|
{"golden_diff": "diff --git a/redash/authentication.py b/redash/authentication.py\n--- a/redash/authentication.py\n+++ b/redash/authentication.py\n@@ -52,6 +52,7 @@\n \n return None\n \n+\n def get_user_from_api_key(api_key, query_id):\n if not api_key:\n return None\n@@ -67,8 +68,19 @@\n \n return user\n \n-def api_key_load_user_from_request(request):\n+\n+def get_api_key_from_request(request):\n api_key = request.args.get('api_key', None)\n+\n+ if api_key is None and request.headers.get('Authorization'):\n+ auth_header = request.headers.get('Authorization')\n+ api_key = auth_header.replace('Key ', '', 1)\n+\n+ return api_key\n+\n+\n+def api_key_load_user_from_request(request):\n+ api_key = get_api_key_from_request(request)\n query_id = request.view_args.get('query_id', None)\n \n user = get_user_from_api_key(api_key, query_id)\n", "issue": "API keys should be supported in the HTTP headers\nCurrently it seems that all API calls must include the `api_key` in the query string. Ideally the HTTP headers could also be used (e.g. `Authorization: Key XXXX` or `X-Api-Key`) so that Web server logs don't log the API key in the clear.\n\n", "before_files": [{"content": "import hashlib\nimport hmac\nimport time\nimport logging\n\nfrom flask.ext.login import LoginManager\nfrom flask.ext.login import user_logged_in\n\nfrom redash import models, settings, google_oauth, saml_auth\nfrom redash.tasks import record_event\n\nlogin_manager = LoginManager()\nlogger = logging.getLogger('authentication')\n\n\ndef sign(key, path, expires):\n if not key:\n return None\n\n h = hmac.new(str(key), msg=path, digestmod=hashlib.sha1)\n h.update(str(expires))\n\n return h.hexdigest()\n\n\n@login_manager.user_loader\ndef load_user(user_id):\n return models.User.get_by_id(user_id)\n\n\ndef hmac_load_user_from_request(request):\n signature = request.args.get('signature')\n expires = float(request.args.get('expires') or 0)\n query_id = request.view_args.get('query_id', None)\n user_id = request.args.get('user_id', None)\n\n # TODO: 3600 should be a setting\n if signature and time.time() < expires <= time.time() + 3600:\n if user_id:\n user = models.User.get_by_id(user_id)\n calculated_signature = sign(user.api_key, request.path, expires)\n\n if user.api_key and signature == calculated_signature:\n return user\n\n if query_id:\n query = models.Query.get(models.Query.id == query_id)\n calculated_signature = sign(query.api_key, request.path, expires)\n\n if query.api_key and signature == calculated_signature:\n return models.ApiUser(query.api_key)\n\n return None\n\ndef get_user_from_api_key(api_key, query_id):\n if not api_key:\n return None\n\n user = None\n try:\n user = models.User.get_by_api_key(api_key)\n except models.User.DoesNotExist:\n if query_id:\n query = models.Query.get_by_id(query_id)\n if query and query.api_key == api_key:\n user = models.ApiUser(api_key)\n\n return user\n\ndef api_key_load_user_from_request(request):\n api_key = request.args.get('api_key', None)\n query_id = request.view_args.get('query_id', None)\n\n user = get_user_from_api_key(api_key, query_id)\n return user\n\n\ndef log_user_logged_in(app, user):\n event = {\n 'user_id': user.id,\n 'action': 'login',\n 'object_type': 'redash',\n 'timestamp': int(time.time()),\n }\n\n record_event.delay(event)\n\n\ndef setup_authentication(app):\n login_manager.init_app(app)\n login_manager.anonymous_user = models.AnonymousUser\n login_manager.login_view = 'login'\n app.secret_key = settings.COOKIE_SECRET\n app.register_blueprint(google_oauth.blueprint)\n 
app.register_blueprint(saml_auth.blueprint)\n\n user_logged_in.connect(log_user_logged_in)\n\n if settings.AUTH_TYPE == 'hmac':\n login_manager.request_loader(hmac_load_user_from_request)\n elif settings.AUTH_TYPE == 'api_key':\n login_manager.request_loader(api_key_load_user_from_request)\n else:\n logger.warning(\"Unknown authentication type ({}). Using default (HMAC).\".format(settings.AUTH_TYPE))\n login_manager.request_loader(hmac_load_user_from_request)\n\n\n", "path": "redash/authentication.py"}]}
| 1,528 | 219 |
gh_patches_debug_2463 | rasdani/github-patches | git_diff | kedro-org__kedro-1977 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pickle.PickleDataSet docstring examples are incorrect
## Description
Kind of a small issue but the "advanced" example in the [pickle.PickleDataSet API docs](https://kedro.readthedocs.io/en/stable/kedro.extras.datasets.pickle.PickleDataSet.html) is wrong.
`compression` is not a valid [`joblib.dump`](https://joblib.readthedocs.io/en/latest/generated/joblib.dump.html) parameter (it should simply be `compress`) and [`joblib.load`](https://joblib.readthedocs.io/en/latest/generated/joblib.load.html) does not require a `compression` kwarg at all since it can automagically discover the correct compression algorithm used.
## Context
Even if it's a trivial issue I stumbled upon it and I hope to fix it so that future users will not have to go the joblib docs to find the problem.
## Possible Alternatives
I'a m working on a trivial fix, I'm going to open a PR as soon as possible.
</issue>
<code>
[start of kedro/extras/datasets/pickle/pickle_dataset.py]
1 """``PickleDataSet`` loads/saves data from/to a Pickle file using an underlying
2 filesystem (e.g.: local, S3, GCS). The underlying functionality is supported by
3 the specified backend library passed in (defaults to the ``pickle`` library), so it
4 supports all allowed options for loading and saving pickle files.
5 """
6 import importlib
7 from copy import deepcopy
8 from pathlib import PurePosixPath
9 from typing import Any, Dict
10
11 import fsspec
12
13 from kedro.io.core import (
14 AbstractVersionedDataSet,
15 DataSetError,
16 Version,
17 get_filepath_str,
18 get_protocol_and_path,
19 )
20
21
22 class PickleDataSet(AbstractVersionedDataSet[Any, Any]):
23 """``PickleDataSet`` loads/saves data from/to a Pickle file using an underlying
24 filesystem (e.g.: local, S3, GCS). The underlying functionality is supported by
25 the specified backend library passed in (defaults to the ``pickle`` library), so it
26 supports all allowed options for loading and saving pickle files.
27
28 Example adding a catalog entry with
29 `YAML API <https://kedro.readthedocs.io/en/stable/data/\
30 data_catalog.html#use-the-data-catalog-with-the-yaml-api>`_:
31
32 .. code-block:: yaml
33
34 >>> test_model: # simple example without compression
35 >>> type: pickle.PickleDataSet
36 >>> filepath: data/07_model_output/test_model.pkl
37 >>> backend: pickle
38 >>>
39 >>> final_model: # example with load and save args
40 >>> type: pickle.PickleDataSet
41 >>> filepath: s3://your_bucket/final_model.pkl.lz4
42 >>> backend: joblib
43 >>> credentials: s3_credentials
44 >>> save_args:
45 >>> compression: lz4
46 >>> load_args:
47 >>> compression: lz4
48
49 Example using Python API:
50 ::
51
52 >>> from kedro.extras.datasets.pickle import PickleDataSet
53 >>> import pandas as pd
54 >>>
55 >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],
56 >>> 'col3': [5, 6]})
57 >>>
58 >>> # data_set = PickleDataSet(filepath="gcs://bucket/test.pkl")
59 >>> data_set = PickleDataSet(filepath="test.pkl", backend="pickle")
60 >>> data_set.save(data)
61 >>> reloaded = data_set.load()
62 >>> assert data.equals(reloaded)
63 >>>
64 >>> # Add "compress_pickle[lz4]" to requirements.txt
65 >>> data_set = PickleDataSet(filepath="test.pickle.lz4",
66 >>> backend="compress_pickle",
67 >>> load_args={"compression":"lz4"},
68 >>> save_args={"compression":"lz4"})
69 >>> data_set.save(data)
70 >>> reloaded = data_set.load()
71 >>> assert data.equals(reloaded)
72 """
73
74 DEFAULT_LOAD_ARGS = {} # type: Dict[str, Any]
75 DEFAULT_SAVE_ARGS = {} # type: Dict[str, Any]
76
77 # pylint: disable=too-many-arguments,too-many-locals
78 def __init__(
79 self,
80 filepath: str,
81 backend: str = "pickle",
82 load_args: Dict[str, Any] = None,
83 save_args: Dict[str, Any] = None,
84 version: Version = None,
85 credentials: Dict[str, Any] = None,
86 fs_args: Dict[str, Any] = None,
87 ) -> None:
88 """Creates a new instance of ``PickleDataSet`` pointing to a concrete Pickle
89 file on a specific filesystem. ``PickleDataSet`` supports custom backends to
90 serialise/deserialise objects.
91
92 Example backends that are compatible (non-exhaustive):
93 * `pickle`
94 * `joblib`
95 * `dill`
96 * `compress_pickle`
97
98 Example backends that are incompatible:
99 * `torch`
100
101 Args:
102 filepath: Filepath in POSIX format to a Pickle file prefixed with a protocol like
103 `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.
104 The prefix should be any protocol supported by ``fsspec``.
105 Note: `http(s)` doesn't support versioning.
106 backend: Backend to use, must be an import path to a module which satisfies the
107 ``pickle`` interface. That is, contains a `load` and `dump` function.
108 Defaults to 'pickle'.
109 load_args: Pickle options for loading pickle files.
110 You can pass in arguments that the backend load function specified accepts, e.g:
111 pickle.load: https://docs.python.org/3/library/pickle.html#pickle.load
112 joblib.load: https://joblib.readthedocs.io/en/latest/generated/joblib.load.html
113 dill.load: https://dill.readthedocs.io/en/latest/dill.html#dill._dill.load
114 compress_pickle.load:
115 https://lucianopaz.github.io/compress_pickle/html/api/compress_pickle.html#compress_pickle.compress_pickle.load
116 All defaults are preserved.
117 save_args: Pickle options for saving pickle files.
118 You can pass in arguments that the backend dump function specified accepts, e.g:
119 pickle.dump: https://docs.python.org/3/library/pickle.html#pickle.dump
120 joblib.dump: https://joblib.readthedocs.io/en/latest/generated/joblib.dump.html
121 dill.dump: https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dump
122 compress_pickle.dump:
123 https://lucianopaz.github.io/compress_pickle/html/api/compress_pickle.html#compress_pickle.compress_pickle.dump
124 All defaults are preserved.
125 version: If specified, should be an instance of
126 ``kedro.io.core.Version``. If its ``load`` attribute is
127 None, the latest version will be loaded. If its ``save``
128 attribute is None, save version will be autogenerated.
129 credentials: Credentials required to get access to the underlying filesystem.
130 E.g. for ``GCSFileSystem`` it should look like `{"token": None}`.
131 fs_args: Extra arguments to pass into underlying filesystem class constructor
132 (e.g. `{"project": "my-project"}` for ``GCSFileSystem``), as well as
133 to pass to the filesystem's `open` method through nested keys
134 `open_args_load` and `open_args_save`.
135 Here you can find all available arguments for `open`:
136 https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.open
137 All defaults are preserved, except `mode`, which is set to `wb` when saving.
138
139 Raises:
140 ValueError: If ``backend`` does not satisfy the `pickle` interface.
141 ImportError: If the ``backend`` module could not be imported.
142 """
143 # We do not store `imported_backend` as an attribute to be used in `load`/`save`
144 # as this would mean the dataset cannot be deepcopied (module objects cannot be
145 # pickled). The import here is purely to raise any errors as early as possible.
146 # Repeated imports in the `load` and `save` methods should not be a significant
147 # performance hit as Python caches imports.
148 try:
149 imported_backend = importlib.import_module(backend)
150 except ImportError as exc:
151 raise ImportError(
152 f"Selected backend '{backend}' could not be imported. "
153 "Make sure it is installed and importable."
154 ) from exc
155
156 if not (
157 hasattr(imported_backend, "load") and hasattr(imported_backend, "dump")
158 ):
159 raise ValueError(
160 f"Selected backend '{backend}' should satisfy the pickle interface. "
161 "Missing one of 'load' and 'dump' on the backend."
162 )
163
164 _fs_args = deepcopy(fs_args) or {}
165 _fs_open_args_load = _fs_args.pop("open_args_load", {})
166 _fs_open_args_save = _fs_args.pop("open_args_save", {})
167 _credentials = deepcopy(credentials) or {}
168
169 protocol, path = get_protocol_and_path(filepath, version)
170 if protocol == "file":
171 _fs_args.setdefault("auto_mkdir", True)
172
173 self._protocol = protocol
174 self._fs = fsspec.filesystem(self._protocol, **_credentials, **_fs_args)
175
176 super().__init__(
177 filepath=PurePosixPath(path),
178 version=version,
179 exists_function=self._fs.exists,
180 glob_function=self._fs.glob,
181 )
182
183 self._backend = backend
184
185 # Handle default load and save arguments
186 self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)
187 if load_args is not None:
188 self._load_args.update(load_args)
189 self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)
190 if save_args is not None:
191 self._save_args.update(save_args)
192
193 _fs_open_args_save.setdefault("mode", "wb")
194 self._fs_open_args_load = _fs_open_args_load
195 self._fs_open_args_save = _fs_open_args_save
196
197 def _describe(self) -> Dict[str, Any]:
198 return dict(
199 filepath=self._filepath,
200 backend=self._backend,
201 protocol=self._protocol,
202 load_args=self._load_args,
203 save_args=self._save_args,
204 version=self._version,
205 )
206
207 def _load(self) -> Any:
208 load_path = get_filepath_str(self._get_load_path(), self._protocol)
209
210 with self._fs.open(load_path, **self._fs_open_args_load) as fs_file:
211 imported_backend = importlib.import_module(self._backend)
212 return imported_backend.load(fs_file, **self._load_args) # type: ignore
213
214 def _save(self, data: Any) -> None:
215 save_path = get_filepath_str(self._get_save_path(), self._protocol)
216
217 with self._fs.open(save_path, **self._fs_open_args_save) as fs_file:
218 try:
219 imported_backend = importlib.import_module(self._backend)
220 imported_backend.dump(data, fs_file, **self._save_args) # type: ignore
221 except Exception as exc:
222 raise DataSetError(
223 f"{data.__class__} was not serialised due to: {exc}"
224 ) from exc
225
226 self._invalidate_cache()
227
228 def _exists(self) -> bool:
229 try:
230 load_path = get_filepath_str(self._get_load_path(), self._protocol)
231 except DataSetError:
232 return False
233
234 return self._fs.exists(load_path)
235
236 def _release(self) -> None:
237 super()._release()
238 self._invalidate_cache()
239
240 def _invalidate_cache(self) -> None:
241 """Invalidate underlying filesystem caches."""
242 filepath = get_filepath_str(self._filepath, self._protocol)
243 self._fs.invalidate_cache(filepath)
244
[end of kedro/extras/datasets/pickle/pickle_dataset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kedro/extras/datasets/pickle/pickle_dataset.py b/kedro/extras/datasets/pickle/pickle_dataset.py
--- a/kedro/extras/datasets/pickle/pickle_dataset.py
+++ b/kedro/extras/datasets/pickle/pickle_dataset.py
@@ -42,9 +42,7 @@
>>> backend: joblib
>>> credentials: s3_credentials
>>> save_args:
- >>> compression: lz4
- >>> load_args:
- >>> compression: lz4
+ >>> compress: lz4
Example using Python API:
::
|
{"golden_diff": "diff --git a/kedro/extras/datasets/pickle/pickle_dataset.py b/kedro/extras/datasets/pickle/pickle_dataset.py\n--- a/kedro/extras/datasets/pickle/pickle_dataset.py\n+++ b/kedro/extras/datasets/pickle/pickle_dataset.py\n@@ -42,9 +42,7 @@\n >>> backend: joblib\n >>> credentials: s3_credentials\n >>> save_args:\n- >>> compression: lz4\n- >>> load_args:\n- >>> compression: lz4\n+ >>> compress: lz4\n \n Example using Python API:\n ::\n", "issue": "pickle.PickleDataSet docstring examples are incorrect\n## Description\r\nKind of a small issue but the \"advanced\" example in the [pickle.PickleDataSet API docs](https://kedro.readthedocs.io/en/stable/kedro.extras.datasets.pickle.PickleDataSet.html) is wrong.\r\n`compression` is not a valid [`joblib.dump`](https://joblib.readthedocs.io/en/latest/generated/joblib.dump.html) parameter (it should simply be `compress`) and [`joblib.load`](https://joblib.readthedocs.io/en/latest/generated/joblib.load.html) does not require a `compression` kwarg at all since it can automagically discover the correct compression algorithm used.\r\n\r\n\r\n## Context\r\nEven if it's a trivial issue I stumbled upon it and I hope to fix it so that future users will not have to go the joblib docs to find the problem.\r\n\r\n\r\n## Possible Alternatives\r\nI'a m working on a trivial fix, I'm going to open a PR as soon as possible.\r\n\n", "before_files": [{"content": "\"\"\"``PickleDataSet`` loads/saves data from/to a Pickle file using an underlying\nfilesystem (e.g.: local, S3, GCS). The underlying functionality is supported by\nthe specified backend library passed in (defaults to the ``pickle`` library), so it\nsupports all allowed options for loading and saving pickle files.\n\"\"\"\nimport importlib\nfrom copy import deepcopy\nfrom pathlib import PurePosixPath\nfrom typing import Any, Dict\n\nimport fsspec\n\nfrom kedro.io.core import (\n AbstractVersionedDataSet,\n DataSetError,\n Version,\n get_filepath_str,\n get_protocol_and_path,\n)\n\n\nclass PickleDataSet(AbstractVersionedDataSet[Any, Any]):\n \"\"\"``PickleDataSet`` loads/saves data from/to a Pickle file using an underlying\n filesystem (e.g.: local, S3, GCS). The underlying functionality is supported by\n the specified backend library passed in (defaults to the ``pickle`` library), so it\n supports all allowed options for loading and saving pickle files.\n\n Example adding a catalog entry with\n `YAML API <https://kedro.readthedocs.io/en/stable/data/\\\n data_catalog.html#use-the-data-catalog-with-the-yaml-api>`_:\n\n .. 
code-block:: yaml\n\n >>> test_model: # simple example without compression\n >>> type: pickle.PickleDataSet\n >>> filepath: data/07_model_output/test_model.pkl\n >>> backend: pickle\n >>>\n >>> final_model: # example with load and save args\n >>> type: pickle.PickleDataSet\n >>> filepath: s3://your_bucket/final_model.pkl.lz4\n >>> backend: joblib\n >>> credentials: s3_credentials\n >>> save_args:\n >>> compression: lz4\n >>> load_args:\n >>> compression: lz4\n\n Example using Python API:\n ::\n\n >>> from kedro.extras.datasets.pickle import PickleDataSet\n >>> import pandas as pd\n >>>\n >>> data = pd.DataFrame({'col1': [1, 2], 'col2': [4, 5],\n >>> 'col3': [5, 6]})\n >>>\n >>> # data_set = PickleDataSet(filepath=\"gcs://bucket/test.pkl\")\n >>> data_set = PickleDataSet(filepath=\"test.pkl\", backend=\"pickle\")\n >>> data_set.save(data)\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n >>>\n >>> # Add \"compress_pickle[lz4]\" to requirements.txt\n >>> data_set = PickleDataSet(filepath=\"test.pickle.lz4\",\n >>> backend=\"compress_pickle\",\n >>> load_args={\"compression\":\"lz4\"},\n >>> save_args={\"compression\":\"lz4\"})\n >>> data_set.save(data)\n >>> reloaded = data_set.load()\n >>> assert data.equals(reloaded)\n \"\"\"\n\n DEFAULT_LOAD_ARGS = {} # type: Dict[str, Any]\n DEFAULT_SAVE_ARGS = {} # type: Dict[str, Any]\n\n # pylint: disable=too-many-arguments,too-many-locals\n def __init__(\n self,\n filepath: str,\n backend: str = \"pickle\",\n load_args: Dict[str, Any] = None,\n save_args: Dict[str, Any] = None,\n version: Version = None,\n credentials: Dict[str, Any] = None,\n fs_args: Dict[str, Any] = None,\n ) -> None:\n \"\"\"Creates a new instance of ``PickleDataSet`` pointing to a concrete Pickle\n file on a specific filesystem. ``PickleDataSet`` supports custom backends to\n serialise/deserialise objects.\n\n Example backends that are compatible (non-exhaustive):\n * `pickle`\n * `joblib`\n * `dill`\n * `compress_pickle`\n\n Example backends that are incompatible:\n * `torch`\n\n Args:\n filepath: Filepath in POSIX format to a Pickle file prefixed with a protocol like\n `s3://`. If prefix is not provided, `file` protocol (local filesystem) will be used.\n The prefix should be any protocol supported by ``fsspec``.\n Note: `http(s)` doesn't support versioning.\n backend: Backend to use, must be an import path to a module which satisfies the\n ``pickle`` interface. 
That is, contains a `load` and `dump` function.\n Defaults to 'pickle'.\n load_args: Pickle options for loading pickle files.\n You can pass in arguments that the backend load function specified accepts, e.g:\n pickle.load: https://docs.python.org/3/library/pickle.html#pickle.load\n joblib.load: https://joblib.readthedocs.io/en/latest/generated/joblib.load.html\n dill.load: https://dill.readthedocs.io/en/latest/dill.html#dill._dill.load\n compress_pickle.load:\n https://lucianopaz.github.io/compress_pickle/html/api/compress_pickle.html#compress_pickle.compress_pickle.load\n All defaults are preserved.\n save_args: Pickle options for saving pickle files.\n You can pass in arguments that the backend dump function specified accepts, e.g:\n pickle.dump: https://docs.python.org/3/library/pickle.html#pickle.dump\n joblib.dump: https://joblib.readthedocs.io/en/latest/generated/joblib.dump.html\n dill.dump: https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dump\n compress_pickle.dump:\n https://lucianopaz.github.io/compress_pickle/html/api/compress_pickle.html#compress_pickle.compress_pickle.dump\n All defaults are preserved.\n version: If specified, should be an instance of\n ``kedro.io.core.Version``. If its ``load`` attribute is\n None, the latest version will be loaded. If its ``save``\n attribute is None, save version will be autogenerated.\n credentials: Credentials required to get access to the underlying filesystem.\n E.g. for ``GCSFileSystem`` it should look like `{\"token\": None}`.\n fs_args: Extra arguments to pass into underlying filesystem class constructor\n (e.g. `{\"project\": \"my-project\"}` for ``GCSFileSystem``), as well as\n to pass to the filesystem's `open` method through nested keys\n `open_args_load` and `open_args_save`.\n Here you can find all available arguments for `open`:\n https://filesystem-spec.readthedocs.io/en/latest/api.html#fsspec.spec.AbstractFileSystem.open\n All defaults are preserved, except `mode`, which is set to `wb` when saving.\n\n Raises:\n ValueError: If ``backend`` does not satisfy the `pickle` interface.\n ImportError: If the ``backend`` module could not be imported.\n \"\"\"\n # We do not store `imported_backend` as an attribute to be used in `load`/`save`\n # as this would mean the dataset cannot be deepcopied (module objects cannot be\n # pickled). The import here is purely to raise any errors as early as possible.\n # Repeated imports in the `load` and `save` methods should not be a significant\n # performance hit as Python caches imports.\n try:\n imported_backend = importlib.import_module(backend)\n except ImportError as exc:\n raise ImportError(\n f\"Selected backend '{backend}' could not be imported. \"\n \"Make sure it is installed and importable.\"\n ) from exc\n\n if not (\n hasattr(imported_backend, \"load\") and hasattr(imported_backend, \"dump\")\n ):\n raise ValueError(\n f\"Selected backend '{backend}' should satisfy the pickle interface. 
\"\n \"Missing one of 'load' and 'dump' on the backend.\"\n )\n\n _fs_args = deepcopy(fs_args) or {}\n _fs_open_args_load = _fs_args.pop(\"open_args_load\", {})\n _fs_open_args_save = _fs_args.pop(\"open_args_save\", {})\n _credentials = deepcopy(credentials) or {}\n\n protocol, path = get_protocol_and_path(filepath, version)\n if protocol == \"file\":\n _fs_args.setdefault(\"auto_mkdir\", True)\n\n self._protocol = protocol\n self._fs = fsspec.filesystem(self._protocol, **_credentials, **_fs_args)\n\n super().__init__(\n filepath=PurePosixPath(path),\n version=version,\n exists_function=self._fs.exists,\n glob_function=self._fs.glob,\n )\n\n self._backend = backend\n\n # Handle default load and save arguments\n self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)\n if load_args is not None:\n self._load_args.update(load_args)\n self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)\n if save_args is not None:\n self._save_args.update(save_args)\n\n _fs_open_args_save.setdefault(\"mode\", \"wb\")\n self._fs_open_args_load = _fs_open_args_load\n self._fs_open_args_save = _fs_open_args_save\n\n def _describe(self) -> Dict[str, Any]:\n return dict(\n filepath=self._filepath,\n backend=self._backend,\n protocol=self._protocol,\n load_args=self._load_args,\n save_args=self._save_args,\n version=self._version,\n )\n\n def _load(self) -> Any:\n load_path = get_filepath_str(self._get_load_path(), self._protocol)\n\n with self._fs.open(load_path, **self._fs_open_args_load) as fs_file:\n imported_backend = importlib.import_module(self._backend)\n return imported_backend.load(fs_file, **self._load_args) # type: ignore\n\n def _save(self, data: Any) -> None:\n save_path = get_filepath_str(self._get_save_path(), self._protocol)\n\n with self._fs.open(save_path, **self._fs_open_args_save) as fs_file:\n try:\n imported_backend = importlib.import_module(self._backend)\n imported_backend.dump(data, fs_file, **self._save_args) # type: ignore\n except Exception as exc:\n raise DataSetError(\n f\"{data.__class__} was not serialised due to: {exc}\"\n ) from exc\n\n self._invalidate_cache()\n\n def _exists(self) -> bool:\n try:\n load_path = get_filepath_str(self._get_load_path(), self._protocol)\n except DataSetError:\n return False\n\n return self._fs.exists(load_path)\n\n def _release(self) -> None:\n super()._release()\n self._invalidate_cache()\n\n def _invalidate_cache(self) -> None:\n \"\"\"Invalidate underlying filesystem caches.\"\"\"\n filepath = get_filepath_str(self._filepath, self._protocol)\n self._fs.invalidate_cache(filepath)\n", "path": "kedro/extras/datasets/pickle/pickle_dataset.py"}]}
| 3,737 | 141 |
gh_patches_debug_42802 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1318 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
missing file or path in source: aha_region_de.py
Hi,
I recently installed Version 1.42.0 using HACS and cant get it to run.
Changed the adress to one of the test-adresses, but same issue.
That home directory '/home/silas/tmp/test.html' seems like debug file for some server-responds. But thats not going to work :)
Any ideas?
Thanks for your help!
configuration.yaml
```
waste_collection_schedule:
sources:
- name: aha_region_de
args:
gemeinde: "Hannover"
strasse: "Voltastr. / Vahrenwald"
hnr: "25"
zusatz: ""
```
```
Logger: waste_collection_schedule.source_shell
Source: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136
Integration: waste_collection_schedule (documentation)
First occurred: 20:08:22 (2 occurrences)
Last logged: 20:09:05
fetch failed for source Zweckverband Abfallwirtschaft Region Hannover: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py", line 85, in fetch with open("/home/silas/tmp/test.html", "w") as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [Errno 2] No such file or directory: '/home/silas/tmp/test.html'`
```
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py]
1 from waste_collection_schedule import Collection # type: ignore[attr-defined]
2 from waste_collection_schedule.service.ICS import ICS
3
4 import requests
5 from bs4 import BeautifulSoup
6
7 TITLE = "Zweckverband Abfallwirtschaft Region Hannover"
8 DESCRIPTION = "Source for Zweckverband Abfallwirtschaft Region Hannover."
9 URL = "https://www.aha-region.de/"
10 TEST_CASES = {
11 "Neustadt a. Rbge., Am Rotdorn / Nöpke, 1 ": {
12 "gemeinde": "Neustadt a. Rbge.",
13 "strasse": "Am Rotdorn / Nöpke",
14 "hnr": 1,
15 },
16 "Isernhagen, Am Lohner Hof / Isernhagen Fb, 10": {
17 "gemeinde": "Isernhagen",
18 "strasse": "Am Lohner Hof / Isernhagen Fb",
19 "hnr": "10",
20 },
21 "Hannover, Voltastr. / Vahrenwald, 25": {
22 "gemeinde": "Hannover",
23 "strasse": "Voltastr. / Vahrenwald",
24 "hnr": "25",
25 },
26 "Hannover, Melanchthonstr., 10A": {
27 "gemeinde": "Hannover",
28 "strasse": "Melanchthonstr.",
29 "hnr": "10",
30 "zusatz": "A",
31 }
32 }
33
34 ICON_MAP = {
35 "Restabfall": "mdi:trash-can",
36 "Glass": "mdi:bottle-soda",
37 "Bioabfall": "mdi:leaf",
38 "Papier": "mdi:package-variant",
39 "Leichtverpackungen": "mdi:recycle",
40 }
41
42 API_URL = "https://www.aha-region.de/abholtermine/abfuhrkalender"
43
44 class Source:
45 def __init__(self, gemeinde: str, strasse: str, hnr: str | int, zusatz: str | int = ""):
46 self._gemeinde: str = gemeinde
47 self._strasse: str = strasse
48 self._hnr: str = str(hnr)
49 self._zusatz: str = str(zusatz)
50 self._ics = ICS()
51
52 def fetch(self):
53 # find strassen_id
54 r = requests.get(API_URL, params={"gemeinde": self._gemeinde, "von": "A", "bis": "["})
55 r.raise_for_status()
56
57 strassen_id = None
58 selects = BeautifulSoup(r.text, "html.parser").find("select", {"id": "strasse"}).find_all("option")
59 for select in selects:
60 if select.text.lower().replace(" ", "") == self._strasse.lower().replace(" ", ""):
61 strassen_id = select["value"]
62 break
63
64 if not strassen_id:
65 raise Exception("Street not found for gemeinde: " + self._gemeinde + " and strasse: " + self._strasse)
66
67 # request overview page
68 args = {
69 "gemeinde": self._gemeinde,
70 "jsaus": "",
71 "strasse": strassen_id,
72 "hausnr": self._hnr,
73 "hausnraddon": self._zusatz,
74 "anzeigen": "Suchen",
75 }
76
77 r = requests.post(API_URL, data=args)
78 r.raise_for_status()
79
80 soup = BeautifulSoup(r.text, "html.parser")
81 # find all ICAL download buttons
82 download_buttons = soup.find_all("button", {"name": "ical_apple"})
83
84 if not download_buttons:
85 with open("/home/silas/tmp/test.html", "w") as f:
86 f.write(r.text)
87 raise Exception("Invalid response from server, check you configuration if it is correct.")
88
89 entries = []
90
91 for button in download_buttons:
92 # get form data and request ICAL file for every waste type
93 args = {}
94 args["ical_apple"] = button["value"]
95 form = button.parent
96 for input in form.find_all("input"):
97 args[input["name"]] = input["value"]
98
99 r = requests.post(API_URL, data=args)
100 r.encoding = "utf-8"
101
102 dates = self._ics.convert(r.text)
103
104 for d in dates:
105 bin_type = d[1].replace("Abfuhr", "").strip()
106 entries.append(Collection(d[0], bin_type, ICON_MAP.get(bin_type)))
107
108 return entries
109
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py
@@ -1,8 +1,7 @@
-from waste_collection_schedule import Collection # type: ignore[attr-defined]
-from waste_collection_schedule.service.ICS import ICS
-
import requests
from bs4 import BeautifulSoup
+from waste_collection_schedule import Collection # type: ignore[attr-defined]
+from waste_collection_schedule.service.ICS import ICS
TITLE = "Zweckverband Abfallwirtschaft Region Hannover"
DESCRIPTION = "Source for Zweckverband Abfallwirtschaft Region Hannover."
@@ -14,9 +13,9 @@
"hnr": 1,
},
"Isernhagen, Am Lohner Hof / Isernhagen Fb, 10": {
- "gemeinde": "Isernhagen",
- "strasse": "Am Lohner Hof / Isernhagen Fb",
- "hnr": "10",
+ "gemeinde": "Isernhagen",
+ "strasse": "Am Lohner Hof / Isernhagen Fb",
+ "hnr": "10",
},
"Hannover, Voltastr. / Vahrenwald, 25": {
"gemeinde": "Hannover",
@@ -28,7 +27,7 @@
"strasse": "Melanchthonstr.",
"hnr": "10",
"zusatz": "A",
- }
+ },
}
ICON_MAP = {
@@ -41,8 +40,11 @@
API_URL = "https://www.aha-region.de/abholtermine/abfuhrkalender"
+
class Source:
- def __init__(self, gemeinde: str, strasse: str, hnr: str | int, zusatz: str | int = ""):
+ def __init__(
+ self, gemeinde: str, strasse: str, hnr: str | int, zusatz: str | int = ""
+ ):
self._gemeinde: str = gemeinde
self._strasse: str = strasse
self._hnr: str = str(hnr)
@@ -51,18 +53,31 @@
def fetch(self):
# find strassen_id
- r = requests.get(API_URL, params={"gemeinde": self._gemeinde, "von": "A", "bis": "["})
+ r = requests.get(
+ API_URL, params={"gemeinde": self._gemeinde, "von": "A", "bis": "["}
+ )
r.raise_for_status()
strassen_id = None
- selects = BeautifulSoup(r.text, "html.parser").find("select", {"id": "strasse"}).find_all("option")
+ selects = (
+ BeautifulSoup(r.text, "html.parser")
+ .find("select", {"id": "strasse"})
+ .find_all("option")
+ )
for select in selects:
- if select.text.lower().replace(" ", "") == self._strasse.lower().replace(" ", ""):
+ if select.text.lower().replace(" ", "") == self._strasse.lower().replace(
+ " ", ""
+ ):
strassen_id = select["value"]
break
if not strassen_id:
- raise Exception("Street not found for gemeinde: " + self._gemeinde + " and strasse: " + self._strasse)
+ raise Exception(
+ "Street not found for gemeinde: "
+ + self._gemeinde
+ + " and strasse: "
+ + self._strasse
+ )
# request overview page
args = {
@@ -82,9 +97,9 @@
download_buttons = soup.find_all("button", {"name": "ical_apple"})
if not download_buttons:
- with open("/home/silas/tmp/test.html", "w") as f:
- f.write(r.text)
- raise Exception("Invalid response from server, check you configuration if it is correct.")
+ raise Exception(
+ "Invalid response from server, check you configuration if it is correct."
+ )
entries = []
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py\n@@ -1,8 +1,7 @@\n-from waste_collection_schedule import Collection # type: ignore[attr-defined]\n-from waste_collection_schedule.service.ICS import ICS\n-\n import requests\n from bs4 import BeautifulSoup\n+from waste_collection_schedule import Collection # type: ignore[attr-defined]\n+from waste_collection_schedule.service.ICS import ICS\n \n TITLE = \"Zweckverband Abfallwirtschaft Region Hannover\"\n DESCRIPTION = \"Source for Zweckverband Abfallwirtschaft Region Hannover.\"\n@@ -14,9 +13,9 @@\n \"hnr\": 1,\n },\n \"Isernhagen, Am Lohner Hof / Isernhagen Fb, 10\": {\n- \"gemeinde\": \"Isernhagen\",\n- \"strasse\": \"Am Lohner Hof / Isernhagen Fb\",\n- \"hnr\": \"10\",\n+ \"gemeinde\": \"Isernhagen\",\n+ \"strasse\": \"Am Lohner Hof / Isernhagen Fb\",\n+ \"hnr\": \"10\",\n },\n \"Hannover, Voltastr. / Vahrenwald, 25\": {\n \"gemeinde\": \"Hannover\",\n@@ -28,7 +27,7 @@\n \"strasse\": \"Melanchthonstr.\",\n \"hnr\": \"10\",\n \"zusatz\": \"A\",\n- }\n+ },\n }\n \n ICON_MAP = {\n@@ -41,8 +40,11 @@\n \n API_URL = \"https://www.aha-region.de/abholtermine/abfuhrkalender\"\n \n+\n class Source:\n- def __init__(self, gemeinde: str, strasse: str, hnr: str | int, zusatz: str | int = \"\"):\n+ def __init__(\n+ self, gemeinde: str, strasse: str, hnr: str | int, zusatz: str | int = \"\"\n+ ):\n self._gemeinde: str = gemeinde\n self._strasse: str = strasse\n self._hnr: str = str(hnr)\n@@ -51,18 +53,31 @@\n \n def fetch(self):\n # find strassen_id\n- r = requests.get(API_URL, params={\"gemeinde\": self._gemeinde, \"von\": \"A\", \"bis\": \"[\"})\n+ r = requests.get(\n+ API_URL, params={\"gemeinde\": self._gemeinde, \"von\": \"A\", \"bis\": \"[\"}\n+ )\n r.raise_for_status()\n \n strassen_id = None\n- selects = BeautifulSoup(r.text, \"html.parser\").find(\"select\", {\"id\": \"strasse\"}).find_all(\"option\")\n+ selects = (\n+ BeautifulSoup(r.text, \"html.parser\")\n+ .find(\"select\", {\"id\": \"strasse\"})\n+ .find_all(\"option\")\n+ )\n for select in selects:\n- if select.text.lower().replace(\" \", \"\") == self._strasse.lower().replace(\" \", \"\"):\n+ if select.text.lower().replace(\" \", \"\") == self._strasse.lower().replace(\n+ \" \", \"\"\n+ ):\n strassen_id = select[\"value\"]\n break\n \n if not strassen_id:\n- raise Exception(\"Street not found for gemeinde: \" + self._gemeinde + \" and strasse: \" + self._strasse)\n+ raise Exception(\n+ \"Street not found for gemeinde: \"\n+ + self._gemeinde\n+ + \" and strasse: \"\n+ + self._strasse\n+ )\n \n # request overview page\n args = {\n@@ -82,9 +97,9 @@\n download_buttons = soup.find_all(\"button\", {\"name\": \"ical_apple\"})\n \n if not download_buttons:\n- with open(\"/home/silas/tmp/test.html\", \"w\") as f:\n- f.write(r.text)\n- raise Exception(\"Invalid response from server, check you configuration if it is correct.\")\n+ raise Exception(\n+ \"Invalid response from server, check you configuration if it is correct.\"\n+ )\n \n entries = []\n", "issue": "missing file or path in source: aha_region_de.py\nHi,\r\nI recently installed Version 1.42.0 using HACS and cant get it to run.\r\nChanged the adress to one of the 
test-adresses, but same issue.\r\n\r\nThat home directory '/home/silas/tmp/test.html' seems like debug file for some server-responds. But thats not going to work :)\r\n\r\nAny ideas?\r\n\r\nThanks for your help!\r\n\r\nconfiguration.yaml\r\n```\r\nwaste_collection_schedule:\r\n sources:\r\n - name: aha_region_de\r\n args:\r\n gemeinde: \"Hannover\"\r\n strasse: \"Voltastr. / Vahrenwald\"\r\n hnr: \"25\"\r\n zusatz: \"\"\r\n```\r\n\r\n```\r\nLogger: waste_collection_schedule.source_shell\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136\r\nIntegration: waste_collection_schedule (documentation)\r\nFirst occurred: 20:08:22 (2 occurrences)\r\nLast logged: 20:09:05\r\n\r\nfetch failed for source Zweckverband Abfallwirtschaft Region Hannover: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py\", line 85, in fetch with open(\"/home/silas/tmp/test.html\", \"w\") as f: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ FileNotFoundError: [Errno 2] No such file or directory: '/home/silas/tmp/test.html'`\r\n```\n", "before_files": [{"content": "from waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.ICS import ICS\n\nimport requests\nfrom bs4 import BeautifulSoup\n\nTITLE = \"Zweckverband Abfallwirtschaft Region Hannover\"\nDESCRIPTION = \"Source for Zweckverband Abfallwirtschaft Region Hannover.\"\nURL = \"https://www.aha-region.de/\"\nTEST_CASES = {\n \"Neustadt a. Rbge., Am Rotdorn / N\u00f6pke, 1 \": {\n \"gemeinde\": \"Neustadt a. Rbge.\",\n \"strasse\": \"Am Rotdorn / N\u00f6pke\",\n \"hnr\": 1,\n },\n \"Isernhagen, Am Lohner Hof / Isernhagen Fb, 10\": {\n \"gemeinde\": \"Isernhagen\",\n \"strasse\": \"Am Lohner Hof / Isernhagen Fb\",\n \"hnr\": \"10\",\n },\n \"Hannover, Voltastr. / Vahrenwald, 25\": {\n \"gemeinde\": \"Hannover\",\n \"strasse\": \"Voltastr. 
/ Vahrenwald\",\n \"hnr\": \"25\",\n },\n \"Hannover, Melanchthonstr., 10A\": {\n \"gemeinde\": \"Hannover\",\n \"strasse\": \"Melanchthonstr.\",\n \"hnr\": \"10\",\n \"zusatz\": \"A\",\n }\n}\n\nICON_MAP = {\n \"Restabfall\": \"mdi:trash-can\",\n \"Glass\": \"mdi:bottle-soda\",\n \"Bioabfall\": \"mdi:leaf\",\n \"Papier\": \"mdi:package-variant\",\n \"Leichtverpackungen\": \"mdi:recycle\",\n}\n\nAPI_URL = \"https://www.aha-region.de/abholtermine/abfuhrkalender\"\n\nclass Source:\n def __init__(self, gemeinde: str, strasse: str, hnr: str | int, zusatz: str | int = \"\"):\n self._gemeinde: str = gemeinde\n self._strasse: str = strasse\n self._hnr: str = str(hnr)\n self._zusatz: str = str(zusatz)\n self._ics = ICS()\n\n def fetch(self):\n # find strassen_id\n r = requests.get(API_URL, params={\"gemeinde\": self._gemeinde, \"von\": \"A\", \"bis\": \"[\"})\n r.raise_for_status()\n\n strassen_id = None\n selects = BeautifulSoup(r.text, \"html.parser\").find(\"select\", {\"id\": \"strasse\"}).find_all(\"option\")\n for select in selects:\n if select.text.lower().replace(\" \", \"\") == self._strasse.lower().replace(\" \", \"\"):\n strassen_id = select[\"value\"]\n break\n\n if not strassen_id:\n raise Exception(\"Street not found for gemeinde: \" + self._gemeinde + \" and strasse: \" + self._strasse)\n\n # request overview page\n args = {\n \"gemeinde\": self._gemeinde,\n \"jsaus\": \"\",\n \"strasse\": strassen_id,\n \"hausnr\": self._hnr,\n \"hausnraddon\": self._zusatz,\n \"anzeigen\": \"Suchen\",\n }\n\n r = requests.post(API_URL, data=args)\n r.raise_for_status()\n\n soup = BeautifulSoup(r.text, \"html.parser\")\n # find all ICAL download buttons\n download_buttons = soup.find_all(\"button\", {\"name\": \"ical_apple\"})\n\n if not download_buttons:\n with open(\"/home/silas/tmp/test.html\", \"w\") as f:\n f.write(r.text)\n raise Exception(\"Invalid response from server, check you configuration if it is correct.\")\n\n entries = []\n\n for button in download_buttons:\n # get form data and request ICAL file for every waste type\n args = {}\n args[\"ical_apple\"] = button[\"value\"]\n form = button.parent\n for input in form.find_all(\"input\"):\n args[input[\"name\"]] = input[\"value\"]\n\n r = requests.post(API_URL, data=args)\n r.encoding = \"utf-8\"\n\n dates = self._ics.convert(r.text)\n\n for d in dates:\n bin_type = d[1].replace(\"Abfuhr\", \"\").strip()\n entries.append(Collection(d[0], bin_type, ICON_MAP.get(bin_type)))\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/aha_region_de.py"}]}
| 2,142 | 993 |
gh_patches_debug_4484 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-953 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pre_checkout_query does not store bot.
### Steps to reproduce
- On a PreCheckoutQueryHandler, get the PreCheckoutQuery object, update.pre_checkout_query
- Try to answer it; the bot has not been set:

  File "/home/folarte/sexychat/normalstate.py", line 998, in on_pcoq
    pcoq.answer(ok=True)
  File "/home/folarte/venv-sxc/local/lib/python3.6/site-packages/telegram/payment/precheckoutquery.py", line 115, in answer
    return self.bot.answer_pre_checkout_query(self.id, *args, **kwargs)
AttributeError: 'NoneType' object has no attribute 'answer_pre_checkout_query'
### Expected behaviour
pcoq.bot should contain the bot object.
### Actual behaviour
The bot object is not set. This is due to the de_json function being:

    @classmethod
    def de_json(cls, data, bot):
        if not data:
            return None

        data = super(PreCheckoutQuery, cls).de_json(data, bot)

        data['from_user'] = User.de_json(data.pop('from'), bot)
        data['order_info'] = OrderInfo.de_json(data.get('order_info'), bot)

        return cls(**data)
The last call should instead pass the bot to the constructor, as done in the CallbackQuery object:

        return cls(bot=bot, **data)

After editing the line to this, it works fine.
I do not know Git; I can try to do it, but it is a trivial fix, probably a typo.
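For illustration, here is a sketch of the corrected classmethod with that one-line change applied (the rest of the method is unchanged from the code above):

```python
    @classmethod
    def de_json(cls, data, bot):
        if not data:
            return None

        data = super(PreCheckoutQuery, cls).de_json(data, bot)

        data['from_user'] = User.de_json(data.pop('from'), bot)
        data['order_info'] = OrderInfo.de_json(data.get('order_info'), bot)

        # Passing the bot here is what lets pcoq.answer(...) reach
        # bot.answer_pre_checkout_query() later on.
        return cls(bot=bot, **data)
```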
### Configuration
Amazon Linux, aws instance.
$ python -m telegram
python-telegram-bot 9.0.0
certifi 2017.11.05
future 0.16.0
Python 3.6.2 (default, Nov 2 2017, 19:34:31) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]
</issue>
<code>
[start of telegram/payment/precheckoutquery.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2017
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains an object that represents a Telegram PreCheckoutQuery."""
20
21 from telegram import TelegramObject, User, OrderInfo
22
23
24 class PreCheckoutQuery(TelegramObject):
25 """This object contains information about an incoming pre-checkout query.
26
27 Note:
28 * In Python `from` is a reserved word, use `from_user` instead.
29
30 Attributes:
31 id (:obj:`str`): Unique query identifier.
32 from_user (:class:`telegram.User`): User who sent the query.
33 currency (:obj:`str`): Three-letter ISO 4217 currency code.
34 total_amount (:obj:`int`): Total price in the smallest units of the currency.
35 invoice_payload (:obj:`str`): Bot specified invoice payload.
36 shipping_option_id (:obj:`str`): Optional. Identifier of the shipping option chosen by the
37 user.
38 order_info (:class:`telegram.OrderInfo`): Optional. Order info provided by the user.
39 bot (:class:`telegram.Bot`): Optional. The Bot to use for instance methods.
40
41 Args:
42 id (:obj:`str`): Unique query identifier.
43 from_user (:class:`telegram.User`): User who sent the query.
44 currency (:obj:`str`): Three-letter ISO 4217 currency code
45 total_amount (:obj:`int`): Total price in the smallest units of the currency (integer, not
46 float/double). For example, for a price of US$ 1.45 pass amount = 145. See the exp
47 parameter in currencies.json, it shows the number of digits past the decimal point for
48 each currency (2 for the majority of currencies).
49 invoice_payload (:obj:`str`): Bot specified invoice payload.
50 shipping_option_id (:obj:`str`, optional): Identifier of the shipping option chosen by the
51 user.
52 order_info (:class:`telegram.OrderInfo`, optional): Order info provided by the user.
53 bot (:class:`telegram.Bot`, optional): The Bot to use for instance methods.
54 **kwargs (:obj:`dict`): Arbitrary keyword arguments.
55
56 """
57
58 def __init__(self,
59 id,
60 from_user,
61 currency,
62 total_amount,
63 invoice_payload,
64 shipping_option_id=None,
65 order_info=None,
66 bot=None,
67 **kwargs):
68 self.id = id
69 self.from_user = from_user
70 self.currency = currency
71 self.total_amount = total_amount
72 self.invoice_payload = invoice_payload
73 self.shipping_option_id = shipping_option_id
74 self.order_info = order_info
75
76 self.bot = bot
77
78 self._id_attrs = (self.id,)
79
80 @classmethod
81 def de_json(cls, data, bot):
82 if not data:
83 return None
84
85 data = super(PreCheckoutQuery, cls).de_json(data, bot)
86
87 data['from_user'] = User.de_json(data.pop('from'), bot)
88 data['order_info'] = OrderInfo.de_json(data.get('order_info'), bot)
89
90 return cls(**data)
91
92 def to_dict(self):
93 data = super(PreCheckoutQuery, self).to_dict()
94
95 data['from'] = data.pop('from_user', None)
96
97 return data
98
99 def answer(self, *args, **kwargs):
100 """Shortcut for::
101
102 bot.answer_pre_checkout_query(update.pre_checkout_query.id, *args, **kwargs)
103
104 Args:
105 ok (:obj:`bool`): Specify True if everything is alright (goods are available, etc.) and
106 the bot is ready to proceed with the order. Use False if there are any problems.
107 error_message (:obj:`str`, optional): Required if ok is False. Error message in human
108 readable form that explains the reason for failure to proceed with the checkout
109 (e.g. "Sorry, somebody just bought the last of our amazing black T-shirts while you
110 were busy filling out your payment details. Please choose a different color or
111 garment!"). Telegram will display this message to the user.
112 **kwargs (:obj:`dict`): Arbitrary keyword arguments.
113
114 """
115 return self.bot.answer_pre_checkout_query(self.id, *args, **kwargs)
116
[end of telegram/payment/precheckoutquery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/telegram/payment/precheckoutquery.py b/telegram/payment/precheckoutquery.py
--- a/telegram/payment/precheckoutquery.py
+++ b/telegram/payment/precheckoutquery.py
@@ -87,7 +87,7 @@
data['from_user'] = User.de_json(data.pop('from'), bot)
data['order_info'] = OrderInfo.de_json(data.get('order_info'), bot)
- return cls(**data)
+ return cls(bot=bot, **data)
def to_dict(self):
data = super(PreCheckoutQuery, self).to_dict()
|
{"golden_diff": "diff --git a/telegram/payment/precheckoutquery.py b/telegram/payment/precheckoutquery.py\n--- a/telegram/payment/precheckoutquery.py\n+++ b/telegram/payment/precheckoutquery.py\n@@ -87,7 +87,7 @@\n data['from_user'] = User.de_json(data.pop('from'), bot)\n data['order_info'] = OrderInfo.de_json(data.get('order_info'), bot)\n \n- return cls(**data)\n+ return cls(bot=bot, **data)\n \n def to_dict(self):\n data = super(PreCheckoutQuery, self).to_dict()\n", "issue": "pre_checkout_query does not store bot.\n\r\n### Steps to reproduce\r\n- On a PreChecoutQueryHandler, get the PreCheckoutQuery object update.pre_checkout_query\r\n\r\n- Try to answer it, bot has not been set:\r\n\r\n File \"/home/folarte/sexychat/nor File \"/home/folarte/sexychat/normalstate.py\", line 998, in on_pcoq\r\n pcoq.answer(ok=True)\r\n File \"/home/folarte/venv-sxc/local/lib/python3.6/site-packages/telegram/payment/precheckoutquery.py\", line 115, in answer\r\n return self.bot.answer_pre_checkout_query(self.id, *args, **kwargs)\r\nAttributeError: 'NoneType' object has no attribute 'answer_pre_checkout_query'\r\nmalstate.py\", line 998, in on_pcoq\r\n pcoq.answer(ok=True)\r\n File \"/home/folarte/venv-sxc/local/lib/python3.6/site-packages/telegram/payment/precheckoutquery.py\", line 115, in answer\r\n return self.bot.answer_pre_checkout_query(self.id, *args, **kwargs)\r\nAttributeError: 'NoneType' object has no attribute 'answer_pre_checkout_query'\r\n\r\n### Expected behaviour\r\n\r\npcoq.bot should contain the bot object.\r\n\r\n### Actual behaviour\r\n\r\nbot object is not set. Thi is due to the de_json function being:\r\n\r\n @classmethod\r\n def de_json(cls, data, bot):\r\n if not data:\r\n return None\r\n\r\n data = super(PreCheckoutQuery, cls).de_json(data, bot)\r\n\r\n data['from_user'] = User.de_json(data.pop('from'), bot)\r\n\tdata['order_info'] = OrderInfo.de_json(data.get('order_info'), bot)\r\n\r\n return cls(**data)\r\n\r\nWhen the last call should pass the bot to the constructor, as done in the callbackquery object:\r\n\r\n return cls(bot=bot, **data)\r\n\r\nWhen editing the line to these, it works fine.\r\n\r\nDo not know GIT, can try to do it, but it is a trivial fix, probably a typo.\r\n\r\n### Configuration\r\n\r\nAmazon Linux, aws instance.\r\n\r\n$ python -m telegram\r\npython-telegram-bot 9.0.0\r\ncertifi 2017.11.05\r\nfuture 0.16.0\r\nPython 3.6.2 (default, Nov 2 2017, 19:34:31) [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)]\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2017\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. 
If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram PreCheckoutQuery.\"\"\"\n\nfrom telegram import TelegramObject, User, OrderInfo\n\n\nclass PreCheckoutQuery(TelegramObject):\n \"\"\"This object contains information about an incoming pre-checkout query.\n\n Note:\n * In Python `from` is a reserved word, use `from_user` instead.\n\n Attributes:\n id (:obj:`str`): Unique query identifier.\n from_user (:class:`telegram.User`): User who sent the query.\n currency (:obj:`str`): Three-letter ISO 4217 currency code.\n total_amount (:obj:`int`): Total price in the smallest units of the currency.\n invoice_payload (:obj:`str`): Bot specified invoice payload.\n shipping_option_id (:obj:`str`): Optional. Identifier of the shipping option chosen by the\n user.\n order_info (:class:`telegram.OrderInfo`): Optional. Order info provided by the user.\n bot (:class:`telegram.Bot`): Optional. The Bot to use for instance methods.\n\n Args:\n id (:obj:`str`): Unique query identifier.\n from_user (:class:`telegram.User`): User who sent the query.\n currency (:obj:`str`): Three-letter ISO 4217 currency code\n total_amount (:obj:`int`): Total price in the smallest units of the currency (integer, not\n float/double). For example, for a price of US$ 1.45 pass amount = 145. See the exp\n parameter in currencies.json, it shows the number of digits past the decimal point for\n each currency (2 for the majority of currencies).\n invoice_payload (:obj:`str`): Bot specified invoice payload.\n shipping_option_id (:obj:`str`, optional): Identifier of the shipping option chosen by the\n user.\n order_info (:class:`telegram.OrderInfo`, optional): Order info provided by the user.\n bot (:class:`telegram.Bot`, optional): The Bot to use for instance methods.\n **kwargs (:obj:`dict`): Arbitrary keyword arguments.\n\n \"\"\"\n\n def __init__(self,\n id,\n from_user,\n currency,\n total_amount,\n invoice_payload,\n shipping_option_id=None,\n order_info=None,\n bot=None,\n **kwargs):\n self.id = id\n self.from_user = from_user\n self.currency = currency\n self.total_amount = total_amount\n self.invoice_payload = invoice_payload\n self.shipping_option_id = shipping_option_id\n self.order_info = order_info\n\n self.bot = bot\n\n self._id_attrs = (self.id,)\n\n @classmethod\n def de_json(cls, data, bot):\n if not data:\n return None\n\n data = super(PreCheckoutQuery, cls).de_json(data, bot)\n\n data['from_user'] = User.de_json(data.pop('from'), bot)\n data['order_info'] = OrderInfo.de_json(data.get('order_info'), bot)\n\n return cls(**data)\n\n def to_dict(self):\n data = super(PreCheckoutQuery, self).to_dict()\n\n data['from'] = data.pop('from_user', None)\n\n return data\n\n def answer(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.answer_pre_checkout_query(update.pre_checkout_query.id, *args, **kwargs)\n\n Args:\n ok (:obj:`bool`): Specify True if everything is alright (goods are available, etc.) and\n the bot is ready to proceed with the order. Use False if there are any problems.\n error_message (:obj:`str`, optional): Required if ok is False. Error message in human\n readable form that explains the reason for failure to proceed with the checkout\n (e.g. \"Sorry, somebody just bought the last of our amazing black T-shirts while you\n were busy filling out your payment details. Please choose a different color or\n garment!\"). 
Telegram will display this message to the user.\n **kwargs (:obj:`dict`): Arbitrary keyword arguments.\n\n \"\"\"\n return self.bot.answer_pre_checkout_query(self.id, *args, **kwargs)\n", "path": "telegram/payment/precheckoutquery.py"}]}
| 2,415 | 128 |
gh_patches_debug_30794 | rasdani/github-patches | git_diff | chainer__chainer-6991 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support ChainerX in F.GetItem backward
`GetItemGrad` does not support it yet.
Related: #5944
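A rough sketch of what the missing support amounts to (an assumption drawn from the conversion `GetItem.forward` already performs in the listing below): `chainerx.ndarray` indices would need to be converted to native arrays before `GetItemGrad` hands them to `numpy.add.at` / `scatter_add`.

```python
from chainer import backend
import chainerx

# Hypothetical helper, not the actual fix: mirrors the slice conversion
# that GetItem.forward applies, so that GetItemGrad could index native
# (numpy/cupy) arrays with ChainerX-provided indices as well.
def convert_chainerx_slices(slices):
    return tuple(
        backend.from_chx(s) if isinstance(s, chainerx.ndarray) else s
        for s in slices)
```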
</issue>
<code>
[start of chainer/functions/array/get_item.py]
1 import numpy
2
3 import chainer
4 from chainer import backend
5 from chainer import function_node
6 from chainer import utils
7 from chainer.utils import type_check
8 from chainer import variable
9 import chainerx
10
11
12 _numpy_supports_0d_bool_index = \
13 numpy.lib.NumpyVersion(numpy.__version__) >= '1.13.0'
14
15
16 class GetItem(function_node.FunctionNode):
17
18 """Function that slices array and extract elements."""
19
20 def __init__(self, slices):
21 if isinstance(slices, list):
22 if all([isinstance(s, int) for s in slices]):
23 slices = slices,
24 slices = tuple(slices)
25 elif not isinstance(slices, tuple):
26 slices = slices,
27
28 if chainer.is_debug():
29 n_ellipses = 0
30 for s in slices:
31 if s is Ellipsis:
32 n_ellipses += 1
33 if n_ellipses > 1:
34 raise ValueError('Only one Ellipsis is allowed')
35
36 self.slices = slices
37
38 def check_type_forward(self, in_types):
39 type_check._argname(in_types, ('x',))
40
41 def forward(self, xs):
42 slices = tuple([
43 backend.from_chx(s) if isinstance(s, chainerx.ndarray) else s
44 for s in self.slices])
45 return utils.force_array(xs[0][slices]),
46
47 def backward(self, indexes, gy):
48 return GetItemGrad(
49 self.slices, self.inputs[0].shape).apply(gy)
50
51
52 class GetItemGrad(function_node.FunctionNode):
53
54 def __init__(self, slices, in_shape):
55 self.slices = slices
56 self._in_shape = in_shape
57
58 def forward(self, inputs):
59 gy, = inputs
60 xp = backend.get_array_module(*inputs)
61 gx = xp.zeros(self._in_shape, gy.dtype)
62 if xp is numpy:
63 try:
64 numpy.add.at(gx, self.slices, gy)
65 except IndexError:
66 done = False
67 # In numpy<1.13, 0-dim boolean index is not supported in
68 # numpy.add.at and it's supported for 0-dim arr in
69 # arr.__getitem__.
70 if not _numpy_supports_0d_bool_index and len(self.slices) == 1:
71 idx = numpy.asanyarray(self.slices[0])
72 if idx.dtype == numpy.dtype(bool):
73 # Convert the array and the mask to 1-dim.
74 # numpy.add.at with them is supported in older numpy.
75 numpy.add.at(gx[None], idx[None], gy)
76 done = True
77
78 if not done:
79 msg = '''
80 GetItem does not support backward for this slices. The slices argument is not
81 supported by numpy.add.at, while it is supported by numpy.ndarray.__getitem__.
82
83 Please report this error to the issue tracker with the stack trace,
84 the information of your environment, and your script:
85 https://github.com/chainer/chainer/issues/new.
86 '''
87 raise IndexError(msg)
88 else:
89 gx.scatter_add(self.slices, inputs[0])
90 return gx,
91
92 def backward(self, indexes, ggx):
93 return GetItem(self.slices).apply(ggx)
94
95
96 def get_item(x, slices):
97 """Extract elements from array with specified shape, axes and offsets.
98
99 Args:
100 x (:class:`~chainer.Variable` or :ref:`ndarray`):
101 A variable to be sliced.
102 slices (int, slice, Ellipsis, None, integer array-like, boolean\
103 array-like or tuple of them):
104 An object to specify the selection of elements.
105
106 Returns:
107 A :class:`~chainer.Variable` object which contains sliced array of
108 ``x``.
109
110 .. note::
111
112 It only supports types that are supported by CUDA's atomicAdd when
113 an integer array is included in ``slices``.
114 The supported types are ``numpy.float32``, ``numpy.int32``,
115 ``numpy.uint32``, ``numpy.uint64`` and ``numpy.ulonglong``.
116
117 .. note::
118
119 It does not support ``slices`` that contains multiple boolean arrays.
120
121 .. note::
122
123 See NumPy documentation for details of `indexing
124 <https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>`_.
125
126 .. admonition:: Example
127
128 >>> x = np.arange(12).reshape((2, 2, 3))
129 >>> x
130 array([[[ 0, 1, 2],
131 [ 3, 4, 5]],
132 <BLANKLINE>
133 [[ 6, 7, 8],
134 [ 9, 10, 11]]])
135 >>> F.get_item(x, 0)
136 variable([[0, 1, 2],
137 [3, 4, 5]])
138 >>> F.get_item(x, (0, 0, slice(0, 2, 1))) # equals x[0, 0, 0:2:1]
139 variable([0, 1])
140 >>> F.get_item(x, (Ellipsis, 2)) # equals x[..., 2]
141 variable([[ 2, 5],
142 [ 8, 11]])
143 >>> F.get_item(x, (1, np.newaxis, 1, 0)) # equals x[1, None, 1, 0]
144 variable([9])
145
146 """
147 return GetItem(slices).apply((x,))[0]
148
149
150 def install_variable_get_item():
151 variable.Variable.__getitem__ = get_item
152
[end of chainer/functions/array/get_item.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/functions/array/get_item.py b/chainer/functions/array/get_item.py
--- a/chainer/functions/array/get_item.py
+++ b/chainer/functions/array/get_item.py
@@ -56,19 +56,23 @@
self._in_shape = in_shape
def forward(self, inputs):
+ slices = tuple([
+ backend.from_chx(s) if isinstance(s, chainerx.ndarray) else s
+ for s in self.slices])
+
gy, = inputs
xp = backend.get_array_module(*inputs)
gx = xp.zeros(self._in_shape, gy.dtype)
if xp is numpy:
try:
- numpy.add.at(gx, self.slices, gy)
+ numpy.add.at(gx, slices, gy)
except IndexError:
done = False
# In numpy<1.13, 0-dim boolean index is not supported in
# numpy.add.at and it's supported for 0-dim arr in
# arr.__getitem__.
- if not _numpy_supports_0d_bool_index and len(self.slices) == 1:
- idx = numpy.asanyarray(self.slices[0])
+ if not _numpy_supports_0d_bool_index and len(slices) == 1:
+ idx = numpy.asanyarray(slices[0])
if idx.dtype == numpy.dtype(bool):
# Convert the array and the mask to 1-dim.
# numpy.add.at with them is supported in older numpy.
@@ -86,7 +90,7 @@
'''
raise IndexError(msg)
else:
- gx.scatter_add(self.slices, inputs[0])
+ gx.scatter_add(slices, inputs[0])
return gx,
def backward(self, indexes, ggx):
|
{"golden_diff": "diff --git a/chainer/functions/array/get_item.py b/chainer/functions/array/get_item.py\n--- a/chainer/functions/array/get_item.py\n+++ b/chainer/functions/array/get_item.py\n@@ -56,19 +56,23 @@\n self._in_shape = in_shape\n \n def forward(self, inputs):\n+ slices = tuple([\n+ backend.from_chx(s) if isinstance(s, chainerx.ndarray) else s\n+ for s in self.slices])\n+\n gy, = inputs\n xp = backend.get_array_module(*inputs)\n gx = xp.zeros(self._in_shape, gy.dtype)\n if xp is numpy:\n try:\n- numpy.add.at(gx, self.slices, gy)\n+ numpy.add.at(gx, slices, gy)\n except IndexError:\n done = False\n # In numpy<1.13, 0-dim boolean index is not supported in\n # numpy.add.at and it's supported for 0-dim arr in\n # arr.__getitem__.\n- if not _numpy_supports_0d_bool_index and len(self.slices) == 1:\n- idx = numpy.asanyarray(self.slices[0])\n+ if not _numpy_supports_0d_bool_index and len(slices) == 1:\n+ idx = numpy.asanyarray(slices[0])\n if idx.dtype == numpy.dtype(bool):\n # Convert the array and the mask to 1-dim.\n # numpy.add.at with them is supported in older numpy.\n@@ -86,7 +90,7 @@\n '''\n raise IndexError(msg)\n else:\n- gx.scatter_add(self.slices, inputs[0])\n+ gx.scatter_add(slices, inputs[0])\n return gx,\n \n def backward(self, indexes, ggx):\n", "issue": "Support ChainerX in F.GetItem backward\n`GetItemGrad` does not suport it yet.\r\n\r\nRelated: #5944\n", "before_files": [{"content": "import numpy\n\nimport chainer\nfrom chainer import backend\nfrom chainer import function_node\nfrom chainer import utils\nfrom chainer.utils import type_check\nfrom chainer import variable\nimport chainerx\n\n\n_numpy_supports_0d_bool_index = \\\n numpy.lib.NumpyVersion(numpy.__version__) >= '1.13.0'\n\n\nclass GetItem(function_node.FunctionNode):\n\n \"\"\"Function that slices array and extract elements.\"\"\"\n\n def __init__(self, slices):\n if isinstance(slices, list):\n if all([isinstance(s, int) for s in slices]):\n slices = slices,\n slices = tuple(slices)\n elif not isinstance(slices, tuple):\n slices = slices,\n\n if chainer.is_debug():\n n_ellipses = 0\n for s in slices:\n if s is Ellipsis:\n n_ellipses += 1\n if n_ellipses > 1:\n raise ValueError('Only one Ellipsis is allowed')\n\n self.slices = slices\n\n def check_type_forward(self, in_types):\n type_check._argname(in_types, ('x',))\n\n def forward(self, xs):\n slices = tuple([\n backend.from_chx(s) if isinstance(s, chainerx.ndarray) else s\n for s in self.slices])\n return utils.force_array(xs[0][slices]),\n\n def backward(self, indexes, gy):\n return GetItemGrad(\n self.slices, self.inputs[0].shape).apply(gy)\n\n\nclass GetItemGrad(function_node.FunctionNode):\n\n def __init__(self, slices, in_shape):\n self.slices = slices\n self._in_shape = in_shape\n\n def forward(self, inputs):\n gy, = inputs\n xp = backend.get_array_module(*inputs)\n gx = xp.zeros(self._in_shape, gy.dtype)\n if xp is numpy:\n try:\n numpy.add.at(gx, self.slices, gy)\n except IndexError:\n done = False\n # In numpy<1.13, 0-dim boolean index is not supported in\n # numpy.add.at and it's supported for 0-dim arr in\n # arr.__getitem__.\n if not _numpy_supports_0d_bool_index and len(self.slices) == 1:\n idx = numpy.asanyarray(self.slices[0])\n if idx.dtype == numpy.dtype(bool):\n # Convert the array and the mask to 1-dim.\n # numpy.add.at with them is supported in older numpy.\n numpy.add.at(gx[None], idx[None], gy)\n done = True\n\n if not done:\n msg = '''\nGetItem does not support backward for this slices. 
The slices argument is not\nsupported by numpy.add.at, while it is supported by numpy.ndarray.__getitem__.\n\nPlease report this error to the issue tracker with the stack trace,\nthe information of your environment, and your script:\nhttps://github.com/chainer/chainer/issues/new.\n'''\n raise IndexError(msg)\n else:\n gx.scatter_add(self.slices, inputs[0])\n return gx,\n\n def backward(self, indexes, ggx):\n return GetItem(self.slices).apply(ggx)\n\n\ndef get_item(x, slices):\n \"\"\"Extract elements from array with specified shape, axes and offsets.\n\n Args:\n x (:class:`~chainer.Variable` or :ref:`ndarray`):\n A variable to be sliced.\n slices (int, slice, Ellipsis, None, integer array-like, boolean\\\n array-like or tuple of them):\n An object to specify the selection of elements.\n\n Returns:\n A :class:`~chainer.Variable` object which contains sliced array of\n ``x``.\n\n .. note::\n\n It only supports types that are supported by CUDA's atomicAdd when\n an integer array is included in ``slices``.\n The supported types are ``numpy.float32``, ``numpy.int32``,\n ``numpy.uint32``, ``numpy.uint64`` and ``numpy.ulonglong``.\n\n .. note::\n\n It does not support ``slices`` that contains multiple boolean arrays.\n\n .. note::\n\n See NumPy documentation for details of `indexing\n <https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html>`_.\n\n .. admonition:: Example\n\n >>> x = np.arange(12).reshape((2, 2, 3))\n >>> x\n array([[[ 0, 1, 2],\n [ 3, 4, 5]],\n <BLANKLINE>\n [[ 6, 7, 8],\n [ 9, 10, 11]]])\n >>> F.get_item(x, 0)\n variable([[0, 1, 2],\n [3, 4, 5]])\n >>> F.get_item(x, (0, 0, slice(0, 2, 1))) # equals x[0, 0, 0:2:1]\n variable([0, 1])\n >>> F.get_item(x, (Ellipsis, 2)) # equals x[..., 2]\n variable([[ 2, 5],\n [ 8, 11]])\n >>> F.get_item(x, (1, np.newaxis, 1, 0)) # equals x[1, None, 1, 0]\n variable([9])\n\n \"\"\"\n return GetItem(slices).apply((x,))[0]\n\n\ndef install_variable_get_item():\n variable.Variable.__getitem__ = get_item\n", "path": "chainer/functions/array/get_item.py"}]}
| 2,154 | 393 |
gh_patches_debug_12838 | rasdani/github-patches | git_diff | aws__aws-cli-429 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
VIMPAGER error
I realize that it is a bit of an aside, but it would be great to support alternative pagers.
```
~ $ echo $MANPAGER
/bin/sh -c "col -bx | vim -c 'set ft=man' -"
~ $ python --version 1
Python 2.7.5
~ $ pip --version
pip 1.4.1 from /Users/carl/.virtualenv/lib/python2.7/site-packages (python 2.7)
~ $ aws --version
aws-cli/1.1.0 Python/2.7.5 Darwin/12.5.0
~ $ aws help
-bx: -c: line 0: unexpected EOF while looking for matching `"'
-bx: -c: line 1: syntax error: unexpected end of file
```
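The error comes from splitting the pager command line on plain whitespace, which tears the quoted value above apart. A minimal sketch of the difference, using the standard `shlex` module (the approach the patch below takes):

```python
import shlex

pager = '/bin/sh -c "col -bx | vim -c \'set ft=man\' -"'

# Naive whitespace splitting breaks the quoted command into fragments:
print(pager.split())
# ['/bin/sh', '-c', '"col', '-bx', '|', 'vim', '-c', "'set", "ft=man'", '-"']

# shlex honours shell quoting and keeps it as a single argument:
print(shlex.split(pager))
# ['/bin/sh', '-c', "col -bx | vim -c 'set ft=man' -"]
```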
</issue>
<code>
[start of awscli/help.py]
1 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import sys
14 import logging
15 import os
16 import platform
17 from subprocess import Popen, PIPE
18
19 from docutils.core import publish_string
20 from docutils.writers import manpage
21 import bcdoc
22 from bcdoc.clidocs import ReSTDocument
23 from bcdoc.clidocs import ProviderDocumentEventHandler
24 from bcdoc.clidocs import ServiceDocumentEventHandler
25 from bcdoc.clidocs import OperationDocumentEventHandler
26 import bcdoc.clidocevents
27 from bcdoc.textwriter import TextWriter
28
29 from awscli.argprocess import ParamShorthand
30
31
32 LOG = logging.getLogger('awscli.help')
33
34
35 class ExecutableNotFoundError(Exception):
36 def __init__(self, executable_name):
37 super(ExecutableNotFoundError, self).__init__(
38 'Could not find executable named "%s"' % executable_name)
39
40
41 def get_renderer():
42 """
43 Return the appropriate HelpRenderer implementation for the
44 current platform.
45 """
46 if platform.system() == 'Windows':
47 return WindowsHelpRenderer()
48 else:
49 return PosixHelpRenderer()
50
51
52 class HelpRenderer(object):
53 """
54 Interface for a help renderer.
55
56 The renderer is responsible for displaying the help content on
57 a particular platform.
58 """
59
60 def render(self, contents):
61 """
62 Each implementation of HelpRenderer must implement this
63 render method.
64 """
65 pass
66
67
68 class PosixHelpRenderer(HelpRenderer):
69 """
70 Render help content on a Posix-like system. This includes
71 Linux and MacOS X.
72 """
73
74 PAGER = 'less -R'
75
76 def get_pager_cmdline(self):
77 pager = self.PAGER
78 if 'MANPAGER' in os.environ:
79 pager = os.environ['MANPAGER']
80 elif 'PAGER' in os.environ:
81 pager = os.environ['PAGER']
82 return pager.split()
83
84 def render(self, contents):
85 man_contents = publish_string(contents, writer=manpage.Writer())
86 if not self._exists_on_path('groff'):
87 raise ExecutableNotFoundError('groff')
88 cmdline = ['groff', '-man', '-T', 'ascii']
89 LOG.debug("Running command: %s", cmdline)
90 p3 = self._popen(cmdline, stdin=PIPE, stdout=PIPE)
91 groff_output = p3.communicate(input=man_contents)[0]
92 cmdline = self.get_pager_cmdline()
93 LOG.debug("Running command: %s", cmdline)
94 p4 = self._popen(cmdline, stdin=PIPE)
95 p4.communicate(input=groff_output)
96 sys.exit(1)
97
98 def _get_rst2man_name(self):
99 if self._exists_on_path('rst2man.py'):
100 return 'rst2man.py'
101 elif self._exists_on_path('rst2man'):
102 # Some distros like ubuntu will rename rst2man.py to rst2man
103 # if you install their version (i.e. "apt-get install
104 # python-docutils"). Though they could technically rename
105 # this to anything we'll support it renamed to 'rst2man' by
106 # explicitly checking for this case ourself.
107 return 'rst2man'
108 else:
109 # Give them the original name as set from docutils.
110 raise ExecutableNotFoundError('rst2man.py')
111
112 def _exists_on_path(self, name):
113 # Since we're only dealing with POSIX systems, we can
114 # ignore things like PATHEXT.
115 return any([os.path.exists(os.path.join(p, name))
116 for p in os.environ.get('PATH', []).split(os.pathsep)])
117
118 def _popen(self, *args, **kwargs):
119 return Popen(*args, **kwargs)
120
121
122 class WindowsHelpRenderer(HelpRenderer):
123 """
124 Render help content on a Windows platform.
125 """
126
127 def render(self, contents):
128 text_output = publish_string(contents,
129 writer=TextWriter())
130 sys.stdout.write(text_output.decode('utf-8'))
131 sys.exit(1)
132
133
134 class RawRenderer(HelpRenderer):
135 """
136 Render help as the raw ReST document.
137 """
138
139 def render(self, contents):
140 sys.stdout.write(contents)
141 sys.exit(1)
142
143
144 class HelpCommand(object):
145 """
146 HelpCommand Interface
147 ---------------------
148 A HelpCommand object acts as the interface between objects in the
149 CLI (e.g. Providers, Services, Operations, etc.) and the documentation
150 system (bcdoc).
151
152 A HelpCommand object wraps the object from the CLI space and provides
153 a consistent interface to critical information needed by the
154 documentation pipeline such as the object's name, description, etc.
155
156 The HelpCommand object is passed to the component of the
157 documentation pipeline that fires documentation events. It is
158 then passed on to each document event handler that has registered
159 for the events.
160
161 All HelpCommand objects contain the following attributes:
162
163 + ``session`` - A ``botocore`` ``Session`` object.
164 + ``obj`` - The object that is being documented.
165 + ``command_table`` - A dict mapping command names to
166 callable objects.
167 + ``arg_table`` - A dict mapping argument names to callable objects.
168 + ``doc`` - A ``Document`` object that is used to collect the
169 generated documentation.
170
171 In addition, please note the `properties` defined below which are
172 required to allow the object to be used in the document pipeline.
173
174 Implementations of HelpCommand are provided here for Provider,
175 Service and Operation objects. Other implementations for other
176 types of objects might be needed for customization in plugins.
177 As long as the implementations conform to this basic interface
178 it should be possible to pass them to the documentation system
179 and generate interactive and static help files.
180 """
181
182 EventHandlerClass = None
183 """
184 Each subclass should define this class variable to point to the
185 EventHandler class used by this HelpCommand.
186 """
187
188 def __init__(self, session, obj, command_table, arg_table):
189 self.session = session
190 self.obj = obj
191 self.command_table = command_table
192 self.arg_table = arg_table
193 self.renderer = get_renderer()
194 self.doc = ReSTDocument(target='man')
195
196 @property
197 def event_class(self):
198 """
199 Return the ``event_class`` for this object.
200
201 The ``event_class`` is used by the documentation pipeline
202 when generating documentation events. For the event below::
203
204 doc-title.<event_class>.<name>
205
206 The document pipeline would use this property to determine
207 the ``event_class`` value.
208 """
209 pass
210
211 @property
212 def name(self):
213 """
214 Return the name of the wrapped object.
215
216 This would be called by the document pipeline to determine
217 the ``name`` to be inserted into the event, as shown above.
218 """
219 pass
220
221 def __call__(self, args, parsed_globals):
222 # Create an event handler for a Provider Document
223 instance = self.EventHandlerClass(self)
224 # Now generate all of the events for a Provider document.
225 # We pass ourselves along so that we can, in turn, get passed
226 # to all event handlers.
227 bcdoc.clidocevents.generate_events(self.session, self)
228 self.renderer.render(self.doc.getvalue())
229 instance.unregister()
230
231
232 class ProviderHelpCommand(HelpCommand):
233 """Implements top level help command.
234
235 This is what is called when ``aws help`` is run.
236
237 """
238 EventHandlerClass = ProviderDocumentEventHandler
239
240 def __init__(self, session, command_table, arg_table,
241 description, synopsis, usage):
242 HelpCommand.__init__(self, session, session.provider,
243 command_table, arg_table)
244 self.description = description
245 self.synopsis = synopsis
246 self.help_usage = usage
247
248 @property
249 def event_class(self):
250 return 'Provider'
251
252 @property
253 def name(self):
254 return self.obj.name
255
256
257 class ServiceHelpCommand(HelpCommand):
258 """Implements service level help.
259
260 This is the object invoked whenever a service command
261 help is implemented, e.g. ``aws ec2 help``.
262
263 """
264
265 EventHandlerClass = ServiceDocumentEventHandler
266
267 def __init__(self, session, obj, command_table, arg_table, name,
268 event_class):
269 super(ServiceHelpCommand, self).__init__(session, obj, command_table,
270 arg_table)
271 self._name = name
272 self._event_class = event_class
273
274 @property
275 def event_class(self):
276 return self._event_class
277
278 @property
279 def name(self):
280 return self._name
281
282
283 class OperationHelpCommand(HelpCommand):
284 """Implements operation level help.
285
286 This is the object invoked whenever help for a service is requested,
287 e.g. ``aws ec2 describe-instances help``.
288
289 """
290 EventHandlerClass = OperationDocumentEventHandler
291
292 def __init__(self, session, service, operation, arg_table, name,
293 event_class):
294 HelpCommand.__init__(self, session, operation, None, arg_table)
295 self.service = service
296 self.param_shorthand = ParamShorthand()
297 self._name = name
298 self._event_class = event_class
299
300 @property
301 def event_class(self):
302 return self._event_class
303
304 @property
305 def name(self):
306 return self._name
307
[end of awscli/help.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/help.py b/awscli/help.py
--- a/awscli/help.py
+++ b/awscli/help.py
@@ -14,6 +14,7 @@
import logging
import os
import platform
+import shlex
from subprocess import Popen, PIPE
from docutils.core import publish_string
@@ -79,7 +80,7 @@
pager = os.environ['MANPAGER']
elif 'PAGER' in os.environ:
pager = os.environ['PAGER']
- return pager.split()
+ return shlex.split(pager)
def render(self, contents):
man_contents = publish_string(contents, writer=manpage.Writer())
|
{"golden_diff": "diff --git a/awscli/help.py b/awscli/help.py\n--- a/awscli/help.py\n+++ b/awscli/help.py\n@@ -14,6 +14,7 @@\n import logging\n import os\n import platform\n+import shlex\n from subprocess import Popen, PIPE\n \n from docutils.core import publish_string\n@@ -79,7 +80,7 @@\n pager = os.environ['MANPAGER']\n elif 'PAGER' in os.environ:\n pager = os.environ['PAGER']\n- return pager.split()\n+ return shlex.split(pager)\n \n def render(self, contents):\n man_contents = publish_string(contents, writer=manpage.Writer())\n", "issue": "VIMPAGER error\nI realize that it is a bit of an aside, but it would be great to support alternative pagers.\n\n```\n~ $ echo $MANPAGER\n/bin/sh -c \"col -bx | vim -c 'set ft=man' -\"\n~ $ python --version 1\nPython 2.7.5\n~ $ pip --version\npip 1.4.1 from /Users/carl/.virtualenv/lib/python2.7/site-packages (python 2.7)\n~ $ aws --version\naws-cli/1.1.0 Python/2.7.5 Darwin/12.5.0\n~ $ aws help\n-bx: -c: line 0: unexpected EOF while looking for matching `\"'\n-bx: -c: line 1: syntax error: unexpected end of file\n```\n\n", "before_files": [{"content": "# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport sys\nimport logging\nimport os\nimport platform\nfrom subprocess import Popen, PIPE\n\nfrom docutils.core import publish_string\nfrom docutils.writers import manpage\nimport bcdoc\nfrom bcdoc.clidocs import ReSTDocument\nfrom bcdoc.clidocs import ProviderDocumentEventHandler\nfrom bcdoc.clidocs import ServiceDocumentEventHandler\nfrom bcdoc.clidocs import OperationDocumentEventHandler\nimport bcdoc.clidocevents\nfrom bcdoc.textwriter import TextWriter\n\nfrom awscli.argprocess import ParamShorthand\n\n\nLOG = logging.getLogger('awscli.help')\n\n\nclass ExecutableNotFoundError(Exception):\n def __init__(self, executable_name):\n super(ExecutableNotFoundError, self).__init__(\n 'Could not find executable named \"%s\"' % executable_name)\n\n\ndef get_renderer():\n \"\"\"\n Return the appropriate HelpRenderer implementation for the\n current platform.\n \"\"\"\n if platform.system() == 'Windows':\n return WindowsHelpRenderer()\n else:\n return PosixHelpRenderer()\n\n\nclass HelpRenderer(object):\n \"\"\"\n Interface for a help renderer.\n\n The renderer is responsible for displaying the help content on\n a particular platform.\n \"\"\"\n\n def render(self, contents):\n \"\"\"\n Each implementation of HelpRenderer must implement this\n render method.\n \"\"\"\n pass\n\n\nclass PosixHelpRenderer(HelpRenderer):\n \"\"\"\n Render help content on a Posix-like system. 
This includes\n Linux and MacOS X.\n \"\"\"\n\n PAGER = 'less -R'\n\n def get_pager_cmdline(self):\n pager = self.PAGER\n if 'MANPAGER' in os.environ:\n pager = os.environ['MANPAGER']\n elif 'PAGER' in os.environ:\n pager = os.environ['PAGER']\n return pager.split()\n\n def render(self, contents):\n man_contents = publish_string(contents, writer=manpage.Writer())\n if not self._exists_on_path('groff'):\n raise ExecutableNotFoundError('groff')\n cmdline = ['groff', '-man', '-T', 'ascii']\n LOG.debug(\"Running command: %s\", cmdline)\n p3 = self._popen(cmdline, stdin=PIPE, stdout=PIPE)\n groff_output = p3.communicate(input=man_contents)[0]\n cmdline = self.get_pager_cmdline()\n LOG.debug(\"Running command: %s\", cmdline)\n p4 = self._popen(cmdline, stdin=PIPE)\n p4.communicate(input=groff_output)\n sys.exit(1)\n\n def _get_rst2man_name(self):\n if self._exists_on_path('rst2man.py'):\n return 'rst2man.py'\n elif self._exists_on_path('rst2man'):\n # Some distros like ubuntu will rename rst2man.py to rst2man\n # if you install their version (i.e. \"apt-get install\n # python-docutils\"). Though they could technically rename\n # this to anything we'll support it renamed to 'rst2man' by\n # explicitly checking for this case ourself.\n return 'rst2man'\n else:\n # Give them the original name as set from docutils.\n raise ExecutableNotFoundError('rst2man.py')\n\n def _exists_on_path(self, name):\n # Since we're only dealing with POSIX systems, we can\n # ignore things like PATHEXT.\n return any([os.path.exists(os.path.join(p, name))\n for p in os.environ.get('PATH', []).split(os.pathsep)])\n\n def _popen(self, *args, **kwargs):\n return Popen(*args, **kwargs)\n\n\nclass WindowsHelpRenderer(HelpRenderer):\n \"\"\"\n Render help content on a Windows platform.\n \"\"\"\n\n def render(self, contents):\n text_output = publish_string(contents,\n writer=TextWriter())\n sys.stdout.write(text_output.decode('utf-8'))\n sys.exit(1)\n\n\nclass RawRenderer(HelpRenderer):\n \"\"\"\n Render help as the raw ReST document.\n \"\"\"\n\n def render(self, contents):\n sys.stdout.write(contents)\n sys.exit(1)\n\n\nclass HelpCommand(object):\n \"\"\"\n HelpCommand Interface\n ---------------------\n A HelpCommand object acts as the interface between objects in the\n CLI (e.g. Providers, Services, Operations, etc.) and the documentation\n system (bcdoc).\n\n A HelpCommand object wraps the object from the CLI space and provides\n a consistent interface to critical information needed by the\n documentation pipeline such as the object's name, description, etc.\n\n The HelpCommand object is passed to the component of the\n documentation pipeline that fires documentation events. It is\n then passed on to each document event handler that has registered\n for the events.\n\n All HelpCommand objects contain the following attributes:\n\n + ``session`` - A ``botocore`` ``Session`` object.\n + ``obj`` - The object that is being documented.\n + ``command_table`` - A dict mapping command names to\n callable objects.\n + ``arg_table`` - A dict mapping argument names to callable objects.\n + ``doc`` - A ``Document`` object that is used to collect the\n generated documentation.\n\n In addition, please note the `properties` defined below which are\n required to allow the object to be used in the document pipeline.\n\n Implementations of HelpCommand are provided here for Provider,\n Service and Operation objects. 
Other implementations for other\n types of objects might be needed for customization in plugins.\n As long as the implementations conform to this basic interface\n it should be possible to pass them to the documentation system\n and generate interactive and static help files.\n \"\"\"\n\n EventHandlerClass = None\n \"\"\"\n Each subclass should define this class variable to point to the\n EventHandler class used by this HelpCommand.\n \"\"\"\n\n def __init__(self, session, obj, command_table, arg_table):\n self.session = session\n self.obj = obj\n self.command_table = command_table\n self.arg_table = arg_table\n self.renderer = get_renderer()\n self.doc = ReSTDocument(target='man')\n\n @property\n def event_class(self):\n \"\"\"\n Return the ``event_class`` for this object.\n\n The ``event_class`` is used by the documentation pipeline\n when generating documentation events. For the event below::\n\n doc-title.<event_class>.<name>\n\n The document pipeline would use this property to determine\n the ``event_class`` value.\n \"\"\"\n pass\n\n @property\n def name(self):\n \"\"\"\n Return the name of the wrapped object.\n\n This would be called by the document pipeline to determine\n the ``name`` to be inserted into the event, as shown above.\n \"\"\"\n pass\n\n def __call__(self, args, parsed_globals):\n # Create an event handler for a Provider Document\n instance = self.EventHandlerClass(self)\n # Now generate all of the events for a Provider document.\n # We pass ourselves along so that we can, in turn, get passed\n # to all event handlers.\n bcdoc.clidocevents.generate_events(self.session, self)\n self.renderer.render(self.doc.getvalue())\n instance.unregister()\n\n\nclass ProviderHelpCommand(HelpCommand):\n \"\"\"Implements top level help command.\n\n This is what is called when ``aws help`` is run.\n\n \"\"\"\n EventHandlerClass = ProviderDocumentEventHandler\n\n def __init__(self, session, command_table, arg_table,\n description, synopsis, usage):\n HelpCommand.__init__(self, session, session.provider,\n command_table, arg_table)\n self.description = description\n self.synopsis = synopsis\n self.help_usage = usage\n\n @property\n def event_class(self):\n return 'Provider'\n\n @property\n def name(self):\n return self.obj.name\n\n\nclass ServiceHelpCommand(HelpCommand):\n \"\"\"Implements service level help.\n\n This is the object invoked whenever a service command\n help is implemented, e.g. ``aws ec2 help``.\n\n \"\"\"\n\n EventHandlerClass = ServiceDocumentEventHandler\n\n def __init__(self, session, obj, command_table, arg_table, name,\n event_class):\n super(ServiceHelpCommand, self).__init__(session, obj, command_table,\n arg_table)\n self._name = name\n self._event_class = event_class\n\n @property\n def event_class(self):\n return self._event_class\n\n @property\n def name(self):\n return self._name\n\n\nclass OperationHelpCommand(HelpCommand):\n \"\"\"Implements operation level help.\n\n This is the object invoked whenever help for a service is requested,\n e.g. ``aws ec2 describe-instances help``.\n\n \"\"\"\n EventHandlerClass = OperationDocumentEventHandler\n\n def __init__(self, session, service, operation, arg_table, name,\n event_class):\n HelpCommand.__init__(self, session, operation, None, arg_table)\n self.service = service\n self.param_shorthand = ParamShorthand()\n self._name = name\n self._event_class = event_class\n\n @property\n def event_class(self):\n return self._event_class\n\n @property\n def name(self):\n return self._name\n", "path": "awscli/help.py"}]}
| 3,704 | 148 |
gh_patches_debug_41523 | rasdani/github-patches | git_diff | pypa__pip-6879 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't use file locking to protect selfcheck state file
**What's the problem this feature will solve?**
There are several issues around file locking that have been filed over the years, specifically related to:
1. Underlying OS/filesystem does not support hardlinks as used by the file lock (#2993, #5322, #6761)
2. Lingering lock files and/or lock files in an inconsistent state can cause pip to hang when attempting to acquire the lock (some of #3532, #5034)
3. lockfile uses hostname when creating its unique name, which can result in invalid paths when hostname includes a `/` (#6938)
**Describe the solution you'd like**
1. Write a selfcheck state file per-prefix, to remove the need to read and then write the file within a lock
2. Write the file atomically (write to a separate tmp file and then move into place) to avoid partial writes if the process is killed
This will satisfy the linked issues and help us progress on #4766 to remove lockfile entirely.
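A rough sketch of point 2, with illustrative names rather than pip's actual implementation: serialize the state, write it to a temporary file in the same directory, then move it into place in a single rename.

```python
import json
import os
import tempfile

def save_statefile(statefile_path, state):
    # Hypothetical helper: keep the temp file on the same filesystem so the
    # final os.replace() is a single atomic rename, and readers never see a
    # partially written state file.
    dirname = os.path.dirname(statefile_path)
    fd, tmp_path = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write(json.dumps(state, sort_keys=True, separators=(",", ":")))
    os.replace(tmp_path, statefile_path)
```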
**Alternative Solutions**
1. Switch to `MkdirLockFile` as currently used in the HTTP cache - the downside of this approach is that it is not backwards-compatible, so we would need to use a separate file to track the information for modern pip versions. If we would need to use a separate file anyway, we might as well go one step further to progress #4766.
**Additional context**
* PR #6855 - writes per-prefix selfcheck state files
* PR #6879 - removes file locking
</issue>
<code>
[start of src/pip/_internal/utils/outdated.py]
1 from __future__ import absolute_import
2
3 import datetime
4 import hashlib
5 import json
6 import logging
7 import os.path
8 import sys
9
10 from pip._vendor import lockfile, pkg_resources
11 from pip._vendor.packaging import version as packaging_version
12 from pip._vendor.six import ensure_binary
13
14 from pip._internal.cli.cmdoptions import make_search_scope
15 from pip._internal.index import PackageFinder
16 from pip._internal.models.selection_prefs import SelectionPreferences
17 from pip._internal.utils.compat import WINDOWS
18 from pip._internal.utils.filesystem import check_path_owner
19 from pip._internal.utils.misc import ensure_dir, get_installed_version
20 from pip._internal.utils.packaging import get_installer
21 from pip._internal.utils.typing import MYPY_CHECK_RUNNING
22
23 if MYPY_CHECK_RUNNING:
24 import optparse
25 from typing import Any, Dict, Text, Union
26 from pip._internal.download import PipSession
27
28
29 SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"
30
31
32 logger = logging.getLogger(__name__)
33
34
35 def _get_statefile_name(key):
36 # type: (Union[str, Text]) -> str
37 key_bytes = ensure_binary(key)
38 name = hashlib.sha224(key_bytes).hexdigest()
39 return name
40
41
42 class SelfCheckState(object):
43 def __init__(self, cache_dir):
44 # type: (str) -> None
45 self.state = {} # type: Dict[str, Any]
46 self.statefile_path = None
47
48 # Try to load the existing state
49 if cache_dir:
50 self.statefile_path = os.path.join(
51 cache_dir, "selfcheck", _get_statefile_name(self.key)
52 )
53 try:
54 with open(self.statefile_path) as statefile:
55 self.state = json.load(statefile)
56 except (IOError, ValueError, KeyError):
57 # Explicitly suppressing exceptions, since we don't want to
58 # error out if the cache file is invalid.
59 pass
60
61 @property
62 def key(self):
63 return sys.prefix
64
65 def save(self, pypi_version, current_time):
66 # type: (str, datetime.datetime) -> None
67 # If we do not have a path to cache in, don't bother saving.
68 if not self.statefile_path:
69 return
70
71 # Check to make sure that we own the directory
72 if not check_path_owner(os.path.dirname(self.statefile_path)):
73 return
74
75 # Now that we've ensured the directory is owned by this user, we'll go
76 # ahead and make sure that all our directories are created.
77 ensure_dir(os.path.dirname(self.statefile_path))
78
79 state = {
80 # Include the key so it's easy to tell which pip wrote the
81 # file.
82 "key": self.key,
83 "last_check": current_time.strftime(SELFCHECK_DATE_FMT),
84 "pypi_version": pypi_version,
85 }
86
87 text = json.dumps(state, sort_keys=True, separators=(",", ":"))
88
89 # Attempt to write out our version check file
90 with lockfile.LockFile(self.statefile_path):
91 # Since we have a prefix-specific state file, we can just
92 # overwrite whatever is there, no need to check.
93 with open(self.statefile_path, "w") as statefile:
94 statefile.write(text)
95
96
97 def was_installed_by_pip(pkg):
98 # type: (str) -> bool
99 """Checks whether pkg was installed by pip
100
101 This is used not to display the upgrade message when pip is in fact
102 installed by system package manager, such as dnf on Fedora.
103 """
104 try:
105 dist = pkg_resources.get_distribution(pkg)
106 return "pip" == get_installer(dist)
107 except pkg_resources.DistributionNotFound:
108 return False
109
110
111 def pip_version_check(session, options):
112 # type: (PipSession, optparse.Values) -> None
113 """Check for an update for pip.
114
115 Limit the frequency of checks to once per week. State is stored either in
116 the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix
117 of the pip script path.
118 """
119 installed_version = get_installed_version("pip")
120 if not installed_version:
121 return
122
123 pip_version = packaging_version.parse(installed_version)
124 pypi_version = None
125
126 try:
127 state = SelfCheckState(cache_dir=options.cache_dir)
128
129 current_time = datetime.datetime.utcnow()
130 # Determine if we need to refresh the state
131 if "last_check" in state.state and "pypi_version" in state.state:
132 last_check = datetime.datetime.strptime(
133 state.state["last_check"],
134 SELFCHECK_DATE_FMT
135 )
136 if (current_time - last_check).total_seconds() < 7 * 24 * 60 * 60:
137 pypi_version = state.state["pypi_version"]
138
139 # Refresh the version if we need to or just see if we need to warn
140 if pypi_version is None:
141 # Lets use PackageFinder to see what the latest pip version is
142 search_scope = make_search_scope(options, suppress_no_index=True)
143
144 # Pass allow_yanked=False so we don't suggest upgrading to a
145 # yanked version.
146 selection_prefs = SelectionPreferences(
147 allow_yanked=False,
148 allow_all_prereleases=False, # Explicitly set to False
149 )
150
151 finder = PackageFinder.create(
152 search_scope=search_scope,
153 selection_prefs=selection_prefs,
154 session=session,
155 )
156 best_candidate = finder.find_best_candidate("pip").best_candidate
157 if best_candidate is None:
158 return
159 pypi_version = str(best_candidate.version)
160
161 # save that we've performed a check
162 state.save(pypi_version, current_time)
163
164 remote_version = packaging_version.parse(pypi_version)
165
166 local_version_is_older = (
167 pip_version < remote_version and
168 pip_version.base_version != remote_version.base_version and
169 was_installed_by_pip('pip')
170 )
171
172 # Determine if our pypi_version is older
173 if not local_version_is_older:
174 return
175
176 # Advise "python -m pip" on Windows to avoid issues
177 # with overwriting pip.exe.
178 if WINDOWS:
179 pip_cmd = "python -m pip"
180 else:
181 pip_cmd = "pip"
182 logger.warning(
183 "You are using pip version %s, however version %s is "
184 "available.\nYou should consider upgrading via the "
185 "'%s install --upgrade pip' command.",
186 pip_version, pypi_version, pip_cmd
187 )
188 except Exception:
189 logger.debug(
190 "There was an error checking the latest version of pip",
191 exc_info=True,
192 )
193
[end of src/pip/_internal/utils/outdated.py]
[start of src/pip/_internal/utils/filesystem.py]
1 import os
2 import os.path
3 import shutil
4 import stat
5
6 from pip._internal.utils.compat import get_path_uid
7
8
9 def check_path_owner(path):
10 # type: (str) -> bool
11 # If we don't have a way to check the effective uid of this process, then
12 # we'll just assume that we own the directory.
13 if not hasattr(os, "geteuid"):
14 return True
15
16 previous = None
17 while path != previous:
18 if os.path.lexists(path):
19 # Check if path is writable by current user.
20 if os.geteuid() == 0:
21 # Special handling for root user in order to handle properly
22 # cases where users use sudo without -H flag.
23 try:
24 path_uid = get_path_uid(path)
25 except OSError:
26 return False
27 return path_uid == 0
28 else:
29 return os.access(path, os.W_OK)
30 else:
31 previous, path = path, os.path.dirname(path)
32 return False # assume we don't own the path
33
34
35 def copy2_fixed(src, dest):
36 # type: (str, str) -> None
37 """Wrap shutil.copy2() but map errors copying socket files to
38 SpecialFileError as expected.
39
40 See also https://bugs.python.org/issue37700.
41 """
42 try:
43 shutil.copy2(src, dest)
44 except (OSError, IOError):
45 for f in [src, dest]:
46 try:
47 is_socket_file = is_socket(f)
48 except OSError:
49 # An error has already occurred. Another error here is not
50 # a problem and we can ignore it.
51 pass
52 else:
53 if is_socket_file:
54 raise shutil.SpecialFileError("`%s` is a socket" % f)
55
56 raise
57
58
59 def is_socket(path):
60 # type: (str) -> bool
61 return stat.S_ISSOCK(os.lstat(path).st_mode)
62
[end of src/pip/_internal/utils/filesystem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_internal/utils/filesystem.py b/src/pip/_internal/utils/filesystem.py
--- a/src/pip/_internal/utils/filesystem.py
+++ b/src/pip/_internal/utils/filesystem.py
@@ -2,8 +2,26 @@
import os.path
import shutil
import stat
+from contextlib import contextmanager
+from tempfile import NamedTemporaryFile
+
+# NOTE: retrying is not annotated in typeshed as on 2017-07-17, which is
+# why we ignore the type on this import.
+from pip._vendor.retrying import retry # type: ignore
+from pip._vendor.six import PY2
from pip._internal.utils.compat import get_path_uid
+from pip._internal.utils.misc import cast
+from pip._internal.utils.typing import MYPY_CHECK_RUNNING
+
+if MYPY_CHECK_RUNNING:
+ from typing import BinaryIO, Iterator
+
+ class NamedTemporaryFileResult(BinaryIO):
+ @property
+ def file(self):
+ # type: () -> BinaryIO
+ pass
def check_path_owner(path):
@@ -59,3 +77,39 @@
def is_socket(path):
# type: (str) -> bool
return stat.S_ISSOCK(os.lstat(path).st_mode)
+
+
+@contextmanager
+def adjacent_tmp_file(path):
+ # type: (str) -> Iterator[NamedTemporaryFileResult]
+ """Given a path to a file, open a temp file next to it securely and ensure
+ it is written to disk after the context reaches its end.
+ """
+ with NamedTemporaryFile(
+ delete=False,
+ dir=os.path.dirname(path),
+ prefix=os.path.basename(path),
+ suffix='.tmp',
+ ) as f:
+ result = cast('NamedTemporaryFileResult', f)
+ try:
+ yield result
+ finally:
+ result.file.flush()
+ os.fsync(result.file.fileno())
+
+
+_replace_retry = retry(stop_max_delay=1000, wait_fixed=250)
+
+if PY2:
+ @_replace_retry
+ def replace(src, dest):
+ # type: (str, str) -> None
+ try:
+ os.rename(src, dest)
+ except OSError:
+ os.remove(dest)
+ os.rename(src, dest)
+
+else:
+ replace = _replace_retry(os.replace)
diff --git a/src/pip/_internal/utils/outdated.py b/src/pip/_internal/utils/outdated.py
--- a/src/pip/_internal/utils/outdated.py
+++ b/src/pip/_internal/utils/outdated.py
@@ -7,7 +7,7 @@
import os.path
import sys
-from pip._vendor import lockfile, pkg_resources
+from pip._vendor import pkg_resources
from pip._vendor.packaging import version as packaging_version
from pip._vendor.six import ensure_binary
@@ -15,7 +15,11 @@
from pip._internal.index import PackageFinder
from pip._internal.models.selection_prefs import SelectionPreferences
from pip._internal.utils.compat import WINDOWS
-from pip._internal.utils.filesystem import check_path_owner
+from pip._internal.utils.filesystem import (
+ adjacent_tmp_file,
+ check_path_owner,
+ replace,
+)
from pip._internal.utils.misc import ensure_dir, get_installed_version
from pip._internal.utils.packaging import get_installer
from pip._internal.utils.typing import MYPY_CHECK_RUNNING
@@ -86,12 +90,16 @@
text = json.dumps(state, sort_keys=True, separators=(",", ":"))
- # Attempt to write out our version check file
- with lockfile.LockFile(self.statefile_path):
+ with adjacent_tmp_file(self.statefile_path) as f:
+ f.write(ensure_binary(text))
+
+ try:
# Since we have a prefix-specific state file, we can just
# overwrite whatever is there, no need to check.
- with open(self.statefile_path, "w") as statefile:
- statefile.write(text)
+ replace(f.name, self.statefile_path)
+ except OSError:
+ # Best effort.
+ pass
def was_installed_by_pip(pkg):
|
{"golden_diff": "diff --git a/src/pip/_internal/utils/filesystem.py b/src/pip/_internal/utils/filesystem.py\n--- a/src/pip/_internal/utils/filesystem.py\n+++ b/src/pip/_internal/utils/filesystem.py\n@@ -2,8 +2,26 @@\n import os.path\n import shutil\n import stat\n+from contextlib import contextmanager\n+from tempfile import NamedTemporaryFile\n+\n+# NOTE: retrying is not annotated in typeshed as on 2017-07-17, which is\n+# why we ignore the type on this import.\n+from pip._vendor.retrying import retry # type: ignore\n+from pip._vendor.six import PY2\n \n from pip._internal.utils.compat import get_path_uid\n+from pip._internal.utils.misc import cast\n+from pip._internal.utils.typing import MYPY_CHECK_RUNNING\n+\n+if MYPY_CHECK_RUNNING:\n+ from typing import BinaryIO, Iterator\n+\n+ class NamedTemporaryFileResult(BinaryIO):\n+ @property\n+ def file(self):\n+ # type: () -> BinaryIO\n+ pass\n \n \n def check_path_owner(path):\n@@ -59,3 +77,39 @@\n def is_socket(path):\n # type: (str) -> bool\n return stat.S_ISSOCK(os.lstat(path).st_mode)\n+\n+\n+@contextmanager\n+def adjacent_tmp_file(path):\n+ # type: (str) -> Iterator[NamedTemporaryFileResult]\n+ \"\"\"Given a path to a file, open a temp file next to it securely and ensure\n+ it is written to disk after the context reaches its end.\n+ \"\"\"\n+ with NamedTemporaryFile(\n+ delete=False,\n+ dir=os.path.dirname(path),\n+ prefix=os.path.basename(path),\n+ suffix='.tmp',\n+ ) as f:\n+ result = cast('NamedTemporaryFileResult', f)\n+ try:\n+ yield result\n+ finally:\n+ result.file.flush()\n+ os.fsync(result.file.fileno())\n+\n+\n+_replace_retry = retry(stop_max_delay=1000, wait_fixed=250)\n+\n+if PY2:\n+ @_replace_retry\n+ def replace(src, dest):\n+ # type: (str, str) -> None\n+ try:\n+ os.rename(src, dest)\n+ except OSError:\n+ os.remove(dest)\n+ os.rename(src, dest)\n+\n+else:\n+ replace = _replace_retry(os.replace)\ndiff --git a/src/pip/_internal/utils/outdated.py b/src/pip/_internal/utils/outdated.py\n--- a/src/pip/_internal/utils/outdated.py\n+++ b/src/pip/_internal/utils/outdated.py\n@@ -7,7 +7,7 @@\n import os.path\n import sys\n \n-from pip._vendor import lockfile, pkg_resources\n+from pip._vendor import pkg_resources\n from pip._vendor.packaging import version as packaging_version\n from pip._vendor.six import ensure_binary\n \n@@ -15,7 +15,11 @@\n from pip._internal.index import PackageFinder\n from pip._internal.models.selection_prefs import SelectionPreferences\n from pip._internal.utils.compat import WINDOWS\n-from pip._internal.utils.filesystem import check_path_owner\n+from pip._internal.utils.filesystem import (\n+ adjacent_tmp_file,\n+ check_path_owner,\n+ replace,\n+)\n from pip._internal.utils.misc import ensure_dir, get_installed_version\n from pip._internal.utils.packaging import get_installer\n from pip._internal.utils.typing import MYPY_CHECK_RUNNING\n@@ -86,12 +90,16 @@\n \n text = json.dumps(state, sort_keys=True, separators=(\",\", \":\"))\n \n- # Attempt to write out our version check file\n- with lockfile.LockFile(self.statefile_path):\n+ with adjacent_tmp_file(self.statefile_path) as f:\n+ f.write(ensure_binary(text))\n+\n+ try:\n # Since we have a prefix-specific state file, we can just\n # overwrite whatever is there, no need to check.\n- with open(self.statefile_path, \"w\") as statefile:\n- statefile.write(text)\n+ replace(f.name, self.statefile_path)\n+ except OSError:\n+ # Best effort.\n+ pass\n \n \n def was_installed_by_pip(pkg):\n", "issue": "Don't use file locking to protect selfcheck state file\n**What's the 
problem this feature will solve?**\r\n\r\nThere are several issues around file locking that have been filed over the years, specifically related to:\r\n\r\n1. Underlying OS/filesystem does not support hardlinks as used by the file lock (#2993, #5322, #6761)\r\n2. Lingering lock files and/or lock files in an inconsistent state can cause pip to hang when attempting to acquire the lock (some of #3532, #5034)\r\n3. lockfile uses hostname when creating its unique name, which can result in invalid paths when hostname includes a `/` (#6938)\r\n\r\n**Describe the solution you'd like**\r\n\r\n1. Write a selfcheck state file per-prefix, to remove the need to read and then write the file within a lock\r\n2. Write the file atomically (write to a separate tmp file and then move into place) to avoid partial writes if the process is killed\r\n\r\nThis will satisfy the linked issues and help us progress on #4766 to remove lockfile entirely.\r\n\r\n**Alternative Solutions**\r\n\r\n1. Switch to `MkdirLockFile` as currently used in the HTTP cache - the downside of this approach is that it is not backwards-compatible, so we would need to use a separate file to track the information for modern pip versions. If we would need to use a separate file anyway, we might as well go one step further to progress #4766.\r\n\r\n**Additional context**\r\n\r\n* PR #6855 - writes per-prefix selfcheck state files\r\n* PR #6879 - removes file locking\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport datetime\nimport hashlib\nimport json\nimport logging\nimport os.path\nimport sys\n\nfrom pip._vendor import lockfile, pkg_resources\nfrom pip._vendor.packaging import version as packaging_version\nfrom pip._vendor.six import ensure_binary\n\nfrom pip._internal.cli.cmdoptions import make_search_scope\nfrom pip._internal.index import PackageFinder\nfrom pip._internal.models.selection_prefs import SelectionPreferences\nfrom pip._internal.utils.compat import WINDOWS\nfrom pip._internal.utils.filesystem import check_path_owner\nfrom pip._internal.utils.misc import ensure_dir, get_installed_version\nfrom pip._internal.utils.packaging import get_installer\nfrom pip._internal.utils.typing import MYPY_CHECK_RUNNING\n\nif MYPY_CHECK_RUNNING:\n import optparse\n from typing import Any, Dict, Text, Union\n from pip._internal.download import PipSession\n\n\nSELFCHECK_DATE_FMT = \"%Y-%m-%dT%H:%M:%SZ\"\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef _get_statefile_name(key):\n # type: (Union[str, Text]) -> str\n key_bytes = ensure_binary(key)\n name = hashlib.sha224(key_bytes).hexdigest()\n return name\n\n\nclass SelfCheckState(object):\n def __init__(self, cache_dir):\n # type: (str) -> None\n self.state = {} # type: Dict[str, Any]\n self.statefile_path = None\n\n # Try to load the existing state\n if cache_dir:\n self.statefile_path = os.path.join(\n cache_dir, \"selfcheck\", _get_statefile_name(self.key)\n )\n try:\n with open(self.statefile_path) as statefile:\n self.state = json.load(statefile)\n except (IOError, ValueError, KeyError):\n # Explicitly suppressing exceptions, since we don't want to\n # error out if the cache file is invalid.\n pass\n\n @property\n def key(self):\n return sys.prefix\n\n def save(self, pypi_version, current_time):\n # type: (str, datetime.datetime) -> None\n # If we do not have a path to cache in, don't bother saving.\n if not self.statefile_path:\n return\n\n # Check to make sure that we own the directory\n if not check_path_owner(os.path.dirname(self.statefile_path)):\n 
return\n\n # Now that we've ensured the directory is owned by this user, we'll go\n # ahead and make sure that all our directories are created.\n ensure_dir(os.path.dirname(self.statefile_path))\n\n state = {\n # Include the key so it's easy to tell which pip wrote the\n # file.\n \"key\": self.key,\n \"last_check\": current_time.strftime(SELFCHECK_DATE_FMT),\n \"pypi_version\": pypi_version,\n }\n\n text = json.dumps(state, sort_keys=True, separators=(\",\", \":\"))\n\n # Attempt to write out our version check file\n with lockfile.LockFile(self.statefile_path):\n # Since we have a prefix-specific state file, we can just\n # overwrite whatever is there, no need to check.\n with open(self.statefile_path, \"w\") as statefile:\n statefile.write(text)\n\n\ndef was_installed_by_pip(pkg):\n # type: (str) -> bool\n \"\"\"Checks whether pkg was installed by pip\n\n This is used not to display the upgrade message when pip is in fact\n installed by system package manager, such as dnf on Fedora.\n \"\"\"\n try:\n dist = pkg_resources.get_distribution(pkg)\n return \"pip\" == get_installer(dist)\n except pkg_resources.DistributionNotFound:\n return False\n\n\ndef pip_version_check(session, options):\n # type: (PipSession, optparse.Values) -> None\n \"\"\"Check for an update for pip.\n\n Limit the frequency of checks to once per week. State is stored either in\n the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix\n of the pip script path.\n \"\"\"\n installed_version = get_installed_version(\"pip\")\n if not installed_version:\n return\n\n pip_version = packaging_version.parse(installed_version)\n pypi_version = None\n\n try:\n state = SelfCheckState(cache_dir=options.cache_dir)\n\n current_time = datetime.datetime.utcnow()\n # Determine if we need to refresh the state\n if \"last_check\" in state.state and \"pypi_version\" in state.state:\n last_check = datetime.datetime.strptime(\n state.state[\"last_check\"],\n SELFCHECK_DATE_FMT\n )\n if (current_time - last_check).total_seconds() < 7 * 24 * 60 * 60:\n pypi_version = state.state[\"pypi_version\"]\n\n # Refresh the version if we need to or just see if we need to warn\n if pypi_version is None:\n # Lets use PackageFinder to see what the latest pip version is\n search_scope = make_search_scope(options, suppress_no_index=True)\n\n # Pass allow_yanked=False so we don't suggest upgrading to a\n # yanked version.\n selection_prefs = SelectionPreferences(\n allow_yanked=False,\n allow_all_prereleases=False, # Explicitly set to False\n )\n\n finder = PackageFinder.create(\n search_scope=search_scope,\n selection_prefs=selection_prefs,\n session=session,\n )\n best_candidate = finder.find_best_candidate(\"pip\").best_candidate\n if best_candidate is None:\n return\n pypi_version = str(best_candidate.version)\n\n # save that we've performed a check\n state.save(pypi_version, current_time)\n\n remote_version = packaging_version.parse(pypi_version)\n\n local_version_is_older = (\n pip_version < remote_version and\n pip_version.base_version != remote_version.base_version and\n was_installed_by_pip('pip')\n )\n\n # Determine if our pypi_version is older\n if not local_version_is_older:\n return\n\n # Advise \"python -m pip\" on Windows to avoid issues\n # with overwriting pip.exe.\n if WINDOWS:\n pip_cmd = \"python -m pip\"\n else:\n pip_cmd = \"pip\"\n logger.warning(\n \"You are using pip version %s, however version %s is \"\n \"available.\\nYou should consider upgrading via the \"\n \"'%s install --upgrade pip' command.\",\n pip_version, 
pypi_version, pip_cmd\n )\n except Exception:\n logger.debug(\n \"There was an error checking the latest version of pip\",\n exc_info=True,\n )\n", "path": "src/pip/_internal/utils/outdated.py"}, {"content": "import os\nimport os.path\nimport shutil\nimport stat\n\nfrom pip._internal.utils.compat import get_path_uid\n\n\ndef check_path_owner(path):\n # type: (str) -> bool\n # If we don't have a way to check the effective uid of this process, then\n # we'll just assume that we own the directory.\n if not hasattr(os, \"geteuid\"):\n return True\n\n previous = None\n while path != previous:\n if os.path.lexists(path):\n # Check if path is writable by current user.\n if os.geteuid() == 0:\n # Special handling for root user in order to handle properly\n # cases where users use sudo without -H flag.\n try:\n path_uid = get_path_uid(path)\n except OSError:\n return False\n return path_uid == 0\n else:\n return os.access(path, os.W_OK)\n else:\n previous, path = path, os.path.dirname(path)\n return False # assume we don't own the path\n\n\ndef copy2_fixed(src, dest):\n # type: (str, str) -> None\n \"\"\"Wrap shutil.copy2() but map errors copying socket files to\n SpecialFileError as expected.\n\n See also https://bugs.python.org/issue37700.\n \"\"\"\n try:\n shutil.copy2(src, dest)\n except (OSError, IOError):\n for f in [src, dest]:\n try:\n is_socket_file = is_socket(f)\n except OSError:\n # An error has already occurred. Another error here is not\n # a problem and we can ignore it.\n pass\n else:\n if is_socket_file:\n raise shutil.SpecialFileError(\"`%s` is a socket\" % f)\n\n raise\n\n\ndef is_socket(path):\n # type: (str) -> bool\n return stat.S_ISSOCK(os.lstat(path).st_mode)\n", "path": "src/pip/_internal/utils/filesystem.py"}]}
| 3,375 | 940 |
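
The golden diff above drops the lock entirely and writes each per-prefix state file atomically instead. Below is a minimal, standalone sketch of that write-to-a-temp-file-then-replace pattern; it is an illustration rather than pip's actual helper, it assumes Python 3 (for os.replace), and it omits the PY2/Windows retry wrapper the real patch adds:

    import os
    from tempfile import NamedTemporaryFile

    def write_state_atomically(path, text):
        # Hypothetical helper name; the patch above implements the same idea
        # via adjacent_tmp_file() plus replace() in pip's filesystem utils.
        # Create the temp file next to the target so the rename never has to
        # cross filesystems.
        with NamedTemporaryFile(mode="w", delete=False,
                                dir=os.path.dirname(path),
                                prefix=os.path.basename(path),
                                suffix=".tmp") as f:
            f.write(text)
            f.flush()
            os.fsync(f.fileno())
        # os.replace() is atomic when source and destination are on the same
        # filesystem, so readers never observe a half-written state file.
        os.replace(f.name, path)

Because every prefix gets its own state file, concurrent pip processes simply overwrite their own file; no cross-process lock is needed, which is what allows the patch to remove the lockfile usage.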
gh_patches_debug_2701
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-3835
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plot titles and x-labels overlapping in example
The plot titles and labels overlap in the 3rd image of https://docs.sunpy.org/en/latest/generated/gallery/acquiring_data/2011_06_07_sampledata_overview.html#sphx-glr-generated-gallery-acquiring-data-2011-06-07-sampledata-overview-py (see below). I'm guessing the tight-layout just needs tweaking.

</issue>
<code>
[start of examples/acquiring_data/2011_06_07_sampledata_overview.py]
1 # -*- coding: utf-8 -*-
2 """
3 ========================
4 Sample data set overview
5 ========================
6
7 An overview of the coordinated sample data set.
8 """
9 import matplotlib.pyplot as plt
10 import astropy.units as u
11
12 import sunpy.map
13 import sunpy.timeseries
14 import sunpy.data.sample as sample_data
15
16 ###############################################################################
17 # On 2011 June 7, various solar instruments observed a spectacular solar
18 # eruption from NOAA AR 11226. The event included an M2.5 flare, a
19 # filament eruption, a coronal mass ejection, and a global coronal EUV wave (IAU standard:
20 # SOL2011-06-07T06:24:00L045C112). This event was spectacular because it
21 # features the ejection of a large amount of prominence material, much of which
22 # failed to escape and fell back to the solar surface.
23 # This event received some press coverage (e.g. `National Geographics
24 # <https://news.nationalgeographic.com/news/2011/06/110608-solar-flare-sun-science-space/>`_,
25 # `Discover Magazine <http://blogs.discovermagazine.com/badastronomy/2011/06/07/the-sun-lets-loose-a-huge-explosion/>`_)
26 # and the literature contains a number of a papers about it (e.g. `Li et al.
27 # <https://iopscience.iop.org/article/10.1088/0004-637X/746/1/13/meta>`_,
28 # `Inglis et al. <https://iopscience.iop.org/article/10.1088/0004-637X/777/1/30/meta>`_)
29
30 ###############################################################################
31 # The following image of the flare is now fairly iconic.
32 aia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)
33 fig = plt.figure()
34 ax = fig.add_subplot(111, projection=aia_cutout03_map)
35 aia_cutout03_map.plot()
36 plt.show()
37
38 ###############################################################################
39 # Let's take a look at the GOES XRS data.
40 goes = sunpy.timeseries.TimeSeries(sample_data.GOES_XRS_TIMESERIES)
41 fig = plt.figure()
42 goes.plot()
43 plt.show()
44
45 ###############################################################################
46 # Next let's investigate the AIA full disk images that are available. Please
47 # note that these images are not at the full AIA resolution.
48
49 aia_131_map = sunpy.map.Map(sample_data.AIA_131_IMAGE)
50 aia_171_map = sunpy.map.Map(sample_data.AIA_171_IMAGE)
51 aia_211_map = sunpy.map.Map(sample_data.AIA_211_IMAGE)
52 aia_335_map = sunpy.map.Map(sample_data.AIA_335_IMAGE)
53 aia_094_map = sunpy.map.Map(sample_data.AIA_094_IMAGE)
54 aia_1600_map = sunpy.map.Map(sample_data.AIA_1600_IMAGE)
55
56 fig = plt.figure(figsize=(6, 28))
57 ax = fig.add_subplot(611, projection=aia_131_map)
58 aia_131_map.plot(clip_interval=(0.5, 99.9)*u.percent)
59 aia_131_map.draw_grid()
60
61 ax = fig.add_subplot(612, projection=aia_171_map)
62 aia_171_map.plot(clip_interval=(0.5, 99.9)*u.percent)
63 aia_171_map.draw_grid()
64
65 ax = fig.add_subplot(613, projection=aia_211_map)
66 aia_211_map.plot(clip_interval=(0.5, 99.9)*u.percent)
67 aia_211_map.draw_grid()
68
69 ax = fig.add_subplot(614, projection=aia_335_map)
70 aia_335_map.plot(clip_interval=(0.5, 99.9)*u.percent)
71 aia_335_map.draw_grid()
72
73 ax = fig.add_subplot(615, projection=aia_094_map)
74 aia_094_map.plot(clip_interval=(0.5, 99.9)*u.percent)
75 aia_094_map.draw_grid()
76
77 ax = fig.add_subplot(616, projection=aia_1600_map)
78 aia_1600_map.plot(clip_interval=(0.5, 99.9)*u.percent)
79 aia_1600_map.draw_grid()
80
81 fig.tight_layout(pad=6.50)
82 plt.show()
83
84 ###############################################################################
85 # We also provide a series of AIA cutouts so that you can get a sense of the
86 # dynamics of the in-falling material.
87 aia_cutout01_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT01_IMAGE)
88 aia_cutout02_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT02_IMAGE)
89 aia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)
90 aia_cutout04_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT04_IMAGE)
91 aia_cutout05_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT05_IMAGE)
92
93 fig = plt.figure(figsize=(6, 28))
94 ax = fig.add_subplot(511, projection=aia_cutout01_map)
95 aia_cutout01_map.plot()
96
97 ax = fig.add_subplot(512, projection=aia_cutout02_map)
98 aia_cutout02_map.plot()
99
100 ax = fig.add_subplot(513, projection=aia_cutout03_map)
101 aia_cutout03_map.plot()
102
103 ax = fig.add_subplot(514, projection=aia_cutout04_map)
104 aia_cutout04_map.plot()
105
106 ax = fig.add_subplot(515, projection=aia_cutout05_map)
107 aia_cutout05_map.plot()
108
109 fig.tight_layout(pad=5.50)
110 plt.show()
111
112 ###############################################################################
113 # There are a number of other data sources available as well, such as SWAP.
114 swap_map = sunpy.map.Map(sample_data.SWAP_LEVEL1_IMAGE)
115 fig = plt.figure()
116 swap_map.plot()
117 plt.show()
118
119 ###############################################################################
120 # And also RHESSI.
121 rhessi_map = sunpy.map.Map(sample_data.RHESSI_IMAGE)
122 fig = plt.figure()
123 rhessi_map.plot()
124 plt.show()
125
[end of examples/acquiring_data/2011_06_07_sampledata_overview.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/acquiring_data/2011_06_07_sampledata_overview.py b/examples/acquiring_data/2011_06_07_sampledata_overview.py
--- a/examples/acquiring_data/2011_06_07_sampledata_overview.py
+++ b/examples/acquiring_data/2011_06_07_sampledata_overview.py
@@ -78,7 +78,7 @@
aia_1600_map.plot(clip_interval=(0.5, 99.9)*u.percent)
aia_1600_map.draw_grid()
-fig.tight_layout(pad=6.50)
+fig.tight_layout(pad=8.50)
plt.show()
###############################################################################
|
{"golden_diff": "diff --git a/examples/acquiring_data/2011_06_07_sampledata_overview.py b/examples/acquiring_data/2011_06_07_sampledata_overview.py\n--- a/examples/acquiring_data/2011_06_07_sampledata_overview.py\n+++ b/examples/acquiring_data/2011_06_07_sampledata_overview.py\n@@ -78,7 +78,7 @@\n aia_1600_map.plot(clip_interval=(0.5, 99.9)*u.percent)\n aia_1600_map.draw_grid()\n \n-fig.tight_layout(pad=6.50)\n+fig.tight_layout(pad=8.50)\n plt.show()\n \n ###############################################################################\n", "issue": "Plot titles and x-labels overlapping in example\nThe plot titles and labels overlap in the 3rd image of https://docs.sunpy.org/en/latest/generated/gallery/acquiring_data/2011_06_07_sampledata_overview.html#sphx-glr-generated-gallery-acquiring-data-2011-06-07-sampledata-overview-py (see below). I'm guessing the tight-layout just needs tweaking.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n========================\nSample data set overview\n========================\n\nAn overview of the coordinated sample data set.\n\"\"\"\nimport matplotlib.pyplot as plt\nimport astropy.units as u\n\nimport sunpy.map\nimport sunpy.timeseries\nimport sunpy.data.sample as sample_data\n\n###############################################################################\n# On 2011 June 7, various solar instruments observed a spectacular solar\n# eruption from NOAA AR 11226. The event included an M2.5 flare, a\n# filament eruption, a coronal mass ejection, and a global coronal EUV wave (IAU standard:\n# SOL2011-06-07T06:24:00L045C112). This event was spectacular because it\n# features the ejection of a large amount of prominence material, much of which\n# failed to escape and fell back to the solar surface.\n# This event received some press coverage (e.g. `National Geographics\n# <https://news.nationalgeographic.com/news/2011/06/110608-solar-flare-sun-science-space/>`_,\n# `Discover Magazine <http://blogs.discovermagazine.com/badastronomy/2011/06/07/the-sun-lets-loose-a-huge-explosion/>`_)\n# and the literature contains a number of a papers about it (e.g. `Li et al.\n# <https://iopscience.iop.org/article/10.1088/0004-637X/746/1/13/meta>`_,\n# `Inglis et al. <https://iopscience.iop.org/article/10.1088/0004-637X/777/1/30/meta>`_)\n\n###############################################################################\n# The following image of the flare is now fairly iconic.\naia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)\nfig = plt.figure()\nax = fig.add_subplot(111, projection=aia_cutout03_map)\naia_cutout03_map.plot()\nplt.show()\n\n###############################################################################\n# Let's take a look at the GOES XRS data.\ngoes = sunpy.timeseries.TimeSeries(sample_data.GOES_XRS_TIMESERIES)\nfig = plt.figure()\ngoes.plot()\nplt.show()\n\n###############################################################################\n# Next let's investigate the AIA full disk images that are available. 
Please\n# note that these images are not at the full AIA resolution.\n\naia_131_map = sunpy.map.Map(sample_data.AIA_131_IMAGE)\naia_171_map = sunpy.map.Map(sample_data.AIA_171_IMAGE)\naia_211_map = sunpy.map.Map(sample_data.AIA_211_IMAGE)\naia_335_map = sunpy.map.Map(sample_data.AIA_335_IMAGE)\naia_094_map = sunpy.map.Map(sample_data.AIA_094_IMAGE)\naia_1600_map = sunpy.map.Map(sample_data.AIA_1600_IMAGE)\n\nfig = plt.figure(figsize=(6, 28))\nax = fig.add_subplot(611, projection=aia_131_map)\naia_131_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_131_map.draw_grid()\n\nax = fig.add_subplot(612, projection=aia_171_map)\naia_171_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_171_map.draw_grid()\n\nax = fig.add_subplot(613, projection=aia_211_map)\naia_211_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_211_map.draw_grid()\n\nax = fig.add_subplot(614, projection=aia_335_map)\naia_335_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_335_map.draw_grid()\n\nax = fig.add_subplot(615, projection=aia_094_map)\naia_094_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_094_map.draw_grid()\n\nax = fig.add_subplot(616, projection=aia_1600_map)\naia_1600_map.plot(clip_interval=(0.5, 99.9)*u.percent)\naia_1600_map.draw_grid()\n\nfig.tight_layout(pad=6.50)\nplt.show()\n\n###############################################################################\n# We also provide a series of AIA cutouts so that you can get a sense of the\n# dynamics of the in-falling material.\naia_cutout01_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT01_IMAGE)\naia_cutout02_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT02_IMAGE)\naia_cutout03_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT03_IMAGE)\naia_cutout04_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT04_IMAGE)\naia_cutout05_map = sunpy.map.Map(sample_data.AIA_193_CUTOUT05_IMAGE)\n\nfig = plt.figure(figsize=(6, 28))\nax = fig.add_subplot(511, projection=aia_cutout01_map)\naia_cutout01_map.plot()\n\nax = fig.add_subplot(512, projection=aia_cutout02_map)\naia_cutout02_map.plot()\n\nax = fig.add_subplot(513, projection=aia_cutout03_map)\naia_cutout03_map.plot()\n\nax = fig.add_subplot(514, projection=aia_cutout04_map)\naia_cutout04_map.plot()\n\nax = fig.add_subplot(515, projection=aia_cutout05_map)\naia_cutout05_map.plot()\n\nfig.tight_layout(pad=5.50)\nplt.show()\n\n###############################################################################\n# There are a number of other data sources available as well, such as SWAP.\nswap_map = sunpy.map.Map(sample_data.SWAP_LEVEL1_IMAGE)\nfig = plt.figure()\nswap_map.plot()\nplt.show()\n\n###############################################################################\n# And also RHESSI.\nrhessi_map = sunpy.map.Map(sample_data.RHESSI_IMAGE)\nfig = plt.figure()\nrhessi_map.plot()\nplt.show()\n", "path": "examples/acquiring_data/2011_06_07_sampledata_overview.py"}]}
| 2,499 | 170 |
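
The fix above is a one-line change to the pad argument of Figure.tight_layout, which controls the padding around the figure and, when h_pad/w_pad are not given, the padding between adjacent subplots as well, so an x-label no longer runs into the title of the panel below it. A small self-contained matplotlib sketch of the effect, using generic data and an arbitrary pad value rather than sunpy code:

    import matplotlib.pyplot as plt

    fig = plt.figure(figsize=(4, 10))
    for i in range(1, 4):
        ax = fig.add_subplot(3, 1, i)
        ax.plot(range(10), range(10))
        ax.set_title("Panel %d with a fairly long title" % i)
        ax.set_xlabel("x label that can collide with the title below")

    # With a small pad the stacked axes crowd each other; increasing it
    # (as the patch does, from 6.50 to 8.50) adds the missing breathing room.
    fig.tight_layout(pad=8.5)
    plt.show()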
gh_patches_debug_26918
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1567
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OpenID state length is too long for the PostgreSQL cache backend
Those two lines are not compatible together:
- https://github.com/Kinto/kinto/blob/c6cc7bba094aed6897d0157dc78b1731ac12c8db/kinto/core/cache/postgresql/schema.sql#L7
- https://github.com/Kinto/kinto/blob/c6cc7bba094aed6897d0157dc78b1731ac12c8db/kinto/plugins/openid/views.py#L97
</issue>
<code>
[start of kinto/plugins/openid/views.py]
1 import urllib.parse
2
3 import colander
4 import requests
5 from pyramid import httpexceptions
6
7 from cornice.validators import colander_validator
8 from kinto.core import Service
9 from kinto.core.errors import raise_invalid, ERRORS
10 from kinto.core.utils import random_bytes_hex
11 from kinto.core.resource.schema import ErrorResponseSchema
12 from kinto.core.schema import URL
13
14 from .utils import fetch_openid_config
15
16
17 DEFAULT_STATE_TTL_SECONDS = 3600
18
19
20 class RedirectHeadersSchema(colander.MappingSchema):
21 """Redirect response headers."""
22 location = colander.SchemaNode(colander.String(), name='Location')
23
24
25 class RedirectResponseSchema(colander.MappingSchema):
26 """Redirect response schema."""
27 headers = RedirectHeadersSchema()
28
29
30 response_schemas = {
31 '307': RedirectResponseSchema(description='Successful redirection.'),
32 '400': ErrorResponseSchema(description='The request is invalid.'),
33 }
34
35
36 def provider_validator(request, **kwargs):
37 """
38 This validator verifies that the validator in URL (eg. /openid/auth0/login)
39 is a configured OpenIDConnect policy.
40 """
41 provider = request.matchdict['provider']
42 used = request.registry.settings.get('multiauth.policy.%s.use' % provider, '')
43 if not used.endswith('OpenIDConnectPolicy'):
44 request.errors.add('path', 'provider', 'Unknow provider %r' % provider)
45
46
47 class LoginQuerystringSchema(colander.MappingSchema):
48 """
49 Querystring schema for the login endpoint.
50 """
51 callback = URL()
52 scope = colander.SchemaNode(colander.String())
53
54
55 class LoginSchema(colander.MappingSchema):
56 querystring = LoginQuerystringSchema()
57
58
59 login = Service(name='openid_login',
60 path='/openid/{provider}/login',
61 description='Initiate the OAuth2 login')
62
63
64 @login.get(schema=LoginSchema(),
65 validators=(colander_validator, provider_validator),
66 response_schemas=response_schemas)
67 def get_login(request):
68 """Initiates to login dance for the specified scopes and callback URI
69 using appropriate redirections."""
70
71 # Settings.
72 provider = request.matchdict['provider']
73 settings_prefix = 'multiauth.policy.%s.' % provider
74 issuer = request.registry.settings[settings_prefix + 'issuer']
75 client_id = request.registry.settings[settings_prefix + 'client_id']
76 userid_field = request.registry.settings.get(settings_prefix + 'userid_field')
77 state_ttl = int(request.registry.settings.get(settings_prefix + 'state_ttl_seconds',
78 DEFAULT_STATE_TTL_SECONDS))
79
80 # Read OpenID configuration (cached by issuer)
81 oid_config = fetch_openid_config(issuer)
82 auth_endpoint = oid_config['authorization_endpoint']
83
84 scope = request.GET['scope']
85 callback = request.GET['callback']
86
87 # Check that email scope is requested if userid field is configured as email.
88 if userid_field == 'email' and 'email' not in scope:
89 error_details = {
90 'name': 'scope',
91 'description': "Provider %s requires 'email' scope" % provider,
92 }
93 raise_invalid(request, **error_details)
94
95 # Generate a random string as state.
96 # And save it until code is traded.
97 state = random_bytes_hex(256)
98 request.registry.cache.set('openid:state:' + state, callback, ttl=state_ttl)
99
100 # Redirect the client to the Identity Provider that will eventually redirect
101 # to the OpenID token endpoint.
102 token_uri = request.route_url('openid_token', provider=provider) + '?'
103 params = dict(client_id=client_id, response_type='code', scope=scope,
104 redirect_uri=token_uri, state=state)
105 redirect = '{}?{}'.format(auth_endpoint, urllib.parse.urlencode(params))
106 raise httpexceptions.HTTPTemporaryRedirect(redirect)
107
108
109 class TokenQuerystringSchema(colander.MappingSchema):
110 """
111 Querystring schema for the token endpoint.
112 """
113 code = colander.SchemaNode(colander.String())
114 state = colander.SchemaNode(colander.String())
115
116
117 class TokenSchema(colander.MappingSchema):
118 querystring = TokenQuerystringSchema()
119
120
121 token = Service(name='openid_token',
122 path='/openid/{provider}/token',
123 description='')
124
125
126 @token.get(schema=TokenSchema(),
127 validators=(colander_validator, provider_validator))
128 def get_token(request):
129 """Trades the specified code and state against access and ID tokens.
130 The client is redirected to the original ``callback`` URI with the
131 result in querystring."""
132
133 # Settings.
134 provider = request.matchdict['provider']
135 settings_prefix = 'multiauth.policy.%s.' % provider
136 issuer = request.registry.settings[settings_prefix + 'issuer']
137 client_id = request.registry.settings[settings_prefix + 'client_id']
138 client_secret = request.registry.settings[settings_prefix + 'client_secret']
139
140 # Read OpenID configuration (cached by issuer)
141 oid_config = fetch_openid_config(issuer)
142 token_endpoint = oid_config['token_endpoint']
143
144 code = request.GET['code']
145 state = request.GET['state']
146
147 # State can be used only once.
148 callback = request.registry.cache.delete('openid:state:' + state)
149 if callback is None:
150 error_details = {
151 'name': 'state',
152 'description': 'Invalid state',
153 'errno': ERRORS.INVALID_AUTH_TOKEN.value,
154 }
155 raise_invalid(request, **error_details)
156
157 # Trade the code for tokens on the Identity Provider.
158 # Google Identity requires to specify again redirect_uri.
159 redirect_uri = request.route_url('openid_token', provider=provider) + '?'
160 data = {
161 'code': code,
162 'client_id': client_id,
163 'client_secret': client_secret,
164 'redirect_uri': redirect_uri,
165 'grant_type': 'authorization_code',
166 }
167 resp = requests.post(token_endpoint, data=data)
168
169 # The IdP response is forwarded to the client in the querystring/location hash.
170 # (eg. callback=`http://localhost:3000/#tokens=`)
171 redirect = callback + urllib.parse.quote(resp.text)
172 raise httpexceptions.HTTPTemporaryRedirect(redirect)
173
[end of kinto/plugins/openid/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/plugins/openid/views.py b/kinto/plugins/openid/views.py
--- a/kinto/plugins/openid/views.py
+++ b/kinto/plugins/openid/views.py
@@ -15,6 +15,7 @@
DEFAULT_STATE_TTL_SECONDS = 3600
+DEFAULT_STATE_LENGTH = 32
class RedirectHeadersSchema(colander.MappingSchema):
@@ -76,6 +77,8 @@
userid_field = request.registry.settings.get(settings_prefix + 'userid_field')
state_ttl = int(request.registry.settings.get(settings_prefix + 'state_ttl_seconds',
DEFAULT_STATE_TTL_SECONDS))
+ state_length = int(request.registry.settings.get(settings_prefix + 'state_length',
+ DEFAULT_STATE_LENGTH))
# Read OpenID configuration (cached by issuer)
oid_config = fetch_openid_config(issuer)
@@ -94,7 +97,7 @@
# Generate a random string as state.
# And save it until code is traded.
- state = random_bytes_hex(256)
+ state = random_bytes_hex(state_length)
request.registry.cache.set('openid:state:' + state, callback, ttl=state_ttl)
# Redirect the client to the Identity Provider that will eventually redirect
|
{"golden_diff": "diff --git a/kinto/plugins/openid/views.py b/kinto/plugins/openid/views.py\n--- a/kinto/plugins/openid/views.py\n+++ b/kinto/plugins/openid/views.py\n@@ -15,6 +15,7 @@\n \n \n DEFAULT_STATE_TTL_SECONDS = 3600\n+DEFAULT_STATE_LENGTH = 32\n \n \n class RedirectHeadersSchema(colander.MappingSchema):\n@@ -76,6 +77,8 @@\n userid_field = request.registry.settings.get(settings_prefix + 'userid_field')\n state_ttl = int(request.registry.settings.get(settings_prefix + 'state_ttl_seconds',\n DEFAULT_STATE_TTL_SECONDS))\n+ state_length = int(request.registry.settings.get(settings_prefix + 'state_length',\n+ DEFAULT_STATE_LENGTH))\n \n # Read OpenID configuration (cached by issuer)\n oid_config = fetch_openid_config(issuer)\n@@ -94,7 +97,7 @@\n \n # Generate a random string as state.\n # And save it until code is traded.\n- state = random_bytes_hex(256)\n+ state = random_bytes_hex(state_length)\n request.registry.cache.set('openid:state:' + state, callback, ttl=state_ttl)\n \n # Redirect the client to the Identity Provider that will eventually redirect\n", "issue": "OpenID state length is too long for the PostgreSQL cache backend\nThose two lines are not compatible together:\r\n\r\n- https://github.com/Kinto/kinto/blob/c6cc7bba094aed6897d0157dc78b1731ac12c8db/kinto/core/cache/postgresql/schema.sql#L7\r\n- https://github.com/Kinto/kinto/blob/c6cc7bba094aed6897d0157dc78b1731ac12c8db/kinto/plugins/openid/views.py#L97\nOpenID state length is too long for the PostgreSQL cache backend\nThose two lines are not compatible together:\r\n\r\n- https://github.com/Kinto/kinto/blob/c6cc7bba094aed6897d0157dc78b1731ac12c8db/kinto/core/cache/postgresql/schema.sql#L7\r\n- https://github.com/Kinto/kinto/blob/c6cc7bba094aed6897d0157dc78b1731ac12c8db/kinto/plugins/openid/views.py#L97\n", "before_files": [{"content": "import urllib.parse\n\nimport colander\nimport requests\nfrom pyramid import httpexceptions\n\nfrom cornice.validators import colander_validator\nfrom kinto.core import Service\nfrom kinto.core.errors import raise_invalid, ERRORS\nfrom kinto.core.utils import random_bytes_hex\nfrom kinto.core.resource.schema import ErrorResponseSchema\nfrom kinto.core.schema import URL\n\nfrom .utils import fetch_openid_config\n\n\nDEFAULT_STATE_TTL_SECONDS = 3600\n\n\nclass RedirectHeadersSchema(colander.MappingSchema):\n \"\"\"Redirect response headers.\"\"\"\n location = colander.SchemaNode(colander.String(), name='Location')\n\n\nclass RedirectResponseSchema(colander.MappingSchema):\n \"\"\"Redirect response schema.\"\"\"\n headers = RedirectHeadersSchema()\n\n\nresponse_schemas = {\n '307': RedirectResponseSchema(description='Successful redirection.'),\n '400': ErrorResponseSchema(description='The request is invalid.'),\n}\n\n\ndef provider_validator(request, **kwargs):\n \"\"\"\n This validator verifies that the validator in URL (eg. 
/openid/auth0/login)\n is a configured OpenIDConnect policy.\n \"\"\"\n provider = request.matchdict['provider']\n used = request.registry.settings.get('multiauth.policy.%s.use' % provider, '')\n if not used.endswith('OpenIDConnectPolicy'):\n request.errors.add('path', 'provider', 'Unknow provider %r' % provider)\n\n\nclass LoginQuerystringSchema(colander.MappingSchema):\n \"\"\"\n Querystring schema for the login endpoint.\n \"\"\"\n callback = URL()\n scope = colander.SchemaNode(colander.String())\n\n\nclass LoginSchema(colander.MappingSchema):\n querystring = LoginQuerystringSchema()\n\n\nlogin = Service(name='openid_login',\n path='/openid/{provider}/login',\n description='Initiate the OAuth2 login')\n\n\[email protected](schema=LoginSchema(),\n validators=(colander_validator, provider_validator),\n response_schemas=response_schemas)\ndef get_login(request):\n \"\"\"Initiates to login dance for the specified scopes and callback URI\n using appropriate redirections.\"\"\"\n\n # Settings.\n provider = request.matchdict['provider']\n settings_prefix = 'multiauth.policy.%s.' % provider\n issuer = request.registry.settings[settings_prefix + 'issuer']\n client_id = request.registry.settings[settings_prefix + 'client_id']\n userid_field = request.registry.settings.get(settings_prefix + 'userid_field')\n state_ttl = int(request.registry.settings.get(settings_prefix + 'state_ttl_seconds',\n DEFAULT_STATE_TTL_SECONDS))\n\n # Read OpenID configuration (cached by issuer)\n oid_config = fetch_openid_config(issuer)\n auth_endpoint = oid_config['authorization_endpoint']\n\n scope = request.GET['scope']\n callback = request.GET['callback']\n\n # Check that email scope is requested if userid field is configured as email.\n if userid_field == 'email' and 'email' not in scope:\n error_details = {\n 'name': 'scope',\n 'description': \"Provider %s requires 'email' scope\" % provider,\n }\n raise_invalid(request, **error_details)\n\n # Generate a random string as state.\n # And save it until code is traded.\n state = random_bytes_hex(256)\n request.registry.cache.set('openid:state:' + state, callback, ttl=state_ttl)\n\n # Redirect the client to the Identity Provider that will eventually redirect\n # to the OpenID token endpoint.\n token_uri = request.route_url('openid_token', provider=provider) + '?'\n params = dict(client_id=client_id, response_type='code', scope=scope,\n redirect_uri=token_uri, state=state)\n redirect = '{}?{}'.format(auth_endpoint, urllib.parse.urlencode(params))\n raise httpexceptions.HTTPTemporaryRedirect(redirect)\n\n\nclass TokenQuerystringSchema(colander.MappingSchema):\n \"\"\"\n Querystring schema for the token endpoint.\n \"\"\"\n code = colander.SchemaNode(colander.String())\n state = colander.SchemaNode(colander.String())\n\n\nclass TokenSchema(colander.MappingSchema):\n querystring = TokenQuerystringSchema()\n\n\ntoken = Service(name='openid_token',\n path='/openid/{provider}/token',\n description='')\n\n\[email protected](schema=TokenSchema(),\n validators=(colander_validator, provider_validator))\ndef get_token(request):\n \"\"\"Trades the specified code and state against access and ID tokens.\n The client is redirected to the original ``callback`` URI with the\n result in querystring.\"\"\"\n\n # Settings.\n provider = request.matchdict['provider']\n settings_prefix = 'multiauth.policy.%s.' 
% provider\n issuer = request.registry.settings[settings_prefix + 'issuer']\n client_id = request.registry.settings[settings_prefix + 'client_id']\n client_secret = request.registry.settings[settings_prefix + 'client_secret']\n\n # Read OpenID configuration (cached by issuer)\n oid_config = fetch_openid_config(issuer)\n token_endpoint = oid_config['token_endpoint']\n\n code = request.GET['code']\n state = request.GET['state']\n\n # State can be used only once.\n callback = request.registry.cache.delete('openid:state:' + state)\n if callback is None:\n error_details = {\n 'name': 'state',\n 'description': 'Invalid state',\n 'errno': ERRORS.INVALID_AUTH_TOKEN.value,\n }\n raise_invalid(request, **error_details)\n\n # Trade the code for tokens on the Identity Provider.\n # Google Identity requires to specify again redirect_uri.\n redirect_uri = request.route_url('openid_token', provider=provider) + '?'\n data = {\n 'code': code,\n 'client_id': client_id,\n 'client_secret': client_secret,\n 'redirect_uri': redirect_uri,\n 'grant_type': 'authorization_code',\n }\n resp = requests.post(token_endpoint, data=data)\n\n # The IdP response is forwarded to the client in the querystring/location hash.\n # (eg. callback=`http://localhost:3000/#tokens=`)\n redirect = callback + urllib.parse.quote(resp.text)\n raise httpexceptions.HTTPTemporaryRedirect(redirect)\n", "path": "kinto/plugins/openid/views.py"}]}
| 2,523 | 270 |
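
The underlying mismatch in the row above: random_bytes_hex(256) draws 256 random bytes, which hex-encode to a 512-character string, and the cache key 'openid:state:' + state then exceeds the key length that the linked PostgreSQL cache schema allows (assumed here to be 256 characters, per the issue's reference to schema.sql line 7). The patch makes the byte length a setting with a default of 32, i.e. 64 hex characters. A rough sketch of the arithmetic and the configurable default, using the standard-library secrets module instead of Kinto's helper, with a hypothetical function and provider name:

    import secrets

    CACHE_KEY_MAX = 256                  # assumed key column limit in the cache schema
    PREFIX = "openid:state:"

    def make_state(settings, provider="auth0"):
        settings_prefix = "multiauth.policy.%s." % provider
        nbytes = int(settings.get(settings_prefix + "state_length", 32))
        state = secrets.token_hex(nbytes)        # 2 * nbytes hex characters
        # 13-character prefix + 64 hex characters = 77 <= 256, whereas the old
        # 256-byte value gave 13 + 512 = 525 and could not be stored as a key.
        assert len(PREFIX + state) <= CACHE_KEY_MAX
        return state

    # Example: make_state({"multiauth.policy.auth0.state_length": "32"})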
gh_patches_debug_2814
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-496
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make offline archive look more like event archive
Same as #481. This is mainly about the filtering section.
</issue>
<code>
[start of apps/api/v0/article.py]
1 #-*- coding: utf-8 -*-
2 from copy import copy
3
4 from django.conf import settings
5 from django.template.defaultfilters import slugify
6 from django.utils import timezone
7
8 from filebrowser.base import FileObject
9 from filebrowser.settings import VERSIONS
10 from tastypie import fields
11 from tastypie.resources import ModelResource
12
13 from apps.api.v0.authentication import UserResource
14 from apps.article.models import Article, ArticleTag, Tag
15
16
17
18
19 class ArticleResource(ModelResource):
20 author = fields.ToOneField(UserResource, 'created_by')
21
22 def alter_list_data_to_serialize(self, request, data):
23 # Renames list data 'object' to 'articles'.
24 if isinstance(data, dict):
25 data['articles'] = copy(data['objects'])
26 del(data['objects'])
27 return data
28
29 # Making multiple images for the article
30 def dehydrate(self, bundle):
31
32 # Setting slug-field
33 bundle.data['slug'] = slugify(bundle.data['heading'])
34
35 # If image is set
36 if bundle.data['image']:
37 # Parse to FileObject used by Filebrowser
38 temp_image = FileObject(bundle.data['image'])
39
40 # Itterate the different versions (by key)
41 for ver in VERSIONS.keys():
42 # Check if the key start with article_ (if it does, we want to crop to that size)
43 if ver.startswith('article_'):
44 # Adding the new image to the object
45 bundle.data['image_'+ver] = temp_image.version_generate(ver).url
46
47 # Unset the image-field
48 del(bundle.data['image'])
49
50 # Returning washed object
51 return bundle
52
53 def get_object_list(self, request):
54 # Getting the GET-params
55 if 'tag' in request.GET:
56 request_tag = request.GET['tag']
57 else:
58 request_tag = None
59
60 if 'year' in request.GET:
61 request_year = request.GET['year']
62 else:
63 request_year = None
64
65 if 'month' in request.GET:
66 request_month = request.GET['month']
67 else:
68 request_month = None
69
70 # Check filtering here
71 if (request_year is not None):
72 if (request_month is not None):
73 # Filtering on both year and month
74 queryset = Article.objects.filter(published_date__year=request_year, published_date__month=request_month, published_date__lte=timezone.now()).order_by('-published_date')
75 else:
76 # Filtering on only year
77 queryset = Article.objects.filter(published_date__year=request_year, published_date__lte=timezone.now()).order_by('-published_date')
78 else:
79 # Not filtering on year, check if filtering on slug (tag) or return default query
80 if (request_tag is not None):
81 # Filtering on slug
82 slug_query = Tag.objects.filter(slug = request_tag)
83 slug_connect = ArticleTag.objects.filter(tag = slug_query).values('article_id')
84 queryset = Article.objects.filter(id__in = slug_connect, published_date__lte=timezone.now()).order_by('-published_date')
85 else:
86 # No filtering at all, return default query
87 queryset = Article.objects.filter(published_date__lte=timezone.now()).order_by('-published_date')
88 return queryset
89
90 class Meta:
91 API_LIMIT_PER_PAGE = 9
92 queryset = Article.objects.filter(published_date__lte=timezone.now())
93 resource_name = 'article/all'
94 ordering = ['-published_date']
95 include_absolute_url = True
96 filtering = {
97 'featured' : ('exact',),
98 'published_date' : ('gte',),
99 }
100
101 class ArticleLatestResource(ModelResource):
102 author = fields.ToOneField(UserResource, 'created_by')
103
104 class Meta:
105 queryset = Article.objects.filter(published_date__lte=timezone.now())
106
107 resource_name = 'article/latest'
108 filtering = {
109 'featured': ('exact',)
110 }
111 ordering = ['-published_date']
112 max_limit = 25
113 def alter_list_data_to_serialize(self, request, data):
114 # Renames list data 'object' to 'articles'.
115 if isinstance(data, dict):
116 data['articles'] = copy(data['objects'])
117 del(data['objects'])
118 return data
119 def dehydrate(self, bundle):
120 bundle.data['slug'] = slugify(bundle.data['heading'])
121 return bundle
122
[end of apps/api/v0/article.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/api/v0/article.py b/apps/api/v0/article.py
--- a/apps/api/v0/article.py
+++ b/apps/api/v0/article.py
@@ -17,7 +17,7 @@
class ArticleResource(ModelResource):
- author = fields.ToOneField(UserResource, 'created_by')
+ author = fields.ToOneField(UserResource, 'created_by', full=True)
def alter_list_data_to_serialize(self, request, data):
# Renames list data 'object' to 'articles'.
|
{"golden_diff": "diff --git a/apps/api/v0/article.py b/apps/api/v0/article.py\n--- a/apps/api/v0/article.py\n+++ b/apps/api/v0/article.py\n@@ -17,7 +17,7 @@\n \n \n class ArticleResource(ModelResource):\n- author = fields.ToOneField(UserResource, 'created_by')\n+ author = fields.ToOneField(UserResource, 'created_by', full=True)\n \n def alter_list_data_to_serialize(self, request, data):\n # Renames list data 'object' to 'articles'.\n", "issue": "Make offline archive look more like event archive\nSame as #481. This is mainly about the filtering section.\n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\nfrom copy import copy\n\nfrom django.conf import settings\nfrom django.template.defaultfilters import slugify\nfrom django.utils import timezone\n\nfrom filebrowser.base import FileObject\nfrom filebrowser.settings import VERSIONS\nfrom tastypie import fields\nfrom tastypie.resources import ModelResource\n\nfrom apps.api.v0.authentication import UserResource\nfrom apps.article.models import Article, ArticleTag, Tag\n\n\n\n\nclass ArticleResource(ModelResource):\n author = fields.ToOneField(UserResource, 'created_by')\n \n def alter_list_data_to_serialize(self, request, data):\n # Renames list data 'object' to 'articles'.\n if isinstance(data, dict):\n data['articles'] = copy(data['objects'])\n del(data['objects'])\n return data\n \n # Making multiple images for the article\n def dehydrate(self, bundle):\n \n # Setting slug-field\n bundle.data['slug'] = slugify(bundle.data['heading'])\n \n # If image is set\n if bundle.data['image']:\n # Parse to FileObject used by Filebrowser\n temp_image = FileObject(bundle.data['image'])\n \n # Itterate the different versions (by key)\n for ver in VERSIONS.keys():\n # Check if the key start with article_ (if it does, we want to crop to that size)\n if ver.startswith('article_'):\n # Adding the new image to the object\n bundle.data['image_'+ver] = temp_image.version_generate(ver).url\n \n # Unset the image-field\n del(bundle.data['image'])\n \n # Returning washed object\n return bundle\n \n def get_object_list(self, request):\n # Getting the GET-params\n if 'tag' in request.GET:\n request_tag = request.GET['tag']\n else:\n request_tag = None\n \n if 'year' in request.GET:\n request_year = request.GET['year']\n else:\n request_year = None\n \n if 'month' in request.GET:\n request_month = request.GET['month']\n else:\n request_month = None\n \n # Check filtering here\n if (request_year is not None):\n if (request_month is not None):\n # Filtering on both year and month\n queryset = Article.objects.filter(published_date__year=request_year, published_date__month=request_month, published_date__lte=timezone.now()).order_by('-published_date')\n else:\n # Filtering on only year\n queryset = Article.objects.filter(published_date__year=request_year, published_date__lte=timezone.now()).order_by('-published_date')\n else:\n # Not filtering on year, check if filtering on slug (tag) or return default query\n if (request_tag is not None):\n # Filtering on slug\n slug_query = Tag.objects.filter(slug = request_tag)\n slug_connect = ArticleTag.objects.filter(tag = slug_query).values('article_id')\n queryset = Article.objects.filter(id__in = slug_connect, published_date__lte=timezone.now()).order_by('-published_date')\n else:\n # No filtering at all, return default query\n queryset = Article.objects.filter(published_date__lte=timezone.now()).order_by('-published_date')\n return queryset\n \n class Meta: \n API_LIMIT_PER_PAGE = 9\n queryset = 
Article.objects.filter(published_date__lte=timezone.now())\n resource_name = 'article/all'\n ordering = ['-published_date']\n include_absolute_url = True\n filtering = {\n 'featured' : ('exact',),\n 'published_date' : ('gte',),\n }\n\nclass ArticleLatestResource(ModelResource):\n author = fields.ToOneField(UserResource, 'created_by')\n \n class Meta:\n queryset = Article.objects.filter(published_date__lte=timezone.now())\n \n resource_name = 'article/latest'\n filtering = {\n 'featured': ('exact',)\n }\n ordering = ['-published_date']\n max_limit = 25\n def alter_list_data_to_serialize(self, request, data):\n # Renames list data 'object' to 'articles'.\n if isinstance(data, dict): \n data['articles'] = copy(data['objects'])\n del(data['objects'])\n return data\n def dehydrate(self, bundle):\n bundle.data['slug'] = slugify(bundle.data['heading'])\n return bundle\n", "path": "apps/api/v0/article.py"}]}
| 1,750 | 115 |
gh_patches_debug_18808
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-13433
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improvement of `core.magic_arguments` example
Currently, there is only a [very raw example](https://ipython.readthedocs.io/en/stable/api/generated/IPython.core.magic_arguments.html?highlight=%40magic_arguments.argumen#module-IPython.core.magic_arguments
) of using `magic_arguments` with custom cell magic.
Therefore, I have the idea to add a second, more fleshed-out example that might help people understand and use cell magic arguments more easily:

Here is the code:
```py
from IPython.core import magic_arguments
from IPython.core.magic import register_cell_magic
@magic_arguments.magic_arguments()
@magic_arguments.argument(
"--option",
help=("Add an option here"),
)
@magic_arguments.argument(
"--style",
default=None,
help=("Add some style arguments"),
)
@register_cell_magic
def my_cell_magic(line, cell):
"""Cool cell magic"""
args = magic_arguments.parse_argstring(my_cell_magic, line)
print(args.style)
print(args.option)
```
</issue>
<code>
[start of IPython/core/magic_arguments.py]
1 ''' A decorator-based method of constructing IPython magics with `argparse`
2 option handling.
3
4 New magic functions can be defined like so::
5
6 from IPython.core.magic_arguments import (argument, magic_arguments,
7 parse_argstring)
8
9 @magic_arguments()
10 @argument('-o', '--option', help='An optional argument.')
11 @argument('arg', type=int, help='An integer positional argument.')
12 def magic_cool(self, arg):
13 """ A really cool magic command.
14
15 """
16 args = parse_argstring(magic_cool, arg)
17 ...
18
19 The `@magic_arguments` decorator marks the function as having argparse arguments.
20 The `@argument` decorator adds an argument using the same syntax as argparse's
21 `add_argument()` method. More sophisticated uses may also require the
22 `@argument_group` or `@kwds` decorator to customize the formatting and the
23 parsing.
24
25 Help text for the magic is automatically generated from the docstring and the
26 arguments::
27
28 In[1]: %cool?
29 %cool [-o OPTION] arg
30
31 A really cool magic command.
32
33 positional arguments:
34 arg An integer positional argument.
35
36 optional arguments:
37 -o OPTION, --option OPTION
38 An optional argument.
39
40 Inheritance diagram:
41
42 .. inheritance-diagram:: IPython.core.magic_arguments
43 :parts: 3
44
45 '''
46 #-----------------------------------------------------------------------------
47 # Copyright (C) 2010-2011, IPython Development Team.
48 #
49 # Distributed under the terms of the Modified BSD License.
50 #
51 # The full license is in the file COPYING.txt, distributed with this software.
52 #-----------------------------------------------------------------------------
53 import argparse
54 import re
55
56 # Our own imports
57 from IPython.core.error import UsageError
58 from IPython.utils.decorators import undoc
59 from IPython.utils.process import arg_split
60 from IPython.utils.text import dedent
61
62 NAME_RE = re.compile(r"[a-zA-Z][a-zA-Z0-9_-]*$")
63
64 @undoc
65 class MagicHelpFormatter(argparse.RawDescriptionHelpFormatter):
66 """A HelpFormatter with a couple of changes to meet our needs.
67 """
68 # Modified to dedent text.
69 def _fill_text(self, text, width, indent):
70 return argparse.RawDescriptionHelpFormatter._fill_text(self, dedent(text), width, indent)
71
72 # Modified to wrap argument placeholders in <> where necessary.
73 def _format_action_invocation(self, action):
74 if not action.option_strings:
75 metavar, = self._metavar_formatter(action, action.dest)(1)
76 return metavar
77
78 else:
79 parts = []
80
81 # if the Optional doesn't take a value, format is:
82 # -s, --long
83 if action.nargs == 0:
84 parts.extend(action.option_strings)
85
86 # if the Optional takes a value, format is:
87 # -s ARGS, --long ARGS
88 else:
89 default = action.dest.upper()
90 args_string = self._format_args(action, default)
91 # IPYTHON MODIFICATION: If args_string is not a plain name, wrap
92 # it in <> so it's valid RST.
93 if not NAME_RE.match(args_string):
94 args_string = "<%s>" % args_string
95 for option_string in action.option_strings:
96 parts.append('%s %s' % (option_string, args_string))
97
98 return ', '.join(parts)
99
100 # Override the default prefix ('usage') to our % magic escape,
101 # in a code block.
102 def add_usage(self, usage, actions, groups, prefix="::\n\n %"):
103 super(MagicHelpFormatter, self).add_usage(usage, actions, groups, prefix)
104
105 class MagicArgumentParser(argparse.ArgumentParser):
106 """ An ArgumentParser tweaked for use by IPython magics.
107 """
108 def __init__(self,
109 prog=None,
110 usage=None,
111 description=None,
112 epilog=None,
113 parents=None,
114 formatter_class=MagicHelpFormatter,
115 prefix_chars='-',
116 argument_default=None,
117 conflict_handler='error',
118 add_help=False):
119 if parents is None:
120 parents = []
121 super(MagicArgumentParser, self).__init__(prog=prog, usage=usage,
122 description=description, epilog=epilog,
123 parents=parents, formatter_class=formatter_class,
124 prefix_chars=prefix_chars, argument_default=argument_default,
125 conflict_handler=conflict_handler, add_help=add_help)
126
127 def error(self, message):
128 """ Raise a catchable error instead of exiting.
129 """
130 raise UsageError(message)
131
132 def parse_argstring(self, argstring):
133 """ Split a string into an argument list and parse that argument list.
134 """
135 argv = arg_split(argstring)
136 return self.parse_args(argv)
137
138
139 def construct_parser(magic_func):
140 """ Construct an argument parser using the function decorations.
141 """
142 kwds = getattr(magic_func, 'argcmd_kwds', {})
143 if 'description' not in kwds:
144 kwds['description'] = getattr(magic_func, '__doc__', None)
145 arg_name = real_name(magic_func)
146 parser = MagicArgumentParser(arg_name, **kwds)
147 # Reverse the list of decorators in order to apply them in the
148 # order in which they appear in the source.
149 group = None
150 for deco in magic_func.decorators[::-1]:
151 result = deco.add_to_parser(parser, group)
152 if result is not None:
153 group = result
154
155 # Replace the magic function's docstring with the full help text.
156 magic_func.__doc__ = parser.format_help()
157
158 return parser
159
160
161 def parse_argstring(magic_func, argstring):
162 """ Parse the string of arguments for the given magic function.
163 """
164 return magic_func.parser.parse_argstring(argstring)
165
166
167 def real_name(magic_func):
168 """ Find the real name of the magic.
169 """
170 magic_name = magic_func.__name__
171 if magic_name.startswith('magic_'):
172 magic_name = magic_name[len('magic_'):]
173 return getattr(magic_func, 'argcmd_name', magic_name)
174
175
176 class ArgDecorator(object):
177 """ Base class for decorators to add ArgumentParser information to a method.
178 """
179
180 def __call__(self, func):
181 if not getattr(func, 'has_arguments', False):
182 func.has_arguments = True
183 func.decorators = []
184 func.decorators.append(self)
185 return func
186
187 def add_to_parser(self, parser, group):
188 """ Add this object's information to the parser, if necessary.
189 """
190 pass
191
192
193 class magic_arguments(ArgDecorator):
194 """ Mark the magic as having argparse arguments and possibly adjust the
195 name.
196 """
197
198 def __init__(self, name=None):
199 self.name = name
200
201 def __call__(self, func):
202 if not getattr(func, 'has_arguments', False):
203 func.has_arguments = True
204 func.decorators = []
205 if self.name is not None:
206 func.argcmd_name = self.name
207 # This should be the first decorator in the list of decorators, thus the
208 # last to execute. Build the parser.
209 func.parser = construct_parser(func)
210 return func
211
212
213 class ArgMethodWrapper(ArgDecorator):
214
215 """
216 Base class to define a wrapper for ArgumentParser method.
217
218 Child class must define either `_method_name` or `add_to_parser`.
219
220 """
221
222 _method_name = None
223
224 def __init__(self, *args, **kwds):
225 self.args = args
226 self.kwds = kwds
227
228 def add_to_parser(self, parser, group):
229 """ Add this object's information to the parser.
230 """
231 if group is not None:
232 parser = group
233 getattr(parser, self._method_name)(*self.args, **self.kwds)
234 return None
235
236
237 class argument(ArgMethodWrapper):
238 """ Store arguments and keywords to pass to add_argument().
239
240 Instances also serve to decorate command methods.
241 """
242 _method_name = 'add_argument'
243
244
245 class defaults(ArgMethodWrapper):
246 """ Store arguments and keywords to pass to set_defaults().
247
248 Instances also serve to decorate command methods.
249 """
250 _method_name = 'set_defaults'
251
252
253 class argument_group(ArgMethodWrapper):
254 """ Store arguments and keywords to pass to add_argument_group().
255
256 Instances also serve to decorate command methods.
257 """
258
259 def add_to_parser(self, parser, group):
260 """ Add this object's information to the parser.
261 """
262 return parser.add_argument_group(*self.args, **self.kwds)
263
264
265 class kwds(ArgDecorator):
266 """ Provide other keywords to the sub-parser constructor.
267 """
268 def __init__(self, **kwds):
269 self.kwds = kwds
270
271 def __call__(self, func):
272 func = super(kwds, self).__call__(func)
273 func.argcmd_kwds = self.kwds
274 return func
275
276
277 __all__ = ['magic_arguments', 'argument', 'argument_group', 'kwds',
278 'parse_argstring']
279
[end of IPython/core/magic_arguments.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/IPython/core/magic_arguments.py b/IPython/core/magic_arguments.py
--- a/IPython/core/magic_arguments.py
+++ b/IPython/core/magic_arguments.py
@@ -37,6 +37,38 @@
-o OPTION, --option OPTION
An optional argument.
+Here is an elaborated example that uses default parameters in `argument` and calls the `args` in the cell magic::
+
+ from IPython.core.magic import register_cell_magic
+ from IPython.core.magic_arguments import (argument, magic_arguments,
+ parse_argstring)
+
+
+ @magic_arguments()
+ @argument(
+ "--option",
+ "-o",
+ help=("Add an option here"),
+ )
+ @argument(
+ "--style",
+ "-s",
+ default="foo",
+ help=("Add some style arguments"),
+ )
+ @register_cell_magic
+ def my_cell_magic(line, cell):
+ args = parse_argstring(my_cell_magic, line)
+ print(f"{args.option=}")
+ print(f"{args.style=}")
+ print(f"{cell=}")
+
+In a jupyter notebook, this cell magic can be executed like this::
+
+ %%my_cell_magic -o Hello
+ print("bar")
+ i = 42
+
Inheritance diagram:
.. inheritance-diagram:: IPython.core.magic_arguments
|
{"golden_diff": "diff --git a/IPython/core/magic_arguments.py b/IPython/core/magic_arguments.py\n--- a/IPython/core/magic_arguments.py\n+++ b/IPython/core/magic_arguments.py\n@@ -37,6 +37,38 @@\n -o OPTION, --option OPTION\n An optional argument.\n \n+Here is an elaborated example that uses default parameters in `argument` and calls the `args` in the cell magic::\n+\n+ from IPython.core.magic import register_cell_magic\n+ from IPython.core.magic_arguments import (argument, magic_arguments,\n+ parse_argstring)\n+\n+\n+ @magic_arguments()\n+ @argument(\n+ \"--option\",\n+ \"-o\",\n+ help=(\"Add an option here\"),\n+ )\n+ @argument(\n+ \"--style\",\n+ \"-s\",\n+ default=\"foo\",\n+ help=(\"Add some style arguments\"),\n+ )\n+ @register_cell_magic\n+ def my_cell_magic(line, cell):\n+ args = parse_argstring(my_cell_magic, line)\n+ print(f\"{args.option=}\")\n+ print(f\"{args.style=}\")\n+ print(f\"{cell=}\")\n+\n+In a jupyter notebook, this cell magic can be executed like this::\n+\n+ %%my_cell_magic -o Hello\n+ print(\"bar\")\n+ i = 42\n+\n Inheritance diagram:\n \n .. inheritance-diagram:: IPython.core.magic_arguments\n", "issue": "Improvement of `core.magic_arguments` example\nCurrently, there is only a [very raw example](https://ipython.readthedocs.io/en/stable/api/generated/IPython.core.magic_arguments.html?highlight=%40magic_arguments.argumen#module-IPython.core.magic_arguments\r\n) of using `magic_arguments` with custom cell magic.\r\nTherefore, I have the idea to add a second, more fleshed out example that might help people to easier understand and use cell magic arguments: \r\n\r\nHere is the code:\r\n```py\r\nfrom IPython.core import magic_arguments\r\nfrom IPython.core.magic import register_cell_magic\r\n\r\n\r\n@magic_arguments.magic_arguments()\r\n@magic_arguments.argument(\r\n \"--option\",\r\n help=(\"Add an option here\"),\r\n)\r\n@magic_arguments.argument(\r\n \"--style\",\r\n default=None,\r\n help=(\"Add some style arguments\"),\r\n)\r\n@register_cell_magic\r\ndef my_cell_magic(line, cell):\r\n \"\"\"Cool cell magic\"\"\"\r\n args = magic_arguments.parse_argstring(my_cell_magic, line)\r\n print(args.style)\r\n print(args.option)\r\n```\n", "before_files": [{"content": "''' A decorator-based method of constructing IPython magics with `argparse`\noption handling.\n\nNew magic functions can be defined like so::\n\n from IPython.core.magic_arguments import (argument, magic_arguments,\n parse_argstring)\n\n @magic_arguments()\n @argument('-o', '--option', help='An optional argument.')\n @argument('arg', type=int, help='An integer positional argument.')\n def magic_cool(self, arg):\n \"\"\" A really cool magic command.\n\n \"\"\"\n args = parse_argstring(magic_cool, arg)\n ...\n\nThe `@magic_arguments` decorator marks the function as having argparse arguments.\nThe `@argument` decorator adds an argument using the same syntax as argparse's\n`add_argument()` method. More sophisticated uses may also require the\n`@argument_group` or `@kwds` decorator to customize the formatting and the\nparsing.\n\nHelp text for the magic is automatically generated from the docstring and the\narguments::\n\n In[1]: %cool?\n %cool [-o OPTION] arg\n \n A really cool magic command.\n \n positional arguments:\n arg An integer positional argument.\n \n optional arguments:\n -o OPTION, --option OPTION\n An optional argument.\n\nInheritance diagram:\n\n.. 
inheritance-diagram:: IPython.core.magic_arguments\n :parts: 3\n\n'''\n#-----------------------------------------------------------------------------\n# Copyright (C) 2010-2011, IPython Development Team.\n#\n# Distributed under the terms of the Modified BSD License.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\nimport argparse\nimport re\n\n# Our own imports\nfrom IPython.core.error import UsageError\nfrom IPython.utils.decorators import undoc\nfrom IPython.utils.process import arg_split\nfrom IPython.utils.text import dedent\n\nNAME_RE = re.compile(r\"[a-zA-Z][a-zA-Z0-9_-]*$\")\n\n@undoc\nclass MagicHelpFormatter(argparse.RawDescriptionHelpFormatter):\n \"\"\"A HelpFormatter with a couple of changes to meet our needs.\n \"\"\"\n # Modified to dedent text.\n def _fill_text(self, text, width, indent):\n return argparse.RawDescriptionHelpFormatter._fill_text(self, dedent(text), width, indent)\n\n # Modified to wrap argument placeholders in <> where necessary.\n def _format_action_invocation(self, action):\n if not action.option_strings:\n metavar, = self._metavar_formatter(action, action.dest)(1)\n return metavar\n\n else:\n parts = []\n\n # if the Optional doesn't take a value, format is:\n # -s, --long\n if action.nargs == 0:\n parts.extend(action.option_strings)\n\n # if the Optional takes a value, format is:\n # -s ARGS, --long ARGS\n else:\n default = action.dest.upper()\n args_string = self._format_args(action, default)\n # IPYTHON MODIFICATION: If args_string is not a plain name, wrap\n # it in <> so it's valid RST.\n if not NAME_RE.match(args_string):\n args_string = \"<%s>\" % args_string\n for option_string in action.option_strings:\n parts.append('%s %s' % (option_string, args_string))\n\n return ', '.join(parts)\n\n # Override the default prefix ('usage') to our % magic escape,\n # in a code block.\n def add_usage(self, usage, actions, groups, prefix=\"::\\n\\n %\"):\n super(MagicHelpFormatter, self).add_usage(usage, actions, groups, prefix)\n\nclass MagicArgumentParser(argparse.ArgumentParser):\n \"\"\" An ArgumentParser tweaked for use by IPython magics.\n \"\"\"\n def __init__(self,\n prog=None,\n usage=None,\n description=None,\n epilog=None,\n parents=None,\n formatter_class=MagicHelpFormatter,\n prefix_chars='-',\n argument_default=None,\n conflict_handler='error',\n add_help=False):\n if parents is None:\n parents = []\n super(MagicArgumentParser, self).__init__(prog=prog, usage=usage,\n description=description, epilog=epilog,\n parents=parents, formatter_class=formatter_class,\n prefix_chars=prefix_chars, argument_default=argument_default,\n conflict_handler=conflict_handler, add_help=add_help)\n\n def error(self, message):\n \"\"\" Raise a catchable error instead of exiting.\n \"\"\"\n raise UsageError(message)\n\n def parse_argstring(self, argstring):\n \"\"\" Split a string into an argument list and parse that argument list.\n \"\"\"\n argv = arg_split(argstring)\n return self.parse_args(argv)\n\n\ndef construct_parser(magic_func):\n \"\"\" Construct an argument parser using the function decorations.\n \"\"\"\n kwds = getattr(magic_func, 'argcmd_kwds', {})\n if 'description' not in kwds:\n kwds['description'] = getattr(magic_func, '__doc__', None)\n arg_name = real_name(magic_func)\n parser = MagicArgumentParser(arg_name, **kwds)\n # Reverse the list of decorators in order to apply them in the\n # order in which they appear in the source.\n group = 
None\n for deco in magic_func.decorators[::-1]:\n result = deco.add_to_parser(parser, group)\n if result is not None:\n group = result\n\n # Replace the magic function's docstring with the full help text.\n magic_func.__doc__ = parser.format_help()\n\n return parser\n\n\ndef parse_argstring(magic_func, argstring):\n \"\"\" Parse the string of arguments for the given magic function.\n \"\"\"\n return magic_func.parser.parse_argstring(argstring)\n\n\ndef real_name(magic_func):\n \"\"\" Find the real name of the magic.\n \"\"\"\n magic_name = magic_func.__name__\n if magic_name.startswith('magic_'):\n magic_name = magic_name[len('magic_'):]\n return getattr(magic_func, 'argcmd_name', magic_name)\n\n\nclass ArgDecorator(object):\n \"\"\" Base class for decorators to add ArgumentParser information to a method.\n \"\"\"\n\n def __call__(self, func):\n if not getattr(func, 'has_arguments', False):\n func.has_arguments = True\n func.decorators = []\n func.decorators.append(self)\n return func\n\n def add_to_parser(self, parser, group):\n \"\"\" Add this object's information to the parser, if necessary.\n \"\"\"\n pass\n\n\nclass magic_arguments(ArgDecorator):\n \"\"\" Mark the magic as having argparse arguments and possibly adjust the\n name.\n \"\"\"\n\n def __init__(self, name=None):\n self.name = name\n\n def __call__(self, func):\n if not getattr(func, 'has_arguments', False):\n func.has_arguments = True\n func.decorators = []\n if self.name is not None:\n func.argcmd_name = self.name\n # This should be the first decorator in the list of decorators, thus the\n # last to execute. Build the parser.\n func.parser = construct_parser(func)\n return func\n\n\nclass ArgMethodWrapper(ArgDecorator):\n\n \"\"\"\n Base class to define a wrapper for ArgumentParser method.\n\n Child class must define either `_method_name` or `add_to_parser`.\n\n \"\"\"\n\n _method_name = None\n\n def __init__(self, *args, **kwds):\n self.args = args\n self.kwds = kwds\n\n def add_to_parser(self, parser, group):\n \"\"\" Add this object's information to the parser.\n \"\"\"\n if group is not None:\n parser = group\n getattr(parser, self._method_name)(*self.args, **self.kwds)\n return None\n\n\nclass argument(ArgMethodWrapper):\n \"\"\" Store arguments and keywords to pass to add_argument().\n\n Instances also serve to decorate command methods.\n \"\"\"\n _method_name = 'add_argument'\n\n\nclass defaults(ArgMethodWrapper):\n \"\"\" Store arguments and keywords to pass to set_defaults().\n\n Instances also serve to decorate command methods.\n \"\"\"\n _method_name = 'set_defaults'\n\n\nclass argument_group(ArgMethodWrapper):\n \"\"\" Store arguments and keywords to pass to add_argument_group().\n\n Instances also serve to decorate command methods.\n \"\"\"\n\n def add_to_parser(self, parser, group):\n \"\"\" Add this object's information to the parser.\n \"\"\"\n return parser.add_argument_group(*self.args, **self.kwds)\n\n\nclass kwds(ArgDecorator):\n \"\"\" Provide other keywords to the sub-parser constructor.\n \"\"\"\n def __init__(self, **kwds):\n self.kwds = kwds\n\n def __call__(self, func):\n func = super(kwds, self).__call__(func)\n func.argcmd_kwds = self.kwds\n return func\n\n\n__all__ = ['magic_arguments', 'argument', 'argument_group', 'kwds',\n 'parse_argstring']\n", "path": "IPython/core/magic_arguments.py"}]}
| 3,487 | 311 |
gh_patches_debug_14108
|
rasdani/github-patches
|
git_diff
|
wright-group__WrightTools-726
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Group is not defined in collection
https://github.com/wright-group/WrightTools/blob/ca056aa600f341501a99d2ea4d11f7d74047bc26/WrightTools/_open.py#L48
This statement will cause an AttributeError. It is not currently covered by tests.
</issue>
<code>
[start of WrightTools/_open.py]
1 """Generic open method for wt5 files."""
2
3
4 # --- import -------------------------------------------------------------------------------------
5
6
7 import posixpath
8
9 import h5py
10
11 from . import collection as wt_collection
12 from . import data as wt_data
13
14
15 # --- define -------------------------------------------------------------------------------------
16
17
18 __all__ = ["open"]
19
20
21 # --- functions ----------------------------------------------------------------------------------
22
23
24 def open(filepath, edit_local=False):
25 """Open any wt5 file, returning the top-level object (data or collection).
26
27 Parameters
28 ----------
29 filepath : string
30 Path to file.
31 edit_local : boolean (optional)
32 If True, the file itself will be opened for editing. Otherwise, a
33 copy will be created. Default is False.
34
35 Returns
36 -------
37 WrightTools Collection or Data
38 Root-level object in file.
39 """
40 f = h5py.File(filepath)
41 class_name = f[posixpath.sep].attrs["class"]
42 name = f[posixpath.sep].attrs["name"]
43 if class_name == "Data":
44 return wt_data.Data(filepath=filepath, name=name, edit_local=edit_local)
45 elif class_name == "Collection":
46 return wt_collection.Collection(filepath=filepath, name=name, edit_local=edit_local)
47 else:
48 return wt_collection.Group(filepath=filepath, name=name, edit_local=edit_local)
49
[end of WrightTools/_open.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/WrightTools/_open.py b/WrightTools/_open.py
--- a/WrightTools/_open.py
+++ b/WrightTools/_open.py
@@ -10,6 +10,7 @@
from . import collection as wt_collection
from . import data as wt_data
+from . import _group as wt_group
# --- define -------------------------------------------------------------------------------------
@@ -45,4 +46,4 @@
elif class_name == "Collection":
return wt_collection.Collection(filepath=filepath, name=name, edit_local=edit_local)
else:
- return wt_collection.Group(filepath=filepath, name=name, edit_local=edit_local)
+ return wt_group.Group(filepath=filepath, name=name, edit_local=edit_local)
|
{"golden_diff": "diff --git a/WrightTools/_open.py b/WrightTools/_open.py\n--- a/WrightTools/_open.py\n+++ b/WrightTools/_open.py\n@@ -10,6 +10,7 @@\n \n from . import collection as wt_collection\n from . import data as wt_data\n+from . import _group as wt_group\n \n \n # --- define -------------------------------------------------------------------------------------\n@@ -45,4 +46,4 @@\n elif class_name == \"Collection\":\n return wt_collection.Collection(filepath=filepath, name=name, edit_local=edit_local)\n else:\n- return wt_collection.Group(filepath=filepath, name=name, edit_local=edit_local)\n+ return wt_group.Group(filepath=filepath, name=name, edit_local=edit_local)\n", "issue": "Group is not defined in collection\nhttps://github.com/wright-group/WrightTools/blob/ca056aa600f341501a99d2ea4d11f7d74047bc26/WrightTools/_open.py#L48\r\n\r\nStatement will cause an attribute error. Not tested currently\n", "before_files": [{"content": "\"\"\"Generic open method for wt5 files.\"\"\"\n\n\n# --- import -------------------------------------------------------------------------------------\n\n\nimport posixpath\n\nimport h5py\n\nfrom . import collection as wt_collection\nfrom . import data as wt_data\n\n\n# --- define -------------------------------------------------------------------------------------\n\n\n__all__ = [\"open\"]\n\n\n# --- functions ----------------------------------------------------------------------------------\n\n\ndef open(filepath, edit_local=False):\n \"\"\"Open any wt5 file, returning the top-level object (data or collection).\n\n Parameters\n ----------\n filepath : string\n Path to file.\n edit_local : boolean (optional)\n If True, the file itself will be opened for editing. Otherwise, a\n copy will be created. Default is False.\n\n Returns\n -------\n WrightTools Collection or Data\n Root-level object in file.\n \"\"\"\n f = h5py.File(filepath)\n class_name = f[posixpath.sep].attrs[\"class\"]\n name = f[posixpath.sep].attrs[\"name\"]\n if class_name == \"Data\":\n return wt_data.Data(filepath=filepath, name=name, edit_local=edit_local)\n elif class_name == \"Collection\":\n return wt_collection.Collection(filepath=filepath, name=name, edit_local=edit_local)\n else:\n return wt_collection.Group(filepath=filepath, name=name, edit_local=edit_local)\n", "path": "WrightTools/_open.py"}]}
| 978 | 160 |
gh_patches_debug_14577
|
rasdani/github-patches
|
git_diff
|
urllib3__urllib3-2289
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecate NTLMConnectionPool in 1.26.x
As was mentioned in https://github.com/urllib3/urllib3/pull/2278#issuecomment-864414599 and https://github.com/urllib3/urllib3/pull/2278#issuecomment-864450016 we're moving to remove `NTLMConnectionPool` and the `urllib3.contrib.ntlmpool` module from urllib3 in v2.0 if we don't find a new maintainer for the module (perhaps as a third-party package, i.e. `urllib3-ntlmpool`?)
- The module is not covered by our test suite.
- It is not even clear which PyPI package is needed for it.
- It has fallen into disrepair (e.g. timeout/ssl/other options not being respected).
- According to Wikipedia, "Since 2010, Microsoft no longer recommends NTLM in applications"
- Seems like it's not used often, if at all.
In the `1.26.x` branch we should unconditionally raise a `DeprecationWarning` when the module is imported. The warning should link to this issue with a call to action for anyone still using the module to comment here. This should help us better discover who (if any) our users are so we can make a better-informed decision.
</issue>
<code>
[start of src/urllib3/contrib/ntlmpool.py]
1 """
2 NTLM authenticating pool, contributed by erikcederstran
3
4 Issue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10
5 """
6 from __future__ import absolute_import
7
8 from logging import getLogger
9
10 from ntlm import ntlm
11
12 from .. import HTTPSConnectionPool
13 from ..packages.six.moves.http_client import HTTPSConnection
14
15 log = getLogger(__name__)
16
17
18 class NTLMConnectionPool(HTTPSConnectionPool):
19 """
20 Implements an NTLM authentication version of an urllib3 connection pool
21 """
22
23 scheme = "https"
24
25 def __init__(self, user, pw, authurl, *args, **kwargs):
26 """
27 authurl is a random URL on the server that is protected by NTLM.
28 user is the Windows user, probably in the DOMAIN\\username format.
29 pw is the password for the user.
30 """
31 super(NTLMConnectionPool, self).__init__(*args, **kwargs)
32 self.authurl = authurl
33 self.rawuser = user
34 user_parts = user.split("\\", 1)
35 self.domain = user_parts[0].upper()
36 self.user = user_parts[1]
37 self.pw = pw
38
39 def _new_conn(self):
40 # Performs the NTLM handshake that secures the connection. The socket
41 # must be kept open while requests are performed.
42 self.num_connections += 1
43 log.debug(
44 "Starting NTLM HTTPS connection no. %d: https://%s%s",
45 self.num_connections,
46 self.host,
47 self.authurl,
48 )
49
50 headers = {"Connection": "Keep-Alive"}
51 req_header = "Authorization"
52 resp_header = "www-authenticate"
53
54 conn = HTTPSConnection(host=self.host, port=self.port)
55
56 # Send negotiation message
57 headers[req_header] = "NTLM %s" % ntlm.create_NTLM_NEGOTIATE_MESSAGE(
58 self.rawuser
59 )
60 log.debug("Request headers: %s", headers)
61 conn.request("GET", self.authurl, None, headers)
62 res = conn.getresponse()
63 reshdr = dict(res.getheaders())
64 log.debug("Response status: %s %s", res.status, res.reason)
65 log.debug("Response headers: %s", reshdr)
66 log.debug("Response data: %s [...]", res.read(100))
67
68 # Remove the reference to the socket, so that it can not be closed by
69 # the response object (we want to keep the socket open)
70 res.fp = None
71
72 # Server should respond with a challenge message
73 auth_header_values = reshdr[resp_header].split(", ")
74 auth_header_value = None
75 for s in auth_header_values:
76 if s[:5] == "NTLM ":
77 auth_header_value = s[5:]
78 if auth_header_value is None:
79 raise Exception(
80 "Unexpected %s response header: %s" % (resp_header, reshdr[resp_header])
81 )
82
83 # Send authentication message
84 ServerChallenge, NegotiateFlags = ntlm.parse_NTLM_CHALLENGE_MESSAGE(
85 auth_header_value
86 )
87 auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE(
88 ServerChallenge, self.user, self.domain, self.pw, NegotiateFlags
89 )
90 headers[req_header] = "NTLM %s" % auth_msg
91 log.debug("Request headers: %s", headers)
92 conn.request("GET", self.authurl, None, headers)
93 res = conn.getresponse()
94 log.debug("Response status: %s %s", res.status, res.reason)
95 log.debug("Response headers: %s", dict(res.getheaders()))
96 log.debug("Response data: %s [...]", res.read()[:100])
97 if res.status != 200:
98 if res.status == 401:
99 raise Exception("Server rejected request: wrong username or password")
100 raise Exception("Wrong server response: %s %s" % (res.status, res.reason))
101
102 res.fp = None
103 log.debug("Connection established")
104 return conn
105
106 def urlopen(
107 self,
108 method,
109 url,
110 body=None,
111 headers=None,
112 retries=3,
113 redirect=True,
114 assert_same_host=True,
115 ):
116 if headers is None:
117 headers = {}
118 headers["Connection"] = "Keep-Alive"
119 return super(NTLMConnectionPool, self).urlopen(
120 method, url, body, headers, retries, redirect, assert_same_host
121 )
122
[end of src/urllib3/contrib/ntlmpool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/urllib3/contrib/ntlmpool.py b/src/urllib3/contrib/ntlmpool.py
--- a/src/urllib3/contrib/ntlmpool.py
+++ b/src/urllib3/contrib/ntlmpool.py
@@ -5,6 +5,7 @@
"""
from __future__ import absolute_import
+import warnings
from logging import getLogger
from ntlm import ntlm
@@ -12,6 +13,14 @@
from .. import HTTPSConnectionPool
from ..packages.six.moves.http_client import HTTPSConnection
+warnings.warn(
+ "The 'urllib3.contrib.ntlmpool' module is deprecated and will be removed "
+ "in urllib3 v2.0 release, urllib3 is not able to support it properly due "
+ "to reasons listed in issue: https://github.com/urllib3/urllib3/issues/2282. "
+ "If you are a user of this module please comment in the mentioned issue.",
+ DeprecationWarning,
+)
+
log = getLogger(__name__)
|
{"golden_diff": "diff --git a/src/urllib3/contrib/ntlmpool.py b/src/urllib3/contrib/ntlmpool.py\n--- a/src/urllib3/contrib/ntlmpool.py\n+++ b/src/urllib3/contrib/ntlmpool.py\n@@ -5,6 +5,7 @@\n \"\"\"\n from __future__ import absolute_import\n \n+import warnings\n from logging import getLogger\n \n from ntlm import ntlm\n@@ -12,6 +13,14 @@\n from .. import HTTPSConnectionPool\n from ..packages.six.moves.http_client import HTTPSConnection\n \n+warnings.warn(\n+ \"The 'urllib3.contrib.ntlmpool' module is deprecated and will be removed \"\n+ \"in urllib3 v2.0 release, urllib3 is not able to support it properly due \"\n+ \"to reasons listed in issue: https://github.com/urllib3/urllib3/issues/2282. \"\n+ \"If you are a user of this module please comment in the mentioned issue.\",\n+ DeprecationWarning,\n+)\n+\n log = getLogger(__name__)\n", "issue": "Deprecate NTLMConnectionPool in 1.26.x\nAs was mentioned in https://github.com/urllib3/urllib3/pull/2278#issuecomment-864414599 and https://github.com/urllib3/urllib3/pull/2278#issuecomment-864450016 we're moving to remove `NTLMConnectionPool` and the `urllib3.contrib.nltmpool` module from urllib3 in v2.0 if we don't find a new maintainer for the module (perhaps as a third-party package ie `urllib3-ntlmpool`?)\r\n\r\n- The module is not covered by our test suite.\r\n- It is not clear even which pypi package is needed for it.\r\n- It has fallen into disrepair (e.g. timeout/ssl/other options not being respected).\r\n- According to Wikipedia, \"Since 2010, Microsoft no longer recommends NTLM in applications\"\r\n- Seems like it's not used often, if at all.\r\n\r\nIn the `1.26.x` branch we should unconditionally raise a `DeprecationWarning` when the module is imported. Should link to this issue with a call to action to comment in the issue if they are a user. This should help us better discover who (if any) our users are here so we can better make a decision.\n", "before_files": [{"content": "\"\"\"\nNTLM authenticating pool, contributed by erikcederstran\n\nIssue #10, see: http://code.google.com/p/urllib3/issues/detail?id=10\n\"\"\"\nfrom __future__ import absolute_import\n\nfrom logging import getLogger\n\nfrom ntlm import ntlm\n\nfrom .. import HTTPSConnectionPool\nfrom ..packages.six.moves.http_client import HTTPSConnection\n\nlog = getLogger(__name__)\n\n\nclass NTLMConnectionPool(HTTPSConnectionPool):\n \"\"\"\n Implements an NTLM authentication version of an urllib3 connection pool\n \"\"\"\n\n scheme = \"https\"\n\n def __init__(self, user, pw, authurl, *args, **kwargs):\n \"\"\"\n authurl is a random URL on the server that is protected by NTLM.\n user is the Windows user, probably in the DOMAIN\\\\username format.\n pw is the password for the user.\n \"\"\"\n super(NTLMConnectionPool, self).__init__(*args, **kwargs)\n self.authurl = authurl\n self.rawuser = user\n user_parts = user.split(\"\\\\\", 1)\n self.domain = user_parts[0].upper()\n self.user = user_parts[1]\n self.pw = pw\n\n def _new_conn(self):\n # Performs the NTLM handshake that secures the connection. The socket\n # must be kept open while requests are performed.\n self.num_connections += 1\n log.debug(\n \"Starting NTLM HTTPS connection no. 
%d: https://%s%s\",\n self.num_connections,\n self.host,\n self.authurl,\n )\n\n headers = {\"Connection\": \"Keep-Alive\"}\n req_header = \"Authorization\"\n resp_header = \"www-authenticate\"\n\n conn = HTTPSConnection(host=self.host, port=self.port)\n\n # Send negotiation message\n headers[req_header] = \"NTLM %s\" % ntlm.create_NTLM_NEGOTIATE_MESSAGE(\n self.rawuser\n )\n log.debug(\"Request headers: %s\", headers)\n conn.request(\"GET\", self.authurl, None, headers)\n res = conn.getresponse()\n reshdr = dict(res.getheaders())\n log.debug(\"Response status: %s %s\", res.status, res.reason)\n log.debug(\"Response headers: %s\", reshdr)\n log.debug(\"Response data: %s [...]\", res.read(100))\n\n # Remove the reference to the socket, so that it can not be closed by\n # the response object (we want to keep the socket open)\n res.fp = None\n\n # Server should respond with a challenge message\n auth_header_values = reshdr[resp_header].split(\", \")\n auth_header_value = None\n for s in auth_header_values:\n if s[:5] == \"NTLM \":\n auth_header_value = s[5:]\n if auth_header_value is None:\n raise Exception(\n \"Unexpected %s response header: %s\" % (resp_header, reshdr[resp_header])\n )\n\n # Send authentication message\n ServerChallenge, NegotiateFlags = ntlm.parse_NTLM_CHALLENGE_MESSAGE(\n auth_header_value\n )\n auth_msg = ntlm.create_NTLM_AUTHENTICATE_MESSAGE(\n ServerChallenge, self.user, self.domain, self.pw, NegotiateFlags\n )\n headers[req_header] = \"NTLM %s\" % auth_msg\n log.debug(\"Request headers: %s\", headers)\n conn.request(\"GET\", self.authurl, None, headers)\n res = conn.getresponse()\n log.debug(\"Response status: %s %s\", res.status, res.reason)\n log.debug(\"Response headers: %s\", dict(res.getheaders()))\n log.debug(\"Response data: %s [...]\", res.read()[:100])\n if res.status != 200:\n if res.status == 401:\n raise Exception(\"Server rejected request: wrong username or password\")\n raise Exception(\"Wrong server response: %s %s\" % (res.status, res.reason))\n\n res.fp = None\n log.debug(\"Connection established\")\n return conn\n\n def urlopen(\n self,\n method,\n url,\n body=None,\n headers=None,\n retries=3,\n redirect=True,\n assert_same_host=True,\n ):\n if headers is None:\n headers = {}\n headers[\"Connection\"] = \"Keep-Alive\"\n return super(NTLMConnectionPool, self).urlopen(\n method, url, body, headers, retries, redirect, assert_same_host\n )\n", "path": "src/urllib3/contrib/ntlmpool.py"}]}
| 2,099 | 241 |
gh_patches_debug_17014
|
rasdani/github-patches
|
git_diff
|
pymedusa__Medusa-5472
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IndexError sending to uTorrent
**Describe the bug**
Sending a magnet link to uTorrent results in an error
**To Reproduce**
Steps to reproduce the behavior:
1. Just add an existing episode to the wanted category and click go
**Expected behavior**
A download starts in uTorrent
**Medusa (please complete the following information):**
- OS: Windows 10
- Branch: master
- Commit: 4614efc77151ded92ef458a09dec39f8bd5acfc6
**Logs:**
<details>
```
2018-09-10 10:48:55 DEBUG SEARCHQUEUE-BACKLOG-80379 :: [4614efc] uTorrent: Exception raised when sending torrent Rarbg @ magnet:?xt=urn:btih:04a442a9f4ec4f968897faa4bce27f6b2d6f9083&dn=The.Big.Bang.Theory.S11E24.The.Bow.Tie.Asymmetry.720p.AMZN.WEBRip.DDP5.1.x264-NTb%5Brartv%5D&tr=http%3A%2F%2Ftracker.trackerfix.com%3A80%2Fannounce&tr=udp%3A%2F%2F9.rarbg.me%3A2710&tr=udp%3A%2F%2F9.rarbg.to%3A2710&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=udp://tracker.coppersurfer.tk:6969/announce&tr=udp://tracker.leechers-paradise.org:6969/announce&tr=udp://tracker.zer0day.to:1337/announce&tr=udp://tracker.opentrackr.org:1337/announce&tr=http://tracker.opentrackr.org:1337/announce&tr=udp://p4p.arenabg.com:1337/announce&tr=http://p4p.arenabg.com:1337/announce&tr=udp://explodie.org:6969/announce&tr=udp://9.rarbg.com:2710/announce&tr=http://explodie.org:6969/announce&tr=http://tracker.dler.org:6969/announce&tr=udp://public.popcorn-tracker.org:6969/announce&tr=udp://tracker.internetwarriors.net:1337/announce&tr=udp://ipv4.tracker.harry.lu:80/announce&tr=http://ipv4.tracker.harry.lu:80/announce&tr=udp://mgtracker.org:2710/announce&tr=http://mgtracker.org:6969/announce&tr=udp://tracker.mg64.net:6969/announce&tr=http://tracker.mg64.net:6881/announce&tr=http://torrentsmd.com:8080/announce
Extra Info:
Episodes:
u'The Big Bang Theory' - u'S11E24' - u'The Bow Tie Asymmetry'
location: u''
description: u'When Amy\u2019s parents and Sheldon\u2019s family arrive for the wedding, everybody is focused on making sure all goes according to plan \u2013 everyone except the bride and groom.'
subtitles: u''
subtitles_searchcount: 0
subtitles_lastsearch: u'0001-01-01 00:00:00'
airdate: 736824 (datetime.date(2018, 5, 10))
hasnfo: False
hastbn: False
status: 3
quality: 64
Quality: 720p WEB-DL
Name: The.Big.Bang.Theory.S11E24.The.Bow.Tie.Asymmetry.720p.AMZN.WEBRip.DDP5.1.x264-NTb[rartv]
Size: 770147134
Release Group: NTb
. Error: list index out of range
2018-09-10 10:48:55 ERROR SEARCHQUEUE-BACKLOG-80379 :: [4614efc] uTorrent: Failed Sending Torrent
Traceback (most recent call last):
File "C:\medusa\medusa\clients\torrent\generic.py", line 261, in send_torrent
r_code = self._add_torrent_uri(result)
File "C:\medusa\medusa\clients\torrent\utorrent_client.py", line 92, in _add_torrent_uri
torrent_subfolder = get_torrent_subfolder(result)
File "C:\medusa\medusa\clients\torrent\utorrent_client.py", line 27, in get_torrent_subfolder
root_location = root_dirs[int(root_dirs[0]) + 1]
IndexError: list index out of range
```
</details>
**Additional context**
Add any other context about the problem here.
</issue>
<code>
[start of medusa/clients/torrent/utorrent_client.py]
1 # coding=utf-8
2
3 """uTorrent Client."""
4
5 from __future__ import unicode_literals
6
7 import logging
8 import os
9 import re
10 from collections import OrderedDict
11
12 from medusa import app
13 from medusa.clients.torrent.generic import GenericClient
14 from medusa.logger.adapters.style import BraceAdapter
15
16 from requests.compat import urljoin
17 from requests.exceptions import RequestException
18
19 log = BraceAdapter(logging.getLogger(__name__))
20 log.logger.addHandler(logging.NullHandler())
21
22
23 def get_torrent_subfolder(result):
24 """Retrieve the series destination-subfolder required for uTorrent WebUI 'start' action."""
25 # Get the subfolder name the user has assigned to that series
26 root_dirs = app.ROOT_DIRS
27 root_location = root_dirs[int(root_dirs[0]) + 1]
28 torrent_path = result.series.raw_location
29
30 if not root_location == torrent_path:
31 # Subfolder is under root, but possibly not directly under
32 if torrent_path.startswith(root_location):
33 torrent_subfolder = torrent_path.replace(root_location, '')
34 # Subfolder is NOT under root, use it too (WebUI limitation)
35 else:
36 torrent_subfolder = os.path.basename(torrent_path)
37 # Use the series name if there is no subfolder defined
38 else:
39 torrent_subfolder = result.series.name
40
41 log.debug('Show {name}: torrent download destination folder is: {path} (sub-folder: {sub})',
42 {'name': result.series.name, 'path': torrent_path, 'sub': torrent_subfolder})
43
44 return torrent_subfolder
45
46
47 class UTorrentAPI(GenericClient):
48 """uTorrent API class."""
49
50 def __init__(self, host=None, username=None, password=None):
51 """Constructor.
52
53 :param host:
54 :type host: string
55 :param username:
56 :type username: string
57 :param password:
58 :type password: string
59 """
60 super(UTorrentAPI, self).__init__('uTorrent', host, username, password)
61 self.url = urljoin(self.host, 'gui/')
62
63 def _request(self, method='get', params=None, data=None, files=None, cookies=None):
64 if cookies:
65 log.debug('{name}: Received unused argument: cookies={value!r}',
66 {'name': self.name, 'value': cookies})
67
68 # "token" must be the first parameter: https://goo.gl/qTxf9x
69 ordered_params = OrderedDict({
70 'token': self.auth,
71 })
72 ordered_params.update(params)
73
74 return super(UTorrentAPI, self)._request(method=method, params=ordered_params, data=data, files=files)
75
76 def _get_auth(self):
77 try:
78 self.response = self.session.get(urljoin(self.url, 'token.html'), verify=False)
79 except RequestException as error:
80 log.warning('Unable to authenticate with uTorrent client: {0!r}', error)
81 return None
82
83 if not self.response.status_code == 404:
84 self.auth = re.findall('<div.*?>(.*?)</', self.response.text)[0]
85 return self.auth
86
87 return None
88
89 def _add_torrent_uri(self, result):
90 """Send an 'add-url' download request to uTorrent when search provider is using a magnet link."""
91 # Set proper subfolder as download destination for uTorrent torrent
92 torrent_subfolder = get_torrent_subfolder(result)
93
94 return self._request(params={
95 'action': 'add-url',
96 # limit the param length to 1024 chars (uTorrent bug)
97 's': result.url[:1024],
98 'path': torrent_subfolder,
99 })
100
101 def _add_torrent_file(self, result):
102 """Send an 'add-file' download request to uTorrent when the search provider is using a .torrent file."""
103 # Set proper subfolder as download destination for uTorrent torrent
104 torrent_subfolder = get_torrent_subfolder(result)
105
106 return self._request(
107 method='post',
108 params={
109 'action': 'add-file',
110 'path': torrent_subfolder,
111 },
112 files={
113 'torrent_file': (
114 '{name}.torrent'.format(name=result.name),
115 result.content,
116 ),
117 }
118 )
119
120 def _set_torrent_label(self, result):
121 """Send a 'setprop' request to uTorrent to set a label for the torrent, optionally - the show name."""
122 torrent_new_label = result.series.name
123
124 if result.series.is_anime and app.TORRENT_LABEL_ANIME:
125 label = app.TORRENT_LABEL_ANIME
126 else:
127 label = app.TORRENT_LABEL
128
129 label = label.replace('%N', torrent_new_label)
130
131 log.debug('Torrent label is now set to {path}', {'path': label})
132
133 return self._request(
134 params={
135 'action': 'setprops',
136 'hash': result.hash,
137 's': 'label',
138 'v': label,
139 }
140 )
141
142 def _set_torrent_ratio(self, result):
143 ratio = result.ratio or None
144
145 if ratio:
146 if self._request(params={
147 'action': 'setprops',
148 'hash': result.hash,
149 's': 'seed_override',
150 'v': '1',
151 }):
152 return self._request(params={
153 'action': 'setprops',
154 'hash': result.hash,
155 's': 'seed_ratio',
156 'v': float(ratio) * 10,
157 })
158 else:
159 return False
160
161 return True
162
163 def _set_torrent_seed_time(self, result):
164 # Allow 0 - as unlimitted, and "-1" - that is used to disable
165 if float(app.TORRENT_SEED_TIME) >= 0:
166 if self._request(params={
167 'action': 'setprops',
168 'hash': result.hash,
169 's': 'seed_override',
170 'v': '1',
171 }):
172 return self._request(params={
173 'action': 'setprops',
174 'hash': result.hash,
175 's': 'seed_time',
176 'v': 3600 * float(app.TORRENT_SEED_TIME),
177 })
178 else:
179 return False
180 else:
181 return True
182
183 def _set_torrent_priority(self, result):
184 return True if result.priority != 1 else self._request(params={
185 'action': 'queuetop',
186 'hash': result.hash,
187 })
188
189 def _set_torrent_pause(self, result):
190 return self._request(params={
191 # "stop" torrent, can always be resumed!
192 'action': 'stop' if app.TORRENT_PAUSED else 'start',
193 'hash': result.hash,
194 })
195
196 def remove_torrent(self, info_hash):
197 """Remove torrent from client using given info_hash.
198
199 :param info_hash:
200 :type info_hash: string
201 :return
202 :rtype: bool
203 """
204 return self._request(params={
205 'action': 'removedatatorrent',
206 'hash': info_hash,
207 })
208
209
210 api = UTorrentAPI
211
[end of medusa/clients/torrent/utorrent_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/medusa/clients/torrent/utorrent_client.py b/medusa/clients/torrent/utorrent_client.py
--- a/medusa/clients/torrent/utorrent_client.py
+++ b/medusa/clients/torrent/utorrent_client.py
@@ -24,10 +24,11 @@
"""Retrieve the series destination-subfolder required for uTorrent WebUI 'start' action."""
# Get the subfolder name the user has assigned to that series
root_dirs = app.ROOT_DIRS
- root_location = root_dirs[int(root_dirs[0]) + 1]
+ if root_dirs:
+ root_location = root_dirs[int(root_dirs[0]) + 1]
torrent_path = result.series.raw_location
- if not root_location == torrent_path:
+ if root_dirs and root_location != torrent_path:
# Subfolder is under root, but possibly not directly under
if torrent_path.startswith(root_location):
torrent_subfolder = torrent_path.replace(root_location, '')
|
{"golden_diff": "diff --git a/medusa/clients/torrent/utorrent_client.py b/medusa/clients/torrent/utorrent_client.py\n--- a/medusa/clients/torrent/utorrent_client.py\n+++ b/medusa/clients/torrent/utorrent_client.py\n@@ -24,10 +24,11 @@\n \"\"\"Retrieve the series destination-subfolder required for uTorrent WebUI 'start' action.\"\"\"\n # Get the subfolder name the user has assigned to that series\n root_dirs = app.ROOT_DIRS\n- root_location = root_dirs[int(root_dirs[0]) + 1]\n+ if root_dirs:\n+ root_location = root_dirs[int(root_dirs[0]) + 1]\n torrent_path = result.series.raw_location\n \n- if not root_location == torrent_path:\n+ if root_dirs and root_location != torrent_path:\n # Subfolder is under root, but possibly not directly under\n if torrent_path.startswith(root_location):\n torrent_subfolder = torrent_path.replace(root_location, '')\n", "issue": "IndexError sending to uTorrent\n**Describe the bug**\r\nSending an magnet link to uTorret goes into error\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Just add an existing episode to the wanted category and click go\r\n\r\n**Expected behavior**\r\nAn download into uTorrent\r\n\r\n**Medusa (please complete the following information):**\r\n - OS: Windows 10\r\n - Branch: master\r\n - Commit: 4614efc77151ded92ef458a09dec39f8bd5acfc6 \r\n\r\n**Logs:**\r\n<details>\r\n\r\n```\r\n2018-09-10 10:48:55 DEBUG SEARCHQUEUE-BACKLOG-80379 :: [4614efc] uTorrent: Exception raised when sending torrent Rarbg @ magnet:?xt=urn:btih:04a442a9f4ec4f968897faa4bce27f6b2d6f9083&dn=The.Big.Bang.Theory.S11E24.The.Bow.Tie.Asymmetry.720p.AMZN.WEBRip.DDP5.1.x264-NTb%5Brartv%5D&tr=http%3A%2F%2Ftracker.trackerfix.com%3A80%2Fannounce&tr=udp%3A%2F%2F9.rarbg.me%3A2710&tr=udp%3A%2F%2F9.rarbg.to%3A2710&tr=udp%3A%2F%2Fopen.demonii.com%3A1337%2Fannounce&tr=udp://tracker.coppersurfer.tk:6969/announce&tr=udp://tracker.leechers-paradise.org:6969/announce&tr=udp://tracker.zer0day.to:1337/announce&tr=udp://tracker.opentrackr.org:1337/announce&tr=http://tracker.opentrackr.org:1337/announce&tr=udp://p4p.arenabg.com:1337/announce&tr=http://p4p.arenabg.com:1337/announce&tr=udp://explodie.org:6969/announce&tr=udp://9.rarbg.com:2710/announce&tr=http://explodie.org:6969/announce&tr=http://tracker.dler.org:6969/announce&tr=udp://public.popcorn-tracker.org:6969/announce&tr=udp://tracker.internetwarriors.net:1337/announce&tr=udp://ipv4.tracker.harry.lu:80/announce&tr=http://ipv4.tracker.harry.lu:80/announce&tr=udp://mgtracker.org:2710/announce&tr=http://mgtracker.org:6969/announce&tr=udp://tracker.mg64.net:6969/announce&tr=http://tracker.mg64.net:6881/announce&tr=http://torrentsmd.com:8080/announce\r\nExtra Info:\r\nEpisodes:\r\n u'The Big Bang Theory' - u'S11E24' - u'The Bow Tie Asymmetry'\r\nlocation: u''\r\ndescription: u'When Amy\\u2019s parents and Sheldon\\u2019s family arrive for the wedding, everybody is focused on making sure all goes according to plan \\u2013 everyone except the bride and groom.'\r\nsubtitles: u''\r\nsubtitles_searchcount: 0\r\nsubtitles_lastsearch: u'0001-01-01 00:00:00'\r\nairdate: 736824 (datetime.date(2018, 5, 10))\r\nhasnfo: False\r\nhastbn: False\r\nstatus: 3\r\nquality: 64\r\nQuality: 720p WEB-DL\r\nName: The.Big.Bang.Theory.S11E24.The.Bow.Tie.Asymmetry.720p.AMZN.WEBRip.DDP5.1.x264-NTb[rartv]\r\nSize: 770147134\r\nRelease Group: NTb\r\n. 
Error: list index out of range\r\n2018-09-10 10:48:55 ERROR SEARCHQUEUE-BACKLOG-80379 :: [4614efc] uTorrent: Failed Sending Torrent\r\nTraceback (most recent call last):\r\n File \"C:\\medusa\\medusa\\clients\\torrent\\generic.py\", line 261, in send_torrent\r\n r_code = self._add_torrent_uri(result)\r\n File \"C:\\medusa\\medusa\\clients\\torrent\\utorrent_client.py\", line 92, in _add_torrent_uri\r\n torrent_subfolder = get_torrent_subfolder(result)\r\n File \"C:\\medusa\\medusa\\clients\\torrent\\utorrent_client.py\", line 27, in get_torrent_subfolder\r\n root_location = root_dirs[int(root_dirs[0]) + 1]\r\nIndexError: list index out of range\r\n```\r\n</details>\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"uTorrent Client.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport logging\nimport os\nimport re\nfrom collections import OrderedDict\n\nfrom medusa import app\nfrom medusa.clients.torrent.generic import GenericClient\nfrom medusa.logger.adapters.style import BraceAdapter\n\nfrom requests.compat import urljoin\nfrom requests.exceptions import RequestException\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\ndef get_torrent_subfolder(result):\n \"\"\"Retrieve the series destination-subfolder required for uTorrent WebUI 'start' action.\"\"\"\n # Get the subfolder name the user has assigned to that series\n root_dirs = app.ROOT_DIRS\n root_location = root_dirs[int(root_dirs[0]) + 1]\n torrent_path = result.series.raw_location\n\n if not root_location == torrent_path:\n # Subfolder is under root, but possibly not directly under\n if torrent_path.startswith(root_location):\n torrent_subfolder = torrent_path.replace(root_location, '')\n # Subfolder is NOT under root, use it too (WebUI limitation)\n else:\n torrent_subfolder = os.path.basename(torrent_path)\n # Use the series name if there is no subfolder defined\n else:\n torrent_subfolder = result.series.name\n\n log.debug('Show {name}: torrent download destination folder is: {path} (sub-folder: {sub})',\n {'name': result.series.name, 'path': torrent_path, 'sub': torrent_subfolder})\n\n return torrent_subfolder\n\n\nclass UTorrentAPI(GenericClient):\n \"\"\"uTorrent API class.\"\"\"\n\n def __init__(self, host=None, username=None, password=None):\n \"\"\"Constructor.\n\n :param host:\n :type host: string\n :param username:\n :type username: string\n :param password:\n :type password: string\n \"\"\"\n super(UTorrentAPI, self).__init__('uTorrent', host, username, password)\n self.url = urljoin(self.host, 'gui/')\n\n def _request(self, method='get', params=None, data=None, files=None, cookies=None):\n if cookies:\n log.debug('{name}: Received unused argument: cookies={value!r}',\n {'name': self.name, 'value': cookies})\n\n # \"token\" must be the first parameter: https://goo.gl/qTxf9x\n ordered_params = OrderedDict({\n 'token': self.auth,\n })\n ordered_params.update(params)\n\n return super(UTorrentAPI, self)._request(method=method, params=ordered_params, data=data, files=files)\n\n def _get_auth(self):\n try:\n self.response = self.session.get(urljoin(self.url, 'token.html'), verify=False)\n except RequestException as error:\n log.warning('Unable to authenticate with uTorrent client: {0!r}', error)\n return None\n\n if not self.response.status_code == 404:\n self.auth = re.findall('<div.*?>(.*?)</', self.response.text)[0]\n return self.auth\n\n return None\n\n def 
_add_torrent_uri(self, result):\n \"\"\"Send an 'add-url' download request to uTorrent when search provider is using a magnet link.\"\"\"\n # Set proper subfolder as download destination for uTorrent torrent\n torrent_subfolder = get_torrent_subfolder(result)\n\n return self._request(params={\n 'action': 'add-url',\n # limit the param length to 1024 chars (uTorrent bug)\n 's': result.url[:1024],\n 'path': torrent_subfolder,\n })\n\n def _add_torrent_file(self, result):\n \"\"\"Send an 'add-file' download request to uTorrent when the search provider is using a .torrent file.\"\"\"\n # Set proper subfolder as download destination for uTorrent torrent\n torrent_subfolder = get_torrent_subfolder(result)\n\n return self._request(\n method='post',\n params={\n 'action': 'add-file',\n 'path': torrent_subfolder,\n },\n files={\n 'torrent_file': (\n '{name}.torrent'.format(name=result.name),\n result.content,\n ),\n }\n )\n\n def _set_torrent_label(self, result):\n \"\"\"Send a 'setprop' request to uTorrent to set a label for the torrent, optionally - the show name.\"\"\"\n torrent_new_label = result.series.name\n\n if result.series.is_anime and app.TORRENT_LABEL_ANIME:\n label = app.TORRENT_LABEL_ANIME\n else:\n label = app.TORRENT_LABEL\n\n label = label.replace('%N', torrent_new_label)\n\n log.debug('Torrent label is now set to {path}', {'path': label})\n\n return self._request(\n params={\n 'action': 'setprops',\n 'hash': result.hash,\n 's': 'label',\n 'v': label,\n }\n )\n\n def _set_torrent_ratio(self, result):\n ratio = result.ratio or None\n\n if ratio:\n if self._request(params={\n 'action': 'setprops',\n 'hash': result.hash,\n 's': 'seed_override',\n 'v': '1',\n }):\n return self._request(params={\n 'action': 'setprops',\n 'hash': result.hash,\n 's': 'seed_ratio',\n 'v': float(ratio) * 10,\n })\n else:\n return False\n\n return True\n\n def _set_torrent_seed_time(self, result):\n # Allow 0 - as unlimitted, and \"-1\" - that is used to disable\n if float(app.TORRENT_SEED_TIME) >= 0:\n if self._request(params={\n 'action': 'setprops',\n 'hash': result.hash,\n 's': 'seed_override',\n 'v': '1',\n }):\n return self._request(params={\n 'action': 'setprops',\n 'hash': result.hash,\n 's': 'seed_time',\n 'v': 3600 * float(app.TORRENT_SEED_TIME),\n })\n else:\n return False\n else:\n return True\n\n def _set_torrent_priority(self, result):\n return True if result.priority != 1 else self._request(params={\n 'action': 'queuetop',\n 'hash': result.hash,\n })\n\n def _set_torrent_pause(self, result):\n return self._request(params={\n # \"stop\" torrent, can always be resumed!\n 'action': 'stop' if app.TORRENT_PAUSED else 'start',\n 'hash': result.hash,\n })\n\n def remove_torrent(self, info_hash):\n \"\"\"Remove torrent from client using given info_hash.\n\n :param info_hash:\n :type info_hash: string\n :return\n :rtype: bool\n \"\"\"\n return self._request(params={\n 'action': 'removedatatorrent',\n 'hash': info_hash,\n })\n\n\napi = UTorrentAPI\n", "path": "medusa/clients/torrent/utorrent_client.py"}]}
| 3,825 | 222 |
gh_patches_debug_16553
|
rasdani/github-patches
|
git_diff
|
internetarchive__openlibrary-8214
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOCKER: `git rev-parse --short HEAD --` returns a non-zero exit status 128. web container won't load
### Evidence / Screenshot (if possible)

### Steps to Reproduce
1. Follow the instructions from openlibrary/docker/README.md to build a new docker setup
2. Run docker compose up
* Actual:
The errors shown in the screen capture are thrown, and the _web_ container fails to come up.
* Expected:
_web_ container successfully brought up
### Proposal & Constraints
The issue seems to be in the `get_software_version()` function called from `openlibrary/plugins/openlibrary/status.py`, when `git rev-parse --short HEAD --` returns a non-zero exit status 128.
I put the function call between quotation marks as a test, so that "Software version" becomes a hardcoded string, and after that everything seems to load and work just fine.
</issue>
<code>
[start of openlibrary/utils/__init__.py]
1 """Generic utilities"""
2
3 from enum import Enum
4 import re
5 from subprocess import run
6 from typing import TypeVar, Literal, Optional
7 from collections.abc import Iterable, Callable
8
9 to_drop = set(''';/?:@&=+$,<>#%"{}|\\^[]`\n\r''')
10
11
12 def str_to_key(s: str) -> str:
13 """
14 >>> str_to_key("?H$e##l{o}[0] -world!")
15 'helo0_-world!'
16 >>> str_to_key("".join(to_drop))
17 ''
18 >>> str_to_key("")
19 ''
20 """
21 return ''.join(c if c != ' ' else '_' for c in s.lower() if c not in to_drop)
22
23
24 def finddict(dicts, **filters):
25 """Find a dictionary that matches given filter conditions.
26
27 >>> dicts = [{"x": 1, "y": 2}, {"x": 3, "y": 4}]
28 >>> sorted(finddict(dicts, x=1).items())
29 [('x', 1), ('y', 2)]
30 """
31 for d in dicts:
32 if all(d.get(k) == v for k, v in filters.items()):
33 return d
34
35
36 T = TypeVar('T')
37
38
39 def uniq(values: Iterable[T], key=None) -> list[T]:
40 """Returns the unique entries from the given values in the original order.
41
42 The value of the optional `key` parameter should be a function that takes
43 a single argument and returns a key to test the uniqueness.
44 TODO: Moved this to core/utils.py
45
46 >>> uniq("abcbcddefefg")
47 ['a', 'b', 'c', 'd', 'e', 'f', 'g']
48 >>> uniq("011223344556677889")
49 ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
50 """
51 key = key or (lambda x: x)
52 s = set()
53 result = []
54 for v in values:
55 k = key(v)
56 if k not in s:
57 s.add(k)
58 result.append(v)
59 return result
60
61
62 def take_best(
63 items: list[T],
64 optimization: Literal["min", "max"],
65 scoring_fn: Callable[[T], float],
66 ) -> list[T]:
67 """
68 >>> take_best([], 'min', lambda x: x)
69 []
70 >>> take_best([3, 2, 1], 'min', lambda x: x)
71 [1]
72 >>> take_best([3, 4, 5], 'max', lambda x: x)
73 [5]
74 >>> take_best([4, 1, -1, -1], 'min', lambda x: x)
75 [-1, -1]
76 """
77 best_score = float("-inf") if optimization == "max" else float("inf")
78 besties = []
79 for item in items:
80 score = scoring_fn(item)
81 if (optimization == "max" and score > best_score) or (
82 optimization == "min" and score < best_score
83 ):
84 best_score = score
85 besties = [item]
86 elif score == best_score:
87 besties.append(item)
88 else:
89 continue
90 return besties
91
92
93 def multisort_best(
94 items: list[T], specs: list[tuple[Literal["min", "max"], Callable[[T], float]]]
95 ) -> Optional[T]:
96 """
97 Takes the best item, taking into account the multisorts
98
99 >>> multisort_best([], [])
100
101 >>> multisort_best([3,4,5], [('max', lambda x: x)])
102 5
103
104 >>> multisort_best([
105 ... {'provider': 'ia', 'size': 4},
106 ... {'provider': 'ia', 'size': 12},
107 ... {'provider': None, 'size': 42},
108 ... ], [
109 ... ('min', lambda x: 0 if x['provider'] == 'ia' else 1),
110 ... ('max', lambda x: x['size']),
111 ... ])
112 {'provider': 'ia', 'size': 12}
113 """
114 if not items:
115 return None
116 pool = items
117 for optimization, fn in specs:
118 # Shrink the pool down each time
119 pool = take_best(pool, optimization, fn)
120 return pool[0]
121
122
123 def dicthash(d):
124 """Dictionaries are not hashable. This function converts dictionary into nested
125 tuples, so that it can hashed.
126 """
127 if isinstance(d, dict):
128 return tuple((k, dicthash(d[k])) for k in sorted(d))
129 elif isinstance(d, list):
130 return tuple(dicthash(v) for v in d)
131 else:
132 return d
133
134
135 olid_re = re.compile(r'OL\d+[A-Z]', re.IGNORECASE)
136
137
138 def find_olid_in_string(s: str, olid_suffix: str | None = None) -> str | None:
139 """
140 >>> find_olid_in_string("ol123w")
141 'OL123W'
142 >>> find_olid_in_string("/authors/OL123A/DAVIE_BOWIE")
143 'OL123A'
144 >>> find_olid_in_string("/authors/OL123A/DAVIE_BOWIE", "W")
145 >>> find_olid_in_string("some random string")
146 """
147 found = re.search(olid_re, s)
148 if not found:
149 return None
150 olid = found.group(0).upper()
151
152 if olid_suffix and not olid.endswith(olid_suffix):
153 return None
154
155 return olid
156
157
158 def olid_to_key(olid: str) -> str:
159 """
160 >>> olid_to_key('OL123W')
161 '/works/OL123W'
162 >>> olid_to_key('OL123A')
163 '/authors/OL123A'
164 >>> olid_to_key('OL123M')
165 '/books/OL123M'
166 >>> olid_to_key("OL123L")
167 '/lists/OL123L'
168 """
169 typ = {
170 'A': 'authors',
171 'W': 'works',
172 'M': 'books',
173 'L': 'lists',
174 }[olid[-1]]
175 if not typ:
176 raise ValueError(f"Invalid olid: {olid}")
177 return f"/{typ}/{olid}"
178
179
180 def extract_numeric_id_from_olid(olid):
181 """
182 >>> extract_numeric_id_from_olid("OL123W")
183 '123'
184 >>> extract_numeric_id_from_olid("/authors/OL123A")
185 '123'
186 """
187 if '/' in olid:
188 olid = olid.split('/')[-1]
189 if olid.lower().startswith('ol'):
190 olid = olid[2:]
191 if not is_number(olid[-1].lower()):
192 olid = olid[:-1]
193 return olid
194
195
196 def is_number(s):
197 """
198 >>> all(is_number(n) for n in (1234, "1234", -1234, "-1234", 123.4, -123.4))
199 True
200 >>> not any(is_number(n) for n in ("123.4", "-123.4", "123a", "--1234"))
201 True
202 """
203 try:
204 int(s)
205 return True
206 except ValueError:
207 return False
208
209
210 def get_software_version() -> str:
211 """
212 assert get_software_version() # Should never return a falsy value
213 """
214 cmd = "git rev-parse --short HEAD --".split()
215 return run(cmd, capture_output=True, text=True, check=True).stdout.strip()
216
217
218 # See https://docs.python.org/3/library/enum.html#orderedenum
219 class OrderedEnum(Enum):
220 def __ge__(self, other):
221 if self.__class__ is other.__class__:
222 return self.value >= other.value
223 return NotImplemented
224
225 def __gt__(self, other):
226 if self.__class__ is other.__class__:
227 return self.value > other.value
228 return NotImplemented
229
230 def __le__(self, other):
231 if self.__class__ is other.__class__:
232 return self.value <= other.value
233 return NotImplemented
234
235 def __lt__(self, other):
236 if self.__class__ is other.__class__:
237 return self.value < other.value
238 return NotImplemented
239
[end of openlibrary/utils/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openlibrary/utils/__init__.py b/openlibrary/utils/__init__.py
--- a/openlibrary/utils/__init__.py
+++ b/openlibrary/utils/__init__.py
@@ -2,7 +2,7 @@
from enum import Enum
import re
-from subprocess import run
+from subprocess import CalledProcessError, run
from typing import TypeVar, Literal, Optional
from collections.abc import Iterable, Callable
@@ -212,7 +212,10 @@
assert get_software_version() # Should never return a falsy value
"""
cmd = "git rev-parse --short HEAD --".split()
- return run(cmd, capture_output=True, text=True, check=True).stdout.strip()
+ try:
+ return run(cmd, capture_output=True, text=True, check=True).stdout.strip()
+ except CalledProcessError:
+ return "unknown"
# See https://docs.python.org/3/library/enum.html#orderedenum
|
{"golden_diff": "diff --git a/openlibrary/utils/__init__.py b/openlibrary/utils/__init__.py\n--- a/openlibrary/utils/__init__.py\n+++ b/openlibrary/utils/__init__.py\n@@ -2,7 +2,7 @@\n \n from enum import Enum\n import re\n-from subprocess import run\n+from subprocess import CalledProcessError, run\n from typing import TypeVar, Literal, Optional\n from collections.abc import Iterable, Callable\n \n@@ -212,7 +212,10 @@\n assert get_software_version() # Should never return a falsy value\n \"\"\"\n cmd = \"git rev-parse --short HEAD --\".split()\n- return run(cmd, capture_output=True, text=True, check=True).stdout.strip()\n+ try:\n+ return run(cmd, capture_output=True, text=True, check=True).stdout.strip()\n+ except CalledProcessError:\n+ return \"unknown\"\n \n \n # See https://docs.python.org/3/library/enum.html#orderedenum\n", "issue": "DOCKER: git rev-parse --short HEAD --` returns a non-zero exit status 128. web container won't load\n### Evidence / Screenshot (if possible)\r\n\r\n\r\n### Steps to Reproduce\r\n\r\n1. Follow the instructions from openlibrary/docker/README.md to build a new docker setup\r\n2. Run docker compose up\r\n\r\n* Actual:\r\nThe errors shown on the screen capture are thrown, and the _web_ container fails to get up.\r\n* Expected:\r\n_web_ container successfully brought up\r\n\r\n### Proposal & Constraints\r\nThe issue seems to be on the `get_software_version()` function called on `openlibrary/plugins/openlibrary/status.py`, when `git rev-parse --short HEAD --` returns a non-zero exit status 128.\r\n\r\nI put the function call between quotation marks as a test, so that \"Software version\" becomes a hardcoded string, and after that everything seems to load and work just fine.\r\n\r\n\n", "before_files": [{"content": "\"\"\"Generic utilities\"\"\"\n\nfrom enum import Enum\nimport re\nfrom subprocess import run\nfrom typing import TypeVar, Literal, Optional\nfrom collections.abc import Iterable, Callable\n\nto_drop = set(''';/?:@&=+$,<>#%\"{}|\\\\^[]`\\n\\r''')\n\n\ndef str_to_key(s: str) -> str:\n \"\"\"\n >>> str_to_key(\"?H$e##l{o}[0] -world!\")\n 'helo0_-world!'\n >>> str_to_key(\"\".join(to_drop))\n ''\n >>> str_to_key(\"\")\n ''\n \"\"\"\n return ''.join(c if c != ' ' else '_' for c in s.lower() if c not in to_drop)\n\n\ndef finddict(dicts, **filters):\n \"\"\"Find a dictionary that matches given filter conditions.\n\n >>> dicts = [{\"x\": 1, \"y\": 2}, {\"x\": 3, \"y\": 4}]\n >>> sorted(finddict(dicts, x=1).items())\n [('x', 1), ('y', 2)]\n \"\"\"\n for d in dicts:\n if all(d.get(k) == v for k, v in filters.items()):\n return d\n\n\nT = TypeVar('T')\n\n\ndef uniq(values: Iterable[T], key=None) -> list[T]:\n \"\"\"Returns the unique entries from the given values in the original order.\n\n The value of the optional `key` parameter should be a function that takes\n a single argument and returns a key to test the uniqueness.\n TODO: Moved this to core/utils.py\n\n >>> uniq(\"abcbcddefefg\")\n ['a', 'b', 'c', 'd', 'e', 'f', 'g']\n >>> uniq(\"011223344556677889\")\n ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']\n \"\"\"\n key = key or (lambda x: x)\n s = set()\n result = []\n for v in values:\n k = key(v)\n if k not in s:\n s.add(k)\n result.append(v)\n return result\n\n\ndef take_best(\n items: list[T],\n optimization: Literal[\"min\", \"max\"],\n scoring_fn: Callable[[T], float],\n) -> list[T]:\n \"\"\"\n >>> take_best([], 'min', lambda x: x)\n []\n >>> take_best([3, 2, 1], 'min', lambda x: x)\n [1]\n >>> take_best([3, 4, 5], 'max', lambda x: x)\n [5]\n >>> 
take_best([4, 1, -1, -1], 'min', lambda x: x)\n [-1, -1]\n \"\"\"\n best_score = float(\"-inf\") if optimization == \"max\" else float(\"inf\")\n besties = []\n for item in items:\n score = scoring_fn(item)\n if (optimization == \"max\" and score > best_score) or (\n optimization == \"min\" and score < best_score\n ):\n best_score = score\n besties = [item]\n elif score == best_score:\n besties.append(item)\n else:\n continue\n return besties\n\n\ndef multisort_best(\n items: list[T], specs: list[tuple[Literal[\"min\", \"max\"], Callable[[T], float]]]\n) -> Optional[T]:\n \"\"\"\n Takes the best item, taking into account the multisorts\n\n >>> multisort_best([], [])\n\n >>> multisort_best([3,4,5], [('max', lambda x: x)])\n 5\n\n >>> multisort_best([\n ... {'provider': 'ia', 'size': 4},\n ... {'provider': 'ia', 'size': 12},\n ... {'provider': None, 'size': 42},\n ... ], [\n ... ('min', lambda x: 0 if x['provider'] == 'ia' else 1),\n ... ('max', lambda x: x['size']),\n ... ])\n {'provider': 'ia', 'size': 12}\n \"\"\"\n if not items:\n return None\n pool = items\n for optimization, fn in specs:\n # Shrink the pool down each time\n pool = take_best(pool, optimization, fn)\n return pool[0]\n\n\ndef dicthash(d):\n \"\"\"Dictionaries are not hashable. This function converts dictionary into nested\n tuples, so that it can hashed.\n \"\"\"\n if isinstance(d, dict):\n return tuple((k, dicthash(d[k])) for k in sorted(d))\n elif isinstance(d, list):\n return tuple(dicthash(v) for v in d)\n else:\n return d\n\n\nolid_re = re.compile(r'OL\\d+[A-Z]', re.IGNORECASE)\n\n\ndef find_olid_in_string(s: str, olid_suffix: str | None = None) -> str | None:\n \"\"\"\n >>> find_olid_in_string(\"ol123w\")\n 'OL123W'\n >>> find_olid_in_string(\"/authors/OL123A/DAVIE_BOWIE\")\n 'OL123A'\n >>> find_olid_in_string(\"/authors/OL123A/DAVIE_BOWIE\", \"W\")\n >>> find_olid_in_string(\"some random string\")\n \"\"\"\n found = re.search(olid_re, s)\n if not found:\n return None\n olid = found.group(0).upper()\n\n if olid_suffix and not olid.endswith(olid_suffix):\n return None\n\n return olid\n\n\ndef olid_to_key(olid: str) -> str:\n \"\"\"\n >>> olid_to_key('OL123W')\n '/works/OL123W'\n >>> olid_to_key('OL123A')\n '/authors/OL123A'\n >>> olid_to_key('OL123M')\n '/books/OL123M'\n >>> olid_to_key(\"OL123L\")\n '/lists/OL123L'\n \"\"\"\n typ = {\n 'A': 'authors',\n 'W': 'works',\n 'M': 'books',\n 'L': 'lists',\n }[olid[-1]]\n if not typ:\n raise ValueError(f\"Invalid olid: {olid}\")\n return f\"/{typ}/{olid}\"\n\n\ndef extract_numeric_id_from_olid(olid):\n \"\"\"\n >>> extract_numeric_id_from_olid(\"OL123W\")\n '123'\n >>> extract_numeric_id_from_olid(\"/authors/OL123A\")\n '123'\n \"\"\"\n if '/' in olid:\n olid = olid.split('/')[-1]\n if olid.lower().startswith('ol'):\n olid = olid[2:]\n if not is_number(olid[-1].lower()):\n olid = olid[:-1]\n return olid\n\n\ndef is_number(s):\n \"\"\"\n >>> all(is_number(n) for n in (1234, \"1234\", -1234, \"-1234\", 123.4, -123.4))\n True\n >>> not any(is_number(n) for n in (\"123.4\", \"-123.4\", \"123a\", \"--1234\"))\n True\n \"\"\"\n try:\n int(s)\n return True\n except ValueError:\n return False\n\n\ndef get_software_version() -> str:\n \"\"\"\n assert get_software_version() # Should never return a falsy value\n \"\"\"\n cmd = \"git rev-parse --short HEAD --\".split()\n return run(cmd, capture_output=True, text=True, check=True).stdout.strip()\n\n\n# See https://docs.python.org/3/library/enum.html#orderedenum\nclass OrderedEnum(Enum):\n def __ge__(self, other):\n if self.__class__ is 
other.__class__:\n return self.value >= other.value\n return NotImplemented\n\n def __gt__(self, other):\n if self.__class__ is other.__class__:\n return self.value > other.value\n return NotImplemented\n\n def __le__(self, other):\n if self.__class__ is other.__class__:\n return self.value <= other.value\n return NotImplemented\n\n def __lt__(self, other):\n if self.__class__ is other.__class__:\n return self.value < other.value\n return NotImplemented\n", "path": "openlibrary/utils/__init__.py"}]}
| 3,296 | 214 |
gh_patches_debug_61109
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1709
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
running `pre-commit autoupdate` fails because tip of HEAD is missing hook
Hello 👋
I'm setting up `pre-commit` on a project and came across an issue when adding the hook `destroyed-symlinks`. The error message suggested running `pre-commit autoupdate`. I ran that and saw that it cannot update because the tip of HEAD is missing that hook. I'm not sure what that means, so I'm posting here.
```console
$ echo ' - id: destroyed-symlinks' >> .pre-commit-config.yaml
$ git add -p !$
git add -p .pre-commit-config.yaml
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index bfde4717..949f3ffc 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -21,3 +21,4 @@ repos:
- id: check-vcs-permalinks
- id: check-xml
- id: debug-statements
+ - id: destroyed-symlinks
(1/1) Stage this hunk [y,n,q,a,d,e,?]? y
$ git commit -m 'new hook destroyed-symlinks'
[ERROR] `destroyed-symlinks` is not present in repository https://github.com/pre-commit/pre-commit-hooks. Typo? Perhaps it is introduced in a newer version? Often `pre-commit autoupdate` fixes this.
$ git status
On branch pre-commit
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
modified: .pre-commit-config.yaml
Untracked files:
(use "git add <file>..." to include in what will be committed)
tests/__init__.py
$ pre-commit autoupdate
Updating https://github.com/pre-commit/pre-commit-hooks ... [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
Cannot update because the tip of HEAD is missing these hooks:
destroyed-symlinks
$ git checkout .
Updated 0 paths from the index
$ pre-commit autoupdate
Updating https://github.com/pre-commit/pre-commit-hooks ... Cannot update because the tip of HEAD is missing these hooks:
destroyed-symlinks
$ pre-commit --version
pre-commit 2.9.0
```
</issue>
<code>
[start of pre_commit/commands/autoupdate.py]
1 import os.path
2 import re
3 from typing import Any
4 from typing import Dict
5 from typing import List
6 from typing import NamedTuple
7 from typing import Optional
8 from typing import Sequence
9 from typing import Tuple
10
11 import pre_commit.constants as C
12 from pre_commit import git
13 from pre_commit import output
14 from pre_commit.clientlib import InvalidManifestError
15 from pre_commit.clientlib import load_config
16 from pre_commit.clientlib import load_manifest
17 from pre_commit.clientlib import LOCAL
18 from pre_commit.clientlib import META
19 from pre_commit.commands.migrate_config import migrate_config
20 from pre_commit.store import Store
21 from pre_commit.util import CalledProcessError
22 from pre_commit.util import cmd_output
23 from pre_commit.util import cmd_output_b
24 from pre_commit.util import tmpdir
25 from pre_commit.util import yaml_dump
26 from pre_commit.util import yaml_load
27
28
29 class RevInfo(NamedTuple):
30 repo: str
31 rev: str
32 frozen: Optional[str]
33
34 @classmethod
35 def from_config(cls, config: Dict[str, Any]) -> 'RevInfo':
36 return cls(config['repo'], config['rev'], None)
37
38 def update(self, tags_only: bool, freeze: bool) -> 'RevInfo':
39 if tags_only:
40 tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--abbrev=0')
41 else:
42 tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--exact')
43
44 with tmpdir() as tmp:
45 git.init_repo(tmp, self.repo)
46 cmd_output_b('git', 'fetch', 'origin', 'HEAD', '--tags', cwd=tmp)
47
48 try:
49 rev = cmd_output(*tag_cmd, cwd=tmp)[1].strip()
50 except CalledProcessError:
51 cmd = ('git', 'rev-parse', 'FETCH_HEAD')
52 rev = cmd_output(*cmd, cwd=tmp)[1].strip()
53
54 frozen = None
55 if freeze:
56 exact = cmd_output('git', 'rev-parse', rev, cwd=tmp)[1].strip()
57 if exact != rev:
58 rev, frozen = exact, rev
59 return self._replace(rev=rev, frozen=frozen)
60
61
62 class RepositoryCannotBeUpdatedError(RuntimeError):
63 pass
64
65
66 def _check_hooks_still_exist_at_rev(
67 repo_config: Dict[str, Any],
68 info: RevInfo,
69 store: Store,
70 ) -> None:
71 try:
72 path = store.clone(repo_config['repo'], info.rev)
73 manifest = load_manifest(os.path.join(path, C.MANIFEST_FILE))
74 except InvalidManifestError as e:
75 raise RepositoryCannotBeUpdatedError(str(e))
76
77 # See if any of our hooks were deleted with the new commits
78 hooks = {hook['id'] for hook in repo_config['hooks']}
79 hooks_missing = hooks - {hook['id'] for hook in manifest}
80 if hooks_missing:
81 raise RepositoryCannotBeUpdatedError(
82 f'Cannot update because the tip of HEAD is missing these hooks:\n'
83 f'{", ".join(sorted(hooks_missing))}',
84 )
85
86
87 REV_LINE_RE = re.compile(r'^(\s+)rev:(\s*)([\'"]?)([^\s#]+)(.*)(\r?\n)$')
88
89
90 def _original_lines(
91 path: str,
92 rev_infos: List[Optional[RevInfo]],
93 retry: bool = False,
94 ) -> Tuple[List[str], List[int]]:
95 """detect `rev:` lines or reformat the file"""
96 with open(path, newline='') as f:
97 original = f.read()
98
99 lines = original.splitlines(True)
100 idxs = [i for i, line in enumerate(lines) if REV_LINE_RE.match(line)]
101 if len(idxs) == len(rev_infos):
102 return lines, idxs
103 elif retry:
104 raise AssertionError('could not find rev lines')
105 else:
106 with open(path, 'w') as f:
107 f.write(yaml_dump(yaml_load(original)))
108 return _original_lines(path, rev_infos, retry=True)
109
110
111 def _write_new_config(path: str, rev_infos: List[Optional[RevInfo]]) -> None:
112 lines, idxs = _original_lines(path, rev_infos)
113
114 for idx, rev_info in zip(idxs, rev_infos):
115 if rev_info is None:
116 continue
117 match = REV_LINE_RE.match(lines[idx])
118 assert match is not None
119 new_rev_s = yaml_dump({'rev': rev_info.rev}, default_style=match[3])
120 new_rev = new_rev_s.split(':', 1)[1].strip()
121 if rev_info.frozen is not None:
122 comment = f' # frozen: {rev_info.frozen}'
123 elif match[5].strip().startswith('# frozen:'):
124 comment = ''
125 else:
126 comment = match[5]
127 lines[idx] = f'{match[1]}rev:{match[2]}{new_rev}{comment}{match[6]}'
128
129 with open(path, 'w', newline='') as f:
130 f.write(''.join(lines))
131
132
133 def autoupdate(
134 config_file: str,
135 store: Store,
136 tags_only: bool,
137 freeze: bool,
138 repos: Sequence[str] = (),
139 ) -> int:
140 """Auto-update the pre-commit config to the latest versions of repos."""
141 migrate_config(config_file, quiet=True)
142 retv = 0
143 rev_infos: List[Optional[RevInfo]] = []
144 changed = False
145
146 config = load_config(config_file)
147 for repo_config in config['repos']:
148 if repo_config['repo'] in {LOCAL, META}:
149 continue
150
151 info = RevInfo.from_config(repo_config)
152 if repos and info.repo not in repos:
153 rev_infos.append(None)
154 continue
155
156 output.write(f'Updating {info.repo} ... ')
157 new_info = info.update(tags_only=tags_only, freeze=freeze)
158 try:
159 _check_hooks_still_exist_at_rev(repo_config, new_info, store)
160 except RepositoryCannotBeUpdatedError as error:
161 output.write_line(error.args[0])
162 rev_infos.append(None)
163 retv = 1
164 continue
165
166 if new_info.rev != info.rev:
167 changed = True
168 if new_info.frozen:
169 updated_to = f'{new_info.frozen} (frozen)'
170 else:
171 updated_to = new_info.rev
172 msg = f'updating {info.rev} -> {updated_to}.'
173 output.write_line(msg)
174 rev_infos.append(new_info)
175 else:
176 output.write_line('already up to date.')
177 rev_infos.append(None)
178
179 if changed:
180 _write_new_config(config_file, rev_infos)
181
182 return retv
183
[end of pre_commit/commands/autoupdate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py
--- a/pre_commit/commands/autoupdate.py
+++ b/pre_commit/commands/autoupdate.py
@@ -79,8 +79,8 @@
hooks_missing = hooks - {hook['id'] for hook in manifest}
if hooks_missing:
raise RepositoryCannotBeUpdatedError(
- f'Cannot update because the tip of HEAD is missing these hooks:\n'
- f'{", ".join(sorted(hooks_missing))}',
+ f'Cannot update because the update target is missing these '
+ f'hooks:\n{", ".join(sorted(hooks_missing))}',
)
|
{"golden_diff": "diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py\n--- a/pre_commit/commands/autoupdate.py\n+++ b/pre_commit/commands/autoupdate.py\n@@ -79,8 +79,8 @@\n hooks_missing = hooks - {hook['id'] for hook in manifest}\n if hooks_missing:\n raise RepositoryCannotBeUpdatedError(\n- f'Cannot update because the tip of HEAD is missing these hooks:\\n'\n- f'{\", \".join(sorted(hooks_missing))}',\n+ f'Cannot update because the update target is missing these '\n+ f'hooks:\\n{\", \".join(sorted(hooks_missing))}',\n )\n", "issue": "running `pre-commit autoupdate` fails because tip of HEAD is missing hook\nHello \ud83d\udc4b \r\nI'm setting up `pre-commit` on a project and came across an issue when adding hook `destroyed-symlinks`. The error message suggested running `pre-commit autoupdate`. I ran that and saw that it cannot update because the tip of HEAD is missing that hook. I'm not sure what that means so posting here.\r\n\r\n```console\r\n$ echo ' - id: destroyed-symlinks' >> .pre-commit-config.yaml\r\n$ git add -p !$\r\ngit add -p .pre-commit-config.yaml\r\ndiff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml\r\nindex bfde4717..949f3ffc 100644\r\n--- a/.pre-commit-config.yaml\r\n+++ b/.pre-commit-config.yaml\r\n@@ -21,3 +21,4 @@ repos:\r\n - id: check-vcs-permalinks\r\n - id: check-xml\r\n - id: debug-statements\r\n+ - id: destroyed-symlinks\r\n(1/1) Stage this hunk [y,n,q,a,d,e,?]? y\r\n\r\n$ git commit -m 'new hook destroyed-symlinks'\r\n[ERROR] `destroyed-symlinks` is not present in repository https://github.com/pre-commit/pre-commit-hooks. Typo? Perhaps it is introduced in a newer version? Often `pre-commit autoupdate` fixes this.\r\n$ git status\r\nOn branch pre-commit\r\nChanges to be committed:\r\n (use \"git restore --staged <file>...\" to unstage)\r\n modified: .pre-commit-config.yaml\r\n\r\nUntracked files:\r\n (use \"git add <file>...\" to include in what will be committed)\r\n tests/__init__.py\r\n\r\n$ pre-commit autoupdate\r\nUpdating https://github.com/pre-commit/pre-commit-hooks ... [INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.\r\nCannot update because the tip of HEAD is missing these hooks:\r\ndestroyed-symlinks\r\n$ git checkout .\r\nUpdated 0 paths from the index\r\n$ pre-commit autoupdate\r\nUpdating https://github.com/pre-commit/pre-commit-hooks ... 
Cannot update because the tip of HEAD is missing these hooks:\r\ndestroyed-symlinks\r\n$ pre-commit --version\r\npre-commit 2.9.0\r\n```\n", "before_files": [{"content": "import os.path\nimport re\nfrom typing import Any\nfrom typing import Dict\nfrom typing import List\nfrom typing import NamedTuple\nfrom typing import Optional\nfrom typing import Sequence\nfrom typing import Tuple\n\nimport pre_commit.constants as C\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import InvalidManifestError\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.clientlib import load_manifest\nfrom pre_commit.clientlib import LOCAL\nfrom pre_commit.clientlib import META\nfrom pre_commit.commands.migrate_config import migrate_config\nfrom pre_commit.store import Store\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import tmpdir\nfrom pre_commit.util import yaml_dump\nfrom pre_commit.util import yaml_load\n\n\nclass RevInfo(NamedTuple):\n repo: str\n rev: str\n frozen: Optional[str]\n\n @classmethod\n def from_config(cls, config: Dict[str, Any]) -> 'RevInfo':\n return cls(config['repo'], config['rev'], None)\n\n def update(self, tags_only: bool, freeze: bool) -> 'RevInfo':\n if tags_only:\n tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--abbrev=0')\n else:\n tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--exact')\n\n with tmpdir() as tmp:\n git.init_repo(tmp, self.repo)\n cmd_output_b('git', 'fetch', 'origin', 'HEAD', '--tags', cwd=tmp)\n\n try:\n rev = cmd_output(*tag_cmd, cwd=tmp)[1].strip()\n except CalledProcessError:\n cmd = ('git', 'rev-parse', 'FETCH_HEAD')\n rev = cmd_output(*cmd, cwd=tmp)[1].strip()\n\n frozen = None\n if freeze:\n exact = cmd_output('git', 'rev-parse', rev, cwd=tmp)[1].strip()\n if exact != rev:\n rev, frozen = exact, rev\n return self._replace(rev=rev, frozen=frozen)\n\n\nclass RepositoryCannotBeUpdatedError(RuntimeError):\n pass\n\n\ndef _check_hooks_still_exist_at_rev(\n repo_config: Dict[str, Any],\n info: RevInfo,\n store: Store,\n) -> None:\n try:\n path = store.clone(repo_config['repo'], info.rev)\n manifest = load_manifest(os.path.join(path, C.MANIFEST_FILE))\n except InvalidManifestError as e:\n raise RepositoryCannotBeUpdatedError(str(e))\n\n # See if any of our hooks were deleted with the new commits\n hooks = {hook['id'] for hook in repo_config['hooks']}\n hooks_missing = hooks - {hook['id'] for hook in manifest}\n if hooks_missing:\n raise RepositoryCannotBeUpdatedError(\n f'Cannot update because the tip of HEAD is missing these hooks:\\n'\n f'{\", \".join(sorted(hooks_missing))}',\n )\n\n\nREV_LINE_RE = re.compile(r'^(\\s+)rev:(\\s*)([\\'\"]?)([^\\s#]+)(.*)(\\r?\\n)$')\n\n\ndef _original_lines(\n path: str,\n rev_infos: List[Optional[RevInfo]],\n retry: bool = False,\n) -> Tuple[List[str], List[int]]:\n \"\"\"detect `rev:` lines or reformat the file\"\"\"\n with open(path, newline='') as f:\n original = f.read()\n\n lines = original.splitlines(True)\n idxs = [i for i, line in enumerate(lines) if REV_LINE_RE.match(line)]\n if len(idxs) == len(rev_infos):\n return lines, idxs\n elif retry:\n raise AssertionError('could not find rev lines')\n else:\n with open(path, 'w') as f:\n f.write(yaml_dump(yaml_load(original)))\n return _original_lines(path, rev_infos, retry=True)\n\n\ndef _write_new_config(path: str, rev_infos: List[Optional[RevInfo]]) -> None:\n lines, idxs = _original_lines(path, rev_infos)\n\n 
for idx, rev_info in zip(idxs, rev_infos):\n if rev_info is None:\n continue\n match = REV_LINE_RE.match(lines[idx])\n assert match is not None\n new_rev_s = yaml_dump({'rev': rev_info.rev}, default_style=match[3])\n new_rev = new_rev_s.split(':', 1)[1].strip()\n if rev_info.frozen is not None:\n comment = f' # frozen: {rev_info.frozen}'\n elif match[5].strip().startswith('# frozen:'):\n comment = ''\n else:\n comment = match[5]\n lines[idx] = f'{match[1]}rev:{match[2]}{new_rev}{comment}{match[6]}'\n\n with open(path, 'w', newline='') as f:\n f.write(''.join(lines))\n\n\ndef autoupdate(\n config_file: str,\n store: Store,\n tags_only: bool,\n freeze: bool,\n repos: Sequence[str] = (),\n) -> int:\n \"\"\"Auto-update the pre-commit config to the latest versions of repos.\"\"\"\n migrate_config(config_file, quiet=True)\n retv = 0\n rev_infos: List[Optional[RevInfo]] = []\n changed = False\n\n config = load_config(config_file)\n for repo_config in config['repos']:\n if repo_config['repo'] in {LOCAL, META}:\n continue\n\n info = RevInfo.from_config(repo_config)\n if repos and info.repo not in repos:\n rev_infos.append(None)\n continue\n\n output.write(f'Updating {info.repo} ... ')\n new_info = info.update(tags_only=tags_only, freeze=freeze)\n try:\n _check_hooks_still_exist_at_rev(repo_config, new_info, store)\n except RepositoryCannotBeUpdatedError as error:\n output.write_line(error.args[0])\n rev_infos.append(None)\n retv = 1\n continue\n\n if new_info.rev != info.rev:\n changed = True\n if new_info.frozen:\n updated_to = f'{new_info.frozen} (frozen)'\n else:\n updated_to = new_info.rev\n msg = f'updating {info.rev} -> {updated_to}.'\n output.write_line(msg)\n rev_infos.append(new_info)\n else:\n output.write_line('already up to date.')\n rev_infos.append(None)\n\n if changed:\n _write_new_config(config_file, rev_infos)\n\n return retv\n", "path": "pre_commit/commands/autoupdate.py"}]}
| 2,935 | 153 |
gh_patches_debug_10228
|
rasdani/github-patches
|
git_diff
|
fedora-infra__bodhi-1520
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py uses server default
The ```alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py``` migration uses a server default, which is not allowed by BDR:
```
[bowlofeggs@bodhi-backend01 ~][STG]$ sudo /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade 12d3e8695f90 -> 9241378c92ab, Convert the builds table to be polymorphic.
Traceback (most recent call last):
File "/usr/bin/alembic", line 12, in <module>
sys.exit(load_entry_point('alembic', 'console_scripts', 'alembic')())
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 479, in main
CommandLine(prog=prog).main(argv=argv)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 473, in main
self.run_cmd(cfg, options)
File "/usr/lib/python2.7/site-packages/alembic/config.py", line 456, in run_cmd
**dict((k, getattr(options, k)) for k in kwarg)
File "/usr/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
script.run_env()
File "/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
util.load_python_file(self.dir, 'env.py')
File "/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "/usr/share/bodhi/alembic/env.py", line 83, in <module>
run_migrations_online()
File "/usr/share/bodhi/alembic/env.py", line 76, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 797, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 312, in run_migrations
step.migration_fn(**kw)
File "/usr/share/bodhi/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py", line 19, in upgrade
op.add_column('builds', sa.Column('type', sa.Integer(), nullable=False, server_default=u'1'))
File "<string>", line 8, in add_column
File "<string>", line 3, in add_column
File "/usr/lib/python2.7/site-packages/alembic/operations/ops.py", line 1535, in add_column
return operations.invoke(op)
File "/usr/lib/python2.7/site-packages/alembic/operations/base.py", line 318, in invoke
return fn(self, operation)
File "/usr/lib/python2.7/site-packages/alembic/operations/toimpl.py", line 123, in add_column
schema=schema
File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 172, in add_column
self._exec(base.AddColumn(table_name, column, schema=schema))
File "/usr/lib/python2.7/site-packages/alembic/ddl/impl.py", line 118, in _exec
return conn.execute(construct, *multiparams, **params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 914, in execute
return meth(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 968, in _execute_ddl
compiled
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1146, in _execute_context
context)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1341, in _handle_dbapi_exception
exc_info
File "/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py", line 1139, in _execute_context
context)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py", line 450, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.NotSupportedError: (psycopg2.NotSupportedError) ALTER TABLE ... ADD COLUMN ... DEFAULT may only affect UNLOGGED or TEMPORARY tables when BDR is active; builds is a regular table
[SQL: "ALTER TABLE builds ADD COLUMN type INTEGER DEFAULT '1' NOT NULL"]
```
</issue>
<code>
[start of alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py]
1 """Convert the builds table to be polymorphic.
2
3 Revision ID: 9241378c92ab
4 Revises: 12d3e8695f90
5 Create Date: 2017-04-06 20:37:24.766366
6 """
7 from alembic import op
8 import sqlalchemy as sa
9
10
11 # revision identifiers, used by Alembic.
12 revision = '9241378c92ab'
13 down_revision = '12d3e8695f90'
14
15
16 def upgrade():
17 """Add the type column to the builds table."""
18 # The default of ``1`` is the RPM Build type.
19 op.add_column('builds', sa.Column('type', sa.Integer(), nullable=False, server_default=u'1'))
20 op.alter_column('builds', 'type', server_default=None)
21
22
23 def downgrade():
24 """Remove the type column from the builds table."""
25 op.drop_column('builds', 'type')
26
[end of alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py b/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py
--- a/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py
+++ b/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py
@@ -15,9 +15,11 @@
def upgrade():
"""Add the type column to the builds table."""
- # The default of ``1`` is the RPM Build type.
- op.add_column('builds', sa.Column('type', sa.Integer(), nullable=False, server_default=u'1'))
- op.alter_column('builds', 'type', server_default=None)
+ builds = sa.sql.table('builds', sa.sql.column('type', sa.Integer()))
+ op.add_column('builds', sa.Column('type', sa.Integer(), nullable=True))
+ # The type 1 is the RPM Build type.
+ op.execute(builds.update().values({'type': 1}))
+ op.alter_column('builds', 'type', nullable=False)
def downgrade():
|
{"golden_diff": "diff --git a/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py b/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py\n--- a/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py\n+++ b/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py\n@@ -15,9 +15,11 @@\n \n def upgrade():\n \"\"\"Add the type column to the builds table.\"\"\"\n- # The default of ``1`` is the RPM Build type.\n- op.add_column('builds', sa.Column('type', sa.Integer(), nullable=False, server_default=u'1'))\n- op.alter_column('builds', 'type', server_default=None)\n+ builds = sa.sql.table('builds', sa.sql.column('type', sa.Integer()))\n+ op.add_column('builds', sa.Column('type', sa.Integer(), nullable=True))\n+ # The type 1 is the RPM Build type.\n+ op.execute(builds.update().values({'type': 1}))\n+ op.alter_column('builds', 'type', nullable=False)\n \n \n def downgrade():\n", "issue": "alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py uses server default\nThe ```alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py``` migration uses a server default, which is not allowed by BDR:\r\n\r\n```\r\n[bowlofeggs@bodhi-backend01 ~][STG]$ sudo /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head\r\nINFO [alembic.runtime.migration] Context impl PostgresqlImpl.\r\nINFO [alembic.runtime.migration] Will assume transactional DDL.\r\nINFO [alembic.runtime.migration] Running upgrade 12d3e8695f90 -> 9241378c92ab, Convert the builds table to be polymorphic.\r\nTraceback (most recent call last):\r\n File \"/usr/bin/alembic\", line 12, in <module>\r\n sys.exit(load_entry_point('alembic', 'console_scripts', 'alembic')())\r\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 479, in main\r\n CommandLine(prog=prog).main(argv=argv)\r\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 473, in main\r\n self.run_cmd(cfg, options)\r\n File \"/usr/lib/python2.7/site-packages/alembic/config.py\", line 456, in run_cmd\r\n **dict((k, getattr(options, k)) for k in kwarg)\r\n File \"/usr/lib/python2.7/site-packages/alembic/command.py\", line 174, in upgrade\r\n script.run_env()\r\n File \"/usr/lib/python2.7/site-packages/alembic/script/base.py\", line 397, in run_env\r\n util.load_python_file(self.dir, 'env.py')\r\n File \"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py\", line 93, in load_python_file\r\n module = load_module_py(module_id, path)\r\n File \"/usr/lib/python2.7/site-packages/alembic/util/compat.py\", line 79, in load_module_py\r\n mod = imp.load_source(module_id, path, fp)\r\n File \"/usr/share/bodhi/alembic/env.py\", line 83, in <module>\r\n run_migrations_online()\r\n File \"/usr/share/bodhi/alembic/env.py\", line 76, in run_migrations_online\r\n context.run_migrations()\r\n File \"<string>\", line 8, in run_migrations\r\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/environment.py\", line 797, in run_migrations\r\n self.get_context().run_migrations(**kw)\r\n File \"/usr/lib/python2.7/site-packages/alembic/runtime/migration.py\", line 312, in run_migrations\r\n step.migration_fn(**kw)\r\n File \"/usr/share/bodhi/alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py\", line 19, in upgrade\r\n op.add_column('builds', sa.Column('type', sa.Integer(), nullable=False, server_default=u'1'))\r\n File \"<string>\", line 8, in add_column\r\n File \"<string>\", line 3, in add_column\r\n File \"/usr/lib/python2.7/site-packages/alembic/operations/ops.py\", line 1535, in add_column\r\n return 
operations.invoke(op)\r\n File \"/usr/lib/python2.7/site-packages/alembic/operations/base.py\", line 318, in invoke\r\n return fn(self, operation)\r\n File \"/usr/lib/python2.7/site-packages/alembic/operations/toimpl.py\", line 123, in add_column\r\n schema=schema\r\n File \"/usr/lib/python2.7/site-packages/alembic/ddl/impl.py\", line 172, in add_column\r\n self._exec(base.AddColumn(table_name, column, schema=schema))\r\n File \"/usr/lib/python2.7/site-packages/alembic/ddl/impl.py\", line 118, in _exec\r\n return conn.execute(construct, *multiparams, **params)\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 914, in execute\r\n return meth(self, multiparams, params)\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/sql/ddl.py\", line 68, in _execute_on_connection\r\n return connection._execute_ddl(self, multiparams, params)\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 968, in _execute_ddl\r\n compiled\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 1146, in _execute_context\r\n context)\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 1341, in _handle_dbapi_exception\r\n exc_info\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py\", line 203, in raise_from_cause\r\n reraise(type(exception), exception, tb=exc_tb, cause=cause)\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py\", line 1139, in _execute_context\r\n context)\r\n File \"/usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py\", line 450, in do_execute\r\n cursor.execute(statement, parameters)\r\nsqlalchemy.exc.NotSupportedError: (psycopg2.NotSupportedError) ALTER TABLE ... ADD COLUMN ... DEFAULT may only affect UNLOGGED or TEMPORARY tables when BDR is active; builds is a regular table\r\n [SQL: \"ALTER TABLE builds ADD COLUMN type INTEGER DEFAULT '1' NOT NULL\"]\r\n```\n", "before_files": [{"content": "\"\"\"Convert the builds table to be polymorphic.\n\nRevision ID: 9241378c92ab\nRevises: 12d3e8695f90\nCreate Date: 2017-04-06 20:37:24.766366\n\"\"\"\nfrom alembic import op\nimport sqlalchemy as sa\n\n\n# revision identifiers, used by Alembic.\nrevision = '9241378c92ab'\ndown_revision = '12d3e8695f90'\n\n\ndef upgrade():\n \"\"\"Add the type column to the builds table.\"\"\"\n # The default of ``1`` is the RPM Build type.\n op.add_column('builds', sa.Column('type', sa.Integer(), nullable=False, server_default=u'1'))\n op.alter_column('builds', 'type', server_default=None)\n\n\ndef downgrade():\n \"\"\"Remove the type column from the builds table.\"\"\"\n op.drop_column('builds', 'type')\n", "path": "alembic/versions/9241378c92ab_convert_the_builds_table_to_be_.py"}]}
| 2,199 | 294 |
gh_patches_debug_5168
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-13695
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
poisson
</issue>
<code>
[start of ivy/functional/frontends/jax/random.py]
1 # local
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4 from ivy.functional.frontends.jax.func_wrapper import (
5 to_ivy_arrays_and_back,
6 handle_jax_dtype,
7 )
8
9
10 @to_ivy_arrays_and_back
11 def PRNGKey(seed):
12 return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)
13
14
15 @handle_jax_dtype
16 @to_ivy_arrays_and_back
17 def uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):
18 return ivy.random_uniform(
19 low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])
20 )
21
22
23 @handle_jax_dtype
24 @to_ivy_arrays_and_back
25 def normal(key, shape=(), dtype=None):
26 return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))
27
28
29 def _get_seed(key):
30 key1, key2 = int(key[0]), int(key[1])
31 return ivy.to_scalar(int("".join(map(str, [key1, key2]))))
32
33
34 @handle_jax_dtype
35 @to_ivy_arrays_and_back
36 @with_unsupported_dtypes(
37 {
38 "0.3.14 and below": (
39 "float16",
40 "bfloat16",
41 )
42 },
43 "jax",
44 )
45 def beta(key, a, b, shape=None, dtype=None):
46 seed = _get_seed(key)
47 return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)
48
49
50 @handle_jax_dtype
51 @to_ivy_arrays_and_back
52 @with_unsupported_dtypes(
53 {
54 "0.3.14 and below": (
55 "float16",
56 "bfloat16",
57 )
58 },
59 "jax",
60 )
61 def dirichlet(key, alpha, shape=None, dtype="float32"):
62 seed = _get_seed(key)
63 alpha = ivy.astype(alpha, dtype)
64 return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)
65
[end of ivy/functional/frontends/jax/random.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py
--- a/ivy/functional/frontends/jax/random.py
+++ b/ivy/functional/frontends/jax/random.py
@@ -62,3 +62,14 @@
seed = _get_seed(key)
alpha = ivy.astype(alpha, dtype)
return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)
+
+
+@handle_jax_dtype
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes(
+ {"0.3.14 and below": ("unsigned", "int8", "int16")},
+ "jax",
+)
+def poisson(key, lam, shape=None, dtype=None):
+ seed = _get_seed(key)
+ return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed)
|
{"golden_diff": "diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py\n--- a/ivy/functional/frontends/jax/random.py\n+++ b/ivy/functional/frontends/jax/random.py\n@@ -62,3 +62,14 @@\n seed = _get_seed(key)\n alpha = ivy.astype(alpha, dtype)\n return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)\n+\n+\n+@handle_jax_dtype\n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes(\n+ {\"0.3.14 and below\": (\"unsigned\", \"int8\", \"int16\")},\n+ \"jax\",\n+)\n+def poisson(key, lam, shape=None, dtype=None):\n+ seed = _get_seed(key)\n+ return ivy.poisson(lam, shape=shape, dtype=dtype, seed=seed)\n", "issue": "poisson\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef PRNGKey(seed):\n return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):\n return ivy.random_uniform(\n low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])\n )\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef normal(key, shape=(), dtype=None):\n return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))\n\n\ndef _get_seed(key):\n key1, key2 = int(key[0]), int(key[1])\n return ivy.to_scalar(int(\"\".join(map(str, [key1, key2]))))\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef beta(key, a, b, shape=None, dtype=None):\n seed = _get_seed(key)\n return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes(\n {\n \"0.3.14 and below\": (\n \"float16\",\n \"bfloat16\",\n )\n },\n \"jax\",\n)\ndef dirichlet(key, alpha, shape=None, dtype=\"float32\"):\n seed = _get_seed(key)\n alpha = ivy.astype(alpha, dtype)\n return ivy.dirichlet(alpha, size=shape, dtype=dtype, seed=seed)\n", "path": "ivy/functional/frontends/jax/random.py"}]}
| 1,162 | 207 |
gh_patches_debug_42998
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-10983
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SnippetBulkAction not respecting models definition
<!--
Found a bug? Please fill out the sections below. 👍
-->
### Issue Summary
I'm registering a bulk action for a snippet model and declared it as specific to a single model via the `models` class variable, but the bulk action is shown on all snippet models, not only the one I declared.
### Steps to Reproduce
1. Declare the snippet viewset in `wagtail_hooks.py`:
```python
class PeriodicTaskSnippetViewSet(SnippetViewSet):
model = PeriodicTask
icon = "tasks"
menu_order = 100
list_display = ("name", "enabled", "scheduler", "interval", "start_time", "last_run_at", "one_off")
list_filter = ["enabled", "one_off", "task", "start_time", "last_run_at"]
search_fields = ["name"]
```
1. Also declare the bulk action:
```python
@hooks.register("register_bulk_action")
class EnableTaskBulkAction(SnippetBulkAction):
models = [PeriodicTask]
display_name = _("Enable")
aria_label = _("Enable selected tasks")
action_type = "enable"
template_name = "core/wagtailadmin/bulk_action_enable_tasks.html"
@classmethod
def execute_action(cls, objects, **kwargs):
for obj in objects:
obj.enabled = True
obj.save()
rows_updated = len(objects)
return rows_updated, rows_updated
```
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
The documentation (https://docs.wagtail.org/en/stable/extending/custom_bulk_actions.html#adding-bulk-actions-to-the-snippets-listing) describes how to limit an action to specific models for snippets, but that's not what happens.
After checking the code, I think this is because `get_models` is overridden on `SnippetBulkAction` in a way that ignores whether `models` was already defined by the user.
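To make the suspected failure mode concrete, here is a minimal standalone sketch (plain Python with made-up snippet names; only the `cls.models = get_snippet_models()` overwrite mirrors the real Wagtail code):
```python
def get_snippet_models():
    # stand-in for wagtail.snippets.models.get_snippet_models()
    return ["PeriodicTask", "OtherSnippet", "ThirdSnippet"]


class SnippetBulkAction:
    models = []

    @classmethod
    def get_models(cls):
        # current behaviour: unconditionally overwrite, so whatever the
        # subclass declared in `models` is discarded
        cls.models = get_snippet_models()
        return cls.models


class EnableTaskBulkAction(SnippetBulkAction):
    models = ["PeriodicTask"]  # the limit declared above


print(EnableTaskBulkAction.get_models())
# -> ['PeriodicTask', 'OtherSnippet', 'ThirdSnippet'], i.e. the declared limit is lost
```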
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.11.5
- Django version: 4.2.5
- Wagtail version: 5.1.2.
- Browser version: Chrome 117.
</issue>
<code>
[start of wagtail/snippets/bulk_actions/snippet_bulk_action.py]
1 from wagtail.admin.admin_url_finder import AdminURLFinder
2 from wagtail.admin.views.bulk_action import BulkAction
3 from wagtail.snippets.models import get_snippet_models
4
5
6 class SnippetBulkAction(BulkAction):
7 @classmethod
8 def get_models(cls):
9 # We used to set `models = get_snippet_models()` directly on the class,
10 # but this is problematic because it means that the list of models is
11 # evaluated at import time.
12
13 # Bulk actions are normally registered in wagtail_hooks.py, but snippets
14 # can also be registered in wagtail_hooks.py. Evaluating
15 # get_snippet_models() at import time could result in either a circular
16 # import or an incomplete list of models.
17
18 # Update the models list with the latest registered snippets in case
19 # there is user code that still accesses cls.models instead of calling
20 # this get_models() method.
21 cls.models = get_snippet_models()
22 return cls.models
23
24 def object_context(self, snippet):
25 return {
26 "item": snippet,
27 "edit_url": AdminURLFinder(self.request.user).get_edit_url(snippet),
28 }
29
30 def get_context_data(self, **kwargs):
31 kwargs.update(
32 {
33 "model_opts": self.model._meta,
34 "header_icon": self.model.snippet_viewset.icon,
35 }
36 )
37 return super().get_context_data(**kwargs)
38
39 def get_execution_context(self):
40 return {**super().get_execution_context(), "self": self}
41
[end of wagtail/snippets/bulk_actions/snippet_bulk_action.py]
[start of wagtail/admin/views/bulk_action/registry.py]
1 from wagtail import hooks
2 from wagtail.admin.views.bulk_action import BulkAction
3
4
5 class BulkActionRegistry:
6 def __init__(self):
7 self.actions = {} # {app_name: {model_name: {action_name: action_class]}}
8 self.has_scanned_for_bulk_actions = False
9
10 def _scan_for_bulk_actions(self):
11 if not self.has_scanned_for_bulk_actions:
12 for action_class in hooks.get_hooks("register_bulk_action"):
13 if not issubclass(action_class, BulkAction):
14 raise Exception(
15 "{} is not a subclass of {}".format(
16 action_class.__name__, BulkAction.__name__
17 )
18 )
19 for model in action_class.get_models():
20 self.actions.setdefault(model._meta.app_label, {})
21 self.actions[model._meta.app_label].setdefault(
22 model._meta.model_name, {}
23 )
24 self.actions[model._meta.app_label][model._meta.model_name][
25 action_class.action_type
26 ] = action_class
27 self.has_scanned_for_bulk_actions = True
28
29 def get_bulk_actions_for_model(self, app_label, model_name):
30 self._scan_for_bulk_actions()
31 return self.actions.get(app_label, {}).get(model_name, {}).values()
32
33 def get_bulk_action_class(self, app_label, model_name, action_type):
34 self._scan_for_bulk_actions()
35 return (
36 self.actions.get(app_label, {}).get(model_name, {}).get(action_type, None)
37 )
38
39
40 bulk_action_registry = BulkActionRegistry()
41
[end of wagtail/admin/views/bulk_action/registry.py]
[start of wagtail/admin/views/bulk_action/base_bulk_action.py]
1 from abc import ABC, abstractmethod
2
3 from django import forms
4 from django.db import transaction
5 from django.shortcuts import get_list_or_404, redirect
6 from django.views.generic import FormView
7
8 from wagtail import hooks
9 from wagtail.admin import messages
10 from wagtail.admin.utils import get_valid_next_url_from_request
11
12
13 class BulkAction(ABC, FormView):
14 @property
15 @abstractmethod
16 def display_name(self):
17 pass
18
19 @property
20 @abstractmethod
21 def action_type(self):
22 pass
23
24 @property
25 @abstractmethod
26 def aria_label(self):
27 pass
28
29 extras = {}
30 action_priority = 100
31 models = []
32 classes = set()
33
34 form_class = forms.Form
35 cleaned_form = None
36
37 def __init__(self, request, model):
38 self.request = request
39 next_url = get_valid_next_url_from_request(request)
40 if not next_url:
41 next_url = request.path
42 self.next_url = next_url
43 self.num_parent_objects = self.num_child_objects = 0
44 if model in self.get_models():
45 self.model = model
46 else:
47 raise Exception(
48 "model {} is not among the specified list of models".format(
49 model.__class__.__name__
50 )
51 )
52
53 @classmethod
54 def get_models(cls):
55 return cls.models
56
57 @classmethod
58 def get_queryset(cls, model, object_ids):
59 return get_list_or_404(model, pk__in=object_ids)
60
61 def check_perm(self, obj):
62 return True
63
64 @classmethod
65 def execute_action(cls, objects, **kwargs):
66 raise NotImplementedError("execute_action needs to be implemented")
67
68 def get_success_message(self, num_parent_objects, num_child_objects):
69 pass
70
71 def object_context(self, obj):
72 return {"item": obj}
73
74 @classmethod
75 def get_default_model(cls):
76 models = cls.get_models()
77 if len(models) == 1:
78 return models[0]
79 raise Exception(
80 "Cannot get default model if number of models is greater than 1"
81 )
82
83 def __run_before_hooks(self, action_type, request, objects):
84 for hook in hooks.get_hooks("before_bulk_action"):
85 result = hook(request, action_type, objects, self)
86 if hasattr(result, "status_code"):
87 return result
88
89 def __run_after_hooks(self, action_type, request, objects):
90 for hook in hooks.get_hooks("after_bulk_action"):
91 result = hook(request, action_type, objects, self)
92 if hasattr(result, "status_code"):
93 return result
94
95 def get_all_objects_in_listing_query(self, parent_id):
96 return self.model.objects.all().values_list("pk", flat=True)
97
98 def get_actionable_objects(self):
99 objects = []
100 items_with_no_access = []
101 object_ids = self.request.GET.getlist("id")
102 if "all" in object_ids:
103 object_ids = self.get_all_objects_in_listing_query(
104 self.request.GET.get("childOf")
105 )
106
107 for obj in self.get_queryset(self.model, object_ids):
108 if not self.check_perm(obj):
109 items_with_no_access.append(obj)
110 else:
111 objects.append(obj)
112 return objects, {"items_with_no_access": items_with_no_access}
113
114 def get_context_data(self, **kwargs):
115 items, items_with_no_access = self.get_actionable_objects()
116 _items = []
117 for item in items:
118 _items.append(self.object_context(item))
119 return {
120 **super().get_context_data(**kwargs),
121 "items": _items,
122 **items_with_no_access,
123 "next": self.next_url,
124 "submit_url": self.request.path + "?" + self.request.META["QUERY_STRING"],
125 }
126
127 def prepare_action(self, objects, objects_without_access):
128 return
129
130 def get_execution_context(self):
131 return {}
132
133 def form_valid(self, form):
134 request = self.request
135 self.cleaned_form = form
136 objects, objects_without_access = self.get_actionable_objects()
137 self.actionable_objects = objects
138 resp = self.prepare_action(objects, objects_without_access)
139 if hasattr(resp, "status_code"):
140 return resp
141 with transaction.atomic():
142 before_hook_result = self.__run_before_hooks(
143 self.action_type, request, objects
144 )
145 if before_hook_result is not None:
146 return before_hook_result
147 num_parent_objects, num_child_objects = self.execute_action(
148 objects, **self.get_execution_context()
149 )
150 after_hook_result = self.__run_after_hooks(
151 self.action_type, request, objects
152 )
153 if after_hook_result is not None:
154 return after_hook_result
155 success_message = self.get_success_message(
156 num_parent_objects, num_child_objects
157 )
158 if success_message is not None:
159 messages.success(request, success_message)
160 return redirect(self.next_url)
161
162 def form_invalid(self, form):
163 return super().form_invalid(form)
164
[end of wagtail/admin/views/bulk_action/base_bulk_action.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/admin/views/bulk_action/base_bulk_action.py b/wagtail/admin/views/bulk_action/base_bulk_action.py
--- a/wagtail/admin/views/bulk_action/base_bulk_action.py
+++ b/wagtail/admin/views/bulk_action/base_bulk_action.py
@@ -3,6 +3,7 @@
from django import forms
from django.db import transaction
from django.shortcuts import get_list_or_404, redirect
+from django.utils.functional import classproperty
from django.views.generic import FormView
from wagtail import hooks
@@ -28,7 +29,6 @@
extras = {}
action_priority = 100
- models = []
classes = set()
form_class = forms.Form
@@ -41,7 +41,7 @@
next_url = request.path
self.next_url = next_url
self.num_parent_objects = self.num_child_objects = 0
- if model in self.get_models():
+ if model in self.models:
self.model = model
else:
raise Exception(
@@ -50,9 +50,9 @@
)
)
- @classmethod
- def get_models(cls):
- return cls.models
+ @classproperty
+ def models(cls):
+ return []
@classmethod
def get_queryset(cls, model, object_ids):
@@ -73,7 +73,7 @@
@classmethod
def get_default_model(cls):
- models = cls.get_models()
+ models = cls.models
if len(models) == 1:
return models[0]
raise Exception(
diff --git a/wagtail/admin/views/bulk_action/registry.py b/wagtail/admin/views/bulk_action/registry.py
--- a/wagtail/admin/views/bulk_action/registry.py
+++ b/wagtail/admin/views/bulk_action/registry.py
@@ -16,7 +16,7 @@
action_class.__name__, BulkAction.__name__
)
)
- for model in action_class.get_models():
+ for model in action_class.models:
self.actions.setdefault(model._meta.app_label, {})
self.actions[model._meta.app_label].setdefault(
model._meta.model_name, {}
diff --git a/wagtail/snippets/bulk_actions/snippet_bulk_action.py b/wagtail/snippets/bulk_actions/snippet_bulk_action.py
--- a/wagtail/snippets/bulk_actions/snippet_bulk_action.py
+++ b/wagtail/snippets/bulk_actions/snippet_bulk_action.py
@@ -1,11 +1,13 @@
+from django.utils.functional import classproperty
+
from wagtail.admin.admin_url_finder import AdminURLFinder
from wagtail.admin.views.bulk_action import BulkAction
from wagtail.snippets.models import get_snippet_models
class SnippetBulkAction(BulkAction):
- @classmethod
- def get_models(cls):
+ @classproperty
+ def models(cls):
# We used to set `models = get_snippet_models()` directly on the class,
# but this is problematic because it means that the list of models is
# evaluated at import time.
@@ -14,12 +16,7 @@
# can also be registered in wagtail_hooks.py. Evaluating
# get_snippet_models() at import time could result in either a circular
# import or an incomplete list of models.
-
- # Update the models list with the latest registered snippets in case
- # there is user code that still accesses cls.models instead of calling
- # this get_models() method.
- cls.models = get_snippet_models()
- return cls.models
+ return get_snippet_models()
def object_context(self, snippet):
return {
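
As a quick standalone check of the mechanism this patch relies on (hypothetical classes, not the actual Wagtail ones): a plain `models` attribute on a subclass shadows the parent's `classproperty`, so a user-declared limit such as `models = [PeriodicTask]` wins again, while actions that declare nothing keep covering every snippet model.
```python
from django.utils.functional import classproperty


class BaseAction:
    @classproperty
    def models(cls):
        # stand-in for "every registered snippet model"
        return ["ModelA", "ModelB", "ModelC"]


class LimitedAction(BaseAction):
    models = ["ModelA"]  # plain class attribute shadows the classproperty


print(BaseAction.models)     # ['ModelA', 'ModelB', 'ModelC']
print(LimitedAction.models)  # ['ModelA'], the user-declared limit is respected
```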
|
{"golden_diff": "diff --git a/wagtail/admin/views/bulk_action/base_bulk_action.py b/wagtail/admin/views/bulk_action/base_bulk_action.py\n--- a/wagtail/admin/views/bulk_action/base_bulk_action.py\n+++ b/wagtail/admin/views/bulk_action/base_bulk_action.py\n@@ -3,6 +3,7 @@\n from django import forms\n from django.db import transaction\n from django.shortcuts import get_list_or_404, redirect\n+from django.utils.functional import classproperty\n from django.views.generic import FormView\n \n from wagtail import hooks\n@@ -28,7 +29,6 @@\n \n extras = {}\n action_priority = 100\n- models = []\n classes = set()\n \n form_class = forms.Form\n@@ -41,7 +41,7 @@\n next_url = request.path\n self.next_url = next_url\n self.num_parent_objects = self.num_child_objects = 0\n- if model in self.get_models():\n+ if model in self.models:\n self.model = model\n else:\n raise Exception(\n@@ -50,9 +50,9 @@\n )\n )\n \n- @classmethod\n- def get_models(cls):\n- return cls.models\n+ @classproperty\n+ def models(cls):\n+ return []\n \n @classmethod\n def get_queryset(cls, model, object_ids):\n@@ -73,7 +73,7 @@\n \n @classmethod\n def get_default_model(cls):\n- models = cls.get_models()\n+ models = cls.models\n if len(models) == 1:\n return models[0]\n raise Exception(\ndiff --git a/wagtail/admin/views/bulk_action/registry.py b/wagtail/admin/views/bulk_action/registry.py\n--- a/wagtail/admin/views/bulk_action/registry.py\n+++ b/wagtail/admin/views/bulk_action/registry.py\n@@ -16,7 +16,7 @@\n action_class.__name__, BulkAction.__name__\n )\n )\n- for model in action_class.get_models():\n+ for model in action_class.models:\n self.actions.setdefault(model._meta.app_label, {})\n self.actions[model._meta.app_label].setdefault(\n model._meta.model_name, {}\ndiff --git a/wagtail/snippets/bulk_actions/snippet_bulk_action.py b/wagtail/snippets/bulk_actions/snippet_bulk_action.py\n--- a/wagtail/snippets/bulk_actions/snippet_bulk_action.py\n+++ b/wagtail/snippets/bulk_actions/snippet_bulk_action.py\n@@ -1,11 +1,13 @@\n+from django.utils.functional import classproperty\n+\n from wagtail.admin.admin_url_finder import AdminURLFinder\n from wagtail.admin.views.bulk_action import BulkAction\n from wagtail.snippets.models import get_snippet_models\n \n \n class SnippetBulkAction(BulkAction):\n- @classmethod\n- def get_models(cls):\n+ @classproperty\n+ def models(cls):\n # We used to set `models = get_snippet_models()` directly on the class,\n # but this is problematic because it means that the list of models is\n # evaluated at import time.\n@@ -14,12 +16,7 @@\n # can also be registered in wagtail_hooks.py. Evaluating\n # get_snippet_models() at import time could result in either a circular\n # import or an incomplete list of models.\n-\n- # Update the models list with the latest registered snippets in case\n- # there is user code that still accesses cls.models instead of calling\n- # this get_models() method.\n- cls.models = get_snippet_models()\n- return cls.models\n+ return get_snippet_models()\n \n def object_context(self, snippet):\n return {\n", "issue": "SnippetBulkAction not respecting models definition\n<!--\r\nFound a bug? Please fill out the sections below. \ud83d\udc4d\r\n-->\r\n\r\n### Issue Summary\r\n\r\nI'm registering a bulk action for a snippet model and declared the bulk action as specific for a model with class variable `models`, but the bulk action is being showed on all snippet models and not only the one I've declared.\r\n\r\n### Steps to Reproduce\r\n\r\n1. 
Declare on `wagtail_hooks.py` the snippet:\r\n ```python\r\n class PeriodicTaskSnippetViewSet(SnippetViewSet):\r\n model = PeriodicTask\r\n icon = \"tasks\"\r\n menu_order = 100\r\n list_display = (\"name\", \"enabled\", \"scheduler\", \"interval\", \"start_time\", \"last_run_at\", \"one_off\")\r\n list_filter = [\"enabled\", \"one_off\", \"task\", \"start_time\", \"last_run_at\"]\r\n search_fields = [\"name\"]\r\n ```\r\n1. Also declare the bulk action:\r\n ```python\r\n @hooks.register(\"register_bulk_action\")\r\n class EnableTaskBulkAction(SnippetBulkAction):\r\n models = [PeriodicTask]\r\n display_name = _(\"Enable\")\r\n aria_label = _(\"Enable selected tasks\")\r\n action_type = \"enable\"\r\n template_name = \"core/wagtailadmin/bulk_action_enable_tasks.html\"\r\n\r\n @classmethod\r\n def execute_action(cls, objects, **kwargs):\r\n for obj in objects:\r\n obj.enabled = True\r\n obj.save()\r\n rows_updated = len(objects)\r\n return rows_updated, rows_updated\r\n ```\r\n\r\nAny other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?\r\n\r\nThe documentation (https://docs.wagtail.org/en/stable/extending/custom_bulk_actions.html#adding-bulk-actions-to-the-snippets-listing) says how to limit action to specific models on snippets, but that's not happening.\r\n\r\nAfter checking the code, I think that because `get_models` is being overwritten on `SnippetBulkAction` which ignores if `models` is already defined by the user or not. \r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes \r\n\r\n### Technical details\r\n\r\n- Python version: 3.11.5\r\n- Django version: 4.2.5\r\n- Wagtail version: 5.1.2.\r\n- Browser version: Chrome 117.\r\n\n", "before_files": [{"content": "from wagtail.admin.admin_url_finder import AdminURLFinder\nfrom wagtail.admin.views.bulk_action import BulkAction\nfrom wagtail.snippets.models import get_snippet_models\n\n\nclass SnippetBulkAction(BulkAction):\n @classmethod\n def get_models(cls):\n # We used to set `models = get_snippet_models()` directly on the class,\n # but this is problematic because it means that the list of models is\n # evaluated at import time.\n\n # Bulk actions are normally registered in wagtail_hooks.py, but snippets\n # can also be registered in wagtail_hooks.py. 
Evaluating\n # get_snippet_models() at import time could result in either a circular\n # import or an incomplete list of models.\n\n # Update the models list with the latest registered snippets in case\n # there is user code that still accesses cls.models instead of calling\n # this get_models() method.\n cls.models = get_snippet_models()\n return cls.models\n\n def object_context(self, snippet):\n return {\n \"item\": snippet,\n \"edit_url\": AdminURLFinder(self.request.user).get_edit_url(snippet),\n }\n\n def get_context_data(self, **kwargs):\n kwargs.update(\n {\n \"model_opts\": self.model._meta,\n \"header_icon\": self.model.snippet_viewset.icon,\n }\n )\n return super().get_context_data(**kwargs)\n\n def get_execution_context(self):\n return {**super().get_execution_context(), \"self\": self}\n", "path": "wagtail/snippets/bulk_actions/snippet_bulk_action.py"}, {"content": "from wagtail import hooks\nfrom wagtail.admin.views.bulk_action import BulkAction\n\n\nclass BulkActionRegistry:\n def __init__(self):\n self.actions = {} # {app_name: {model_name: {action_name: action_class]}}\n self.has_scanned_for_bulk_actions = False\n\n def _scan_for_bulk_actions(self):\n if not self.has_scanned_for_bulk_actions:\n for action_class in hooks.get_hooks(\"register_bulk_action\"):\n if not issubclass(action_class, BulkAction):\n raise Exception(\n \"{} is not a subclass of {}\".format(\n action_class.__name__, BulkAction.__name__\n )\n )\n for model in action_class.get_models():\n self.actions.setdefault(model._meta.app_label, {})\n self.actions[model._meta.app_label].setdefault(\n model._meta.model_name, {}\n )\n self.actions[model._meta.app_label][model._meta.model_name][\n action_class.action_type\n ] = action_class\n self.has_scanned_for_bulk_actions = True\n\n def get_bulk_actions_for_model(self, app_label, model_name):\n self._scan_for_bulk_actions()\n return self.actions.get(app_label, {}).get(model_name, {}).values()\n\n def get_bulk_action_class(self, app_label, model_name, action_type):\n self._scan_for_bulk_actions()\n return (\n self.actions.get(app_label, {}).get(model_name, {}).get(action_type, None)\n )\n\n\nbulk_action_registry = BulkActionRegistry()\n", "path": "wagtail/admin/views/bulk_action/registry.py"}, {"content": "from abc import ABC, abstractmethod\n\nfrom django import forms\nfrom django.db import transaction\nfrom django.shortcuts import get_list_or_404, redirect\nfrom django.views.generic import FormView\n\nfrom wagtail import hooks\nfrom wagtail.admin import messages\nfrom wagtail.admin.utils import get_valid_next_url_from_request\n\n\nclass BulkAction(ABC, FormView):\n @property\n @abstractmethod\n def display_name(self):\n pass\n\n @property\n @abstractmethod\n def action_type(self):\n pass\n\n @property\n @abstractmethod\n def aria_label(self):\n pass\n\n extras = {}\n action_priority = 100\n models = []\n classes = set()\n\n form_class = forms.Form\n cleaned_form = None\n\n def __init__(self, request, model):\n self.request = request\n next_url = get_valid_next_url_from_request(request)\n if not next_url:\n next_url = request.path\n self.next_url = next_url\n self.num_parent_objects = self.num_child_objects = 0\n if model in self.get_models():\n self.model = model\n else:\n raise Exception(\n \"model {} is not among the specified list of models\".format(\n model.__class__.__name__\n )\n )\n\n @classmethod\n def get_models(cls):\n return cls.models\n\n @classmethod\n def get_queryset(cls, model, object_ids):\n return get_list_or_404(model, pk__in=object_ids)\n\n def 
check_perm(self, obj):\n return True\n\n @classmethod\n def execute_action(cls, objects, **kwargs):\n raise NotImplementedError(\"execute_action needs to be implemented\")\n\n def get_success_message(self, num_parent_objects, num_child_objects):\n pass\n\n def object_context(self, obj):\n return {\"item\": obj}\n\n @classmethod\n def get_default_model(cls):\n models = cls.get_models()\n if len(models) == 1:\n return models[0]\n raise Exception(\n \"Cannot get default model if number of models is greater than 1\"\n )\n\n def __run_before_hooks(self, action_type, request, objects):\n for hook in hooks.get_hooks(\"before_bulk_action\"):\n result = hook(request, action_type, objects, self)\n if hasattr(result, \"status_code\"):\n return result\n\n def __run_after_hooks(self, action_type, request, objects):\n for hook in hooks.get_hooks(\"after_bulk_action\"):\n result = hook(request, action_type, objects, self)\n if hasattr(result, \"status_code\"):\n return result\n\n def get_all_objects_in_listing_query(self, parent_id):\n return self.model.objects.all().values_list(\"pk\", flat=True)\n\n def get_actionable_objects(self):\n objects = []\n items_with_no_access = []\n object_ids = self.request.GET.getlist(\"id\")\n if \"all\" in object_ids:\n object_ids = self.get_all_objects_in_listing_query(\n self.request.GET.get(\"childOf\")\n )\n\n for obj in self.get_queryset(self.model, object_ids):\n if not self.check_perm(obj):\n items_with_no_access.append(obj)\n else:\n objects.append(obj)\n return objects, {\"items_with_no_access\": items_with_no_access}\n\n def get_context_data(self, **kwargs):\n items, items_with_no_access = self.get_actionable_objects()\n _items = []\n for item in items:\n _items.append(self.object_context(item))\n return {\n **super().get_context_data(**kwargs),\n \"items\": _items,\n **items_with_no_access,\n \"next\": self.next_url,\n \"submit_url\": self.request.path + \"?\" + self.request.META[\"QUERY_STRING\"],\n }\n\n def prepare_action(self, objects, objects_without_access):\n return\n\n def get_execution_context(self):\n return {}\n\n def form_valid(self, form):\n request = self.request\n self.cleaned_form = form\n objects, objects_without_access = self.get_actionable_objects()\n self.actionable_objects = objects\n resp = self.prepare_action(objects, objects_without_access)\n if hasattr(resp, \"status_code\"):\n return resp\n with transaction.atomic():\n before_hook_result = self.__run_before_hooks(\n self.action_type, request, objects\n )\n if before_hook_result is not None:\n return before_hook_result\n num_parent_objects, num_child_objects = self.execute_action(\n objects, **self.get_execution_context()\n )\n after_hook_result = self.__run_after_hooks(\n self.action_type, request, objects\n )\n if after_hook_result is not None:\n return after_hook_result\n success_message = self.get_success_message(\n num_parent_objects, num_child_objects\n )\n if success_message is not None:\n messages.success(request, success_message)\n return redirect(self.next_url)\n\n def form_invalid(self, form):\n return super().form_invalid(form)\n", "path": "wagtail/admin/views/bulk_action/base_bulk_action.py"}]}
| 3,369 | 820 |
gh_patches_debug_41621
|
rasdani/github-patches
|
git_diff
|
watchdogpolska__feder-328
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CSV export of EmailLog
We introduced message delivery statistics in ```feder.letters.logs```. We should add an export of all the EmailLog data for a given monitoring, so that statistics or the like can be compiled from it.
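One possible shape for such an export, as a rough sketch only (the view name `email_log_csv`, the extra case/institution columns, and the omitted URL and permission wiring are all assumptions, not a finished design):
```python
import unicodecsv as csv
from django.http import HttpResponse

from feder.letters.logs.models import EmailLog


def email_log_csv(request, monitoring_pk):
    """Illustrative only: dump every EmailLog of one monitoring as CSV."""
    queryset = EmailLog.objects.filter(
        case__monitoring_id=monitoring_pk,
    ).select_related("case", "case__institution")

    # plain (non-relational) columns are taken straight from the model
    field_names = [f.name for f in EmailLog._meta.fields if f.related_model is None]

    response = HttpResponse(content_type="text/csv")
    response["Content-Disposition"] = "attachment; filename=email_log_{}.csv".format(monitoring_pk)

    writer = csv.writer(response)
    writer.writerow(field_names + ["case id", "case email", "institution"])
    for log in queryset:
        writer.writerow(
            [getattr(log, name) for name in field_names]
            + [log.case_id, log.case.email, log.case.institution.name]
        )
    return response
```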
</issue>
<code>
[start of feder/letters/logs/views.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from braces.views import SelectRelatedMixin, PrefetchRelatedMixin
5 from cached_property import cached_property
6 from django.shortcuts import get_object_or_404
7 from django.views.generic import DetailView, ListView
8
9 from feder.cases.models import Case
10 from feder.letters.logs.models import EmailLog
11 from feder.main.mixins import AttrPermissionRequiredMixin
12 from feder.monitorings.models import Monitoring
13
14
15 class ListMonitoringMixin(AttrPermissionRequiredMixin, SelectRelatedMixin):
16 select_related = ['case']
17 paginate_by = 100
18 model = EmailLog
19 permission_attribute = 'case__monitoring'
20 permission_required = 'monitorings.view_log'
21
22 def get_permission_object(self):
23 return self.monitoring
24
25 def get_queryset(self):
26 return super(ListMonitoringMixin, self).get_queryset().filter(case__monitoring=self.monitoring).with_logrecord_count()
27
28 def get_context_data(self, **kwargs):
29 kwargs['monitoring'] = self.monitoring
30 return super(ListMonitoringMixin, self).get_context_data(**kwargs)
31
32
33 class EmailLogMonitoringListView(ListMonitoringMixin, ListView):
34 template_name_suffix = '_list_for_monitoring'
35 permission_required = 'monitorings.view_log'
36
37 @cached_property
38 def monitoring(self):
39 return get_object_or_404(Monitoring, pk=self.kwargs['monitoring_pk'])
40
41
42 class EmailLogCaseListView(ListMonitoringMixin, ListView):
43 template_name_suffix = '_list_for_case'
44
45 @cached_property
46 def case(self):
47 return get_object_or_404(Case.objects.select_related('monitoring'),
48 pk=self.kwargs['case_pk'])
49
50 @cached_property
51 def monitoring(self):
52 return self.case.monitoring
53
54 def get_context_data(self, **kwargs):
55 kwargs['case'] = self.case
56 return super(EmailLogCaseListView, self).get_context_data(**kwargs)
57
58 def get_queryset(self):
59 return super(ListMonitoringMixin, self).get_queryset().filter(case=self.case)
60
61
62 class EmailLogDetailView(AttrPermissionRequiredMixin, PrefetchRelatedMixin,
63 SelectRelatedMixin, DetailView):
64 model = EmailLog
65 select_related = ['case__monitoring']
66 prefetch_related = ['logrecord_set']
67 permission_attribute = 'case__monitoring'
68 permission_required = 'monitorings.view_log'
69
[end of feder/letters/logs/views.py]
[start of feder/letters/logs/urls.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.conf.urls import url
5 from django.utils.translation import ugettext_lazy as _
6
7 from . import views
8
9 urlpatterns = [
10 url(_(r'^case-(?P<case_pk>[\d-]+)$'), views.EmailLogCaseListView.as_view(),
11 name="list"),
12 url(_(r'^monitoring-(?P<monitoring_pk>[\d-]+)$'), views.EmailLogMonitoringListView.as_view(),
13 name="list"),
14 url(_(r'^log-(?P<pk>[\d-]+)$'), views.EmailLogDetailView.as_view(),
15 name="detail"),
16 ]
17
[end of feder/letters/logs/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/feder/letters/logs/urls.py b/feder/letters/logs/urls.py
--- a/feder/letters/logs/urls.py
+++ b/feder/letters/logs/urls.py
@@ -11,6 +11,8 @@
name="list"),
url(_(r'^monitoring-(?P<monitoring_pk>[\d-]+)$'), views.EmailLogMonitoringListView.as_view(),
name="list"),
+ url(_(r'^monitoring-(?P<monitoring_pk>[\d-]+)/export$'), views.EmailLogMonitoringCsvView.as_view(),
+ name="export"),
url(_(r'^log-(?P<pk>[\d-]+)$'), views.EmailLogDetailView.as_view(),
name="detail"),
]
diff --git a/feder/letters/logs/views.py b/feder/letters/logs/views.py
--- a/feder/letters/logs/views.py
+++ b/feder/letters/logs/views.py
@@ -1,8 +1,12 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
+from django.utils import timezone
+import unicodecsv as csv
+
from braces.views import SelectRelatedMixin, PrefetchRelatedMixin
from cached_property import cached_property
+from django.http import HttpResponse
from django.shortcuts import get_object_or_404
from django.views.generic import DetailView, ListView
@@ -10,7 +14,7 @@
from feder.letters.logs.models import EmailLog
from feder.main.mixins import AttrPermissionRequiredMixin
from feder.monitorings.models import Monitoring
-
+from django.views.generic.list import ListView
class ListMonitoringMixin(AttrPermissionRequiredMixin, SelectRelatedMixin):
select_related = ['case']
@@ -39,6 +43,61 @@
return get_object_or_404(Monitoring, pk=self.kwargs['monitoring_pk'])
+class EmailLogMonitoringCsvView(ListMonitoringMixin, ListView):
+ permission_required = 'monitorings.view_log'
+
+ select_related = ['case', 'case__institution']
+
+ @cached_property
+ def monitoring(self):
+ return get_object_or_404(Monitoring, pk=self.kwargs['monitoring_pk'])
+
+ def get(self, *args, **kwargs):
+ response = self._get_csv_response()
+ self._write_rows(response, self.get_queryset())
+ return response
+
+ @staticmethod
+ def _get_base_model_field_names(queryset):
+ opts = queryset.model._meta
+ return [field.name for field in opts.fields if field.related_model is None]
+
+ def _get_csv_response(self):
+ csv_response = HttpResponse(content_type='text/csv')
+ current_time = timezone.now()
+ filename = 'email_log_{0}-{1}-{2}.csv'.format(self.monitoring.id,
+ current_time.strftime('%Y_%m_%d-%H_%M_%S'),
+ current_time.tzname()
+ )
+ csv_response['Content-Disposition'] = "attachment;filename={0}".format(filename)
+ return csv_response
+
+ def _write_rows(self, response, queryset):
+ writer = csv.writer(response)
+
+ # automatically add all fields from base table/model
+ base_field_names = self._get_base_model_field_names(queryset)
+
+ # print header row
+ writer.writerow(base_field_names +
+ [
+ 'case id',
+ 'case email',
+ 'institution',
+ 'institution id',
+ 'monitoring id']
+ )
+
+ for obj in queryset:
+ writer.writerow(
+ [getattr(obj, field) for field in base_field_names] + [
+ obj.case.id,
+ obj.case.email,
+ obj.case.institution.name,
+ obj.case.institution_id,
+ obj.case.monitoring_id,
+ ])
+
class EmailLogCaseListView(ListMonitoringMixin, ListView):
template_name_suffix = '_list_for_case'
|
{"golden_diff": "diff --git a/feder/letters/logs/urls.py b/feder/letters/logs/urls.py\n--- a/feder/letters/logs/urls.py\n+++ b/feder/letters/logs/urls.py\n@@ -11,6 +11,8 @@\n name=\"list\"),\n url(_(r'^monitoring-(?P<monitoring_pk>[\\d-]+)$'), views.EmailLogMonitoringListView.as_view(),\n name=\"list\"),\n+ url(_(r'^monitoring-(?P<monitoring_pk>[\\d-]+)/export$'), views.EmailLogMonitoringCsvView.as_view(),\n+ name=\"export\"),\n url(_(r'^log-(?P<pk>[\\d-]+)$'), views.EmailLogDetailView.as_view(),\n name=\"detail\"),\n ]\ndiff --git a/feder/letters/logs/views.py b/feder/letters/logs/views.py\n--- a/feder/letters/logs/views.py\n+++ b/feder/letters/logs/views.py\n@@ -1,8 +1,12 @@\n # -*- coding: utf-8 -*-\n from __future__ import unicode_literals\n \n+from django.utils import timezone\n+import unicodecsv as csv\n+\n from braces.views import SelectRelatedMixin, PrefetchRelatedMixin\n from cached_property import cached_property\n+from django.http import HttpResponse\n from django.shortcuts import get_object_or_404\n from django.views.generic import DetailView, ListView\n \n@@ -10,7 +14,7 @@\n from feder.letters.logs.models import EmailLog\n from feder.main.mixins import AttrPermissionRequiredMixin\n from feder.monitorings.models import Monitoring\n-\n+from django.views.generic.list import ListView\n \n class ListMonitoringMixin(AttrPermissionRequiredMixin, SelectRelatedMixin):\n select_related = ['case']\n@@ -39,6 +43,61 @@\n return get_object_or_404(Monitoring, pk=self.kwargs['monitoring_pk'])\n \n \n+class EmailLogMonitoringCsvView(ListMonitoringMixin, ListView):\n+ permission_required = 'monitorings.view_log'\n+\n+ select_related = ['case', 'case__institution']\n+\n+ @cached_property\n+ def monitoring(self):\n+ return get_object_or_404(Monitoring, pk=self.kwargs['monitoring_pk'])\n+\n+ def get(self, *args, **kwargs):\n+ response = self._get_csv_response()\n+ self._write_rows(response, self.get_queryset())\n+ return response\n+\n+ @staticmethod\n+ def _get_base_model_field_names(queryset):\n+ opts = queryset.model._meta\n+ return [field.name for field in opts.fields if field.related_model is None]\n+\n+ def _get_csv_response(self):\n+ csv_response = HttpResponse(content_type='text/csv')\n+ current_time = timezone.now()\n+ filename = 'email_log_{0}-{1}-{2}.csv'.format(self.monitoring.id,\n+ current_time.strftime('%Y_%m_%d-%H_%M_%S'),\n+ current_time.tzname()\n+ )\n+ csv_response['Content-Disposition'] = \"attachment;filename={0}\".format(filename)\n+ return csv_response\n+\n+ def _write_rows(self, response, queryset):\n+ writer = csv.writer(response)\n+\n+ # automatically add all fields from base table/model\n+ base_field_names = self._get_base_model_field_names(queryset)\n+\n+ # print header row\n+ writer.writerow(base_field_names +\n+ [\n+ 'case id',\n+ 'case email',\n+ 'institution',\n+ 'institution id',\n+ 'monitoring id']\n+ )\n+\n+ for obj in queryset:\n+ writer.writerow(\n+ [getattr(obj, field) for field in base_field_names] + [\n+ obj.case.id,\n+ obj.case.email,\n+ obj.case.institution.name,\n+ obj.case.institution_id,\n+ obj.case.monitoring_id,\n+ ])\n+\n class EmailLogCaseListView(ListMonitoringMixin, ListView):\n template_name_suffix = '_list_for_case'\n", "issue": "Eksport w CSV EmailLog \nWprowadzili\u015bmy w ```feder.letters.logs``` statystyki dostarczania wiadomo\u015bci. 
Nale\u017cy wprowadzi\u0107 zestawienie wszystkich danych z EmailLog dla danego monitoringu, aby mo\u017cna by\u0142o zrobi\u0107 statystyk\u0119 czy co\u015b.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom braces.views import SelectRelatedMixin, PrefetchRelatedMixin\nfrom cached_property import cached_property\nfrom django.shortcuts import get_object_or_404\nfrom django.views.generic import DetailView, ListView\n\nfrom feder.cases.models import Case\nfrom feder.letters.logs.models import EmailLog\nfrom feder.main.mixins import AttrPermissionRequiredMixin\nfrom feder.monitorings.models import Monitoring\n\n\nclass ListMonitoringMixin(AttrPermissionRequiredMixin, SelectRelatedMixin):\n select_related = ['case']\n paginate_by = 100\n model = EmailLog\n permission_attribute = 'case__monitoring'\n permission_required = 'monitorings.view_log'\n\n def get_permission_object(self):\n return self.monitoring\n\n def get_queryset(self):\n return super(ListMonitoringMixin, self).get_queryset().filter(case__monitoring=self.monitoring).with_logrecord_count()\n\n def get_context_data(self, **kwargs):\n kwargs['monitoring'] = self.monitoring\n return super(ListMonitoringMixin, self).get_context_data(**kwargs)\n\n\nclass EmailLogMonitoringListView(ListMonitoringMixin, ListView):\n template_name_suffix = '_list_for_monitoring'\n permission_required = 'monitorings.view_log'\n\n @cached_property\n def monitoring(self):\n return get_object_or_404(Monitoring, pk=self.kwargs['monitoring_pk'])\n\n\nclass EmailLogCaseListView(ListMonitoringMixin, ListView):\n template_name_suffix = '_list_for_case'\n\n @cached_property\n def case(self):\n return get_object_or_404(Case.objects.select_related('monitoring'),\n pk=self.kwargs['case_pk'])\n\n @cached_property\n def monitoring(self):\n return self.case.monitoring\n\n def get_context_data(self, **kwargs):\n kwargs['case'] = self.case\n return super(EmailLogCaseListView, self).get_context_data(**kwargs)\n\n def get_queryset(self):\n return super(ListMonitoringMixin, self).get_queryset().filter(case=self.case)\n\n\nclass EmailLogDetailView(AttrPermissionRequiredMixin, PrefetchRelatedMixin,\n SelectRelatedMixin, DetailView):\n model = EmailLog\n select_related = ['case__monitoring']\n prefetch_related = ['logrecord_set']\n permission_attribute = 'case__monitoring'\n permission_required = 'monitorings.view_log'\n", "path": "feder/letters/logs/views.py"}, {"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.conf.urls import url\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom . import views\n\nurlpatterns = [\n url(_(r'^case-(?P<case_pk>[\\d-]+)$'), views.EmailLogCaseListView.as_view(),\n name=\"list\"),\n url(_(r'^monitoring-(?P<monitoring_pk>[\\d-]+)$'), views.EmailLogMonitoringListView.as_view(),\n name=\"list\"),\n url(_(r'^log-(?P<pk>[\\d-]+)$'), views.EmailLogDetailView.as_view(),\n name=\"detail\"),\n]\n", "path": "feder/letters/logs/urls.py"}]}
| 1,434 | 860 |
gh_patches_debug_32913
|
rasdani/github-patches
|
git_diff
|
translate__pootle-3588
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
REGEXP not present in PostgreSQL
Postgres [fails on Travis tests](https://travis-ci.org/dwaynebailey/pootle/jobs/50400894#L2516) because of the use of [NOT REGEXP](http://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_not-regexp)
The equivalent in Postgres is to use [POSIX Regular Expressions](http://www.postgresql.org/docs/9.3/static/functions-matching.html#FUNCTIONS-POSIX-REGEXP)
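A minimal sketch of how the operator could be chosen per backend, assuming Django's `connection.vendor` and reusing the raw-SQL filter from `set_project_resource` below (`!~` is PostgreSQL's case-sensitive "does not match regex" operator, `!~*` the case-insensitive variant):
```python
from django.db import connection


def sql_not_regex():
    """Return the SQL "does not match regex" operator for the active backend."""
    # MySQL: NOT REGEXP ; PostgreSQL: !~ (POSIX regular expression operator)
    return '!~' if connection.vendor == 'postgresql' else 'NOT REGEXP'


# which would slot into the existing raw-SQL filter roughly as:
#   where=['pootle_store_store.pootle_path ' + sql_not_regex() + ' %s'],
#   params=[disabled_tps_regex]
```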
</issue>
<code>
[start of pootle/core/decorators.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright 2013 Zuza Software Foundation
5 # Copyright 2013-2015 Evernote Corporation
6 #
7 # This file is part of Pootle.
8 #
9 # Pootle is free software; you can redistribute it and/or modify
10 # it under the terms of the GNU General Public License as published by
11 # the Free Software Foundation; either version 2 of the License, or
12 # (at your option) any later version.
13 #
14 # This program is distributed in the hope that it will be useful,
15 # but WITHOUT ANY WARRANTY; without even the implied warranty of
16 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 # GNU General Public License for more details.
18 #
19 # You should have received a copy of the GNU General Public License
20 # along with this program; if not, see <http://www.gnu.org/licenses/>.
21
22 from functools import wraps
23
24 from django.contrib.auth import get_user_model
25 from django.core.exceptions import PermissionDenied
26 from django.core.urlresolvers import reverse
27 from django.http import Http404
28 from django.shortcuts import get_object_or_404, redirect
29 from django.utils.translation import ugettext as _
30
31 from pootle_app.models.directory import Directory
32 from pootle_app.models.permissions import (check_permission,
33 get_matching_permissions)
34 from pootle_language.models import Language
35 from pootle_project.models import Project, ProjectSet, ProjectResource
36 from pootle_store.models import Store
37 from pootle_translationproject.models import TranslationProject
38
39 from .exceptions import Http400
40 from .url_helpers import split_pootle_path
41
42
43 CLS2ATTR = {
44 'TranslationProject': 'translation_project',
45 'Project': 'project',
46 'Language': 'language',
47 }
48
49
50 def get_path_obj(func):
51 @wraps(func)
52 def wrapped(request, *args, **kwargs):
53 if request.is_ajax():
54 pootle_path = request.GET.get('path', None)
55 if pootle_path is None:
56 raise Http400(_('Arguments missing.'))
57
58 language_code, project_code, dir_path, filename = \
59 split_pootle_path(pootle_path)
60 kwargs['dir_path'] = dir_path
61 kwargs['filename'] = filename
62 else:
63 language_code = kwargs.pop('language_code', None)
64 project_code = kwargs.pop('project_code', None)
65
66 if language_code and project_code:
67 try:
68 path_obj = TranslationProject.objects.enabled().get(
69 language__code=language_code,
70 project__code=project_code,
71 )
72 except TranslationProject.DoesNotExist:
73 path_obj = None
74
75 if path_obj is None and not request.is_ajax():
76 # Explicit selection via the UI: redirect either to
77 # ``/language_code/`` or ``/projects/project_code/``
78 user_choice = request.COOKIES.get('user-choice', None)
79 if user_choice and user_choice in ('language', 'project',):
80 url = {
81 'language': reverse('pootle-language-overview',
82 args=[language_code]),
83 'project': reverse('pootle-project-overview',
84 args=[project_code, '', '']),
85 }
86 response = redirect(url[user_choice])
87 response.delete_cookie('user-choice')
88
89 return response
90
91 raise Http404
92 elif language_code:
93 user_projects = Project.accessible_by_user(request.user)
94 language = get_object_or_404(Language, code=language_code)
95 children = language.children \
96 .filter(project__code__in=user_projects)
97 language.set_children(children)
98 path_obj = language
99 elif project_code:
100 try:
101 path_obj = Project.objects.get_for_user(project_code,
102 request.user)
103 except Project.DoesNotExist:
104 raise Http404
105 else: # No arguments: all user-accessible projects
106 user_projects = Project.accessible_by_user(request.user)
107 user_projects = Project.objects.for_user(request.user) \
108 .filter(code__in=user_projects)
109
110 path_obj = ProjectSet(user_projects)
111
112 request.ctx_obj = path_obj
113 request.ctx_path = path_obj.pootle_path
114 request.resource_obj = path_obj
115 request.pootle_path = path_obj.pootle_path
116
117 return func(request, path_obj, *args, **kwargs)
118
119 return wrapped
120
121
122 def set_resource(request, path_obj, dir_path, filename):
123 """Loads :cls:`pootle_app.models.Directory` and
124 :cls:`pootle_store.models.Store` models and populates the
125 request object.
126
127 :param path_obj: A path-like object object.
128 :param dir_path: Path relative to the root of `path_obj`.
129 :param filename: Optional filename.
130 """
131 obj_directory = getattr(path_obj, 'directory', path_obj)
132 ctx_path = obj_directory.pootle_path
133 resource_path = dir_path
134 pootle_path = ctx_path + dir_path
135
136 directory = None
137 store = None
138
139 is_404 = False
140
141 if filename:
142 pootle_path = pootle_path + filename
143 resource_path = resource_path + filename
144
145 try:
146 store = Store.objects.select_related(
147 'translation_project',
148 'parent',
149 ).get(pootle_path=pootle_path)
150 directory = store.parent
151 except Store.DoesNotExist:
152 is_404 = True
153
154 if directory is None and not is_404:
155 if dir_path:
156 try:
157 directory = Directory.objects.get(pootle_path=pootle_path)
158 except Directory.DoesNotExist:
159 is_404 = True
160 else:
161 directory = obj_directory
162
163 if is_404: # Try parent directory
164 language_code, project_code, dp, fn = split_pootle_path(pootle_path)
165 if not filename:
166 dir_path = dir_path[:dir_path[:-1].rfind('/') + 1]
167
168 url = reverse('pootle-tp-overview',
169 args=[language_code, project_code, dir_path])
170 request.redirect_url = url
171
172 raise Http404
173
174 request.store = store
175 request.directory = directory
176 request.pootle_path = pootle_path
177
178 request.resource_obj = store or (directory if dir_path else path_obj)
179 request.resource_path = resource_path
180 request.ctx_obj = path_obj or request.resource_obj
181 request.ctx_path = ctx_path
182
183
184 def set_project_resource(request, path_obj, dir_path, filename):
185 """Loads :cls:`pootle_app.models.Directory` and
186 :cls:`pootle_store.models.Store` models and populates the
187 request object.
188
189 This is the same as `set_resource` but operates at the project level
190 across all languages.
191
192 :param path_obj: A :cls:`pootle_project.models.Project` object.
193 :param dir_path: Path relative to the root of `path_obj`.
194 :param filename: Optional filename.
195 """
196 query_ctx_path = ''.join(['/%/', path_obj.code, '/'])
197 query_pootle_path = query_ctx_path + dir_path
198
199 obj_directory = getattr(path_obj, 'directory', path_obj)
200 ctx_path = obj_directory.pootle_path
201 resource_path = dir_path
202 pootle_path = ctx_path + dir_path
203
204 # List of disabled TP paths
205 disabled_tps = TranslationProject.objects.disabled().filter(
206 project__code=path_obj.code,
207 ).values_list('pootle_path', flat=True)
208 disabled_tps = list(disabled_tps)
209 disabled_tps.append('/templates/')
210 disabled_tps_regex = '^%s' % u'|'.join(disabled_tps)
211
212 if filename:
213 query_pootle_path = query_pootle_path + filename
214 pootle_path = pootle_path + filename
215 resource_path = resource_path + filename
216
217 resources = Store.objects.extra(
218 where=[
219 'pootle_store_store.pootle_path LIKE %s',
220 'pootle_store_store.pootle_path NOT REGEXP %s',
221 ], params=[query_pootle_path, disabled_tps_regex]
222 ).select_related('translation_project__language')
223 else:
224 resources = Directory.objects.extra(
225 where=[
226 'pootle_app_directory.pootle_path LIKE %s',
227 'pootle_app_directory.pootle_path NOT REGEXP %s',
228 ], params=[query_pootle_path, disabled_tps_regex]
229 ).select_related('parent')
230
231 if not resources.exists():
232 raise Http404
233
234 request.store = None
235 request.directory = None
236 request.pootle_path = pootle_path
237
238 request.resource_obj = ProjectResource(resources, pootle_path)
239 request.resource_path = resource_path
240 request.ctx_obj = path_obj or request.resource_obj
241 request.ctx_path = ctx_path
242
243
244 def get_resource(func):
245 @wraps(func)
246 def wrapped(request, path_obj, dir_path, filename):
247 """Gets resources associated to the current context."""
248 try:
249 directory = getattr(path_obj, 'directory', path_obj)
250 if directory.is_project() and (dir_path or filename):
251 set_project_resource(request, path_obj, dir_path, filename)
252 else:
253 set_resource(request, path_obj, dir_path, filename)
254 except Http404:
255 if not request.is_ajax():
256 user_choice = request.COOKIES.get('user-choice', None)
257 url = None
258
259 if hasattr(request, 'redirect_url'):
260 url = request.redirect_url
261 elif user_choice in ('language', 'resource',):
262 project = (path_obj if isinstance(path_obj, Project)
263 else path_obj.project)
264 url = reverse('pootle-project-overview',
265 args=[project.code, dir_path, filename])
266
267 if url is not None:
268 response = redirect(url)
269
270 if user_choice in ('language', 'resource',):
271 # XXX: should we rather delete this in a single place?
272 response.delete_cookie('user-choice')
273
274 return response
275
276 raise Http404
277
278 return func(request, path_obj, dir_path, filename)
279
280 return wrapped
281
282
283 def permission_required(permission_code):
284 """Checks for `permission_code` in the current context.
285
286 To retrieve the proper context, the `get_path_obj` decorator must be
287 used along with this decorator.
288 """
289 def wrapped(func):
290 @wraps(func)
291 def _wrapped(request, *args, **kwargs):
292 path_obj = args[0]
293 directory = getattr(path_obj, 'directory', path_obj)
294
295 # HACKISH: some old code relies on
296 # `request.translation_project`, `request.language` etc.
297 # being set, so we need to set that too.
298 attr_name = CLS2ATTR.get(path_obj.__class__.__name__,
299 'path_obj')
300 setattr(request, attr_name, path_obj)
301
302 User = get_user_model()
303 request.profile = User.get(request.user)
304 request.permissions = get_matching_permissions(request.profile,
305 directory)
306
307 if not permission_code:
308 return func(request, *args, **kwargs)
309
310 if not check_permission(permission_code, request):
311 raise PermissionDenied(
312 _("Insufficient rights to access this page."),
313 )
314
315 return func(request, *args, **kwargs)
316 return _wrapped
317 return wrapped
318
319
320 def admin_required(func):
321 @wraps(func)
322 def wrapped(request, *args, **kwargs):
323 if not request.user.is_superuser:
324 raise PermissionDenied(
325 _("You do not have rights to administer Pootle.")
326 )
327 return func(request, *args, **kwargs)
328
329 return wrapped
330
[end of pootle/core/decorators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pootle/core/decorators.py b/pootle/core/decorators.py
--- a/pootle/core/decorators.py
+++ b/pootle/core/decorators.py
@@ -24,6 +24,7 @@
from django.contrib.auth import get_user_model
from django.core.exceptions import PermissionDenied
from django.core.urlresolvers import reverse
+from django.db import connection
from django.http import Http404
from django.shortcuts import get_object_or_404, redirect
from django.utils.translation import ugettext as _
@@ -208,6 +209,9 @@
disabled_tps = list(disabled_tps)
disabled_tps.append('/templates/')
disabled_tps_regex = '^%s' % u'|'.join(disabled_tps)
+ sql_not_regex = 'NOT REGEXP'
+ if connection.vendor == 'postgresql':
+ sql_not_regex = '!~'
if filename:
query_pootle_path = query_pootle_path + filename
@@ -217,14 +221,14 @@
resources = Store.objects.extra(
where=[
'pootle_store_store.pootle_path LIKE %s',
- 'pootle_store_store.pootle_path NOT REGEXP %s',
+ 'pootle_store_store.pootle_path ' + sql_not_regex + ' %s',
], params=[query_pootle_path, disabled_tps_regex]
).select_related('translation_project__language')
else:
resources = Directory.objects.extra(
where=[
'pootle_app_directory.pootle_path LIKE %s',
- 'pootle_app_directory.pootle_path NOT REGEXP %s',
+ 'pootle_app_directory.pootle_path ' + sql_not_regex + ' %s',
], params=[query_pootle_path, disabled_tps_regex]
).select_related('parent')
|
{"golden_diff": "diff --git a/pootle/core/decorators.py b/pootle/core/decorators.py\n--- a/pootle/core/decorators.py\n+++ b/pootle/core/decorators.py\n@@ -24,6 +24,7 @@\n from django.contrib.auth import get_user_model\n from django.core.exceptions import PermissionDenied\n from django.core.urlresolvers import reverse\n+from django.db import connection\n from django.http import Http404\n from django.shortcuts import get_object_or_404, redirect\n from django.utils.translation import ugettext as _\n@@ -208,6 +209,9 @@\n disabled_tps = list(disabled_tps)\n disabled_tps.append('/templates/')\n disabled_tps_regex = '^%s' % u'|'.join(disabled_tps)\n+ sql_not_regex = 'NOT REGEXP'\n+ if connection.vendor == 'postgresql':\n+ sql_not_regex = '!~'\n \n if filename:\n query_pootle_path = query_pootle_path + filename\n@@ -217,14 +221,14 @@\n resources = Store.objects.extra(\n where=[\n 'pootle_store_store.pootle_path LIKE %s',\n- 'pootle_store_store.pootle_path NOT REGEXP %s',\n+ 'pootle_store_store.pootle_path ' + sql_not_regex + ' %s',\n ], params=[query_pootle_path, disabled_tps_regex]\n ).select_related('translation_project__language')\n else:\n resources = Directory.objects.extra(\n where=[\n 'pootle_app_directory.pootle_path LIKE %s',\n- 'pootle_app_directory.pootle_path NOT REGEXP %s',\n+ 'pootle_app_directory.pootle_path ' + sql_not_regex + ' %s',\n ], params=[query_pootle_path, disabled_tps_regex]\n ).select_related('parent')\n", "issue": "REGEXP not present in PostgreSQL\nPostgres [fails on Travis tests](https://travis-ci.org/dwaynebailey/pootle/jobs/50400894#L2516) because of the use of [NOT REGEXP](http://dev.mysql.com/doc/refman/5.1/en/regexp.html#operator_not-regexp)\n\nThe equivalent in Postgres is to use [POSIX Regular Expressions](http://www.postgresql.org/docs/9.3/static/functions-matching.html#FUNCTIONS-POSIX-REGEXP)\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2013 Zuza Software Foundation\n# Copyright 2013-2015 Evernote Corporation\n#\n# This file is part of Pootle.\n#\n# Pootle is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\nfrom functools import wraps\n\nfrom django.contrib.auth import get_user_model\nfrom django.core.exceptions import PermissionDenied\nfrom django.core.urlresolvers import reverse\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils.translation import ugettext as _\n\nfrom pootle_app.models.directory import Directory\nfrom pootle_app.models.permissions import (check_permission,\n get_matching_permissions)\nfrom pootle_language.models import Language\nfrom pootle_project.models import Project, ProjectSet, ProjectResource\nfrom pootle_store.models import Store\nfrom pootle_translationproject.models import TranslationProject\n\nfrom .exceptions import Http400\nfrom .url_helpers import split_pootle_path\n\n\nCLS2ATTR = {\n 'TranslationProject': 'translation_project',\n 'Project': 'project',\n 'Language': 'language',\n}\n\n\ndef get_path_obj(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n if request.is_ajax():\n pootle_path = request.GET.get('path', None)\n if pootle_path is None:\n raise Http400(_('Arguments missing.'))\n\n language_code, project_code, dir_path, filename = \\\n split_pootle_path(pootle_path)\n kwargs['dir_path'] = dir_path\n kwargs['filename'] = filename\n else:\n language_code = kwargs.pop('language_code', None)\n project_code = kwargs.pop('project_code', None)\n\n if language_code and project_code:\n try:\n path_obj = TranslationProject.objects.enabled().get(\n language__code=language_code,\n project__code=project_code,\n )\n except TranslationProject.DoesNotExist:\n path_obj = None\n\n if path_obj is None and not request.is_ajax():\n # Explicit selection via the UI: redirect either to\n # ``/language_code/`` or ``/projects/project_code/``\n user_choice = request.COOKIES.get('user-choice', None)\n if user_choice and user_choice in ('language', 'project',):\n url = {\n 'language': reverse('pootle-language-overview',\n args=[language_code]),\n 'project': reverse('pootle-project-overview',\n args=[project_code, '', '']),\n }\n response = redirect(url[user_choice])\n response.delete_cookie('user-choice')\n\n return response\n\n raise Http404\n elif language_code:\n user_projects = Project.accessible_by_user(request.user)\n language = get_object_or_404(Language, code=language_code)\n children = language.children \\\n .filter(project__code__in=user_projects)\n language.set_children(children)\n path_obj = language\n elif project_code:\n try:\n path_obj = Project.objects.get_for_user(project_code,\n request.user)\n except Project.DoesNotExist:\n raise Http404\n else: # No arguments: all user-accessible projects\n user_projects = Project.accessible_by_user(request.user)\n user_projects = Project.objects.for_user(request.user) \\\n .filter(code__in=user_projects)\n\n path_obj = ProjectSet(user_projects)\n\n request.ctx_obj = path_obj\n request.ctx_path = path_obj.pootle_path\n request.resource_obj = path_obj\n request.pootle_path = path_obj.pootle_path\n\n return func(request, path_obj, *args, **kwargs)\n\n return wrapped\n\n\ndef set_resource(request, path_obj, dir_path, filename):\n \"\"\"Loads :cls:`pootle_app.models.Directory` and\n :cls:`pootle_store.models.Store` models and populates the\n request object.\n\n :param path_obj: A path-like object object.\n :param dir_path: Path relative to the root of 
`path_obj`.\n :param filename: Optional filename.\n \"\"\"\n obj_directory = getattr(path_obj, 'directory', path_obj)\n ctx_path = obj_directory.pootle_path\n resource_path = dir_path\n pootle_path = ctx_path + dir_path\n\n directory = None\n store = None\n\n is_404 = False\n\n if filename:\n pootle_path = pootle_path + filename\n resource_path = resource_path + filename\n\n try:\n store = Store.objects.select_related(\n 'translation_project',\n 'parent',\n ).get(pootle_path=pootle_path)\n directory = store.parent\n except Store.DoesNotExist:\n is_404 = True\n\n if directory is None and not is_404:\n if dir_path:\n try:\n directory = Directory.objects.get(pootle_path=pootle_path)\n except Directory.DoesNotExist:\n is_404 = True\n else:\n directory = obj_directory\n\n if is_404: # Try parent directory\n language_code, project_code, dp, fn = split_pootle_path(pootle_path)\n if not filename:\n dir_path = dir_path[:dir_path[:-1].rfind('/') + 1]\n\n url = reverse('pootle-tp-overview',\n args=[language_code, project_code, dir_path])\n request.redirect_url = url\n\n raise Http404\n\n request.store = store\n request.directory = directory\n request.pootle_path = pootle_path\n\n request.resource_obj = store or (directory if dir_path else path_obj)\n request.resource_path = resource_path\n request.ctx_obj = path_obj or request.resource_obj\n request.ctx_path = ctx_path\n\n\ndef set_project_resource(request, path_obj, dir_path, filename):\n \"\"\"Loads :cls:`pootle_app.models.Directory` and\n :cls:`pootle_store.models.Store` models and populates the\n request object.\n\n This is the same as `set_resource` but operates at the project level\n across all languages.\n\n :param path_obj: A :cls:`pootle_project.models.Project` object.\n :param dir_path: Path relative to the root of `path_obj`.\n :param filename: Optional filename.\n \"\"\"\n query_ctx_path = ''.join(['/%/', path_obj.code, '/'])\n query_pootle_path = query_ctx_path + dir_path\n\n obj_directory = getattr(path_obj, 'directory', path_obj)\n ctx_path = obj_directory.pootle_path\n resource_path = dir_path\n pootle_path = ctx_path + dir_path\n\n # List of disabled TP paths\n disabled_tps = TranslationProject.objects.disabled().filter(\n project__code=path_obj.code,\n ).values_list('pootle_path', flat=True)\n disabled_tps = list(disabled_tps)\n disabled_tps.append('/templates/')\n disabled_tps_regex = '^%s' % u'|'.join(disabled_tps)\n\n if filename:\n query_pootle_path = query_pootle_path + filename\n pootle_path = pootle_path + filename\n resource_path = resource_path + filename\n\n resources = Store.objects.extra(\n where=[\n 'pootle_store_store.pootle_path LIKE %s',\n 'pootle_store_store.pootle_path NOT REGEXP %s',\n ], params=[query_pootle_path, disabled_tps_regex]\n ).select_related('translation_project__language')\n else:\n resources = Directory.objects.extra(\n where=[\n 'pootle_app_directory.pootle_path LIKE %s',\n 'pootle_app_directory.pootle_path NOT REGEXP %s',\n ], params=[query_pootle_path, disabled_tps_regex]\n ).select_related('parent')\n\n if not resources.exists():\n raise Http404\n\n request.store = None\n request.directory = None\n request.pootle_path = pootle_path\n\n request.resource_obj = ProjectResource(resources, pootle_path)\n request.resource_path = resource_path\n request.ctx_obj = path_obj or request.resource_obj\n request.ctx_path = ctx_path\n\n\ndef get_resource(func):\n @wraps(func)\n def wrapped(request, path_obj, dir_path, filename):\n \"\"\"Gets resources associated to the current context.\"\"\"\n try:\n directory 
= getattr(path_obj, 'directory', path_obj)\n if directory.is_project() and (dir_path or filename):\n set_project_resource(request, path_obj, dir_path, filename)\n else:\n set_resource(request, path_obj, dir_path, filename)\n except Http404:\n if not request.is_ajax():\n user_choice = request.COOKIES.get('user-choice', None)\n url = None\n\n if hasattr(request, 'redirect_url'):\n url = request.redirect_url\n elif user_choice in ('language', 'resource',):\n project = (path_obj if isinstance(path_obj, Project)\n else path_obj.project)\n url = reverse('pootle-project-overview',\n args=[project.code, dir_path, filename])\n\n if url is not None:\n response = redirect(url)\n\n if user_choice in ('language', 'resource',):\n # XXX: should we rather delete this in a single place?\n response.delete_cookie('user-choice')\n\n return response\n\n raise Http404\n\n return func(request, path_obj, dir_path, filename)\n\n return wrapped\n\n\ndef permission_required(permission_code):\n \"\"\"Checks for `permission_code` in the current context.\n\n To retrieve the proper context, the `get_path_obj` decorator must be\n used along with this decorator.\n \"\"\"\n def wrapped(func):\n @wraps(func)\n def _wrapped(request, *args, **kwargs):\n path_obj = args[0]\n directory = getattr(path_obj, 'directory', path_obj)\n\n # HACKISH: some old code relies on\n # `request.translation_project`, `request.language` etc.\n # being set, so we need to set that too.\n attr_name = CLS2ATTR.get(path_obj.__class__.__name__,\n 'path_obj')\n setattr(request, attr_name, path_obj)\n\n User = get_user_model()\n request.profile = User.get(request.user)\n request.permissions = get_matching_permissions(request.profile,\n directory)\n\n if not permission_code:\n return func(request, *args, **kwargs)\n\n if not check_permission(permission_code, request):\n raise PermissionDenied(\n _(\"Insufficient rights to access this page.\"),\n )\n\n return func(request, *args, **kwargs)\n return _wrapped\n return wrapped\n\n\ndef admin_required(func):\n @wraps(func)\n def wrapped(request, *args, **kwargs):\n if not request.user.is_superuser:\n raise PermissionDenied(\n _(\"You do not have rights to administer Pootle.\")\n )\n return func(request, *args, **kwargs)\n\n return wrapped\n", "path": "pootle/core/decorators.py"}]}
| 4,090 | 419 |
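The accepted patch above switches the negative-regex operator on `connection.vendor`, keeping the pattern itself as a bound `%s` parameter. As a rough illustration only — the helper names below are invented and this is not code from the Pootle repository — the same dialect switch can be sketched in plain Python:

```python
def negative_regex_operator(vendor: str) -> str:
    """Return the SQL operator meaning 'does not match this regex'."""
    # MySQL understands NOT REGEXP; PostgreSQL uses the POSIX operator !~.
    return '!~' if vendor == 'postgresql' else 'NOT REGEXP'


def build_where_clause(column: str, vendor: str) -> str:
    # The regex pattern stays a query parameter (%s), mirroring how the
    # patch leaves params=[query_pootle_path, disabled_tps_regex] untouched.
    return '{} {} %s'.format(column, negative_regex_operator(vendor))


if __name__ == '__main__':
    print(build_where_clause('pootle_store_store.pootle_path', 'mysql'))
    print(build_where_clause('pootle_store_store.pootle_path', 'postgresql'))
```

Keeping the pattern in `params` rather than interpolating it into the SQL string is what lets the patch change only the operator text per backend.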
gh_patches_debug_34199
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1251
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip install crashes can easily confuse newbies
those that are not familiar with the usual annoying messaging that pip presents can get pretty easily confused by the output that happens when `pip` fails to install
here's an example:
```console
$ pre-commit run flake8 --all-files
[INFO] Initializing environment for https://gitlab.com/pycqa/flake8:flake8-walrus.
[INFO] Installing environment for https://gitlab.com/pycqa/flake8.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: Command: ('/home/asottile/.cache/pre-commit/repoi6ij0tyu/py_env-python3/bin/python', '/home/asottile/.cache/pre-commit/repoi6ij0tyu/py_env-python3/bin/pip', 'install', '.', 'flake8-walrus')
Return code: 1
Expected return code: 0
Output:
Processing /home/asottile/.cache/pre-commit/repoi6ij0tyu
Collecting flake8-walrus
Errors:
ERROR: Could not find a version that satisfies the requirement flake8-walrus (from versions: none)
ERROR: No matching distribution found for flake8-walrus
WARNING: You are using pip version 19.2.3, however version 19.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Check the log at /home/asottile/.cache/pre-commit/pre-commit.log
```
this ~admittedly is a bit garbled for a number of reasons:
- pip's error message here isn't great (it _could_ say something about `python_requires` or that there are versions available for other versions) **(the actual error is that the python is python3.6 and the plugin requires python3.8)**
- pip is out of date (when is it not? but admittedly who cares) -- **this is what a lot of people try and fix** -- unfortunately there's not really anything to fix here, the version of `pip` is from inside the virtualenv and doesn't really matter all that much
- `pre-commit` is currently splitting the output from stdout and stderr making it harder to read what's going on
I can't really fix the first one, and the second one I could silence but it doesn't quite feel like the right thing to do (and admittedly knowing the pip version is sometimes useful when debugging). The third however I can pretty easily fix!
</issue>
<code>
[start of pre_commit/util.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import errno
5 import os.path
6 import shutil
7 import stat
8 import subprocess
9 import sys
10 import tempfile
11
12 import six
13
14 from pre_commit import five
15 from pre_commit import parse_shebang
16
17 if sys.version_info >= (3, 7): # pragma: no cover (PY37+)
18 from importlib.resources import open_binary
19 from importlib.resources import read_text
20 else: # pragma: no cover (<PY37)
21 from importlib_resources import open_binary
22 from importlib_resources import read_text
23
24
25 def mkdirp(path):
26 try:
27 os.makedirs(path)
28 except OSError:
29 if not os.path.exists(path):
30 raise
31
32
33 @contextlib.contextmanager
34 def clean_path_on_failure(path):
35 """Cleans up the directory on an exceptional failure."""
36 try:
37 yield
38 except BaseException:
39 if os.path.exists(path):
40 rmtree(path)
41 raise
42
43
44 @contextlib.contextmanager
45 def noop_context():
46 yield
47
48
49 @contextlib.contextmanager
50 def tmpdir():
51 """Contextmanager to create a temporary directory. It will be cleaned up
52 afterwards.
53 """
54 tempdir = tempfile.mkdtemp()
55 try:
56 yield tempdir
57 finally:
58 rmtree(tempdir)
59
60
61 def resource_bytesio(filename):
62 return open_binary('pre_commit.resources', filename)
63
64
65 def resource_text(filename):
66 return read_text('pre_commit.resources', filename)
67
68
69 def make_executable(filename):
70 original_mode = os.stat(filename).st_mode
71 os.chmod(
72 filename, original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,
73 )
74
75
76 class CalledProcessError(RuntimeError):
77 def __init__(self, returncode, cmd, expected_returncode, output=None):
78 super(CalledProcessError, self).__init__(
79 returncode, cmd, expected_returncode, output,
80 )
81 self.returncode = returncode
82 self.cmd = cmd
83 self.expected_returncode = expected_returncode
84 self.output = output
85
86 def to_bytes(self):
87 output = []
88 for maybe_text in self.output:
89 if maybe_text:
90 output.append(
91 b'\n ' +
92 five.to_bytes(maybe_text).replace(b'\n', b'\n '),
93 )
94 else:
95 output.append(b'(none)')
96
97 return b''.join((
98 five.to_bytes(
99 'Command: {!r}\n'
100 'Return code: {}\n'
101 'Expected return code: {}\n'.format(
102 self.cmd, self.returncode, self.expected_returncode,
103 ),
104 ),
105 b'Output: ', output[0], b'\n',
106 b'Errors: ', output[1],
107 ))
108
109 def to_text(self):
110 return self.to_bytes().decode('UTF-8')
111
112 if six.PY2: # pragma: no cover (py2)
113 __str__ = to_bytes
114 __unicode__ = to_text
115 else: # pragma: no cover (py3)
116 __bytes__ = to_bytes
117 __str__ = to_text
118
119
120 def _cmd_kwargs(*cmd, **kwargs):
121 # py2/py3 on windows are more strict about the types here
122 cmd = tuple(five.n(arg) for arg in cmd)
123 kwargs['env'] = {
124 five.n(key): five.n(value)
125 for key, value in kwargs.pop('env', {}).items()
126 } or None
127 for arg in ('stdin', 'stdout', 'stderr'):
128 kwargs.setdefault(arg, subprocess.PIPE)
129 return cmd, kwargs
130
131
132 def cmd_output_b(*cmd, **kwargs):
133 retcode = kwargs.pop('retcode', 0)
134 cmd, kwargs = _cmd_kwargs(*cmd, **kwargs)
135
136 try:
137 cmd = parse_shebang.normalize_cmd(cmd)
138 except parse_shebang.ExecutableNotFoundError as e:
139 returncode, stdout_b, stderr_b = e.to_output()
140 else:
141 proc = subprocess.Popen(cmd, **kwargs)
142 stdout_b, stderr_b = proc.communicate()
143 returncode = proc.returncode
144
145 if retcode is not None and retcode != returncode:
146 raise CalledProcessError(
147 returncode, cmd, retcode, output=(stdout_b, stderr_b),
148 )
149
150 return returncode, stdout_b, stderr_b
151
152
153 def cmd_output(*cmd, **kwargs):
154 returncode, stdout_b, stderr_b = cmd_output_b(*cmd, **kwargs)
155 stdout = stdout_b.decode('UTF-8') if stdout_b is not None else None
156 stderr = stderr_b.decode('UTF-8') if stderr_b is not None else None
157 return returncode, stdout, stderr
158
159
160 if os.name != 'nt': # pragma: windows no cover
161 from os import openpty
162 import termios
163
164 class Pty(object):
165 def __init__(self):
166 self.r = self.w = None
167
168 def __enter__(self):
169 self.r, self.w = openpty()
170
171 # tty flags normally change \n to \r\n
172 attrs = termios.tcgetattr(self.r)
173 attrs[1] &= ~(termios.ONLCR | termios.OPOST)
174 termios.tcsetattr(self.r, termios.TCSANOW, attrs)
175
176 return self
177
178 def close_w(self):
179 if self.w is not None:
180 os.close(self.w)
181 self.w = None
182
183 def close_r(self):
184 assert self.r is not None
185 os.close(self.r)
186 self.r = None
187
188 def __exit__(self, exc_type, exc_value, traceback):
189 self.close_w()
190 self.close_r()
191
192 def cmd_output_p(*cmd, **kwargs):
193 assert kwargs.pop('retcode') is None
194 assert kwargs['stderr'] == subprocess.STDOUT, kwargs['stderr']
195 cmd, kwargs = _cmd_kwargs(*cmd, **kwargs)
196
197 try:
198 cmd = parse_shebang.normalize_cmd(cmd)
199 except parse_shebang.ExecutableNotFoundError as e:
200 return e.to_output()
201
202 with open(os.devnull) as devnull, Pty() as pty:
203 kwargs.update({'stdin': devnull, 'stdout': pty.w, 'stderr': pty.w})
204 proc = subprocess.Popen(cmd, **kwargs)
205 pty.close_w()
206
207 buf = b''
208 while True:
209 try:
210 bts = os.read(pty.r, 4096)
211 except OSError as e:
212 if e.errno == errno.EIO:
213 bts = b''
214 else:
215 raise
216 else:
217 buf += bts
218 if not bts:
219 break
220
221 return proc.wait(), buf, None
222 else: # pragma: no cover
223 cmd_output_p = cmd_output_b
224
225
226 def rmtree(path):
227 """On windows, rmtree fails for readonly dirs."""
228 def handle_remove_readonly(func, path, exc):
229 excvalue = exc[1]
230 if (
231 func in (os.rmdir, os.remove, os.unlink) and
232 excvalue.errno == errno.EACCES
233 ):
234 for p in (path, os.path.dirname(path)):
235 os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)
236 func(path)
237 else:
238 raise
239 shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)
240
241
242 def parse_version(s):
243 """poor man's version comparison"""
244 return tuple(int(p) for p in s.split('.'))
245
[end of pre_commit/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/util.py b/pre_commit/util.py
--- a/pre_commit/util.py
+++ b/pre_commit/util.py
@@ -74,36 +74,31 @@
class CalledProcessError(RuntimeError):
- def __init__(self, returncode, cmd, expected_returncode, output=None):
+ def __init__(self, returncode, cmd, expected_returncode, stdout, stderr):
super(CalledProcessError, self).__init__(
- returncode, cmd, expected_returncode, output,
+ returncode, cmd, expected_returncode, stdout, stderr,
)
self.returncode = returncode
self.cmd = cmd
self.expected_returncode = expected_returncode
- self.output = output
+ self.stdout = stdout
+ self.stderr = stderr
def to_bytes(self):
- output = []
- for maybe_text in self.output:
- if maybe_text:
- output.append(
- b'\n ' +
- five.to_bytes(maybe_text).replace(b'\n', b'\n '),
- )
+ def _indent_or_none(part):
+ if part:
+ return b'\n ' + part.replace(b'\n', b'\n ')
else:
- output.append(b'(none)')
+ return b' (none)'
return b''.join((
- five.to_bytes(
- 'Command: {!r}\n'
- 'Return code: {}\n'
- 'Expected return code: {}\n'.format(
- self.cmd, self.returncode, self.expected_returncode,
- ),
- ),
- b'Output: ', output[0], b'\n',
- b'Errors: ', output[1],
+ 'command: {!r}\n'
+ 'return code: {}\n'
+ 'expected return code: {}\n'.format(
+ self.cmd, self.returncode, self.expected_returncode,
+ ).encode('UTF-8'),
+ b'stdout:', _indent_or_none(self.stdout), b'\n',
+ b'stderr:', _indent_or_none(self.stderr),
))
def to_text(self):
@@ -143,9 +138,7 @@
returncode = proc.returncode
if retcode is not None and retcode != returncode:
- raise CalledProcessError(
- returncode, cmd, retcode, output=(stdout_b, stderr_b),
- )
+ raise CalledProcessError(returncode, cmd, retcode, stdout_b, stderr_b)
return returncode, stdout_b, stderr_b
|
{"golden_diff": "diff --git a/pre_commit/util.py b/pre_commit/util.py\n--- a/pre_commit/util.py\n+++ b/pre_commit/util.py\n@@ -74,36 +74,31 @@\n \n \n class CalledProcessError(RuntimeError):\n- def __init__(self, returncode, cmd, expected_returncode, output=None):\n+ def __init__(self, returncode, cmd, expected_returncode, stdout, stderr):\n super(CalledProcessError, self).__init__(\n- returncode, cmd, expected_returncode, output,\n+ returncode, cmd, expected_returncode, stdout, stderr,\n )\n self.returncode = returncode\n self.cmd = cmd\n self.expected_returncode = expected_returncode\n- self.output = output\n+ self.stdout = stdout\n+ self.stderr = stderr\n \n def to_bytes(self):\n- output = []\n- for maybe_text in self.output:\n- if maybe_text:\n- output.append(\n- b'\\n ' +\n- five.to_bytes(maybe_text).replace(b'\\n', b'\\n '),\n- )\n+ def _indent_or_none(part):\n+ if part:\n+ return b'\\n ' + part.replace(b'\\n', b'\\n ')\n else:\n- output.append(b'(none)')\n+ return b' (none)'\n \n return b''.join((\n- five.to_bytes(\n- 'Command: {!r}\\n'\n- 'Return code: {}\\n'\n- 'Expected return code: {}\\n'.format(\n- self.cmd, self.returncode, self.expected_returncode,\n- ),\n- ),\n- b'Output: ', output[0], b'\\n',\n- b'Errors: ', output[1],\n+ 'command: {!r}\\n'\n+ 'return code: {}\\n'\n+ 'expected return code: {}\\n'.format(\n+ self.cmd, self.returncode, self.expected_returncode,\n+ ).encode('UTF-8'),\n+ b'stdout:', _indent_or_none(self.stdout), b'\\n',\n+ b'stderr:', _indent_or_none(self.stderr),\n ))\n \n def to_text(self):\n@@ -143,9 +138,7 @@\n returncode = proc.returncode\n \n if retcode is not None and retcode != returncode:\n- raise CalledProcessError(\n- returncode, cmd, retcode, output=(stdout_b, stderr_b),\n- )\n+ raise CalledProcessError(returncode, cmd, retcode, stdout_b, stderr_b)\n \n return returncode, stdout_b, stderr_b\n", "issue": "pip install crashes can easily confuse newbies\nthose that are not familiar with the usual annoying messaging that pip presents can get pretty easily confused by the output that happens when `pip` fails to install\r\n\r\nhere's an example:\r\n\r\n```console\r\n$ pre-commit run flake8 --all-files\r\n[INFO] Initializing environment for https://gitlab.com/pycqa/flake8:flake8-walrus.\r\n[INFO] Installing environment for https://gitlab.com/pycqa/flake8.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\nAn unexpected error has occurred: CalledProcessError: Command: ('/home/asottile/.cache/pre-commit/repoi6ij0tyu/py_env-python3/bin/python', '/home/asottile/.cache/pre-commit/repoi6ij0tyu/py_env-python3/bin/pip', 'install', '.', 'flake8-walrus')\r\nReturn code: 1\r\nExpected return code: 0\r\nOutput: \r\n Processing /home/asottile/.cache/pre-commit/repoi6ij0tyu\r\n Collecting flake8-walrus\r\n \r\nErrors: \r\n ERROR: Could not find a version that satisfies the requirement flake8-walrus (from versions: none)\r\n ERROR: No matching distribution found for flake8-walrus\r\n WARNING: You are using pip version 19.2.3, however version 19.3.1 is available.\r\n You should consider upgrading via the 'pip install --upgrade pip' command.\r\n \r\nCheck the log at /home/asottile/.cache/pre-commit/pre-commit.log\r\n```\r\n\r\nthis ~admittedly is a bit garbled for a number of reasons:\r\n- pip's error message here isn't great (it _could_ say something about `python_requires` or that there are versions available for other versions) **(the actual error is that the python is python3.6 and the plugin requires python3.8)**\r\n- 
pip is out of date (when is it not? but admittedly who cares) -- **this is what a lot of people try and fix** -- unfortunately there's not really anything to fix here, the version of `pip` is from inside the virtualenv and doesn't really matter all that much\r\n- `pre-commit` is currently splitting the output from stdout and stderr making it harder to read what's going on\r\n\r\nI can't really fix the first one, and the second one I could silence but it doesn't quite feel like the right thing to do (and admittedly knowing the pip version is sometimes useful when debugging). The third however I can pretty easily fix!\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport errno\nimport os.path\nimport shutil\nimport stat\nimport subprocess\nimport sys\nimport tempfile\n\nimport six\n\nfrom pre_commit import five\nfrom pre_commit import parse_shebang\n\nif sys.version_info >= (3, 7): # pragma: no cover (PY37+)\n from importlib.resources import open_binary\n from importlib.resources import read_text\nelse: # pragma: no cover (<PY37)\n from importlib_resources import open_binary\n from importlib_resources import read_text\n\n\ndef mkdirp(path):\n try:\n os.makedirs(path)\n except OSError:\n if not os.path.exists(path):\n raise\n\n\[email protected]\ndef clean_path_on_failure(path):\n \"\"\"Cleans up the directory on an exceptional failure.\"\"\"\n try:\n yield\n except BaseException:\n if os.path.exists(path):\n rmtree(path)\n raise\n\n\[email protected]\ndef noop_context():\n yield\n\n\[email protected]\ndef tmpdir():\n \"\"\"Contextmanager to create a temporary directory. It will be cleaned up\n afterwards.\n \"\"\"\n tempdir = tempfile.mkdtemp()\n try:\n yield tempdir\n finally:\n rmtree(tempdir)\n\n\ndef resource_bytesio(filename):\n return open_binary('pre_commit.resources', filename)\n\n\ndef resource_text(filename):\n return read_text('pre_commit.resources', filename)\n\n\ndef make_executable(filename):\n original_mode = os.stat(filename).st_mode\n os.chmod(\n filename, original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n\nclass CalledProcessError(RuntimeError):\n def __init__(self, returncode, cmd, expected_returncode, output=None):\n super(CalledProcessError, self).__init__(\n returncode, cmd, expected_returncode, output,\n )\n self.returncode = returncode\n self.cmd = cmd\n self.expected_returncode = expected_returncode\n self.output = output\n\n def to_bytes(self):\n output = []\n for maybe_text in self.output:\n if maybe_text:\n output.append(\n b'\\n ' +\n five.to_bytes(maybe_text).replace(b'\\n', b'\\n '),\n )\n else:\n output.append(b'(none)')\n\n return b''.join((\n five.to_bytes(\n 'Command: {!r}\\n'\n 'Return code: {}\\n'\n 'Expected return code: {}\\n'.format(\n self.cmd, self.returncode, self.expected_returncode,\n ),\n ),\n b'Output: ', output[0], b'\\n',\n b'Errors: ', output[1],\n ))\n\n def to_text(self):\n return self.to_bytes().decode('UTF-8')\n\n if six.PY2: # pragma: no cover (py2)\n __str__ = to_bytes\n __unicode__ = to_text\n else: # pragma: no cover (py3)\n __bytes__ = to_bytes\n __str__ = to_text\n\n\ndef _cmd_kwargs(*cmd, **kwargs):\n # py2/py3 on windows are more strict about the types here\n cmd = tuple(five.n(arg) for arg in cmd)\n kwargs['env'] = {\n five.n(key): five.n(value)\n for key, value in kwargs.pop('env', {}).items()\n } or None\n for arg in ('stdin', 'stdout', 'stderr'):\n kwargs.setdefault(arg, subprocess.PIPE)\n return cmd, kwargs\n\n\ndef cmd_output_b(*cmd, **kwargs):\n retcode = 
kwargs.pop('retcode', 0)\n cmd, kwargs = _cmd_kwargs(*cmd, **kwargs)\n\n try:\n cmd = parse_shebang.normalize_cmd(cmd)\n except parse_shebang.ExecutableNotFoundError as e:\n returncode, stdout_b, stderr_b = e.to_output()\n else:\n proc = subprocess.Popen(cmd, **kwargs)\n stdout_b, stderr_b = proc.communicate()\n returncode = proc.returncode\n\n if retcode is not None and retcode != returncode:\n raise CalledProcessError(\n returncode, cmd, retcode, output=(stdout_b, stderr_b),\n )\n\n return returncode, stdout_b, stderr_b\n\n\ndef cmd_output(*cmd, **kwargs):\n returncode, stdout_b, stderr_b = cmd_output_b(*cmd, **kwargs)\n stdout = stdout_b.decode('UTF-8') if stdout_b is not None else None\n stderr = stderr_b.decode('UTF-8') if stderr_b is not None else None\n return returncode, stdout, stderr\n\n\nif os.name != 'nt': # pragma: windows no cover\n from os import openpty\n import termios\n\n class Pty(object):\n def __init__(self):\n self.r = self.w = None\n\n def __enter__(self):\n self.r, self.w = openpty()\n\n # tty flags normally change \\n to \\r\\n\n attrs = termios.tcgetattr(self.r)\n attrs[1] &= ~(termios.ONLCR | termios.OPOST)\n termios.tcsetattr(self.r, termios.TCSANOW, attrs)\n\n return self\n\n def close_w(self):\n if self.w is not None:\n os.close(self.w)\n self.w = None\n\n def close_r(self):\n assert self.r is not None\n os.close(self.r)\n self.r = None\n\n def __exit__(self, exc_type, exc_value, traceback):\n self.close_w()\n self.close_r()\n\n def cmd_output_p(*cmd, **kwargs):\n assert kwargs.pop('retcode') is None\n assert kwargs['stderr'] == subprocess.STDOUT, kwargs['stderr']\n cmd, kwargs = _cmd_kwargs(*cmd, **kwargs)\n\n try:\n cmd = parse_shebang.normalize_cmd(cmd)\n except parse_shebang.ExecutableNotFoundError as e:\n return e.to_output()\n\n with open(os.devnull) as devnull, Pty() as pty:\n kwargs.update({'stdin': devnull, 'stdout': pty.w, 'stderr': pty.w})\n proc = subprocess.Popen(cmd, **kwargs)\n pty.close_w()\n\n buf = b''\n while True:\n try:\n bts = os.read(pty.r, 4096)\n except OSError as e:\n if e.errno == errno.EIO:\n bts = b''\n else:\n raise\n else:\n buf += bts\n if not bts:\n break\n\n return proc.wait(), buf, None\nelse: # pragma: no cover\n cmd_output_p = cmd_output_b\n\n\ndef rmtree(path):\n \"\"\"On windows, rmtree fails for readonly dirs.\"\"\"\n def handle_remove_readonly(func, path, exc):\n excvalue = exc[1]\n if (\n func in (os.rmdir, os.remove, os.unlink) and\n excvalue.errno == errno.EACCES\n ):\n for p in (path, os.path.dirname(path)):\n os.chmod(p, os.stat(p).st_mode | stat.S_IWUSR)\n func(path)\n else:\n raise\n shutil.rmtree(path, ignore_errors=False, onerror=handle_remove_readonly)\n\n\ndef parse_version(s):\n \"\"\"poor man's version comparison\"\"\"\n return tuple(int(p) for p in s.split('.'))\n", "path": "pre_commit/util.py"}]}
| 3,362 | 573 |
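The fix above replaces the separate `Output:`/`Errors:` sections with labelled, indented `stdout`/`stderr` blocks. Below is a minimal standalone sketch of that formatting, assuming invented function names rather than pre-commit's actual API:

```python
# Sketch only: render a subprocess failure with clearly labelled streams.
def _indent_or_none(part: bytes) -> bytes:
    # Indent multi-line output so it reads as one block, or mark it absent.
    if part:
        return b'\n    ' + part.replace(b'\n', b'\n    ')
    return b' (none)'


def format_failure(cmd, returncode: int, expected: int,
                   stdout: bytes, stderr: bytes) -> bytes:
    header = 'command: {!r}\nreturn code: {}\nexpected return code: {}\n'.format(
        cmd, returncode, expected,
    ).encode('UTF-8')
    return b''.join((
        header,
        b'stdout:', _indent_or_none(stdout), b'\n',
        b'stderr:', _indent_or_none(stderr),
    ))


if __name__ == '__main__':
    msg = format_failure(
        ('pip', 'install', 'flake8-walrus'), 1, 0,
        b'Collecting flake8-walrus\n',
        b'ERROR: No matching distribution found for flake8-walrus\n',
    )
    print(msg.decode('UTF-8'))
```

Labelling the streams (instead of splitting them into disconnected sections) is what makes the pip failure read in the order the user actually saw it.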
gh_patches_debug_14332
|
rasdani/github-patches
|
git_diff
|
scikit-hep__pyhf-638
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Automate deployment to PyPI
# Description
According to @lukasheinrich, the current workflow for deploying to PyPI is:
```
git checkout master
git pull
bumpversion patch
git commit
git push origin master --tags
```
This is a bit annoyingly manual and ideally should be done automatically.
Luckily, there is an [official PyPA GitHub action](https://discuss.python.org/t/official-github-action-for-publishing-to-pypi/1061) to do this:
https://github.com/pypa/gh-action-pypi-publish
However, we need GitHub actions for pyhf, so we have to wait.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 from os import path
3 import sys
4
5 this_directory = path.abspath(path.dirname(__file__))
6 if sys.version_info.major < 3:
7 from io import open
8 with open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:
9 long_description = readme_md.read()
10
11 extras_require = {
12 'tensorflow': ['tensorflow~=1.15', 'tensorflow-probability~=0.8', 'numpy~=1.16',],
13 'torch': ['torch~=1.2'],
14 'xmlio': ['uproot'],
15 'minuit': ['iminuit'],
16 'develop': [
17 'pyflakes',
18 'pytest~=3.5',
19 'pytest-cov>=2.5.1',
20 'pytest-mock',
21 'pytest-benchmark[histogram]',
22 'pytest-console-scripts',
23 'pydocstyle',
24 'coverage>=4.0', # coveralls
25 'matplotlib',
26 'jupyter',
27 'nbdime',
28 'uproot~=3.3',
29 'papermill~=1.0',
30 'nteract-scrapbook~=0.2',
31 'graphviz',
32 'bumpversion',
33 'sphinx',
34 'sphinxcontrib-bibtex',
35 'sphinxcontrib-napoleon',
36 'sphinx_rtd_theme',
37 'nbsphinx',
38 'sphinx-issues',
39 'm2r',
40 'jsonpatch',
41 'ipython',
42 'pre-commit',
43 'black;python_version>="3.6"', # Black is Python3 only
44 'twine',
45 'check-manifest',
46 ],
47 }
48 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
49
50
51 def _is_test_pypi():
52 """
53 Determine if the Travis CI environment has TESTPYPI_UPLOAD defined and
54 set to true (c.f. .travis.yml)
55
56 The use_scm_version kwarg accepts a callable for the local_scheme
57 configuration parameter with argument "version". This can be replaced
58 with a lambda as the desired version structure is {next_version}.dev{distance}
59 c.f. https://github.com/pypa/setuptools_scm/#importing-in-setuppy
60
61 As the scm versioning is only desired for TestPyPI, for depolyment to PyPI the version
62 controlled through bumpversion is used.
63 """
64 from os import getenv
65
66 return (
67 {'local_scheme': lambda version: ''}
68 if getenv('TESTPYPI_UPLOAD') == 'true'
69 else False
70 )
71
72
73 setup(
74 name='pyhf',
75 version='0.2.0',
76 description='(partial) pure python histfactory implementation',
77 long_description=long_description,
78 long_description_content_type='text/markdown',
79 url='https://github.com/diana-hep/pyhf',
80 author='Lukas Heinrich, Matthew Feickert, Giordon Stark',
81 author_email='[email protected], [email protected], [email protected]',
82 license='Apache',
83 keywords='physics fitting numpy scipy tensorflow pytorch',
84 classifiers=[
85 "Programming Language :: Python :: 2",
86 "Programming Language :: Python :: 2.7",
87 "Programming Language :: Python :: 3",
88 "Programming Language :: Python :: 3.6",
89 "Programming Language :: Python :: 3.7",
90 ],
91 package_dir={'': 'src'},
92 packages=find_packages(where='src'),
93 include_package_data=True,
94 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*",
95 install_requires=[
96 'scipy', # requires numpy, which is required by pyhf and tensorflow
97 'click>=6.0', # for console scripts,
98 'tqdm', # for readxml
99 'six', # for modifiers
100 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6
101 'jsonpatch',
102 'pyyaml', # for parsing CLI equal-delimited options
103 ],
104 extras_require=extras_require,
105 entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},
106 dependency_links=[],
107 use_scm_version=_is_test_pypi(),
108 )
109
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,8 +50,8 @@
def _is_test_pypi():
"""
- Determine if the Travis CI environment has TESTPYPI_UPLOAD defined and
- set to true (c.f. .travis.yml)
+ Determine if the CI environment has IS_TESTPYPI defined and
+ set to true (c.f. .github/workflows/publish-package.yml)
The use_scm_version kwarg accepts a callable for the local_scheme
configuration parameter with argument "version". This can be replaced
@@ -65,7 +65,7 @@
return (
{'local_scheme': lambda version: ''}
- if getenv('TESTPYPI_UPLOAD') == 'true'
+ if getenv('IS_TESTPYPI') == 'true'
else False
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,8 +50,8 @@\n \n def _is_test_pypi():\n \"\"\"\n- Determine if the Travis CI environment has TESTPYPI_UPLOAD defined and\n- set to true (c.f. .travis.yml)\n+ Determine if the CI environment has IS_TESTPYPI defined and\n+ set to true (c.f. .github/workflows/publish-package.yml)\n \n The use_scm_version kwarg accepts a callable for the local_scheme\n configuration parameter with argument \"version\". This can be replaced\n@@ -65,7 +65,7 @@\n \n return (\n {'local_scheme': lambda version: ''}\n- if getenv('TESTPYPI_UPLOAD') == 'true'\n+ if getenv('IS_TESTPYPI') == 'true'\n else False\n )\n", "issue": "Automate deployment to PyPI\n# Description\r\n\r\nAccording to @lukasheinrich, the current workflow for deploying to PyPI is:\r\n\r\n```\r\ngit checkout master\r\ngit pull\r\nbumpversion patch\r\ngit commit\r\ngit push origin master --tags\r\n```\r\n\r\nThis is a bit annoyingly manual and ideally should be done automatically.\r\n\r\nLuckily, there is an [official PyPA GitHub action](https://discuss.python.org/t/official-github-action-for-publishing-to-pypi/1061) to do this:\r\n\r\nhttps://github.com/pypa/gh-action-pypi-publish\r\n\r\nHowever, we need GitHub actions for pyhf, so we have to wait.\n", "before_files": [{"content": "from setuptools import setup, find_packages\nfrom os import path\nimport sys\n\nthis_directory = path.abspath(path.dirname(__file__))\nif sys.version_info.major < 3:\n from io import open\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as readme_md:\n long_description = readme_md.read()\n\nextras_require = {\n 'tensorflow': ['tensorflow~=1.15', 'tensorflow-probability~=0.8', 'numpy~=1.16',],\n 'torch': ['torch~=1.2'],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n 'develop': [\n 'pyflakes',\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'matplotlib',\n 'jupyter',\n 'nbdime',\n 'uproot~=3.3',\n 'papermill~=1.0',\n 'nteract-scrapbook~=0.2',\n 'graphviz',\n 'bumpversion',\n 'sphinx',\n 'sphinxcontrib-bibtex',\n 'sphinxcontrib-napoleon',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'sphinx-issues',\n 'm2r',\n 'jsonpatch',\n 'ipython',\n 'pre-commit',\n 'black;python_version>=\"3.6\"', # Black is Python3 only\n 'twine',\n 'check-manifest',\n ],\n}\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\ndef _is_test_pypi():\n \"\"\"\n Determine if the Travis CI environment has TESTPYPI_UPLOAD defined and\n set to true (c.f. .travis.yml)\n\n The use_scm_version kwarg accepts a callable for the local_scheme\n configuration parameter with argument \"version\". This can be replaced\n with a lambda as the desired version structure is {next_version}.dev{distance}\n c.f. 
https://github.com/pypa/setuptools_scm/#importing-in-setuppy\n\n As the scm versioning is only desired for TestPyPI, for depolyment to PyPI the version\n controlled through bumpversion is used.\n \"\"\"\n from os import getenv\n\n return (\n {'local_scheme': lambda version: ''}\n if getenv('TESTPYPI_UPLOAD') == 'true'\n else False\n )\n\n\nsetup(\n name='pyhf',\n version='0.2.0',\n description='(partial) pure python histfactory implementation',\n long_description=long_description,\n long_description_content_type='text/markdown',\n url='https://github.com/diana-hep/pyhf',\n author='Lukas Heinrich, Matthew Feickert, Giordon Stark',\n author_email='[email protected], [email protected], [email protected]',\n license='Apache',\n keywords='physics fitting numpy scipy tensorflow pytorch',\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n package_dir={'': 'src'},\n packages=find_packages(where='src'),\n include_package_data=True,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*\",\n install_requires=[\n 'scipy', # requires numpy, which is required by pyhf and tensorflow\n 'click>=6.0', # for console scripts,\n 'tqdm', # for readxml\n 'six', # for modifiers\n 'jsonschema>=v3.0.0a2', # for utils, alpha-release for draft 6\n 'jsonpatch',\n 'pyyaml', # for parsing CLI equal-delimited options\n ],\n extras_require=extras_require,\n entry_points={'console_scripts': ['pyhf=pyhf.commandline:pyhf']},\n dependency_links=[],\n use_scm_version=_is_test_pypi(),\n)\n", "path": "setup.py"}]}
| 1,846 | 194 |
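The patch above only renames the environment variable consulted by `_is_test_pypi` (from `TESTPYPI_UPLOAD` to `IS_TESTPYPI`) so it matches the new GitHub Actions workflow. A hedged, self-contained sketch of that toggle, with an invented function name and independent of pyhf's real `setup.py`:

```python
# Sketch: pick a setuptools_scm configuration from a CI environment flag.
import os


def scm_version_config(env_flag: str = 'IS_TESTPYPI'):
    """Return a use_scm_version value: dev-style versions for TestPyPI only."""
    if os.getenv(env_flag) == 'true':
        # Drop the local part so TestPyPI accepts {next_version}.dev{distance}.
        return {'local_scheme': lambda version: ''}
    # For regular PyPI releases, fall back to the statically declared version.
    return False


if __name__ == '__main__':
    os.environ['IS_TESTPYPI'] = 'true'
    print(scm_version_config())  # dict enabling the dev-style local scheme
    del os.environ['IS_TESTPYPI']
    print(scm_version_config())  # False: use the bumpversion-managed version
```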
gh_patches_debug_59597
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-587
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
loosen opentelemetry dependencies
See Spanner PR: https://github.com/googleapis/python-spanner/pull/298
</issue>
<code>
[start of setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 "google-api-core[grpc] >= 1.23.0, < 2.0.0dev",
33 "proto-plus >= 1.10.0",
34 "google-cloud-core >= 1.4.1, < 2.0dev",
35 "google-resumable-media >= 0.6.0, < 2.0dev",
36 "packaging >= 14.3",
37 "protobuf >= 3.12.0",
38 "requests >= 2.18.0, < 3.0.0dev",
39 ]
40 extras = {
41 "bqstorage": [
42 "google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev",
43 # Due to an issue in pip's dependency resolver, the `grpc` extra is not
44 # installed, even though `google-cloud-bigquery-storage` specifies it
45 # as `google-api-core[grpc]`. We thus need to explicitly specify it here.
46 # See: https://github.com/googleapis/python-bigquery/issues/83 The
47 # grpc.Channel.close() method isn't added until 1.32.0.
48 # https://github.com/grpc/grpc/pull/15254
49 "grpcio >= 1.32.0, < 2.0dev",
50 "pyarrow >= 1.0.0, < 4.0dev",
51 ],
52 "pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 4.0dev"],
53 "bignumeric_type": ["pyarrow >= 3.0.0, < 4.0dev"],
54 "tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
55 "opentelemetry": [
56 "opentelemetry-api==0.11b0",
57 "opentelemetry-sdk==0.11b0",
58 "opentelemetry-instrumentation==0.11b0",
59 ],
60 }
61
62 all_extras = []
63
64 for extra in extras:
65 # Exclude this extra from all to avoid overly strict dependencies on core
66 # libraries such as pyarrow.
67 # https://github.com/googleapis/python-bigquery/issues/563
68 if extra in {"bignumeric_type"}:
69 continue
70 all_extras.extend(extras[extra])
71
72 extras["all"] = all_extras
73
74 # Setup boilerplate below this line.
75
76 package_root = os.path.abspath(os.path.dirname(__file__))
77
78 readme_filename = os.path.join(package_root, "README.rst")
79 with io.open(readme_filename, encoding="utf-8") as readme_file:
80 readme = readme_file.read()
81
82 version = {}
83 with open(os.path.join(package_root, "google/cloud/bigquery/version.py")) as fp:
84 exec(fp.read(), version)
85 version = version["__version__"]
86
87 # Only include packages under the 'google' namespace. Do not include tests,
88 # benchmarks, etc.
89 packages = [
90 package
91 for package in setuptools.PEP420PackageFinder.find()
92 if package.startswith("google")
93 ]
94
95 # Determine which namespaces are needed.
96 namespaces = ["google"]
97 if "google.cloud" in packages:
98 namespaces.append("google.cloud")
99
100
101 setuptools.setup(
102 name=name,
103 version=version,
104 description=description,
105 long_description=readme,
106 author="Google LLC",
107 author_email="[email protected]",
108 license="Apache 2.0",
109 url="https://github.com/googleapis/python-bigquery",
110 classifiers=[
111 release_status,
112 "Intended Audience :: Developers",
113 "License :: OSI Approved :: Apache Software License",
114 "Programming Language :: Python",
115 "Programming Language :: Python :: 3",
116 "Programming Language :: Python :: 3.6",
117 "Programming Language :: Python :: 3.7",
118 "Programming Language :: Python :: 3.8",
119 "Programming Language :: Python :: 3.9",
120 "Operating System :: OS Independent",
121 "Topic :: Internet",
122 ],
123 platforms="Posix; MacOS X; Windows",
124 packages=packages,
125 namespace_packages=namespaces,
126 install_requires=dependencies,
127 extras_require=extras,
128 python_requires=">=3.6, <3.10",
129 include_package_data=True,
130 zip_safe=False,
131 )
132
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -53,9 +53,9 @@
"bignumeric_type": ["pyarrow >= 3.0.0, < 4.0dev"],
"tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
"opentelemetry": [
- "opentelemetry-api==0.11b0",
- "opentelemetry-sdk==0.11b0",
- "opentelemetry-instrumentation==0.11b0",
+ "opentelemetry-api >= 0.11b0",
+ "opentelemetry-sdk >= 0.11b0",
+ "opentelemetry-instrumentation >= 0.11b0",
],
}
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -53,9 +53,9 @@\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 4.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n- \"opentelemetry-api==0.11b0\",\n- \"opentelemetry-sdk==0.11b0\",\n- \"opentelemetry-instrumentation==0.11b0\",\n+ \"opentelemetry-api >= 0.11b0\",\n+ \"opentelemetry-sdk >= 0.11b0\",\n+ \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n }\n", "issue": "loosen opentelemetry dependencies\nSee Spanner PR: https://github.com/googleapis/python-spanner/pull/298\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"google-api-core[grpc] >= 1.23.0, < 2.0.0dev\",\n \"proto-plus >= 1.10.0\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n \"google-resumable-media >= 0.6.0, < 2.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.32.0, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 4.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 4.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 4.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api==0.11b0\",\n \"opentelemetry-sdk==0.11b0\",\n \"opentelemetry-instrumentation==0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]}
| 2,013 | 190 |
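The change above loosens exact `==0.11b0` pins to `>= 0.11b0` lower bounds so pip's resolver can pick newer compatible opentelemetry releases. A small illustrative sketch — not part of the google-cloud-bigquery package — of that rewrite:

```python
# Sketch: turn exact pins into lower-bound specifiers for an optional extra.
from typing import Dict, List

PINNED: List[str] = [
    "opentelemetry-api==0.11b0",
    "opentelemetry-sdk==0.11b0",
    "opentelemetry-instrumentation==0.11b0",
]


def loosen(requirements: List[str]) -> List[str]:
    # Replace the exact '==' pin with a '>=' lower bound so installations
    # alongside newer opentelemetry packages no longer conflict.
    return [req.replace("==", " >= ", 1) for req in requirements]


if __name__ == "__main__":
    extras: Dict[str, List[str]] = {"opentelemetry": loosen(PINNED)}
    print(extras)
```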
gh_patches_debug_14536
|
rasdani/github-patches
|
git_diff
|
mozmeao__snippets-service-864
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Filter by release channel on ASRSnippets raises an error
</issue>
<code>
[start of snippets/base/admin/filters.py]
1 from datetime import datetime, timedelta
2
3 from django.contrib import admin
4 from django.utils.encoding import force_text
5
6
7 class ModifiedFilter(admin.SimpleListFilter):
8 title = 'Last modified'
9 parameter_name = 'last_modified'
10
11 def lookups(self, request, model_admin):
12 return (
13 ('24', '24 hours'),
14 ('168', '7 days'),
15 ('336', '14 days'),
16 ('720', '30 days'),
17 ('all', 'All'),
18 )
19
20 def queryset(self, request, queryset):
21 value = self.value()
22 if not value or value == 'all':
23 return queryset
24
25 when = datetime.utcnow() - timedelta(hours=int(value))
26 return queryset.exclude(modified__lt=when)
27
28 def choices(self, cl):
29 for lookup, title in self.lookup_choices:
30 yield {
31 'selected': self.value() == force_text(lookup),
32 'query_string': cl.get_query_string({
33 self.parameter_name: lookup,
34 }, []),
35 'display': title,
36 }
37
38
39 class ChannelFilter(admin.SimpleListFilter):
40 title = 'Channel'
41 parameter_name = 'channel'
42
43 def lookups(self, request, model_admin):
44 return (
45 ('on_release', 'Release'),
46 ('on_esr', 'ESR'),
47 ('on_beta', 'Beta'),
48 ('on_aurora', 'Dev (Aurora)'),
49 ('on_nightly', 'Nightly'),
50 )
51
52 def queryset(self, request, queryset):
53 if self.value() is None:
54 return queryset
55
56 return queryset.filter(**{self.value(): True})
57
58
59 class ActivityStreamFilter(admin.SimpleListFilter):
60 title = 'Activity Stream'
61 parameter_name = 'is_activity_stream'
62
63 def lookups(self, request, model_admin):
64 return (
65 ('yes', 'Yes'),
66 ('no', 'No'),
67 )
68
69 def queryset(self, request, queryset):
70 if self.value() is None:
71 return queryset
72 elif self.value() == 'yes':
73 return queryset.filter(on_startpage_5=True)
74 elif self.value() == 'no':
75 return queryset.exclude(on_startpage_5=True)
76
[end of snippets/base/admin/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snippets/base/admin/filters.py b/snippets/base/admin/filters.py
--- a/snippets/base/admin/filters.py
+++ b/snippets/base/admin/filters.py
@@ -3,6 +3,8 @@
from django.contrib import admin
from django.utils.encoding import force_text
+from snippets.base.managers import SnippetQuerySet
+
class ModifiedFilter(admin.SimpleListFilter):
title = 'Last modified'
@@ -53,7 +55,9 @@
if self.value() is None:
return queryset
- return queryset.filter(**{self.value(): True})
+ if isinstance(queryset, SnippetQuerySet):
+ return queryset.filter(**{self.value(): True})
+ return queryset.filter(**{f'target__{self.value()}': True})
class ActivityStreamFilter(admin.SimpleListFilter):
|
{"golden_diff": "diff --git a/snippets/base/admin/filters.py b/snippets/base/admin/filters.py\n--- a/snippets/base/admin/filters.py\n+++ b/snippets/base/admin/filters.py\n@@ -3,6 +3,8 @@\n from django.contrib import admin\n from django.utils.encoding import force_text\n \n+from snippets.base.managers import SnippetQuerySet\n+\n \n class ModifiedFilter(admin.SimpleListFilter):\n title = 'Last modified'\n@@ -53,7 +55,9 @@\n if self.value() is None:\n return queryset\n \n- return queryset.filter(**{self.value(): True})\n+ if isinstance(queryset, SnippetQuerySet):\n+ return queryset.filter(**{self.value(): True})\n+ return queryset.filter(**{f'target__{self.value()}': True})\n \n \n class ActivityStreamFilter(admin.SimpleListFilter):\n", "issue": "Filter by release channel on ASRSnippets raises an error\n\n", "before_files": [{"content": "from datetime import datetime, timedelta\n\nfrom django.contrib import admin\nfrom django.utils.encoding import force_text\n\n\nclass ModifiedFilter(admin.SimpleListFilter):\n title = 'Last modified'\n parameter_name = 'last_modified'\n\n def lookups(self, request, model_admin):\n return (\n ('24', '24 hours'),\n ('168', '7 days'),\n ('336', '14 days'),\n ('720', '30 days'),\n ('all', 'All'),\n )\n\n def queryset(self, request, queryset):\n value = self.value()\n if not value or value == 'all':\n return queryset\n\n when = datetime.utcnow() - timedelta(hours=int(value))\n return queryset.exclude(modified__lt=when)\n\n def choices(self, cl):\n for lookup, title in self.lookup_choices:\n yield {\n 'selected': self.value() == force_text(lookup),\n 'query_string': cl.get_query_string({\n self.parameter_name: lookup,\n }, []),\n 'display': title,\n }\n\n\nclass ChannelFilter(admin.SimpleListFilter):\n title = 'Channel'\n parameter_name = 'channel'\n\n def lookups(self, request, model_admin):\n return (\n ('on_release', 'Release'),\n ('on_esr', 'ESR'),\n ('on_beta', 'Beta'),\n ('on_aurora', 'Dev (Aurora)'),\n ('on_nightly', 'Nightly'),\n )\n\n def queryset(self, request, queryset):\n if self.value() is None:\n return queryset\n\n return queryset.filter(**{self.value(): True})\n\n\nclass ActivityStreamFilter(admin.SimpleListFilter):\n title = 'Activity Stream'\n parameter_name = 'is_activity_stream'\n\n def lookups(self, request, model_admin):\n return (\n ('yes', 'Yes'),\n ('no', 'No'),\n )\n\n def queryset(self, request, queryset):\n if self.value() is None:\n return queryset\n elif self.value() == 'yes':\n return queryset.filter(on_startpage_5=True)\n elif self.value() == 'no':\n return queryset.exclude(on_startpage_5=True)\n", "path": "snippets/base/admin/filters.py"}]}
| 1,166 | 181 |
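The golden diff above fixes the `ChannelFilter` crash by branching on the queryset type: `Snippet` keeps the `on_release`/`on_beta`/... booleans on the model itself, while the `target__` lookup in the patch suggests that ASRSnippet stores the same flags on a related target object. Below is the patched method again with orientation comments added; it is a sketch of the same code from the diff, not a new implementation:

from snippets.base.managers import SnippetQuerySet

def queryset(self, request, queryset):
    if self.value() is None:                      # no channel selected in the admin sidebar
        return queryset
    if isinstance(queryset, SnippetQuerySet):     # classic Snippet changelist
        return queryset.filter(**{self.value(): True})
    # ASRSnippet changelist: the channel flags appear to live behind the target relation
    return queryset.filter(**{f'target__{self.value()}': True})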
gh_patches_debug_21120
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-242
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add type check to NonparameterizedLinear function
Related to #123
</issue>
<code>
[start of chainer/functions/nonparameterized_linear.py]
1 from chainer import cuda
2 from chainer import function
3 from chainer.functions import linear as linear_module
4
5
6 class NonparameterizedLinear(function.Function):
7
8 """Nonparameterized linear class.
9
10 .. seealso:: :class:`Linear`
11
12 """
13
14 def forward(self, x):
15 W = x[1]
16 b = None
17 if len(x) == 3:
18 b = x[2]
19 out_size, in_size = W.shape
20 func = linear_module.Linear(
21 in_size, out_size, initialW=W, initial_bias=b)
22 self.func = func
23 if any(isinstance(i, cuda.GPUArray) for i in x):
24 func.to_gpu()
25 return func.forward(x[:1])
26
27 def backward(self, x, gy):
28 func = self.func
29 func.zero_grads()
30 gx = func.backward(x[:1], gy)
31 if func.gb is None:
32 return (gx[0], func.gW)
33 return (gx[0], func.gW, func.gb)
34
35
36 def linear(x, W, b=None, stride=1, pad=0, use_cudnn=True):
37 """Nonparameterized linear function.
38
39 Args:
40 x (~chainer.Variable): Input variable.
41 W (~chainer.Variable): Weight variable.
42 b (~chainer.Variable): Bias variable.
43
44 Returns:
45 ~chainer.Variable: Output variable.
46
47 .. seealso:: :class:`Linear`
48
49 """
50
51 return NonparameterizedLinear()(x, W, b)
52
[end of chainer/functions/nonparameterized_linear.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/functions/nonparameterized_linear.py b/chainer/functions/nonparameterized_linear.py
--- a/chainer/functions/nonparameterized_linear.py
+++ b/chainer/functions/nonparameterized_linear.py
@@ -1,6 +1,9 @@
+import numpy
+
from chainer import cuda
from chainer import function
from chainer.functions import linear as linear_module
+from chainer.utils import type_check
class NonparameterizedLinear(function.Function):
@@ -11,6 +14,29 @@
"""
+ def check_type_forward(self, in_types):
+ type_check.expect(
+ 2 <= in_types.size(),
+ in_types.size() <= 3,
+ )
+ x_type = in_types[0]
+ w_type = in_types[1]
+
+ prod = type_check.Variable(numpy.prod, 'prod')
+ type_check.expect(
+ x_type.dtype == numpy.float32,
+ w_type.dtype == numpy.float32,
+ x_type.ndim >= 2,
+ w_type.ndim == 2,
+ prod(x_type.shape[1:]) == w_type.shape[1],
+ )
+ if in_types.size().eval() == 3:
+ b_type = in_types[2]
+ type_check.expect(
+ b_type.ndim == 1,
+ b_type.shape[0] == w_type.shape[0],
+ )
+
def forward(self, x):
W = x[1]
b = None
|
{"golden_diff": "diff --git a/chainer/functions/nonparameterized_linear.py b/chainer/functions/nonparameterized_linear.py\n--- a/chainer/functions/nonparameterized_linear.py\n+++ b/chainer/functions/nonparameterized_linear.py\n@@ -1,6 +1,9 @@\n+import numpy\n+\n from chainer import cuda\n from chainer import function\n from chainer.functions import linear as linear_module\n+from chainer.utils import type_check\n \n \n class NonparameterizedLinear(function.Function):\n@@ -11,6 +14,29 @@\n \n \"\"\"\n \n+ def check_type_forward(self, in_types):\n+ type_check.expect(\n+ 2 <= in_types.size(),\n+ in_types.size() <= 3,\n+ )\n+ x_type = in_types[0]\n+ w_type = in_types[1]\n+\n+ prod = type_check.Variable(numpy.prod, 'prod')\n+ type_check.expect(\n+ x_type.dtype == numpy.float32,\n+ w_type.dtype == numpy.float32,\n+ x_type.ndim >= 2,\n+ w_type.ndim == 2,\n+ prod(x_type.shape[1:]) == w_type.shape[1],\n+ )\n+ if in_types.size().eval() == 3:\n+ b_type = in_types[2]\n+ type_check.expect(\n+ b_type.ndim == 1,\n+ b_type.shape[0] == w_type.shape[0],\n+ )\n+\n def forward(self, x):\n W = x[1]\n b = None\n", "issue": "Add type check to NonparameterizedLinear function\nRelated to #123\n\n", "before_files": [{"content": "from chainer import cuda\nfrom chainer import function\nfrom chainer.functions import linear as linear_module\n\n\nclass NonparameterizedLinear(function.Function):\n\n \"\"\"Nonparameterized linear class.\n\n .. seealso:: :class:`Linear`\n\n \"\"\"\n\n def forward(self, x):\n W = x[1]\n b = None\n if len(x) == 3:\n b = x[2]\n out_size, in_size = W.shape\n func = linear_module.Linear(\n in_size, out_size, initialW=W, initial_bias=b)\n self.func = func\n if any(isinstance(i, cuda.GPUArray) for i in x):\n func.to_gpu()\n return func.forward(x[:1])\n\n def backward(self, x, gy):\n func = self.func\n func.zero_grads()\n gx = func.backward(x[:1], gy)\n if func.gb is None:\n return (gx[0], func.gW)\n return (gx[0], func.gW, func.gb)\n\n\ndef linear(x, W, b=None, stride=1, pad=0, use_cudnn=True):\n \"\"\"Nonparameterized linear function.\n\n Args:\n x (~chainer.Variable): Input variable.\n W (~chainer.Variable): Weight variable.\n b (~chainer.Variable): Bias variable.\n\n Returns:\n ~chainer.Variable: Output variable.\n\n .. seealso:: :class:`Linear`\n\n \"\"\"\n\n return NonparameterizedLinear()(x, W, b)\n", "path": "chainer/functions/nonparameterized_linear.py"}]}
| 984 | 330 |
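With the added `check_type_forward`, a shape mismatch is reported as `chainer.utils.type_check.InvalidType` before `forward()` runs. A rough usage sketch against the module shown in the record follows; it assumes the Chainer 1.x behaviour of accepting plain NumPy arrays as function inputs:

import numpy
from chainer.functions.nonparameterized_linear import linear
from chainer.utils import type_check

x = numpy.random.rand(8, 4).astype(numpy.float32)
W_ok = numpy.random.rand(3, 4).astype(numpy.float32)    # (out_size, in_size) matches x
W_bad = numpy.random.rand(3, 5).astype(numpy.float32)   # in_size != prod(x.shape[1:])

linear(x, W_ok)            # passes the new prod(x_type.shape[1:]) == w_type.shape[1] check
try:
    linear(x, W_bad)       # now fails fast instead of erroring deep inside forward()
except type_check.InvalidType as err:
    print(err)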
gh_patches_debug_21472
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-2877
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bplan template dates saved but not shown in Dashboard
URL: https://mein.berlin.de/dashboard/projects/erweiterung-mauerpark-bebauungsplan-3-64-im-bezirk/bplan/
user: initiator
expected behaviour: date and time that I have entered are still shown after saving form
behaviour: dates are no longer shown after saving, no error message, I can still publish the project and date is shown correctly on project tile
device & browser: Desktop, mac, chrome Version 76.0.3809.132 (Offizieller Build) (64-Bit)
Importance: relevant bug, fix before next release
</issue>
<code>
[start of meinberlin/apps/bplan/serializers.py]
1 import datetime
2 import imghdr
3 import posixpath
4 import tempfile
5 from urllib.parse import urlparse
6
7 import requests
8 from django.apps import apps
9 from django.conf import settings
10 from django.contrib.sites.models import Site
11 from django.core.exceptions import ValidationError
12 from django.core.files.images import ImageFile
13 from django.urls import reverse
14 from django.utils import timezone
15 from django.utils.translation import ugettext as _
16 from rest_framework import serializers
17
18 from adhocracy4.dashboard import components
19 from adhocracy4.dashboard import signals as a4dashboard_signals
20 from adhocracy4.images.validators import validate_image
21 from adhocracy4.modules import models as module_models
22 from adhocracy4.phases import models as phase_models
23 from adhocracy4.projects import models as project_models
24
25 from .models import Bplan
26 from .phases import StatementPhase
27
28 BPLAN_EMBED = '<iframe height="500" style="width: 100%; min-height: 300px; ' \
29 'max-height: 100vh" src="{}" frameborder="0"></iframe>'
30 DOWNLOAD_IMAGE_SIZE_LIMIT_BYTES = 10 * 1024 * 1024
31
32
33 class BplanSerializer(serializers.ModelSerializer):
34 id = serializers.IntegerField(required=False)
35
36 # make write_only for consistency reasons
37 start_date = serializers.DateTimeField(write_only=True)
38 end_date = serializers.DateTimeField(write_only=True)
39 image_url = serializers.URLField(required=False, write_only=True)
40 image_copyright = serializers.CharField(required=False, write_only=True,
41 source='tile_image_copyright',
42 allow_blank=True,
43 max_length=120)
44 embed_code = serializers.SerializerMethodField()
45
46 class Meta:
47 model = Bplan
48 fields = (
49 'id', 'name', 'identifier', 'description', 'url',
50 'office_worker_email', 'is_draft', 'start_date', 'end_date',
51 'image_url', 'image_copyright', 'embed_code'
52 )
53 extra_kwargs = {
54 # write_only for consistency reasons
55 'is_draft': {'default': False, 'write_only': True},
56 'name': {'write_only': True},
57 'description': {'write_only': True},
58 'url': {'write_only': True},
59 'office_worker_email': {'write_only': True},
60 'identifier': {'write_only': True}
61 }
62
63 def create(self, validated_data):
64 orga_pk = self._context.get('organisation_pk', None)
65 orga_model = apps.get_model(settings.A4_ORGANISATIONS_MODEL)
66 orga = orga_model.objects.get(pk=orga_pk)
67 validated_data['organisation'] = orga
68
69 start_date = validated_data.pop('start_date')
70 end_date = validated_data.pop('end_date')
71
72 image_url = validated_data.pop('image_url', None)
73 if image_url:
74 validated_data['tile_image'] = \
75 self._download_image_from_url(image_url)
76
77 bplan = super().create(validated_data)
78 self._create_module_and_phase(bplan, start_date, end_date)
79 self._send_project_created_signal(bplan)
80 return bplan
81
82 def _create_module_and_phase(self, bplan, start_date, end_date):
83 module = module_models.Module.objects.create(
84 name=bplan.slug + '_module',
85 weight=1,
86 project=bplan,
87 )
88
89 phase_content = StatementPhase()
90 phase_models.Phase.objects.create(
91 name=_('Bplan statement phase'),
92 description=_('Bplan statement phase'),
93 type=phase_content.identifier,
94 module=module,
95 start_date=start_date,
96 end_date=end_date
97 )
98
99 def update(self, instance, validated_data):
100 start_date = validated_data.pop('start_date', None)
101 end_date = validated_data.pop('end_date', None)
102 if start_date or end_date:
103 self._update_phase(instance, start_date, end_date)
104 if end_date and end_date > timezone.localtime(timezone.now()):
105 instance.is_archived = False
106
107 image_url = validated_data.pop('image_url', None)
108 if image_url:
109 validated_data['tile_image'] = \
110 self._download_image_from_url(image_url)
111
112 instance = super().update(instance, validated_data)
113
114 self._send_component_updated_signal(instance)
115 return instance
116
117 def _update_phase(self, bplan, start_date, end_date):
118 module = module_models.Module.objects.get(project=bplan)
119 phase = phase_models.Phase.objects.get(module=module)
120 if start_date:
121 phase.start_date = start_date
122 if end_date:
123 phase.end_date = end_date
124 phase.save()
125
126 def get_embed_code(self, bplan):
127 url = self._get_absolute_url(bplan)
128 embed = BPLAN_EMBED.format(url)
129 return embed
130
131 def _get_absolute_url(self, bplan):
132 site_url = Site.objects.get_current().domain
133 embed_url = reverse('embed-project', kwargs={'slug': bplan.slug, })
134 url = 'https://{}{}'.format(site_url, embed_url)
135 return url
136
137 def _download_image_from_url(self, url):
138 parsed_url = urlparse(url)
139 file_name = None
140 try:
141 r = requests.get(url, stream=True, timeout=10)
142 downloaded_bytes = 0
143 with tempfile.TemporaryFile() as f:
144 for chunk in r.iter_content(chunk_size=1024):
145 downloaded_bytes += len(chunk)
146 if downloaded_bytes > DOWNLOAD_IMAGE_SIZE_LIMIT_BYTES:
147 raise serializers.ValidationError(
148 'Image too large to download {}'.format(url))
149 if chunk:
150 f.write(chunk)
151 file_name = self._generate_image_filename(parsed_url.path, f)
152 self._image_storage.save(file_name, f)
153 except Exception:
154 if file_name:
155 self._image_storage.delete(file_name)
156 raise serializers.ValidationError(
157 'Failed to download image {}'.format(url))
158
159 try:
160 self._validate_image(file_name)
161 except ValidationError as e:
162 self._image_storage.delete(file_name)
163 raise serializers.ValidationError(e)
164
165 return file_name
166
167 def _validate_image(self, file_name):
168 image_file = self._image_storage.open(file_name, 'rb')
169 image = ImageFile(image_file, file_name)
170 config = settings.IMAGE_ALIASES.get('*', {})
171 config.update(settings.IMAGE_ALIASES['tileimage'])
172 validate_image(image, **config)
173
174 @property
175 def _image_storage(self):
176 return project_models.Project._meta.get_field('tile_image').storage
177
178 @property
179 def _image_upload_to(self):
180 return project_models.Project._meta.get_field('tile_image').upload_to
181
182 def _generate_image_filename(self, url_path, file):
183 if callable(self._image_upload_to):
184 raise Exception('Callable upload_to fields are not supported')
185
186 root_path, extension = posixpath.splitext(url_path)
187 if file:
188 # Workaround: imghdr expects the files position on 0
189 file.seek(0)
190 extension = imghdr.what(file) or 'jpeg'
191
192 basename = posixpath.basename(root_path)
193 if not basename:
194 basename = 'bplan'
195
196 dirname = datetime.datetime.now().strftime(self._image_upload_to)
197 filename = posixpath.join(dirname, basename + '.' + extension)
198
199 return self._image_storage.get_available_name(filename)
200
201 def _send_project_created_signal(self, bplan):
202 a4dashboard_signals.project_created.send(
203 sender=self.__class__,
204 project=bplan,
205 user=self.context['request'].user
206 )
207
208 def _send_component_updated_signal(self, bplan):
209 component = components.projects['bplan']
210 a4dashboard_signals.project_component_updated.send(
211 sender=self.__class__,
212 project=bplan,
213 component=component,
214 user=self.context['request'].user
215 )
216
[end of meinberlin/apps/bplan/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/bplan/serializers.py b/meinberlin/apps/bplan/serializers.py
--- a/meinberlin/apps/bplan/serializers.py
+++ b/meinberlin/apps/bplan/serializers.py
@@ -66,8 +66,8 @@
orga = orga_model.objects.get(pk=orga_pk)
validated_data['organisation'] = orga
- start_date = validated_data.pop('start_date')
- end_date = validated_data.pop('end_date')
+ start_date = validated_data['start_date']
+ end_date = validated_data['end_date']
image_url = validated_data.pop('image_url', None)
if image_url:
@@ -97,8 +97,8 @@
)
def update(self, instance, validated_data):
- start_date = validated_data.pop('start_date', None)
- end_date = validated_data.pop('end_date', None)
+ start_date = validated_data.get('start_date', None)
+ end_date = validated_data.get('end_date', None)
if start_date or end_date:
self._update_phase(instance, start_date, end_date)
if end_date and end_date > timezone.localtime(timezone.now()):
|
{"golden_diff": "diff --git a/meinberlin/apps/bplan/serializers.py b/meinberlin/apps/bplan/serializers.py\n--- a/meinberlin/apps/bplan/serializers.py\n+++ b/meinberlin/apps/bplan/serializers.py\n@@ -66,8 +66,8 @@\n orga = orga_model.objects.get(pk=orga_pk)\n validated_data['organisation'] = orga\n \n- start_date = validated_data.pop('start_date')\n- end_date = validated_data.pop('end_date')\n+ start_date = validated_data['start_date']\n+ end_date = validated_data['end_date']\n \n image_url = validated_data.pop('image_url', None)\n if image_url:\n@@ -97,8 +97,8 @@\n )\n \n def update(self, instance, validated_data):\n- start_date = validated_data.pop('start_date', None)\n- end_date = validated_data.pop('end_date', None)\n+ start_date = validated_data.get('start_date', None)\n+ end_date = validated_data.get('end_date', None)\n if start_date or end_date:\n self._update_phase(instance, start_date, end_date)\n if end_date and end_date > timezone.localtime(timezone.now()):\n", "issue": "bplan template dates saved but not shown in Dashboard\nURL: https://mein.berlin.de/dashboard/projects/erweiterung-mauerpark-bebauungsplan-3-64-im-bezirk/bplan/\r\nuser: initiator\r\nexpected behaviour: date and time that I have entered are still shown after saving form\r\nbehaviour: dates are no longer shown after saving, no error message, I can still publish the project and date is shown correctly on project tile\r\ndevice & browser: Desktop, mac, chrome Version 76.0.3809.132 (Offizieller Build) (64-Bit)\r\nImportance: relevant bug, fix before next release\n", "before_files": [{"content": "import datetime\nimport imghdr\nimport posixpath\nimport tempfile\nfrom urllib.parse import urlparse\n\nimport requests\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.contrib.sites.models import Site\nfrom django.core.exceptions import ValidationError\nfrom django.core.files.images import ImageFile\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext as _\nfrom rest_framework import serializers\n\nfrom adhocracy4.dashboard import components\nfrom adhocracy4.dashboard import signals as a4dashboard_signals\nfrom adhocracy4.images.validators import validate_image\nfrom adhocracy4.modules import models as module_models\nfrom adhocracy4.phases import models as phase_models\nfrom adhocracy4.projects import models as project_models\n\nfrom .models import Bplan\nfrom .phases import StatementPhase\n\nBPLAN_EMBED = '<iframe height=\"500\" style=\"width: 100%; min-height: 300px; ' \\\n 'max-height: 100vh\" src=\"{}\" frameborder=\"0\"></iframe>'\nDOWNLOAD_IMAGE_SIZE_LIMIT_BYTES = 10 * 1024 * 1024\n\n\nclass BplanSerializer(serializers.ModelSerializer):\n id = serializers.IntegerField(required=False)\n\n # make write_only for consistency reasons\n start_date = serializers.DateTimeField(write_only=True)\n end_date = serializers.DateTimeField(write_only=True)\n image_url = serializers.URLField(required=False, write_only=True)\n image_copyright = serializers.CharField(required=False, write_only=True,\n source='tile_image_copyright',\n allow_blank=True,\n max_length=120)\n embed_code = serializers.SerializerMethodField()\n\n class Meta:\n model = Bplan\n fields = (\n 'id', 'name', 'identifier', 'description', 'url',\n 'office_worker_email', 'is_draft', 'start_date', 'end_date',\n 'image_url', 'image_copyright', 'embed_code'\n )\n extra_kwargs = {\n # write_only for consistency reasons\n 'is_draft': {'default': False, 'write_only': True},\n 'name': 
{'write_only': True},\n 'description': {'write_only': True},\n 'url': {'write_only': True},\n 'office_worker_email': {'write_only': True},\n 'identifier': {'write_only': True}\n }\n\n def create(self, validated_data):\n orga_pk = self._context.get('organisation_pk', None)\n orga_model = apps.get_model(settings.A4_ORGANISATIONS_MODEL)\n orga = orga_model.objects.get(pk=orga_pk)\n validated_data['organisation'] = orga\n\n start_date = validated_data.pop('start_date')\n end_date = validated_data.pop('end_date')\n\n image_url = validated_data.pop('image_url', None)\n if image_url:\n validated_data['tile_image'] = \\\n self._download_image_from_url(image_url)\n\n bplan = super().create(validated_data)\n self._create_module_and_phase(bplan, start_date, end_date)\n self._send_project_created_signal(bplan)\n return bplan\n\n def _create_module_and_phase(self, bplan, start_date, end_date):\n module = module_models.Module.objects.create(\n name=bplan.slug + '_module',\n weight=1,\n project=bplan,\n )\n\n phase_content = StatementPhase()\n phase_models.Phase.objects.create(\n name=_('Bplan statement phase'),\n description=_('Bplan statement phase'),\n type=phase_content.identifier,\n module=module,\n start_date=start_date,\n end_date=end_date\n )\n\n def update(self, instance, validated_data):\n start_date = validated_data.pop('start_date', None)\n end_date = validated_data.pop('end_date', None)\n if start_date or end_date:\n self._update_phase(instance, start_date, end_date)\n if end_date and end_date > timezone.localtime(timezone.now()):\n instance.is_archived = False\n\n image_url = validated_data.pop('image_url', None)\n if image_url:\n validated_data['tile_image'] = \\\n self._download_image_from_url(image_url)\n\n instance = super().update(instance, validated_data)\n\n self._send_component_updated_signal(instance)\n return instance\n\n def _update_phase(self, bplan, start_date, end_date):\n module = module_models.Module.objects.get(project=bplan)\n phase = phase_models.Phase.objects.get(module=module)\n if start_date:\n phase.start_date = start_date\n if end_date:\n phase.end_date = end_date\n phase.save()\n\n def get_embed_code(self, bplan):\n url = self._get_absolute_url(bplan)\n embed = BPLAN_EMBED.format(url)\n return embed\n\n def _get_absolute_url(self, bplan):\n site_url = Site.objects.get_current().domain\n embed_url = reverse('embed-project', kwargs={'slug': bplan.slug, })\n url = 'https://{}{}'.format(site_url, embed_url)\n return url\n\n def _download_image_from_url(self, url):\n parsed_url = urlparse(url)\n file_name = None\n try:\n r = requests.get(url, stream=True, timeout=10)\n downloaded_bytes = 0\n with tempfile.TemporaryFile() as f:\n for chunk in r.iter_content(chunk_size=1024):\n downloaded_bytes += len(chunk)\n if downloaded_bytes > DOWNLOAD_IMAGE_SIZE_LIMIT_BYTES:\n raise serializers.ValidationError(\n 'Image too large to download {}'.format(url))\n if chunk:\n f.write(chunk)\n file_name = self._generate_image_filename(parsed_url.path, f)\n self._image_storage.save(file_name, f)\n except Exception:\n if file_name:\n self._image_storage.delete(file_name)\n raise serializers.ValidationError(\n 'Failed to download image {}'.format(url))\n\n try:\n self._validate_image(file_name)\n except ValidationError as e:\n self._image_storage.delete(file_name)\n raise serializers.ValidationError(e)\n\n return file_name\n\n def _validate_image(self, file_name):\n image_file = self._image_storage.open(file_name, 'rb')\n image = ImageFile(image_file, file_name)\n config = 
settings.IMAGE_ALIASES.get('*', {})\n config.update(settings.IMAGE_ALIASES['tileimage'])\n validate_image(image, **config)\n\n @property\n def _image_storage(self):\n return project_models.Project._meta.get_field('tile_image').storage\n\n @property\n def _image_upload_to(self):\n return project_models.Project._meta.get_field('tile_image').upload_to\n\n def _generate_image_filename(self, url_path, file):\n if callable(self._image_upload_to):\n raise Exception('Callable upload_to fields are not supported')\n\n root_path, extension = posixpath.splitext(url_path)\n if file:\n # Workaround: imghdr expects the files position on 0\n file.seek(0)\n extension = imghdr.what(file) or 'jpeg'\n\n basename = posixpath.basename(root_path)\n if not basename:\n basename = 'bplan'\n\n dirname = datetime.datetime.now().strftime(self._image_upload_to)\n filename = posixpath.join(dirname, basename + '.' + extension)\n\n return self._image_storage.get_available_name(filename)\n\n def _send_project_created_signal(self, bplan):\n a4dashboard_signals.project_created.send(\n sender=self.__class__,\n project=bplan,\n user=self.context['request'].user\n )\n\n def _send_component_updated_signal(self, bplan):\n component = components.projects['bplan']\n a4dashboard_signals.project_component_updated.send(\n sender=self.__class__,\n project=bplan,\n component=component,\n user=self.context['request'].user\n )\n", "path": "meinberlin/apps/bplan/serializers.py"}]}
| 2,923 | 274 |
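The bug above comes down to `dict.pop` versus plain access: `create()` and `update()` removed `start_date`/`end_date` from `validated_data` before calling the parent `ModelSerializer` methods, so only the phase ever received the dates, which appears to be why the Dashboard form came back empty. The golden diff keeps the keys in place. The difference in isolation, plain Python with illustrative values and no Django required:

validated_data = {
    "name": "Erweiterung Mauerpark",
    "start_date": "2019-09-01T08:00",
    "end_date": "2019-10-01T18:00",
}

validated_data.pop("start_date")    # old code: key is gone before super().update() ever sees it
validated_data.get("end_date")      # new code: value is read but the key survives
sorted(validated_data)              # ['end_date', 'name'] - start_date was lost, end_date kept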
gh_patches_debug_63158
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-2101
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Users should be able to edit expired 'careeropportunity' from Dashboard
## What kind of an issue is this?
- Feature request
## What is the expected behaviour?
You should be able to click to edit from the list of expired careeropportunities in the Dashboard.
## Other information
This was requested by one of our users on email.
</issue>
<code>
[start of apps/careeropportunity/dashboard/views.py]
1 # -*- encoding: utf-8 -*-
2 import logging
3
4 from django.contrib import messages
5 from django.contrib.auth.decorators import login_required
6 from django.core.exceptions import PermissionDenied
7 from django.shortcuts import get_object_or_404, redirect, render
8 from django.utils import timezone
9 from guardian.decorators import permission_required
10
11 from apps.careeropportunity.forms import AddCareerOpportunityForm
12 from apps.careeropportunity.models import CareerOpportunity
13 from apps.dashboard.tools import get_base_context, has_access
14
15
16 @login_required
17 @permission_required('careeropportunity.view_careeropportunity', return_403=True)
18 def index(request):
19
20 if not has_access(request):
21 raise PermissionDenied
22
23 context = get_base_context(request)
24
25 # "cops" is short for "careeropportunities" which is a fucking long word
26 # "cop" is short for "careeropportunity" which also is a fucking long word
27 cops = CareerOpportunity.objects.all()
28 context['cops'] = cops.filter(end__gte=timezone.now()).order_by('end')
29 context['archive'] = cops.filter(end__lte=timezone.now()).order_by('-id')
30
31 return render(request, 'careeropportunity/dashboard/index.html', context)
32
33
34 @login_required
35 @permission_required('careeropportunity.change_careeropportunity', return_403=True)
36 def detail(request, opportunity_id=None):
37 logger = logging.getLogger(__name__)
38 logger.debug('Editing careeropportunity with id: %s' % (opportunity_id))
39
40 if not has_access(request):
41 raise PermissionDenied
42
43 context = get_base_context(request)
44 cop = None
45 if opportunity_id:
46 cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)
47 context['cop'] = cop
48 context['form'] = AddCareerOpportunityForm(instance=cop)
49 else:
50 context['form'] = AddCareerOpportunityForm()
51
52 if request.method == 'POST':
53 if cop:
54 form = AddCareerOpportunityForm(data=request.POST, instance=cop)
55 else:
56 form = AddCareerOpportunityForm(data=request.POST)
57
58 if form.is_valid():
59 form.save()
60 messages.success(request, 'La til ny karrieremulighet')
61 return redirect(index)
62 else:
63 context['form'] = form
64 messages.error(request,
65 'Skjemaet ble ikke korrekt utfylt. Se etter markerte felter for å se hva som gikk galt.')
66
67 return render(request, 'careeropportunity/dashboard/detail.html', context)
68
69
70 @login_required
71 @permission_required('careeropportunity.change_careeropportunity', return_403=True)
72 def delete(request, opportunity_id=None):
73 logger = logging.getLogger(__name__)
74 logger.debug('Deleting careeropportunitywith id: %s' % (opportunity_id))
75 if not has_access(request):
76 raise PermissionDenied
77
78 cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)
79 cop.delete()
80 messages.success(request, 'Slettet karrieremuligheten')
81 return redirect(index)
82
[end of apps/careeropportunity/dashboard/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/careeropportunity/dashboard/views.py b/apps/careeropportunity/dashboard/views.py
--- a/apps/careeropportunity/dashboard/views.py
+++ b/apps/careeropportunity/dashboard/views.py
@@ -27,7 +27,7 @@
cops = CareerOpportunity.objects.all()
context['cops'] = cops.filter(end__gte=timezone.now()).order_by('end')
context['archive'] = cops.filter(end__lte=timezone.now()).order_by('-id')
-
+ context['all'] = cops
return render(request, 'careeropportunity/dashboard/index.html', context)
|
{"golden_diff": "diff --git a/apps/careeropportunity/dashboard/views.py b/apps/careeropportunity/dashboard/views.py\n--- a/apps/careeropportunity/dashboard/views.py\n+++ b/apps/careeropportunity/dashboard/views.py\n@@ -27,7 +27,7 @@\n cops = CareerOpportunity.objects.all()\n context['cops'] = cops.filter(end__gte=timezone.now()).order_by('end')\n context['archive'] = cops.filter(end__lte=timezone.now()).order_by('-id')\n-\n+ context['all'] = cops\n return render(request, 'careeropportunity/dashboard/index.html', context)\n", "issue": "Users should be able to edit expired 'careeropportunity' from Dashboard\n## What kind of an issue is this?\r\n- Feature request\r\n\r\n## What is the expected behaviour?\r\n\r\nYou should be able to click to edit from the list of expired careeropportunities in the Dashboard.\r\n\r\n## Other information\r\n\r\nThis was requested by one of our users on email.\r\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\nimport logging\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils import timezone\nfrom guardian.decorators import permission_required\n\nfrom apps.careeropportunity.forms import AddCareerOpportunityForm\nfrom apps.careeropportunity.models import CareerOpportunity\nfrom apps.dashboard.tools import get_base_context, has_access\n\n\n@login_required\n@permission_required('careeropportunity.view_careeropportunity', return_403=True)\ndef index(request):\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n # \"cops\" is short for \"careeropportunities\" which is a fucking long word\n # \"cop\" is short for \"careeropportunity\" which also is a fucking long word\n cops = CareerOpportunity.objects.all()\n context['cops'] = cops.filter(end__gte=timezone.now()).order_by('end')\n context['archive'] = cops.filter(end__lte=timezone.now()).order_by('-id')\n\n return render(request, 'careeropportunity/dashboard/index.html', context)\n\n\n@login_required\n@permission_required('careeropportunity.change_careeropportunity', return_403=True)\ndef detail(request, opportunity_id=None):\n logger = logging.getLogger(__name__)\n logger.debug('Editing careeropportunity with id: %s' % (opportunity_id))\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n cop = None\n if opportunity_id:\n cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)\n context['cop'] = cop\n context['form'] = AddCareerOpportunityForm(instance=cop)\n else:\n context['form'] = AddCareerOpportunityForm()\n\n if request.method == 'POST':\n if cop:\n form = AddCareerOpportunityForm(data=request.POST, instance=cop)\n else:\n form = AddCareerOpportunityForm(data=request.POST)\n\n if form.is_valid():\n form.save()\n messages.success(request, 'La til ny karrieremulighet')\n return redirect(index)\n else:\n context['form'] = form\n messages.error(request,\n 'Skjemaet ble ikke korrekt utfylt. 
Se etter markerte felter for \u00e5 se hva som gikk galt.')\n\n return render(request, 'careeropportunity/dashboard/detail.html', context)\n\n\n@login_required\n@permission_required('careeropportunity.change_careeropportunity', return_403=True)\ndef delete(request, opportunity_id=None):\n logger = logging.getLogger(__name__)\n logger.debug('Deleting careeropportunitywith id: %s' % (opportunity_id))\n if not has_access(request):\n raise PermissionDenied\n\n cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)\n cop.delete()\n messages.success(request, 'Slettet karrieremuligheten')\n return redirect(index)\n", "path": "apps/careeropportunity/dashboard/views.py"}]}
| 1,454 | 135 |
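The fix here is one added context entry: besides `cops` (still open) and `archive` (expired), the dashboard index now also exposes the unfiltered queryset as `all`, so the template can render edit links for expired opportunities as well; the template change itself is outside this diff and is assumed, not shown. The patched view body for reference:

cops = CareerOpportunity.objects.all()
context['cops'] = cops.filter(end__gte=timezone.now()).order_by('end')       # still running
context['archive'] = cops.filter(end__lte=timezone.now()).order_by('-id')    # already expired
context['all'] = cops                                                        # new: everything, available to the template
return render(request, 'careeropportunity/dashboard/index.html', context)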
gh_patches_debug_12009
|
rasdani/github-patches
|
git_diff
|
Netflix__lemur-111
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Duplicate Plugins Listed
Plugins are duplicated in the authority dropdown.
</issue>
<code>
[start of lemur/plugins/views.py]
1 """
2 .. module: lemur.plugins.views
3 :platform: Unix
4 :synopsis: This module contains all of the accounts view code.
5 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
6 :license: Apache, see LICENSE for more details.
7 .. moduleauthor:: Kevin Glisson <[email protected]>
8 """
9 from flask import Blueprint
10 from flask.ext.restful import Api, reqparse, fields
11 from lemur.auth.service import AuthenticatedResource
12
13 from lemur.common.utils import marshal_items
14
15 from lemur.plugins.base import plugins
16
17 mod = Blueprint('plugins', __name__)
18 api = Api(mod)
19
20
21 FIELDS = {
22 'title': fields.String,
23 'pluginOptions': fields.Raw(attribute='options'),
24 'description': fields.String,
25 'version': fields.String,
26 'author': fields.String,
27 'authorUrl': fields.String,
28 'type': fields.String,
29 'slug': fields.String,
30 }
31
32
33 class PluginsList(AuthenticatedResource):
34 """ Defines the 'plugins' endpoint """
35 def __init__(self):
36 self.reqparse = reqparse.RequestParser()
37 super(PluginsList, self).__init__()
38
39 @marshal_items(FIELDS)
40 def get(self):
41 """
42 .. http:get:: /plugins
43
44 The current plugin list
45
46 **Example request**:
47
48 .. sourcecode:: http
49
50 GET /plugins HTTP/1.1
51 Host: example.com
52 Accept: application/json, text/javascript
53
54 **Example response**:
55
56 .. sourcecode:: http
57
58 HTTP/1.1 200 OK
59 Vary: Accept
60 Content-Type: text/javascript
61
62 {
63 "items": [
64 {
65 "id": 2,
66 "accountNumber": 222222222,
67 "label": "account2",
68 "description": "this is a thing"
69 },
70 {
71 "id": 1,
72 "accountNumber": 11111111111,
73 "label": "account1",
74 "description": "this is a thing"
75 },
76 ]
77 "total": 2
78 }
79
80 :reqheader Authorization: OAuth token to authenticate
81 :statuscode 200: no error
82 """
83 self.reqparse.add_argument('type', type=str, location='args')
84 args = self.reqparse.parse_args()
85
86 if args['type']:
87 return list(plugins.all(plugin_type=args['type']))
88
89 return plugins.all()
90
91
92 class Plugins(AuthenticatedResource):
93 """ Defines the the 'plugins' endpoint """
94 def __init__(self):
95 super(Plugins, self).__init__()
96
97 @marshal_items(FIELDS)
98 def get(self, name):
99 """
100 .. http:get:: /plugins/<name>
101
102 The current plugin list
103
104 **Example request**:
105
106 .. sourcecode:: http
107
108 GET /plugins HTTP/1.1
109 Host: example.com
110 Accept: application/json, text/javascript
111
112 **Example response**:
113
114 .. sourcecode:: http
115
116 HTTP/1.1 200 OK
117 Vary: Accept
118 Content-Type: text/javascript
119
120 {
121 "accountNumber": 222222222,
122 "label": "account2",
123 "description": "this is a thing"
124 }
125
126 :reqheader Authorization: OAuth token to authenticate
127 :statuscode 200: no error
128 """
129 return plugins.get(name)
130
131
132 api.add_resource(PluginsList, '/plugins', endpoint='plugins')
133 api.add_resource(Plugins, '/plugins/<name>', endpoint='pluginName')
134
[end of lemur/plugins/views.py]
[start of lemur/common/managers.py]
1 """
2 .. module: lemur.common.managers
3 :platform: Unix
4 :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more
5 :license: Apache, see LICENSE for more details.
6
7 .. moduleauthor:: Kevin Glisson <[email protected]>
8 """
9 from flask import current_app
10
11
12 # inspired by https://github.com/getsentry/sentry
13 class InstanceManager(object):
14 def __init__(self, class_list=None, instances=True):
15 if class_list is None:
16 class_list = []
17 self.instances = instances
18 self.update(class_list)
19
20 def get_class_list(self):
21 return self.class_list
22
23 def add(self, class_path):
24 self.cache = None
25 self.class_list.append(class_path)
26
27 def remove(self, class_path):
28 self.cache = None
29 self.class_list.remove(class_path)
30
31 def update(self, class_list):
32 """
33 Updates the class list and wipes the cache.
34 """
35 self.cache = None
36 self.class_list = class_list
37
38 def all(self):
39 """
40 Returns a list of cached instances.
41 """
42 class_list = list(self.get_class_list())
43 if not class_list:
44 self.cache = []
45 return []
46
47 if self.cache is not None:
48 return self.cache
49
50 results = []
51 for cls_path in class_list:
52 module_name, class_name = cls_path.rsplit('.', 1)
53 try:
54 module = __import__(module_name, {}, {}, class_name)
55 cls = getattr(module, class_name)
56 if self.instances:
57 results.append(cls())
58 else:
59 results.append(cls)
60 except Exception:
61 current_app.logger.exception('Unable to import %s', cls_path)
62 continue
63 self.cache = results
64
65 return results
66
[end of lemur/common/managers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lemur/common/managers.py b/lemur/common/managers.py
--- a/lemur/common/managers.py
+++ b/lemur/common/managers.py
@@ -22,7 +22,8 @@
def add(self, class_path):
self.cache = None
- self.class_list.append(class_path)
+ if class_path not in self.class_list:
+ self.class_list.append(class_path)
def remove(self, class_path):
self.cache = None
diff --git a/lemur/plugins/views.py b/lemur/plugins/views.py
--- a/lemur/plugins/views.py
+++ b/lemur/plugins/views.py
@@ -86,7 +86,7 @@
if args['type']:
return list(plugins.all(plugin_type=args['type']))
- return plugins.all()
+ return list(plugins.all())
class Plugins(AuthenticatedResource):
|
{"golden_diff": "diff --git a/lemur/common/managers.py b/lemur/common/managers.py\n--- a/lemur/common/managers.py\n+++ b/lemur/common/managers.py\n@@ -22,7 +22,8 @@\n \n def add(self, class_path):\n self.cache = None\n- self.class_list.append(class_path)\n+ if class_path not in self.class_list:\n+ self.class_list.append(class_path)\n \n def remove(self, class_path):\n self.cache = None\ndiff --git a/lemur/plugins/views.py b/lemur/plugins/views.py\n--- a/lemur/plugins/views.py\n+++ b/lemur/plugins/views.py\n@@ -86,7 +86,7 @@\n if args['type']:\n return list(plugins.all(plugin_type=args['type']))\n \n- return plugins.all()\n+ return list(plugins.all())\n \n \n class Plugins(AuthenticatedResource):\n", "issue": "Duplicate Plugins Listed\nPlugins are duplicated in the authority dropdown.\n\n", "before_files": [{"content": "\"\"\"\n.. module: lemur.plugins.views\n :platform: Unix\n :synopsis: This module contains all of the accounts view code.\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nfrom flask import Blueprint\nfrom flask.ext.restful import Api, reqparse, fields\nfrom lemur.auth.service import AuthenticatedResource\n\nfrom lemur.common.utils import marshal_items\n\nfrom lemur.plugins.base import plugins\n\nmod = Blueprint('plugins', __name__)\napi = Api(mod)\n\n\nFIELDS = {\n 'title': fields.String,\n 'pluginOptions': fields.Raw(attribute='options'),\n 'description': fields.String,\n 'version': fields.String,\n 'author': fields.String,\n 'authorUrl': fields.String,\n 'type': fields.String,\n 'slug': fields.String,\n}\n\n\nclass PluginsList(AuthenticatedResource):\n \"\"\" Defines the 'plugins' endpoint \"\"\"\n def __init__(self):\n self.reqparse = reqparse.RequestParser()\n super(PluginsList, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self):\n \"\"\"\n .. http:get:: /plugins\n\n The current plugin list\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /plugins HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"items\": [\n {\n \"id\": 2,\n \"accountNumber\": 222222222,\n \"label\": \"account2\",\n \"description\": \"this is a thing\"\n },\n {\n \"id\": 1,\n \"accountNumber\": 11111111111,\n \"label\": \"account1\",\n \"description\": \"this is a thing\"\n },\n ]\n \"total\": 2\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n \"\"\"\n self.reqparse.add_argument('type', type=str, location='args')\n args = self.reqparse.parse_args()\n\n if args['type']:\n return list(plugins.all(plugin_type=args['type']))\n\n return plugins.all()\n\n\nclass Plugins(AuthenticatedResource):\n \"\"\" Defines the the 'plugins' endpoint \"\"\"\n def __init__(self):\n super(Plugins, self).__init__()\n\n @marshal_items(FIELDS)\n def get(self, name):\n \"\"\"\n .. http:get:: /plugins/<name>\n\n The current plugin list\n\n **Example request**:\n\n .. sourcecode:: http\n\n GET /plugins HTTP/1.1\n Host: example.com\n Accept: application/json, text/javascript\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Vary: Accept\n Content-Type: text/javascript\n\n {\n \"accountNumber\": 222222222,\n \"label\": \"account2\",\n \"description\": \"this is a thing\"\n }\n\n :reqheader Authorization: OAuth token to authenticate\n :statuscode 200: no error\n \"\"\"\n return plugins.get(name)\n\n\napi.add_resource(PluginsList, '/plugins', endpoint='plugins')\napi.add_resource(Plugins, '/plugins/<name>', endpoint='pluginName')\n", "path": "lemur/plugins/views.py"}, {"content": "\"\"\"\n.. module: lemur.common.managers\n :platform: Unix\n :copyright: (c) 2015 by Netflix Inc., see AUTHORS for more\n :license: Apache, see LICENSE for more details.\n\n.. moduleauthor:: Kevin Glisson <[email protected]>\n\"\"\"\nfrom flask import current_app\n\n\n# inspired by https://github.com/getsentry/sentry\nclass InstanceManager(object):\n def __init__(self, class_list=None, instances=True):\n if class_list is None:\n class_list = []\n self.instances = instances\n self.update(class_list)\n\n def get_class_list(self):\n return self.class_list\n\n def add(self, class_path):\n self.cache = None\n self.class_list.append(class_path)\n\n def remove(self, class_path):\n self.cache = None\n self.class_list.remove(class_path)\n\n def update(self, class_list):\n \"\"\"\n Updates the class list and wipes the cache.\n \"\"\"\n self.cache = None\n self.class_list = class_list\n\n def all(self):\n \"\"\"\n Returns a list of cached instances.\n \"\"\"\n class_list = list(self.get_class_list())\n if not class_list:\n self.cache = []\n return []\n\n if self.cache is not None:\n return self.cache\n\n results = []\n for cls_path in class_list:\n module_name, class_name = cls_path.rsplit('.', 1)\n try:\n module = __import__(module_name, {}, {}, class_name)\n cls = getattr(module, class_name)\n if self.instances:\n results.append(cls())\n else:\n results.append(cls)\n except Exception:\n current_app.logger.exception('Unable to import %s', cls_path)\n continue\n self.cache = results\n\n return results\n", "path": "lemur/common/managers.py"}]}
| 2,179 | 199 |
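The duplicate dropdown entries trace back to `InstanceManager.add()` appending the same dotted path on every registration; the guard in the diff makes registration idempotent, and wrapping `plugins.all()` in `list()` keeps the two branches of the view returning the same type. A small reproduction of the manager half; the plugin path below is made up purely for illustration:

from lemur.common.managers import InstanceManager

mgr = InstanceManager(instances=False)
mgr.add('example_plugins.issuers.ExampleIssuerPlugin')   # hypothetical path, registered once
mgr.add('example_plugins.issuers.ExampleIssuerPlugin')   # ...and accidentally registered again

mgr.get_class_list()
# before the patch: the path appears twice, so the authority dropdown lists the plugin twice
# after the patch:  the second add() is a no-op and the list holds a single entry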
gh_patches_debug_12142
|
rasdani/github-patches
|
git_diff
|
safe-global__safe-config-service-90
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use different namespace and endpoint name for `/safe-apps`
The endpoint `/api/v1/safe-apps` is currently under the `v1` namespace and `safe-apps` endpoint name.
To align it better with the future endpoints the following should be changed:
- the namespace changes from `v1` to `safe-apps`
- the endpoint name changes from `safe-apps` to `list`
This results in a reverse url resolution with `safe-apps:list` instead of `v1:safe-apps`
</issue>
<code>
[start of src/config/urls.py]
1 from django.contrib import admin
2 from django.http import HttpResponse
3 from django.urls import include, path, re_path
4 from drf_yasg.views import get_schema_view
5 from rest_framework import permissions
6
7 schema_view = get_schema_view(
8 validators=["flex", "ssv"],
9 public=True,
10 permission_classes=(permissions.AllowAny,),
11 )
12
13 urlpatterns = [
14 path("api/v1/", include("safe_apps.urls", namespace="v1")),
15 path("api/v1/", include("chains.urls", namespace="chains")),
16 path("admin/", admin.site.urls),
17 path("check/", lambda request: HttpResponse("Ok"), name="check"),
18 re_path(
19 r"^swagger(?P<format>\.json|\.yaml)$",
20 schema_view.without_ui(cache_timeout=0),
21 name="schema-json",
22 ),
23 re_path(
24 r"^$",
25 schema_view.with_ui("swagger", cache_timeout=0),
26 name="schema-swagger-ui",
27 ),
28 ]
29
[end of src/config/urls.py]
[start of src/safe_apps/urls.py]
1 from django.urls import path
2
3 from .views import SafeAppsListView
4
5 app_name = "apps"
6
7 urlpatterns = [
8 path("safe-apps/", SafeAppsListView.as_view(), name="safe-apps"),
9 ]
10
[end of src/safe_apps/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/config/urls.py b/src/config/urls.py
--- a/src/config/urls.py
+++ b/src/config/urls.py
@@ -11,7 +11,7 @@
)
urlpatterns = [
- path("api/v1/", include("safe_apps.urls", namespace="v1")),
+ path("api/v1/", include("safe_apps.urls", namespace="safe-apps")),
path("api/v1/", include("chains.urls", namespace="chains")),
path("admin/", admin.site.urls),
path("check/", lambda request: HttpResponse("Ok"), name="check"),
diff --git a/src/safe_apps/urls.py b/src/safe_apps/urls.py
--- a/src/safe_apps/urls.py
+++ b/src/safe_apps/urls.py
@@ -5,5 +5,5 @@
app_name = "apps"
urlpatterns = [
- path("safe-apps/", SafeAppsListView.as_view(), name="safe-apps"),
+ path("safe-apps/", SafeAppsListView.as_view(), name="list"),
]
|
{"golden_diff": "diff --git a/src/config/urls.py b/src/config/urls.py\n--- a/src/config/urls.py\n+++ b/src/config/urls.py\n@@ -11,7 +11,7 @@\n )\n \n urlpatterns = [\n- path(\"api/v1/\", include(\"safe_apps.urls\", namespace=\"v1\")),\n+ path(\"api/v1/\", include(\"safe_apps.urls\", namespace=\"safe-apps\")),\n path(\"api/v1/\", include(\"chains.urls\", namespace=\"chains\")),\n path(\"admin/\", admin.site.urls),\n path(\"check/\", lambda request: HttpResponse(\"Ok\"), name=\"check\"),\ndiff --git a/src/safe_apps/urls.py b/src/safe_apps/urls.py\n--- a/src/safe_apps/urls.py\n+++ b/src/safe_apps/urls.py\n@@ -5,5 +5,5 @@\n app_name = \"apps\"\n \n urlpatterns = [\n- path(\"safe-apps/\", SafeAppsListView.as_view(), name=\"safe-apps\"),\n+ path(\"safe-apps/\", SafeAppsListView.as_view(), name=\"list\"),\n ]\n", "issue": "Use different namespace and endpoint name for `/safe-apps`\nThe endpoint `/api/v1/safe-apps` is currently under the `v1` namespace and `safe-apps` endpoint name.\r\n\r\nTo align it better with the future endpoints the following should be changed:\r\n\r\n- the namespace changes from `v1` to `safe-apps`\r\n- the endpoint name changes from `safe-apps` to `list`\r\n\r\nThis results in a reverse url resolution with `safe-apps:list` instead of `v1:safe-apps`\n", "before_files": [{"content": "from django.contrib import admin\nfrom django.http import HttpResponse\nfrom django.urls import include, path, re_path\nfrom drf_yasg.views import get_schema_view\nfrom rest_framework import permissions\n\nschema_view = get_schema_view(\n validators=[\"flex\", \"ssv\"],\n public=True,\n permission_classes=(permissions.AllowAny,),\n)\n\nurlpatterns = [\n path(\"api/v1/\", include(\"safe_apps.urls\", namespace=\"v1\")),\n path(\"api/v1/\", include(\"chains.urls\", namespace=\"chains\")),\n path(\"admin/\", admin.site.urls),\n path(\"check/\", lambda request: HttpResponse(\"Ok\"), name=\"check\"),\n re_path(\n r\"^swagger(?P<format>\\.json|\\.yaml)$\",\n schema_view.without_ui(cache_timeout=0),\n name=\"schema-json\",\n ),\n re_path(\n r\"^$\",\n schema_view.with_ui(\"swagger\", cache_timeout=0),\n name=\"schema-swagger-ui\",\n ),\n]\n", "path": "src/config/urls.py"}, {"content": "from django.urls import path\n\nfrom .views import SafeAppsListView\n\napp_name = \"apps\"\n\nurlpatterns = [\n path(\"safe-apps/\", SafeAppsListView.as_view(), name=\"safe-apps\"),\n]\n", "path": "src/safe_apps/urls.py"}]}
| 978 | 228 |
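Only reverse URL resolution changes in this record; the public path stays `/api/v1/safe-apps/`. From a Django shell against the patched project, a sketch based on the issue description:

from django.urls import reverse

reverse('v1:safe-apps')      # before the patch
reverse('safe-apps:list')    # after the patch; both resolve to '/api/v1/safe-apps/'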
gh_patches_debug_61113
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1022
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Good old 'utf-8' codec error on Windows
Howdy,
I'm unable to run `tox -re linting` on pytest anymore. I'm getting this error:
```
λ tox -re linting
linting recreate: c:\pytest\.tox\linting
linting installdeps: pre-commit>=1.11.0
linting installed: aspy.yaml==1.2.0,cfgv==1.6.0,identify==1.4.2,importlib-metadata==0.9,nodeenv==1.3.3,pre-commit==1.16.0,pytest==3.6.0,PyYAML==5.1,six==1.12.0,toml==0.10.0,virtualenv==16.5.0,zipp==0.4.0
linting run-test-pre: PYTHONHASHSEED='335'
linting run-test: commands[0] | pre-commit run --all-files --show-diff-on-failure
An unexpected error has occurred: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 282: invalid continuation byte
Check the log at C:\Users\Bruno/.cache\pre-commit\pre-commit.log
ERROR: InvocationError for command 'c:\pytest\.tox\linting\Scripts\pre-commit.EXE' run --all-files --show-diff-on-failure (exited with code 1)
```
Here's the contents of the log file:
```
An unexpected error has occurred: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 282: invalid continuation byte
Traceback (most recent call last):
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\error_handler.py", line 46, in error_handler
yield
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\main.py", line 294, in main
return run(args.config, store, args)
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\commands\run.py", line 285, in run
install_hook_envs(hooks, store)
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\repository.py", line 210, in install_hook_envs
if not _need_installed():
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\repository.py", line 205, in _need_installed
if hook.install_key not in seen and not hook.installed():
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\repository.py", line 75, in installed
lang.healthy(self.prefix, self.language_version)
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\languages\python.py", line 139, in healthy
retcode, _, _ = cmd_output(
File "c:\pytest\.tox\linting\lib\site-packages\pre_commit\util.py", line 149, in cmd_output
stderr = stderr.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 282: invalid continuation byte
```
I've seen #835, #330 and #245, so I've tried to cleanup the pre-commit cache and updating `pip` and `virtualenv`, both on my system and in the virtualenv I have for pytest:
```
(.env37) λ pip install -U virtualenv
Requirement already up-to-date: virtualenv in .\.env37\lib\site-packages (16.5.0)
(.env37) λ py -3.7 -m pip install -U virtualenv
Requirement already up-to-date: virtualenv in c:\users\bruno\appdata\local\programs\python\python37\lib\site-packages (16.5.0)
(.env37) λ .tox\linting\Scripts\pip install virtualenv -U
Requirement already up-to-date: virtualenv in .\.tox\linting\lib\site-packages (16.5.0)
```
Same for `pre-commit`:
```
(.env37) λ .tox\linting\Scripts\pip list
Package Version
------------------ -------
aspy.yaml 1.2.0
cfgv 1.6.0
identify 1.4.2
importlib-metadata 0.9
nodeenv 1.3.3
pip 19.1.1
pre-commit 1.16.0
PyYAML 5.1
setuptools 41.0.1
six 1.12.0
toml 0.10.0
virtualenv 16.5.0
wheel 0.33.1
zipp 0.4.0
(.env37) λ pip list
Package Version Location
------------------ ---------------------- -------------
aspy.yaml 1.2.0
atomicwrites 1.3.0
attrs 19.1.0
cfgv 1.6.0
colorama 0.4.1
filelock 3.0.10
identify 1.4.2
importlib-metadata 0.9
more-itertools 7.0.0
nodeenv 1.3.3
pip 19.1.1
pluggy 0.9.0
pre-commit 1.16.0
py 1.8.0
pytest 4.4.2.dev43+g8605ed2a1 c:\pytest\src
PyYAML 5.1
setuptools 39.0.1
six 1.12.0
toml 0.10.0
tox 3.9.0
virtualenv 16.5.0
zipp 0.4.0
```
Any hints @asottile? 🤔
</issue>
<code>
[start of pre_commit/languages/python.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import os
5 import sys
6
7 import pre_commit.constants as C
8 from pre_commit.envcontext import envcontext
9 from pre_commit.envcontext import UNSET
10 from pre_commit.envcontext import Var
11 from pre_commit.languages import helpers
12 from pre_commit.parse_shebang import find_executable
13 from pre_commit.util import CalledProcessError
14 from pre_commit.util import clean_path_on_failure
15 from pre_commit.util import cmd_output
16
17
18 ENVIRONMENT_DIR = 'py_env'
19
20
21 def bin_dir(venv):
22 """On windows there's a different directory for the virtualenv"""
23 bin_part = 'Scripts' if os.name == 'nt' else 'bin'
24 return os.path.join(venv, bin_part)
25
26
27 def get_env_patch(venv):
28 return (
29 ('PYTHONHOME', UNSET),
30 ('VIRTUAL_ENV', venv),
31 ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),
32 )
33
34
35 def _find_by_py_launcher(version): # pragma: no cover (windows only)
36 if version.startswith('python'):
37 try:
38 return cmd_output(
39 'py', '-{}'.format(version[len('python'):]),
40 '-c', 'import sys; print(sys.executable)',
41 )[1].strip()
42 except CalledProcessError:
43 pass
44
45
46 def _get_default_version(): # pragma: no cover (platform dependent)
47 def _norm(path):
48 _, exe = os.path.split(path.lower())
49 exe, _, _ = exe.partition('.exe')
50 if find_executable(exe) and exe not in {'python', 'pythonw'}:
51 return exe
52
53 # First attempt from `sys.executable` (or the realpath)
54 # On linux, I see these common sys.executables:
55 #
56 # system `python`: /usr/bin/python -> python2.7
57 # system `python2`: /usr/bin/python2 -> python2.7
58 # virtualenv v: v/bin/python (will not return from this loop)
59 # virtualenv v -ppython2: v/bin/python -> python2
60 # virtualenv v -ppython2.7: v/bin/python -> python2.7
61 # virtualenv v -ppypy: v/bin/python -> v/bin/pypy
62 for path in {sys.executable, os.path.realpath(sys.executable)}:
63 exe = _norm(path)
64 if exe:
65 return exe
66
67 # Next try the `pythonX.X` executable
68 exe = 'python{}.{}'.format(*sys.version_info)
69 if find_executable(exe):
70 return exe
71
72 if _find_by_py_launcher(exe):
73 return exe
74
75 # Give a best-effort try for windows
76 if os.path.exists(r'C:\{}\python.exe'.format(exe.replace('.', ''))):
77 return exe
78
79 # We tried!
80 return C.DEFAULT
81
82
83 def get_default_version():
84 # TODO: when dropping python2, use `functools.lru_cache(maxsize=1)`
85 try:
86 return get_default_version.cached_version
87 except AttributeError:
88 get_default_version.cached_version = _get_default_version()
89 return get_default_version()
90
91
92 def _sys_executable_matches(version):
93 if version == 'python':
94 return True
95 elif not version.startswith('python'):
96 return False
97
98 try:
99 info = tuple(int(p) for p in version[len('python'):].split('.'))
100 except ValueError:
101 return False
102
103 return sys.version_info[:len(info)] == info
104
105
106 def norm_version(version):
107 if os.name == 'nt': # pragma: no cover (windows)
108 # first see if our current executable is appropriate
109 if _sys_executable_matches(version):
110 return sys.executable
111
112 version_exec = _find_by_py_launcher(version)
113 if version_exec:
114 return version_exec
115
116 # Try looking up by name
117 version_exec = find_executable(version)
118 if version_exec and version_exec != version:
119 return version_exec
120
121 # If it is in the form pythonx.x search in the default
122 # place on windows
123 if version.startswith('python'):
124 return r'C:\{}\python.exe'.format(version.replace('.', ''))
125
126 # Otherwise assume it is a path
127 return os.path.expanduser(version)
128
129
130 def py_interface(_dir, _make_venv):
131 @contextlib.contextmanager
132 def in_env(prefix, language_version):
133 envdir = prefix.path(helpers.environment_dir(_dir, language_version))
134 with envcontext(get_env_patch(envdir)):
135 yield
136
137 def healthy(prefix, language_version):
138 with in_env(prefix, language_version):
139 retcode, _, _ = cmd_output(
140 'python', '-c',
141 'import ctypes, datetime, io, os, ssl, weakref',
142 retcode=None,
143 )
144 return retcode == 0
145
146 def run_hook(hook, file_args):
147 with in_env(hook.prefix, hook.language_version):
148 return helpers.run_xargs(hook, helpers.to_cmd(hook), file_args)
149
150 def install_environment(prefix, version, additional_dependencies):
151 additional_dependencies = tuple(additional_dependencies)
152 directory = helpers.environment_dir(_dir, version)
153
154 env_dir = prefix.path(directory)
155 with clean_path_on_failure(env_dir):
156 if version != C.DEFAULT:
157 python = norm_version(version)
158 else:
159 python = os.path.realpath(sys.executable)
160 _make_venv(env_dir, python)
161 with in_env(prefix, version):
162 helpers.run_setup_cmd(
163 prefix, ('pip', 'install', '.') + additional_dependencies,
164 )
165
166 return in_env, healthy, run_hook, install_environment
167
168
169 def make_venv(envdir, python):
170 env = dict(os.environ, VIRTUALENV_NO_DOWNLOAD='1')
171 cmd = (sys.executable, '-mvirtualenv', envdir, '-p', python)
172 cmd_output(*cmd, env=env, cwd='/')
173
174
175 _interface = py_interface(ENVIRONMENT_DIR, make_venv)
176 in_env, healthy, run_hook, install_environment = _interface
177
[end of pre_commit/languages/python.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py
--- a/pre_commit/languages/python.py
+++ b/pre_commit/languages/python.py
@@ -140,6 +140,7 @@
'python', '-c',
'import ctypes, datetime, io, os, ssl, weakref',
retcode=None,
+ encoding=None,
)
return retcode == 0
|
{"golden_diff": "diff --git a/pre_commit/languages/python.py b/pre_commit/languages/python.py\n--- a/pre_commit/languages/python.py\n+++ b/pre_commit/languages/python.py\n@@ -140,6 +140,7 @@\n 'python', '-c',\n 'import ctypes, datetime, io, os, ssl, weakref',\n retcode=None,\n+ encoding=None,\n )\n return retcode == 0\n", "issue": "Good old 'utf-8' codec error on Windows\nHowdy,\r\n\r\nI'm unable to run `tox -re linting` on pytest anymore. I'm getting this error:\r\n\r\n```\r\n\u03bb tox -re linting\r\nlinting recreate: c:\\pytest\\.tox\\linting\r\nlinting installdeps: pre-commit>=1.11.0\r\nlinting installed: aspy.yaml==1.2.0,cfgv==1.6.0,identify==1.4.2,importlib-metadata==0.9,nodeenv==1.3.3,pre-commit==1.16.0,pytest==3.6.0,PyYAML==5.1,six==1.12.0,toml==0.10.0,virtualenv==16.5.0,zipp==0.4.0\r\nlinting run-test-pre: PYTHONHASHSEED='335'\r\nlinting run-test: commands[0] | pre-commit run --all-files --show-diff-on-failure\r\nAn unexpected error has occurred: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 282: invalid continuation byte\r\nCheck the log at C:\\Users\\Bruno/.cache\\pre-commit\\pre-commit.log\r\nERROR: InvocationError for command 'c:\\pytest\\.tox\\linting\\Scripts\\pre-commit.EXE' run --all-files --show-diff-on-failure (exited with code 1)\r\n```\r\n\r\nHere's the contents of the log file:\r\n\r\n```\r\nAn unexpected error has occurred: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 282: invalid continuation byte\r\nTraceback (most recent call last):\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\error_handler.py\", line 46, in error_handler\r\n yield\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\main.py\", line 294, in main\r\n return run(args.config, store, args)\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\commands\\run.py\", line 285, in run\r\n install_hook_envs(hooks, store)\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\repository.py\", line 210, in install_hook_envs\r\n if not _need_installed():\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\repository.py\", line 205, in _need_installed\r\n if hook.install_key not in seen and not hook.installed():\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\repository.py\", line 75, in installed\r\n lang.healthy(self.prefix, self.language_version)\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\languages\\python.py\", line 139, in healthy\r\n retcode, _, _ = cmd_output(\r\n File \"c:\\pytest\\.tox\\linting\\lib\\site-packages\\pre_commit\\util.py\", line 149, in cmd_output\r\n stderr = stderr.decode(encoding)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xe3 in position 282: invalid continuation byte\r\n```\r\n\r\nI've seen #835, #330 and #245, so I've tried to cleanup the pre-commit cache and updating `pip` and `virtualenv`, both on my system and in the virtualenv I have for pytest:\r\n\r\n```\r\n(.env37) \u03bb pip install -U virtualenv\r\nRequirement already up-to-date: virtualenv in .\\.env37\\lib\\site-packages (16.5.0)\r\n\r\n(.env37) \u03bb py -3.7 -m pip install -U virtualenv\r\nRequirement already up-to-date: virtualenv in c:\\users\\bruno\\appdata\\local\\programs\\python\\python37\\lib\\site-packages (16.5.0)\r\n\r\n(.env37) \u03bb .tox\\linting\\Scripts\\pip install virtualenv -U\r\nRequirement already up-to-date: virtualenv in .\\.tox\\linting\\lib\\site-packages (16.5.0)\r\n```\r\n\r\nSame for 
`pre-commit`:\r\n\r\n```\r\n(.env37) \u03bb .tox\\linting\\Scripts\\pip list\r\nPackage Version\r\n------------------ -------\r\naspy.yaml 1.2.0\r\ncfgv 1.6.0\r\nidentify 1.4.2\r\nimportlib-metadata 0.9\r\nnodeenv 1.3.3\r\npip 19.1.1\r\npre-commit 1.16.0\r\nPyYAML 5.1\r\nsetuptools 41.0.1\r\nsix 1.12.0\r\ntoml 0.10.0\r\nvirtualenv 16.5.0\r\nwheel 0.33.1\r\nzipp 0.4.0\r\n\r\n(.env37) \u03bb pip list\r\nPackage Version Location\r\n------------------ ---------------------- -------------\r\naspy.yaml 1.2.0\r\natomicwrites 1.3.0\r\nattrs 19.1.0\r\ncfgv 1.6.0\r\ncolorama 0.4.1\r\nfilelock 3.0.10\r\nidentify 1.4.2\r\nimportlib-metadata 0.9\r\nmore-itertools 7.0.0\r\nnodeenv 1.3.3\r\npip 19.1.1\r\npluggy 0.9.0\r\npre-commit 1.16.0\r\npy 1.8.0\r\npytest 4.4.2.dev43+g8605ed2a1 c:\\pytest\\src\r\nPyYAML 5.1\r\nsetuptools 39.0.1\r\nsix 1.12.0\r\ntoml 0.10.0\r\ntox 3.9.0\r\nvirtualenv 16.5.0\r\nzipp 0.4.0\r\n```\r\n\r\nAny hints @asottile? \ud83e\udd14 \n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport os\nimport sys\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.parse_shebang import find_executable\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import clean_path_on_failure\nfrom pre_commit.util import cmd_output\n\n\nENVIRONMENT_DIR = 'py_env'\n\n\ndef bin_dir(venv):\n \"\"\"On windows there's a different directory for the virtualenv\"\"\"\n bin_part = 'Scripts' if os.name == 'nt' else 'bin'\n return os.path.join(venv, bin_part)\n\n\ndef get_env_patch(venv):\n return (\n ('PYTHONHOME', UNSET),\n ('VIRTUAL_ENV', venv),\n ('PATH', (bin_dir(venv), os.pathsep, Var('PATH'))),\n )\n\n\ndef _find_by_py_launcher(version): # pragma: no cover (windows only)\n if version.startswith('python'):\n try:\n return cmd_output(\n 'py', '-{}'.format(version[len('python'):]),\n '-c', 'import sys; print(sys.executable)',\n )[1].strip()\n except CalledProcessError:\n pass\n\n\ndef _get_default_version(): # pragma: no cover (platform dependent)\n def _norm(path):\n _, exe = os.path.split(path.lower())\n exe, _, _ = exe.partition('.exe')\n if find_executable(exe) and exe not in {'python', 'pythonw'}:\n return exe\n\n # First attempt from `sys.executable` (or the realpath)\n # On linux, I see these common sys.executables:\n #\n # system `python`: /usr/bin/python -> python2.7\n # system `python2`: /usr/bin/python2 -> python2.7\n # virtualenv v: v/bin/python (will not return from this loop)\n # virtualenv v -ppython2: v/bin/python -> python2\n # virtualenv v -ppython2.7: v/bin/python -> python2.7\n # virtualenv v -ppypy: v/bin/python -> v/bin/pypy\n for path in {sys.executable, os.path.realpath(sys.executable)}:\n exe = _norm(path)\n if exe:\n return exe\n\n # Next try the `pythonX.X` executable\n exe = 'python{}.{}'.format(*sys.version_info)\n if find_executable(exe):\n return exe\n\n if _find_by_py_launcher(exe):\n return exe\n\n # Give a best-effort try for windows\n if os.path.exists(r'C:\\{}\\python.exe'.format(exe.replace('.', ''))):\n return exe\n\n # We tried!\n return C.DEFAULT\n\n\ndef get_default_version():\n # TODO: when dropping python2, use `functools.lru_cache(maxsize=1)`\n try:\n return get_default_version.cached_version\n except AttributeError:\n get_default_version.cached_version = _get_default_version()\n return get_default_version()\n\n\ndef 
_sys_executable_matches(version):\n if version == 'python':\n return True\n elif not version.startswith('python'):\n return False\n\n try:\n info = tuple(int(p) for p in version[len('python'):].split('.'))\n except ValueError:\n return False\n\n return sys.version_info[:len(info)] == info\n\n\ndef norm_version(version):\n if os.name == 'nt': # pragma: no cover (windows)\n # first see if our current executable is appropriate\n if _sys_executable_matches(version):\n return sys.executable\n\n version_exec = _find_by_py_launcher(version)\n if version_exec:\n return version_exec\n\n # Try looking up by name\n version_exec = find_executable(version)\n if version_exec and version_exec != version:\n return version_exec\n\n # If it is in the form pythonx.x search in the default\n # place on windows\n if version.startswith('python'):\n return r'C:\\{}\\python.exe'.format(version.replace('.', ''))\n\n # Otherwise assume it is a path\n return os.path.expanduser(version)\n\n\ndef py_interface(_dir, _make_venv):\n @contextlib.contextmanager\n def in_env(prefix, language_version):\n envdir = prefix.path(helpers.environment_dir(_dir, language_version))\n with envcontext(get_env_patch(envdir)):\n yield\n\n def healthy(prefix, language_version):\n with in_env(prefix, language_version):\n retcode, _, _ = cmd_output(\n 'python', '-c',\n 'import ctypes, datetime, io, os, ssl, weakref',\n retcode=None,\n )\n return retcode == 0\n\n def run_hook(hook, file_args):\n with in_env(hook.prefix, hook.language_version):\n return helpers.run_xargs(hook, helpers.to_cmd(hook), file_args)\n\n def install_environment(prefix, version, additional_dependencies):\n additional_dependencies = tuple(additional_dependencies)\n directory = helpers.environment_dir(_dir, version)\n\n env_dir = prefix.path(directory)\n with clean_path_on_failure(env_dir):\n if version != C.DEFAULT:\n python = norm_version(version)\n else:\n python = os.path.realpath(sys.executable)\n _make_venv(env_dir, python)\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix, ('pip', 'install', '.') + additional_dependencies,\n )\n\n return in_env, healthy, run_hook, install_environment\n\n\ndef make_venv(envdir, python):\n env = dict(os.environ, VIRTUALENV_NO_DOWNLOAD='1')\n cmd = (sys.executable, '-mvirtualenv', envdir, '-p', python)\n cmd_output(*cmd, env=env, cwd='/')\n\n\n_interface = py_interface(ENVIRONMENT_DIR, make_venv)\nin_env, healthy, run_hook, install_environment = _interface\n", "path": "pre_commit/languages/python.py"}]}
| 3,702 | 93 |
gh_patches_debug_41061
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-3019
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] BTV plugin needs updating
## Bug Report
- [x] This is a bug report and I have read the contribution guidelines.
### Description
The location of the BTV livestream has moved to https://btvplus.bg/live/
**Edit**: Livestreaming no longer requires a user to login, so that can be removed from the plugin info page.
### Expected / Actual behavior
Streamlink should be able to handle the link.
### Reproduction steps / Explicit stream URLs to test
1. streamlink https://btvplus.bg/live/ best
2. error: No plugin can handle URL: https://btvplus.bg/live/
</issue>
<code>
[start of src/streamlink/plugins/btv.py]
1 from __future__ import print_function
2 import re
3
4 from streamlink import PluginError
5 from streamlink.plugin import Plugin
6 from streamlink.plugin.api import validate
7 from streamlink.stream import HLSStream
8 from streamlink.utils import parse_json
9 from streamlink.plugin import PluginArgument, PluginArguments
10
11
12 class BTV(Plugin):
13 arguments = PluginArguments(
14 PluginArgument(
15 "username",
16 metavar="USERNAME",
17 requires=["password"],
18 help="""
19 A BTV username required to access any stream.
20 """
21 ),
22 PluginArgument(
23 "password",
24 sensitive=True,
25 metavar="PASSWORD",
26 help="""
27 A BTV account password to use with --btv-username.
28 """
29 )
30 )
31 url_re = re.compile(r"https?://(?:www\.)?btv\.bg/live/?")
32
33 api_url = "http://www.btv.bg/lbin/global/player_config.php"
34 check_login_url = "http://www.btv.bg/lbin/userRegistration/check_user_login.php"
35 login_url = "https://www.btv.bg/bin/registration2/login.php?action=login&settings=0"
36
37 media_id_re = re.compile(r"media_id=(\d+)")
38 src_re = re.compile(r"src: \"(http.*?)\"")
39 api_schema = validate.Schema(
40 validate.all(
41 {"status": "ok", "config": validate.text},
42 validate.get("config"),
43 validate.all(
44 validate.transform(src_re.search),
45 validate.any(
46 None,
47 validate.get(1),
48 validate.url()
49 )
50 )
51 )
52 )
53
54 @classmethod
55 def can_handle_url(cls, url):
56 return cls.url_re.match(url) is not None
57
58 def login(self, username, password):
59 res = self.session.http.post(self.login_url, data={"username": username, "password": password})
60 if "success_logged_in" in res.text:
61 return True
62 else:
63 return False
64
65 def get_hls_url(self, media_id):
66 res = self.session.http.get(self.api_url, params=dict(media_id=media_id))
67 try:
68 return parse_json(res.text, schema=self.api_schema)
69 except PluginError:
70 return
71
72 def _get_streams(self):
73 if not self.options.get("username") or not self.options.get("password"):
74 self.logger.error("BTV requires registration, set the username and password"
75 " with --btv-username and --btv-password")
76 elif self.login(self.options.get("username"), self.options.get("password")):
77 res = self.session.http.get(self.url)
78 media_match = self.media_id_re.search(res.text)
79 media_id = media_match and media_match.group(1)
80 if media_id:
81 self.logger.debug("Found media id: {0}", media_id)
82 stream_url = self.get_hls_url(media_id)
83 if stream_url:
84 return HLSStream.parse_variant_playlist(self.session, stream_url)
85 else:
86 self.logger.error("Login failed, a valid username and password is required")
87
88
89 __plugin__ = BTV
90
[end of src/streamlink/plugins/btv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/btv.py b/src/streamlink/plugins/btv.py
--- a/src/streamlink/plugins/btv.py
+++ b/src/streamlink/plugins/btv.py
@@ -1,38 +1,30 @@
-from __future__ import print_function
+import argparse
+import logging
import re
-from streamlink import PluginError
-from streamlink.plugin import Plugin
+from streamlink.plugin import Plugin, PluginArguments, PluginArgument
from streamlink.plugin.api import validate
from streamlink.stream import HLSStream
from streamlink.utils import parse_json
-from streamlink.plugin import PluginArgument, PluginArguments
+
+log = logging.getLogger(__name__)
class BTV(Plugin):
arguments = PluginArguments(
PluginArgument(
"username",
- metavar="USERNAME",
- requires=["password"],
- help="""
- A BTV username required to access any stream.
- """
+ help=argparse.SUPPRESS
),
PluginArgument(
"password",
sensitive=True,
- metavar="PASSWORD",
- help="""
- A BTV account password to use with --btv-username.
- """
+ help=argparse.SUPPRESS
)
)
- url_re = re.compile(r"https?://(?:www\.)?btv\.bg/live/?")
- api_url = "http://www.btv.bg/lbin/global/player_config.php"
- check_login_url = "http://www.btv.bg/lbin/userRegistration/check_user_login.php"
- login_url = "https://www.btv.bg/bin/registration2/login.php?action=login&settings=0"
+ url_re = re.compile(r"https?://(?:www\.)?btvplus\.bg/live/?")
+ api_url = "https://btvplus.bg/lbin/v3/btvplus/player_config.php"
media_id_re = re.compile(r"media_id=(\d+)")
src_re = re.compile(r"src: \"(http.*?)\"")
@@ -55,35 +47,19 @@
def can_handle_url(cls, url):
return cls.url_re.match(url) is not None
- def login(self, username, password):
- res = self.session.http.post(self.login_url, data={"username": username, "password": password})
- if "success_logged_in" in res.text:
- return True
- else:
- return False
-
def get_hls_url(self, media_id):
res = self.session.http.get(self.api_url, params=dict(media_id=media_id))
- try:
- return parse_json(res.text, schema=self.api_schema)
- except PluginError:
- return
+ return parse_json(res.text, schema=self.api_schema)
def _get_streams(self):
- if not self.options.get("username") or not self.options.get("password"):
- self.logger.error("BTV requires registration, set the username and password"
- " with --btv-username and --btv-password")
- elif self.login(self.options.get("username"), self.options.get("password")):
- res = self.session.http.get(self.url)
- media_match = self.media_id_re.search(res.text)
- media_id = media_match and media_match.group(1)
- if media_id:
- self.logger.debug("Found media id: {0}", media_id)
- stream_url = self.get_hls_url(media_id)
- if stream_url:
- return HLSStream.parse_variant_playlist(self.session, stream_url)
- else:
- self.logger.error("Login failed, a valid username and password is required")
+ res = self.session.http.get(self.url)
+ media_match = self.media_id_re.search(res.text)
+ media_id = media_match and media_match.group(1)
+ if media_id:
+ log.debug("Found media id: {0}", media_id)
+ stream_url = self.get_hls_url(media_id)
+ if stream_url:
+ return HLSStream.parse_variant_playlist(self.session, stream_url)
__plugin__ = BTV
|
{"golden_diff": "diff --git a/src/streamlink/plugins/btv.py b/src/streamlink/plugins/btv.py\n--- a/src/streamlink/plugins/btv.py\n+++ b/src/streamlink/plugins/btv.py\n@@ -1,38 +1,30 @@\n-from __future__ import print_function\n+import argparse\n+import logging\n import re\n \n-from streamlink import PluginError\n-from streamlink.plugin import Plugin\n+from streamlink.plugin import Plugin, PluginArguments, PluginArgument\n from streamlink.plugin.api import validate\n from streamlink.stream import HLSStream\n from streamlink.utils import parse_json\n-from streamlink.plugin import PluginArgument, PluginArguments\n+\n+log = logging.getLogger(__name__)\n \n \n class BTV(Plugin):\n arguments = PluginArguments(\n PluginArgument(\n \"username\",\n- metavar=\"USERNAME\",\n- requires=[\"password\"],\n- help=\"\"\"\n- A BTV username required to access any stream.\n- \"\"\"\n+ help=argparse.SUPPRESS\n ),\n PluginArgument(\n \"password\",\n sensitive=True,\n- metavar=\"PASSWORD\",\n- help=\"\"\"\n- A BTV account password to use with --btv-username.\n- \"\"\"\n+ help=argparse.SUPPRESS\n )\n )\n- url_re = re.compile(r\"https?://(?:www\\.)?btv\\.bg/live/?\")\n \n- api_url = \"http://www.btv.bg/lbin/global/player_config.php\"\n- check_login_url = \"http://www.btv.bg/lbin/userRegistration/check_user_login.php\"\n- login_url = \"https://www.btv.bg/bin/registration2/login.php?action=login&settings=0\"\n+ url_re = re.compile(r\"https?://(?:www\\.)?btvplus\\.bg/live/?\")\n+ api_url = \"https://btvplus.bg/lbin/v3/btvplus/player_config.php\"\n \n media_id_re = re.compile(r\"media_id=(\\d+)\")\n src_re = re.compile(r\"src: \\\"(http.*?)\\\"\")\n@@ -55,35 +47,19 @@\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n \n- def login(self, username, password):\n- res = self.session.http.post(self.login_url, data={\"username\": username, \"password\": password})\n- if \"success_logged_in\" in res.text:\n- return True\n- else:\n- return False\n-\n def get_hls_url(self, media_id):\n res = self.session.http.get(self.api_url, params=dict(media_id=media_id))\n- try:\n- return parse_json(res.text, schema=self.api_schema)\n- except PluginError:\n- return\n+ return parse_json(res.text, schema=self.api_schema)\n \n def _get_streams(self):\n- if not self.options.get(\"username\") or not self.options.get(\"password\"):\n- self.logger.error(\"BTV requires registration, set the username and password\"\n- \" with --btv-username and --btv-password\")\n- elif self.login(self.options.get(\"username\"), self.options.get(\"password\")):\n- res = self.session.http.get(self.url)\n- media_match = self.media_id_re.search(res.text)\n- media_id = media_match and media_match.group(1)\n- if media_id:\n- self.logger.debug(\"Found media id: {0}\", media_id)\n- stream_url = self.get_hls_url(media_id)\n- if stream_url:\n- return HLSStream.parse_variant_playlist(self.session, stream_url)\n- else:\n- self.logger.error(\"Login failed, a valid username and password is required\")\n+ res = self.session.http.get(self.url)\n+ media_match = self.media_id_re.search(res.text)\n+ media_id = media_match and media_match.group(1)\n+ if media_id:\n+ log.debug(\"Found media id: {0}\", media_id)\n+ stream_url = self.get_hls_url(media_id)\n+ if stream_url:\n+ return HLSStream.parse_variant_playlist(self.session, stream_url)\n \n \n __plugin__ = BTV\n", "issue": "[bug] BTV plugin needs updating\n## Bug Report\r\n- [x] This is a bug report and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\nThe location of the BTV 
livestream has moved to https://btvplus.bg/live/\r\n**Edit**: Livestreaming no longer requires a user to login, so that can be removed from the plugin info page.\r\n\r\n\r\n### Expected / Actual behavior\r\nStreamlink should be able to handle the link.\r\n\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n1. streamlink https://btvplus.bg/live/ best \r\n2. error: No plugin can handle URL: https://btvplus.bg/live/\n", "before_files": [{"content": "from __future__ import print_function\nimport re\n\nfrom streamlink import PluginError\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import parse_json\nfrom streamlink.plugin import PluginArgument, PluginArguments\n\n\nclass BTV(Plugin):\n arguments = PluginArguments(\n PluginArgument(\n \"username\",\n metavar=\"USERNAME\",\n requires=[\"password\"],\n help=\"\"\"\n A BTV username required to access any stream.\n \"\"\"\n ),\n PluginArgument(\n \"password\",\n sensitive=True,\n metavar=\"PASSWORD\",\n help=\"\"\"\n A BTV account password to use with --btv-username.\n \"\"\"\n )\n )\n url_re = re.compile(r\"https?://(?:www\\.)?btv\\.bg/live/?\")\n\n api_url = \"http://www.btv.bg/lbin/global/player_config.php\"\n check_login_url = \"http://www.btv.bg/lbin/userRegistration/check_user_login.php\"\n login_url = \"https://www.btv.bg/bin/registration2/login.php?action=login&settings=0\"\n\n media_id_re = re.compile(r\"media_id=(\\d+)\")\n src_re = re.compile(r\"src: \\\"(http.*?)\\\"\")\n api_schema = validate.Schema(\n validate.all(\n {\"status\": \"ok\", \"config\": validate.text},\n validate.get(\"config\"),\n validate.all(\n validate.transform(src_re.search),\n validate.any(\n None,\n validate.get(1),\n validate.url()\n )\n )\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def login(self, username, password):\n res = self.session.http.post(self.login_url, data={\"username\": username, \"password\": password})\n if \"success_logged_in\" in res.text:\n return True\n else:\n return False\n\n def get_hls_url(self, media_id):\n res = self.session.http.get(self.api_url, params=dict(media_id=media_id))\n try:\n return parse_json(res.text, schema=self.api_schema)\n except PluginError:\n return\n\n def _get_streams(self):\n if not self.options.get(\"username\") or not self.options.get(\"password\"):\n self.logger.error(\"BTV requires registration, set the username and password\"\n \" with --btv-username and --btv-password\")\n elif self.login(self.options.get(\"username\"), self.options.get(\"password\")):\n res = self.session.http.get(self.url)\n media_match = self.media_id_re.search(res.text)\n media_id = media_match and media_match.group(1)\n if media_id:\n self.logger.debug(\"Found media id: {0}\", media_id)\n stream_url = self.get_hls_url(media_id)\n if stream_url:\n return HLSStream.parse_variant_playlist(self.session, stream_url)\n else:\n self.logger.error(\"Login failed, a valid username and password is required\")\n\n\n__plugin__ = BTV\n", "path": "src/streamlink/plugins/btv.py"}]}
| 1,509 | 890 |
gh_patches_debug_10309
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-5209
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fallback to value in "Report Admin > Pending Approval" list view
# Current situation
Some updates look like they have no value and others do.

The reason is that some have descriptions and others don't.
**Examples:**
With description

Without description

# Improvement
Fallback to the actual value when no description has been provided.
</issue>
<code>
[start of akvo/rest/serializers/indicator_period_data.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6 import json
7
8 from rest_framework import serializers
9 from django.db.models import Sum
10 from django.contrib.admin.models import LogEntry, CHANGE, DELETION
11 from django.contrib.contenttypes.models import ContentType
12
13 from akvo.rest.serializers.disaggregation import DisaggregationSerializer, DisaggregationReadOnlySerializer
14 from akvo.rest.serializers.rsr_serializer import BaseRSRSerializer
15 from akvo.rest.serializers.user import UserDetailsSerializer, UserRawSerializer
16 from akvo.rsr.models import (
17 IndicatorPeriod, IndicatorPeriodData, IndicatorPeriodDataComment, IndicatorPeriodDataFile, IndicatorPeriodDataPhoto,
18 IndicatorDimensionValue, Disaggregation
19 )
20 from akvo.utils import ensure_decimal
21
22
23 class IndicatorPeriodDataCommentSerializer(BaseRSRSerializer):
24
25 user_details = UserDetailsSerializer(read_only=True, source='user')
26
27 class Meta:
28 model = IndicatorPeriodDataComment
29 fields = '__all__'
30 read_only_fields = ['user']
31
32
33 class IndicatorPeriodDataCommentNestedSerializer(BaseRSRSerializer):
34 id = serializers.IntegerField(required=False)
35
36 class Meta:
37 model = IndicatorPeriodDataComment
38 fields = '__all__'
39 read_only_fields = ('id', 'data', 'user')
40
41
42 class IndicatorPeriodDataFileSerializer(BaseRSRSerializer):
43 class Meta:
44 model = IndicatorPeriodDataFile
45 fields = '__all__'
46
47
48 class IndicatorPeriodDataPhotoSerializer(BaseRSRSerializer):
49 class Meta:
50 model = IndicatorPeriodDataPhoto
51 fields = '__all__'
52
53
54 class IndicatorPeriodDataSerializer(BaseRSRSerializer):
55
56 user_details = UserDetailsSerializer(read_only=True, source='user')
57 approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
58 status_display = serializers.ReadOnlyField()
59 photo_url = serializers.ReadOnlyField()
60 file_url = serializers.ReadOnlyField()
61
62 class Meta:
63 model = IndicatorPeriodData
64 fields = '__all__'
65 read_only_fields = ['user']
66
67
68 class IndicatorPeriodDataLiteSerializer(BaseRSRSerializer):
69
70 user_details = UserRawSerializer(required=False, source='user')
71 status_display = serializers.ReadOnlyField()
72 photo_url = serializers.ReadOnlyField()
73 file_url = serializers.ReadOnlyField()
74 disaggregations = DisaggregationReadOnlySerializer(many=True, required=False)
75 value = serializers.SerializerMethodField()
76 file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')
77 photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')
78 comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)
79
80 def get_value(self, obj):
81 return ensure_decimal(obj.value)
82
83 class Meta:
84 model = IndicatorPeriodData
85 fields = (
86 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',
87 'disaggregations', 'narrative', 'photo_url', 'file_url', 'created_at', 'last_modified_at',
88 'file_set', 'photo_set', 'review_note', 'comments',
89 )
90
91
92 class IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):
93
94 period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())
95 comments = IndicatorPeriodDataCommentNestedSerializer(many=True, required=False)
96 disaggregations = DisaggregationSerializer(many=True, required=False)
97 user_details = UserDetailsSerializer(read_only=True, source='user')
98 approver_details = UserDetailsSerializer(read_only=True, source='approved_by')
99 status_display = serializers.ReadOnlyField()
100 photo_url = serializers.ReadOnlyField()
101 file_url = serializers.ReadOnlyField()
102 period_can_add_update = serializers.ReadOnlyField(source='period.can_save_update')
103 files = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)
104 photos = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)
105 file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')
106 photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')
107 audit_trail = serializers.SerializerMethodField()
108
109 class Meta:
110 model = IndicatorPeriodData
111 fields = '__all__'
112 read_only_fields = ['user']
113
114 def get_audit_trail(self, obj):
115 entries = LogEntry.objects.filter(
116 content_type=ContentType.objects.get_for_model(IndicatorPeriodData),
117 object_id=obj.id,
118 change_message__contains='"audit_trail": true'
119 )
120 return [
121 {
122 'user': {'id': entry.user.id, 'email': entry.user.email, 'first_name': entry.user.first_name, 'last_name': entry.user.last_name},
123 'action_time': entry.action_time,
124 'action_flag': 'CHANGE' if entry.action_flag == CHANGE else 'DELETION' if entry.action_flag == DELETION else 'ADDITION',
125 'data': json.loads(entry.change_message)['data'],
126 }
127 for entry in entries
128 ]
129
130 def create(self, validated_data):
131 self._validate_disaggregations(
132 self._disaggregations_data,
133 value=ensure_decimal(validated_data.get('value', 0)),
134 numerator=ensure_decimal(validated_data.get('numerator', None)),
135 denominator=ensure_decimal(validated_data.get('denominator', None))
136 )
137 """Over-ridden to handle nested writes."""
138 files = validated_data.pop('files', [])
139 photos = validated_data.pop('photos', [])
140 comments = validated_data.pop('comments', [])
141 update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)
142 for disaggregation in self._disaggregations_data:
143 disaggregation['update'] = update.id
144 if 'type_id' in disaggregation and 'dimension_value' not in disaggregation:
145 disaggregation['dimension_value'] = disaggregation['type_id']
146 serializer = DisaggregationSerializer(data=disaggregation)
147 serializer.is_valid(raise_exception=True)
148 serializer.create(serializer.validated_data)
149 for file in files:
150 IndicatorPeriodDataFile.objects.create(update=update, file=file)
151 for photo in photos:
152 IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)
153 for comment in comments:
154 IndicatorPeriodDataComment.objects.create(data=update, user=update.user, comment=comment['comment'])
155
156 return update
157
158 def update(self, instance, validated_data):
159 self._validate_disaggregations(
160 self._disaggregations_data,
161 value=ensure_decimal(validated_data.get('value', instance.value)),
162 numerator=ensure_decimal(validated_data.get('numerator', instance.numerator)),
163 denominator=ensure_decimal(validated_data.get('denominator', instance.denominator)),
164 update=instance
165 )
166 """Over-ridden to handle nested updates."""
167 files = validated_data.pop('files', [])
168 photos = validated_data.pop('photos', [])
169 comments = validated_data.pop('comments', [])
170 super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)
171 for disaggregation in self._disaggregations_data:
172 disaggregation['update'] = instance.id
173 serializer = DisaggregationSerializer(data=disaggregation)
174 serializer.is_valid(raise_exception=True)
175 disaggregation_instance, _ = instance.disaggregations.get_or_create(
176 update=instance,
177 dimension_value=serializer.validated_data['dimension_value'],
178 )
179 serializer.update(disaggregation_instance, serializer.validated_data)
180 for file in files:
181 IndicatorPeriodDataFile.objects.create(update=instance, file=file)
182 for photo in photos:
183 IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)
184 for comment in comments:
185 comment_id = int(comment.get('id', 0))
186 comment_txt = str(comment.get('comment', ''))
187 if not comment_id:
188 IndicatorPeriodDataComment.objects.create(data=instance, user=instance.user, comment=comment['comment'])
189 else:
190 comment_obj = IndicatorPeriodDataComment.objects.get(id=comment_id)
191 if not comment_txt:
192 comment_obj.delete()
193 else:
194 comment_obj.comment = comment_txt
195 comment_obj.save()
196
197 return instance._meta.model.objects.select_related(
198 'period',
199 'user',
200 'approved_by',
201 ).prefetch_related(
202 'comments',
203 'disaggregations',
204 ).get(id=instance.id)
205
206 def _validate_disaggregations(self, disaggregations, value, numerator=None, denominator=None, update=None):
207 adjustments = {}
208 for disaggregation in disaggregations:
209 type_id = disaggregation.get('type_id', disaggregation.get('dimension_value', None))
210 if type_id is None:
211 continue
212 if denominator is not None:
213 disaggregation_denominator = ensure_decimal(disaggregation.get('denominator', 0))
214 if disaggregation_denominator > denominator:
215 raise serializers.ValidationError("disaggregations denominator should not exceed update denominator")
216 category = IndicatorDimensionValue.objects.get(pk=type_id).name
217 if category.id not in adjustments:
218 adjustments[category.id] = {'values': 0, 'numerators': 0, 'type_ids': []}
219 adjustments[category.id]['values'] += ensure_decimal(disaggregation.get('value', 0))
220 adjustments[category.id]['numerators'] += ensure_decimal(disaggregation.get('numerator', 0))
221 adjustments[category.id]['type_ids'].append(type_id)
222 for key, adjustment in adjustments.items():
223 unmodifieds = Disaggregation.objects.filter(update=update, dimension_value__name=key)\
224 .exclude(dimension_value__in=adjustment['type_ids'])\
225 .aggregate(values=Sum('value'))
226 total = adjustment['values'] + ensure_decimal(unmodifieds['values'])
227 if numerator is not None and adjustment['numerators'] > numerator:
228 raise serializers.ValidationError("The disaggregation numerator should not exceed update numerator")
229 if total > value:
230 raise serializers.ValidationError("The accumulated disaggregations value should not exceed update value")
231
232 def is_valid(self, raise_exception=False):
233 # HACK to allow nested posting...
234 self._disaggregations_data = self.initial_data.pop('disaggregations', [])
235 super(IndicatorPeriodDataFrameworkSerializer, self).is_valid(raise_exception=raise_exception)
236
[end of akvo/rest/serializers/indicator_period_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rest/serializers/indicator_period_data.py b/akvo/rest/serializers/indicator_period_data.py
--- a/akvo/rest/serializers/indicator_period_data.py
+++ b/akvo/rest/serializers/indicator_period_data.py
@@ -84,7 +84,7 @@
model = IndicatorPeriodData
fields = (
'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',
- 'disaggregations', 'narrative', 'photo_url', 'file_url', 'created_at', 'last_modified_at',
+ 'disaggregations', 'narrative', 'score_indices', 'photo_url', 'file_url', 'created_at', 'last_modified_at',
'file_set', 'photo_set', 'review_note', 'comments',
)
|
{"golden_diff": "diff --git a/akvo/rest/serializers/indicator_period_data.py b/akvo/rest/serializers/indicator_period_data.py\n--- a/akvo/rest/serializers/indicator_period_data.py\n+++ b/akvo/rest/serializers/indicator_period_data.py\n@@ -84,7 +84,7 @@\n model = IndicatorPeriodData\n fields = (\n 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',\n- 'disaggregations', 'narrative', 'photo_url', 'file_url', 'created_at', 'last_modified_at',\n+ 'disaggregations', 'narrative', 'score_indices', 'photo_url', 'file_url', 'created_at', 'last_modified_at',\n 'file_set', 'photo_set', 'review_note', 'comments',\n )\n", "issue": "Fallback to value in \"Report Admin > Pending Approval\" list view\n# Current situation\n\nSome updates look like they have no value and others do.\n\n\n\nThe reason is that some have descriptions and others don't.\n\n**Examples:**\n\nWith description\n\n\nWithout description\n\n\n# Improvement\n\nFallback to the actual value when no description has been provided.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\nimport json\n\nfrom rest_framework import serializers\nfrom django.db.models import Sum\nfrom django.contrib.admin.models import LogEntry, CHANGE, DELETION\nfrom django.contrib.contenttypes.models import ContentType\n\nfrom akvo.rest.serializers.disaggregation import DisaggregationSerializer, DisaggregationReadOnlySerializer\nfrom akvo.rest.serializers.rsr_serializer import BaseRSRSerializer\nfrom akvo.rest.serializers.user import UserDetailsSerializer, UserRawSerializer\nfrom akvo.rsr.models import (\n IndicatorPeriod, IndicatorPeriodData, IndicatorPeriodDataComment, IndicatorPeriodDataFile, IndicatorPeriodDataPhoto,\n IndicatorDimensionValue, Disaggregation\n)\nfrom akvo.utils import ensure_decimal\n\n\nclass IndicatorPeriodDataCommentSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n\n class Meta:\n model = IndicatorPeriodDataComment\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataCommentNestedSerializer(BaseRSRSerializer):\n id = serializers.IntegerField(required=False)\n\n class Meta:\n model = IndicatorPeriodDataComment\n fields = '__all__'\n read_only_fields = ('id', 'data', 'user')\n\n\nclass IndicatorPeriodDataFileSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataFile\n fields = '__all__'\n\n\nclass IndicatorPeriodDataPhotoSerializer(BaseRSRSerializer):\n class Meta:\n model = IndicatorPeriodDataPhoto\n fields = '__all__'\n\n\nclass IndicatorPeriodDataSerializer(BaseRSRSerializer):\n\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n\nclass IndicatorPeriodDataLiteSerializer(BaseRSRSerializer):\n\n user_details = UserRawSerializer(required=False, source='user')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n 
disaggregations = DisaggregationReadOnlySerializer(many=True, required=False)\n value = serializers.SerializerMethodField()\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')\n photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n comments = IndicatorPeriodDataCommentSerializer(read_only=True, many=True, required=False)\n\n def get_value(self, obj):\n return ensure_decimal(obj.value)\n\n class Meta:\n model = IndicatorPeriodData\n fields = (\n 'id', 'user_details', 'status', 'status_display', 'update_method', 'value', 'numerator', 'denominator', 'text',\n 'disaggregations', 'narrative', 'photo_url', 'file_url', 'created_at', 'last_modified_at',\n 'file_set', 'photo_set', 'review_note', 'comments',\n )\n\n\nclass IndicatorPeriodDataFrameworkSerializer(BaseRSRSerializer):\n\n period = serializers.PrimaryKeyRelatedField(queryset=IndicatorPeriod.objects.all())\n comments = IndicatorPeriodDataCommentNestedSerializer(many=True, required=False)\n disaggregations = DisaggregationSerializer(many=True, required=False)\n user_details = UserDetailsSerializer(read_only=True, source='user')\n approver_details = UserDetailsSerializer(read_only=True, source='approved_by')\n status_display = serializers.ReadOnlyField()\n photo_url = serializers.ReadOnlyField()\n file_url = serializers.ReadOnlyField()\n period_can_add_update = serializers.ReadOnlyField(source='period.can_save_update')\n files = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n photos = serializers.ListField(child=serializers.FileField(), required=False, write_only=True)\n file_set = IndicatorPeriodDataFileSerializer(many=True, read_only=True, source='indicatorperioddatafile_set')\n photo_set = IndicatorPeriodDataPhotoSerializer(many=True, read_only=True, source='indicatorperioddataphoto_set')\n audit_trail = serializers.SerializerMethodField()\n\n class Meta:\n model = IndicatorPeriodData\n fields = '__all__'\n read_only_fields = ['user']\n\n def get_audit_trail(self, obj):\n entries = LogEntry.objects.filter(\n content_type=ContentType.objects.get_for_model(IndicatorPeriodData),\n object_id=obj.id,\n change_message__contains='\"audit_trail\": true'\n )\n return [\n {\n 'user': {'id': entry.user.id, 'email': entry.user.email, 'first_name': entry.user.first_name, 'last_name': entry.user.last_name},\n 'action_time': entry.action_time,\n 'action_flag': 'CHANGE' if entry.action_flag == CHANGE else 'DELETION' if entry.action_flag == DELETION else 'ADDITION',\n 'data': json.loads(entry.change_message)['data'],\n }\n for entry in entries\n ]\n\n def create(self, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', 0)),\n numerator=ensure_decimal(validated_data.get('numerator', None)),\n denominator=ensure_decimal(validated_data.get('denominator', None))\n )\n \"\"\"Over-ridden to handle nested writes.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n comments = validated_data.pop('comments', [])\n update = super(IndicatorPeriodDataFrameworkSerializer, self).create(validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = update.id\n if 'type_id' in disaggregation and 'dimension_value' not in disaggregation:\n disaggregation['dimension_value'] = disaggregation['type_id']\n serializer = DisaggregationSerializer(data=disaggregation)\n 
serializer.is_valid(raise_exception=True)\n serializer.create(serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=update, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=update, photo=photo)\n for comment in comments:\n IndicatorPeriodDataComment.objects.create(data=update, user=update.user, comment=comment['comment'])\n\n return update\n\n def update(self, instance, validated_data):\n self._validate_disaggregations(\n self._disaggregations_data,\n value=ensure_decimal(validated_data.get('value', instance.value)),\n numerator=ensure_decimal(validated_data.get('numerator', instance.numerator)),\n denominator=ensure_decimal(validated_data.get('denominator', instance.denominator)),\n update=instance\n )\n \"\"\"Over-ridden to handle nested updates.\"\"\"\n files = validated_data.pop('files', [])\n photos = validated_data.pop('photos', [])\n comments = validated_data.pop('comments', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).update(instance, validated_data)\n for disaggregation in self._disaggregations_data:\n disaggregation['update'] = instance.id\n serializer = DisaggregationSerializer(data=disaggregation)\n serializer.is_valid(raise_exception=True)\n disaggregation_instance, _ = instance.disaggregations.get_or_create(\n update=instance,\n dimension_value=serializer.validated_data['dimension_value'],\n )\n serializer.update(disaggregation_instance, serializer.validated_data)\n for file in files:\n IndicatorPeriodDataFile.objects.create(update=instance, file=file)\n for photo in photos:\n IndicatorPeriodDataPhoto.objects.create(update=instance, photo=photo)\n for comment in comments:\n comment_id = int(comment.get('id', 0))\n comment_txt = str(comment.get('comment', ''))\n if not comment_id:\n IndicatorPeriodDataComment.objects.create(data=instance, user=instance.user, comment=comment['comment'])\n else:\n comment_obj = IndicatorPeriodDataComment.objects.get(id=comment_id)\n if not comment_txt:\n comment_obj.delete()\n else:\n comment_obj.comment = comment_txt\n comment_obj.save()\n\n return instance._meta.model.objects.select_related(\n 'period',\n 'user',\n 'approved_by',\n ).prefetch_related(\n 'comments',\n 'disaggregations',\n ).get(id=instance.id)\n\n def _validate_disaggregations(self, disaggregations, value, numerator=None, denominator=None, update=None):\n adjustments = {}\n for disaggregation in disaggregations:\n type_id = disaggregation.get('type_id', disaggregation.get('dimension_value', None))\n if type_id is None:\n continue\n if denominator is not None:\n disaggregation_denominator = ensure_decimal(disaggregation.get('denominator', 0))\n if disaggregation_denominator > denominator:\n raise serializers.ValidationError(\"disaggregations denominator should not exceed update denominator\")\n category = IndicatorDimensionValue.objects.get(pk=type_id).name\n if category.id not in adjustments:\n adjustments[category.id] = {'values': 0, 'numerators': 0, 'type_ids': []}\n adjustments[category.id]['values'] += ensure_decimal(disaggregation.get('value', 0))\n adjustments[category.id]['numerators'] += ensure_decimal(disaggregation.get('numerator', 0))\n adjustments[category.id]['type_ids'].append(type_id)\n for key, adjustment in adjustments.items():\n unmodifieds = Disaggregation.objects.filter(update=update, dimension_value__name=key)\\\n .exclude(dimension_value__in=adjustment['type_ids'])\\\n .aggregate(values=Sum('value'))\n total = adjustment['values'] + 
ensure_decimal(unmodifieds['values'])\n if numerator is not None and adjustment['numerators'] > numerator:\n raise serializers.ValidationError(\"The disaggregation numerator should not exceed update numerator\")\n if total > value:\n raise serializers.ValidationError(\"The accumulated disaggregations value should not exceed update value\")\n\n def is_valid(self, raise_exception=False):\n # HACK to allow nested posting...\n self._disaggregations_data = self.initial_data.pop('disaggregations', [])\n super(IndicatorPeriodDataFrameworkSerializer, self).is_valid(raise_exception=raise_exception)\n", "path": "akvo/rest/serializers/indicator_period_data.py"}]}
| 3,682 | 198 |
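A minimal, framework-free sketch of the disaggregation check visible in the serializer excerpt above — per-category totals must not exceed the update's own value or numerator. The helper names and error messages are illustrative, not the actual Django/DRF code:

```python
from decimal import Decimal, InvalidOperation


def ensure_decimal(value):
    # Mirrors the permissive coercion used in the serializer: bad input counts as 0.
    try:
        return Decimal(str(value))
    except (InvalidOperation, TypeError, ValueError):
        return Decimal(0)


def validate_disaggregations(disaggregations, value, numerator=None):
    """Raise ValueError if disaggregation totals exceed the update's own totals."""
    total_value = sum(ensure_decimal(d.get("value", 0)) for d in disaggregations)
    total_numerator = sum(ensure_decimal(d.get("numerator", 0)) for d in disaggregations)
    if numerator is not None and total_numerator > ensure_decimal(numerator):
        raise ValueError("The disaggregation numerator should not exceed update numerator")
    if total_value > ensure_decimal(value):
        raise ValueError("The accumulated disaggregations value should not exceed update value")


# Example: 60 + 50 > 100, so the check rejects this update.
try:
    validate_disaggregations([{"value": 60}, {"value": 50}], value=100)
except ValueError as exc:
    print(exc)
```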
gh_patches_debug_27700 | rasdani/github-patches | git_diff | streamlink__streamlink-5742 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.atresplayer: Error -3 while decompressing data: incorrect header check
### Checklist
- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
streamlink 6.4.2
### Description
Possible change in link decoding.
### Debug log
```text
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.11.6
[cli][debug] OpenSSL: OpenSSL 3.0.11 19 Sep 2023
[cli][debug] Streamlink: 6.4.2
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.11.17
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.3
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.19.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.31.0
[cli][debug] trio: 0.23.1
[cli][debug] trio-websocket: 0.11.1
[cli][debug] typing-extensions: 4.8.0
[cli][debug] urllib3: 2.1.0
[cli][debug] websocket-client: 1.6.4
[cli][debug] Arguments:
[cli][debug] url=https://www.atresplayer.com/directos/antena3/
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --ffmpeg-ffmpeg=C:\Program Files\Streamlink\ffmpeg\ffmpeg.exe
[cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/antena3/
[plugins.atresplayer][debug] Player API URL: https://api.atresplayer.com/player/v1/live/5a6a165a7ed1a834493ebf6a
[plugins.atresplayer][debug] Stream source: https://directo.atresmedia.com/49aa0979c14a4113668984aa8f6f7a43dd3a624a_1701338572/antena3/master.m3u8 (application/vnd.apple.mpegurl)
[utils.l10n][debug] Language code: es_ES
error: Unable to open URL: https://directo.atresmedia.com/49aa0979c14a4113668984aa8f6f7a43dd3a624a_1701338572/antena3/master.m3u8 (('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect header check')))
```
</issue>
<code>
[start of src/streamlink/plugins/atresplayer.py]
1 """
2 $description Spanish live TV channels from Atresmedia Television, including Antena 3 and laSexta.
3 $url atresplayer.com
4 $type live
5 $region Spain
6 """
7
8 import logging
9 import re
10
11 from streamlink.plugin import Plugin, pluginmatcher
12 from streamlink.plugin.api import validate
13 from streamlink.stream.dash import DASHStream
14 from streamlink.stream.hls import HLSStream
15 from streamlink.utils.url import update_scheme
16
17
18 log = logging.getLogger(__name__)
19
20
21 @pluginmatcher(re.compile(
22 r"https?://(?:www\.)?atresplayer\.com/directos/.+",
23 ))
24 class AtresPlayer(Plugin):
25 _channels_api_url = "https://api.atresplayer.com/client/v1/info/channels"
26 _player_api_url = "https://api.atresplayer.com/player/v1/live/{channel_id}"
27
28 def __init__(self, *args, **kwargs):
29 super().__init__(*args, **kwargs)
30 self.url = update_scheme("https://", f"{self.url.rstrip('/')}/")
31
32 def _get_streams(self):
33 channel_path = f"/{self.url.split('/')[-2]}/"
34 channel_data = self.session.http.get(self._channels_api_url, schema=validate.Schema(
35 validate.parse_json(),
36 [{
37 "id": str,
38 "link": {"url": str},
39 }],
40 validate.filter(lambda item: item["link"]["url"] == channel_path),
41 ))
42 if not channel_data:
43 return
44 channel_id = channel_data[0]["id"]
45
46 player_api_url = self._player_api_url.format(channel_id=channel_id)
47 log.debug(f"Player API URL: {player_api_url}")
48
49 sources = self.session.http.get(player_api_url, acceptable_status=(200, 403), schema=validate.Schema(
50 validate.parse_json(),
51 validate.any(
52 {
53 "error": str,
54 "error_description": str,
55 },
56 {
57 "sources": [
58 validate.all(
59 {
60 "src": validate.url(),
61 validate.optional("type"): str,
62 },
63 validate.union_get("type", "src"),
64 ),
65 ],
66 },
67 ),
68 ))
69 if "error" in sources:
70 log.error(f"Player API error: {sources['error']} - {sources['error_description']}")
71 return
72
73 for streamtype, streamsrc in sources.get("sources"):
74 log.debug(f"Stream source: {streamsrc} ({streamtype or 'n/a'})")
75
76 if streamtype == "application/vnd.apple.mpegurl":
77 streams = HLSStream.parse_variant_playlist(self.session, streamsrc)
78 if not streams:
79 yield "live", HLSStream(self.session, streamsrc)
80 else:
81 yield from streams.items()
82 elif streamtype == "application/dash+xml":
83 yield from DASHStream.parse_manifest(self.session, streamsrc).items()
84
85
86 __plugin__ = AtresPlayer
87
[end of src/streamlink/plugins/atresplayer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/atresplayer.py b/src/streamlink/plugins/atresplayer.py
--- a/src/streamlink/plugins/atresplayer.py
+++ b/src/streamlink/plugins/atresplayer.py
@@ -23,7 +23,7 @@
))
class AtresPlayer(Plugin):
_channels_api_url = "https://api.atresplayer.com/client/v1/info/channels"
- _player_api_url = "https://api.atresplayer.com/player/v1/live/{channel_id}"
+ _player_api_url = "https://api.atresplayer.com/player/v1/live/{channel_id}?NODRM=true"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
@@ -54,7 +54,7 @@
"error_description": str,
},
{
- "sources": [
+ "sourcesLive": [
validate.all(
{
"src": validate.url(),
@@ -70,7 +70,7 @@
log.error(f"Player API error: {sources['error']} - {sources['error_description']}")
return
- for streamtype, streamsrc in sources.get("sources"):
+ for streamtype, streamsrc in sources.get("sourcesLive"):
log.debug(f"Stream source: {streamsrc} ({streamtype or 'n/a'})")
if streamtype == "application/vnd.apple.mpegurl":
|
{"golden_diff": "diff --git a/src/streamlink/plugins/atresplayer.py b/src/streamlink/plugins/atresplayer.py\n--- a/src/streamlink/plugins/atresplayer.py\n+++ b/src/streamlink/plugins/atresplayer.py\n@@ -23,7 +23,7 @@\n ))\n class AtresPlayer(Plugin):\n _channels_api_url = \"https://api.atresplayer.com/client/v1/info/channels\"\n- _player_api_url = \"https://api.atresplayer.com/player/v1/live/{channel_id}\"\n+ _player_api_url = \"https://api.atresplayer.com/player/v1/live/{channel_id}?NODRM=true\"\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n@@ -54,7 +54,7 @@\n \"error_description\": str,\n },\n {\n- \"sources\": [\n+ \"sourcesLive\": [\n validate.all(\n {\n \"src\": validate.url(),\n@@ -70,7 +70,7 @@\n log.error(f\"Player API error: {sources['error']} - {sources['error_description']}\")\n return\n \n- for streamtype, streamsrc in sources.get(\"sources\"):\n+ for streamtype, streamsrc in sources.get(\"sourcesLive\"):\n log.debug(f\"Stream source: {streamsrc} ({streamtype or 'n/a'})\")\n \n if streamtype == \"application/vnd.apple.mpegurl\":\n", "issue": "plugins.atresplayer: Error -3 while decompressing data: incorrect header check\n### Checklist\n\n- [X] This is a [plugin issue](https://streamlink.github.io/plugins.html) and not [a different kind of issue](https://github.com/streamlink/streamlink/issues/new/choose)\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nstreamlink 6.4.2\n\n### Description\n\nPossible change in link decoding.\n\n### Debug log\n\n```text\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.11.6\r\n[cli][debug] OpenSSL: OpenSSL 3.0.11 19 Sep 2023\r\n[cli][debug] Streamlink: 6.4.2\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.11.17\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.3\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.19.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.31.0\r\n[cli][debug] trio: 0.23.1\r\n[cli][debug] trio-websocket: 0.11.1\r\n[cli][debug] typing-extensions: 4.8.0\r\n[cli][debug] urllib3: 2.1.0\r\n[cli][debug] websocket-client: 1.6.4\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.atresplayer.com/directos/antena3/\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Program Files\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][info] Found matching plugin atresplayer for URL https://www.atresplayer.com/directos/antena3/\r\n[plugins.atresplayer][debug] Player API URL: https://api.atresplayer.com/player/v1/live/5a6a165a7ed1a834493ebf6a\r\n[plugins.atresplayer][debug] Stream source: https://directo.atresmedia.com/49aa0979c14a4113668984aa8f6f7a43dd3a624a_1701338572/antena3/master.m3u8 (application/vnd.apple.mpegurl)\r\n[utils.l10n][debug] Language code: es_ES\r\nerror: Unable to open URL: https://directo.atresmedia.com/49aa0979c14a4113668984aa8f6f7a43dd3a624a_1701338572/antena3/master.m3u8 (('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect header check')))\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Spanish live 
TV channels from Atresmedia Television, including Antena 3 and laSexta.\n$url atresplayer.com\n$type live\n$region Spain\n\"\"\"\n\nimport logging\nimport re\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.url import update_scheme\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?atresplayer\\.com/directos/.+\",\n))\nclass AtresPlayer(Plugin):\n _channels_api_url = \"https://api.atresplayer.com/client/v1/info/channels\"\n _player_api_url = \"https://api.atresplayer.com/player/v1/live/{channel_id}\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.url = update_scheme(\"https://\", f\"{self.url.rstrip('/')}/\")\n\n def _get_streams(self):\n channel_path = f\"/{self.url.split('/')[-2]}/\"\n channel_data = self.session.http.get(self._channels_api_url, schema=validate.Schema(\n validate.parse_json(),\n [{\n \"id\": str,\n \"link\": {\"url\": str},\n }],\n validate.filter(lambda item: item[\"link\"][\"url\"] == channel_path),\n ))\n if not channel_data:\n return\n channel_id = channel_data[0][\"id\"]\n\n player_api_url = self._player_api_url.format(channel_id=channel_id)\n log.debug(f\"Player API URL: {player_api_url}\")\n\n sources = self.session.http.get(player_api_url, acceptable_status=(200, 403), schema=validate.Schema(\n validate.parse_json(),\n validate.any(\n {\n \"error\": str,\n \"error_description\": str,\n },\n {\n \"sources\": [\n validate.all(\n {\n \"src\": validate.url(),\n validate.optional(\"type\"): str,\n },\n validate.union_get(\"type\", \"src\"),\n ),\n ],\n },\n ),\n ))\n if \"error\" in sources:\n log.error(f\"Player API error: {sources['error']} - {sources['error_description']}\")\n return\n\n for streamtype, streamsrc in sources.get(\"sources\"):\n log.debug(f\"Stream source: {streamsrc} ({streamtype or 'n/a'})\")\n\n if streamtype == \"application/vnd.apple.mpegurl\":\n streams = HLSStream.parse_variant_playlist(self.session, streamsrc)\n if not streams:\n yield \"live\", HLSStream(self.session, streamsrc)\n else:\n yield from streams.items()\n elif streamtype == \"application/dash+xml\":\n yield from DASHStream.parse_manifest(self.session, streamsrc).items()\n\n\n__plugin__ = AtresPlayer\n", "path": "src/streamlink/plugins/atresplayer.py"}]}
| 2,177 | 310 |
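A minimal sketch of how the schema change in the patch above could be checked by hand — the live player API responds with a `sourcesLive` list and accepts a `NODRM=true` query parameter. The channel id is the one from the debug log in the issue; everything else is illustrative:

```python
import requests

# Channel id taken from the debug log in the issue; NODRM=true as in the patch.
API_URL = "https://api.atresplayer.com/player/v1/live/5a6a165a7ed1a834493ebf6a?NODRM=true"


def inspect_player_api(url=API_URL):
    data = requests.get(url, timeout=10).json()
    print(sorted(data))  # top-level keys: expect "sourcesLive" rather than "sources"
    for source in data.get("sourcesLive", []):
        print(source.get("type"), source.get("src"))


if __name__ == "__main__":
    inspect_player_api()
```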
gh_patches_debug_17070 | rasdani/github-patches | git_diff | xonsh__xonsh-341 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xonsh dies if the prompt raises an exception
If a function in the prompt raises an exception, it kills xonsh. I would expect the error to be displayed, but not kill the shell.
</issue>
<code>
[start of xonsh/base_shell.py]
1 """The base class for xonsh shell"""
2 import os
3 import sys
4 import builtins
5 import traceback
6
7 from xonsh.execer import Execer
8 from xonsh.tools import XonshError, escape_windows_title_string
9 from xonsh.tools import ON_WINDOWS
10 from xonsh.completer import Completer
11 from xonsh.environ import multiline_prompt, format_prompt
12
13
14 class BaseShell(object):
15 """The xonsh shell."""
16
17 def __init__(self, execer, ctx, **kwargs):
18 super().__init__(**kwargs)
19 self.execer = execer
20 self.ctx = ctx
21 self.completer = Completer()
22 self.buffer = []
23 self.need_more_lines = False
24 self.mlprompt = None
25
26 def emptyline(self):
27 """Called when an empty line has been entered."""
28 self.need_more_lines = False
29 self.default('')
30
31 def precmd(self, line):
32 """Called just before execution of line."""
33 return line if self.need_more_lines else line.lstrip()
34
35 def default(self, line):
36 """Implements code execution."""
37 line = line if line.endswith('\n') else line + '\n'
38 code = self.push(line)
39 if code is None:
40 return
41 try:
42 self.execer.exec(code, mode='single', glbs=self.ctx) # no locals
43 except XonshError as e:
44 print(e.args[0], file=sys.stderr)
45 except:
46 _print_exception()
47 if builtins.__xonsh_exit__:
48 return True
49
50 def push(self, line):
51 """Pushes a line onto the buffer and compiles the code in a way that
52 enables multiline input.
53 """
54 code = None
55 self.buffer.append(line)
56 if self.need_more_lines:
57 return code
58 src = ''.join(self.buffer)
59 try:
60 code = self.execer.compile(src,
61 mode='single',
62 glbs=None,
63 locs=self.ctx)
64 self.reset_buffer()
65 except SyntaxError:
66 if line == '\n':
67 self.reset_buffer()
68 _print_exception()
69 return None
70 self.need_more_lines = True
71 except:
72 self.reset_buffer()
73 _print_exception()
74 return None
75 return code
76
77 def reset_buffer(self):
78 """Resets the line buffer."""
79 self.buffer.clear()
80 self.need_more_lines = False
81 self.mlprompt = None
82
83 def settitle(self):
84 """Sets terminal title."""
85 env = builtins.__xonsh_env__
86 term = env.get('TERM', None)
87 if term is None or term == 'linux':
88 return
89 if 'TITLE' in env:
90 t = env['TITLE']
91 else:
92 return
93 t = format_prompt(t)
94 if ON_WINDOWS and 'ANSICON' not in env:
95 t = escape_windows_title_string(t)
96 os.system('title {}'.format(t))
97 else:
98 sys.stdout.write("\x1b]2;{0}\x07".format(t))
99
100 @property
101 def prompt(self):
102 """Obtains the current prompt string."""
103 if self.need_more_lines:
104 if self.mlprompt is None:
105 self.mlprompt = multiline_prompt()
106 return self.mlprompt
107 env = builtins.__xonsh_env__
108 if 'PROMPT' in env:
109 p = env['PROMPT']
110 p = format_prompt(p)
111 else:
112 p = "set '$PROMPT = ...' $ "
113 self.settitle()
114 return p
115
116 def _print_exception():
117 """Print exceptions with/without traceback."""
118 if not 'XONSH_SHOW_TRACEBACK' in builtins.__xonsh_env__:
119 sys.stderr.write('xonsh: For full traceback set: '
120 '$XONSH_SHOW_TRACEBACK=True\n')
121 if builtins.__xonsh_env__.get('XONSH_SHOW_TRACEBACK', False):
122 traceback.print_exc()
123 else:
124 exc_type, exc_value, exc_traceback = sys.exc_info()
125 exception_only = traceback.format_exception_only(exc_type, exc_value)
126 sys.stderr.write(''.join(exception_only))
127
[end of xonsh/base_shell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/xonsh/base_shell.py b/xonsh/base_shell.py
--- a/xonsh/base_shell.py
+++ b/xonsh/base_shell.py
@@ -102,12 +102,19 @@
"""Obtains the current prompt string."""
if self.need_more_lines:
if self.mlprompt is None:
- self.mlprompt = multiline_prompt()
+ try:
+ self.mlprompt = multiline_prompt()
+ except Exception:
+ _print_exception()
+ self.mlprompt = '<multiline prompt error> '
return self.mlprompt
env = builtins.__xonsh_env__
if 'PROMPT' in env:
p = env['PROMPT']
- p = format_prompt(p)
+ try:
+ p = format_prompt(p)
+ except Exception:
+ _print_exception()
else:
p = "set '$PROMPT = ...' $ "
self.settitle()
|
{"golden_diff": "diff --git a/xonsh/base_shell.py b/xonsh/base_shell.py\n--- a/xonsh/base_shell.py\n+++ b/xonsh/base_shell.py\n@@ -102,12 +102,19 @@\n \"\"\"Obtains the current prompt string.\"\"\"\n if self.need_more_lines:\n if self.mlprompt is None:\n- self.mlprompt = multiline_prompt()\n+ try:\n+ self.mlprompt = multiline_prompt()\n+ except Exception:\n+ _print_exception()\n+ self.mlprompt = '<multiline prompt error> '\n return self.mlprompt\n env = builtins.__xonsh_env__\n if 'PROMPT' in env:\n p = env['PROMPT']\n- p = format_prompt(p)\n+ try:\n+ p = format_prompt(p)\n+ except Exception:\n+ _print_exception()\n else:\n p = \"set '$PROMPT = ...' $ \"\n self.settitle()\n", "issue": "xonsh dies if the prompt raises an exception\nIf a function in the prompt raises an exception, it kills xonsh. I would expect the error to be displayed, but not kill the shell. \n\n", "before_files": [{"content": "\"\"\"The base class for xonsh shell\"\"\"\nimport os\nimport sys\nimport builtins\nimport traceback\n\nfrom xonsh.execer import Execer\nfrom xonsh.tools import XonshError, escape_windows_title_string\nfrom xonsh.tools import ON_WINDOWS\nfrom xonsh.completer import Completer\nfrom xonsh.environ import multiline_prompt, format_prompt\n\n\nclass BaseShell(object):\n \"\"\"The xonsh shell.\"\"\"\n\n def __init__(self, execer, ctx, **kwargs):\n super().__init__(**kwargs)\n self.execer = execer\n self.ctx = ctx\n self.completer = Completer()\n self.buffer = []\n self.need_more_lines = False\n self.mlprompt = None\n\n def emptyline(self):\n \"\"\"Called when an empty line has been entered.\"\"\"\n self.need_more_lines = False\n self.default('')\n\n def precmd(self, line):\n \"\"\"Called just before execution of line.\"\"\"\n return line if self.need_more_lines else line.lstrip()\n\n def default(self, line):\n \"\"\"Implements code execution.\"\"\"\n line = line if line.endswith('\\n') else line + '\\n'\n code = self.push(line)\n if code is None:\n return\n try:\n self.execer.exec(code, mode='single', glbs=self.ctx) # no locals\n except XonshError as e:\n print(e.args[0], file=sys.stderr)\n except:\n _print_exception()\n if builtins.__xonsh_exit__:\n return True\n\n def push(self, line):\n \"\"\"Pushes a line onto the buffer and compiles the code in a way that\n enables multiline input.\n \"\"\"\n code = None\n self.buffer.append(line)\n if self.need_more_lines:\n return code\n src = ''.join(self.buffer)\n try:\n code = self.execer.compile(src,\n mode='single',\n glbs=None,\n locs=self.ctx)\n self.reset_buffer()\n except SyntaxError:\n if line == '\\n':\n self.reset_buffer()\n _print_exception()\n return None\n self.need_more_lines = True\n except:\n self.reset_buffer()\n _print_exception()\n return None\n return code\n\n def reset_buffer(self):\n \"\"\"Resets the line buffer.\"\"\"\n self.buffer.clear()\n self.need_more_lines = False\n self.mlprompt = None\n\n def settitle(self):\n \"\"\"Sets terminal title.\"\"\"\n env = builtins.__xonsh_env__\n term = env.get('TERM', None)\n if term is None or term == 'linux':\n return\n if 'TITLE' in env:\n t = env['TITLE']\n else:\n return\n t = format_prompt(t)\n if ON_WINDOWS and 'ANSICON' not in env:\n t = escape_windows_title_string(t)\n os.system('title {}'.format(t))\n else:\n sys.stdout.write(\"\\x1b]2;{0}\\x07\".format(t))\n\n @property\n def prompt(self):\n \"\"\"Obtains the current prompt string.\"\"\"\n if self.need_more_lines:\n if self.mlprompt is None:\n self.mlprompt = multiline_prompt()\n return self.mlprompt\n env = builtins.__xonsh_env__\n if 
'PROMPT' in env:\n p = env['PROMPT']\n p = format_prompt(p)\n else:\n p = \"set '$PROMPT = ...' $ \"\n self.settitle()\n return p\n \ndef _print_exception():\n \"\"\"Print exceptions with/without traceback.\"\"\"\n if not 'XONSH_SHOW_TRACEBACK' in builtins.__xonsh_env__:\n sys.stderr.write('xonsh: For full traceback set: '\n '$XONSH_SHOW_TRACEBACK=True\\n')\n if builtins.__xonsh_env__.get('XONSH_SHOW_TRACEBACK', False):\n traceback.print_exc()\n else:\n exc_type, exc_value, exc_traceback = sys.exc_info()\n exception_only = traceback.format_exception_only(exc_type, exc_value)\n sys.stderr.write(''.join(exception_only))\n", "path": "xonsh/base_shell.py"}]}
| 1,758 | 217 |
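A minimal, self-contained sketch of the guarded-prompt pattern that the patch above applies inside xonsh — render the prompt, print the exception, fall back to a static string. Names are illustrative, not xonsh's actual internals:

```python
import sys
import traceback


def _print_exception():
    exc_type, exc_value, _ = sys.exc_info()
    sys.stderr.write("".join(traceback.format_exception_only(exc_type, exc_value)))


def safe_prompt(render, fallback="<prompt error> $ "):
    """Render a user-supplied prompt, falling back to a static string on error."""
    try:
        return render()
    except Exception:
        _print_exception()
        return fallback


# A broken prompt function no longer kills the caller; the error is shown instead.
print(safe_prompt(lambda: "{} $ ".format(1 / 0)))
```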
gh_patches_debug_36944 | rasdani/github-patches | git_diff | paperless-ngx__paperless-ngx-903 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Password reset after docker container restarted
*Copy from old repository*: https://github.com/jonaswinkler/paperless-ng/issues/1511
**Describe the bug**
I deployed Paperless-NG in TrueNAS via the TrueCharts integration. TrueCharts uses the official docker container and passes environment variables to configure the superuser.
I changed the admin password in the Django admin interface. However, after redeploying the application (for example due to an update) the password gets overridden by the initial password passed via environment variable.
**To Reproduce**
Steps to reproduce the behavior:
1. Deploy Paperless with credentials admin//secret
2. Open Paperless
3. Navigate to admin interface
4. Change password to "mysupersecretpassword"
5. Restart/update the docker container
6. Navigate to Paperless and try to log in with admin/mysupersecretpassword
7. You can't log in.
**Expected behavior**
The admin password should not be overridden by the initial password.
**Relevant information**
- Version
- Installation method: **docker**
- Any configuration changes you made in `docker-compose.yml`, `docker-compose.env` or `paperless.conf`. -
I think this is related to the admin user password reset when the docker container is started:
docker-entrypoint.sh calls docker-prepare.sh calls the manage_superuser mgmt command and there the password is updated:
https://github.com/jonaswinkler/paperless-ng/blob/master/src/documents/management/commands/manage_superuser.py#L29
Am I missing something?
</issue>
<code>
[start of src/documents/management/commands/manage_superuser.py]
1 import logging
2 import os
3
4 from django.contrib.auth.models import User
5 from django.core.management.base import BaseCommand
6
7
8 logger = logging.getLogger("paperless.management.superuser")
9
10
11 class Command(BaseCommand):
12
13 help = """
14 Creates a Django superuser based on env variables.
15 """.replace(
16 " ",
17 "",
18 )
19
20 def handle(self, *args, **options):
21
22 username = os.getenv("PAPERLESS_ADMIN_USER")
23 if not username:
24 return
25
26 mail = os.getenv("PAPERLESS_ADMIN_MAIL", "root@localhost")
27 password = os.getenv("PAPERLESS_ADMIN_PASSWORD")
28
29 # Check if user exists already, leave as is if it does
30 if User.objects.filter(username=username).exists():
31 user: User = User.objects.get_by_natural_key(username)
32 user.set_password(password)
33 user.save()
34 self.stdout.write(f"Changed password of user {username}.")
35 elif password:
36 # Create superuser based on env variables
37 User.objects.create_superuser(username, mail, password)
38 self.stdout.write(f'Created superuser "{username}" with provided password.')
39 else:
40 self.stdout.write(f'Did not create superuser "{username}".')
41 self.stdout.write(
42 'Make sure you specified "PAPERLESS_ADMIN_PASSWORD" in your '
43 '"docker-compose.env" file.',
44 )
45
[end of src/documents/management/commands/manage_superuser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/documents/management/commands/manage_superuser.py b/src/documents/management/commands/manage_superuser.py
--- a/src/documents/management/commands/manage_superuser.py
+++ b/src/documents/management/commands/manage_superuser.py
@@ -11,7 +11,14 @@
class Command(BaseCommand):
help = """
- Creates a Django superuser based on env variables.
+ Creates a Django superuser:
+ User named: admin
+ Email: root@localhost
+ with password based on env variable.
+ No superuser will be created, when:
+ - The username is taken already exists
+ - A superuser already exists
+ - PAPERLESS_ADMIN_PASSWORD is not set
""".replace(
" ",
"",
@@ -19,26 +26,41 @@
def handle(self, *args, **options):
- username = os.getenv("PAPERLESS_ADMIN_USER")
- if not username:
- return
-
+ username = os.getenv("PAPERLESS_ADMIN_USER", "admin")
mail = os.getenv("PAPERLESS_ADMIN_MAIL", "root@localhost")
password = os.getenv("PAPERLESS_ADMIN_PASSWORD")
- # Check if user exists already, leave as is if it does
+ # Check if there's already a user called admin
if User.objects.filter(username=username).exists():
- user: User = User.objects.get_by_natural_key(username)
- user.set_password(password)
- user.save()
- self.stdout.write(f"Changed password of user {username}.")
- elif password:
- # Create superuser based on env variables
- User.objects.create_superuser(username, mail, password)
- self.stdout.write(f'Created superuser "{username}" with provided password.')
+ self.stdout.write(
+ self.style.NOTICE(
+ f"Did not create superuser, a user {username} already exists",
+ ),
+ )
+ return
+
+ # Check if any superuseruser
+ # exists already, leave as is if it does
+ if User.objects.filter(is_superuser=True).count() > 0:
+ self.stdout.write(
+ self.style.NOTICE(
+ "Did not create superuser, the DB already contains superusers",
+ ),
+ )
+ return
+
+ if password is None:
+ self.stdout.write(
+ self.style.ERROR(
+ "Please check if PAPERLESS_ADMIN_PASSWORD has been"
+ " set in the environment",
+ ),
+ )
else:
- self.stdout.write(f'Did not create superuser "{username}".')
+ # Create superuser with password based on env variable
+ User.objects.create_superuser(username, mail, password)
self.stdout.write(
- 'Make sure you specified "PAPERLESS_ADMIN_PASSWORD" in your '
- '"docker-compose.env" file.',
+ self.style.SUCCESS(
+ f'Created superuser "{username}" with provided password.',
+ ),
)
|
{"golden_diff": "diff --git a/src/documents/management/commands/manage_superuser.py b/src/documents/management/commands/manage_superuser.py\n--- a/src/documents/management/commands/manage_superuser.py\n+++ b/src/documents/management/commands/manage_superuser.py\n@@ -11,7 +11,14 @@\n class Command(BaseCommand):\n \n help = \"\"\"\n- Creates a Django superuser based on env variables.\n+ Creates a Django superuser:\n+ User named: admin\n+ Email: root@localhost\n+ with password based on env variable.\n+ No superuser will be created, when:\n+ - The username is taken already exists\n+ - A superuser already exists\n+ - PAPERLESS_ADMIN_PASSWORD is not set\n \"\"\".replace(\n \" \",\n \"\",\n@@ -19,26 +26,41 @@\n \n def handle(self, *args, **options):\n \n- username = os.getenv(\"PAPERLESS_ADMIN_USER\")\n- if not username:\n- return\n-\n+ username = os.getenv(\"PAPERLESS_ADMIN_USER\", \"admin\")\n mail = os.getenv(\"PAPERLESS_ADMIN_MAIL\", \"root@localhost\")\n password = os.getenv(\"PAPERLESS_ADMIN_PASSWORD\")\n \n- # Check if user exists already, leave as is if it does\n+ # Check if there's already a user called admin\n if User.objects.filter(username=username).exists():\n- user: User = User.objects.get_by_natural_key(username)\n- user.set_password(password)\n- user.save()\n- self.stdout.write(f\"Changed password of user {username}.\")\n- elif password:\n- # Create superuser based on env variables\n- User.objects.create_superuser(username, mail, password)\n- self.stdout.write(f'Created superuser \"{username}\" with provided password.')\n+ self.stdout.write(\n+ self.style.NOTICE(\n+ f\"Did not create superuser, a user {username} already exists\",\n+ ),\n+ )\n+ return\n+\n+ # Check if any superuseruser\n+ # exists already, leave as is if it does\n+ if User.objects.filter(is_superuser=True).count() > 0:\n+ self.stdout.write(\n+ self.style.NOTICE(\n+ \"Did not create superuser, the DB already contains superusers\",\n+ ),\n+ )\n+ return\n+\n+ if password is None:\n+ self.stdout.write(\n+ self.style.ERROR(\n+ \"Please check if PAPERLESS_ADMIN_PASSWORD has been\"\n+ \" set in the environment\",\n+ ),\n+ )\n else:\n- self.stdout.write(f'Did not create superuser \"{username}\".')\n+ # Create superuser with password based on env variable\n+ User.objects.create_superuser(username, mail, password)\n self.stdout.write(\n- 'Make sure you specified \"PAPERLESS_ADMIN_PASSWORD\" in your '\n- '\"docker-compose.env\" file.',\n+ self.style.SUCCESS(\n+ f'Created superuser \"{username}\" with provided password.',\n+ ),\n )\n", "issue": "[BUG] Password reset after docker container restarted\n*Copy from old repository*: https://github.com/jonaswinkler/paperless-ng/issues/1511\r\n\r\n**Describe the bug**\r\nI deployed Paperless-NG in TrueNAS via the TrueCharts integration. TrueCharts uses the official docker container and passes environment variables to configure the superuser.\r\n\r\nI changed the admin password in the Django admin interface. However, after redeploying the application (for example due to an update) the password gets overridden by the initial password passed via environment variable.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Deploy Paperless with credentials admin//secret\r\n2. Open Paperless\r\n3. Navigate to admin interface\r\n4. Change password to \"mysupersecretpassword\"\r\n5. Restart/update the docker container\r\n6. Navigate to Paperless and try to login with admin/mysupersecretpassword\r\n7. 
You can't login.\r\n\r\n**Expected behavior**\r\nThe admin password should not be overridden by the initial password.\r\n\r\n**Relevant information**\r\n - Version \r\n - Installation method: **docker**\r\n - Any configuration changes you made in `docker-compose.yml`, `docker-compose.env` or `paperless.conf`. -\r\n\r\nI think this is related to the admin user password reset when the docker container is started:\r\ndocker-entrypoint.sh calls docker-prepare.sh calls the manage_superuser mgmt command and there the password is updated:\r\nhttps://github.com/jonaswinkler/paperless-ng/blob/master/src/documents/management/commands/manage_superuser.py#L29\r\n\r\nAm I missing something?\n", "before_files": [{"content": "import logging\nimport os\n\nfrom django.contrib.auth.models import User\nfrom django.core.management.base import BaseCommand\n\n\nlogger = logging.getLogger(\"paperless.management.superuser\")\n\n\nclass Command(BaseCommand):\n\n help = \"\"\"\n Creates a Django superuser based on env variables.\n \"\"\".replace(\n \" \",\n \"\",\n )\n\n def handle(self, *args, **options):\n\n username = os.getenv(\"PAPERLESS_ADMIN_USER\")\n if not username:\n return\n\n mail = os.getenv(\"PAPERLESS_ADMIN_MAIL\", \"root@localhost\")\n password = os.getenv(\"PAPERLESS_ADMIN_PASSWORD\")\n\n # Check if user exists already, leave as is if it does\n if User.objects.filter(username=username).exists():\n user: User = User.objects.get_by_natural_key(username)\n user.set_password(password)\n user.save()\n self.stdout.write(f\"Changed password of user {username}.\")\n elif password:\n # Create superuser based on env variables\n User.objects.create_superuser(username, mail, password)\n self.stdout.write(f'Created superuser \"{username}\" with provided password.')\n else:\n self.stdout.write(f'Did not create superuser \"{username}\".')\n self.stdout.write(\n 'Make sure you specified \"PAPERLESS_ADMIN_PASSWORD\" in your '\n '\"docker-compose.env\" file.',\n )\n", "path": "src/documents/management/commands/manage_superuser.py"}]}
| 1,240 | 658 |
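The behavioural change in the patch above reduces to a small decision table: never touch an existing user or an existing superuser, never reset a password, and only create a fresh superuser when a password is provided. A framework-free, hypothetical rendering of that logic:

```python
from typing import Optional


def superuser_action(username_taken: bool, superuser_exists: bool,
                     password: Optional[str]) -> str:
    """Mirror the guards added by the patch: never overwrite, only create fresh."""
    if username_taken:
        return "skip: a user with that name already exists"
    if superuser_exists:
        return "skip: the DB already contains a superuser"
    if password is None:
        return "error: PAPERLESS_ADMIN_PASSWORD is not set"
    return "create superuser"


assert superuser_action(True, False, "secret").startswith("skip")
assert superuser_action(False, True, "secret").startswith("skip")
assert superuser_action(False, False, None).startswith("error")
assert superuser_action(False, False, "secret") == "create superuser"
print("all cases behave as in the patched command")
```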
gh_patches_debug_1384 | rasdani/github-patches | git_diff | huggingface__text-generation-inference-1182 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update Docker to torch 2.1?
### Feature request
H100s have trouble with GPTQ quants due to not having the latest PyTorch. In the next TGI Docker image, can we update torch to 2.1, or provide a special image for use on H100s?
### Motivation
Can't get TGI + GPTQ quant to work on H100s.
### Your contribution
Sorry, I don't have any contribution ^_^
</issue>
<code>
[start of integration-tests/conftest.py]
1 import sys
2 import subprocess
3 import contextlib
4 import pytest
5 import asyncio
6 import os
7 import docker
8 import json
9 import math
10 import time
11 import random
12
13 from docker.errors import NotFound
14 from typing import Optional, List, Dict
15 from syrupy.extensions.json import JSONSnapshotExtension
16 from aiohttp import ClientConnectorError, ClientOSError, ServerDisconnectedError
17
18 from text_generation import AsyncClient
19 from text_generation.types import Response, Details, InputToken, Token, BestOfSequence
20
21 DOCKER_IMAGE = os.getenv("DOCKER_IMAGE", None)
22 HUGGING_FACE_HUB_TOKEN = os.getenv("HUGGING_FACE_HUB_TOKEN", None)
23 DOCKER_VOLUME = os.getenv("DOCKER_VOLUME", "/data")
24
25
26 class ResponseComparator(JSONSnapshotExtension):
27 def serialize(
28 self,
29 data,
30 *,
31 exclude=None,
32 matcher=None,
33 ):
34 if isinstance(data, List):
35 data = [d.dict() for d in data]
36
37 data = self._filter(
38 data=data, depth=0, path=(), exclude=exclude, matcher=matcher
39 )
40 return json.dumps(data, indent=2, ensure_ascii=False, sort_keys=False) + "\n"
41
42 def matches(
43 self,
44 *,
45 serialized_data,
46 snapshot_data,
47 ) -> bool:
48 def convert_data(data):
49 data = json.loads(data)
50
51 if isinstance(data, Dict):
52 return Response(**data)
53 if isinstance(data, List):
54 return [Response(**d) for d in data]
55 raise NotImplementedError
56
57 def eq_token(token: Token, other: Token) -> bool:
58 return (
59 token.id == other.id
60 and token.text == other.text
61 and math.isclose(token.logprob, other.logprob, rel_tol=0.2)
62 and token.special == other.special
63 )
64
65 def eq_prefill_token(prefill_token: InputToken, other: InputToken) -> bool:
66 try:
67 return (
68 prefill_token.id == other.id
69 and prefill_token.text == other.text
70 and (
71 math.isclose(prefill_token.logprob, other.logprob, rel_tol=0.2)
72 if prefill_token.logprob is not None
73 else prefill_token.logprob == other.logprob
74 )
75 )
76 except TypeError:
77 return False
78
79 def eq_best_of(details: BestOfSequence, other: BestOfSequence) -> bool:
80 return (
81 details.finish_reason == other.finish_reason
82 and details.generated_tokens == other.generated_tokens
83 and details.seed == other.seed
84 and len(details.prefill) == len(other.prefill)
85 and all(
86 [
87 eq_prefill_token(d, o)
88 for d, o in zip(details.prefill, other.prefill)
89 ]
90 )
91 and len(details.tokens) == len(other.tokens)
92 and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)])
93 )
94
95 def eq_details(details: Details, other: Details) -> bool:
96 return (
97 details.finish_reason == other.finish_reason
98 and details.generated_tokens == other.generated_tokens
99 and details.seed == other.seed
100 and len(details.prefill) == len(other.prefill)
101 and all(
102 [
103 eq_prefill_token(d, o)
104 for d, o in zip(details.prefill, other.prefill)
105 ]
106 )
107 and len(details.tokens) == len(other.tokens)
108 and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)])
109 and (
110 len(details.best_of_sequences)
111 if details.best_of_sequences is not None
112 else 0
113 )
114 == (
115 len(other.best_of_sequences)
116 if other.best_of_sequences is not None
117 else 0
118 )
119 and (
120 all(
121 [
122 eq_best_of(d, o)
123 for d, o in zip(
124 details.best_of_sequences, other.best_of_sequences
125 )
126 ]
127 )
128 if details.best_of_sequences is not None
129 else details.best_of_sequences == other.best_of_sequences
130 )
131 )
132
133 def eq_response(response: Response, other: Response) -> bool:
134 return response.generated_text == other.generated_text and eq_details(
135 response.details, other.details
136 )
137
138 serialized_data = convert_data(serialized_data)
139 snapshot_data = convert_data(snapshot_data)
140
141 if not isinstance(serialized_data, List):
142 serialized_data = [serialized_data]
143 if not isinstance(snapshot_data, List):
144 snapshot_data = [snapshot_data]
145
146 return len(snapshot_data) == len(serialized_data) and all(
147 [eq_response(r, o) for r, o in zip(serialized_data, snapshot_data)]
148 )
149
150
151 class LauncherHandle:
152 def __init__(self, port: int):
153 self.client = AsyncClient(f"http://localhost:{port}")
154
155 def _inner_health(self):
156 raise NotImplementedError
157
158 async def health(self, timeout: int = 60):
159 assert timeout > 0
160 for _ in range(timeout):
161 if not self._inner_health():
162 raise RuntimeError("Launcher crashed")
163
164 try:
165 await self.client.generate("test")
166 return
167 except (ClientConnectorError, ClientOSError, ServerDisconnectedError) as e:
168 time.sleep(1)
169 raise RuntimeError("Health check failed")
170
171
172 class ContainerLauncherHandle(LauncherHandle):
173 def __init__(self, docker_client, container_name, port: int):
174 super(ContainerLauncherHandle, self).__init__(port)
175 self.docker_client = docker_client
176 self.container_name = container_name
177
178 def _inner_health(self) -> bool:
179 container = self.docker_client.containers.get(self.container_name)
180 return container.status in ["running", "created"]
181
182
183 class ProcessLauncherHandle(LauncherHandle):
184 def __init__(self, process, port: int):
185 super(ProcessLauncherHandle, self).__init__(port)
186 self.process = process
187
188 def _inner_health(self) -> bool:
189 return self.process.poll() is None
190
191
192 @pytest.fixture
193 def response_snapshot(snapshot):
194 return snapshot.use_extension(ResponseComparator)
195
196
197 @pytest.fixture(scope="module")
198 def event_loop():
199 loop = asyncio.get_event_loop()
200 yield loop
201 loop.close()
202
203
204 @pytest.fixture(scope="module")
205 def launcher(event_loop):
206 @contextlib.contextmanager
207 def local_launcher(
208 model_id: str,
209 num_shard: Optional[int] = None,
210 quantize: Optional[str] = None,
211 trust_remote_code: bool = False,
212 use_flash_attention: bool = True,
213 ):
214 port = random.randint(8000, 10_000)
215 master_port = random.randint(10_000, 20_000)
216
217 shard_uds_path = (
218 f"/tmp/tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}-server"
219 )
220
221 args = [
222 "text-generation-launcher",
223 "--model-id",
224 model_id,
225 "--port",
226 str(port),
227 "--master-port",
228 str(master_port),
229 "--shard-uds-path",
230 shard_uds_path,
231 ]
232
233 env = os.environ
234
235 if num_shard is not None:
236 args.extend(["--num-shard", str(num_shard)])
237 if quantize is not None:
238 args.append("--quantize")
239 args.append(quantize)
240 if trust_remote_code:
241 args.append("--trust-remote-code")
242
243 env["LOG_LEVEL"] = "info,text_generation_router=debug"
244
245 if not use_flash_attention:
246 env["USE_FLASH_ATTENTION"] = "false"
247
248 with subprocess.Popen(
249 args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
250 ) as process:
251 yield ProcessLauncherHandle(process, port)
252
253 process.terminate()
254 process.wait(60)
255
256 launcher_output = process.stdout.read().decode("utf-8")
257 print(launcher_output, file=sys.stderr)
258
259 process.stdout.close()
260 process.stderr.close()
261
262 if not use_flash_attention:
263 del env["USE_FLASH_ATTENTION"]
264
265 @contextlib.contextmanager
266 def docker_launcher(
267 model_id: str,
268 num_shard: Optional[int] = None,
269 quantize: Optional[str] = None,
270 trust_remote_code: bool = False,
271 use_flash_attention: bool = True,
272 ):
273 port = random.randint(8000, 10_000)
274
275 args = ["--model-id", model_id, "--env"]
276
277 if num_shard is not None:
278 args.extend(["--num-shard", str(num_shard)])
279 if quantize is not None:
280 args.append("--quantize")
281 args.append(quantize)
282 if trust_remote_code:
283 args.append("--trust-remote-code")
284
285 client = docker.from_env()
286
287 container_name = f"tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}"
288
289 try:
290 container = client.containers.get(container_name)
291 container.stop()
292 container.wait()
293 except NotFound:
294 pass
295
296 gpu_count = num_shard if num_shard is not None else 1
297
298 env = {"LOG_LEVEL": "info,text_generation_router=debug"}
299 if not use_flash_attention:
300 env["USE_FLASH_ATTENTION"] = "false"
301
302 if HUGGING_FACE_HUB_TOKEN is not None:
303 env["HUGGING_FACE_HUB_TOKEN"] = HUGGING_FACE_HUB_TOKEN
304
305 volumes = []
306 if DOCKER_VOLUME:
307 volumes = [f"{DOCKER_VOLUME}:/data"]
308
309 container = client.containers.run(
310 DOCKER_IMAGE,
311 command=args,
312 name=container_name,
313 environment=env,
314 auto_remove=False,
315 detach=True,
316 device_requests=[
317 docker.types.DeviceRequest(count=gpu_count, capabilities=[["gpu"]])
318 ],
319 volumes=volumes,
320 ports={"80/tcp": port},
321 )
322
323 yield ContainerLauncherHandle(client, container.name, port)
324
325 if not use_flash_attention:
326 del env["USE_FLASH_ATTENTION"]
327
328 try:
329 container.stop()
330 container.wait()
331 except NotFound:
332 pass
333
334 container_output = container.logs().decode("utf-8")
335 print(container_output, file=sys.stderr)
336
337 container.remove()
338
339 if DOCKER_IMAGE is not None:
340 return docker_launcher
341 return local_launcher
342
343
344 @pytest.fixture(scope="module")
345 def generate_load():
346 async def generate_load_inner(
347 client: AsyncClient, prompt: str, max_new_tokens: int, n: int
348 ) -> List[Response]:
349 futures = [
350 client.generate(
351 prompt, max_new_tokens=max_new_tokens, decoder_input_details=True
352 )
353 for _ in range(n)
354 ]
355
356 return await asyncio.gather(*futures)
357
358 return generate_load_inner
359
[end of integration-tests/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/integration-tests/conftest.py b/integration-tests/conftest.py
--- a/integration-tests/conftest.py
+++ b/integration-tests/conftest.py
@@ -318,6 +318,7 @@
],
volumes=volumes,
ports={"80/tcp": port},
+ shm_size="1G"
)
yield ContainerLauncherHandle(client, container.name, port)
|
{"golden_diff": "diff --git a/integration-tests/conftest.py b/integration-tests/conftest.py\n--- a/integration-tests/conftest.py\n+++ b/integration-tests/conftest.py\n@@ -318,6 +318,7 @@\n ],\n volumes=volumes,\n ports={\"80/tcp\": port},\n+ shm_size=\"1G\"\n )\n \n yield ContainerLauncherHandle(client, container.name, port)\n", "issue": "Update Docker to torch 2.1?\n### Feature request\n\nH100s have trouble with gptq quants due to not having latest pytorch, can in the next TGI Docker we update torch to this, or have one special for this for use on h100s? \n\n### Motivation\n\nCant get tgi + gptq quant to work on h100s\n\n### Your contribution\n\nSorry I dont have any contribution ^_^ \n", "before_files": [{"content": "import sys\nimport subprocess\nimport contextlib\nimport pytest\nimport asyncio\nimport os\nimport docker\nimport json\nimport math\nimport time\nimport random\n\nfrom docker.errors import NotFound\nfrom typing import Optional, List, Dict\nfrom syrupy.extensions.json import JSONSnapshotExtension\nfrom aiohttp import ClientConnectorError, ClientOSError, ServerDisconnectedError\n\nfrom text_generation import AsyncClient\nfrom text_generation.types import Response, Details, InputToken, Token, BestOfSequence\n\nDOCKER_IMAGE = os.getenv(\"DOCKER_IMAGE\", None)\nHUGGING_FACE_HUB_TOKEN = os.getenv(\"HUGGING_FACE_HUB_TOKEN\", None)\nDOCKER_VOLUME = os.getenv(\"DOCKER_VOLUME\", \"/data\")\n\n\nclass ResponseComparator(JSONSnapshotExtension):\n def serialize(\n self,\n data,\n *,\n exclude=None,\n matcher=None,\n ):\n if isinstance(data, List):\n data = [d.dict() for d in data]\n\n data = self._filter(\n data=data, depth=0, path=(), exclude=exclude, matcher=matcher\n )\n return json.dumps(data, indent=2, ensure_ascii=False, sort_keys=False) + \"\\n\"\n\n def matches(\n self,\n *,\n serialized_data,\n snapshot_data,\n ) -> bool:\n def convert_data(data):\n data = json.loads(data)\n\n if isinstance(data, Dict):\n return Response(**data)\n if isinstance(data, List):\n return [Response(**d) for d in data]\n raise NotImplementedError\n\n def eq_token(token: Token, other: Token) -> bool:\n return (\n token.id == other.id\n and token.text == other.text\n and math.isclose(token.logprob, other.logprob, rel_tol=0.2)\n and token.special == other.special\n )\n\n def eq_prefill_token(prefill_token: InputToken, other: InputToken) -> bool:\n try:\n return (\n prefill_token.id == other.id\n and prefill_token.text == other.text\n and (\n math.isclose(prefill_token.logprob, other.logprob, rel_tol=0.2)\n if prefill_token.logprob is not None\n else prefill_token.logprob == other.logprob\n )\n )\n except TypeError:\n return False\n\n def eq_best_of(details: BestOfSequence, other: BestOfSequence) -> bool:\n return (\n details.finish_reason == other.finish_reason\n and details.generated_tokens == other.generated_tokens\n and details.seed == other.seed\n and len(details.prefill) == len(other.prefill)\n and all(\n [\n eq_prefill_token(d, o)\n for d, o in zip(details.prefill, other.prefill)\n ]\n )\n and len(details.tokens) == len(other.tokens)\n and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)])\n )\n\n def eq_details(details: Details, other: Details) -> bool:\n return (\n details.finish_reason == other.finish_reason\n and details.generated_tokens == other.generated_tokens\n and details.seed == other.seed\n and len(details.prefill) == len(other.prefill)\n and all(\n [\n eq_prefill_token(d, o)\n for d, o in zip(details.prefill, other.prefill)\n ]\n )\n and len(details.tokens) == 
len(other.tokens)\n and all([eq_token(d, o) for d, o in zip(details.tokens, other.tokens)])\n and (\n len(details.best_of_sequences)\n if details.best_of_sequences is not None\n else 0\n )\n == (\n len(other.best_of_sequences)\n if other.best_of_sequences is not None\n else 0\n )\n and (\n all(\n [\n eq_best_of(d, o)\n for d, o in zip(\n details.best_of_sequences, other.best_of_sequences\n )\n ]\n )\n if details.best_of_sequences is not None\n else details.best_of_sequences == other.best_of_sequences\n )\n )\n\n def eq_response(response: Response, other: Response) -> bool:\n return response.generated_text == other.generated_text and eq_details(\n response.details, other.details\n )\n\n serialized_data = convert_data(serialized_data)\n snapshot_data = convert_data(snapshot_data)\n\n if not isinstance(serialized_data, List):\n serialized_data = [serialized_data]\n if not isinstance(snapshot_data, List):\n snapshot_data = [snapshot_data]\n\n return len(snapshot_data) == len(serialized_data) and all(\n [eq_response(r, o) for r, o in zip(serialized_data, snapshot_data)]\n )\n\n\nclass LauncherHandle:\n def __init__(self, port: int):\n self.client = AsyncClient(f\"http://localhost:{port}\")\n\n def _inner_health(self):\n raise NotImplementedError\n\n async def health(self, timeout: int = 60):\n assert timeout > 0\n for _ in range(timeout):\n if not self._inner_health():\n raise RuntimeError(\"Launcher crashed\")\n\n try:\n await self.client.generate(\"test\")\n return\n except (ClientConnectorError, ClientOSError, ServerDisconnectedError) as e:\n time.sleep(1)\n raise RuntimeError(\"Health check failed\")\n\n\nclass ContainerLauncherHandle(LauncherHandle):\n def __init__(self, docker_client, container_name, port: int):\n super(ContainerLauncherHandle, self).__init__(port)\n self.docker_client = docker_client\n self.container_name = container_name\n\n def _inner_health(self) -> bool:\n container = self.docker_client.containers.get(self.container_name)\n return container.status in [\"running\", \"created\"]\n\n\nclass ProcessLauncherHandle(LauncherHandle):\n def __init__(self, process, port: int):\n super(ProcessLauncherHandle, self).__init__(port)\n self.process = process\n\n def _inner_health(self) -> bool:\n return self.process.poll() is None\n\n\[email protected]\ndef response_snapshot(snapshot):\n return snapshot.use_extension(ResponseComparator)\n\n\[email protected](scope=\"module\")\ndef event_loop():\n loop = asyncio.get_event_loop()\n yield loop\n loop.close()\n\n\[email protected](scope=\"module\")\ndef launcher(event_loop):\n @contextlib.contextmanager\n def local_launcher(\n model_id: str,\n num_shard: Optional[int] = None,\n quantize: Optional[str] = None,\n trust_remote_code: bool = False,\n use_flash_attention: bool = True,\n ):\n port = random.randint(8000, 10_000)\n master_port = random.randint(10_000, 20_000)\n\n shard_uds_path = (\n f\"/tmp/tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}-server\"\n )\n\n args = [\n \"text-generation-launcher\",\n \"--model-id\",\n model_id,\n \"--port\",\n str(port),\n \"--master-port\",\n str(master_port),\n \"--shard-uds-path\",\n shard_uds_path,\n ]\n\n env = os.environ\n\n if num_shard is not None:\n args.extend([\"--num-shard\", str(num_shard)])\n if quantize is not None:\n args.append(\"--quantize\")\n args.append(quantize)\n if trust_remote_code:\n args.append(\"--trust-remote-code\")\n\n env[\"LOG_LEVEL\"] = \"info,text_generation_router=debug\"\n\n if not use_flash_attention:\n env[\"USE_FLASH_ATTENTION\"] = \"false\"\n\n 
with subprocess.Popen(\n args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env\n ) as process:\n yield ProcessLauncherHandle(process, port)\n\n process.terminate()\n process.wait(60)\n\n launcher_output = process.stdout.read().decode(\"utf-8\")\n print(launcher_output, file=sys.stderr)\n\n process.stdout.close()\n process.stderr.close()\n\n if not use_flash_attention:\n del env[\"USE_FLASH_ATTENTION\"]\n\n @contextlib.contextmanager\n def docker_launcher(\n model_id: str,\n num_shard: Optional[int] = None,\n quantize: Optional[str] = None,\n trust_remote_code: bool = False,\n use_flash_attention: bool = True,\n ):\n port = random.randint(8000, 10_000)\n\n args = [\"--model-id\", model_id, \"--env\"]\n\n if num_shard is not None:\n args.extend([\"--num-shard\", str(num_shard)])\n if quantize is not None:\n args.append(\"--quantize\")\n args.append(quantize)\n if trust_remote_code:\n args.append(\"--trust-remote-code\")\n\n client = docker.from_env()\n\n container_name = f\"tgi-tests-{model_id.split('/')[-1]}-{num_shard}-{quantize}\"\n\n try:\n container = client.containers.get(container_name)\n container.stop()\n container.wait()\n except NotFound:\n pass\n\n gpu_count = num_shard if num_shard is not None else 1\n\n env = {\"LOG_LEVEL\": \"info,text_generation_router=debug\"}\n if not use_flash_attention:\n env[\"USE_FLASH_ATTENTION\"] = \"false\"\n\n if HUGGING_FACE_HUB_TOKEN is not None:\n env[\"HUGGING_FACE_HUB_TOKEN\"] = HUGGING_FACE_HUB_TOKEN\n\n volumes = []\n if DOCKER_VOLUME:\n volumes = [f\"{DOCKER_VOLUME}:/data\"]\n\n container = client.containers.run(\n DOCKER_IMAGE,\n command=args,\n name=container_name,\n environment=env,\n auto_remove=False,\n detach=True,\n device_requests=[\n docker.types.DeviceRequest(count=gpu_count, capabilities=[[\"gpu\"]])\n ],\n volumes=volumes,\n ports={\"80/tcp\": port},\n )\n\n yield ContainerLauncherHandle(client, container.name, port)\n\n if not use_flash_attention:\n del env[\"USE_FLASH_ATTENTION\"]\n\n try:\n container.stop()\n container.wait()\n except NotFound:\n pass\n\n container_output = container.logs().decode(\"utf-8\")\n print(container_output, file=sys.stderr)\n\n container.remove()\n\n if DOCKER_IMAGE is not None:\n return docker_launcher\n return local_launcher\n\n\[email protected](scope=\"module\")\ndef generate_load():\n async def generate_load_inner(\n client: AsyncClient, prompt: str, max_new_tokens: int, n: int\n ) -> List[Response]:\n futures = [\n client.generate(\n prompt, max_new_tokens=max_new_tokens, decoder_input_details=True\n )\n for _ in range(n)\n ]\n\n return await asyncio.gather(*futures)\n\n return generate_load_inner\n", "path": "integration-tests/conftest.py"}]}
| 3,953 | 93 |
gh_patches_debug_9140
|
rasdani/github-patches
|
git_diff
|
Lightning-AI__torchmetrics-1155
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
notation typo in Cosine Similarity docs
## 📚 Documentation
There is a typo in the notation for the [pairwise_cosine_similarity](https://torchmetrics.readthedocs.io/en/stable/pairwise/cosine_similarity.html)

</issue>
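As a quick numeric sanity check of the typo described above (a standalone sketch, not code taken from the torchmetrics repository; it only assumes PyTorch is installed), the denominator of cosine similarity has to combine the norms of *both* inputs. Reusing `x` twice, as the current docstring notation does, no longer reproduces the example values shown further down in the file:

```python
import torch

# First row of the docstring example: x = [2, 3], y = [1, 0]
x = torch.tensor([2.0, 3.0])
y = torch.tensor([1.0, 0.0])

correct = (x @ y) / (x.norm() * y.norm())  # denominator uses ||x|| * ||y||
typo = (x @ y) / (x.norm() * x.norm())     # what the current docstring literally writes

print(round(correct.item(), 4))  # 0.5547 -- matches the docstring's example output
print(round(typo.item(), 4))     # 0.1538 -- does not
```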
<code>
[start of src/torchmetrics/functional/pairwise/cosine.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Optional
15
16 import torch
17 from torch import Tensor
18 from typing_extensions import Literal
19
20 from torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix
21 from torchmetrics.utilities.compute import _safe_matmul
22
23
24 def _pairwise_cosine_similarity_update(
25 x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None
26 ) -> Tensor:
27 """Calculates the pairwise cosine similarity matrix.
28
29 Args:
30 x: tensor of shape ``[N,d]``
31 y: tensor of shape ``[M,d]``
32 zero_diagonal: determines if the diagonal of the distance matrix should be set to zero
33 """
34 x, y, zero_diagonal = _check_input(x, y, zero_diagonal)
35
36 norm = torch.norm(x, p=2, dim=1)
37 x /= norm.unsqueeze(1)
38 norm = torch.norm(y, p=2, dim=1)
39 y /= norm.unsqueeze(1)
40
41 distance = _safe_matmul(x, y)
42 if zero_diagonal:
43 distance.fill_diagonal_(0)
44 return distance
45
46
47 def pairwise_cosine_similarity(
48 x: Tensor,
49 y: Optional[Tensor] = None,
50 reduction: Literal["mean", "sum", "none", None] = None,
51 zero_diagonal: Optional[bool] = None,
52 ) -> Tensor:
53 r"""Calculates pairwise cosine similarity:
54
55 .. math::
56 s_{cos}(x,y) = \frac{<x,y>}{||x|| \cdot ||y||}
57 = \frac{\sum_{d=1}^D x_d \cdot y_d }{\sqrt{\sum_{d=1}^D x_i^2} \cdot \sqrt{\sum_{d=1}^D x_i^2}}
58
59 If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise
60 between the rows of :math:`x` and :math:`y`.
61 If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.
62
63 Args:
64 x: Tensor with shape ``[N, d]``
65 y: Tensor with shape ``[M, d]``, optional
66 reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`
67 (applied along column dimension) or `'none'`, `None` for no reduction
68 zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only :math:`x` is given
69 this defaults to ``True`` else if :math:`y` is also given it defaults to ``False``
70
71 Returns:
72 A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix
73
74 Example:
75 >>> import torch
76 >>> from torchmetrics.functional import pairwise_cosine_similarity
77 >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
78 >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)
79 >>> pairwise_cosine_similarity(x, y)
80 tensor([[0.5547, 0.8682],
81 [0.5145, 0.8437],
82 [0.5300, 0.8533]])
83 >>> pairwise_cosine_similarity(x)
84 tensor([[0.0000, 0.9989, 0.9996],
85 [0.9989, 0.0000, 0.9998],
86 [0.9996, 0.9998, 0.0000]])
87
88 """
89 distance = _pairwise_cosine_similarity_update(x, y, zero_diagonal)
90 return _reduce_distance_matrix(distance, reduction)
91
[end of src/torchmetrics/functional/pairwise/cosine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/torchmetrics/functional/pairwise/cosine.py b/src/torchmetrics/functional/pairwise/cosine.py
--- a/src/torchmetrics/functional/pairwise/cosine.py
+++ b/src/torchmetrics/functional/pairwise/cosine.py
@@ -54,7 +54,7 @@
.. math::
s_{cos}(x,y) = \frac{<x,y>}{||x|| \cdot ||y||}
- = \frac{\sum_{d=1}^D x_d \cdot y_d }{\sqrt{\sum_{d=1}^D x_i^2} \cdot \sqrt{\sum_{d=1}^D x_i^2}}
+ = \frac{\sum_{d=1}^D x_d \cdot y_d }{\sqrt{\sum_{d=1}^D x_i^2} \cdot \sqrt{\sum_{d=1}^D y_i^2}}
If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise
between the rows of :math:`x` and :math:`y`.
|
{"golden_diff": "diff --git a/src/torchmetrics/functional/pairwise/cosine.py b/src/torchmetrics/functional/pairwise/cosine.py\n--- a/src/torchmetrics/functional/pairwise/cosine.py\n+++ b/src/torchmetrics/functional/pairwise/cosine.py\n@@ -54,7 +54,7 @@\n \n .. math::\n s_{cos}(x,y) = \\frac{<x,y>}{||x|| \\cdot ||y||}\n- = \\frac{\\sum_{d=1}^D x_d \\cdot y_d }{\\sqrt{\\sum_{d=1}^D x_i^2} \\cdot \\sqrt{\\sum_{d=1}^D x_i^2}}\n+ = \\frac{\\sum_{d=1}^D x_d \\cdot y_d }{\\sqrt{\\sum_{d=1}^D x_i^2} \\cdot \\sqrt{\\sum_{d=1}^D y_i^2}}\n \n If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise\n between the rows of :math:`x` and :math:`y`.\n", "issue": "notation typo in Cosine Similarity docs \n## \ud83d\udcda Documentation\r\n\r\nThere is a typo in the notation for the [pairwise_cosine_similarity](https://torchmetrics.readthedocs.io/en/stable/pairwise/cosine_similarity.html)\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nimport torch\nfrom torch import Tensor\nfrom typing_extensions import Literal\n\nfrom torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix\nfrom torchmetrics.utilities.compute import _safe_matmul\n\n\ndef _pairwise_cosine_similarity_update(\n x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None\n) -> Tensor:\n \"\"\"Calculates the pairwise cosine similarity matrix.\n\n Args:\n x: tensor of shape ``[N,d]``\n y: tensor of shape ``[M,d]``\n zero_diagonal: determines if the diagonal of the distance matrix should be set to zero\n \"\"\"\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n\n norm = torch.norm(x, p=2, dim=1)\n x /= norm.unsqueeze(1)\n norm = torch.norm(y, p=2, dim=1)\n y /= norm.unsqueeze(1)\n\n distance = _safe_matmul(x, y)\n if zero_diagonal:\n distance.fill_diagonal_(0)\n return distance\n\n\ndef pairwise_cosine_similarity(\n x: Tensor,\n y: Optional[Tensor] = None,\n reduction: Literal[\"mean\", \"sum\", \"none\", None] = None,\n zero_diagonal: Optional[bool] = None,\n) -> Tensor:\n r\"\"\"Calculates pairwise cosine similarity:\n\n .. math::\n s_{cos}(x,y) = \\frac{<x,y>}{||x|| \\cdot ||y||}\n = \\frac{\\sum_{d=1}^D x_d \\cdot y_d }{\\sqrt{\\sum_{d=1}^D x_i^2} \\cdot \\sqrt{\\sum_{d=1}^D x_i^2}}\n\n If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise\n between the rows of :math:`x` and :math:`y`.\n If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.\n\n Args:\n x: Tensor with shape ``[N, d]``\n y: Tensor with shape ``[M, d]``, optional\n reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`\n (applied along column dimension) or `'none'`, `None` for no reduction\n zero_diagonal: if the diagonal of the distance matrix should be set to 0. 
If only :math:`x` is given\n this defaults to ``True`` else if :math:`y` is also given it defaults to ``False``\n\n Returns:\n A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix\n\n Example:\n >>> import torch\n >>> from torchmetrics.functional import pairwise_cosine_similarity\n >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)\n >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)\n >>> pairwise_cosine_similarity(x, y)\n tensor([[0.5547, 0.8682],\n [0.5145, 0.8437],\n [0.5300, 0.8533]])\n >>> pairwise_cosine_similarity(x)\n tensor([[0.0000, 0.9989, 0.9996],\n [0.9989, 0.0000, 0.9998],\n [0.9996, 0.9998, 0.0000]])\n\n \"\"\"\n distance = _pairwise_cosine_similarity_update(x, y, zero_diagonal)\n return _reduce_distance_matrix(distance, reduction)\n", "path": "src/torchmetrics/functional/pairwise/cosine.py"}]}
| 1,848 | 259 |
gh_patches_debug_16473
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-2115
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
When "streamlit run" file doesn't exist and has no extension, error printout is weird
1. Create a Python file and call it `example` (without .py)
2. `streamlit run example`
Here's what you get:
<img width="400px" src="https://user-images.githubusercontent.com/690814/95294472-307bcb00-082a-11eb-86b0-37c2a1335988.png" />
**This error message is not a valid sentence: "Streamlit requires raw Python (.py) files, not ."**
What's happening is that the code is trying to write the file extension in the error message, but in this case the file has no extension.
We should instead say something like "Streamlit requires raw Python (.py) files, and the provided file has no extension."
</issue>
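A minimal reproduction of the underlying behaviour (illustrative only, not code from the Streamlit repository): `os.path.splitext` returns an empty string for a filename with no extension, so the interpolated error message collapses to "not .":

```python
import os

# A file named "example" with no extension, as in the reproduction steps above
_, extension = os.path.splitext("example")
print(repr(extension))  # ''

# The current error string interpolates that empty extension verbatim
print("Streamlit requires raw Python (.py) files, not %s." % extension)
# -> Streamlit requires raw Python (.py) files, not .
```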
<code>
[start of lib/streamlit/cli.py]
1 # Copyright 2018-2020 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """This is a script which is run when the Streamlit package is executed."""
16
17 from streamlit import config as _config
18
19 import os
20 import re
21 from typing import Optional
22
23 import click
24
25 import streamlit
26 from streamlit.credentials import Credentials, check_credentials
27 from streamlit import version
28 import streamlit.bootstrap as bootstrap
29 from streamlit.case_converters import to_snake_case
30
31 ACCEPTED_FILE_EXTENSIONS = ("py", "py3")
32
33 LOG_LEVELS = ("error", "warning", "info", "debug")
34
35 NEW_VERSION_TEXT = """
36 %(new_version)s
37
38 See what's new at https://discuss.streamlit.io/c/announcements
39
40 Enter the following command to upgrade:
41 %(prompt)s %(command)s
42 """ % {
43 "new_version": click.style(
44 "A new version of Streamlit is available.", fg="blue", bold=True
45 ),
46 "prompt": click.style("$", fg="blue"),
47 "command": click.style("pip install streamlit --upgrade", bold=True),
48 }
49
50
51 def _convert_config_option_to_click_option(config_option):
52 """Composes given config option options as options for click lib."""
53 option = "--{}".format(config_option.key)
54 param = config_option.key.replace(".", "_")
55 description = config_option.description
56 if config_option.deprecated:
57 description += "\n {} - {}".format(
58 config_option.deprecation_text, config_option.expiration_date
59 )
60 envvar = "STREAMLIT_{}".format(to_snake_case(param).upper())
61
62 return {
63 "param": param,
64 "description": description,
65 "type": config_option.type,
66 "option": option,
67 "envvar": envvar,
68 }
69
70
71 def configurator_options(func):
72 """Decorator that adds config param keys to click dynamically."""
73 for _, value in reversed(_config._config_options.items()):
74 parsed_parameter = _convert_config_option_to_click_option(value)
75 config_option = click.option(
76 parsed_parameter["option"],
77 parsed_parameter["param"],
78 help=parsed_parameter["description"],
79 type=parsed_parameter["type"],
80 show_envvar=True,
81 envvar=parsed_parameter["envvar"],
82 )
83 func = config_option(func)
84 return func
85
86
87 def _apply_config_options_from_cli(kwargs):
88 """The "streamlit run" command supports passing Streamlit's config options
89 as flags.
90
91 This function reads through all config flags, massage them, and
92 pass them to _set_config() overriding default values and values set via
93 config.toml file
94
95 """
96 # Parse config files first before setting CLI args.
97 # Prevents CLI args from being overwritten
98 if not _config._config_file_has_been_parsed:
99 _config.parse_config_file()
100
101 for config_option in kwargs:
102 if kwargs[config_option] is not None:
103 config_option_def_key = config_option.replace("_", ".")
104 _config._set_option(
105 config_option_def_key,
106 kwargs[config_option],
107 "command-line argument or environment variable",
108 )
109
110 _config._on_config_parsed.send()
111
112
113 # Fetch remote file at url_path to script_path
114 def _download_remote(script_path, url_path):
115 import requests
116
117 with open(script_path, "wb") as fp:
118 try:
119 resp = requests.get(url_path)
120 resp.raise_for_status()
121 fp.write(resp.content)
122 except requests.exceptions.RequestException as e:
123 raise click.BadParameter(("Unable to fetch {}.\n{}".format(url_path, e)))
124
125
126 @click.group(context_settings={"auto_envvar_prefix": "STREAMLIT"})
127 @click.option("--log_level", show_default=True, type=click.Choice(LOG_LEVELS))
128 @click.version_option(prog_name="Streamlit")
129 @click.pass_context
130 def main(ctx, log_level="info"):
131 """Try out a demo with:
132
133 $ streamlit hello
134
135 Or use the line below to run your own script:
136
137 $ streamlit run your_script.py
138 """
139
140 if log_level:
141 import streamlit.logger
142
143 streamlit.logger.set_log_level(log_level.upper())
144
145
146 @main.command("help")
147 @click.pass_context
148 def help(ctx):
149 """Print this help message."""
150 # Pretend user typed 'streamlit --help' instead of 'streamlit help'.
151 import sys
152
153 assert len(sys.argv) == 2 # This is always true, but let's assert anyway.
154 sys.argv[1] = "--help"
155 main()
156
157
158 @main.command("version")
159 @click.pass_context
160 def main_version(ctx):
161 """Print Streamlit's version number."""
162 # Pretend user typed 'streamlit --version' instead of 'streamlit version'
163 import sys
164
165 assert len(sys.argv) == 2 # This is always true, but let's assert anyway.
166 sys.argv[1] = "--version"
167 main()
168
169
170 @main.command("docs")
171 def main_docs():
172 """Show help in browser."""
173 print("Showing help page in browser...")
174 from streamlit import util
175
176 util.open_browser("https://docs.streamlit.io")
177
178
179 @main.command("hello")
180 @configurator_options
181 def main_hello(**kwargs):
182 """Runs the Hello World script."""
183 from streamlit.hello import hello
184
185 _apply_config_options_from_cli(kwargs)
186 filename = hello.__file__
187 _main_run(filename)
188
189
190 @main.command("run")
191 @configurator_options
192 @click.argument("target", required=True, envvar="STREAMLIT_RUN_TARGET")
193 @click.argument("args", nargs=-1)
194 def main_run(target, args=None, **kwargs):
195 """Run a Python script, piping stderr to Streamlit.
196
197 The script can be local or it can be an url. In the latter case, Streamlit
198 will download the script to a temporary file and runs this file.
199
200 """
201 from validators import url
202
203 _apply_config_options_from_cli(kwargs)
204
205 _, extension = os.path.splitext(target)
206 if extension[1:] not in ACCEPTED_FILE_EXTENSIONS:
207 raise click.BadArgumentUsage(
208 "Streamlit requires raw Python (.py) files, not %s.\nFor more information, please see https://docs.streamlit.io"
209 % extension
210 )
211
212 if url(target):
213 from streamlit.temporary_directory import TemporaryDirectory
214
215 with TemporaryDirectory() as temp_dir:
216 from urllib.parse import urlparse
217 from streamlit import url_util
218
219 path = urlparse(target).path
220 script_path = os.path.join(temp_dir, path.strip("/").rsplit("/", 1)[-1])
221 # if this is a GitHub/Gist blob url, convert to a raw URL first.
222 target = url_util.process_gitblob_url(target)
223 _download_remote(script_path, target)
224 _main_run(script_path, args)
225 else:
226 if not os.path.exists(target):
227 raise click.BadParameter("File does not exist: {}".format(target))
228 _main_run(target, args)
229
230
231 # Utility function to compute the command line as a string
232 def _get_command_line_as_string() -> Optional[str]:
233 import subprocess
234
235 parent = click.get_current_context().parent
236 if parent is None:
237 return None
238 cmd_line_as_list = [parent.command_path]
239 cmd_line_as_list.extend(click.get_os_args())
240 return subprocess.list2cmdline(cmd_line_as_list)
241
242
243 def _main_run(file, args=[]):
244 command_line = _get_command_line_as_string()
245
246 # Set a global flag indicating that we're "within" streamlit.
247 streamlit._is_running_with_streamlit = True
248
249 # Check credentials.
250 check_credentials()
251
252 # Notify if streamlit is out of date.
253 if version.should_show_new_version_notice():
254 click.echo(NEW_VERSION_TEXT)
255
256 bootstrap.run(file, command_line, args)
257
258
259 # SUBCOMMAND: cache
260
261
262 @main.group("cache")
263 def cache():
264 """Manage the Streamlit cache."""
265 pass
266
267
268 @cache.command("clear")
269 def cache_clear():
270 """Clear the Streamlit on-disk cache."""
271 import streamlit.caching
272
273 result = streamlit.caching.clear_cache()
274 cache_path = streamlit.caching.get_cache_path()
275 if result:
276 print("Cleared directory %s." % cache_path)
277 else:
278 print("Nothing to clear at %s." % cache_path)
279
280
281 # SUBCOMMAND: config
282
283
284 @main.group("config")
285 def config():
286 """Manage Streamlit's config settings."""
287 pass
288
289
290 @config.command("show")
291 @configurator_options
292 def config_show(**kwargs):
293 """Show all of Streamlit's config settings."""
294
295 _apply_config_options_from_cli(kwargs)
296
297 _config.show_config()
298
299
300 # SUBCOMMAND: activate
301
302
303 @main.group("activate", invoke_without_command=True)
304 @click.pass_context
305 def activate(ctx):
306 """Activate Streamlit by entering your email."""
307 if not ctx.invoked_subcommand:
308 Credentials.get_current().activate()
309
310
311 @activate.command("reset")
312 def activate_reset():
313 """Reset Activation Credentials."""
314 Credentials.get_current().reset()
315
316
317 if __name__ == "__main__":
318 main()
319
[end of lib/streamlit/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/streamlit/cli.py b/lib/streamlit/cli.py
--- a/lib/streamlit/cli.py
+++ b/lib/streamlit/cli.py
@@ -204,10 +204,15 @@
_, extension = os.path.splitext(target)
if extension[1:] not in ACCEPTED_FILE_EXTENSIONS:
- raise click.BadArgumentUsage(
- "Streamlit requires raw Python (.py) files, not %s.\nFor more information, please see https://docs.streamlit.io"
- % extension
- )
+ if extension[1:] == "":
+ raise click.BadArgumentUsage(
+ "Streamlit requires raw Python (.py) files, but the provided file has no extension.\nFor more information, please see https://docs.streamlit.io"
+ )
+ else:
+ raise click.BadArgumentUsage(
+ "Streamlit requires raw Python (.py) files, not %s.\nFor more information, please see https://docs.streamlit.io"
+ % extension
+ )
if url(target):
from streamlit.temporary_directory import TemporaryDirectory
|
{"golden_diff": "diff --git a/lib/streamlit/cli.py b/lib/streamlit/cli.py\n--- a/lib/streamlit/cli.py\n+++ b/lib/streamlit/cli.py\n@@ -204,10 +204,15 @@\n \n _, extension = os.path.splitext(target)\n if extension[1:] not in ACCEPTED_FILE_EXTENSIONS:\n- raise click.BadArgumentUsage(\n- \"Streamlit requires raw Python (.py) files, not %s.\\nFor more information, please see https://docs.streamlit.io\"\n- % extension\n- )\n+ if extension[1:] == \"\":\n+ raise click.BadArgumentUsage(\n+ \"Streamlit requires raw Python (.py) files, but the provided file has no extension.\\nFor more information, please see https://docs.streamlit.io\"\n+ )\n+ else: \n+ raise click.BadArgumentUsage(\n+ \"Streamlit requires raw Python (.py) files, not %s.\\nFor more information, please see https://docs.streamlit.io\"\n+ % extension\n+ )\n \n if url(target):\n from streamlit.temporary_directory import TemporaryDirectory\n", "issue": "When \"streamlit run\" file doesn't exit and has no extension, error printout is weird\n1. Create a Python file and call it `example` (without .py)\r\n2. `streamlit run example`\r\n\r\nHere's what you get:\r\n<img width=\"400px\" src=\"https://user-images.githubusercontent.com/690814/95294472-307bcb00-082a-11eb-86b0-37c2a1335988.png\" />\r\n\r\n**This error message is not a valid sentence: \"Streamlit requires raw Python (.py) files, not .\"**\r\n\r\nWhat's happening is that the code is trying to write the file extension in the error message, but in this case the file has no extension.\r\n\r\nWe should instead say something like \"Streamlit requires raw Python (.py) files, and the provided file has no extension.\"\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018-2020 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This is a script which is run when the Streamlit package is executed.\"\"\"\n\nfrom streamlit import config as _config\n\nimport os\nimport re\nfrom typing import Optional\n\nimport click\n\nimport streamlit\nfrom streamlit.credentials import Credentials, check_credentials\nfrom streamlit import version\nimport streamlit.bootstrap as bootstrap\nfrom streamlit.case_converters import to_snake_case\n\nACCEPTED_FILE_EXTENSIONS = (\"py\", \"py3\")\n\nLOG_LEVELS = (\"error\", \"warning\", \"info\", \"debug\")\n\nNEW_VERSION_TEXT = \"\"\"\n %(new_version)s\n\n See what's new at https://discuss.streamlit.io/c/announcements\n\n Enter the following command to upgrade:\n %(prompt)s %(command)s\n\"\"\" % {\n \"new_version\": click.style(\n \"A new version of Streamlit is available.\", fg=\"blue\", bold=True\n ),\n \"prompt\": click.style(\"$\", fg=\"blue\"),\n \"command\": click.style(\"pip install streamlit --upgrade\", bold=True),\n}\n\n\ndef _convert_config_option_to_click_option(config_option):\n \"\"\"Composes given config option options as options for click lib.\"\"\"\n option = \"--{}\".format(config_option.key)\n param = config_option.key.replace(\".\", \"_\")\n description = config_option.description\n if config_option.deprecated:\n description += 
\"\\n {} - {}\".format(\n config_option.deprecation_text, config_option.expiration_date\n )\n envvar = \"STREAMLIT_{}\".format(to_snake_case(param).upper())\n\n return {\n \"param\": param,\n \"description\": description,\n \"type\": config_option.type,\n \"option\": option,\n \"envvar\": envvar,\n }\n\n\ndef configurator_options(func):\n \"\"\"Decorator that adds config param keys to click dynamically.\"\"\"\n for _, value in reversed(_config._config_options.items()):\n parsed_parameter = _convert_config_option_to_click_option(value)\n config_option = click.option(\n parsed_parameter[\"option\"],\n parsed_parameter[\"param\"],\n help=parsed_parameter[\"description\"],\n type=parsed_parameter[\"type\"],\n show_envvar=True,\n envvar=parsed_parameter[\"envvar\"],\n )\n func = config_option(func)\n return func\n\n\ndef _apply_config_options_from_cli(kwargs):\n \"\"\"The \"streamlit run\" command supports passing Streamlit's config options\n as flags.\n\n This function reads through all config flags, massage them, and\n pass them to _set_config() overriding default values and values set via\n config.toml file\n\n \"\"\"\n # Parse config files first before setting CLI args.\n # Prevents CLI args from being overwritten\n if not _config._config_file_has_been_parsed:\n _config.parse_config_file()\n\n for config_option in kwargs:\n if kwargs[config_option] is not None:\n config_option_def_key = config_option.replace(\"_\", \".\")\n _config._set_option(\n config_option_def_key,\n kwargs[config_option],\n \"command-line argument or environment variable\",\n )\n\n _config._on_config_parsed.send()\n\n\n# Fetch remote file at url_path to script_path\ndef _download_remote(script_path, url_path):\n import requests\n\n with open(script_path, \"wb\") as fp:\n try:\n resp = requests.get(url_path)\n resp.raise_for_status()\n fp.write(resp.content)\n except requests.exceptions.RequestException as e:\n raise click.BadParameter((\"Unable to fetch {}.\\n{}\".format(url_path, e)))\n\n\[email protected](context_settings={\"auto_envvar_prefix\": \"STREAMLIT\"})\[email protected](\"--log_level\", show_default=True, type=click.Choice(LOG_LEVELS))\[email protected]_option(prog_name=\"Streamlit\")\[email protected]_context\ndef main(ctx, log_level=\"info\"):\n \"\"\"Try out a demo with:\n\n $ streamlit hello\n\n Or use the line below to run your own script:\n\n $ streamlit run your_script.py\n \"\"\"\n\n if log_level:\n import streamlit.logger\n\n streamlit.logger.set_log_level(log_level.upper())\n\n\[email protected](\"help\")\[email protected]_context\ndef help(ctx):\n \"\"\"Print this help message.\"\"\"\n # Pretend user typed 'streamlit --help' instead of 'streamlit help'.\n import sys\n\n assert len(sys.argv) == 2 # This is always true, but let's assert anyway.\n sys.argv[1] = \"--help\"\n main()\n\n\[email protected](\"version\")\[email protected]_context\ndef main_version(ctx):\n \"\"\"Print Streamlit's version number.\"\"\"\n # Pretend user typed 'streamlit --version' instead of 'streamlit version'\n import sys\n\n assert len(sys.argv) == 2 # This is always true, but let's assert anyway.\n sys.argv[1] = \"--version\"\n main()\n\n\[email protected](\"docs\")\ndef main_docs():\n \"\"\"Show help in browser.\"\"\"\n print(\"Showing help page in browser...\")\n from streamlit import util\n\n util.open_browser(\"https://docs.streamlit.io\")\n\n\[email protected](\"hello\")\n@configurator_options\ndef main_hello(**kwargs):\n \"\"\"Runs the Hello World script.\"\"\"\n from streamlit.hello import hello\n\n 
_apply_config_options_from_cli(kwargs)\n filename = hello.__file__\n _main_run(filename)\n\n\[email protected](\"run\")\n@configurator_options\[email protected](\"target\", required=True, envvar=\"STREAMLIT_RUN_TARGET\")\[email protected](\"args\", nargs=-1)\ndef main_run(target, args=None, **kwargs):\n \"\"\"Run a Python script, piping stderr to Streamlit.\n\n The script can be local or it can be an url. In the latter case, Streamlit\n will download the script to a temporary file and runs this file.\n\n \"\"\"\n from validators import url\n\n _apply_config_options_from_cli(kwargs)\n\n _, extension = os.path.splitext(target)\n if extension[1:] not in ACCEPTED_FILE_EXTENSIONS:\n raise click.BadArgumentUsage(\n \"Streamlit requires raw Python (.py) files, not %s.\\nFor more information, please see https://docs.streamlit.io\"\n % extension\n )\n\n if url(target):\n from streamlit.temporary_directory import TemporaryDirectory\n\n with TemporaryDirectory() as temp_dir:\n from urllib.parse import urlparse\n from streamlit import url_util\n\n path = urlparse(target).path\n script_path = os.path.join(temp_dir, path.strip(\"/\").rsplit(\"/\", 1)[-1])\n # if this is a GitHub/Gist blob url, convert to a raw URL first.\n target = url_util.process_gitblob_url(target)\n _download_remote(script_path, target)\n _main_run(script_path, args)\n else:\n if not os.path.exists(target):\n raise click.BadParameter(\"File does not exist: {}\".format(target))\n _main_run(target, args)\n\n\n# Utility function to compute the command line as a string\ndef _get_command_line_as_string() -> Optional[str]:\n import subprocess\n\n parent = click.get_current_context().parent\n if parent is None:\n return None\n cmd_line_as_list = [parent.command_path]\n cmd_line_as_list.extend(click.get_os_args())\n return subprocess.list2cmdline(cmd_line_as_list)\n\n\ndef _main_run(file, args=[]):\n command_line = _get_command_line_as_string()\n\n # Set a global flag indicating that we're \"within\" streamlit.\n streamlit._is_running_with_streamlit = True\n\n # Check credentials.\n check_credentials()\n\n # Notify if streamlit is out of date.\n if version.should_show_new_version_notice():\n click.echo(NEW_VERSION_TEXT)\n\n bootstrap.run(file, command_line, args)\n\n\n# SUBCOMMAND: cache\n\n\[email protected](\"cache\")\ndef cache():\n \"\"\"Manage the Streamlit cache.\"\"\"\n pass\n\n\[email protected](\"clear\")\ndef cache_clear():\n \"\"\"Clear the Streamlit on-disk cache.\"\"\"\n import streamlit.caching\n\n result = streamlit.caching.clear_cache()\n cache_path = streamlit.caching.get_cache_path()\n if result:\n print(\"Cleared directory %s.\" % cache_path)\n else:\n print(\"Nothing to clear at %s.\" % cache_path)\n\n\n# SUBCOMMAND: config\n\n\[email protected](\"config\")\ndef config():\n \"\"\"Manage Streamlit's config settings.\"\"\"\n pass\n\n\[email protected](\"show\")\n@configurator_options\ndef config_show(**kwargs):\n \"\"\"Show all of Streamlit's config settings.\"\"\"\n\n _apply_config_options_from_cli(kwargs)\n\n _config.show_config()\n\n\n# SUBCOMMAND: activate\n\n\[email protected](\"activate\", invoke_without_command=True)\[email protected]_context\ndef activate(ctx):\n \"\"\"Activate Streamlit by entering your email.\"\"\"\n if not ctx.invoked_subcommand:\n Credentials.get_current().activate()\n\n\[email protected](\"reset\")\ndef activate_reset():\n \"\"\"Reset Activation Credentials.\"\"\"\n Credentials.get_current().reset()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "lib/streamlit/cli.py"}]}
| 3,696 | 242 |
gh_patches_debug_12586
|
rasdani/github-patches
|
git_diff
|
nerfstudio-project__nerfstudio-824
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Depth normalization inconsistent for packed vs. nonpacked samples
**Describe the bug**
When the raymarching samples are packed, the depth is calculated according to `sum_i w_i t_i`: https://github.com/nerfstudio-project/nerfstudio/blob/863fc77ab5f247ff3ce3c80f192173063529b036/nerfstudio/model_components/renderers.py#L236
When the raymarching samples are not packed, the depth is calculated with a normalization factor dividing by the total accumulation, `(sum_i w_i t_i) / (sum_i w_i)`: https://github.com/nerfstudio-project/nerfstudio/blob/863fc77ab5f247ff3ce3c80f192173063529b036/nerfstudio/model_components/renderers.py#L238
**To Reproduce**
N/A
**Expected behavior**
For consistency, the calculation for packed samples should also divide by the total accumulation.
**Screenshots**
N/A
**Additional context**
If this is desired, I can implement the change.
</issue>
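A small numeric sketch of the inconsistency described above (illustrative only; it uses plain per-ray tensors rather than nerfacc's packed representation): whenever a ray's weights do not sum to 1, the unnormalized and normalized formulas give different depths.

```python
import torch

weights = torch.tensor([[0.2], [0.3]])  # per-sample weights along one ray, summing to 0.5
steps = torch.tensor([[1.0], [2.0]])    # midpoints t_i of the ray samples
eps = 1e-10

# Packed branch: sum_i w_i t_i
unnormalized = torch.sum(weights * steps, dim=-2)                                     # tensor([0.8000])
# Non-packed branch: (sum_i w_i t_i) / (sum_i w_i)
normalized = torch.sum(weights * steps, dim=-2) / (torch.sum(weights, dim=-2) + eps)  # tensor([1.6000])

print(unnormalized, normalized)  # the two branches disagree whenever accumulation < 1
```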
<code>
[start of nerfstudio/model_components/renderers.py]
1 # Copyright 2022 The Nerfstudio Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Collection of renderers
17
18 Example:
19
20 .. code-block:: python
21
22 field_outputs = field(ray_sampler)
23 weights = ray_sampler.get_weights(field_outputs[FieldHeadNames.DENSITY])
24
25 rgb_renderer = RGBRenderer()
26 rgb = rgb_renderer(rgb=field_outputs[FieldHeadNames.RGB], weights=weights)
27
28 """
29 import math
30 from typing import Optional, Union
31
32 import nerfacc
33 import torch
34 from torch import nn
35 from torchtyping import TensorType
36 from typing_extensions import Literal
37
38 from nerfstudio.cameras.rays import RaySamples
39 from nerfstudio.utils.math import components_from_spherical_harmonics
40
41
42 class RGBRenderer(nn.Module):
43 """Standard volumetic rendering.
44
45 Args:
46 background_color: Background color as RGB. Uses random colors if None.
47 """
48
49 def __init__(self, background_color: Union[Literal["random", "last_sample"], TensorType[3]] = "random") -> None:
50 super().__init__()
51 self.background_color = background_color
52
53 @classmethod
54 def combine_rgb(
55 cls,
56 rgb: TensorType["bs":..., "num_samples", 3],
57 weights: TensorType["bs":..., "num_samples", 1],
58 background_color: Union[Literal["random", "last_sample"], TensorType[3]] = "random",
59 ray_indices: Optional[TensorType["num_samples"]] = None,
60 num_rays: Optional[int] = None,
61 ) -> TensorType["bs":..., 3]:
62 """Composite samples along ray and render color image
63
64 Args:
65 rgb: RGB for each sample
66 weights: Weights for each sample
67 background_color: Background color as RGB.
68 ray_indices: Ray index for each sample, used when samples are packed.
69 num_rays: Number of rays, used when samples are packed.
70
71 Returns:
72 Outputs rgb values.
73 """
74 if ray_indices is not None and num_rays is not None:
75 # Necessary for packed samples from volumetric ray sampler
76 if background_color == "last_sample":
77 raise NotImplementedError("Background color 'last_sample' not implemented for packed samples.")
78 comp_rgb = nerfacc.accumulate_along_rays(weights, ray_indices, rgb, num_rays)
79 accumulated_weight = nerfacc.accumulate_along_rays(weights, ray_indices, None, num_rays)
80 else:
81 comp_rgb = torch.sum(weights * rgb, dim=-2)
82 accumulated_weight = torch.sum(weights, dim=-2)
83
84 if background_color == "last_sample":
85 background_color = rgb[..., -1, :]
86 if background_color == "random":
87 background_color = torch.rand_like(comp_rgb).to(rgb.device)
88
89 assert isinstance(background_color, torch.Tensor)
90 comp_rgb = comp_rgb + background_color.to(weights.device) * (1.0 - accumulated_weight)
91
92 return comp_rgb
93
94 def forward(
95 self,
96 rgb: TensorType["bs":..., "num_samples", 3],
97 weights: TensorType["bs":..., "num_samples", 1],
98 ray_indices: Optional[TensorType["num_samples"]] = None,
99 num_rays: Optional[int] = None,
100 ) -> TensorType["bs":..., 3]:
101 """Composite samples along ray and render color image
102
103 Args:
104 rgb: RGB for each sample
105 weights: Weights for each sample
106 ray_indices: Ray index for each sample, used when samples are packed.
107 num_rays: Number of rays, used when samples are packed.
108
109 Returns:
110 Outputs of rgb values.
111 """
112
113 rgb = self.combine_rgb(
114 rgb, weights, background_color=self.background_color, ray_indices=ray_indices, num_rays=num_rays
115 )
116 if not self.training:
117 torch.clamp_(rgb, min=0.0, max=1.0)
118 return rgb
119
120
121 class SHRenderer(nn.Module):
122 """Render RGB value from spherical harmonics.
123
124 Args:
125 background_color: Background color as RGB. Uses random colors if None
126 activation: Output activation.
127 """
128
129 def __init__(
130 self,
131 background_color: Union[Literal["random", "last_sample"], TensorType[3]] = "random",
132 activation: Optional[nn.Module] = nn.Sigmoid(),
133 ) -> None:
134 super().__init__()
135 self.background_color = background_color
136 self.activation = activation
137
138 def forward(
139 self,
140 sh: TensorType[..., "num_samples", "coeffs"],
141 directions: TensorType[..., "num_samples", 3],
142 weights: TensorType[..., "num_samples", 1],
143 ) -> TensorType[..., 3]:
144 """Composite samples along ray and render color image
145
146 Args:
147 sh: Spherical hamonics coefficients for each sample
148 directions: Sample direction
149 weights: Weights for each sample
150
151 Returns:
152 Outputs of rgb values.
153 """
154
155 sh = sh.view(*sh.shape[:-1], 3, sh.shape[-1] // 3)
156
157 levels = int(math.sqrt(sh.shape[-1]))
158 components = components_from_spherical_harmonics(levels=levels, directions=directions)
159
160 rgb = sh * components[..., None, :] # [..., num_samples, 3, sh_components]
161 rgb = torch.sum(sh, dim=-1) + 0.5 # [..., num_samples, 3]
162
163 if self.activation is not None:
164 self.activation(rgb)
165
166 rgb = RGBRenderer.combine_rgb(rgb, weights, background_color=self.background_color)
167
168 return rgb
169
170
171 class AccumulationRenderer(nn.Module):
172 """Accumulated value along a ray."""
173
174 @classmethod
175 def forward(
176 cls,
177 weights: TensorType["bs":..., "num_samples", 1],
178 ray_indices: Optional[TensorType["num_samples"]] = None,
179 num_rays: Optional[int] = None,
180 ) -> TensorType["bs":..., 1]:
181 """Composite samples along ray and calculate accumulation.
182
183 Args:
184 weights: Weights for each sample
185 ray_indices: Ray index for each sample, used when samples are packed.
186 num_rays: Number of rays, used when samples are packed.
187
188 Returns:
189 Outputs of accumulated values.
190 """
191
192 if ray_indices is not None and num_rays is not None:
193 # Necessary for packed samples from volumetric ray sampler
194 accumulation = nerfacc.accumulate_along_rays(weights, ray_indices, None, num_rays)
195 else:
196 accumulation = torch.sum(weights, dim=-2)
197 return accumulation
198
199
200 class DepthRenderer(nn.Module):
201 """Calculate depth along ray.
202
203 Args:
204 method (str, optional): Depth calculation method.
205 """
206
207 def __init__(self, method: Literal["expected"] = "expected") -> None:
208 super().__init__()
209 self.method = method
210
211 def forward(
212 self,
213 weights: TensorType[..., "num_samples", 1],
214 ray_samples: RaySamples,
215 ray_indices: Optional[TensorType["num_samples"]] = None,
216 num_rays: Optional[int] = None,
217 ) -> TensorType[..., 1]:
218 """Composite samples along ray and calculate disparities.
219
220 Args:
221 weights: Weights for each sample.
222 ray_samples: Set of ray samples.
223 ray_indices: Ray index for each sample, used when samples are packed.
224 num_rays: Number of rays, used when samples are packed.
225
226 Returns:
227 Outputs of depth values.
228 """
229
230 if self.method == "expected":
231 eps = 1e-10
232 steps = (ray_samples.frustums.starts + ray_samples.frustums.ends) / 2
233
234 if ray_indices is not None and num_rays is not None:
235 # Necessary for packed samples from volumetric ray sampler
236 depth = nerfacc.accumulate_along_rays(weights, ray_indices, steps, num_rays)
237 else:
238 depth = torch.sum(weights * steps, dim=-2) / (torch.sum(weights, -2) + eps)
239
240 depth = torch.clip(depth, steps.min(), steps.max())
241
242 return depth
243
244 raise NotImplementedError(f"Method {self.method} not implemented")
245
246
247 class UncertaintyRenderer(nn.Module):
248 """Calculate uncertainty along the ray."""
249
250 @classmethod
251 def forward(
252 cls, betas: TensorType["bs":..., "num_samples", 1], weights: TensorType["bs":..., "num_samples", 1]
253 ) -> TensorType["bs":..., 1]:
254 """Calculate uncertainty along the ray.
255
256 Args:
257 betas: Uncertainty betas for each sample.
258 weights: Weights of each sample.
259
260 Returns:
261 Rendering of uncertainty.
262 """
263 uncertainty = torch.sum(weights * betas, dim=-2)
264 return uncertainty
265
266
267 class SemanticRenderer(nn.Module):
268 """Calculate semantics along the ray."""
269
270 @classmethod
271 def forward(
272 cls,
273 semantics: TensorType["bs":..., "num_samples", "num_classes"],
274 weights: TensorType["bs":..., "num_samples", 1],
275 ) -> TensorType["bs":..., "num_classes"]:
276 """_summary_"""
277 sem = torch.sum(weights * semantics, dim=-2)
278 return sem
279
[end of nerfstudio/model_components/renderers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nerfstudio/model_components/renderers.py b/nerfstudio/model_components/renderers.py
--- a/nerfstudio/model_components/renderers.py
+++ b/nerfstudio/model_components/renderers.py
@@ -234,6 +234,8 @@
if ray_indices is not None and num_rays is not None:
# Necessary for packed samples from volumetric ray sampler
depth = nerfacc.accumulate_along_rays(weights, ray_indices, steps, num_rays)
+ accumulation = nerfacc.accumulate_along_rays(weights, ray_indices, None, num_rays)
+ depth = depth / (accumulation + eps)
else:
depth = torch.sum(weights * steps, dim=-2) / (torch.sum(weights, -2) + eps)
|
{"golden_diff": "diff --git a/nerfstudio/model_components/renderers.py b/nerfstudio/model_components/renderers.py\n--- a/nerfstudio/model_components/renderers.py\n+++ b/nerfstudio/model_components/renderers.py\n@@ -234,6 +234,8 @@\n if ray_indices is not None and num_rays is not None:\n # Necessary for packed samples from volumetric ray sampler\n depth = nerfacc.accumulate_along_rays(weights, ray_indices, steps, num_rays)\n+ accumulation = nerfacc.accumulate_along_rays(weights, ray_indices, None, num_rays)\n+ depth = depth / (accumulation + eps)\n else:\n depth = torch.sum(weights * steps, dim=-2) / (torch.sum(weights, -2) + eps)\n", "issue": "Depth normalization inconsistent for packed vs. nonpacked samples\n**Describe the bug**\r\nWhen the raymarching samples are packed, the depth is calculated according to `sum_i w_i t_i`: https://github.com/nerfstudio-project/nerfstudio/blob/863fc77ab5f247ff3ce3c80f192173063529b036/nerfstudio/model_components/renderers.py#L236\r\n\r\nWhen the raymarching samples are not packed, the depth is calculated with a normalization factor dividing by the total accumulation, `(sum_i w_i t_i) / (sum_i w_i)`: https://github.com/nerfstudio-project/nerfstudio/blob/863fc77ab5f247ff3ce3c80f192173063529b036/nerfstudio/model_components/renderers.py#L238\r\n\r\n**To Reproduce**\r\nN/A\r\n\r\n**Expected behavior**\r\nFor consistency, the calculation for packed samples should also divide by the total accumulation.\r\n\r\n**Screenshots**\r\nN/A\r\n\r\n**Additional context**\r\nIf this is desired, I can implement the change.\r\n\n", "before_files": [{"content": "# Copyright 2022 The Nerfstudio Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nCollection of renderers\n\nExample:\n\n.. code-block:: python\n\n field_outputs = field(ray_sampler)\n weights = ray_sampler.get_weights(field_outputs[FieldHeadNames.DENSITY])\n\n rgb_renderer = RGBRenderer()\n rgb = rgb_renderer(rgb=field_outputs[FieldHeadNames.RGB], weights=weights)\n\n\"\"\"\nimport math\nfrom typing import Optional, Union\n\nimport nerfacc\nimport torch\nfrom torch import nn\nfrom torchtyping import TensorType\nfrom typing_extensions import Literal\n\nfrom nerfstudio.cameras.rays import RaySamples\nfrom nerfstudio.utils.math import components_from_spherical_harmonics\n\n\nclass RGBRenderer(nn.Module):\n \"\"\"Standard volumetic rendering.\n\n Args:\n background_color: Background color as RGB. 
Uses random colors if None.\n \"\"\"\n\n def __init__(self, background_color: Union[Literal[\"random\", \"last_sample\"], TensorType[3]] = \"random\") -> None:\n super().__init__()\n self.background_color = background_color\n\n @classmethod\n def combine_rgb(\n cls,\n rgb: TensorType[\"bs\":..., \"num_samples\", 3],\n weights: TensorType[\"bs\":..., \"num_samples\", 1],\n background_color: Union[Literal[\"random\", \"last_sample\"], TensorType[3]] = \"random\",\n ray_indices: Optional[TensorType[\"num_samples\"]] = None,\n num_rays: Optional[int] = None,\n ) -> TensorType[\"bs\":..., 3]:\n \"\"\"Composite samples along ray and render color image\n\n Args:\n rgb: RGB for each sample\n weights: Weights for each sample\n background_color: Background color as RGB.\n ray_indices: Ray index for each sample, used when samples are packed.\n num_rays: Number of rays, used when samples are packed.\n\n Returns:\n Outputs rgb values.\n \"\"\"\n if ray_indices is not None and num_rays is not None:\n # Necessary for packed samples from volumetric ray sampler\n if background_color == \"last_sample\":\n raise NotImplementedError(\"Background color 'last_sample' not implemented for packed samples.\")\n comp_rgb = nerfacc.accumulate_along_rays(weights, ray_indices, rgb, num_rays)\n accumulated_weight = nerfacc.accumulate_along_rays(weights, ray_indices, None, num_rays)\n else:\n comp_rgb = torch.sum(weights * rgb, dim=-2)\n accumulated_weight = torch.sum(weights, dim=-2)\n\n if background_color == \"last_sample\":\n background_color = rgb[..., -1, :]\n if background_color == \"random\":\n background_color = torch.rand_like(comp_rgb).to(rgb.device)\n\n assert isinstance(background_color, torch.Tensor)\n comp_rgb = comp_rgb + background_color.to(weights.device) * (1.0 - accumulated_weight)\n\n return comp_rgb\n\n def forward(\n self,\n rgb: TensorType[\"bs\":..., \"num_samples\", 3],\n weights: TensorType[\"bs\":..., \"num_samples\", 1],\n ray_indices: Optional[TensorType[\"num_samples\"]] = None,\n num_rays: Optional[int] = None,\n ) -> TensorType[\"bs\":..., 3]:\n \"\"\"Composite samples along ray and render color image\n\n Args:\n rgb: RGB for each sample\n weights: Weights for each sample\n ray_indices: Ray index for each sample, used when samples are packed.\n num_rays: Number of rays, used when samples are packed.\n\n Returns:\n Outputs of rgb values.\n \"\"\"\n\n rgb = self.combine_rgb(\n rgb, weights, background_color=self.background_color, ray_indices=ray_indices, num_rays=num_rays\n )\n if not self.training:\n torch.clamp_(rgb, min=0.0, max=1.0)\n return rgb\n\n\nclass SHRenderer(nn.Module):\n \"\"\"Render RGB value from spherical harmonics.\n\n Args:\n background_color: Background color as RGB. 
Uses random colors if None\n activation: Output activation.\n \"\"\"\n\n def __init__(\n self,\n background_color: Union[Literal[\"random\", \"last_sample\"], TensorType[3]] = \"random\",\n activation: Optional[nn.Module] = nn.Sigmoid(),\n ) -> None:\n super().__init__()\n self.background_color = background_color\n self.activation = activation\n\n def forward(\n self,\n sh: TensorType[..., \"num_samples\", \"coeffs\"],\n directions: TensorType[..., \"num_samples\", 3],\n weights: TensorType[..., \"num_samples\", 1],\n ) -> TensorType[..., 3]:\n \"\"\"Composite samples along ray and render color image\n\n Args:\n sh: Spherical hamonics coefficients for each sample\n directions: Sample direction\n weights: Weights for each sample\n\n Returns:\n Outputs of rgb values.\n \"\"\"\n\n sh = sh.view(*sh.shape[:-1], 3, sh.shape[-1] // 3)\n\n levels = int(math.sqrt(sh.shape[-1]))\n components = components_from_spherical_harmonics(levels=levels, directions=directions)\n\n rgb = sh * components[..., None, :] # [..., num_samples, 3, sh_components]\n rgb = torch.sum(sh, dim=-1) + 0.5 # [..., num_samples, 3]\n\n if self.activation is not None:\n self.activation(rgb)\n\n rgb = RGBRenderer.combine_rgb(rgb, weights, background_color=self.background_color)\n\n return rgb\n\n\nclass AccumulationRenderer(nn.Module):\n \"\"\"Accumulated value along a ray.\"\"\"\n\n @classmethod\n def forward(\n cls,\n weights: TensorType[\"bs\":..., \"num_samples\", 1],\n ray_indices: Optional[TensorType[\"num_samples\"]] = None,\n num_rays: Optional[int] = None,\n ) -> TensorType[\"bs\":..., 1]:\n \"\"\"Composite samples along ray and calculate accumulation.\n\n Args:\n weights: Weights for each sample\n ray_indices: Ray index for each sample, used when samples are packed.\n num_rays: Number of rays, used when samples are packed.\n\n Returns:\n Outputs of accumulated values.\n \"\"\"\n\n if ray_indices is not None and num_rays is not None:\n # Necessary for packed samples from volumetric ray sampler\n accumulation = nerfacc.accumulate_along_rays(weights, ray_indices, None, num_rays)\n else:\n accumulation = torch.sum(weights, dim=-2)\n return accumulation\n\n\nclass DepthRenderer(nn.Module):\n \"\"\"Calculate depth along ray.\n\n Args:\n method (str, optional): Depth calculation method.\n \"\"\"\n\n def __init__(self, method: Literal[\"expected\"] = \"expected\") -> None:\n super().__init__()\n self.method = method\n\n def forward(\n self,\n weights: TensorType[..., \"num_samples\", 1],\n ray_samples: RaySamples,\n ray_indices: Optional[TensorType[\"num_samples\"]] = None,\n num_rays: Optional[int] = None,\n ) -> TensorType[..., 1]:\n \"\"\"Composite samples along ray and calculate disparities.\n\n Args:\n weights: Weights for each sample.\n ray_samples: Set of ray samples.\n ray_indices: Ray index for each sample, used when samples are packed.\n num_rays: Number of rays, used when samples are packed.\n\n Returns:\n Outputs of depth values.\n \"\"\"\n\n if self.method == \"expected\":\n eps = 1e-10\n steps = (ray_samples.frustums.starts + ray_samples.frustums.ends) / 2\n\n if ray_indices is not None and num_rays is not None:\n # Necessary for packed samples from volumetric ray sampler\n depth = nerfacc.accumulate_along_rays(weights, ray_indices, steps, num_rays)\n else:\n depth = torch.sum(weights * steps, dim=-2) / (torch.sum(weights, -2) + eps)\n\n depth = torch.clip(depth, steps.min(), steps.max())\n\n return depth\n\n raise NotImplementedError(f\"Method {self.method} not implemented\")\n\n\nclass 
UncertaintyRenderer(nn.Module):\n \"\"\"Calculate uncertainty along the ray.\"\"\"\n\n @classmethod\n def forward(\n cls, betas: TensorType[\"bs\":..., \"num_samples\", 1], weights: TensorType[\"bs\":..., \"num_samples\", 1]\n ) -> TensorType[\"bs\":..., 1]:\n \"\"\"Calculate uncertainty along the ray.\n\n Args:\n betas: Uncertainty betas for each sample.\n weights: Weights of each sample.\n\n Returns:\n Rendering of uncertainty.\n \"\"\"\n uncertainty = torch.sum(weights * betas, dim=-2)\n return uncertainty\n\n\nclass SemanticRenderer(nn.Module):\n \"\"\"Calculate semantics along the ray.\"\"\"\n\n @classmethod\n def forward(\n cls,\n semantics: TensorType[\"bs\":..., \"num_samples\", \"num_classes\"],\n weights: TensorType[\"bs\":..., \"num_samples\", 1],\n ) -> TensorType[\"bs\":..., \"num_classes\"]:\n \"\"\"_summary_\"\"\"\n sem = torch.sum(weights * semantics, dim=-2)\n return sem\n", "path": "nerfstudio/model_components/renderers.py"}]}
| 3,680 | 176 |
gh_patches_debug_27323
|
rasdani/github-patches
|
git_diff
|
mindsdb__lightwood-168
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Construct comprehensive test suite to evaluate predictions with missing column
We should have a test suite to evaluate prediction accuracy with missing column.
This should take the form of:
Given `N` columns and a Lightwood model trained with them to predict `y`, the accuracy for `y` when predicting with `M` columns (where `M` is a subset of `N`) should be about equal to or greater than that of a Gradient Boosting Regressor or Classifier trained with just the columns `M` to predict `y`.
The reason we are using a Gradient Booster to determine the benchmark accuracy is that it's safe to assume they are fairly generic (i.e. should get about the same accuracy as a well trained neural network) and fast&easy to train.
We can do this testing in two phases:
First, we can add this as a check to the generate-data tests in lightwood, which should be fairly easy.
Second, we can add these tests to mindsdb_examples, the helpers that are already present in there can help.
I'll be handling this but @torrmal feel free to review the methodology
</issue>
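A rough sketch of the proposed benchmark (hypothetical code, not an existing Lightwood test; it assumes the feature columns in `present_columns` are already numeric and uses scikit-learn's booster as the reference model):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score


def benchmark_accuracy(train_df: pd.DataFrame, test_df: pd.DataFrame,
                       present_columns: list, target: str) -> float:
    """Accuracy of a plain booster trained only on the columns available at predict time."""
    gbm = GradientBoostingClassifier()
    gbm.fit(train_df[present_columns], train_df[target])
    predictions = gbm.predict(test_df[present_columns])
    return accuracy_score(test_df[target], predictions)

# A Lightwood predictor trained on all N columns but queried with only
# `present_columns` would then be expected to score at least benchmark_accuracy(...).
```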
<code>
[start of docs/examples/learn_to_classify.py]
1 import lightwood
2 import random
3 import pandas as pd
4 import numpy as np
5 from collections import Counter
6
7
8 random.seed(66)
9 n = 100
10 m = 500
11 train = True
12 nr_inputs = 10
13
14 #options = ['a','b','c','d','e','f','g','h','n','m']
15 options = ['a','b','c']
16
17 data_train = {}
18 data_test = {}
19
20 for data, nr_ele in [(data_train,n), (data_test,m)]:
21 for i in range(nr_inputs):
22 data[f'x_{i}'] = [random.choice(options) for _ in range(nr_ele)]
23
24 data['y'] = [Counter([data[f'x_{i}'][n] for i in range(nr_inputs)]).most_common(1)[0][0] for n in range(nr_ele)]
25
26 data_train = pd.DataFrame(data_train)
27 data_test = pd.DataFrame(data_test)
28
29 def iter_function(epoch, training_error, test_error, test_error_gradient, test_accuracy):
30 print(f'Epoch: {epoch}, Train Error: {training_error}, Test Error: {test_error}, Test Error Gradient: {test_error_gradient}, Test Accuracy: {test_accuracy}')
31
32 if train:
33 predictor = lightwood.Predictor(output=['y'])
34 predictor.learn(from_data=data_train, callback_on_iter=iter_function, eval_every_x_epochs=200)
35 predictor.save('/tmp/ltcrl.pkl')
36
37 predictor = lightwood.Predictor(load_from_path='/tmp/ltcrl.pkl')
38 print('Train accuracy: ', predictor.train_accuracy['y']['value'])
39 print('Test accuracy: ', predictor.calculate_accuracy(from_data=data_test)['y']['value'])
40
41 predictions = predictor.predict(when_data=data_test)
42 print(f'Confidence mean for all columns present ', np.mean(predictions['y']['selfaware_confidences']))
43
44 for i_drop in range(nr_inputs):
45 predictions = predictor.predict(when_data=data_test.drop(columns=[f'x_{i_drop}']))
46 print(f'Accuracy for x_{i_drop} missing: ', predictor.calculate_accuracy(from_data=data_test.drop(columns=[f'x_{i_drop}']))['y']['value'])
47 print(f'Confidence mean for x_{i_drop} missing: ', np.mean(predictions['y']['selfaware_confidences']))
48
[end of docs/examples/learn_to_classify.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/examples/learn_to_classify.py b/docs/examples/learn_to_classify.py
--- a/docs/examples/learn_to_classify.py
+++ b/docs/examples/learn_to_classify.py
@@ -34,14 +34,18 @@
predictor.learn(from_data=data_train, callback_on_iter=iter_function, eval_every_x_epochs=200)
predictor.save('/tmp/ltcrl.pkl')
+
predictor = lightwood.Predictor(load_from_path='/tmp/ltcrl.pkl')
print('Train accuracy: ', predictor.train_accuracy['y']['value'])
print('Test accuracy: ', predictor.calculate_accuracy(from_data=data_test)['y']['value'])
-predictions = predictor.predict(when_data=data_test)
+print(f'Accuracy for all columns present: ', predictor.calculate_accuracy(from_data=data_test)['y']['value'])
+
+predictions = predictor.calculate_accuracy(from_data=data_test)
print(f'Confidence mean for all columns present ', np.mean(predictions['y']['selfaware_confidences']))
for i_drop in range(nr_inputs):
- predictions = predictor.predict(when_data=data_test.drop(columns=[f'x_{i_drop}']))
print(f'Accuracy for x_{i_drop} missing: ', predictor.calculate_accuracy(from_data=data_test.drop(columns=[f'x_{i_drop}']))['y']['value'])
+
+ predictions = predictor.calculate_accuracy(from_data=data_test.drop(columns=[f'x_{i_drop}']))
print(f'Confidence mean for x_{i_drop} missing: ', np.mean(predictions['y']['selfaware_confidences']))
|
{"golden_diff": "diff --git a/docs/examples/learn_to_classify.py b/docs/examples/learn_to_classify.py\n--- a/docs/examples/learn_to_classify.py\n+++ b/docs/examples/learn_to_classify.py\n@@ -34,14 +34,18 @@\n predictor.learn(from_data=data_train, callback_on_iter=iter_function, eval_every_x_epochs=200)\n predictor.save('/tmp/ltcrl.pkl')\n \n+\n predictor = lightwood.Predictor(load_from_path='/tmp/ltcrl.pkl')\n print('Train accuracy: ', predictor.train_accuracy['y']['value'])\n print('Test accuracy: ', predictor.calculate_accuracy(from_data=data_test)['y']['value'])\n \n-predictions = predictor.predict(when_data=data_test)\n+print(f'Accuracy for all columns present: ', predictor.calculate_accuracy(from_data=data_test)['y']['value'])\n+\n+predictions = predictor.calculate_accuracy(from_data=data_test)\n print(f'Confidence mean for all columns present ', np.mean(predictions['y']['selfaware_confidences']))\n \n for i_drop in range(nr_inputs):\n- predictions = predictor.predict(when_data=data_test.drop(columns=[f'x_{i_drop}']))\n print(f'Accuracy for x_{i_drop} missing: ', predictor.calculate_accuracy(from_data=data_test.drop(columns=[f'x_{i_drop}']))['y']['value'])\n+\n+ predictions = predictor.calculate_accuracy(from_data=data_test.drop(columns=[f'x_{i_drop}']))\n print(f'Confidence mean for x_{i_drop} missing: ', np.mean(predictions['y']['selfaware_confidences']))\n", "issue": "Construct comperhensive test suite to evaluate predictions with missing column\nWe should have a test suite to evaluate prediction accuracy with missing column.\r\n\r\nThis should take the form of:\r\n\r\nGiven `M` columns and a Lightwood model trained with them to predict `y`, the accuracy for `y` when predicting with `M` columns (where `M` is a subset of `N`), should be about equal to or greater than that of a Gradient Boosting Regressor or Classifier trained with just the columns `M` to predict `y`.\r\n\r\nThe reason we are using a Gradient Booster to determine the benchmark accuracy is that it's safe to assume they are fairly generic (i.e. 
should get about the same accuracy as a well trained neural network) and fast&easy to train.\r\n\r\nWe can do this testing in two phases:\r\n\r\nFirst, we can add this as a check to the generate-data tests in lightwood, which should be fairly easy.\r\n\r\nSecond, we can add these tests to mindsdb_examples, the helpers that are already present in there can help.\r\n\r\nI'll be handling this but @torrmal feel free to review the methodology\n", "before_files": [{"content": "import lightwood\nimport random\nimport pandas as pd\nimport numpy as np\nfrom collections import Counter\n\n\nrandom.seed(66)\nn = 100\nm = 500\ntrain = True\nnr_inputs = 10\n\n#options = ['a','b','c','d','e','f','g','h','n','m']\noptions = ['a','b','c']\n\ndata_train = {}\ndata_test = {}\n\nfor data, nr_ele in [(data_train,n), (data_test,m)]:\n for i in range(nr_inputs):\n data[f'x_{i}'] = [random.choice(options) for _ in range(nr_ele)]\n\n data['y'] = [Counter([data[f'x_{i}'][n] for i in range(nr_inputs)]).most_common(1)[0][0] for n in range(nr_ele)]\n\ndata_train = pd.DataFrame(data_train)\ndata_test = pd.DataFrame(data_test)\n\ndef iter_function(epoch, training_error, test_error, test_error_gradient, test_accuracy):\n print(f'Epoch: {epoch}, Train Error: {training_error}, Test Error: {test_error}, Test Error Gradient: {test_error_gradient}, Test Accuracy: {test_accuracy}')\n\nif train:\n predictor = lightwood.Predictor(output=['y'])\n predictor.learn(from_data=data_train, callback_on_iter=iter_function, eval_every_x_epochs=200)\n predictor.save('/tmp/ltcrl.pkl')\n\npredictor = lightwood.Predictor(load_from_path='/tmp/ltcrl.pkl')\nprint('Train accuracy: ', predictor.train_accuracy['y']['value'])\nprint('Test accuracy: ', predictor.calculate_accuracy(from_data=data_test)['y']['value'])\n\npredictions = predictor.predict(when_data=data_test)\nprint(f'Confidence mean for all columns present ', np.mean(predictions['y']['selfaware_confidences']))\n\nfor i_drop in range(nr_inputs):\n predictions = predictor.predict(when_data=data_test.drop(columns=[f'x_{i_drop}']))\n print(f'Accuracy for x_{i_drop} missing: ', predictor.calculate_accuracy(from_data=data_test.drop(columns=[f'x_{i_drop}']))['y']['value'])\n print(f'Confidence mean for x_{i_drop} missing: ', np.mean(predictions['y']['selfaware_confidences']))\n", "path": "docs/examples/learn_to_classify.py"}]}
| 1,355 | 333 |
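The benchmark methodology described in the lightwood issue above can be sketched with scikit-learn; the snippet below is only an illustration of the idea (the use of `GradientBoostingClassifier`, the one-hot encoding step, and the `benchmark_accuracy` helper are assumptions, not code from either repository). For each dropped column, the booster trained on the remaining columns gives the accuracy floor the Lightwood predictor is expected to meet.

```python
# Illustrative benchmark: train a gradient booster on the reduced column set
# and use its accuracy as the floor for the predictor-with-missing-column case.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score


def benchmark_accuracy(train_df: pd.DataFrame, test_df: pd.DataFrame,
                       dropped_col: str, target: str = "y") -> float:
    cols = [c for c in train_df.columns if c not in (target, dropped_col)]
    # One-hot encode the categorical x_* features so the booster can consume them.
    X_train = pd.get_dummies(train_df[cols])
    X_test = pd.get_dummies(test_df[cols]).reindex(columns=X_train.columns, fill_value=0)
    gbm = GradientBoostingClassifier().fit(X_train, train_df[target])
    return accuracy_score(test_df[target], gbm.predict(X_test))
```

With the `data_train`/`data_test` frames from the example script, the Lightwood accuracy reported for each dropped `x_{i}` could then be asserted to be at least roughly this value.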
gh_patches_debug_35750 | rasdani/github-patches | git_diff | chainer__chainer-1663 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test N-dimensional convolution link for dtypes of FP16 and FP64
Follows #1279 and #1556.
Since #1295 is now merged to master, we can add tests for dtypes of FP16 and FP64 to the N-dimensional convolution **LINK**.
</issue>
<code>
[start of chainer/links/connection/convolution_nd.py]
1 from chainer.functions.connection import convolution_nd
2 from chainer import initializers
3 from chainer import link
4 from chainer.utils import conv_nd
5
6
7 class ConvolutionND(link.Link):
8 """N-dimensional convolution layer.
9
10 This link wraps the :func:`~chainer.functions.convolution_nd` function and
11 holds the filter weight and bias vector as parameters.
12
13 Args:
14 ndim (int): Number of spatial dimensions.
15 in_channels (int): Number of channels of input arrays.
16 out_channels (int): Number of channels of output arrays.
17 ksize (int or tuple of ints): Size of filters (a.k.a. kernels).
18 ``ksize=k`` and ``ksize=(k, k, ..., k)`` are equivalent.
19 stride (int or tuple of ints): Stride of filter application.
20 ``stride=s`` and ``stride=(s, s, ..., s)`` are equivalent.
21 pad (int or tuple of ints): Spatial padding width for input arrays.
22 ``pad=p`` and ``pad=(p, p, ..., p)`` are equivalent.
23 initialW: Value used to initialize the filter weight. May be an
24 initializer instance or another value that
25 :func:`~chainer.init_weight` helper function can take. This link
26 uses :func:`~chainer.init_weight` to initialize the filter weight
27 and passes the value of ``initialW`` to it as it is.
28 initial_bias: Value used to initialize the bias vector. May be an
29 initializer instance or another value except ``None`` that
30 :func:`~chainer.init_weight` helper function can take. If ``None``
31 is given, this link does not use the bias vector. This link uses
32 :func:`~chainer.init_weight` to initialize the bias vector and
33 passes the value of ``initial_bias`` other than ``None`` to it as
34 it is.
35 use_cudnn (bool): If ``True``, then this link uses cuDNN if available.
36 See :func:`~chainer.functions.convolution_nd` for exact conditions
37 of cuDNN availability.
38 cover_all (bool): If ``True``, all spatial locations are convoluted
39 into some output pixels. It may make the output size larger.
40 ``cover_all`` needs to be ``False`` if you want to use cuDNN.
41
42 .. seealso::
43 See :func:`~chainer.functions.convolution_nd` for the definition of
44 N-dimensional convolution. See
45 :func:`~chainer.functions.convolution_2d` for the definition of
46 two-dimensional convolution.
47
48 Attributes:
49 W (~chainer.Variable): Weight parameter.
50 b (~chainer.Variable): Bias parameter. If ``initial_bias`` is ``None``,
51 set to ``None``.
52
53 """
54
55 def __init__(self, ndim, in_channels, out_channels, ksize, stride=1, pad=0,
56 initialW=None, initial_bias=None, use_cudnn=True,
57 cover_all=False):
58 ksize = conv_nd.as_tuple(ksize, ndim)
59 self.stride = stride
60 self.pad = pad
61 self.use_cudnn = use_cudnn
62 self.cover_all = cover_all
63
64 W_shape = (out_channels, in_channels) + ksize
65 super(ConvolutionND, self).__init__(W=W_shape)
66 initializers.init_weight(self.W.data, initialW)
67
68 if initial_bias is None:
69 self.b = None
70 else:
71 self.add_param('b', out_channels)
72 initializers.init_weight(self.b.data, initial_bias)
73
74 def __call__(self, x):
75 """Applies N-dimensional convolution layer.
76
77 Args:
78 x (~chainer.Variable): Input image.
79
80 Returns:
81 ~chainer.Variable: Output of convolution.
82
83 """
84 return convolution_nd.convolution_nd(
85 x, self.W, self.b, self.stride, self.pad,
86 use_cudnn=self.use_cudnn, cover_all=self.cover_all)
87
[end of chainer/links/connection/convolution_nd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/links/connection/convolution_nd.py b/chainer/links/connection/convolution_nd.py
--- a/chainer/links/connection/convolution_nd.py
+++ b/chainer/links/connection/convolution_nd.py
@@ -22,16 +22,11 @@
``pad=p`` and ``pad=(p, p, ..., p)`` are equivalent.
initialW: Value used to initialize the filter weight. May be an
initializer instance or another value that
- :func:`~chainer.init_weight` helper function can take. This link
- uses :func:`~chainer.init_weight` to initialize the filter weight
- and passes the value of ``initialW`` to it as it is.
+ :func:`~chainer.init_weight` helper function can take.
initial_bias: Value used to initialize the bias vector. May be an
initializer instance or another value except ``None`` that
:func:`~chainer.init_weight` helper function can take. If ``None``
- is given, this link does not use the bias vector. This link uses
- :func:`~chainer.init_weight` to initialize the bias vector and
- passes the value of ``initial_bias`` other than ``None`` to it as
- it is.
+ is given, this link does not use the bias vector.
use_cudnn (bool): If ``True``, then this link uses cuDNN if available.
See :func:`~chainer.functions.convolution_nd` for exact conditions
of cuDNN availability.
@@ -61,15 +56,17 @@
self.use_cudnn = use_cudnn
self.cover_all = cover_all
+ super(ConvolutionND, self).__init__()
+
W_shape = (out_channels, in_channels) + ksize
- super(ConvolutionND, self).__init__(W=W_shape)
- initializers.init_weight(self.W.data, initialW)
+ initialW = initializers._get_initializer(initialW)
+ self.add_param('W', W_shape, initializer=initialW)
if initial_bias is None:
self.b = None
else:
- self.add_param('b', out_channels)
- initializers.init_weight(self.b.data, initial_bias)
+ initial_bias = initializers._get_initializer(initial_bias)
+ self.add_param('b', out_channels, initializer=initial_bias)
def __call__(self, x):
"""Applies N-dimensional convolution layer.
|
{"golden_diff": "diff --git a/chainer/links/connection/convolution_nd.py b/chainer/links/connection/convolution_nd.py\n--- a/chainer/links/connection/convolution_nd.py\n+++ b/chainer/links/connection/convolution_nd.py\n@@ -22,16 +22,11 @@\n ``pad=p`` and ``pad=(p, p, ..., p)`` are equivalent.\n initialW: Value used to initialize the filter weight. May be an\n initializer instance or another value that\n- :func:`~chainer.init_weight` helper function can take. This link\n- uses :func:`~chainer.init_weight` to initialize the filter weight\n- and passes the value of ``initialW`` to it as it is.\n+ :func:`~chainer.init_weight` helper function can take.\n initial_bias: Value used to initialize the bias vector. May be an\n initializer instance or another value except ``None`` that\n :func:`~chainer.init_weight` helper function can take. If ``None``\n- is given, this link does not use the bias vector. This link uses\n- :func:`~chainer.init_weight` to initialize the bias vector and\n- passes the value of ``initial_bias`` other than ``None`` to it as\n- it is.\n+ is given, this link does not use the bias vector.\n use_cudnn (bool): If ``True``, then this link uses cuDNN if available.\n See :func:`~chainer.functions.convolution_nd` for exact conditions\n of cuDNN availability.\n@@ -61,15 +56,17 @@\n self.use_cudnn = use_cudnn\n self.cover_all = cover_all\n \n+ super(ConvolutionND, self).__init__()\n+\n W_shape = (out_channels, in_channels) + ksize\n- super(ConvolutionND, self).__init__(W=W_shape)\n- initializers.init_weight(self.W.data, initialW)\n+ initialW = initializers._get_initializer(initialW)\n+ self.add_param('W', W_shape, initializer=initialW)\n \n if initial_bias is None:\n self.b = None\n else:\n- self.add_param('b', out_channels)\n- initializers.init_weight(self.b.data, initial_bias)\n+ initial_bias = initializers._get_initializer(initial_bias)\n+ self.add_param('b', out_channels, initializer=initial_bias)\n \n def __call__(self, x):\n \"\"\"Applies N-dimensional convolution layer.\n", "issue": "Test N-dimensional convolution link for dtypes of FP16 and FP64\nFollows #1279 and #1556.\n\nSince #1295 is now merged to master, we can add test for dtypes of FP16 and FP64 to N-dimensional convolution **LINK**.\n\n", "before_files": [{"content": "from chainer.functions.connection import convolution_nd\nfrom chainer import initializers\nfrom chainer import link\nfrom chainer.utils import conv_nd\n\n\nclass ConvolutionND(link.Link):\n \"\"\"N-dimensional convolution layer.\n\n This link wraps the :func:`~chainer.functions.convolution_nd` function and\n holds the filter weight and bias vector as parameters.\n\n Args:\n ndim (int): Number of spatial dimensions.\n in_channels (int): Number of channels of input arrays.\n out_channels (int): Number of channels of output arrays.\n ksize (int or tuple of ints): Size of filters (a.k.a. kernels).\n ``ksize=k`` and ``ksize=(k, k, ..., k)`` are equivalent.\n stride (int or tuple of ints): Stride of filter application.\n ``stride=s`` and ``stride=(s, s, ..., s)`` are equivalent.\n pad (int or tuple of ints): Spatial padding width for input arrays.\n ``pad=p`` and ``pad=(p, p, ..., p)`` are equivalent.\n initialW: Value used to initialize the filter weight. May be an\n initializer instance or another value that\n :func:`~chainer.init_weight` helper function can take. This link\n uses :func:`~chainer.init_weight` to initialize the filter weight\n and passes the value of ``initialW`` to it as it is.\n initial_bias: Value used to initialize the bias vector. 
May be an\n initializer instance or another value except ``None`` that\n :func:`~chainer.init_weight` helper function can take. If ``None``\n is given, this link does not use the bias vector. This link uses\n :func:`~chainer.init_weight` to initialize the bias vector and\n passes the value of ``initial_bias`` other than ``None`` to it as\n it is.\n use_cudnn (bool): If ``True``, then this link uses cuDNN if available.\n See :func:`~chainer.functions.convolution_nd` for exact conditions\n of cuDNN availability.\n cover_all (bool): If ``True``, all spatial locations are convoluted\n into some output pixels. It may make the output size larger.\n ``cover_all`` needs to be ``False`` if you want to use cuDNN.\n\n .. seealso::\n See :func:`~chainer.functions.convolution_nd` for the definition of\n N-dimensional convolution. See\n :func:`~chainer.functions.convolution_2d` for the definition of\n two-dimensional convolution.\n\n Attributes:\n W (~chainer.Variable): Weight parameter.\n b (~chainer.Variable): Bias parameter. If ``initial_bias`` is ``None``,\n set to ``None``.\n\n \"\"\"\n\n def __init__(self, ndim, in_channels, out_channels, ksize, stride=1, pad=0,\n initialW=None, initial_bias=None, use_cudnn=True,\n cover_all=False):\n ksize = conv_nd.as_tuple(ksize, ndim)\n self.stride = stride\n self.pad = pad\n self.use_cudnn = use_cudnn\n self.cover_all = cover_all\n\n W_shape = (out_channels, in_channels) + ksize\n super(ConvolutionND, self).__init__(W=W_shape)\n initializers.init_weight(self.W.data, initialW)\n\n if initial_bias is None:\n self.b = None\n else:\n self.add_param('b', out_channels)\n initializers.init_weight(self.b.data, initial_bias)\n\n def __call__(self, x):\n \"\"\"Applies N-dimensional convolution layer.\n\n Args:\n x (~chainer.Variable): Input image.\n\n Returns:\n ~chainer.Variable: Output of convolution.\n\n \"\"\"\n return convolution_nd.convolution_nd(\n x, self.W, self.b, self.stride, self.pad,\n use_cudnn=self.use_cudnn, cover_all=self.cover_all)\n", "path": "chainer/links/connection/convolution_nd.py"}]}
| 1,639 | 548 |
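For the dtype coverage requested in the chainer issue above, a forward test parameterized over FP16/FP32/FP64 might look roughly like the sketch below. It uses pytest parametrization for brevity (Chainer's own suite would normally use `chainer.testing.parameterize`), and the explicit cast of `W`/`b` to the dtype under test is an assumption made so the parameters match the input.

```python
# Rough dtype-parameterized forward test for the ConvolutionND link (sketch only).
import numpy
import pytest
from chainer.links import ConvolutionND


@pytest.mark.parametrize("dtype", [numpy.float16, numpy.float32, numpy.float64])
def test_convolution_nd_forward_dtype(dtype):
    link = ConvolutionND(ndim=3, in_channels=4, out_channels=2, ksize=3)
    # Cast parameters so they agree with the input dtype being exercised.
    link.W.data = link.W.data.astype(dtype)
    if link.b is not None:
        link.b.data = link.b.data.astype(dtype)
    x = numpy.random.uniform(-1, 1, (1, 4, 6, 6, 6)).astype(dtype)
    y = link(x)
    assert y.data.dtype == dtype
    assert y.data.shape == (1, 2, 4, 4, 4)
```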
gh_patches_debug_17963 | rasdani/github-patches | git_diff | frappe__frappe-11391 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SMTP: Exception Handling Resolution Order
In `frappe/frappe/email/smtp.py` In `SMTPServer.sess`:
```python
try:
    ....
except _socket.error as e:
    ....
except smtplib.SMTPAuthenticationError as e:
    ....
except smtplib.SMTPException:
    ....
```
Where:
`_socket.error` is an alias of `OSError`, which is defined as `class OSError(Exception):`, and the smtplib exceptions are defined as:
`class SMTPException(OSError):`
`class SMTPResponseException(SMTPException):`
`class SMTPAuthenticationError(SMTPResponseException):`
From the python documentation:
> A class in an except clause is compatible with an exception if it is the same class or a base class thereof (but not the other way around — an except clause listing a derived class is not compatible with a base class).
So, with the `except` clauses ordered as they are now, all of these exceptions will always be handled by the `except` clause for `_socket.error`, no matter what the actual error is.
</issue>
<code>
[start of frappe/email/smtp.py]
1 # Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors
2 # MIT License. See license.txt
3
4 from __future__ import unicode_literals
5 from six import reraise as raise_
6 import frappe
7 import smtplib
8 import email.utils
9 import _socket, sys
10 from frappe import _
11 from frappe.utils import cint, cstr, parse_addr
12
13 def send(email, append_to=None, retry=1):
14 """Deprecated: Send the message or add it to Outbox Email"""
15 def _send(retry):
16 try:
17 smtpserver = SMTPServer(append_to=append_to)
18
19 # validate is called in as_string
20 email_body = email.as_string()
21
22 smtpserver.sess.sendmail(email.sender, email.recipients + (email.cc or []), email_body)
23 except smtplib.SMTPSenderRefused:
24 frappe.throw(_("Invalid login or password"), title='Email Failed')
25 raise
26 except smtplib.SMTPRecipientsRefused:
27 frappe.msgprint(_("Invalid recipient address"), title='Email Failed')
28 raise
29 except (smtplib.SMTPServerDisconnected, smtplib.SMTPAuthenticationError):
30 if not retry:
31 raise
32 else:
33 retry = retry - 1
34 _send(retry)
35
36 _send(retry)
37
38 def get_outgoing_email_account(raise_exception_not_set=True, append_to=None, sender=None):
39 """Returns outgoing email account based on `append_to` or the default
40 outgoing account. If default outgoing account is not found, it will
41 try getting settings from `site_config.json`."""
42
43 sender_email_id = None
44 if sender:
45 sender_email_id = parse_addr(sender)[1]
46
47 if not getattr(frappe.local, "outgoing_email_account", None):
48 frappe.local.outgoing_email_account = {}
49
50 if not (frappe.local.outgoing_email_account.get(append_to)
51 or frappe.local.outgoing_email_account.get(sender_email_id)
52 or frappe.local.outgoing_email_account.get("default")):
53 email_account = None
54
55 if append_to:
56 # append_to is only valid when enable_incoming is checked
57
58 # in case of multiple Email Accounts with same append_to
59 # narrow it down based on email_id
60 email_account = _get_email_account({
61 "enable_outgoing": 1,
62 "enable_incoming": 1,
63 "append_to": append_to,
64 "email_id": sender_email_id
65 })
66
67 # else find the first Email Account with append_to
68 if not email_account:
69 email_account = _get_email_account({
70 "enable_outgoing": 1,
71 "enable_incoming": 1,
72 "append_to": append_to
73 })
74
75 if not email_account and sender_email_id:
76 # check if the sender has email account with enable_outgoing
77 email_account = _get_email_account({"enable_outgoing": 1, "email_id": sender_email_id})
78
79 if not email_account:
80 # sender don't have the outging email account
81 sender_email_id = None
82 email_account = get_default_outgoing_email_account(raise_exception_not_set=raise_exception_not_set)
83
84 if not email_account and raise_exception_not_set and cint(frappe.db.get_single_value('System Settings', 'setup_complete')):
85 frappe.throw(_("Please setup default Email Account from Setup > Email > Email Account"),
86 frappe.OutgoingEmailError)
87
88 if email_account:
89 if email_account.enable_outgoing and not getattr(email_account, 'from_site_config', False):
90 raise_exception = True
91 if email_account.smtp_server in ['localhost','127.0.0.1'] or email_account.no_smtp_authentication:
92 raise_exception = False
93 email_account.password = email_account.get_password(raise_exception=raise_exception)
94 email_account.default_sender = email.utils.formataddr((email_account.name, email_account.get("email_id")))
95
96 frappe.local.outgoing_email_account[append_to or sender_email_id or "default"] = email_account
97
98 return frappe.local.outgoing_email_account.get(append_to) \
99 or frappe.local.outgoing_email_account.get(sender_email_id) \
100 or frappe.local.outgoing_email_account.get("default")
101
102 def get_default_outgoing_email_account(raise_exception_not_set=True):
103 '''conf should be like:
104 {
105 "mail_server": "smtp.example.com",
106 "mail_port": 587,
107 "use_tls": 1,
108 "mail_login": "[email protected]",
109 "mail_password": "Super.Secret.Password",
110 "auto_email_id": "[email protected]",
111 "email_sender_name": "Example Notifications",
112 "always_use_account_email_id_as_sender": 0,
113 "always_use_account_name_as_sender_name": 0
114 }
115 '''
116 email_account = _get_email_account({"enable_outgoing": 1, "default_outgoing": 1})
117 if email_account:
118 email_account.password = email_account.get_password(raise_exception=False)
119
120 if not email_account and frappe.conf.get("mail_server"):
121 # from site_config.json
122 email_account = frappe.new_doc("Email Account")
123 email_account.update({
124 "smtp_server": frappe.conf.get("mail_server"),
125 "smtp_port": frappe.conf.get("mail_port"),
126
127 # legacy: use_ssl was used in site_config instead of use_tls, but meant the same thing
128 "use_tls": cint(frappe.conf.get("use_tls") or 0) or cint(frappe.conf.get("use_ssl") or 0),
129 "login_id": frappe.conf.get("mail_login"),
130 "email_id": frappe.conf.get("auto_email_id") or frappe.conf.get("mail_login") or '[email protected]',
131 "password": frappe.conf.get("mail_password"),
132 "always_use_account_email_id_as_sender": frappe.conf.get("always_use_account_email_id_as_sender", 0),
133 "always_use_account_name_as_sender_name": frappe.conf.get("always_use_account_name_as_sender_name", 0)
134 })
135 email_account.from_site_config = True
136 email_account.name = frappe.conf.get("email_sender_name") or "Frappe"
137
138 if not email_account and not raise_exception_not_set:
139 return None
140
141 if frappe.are_emails_muted():
142 # create a stub
143 email_account = frappe.new_doc("Email Account")
144 email_account.update({
145 "email_id": "[email protected]"
146 })
147
148 return email_account
149
150 def _get_email_account(filters):
151 name = frappe.db.get_value("Email Account", filters)
152 return frappe.get_doc("Email Account", name) if name else None
153
154 class SMTPServer:
155 def __init__(self, login=None, password=None, server=None, port=None, use_tls=None, append_to=None):
156 # get defaults from mail settings
157
158 self._sess = None
159 self.email_account = None
160 self.server = None
161 if server:
162 self.server = server
163 self.port = port
164 self.use_tls = cint(use_tls)
165 self.login = login
166 self.password = password
167
168 else:
169 self.setup_email_account(append_to)
170
171 def setup_email_account(self, append_to=None, sender=None):
172 self.email_account = get_outgoing_email_account(raise_exception_not_set=False, append_to=append_to, sender=sender)
173 if self.email_account:
174 self.server = self.email_account.smtp_server
175 self.login = (getattr(self.email_account, "login_id", None) or self.email_account.email_id)
176 if not self.email_account.no_smtp_authentication:
177 if self.email_account.ascii_encode_password:
178 self.password = frappe.safe_encode(self.email_account.password, 'ascii')
179 else:
180 self.password = self.email_account.password
181 else:
182 self.password = None
183 self.port = self.email_account.smtp_port
184 self.use_tls = self.email_account.use_tls
185 self.sender = self.email_account.email_id
186 self.always_use_account_email_id_as_sender = cint(self.email_account.get("always_use_account_email_id_as_sender"))
187 self.always_use_account_name_as_sender_name = cint(self.email_account.get("always_use_account_name_as_sender_name"))
188
189 @property
190 def sess(self):
191 """get session"""
192 if self._sess:
193 return self._sess
194
195 # check if email server specified
196 if not getattr(self, 'server'):
197 err_msg = _('Email Account not setup. Please create a new Email Account from Setup > Email > Email Account')
198 frappe.msgprint(err_msg)
199 raise frappe.OutgoingEmailError(err_msg)
200
201 try:
202 if self.use_tls and not self.port:
203 self.port = 587
204
205 self._sess = smtplib.SMTP(cstr(self.server or ""),
206 cint(self.port) or None)
207
208 if not self._sess:
209 err_msg = _('Could not connect to outgoing email server')
210 frappe.msgprint(err_msg)
211 raise frappe.OutgoingEmailError(err_msg)
212
213 if self.use_tls:
214 self._sess.ehlo()
215 self._sess.starttls()
216 self._sess.ehlo()
217
218 if self.login and self.password:
219 ret = self._sess.login(str(self.login or ""), str(self.password or ""))
220
221 # check if logged correctly
222 if ret[0]!=235:
223 frappe.msgprint(ret[1])
224 raise frappe.OutgoingEmailError(ret[1])
225
226 return self._sess
227
228 except _socket.error as e:
229 # Invalid mail server -- due to refusing connection
230 frappe.msgprint(_('Invalid Outgoing Mail Server or Port'))
231 traceback = sys.exc_info()[2]
232 raise_(frappe.ValidationError, e, traceback)
233
234 except smtplib.SMTPAuthenticationError as e:
235 frappe.msgprint(_("Invalid login or password"))
236 traceback = sys.exc_info()[2]
237 raise_(frappe.ValidationError, e, traceback)
238
239 except smtplib.SMTPException:
240 frappe.msgprint(_('Unable to send emails at this time'))
241 raise
242
[end of frappe/email/smtp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/frappe/email/smtp.py b/frappe/email/smtp.py
--- a/frappe/email/smtp.py
+++ b/frappe/email/smtp.py
@@ -225,12 +225,6 @@
return self._sess
- except _socket.error as e:
- # Invalid mail server -- due to refusing connection
- frappe.msgprint(_('Invalid Outgoing Mail Server or Port'))
- traceback = sys.exc_info()[2]
- raise_(frappe.ValidationError, e, traceback)
-
except smtplib.SMTPAuthenticationError as e:
frappe.msgprint(_("Invalid login or password"))
traceback = sys.exc_info()[2]
@@ -239,3 +233,9 @@
except smtplib.SMTPException:
frappe.msgprint(_('Unable to send emails at this time'))
raise
+
+ except _socket.error as e:
+ # Invalid mail server -- due to refusing connection
+ frappe.msgprint(_('Invalid Outgoing Mail Server or Port'))
+ traceback = sys.exc_info()[2]
+ raise_(frappe.ValidationError, e, traceback)
|
{"golden_diff": "diff --git a/frappe/email/smtp.py b/frappe/email/smtp.py\n--- a/frappe/email/smtp.py\n+++ b/frappe/email/smtp.py\n@@ -225,12 +225,6 @@\n \n \t\t\treturn self._sess\n \n-\t\texcept _socket.error as e:\n-\t\t\t# Invalid mail server -- due to refusing connection\n-\t\t\tfrappe.msgprint(_('Invalid Outgoing Mail Server or Port'))\n-\t\t\ttraceback = sys.exc_info()[2]\n-\t\t\traise_(frappe.ValidationError, e, traceback)\n-\n \t\texcept smtplib.SMTPAuthenticationError as e:\n \t\t\tfrappe.msgprint(_(\"Invalid login or password\"))\n \t\t\ttraceback = sys.exc_info()[2]\n@@ -239,3 +233,9 @@\n \t\texcept smtplib.SMTPException:\n \t\t\tfrappe.msgprint(_('Unable to send emails at this time'))\n \t\t\traise\n+\n+\t\texcept _socket.error as e:\n+\t\t\t# Invalid mail server -- due to refusing connection\n+\t\t\tfrappe.msgprint(_('Invalid Outgoing Mail Server or Port'))\n+\t\t\ttraceback = sys.exc_info()[2]\n+\t\t\traise_(frappe.ValidationError, e, traceback)\n", "issue": "SMTP: Exception Handling Resolution Order\nIn `frappe/frappe/email/smtp.py` In `SMTPServer.sess`:\r\n`\t\ttry:`\r\n`\t\t\t\t....`\r\n`\t\texcept `_socket.error` as e: `\r\n`\t\t\t.... `\r\n`\t\texcept smtplib.SMTPAuthenticationError as e: `\r\n`\t\t\t.... `\r\n`\t\texcept smtplib.SMTPException: `\r\n`\t\t\t.... `\r\n`\r\n\r\nWhere:\r\n`_socket.error` is `OSError` Which is defined: `class OSError(Exception):`\r\n`class SMTPException(OSError):`\r\n`class SMTPResponseException(SMTPException):`\r\n`class SMTPAuthenticationError(SMTPResponseException):`\r\n\r\nFrom the python documentation:\r\n\r\n> A class in an except clause is compatible with an exception if it is the same class or a base class thereof (but not the other way around \u2014 an except clause listing a derived class is not compatible with a base class).\r\n\r\nSo the way the except clauses are ordered now will always be handled by the `except` clause with `_socket.error` no matter what the error is.\n", "before_files": [{"content": "# Copyright (c) 2015, Frappe Technologies Pvt. Ltd. and Contributors\n# MIT License. See license.txt\n\nfrom __future__ import unicode_literals\nfrom six import reraise as raise_\nimport frappe\nimport smtplib\nimport email.utils\nimport _socket, sys\nfrom frappe import _\nfrom frappe.utils import cint, cstr, parse_addr\n\ndef send(email, append_to=None, retry=1):\n\t\"\"\"Deprecated: Send the message or add it to Outbox Email\"\"\"\n\tdef _send(retry):\n\t\ttry:\n\t\t\tsmtpserver = SMTPServer(append_to=append_to)\n\n\t\t\t# validate is called in as_string\n\t\t\temail_body = email.as_string()\n\n\t\t\tsmtpserver.sess.sendmail(email.sender, email.recipients + (email.cc or []), email_body)\n\t\texcept smtplib.SMTPSenderRefused:\n\t\t\tfrappe.throw(_(\"Invalid login or password\"), title='Email Failed')\n\t\t\traise\n\t\texcept smtplib.SMTPRecipientsRefused:\n\t\t\tfrappe.msgprint(_(\"Invalid recipient address\"), title='Email Failed')\n\t\t\traise\n\t\texcept (smtplib.SMTPServerDisconnected, smtplib.SMTPAuthenticationError):\n\t\t\tif not retry:\n\t\t\t\traise\n\t\t\telse:\n\t\t\t\tretry = retry - 1\n\t\t\t\t_send(retry)\n\n\t_send(retry)\n\ndef get_outgoing_email_account(raise_exception_not_set=True, append_to=None, sender=None):\n\t\"\"\"Returns outgoing email account based on `append_to` or the default\n\t\toutgoing account. 
If default outgoing account is not found, it will\n\t\ttry getting settings from `site_config.json`.\"\"\"\n\n\tsender_email_id = None\n\tif sender:\n\t\tsender_email_id = parse_addr(sender)[1]\n\n\tif not getattr(frappe.local, \"outgoing_email_account\", None):\n\t\tfrappe.local.outgoing_email_account = {}\n\n\tif not (frappe.local.outgoing_email_account.get(append_to)\n\t\tor frappe.local.outgoing_email_account.get(sender_email_id)\n\t\tor frappe.local.outgoing_email_account.get(\"default\")):\n\t\temail_account = None\n\n\t\tif append_to:\n\t\t\t# append_to is only valid when enable_incoming is checked\n\n\t\t\t# in case of multiple Email Accounts with same append_to\n\t\t\t# narrow it down based on email_id\n\t\t\temail_account = _get_email_account({\n\t\t\t\t\"enable_outgoing\": 1,\n\t\t\t\t\"enable_incoming\": 1,\n\t\t\t\t\"append_to\": append_to,\n\t\t\t\t\"email_id\": sender_email_id\n\t\t\t})\n\n\t\t\t# else find the first Email Account with append_to\n\t\t\tif not email_account:\n\t\t\t\temail_account = _get_email_account({\n\t\t\t\t\t\"enable_outgoing\": 1,\n\t\t\t\t\t\"enable_incoming\": 1,\n\t\t\t\t\t\"append_to\": append_to\n\t\t\t\t})\n\n\t\tif not email_account and sender_email_id:\n\t\t\t# check if the sender has email account with enable_outgoing\n\t\t\temail_account = _get_email_account({\"enable_outgoing\": 1, \"email_id\": sender_email_id})\n\n\t\tif not email_account:\n\t\t\t# sender don't have the outging email account\n\t\t\tsender_email_id = None\n\t\t\temail_account = get_default_outgoing_email_account(raise_exception_not_set=raise_exception_not_set)\n\n\t\tif not email_account and raise_exception_not_set and cint(frappe.db.get_single_value('System Settings', 'setup_complete')):\n\t\t\tfrappe.throw(_(\"Please setup default Email Account from Setup > Email > Email Account\"),\n\t\t\t\tfrappe.OutgoingEmailError)\n\n\t\tif email_account:\n\t\t\tif email_account.enable_outgoing and not getattr(email_account, 'from_site_config', False):\n\t\t\t\traise_exception = True\n\t\t\t\tif email_account.smtp_server in ['localhost','127.0.0.1'] or email_account.no_smtp_authentication:\n\t\t\t\t\traise_exception = False\n\t\t\t\temail_account.password = email_account.get_password(raise_exception=raise_exception)\n\t\t\temail_account.default_sender = email.utils.formataddr((email_account.name, email_account.get(\"email_id\")))\n\n\t\tfrappe.local.outgoing_email_account[append_to or sender_email_id or \"default\"] = email_account\n\n\treturn frappe.local.outgoing_email_account.get(append_to) \\\n\t\tor frappe.local.outgoing_email_account.get(sender_email_id) \\\n\t\tor frappe.local.outgoing_email_account.get(\"default\")\n\ndef get_default_outgoing_email_account(raise_exception_not_set=True):\n\t'''conf should be like:\n\t\t{\n\t\t \"mail_server\": \"smtp.example.com\",\n\t\t \"mail_port\": 587,\n\t\t \"use_tls\": 1,\n\t\t \"mail_login\": \"[email protected]\",\n\t\t \"mail_password\": \"Super.Secret.Password\",\n\t\t \"auto_email_id\": \"[email protected]\",\n\t\t \"email_sender_name\": \"Example Notifications\",\n\t\t \"always_use_account_email_id_as_sender\": 0,\n\t\t \"always_use_account_name_as_sender_name\": 0\n\t\t}\n\t'''\n\temail_account = _get_email_account({\"enable_outgoing\": 1, \"default_outgoing\": 1})\n\tif email_account:\n\t\temail_account.password = email_account.get_password(raise_exception=False)\n\n\tif not email_account and frappe.conf.get(\"mail_server\"):\n\t\t# from site_config.json\n\t\temail_account = frappe.new_doc(\"Email 
Account\")\n\t\temail_account.update({\n\t\t\t\"smtp_server\": frappe.conf.get(\"mail_server\"),\n\t\t\t\"smtp_port\": frappe.conf.get(\"mail_port\"),\n\n\t\t\t# legacy: use_ssl was used in site_config instead of use_tls, but meant the same thing\n\t\t\t\"use_tls\": cint(frappe.conf.get(\"use_tls\") or 0) or cint(frappe.conf.get(\"use_ssl\") or 0),\n\t\t\t\"login_id\": frappe.conf.get(\"mail_login\"),\n\t\t\t\"email_id\": frappe.conf.get(\"auto_email_id\") or frappe.conf.get(\"mail_login\") or '[email protected]',\n\t\t\t\"password\": frappe.conf.get(\"mail_password\"),\n\t\t\t\"always_use_account_email_id_as_sender\": frappe.conf.get(\"always_use_account_email_id_as_sender\", 0),\n\t\t\t\"always_use_account_name_as_sender_name\": frappe.conf.get(\"always_use_account_name_as_sender_name\", 0)\n\t\t})\n\t\temail_account.from_site_config = True\n\t\temail_account.name = frappe.conf.get(\"email_sender_name\") or \"Frappe\"\n\n\tif not email_account and not raise_exception_not_set:\n\t\treturn None\n\n\tif frappe.are_emails_muted():\n\t\t# create a stub\n\t\temail_account = frappe.new_doc(\"Email Account\")\n\t\temail_account.update({\n\t\t\t\"email_id\": \"[email protected]\"\n\t\t})\n\n\treturn email_account\n\ndef _get_email_account(filters):\n\tname = frappe.db.get_value(\"Email Account\", filters)\n\treturn frappe.get_doc(\"Email Account\", name) if name else None\n\nclass SMTPServer:\n\tdef __init__(self, login=None, password=None, server=None, port=None, use_tls=None, append_to=None):\n\t\t# get defaults from mail settings\n\n\t\tself._sess = None\n\t\tself.email_account = None\n\t\tself.server = None\n\t\tif server:\n\t\t\tself.server = server\n\t\t\tself.port = port\n\t\t\tself.use_tls = cint(use_tls)\n\t\t\tself.login = login\n\t\t\tself.password = password\n\n\t\telse:\n\t\t\tself.setup_email_account(append_to)\n\n\tdef setup_email_account(self, append_to=None, sender=None):\n\t\tself.email_account = get_outgoing_email_account(raise_exception_not_set=False, append_to=append_to, sender=sender)\n\t\tif self.email_account:\n\t\t\tself.server = self.email_account.smtp_server\n\t\t\tself.login = (getattr(self.email_account, \"login_id\", None) or self.email_account.email_id)\n\t\t\tif not self.email_account.no_smtp_authentication:\n\t\t\t\tif self.email_account.ascii_encode_password:\n\t\t\t\t\tself.password = frappe.safe_encode(self.email_account.password, 'ascii')\n\t\t\t\telse:\n\t\t\t\t\tself.password = self.email_account.password\n\t\t\telse:\n\t\t\t\tself.password = None\n\t\t\tself.port = self.email_account.smtp_port\n\t\t\tself.use_tls = self.email_account.use_tls\n\t\t\tself.sender = self.email_account.email_id\n\t\t\tself.always_use_account_email_id_as_sender = cint(self.email_account.get(\"always_use_account_email_id_as_sender\"))\n\t\t\tself.always_use_account_name_as_sender_name = cint(self.email_account.get(\"always_use_account_name_as_sender_name\"))\n\n\t@property\n\tdef sess(self):\n\t\t\"\"\"get session\"\"\"\n\t\tif self._sess:\n\t\t\treturn self._sess\n\n\t\t# check if email server specified\n\t\tif not getattr(self, 'server'):\n\t\t\terr_msg = _('Email Account not setup. 
Please create a new Email Account from Setup > Email > Email Account')\n\t\t\tfrappe.msgprint(err_msg)\n\t\t\traise frappe.OutgoingEmailError(err_msg)\n\n\t\ttry:\n\t\t\tif self.use_tls and not self.port:\n\t\t\t\tself.port = 587\n\n\t\t\tself._sess = smtplib.SMTP(cstr(self.server or \"\"),\n\t\t\t\tcint(self.port) or None)\n\n\t\t\tif not self._sess:\n\t\t\t\terr_msg = _('Could not connect to outgoing email server')\n\t\t\t\tfrappe.msgprint(err_msg)\n\t\t\t\traise frappe.OutgoingEmailError(err_msg)\n\n\t\t\tif self.use_tls:\n\t\t\t\tself._sess.ehlo()\n\t\t\t\tself._sess.starttls()\n\t\t\t\tself._sess.ehlo()\n\n\t\t\tif self.login and self.password:\n\t\t\t\tret = self._sess.login(str(self.login or \"\"), str(self.password or \"\"))\n\n\t\t\t\t# check if logged correctly\n\t\t\t\tif ret[0]!=235:\n\t\t\t\t\tfrappe.msgprint(ret[1])\n\t\t\t\t\traise frappe.OutgoingEmailError(ret[1])\n\n\t\t\treturn self._sess\n\n\t\texcept _socket.error as e:\n\t\t\t# Invalid mail server -- due to refusing connection\n\t\t\tfrappe.msgprint(_('Invalid Outgoing Mail Server or Port'))\n\t\t\ttraceback = sys.exc_info()[2]\n\t\t\traise_(frappe.ValidationError, e, traceback)\n\n\t\texcept smtplib.SMTPAuthenticationError as e:\n\t\t\tfrappe.msgprint(_(\"Invalid login or password\"))\n\t\t\ttraceback = sys.exc_info()[2]\n\t\t\traise_(frappe.ValidationError, e, traceback)\n\n\t\texcept smtplib.SMTPException:\n\t\t\tfrappe.msgprint(_('Unable to send emails at this time'))\n\t\t\traise\n", "path": "frappe/email/smtp.py"}]}
| 3,636 | 255 |
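The class-hierarchy point made in the frappe issue above is easy to reproduce in isolation; the following stand-alone snippet (an illustration, not code from the repository) shows that an earlier `except OSError` clause swallows `smtplib.SMTPAuthenticationError` before the more specific handlers are ever considered, which is why the patch moves the `_socket.error` clause last.

```python
# smtplib.SMTPAuthenticationError derives (via SMTPException) from OSError,
# so an `except OSError` listed first shadows the later, more specific clauses.
import smtplib


def classify(exc: BaseException) -> str:
    try:
        raise exc
    except OSError:                          # equivalent to `except _socket.error`
        return "handled by the socket-error clause"
    except smtplib.SMTPAuthenticationError:  # never reached with this ordering
        return "handled by the auth-error clause"


print(classify(smtplib.SMTPAuthenticationError(535, b"authentication failed")))
# -> "handled by the socket-error clause"
```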
gh_patches_debug_22607 | rasdani/github-patches | git_diff | Parsl__parsl-1314 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RepresentationMixin breaks on classes with no default parameters
```
from parsl.utils import RepresentationMixin
class A(RepresentationMixin):
def __init__(self, q):
self.q = q
x = A(q=4)
print(x)
```
gives:
```
$ python b.py
Traceback (most recent call last):
File "b.py", line 10, in <module>
print(x)
File "/home/benc/parsl/src/parsl/parsl/utils.py", line 193, in __repr__
defaults = dict(zip(reversed(argspec.args), reversed(argspec.defaults)))
TypeError: 'NoneType' object is not reversible
```
Changing `__init__` to:
```
def __init__(self, q=3):
```
fixes this.
At a guess, argspec.defaults is None rather than an empty sequence in the breaking case.
</issue>
<code>
[start of parsl/utils.py]
1 import inspect
2 import logging
3 import os
4 import shlex
5 import subprocess
6 import threading
7 import time
8 from contextlib import contextmanager
9 from functools import wraps
10
11 import parsl
12 from parsl.version import VERSION
13
14 logger = logging.getLogger(__name__)
15
16
17 def get_version():
18 version = parsl.__version__
19 work_tree = os.path.dirname(os.path.dirname(__file__))
20 git_dir = os.path.join(work_tree, '.git')
21 if os.path.exists(git_dir):
22 env = {'GIT_WORK_TREE': work_tree, 'GIT_DIR': git_dir}
23 try:
24 cmd = shlex.split('git rev-parse --short HEAD')
25 head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')
26 diff = subprocess.check_output(shlex.split('git diff HEAD'), env=env)
27 status = 'dirty' if diff else 'clean'
28 version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)
29 except Exception:
30 pass
31
32 return version
33
34
35 def get_all_checkpoints(rundir="runinfo"):
36 """Finds the checkpoints from all last runs.
37
38 Note that checkpoints are incremental, and this helper will not find
39 previous checkpoints from earlier than the most recent run. It probably
40 should be made to do so.
41
42 Kwargs:
43 - rundir(str) : Path to the runinfo directory
44
45 Returns:
46 - a list suitable for the checkpointFiles parameter of DataFlowKernel
47 constructor
48
49 """
50
51 if(not os.path.isdir(rundir)):
52 return []
53
54 dirs = sorted(os.listdir(rundir))
55
56 checkpoints = []
57
58 for runid in dirs:
59
60 checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid))
61
62 if os.path.isdir(checkpoint):
63 checkpoints.append(checkpoint)
64
65 return checkpoints
66
67
68 def get_last_checkpoint(rundir="runinfo"):
69 """Find the checkpoint from the last run, if one exists.
70
71 Note that checkpoints are incremental, and this helper will not find
72 previous checkpoints from earlier than the most recent run. It probably
73 should be made to do so.
74
75 Kwargs:
76 - rundir(str) : Path to the runinfo directory
77
78 Returns:
79 - a list suitable for checkpointFiles parameter of DataFlowKernel
80 constructor, with 0 or 1 elements
81
82 """
83 if not os.path.isdir(rundir):
84 return []
85
86 dirs = sorted(os.listdir(rundir))
87
88 if len(dirs) == 0:
89 return []
90
91 last_runid = dirs[-1]
92 last_checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, last_runid))
93
94 if(not(os.path.isdir(last_checkpoint))):
95 return []
96
97 return [last_checkpoint]
98
99
100 def timeout(seconds=None):
101 def decorator(func, *args, **kwargs):
102 @wraps(func)
103 def wrapper(*args, **kwargs):
104 t = threading.Thread(target=func, args=args, kwargs=kwargs, name="Timeout-Decorator")
105 t.start()
106 result = t.join(seconds)
107 if t.is_alive():
108 raise RuntimeError('timed out in {}'.format(func))
109 return result
110 return wrapper
111 return decorator
112
113
114 @contextmanager
115 def wait_for_file(path, seconds=10):
116 for i in range(0, int(seconds * 100)):
117 time.sleep(seconds / 100.)
118 if os.path.exists(path):
119 break
120 yield
121
122
123 @contextmanager
124 def time_limited_open(path, mode, seconds=1):
125 with wait_for_file(path, seconds):
126 logger.debug("wait_for_file yielded")
127 f = open(path, mode)
128 yield f
129 f.close()
130
131
132 def wtime_to_minutes(time_string):
133 ''' wtime_to_minutes
134
135 Convert standard wallclock time string to minutes.
136
137 Args:
138 - Time_string in HH:MM:SS format
139
140 Returns:
141 (int) minutes
142
143 '''
144 hours, mins, seconds = time_string.split(':')
145 total_mins = int(hours) * 60 + int(mins)
146 if total_mins < 1:
147 logger.warning("Time string '{}' parsed to {} minutes, less than 1".format(time_string, total_mins))
148 return total_mins
149
150
151 class RepresentationMixin(object):
152 """A mixin class for adding a __repr__ method.
153
154 The __repr__ method will return a string equivalent to the code used to instantiate
155 the child class, with any defaults included explicitly. The __max_width__ class variable
156 controls the maximum width of the representation string. If this width is exceeded,
157 the representation string will be split up, with one argument or keyword argument per line.
158
159 Any arguments or keyword arguments in the constructor must be defined as attributes, or
160 an AttributeError will be raised.
161
162 Examples
163 --------
164 >>> from parsl.utils import RepresentationMixin
165 >>> class Foo(RepresentationMixin):
166 def __init__(self, first, second, third='three', fourth='fourth'):
167 self.first = first
168 self.second = second
169 self.third = third
170 self.fourth = fourth
171 >>> bar = Foo(1, 'two', fourth='baz')
172 >>> bar
173 Foo(1, 'two', third='three', fourth='baz')
174 """
175 __max_width__ = 80
176
177 def __repr__(self):
178 init = self.__init__
179
180 # This test looks for a single layer of wrapping performed by
181 # functools.update_wrapper, commonly used in decorators. This will
182 # allow RepresentationMixin to see through a single such decorator
183 # applied to the __init__ method of a class, and find the underlying
184 # arguments. It will not see through multiple layers of such
185 # decorators, or cope with other decorators which do not use
186 # functools.update_wrapper.
187
188 if hasattr(init, '__wrapped__'):
189 init = init.__wrapped__
190
191 argspec = inspect.getfullargspec(init)
192 if len(argspec.args) > 1:
193 defaults = dict(zip(reversed(argspec.args), reversed(argspec.defaults)))
194 else:
195 defaults = {}
196
197 for arg in argspec.args[1:]:
198 if not hasattr(self, arg):
199 template = 'class {} uses {} in the constructor, but does not define it as an attribute'
200 raise AttributeError(template.format(self.__class__.__name__, arg))
201
202 args = [getattr(self, a) for a in argspec.args[1:-len(defaults)]]
203 kwargs = {key: getattr(self, key) for key in defaults}
204
205 def assemble_multiline(args, kwargs):
206 def indent(text):
207 lines = text.splitlines()
208 if len(lines) <= 1:
209 return text
210 return "\n".join(" " + l for l in lines).strip()
211 args = ["\n {},".format(indent(repr(a))) for a in args]
212 kwargs = ["\n {}={}".format(k, indent(repr(v)))
213 for k, v in sorted(kwargs.items())]
214
215 info = "".join(args) + ", ".join(kwargs)
216 return self.__class__.__name__ + "({}\n)".format(info)
217
218 def assemble_line(args, kwargs):
219 kwargs = ['{}={}'.format(k, repr(v)) for k, v in sorted(kwargs.items())]
220
221 info = ", ".join([repr(a) for a in args] + kwargs)
222 return self.__class__.__name__ + "({})".format(info)
223
224 if len(assemble_line(args, kwargs)) <= self.__class__.__max_width__:
225 return assemble_line(args, kwargs)
226 else:
227 return assemble_multiline(args, kwargs)
228
[end of parsl/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/utils.py b/parsl/utils.py
--- a/parsl/utils.py
+++ b/parsl/utils.py
@@ -189,7 +189,7 @@
init = init.__wrapped__
argspec = inspect.getfullargspec(init)
- if len(argspec.args) > 1:
+ if len(argspec.args) > 1 and argspec.defaults is not None:
defaults = dict(zip(reversed(argspec.args), reversed(argspec.defaults)))
else:
defaults = {}
@@ -199,7 +199,10 @@
template = 'class {} uses {} in the constructor, but does not define it as an attribute'
raise AttributeError(template.format(self.__class__.__name__, arg))
- args = [getattr(self, a) for a in argspec.args[1:-len(defaults)]]
+ if len(defaults) != 0:
+ args = [getattr(self, a) for a in argspec.args[1:-len(defaults)]]
+ else:
+ args = [getattr(self, a) for a in argspec.args[1:]]
kwargs = {key: getattr(self, key) for key in defaults}
def assemble_multiline(args, kwargs):
|
{"golden_diff": "diff --git a/parsl/utils.py b/parsl/utils.py\n--- a/parsl/utils.py\n+++ b/parsl/utils.py\n@@ -189,7 +189,7 @@\n init = init.__wrapped__\n \n argspec = inspect.getfullargspec(init)\n- if len(argspec.args) > 1:\n+ if len(argspec.args) > 1 and argspec.defaults is not None:\n defaults = dict(zip(reversed(argspec.args), reversed(argspec.defaults)))\n else:\n defaults = {}\n@@ -199,7 +199,10 @@\n template = 'class {} uses {} in the constructor, but does not define it as an attribute'\n raise AttributeError(template.format(self.__class__.__name__, arg))\n \n- args = [getattr(self, a) for a in argspec.args[1:-len(defaults)]]\n+ if len(defaults) != 0:\n+ args = [getattr(self, a) for a in argspec.args[1:-len(defaults)]]\n+ else:\n+ args = [getattr(self, a) for a in argspec.args[1:]]\n kwargs = {key: getattr(self, key) for key in defaults}\n \n def assemble_multiline(args, kwargs):\n", "issue": "RepresentationMixin breaks on classes with no default parameters\n```\r\nfrom parsl.utils import RepresentationMixin\r\n\r\nclass A(RepresentationMixin):\r\n\r\n def __init__(self, q):\r\n self.q = q\r\n\r\nx = A(q=4)\r\nprint(x)\r\n```\r\n\r\ngives:\r\n\r\n```\r\n$ python b.py \r\nTraceback (most recent call last):\r\n File \"b.py\", line 10, in <module>\r\n print(x)\r\n File \"/home/benc/parsl/src/parsl/parsl/utils.py\", line 193, in __repr__\r\n defaults = dict(zip(reversed(argspec.args), reversed(argspec.defaults)))\r\nTypeError: 'NoneType' object is not reversible\r\n```\r\n\r\nChanging `__init__` to:\r\n\r\n```\r\n def __init__(self, q=3):\r\n```\r\n\r\nfixes this.\r\n\r\nAt a guess, argspec.defaults is None rather than an empty sequence in the breaking case.\n", "before_files": [{"content": "import inspect\nimport logging\nimport os\nimport shlex\nimport subprocess\nimport threading\nimport time\nfrom contextlib import contextmanager\nfrom functools import wraps\n\nimport parsl\nfrom parsl.version import VERSION\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_version():\n version = parsl.__version__\n work_tree = os.path.dirname(os.path.dirname(__file__))\n git_dir = os.path.join(work_tree, '.git')\n if os.path.exists(git_dir):\n env = {'GIT_WORK_TREE': work_tree, 'GIT_DIR': git_dir}\n try:\n cmd = shlex.split('git rev-parse --short HEAD')\n head = subprocess.check_output(cmd, env=env).strip().decode('utf-8')\n diff = subprocess.check_output(shlex.split('git diff HEAD'), env=env)\n status = 'dirty' if diff else 'clean'\n version = '{v}-{head}-{status}'.format(v=VERSION, head=head, status=status)\n except Exception:\n pass\n\n return version\n\n\ndef get_all_checkpoints(rundir=\"runinfo\"):\n \"\"\"Finds the checkpoints from all last runs.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. 
It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for the checkpointFiles parameter of DataFlowKernel\n constructor\n\n \"\"\"\n\n if(not os.path.isdir(rundir)):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n checkpoints = []\n\n for runid in dirs:\n\n checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, runid))\n\n if os.path.isdir(checkpoint):\n checkpoints.append(checkpoint)\n\n return checkpoints\n\n\ndef get_last_checkpoint(rundir=\"runinfo\"):\n \"\"\"Find the checkpoint from the last run, if one exists.\n\n Note that checkpoints are incremental, and this helper will not find\n previous checkpoints from earlier than the most recent run. It probably\n should be made to do so.\n\n Kwargs:\n - rundir(str) : Path to the runinfo directory\n\n Returns:\n - a list suitable for checkpointFiles parameter of DataFlowKernel\n constructor, with 0 or 1 elements\n\n \"\"\"\n if not os.path.isdir(rundir):\n return []\n\n dirs = sorted(os.listdir(rundir))\n\n if len(dirs) == 0:\n return []\n\n last_runid = dirs[-1]\n last_checkpoint = os.path.abspath('{}/{}/checkpoint'.format(rundir, last_runid))\n\n if(not(os.path.isdir(last_checkpoint))):\n return []\n\n return [last_checkpoint]\n\n\ndef timeout(seconds=None):\n def decorator(func, *args, **kwargs):\n @wraps(func)\n def wrapper(*args, **kwargs):\n t = threading.Thread(target=func, args=args, kwargs=kwargs, name=\"Timeout-Decorator\")\n t.start()\n result = t.join(seconds)\n if t.is_alive():\n raise RuntimeError('timed out in {}'.format(func))\n return result\n return wrapper\n return decorator\n\n\n@contextmanager\ndef wait_for_file(path, seconds=10):\n for i in range(0, int(seconds * 100)):\n time.sleep(seconds / 100.)\n if os.path.exists(path):\n break\n yield\n\n\n@contextmanager\ndef time_limited_open(path, mode, seconds=1):\n with wait_for_file(path, seconds):\n logger.debug(\"wait_for_file yielded\")\n f = open(path, mode)\n yield f\n f.close()\n\n\ndef wtime_to_minutes(time_string):\n ''' wtime_to_minutes\n\n Convert standard wallclock time string to minutes.\n\n Args:\n - Time_string in HH:MM:SS format\n\n Returns:\n (int) minutes\n\n '''\n hours, mins, seconds = time_string.split(':')\n total_mins = int(hours) * 60 + int(mins)\n if total_mins < 1:\n logger.warning(\"Time string '{}' parsed to {} minutes, less than 1\".format(time_string, total_mins))\n return total_mins\n\n\nclass RepresentationMixin(object):\n \"\"\"A mixin class for adding a __repr__ method.\n\n The __repr__ method will return a string equivalent to the code used to instantiate\n the child class, with any defaults included explicitly. The __max_width__ class variable\n controls the maximum width of the representation string. 
If this width is exceeded,\n the representation string will be split up, with one argument or keyword argument per line.\n\n Any arguments or keyword arguments in the constructor must be defined as attributes, or\n an AttributeError will be raised.\n\n Examples\n --------\n >>> from parsl.utils import RepresentationMixin\n >>> class Foo(RepresentationMixin):\n def __init__(self, first, second, third='three', fourth='fourth'):\n self.first = first\n self.second = second\n self.third = third\n self.fourth = fourth\n >>> bar = Foo(1, 'two', fourth='baz')\n >>> bar\n Foo(1, 'two', third='three', fourth='baz')\n \"\"\"\n __max_width__ = 80\n\n def __repr__(self):\n init = self.__init__\n\n # This test looks for a single layer of wrapping performed by\n # functools.update_wrapper, commonly used in decorators. This will\n # allow RepresentationMixin to see through a single such decorator\n # applied to the __init__ method of a class, and find the underlying\n # arguments. It will not see through multiple layers of such\n # decorators, or cope with other decorators which do not use\n # functools.update_wrapper.\n\n if hasattr(init, '__wrapped__'):\n init = init.__wrapped__\n\n argspec = inspect.getfullargspec(init)\n if len(argspec.args) > 1:\n defaults = dict(zip(reversed(argspec.args), reversed(argspec.defaults)))\n else:\n defaults = {}\n\n for arg in argspec.args[1:]:\n if not hasattr(self, arg):\n template = 'class {} uses {} in the constructor, but does not define it as an attribute'\n raise AttributeError(template.format(self.__class__.__name__, arg))\n\n args = [getattr(self, a) for a in argspec.args[1:-len(defaults)]]\n kwargs = {key: getattr(self, key) for key in defaults}\n\n def assemble_multiline(args, kwargs):\n def indent(text):\n lines = text.splitlines()\n if len(lines) <= 1:\n return text\n return \"\\n\".join(\" \" + l for l in lines).strip()\n args = [\"\\n {},\".format(indent(repr(a))) for a in args]\n kwargs = [\"\\n {}={}\".format(k, indent(repr(v)))\n for k, v in sorted(kwargs.items())]\n\n info = \"\".join(args) + \", \".join(kwargs)\n return self.__class__.__name__ + \"({}\\n)\".format(info)\n\n def assemble_line(args, kwargs):\n kwargs = ['{}={}'.format(k, repr(v)) for k, v in sorted(kwargs.items())]\n\n info = \", \".join([repr(a) for a in args] + kwargs)\n return self.__class__.__name__ + \"({})\".format(info)\n\n if len(assemble_line(args, kwargs)) <= self.__class__.__max_width__:\n return assemble_line(args, kwargs)\n else:\n return assemble_multiline(args, kwargs)\n", "path": "parsl/utils.py"}]}
| 2,984 | 278 |
gh_patches_debug_36694
|
rasdani/github-patches
|
git_diff
|
mabel-dev__opteryx-1443
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🪲stats for distinct incorrect
### Thank you for taking the time to report a problem with Opteryx.
_To help us to respond to your request we ask that you try to provide the below detail about the bug._
**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._
**Expected behaviour** _A clear and concise description of what you expected to happen._
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
~~~sql
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
</issue>
<code>
[start of opteryx/operators/distinct_node.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 """
14 Distinct Node
15
16 This is a SQL Query Execution Plan Node.
17
18 This Node eliminates duplicate records.
19 """
20 import time
21 from typing import Generator
22
23 import pyarrow
24 import pyarrow.compute
25
26 from opteryx.models import QueryProperties
27 from opteryx.operators import BasePlanNode
28
29
30 class DistinctNode(BasePlanNode):
31 def __init__(self, properties: QueryProperties, **config):
32 super().__init__(properties=properties)
33 self._distinct_on = config.get("on")
34 if self._distinct_on:
35 self._distinct_on = [col.schema_column.identity for col in self._distinct_on]
36
37 @property
38 def config(self): # pragma: no cover
39 return ""
40
41 @property
42 def greedy(self): # pragma: no cover
43 return True
44
45 @property
46 def name(self): # pragma: no cover
47 return "Distinction"
48
49 def execute(self) -> Generator[pyarrow.Table, None, None]:
50
51 from opteryx.compiled.functions import HashSet
52 from opteryx.compiled.functions import distinct
53
54 # We create a HashSet outside the distinct call, this allows us to pass
55 # the hash to each run of the distinct which means we don't need to concat
56 # all of the tables together to return a result.
57 # The Cython distinct is about 8x faster on a 10 million row dataset with
58 # approx 85k distinct entries (4.8sec vs 0.8sec) and faster on a 177 record
59 # dataset with 7 distinct entries.
60 # Being able to run morsel-by-morsel means if we have a LIMIT clause, we can
61 # limit processing
62 hash_set = HashSet()
63
64 morsels = self._producers[0] # type:ignore
65
66 start = time.monotonic_ns()
67 for morsel in morsels.execute():
68 deduped, hash_set = distinct(
69 morsel, columns=self._distinct_on, seen_hashes=hash_set, return_seen_hashes=True
70 )
71 if deduped.num_rows > 0:
72 self.statistics.time_distincting += time.monotonic_ns() - start
73 yield deduped
74 start = time.monotonic_ns()
75
[end of opteryx/operators/distinct_node.py]
[start of opteryx/__version__.py]
1 __build__ = 296
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 14
31 _revision = 0
32 _status = VersionStatus.ALPHA
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
[end of opteryx/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 296
+__build__ = 298
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
diff --git a/opteryx/operators/distinct_node.py b/opteryx/operators/distinct_node.py
--- a/opteryx/operators/distinct_node.py
+++ b/opteryx/operators/distinct_node.py
@@ -29,10 +29,13 @@
class DistinctNode(BasePlanNode):
def __init__(self, properties: QueryProperties, **config):
+ from opteryx.compiled.functions import HashSet
+
super().__init__(properties=properties)
self._distinct_on = config.get("on")
if self._distinct_on:
self._distinct_on = [col.schema_column.identity for col in self._distinct_on]
+ self.hash_set = HashSet()
@property
def config(self): # pragma: no cover
@@ -48,7 +51,6 @@
def execute(self) -> Generator[pyarrow.Table, None, None]:
- from opteryx.compiled.functions import HashSet
from opteryx.compiled.functions import distinct
# We create a HashSet outside the distinct call, this allows us to pass
@@ -59,16 +61,17 @@
# dataset with 7 distinct entries.
# Being able to run morsel-by-morsel means if we have a LIMIT clause, we can
# limit processing
- hash_set = HashSet()
morsels = self._producers[0] # type:ignore
- start = time.monotonic_ns()
for morsel in morsels.execute():
- deduped, hash_set = distinct(
- morsel, columns=self._distinct_on, seen_hashes=hash_set, return_seen_hashes=True
+ start = time.monotonic_ns()
+ deduped, self.hash_set = distinct(
+ morsel,
+ columns=self._distinct_on,
+ seen_hashes=self.hash_set,
+ return_seen_hashes=True,
)
+ self.statistics.time_distincting += time.monotonic_ns() - start
if deduped.num_rows > 0:
- self.statistics.time_distincting += time.monotonic_ns() - start
yield deduped
- start = time.monotonic_ns()
|
{"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 296\n+__build__ = 298\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\ndiff --git a/opteryx/operators/distinct_node.py b/opteryx/operators/distinct_node.py\n--- a/opteryx/operators/distinct_node.py\n+++ b/opteryx/operators/distinct_node.py\n@@ -29,10 +29,13 @@\n \n class DistinctNode(BasePlanNode):\n def __init__(self, properties: QueryProperties, **config):\n+ from opteryx.compiled.functions import HashSet\n+\n super().__init__(properties=properties)\n self._distinct_on = config.get(\"on\")\n if self._distinct_on:\n self._distinct_on = [col.schema_column.identity for col in self._distinct_on]\n+ self.hash_set = HashSet()\n \n @property\n def config(self): # pragma: no cover\n@@ -48,7 +51,6 @@\n \n def execute(self) -> Generator[pyarrow.Table, None, None]:\n \n- from opteryx.compiled.functions import HashSet\n from opteryx.compiled.functions import distinct\n \n # We create a HashSet outside the distinct call, this allows us to pass\n@@ -59,16 +61,17 @@\n # dataset with 7 distinct entries.\n # Being able to run morsel-by-morsel means if we have a LIMIT clause, we can\n # limit processing\n- hash_set = HashSet()\n \n morsels = self._producers[0] # type:ignore\n \n- start = time.monotonic_ns()\n for morsel in morsels.execute():\n- deduped, hash_set = distinct(\n- morsel, columns=self._distinct_on, seen_hashes=hash_set, return_seen_hashes=True\n+ start = time.monotonic_ns()\n+ deduped, self.hash_set = distinct(\n+ morsel,\n+ columns=self._distinct_on,\n+ seen_hashes=self.hash_set,\n+ return_seen_hashes=True,\n )\n+ self.statistics.time_distincting += time.monotonic_ns() - start\n if deduped.num_rows > 0:\n- self.statistics.time_distincting += time.monotonic_ns() - start\n yield deduped\n- start = time.monotonic_ns()\n", "issue": "\ud83e\udeb2stats for distinct incorrect\n### Thank you for taking the time to report a problem with Opteryx.\r\n_To help us to respond to your request we ask that you try to provide the below detail about the bug._\r\n\r\n**Describe the bug** _A clear and specific description of what the bug is. 
What the error, incorrect or unexpected behaviour was._\r\n\r\n\r\n**Expected behaviour** _A clear and concise description of what you expected to happen._\r\n\r\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\n\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nDistinct Node\n\nThis is a SQL Query Execution Plan Node.\n\nThis Node eliminates duplicate records.\n\"\"\"\nimport time\nfrom typing import Generator\n\nimport pyarrow\nimport pyarrow.compute\n\nfrom opteryx.models import QueryProperties\nfrom opteryx.operators import BasePlanNode\n\n\nclass DistinctNode(BasePlanNode):\n def __init__(self, properties: QueryProperties, **config):\n super().__init__(properties=properties)\n self._distinct_on = config.get(\"on\")\n if self._distinct_on:\n self._distinct_on = [col.schema_column.identity for col in self._distinct_on]\n\n @property\n def config(self): # pragma: no cover\n return \"\"\n\n @property\n def greedy(self): # pragma: no cover\n return True\n\n @property\n def name(self): # pragma: no cover\n return \"Distinction\"\n\n def execute(self) -> Generator[pyarrow.Table, None, None]:\n\n from opteryx.compiled.functions import HashSet\n from opteryx.compiled.functions import distinct\n\n # We create a HashSet outside the distinct call, this allows us to pass\n # the hash to each run of the distinct which means we don't need to concat\n # all of the tables together to return a result.\n # The Cython distinct is about 8x faster on a 10 million row dataset with\n # approx 85k distinct entries (4.8sec vs 0.8sec) and faster on a 177 record\n # dataset with 7 distinct entries.\n # Being able to run morsel-by-morsel means if we have a LIMIT clause, we can\n # limit processing\n hash_set = HashSet()\n\n morsels = self._producers[0] # type:ignore\n\n start = time.monotonic_ns()\n for morsel in morsels.execute():\n deduped, hash_set = distinct(\n morsel, columns=self._distinct_on, seen_hashes=hash_set, return_seen_hashes=True\n )\n if deduped.num_rows > 0:\n self.statistics.time_distincting += time.monotonic_ns() - start\n yield deduped\n start = time.monotonic_ns()\n", "path": "opteryx/operators/distinct_node.py"}, {"content": "__build__ = 296\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations 
under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 14\n_revision = 0\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]}
| 1,837 | 586 |
gh_patches_debug_15665
|
rasdani/github-patches
|
git_diff
|
meltano__meltano-6562
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: No way to dismiss image scan alerts
### Meltano Version
NA
### Python Version
NA
### Bug scope
Other
### Operating System
NA
### Description
Currently we use `.github/actions/docker-build-scan-push/check_sarif.py` to analyze the SARIF report created from running `grype` to scan our Docker images. It parses the SARIF JSON file itself to check if there are any issues detected with a severity above some threshold in the range [0.0, 10.0].
Before running this check, we upload the SARIF results to GitHub, which stores them for our repository using the "code scanning" feature. From there, we can review them, dismiss them, and create issues to address them. [An example can be found here](https://github.com/meltano/meltano/security/code-scanning?query=ref%3Arefs%2Fpull%2F6410%2Fmerge+tool%3AGrype).
Our `check_sarif.py` script does not consider whether we've dismissed the issue via GitHub's "code scanning" feature, so we have no way to deem a detected issue acceptable, and have the Docker publish workflow pass. To fix this we should replace `check_sarif.py` with some steps that use [the GitHub code scanning API](https://docs.github.com/en/rest/code-scanning#list-code-scanning-alerts-for-a-repository) to check if there are any issues above some set severity level *that haven't been dismissed*.
### Code
_No response_
</issue>
<code>
[start of .github/actions/docker-build-scan-push/check_sarif.py]
1 """Check if the provided SARIF file has any violations at or above some severity level."""
2
3 from __future__ import annotations
4
5 import argparse
6 import json
7
8 DEFAULT_SEVERITY_CUTOFF = 4.0
9
10 parser = argparse.ArgumentParser()
11 parser.add_argument(
12 "sarif_path",
13 help="The path to the SARIF file to be checked.",
14 )
15 parser.add_argument(
16 "--severity-cutoff",
17 help="Violations with a severity >= this value result in an exit code of 1"
18 + " - must be a number in the range [0.0, 10.0].",
19 type=float,
20 default=DEFAULT_SEVERITY_CUTOFF,
21 )
22 args = parser.parse_args()
23
24 with open(args.sarif_path) as sarif_file:
25 sarif_data = json.load(sarif_file)
26
27 first_run = sarif_data["runs"][0]
28 triggered_rules = first_run["tool"]["driver"]["rules"]
29
30 exit( # noqa: WPS421
31 any(
32 float(rule["properties"]["security-severity"]) >= args.severity_cutoff
33 for rule in triggered_rules
34 )
35 )
36
[end of .github/actions/docker-build-scan-push/check_sarif.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/.github/actions/docker-build-scan-push/check_sarif.py b/.github/actions/docker-build-scan-push/check_sarif.py
deleted file mode 100644
--- a/.github/actions/docker-build-scan-push/check_sarif.py
+++ /dev/null
@@ -1,35 +0,0 @@
-"""Check if the provided SARIF file has any violations at or above some severity level."""
-
-from __future__ import annotations
-
-import argparse
-import json
-
-DEFAULT_SEVERITY_CUTOFF = 4.0
-
-parser = argparse.ArgumentParser()
-parser.add_argument(
- "sarif_path",
- help="The path to the SARIF file to be checked.",
-)
-parser.add_argument(
- "--severity-cutoff",
- help="Violations with a severity >= this value result in an exit code of 1"
- + " - must be a number in the range [0.0, 10.0].",
- type=float,
- default=DEFAULT_SEVERITY_CUTOFF,
-)
-args = parser.parse_args()
-
-with open(args.sarif_path) as sarif_file:
- sarif_data = json.load(sarif_file)
-
-first_run = sarif_data["runs"][0]
-triggered_rules = first_run["tool"]["driver"]["rules"]
-
-exit( # noqa: WPS421
- any(
- float(rule["properties"]["security-severity"]) >= args.severity_cutoff
- for rule in triggered_rules
- )
-)
|
{"golden_diff": "diff --git a/.github/actions/docker-build-scan-push/check_sarif.py b/.github/actions/docker-build-scan-push/check_sarif.py\ndeleted file mode 100644\n--- a/.github/actions/docker-build-scan-push/check_sarif.py\n+++ /dev/null\n@@ -1,35 +0,0 @@\n-\"\"\"Check if the provided SARIF file has any violations at or above some severity level.\"\"\"\n-\n-from __future__ import annotations\n-\n-import argparse\n-import json\n-\n-DEFAULT_SEVERITY_CUTOFF = 4.0\n-\n-parser = argparse.ArgumentParser()\n-parser.add_argument(\n- \"sarif_path\",\n- help=\"The path to the SARIF file to be checked.\",\n-)\n-parser.add_argument(\n- \"--severity-cutoff\",\n- help=\"Violations with a severity >= this value result in an exit code of 1\"\n- + \" - must be a number in the range [0.0, 10.0].\",\n- type=float,\n- default=DEFAULT_SEVERITY_CUTOFF,\n-)\n-args = parser.parse_args()\n-\n-with open(args.sarif_path) as sarif_file:\n- sarif_data = json.load(sarif_file)\n-\n-first_run = sarif_data[\"runs\"][0]\n-triggered_rules = first_run[\"tool\"][\"driver\"][\"rules\"]\n-\n-exit( # noqa: WPS421\n- any(\n- float(rule[\"properties\"][\"security-severity\"]) >= args.severity_cutoff\n- for rule in triggered_rules\n- )\n-)\n", "issue": "bug: No way to dismiss image scan alerts\n### Meltano Version\n\nNA\n\n### Python Version\n\nNA\n\n### Bug scope\n\nOther\n\n### Operating System\n\nNA\n\n### Description\n\nCurrently we use `.github/actions/docker-build-scan-push/check_sarif.py` to analyze the SARIF report created from running `grype` to scan our Docker images. It parses the SARIF JSON file itself to check if there are any issues detected with a severity above some threshold in the range [0.0, 10.0].\r\n\r\nBefore running this check, we upload the SARIF results to GitHub, which stores them for our repository using the \"code scanning\" feature. From there, we can review them, dismiss them, and create issues to address them. [An example can be found here](https://github.com/meltano/meltano/security/code-scanning?query=ref%3Arefs%2Fpull%2F6410%2Fmerge+tool%3AGrype).\r\n\r\nOur `check_sarif.py` script does not consider whether we've dismissed the issue via GitHub's \"code scanning\" feature, so we have no way to deem a detected issue acceptable, and have the Docker publish workflow pass. 
To fix this we should replace `check_sarif.py` with some steps that use [the GitHub code scanning API](https://docs.github.com/en/rest/code-scanning#list-code-scanning-alerts-for-a-repository) to check if there are any issues above some set severity level *that haven't been dismissed*.\n\n### Code\n\n_No response_\n", "before_files": [{"content": "\"\"\"Check if the provided SARIF file has any violations at or above some severity level.\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport json\n\nDEFAULT_SEVERITY_CUTOFF = 4.0\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\n \"sarif_path\",\n help=\"The path to the SARIF file to be checked.\",\n)\nparser.add_argument(\n \"--severity-cutoff\",\n help=\"Violations with a severity >= this value result in an exit code of 1\"\n + \" - must be a number in the range [0.0, 10.0].\",\n type=float,\n default=DEFAULT_SEVERITY_CUTOFF,\n)\nargs = parser.parse_args()\n\nwith open(args.sarif_path) as sarif_file:\n sarif_data = json.load(sarif_file)\n\nfirst_run = sarif_data[\"runs\"][0]\ntriggered_rules = first_run[\"tool\"][\"driver\"][\"rules\"]\n\nexit( # noqa: WPS421\n any(\n float(rule[\"properties\"][\"security-severity\"]) >= args.severity_cutoff\n for rule in triggered_rules\n )\n)\n", "path": ".github/actions/docker-build-scan-push/check_sarif.py"}]}
| 1,191 | 340 |