problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_9956 | rasdani/github-patches | git_diff | iterative__dvc-8823 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dvc repro: Tuples in parameters are always marked as modified
# Bug Report
<!--
## Issue name
Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.
Example: `repro: doesn't detect input changes`
-->
## Description
<!--
A clear and concise description of what the bug is.
-->
When tuples are used in parameters, dvc always marks them as modified.
```sh
dvc status
```
outputs
```yaml
train_model:
changed deps:
model/parameters.py:
modified: NGRAM_VECTORIZER_RANGE
```
regardless of whether `NGRAM_VECTORIZER_RANGE` has been changed
</issue>
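The root cause suggested by the fix further down in this entry is a plain type mismatch: a value read from a Python params file such as `model/parameters.py` comes back as a tuple, while the value recorded in `dvc.lock` is stored as a list, so a direct `!=` comparison always reports a change. A minimal sketch of that comparison (the parameter value is a hypothetical example, not taken from the report):

```python
# Hypothetical values mirroring the comparison in ParamsDependency.workspace_status().
actual = {"NGRAM_VECTORIZER_RANGE": (1, 3)}   # tuple, as read from the Python params file
info = {"NGRAM_VECTORIZER_RANGE": [1, 3]}     # list, as stored in dvc.lock

param = "NGRAM_VECTORIZER_RANGE"
print(actual[param] != info[param])           # True -> param is always flagged as "modified"

# The patched check treats a tuple whose list form matches the stored value as unchanged:
print(isinstance(actual[param], tuple) and list(actual[param]) == info[param])  # True -> skip
```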
<code>
[start of dvc/dependency/param.py]
1 import logging
2 import os
3 import typing
4 from collections import defaultdict
5 from typing import Dict, cast
6
7 import dpath.util
8 from voluptuous import Any
9
10 from dvc.exceptions import DvcException
11 from dvc.utils.serialize import ParseError, load_path
12 from dvc_data.hashfile.hash_info import HashInfo
13
14 from .base import Dependency
15
16 logger = logging.getLogger(__name__)
17
18
19 class MissingParamsError(DvcException):
20 pass
21
22
23 class MissingParamsFile(DvcException):
24 pass
25
26
27 class ParamsIsADirectoryError(DvcException):
28 pass
29
30
31 class BadParamFileError(DvcException):
32 pass
33
34
35 class ParamsDependency(Dependency):
36 PARAM_PARAMS = "params"
37 PARAM_SCHEMA = {PARAM_PARAMS: Any(dict, list, None)}
38 DEFAULT_PARAMS_FILE = "params.yaml"
39
40 def __init__(self, stage, path, params=None, repo=None):
41 self.params = list(params) if params else []
42 hash_info = HashInfo()
43 if isinstance(params, dict):
44 hash_info = HashInfo(
45 self.PARAM_PARAMS,
46 params, # type: ignore[arg-type]
47 )
48 repo = repo or stage.repo
49 path = path or os.path.join(repo.root_dir, self.DEFAULT_PARAMS_FILE)
50 super().__init__(stage, path, repo=repo)
51 self.hash_info = hash_info
52
53 def dumpd(self, **kwargs):
54 ret = super().dumpd()
55 if not self.hash_info:
56 ret[self.PARAM_PARAMS] = self.params or {}
57 return ret
58
59 def fill_values(self, values=None):
60 """Load params values dynamically."""
61 if values is None:
62 return
63
64 info = {}
65 if not self.params:
66 info.update(values)
67 for param in self.params:
68 if param in values:
69 info[param] = values[param]
70 self.hash_info = HashInfo(
71 self.PARAM_PARAMS,
72 info, # type: ignore[arg-type]
73 )
74
75 def read_params(
76 self, flatten: bool = True, **kwargs: typing.Any
77 ) -> Dict[str, typing.Any]:
78 try:
79 config = self.read_file()
80 except MissingParamsFile:
81 config = {}
82
83 if not self.params:
84 return config
85
86 ret = {}
87 if flatten:
88 for param in self.params:
89 try:
90 ret[param] = dpath.util.get(config, param, separator=".")
91 except KeyError:
92 continue
93 return ret
94
95 from dpath.util import merge
96
97 for param in self.params:
98 merge(
99 ret,
100 dpath.util.search(config, param, separator="."),
101 separator=".",
102 )
103 return ret
104
105 def workspace_status(self):
106 if not self.exists:
107 return {str(self): "deleted"}
108 if self.hash_info.value is None:
109 return {str(self): "new"}
110
111 from funcy import ldistinct
112
113 status: Dict[str, Any] = defaultdict(dict)
114 info = cast(dict, self.hash_info.value) if self.hash_info else {}
115 actual = self.read_params()
116
117 # NOTE: we want to preserve the order of params as specified in the
118 # status. In case of tracking the whole file, the order is top-level
119 # keys in the file and then the keys in the `info` from `dvc.lock`
120 # (which are alphabetically sorted).
121 params = self.params or ldistinct([*actual.keys(), *info.keys()])
122 for param in params:
123 if param not in actual:
124 st = "deleted"
125 elif param not in info:
126 st = "new"
127 elif actual[param] != info[param]:
128 st = "modified"
129 else:
130 assert actual[param] == info[param]
131 continue
132
133 status[str(self)][param] = st
134
135 return status
136
137 def status(self):
138 return self.workspace_status()
139
140 def validate_filepath(self):
141 if not self.exists:
142 raise MissingParamsFile(f"Parameters file '{self}' does not exist")
143 if self.isdir():
144 raise ParamsIsADirectoryError(
145 f"'{self}' is a directory, expected a parameters file"
146 )
147
148 def read_file(self):
149 self.validate_filepath()
150 try:
151 return load_path(self.fs_path, self.repo.fs)
152 except ParseError as exc:
153 raise BadParamFileError(
154 f"Unable to read parameters from '{self}'"
155 ) from exc
156
157 def get_hash(self):
158 info = self.read_params()
159
160 missing_params = set(self.params) - set(info.keys())
161 if missing_params:
162 raise MissingParamsError(
163 "Parameters '{}' are missing from '{}'.".format(
164 ", ".join(missing_params), self
165 )
166 )
167
168 return HashInfo(self.PARAM_PARAMS, info) # type: ignore[arg-type]
169
170 def save(self):
171 if not self.exists:
172 raise self.DoesNotExistError(self)
173
174 if not self.isfile() and not self.isdir():
175 raise self.IsNotFileOrDirError(self)
176
177 self.ignore()
178 self.hash_info = self.get_hash()
179
[end of dvc/dependency/param.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/dependency/param.py b/dvc/dependency/param.py
--- a/dvc/dependency/param.py
+++ b/dvc/dependency/param.py
@@ -125,9 +125,14 @@
elif param not in info:
st = "new"
elif actual[param] != info[param]:
- st = "modified"
+ if (
+ isinstance(actual[param], tuple)
+ and list(actual[param]) == info[param]
+ ):
+ continue
+ else:
+ st = "modified"
else:
- assert actual[param] == info[param]
continue
status[str(self)][param] = st
| {"golden_diff": "diff --git a/dvc/dependency/param.py b/dvc/dependency/param.py\n--- a/dvc/dependency/param.py\n+++ b/dvc/dependency/param.py\n@@ -125,9 +125,14 @@\n elif param not in info:\n st = \"new\"\n elif actual[param] != info[param]:\n- st = \"modified\"\n+ if (\n+ isinstance(actual[param], tuple)\n+ and list(actual[param]) == info[param]\n+ ):\n+ continue\n+ else:\n+ st = \"modified\"\n else:\n- assert actual[param] == info[param]\n continue\n \n status[str(self)][param] = st\n", "issue": "dvc repro: Tuples in parameters are always marked as modified\n# Bug Report\r\n\r\n<!--\r\n## Issue name\r\n\r\nIssue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug.\r\n\r\nExample: `repro: doesn't detect input changes`\r\n-->\r\n\r\n## Description\r\n\r\n<!--\r\nA clear and concise description of what the bug is.\r\n-->\r\nWhen tuples are used in parameters, dvc always marks them as modified.\r\n\r\n```sh\r\ndvc status\r\n```\r\noutputs\r\n```yaml\r\ntrain_model:\r\n changed deps:\r\n model/parameters.py: \r\n modified: NGRAM_VECTORIZER_RANGE\r\n```\r\n\r\nregardless of whether `NGRAM_VECTORIZER_RANGE` has been changed\r\n\n", "before_files": [{"content": "import logging\nimport os\nimport typing\nfrom collections import defaultdict\nfrom typing import Dict, cast\n\nimport dpath.util\nfrom voluptuous import Any\n\nfrom dvc.exceptions import DvcException\nfrom dvc.utils.serialize import ParseError, load_path\nfrom dvc_data.hashfile.hash_info import HashInfo\n\nfrom .base import Dependency\n\nlogger = logging.getLogger(__name__)\n\n\nclass MissingParamsError(DvcException):\n pass\n\n\nclass MissingParamsFile(DvcException):\n pass\n\n\nclass ParamsIsADirectoryError(DvcException):\n pass\n\n\nclass BadParamFileError(DvcException):\n pass\n\n\nclass ParamsDependency(Dependency):\n PARAM_PARAMS = \"params\"\n PARAM_SCHEMA = {PARAM_PARAMS: Any(dict, list, None)}\n DEFAULT_PARAMS_FILE = \"params.yaml\"\n\n def __init__(self, stage, path, params=None, repo=None):\n self.params = list(params) if params else []\n hash_info = HashInfo()\n if isinstance(params, dict):\n hash_info = HashInfo(\n self.PARAM_PARAMS,\n params, # type: ignore[arg-type]\n )\n repo = repo or stage.repo\n path = path or os.path.join(repo.root_dir, self.DEFAULT_PARAMS_FILE)\n super().__init__(stage, path, repo=repo)\n self.hash_info = hash_info\n\n def dumpd(self, **kwargs):\n ret = super().dumpd()\n if not self.hash_info:\n ret[self.PARAM_PARAMS] = self.params or {}\n return ret\n\n def fill_values(self, values=None):\n \"\"\"Load params values dynamically.\"\"\"\n if values is None:\n return\n\n info = {}\n if not self.params:\n info.update(values)\n for param in self.params:\n if param in values:\n info[param] = values[param]\n self.hash_info = HashInfo(\n self.PARAM_PARAMS,\n info, # type: ignore[arg-type]\n )\n\n def read_params(\n self, flatten: bool = True, **kwargs: typing.Any\n ) -> Dict[str, typing.Any]:\n try:\n config = self.read_file()\n except MissingParamsFile:\n config = {}\n\n if not self.params:\n return config\n\n ret = {}\n if flatten:\n for param in self.params:\n try:\n ret[param] = dpath.util.get(config, param, separator=\".\")\n except KeyError:\n continue\n return ret\n\n from dpath.util import merge\n\n for param in self.params:\n merge(\n ret,\n dpath.util.search(config, param, separator=\".\"),\n separator=\".\",\n )\n return ret\n\n def workspace_status(self):\n if not 
self.exists:\n return {str(self): \"deleted\"}\n if self.hash_info.value is None:\n return {str(self): \"new\"}\n\n from funcy import ldistinct\n\n status: Dict[str, Any] = defaultdict(dict)\n info = cast(dict, self.hash_info.value) if self.hash_info else {}\n actual = self.read_params()\n\n # NOTE: we want to preserve the order of params as specified in the\n # status. In case of tracking the whole file, the order is top-level\n # keys in the file and then the keys in the `info` from `dvc.lock`\n # (which are alphabetically sorted).\n params = self.params or ldistinct([*actual.keys(), *info.keys()])\n for param in params:\n if param not in actual:\n st = \"deleted\"\n elif param not in info:\n st = \"new\"\n elif actual[param] != info[param]:\n st = \"modified\"\n else:\n assert actual[param] == info[param]\n continue\n\n status[str(self)][param] = st\n\n return status\n\n def status(self):\n return self.workspace_status()\n\n def validate_filepath(self):\n if not self.exists:\n raise MissingParamsFile(f\"Parameters file '{self}' does not exist\")\n if self.isdir():\n raise ParamsIsADirectoryError(\n f\"'{self}' is a directory, expected a parameters file\"\n )\n\n def read_file(self):\n self.validate_filepath()\n try:\n return load_path(self.fs_path, self.repo.fs)\n except ParseError as exc:\n raise BadParamFileError(\n f\"Unable to read parameters from '{self}'\"\n ) from exc\n\n def get_hash(self):\n info = self.read_params()\n\n missing_params = set(self.params) - set(info.keys())\n if missing_params:\n raise MissingParamsError(\n \"Parameters '{}' are missing from '{}'.\".format(\n \", \".join(missing_params), self\n )\n )\n\n return HashInfo(self.PARAM_PARAMS, info) # type: ignore[arg-type]\n\n def save(self):\n if not self.exists:\n raise self.DoesNotExistError(self)\n\n if not self.isfile() and not self.isdir():\n raise self.IsNotFileOrDirError(self)\n\n self.ignore()\n self.hash_info = self.get_hash()\n", "path": "dvc/dependency/param.py"}]} | 2,219 | 151 |
gh_patches_debug_25469 | rasdani/github-patches | git_diff | ray-project__ray-9572 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[rllib] Train script overwrites framework field in experiment config argument
[rllib]
### What is the problem?
The framework setting passed to the training script via the config argument is ignored, because the training script uses separate arguments for this field (`--torch`, `--eager`). If neither of these args is passed to the script, the framework is set to `tf`, regardless of the options passed in the `--config` argument.
### Expected behavior
The script should use the default value for the `framework` field only if it is not already present in the `config` dict (it can be passed as a cmd argument or be loaded from a file).
### Reproduction (REQUIRED)
Run `rllib train --run DQN --env CartPole-v0 --config "{\"framework\": \"tfe\"}"` - script uses `tf` instead of `tfe`
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
</issue>
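The expected behaviour described above amounts to treating `--torch`/`--eager` as explicit overrides rather than unconditional defaults. A small, self-contained sketch of that resolution logic (simplified and hypothetical — it is not the actual patch, which edits `run()` in `rllib/train.py`):

```python
# Simplified illustration of "flags override config, config overrides the default".
def resolve_framework(config: dict, torch_flag: bool = False, eager_flag: bool = False) -> str:
    if torch_flag:
        return "torch"
    if eager_flag:
        return "tfe"
    return config.get("framework", "tf")  # keep a value supplied via --config or a YAML file

assert resolve_framework({"framework": "tfe"}) == "tfe"                  # the reported repro case
assert resolve_framework({}) == "tf"                                     # default when nothing is given
assert resolve_framework({"framework": "tfe"}, torch_flag=True) == "torch"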
<code>
[start of rllib/train.py]
1 #!/usr/bin/env python
2
3 import argparse
4 import os
5 from pathlib import Path
6 import yaml
7
8 import ray
9 from ray.cluster_utils import Cluster
10 from ray.tune.config_parser import make_parser
11 from ray.tune.result import DEFAULT_RESULTS_DIR
12 from ray.tune.resources import resources_to_json
13 from ray.tune.tune import _make_scheduler, run_experiments
14 from ray.rllib.utils.framework import try_import_tf, try_import_torch
15
16 # Try to import both backends for flag checking/warnings.
17 tf1, tf, tfv = try_import_tf()
18 torch, _ = try_import_torch()
19
20 EXAMPLE_USAGE = """
21 Training example via RLlib CLI:
22 rllib train --run DQN --env CartPole-v0
23
24 Grid search example via RLlib CLI:
25 rllib train -f tuned_examples/cartpole-grid-search-example.yaml
26
27 Grid search example via executable:
28 ./train.py -f tuned_examples/cartpole-grid-search-example.yaml
29
30 Note that -f overrides all other trial-specific command-line options.
31 """
32
33
34 def create_parser(parser_creator=None):
35 parser = make_parser(
36 parser_creator=parser_creator,
37 formatter_class=argparse.RawDescriptionHelpFormatter,
38 description="Train a reinforcement learning agent.",
39 epilog=EXAMPLE_USAGE)
40
41 # See also the base parser definition in ray/tune/config_parser.py
42 parser.add_argument(
43 "--ray-address",
44 default=None,
45 type=str,
46 help="Connect to an existing Ray cluster at this address instead "
47 "of starting a new one.")
48 parser.add_argument(
49 "--no-ray-ui",
50 action="store_true",
51 help="Whether to disable the Ray web ui.")
52 parser.add_argument(
53 "--local-mode",
54 action="store_true",
55 help="Whether to run ray with `local_mode=True`. "
56 "Only if --ray-num-nodes is not used.")
57 parser.add_argument(
58 "--ray-num-cpus",
59 default=None,
60 type=int,
61 help="--num-cpus to use if starting a new cluster.")
62 parser.add_argument(
63 "--ray-num-gpus",
64 default=None,
65 type=int,
66 help="--num-gpus to use if starting a new cluster.")
67 parser.add_argument(
68 "--ray-num-nodes",
69 default=None,
70 type=int,
71 help="Emulate multiple cluster nodes for debugging.")
72 parser.add_argument(
73 "--ray-redis-max-memory",
74 default=None,
75 type=int,
76 help="--redis-max-memory to use if starting a new cluster.")
77 parser.add_argument(
78 "--ray-memory",
79 default=None,
80 type=int,
81 help="--memory to use if starting a new cluster.")
82 parser.add_argument(
83 "--ray-object-store-memory",
84 default=None,
85 type=int,
86 help="--object-store-memory to use if starting a new cluster.")
87 parser.add_argument(
88 "--experiment-name",
89 default="default",
90 type=str,
91 help="Name of the subdirectory under `local_dir` to put results in.")
92 parser.add_argument(
93 "--local-dir",
94 default=DEFAULT_RESULTS_DIR,
95 type=str,
96 help="Local dir to save training results to. Defaults to '{}'.".format(
97 DEFAULT_RESULTS_DIR))
98 parser.add_argument(
99 "--upload-dir",
100 default="",
101 type=str,
102 help="Optional URI to sync training results to (e.g. s3://bucket).")
103 parser.add_argument(
104 "-v", action="store_true", help="Whether to use INFO level logging.")
105 parser.add_argument(
106 "-vv", action="store_true", help="Whether to use DEBUG level logging.")
107 parser.add_argument(
108 "--resume",
109 action="store_true",
110 help="Whether to attempt to resume previous Tune experiments.")
111 parser.add_argument(
112 "--torch",
113 action="store_true",
114 help="Whether to use PyTorch (instead of tf) as the DL framework.")
115 parser.add_argument(
116 "--eager",
117 action="store_true",
118 help="Whether to attempt to enable TF eager execution.")
119 parser.add_argument(
120 "--trace",
121 action="store_true",
122 help="Whether to attempt to enable tracing for eager mode.")
123 parser.add_argument(
124 "--env", default=None, type=str, help="The gym environment to use.")
125 parser.add_argument(
126 "--queue-trials",
127 action="store_true",
128 help=(
129 "Whether to queue trials when the cluster does not currently have "
130 "enough resources to launch one. This should be set to True when "
131 "running on an autoscaling cluster to enable automatic scale-up."))
132 parser.add_argument(
133 "-f",
134 "--config-file",
135 default=None,
136 type=str,
137 help="If specified, use config options from this file. Note that this "
138 "overrides any trial-specific options set via flags above.")
139 return parser
140
141
142 def run(args, parser):
143 if args.config_file:
144 with open(args.config_file) as f:
145 experiments = yaml.safe_load(f)
146 else:
147 # Note: keep this in sync with tune/config_parser.py
148 experiments = {
149 args.experiment_name: { # i.e. log to ~/ray_results/default
150 "run": args.run,
151 "checkpoint_freq": args.checkpoint_freq,
152 "checkpoint_at_end": args.checkpoint_at_end,
153 "keep_checkpoints_num": args.keep_checkpoints_num,
154 "checkpoint_score_attr": args.checkpoint_score_attr,
155 "local_dir": args.local_dir,
156 "resources_per_trial": (
157 args.resources_per_trial and
158 resources_to_json(args.resources_per_trial)),
159 "stop": args.stop,
160 "config": dict(args.config, env=args.env),
161 "restore": args.restore,
162 "num_samples": args.num_samples,
163 "upload_dir": args.upload_dir,
164 }
165 }
166
167 verbose = 1
168 for exp in experiments.values():
169 # Bazel makes it hard to find files specified in `args` (and `data`).
170 # Look for them here.
171 # NOTE: Some of our yaml files don't have a `config` section.
172 if exp.get("config", {}).get("input") and \
173 not os.path.exists(exp["config"]["input"]):
174 # This script runs in the ray/rllib dir.
175 rllib_dir = Path(__file__).parent
176 input_file = rllib_dir.absolute().joinpath(exp["config"]["input"])
177 exp["config"]["input"] = str(input_file)
178
179 if not exp.get("run"):
180 parser.error("the following arguments are required: --run")
181 if not exp.get("env") and not exp.get("config", {}).get("env"):
182 parser.error("the following arguments are required: --env")
183 if args.eager:
184 exp["config"]["framework"] = "tfe"
185 elif args.torch:
186 exp["config"]["framework"] = "torch"
187 else:
188 exp["config"]["framework"] = "tf"
189 if args.v:
190 exp["config"]["log_level"] = "INFO"
191 verbose = 2
192 if args.vv:
193 exp["config"]["log_level"] = "DEBUG"
194 verbose = 3
195 if args.trace:
196 if exp["config"]["framework"] != "tfe":
197 raise ValueError("Must enable --eager to enable tracing.")
198 exp["config"]["eager_tracing"] = True
199
200 if args.ray_num_nodes:
201 cluster = Cluster()
202 for _ in range(args.ray_num_nodes):
203 cluster.add_node(
204 num_cpus=args.ray_num_cpus or 1,
205 num_gpus=args.ray_num_gpus or 0,
206 object_store_memory=args.ray_object_store_memory,
207 memory=args.ray_memory,
208 redis_max_memory=args.ray_redis_max_memory)
209 ray.init(address=cluster.address)
210 else:
211 ray.init(
212 include_dashboard=not args.no_ray_ui,
213 address=args.ray_address,
214 object_store_memory=args.ray_object_store_memory,
215 memory=args.ray_memory,
216 redis_max_memory=args.ray_redis_max_memory,
217 num_cpus=args.ray_num_cpus,
218 num_gpus=args.ray_num_gpus,
219 local_mode=args.local_mode)
220
221 run_experiments(
222 experiments,
223 scheduler=_make_scheduler(args),
224 queue_trials=args.queue_trials,
225 resume=args.resume,
226 verbose=verbose,
227 concurrent=True)
228
229 ray.shutdown()
230
231
232 if __name__ == "__main__":
233 parser = create_parser()
234 args = parser.parse_args()
235 run(args, parser)
236
[end of rllib/train.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rllib/train.py b/rllib/train.py
--- a/rllib/train.py
+++ b/rllib/train.py
@@ -180,22 +180,23 @@
parser.error("the following arguments are required: --run")
if not exp.get("env") and not exp.get("config", {}).get("env"):
parser.error("the following arguments are required: --env")
- if args.eager:
- exp["config"]["framework"] = "tfe"
- elif args.torch:
+
+ if args.torch:
exp["config"]["framework"] = "torch"
- else:
- exp["config"]["framework"] = "tf"
+ elif args.eager:
+ exp["config"]["framework"] = "tfe"
+
+ if args.trace:
+ if exp["config"]["framework"] not in ["tf2", "tfe"]:
+ raise ValueError("Must enable --eager to enable tracing.")
+ exp["config"]["eager_tracing"] = True
+
if args.v:
exp["config"]["log_level"] = "INFO"
verbose = 2
if args.vv:
exp["config"]["log_level"] = "DEBUG"
verbose = 3
- if args.trace:
- if exp["config"]["framework"] != "tfe":
- raise ValueError("Must enable --eager to enable tracing.")
- exp["config"]["eager_tracing"] = True
if args.ray_num_nodes:
cluster = Cluster()
| {"golden_diff": "diff --git a/rllib/train.py b/rllib/train.py\n--- a/rllib/train.py\n+++ b/rllib/train.py\n@@ -180,22 +180,23 @@\n parser.error(\"the following arguments are required: --run\")\n if not exp.get(\"env\") and not exp.get(\"config\", {}).get(\"env\"):\n parser.error(\"the following arguments are required: --env\")\n- if args.eager:\n- exp[\"config\"][\"framework\"] = \"tfe\"\n- elif args.torch:\n+\n+ if args.torch:\n exp[\"config\"][\"framework\"] = \"torch\"\n- else:\n- exp[\"config\"][\"framework\"] = \"tf\"\n+ elif args.eager:\n+ exp[\"config\"][\"framework\"] = \"tfe\"\n+\n+ if args.trace:\n+ if exp[\"config\"][\"framework\"] not in [\"tf2\", \"tfe\"]:\n+ raise ValueError(\"Must enable --eager to enable tracing.\")\n+ exp[\"config\"][\"eager_tracing\"] = True\n+\n if args.v:\n exp[\"config\"][\"log_level\"] = \"INFO\"\n verbose = 2\n if args.vv:\n exp[\"config\"][\"log_level\"] = \"DEBUG\"\n verbose = 3\n- if args.trace:\n- if exp[\"config\"][\"framework\"] != \"tfe\":\n- raise ValueError(\"Must enable --eager to enable tracing.\")\n- exp[\"config\"][\"eager_tracing\"] = True\n \n if args.ray_num_nodes:\n cluster = Cluster()\n", "issue": "[rllib] Train script overwrites framework field in experiment config argument\n[rllib]\r\n\r\n### What is the problem?\r\n\r\nFramework setting passed to training script via config argument is ignored, because training script uses separate arguments for this field (`--torch`, `--eager`). If none of these args is passed to the script the framework is set to `tf`, regardless of options passed in `--config` argument.\r\n\r\n### Expected behavior\r\n\r\nScript should use default value for `framework` field only if it is not already present in `config` dict (it can be passed as cmd argument or be loaded from file).\r\n\r\n### Reproduction (REQUIRED)\r\nRun `rllib train --run DQN --env CartPole-v0 --config \"{\\\"framework\\\": \\\"tfe\\\"}\"` - script uses `tf` instead of `tfe`\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport argparse\nimport os\nfrom pathlib import Path\nimport yaml\n\nimport ray\nfrom ray.cluster_utils import Cluster\nfrom ray.tune.config_parser import make_parser\nfrom ray.tune.result import DEFAULT_RESULTS_DIR\nfrom ray.tune.resources import resources_to_json\nfrom ray.tune.tune import _make_scheduler, run_experiments\nfrom ray.rllib.utils.framework import try_import_tf, try_import_torch\n\n# Try to import both backends for flag checking/warnings.\ntf1, tf, tfv = try_import_tf()\ntorch, _ = try_import_torch()\n\nEXAMPLE_USAGE = \"\"\"\nTraining example via RLlib CLI:\n rllib train --run DQN --env CartPole-v0\n\nGrid search example via RLlib CLI:\n rllib train -f tuned_examples/cartpole-grid-search-example.yaml\n\nGrid search example via executable:\n ./train.py -f tuned_examples/cartpole-grid-search-example.yaml\n\nNote that -f overrides all other trial-specific command-line options.\n\"\"\"\n\n\ndef create_parser(parser_creator=None):\n parser = make_parser(\n parser_creator=parser_creator,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n description=\"Train a reinforcement learning agent.\",\n epilog=EXAMPLE_USAGE)\n\n # See also the base parser definition in ray/tune/config_parser.py\n parser.add_argument(\n \"--ray-address\",\n default=None,\n type=str,\n help=\"Connect to an 
existing Ray cluster at this address instead \"\n \"of starting a new one.\")\n parser.add_argument(\n \"--no-ray-ui\",\n action=\"store_true\",\n help=\"Whether to disable the Ray web ui.\")\n parser.add_argument(\n \"--local-mode\",\n action=\"store_true\",\n help=\"Whether to run ray with `local_mode=True`. \"\n \"Only if --ray-num-nodes is not used.\")\n parser.add_argument(\n \"--ray-num-cpus\",\n default=None,\n type=int,\n help=\"--num-cpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-gpus\",\n default=None,\n type=int,\n help=\"--num-gpus to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-num-nodes\",\n default=None,\n type=int,\n help=\"Emulate multiple cluster nodes for debugging.\")\n parser.add_argument(\n \"--ray-redis-max-memory\",\n default=None,\n type=int,\n help=\"--redis-max-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-memory\",\n default=None,\n type=int,\n help=\"--memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--ray-object-store-memory\",\n default=None,\n type=int,\n help=\"--object-store-memory to use if starting a new cluster.\")\n parser.add_argument(\n \"--experiment-name\",\n default=\"default\",\n type=str,\n help=\"Name of the subdirectory under `local_dir` to put results in.\")\n parser.add_argument(\n \"--local-dir\",\n default=DEFAULT_RESULTS_DIR,\n type=str,\n help=\"Local dir to save training results to. Defaults to '{}'.\".format(\n DEFAULT_RESULTS_DIR))\n parser.add_argument(\n \"--upload-dir\",\n default=\"\",\n type=str,\n help=\"Optional URI to sync training results to (e.g. s3://bucket).\")\n parser.add_argument(\n \"-v\", action=\"store_true\", help=\"Whether to use INFO level logging.\")\n parser.add_argument(\n \"-vv\", action=\"store_true\", help=\"Whether to use DEBUG level logging.\")\n parser.add_argument(\n \"--resume\",\n action=\"store_true\",\n help=\"Whether to attempt to resume previous Tune experiments.\")\n parser.add_argument(\n \"--torch\",\n action=\"store_true\",\n help=\"Whether to use PyTorch (instead of tf) as the DL framework.\")\n parser.add_argument(\n \"--eager\",\n action=\"store_true\",\n help=\"Whether to attempt to enable TF eager execution.\")\n parser.add_argument(\n \"--trace\",\n action=\"store_true\",\n help=\"Whether to attempt to enable tracing for eager mode.\")\n parser.add_argument(\n \"--env\", default=None, type=str, help=\"The gym environment to use.\")\n parser.add_argument(\n \"--queue-trials\",\n action=\"store_true\",\n help=(\n \"Whether to queue trials when the cluster does not currently have \"\n \"enough resources to launch one. This should be set to True when \"\n \"running on an autoscaling cluster to enable automatic scale-up.\"))\n parser.add_argument(\n \"-f\",\n \"--config-file\",\n default=None,\n type=str,\n help=\"If specified, use config options from this file. Note that this \"\n \"overrides any trial-specific options set via flags above.\")\n return parser\n\n\ndef run(args, parser):\n if args.config_file:\n with open(args.config_file) as f:\n experiments = yaml.safe_load(f)\n else:\n # Note: keep this in sync with tune/config_parser.py\n experiments = {\n args.experiment_name: { # i.e. 
log to ~/ray_results/default\n \"run\": args.run,\n \"checkpoint_freq\": args.checkpoint_freq,\n \"checkpoint_at_end\": args.checkpoint_at_end,\n \"keep_checkpoints_num\": args.keep_checkpoints_num,\n \"checkpoint_score_attr\": args.checkpoint_score_attr,\n \"local_dir\": args.local_dir,\n \"resources_per_trial\": (\n args.resources_per_trial and\n resources_to_json(args.resources_per_trial)),\n \"stop\": args.stop,\n \"config\": dict(args.config, env=args.env),\n \"restore\": args.restore,\n \"num_samples\": args.num_samples,\n \"upload_dir\": args.upload_dir,\n }\n }\n\n verbose = 1\n for exp in experiments.values():\n # Bazel makes it hard to find files specified in `args` (and `data`).\n # Look for them here.\n # NOTE: Some of our yaml files don't have a `config` section.\n if exp.get(\"config\", {}).get(\"input\") and \\\n not os.path.exists(exp[\"config\"][\"input\"]):\n # This script runs in the ray/rllib dir.\n rllib_dir = Path(__file__).parent\n input_file = rllib_dir.absolute().joinpath(exp[\"config\"][\"input\"])\n exp[\"config\"][\"input\"] = str(input_file)\n\n if not exp.get(\"run\"):\n parser.error(\"the following arguments are required: --run\")\n if not exp.get(\"env\") and not exp.get(\"config\", {}).get(\"env\"):\n parser.error(\"the following arguments are required: --env\")\n if args.eager:\n exp[\"config\"][\"framework\"] = \"tfe\"\n elif args.torch:\n exp[\"config\"][\"framework\"] = \"torch\"\n else:\n exp[\"config\"][\"framework\"] = \"tf\"\n if args.v:\n exp[\"config\"][\"log_level\"] = \"INFO\"\n verbose = 2\n if args.vv:\n exp[\"config\"][\"log_level\"] = \"DEBUG\"\n verbose = 3\n if args.trace:\n if exp[\"config\"][\"framework\"] != \"tfe\":\n raise ValueError(\"Must enable --eager to enable tracing.\")\n exp[\"config\"][\"eager_tracing\"] = True\n\n if args.ray_num_nodes:\n cluster = Cluster()\n for _ in range(args.ray_num_nodes):\n cluster.add_node(\n num_cpus=args.ray_num_cpus or 1,\n num_gpus=args.ray_num_gpus or 0,\n object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory)\n ray.init(address=cluster.address)\n else:\n ray.init(\n include_dashboard=not args.no_ray_ui,\n address=args.ray_address,\n object_store_memory=args.ray_object_store_memory,\n memory=args.ray_memory,\n redis_max_memory=args.ray_redis_max_memory,\n num_cpus=args.ray_num_cpus,\n num_gpus=args.ray_num_gpus,\n local_mode=args.local_mode)\n\n run_experiments(\n experiments,\n scheduler=_make_scheduler(args),\n queue_trials=args.queue_trials,\n resume=args.resume,\n verbose=verbose,\n concurrent=True)\n\n ray.shutdown()\n\n\nif __name__ == \"__main__\":\n parser = create_parser()\n args = parser.parse_args()\n run(args, parser)\n", "path": "rllib/train.py"}]} | 3,155 | 335 |
gh_patches_debug_31622 | rasdani/github-patches | git_diff | lightly-ai__lightly-425 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
lightly-upload fails with video dataset
# lightly-upload fails with video dataset
The bug was introduced in #417 and breaks image upload for video datasets, because we try to determine the file size of each image on disk.
Relevant lines:
https://github.com/lightly-ai/lightly/blob/faff18868914f6884dab73a00ec4940e35106f48/lightly/api/api_workflow_upload_dataset.py#L143
Possible fix: Wrap the file size extraction in a try-catch and use the old method as a fallback.
</issue>
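The patch further down in this entry takes a slightly different route than the try-catch suggested above: `get_filepath_from_filename` first checks whether the file actually exists in the input directory and only returns that path if it does; otherwise it dumps the (video-frame or torch-dataset) image to a temporary jpg. A condensed sketch of that fallback (function name shortened for illustration):

```python
# Condensed sketch of the fallback introduced in LightlyDataset.get_filepath_from_filename().
import os
import tempfile

def filepath_for(filename, image=None, input_dir=None):
    if input_dir is not None:
        path = os.path.join(input_dir, filename)
        if os.path.isfile(path):      # case 1: regular image dataset, file exists on disk
            return path
    if image is None:                 # video/torch datasets have no file to point at
        raise ValueError("image must not be None for video or torch datasets")
    filepath = os.path.join(tempfile.mkdtemp(), filename) + ".jpg"
    image.save(filepath)              # cases 2 and 3: write the frame out as a temporary jpg
    return filepath
```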
<code>
[start of lightly/data/dataset.py]
1 """ Lightly Dataset """
2
3 # Copyright (c) 2020. Lightly AG and its affiliates.
4 # All Rights Reserved
5
6 import os
7 import shutil
8 import tempfile
9
10 import PIL.Image
11 from PIL import Image
12 from typing import List, Union, Callable
13
14 import torch.utils.data as data
15 import torchvision.datasets as datasets
16 from torchvision import transforms
17
18 from lightly.data._helpers import _load_dataset
19 from lightly.data._helpers import DatasetFolder
20 from lightly.data._video import VideoDataset
21
22
23 def _get_filename_by_index(dataset, index):
24 """Default function which maps the index of an image to a filename.
25
26 """
27 if isinstance(dataset, datasets.ImageFolder):
28 # filename is the path of the image relative to the dataset root
29 full_path = dataset.imgs[index][0]
30 return os.path.relpath(full_path, dataset.root)
31 elif isinstance(dataset, DatasetFolder):
32 # filename is the path of the image relative to the dataset root
33 full_path = dataset.samples[index][0]
34 return os.path.relpath(full_path, dataset.root)
35 elif isinstance(dataset, VideoDataset):
36 # filename is constructed by the video dataset
37 return dataset.get_filename(index)
38 else:
39 # dummy to prevent crashes
40 return str(index)
41
42
43 def _ensure_dir(path):
44 """Makes sure that the directory at path exists.
45
46 """
47 dirname = os.path.dirname(path)
48 os.makedirs(dirname, exist_ok=True)
49
50
51 def _copy_image(input_dir, output_dir, filename):
52 """Copies an image from the input directory to the output directory.
53
54 """
55 source = os.path.join(input_dir, filename)
56 target = os.path.join(output_dir, filename)
57 _ensure_dir(target)
58 shutil.copyfile(source, target)
59
60
61 def _save_image(image, output_dir, filename, fmt):
62 """Saves an image in the output directory.
63
64 """
65 target = os.path.join(output_dir, filename)
66 _ensure_dir(target)
67 try:
68 # try to save the image with the specified format or
69 # derive the format from the filename (if format=None)
70 image.save(target, format=fmt)
71 except ValueError:
72 # could not determine format from filename
73 image.save(target, format='png')
74
75
76 def _dump_image(dataset, output_dir, filename, index, fmt):
77 """Saves a single image to the output directory.
78
79 Will copy the image from the input directory to the output directory
80 if possible. If not (e.g. for VideoDatasets), will load the image and
81 then save it to the output directory with the specified format.
82
83 """
84
85 if isinstance(dataset, datasets.ImageFolder):
86 # can safely copy the image from the input to the output directory
87 _copy_image(dataset.root, output_dir, filename)
88 elif isinstance(dataset, DatasetFolder):
89 # can safely copy the image from the input to the output directory
90 _copy_image(dataset.root, output_dir, filename)
91 else:
92 # need to load the image and save it to the output directory
93 image, _ = dataset[index]
94 _save_image(image, output_dir, filename, fmt)
95
96
97 class LightlyDataset:
98 """Provides a uniform data interface for the embedding models.
99
100 Should be used for all models and functions in the lightly package.
101 Returns a tuple (sample, target, fname) when accessed using __getitem__.
102
103 The LightlyDataset supports different input sources. You can use it
104 on a folder of images. You can also use it on a folder with subfolders
105 with images (ImageNet style). If the input_dir has subfolders each subfolder
106 gets its own target label. You can also work with videos (requires pyav).
107 If there are multiple videos in the input_dir each video gets a different
108 target label assigned. If input_dir contains images and videos
109 only the videos are used.
110
111 Can also be used in combination with the `from_torch_dataset` method
112 to load a dataset offered by torchvision (e.g. cifar10).
113
114 Args:
115 input_dir:
116 Path to directory holding the images or videos to load.
117 transform:
118 Image transforms (as in torchvision).
119 index_to_filename:
120 Function which takes the dataset and index as input and returns
121 the filename of the file at the index. If None, uses default.
122
123 Examples:
124 >>> # load a dataset consisting of images from a local folder
125 >>> # mydata/
126 >>> # `- img1.png
127 >>> # `- img2.png
128 >>> # `- ...
129 >>> import lightly.data as data
130 >>> dataset = data.LightlyDataset(input_dir='path/to/mydata/')
131 >>> sample, target, fname = dataset[0]
132 >>>
133 >>> # also works with subfolders
134 >>> # mydata/
135 >>> # `- subfolder1
136 >>> # `- img1.png
137 >>> # `- subfolder2
138 >>> # ...
139 >>>
140 >>> # also works with videos
141 >>> # mydata/
142 >>> # `- video1.mp4
143 >>> # `- video2.mp4
144 >>> # `- ...
145 """
146
147 def __init__(self,
148 input_dir: str,
149 transform: transforms.Compose = None,
150 index_to_filename: Callable[[datasets.VisionDataset, int], str] = None):
151
152 # can pass input_dir=None to create an "empty" dataset
153 self.input_dir = input_dir
154 if self.input_dir is not None:
155 self.dataset = _load_dataset(self.input_dir, transform)
156
157 # initialize function to get filename of image
158 self.index_to_filename = _get_filename_by_index
159 if index_to_filename is not None:
160 self.index_to_filename = index_to_filename
161
162 @classmethod
163 def from_torch_dataset(cls,
164 dataset,
165 transform=None,
166 index_to_filename=None):
167 """Builds a LightlyDataset from a PyTorch (or torchvision) dataset.
168
169 Args:
170 dataset:
171 PyTorch/torchvision dataset.
172 transform:
173 Image transforms (as in torchvision).
174 index_to_filename:
175 Function which takes the dataset and index as input and returns
176 the filename of the file at the index. If None, uses default.
177
178 Returns:
179 A LightlyDataset object.
180
181 Examples:
182 >>> # load cifar10 from torchvision
183 >>> import torchvision
184 >>> import lightly.data as data
185 >>> base = torchvision.datasets.CIFAR10(root='./')
186 >>> dataset = data.LightlyDataset.from_torch_dataset(base)
187
188 """
189 # create an "empty" dataset object
190 dataset_obj = cls(
191 None,
192 transform=transform,
193 index_to_filename=index_to_filename
194 )
195
196 # populate it with the torch dataset
197 dataset_obj.dataset = dataset
198 return dataset_obj
199
200 def __getitem__(self, index: int):
201 """Returns (sample, target, fname) of item at index.
202
203 Args:
204 index:
205 Index of the queried item.
206
207 Returns:
208 The image, target, and filename of the item at index.
209
210 """
211 fname = self.index_to_filename(self.dataset, index)
212 sample, target = self.dataset.__getitem__(index)
213
214 return sample, target, fname
215
216 def __len__(self):
217 """Returns the length of the dataset.
218
219 """
220 return len(self.dataset)
221
222 def __add__(self, other):
223 """Adds another item to the dataset.
224
225 """
226 raise NotImplementedError()
227
228 def get_filenames(self) -> List[str]:
229 """Returns all filenames in the dataset.
230
231 """
232 list_of_filenames = []
233 for index in range(len(self)):
234 fname = self.index_to_filename(self.dataset, index)
235 list_of_filenames.append(fname)
236 return list_of_filenames
237
238 def dump(self,
239 output_dir: str,
240 filenames: Union[List[str], None] = None,
241 format: Union[str, None] = None):
242 """Saves images in the dataset to the output directory.
243
244 Will copy the images from the input directory to the output directory
245 if possible. If not (e.g. for VideoDatasets), will load the images and
246 then save them to the output directory with the specified format.
247
248 Args:
249 output_dir:
250 Output directory where the image is stored.
251 filenames:
252 Filenames of the images to store. If None, stores all images.
253 format:
254 Image format. Can be any pillow image format (png, jpg, ...).
255 By default we try to use the same format as the input data. If
256 not possible (e.g. for videos) we dump the image
257 as a png image to prevent compression artifacts.
258
259 """
260
261 if self.dataset.transform is not None:
262 raise RuntimeError('Cannot dump dataset which applies transforms!')
263
264 # create directory if it doesn't exist yet
265 os.makedirs(output_dir, exist_ok=True)
266
267 # dump all the files if no filenames were passed, otherwise dump only
268 # the ones referenced in the list
269 if filenames is None:
270 indices = [i for i in range(self.__len__())]
271 filenames = self.get_filenames()
272 else:
273 indices = []
274 all_filenames = self.get_filenames()
275 for i in range(len(filenames)):
276 if filenames[i] in all_filenames:
277 indices.append(i)
278
279 # dump images
280 for i, filename in zip(indices, filenames):
281 _dump_image(self.dataset, output_dir, filename, i, fmt=format)
282
283 def get_filepath_from_filename(self, filename: str, image: PIL.Image.Image = None):
284 """Returns the filepath given the filename of the image
285
286 Args:
287 filename:
288 The filename of the image
289 image:
290 The image corresponding to the filename
291
292 Returns:
293
294 """
295 if hasattr(self, 'input_dir') and isinstance(self.input_dir, str):
296 return os.path.join(self.input_dir, filename)
297 else:
298 if image is None:
299 raise ValueError("This LightlyDataset was created from a torch dataset and thus has no input_dir."
300 "Thus you must provide the image to be able to save it and return the path to it.")
301 folder_path = tempfile.mkdtemp()
302 filepath = os.path.join(folder_path,filename) + '.jpg'
303 image.save(filepath)
304 return filepath
305
306
307 @property
308 def transform(self):
309 """Getter for the transform of the dataset.
310
311 """
312 return self.dataset.transform
313
314 @transform.setter
315 def transform(self, t):
316 """Setter for the transform of the dataset.
317
318 """
319 self.dataset.transform = t
320
[end of lightly/data/dataset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lightly/data/dataset.py b/lightly/data/dataset.py
--- a/lightly/data/dataset.py
+++ b/lightly/data/dataset.py
@@ -283,6 +283,10 @@
def get_filepath_from_filename(self, filename: str, image: PIL.Image.Image = None):
"""Returns the filepath given the filename of the image
+ There are three cases:
+ - The dataset is a regular dataset with the images in the input dir.
+ - The dataset is a video dataset, thus the images have to be saved in a temporary folder.
+ - The dataset is a torch dataset, thus the images have to be saved in a temporary folder.
Args:
filename:
The filename of the image
@@ -290,18 +294,25 @@
The image corresponding to the filename
Returns:
+ The filename to the image, either the exiting one (case 1) or a newly created jpg (case 2, 3)
"""
- if hasattr(self, 'input_dir') and isinstance(self.input_dir, str):
- return os.path.join(self.input_dir, filename)
- else:
- if image is None:
- raise ValueError("This LightlyDataset was created from a torch dataset and thus has no input_dir."
- "Thus you must provide the image to be able to save it and return the path to it.")
- folder_path = tempfile.mkdtemp()
- filepath = os.path.join(folder_path,filename) + '.jpg'
- image.save(filepath)
- return filepath
+
+ has_input_dir = hasattr(self, 'input_dir') and isinstance(self.input_dir, str)
+ if has_input_dir:
+ path_to_image = os.path.join(self.input_dir, filename)
+ if os.path.isfile(path_to_image):
+ # Case 1
+ return path_to_image
+
+ if image is None:
+ raise ValueError("The parameter image must not be None for VideoDatasets and TorchDatasets")
+
+ # Case 2 and 3
+ folder_path = tempfile.mkdtemp()
+ filepath = os.path.join(folder_path,filename) + '.jpg'
+ image.save(filepath)
+ return filepath
@property
| {"golden_diff": "diff --git a/lightly/data/dataset.py b/lightly/data/dataset.py\n--- a/lightly/data/dataset.py\n+++ b/lightly/data/dataset.py\n@@ -283,6 +283,10 @@\n def get_filepath_from_filename(self, filename: str, image: PIL.Image.Image = None):\n \"\"\"Returns the filepath given the filename of the image\n \n+ There are three cases:\n+ - The dataset is a regular dataset with the images in the input dir.\n+ - The dataset is a video dataset, thus the images have to be saved in a temporary folder.\n+ - The dataset is a torch dataset, thus the images have to be saved in a temporary folder.\n Args:\n filename:\n The filename of the image\n@@ -290,18 +294,25 @@\n The image corresponding to the filename\n \n Returns:\n+ The filename to the image, either the exiting one (case 1) or a newly created jpg (case 2, 3)\n \n \"\"\"\n- if hasattr(self, 'input_dir') and isinstance(self.input_dir, str):\n- return os.path.join(self.input_dir, filename)\n- else:\n- if image is None:\n- raise ValueError(\"This LightlyDataset was created from a torch dataset and thus has no input_dir.\"\n- \"Thus you must provide the image to be able to save it and return the path to it.\")\n- folder_path = tempfile.mkdtemp()\n- filepath = os.path.join(folder_path,filename) + '.jpg'\n- image.save(filepath)\n- return filepath\n+\n+ has_input_dir = hasattr(self, 'input_dir') and isinstance(self.input_dir, str)\n+ if has_input_dir:\n+ path_to_image = os.path.join(self.input_dir, filename)\n+ if os.path.isfile(path_to_image):\n+ # Case 1\n+ return path_to_image\n+\n+ if image is None:\n+ raise ValueError(\"The parameter image must not be None for VideoDatasets and TorchDatasets\")\n+\n+ # Case 2 and 3\n+ folder_path = tempfile.mkdtemp()\n+ filepath = os.path.join(folder_path,filename) + '.jpg'\n+ image.save(filepath)\n+ return filepath\n \n \n @property\n", "issue": "lightly-upload fails with video dataset\n# lightly-upload fails with video dataset\r\n\r\nBug introduced in #417 which breaks image upload for video datasets because we try to determine the file size of each image on disk.\r\n\r\nRelevant lines:\r\nhttps://github.com/lightly-ai/lightly/blob/faff18868914f6884dab73a00ec4940e35106f48/lightly/api/api_workflow_upload_dataset.py#L143\r\n\r\nPossible fix: Wrap the file size extraction in a try-catch and use the old method as a fallback.\n", "before_files": [{"content": "\"\"\" Lightly Dataset \"\"\"\n\n# Copyright (c) 2020. 
Lightly AG and its affiliates.\n# All Rights Reserved\n\nimport os\nimport shutil\nimport tempfile\n\nimport PIL.Image\nfrom PIL import Image\nfrom typing import List, Union, Callable\n\nimport torch.utils.data as data\nimport torchvision.datasets as datasets\nfrom torchvision import transforms\n\nfrom lightly.data._helpers import _load_dataset\nfrom lightly.data._helpers import DatasetFolder\nfrom lightly.data._video import VideoDataset\n\n\ndef _get_filename_by_index(dataset, index):\n \"\"\"Default function which maps the index of an image to a filename.\n\n \"\"\"\n if isinstance(dataset, datasets.ImageFolder):\n # filename is the path of the image relative to the dataset root\n full_path = dataset.imgs[index][0]\n return os.path.relpath(full_path, dataset.root)\n elif isinstance(dataset, DatasetFolder):\n # filename is the path of the image relative to the dataset root\n full_path = dataset.samples[index][0]\n return os.path.relpath(full_path, dataset.root)\n elif isinstance(dataset, VideoDataset):\n # filename is constructed by the video dataset\n return dataset.get_filename(index)\n else:\n # dummy to prevent crashes\n return str(index)\n\n\ndef _ensure_dir(path):\n \"\"\"Makes sure that the directory at path exists.\n\n \"\"\"\n dirname = os.path.dirname(path)\n os.makedirs(dirname, exist_ok=True)\n\n\ndef _copy_image(input_dir, output_dir, filename):\n \"\"\"Copies an image from the input directory to the output directory.\n\n \"\"\"\n source = os.path.join(input_dir, filename)\n target = os.path.join(output_dir, filename)\n _ensure_dir(target)\n shutil.copyfile(source, target)\n\n\ndef _save_image(image, output_dir, filename, fmt):\n \"\"\"Saves an image in the output directory.\n\n \"\"\"\n target = os.path.join(output_dir, filename)\n _ensure_dir(target)\n try:\n # try to save the image with the specified format or\n # derive the format from the filename (if format=None)\n image.save(target, format=fmt)\n except ValueError:\n # could not determine format from filename\n image.save(target, format='png')\n\n\ndef _dump_image(dataset, output_dir, filename, index, fmt):\n \"\"\"Saves a single image to the output directory.\n\n Will copy the image from the input directory to the output directory\n if possible. If not (e.g. for VideoDatasets), will load the image and\n then save it to the output directory with the specified format.\n\n \"\"\"\n\n if isinstance(dataset, datasets.ImageFolder):\n # can safely copy the image from the input to the output directory\n _copy_image(dataset.root, output_dir, filename)\n elif isinstance(dataset, DatasetFolder):\n # can safely copy the image from the input to the output directory\n _copy_image(dataset.root, output_dir, filename)\n else:\n # need to load the image and save it to the output directory\n image, _ = dataset[index]\n _save_image(image, output_dir, filename, fmt)\n\n\nclass LightlyDataset:\n \"\"\"Provides a uniform data interface for the embedding models.\n\n Should be used for all models and functions in the lightly package.\n Returns a tuple (sample, target, fname) when accessed using __getitem__.\n\n The LightlyDataset supports different input sources. You can use it\n on a folder of images. You can also use it on a folder with subfolders\n with images (ImageNet style). If the input_dir has subfolders each subfolder\n gets its own target label. You can also work with videos (requires pyav).\n If there are multiple videos in the input_dir each video gets a different\n target label assigned. 
If input_dir contains images and videos\n only the videos are used.\n\n Can also be used in combination with the `from_torch_dataset` method\n to load a dataset offered by torchvision (e.g. cifar10).\n\n Args:\n input_dir:\n Path to directory holding the images or videos to load.\n transform:\n Image transforms (as in torchvision).\n index_to_filename:\n Function which takes the dataset and index as input and returns\n the filename of the file at the index. If None, uses default.\n\n Examples:\n >>> # load a dataset consisting of images from a local folder\n >>> # mydata/\n >>> # `- img1.png\n >>> # `- img2.png\n >>> # `- ...\n >>> import lightly.data as data\n >>> dataset = data.LightlyDataset(input_dir='path/to/mydata/')\n >>> sample, target, fname = dataset[0]\n >>>\n >>> # also works with subfolders\n >>> # mydata/\n >>> # `- subfolder1\n >>> # `- img1.png\n >>> # `- subfolder2\n >>> # ...\n >>>\n >>> # also works with videos\n >>> # mydata/\n >>> # `- video1.mp4\n >>> # `- video2.mp4\n >>> # `- ...\n \"\"\"\n\n def __init__(self,\n input_dir: str,\n transform: transforms.Compose = None,\n index_to_filename: Callable[[datasets.VisionDataset, int], str] = None):\n\n # can pass input_dir=None to create an \"empty\" dataset\n self.input_dir = input_dir\n if self.input_dir is not None:\n self.dataset = _load_dataset(self.input_dir, transform)\n\n # initialize function to get filename of image\n self.index_to_filename = _get_filename_by_index\n if index_to_filename is not None:\n self.index_to_filename = index_to_filename\n\n @classmethod\n def from_torch_dataset(cls,\n dataset,\n transform=None,\n index_to_filename=None):\n \"\"\"Builds a LightlyDataset from a PyTorch (or torchvision) dataset.\n\n Args:\n dataset:\n PyTorch/torchvision dataset.\n transform:\n Image transforms (as in torchvision).\n index_to_filename:\n Function which takes the dataset and index as input and returns\n the filename of the file at the index. 
If None, uses default.\n\n Returns:\n A LightlyDataset object.\n\n Examples:\n >>> # load cifar10 from torchvision\n >>> import torchvision\n >>> import lightly.data as data\n >>> base = torchvision.datasets.CIFAR10(root='./')\n >>> dataset = data.LightlyDataset.from_torch_dataset(base)\n\n \"\"\"\n # create an \"empty\" dataset object\n dataset_obj = cls(\n None,\n transform=transform,\n index_to_filename=index_to_filename\n )\n\n # populate it with the torch dataset\n dataset_obj.dataset = dataset\n return dataset_obj\n\n def __getitem__(self, index: int):\n \"\"\"Returns (sample, target, fname) of item at index.\n\n Args:\n index:\n Index of the queried item.\n\n Returns:\n The image, target, and filename of the item at index.\n\n \"\"\"\n fname = self.index_to_filename(self.dataset, index)\n sample, target = self.dataset.__getitem__(index)\n\n return sample, target, fname\n\n def __len__(self):\n \"\"\"Returns the length of the dataset.\n\n \"\"\"\n return len(self.dataset)\n\n def __add__(self, other):\n \"\"\"Adds another item to the dataset.\n\n \"\"\"\n raise NotImplementedError()\n\n def get_filenames(self) -> List[str]:\n \"\"\"Returns all filenames in the dataset.\n\n \"\"\"\n list_of_filenames = []\n for index in range(len(self)):\n fname = self.index_to_filename(self.dataset, index)\n list_of_filenames.append(fname)\n return list_of_filenames\n\n def dump(self,\n output_dir: str,\n filenames: Union[List[str], None] = None,\n format: Union[str, None] = None):\n \"\"\"Saves images in the dataset to the output directory.\n\n Will copy the images from the input directory to the output directory\n if possible. If not (e.g. for VideoDatasets), will load the images and\n then save them to the output directory with the specified format.\n\n Args:\n output_dir:\n Output directory where the image is stored.\n filenames:\n Filenames of the images to store. If None, stores all images.\n format:\n Image format. Can be any pillow image format (png, jpg, ...).\n By default we try to use the same format as the input data. If\n not possible (e.g. 
for videos) we dump the image \n as a png image to prevent compression artifacts.\n\n \"\"\"\n\n if self.dataset.transform is not None:\n raise RuntimeError('Cannot dump dataset which applies transforms!')\n\n # create directory if it doesn't exist yet\n os.makedirs(output_dir, exist_ok=True)\n\n # dump all the files if no filenames were passed, otherwise dump only\n # the ones referenced in the list\n if filenames is None:\n indices = [i for i in range(self.__len__())]\n filenames = self.get_filenames()\n else:\n indices = []\n all_filenames = self.get_filenames()\n for i in range(len(filenames)):\n if filenames[i] in all_filenames:\n indices.append(i)\n\n # dump images\n for i, filename in zip(indices, filenames):\n _dump_image(self.dataset, output_dir, filename, i, fmt=format)\n\n def get_filepath_from_filename(self, filename: str, image: PIL.Image.Image = None):\n \"\"\"Returns the filepath given the filename of the image\n\n Args:\n filename:\n The filename of the image\n image:\n The image corresponding to the filename\n\n Returns:\n\n \"\"\"\n if hasattr(self, 'input_dir') and isinstance(self.input_dir, str):\n return os.path.join(self.input_dir, filename)\n else:\n if image is None:\n raise ValueError(\"This LightlyDataset was created from a torch dataset and thus has no input_dir.\"\n \"Thus you must provide the image to be able to save it and return the path to it.\")\n folder_path = tempfile.mkdtemp()\n filepath = os.path.join(folder_path,filename) + '.jpg'\n image.save(filepath)\n return filepath\n\n\n @property\n def transform(self):\n \"\"\"Getter for the transform of the dataset.\n\n \"\"\"\n return self.dataset.transform\n\n @transform.setter\n def transform(self, t):\n \"\"\"Setter for the transform of the dataset.\n\n \"\"\"\n self.dataset.transform = t\n", "path": "lightly/data/dataset.py"}]} | 3,796 | 494 |
gh_patches_debug_5546 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-5695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mitmdump jumps to 100% CPU when parent process exits
#### Problem Description
It took me two days to make this reproduce in isolation. I hope someone with Python skills can figure out what is happening here. Depending on what the root cause is this might not even be related to my funny architecture.
I'm spawning `mitmdump` from Node.js. If the node process exits mitmdump will be re-assigned to become a child of `systemd` (some unix wizardry). It will then immediately jump to 100% CPU and stay there. This _only_ happens when an addon is using at least one network event (go figure...). E.g. I'm using `client_connected` (works with `clientconnect` on v6 as well). If the addon is only using sth. like `running` the bug does not occur. Even better: if the addon originally only has "running" nothing bad happens. But if I then add a `client_connected` and save the file (and the addon is automatically reloaded) it will instantly jump to 100% CPU.
My guess is that it might be related to stdout and the switcheroo with the parent process? In my actual architecture the mitmdump process will poll the parent via gRPC every second and shutdown if it's gone. But the 100% CPU prevents that.
Update: while trying to write down the exact steps it turns out this might only reproduce via local venv and and not if you download the binary. I'm not sure, it's confusing. I'm confused. But I have video proof, so I'm not completely insane.
#### Steps to reproduce the behavior:
index.js
```js
const path = require('path');
const { spawn } = require('child_process');
function handleStdOut(data) {
console.log(`mitmdump stdout: ${data}`);
}
function handleStdError(data) {
console.error(`mitmdump stderr: ${data}`);
}
function handleExit(code) {
console.log(`mitm process exited with code ${code}`);
}
const mitm = spawn(
// Adjust this path
'/home/alex/Projects/Bandsalat/src/forks/mitmproxy/venv/bin/mitmdump',
['--quiet', '--set', 'connection_strategy=lazy', '--scripts', 'addon.py'],
{
detached: true,
windowsHide: true,
env: {
PYTHONUNBUFFERED: '1',
},
}
);
console.log(mitm.spawnargs);
mitm.unref();
mitm.on('exit', handleExit);
mitm.stdout.on('data', handleStdOut);
mitm.stderr.on('data', handleStdError);
```
addon.py
```py
class MyAddon:
def running(self):
print('running')
def client_connected(self, client):
print('client_connected')
addons = [
MyAddon()
]
```
1. I'm on Ubuntu
2. Adjust index.js to point to your local mitmproxy git venv
3. Launch `node index.js` (Node 14 or 16 work both for me)
4. Now open Chromium with mitmproxy configured. You don't need to enter any URL, Chromium will phone home anyway.
5. Keep Chromium open and ctrl+c the node process
6. Observe your fan getting louder and `top` showing mitmdump at 100% CPU
https://user-images.githubusercontent.com/679144/124594746-740a7080-de60-11eb-9ffb-a5fc4b3ba24a.mp4
#### System Information
Happens with both v6 and HEAD.
```
Mitmproxy: 7.0.0.dev (+492, commit af27556)
Python: 3.8.10
OpenSSL: OpenSSL 1.1.1i 8 Dec 2020
Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.29
```
</issue>
<code>
[start of mitmproxy/addons/termlog.py]
1 from __future__ import annotations
2 import asyncio
3 import logging
4 from typing import IO
5
6 import sys
7
8 from mitmproxy import ctx, log
9 from mitmproxy.utils import vt_codes
10
11
12 class TermLog:
13 def __init__(
14 self,
15 out: IO[str] | None = None
16 ):
17 self.logger = TermLogHandler(out)
18 self.logger.install()
19
20 def load(self, loader):
21 loader.add_option(
22 "termlog_verbosity", str, "info", "Log verbosity.", choices=log.LogLevels
23 )
24 self.logger.setLevel(logging.INFO)
25
26 def configure(self, updated):
27 if "termlog_verbosity" in updated:
28 self.logger.setLevel(ctx.options.termlog_verbosity.upper())
29
30 def done(self):
31 t = self._teardown()
32 try:
33 # try to delay teardown a bit.
34 asyncio.create_task(t)
35 except RuntimeError:
36 # no event loop, we're in a test.
37 asyncio.run(t)
38
39 async def _teardown(self):
40 self.logger.uninstall()
41
42
43 class TermLogHandler(log.MitmLogHandler):
44 def __init__(
45 self,
46 out: IO[str] | None = None
47 ):
48 super().__init__()
49 self.file: IO[str] = out or sys.stdout
50 self.has_vt_codes = vt_codes.ensure_supported(self.file)
51 self.formatter = log.MitmFormatter(self.has_vt_codes)
52
53 def emit(self, record: logging.LogRecord) -> None:
54 print(
55 self.format(record),
56 file=self.file
57 )
58
[end of mitmproxy/addons/termlog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/addons/termlog.py b/mitmproxy/addons/termlog.py
--- a/mitmproxy/addons/termlog.py
+++ b/mitmproxy/addons/termlog.py
@@ -51,7 +51,9 @@
self.formatter = log.MitmFormatter(self.has_vt_codes)
def emit(self, record: logging.LogRecord) -> None:
- print(
- self.format(record),
- file=self.file
- )
+ try:
+ print(self.format(record), file=self.file)
+ except OSError:
+ # We cannot print, exit immediately.
+ # See https://github.com/mitmproxy/mitmproxy/issues/4669
+ sys.exit(1)
| {"golden_diff": "diff --git a/mitmproxy/addons/termlog.py b/mitmproxy/addons/termlog.py\n--- a/mitmproxy/addons/termlog.py\n+++ b/mitmproxy/addons/termlog.py\n@@ -51,7 +51,9 @@\n self.formatter = log.MitmFormatter(self.has_vt_codes)\n \n def emit(self, record: logging.LogRecord) -> None:\n- print(\n- self.format(record),\n- file=self.file\n- )\n+ try:\n+ print(self.format(record), file=self.file)\n+ except OSError:\n+ # We cannot print, exit immediately.\n+ # See https://github.com/mitmproxy/mitmproxy/issues/4669\n+ sys.exit(1)\n", "issue": "mitmdump jumps to 100% CPU when parent process exits\n#### Problem Description\r\n\r\nIt took me two days to make this reproduce in isolation. I hope someone with Python skills can figure out what is happening here. Depending on what the root cause is this might not even be related to my funny architecture.\r\n\r\nI'm spawning `mitmdump` from Node.js. If the node process exits mitmdump will be re-assigned to become a child of `systemd` (some unix wizardry). It will then immediately jump to 100% CPU and stay there. This _only_ happens when an addon is using at least one network event (go figure...). E.g. I'm using `client_connected` (works with `clientconnect` on v6 as well). If the addon is only using sth. like `running` the bug does not occur. Even better: if the addon originally only has \"running\" nothing bad happens. But if I then add a `client_connected` and save the file (and the addon is automatically reloaded) it will instantly jump to 100% CPU.\r\n\r\nMy guess is that it might be related to stdout and the switcheroo with the parent process? In my actual architecture the mitmdump process will poll the parent via gRPC every second and shutdown if it's gone. But the 100% CPU prevents that.\r\n\r\nUpdate: while trying to write down the exact steps it turns out this might only reproduce via local venv and and not if you download the binary. I'm not sure, it's confusing. I'm confused. But I have video proof, so I'm not completely insane.\r\n\r\n#### Steps to reproduce the behavior:\r\n\r\nindex.js\r\n\r\n```js\r\nconst path = require('path');\r\nconst { spawn } = require('child_process');\r\n\r\nfunction handleStdOut(data) {\r\n console.log(`mitmdump stdout: ${data}`);\r\n}\r\n\r\nfunction handleStdError(data) {\r\n console.error(`mitmdump stderr: ${data}`);\r\n}\r\n\r\nfunction handleExit(code) {\r\n console.log(`mitm process exited with code ${code}`);\r\n}\r\n\r\nconst mitm = spawn(\r\n // Adjust this path\r\n '/home/alex/Projects/Bandsalat/src/forks/mitmproxy/venv/bin/mitmdump',\r\n ['--quiet', '--set', 'connection_strategy=lazy', '--scripts', 'addon.py'],\r\n {\r\n detached: true,\r\n windowsHide: true,\r\n env: {\r\n PYTHONUNBUFFERED: '1',\r\n },\r\n }\r\n);\r\n\r\nconsole.log(mitm.spawnargs);\r\n\r\nmitm.unref();\r\nmitm.on('exit', handleExit);\r\nmitm.stdout.on('data', handleStdOut);\r\nmitm.stderr.on('data', handleStdError);\r\n```\r\naddon.py\r\n\r\n```py\r\nclass MyAddon:\r\n def running(self):\r\n print('running')\r\n\r\n def client_connected(self, client):\r\n print('client_connected')\r\n\r\naddons = [\r\n MyAddon()\r\n]\r\n```\r\n\r\n1. I'm on Ubuntu\r\n2. Adjust index.js to point to your local mitmproxy git venv\r\n3. Launch `node index.js` (Node 14 or 16 work both for me)\r\n4. Now open Chromium with mitmproxy configured. You don't need to enter any URL, Chromium will phone home anyway.\r\n5. Keep Chromium open and ctrl+c the node process\r\n6. 
Observe your fan getting louder and `top` showing mitmdump at 100% CPU\r\n\r\nhttps://user-images.githubusercontent.com/679144/124594746-740a7080-de60-11eb-9ffb-a5fc4b3ba24a.mp4\r\n\r\n#### System Information\r\n\r\nHappens with both v6 and HEAD.\r\n\r\n```\r\nMitmproxy: 7.0.0.dev (+492, commit af27556)\r\nPython: 3.8.10\r\nOpenSSL: OpenSSL 1.1.1i 8 Dec 2020\r\nPlatform: Linux-5.8.0-59-generic-x86_64-with-glibc2.29\r\n```\r\n\n", "before_files": [{"content": "from __future__ import annotations\nimport asyncio\nimport logging\nfrom typing import IO\n\nimport sys\n\nfrom mitmproxy import ctx, log\nfrom mitmproxy.utils import vt_codes\n\n\nclass TermLog:\n def __init__(\n self,\n out: IO[str] | None = None\n ):\n self.logger = TermLogHandler(out)\n self.logger.install()\n\n def load(self, loader):\n loader.add_option(\n \"termlog_verbosity\", str, \"info\", \"Log verbosity.\", choices=log.LogLevels\n )\n self.logger.setLevel(logging.INFO)\n\n def configure(self, updated):\n if \"termlog_verbosity\" in updated:\n self.logger.setLevel(ctx.options.termlog_verbosity.upper())\n\n def done(self):\n t = self._teardown()\n try:\n # try to delay teardown a bit.\n asyncio.create_task(t)\n except RuntimeError:\n # no event loop, we're in a test.\n asyncio.run(t)\n\n async def _teardown(self):\n self.logger.uninstall()\n\n\nclass TermLogHandler(log.MitmLogHandler):\n def __init__(\n self,\n out: IO[str] | None = None\n ):\n super().__init__()\n self.file: IO[str] = out or sys.stdout\n self.has_vt_codes = vt_codes.ensure_supported(self.file)\n self.formatter = log.MitmFormatter(self.has_vt_codes)\n\n def emit(self, record: logging.LogRecord) -> None:\n print(\n self.format(record),\n file=self.file\n )\n", "path": "mitmproxy/addons/termlog.py"}]} | 1,867 | 167 |
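The golden diff above stops the 100% CPU spin by treating a failed write to stdout as a signal to shut down rather than something to retry. A simplified standalone sketch of the same idea — plain `logging` plus `sys`, with an invented handler name rather than mitmproxy's `TermLogHandler`:

```python
import logging
import sys


class ExitOnDeadStreamHandler(logging.StreamHandler):
    """Exit instead of retrying once the output stream can no longer be written."""

    def emit(self, record: logging.LogRecord) -> None:
        try:
            print(self.format(record), file=self.stream)
        except OSError:
            # The process that owned our stdout is gone (e.g. broken pipe);
            # there is nothing useful left to do, so leave immediately.
            sys.exit(1)


log = logging.getLogger("demo")
log.addHandler(ExitOnDeadStreamHandler(sys.stdout))
log.warning("still attached to a live terminal")
```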
gh_patches_debug_9900 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-777 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TensorBoardLogger creates another tfevents file.
## 🐛 Bug
TensorBoardLogger creates another tfevents file when fit() is running.
It seems that no metrics are logged in the redundant file, but it will be shown in TensorBoard as a `run`.
I don't do anything about loggers in my LightningModules.
**Expected file structure:**
```
|
|- save_dir
| |- name
| |- version_0
| |- events.out.tfevents.1579833025.ip-xxx-xxx-xxx-xxx.17584.0
| |- meta_tags.csv
|- train.py
```
**Observed file structure:**
```
|
|- save_dir
| |- name
| |- version_0
| |- 1579833032
| |- events.out.tfevents.1579833032.ip-xxx-xxx-xxx-xxx.17584.1
| |- events.out.tfevents.1579833025.ip-xxx-xxx-xxx-xxx.17584.0
| |- meta_tags.csv
|- train.py
```
### To Reproduce / Code sample
Basic training step of PyTorch Lightning:
```python
# define a logger
logger = TensorBoardLogger(
save_dir='runs',
name=args.model
)
# define callbacks
ckpt_path = Path(logger.experiment.log_dir) / 'ckpts'
checkpoint_callback = ModelCheckpoint(filepath=ckpt_path)
# instantiate trainer
trainer = Trainer(
logger=logger,
checkpoint_callback=checkpoint_callback,
gpus=args.gpus
)
# define a model
model = CoolModel(args)
# start training!
trainer.fit(model)
```
### Environment
- PyTorch Lightning Version (e.g., 1.0): `0.6.0`
- PyTorch Version (e.g., 1.0): `1.3.1`
- OS (e.g., Linux): `Ubuntu 16.04 LTS`
- How you installed PyTorch (`conda`, `pip`, source): `pip`
- Build command you used (if compiling from source):
- Python version: `3.7.4`
- CUDA/cuDNN version: `10.1`
</issue>
<code>
[start of pytorch_lightning/loggers/tensorboard.py]
1 import os
2 from warnings import warn
3 from argparse import Namespace
4 from pkg_resources import parse_version
5
6 import torch
7 import csv
8 from torch.utils.tensorboard import SummaryWriter
9
10 from .base import LightningLoggerBase, rank_zero_only
11
12
13 class TensorBoardLogger(LightningLoggerBase):
14 r"""
15
16 Log to local file system in TensorBoard format
17
18 Implemented using :class:`torch.utils.tensorboard.SummaryWriter`. Logs are saved to
19 `os.path.join(save_dir, name, version)`
20
21 Example
22 --------
23
24 .. code-block:: python
25
26 logger = TensorBoardLogger("tb_logs", name="my_model")
27 trainer = Trainer(logger=logger)
28 trainer.train(model)
29
30 Args:
31 save_dir (str): Save directory
32 name (str): Experiment name. Defaults to "default".
33 version (int): Experiment version. If version is not specified the logger inspects the save
34 directory for existing versions, then automatically assigns the next available version.
35 \**kwargs (dict): Other arguments are passed directly to the :class:`SummaryWriter` constructor.
36
37 """
38 NAME_CSV_TAGS = 'meta_tags.csv'
39
40 def __init__(self, save_dir, name="default", version=None, **kwargs):
41 super().__init__()
42 self.save_dir = save_dir
43 self._name = name
44 self._version = version
45
46 self._experiment = None
47 self.tags = {}
48 self.kwargs = kwargs
49
50 @property
51 def experiment(self):
52 r"""
53
54 Actual tensorboard object. To use tensorboard features do the following.
55
56 Example::
57
58 self.logger.experiment.some_tensorboard_function()
59
60 """
61 if self._experiment is not None:
62 return self._experiment
63
64 root_dir = os.path.join(self.save_dir, self.name)
65 os.makedirs(root_dir, exist_ok=True)
66 log_dir = os.path.join(root_dir, "version_" + str(self.version))
67 self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)
68 return self._experiment
69
70 @rank_zero_only
71 def log_hyperparams(self, params):
72 if params is None:
73 return
74
75 # in case converting from namespace
76 if isinstance(params, Namespace):
77 params = vars(params)
78 params = dict(params)
79
80 if parse_version(torch.__version__) < parse_version("1.3.0"):
81 warn(
82 f"Hyperparameter logging is not available for Torch version {torch.__version__}."
83 " Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable"
84 " hyperparameter logging."
85 )
86 else:
87 # `add_hparams` requires both - hparams and metric
88 self.experiment.add_hparams(hparam_dict=params, metric_dict={})
89 # some alternative should be added
90 self.tags.update(params)
91
92 @rank_zero_only
93 def log_metrics(self, metrics, step=None):
94 for k, v in metrics.items():
95 if isinstance(v, torch.Tensor):
96 v = v.item()
97 self.experiment.add_scalar(k, v, step)
98
99 @rank_zero_only
100 def save(self):
101 try:
102 self.experiment.flush()
103 except AttributeError:
104 # you are using PT version (<v1.2) which does not have implemented flush
105 self.experiment._get_file_writer().flush()
106
107 # create a preudo standard path ala test-tube
108 dir_path = os.path.join(self.save_dir, self.name, 'version_%s' % self.version)
109 if not os.path.isdir(dir_path):
110 dir_path = self.save_dir
111
112 # prepare the file path
113 meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)
114
115 # save the metatags file
116 with open(meta_tags_path, 'w', newline='') as csvfile:
117 fieldnames = ['key', 'value']
118 writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
119 writer.writerow({'key': 'key', 'value': 'value'})
120 for k, v in self.tags.items():
121 writer.writerow({'key': k, 'value': v})
122
123 @rank_zero_only
124 def finalize(self, status):
125 self.save()
126
127 @property
128 def name(self):
129 return self._name
130
131 @property
132 def version(self):
133 if self._version is None:
134 self._version = self._get_next_version()
135 return self._version
136
137 def _get_next_version(self):
138 root_dir = os.path.join(self.save_dir, self.name)
139 existing_versions = []
140 for d in os.listdir(root_dir):
141 if os.path.isdir(os.path.join(root_dir, d)) and d.startswith("version_"):
142 existing_versions.append(int(d.split("_")[1]))
143
144 if len(existing_versions) == 0:
145 return 0
146
147 return max(existing_versions) + 1
148
[end of pytorch_lightning/loggers/tensorboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_lightning/loggers/tensorboard.py b/pytorch_lightning/loggers/tensorboard.py
--- a/pytorch_lightning/loggers/tensorboard.py
+++ b/pytorch_lightning/loggers/tensorboard.py
@@ -84,8 +84,12 @@
" hyperparameter logging."
)
else:
- # `add_hparams` requires both - hparams and metric
- self.experiment.add_hparams(hparam_dict=params, metric_dict={})
+ from torch.utils.tensorboard.summary import hparams
+ exp, ssi, sei = hparams(params, {})
+ writer = self.experiment._get_file_writer()
+ writer.add_summary(exp)
+ writer.add_summary(ssi)
+ writer.add_summary(sei)
# some alternative should be added
self.tags.update(params)
| {"golden_diff": "diff --git a/pytorch_lightning/loggers/tensorboard.py b/pytorch_lightning/loggers/tensorboard.py\n--- a/pytorch_lightning/loggers/tensorboard.py\n+++ b/pytorch_lightning/loggers/tensorboard.py\n@@ -84,8 +84,12 @@\n \" hyperparameter logging.\"\n )\n else:\n- # `add_hparams` requires both - hparams and metric\n- self.experiment.add_hparams(hparam_dict=params, metric_dict={})\n+ from torch.utils.tensorboard.summary import hparams\n+ exp, ssi, sei = hparams(params, {})\n+ writer = self.experiment._get_file_writer()\n+ writer.add_summary(exp)\n+ writer.add_summary(ssi)\n+ writer.add_summary(sei)\n # some alternative should be added\n self.tags.update(params)\n", "issue": "TensorBoardLogger creates another tfevents file.\n## \ud83d\udc1b Bug\r\nTensorBoardLogger creates another tfevents file when fit() is running.\r\nIt seems that no metrics are logged in the redundant file, but it will be shown in TensorBoard as a `run`.\r\nI don't do anything about loggers in my LightningModules.\r\n\r\n\r\n**Expected file structure:**\r\n```\r\n|\r\n|- save_dir\r\n| |- name\r\n| |- version_0\r\n| |- events.out.tfevents.1579833025.ip-xxx-xxx-xxx-xxx.17584.0\r\n| |- meta_tags.csv\r\n|- train.py\r\n```\r\n\r\n**Observed file structure:**\r\n```\r\n|\r\n|- save_dir\r\n| |- name\r\n| |- version_0\r\n| |- 1579833032\r\n| |- events.out.tfevents.1579833032.ip-xxx-xxx-xxx-xxx.17584.1\r\n| |- events.out.tfevents.1579833025.ip-xxx-xxx-xxx-xxx.17584.0\r\n| |- meta_tags.csv\r\n|- train.py\r\n```\r\n\r\n### To Reproduce / Code sample\r\nBasic training step of PyTorch Lightning:\r\n```python\r\n# define a logger\r\nlogger = TensorBoardLogger(\r\n save_dir='runs',\r\n name=args.model\r\n)\r\n\r\n# define callbacks\r\nckpt_path = Path(logger.experiment.log_dir) / 'ckpts'\r\ncheckpoint_callback = ModelCheckpoint(filepath=ckpt_path)\r\n\r\n# instantiate trainer\r\ntrainer = Trainer(\r\n logger=logger,\r\n checkpoint_callback=checkpoint_callback,\r\n gpus=args.gpus\r\n)\r\n\r\n# define a model\r\nmodel = CoolModel(args)\r\n\r\n# start training!\r\ntrainer.fit(model)\r\n```\r\n\r\n### Environment\r\n\r\n - PyTorch Lightning Version (e.g., 1.0): `0.6.0`\r\n - PyTorch Version (e.g., 1.0): `1.3.1`\r\n - OS (e.g., Linux): `Ubuntu 16.04 LTS`\r\n - How you installed PyTorch (`conda`, `pip`, source): `pip`\r\n - Build command you used (if compiling from source):\r\n - Python version: `3.7.4`\r\n - CUDA/cuDNN version: `10.1`\n", "before_files": [{"content": "import os\nfrom warnings import warn\nfrom argparse import Namespace\nfrom pkg_resources import parse_version\n\nimport torch\nimport csv\nfrom torch.utils.tensorboard import SummaryWriter\n\nfrom .base import LightningLoggerBase, rank_zero_only\n\n\nclass TensorBoardLogger(LightningLoggerBase):\n r\"\"\"\n\n Log to local file system in TensorBoard format\n\n Implemented using :class:`torch.utils.tensorboard.SummaryWriter`. Logs are saved to\n `os.path.join(save_dir, name, version)`\n\n Example\n --------\n\n .. code-block:: python\n\n logger = TensorBoardLogger(\"tb_logs\", name=\"my_model\")\n trainer = Trainer(logger=logger)\n trainer.train(model)\n\n Args:\n save_dir (str): Save directory\n name (str): Experiment name. Defaults to \"default\".\n version (int): Experiment version. 
If version is not specified the logger inspects the save\n directory for existing versions, then automatically assigns the next available version.\n \\**kwargs (dict): Other arguments are passed directly to the :class:`SummaryWriter` constructor.\n\n \"\"\"\n NAME_CSV_TAGS = 'meta_tags.csv'\n\n def __init__(self, save_dir, name=\"default\", version=None, **kwargs):\n super().__init__()\n self.save_dir = save_dir\n self._name = name\n self._version = version\n\n self._experiment = None\n self.tags = {}\n self.kwargs = kwargs\n\n @property\n def experiment(self):\n r\"\"\"\n\n Actual tensorboard object. To use tensorboard features do the following.\n\n Example::\n\n self.logger.experiment.some_tensorboard_function()\n\n \"\"\"\n if self._experiment is not None:\n return self._experiment\n\n root_dir = os.path.join(self.save_dir, self.name)\n os.makedirs(root_dir, exist_ok=True)\n log_dir = os.path.join(root_dir, \"version_\" + str(self.version))\n self._experiment = SummaryWriter(log_dir=log_dir, **self.kwargs)\n return self._experiment\n\n @rank_zero_only\n def log_hyperparams(self, params):\n if params is None:\n return\n\n # in case converting from namespace\n if isinstance(params, Namespace):\n params = vars(params)\n params = dict(params)\n\n if parse_version(torch.__version__) < parse_version(\"1.3.0\"):\n warn(\n f\"Hyperparameter logging is not available for Torch version {torch.__version__}.\"\n \" Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable\"\n \" hyperparameter logging.\"\n )\n else:\n # `add_hparams` requires both - hparams and metric\n self.experiment.add_hparams(hparam_dict=params, metric_dict={})\n # some alternative should be added\n self.tags.update(params)\n\n @rank_zero_only\n def log_metrics(self, metrics, step=None):\n for k, v in metrics.items():\n if isinstance(v, torch.Tensor):\n v = v.item()\n self.experiment.add_scalar(k, v, step)\n\n @rank_zero_only\n def save(self):\n try:\n self.experiment.flush()\n except AttributeError:\n # you are using PT version (<v1.2) which does not have implemented flush\n self.experiment._get_file_writer().flush()\n\n # create a preudo standard path ala test-tube\n dir_path = os.path.join(self.save_dir, self.name, 'version_%s' % self.version)\n if not os.path.isdir(dir_path):\n dir_path = self.save_dir\n\n # prepare the file path\n meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)\n\n # save the metatags file\n with open(meta_tags_path, 'w', newline='') as csvfile:\n fieldnames = ['key', 'value']\n writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n writer.writerow({'key': 'key', 'value': 'value'})\n for k, v in self.tags.items():\n writer.writerow({'key': k, 'value': v})\n\n @rank_zero_only\n def finalize(self, status):\n self.save()\n\n @property\n def name(self):\n return self._name\n\n @property\n def version(self):\n if self._version is None:\n self._version = self._get_next_version()\n return self._version\n\n def _get_next_version(self):\n root_dir = os.path.join(self.save_dir, self.name)\n existing_versions = []\n for d in os.listdir(root_dir):\n if os.path.isdir(os.path.join(root_dir, d)) and d.startswith(\"version_\"):\n existing_versions.append(int(d.split(\"_\")[1]))\n\n if len(existing_versions) == 0:\n return 0\n\n return max(existing_versions) + 1\n", "path": "pytorch_lightning/loggers/tensorboard.py"}]} | 2,446 | 184 |
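The patch above works because `SummaryWriter.add_hparams` internally opens a second writer in a new timestamp-named sub-directory — exactly the extra tfevents run the issue reports — whereas emitting the three hparams summaries through the logger's existing file writer keeps everything in the original event file. A hedged sketch of that pattern in isolation (assumes a PyTorch install with `torch.utils.tensorboard`; `_get_file_writer()` is a private API and details can vary between versions; the log path is arbitrary):

```python
from torch.utils.tensorboard import SummaryWriter
from torch.utils.tensorboard.summary import hparams

writer = SummaryWriter(log_dir="tb_logs/my_model/version_0")  # arbitrary path

# Build the hparams summaries ourselves instead of calling writer.add_hparams(),
# which would spawn a second writer in a timestamped sub-directory.
exp, ssi, sei = hparams({"lr": 1e-3, "batch_size": 32}, {})

file_writer = writer._get_file_writer()
file_writer.add_summary(exp)
file_writer.add_summary(ssi)
file_writer.add_summary(sei)
writer.close()
```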
gh_patches_debug_27961 | rasdani/github-patches | git_diff | sunpy__sunpy-6926 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a "How do I..." page to our documentation
<!--
We know asking good questions takes effort, and we appreciate your time.
Thank you.
Please be aware that everyone has to follow our code of conduct:
https://sunpy.org/coc
These comments are hidden when you submit this github issue.
Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue!
-->
<!--
Provide a general description of the feature you would like.
If you prefer, you can also suggest a draft design or API.
-->
e.g. this page from the xarray docs: http://xarray.pydata.org/en/stable/howdoi.html
</issue>
<code>
[start of examples/acquiring_data/searching_multiple_wavelengths.py]
1 """
2 ==============================================
3 Searching for multiple wavelengths with Fido
4 ==============================================
5
6 This example shows how you can search for several wavelengths of AIA data with Fido.
7 """
8 from astropy import units as u
9
10 from sunpy.net import Fido
11 from sunpy.net import attrs as a
12
13 ###############################################################################
14 # Here we are demonstrating how you can search for specific wavelengths of
15 # AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
16 # and the `sunpy.net.attrs.AttrOr` function.
17 # For example, you may only want a single wavelength, say 171 Angstrom:
18
19 aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
20 a.Instrument("AIA"),
21 a.Wavelength(171*u.angstrom))
22
23 print(aia_search)
24
25 ###############################################################################
26 # But say you actually want to search for several wavelengths, rather than just one.
27 # You could use the "|" operator, or instead you can use the `sunpy.net.attrs.AttrOr`
28 # function.
29
30 wavelengths = [94, 131, 171, 193, 211]*u.angstrom
31 aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
32 a.Instrument("AIA"),
33 a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))
34
35 print(aia_search)
36
37 # This returns several searches for each of the wavelengths, which can be indexed.
38 # Here the first index is that of 94 angstrom.
39 print(aia_search[0])
40
41 ###############################################################################
42 # You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
43 # result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`
44 # to download the data, i.e., ``Fido.fetch(aia_search)``.
45
[end of examples/acquiring_data/searching_multiple_wavelengths.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/acquiring_data/searching_multiple_wavelengths.py b/examples/acquiring_data/searching_multiple_wavelengths.py
deleted file mode 100644
--- a/examples/acquiring_data/searching_multiple_wavelengths.py
+++ /dev/null
@@ -1,44 +0,0 @@
-"""
-==============================================
-Searching for multiple wavelengths with Fido
-==============================================
-
-This example shows how you can search for several wavelengths of AIA data with Fido.
-"""
-from astropy import units as u
-
-from sunpy.net import Fido
-from sunpy.net import attrs as a
-
-###############################################################################
-# Here we are demonstrating how you can search for specific wavelengths of
-# AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
-# and the `sunpy.net.attrs.AttrOr` function.
-# For example, you may only want a single wavelength, say 171 Angstrom:
-
-aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
- a.Instrument("AIA"),
- a.Wavelength(171*u.angstrom))
-
-print(aia_search)
-
-###############################################################################
-# But say you actually want to search for several wavelengths, rather than just one.
-# You could use the "|" operator, or instead you can use the `sunpy.net.attrs.AttrOr`
-# function.
-
-wavelengths = [94, 131, 171, 193, 211]*u.angstrom
-aia_search = Fido.search(a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
- a.Instrument("AIA"),
- a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))
-
-print(aia_search)
-
-# This returns several searches for each of the wavelengths, which can be indexed.
-# Here the first index is that of 94 angstrom.
-print(aia_search[0])
-
-###############################################################################
-# You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`
-# result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`
-# to download the data, i.e., ``Fido.fetch(aia_search)``.
| {"golden_diff": "diff --git a/examples/acquiring_data/searching_multiple_wavelengths.py b/examples/acquiring_data/searching_multiple_wavelengths.py\ndeleted file mode 100644\n--- a/examples/acquiring_data/searching_multiple_wavelengths.py\n+++ /dev/null\n@@ -1,44 +0,0 @@\n-\"\"\"\n-==============================================\n-Searching for multiple wavelengths with Fido\n-==============================================\n-\n-This example shows how you can search for several wavelengths of AIA data with Fido.\n-\"\"\"\n-from astropy import units as u\n-\n-from sunpy.net import Fido\n-from sunpy.net import attrs as a\n-\n-###############################################################################\n-# Here we are demonstrating how you can search for specific wavelengths of\n-# AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n-# and the `sunpy.net.attrs.AttrOr` function.\n-# For example, you may only want a single wavelength, say 171 Angstrom:\n-\n-aia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n- a.Instrument(\"AIA\"),\n- a.Wavelength(171*u.angstrom))\n-\n-print(aia_search)\n-\n-###############################################################################\n-# But say you actually want to search for several wavelengths, rather than just one.\n-# You could use the \"|\" operator, or instead you can use the `sunpy.net.attrs.AttrOr`\n-# function.\n-\n-wavelengths = [94, 131, 171, 193, 211]*u.angstrom\n-aia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n- a.Instrument(\"AIA\"),\n- a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))\n-\n-print(aia_search)\n-\n-# This returns several searches for each of the wavelengths, which can be indexed.\n-# Here the first index is that of 94 angstrom.\n-print(aia_search[0])\n-\n-###############################################################################\n-# You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n-# result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`\n-# to download the data, i.e., ``Fido.fetch(aia_search)``.\n", "issue": "Add a \"How do I...\" page to our documentation\n<!--\r\nWe know asking good questions takes effort, and we appreciate your time.\r\nThank you.\r\n\r\nPlease be aware that everyone has to follow our code of conduct:\r\nhttps://sunpy.org/coc\r\n\r\nThese comments are hidden when you submit this github issue.\r\n\r\nPlease have a search on our GitHub repository to see if a similar issue has already been posted.\r\nIf a similar issue is closed, have a quick look to see if you are satisfied by the resolution.\r\nIf not please go ahead and open an issue!\r\n-->\r\n\r\n\r\n<!--\r\nProvide a general description of the feature you would like.\r\nIf you prefer, you can also suggest a draft design or API.\r\n-->\r\n\r\ne.g. 
this page from the xarray docs: http://xarray.pydata.org/en/stable/howdoi.html\r\n\n", "before_files": [{"content": "\"\"\"\n==============================================\nSearching for multiple wavelengths with Fido\n==============================================\n\nThis example shows how you can search for several wavelengths of AIA data with Fido.\n\"\"\"\nfrom astropy import units as u\n\nfrom sunpy.net import Fido\nfrom sunpy.net import attrs as a\n\n###############################################################################\n# Here we are demonstrating how you can search for specific wavelengths of\n# AIA data using `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n# and the `sunpy.net.attrs.AttrOr` function.\n# For example, you may only want a single wavelength, say 171 Angstrom:\n\naia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n a.Instrument(\"AIA\"),\n a.Wavelength(171*u.angstrom))\n\nprint(aia_search)\n\n###############################################################################\n# But say you actually want to search for several wavelengths, rather than just one.\n# You could use the \"|\" operator, or instead you can use the `sunpy.net.attrs.AttrOr`\n# function.\n\nwavelengths = [94, 131, 171, 193, 211]*u.angstrom\naia_search = Fido.search(a.Time(\"2022-02-20 00:00\", \"2022-02-20 00:01\"),\n a.Instrument(\"AIA\"),\n a.AttrOr([a.Wavelength(wav) for wav in wavelengths]))\n\nprint(aia_search)\n\n# This returns several searches for each of the wavelengths, which can be indexed.\n# Here the first index is that of 94 angstrom.\nprint(aia_search[0])\n\n###############################################################################\n# You can then pass the `Fido <sunpy.net.fido_factory.UnifiedDownloaderFactory>`\n# result to :meth:`Fido.fetch <sunpy.net.fido_factory.UnifiedDownloaderFactory.fetch>`\n# to download the data, i.e., ``Fido.fetch(aia_search)``.\n", "path": "examples/acquiring_data/searching_multiple_wavelengths.py"}]} | 1,256 | 551 |
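The example removed by this patch queried several AIA wavelengths with `a.AttrOr`, and its text notes that chaining the `|` operator is an equivalent spelling. For reference, a small sketch of that alternative (requires sunpy plus network access to the remote archives, so results are not guaranteed offline):

```python
import astropy.units as u

from sunpy.net import Fido
from sunpy.net import attrs as a

# a.AttrOr([a.Wavelength(w) for w in ...]) and chained "|" express the same OR:
result = Fido.search(
    a.Time("2022-02-20 00:00", "2022-02-20 00:01"),
    a.Instrument("AIA"),
    a.Wavelength(94 * u.angstrom) | a.Wavelength(171 * u.angstrom),
)

print(result)   # one result block per wavelength; index as result[0], result[1]
```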
gh_patches_debug_2798 | rasdani/github-patches | git_diff | svthalia__concrexit-1739 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Renewal of membership send email about references without reason
### Describe the bug
A renewal of a membership to the 'until study end' triggered the email for references.
### How to reproduce
Steps to reproduce the behaviour:
1. Create a registration for a completely new user
2. Create an upgrade for the membership
3. Get the email
### Expected behaviour
No email should be received for references.
### Screenshots
<img width="929" alt="Screenshot 2021-06-20 at 15 20 10" src="https://user-images.githubusercontent.com/1799914/122675750-2a245800-d1db-11eb-93eb-d02abc0f63d1.png">
<img width="656" alt="Screenshot 2021-06-20 at 15 20 20" src="https://user-images.githubusercontent.com/1799914/122675752-2b558500-d1db-11eb-9763-3c1f622826e7.png">
</issue>
<code>
[start of website/registrations/forms.py]
1 """The forms defined by the registrations package."""
2 from django import forms
3 from django.core.exceptions import NON_FIELD_ERRORS, ValidationError
4 from django.forms import TypedChoiceField
5 from django.urls import reverse_lazy
6 from django.utils import timezone
7 from django.utils.safestring import mark_safe
8 from django.utils.text import capfirst
9 from django.utils.translation import gettext_lazy as _
10
11 from members.models import Membership
12 from payments.widgets import SignatureWidget
13 from registrations import services
14 from .models import Registration, Renewal, Reference
15
16
17 class BaseRegistrationForm(forms.ModelForm):
18 """Base form for membership registrations."""
19
20 birthday = forms.DateField(
21 widget=forms.widgets.SelectDateWidget(
22 years=range(timezone.now().year - 50, timezone.now().year - 10)
23 ),
24 label=capfirst(_("birthday")),
25 )
26
27 privacy_policy = forms.BooleanField(required=True,)
28
29 def __init__(self, *args, **kwargs):
30 super().__init__(*args, **kwargs)
31 self.fields["privacy_policy"].label = mark_safe(
32 _('I accept the <a href="{}">privacy policy</a>.').format(
33 reverse_lazy("singlepages:privacy-policy")
34 )
35 )
36
37
38 class RegistrationAdminForm(forms.ModelForm):
39 """Custom admin form for Registration model to add the widget for the signature."""
40
41 class Meta:
42 fields = "__all__"
43 model = Registration
44 widgets = {
45 "signature": SignatureWidget(),
46 }
47
48
49 class MemberRegistrationForm(BaseRegistrationForm):
50 """Form for member registrations."""
51
52 this_year = timezone.now().year
53 years = reversed(
54 [(x, "{} - {}".format(x, x + 1)) for x in range(this_year - 20, this_year + 1)]
55 )
56
57 starting_year = TypedChoiceField(
58 choices=years,
59 coerce=int,
60 empty_value=this_year,
61 required=False,
62 help_text=_("What lecture year did you start studying at Radboud University?"),
63 )
64
65 class Meta:
66 model = Registration
67 widgets = {
68 "signature": SignatureWidget(),
69 }
70 fields = (
71 "length",
72 "first_name",
73 "last_name",
74 "birthday",
75 "email",
76 "phone_number",
77 "student_number",
78 "programme",
79 "starting_year",
80 "address_street",
81 "address_street2",
82 "address_postal_code",
83 "address_city",
84 "address_country",
85 "optin_birthday",
86 "optin_mailinglist",
87 "membership_type",
88 "direct_debit",
89 "initials",
90 "iban",
91 "bic",
92 "signature",
93 )
94
95
96 class BenefactorRegistrationForm(BaseRegistrationForm):
97 """Form for benefactor registrations."""
98
99 icis_employee = forms.BooleanField(
100 required=False, label=_("I am an employee of iCIS")
101 )
102
103 class Meta:
104 model = Registration
105 widgets = {
106 "signature": SignatureWidget(),
107 }
108 fields = (
109 "length",
110 "first_name",
111 "last_name",
112 "birthday",
113 "email",
114 "phone_number",
115 "student_number",
116 "address_street",
117 "address_street2",
118 "address_postal_code",
119 "address_city",
120 "address_country",
121 "optin_birthday",
122 "optin_mailinglist",
123 "contribution",
124 "membership_type",
125 "direct_debit",
126 "initials",
127 "iban",
128 "bic",
129 "signature",
130 )
131
132
133 class RenewalForm(forms.ModelForm):
134 """Form for membership renewals."""
135
136 privacy_policy = forms.BooleanField(required=True,)
137
138 icis_employee = forms.BooleanField(
139 required=False, label=_("I am an employee of iCIS")
140 )
141
142 contribution = forms.IntegerField(required=False,)
143
144 def __init__(self, *args, **kwargs):
145 super().__init__(*args, **kwargs)
146 self.fields["privacy_policy"].label = mark_safe(
147 _('I accept the <a href="{}">privacy policy</a>.').format(
148 reverse_lazy("singlepages:privacy-policy")
149 )
150 )
151
152 class Meta:
153 model = Renewal
154 fields = ("member", "length", "contribution", "membership_type")
155
156
157 class ReferenceForm(forms.ModelForm):
158 def clean(self):
159 super().clean()
160 membership = self.cleaned_data["member"].current_membership
161 if membership and membership.type == Membership.BENEFACTOR:
162 raise ValidationError(_("Benefactors cannot give references."))
163
164 membership = self.cleaned_data["member"].latest_membership
165 if (
166 membership
167 and membership.until
168 and membership.until < services.calculate_membership_since()
169 ):
170 raise ValidationError(
171 _(
172 "It's not possible to give references for "
173 "memberships that start after your own "
174 "membership's end."
175 )
176 )
177
178 class Meta:
179 model = Reference
180 fields = "__all__"
181 error_messages = {
182 NON_FIELD_ERRORS: {
183 "unique_together": _(
184 "You've already given a reference for this person."
185 ),
186 }
187 }
188
[end of website/registrations/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/registrations/forms.py b/website/registrations/forms.py
--- a/website/registrations/forms.py
+++ b/website/registrations/forms.py
@@ -151,7 +151,14 @@
class Meta:
model = Renewal
- fields = ("member", "length", "contribution", "membership_type")
+ fields = (
+ "member",
+ "length",
+ "contribution",
+ "membership_type",
+ "no_references",
+ "remarks",
+ )
class ReferenceForm(forms.ModelForm):
| {"golden_diff": "diff --git a/website/registrations/forms.py b/website/registrations/forms.py\n--- a/website/registrations/forms.py\n+++ b/website/registrations/forms.py\n@@ -151,7 +151,14 @@\n \n class Meta:\n model = Renewal\n- fields = (\"member\", \"length\", \"contribution\", \"membership_type\")\n+ fields = (\n+ \"member\",\n+ \"length\",\n+ \"contribution\",\n+ \"membership_type\",\n+ \"no_references\",\n+ \"remarks\",\n+ )\n \n \n class ReferenceForm(forms.ModelForm):\n", "issue": "Renewal of membership send email about references without reason\n### Describe the bug\r\nA renewal of a membership to the 'until study end' triggered the email for references.\r\n\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Create a registration for a completely new user\r\n2. Create an upgrade for the membership\r\n3. Get the email\r\n\r\n### Expected behaviour\r\n\r\nNo email should be received for references.\r\n\r\n### Screenshots\r\n<img width=\"929\" alt=\"Screenshot 2021-06-20 at 15 20 10\" src=\"https://user-images.githubusercontent.com/1799914/122675750-2a245800-d1db-11eb-93eb-d02abc0f63d1.png\">\r\n<img width=\"656\" alt=\"Screenshot 2021-06-20 at 15 20 20\" src=\"https://user-images.githubusercontent.com/1799914/122675752-2b558500-d1db-11eb-9763-3c1f622826e7.png\">\r\n\r\n\n", "before_files": [{"content": "\"\"\"The forms defined by the registrations package.\"\"\"\nfrom django import forms\nfrom django.core.exceptions import NON_FIELD_ERRORS, ValidationError\nfrom django.forms import TypedChoiceField\nfrom django.urls import reverse_lazy\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import capfirst\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Membership\nfrom payments.widgets import SignatureWidget\nfrom registrations import services\nfrom .models import Registration, Renewal, Reference\n\n\nclass BaseRegistrationForm(forms.ModelForm):\n \"\"\"Base form for membership registrations.\"\"\"\n\n birthday = forms.DateField(\n widget=forms.widgets.SelectDateWidget(\n years=range(timezone.now().year - 50, timezone.now().year - 10)\n ),\n label=capfirst(_(\"birthday\")),\n )\n\n privacy_policy = forms.BooleanField(required=True,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n\nclass RegistrationAdminForm(forms.ModelForm):\n \"\"\"Custom admin form for Registration model to add the widget for the signature.\"\"\"\n\n class Meta:\n fields = \"__all__\"\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n\n\nclass MemberRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for member registrations.\"\"\"\n\n this_year = timezone.now().year\n years = reversed(\n [(x, \"{} - {}\".format(x, x + 1)) for x in range(this_year - 20, this_year + 1)]\n )\n\n starting_year = TypedChoiceField(\n choices=years,\n coerce=int,\n empty_value=this_year,\n required=False,\n help_text=_(\"What lecture year did you start studying at Radboud University?\"),\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"programme\",\n \"starting_year\",\n \"address_street\",\n \"address_street2\",\n 
\"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass BenefactorRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for benefactor registrations.\"\"\"\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"contribution\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass RenewalForm(forms.ModelForm):\n \"\"\"Form for membership renewals.\"\"\"\n\n privacy_policy = forms.BooleanField(required=True,)\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n contribution = forms.IntegerField(required=False,)\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n\n class Meta:\n model = Renewal\n fields = (\"member\", \"length\", \"contribution\", \"membership_type\")\n\n\nclass ReferenceForm(forms.ModelForm):\n def clean(self):\n super().clean()\n membership = self.cleaned_data[\"member\"].current_membership\n if membership and membership.type == Membership.BENEFACTOR:\n raise ValidationError(_(\"Benefactors cannot give references.\"))\n\n membership = self.cleaned_data[\"member\"].latest_membership\n if (\n membership\n and membership.until\n and membership.until < services.calculate_membership_since()\n ):\n raise ValidationError(\n _(\n \"It's not possible to give references for \"\n \"memberships that start after your own \"\n \"membership's end.\"\n )\n )\n\n class Meta:\n model = Reference\n fields = \"__all__\"\n error_messages = {\n NON_FIELD_ERRORS: {\n \"unique_together\": _(\n \"You've already given a reference for this person.\"\n ),\n }\n }\n", "path": "website/registrations/forms.py"}]} | 2,351 | 132 |
gh_patches_debug_61059 | rasdani/github-patches | git_diff | google__turbinia-602 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Configuration file not behaving as expected
I was struggling a bit today with having the configuration file mapped to what I needed it to be (for launching dftimewolf with a one-off configuration of Turbinia).
My ~/.turbiniarc is set to what I want, but the config is still picked up from "whatever file it can find" in the directory pointed to by `TURBINIA_CONFIG_PATH` (specified in `ENVCONFIGVAR`)
This happens because when importing `evidence` (e.g. `from turbinia import evidence`), `LoadConfig` is called with no parameters, and thus populates the config with whatever files it can find there. Subsequent calls to `LoadConfig`, even when passing a `config_file` will still return this first configuration because it has already been loaded.
</issue>
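The caching behaviour described here — `LoadConfig()` runs once at import time and later calls ignore an explicit `config_file` — can be pictured with a generic module-level loader. The sketch below (invented names, not the turbinia module) shows the variant where an explicit path always wins over the cache:

```python
_CACHED_CONFIG = None


def load_config(config_file=None):
    """Return the cached config unless a specific file is requested."""
    global _CACHED_CONFIG
    # Reuse the cache only for the "find whatever you can" case;
    # an explicit path always forces a fresh load (and replaces the cache).
    if _CACHED_CONFIG is not None and not config_file:
        return _CACHED_CONFIG

    path = config_file or "/etc/example/app.conf"   # hypothetical default location
    config = {"source": path}                       # stand-in for real parsing
    _CACHED_CONFIG = config
    return config


print(load_config())              # first call: loads and caches the default
print(load_config("~/.apprc"))    # explicit path: reloaded, not the cached copy
```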
<code>
[start of turbinia/config/__init__.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2016 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Basic Turbinia config."""
16
17 from __future__ import unicode_literals
18
19 import imp
20 import itertools
21 import logging
22 import os
23 import sys
24
25 from turbinia import TurbiniaException
26
27 DATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'
28
29 # Look for config files with these names
30 CONFIGFILES = ['.turbiniarc', 'turbinia.conf', 'turbinia_config_tmpl.py']
31 # Look in homedir first, then /etc/turbinia
32 CONFIGPATH = [
33 os.path.expanduser('~'),
34 '/etc/turbinia',
35 os.path.dirname(os.path.abspath(__file__)),
36 ]
37 # Config setup reminder for cleaner error handling on empty configs.
38 CONFIG_MSG = (
39 'Copy turbinia/config/turbinia_config_tmpl.py to ~/.turbiniarc '
40 'or /etc/turbinia/turbinia.conf, edit, and re-run.')
41
42 # Required config vars
43 REQUIRED_VARS = [
44 # Turbinia Config
45 'INSTANCE_ID',
46 'STATE_MANAGER',
47 'TASK_MANAGER',
48 'LOG_FILE',
49 'LOCK_FILE',
50 'OUTPUT_DIR',
51 'TMP_DIR',
52 'SLEEP_TIME',
53 'SINGLE_RUN',
54 'MOUNT_DIR_PREFIX',
55 'SHARED_FILESYSTEM',
56 'DEBUG_TASKS',
57 'DEPENDENCIES',
58 'DOCKER_ENABLED',
59 'DISABLED_JOBS',
60 ]
61
62 # Optional config vars. Some may be mandatory depending on the configuration
63 # (e.g. if TASK_MANAGER is set to 'PSQ', then the GCE Config variables are
64 # required), but these requirements are not enforced.
65 OPTIONAL_VARS = [
66 # GCE CONFIG
67 'TURBINIA_PROJECT',
68 'TURBINIA_ZONE',
69 'TURBINIA_REGION',
70 'BUCKET_NAME',
71 'PSQ_TOPIC',
72 'PUBSUB_TOPIC',
73 'GCS_OUTPUT_PATH',
74 'STACKDRIVER_LOGGING',
75 'STACKDRIVER_TRACEBACK',
76 # REDIS CONFIG
77 'REDIS_HOST',
78 'REDIS_PORT',
79 'REDIS_DB',
80 # Celery config
81 'CELERY_BROKER',
82 'CELERY_BACKEND',
83 'KOMBU_BROKER',
84 'KOMBU_CHANNEL',
85 'KOMBU_DURABLE',
86 # Email config
87 'EMAIL_NOTIFICATIONS',
88 'EMAIL_HOST_ADDRESS',
89 'EMAIL_PORT',
90 'EMAIL_ADDRESS',
91 'EMAIL_PASSWORD',
92 ]
93
94 # Environment variable to look for path data in
95 ENVCONFIGVAR = 'TURBINIA_CONFIG_PATH'
96
97 CONFIG = None
98
99 log = logging.getLogger('turbinia')
100
101
102 def LoadConfig(config_file=None):
103 """Finds Turbinia config file and loads it.
104
105 Args:
106 config_file(str): full path to config file
107 """
108 # TODO(aarontp): Find way to not require global var here. Maybe a singleton
109 # pattern on the config class.
110 # pylint: disable=global-statement
111 global CONFIG
112 if CONFIG:
113 log.debug(
114 'Returning cached config from {0:s} instead of reloading config'.format(
115 CONFIG.configSource))
116 return CONFIG
117
118 if not config_file:
119 log.debug('No config specified. Looking in default locations for config.')
120 # If the environment variable is set, take precedence over the pre-defined
121 # CONFIGPATHs.
122 configpath = CONFIGPATH
123 if ENVCONFIGVAR in os.environ:
124 configpath = os.environ[ENVCONFIGVAR].split(':')
125
126 # Load first file found
127 for _dir, _file in itertools.product(configpath, CONFIGFILES):
128 if os.path.exists(os.path.join(_dir, _file)):
129 config_file = os.path.join(_dir, _file)
130 break
131
132 if config_file is None:
133 raise TurbiniaException('No config files found')
134
135 log.debug('Loading config from {0:s}'.format(config_file))
136 # Warn about using fallback source config, but it's currently necessary for
137 # tests. See issue #446.
138 if 'turbinia_config_tmpl' in config_file:
139 log.warning('Using fallback source config. {0:s}'.format(CONFIG_MSG))
140 try:
141 _config = imp.load_source('config', config_file)
142 except IOError as exception:
143 message = (
144 'Could not load config file {0:s}: {1!s}'.format(
145 config_file, exception))
146 log.error(message)
147 raise TurbiniaException(message)
148
149 _config.configSource = config_file
150 ValidateAndSetConfig(_config)
151
152 # Set the environment var for this so that we don't see the "No project ID
153 # could be determined." warning later.
154 if hasattr(_config, 'TURBINIA_PROJECT') and _config.TURBINIA_PROJECT:
155 os.environ['GOOGLE_CLOUD_PROJECT'] = _config.TURBINIA_PROJECT
156
157 CONFIG = _config
158 log.debug(
159 'Returning parsed config loaded from {0:s}'.format(CONFIG.configSource))
160 return _config
161
162
163 def ValidateAndSetConfig(_config):
164 """Makes sure that the config has the vars loaded and set in the module."""
165 # Explicitly set the config path
166 setattr(sys.modules[__name__], 'configSource', _config.configSource)
167
168 CONFIGVARS = REQUIRED_VARS + OPTIONAL_VARS
169 for var in CONFIGVARS:
170 empty_value = False
171 if not hasattr(_config, var):
172 if var in OPTIONAL_VARS:
173 log.debug(
174 'Setting non-existent but optional config variable {0:s} to '
175 'None'.format(var))
176 empty_value = True
177 else:
178 raise TurbiniaException(
179 'Required config attribute {0:s}:{1:s} not in config'.format(
180 _config.configSource, var))
181 if var in REQUIRED_VARS and getattr(_config, var) is None:
182 raise TurbiniaException(
183 'Config attribute {0:s}:{1:s} is not set'.format(
184 _config.configSource, var))
185
186 # Set the attribute in the current module
187 if empty_value:
188 setattr(sys.modules[__name__], var, None)
189 else:
190 setattr(sys.modules[__name__], var, getattr(_config, var))
191
192
193 def ParseDependencies():
194 """Parses the config file DEPENDENCIES variable.
195
196 Raises:
197 TurbiniaException: If bad config file.
198
199 Returns:
200 dependencies(dict): The parsed dependency values.
201 """
202 dependencies = {}
203 try:
204 for values in CONFIG.DEPENDENCIES:
205 job = values['job'].lower()
206 dependencies[job] = {}
207 dependencies[job]['programs'] = values['programs']
208 dependencies[job]['docker_image'] = values.get('docker_image')
209 except (KeyError, TypeError) as exception:
210 raise TurbiniaException(
211 'An issue has occurred while parsing the '
212 'dependency config: {0!s}'.format(exception))
213 return dependencies
214
[end of turbinia/config/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/turbinia/config/__init__.py b/turbinia/config/__init__.py
--- a/turbinia/config/__init__.py
+++ b/turbinia/config/__init__.py
@@ -109,7 +109,7 @@
# pattern on the config class.
# pylint: disable=global-statement
global CONFIG
- if CONFIG:
+ if CONFIG and not config_file:
log.debug(
'Returning cached config from {0:s} instead of reloading config'.format(
CONFIG.configSource))
| {"golden_diff": "diff --git a/turbinia/config/__init__.py b/turbinia/config/__init__.py\n--- a/turbinia/config/__init__.py\n+++ b/turbinia/config/__init__.py\n@@ -109,7 +109,7 @@\n # pattern on the config class.\n # pylint: disable=global-statement\n global CONFIG\n- if CONFIG:\n+ if CONFIG and not config_file:\n log.debug(\n 'Returning cached config from {0:s} instead of reloading config'.format(\n CONFIG.configSource))\n", "issue": "Configuration file not behaving as expected\nI was struggling a bit today with having the configuration file mapped to what I needed it to be (for launching dftimewolf with a one-off configuration of Turbinia).\r\n\r\nMy ~/.turbiniarc is set to what I want, but the config is still picked up from \"whatever file it can find\" in the directory pointed to by `TURBINIA_CONFIG_PATH` (specified in `ENVCONFIGVAR`)\r\n\r\nThis happens because when importing `evidence` (e.g. `from turbinia import evidence`), `LoadConfig` is called with no parameters, and thus populates the config with whatever files it can find there. Subsequent calls to `LoadConfig`, even when passing a `config_file` will still return this first configuration because it has already been loaded.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2016 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Basic Turbinia config.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport imp\nimport itertools\nimport logging\nimport os\nimport sys\n\nfrom turbinia import TurbiniaException\n\nDATETIME_FORMAT = '%Y-%m-%dT%H:%M:%S.%fZ'\n\n# Look for config files with these names\nCONFIGFILES = ['.turbiniarc', 'turbinia.conf', 'turbinia_config_tmpl.py']\n# Look in homedir first, then /etc/turbinia\nCONFIGPATH = [\n os.path.expanduser('~'),\n '/etc/turbinia',\n os.path.dirname(os.path.abspath(__file__)),\n]\n# Config setup reminder for cleaner error handling on empty configs.\nCONFIG_MSG = (\n 'Copy turbinia/config/turbinia_config_tmpl.py to ~/.turbiniarc '\n 'or /etc/turbinia/turbinia.conf, edit, and re-run.')\n\n# Required config vars\nREQUIRED_VARS = [\n # Turbinia Config\n 'INSTANCE_ID',\n 'STATE_MANAGER',\n 'TASK_MANAGER',\n 'LOG_FILE',\n 'LOCK_FILE',\n 'OUTPUT_DIR',\n 'TMP_DIR',\n 'SLEEP_TIME',\n 'SINGLE_RUN',\n 'MOUNT_DIR_PREFIX',\n 'SHARED_FILESYSTEM',\n 'DEBUG_TASKS',\n 'DEPENDENCIES',\n 'DOCKER_ENABLED',\n 'DISABLED_JOBS',\n]\n\n# Optional config vars. Some may be mandatory depending on the configuration\n# (e.g. 
if TASK_MANAGER is set to 'PSQ', then the GCE Config variables are\n# required), but these requirements are not enforced.\nOPTIONAL_VARS = [\n # GCE CONFIG\n 'TURBINIA_PROJECT',\n 'TURBINIA_ZONE',\n 'TURBINIA_REGION',\n 'BUCKET_NAME',\n 'PSQ_TOPIC',\n 'PUBSUB_TOPIC',\n 'GCS_OUTPUT_PATH',\n 'STACKDRIVER_LOGGING',\n 'STACKDRIVER_TRACEBACK',\n # REDIS CONFIG\n 'REDIS_HOST',\n 'REDIS_PORT',\n 'REDIS_DB',\n # Celery config\n 'CELERY_BROKER',\n 'CELERY_BACKEND',\n 'KOMBU_BROKER',\n 'KOMBU_CHANNEL',\n 'KOMBU_DURABLE',\n # Email config\n 'EMAIL_NOTIFICATIONS',\n 'EMAIL_HOST_ADDRESS',\n 'EMAIL_PORT',\n 'EMAIL_ADDRESS',\n 'EMAIL_PASSWORD',\n]\n\n# Environment variable to look for path data in\nENVCONFIGVAR = 'TURBINIA_CONFIG_PATH'\n\nCONFIG = None\n\nlog = logging.getLogger('turbinia')\n\n\ndef LoadConfig(config_file=None):\n \"\"\"Finds Turbinia config file and loads it.\n\n Args:\n config_file(str): full path to config file\n \"\"\"\n # TODO(aarontp): Find way to not require global var here. Maybe a singleton\n # pattern on the config class.\n # pylint: disable=global-statement\n global CONFIG\n if CONFIG:\n log.debug(\n 'Returning cached config from {0:s} instead of reloading config'.format(\n CONFIG.configSource))\n return CONFIG\n\n if not config_file:\n log.debug('No config specified. Looking in default locations for config.')\n # If the environment variable is set, take precedence over the pre-defined\n # CONFIGPATHs.\n configpath = CONFIGPATH\n if ENVCONFIGVAR in os.environ:\n configpath = os.environ[ENVCONFIGVAR].split(':')\n\n # Load first file found\n for _dir, _file in itertools.product(configpath, CONFIGFILES):\n if os.path.exists(os.path.join(_dir, _file)):\n config_file = os.path.join(_dir, _file)\n break\n\n if config_file is None:\n raise TurbiniaException('No config files found')\n\n log.debug('Loading config from {0:s}'.format(config_file))\n # Warn about using fallback source config, but it's currently necessary for\n # tests. See issue #446.\n if 'turbinia_config_tmpl' in config_file:\n log.warning('Using fallback source config. 
{0:s}'.format(CONFIG_MSG))\n try:\n _config = imp.load_source('config', config_file)\n except IOError as exception:\n message = (\n 'Could not load config file {0:s}: {1!s}'.format(\n config_file, exception))\n log.error(message)\n raise TurbiniaException(message)\n\n _config.configSource = config_file\n ValidateAndSetConfig(_config)\n\n # Set the environment var for this so that we don't see the \"No project ID\n # could be determined.\" warning later.\n if hasattr(_config, 'TURBINIA_PROJECT') and _config.TURBINIA_PROJECT:\n os.environ['GOOGLE_CLOUD_PROJECT'] = _config.TURBINIA_PROJECT\n\n CONFIG = _config\n log.debug(\n 'Returning parsed config loaded from {0:s}'.format(CONFIG.configSource))\n return _config\n\n\ndef ValidateAndSetConfig(_config):\n \"\"\"Makes sure that the config has the vars loaded and set in the module.\"\"\"\n # Explicitly set the config path\n setattr(sys.modules[__name__], 'configSource', _config.configSource)\n\n CONFIGVARS = REQUIRED_VARS + OPTIONAL_VARS\n for var in CONFIGVARS:\n empty_value = False\n if not hasattr(_config, var):\n if var in OPTIONAL_VARS:\n log.debug(\n 'Setting non-existent but optional config variable {0:s} to '\n 'None'.format(var))\n empty_value = True\n else:\n raise TurbiniaException(\n 'Required config attribute {0:s}:{1:s} not in config'.format(\n _config.configSource, var))\n if var in REQUIRED_VARS and getattr(_config, var) is None:\n raise TurbiniaException(\n 'Config attribute {0:s}:{1:s} is not set'.format(\n _config.configSource, var))\n\n # Set the attribute in the current module\n if empty_value:\n setattr(sys.modules[__name__], var, None)\n else:\n setattr(sys.modules[__name__], var, getattr(_config, var))\n\n\ndef ParseDependencies():\n \"\"\"Parses the config file DEPENDENCIES variable.\n\n Raises:\n TurbiniaException: If bad config file.\n\n Returns:\n dependencies(dict): The parsed dependency values.\n \"\"\"\n dependencies = {}\n try:\n for values in CONFIG.DEPENDENCIES:\n job = values['job'].lower()\n dependencies[job] = {}\n dependencies[job]['programs'] = values['programs']\n dependencies[job]['docker_image'] = values.get('docker_image')\n except (KeyError, TypeError) as exception:\n raise TurbiniaException(\n 'An issue has occurred while parsing the '\n 'dependency config: {0!s}'.format(exception))\n return dependencies\n", "path": "turbinia/config/__init__.py"}]} | 2,885 | 124 |
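As a quick illustration of why the `if CONFIG and not config_file:` guard in the diff above fixes the reported caching problem, here is a minimal sketch of the same pattern; the loading logic is a placeholder, not Turbinia code:

```
CONFIG = None

def _parse(config_file):
    # Placeholder for imp.load_source() plus validation.
    return {"configSource": config_file}

def load_config(config_file=None):
    # pylint: disable=global-statement
    global CONFIG
    # With the original "if CONFIG:" check, an explicit config_file was silently
    # ignored once any config had been cached.
    if CONFIG and not config_file:
        return CONFIG
    CONFIG = _parse(config_file or "/etc/turbinia/turbinia.conf")
    return CONFIG

print(load_config())                  # caches the default config
print(load_config("~/.turbiniarc"))   # now reloads instead of returning the cache
```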
gh_patches_debug_1057 | rasdani/github-patches | git_diff | StackStorm__st2-5091 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
St2Stream service broken when using SSL with mongodb
## SUMMARY
This issue is an extension to #4832; however, this time it is the st2stream service. I have looked at the code and can see that the same monkey patch code hasn't been applied to the st2stream app.
### STACKSTORM VERSION
Paste the output of ``st2 --version``: 3.3.0
##### OS, environment, install method
Docker Compose with the split services and the mongo db references commented out so that an external db can be used: https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml
All other services connect correctly to the mongodb.net test instance, with the exception of st2stream.
## Steps to reproduce the problem
Use the docker-compose YAML at https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml, comment out the mongo container and references, and adjust files/st2-docker.conf to point to an external DB with SSL = True enabled.
docker-compose up
## Expected Results
What did you expect to happen when running the steps above?
st2stream to operate correctly
## Actual Results
What happened? What output did you get?
2020-11-16 05:48:55,053 WARNING [-] Retry on ConnectionError - Cannot connect to database default :
maximum recursion depth exceeded
Adding the monkey patch code to the st2stream app resolves the issue (manually injected into the container to test).
file: st2stream/cmd/api.py
Code:
from st2common.util.monkey_patch import monkey_patch
monkey_patch()
</issue>
<code>
[start of st2stream/st2stream/cmd/api.py]
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import sys
18
19 import eventlet
20 from oslo_config import cfg
21 from eventlet import wsgi
22
23 from st2common import log as logging
24 from st2common.service_setup import setup as common_setup
25 from st2common.service_setup import teardown as common_teardown
26 from st2common.stream.listener import get_listener_if_set
27 from st2common.util.wsgi import shutdown_server_kill_pending_requests
28 from st2stream.signal_handlers import register_stream_signal_handlers
29 from st2stream import config
30 config.register_opts()
31 from st2stream import app
32
33 __all__ = [
34 'main'
35 ]
36
37
38 eventlet.monkey_patch(
39 os=True,
40 select=True,
41 socket=True,
42 thread=False if '--use-debugger' in sys.argv else True,
43 time=True)
44
45 LOG = logging.getLogger(__name__)
46
47 # How much time to give to the request in progress to finish in seconds before killing them
48 WSGI_SERVER_REQUEST_SHUTDOWN_TIME = 2
49
50
51 def _setup():
52 capabilities = {
53 'name': 'stream',
54 'listen_host': cfg.CONF.stream.host,
55 'listen_port': cfg.CONF.stream.port,
56 'type': 'active'
57 }
58 common_setup(service='stream', config=config, setup_db=True, register_mq_exchanges=True,
59 register_signal_handlers=True, register_internal_trigger_types=False,
60 run_migrations=False, service_registry=True, capabilities=capabilities)
61
62
63 def _run_server():
64 host = cfg.CONF.stream.host
65 port = cfg.CONF.stream.port
66
67 LOG.info('(PID=%s) ST2 Stream API is serving on http://%s:%s.', os.getpid(), host, port)
68
69 max_pool_size = eventlet.wsgi.DEFAULT_MAX_SIMULTANEOUS_REQUESTS
70 worker_pool = eventlet.GreenPool(max_pool_size)
71 sock = eventlet.listen((host, port))
72
73 def queue_shutdown(signal_number, stack_frame):
74 eventlet.spawn_n(shutdown_server_kill_pending_requests, sock=sock,
75 worker_pool=worker_pool, wait_time=WSGI_SERVER_REQUEST_SHUTDOWN_TIME)
76
77 # We register a custom SIGINT handler which allows us to kill long running active requests.
78 # Note: Eventually we will support draining (waiting for short-running requests), but we
79 # will still want to kill long running stream requests.
80 register_stream_signal_handlers(handler_func=queue_shutdown)
81
82 wsgi.server(sock, app.setup_app(), custom_pool=worker_pool)
83 return 0
84
85
86 def _teardown():
87 common_teardown()
88
89
90 def main():
91 try:
92 _setup()
93 return _run_server()
94 except SystemExit as exit_code:
95 sys.exit(exit_code)
96 except KeyboardInterrupt:
97 listener = get_listener_if_set(name='stream')
98
99 if listener:
100 listener.shutdown()
101 except Exception:
102 LOG.exception('(PID=%s) ST2 Stream API quit due to exception.', os.getpid())
103 return 1
104 finally:
105 _teardown()
106
[end of st2stream/st2stream/cmd/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2stream/st2stream/cmd/api.py b/st2stream/st2stream/cmd/api.py
--- a/st2stream/st2stream/cmd/api.py
+++ b/st2stream/st2stream/cmd/api.py
@@ -13,6 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from st2common.util.monkey_patch import monkey_patch
+monkey_patch()
+
import os
import sys
| {"golden_diff": "diff --git a/st2stream/st2stream/cmd/api.py b/st2stream/st2stream/cmd/api.py\n--- a/st2stream/st2stream/cmd/api.py\n+++ b/st2stream/st2stream/cmd/api.py\n@@ -13,6 +13,9 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+from st2common.util.monkey_patch import monkey_patch\n+monkey_patch()\n+\n import os\n import sys\n", "issue": "St2Stream service broken when using SSL with mongodb\n## SUMMARY\r\n\r\nThis issue is an extension to #4832 however this time it is the st2stream service, I have looked that the code and can see the same monkey patch code hasn't been applied to the st2stream app\r\n\r\n### STACKSTORM VERSION\r\n\r\nPaste the output of ``st2 --version``: 3.3.0\r\n\r\n##### OS, environment, install method\r\n\r\nDocker compose with the split services and mongo db references commented out so that an external db can be used https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml\r\n\r\nAll other services correctly connected to mongodb.net test instance with the exception of st2stream.\r\n\r\n## Steps to reproduce the problem\r\n\r\nuse docker yaml at https://github.com/StackStorm/st2-docker/blob/master/docker-compose.yml, comment out mongo container and references, adjust files/st2-docker.conf to point to external DB with SSL = True enabled.\r\ndocker-compose up\r\n\r\n## Expected Results\r\n\r\nWhat did you expect to happen when running the steps above?\r\n\r\nst2stream to operate correctly\r\n\r\n## Actual Results\r\n\r\nWhat happened? What output did you get?\r\n\r\n2020-11-16 05:48:55,053 WARNING [-] Retry on ConnectionError - Cannot connect to database default :\r\nmaximum recursion depth exceeded\r\n\r\n\r\n\r\nAdding monkey patch code to st2stream app resolves the issue (manually injected into container to test).\r\n\r\nfile: st2stream/cmd/api.py\r\nCode: \r\nfrom st2common.util.monkey_patch import monkey_patch\r\nmonkey_patch()\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\n\nimport eventlet\nfrom oslo_config import cfg\nfrom eventlet import wsgi\n\nfrom st2common import log as logging\nfrom st2common.service_setup import setup as common_setup\nfrom st2common.service_setup import teardown as common_teardown\nfrom st2common.stream.listener import get_listener_if_set\nfrom st2common.util.wsgi import shutdown_server_kill_pending_requests\nfrom st2stream.signal_handlers import register_stream_signal_handlers\nfrom st2stream import config\nconfig.register_opts()\nfrom st2stream import app\n\n__all__ = [\n 'main'\n]\n\n\neventlet.monkey_patch(\n os=True,\n select=True,\n socket=True,\n thread=False if '--use-debugger' in sys.argv else True,\n time=True)\n\nLOG = logging.getLogger(__name__)\n\n# How much time to give to the request in progress to finish in seconds before killing them\nWSGI_SERVER_REQUEST_SHUTDOWN_TIME = 2\n\n\ndef 
_setup():\n capabilities = {\n 'name': 'stream',\n 'listen_host': cfg.CONF.stream.host,\n 'listen_port': cfg.CONF.stream.port,\n 'type': 'active'\n }\n common_setup(service='stream', config=config, setup_db=True, register_mq_exchanges=True,\n register_signal_handlers=True, register_internal_trigger_types=False,\n run_migrations=False, service_registry=True, capabilities=capabilities)\n\n\ndef _run_server():\n host = cfg.CONF.stream.host\n port = cfg.CONF.stream.port\n\n LOG.info('(PID=%s) ST2 Stream API is serving on http://%s:%s.', os.getpid(), host, port)\n\n max_pool_size = eventlet.wsgi.DEFAULT_MAX_SIMULTANEOUS_REQUESTS\n worker_pool = eventlet.GreenPool(max_pool_size)\n sock = eventlet.listen((host, port))\n\n def queue_shutdown(signal_number, stack_frame):\n eventlet.spawn_n(shutdown_server_kill_pending_requests, sock=sock,\n worker_pool=worker_pool, wait_time=WSGI_SERVER_REQUEST_SHUTDOWN_TIME)\n\n # We register a custom SIGINT handler which allows us to kill long running active requests.\n # Note: Eventually we will support draining (waiting for short-running requests), but we\n # will still want to kill long running stream requests.\n register_stream_signal_handlers(handler_func=queue_shutdown)\n\n wsgi.server(sock, app.setup_app(), custom_pool=worker_pool)\n return 0\n\n\ndef _teardown():\n common_teardown()\n\n\ndef main():\n try:\n _setup()\n return _run_server()\n except SystemExit as exit_code:\n sys.exit(exit_code)\n except KeyboardInterrupt:\n listener = get_listener_if_set(name='stream')\n\n if listener:\n listener.shutdown()\n except Exception:\n LOG.exception('(PID=%s) ST2 Stream API quit due to exception.', os.getpid())\n return 1\n finally:\n _teardown()\n", "path": "st2stream/st2stream/cmd/api.py"}]} | 1,870 | 103 |
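The patch above works because eventlet's monkey patching has to run before any module that touches sockets or SSL is imported; a generic sketch of that ordering, assuming eventlet is installed:

```
import eventlet
eventlet.monkey_patch(os=True, select=True, socket=True, thread=True, time=True)

# Only after patching do we import anything that may open (SSL) connections,
# e.g. the MongoDB layer pulled in via st2common in the real service.
import socket
import ssl  # noqa: F401

print(socket.socket)  # eventlet's green socket rather than the stdlib one
```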
gh_patches_debug_8235 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-1824 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
google-api-python-client 2.6.0, 2.7.0, 2.8.0, 2.9.0 cannot work with google-auth 1.18.0 and earlier
These versions of `google-api-python-client` require `google-auth>=1.16.0,<2dev`. However, some of the code cannot work with `google-auth` 1.18.0 and earlier because `google.auth.load_credentials_from_file()` and `google.auth.default()` do not accept the `quota_project_id` parameter. I think it would be better to raise the minimum version of `google-auth` for these `google-api-python-client` versions.
Here are the details:
#### Environment details
- OS type and version: Ubuntu 20.04
- Python version: 3.9.7
- pip version: 21.2.4
- `google-api-python-client` version: 2.6.0, 2.7.0, 2.8.0, 2.9.0
#### Steps to reproduce
Create a clean Python environment and install relevant packages
```
conda create -n test python=3.9
conda activate test
pip install --no-cache-dir google-auth==1.18.0 google-api-core==1.21.0 google-api-python-client==2.9.0
```
#### Code example
```
(test) heh@ubuntu:~$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import googleapiclient._auth as _auth
>>> _auth.credentials_from_file("a")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/heh/anaconda3/envs/test/lib/python3.9/site-packages/googleapiclient/_auth.py", line 44, in credentials_from_file
credentials, _ = google.auth.load_credentials_from_file(filename, scopes=scopes, quota_project_id=quota_project_id)
TypeError: load_credentials_from_file() got an unexpected keyword argument 'quota_project_id'
>>> _auth.default_credentials("a")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/heh/anaconda3/envs/test/lib/python3.9/site-packages/googleapiclient/_auth.py", line 54, in default_credentials
credentials, _ = google.auth.default(scopes=scopes, quota_project_id=quota_project_id)
TypeError: default() got an unexpected keyword argument 'quota_project_id'
```
</issue>
<code>
[start of setup.py]
1 # Copyright 2014 Google Inc. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Setup script for Google API Python client.
16
17 Also installs included versions of third party libraries, if those libraries
18 are not already installed.
19 """
20 from __future__ import print_function
21
22 import sys
23
24 if sys.version_info < (3, 6):
25 print("google-api-python-client requires python3 version >= 3.6.", file=sys.stderr)
26 sys.exit(1)
27
28 import io
29 import os
30
31 from setuptools import setup
32
33 packages = ["apiclient", "googleapiclient", "googleapiclient/discovery_cache"]
34
35 install_requires = [
36 "httplib2>=0.15.0,<1dev",
37 # NOTE: Maintainers, please do not require google-auth>=2.x.x
38 # Until this issue is closed
39 # https://github.com/googleapis/google-cloud-python/issues/10566
40 "google-auth>=1.16.0,<3.0.0dev",
41 "google-auth-httplib2>=0.1.0",
42 # NOTE: Maintainers, please do not require google-api-core>=2.x.x
43 # Until this issue is closed
44 # https://github.com/googleapis/google-cloud-python/issues/10566
45 "google-api-core >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0",
46 "uritemplate>=3.0.1,<5",
47 ]
48
49 package_root = os.path.abspath(os.path.dirname(__file__))
50
51 readme_filename = os.path.join(package_root, "README.md")
52 with io.open(readme_filename, encoding="utf-8") as readme_file:
53 readme = readme_file.read()
54
55 package_root = os.path.abspath(os.path.dirname(__file__))
56
57 version = {}
58 with open(os.path.join(package_root, "googleapiclient/version.py")) as fp:
59 exec(fp.read(), version)
60 version = version["__version__"]
61
62 setup(
63 name="google-api-python-client",
64 version=version,
65 description="Google API Client Library for Python",
66 long_description=readme,
67 long_description_content_type="text/markdown",
68 author="Google LLC",
69 author_email="[email protected]",
70 url="https://github.com/googleapis/google-api-python-client/",
71 install_requires=install_requires,
72 python_requires=">=3.6",
73 packages=packages,
74 package_data={"googleapiclient": ["discovery_cache/documents/*.json"]},
75 license="Apache 2.0",
76 keywords="google api client",
77 classifiers=[
78 "Programming Language :: Python :: 3",
79 "Programming Language :: Python :: 3.6",
80 "Programming Language :: Python :: 3.7",
81 "Programming Language :: Python :: 3.8",
82 "Programming Language :: Python :: 3.9",
83 "Programming Language :: Python :: 3.10",
84 "Development Status :: 5 - Production/Stable",
85 "Intended Audience :: Developers",
86 "License :: OSI Approved :: Apache Software License",
87 "Operating System :: OS Independent",
88 "Topic :: Internet :: WWW/HTTP",
89 ],
90 )
91
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -37,7 +37,7 @@
# NOTE: Maintainers, please do not require google-auth>=2.x.x
# Until this issue is closed
# https://github.com/googleapis/google-cloud-python/issues/10566
- "google-auth>=1.16.0,<3.0.0dev",
+ "google-auth>=1.19.0,<3.0.0dev",
"google-auth-httplib2>=0.1.0",
# NOTE: Maintainers, please do not require google-api-core>=2.x.x
# Until this issue is closed
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -37,7 +37,7 @@\n # NOTE: Maintainers, please do not require google-auth>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n- \"google-auth>=1.16.0,<3.0.0dev\",\n+ \"google-auth>=1.19.0,<3.0.0dev\",\n \"google-auth-httplib2>=0.1.0\",\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n", "issue": "google-api-python-client 2.6.0, 2.7.0, 2.8.0, 2.9.0 cannot work with google-auth 1.18.0 and earlier\nThese versions of `google-api-python-client` requires `google-auth>=1.16.0,<2dev`. However, some of the code cannot work with `google-auth` 1.18.0 and earlier because`google.auth.load_credentials_from_file()` and `google.auth.default()` do not accept the `quota_project_id` parameter. I think it will be better to increase the minimum version of `google-auth` for these `google-api-python-client` versions.\r\n\r\nHere are the details:\r\n\r\n#### Environment details\r\n\r\n - OS type and version: Ubuntu 20.04\r\n - Python version: 3.9.7\r\n - pip version: 21.2.4\r\n - `google-api-python-client` version: 2.6.0, 2.7.0, 2.8.0, 2.9.0\r\n\r\n#### Steps to reproduce\r\n\r\nCreate a clean Python environment and install relevant packages\r\n \r\n```\r\nconda create -n test python=3.9\r\nconda activate test\r\npip install --no-cache-dir google-auth==1.18.0 google-api-core==1.21.0 google-api-python-client==2.9.0\r\n```\r\n\r\n#### Code example\r\n\r\n```\r\n(test) heh@ubuntu:~$ python\r\nPython 3.9.7 (default, Sep 16 2021, 13:09:58)\r\n[GCC 7.5.0] :: Anaconda, Inc. on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> import googleapiclient._auth as _auth\r\n>>> _auth.credentials_from_file(\"a\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/heh/anaconda3/envs/test/lib/python3.9/site-packages/googleapiclient/_auth.py\", line 44, in credentials_from_file\r\n credentials, _ = google.auth.load_credentials_from_file(filename, scopes=scopes, quota_project_id=quota_project_id)\r\nTypeError: load_credentials_from_file() got an unexpected keyword argument 'quota_project_id'\r\n>>> _auth.default_credentials(\"a\")\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/heh/anaconda3/envs/test/lib/python3.9/site-packages/googleapiclient/_auth.py\", line 54, in default_credentials\r\n credentials, _ = google.auth.default(scopes=scopes, quota_project_id=quota_project_id)\r\nTypeError: default() got an unexpected keyword argument 'quota_project_id'\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Setup script for Google API Python client.\n\nAlso installs included versions of third party libraries, if those libraries\nare not already installed.\n\"\"\"\nfrom __future__ import print_function\n\nimport sys\n\nif sys.version_info < (3, 6):\n print(\"google-api-python-client requires python3 version >= 3.6.\", file=sys.stderr)\n sys.exit(1)\n\nimport io\nimport os\n\nfrom setuptools import setup\n\npackages = [\"apiclient\", \"googleapiclient\", \"googleapiclient/discovery_cache\"]\n\ninstall_requires = [\n \"httplib2>=0.15.0,<1dev\",\n # NOTE: Maintainers, please do not require google-auth>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-auth>=1.16.0,<3.0.0dev\",\n \"google-auth-httplib2>=0.1.0\",\n # NOTE: Maintainers, please do not require google-api-core>=2.x.x\n # Until this issue is closed\n # https://github.com/googleapis/google-cloud-python/issues/10566\n \"google-api-core >= 1.31.5, <3.0.0dev,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.0\",\n \"uritemplate>=3.0.1,<5\",\n]\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.md\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nversion = {}\nwith open(os.path.join(package_root, \"googleapiclient/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\nsetup(\n name=\"google-api-python-client\",\n version=version,\n description=\"Google API Client Library for Python\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n url=\"https://github.com/googleapis/google-api-python-client/\",\n install_requires=install_requires,\n python_requires=\">=3.6\",\n packages=packages,\n package_data={\"googleapiclient\": [\"discovery_cache/documents/*.json\"]},\n license=\"Apache 2.0\",\n keywords=\"google api client\",\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet :: WWW/HTTP\",\n ],\n)\n", "path": "setup.py"}]} | 2,125 | 159 |
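The traceback in the issue above can also be probed directly: the `quota_project_id` keyword only exists from `google-auth` 1.19.0 onward, which is why the diff raises the floor in `setup.py` instead of guarding every call at runtime. A sketch of such a guard, assuming `google-auth` is installed and application default credentials are configured:

```
import google.auth

try:
    credentials, project = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"],
        quota_project_id="example-project",  # illustrative value
    )
except TypeError:
    # google-auth <= 1.18.0: default() does not accept quota_project_id.
    credentials, project = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
```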
gh_patches_debug_14503 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-3525 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Flyout links incorrect on Mkdocs projects
GitHub links in the sidebar footer contain an extraneous path. For example, on http://tracks.readthedocs.org/en/latest/upgrading/ the "View" link points to https://github.com/TracksApp/tracks/blob/master/home/docs/checkouts/readthedocs.org/user_builds/tracks/checkouts/latest/doc/Upgrading.md.
</issue>
<code>
[start of readthedocs/doc_builder/backends/mkdocs.py]
1 """
2 MkDocs_ backend for building docs.
3
4 .. _MkDocs: http://www.mkdocs.org/
5 """
6 from __future__ import (
7 absolute_import, division, print_function, unicode_literals)
8
9 import json
10 import logging
11 import os
12
13 import yaml
14 from django.conf import settings
15 from django.template import loader as template_loader
16
17 from readthedocs.doc_builder.base import BaseBuilder
18 from readthedocs.doc_builder.exceptions import BuildEnvironmentError
19
20 log = logging.getLogger(__name__)
21
22
23 def get_absolute_static_url():
24 """
25 Get the fully qualified static URL from settings.
26
27 Mkdocs needs a full domain because it tries to link to local files.
28 """
29 static_url = settings.STATIC_URL
30
31 if not static_url.startswith('http'):
32 domain = getattr(settings, 'PRODUCTION_DOMAIN')
33 static_url = 'http://{}{}'.format(domain, static_url)
34
35 return static_url
36
37
38 class BaseMkdocs(BaseBuilder):
39
40 """Mkdocs builder."""
41
42 # The default theme for mkdocs is the 'mkdocs' theme
43 DEFAULT_THEME_NAME = 'mkdocs'
44
45 def __init__(self, *args, **kwargs):
46 super(BaseMkdocs, self).__init__(*args, **kwargs)
47 self.old_artifact_path = os.path.join(
48 self.version.project.checkout_path(self.version.slug),
49 self.build_dir)
50 self.root_path = self.version.project.checkout_path(self.version.slug)
51 self.yaml_file = self.get_yaml_config()
52
53 def get_yaml_config(self):
54 """Find the ``mkdocs.yml`` file in the project root."""
55 mkdoc_path = self.config.mkdocs.configuration
56 if not mkdoc_path:
57 mkdoc_path = os.path.join(
58 self.project.checkout_path(self.version.slug),
59 'mkdocs.yml'
60 )
61 if not os.path.exists(mkdoc_path):
62 return None
63 return mkdoc_path
64
65 def load_yaml_config(self):
66 """
67 Load a YAML config.
68
69 Raise BuildEnvironmentError if failed due to syntax errors.
70 """
71 try:
72 return yaml.safe_load(
73 open(self.yaml_file, 'r')
74 )
75 except IOError:
76 return {
77 'site_name': self.version.project.name,
78 }
79 except yaml.YAMLError as exc:
80 note = ''
81 if hasattr(exc, 'problem_mark'):
82 mark = exc.problem_mark
83 note = ' (line %d, column %d)' % (mark.line + 1, mark.column + 1)
84 raise BuildEnvironmentError(
85 'Your mkdocs.yml could not be loaded, '
86 'possibly due to a syntax error{note}'.format(note=note)
87 )
88
89 def append_conf(self, **__):
90 """Set mkdocs config values."""
91 if not self.yaml_file:
92 self.yaml_file = os.path.join(self.root_path, 'mkdocs.yml')
93
94 user_config = self.load_yaml_config()
95
96 # Handle custom docs dirs
97 user_docs_dir = user_config.get('docs_dir')
98 docs_dir = self.docs_dir(docs_dir=user_docs_dir)
99 self.create_index(extension='md')
100 user_config['docs_dir'] = docs_dir
101
102 # Set mkdocs config values
103 static_url = get_absolute_static_url()
104 user_config.setdefault('extra_javascript', []).extend([
105 'readthedocs-data.js',
106 '%score/js/readthedocs-doc-embed.js' % static_url,
107 '%sjavascript/readthedocs-analytics.js' % static_url,
108 ])
109 user_config.setdefault('extra_css', []).extend([
110 '%scss/badge_only.css' % static_url,
111 '%scss/readthedocs-doc-embed.css' % static_url,
112 ])
113
114 docs_path = os.path.join(self.root_path, docs_dir)
115
116 # RTD javascript writing
117 rtd_data = self.generate_rtd_data(
118 docs_dir=docs_dir,
119 mkdocs_config=user_config
120 )
121 with open(os.path.join(docs_path, 'readthedocs-data.js'), 'w') as f:
122 f.write(rtd_data)
123
124 # Use Read the Docs' analytics setup rather than mkdocs'
125 # This supports using RTD's privacy improvements around analytics
126 user_config['google_analytics'] = None
127
128 # Write the modified mkdocs configuration
129 yaml.safe_dump(
130 user_config,
131 open(self.yaml_file, 'w')
132 )
133
134 # Write the mkdocs.yml to the build logs
135 self.run(
136 'cat',
137 os.path.relpath(self.yaml_file, self.root_path),
138 cwd=self.root_path,
139 )
140
141 def generate_rtd_data(self, docs_dir, mkdocs_config):
142 """Generate template properties and render readthedocs-data.js."""
143 # Use the analytics code from mkdocs.yml
144 # if it isn't set already by Read the Docs,
145 analytics_code = self.version.project.analytics_code
146 if not analytics_code and mkdocs_config.get('google_analytics'):
147 # http://www.mkdocs.org/user-guide/configuration/#google_analytics
148 analytics_code = mkdocs_config['google_analytics'][0]
149
150 # Will be available in the JavaScript as READTHEDOCS_DATA.
151 readthedocs_data = {
152 'project': self.version.project.slug,
153 'version': self.version.slug,
154 'language': self.version.project.language,
155 'programming_language': self.version.project.programming_language,
156 'page': None,
157 'theme': self.get_theme_name(mkdocs_config),
158 'builder': "mkdocs",
159 'docroot': docs_dir,
160 'source_suffix': ".md",
161 'api_host': getattr(settings, 'PUBLIC_API_URL', 'https://readthedocs.org'),
162 'ad_free': not self.project.show_advertising,
163 'commit': self.version.project.vcs_repo(self.version.slug).commit,
164 'global_analytics_code': getattr(settings, 'GLOBAL_ANALYTICS_CODE', 'UA-17997319-1'),
165 'user_analytics_code': analytics_code,
166 }
167 data_json = json.dumps(readthedocs_data, indent=4)
168 data_ctx = {
169 'data_json': data_json,
170 'current_version': readthedocs_data['version'],
171 'slug': readthedocs_data['project'],
172 'html_theme': readthedocs_data['theme'],
173 'pagename': None,
174 }
175 tmpl = template_loader.get_template('doc_builder/data.js.tmpl')
176 return tmpl.render(data_ctx)
177
178 def build(self):
179 checkout_path = self.project.checkout_path(self.version.slug)
180 build_command = [
181 'python',
182 self.python_env.venv_bin(filename='mkdocs'),
183 self.builder,
184 '--clean',
185 '--site-dir', self.build_dir,
186 '--config-file', self.yaml_file,
187 ]
188 if self.config.mkdocs.fail_on_warning:
189 build_command.append('--strict')
190 cmd_ret = self.run(
191 *build_command,
192 cwd=checkout_path,
193 bin_path=self.python_env.venv_bin()
194 )
195 return cmd_ret.successful
196
197 def get_theme_name(self, mkdocs_config):
198 """
199 Get the theme configuration in the mkdocs_config
200
201 In v0.17.0, the theme configuration switched
202 from two separate configs (both optional) to a nested directive.
203
204 :see: http://www.mkdocs.org/about/release-notes/#theme-customization-1164
205 :returns: the name of the theme RTD will use
206 """
207 theme_setting = mkdocs_config.get('theme')
208 if isinstance(theme_setting, dict):
209 # Full nested theme config (the new configuration)
210 return theme_setting.get('name') or self.DEFAULT_THEME_NAME
211
212 if theme_setting:
213 # A string which is the name of the theme
214 return theme_setting
215
216 theme_dir = mkdocs_config.get('theme_dir')
217 if theme_dir:
218 # Use the name of the directory in this project's custom theme directory
219 return theme_dir.rstrip('/').split('/')[-1]
220
221 return self.DEFAULT_THEME_NAME
222
223
224 class MkdocsHTML(BaseMkdocs):
225 type = 'mkdocs'
226 builder = 'build'
227 build_dir = '_build/html'
228
229
230 class MkdocsJSON(BaseMkdocs):
231 type = 'mkdocs_json'
232 builder = 'json'
233 build_dir = '_build/json'
234
[end of readthedocs/doc_builder/backends/mkdocs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/doc_builder/backends/mkdocs.py b/readthedocs/doc_builder/backends/mkdocs.py
--- a/readthedocs/doc_builder/backends/mkdocs.py
+++ b/readthedocs/doc_builder/backends/mkdocs.py
@@ -111,11 +111,16 @@
'%scss/readthedocs-doc-embed.css' % static_url,
])
- docs_path = os.path.join(self.root_path, docs_dir)
+ # The docs path is relative to the location
+ # of the mkdocs configuration file.
+ docs_path = os.path.join(
+ os.path.dirname(self.yaml_file),
+ docs_dir
+ )
# RTD javascript writing
rtd_data = self.generate_rtd_data(
- docs_dir=docs_dir,
+ docs_dir=os.path.relpath(docs_path, self.root_path),
mkdocs_config=user_config
)
with open(os.path.join(docs_path, 'readthedocs-data.js'), 'w') as f:
| {"golden_diff": "diff --git a/readthedocs/doc_builder/backends/mkdocs.py b/readthedocs/doc_builder/backends/mkdocs.py\n--- a/readthedocs/doc_builder/backends/mkdocs.py\n+++ b/readthedocs/doc_builder/backends/mkdocs.py\n@@ -111,11 +111,16 @@\n '%scss/readthedocs-doc-embed.css' % static_url,\n ])\n \n- docs_path = os.path.join(self.root_path, docs_dir)\n+ # The docs path is relative to the location\n+ # of the mkdocs configuration file.\n+ docs_path = os.path.join(\n+ os.path.dirname(self.yaml_file),\n+ docs_dir\n+ )\n \n # RTD javascript writing\n rtd_data = self.generate_rtd_data(\n- docs_dir=docs_dir,\n+ docs_dir=os.path.relpath(docs_path, self.root_path),\n mkdocs_config=user_config\n )\n with open(os.path.join(docs_path, 'readthedocs-data.js'), 'w') as f:\n", "issue": "Flyout links incorrect on Mkdocs projects\nGitHub links in the sidebar footer contain an extraneous path. For example, on http://tracks.readthedocs.org/en/latest/upgrading/ the \"View\" link points to https://github.com/TracksApp/tracks/blob/master/home/docs/checkouts/readthedocs.org/user_builds/tracks/checkouts/latest/doc/Upgrading.md.\n\n", "before_files": [{"content": "\"\"\"\nMkDocs_ backend for building docs.\n\n.. _MkDocs: http://www.mkdocs.org/\n\"\"\"\nfrom __future__ import (\n absolute_import, division, print_function, unicode_literals)\n\nimport json\nimport logging\nimport os\n\nimport yaml\nfrom django.conf import settings\nfrom django.template import loader as template_loader\n\nfrom readthedocs.doc_builder.base import BaseBuilder\nfrom readthedocs.doc_builder.exceptions import BuildEnvironmentError\n\nlog = logging.getLogger(__name__)\n\n\ndef get_absolute_static_url():\n \"\"\"\n Get the fully qualified static URL from settings.\n\n Mkdocs needs a full domain because it tries to link to local files.\n \"\"\"\n static_url = settings.STATIC_URL\n\n if not static_url.startswith('http'):\n domain = getattr(settings, 'PRODUCTION_DOMAIN')\n static_url = 'http://{}{}'.format(domain, static_url)\n\n return static_url\n\n\nclass BaseMkdocs(BaseBuilder):\n\n \"\"\"Mkdocs builder.\"\"\"\n\n # The default theme for mkdocs is the 'mkdocs' theme\n DEFAULT_THEME_NAME = 'mkdocs'\n\n def __init__(self, *args, **kwargs):\n super(BaseMkdocs, self).__init__(*args, **kwargs)\n self.old_artifact_path = os.path.join(\n self.version.project.checkout_path(self.version.slug),\n self.build_dir)\n self.root_path = self.version.project.checkout_path(self.version.slug)\n self.yaml_file = self.get_yaml_config()\n\n def get_yaml_config(self):\n \"\"\"Find the ``mkdocs.yml`` file in the project root.\"\"\"\n mkdoc_path = self.config.mkdocs.configuration\n if not mkdoc_path:\n mkdoc_path = os.path.join(\n self.project.checkout_path(self.version.slug),\n 'mkdocs.yml'\n )\n if not os.path.exists(mkdoc_path):\n return None\n return mkdoc_path\n\n def load_yaml_config(self):\n \"\"\"\n Load a YAML config.\n\n Raise BuildEnvironmentError if failed due to syntax errors.\n \"\"\"\n try:\n return yaml.safe_load(\n open(self.yaml_file, 'r')\n )\n except IOError:\n return {\n 'site_name': self.version.project.name,\n }\n except yaml.YAMLError as exc:\n note = ''\n if hasattr(exc, 'problem_mark'):\n mark = exc.problem_mark\n note = ' (line %d, column %d)' % (mark.line + 1, mark.column + 1)\n raise BuildEnvironmentError(\n 'Your mkdocs.yml could not be loaded, '\n 'possibly due to a syntax error{note}'.format(note=note)\n )\n\n def append_conf(self, **__):\n \"\"\"Set mkdocs config values.\"\"\"\n if not self.yaml_file:\n self.yaml_file = 
os.path.join(self.root_path, 'mkdocs.yml')\n\n user_config = self.load_yaml_config()\n\n # Handle custom docs dirs\n user_docs_dir = user_config.get('docs_dir')\n docs_dir = self.docs_dir(docs_dir=user_docs_dir)\n self.create_index(extension='md')\n user_config['docs_dir'] = docs_dir\n\n # Set mkdocs config values\n static_url = get_absolute_static_url()\n user_config.setdefault('extra_javascript', []).extend([\n 'readthedocs-data.js',\n '%score/js/readthedocs-doc-embed.js' % static_url,\n '%sjavascript/readthedocs-analytics.js' % static_url,\n ])\n user_config.setdefault('extra_css', []).extend([\n '%scss/badge_only.css' % static_url,\n '%scss/readthedocs-doc-embed.css' % static_url,\n ])\n\n docs_path = os.path.join(self.root_path, docs_dir)\n\n # RTD javascript writing\n rtd_data = self.generate_rtd_data(\n docs_dir=docs_dir,\n mkdocs_config=user_config\n )\n with open(os.path.join(docs_path, 'readthedocs-data.js'), 'w') as f:\n f.write(rtd_data)\n\n # Use Read the Docs' analytics setup rather than mkdocs'\n # This supports using RTD's privacy improvements around analytics\n user_config['google_analytics'] = None\n\n # Write the modified mkdocs configuration\n yaml.safe_dump(\n user_config,\n open(self.yaml_file, 'w')\n )\n\n # Write the mkdocs.yml to the build logs\n self.run(\n 'cat',\n os.path.relpath(self.yaml_file, self.root_path),\n cwd=self.root_path,\n )\n\n def generate_rtd_data(self, docs_dir, mkdocs_config):\n \"\"\"Generate template properties and render readthedocs-data.js.\"\"\"\n # Use the analytics code from mkdocs.yml\n # if it isn't set already by Read the Docs,\n analytics_code = self.version.project.analytics_code\n if not analytics_code and mkdocs_config.get('google_analytics'):\n # http://www.mkdocs.org/user-guide/configuration/#google_analytics\n analytics_code = mkdocs_config['google_analytics'][0]\n\n # Will be available in the JavaScript as READTHEDOCS_DATA.\n readthedocs_data = {\n 'project': self.version.project.slug,\n 'version': self.version.slug,\n 'language': self.version.project.language,\n 'programming_language': self.version.project.programming_language,\n 'page': None,\n 'theme': self.get_theme_name(mkdocs_config),\n 'builder': \"mkdocs\",\n 'docroot': docs_dir,\n 'source_suffix': \".md\",\n 'api_host': getattr(settings, 'PUBLIC_API_URL', 'https://readthedocs.org'),\n 'ad_free': not self.project.show_advertising,\n 'commit': self.version.project.vcs_repo(self.version.slug).commit,\n 'global_analytics_code': getattr(settings, 'GLOBAL_ANALYTICS_CODE', 'UA-17997319-1'),\n 'user_analytics_code': analytics_code,\n }\n data_json = json.dumps(readthedocs_data, indent=4)\n data_ctx = {\n 'data_json': data_json,\n 'current_version': readthedocs_data['version'],\n 'slug': readthedocs_data['project'],\n 'html_theme': readthedocs_data['theme'],\n 'pagename': None,\n }\n tmpl = template_loader.get_template('doc_builder/data.js.tmpl')\n return tmpl.render(data_ctx)\n\n def build(self):\n checkout_path = self.project.checkout_path(self.version.slug)\n build_command = [\n 'python',\n self.python_env.venv_bin(filename='mkdocs'),\n self.builder,\n '--clean',\n '--site-dir', self.build_dir,\n '--config-file', self.yaml_file,\n ]\n if self.config.mkdocs.fail_on_warning:\n build_command.append('--strict')\n cmd_ret = self.run(\n *build_command,\n cwd=checkout_path,\n bin_path=self.python_env.venv_bin()\n )\n return cmd_ret.successful\n\n def get_theme_name(self, mkdocs_config):\n \"\"\"\n Get the theme configuration in the mkdocs_config\n\n In v0.17.0, the theme 
configuration switched\n from two separate configs (both optional) to a nested directive.\n\n :see: http://www.mkdocs.org/about/release-notes/#theme-customization-1164\n :returns: the name of the theme RTD will use\n \"\"\"\n theme_setting = mkdocs_config.get('theme')\n if isinstance(theme_setting, dict):\n # Full nested theme config (the new configuration)\n return theme_setting.get('name') or self.DEFAULT_THEME_NAME\n\n if theme_setting:\n # A string which is the name of the theme\n return theme_setting\n\n theme_dir = mkdocs_config.get('theme_dir')\n if theme_dir:\n # Use the name of the directory in this project's custom theme directory\n return theme_dir.rstrip('/').split('/')[-1]\n\n return self.DEFAULT_THEME_NAME\n\n\nclass MkdocsHTML(BaseMkdocs):\n type = 'mkdocs'\n builder = 'build'\n build_dir = '_build/html'\n\n\nclass MkdocsJSON(BaseMkdocs):\n type = 'mkdocs_json'\n builder = 'json'\n build_dir = '_build/json'\n", "path": "readthedocs/doc_builder/backends/mkdocs.py"}]} | 2,995 | 226 |
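The effect of the `os.path.relpath()` change in the diff above can be seen with paths modelled on the Tracks example from the issue (values are illustrative):

```
import os

root_path = "/home/docs/checkouts/readthedocs.org/user_builds/tracks/checkouts/latest"
yaml_file = os.path.join(root_path, "mkdocs.yml")
docs_dir = "doc"  # as set in the project's mkdocs.yml

docs_path = os.path.join(os.path.dirname(yaml_file), docs_dir)
print(os.path.relpath(docs_path, root_path))  # 'doc' -- no leaked checkout prefix
```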
gh_patches_debug_38482 | rasdani/github-patches | git_diff | great-expectations__great_expectations-2749 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
<code>
[start of great_expectations/datasource/data_connector/batch_filter.py]
1 import itertools
2 import logging
3 from typing import Callable, Dict, Optional, Union
4
5 import great_expectations.exceptions as ge_exceptions
6 from great_expectations.core.id_dict import IDDict
7 from great_expectations.util import is_int
8
9 logger = logging.getLogger(__name__)
10
11
12 def build_batch_filter(
13 data_connector_query_dict: Optional[
14 Dict[
15 str,
16 Optional[
17 Union[
18 int,
19 list,
20 tuple,
21 slice,
22 str,
23 Union[Dict, IDDict],
24 Callable,
25 ]
26 ],
27 ]
28 ] = None
29 ):
30 if not data_connector_query_dict:
31 return BatchFilter(
32 custom_filter_function=None,
33 batch_filter_parameters=None,
34 index=None,
35 limit=None,
36 )
37 data_connector_query_keys: set = set(data_connector_query_dict.keys())
38 if not data_connector_query_keys <= BatchFilter.RECOGNIZED_KEYS:
39 raise ge_exceptions.BatchFilterError(
40 f"""Unrecognized data_connector_query key(s):
41 "{str(data_connector_query_keys - BatchFilter.RECOGNIZED_KEYS)}" detected.
42 """
43 )
44 custom_filter_function: Callable = data_connector_query_dict.get(
45 "custom_filter_function"
46 )
47 if custom_filter_function and not isinstance(custom_filter_function, Callable):
48 raise ge_exceptions.BatchFilterError(
49 f"""The type of a custom_filter must be a function (Python "Callable"). The type given is
50 "{str(type(custom_filter_function))}", which is illegal.
51 """
52 )
53 batch_filter_parameters: Optional[dict] = data_connector_query_dict.get(
54 "batch_filter_parameters"
55 )
56 if batch_filter_parameters:
57 if not isinstance(batch_filter_parameters, dict):
58 raise ge_exceptions.BatchFilterError(
59 f"""The type of batch_filter_parameters must be a dictionary (Python "dict"). The type given is
60 "{str(type(batch_filter_parameters))}", which is illegal.
61 """
62 )
63 if not all([isinstance(key, str) for key in batch_filter_parameters.keys()]):
64 raise ge_exceptions.BatchFilterError(
65 'All batch_filter_parameters keys must strings (Python "str").'
66 )
67 if batch_filter_parameters is not None:
68 batch_filter_parameters: IDDict = IDDict(batch_filter_parameters)
69 index: Optional[
70 Union[int, list, tuple, slice, str]
71 ] = data_connector_query_dict.get("index")
72 limit: Optional[int] = data_connector_query_dict.get("limit")
73 if limit and (not isinstance(limit, int) or limit < 0):
74 raise ge_exceptions.BatchFilterError(
75 f"""The type of a limit must be an integer (Python "int") that is greater than or equal to 0. The
76 type and value given are "{str(type(limit))}" and "{limit}", respectively, which is illegal.
77 """
78 )
79 if index is not None and limit is not None:
80 raise ge_exceptions.BatchFilterError(
81 "Only one of index or limit, but not both, can be specified (specifying both is illegal)."
82 )
83 index = _parse_index(index=index)
84 return BatchFilter(
85 custom_filter_function=custom_filter_function,
86 batch_filter_parameters=batch_filter_parameters,
87 index=index,
88 limit=limit,
89 )
90
91
92 def _parse_index(
93 index: Optional[Union[int, list, tuple, slice, str]] = None
94 ) -> Optional[Union[int, slice]]:
95 if index is None:
96 return None
97 elif isinstance(index, (int, slice)):
98 return index
99 elif isinstance(index, (list, tuple)):
100 if len(index) > 3:
101 raise ge_exceptions.BatchFilterError(
102 f"""The number of index slice components must be between 1 and 3 (the given number is
103 {len(index)}).
104 """
105 )
106 if len(index) == 1:
107 return index[0]
108 if len(index) == 2:
109 return slice(index[0], index[1], None)
110 if len(index) == 3:
111 return slice(index[0], index[1], index[2])
112 elif isinstance(index, str):
113 if is_int(value=index):
114 return _parse_index(index=int(index))
115 return _parse_index(index=[int(idx_str) for idx_str in index.split(":")])
116 else:
117 raise ge_exceptions.BatchFilterError(
118 f"""The type of index must be an integer (Python "int"), or a list (Python "list") or a tuple
119 (Python "tuple"), or a Python "slice" object, or a string that has the format of a single integer or a slice argument.
120 The type given is "{str(type(index))}", which is illegal.
121 """
122 )
123
124
125 class BatchFilter:
126 RECOGNIZED_KEYS: set = {
127 "custom_filter_function",
128 "batch_filter_parameters",
129 "index",
130 "limit",
131 }
132
133 def __init__(
134 self,
135 custom_filter_function: Callable = None,
136 batch_filter_parameters: Optional[IDDict] = None,
137 index: Optional[Union[int, slice]] = None,
138 limit: int = None,
139 ):
140 self._custom_filter_function = custom_filter_function
141 self._batch_filter_parameters = batch_filter_parameters
142 self._index = index
143 self._limit = limit
144
145 @property
146 def custom_filter_function(self) -> Callable:
147 return self._custom_filter_function
148
149 @property
150 def batch_filter_parameters(self) -> Optional[IDDict]:
151 return self._batch_filter_parameters
152
153 @property
154 def index(self) -> Optional[Union[int, slice]]:
155 return self._index
156
157 @property
158 def limit(self) -> int:
159 return self._limit
160
161 def __repr__(self) -> str:
162 doc_fields_dict: dict = {
163 "custom_filter_function": self._custom_filter_function,
164 "batch_filter_parameters": self.batch_filter_parameters,
165 "index": self.index,
166 "limit": self.limit,
167 }
168 return str(doc_fields_dict)
169
170 def select_from_data_connector_query(self, batch_definition_list=None):
171 if batch_definition_list is None:
172 return []
173 filter_function: Callable
174 if self.custom_filter_function:
175 filter_function = self.custom_filter_function
176 else:
177 filter_function = self.best_effort_batch_definition_matcher()
178 selected_batch_definitions = list(
179 filter(
180 lambda batch_definition: filter_function(
181 batch_identifiers=batch_definition.batch_identifiers,
182 ),
183 batch_definition_list,
184 )
185 )
186 if self.index is None:
187 selected_batch_definitions = selected_batch_definitions[: self.limit]
188 else:
189 if isinstance(self.index, int):
190 selected_batch_definitions = [selected_batch_definitions[self.index]]
191 else:
192 selected_batch_definitions = list(
193 itertools.chain.from_iterable(
194 [selected_batch_definitions[self.index]]
195 )
196 )
197 return selected_batch_definitions
198
199 def best_effort_batch_definition_matcher(self) -> Callable:
200 def match_batch_identifiers_to_batch_filter_params(
201 batch_identifiers: dict,
202 ) -> bool:
203 if self.batch_filter_parameters:
204 if not batch_identifiers:
205 return False
206
207 for batch_filter_parameter, val in self.batch_filter_parameters.items():
208 if not (
209 batch_filter_parameter in batch_identifiers
210 and batch_identifiers[batch_filter_parameter] == val
211 ):
212 return False
213
214 return True
215
216 return match_batch_identifiers_to_batch_filter_params
217
[end of great_expectations/datasource/data_connector/batch_filter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/datasource/data_connector/batch_filter.py b/great_expectations/datasource/data_connector/batch_filter.py
--- a/great_expectations/datasource/data_connector/batch_filter.py
+++ b/great_expectations/datasource/data_connector/batch_filter.py
@@ -1,8 +1,9 @@
import itertools
import logging
-from typing import Callable, Dict, Optional, Union
+from typing import Callable, Dict, List, Optional, Union
import great_expectations.exceptions as ge_exceptions
+from great_expectations.core.batch import BatchDefinition
from great_expectations.core.id_dict import IDDict
from great_expectations.util import is_int
@@ -50,9 +51,9 @@
"{str(type(custom_filter_function))}", which is illegal.
"""
)
- batch_filter_parameters: Optional[dict] = data_connector_query_dict.get(
- "batch_filter_parameters"
- )
+ batch_filter_parameters: Optional[
+ Union[dict, IDDict]
+ ] = data_connector_query_dict.get("batch_filter_parameters")
if batch_filter_parameters:
if not isinstance(batch_filter_parameters, dict):
raise ge_exceptions.BatchFilterError(
@@ -64,8 +65,7 @@
raise ge_exceptions.BatchFilterError(
'All batch_filter_parameters keys must strings (Python "str").'
)
- if batch_filter_parameters is not None:
- batch_filter_parameters: IDDict = IDDict(batch_filter_parameters)
+ batch_filter_parameters = IDDict(batch_filter_parameters)
index: Optional[
Union[int, list, tuple, slice, str]
] = data_connector_query_dict.get("index")
@@ -167,7 +167,9 @@
}
return str(doc_fields_dict)
- def select_from_data_connector_query(self, batch_definition_list=None):
+ def select_from_data_connector_query(
+ self, batch_definition_list: Optional[List[BatchDefinition]] = None
+ ) -> List[BatchDefinition]:
if batch_definition_list is None:
return []
filter_function: Callable
@@ -175,6 +177,7 @@
filter_function = self.custom_filter_function
else:
filter_function = self.best_effort_batch_definition_matcher()
+ selected_batch_definitions: List[BatchDefinition]
selected_batch_definitions = list(
filter(
lambda batch_definition: filter_function(
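
For orientation, the `build_batch_filter` helper listed above only validates and normalizes its arguments; the patch does not change that behaviour, it mainly tightens type annotations and the `IDDict` conversion. A few illustrative calls (hypothetical values, not taken from the repository's tests) would behave as follows:

```python
# Minimal sketch, assuming great_expectations is importable from this checkout.
from great_expectations.datasource.data_connector.batch_filter import build_batch_filter

bf = build_batch_filter({"index": "0:3"})  # string form is parsed into slice(0, 3, None)
bf = build_batch_filter({"index": 2})      # a plain integer index is kept as-is
bf = build_batch_filter({"limit": 5})      # keep at most five batch definitions, no index
# build_batch_filter({"index": 2, "limit": 5})  # would raise BatchFilterError: index and limit are mutually exclusive
```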
| {"golden_diff": "diff --git a/great_expectations/datasource/data_connector/batch_filter.py b/great_expectations/datasource/data_connector/batch_filter.py\n--- a/great_expectations/datasource/data_connector/batch_filter.py\n+++ b/great_expectations/datasource/data_connector/batch_filter.py\n@@ -1,8 +1,9 @@\n import itertools\n import logging\n-from typing import Callable, Dict, Optional, Union\n+from typing import Callable, Dict, List, Optional, Union\n \n import great_expectations.exceptions as ge_exceptions\n+from great_expectations.core.batch import BatchDefinition\n from great_expectations.core.id_dict import IDDict\n from great_expectations.util import is_int\n \n@@ -50,9 +51,9 @@\n \"{str(type(custom_filter_function))}\", which is illegal.\n \"\"\"\n )\n- batch_filter_parameters: Optional[dict] = data_connector_query_dict.get(\n- \"batch_filter_parameters\"\n- )\n+ batch_filter_parameters: Optional[\n+ Union[dict, IDDict]\n+ ] = data_connector_query_dict.get(\"batch_filter_parameters\")\n if batch_filter_parameters:\n if not isinstance(batch_filter_parameters, dict):\n raise ge_exceptions.BatchFilterError(\n@@ -64,8 +65,7 @@\n raise ge_exceptions.BatchFilterError(\n 'All batch_filter_parameters keys must strings (Python \"str\").'\n )\n- if batch_filter_parameters is not None:\n- batch_filter_parameters: IDDict = IDDict(batch_filter_parameters)\n+ batch_filter_parameters = IDDict(batch_filter_parameters)\n index: Optional[\n Union[int, list, tuple, slice, str]\n ] = data_connector_query_dict.get(\"index\")\n@@ -167,7 +167,9 @@\n }\n return str(doc_fields_dict)\n \n- def select_from_data_connector_query(self, batch_definition_list=None):\n+ def select_from_data_connector_query(\n+ self, batch_definition_list: Optional[List[BatchDefinition]] = None\n+ ) -> List[BatchDefinition]:\n if batch_definition_list is None:\n return []\n filter_function: Callable\n@@ -175,6 +177,7 @@\n filter_function = self.custom_filter_function\n else:\n filter_function = self.best_effort_batch_definition_matcher()\n+ selected_batch_definitions: List[BatchDefinition]\n selected_batch_definitions = list(\n filter(\n lambda batch_definition: filter_function(\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import itertools\nimport logging\nfrom typing import Callable, Dict, Optional, Union\n\nimport great_expectations.exceptions as ge_exceptions\nfrom great_expectations.core.id_dict import IDDict\nfrom great_expectations.util import is_int\n\nlogger = logging.getLogger(__name__)\n\n\ndef build_batch_filter(\n data_connector_query_dict: Optional[\n Dict[\n str,\n Optional[\n Union[\n int,\n list,\n tuple,\n slice,\n str,\n Union[Dict, IDDict],\n Callable,\n ]\n ],\n ]\n ] = None\n):\n if not data_connector_query_dict:\n return BatchFilter(\n custom_filter_function=None,\n batch_filter_parameters=None,\n index=None,\n limit=None,\n )\n data_connector_query_keys: set = set(data_connector_query_dict.keys())\n if not data_connector_query_keys <= BatchFilter.RECOGNIZED_KEYS:\n raise ge_exceptions.BatchFilterError(\n f\"\"\"Unrecognized data_connector_query key(s):\n\"{str(data_connector_query_keys - BatchFilter.RECOGNIZED_KEYS)}\" detected.\n \"\"\"\n )\n custom_filter_function: Callable = data_connector_query_dict.get(\n \"custom_filter_function\"\n )\n if custom_filter_function and not isinstance(custom_filter_function, Callable):\n raise ge_exceptions.BatchFilterError(\n f\"\"\"The type of a 
custom_filter must be a function (Python \"Callable\"). The type given is\n\"{str(type(custom_filter_function))}\", which is illegal.\n \"\"\"\n )\n batch_filter_parameters: Optional[dict] = data_connector_query_dict.get(\n \"batch_filter_parameters\"\n )\n if batch_filter_parameters:\n if not isinstance(batch_filter_parameters, dict):\n raise ge_exceptions.BatchFilterError(\n f\"\"\"The type of batch_filter_parameters must be a dictionary (Python \"dict\"). The type given is\n\"{str(type(batch_filter_parameters))}\", which is illegal.\n \"\"\"\n )\n if not all([isinstance(key, str) for key in batch_filter_parameters.keys()]):\n raise ge_exceptions.BatchFilterError(\n 'All batch_filter_parameters keys must strings (Python \"str\").'\n )\n if batch_filter_parameters is not None:\n batch_filter_parameters: IDDict = IDDict(batch_filter_parameters)\n index: Optional[\n Union[int, list, tuple, slice, str]\n ] = data_connector_query_dict.get(\"index\")\n limit: Optional[int] = data_connector_query_dict.get(\"limit\")\n if limit and (not isinstance(limit, int) or limit < 0):\n raise ge_exceptions.BatchFilterError(\n f\"\"\"The type of a limit must be an integer (Python \"int\") that is greater than or equal to 0. The\ntype and value given are \"{str(type(limit))}\" and \"{limit}\", respectively, which is illegal.\n \"\"\"\n )\n if index is not None and limit is not None:\n raise ge_exceptions.BatchFilterError(\n \"Only one of index or limit, but not both, can be specified (specifying both is illegal).\"\n )\n index = _parse_index(index=index)\n return BatchFilter(\n custom_filter_function=custom_filter_function,\n batch_filter_parameters=batch_filter_parameters,\n index=index,\n limit=limit,\n )\n\n\ndef _parse_index(\n index: Optional[Union[int, list, tuple, slice, str]] = None\n) -> Optional[Union[int, slice]]:\n if index is None:\n return None\n elif isinstance(index, (int, slice)):\n return index\n elif isinstance(index, (list, tuple)):\n if len(index) > 3:\n raise ge_exceptions.BatchFilterError(\n f\"\"\"The number of index slice components must be between 1 and 3 (the given number is\n{len(index)}).\n \"\"\"\n )\n if len(index) == 1:\n return index[0]\n if len(index) == 2:\n return slice(index[0], index[1], None)\n if len(index) == 3:\n return slice(index[0], index[1], index[2])\n elif isinstance(index, str):\n if is_int(value=index):\n return _parse_index(index=int(index))\n return _parse_index(index=[int(idx_str) for idx_str in index.split(\":\")])\n else:\n raise ge_exceptions.BatchFilterError(\n f\"\"\"The type of index must be an integer (Python \"int\"), or a list (Python \"list\") or a tuple\n(Python \"tuple\"), or a Python \"slice\" object, or a string that has the format of a single integer or a slice argument.\nThe type given is \"{str(type(index))}\", which is illegal.\n \"\"\"\n )\n\n\nclass BatchFilter:\n RECOGNIZED_KEYS: set = {\n \"custom_filter_function\",\n \"batch_filter_parameters\",\n \"index\",\n \"limit\",\n }\n\n def __init__(\n self,\n custom_filter_function: Callable = None,\n batch_filter_parameters: Optional[IDDict] = None,\n index: Optional[Union[int, slice]] = None,\n limit: int = None,\n ):\n self._custom_filter_function = custom_filter_function\n self._batch_filter_parameters = batch_filter_parameters\n self._index = index\n self._limit = limit\n\n @property\n def custom_filter_function(self) -> Callable:\n return self._custom_filter_function\n\n @property\n def batch_filter_parameters(self) -> Optional[IDDict]:\n return self._batch_filter_parameters\n\n 
@property\n def index(self) -> Optional[Union[int, slice]]:\n return self._index\n\n @property\n def limit(self) -> int:\n return self._limit\n\n def __repr__(self) -> str:\n doc_fields_dict: dict = {\n \"custom_filter_function\": self._custom_filter_function,\n \"batch_filter_parameters\": self.batch_filter_parameters,\n \"index\": self.index,\n \"limit\": self.limit,\n }\n return str(doc_fields_dict)\n\n def select_from_data_connector_query(self, batch_definition_list=None):\n if batch_definition_list is None:\n return []\n filter_function: Callable\n if self.custom_filter_function:\n filter_function = self.custom_filter_function\n else:\n filter_function = self.best_effort_batch_definition_matcher()\n selected_batch_definitions = list(\n filter(\n lambda batch_definition: filter_function(\n batch_identifiers=batch_definition.batch_identifiers,\n ),\n batch_definition_list,\n )\n )\n if self.index is None:\n selected_batch_definitions = selected_batch_definitions[: self.limit]\n else:\n if isinstance(self.index, int):\n selected_batch_definitions = [selected_batch_definitions[self.index]]\n else:\n selected_batch_definitions = list(\n itertools.chain.from_iterable(\n [selected_batch_definitions[self.index]]\n )\n )\n return selected_batch_definitions\n\n def best_effort_batch_definition_matcher(self) -> Callable:\n def match_batch_identifiers_to_batch_filter_params(\n batch_identifiers: dict,\n ) -> bool:\n if self.batch_filter_parameters:\n if not batch_identifiers:\n return False\n\n for batch_filter_parameter, val in self.batch_filter_parameters.items():\n if not (\n batch_filter_parameter in batch_identifiers\n and batch_identifiers[batch_filter_parameter] == val\n ):\n return False\n\n return True\n\n return match_batch_identifiers_to_batch_filter_params\n", "path": "great_expectations/datasource/data_connector/batch_filter.py"}]} | 2,690 | 516 |
gh_patches_debug_31205 | rasdani/github-patches | git_diff | jazzband__pip-tools-614 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip-compile gives misleading error message if no internet
If there is no internet access, `pip-compile` will fail with a misleading message. This is with pip-compile 1.6.1, pip 8.1.1, and Python 3.5.1.
##### Steps to replicate
1. Create a `requirements.in` file with a single project name.
2. Disable internet access (e.g. by disabling wi-fi).
3. Run `pip-compile --verbose requirements.in`.
##### Expected result
The command should fail with an error message relating to the fact that there is no internet connection.
##### Actual result
Instead, the output looks something like this:
```
Using indexes:
https://pypi.python.org/simple
ROUND 1
Current constraints:
MyPackage
Finding the best candidates:
Could not find a version that matches MyPackage
Tried: (no version found at all)
```
</issue>
<code>
[start of piptools/repositories/pypi.py]
1 # coding: utf-8
2 from __future__ import (absolute_import, division, print_function,
3 unicode_literals)
4
5 import hashlib
6 import os
7 from contextlib import contextmanager
8 from shutil import rmtree
9
10 from pip.download import is_file_url, url_to_path
11 from pip.index import PackageFinder
12 from pip.req.req_set import RequirementSet
13 from pip.wheel import Wheel
14 try:
15 from pip.utils.hashes import FAVORITE_HASH
16 except ImportError:
17 FAVORITE_HASH = 'sha256'
18
19 from ..cache import CACHE_DIR
20 from ..exceptions import NoCandidateFound
21 from ..utils import (fs_str, is_pinned_requirement, lookup_table,
22 make_install_requirement, pip_version_info)
23 from .base import BaseRepository
24
25 try:
26 from tempfile import TemporaryDirectory # added in 3.2
27 except ImportError:
28 from .._compat import TemporaryDirectory
29
30
31 class PyPIRepository(BaseRepository):
32 DEFAULT_INDEX_URL = 'https://pypi.python.org/simple'
33
34 """
35 The PyPIRepository will use the provided Finder instance to lookup
36 packages. Typically, it looks up packages on PyPI (the default implicit
37 config), but any other PyPI mirror can be used if index_urls is
38 changed/configured on the Finder.
39 """
40 def __init__(self, pip_options, session):
41 self.session = session
42
43 index_urls = [pip_options.index_url] + pip_options.extra_index_urls
44 if pip_options.no_index:
45 index_urls = []
46
47 self.finder = PackageFinder(
48 find_links=pip_options.find_links,
49 index_urls=index_urls,
50 trusted_hosts=pip_options.trusted_hosts,
51 allow_all_prereleases=pip_options.pre,
52 process_dependency_links=pip_options.process_dependency_links,
53 session=self.session,
54 )
55
56 # Caches
57 # stores project_name => InstallationCandidate mappings for all
58 # versions reported by PyPI, so we only have to ask once for each
59 # project
60 self._available_candidates_cache = {}
61
62 # stores InstallRequirement => list(InstallRequirement) mappings
63 # of all secondary dependencies for the given requirement, so we
64 # only have to go to disk once for each requirement
65 self._dependencies_cache = {}
66
67 # Setup file paths
68 self.freshen_build_caches()
69 self._download_dir = fs_str(os.path.join(CACHE_DIR, 'pkgs'))
70 self._wheel_download_dir = fs_str(os.path.join(CACHE_DIR, 'wheels'))
71
72 def freshen_build_caches(self):
73 """
74 Start with fresh build/source caches. Will remove any old build
75 caches from disk automatically.
76 """
77 self._build_dir = TemporaryDirectory(fs_str('build'))
78 self._source_dir = TemporaryDirectory(fs_str('source'))
79
80 @property
81 def build_dir(self):
82 return self._build_dir.name
83
84 @property
85 def source_dir(self):
86 return self._source_dir.name
87
88 def clear_caches(self):
89 rmtree(self._download_dir, ignore_errors=True)
90 rmtree(self._wheel_download_dir, ignore_errors=True)
91
92 def find_all_candidates(self, req_name):
93 if req_name not in self._available_candidates_cache:
94 # pip 8 changed the internal API, making this a public method
95 if pip_version_info >= (8, 0):
96 candidates = self.finder.find_all_candidates(req_name)
97 else:
98 candidates = self.finder._find_all_versions(req_name)
99 self._available_candidates_cache[req_name] = candidates
100 return self._available_candidates_cache[req_name]
101
102 def find_best_match(self, ireq, prereleases=None):
103 """
104 Returns a Version object that indicates the best match for the given
105 InstallRequirement according to the external repository.
106 """
107 if ireq.editable:
108 return ireq # return itself as the best match
109
110 all_candidates = self.find_all_candidates(ireq.name)
111 candidates_by_version = lookup_table(all_candidates, key=lambda c: c.version, unique=True)
112 matching_versions = ireq.specifier.filter((candidate.version for candidate in all_candidates),
113 prereleases=prereleases)
114
115 # Reuses pip's internal candidate sort key to sort
116 matching_candidates = [candidates_by_version[ver] for ver in matching_versions]
117 if not matching_candidates:
118 raise NoCandidateFound(ireq, all_candidates)
119 best_candidate = max(matching_candidates, key=self.finder._candidate_sort_key)
120
121 # Turn the candidate into a pinned InstallRequirement
122 return make_install_requirement(
123 best_candidate.project, best_candidate.version, ireq.extras, constraint=ireq.constraint
124 )
125
126 def get_dependencies(self, ireq):
127 """
128 Given a pinned or an editable InstallRequirement, returns a set of
129 dependencies (also InstallRequirements, but not necessarily pinned).
130 They indicate the secondary dependencies for the given requirement.
131 """
132 if not (ireq.editable or is_pinned_requirement(ireq)):
133 raise TypeError('Expected pinned or editable InstallRequirement, got {}'.format(ireq))
134
135 if ireq not in self._dependencies_cache:
136 if ireq.editable and (ireq.source_dir and os.path.exists(ireq.source_dir)):
137 # No download_dir for locally available editable requirements.
138 # If a download_dir is passed, pip will unnecessarely
139 # archive the entire source directory
140 download_dir = None
141 elif ireq.link and not ireq.link.is_artifact:
142 # No download_dir for VCS sources. This also works around pip
143 # using git-checkout-index, which gets rid of the .git dir.
144 download_dir = None
145 else:
146 download_dir = self._download_dir
147 if not os.path.isdir(download_dir):
148 os.makedirs(download_dir)
149 if not os.path.isdir(self._wheel_download_dir):
150 os.makedirs(self._wheel_download_dir)
151
152 reqset = RequirementSet(self.build_dir,
153 self.source_dir,
154 download_dir=download_dir,
155 wheel_download_dir=self._wheel_download_dir,
156 session=self.session)
157 self._dependencies_cache[ireq] = reqset._prepare_file(self.finder, ireq)
158 return set(self._dependencies_cache[ireq])
159
160 def get_hashes(self, ireq):
161 """
162 Given a pinned InstallRequire, returns a set of hashes that represent
163 all of the files for a given requirement. It is not acceptable for an
164 editable or unpinned requirement to be passed to this function.
165 """
166 if not is_pinned_requirement(ireq):
167 raise TypeError(
168 "Expected pinned requirement, not unpinned or editable, got {}".format(ireq))
169
170 # We need to get all of the candidates that match our current version
171 # pin, these will represent all of the files that could possibly
172 # satisfy this constraint.
173 all_candidates = self.find_all_candidates(ireq.name)
174 candidates_by_version = lookup_table(all_candidates, key=lambda c: c.version)
175 matching_versions = list(
176 ireq.specifier.filter((candidate.version for candidate in all_candidates)))
177 matching_candidates = candidates_by_version[matching_versions[0]]
178
179 return {
180 self._get_file_hash(candidate.location)
181 for candidate in matching_candidates
182 }
183
184 def _get_file_hash(self, location):
185 h = hashlib.new(FAVORITE_HASH)
186 with open_local_or_remote_file(location, self.session) as fp:
187 for chunk in iter(lambda: fp.read(8096), b""):
188 h.update(chunk)
189 return ":".join([FAVORITE_HASH, h.hexdigest()])
190
191 @contextmanager
192 def allow_all_wheels(self):
193 """
194 Monkey patches pip.Wheel to allow wheels from all platforms and Python versions.
195
196 This also saves the candidate cache and set a new one, or else the results from the
197 previous non-patched calls will interfere.
198 """
199 def _wheel_supported(self, tags=None):
200 # Ignore current platform. Support everything.
201 return True
202
203 def _wheel_support_index_min(self, tags=None):
204 # All wheels are equal priority for sorting.
205 return 0
206
207 original_wheel_supported = Wheel.supported
208 original_support_index_min = Wheel.support_index_min
209 original_cache = self._available_candidates_cache
210
211 Wheel.supported = _wheel_supported
212 Wheel.support_index_min = _wheel_support_index_min
213 self._available_candidates_cache = {}
214
215 try:
216 yield
217 finally:
218 Wheel.supported = original_wheel_supported
219 Wheel.support_index_min = original_support_index_min
220 self._available_candidates_cache = original_cache
221
222
223 @contextmanager
224 def open_local_or_remote_file(link, session):
225 """
226 Open local or remote file for reading.
227
228 :type link: pip.index.Link
229 :type session: requests.Session
230 :raises ValueError: If link points to a local directory.
231 :return: a context manager to the opened file-like object
232 """
233 url = link.url_without_fragment
234
235 if is_file_url(link):
236 # Local URL
237 local_path = url_to_path(url)
238 if os.path.isdir(local_path):
239 raise ValueError("Cannot open directory for read: {}".format(url))
240 else:
241 with open(local_path, 'rb') as local_file:
242 yield local_file
243 else:
244 # Remote URL
245 headers = {"Accept-Encoding": "identity"}
246 response = session.get(url, headers=headers, stream=True)
247 try:
248 yield response.raw
249 finally:
250 response.close()
251
[end of piptools/repositories/pypi.py]
[start of piptools/exceptions.py]
1 class PipToolsError(Exception):
2 pass
3
4
5 class NoCandidateFound(PipToolsError):
6 def __init__(self, ireq, candidates_tried):
7 self.ireq = ireq
8 self.candidates_tried = candidates_tried
9
10 def __str__(self):
11 sorted_versions = sorted(c.version for c in self.candidates_tried)
12 lines = [
13 'Could not find a version that matches {}'.format(self.ireq),
14 'Tried: {}'.format(', '.join(str(version) for version in sorted_versions) or '(no version found at all)')
15 ]
16 return '\n'.join(lines)
17
18
19 class UnsupportedConstraint(PipToolsError):
20 def __init__(self, message, constraint):
21 super(UnsupportedConstraint, self).__init__(message)
22 self.constraint = constraint
23
24 def __str__(self):
25 message = super(UnsupportedConstraint, self).__str__()
26 return '{} (constraint was: {})'.format(message, str(self.constraint))
27
28
29 class IncompatibleRequirements(PipToolsError):
30 def __init__(self, ireq_a, ireq_b):
31 self.ireq_a = ireq_a
32 self.ireq_b = ireq_b
33
34 def __str__(self):
35 message = "Incompatible requirements found: {} and {}"
36 return message.format(self.ireq_a, self.ireq_b)
37
[end of piptools/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/piptools/exceptions.py b/piptools/exceptions.py
--- a/piptools/exceptions.py
+++ b/piptools/exceptions.py
@@ -3,9 +3,10 @@
class NoCandidateFound(PipToolsError):
- def __init__(self, ireq, candidates_tried):
+ def __init__(self, ireq, candidates_tried, index_urls):
self.ireq = ireq
self.candidates_tried = candidates_tried
+ self.index_urls = index_urls
def __str__(self):
sorted_versions = sorted(c.version for c in self.candidates_tried)
@@ -13,6 +14,12 @@
'Could not find a version that matches {}'.format(self.ireq),
'Tried: {}'.format(', '.join(str(version) for version in sorted_versions) or '(no version found at all)')
]
+ if sorted_versions:
+ lines.append('There are incompatible versions in the resolved dependencies.')
+ else:
+ lines.append('{} {} reachable?'.format(
+ 'Were' if len(self.index_urls) > 1 else 'Was', ' or '.join(self.index_urls))
+ )
return '\n'.join(lines)
diff --git a/piptools/repositories/pypi.py b/piptools/repositories/pypi.py
--- a/piptools/repositories/pypi.py
+++ b/piptools/repositories/pypi.py
@@ -115,7 +115,7 @@
# Reuses pip's internal candidate sort key to sort
matching_candidates = [candidates_by_version[ver] for ver in matching_versions]
if not matching_candidates:
- raise NoCandidateFound(ireq, all_candidates)
+ raise NoCandidateFound(ireq, all_candidates, self.finder.index_urls)
best_candidate = max(matching_candidates, key=self.finder._candidate_sort_key)
# Turn the candidate into a pinned InstallRequirement
| {"golden_diff": "diff --git a/piptools/exceptions.py b/piptools/exceptions.py\n--- a/piptools/exceptions.py\n+++ b/piptools/exceptions.py\n@@ -3,9 +3,10 @@\n \n \n class NoCandidateFound(PipToolsError):\n- def __init__(self, ireq, candidates_tried):\n+ def __init__(self, ireq, candidates_tried, index_urls):\n self.ireq = ireq\n self.candidates_tried = candidates_tried\n+ self.index_urls = index_urls\n \n def __str__(self):\n sorted_versions = sorted(c.version for c in self.candidates_tried)\n@@ -13,6 +14,12 @@\n 'Could not find a version that matches {}'.format(self.ireq),\n 'Tried: {}'.format(', '.join(str(version) for version in sorted_versions) or '(no version found at all)')\n ]\n+ if sorted_versions:\n+ lines.append('There are incompatible versions in the resolved dependencies.')\n+ else:\n+ lines.append('{} {} reachable?'.format(\n+ 'Were' if len(self.index_urls) > 1 else 'Was', ' or '.join(self.index_urls))\n+ )\n return '\\n'.join(lines)\n \n \ndiff --git a/piptools/repositories/pypi.py b/piptools/repositories/pypi.py\n--- a/piptools/repositories/pypi.py\n+++ b/piptools/repositories/pypi.py\n@@ -115,7 +115,7 @@\n # Reuses pip's internal candidate sort key to sort\n matching_candidates = [candidates_by_version[ver] for ver in matching_versions]\n if not matching_candidates:\n- raise NoCandidateFound(ireq, all_candidates)\n+ raise NoCandidateFound(ireq, all_candidates, self.finder.index_urls)\n best_candidate = max(matching_candidates, key=self.finder._candidate_sort_key)\n \n # Turn the candidate into a pinned InstallRequirement\n", "issue": "pip-compile gives misleading error message if no internet\nIf there is no internet access, `pip-compile` will fail with a misleading message. This is with pip-compile 1.6.1, pip 8.1.1, and Python 3.5.1.\n##### Steps to replicate\n1. Create a `requirements.in` file with a single project name.\n2. Disable internet access (e.g. by disabling wi-fi).\n3. Run `pip-compile --verbose requirements.in`.\n##### Expected result\n\nThe command should fail with an error message relating to the fact that there is no internet connection.\n##### Actual result\n\nInstead, the output looks something like this:\n\n```\nUsing indexes:\n https://pypi.python.org/simple\n\n ROUND 1 \nCurrent constraints:\n MyPackage\n\nFinding the best candidates:\nCould not find a version that matches MyPackage\nTried: (no version found at all)\n```\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\n\nimport hashlib\nimport os\nfrom contextlib import contextmanager\nfrom shutil import rmtree\n\nfrom pip.download import is_file_url, url_to_path\nfrom pip.index import PackageFinder\nfrom pip.req.req_set import RequirementSet\nfrom pip.wheel import Wheel\ntry:\n from pip.utils.hashes import FAVORITE_HASH\nexcept ImportError:\n FAVORITE_HASH = 'sha256'\n\nfrom ..cache import CACHE_DIR\nfrom ..exceptions import NoCandidateFound\nfrom ..utils import (fs_str, is_pinned_requirement, lookup_table,\n make_install_requirement, pip_version_info)\nfrom .base import BaseRepository\n\ntry:\n from tempfile import TemporaryDirectory # added in 3.2\nexcept ImportError:\n from .._compat import TemporaryDirectory\n\n\nclass PyPIRepository(BaseRepository):\n DEFAULT_INDEX_URL = 'https://pypi.python.org/simple'\n\n \"\"\"\n The PyPIRepository will use the provided Finder instance to lookup\n packages. 
Typically, it looks up packages on PyPI (the default implicit\n config), but any other PyPI mirror can be used if index_urls is\n changed/configured on the Finder.\n \"\"\"\n def __init__(self, pip_options, session):\n self.session = session\n\n index_urls = [pip_options.index_url] + pip_options.extra_index_urls\n if pip_options.no_index:\n index_urls = []\n\n self.finder = PackageFinder(\n find_links=pip_options.find_links,\n index_urls=index_urls,\n trusted_hosts=pip_options.trusted_hosts,\n allow_all_prereleases=pip_options.pre,\n process_dependency_links=pip_options.process_dependency_links,\n session=self.session,\n )\n\n # Caches\n # stores project_name => InstallationCandidate mappings for all\n # versions reported by PyPI, so we only have to ask once for each\n # project\n self._available_candidates_cache = {}\n\n # stores InstallRequirement => list(InstallRequirement) mappings\n # of all secondary dependencies for the given requirement, so we\n # only have to go to disk once for each requirement\n self._dependencies_cache = {}\n\n # Setup file paths\n self.freshen_build_caches()\n self._download_dir = fs_str(os.path.join(CACHE_DIR, 'pkgs'))\n self._wheel_download_dir = fs_str(os.path.join(CACHE_DIR, 'wheels'))\n\n def freshen_build_caches(self):\n \"\"\"\n Start with fresh build/source caches. Will remove any old build\n caches from disk automatically.\n \"\"\"\n self._build_dir = TemporaryDirectory(fs_str('build'))\n self._source_dir = TemporaryDirectory(fs_str('source'))\n\n @property\n def build_dir(self):\n return self._build_dir.name\n\n @property\n def source_dir(self):\n return self._source_dir.name\n\n def clear_caches(self):\n rmtree(self._download_dir, ignore_errors=True)\n rmtree(self._wheel_download_dir, ignore_errors=True)\n\n def find_all_candidates(self, req_name):\n if req_name not in self._available_candidates_cache:\n # pip 8 changed the internal API, making this a public method\n if pip_version_info >= (8, 0):\n candidates = self.finder.find_all_candidates(req_name)\n else:\n candidates = self.finder._find_all_versions(req_name)\n self._available_candidates_cache[req_name] = candidates\n return self._available_candidates_cache[req_name]\n\n def find_best_match(self, ireq, prereleases=None):\n \"\"\"\n Returns a Version object that indicates the best match for the given\n InstallRequirement according to the external repository.\n \"\"\"\n if ireq.editable:\n return ireq # return itself as the best match\n\n all_candidates = self.find_all_candidates(ireq.name)\n candidates_by_version = lookup_table(all_candidates, key=lambda c: c.version, unique=True)\n matching_versions = ireq.specifier.filter((candidate.version for candidate in all_candidates),\n prereleases=prereleases)\n\n # Reuses pip's internal candidate sort key to sort\n matching_candidates = [candidates_by_version[ver] for ver in matching_versions]\n if not matching_candidates:\n raise NoCandidateFound(ireq, all_candidates)\n best_candidate = max(matching_candidates, key=self.finder._candidate_sort_key)\n\n # Turn the candidate into a pinned InstallRequirement\n return make_install_requirement(\n best_candidate.project, best_candidate.version, ireq.extras, constraint=ireq.constraint\n )\n\n def get_dependencies(self, ireq):\n \"\"\"\n Given a pinned or an editable InstallRequirement, returns a set of\n dependencies (also InstallRequirements, but not necessarily pinned).\n They indicate the secondary dependencies for the given requirement.\n \"\"\"\n if not (ireq.editable or is_pinned_requirement(ireq)):\n 
raise TypeError('Expected pinned or editable InstallRequirement, got {}'.format(ireq))\n\n if ireq not in self._dependencies_cache:\n if ireq.editable and (ireq.source_dir and os.path.exists(ireq.source_dir)):\n # No download_dir for locally available editable requirements.\n # If a download_dir is passed, pip will unnecessarely\n # archive the entire source directory\n download_dir = None\n elif ireq.link and not ireq.link.is_artifact:\n # No download_dir for VCS sources. This also works around pip\n # using git-checkout-index, which gets rid of the .git dir.\n download_dir = None\n else:\n download_dir = self._download_dir\n if not os.path.isdir(download_dir):\n os.makedirs(download_dir)\n if not os.path.isdir(self._wheel_download_dir):\n os.makedirs(self._wheel_download_dir)\n\n reqset = RequirementSet(self.build_dir,\n self.source_dir,\n download_dir=download_dir,\n wheel_download_dir=self._wheel_download_dir,\n session=self.session)\n self._dependencies_cache[ireq] = reqset._prepare_file(self.finder, ireq)\n return set(self._dependencies_cache[ireq])\n\n def get_hashes(self, ireq):\n \"\"\"\n Given a pinned InstallRequire, returns a set of hashes that represent\n all of the files for a given requirement. It is not acceptable for an\n editable or unpinned requirement to be passed to this function.\n \"\"\"\n if not is_pinned_requirement(ireq):\n raise TypeError(\n \"Expected pinned requirement, not unpinned or editable, got {}\".format(ireq))\n\n # We need to get all of the candidates that match our current version\n # pin, these will represent all of the files that could possibly\n # satisfy this constraint.\n all_candidates = self.find_all_candidates(ireq.name)\n candidates_by_version = lookup_table(all_candidates, key=lambda c: c.version)\n matching_versions = list(\n ireq.specifier.filter((candidate.version for candidate in all_candidates)))\n matching_candidates = candidates_by_version[matching_versions[0]]\n\n return {\n self._get_file_hash(candidate.location)\n for candidate in matching_candidates\n }\n\n def _get_file_hash(self, location):\n h = hashlib.new(FAVORITE_HASH)\n with open_local_or_remote_file(location, self.session) as fp:\n for chunk in iter(lambda: fp.read(8096), b\"\"):\n h.update(chunk)\n return \":\".join([FAVORITE_HASH, h.hexdigest()])\n\n @contextmanager\n def allow_all_wheels(self):\n \"\"\"\n Monkey patches pip.Wheel to allow wheels from all platforms and Python versions.\n\n This also saves the candidate cache and set a new one, or else the results from the\n previous non-patched calls will interfere.\n \"\"\"\n def _wheel_supported(self, tags=None):\n # Ignore current platform. 
Support everything.\n return True\n\n def _wheel_support_index_min(self, tags=None):\n # All wheels are equal priority for sorting.\n return 0\n\n original_wheel_supported = Wheel.supported\n original_support_index_min = Wheel.support_index_min\n original_cache = self._available_candidates_cache\n\n Wheel.supported = _wheel_supported\n Wheel.support_index_min = _wheel_support_index_min\n self._available_candidates_cache = {}\n\n try:\n yield\n finally:\n Wheel.supported = original_wheel_supported\n Wheel.support_index_min = original_support_index_min\n self._available_candidates_cache = original_cache\n\n\n@contextmanager\ndef open_local_or_remote_file(link, session):\n \"\"\"\n Open local or remote file for reading.\n\n :type link: pip.index.Link\n :type session: requests.Session\n :raises ValueError: If link points to a local directory.\n :return: a context manager to the opened file-like object\n \"\"\"\n url = link.url_without_fragment\n\n if is_file_url(link):\n # Local URL\n local_path = url_to_path(url)\n if os.path.isdir(local_path):\n raise ValueError(\"Cannot open directory for read: {}\".format(url))\n else:\n with open(local_path, 'rb') as local_file:\n yield local_file\n else:\n # Remote URL\n headers = {\"Accept-Encoding\": \"identity\"}\n response = session.get(url, headers=headers, stream=True)\n try:\n yield response.raw\n finally:\n response.close()\n", "path": "piptools/repositories/pypi.py"}, {"content": "class PipToolsError(Exception):\n pass\n\n\nclass NoCandidateFound(PipToolsError):\n def __init__(self, ireq, candidates_tried):\n self.ireq = ireq\n self.candidates_tried = candidates_tried\n\n def __str__(self):\n sorted_versions = sorted(c.version for c in self.candidates_tried)\n lines = [\n 'Could not find a version that matches {}'.format(self.ireq),\n 'Tried: {}'.format(', '.join(str(version) for version in sorted_versions) or '(no version found at all)')\n ]\n return '\\n'.join(lines)\n\n\nclass UnsupportedConstraint(PipToolsError):\n def __init__(self, message, constraint):\n super(UnsupportedConstraint, self).__init__(message)\n self.constraint = constraint\n\n def __str__(self):\n message = super(UnsupportedConstraint, self).__str__()\n return '{} (constraint was: {})'.format(message, str(self.constraint))\n\n\nclass IncompatibleRequirements(PipToolsError):\n def __init__(self, ireq_a, ireq_b):\n self.ireq_a = ireq_a\n self.ireq_b = ireq_b\n\n def __str__(self):\n message = \"Incompatible requirements found: {} and {}\"\n return message.format(self.ireq_a, self.ireq_b)\n", "path": "piptools/exceptions.py"}]} | 3,803 | 429 |
gh_patches_debug_11620 | rasdani/github-patches | git_diff | spack__spack-37500 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect checksum for CRTM fix files @ version 2.4.0_emc
### Steps to reproduce
```console
% cat spack/var/spack/repos/builtin/packages/crtm-fix/package.py
[...]
class CrtmFix(Package):
"""CRTM coefficient files"""
homepage = "https://github.com/NOAA-EMC/crtm"
url = "ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.3.0_emc.tgz"
maintainers = [
"BenjaminTJohnson",
"edwardhartnett",
"AlexanderRichert-NOAA",
"Hang-Lei-NOAA",
"climbfuji",
]
version("2.4.0_emc", sha256="88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a")
version("2.3.0_emc", sha256="1452af2d1d11d57ef3c57b6b861646541e7042a9b0f3c230f9a82854d7e90924")
[...]
% spack checksum crtm-fix@2.4.0_emc
==> Found 1 version of crtm-fix:
2.4.0_emc ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.4.0_emc.tgz
==> Fetching ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.4.0_emc.tgz
version("2.4.0_emc", sha256="d0f1b2ae2905457f4c3731746892aaa8f6b84ee0691f6228dfbe48917df1e85e")
```
### Error message
```console
==> Installing crtm-fix-2.4.0_emc-ftl3fxdnhmvw43lgf74lunrcms7dxmqf
==> No binary for crtm-fix-2.4.0_emc-ftl3fxdnhmvw43lgf74lunrcms7dxmqf found: installing from source
==> Fetching ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.4.0_emc.tgz
==> Error: ChecksumError: sha256 checksum failed for /path/to/cache/build_stage/spack-stage-crtm-fix-2.4.0_emc-ftl3fxdnhmvw43lgf74lunrcms7dxmqf/fix_REL-2.4.0_emc.tgz
Expected 88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a but got d0f1b2ae2905457f4c3731746892aaa8f6b84ee0691f6228dfbe48917df1e85e
```
### Information on your system
% spack debug report
* **Spack:** 0.20.0.dev0
* **Python:** 3.9.15
* **Platform:** linux-rhel7-broadwell
* **Concretizer:** clingo
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have searched the issues of this repo and believe this is not a duplicate
- [X] I have run the failing commands in debug mode and reported the output
</issue>
<code>
[start of var/spack/repos/builtin/packages/crtm-fix/package.py]
1 # Copyright 2013-2022 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import os
7
8 from spack.package import *
9
10
11 class CrtmFix(Package):
12 """CRTM coefficient files"""
13
14 homepage = "https://github.com/NOAA-EMC/crtm"
15 url = "ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.3.0_emc.tgz"
16
17 maintainers(
18 "BenjaminTJohnson", "edwardhartnett", "AlexanderRichert-NOAA", "Hang-Lei-NOAA", "climbfuji"
19 )
20
21 version("2.4.0_emc", sha256="88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a")
22 version("2.3.0_emc", sha256="1452af2d1d11d57ef3c57b6b861646541e7042a9b0f3c230f9a82854d7e90924")
23
24 variant("big_endian", default=True, description="Install big_endian fix files")
25 variant("little_endian", default=False, description="Install little endian fix files")
26 variant("netcdf", default=True, description="Install netcdf fix files")
27
28 conflicts("+big_endian", when="+little_endian", msg="big_endian and little_endian conflict")
29
30 def url_for_version(self, version):
31 url = "ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-{}.tgz"
32 return url.format(version)
33
34 def install(self, spec, prefix):
35 spec = self.spec
36 mkdir(self.prefix.fix)
37
38 endian_dirs = []
39 if "+big_endian" in spec:
40 endian_dirs.append("Big_Endian")
41 elif "+little_endian" in spec:
42 endian_dirs.append("Little_Endian")
43
44 if "+netcdf" in spec:
45 endian_dirs.extend(["netcdf", "netCDF"])
46
47 fix_files = []
48 for d in endian_dirs:
49 fix_files = fix_files + find(".", "*/{}/*".format(d))
50
51 # Big_Endian amsua_metop-c.SpcCoeff.bin is incorrect
52 # Little_Endian amsua_metop-c_v2.SpcCoeff.bin is what it's supposed to be.
53 # Remove the incorrect file, and install it as noACC,, then install
54 # correct file under new name.
55 if "+big_endian" in spec and spec.version == Version("2.4.0_emc"):
56 remove_path = join_path(
57 os.getcwd(), "fix", "SpcCoeff", "Big_Endian", "amsua_metop-c.SpcCoeff.bin"
58 )
59 fix_files.remove(remove_path)
60
61 # This file is incorrect, install it as a different name.
62 install(
63 join_path("fix", "SpcCoeff", "Big_Endian", "amsua_metop-c.SpcCoeff.bin"),
64 join_path(self.prefix.fix, "amsua_metop-c.SpcCoeff.noACC.bin"),
65 )
66
67 # This "Little_Endian" file is actually the correct one.
68 install(
69 join_path("fix", "SpcCoeff", "Little_Endian", "amsua_metop-c_v2.SpcCoeff.bin"),
70 join_path(self.prefix.fix, "amsua_metop-c.SpcCoeff.bin"),
71 )
72
73 for f in fix_files:
74 install(f, self.prefix.fix)
75
76 def setup_run_environment(self, env):
77 env.set("CRTM_FIX", self.prefix.fix)
78
[end of var/spack/repos/builtin/packages/crtm-fix/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/crtm-fix/package.py b/var/spack/repos/builtin/packages/crtm-fix/package.py
--- a/var/spack/repos/builtin/packages/crtm-fix/package.py
+++ b/var/spack/repos/builtin/packages/crtm-fix/package.py
@@ -18,7 +18,7 @@
"BenjaminTJohnson", "edwardhartnett", "AlexanderRichert-NOAA", "Hang-Lei-NOAA", "climbfuji"
)
- version("2.4.0_emc", sha256="88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a")
+ version("2.4.0_emc", sha256="d0f1b2ae2905457f4c3731746892aaa8f6b84ee0691f6228dfbe48917df1e85e")
version("2.3.0_emc", sha256="1452af2d1d11d57ef3c57b6b861646541e7042a9b0f3c230f9a82854d7e90924")
variant("big_endian", default=True, description="Install big_endian fix files")
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/crtm-fix/package.py b/var/spack/repos/builtin/packages/crtm-fix/package.py\n--- a/var/spack/repos/builtin/packages/crtm-fix/package.py\n+++ b/var/spack/repos/builtin/packages/crtm-fix/package.py\n@@ -18,7 +18,7 @@\n \"BenjaminTJohnson\", \"edwardhartnett\", \"AlexanderRichert-NOAA\", \"Hang-Lei-NOAA\", \"climbfuji\"\n )\n \n- version(\"2.4.0_emc\", sha256=\"88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a\")\n+ version(\"2.4.0_emc\", sha256=\"d0f1b2ae2905457f4c3731746892aaa8f6b84ee0691f6228dfbe48917df1e85e\")\n version(\"2.3.0_emc\", sha256=\"1452af2d1d11d57ef3c57b6b861646541e7042a9b0f3c230f9a82854d7e90924\")\n \n variant(\"big_endian\", default=True, description=\"Install big_endian fix files\")\n", "issue": "Incorrect checksum for CRTM fix files @ version 2.4.0_emc\n### Steps to reproduce\n\n```console\r\n% cat spack/var/spack/repos/builtin/packages/crtm-fix/package.py \r\n[...]\r\nclass CrtmFix(Package):\r\n \"\"\"CRTM coefficient files\"\"\"\r\n\r\n homepage = \"https://github.com/NOAA-EMC/crtm\"\r\n url = \"ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.3.0_emc.tgz\"\r\n\r\n maintainers = [\r\n \"BenjaminTJohnson\",\r\n \"edwardhartnett\",\r\n \"AlexanderRichert-NOAA\",\r\n \"Hang-Lei-NOAA\",\r\n \"climbfuji\",\r\n ]\r\n\r\n version(\"2.4.0_emc\", sha256=\"88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a\")\r\n version(\"2.3.0_emc\", sha256=\"1452af2d1d11d57ef3c57b6b861646541e7042a9b0f3c230f9a82854d7e90924\")\r\n[...]\r\n\r\n% spack checksum [email protected]_emc\r\n==> Found 1 version of crtm-fix:\r\n \r\n 2.4.0_emc ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.4.0_emc.tgz\r\n\r\n==> Fetching ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.4.0_emc.tgz\r\n\r\n version(\"2.4.0_emc\", sha256=\"d0f1b2ae2905457f4c3731746892aaa8f6b84ee0691f6228dfbe48917df1e85e\")\r\n```\n\n### Error message\n\n```console\r\n==> Installing crtm-fix-2.4.0_emc-ftl3fxdnhmvw43lgf74lunrcms7dxmqf\r\n==> No binary for crtm-fix-2.4.0_emc-ftl3fxdnhmvw43lgf74lunrcms7dxmqf found: installing from source\r\n==> Fetching ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.4.0_emc.tgz\r\n==> Error: ChecksumError: sha256 checksum failed for /path/to/cache/build_stage/spack-stage-crtm-fix-2.4.0_emc-ftl3fxdnhmvw43lgf74lunrcms7dxmqf/fix_REL-2.4.0_emc.tgz\r\n Expected 88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a but got d0f1b2ae2905457f4c3731746892aaa8f6b84ee0691f6228dfbe48917df1e85e\r\n```\n\n### Information on your system\n\n% spack debug report\r\n* **Spack:** 0.20.0.dev0\r\n* **Python:** 3.9.15\r\n* **Platform:** linux-rhel7-broadwell\r\n* **Concretizer:** clingo\n\n### General information\n\n- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform\n- [X] I have searched the issues of this repo and believe this is not a duplicate\n- [X] I have run the failing commands in debug mode and reported the output\n", "before_files": [{"content": "# Copyright 2013-2022 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport os\n\nfrom spack.package import *\n\n\nclass CrtmFix(Package):\n \"\"\"CRTM coefficient files\"\"\"\n\n homepage = \"https://github.com/NOAA-EMC/crtm\"\n url = \"ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-2.3.0_emc.tgz\"\n\n maintainers(\n \"BenjaminTJohnson\", \"edwardhartnett\", \"AlexanderRichert-NOAA\", \"Hang-Lei-NOAA\", \"climbfuji\"\n )\n\n version(\"2.4.0_emc\", sha256=\"88d659ae5bc4434f7fafa232ff65b4c48442d2d1a25f8fc96078094fa572ac1a\")\n version(\"2.3.0_emc\", sha256=\"1452af2d1d11d57ef3c57b6b861646541e7042a9b0f3c230f9a82854d7e90924\")\n\n variant(\"big_endian\", default=True, description=\"Install big_endian fix files\")\n variant(\"little_endian\", default=False, description=\"Install little endian fix files\")\n variant(\"netcdf\", default=True, description=\"Install netcdf fix files\")\n\n conflicts(\"+big_endian\", when=\"+little_endian\", msg=\"big_endian and little_endian conflict\")\n\n def url_for_version(self, version):\n url = \"ftp://ftp.ssec.wisc.edu/pub/s4/CRTM/fix_REL-{}.tgz\"\n return url.format(version)\n\n def install(self, spec, prefix):\n spec = self.spec\n mkdir(self.prefix.fix)\n\n endian_dirs = []\n if \"+big_endian\" in spec:\n endian_dirs.append(\"Big_Endian\")\n elif \"+little_endian\" in spec:\n endian_dirs.append(\"Little_Endian\")\n\n if \"+netcdf\" in spec:\n endian_dirs.extend([\"netcdf\", \"netCDF\"])\n\n fix_files = []\n for d in endian_dirs:\n fix_files = fix_files + find(\".\", \"*/{}/*\".format(d))\n\n # Big_Endian amsua_metop-c.SpcCoeff.bin is incorrect\n # Little_Endian amsua_metop-c_v2.SpcCoeff.bin is what it's supposed to be.\n # Remove the incorrect file, and install it as noACC,, then install\n # correct file under new name.\n if \"+big_endian\" in spec and spec.version == Version(\"2.4.0_emc\"):\n remove_path = join_path(\n os.getcwd(), \"fix\", \"SpcCoeff\", \"Big_Endian\", \"amsua_metop-c.SpcCoeff.bin\"\n )\n fix_files.remove(remove_path)\n\n # This file is incorrect, install it as a different name.\n install(\n join_path(\"fix\", \"SpcCoeff\", \"Big_Endian\", \"amsua_metop-c.SpcCoeff.bin\"),\n join_path(self.prefix.fix, \"amsua_metop-c.SpcCoeff.noACC.bin\"),\n )\n\n # This \"Little_Endian\" file is actually the correct one.\n install(\n join_path(\"fix\", \"SpcCoeff\", \"Little_Endian\", \"amsua_metop-c_v2.SpcCoeff.bin\"),\n join_path(self.prefix.fix, \"amsua_metop-c.SpcCoeff.bin\"),\n )\n\n for f in fix_files:\n install(f, self.prefix.fix)\n\n def setup_run_environment(self, env):\n env.set(\"CRTM_FIX\", self.prefix.fix)\n", "path": "var/spack/repos/builtin/packages/crtm-fix/package.py"}]} | 2,487 | 361 |
gh_patches_debug_11914 | rasdani/github-patches | git_diff | pytorch__ignite-2984 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix warning in fast_neural_style example
Here is another good first issue to improve the ignite project. Currently, we have a warning on this line: https://github.com/pytorch/ignite/blob/master/examples/fast_neural_style/vgg.py#L10 (fast neural style example)
/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
</issue>
<code>
[start of examples/fast_neural_style/vgg.py]
1 from collections import namedtuple
2
3 import torch
4 from torchvision import models
5
6
7 class Vgg16(torch.nn.Module):
8 def __init__(self, requires_grad=False):
9 super(Vgg16, self).__init__()
10 vgg_pretrained_features = models.vgg16(pretrained=True).features
11 self.slice1 = torch.nn.Sequential()
12 self.slice2 = torch.nn.Sequential()
13 self.slice3 = torch.nn.Sequential()
14 self.slice4 = torch.nn.Sequential()
15 for x in range(4):
16 self.slice1.add_module(str(x), vgg_pretrained_features[x])
17 for x in range(4, 9):
18 self.slice2.add_module(str(x), vgg_pretrained_features[x])
19 for x in range(9, 16):
20 self.slice3.add_module(str(x), vgg_pretrained_features[x])
21 for x in range(16, 23):
22 self.slice4.add_module(str(x), vgg_pretrained_features[x])
23 if not requires_grad:
24 for param in self.parameters():
25 param.requires_grad = False
26
27 def forward(self, X):
28 h = self.slice1(X)
29 h_relu1_2 = h
30 h = self.slice2(h)
31 h_relu2_2 = h
32 h = self.slice3(h)
33 h_relu3_3 = h
34 h = self.slice4(h)
35 h_relu4_3 = h
36 vgg_outputs = namedtuple("VggOutputs", ["relu1_2", "relu2_2", "relu3_3", "relu4_3"])
37 out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3)
38 return out
39
[end of examples/fast_neural_style/vgg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/fast_neural_style/vgg.py b/examples/fast_neural_style/vgg.py
--- a/examples/fast_neural_style/vgg.py
+++ b/examples/fast_neural_style/vgg.py
@@ -2,12 +2,13 @@
import torch
from torchvision import models
+from torchvision.models.vgg import VGG16_Weights
class Vgg16(torch.nn.Module):
def __init__(self, requires_grad=False):
super(Vgg16, self).__init__()
- vgg_pretrained_features = models.vgg16(pretrained=True).features
+ vgg_pretrained_features = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
self.slice1 = torch.nn.Sequential()
self.slice2 = torch.nn.Sequential()
self.slice3 = torch.nn.Sequential()
| {"golden_diff": "diff --git a/examples/fast_neural_style/vgg.py b/examples/fast_neural_style/vgg.py\n--- a/examples/fast_neural_style/vgg.py\n+++ b/examples/fast_neural_style/vgg.py\n@@ -2,12 +2,13 @@\n \n import torch\n from torchvision import models\n+from torchvision.models.vgg import VGG16_Weights\n \n \n class Vgg16(torch.nn.Module):\n def __init__(self, requires_grad=False):\n super(Vgg16, self).__init__()\n- vgg_pretrained_features = models.vgg16(pretrained=True).features\n+ vgg_pretrained_features = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features\n self.slice1 = torch.nn.Sequential()\n self.slice2 = torch.nn.Sequential()\n self.slice3 = torch.nn.Sequential()\n", "issue": "Fix warning in fast_neural_style example\nHere is another good first issue to improve the ignite project. Currently, we have a warning on this line: https://github.com/pytorch/ignite/blob/master/examples/fast_neural_style/vgg.py#L10 (fast neural style example)\r\n /opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.\r\n warnings.warn(\r\n/opt/hostedtoolcache/Python/3.9.17/x64/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.\n", "before_files": [{"content": "from collections import namedtuple\n\nimport torch\nfrom torchvision import models\n\n\nclass Vgg16(torch.nn.Module):\n def __init__(self, requires_grad=False):\n super(Vgg16, self).__init__()\n vgg_pretrained_features = models.vgg16(pretrained=True).features\n self.slice1 = torch.nn.Sequential()\n self.slice2 = torch.nn.Sequential()\n self.slice3 = torch.nn.Sequential()\n self.slice4 = torch.nn.Sequential()\n for x in range(4):\n self.slice1.add_module(str(x), vgg_pretrained_features[x])\n for x in range(4, 9):\n self.slice2.add_module(str(x), vgg_pretrained_features[x])\n for x in range(9, 16):\n self.slice3.add_module(str(x), vgg_pretrained_features[x])\n for x in range(16, 23):\n self.slice4.add_module(str(x), vgg_pretrained_features[x])\n if not requires_grad:\n for param in self.parameters():\n param.requires_grad = False\n\n def forward(self, X):\n h = self.slice1(X)\n h_relu1_2 = h\n h = self.slice2(h)\n h_relu2_2 = h\n h = self.slice3(h)\n h_relu3_3 = h\n h = self.slice4(h)\n h_relu4_3 = h\n vgg_outputs = namedtuple(\"VggOutputs\", [\"relu1_2\", \"relu2_2\", \"relu3_3\", \"relu4_3\"])\n out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3)\n return out\n", "path": "examples/fast_neural_style/vgg.py"}]} | 1,234 | 189 |
gh_patches_debug_24196 | rasdani/github-patches | git_diff | NVIDIA__TransformerEngine-631 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Does `transformer_engine.pytorch.RMSNorm` support `zero_centered_gamma`?
The [doc](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/pytorch.html#transformer_engine.pytorch.RMSNorm) explains the option `zero_centered_gamma` clearly, but I found the implementation at [normalization.cu:L221](https://github.com/NVIDIA/TransformerEngine/blob/82555b3fd1ddece3a886fac3c7a4aa8d6f8c1955/transformer_engine/pytorch/csrc/extensions/normalization.cu#L221) assert `zero_centered_gamma == false`.
Is there any limitation?
</issue>
<code>
[start of transformer_engine/pytorch/module/rmsnorm.py]
1 # Copyright (c) 2022-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # See LICENSE for license information.
4
5 """RMSNorm API"""
6 import os
7 import warnings
8 from typing import Union, Tuple, Optional
9
10 import torch
11 from torch.nn.parameter import Parameter
12 from torch.nn import init
13
14 from .base import TransformerEngineBaseModule
15 from .. import cpp_extensions as tex
16 from ..jit import no_torch_dynamo
17 from ..utils import cast_if_needed
18
19
20 __all__ = ["RMSNorm"]
21
22
23 class _RMSNorm(torch.autograd.Function):
24 """functional RMSNorm"""
25
26 @staticmethod
27 def forward(
28 ctx,
29 inp: torch.Tensor,
30 rmsnorm_weight: torch.Tensor,
31 eps: float,
32 fwd_rmsnorm_sm_margin: int,
33 bwd_rmsnorm_sm_margin: int,
34 zero_centered_gamma: bool,
35 is_grad_enabled: bool,
36 activation_dtype: torch.dtype,
37 ) -> torch.Tensor:
38 # Make sure input dimensions are compatible
39 in_features = rmsnorm_weight.numel()
40 assert inp.is_cuda, "TransformerEngine needs CUDA."
41 assert inp.shape[-1] == in_features, "RMSNorm not possible"
42 inputmat = inp.view((-1, in_features))
43
44 # Cast for native AMP
45 inputmat = cast_if_needed(inputmat, activation_dtype)
46 rmsnorm_weight = cast_if_needed(rmsnorm_weight, activation_dtype)
47
48 if is_grad_enabled:
49 rmsnorm_out, rsigma = tex.rmsnorm_fwd(inputmat, rmsnorm_weight,
50 eps, fwd_rmsnorm_sm_margin,
51 zero_centered_gamma)
52 ctx.save_for_backward(inputmat, rmsnorm_weight, rsigma)
53 ctx.inp_shape = inp.shape
54 ctx.bwd_rmsnorm_sm_margin = bwd_rmsnorm_sm_margin
55 ctx.zero_centered_gamma = zero_centered_gamma
56 else:
57 rmsnorm_out = tex.rmsnorm_fwd_inf(inputmat, rmsnorm_weight,
58 eps,
59 zero_centered_gamma)
60 return rmsnorm_out.view_as(inp)
61
62 @staticmethod
63 def backward(
64 ctx, grad_output: torch.Tensor
65 ) -> Tuple[Union[torch.Tensor, None], ...]:
66 inputmat, rmsnorm_weight, rsigma = ctx.saved_tensors
67 grad_output = grad_output.contiguous()
68 d_rmsnorm_out = grad_output.view(inputmat.shape)
69 dxmat, dgamma = tex.rmsnorm_bwd(
70 d_rmsnorm_out, inputmat, rsigma, rmsnorm_weight,
71 ctx.bwd_rmsnorm_sm_margin, ctx.zero_centered_gamma
72 )
73 return (
74 dxmat.view(ctx.inp_shape),
75 dgamma,
76 None,
77 None,
78 None,
79 None,
80 None,
81 None,
82 )
83
84
85 class RMSNorm(torch.nn.Module):
86 r"""
87 Applies Root Mean Square Layer Normalization over a mini-batch of inputs as described in
88 the paper `Root Mean Square Layer Normalization <https://arxiv.org/abs/1910.07467>`__
89
90 .. math::
91 y = \frac{x}{RMS_\varepsilon(x)} * \gamma
92
93 where
94
95 .. math::
96 RMS_\varepsilon(x) = \sqrt{\frac{1}{n}\sum_{i=0}^nx_i^2 + \varepsilon}
97
98 :math:`\gamma` is a learnable affine transform parameter of size :attr:`hidden_size`
99
100 Parameters
101 ----------
102 hidden_size : int
103 size of each input sample.
104 eps : float, default = 1e-5
105 a value added to the denominator of layer normalization for numerical stability.
106 sequence_parallel : bool, default = `False`
107 if set to `True`, uses sequence parallelism.
108 params_dtype : torch.dtype, default = `torch.get_default_dtype()`
109 it controls the type used to allocate the initial parameters. Useful when
110 the model is trained with lower precision and the original FP32 parameters
111 would not fit in GPU memory.
112 zero_centered_gamma : bool, default = 'False'
113 if set to 'True', gamma parameter in RMSNorm is initialized to 0 and
114 the RMSNorm formula changes to
115
116 .. math::
117 y = \frac{x}{RMS(x) + \varepsilon} * (1 + \gamma)
118 device : Union[torch.device, str], default = "cuda"
119 The device on which the parameters of the model will allocated. It is the user's
120 responsibility to ensure all parameters are moved to the GPU before running the
121 forward pass.
122 """
123
124 def __init__(
125 self,
126 hidden_size: int,
127 eps: float = 1e-5,
128 sequence_parallel: bool = False,
129 params_dtype: Optional[torch.dtype] = None,
130 zero_centered_gamma: bool = False,
131 device: Union[torch.device, str] = "cuda",
132 ) -> None:
133 super().__init__()
134 params_dtype = torch.get_default_dtype() if params_dtype is None else params_dtype
135 self.eps = eps
136 self.zero_centered_gamma = zero_centered_gamma
137 self.weight = Parameter(
138 torch.empty(
139 hidden_size,
140 device=device,
141 dtype=params_dtype,
142 )
143 )
144 self.sequence_parallel = sequence_parallel
145
146 self.reset_parameters(defer_init=(device == 'meta'))
147
148 # These many SMs are subtracted from the total SM count when calling forward
149 # and backward RMSNorm C APIs. These envvars can be used to prevent the LN
150 # kernels from using all SMs in the device. This is useful for cases such as
151 # communication overlap with RMSNorm.
152 self.fwd_rmsnorm_sm_margin = int(os.getenv("NVTE_FWD_LAYERNORM_SM_MARGIN", "0"))
153 self.bwd_rmsnorm_sm_margin = int(os.getenv("NVTE_BWD_LAYERNORM_SM_MARGIN", "0"))
154
155 def reset_rms_norm_parameters(self) -> None:
156 """Init RMSNorm params"""
157 warnings.warn(
158 ("This method will be deprecated in an upcoming release. "
159 "Update your code to use RMSNorm.reset_parameters() instead."),
160 DeprecationWarning,
161 stacklevel=2
162 )
163 if not self.zero_centered_gamma:
164 init.ones_(self.weight)
165 else:
166 init.zeros_(self.weight)
167
168 def reset_parameters(self, defer_init=False) -> None:
169 """Reset RMSNorm parameters"""
170 if defer_init:
171 return
172
173 if self.weight.device == torch.device('meta'):
174 self.weight = torch.nn.Parameter(torch.empty_like(self.weight, device='cuda'))
175 init.constant_(self.weight, float(not self.zero_centered_gamma))
176 setattr(self.weight, "sequence_parallel", self.sequence_parallel)
177
178 @no_torch_dynamo()
179 def forward(self, inp: torch.Tensor) -> torch.Tensor:
180 """RMSNorm FWD"""
181
182 # Set the activation type for AMP.
183 TransformerEngineBaseModule.set_activation_dtype(self, inp)
184
185 if torch.is_grad_enabled():
186 fwd_fn = _RMSNorm.apply
187 args = []
188 else:
189 fwd_fn = _RMSNorm.forward
190 args = [None]
191
192 args += (
193 inp,
194 self.weight,
195 self.eps,
196 self.fwd_rmsnorm_sm_margin,
197 self.bwd_rmsnorm_sm_margin,
198 self.zero_centered_gamma,
199 torch.is_grad_enabled(),
200 self.activation_dtype,
201 )
202
203 return fwd_fn(*args)
204
[end of transformer_engine/pytorch/module/rmsnorm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/transformer_engine/pytorch/module/rmsnorm.py b/transformer_engine/pytorch/module/rmsnorm.py
--- a/transformer_engine/pytorch/module/rmsnorm.py
+++ b/transformer_engine/pytorch/module/rmsnorm.py
@@ -114,7 +114,7 @@
the RMSNorm formula changes to
.. math::
- y = \frac{x}{RMS(x) + \varepsilon} * (1 + \gamma)
+ y = \frac{x}{RMS_\varepsilon(x)} * (1 + \gamma)
device : Union[torch.device, str], default = "cuda"
The device on which the parameters of the model will allocated. It is the user's
responsibility to ensure all parameters are moved to the GPU before running the
@@ -155,7 +155,7 @@
def reset_rms_norm_parameters(self) -> None:
"""Init RMSNorm params"""
warnings.warn(
- ("This method will be deprecated in an upcoming release. "
+ ("This method is deprecated and will be removed in an upcoming release. "
"Update your code to use RMSNorm.reset_parameters() instead."),
DeprecationWarning,
stacklevel=2
| {"golden_diff": "diff --git a/transformer_engine/pytorch/module/rmsnorm.py b/transformer_engine/pytorch/module/rmsnorm.py\n--- a/transformer_engine/pytorch/module/rmsnorm.py\n+++ b/transformer_engine/pytorch/module/rmsnorm.py\n@@ -114,7 +114,7 @@\n the RMSNorm formula changes to\n \n .. math::\n- y = \\frac{x}{RMS(x) + \\varepsilon} * (1 + \\gamma)\n+ y = \\frac{x}{RMS_\\varepsilon(x)} * (1 + \\gamma)\n device : Union[torch.device, str], default = \"cuda\"\n The device on which the parameters of the model will allocated. It is the user's\n responsibility to ensure all parameters are moved to the GPU before running the\n@@ -155,7 +155,7 @@\n def reset_rms_norm_parameters(self) -> None:\n \"\"\"Init RMSNorm params\"\"\"\n warnings.warn(\n- (\"This method will be deprecated in an upcoming release. \"\n+ (\"This method is deprecated and will be removed in an upcoming release. \"\n \"Update your code to use RMSNorm.reset_parameters() instead.\"),\n DeprecationWarning,\n stacklevel=2\n", "issue": "Does `transformer_engine.pytorch.RMSNorm` support `zero_centered_gamma`?\nThe [doc](https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/api/pytorch.html#transformer_engine.pytorch.RMSNorm) explains the option `zero_centered_gamma` clearly, but I found the implementation at [normalization.cu:L221](https://github.com/NVIDIA/TransformerEngine/blob/82555b3fd1ddece3a886fac3c7a4aa8d6f8c1955/transformer_engine/pytorch/csrc/extensions/normalization.cu#L221) assert `zero_centered_gamma == false`.\r\n\r\nIs there any limitation?\n", "before_files": [{"content": "# Copyright (c) 2022-2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.\n#\n# See LICENSE for license information.\n\n\"\"\"RMSNorm API\"\"\"\nimport os\nimport warnings\nfrom typing import Union, Tuple, Optional\n\nimport torch\nfrom torch.nn.parameter import Parameter\nfrom torch.nn import init\n\nfrom .base import TransformerEngineBaseModule\nfrom .. 
import cpp_extensions as tex\nfrom ..jit import no_torch_dynamo\nfrom ..utils import cast_if_needed\n\n\n__all__ = [\"RMSNorm\"]\n\n\nclass _RMSNorm(torch.autograd.Function):\n \"\"\"functional RMSNorm\"\"\"\n\n @staticmethod\n def forward(\n ctx,\n inp: torch.Tensor,\n rmsnorm_weight: torch.Tensor,\n eps: float,\n fwd_rmsnorm_sm_margin: int,\n bwd_rmsnorm_sm_margin: int,\n zero_centered_gamma: bool,\n is_grad_enabled: bool,\n activation_dtype: torch.dtype,\n ) -> torch.Tensor:\n # Make sure input dimensions are compatible\n in_features = rmsnorm_weight.numel()\n assert inp.is_cuda, \"TransformerEngine needs CUDA.\"\n assert inp.shape[-1] == in_features, \"RMSNorm not possible\"\n inputmat = inp.view((-1, in_features))\n\n # Cast for native AMP\n inputmat = cast_if_needed(inputmat, activation_dtype)\n rmsnorm_weight = cast_if_needed(rmsnorm_weight, activation_dtype)\n\n if is_grad_enabled:\n rmsnorm_out, rsigma = tex.rmsnorm_fwd(inputmat, rmsnorm_weight,\n eps, fwd_rmsnorm_sm_margin,\n zero_centered_gamma)\n ctx.save_for_backward(inputmat, rmsnorm_weight, rsigma)\n ctx.inp_shape = inp.shape\n ctx.bwd_rmsnorm_sm_margin = bwd_rmsnorm_sm_margin\n ctx.zero_centered_gamma = zero_centered_gamma\n else:\n rmsnorm_out = tex.rmsnorm_fwd_inf(inputmat, rmsnorm_weight,\n eps,\n zero_centered_gamma)\n return rmsnorm_out.view_as(inp)\n\n @staticmethod\n def backward(\n ctx, grad_output: torch.Tensor\n ) -> Tuple[Union[torch.Tensor, None], ...]:\n inputmat, rmsnorm_weight, rsigma = ctx.saved_tensors\n grad_output = grad_output.contiguous()\n d_rmsnorm_out = grad_output.view(inputmat.shape)\n dxmat, dgamma = tex.rmsnorm_bwd(\n d_rmsnorm_out, inputmat, rsigma, rmsnorm_weight,\n ctx.bwd_rmsnorm_sm_margin, ctx.zero_centered_gamma\n )\n return (\n dxmat.view(ctx.inp_shape),\n dgamma,\n None,\n None,\n None,\n None,\n None,\n None,\n )\n\n\nclass RMSNorm(torch.nn.Module):\n r\"\"\"\n Applies Root Mean Square Layer Normalization over a mini-batch of inputs as described in\n the paper `Root Mean Square Layer Normalization <https://arxiv.org/abs/1910.07467>`__\n\n .. math::\n y = \\frac{x}{RMS_\\varepsilon(x)} * \\gamma\n\n where\n\n .. math::\n RMS_\\varepsilon(x) = \\sqrt{\\frac{1}{n}\\sum_{i=0}^nx_i^2 + \\varepsilon}\n\n :math:`\\gamma` is a learnable affine transform parameter of size :attr:`hidden_size`\n\n Parameters\n ----------\n hidden_size : int\n size of each input sample.\n eps : float, default = 1e-5\n a value added to the denominator of layer normalization for numerical stability.\n sequence_parallel : bool, default = `False`\n if set to `True`, uses sequence parallelism.\n params_dtype : torch.dtype, default = `torch.get_default_dtype()`\n it controls the type used to allocate the initial parameters. Useful when\n the model is trained with lower precision and the original FP32 parameters\n would not fit in GPU memory.\n zero_centered_gamma : bool, default = 'False'\n if set to 'True', gamma parameter in RMSNorm is initialized to 0 and\n the RMSNorm formula changes to\n\n .. math::\n y = \\frac{x}{RMS(x) + \\varepsilon} * (1 + \\gamma)\n device : Union[torch.device, str], default = \"cuda\"\n The device on which the parameters of the model will allocated. 
It is the user's\n responsibility to ensure all parameters are moved to the GPU before running the\n forward pass.\n \"\"\"\n\n def __init__(\n self,\n hidden_size: int,\n eps: float = 1e-5,\n sequence_parallel: bool = False,\n params_dtype: Optional[torch.dtype] = None,\n zero_centered_gamma: bool = False,\n device: Union[torch.device, str] = \"cuda\",\n ) -> None:\n super().__init__()\n params_dtype = torch.get_default_dtype() if params_dtype is None else params_dtype\n self.eps = eps\n self.zero_centered_gamma = zero_centered_gamma\n self.weight = Parameter(\n torch.empty(\n hidden_size,\n device=device,\n dtype=params_dtype,\n )\n )\n self.sequence_parallel = sequence_parallel\n\n self.reset_parameters(defer_init=(device == 'meta'))\n\n # These many SMs are subtracted from the total SM count when calling forward\n # and backward RMSNorm C APIs. These envvars can be used to prevent the LN\n # kernels from using all SMs in the device. This is useful for cases such as\n # communication overlap with RMSNorm.\n self.fwd_rmsnorm_sm_margin = int(os.getenv(\"NVTE_FWD_LAYERNORM_SM_MARGIN\", \"0\"))\n self.bwd_rmsnorm_sm_margin = int(os.getenv(\"NVTE_BWD_LAYERNORM_SM_MARGIN\", \"0\"))\n\n def reset_rms_norm_parameters(self) -> None:\n \"\"\"Init RMSNorm params\"\"\"\n warnings.warn(\n (\"This method will be deprecated in an upcoming release. \"\n \"Update your code to use RMSNorm.reset_parameters() instead.\"),\n DeprecationWarning,\n stacklevel=2\n )\n if not self.zero_centered_gamma:\n init.ones_(self.weight)\n else:\n init.zeros_(self.weight)\n\n def reset_parameters(self, defer_init=False) -> None:\n \"\"\"Reset RMSNorm parameters\"\"\"\n if defer_init:\n return\n\n if self.weight.device == torch.device('meta'):\n self.weight = torch.nn.Parameter(torch.empty_like(self.weight, device='cuda'))\n init.constant_(self.weight, float(not self.zero_centered_gamma))\n setattr(self.weight, \"sequence_parallel\", self.sequence_parallel)\n\n @no_torch_dynamo()\n def forward(self, inp: torch.Tensor) -> torch.Tensor:\n \"\"\"RMSNorm FWD\"\"\"\n\n # Set the activation type for AMP.\n TransformerEngineBaseModule.set_activation_dtype(self, inp)\n\n if torch.is_grad_enabled():\n fwd_fn = _RMSNorm.apply\n args = []\n else:\n fwd_fn = _RMSNorm.forward\n args = [None]\n\n args += (\n inp,\n self.weight,\n self.eps,\n self.fwd_rmsnorm_sm_margin,\n self.bwd_rmsnorm_sm_margin,\n self.zero_centered_gamma,\n torch.is_grad_enabled(),\n self.activation_dtype,\n )\n\n return fwd_fn(*args)\n", "path": "transformer_engine/pytorch/module/rmsnorm.py"}]} | 2,839 | 273 |
gh_patches_debug_31562 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-520 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MyPy - Support for dynamic enums is broken at the moment
Annoyingly this bit of code doesn't work:
```python
import strawberry
Animal = Enum('Animal', 'ANT BEE CAT DOG')
AnimalEnum = strawberry.enum(Animal, name="Animal")
a: AnimalEnum
reveal_type(AnimalEnum)
reveal_type(a)
```
As it raises the following error:
```
➜ poetry run mypy demo_types.py --show-traceback
demo_types.py:6: error: INTERNAL ERROR -- Please try using mypy master on Github:
https://mypy.rtfd.io/en/latest/common_issues.html#using-a-development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 0.790
Traceback (most recent call last):
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/bin/mypy", line 8, in <module>
sys.exit(console_entry())
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/__main__.py", line 8, in console_entry
main(None, sys.stdout, sys.stderr)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/main.py", line 90, in main
res = build.build(sources, options, None, flush_errors, fscache, stdout, stderr)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py", line 180, in build
result = _build(
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py", line 254, in _build
graph = dispatch(sources, manager, stdout)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py", line 2630, in dispatch
process_graph(graph, manager)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py", line 2953, in process_graph
process_stale_scc(graph, scc, manager)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py", line 3045, in process_stale_scc
mypy.semanal_main.semantic_analysis_for_scc(graph, scc, manager.errors)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal_main.py", line 78, in semantic_analysis_for_scc
process_top_levels(graph, scc, patches)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal_main.py", line 199, in process_top_levels
deferred, incomplete, progress = semantic_analyze_target(next_id, state,
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal_main.py", line 326, in semantic_analyze_target
analyzer.refresh_partial(refresh_node,
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py", line 357, in refresh_partial
self.refresh_top_level(node)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py", line 368, in refresh_top_level
self.accept(d)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py", line 4700, in accept
node.accept(self)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/nodes.py", line 1063, in accept
return visitor.visit_assignment_stmt(self)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py", line 1943, in visit_assignment_stmt
self.apply_dynamic_class_hook(s)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py", line 2194, in apply_dynamic_class_hook
hook(DynamicClassDefContext(call, lval.name, self))
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/strawberry/ext/mypy_plugin.py", line 69, in enum_hook
enum_type = _get_type_for_expr(ctx.call.args[0], ctx.api)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/strawberry/ext/mypy_plugin.py", line 39, in _get_type_for_expr
return api.named_type(expr.name)
File "/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py", line 4155, in named_type
assert isinstance(node, TypeInfo)
AssertionError:
demo_types.py:6: : note: use --pdb to drop into pdb
```
I'm not sure how to fix, since this error is deep inside mypy, and it seems to fail because the node we send to it is not a TypeInfo (it is a var node if I understood correctly). I wonder if we can check the type of node we have and then do something based on that 🤔
</issue>
<code>
[start of strawberry/ext/mypy_plugin.py]
1 from typing import Callable, Optional
2
3 from mypy.nodes import (
4 GDEF,
5 Expression,
6 IndexExpr,
7 NameExpr,
8 SymbolTableNode,
9 TupleExpr,
10 TypeAlias,
11 )
12 from mypy.plugin import (
13 AnalyzeTypeContext,
14 ClassDefContext,
15 DynamicClassDefContext,
16 Plugin,
17 SemanticAnalyzerPluginInterface,
18 )
19 from mypy.plugins import dataclasses
20 from mypy.types import Type, UnionType
21
22
23 def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
24 type_name = ctx.type.args[0]
25 type_ = ctx.api.analyze_type(type_name)
26
27 return type_
28
29
30 def private_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
31 type_name = ctx.type.args[0]
32 type_ = ctx.api.analyze_type(type_name)
33
34 return type_
35
36
37 def _get_type_for_expr(expr: Expression, api: SemanticAnalyzerPluginInterface):
38 if isinstance(expr, NameExpr):
39 return api.named_type(expr.name)
40
41 if isinstance(expr, IndexExpr):
42 type_ = _get_type_for_expr(expr.base, api)
43 type_.args = (_get_type_for_expr(expr.index, api),)
44
45 return type_
46
47 raise ValueError(f"Unsupported expression f{type(expr)}")
48
49
50 def union_hook(ctx: DynamicClassDefContext) -> None:
51 types = ctx.call.args[1]
52
53 if isinstance(types, TupleExpr):
54 type_ = UnionType(tuple(_get_type_for_expr(x, ctx.api) for x in types.items))
55
56 type_alias = TypeAlias(
57 type_,
58 fullname=ctx.api.qualified_name(ctx.name),
59 line=ctx.call.line,
60 column=ctx.call.column,
61 )
62
63 ctx.api.add_symbol_table_node(
64 ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)
65 )
66
67
68 def enum_hook(ctx: DynamicClassDefContext) -> None:
69 enum_type = _get_type_for_expr(ctx.call.args[0], ctx.api)
70
71 type_alias = TypeAlias(
72 enum_type,
73 fullname=ctx.api.qualified_name(ctx.name),
74 line=ctx.call.line,
75 column=ctx.call.column,
76 )
77
78 ctx.api.add_symbol_table_node(
79 ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)
80 )
81
82
83 class StrawberryPlugin(Plugin):
84 def get_dynamic_class_hook(
85 self, fullname: str
86 ) -> Optional[Callable[[DynamicClassDefContext], None]]:
87 # TODO: investigate why we need this instead of `strawberry.union.union` on CI
88 # we have the same issue in the other hooks
89 if "strawberry.union" in fullname:
90 return union_hook
91
92 if "strawberry.enum" in fullname:
93 return enum_hook
94
95 return None
96
97 def get_type_analyze_hook(self, fullname: str):
98 if fullname == "strawberry.lazy_type.LazyType":
99 return lazy_type_analyze_callback
100
101 if any(
102 name in fullname
103 for name in {"strawberry.private.Private", "strawberry.Private"}
104 ):
105 return private_type_analyze_callback
106
107 return None
108
109 def get_class_decorator_hook(
110 self, fullname: str
111 ) -> Optional[Callable[[ClassDefContext], None]]:
112 if any(
113 strawberry_decorator in fullname
114 for strawberry_decorator in {
115 "strawberry.type",
116 "strawberry.federation.type",
117 "strawberry.input",
118 "strawberry.interface",
119 }
120 ):
121 return dataclasses.dataclass_class_maker_callback
122 return None
123
124
125 def plugin(version: str):
126 return StrawberryPlugin
127
[end of strawberry/ext/mypy_plugin.py]
[start of .github/release-check-action/config.py]
1 import os
2
3
4 RELEASE_FILE_PATH = "RELEASE.md"
5 GITHUB_SHA = os.environ["GITHUB_SHA"]
6 GITHUB_EVENT_PATH = os.environ["GITHUB_EVENT_PATH"]
7 GITHUB_WORKSPACE = os.environ["GITHUB_WORKSPACE"]
8 API_URL = "https://gentle-beach-72400.herokuapp.com/graphql"
9
[end of .github/release-check-action/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/.github/release-check-action/config.py b/.github/release-check-action/config.py
--- a/.github/release-check-action/config.py
+++ b/.github/release-check-action/config.py
@@ -5,4 +5,4 @@
GITHUB_SHA = os.environ["GITHUB_SHA"]
GITHUB_EVENT_PATH = os.environ["GITHUB_EVENT_PATH"]
GITHUB_WORKSPACE = os.environ["GITHUB_WORKSPACE"]
-API_URL = "https://gentle-beach-72400.herokuapp.com/graphql"
+API_URL = "https://strawberry-github-bot.herokuapp.com/graphql"
diff --git a/strawberry/ext/mypy_plugin.py b/strawberry/ext/mypy_plugin.py
--- a/strawberry/ext/mypy_plugin.py
+++ b/strawberry/ext/mypy_plugin.py
@@ -8,6 +8,7 @@
SymbolTableNode,
TupleExpr,
TypeAlias,
+ Var,
)
from mypy.plugin import (
AnalyzeTypeContext,
@@ -17,7 +18,7 @@
SemanticAnalyzerPluginInterface,
)
from mypy.plugins import dataclasses
-from mypy.types import Type, UnionType
+from mypy.types import AnyType, Type, TypeOfAny, UnionType
def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:
@@ -66,7 +67,32 @@
def enum_hook(ctx: DynamicClassDefContext) -> None:
- enum_type = _get_type_for_expr(ctx.call.args[0], ctx.api)
+ first_argument = ctx.call.args[0]
+
+ if isinstance(first_argument, NameExpr):
+ if not first_argument.node:
+ ctx.api.defer()
+
+ return
+
+ if isinstance(first_argument.node, Var):
+ var_type = first_argument.node.type or AnyType(
+ TypeOfAny.implementation_artifact
+ )
+
+ type_alias = TypeAlias(
+ var_type,
+ fullname=ctx.api.qualified_name(ctx.name),
+ line=ctx.call.line,
+ column=ctx.call.column,
+ )
+
+ ctx.api.add_symbol_table_node(
+ ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)
+ )
+ return
+
+ enum_type = _get_type_for_expr(first_argument, ctx.api)
type_alias = TypeAlias(
enum_type,
| {"golden_diff": "diff --git a/.github/release-check-action/config.py b/.github/release-check-action/config.py\n--- a/.github/release-check-action/config.py\n+++ b/.github/release-check-action/config.py\n@@ -5,4 +5,4 @@\n GITHUB_SHA = os.environ[\"GITHUB_SHA\"]\n GITHUB_EVENT_PATH = os.environ[\"GITHUB_EVENT_PATH\"]\n GITHUB_WORKSPACE = os.environ[\"GITHUB_WORKSPACE\"]\n-API_URL = \"https://gentle-beach-72400.herokuapp.com/graphql\"\n+API_URL = \"https://strawberry-github-bot.herokuapp.com/graphql\"\ndiff --git a/strawberry/ext/mypy_plugin.py b/strawberry/ext/mypy_plugin.py\n--- a/strawberry/ext/mypy_plugin.py\n+++ b/strawberry/ext/mypy_plugin.py\n@@ -8,6 +8,7 @@\n SymbolTableNode,\n TupleExpr,\n TypeAlias,\n+ Var,\n )\n from mypy.plugin import (\n AnalyzeTypeContext,\n@@ -17,7 +18,7 @@\n SemanticAnalyzerPluginInterface,\n )\n from mypy.plugins import dataclasses\n-from mypy.types import Type, UnionType\n+from mypy.types import AnyType, Type, TypeOfAny, UnionType\n \n \n def lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n@@ -66,7 +67,32 @@\n \n \n def enum_hook(ctx: DynamicClassDefContext) -> None:\n- enum_type = _get_type_for_expr(ctx.call.args[0], ctx.api)\n+ first_argument = ctx.call.args[0]\n+\n+ if isinstance(first_argument, NameExpr):\n+ if not first_argument.node:\n+ ctx.api.defer()\n+\n+ return\n+\n+ if isinstance(first_argument.node, Var):\n+ var_type = first_argument.node.type or AnyType(\n+ TypeOfAny.implementation_artifact\n+ )\n+\n+ type_alias = TypeAlias(\n+ var_type,\n+ fullname=ctx.api.qualified_name(ctx.name),\n+ line=ctx.call.line,\n+ column=ctx.call.column,\n+ )\n+\n+ ctx.api.add_symbol_table_node(\n+ ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)\n+ )\n+ return\n+\n+ enum_type = _get_type_for_expr(first_argument, ctx.api)\n \n type_alias = TypeAlias(\n enum_type,\n", "issue": "MyPy - Support for dynamic enums is broken at the moment\nAnnoyingly this bit of code doesn't work:\r\n\r\n```python\r\nimport strawberry\r\n\r\n\r\nAnimal = Enum('Animal', 'ANT BEE CAT DOG')\r\n\r\nAnimalEnum = strawberry.enum(Animal, name=\"Animal\")\r\n\r\n\r\na: AnimalEnum\r\nreveal_type(AnimalEnum)\r\nreveal_type(a)\r\n```\r\n\r\nAs it raises the following error:\r\n\r\n```\r\n\u279c poetry run mypy demo_types.py --show-traceback\r\ndemo_types.py:6: error: INTERNAL ERROR -- Please try using mypy master on Github:\r\nhttps://mypy.rtfd.io/en/latest/common_issues.html#using-a-development-mypy-build\r\nPlease report a bug at https://github.com/python/mypy/issues\r\nversion: 0.790\r\nTraceback (most recent call last):\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/bin/mypy\", line 8, in <module>\r\n sys.exit(console_entry())\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/__main__.py\", line 8, in console_entry\r\n main(None, sys.stdout, sys.stderr)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/main.py\", line 90, in main\r\n res = build.build(sources, options, None, flush_errors, fscache, stdout, stderr)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py\", line 180, in build\r\n result = _build(\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py\", line 254, in _build\r\n graph = dispatch(sources, manager, stdout)\r\n File 
\"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py\", line 2630, in dispatch\r\n process_graph(graph, manager)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py\", line 2953, in process_graph\r\n process_stale_scc(graph, scc, manager)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/build.py\", line 3045, in process_stale_scc\r\n mypy.semanal_main.semantic_analysis_for_scc(graph, scc, manager.errors)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal_main.py\", line 78, in semantic_analysis_for_scc\r\n process_top_levels(graph, scc, patches)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal_main.py\", line 199, in process_top_levels\r\n deferred, incomplete, progress = semantic_analyze_target(next_id, state,\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal_main.py\", line 326, in semantic_analyze_target\r\n analyzer.refresh_partial(refresh_node,\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py\", line 357, in refresh_partial\r\n self.refresh_top_level(node)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py\", line 368, in refresh_top_level\r\n self.accept(d)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py\", line 4700, in accept\r\n node.accept(self)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/nodes.py\", line 1063, in accept\r\n return visitor.visit_assignment_stmt(self)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py\", line 1943, in visit_assignment_stmt\r\n self.apply_dynamic_class_hook(s)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py\", line 2194, in apply_dynamic_class_hook\r\n hook(DynamicClassDefContext(call, lval.name, self))\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/strawberry/ext/mypy_plugin.py\", line 69, in enum_hook\r\n enum_type = _get_type_for_expr(ctx.call.args[0], ctx.api)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/strawberry/ext/mypy_plugin.py\", line 39, in _get_type_for_expr\r\n return api.named_type(expr.name)\r\n File \"/Users/patrick/Documents/github/strawberry-graphql/strawberry/.venv/lib/python3.9/site-packages/mypy/semanal.py\", line 4155, in named_type\r\n assert isinstance(node, TypeInfo)\r\nAssertionError:\r\ndemo_types.py:6: : note: use --pdb to drop into pdb\r\n```\r\n\r\nI'm not sure how to fix, since this error is deep inside mypy, and it seems to fail because the node we send to it is not a TypeInfo (it is a var node if I understood correctly). 
I wonder if we can check the type of node we have and then do something based on that \ud83e\udd14 \n", "before_files": [{"content": "from typing import Callable, Optional\n\nfrom mypy.nodes import (\n GDEF,\n Expression,\n IndexExpr,\n NameExpr,\n SymbolTableNode,\n TupleExpr,\n TypeAlias,\n)\nfrom mypy.plugin import (\n AnalyzeTypeContext,\n ClassDefContext,\n DynamicClassDefContext,\n Plugin,\n SemanticAnalyzerPluginInterface,\n)\nfrom mypy.plugins import dataclasses\nfrom mypy.types import Type, UnionType\n\n\ndef lazy_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\ndef private_type_analyze_callback(ctx: AnalyzeTypeContext) -> Type:\n type_name = ctx.type.args[0]\n type_ = ctx.api.analyze_type(type_name)\n\n return type_\n\n\ndef _get_type_for_expr(expr: Expression, api: SemanticAnalyzerPluginInterface):\n if isinstance(expr, NameExpr):\n return api.named_type(expr.name)\n\n if isinstance(expr, IndexExpr):\n type_ = _get_type_for_expr(expr.base, api)\n type_.args = (_get_type_for_expr(expr.index, api),)\n\n return type_\n\n raise ValueError(f\"Unsupported expression f{type(expr)}\")\n\n\ndef union_hook(ctx: DynamicClassDefContext) -> None:\n types = ctx.call.args[1]\n\n if isinstance(types, TupleExpr):\n type_ = UnionType(tuple(_get_type_for_expr(x, ctx.api) for x in types.items))\n\n type_alias = TypeAlias(\n type_,\n fullname=ctx.api.qualified_name(ctx.name),\n line=ctx.call.line,\n column=ctx.call.column,\n )\n\n ctx.api.add_symbol_table_node(\n ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)\n )\n\n\ndef enum_hook(ctx: DynamicClassDefContext) -> None:\n enum_type = _get_type_for_expr(ctx.call.args[0], ctx.api)\n\n type_alias = TypeAlias(\n enum_type,\n fullname=ctx.api.qualified_name(ctx.name),\n line=ctx.call.line,\n column=ctx.call.column,\n )\n\n ctx.api.add_symbol_table_node(\n ctx.name, SymbolTableNode(GDEF, type_alias, plugin_generated=False)\n )\n\n\nclass StrawberryPlugin(Plugin):\n def get_dynamic_class_hook(\n self, fullname: str\n ) -> Optional[Callable[[DynamicClassDefContext], None]]:\n # TODO: investigate why we need this instead of `strawberry.union.union` on CI\n # we have the same issue in the other hooks\n if \"strawberry.union\" in fullname:\n return union_hook\n\n if \"strawberry.enum\" in fullname:\n return enum_hook\n\n return None\n\n def get_type_analyze_hook(self, fullname: str):\n if fullname == \"strawberry.lazy_type.LazyType\":\n return lazy_type_analyze_callback\n\n if any(\n name in fullname\n for name in {\"strawberry.private.Private\", \"strawberry.Private\"}\n ):\n return private_type_analyze_callback\n\n return None\n\n def get_class_decorator_hook(\n self, fullname: str\n ) -> Optional[Callable[[ClassDefContext], None]]:\n if any(\n strawberry_decorator in fullname\n for strawberry_decorator in {\n \"strawberry.type\",\n \"strawberry.federation.type\",\n \"strawberry.input\",\n \"strawberry.interface\",\n }\n ):\n return dataclasses.dataclass_class_maker_callback\n return None\n\n\ndef plugin(version: str):\n return StrawberryPlugin\n", "path": "strawberry/ext/mypy_plugin.py"}, {"content": "import os\n\n\nRELEASE_FILE_PATH = \"RELEASE.md\"\nGITHUB_SHA = os.environ[\"GITHUB_SHA\"]\nGITHUB_EVENT_PATH = os.environ[\"GITHUB_EVENT_PATH\"]\nGITHUB_WORKSPACE = os.environ[\"GITHUB_WORKSPACE\"]\nAPI_URL = \"https://gentle-beach-72400.herokuapp.com/graphql\"\n", "path": ".github/release-check-action/config.py"}]} | 3,066 | 517 |
gh_patches_debug_8119 | rasdani/github-patches | git_diff | google__osv.dev-704 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Packagist version enumeration is incorrect
Since #675 was landed we're now seeing some false positives in `osv-detector` due to the `versions` field seemingly holding wrong versions.
Some examples that I've looked into specifically:
* https://api.osv.dev/v1/vulns/GHSA-vxrc-68xx-x48g has all of the v2.x versions listed in `affected[0].versions`, even though that `affected` says its `fixed` in version `1.38.0`. (it's also missing `affected[1].versions` which doesn't seem right?
* https://api.osv.dev/v1/vulns/GHSA-77mr-wc79-m8j3 has v6.5+ versions listed in `affected[0].versions`, even though that `affected` says `fixed` in version `6.5.0`.
* https://api.osv.dev/v1/vulns/GHSA-q348-f93x-9gx4 has v2.2.11+ versions listed in `affected[0].versions`, even though that `affected` says `fixed` in version `2.2.11`.
And here are some outputs from `osv-detector` which have some more examples:
```
phpmailer/[email protected]:
GHSA-77mr-wc79-m8j3: A validation function with the same name as a built-in validator can be called (https://github.com/advisories/GHSA-77mr-wc79-m8j3)
GHSA-7q44-r25x-wm4q: Remote Code Execution vulnerability in PHPMailer 6.4.1 running on Windows (https://github.com/advisories/GHSA-7q44-r25x-wm4q)
GHSA-f7hx-fqxw-rvvj: Insufficient output escaping of attachment names in PHPMailer (https://github.com/advisories/GHSA-f7hx-fqxw-rvvj)
GHSA-v5c9-mmw9-829q: PHPMailer susceptible to arbitrary code execution (https://github.com/advisories/GHSA-v5c9-mmw9-829q)
twig/[email protected]:
GHSA-vxrc-68xx-x48g: Sandbox Information Disclosure (https://github.com/advisories/GHSA-vxrc-68xx-x48g)
zendesk/[email protected]:
GHSA-q348-f93x-9gx4: Lack of Input Validation in zendesk_api_client_php for Zendesk Subdomain (https://github.com/advisories/GHSA-q348-f93x-9gx4)
laravel/[email protected]:
GHSA-3p32-j457-pg5x: Query Binding Exploitation (https://github.com/advisories/GHSA-3p32-j457-pg5x)
GHSA-4mg9-vhxq-vm7j: SQL Server LIMIT / OFFSET SQL Injection in laravel/framework and illuminate/database (https://github.com/advisories/GHSA-4mg9-vhxq-vm7j)
GHSA-66hf-2p6w-jqfw: Laravel Framework XSS in Blade templating engine (https://github.com/advisories/GHSA-66hf-2p6w-jqfw)
GHSA-w2pm-r78h-4m7v: OS Command Injection in Laravel Framework (https://github.com/advisories/GHSA-w2pm-r78h-4m7v)
GHSA-w68r-5p45-5rqp: Improper Input Validation in Laravel (https://github.com/advisories/GHSA-w68r-5p45-5rqp)
GHSA-x7p5-p2c9-phvg: Unexpected database bindings (https://github.com/advisories/GHSA-x7p5-p2c9-phvg)
```
</issue>
<code>
[start of osv/packagist_version.py]
1 # Copyright 2022 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Ordered packagist version"""
15 import re
16
17
18 class PackagistVersion:
19 """
20 Follows the packagist version ordering, which is recommended to be semver, but
21 not enforced to be semver. The php standard version comparison code written in
22 C is located here: https://github.com/php/php-src/blob/master/ext/standard/versioning.c
23
24 The function first replaces _, - and + with a dot . in the version strings and
25 also inserts dots . before and after any non number so that for example
26 '4.3.2RC1' becomes '4.3.2.RC.1'. Then it compares the parts starting from
27 left to right.
28
29 If a part contains special version strings these are handled in the following order:
30 any string not found in this list < dev < alpha = a < beta = b < RC = rc < # < pl = p.
31 This way not only versions with different levels like '4.1' and '4.1.2' can be
32 compared but also any PHP specific version containing development state.
33
34 ---
35
36 ## Known differences:
37 The following are some examples of known differences between this python
38 implementation and the C implementation of PHP
39
40 - In this version, special version strings need to exactly match to not be
41 considered "any other string", while in the original implementation the
42 string only need to start with one of the listed strings.
43 """
44
45 version_str: str
46 canonicalized_version: str
47
48 def __init__(self, version: str):
49 self.version_str = version
50 self.canonicalized_version = self.php_canonicalize_version(version)
51
52 def __str__(self) -> str:
53 return self.version_str
54
55 def __hash__(self):
56 return self.canonicalized_version
57
58 def __eq__(self, other):
59 if not isinstance(other, self.__class__):
60 return NotImplemented
61 return self.canonicalized_version == other.canonicalized_version
62
63 def __lt__(self, other):
64 return self.__cmp__(other) < 0
65
66 def __le__(self, other):
67 return self.__cmp__(other) <= 0
68
69 def __gt__(self, other):
70 return self.__cmp__(other) > 0
71
72 def __ge__(self, other):
73 return self.__cmp__(other) >= 0
74
75 def __cmp__(self, other):
76 return self.php_version_compare(self.version_str, other.version_str)
77
78 @staticmethod
79 def php_slices_compare(a_split: [str], b_split: [str]):
80 """
81 Compare php versions after being split by '.'
82 """
83 for a, b in zip(a_split, b_split):
84 if a.isdigit() and b.isdigit():
85 compare = int(a) - int(b)
86 elif not a.isdigit() and not b.isdigit():
87 compare = PackagistVersion.compare_special_versions(a, b)
88 elif a.isdigit():
89 compare = PackagistVersion.compare_special_versions('#', b)
90 else:
91 compare = PackagistVersion.compare_special_versions(a, '#')
92
93 if compare != 0:
94 if compare > 0:
95 return 1
96 else:
97 return -1
98
99 if len(a_split) > len(b_split):
100 next_char = a_split[len(b_split)]
101 if next_char.isdigit():
102 return 1
103 return PackagistVersion.php_slices_compare(a_split[len(b_split):], ['#'])
104
105 if len(a_split) < len(b_split):
106 next_char = b_split[len(a_split)]
107 if next_char.isdigit():
108 return -1
109 return PackagistVersion.php_slices_compare(['#'], b_split[len(a_split):])
110
111 return 0
112
113 @staticmethod
114 def php_version_compare(version_a: str, version_b: str) -> int:
115 """
116 Given two php versions, compare which is newer
117
118 :return: 1 if a > b, -1 if b > a, 0 if a == b
119 """
120 version_a = PackagistVersion.php_canonicalize_version(version_a)
121 version_b = PackagistVersion.php_canonicalize_version(version_b)
122
123 a_split = version_a.split('.')
124 b_split = version_b.split('.')
125 return PackagistVersion.php_slices_compare(a_split, b_split)
126
127 @staticmethod
128 def php_canonicalize_version(version: str) -> str:
129 """
130 Replaces special separators (`-`,`_`,`+`) with `.`, and inserts `.`
131 between any digit and non-digit.
132 """
133 replaced = re.sub('[-_+]', '.', version)
134 replaced = re.sub(r'([^\d.])(\d)', r'\1.\2', replaced)
135 replaced = re.sub(r'(\d)([^\d.])', r'\1.\2', replaced)
136 return replaced
137
138 SPECIAL_CHARACTER_ORDER = {
139 "dev": 0,
140 "alpha": 1,
141 "a": 1,
142 "beta": 2,
143 "b": 2,
144 "RC": 3,
145 "rc": 3,
146 "#": 4,
147 "pl": 5,
148 "p": 5,
149 None: 0,
150 }
151
152 @staticmethod
153 def compare_special_versions(version_part_a: str, version_part_b: str) -> int:
154 """
155 Compares the order of special characters against the order specified in php
156 docs.
157
158 any string not found in this list < dev < alpha = a < beta = b < RC = rc < # < pl = p.
159
160 :return: 1 if a > b, -1 if b > a, 0 if a == b
161 """
162 # This isn't quite the behaviour of the c implementation of php version_compare
163 # In php if the part starts with special_chars its enough.
164 # ### For example:
165 # *PHP implementation:*
166 # `1.0.0beta1 = 1.0.0betawithsomeothertext1`
167 #
168 # *This python implementation*
169 # `1.0.0beta1 > 1.0.0betawithsomeothertext1`
170 # Because "any string not found in this list" should apply to
171 # `betawithsomeothertext`
172 found_a = PackagistVersion.SPECIAL_CHARACTER_ORDER.get(version_part_a, -1)
173 found_b = PackagistVersion.SPECIAL_CHARACTER_ORDER.get(version_part_b, -1)
174
175 if found_a > found_b:
176 return 1
177 if found_a < found_b:
178 return -1
179 return 0
180
[end of osv/packagist_version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/osv/packagist_version.py b/osv/packagist_version.py
--- a/osv/packagist_version.py
+++ b/osv/packagist_version.py
@@ -130,6 +130,8 @@
Replaces special separators (`-`,`_`,`+`) with `.`, and inserts `.`
between any digit and non-digit.
"""
+ if version.startswith('v'):
+ version = version[1:]
replaced = re.sub('[-_+]', '.', version)
replaced = re.sub(r'([^\d.])(\d)', r'\1.\2', replaced)
replaced = re.sub(r'(\d)([^\d.])', r'\1.\2', replaced)
| {"golden_diff": "diff --git a/osv/packagist_version.py b/osv/packagist_version.py\n--- a/osv/packagist_version.py\n+++ b/osv/packagist_version.py\n@@ -130,6 +130,8 @@\n Replaces special separators (`-`,`_`,`+`) with `.`, and inserts `.`\n between any digit and non-digit.\n \"\"\"\n+ if version.startswith('v'):\n+ version = version[1:]\n replaced = re.sub('[-_+]', '.', version)\n replaced = re.sub(r'([^\\d.])(\\d)', r'\\1.\\2', replaced)\n replaced = re.sub(r'(\\d)([^\\d.])', r'\\1.\\2', replaced)\n", "issue": "Packagist version enumeration is incorrect\nSince #675 was landed we're now seeing some false positives in `osv-detector` due to the `versions` field seemingly holding wrong versions.\r\n\r\nSome examples that I've looked into specifically:\r\n * https://api.osv.dev/v1/vulns/GHSA-vxrc-68xx-x48g has all of the v2.x versions listed in `affected[0].versions`, even though that `affected` says its `fixed` in version `1.38.0`. (it's also missing `affected[1].versions` which doesn't seem right?\r\n * https://api.osv.dev/v1/vulns/GHSA-77mr-wc79-m8j3 has v6.5+ versions listed in `affected[0].versions`, even though that `affected` says `fixed` in version `6.5.0`.\r\n * https://api.osv.dev/v1/vulns/GHSA-q348-f93x-9gx4 has v2.2.11+ versions listed in `affected[0].versions`, even though that `affected` says `fixed` in version `2.2.11`.\r\n\r\nAnd here are some outputs from `osv-detector` which have some more examples:\r\n```\r\nphpmailer/[email protected]:\r\n GHSA-77mr-wc79-m8j3: A validation function with the same name as a built-in validator can be called (https://github.com/advisories/GHSA-77mr-wc79-m8j3)\r\n GHSA-7q44-r25x-wm4q: Remote Code Execution vulnerability in PHPMailer 6.4.1 running on Windows (https://github.com/advisories/GHSA-7q44-r25x-wm4q)\r\n GHSA-f7hx-fqxw-rvvj: Insufficient output escaping of attachment names in PHPMailer (https://github.com/advisories/GHSA-f7hx-fqxw-rvvj)\r\n GHSA-v5c9-mmw9-829q: PHPMailer susceptible to arbitrary code execution (https://github.com/advisories/GHSA-v5c9-mmw9-829q)\r\n\r\ntwig/[email protected]:\r\n GHSA-vxrc-68xx-x48g: Sandbox Information Disclosure (https://github.com/advisories/GHSA-vxrc-68xx-x48g)\r\n\r\nzendesk/[email protected]:\r\n GHSA-q348-f93x-9gx4: Lack of Input Validation in zendesk_api_client_php for Zendesk Subdomain (https://github.com/advisories/GHSA-q348-f93x-9gx4)\r\n\r\nlaravel/[email protected]:\r\n GHSA-3p32-j457-pg5x: Query Binding Exploitation (https://github.com/advisories/GHSA-3p32-j457-pg5x)\r\n GHSA-4mg9-vhxq-vm7j: SQL Server LIMIT / OFFSET SQL Injection in laravel/framework and illuminate/database (https://github.com/advisories/GHSA-4mg9-vhxq-vm7j)\r\n GHSA-66hf-2p6w-jqfw: Laravel Framework XSS in Blade templating engine (https://github.com/advisories/GHSA-66hf-2p6w-jqfw)\r\n GHSA-w2pm-r78h-4m7v: OS Command Injection in Laravel Framework (https://github.com/advisories/GHSA-w2pm-r78h-4m7v)\r\n GHSA-w68r-5p45-5rqp: Improper Input Validation in Laravel (https://github.com/advisories/GHSA-w68r-5p45-5rqp)\r\n GHSA-x7p5-p2c9-phvg: Unexpected database bindings (https://github.com/advisories/GHSA-x7p5-p2c9-phvg)\r\n```\n", "before_files": [{"content": "# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is 
distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Ordered packagist version\"\"\"\nimport re\n\n\nclass PackagistVersion:\n \"\"\"\n Follows the packagist version ordering, which is recommended to be semver, but\n not enforced to be semver. The php standard version comparison code written in\n C is located here: https://github.com/php/php-src/blob/master/ext/standard/versioning.c\n\n The function first replaces _, - and + with a dot . in the version strings and\n also inserts dots . before and after any non number so that for example\n '4.3.2RC1' becomes '4.3.2.RC.1'. Then it compares the parts starting from\n left to right.\n\n If a part contains special version strings these are handled in the following order:\n any string not found in this list < dev < alpha = a < beta = b < RC = rc < # < pl = p.\n This way not only versions with different levels like '4.1' and '4.1.2' can be\n compared but also any PHP specific version containing development state.\n\n ---\n\n ## Known differences:\n The following are some examples of known differences between this python\n implementation and the C implementation of PHP\n\n - In this version, special version strings need to exactly match to not be\n considered \"any other string\", while in the original implementation the\n string only need to start with one of the listed strings.\n \"\"\"\n\n version_str: str\n canonicalized_version: str\n\n def __init__(self, version: str):\n self.version_str = version\n self.canonicalized_version = self.php_canonicalize_version(version)\n\n def __str__(self) -> str:\n return self.version_str\n\n def __hash__(self):\n return self.canonicalized_version\n\n def __eq__(self, other):\n if not isinstance(other, self.__class__):\n return NotImplemented\n return self.canonicalized_version == other.canonicalized_version\n\n def __lt__(self, other):\n return self.__cmp__(other) < 0\n\n def __le__(self, other):\n return self.__cmp__(other) <= 0\n\n def __gt__(self, other):\n return self.__cmp__(other) > 0\n\n def __ge__(self, other):\n return self.__cmp__(other) >= 0\n\n def __cmp__(self, other):\n return self.php_version_compare(self.version_str, other.version_str)\n\n @staticmethod\n def php_slices_compare(a_split: [str], b_split: [str]):\n \"\"\"\n Compare php versions after being split by '.'\n \"\"\"\n for a, b in zip(a_split, b_split):\n if a.isdigit() and b.isdigit():\n compare = int(a) - int(b)\n elif not a.isdigit() and not b.isdigit():\n compare = PackagistVersion.compare_special_versions(a, b)\n elif a.isdigit():\n compare = PackagistVersion.compare_special_versions('#', b)\n else:\n compare = PackagistVersion.compare_special_versions(a, '#')\n\n if compare != 0:\n if compare > 0:\n return 1\n else:\n return -1\n\n if len(a_split) > len(b_split):\n next_char = a_split[len(b_split)]\n if next_char.isdigit():\n return 1\n return PackagistVersion.php_slices_compare(a_split[len(b_split):], ['#'])\n\n if len(a_split) < len(b_split):\n next_char = b_split[len(a_split)]\n if next_char.isdigit():\n return -1\n return PackagistVersion.php_slices_compare(['#'], b_split[len(a_split):])\n\n return 0\n\n @staticmethod\n def php_version_compare(version_a: str, version_b: str) -> int:\n \"\"\"\n Given two php versions, compare which is newer\n\n :return: 1 if a > b, -1 if b > a, 0 if a == b\n \"\"\"\n version_a = 
PackagistVersion.php_canonicalize_version(version_a)\n version_b = PackagistVersion.php_canonicalize_version(version_b)\n\n a_split = version_a.split('.')\n b_split = version_b.split('.')\n return PackagistVersion.php_slices_compare(a_split, b_split)\n\n @staticmethod\n def php_canonicalize_version(version: str) -> str:\n \"\"\"\n Replaces special separators (`-`,`_`,`+`) with `.`, and inserts `.`\n between any digit and non-digit.\n \"\"\"\n replaced = re.sub('[-_+]', '.', version)\n replaced = re.sub(r'([^\\d.])(\\d)', r'\\1.\\2', replaced)\n replaced = re.sub(r'(\\d)([^\\d.])', r'\\1.\\2', replaced)\n return replaced\n\n SPECIAL_CHARACTER_ORDER = {\n \"dev\": 0,\n \"alpha\": 1,\n \"a\": 1,\n \"beta\": 2,\n \"b\": 2,\n \"RC\": 3,\n \"rc\": 3,\n \"#\": 4,\n \"pl\": 5,\n \"p\": 5,\n None: 0,\n }\n\n @staticmethod\n def compare_special_versions(version_part_a: str, version_part_b: str) -> int:\n \"\"\"\n Compares the order of special characters against the order specified in php\n docs.\n\n any string not found in this list < dev < alpha = a < beta = b < RC = rc < # < pl = p.\n\n :return: 1 if a > b, -1 if b > a, 0 if a == b\n \"\"\"\n # This isn't quite the behaviour of the c implementation of php version_compare\n # In php if the part starts with special_chars its enough.\n # ### For example:\n # *PHP implementation:*\n # `1.0.0beta1 = 1.0.0betawithsomeothertext1`\n #\n # *This python implementation*\n # `1.0.0beta1 > 1.0.0betawithsomeothertext1`\n # Because \"any string not found in this list\" should apply to\n # `betawithsomeothertext`\n found_a = PackagistVersion.SPECIAL_CHARACTER_ORDER.get(version_part_a, -1)\n found_b = PackagistVersion.SPECIAL_CHARACTER_ORDER.get(version_part_b, -1)\n\n if found_a > found_b:\n return 1\n if found_a < found_b:\n return -1\n return 0\n", "path": "osv/packagist_version.py"}]} | 3,464 | 163 |
gh_patches_debug_35040 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-1541 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs: use vanilla sphinx RTD theme rather than duplicating it in the repo
Currently, the [sphinx RTD theme](https://github.com/snide/sphinx_rtd_theme) is copied into the docs directory. Rather than duplicating it, it would make sense to include it as a submodule, using the latest release ([0.1.9](https://github.com/snide/sphinx_rtd_theme/releases/tag/0.1.9)).
And of course, keep following new releases in the future.
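A minimal sketch of what the non-vendored setup could look like in `docs/conf.py`, assuming the theme is consumed as the installable `sphinx_rtd_theme` package (whether to pull it in as a git submodule or as a pip dependency is still an open choice here):
```python
# docs/conf.py -- hypothetical excerpt, not the project's current configuration
import sphinx_rtd_theme

html_theme = "sphinx_rtd_theme"
# point Sphinx at the theme shipped with the package instead of a copy in docs/_themes
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
```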
</issue>
<code>
[start of docs/_themes/sphinx_rtd_theme/__init__.py]
1 """Sphinx ReadTheDocs theme.
2
3 From https://github.com/ryan-roemer/sphinx-bootstrap-theme.
4
5 """
6 import os
7
8 VERSION = (0, 1, 5)
9
10 __version__ = ".".join(str(v) for v in VERSION)
11 __version_full__ = __version__
12
13
14 def get_html_theme_path():
15 """Return list of HTML theme paths."""
16 cur_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
17 return cur_dir
18
[end of docs/_themes/sphinx_rtd_theme/__init__.py]
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # MongoEngine documentation build configuration file, created by
4 # sphinx-quickstart on Sun Nov 22 18:14:13 2009.
5 #
6 # This file is execfile()d with the current directory set to its containing dir.
7 #
8 # Note that not all possible configuration values are present in this
9 # autogenerated file.
10 #
11 # All configuration values have a default; values that are commented out
12 # serve to show the default.
13
14 import sys, os
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 sys.path.insert(0, os.path.abspath('..'))
20
21 # -- General configuration -----------------------------------------------------
22
23 # Add any Sphinx extension module names here, as strings. They can be extensions
24 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
25 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo']
26
27 # Add any paths that contain templates here, relative to this directory.
28 templates_path = ['_templates']
29
30 # The suffix of source filenames.
31 source_suffix = '.rst'
32
33 # The encoding of source files.
34 #source_encoding = 'utf-8'
35
36 # The master toctree document.
37 master_doc = 'index'
38
39 # General information about the project.
40 project = u'MongoEngine'
41 copyright = u'2009, MongoEngine Authors'
42
43 # The version info for the project you're documenting, acts as replacement for
44 # |version| and |release|, also used in various other places throughout the
45 # built documents.
46 #
47 import mongoengine
48 # The short X.Y version.
49 version = mongoengine.get_version()
50 # The full version, including alpha/beta/rc tags.
51 release = mongoengine.get_version()
52
53 # The language for content autogenerated by Sphinx. Refer to documentation
54 # for a list of supported languages.
55 #language = None
56
57 # There are two options for replacing |today|: either, you set today to some
58 # non-false value, then it is used:
59 #today = ''
60 # Else, today_fmt is used as the format for a strftime call.
61 #today_fmt = '%B %d, %Y'
62
63 # List of documents that shouldn't be included in the build.
64 #unused_docs = []
65
66 # List of directories, relative to source directory, that shouldn't be searched
67 # for source files.
68 exclude_trees = ['_build']
69
70 # The reST default role (used for this markup: `text`) to use for all documents.
71 #default_role = None
72
73 # If true, '()' will be appended to :func: etc. cross-reference text.
74 #add_function_parentheses = True
75
76 # If true, the current module name will be prepended to all description
77 # unit titles (such as .. function::).
78 #add_module_names = True
79
80 # If true, sectionauthor and moduleauthor directives will be shown in the
81 # output. They are ignored by default.
82 #show_authors = False
83
84 # The name of the Pygments (syntax highlighting) style to use.
85 pygments_style = 'sphinx'
86
87 # A list of ignored prefixes for module index sorting.
88 #modindex_common_prefix = []
89
90
91 # -- Options for HTML output ---------------------------------------------------
92
93 # The theme to use for HTML and HTML Help pages. Major themes that come with
94 # Sphinx are currently 'default' and 'sphinxdoc'.
95 html_theme = 'sphinx_rtd_theme'
96
97 # Theme options are theme-specific and customize the look and feel of a theme
98 # further. For a list of options available for each theme, see the
99 # documentation.
100 #html_theme_options = {}
101
102 # Add any paths that contain custom themes here, relative to this directory.
103 html_theme_path = ['_themes']
104
105 # The name for this set of Sphinx documents. If None, it defaults to
106 # "<project> v<release> documentation".
107 #html_title = None
108
109 # A shorter title for the navigation bar. Default is the same as html_title.
110 #html_short_title = None
111
112 # The name of an image file (relative to this directory) to place at the top
113 # of the sidebar.
114 #html_logo = None
115
116 # The name of an image file (within the static path) to use as favicon of the
117 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
118 # pixels large.
119 html_favicon = "favicon.ico"
120
121 # Add any paths that contain custom static files (such as style sheets) here,
122 # relative to this directory. They are copied after the builtin static files,
123 # so a file named "default.css" will overwrite the builtin "default.css".
124 #html_static_path = ['_static']
125
126 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
127 # using the given strftime format.
128 #html_last_updated_fmt = '%b %d, %Y'
129
130 # If true, SmartyPants will be used to convert quotes and dashes to
131 # typographically correct entities.
132 html_use_smartypants = True
133
134 # Custom sidebar templates, maps document names to template names.
135 html_sidebars = {
136 'index': ['globaltoc.html', 'searchbox.html'],
137 '**': ['localtoc.html', 'relations.html', 'searchbox.html']
138 }
139
140
141 # Additional templates that should be rendered to pages, maps page names to
142 # template names.
143 #html_additional_pages = {}
144
145 # If false, no module index is generated.
146 #html_use_modindex = True
147
148 # If false, no index is generated.
149 #html_use_index = True
150
151 # If true, the index is split into individual pages for each letter.
152 #html_split_index = False
153
154 # If true, links to the reST sources are added to the pages.
155 #html_show_sourcelink = True
156
157 # If true, an OpenSearch description file will be output, and all pages will
158 # contain a <link> tag referring to it. The value of this option must be the
159 # base URL from which the finished HTML is served.
160 #html_use_opensearch = ''
161
162 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
163 #html_file_suffix = ''
164
165 # Output file base name for HTML help builder.
166 htmlhelp_basename = 'MongoEnginedoc'
167
168
169 # -- Options for LaTeX output --------------------------------------------------
170
171 # The paper size ('letter' or 'a4').
172 latex_paper_size = 'a4'
173
174 # The font size ('10pt', '11pt' or '12pt').
175 #latex_font_size = '10pt'
176
177 # Grouping the document tree into LaTeX files. List of tuples
178 # (source start file, target name, title, author, documentclass [howto/manual]).
179 latex_documents = [
180 ('index', 'MongoEngine.tex', 'MongoEngine Documentation',
181 'Ross Lawley', 'manual'),
182 ]
183
184 # The name of an image file (relative to this directory) to place at the top of
185 # the title page.
186 #latex_logo = None
187
188 # For "manual" documents, if this is true, then toplevel headings are parts,
189 # not chapters.
190 #latex_use_parts = False
191
192 # Additional stuff for the LaTeX preamble.
193 #latex_preamble = ''
194
195 # Documents to append as an appendix to all manuals.
196 #latex_appendices = []
197
198 # If false, no module index is generated.
199 #latex_use_modindex = True
200
201 autoclass_content = 'both'
202
203 html_theme_options = dict(
204 canonical_url='http://docs.mongoengine.org/en/latest/'
205 )
206
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/_themes/sphinx_rtd_theme/__init__.py b/docs/_themes/sphinx_rtd_theme/__init__.py
deleted file mode 100755
--- a/docs/_themes/sphinx_rtd_theme/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-"""Sphinx ReadTheDocs theme.
-
-From https://github.com/ryan-roemer/sphinx-bootstrap-theme.
-
-"""
-import os
-
-VERSION = (0, 1, 5)
-
-__version__ = ".".join(str(v) for v in VERSION)
-__version_full__ = __version__
-
-
-def get_html_theme_path():
- """Return list of HTML theme paths."""
- cur_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))
- return cur_dir
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -13,6 +13,10 @@
import sys, os
+import sphinx_rtd_theme
+
+import mongoengine
+
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
@@ -44,7 +48,6 @@
# |version| and |release|, also used in various other places throughout the
# built documents.
#
-import mongoengine
# The short X.Y version.
version = mongoengine.get_version()
# The full version, including alpha/beta/rc tags.
@@ -97,10 +100,12 @@
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
-#html_theme_options = {}
+html_theme_options = {
+ 'canonical_url': 'http://docs.mongoengine.org/en/latest/'
+}
# Add any paths that contain custom themes here, relative to this directory.
-html_theme_path = ['_themes']
+html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
@@ -199,7 +204,3 @@
#latex_use_modindex = True
autoclass_content = 'both'
-
-html_theme_options = dict(
- canonical_url='http://docs.mongoengine.org/en/latest/'
-)
| {"golden_diff": "diff --git a/docs/_themes/sphinx_rtd_theme/__init__.py b/docs/_themes/sphinx_rtd_theme/__init__.py\ndeleted file mode 100755\n--- a/docs/_themes/sphinx_rtd_theme/__init__.py\n+++ /dev/null\n@@ -1,17 +0,0 @@\n-\"\"\"Sphinx ReadTheDocs theme.\n-\n-From https://github.com/ryan-roemer/sphinx-bootstrap-theme.\n-\n-\"\"\"\n-import os\n-\n-VERSION = (0, 1, 5)\n-\n-__version__ = \".\".join(str(v) for v in VERSION)\n-__version_full__ = __version__\n-\n-\n-def get_html_theme_path():\n- \"\"\"Return list of HTML theme paths.\"\"\"\n- cur_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))\n- return cur_dir\ndiff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -13,6 +13,10 @@\n \n import sys, os\n \n+import sphinx_rtd_theme\n+\n+import mongoengine\n+\n # If extensions (or modules to document with autodoc) are in another directory,\n # add these directories to sys.path here. If the directory is relative to the\n # documentation root, use os.path.abspath to make it absolute, like shown here.\n@@ -44,7 +48,6 @@\n # |version| and |release|, also used in various other places throughout the\n # built documents.\n #\n-import mongoengine\n # The short X.Y version.\n version = mongoengine.get_version()\n # The full version, including alpha/beta/rc tags.\n@@ -97,10 +100,12 @@\n # Theme options are theme-specific and customize the look and feel of a theme\n # further. For a list of options available for each theme, see the\n # documentation.\n-#html_theme_options = {}\n+html_theme_options = {\n+ 'canonical_url': 'http://docs.mongoengine.org/en/latest/'\n+}\n \n # Add any paths that contain custom themes here, relative to this directory.\n-html_theme_path = ['_themes']\n+html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n \n # The name for this set of Sphinx documents. If None, it defaults to\n # \"<project> v<release> documentation\".\n@@ -199,7 +204,3 @@\n #latex_use_modindex = True\n \n autoclass_content = 'both'\n-\n-html_theme_options = dict(\n- canonical_url='http://docs.mongoengine.org/en/latest/'\n-)\n", "issue": "Docs: use vanilla sphinx RTD theme rather than duplicating it in the repo\nCurrently, [sphinx RTD theme](https://github.com/snide/sphinx_rtd_theme) is copied in the docs directory. 
Rather than duplicating, it would make sense to include it as a submodule, using latest release ([0.1.9](https://github.com/snide/sphinx_rtd_theme/releases/tag/0.1.9)).\n\nAnd of course, in the future, try to follow new releases.\n\n", "before_files": [{"content": "\"\"\"Sphinx ReadTheDocs theme.\n\nFrom https://github.com/ryan-roemer/sphinx-bootstrap-theme.\n\n\"\"\"\nimport os\n\nVERSION = (0, 1, 5)\n\n__version__ = \".\".join(str(v) for v in VERSION)\n__version_full__ = __version__\n\n\ndef get_html_theme_path():\n \"\"\"Return list of HTML theme paths.\"\"\"\n cur_dir = os.path.abspath(os.path.dirname(os.path.dirname(__file__)))\n return cur_dir\n", "path": "docs/_themes/sphinx_rtd_theme/__init__.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# MongoEngine documentation build configuration file, created by\n# sphinx-quickstart on Sun Nov 22 18:14:13 2009.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys, os\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('..'))\n\n# -- General configuration -----------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'MongoEngine'\ncopyright = u'2009, MongoEngine Authors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\nimport mongoengine\n# The short X.Y version.\nversion = mongoengine.get_version()\n# The full version, including alpha/beta/rc tags.\nrelease = mongoengine.get_version()\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of documents that shouldn't be included in the build.\n#unused_docs = []\n\n# List of directories, relative to source directory, that shouldn't be searched\n# for source files.\nexclude_trees = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. Major themes that come with\n# Sphinx are currently 'default' and 'sphinxdoc'.\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\nhtml_theme_path = ['_themes']\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = \"favicon.ico\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n#html_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\nhtml_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\nhtml_sidebars = {\n 'index': ['globaltoc.html', 'searchbox.html'],\n '**': ['localtoc.html', 'relations.html', 'searchbox.html']\n}\n\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_use_modindex = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# If nonempty, this is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = ''\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'MongoEnginedoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\n# The paper size ('letter' or 'a4').\nlatex_paper_size = 'a4'\n\n# The font size ('10pt', '11pt' or '12pt').\n#latex_font_size = '10pt'\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'MongoEngine.tex', 'MongoEngine Documentation',\n 'Ross Lawley', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# Additional stuff for the LaTeX preamble.\n#latex_preamble = ''\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_use_modindex = True\n\nautoclass_content = 'both'\n\nhtml_theme_options = dict(\n canonical_url='http://docs.mongoengine.org/en/latest/'\n)\n", "path": "docs/conf.py"}]} | 2,968 | 562 |
gh_patches_debug_4532 | rasdani/github-patches | git_diff | huggingface__dataset-viewer-2789 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Truncate all the logs
We sometimes have very big logs (one row > 5MB). That's not useful at all and triggers warnings from infra. When we set up the logging configuration, we could try to enforce a maximum message length.
https://github.com/huggingface/dataset-viewer/blob/95527c2f1f0b8f077ed9ec74d3c75e45dbc1d00a/libs/libcommon/src/libcommon/log.py#L7-L9
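A minimal sketch of one way to do that with the stdlib alone: %-style logging format strings accept a precision, which truncates the interpolated field (the 5000-character cap below is an assumption, not a decided value):
```python
import logging

# ".5000" truncates the rendered message to at most 5000 characters
logging.basicConfig(
    level=logging.INFO,
    format="%(levelname)s: %(asctime)s - %(name)s - %(message).5000s",
)
logging.getLogger(__name__).info("x" * 10_000_000)  # the emitted line stays bounded
```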
</issue>
<code>
[start of libs/libcommon/src/libcommon/log.py]
1 # SPDX-License-Identifier: Apache-2.0
2 # Copyright 2022 The HuggingFace Authors.
3
4 import logging
5
6
7 def init_logging(level: int = logging.INFO) -> None:
8 logging.basicConfig(level=level, format="%(levelname)s: %(asctime)s - %(name)s - %(message)s")
9 logging.debug(f"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}")
10
[end of libs/libcommon/src/libcommon/log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libs/libcommon/src/libcommon/log.py b/libs/libcommon/src/libcommon/log.py
--- a/libs/libcommon/src/libcommon/log.py
+++ b/libs/libcommon/src/libcommon/log.py
@@ -5,5 +5,5 @@
def init_logging(level: int = logging.INFO) -> None:
- logging.basicConfig(level=level, format="%(levelname)s: %(asctime)s - %(name)s - %(message)s")
+ logging.basicConfig(level=level, format="%(levelname)s: %(asctime)s - %(name)s - %(message).5000s")
logging.debug(f"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}")
| {"golden_diff": "diff --git a/libs/libcommon/src/libcommon/log.py b/libs/libcommon/src/libcommon/log.py\n--- a/libs/libcommon/src/libcommon/log.py\n+++ b/libs/libcommon/src/libcommon/log.py\n@@ -5,5 +5,5 @@\n \n \n def init_logging(level: int = logging.INFO) -> None:\n- logging.basicConfig(level=level, format=\"%(levelname)s: %(asctime)s - %(name)s - %(message)s\")\n+ logging.basicConfig(level=level, format=\"%(levelname)s: %(asctime)s - %(name)s - %(message).5000s\")\n logging.debug(f\"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}\")\n", "issue": "Truncate all the logs\nWe sometimes have very big logs (one row > 5MB). It's not useful at all and triggers warnings from infra. When we setup the logs configuration, we could try to set a maximum length\r\n\r\nhttps://github.com/huggingface/dataset-viewer/blob/95527c2f1f0b8f077ed9ec74d3c75e45dbc1d00a/libs/libcommon/src/libcommon/log.py#L7-L9\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n# Copyright 2022 The HuggingFace Authors.\n\nimport logging\n\n\ndef init_logging(level: int = logging.INFO) -> None:\n logging.basicConfig(level=level, format=\"%(levelname)s: %(asctime)s - %(name)s - %(message)s\")\n logging.debug(f\"Log level set to: {logging.getLevelName(logging.getLogger().getEffectiveLevel())}\")\n", "path": "libs/libcommon/src/libcommon/log.py"}]} | 756 | 148 |
gh_patches_debug_23578 | rasdani/github-patches | git_diff | Flexget__Flexget-2271 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kodi API has been changed in v18 (Leia) such that HTTP POST is required
<!---
Before opening an issue, verify:
- Is this a feature request? Post it on https://feathub.com/Flexget/Flexget
- Did you recently upgrade? Look at the Change Log and Upgrade Actions to make sure that you don't need to make any changes to your config https://flexget.com/ChangeLog https://flexget.com/UpgradeActions
- Are you running FlexGet as a daemon? Stop it completely and then start it again https://flexget.com/CLI/daemon
- Did you search to see if the issue already exists? https://github.com/Flexget/Flexget/issues
- Did you fill out the issue template as completely as possible?
The issue template is here because it helps to ensure you submitted all the necessary information the first time, and allows us to more quickly review issues. Please fill it out correctly and do not ignore it, no matter how irrelevant you think it may be. Thanks in advance for your help with this!
--->
### Expected behaviour:
<!---
Please don't just say "it doesn't crash" or "it works". Explain what the expected result is.
--->
Updates should work
### Actual behaviour:
Error message: `JSONRPC failed. Error -32099: Bad client permission`
### Steps to reproduce:
- Step 1: Call a kodi library scan from a task
#### Config:
```
kodi_library:
action: scan
category: video
url: http://192.168.1.214
port: 80
```
### Details
The Kodi API has changed in v18 (Leia) and up. In the old API, all requests were HTTP GET (even API calls that update/mutate state). They've finally updated the API to require HTTP POST for updates, but they've completely failed to bump the API version or even provide sensible error messages.
https://forum.kodi.tv/showthread.php?tid=324598
https://discuss.flexget.com/t/kodi-plugin-not-working-on-kodi-18/4196
**NOTE**: I no longer use Kodi, so I'm simply creating an issue based on a forum post to keep track of the issue in case other users begin to experience it.
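For reference, a minimal sketch of what a v18+ JSON-RPC call looks like when the request is sent as a POST body rather than a `request=` query parameter; the host, port and credentials are placeholders, and `VideoLibrary.Scan` is simply the method the plugin builds for `category: video` / `action: scan`:
```python
import requests

payload = {"jsonrpc": "2.0", "id": 1, "method": "VideoLibrary.Scan"}
resp = requests.post(
    "http://kodi-host:8080/jsonrpc",   # hypothetical host/port
    json=payload,                      # JSON body instead of a GET query string
    auth=("user", "password"),         # hypothetical credentials
)
print(resp.json())
```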
</issue>
<code>
[start of flexget/plugins/services/kodi_library.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # noqa pylint: disable=unused-import, redefined-builtin
3
4 import logging
5 import json
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils.requests import RequestException
10
11 log = logging.getLogger('kodi_library')
12
13 JSON_URI = '/jsonrpc'
14
15
16 class KodiLibrary(object):
17 schema = {
18 'type': 'object',
19 'properties': {
20 'action': {'type': 'string', 'enum': ['clean', 'scan']},
21 'category': {'type': 'string', 'enum': ['audio', 'video']},
22 'url': {'type': 'string', 'format': 'url'},
23 'port': {'type': 'integer', 'default': 8080},
24 'username': {'type': 'string'},
25 'password': {'type': 'string'},
26 'only_on_accepted': {'type': 'boolean', 'default': True}
27 },
28 'required': ['url', 'action', 'category'],
29 'additionalProperties': False,
30 }
31
32 @plugin.priority(-255)
33 def on_task_exit(self, task, config):
34 if task.accepted or not config['only_on_accepted']:
35 # make the url without trailing slash
36 base_url = config['url'][:-1] if config['url'].endswith('/') else config['url']
37 base_url += ':{0}'.format(config['port'])
38
39 url = base_url + JSON_URI
40 # create the params
41 json_params = {"id": 1, "jsonrpc": "2.0",
42 'method': '{category}Library.{action}'.format(category=config['category'].title(),
43 action=config['action'].title())}
44 params = {'request': json.dumps(json_params)}
45 log.debug('Sending request params %s', params)
46
47 try:
48 r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()
49 if r.get('result') == 'OK':
50 log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])
51 else:
52 if r.get('error'):
53 log.error('Kodi JSONRPC failed. Error %s: %s', r['error']['code'], r['error']['message'])
54 else:
55 # this should never happen as Kodi say they follow the JSON-RPC 2.0 spec
56 log.debug('Received error response %s', json.dumps(r))
57 log.error('Kodi JSONRPC failed with unrecognized message: %s', json.dumps(r))
58 except RequestException as e:
59 raise plugin.PluginError('Failed to send request to Kodi: %s' % e.args[0])
60 else:
61 log.info('No entries were accepted. No request is sent.')
62
63
64 @event('plugin.register')
65 def register_plugin():
66 plugin.register(KodiLibrary, 'kodi_library', api_ver=2)
67
[end of flexget/plugins/services/kodi_library.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/services/kodi_library.py b/flexget/plugins/services/kodi_library.py
--- a/flexget/plugins/services/kodi_library.py
+++ b/flexget/plugins/services/kodi_library.py
@@ -38,14 +38,13 @@
url = base_url + JSON_URI
# create the params
- json_params = {"id": 1, "jsonrpc": "2.0",
- 'method': '{category}Library.{action}'.format(category=config['category'].title(),
- action=config['action'].title())}
- params = {'request': json.dumps(json_params)}
+ params = {"id": 1, "jsonrpc": "2.0",
+ 'method': '{category}Library.{action}'.format(category=config['category'].title(),
+ action=config['action'].title())}
log.debug('Sending request params %s', params)
try:
- r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()
+ r = task.requests.post(url, json=params, auth=(config.get('username'), config.get('password'))).json()
if r.get('result') == 'OK':
log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])
else:
| {"golden_diff": "diff --git a/flexget/plugins/services/kodi_library.py b/flexget/plugins/services/kodi_library.py\n--- a/flexget/plugins/services/kodi_library.py\n+++ b/flexget/plugins/services/kodi_library.py\n@@ -38,14 +38,13 @@\n \n url = base_url + JSON_URI\n # create the params\n- json_params = {\"id\": 1, \"jsonrpc\": \"2.0\",\n- 'method': '{category}Library.{action}'.format(category=config['category'].title(),\n- action=config['action'].title())}\n- params = {'request': json.dumps(json_params)}\n+ params = {\"id\": 1, \"jsonrpc\": \"2.0\",\n+ 'method': '{category}Library.{action}'.format(category=config['category'].title(),\n+ action=config['action'].title())}\n log.debug('Sending request params %s', params)\n \n try:\n- r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()\n+ r = task.requests.post(url, json=params, auth=(config.get('username'), config.get('password'))).json()\n if r.get('result') == 'OK':\n log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])\n else:\n", "issue": "Kodi API has been changed in v18 (Leia) such that HTTP POST is required\n<!---\r\nBefore opening an issue, verify:\r\n\r\n- Is this a feature request? Post it on https://feathub.com/Flexget/Flexget\r\n- Did you recently upgrade? Look at the Change Log and Upgrade Actions to make sure that you don't need to make any changes to your config https://flexget.com/ChangeLog https://flexget.com/UpgradeActions\r\n- Are you running FlexGet as a daemon? Stop it completely and then start it again https://flexget.com/CLI/daemon\r\n- Did you search to see if the issue already exists? https://github.com/Flexget/Flexget/issues\r\n- Did you fill out the issue template as completely as possible?\r\n\r\nThe issue template is here because it helps to ensure you submitted all the necessary information the first time, and allows us to more quickly review issues. Please fill it out correctly and do not ignore it, no matter how irrelevant you think it may be. Thanks in advance for your help with this!\r\n--->\r\n### Expected behaviour:\r\n<!---\r\nPlease don't just say \"it doesn't crash\" or \"it works\". Explain what the expected result is.\r\n--->\r\nUpdates should work\r\n### Actual behaviour:\r\nError message: `JSONRPC failed. Error -32099: Bad client permission`\r\n### Steps to reproduce:\r\n- Step 1: Call a kodi library scan from a task\r\n\r\n#### Config:\r\n```\r\nkodi_library:\r\n action: scan\r\n category: video\r\n url: http://192.168.1.214\r\n port: 80\r\n```\r\n\r\n### Details\r\nThe kodi API has been changed in v18 Leia and up. In the old API, all requests were HTTP GET (even API calls that update/mutate state). 
They've finally updated the API to require HTTP POST for updates, but they've completely failed to update the API version or even provide sensible error messages.\r\n\r\nhttps://forum.kodi.tv/showthread.php?tid=324598\r\nhttps://discuss.flexget.com/t/kodi-plugin-not-working-on-kodi-18/4196\r\n\r\n**NOTE**: I no longer use Kodi, so I'm simply creating an issue based on a forum post to keep track of the issue in case other users begin to experience it.\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # noqa pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport json\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.requests import RequestException\n\nlog = logging.getLogger('kodi_library')\n\nJSON_URI = '/jsonrpc'\n\n\nclass KodiLibrary(object):\n schema = {\n 'type': 'object',\n 'properties': {\n 'action': {'type': 'string', 'enum': ['clean', 'scan']},\n 'category': {'type': 'string', 'enum': ['audio', 'video']},\n 'url': {'type': 'string', 'format': 'url'},\n 'port': {'type': 'integer', 'default': 8080},\n 'username': {'type': 'string'},\n 'password': {'type': 'string'},\n 'only_on_accepted': {'type': 'boolean', 'default': True}\n },\n 'required': ['url', 'action', 'category'],\n 'additionalProperties': False,\n }\n\n @plugin.priority(-255)\n def on_task_exit(self, task, config):\n if task.accepted or not config['only_on_accepted']:\n # make the url without trailing slash\n base_url = config['url'][:-1] if config['url'].endswith('/') else config['url']\n base_url += ':{0}'.format(config['port'])\n\n url = base_url + JSON_URI\n # create the params\n json_params = {\"id\": 1, \"jsonrpc\": \"2.0\",\n 'method': '{category}Library.{action}'.format(category=config['category'].title(),\n action=config['action'].title())}\n params = {'request': json.dumps(json_params)}\n log.debug('Sending request params %s', params)\n\n try:\n r = task.requests.get(url, params=params, auth=(config.get('username'), config.get('password'))).json()\n if r.get('result') == 'OK':\n log.info('Successfully sent a %s request for the %s library', config['action'], config['category'])\n else:\n if r.get('error'):\n log.error('Kodi JSONRPC failed. Error %s: %s', r['error']['code'], r['error']['message'])\n else:\n # this should never happen as Kodi say they follow the JSON-RPC 2.0 spec\n log.debug('Received error response %s', json.dumps(r))\n log.error('Kodi JSONRPC failed with unrecognized message: %s', json.dumps(r))\n except RequestException as e:\n raise plugin.PluginError('Failed to send request to Kodi: %s' % e.args[0])\n else:\n log.info('No entries were accepted. No request is sent.')\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(KodiLibrary, 'kodi_library', api_ver=2)\n", "path": "flexget/plugins/services/kodi_library.py"}]} | 1,810 | 297 |
gh_patches_debug_4894 | rasdani/github-patches | git_diff | streamlink__streamlink-5369 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.pluzz: La 1ere subdomain from francetvinfo.fr is not supported
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
VODs and the 24/7 live stream from the La 1ere francetvinfo.fr subdomain (https://la1ere.francetvinfo.fr) are not supported by the Pluzz plugin. Finding the original .m3u8 works for the live stream, but the link changes at least every 2 hours.
First log, with a VOD: https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html
Second log, with the 24/7 live stream: https://la1ere.francetvinfo.fr/info-en-continu-24-24 (an HTTP Referer header is needed to play it)
A French IP is needed to play both.
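For context, the change that eventually landed (see the diff after this record) relaxes the player-element lookup from an exact `@class='magneto'` match to `contains(@class,'magneto')`. A small sketch of why that matters if the La 1ere pages put extra classes on the element (an assumption based on that change; the markup below is made up):
```python
from lxml import etree

doc = etree.HTML('<div class="magneto magneto--ratio169" data-id="abc-123"></div>')
print(doc.xpath(".//*[@data-id][@class='magneto']"))            # [] -- exact match misses it
print(doc.xpath(".//*[@data-id][contains(@class,'magneto')]"))  # matches the element
```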
### Debug log
```text
streamlink --loglevel=debug https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html best
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.11.3
[cli][debug] Streamlink: 5.5.1
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.5.7
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.2
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.17
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.29.0
[cli][debug] urllib3: 1.26.15
[cli][debug] websocket-client: 1.5.1
[cli][debug] Arguments:
[cli][debug] url=https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --player="D:\Program Files\VideoLAN\VLC\vlc.exe" --file-caching=5000
[cli][debug] --ffmpeg-ffmpeg=C:\Users\Orion\AppData\Local\Programs\Streamlink\ffmpeg\ffmpeg.exe
[cli][info] Found matching plugin pluzz for URL https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html
[plugins.pluzz][debug] Country: FR
error: No playable streams found on this URL: https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html
streamlink --loglevel=debug --http-header "Referer= https://la1ere.francetvinfo.fr" https://la1ere.francetvinfo.fr/info-en-continu-24-24 best
[cli][debug] OS: Windows 10
[cli][debug] Python: 3.11.3
[cli][debug] Streamlink: 5.5.1
[cli][debug] Dependencies:
[cli][debug] certifi: 2023.5.7
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.2
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.17
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.29.0
[cli][debug] urllib3: 1.26.15
[cli][debug] websocket-client: 1.5.1
[cli][debug] Arguments:
[cli][debug] url=https://la1ere.francetvinfo.fr/info-en-continu-24-24
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][debug] --player="D:\Program Files\VideoLAN\VLC\vlc.exe" --file-caching=5000
[cli][debug] --ffmpeg-ffmpeg=C:\Users\Orion\AppData\Local\Programs\Streamlink\ffmpeg\ffmpeg.exe
[cli][debug] --http-header=[('Referer', 'https://la1ere.francetvinfo.fr')]
[cli][info] Found matching plugin pluzz for URL https://la1ere.francetvinfo.fr/info-en-continu-24-24
[plugins.pluzz][debug] Country: FR
error: No playable streams found on this URL: https://la1ere.francetvinfo.fr/info-en-continu-24-24
```
</issue>
<code>
[start of src/streamlink/plugins/pluzz.py]
1 """
2 $description Live TV channels and video on-demand service from france.tv, a French public, state-owned broadcaster.
3 $url france.tv
4 $url francetvinfo.fr
5 $type live, vod
6 $region France, Andorra, Monaco
7 """
8
9 import logging
10 import re
11 from urllib.parse import urlparse
12
13 from streamlink.plugin import Plugin, PluginError, pluginmatcher
14 from streamlink.plugin.api import useragents, validate
15 from streamlink.stream.dash import DASHStream
16 from streamlink.stream.hls import HLSStream
17 from streamlink.utils.times import localnow
18 from streamlink.utils.url import update_qsd
19
20
21 log = logging.getLogger(__name__)
22
23
24 @pluginmatcher(re.compile(r"""
25 https?://(?:
26 (?:www\.)?france\.tv/
27 |
28 (?:.+\.)?francetvinfo\.fr/
29 )
30 """, re.VERBOSE))
31 class Pluzz(Plugin):
32 PLAYER_VERSION = "5.51.35"
33 GEO_URL = "https://geoftv-a.akamaihd.net/ws/edgescape.json"
34 API_URL = "https://player.webservices.francetelevisions.fr/v1/videos/{video_id}"
35
36 def _get_streams(self):
37 self.session.http.headers.update({
38 "User-Agent": useragents.CHROME,
39 })
40 CHROME_VERSION = re.compile(r"Chrome/(\d+)").search(useragents.CHROME).group(1)
41
42 # Retrieve geolocation data
43 country_code = self.session.http.get(self.GEO_URL, schema=validate.Schema(
44 validate.parse_json(),
45 {"reponse": {"geo_info": {
46 "country_code": str,
47 }}},
48 validate.get(("reponse", "geo_info", "country_code")),
49 ))
50 log.debug(f"Country: {country_code}")
51
52 # Retrieve URL page and search for video ID
53 video_id = None
54 try:
55 video_id = self.session.http.get(self.url, schema=validate.Schema(
56 validate.parse_html(),
57 validate.any(
58 validate.all(
59 validate.xml_xpath_string(".//script[contains(text(),'window.FTVPlayerVideos')][1]/text()"),
60 str,
61 validate.regex(re.compile(
62 r"window\.FTVPlayerVideos\s*=\s*(?P<json>\[{.+?}])\s*;\s*(?:$|var)",
63 re.DOTALL,
64 )),
65 validate.get("json"),
66 validate.parse_json(),
67 [{"videoId": str}],
68 validate.get((0, "videoId")),
69 ),
70 validate.all(
71 validate.xml_xpath_string(".//script[contains(text(),'new Magnetoscope')][1]/text()"),
72 str,
73 validate.regex(re.compile(
74 r"""player\.load\s*\(\s*{\s*src\s*:\s*(?P<q>['"])(?P<video_id>.+?)(?P=q)\s*}\s*\)\s*;""",
75 )),
76 validate.get("video_id"),
77 ),
78 validate.all(
79 validate.xml_xpath_string(".//*[@id][contains(@class,'francetv-player-wrapper')][1]/@id"),
80 str,
81 ),
82 validate.all(
83 validate.xml_xpath_string(".//*[@data-id][@class='magneto'][1]/@data-id"),
84 str,
85 ),
86 ),
87 ))
88 except PluginError:
89 pass
90 if not video_id:
91 return
92 log.debug(f"Video ID: {video_id}")
93
94 api_url = update_qsd(self.API_URL.format(video_id=video_id), {
95 "country_code": country_code,
96 "w": 1920,
97 "h": 1080,
98 "player_version": self.PLAYER_VERSION,
99 "domain": urlparse(self.url).netloc,
100 "device_type": "mobile",
101 "browser": "chrome",
102 "browser_version": CHROME_VERSION,
103 "os": "ios",
104 "gmt": localnow().strftime("%z"),
105 })
106 video_format, token_url, url, self.title = self.session.http.get(api_url, schema=validate.Schema(
107 validate.parse_json(),
108 {
109 "video": {
110 "workflow": validate.any("token-akamai", "dai"),
111 "format": validate.any("dash", "hls"),
112 "token": validate.url(),
113 "url": validate.url(),
114 },
115 "meta": {
116 "title": str,
117 },
118 },
119 validate.union_get(
120 ("video", "format"),
121 ("video", "token"),
122 ("video", "url"),
123 ("meta", "title"),
124 ),
125 ))
126
127 data_url = update_qsd(token_url, {
128 "url": url,
129 })
130 video_url = self.session.http.get(data_url, schema=validate.Schema(
131 validate.parse_json(),
132 {"url": validate.url()},
133 validate.get("url"),
134 ))
135
136 if video_format == "dash":
137 yield from DASHStream.parse_manifest(self.session, video_url).items()
138 elif video_format == "hls":
139 yield from HLSStream.parse_variant_playlist(self.session, video_url).items()
140
141
142 __plugin__ = Pluzz
143
[end of src/streamlink/plugins/pluzz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/pluzz.py b/src/streamlink/plugins/pluzz.py
--- a/src/streamlink/plugins/pluzz.py
+++ b/src/streamlink/plugins/pluzz.py
@@ -80,7 +80,7 @@
str,
),
validate.all(
- validate.xml_xpath_string(".//*[@data-id][@class='magneto'][1]/@data-id"),
+ validate.xml_xpath_string(".//*[@data-id][contains(@class,'magneto')][1]/@data-id"),
str,
),
),
| {"golden_diff": "diff --git a/src/streamlink/plugins/pluzz.py b/src/streamlink/plugins/pluzz.py\n--- a/src/streamlink/plugins/pluzz.py\n+++ b/src/streamlink/plugins/pluzz.py\n@@ -80,7 +80,7 @@\n str,\n ),\n validate.all(\n- validate.xml_xpath_string(\".//*[@data-id][@class='magneto'][1]/@data-id\"),\n+ validate.xml_xpath_string(\".//*[@data-id][contains(@class,'magneto')][1]/@data-id\"),\n str,\n ),\n ),\n", "issue": "plugins.pluzz: La 1ere subdomain from francetvinfo.fr is not supported\n### Checklist\r\n\r\n- [X] This is a plugin issue and not a different kind of issue\r\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\r\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\r\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\r\n\r\n### Streamlink version\r\n\r\nLatest stable release\r\n\r\n### Description\r\n\r\nVODs and 24/7 live from the La 1ere francetvinfo.fr subdomain (https://la1ere.francetvinfo.fr) are not supported by the Pluzz plugin. Find the original .m3u8 works for the live, but the link changes every (at least) 2 hours.\r\n\r\nFirst log with VOD : https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html\r\n\r\nSecond log with live 24/7 : https://la1ere.francetvinfo.fr/info-en-continu-24-24 (HTTP referer needed for playing it)\r\n\r\nFrench IP needed for playing both.\r\n\r\n### Debug log\r\n\r\n```text\r\nstreamlink --loglevel=debug https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html best\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.11.3\r\n[cli][debug] Streamlink: 5.5.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.5.7\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.17\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.29.0\r\n[cli][debug] urllib3: 1.26.15\r\n[cli][debug] websocket-client: 1.5.1\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --player=\"D:\\Program Files\\VideoLAN\\VLC\\vlc.exe\" --file-caching=5000\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Users\\Orion\\AppData\\Local\\Programs\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][info] Found matching plugin pluzz for URL https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html\r\n[plugins.pluzz][debug] Country: FR\r\nerror: No playable streams found on this URL: https://la1ere.francetvinfo.fr/programme-video/la1ere_mitterrand-et-l-outre-mer/diffusion/2976069-espoir-et-promesses.html\r\n\r\nstreamlink --loglevel=debug --http-header \"Referer= https://la1ere.francetvinfo.fr\" https://la1ere.francetvinfo.fr/info-en-continu-24-24 best\r\n[cli][debug] OS: Windows 10\r\n[cli][debug] Python: 3.11.3\r\n[cli][debug] Streamlink: 5.5.1\r\n[cli][debug] Dependencies:\r\n[cli][debug] certifi: 2023.5.7\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.2\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.17\r\n[cli][debug] PySocks: 
1.7.1\r\n[cli][debug] requests: 2.29.0\r\n[cli][debug] urllib3: 1.26.15\r\n[cli][debug] websocket-client: 1.5.1\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://la1ere.francetvinfo.fr/info-en-continu-24-24\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][debug] --player=\"D:\\Program Files\\VideoLAN\\VLC\\vlc.exe\" --file-caching=5000\r\n[cli][debug] --ffmpeg-ffmpeg=C:\\Users\\Orion\\AppData\\Local\\Programs\\Streamlink\\ffmpeg\\ffmpeg.exe\r\n[cli][debug] --http-header=[('Referer', 'https://la1ere.francetvinfo.fr')]\r\n[cli][info] Found matching plugin pluzz for URL https://la1ere.francetvinfo.fr/info-en-continu-24-24\r\n[plugins.pluzz][debug] Country: FR\r\nerror: No playable streams found on this URL: https://la1ere.francetvinfo.fr/info-en-continu-24-24\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\n$description Live TV channels and video on-demand service from france.tv, a French public, state-owned broadcaster.\n$url france.tv\n$url francetvinfo.fr\n$type live, vod\n$region France, Andorra, Monaco\n\"\"\"\n\nimport logging\nimport re\nfrom urllib.parse import urlparse\n\nfrom streamlink.plugin import Plugin, PluginError, pluginmatcher\nfrom streamlink.plugin.api import useragents, validate\nfrom streamlink.stream.dash import DASHStream\nfrom streamlink.stream.hls import HLSStream\nfrom streamlink.utils.times import localnow\nfrom streamlink.utils.url import update_qsd\n\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(r\"\"\"\n https?://(?:\n (?:www\\.)?france\\.tv/\n |\n (?:.+\\.)?francetvinfo\\.fr/\n )\n\"\"\", re.VERBOSE))\nclass Pluzz(Plugin):\n PLAYER_VERSION = \"5.51.35\"\n GEO_URL = \"https://geoftv-a.akamaihd.net/ws/edgescape.json\"\n API_URL = \"https://player.webservices.francetelevisions.fr/v1/videos/{video_id}\"\n\n def _get_streams(self):\n self.session.http.headers.update({\n \"User-Agent\": useragents.CHROME,\n })\n CHROME_VERSION = re.compile(r\"Chrome/(\\d+)\").search(useragents.CHROME).group(1)\n\n # Retrieve geolocation data\n country_code = self.session.http.get(self.GEO_URL, schema=validate.Schema(\n validate.parse_json(),\n {\"reponse\": {\"geo_info\": {\n \"country_code\": str,\n }}},\n validate.get((\"reponse\", \"geo_info\", \"country_code\")),\n ))\n log.debug(f\"Country: {country_code}\")\n\n # Retrieve URL page and search for video ID\n video_id = None\n try:\n video_id = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.any(\n validate.all(\n validate.xml_xpath_string(\".//script[contains(text(),'window.FTVPlayerVideos')][1]/text()\"),\n str,\n validate.regex(re.compile(\n r\"window\\.FTVPlayerVideos\\s*=\\s*(?P<json>\\[{.+?}])\\s*;\\s*(?:$|var)\",\n re.DOTALL,\n )),\n validate.get(\"json\"),\n validate.parse_json(),\n [{\"videoId\": str}],\n validate.get((0, \"videoId\")),\n ),\n validate.all(\n validate.xml_xpath_string(\".//script[contains(text(),'new Magnetoscope')][1]/text()\"),\n str,\n validate.regex(re.compile(\n r\"\"\"player\\.load\\s*\\(\\s*{\\s*src\\s*:\\s*(?P<q>['\"])(?P<video_id>.+?)(?P=q)\\s*}\\s*\\)\\s*;\"\"\",\n )),\n validate.get(\"video_id\"),\n ),\n validate.all(\n validate.xml_xpath_string(\".//*[@id][contains(@class,'francetv-player-wrapper')][1]/@id\"),\n str,\n ),\n validate.all(\n validate.xml_xpath_string(\".//*[@data-id][@class='magneto'][1]/@data-id\"),\n str,\n ),\n ),\n ))\n except PluginError:\n pass\n if not video_id:\n return\n log.debug(f\"Video ID: {video_id}\")\n\n api_url = update_qsd(self.API_URL.format(video_id=video_id), 
{\n \"country_code\": country_code,\n \"w\": 1920,\n \"h\": 1080,\n \"player_version\": self.PLAYER_VERSION,\n \"domain\": urlparse(self.url).netloc,\n \"device_type\": \"mobile\",\n \"browser\": \"chrome\",\n \"browser_version\": CHROME_VERSION,\n \"os\": \"ios\",\n \"gmt\": localnow().strftime(\"%z\"),\n })\n video_format, token_url, url, self.title = self.session.http.get(api_url, schema=validate.Schema(\n validate.parse_json(),\n {\n \"video\": {\n \"workflow\": validate.any(\"token-akamai\", \"dai\"),\n \"format\": validate.any(\"dash\", \"hls\"),\n \"token\": validate.url(),\n \"url\": validate.url(),\n },\n \"meta\": {\n \"title\": str,\n },\n },\n validate.union_get(\n (\"video\", \"format\"),\n (\"video\", \"token\"),\n (\"video\", \"url\"),\n (\"meta\", \"title\"),\n ),\n ))\n\n data_url = update_qsd(token_url, {\n \"url\": url,\n })\n video_url = self.session.http.get(data_url, schema=validate.Schema(\n validate.parse_json(),\n {\"url\": validate.url()},\n validate.get(\"url\"),\n ))\n\n if video_format == \"dash\":\n yield from DASHStream.parse_manifest(self.session, video_url).items()\n elif video_format == \"hls\":\n yield from HLSStream.parse_variant_playlist(self.session, video_url).items()\n\n\n__plugin__ = Pluzz\n", "path": "src/streamlink/plugins/pluzz.py"}]} | 3,321 | 118 |
gh_patches_debug_12658 | rasdani/github-patches | git_diff | pytorch__text-112 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
python2 iterator bug?
here is the snippet:
```
from __future__ import print_function
from torchtext.datasets import WikiText2
i_train, i_dev, i_test = WikiText2.iters(device='-1', root="data/")
for i in i_train:
print(i)
```
here's the error:
```
Traceback (most recent call last):
File "bug.py", line 5, in <module>
for i in i_train:
File "/u/bosctom/.local/lib/python2.7/site-packages/torchtext/data/iterator.py", line 218, in __iter__
self.batch_size - len(text)))
TypeError: can't multiply sequence by non-int of type 'float'
```
The bug is related to python2's math.ceil() returning a float?
</issue>
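A minimal standalone sketch of the Python 2 behaviour described in the issue above (the variable names are illustrative and nothing here is taken from the torchtext code base): under Python 2, `math.ceil()` returns a float, and multiplying a list by a float raises exactly the `TypeError` shown in the traceback; casting the result to `int` gives an integer repeat count on both Python 2 and 3.

```python
import math

pad_token = '<pad>'
text = list('abcde')   # stand-in for the tokenised corpus
batch_size = 2

# Python 2: math.ceil(...) is a float, so `[pad_token] * <float>` raises
# "TypeError: can't multiply sequence by non-int of type 'float'".
# Wrapping it in int() yields an integer repeat count under both Python 2 and 3.
padding = int(math.ceil(len(text) / float(batch_size))) * batch_size - len(text)
text = text + [pad_token] * padding
print(text)  # ['a', 'b', 'c', 'd', 'e', '<pad>']
```

This mirrors the small fix in the golden diff further down, which wraps the existing `math.ceil(...)` expression in `int()`.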
<code>
[start of torchtext/data/iterator.py]
1 from __future__ import division
2
3 import math
4 import random
5 from contextlib import contextmanager
6 from copy import deepcopy
7
8 from .batch import Batch
9 from .dataset import Dataset
10
11
12 class RandomShuffler(object):
13 """Use random functions while keeping track of the random state to make it
14 reproducible and deterministic."""
15
16 def __init__(self, random_state=None):
17 self._random_state = random_state
18 if self._random_state is None:
19 self._random_state = random.getstate()
20
21 @contextmanager
22 def use_internal_state(self):
23 """Use a specific RNG state."""
24 old_state = random.getstate()
25 random.setstate(self._random_state)
26 yield
27 self._random_state = random.getstate()
28 random.setstate(old_state)
29
30 @property
31 def random_state(self):
32 return deepcopy(self._random_state)
33
34 @random_state.setter
35 def random_state(self, s):
36 self._random_state = s
37
38 def __call__(self, data):
39 """Shuffle and return a new list."""
40 with self.use_internal_state():
41 return random.sample(data, len(data))
42
43
44 class Iterator(object):
45 """Defines an iterator that loads batches of data from a Dataset.
46
47 Attributes:
48 dataset: The Dataset object to load Examples from.
49 batch_size: Batch size.
50 batch_size_fn: Function of three arguments (new example to add, current
51 count of examples in the batch, and current effective batch size)
52 that returns the new effective batch size resulting from adding
53 that example to a batch. This is useful for dynamic batching, where
54 this function would add to the current effective batch size the
55 number of tokens in the new example.
56 sort_key: A key to use for sorting examples in order to batch together
57 examples with similar lengths and minimize padding. The sort_key
58 provided to the Iterator constructor overrides the sort_key
59 attribute of the Dataset, or defers to it if None.
60 train: Whether the iterator represents a train set.
61 repeat: Whether to repeat the iterator for multiple epochs.
62 shuffle: Whether to shuffle examples between epochs.
63 sort: Whether to sort examples according to self.sort_key.
64 Note that repeat, shuffle, and sort default to train, train, and
65 (not train).
66 device: Device to create batches on. Use -1 for CPU and None for the
67 currently active GPU device.
68 """
69
70 def __init__(self, dataset, batch_size, sort_key=None, device=None,
71 batch_size_fn=lambda new, count, sofar: count, train=True,
72 repeat=None, shuffle=None, sort=None):
73 self.batch_size, self.train, self.dataset = batch_size, train, dataset
74 self.batch_size_fn = batch_size_fn
75 self.iterations = 0
76 self.repeat = train if repeat is None else repeat
77 self.shuffle = train if shuffle is None else shuffle
78 self.sort = not train if sort is None else sort
79 if sort_key is None:
80 self.sort_key = dataset.sort_key
81 else:
82 self.sort_key = sort_key
83 self.device = device
84
85 self.random_shuffler = RandomShuffler()
86
87 # For state loading/saving only
88 self._iterations_this_epoch = 0
89 self._random_state_this_epoch = None
90 self._restored_from_state = False
91
92 @classmethod
93 def splits(cls, datasets, batch_sizes=None, **kwargs):
94 """Create Iterator objects for multiple splits of a dataset.
95
96 Arguments:
97 datasets: Tuple of Dataset objects corresponding to the splits. The
98 first such object should be the train set.
99 batch_sizes: Tuple of batch sizes to use for the different splits,
100 or None to use the same batch_size for all splits.
101 Remaining keyword arguments: Passed to the constructor of the
102 iterator class being used.
103 """
104 if batch_sizes is None:
105 batch_sizes = [kwargs.pop('batch_size')] * len(datasets)
106 ret = []
107 for i in range(len(datasets)):
108 train = i == 0
109 ret.append(cls(
110 datasets[i], batch_size=batch_sizes[i], train=train, **kwargs))
111 return tuple(ret)
112
113 def data(self):
114 """Return the examples in the dataset in order, sorted, or shuffled."""
115 if self.sort:
116 xs = sorted(self.dataset, key=self.sort_key)
117 elif self.shuffle:
118 xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]
119 else:
120 xs = self.dataset
121 return xs
122
123 def init_epoch(self):
124 """Set up the batch generator for a new epoch."""
125
126 if self._restored_from_state:
127 self.random_shuffler.random_state = self._random_state_this_epoch
128 else:
129 self._random_state_this_epoch = self.random_shuffler.random_state
130
131 self.create_batches()
132
133 if self._restored_from_state:
134 self._restored_from_state = False
135 else:
136 self._iterations_this_epoch = 0
137
138 if not self.repeat:
139 self.iterations = 0
140
141 def create_batches(self):
142 self.batches = batch(self.data(), self.batch_size, self.batch_size_fn)
143
144 @property
145 def epoch(self):
146 return self.iterations / len(self)
147
148 def __len__(self):
149 return math.ceil(len(self.dataset) / self.batch_size)
150
151 def __iter__(self):
152 while True:
153 self.init_epoch()
154 for idx, minibatch in enumerate(self.batches):
155 # fast-forward if loaded from state
156 if self._iterations_this_epoch > idx:
157 continue
158 self.iterations += 1
159 self._iterations_this_epoch += 1
160 # NOTE: `rnn.pack_padded_sequence` requires that a minibatch be sorted by
161 # decreasing order, which requires reversing relative to typical sort keys
162 minibatch.reverse()
163 yield Batch(minibatch, self.dataset, self.device,
164 self.train)
165 if not self.repeat:
166 raise StopIteration
167
168 def state_dict(self):
169 return {
170 "iterations": self.iterations,
171 "iterations_this_epoch": self._iterations_this_epoch,
172 "random_state_this_epoch": self._random_state_this_epoch}
173
174 def load_state_dict(self, state_dict):
175 self.iterations = state_dict["iterations"]
176 self._iterations_this_epoch = state_dict["iterations_this_epoch"]
177 self._random_state_this_epoch = state_dict["random_state_this_epoch"]
178 self._restored_from_state = True
179
180
181 class BPTTIterator(Iterator):
182 """Defines an iterator for language modeling tasks that use BPTT.
183
184 Provides contiguous streams of examples together with targets that are
185 one timestep further forward, for language modeling training with
186 backpropagation through time (BPTT). Expects a Dataset with a single
187 example and a single field called 'text' and produces Batches with text and
188 target attributes.
189
190 Attributes:
191 dataset: The Dataset object to load Examples from.
192 batch_size: Batch size.
193 bptt_len: Length of sequences for backpropagation through time.
194 sort_key: A key to use for sorting examples in order to batch together
195 examples with similar lengths and minimize padding. The sort_key
196 provided to the Iterator constructor overrides the sort_key
197 attribute of the Dataset, or defers to it if None.
198 train: Whether the iterator represents a train set.
199 repeat: Whether to repeat the iterator for multiple epochs.
200 shuffle: Whether to shuffle examples between epochs.
201 sort: Whether to sort examples according to self.sort_key.
202 Note that repeat, shuffle, and sort default to train, train, and
203 (not train).
204 device: Device to create batches on. Use -1 for CPU and None for the
205 currently active GPU device.
206 """
207
208 def __init__(self, dataset, batch_size, bptt_len, **kwargs):
209 self.bptt_len = bptt_len
210 super(BPTTIterator, self).__init__(dataset, batch_size, **kwargs)
211
212 def __len__(self):
213 return math.ceil(len(self.dataset[0].text) /
214 (self.batch_size * self.bptt_len))
215
216 def __iter__(self):
217 text = self.dataset[0].text
218 TEXT = self.dataset.fields['text']
219 TEXT.eos_token = None
220 text = text + ([TEXT.pad_token] * (math.ceil(len(text) / self.batch_size) *
221 self.batch_size - len(text)))
222 data = TEXT.numericalize(
223 [text], device=self.device, train=self.train)
224 data = data.view(self.batch_size, -1).t().contiguous()
225 dataset = Dataset(examples=self.dataset.examples, fields=[
226 ('text', TEXT), ('target', TEXT)])
227 while True:
228 for i in range(0, len(self) * self.bptt_len, self.bptt_len):
229 seq_len = min(self.bptt_len, len(data) - 1 - i)
230 yield Batch.fromvars(
231 dataset, self.batch_size, train=self.train,
232 text=data[i:i + seq_len],
233 target=data[i + 1:i + 1 + seq_len])
234 if not self.repeat:
235 raise StopIteration
236
237
238 class BucketIterator(Iterator):
239 """Defines an iterator that batches examples of similar lengths together.
240
241 Minimizes amount of padding needed while producing freshly shuffled
242 batches for each new epoch. See pool for the bucketing procedure used.
243 """
244
245 def create_batches(self):
246 if self.sort:
247 self.batches = batch(self.data(), self.batch_size,
248 self.batch_size_fn)
249 else:
250 self.batches = pool(self.data(), self.batch_size,
251 self.sort_key, self.batch_size_fn,
252 random_shuffler=self.random_shuffler)
253
254
255 def batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):
256 """Yield elements from data in chunks of batch_size."""
257 minibatch, size_so_far = [], 0
258 for ex in data:
259 minibatch.append(ex)
260 size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)
261 if size_so_far == batch_size:
262 yield minibatch
263 minibatch, size_so_far = [], 0
264 elif size_so_far > batch_size:
265 yield minibatch[:-1]
266 minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)
267 if minibatch:
268 yield minibatch
269
270
271 def pool(data, batch_size, key, batch_size_fn=lambda new, count, sofar: count,
272 random_shuffler=None):
273 """Sort within buckets, then batch, then shuffle batches.
274
275 Partitions data into chunks of size 100*batch_size, sorts examples within
276 each chunk using sort_key, then batch these examples and shuffle the
277 batches.
278 """
279 if random_shuffler is None:
280 random_shuffler = random.shuffle
281 for p in batch(data, batch_size * 100, batch_size_fn):
282 p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn)
283 for b in random_shuffler(list(p_batch)):
284 yield b
285
[end of torchtext/data/iterator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py
--- a/torchtext/data/iterator.py
+++ b/torchtext/data/iterator.py
@@ -217,8 +217,8 @@
text = self.dataset[0].text
TEXT = self.dataset.fields['text']
TEXT.eos_token = None
- text = text + ([TEXT.pad_token] * (math.ceil(len(text) / self.batch_size) *
- self.batch_size - len(text)))
+ text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *
+ self.batch_size - len(text)))
data = TEXT.numericalize(
[text], device=self.device, train=self.train)
data = data.view(self.batch_size, -1).t().contiguous()
| {"golden_diff": "diff --git a/torchtext/data/iterator.py b/torchtext/data/iterator.py\n--- a/torchtext/data/iterator.py\n+++ b/torchtext/data/iterator.py\n@@ -217,8 +217,8 @@\n text = self.dataset[0].text\n TEXT = self.dataset.fields['text']\n TEXT.eos_token = None\n- text = text + ([TEXT.pad_token] * (math.ceil(len(text) / self.batch_size) *\n- self.batch_size - len(text)))\n+ text = text + ([TEXT.pad_token] * int(math.ceil(len(text) / self.batch_size) *\n+ self.batch_size - len(text)))\n data = TEXT.numericalize(\n [text], device=self.device, train=self.train)\n data = data.view(self.batch_size, -1).t().contiguous()\n", "issue": "python2 iterator bug?\nhere is the snippet:\r\n\r\n```\r\nfrom __future__ import print_function\r\nfrom torchtext.datasets import WikiText2\r\n\r\ni_train, i_dev, i_test = WikiText2.iters(device='-1', root=\"data/\")\r\nfor i in i_train:\r\n print(i)\r\n```\r\n\r\nhere's the error:\r\n```\r\nTraceback (most recent call last):\r\n File \"bug.py\", line 5, in <module>\r\n for i in i_train:\r\n File \"/u/bosctom/.local/lib/python2.7/site-packages/torchtext/data/iterator.py\", line 218, in __iter__\r\n self.batch_size - len(text)))\r\nTypeError: can't multiply sequence by non-int of type 'float'\r\n```\r\n\r\nThe bug is related to python2's math.ceil() returning a float?\n", "before_files": [{"content": "from __future__ import division\n\nimport math\nimport random\nfrom contextlib import contextmanager\nfrom copy import deepcopy\n\nfrom .batch import Batch\nfrom .dataset import Dataset\n\n\nclass RandomShuffler(object):\n \"\"\"Use random functions while keeping track of the random state to make it\n reproducible and deterministic.\"\"\"\n\n def __init__(self, random_state=None):\n self._random_state = random_state\n if self._random_state is None:\n self._random_state = random.getstate()\n\n @contextmanager\n def use_internal_state(self):\n \"\"\"Use a specific RNG state.\"\"\"\n old_state = random.getstate()\n random.setstate(self._random_state)\n yield\n self._random_state = random.getstate()\n random.setstate(old_state)\n\n @property\n def random_state(self):\n return deepcopy(self._random_state)\n\n @random_state.setter\n def random_state(self, s):\n self._random_state = s\n\n def __call__(self, data):\n \"\"\"Shuffle and return a new list.\"\"\"\n with self.use_internal_state():\n return random.sample(data, len(data))\n\n\nclass Iterator(object):\n \"\"\"Defines an iterator that loads batches of data from a Dataset.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n batch_size_fn: Function of three arguments (new example to add, current\n count of examples in the batch, and current effective batch size)\n that returns the new effective batch size resulting from adding\n that example to a batch. This is useful for dynamic batching, where\n this function would add to the current effective batch size the\n number of tokens in the new example.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. 
The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, sort_key=None, device=None,\n batch_size_fn=lambda new, count, sofar: count, train=True,\n repeat=None, shuffle=None, sort=None):\n self.batch_size, self.train, self.dataset = batch_size, train, dataset\n self.batch_size_fn = batch_size_fn\n self.iterations = 0\n self.repeat = train if repeat is None else repeat\n self.shuffle = train if shuffle is None else shuffle\n self.sort = not train if sort is None else sort\n if sort_key is None:\n self.sort_key = dataset.sort_key\n else:\n self.sort_key = sort_key\n self.device = device\n\n self.random_shuffler = RandomShuffler()\n\n # For state loading/saving only\n self._iterations_this_epoch = 0\n self._random_state_this_epoch = None\n self._restored_from_state = False\n\n @classmethod\n def splits(cls, datasets, batch_sizes=None, **kwargs):\n \"\"\"Create Iterator objects for multiple splits of a dataset.\n\n Arguments:\n datasets: Tuple of Dataset objects corresponding to the splits. The\n first such object should be the train set.\n batch_sizes: Tuple of batch sizes to use for the different splits,\n or None to use the same batch_size for all splits.\n Remaining keyword arguments: Passed to the constructor of the\n iterator class being used.\n \"\"\"\n if batch_sizes is None:\n batch_sizes = [kwargs.pop('batch_size')] * len(datasets)\n ret = []\n for i in range(len(datasets)):\n train = i == 0\n ret.append(cls(\n datasets[i], batch_size=batch_sizes[i], train=train, **kwargs))\n return tuple(ret)\n\n def data(self):\n \"\"\"Return the examples in the dataset in order, sorted, or shuffled.\"\"\"\n if self.sort:\n xs = sorted(self.dataset, key=self.sort_key)\n elif self.shuffle:\n xs = [self.dataset[i] for i in self.random_shuffler(range(len(self.dataset)))]\n else:\n xs = self.dataset\n return xs\n\n def init_epoch(self):\n \"\"\"Set up the batch generator for a new epoch.\"\"\"\n\n if self._restored_from_state:\n self.random_shuffler.random_state = self._random_state_this_epoch\n else:\n self._random_state_this_epoch = self.random_shuffler.random_state\n\n self.create_batches()\n\n if self._restored_from_state:\n self._restored_from_state = False\n else:\n self._iterations_this_epoch = 0\n\n if not self.repeat:\n self.iterations = 0\n\n def create_batches(self):\n self.batches = batch(self.data(), self.batch_size, self.batch_size_fn)\n\n @property\n def epoch(self):\n return self.iterations / len(self)\n\n def __len__(self):\n return math.ceil(len(self.dataset) / self.batch_size)\n\n def __iter__(self):\n while True:\n self.init_epoch()\n for idx, minibatch in enumerate(self.batches):\n # fast-forward if loaded from state\n if self._iterations_this_epoch > idx:\n continue\n self.iterations += 1\n self._iterations_this_epoch += 1\n # NOTE: `rnn.pack_padded_sequence` requires that a minibatch be sorted by\n # decreasing order, which requires reversing relative to typical sort keys\n minibatch.reverse()\n yield Batch(minibatch, self.dataset, self.device,\n 
self.train)\n if not self.repeat:\n raise StopIteration\n\n def state_dict(self):\n return {\n \"iterations\": self.iterations,\n \"iterations_this_epoch\": self._iterations_this_epoch,\n \"random_state_this_epoch\": self._random_state_this_epoch}\n\n def load_state_dict(self, state_dict):\n self.iterations = state_dict[\"iterations\"]\n self._iterations_this_epoch = state_dict[\"iterations_this_epoch\"]\n self._random_state_this_epoch = state_dict[\"random_state_this_epoch\"]\n self._restored_from_state = True\n\n\nclass BPTTIterator(Iterator):\n \"\"\"Defines an iterator for language modeling tasks that use BPTT.\n\n Provides contiguous streams of examples together with targets that are\n one timestep further forward, for language modeling training with\n backpropagation through time (BPTT). Expects a Dataset with a single\n example and a single field called 'text' and produces Batches with text and\n target attributes.\n\n Attributes:\n dataset: The Dataset object to load Examples from.\n batch_size: Batch size.\n bptt_len: Length of sequences for backpropagation through time.\n sort_key: A key to use for sorting examples in order to batch together\n examples with similar lengths and minimize padding. The sort_key\n provided to the Iterator constructor overrides the sort_key\n attribute of the Dataset, or defers to it if None.\n train: Whether the iterator represents a train set.\n repeat: Whether to repeat the iterator for multiple epochs.\n shuffle: Whether to shuffle examples between epochs.\n sort: Whether to sort examples according to self.sort_key.\n Note that repeat, shuffle, and sort default to train, train, and\n (not train).\n device: Device to create batches on. Use -1 for CPU and None for the\n currently active GPU device.\n \"\"\"\n\n def __init__(self, dataset, batch_size, bptt_len, **kwargs):\n self.bptt_len = bptt_len\n super(BPTTIterator, self).__init__(dataset, batch_size, **kwargs)\n\n def __len__(self):\n return math.ceil(len(self.dataset[0].text) /\n (self.batch_size * self.bptt_len))\n\n def __iter__(self):\n text = self.dataset[0].text\n TEXT = self.dataset.fields['text']\n TEXT.eos_token = None\n text = text + ([TEXT.pad_token] * (math.ceil(len(text) / self.batch_size) *\n self.batch_size - len(text)))\n data = TEXT.numericalize(\n [text], device=self.device, train=self.train)\n data = data.view(self.batch_size, -1).t().contiguous()\n dataset = Dataset(examples=self.dataset.examples, fields=[\n ('text', TEXT), ('target', TEXT)])\n while True:\n for i in range(0, len(self) * self.bptt_len, self.bptt_len):\n seq_len = min(self.bptt_len, len(data) - 1 - i)\n yield Batch.fromvars(\n dataset, self.batch_size, train=self.train,\n text=data[i:i + seq_len],\n target=data[i + 1:i + 1 + seq_len])\n if not self.repeat:\n raise StopIteration\n\n\nclass BucketIterator(Iterator):\n \"\"\"Defines an iterator that batches examples of similar lengths together.\n\n Minimizes amount of padding needed while producing freshly shuffled\n batches for each new epoch. 
See pool for the bucketing procedure used.\n \"\"\"\n\n def create_batches(self):\n if self.sort:\n self.batches = batch(self.data(), self.batch_size,\n self.batch_size_fn)\n else:\n self.batches = pool(self.data(), self.batch_size,\n self.sort_key, self.batch_size_fn,\n random_shuffler=self.random_shuffler)\n\n\ndef batch(data, batch_size, batch_size_fn=lambda new, count, sofar: count):\n \"\"\"Yield elements from data in chunks of batch_size.\"\"\"\n minibatch, size_so_far = [], 0\n for ex in data:\n minibatch.append(ex)\n size_so_far = batch_size_fn(ex, len(minibatch), size_so_far)\n if size_so_far == batch_size:\n yield minibatch\n minibatch, size_so_far = [], 0\n elif size_so_far > batch_size:\n yield minibatch[:-1]\n minibatch, size_so_far = minibatch[-1:], batch_size_fn(ex, 1, 0)\n if minibatch:\n yield minibatch\n\n\ndef pool(data, batch_size, key, batch_size_fn=lambda new, count, sofar: count,\n random_shuffler=None):\n \"\"\"Sort within buckets, then batch, then shuffle batches.\n\n Partitions data into chunks of size 100*batch_size, sorts examples within\n each chunk using sort_key, then batch these examples and shuffle the\n batches.\n \"\"\"\n if random_shuffler is None:\n random_shuffler = random.shuffle\n for p in batch(data, batch_size * 100, batch_size_fn):\n p_batch = batch(sorted(p, key=key), batch_size, batch_size_fn)\n for b in random_shuffler(list(p_batch)):\n yield b\n", "path": "torchtext/data/iterator.py"}]} | 3,886 | 185 |
gh_patches_debug_30352 | rasdani/github-patches | git_diff | AnalogJ__lexicon-381 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PowerDNS dot notation
From PowerDNS 4.2 onwards, undotted notation is not supported.
From PowerDNS >=4.0 but <=4.1 the dotted notation is _optional_.
I suggest we implement the dotted notation _by default_ since that will work with both dotted and undotted versions.
We could have a flag to turn this behavior off for those running some legacy versions (which don't support dotted notation) or simply add support for PowerDNS >=4.0.
</issue>
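A minimal sketch of the trailing-dot normalisation the issue suggests; the helper name below is an assumption chosen for illustration rather than anything from the lexicon code base (the golden diff further down adds an equivalent `_ensure_dot` classmethod on the provider).

```python
def ensure_trailing_dot(zone_name):
    """Return the zone name in the dotted form PowerDNS 4.2+ requires."""
    return zone_name if zone_name.endswith(".") else zone_name + "."

print(ensure_trailing_dot("example.com"))   # example.com.
print(ensure_trailing_dot("example.com."))  # example.com. (already dotted, unchanged)
```

Because already-dotted names pass through unchanged, applying the helper unconditionally works for PowerDNS >= 4.0, where the dotted notation is either optional or mandatory.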
<code>
[start of lexicon/providers/powerdns.py]
1 """
2 Lexicon PowerDNS Provider
3
4 Author: Will Hughes, 2017
5
6 API Docs: https://doc.powerdns.com/md/httpapi/api_spec/
7
8 Implementation notes:
9 * The PowerDNS API does not assign a unique identifier to each record in the way
10 that Lexicon expects. We work around this by creating an ID based on the record
11 name, type and content, which when taken together are always unique
12 * The PowerDNS API has no notion of 'create a single record' or 'delete a single
13 record'. All operations are either 'replace the RRSet with this new set of records'
14 or 'delete all records for this name and type. Similarly, there is no notion of
15 'change the content of this record', because records are identified by their name,
16 type and content.
17 * The API is very picky about the format of values used when creating records:
18 ** CNAMEs must be fully qualified
19 ** TXT, LOC records must be quoted
20 This is why the _clean_content and _unclean_content methods exist, to convert
21 back and forth between the format PowerDNS expects, and the format Lexicon uses
22 """
23 from __future__ import absolute_import
24 import json
25 import logging
26
27 import requests
28 from lexicon.providers.base import Provider as BaseProvider
29
30
31 LOGGER = logging.getLogger(__name__)
32
33 NAMESERVER_DOMAINS = []
34
35
36 def provider_parser(subparser):
37 """Configure provider parser for powerdns"""
38 subparser.add_argument(
39 "--auth-token", help="specify token for authentication")
40 subparser.add_argument("--pdns-server", help="URI for PowerDNS server")
41 subparser.add_argument(
42 "--pdns-server-id", help="Server ID to interact with")
43 subparser.add_argument(
44 "--pdns-disable-notify", help="Disable slave notifications from master")
45
46 class Provider(BaseProvider):
47 """Provider class for PowerDNS"""
48 def __init__(self, config):
49 super(Provider, self).__init__(config)
50
51 self.api_endpoint = self._get_provider_option('pdns_server')
52 self.disable_slave_notify = self._get_provider_option('pdns-disable-notify')
53
54 if self.api_endpoint.endswith('/'):
55 self.api_endpoint = self.api_endpoint[:-1]
56
57 if not self.api_endpoint.endswith("/api/v1"):
58 self.api_endpoint += "/api/v1"
59
60 self.server_id = self._get_provider_option('pdns_server_id')
61 if self.server_id is None:
62 self.server_id = 'localhost'
63
64 self.api_endpoint += "/servers/" + self.server_id
65
66 self.api_key = self._get_provider_option('auth_token')
67 assert self.api_key is not None
68 self._zone_data = None
69
70 def notify_slaves(self):
71 """Checks to see if slaves should be notified, and notifies them if needed"""
72 if self.disable_slave_notify is not None:
73 LOGGER.debug('Slave notifications disabled')
74 return False
75
76 if self.zone_data()['kind'] == 'Master':
77 response_code = self._put('/zones/' + self.domain + '/notify').status_code
78 if response_code == 200:
79 LOGGER.debug('Slave(s) notified')
80 return True
81 LOGGER.debug('Slave notification failed with code %i', response_code)
82 else:
83 LOGGER.debug('Zone type should be \'Master\' for slave notifications')
84 return False
85
86 def zone_data(self):
87 """Get zone data"""
88 if self._zone_data is None:
89 self._zone_data = self._get('/zones/' + self.domain).json()
90 return self._zone_data
91
92 def _authenticate(self):
93 self.zone_data()
94 self.domain_id = self.domain
95
96 def _make_identifier(self, rtype, name, content): # pylint: disable=no-self-use
97 return "{}/{}={}".format(rtype, name, content)
98
99 def _parse_identifier(self, identifier): # pylint: disable=no-self-use
100 parts = identifier.split('/')
101 rtype = parts[0]
102 parts = parts[1].split('=')
103 name = parts[0]
104 content = "=".join(parts[1:])
105 return rtype, name, content
106
107 def _list_records(self, rtype=None, name=None, content=None):
108 records = []
109 for rrset in self.zone_data()['rrsets']:
110 if (name is None or self._fqdn_name(rrset['name']) == self._fqdn_name(
111 name)) and (rtype is None or rrset['type'] == rtype):
112 for record in rrset['records']:
113 if content is None or record['content'] == self._clean_content(rtype, content):
114 records.append({
115 'type': rrset['type'],
116 'name': self._full_name(rrset['name']),
117 'ttl': rrset['ttl'],
118 'content': self._unclean_content(rrset['type'], record['content']),
119 'id': self._make_identifier(rrset['type'],
120 rrset['name'], record['content'])
121 })
122 LOGGER.debug('list_records: %s', records)
123 return records
124
125 def _clean_content(self, rtype, content):
126 if rtype in ("TXT", "LOC"):
127 if content[0] != '"':
128 content = '"' + content
129 if content[-1] != '"':
130 content += '"'
131 elif rtype == "CNAME":
132 content = self._fqdn_name(content)
133 return content
134
135 def _unclean_content(self, rtype, content):
136 if rtype in ("TXT", "LOC"):
137 content = content.strip('"')
138 elif rtype == "CNAME":
139 content = self._full_name(content)
140 return content
141
142 def _create_record(self, rtype, name, content):
143 rname = self._fqdn_name(name)
144 newcontent = self._clean_content(rtype, content)
145
146 updated_data = {
147 'name': rname,
148 'type': rtype,
149 'records': [],
150 'ttl': self._get_lexicon_option('ttl') or 600,
151 'changetype': 'REPLACE'
152 }
153
154 updated_data['records'].append({'content': newcontent, 'disabled': False})
155
156 for rrset in self.zone_data()['rrsets']:
157 if rrset['name'] == rname and rrset['type'] == rtype:
158 updated_data['ttl'] = rrset['ttl']
159
160 for record in rrset['records']:
161 if record['content'] != newcontent:
162 updated_data['records'].append(
163 {
164 'content': record['content'],
165 'disabled': record['disabled']
166 })
167 break
168
169 request = {'rrsets': [updated_data]}
170 LOGGER.debug('request: %s', request)
171
172 self._patch('/zones/' + self.domain, data=request)
173 self.notify_slaves()
174 self._zone_data = None
175 return True
176
177 def _delete_record(self, identifier=None, rtype=None, name=None, content=None):
178 if identifier is not None:
179 rtype, name, content = self._parse_identifier(identifier)
180
181 LOGGER.debug("delete %s %s %s", rtype, name, content)
182 if rtype is None or name is None:
183 raise Exception("Must specify at least both rtype and name")
184
185 for rrset in self.zone_data()['rrsets']:
186 if rrset['type'] == rtype and self._fqdn_name(rrset['name']) == self._fqdn_name(name):
187 update_data = rrset
188
189 if 'comments' in update_data:
190 del update_data['comments']
191
192 if content is None:
193 update_data['records'] = []
194 update_data['changetype'] = 'DELETE'
195 else:
196 new_record_list = []
197 for record in update_data['records']:
198 if self._clean_content(rrset['type'], content) != record['content']:
199 new_record_list.append(record)
200
201 update_data['records'] = new_record_list
202 update_data['changetype'] = 'REPLACE'
203 break
204
205 request = {'rrsets': [update_data]}
206 LOGGER.debug('request: %s', request)
207
208 self._patch('/zones/' + self.domain, data=request)
209 self.notify_slaves()
210 self._zone_data = None
211 return True
212
213 def _update_record(self, identifier, rtype=None, name=None, content=None):
214 self._delete_record(identifier)
215 return self._create_record(rtype, name, content)
216
217 def _patch(self, url='/', data=None, query_params=None):
218 return self._request('PATCH', url, data=data, query_params=query_params)
219
220 def _request(self, action='GET', url='/', data=None, query_params=None):
221 if data is None:
222 data = {}
223 if query_params is None:
224 query_params = {}
225 response = requests.request(action, self.api_endpoint + url, params=query_params,
226 data=json.dumps(data),
227 headers={
228 'X-API-Key': self.api_key,
229 'Content-Type': 'application/json'
230 })
231 LOGGER.debug('response: %s', response.text)
232 response.raise_for_status()
233 return response
234
[end of lexicon/providers/powerdns.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lexicon/providers/powerdns.py b/lexicon/providers/powerdns.py
--- a/lexicon/providers/powerdns.py
+++ b/lexicon/providers/powerdns.py
@@ -86,7 +86,7 @@
def zone_data(self):
"""Get zone data"""
if self._zone_data is None:
- self._zone_data = self._get('/zones/' + self.domain).json()
+ self._zone_data = self._get('/zones/' + self._ensure_dot(self.domain)).json()
return self._zone_data
def _authenticate(self):
@@ -169,7 +169,7 @@
request = {'rrsets': [updated_data]}
LOGGER.debug('request: %s', request)
- self._patch('/zones/' + self.domain, data=request)
+ self._patch('/zones/' + self._ensure_dot(self.domain), data=request)
self.notify_slaves()
self._zone_data = None
return True
@@ -205,8 +205,9 @@
request = {'rrsets': [update_data]}
LOGGER.debug('request: %s', request)
- self._patch('/zones/' + self.domain, data=request)
+ self._patch('/zones/' + self._ensure_dot(self.domain), data=request)
self.notify_slaves()
+
self._zone_data = None
return True
@@ -231,3 +232,12 @@
LOGGER.debug('response: %s', response.text)
response.raise_for_status()
return response
+
+ @classmethod
+ def _ensure_dot(cls, text):
+ """
+ This function makes sure a string contains a dot at the end
+ """
+ if text.endswith("."):
+ return text
+ return text + "."
| {"golden_diff": "diff --git a/lexicon/providers/powerdns.py b/lexicon/providers/powerdns.py\n--- a/lexicon/providers/powerdns.py\n+++ b/lexicon/providers/powerdns.py\n@@ -86,7 +86,7 @@\n def zone_data(self):\n \"\"\"Get zone data\"\"\"\n if self._zone_data is None:\n- self._zone_data = self._get('/zones/' + self.domain).json()\n+ self._zone_data = self._get('/zones/' + self._ensure_dot(self.domain)).json()\n return self._zone_data\n \n def _authenticate(self):\n@@ -169,7 +169,7 @@\n request = {'rrsets': [updated_data]}\n LOGGER.debug('request: %s', request)\n \n- self._patch('/zones/' + self.domain, data=request)\n+ self._patch('/zones/' + self._ensure_dot(self.domain), data=request)\n self.notify_slaves()\n self._zone_data = None\n return True\n@@ -205,8 +205,9 @@\n request = {'rrsets': [update_data]}\n LOGGER.debug('request: %s', request)\n \n- self._patch('/zones/' + self.domain, data=request)\n+ self._patch('/zones/' + self._ensure_dot(self.domain), data=request)\n self.notify_slaves()\n+\n self._zone_data = None\n return True\n \n@@ -231,3 +232,12 @@\n LOGGER.debug('response: %s', response.text)\n response.raise_for_status()\n return response\n+\n+ @classmethod\n+ def _ensure_dot(cls, text):\n+ \"\"\"\n+ This function makes sure a string contains a dot at the end\n+ \"\"\"\n+ if text.endswith(\".\"):\n+ return text\n+ return text + \".\"\n", "issue": "PowerDNS dot notation\nFrom PowerDNS 4.2 onwards, undotted notation is not supported.\r\nFrom PowerDNS >=4.0 but <=4.1 the dotted notation is _optional_.\r\n\r\nI suggest we implement the dotted notation _by default_ since that will work with both dotted and undotted versions. \r\n\r\nWe could have a flag to turn this behavior for those running some legacy versions (which doesn't support dotted notation) or simply add support for PowerDNS >=4.0.\n", "before_files": [{"content": "\"\"\"\nLexicon PowerDNS Provider\n\nAuthor: Will Hughes, 2017\n\nAPI Docs: https://doc.powerdns.com/md/httpapi/api_spec/\n\nImplementation notes:\n* The PowerDNS API does not assign a unique identifier to each record in the way\nthat Lexicon expects. We work around this by creating an ID based on the record\nname, type and content, which when taken together are always unique\n* The PowerDNS API has no notion of 'create a single record' or 'delete a single\nrecord'. All operations are either 'replace the RRSet with this new set of records'\nor 'delete all records for this name and type. 
Similarly, there is no notion of\n'change the content of this record', because records are identified by their name,\ntype and content.\n* The API is very picky about the format of values used when creating records:\n** CNAMEs must be fully qualified\n** TXT, LOC records must be quoted\nThis is why the _clean_content and _unclean_content methods exist, to convert\nback and forth between the format PowerDNS expects, and the format Lexicon uses\n\"\"\"\nfrom __future__ import absolute_import\nimport json\nimport logging\n\nimport requests\nfrom lexicon.providers.base import Provider as BaseProvider\n\n\nLOGGER = logging.getLogger(__name__)\n\nNAMESERVER_DOMAINS = []\n\n\ndef provider_parser(subparser):\n \"\"\"Configure provider parser for powerdns\"\"\"\n subparser.add_argument(\n \"--auth-token\", help=\"specify token for authentication\")\n subparser.add_argument(\"--pdns-server\", help=\"URI for PowerDNS server\")\n subparser.add_argument(\n \"--pdns-server-id\", help=\"Server ID to interact with\")\n subparser.add_argument(\n \"--pdns-disable-notify\", help=\"Disable slave notifications from master\")\n\nclass Provider(BaseProvider):\n \"\"\"Provider class for PowerDNS\"\"\"\n def __init__(self, config):\n super(Provider, self).__init__(config)\n\n self.api_endpoint = self._get_provider_option('pdns_server')\n self.disable_slave_notify = self._get_provider_option('pdns-disable-notify')\n\n if self.api_endpoint.endswith('/'):\n self.api_endpoint = self.api_endpoint[:-1]\n\n if not self.api_endpoint.endswith(\"/api/v1\"):\n self.api_endpoint += \"/api/v1\"\n\n self.server_id = self._get_provider_option('pdns_server_id')\n if self.server_id is None:\n self.server_id = 'localhost'\n\n self.api_endpoint += \"/servers/\" + self.server_id\n\n self.api_key = self._get_provider_option('auth_token')\n assert self.api_key is not None\n self._zone_data = None\n\n def notify_slaves(self):\n \"\"\"Checks to see if slaves should be notified, and notifies them if needed\"\"\"\n if self.disable_slave_notify is not None:\n LOGGER.debug('Slave notifications disabled')\n return False\n\n if self.zone_data()['kind'] == 'Master':\n response_code = self._put('/zones/' + self.domain + '/notify').status_code\n if response_code == 200:\n LOGGER.debug('Slave(s) notified')\n return True\n LOGGER.debug('Slave notification failed with code %i', response_code)\n else:\n LOGGER.debug('Zone type should be \\'Master\\' for slave notifications')\n return False\n\n def zone_data(self):\n \"\"\"Get zone data\"\"\"\n if self._zone_data is None:\n self._zone_data = self._get('/zones/' + self.domain).json()\n return self._zone_data\n\n def _authenticate(self):\n self.zone_data()\n self.domain_id = self.domain\n\n def _make_identifier(self, rtype, name, content): # pylint: disable=no-self-use\n return \"{}/{}={}\".format(rtype, name, content)\n\n def _parse_identifier(self, identifier): # pylint: disable=no-self-use\n parts = identifier.split('/')\n rtype = parts[0]\n parts = parts[1].split('=')\n name = parts[0]\n content = \"=\".join(parts[1:])\n return rtype, name, content\n\n def _list_records(self, rtype=None, name=None, content=None):\n records = []\n for rrset in self.zone_data()['rrsets']:\n if (name is None or self._fqdn_name(rrset['name']) == self._fqdn_name(\n name)) and (rtype is None or rrset['type'] == rtype):\n for record in rrset['records']:\n if content is None or record['content'] == self._clean_content(rtype, content):\n records.append({\n 'type': rrset['type'],\n 'name': self._full_name(rrset['name']),\n 
'ttl': rrset['ttl'],\n 'content': self._unclean_content(rrset['type'], record['content']),\n 'id': self._make_identifier(rrset['type'],\n rrset['name'], record['content'])\n })\n LOGGER.debug('list_records: %s', records)\n return records\n\n def _clean_content(self, rtype, content):\n if rtype in (\"TXT\", \"LOC\"):\n if content[0] != '\"':\n content = '\"' + content\n if content[-1] != '\"':\n content += '\"'\n elif rtype == \"CNAME\":\n content = self._fqdn_name(content)\n return content\n\n def _unclean_content(self, rtype, content):\n if rtype in (\"TXT\", \"LOC\"):\n content = content.strip('\"')\n elif rtype == \"CNAME\":\n content = self._full_name(content)\n return content\n\n def _create_record(self, rtype, name, content):\n rname = self._fqdn_name(name)\n newcontent = self._clean_content(rtype, content)\n\n updated_data = {\n 'name': rname,\n 'type': rtype,\n 'records': [],\n 'ttl': self._get_lexicon_option('ttl') or 600,\n 'changetype': 'REPLACE'\n }\n\n updated_data['records'].append({'content': newcontent, 'disabled': False})\n\n for rrset in self.zone_data()['rrsets']:\n if rrset['name'] == rname and rrset['type'] == rtype:\n updated_data['ttl'] = rrset['ttl']\n\n for record in rrset['records']:\n if record['content'] != newcontent:\n updated_data['records'].append(\n {\n 'content': record['content'],\n 'disabled': record['disabled']\n })\n break\n\n request = {'rrsets': [updated_data]}\n LOGGER.debug('request: %s', request)\n\n self._patch('/zones/' + self.domain, data=request)\n self.notify_slaves()\n self._zone_data = None\n return True\n\n def _delete_record(self, identifier=None, rtype=None, name=None, content=None):\n if identifier is not None:\n rtype, name, content = self._parse_identifier(identifier)\n\n LOGGER.debug(\"delete %s %s %s\", rtype, name, content)\n if rtype is None or name is None:\n raise Exception(\"Must specify at least both rtype and name\")\n\n for rrset in self.zone_data()['rrsets']:\n if rrset['type'] == rtype and self._fqdn_name(rrset['name']) == self._fqdn_name(name):\n update_data = rrset\n\n if 'comments' in update_data:\n del update_data['comments']\n\n if content is None:\n update_data['records'] = []\n update_data['changetype'] = 'DELETE'\n else:\n new_record_list = []\n for record in update_data['records']:\n if self._clean_content(rrset['type'], content) != record['content']:\n new_record_list.append(record)\n\n update_data['records'] = new_record_list\n update_data['changetype'] = 'REPLACE'\n break\n\n request = {'rrsets': [update_data]}\n LOGGER.debug('request: %s', request)\n\n self._patch('/zones/' + self.domain, data=request)\n self.notify_slaves()\n self._zone_data = None\n return True\n\n def _update_record(self, identifier, rtype=None, name=None, content=None):\n self._delete_record(identifier)\n return self._create_record(rtype, name, content)\n\n def _patch(self, url='/', data=None, query_params=None):\n return self._request('PATCH', url, data=data, query_params=query_params)\n\n def _request(self, action='GET', url='/', data=None, query_params=None):\n if data is None:\n data = {}\n if query_params is None:\n query_params = {}\n response = requests.request(action, self.api_endpoint + url, params=query_params,\n data=json.dumps(data),\n headers={\n 'X-API-Key': self.api_key,\n 'Content-Type': 'application/json'\n })\n LOGGER.debug('response: %s', response.text)\n response.raise_for_status()\n return response\n", "path": "lexicon/providers/powerdns.py"}]} | 3,206 | 400 |
gh_patches_debug_17345 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-3074 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The smt combiner is raising IndexError exceptions in production.
The CpuTopology combiner is throwing a large number of IndexError('list index out of range',) exceptions in production.
</issue>
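A simplified, self-contained illustration of the failure mode (plain dicts stand in for the parser results; this is not the actual insights-core code): cpu0 commonly has no `online` file in sysfs, so no `CpuCoreOnline` entry exists for core 0, the per-core filter returns an empty list, and indexing it with `[0]` raises the reported IndexError. Guarding the access, and treating core 0 as online when its file is absent, avoids the crash, which is what the golden diff below does.

```python
# No entry for core 0, mimicking a box where /sys/.../cpu0/online does not exist.
cpu_online = [{"core_id": 1, "on": True}, {"core_id": 2, "on": True}]

n = 0
matches = [core for core in cpu_online if core["core_id"] == n]
# matches == [], so matches[0] would raise IndexError: list index out of range.
if matches:
    online = matches[0]["on"]
else:
    online = (n == 0)  # cpu0 is always online when its "online" file is missing
print(online)  # True
```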
<code>
[start of insights/combiners/smt.py]
1 """
2 Simultaneous Multithreading (SMT) combiner
3 ==========================================
4
5 Combiner for Simultaneous Multithreading (SMT). It uses the results of the following parsers:
6 :class:`insights.parsers.smt.CpuCoreOnline`,
7 :class:`insights.parsers.smt.CpuSiblings`.
8 """
9
10 from insights.core.plugins import combiner
11 from insights.parsers.smt import CpuCoreOnline, CpuSiblings
12
13
14 @combiner(CpuCoreOnline, CpuSiblings)
15 class CpuTopology(object):
16 """
17 Class for collecting the online/siblings status for all CPU cores.
18
19 Sample output of the ``CpuCoreOnline`` parser is::
20
21 [[Core 0: Online], [Core 1: Online], [Core 2: Online], [Core 3: Online]]
22
23 Sample output of the ``CpuSiblings`` parser is::
24
25 [[Core 0 Siblings: [0, 2]], [Core 1 Siblings: [1, 3]], [Core 2 Siblings: [0, 2]], [Core 3 Siblings: [1, 3]]]
26
27 Attributes:
28 cores (list of dictionaries): List of all cores.
29 all_solitary (bool): True, if hyperthreading is not used.
30
31 Examples:
32 >>> type(cpu_topology)
33 <class 'insights.combiners.smt.CpuTopology'>
34 >>> cpu_topology.cores == [{'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}, {'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}]
35 True
36 >>> cpu_topology.all_solitary
37 False
38 """
39
40 def __init__(self, cpu_online, cpu_siblings):
41 self.cores = []
42
43 max_cpu_core_id = max([core.core_id for core in cpu_online])
44 for n in range(max_cpu_core_id + 1):
45 online = [core for core in cpu_online if core.core_id == n]
46 online = online[0].on
47 siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]
48 if len(siblings) != 0:
49 siblings = siblings[0].siblings
50
51 one_core = {"online": online, "siblings": siblings}
52 self.cores.append(one_core)
53
54 self.all_solitary = all([len(core["siblings"]) <= 1 for core in self.cores])
55
56 def online(self, core_id):
57 """
58 Returns bool value obtained from "online" file for given core_id.
59 """
60 if core_id >= len(self.cores) or core_id < 0:
61 return None
62 return self.cores[core_id]["online"]
63
64 def siblings(self, core_id):
65 """
66 Returns list of siblings for given core_id.
67 """
68 if core_id >= len(self.cores) or core_id < 0:
69 return None
70 return self.cores[core_id]["siblings"]
71
[end of insights/combiners/smt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/insights/combiners/smt.py b/insights/combiners/smt.py
--- a/insights/combiners/smt.py
+++ b/insights/combiners/smt.py
@@ -43,7 +43,13 @@
max_cpu_core_id = max([core.core_id for core in cpu_online])
for n in range(max_cpu_core_id + 1):
online = [core for core in cpu_online if core.core_id == n]
- online = online[0].on
+ # On some boxes cpu0 doesn't have the online file, since technically cpu0 will always
+ # be online. So check if online returns anything before trying to access online[0].
+ # If it returns nothing and n is 0 set online to True.
+ if online:
+ online = online[0].on
+ elif not online and n == 0:
+ online = True
siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]
if len(siblings) != 0:
siblings = siblings[0].siblings
| {"golden_diff": "diff --git a/insights/combiners/smt.py b/insights/combiners/smt.py\n--- a/insights/combiners/smt.py\n+++ b/insights/combiners/smt.py\n@@ -43,7 +43,13 @@\n max_cpu_core_id = max([core.core_id for core in cpu_online])\n for n in range(max_cpu_core_id + 1):\n online = [core for core in cpu_online if core.core_id == n]\n- online = online[0].on\n+ # On some boxes cpu0 doesn't have the online file, since technically cpu0 will always\n+ # be online. So check if online returns anything before trying to access online[0].\n+ # If it returns nothing and n is 0 set online to True.\n+ if online:\n+ online = online[0].on\n+ elif not online and n == 0:\n+ online = True\n siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]\n if len(siblings) != 0:\n siblings = siblings[0].siblings\n", "issue": "The smt combiner is raising IndexError exceptions in production.\nThe CpuTopology combiner is throwing a large number of the exception IndexError('list index out of range',) in production.\n", "before_files": [{"content": "\"\"\"\nSimultaneous Multithreading (SMT) combiner\n==========================================\n\nCombiner for Simultaneous Multithreading (SMT). It uses the results of the following parsers:\n:class:`insights.parsers.smt.CpuCoreOnline`,\n:class:`insights.parsers.smt.CpuSiblings`.\n\"\"\"\n\nfrom insights.core.plugins import combiner\nfrom insights.parsers.smt import CpuCoreOnline, CpuSiblings\n\n\n@combiner(CpuCoreOnline, CpuSiblings)\nclass CpuTopology(object):\n \"\"\"\n Class for collecting the online/siblings status for all CPU cores.\n\n Sample output of the ``CpuCoreOnline`` parser is::\n\n [[Core 0: Online], [Core 1: Online], [Core 2: Online], [Core 3: Online]]\n\n Sample output of the ``CpuSiblings`` parser is::\n\n [[Core 0 Siblings: [0, 2]], [Core 1 Siblings: [1, 3]], [Core 2 Siblings: [0, 2]], [Core 3 Siblings: [1, 3]]]\n\n Attributes:\n cores (list of dictionaries): List of all cores.\n all_solitary (bool): True, if hyperthreading is not used.\n\n Examples:\n >>> type(cpu_topology)\n <class 'insights.combiners.smt.CpuTopology'>\n >>> cpu_topology.cores == [{'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}, {'online': True, 'siblings': [0, 2]}, {'online': True, 'siblings': [1, 3]}]\n True\n >>> cpu_topology.all_solitary\n False\n \"\"\"\n\n def __init__(self, cpu_online, cpu_siblings):\n self.cores = []\n\n max_cpu_core_id = max([core.core_id for core in cpu_online])\n for n in range(max_cpu_core_id + 1):\n online = [core for core in cpu_online if core.core_id == n]\n online = online[0].on\n siblings = [sibling for sibling in cpu_siblings if sibling.core_id == n]\n if len(siblings) != 0:\n siblings = siblings[0].siblings\n\n one_core = {\"online\": online, \"siblings\": siblings}\n self.cores.append(one_core)\n\n self.all_solitary = all([len(core[\"siblings\"]) <= 1 for core in self.cores])\n\n def online(self, core_id):\n \"\"\"\n Returns bool value obtained from \"online\" file for given core_id.\n \"\"\"\n if core_id >= len(self.cores) or core_id < 0:\n return None\n return self.cores[core_id][\"online\"]\n\n def siblings(self, core_id):\n \"\"\"\n Returns list of siblings for given core_id.\n \"\"\"\n if core_id >= len(self.cores) or core_id < 0:\n return None\n return self.cores[core_id][\"siblings\"]\n", "path": "insights/combiners/smt.py"}]} | 1,371 | 247 |
gh_patches_debug_25666 | rasdani/github-patches | git_diff | ephios-dev__ephios-875 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change event type in event form
As a planner I want to change the event's type after creation so I can correct a mistake.
</issue>
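One possible direction, sketched purely for illustration (whether ephios actually resolves the issue this way is not shown in this excerpt): expose `type` as an editable field on `EventForm`, for example by adding it to `Meta.fields`. Note that the `save()` method shown in the code below unconditionally sets `self.instance.type = self.eventtype`, so that assignment would also have to respect the submitted value when editing an existing event.

```python
from django import forms

from ephios.core.models import Event


class EventForm(forms.ModelForm):
    class Meta:
        model = Event
        # "type" added so planners can correct the event type after creation;
        # this is a hypothetical sketch, not the project's actual patch.
        fields = ["title", "description", "location", "type"]
```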
<code>
[start of ephios/core/forms/events.py]
1 import operator
2 import re
3 from datetime import date, datetime, timedelta
4
5 from crispy_forms.bootstrap import FormActions
6 from crispy_forms.helper import FormHelper
7 from crispy_forms.layout import Field, Layout, Submit
8 from django import forms
9 from django.contrib.auth import get_user_model
10 from django.contrib.auth.models import Group
11 from django.core.exceptions import ValidationError
12 from django.db.models import Q
13 from django.template.loader import render_to_string
14 from django.utils.timezone import make_aware
15 from django.utils.translation import gettext as _
16 from django_select2.forms import Select2MultipleWidget
17 from dynamic_preferences.forms import PreferenceForm
18 from guardian.shortcuts import assign_perm, get_objects_for_user, get_users_with_perms, remove_perm
19 from recurrence.forms import RecurrenceField
20
21 from ephios.core.dynamic_preferences_registry import event_type_preference_registry
22 from ephios.core.models import Event, EventType, LocalParticipation, Shift, UserProfile
23 from ephios.core.signup.methods import enabled_signup_methods, signup_method_from_slug
24 from ephios.core.widgets import MultiUserProfileWidget
25 from ephios.extra.crispy import AbortLink
26 from ephios.extra.permissions import get_groups_with_perms
27 from ephios.extra.widgets import ColorInput, CustomDateInput, CustomTimeInput
28 from ephios.modellogging.log import add_log_recorder, update_log
29 from ephios.modellogging.recorders import (
30 DerivedFieldsLogRecorder,
31 InstanceActionType,
32 PermissionLogRecorder,
33 )
34
35
36 class EventForm(forms.ModelForm):
37 visible_for = forms.ModelMultipleChoiceField(
38 queryset=Group.objects.none(),
39 label=_("Visible for"),
40 help_text=_(
41 "Select groups which the event shall be visible for. Regardless, the event will be visible for users that already signed up."
42 ),
43 widget=Select2MultipleWidget,
44 required=False,
45 )
46 responsible_users = forms.ModelMultipleChoiceField(
47 queryset=UserProfile.objects.all(),
48 required=False,
49 label=_("Responsible persons"),
50 widget=MultiUserProfileWidget,
51 )
52 responsible_groups = forms.ModelMultipleChoiceField(
53 queryset=Group.objects.all(),
54 required=False,
55 label=_("Responsible groups"),
56 widget=Select2MultipleWidget,
57 )
58
59 class Meta:
60 model = Event
61 fields = ["title", "description", "location"]
62
63 def __init__(self, **kwargs):
64 user = kwargs.pop("user")
65 can_publish_for_groups = get_objects_for_user(user, "publish_event_for_group", klass=Group)
66
67 if (event := kwargs.get("instance", None)) is not None:
68 self.eventtype = event.type
69 responsible_users = get_users_with_perms(
70 event, only_with_perms_in=["change_event"], with_group_users=False
71 )
72 responsible_groups = get_groups_with_perms(event, only_with_perms_in=["change_event"])
73 visible_for = get_groups_with_perms(event, only_with_perms_in=["view_event"]).exclude(
74 id__in=responsible_groups
75 )
76
77 self.locked_visible_for_groups = set(visible_for.exclude(id__in=can_publish_for_groups))
78 kwargs["initial"] = {
79 "visible_for": visible_for.filter(id__in=can_publish_for_groups),
80 "responsible_users": responsible_users,
81 "responsible_groups": responsible_groups,
82 **kwargs.get("initial", {}),
83 }
84 else:
85 self.eventtype = kwargs.pop("eventtype")
86 kwargs["initial"] = {
87 "responsible_users": self.eventtype.preferences.get("responsible_users")
88 or get_user_model().objects.filter(pk=user.pk),
89 "responsible_groups": self.eventtype.preferences.get("responsible_groups"),
90 "visible_for": self.eventtype.preferences.get("visible_for")
91 or get_objects_for_user(user, "publish_event_for_group", klass=Group),
92 }
93 self.locked_visible_for_groups = set()
94
95 super().__init__(**kwargs)
96
97 self.fields["visible_for"].queryset = can_publish_for_groups
98 self.fields["visible_for"].disabled = not can_publish_for_groups
99 if self.locked_visible_for_groups:
100 self.fields["visible_for"].help_text = _(
101 "Select groups which the event shall be visible for. "
102 "This event is also visible for <b>{groups}</b>, "
103 "but you don't have the permission to change visibility "
104 "for those groups."
105 ).format(groups=", ".join(group.name for group in self.locked_visible_for_groups))
106
107 def save(self, commit=True):
108 self.instance.type = self.eventtype
109 event: Event = super().save(commit=commit)
110
111 add_log_recorder(event, PermissionLogRecorder("view_event", _("Visible for")))
112 add_log_recorder(event, PermissionLogRecorder("change_event", _("Responsibles")))
113
114 # delete existing permissions
115 # (better implement https://github.com/django-guardian/django-guardian/issues/654)
116 for group in get_groups_with_perms(
117 event, only_with_perms_in=["view_event", "change_event"]
118 ):
119 remove_perm("view_event", group, event)
120 remove_perm("change_event", group, event)
121 for user in get_users_with_perms(event, only_with_perms_in=["view_event", "change_event"]):
122 remove_perm("view_event", user, event)
123 remove_perm("change_event", user, event)
124
125 # assign designated permissions
126 assign_perm(
127 "view_event",
128 Group.objects.filter(
129 Q(id__in=self.cleaned_data["visible_for"])
130 | Q(id__in=self.cleaned_data["responsible_groups"])
131 | Q(id__in=(g.id for g in self.locked_visible_for_groups))
132 ),
133 event,
134 )
135 assign_perm("change_event", self.cleaned_data["responsible_groups"], event)
136 assign_perm("change_event", self.cleaned_data["responsible_users"], event)
137
138 # Assign view_event to responsible users and to non-responsible users
139 # that already have some sort of participation for the event
140 # (-> they saw and interacted with it)
141 # We can't just do users that aren't included by group permissions,
142 # as they might get removed from that group.
143 assign_perm(
144 "view_event",
145 UserProfile.objects.filter(
146 Q(pk__in=self.cleaned_data["responsible_users"])
147 | Q(
148 pk__in=LocalParticipation.objects.filter(
149 shift_id__in=event.shifts.all()
150 ).values_list("user", flat=True)
151 )
152 ),
153 event,
154 )
155
156 update_log(event, InstanceActionType.CHANGE)
157 return event
158
159
160 class ShiftForm(forms.ModelForm):
161 date = forms.DateField(widget=CustomDateInput, label=_("Date"))
162 meeting_time = forms.TimeField(widget=CustomTimeInput, label=_("Meeting time"))
163 start_time = forms.TimeField(widget=CustomTimeInput, label=_("Start time"))
164 end_time = forms.TimeField(widget=CustomTimeInput, label=_("End time"))
165
166 field_order = ["date", "meeting_time", "start_time", "end_time", "signup_method_slug"]
167
168 class Meta:
169 model = Shift
170 fields = ["meeting_time", "start_time", "end_time", "signup_method_slug"]
171
172 def __init__(self, *args, **kwargs):
173 super().__init__(*args, **kwargs)
174 signup_methods = list(enabled_signup_methods())
175
176 # make sure that if a shift uses a disabled but installed method, it is also available in the list
177 if self.instance and (method_slug := self.instance.signup_method_slug):
178 if method_slug not in map(operator.attrgetter("slug"), signup_methods):
179 try:
180 signup_methods.append(signup_method_from_slug(method_slug, self.instance))
181 except ValueError: # not installed
182 pass
183
184 self.fields["signup_method_slug"].widget = forms.Select(
185 choices=((method.slug, method.verbose_name) for method in signup_methods)
186 )
187 # this recorder may cause db queries, so it's added on Shift init, but here in the form
188 # pylint: disable=undefined-variable
189 add_log_recorder(
190 self.instance,
191 DerivedFieldsLogRecorder(
192 lambda shift: method.get_signup_info() if (method := shift.signup_method) else {}
193 ),
194 )
195
196 def clean(self):
197 cleaned_data = super().clean()
198 if {"date", "meeting_time", "start_time", "end_time"} <= set(cleaned_data.keys()):
199 cleaned_data["meeting_time"] = make_aware(
200 datetime.combine(cleaned_data["date"], cleaned_data["meeting_time"])
201 )
202 cleaned_data["start_time"] = make_aware(
203 datetime.combine(cleaned_data["date"], cleaned_data["start_time"])
204 )
205 cleaned_data["end_time"] = make_aware(
206 datetime.combine(self.cleaned_data["date"], cleaned_data["end_time"])
207 )
208 if self.cleaned_data["end_time"] <= self.cleaned_data["start_time"]:
209 cleaned_data["end_time"] = cleaned_data["end_time"] + timedelta(days=1)
210 if not cleaned_data["meeting_time"] <= cleaned_data["start_time"]:
211 raise ValidationError(_("Meeting time must not be after start time!"))
212 return cleaned_data
213
214
215 class EventDuplicationForm(forms.Form):
216 start_date = forms.DateField(
217 widget=CustomDateInput,
218 initial=date.today(),
219 help_text=_(
220 "This date will be used as the start date for recurring events that you create below, e.g. daily events will be created from this date onwards."
221 ),
222 label=_("Start date"),
223 )
224 recurrence = RecurrenceField(required=False)
225
226
227 class EventTypeForm(forms.ModelForm):
228 class Meta:
229 model = EventType
230 fields = ["title", "color"]
231 widgets = {"color": ColorInput()}
232
233 def clean_color(self):
234 regex = re.compile(r"#[a-fA-F\d]{6}")
235 if not regex.match(self.cleaned_data["color"]):
236 raise ValidationError(_("You need to enter a valid color"))
237 return self.cleaned_data["color"]
238
239
240 class EventTypePreferenceForm(PreferenceForm):
241 registry = event_type_preference_registry
242
243
244 class BasePluginFormMixin:
245 @property
246 def heading(self):
247 raise NotImplementedError
248
249 def render(self):
250 try:
251 self.helper.form_tag = False
252 except AttributeError:
253 self.helper = FormHelper(self)
254 self.helper.form_tag = False
255 return render_to_string("core/fragments/plugin_form.html", context={"form": self})
256
257 def is_function_active(self):
258 """
259 When building forms for additional features, return whether that feature is enabled for the form instance.
260 With the default template, if this is True, the collapse is expanded on page load.
261 """
262 return False
263
264
265 class EventNotificationForm(forms.Form):
266 NEW_EVENT = "new"
267 REMINDER = "remind"
268 PARTICIPANTS = "participants"
269 action = forms.ChoiceField(
270 choices=[
271 (NEW_EVENT, _("Send notification about new event to everyone")),
272 (REMINDER, _("Send reminder to everyone that is not participating")),
273 (PARTICIPANTS, _("Send a message to all participants")),
274 ],
275 widget=forms.RadioSelect,
276 label=False,
277 )
278 mail_content = forms.CharField(required=False, widget=forms.Textarea, label=_("Mail content"))
279
280 def __init__(self, *args, **kwargs):
281 self.event = kwargs.pop("event")
282 super().__init__(*args, **kwargs)
283 self.helper = FormHelper(self)
284 self.helper.layout = Layout(
285 Field("action"),
286 Field("mail_content"),
287 FormActions(
288 Submit("submit", _("Send"), css_class="float-end"),
289 AbortLink(href=self.event.get_absolute_url()),
290 ),
291 )
292
293 def clean(self):
294 if (
295 self.cleaned_data.get("action") == self.PARTICIPANTS
296 and not self.cleaned_data["mail_content"]
297 ):
298 raise ValidationError(_("You cannot send an empty mail."))
299 return super().clean()
300
[end of ephios/core/forms/events.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ephios/core/forms/events.py b/ephios/core/forms/events.py
--- a/ephios/core/forms/events.py
+++ b/ephios/core/forms/events.py
@@ -58,7 +58,7 @@
class Meta:
model = Event
- fields = ["title", "description", "location"]
+ fields = ["title", "type", "description", "location"]
def __init__(self, **kwargs):
user = kwargs.pop("user")
@@ -94,6 +94,8 @@
super().__init__(**kwargs)
+ if event is None:
+ self.fields.pop("type")
self.fields["visible_for"].queryset = can_publish_for_groups
self.fields["visible_for"].disabled = not can_publish_for_groups
if self.locked_visible_for_groups:
@@ -105,7 +107,8 @@
).format(groups=", ".join(group.name for group in self.locked_visible_for_groups))
def save(self, commit=True):
- self.instance.type = self.eventtype
+ if not self.instance.pk:
+ self.instance.type = self.eventtype
event: Event = super().save(commit=commit)
add_log_recorder(event, PermissionLogRecorder("view_event", _("Visible for")))
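
The design choice in the diff above is to expose the `type` field only when an existing event is edited, while creation keeps deriving the type from the chosen preset. A minimal standalone sketch of that "assign only on create" guard (illustrative names only, not ephios code):

```python
# Illustrative stand-ins, not ephios models; shows the "assign only on create" guard.
class Record:
    def __init__(self, pk=None, type_=None):
        self.pk = pk          # None until the record has been saved
        self.type_ = type_

def save(record, preset_type):
    if record.pk is None:             # new record: apply the preset type
        record.type_ = preset_type
    # an existing record keeps whatever type the (now editable) form submitted
    record.pk = record.pk or 1        # stand-in for the real database save
    return record

assert save(Record(), "training").type_ == "training"
assert save(Record(pk=7, type_="shift"), "training").type_ == "shift"
```
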
| {"golden_diff": "diff --git a/ephios/core/forms/events.py b/ephios/core/forms/events.py\n--- a/ephios/core/forms/events.py\n+++ b/ephios/core/forms/events.py\n@@ -58,7 +58,7 @@\n \n class Meta:\n model = Event\n- fields = [\"title\", \"description\", \"location\"]\n+ fields = [\"title\", \"type\", \"description\", \"location\"]\n \n def __init__(self, **kwargs):\n user = kwargs.pop(\"user\")\n@@ -94,6 +94,8 @@\n \n super().__init__(**kwargs)\n \n+ if event is None:\n+ self.fields.pop(\"type\")\n self.fields[\"visible_for\"].queryset = can_publish_for_groups\n self.fields[\"visible_for\"].disabled = not can_publish_for_groups\n if self.locked_visible_for_groups:\n@@ -105,7 +107,8 @@\n ).format(groups=\", \".join(group.name for group in self.locked_visible_for_groups))\n \n def save(self, commit=True):\n- self.instance.type = self.eventtype\n+ if not self.instance.pk:\n+ self.instance.type = self.eventtype\n event: Event = super().save(commit=commit)\n \n add_log_recorder(event, PermissionLogRecorder(\"view_event\", _(\"Visible for\")))\n", "issue": "Change event type in event form\nAs a planner I want to change the event's type after creation so I can correct a mistake.\n", "before_files": [{"content": "import operator\nimport re\nfrom datetime import date, datetime, timedelta\n\nfrom crispy_forms.bootstrap import FormActions\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Field, Layout, Submit\nfrom django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import Group\nfrom django.core.exceptions import ValidationError\nfrom django.db.models import Q\nfrom django.template.loader import render_to_string\nfrom django.utils.timezone import make_aware\nfrom django.utils.translation import gettext as _\nfrom django_select2.forms import Select2MultipleWidget\nfrom dynamic_preferences.forms import PreferenceForm\nfrom guardian.shortcuts import assign_perm, get_objects_for_user, get_users_with_perms, remove_perm\nfrom recurrence.forms import RecurrenceField\n\nfrom ephios.core.dynamic_preferences_registry import event_type_preference_registry\nfrom ephios.core.models import Event, EventType, LocalParticipation, Shift, UserProfile\nfrom ephios.core.signup.methods import enabled_signup_methods, signup_method_from_slug\nfrom ephios.core.widgets import MultiUserProfileWidget\nfrom ephios.extra.crispy import AbortLink\nfrom ephios.extra.permissions import get_groups_with_perms\nfrom ephios.extra.widgets import ColorInput, CustomDateInput, CustomTimeInput\nfrom ephios.modellogging.log import add_log_recorder, update_log\nfrom ephios.modellogging.recorders import (\n DerivedFieldsLogRecorder,\n InstanceActionType,\n PermissionLogRecorder,\n)\n\n\nclass EventForm(forms.ModelForm):\n visible_for = forms.ModelMultipleChoiceField(\n queryset=Group.objects.none(),\n label=_(\"Visible for\"),\n help_text=_(\n \"Select groups which the event shall be visible for. 
Regardless, the event will be visible for users that already signed up.\"\n ),\n widget=Select2MultipleWidget,\n required=False,\n )\n responsible_users = forms.ModelMultipleChoiceField(\n queryset=UserProfile.objects.all(),\n required=False,\n label=_(\"Responsible persons\"),\n widget=MultiUserProfileWidget,\n )\n responsible_groups = forms.ModelMultipleChoiceField(\n queryset=Group.objects.all(),\n required=False,\n label=_(\"Responsible groups\"),\n widget=Select2MultipleWidget,\n )\n\n class Meta:\n model = Event\n fields = [\"title\", \"description\", \"location\"]\n\n def __init__(self, **kwargs):\n user = kwargs.pop(\"user\")\n can_publish_for_groups = get_objects_for_user(user, \"publish_event_for_group\", klass=Group)\n\n if (event := kwargs.get(\"instance\", None)) is not None:\n self.eventtype = event.type\n responsible_users = get_users_with_perms(\n event, only_with_perms_in=[\"change_event\"], with_group_users=False\n )\n responsible_groups = get_groups_with_perms(event, only_with_perms_in=[\"change_event\"])\n visible_for = get_groups_with_perms(event, only_with_perms_in=[\"view_event\"]).exclude(\n id__in=responsible_groups\n )\n\n self.locked_visible_for_groups = set(visible_for.exclude(id__in=can_publish_for_groups))\n kwargs[\"initial\"] = {\n \"visible_for\": visible_for.filter(id__in=can_publish_for_groups),\n \"responsible_users\": responsible_users,\n \"responsible_groups\": responsible_groups,\n **kwargs.get(\"initial\", {}),\n }\n else:\n self.eventtype = kwargs.pop(\"eventtype\")\n kwargs[\"initial\"] = {\n \"responsible_users\": self.eventtype.preferences.get(\"responsible_users\")\n or get_user_model().objects.filter(pk=user.pk),\n \"responsible_groups\": self.eventtype.preferences.get(\"responsible_groups\"),\n \"visible_for\": self.eventtype.preferences.get(\"visible_for\")\n or get_objects_for_user(user, \"publish_event_for_group\", klass=Group),\n }\n self.locked_visible_for_groups = set()\n\n super().__init__(**kwargs)\n\n self.fields[\"visible_for\"].queryset = can_publish_for_groups\n self.fields[\"visible_for\"].disabled = not can_publish_for_groups\n if self.locked_visible_for_groups:\n self.fields[\"visible_for\"].help_text = _(\n \"Select groups which the event shall be visible for. 
\"\n \"This event is also visible for <b>{groups}</b>, \"\n \"but you don't have the permission to change visibility \"\n \"for those groups.\"\n ).format(groups=\", \".join(group.name for group in self.locked_visible_for_groups))\n\n def save(self, commit=True):\n self.instance.type = self.eventtype\n event: Event = super().save(commit=commit)\n\n add_log_recorder(event, PermissionLogRecorder(\"view_event\", _(\"Visible for\")))\n add_log_recorder(event, PermissionLogRecorder(\"change_event\", _(\"Responsibles\")))\n\n # delete existing permissions\n # (better implement https://github.com/django-guardian/django-guardian/issues/654)\n for group in get_groups_with_perms(\n event, only_with_perms_in=[\"view_event\", \"change_event\"]\n ):\n remove_perm(\"view_event\", group, event)\n remove_perm(\"change_event\", group, event)\n for user in get_users_with_perms(event, only_with_perms_in=[\"view_event\", \"change_event\"]):\n remove_perm(\"view_event\", user, event)\n remove_perm(\"change_event\", user, event)\n\n # assign designated permissions\n assign_perm(\n \"view_event\",\n Group.objects.filter(\n Q(id__in=self.cleaned_data[\"visible_for\"])\n | Q(id__in=self.cleaned_data[\"responsible_groups\"])\n | Q(id__in=(g.id for g in self.locked_visible_for_groups))\n ),\n event,\n )\n assign_perm(\"change_event\", self.cleaned_data[\"responsible_groups\"], event)\n assign_perm(\"change_event\", self.cleaned_data[\"responsible_users\"], event)\n\n # Assign view_event to responsible users and to non-responsible users\n # that already have some sort of participation for the event\n # (-> they saw and interacted with it)\n # We can't just do users that aren't included by group permissions,\n # as they might get removed from that group.\n assign_perm(\n \"view_event\",\n UserProfile.objects.filter(\n Q(pk__in=self.cleaned_data[\"responsible_users\"])\n | Q(\n pk__in=LocalParticipation.objects.filter(\n shift_id__in=event.shifts.all()\n ).values_list(\"user\", flat=True)\n )\n ),\n event,\n )\n\n update_log(event, InstanceActionType.CHANGE)\n return event\n\n\nclass ShiftForm(forms.ModelForm):\n date = forms.DateField(widget=CustomDateInput, label=_(\"Date\"))\n meeting_time = forms.TimeField(widget=CustomTimeInput, label=_(\"Meeting time\"))\n start_time = forms.TimeField(widget=CustomTimeInput, label=_(\"Start time\"))\n end_time = forms.TimeField(widget=CustomTimeInput, label=_(\"End time\"))\n\n field_order = [\"date\", \"meeting_time\", \"start_time\", \"end_time\", \"signup_method_slug\"]\n\n class Meta:\n model = Shift\n fields = [\"meeting_time\", \"start_time\", \"end_time\", \"signup_method_slug\"]\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n signup_methods = list(enabled_signup_methods())\n\n # make sure that if a shift uses a disabled but installed method, it is also available in the list\n if self.instance and (method_slug := self.instance.signup_method_slug):\n if method_slug not in map(operator.attrgetter(\"slug\"), signup_methods):\n try:\n signup_methods.append(signup_method_from_slug(method_slug, self.instance))\n except ValueError: # not installed\n pass\n\n self.fields[\"signup_method_slug\"].widget = forms.Select(\n choices=((method.slug, method.verbose_name) for method in signup_methods)\n )\n # this recorder may cause db queries, so it's added on Shift init, but here in the form\n # pylint: disable=undefined-variable\n add_log_recorder(\n self.instance,\n DerivedFieldsLogRecorder(\n lambda shift: method.get_signup_info() if (method := 
shift.signup_method) else {}\n ),\n )\n\n def clean(self):\n cleaned_data = super().clean()\n if {\"date\", \"meeting_time\", \"start_time\", \"end_time\"} <= set(cleaned_data.keys()):\n cleaned_data[\"meeting_time\"] = make_aware(\n datetime.combine(cleaned_data[\"date\"], cleaned_data[\"meeting_time\"])\n )\n cleaned_data[\"start_time\"] = make_aware(\n datetime.combine(cleaned_data[\"date\"], cleaned_data[\"start_time\"])\n )\n cleaned_data[\"end_time\"] = make_aware(\n datetime.combine(self.cleaned_data[\"date\"], cleaned_data[\"end_time\"])\n )\n if self.cleaned_data[\"end_time\"] <= self.cleaned_data[\"start_time\"]:\n cleaned_data[\"end_time\"] = cleaned_data[\"end_time\"] + timedelta(days=1)\n if not cleaned_data[\"meeting_time\"] <= cleaned_data[\"start_time\"]:\n raise ValidationError(_(\"Meeting time must not be after start time!\"))\n return cleaned_data\n\n\nclass EventDuplicationForm(forms.Form):\n start_date = forms.DateField(\n widget=CustomDateInput,\n initial=date.today(),\n help_text=_(\n \"This date will be used as the start date for recurring events that you create below, e.g. daily events will be created from this date onwards.\"\n ),\n label=_(\"Start date\"),\n )\n recurrence = RecurrenceField(required=False)\n\n\nclass EventTypeForm(forms.ModelForm):\n class Meta:\n model = EventType\n fields = [\"title\", \"color\"]\n widgets = {\"color\": ColorInput()}\n\n def clean_color(self):\n regex = re.compile(r\"#[a-fA-F\\d]{6}\")\n if not regex.match(self.cleaned_data[\"color\"]):\n raise ValidationError(_(\"You need to enter a valid color\"))\n return self.cleaned_data[\"color\"]\n\n\nclass EventTypePreferenceForm(PreferenceForm):\n registry = event_type_preference_registry\n\n\nclass BasePluginFormMixin:\n @property\n def heading(self):\n raise NotImplementedError\n\n def render(self):\n try:\n self.helper.form_tag = False\n except AttributeError:\n self.helper = FormHelper(self)\n self.helper.form_tag = False\n return render_to_string(\"core/fragments/plugin_form.html\", context={\"form\": self})\n\n def is_function_active(self):\n \"\"\"\n When building forms for additional features, return whether that feature is enabled for the form instance.\n With the default template, if this is True, the collapse is expanded on page load.\n \"\"\"\n return False\n\n\nclass EventNotificationForm(forms.Form):\n NEW_EVENT = \"new\"\n REMINDER = \"remind\"\n PARTICIPANTS = \"participants\"\n action = forms.ChoiceField(\n choices=[\n (NEW_EVENT, _(\"Send notification about new event to everyone\")),\n (REMINDER, _(\"Send reminder to everyone that is not participating\")),\n (PARTICIPANTS, _(\"Send a message to all participants\")),\n ],\n widget=forms.RadioSelect,\n label=False,\n )\n mail_content = forms.CharField(required=False, widget=forms.Textarea, label=_(\"Mail content\"))\n\n def __init__(self, *args, **kwargs):\n self.event = kwargs.pop(\"event\")\n super().__init__(*args, **kwargs)\n self.helper = FormHelper(self)\n self.helper.layout = Layout(\n Field(\"action\"),\n Field(\"mail_content\"),\n FormActions(\n Submit(\"submit\", _(\"Send\"), css_class=\"float-end\"),\n AbortLink(href=self.event.get_absolute_url()),\n ),\n )\n\n def clean(self):\n if (\n self.cleaned_data.get(\"action\") == self.PARTICIPANTS\n and not self.cleaned_data[\"mail_content\"]\n ):\n raise ValidationError(_(\"You cannot send an empty mail.\"))\n return super().clean()\n", "path": "ephios/core/forms/events.py"}]} | 3,879 | 287 |
gh_patches_debug_39470 | rasdani/github-patches | git_diff | microsoft__onnxscript-120 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set up linters for the project
It helps if we set up linters early in the development process (less big PRs for fixes in the future). We may consider: mypy, pylint, black, isort, pydocstyle, flake8, bandit and xdoctest.
</issue>
<code>
[start of setup.py]
1 # -------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License.
4 # --------------------------------------------------------------------------
5
6 # -*- coding: utf-8 -*-
7
8 from distutils.core import setup
9 from setuptools import find_packages
10 import os
11 this = os.path.dirname(__file__)
12
13 with open(os.path.join(this, "requirements.txt"), "r") as f:
14 requirements = [_ for _ in [_.strip("\r\n ")
15 for _ in f.readlines()] if _ is not None]
16
17 packages = find_packages()
18 assert packages
19
20 # read version from the package file.
21 version_str = '1.0.0'
22 with (open(os.path.join(this, 'onnxscript/__init__.py'), "r")) as f:
23 line = [_ for _ in [_.strip("\r\n ")
24 for _ in f.readlines()] if _.startswith("__version__")]
25 if len(line) > 0:
26 version_str = line[0].split('=')[1].strip('" ')
27
28 README = os.path.join(os.getcwd(), "README.md")
29 with open(README) as f:
30 long_description = f.read()
31 start_pos = long_description.find('## Contributing')
32 if start_pos >= 0:
33 long_description = long_description[:start_pos]
34
35 setup(
36 name='onnx-script',
37 version=version_str,
38 description="Authoring ONNX functions in Python",
39 long_description=long_description,
40 long_description_content_type='text/markdown',
41 license='Apache License v2.0',
42 author='Microsoft Corporation',
43 author_email='[email protected]',
44 url='https://github.com/onnx/onnx-script',
45 packages=packages,
46 include_package_data=True,
47 install_requires=requirements,
48 classifiers=[
49 'Development Status :: 4 - Beta',
50 'Environment :: Console',
51 'Intended Audience :: Developers',
52 'Operating System :: MacOS :: MacOS X',
53 'Operating System :: Microsoft :: Windows',
54 'Programming Language :: Python',
55 'Programming Language :: Python :: 3.7',
56 'Programming Language :: Python :: 3.8',
57 'Programming Language :: Python :: 3.9',
58 'License :: OSI Approved :: Apache Software License'],
59 )
60
[end of setup.py]
[start of onnxscript/__init__.py]
1 # -------------------------------------------------------------------------
2 # Copyright (c) Microsoft Corporation. All rights reserved.
3 # Licensed under the MIT License.
4 # --------------------------------------------------------------------------
5
6 __version__ = '0.1'
7
8 from .main import script, export_onnx_lib, OnnxFunction
9 from .backend.onnx_export import export2python as proto2python
10
11 __all__ = [script, export_onnx_lib, OnnxFunction, proto2python]
12
[end of onnxscript/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/onnxscript/__init__.py b/onnxscript/__init__.py
--- a/onnxscript/__init__.py
+++ b/onnxscript/__init__.py
@@ -3,9 +3,22 @@
# Licensed under the MIT License.
# --------------------------------------------------------------------------
-__version__ = '0.1'
-
+import sys
from .main import script, export_onnx_lib, OnnxFunction
from .backend.onnx_export import export2python as proto2python
-__all__ = [script, export_onnx_lib, OnnxFunction, proto2python]
+if sys.version_info[0:2] >= (3, 8):
+ import importlib.metadata as importlib_metadata
+else:
+ # TODO: Remove this when Python 3.7 is deprecated
+ import importlib_metadata
+
+try:
+ # TODO: should we algin the folder name with package name?
+ # It's onnxscript and onnx-script now. That way, we can use __package__ here.
+ __version__ = importlib_metadata.version("onnx-script")
+except importlib_metadata.PackageNotFoundError:
+ __version__ = None
+
+
+__all__ = ["script", "export_onnx_lib", "OnnxFunction", "proto2python"]
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,55 +5,26 @@
# -*- coding: utf-8 -*-
-from distutils.core import setup
-from setuptools import find_packages
import os
-this = os.path.dirname(__file__)
+import setuptools
-with open(os.path.join(this, "requirements.txt"), "r") as f:
- requirements = [_ for _ in [_.strip("\r\n ")
- for _ in f.readlines()] if _ is not None]
+this = os.path.dirname(__file__)
-packages = find_packages()
+packages = setuptools.find_packages()
assert packages
-# read version from the package file.
-version_str = '1.0.0'
-with (open(os.path.join(this, 'onnxscript/__init__.py'), "r")) as f:
- line = [_ for _ in [_.strip("\r\n ")
- for _ in f.readlines()] if _.startswith("__version__")]
- if len(line) > 0:
- version_str = line[0].split('=')[1].strip('" ')
-
README = os.path.join(os.getcwd(), "README.md")
-with open(README) as f:
+with open(README, encoding="utf-8") as f:
long_description = f.read()
start_pos = long_description.find('## Contributing')
if start_pos >= 0:
long_description = long_description[:start_pos]
-setup(
- name='onnx-script',
- version=version_str,
- description="Authoring ONNX functions in Python",
+setuptools.setup(
long_description=long_description,
long_description_content_type='text/markdown',
- license='Apache License v2.0',
- author='Microsoft Corporation',
- author_email='[email protected]',
url='https://github.com/onnx/onnx-script',
packages=packages,
include_package_data=True,
- install_requires=requirements,
- classifiers=[
- 'Development Status :: 4 - Beta',
- 'Environment :: Console',
- 'Intended Audience :: Developers',
- 'Operating System :: MacOS :: MacOS X',
- 'Operating System :: Microsoft :: Windows',
- 'Programming Language :: Python',
- 'Programming Language :: Python :: 3.7',
- 'Programming Language :: Python :: 3.8',
- 'Programming Language :: Python :: 3.9',
- 'License :: OSI Approved :: Apache Software License'],
+ package_data={"onnx-script": ["py.typed"], "onnx": ["py.typed"],},
)
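
The version handling above swaps hand-parsing of `__init__.py` for `importlib.metadata`. A small standalone sketch of that lookup pattern (generic distribution names, not tied to this repository):

```python
import sys
from typing import Optional

if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata
else:  # Python 3.7 needs the importlib-metadata backport
    import importlib_metadata  # type: ignore

def installed_version(dist_name: str) -> Optional[str]:
    """Return the installed distribution's version, or None when it is not installed."""
    try:
        return importlib_metadata.version(dist_name)
    except importlib_metadata.PackageNotFoundError:
        return None

print(installed_version("pip"))                        # e.g. "23.2.1" if pip is installed
print(installed_version("definitely-not-installed"))   # None
```
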
| {"golden_diff": "diff --git a/onnxscript/__init__.py b/onnxscript/__init__.py\n--- a/onnxscript/__init__.py\n+++ b/onnxscript/__init__.py\n@@ -3,9 +3,22 @@\n # Licensed under the MIT License.\n # --------------------------------------------------------------------------\n \n-__version__ = '0.1'\n-\n+import sys\n from .main import script, export_onnx_lib, OnnxFunction\n from .backend.onnx_export import export2python as proto2python\n \n-__all__ = [script, export_onnx_lib, OnnxFunction, proto2python]\n+if sys.version_info[0:2] >= (3, 8):\n+ import importlib.metadata as importlib_metadata\n+else:\n+ # TODO: Remove this when Python 3.7 is deprecated\n+ import importlib_metadata\n+\n+try:\n+ # TODO: should we algin the folder name with package name?\n+ # It's onnxscript and onnx-script now. That way, we can use __package__ here.\n+ __version__ = importlib_metadata.version(\"onnx-script\")\n+except importlib_metadata.PackageNotFoundError:\n+ __version__ = None\n+\n+\n+__all__ = [\"script\", \"export_onnx_lib\", \"OnnxFunction\", \"proto2python\"]\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,55 +5,26 @@\n \n # -*- coding: utf-8 -*-\n \n-from distutils.core import setup\n-from setuptools import find_packages\n import os\n-this = os.path.dirname(__file__)\n+import setuptools\n \n-with open(os.path.join(this, \"requirements.txt\"), \"r\") as f:\n- requirements = [_ for _ in [_.strip(\"\\r\\n \")\n- for _ in f.readlines()] if _ is not None]\n+this = os.path.dirname(__file__)\n \n-packages = find_packages()\n+packages = setuptools.find_packages()\n assert packages\n \n-# read version from the package file.\n-version_str = '1.0.0'\n-with (open(os.path.join(this, 'onnxscript/__init__.py'), \"r\")) as f:\n- line = [_ for _ in [_.strip(\"\\r\\n \")\n- for _ in f.readlines()] if _.startswith(\"__version__\")]\n- if len(line) > 0:\n- version_str = line[0].split('=')[1].strip('\" ')\n-\n README = os.path.join(os.getcwd(), \"README.md\")\n-with open(README) as f:\n+with open(README, encoding=\"utf-8\") as f:\n long_description = f.read()\n start_pos = long_description.find('## Contributing')\n if start_pos >= 0:\n long_description = long_description[:start_pos]\n \n-setup(\n- name='onnx-script',\n- version=version_str,\n- description=\"Authoring ONNX functions in Python\",\n+setuptools.setup(\n long_description=long_description,\n long_description_content_type='text/markdown',\n- license='Apache License v2.0',\n- author='Microsoft Corporation',\n- author_email='[email protected]',\n url='https://github.com/onnx/onnx-script',\n packages=packages,\n include_package_data=True,\n- install_requires=requirements,\n- classifiers=[\n- 'Development Status :: 4 - Beta',\n- 'Environment :: Console',\n- 'Intended Audience :: Developers',\n- 'Operating System :: MacOS :: MacOS X',\n- 'Operating System :: Microsoft :: Windows',\n- 'Programming Language :: Python',\n- 'Programming Language :: Python :: 3.7',\n- 'Programming Language :: Python :: 3.8',\n- 'Programming Language :: Python :: 3.9',\n- 'License :: OSI Approved :: Apache Software License'],\n+ package_data={\"onnx-script\": [\"py.typed\"], \"onnx\": [\"py.typed\"],},\n )\n", "issue": "Set up linters for the project\nIt helps if we set up linters early in the development process (less big PRs for fixes in the future). 
We may consider: mypy, pylint, black, isort, pydocstyle, flake8, bandit and xdoctest.\n", "before_files": [{"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n# --------------------------------------------------------------------------\n\n# -*- coding: utf-8 -*-\n\nfrom distutils.core import setup\nfrom setuptools import find_packages\nimport os\nthis = os.path.dirname(__file__)\n\nwith open(os.path.join(this, \"requirements.txt\"), \"r\") as f:\n requirements = [_ for _ in [_.strip(\"\\r\\n \")\n for _ in f.readlines()] if _ is not None]\n\npackages = find_packages()\nassert packages\n\n# read version from the package file.\nversion_str = '1.0.0'\nwith (open(os.path.join(this, 'onnxscript/__init__.py'), \"r\")) as f:\n line = [_ for _ in [_.strip(\"\\r\\n \")\n for _ in f.readlines()] if _.startswith(\"__version__\")]\n if len(line) > 0:\n version_str = line[0].split('=')[1].strip('\" ')\n\nREADME = os.path.join(os.getcwd(), \"README.md\")\nwith open(README) as f:\n long_description = f.read()\n start_pos = long_description.find('## Contributing')\n if start_pos >= 0:\n long_description = long_description[:start_pos]\n\nsetup(\n name='onnx-script',\n version=version_str,\n description=\"Authoring ONNX functions in Python\",\n long_description=long_description,\n long_description_content_type='text/markdown',\n license='Apache License v2.0',\n author='Microsoft Corporation',\n author_email='[email protected]',\n url='https://github.com/onnx/onnx-script',\n packages=packages,\n include_package_data=True,\n install_requires=requirements,\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'License :: OSI Approved :: Apache Software License'],\n)\n", "path": "setup.py"}, {"content": "# -------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n# --------------------------------------------------------------------------\n\n__version__ = '0.1'\n\nfrom .main import script, export_onnx_lib, OnnxFunction\nfrom .backend.onnx_export import export2python as proto2python\n\n__all__ = [script, export_onnx_lib, OnnxFunction, proto2python]\n", "path": "onnxscript/__init__.py"}]} | 1,287 | 844 |
gh_patches_debug_1350 | rasdani/github-patches | git_diff | fossasia__open-event-server-7659 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Preset roles deletion is allowed
**Describe the bug**
Currently the preset roles like "organizer, coorganizer etc" should not be deleted from the db. But right now it is possible to delete these entries.
**To Reproduce**
Steps to reproduce the behavior:
1. Hit the delete endpoint for role
2. Choose any of the ids pointing to any of the 7 preset roles
3. You can find deletion to be successful
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Stacktrace**
<!-- If applicable, add stacktrace to help explain your problem. -->
**Additional details (please complete the following information):**
- OS: [e.g. MacOS, Ubuntu, CentOS]
- Python Version [e.g. `3.5`, `3.6`]
- `HEAD` Commit hash [e.g. `4629c62`]
**Additional context**
<!-- Add any other context about the problem here. -->
</issue>
<code>
[start of app/api/roles.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList
2
3 from app.api.bootstrap import api
4 from app.api.helpers.db import safe_query_kwargs
5 from app.api.helpers.errors import UnprocessableEntityError
6 from app.api.schema.roles import RoleSchema
7 from app.models import db
8 from app.models.role import Role
9 from app.models.role_invite import RoleInvite
10 from app.models.users_events_role import UsersEventsRoles
11
12
13 class RoleList(ResourceList):
14 """
15 List and create role
16 """
17
18 decorators = (api.has_permission('is_admin', methods="POST"),)
19 schema = RoleSchema
20 data_layer = {'session': db.session, 'model': Role}
21
22
23 class RoleDetail(ResourceDetail):
24 """
25 Role detail by id
26 """
27
28 def before_get_object(self, view_kwargs):
29 """
30 before get method to get the resource id for fetching details
31 :param view_kwargs:
32 :return:
33 """
34 if view_kwargs.get('role_invite_id') is not None:
35 role_invite = safe_query_kwargs(RoleInvite, view_kwargs, 'role_invite_id')
36 if role_invite.role_id is not None:
37 view_kwargs['id'] = role_invite.role_id
38 else:
39 view_kwargs['id'] = None
40
41 if view_kwargs.get('users_events_roles_id') is not None:
42 users_events_role = safe_query_kwargs(
43 UsersEventsRoles,
44 view_kwargs,
45 'users_events_roles_id',
46 )
47
48 if users_events_role.role_id is not None:
49 view_kwargs['id'] = users_events_role.role_id
50 else:
51 view_kwargs['id'] = None
52
53 def before_update_object(self, role, data, view_kwargs):
54 """
55 Method to edit object
56 :param role:
57 :param data:
58 :param view_kwargs:
59 :return:
60 """
61 if data.get('name'):
62 if data['name'] in [
63 'owner',
64 'organizer',
65 'coorganizer',
66 'registrar',
67 'moderator',
68 'attendee',
69 'track_organizer',
70 ]:
71 raise UnprocessableEntityError(
72 {'data': 'name'}, "The given name cannot be updated"
73 )
74
75 def before_delete_object(self, obj, kwargs):
76 """
77 method to check proper resource name before deleting
78 :param obj:
79 :param kwargs:
80 :return:
81 """
82 if obj.name in [
83 'owner',
84 'organizer',
85 'coorganizer',
86 'registrar',
87 'moderator',
88 'attendee',
89 'track_organizer',
90 ]:
91 raise UnprocessableEntityError(
92 {'data': 'name'}, "The resource with given name cannot be deleted"
93 )
94
95 decorators = (api.has_permission('is_admin', methods="PATCH,DELETE"),)
96 schema = RoleSchema
97 data_layer = {
98 'session': db.session,
99 'model': Role,
100 'methods': {'before_get_object': before_get_object},
101 }
102
[end of app/api/roles.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/roles.py b/app/api/roles.py
--- a/app/api/roles.py
+++ b/app/api/roles.py
@@ -97,5 +97,8 @@
data_layer = {
'session': db.session,
'model': Role,
- 'methods': {'before_get_object': before_get_object},
+ 'methods': {
+ 'before_get_object': before_get_object,
+ 'before_delete_object': before_delete_object,
+ },
}
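
The decisive hunk is the last one: `before_delete_object` was already defined on the resource, but with flask-rest-jsonapi only hooks registered under `data_layer['methods']` appear to be picked up by the data layer, so the guard never ran on DELETE. A toy model of that dispatch (it mirrors the idea, not the library's actual code):

```python
class ToyDataLayer:
    """Only hooks passed in via `methods` are ever called."""
    def __init__(self, methods=None):
        self._methods = methods or {}

    def delete_object(self, obj, kwargs):
        hook = self._methods.get("before_delete_object")
        if hook is not None:
            hook(obj, kwargs)             # unregistered hooks are silently skipped
        print(f"deleted {obj!r}")

def before_delete_object(obj, kwargs):
    if obj in {"owner", "organizer", "coorganizer", "registrar",
               "moderator", "attendee", "track_organizer"}:
        raise ValueError(f"preset role {obj!r} cannot be deleted")

ToyDataLayer().delete_object("organizer", {})            # bug: deletion goes through
try:
    ToyDataLayer({"before_delete_object": before_delete_object}).delete_object("organizer", {})
except ValueError as exc:
    print(exc)                                           # fix: deletion is blocked
```
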
| {"golden_diff": "diff --git a/app/api/roles.py b/app/api/roles.py\n--- a/app/api/roles.py\n+++ b/app/api/roles.py\n@@ -97,5 +97,8 @@\n data_layer = {\n 'session': db.session,\n 'model': Role,\n- 'methods': {'before_get_object': before_get_object},\n+ 'methods': {\n+ 'before_get_object': before_get_object,\n+ 'before_delete_object': before_delete_object,\n+ },\n }\n", "issue": "Preset roles deletion is allowed\n**Describe the bug**\r\nCurrently the preset roles like \"organizer, coorganizer etc\" should not be deleted from the db. But right now it is possible to delete these entries.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Hit the delete endpoint for role \r\n2. Choose any of the ids pointing to any of the 7 preset roles\r\n3. You can find deletion to be successful\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Stacktrace**\r\n<!-- If applicable, add stacktrace to help explain your problem. -->\r\n\r\n**Additional details (please complete the following information):**\r\n - OS: [e.g. MacOS, Ubuntu, CentOS]\r\n - Python Version [e.g. `3.5`, `3.6`]\r\n - `HEAD` Commit hash [e.g. `4629c62`]\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query_kwargs\nfrom app.api.helpers.errors import UnprocessableEntityError\nfrom app.api.schema.roles import RoleSchema\nfrom app.models import db\nfrom app.models.role import Role\nfrom app.models.role_invite import RoleInvite\nfrom app.models.users_events_role import UsersEventsRoles\n\n\nclass RoleList(ResourceList):\n \"\"\"\n List and create role\n \"\"\"\n\n decorators = (api.has_permission('is_admin', methods=\"POST\"),)\n schema = RoleSchema\n data_layer = {'session': db.session, 'model': Role}\n\n\nclass RoleDetail(ResourceDetail):\n \"\"\"\n Role detail by id\n \"\"\"\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method to get the resource id for fetching details\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('role_invite_id') is not None:\n role_invite = safe_query_kwargs(RoleInvite, view_kwargs, 'role_invite_id')\n if role_invite.role_id is not None:\n view_kwargs['id'] = role_invite.role_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('users_events_roles_id') is not None:\n users_events_role = safe_query_kwargs(\n UsersEventsRoles,\n view_kwargs,\n 'users_events_roles_id',\n )\n\n if users_events_role.role_id is not None:\n view_kwargs['id'] = users_events_role.role_id\n else:\n view_kwargs['id'] = None\n\n def before_update_object(self, role, data, view_kwargs):\n \"\"\"\n Method to edit object\n :param role:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if data.get('name'):\n if data['name'] in [\n 'owner',\n 'organizer',\n 'coorganizer',\n 'registrar',\n 'moderator',\n 'attendee',\n 'track_organizer',\n ]:\n raise UnprocessableEntityError(\n {'data': 'name'}, \"The given name cannot be updated\"\n )\n\n def before_delete_object(self, obj, kwargs):\n \"\"\"\n method to check proper resource name before deleting\n :param obj:\n :param kwargs:\n :return:\n \"\"\"\n if obj.name in [\n 'owner',\n 'organizer',\n 'coorganizer',\n 'registrar',\n 'moderator',\n 'attendee',\n 'track_organizer',\n ]:\n raise UnprocessableEntityError(\n {'data': 'name'}, \"The resource with given name cannot 
be deleted\"\n )\n\n decorators = (api.has_permission('is_admin', methods=\"PATCH,DELETE\"),)\n schema = RoleSchema\n data_layer = {\n 'session': db.session,\n 'model': Role,\n 'methods': {'before_get_object': before_get_object},\n }\n", "path": "app/api/roles.py"}]} | 1,575 | 109 |
gh_patches_debug_7221 | rasdani/github-patches | git_diff | StackStorm__st2-3038 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cache filters API response for web UI
As the number of actions, rules, etc has grown in our StackStorm cluster, the UI is becoming slower. In particular, the filters view (which we rely on heavily with this many executions) is very slow to become visible and usable. It might help to cache this `/api/v1/executions/views/filters` API response and asynchronously reload this value outside of the request thread, since this is slowly-changing data.
Our typical workflow is to
1. load the main execution page
2. wait for the filters to appear
3. apply some set of filters
4. wait for the next page to load
Here's a waterfall showing a 20s load time for the filters response. This is pretty common for us now.

For reference, we have 572 rules, 1200 actions, 143 triggers, 19 trigger types, and 600k+ executions.
</issue>
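
One way to act on the caching idea above would be a small time-based cache around the expensive lookup, refreshed lazily once it expires. This is a rough sketch only (hypothetical helper, not StackStorm code); the change that was actually applied, shown further down, takes a different route and adds database indexes:

```python
import threading
import time
from functools import wraps

def ttl_cache(seconds=300):
    """Cache a result for `seconds`; recompute lazily on the first call after expiry."""
    def decorator(func):
        lock = threading.Lock()
        state = {"value": None, "expires": 0.0}

        @wraps(func)
        def wrapper():
            now = time.monotonic()
            if now >= state["expires"]:
                with lock:
                    if now >= state["expires"]:      # re-check once we hold the lock
                        state["value"] = func()
                        state["expires"] = now + seconds
            return state["value"]
        return wrapper
    return decorator

@ttl_cache(seconds=60)
def get_execution_filters():
    # stand-in for the slow distinct() queries behind /executions/views/filters
    return {"action": ["core.local"], "rule": ["sample.rule"], "status": ["succeeded"]}

print(get_execution_filters())   # computed once, then served from cache for 60 seconds
```
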
<code>
[start of st2common/st2common/models/db/execution.py]
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import copy
17
18 import mongoengine as me
19
20 from st2common import log as logging
21 from st2common.models.db import stormbase
22 from st2common.fields import ComplexDateTimeField
23 from st2common.util import date as date_utils
24 from st2common.util.secrets import get_secret_parameters
25 from st2common.util.secrets import mask_secret_parameters
26 from st2common.constants.types import ResourceType
27
28 __all__ = [
29 'ActionExecutionDB'
30 ]
31
32
33 LOG = logging.getLogger(__name__)
34
35
36 class ActionExecutionDB(stormbase.StormFoundationDB):
37 RESOURCE_TYPE = ResourceType.EXECUTION
38 UID_FIELDS = ['id']
39
40 trigger = stormbase.EscapedDictField()
41 trigger_type = stormbase.EscapedDictField()
42 trigger_instance = stormbase.EscapedDictField()
43 rule = stormbase.EscapedDictField()
44 action = stormbase.EscapedDictField(required=True)
45 runner = stormbase.EscapedDictField(required=True)
46 # Only the diff between the liveaction type and what is replicated
47 # in the ActionExecutionDB object.
48 liveaction = stormbase.EscapedDictField(required=True)
49 status = me.StringField(
50 required=True,
51 help_text='The current status of the liveaction.')
52 start_timestamp = ComplexDateTimeField(
53 default=date_utils.get_datetime_utc_now,
54 help_text='The timestamp when the liveaction was created.')
55 end_timestamp = ComplexDateTimeField(
56 help_text='The timestamp when the liveaction has finished.')
57 parameters = stormbase.EscapedDynamicField(
58 default={},
59 help_text='The key-value pairs passed as to the action runner & action.')
60 result = stormbase.EscapedDynamicField(
61 default={},
62 help_text='Action defined result.')
63 context = me.DictField(
64 default={},
65 help_text='Contextual information on the action execution.')
66 parent = me.StringField()
67 children = me.ListField(field=me.StringField())
68 log = me.ListField(field=me.DictField())
69 # Do not use URLField for web_url. If host doesn't have FQDN set, URLField validation blows.
70 web_url = me.StringField(required=False)
71
72 meta = {
73 'indexes': [
74 {'fields': ['rule.ref']},
75 {'fields': ['action.ref']},
76 {'fields': ['liveaction.id']},
77 {'fields': ['start_timestamp']},
78 {'fields': ['end_timestamp']},
79 {'fields': ['status']},
80 {'fields': ['parent']},
81 {'fields': ['-start_timestamp', 'action.ref', 'status']}
82 ]
83 }
84
85 def get_uid(self):
86 # TODO Construct od from non id field:
87 uid = [self.RESOURCE_TYPE, str(self.id)]
88 return ':'.join(uid)
89
90 def mask_secrets(self, value):
91 result = copy.deepcopy(value)
92
93 execution_parameters = value['parameters']
94 parameters = {}
95 # pylint: disable=no-member
96 parameters.update(value.get('action', {}).get('parameters', {}))
97 parameters.update(value.get('runner', {}).get('runner_parameters', {}))
98
99 secret_parameters = get_secret_parameters(parameters=parameters)
100 result['parameters'] = mask_secret_parameters(parameters=execution_parameters,
101 secret_parameters=secret_parameters)
102 return result
103
104 def get_masked_parameters(self):
105 """
106 Retrieve parameters with the secrets masked.
107
108 :rtype: ``dict``
109 """
110 serializable_dict = self.to_serializable_dict(mask_secrets=True)
111 return serializable_dict['parameters']
112
113
114 MODELS = [ActionExecutionDB]
115
[end of st2common/st2common/models/db/execution.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2common/st2common/models/db/execution.py b/st2common/st2common/models/db/execution.py
--- a/st2common/st2common/models/db/execution.py
+++ b/st2common/st2common/models/db/execution.py
@@ -78,6 +78,11 @@
{'fields': ['end_timestamp']},
{'fields': ['status']},
{'fields': ['parent']},
+ {'fields': ['rule.name']},
+ {'fields': ['runner.name']},
+ {'fields': ['trigger.name']},
+ {'fields': ['trigger_type.name']},
+ {'fields': ['context.user']},
{'fields': ['-start_timestamp', 'action.ref', 'status']}
]
}
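
Rather than caching the response, the applied change indexes the exact fields the filters view aggregates over. At the database level the new `meta` entries correspond roughly to single-field MongoDB indexes like the ones below (the database and collection names are assumptions for illustration, and a reachable MongoDB is required):

```python
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017/")
executions = client["st2"]["action_execution_d_b"]   # assumed collection name

# Roughly what mongoengine builds from {'fields': ['rule.name']} and the other new entries.
for field in ("rule.name", "runner.name", "trigger.name",
              "trigger_type.name", "context.user"):
    executions.create_index([(field, ASCENDING)])

# The filters endpoint reduces to distinct() calls, which these indexes can now serve.
print(sorted(executions.distinct("rule.name")))
```
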
| {"golden_diff": "diff --git a/st2common/st2common/models/db/execution.py b/st2common/st2common/models/db/execution.py\n--- a/st2common/st2common/models/db/execution.py\n+++ b/st2common/st2common/models/db/execution.py\n@@ -78,6 +78,11 @@\n {'fields': ['end_timestamp']},\n {'fields': ['status']},\n {'fields': ['parent']},\n+ {'fields': ['rule.name']},\n+ {'fields': ['runner.name']},\n+ {'fields': ['trigger.name']},\n+ {'fields': ['trigger_type.name']},\n+ {'fields': ['context.user']},\n {'fields': ['-start_timestamp', 'action.ref', 'status']}\n ]\n }\n", "issue": "Cache filters API response for web UI\nAs the number of actions, rules, etc has grown in our StackStorm cluster, the UI is becoming slower. In particular, the filters view (which we rely on heavily with this many executions) is very slow to become visible and usable. It might help to cache this `/api/v1/executions/views/filters` API response and asynchronously reload this value outside of the request thread, since this is slowly-changing data.\n\nOur typical workflow is to\n1. load the main execution page\n2. wait for the filters to appear\n3. apply some set of filters\n4. wait for the next page to load\n\nHere's a waterfall showing a 20s load time for the filters response. This is pretty common for us now.\n\n\nFor reference, we have 572 rules, 1200 actions, 143 triggers, 19 trigger types, and 600k+ executions.\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. 
You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport copy\n\nimport mongoengine as me\n\nfrom st2common import log as logging\nfrom st2common.models.db import stormbase\nfrom st2common.fields import ComplexDateTimeField\nfrom st2common.util import date as date_utils\nfrom st2common.util.secrets import get_secret_parameters\nfrom st2common.util.secrets import mask_secret_parameters\nfrom st2common.constants.types import ResourceType\n\n__all__ = [\n 'ActionExecutionDB'\n]\n\n\nLOG = logging.getLogger(__name__)\n\n\nclass ActionExecutionDB(stormbase.StormFoundationDB):\n RESOURCE_TYPE = ResourceType.EXECUTION\n UID_FIELDS = ['id']\n\n trigger = stormbase.EscapedDictField()\n trigger_type = stormbase.EscapedDictField()\n trigger_instance = stormbase.EscapedDictField()\n rule = stormbase.EscapedDictField()\n action = stormbase.EscapedDictField(required=True)\n runner = stormbase.EscapedDictField(required=True)\n # Only the diff between the liveaction type and what is replicated\n # in the ActionExecutionDB object.\n liveaction = stormbase.EscapedDictField(required=True)\n status = me.StringField(\n required=True,\n help_text='The current status of the liveaction.')\n start_timestamp = ComplexDateTimeField(\n default=date_utils.get_datetime_utc_now,\n help_text='The timestamp when the liveaction was created.')\n end_timestamp = ComplexDateTimeField(\n help_text='The timestamp when the liveaction has finished.')\n parameters = stormbase.EscapedDynamicField(\n default={},\n help_text='The key-value pairs passed as to the action runner & action.')\n result = stormbase.EscapedDynamicField(\n default={},\n help_text='Action defined result.')\n context = me.DictField(\n default={},\n help_text='Contextual information on the action execution.')\n parent = me.StringField()\n children = me.ListField(field=me.StringField())\n log = me.ListField(field=me.DictField())\n # Do not use URLField for web_url. 
If host doesn't have FQDN set, URLField validation blows.\n web_url = me.StringField(required=False)\n\n meta = {\n 'indexes': [\n {'fields': ['rule.ref']},\n {'fields': ['action.ref']},\n {'fields': ['liveaction.id']},\n {'fields': ['start_timestamp']},\n {'fields': ['end_timestamp']},\n {'fields': ['status']},\n {'fields': ['parent']},\n {'fields': ['-start_timestamp', 'action.ref', 'status']}\n ]\n }\n\n def get_uid(self):\n # TODO Construct od from non id field:\n uid = [self.RESOURCE_TYPE, str(self.id)]\n return ':'.join(uid)\n\n def mask_secrets(self, value):\n result = copy.deepcopy(value)\n\n execution_parameters = value['parameters']\n parameters = {}\n # pylint: disable=no-member\n parameters.update(value.get('action', {}).get('parameters', {}))\n parameters.update(value.get('runner', {}).get('runner_parameters', {}))\n\n secret_parameters = get_secret_parameters(parameters=parameters)\n result['parameters'] = mask_secret_parameters(parameters=execution_parameters,\n secret_parameters=secret_parameters)\n return result\n\n def get_masked_parameters(self):\n \"\"\"\n Retrieve parameters with the secrets masked.\n\n :rtype: ``dict``\n \"\"\"\n serializable_dict = self.to_serializable_dict(mask_secrets=True)\n return serializable_dict['parameters']\n\n\nMODELS = [ActionExecutionDB]\n", "path": "st2common/st2common/models/db/execution.py"}]} | 1,989 | 156 |
gh_patches_debug_946 | rasdani/github-patches | git_diff | xonsh__xonsh-2332 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xoreutils: echo fails with KeyError: 'help'
Any `echo` invocation fails:
```shell
$ $XONSH_SHOW_TRACEBACK = True
$ echo
xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py", line 13061, in run
r = self.f(self.args, sp_stdin, sp_stdout, sp_stderr, spec)
File "/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py", line 12896, in proxy_four
return f(args, stdin, stdout, stderr)
File "/usr/local/lib/python3.6/site-packages/xonsh/xoreutils/echo.py", line 9, in echo
if opts['help']:
KeyError: 'help'
$ echo foo
xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py", line 13061, in run
r = self.f(self.args, sp_stdin, sp_stdout, sp_stderr, spec)
File "/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py", line 12896, in proxy_four
return f(args, stdin, stdout, stderr)
File "/usr/local/lib/python3.6/site-packages/xonsh/xoreutils/echo.py", line 9, in echo
if opts['help']:
KeyError: 'help'
$ echo "foo"
xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py", line 13061, in run
r = self.f(self.args, sp_stdin, sp_stdout, sp_stderr, spec)
File "/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py", line 12896, in proxy_four
return f(args, stdin, stdout, stderr)
File "/usr/local/lib/python3.6/site-packages/xonsh/xoreutils/echo.py", line 9, in echo
if opts['help']:
KeyError: 'help'
```
Obviously, the problem is that `help` is looked up but missing: http://xon.sh/_modules/xonsh/xoreutils/echo.html#echo
</issue>
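The traceback boils down to a missing default: `_echo_parse_args` only sets `'help'` when `-h`/`--help` is passed, while `echo` reads `opts['help']` unconditionally. A minimal standalone sketch of the behaviour (a simplified copy for illustration, not the installed xonsh module):

```python
def _echo_parse_args_sketch(args, with_default=False):
    # simplified copy of the parser in echo.py, keeping only the relevant key
    out = {'escapes': False, 'end': '\n'}
    if with_default:
        out['help'] = False                     # the missing default value
    if '-h' in args or '--help' in args:
        out['help'] = True
    return out

try:
    _echo_parse_args_sketch([])['help']
except KeyError as exc:
    print('without a default:', repr(exc))      # KeyError: 'help'

print('with a default:', _echo_parse_args_sketch([], with_default=True)['help'])  # False
```

Pre-populating the key (or reading it with `opts.get('help', False)`) is enough to stop plain `echo` invocations from crashing.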
<code>
[start of xonsh/xoreutils/echo.py]
1 """Implements a simple echo command for xonsh."""
2
3
4 def echo(args, stdin, stdout, stderr):
5 """A simple echo command."""
6 opts = _echo_parse_args(args)
7 if opts is None:
8 return
9 if opts['help']:
10 print(ECHO_HELP, file=stdout)
11 return 0
12 ender = opts['end']
13 args = map(str, args)
14 if opts['escapes']:
15 args = map(lambda x: x.encode().decode('unicode_escape'), args)
16 print(*args, end=ender, file=stdout)
17
18
19 def _echo_parse_args(args):
20 out = {'escapes': False, 'end': '\n'}
21 if '-e' in args:
22 args.remove('-e')
23 out['escapes'] = True
24 if '-E' in args:
25 args.remove('-E')
26 out['escapes'] = False
27 if '-n' in args:
28 args.remove('-n')
29 out['end'] = ''
30 if '-h' in args or '--help' in args:
31 out['help'] = True
32 return out
33
34
35 ECHO_HELP = """Usage: echo [OPTIONS]... [STRING]...
36 Echo the STRING(s) to standard output.
37
38 -n do not include the trailing newline
39 -e enable interpretation of backslash escapes
40 -E disable interpretation of backslash escapes (default)
41 -h --help display this message and exit
42
43 This version of echo was written in Python for the xonsh project: http://xon.sh
44 Based on echo from GNU coreutils: http://www.gnu.org/software/coreutils/"""
45
[end of xonsh/xoreutils/echo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xonsh/xoreutils/echo.py b/xonsh/xoreutils/echo.py
--- a/xonsh/xoreutils/echo.py
+++ b/xonsh/xoreutils/echo.py
@@ -17,7 +17,7 @@
def _echo_parse_args(args):
- out = {'escapes': False, 'end': '\n'}
+ out = {'escapes': False, 'end': '\n', 'help': False}
if '-e' in args:
args.remove('-e')
out['escapes'] = True
| {"golden_diff": "diff --git a/xonsh/xoreutils/echo.py b/xonsh/xoreutils/echo.py\n--- a/xonsh/xoreutils/echo.py\n+++ b/xonsh/xoreutils/echo.py\n@@ -17,7 +17,7 @@\n \n \n def _echo_parse_args(args):\n- out = {'escapes': False, 'end': '\\n'}\n+ out = {'escapes': False, 'end': '\\n', 'help': False}\n if '-e' in args:\n args.remove('-e')\n out['escapes'] = True\n", "issue": "xoreutils: echo fails with KeyError: 'help'\nAny `echo` invocation fails:\r\n\r\n```shell\r\n$ $XONSH_SHOW_TRACEBACK = True\r\n$ echo\r\nxonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py\", line 13061, in run\r\n r = self.f(self.args, sp_stdin, sp_stdout, sp_stderr, spec)\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py\", line 12896, in proxy_four\r\n return f(args, stdin, stdout, stderr)\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/xoreutils/echo.py\", line 9, in echo\r\n if opts['help']:\r\nKeyError: 'help'\r\n$ echo foo\r\nxonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py\", line 13061, in run\r\n r = self.f(self.args, sp_stdin, sp_stdout, sp_stderr, spec)\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py\", line 12896, in proxy_four\r\n return f(args, stdin, stdout, stderr)\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/xoreutils/echo.py\", line 9, in echo\r\n if opts['help']:\r\nKeyError: 'help'\r\n$ echo \"foo\"\r\nxonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py\", line 13061, in run\r\n r = self.f(self.args, sp_stdin, sp_stdout, sp_stderr, spec)\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/__amalgam__.py\", line 12896, in proxy_four\r\n return f(args, stdin, stdout, stderr)\r\n File \"/usr/local/lib/python3.6/site-packages/xonsh/xoreutils/echo.py\", line 9, in echo\r\n if opts['help']:\r\nKeyError: 'help'\r\n```\r\n\r\nObviously, the problem is that `help` is looked up but missing: http://xon.sh/_modules/xonsh/xoreutils/echo.html#echo\n", "before_files": [{"content": "\"\"\"Implements a simple echo command for xonsh.\"\"\"\n\n\ndef echo(args, stdin, stdout, stderr):\n \"\"\"A simple echo command.\"\"\"\n opts = _echo_parse_args(args)\n if opts is None:\n return\n if opts['help']:\n print(ECHO_HELP, file=stdout)\n return 0\n ender = opts['end']\n args = map(str, args)\n if opts['escapes']:\n args = map(lambda x: x.encode().decode('unicode_escape'), args)\n print(*args, end=ender, file=stdout)\n\n\ndef _echo_parse_args(args):\n out = {'escapes': False, 'end': '\\n'}\n if '-e' in args:\n args.remove('-e')\n out['escapes'] = True\n if '-E' in args:\n args.remove('-E')\n out['escapes'] = False\n if '-n' in args:\n args.remove('-n')\n out['end'] = ''\n if '-h' in args or '--help' in args:\n out['help'] = True\n return out\n\n\nECHO_HELP = \"\"\"Usage: echo [OPTIONS]... 
[STRING]...\nEcho the STRING(s) to standard output.\n\n -n do not include the trailing newline\n -e enable interpretation of backslash escapes\n -E disable interpretation of backslash escapes (default)\n -h --help display this message and exit\n\nThis version of echo was written in Python for the xonsh project: http://xon.sh\nBased on echo from GNU coreutils: http://www.gnu.org/software/coreutils/\"\"\"\n", "path": "xonsh/xoreutils/echo.py"}]} | 1,574 | 128 |
gh_patches_debug_19919 | rasdani/github-patches | git_diff | cookiecutter__cookiecutter-1923 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add checkout details to context
* Cookiecutter version: 2.1.1
* Template project url: N/A
* Python version: 3.9.13
* Operating System: macOS
### Description:
I would like the context output available in `{{cookiecutter | pprint }}` to include the `--checkout` argument passed to the CLI, under a '_checkout' field. This will allow me to perform different actions in my post generation hooks if a non-default branch, tag, or commit was passed.
### What I've run:
```
from collections import OrderedDict
cookiecutter_context = {{cookiecutter | pprint }}
print(cookiecutter_context)
```
Related to https://github.com/cookiecutter/cookiecutter/issues/1422
</issue>
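Assuming the proposed `_checkout` field lands in the context, a post-generation hook could branch on it along these lines; the dict below is hard-coded (in a real hook it would be rendered via `{{cookiecutter | pprint }}`) and the key values are illustrative assumptions, so treat this as a sketch rather than current cookiecutter behaviour:

```python
cookiecutter_context = {
    "project_name": "demo",                       # placeholder user input
    "_template": "https://example.com/tmpl.git",  # already exposed today
    "_checkout": "v2-experimental",               # proposed: the --checkout value
}

checkout = cookiecutter_context.get("_checkout")
if checkout:
    print(f"Generated from non-default ref {checkout!r}; running extra setup steps.")
else:
    print("Default branch/tag used; nothing extra to do.")
```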
<code>
[start of cookiecutter/main.py]
1 """
2 Main entry point for the `cookiecutter` command.
3
4 The code in this module is also a good example of how to use Cookiecutter as a
5 library rather than a script.
6 """
7 import logging
8 import os
9 import re
10 import sys
11 from copy import copy
12
13 from cookiecutter.config import get_user_config
14 from cookiecutter.exceptions import InvalidModeException
15 from cookiecutter.generate import generate_context, generate_files
16 from cookiecutter.prompt import prompt_for_config
17 from cookiecutter.replay import dump, load
18 from cookiecutter.repository import determine_repo_dir
19 from cookiecutter.utils import rmtree
20
21 logger = logging.getLogger(__name__)
22
23
24 def cookiecutter(
25 template,
26 checkout=None,
27 no_input=False,
28 extra_context=None,
29 replay=None,
30 overwrite_if_exists=False,
31 output_dir='.',
32 config_file=None,
33 default_config=False,
34 password=None,
35 directory=None,
36 skip_if_file_exists=False,
37 accept_hooks=True,
38 keep_project_on_failure=False,
39 ):
40 """
41 Run Cookiecutter just as if using it from the command line.
42
43 :param template: A directory containing a project template directory,
44 or a URL to a git repository.
45 :param checkout: The branch, tag or commit ID to checkout after clone.
46 :param no_input: Do not prompt for user input.
47 Use default values for template parameters taken from `cookiecutter.json`, user
48 config and `extra_dict`. Force a refresh of cached resources.
49 :param extra_context: A dictionary of context that overrides default
50 and user configuration.
51 :param replay: Do not prompt for input, instead read from saved json. If
52 ``True`` read from the ``replay_dir``.
53 if it exists
54 :param output_dir: Where to output the generated project dir into.
55 :param config_file: User configuration file path.
56 :param default_config: Use default values rather than a config file.
57 :param password: The password to use when extracting the repository.
58 :param directory: Relative path to a cookiecutter template in a repository.
59 :param accept_hooks: Accept pre and post hooks if set to `True`.
60 :param keep_project_on_failure: If `True` keep generated project directory even when
61 generation fails
62 """
63 if replay and ((no_input is not False) or (extra_context is not None)):
64 err_msg = (
65 "You can not use both replay and no_input or extra_context "
66 "at the same time."
67 )
68 raise InvalidModeException(err_msg)
69
70 config_dict = get_user_config(
71 config_file=config_file,
72 default_config=default_config,
73 )
74
75 repo_dir, cleanup = determine_repo_dir(
76 template=template,
77 abbreviations=config_dict['abbreviations'],
78 clone_to_dir=config_dict['cookiecutters_dir'],
79 checkout=checkout,
80 no_input=no_input,
81 password=password,
82 directory=directory,
83 )
84 import_patch = _patch_import_path_for_repo(repo_dir)
85
86 template_name = os.path.basename(os.path.abspath(repo_dir))
87
88 if replay:
89 with import_patch:
90 if isinstance(replay, bool):
91 context_from_replayfile = load(config_dict['replay_dir'], template_name)
92 else:
93 path, template_name = os.path.split(os.path.splitext(replay)[0])
94 context_from_replayfile = load(path, template_name)
95
96 context_file = os.path.join(repo_dir, 'cookiecutter.json')
97 logger.debug('context_file is %s', context_file)
98
99 if replay:
100 context = generate_context(
101 context_file=context_file,
102 default_context=config_dict['default_context'],
103 extra_context=None,
104 )
105 logger.debug('replayfile context: %s', context_from_replayfile)
106 items_for_prompting = {
107 k: v
108 for k, v in context['cookiecutter'].items()
109 if k not in context_from_replayfile['cookiecutter'].keys()
110 }
111 context_for_prompting = {}
112 context_for_prompting['cookiecutter'] = items_for_prompting
113 context = context_from_replayfile
114 logger.debug('prompting context: %s', context_for_prompting)
115 else:
116 context = generate_context(
117 context_file=context_file,
118 default_context=config_dict['default_context'],
119 extra_context=extra_context,
120 )
121 context_for_prompting = context
122 # preserve the original cookiecutter options
123 # print(context['cookiecutter'])
124 context['_cookiecutter'] = {
125 k: v for k, v in context['cookiecutter'].items() if not k.startswith("_")
126 }
127
128 # prompt the user to manually configure at the command line.
129 # except when 'no-input' flag is set
130
131 with import_patch:
132 if context_for_prompting['cookiecutter']:
133 context['cookiecutter'].update(
134 prompt_for_config(context_for_prompting, no_input)
135 )
136 if "template" in context["cookiecutter"]:
137 nested_template = re.search(
138 r'\((.*?)\)', context["cookiecutter"]["template"]
139 ).group(1)
140 return cookiecutter(
141 template=os.path.join(template, nested_template),
142 checkout=checkout,
143 no_input=no_input,
144 extra_context=extra_context,
145 replay=replay,
146 overwrite_if_exists=overwrite_if_exists,
147 output_dir=output_dir,
148 config_file=config_file,
149 default_config=default_config,
150 password=password,
151 directory=directory,
152 skip_if_file_exists=skip_if_file_exists,
153 accept_hooks=accept_hooks,
154 keep_project_on_failure=keep_project_on_failure,
155 )
156
157 logger.debug('contex is %s', context)
158
159 # include template dir or url in the context dict
160 context['cookiecutter']['_template'] = template
161
162 # include output+dir in the context dict
163 context['cookiecutter']['_output_dir'] = os.path.abspath(output_dir)
164 # include repo dir or url in the context dict
165 context['cookiecutter']['_repo_dir'] = repo_dir
166
167 dump(config_dict['replay_dir'], template_name, context)
168
169 # Create project from local context and project template.
170 with import_patch:
171 result = generate_files(
172 repo_dir=repo_dir,
173 context=context,
174 overwrite_if_exists=overwrite_if_exists,
175 skip_if_file_exists=skip_if_file_exists,
176 output_dir=output_dir,
177 accept_hooks=accept_hooks,
178 keep_project_on_failure=keep_project_on_failure,
179 )
180
181 # Cleanup (if required)
182 if cleanup:
183 rmtree(repo_dir)
184
185 return result
186
187
188 class _patch_import_path_for_repo:
189 def __init__(self, repo_dir):
190 self._repo_dir = repo_dir
191 self._path = None
192
193 def __enter__(self):
194 self._path = copy(sys.path)
195 sys.path.append(self._repo_dir)
196
197 def __exit__(self, type, value, traceback):
198 sys.path = self._path
199
[end of cookiecutter/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cookiecutter/main.py b/cookiecutter/main.py
--- a/cookiecutter/main.py
+++ b/cookiecutter/main.py
@@ -154,16 +154,20 @@
keep_project_on_failure=keep_project_on_failure,
)
- logger.debug('contex is %s', context)
+ logger.debug('context is %s', context)
# include template dir or url in the context dict
context['cookiecutter']['_template'] = template
# include output+dir in the context dict
context['cookiecutter']['_output_dir'] = os.path.abspath(output_dir)
+
# include repo dir or url in the context dict
context['cookiecutter']['_repo_dir'] = repo_dir
+ # include checkout details in the context dict
+ context['cookiecutter']['_checkout'] = checkout
+
dump(config_dict['replay_dir'], template_name, context)
# Create project from local context and project template.
| {"golden_diff": "diff --git a/cookiecutter/main.py b/cookiecutter/main.py\n--- a/cookiecutter/main.py\n+++ b/cookiecutter/main.py\n@@ -154,16 +154,20 @@\n keep_project_on_failure=keep_project_on_failure,\n )\n \n- logger.debug('contex is %s', context)\n+ logger.debug('context is %s', context)\n \n # include template dir or url in the context dict\n context['cookiecutter']['_template'] = template\n \n # include output+dir in the context dict\n context['cookiecutter']['_output_dir'] = os.path.abspath(output_dir)\n+\n # include repo dir or url in the context dict\n context['cookiecutter']['_repo_dir'] = repo_dir\n \n+ # include checkout details in the context dict\n+ context['cookiecutter']['_checkout'] = checkout\n+\n dump(config_dict['replay_dir'], template_name, context)\n \n # Create project from local context and project template.\n", "issue": "Add checkout details to context\n* Cookiecutter version: 2.1.1\r\n* Template project url: N/A\r\n* Python version: 3.9.13\r\n* Operating System: macOS\r\n\r\n### Description:\r\n\r\nI would like the context output available in `{{cookiecutter | pprint }}` to include the `--checkout` argument passed to the CLI, under a '_checkout' field. This will allow me to perform different actions in my post generation hooks if a non-default branch, tag, or commit was passed.\r\n\r\n### What I've run:\r\n\r\n```\r\nfrom collections import OrderedDict\r\ncookiecutter_context = {{cookiecutter | pprint }}\r\nprint(cookiecutter_context)\r\n```\r\n\r\nRelated to https://github.com/cookiecutter/cookiecutter/issues/1422\n", "before_files": [{"content": "\"\"\"\nMain entry point for the `cookiecutter` command.\n\nThe code in this module is also a good example of how to use Cookiecutter as a\nlibrary rather than a script.\n\"\"\"\nimport logging\nimport os\nimport re\nimport sys\nfrom copy import copy\n\nfrom cookiecutter.config import get_user_config\nfrom cookiecutter.exceptions import InvalidModeException\nfrom cookiecutter.generate import generate_context, generate_files\nfrom cookiecutter.prompt import prompt_for_config\nfrom cookiecutter.replay import dump, load\nfrom cookiecutter.repository import determine_repo_dir\nfrom cookiecutter.utils import rmtree\n\nlogger = logging.getLogger(__name__)\n\n\ndef cookiecutter(\n template,\n checkout=None,\n no_input=False,\n extra_context=None,\n replay=None,\n overwrite_if_exists=False,\n output_dir='.',\n config_file=None,\n default_config=False,\n password=None,\n directory=None,\n skip_if_file_exists=False,\n accept_hooks=True,\n keep_project_on_failure=False,\n):\n \"\"\"\n Run Cookiecutter just as if using it from the command line.\n\n :param template: A directory containing a project template directory,\n or a URL to a git repository.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param no_input: Do not prompt for user input.\n Use default values for template parameters taken from `cookiecutter.json`, user\n config and `extra_dict`. Force a refresh of cached resources.\n :param extra_context: A dictionary of context that overrides default\n and user configuration.\n :param replay: Do not prompt for input, instead read from saved json. 
If\n ``True`` read from the ``replay_dir``.\n if it exists\n :param output_dir: Where to output the generated project dir into.\n :param config_file: User configuration file path.\n :param default_config: Use default values rather than a config file.\n :param password: The password to use when extracting the repository.\n :param directory: Relative path to a cookiecutter template in a repository.\n :param accept_hooks: Accept pre and post hooks if set to `True`.\n :param keep_project_on_failure: If `True` keep generated project directory even when\n generation fails\n \"\"\"\n if replay and ((no_input is not False) or (extra_context is not None)):\n err_msg = (\n \"You can not use both replay and no_input or extra_context \"\n \"at the same time.\"\n )\n raise InvalidModeException(err_msg)\n\n config_dict = get_user_config(\n config_file=config_file,\n default_config=default_config,\n )\n\n repo_dir, cleanup = determine_repo_dir(\n template=template,\n abbreviations=config_dict['abbreviations'],\n clone_to_dir=config_dict['cookiecutters_dir'],\n checkout=checkout,\n no_input=no_input,\n password=password,\n directory=directory,\n )\n import_patch = _patch_import_path_for_repo(repo_dir)\n\n template_name = os.path.basename(os.path.abspath(repo_dir))\n\n if replay:\n with import_patch:\n if isinstance(replay, bool):\n context_from_replayfile = load(config_dict['replay_dir'], template_name)\n else:\n path, template_name = os.path.split(os.path.splitext(replay)[0])\n context_from_replayfile = load(path, template_name)\n\n context_file = os.path.join(repo_dir, 'cookiecutter.json')\n logger.debug('context_file is %s', context_file)\n\n if replay:\n context = generate_context(\n context_file=context_file,\n default_context=config_dict['default_context'],\n extra_context=None,\n )\n logger.debug('replayfile context: %s', context_from_replayfile)\n items_for_prompting = {\n k: v\n for k, v in context['cookiecutter'].items()\n if k not in context_from_replayfile['cookiecutter'].keys()\n }\n context_for_prompting = {}\n context_for_prompting['cookiecutter'] = items_for_prompting\n context = context_from_replayfile\n logger.debug('prompting context: %s', context_for_prompting)\n else:\n context = generate_context(\n context_file=context_file,\n default_context=config_dict['default_context'],\n extra_context=extra_context,\n )\n context_for_prompting = context\n # preserve the original cookiecutter options\n # print(context['cookiecutter'])\n context['_cookiecutter'] = {\n k: v for k, v in context['cookiecutter'].items() if not k.startswith(\"_\")\n }\n\n # prompt the user to manually configure at the command line.\n # except when 'no-input' flag is set\n\n with import_patch:\n if context_for_prompting['cookiecutter']:\n context['cookiecutter'].update(\n prompt_for_config(context_for_prompting, no_input)\n )\n if \"template\" in context[\"cookiecutter\"]:\n nested_template = re.search(\n r'\\((.*?)\\)', context[\"cookiecutter\"][\"template\"]\n ).group(1)\n return cookiecutter(\n template=os.path.join(template, nested_template),\n checkout=checkout,\n no_input=no_input,\n extra_context=extra_context,\n replay=replay,\n overwrite_if_exists=overwrite_if_exists,\n output_dir=output_dir,\n config_file=config_file,\n default_config=default_config,\n password=password,\n directory=directory,\n skip_if_file_exists=skip_if_file_exists,\n accept_hooks=accept_hooks,\n keep_project_on_failure=keep_project_on_failure,\n )\n\n logger.debug('contex is %s', context)\n\n # include template dir or url in the context 
dict\n context['cookiecutter']['_template'] = template\n\n # include output+dir in the context dict\n context['cookiecutter']['_output_dir'] = os.path.abspath(output_dir)\n # include repo dir or url in the context dict\n context['cookiecutter']['_repo_dir'] = repo_dir\n\n dump(config_dict['replay_dir'], template_name, context)\n\n # Create project from local context and project template.\n with import_patch:\n result = generate_files(\n repo_dir=repo_dir,\n context=context,\n overwrite_if_exists=overwrite_if_exists,\n skip_if_file_exists=skip_if_file_exists,\n output_dir=output_dir,\n accept_hooks=accept_hooks,\n keep_project_on_failure=keep_project_on_failure,\n )\n\n # Cleanup (if required)\n if cleanup:\n rmtree(repo_dir)\n\n return result\n\n\nclass _patch_import_path_for_repo:\n def __init__(self, repo_dir):\n self._repo_dir = repo_dir\n self._path = None\n\n def __enter__(self):\n self._path = copy(sys.path)\n sys.path.append(self._repo_dir)\n\n def __exit__(self, type, value, traceback):\n sys.path = self._path\n", "path": "cookiecutter/main.py"}]} | 2,688 | 226 |
gh_patches_debug_56453 | rasdani/github-patches | git_diff | netket__netket-506 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ExactSampler is not resetting at construction time
Exact Sampler should call Reset at construction time
</issue>
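The complaint is that a freshly constructed sampler has no cached `_prob` distribution until `reset()` is called, so drawing samples immediately after construction fails. A toy sketch of the requested pattern (not the NetKet class itself, which follows below):

```python
import numpy as np

class ToyExactSampler:
    def __init__(self, amplitudes):
        self._amplitudes = np.asarray(amplitudes, dtype=float)
        self.reset()                      # the call the issue asks to add at construction

    def reset(self):
        self._prob = np.abs(self._amplitudes) ** 2
        self._prob /= self._prob.sum()

    def __next__(self):
        return np.random.choice(self._prob.size, p=self._prob)

sampler = ToyExactSampler([0.5, 1.0, 2.0])
print(next(sampler))                      # usable immediately, no manual reset() needed
```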
<code>
[start of netket/sampler/exact_sampler.py]
1 import numpy as _np
2 from .abstract_sampler import AbstractSampler
3 from ..machine.density_matrix import AbstractDensityMatrix
4 from ..hilbert import DoubledHilbert
5 import netket.random
6
7
8 class ExactSampler(AbstractSampler):
9 r"""
10 This sampler generates i.i.d. samples from $$|\Psi(s)|^2$$.
11 In order to perform exact sampling, $$|\Psi(s)|^2$$ is precomputed an all
12 the possible values of the quantum numbers $$s$$. This sampler has thus an
13 exponential cost with the number of degrees of freedom, and cannot be used
14 for large systems, where Metropolis-based sampling are instead a viable
15 option.
16 """
17
18 def __init__(self, machine, sample_size=16):
19 r"""
20 Constructs a new ``ExactSampler`` given a machine.
21
22 Args:
23 machine: A machine $$\Psi(s)$$ used for the sampling.
24 The probability distribution being sampled
25 from is $$F(\Psi(s))$$, where the function
26 $$F(X)$$, is arbitrary, by default $$F(X)=|X|^2$$.
27
28 sample_size: The number of independent samples to be generated at each invocation of __next__.
29 """
30 super().__init__(machine, sample_size)
31 if isinstance(machine, AbstractDensityMatrix):
32 self.hilbert = DoubledHilbert(machine.hilbert)
33 else:
34 self.hilbert = machine.hilbert
35 self._machine_pow = 2.0
36
37 def reset(self, init_random=False):
38 self._prob = _np.absolute(self.machine.to_array()) ** self.machine_pow
39 self._prob /= self._prob.sum()
40
41 def __next__(self):
42 numbers = netket.random.choice(
43 self._prob.size, size=self.sample_shape[0], replace=True, p=self._prob
44 )
45 return self.hilbert.numbers_to_states(numbers)
46
47 def generate_samples(self, n_samples, init_random=False, samples=None):
48
49 if samples is None:
50 samples = _np.zeros((n_samples, self.sample_shape[0], self.sample_shape[1]))
51
52 numbers = netket.random.choice(
53 self._prob.size,
54 size=self.sample_shape[0] * n_samples,
55 replace=True,
56 p=self._prob,
57 )
58 samples[:] = self.hilbert.numbers_to_states(numbers).reshape(samples.shape)
59
60 return samples
61
62 @property
63 def machine_pow(self):
64 return self._machine_pow
65
66 @machine_pow.setter
67 def machine_pow(self, m_power):
68 self._machine_pow = m_power
69 self.reset()
70
[end of netket/sampler/exact_sampler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netket/sampler/exact_sampler.py b/netket/sampler/exact_sampler.py
--- a/netket/sampler/exact_sampler.py
+++ b/netket/sampler/exact_sampler.py
@@ -33,6 +33,7 @@
else:
self.hilbert = machine.hilbert
self._machine_pow = 2.0
+ self.reset()
def reset(self, init_random=False):
self._prob = _np.absolute(self.machine.to_array()) ** self.machine_pow
| {"golden_diff": "diff --git a/netket/sampler/exact_sampler.py b/netket/sampler/exact_sampler.py\n--- a/netket/sampler/exact_sampler.py\n+++ b/netket/sampler/exact_sampler.py\n@@ -33,6 +33,7 @@\n else:\n self.hilbert = machine.hilbert\n self._machine_pow = 2.0\n+ self.reset()\n \n def reset(self, init_random=False):\n self._prob = _np.absolute(self.machine.to_array()) ** self.machine_pow\n", "issue": "ExactSampler is not resetting at construction time\nExact Sampler should call Reset at construction time \r\n\n", "before_files": [{"content": "import numpy as _np\nfrom .abstract_sampler import AbstractSampler\nfrom ..machine.density_matrix import AbstractDensityMatrix\nfrom ..hilbert import DoubledHilbert\nimport netket.random\n\n\nclass ExactSampler(AbstractSampler):\n r\"\"\"\n This sampler generates i.i.d. samples from $$|\\Psi(s)|^2$$.\n In order to perform exact sampling, $$|\\Psi(s)|^2$$ is precomputed an all\n the possible values of the quantum numbers $$s$$. This sampler has thus an\n exponential cost with the number of degrees of freedom, and cannot be used\n for large systems, where Metropolis-based sampling are instead a viable\n option.\n \"\"\"\n\n def __init__(self, machine, sample_size=16):\n r\"\"\"\n Constructs a new ``ExactSampler`` given a machine.\n\n Args:\n machine: A machine $$\\Psi(s)$$ used for the sampling.\n The probability distribution being sampled\n from is $$F(\\Psi(s))$$, where the function\n $$F(X)$$, is arbitrary, by default $$F(X)=|X|^2$$.\n\n sample_size: The number of independent samples to be generated at each invocation of __next__.\n \"\"\"\n super().__init__(machine, sample_size)\n if isinstance(machine, AbstractDensityMatrix):\n self.hilbert = DoubledHilbert(machine.hilbert)\n else:\n self.hilbert = machine.hilbert\n self._machine_pow = 2.0\n\n def reset(self, init_random=False):\n self._prob = _np.absolute(self.machine.to_array()) ** self.machine_pow\n self._prob /= self._prob.sum()\n\n def __next__(self):\n numbers = netket.random.choice(\n self._prob.size, size=self.sample_shape[0], replace=True, p=self._prob\n )\n return self.hilbert.numbers_to_states(numbers)\n\n def generate_samples(self, n_samples, init_random=False, samples=None):\n\n if samples is None:\n samples = _np.zeros((n_samples, self.sample_shape[0], self.sample_shape[1]))\n\n numbers = netket.random.choice(\n self._prob.size,\n size=self.sample_shape[0] * n_samples,\n replace=True,\n p=self._prob,\n )\n samples[:] = self.hilbert.numbers_to_states(numbers).reshape(samples.shape)\n\n return samples\n\n @property\n def machine_pow(self):\n return self._machine_pow\n\n @machine_pow.setter\n def machine_pow(self, m_power):\n self._machine_pow = m_power\n self.reset()\n", "path": "netket/sampler/exact_sampler.py"}]} | 1,261 | 114 |
gh_patches_debug_41803 | rasdani/github-patches | git_diff | mindsdb__mindsdb-943 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
UUID endpoint
We need a UUID endpoint, say, `/api/config/reporting_uuid`, that returns the mindsdb_native telemetry uuid to scout.
This will make less sense when/if mindsdb native starts wrapping over multiple machines running native, but we'll cross that bridge when we get there.
For now the impl is simple:
* When mindsdb starts create a predictor with a placeholder name
* Make the uuid endpoint call `.get_model_data` on that predictor and get the `report_uuid` key
* Send this key to the frontend in a json: `{"report_uuid": "{{value_of_the_report_uuid_key}}" }`
</issue>
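One way the endpoint could look, following the shape of the handlers in `namespaces/util.py` shown below; the placeholder predictor name and the `create()` call on the native interface are assumptions here (the accepted change further down this entry uses the same pattern), so read it as a sketch rather than the definitive implementation:

```python
from flask import current_app as ca
from flask_restx import Resource

from mindsdb.api.http.namespaces.configs.util import ns_conf


@ns_conf.route('/report_uuid')
class ReportUUID(Resource):
    @ns_conf.doc('get_report_uuid')
    def get(self):
        # lazily create/load a placeholder predictor and surface its telemetry uuid
        predictor = ca.mindsdb_native.create('___monitoring_metamodel___')  # placeholder name
        return {'report_uuid': predictor.report_uuid}
```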
<code>
[start of mindsdb/interfaces/native/predictor_process.py]
1 import torch.multiprocessing as mp
2 from mindsdb.interfaces.database.database import DatabaseWrapper
3
4
5 ctx = mp.get_context('spawn')
6
7
8 class PredictorProcess(ctx.Process):
9 daemon = True
10
11 def __init__(self, *args):
12 super(PredictorProcess, self).__init__(args=args)
13
14 def run(self):
15 '''
16 running at subprocess due to
17 ValueError: signal only works in main thread
18
19 this is work for celery worker here?
20 '''
21 import mindsdb_native
22
23 name, from_data, to_predict, kwargs, config, trx_type = self._args
24
25 mdb = mindsdb_native.Predictor(name=name)
26
27 if trx_type == 'learn':
28 to_predict = to_predict if isinstance(to_predict, list) else [to_predict]
29 data_source = getattr(mindsdb_native, from_data['class'])(*from_data['args'], **from_data['kwargs'])
30 mdb.learn(
31 from_data=data_source,
32 to_predict=to_predict,
33 **kwargs
34 )
35
36 stats = mindsdb_native.F.get_model_data(name)['data_analysis_v2']
37
38 DatabaseWrapper(config).register_predictors([{
39 'name': name,
40 'predict': to_predict,
41 'data_analysis': stats
42 }], setup=False)
43
44 if trx_type == 'predict':
45 if isinstance(from_data, dict):
46 when_data = from_data
47 else:
48 when_data = getattr(mindsdb_native, from_data['class'])(*from_data['args'], **from_data['kwargs'])
49
50 predictions = mdb.predict(
51 when_data=when_data,
52 **kwargs
53 )
54
55 # @TODO Figure out a way to recover this since we are using `spawn` here... simple Queue or instiating a Multiprocessing manager and registering a value in a dict using that. Or using map from a multiprocessing pool with 1x process (though using a custom process there might be it's own bucket of annoying)
56 return predictions
57
[end of mindsdb/interfaces/native/predictor_process.py]
[start of mindsdb/interfaces/native/mindsdb.py]
1 # Mindsdb native interface
2 from pathlib import Path
3 import json
4
5 from dateutil.parser import parse as parse_datetime
6
7 import mindsdb_native
8 from mindsdb_native import F
9 from mindsdb.utilities.fs import create_directory
10 from mindsdb_native.libs.constants.mindsdb import DATA_SUBTYPES
11 from mindsdb.interfaces.native.predictor_process import PredictorProcess
12 from mindsdb.interfaces.database.database import DatabaseWrapper
13
14
15 class MindsdbNative():
16 def __init__(self, config):
17 self.config = config
18 self.dbw = DatabaseWrapper(self.config)
19
20 def learn(self, name, from_data, to_predict, kwargs={}):
21 join_learn_process = kwargs.get('join_learn_process', False)
22 if 'join_learn_process' in kwargs:
23 del kwargs['join_learn_process']
24
25 predictor_dir = Path(self.config.paths['predictors']).joinpath(name)
26 create_directory(predictor_dir)
27 versions_file_path = predictor_dir.joinpath('versions.json')
28 with open(str(versions_file_path), 'wt') as f:
29 json.dump(self.config.versions, f, indent=4, sort_keys=True)
30
31 p = PredictorProcess(name, from_data, to_predict, kwargs, self.config.get_all(), 'learn')
32 p.start()
33 if join_learn_process is True:
34 p.join()
35 if p.exitcode != 0:
36 raise Exception('Learning process failed !')
37
38 def predict(self, name, when_data=None, kwargs={}):
39 # @TODO Separate into two paths, one for "normal" predictions and one for "real time" predictions. Use the multiprocessing code commented out bellow for normal (once we figure out how to return the prediction object... else use the inline code but with the "real time" predict functionality of mindsdb_native taht will be implemented later)
40 '''
41 from_data = when if when is not None else when_data
42 p = PredictorProcess(name, from_data, to_predict=None, kwargs=kwargs, config=self.config.get_all(), 'predict')
43 p.start()
44 predictions = p.join()
45 '''
46 mdb = mindsdb_native.Predictor(name=name)
47
48 predictions = mdb.predict(
49 when_data=when_data,
50 **kwargs
51 )
52
53 return predictions
54
55 def analyse_dataset(self, ds):
56 return F.analyse_dataset(ds)
57
58 def get_model_data(self, name, native_view=False):
59 model = F.get_model_data(name)
60 if native_view:
61 return model
62
63 data_analysis = model['data_analysis_v2']
64 for column in data_analysis['columns']:
65 if len(data_analysis[column]) == 0 or data_analysis[column].get('empty', {}).get('is_empty', False):
66 data_analysis[column]['typing'] = {
67 'data_subtype': DATA_SUBTYPES.INT
68 }
69
70 return model
71
72 def get_models(self, status='any'):
73 models = F.get_models()
74 if status != 'any':
75 models = [x for x in models if x['status'] == status]
76
77 for i in range(len(models)):
78 for k in ['train_end_at', 'updated_at', 'created_at']:
79 if k in models[i] and models[i][k] is not None:
80 try:
81 models[i][k] = parse_datetime(str(models[i][k]).split('.')[0])
82 except Exception:
83 models[i][k] = parse_datetime(str(models[i][k]))
84 return models
85
86 def delete_model(self, name):
87 F.delete_model(name)
88 self.dbw.unregister_predictor(name)
89
90 def rename_model(self, name, new_name):
91 self.dbw.unregister_predictor(self.get_model_data(name))
92 F.rename_model(name, new_name)
93 self.dbw.register_predictors(self.get_model_data(new_name), setup=False)
94
95 def load_model(self, fpath):
96 F.import_model(model_archive_path=fpath)
97 # @TODO How do we figure out the name here ?
98 # dbw.register_predictors(...)
99
100 def export_model(self, name):
101 F.export_predictor(model_name=name)
102
[end of mindsdb/interfaces/native/mindsdb.py]
[start of mindsdb/api/http/namespaces/util.py]
1 from flask import request
2 from flask_restx import Resource, abort
3
4 from mindsdb.api.http.namespaces.configs.util import ns_conf
5 from mindsdb import __about__
6
7 @ns_conf.route('/ping')
8 class Ping(Resource):
9 @ns_conf.doc('get_ping')
10 def get(self):
11 '''Checks server avaliable'''
12 return {'status': 'ok'}
13
14
15 @ns_conf.route('/shutdown')
16 class Shutdown(Resource):
17 @ns_conf.doc('get_shutdown')
18 def get(self):
19 '''Shutdown server'''
20 if request.host.startswith('127.0.0.1') or request.host.startswith('localhost'):
21 func = request.environ.get('werkzeug.server.shutdown')
22 if func is None:
23 return '', 500
24 func()
25 return '', 200
26 abort(403, "")
27
28
29 @ns_conf.route('/util/version')
30 class Version(Resource):
31 @ns_conf.doc('get_endpoint')
32 def get(self):
33 '''Check endpoint'''
34 return {'mindsdb': "{__about__.__version__}"}
35
36
[end of mindsdb/api/http/namespaces/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mindsdb/api/http/namespaces/util.py b/mindsdb/api/http/namespaces/util.py
--- a/mindsdb/api/http/namespaces/util.py
+++ b/mindsdb/api/http/namespaces/util.py
@@ -1,5 +1,6 @@
from flask import request
from flask_restx import Resource, abort
+from flask import current_app as ca
from mindsdb.api.http.namespaces.configs.util import ns_conf
from mindsdb import __about__
@@ -11,25 +12,12 @@
'''Checks server avaliable'''
return {'status': 'ok'}
-
-@ns_conf.route('/shutdown')
-class Shutdown(Resource):
- @ns_conf.doc('get_shutdown')
- def get(self):
- '''Shutdown server'''
- if request.host.startswith('127.0.0.1') or request.host.startswith('localhost'):
- func = request.environ.get('werkzeug.server.shutdown')
- if func is None:
- return '', 500
- func()
- return '', 200
- abort(403, "")
-
-
-@ns_conf.route('/util/version')
-class Version(Resource):
- @ns_conf.doc('get_endpoint')
+@ns_conf.route('/report_uuid')
+class ReportUUID(Resource):
+ @ns_conf.doc('get_report_uuid')
def get(self):
- '''Check endpoint'''
- return {'mindsdb': "{__about__.__version__}"}
-
+ metamodel_name = '___monitroing_metamodel___'
+ predictor = ca.mindsdb_native.create(metamodel_name)
+ return {
+ 'report_uuid': predictor.report_uuid
+ }
diff --git a/mindsdb/interfaces/native/mindsdb.py b/mindsdb/interfaces/native/mindsdb.py
--- a/mindsdb/interfaces/native/mindsdb.py
+++ b/mindsdb/interfaces/native/mindsdb.py
@@ -17,16 +17,24 @@
self.config = config
self.dbw = DatabaseWrapper(self.config)
+ def _setup_for_creation(self, name):
+ predictor_dir = Path(self.config.paths['predictors']).joinpath(name)
+ create_directory(predictor_dir)
+ versions_file_path = predictor_dir.joinpath('versions.json')
+ with open(str(versions_file_path), 'wt') as f:
+ json.dump(self.config.versions, f, indent=4, sort_keys=True)
+
+ def create(self, name):
+ self._setup_for_creation(name)
+ predictor = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})
+ return predictor
+
def learn(self, name, from_data, to_predict, kwargs={}):
join_learn_process = kwargs.get('join_learn_process', False)
if 'join_learn_process' in kwargs:
del kwargs['join_learn_process']
- predictor_dir = Path(self.config.paths['predictors']).joinpath(name)
- create_directory(predictor_dir)
- versions_file_path = predictor_dir.joinpath('versions.json')
- with open(str(versions_file_path), 'wt') as f:
- json.dump(self.config.versions, f, indent=4, sort_keys=True)
+ self._setup_for_creation(name)
p = PredictorProcess(name, from_data, to_predict, kwargs, self.config.get_all(), 'learn')
p.start()
@@ -43,7 +51,7 @@
p.start()
predictions = p.join()
'''
- mdb = mindsdb_native.Predictor(name=name)
+ mdb = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})
predictions = mdb.predict(
when_data=when_data,
diff --git a/mindsdb/interfaces/native/predictor_process.py b/mindsdb/interfaces/native/predictor_process.py
--- a/mindsdb/interfaces/native/predictor_process.py
+++ b/mindsdb/interfaces/native/predictor_process.py
@@ -22,7 +22,7 @@
name, from_data, to_predict, kwargs, config, trx_type = self._args
- mdb = mindsdb_native.Predictor(name=name)
+ mdb = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})
if trx_type == 'learn':
to_predict = to_predict if isinstance(to_predict, list) else [to_predict]
| {"golden_diff": "diff --git a/mindsdb/api/http/namespaces/util.py b/mindsdb/api/http/namespaces/util.py\n--- a/mindsdb/api/http/namespaces/util.py\n+++ b/mindsdb/api/http/namespaces/util.py\n@@ -1,5 +1,6 @@\n from flask import request\n from flask_restx import Resource, abort\n+from flask import current_app as ca\n \n from mindsdb.api.http.namespaces.configs.util import ns_conf\n from mindsdb import __about__\n@@ -11,25 +12,12 @@\n '''Checks server avaliable'''\n return {'status': 'ok'}\n \n-\n-@ns_conf.route('/shutdown')\n-class Shutdown(Resource):\n- @ns_conf.doc('get_shutdown')\n- def get(self):\n- '''Shutdown server'''\n- if request.host.startswith('127.0.0.1') or request.host.startswith('localhost'):\n- func = request.environ.get('werkzeug.server.shutdown')\n- if func is None:\n- return '', 500\n- func()\n- return '', 200\n- abort(403, \"\")\n-\n- \n-@ns_conf.route('/util/version')\n-class Version(Resource):\n- @ns_conf.doc('get_endpoint')\n+@ns_conf.route('/report_uuid')\n+class ReportUUID(Resource):\n+ @ns_conf.doc('get_report_uuid')\n def get(self):\n- '''Check endpoint'''\n- return {'mindsdb': \"{__about__.__version__}\"}\n- \n+ metamodel_name = '___monitroing_metamodel___'\n+ predictor = ca.mindsdb_native.create(metamodel_name)\n+ return {\n+ 'report_uuid': predictor.report_uuid\n+ }\ndiff --git a/mindsdb/interfaces/native/mindsdb.py b/mindsdb/interfaces/native/mindsdb.py\n--- a/mindsdb/interfaces/native/mindsdb.py\n+++ b/mindsdb/interfaces/native/mindsdb.py\n@@ -17,16 +17,24 @@\n self.config = config\n self.dbw = DatabaseWrapper(self.config)\n \n+ def _setup_for_creation(self, name):\n+ predictor_dir = Path(self.config.paths['predictors']).joinpath(name)\n+ create_directory(predictor_dir)\n+ versions_file_path = predictor_dir.joinpath('versions.json')\n+ with open(str(versions_file_path), 'wt') as f:\n+ json.dump(self.config.versions, f, indent=4, sort_keys=True)\n+\n+ def create(self, name):\n+ self._setup_for_creation(name)\n+ predictor = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})\n+ return predictor\n+\n def learn(self, name, from_data, to_predict, kwargs={}):\n join_learn_process = kwargs.get('join_learn_process', False)\n if 'join_learn_process' in kwargs:\n del kwargs['join_learn_process']\n \n- predictor_dir = Path(self.config.paths['predictors']).joinpath(name)\n- create_directory(predictor_dir)\n- versions_file_path = predictor_dir.joinpath('versions.json')\n- with open(str(versions_file_path), 'wt') as f:\n- json.dump(self.config.versions, f, indent=4, sort_keys=True)\n+ self._setup_for_creation(name)\n \n p = PredictorProcess(name, from_data, to_predict, kwargs, self.config.get_all(), 'learn')\n p.start()\n@@ -43,7 +51,7 @@\n p.start()\n predictions = p.join()\n '''\n- mdb = mindsdb_native.Predictor(name=name)\n+ mdb = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})\n \n predictions = mdb.predict(\n when_data=when_data,\ndiff --git a/mindsdb/interfaces/native/predictor_process.py b/mindsdb/interfaces/native/predictor_process.py\n--- a/mindsdb/interfaces/native/predictor_process.py\n+++ b/mindsdb/interfaces/native/predictor_process.py\n@@ -22,7 +22,7 @@\n \n name, from_data, to_predict, kwargs, config, trx_type = self._args\n \n- mdb = mindsdb_native.Predictor(name=name)\n+ mdb = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})\n \n if trx_type == 'learn':\n to_predict = to_predict if isinstance(to_predict, list) else [to_predict]\n", "issue": "UUID endpoint\nWe need a UUID endpoint, say, 
`/api/config/reporting_uuid` that returns the mindsdb_native telemtry uuid to scout.\r\n\r\nThis will make less sense when/if mindsdb native starts wrapping over multiple machines running native, but we'll cross that bridge when we get there.\r\n\r\nFor now the impl is simple:\r\n\r\n* When mindsdb starts create a predictor with a placeholder name\r\n* Make the uuid endpoint call `.get_model_data` on that predictor and get the `report_uuid` key\r\n* Send this key to the frontend in a json: `{\"report_uuid\": \"{{value_of_the_report_uuid_key}}\" }\r\n\n", "before_files": [{"content": "import torch.multiprocessing as mp\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\n\n\nctx = mp.get_context('spawn')\n\n\nclass PredictorProcess(ctx.Process):\n daemon = True\n\n def __init__(self, *args):\n super(PredictorProcess, self).__init__(args=args)\n\n def run(self):\n '''\n running at subprocess due to\n ValueError: signal only works in main thread\n\n this is work for celery worker here?\n '''\n import mindsdb_native\n\n name, from_data, to_predict, kwargs, config, trx_type = self._args\n\n mdb = mindsdb_native.Predictor(name=name)\n\n if trx_type == 'learn':\n to_predict = to_predict if isinstance(to_predict, list) else [to_predict]\n data_source = getattr(mindsdb_native, from_data['class'])(*from_data['args'], **from_data['kwargs'])\n mdb.learn(\n from_data=data_source,\n to_predict=to_predict,\n **kwargs\n )\n\n stats = mindsdb_native.F.get_model_data(name)['data_analysis_v2']\n\n DatabaseWrapper(config).register_predictors([{\n 'name': name,\n 'predict': to_predict,\n 'data_analysis': stats\n }], setup=False)\n\n if trx_type == 'predict':\n if isinstance(from_data, dict):\n when_data = from_data\n else:\n when_data = getattr(mindsdb_native, from_data['class'])(*from_data['args'], **from_data['kwargs'])\n\n predictions = mdb.predict(\n when_data=when_data,\n **kwargs\n )\n\n # @TODO Figure out a way to recover this since we are using `spawn` here... simple Queue or instiating a Multiprocessing manager and registering a value in a dict using that. 
Or using map from a multiprocessing pool with 1x process (though using a custom process there might be it's own bucket of annoying)\n return predictions\n", "path": "mindsdb/interfaces/native/predictor_process.py"}, {"content": "# Mindsdb native interface\nfrom pathlib import Path\nimport json\n\nfrom dateutil.parser import parse as parse_datetime\n\nimport mindsdb_native\nfrom mindsdb_native import F\nfrom mindsdb.utilities.fs import create_directory\nfrom mindsdb_native.libs.constants.mindsdb import DATA_SUBTYPES\nfrom mindsdb.interfaces.native.predictor_process import PredictorProcess\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\n\n\nclass MindsdbNative():\n def __init__(self, config):\n self.config = config\n self.dbw = DatabaseWrapper(self.config)\n\n def learn(self, name, from_data, to_predict, kwargs={}):\n join_learn_process = kwargs.get('join_learn_process', False)\n if 'join_learn_process' in kwargs:\n del kwargs['join_learn_process']\n\n predictor_dir = Path(self.config.paths['predictors']).joinpath(name)\n create_directory(predictor_dir)\n versions_file_path = predictor_dir.joinpath('versions.json')\n with open(str(versions_file_path), 'wt') as f:\n json.dump(self.config.versions, f, indent=4, sort_keys=True)\n\n p = PredictorProcess(name, from_data, to_predict, kwargs, self.config.get_all(), 'learn')\n p.start()\n if join_learn_process is True:\n p.join()\n if p.exitcode != 0:\n raise Exception('Learning process failed !')\n\n def predict(self, name, when_data=None, kwargs={}):\n # @TODO Separate into two paths, one for \"normal\" predictions and one for \"real time\" predictions. Use the multiprocessing code commented out bellow for normal (once we figure out how to return the prediction object... else use the inline code but with the \"real time\" predict functionality of mindsdb_native taht will be implemented later)\n '''\n from_data = when if when is not None else when_data\n p = PredictorProcess(name, from_data, to_predict=None, kwargs=kwargs, config=self.config.get_all(), 'predict')\n p.start()\n predictions = p.join()\n '''\n mdb = mindsdb_native.Predictor(name=name)\n\n predictions = mdb.predict(\n when_data=when_data,\n **kwargs\n )\n\n return predictions\n\n def analyse_dataset(self, ds):\n return F.analyse_dataset(ds)\n\n def get_model_data(self, name, native_view=False):\n model = F.get_model_data(name)\n if native_view:\n return model\n\n data_analysis = model['data_analysis_v2']\n for column in data_analysis['columns']:\n if len(data_analysis[column]) == 0 or data_analysis[column].get('empty', {}).get('is_empty', False):\n data_analysis[column]['typing'] = {\n 'data_subtype': DATA_SUBTYPES.INT\n }\n\n return model\n\n def get_models(self, status='any'):\n models = F.get_models()\n if status != 'any':\n models = [x for x in models if x['status'] == status]\n\n for i in range(len(models)):\n for k in ['train_end_at', 'updated_at', 'created_at']:\n if k in models[i] and models[i][k] is not None:\n try:\n models[i][k] = parse_datetime(str(models[i][k]).split('.')[0])\n except Exception:\n models[i][k] = parse_datetime(str(models[i][k]))\n return models\n\n def delete_model(self, name):\n F.delete_model(name)\n self.dbw.unregister_predictor(name)\n\n def rename_model(self, name, new_name):\n self.dbw.unregister_predictor(self.get_model_data(name))\n F.rename_model(name, new_name)\n self.dbw.register_predictors(self.get_model_data(new_name), setup=False)\n\n def load_model(self, fpath):\n F.import_model(model_archive_path=fpath)\n # @TODO How do 
we figure out the name here ?\n # dbw.register_predictors(...)\n\n def export_model(self, name):\n F.export_predictor(model_name=name)\n", "path": "mindsdb/interfaces/native/mindsdb.py"}, {"content": "from flask import request\nfrom flask_restx import Resource, abort\n\nfrom mindsdb.api.http.namespaces.configs.util import ns_conf\nfrom mindsdb import __about__\n\n@ns_conf.route('/ping')\nclass Ping(Resource):\n @ns_conf.doc('get_ping')\n def get(self):\n '''Checks server avaliable'''\n return {'status': 'ok'}\n\n\n@ns_conf.route('/shutdown')\nclass Shutdown(Resource):\n @ns_conf.doc('get_shutdown')\n def get(self):\n '''Shutdown server'''\n if request.host.startswith('127.0.0.1') or request.host.startswith('localhost'):\n func = request.environ.get('werkzeug.server.shutdown')\n if func is None:\n return '', 500\n func()\n return '', 200\n abort(403, \"\")\n\n \n@ns_conf.route('/util/version')\nclass Version(Resource):\n @ns_conf.doc('get_endpoint')\n def get(self):\n '''Check endpoint'''\n return {'mindsdb': \"{__about__.__version__}\"}\n \n", "path": "mindsdb/api/http/namespaces/util.py"}]} | 2,592 | 968 |
gh_patches_debug_13882 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-6032 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken command argument type parsing
#### Problem Description
It seems like our command argument type parsing does not like `from __future__ import annotations`.
#### Steps to reproduce the behavior:
1. `mitmproxy`
2. `[n] [enter] [r]`
</issue>
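For background, `from __future__ import annotations` (PEP 563) turns every annotation into a plain string, so a registry keyed by real type objects (as in `mitmproxy.types.CommandTypes.get(parameter.annotation, None)` below) stops matching. A standalone illustration, independent of mitmproxy; the function name is made up:

```python
from __future__ import annotations

import inspect
import typing


def set_flows(flows: int) -> str:      # hypothetical command-like function
    return str(flows)


param = inspect.signature(set_flows).parameters["flows"]
print(repr(param.annotation))          # 'int' -- a string, not the int type

# Resolving the strings back to concrete types, e.g. with get_type_hints:
hints = typing.get_type_hints(set_flows)
print(hints["flows"] is int)           # True
# (On Python 3.10+, inspect.signature(set_flows, eval_str=True) is another option.)
```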
<code>
[start of mitmproxy/command.py]
1 """
2 This module manages and invokes typed commands.
3 """
4 import functools
5 import inspect
6 import logging
7 import sys
8 import textwrap
9 import types
10 from collections.abc import Callable
11 from collections.abc import Iterable
12 from collections.abc import Sequence
13 from typing import Any
14 from typing import NamedTuple
15
16 import pyparsing
17
18 import mitmproxy.types
19 from mitmproxy import command_lexer
20 from mitmproxy import exceptions
21 from mitmproxy.command_lexer import unquote
22
23
24 def verify_arg_signature(f: Callable, args: Iterable[Any], kwargs: dict) -> None:
25 sig = inspect.signature(f)
26 try:
27 sig.bind(*args, **kwargs)
28 except TypeError as v:
29 raise exceptions.CommandError("command argument mismatch: %s" % v.args[0])
30
31
32 def typename(t: type) -> str:
33 """
34 Translates a type to an explanatory string.
35 """
36 if t == inspect._empty: # type: ignore
37 raise exceptions.CommandError("missing type annotation")
38 to = mitmproxy.types.CommandTypes.get(t, None)
39 if not to:
40 raise exceptions.CommandError(
41 "unsupported type: %s" % getattr(t, "__name__", t)
42 )
43 return to.display
44
45
46 def _empty_as_none(x: Any) -> Any:
47 if x == inspect.Signature.empty:
48 return None
49 return x
50
51
52 class CommandParameter(NamedTuple):
53 name: str
54 type: type
55 kind: inspect._ParameterKind = inspect.Parameter.POSITIONAL_OR_KEYWORD
56
57 def __str__(self):
58 if self.kind is inspect.Parameter.VAR_POSITIONAL:
59 return f"*{self.name}"
60 else:
61 return self.name
62
63
64 class Command:
65 name: str
66 manager: "CommandManager"
67 signature: inspect.Signature
68 help: str | None
69
70 def __init__(self, manager: "CommandManager", name: str, func: Callable) -> None:
71 self.name = name
72 self.manager = manager
73 self.func = func
74 self.signature = inspect.signature(self.func)
75
76 if func.__doc__:
77 txt = func.__doc__.strip()
78 self.help = "\n".join(textwrap.wrap(txt))
79 else:
80 self.help = None
81
82 # This fails with a CommandException if types are invalid
83 for name, parameter in self.signature.parameters.items():
84 t = parameter.annotation
85 if not mitmproxy.types.CommandTypes.get(parameter.annotation, None):
86 raise exceptions.CommandError(
87 f"Argument {name} has an unknown type {t} in {func}."
88 )
89 if self.return_type and not mitmproxy.types.CommandTypes.get(
90 self.return_type, None
91 ):
92 raise exceptions.CommandError(
93 f"Return type has an unknown type ({self.return_type}) in {func}."
94 )
95
96 @property
97 def return_type(self) -> type | None:
98 return _empty_as_none(self.signature.return_annotation)
99
100 @property
101 def parameters(self) -> list[CommandParameter]:
102 """Returns a list of CommandParameters."""
103 ret = []
104 for name, param in self.signature.parameters.items():
105 ret.append(CommandParameter(name, param.annotation, param.kind))
106 return ret
107
108 def signature_help(self) -> str:
109 params = " ".join(str(param) for param in self.parameters)
110 if self.return_type:
111 ret = f" -> {typename(self.return_type)}"
112 else:
113 ret = ""
114 return f"{self.name} {params}{ret}"
115
116 def prepare_args(self, args: Sequence[str]) -> inspect.BoundArguments:
117 try:
118 bound_arguments = self.signature.bind(*args)
119 except TypeError:
120 expected = f"Expected: {str(self.signature.parameters)}"
121 received = f"Received: {str(args)}"
122 raise exceptions.CommandError(
123 f"Command argument mismatch: \n {expected}\n {received}"
124 )
125
126 for name, value in bound_arguments.arguments.items():
127 param = self.signature.parameters[name]
128 convert_to = param.annotation
129 if param.kind == param.VAR_POSITIONAL:
130 bound_arguments.arguments[name] = tuple(
131 parsearg(self.manager, x, convert_to) for x in value
132 )
133 else:
134 bound_arguments.arguments[name] = parsearg(
135 self.manager, value, convert_to
136 )
137
138 bound_arguments.apply_defaults()
139
140 return bound_arguments
141
142 def call(self, args: Sequence[str]) -> Any:
143 """
144 Call the command with a list of arguments. At this point, all
145 arguments are strings.
146 """
147 bound_args = self.prepare_args(args)
148 ret = self.func(*bound_args.args, **bound_args.kwargs)
149 if ret is None and self.return_type is None:
150 return
151 typ = mitmproxy.types.CommandTypes.get(self.return_type)
152 assert typ
153 if not typ.is_valid(self.manager, typ, ret):
154 raise exceptions.CommandError(
155 f"{self.name} returned unexpected data - expected {typ.display}"
156 )
157 return ret
158
159
160 class ParseResult(NamedTuple):
161 value: str
162 type: type
163 valid: bool
164
165
166 class CommandManager:
167 commands: dict[str, Command]
168
169 def __init__(self, master):
170 self.master = master
171 self.commands = {}
172
173 def collect_commands(self, addon):
174 for i in dir(addon):
175 if not i.startswith("__"):
176 o = getattr(addon, i)
177 try:
178 # hasattr is not enough, see https://github.com/mitmproxy/mitmproxy/issues/3794
179 is_command = isinstance(getattr(o, "command_name", None), str)
180 except Exception:
181 pass # getattr may raise if o implements __getattr__.
182 else:
183 if is_command:
184 try:
185 self.add(o.command_name, o)
186 except exceptions.CommandError as e:
187 logging.warning(
188 f"Could not load command {o.command_name}: {e}"
189 )
190
191 def add(self, path: str, func: Callable):
192 self.commands[path] = Command(self, path, func)
193
194 @functools.lru_cache(maxsize=128)
195 def parse_partial(
196 self, cmdstr: str
197 ) -> tuple[Sequence[ParseResult], Sequence[CommandParameter]]:
198 """
199 Parse a possibly partial command. Return a sequence of ParseResults and a sequence of remainder type help items.
200 """
201
202 parts: pyparsing.ParseResults = command_lexer.expr.parseString(
203 cmdstr, parseAll=True
204 )
205
206 parsed: list[ParseResult] = []
207 next_params: list[CommandParameter] = [
208 CommandParameter("", mitmproxy.types.Cmd),
209 CommandParameter("", mitmproxy.types.CmdArgs),
210 ]
211 expected: CommandParameter | None = None
212 for part in parts:
213 if part.isspace():
214 parsed.append(
215 ParseResult(
216 value=part,
217 type=mitmproxy.types.Space,
218 valid=True,
219 )
220 )
221 continue
222
223 if expected and expected.kind is inspect.Parameter.VAR_POSITIONAL:
224 assert not next_params
225 elif next_params:
226 expected = next_params.pop(0)
227 else:
228 expected = CommandParameter("", mitmproxy.types.Unknown)
229
230 arg_is_known_command = (
231 expected.type == mitmproxy.types.Cmd and part in self.commands
232 )
233 arg_is_unknown_command = (
234 expected.type == mitmproxy.types.Cmd and part not in self.commands
235 )
236 command_args_following = (
237 next_params and next_params[0].type == mitmproxy.types.CmdArgs
238 )
239 if arg_is_known_command and command_args_following:
240 next_params = self.commands[part].parameters + next_params[1:]
241 if arg_is_unknown_command and command_args_following:
242 next_params.pop(0)
243
244 to = mitmproxy.types.CommandTypes.get(expected.type, None)
245 valid = False
246 if to:
247 try:
248 to.parse(self, expected.type, part)
249 except ValueError:
250 valid = False
251 else:
252 valid = True
253
254 parsed.append(
255 ParseResult(
256 value=part,
257 type=expected.type,
258 valid=valid,
259 )
260 )
261
262 return parsed, next_params
263
264 def call(self, command_name: str, *args: Any) -> Any:
265 """
266 Call a command with native arguments. May raise CommandError.
267 """
268 if command_name not in self.commands:
269 raise exceptions.CommandError("Unknown command: %s" % command_name)
270 return self.commands[command_name].func(*args)
271
272 def call_strings(self, command_name: str, args: Sequence[str]) -> Any:
273 """
274 Call a command using a list of string arguments. May raise CommandError.
275 """
276 if command_name not in self.commands:
277 raise exceptions.CommandError("Unknown command: %s" % command_name)
278
279 return self.commands[command_name].call(args)
280
281 def execute(self, cmdstr: str) -> Any:
282 """
283 Execute a command string. May raise CommandError.
284 """
285 parts, _ = self.parse_partial(cmdstr)
286 if not parts:
287 raise exceptions.CommandError(f"Invalid command: {cmdstr!r}")
288 command_name, *args = (
289 unquote(part.value) for part in parts if part.type != mitmproxy.types.Space
290 )
291 return self.call_strings(command_name, args)
292
293 def dump(self, out=sys.stdout) -> None:
294 cmds = list(self.commands.values())
295 cmds.sort(key=lambda x: x.signature_help())
296 for c in cmds:
297 for hl in (c.help or "").splitlines():
298 print("# " + hl, file=out)
299 print(c.signature_help(), file=out)
300 print(file=out)
301
302
303 def parsearg(manager: CommandManager, spec: str, argtype: type) -> Any:
304 """
305 Convert a string to a argument to the appropriate type.
306 """
307 t = mitmproxy.types.CommandTypes.get(argtype, None)
308 if not t:
309 raise exceptions.CommandError(f"Unsupported argument type: {argtype}")
310 try:
311 return t.parse(manager, argtype, spec)
312 except ValueError as e:
313 raise exceptions.CommandError(str(e)) from e
314
315
316 def command(name: str | None = None):
317 def decorator(function):
318 @functools.wraps(function)
319 def wrapper(*args, **kwargs):
320 verify_arg_signature(function, args, kwargs)
321 return function(*args, **kwargs)
322
323 wrapper.__dict__["command_name"] = name or function.__name__.replace("_", ".")
324 return wrapper
325
326 return decorator
327
328
329 def argument(name, type):
330 """
331 Set the type of a command argument at runtime. This is useful for more
332 specific types such as mitmproxy.types.Choice, which we cannot annotate
333 directly as mypy does not like that.
334 """
335
336 def decorator(f: types.FunctionType) -> types.FunctionType:
337 assert name in f.__annotations__
338 f.__annotations__[name] = type
339 return f
340
341 return decorator
342
[end of mitmproxy/command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/command.py b/mitmproxy/command.py
--- a/mitmproxy/command.py
+++ b/mitmproxy/command.py
@@ -22,7 +22,7 @@
def verify_arg_signature(f: Callable, args: Iterable[Any], kwargs: dict) -> None:
- sig = inspect.signature(f)
+ sig = inspect.signature(f, eval_str=True)
try:
sig.bind(*args, **kwargs)
except TypeError as v:
@@ -71,7 +71,7 @@
self.name = name
self.manager = manager
self.func = func
- self.signature = inspect.signature(self.func)
+ self.signature = inspect.signature(self.func, eval_str=True)
if func.__doc__:
txt = func.__doc__.strip()
| {"golden_diff": "diff --git a/mitmproxy/command.py b/mitmproxy/command.py\n--- a/mitmproxy/command.py\n+++ b/mitmproxy/command.py\n@@ -22,7 +22,7 @@\n \n \n def verify_arg_signature(f: Callable, args: Iterable[Any], kwargs: dict) -> None:\n- sig = inspect.signature(f)\n+ sig = inspect.signature(f, eval_str=True)\n try:\n sig.bind(*args, **kwargs)\n except TypeError as v:\n@@ -71,7 +71,7 @@\n self.name = name\n self.manager = manager\n self.func = func\n- self.signature = inspect.signature(self.func)\n+ self.signature = inspect.signature(self.func, eval_str=True)\n \n if func.__doc__:\n txt = func.__doc__.strip()\n", "issue": "Broken command argument type parsing\n#### Problem Description\r\n\r\nIt seems like our command argument type parsing does not like `from __future__ import annotations`.\r\n\r\n#### Steps to reproduce the behavior:\r\n\r\n1. `mitmproxy`\r\n2. `[n] [enter] [r]`\n", "before_files": [{"content": "\"\"\"\n This module manages and invokes typed commands.\n\"\"\"\nimport functools\nimport inspect\nimport logging\nimport sys\nimport textwrap\nimport types\nfrom collections.abc import Callable\nfrom collections.abc import Iterable\nfrom collections.abc import Sequence\nfrom typing import Any\nfrom typing import NamedTuple\n\nimport pyparsing\n\nimport mitmproxy.types\nfrom mitmproxy import command_lexer\nfrom mitmproxy import exceptions\nfrom mitmproxy.command_lexer import unquote\n\n\ndef verify_arg_signature(f: Callable, args: Iterable[Any], kwargs: dict) -> None:\n sig = inspect.signature(f)\n try:\n sig.bind(*args, **kwargs)\n except TypeError as v:\n raise exceptions.CommandError(\"command argument mismatch: %s\" % v.args[0])\n\n\ndef typename(t: type) -> str:\n \"\"\"\n Translates a type to an explanatory string.\n \"\"\"\n if t == inspect._empty: # type: ignore\n raise exceptions.CommandError(\"missing type annotation\")\n to = mitmproxy.types.CommandTypes.get(t, None)\n if not to:\n raise exceptions.CommandError(\n \"unsupported type: %s\" % getattr(t, \"__name__\", t)\n )\n return to.display\n\n\ndef _empty_as_none(x: Any) -> Any:\n if x == inspect.Signature.empty:\n return None\n return x\n\n\nclass CommandParameter(NamedTuple):\n name: str\n type: type\n kind: inspect._ParameterKind = inspect.Parameter.POSITIONAL_OR_KEYWORD\n\n def __str__(self):\n if self.kind is inspect.Parameter.VAR_POSITIONAL:\n return f\"*{self.name}\"\n else:\n return self.name\n\n\nclass Command:\n name: str\n manager: \"CommandManager\"\n signature: inspect.Signature\n help: str | None\n\n def __init__(self, manager: \"CommandManager\", name: str, func: Callable) -> None:\n self.name = name\n self.manager = manager\n self.func = func\n self.signature = inspect.signature(self.func)\n\n if func.__doc__:\n txt = func.__doc__.strip()\n self.help = \"\\n\".join(textwrap.wrap(txt))\n else:\n self.help = None\n\n # This fails with a CommandException if types are invalid\n for name, parameter in self.signature.parameters.items():\n t = parameter.annotation\n if not mitmproxy.types.CommandTypes.get(parameter.annotation, None):\n raise exceptions.CommandError(\n f\"Argument {name} has an unknown type {t} in {func}.\"\n )\n if self.return_type and not mitmproxy.types.CommandTypes.get(\n self.return_type, None\n ):\n raise exceptions.CommandError(\n f\"Return type has an unknown type ({self.return_type}) in {func}.\"\n )\n\n @property\n def return_type(self) -> type | None:\n return _empty_as_none(self.signature.return_annotation)\n\n @property\n def parameters(self) -> list[CommandParameter]:\n 
\"\"\"Returns a list of CommandParameters.\"\"\"\n ret = []\n for name, param in self.signature.parameters.items():\n ret.append(CommandParameter(name, param.annotation, param.kind))\n return ret\n\n def signature_help(self) -> str:\n params = \" \".join(str(param) for param in self.parameters)\n if self.return_type:\n ret = f\" -> {typename(self.return_type)}\"\n else:\n ret = \"\"\n return f\"{self.name} {params}{ret}\"\n\n def prepare_args(self, args: Sequence[str]) -> inspect.BoundArguments:\n try:\n bound_arguments = self.signature.bind(*args)\n except TypeError:\n expected = f\"Expected: {str(self.signature.parameters)}\"\n received = f\"Received: {str(args)}\"\n raise exceptions.CommandError(\n f\"Command argument mismatch: \\n {expected}\\n {received}\"\n )\n\n for name, value in bound_arguments.arguments.items():\n param = self.signature.parameters[name]\n convert_to = param.annotation\n if param.kind == param.VAR_POSITIONAL:\n bound_arguments.arguments[name] = tuple(\n parsearg(self.manager, x, convert_to) for x in value\n )\n else:\n bound_arguments.arguments[name] = parsearg(\n self.manager, value, convert_to\n )\n\n bound_arguments.apply_defaults()\n\n return bound_arguments\n\n def call(self, args: Sequence[str]) -> Any:\n \"\"\"\n Call the command with a list of arguments. At this point, all\n arguments are strings.\n \"\"\"\n bound_args = self.prepare_args(args)\n ret = self.func(*bound_args.args, **bound_args.kwargs)\n if ret is None and self.return_type is None:\n return\n typ = mitmproxy.types.CommandTypes.get(self.return_type)\n assert typ\n if not typ.is_valid(self.manager, typ, ret):\n raise exceptions.CommandError(\n f\"{self.name} returned unexpected data - expected {typ.display}\"\n )\n return ret\n\n\nclass ParseResult(NamedTuple):\n value: str\n type: type\n valid: bool\n\n\nclass CommandManager:\n commands: dict[str, Command]\n\n def __init__(self, master):\n self.master = master\n self.commands = {}\n\n def collect_commands(self, addon):\n for i in dir(addon):\n if not i.startswith(\"__\"):\n o = getattr(addon, i)\n try:\n # hasattr is not enough, see https://github.com/mitmproxy/mitmproxy/issues/3794\n is_command = isinstance(getattr(o, \"command_name\", None), str)\n except Exception:\n pass # getattr may raise if o implements __getattr__.\n else:\n if is_command:\n try:\n self.add(o.command_name, o)\n except exceptions.CommandError as e:\n logging.warning(\n f\"Could not load command {o.command_name}: {e}\"\n )\n\n def add(self, path: str, func: Callable):\n self.commands[path] = Command(self, path, func)\n\n @functools.lru_cache(maxsize=128)\n def parse_partial(\n self, cmdstr: str\n ) -> tuple[Sequence[ParseResult], Sequence[CommandParameter]]:\n \"\"\"\n Parse a possibly partial command. 
Return a sequence of ParseResults and a sequence of remainder type help items.\n \"\"\"\n\n parts: pyparsing.ParseResults = command_lexer.expr.parseString(\n cmdstr, parseAll=True\n )\n\n parsed: list[ParseResult] = []\n next_params: list[CommandParameter] = [\n CommandParameter(\"\", mitmproxy.types.Cmd),\n CommandParameter(\"\", mitmproxy.types.CmdArgs),\n ]\n expected: CommandParameter | None = None\n for part in parts:\n if part.isspace():\n parsed.append(\n ParseResult(\n value=part,\n type=mitmproxy.types.Space,\n valid=True,\n )\n )\n continue\n\n if expected and expected.kind is inspect.Parameter.VAR_POSITIONAL:\n assert not next_params\n elif next_params:\n expected = next_params.pop(0)\n else:\n expected = CommandParameter(\"\", mitmproxy.types.Unknown)\n\n arg_is_known_command = (\n expected.type == mitmproxy.types.Cmd and part in self.commands\n )\n arg_is_unknown_command = (\n expected.type == mitmproxy.types.Cmd and part not in self.commands\n )\n command_args_following = (\n next_params and next_params[0].type == mitmproxy.types.CmdArgs\n )\n if arg_is_known_command and command_args_following:\n next_params = self.commands[part].parameters + next_params[1:]\n if arg_is_unknown_command and command_args_following:\n next_params.pop(0)\n\n to = mitmproxy.types.CommandTypes.get(expected.type, None)\n valid = False\n if to:\n try:\n to.parse(self, expected.type, part)\n except ValueError:\n valid = False\n else:\n valid = True\n\n parsed.append(\n ParseResult(\n value=part,\n type=expected.type,\n valid=valid,\n )\n )\n\n return parsed, next_params\n\n def call(self, command_name: str, *args: Any) -> Any:\n \"\"\"\n Call a command with native arguments. May raise CommandError.\n \"\"\"\n if command_name not in self.commands:\n raise exceptions.CommandError(\"Unknown command: %s\" % command_name)\n return self.commands[command_name].func(*args)\n\n def call_strings(self, command_name: str, args: Sequence[str]) -> Any:\n \"\"\"\n Call a command using a list of string arguments. May raise CommandError.\n \"\"\"\n if command_name not in self.commands:\n raise exceptions.CommandError(\"Unknown command: %s\" % command_name)\n\n return self.commands[command_name].call(args)\n\n def execute(self, cmdstr: str) -> Any:\n \"\"\"\n Execute a command string. 
May raise CommandError.\n \"\"\"\n parts, _ = self.parse_partial(cmdstr)\n if not parts:\n raise exceptions.CommandError(f\"Invalid command: {cmdstr!r}\")\n command_name, *args = (\n unquote(part.value) for part in parts if part.type != mitmproxy.types.Space\n )\n return self.call_strings(command_name, args)\n\n def dump(self, out=sys.stdout) -> None:\n cmds = list(self.commands.values())\n cmds.sort(key=lambda x: x.signature_help())\n for c in cmds:\n for hl in (c.help or \"\").splitlines():\n print(\"# \" + hl, file=out)\n print(c.signature_help(), file=out)\n print(file=out)\n\n\ndef parsearg(manager: CommandManager, spec: str, argtype: type) -> Any:\n \"\"\"\n Convert a string to a argument to the appropriate type.\n \"\"\"\n t = mitmproxy.types.CommandTypes.get(argtype, None)\n if not t:\n raise exceptions.CommandError(f\"Unsupported argument type: {argtype}\")\n try:\n return t.parse(manager, argtype, spec)\n except ValueError as e:\n raise exceptions.CommandError(str(e)) from e\n\n\ndef command(name: str | None = None):\n def decorator(function):\n @functools.wraps(function)\n def wrapper(*args, **kwargs):\n verify_arg_signature(function, args, kwargs)\n return function(*args, **kwargs)\n\n wrapper.__dict__[\"command_name\"] = name or function.__name__.replace(\"_\", \".\")\n return wrapper\n\n return decorator\n\n\ndef argument(name, type):\n \"\"\"\n Set the type of a command argument at runtime. This is useful for more\n specific types such as mitmproxy.types.Choice, which we cannot annotate\n directly as mypy does not like that.\n \"\"\"\n\n def decorator(f: types.FunctionType) -> types.FunctionType:\n assert name in f.__annotations__\n f.__annotations__[name] = type\n return f\n\n return decorator\n", "path": "mitmproxy/command.py"}]} | 3,897 | 174 |
gh_patches_debug_1487 | rasdani/github-patches | git_diff | huggingface__diffusers-1149 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Flax] 🚨 0.7.0 not working 🚨
### Describe the bug

### Reproduction
_No response_
### Logs
_No response_
### System Info
TPU v3-8
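For context, the merged fix (shown in the diff further below) passes `flip_sin_to_cos=True` into `get_sinusoidal_embeddings`, presumably so the timestep embedding layout matches what the pretrained weights expect. A small illustration of what that flag swaps, using the same concatenation the helper above performs (arbitrary inputs, not diffusers code):

```python
import jax.numpy as jnp

scaled_time = jnp.array([[0.0, 1.0]])

sin_first = jnp.concatenate([jnp.sin(scaled_time), jnp.cos(scaled_time)], axis=1)
cos_first = jnp.concatenate([jnp.cos(scaled_time), jnp.sin(scaled_time)], axis=1)  # flip_sin_to_cos=True

print(sin_first)  # approx [[0.     0.8415 1.     0.5403]]
print(cos_first)  # approx [[1.     0.5403 0.     0.8415]]
```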
</issue>
<code>
[start of src/diffusers/models/embeddings_flax.py]
1 # Copyright 2022 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import math
15
16 import flax.linen as nn
17 import jax.numpy as jnp
18
19
20 def get_sinusoidal_embeddings(
21 timesteps: jnp.ndarray,
22 embedding_dim: int,
23 freq_shift: float = 1,
24 min_timescale: float = 1,
25 max_timescale: float = 1.0e4,
26 flip_sin_to_cos: bool = False,
27 scale: float = 1.0,
28 ) -> jnp.ndarray:
29 """Returns the positional encoding (same as Tensor2Tensor).
30 Args:
31 timesteps: a 1-D Tensor of N indices, one per batch element.
32 These may be fractional.
33 embedding_dim: The number of output channels.
34 min_timescale: The smallest time unit (should probably be 0.0).
35 max_timescale: The largest time unit.
36 Returns:
37 a Tensor of timing signals [N, num_channels]
38 """
39 assert timesteps.ndim == 1, "Timesteps should be a 1d-array"
40 assert embedding_dim % 2 == 0, f"Embedding dimension {embedding_dim} should be even"
41 num_timescales = float(embedding_dim // 2)
42 log_timescale_increment = math.log(max_timescale / min_timescale) / (num_timescales - freq_shift)
43 inv_timescales = min_timescale * jnp.exp(jnp.arange(num_timescales, dtype=jnp.float32) * -log_timescale_increment)
44 emb = jnp.expand_dims(timesteps, 1) * jnp.expand_dims(inv_timescales, 0)
45
46 # scale embeddings
47 scaled_time = scale * emb
48
49 if flip_sin_to_cos:
50 signal = jnp.concatenate([jnp.cos(scaled_time), jnp.sin(scaled_time)], axis=1)
51 else:
52 signal = jnp.concatenate([jnp.sin(scaled_time), jnp.cos(scaled_time)], axis=1)
53 signal = jnp.reshape(signal, [jnp.shape(timesteps)[0], embedding_dim])
54 return signal
55
56
57 class FlaxTimestepEmbedding(nn.Module):
58 r"""
59 Time step Embedding Module. Learns embeddings for input time steps.
60
61 Args:
62 time_embed_dim (`int`, *optional*, defaults to `32`):
63 Time step embedding dimension
64 dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):
65 Parameters `dtype`
66 """
67 time_embed_dim: int = 32
68 dtype: jnp.dtype = jnp.float32
69
70 @nn.compact
71 def __call__(self, temb):
72 temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name="linear_1")(temb)
73 temb = nn.silu(temb)
74 temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name="linear_2")(temb)
75 return temb
76
77
78 class FlaxTimesteps(nn.Module):
79 r"""
80 Wrapper Module for sinusoidal Time step Embeddings as described in https://arxiv.org/abs/2006.11239
81
82 Args:
83 dim (`int`, *optional*, defaults to `32`):
84 Time step embedding dimension
85 """
86 dim: int = 32
87 freq_shift: float = 1
88
89 @nn.compact
90 def __call__(self, timesteps):
91 return get_sinusoidal_embeddings(timesteps, embedding_dim=self.dim, freq_shift=self.freq_shift)
92
[end of src/diffusers/models/embeddings_flax.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/diffusers/models/embeddings_flax.py b/src/diffusers/models/embeddings_flax.py
--- a/src/diffusers/models/embeddings_flax.py
+++ b/src/diffusers/models/embeddings_flax.py
@@ -88,4 +88,6 @@
@nn.compact
def __call__(self, timesteps):
- return get_sinusoidal_embeddings(timesteps, embedding_dim=self.dim, freq_shift=self.freq_shift)
+ return get_sinusoidal_embeddings(
+ timesteps, embedding_dim=self.dim, freq_shift=self.freq_shift, flip_sin_to_cos=True
+ )
| {"golden_diff": "diff --git a/src/diffusers/models/embeddings_flax.py b/src/diffusers/models/embeddings_flax.py\n--- a/src/diffusers/models/embeddings_flax.py\n+++ b/src/diffusers/models/embeddings_flax.py\n@@ -88,4 +88,6 @@\n \n @nn.compact\n def __call__(self, timesteps):\n- return get_sinusoidal_embeddings(timesteps, embedding_dim=self.dim, freq_shift=self.freq_shift)\n+ return get_sinusoidal_embeddings(\n+ timesteps, embedding_dim=self.dim, freq_shift=self.freq_shift, flip_sin_to_cos=True\n+ )\n", "issue": "[Flax] \ud83d\udea8 0.7.0 not working \ud83d\udea8\n### Describe the bug\n\n\r\n\n\n### Reproduction\n\n_No response_\n\n### Logs\n\n_No response_\n\n### System Info\n\nTPU v3-8\n", "before_files": [{"content": "# Copyright 2022 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport math\n\nimport flax.linen as nn\nimport jax.numpy as jnp\n\n\ndef get_sinusoidal_embeddings(\n timesteps: jnp.ndarray,\n embedding_dim: int,\n freq_shift: float = 1,\n min_timescale: float = 1,\n max_timescale: float = 1.0e4,\n flip_sin_to_cos: bool = False,\n scale: float = 1.0,\n) -> jnp.ndarray:\n \"\"\"Returns the positional encoding (same as Tensor2Tensor).\n Args:\n timesteps: a 1-D Tensor of N indices, one per batch element.\n These may be fractional.\n embedding_dim: The number of output channels.\n min_timescale: The smallest time unit (should probably be 0.0).\n max_timescale: The largest time unit.\n Returns:\n a Tensor of timing signals [N, num_channels]\n \"\"\"\n assert timesteps.ndim == 1, \"Timesteps should be a 1d-array\"\n assert embedding_dim % 2 == 0, f\"Embedding dimension {embedding_dim} should be even\"\n num_timescales = float(embedding_dim // 2)\n log_timescale_increment = math.log(max_timescale / min_timescale) / (num_timescales - freq_shift)\n inv_timescales = min_timescale * jnp.exp(jnp.arange(num_timescales, dtype=jnp.float32) * -log_timescale_increment)\n emb = jnp.expand_dims(timesteps, 1) * jnp.expand_dims(inv_timescales, 0)\n\n # scale embeddings\n scaled_time = scale * emb\n\n if flip_sin_to_cos:\n signal = jnp.concatenate([jnp.cos(scaled_time), jnp.sin(scaled_time)], axis=1)\n else:\n signal = jnp.concatenate([jnp.sin(scaled_time), jnp.cos(scaled_time)], axis=1)\n signal = jnp.reshape(signal, [jnp.shape(timesteps)[0], embedding_dim])\n return signal\n\n\nclass FlaxTimestepEmbedding(nn.Module):\n r\"\"\"\n Time step Embedding Module. 
Learns embeddings for input time steps.\n\n Args:\n time_embed_dim (`int`, *optional*, defaults to `32`):\n Time step embedding dimension\n dtype (:obj:`jnp.dtype`, *optional*, defaults to jnp.float32):\n Parameters `dtype`\n \"\"\"\n time_embed_dim: int = 32\n dtype: jnp.dtype = jnp.float32\n\n @nn.compact\n def __call__(self, temb):\n temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name=\"linear_1\")(temb)\n temb = nn.silu(temb)\n temb = nn.Dense(self.time_embed_dim, dtype=self.dtype, name=\"linear_2\")(temb)\n return temb\n\n\nclass FlaxTimesteps(nn.Module):\n r\"\"\"\n Wrapper Module for sinusoidal Time step Embeddings as described in https://arxiv.org/abs/2006.11239\n\n Args:\n dim (`int`, *optional*, defaults to `32`):\n Time step embedding dimension\n \"\"\"\n dim: int = 32\n freq_shift: float = 1\n\n @nn.compact\n def __call__(self, timesteps):\n return get_sinusoidal_embeddings(timesteps, embedding_dim=self.dim, freq_shift=self.freq_shift)\n", "path": "src/diffusers/models/embeddings_flax.py"}]} | 1,739 | 137 |
gh_patches_debug_48383 | rasdani/github-patches | git_diff | DDMAL__CantusDB-900 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
we need to re-add a restart policy to docker-compose.yml
A recent change to docker-compose.yml removed the `restart: always` policy we added to our containers a couple of weeks ago. We should reinstate this.
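For reference, re-adding the policy is a one-line-per-service change in the compose file, along the lines of the sketch below (service names and images here are placeholders, not the project's actual docker-compose.yml):

```yaml
services:
  django:            # placeholder service name
    build: ./django
    restart: always  # restart the container automatically if it exits or the host reboots
  postgres:          # placeholder service name
    image: postgres:14
    restart: always
```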
</issue>
<code>
[start of django/cantusdb_project/main_app/widgets.py]
1 from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput
2 from django.utils.safestring import mark_safe
3
4 class TextInputWidget(TextInput):
5 def __init__(self):
6 self.attrs = {"class": "form-control form-control-sm"}
7
8
9 class SelectWidget(Select):
10 """
11 not used, this widget does work, but we cannot order the choices by name
12 """
13
14 def __init__(self):
15 attrs = {"class": "form-control custom-select custom-select-sm"}
16 super().__init__(attrs=attrs)
17 # super().choices = choices
18 # self.choices = super().choices
19
20
21 class TextAreaWidget(Textarea):
22 def __init__(self):
23 self.attrs = {"class": "form-control", "rows": "3"}
24
25
26 class VolpianoAreaWidget(Textarea):
27 def __init__(self):
28 self.attrs = {
29 "class": "form-control",
30 "rows": "1.5",
31 "style": "font-family: Volpiano; font-size: xx-large",
32 }
33
34
35 class VolpianoInputWidget(TextInput):
36 def __init__(self):
37 self.attrs = {
38 "class": "form-control form-control-sm",
39 "style": "font-family: Volpiano; font-size: xx-large",
40 }
41
42
43 class CheckboxWidget(CheckboxInput):
44 pass
45
46
47 class AdminTextAreaWidget(Textarea):
48 def __init__(self):
49 self.attrs = {"class": "form-control", "rows": 10, "cols": 75}
50
51 def render(self, name, value, attrs=None, renderer=None):
52 return super().render(name, value, attrs=self.attrs) + mark_safe(
53 '<span style="color: red; font-weight: bold;"> * </span>'
54 )
55
56
57 class AdminTextInputWidget(TextInputWidget):
58 def render(self, name, value, attrs=None, renderer=None):
59 return super().render(name, value) + mark_safe(
60 '<span style="color: red; font-weight: bold;"> * </span>'
61 )
62
[end of django/cantusdb_project/main_app/widgets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/widgets.py b/django/cantusdb_project/main_app/widgets.py
--- a/django/cantusdb_project/main_app/widgets.py
+++ b/django/cantusdb_project/main_app/widgets.py
@@ -1,6 +1,7 @@
from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput
from django.utils.safestring import mark_safe
+
class TextInputWidget(TextInput):
def __init__(self):
self.attrs = {"class": "form-control form-control-sm"}
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/widgets.py b/django/cantusdb_project/main_app/widgets.py\n--- a/django/cantusdb_project/main_app/widgets.py\n+++ b/django/cantusdb_project/main_app/widgets.py\n@@ -1,6 +1,7 @@\n from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput\n from django.utils.safestring import mark_safe\n \n+\n class TextInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\"class\": \"form-control form-control-sm\"}\n", "issue": "we need to re-add a restart policy to docker-compose.yml\nA recent change to docker-compose.yml removed the `restart: always` policy we added to our containers a couple of weeks ago. We should re-instate this.\n", "before_files": [{"content": "from django.forms.widgets import TextInput, Select, Textarea, CheckboxInput\nfrom django.utils.safestring import mark_safe\n\nclass TextInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\"class\": \"form-control form-control-sm\"}\n\n\nclass SelectWidget(Select):\n \"\"\"\n not used, this widget does work, but we cannot order the choices by name\n \"\"\"\n\n def __init__(self):\n attrs = {\"class\": \"form-control custom-select custom-select-sm\"}\n super().__init__(attrs=attrs)\n # super().choices = choices\n # self.choices = super().choices\n\n\nclass TextAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\"class\": \"form-control\", \"rows\": \"3\"}\n\n\nclass VolpianoAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\n \"class\": \"form-control\",\n \"rows\": \"1.5\",\n \"style\": \"font-family: Volpiano; font-size: xx-large\",\n }\n\n\nclass VolpianoInputWidget(TextInput):\n def __init__(self):\n self.attrs = {\n \"class\": \"form-control form-control-sm\",\n \"style\": \"font-family: Volpiano; font-size: xx-large\",\n }\n\n\nclass CheckboxWidget(CheckboxInput):\n pass\n\n\nclass AdminTextAreaWidget(Textarea):\n def __init__(self):\n self.attrs = {\"class\": \"form-control\", \"rows\": 10, \"cols\": 75}\n\n def render(self, name, value, attrs=None, renderer=None):\n return super().render(name, value, attrs=self.attrs) + mark_safe(\n '<span style=\"color: red; font-weight: bold;\"> * </span>'\n )\n\n\nclass AdminTextInputWidget(TextInputWidget):\n def render(self, name, value, attrs=None, renderer=None):\n return super().render(name, value) + mark_safe(\n '<span style=\"color: red; font-weight: bold;\"> * </span>'\n )\n", "path": "django/cantusdb_project/main_app/widgets.py"}]} | 1,155 | 119 |
gh_patches_debug_64424 | rasdani/github-patches | git_diff | pwndbg__pwndbg-80 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
source code section unavailable if current line is in the first chunk of the list command
When debugging through code, if the current function is one that sits at the top of a .c file (one that does not use tons of includes), the source section is lost because of a too-aggressive check:
https://github.com/pwndbg/pwndbg/blob/master/pwndbg/commands/context.py#L160
Which edge case does that check try to solve? As I'm hitting this in several cases, I think it does more harm than whatever it solves. It would be good to know which case it is meant to fix so we could make the check less aggressive and preserve the section in the scenario described above.
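To make the failure mode concrete, here is a small standalone sketch (not pwndbg source): when the current function sits near the top of a small file, the `gdb list` output legitimately starts with `1\t`, so a guard that only inspects the text throws the listing away, whereas keying off the resolved line number (the approach the patch below takes) keeps it:

```python
closest_line = 5  # resolved from the line table for the current pc
source = "1\t#include <stdio.h>\n2\t\n3\tint main(void)\n4\t{\n5\t    puts(\"hi\");\n"

old_guard = not source or source.startswith('1\t')  # True  -> source section wrongly dropped
new_guard = not source or closest_line <= 1         # False -> section kept

print(old_guard, new_guard)  # True False
```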
</issue>
<code>
[start of pwndbg/commands/context.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import print_function
4 import gdb
5 import sys
6
7 import pwndbg.arguments
8 import pwndbg.chain
9 import pwndbg.color
10 import pwndbg.commands
11 import pwndbg.commands.nearpc
12 import pwndbg.commands.telescope
13 import pwndbg.config
14 import pwndbg.disasm
15 import pwndbg.events
16 import pwndbg.ida
17 import pwndbg.regs
18 import pwndbg.symbol
19 import pwndbg.ui
20 import pwndbg.vmmap
21
22 # @pwndbg.events.stop
23 @pwndbg.commands.Command
24 @pwndbg.commands.OnlyWhenRunning
25 def context(*args):
26 """
27 Print out the current register, instruction, and stack context.
28
29 Accepts subcommands 'reg', 'code', 'stack', 'backtrace', and 'args'.
30 """
31 if len(args) == 0:
32 args = ['reg','code','stack','backtrace','args']
33
34 args = [a[0] for a in args]
35
36 result = []
37
38 result.append(pwndbg.color.legend())
39 if 'r' in args: result.extend(context_regs())
40 if 'c' in args: result.extend(context_code())
41 if 'c' in args: result.extend(context_source())
42 if 'a' in args: result.extend(context_args())
43 if 's' in args: result.extend(context_stack())
44 if 'b' in args: result.extend(context_backtrace())
45 result.extend(context_signal())
46
47 for line in result:
48 sys.stdout.write(line + '\n')
49 sys.stdout.flush()
50
51 def context_regs():
52 result = []
53 result.append(pwndbg.color.blue(pwndbg.ui.banner("registers")))
54 result.extend(get_regs())
55 return result
56
57 @pwndbg.commands.Command
58 @pwndbg.commands.OnlyWhenRunning
59 def regs(*regs):
60 '''Print out all registers and enhance the information.'''
61 print('\n'.join(get_regs(*regs)))
62
63 pwndbg.config.Parameter('show-flags', False, 'whether to show flags registers')
64 pwndbg.config.Parameter('colored-flags', False, 'whether to colorize flags registers')
65
66 def get_regs(*regs):
67 result = []
68
69 if not regs:
70 regs = pwndbg.regs.gpr + (pwndbg.regs.frame, pwndbg.regs.current.stack, pwndbg.regs.current.pc)
71
72 if pwndbg.config.show_flags:
73 regs += tuple(pwndbg.regs.flags)
74
75 changed = pwndbg.regs.changed
76
77 for reg in regs:
78 if reg is None:
79 continue
80
81 if reg not in pwndbg.regs:
82 print("Unknown register: %r" % reg)
83 continue
84
85 value = pwndbg.regs[reg]
86
87 # Make the register stand out
88 regname = pwndbg.color.bold(reg.ljust(4).upper())
89
90 # Show a dot next to the register if it changed
91 m = ' ' if reg not in changed else '*'
92
93 if reg not in pwndbg.regs.flags:
94 desc = pwndbg.chain.format(value)
95
96 else:
97 names = []
98 desc = '%#x' % value
99 last = pwndbg.regs.last.get(reg, 0) or 0
100 flags = pwndbg.regs.flags[reg]
101
102 for name, bit in sorted(flags.items()):
103 bit = 1<<bit
104 if value & bit:
105 name = name.upper()
106 name = pwndbg.color.bold(name)
107 if pwndbg.config.colored_flags:
108 name = pwndbg.color.green(name)
109 else:
110 name = name.lower()
111 if pwndbg.config.colored_flags:
112 name = pwndbg.color.red(name)
113
114 if value & bit != last & bit:
115 name = pwndbg.color.underline(name)
116 names.append(name)
117
118 if names:
119 desc = '%s [ %s ]' % (desc, ' '.join(names))
120
121 result.append("%s%s %s" % (m, regname, desc))
122
123 return result
124
125 pwndbg.config.Parameter('emulate', True, '''
126 Unicorn emulation of code near the current instruction
127 ''')
128
129 def context_code():
130 banner = [pwndbg.color.blue(pwndbg.ui.banner("code"))]
131 emulate = bool(pwndbg.config.emulate)
132 result = pwndbg.commands.nearpc.nearpc(to_string=True, emulate=emulate)
133
134 # If we didn't disassemble backward, try to make sure
135 # that the amount of screen space taken is roughly constant.
136 while len(result) < 11:
137 result.append('')
138
139 return banner + result
140
141 def context_source():
142 try:
143 symtab = gdb.selected_frame().find_sal().symtab
144 linetable = symtab.linetable()
145
146 closest_pc = -1
147 closest_line = -1
148 for line in linetable:
149 if line.pc <= pwndbg.regs.pc and line.pc > closest_pc:
150 closest_line = line.line
151 closest_pc = line.pc
152
153 if closest_line < 0:
154 return []
155
156 source = gdb.execute('list %i' % closest_line, from_tty=False, to_string=True)
157
158 # If it starts on line 1, it's not really using the
159 # correct source code.
160 if not source or source.startswith('1\t'):
161 return []
162
163 banner = [pwndbg.color.blue(pwndbg.ui.banner("code"))]
164 banner.extend(source.splitlines())
165 return banner
166 except:
167 pass
168
169 if not pwndbg.ida.available():
170 return []
171
172 try:
173 name = pwndbg.ida.GetFunctionName(pwndbg.regs.pc)
174 addr = pwndbg.ida.LocByName(name)
175 lines = pwndbg.ida.decompile(addr)
176 return lines.splitlines()
177 except:
178 pass
179
180 return []
181
182 def context_stack():
183 result = []
184 result.append(pwndbg.color.blue(pwndbg.ui.banner("stack")))
185 telescope = pwndbg.commands.telescope.telescope(pwndbg.regs.sp, to_string=True)
186 if telescope:
187 result.extend(telescope)
188 return result
189
190 def context_backtrace(frame_count=10, with_banner=True):
191 result = []
192
193 if with_banner:
194 result.append(pwndbg.color.blue(pwndbg.ui.banner("backtrace")))
195
196 this_frame = gdb.selected_frame()
197 newest_frame = this_frame
198 oldest_frame = this_frame
199
200 for i in range(frame_count):
201 try:
202 candidate = oldest_frame.older()
203 except gdb.MemoryError:
204 break
205
206 if not candidate:
207 break
208 oldest_frame = candidate
209
210 for i in range(frame_count):
211 candidate = newest_frame.newer()
212 if not candidate:
213 break
214 newest_frame = candidate
215
216 frame = newest_frame
217 i = 0
218 while True:
219 prefix = '> ' if frame == this_frame else ' '
220 addrsz = pwndbg.ui.addrsz(frame.pc())
221 symbol = pwndbg.symbol.get(frame.pc())
222 if symbol:
223 addrsz = addrsz + ' ' + symbol
224 line = map(str, (prefix, 'f', i, addrsz))
225 line = ' '.join(line)
226 result.append(line)
227
228 if frame == oldest_frame:
229 break
230
231 frame = frame.older()
232 i += 1
233 return result
234
235 def context_args():
236 result = []
237
238 ##################################################
239 # DISABLED FOR NOW, I LIKE INLINE DISPLAY BETTER
240 ##################################################
241 # # For call instructions, attempt to resolve the target and
242 # # determine the number of arguments.
243 # for arg, value in pwndbg.arguments.arguments(pwndbg.disasm.one()):
244 # code = False if arg.type == 'char' else True
245 # pretty = pwndbg.chain.format(value, code=code)
246 # result.append('%-10s %s' % (arg.name+':', pretty))
247 # if not result:
248 # return []
249 # result.insert(0, pwndbg.color.blue(pwndbg.ui.banner("arguments")))
250 return result
251
252 last_signal = []
253
254 def save_signal(signal):
255 global last_signal
256 last_signal = result = []
257
258 if isinstance(signal, gdb.ExitedEvent):
259 # Booooo old gdb
260 if hasattr(signal, 'exit_code'):
261 result.append(pwndbg.color.red('Exited: %r' % signal.exit_code))
262
263 elif isinstance(signal, gdb.SignalEvent):
264 msg = 'Program received signal %s' % signal.stop_signal
265 if signal.stop_signal == 'SIGSEGV':
266 try:
267 si_addr = gdb.parse_and_eval("$_siginfo._sifields._sigfault.si_addr")
268 msg += ' (fault address %#x)' % int(si_addr or 0)
269 except gdb.error:
270 pass
271 msg = pwndbg.color.red(msg)
272 msg = pwndbg.color.bold(msg)
273 result.append(msg)
274
275 elif isinstance(signal, gdb.BreakpointEvent):
276 for bkpt in signal.breakpoints:
277 result.append(pwndbg.color.yellow('Breakpoint %s' % (bkpt.location)))
278
279 gdb.events.cont.connect(save_signal)
280 gdb.events.stop.connect(save_signal)
281 gdb.events.exited.connect(save_signal)
282
283 def context_signal():
284 return last_signal
285
[end of pwndbg/commands/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pwndbg/commands/context.py b/pwndbg/commands/context.py
--- a/pwndbg/commands/context.py
+++ b/pwndbg/commands/context.py
@@ -157,7 +157,7 @@
# If it starts on line 1, it's not really using the
# correct source code.
- if not source or source.startswith('1\t'):
+ if not source or closest_line <= 1:
return []
banner = [pwndbg.color.blue(pwndbg.ui.banner("code"))]
| {"golden_diff": "diff --git a/pwndbg/commands/context.py b/pwndbg/commands/context.py\n--- a/pwndbg/commands/context.py\n+++ b/pwndbg/commands/context.py\n@@ -157,7 +157,7 @@\n \n # If it starts on line 1, it's not really using the\n # correct source code.\n- if not source or source.startswith('1\\t'):\n+ if not source or closest_line <= 1:\n return []\n \n banner = [pwndbg.color.blue(pwndbg.ui.banner(\"code\"))]\n", "issue": "source code section unavailable if current line is in the first chunk of the list command\nwhen debugging through code and the current function is indeed one that it at the top of a .c file (that does not use tons of includes) the section is lost because of a too aggressive check:\n\nhttps://github.com/pwndbg/pwndbg/blob/master/pwndbg/commands/context.py#L160\n\nWhich edge-case does that try to solve? As I'm hitting that in several cases, I think it does harm more then it maybe solves something? Would be cool to know which case it does try to fix so we could adjust that check to be less aggressive and preserve the section of the mentioned scenario.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nimport gdb\nimport sys\n\nimport pwndbg.arguments\nimport pwndbg.chain\nimport pwndbg.color\nimport pwndbg.commands\nimport pwndbg.commands.nearpc\nimport pwndbg.commands.telescope\nimport pwndbg.config\nimport pwndbg.disasm\nimport pwndbg.events\nimport pwndbg.ida\nimport pwndbg.regs\nimport pwndbg.symbol\nimport pwndbg.ui\nimport pwndbg.vmmap\n\n# @pwndbg.events.stop\[email protected]\[email protected]\ndef context(*args):\n \"\"\"\n Print out the current register, instruction, and stack context.\n\n Accepts subcommands 'reg', 'code', 'stack', 'backtrace', and 'args'.\n \"\"\"\n if len(args) == 0:\n args = ['reg','code','stack','backtrace','args']\n\n args = [a[0] for a in args]\n\n result = []\n\n result.append(pwndbg.color.legend())\n if 'r' in args: result.extend(context_regs())\n if 'c' in args: result.extend(context_code())\n if 'c' in args: result.extend(context_source())\n if 'a' in args: result.extend(context_args())\n if 's' in args: result.extend(context_stack())\n if 'b' in args: result.extend(context_backtrace())\n result.extend(context_signal())\n\n for line in result:\n sys.stdout.write(line + '\\n')\n sys.stdout.flush()\n\ndef context_regs():\n result = []\n result.append(pwndbg.color.blue(pwndbg.ui.banner(\"registers\")))\n result.extend(get_regs())\n return result\n\[email protected]\[email protected]\ndef regs(*regs):\n '''Print out all registers and enhance the information.'''\n print('\\n'.join(get_regs(*regs)))\n\npwndbg.config.Parameter('show-flags', False, 'whether to show flags registers')\npwndbg.config.Parameter('colored-flags', False, 'whether to colorize flags registers')\n\ndef get_regs(*regs):\n result = []\n\n if not regs:\n regs = pwndbg.regs.gpr + (pwndbg.regs.frame, pwndbg.regs.current.stack, pwndbg.regs.current.pc)\n\n if pwndbg.config.show_flags:\n regs += tuple(pwndbg.regs.flags)\n\n changed = pwndbg.regs.changed\n\n for reg in regs:\n if reg is None:\n continue\n\n if reg not in pwndbg.regs:\n print(\"Unknown register: %r\" % reg)\n continue\n\n value = pwndbg.regs[reg]\n\n # Make the register stand out\n regname = pwndbg.color.bold(reg.ljust(4).upper())\n\n # Show a dot next to the register if it changed\n m = ' ' if reg not in changed else '*'\n\n if reg not in pwndbg.regs.flags:\n desc = pwndbg.chain.format(value)\n\n else:\n names = []\n desc = '%#x' % 
value\n last = pwndbg.regs.last.get(reg, 0) or 0\n flags = pwndbg.regs.flags[reg]\n\n for name, bit in sorted(flags.items()):\n bit = 1<<bit\n if value & bit:\n name = name.upper()\n name = pwndbg.color.bold(name)\n if pwndbg.config.colored_flags:\n name = pwndbg.color.green(name)\n else:\n name = name.lower()\n if pwndbg.config.colored_flags:\n name = pwndbg.color.red(name)\n\n if value & bit != last & bit:\n name = pwndbg.color.underline(name)\n names.append(name)\n\n if names:\n desc = '%s [ %s ]' % (desc, ' '.join(names))\n\n result.append(\"%s%s %s\" % (m, regname, desc))\n\n return result\n\npwndbg.config.Parameter('emulate', True, '''\nUnicorn emulation of code near the current instruction\n''')\n\ndef context_code():\n banner = [pwndbg.color.blue(pwndbg.ui.banner(\"code\"))]\n emulate = bool(pwndbg.config.emulate)\n result = pwndbg.commands.nearpc.nearpc(to_string=True, emulate=emulate)\n\n # If we didn't disassemble backward, try to make sure\n # that the amount of screen space taken is roughly constant.\n while len(result) < 11:\n result.append('')\n\n return banner + result\n\ndef context_source():\n try:\n symtab = gdb.selected_frame().find_sal().symtab\n linetable = symtab.linetable()\n\n closest_pc = -1\n closest_line = -1\n for line in linetable:\n if line.pc <= pwndbg.regs.pc and line.pc > closest_pc:\n closest_line = line.line\n closest_pc = line.pc\n\n if closest_line < 0:\n return []\n\n source = gdb.execute('list %i' % closest_line, from_tty=False, to_string=True)\n\n # If it starts on line 1, it's not really using the\n # correct source code.\n if not source or source.startswith('1\\t'):\n return []\n\n banner = [pwndbg.color.blue(pwndbg.ui.banner(\"code\"))]\n banner.extend(source.splitlines())\n return banner\n except:\n pass\n\n if not pwndbg.ida.available():\n return []\n\n try:\n name = pwndbg.ida.GetFunctionName(pwndbg.regs.pc)\n addr = pwndbg.ida.LocByName(name)\n lines = pwndbg.ida.decompile(addr)\n return lines.splitlines()\n except:\n pass\n\n return []\n\ndef context_stack():\n result = []\n result.append(pwndbg.color.blue(pwndbg.ui.banner(\"stack\")))\n telescope = pwndbg.commands.telescope.telescope(pwndbg.regs.sp, to_string=True)\n if telescope:\n result.extend(telescope)\n return result\n\ndef context_backtrace(frame_count=10, with_banner=True):\n result = []\n\n if with_banner:\n result.append(pwndbg.color.blue(pwndbg.ui.banner(\"backtrace\")))\n\n this_frame = gdb.selected_frame()\n newest_frame = this_frame\n oldest_frame = this_frame\n\n for i in range(frame_count):\n try:\n candidate = oldest_frame.older()\n except gdb.MemoryError:\n break\n\n if not candidate:\n break\n oldest_frame = candidate\n\n for i in range(frame_count):\n candidate = newest_frame.newer()\n if not candidate:\n break\n newest_frame = candidate\n\n frame = newest_frame\n i = 0\n while True:\n prefix = '> ' if frame == this_frame else ' '\n addrsz = pwndbg.ui.addrsz(frame.pc())\n symbol = pwndbg.symbol.get(frame.pc())\n if symbol:\n addrsz = addrsz + ' ' + symbol\n line = map(str, (prefix, 'f', i, addrsz))\n line = ' '.join(line)\n result.append(line)\n\n if frame == oldest_frame:\n break\n\n frame = frame.older()\n i += 1\n return result\n\ndef context_args():\n result = []\n\n ##################################################\n # DISABLED FOR NOW, I LIKE INLINE DISPLAY BETTER\n ##################################################\n # # For call instructions, attempt to resolve the target and\n # # determine the number of arguments.\n # for arg, value in 
pwndbg.arguments.arguments(pwndbg.disasm.one()):\n # code = False if arg.type == 'char' else True\n # pretty = pwndbg.chain.format(value, code=code)\n # result.append('%-10s %s' % (arg.name+':', pretty))\n # if not result:\n # return []\n # result.insert(0, pwndbg.color.blue(pwndbg.ui.banner(\"arguments\")))\n return result\n\nlast_signal = []\n\ndef save_signal(signal):\n global last_signal\n last_signal = result = []\n\n if isinstance(signal, gdb.ExitedEvent):\n # Booooo old gdb\n if hasattr(signal, 'exit_code'):\n result.append(pwndbg.color.red('Exited: %r' % signal.exit_code))\n\n elif isinstance(signal, gdb.SignalEvent):\n msg = 'Program received signal %s' % signal.stop_signal\n if signal.stop_signal == 'SIGSEGV':\n try:\n si_addr = gdb.parse_and_eval(\"$_siginfo._sifields._sigfault.si_addr\")\n msg += ' (fault address %#x)' % int(si_addr or 0)\n except gdb.error:\n pass\n msg = pwndbg.color.red(msg)\n msg = pwndbg.color.bold(msg)\n result.append(msg)\n\n elif isinstance(signal, gdb.BreakpointEvent):\n for bkpt in signal.breakpoints:\n result.append(pwndbg.color.yellow('Breakpoint %s' % (bkpt.location)))\n\ngdb.events.cont.connect(save_signal)\ngdb.events.stop.connect(save_signal)\ngdb.events.exited.connect(save_signal)\n\ndef context_signal():\n return last_signal\n", "path": "pwndbg/commands/context.py"}]} | 3,498 | 124 |
gh_patches_debug_3244 | rasdani/github-patches | git_diff | ansible__molecule-135 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Verbose in molecule.yml should handle bool True
https://github.com/metacloud/molecule/blob/master/molecule/ansible_playbook.py#L104
Clearly, string + bool isn't going to work, and it will produce an error like:
```
Traceback (most recent call last):
File "/Users/zochao/.virtualenvs/metamole/bin/molecule", line 10, in <module>
sys.exit(main())
File "/Users/zochao/metamole/molecule/molecule/cli.py", line 66, in main
CLI().main()
File "/Users/zochao/metamole/molecule/molecule/cli.py", line 62, in main
sys.exit(c.execute())
File "/Users/zochao/metamole/molecule/molecule/commands.py", line 393, in execute
c.execute()
File "/Users/zochao/metamole/molecule/molecule/commands.py", line 224, in execute
ansible = AnsiblePlaybook(self.molecule._config.config['ansible'])
File "/Users/zochao/metamole/molecule/molecule/ansible_playbook.py", line 51, in _init_
self.parse_arg(k, v)
File "/Users/zochao/metamole/molecule/molecule/ansible_playbook.py", line 104, in parse_arg
self.cli_pos.append('-' + value)
TypeError: cannot concatenate 'str' and 'bool' objects
```
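A minimal standalone sketch of the failure and of the guard the patch adds (not molecule source; the `vvvv` default mirrors the merged diff below):

```python
def verbose_flag(value):
    # molecule.yml may supply `verbose: vvvv` (a string) or `verbose: True` (a bool);
    # only the string form can be concatenated onto '-'.
    if value is True:
        value = 'vvvv'
    return '-' + value


print(verbose_flag('vvv'))   # -vvv
print(verbose_flag(True))    # -vvvv (raises "cannot concatenate 'str' and 'bool'" without the guard)
```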
</issue>
<code>
[start of molecule/ansible_playbook.py]
1 # Copyright (c) 2015 Cisco Systems
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 # THE SOFTWARE.
20
21 from __future__ import print_function
22
23 import os
24 import sys
25
26 import sh
27
28 from utilities import print_stderr
29 from utilities import print_stdout
30
31
32 class AnsiblePlaybook:
33 def __init__(self, args, _env=None, _out=print_stdout, _err=print_stderr):
34 """
35 Sets up requirements for ansible-playbook
36
37 :param args: Dictionary arguments to pass to ansible-playbook
38 :param _env: Environment dictionary to use. os.environ.copy() is used by default
39 :param _out: Function passed to sh for STDOUT
40 :param _err: Function passed to sh for STDERR
41 :return: None
42 """
43 self.cli = {}
44 self.cli_pos = []
45 self.env = _env if _env else os.environ.copy()
46 self.playbook = None
47 self.ansible = None
48
49 # process arguments passed in (typically from molecule.yml's ansible block)
50 for k, v in args.iteritems():
51 self.parse_arg(k, v)
52
53 # defaults can be redefined with call to add_env_arg() before baking
54 self.add_env_arg('PYTHONUNBUFFERED', '1')
55 self.add_env_arg('ANSIBLE_FORCE_COLOR', 'true')
56
57 # passed through to sh, not ansible-playbook
58 self.add_cli_arg('_out', _out)
59 self.add_cli_arg('_err', _err)
60
61 def bake(self):
62 """
63 Bake ansible-playbook command so it's ready to execute.
64
65 :return: None
66 """
67 self.ansible = sh.ansible_playbook.bake(self.playbook, *self.cli_pos, _env=self.env, **self.cli)
68
69 def parse_arg(self, name, value):
70 """
71 Parses argument and adds to CLI or environment
72
73 :param name: Name of argument to be added
74 :param value: Value of argument to be added
75 :return: None
76 """
77 # skip `requirements_file` since it used by ansible-galaxy only
78 if name == 'requirements_file':
79 return
80
81 if name == 'raw_env_vars':
82 for k, v in value.iteritems():
83 self.add_env_arg(k, v)
84 return
85
86 if name == 'host_key_checking':
87 self.add_env_arg('ANSIBLE_HOST_KEY_CHECKING', str(value).lower())
88 return
89
90 if name == 'raw_ssh_args':
91 self.add_env_arg('ANSIBLE_SSH_ARGS', ' '.join(value))
92 return
93
94 if name == 'config_file':
95 self.add_env_arg('ANSIBLE_CONFIG', value)
96 return
97
98 if name == 'playbook':
99 self.playbook = value
100 return
101
102 # verbose is weird, must be -vvvv not verbose=vvvv
103 if name == 'verbose' and value:
104 self.cli_pos.append('-' + value)
105 return
106
107 self.add_cli_arg(name, value)
108
109 def add_cli_arg(self, name, value):
110 """
111 Adds argument to CLI passed to ansible-playbook
112
113 :param name: Name of argument to be added
114 :param value: Value of argument to be added
115 :return: None
116 """
117 if value:
118 self.cli[name] = value
119
120 def remove_cli_arg(self, name):
121 """
122 Removes CLI argument
123
124 :param name: Key name of CLI argument to remove
125 :return: None
126 """
127 self.cli.pop(name, None)
128
129 def add_env_arg(self, name, value):
130 """
131 Adds argument to environment passed to ansible-playbook
132
133 :param name: Name of argument to be added
134 :param value: Value of argument to be added
135 :return: None
136 """
137 self.env[name] = value
138
139 def remove_env_arg(self, name):
140 """
141 Removes environment argument
142
143 :param name: Key name of environment argument to remove
144 :return: None
145 """
146 self.env.pop(name, None)
147
148 def execute(self):
149 """
150 Executes ansible-playbook
151
152 :return: sh.stdout on success, else None
153 :return: None
154 """
155 if self.ansible is None:
156 self.bake()
157
158 try:
159 return self.ansible().stdout
160 except sh.ErrorReturnCode as e:
161 print('ERROR: {}'.format(e))
162 sys.exit(e.exit_code)
163
[end of molecule/ansible_playbook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/molecule/ansible_playbook.py b/molecule/ansible_playbook.py
--- a/molecule/ansible_playbook.py
+++ b/molecule/ansible_playbook.py
@@ -101,6 +101,9 @@
# verbose is weird, must be -vvvv not verbose=vvvv
if name == 'verbose' and value:
+ # for cases where someone passes in verbose: True
+ if value is True:
+ value = 'vvvv'
self.cli_pos.append('-' + value)
return
| {"golden_diff": "diff --git a/molecule/ansible_playbook.py b/molecule/ansible_playbook.py\n--- a/molecule/ansible_playbook.py\n+++ b/molecule/ansible_playbook.py\n@@ -101,6 +101,9 @@\n \n # verbose is weird, must be -vvvv not verbose=vvvv\n if name == 'verbose' and value:\n+ # for cases where someone passes in verbose: True\n+ if value is True:\n+ value = 'vvvv'\n self.cli_pos.append('-' + value)\n return\n", "issue": "Verbose in molecule.yml should handle bool True\nhttps://github.com/metacloud/molecule/blob/master/molecule/ansible_playbook.py#L104\n\n```\nclearly string +bool isn't gonna work. And it will have error like\nTraceback (most recent call last):\nFile \"/Users/zochao/.virtualenvs/metamole/bin/molecule\", line 10, in <module>\nsys.exit(main())\nFile \"/Users/zochao/metamole/molecule/molecule/cli.py\", line 66, in main\nCLI().main()\nFile \"/Users/zochao/metamole/molecule/molecule/cli.py\", line 62, in main\nsys.exit(c.execute())\nFile \"/Users/zochao/metamole/molecule/molecule/commands.py\", line 393, in execute\nc.execute()\nFile \"/Users/zochao/metamole/molecule/molecule/commands.py\", line 224, in execute\nansible = AnsiblePlaybook(self.molecule._config.config['ansible'])\nFile \"/Users/zochao/metamole/molecule/molecule/ansible_playbook.py\", line 51, in _init_\nself.parse_arg(k, v)\nFile \"/Users/zochao/metamole/molecule/molecule/ansible_playbook.py\", line 104, in parse_arg\nself.cli_pos.append('-' + value)\nTypeError: cannot concatenate 'str' and 'bool' objects\n```\n\n", "before_files": [{"content": "# Copyright (c) 2015 Cisco Systems\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom __future__ import print_function\n\nimport os\nimport sys\n\nimport sh\n\nfrom utilities import print_stderr\nfrom utilities import print_stdout\n\n\nclass AnsiblePlaybook:\n def __init__(self, args, _env=None, _out=print_stdout, _err=print_stderr):\n \"\"\"\n Sets up requirements for ansible-playbook\n\n :param args: Dictionary arguments to pass to ansible-playbook\n :param _env: Environment dictionary to use. 
os.environ.copy() is used by default\n :param _out: Function passed to sh for STDOUT\n :param _err: Function passed to sh for STDERR\n :return: None\n \"\"\"\n self.cli = {}\n self.cli_pos = []\n self.env = _env if _env else os.environ.copy()\n self.playbook = None\n self.ansible = None\n\n # process arguments passed in (typically from molecule.yml's ansible block)\n for k, v in args.iteritems():\n self.parse_arg(k, v)\n\n # defaults can be redefined with call to add_env_arg() before baking\n self.add_env_arg('PYTHONUNBUFFERED', '1')\n self.add_env_arg('ANSIBLE_FORCE_COLOR', 'true')\n\n # passed through to sh, not ansible-playbook\n self.add_cli_arg('_out', _out)\n self.add_cli_arg('_err', _err)\n\n def bake(self):\n \"\"\"\n Bake ansible-playbook command so it's ready to execute.\n\n :return: None\n \"\"\"\n self.ansible = sh.ansible_playbook.bake(self.playbook, *self.cli_pos, _env=self.env, **self.cli)\n\n def parse_arg(self, name, value):\n \"\"\"\n Parses argument and adds to CLI or environment\n\n :param name: Name of argument to be added\n :param value: Value of argument to be added\n :return: None\n \"\"\"\n # skip `requirements_file` since it used by ansible-galaxy only\n if name == 'requirements_file':\n return\n\n if name == 'raw_env_vars':\n for k, v in value.iteritems():\n self.add_env_arg(k, v)\n return\n\n if name == 'host_key_checking':\n self.add_env_arg('ANSIBLE_HOST_KEY_CHECKING', str(value).lower())\n return\n\n if name == 'raw_ssh_args':\n self.add_env_arg('ANSIBLE_SSH_ARGS', ' '.join(value))\n return\n\n if name == 'config_file':\n self.add_env_arg('ANSIBLE_CONFIG', value)\n return\n\n if name == 'playbook':\n self.playbook = value\n return\n\n # verbose is weird, must be -vvvv not verbose=vvvv\n if name == 'verbose' and value:\n self.cli_pos.append('-' + value)\n return\n\n self.add_cli_arg(name, value)\n\n def add_cli_arg(self, name, value):\n \"\"\"\n Adds argument to CLI passed to ansible-playbook\n\n :param name: Name of argument to be added\n :param value: Value of argument to be added\n :return: None\n \"\"\"\n if value:\n self.cli[name] = value\n\n def remove_cli_arg(self, name):\n \"\"\"\n Removes CLI argument\n\n :param name: Key name of CLI argument to remove\n :return: None\n \"\"\"\n self.cli.pop(name, None)\n\n def add_env_arg(self, name, value):\n \"\"\"\n Adds argument to environment passed to ansible-playbook\n\n :param name: Name of argument to be added\n :param value: Value of argument to be added\n :return: None\n \"\"\"\n self.env[name] = value\n\n def remove_env_arg(self, name):\n \"\"\"\n Removes environment argument\n\n :param name: Key name of environment argument to remove\n :return: None\n \"\"\"\n self.env.pop(name, None)\n\n def execute(self):\n \"\"\"\n Executes ansible-playbook\n\n :return: sh.stdout on success, else None\n :return: None\n \"\"\"\n if self.ansible is None:\n self.bake()\n\n try:\n return self.ansible().stdout\n except sh.ErrorReturnCode as e:\n print('ERROR: {}'.format(e))\n sys.exit(e.exit_code)\n", "path": "molecule/ansible_playbook.py"}]} | 2,416 | 123 |
gh_patches_debug_568 | rasdani/github-patches | git_diff | pex-tool__pex-836 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.0
On the docket:
The prime motivator:
+ [x] Pex does not download foreign abi3 wheels correctly #823
Changes to support the above as well as others:
+ [x] Fix pex resolving for foreign platforms. #835
+ [x] Use pypa/packaging. #831
+ [x] Upgrade vendored setuptools to 42.0.2. #832
+ [x] De-vendor pex just once per version. #833
+ [x] Support VCS urls for vendoring. #834
+ [x] Support python 3.8 in CI. #829
+ [x] Fix pex resolution to respect --ignore-errors. #828
+ [x] Kill `pkg_resources` finders monkey-patching. #827
+ [x] Use flit to distribute pex. #826
+ [x] Cleanup extras_require. #825
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '2.0.3'
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '2.0.3'
+__version__ = '2.1.0'
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '2.0.3'\n+__version__ = '2.1.0'\n", "issue": "Release 2.1.0\nOn the docket:\r\n\r\nThe prime motivator:\r\n+ [x] Pex does not download foreign abi3 wheels correctly #823\r\n\r\nChanges to support the above as well as others:\r\n+ [x] Fix pex resolving for foreign platforms. #835 \r\n+ [x] Use pypa/packaging. #831\r\n+ [x] Upgrade vendored setuptools to 42.0.2. #832\r\n+ [x] De-vendor pex just once per version. #833\r\n+ [x] Support VCS urls for vendoring. #834\r\n+ [x] Support python 3.8 in CI. #829\r\n+ [x] Fix pex resolution to respect --ignore-errors. #828\r\n+ [x] Kill `pkg_resources` finders monkey-patching. #827\r\n+ [x] Use flit to distribute pex. #826\r\n+ [x] Cleanup extras_require. #825\r\n\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '2.0.3'\n", "path": "pex/version.py"}]} | 809 | 94 |
gh_patches_debug_16389 | rasdani/github-patches | git_diff | facebookresearch__hydra-1968 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature Request] submitit: add `stderr_to_stdout` as supported by submitit
# 🚀 Feature Request
The `stderr_to_stdout` option was added to submitit itself (not the plugin) [last year](https://github.com/facebookincubator/submitit/pull/1611), but it cannot currently be set from the submitit plugin.
We should add support for passing `stderr_to_stdout` through the plugin as well.
cc @Jasha10
</issue>
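For orientation before the code listing: the request boils down to adding one boolean field to the launcher's shared config dataclass and letting it flow through to submitit. Below is a minimal sketch of that shape — the class name `LauncherConfSketch` is invented for illustration, the field name `stderr_to_stdout` comes from the submitit option linked in the issue, and the change actually adopted by the repository appears in the diff further down in this record.

```python
from dataclasses import dataclass


@dataclass
class LauncherConfSketch:
    """Abbreviated stand-in for the plugin's shared executor config."""

    # a couple of the existing knobs, kept for context
    timeout_min: int = 60
    tasks_per_node: int = 1
    # the requested addition: fold each task's stderr into its stdout log,
    # mirroring the option submitit itself already understands
    stderr_to_stdout: bool = False
```

Since Hydra builds the launcher from this dataclass, a user could then flip the flag per run with an override along the lines of `hydra.launcher.stderr_to_stdout=true`.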
<code>
[start of plugins/hydra_submitit_launcher/setup.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 from pathlib import Path
4
5 from read_version import read_version
6 from setuptools import find_namespace_packages, setup
7
8 setup(
9 name="hydra-submitit-launcher",
10 version=read_version("hydra_plugins/hydra_submitit_launcher", "__init__.py"),
11 author="Jeremy Rapin, Jieru Hu, Omry Yadan",
12 author_email="[email protected], [email protected], [email protected]",
13 description="Submitit Launcher for Hydra apps",
14 long_description=(Path(__file__).parent / "README.md").read_text(),
15 long_description_content_type="text/markdown",
16 url="https://github.com/facebookincubator/submitit",
17 packages=find_namespace_packages(include=["hydra_plugins.*"]),
18 classifiers=[
19 "License :: OSI Approved :: MIT License",
20 "Programming Language :: Python :: 3.7",
21 "Programming Language :: Python :: 3.8",
22 "Programming Language :: Python :: 3.9",
23 "Operating System :: MacOS",
24 "Operating System :: POSIX :: Linux",
25 "Development Status :: 4 - Beta",
26 ],
27 install_requires=[
28 "hydra-core>=1.1.0.dev7",
29 "submitit>=1.0.0",
30 ],
31 include_package_data=True,
32 )
33
[end of plugins/hydra_submitit_launcher/setup.py]
[start of plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 from dataclasses import dataclass, field
3 from typing import Any, Dict, List, Optional
4
5 from hydra.core.config_store import ConfigStore
6
7
8 @dataclass
9 class BaseQueueConf:
10 """Configuration shared by all executors"""
11
12 submitit_folder: str = "${hydra.sweep.dir}/.submitit/%j"
13
14 # maximum time for the job in minutes
15 timeout_min: int = 60
16 # number of cpus to use for each task
17 cpus_per_task: Optional[int] = None
18 # number of gpus to use on each node
19 gpus_per_node: Optional[int] = None
20 # number of tasks to spawn on each node
21 tasks_per_node: int = 1
22 # memory to reserve for the job on each node (in GB)
23 mem_gb: Optional[int] = None
24 # number of nodes to use for the job
25 nodes: int = 1
26 # name of the job
27 name: str = "${hydra.job.name}"
28
29
30 @dataclass
31 class SlurmQueueConf(BaseQueueConf):
32 """Slurm configuration overrides and specific parameters"""
33
34 _target_: str = (
35 "hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher"
36 )
37
38 # Params are used to configure sbatch, for more info check:
39 # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py
40
41 # Following parameters are slurm specific
42 # More information: https://slurm.schedmd.com/sbatch.html
43 #
44 # slurm partition to use on the cluster
45 partition: Optional[str] = None
46 qos: Optional[str] = None
47 comment: Optional[str] = None
48 constraint: Optional[str] = None
49 exclude: Optional[str] = None
50 gres: Optional[str] = None
51 cpus_per_gpu: Optional[int] = None
52 gpus_per_task: Optional[int] = None
53 mem_per_gpu: Optional[str] = None
54 mem_per_cpu: Optional[str] = None
55
56 # Following parameters are submitit specifics
57 #
58 # USR1 signal delay before timeout
59 signal_delay_s: int = 120
60 # Maximum number of retries on job timeout.
61 # Change this only after you confirmed your code can handle re-submission
62 # by properly resuming from the latest stored checkpoint.
63 # check the following for more info on slurm_max_num_timeout
64 # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md
65 max_num_timeout: int = 0
66 # Useful to add parameters which are not currently available in the plugin.
67 # Eg: {"mail-user": "[email protected]", "mail-type": "BEGIN"}
68 additional_parameters: Dict[str, Any] = field(default_factory=dict)
69 # Maximum number of jobs running in parallel
70 array_parallelism: int = 256
71 # A list of commands to run in sbatch befure running srun
72 setup: Optional[List[str]] = None
73
74
75 @dataclass
76 class LocalQueueConf(BaseQueueConf):
77 _target_: str = (
78 "hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher"
79 )
80
81
82 # finally, register two different choices:
83 ConfigStore.instance().store(
84 group="hydra/launcher",
85 name="submitit_local",
86 node=LocalQueueConf(),
87 provider="submitit_launcher",
88 )
89
90
91 ConfigStore.instance().store(
92 group="hydra/launcher",
93 name="submitit_slurm",
94 node=SlurmQueueConf(),
95 provider="submitit_launcher",
96 )
97
[end of plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py]
[start of plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 __version__ = "1.2.0dev1"
4
[end of plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py
--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py
+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py
@@ -1,3 +1,3 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-__version__ = "1.2.0dev1"
+__version__ = "1.2.0dev2"
diff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py
@@ -25,6 +25,8 @@
nodes: int = 1
# name of the job
name: str = "${hydra.job.name}"
+ # redirect stderr to stdout
+ stderr_to_stdout: bool = False
@dataclass
diff --git a/plugins/hydra_submitit_launcher/setup.py b/plugins/hydra_submitit_launcher/setup.py
--- a/plugins/hydra_submitit_launcher/setup.py
+++ b/plugins/hydra_submitit_launcher/setup.py
@@ -26,7 +26,7 @@
],
install_requires=[
"hydra-core>=1.1.0.dev7",
- "submitit>=1.0.0",
+ "submitit>=1.3.3",
],
include_package_data=True,
)
| {"golden_diff": "diff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py\n--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py\n+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py\n@@ -1,3 +1,3 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n \n-__version__ = \"1.2.0dev1\"\n+__version__ = \"1.2.0dev2\"\ndiff --git a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n--- a/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n+++ b/plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py\n@@ -25,6 +25,8 @@\n nodes: int = 1\n # name of the job\n name: str = \"${hydra.job.name}\"\n+ # redirect stderr to stdout\n+ stderr_to_stdout: bool = False\n \n \n @dataclass\ndiff --git a/plugins/hydra_submitit_launcher/setup.py b/plugins/hydra_submitit_launcher/setup.py\n--- a/plugins/hydra_submitit_launcher/setup.py\n+++ b/plugins/hydra_submitit_launcher/setup.py\n@@ -26,7 +26,7 @@\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n- \"submitit>=1.0.0\",\n+ \"submitit>=1.3.3\",\n ],\n include_package_data=True,\n )\n", "issue": "[Feature Request] submitit: add `stderr_to_stdout` ass supported by submitit\n# \ud83d\ude80 Feature Request\r\n\r\n`stderr_to_stdout` option was added in submitit (not the plugin) [last year](https://github.com/facebookincubator/submitit/pull/1611) but cannot currently be given from submitit plugin. \r\n\r\nWe should add support for passing this `stderr_to_stdout` via the plugin as well.\r\n\r\ncc @Jasha10 \n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nfrom pathlib import Path\n\nfrom read_version import read_version\nfrom setuptools import find_namespace_packages, setup\n\nsetup(\n name=\"hydra-submitit-launcher\",\n version=read_version(\"hydra_plugins/hydra_submitit_launcher\", \"__init__.py\"),\n author=\"Jeremy Rapin, Jieru Hu, Omry Yadan\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Submitit Launcher for Hydra apps\",\n long_description=(Path(__file__).parent / \"README.md\").read_text(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookincubator/submitit\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX :: Linux\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n \"submitit>=1.0.0\",\n ],\n include_package_data=True,\n)\n", "path": "plugins/hydra_submitit_launcher/setup.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\nfrom dataclasses import dataclass, field\nfrom typing import Any, Dict, List, Optional\n\nfrom hydra.core.config_store import ConfigStore\n\n\n@dataclass\nclass BaseQueueConf:\n \"\"\"Configuration shared by all executors\"\"\"\n\n submitit_folder: str = \"${hydra.sweep.dir}/.submitit/%j\"\n\n # maximum time for the job in minutes\n timeout_min: int = 60\n # number of cpus to use for each task\n cpus_per_task: Optional[int] = None\n # number of gpus to use on each node\n gpus_per_node: Optional[int] = None\n # number of tasks to spawn on each node\n tasks_per_node: int = 1\n # memory to reserve for the job on each node (in GB)\n mem_gb: Optional[int] = None\n # number of nodes to use for the job\n nodes: int = 1\n # name of the job\n name: str = \"${hydra.job.name}\"\n\n\n@dataclass\nclass SlurmQueueConf(BaseQueueConf):\n \"\"\"Slurm configuration overrides and specific parameters\"\"\"\n\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.SlurmLauncher\"\n )\n\n # Params are used to configure sbatch, for more info check:\n # https://github.com/facebookincubator/submitit/blob/master/submitit/slurm/slurm.py\n\n # Following parameters are slurm specific\n # More information: https://slurm.schedmd.com/sbatch.html\n #\n # slurm partition to use on the cluster\n partition: Optional[str] = None\n qos: Optional[str] = None\n comment: Optional[str] = None\n constraint: Optional[str] = None\n exclude: Optional[str] = None\n gres: Optional[str] = None\n cpus_per_gpu: Optional[int] = None\n gpus_per_task: Optional[int] = None\n mem_per_gpu: Optional[str] = None\n mem_per_cpu: Optional[str] = None\n\n # Following parameters are submitit specifics\n #\n # USR1 signal delay before timeout\n signal_delay_s: int = 120\n # Maximum number of retries on job timeout.\n # Change this only after you confirmed your code can handle re-submission\n # by properly resuming from the latest stored checkpoint.\n # check the following for more info on slurm_max_num_timeout\n # https://github.com/facebookincubator/submitit/blob/master/docs/checkpointing.md\n max_num_timeout: int = 0\n # Useful to add parameters which are not currently available in the plugin.\n # Eg: {\"mail-user\": \"[email protected]\", \"mail-type\": \"BEGIN\"}\n additional_parameters: Dict[str, Any] = field(default_factory=dict)\n # Maximum number of jobs running in parallel\n array_parallelism: int = 256\n # A list of commands to run in sbatch befure running srun\n setup: Optional[List[str]] = None\n\n\n@dataclass\nclass LocalQueueConf(BaseQueueConf):\n _target_: str = (\n \"hydra_plugins.hydra_submitit_launcher.submitit_launcher.LocalLauncher\"\n )\n\n\n# finally, register two different choices:\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_local\",\n node=LocalQueueConf(),\n provider=\"submitit_launcher\",\n)\n\n\nConfigStore.instance().store(\n group=\"hydra/launcher\",\n name=\"submitit_slurm\",\n node=SlurmQueueConf(),\n provider=\"submitit_launcher\",\n)\n", "path": "plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/config.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n__version__ = \"1.2.0dev1\"\n", "path": "plugins/hydra_submitit_launcher/hydra_plugins/hydra_submitit_launcher/__init__.py"}]} | 2,134 | 413 |
gh_patches_debug_5819 | rasdani/github-patches | git_diff | aio-libs__aiohttp-5176 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tuple index out of range with 3.7.1 and no IPv6
🐞 **Describe the bug**
After upgrading to aiohttp==3.7.1, DNS resolution fails with an exception.
💡 **To Reproduce**
<!-- How to reproduce the behavior?
For example:
1. Have certain environment
2. Run given code snippet in a certain way
3. See some behavior described
Add these steps below this comment: -->
1. I believe this is due to not having IPv6 enabled in my kernel, which is custom-compiled with:
`CONFIG_IPV6=n`
2. Run:
```python
import asyncio
import aiohttp
async def main():
async with aiohttp.ClientSession() as session:
async with session.get('http://python.org') as response:
print(response.status)
asyncio.run(main())
```
3. Actual behavior
```python-traceback
$ python bug.py
Traceback (most recent call last):
File "bug.py", line 11, in <module>
asyncio.run(main())
File "/usr/lib/python3.7/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.7/asyncio/base_events.py", line 587, in run_until_complete
return future.result()
File "bug.py", line 7, in main
async with session.get('http://python.org') as response:
File "/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/client.py", line 1124, in __aenter__
self._resp = await self._coro
File "/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/client.py", line 528, in _request
req, traces=traces, timeout=real_timeout
File "/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py", line 541, in connect
proto = await self._create_connection(req, traces, timeout)
File "/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py", line 898, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py", line 1005, in _create_direct_connection
hosts = await asyncio.shield(host_resolved)
File "/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py", line 871, in _resolve_host
addrs = await self._resolver.resolve(host, port, family=self._family)
File "/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/resolver.py", line 37, in resolve
if family == socket.AF_INET6 and address[3]: # type: ignore
IndexError: tuple index out of range
```
💡 **Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Should print `200`.
📋 **Logs/tracebacks**
<!-- If applicable, add logs/tracebacks to help explain your problem. -->
```python-traceback (paste your traceback in the next line)
```
📋 **Your version of the Python**
<!-- Attach your version of the Python. -->
```console
$ python --version
Python 3.7.9
```
📋 **Your version of the aiohttp/yarl/multidict distributions**
<!-- Attach your version of the distributions in the code blocks below. -->
```console
$ python -m pip show aiohttp
Name: aiohttp
Version: 3.7.1
Summary: Async http client/server framework (asyncio)
Home-page: https://github.com/aio-libs/aiohttp
Author: Nikolay Kim
Author-email: [email protected]
License: Apache 2
Location: /home/dbn/usr/py37/lib/python3.7/site-packages
Requires: multidict, async-timeout, typing-extensions, yarl, chardet, attrs
Required-by: replay-proxy
```
```console
$ python -m pip show multidict
Name: multidict
Version: 4.7.6
Summary: multidict implementation
Home-page: https://github.com/aio-libs/multidict
Author: Andrew Svetlov
Author-email: [email protected]
License: Apache 2
Location: /home/dbn/usr/py37/lib/python3.7/site-packages
Requires:
Required-by: yarl, aiohttp
```
```console
$ python -m pip show yarl
Name: yarl
Version: 1.5.1
Summary: Yet another URL library
Home-page: https://github.com/aio-libs/yarl/
Author: Andrew Svetlov
Author-email: [email protected]
License: Apache 2
Location: /home/dbn/usr/py37/lib/python3.7/site-packages
Requires: typing-extensions, multidict, idna
Required-by: aiohttp, replay-proxy
```
📋 **Additional context**
<!-- Add any other context about the problem here, in the next line. -->
<!-- Describe the environment you have that lead to your issue.
This includes aiohttp version, OS, proxy server and other bits that
are related to your case.
IMPORTANT: aiohttp is both server framework and client library.
For getting rid of confusing please put 'server', 'client' or 'both'
word here.
-->
</issue>
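A detail that makes the traceback easier to follow: `socket.getaddrinfo` returns 2-tuple socket addresses for `AF_INET` entries but 4-tuples `(host, port, flowinfo, scope_id)` for `AF_INET6`, so `address[3]` is only safe once the entry is known to be a genuine IPv6 result. The snippet below is a hedged sketch of a defensive variant of that check, for illustration only; the fix the repository actually ships appears in the diff at the end of this record.

```python
import socket


def pick_hosts(infos):
    """Walk getaddrinfo() results without assuming a 4-tuple sockaddr."""
    hosts = []
    for family, _, proto, _, address in infos:
        if family == socket.AF_INET6 and len(address) >= 4 and address[3]:
            # Link-local IPv6: resolve the scope id into the textual host.
            host, _service = socket.getnameinfo(
                address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
            )
            hosts.append((host, int(_service)))
        else:
            host, port = address[:2]
            hosts.append((host, port))
    return hosts
```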
<code>
[start of aiohttp/resolver.py]
1 import socket
2 from typing import Any, Dict, List
3
4 from .abc import AbstractResolver
5 from .helpers import get_running_loop
6
7 __all__ = ("ThreadedResolver", "AsyncResolver", "DefaultResolver")
8
9 try:
10 import aiodns
11
12 # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')
13 except ImportError: # pragma: no cover
14 aiodns = None
15
16 aiodns_default = False
17
18
19 class ThreadedResolver(AbstractResolver):
20 """Use Executor for synchronous getaddrinfo() calls, which defaults to
21 concurrent.futures.ThreadPoolExecutor.
22 """
23
24 def __init__(self) -> None:
25 self._loop = get_running_loop()
26
27 async def resolve(
28 self, hostname: str, port: int = 0, family: int = socket.AF_INET
29 ) -> List[Dict[str, Any]]:
30 infos = await self._loop.getaddrinfo(
31 hostname, port, type=socket.SOCK_STREAM, family=family
32 )
33
34 hosts = []
35 for family, _, proto, _, address in infos:
36 if family == socket.AF_INET6 and address[3]: # type: ignore
37 # This is essential for link-local IPv6 addresses.
38 # LL IPv6 is a VERY rare case. Strictly speaking, we should use
39 # getnameinfo() unconditionally, but performance makes sense.
40 host, _port = socket.getnameinfo(
41 address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV
42 )
43 port = int(_port)
44 else:
45 host, port = address[:2]
46 hosts.append(
47 {
48 "hostname": hostname,
49 "host": host,
50 "port": port,
51 "family": family,
52 "proto": proto,
53 "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
54 }
55 )
56
57 return hosts
58
59 async def close(self) -> None:
60 pass
61
62
63 class AsyncResolver(AbstractResolver):
64 """Use the `aiodns` package to make asynchronous DNS lookups"""
65
66 def __init__(self, *args: Any, **kwargs: Any) -> None:
67 if aiodns is None:
68 raise RuntimeError("Resolver requires aiodns library")
69
70 self._loop = get_running_loop()
71 self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)
72
73 async def resolve(
74 self, host: str, port: int = 0, family: int = socket.AF_INET
75 ) -> List[Dict[str, Any]]:
76 try:
77 resp = await self._resolver.gethostbyname(host, family)
78 except aiodns.error.DNSError as exc:
79 msg = exc.args[1] if len(exc.args) >= 1 else "DNS lookup failed"
80 raise OSError(msg) from exc
81 hosts = []
82 for address in resp.addresses:
83 hosts.append(
84 {
85 "hostname": host,
86 "host": address,
87 "port": port,
88 "family": family,
89 "proto": 0,
90 "flags": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,
91 }
92 )
93
94 if not hosts:
95 raise OSError("DNS lookup failed")
96
97 return hosts
98
99 async def close(self) -> None:
100 return self._resolver.cancel()
101
102
103 DefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver
104
[end of aiohttp/resolver.py]
</code>
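For context on the `flags` route: `socket.AI_ADDRCONFIG` asks the resolver to return addresses only for families that are actually configured on the local host, so a machine with IPv6 compiled out should stop receiving `AF_INET6` entries in the first place. This is the direction the fix shown in the diff further down takes. A small standalone sketch (assumes working IPv4 connectivity; exact output varies by system):

```python
import socket

infos = socket.getaddrinfo(
    "python.org",
    80,
    type=socket.SOCK_STREAM,
    flags=socket.AI_ADDRCONFIG,
)
for family, _, _, _, sockaddr in infos:
    # On an IPv4-only host this should print AF_INET entries only.
    print(family, sockaddr)
```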
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py
--- a/aiohttp/resolver.py
+++ b/aiohttp/resolver.py
@@ -28,7 +28,11 @@
self, hostname: str, port: int = 0, family: int = socket.AF_INET
) -> List[Dict[str, Any]]:
infos = await self._loop.getaddrinfo(
- hostname, port, type=socket.SOCK_STREAM, family=family
+ hostname,
+ port,
+ type=socket.SOCK_STREAM,
+ family=family,
+ flags=socket.AI_ADDRCONFIG,
)
hosts = []
| {"golden_diff": "diff --git a/aiohttp/resolver.py b/aiohttp/resolver.py\n--- a/aiohttp/resolver.py\n+++ b/aiohttp/resolver.py\n@@ -28,7 +28,11 @@\n self, hostname: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n- hostname, port, type=socket.SOCK_STREAM, family=family\n+ hostname,\n+ port,\n+ type=socket.SOCK_STREAM,\n+ family=family,\n+ flags=socket.AI_ADDRCONFIG,\n )\n \n hosts = []\n", "issue": "Tuple index out of range with 3.7.1 and no IPv6\n\ud83d\udc1e **Describe the bug**\r\nAfter upgrading to aiohttp==3.7.1, dns resolution fails with an exception.\r\n\r\n\r\n\ud83d\udca1 **To Reproduce**\r\n<!-- How to reproduce the behavior?\r\n\r\nFor example:\r\n1. Have certain environment\r\n2. Run given code snippet in a certain way\r\n3. See some behavior described\r\n\r\nAdd these steps below this comment: -->\r\n\r\n1. I believe this is due to not have IPV6 enabled in my kernel, which is custom compiled with.\r\n `CONFIG_IPV6=n`\r\n2. Run:\r\n ```python\r\n import asyncio\r\n import aiohttp\r\n\r\n\r\n async def main():\r\n async with aiohttp.ClientSession() as session:\r\n async with session.get('http://python.org') as response:\r\n print(response.status)\r\n\r\n\r\n asyncio.run(main())\r\n ```\r\n3. Actual behavior\r\n ```python-traceback\r\n $ python bug.py\r\n Traceback (most recent call last):\r\n File \"bug.py\", line 11, in <module>\r\n asyncio.run(main())\r\n File \"/usr/lib/python3.7/asyncio/runners.py\", line 43, in run\r\n return loop.run_until_complete(main)\r\n File \"/usr/lib/python3.7/asyncio/base_events.py\", line 587, in run_until_complete\r\n return future.result()\r\n File \"bug.py\", line 7, in main\r\n async with session.get('http://python.org') as response:\r\n File \"/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/client.py\", line 1124, in __aenter__\r\n self._resp = await self._coro\r\n File \"/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/client.py\", line 528, in _request\r\n req, traces=traces, timeout=real_timeout\r\n File \"/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py\", line 541, in connect\r\n proto = await self._create_connection(req, traces, timeout)\r\n File \"/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py\", line 898, in _create_connection\r\n _, proto = await self._create_direct_connection(req, traces, timeout)\r\n File \"/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py\", line 1005, in _create_direct_connection\r\n hosts = await asyncio.shield(host_resolved)\r\n File \"/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/connector.py\", line 871, in _resolve_host\r\n addrs = await self._resolver.resolve(host, port, family=self._family)\r\n File \"/home/dbn/usr/py37/lib/python3.7/site-packages/aiohttp/resolver.py\", line 37, in resolve\r\n if family == socket.AF_INET6 and address[3]: # type: ignore\r\n IndexError: tuple index out of range\r\n ```\r\n\r\n\r\n\ud83d\udca1 **Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nShould print `200`.\r\n\r\n\r\n\ud83d\udccb **Logs/tracebacks**\r\n<!-- If applicable, add logs/tracebacks to help explain your problem. -->\r\n```python-traceback (paste your traceback in the next line)\r\n\r\n```\r\n\r\n\ud83d\udccb **Your version of the Python**\r\n<!-- Attach your version of the Python. 
-->\r\n```console\r\n$ python --version\r\nPython 3.7.9\r\n```\r\n\r\n\ud83d\udccb **Your version of the aiohttp/yarl/multidict distributions**\r\n<!-- Attach your version of the distributions in the code blocks below. -->\r\n```console\r\n$ python -m pip show aiohttp\r\nName: aiohttp\r\nVersion: 3.7.1\r\nSummary: Async http client/server framework (asyncio)\r\nHome-page: https://github.com/aio-libs/aiohttp\r\nAuthor: Nikolay Kim\r\nAuthor-email: [email protected]\r\nLicense: Apache 2\r\nLocation: /home/dbn/usr/py37/lib/python3.7/site-packages\r\nRequires: multidict, async-timeout, typing-extensions, yarl, chardet, attrs\r\nRequired-by: replay-proxy\r\n```\r\n```console\r\n$ python -m pip show multidict\r\nName: multidict\r\nVersion: 4.7.6\r\nSummary: multidict implementation\r\nHome-page: https://github.com/aio-libs/multidict\r\nAuthor: Andrew Svetlov\r\nAuthor-email: [email protected]\r\nLicense: Apache 2\r\nLocation: /home/dbn/usr/py37/lib/python3.7/site-packages\r\nRequires:\r\nRequired-by: yarl, aiohttp\r\n```\r\n```console\r\n$ python -m pip show yarl\r\nName: yarl\r\nVersion: 1.5.1\r\nSummary: Yet another URL library\r\nHome-page: https://github.com/aio-libs/yarl/\r\nAuthor: Andrew Svetlov\r\nAuthor-email: [email protected]\r\nLicense: Apache 2\r\nLocation: /home/dbn/usr/py37/lib/python3.7/site-packages\r\nRequires: typing-extensions, multidict, idna\r\nRequired-by: aiohttp, replay-proxy\r\n```\r\n\r\n\ud83d\udccb **Additional context**\r\n<!-- Add any other context about the problem here, in the next line. -->\r\n\r\n<!-- Describe the environment you have that lead to your issue.\r\n This includes aiohttp version, OS, proxy server and other bits that\r\n are related to your case.\r\n\r\n IMPORTANT: aiohttp is both server framework and client library.\r\n For getting rid of confusing please put 'server', 'client' or 'both'\r\n word here.\r\n -->\r\n\n", "before_files": [{"content": "import socket\nfrom typing import Any, Dict, List\n\nfrom .abc import AbstractResolver\nfrom .helpers import get_running_loop\n\n__all__ = (\"ThreadedResolver\", \"AsyncResolver\", \"DefaultResolver\")\n\ntry:\n import aiodns\n\n # aiodns_default = hasattr(aiodns.DNSResolver, 'gethostbyname')\nexcept ImportError: # pragma: no cover\n aiodns = None\n\naiodns_default = False\n\n\nclass ThreadedResolver(AbstractResolver):\n \"\"\"Use Executor for synchronous getaddrinfo() calls, which defaults to\n concurrent.futures.ThreadPoolExecutor.\n \"\"\"\n\n def __init__(self) -> None:\n self._loop = get_running_loop()\n\n async def resolve(\n self, hostname: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n infos = await self._loop.getaddrinfo(\n hostname, port, type=socket.SOCK_STREAM, family=family\n )\n\n hosts = []\n for family, _, proto, _, address in infos:\n if family == socket.AF_INET6 and address[3]: # type: ignore\n # This is essential for link-local IPv6 addresses.\n # LL IPv6 is a VERY rare case. 
Strictly speaking, we should use\n # getnameinfo() unconditionally, but performance makes sense.\n host, _port = socket.getnameinfo(\n address, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV\n )\n port = int(_port)\n else:\n host, port = address[:2]\n hosts.append(\n {\n \"hostname\": hostname,\n \"host\": host,\n \"port\": port,\n \"family\": family,\n \"proto\": proto,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n return hosts\n\n async def close(self) -> None:\n pass\n\n\nclass AsyncResolver(AbstractResolver):\n \"\"\"Use the `aiodns` package to make asynchronous DNS lookups\"\"\"\n\n def __init__(self, *args: Any, **kwargs: Any) -> None:\n if aiodns is None:\n raise RuntimeError(\"Resolver requires aiodns library\")\n\n self._loop = get_running_loop()\n self._resolver = aiodns.DNSResolver(*args, loop=self._loop, **kwargs)\n\n async def resolve(\n self, host: str, port: int = 0, family: int = socket.AF_INET\n ) -> List[Dict[str, Any]]:\n try:\n resp = await self._resolver.gethostbyname(host, family)\n except aiodns.error.DNSError as exc:\n msg = exc.args[1] if len(exc.args) >= 1 else \"DNS lookup failed\"\n raise OSError(msg) from exc\n hosts = []\n for address in resp.addresses:\n hosts.append(\n {\n \"hostname\": host,\n \"host\": address,\n \"port\": port,\n \"family\": family,\n \"proto\": 0,\n \"flags\": socket.AI_NUMERICHOST | socket.AI_NUMERICSERV,\n }\n )\n\n if not hosts:\n raise OSError(\"DNS lookup failed\")\n\n return hosts\n\n async def close(self) -> None:\n return self._resolver.cancel()\n\n\nDefaultResolver = AsyncResolver if aiodns_default else ThreadedResolver\n", "path": "aiohttp/resolver.py"}]} | 2,719 | 146 |
gh_patches_debug_20712 | rasdani/github-patches | git_diff | chainer__chainer-3925 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MNIST example gets invalid DISPLAY error when run without X11
If the MNIST example is run in an environment where matplotlib is installed but X11 is not available (e.g. over ssh), the training script fails with a `RuntimeError: Invalid DISPLAY variable` error.
```
Exception in main training loop: Invalid DISPLAY variable
Traceback (most recent call last):
File "/data/work/w/repos/chainer/chainer/training/trainer.py", line 302, in run
entry.extension(self)
File "/data/work/w/repos/chainer/chainer/training/extensions/plot_report.py", line 150, in __call__
f = plt.figure()
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py", line 534, in figure
**kwargs)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py", line 170, in new_figure_manager
return cls.new_figure_manager_given_figure(num, fig)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py", line 176, in new_figure_manager_given_figure
canvas = cls.FigureCanvas(figure)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py", line 35, in __init__
super(FigureCanvasQTAggBase, self).__init__(figure=figure)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py", line 235, in __init__
_create_qApp()
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py", line 122, in _create_qApp
raise RuntimeError('Invalid DISPLAY variable')
Will finalize trainer extensions and updater before reraising the exception.
Traceback (most recent call last):
File "train_mnist.py", line 124, in <module>
main()
File "train_mnist.py", line 120, in main
trainer.run()
File "/data/work/w/repos/chainer/chainer/training/trainer.py", line 313, in run
six.reraise(*sys.exc_info())
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "/data/work/w/repos/chainer/chainer/training/trainer.py", line 302, in run
entry.extension(self)
File "/data/work/w/repos/chainer/chainer/training/extensions/plot_report.py", line 150, in __call__
f = plt.figure()
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py", line 534, in figure
**kwargs)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py", line 170, in new_figure_manager
return cls.new_figure_manager_given_figure(num, fig)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py", line 176, in new_figure_manager_given_figure
canvas = cls.FigureCanvas(figure)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py", line 35, in __init__
super(FigureCanvasQTAggBase, self).__init__(figure=figure)
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py", line 235, in __init__
_create_qApp()
File "/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py", line 122, in _create_qApp
raise RuntimeError('Invalid DISPLAY variable')
RuntimeError: Invalid DISPLAY variable
```
</issue>
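Independent of the patch this record ultimately applies (a `--noplot` switch, shown in the diff near the end), the usual workaround on headless machines is to force matplotlib onto a non-interactive backend before pyplot is imported, so plot extensions can still write PNGs without any X server. A minimal hedged sketch of that approach:

```python
import matplotlib

# Must run before anything imports matplotlib.pyplot: "Agg" renders to
# in-memory raster images and never consults the DISPLAY variable.
matplotlib.use("Agg")

import matplotlib.pyplot as plt

fig = plt.figure()
plt.plot([0, 1, 2], [1, 0, 1])
fig.savefig("accuracy.png")
plt.close(fig)
```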
<code>
[start of examples/mnist/train_mnist.py]
1 #!/usr/bin/env python
2
3 from __future__ import print_function
4
5 import argparse
6
7 import chainer
8 import chainer.functions as F
9 import chainer.links as L
10 from chainer import training
11 from chainer.training import extensions
12
13
14 # Network definition
15 class MLP(chainer.Chain):
16
17 def __init__(self, n_units, n_out):
18 super(MLP, self).__init__()
19 with self.init_scope():
20 # the size of the inputs to each layer will be inferred
21 self.l1 = L.Linear(None, n_units) # n_in -> n_units
22 self.l2 = L.Linear(None, n_units) # n_units -> n_units
23 self.l3 = L.Linear(None, n_out) # n_units -> n_out
24
25 def __call__(self, x):
26 h1 = F.relu(self.l1(x))
27 h2 = F.relu(self.l2(h1))
28 return self.l3(h2)
29
30
31 def main():
32 parser = argparse.ArgumentParser(description='Chainer example: MNIST')
33 parser.add_argument('--batchsize', '-b', type=int, default=100,
34 help='Number of images in each mini-batch')
35 parser.add_argument('--epoch', '-e', type=int, default=20,
36 help='Number of sweeps over the dataset to train')
37 parser.add_argument('--frequency', '-f', type=int, default=-1,
38 help='Frequency of taking a snapshot')
39 parser.add_argument('--gpu', '-g', type=int, default=-1,
40 help='GPU ID (negative value indicates CPU)')
41 parser.add_argument('--out', '-o', default='result',
42 help='Directory to output the result')
43 parser.add_argument('--resume', '-r', default='',
44 help='Resume the training from snapshot')
45 parser.add_argument('--unit', '-u', type=int, default=1000,
46 help='Number of units')
47 args = parser.parse_args()
48
49 print('GPU: {}'.format(args.gpu))
50 print('# unit: {}'.format(args.unit))
51 print('# Minibatch-size: {}'.format(args.batchsize))
52 print('# epoch: {}'.format(args.epoch))
53 print('')
54
55 # Set up a neural network to train
56 # Classifier reports softmax cross entropy loss and accuracy at every
57 # iteration, which will be used by the PrintReport extension below.
58 model = L.Classifier(MLP(args.unit, 10))
59 if args.gpu >= 0:
60 # Make a specified GPU current
61 chainer.cuda.get_device_from_id(args.gpu).use()
62 model.to_gpu() # Copy the model to the GPU
63
64 # Setup an optimizer
65 optimizer = chainer.optimizers.Adam()
66 optimizer.setup(model)
67
68 # Load the MNIST dataset
69 train, test = chainer.datasets.get_mnist()
70
71 train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
72 test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
73 repeat=False, shuffle=False)
74
75 # Set up a trainer
76 updater = training.StandardUpdater(train_iter, optimizer, device=args.gpu)
77 trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)
78
79 # Evaluate the model with the test dataset for each epoch
80 trainer.extend(extensions.Evaluator(test_iter, model, device=args.gpu))
81
82 # Dump a computational graph from 'loss' variable at the first iteration
83 # The "main" refers to the target link of the "main" optimizer.
84 trainer.extend(extensions.dump_graph('main/loss'))
85
86 # Take a snapshot for each specified epoch
87 frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)
88 trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))
89
90 # Write a log of evaluation statistics for each epoch
91 trainer.extend(extensions.LogReport())
92
93 # Save two plot images to the result dir
94 if extensions.PlotReport.available():
95 trainer.extend(
96 extensions.PlotReport(['main/loss', 'validation/main/loss'],
97 'epoch', file_name='loss.png'))
98 trainer.extend(
99 extensions.PlotReport(
100 ['main/accuracy', 'validation/main/accuracy'],
101 'epoch', file_name='accuracy.png'))
102
103 # Print selected entries of the log to stdout
104 # Here "main" refers to the target link of the "main" optimizer again, and
105 # "validation" refers to the default name of the Evaluator extension.
106 # Entries other than 'epoch' are reported by the Classifier link, called by
107 # either the updater or the evaluator.
108 trainer.extend(extensions.PrintReport(
109 ['epoch', 'main/loss', 'validation/main/loss',
110 'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))
111
112 # Print a progress bar to stdout
113 trainer.extend(extensions.ProgressBar())
114
115 if args.resume:
116 # Resume from a snapshot
117 chainer.serializers.load_npz(args.resume, trainer)
118
119 # Run the training
120 trainer.run()
121
122
123 if __name__ == '__main__':
124 main()
125
[end of examples/mnist/train_mnist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/mnist/train_mnist.py b/examples/mnist/train_mnist.py
--- a/examples/mnist/train_mnist.py
+++ b/examples/mnist/train_mnist.py
@@ -44,6 +44,8 @@
help='Resume the training from snapshot')
parser.add_argument('--unit', '-u', type=int, default=1000,
help='Number of units')
+ parser.add_argument('--noplot', dest='plot', action='store_false',
+ help='Disable PlotReport extension')
args = parser.parse_args()
print('GPU: {}'.format(args.gpu))
@@ -91,7 +93,7 @@
trainer.extend(extensions.LogReport())
# Save two plot images to the result dir
- if extensions.PlotReport.available():
+ if args.plot and extensions.PlotReport.available():
trainer.extend(
extensions.PlotReport(['main/loss', 'validation/main/loss'],
'epoch', file_name='loss.png'))
| {"golden_diff": "diff --git a/examples/mnist/train_mnist.py b/examples/mnist/train_mnist.py\n--- a/examples/mnist/train_mnist.py\n+++ b/examples/mnist/train_mnist.py\n@@ -44,6 +44,8 @@\n help='Resume the training from snapshot')\n parser.add_argument('--unit', '-u', type=int, default=1000,\n help='Number of units')\n+ parser.add_argument('--noplot', dest='plot', action='store_false',\n+ help='Disable PlotReport extension')\n args = parser.parse_args()\n \n print('GPU: {}'.format(args.gpu))\n@@ -91,7 +93,7 @@\n trainer.extend(extensions.LogReport())\n \n # Save two plot images to the result dir\n- if extensions.PlotReport.available():\n+ if args.plot and extensions.PlotReport.available():\n trainer.extend(\n extensions.PlotReport(['main/loss', 'validation/main/loss'],\n 'epoch', file_name='loss.png'))\n", "issue": "MNIST example gets invalid DISPLAY error when run without X11\nIf MNIST example is run in the environment where matplotlib is installed but X11 is not available (e.g. ssh), the training script fails with `RuntimeError: Invalid DISPLAY variable` error.\r\n\r\n```\r\nException in main training loop: Invalid DISPLAY variable\r\nTraceback (most recent call last):\r\n File \"/data/work/w/repos/chainer/chainer/training/trainer.py\", line 302, in run\r\n entry.extension(self)\r\n File \"/data/work/w/repos/chainer/chainer/training/extensions/plot_report.py\", line 150, in __call__\r\n f = plt.figure()\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py\", line 534, in figure\r\n **kwargs)\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py\", line 170, in new_figure_manager\r\n return cls.new_figure_manager_given_figure(num, fig)\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py\", line 176, in new_figure_manager_given_figure\r\n canvas = cls.FigureCanvas(figure)\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py\", line 35, in __init__\r\n super(FigureCanvasQTAggBase, self).__init__(figure=figure)\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py\", line 235, in __init__\r\n _create_qApp()\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py\", line 122, in _create_qApp\r\n raise RuntimeError('Invalid DISPLAY variable')\r\nWill finalize trainer extensions and updater before reraising the exception.\r\nTraceback (most recent call last):\r\n File \"train_mnist.py\", line 124, in <module>\r\n main()\r\n File \"train_mnist.py\", line 120, in main\r\n trainer.run()\r\n File \"/data/work/w/repos/chainer/chainer/training/trainer.py\", line 313, in run\r\n six.reraise(*sys.exc_info())\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/six.py\", line 693, in reraise\r\n raise value\r\n File \"/data/work/w/repos/chainer/chainer/training/trainer.py\", line 302, in run\r\n entry.extension(self)\r\n File \"/data/work/w/repos/chainer/chainer/training/extensions/plot_report.py\", line 150, in __call__\r\n f = plt.figure()\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/pyplot.py\", line 534, in figure\r\n **kwargs)\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py\", line 170, in new_figure_manager\r\n return cls.new_figure_manager_given_figure(num, fig)\r\n File 
\"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backend_bases.py\", line 176, in new_figure_manager_given_figure\r\n canvas = cls.FigureCanvas(figure)\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5agg.py\", line 35, in __init__\r\n super(FigureCanvasQTAggBase, self).__init__(figure=figure)\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py\", line 235, in __init__\r\n _create_qApp()\r\n File \"/home/niboshi/anaconda/anaconda3/lib/python3.6/site-packages/matplotlib/backends/backend_qt5.py\", line 122, in _create_qApp\r\n raise RuntimeError('Invalid DISPLAY variable')\r\nRuntimeError: Invalid DISPLAY variable\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom __future__ import print_function\n\nimport argparse\n\nimport chainer\nimport chainer.functions as F\nimport chainer.links as L\nfrom chainer import training\nfrom chainer.training import extensions\n\n\n# Network definition\nclass MLP(chainer.Chain):\n\n def __init__(self, n_units, n_out):\n super(MLP, self).__init__()\n with self.init_scope():\n # the size of the inputs to each layer will be inferred\n self.l1 = L.Linear(None, n_units) # n_in -> n_units\n self.l2 = L.Linear(None, n_units) # n_units -> n_units\n self.l3 = L.Linear(None, n_out) # n_units -> n_out\n\n def __call__(self, x):\n h1 = F.relu(self.l1(x))\n h2 = F.relu(self.l2(h1))\n return self.l3(h2)\n\n\ndef main():\n parser = argparse.ArgumentParser(description='Chainer example: MNIST')\n parser.add_argument('--batchsize', '-b', type=int, default=100,\n help='Number of images in each mini-batch')\n parser.add_argument('--epoch', '-e', type=int, default=20,\n help='Number of sweeps over the dataset to train')\n parser.add_argument('--frequency', '-f', type=int, default=-1,\n help='Frequency of taking a snapshot')\n parser.add_argument('--gpu', '-g', type=int, default=-1,\n help='GPU ID (negative value indicates CPU)')\n parser.add_argument('--out', '-o', default='result',\n help='Directory to output the result')\n parser.add_argument('--resume', '-r', default='',\n help='Resume the training from snapshot')\n parser.add_argument('--unit', '-u', type=int, default=1000,\n help='Number of units')\n args = parser.parse_args()\n\n print('GPU: {}'.format(args.gpu))\n print('# unit: {}'.format(args.unit))\n print('# Minibatch-size: {}'.format(args.batchsize))\n print('# epoch: {}'.format(args.epoch))\n print('')\n\n # Set up a neural network to train\n # Classifier reports softmax cross entropy loss and accuracy at every\n # iteration, which will be used by the PrintReport extension below.\n model = L.Classifier(MLP(args.unit, 10))\n if args.gpu >= 0:\n # Make a specified GPU current\n chainer.cuda.get_device_from_id(args.gpu).use()\n model.to_gpu() # Copy the model to the GPU\n\n # Setup an optimizer\n optimizer = chainer.optimizers.Adam()\n optimizer.setup(model)\n\n # Load the MNIST dataset\n train, test = chainer.datasets.get_mnist()\n\n train_iter = chainer.iterators.SerialIterator(train, args.batchsize)\n test_iter = chainer.iterators.SerialIterator(test, args.batchsize,\n repeat=False, shuffle=False)\n\n # Set up a trainer\n updater = training.StandardUpdater(train_iter, optimizer, device=args.gpu)\n trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)\n\n # Evaluate the model with the test dataset for each epoch\n trainer.extend(extensions.Evaluator(test_iter, model, device=args.gpu))\n\n # Dump a 
computational graph from 'loss' variable at the first iteration\n # The \"main\" refers to the target link of the \"main\" optimizer.\n trainer.extend(extensions.dump_graph('main/loss'))\n\n # Take a snapshot for each specified epoch\n frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)\n trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))\n\n # Write a log of evaluation statistics for each epoch\n trainer.extend(extensions.LogReport())\n\n # Save two plot images to the result dir\n if extensions.PlotReport.available():\n trainer.extend(\n extensions.PlotReport(['main/loss', 'validation/main/loss'],\n 'epoch', file_name='loss.png'))\n trainer.extend(\n extensions.PlotReport(\n ['main/accuracy', 'validation/main/accuracy'],\n 'epoch', file_name='accuracy.png'))\n\n # Print selected entries of the log to stdout\n # Here \"main\" refers to the target link of the \"main\" optimizer again, and\n # \"validation\" refers to the default name of the Evaluator extension.\n # Entries other than 'epoch' are reported by the Classifier link, called by\n # either the updater or the evaluator.\n trainer.extend(extensions.PrintReport(\n ['epoch', 'main/loss', 'validation/main/loss',\n 'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))\n\n # Print a progress bar to stdout\n trainer.extend(extensions.ProgressBar())\n\n if args.resume:\n # Resume from a snapshot\n chainer.serializers.load_npz(args.resume, trainer)\n\n # Run the training\n trainer.run()\n\n\nif __name__ == '__main__':\n main()\n", "path": "examples/mnist/train_mnist.py"}]} | 2,854 | 216 |
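A common workaround for the `Invalid DISPLAY variable` crash quoted in the record above, independent of the `--noplot` flag that the patch adds, is to force a non-interactive matplotlib backend before pyplot is imported. This is a generic sketch, not part of the dataset's patch:

```python
import matplotlib
matplotlib.use("Agg")            # headless backend: no X server / DISPLAY needed
import matplotlib.pyplot as plt  # PlotReport-style PNG rendering now works over ssh
```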
gh_patches_debug_13695 | rasdani/github-patches | git_diff | projectmesa__mesa-373 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow fixed seed for replication
Figure out how to best handle running a model with a fixed seed, to ensure that it will produce the same result, and implement that.
</issue>
<code>
[start of mesa/model.py]
1 # -*- coding: utf-8 -*-
2 """
3 The model class for Mesa framework.
4
5 Core Objects: Model
6
7 """
8 import datetime as dt
9 import random
10
11
12 class Model:
13 """ Base class for models. """
14 def __init__(self, seed=None):
15 """ Create a new model. Overload this method with the actual code to
16 start the model.
17
18 Args:
19 seed: seed for the random number generator
20
21 Attributes:
22 schedule: schedule object
23 running: a bool indicating if the model should continue running
24
25 """
26 if seed is None:
27 self.seed = dt.datetime.now()
28 else:
29 self.seed = seed
30 random.seed(seed)
31 self.running = True
32 self.schedule = None
33
34 def run_model(self):
35 """ Run the model until the end condition is reached. Overload as
36 needed.
37
38 """
39 while self.running:
40 self.step()
41
42 def step(self):
43 """ A single step. Fill in here. """
44 pass
45
[end of mesa/model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mesa/model.py b/mesa/model.py
--- a/mesa/model.py
+++ b/mesa/model.py
@@ -7,6 +7,7 @@
"""
import datetime as dt
import random
+import numpy
class Model:
@@ -23,11 +24,14 @@
running: a bool indicating if the model should continue running
"""
+ # seed both the numpy and Python random number generators
if seed is None:
self.seed = dt.datetime.now()
else:
self.seed = seed
random.seed(seed)
+ numpy.random.seed(seed)
+
self.running = True
self.schedule = None
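A minimal reproducibility check against the patched constructor shown above (it assumes `Model` is importable from the package root, which is the usual Mesa layout):

```python
import random
import numpy
from mesa import Model  # assumption: the patched base class is installed

Model(seed=42)
first = [random.random() for _ in range(3)] + list(numpy.random.rand(3))

Model(seed=42)  # re-seeding resets both generators
second = [random.random() for _ in range(3)] + list(numpy.random.rand(3))

assert first == second  # identical draws, hence replicable runs
```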
| {"golden_diff": "diff --git a/mesa/model.py b/mesa/model.py\n--- a/mesa/model.py\n+++ b/mesa/model.py\n@@ -7,6 +7,7 @@\n \"\"\"\n import datetime as dt\n import random\n+import numpy\n \n \n class Model:\n@@ -23,11 +24,14 @@\n running: a bool indicating if the model should continue running\n \n \"\"\"\n+ # seed both the numpy and Python random number generators\n if seed is None:\n self.seed = dt.datetime.now()\n else:\n self.seed = seed\n random.seed(seed)\n+ numpy.random.seed(seed)\n+\n self.running = True\n self.schedule = None\n", "issue": "Allow fixed seed for replication\nFigure out how to best handle running a model with a fixed seed, to ensure that it will produce the same result. and implement that.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nThe model class for Mesa framework.\n\nCore Objects: Model\n\n\"\"\"\nimport datetime as dt\nimport random\n\n\nclass Model:\n \"\"\" Base class for models. \"\"\"\n def __init__(self, seed=None):\n \"\"\" Create a new model. Overload this method with the actual code to\n start the model.\n\n Args:\n seed: seed for the random number generator\n\n Attributes:\n schedule: schedule object\n running: a bool indicating if the model should continue running\n\n \"\"\"\n if seed is None:\n self.seed = dt.datetime.now()\n else:\n self.seed = seed\n random.seed(seed)\n self.running = True\n self.schedule = None\n\n def run_model(self):\n \"\"\" Run the model until the end condition is reached. Overload as\n needed.\n\n \"\"\"\n while self.running:\n self.step()\n\n def step(self):\n \"\"\" A single step. Fill in here. \"\"\"\n pass\n", "path": "mesa/model.py"}]} | 861 | 145 |
gh_patches_debug_39623 | rasdani/github-patches | git_diff | wright-group__WrightTools-886 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ENHC: Trim should have an option to replace outliers with the neighborhood mean computed with the outlier excluded from the neighborhood
Current replacement options are `'nan', 'mean', number`. I vote we add `exclusive_mean`.
</issue>
<code>
[start of WrightTools/data/_channel.py]
1 """Channel class and associated."""
2
3
4 # --- import --------------------------------------------------------------------------------------
5
6
7 import numpy as np
8
9 import h5py
10
11 import warnings
12 import numbers
13
14 from .. import kit as wt_kit
15 from .._dataset import Dataset
16 from .. import exceptions as wt_exceptions
17
18 __all__ = ["Channel"]
19
20 # --- class ---------------------------------------------------------------------------------------
21
22
23 class Channel(Dataset):
24 """Channel."""
25
26 class_name = "Channel"
27
28 def __init__(
29 self,
30 parent,
31 id,
32 *,
33 units=None,
34 null=None,
35 signed=None,
36 label=None,
37 label_seed=None,
38 **kwargs
39 ):
40 """Construct a channel object.
41
42 Parameters
43 ----------
44 values : array-like
45 Values.
46 name : string
47 Channel name.
48 units : string (optional)
49 Channel units. Default is None.
50 null : number (optional)
51 Channel null. Default is None (0).
52         signed : boolean (optional)
53 Channel signed flag. Default is None (guess).
54 label : string.
55 Label. Default is None.
56 label_seed : list of strings
57 Label seed. Default is None.
58 **kwargs
59 Additional keyword arguments are added to the attrs dictionary
60 and to the natural namespace of the object (if possible).
61 """
62 self._parent = parent
63 super().__init__(id)
64 self.label = label
65 self.label_seed = label_seed
66 self.units = units
67 self.dimensionality = len(self.shape)
68 # attrs
69 self.attrs.update(kwargs)
70 self.attrs["name"] = h5py.h5i.get_name(self.id).decode().split("/")[-1]
71 self.attrs["class"] = "Channel"
72 if signed is not None:
73 self.attrs["signed"] = signed
74 if null is not None:
75 self.attrs["null"] = null
76 for key, value in self.attrs.items():
77 identifier = wt_kit.string2identifier(key)
78 if not hasattr(self, identifier):
79 setattr(self, identifier, value)
80
81 @property
82 def major_extent(self) -> complex:
83 """Maximum deviation from null."""
84 return max((self.max() - self.null, self.null - self.min()))
85
86 @property
87 def minor_extent(self) -> complex:
88 """Minimum deviation from null."""
89 return min((self.max() - self.null, self.null - self.min()))
90
91 @property
92 def null(self) -> complex:
93 if "null" not in self.attrs.keys():
94 self.attrs["null"] = 0
95 return self.attrs["null"]
96
97 @null.setter
98 def null(self, value):
99 self.attrs["null"] = value
100
101 @property
102 def signed(self) -> bool:
103 if "signed" not in self.attrs.keys():
104 self.attrs["signed"] = False
105 return self.attrs["signed"]
106
107 @signed.setter
108 def signed(self, value):
109 self.attrs["signed"] = value
110
111 def mag(self) -> complex:
112 """Channel magnitude (maximum deviation from null)."""
113 return self.major_extent
114
115 def normalize(self, mag=1.):
116 """Normalize a Channel, set `null` to 0 and the mag to given value.
117
118 Parameters
119 ----------
120 mag : float (optional)
121 New value of mag. Default is 1.
122 """
123
124 def f(dataset, s, null, mag):
125 dataset[s] -= null
126 dataset[s] /= mag
127
128 if self.signed:
129 mag = self.mag() / mag
130 else:
131 mag = self.max() / mag
132 self.chunkwise(f, null=self.null, mag=mag)
133 self._null = 0
134
135 def trim(self, neighborhood, method="ztest", factor=3, replace="nan", verbose=True):
136 """Remove outliers from the dataset.
137
138 Identifies outliers by comparing each point to its
139 neighbors using a statistical test.
140
141 Parameters
142 ----------
143 neighborhood : list of integers
144 Size of the neighborhood in each dimension. Length of the list must
145 be equal to the dimensionality of the channel.
146 method : {'ztest'} (optional)
147 Statistical test used to detect outliers. Default is ztest.
148
149 ztest
150 Compare point deviation from neighborhood mean to neighborhood
151 standard deviation.
152
153 factor : number (optional)
154 Tolerance factor. Default is 3.
155 replace : {'nan', 'mean', number} (optional)
156 Behavior of outlier replacement. Default is nan.
157
158 nan
159 Outliers are replaced by numpy nans.
160
161 mean
162 Outliers are replaced by the mean of its neighborhood.
163
164 number
165 Array becomes given number.
166
167 Returns
168 -------
169 list of tuples
170             Indices of trimmed outliers.
171
172 See Also
173 --------
174 clip
175 Remove pixels outside of a certain range.
176 """
177 warnings.warn("trim", category=wt_exceptions.EntireDatasetInMemoryWarning)
178 outliers = []
179 means = []
180 # find outliers
181 for idx in np.ndindex(self.shape):
182 slices = []
183 for i, di, size in zip(idx, neighborhood, self.shape):
184 start = max(0, i - di)
185 stop = min(size, i + di + 1)
186 slices.append(slice(start, stop, 1))
187 neighbors = self[slices]
188 mean = np.nanmean(neighbors)
189 limit = np.nanstd(neighbors) * factor
190 if np.abs(self[idx] - mean) > limit:
191 outliers.append(idx)
192 means.append(mean)
193 # replace outliers
194 i = tuple(zip(*outliers))
195 if replace == "nan":
196 arr = self[:]
197 arr[i] = np.nan
198 self[:] = arr
199 elif replace == "mean":
200 arr = self[:]
201 arr[i] = means
202 self[:] = arr
203 elif isinstance(replace, numbers.Number):
204 arr = self[:]
205 arr[i] = replace
206 self[:] = arr
207 else:
208 raise KeyError("replace must be one of {nan, mean} or some number")
209 # finish
210 if verbose:
211 print("%i outliers removed" % len(outliers))
212 return outliers
213
[end of WrightTools/data/_channel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/WrightTools/data/_channel.py b/WrightTools/data/_channel.py
--- a/WrightTools/data/_channel.py
+++ b/WrightTools/data/_channel.py
@@ -152,14 +152,17 @@
factor : number (optional)
Tolerance factor. Default is 3.
- replace : {'nan', 'mean', number} (optional)
+ replace : {'nan', 'mean', 'exclusive_mean', number} (optional)
Behavior of outlier replacement. Default is nan.
nan
Outliers are replaced by numpy nans.
mean
- Outliers are replaced by the mean of its neighborhood.
+ Outliers are replaced by the mean of its neighborhood, including itself.
+
+ exclusive_mean
+ Outilers are replaced by the mean of its neighborhood, not including itself.
number
Array becomes given number.
@@ -177,6 +180,7 @@
warnings.warn("trim", category=wt_exceptions.EntireDatasetInMemoryWarning)
outliers = []
means = []
+ ex_means = []
# find outliers
for idx in np.ndindex(self.shape):
slices = []
@@ -186,26 +190,33 @@
slices.append(slice(start, stop, 1))
neighbors = self[slices]
mean = np.nanmean(neighbors)
+ sum_ = np.nansum(neighbors)
limit = np.nanstd(neighbors) * factor
if np.abs(self[idx] - mean) > limit:
outliers.append(idx)
means.append(mean)
+ # Note, "- 1" is to exclude the point itself, which is not nan, in order
+ # to enter this if block, as `np.abs(nan - mean)` is nan, which would
+ # evaluate to False
+ ex_means.append((sum_ - self[idx]) / (np.sum(~np.isnan(neighbors)) - 1))
+
# replace outliers
i = tuple(zip(*outliers))
- if replace == "nan":
- arr = self[:]
- arr[i] = np.nan
- self[:] = arr
- elif replace == "mean":
- arr = self[:]
- arr[i] = means
- self[:] = arr
- elif isinstance(replace, numbers.Number):
- arr = self[:]
- arr[i] = replace
- self[:] = arr
- else:
- raise KeyError("replace must be one of {nan, mean} or some number")
+
+ if len(i) == 0:
+ if verbose:
+ print("No outliers found")
+ return []
+
+ replace = {"nan": np.nan, "mean": means, "exclusive_mean": ex_means}.get(replace, replace)
+
+ # This may someday be available in h5py directly, but seems that day is not yet.
+ # This is annoying because it is the only reason we hold the whole set in memory.
+ # KFS 2019-03-21
+ arr = self[:]
+ arr[i] = replace
+ self[:] = arr
+
# finish
if verbose:
print("%i outliers removed" % len(outliers))
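The arithmetic behind the new `exclusive_mean` option, as a standalone sketch with made-up numbers (none of these values come from the library or the issue):

```python
import numpy as np

neighbors = np.array([1.0, 1.1, 9.0, 0.9, 1.0])  # hypothetical neighborhood; 9.0 is the outlier
outlier = neighbors[2]

inclusive_mean = np.nanmean(neighbors)  # 2.6 -> what the existing 'mean' option substitutes
exclusive_mean = (np.nansum(neighbors) - outlier) / (np.sum(~np.isnan(neighbors)) - 1)  # 1.0
```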
| {"golden_diff": "diff --git a/WrightTools/data/_channel.py b/WrightTools/data/_channel.py\n--- a/WrightTools/data/_channel.py\n+++ b/WrightTools/data/_channel.py\n@@ -152,14 +152,17 @@\n \n factor : number (optional)\n Tolerance factor. Default is 3.\n- replace : {'nan', 'mean', number} (optional)\n+ replace : {'nan', 'mean', 'exclusive_mean', number} (optional)\n Behavior of outlier replacement. Default is nan.\n \n nan\n Outliers are replaced by numpy nans.\n \n mean\n- Outliers are replaced by the mean of its neighborhood.\n+ Outliers are replaced by the mean of its neighborhood, including itself.\n+\n+ exclusive_mean\n+ Outilers are replaced by the mean of its neighborhood, not including itself.\n \n number\n Array becomes given number.\n@@ -177,6 +180,7 @@\n warnings.warn(\"trim\", category=wt_exceptions.EntireDatasetInMemoryWarning)\n outliers = []\n means = []\n+ ex_means = []\n # find outliers\n for idx in np.ndindex(self.shape):\n slices = []\n@@ -186,26 +190,33 @@\n slices.append(slice(start, stop, 1))\n neighbors = self[slices]\n mean = np.nanmean(neighbors)\n+ sum_ = np.nansum(neighbors)\n limit = np.nanstd(neighbors) * factor\n if np.abs(self[idx] - mean) > limit:\n outliers.append(idx)\n means.append(mean)\n+ # Note, \"- 1\" is to exclude the point itself, which is not nan, in order\n+ # to enter this if block, as `np.abs(nan - mean)` is nan, which would\n+ # evaluate to False\n+ ex_means.append((sum_ - self[idx]) / (np.sum(~np.isnan(neighbors)) - 1))\n+\n # replace outliers\n i = tuple(zip(*outliers))\n- if replace == \"nan\":\n- arr = self[:]\n- arr[i] = np.nan\n- self[:] = arr\n- elif replace == \"mean\":\n- arr = self[:]\n- arr[i] = means\n- self[:] = arr\n- elif isinstance(replace, numbers.Number):\n- arr = self[:]\n- arr[i] = replace\n- self[:] = arr\n- else:\n- raise KeyError(\"replace must be one of {nan, mean} or some number\")\n+\n+ if len(i) == 0:\n+ if verbose:\n+ print(\"No outliers found\")\n+ return []\n+\n+ replace = {\"nan\": np.nan, \"mean\": means, \"exclusive_mean\": ex_means}.get(replace, replace)\n+\n+ # This may someday be available in h5py directly, but seems that day is not yet.\n+ # This is annoying because it is the only reason we hold the whole set in memory.\n+ # KFS 2019-03-21\n+ arr = self[:]\n+ arr[i] = replace\n+ self[:] = arr\n+\n # finish\n if verbose:\n print(\"%i outliers removed\" % len(outliers))\n", "issue": "ENHC: Trim should have an option to replace with the neighborhood mean with the outlier not in the neighborhood\nCurrent replacement options are `'nan', 'mean', number`. I vote we add `exclusive_mean`.\n", "before_files": [{"content": "\"\"\"Channel class and associated.\"\"\"\n\n\n# --- import --------------------------------------------------------------------------------------\n\n\nimport numpy as np\n\nimport h5py\n\nimport warnings\nimport numbers\n\nfrom .. import kit as wt_kit\nfrom .._dataset import Dataset\nfrom .. import exceptions as wt_exceptions\n\n__all__ = [\"Channel\"]\n\n# --- class ---------------------------------------------------------------------------------------\n\n\nclass Channel(Dataset):\n \"\"\"Channel.\"\"\"\n\n class_name = \"Channel\"\n\n def __init__(\n self,\n parent,\n id,\n *,\n units=None,\n null=None,\n signed=None,\n label=None,\n label_seed=None,\n **kwargs\n ):\n \"\"\"Construct a channel object.\n\n Parameters\n ----------\n values : array-like\n Values.\n name : string\n Channel name.\n units : string (optional)\n Channel units. Default is None.\n null : number (optional)\n Channel null. 
Default is None (0).\n signed : booelan (optional)\n Channel signed flag. Default is None (guess).\n label : string.\n Label. Default is None.\n label_seed : list of strings\n Label seed. Default is None.\n **kwargs\n Additional keyword arguments are added to the attrs dictionary\n and to the natural namespace of the object (if possible).\n \"\"\"\n self._parent = parent\n super().__init__(id)\n self.label = label\n self.label_seed = label_seed\n self.units = units\n self.dimensionality = len(self.shape)\n # attrs\n self.attrs.update(kwargs)\n self.attrs[\"name\"] = h5py.h5i.get_name(self.id).decode().split(\"/\")[-1]\n self.attrs[\"class\"] = \"Channel\"\n if signed is not None:\n self.attrs[\"signed\"] = signed\n if null is not None:\n self.attrs[\"null\"] = null\n for key, value in self.attrs.items():\n identifier = wt_kit.string2identifier(key)\n if not hasattr(self, identifier):\n setattr(self, identifier, value)\n\n @property\n def major_extent(self) -> complex:\n \"\"\"Maximum deviation from null.\"\"\"\n return max((self.max() - self.null, self.null - self.min()))\n\n @property\n def minor_extent(self) -> complex:\n \"\"\"Minimum deviation from null.\"\"\"\n return min((self.max() - self.null, self.null - self.min()))\n\n @property\n def null(self) -> complex:\n if \"null\" not in self.attrs.keys():\n self.attrs[\"null\"] = 0\n return self.attrs[\"null\"]\n\n @null.setter\n def null(self, value):\n self.attrs[\"null\"] = value\n\n @property\n def signed(self) -> bool:\n if \"signed\" not in self.attrs.keys():\n self.attrs[\"signed\"] = False\n return self.attrs[\"signed\"]\n\n @signed.setter\n def signed(self, value):\n self.attrs[\"signed\"] = value\n\n def mag(self) -> complex:\n \"\"\"Channel magnitude (maximum deviation from null).\"\"\"\n return self.major_extent\n\n def normalize(self, mag=1.):\n \"\"\"Normalize a Channel, set `null` to 0 and the mag to given value.\n\n Parameters\n ----------\n mag : float (optional)\n New value of mag. Default is 1.\n \"\"\"\n\n def f(dataset, s, null, mag):\n dataset[s] -= null\n dataset[s] /= mag\n\n if self.signed:\n mag = self.mag() / mag\n else:\n mag = self.max() / mag\n self.chunkwise(f, null=self.null, mag=mag)\n self._null = 0\n\n def trim(self, neighborhood, method=\"ztest\", factor=3, replace=\"nan\", verbose=True):\n \"\"\"Remove outliers from the dataset.\n\n Identifies outliers by comparing each point to its\n neighbors using a statistical test.\n\n Parameters\n ----------\n neighborhood : list of integers\n Size of the neighborhood in each dimension. Length of the list must\n be equal to the dimensionality of the channel.\n method : {'ztest'} (optional)\n Statistical test used to detect outliers. Default is ztest.\n\n ztest\n Compare point deviation from neighborhood mean to neighborhood\n standard deviation.\n\n factor : number (optional)\n Tolerance factor. Default is 3.\n replace : {'nan', 'mean', number} (optional)\n Behavior of outlier replacement. 
Default is nan.\n\n nan\n Outliers are replaced by numpy nans.\n\n mean\n Outliers are replaced by the mean of its neighborhood.\n\n number\n Array becomes given number.\n\n Returns\n -------\n list of tuples\n Indicies of trimmed outliers.\n\n See Also\n --------\n clip\n Remove pixels outside of a certain range.\n \"\"\"\n warnings.warn(\"trim\", category=wt_exceptions.EntireDatasetInMemoryWarning)\n outliers = []\n means = []\n # find outliers\n for idx in np.ndindex(self.shape):\n slices = []\n for i, di, size in zip(idx, neighborhood, self.shape):\n start = max(0, i - di)\n stop = min(size, i + di + 1)\n slices.append(slice(start, stop, 1))\n neighbors = self[slices]\n mean = np.nanmean(neighbors)\n limit = np.nanstd(neighbors) * factor\n if np.abs(self[idx] - mean) > limit:\n outliers.append(idx)\n means.append(mean)\n # replace outliers\n i = tuple(zip(*outliers))\n if replace == \"nan\":\n arr = self[:]\n arr[i] = np.nan\n self[:] = arr\n elif replace == \"mean\":\n arr = self[:]\n arr[i] = means\n self[:] = arr\n elif isinstance(replace, numbers.Number):\n arr = self[:]\n arr[i] = replace\n self[:] = arr\n else:\n raise KeyError(\"replace must be one of {nan, mean} or some number\")\n # finish\n if verbose:\n print(\"%i outliers removed\" % len(outliers))\n return outliers\n", "path": "WrightTools/data/_channel.py"}]} | 2,438 | 721 |
gh_patches_debug_14678 | rasdani/github-patches | git_diff | AUTOMATIC1111__stable-diffusion-webui-7430 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: "AssertionError: bad number of vertical texts: 2; must be 3" on hires and prompt matrix
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Performing a render using both **hires fix** and **prompt matrix** results in an error and a failure to render the image grid (and accompanying legend). The individual images still render and can be accessed in the txt2img output folder. The image preview pane remains blank and the error text _AssertionError: bad number of vertical texts: 2; must be 3_* shows below it.
*These numbers might be different. Sometimes _vertical_ will instead say _horizontal_.
### Steps to reproduce the problem
1. In the txt2img tab, enter a base prompt with 2 or more variable prompts separated by the vertical line character. For example: **Painting of a man|Cowboy hat|beard**
2. Toggle **Hires. fix** to on. ✅
3. Under **Script**, select **Prompt matrix**.
4. Click generate.
### What should have happened?
On completion, the generation should have produced a grid showing the array of possible images with the variable prompts toggled on or off. Instead, no grid is produced and an error appears instead. The images still render (and can be viewed from the output folder), but the image grid does not. Since the image grid is the most important part (allowing you to compare images very easily), this is a problem.
Note that using prompt matrix or hires. fix individually still works just fine.
### Commit where the problem happens
3a0d6b77295162146d0a8d04278804334da6f1b4
### What platforms do you use to access UI ?
Windows
### What browsers do you use to access the UI ?
Google Chrome
### Command Line Arguments
_No response_
### Additional information, context and logs
The following text is an example of what shows in my command window when the error occurs:
```
Error completing request8:54, 1.83s/it]
Arguments: ('task(xe7gus9bhl4yqzh)', 'painting of a beautiful woman|cowboy hat|blindfold', '', 'None', 'None', 20, 1, False, False, 1, 1, 10, -1.0, -1.0, 0, 0, 0, False, 512, 512, True, 0.7, 2, 'Latent', 0, 0, 0, 1, False, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "C:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "C:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 49, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *args)
File "C:\StableDiffusion\stable-diffusion-webui\modules\scripts.py", line 337, in run
processed = script.run(p, *script_args)
File "C:\StableDiffusion\stable-diffusion-webui\scripts\prompt_matrix.py", line 82, in run
grid = images.draw_prompt_matrix(grid, p.width, p.height, prompt_matrix_parts)
File "C:\StableDiffusion\stable-diffusion-webui\modules\images.py", line 230, in draw_prompt_matrix
return draw_grid_annotations(im, width, height, hor_texts, ver_texts)
File "C:\StableDiffusion\stable-diffusion-webui\modules\images.py", line 175, in draw_grid_annotations
assert cols == len(hor_texts), f'bad number of horizontal texts: {len(hor_texts)}; must be {cols}'
AssertionError: bad number of horizontal texts: 2; must be 4
```
</issue>
<code>
[start of scripts/prompt_matrix.py]
1 import math
2 from collections import namedtuple
3 from copy import copy
4 import random
5
6 import modules.scripts as scripts
7 import gradio as gr
8
9 from modules import images
10 from modules.processing import process_images, Processed
11 from modules.shared import opts, cmd_opts, state
12 import modules.sd_samplers
13
14
15 def draw_xy_grid(xs, ys, x_label, y_label, cell):
16 res = []
17
18 ver_texts = [[images.GridAnnotation(y_label(y))] for y in ys]
19 hor_texts = [[images.GridAnnotation(x_label(x))] for x in xs]
20
21 first_processed = None
22
23 state.job_count = len(xs) * len(ys)
24
25 for iy, y in enumerate(ys):
26 for ix, x in enumerate(xs):
27 state.job = f"{ix + iy * len(xs) + 1} out of {len(xs) * len(ys)}"
28
29 processed = cell(x, y)
30 if first_processed is None:
31 first_processed = processed
32
33 res.append(processed.images[0])
34
35 grid = images.image_grid(res, rows=len(ys))
36 grid = images.draw_grid_annotations(grid, res[0].width, res[0].height, hor_texts, ver_texts)
37
38 first_processed.images = [grid]
39
40 return first_processed
41
42
43 class Script(scripts.Script):
44 def title(self):
45 return "Prompt matrix"
46
47 def ui(self, is_img2img):
48 gr.HTML('<br />')
49 with gr.Row():
50 with gr.Column():
51 put_at_start = gr.Checkbox(label='Put variable parts at start of prompt', value=False, elem_id=self.elem_id("put_at_start"))
52 different_seeds = gr.Checkbox(label='Use different seed for each picture', value=False, elem_id=self.elem_id("different_seeds"))
53 with gr.Column():
54 prompt_type = gr.Radio(["positive", "negative"], label="Select prompt", elem_id=self.elem_id("prompt_type"), value="positive")
55 variations_delimiter = gr.Radio(["comma", "space"], label="Select joining char", elem_id=self.elem_id("variations_delimiter"), value="comma")
56 with gr.Column():
57 margin_size = gr.Slider(label="Grid margins (px)", minimum=0, maximum=500, value=0, step=2, elem_id=self.elem_id("margin_size"))
58
59 return [put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size]
60
61 def run(self, p, put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size):
62 modules.processing.fix_seed(p)
63         # Raise error if prompt type is not positive or negative
64 if prompt_type not in ["positive", "negative"]:
65 raise ValueError(f"Unknown prompt type {prompt_type}")
66 # Raise error if variations delimiter is not comma or space
67 if variations_delimiter not in ["comma", "space"]:
68 raise ValueError(f"Unknown variations delimiter {variations_delimiter}")
69
70 prompt = p.prompt if prompt_type == "positive" else p.negative_prompt
71 original_prompt = prompt[0] if type(prompt) == list else prompt
72 positive_prompt = p.prompt[0] if type(p.prompt) == list else p.prompt
73
74 delimiter = ", " if variations_delimiter == "comma" else " "
75
76 all_prompts = []
77 prompt_matrix_parts = original_prompt.split("|")
78 combination_count = 2 ** (len(prompt_matrix_parts) - 1)
79 for combination_num in range(combination_count):
80 selected_prompts = [text.strip().strip(',') for n, text in enumerate(prompt_matrix_parts[1:]) if combination_num & (1 << n)]
81
82 if put_at_start:
83 selected_prompts = selected_prompts + [prompt_matrix_parts[0]]
84 else:
85 selected_prompts = [prompt_matrix_parts[0]] + selected_prompts
86
87 all_prompts.append(delimiter.join(selected_prompts))
88
89 p.n_iter = math.ceil(len(all_prompts) / p.batch_size)
90 p.do_not_save_grid = True
91
92 print(f"Prompt matrix will create {len(all_prompts)} images using a total of {p.n_iter} batches.")
93
94 if prompt_type == "positive":
95 p.prompt = all_prompts
96 else:
97 p.negative_prompt = all_prompts
98 p.seed = [p.seed + (i if different_seeds else 0) for i in range(len(all_prompts))]
99 p.prompt_for_display = positive_prompt
100 processed = process_images(p)
101
102 grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2))
103 grid = images.draw_prompt_matrix(grid, p.width, p.height, prompt_matrix_parts, margin_size)
104 processed.images.insert(0, grid)
105 processed.index_of_first_image = 1
106 processed.infotexts.insert(0, processed.infotexts[0])
107
108 if opts.grid_save:
109 images.save_image(processed.images[0], p.outpath_grids, "prompt_matrix", extension=opts.grid_format, prompt=original_prompt, seed=processed.seed, grid=True, p=p)
110
111 return processed
112
[end of scripts/prompt_matrix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/prompt_matrix.py b/scripts/prompt_matrix.py
--- a/scripts/prompt_matrix.py
+++ b/scripts/prompt_matrix.py
@@ -99,8 +99,8 @@
p.prompt_for_display = positive_prompt
processed = process_images(p)
- grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2))
- grid = images.draw_prompt_matrix(grid, p.width, p.height, prompt_matrix_parts, margin_size)
+ grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2))
+ grid = images.draw_prompt_matrix(grid, processed.images[0].width, processed.images[1].height, prompt_matrix_parts, margin_size)
processed.images.insert(0, grid)
processed.index_of_first_image = 1
processed.infotexts.insert(0, processed.infotexts[0])
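Illustrative arithmetic only (the sizes are assumptions for a 2x Hires. fix run, not measurements from the report): it shows why the annotation helper derives the wrong column count when it is handed the pre-upscale width, and why taking the width from the processed images fixes it.

```python
# Two prompt-matrix columns; each cell is 1024 px wide after a 2x hires pass,
# while p.width is still the original 512.
grid_width, cell_width, p_width, n_labels = 2048, 1024, 512, 2

cols_from_p_width = grid_width // p_width        # 4 -> "must be 4" in the assertion
cols_from_cell_width = grid_width // cell_width  # 2 -> matches len(hor_texts)
```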
| {"golden_diff": "diff --git a/scripts/prompt_matrix.py b/scripts/prompt_matrix.py\n--- a/scripts/prompt_matrix.py\n+++ b/scripts/prompt_matrix.py\n@@ -99,8 +99,8 @@\n p.prompt_for_display = positive_prompt\r\n processed = process_images(p)\r\n \r\n- grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2))\r\n- grid = images.draw_prompt_matrix(grid, p.width, p.height, prompt_matrix_parts, margin_size)\r\n+ grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2)) \r\n+ grid = images.draw_prompt_matrix(grid, processed.images[0].width, processed.images[1].height, prompt_matrix_parts, margin_size)\r\n processed.images.insert(0, grid)\r\n processed.index_of_first_image = 1\r\n processed.infotexts.insert(0, processed.infotexts[0])\n", "issue": "[Bug]: \"AssertionError: bad number of vertical texts: 2; must be 3\" on hires and prompt matrix\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nPerforming a render using both **hires fix** and **prompt matrix** results in an error and a failure to render the image grid (and accompanying legend). The individual images still render and can be accessed in the txt2img output folder. The image preview pane remains blank and the error text _AssertionError: bad number of vertical texts: 2; must be 3_* shows below it.\r\n\r\n*These numbers might be different. Sometimes _vertical_ will instead say _horizontal_.\n\n### Steps to reproduce the problem\n\n1. In the txt2img tab, enter a base prompt with 2 or more variable prompts separated by the vertical line character. For example: **Painting of a man|Cowboy hat|beard**\r\n2. Toggle **Hires. fix** to on. \u2705\r\n3. Under **Script**, select **Prompt matrix**.\r\n4. Click generate.\n\n### What should have happened?\n\nOn completion, the generation should have produced a grid showing the array of possible images with the variable prompts toggled on or off. Instead, no grid is produced and an error appears instead. The images still render (and can be viewed from the output folder), but the image grid does not. Since the image grid is the most important part (allowing you to compare images very easily), this is a problem.\r\n\r\nNote that using prompt matrix or hires. 
fix individually still work just fine.\n\n### Commit where the problem happens\n\n3a0d6b77295162146d0a8d04278804334da6f1b4\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\nThe following text is an example of what shows in my command window when the error occurs:\r\n\r\n```\r\nError completing request8:54, 1.83s/it]\r\nArguments: ('task(xe7gus9bhl4yqzh)', 'painting of a beautiful woman|cowboy hat|blindfold', '', 'None', 'None', 20, 1, False, False, 1, 1, 10, -1.0, -1.0, 0, 0, 0, False, 512, 512, True, 0.7, 2, 'Latent', 0, 0, 0, 1, False, False, False, False, '', 1, '', 0, '', True, False, False) {}\r\nTraceback (most recent call last):\r\n File \"C:\\StableDiffusion\\stable-diffusion-webui\\modules\\call_queue.py\", line 56, in f\r\n res = list(func(*args, **kwargs))\r\n File \"C:\\StableDiffusion\\stable-diffusion-webui\\modules\\call_queue.py\", line 37, in f\r\n res = func(*args, **kwargs)\r\n File \"C:\\StableDiffusion\\stable-diffusion-webui\\modules\\txt2img.py\", line 49, in txt2img\r\n processed = modules.scripts.scripts_txt2img.run(p, *args)\r\n File \"C:\\StableDiffusion\\stable-diffusion-webui\\modules\\scripts.py\", line 337, in run\r\n processed = script.run(p, *script_args)\r\n File \"C:\\StableDiffusion\\stable-diffusion-webui\\scripts\\prompt_matrix.py\", line 82, in run\r\n grid = images.draw_prompt_matrix(grid, p.width, p.height, prompt_matrix_parts)\r\n File \"C:\\StableDiffusion\\stable-diffusion-webui\\modules\\images.py\", line 230, in draw_prompt_matrix\r\n return draw_grid_annotations(im, width, height, hor_texts, ver_texts)\r\n File \"C:\\StableDiffusion\\stable-diffusion-webui\\modules\\images.py\", line 175, in draw_grid_annotations\r\n assert cols == len(hor_texts), f'bad number of horizontal texts: {len(hor_texts)}; must be {cols}'\r\nAssertionError: bad number of horizontal texts: 2; must be 4\r\n```\n", "before_files": [{"content": "import math\r\nfrom collections import namedtuple\r\nfrom copy import copy\r\nimport random\r\n\r\nimport modules.scripts as scripts\r\nimport gradio as gr\r\n\r\nfrom modules import images\r\nfrom modules.processing import process_images, Processed\r\nfrom modules.shared import opts, cmd_opts, state\r\nimport modules.sd_samplers\r\n\r\n\r\ndef draw_xy_grid(xs, ys, x_label, y_label, cell):\r\n res = []\r\n\r\n ver_texts = [[images.GridAnnotation(y_label(y))] for y in ys]\r\n hor_texts = [[images.GridAnnotation(x_label(x))] for x in xs]\r\n\r\n first_processed = None\r\n\r\n state.job_count = len(xs) * len(ys)\r\n\r\n for iy, y in enumerate(ys):\r\n for ix, x in enumerate(xs):\r\n state.job = f\"{ix + iy * len(xs) + 1} out of {len(xs) * len(ys)}\"\r\n\r\n processed = cell(x, y)\r\n if first_processed is None:\r\n first_processed = processed\r\n\r\n res.append(processed.images[0])\r\n\r\n grid = images.image_grid(res, rows=len(ys))\r\n grid = images.draw_grid_annotations(grid, res[0].width, res[0].height, hor_texts, ver_texts)\r\n\r\n first_processed.images = [grid]\r\n\r\n return first_processed\r\n\r\n\r\nclass Script(scripts.Script):\r\n def title(self):\r\n return \"Prompt matrix\"\r\n\r\n def ui(self, is_img2img):\r\n gr.HTML('<br />')\r\n with gr.Row():\r\n with gr.Column():\r\n put_at_start = gr.Checkbox(label='Put variable parts at start of prompt', value=False, elem_id=self.elem_id(\"put_at_start\"))\r\n different_seeds = gr.Checkbox(label='Use 
different seed for each picture', value=False, elem_id=self.elem_id(\"different_seeds\"))\r\n with gr.Column():\r\n prompt_type = gr.Radio([\"positive\", \"negative\"], label=\"Select prompt\", elem_id=self.elem_id(\"prompt_type\"), value=\"positive\")\r\n variations_delimiter = gr.Radio([\"comma\", \"space\"], label=\"Select joining char\", elem_id=self.elem_id(\"variations_delimiter\"), value=\"comma\")\r\n with gr.Column():\r\n margin_size = gr.Slider(label=\"Grid margins (px)\", minimum=0, maximum=500, value=0, step=2, elem_id=self.elem_id(\"margin_size\"))\r\n\r\n return [put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size]\r\n\r\n def run(self, p, put_at_start, different_seeds, prompt_type, variations_delimiter, margin_size):\r\n modules.processing.fix_seed(p)\r\n # Raise error if promp type is not positive or negative\r\n if prompt_type not in [\"positive\", \"negative\"]:\r\n raise ValueError(f\"Unknown prompt type {prompt_type}\")\r\n # Raise error if variations delimiter is not comma or space\r\n if variations_delimiter not in [\"comma\", \"space\"]:\r\n raise ValueError(f\"Unknown variations delimiter {variations_delimiter}\")\r\n\r\n prompt = p.prompt if prompt_type == \"positive\" else p.negative_prompt\r\n original_prompt = prompt[0] if type(prompt) == list else prompt\r\n positive_prompt = p.prompt[0] if type(p.prompt) == list else p.prompt\r\n\r\n delimiter = \", \" if variations_delimiter == \"comma\" else \" \"\r\n\r\n all_prompts = []\r\n prompt_matrix_parts = original_prompt.split(\"|\")\r\n combination_count = 2 ** (len(prompt_matrix_parts) - 1)\r\n for combination_num in range(combination_count):\r\n selected_prompts = [text.strip().strip(',') for n, text in enumerate(prompt_matrix_parts[1:]) if combination_num & (1 << n)]\r\n\r\n if put_at_start:\r\n selected_prompts = selected_prompts + [prompt_matrix_parts[0]]\r\n else:\r\n selected_prompts = [prompt_matrix_parts[0]] + selected_prompts\r\n\r\n all_prompts.append(delimiter.join(selected_prompts))\r\n\r\n p.n_iter = math.ceil(len(all_prompts) / p.batch_size)\r\n p.do_not_save_grid = True\r\n\r\n print(f\"Prompt matrix will create {len(all_prompts)} images using a total of {p.n_iter} batches.\")\r\n\r\n if prompt_type == \"positive\":\r\n p.prompt = all_prompts\r\n else:\r\n p.negative_prompt = all_prompts\r\n p.seed = [p.seed + (i if different_seeds else 0) for i in range(len(all_prompts))]\r\n p.prompt_for_display = positive_prompt\r\n processed = process_images(p)\r\n\r\n grid = images.image_grid(processed.images, p.batch_size, rows=1 << ((len(prompt_matrix_parts) - 1) // 2))\r\n grid = images.draw_prompt_matrix(grid, p.width, p.height, prompt_matrix_parts, margin_size)\r\n processed.images.insert(0, grid)\r\n processed.index_of_first_image = 1\r\n processed.infotexts.insert(0, processed.infotexts[0])\r\n\r\n if opts.grid_save:\r\n images.save_image(processed.images[0], p.outpath_grids, \"prompt_matrix\", extension=opts.grid_format, prompt=original_prompt, seed=processed.seed, grid=True, p=p)\r\n\r\n return processed\r\n", "path": "scripts/prompt_matrix.py"}]} | 2,857 | 219 |
gh_patches_debug_29583 | rasdani/github-patches | git_diff | Parsl__parsl-2301 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove IPyParallel executor
**Is your feature request related to a problem? Please describe.**
The parsl ipp code isn't really maintained any more. As I try to tidy up some stuff in the test suite I find myself spending more time than I'd like debugging what is happening inside the IPP parsl executor, while at the same time the folklore is "don't use IPP".
**Describe the solution you'd like**
I'd like to remove all IPP tests from the test suite now. This will inevitably lead to the ipp code rotting fast, and so in addition the IPP executor should probably be removed in its entirety.
**Describe alternatives you've considered**
An alternative is to put developer effort into maintaining IPP as a supported executor.
**Additional context**
The folklore on slack seems to be "don't use IPP". This feature request turns that folklore into reality.
I invite comment in support or against from @kylechard @yadudoc @annawoodard especially as regards how this would affect existing users who have not moved to htex (if any?).
see #1328
</issue>
<code>
[start of parsl/executors/ipp_controller.py]
1 class Controller():
2 """This stub exists to issue a more helpful warning about the IPyParallel
3 executor being removed from parsl some time after v0.9.
4
5 It can eventually be removed entirely - perhaps after v0.10
6 """
7 def __init__(self, *args, **kwargs):
8 raise RuntimeError("The IPyParallel executor has been removed from parsl")
9
[end of parsl/executors/ipp_controller.py]
[start of parsl/executors/__init__.py]
1 from parsl.executors.threads import ThreadPoolExecutor
2 from parsl.executors.ipp import IPyParallelExecutor
3 from parsl.executors.workqueue.executor import WorkQueueExecutor
4 from parsl.executors.high_throughput.executor import HighThroughputExecutor
5 from parsl.executors.extreme_scale.executor import ExtremeScaleExecutor
6 from parsl.executors.low_latency.executor import LowLatencyExecutor
7 from parsl.executors.flux.executor import FluxExecutor
8
9 __all__ = ['IPyParallelExecutor',
10 'ThreadPoolExecutor',
11 'HighThroughputExecutor',
12 'ExtremeScaleExecutor',
13 'LowLatencyExecutor',
14 'WorkQueueExecutor',
15 'FluxExecutor']
16
[end of parsl/executors/__init__.py]
[start of parsl/executors/ipp.py]
1 from parsl.executors.base import ParslExecutor
2
3
4 class IPyParallelExecutor(ParslExecutor):
5 """This stub exists to issue a more helpful warning about the IPyParallel
6 executor being removed from parsl some time after v0.9.
7
8 It can eventually be removed entirely - perhaps after v0.10
9 """
10
11 def __new__(*args, **kwargs):
12 raise RuntimeError("The IPyParallel executor has been removed from parsl")
13
[end of parsl/executors/ipp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsl/executors/__init__.py b/parsl/executors/__init__.py
--- a/parsl/executors/__init__.py
+++ b/parsl/executors/__init__.py
@@ -1,13 +1,11 @@
from parsl.executors.threads import ThreadPoolExecutor
-from parsl.executors.ipp import IPyParallelExecutor
from parsl.executors.workqueue.executor import WorkQueueExecutor
from parsl.executors.high_throughput.executor import HighThroughputExecutor
from parsl.executors.extreme_scale.executor import ExtremeScaleExecutor
from parsl.executors.low_latency.executor import LowLatencyExecutor
from parsl.executors.flux.executor import FluxExecutor
-__all__ = ['IPyParallelExecutor',
- 'ThreadPoolExecutor',
+__all__ = ['ThreadPoolExecutor',
'HighThroughputExecutor',
'ExtremeScaleExecutor',
'LowLatencyExecutor',
diff --git a/parsl/executors/ipp.py b/parsl/executors/ipp.py
deleted file mode 100644
--- a/parsl/executors/ipp.py
+++ /dev/null
@@ -1,12 +0,0 @@
-from parsl.executors.base import ParslExecutor
-
-
-class IPyParallelExecutor(ParslExecutor):
- """This stub exists to issue a more helpful warning about the IPyParallel
- executor being removed from parsl some time after v0.9.
-
- It can eventually be removed entirely - perhaps after v0.10
- """
-
- def __new__(*args, **kwargs):
- raise RuntimeError("The IPyParallel executor has been removed from parsl")
diff --git a/parsl/executors/ipp_controller.py b/parsl/executors/ipp_controller.py
deleted file mode 100644
--- a/parsl/executors/ipp_controller.py
+++ /dev/null
@@ -1,8 +0,0 @@
-class Controller():
- """This stub exists to issue a more helpful warning about the IPyParallel
- executor being removed from parsl some time after v0.9.
-
- It can eventually be removed entirely - perhaps after v0.10
- """
- def __init__(self, *args, **kwargs):
- raise RuntimeError("The IPyParallel executor has been removed from parsl")
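For anyone still carrying an `IPyParallelExecutor` configuration, a sketch of the kind of replacement config the maintainers point to; the parameter values are placeholders, and real deployments need a provider matching their site:

```python
from parsl.config import Config
from parsl.executors import HighThroughputExecutor

config = Config(
    executors=[
        HighThroughputExecutor(
            label="htex_local",  # placeholder label
            max_workers=4,       # tune for the target machine
        )
    ]
)
# parsl.load(config) would then replace the old IPP-based setup
```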
| {"golden_diff": "diff --git a/parsl/executors/__init__.py b/parsl/executors/__init__.py\n--- a/parsl/executors/__init__.py\n+++ b/parsl/executors/__init__.py\n@@ -1,13 +1,11 @@\n from parsl.executors.threads import ThreadPoolExecutor\n-from parsl.executors.ipp import IPyParallelExecutor\n from parsl.executors.workqueue.executor import WorkQueueExecutor\n from parsl.executors.high_throughput.executor import HighThroughputExecutor\n from parsl.executors.extreme_scale.executor import ExtremeScaleExecutor\n from parsl.executors.low_latency.executor import LowLatencyExecutor\n from parsl.executors.flux.executor import FluxExecutor\n \n-__all__ = ['IPyParallelExecutor',\n- 'ThreadPoolExecutor',\n+__all__ = ['ThreadPoolExecutor',\n 'HighThroughputExecutor',\n 'ExtremeScaleExecutor',\n 'LowLatencyExecutor',\ndiff --git a/parsl/executors/ipp.py b/parsl/executors/ipp.py\ndeleted file mode 100644\n--- a/parsl/executors/ipp.py\n+++ /dev/null\n@@ -1,12 +0,0 @@\n-from parsl.executors.base import ParslExecutor\n-\n-\n-class IPyParallelExecutor(ParslExecutor):\n- \"\"\"This stub exists to issue a more helpful warning about the IPyParallel\n- executor being removed from parsl some time after v0.9.\n-\n- It can eventually be removed entirely - perhaps after v0.10\n- \"\"\"\n-\n- def __new__(*args, **kwargs):\n- raise RuntimeError(\"The IPyParallel executor has been removed from parsl\")\ndiff --git a/parsl/executors/ipp_controller.py b/parsl/executors/ipp_controller.py\ndeleted file mode 100644\n--- a/parsl/executors/ipp_controller.py\n+++ /dev/null\n@@ -1,8 +0,0 @@\n-class Controller():\n- \"\"\"This stub exists to issue a more helpful warning about the IPyParallel\n- executor being removed from parsl some time after v0.9.\n-\n- It can eventually be removed entirely - perhaps after v0.10\n- \"\"\"\n- def __init__(self, *args, **kwargs):\n- raise RuntimeError(\"The IPyParallel executor has been removed from parsl\")\n", "issue": "Remove IPyParallel executor\n**Is your feature request related to a problem? Please describe.**\r\nThe parsl ipp code isn't really maintained any more. As I try to tidy up some stuff in the test suite I find myself spending more time than I'd like to debugging what is happening inside the IPP parsl executor, while at the same time the folklore is \"don't use IPP\".\r\n\r\n**Describe the solution you'd like**\r\nI'd like to remove all IPP tests from the test suite now. This will inevitably lead to the ipp code rotting fast, and so in addition the IPP executor should probably be removed in its entirely.\r\n\r\n**Describe alternatives you've considered**\r\nAn alternative is to put developer effort into maintaining IPP as a supported executor.\r\n\r\n**Additional context**\r\nThe folklore on slack seems to be \"don't use IPP\". 
This feature request turns that folklore into reality.\r\n\r\nI invite comment in support or against from @kylechard @yadudoc @annawoodard especially as regards how this would affect existing users who have not moved to htex (if any?).\r\n\r\nsee #1328 \n", "before_files": [{"content": "class Controller():\n \"\"\"This stub exists to issue a more helpful warning about the IPyParallel\n executor being removed from parsl some time after v0.9.\n\n It can eventually be removed entirely - perhaps after v0.10\n \"\"\"\n def __init__(self, *args, **kwargs):\n raise RuntimeError(\"The IPyParallel executor has been removed from parsl\")\n", "path": "parsl/executors/ipp_controller.py"}, {"content": "from parsl.executors.threads import ThreadPoolExecutor\nfrom parsl.executors.ipp import IPyParallelExecutor\nfrom parsl.executors.workqueue.executor import WorkQueueExecutor\nfrom parsl.executors.high_throughput.executor import HighThroughputExecutor\nfrom parsl.executors.extreme_scale.executor import ExtremeScaleExecutor\nfrom parsl.executors.low_latency.executor import LowLatencyExecutor\nfrom parsl.executors.flux.executor import FluxExecutor\n\n__all__ = ['IPyParallelExecutor',\n 'ThreadPoolExecutor',\n 'HighThroughputExecutor',\n 'ExtremeScaleExecutor',\n 'LowLatencyExecutor',\n 'WorkQueueExecutor',\n 'FluxExecutor']\n", "path": "parsl/executors/__init__.py"}, {"content": "from parsl.executors.base import ParslExecutor\n\n\nclass IPyParallelExecutor(ParslExecutor):\n \"\"\"This stub exists to issue a more helpful warning about the IPyParallel\n executor being removed from parsl some time after v0.9.\n\n It can eventually be removed entirely - perhaps after v0.10\n \"\"\"\n\n def __new__(*args, **kwargs):\n raise RuntimeError(\"The IPyParallel executor has been removed from parsl\")\n", "path": "parsl/executors/ipp.py"}]} | 1,190 | 516 |
gh_patches_debug_29758 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1644 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
🪲 mabel partitioning bug
### Thank you for taking the time to report a problem with Opteryx.
_To help us to respond to your request we ask that you try to provide the below detail about the bug._
**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._
**Expected behaviour** _A clear and concise description of what you expected to happen._
**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._
~~~sql
~~~
**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._
</issue>
<code>
[start of opteryx/__version__.py]
1 __build__ = 486
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 15
31 _revision = 0
32 _status = VersionStatus.BETA
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
[end of opteryx/__version__.py]
[start of opteryx/managers/schemes/mabel_partitions.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import concurrent.futures
14 import datetime
15 import os
16 from typing import Callable
17 from typing import List
18 from typing import Optional
19
20 from opteryx.exceptions import DataError
21 from opteryx.managers.schemes import BasePartitionScheme
22 from opteryx.utils.file_decoders import DATA_EXTENSIONS
23
24 OS_SEP = os.sep
25
26
27 class UnsupportedSegementationError(DataError):
28 """Exception raised for unsupported segmentations."""
29
30 def __init__(self, dataset: str, segments: set = None):
31 self.dataset = dataset
32 self.segments = segments
33 message = f"'{dataset}' contains unsupported segmentation (`{'`, `'.join(segments)}`), only 'by_hour' segments are supported."
34 super().__init__(message)
35
36
37 def extract_prefix(path, prefix):
38 start_index = path.find(prefix)
39 if start_index == -1:
40 return None
41 end_index = path.find(OS_SEP, start_index)
42 if end_index == -1:
43 return None
44 return path[start_index:end_index]
45
46
47 def is_complete_and_not_invalid(blobs, as_at):
48 # Directly initialize flags to False.
49 complete = False
50 ignore = False
51
52 # Search suffix strings are constant for each call, so define them once.
53 complete_suffix = f"{as_at}{OS_SEP}frame.complete"
54 invalid_suffix = f"{as_at}{OS_SEP}frame.ignore"
55
56 # Iterate over blobs once, checking conditions.
57 for blob in blobs:
58
59 if complete_suffix in blob:
60 complete = True
61 if complete and ignore:
62 break
63 elif invalid_suffix in blob:
64 ignore = True
65 if complete and ignore:
66 break
67
68 return complete and not ignore
69
70
71 class MabelPartitionScheme(BasePartitionScheme):
72 """
73 Handle reading data using the Mabel partition scheme.
74 """
75
76 def get_blobs_in_partition(
77 self,
78 *,
79 blob_list_getter: Callable,
80 prefix: str,
81 start_date: Optional[datetime.datetime],
82 end_date: Optional[datetime.datetime],
83 ) -> List[str]:
84 """filter the blobs acording to the chosen scheme"""
85
86 midnight = datetime.datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
87 by_label = f"{OS_SEP}by_"
88 as_at_label = f"{OS_SEP}as_at"
89
90 def _inner(*, timestamp):
91 date_path = f"{prefix}{OS_SEP}year_{timestamp.year:04d}{OS_SEP}month_{timestamp.month:02d}{OS_SEP}day_{timestamp.day:02d}"
92 hour_label = f"{OS_SEP}by_hour{OS_SEP}hour={timestamp.hour:02d}/"
93
94 # Call your method to get the list of blob names
95 blob_names = blob_list_getter(prefix=date_path)
96 if len(blob_names) == 0:
97 return []
98
99 control_blobs: List[str] = []
100 data_blobs: List[str] = []
101 segments = set()
102 as_ats = set()
103 hour_blobs: List[str] = []
104
105 for blob in blob_names:
106 extension = os.path.splitext(blob)[1]
107 if extension not in DATA_EXTENSIONS:
108 control_blobs.append(blob)
109 else:
110 data_blobs.append(blob)
111 # Collect hour specific blobs, but only data blobs
112 if hour_label in blob:
113 hour_blobs.append(blob)
114
115 # Collect segments
116 if by_label in blob:
117 segments.add(extract_prefix(blob, "by_"))
118
119 if as_at_label in blob:
120 as_ats.add(extract_prefix(blob, "as_at_"))
121
122 if hour_blobs:
123 data_blobs = hour_blobs
124
125 if segments - {"by_hour", None}:
126 raise UnsupportedSegementationError(dataset=prefix, segments=segments)
127
128 as_at = None
129 as_at_list = sorted(as_ats)
130
131 # Keep popping from as_ats until a valid frame is found
132 while as_at_list:
133 as_at = as_at_list.pop()
134 if as_at is None:
135 continue
136 if is_complete_and_not_invalid(control_blobs, as_at):
137 data_blobs = [blob for blob in data_blobs if as_at in blob]
138 break
139 data_blobs = [blob for blob in data_blobs if as_at not in blob]
140 as_at = None
141
142 return data_blobs
143
144 start_date = start_date or midnight
145 end_date = end_date or midnight.replace(hour=23, minute=59)
146
147 found = set()
148
149 # Use a ThreadPoolExecutor to parallelize fetching blobs for each hour
150 with concurrent.futures.ThreadPoolExecutor() as executor:
151 # Prepare a list of future tasks
152 futures = [
153 executor.submit(_inner, **{"timestamp": ts})
154 for ts in self.hourly_timestamps(start_date, end_date)
155 ]
156 # Wait for all futures to complete and collect results
157 for future in concurrent.futures.as_completed(futures):
158 found.update(future.result())
159
160 return sorted(found)
161
[end of opteryx/managers/schemes/mabel_partitions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 486
+__build__ = 487
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
diff --git a/opteryx/managers/schemes/mabel_partitions.py b/opteryx/managers/schemes/mabel_partitions.py
--- a/opteryx/managers/schemes/mabel_partitions.py
+++ b/opteryx/managers/schemes/mabel_partitions.py
@@ -98,7 +98,6 @@
control_blobs: List[str] = []
data_blobs: List[str] = []
- segments = set()
as_ats = set()
hour_blobs: List[str] = []
@@ -112,19 +111,19 @@
if hour_label in blob:
hour_blobs.append(blob)
+ if hour_blobs:
+ data_blobs = hour_blobs
+
+ for blob in blob_names:
# Collect segments
if by_label in blob:
- segments.add(extract_prefix(blob, "by_"))
+ segment = extract_prefix(blob, "by_")
+ if segment != "by_hour":
+ raise UnsupportedSegementationError(dataset=prefix, segments=[segment])
if as_at_label in blob:
as_ats.add(extract_prefix(blob, "as_at_"))
- if hour_blobs:
- data_blobs = hour_blobs
-
- if segments - {"by_hour", None}:
- raise UnsupportedSegementationError(dataset=prefix, segments=segments)
-
as_at = None
as_at_list = sorted(as_ats)
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 486\n+__build__ = 487\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\ndiff --git a/opteryx/managers/schemes/mabel_partitions.py b/opteryx/managers/schemes/mabel_partitions.py\n--- a/opteryx/managers/schemes/mabel_partitions.py\n+++ b/opteryx/managers/schemes/mabel_partitions.py\n@@ -98,7 +98,6 @@\n \n control_blobs: List[str] = []\n data_blobs: List[str] = []\n- segments = set()\n as_ats = set()\n hour_blobs: List[str] = []\n \n@@ -112,19 +111,19 @@\n if hour_label in blob:\n hour_blobs.append(blob)\n \n+ if hour_blobs:\n+ data_blobs = hour_blobs\n+\n+ for blob in blob_names:\n # Collect segments\n if by_label in blob:\n- segments.add(extract_prefix(blob, \"by_\"))\n+ segment = extract_prefix(blob, \"by_\")\n+ if segment != \"by_hour\":\n+ raise UnsupportedSegementationError(dataset=prefix, segments=[segment])\n \n if as_at_label in blob:\n as_ats.add(extract_prefix(blob, \"as_at_\"))\n \n- if hour_blobs:\n- data_blobs = hour_blobs\n-\n- if segments - {\"by_hour\", None}:\n- raise UnsupportedSegementationError(dataset=prefix, segments=segments)\n-\n as_at = None\n as_at_list = sorted(as_ats)\n", "issue": "\ud83e\udeb2 mabel partitioning bug\n### Thank you for taking the time to report a problem with Opteryx.\r\n_To help us to respond to your request we ask that you try to provide the below detail about the bug._\r\n\r\n**Describe the bug** _A clear and specific description of what the bug is. What the error, incorrect or unexpected behaviour was._\r\n\r\n\r\n**Expected behaviour** _A clear and concise description of what you expected to happen._\r\n\r\n\r\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\n\r\n~~~\r\n\r\n**Additional context** _Add any other context about the problem here, for example what you have done to try to diagnose or workaround the problem._\r\n\n", "before_files": [{"content": "__build__ = 486\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 15\n_revision = 0\n_status = VersionStatus.BETA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License 
at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport concurrent.futures\nimport datetime\nimport os\nfrom typing import Callable\nfrom typing import List\nfrom typing import Optional\n\nfrom opteryx.exceptions import DataError\nfrom opteryx.managers.schemes import BasePartitionScheme\nfrom opteryx.utils.file_decoders import DATA_EXTENSIONS\n\nOS_SEP = os.sep\n\n\nclass UnsupportedSegementationError(DataError):\n \"\"\"Exception raised for unsupported segmentations.\"\"\"\n\n def __init__(self, dataset: str, segments: set = None):\n self.dataset = dataset\n self.segments = segments\n message = f\"'{dataset}' contains unsupported segmentation (`{'`, `'.join(segments)}`), only 'by_hour' segments are supported.\"\n super().__init__(message)\n\n\ndef extract_prefix(path, prefix):\n start_index = path.find(prefix)\n if start_index == -1:\n return None\n end_index = path.find(OS_SEP, start_index)\n if end_index == -1:\n return None\n return path[start_index:end_index]\n\n\ndef is_complete_and_not_invalid(blobs, as_at):\n # Directly initialize flags to False.\n complete = False\n ignore = False\n\n # Search suffix strings are constant for each call, so define them once.\n complete_suffix = f\"{as_at}{OS_SEP}frame.complete\"\n invalid_suffix = f\"{as_at}{OS_SEP}frame.ignore\"\n\n # Iterate over blobs once, checking conditions.\n for blob in blobs:\n\n if complete_suffix in blob:\n complete = True\n if complete and ignore:\n break\n elif invalid_suffix in blob:\n ignore = True\n if complete and ignore:\n break\n\n return complete and not ignore\n\n\nclass MabelPartitionScheme(BasePartitionScheme):\n \"\"\"\n Handle reading data using the Mabel partition scheme.\n \"\"\"\n\n def get_blobs_in_partition(\n self,\n *,\n blob_list_getter: Callable,\n prefix: str,\n start_date: Optional[datetime.datetime],\n end_date: Optional[datetime.datetime],\n ) -> List[str]:\n \"\"\"filter the blobs acording to the chosen scheme\"\"\"\n\n midnight = datetime.datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)\n by_label = f\"{OS_SEP}by_\"\n as_at_label = f\"{OS_SEP}as_at\"\n\n def _inner(*, timestamp):\n date_path = f\"{prefix}{OS_SEP}year_{timestamp.year:04d}{OS_SEP}month_{timestamp.month:02d}{OS_SEP}day_{timestamp.day:02d}\"\n hour_label = f\"{OS_SEP}by_hour{OS_SEP}hour={timestamp.hour:02d}/\"\n\n # Call your method to get the list of blob names\n blob_names = blob_list_getter(prefix=date_path)\n if len(blob_names) == 0:\n return []\n\n control_blobs: List[str] = []\n data_blobs: List[str] = []\n segments = set()\n as_ats = set()\n hour_blobs: List[str] = []\n\n for blob in blob_names:\n extension = os.path.splitext(blob)[1]\n if extension not in DATA_EXTENSIONS:\n control_blobs.append(blob)\n else:\n data_blobs.append(blob)\n # Collect hour specific blobs, but only data blobs\n if hour_label in blob:\n hour_blobs.append(blob)\n\n # Collect segments\n if by_label in blob:\n segments.add(extract_prefix(blob, \"by_\"))\n\n if as_at_label in blob:\n as_ats.add(extract_prefix(blob, \"as_at_\"))\n\n if hour_blobs:\n data_blobs = hour_blobs\n\n if segments - {\"by_hour\", None}:\n raise UnsupportedSegementationError(dataset=prefix, segments=segments)\n\n as_at = None\n 
as_at_list = sorted(as_ats)\n\n # Keep popping from as_ats until a valid frame is found\n while as_at_list:\n as_at = as_at_list.pop()\n if as_at is None:\n continue\n if is_complete_and_not_invalid(control_blobs, as_at):\n data_blobs = [blob for blob in data_blobs if as_at in blob]\n break\n data_blobs = [blob for blob in data_blobs if as_at not in blob]\n as_at = None\n\n return data_blobs\n\n start_date = start_date or midnight\n end_date = end_date or midnight.replace(hour=23, minute=59)\n\n found = set()\n\n # Use a ThreadPoolExecutor to parallelize fetching blobs for each hour\n with concurrent.futures.ThreadPoolExecutor() as executor:\n # Prepare a list of future tasks\n futures = [\n executor.submit(_inner, **{\"timestamp\": ts})\n for ts in self.hourly_timestamps(start_date, end_date)\n ]\n # Wait for all futures to complete and collect results\n for future in concurrent.futures.as_completed(futures):\n found.update(future.result())\n\n return sorted(found)\n", "path": "opteryx/managers/schemes/mabel_partitions.py"}]} | 2,673 | 421 |
gh_patches_debug_11808 | rasdani/github-patches | git_diff | avocado-framework__avocado-4549 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Avocado crash without output
When I run nrunner with the wrong test path avocado crashes without any information. It can be reproduced by:
`avocado run --test-runner=nrunner --ignore-missing-references -- test_utils_partition.py`
</issue>
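The report above appears to reduce to the resolved suite being empty (every reference was ignored), at which point the code path exits without printing anything. Below is a minimal sketch of the kind of guard that would surface a message instead; the helper name, message wording, and choice of exit code are assumptions for illustration, not the project's actual fix:

```python
import sys

from avocado.core import exit_codes
from avocado.core.output import LOG_UI
from avocado.core.suite import TestSuite


def build_suite_or_warn(config):
    """Hedged sketch: warn and exit visibly when no tests were resolved."""
    suite = TestSuite.from_config(config, name='')
    if suite.size == 0:
        # Without a message here the process just exits and the user sees nothing.
        LOG_UI.warning("Suite is empty: no tests were resolved from the given "
                       "references (common with --ignore-missing-references).")
        sys.exit(exit_codes.AVOCADO_FAIL)
    return suite
```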
<code>
[start of avocado/plugins/run.py]
1 # This program is free software; you can redistribute it and/or modify
2 # it under the terms of the GNU General Public License as published by
3 # the Free Software Foundation; either version 2 of the License, or
4 # (at your option) any later version.
5 #
6 # This program is distributed in the hope that it will be useful,
7 # but WITHOUT ANY WARRANTY; without even the implied warranty of
8 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
9 #
10 # See LICENSE for more details.
11 #
12 # Copyright: Red Hat Inc. 2013-2014
13 # Author: Ruda Moura <[email protected]>
14
15 """
16 Base Test Runner Plugins.
17 """
18
19 import argparse
20 import sys
21
22 from avocado.core import exit_codes, job, loader, parser_common_args
23 from avocado.core.output import LOG_UI
24 from avocado.core.plugin_interfaces import CLICmd, Init
25 from avocado.core.settings import settings
26 from avocado.core.suite import TestSuite, TestSuiteError
27 from avocado.utils import process
28
29
30 class RunInit(Init):
31
32 name = 'run'
33 description = 'Initializes the run options'
34
35 def initialize(self):
36 help_msg = ('Defines the order of iterating through test suite '
37 'and test variants')
38 settings.register_option(section='run',
39 key='execution_order',
40 choices=('tests-per-variant',
41 'variants-per-test'),
42 default='variants-per-test',
43 help_msg=help_msg)
44
45
46 class Run(CLICmd):
47
48 """
49 Implements the avocado 'run' subcommand
50 """
51
52 name = 'run'
53 description = ("Runs one or more tests (native test, test alias, binary "
54 "or script)")
55
56 @staticmethod
57 def _test_parameter(string):
58 param_name_value = string.split('=', 1)
59 if len(param_name_value) < 2:
60 msg = ('Invalid --test-parameter option: "%s". Valid option must '
61 'be a "NAME=VALUE" like expression' % string)
62 raise argparse.ArgumentTypeError(msg)
63 return param_name_value
64
65 def configure(self, parser):
66 """
67 Add the subparser for the run action.
68
69 :param parser: Main test runner parser.
70 """
71 parser = super(Run, self).configure(parser)
72
73 help_msg = 'List of test references (aliases or paths)'
74 settings.register_option(section='run',
75 key='references',
76 key_type=list,
77 default=[],
78 nargs='*',
79 metavar='TEST_REFERENCE',
80 parser=parser,
81 help_msg=help_msg,
82 positional_arg=True)
83
84 help_msg = ('Parameter name and value to pass to all tests. This is '
85 'only applicable when not using a varianter plugin. '
86 'This option format must be given in the NAME=VALUE '
87 'format, and may be given any number of times, or per '
88 'parameter.')
89 settings.register_option(section='run',
90 key='test_parameters',
91 action='append',
92 default=[],
93 key_type=self._test_parameter,
94 metavar="NAME_VALUE",
95 help_msg=help_msg,
96 parser=parser,
97 long_arg='--test-parameter',
98 short_arg='-p')
99
100 help_msg = ('Selects the runner implementation from one of the '
101 'installed and active implementations. You can run '
102 '"avocado plugins" and find the list of valid runners '
103 'under the "Plugins that run test suites on a job '
104 '(runners) section. Defaults to "runner", which is '
105 'the conventional and traditional runner.')
106 settings.register_option(section='run',
107 key='test_runner',
108 default='runner',
109 help_msg=help_msg,
110 parser=parser,
111 long_arg='--test-runner')
112
113 help_msg = ('Instead of running the test only list them and log '
114 'their params.')
115 settings.register_option(section='run.dry_run',
116 key='enabled',
117 default=False,
118 key_type=bool,
119 help_msg=help_msg,
120 parser=parser,
121 short_arg='-d',
122 long_arg='--dry-run')
123
124 help_msg = ('Do not automatically clean up temporary directories '
125 'used by dry-run')
126 settings.register_option(section='run.dry_run',
127 key='no_cleanup',
128 help_msg=help_msg,
129 default=False,
130 key_type=bool,
131 parser=parser,
132 long_arg='--dry-run-no-cleanup')
133
134 help_msg = ('Forces the use of a particular job ID. Used internally '
135 'when interacting with an avocado server. You should not '
136 'use this option unless you know exactly what you\'re '
137 'doing')
138 settings.register_option(section='run',
139 key='unique_job_id',
140 default=None,
141 help_msg=help_msg,
142 parser=parser,
143 long_arg='--force-job-id')
144
145 help_msg = 'Forces to use of an alternate job results directory.'
146 settings.register_option(section='run',
147 key='results_dir',
148 default=None,
149 metavar='DIRECTORY',
150 help_msg=help_msg,
151 parser=parser,
152 long_arg='--job-results-dir')
153
154 help_msg = ('Categorizes this within a directory with the same name, '
155 'by creating a link to the job result directory')
156 settings.register_option(section='run',
157 key='job_category',
158 help_msg=help_msg,
159 parser=parser,
160 default=None,
161 metavar='CATEGORY',
162 long_arg='--job-category')
163
164 settings.add_argparser_to_option(namespace='job.run.timeout',
165 metavar='SECONDS',
166 parser=parser,
167 long_arg='--job-timeout')
168
169 help_msg = 'Enable the job interruption on first failed test.'
170 settings.register_option(section='run',
171 key='failfast',
172 default=False,
173 key_type=bool,
174 action='store_true',
175 help_msg=help_msg,
176 parser=parser,
177 long_arg='--failfast')
178
179 help_msg = 'Keep job temporary files (useful for avocado debugging).'
180 settings.register_option(section='run',
181 key='keep_tmp',
182 default=False,
183 key_type=bool,
184 action='store_true',
185 help_msg=help_msg,
186 parser=parser,
187 long_arg='--keep-tmp')
188
189 help_msg = ('Force the job execution, even if some of the test '
190 'references are not resolved to tests. "on" and '
191 '"off" will be deprecated soon.')
192 settings.register_option(section='run',
193 key='ignore_missing_references',
194 default=False,
195 key_type=bool,
196 help_msg=help_msg,
197 parser=parser,
198 long_arg='--ignore-missing-references')
199
200 settings.add_argparser_to_option(namespace='sysinfo.collect.enabled',
201 parser=parser,
202 action='store_false',
203 long_arg='--disable-sysinfo')
204
205 settings.add_argparser_to_option('run.execution_order',
206 parser=parser,
207 long_arg='--execution-order')
208
209 parser.output = parser.add_argument_group('output and result format')
210
211 help_msg = ('Store given logging STREAMs in '
212 '"$JOB_RESULTS_DIR/$STREAM.$LEVEL."')
213 settings.register_option(section='run',
214 key='store_logging_stream',
215 nargs='+',
216 help_msg=help_msg,
217 default=[],
218 metavar='STREAM[:LEVEL]',
219 key_type=list,
220 parser=parser,
221 long_arg='--store-logging-stream')
222
223 help_msg = ('Logs the possible data directories for each test. This '
224 'is helpful when writing new tests and not being sure '
225 'where to put data files. Look for "Test data '
226 'directories" in your test log')
227 settings.register_option(section='run',
228 key='log_test_data_directories',
229 default=False,
230 key_type=bool,
231 help_msg=help_msg,
232 parser=parser,
233 long_arg='--log-test-data-directories')
234
235 out_check = parser.add_argument_group('output check arguments')
236
237 help_msg = ('Record the output produced by each test (from stdout '
238 'and stderr) into both the current executing result '
239 'and into reference files. Reference files are used on '
240 'subsequent runs to determine if the test produced the '
241 'expected output or not, and the current executing result '
242 'is used to check against a previously recorded reference '
243 'file. Valid values: "none" (to explicitly disable all '
244 'recording) "stdout" (to record standard output *only*), '
245 '"stderr" (to record standard error *only*), "both" (to '
246 'record standard output and error in separate files), '
247 '"combined" (for standard output and error in a single '
248 'file). "all" is also a valid but deprecated option that '
249 'is a synonym of "both".')
250 settings.register_option(section='run',
251 key='output_check_record',
252 help_msg=help_msg,
253 choices=('none', 'stdout', 'stderr',
254 'both', 'combined', 'all'),
255 parser=out_check,
256 default=None,
257 long_arg='--output-check-record')
258
259 help_msg = ('Disables test output (stdout/stderr) check. If this '
260 'option is given, no output will be checked, even if '
261 'there are reference files present for the test.')
262 settings.register_option(section='run',
263 key='output_check',
264 default=True,
265 key_type=bool,
266 action='store_false',
267 help_msg=help_msg,
268 parser=out_check,
269 long_arg='--disable-output-check')
270
271 loader.add_loader_options(parser, 'run')
272 parser_common_args.add_tag_filter_args(parser)
273
274 def run(self, config):
275 """
276 Run test modules or simple tests.
277
278 :param config: Configuration received from command line parser and
279 possibly other sources.
280 :type config: dict
281 """
282 if 'run.output_check_record' in config:
283 check_record = config.get('run.output_check_record')
284 process.OUTPUT_CHECK_RECORD_MODE = check_record
285
286 unique_job_id = config.get('run.unique_job_id')
287 if unique_job_id is not None:
288 try:
289 int(unique_job_id, 16)
290 if len(unique_job_id) != 40:
291 raise ValueError
292 except ValueError:
293 LOG_UI.error('Unique Job ID needs to be a 40 digit hex number')
294 sys.exit(exit_codes.AVOCADO_FAIL)
295
296 try:
297 suite = TestSuite.from_config(config, name='')
298 if suite.size == 0:
299 sys.exit(exit_codes.AVOCADO_JOB_FAIL)
300 except TestSuiteError as err:
301 LOG_UI.error(err)
302 sys.exit(exit_codes.AVOCADO_JOB_FAIL)
303 with job.Job(config, [suite]) as job_instance:
304 return job_instance.run()
305
[end of avocado/plugins/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/avocado/plugins/run.py b/avocado/plugins/run.py
--- a/avocado/plugins/run.py
+++ b/avocado/plugins/run.py
@@ -296,7 +296,11 @@
try:
suite = TestSuite.from_config(config, name='')
if suite.size == 0:
- sys.exit(exit_codes.AVOCADO_JOB_FAIL)
+ msg = ("Suite is empty. There is no tests to run. This usually "
+ "happens when you pass --ignore-missing-references and "
+ "there is no more references to process.")
+ LOG_UI.warning(msg)
+ sys.exit(exit_codes.AVOCADO_FAIL)
except TestSuiteError as err:
LOG_UI.error(err)
sys.exit(exit_codes.AVOCADO_JOB_FAIL)
| {"golden_diff": "diff --git a/avocado/plugins/run.py b/avocado/plugins/run.py\n--- a/avocado/plugins/run.py\n+++ b/avocado/plugins/run.py\n@@ -296,7 +296,11 @@\n try:\n suite = TestSuite.from_config(config, name='')\n if suite.size == 0:\n- sys.exit(exit_codes.AVOCADO_JOB_FAIL)\n+ msg = (\"Suite is empty. There is no tests to run. This usually \"\n+ \"happens when you pass --ignore-missing-references and \"\n+ \"there is no more references to process.\")\n+ LOG_UI.warning(msg)\n+ sys.exit(exit_codes.AVOCADO_FAIL)\n except TestSuiteError as err:\n LOG_UI.error(err)\n sys.exit(exit_codes.AVOCADO_JOB_FAIL)\n", "issue": "Avocado crash without output\nWhen I run nrunner with the wrong test path avocado crashes without any information. It can be reproduced by:\r\n\r\n`avocado run --test-runner=nrunner --ignore-missing-references -- test_utils_partition.py`\n", "before_files": [{"content": "# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.\n#\n# See LICENSE for more details.\n#\n# Copyright: Red Hat Inc. 2013-2014\n# Author: Ruda Moura <[email protected]>\n\n\"\"\"\nBase Test Runner Plugins.\n\"\"\"\n\nimport argparse\nimport sys\n\nfrom avocado.core import exit_codes, job, loader, parser_common_args\nfrom avocado.core.output import LOG_UI\nfrom avocado.core.plugin_interfaces import CLICmd, Init\nfrom avocado.core.settings import settings\nfrom avocado.core.suite import TestSuite, TestSuiteError\nfrom avocado.utils import process\n\n\nclass RunInit(Init):\n\n name = 'run'\n description = 'Initializes the run options'\n\n def initialize(self):\n help_msg = ('Defines the order of iterating through test suite '\n 'and test variants')\n settings.register_option(section='run',\n key='execution_order',\n choices=('tests-per-variant',\n 'variants-per-test'),\n default='variants-per-test',\n help_msg=help_msg)\n\n\nclass Run(CLICmd):\n\n \"\"\"\n Implements the avocado 'run' subcommand\n \"\"\"\n\n name = 'run'\n description = (\"Runs one or more tests (native test, test alias, binary \"\n \"or script)\")\n\n @staticmethod\n def _test_parameter(string):\n param_name_value = string.split('=', 1)\n if len(param_name_value) < 2:\n msg = ('Invalid --test-parameter option: \"%s\". Valid option must '\n 'be a \"NAME=VALUE\" like expression' % string)\n raise argparse.ArgumentTypeError(msg)\n return param_name_value\n\n def configure(self, parser):\n \"\"\"\n Add the subparser for the run action.\n\n :param parser: Main test runner parser.\n \"\"\"\n parser = super(Run, self).configure(parser)\n\n help_msg = 'List of test references (aliases or paths)'\n settings.register_option(section='run',\n key='references',\n key_type=list,\n default=[],\n nargs='*',\n metavar='TEST_REFERENCE',\n parser=parser,\n help_msg=help_msg,\n positional_arg=True)\n\n help_msg = ('Parameter name and value to pass to all tests. This is '\n 'only applicable when not using a varianter plugin. 
'\n 'This option format must be given in the NAME=VALUE '\n 'format, and may be given any number of times, or per '\n 'parameter.')\n settings.register_option(section='run',\n key='test_parameters',\n action='append',\n default=[],\n key_type=self._test_parameter,\n metavar=\"NAME_VALUE\",\n help_msg=help_msg,\n parser=parser,\n long_arg='--test-parameter',\n short_arg='-p')\n\n help_msg = ('Selects the runner implementation from one of the '\n 'installed and active implementations. You can run '\n '\"avocado plugins\" and find the list of valid runners '\n 'under the \"Plugins that run test suites on a job '\n '(runners) section. Defaults to \"runner\", which is '\n 'the conventional and traditional runner.')\n settings.register_option(section='run',\n key='test_runner',\n default='runner',\n help_msg=help_msg,\n parser=parser,\n long_arg='--test-runner')\n\n help_msg = ('Instead of running the test only list them and log '\n 'their params.')\n settings.register_option(section='run.dry_run',\n key='enabled',\n default=False,\n key_type=bool,\n help_msg=help_msg,\n parser=parser,\n short_arg='-d',\n long_arg='--dry-run')\n\n help_msg = ('Do not automatically clean up temporary directories '\n 'used by dry-run')\n settings.register_option(section='run.dry_run',\n key='no_cleanup',\n help_msg=help_msg,\n default=False,\n key_type=bool,\n parser=parser,\n long_arg='--dry-run-no-cleanup')\n\n help_msg = ('Forces the use of a particular job ID. Used internally '\n 'when interacting with an avocado server. You should not '\n 'use this option unless you know exactly what you\\'re '\n 'doing')\n settings.register_option(section='run',\n key='unique_job_id',\n default=None,\n help_msg=help_msg,\n parser=parser,\n long_arg='--force-job-id')\n\n help_msg = 'Forces to use of an alternate job results directory.'\n settings.register_option(section='run',\n key='results_dir',\n default=None,\n metavar='DIRECTORY',\n help_msg=help_msg,\n parser=parser,\n long_arg='--job-results-dir')\n\n help_msg = ('Categorizes this within a directory with the same name, '\n 'by creating a link to the job result directory')\n settings.register_option(section='run',\n key='job_category',\n help_msg=help_msg,\n parser=parser,\n default=None,\n metavar='CATEGORY',\n long_arg='--job-category')\n\n settings.add_argparser_to_option(namespace='job.run.timeout',\n metavar='SECONDS',\n parser=parser,\n long_arg='--job-timeout')\n\n help_msg = 'Enable the job interruption on first failed test.'\n settings.register_option(section='run',\n key='failfast',\n default=False,\n key_type=bool,\n action='store_true',\n help_msg=help_msg,\n parser=parser,\n long_arg='--failfast')\n\n help_msg = 'Keep job temporary files (useful for avocado debugging).'\n settings.register_option(section='run',\n key='keep_tmp',\n default=False,\n key_type=bool,\n action='store_true',\n help_msg=help_msg,\n parser=parser,\n long_arg='--keep-tmp')\n\n help_msg = ('Force the job execution, even if some of the test '\n 'references are not resolved to tests. 
\"on\" and '\n '\"off\" will be deprecated soon.')\n settings.register_option(section='run',\n key='ignore_missing_references',\n default=False,\n key_type=bool,\n help_msg=help_msg,\n parser=parser,\n long_arg='--ignore-missing-references')\n\n settings.add_argparser_to_option(namespace='sysinfo.collect.enabled',\n parser=parser,\n action='store_false',\n long_arg='--disable-sysinfo')\n\n settings.add_argparser_to_option('run.execution_order',\n parser=parser,\n long_arg='--execution-order')\n\n parser.output = parser.add_argument_group('output and result format')\n\n help_msg = ('Store given logging STREAMs in '\n '\"$JOB_RESULTS_DIR/$STREAM.$LEVEL.\"')\n settings.register_option(section='run',\n key='store_logging_stream',\n nargs='+',\n help_msg=help_msg,\n default=[],\n metavar='STREAM[:LEVEL]',\n key_type=list,\n parser=parser,\n long_arg='--store-logging-stream')\n\n help_msg = ('Logs the possible data directories for each test. This '\n 'is helpful when writing new tests and not being sure '\n 'where to put data files. Look for \"Test data '\n 'directories\" in your test log')\n settings.register_option(section='run',\n key='log_test_data_directories',\n default=False,\n key_type=bool,\n help_msg=help_msg,\n parser=parser,\n long_arg='--log-test-data-directories')\n\n out_check = parser.add_argument_group('output check arguments')\n\n help_msg = ('Record the output produced by each test (from stdout '\n 'and stderr) into both the current executing result '\n 'and into reference files. Reference files are used on '\n 'subsequent runs to determine if the test produced the '\n 'expected output or not, and the current executing result '\n 'is used to check against a previously recorded reference '\n 'file. Valid values: \"none\" (to explicitly disable all '\n 'recording) \"stdout\" (to record standard output *only*), '\n '\"stderr\" (to record standard error *only*), \"both\" (to '\n 'record standard output and error in separate files), '\n '\"combined\" (for standard output and error in a single '\n 'file). \"all\" is also a valid but deprecated option that '\n 'is a synonym of \"both\".')\n settings.register_option(section='run',\n key='output_check_record',\n help_msg=help_msg,\n choices=('none', 'stdout', 'stderr',\n 'both', 'combined', 'all'),\n parser=out_check,\n default=None,\n long_arg='--output-check-record')\n\n help_msg = ('Disables test output (stdout/stderr) check. 
If this '\n 'option is given, no output will be checked, even if '\n 'there are reference files present for the test.')\n settings.register_option(section='run',\n key='output_check',\n default=True,\n key_type=bool,\n action='store_false',\n help_msg=help_msg,\n parser=out_check,\n long_arg='--disable-output-check')\n\n loader.add_loader_options(parser, 'run')\n parser_common_args.add_tag_filter_args(parser)\n\n def run(self, config):\n \"\"\"\n Run test modules or simple tests.\n\n :param config: Configuration received from command line parser and\n possibly other sources.\n :type config: dict\n \"\"\"\n if 'run.output_check_record' in config:\n check_record = config.get('run.output_check_record')\n process.OUTPUT_CHECK_RECORD_MODE = check_record\n\n unique_job_id = config.get('run.unique_job_id')\n if unique_job_id is not None:\n try:\n int(unique_job_id, 16)\n if len(unique_job_id) != 40:\n raise ValueError\n except ValueError:\n LOG_UI.error('Unique Job ID needs to be a 40 digit hex number')\n sys.exit(exit_codes.AVOCADO_FAIL)\n\n try:\n suite = TestSuite.from_config(config, name='')\n if suite.size == 0:\n sys.exit(exit_codes.AVOCADO_JOB_FAIL)\n except TestSuiteError as err:\n LOG_UI.error(err)\n sys.exit(exit_codes.AVOCADO_JOB_FAIL)\n with job.Job(config, [suite]) as job_instance:\n return job_instance.run()\n", "path": "avocado/plugins/run.py"}]} | 3,694 | 178 |
gh_patches_debug_12424 | rasdani/github-patches | git_diff | bridgecrewio__checkov-619 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Azure NSG Security rule loop error
I got the following error when I execute to scan Azure NSG terraform file.
_ERROR:checkov.terraform.checks.resource.azure.NSGRuleSSHAccessRestricted:Failed to run check: Ensure that SSH access is restricted from the internet for configuration: {'name': ['${var.nsg_name}'], 'resource_group_name': ['${azurerm_resource_group.rg.name}'], 'location': ['${var.location}'], 'security_rule': ['${[for rule in var.security_rules : {\'name\': \'${rule.name}\', \'priority\': \'${rule.priority}\', \'source_address_prefix\': \'${lookup(rule,"source_address_prefixes",[]) == [] ? lookup(rule,"source_address_prefix",var.nsg_default_source_address_prefix) : ""}\', \'source_address_prefixes\': \'${lookup(rule,"source_address_prefixes",[])}\', \'access\': \'${lookup(rule,"access",var.nsg_default_access)}\', \'destination_port_range\': \'${lookup(rule,"destination_port_ranges",[]) == [] ? lookup(rule,"destination_port_range",var.nsg_default_destination_port_range) : ""}\', \'destination_port_ranges\': \'${lookup(rule,"destination_port_ranges",[])}\', \'direction\': \'${lookup(rule,"direction",var.nsg_default_direction)}\', \'protocol\': \'${lookup(rule,"protocol",var.nsg_default_protocol)}\', \'source_port_range\': \'${lookup(rule,"source_port_range",var.nsg_default_source_port_range)}\', \'description\': \'\', \'destination_address_prefix\': \'*\', \'destination_address_prefixes\': [], \'destination_application_security_group_ids\': [], \'source_application_security_group_ids\': [], \'source_port_ranges\': []}]}'], 'tags': ['${var.tags}']} at file: /aaa.tf
2020-09-11 11:23:01,213 [MainThread ] [ERROR] Failed to run check: Ensure that SSH access is restricted from the internet for configuration: {'name': ['${var.nsg_name}'], 'resource_group_name': ['${azurerm_resource_group.rg.name}'], 'location': ['${var.location}'], 'security_rule': ['${[for rule in var.security_rules : {\'name\': \'${rule.name}\', \'priority\': \'${rule.priority}\', \'source_address_prefix\': \'${lookup(rule,"source_address_prefixes",[]) == [] ? lookup(rule,"source_address_prefix",var.nsg_default_source_address_prefix) : ""}\', \'source_address_prefixes\': \'${lookup(rule,"source_address_prefixes",[])}\', \'access\': \'${lookup(rule,"access",var.nsg_default_access)}\', \'destination_port_range\': \'${lookup(rule,"destination_port_ranges",[]) == [] ? lookup(rule,"destination_port_range",var.nsg_default_destination_port_range) : ""}\', \'destination_port_ranges\': \'${lookup(rule,"destination_port_ranges",[])}\', \'direction\': \'${lookup(rule,"direction",var.nsg_default_direction)}\', \'protocol\': \'${lookup(rule,"protocol",var.nsg_default_protocol)}\', \'source_port_range\': \'${lookup(rule,"source_port_range",var.nsg_default_source_port_range)}\', \'description\': \'\', \'destination_address_prefix\': \'*\', \'destination_address_prefixes\': [], \'destination_application_security_group_ids\': [], \'source_application_security_group_ids\': [], \'source_port_ranges\': []}]}'], 'tags': ['${var.tags}']} at file: /aaa.tf
Traceback (most recent call last):
File "/usr/local/bin/checkov", line 5, in <module>
run()
File "/usr/local/lib/python3.8/site-packages/checkov/main.py", line 65, in run
scan_reports = runner_registry.run(root_folder=root_folder, external_checks_dir=external_checks_dir,
File "/usr/local/lib/python3.8/site-packages/checkov/common/runners/runner_registry.py", line 27, in run
scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
File "/usr/local/lib/python3.8/site-packages/checkov/terraform/runner.py", line 52, in run
self.check_tf_definition(report, root_folder, runner_filter)
File "/usr/local/lib/python3.8/site-packages/checkov/terraform/runner.py", line 94, in check_tf_definition
self.run_block(definition[block_type], definitions_context, full_file_path, report, scanned_file,
File "/usr/local/lib/python3.8/site-packages/checkov/terraform/runner.py", line 117, in run_block
results = registry.scan(scanned_file, entity, skipped_checks, runner_filter)
File "/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check_registry.py", line 104, in scan
result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)
File "/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check_registry.py", line 110, in run_check
result = check.run(scanned_file=scanned_file, entity_configuration=entity_configuration,
File "/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check.py", line 58, in run
raise e
File "/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check.py", line 39, in run
check_result['result'] = self.scan_entity_conf(entity_configuration, entity_type)
File "/usr/local/lib/python3.8/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 16, in scan_entity_conf
return self.scan_resource_conf(conf, entity_type)
File "/usr/local/lib/python3.8/site-packages/checkov/terraform/checks/resource/base_resource_check.py", line 28, in wrapper
return wrapped(self, conf)
File "/usr/local/lib/python3.8/site-packages/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py", line 37, in scan_resource_conf
if 'access' in rule_conf and rule_conf['access'][0] == "Allow":
TypeError: string indices must be integers_
**Scanned NSG:**
resource "azurerm_network_security_group" "mynsg" {
name = var.nsg_name
resource_group_name = azurerm_resource_group.rg.name
location = var.location
security_rule = [for rule in var.security_rules : {
name = rule.name
priority = rule.priority
source_address_prefix = lookup(rule, "source_address_prefixes", []) == [] ? lookup(rule, "source_address_prefix", var.nsg_default_source_address_prefix) : ""
source_address_prefixes = lookup(rule, "source_address_prefixes", [])
access = lookup(rule, "access", var.nsg_default_access)
destination_port_range = lookup(rule, "destination_port_ranges", []) == [] ? lookup(rule, "destination_port_range", var.nsg_default_destination_port_range) : ""
destination_port_ranges = lookup(rule, "destination_port_ranges", [])
direction = lookup(rule, "direction", var.nsg_default_direction)
protocol = lookup(rule, "protocol", var.nsg_default_protocol)
source_port_range = lookup(rule, "source_port_range", var.nsg_default_source_port_range)
description = ""
destination_address_prefix = "*"
destination_address_prefixes = []
destination_application_security_group_ids = []
source_application_security_group_ids = []
source_port_ranges = []
}
]
}
</issue>
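The config dump in the traceback shows `security_rule` arriving as a single unresolved string (the `${[for rule in ...]}` expression appears not to be evaluated at parse time), so iterating over it hands the check plain strings instead of rule dicts and the `rule_conf['access'][0]` lookup fails. Below is a minimal sketch of a type guard that tolerates such input; the helper name and the decision to report `UNKNOWN` are illustrative assumptions:

```python
from checkov.common.models.enums import CheckResult


def scan_security_rules(rule_confs):
    """Hedged sketch: skip evaluation when security_rule was not parsed into dicts."""
    for rule_conf in rule_confs:
        if not isinstance(rule_conf, dict):
            # An unrendered for-expression (or any other non-dict value) cannot be inspected.
            return CheckResult.UNKNOWN
        # ... the existing access/direction/protocol/port/source checks would run here ...
    return CheckResult.PASSED
```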
<code>
[start of checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py]
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceCheck
3 from checkov.common.util.type_forcers import force_list
4 import re
5
6 INTERNET_ADDRESSES = ["*", "0.0.0.0", "<nw>/0", "/0", "internet", "any"]
7 PORT_RANGE = re.compile('\d+-\d+')
8
9
10 class NSGRulePortAccessRestricted(BaseResourceCheck):
11 def __init__(self, name, check_id, port):
12 supported_resources = ['azurerm_network_security_rule', 'azurerm_network_security_group']
13 categories = [CheckCategories.NETWORKING]
14 super().__init__(name=name, id=check_id, categories=categories, supported_resources=supported_resources)
15 self.port = port
16
17 def is_port_in_range(self, conf):
18 ports = force_list(conf['destination_port_range'][0])
19 for range in ports:
20 if re.match(PORT_RANGE, range):
21 start, end = int(range.split('-')[0]), int(range.split('-')[1])
22 if start <= self.port <= end:
23 return True
24 if range in [str(self.port), '*']:
25 return True
26 return False
27
28 def scan_resource_conf(self, conf):
29 if "dynamic" in conf:
30 return CheckResult.UNKNOWN
31
32 rule_confs = [conf]
33 if 'security_rule' in conf:
34 rule_confs = conf['security_rule']
35
36 for rule_conf in rule_confs:
37 if 'access' in rule_conf and rule_conf['access'][0] == "Allow":
38 if 'direction' in rule_conf and rule_conf['direction'][0] == "Inbound":
39 if 'protocol' in rule_conf and rule_conf['protocol'][0].upper() == 'TCP':
40 if 'destination_port_range' in rule_conf and self.is_port_in_range(rule_conf):
41 if 'source_address_prefix' in rule_conf and rule_conf['source_address_prefix'][0] in INTERNET_ADDRESSES:
42 return CheckResult.FAILED
43 return CheckResult.PASSED
44
45
[end of checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py b/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py
--- a/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py
+++ b/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py
@@ -34,6 +34,8 @@
rule_confs = conf['security_rule']
for rule_conf in rule_confs:
+ if not isinstance(rule_conf, dict):
+ return CheckResult.UNKNOWN
if 'access' in rule_conf and rule_conf['access'][0] == "Allow":
if 'direction' in rule_conf and rule_conf['direction'][0] == "Inbound":
if 'protocol' in rule_conf and rule_conf['protocol'][0].upper() == 'TCP':
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py b/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py\n--- a/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py\n+++ b/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py\n@@ -34,6 +34,8 @@\n rule_confs = conf['security_rule']\n \n for rule_conf in rule_confs:\n+ if not isinstance(rule_conf, dict):\n+ return CheckResult.UNKNOWN\n if 'access' in rule_conf and rule_conf['access'][0] == \"Allow\":\n if 'direction' in rule_conf and rule_conf['direction'][0] == \"Inbound\":\n if 'protocol' in rule_conf and rule_conf['protocol'][0].upper() == 'TCP':\n", "issue": "Azure NSG Security rule loop error\nI got the following error when I execute to scan Azure NSG terraform file.\r\n\r\n_ERROR:checkov.terraform.checks.resource.azure.NSGRuleSSHAccessRestricted:Failed to run check: Ensure that SSH access is restricted from the internet for configuration: {'name': ['${var.nsg_name}'], 'resource_group_name': ['${azurerm_resource_group.rg.name}'], 'location': ['${var.location}'], 'security_rule': ['${[for rule in var.security_rules : {\\'name\\': \\'${rule.name}\\', \\'priority\\': \\'${rule.priority}\\', \\'source_address_prefix\\': \\'${lookup(rule,\"source_address_prefixes\",[]) == [] ? lookup(rule,\"source_address_prefix\",var.nsg_default_source_address_prefix) : \"\"}\\', \\'source_address_prefixes\\': \\'${lookup(rule,\"source_address_prefixes\",[])}\\', \\'access\\': \\'${lookup(rule,\"access\",var.nsg_default_access)}\\', \\'destination_port_range\\': \\'${lookup(rule,\"destination_port_ranges\",[]) == [] ? lookup(rule,\"destination_port_range\",var.nsg_default_destination_port_range) : \"\"}\\', \\'destination_port_ranges\\': \\'${lookup(rule,\"destination_port_ranges\",[])}\\', \\'direction\\': \\'${lookup(rule,\"direction\",var.nsg_default_direction)}\\', \\'protocol\\': \\'${lookup(rule,\"protocol\",var.nsg_default_protocol)}\\', \\'source_port_range\\': \\'${lookup(rule,\"source_port_range\",var.nsg_default_source_port_range)}\\', \\'description\\': \\'\\', \\'destination_address_prefix\\': \\'*\\', \\'destination_address_prefixes\\': [], \\'destination_application_security_group_ids\\': [], \\'source_application_security_group_ids\\': [], \\'source_port_ranges\\': []}]}'], 'tags': ['${var.tags}']} at file: /aaa.tf\r\n2020-09-11 11:23:01,213 [MainThread ] [ERROR] Failed to run check: Ensure that SSH access is restricted from the internet for configuration: {'name': ['${var.nsg_name}'], 'resource_group_name': ['${azurerm_resource_group.rg.name}'], 'location': ['${var.location}'], 'security_rule': ['${[for rule in var.security_rules : {\\'name\\': \\'${rule.name}\\', \\'priority\\': \\'${rule.priority}\\', \\'source_address_prefix\\': \\'${lookup(rule,\"source_address_prefixes\",[]) == [] ? lookup(rule,\"source_address_prefix\",var.nsg_default_source_address_prefix) : \"\"}\\', \\'source_address_prefixes\\': \\'${lookup(rule,\"source_address_prefixes\",[])}\\', \\'access\\': \\'${lookup(rule,\"access\",var.nsg_default_access)}\\', \\'destination_port_range\\': \\'${lookup(rule,\"destination_port_ranges\",[]) == [] ? 
lookup(rule,\"destination_port_range\",var.nsg_default_destination_port_range) : \"\"}\\', \\'destination_port_ranges\\': \\'${lookup(rule,\"destination_port_ranges\",[])}\\', \\'direction\\': \\'${lookup(rule,\"direction\",var.nsg_default_direction)}\\', \\'protocol\\': \\'${lookup(rule,\"protocol\",var.nsg_default_protocol)}\\', \\'source_port_range\\': \\'${lookup(rule,\"source_port_range\",var.nsg_default_source_port_range)}\\', \\'description\\': \\'\\', \\'destination_address_prefix\\': \\'*\\', \\'destination_address_prefixes\\': [], \\'destination_application_security_group_ids\\': [], \\'source_application_security_group_ids\\': [], \\'source_port_ranges\\': []}]}'], 'tags': ['${var.tags}']} at file: /aaa.tf\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/checkov\", line 5, in <module>\r\n run()\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/main.py\", line 65, in run\r\n scan_reports = runner_registry.run(root_folder=root_folder, external_checks_dir=external_checks_dir,\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/common/runners/runner_registry.py\", line 27, in run\r\n scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/terraform/runner.py\", line 52, in run\r\n self.check_tf_definition(report, root_folder, runner_filter)\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/terraform/runner.py\", line 94, in check_tf_definition\r\n self.run_block(definition[block_type], definitions_context, full_file_path, report, scanned_file,\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/terraform/runner.py\", line 117, in run_block\r\n results = registry.scan(scanned_file, entity, skipped_checks, runner_filter)\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check_registry.py\", line 104, in scan\r\n result = self.run_check(check, entity_configuration, entity_name, entity_type, scanned_file, skip_info)\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check_registry.py\", line 110, in run_check\r\n result = check.run(scanned_file=scanned_file, entity_configuration=entity_configuration,\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check.py\", line 58, in run\r\n raise e\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/common/checks/base_check.py\", line 39, in run\r\n check_result['result'] = self.scan_entity_conf(entity_configuration, entity_type)\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 16, in scan_entity_conf\r\n return self.scan_resource_conf(conf, entity_type)\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/terraform/checks/resource/base_resource_check.py\", line 28, in wrapper\r\n return wrapped(self, conf)\r\n File \"/usr/local/lib/python3.8/site-packages/checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py\", line 37, in scan_resource_conf\r\n if 'access' in rule_conf and rule_conf['access'][0] == \"Allow\":\r\nTypeError: string indices must be integers_\r\n\r\n**Scanned NSG:**\r\n\r\nresource \"azurerm_network_security_group\" \"mynsg\" {\r\n name = var.nsg_name\r\n resource_group_name = azurerm_resource_group.rg.name\r\n location = var.location\r\n\r\n security_rule = [for rule in var.security_rules : {\r\n name = rule.name\r\n priority = rule.priority\r\n source_address_prefix = lookup(rule, \"source_address_prefixes\", []) == [] ? 
lookup(rule, \"source_address_prefix\", var.nsg_default_source_address_prefix) : \"\"\r\n source_address_prefixes = lookup(rule, \"source_address_prefixes\", [])\r\n access = lookup(rule, \"access\", var.nsg_default_access)\r\n destination_port_range = lookup(rule, \"destination_port_ranges\", []) == [] ? lookup(rule, \"destination_port_range\", var.nsg_default_destination_port_range) : \"\"\r\n destination_port_ranges = lookup(rule, \"destination_port_ranges\", [])\r\n direction = lookup(rule, \"direction\", var.nsg_default_direction)\r\n protocol = lookup(rule, \"protocol\", var.nsg_default_protocol)\r\n source_port_range = lookup(rule, \"source_port_range\", var.nsg_default_source_port_range)\r\n description = \"\"\r\n destination_address_prefix = \"*\"\r\n destination_address_prefixes = []\r\n destination_application_security_group_ids = []\r\n source_application_security_group_ids = []\r\n source_port_ranges = []\r\n }\r\n ]\r\n}\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceCheck\nfrom checkov.common.util.type_forcers import force_list\nimport re\n\nINTERNET_ADDRESSES = [\"*\", \"0.0.0.0\", \"<nw>/0\", \"/0\", \"internet\", \"any\"]\nPORT_RANGE = re.compile('\\d+-\\d+')\n\n\nclass NSGRulePortAccessRestricted(BaseResourceCheck):\n def __init__(self, name, check_id, port):\n supported_resources = ['azurerm_network_security_rule', 'azurerm_network_security_group']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=check_id, categories=categories, supported_resources=supported_resources)\n self.port = port\n\n def is_port_in_range(self, conf):\n ports = force_list(conf['destination_port_range'][0])\n for range in ports:\n if re.match(PORT_RANGE, range):\n start, end = int(range.split('-')[0]), int(range.split('-')[1])\n if start <= self.port <= end:\n return True\n if range in [str(self.port), '*']:\n return True\n return False\n\n def scan_resource_conf(self, conf):\n if \"dynamic\" in conf:\n return CheckResult.UNKNOWN\n \n rule_confs = [conf]\n if 'security_rule' in conf:\n rule_confs = conf['security_rule']\n\n for rule_conf in rule_confs:\n if 'access' in rule_conf and rule_conf['access'][0] == \"Allow\":\n if 'direction' in rule_conf and rule_conf['direction'][0] == \"Inbound\":\n if 'protocol' in rule_conf and rule_conf['protocol'][0].upper() == 'TCP':\n if 'destination_port_range' in rule_conf and self.is_port_in_range(rule_conf):\n if 'source_address_prefix' in rule_conf and rule_conf['source_address_prefix'][0] in INTERNET_ADDRESSES:\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n", "path": "checkov/terraform/checks/resource/azure/NSGRulePortAccessRestricted.py"}]} | 2,771 | 192 |
gh_patches_debug_66765 | rasdani/github-patches | git_diff | pymedusa__Medusa-7069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
1. Ignore words/filter - does not accept "."? 2. Show page is not showing globally defined filters/"words"
After upgrading from 0.30 to 0.34 -
1. It seems that for my previous ignore words that had "." in them, the "." is now interpreted as a wildcard :( See log below.
**To Reproduce**
1. define 2.0 as an ignore word
2. search names that have 2**x**0 (210 in my example below)
3. see the log with the wrong ignore...
**Expected behavior**
Filter words should be used as-is, with no partial regex interpretation; alternatively, add a note that quoting/escaping is needed and explain the special symbols.
**Medusa (please complete the following information):**
Medusa Configuration
Medusa Info: Branch: master
Commit: 74c3f124fb599ad923e23f46fc320778b8b4217c
Version: 0.3.4
Database: 44.14
Python Version: 2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)]
SSL Version: OpenSSL 1.0.2j 26 Sep 2016
OS: Windows-10-10.0.17763
Locale: en_US.cp1255
User: RAFI-PC$
Program Folder: D:\Program Files\Medusa\Medusa
Config File: D:\Program Files\Medusa\Data\config.ini
Database File: D:\Program Files\Medusa\Data\main.db
Cache Folder: D:\Program Files\Medusa\Data\cache
Log Folder: D:\Program Files\Medusa\Data\Logs
Arguments:
--nolaunch --port=8081 --datadir=D:\Program Files\Medusa\Data
**Debug logs (at least 50 lines):**
```
2019-08-19 16:15:25 INFO SEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Ignoring Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rartv] based on ignored words filter: 2.0
2019-08-19 16:15:25 INFO SEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Torrent meets the file size requirements.file Name:Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rartv] Size:3629018180
2019-08-19 16:15:25 INFO SEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Quality of Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rartv] is 1080p WEB-DL
2019-08-19 16:15:25 INFO SEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Quality of Fear.the.Walking.Dead.S05E10.1080p.WEB.h264-TBS[rartv] is 1080p WEB-DL
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb[TGx] is 1080p WEB-DL
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb is 1080p WEB-DL
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEB-DL.AAC2.0.H.264-NTb.chs.eng.mp4 is 1080p WEB-DL
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.1080p.WEB.x264-worldmkv.mkv is 1080p WEB-DL
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb[rarbg] is 1080p WEB-DL
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb[TGx] is 1080p WEB-DL
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Ignoring Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rarbg] based on ignored words filter: 2.0
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Torrent meets the file size requirements.file Name:Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rarbg] Size:3629121536
2019-08-19 16:15:21 INFO SEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rarbg] is 1080p WEB-DL
```
</details>
**Additional context**
2. Show's page is now not showing any global filters/words :(
</issue>
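For illustration, a minimal sketch of the behaviour described in the report (not part of the original issue; the word and release name are taken from the log above): `contains_at_least_one_word()` in the listing below interpolates each ignore word into a regular expression without escaping it, so the "." in "2.0" matches any character and "210" is caught as well. Escaping the word, as the accompanying fix does, restores a literal match.

```python
import re

# Illustration only, not Medusa code; values come from the log in the report.
word = "2.0"
name = "Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip"

unescaped = re.compile(r'(^|[\W_])%s($|[\W_])' % word.strip(), re.I)
print(bool(unescaped.search(name)))  # True: "." matches the "1" in "210", so the release is ignored

escaped = re.compile(r'(^|[\W_])%s($|[\W_])' % re.escape(word.strip()), re.I)
print(bool(escaped.search(name)))    # False: only a literal "2.0" would match
```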
<code>
[start of medusa/show/naming.py]
1 # coding=utf-8
2
3 """Series naming helpers for selecting results."""
4
5 from __future__ import unicode_literals
6
7 import fnmatch
8 import logging
9 import os
10 import re
11
12 from medusa import app
13 from medusa.logger.adapters.style import BraceAdapter
14 from medusa.name_parser.parser import InvalidNameException, InvalidShowException, NameParser
15
16 from six import string_types
17
18 log = BraceAdapter(logging.getLogger(__name__))
19 log.logger.addHandler(logging.NullHandler())
20
21
22 result_filters = [
23 '(sub|nfo)fix',
24 '(?<!shomin.)sample',
25 '(dvd)?extras',
26 ]
27
28 if hasattr('General', 'ignore_und_subs') and app.IGNORE_UND_SUBS:
29 result_filters.append('sub(bed|ed|pack|s)')
30
31 if hasattr('General', 'ignored_subs_list') and app.IGNORED_SUBS_LIST:
32 result_filters.append('(' + app.IGNORED_SUBS_LIST.replace(',', '|') + ')sub(bed|ed|s)?')
33
34
35 def contains_words(item, words, strict=True):
36 """
37 Yield words that are contained in an item.
38
39 :param item: item to search for words
40 :param words: iterable of words to search for in item
41 :param strict: exclude substring matches
42 If strict find exact existence of a word in the item but exclude matches
43 where the word is part of a substring. For example `word` would not
44 match 'words' or 'word1'. Regex expressions as words can only
45 be used in strict mode!
46 """
47 log.debug('Searching {item} for {words}. (strict={strict})',
48 {'item': item, 'words': words, 'strict': strict})
49
50 def _strict(_word):
51 # Use a regex to make sure the match is not part of a substring
52 pattern = r'(^|[\W_]){word}($|[\W_])'.format(word=_word)
53 return re.search(pattern, item, re.I)
54
55 def _lenient(_word):
56 # Use string.__contains__ for a quick lenient test
57 return _word in item
58
59 # select strict or lenient method for the test
60 item_contains = _strict if strict else _lenient
61
62 for word in words:
63 if item_contains(word):
64 yield word
65
66
67 def contains_at_least_one_word(name, words):
68 """
69 Filter out results based on filter_words.
70
71 :param name: name to check
72 :param words: string of words separated by a ',' or list of words
73 :return: False if the name doesn't contain any word of words list, or the found word from the list.
74 """
75 if not (name and words):
76 return False
77
78 if isinstance(words, string_types):
79 words = words.split(',')
80 items = [(re.compile(r'(^|[\W_])%s($|[\W_])' % word.strip(), re.I), word.strip()) for word in words]
81 for regexp, word in items:
82 if regexp.search(name):
83 # subs_words = '.dub.' or '.dksub.' or else
84 subs_word = regexp.search(name).group(0)
85 # If word is a regex like 'dub(bed)?' or 'sub(bed|ed|pack|s)'
86 # then return just the matched word: 'dub' and not full regex
87 if word in result_filters:
88 return subs_word.replace('.', '')
89 else:
90 return word
91
92 return False
93
94
95 def filter_bad_releases(name, parse=True):
96 """
97 Filter out non-english and invalid releases by comparing them to the result_filters contents.
98
99 :param parse: parse the name
100 :param name: the release name to check
101 :return: True if the release name is OK, False if it's bad.
102 """
103 try:
104 if parse:
105 NameParser().parse(name)
106 except InvalidNameException as error:
107 log.debug('{0}', error)
108 return False
109 except InvalidShowException:
110 pass
111
112 # if any of the bad strings are in the name then say no
113 word = contains_at_least_one_word(name, result_filters)
114 if word:
115 log.debug('Unwanted scene release: {0}. Contains unwanted word: {1}.'
116 ' Ignoring it', name, word)
117 return False
118 return True
119
120
121 def determine_release_name(dir_name=None, nzb_name=None):
122 """Determine a release name from an nzb and/or folder name."""
123 if nzb_name is not None:
124 log.info('Using nzb_name for release name.')
125 return nzb_name.rpartition('.')[0]
126
127 if dir_name is None:
128 return None
129
130 # try to get the release name from nzb/nfo
131 file_types = ['*.nzb', '*.nfo']
132
133 for search in file_types:
134
135 reg_expr = re.compile(fnmatch.translate(search), re.IGNORECASE)
136 files = [file_name for file_name in os.listdir(dir_name) if
137 os.path.isfile(os.path.join(dir_name, file_name))]
138
139 results = [f for f in files if reg_expr.search(f)]
140
141 if len(results) == 1:
142 found_file = os.path.basename(results[0])
143 found_file = found_file.rpartition('.')[0]
144 if filter_bad_releases(found_file):
145 log.info('Release name ({0}) found from file ({1})',
146 found_file, results[0])
147 return found_file.rpartition('.')[0]
148
149 # If that fails, we try the folder
150 folder = os.path.basename(dir_name)
151 if filter_bad_releases(folder):
152 # NOTE: Multiple failed downloads will change the folder name.
153 # (e.g., appending #s)
154 # Should we handle that?
155 log.debug('Folder name ({0}) appears to be a valid release name.'
156 ' Using it.', folder)
157 return folder
158
159 return None
160
[end of medusa/show/naming.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/medusa/show/naming.py b/medusa/show/naming.py
--- a/medusa/show/naming.py
+++ b/medusa/show/naming.py
@@ -77,7 +77,7 @@
if isinstance(words, string_types):
words = words.split(',')
- items = [(re.compile(r'(^|[\W_])%s($|[\W_])' % word.strip(), re.I), word.strip()) for word in words]
+ items = [(re.compile(r'(^|[\W_])%s($|[\W_])' % re.escape(word.strip()), re.I), word.strip()) for word in words]
for regexp, word in items:
if regexp.search(name):
# subs_words = '.dub.' or '.dksub.' or else
| {"golden_diff": "diff --git a/medusa/show/naming.py b/medusa/show/naming.py\n--- a/medusa/show/naming.py\n+++ b/medusa/show/naming.py\n@@ -77,7 +77,7 @@\n \n if isinstance(words, string_types):\n words = words.split(',')\n- items = [(re.compile(r'(^|[\\W_])%s($|[\\W_])' % word.strip(), re.I), word.strip()) for word in words]\n+ items = [(re.compile(r'(^|[\\W_])%s($|[\\W_])' % re.escape(word.strip()), re.I), word.strip()) for word in words]\n for regexp, word in items:\n if regexp.search(name):\n # subs_words = '.dub.' or '.dksub.' or else\n", "issue": "1. Ignore words/filter - does not except \".\" ? 2. Show page is not showing global defined filters/\"words\" \nAfter upgrading from 0.30 to 0.34 - \r\n1. Seems like my previous ignore words that had \".\" in them now the \".\" is interpreted as wild card :( See log below.\r\n\r\n**To Reproduce**\r\n1. define 2.0 as ignore\r\n2. search names that has 2**x**0 (210 in my example below)\r\n3. see log with wrong ignore... \r\n\r\n**Expected behavior**\r\nFilter word should be used as is, no semi- regex interpretation, or add comment to use \"\" if needed, and explain the special symbols.\r\n\r\n**Medusa (please complete the following information):**\r\n\r\n\r\nMedusa Configuration\r\nMedusa Info: \tBranch: master\r\nCommit: 74c3f124fb599ad923e23f46fc320778b8b4217c\r\nVersion: 0.3.4\r\nDatabase: 44.14\r\nPython Version:\t2.7.13 (v2.7.13:a06454b1afa1, Dec 17 2016, 20:42:59) [MSC v.1500 32 bit (Intel)]\r\nSSL Version:\tOpenSSL 1.0.2j 26 Sep 2016\r\nOS:\tWindows-10-10.0.17763\r\nLocale:\ten_US.cp1255\r\n \t \r\n \t \r\nUser:\tRAFI-PC$\r\nProgram Folder:\tD:\\Program Files\\Medusa\\Medusa\r\nConfig File:\tD:\\Program Files\\Medusa\\Data\\config.ini\r\nDatabase File:\tD:\\Program Files\\Medusa\\Data\\main.db\r\nCache Folder:\tD:\\Program Files\\Medusa\\Data\\cache\r\nLog Folder:\tD:\\Program Files\\Medusa\\Data\\Logs\r\nArguments:\t\r\n--nolaunch --port=8081 --datadir=D:\\Program Files\\Medusa\\Data\r\n\r\n**Debug logs (at least 50 lines):**\r\n```\r\n\r\n2019-08-19 16:15:25 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Ignoring Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rartv] based on ignored words filter: 2.0\r\n2019-08-19 16:15:25 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Torrent meets the file size requirements.file Name:Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rartv] Size:3629018180\r\n2019-08-19 16:15:25 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Quality of Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rartv] is 1080p WEB-DL\r\n2019-08-19 16:15:25 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Rarbg :: [74c3f12] Quality of Fear.the.Walking.Dead.S05E10.1080p.WEB.h264-TBS[rartv] is 1080p WEB-DL\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb[TGx] is 1080p WEB-DL\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb is 1080p WEB-DL\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEB-DL.AAC2.0.H.264-NTb.chs.eng.mp4 is 1080p WEB-DL\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.1080p.WEB.x264-worldmkv.mkv is 1080p WEB-DL\r\n2019-08-19 16:15:21 
INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb[rarbg] is 1080p WEB-DL\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of The.Handmaids.Tale.S03E13.Mayday.1080p.HULU.WEBRip.AAC2.0.H264-NTb[TGx] is 1080p WEB-DL\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Ignoring Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rarbg] based on ignored words filter: 2.0\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Torrent meets the file size requirements.file Name:Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rarbg] Size:3629121536\r\n2019-08-19 16:15:21 INFO\tSEARCHQUEUE-DAILY-SEARCH :: Torrentz2 :: [74c3f12] Quality of Fear.the.Walking.Dead.S05E10.210.Words.Per.Minute.1080p.AMZN.WEBRip.DDP5.1.x264-NTG[rarbg] is 1080p WEB-DL\r\n\r\n\r\n\r\n```\r\n</details>\r\n\r\n**Additional context**\r\n\r\n2. Show's page is now not showing any global filters/words :(\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Series naming helpers for selecting results.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport fnmatch\nimport logging\nimport os\nimport re\n\nfrom medusa import app\nfrom medusa.logger.adapters.style import BraceAdapter\nfrom medusa.name_parser.parser import InvalidNameException, InvalidShowException, NameParser\n\nfrom six import string_types\n\nlog = BraceAdapter(logging.getLogger(__name__))\nlog.logger.addHandler(logging.NullHandler())\n\n\nresult_filters = [\n '(sub|nfo)fix',\n '(?<!shomin.)sample',\n '(dvd)?extras',\n]\n\nif hasattr('General', 'ignore_und_subs') and app.IGNORE_UND_SUBS:\n result_filters.append('sub(bed|ed|pack|s)')\n\nif hasattr('General', 'ignored_subs_list') and app.IGNORED_SUBS_LIST:\n result_filters.append('(' + app.IGNORED_SUBS_LIST.replace(',', '|') + ')sub(bed|ed|s)?')\n\n\ndef contains_words(item, words, strict=True):\n \"\"\"\n Yield words that are contained in an item.\n\n :param item: item to search for words\n :param words: iterable of words to search for in item\n :param strict: exclude substring matches\n If strict find exact existence of a word in the item but exclude matches\n where the word is part of a substring. For example `word` would not\n match 'words' or 'word1'. Regex expressions as words can only\n be used in strict mode!\n \"\"\"\n log.debug('Searching {item} for {words}. 
(strict={strict})',\n {'item': item, 'words': words, 'strict': strict})\n\n def _strict(_word):\n # Use a regex to make sure the match is not part of a substring\n pattern = r'(^|[\\W_]){word}($|[\\W_])'.format(word=_word)\n return re.search(pattern, item, re.I)\n\n def _lenient(_word):\n # Use string.__contains__ for a quick lenient test\n return _word in item\n\n # select strict or lenient method for the test\n item_contains = _strict if strict else _lenient\n\n for word in words:\n if item_contains(word):\n yield word\n\n\ndef contains_at_least_one_word(name, words):\n \"\"\"\n Filter out results based on filter_words.\n\n :param name: name to check\n :param words: string of words separated by a ',' or list of words\n :return: False if the name doesn't contain any word of words list, or the found word from the list.\n \"\"\"\n if not (name and words):\n return False\n\n if isinstance(words, string_types):\n words = words.split(',')\n items = [(re.compile(r'(^|[\\W_])%s($|[\\W_])' % word.strip(), re.I), word.strip()) for word in words]\n for regexp, word in items:\n if regexp.search(name):\n # subs_words = '.dub.' or '.dksub.' or else\n subs_word = regexp.search(name).group(0)\n # If word is a regex like 'dub(bed)?' or 'sub(bed|ed|pack|s)'\n # then return just the matched word: 'dub' and not full regex\n if word in result_filters:\n return subs_word.replace('.', '')\n else:\n return word\n\n return False\n\n\ndef filter_bad_releases(name, parse=True):\n \"\"\"\n Filter out non-english and invalid releases by comparing them to the result_filters contents.\n\n :param parse: parse the name\n :param name: the release name to check\n :return: True if the release name is OK, False if it's bad.\n \"\"\"\n try:\n if parse:\n NameParser().parse(name)\n except InvalidNameException as error:\n log.debug('{0}', error)\n return False\n except InvalidShowException:\n pass\n\n # if any of the bad strings are in the name then say no\n word = contains_at_least_one_word(name, result_filters)\n if word:\n log.debug('Unwanted scene release: {0}. Contains unwanted word: {1}.'\n ' Ignoring it', name, word)\n return False\n return True\n\n\ndef determine_release_name(dir_name=None, nzb_name=None):\n \"\"\"Determine a release name from an nzb and/or folder name.\"\"\"\n if nzb_name is not None:\n log.info('Using nzb_name for release name.')\n return nzb_name.rpartition('.')[0]\n\n if dir_name is None:\n return None\n\n # try to get the release name from nzb/nfo\n file_types = ['*.nzb', '*.nfo']\n\n for search in file_types:\n\n reg_expr = re.compile(fnmatch.translate(search), re.IGNORECASE)\n files = [file_name for file_name in os.listdir(dir_name) if\n os.path.isfile(os.path.join(dir_name, file_name))]\n\n results = [f for f in files if reg_expr.search(f)]\n\n if len(results) == 1:\n found_file = os.path.basename(results[0])\n found_file = found_file.rpartition('.')[0]\n if filter_bad_releases(found_file):\n log.info('Release name ({0}) found from file ({1})',\n found_file, results[0])\n return found_file.rpartition('.')[0]\n\n # If that fails, we try the folder\n folder = os.path.basename(dir_name)\n if filter_bad_releases(folder):\n # NOTE: Multiple failed downloads will change the folder name.\n # (e.g., appending #s)\n # Should we handle that?\n log.debug('Folder name ({0}) appears to be a valid release name.'\n ' Using it.', folder)\n return folder\n\n return None\n", "path": "medusa/show/naming.py"}]} | 3,972 | 178 |
gh_patches_debug_15651 | rasdani/github-patches | git_diff | explosion__spaCy-1389 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Lemmatizer exceptions: `lemma_strings` get sorted anyway
Lemmatization exceptions have been working inconsistently, even when they are added directly in `corpora/en/wordnet/dict/verb.exc`
The minimal test case of `coping` at https://github.com/explosion/spaCy/issues/389 reveals that, at
https://github.com/explosion/spaCy/blob/master/spacy/lemmatizer.py#L94, the lemmatizer's list of potential forms (where the thing in lemmatizer.exceptions is item [0]) is cast to a `set` (and so loses ordering); then in https://github.com/explosion/spaCy/blob/master/spacy/morphology.pyx#L149, that `set` gets sorted. So lemmatizer exceptions only work if they also come first alphabetically!
I've implemented the fix for this, but I need this issue in order to submit the PR!
One question, though, for @honnibal: this can be fixed either as I did it locally -- return the whole list, with exceptions and then anything that comes back from the lemmatizer's `rules` -- or skip the rules altogether if we used an exception. I think it's more useful downstream if we keep all lemma candidates, even if we're not using them in the default pipeline. But it also seems only destructive to do `sorted(set())` on them!
</issue>
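A small illustration of the ordering problem described above (the candidate forms are assumed for the sake of the example; this is not spaCy source): for `coping`, the exception table contributes `cope` first, but the suffix rules can plausibly also contribute `cop`, and once the candidate list is collapsed to a set and sorted downstream, the alphabetically first form wins.

```python
# Illustration with assumed candidates for "coping"; not spaCy source.
forms = ["cope"]          # from the verb.exc exception table (should win)
forms += ["cop", "cope"]  # plausible rule-derived forms: -ing -> "", -ing -> "e"

candidates = set(forms)        # ordering, and the exception's priority, is lost here
print(sorted(candidates)[0])   # 'cop': the exception only wins if it also sorts first
```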
<code>
[start of spacy/lemmatizer.py]
1 # coding: utf8
2 from __future__ import unicode_literals
3
4 from .symbols import POS, NOUN, VERB, ADJ, PUNCT
5 from .symbols import VerbForm_inf, VerbForm_none, Number_sing, Degree_pos
6
7
8 class Lemmatizer(object):
9 @classmethod
10 def load(cls, path, index=None, exc=None, rules=None):
11 return cls(index or {}, exc or {}, rules or {})
12
13 def __init__(self, index, exceptions, rules):
14 self.index = index
15 self.exc = exceptions
16 self.rules = rules
17
18 def __call__(self, string, univ_pos, morphology=None):
19 if univ_pos == NOUN:
20 univ_pos = 'noun'
21 elif univ_pos == VERB:
22 univ_pos = 'verb'
23 elif univ_pos == ADJ:
24 univ_pos = 'adj'
25 elif univ_pos == PUNCT:
26 univ_pos = 'punct'
27 # See Issue #435 for example of where this logic is requied.
28 if self.is_base_form(univ_pos, morphology):
29 return set([string.lower()])
30 lemmas = lemmatize(string, self.index.get(univ_pos, {}),
31 self.exc.get(univ_pos, {}),
32 self.rules.get(univ_pos, []))
33 return lemmas
34
35 def is_base_form(self, univ_pos, morphology=None):
36 """
37 Check whether we're dealing with an uninflected paradigm, so we can
38 avoid lemmatization entirely.
39 """
40 morphology = {} if morphology is None else morphology
41 others = [key for key in morphology if key not in (POS, 'number', 'pos', 'verbform')]
42 true_morph_key = morphology.get('morph', 0)
43 if univ_pos == 'noun' and morphology.get('Number') == 'sing':
44 return True
45 elif univ_pos == 'verb' and morphology.get('VerbForm') == 'inf':
46 return True
47 elif univ_pos == 'adj' and morphology.get('Degree') == 'pos':
48 return True
49 elif VerbForm_inf in morphology:
50 return True
51 elif VerbForm_none in morphology:
52 return True
53 elif Number_sing in morphology:
54 return True
55 elif Degree_pos in morphology:
56 return True
57 else:
58 return False
59
60 def noun(self, string, morphology=None):
61 return self(string, 'noun', morphology)
62
63 def verb(self, string, morphology=None):
64 return self(string, 'verb', morphology)
65
66 def adj(self, string, morphology=None):
67 return self(string, 'adj', morphology)
68
69 def punct(self, string, morphology=None):
70 return self(string, 'punct', morphology)
71
72
73 def lemmatize(string, index, exceptions, rules):
74 string = string.lower()
75 forms = []
76 # TODO: Is this correct? See discussion in Issue #435.
77 #if string in index:
78 # forms.append(string)
79 forms.extend(exceptions.get(string, []))
80 oov_forms = []
81 for old, new in rules:
82 if string.endswith(old):
83 form = string[:len(string) - len(old)] + new
84 if not form:
85 pass
86 elif form in index or not form.isalpha():
87 forms.append(form)
88 else:
89 oov_forms.append(form)
90 if not forms:
91 forms.extend(oov_forms)
92 if not forms:
93 forms.append(string)
94 return set(forms)
95
[end of spacy/lemmatizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/spacy/lemmatizer.py b/spacy/lemmatizer.py
--- a/spacy/lemmatizer.py
+++ b/spacy/lemmatizer.py
@@ -78,15 +78,16 @@
# forms.append(string)
forms.extend(exceptions.get(string, []))
oov_forms = []
- for old, new in rules:
- if string.endswith(old):
- form = string[:len(string) - len(old)] + new
- if not form:
- pass
- elif form in index or not form.isalpha():
- forms.append(form)
- else:
- oov_forms.append(form)
+ if not forms:
+ for old, new in rules:
+ if string.endswith(old):
+ form = string[:len(string) - len(old)] + new
+ if not form:
+ pass
+ elif form in index or not form.isalpha():
+ forms.append(form)
+ else:
+ oov_forms.append(form)
if not forms:
forms.extend(oov_forms)
if not forms:
| {"golden_diff": "diff --git a/spacy/lemmatizer.py b/spacy/lemmatizer.py\n--- a/spacy/lemmatizer.py\n+++ b/spacy/lemmatizer.py\n@@ -78,15 +78,16 @@\n # forms.append(string)\n forms.extend(exceptions.get(string, []))\n oov_forms = []\n- for old, new in rules:\n- if string.endswith(old):\n- form = string[:len(string) - len(old)] + new\n- if not form:\n- pass\n- elif form in index or not form.isalpha():\n- forms.append(form)\n- else:\n- oov_forms.append(form)\n+ if not forms:\n+ for old, new in rules:\n+ if string.endswith(old):\n+ form = string[:len(string) - len(old)] + new\n+ if not form:\n+ pass\n+ elif form in index or not form.isalpha():\n+ forms.append(form)\n+ else:\n+ oov_forms.append(form)\n if not forms:\n forms.extend(oov_forms)\n if not forms:\n", "issue": "Lemmatizer exceptions: `lemma_strings` get sorted anyway\nLemmatization exceptions have been working inconsistently, even when they are added directly in `corpora/en/wordnet/dict/verb.exc`\r\n\r\nThe minimal test case of `coping` at https://github.com/explosion/spaCy/issues/389 reveals that, at \r\nhttps://github.com/explosion/spaCy/blob/master/spacy/lemmatizer.py#L94, the lemmatizer's list of potential forms (where the thing in lemmatizer.exceptions is item [0]) is cast to a `set` (and so loses ordering); then in https://github.com/explosion/spaCy/blob/master/spacy/morphology.pyx#L149, that `set` gets sorted. So lemmatizer exceptions only work if they also come first alphabetically!\r\n\r\nI've implemented the fix for this, but I need this issue in order to submit the PR!\r\n\r\nOne question, though, for @honnibal: this can be fixed either as I did it locally -- return the whole list, with exceptions and then anything that comes back from the lemmatizer's `rules` -- or skip the rules altogether if we used an exception. I think it's more useful downstream if we keep all lemma candidates, even if we're not using them in the default pipeline. 
But it also seems only destructive to do `sorted(set())` on them!\n", "before_files": [{"content": "# coding: utf8\nfrom __future__ import unicode_literals\n\nfrom .symbols import POS, NOUN, VERB, ADJ, PUNCT\nfrom .symbols import VerbForm_inf, VerbForm_none, Number_sing, Degree_pos\n\n\nclass Lemmatizer(object):\n @classmethod\n def load(cls, path, index=None, exc=None, rules=None):\n return cls(index or {}, exc or {}, rules or {})\n\n def __init__(self, index, exceptions, rules):\n self.index = index\n self.exc = exceptions\n self.rules = rules\n\n def __call__(self, string, univ_pos, morphology=None):\n if univ_pos == NOUN:\n univ_pos = 'noun'\n elif univ_pos == VERB:\n univ_pos = 'verb'\n elif univ_pos == ADJ:\n univ_pos = 'adj'\n elif univ_pos == PUNCT:\n univ_pos = 'punct'\n # See Issue #435 for example of where this logic is requied.\n if self.is_base_form(univ_pos, morphology):\n return set([string.lower()])\n lemmas = lemmatize(string, self.index.get(univ_pos, {}),\n self.exc.get(univ_pos, {}),\n self.rules.get(univ_pos, []))\n return lemmas\n\n def is_base_form(self, univ_pos, morphology=None):\n \"\"\"\n Check whether we're dealing with an uninflected paradigm, so we can\n avoid lemmatization entirely.\n \"\"\"\n morphology = {} if morphology is None else morphology\n others = [key for key in morphology if key not in (POS, 'number', 'pos', 'verbform')]\n true_morph_key = morphology.get('morph', 0)\n if univ_pos == 'noun' and morphology.get('Number') == 'sing':\n return True\n elif univ_pos == 'verb' and morphology.get('VerbForm') == 'inf':\n return True\n elif univ_pos == 'adj' and morphology.get('Degree') == 'pos':\n return True\n elif VerbForm_inf in morphology:\n return True\n elif VerbForm_none in morphology:\n return True\n elif Number_sing in morphology:\n return True\n elif Degree_pos in morphology:\n return True\n else:\n return False\n\n def noun(self, string, morphology=None):\n return self(string, 'noun', morphology)\n\n def verb(self, string, morphology=None):\n return self(string, 'verb', morphology)\n\n def adj(self, string, morphology=None):\n return self(string, 'adj', morphology)\n\n def punct(self, string, morphology=None):\n return self(string, 'punct', morphology)\n\n\ndef lemmatize(string, index, exceptions, rules):\n string = string.lower()\n forms = []\n # TODO: Is this correct? See discussion in Issue #435.\n #if string in index:\n # forms.append(string)\n forms.extend(exceptions.get(string, []))\n oov_forms = []\n for old, new in rules:\n if string.endswith(old):\n form = string[:len(string) - len(old)] + new\n if not form:\n pass\n elif form in index or not form.isalpha():\n forms.append(form)\n else:\n oov_forms.append(form)\n if not forms:\n forms.extend(oov_forms)\n if not forms:\n forms.append(string)\n return set(forms)\n", "path": "spacy/lemmatizer.py"}]} | 1,782 | 244 |
gh_patches_debug_51225 | rasdani/github-patches | git_diff | python-pillow__Pillow-7555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[10.1.0 regression] Palette images save as blank PDFs
Minimal example (you can use [this tiny PNG](https://github.com/python-pillow/Pillow/assets/1119169/d8d45152-7734-4fe3-a2d3-fb49839a0893)):
```python
from PIL import Image
image = Image.open('test.png')
image = image.convert('P')
image.save('test.pdf')
```
Output PDF with Pillow 10.0.1:

Output PDF with Pillow 10.1.0:

Issue faced with Python 3.11.6 on Ubuntu 22.04 and Debian 12 (bookworm). I also had the same issue in Docker environments, so I could make a Docker image if needed.
</issue>
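A sketch of where the mismatch likely comes from, assuming the converted image carries a palette with fewer than 256 entries (not part of the original report): the plugin below writes the PDF `/Indexed` colour space with a hard-coded hival of 255, i.e. it declares 256 palette entries even when fewer are stored, and the attached fix derives the value from the palette length instead.

```python
# Illustration only; "test.png" is the sample image from the report.
from PIL import Image

im = Image.open("test.png").convert("P")
palette = im.getpalette()

# The plugin hard-codes 255 as the /Indexed hival; the fix derives it from the data:
print(len(palette) // 3)       # entries actually stored in the palette (may be < 256)
print(len(palette) // 3 - 1)   # hival matching that palette, as in the attached diff
```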
<code>
[start of src/PIL/PdfImagePlugin.py]
1 #
2 # The Python Imaging Library.
3 # $Id$
4 #
5 # PDF (Acrobat) file handling
6 #
7 # History:
8 # 1996-07-16 fl Created
9 # 1997-01-18 fl Fixed header
10 # 2004-02-21 fl Fixes for 1/L/CMYK images, etc.
11 # 2004-02-24 fl Fixes for 1 and P images.
12 #
13 # Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved.
14 # Copyright (c) 1996-1997 by Fredrik Lundh.
15 #
16 # See the README file for information on usage and redistribution.
17 #
18
19 ##
20 # Image plugin for PDF images (output only).
21 ##
22
23 import io
24 import math
25 import os
26 import time
27
28 from . import Image, ImageFile, ImageSequence, PdfParser, __version__, features
29
30 #
31 # --------------------------------------------------------------------
32
33 # object ids:
34 # 1. catalogue
35 # 2. pages
36 # 3. image
37 # 4. page
38 # 5. page contents
39
40
41 def _save_all(im, fp, filename):
42 _save(im, fp, filename, save_all=True)
43
44
45 ##
46 # (Internal) Image save plugin for the PDF format.
47
48
49 def _write_image(im, filename, existing_pdf, image_refs):
50 # FIXME: Should replace ASCIIHexDecode with RunLengthDecode
51 # (packbits) or LZWDecode (tiff/lzw compression). Note that
52 # PDF 1.2 also supports Flatedecode (zip compression).
53
54 params = None
55 decode = None
56
57 #
58 # Get image characteristics
59
60 width, height = im.size
61
62 dict_obj = {"BitsPerComponent": 8}
63 if im.mode == "1":
64 if features.check("libtiff"):
65 filter = "CCITTFaxDecode"
66 dict_obj["BitsPerComponent"] = 1
67 params = PdfParser.PdfArray(
68 [
69 PdfParser.PdfDict(
70 {
71 "K": -1,
72 "BlackIs1": True,
73 "Columns": width,
74 "Rows": height,
75 }
76 )
77 ]
78 )
79 else:
80 filter = "DCTDecode"
81 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceGray")
82 procset = "ImageB" # grayscale
83 elif im.mode == "L":
84 filter = "DCTDecode"
85 # params = f"<< /Predictor 15 /Columns {width-2} >>"
86 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceGray")
87 procset = "ImageB" # grayscale
88 elif im.mode == "LA":
89 filter = "JPXDecode"
90 # params = f"<< /Predictor 15 /Columns {width-2} >>"
91 procset = "ImageB" # grayscale
92 dict_obj["SMaskInData"] = 1
93 elif im.mode == "P":
94 filter = "ASCIIHexDecode"
95 palette = im.getpalette()
96 dict_obj["ColorSpace"] = [
97 PdfParser.PdfName("Indexed"),
98 PdfParser.PdfName("DeviceRGB"),
99 255,
100 PdfParser.PdfBinary(palette),
101 ]
102 procset = "ImageI" # indexed color
103
104 if "transparency" in im.info:
105 smask = im.convert("LA").getchannel("A")
106 smask.encoderinfo = {}
107
108 image_ref = _write_image(smask, filename, existing_pdf, image_refs)[0]
109 dict_obj["SMask"] = image_ref
110 elif im.mode == "RGB":
111 filter = "DCTDecode"
112 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceRGB")
113 procset = "ImageC" # color images
114 elif im.mode == "RGBA":
115 filter = "JPXDecode"
116 procset = "ImageC" # color images
117 dict_obj["SMaskInData"] = 1
118 elif im.mode == "CMYK":
119 filter = "DCTDecode"
120 dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceCMYK")
121 procset = "ImageC" # color images
122 decode = [1, 0, 1, 0, 1, 0, 1, 0]
123 else:
124 msg = f"cannot save mode {im.mode}"
125 raise ValueError(msg)
126
127 #
128 # image
129
130 op = io.BytesIO()
131
132 if filter == "ASCIIHexDecode":
133 ImageFile._save(im, op, [("hex", (0, 0) + im.size, 0, im.mode)])
134 elif filter == "CCITTFaxDecode":
135 im.save(
136 op,
137 "TIFF",
138 compression="group4",
139 # use a single strip
140 strip_size=math.ceil(width / 8) * height,
141 )
142 elif filter == "DCTDecode":
143 Image.SAVE["JPEG"](im, op, filename)
144 elif filter == "JPXDecode":
145 del dict_obj["BitsPerComponent"]
146 Image.SAVE["JPEG2000"](im, op, filename)
147 else:
148 msg = f"unsupported PDF filter ({filter})"
149 raise ValueError(msg)
150
151 stream = op.getvalue()
152 if filter == "CCITTFaxDecode":
153 stream = stream[8:]
154 filter = PdfParser.PdfArray([PdfParser.PdfName(filter)])
155 else:
156 filter = PdfParser.PdfName(filter)
157
158 image_ref = image_refs.pop(0)
159 existing_pdf.write_obj(
160 image_ref,
161 stream=stream,
162 Type=PdfParser.PdfName("XObject"),
163 Subtype=PdfParser.PdfName("Image"),
164 Width=width, # * 72.0 / x_resolution,
165 Height=height, # * 72.0 / y_resolution,
166 Filter=filter,
167 Decode=decode,
168 DecodeParms=params,
169 **dict_obj,
170 )
171
172 return image_ref, procset
173
174
175 def _save(im, fp, filename, save_all=False):
176 is_appending = im.encoderinfo.get("append", False)
177 if is_appending:
178 existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="r+b")
179 else:
180 existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode="w+b")
181
182 dpi = im.encoderinfo.get("dpi")
183 if dpi:
184 x_resolution = dpi[0]
185 y_resolution = dpi[1]
186 else:
187 x_resolution = y_resolution = im.encoderinfo.get("resolution", 72.0)
188
189 info = {
190 "title": None
191 if is_appending
192 else os.path.splitext(os.path.basename(filename))[0],
193 "author": None,
194 "subject": None,
195 "keywords": None,
196 "creator": None,
197 "producer": None,
198 "creationDate": None if is_appending else time.gmtime(),
199 "modDate": None if is_appending else time.gmtime(),
200 }
201 for k, default in info.items():
202 v = im.encoderinfo.get(k) if k in im.encoderinfo else default
203 if v:
204 existing_pdf.info[k[0].upper() + k[1:]] = v
205
206 #
207 # make sure image data is available
208 im.load()
209
210 existing_pdf.start_writing()
211 existing_pdf.write_header()
212 existing_pdf.write_comment(f"created by Pillow {__version__} PDF driver")
213
214 #
215 # pages
216 ims = [im]
217 if save_all:
218 append_images = im.encoderinfo.get("append_images", [])
219 for append_im in append_images:
220 append_im.encoderinfo = im.encoderinfo.copy()
221 ims.append(append_im)
222 number_of_pages = 0
223 image_refs = []
224 page_refs = []
225 contents_refs = []
226 for im in ims:
227 im_number_of_pages = 1
228 if save_all:
229 try:
230 im_number_of_pages = im.n_frames
231 except AttributeError:
232 # Image format does not have n_frames.
233 # It is a single frame image
234 pass
235 number_of_pages += im_number_of_pages
236 for i in range(im_number_of_pages):
237 image_refs.append(existing_pdf.next_object_id(0))
238 if im.mode == "P" and "transparency" in im.info:
239 image_refs.append(existing_pdf.next_object_id(0))
240
241 page_refs.append(existing_pdf.next_object_id(0))
242 contents_refs.append(existing_pdf.next_object_id(0))
243 existing_pdf.pages.append(page_refs[-1])
244
245 #
246 # catalog and list of pages
247 existing_pdf.write_catalog()
248
249 page_number = 0
250 for im_sequence in ims:
251 im_pages = ImageSequence.Iterator(im_sequence) if save_all else [im_sequence]
252 for im in im_pages:
253 image_ref, procset = _write_image(im, filename, existing_pdf, image_refs)
254
255 #
256 # page
257
258 existing_pdf.write_page(
259 page_refs[page_number],
260 Resources=PdfParser.PdfDict(
261 ProcSet=[PdfParser.PdfName("PDF"), PdfParser.PdfName(procset)],
262 XObject=PdfParser.PdfDict(image=image_ref),
263 ),
264 MediaBox=[
265 0,
266 0,
267 im.width * 72.0 / x_resolution,
268 im.height * 72.0 / y_resolution,
269 ],
270 Contents=contents_refs[page_number],
271 )
272
273 #
274 # page contents
275
276 page_contents = b"q %f 0 0 %f 0 0 cm /image Do Q\n" % (
277 im.width * 72.0 / x_resolution,
278 im.height * 72.0 / y_resolution,
279 )
280
281 existing_pdf.write_obj(contents_refs[page_number], stream=page_contents)
282
283 page_number += 1
284
285 #
286 # trailer
287 existing_pdf.write_xref_and_trailer()
288 if hasattr(fp, "flush"):
289 fp.flush()
290 existing_pdf.close()
291
292
293 #
294 # --------------------------------------------------------------------
295
296
297 Image.register_save("PDF", _save)
298 Image.register_save_all("PDF", _save_all)
299
300 Image.register_extension("PDF", ".pdf")
301
302 Image.register_mime("PDF", "application/pdf")
303
[end of src/PIL/PdfImagePlugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/PIL/PdfImagePlugin.py b/src/PIL/PdfImagePlugin.py
--- a/src/PIL/PdfImagePlugin.py
+++ b/src/PIL/PdfImagePlugin.py
@@ -96,7 +96,7 @@
dict_obj["ColorSpace"] = [
PdfParser.PdfName("Indexed"),
PdfParser.PdfName("DeviceRGB"),
- 255,
+ len(palette) // 3 - 1,
PdfParser.PdfBinary(palette),
]
procset = "ImageI" # indexed color
| {"golden_diff": "diff --git a/src/PIL/PdfImagePlugin.py b/src/PIL/PdfImagePlugin.py\n--- a/src/PIL/PdfImagePlugin.py\n+++ b/src/PIL/PdfImagePlugin.py\n@@ -96,7 +96,7 @@\n dict_obj[\"ColorSpace\"] = [\n PdfParser.PdfName(\"Indexed\"),\n PdfParser.PdfName(\"DeviceRGB\"),\n- 255,\n+ len(palette) // 3 - 1,\n PdfParser.PdfBinary(palette),\n ]\n procset = \"ImageI\" # indexed color\n", "issue": "[10.1.0 regression] Palette images save as blank PDFs\nMinimal example (you can use [this tiny PNG](https://github.com/python-pillow/Pillow/assets/1119169/d8d45152-7734-4fe3-a2d3-fb49839a0893) for example):\r\n\r\n```python\r\nfrom PIL import Image\r\n\r\nimage = Image.open('test.png')\r\nimage = image.convert('P')\r\nimage.save('test.pdf')\r\n```\r\n\r\nOutput PDF with Pillow 10.0.1:\r\n\r\n\r\nOutput PDF with Pillow 10.1.0:\r\n\r\n\r\nIssue faced with Python 3.11.6 on Ubuntu 22.04 and Debian 12 (bookworm). I also had the same issue in Docker environments, so I could make a Docker image if needed.\n", "before_files": [{"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# PDF (Acrobat) file handling\n#\n# History:\n# 1996-07-16 fl Created\n# 1997-01-18 fl Fixed header\n# 2004-02-21 fl Fixes for 1/L/CMYK images, etc.\n# 2004-02-24 fl Fixes for 1 and P images.\n#\n# Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved.\n# Copyright (c) 1996-1997 by Fredrik Lundh.\n#\n# See the README file for information on usage and redistribution.\n#\n\n##\n# Image plugin for PDF images (output only).\n##\n\nimport io\nimport math\nimport os\nimport time\n\nfrom . import Image, ImageFile, ImageSequence, PdfParser, __version__, features\n\n#\n# --------------------------------------------------------------------\n\n# object ids:\n# 1. catalogue\n# 2. pages\n# 3. image\n# 4. page\n# 5. page contents\n\n\ndef _save_all(im, fp, filename):\n _save(im, fp, filename, save_all=True)\n\n\n##\n# (Internal) Image save plugin for the PDF format.\n\n\ndef _write_image(im, filename, existing_pdf, image_refs):\n # FIXME: Should replace ASCIIHexDecode with RunLengthDecode\n # (packbits) or LZWDecode (tiff/lzw compression). 
Note that\n # PDF 1.2 also supports Flatedecode (zip compression).\n\n params = None\n decode = None\n\n #\n # Get image characteristics\n\n width, height = im.size\n\n dict_obj = {\"BitsPerComponent\": 8}\n if im.mode == \"1\":\n if features.check(\"libtiff\"):\n filter = \"CCITTFaxDecode\"\n dict_obj[\"BitsPerComponent\"] = 1\n params = PdfParser.PdfArray(\n [\n PdfParser.PdfDict(\n {\n \"K\": -1,\n \"BlackIs1\": True,\n \"Columns\": width,\n \"Rows\": height,\n }\n )\n ]\n )\n else:\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceGray\")\n procset = \"ImageB\" # grayscale\n elif im.mode == \"L\":\n filter = \"DCTDecode\"\n # params = f\"<< /Predictor 15 /Columns {width-2} >>\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceGray\")\n procset = \"ImageB\" # grayscale\n elif im.mode == \"LA\":\n filter = \"JPXDecode\"\n # params = f\"<< /Predictor 15 /Columns {width-2} >>\"\n procset = \"ImageB\" # grayscale\n dict_obj[\"SMaskInData\"] = 1\n elif im.mode == \"P\":\n filter = \"ASCIIHexDecode\"\n palette = im.getpalette()\n dict_obj[\"ColorSpace\"] = [\n PdfParser.PdfName(\"Indexed\"),\n PdfParser.PdfName(\"DeviceRGB\"),\n 255,\n PdfParser.PdfBinary(palette),\n ]\n procset = \"ImageI\" # indexed color\n\n if \"transparency\" in im.info:\n smask = im.convert(\"LA\").getchannel(\"A\")\n smask.encoderinfo = {}\n\n image_ref = _write_image(smask, filename, existing_pdf, image_refs)[0]\n dict_obj[\"SMask\"] = image_ref\n elif im.mode == \"RGB\":\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceRGB\")\n procset = \"ImageC\" # color images\n elif im.mode == \"RGBA\":\n filter = \"JPXDecode\"\n procset = \"ImageC\" # color images\n dict_obj[\"SMaskInData\"] = 1\n elif im.mode == \"CMYK\":\n filter = \"DCTDecode\"\n dict_obj[\"ColorSpace\"] = PdfParser.PdfName(\"DeviceCMYK\")\n procset = \"ImageC\" # color images\n decode = [1, 0, 1, 0, 1, 0, 1, 0]\n else:\n msg = f\"cannot save mode {im.mode}\"\n raise ValueError(msg)\n\n #\n # image\n\n op = io.BytesIO()\n\n if filter == \"ASCIIHexDecode\":\n ImageFile._save(im, op, [(\"hex\", (0, 0) + im.size, 0, im.mode)])\n elif filter == \"CCITTFaxDecode\":\n im.save(\n op,\n \"TIFF\",\n compression=\"group4\",\n # use a single strip\n strip_size=math.ceil(width / 8) * height,\n )\n elif filter == \"DCTDecode\":\n Image.SAVE[\"JPEG\"](im, op, filename)\n elif filter == \"JPXDecode\":\n del dict_obj[\"BitsPerComponent\"]\n Image.SAVE[\"JPEG2000\"](im, op, filename)\n else:\n msg = f\"unsupported PDF filter ({filter})\"\n raise ValueError(msg)\n\n stream = op.getvalue()\n if filter == \"CCITTFaxDecode\":\n stream = stream[8:]\n filter = PdfParser.PdfArray([PdfParser.PdfName(filter)])\n else:\n filter = PdfParser.PdfName(filter)\n\n image_ref = image_refs.pop(0)\n existing_pdf.write_obj(\n image_ref,\n stream=stream,\n Type=PdfParser.PdfName(\"XObject\"),\n Subtype=PdfParser.PdfName(\"Image\"),\n Width=width, # * 72.0 / x_resolution,\n Height=height, # * 72.0 / y_resolution,\n Filter=filter,\n Decode=decode,\n DecodeParms=params,\n **dict_obj,\n )\n\n return image_ref, procset\n\n\ndef _save(im, fp, filename, save_all=False):\n is_appending = im.encoderinfo.get(\"append\", False)\n if is_appending:\n existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode=\"r+b\")\n else:\n existing_pdf = PdfParser.PdfParser(f=fp, filename=filename, mode=\"w+b\")\n\n dpi = im.encoderinfo.get(\"dpi\")\n if dpi:\n x_resolution = dpi[0]\n y_resolution = dpi[1]\n else:\n x_resolution = y_resolution = 
im.encoderinfo.get(\"resolution\", 72.0)\n\n info = {\n \"title\": None\n if is_appending\n else os.path.splitext(os.path.basename(filename))[0],\n \"author\": None,\n \"subject\": None,\n \"keywords\": None,\n \"creator\": None,\n \"producer\": None,\n \"creationDate\": None if is_appending else time.gmtime(),\n \"modDate\": None if is_appending else time.gmtime(),\n }\n for k, default in info.items():\n v = im.encoderinfo.get(k) if k in im.encoderinfo else default\n if v:\n existing_pdf.info[k[0].upper() + k[1:]] = v\n\n #\n # make sure image data is available\n im.load()\n\n existing_pdf.start_writing()\n existing_pdf.write_header()\n existing_pdf.write_comment(f\"created by Pillow {__version__} PDF driver\")\n\n #\n # pages\n ims = [im]\n if save_all:\n append_images = im.encoderinfo.get(\"append_images\", [])\n for append_im in append_images:\n append_im.encoderinfo = im.encoderinfo.copy()\n ims.append(append_im)\n number_of_pages = 0\n image_refs = []\n page_refs = []\n contents_refs = []\n for im in ims:\n im_number_of_pages = 1\n if save_all:\n try:\n im_number_of_pages = im.n_frames\n except AttributeError:\n # Image format does not have n_frames.\n # It is a single frame image\n pass\n number_of_pages += im_number_of_pages\n for i in range(im_number_of_pages):\n image_refs.append(existing_pdf.next_object_id(0))\n if im.mode == \"P\" and \"transparency\" in im.info:\n image_refs.append(existing_pdf.next_object_id(0))\n\n page_refs.append(existing_pdf.next_object_id(0))\n contents_refs.append(existing_pdf.next_object_id(0))\n existing_pdf.pages.append(page_refs[-1])\n\n #\n # catalog and list of pages\n existing_pdf.write_catalog()\n\n page_number = 0\n for im_sequence in ims:\n im_pages = ImageSequence.Iterator(im_sequence) if save_all else [im_sequence]\n for im in im_pages:\n image_ref, procset = _write_image(im, filename, existing_pdf, image_refs)\n\n #\n # page\n\n existing_pdf.write_page(\n page_refs[page_number],\n Resources=PdfParser.PdfDict(\n ProcSet=[PdfParser.PdfName(\"PDF\"), PdfParser.PdfName(procset)],\n XObject=PdfParser.PdfDict(image=image_ref),\n ),\n MediaBox=[\n 0,\n 0,\n im.width * 72.0 / x_resolution,\n im.height * 72.0 / y_resolution,\n ],\n Contents=contents_refs[page_number],\n )\n\n #\n # page contents\n\n page_contents = b\"q %f 0 0 %f 0 0 cm /image Do Q\\n\" % (\n im.width * 72.0 / x_resolution,\n im.height * 72.0 / y_resolution,\n )\n\n existing_pdf.write_obj(contents_refs[page_number], stream=page_contents)\n\n page_number += 1\n\n #\n # trailer\n existing_pdf.write_xref_and_trailer()\n if hasattr(fp, \"flush\"):\n fp.flush()\n existing_pdf.close()\n\n\n#\n# --------------------------------------------------------------------\n\n\nImage.register_save(\"PDF\", _save)\nImage.register_save_all(\"PDF\", _save_all)\n\nImage.register_extension(\"PDF\", \".pdf\")\n\nImage.register_mime(\"PDF\", \"application/pdf\")\n", "path": "src/PIL/PdfImagePlugin.py"}]} | 3,913 | 128 |
gh_patches_debug_7553 | rasdani/github-patches | git_diff | CTFd__CTFd-1336 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Files not Downloadable
**Environment**:
- CTFd Version/Commit: 2.3.3
- Operating System: Windows
- Web Browser and Version: Chrome 80.0.3987.132
**What happened?**
When building a challenge that has a file, the file is not downloadable.
**What did you expect to happen?**
A file should be downloadable from the frontend.
**How to reproduce your issue**
Create a fresh CTFd instance. Run serve.py. Create a challenge and upload a file. Navigate to the challenge on the front end. The file link returns a 404.
**Any associated stack traces or error logs**
```
CTFd-Base> python .\serve.py
* Loaded module, <module 'CTFd.plugins.challenges' from 'C:\Users\StormCTF\Downloads\CTFd-Base\CTFd\plugins\challenges\__init__.pyc'>
* Loaded module, <module 'CTFd.plugins.dynamic_challenges' from 'C:\Users\StormCTF\Downloads\CTFd-Base\CTFd\plugins\dynamic_challenges\__init__.pyc'>
* Loaded module, <module 'CTFd.plugins.flags' from 'C:\Users\StormCTF\Downloads\CTFd-Base\CTFd\plugins\flags\__init__.pyc'>
* Serving Flask app "CTFd" (lazy loading)
* Environment: development
* Debug mode: on
* Restarting with stat
* Loaded module, <module 'CTFd.plugins.challenges' from 'C:\Users\StormCTF\Downloads\CTFd-Base\CTFd\plugins\challenges\__init__.pyc'>
* Loaded module, <module 'CTFd.plugins.dynamic_challenges' from 'C:\Users\StormCTF\Downloads\CTFd-Base\CTFd\plugins\dynamic_challenges\__init__.pyc'>
* Loaded module, <module 'CTFd.plugins.flags' from 'C:\Users\StormCTF\Downloads\CTFd-Base\CTFd\plugins\flags\__init__.pyc'>
* Debugger is active!
* Debugger PIN: 310-671-378
* Running on http://127.0.0.1:4000/ (Press CTRL+C to quit)
127.0.0.1 - - [19/Apr/2020 12:40:42] "GET /challenges HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/css/challenge-board.dev.css?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/css/main.dev.css?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/css/core.dev.css?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/js/vendor.bundle.dev.js?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/css/fonts.dev.css?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/js/core.dev.js?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/js/helpers.dev.js?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/js/pages/challenges.dev.js?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /themes/core/static/img/favicon.ico?d=a5654bce HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /api/v1/teams/me/solves HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /api/v1/challenges HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:43] "GET /api/v1/teams/me/solves HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:46] "GET /plugins/challenges/assets/view.js?_=1587314443544 HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:46] "GET /api/v1/challenges/1/solves HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:46] "GET /api/v1/challenges/1 HTTP/1.1" 200 -
127.0.0.1 - - [19/Apr/2020 12:40:50] "GET /files/e57e22cdfa601592e5239c2d358994d6%5COrange-Icon-Black-BG.png?token=.eJyrVipJTcyNz0xRsjLSUSotTi2CsdMyc1LBbMNaAObrC-o.Xpx_Dg.vcuDUI8hJiIgPLExAPan7WVg4FU HTTP/1.1" 404 -
127.0.0.1 - - [19/Apr/2020 12:40:50] "GET /themes/core/static/js/pages/main.dev.js?d=a5654bce HTTP/1.1" 200 -```
</issue>
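A sketch of the path mix-up that appears to be behind the 404 (illustration only; the hash and filename are taken from the log line above, and `ntpath` stands in for `os.path` on Windows): the uploader joins the random hash and the filename with `os.path.join`, which on Windows produces a backslash that then shows up URL-encoded as `%5C` in the download link. Joining with `posixpath`, as in the attached diff, keeps the stored key using forward slashes.

```python
# Illustration only; hash and filename come from the 404 log line in the report.
import ntpath      # behaves like os.path does on Windows, on any platform
import posixpath

md5hash = "e57e22cdfa601592e5239c2d358994d6"
filename = "Orange-Icon-Black-BG.png"

print(ntpath.join(md5hash, filename))    # ends in "\Orange-Icon-Black-BG.png", encoded as %5C in the URL
print(posixpath.join(md5hash, filename)) # ends in "/Orange-Icon-Black-BG.png", the key the fix stores
```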
<code>
[start of CTFd/utils/uploads/uploaders.py]
1 import os
2 import string
3 from shutil import copyfileobj
4
5 import boto3
6 from flask import current_app, redirect, send_file
7 from flask.helpers import safe_join
8 from werkzeug.utils import secure_filename
9
10 from CTFd.utils import get_app_config
11 from CTFd.utils.encoding import hexencode
12
13
14 class BaseUploader(object):
15 def __init__(self):
16 raise NotImplementedError
17
18 def store(self, fileobj, filename):
19 raise NotImplementedError
20
21 def upload(self, file_obj, filename):
22 raise NotImplementedError
23
24 def download(self, filename):
25 raise NotImplementedError
26
27 def delete(self, filename):
28 raise NotImplementedError
29
30 def sync(self):
31 raise NotImplementedError
32
33
34 class FilesystemUploader(BaseUploader):
35 def __init__(self, base_path=None):
36 super(BaseUploader, self).__init__()
37 self.base_path = base_path or current_app.config.get("UPLOAD_FOLDER")
38
39 def store(self, fileobj, filename):
40 location = os.path.join(self.base_path, filename)
41 directory = os.path.dirname(location)
42
43 if not os.path.exists(directory):
44 os.makedirs(directory)
45
46 with open(location, "wb") as dst:
47 copyfileobj(fileobj, dst, 16384)
48
49 return filename
50
51 def upload(self, file_obj, filename):
52 if len(filename) == 0:
53 raise Exception("Empty filenames cannot be used")
54
55 filename = secure_filename(filename)
56 md5hash = hexencode(os.urandom(16))
57 file_path = os.path.join(md5hash, filename)
58
59 return self.store(file_obj, file_path)
60
61 def download(self, filename):
62 return send_file(safe_join(self.base_path, filename), as_attachment=True)
63
64 def delete(self, filename):
65 if os.path.exists(os.path.join(self.base_path, filename)):
66 os.unlink(os.path.join(self.base_path, filename))
67 return True
68 return False
69
70 def sync(self):
71 pass
72
73
74 class S3Uploader(BaseUploader):
75 def __init__(self):
76 super(BaseUploader, self).__init__()
77 self.s3 = self._get_s3_connection()
78 self.bucket = get_app_config("AWS_S3_BUCKET")
79
80 def _get_s3_connection(self):
81 access_key = get_app_config("AWS_ACCESS_KEY_ID")
82 secret_key = get_app_config("AWS_SECRET_ACCESS_KEY")
83 endpoint = get_app_config("AWS_S3_ENDPOINT_URL")
84 client = boto3.client(
85 "s3",
86 aws_access_key_id=access_key,
87 aws_secret_access_key=secret_key,
88 endpoint_url=endpoint,
89 )
90 return client
91
92 def _clean_filename(self, c):
93 if c in string.ascii_letters + string.digits + "-" + "_" + ".":
94 return True
95
96 def store(self, fileobj, filename):
97 self.s3.upload_fileobj(fileobj, self.bucket, filename)
98 return filename
99
100 def upload(self, file_obj, filename):
101 filename = filter(
102 self._clean_filename, secure_filename(filename).replace(" ", "_")
103 )
104 filename = "".join(filename)
105 if len(filename) <= 0:
106 return False
107
108 md5hash = hexencode(os.urandom(16))
109
110 dst = md5hash + "/" + filename
111 self.s3.upload_fileobj(file_obj, self.bucket, dst)
112 return dst
113
114 def download(self, filename):
115 key = filename
116 filename = filename.split("/").pop()
117 url = self.s3.generate_presigned_url(
118 "get_object",
119 Params={
120 "Bucket": self.bucket,
121 "Key": key,
122 "ResponseContentDisposition": "attachment; filename={}".format(
123 filename
124 ),
125 },
126 )
127 return redirect(url)
128
129 def delete(self, filename):
130 self.s3.delete_object(Bucket=self.bucket, Key=filename)
131 return True
132
133 def sync(self):
134 local_folder = current_app.config.get("UPLOAD_FOLDER")
135 # If the bucket is empty then Contents will not be in the response
136 bucket_list = self.s3.list_objects(Bucket=self.bucket).get("Contents", [])
137
138 for s3_key in bucket_list:
139 s3_object = s3_key["Key"]
140 # We don't want to download any directories
141 if s3_object.endswith("/") is False:
142 local_path = os.path.join(local_folder, s3_object)
143 directory = os.path.dirname(local_path)
144 if not os.path.exists(directory):
145 os.makedirs(directory)
146
147 self.s3.download_file(self.bucket, s3_object, local_path)
148
[end of CTFd/utils/uploads/uploaders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/utils/uploads/uploaders.py b/CTFd/utils/uploads/uploaders.py
--- a/CTFd/utils/uploads/uploaders.py
+++ b/CTFd/utils/uploads/uploaders.py
@@ -1,4 +1,5 @@
import os
+import posixpath
import string
from shutil import copyfileobj
@@ -54,7 +55,7 @@
filename = secure_filename(filename)
md5hash = hexencode(os.urandom(16))
- file_path = os.path.join(md5hash, filename)
+ file_path = posixpath.join(md5hash, filename)
return self.store(file_obj, file_path)
| {"golden_diff": "diff --git a/CTFd/utils/uploads/uploaders.py b/CTFd/utils/uploads/uploaders.py\n--- a/CTFd/utils/uploads/uploaders.py\n+++ b/CTFd/utils/uploads/uploaders.py\n@@ -1,4 +1,5 @@\n import os\n+import posixpath\n import string\n from shutil import copyfileobj\n \n@@ -54,7 +55,7 @@\n \n filename = secure_filename(filename)\n md5hash = hexencode(os.urandom(16))\n- file_path = os.path.join(md5hash, filename)\n+ file_path = posixpath.join(md5hash, filename)\n \n return self.store(file_obj, file_path)\n", "issue": "Files not Downloadable\n**Environment**:\r\n\r\n - CTFd Version/Commit: 2.3.3\r\n - Operating System: Windows\r\n - Web Browser and Version: Chrome 80.0.3987.132\r\n\r\n**What happened?**\r\nWhen building a challenge that has a file, the file is not downloadable.\r\n\r\n**What did you expect to happen?**\r\nA file should be downloadable from the frontend.\r\n\r\n**How to reproduce your issue**\r\nCreate a fresh CTFd instance. Run serve.py. Create a challenge and upload a file. Navigate to the challenge on the front end. The file link is 404.\r\n\r\n**Any associated stack traces or error logs**\r\n```\r\nCTFd-Base> python .\\serve.py\r\n * Loaded module, <module 'CTFd.plugins.challenges' from 'C:\\Users\\StormCTF\\Downloads\\CTFd-Base\\CTFd\\plugins\\challenges\\__init__.pyc'>\r\n * Loaded module, <module 'CTFd.plugins.dynamic_challenges' from 'C:\\Users\\StormCTF\\Downloads\\CTFd-Base\\CTFd\\plugins\\dynamic_challenges\\__init__.pyc'>\r\n * Loaded module, <module 'CTFd.plugins.flags' from 'C:\\Users\\StormCTF\\Downloads\\CTFd-Base\\CTFd\\plugins\\flags\\__init__.pyc'>\r\n * Serving Flask app \"CTFd\" (lazy loading)\r\n * Environment: development\r\n * Debug mode: on\r\n * Restarting with stat\r\n * Loaded module, <module 'CTFd.plugins.challenges' from 'C:\\Users\\StormCTF\\Downloads\\CTFd-Base\\CTFd\\plugins\\challenges\\__init__.pyc'>\r\n * Loaded module, <module 'CTFd.plugins.dynamic_challenges' from 'C:\\Users\\StormCTF\\Downloads\\CTFd-Base\\CTFd\\plugins\\dynamic_challenges\\__init__.pyc'>\r\n * Loaded module, <module 'CTFd.plugins.flags' from 'C:\\Users\\StormCTF\\Downloads\\CTFd-Base\\CTFd\\plugins\\flags\\__init__.pyc'>\r\n * Debugger is active!\r\n * Debugger PIN: 310-671-378\r\n * Running on http://127.0.0.1:4000/ (Press CTRL+C to quit)\r\n127.0.0.1 - - [19/Apr/2020 12:40:42] \"GET /challenges HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/css/challenge-board.dev.css?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/css/main.dev.css?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/css/core.dev.css?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/js/vendor.bundle.dev.js?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/css/fonts.dev.css?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/js/core.dev.js?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/js/helpers.dev.js?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/js/pages/challenges.dev.js?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /themes/core/static/img/favicon.ico?d=a5654bce HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /api/v1/teams/me/solves HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET 
/api/v1/challenges HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:43] \"GET /api/v1/teams/me/solves HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:46] \"GET /plugins/challenges/assets/view.js?_=1587314443544 HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:46] \"GET /api/v1/challenges/1/solves HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:46] \"GET /api/v1/challenges/1 HTTP/1.1\" 200 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:50] \"GET /files/e57e22cdfa601592e5239c2d358994d6%5COrange-Icon-Black-BG.png?token=.eJyrVipJTcyNz0xRsjLSUSotTi2CsdMyc1LBbMNaAObrC-o.Xpx_Dg.vcuDUI8hJiIgPLExAPan7WVg4FU HTTP/1.1\" 404 -\r\n127.0.0.1 - - [19/Apr/2020 12:40:50] \"GET /themes/core/static/js/pages/main.dev.js?d=a5654bce HTTP/1.1\" 200 -```\r\n\n", "before_files": [{"content": "import os\nimport string\nfrom shutil import copyfileobj\n\nimport boto3\nfrom flask import current_app, redirect, send_file\nfrom flask.helpers import safe_join\nfrom werkzeug.utils import secure_filename\n\nfrom CTFd.utils import get_app_config\nfrom CTFd.utils.encoding import hexencode\n\n\nclass BaseUploader(object):\n def __init__(self):\n raise NotImplementedError\n\n def store(self, fileobj, filename):\n raise NotImplementedError\n\n def upload(self, file_obj, filename):\n raise NotImplementedError\n\n def download(self, filename):\n raise NotImplementedError\n\n def delete(self, filename):\n raise NotImplementedError\n\n def sync(self):\n raise NotImplementedError\n\n\nclass FilesystemUploader(BaseUploader):\n def __init__(self, base_path=None):\n super(BaseUploader, self).__init__()\n self.base_path = base_path or current_app.config.get(\"UPLOAD_FOLDER\")\n\n def store(self, fileobj, filename):\n location = os.path.join(self.base_path, filename)\n directory = os.path.dirname(location)\n\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n with open(location, \"wb\") as dst:\n copyfileobj(fileobj, dst, 16384)\n\n return filename\n\n def upload(self, file_obj, filename):\n if len(filename) == 0:\n raise Exception(\"Empty filenames cannot be used\")\n\n filename = secure_filename(filename)\n md5hash = hexencode(os.urandom(16))\n file_path = os.path.join(md5hash, filename)\n\n return self.store(file_obj, file_path)\n\n def download(self, filename):\n return send_file(safe_join(self.base_path, filename), as_attachment=True)\n\n def delete(self, filename):\n if os.path.exists(os.path.join(self.base_path, filename)):\n os.unlink(os.path.join(self.base_path, filename))\n return True\n return False\n\n def sync(self):\n pass\n\n\nclass S3Uploader(BaseUploader):\n def __init__(self):\n super(BaseUploader, self).__init__()\n self.s3 = self._get_s3_connection()\n self.bucket = get_app_config(\"AWS_S3_BUCKET\")\n\n def _get_s3_connection(self):\n access_key = get_app_config(\"AWS_ACCESS_KEY_ID\")\n secret_key = get_app_config(\"AWS_SECRET_ACCESS_KEY\")\n endpoint = get_app_config(\"AWS_S3_ENDPOINT_URL\")\n client = boto3.client(\n \"s3\",\n aws_access_key_id=access_key,\n aws_secret_access_key=secret_key,\n endpoint_url=endpoint,\n )\n return client\n\n def _clean_filename(self, c):\n if c in string.ascii_letters + string.digits + \"-\" + \"_\" + \".\":\n return True\n\n def store(self, fileobj, filename):\n self.s3.upload_fileobj(fileobj, self.bucket, filename)\n return filename\n\n def upload(self, file_obj, filename):\n filename = filter(\n self._clean_filename, secure_filename(filename).replace(\" \", \"_\")\n )\n filename = \"\".join(filename)\n if len(filename) <= 0:\n return False\n\n md5hash = 
hexencode(os.urandom(16))\n\n dst = md5hash + \"/\" + filename\n self.s3.upload_fileobj(file_obj, self.bucket, dst)\n return dst\n\n def download(self, filename):\n key = filename\n filename = filename.split(\"/\").pop()\n url = self.s3.generate_presigned_url(\n \"get_object\",\n Params={\n \"Bucket\": self.bucket,\n \"Key\": key,\n \"ResponseContentDisposition\": \"attachment; filename={}\".format(\n filename\n ),\n },\n )\n return redirect(url)\n\n def delete(self, filename):\n self.s3.delete_object(Bucket=self.bucket, Key=filename)\n return True\n\n def sync(self):\n local_folder = current_app.config.get(\"UPLOAD_FOLDER\")\n # If the bucket is empty then Contents will not be in the response\n bucket_list = self.s3.list_objects(Bucket=self.bucket).get(\"Contents\", [])\n\n for s3_key in bucket_list:\n s3_object = s3_key[\"Key\"]\n # We don't want to download any directories\n if s3_object.endswith(\"/\") is False:\n local_path = os.path.join(local_folder, s3_object)\n directory = os.path.dirname(local_path)\n if not os.path.exists(directory):\n os.makedirs(directory)\n\n self.s3.download_file(self.bucket, s3_object, local_path)\n", "path": "CTFd/utils/uploads/uploaders.py"}]} | 3,515 | 145 |
gh_patches_debug_8108 | rasdani/github-patches | git_diff | common-workflow-language__cwltool-792 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cwltool fails to run with an input of record and an InlineJavascriptRequirement
`test.cwl`
```
cwlVersion: v1.0
class: CommandLineTool
requirements:
- class: InlineJavascriptRequirement
baseCommand: ['echo']
inputs:
- id: input
type:
type: record
fields: []
outputs: []
```
fails with:
```
/Users/th10/checkouts/cwltool/venv/bin/cwltool 1.0.20180611202326
Resolved 'test.cwl' to 'file:///Users/th10/checkouts/cwltool/test.cwl'
I'm sorry, I couldn't load this CWL file. The error was:
Traceback (most recent call last):
File "/Users/th10/checkouts/cwltool/cwltool/main.py", line 527, in main
job_script_provider, make_tool_kwds)
File "/Users/th10/checkouts/cwltool/cwltool/load_tool.py", line 351, in make_tool
job_script_provider=job_script_provider, **kwargs)
File "/Users/th10/checkouts/cwltool/cwltool/workflow.py", line 51, in default_make_tool
force_docker_pull, job_script_provider, **kwargs)
File "/Users/th10/checkouts/cwltool/cwltool/command_line_tool.py", line 224, in __init__
force_docker_pull, job_script_provider, **kwargs)
File "/Users/th10/checkouts/cwltool/cwltool/process.py", line 568, in __init__
validate_js_expressions(cast(CommentedMap, toolpath_object), self.doc_schema.names[toolpath_object["class"]], validate_js_options)
File "/Users/th10/checkouts/cwltool/cwltool/validate_js.py", line 181, in validate_js_expressions
expressions = get_expressions(tool, schema)
File "/Users/th10/checkouts/cwltool/cwltool/validate_js.py", line 76, in get_expressions
SourceLine(tool, schema_field.name)
File "/Users/th10/checkouts/cwltool/cwltool/validate_js.py", line 62, in get_expressions
map(lambda x: get_expressions(x[1], schema.items, SourceLine(tool, x[0])), enumerate(tool)) # type: ignore # https://github.com/python/mypy/issues/4679
File "/Users/th10/checkouts/cwltool/cwltool/validate_js.py", line 62, in <lambda>
map(lambda x: get_expressions(x[1], schema.items, SourceLine(tool, x[0])), enumerate(tool)) # type: ignore # https://github.com/python/mypy/issues/4679
File "/Users/th10/checkouts/cwltool/cwltool/validate_js.py", line 76, in get_expressions
SourceLine(tool, schema_field.name)
File "/Users/th10/checkouts/cwltool/cwltool/validate_js.py", line 55, in get_expressions
assert valid_schema is not None
AssertionError
```
as https://github.com/common-workflow-language/cwltool/blob/886a6ac41c685f20d39e352f9c657e59f3312265/cwltool/validate_js.py#L51-L52 parses with strict validation while earlier parts of the code set the "name" attribute of records and arrays. This will also produce an error when someone is not strictly validating a CWL script, and a CWL warning would appear. A quick fix for this would be to get rid of the `strict=True` parameter.
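For illustration, here is a minimal sketch of what that union-schema check could look like with `strict=True` dropped (the wrapper `pick_valid_schema` is hypothetical; only the `validate_ex` call mirrors the one in `get_expressions`):
```python
# Hypothetical wrapper around the same validate_ex call used in get_expressions,
# minus strict=True, so schemas that carry an auto-generated "name" attribute can
# still match and valid_schema no longer stays None.
from schema_salad.validate import validate_ex


def pick_valid_schema(possible_schemas, tool):
    valid_schema = None
    for possible_schema in possible_schemas:
        if validate_ex(possible_schema, tool, raise_ex=False):
            valid_schema = possible_schema
    return valid_schema
```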
</issue>
<code>
[start of cwltool/validate_js.py]
1 import copy
2 import itertools
3 import json
4 import logging
5 from collections import namedtuple
6 from typing import (Any, Dict, List, # pylint: disable=unused-import
7 Optional, Text, Tuple, Union)
8
9 import avro.schema # always import after schema_salad, never before
10 from pkg_resources import resource_stream
11 from ruamel.yaml.comments import CommentedMap # pylint: disable=unused-import
12 from schema_salad.sourceline import SourceLine
13 from schema_salad.validate import (Schema, # pylint: disable=unused-import
14 ValidationException, validate_ex)
15 from six import string_types
16
17 from .expression import scanner as scan_expression
18 from .loghandler import _logger
19 from .sandboxjs import code_fragment_to_js, exec_js_process
20 from .utils import json_dumps
21
22 def is_expression(tool, schema):
23 # type: (Union[CommentedMap, Any], Optional[Schema]) -> bool
24 return isinstance(schema, avro.schema.EnumSchema) \
25 and schema.name == "Expression" and isinstance(tool, string_types)
26
27 class SuppressLog(logging.Filter):
28 def __init__(self, name): # type: (Text) -> None
29 name = str(name)
30 super(SuppressLog, self).__init__(name)
31
32 def filter(self, record):
33 return False
34
35
36 _logger_validation_warnings = logging.getLogger("cwltool.validation_warnings")
37 _logger_validation_warnings.addFilter(SuppressLog("cwltool.validation_warnings"))
38
39 def get_expressions(tool, # type: Union[CommentedMap, Any]
40 schema, # type: Optional[avro.schema.Schema]
41 source_line=None # type: Optional[SourceLine]
42 ): # type: (...) -> List[Tuple[Text, Optional[SourceLine]]]
43 if is_expression(tool, schema):
44 return [(tool, source_line)]
45 elif isinstance(schema, avro.schema.UnionSchema):
46 valid_schema = None
47
48 for possible_schema in schema.schemas:
49 if is_expression(tool, possible_schema):
50 return [(tool, source_line)]
51 elif validate_ex(possible_schema, tool, strict=True, raise_ex=False,
52 logger=_logger_validation_warnings):
53 valid_schema = possible_schema
54
55 return get_expressions(tool, valid_schema, source_line)
56 elif isinstance(schema, avro.schema.ArraySchema):
57 if not isinstance(tool, list):
58 return []
59
60 return list(itertools.chain(*
61 map(lambda x: get_expressions(x[1], schema.items, SourceLine(tool, x[0])), enumerate(tool)) # type: ignore # https://github.com/python/mypy/issues/4679
62 ))
63
64 elif isinstance(schema, avro.schema.RecordSchema):
65 if not isinstance(tool, Dict):
66 return []
67
68 expression_nodes = []
69
70 for schema_field in schema.fields:
71 if schema_field.name in tool:
72 expression_nodes.extend(get_expressions(
73 tool[schema_field.name],
74 schema_field.type,
75 SourceLine(tool, schema_field.name)
76 ))
77
78 return expression_nodes
79 else:
80 return []
81
82
83 JSHintJSReturn = namedtuple("jshint_return", ["errors", "globals"])
84
85 def jshint_js(js_text, globals=None, options=None):
86 # type: (Text, List[Text], Dict) -> Tuple[List[Text], List[Text]]
87 if globals is None:
88 globals = []
89 if options is None:
90 options = {
91 "includewarnings": [
92 "W117", # <VARIABLE> not defined
93 "W104", "W119" # using ES6 features
94 ],
95 "strict": "implied",
96 "esversion": 5
97 }
98
99 with resource_stream(__name__, "jshint/jshint.js") as file:
100 # NOTE: we need a global variable for lodash (which jshint depends on)
101 jshint_functions_text = "var global = this;" + file.read().decode('utf-8')
102
103 with resource_stream(__name__, "jshint/jshint_wrapper.js") as file:
104 # NOTE: we need to assign to ob, as the expression {validateJS: validateJS} as an expression
105 # is interpreted as a block with a label `validateJS`
106 jshint_functions_text += "\n" + file.read().decode('utf-8') + "\nvar ob = {validateJS: validateJS}; ob"
107
108 returncode, stdout, stderr = exec_js_process(
109 "validateJS(%s)" % json_dumps({
110 "code": js_text,
111 "options": options,
112 "globals": globals
113 }),
114 timeout=30,
115 context=jshint_functions_text
116 )
117
118 def dump_jshint_error():
119 # type: () -> None
120 raise RuntimeError("jshint failed to run succesfully\nreturncode: %d\nstdout: \"%s\"\nstderr: \"%s\"" % (
121 returncode,
122 stdout,
123 stderr
124 ))
125
126 if returncode == -1:
127 _logger.warn("jshint process timed out")
128
129 if returncode != 0:
130 dump_jshint_error()
131
132 try:
133 jshint_json = json.loads(stdout)
134 except ValueError:
135 dump_jshint_error()
136
137 jshint_errors = [] # type: List[Text]
138
139 js_text_lines = js_text.split("\n")
140
141 for jshint_error_obj in jshint_json.get("errors", []):
142 text = u"JSHINT: " + js_text_lines[jshint_error_obj["line"] - 1] + "\n"
143 text += u"JSHINT: " + " " * (jshint_error_obj["character"] - 1) + "^\n"
144 text += u"JSHINT: %s: %s" % (jshint_error_obj["code"], jshint_error_obj["reason"])
145 jshint_errors.append(text)
146
147 return JSHintJSReturn(jshint_errors, jshint_json.get("globals", []))
148
149
150 def print_js_hint_messages(js_hint_messages, source_line):
151 # type: (List[Text], Optional[SourceLine]) -> None
152 if source_line:
153 for js_hint_message in js_hint_messages:
154 _logger.warn(source_line.makeError(js_hint_message))
155
156 def validate_js_expressions(tool, schema, jshint_options=None):
157 # type: (CommentedMap, Schema, Dict) -> None
158
159 if tool.get("requirements") is None:
160 return
161
162 requirements = tool["requirements"]
163
164 default_globals = [u"self", u"inputs", u"runtime", u"console"]
165
166 for i, prop in enumerate(reversed(requirements)):
167 if prop["class"] == "InlineJavascriptRequirement":
168 expression_lib = prop.get("expressionLib", [])
169 break
170 else:
171 return
172
173 js_globals = copy.deepcopy(default_globals)
174
175 for i, expression_lib_line in enumerate(expression_lib):
176 expression_lib_line_errors, expression_lib_line_globals = jshint_js(expression_lib_line, js_globals, jshint_options)
177 js_globals.extend(expression_lib_line_globals)
178 print_js_hint_messages(expression_lib_line_errors, SourceLine(expression_lib, i))
179
180 expressions = get_expressions(tool, schema)
181
182 for expression, source_line in expressions:
183 unscanned_str = expression.strip()
184 scan_slice = scan_expression(unscanned_str)
185
186 while scan_slice:
187 if unscanned_str[scan_slice[0]] == '$':
188 code_fragment = unscanned_str[scan_slice[0] + 1:scan_slice[1]]
189 code_fragment_js = code_fragment_to_js(code_fragment, "")
190 expression_errors, _ = jshint_js(code_fragment_js, js_globals, jshint_options)
191 print_js_hint_messages(expression_errors, source_line)
192
193 unscanned_str = unscanned_str[scan_slice[1]:]
194 scan_slice = scan_expression(unscanned_str)
195
[end of cwltool/validate_js.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cwltool/validate_js.py b/cwltool/validate_js.py
--- a/cwltool/validate_js.py
+++ b/cwltool/validate_js.py
@@ -48,7 +48,7 @@
for possible_schema in schema.schemas:
if is_expression(tool, possible_schema):
return [(tool, source_line)]
- elif validate_ex(possible_schema, tool, strict=True, raise_ex=False,
+ elif validate_ex(possible_schema, tool, raise_ex=False,
logger=_logger_validation_warnings):
valid_schema = possible_schema
| {"golden_diff": "diff --git a/cwltool/validate_js.py b/cwltool/validate_js.py\n--- a/cwltool/validate_js.py\n+++ b/cwltool/validate_js.py\n@@ -48,7 +48,7 @@\n for possible_schema in schema.schemas:\n if is_expression(tool, possible_schema):\n return [(tool, source_line)]\n- elif validate_ex(possible_schema, tool, strict=True, raise_ex=False,\n+ elif validate_ex(possible_schema, tool, raise_ex=False,\n logger=_logger_validation_warnings):\n valid_schema = possible_schema\n", "issue": "cwltool fails to run with an input of record and an InlineJavascriptRequirement\n`test.cwl`\r\n```\r\ncwlVersion: v1.0\r\nclass: CommandLineTool\r\nrequirements:\r\n - class: InlineJavascriptRequirement\r\nbaseCommand: ['echo']\r\n\r\ninputs:\r\n - id: input\r\n type:\r\n type: record\r\n fields: []\r\n\r\noutputs: []\r\n```\r\nfails with:\r\n```\r\n/Users/th10/checkouts/cwltool/venv/bin/cwltool 1.0.20180611202326\r\nResolved 'test.cwl' to 'file:///Users/th10/checkouts/cwltool/test.cwl'\r\nI'm sorry, I couldn't load this CWL file. The error was:\r\nTraceback (most recent call last):\r\n File \"/Users/th10/checkouts/cwltool/cwltool/main.py\", line 527, in main\r\n job_script_provider, make_tool_kwds)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/load_tool.py\", line 351, in make_tool\r\n job_script_provider=job_script_provider, **kwargs)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/workflow.py\", line 51, in default_make_tool\r\n force_docker_pull, job_script_provider, **kwargs)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/command_line_tool.py\", line 224, in __init__\r\n force_docker_pull, job_script_provider, **kwargs)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/process.py\", line 568, in __init__\r\n validate_js_expressions(cast(CommentedMap, toolpath_object), self.doc_schema.names[toolpath_object[\"class\"]], validate_js_options)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/validate_js.py\", line 181, in validate_js_expressions\r\n expressions = get_expressions(tool, schema)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/validate_js.py\", line 76, in get_expressions\r\n SourceLine(tool, schema_field.name)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/validate_js.py\", line 62, in get_expressions\r\n map(lambda x: get_expressions(x[1], schema.items, SourceLine(tool, x[0])), enumerate(tool)) # type: ignore # https://github.com/python/mypy/issues/4679\r\n File \"/Users/th10/checkouts/cwltool/cwltool/validate_js.py\", line 62, in <lambda>\r\n map(lambda x: get_expressions(x[1], schema.items, SourceLine(tool, x[0])), enumerate(tool)) # type: ignore # https://github.com/python/mypy/issues/4679\r\n File \"/Users/th10/checkouts/cwltool/cwltool/validate_js.py\", line 76, in get_expressions\r\n SourceLine(tool, schema_field.name)\r\n File \"/Users/th10/checkouts/cwltool/cwltool/validate_js.py\", line 55, in get_expressions\r\n assert valid_schema is not None\r\nAssertionError\r\n```\r\nas https://github.com/common-workflow-language/cwltool/blob/886a6ac41c685f20d39e352f9c657e59f3312265/cwltool/validate_js.py#L51-L52 is parsing using strict validation and earlier parts of the code are setting the \"name\" attribute of records and arrays. This will also produce an error when someone is not strict validating a CWL script and a CWL warning would appear. 
A quick fix for this would be to get rid of the `strict=True` parameter.\n", "before_files": [{"content": "import copy\nimport itertools\nimport json\nimport logging\nfrom collections import namedtuple\nfrom typing import (Any, Dict, List, # pylint: disable=unused-import\n Optional, Text, Tuple, Union)\n\nimport avro.schema # always import after schema_salad, never before\nfrom pkg_resources import resource_stream\nfrom ruamel.yaml.comments import CommentedMap # pylint: disable=unused-import\nfrom schema_salad.sourceline import SourceLine\nfrom schema_salad.validate import (Schema, # pylint: disable=unused-import\n ValidationException, validate_ex)\nfrom six import string_types\n\nfrom .expression import scanner as scan_expression\nfrom .loghandler import _logger\nfrom .sandboxjs import code_fragment_to_js, exec_js_process\nfrom .utils import json_dumps\n\ndef is_expression(tool, schema):\n # type: (Union[CommentedMap, Any], Optional[Schema]) -> bool\n return isinstance(schema, avro.schema.EnumSchema) \\\n and schema.name == \"Expression\" and isinstance(tool, string_types)\n\nclass SuppressLog(logging.Filter):\n def __init__(self, name): # type: (Text) -> None\n name = str(name)\n super(SuppressLog, self).__init__(name)\n\n def filter(self, record):\n return False\n\n\n_logger_validation_warnings = logging.getLogger(\"cwltool.validation_warnings\")\n_logger_validation_warnings.addFilter(SuppressLog(\"cwltool.validation_warnings\"))\n\ndef get_expressions(tool, # type: Union[CommentedMap, Any]\n schema, # type: Optional[avro.schema.Schema]\n source_line=None # type: Optional[SourceLine]\n ): # type: (...) -> List[Tuple[Text, Optional[SourceLine]]]\n if is_expression(tool, schema):\n return [(tool, source_line)]\n elif isinstance(schema, avro.schema.UnionSchema):\n valid_schema = None\n\n for possible_schema in schema.schemas:\n if is_expression(tool, possible_schema):\n return [(tool, source_line)]\n elif validate_ex(possible_schema, tool, strict=True, raise_ex=False,\n logger=_logger_validation_warnings):\n valid_schema = possible_schema\n\n return get_expressions(tool, valid_schema, source_line)\n elif isinstance(schema, avro.schema.ArraySchema):\n if not isinstance(tool, list):\n return []\n\n return list(itertools.chain(*\n map(lambda x: get_expressions(x[1], schema.items, SourceLine(tool, x[0])), enumerate(tool)) # type: ignore # https://github.com/python/mypy/issues/4679\n ))\n\n elif isinstance(schema, avro.schema.RecordSchema):\n if not isinstance(tool, Dict):\n return []\n\n expression_nodes = []\n\n for schema_field in schema.fields:\n if schema_field.name in tool:\n expression_nodes.extend(get_expressions(\n tool[schema_field.name],\n schema_field.type,\n SourceLine(tool, schema_field.name)\n ))\n\n return expression_nodes\n else:\n return []\n\n\nJSHintJSReturn = namedtuple(\"jshint_return\", [\"errors\", \"globals\"])\n\ndef jshint_js(js_text, globals=None, options=None):\n # type: (Text, List[Text], Dict) -> Tuple[List[Text], List[Text]]\n if globals is None:\n globals = []\n if options is None:\n options = {\n \"includewarnings\": [\n \"W117\", # <VARIABLE> not defined\n \"W104\", \"W119\" # using ES6 features\n ],\n \"strict\": \"implied\",\n \"esversion\": 5\n }\n\n with resource_stream(__name__, \"jshint/jshint.js\") as file:\n # NOTE: we need a global variable for lodash (which jshint depends on)\n jshint_functions_text = \"var global = this;\" + file.read().decode('utf-8')\n\n with resource_stream(__name__, \"jshint/jshint_wrapper.js\") as file:\n # NOTE: we need to 
assign to ob, as the expression {validateJS: validateJS} as an expression\n # is interpreted as a block with a label `validateJS`\n jshint_functions_text += \"\\n\" + file.read().decode('utf-8') + \"\\nvar ob = {validateJS: validateJS}; ob\"\n\n returncode, stdout, stderr = exec_js_process(\n \"validateJS(%s)\" % json_dumps({\n \"code\": js_text,\n \"options\": options,\n \"globals\": globals\n }),\n timeout=30,\n context=jshint_functions_text\n )\n\n def dump_jshint_error():\n # type: () -> None\n raise RuntimeError(\"jshint failed to run succesfully\\nreturncode: %d\\nstdout: \\\"%s\\\"\\nstderr: \\\"%s\\\"\" % (\n returncode,\n stdout,\n stderr\n ))\n\n if returncode == -1:\n _logger.warn(\"jshint process timed out\")\n\n if returncode != 0:\n dump_jshint_error()\n\n try:\n jshint_json = json.loads(stdout)\n except ValueError:\n dump_jshint_error()\n\n jshint_errors = [] # type: List[Text]\n\n js_text_lines = js_text.split(\"\\n\")\n\n for jshint_error_obj in jshint_json.get(\"errors\", []):\n text = u\"JSHINT: \" + js_text_lines[jshint_error_obj[\"line\"] - 1] + \"\\n\"\n text += u\"JSHINT: \" + \" \" * (jshint_error_obj[\"character\"] - 1) + \"^\\n\"\n text += u\"JSHINT: %s: %s\" % (jshint_error_obj[\"code\"], jshint_error_obj[\"reason\"])\n jshint_errors.append(text)\n\n return JSHintJSReturn(jshint_errors, jshint_json.get(\"globals\", []))\n\n\ndef print_js_hint_messages(js_hint_messages, source_line):\n # type: (List[Text], Optional[SourceLine]) -> None\n if source_line:\n for js_hint_message in js_hint_messages:\n _logger.warn(source_line.makeError(js_hint_message))\n\ndef validate_js_expressions(tool, schema, jshint_options=None):\n # type: (CommentedMap, Schema, Dict) -> None\n\n if tool.get(\"requirements\") is None:\n return\n\n requirements = tool[\"requirements\"]\n\n default_globals = [u\"self\", u\"inputs\", u\"runtime\", u\"console\"]\n\n for i, prop in enumerate(reversed(requirements)):\n if prop[\"class\"] == \"InlineJavascriptRequirement\":\n expression_lib = prop.get(\"expressionLib\", [])\n break\n else:\n return\n\n js_globals = copy.deepcopy(default_globals)\n\n for i, expression_lib_line in enumerate(expression_lib):\n expression_lib_line_errors, expression_lib_line_globals = jshint_js(expression_lib_line, js_globals, jshint_options)\n js_globals.extend(expression_lib_line_globals)\n print_js_hint_messages(expression_lib_line_errors, SourceLine(expression_lib, i))\n\n expressions = get_expressions(tool, schema)\n\n for expression, source_line in expressions:\n unscanned_str = expression.strip()\n scan_slice = scan_expression(unscanned_str)\n\n while scan_slice:\n if unscanned_str[scan_slice[0]] == '$':\n code_fragment = unscanned_str[scan_slice[0] + 1:scan_slice[1]]\n code_fragment_js = code_fragment_to_js(code_fragment, \"\")\n expression_errors, _ = jshint_js(code_fragment_js, js_globals, jshint_options)\n print_js_hint_messages(expression_errors, source_line)\n\n unscanned_str = unscanned_str[scan_slice[1]:]\n scan_slice = scan_expression(unscanned_str)\n", "path": "cwltool/validate_js.py"}]} | 3,550 | 125 |
gh_patches_debug_41749 | rasdani/github-patches | git_diff | AnalogJ__lexicon-368 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rackspace provider not getting token from the identity API
_Update: solved! See my comment below._
I'm just starting out with lexicon and trying to get it working with Rackspace Cloud DNS. Requests are failing with a `requests.exceptions.HTTPError: 401 Client Error: Resource not found for validate token request` error. With debugging on, it appears lexicon is not requesting a token from `https://identity.api.rackspacecloud.com/v2.0/tokens` before it continues to submit a request to `https://dns.api.rackspacecloud.com/v1.0/…`.
Here's the command I'm running:
lexicon rackspace --log_level DEBUG --auth-username myusername --auth-api-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX list example.com A
And here's the output:
Arguments: Namespace(action='list', auth_account=None, auth_api_key='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', auth_token=None, auth_username='myusername', content=None, delegated=None, domain='example.com', identifier=None, log_level='DEBUG', name=None, output='TABLE', priority=None, provider_name='rackspace', sleep_time=1, ttl=None, type='A')
Starting new HTTPS connection (1): dns.api.rackspacecloud.com:443
https://dns.api.rackspacecloud.com:443 "GET /v1.0/None/domains?name=example.com HTTP/1.1" 401 0
Traceback (most recent call last):
File "/Users/q/src/dns-lexicon/env/bin/lexicon", line 10, in <module>
sys.exit(main())
File "/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/cli.py", line 117, in main
results = client.execute()
File "/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/client.py", line 64, in execute
self.provider.authenticate()
File "/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/base.py", line 69, in authenticate
return self._authenticate()
File "/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/rackspace.py", line 66, in _authenticate
'name': self.domain
File "/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/base.py", line 142, in _get
return self._request('GET', url, query_params=query_params)
File "/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/rackspace.py", line 190, in _request
response.raise_for_status()
File "/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Resource not found for validate token request for url: https://dns.api.rackspacecloud.com/v1.0/None/domains?name=example.com
Of course, this request will fail if no token has been received from the `identity.api.rackspacecloud.com` API endpoint.
I'm happy to help troubleshoot, but I thought I'd ask first if I'm using the correct parameters for the rackspace provider.
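For reference, a minimal sketch of the identity handshake the provider is expected to perform before any DNS call (endpoint and payload shape taken from `_auth_request()`/`_authenticate()` in the provider source; the credentials and variable names here are placeholders):

```python
# Sketch of the token request that should precede the DNS API call.
# The payload shape and the ['access']['token']['id'] lookup mirror the provider code;
# username/apiKey are the same placeholders used in the command above.
import requests

AUTH_URL = "https://identity.api.rackspacecloud.com/v2.0/tokens"
payload = {
    "auth": {
        "RAX-KSKEY:apiKeyCredentials": {
            "username": "myusername",
            "apiKey": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
        }
    }
}
response = requests.post(AUTH_URL, json=payload)
response.raise_for_status()
token = response.json()["access"]["token"]["id"]  # later sent as X-Auth-Token
```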
</issue>
<code>
[start of lexicon/providers/rackspace.py]
1 """Rackspace provider implementation"""
2 from __future__ import absolute_import
3 import json
4 import logging
5 import time
6
7 import requests
8 from lexicon.providers.base import Provider as BaseProvider
9
10
11 LOGGER = logging.getLogger(__name__)
12
13 NAMESERVER_DOMAINS = ['rackspacecloud.com']
14
15
16 def _async_request_completed(payload):
17 """Looks into an async response payload to see if the requested job has finished."""
18 if payload['status'] == 'COMPLETED':
19 return True
20 if payload['status'] == 'ERROR':
21 return True
22 return False
23
24
25 def provider_parser(subparser):
26 """Configure provider parser for Rackspace"""
27 subparser.add_argument(
28 "--auth-account", help="specify account number for authentication")
29 subparser.add_argument(
30 "--auth-username",
31 help="specify username for authentication. Only used if --auth-token is empty.")
32 subparser.add_argument(
33 "--auth-api-key",
34 help="specify api key for authentication. Only used if --auth-token is empty.")
35 subparser.add_argument(
36 "--auth-token",
37 help=("specify token for authentication. "
38 "If empty, the username and api key will be used to create a token."))
39 subparser.add_argument("--sleep-time", type=float, default=1,
40 help="number of seconds to wait between update requests.")
41
42
43 class Provider(BaseProvider):
44 """Provider class for Rackspace"""
45 def __init__(self, config):
46 super(Provider, self).__init__(config)
47 self.domain_id = None
48 self.api_endpoint = 'https://dns.api.rackspacecloud.com/v1.0'
49 self.auth_api_endpoint = 'https://identity.api.rackspacecloud.com/v2.0'
50 self._auth_token = None
51
52 def _authenticate(self):
53 self._auth_token = self._get_provider_option('auth_token')
54 if not self._auth_token:
55 auth_response = self._auth_request('POST', '/tokens', {
56 'auth': {
57 'RAX-KSKEY:apiKeyCredentials': {
58 'username': self._get_provider_option('auth_username'),
59 'apiKey': self._get_provider_option('auth_api_key')
60 }
61 }
62 })
63 self._auth_token = auth_response['access']['token']['id']
64
65 payload = self._get('/domains', {
66 'name': self.domain
67 })
68
69 if not payload['domains']:
70 raise Exception('No domain found')
71 if len(payload['domains']) > 1:
72 raise Exception('Too many domains found. This should not happen')
73
74 self.domain_id = payload['domains'][0]['id']
75
76 # Create record. If record already exists with the same content, do nothing'
77
78 def _create_record(self, rtype, name, content):
79 data = {'records': [
80 {'type': rtype, 'name': self._full_name(name), 'data': content}]}
81 if self._get_lexicon_option('ttl'):
82 data['records'][0]['ttl'] = self._get_lexicon_option('ttl')
83
84 try:
85 payload = self._post_and_wait(
86 '/domains/{0}/records'.format(self.domain_id), data)
87 except Exception as error:
88 if str(error).startswith('Record is a duplicate of another record'):
89 return self._update_record(None, rtype, name, content)
90 raise error
91
92 success = len(payload['records']) > 0
93 LOGGER.debug('create_record: %s', success)
94 return success
95
96 # List all records. Return an empty list if no records found
97 # type, name and content are used to filter records.
98 # If possible filter during the query, otherwise filter after response is received.
99 def _list_records(self, rtype=None, name=None, content=None):
100 params = {'per_page': 100}
101 if rtype:
102 params['type'] = rtype
103 if name:
104 params['name'] = self._full_name(name)
105 # Sending the data filter to the Rackspace DNS API results in a 503 error
106 # if content:
107 # params['data'] = content
108
109 payload = self._get(
110 '/domains/{0}/records'.format(self.domain_id), params)
111
112 records = list(payload['records'])
113 if content:
114 records = [
115 record for record in records if record['data'] == content]
116 records = [{
117 'type': record['type'],
118 'name': record['name'],
119 'ttl': record['ttl'],
120 'content': record['data'],
121 'id': record['id']
122 } for record in records]
123
124 LOGGER.debug('list_records: %s', records)
125 return records
126
127 # Create or update a record.
128 def _update_record(self, identifier, rtype=None, name=None, content=None):
129 data = {}
130 if rtype:
131 data['type'] = rtype
132 if name:
133 data['name'] = self._full_name(name)
134 if content:
135 data['data'] = content
136 if self._get_lexicon_option('ttl'):
137 data['ttl'] = self._get_lexicon_option('ttl')
138
139 if identifier is None:
140 records = self._list_records(rtype, name)
141 if not records:
142 raise Exception('Unable to find record to modify: ' + name)
143 identifier = records[0]['id']
144
145 self._put_and_wait(
146 '/domains/{0}/records/{1}'.format(self.domain_id, identifier), data)
147
148 # If it didn't raise from the http status code, then we're good
149 LOGGER.debug('update_record: %s', identifier)
150 return True
151
152 # Delete an existing record.
153 # If record does not exist, do nothing.
154 def _delete_record(self, identifier=None, rtype=None, name=None, content=None):
155 delete_record_id = []
156 if not identifier:
157 records = self._list_records(rtype, name, content)
158 delete_record_id = [record['id'] for record in records]
159 else:
160 delete_record_id.append(identifier)
161
162 LOGGER.debug('delete_records: %s', delete_record_id)
163
164 for record_id in delete_record_id:
165 self._delete_and_wait(
166 '/domains/{0}/records/{1}'.format(self.domain_id, record_id)
167 )
168
169 # If it didn't raise from the http status code, then we're good
170 success = True
171 LOGGER.debug('delete_record: %s', success)
172 return success
173
174 # Helpers
175
176 def _request(self, action='GET', url='/', data=None, query_params=None):
177 if data is None:
178 data = {}
179 if query_params is None:
180 query_params = {}
181 full_url = (self.api_endpoint +
182 '/{0}' + url).format(self._get_provider_option('auth_account'))
183 response = requests.request(action, full_url, params=query_params,
184 data=json.dumps(data),
185 headers={
186 'X-Auth-Token': self._get_provider_option('auth_token'),
187 'Content-Type': 'application/json'
188 })
189 # if the request fails for any reason, throw an error.
190 response.raise_for_status()
191 return response.json()
192
193 # Non-GET requests to the Rackspace CloudDNS API are asynchronous
194 def _request_and_wait(self, action='POST', url='/', data=None, query_params=None):
195 result = self._request(action, url, data, query_params)
196
197 sleep_time = self._get_provider_option('sleep_time') or '1'
198 sleep_time = float(sleep_time)
199
200 while not _async_request_completed(result):
201 if sleep_time:
202 time.sleep(sleep_time)
203 result = self._update_response(result)
204
205 if result['status'] == 'ERROR':
206 raise Exception(result['error']['details'])
207
208 if 'response' in result:
209 return result['response']
210 return None
211
212 def _post_and_wait(self, url='/', data=None, query_params=None):
213 return self._request_and_wait('POST', url, data, query_params)
214
215 def _put_and_wait(self, url='/', data=None, query_params=None):
216 return self._request_and_wait('PUT', url, data, query_params)
217
218 def _delete_and_wait(self, url='/', data=None, query_params=None):
219 return self._request_and_wait('DELETE', url, data, query_params)
220
221 def _update_response(self, payload):
222 response = requests.request('GET', payload['callbackUrl'], params={'showDetails': 'true'},
223 data={},
224 headers={
225 'X-Auth-Token': self._get_provider_option('auth_token'),
226 'Content-Type': 'application/json'})
227
228 # if the request fails for any reason, throw an error.
229 response.raise_for_status()
230 return response.json()
231
232 def _auth_request(self, action='GET', url='/', data=None, query_params=None):
233 if data is None:
234 data = {}
235
236 response = requests.request(action, self.auth_api_endpoint + url, params=query_params,
237 data=json.dumps(data),
238 headers={
239 'Content-Type': 'application/json'
240 })
241 # if the request fails for any reason, throw an error.
242 response.raise_for_status()
243 return response.json()
244
[end of lexicon/providers/rackspace.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lexicon/providers/rackspace.py b/lexicon/providers/rackspace.py
--- a/lexicon/providers/rackspace.py
+++ b/lexicon/providers/rackspace.py
@@ -48,6 +48,16 @@
self.api_endpoint = 'https://dns.api.rackspacecloud.com/v1.0'
self.auth_api_endpoint = 'https://identity.api.rackspacecloud.com/v2.0'
self._auth_token = None
+ self._auth_account = None
+
+ def _get_rackspace_option(self, key):
+ private_key = '_' + key
+ result = None
+ if hasattr(self, private_key):
+ result = getattr(self, private_key)
+ if result is None:
+ result = self._get_provider_option(key)
+ return result
def _authenticate(self):
self._auth_token = self._get_provider_option('auth_token')
@@ -61,6 +71,7 @@
}
})
self._auth_token = auth_response['access']['token']['id']
+ self._auth_account = auth_response['access']['token']['tenant']['id']
payload = self._get('/domains', {
'name': self.domain
@@ -178,12 +189,13 @@
data = {}
if query_params is None:
query_params = {}
+ LOGGER.debug('request tenant ID: %s', self._get_rackspace_option('auth_account'))
full_url = (self.api_endpoint +
- '/{0}' + url).format(self._get_provider_option('auth_account'))
+ '/{0}' + url).format(self._get_rackspace_option('auth_account'))
response = requests.request(action, full_url, params=query_params,
data=json.dumps(data),
headers={
- 'X-Auth-Token': self._get_provider_option('auth_token'),
+ 'X-Auth-Token': self._get_rackspace_option('auth_token'),
'Content-Type': 'application/json'
})
# if the request fails for any reason, throw an error.
@@ -194,7 +206,7 @@
def _request_and_wait(self, action='POST', url='/', data=None, query_params=None):
result = self._request(action, url, data, query_params)
- sleep_time = self._get_provider_option('sleep_time') or '1'
+ sleep_time = self._get_rackspace_option('sleep_time') or '1'
sleep_time = float(sleep_time)
while not _async_request_completed(result):
@@ -222,7 +234,7 @@
response = requests.request('GET', payload['callbackUrl'], params={'showDetails': 'true'},
data={},
headers={
- 'X-Auth-Token': self._get_provider_option('auth_token'),
+ 'X-Auth-Token': self._get_rackspace_option('auth_token'),
'Content-Type': 'application/json'})
# if the request fails for any reason, throw an error.
| {"golden_diff": "diff --git a/lexicon/providers/rackspace.py b/lexicon/providers/rackspace.py\n--- a/lexicon/providers/rackspace.py\n+++ b/lexicon/providers/rackspace.py\n@@ -48,6 +48,16 @@\n self.api_endpoint = 'https://dns.api.rackspacecloud.com/v1.0'\n self.auth_api_endpoint = 'https://identity.api.rackspacecloud.com/v2.0'\n self._auth_token = None\n+ self._auth_account = None\n+\n+ def _get_rackspace_option(self, key):\n+ private_key = '_' + key\n+ result = None\n+ if hasattr(self, private_key):\n+ result = getattr(self, private_key)\n+ if result is None:\n+ result = self._get_provider_option(key)\n+ return result\n \n def _authenticate(self):\n self._auth_token = self._get_provider_option('auth_token')\n@@ -61,6 +71,7 @@\n }\n })\n self._auth_token = auth_response['access']['token']['id']\n+ self._auth_account = auth_response['access']['token']['tenant']['id']\n \n payload = self._get('/domains', {\n 'name': self.domain\n@@ -178,12 +189,13 @@\n data = {}\n if query_params is None:\n query_params = {}\n+ LOGGER.debug('request tenant ID: %s', self._get_rackspace_option('auth_account'))\n full_url = (self.api_endpoint +\n- '/{0}' + url).format(self._get_provider_option('auth_account'))\n+ '/{0}' + url).format(self._get_rackspace_option('auth_account'))\n response = requests.request(action, full_url, params=query_params,\n data=json.dumps(data),\n headers={\n- 'X-Auth-Token': self._get_provider_option('auth_token'),\n+ 'X-Auth-Token': self._get_rackspace_option('auth_token'),\n 'Content-Type': 'application/json'\n })\n # if the request fails for any reason, throw an error.\n@@ -194,7 +206,7 @@\n def _request_and_wait(self, action='POST', url='/', data=None, query_params=None):\n result = self._request(action, url, data, query_params)\n \n- sleep_time = self._get_provider_option('sleep_time') or '1'\n+ sleep_time = self._get_rackspace_option('sleep_time') or '1'\n sleep_time = float(sleep_time)\n \n while not _async_request_completed(result):\n@@ -222,7 +234,7 @@\n response = requests.request('GET', payload['callbackUrl'], params={'showDetails': 'true'},\n data={},\n headers={\n- 'X-Auth-Token': self._get_provider_option('auth_token'),\n+ 'X-Auth-Token': self._get_rackspace_option('auth_token'),\n 'Content-Type': 'application/json'})\n \n # if the request fails for any reason, throw an error.\n", "issue": "Rackspace provider not getting token from the identity API\n_Update: solved! See my comment below._\r\n\r\nI'm just starting out with lexicon and trying to get it working with Rackspace Cloud DNS. Requests are failing with a `requests.exceptions.HTTPError: 401 Client Error: Resource not found for validate token request` error. 
With debugging on, it appears lexicon is not requesting a token from `https://identity.api.rackspacecloud.com/v2.0/tokens` before it continues to submit a reqeuset to `https://dns.api.rackspacecloud.com/v1.0/\u2026`.\r\n\r\nHere's the command I'm running:\r\n\r\n lexicon rackspace --log_level DEBUG --auth-username myusername --auth-api-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX list example.com A\r\n\r\nAnd here's the output:\r\n\r\n Arguments: Namespace(action='list', auth_account=None, auth_api_key='XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX', auth_token=None, auth_username='myusername', content=None, delegated=None, domain='example.com', identifier=None, log_level='DEBUG', name=None, output='TABLE', priority=None, provider_name='rackspace', sleep_time=1, ttl=None, type='A')\r\n Starting new HTTPS connection (1): dns.api.rackspacecloud.com:443\r\n https://dns.api.rackspacecloud.com:443 \"GET /v1.0/None/domains?name=example.com HTTP/1.1\" 401 0\r\n Traceback (most recent call last):\r\n File \"/Users/q/src/dns-lexicon/env/bin/lexicon\", line 10, in <module>\r\n sys.exit(main())\r\n File \"/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/cli.py\", line 117, in main\r\n results = client.execute()\r\n File \"/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/client.py\", line 64, in execute\r\n self.provider.authenticate()\r\n File \"/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/base.py\", line 69, in authenticate\r\n return self._authenticate()\r\n File \"/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/rackspace.py\", line 66, in _authenticate\r\n 'name': self.domain\r\n File \"/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/base.py\", line 142, in _get\r\n return self._request('GET', url, query_params=query_params)\r\n File \"/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/lexicon/providers/rackspace.py\", line 190, in _request\r\n response.raise_for_status()\r\n File \"/Users/q/src/dns-lexicon/env/lib/python3.7/site-packages/requests/models.py\", line 940, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\n requests.exceptions.HTTPError: 401 Client Error: Resource not found for validate token request for url: https://dns.api.rackspacecloud.com/v1.0/None/domains?name=example.com\r\n\r\nOf course, this request will fail if is no token has been received from the `identity.api.rackspacecloud.com` API endpoint.\r\n\r\nI'm happy to help troubleshoot, but I thought I'd ask first if I'm using the correct parameters for the rackspace provider. \n", "before_files": [{"content": "\"\"\"Rackspace provider implementation\"\"\"\nfrom __future__ import absolute_import\nimport json\nimport logging\nimport time\n\nimport requests\nfrom lexicon.providers.base import Provider as BaseProvider\n\n\nLOGGER = logging.getLogger(__name__)\n\nNAMESERVER_DOMAINS = ['rackspacecloud.com']\n\n\ndef _async_request_completed(payload):\n \"\"\"Looks into an async response payload to see if the requested job has finished.\"\"\"\n if payload['status'] == 'COMPLETED':\n return True\n if payload['status'] == 'ERROR':\n return True\n return False\n\n\ndef provider_parser(subparser):\n \"\"\"Configure provider parser for Rackspace\"\"\"\n subparser.add_argument(\n \"--auth-account\", help=\"specify account number for authentication\")\n subparser.add_argument(\n \"--auth-username\",\n help=\"specify username for authentication. 
Only used if --auth-token is empty.\")\n subparser.add_argument(\n \"--auth-api-key\",\n help=\"specify api key for authentication. Only used if --auth-token is empty.\")\n subparser.add_argument(\n \"--auth-token\",\n help=(\"specify token for authentication. \"\n \"If empty, the username and api key will be used to create a token.\"))\n subparser.add_argument(\"--sleep-time\", type=float, default=1,\n help=\"number of seconds to wait between update requests.\")\n\n\nclass Provider(BaseProvider):\n \"\"\"Provider class for Rackspace\"\"\"\n def __init__(self, config):\n super(Provider, self).__init__(config)\n self.domain_id = None\n self.api_endpoint = 'https://dns.api.rackspacecloud.com/v1.0'\n self.auth_api_endpoint = 'https://identity.api.rackspacecloud.com/v2.0'\n self._auth_token = None\n\n def _authenticate(self):\n self._auth_token = self._get_provider_option('auth_token')\n if not self._auth_token:\n auth_response = self._auth_request('POST', '/tokens', {\n 'auth': {\n 'RAX-KSKEY:apiKeyCredentials': {\n 'username': self._get_provider_option('auth_username'),\n 'apiKey': self._get_provider_option('auth_api_key')\n }\n }\n })\n self._auth_token = auth_response['access']['token']['id']\n\n payload = self._get('/domains', {\n 'name': self.domain\n })\n\n if not payload['domains']:\n raise Exception('No domain found')\n if len(payload['domains']) > 1:\n raise Exception('Too many domains found. This should not happen')\n\n self.domain_id = payload['domains'][0]['id']\n\n # Create record. If record already exists with the same content, do nothing'\n\n def _create_record(self, rtype, name, content):\n data = {'records': [\n {'type': rtype, 'name': self._full_name(name), 'data': content}]}\n if self._get_lexicon_option('ttl'):\n data['records'][0]['ttl'] = self._get_lexicon_option('ttl')\n\n try:\n payload = self._post_and_wait(\n '/domains/{0}/records'.format(self.domain_id), data)\n except Exception as error:\n if str(error).startswith('Record is a duplicate of another record'):\n return self._update_record(None, rtype, name, content)\n raise error\n\n success = len(payload['records']) > 0\n LOGGER.debug('create_record: %s', success)\n return success\n\n # List all records. 
Return an empty list if no records found\n # type, name and content are used to filter records.\n # If possible filter during the query, otherwise filter after response is received.\n def _list_records(self, rtype=None, name=None, content=None):\n params = {'per_page': 100}\n if rtype:\n params['type'] = rtype\n if name:\n params['name'] = self._full_name(name)\n # Sending the data filter to the Rackspace DNS API results in a 503 error\n # if content:\n # params['data'] = content\n\n payload = self._get(\n '/domains/{0}/records'.format(self.domain_id), params)\n\n records = list(payload['records'])\n if content:\n records = [\n record for record in records if record['data'] == content]\n records = [{\n 'type': record['type'],\n 'name': record['name'],\n 'ttl': record['ttl'],\n 'content': record['data'],\n 'id': record['id']\n } for record in records]\n\n LOGGER.debug('list_records: %s', records)\n return records\n\n # Create or update a record.\n def _update_record(self, identifier, rtype=None, name=None, content=None):\n data = {}\n if rtype:\n data['type'] = rtype\n if name:\n data['name'] = self._full_name(name)\n if content:\n data['data'] = content\n if self._get_lexicon_option('ttl'):\n data['ttl'] = self._get_lexicon_option('ttl')\n\n if identifier is None:\n records = self._list_records(rtype, name)\n if not records:\n raise Exception('Unable to find record to modify: ' + name)\n identifier = records[0]['id']\n\n self._put_and_wait(\n '/domains/{0}/records/{1}'.format(self.domain_id, identifier), data)\n\n # If it didn't raise from the http status code, then we're good\n LOGGER.debug('update_record: %s', identifier)\n return True\n\n # Delete an existing record.\n # If record does not exist, do nothing.\n def _delete_record(self, identifier=None, rtype=None, name=None, content=None):\n delete_record_id = []\n if not identifier:\n records = self._list_records(rtype, name, content)\n delete_record_id = [record['id'] for record in records]\n else:\n delete_record_id.append(identifier)\n\n LOGGER.debug('delete_records: %s', delete_record_id)\n\n for record_id in delete_record_id:\n self._delete_and_wait(\n '/domains/{0}/records/{1}'.format(self.domain_id, record_id)\n )\n\n # If it didn't raise from the http status code, then we're good\n success = True\n LOGGER.debug('delete_record: %s', success)\n return success\n\n # Helpers\n\n def _request(self, action='GET', url='/', data=None, query_params=None):\n if data is None:\n data = {}\n if query_params is None:\n query_params = {}\n full_url = (self.api_endpoint +\n '/{0}' + url).format(self._get_provider_option('auth_account'))\n response = requests.request(action, full_url, params=query_params,\n data=json.dumps(data),\n headers={\n 'X-Auth-Token': self._get_provider_option('auth_token'),\n 'Content-Type': 'application/json'\n })\n # if the request fails for any reason, throw an error.\n response.raise_for_status()\n return response.json()\n\n # Non-GET requests to the Rackspace CloudDNS API are asynchronous\n def _request_and_wait(self, action='POST', url='/', data=None, query_params=None):\n result = self._request(action, url, data, query_params)\n\n sleep_time = self._get_provider_option('sleep_time') or '1'\n sleep_time = float(sleep_time)\n\n while not _async_request_completed(result):\n if sleep_time:\n time.sleep(sleep_time)\n result = self._update_response(result)\n\n if result['status'] == 'ERROR':\n raise Exception(result['error']['details'])\n\n if 'response' in result:\n return result['response']\n return None\n\n def 
_post_and_wait(self, url='/', data=None, query_params=None):\n return self._request_and_wait('POST', url, data, query_params)\n\n def _put_and_wait(self, url='/', data=None, query_params=None):\n return self._request_and_wait('PUT', url, data, query_params)\n\n def _delete_and_wait(self, url='/', data=None, query_params=None):\n return self._request_and_wait('DELETE', url, data, query_params)\n\n def _update_response(self, payload):\n response = requests.request('GET', payload['callbackUrl'], params={'showDetails': 'true'},\n data={},\n headers={\n 'X-Auth-Token': self._get_provider_option('auth_token'),\n 'Content-Type': 'application/json'})\n\n # if the request fails for any reason, throw an error.\n response.raise_for_status()\n return response.json()\n\n def _auth_request(self, action='GET', url='/', data=None, query_params=None):\n if data is None:\n data = {}\n\n response = requests.request(action, self.auth_api_endpoint + url, params=query_params,\n data=json.dumps(data),\n headers={\n 'Content-Type': 'application/json'\n })\n # if the request fails for any reason, throw an error.\n response.raise_for_status()\n return response.json()\n", "path": "lexicon/providers/rackspace.py"}]} | 3,927 | 673 |
gh_patches_debug_5114 | rasdani/github-patches | git_diff | pre-commit__pre-commit-177 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Stashed changes lost if hook fails
I've run into this particular (in my eyes, critical) bug.
If I want to do a partial commit, e.g. I have 2 files but I only add 1 file to the staging area and the staged file will cause a hook to fail, I loose the changes in the 2nd file because pre-commit fails to reroll the patch it stashed before running.
Here's my terminal log and the steps to reproduce:
## Version
$ pre-commit -V
pre-commit 0.3.0
## Commands to reproduce
```
$ cat unstaged.py
"""I am unstaged"""
$ echo "'''I am unstaged, but I have changes'''" > unstaged.py
$ echo "x = 'This is the loooooooooooooooooooooooooooooooooooongest liiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiine eveeeeeeeeer'" > foo.py
$ git status
On branch master
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: unstaged.py
modified: foo.py
no changes added to commit (use "git add" and/or "git commit -a")
$ git add foo.py
$ git commit -m "Adding a long line"
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/k/.pre-commit/patch1412683352.
Flake8...............................................Failed
hookid: flake8
foo.py:1:80: E501 line too long (112 > 79 characters)
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
An unexpected error has occurred: CalledProcessError: Command: [u'git', u'apply', u'/home/k/.pre-commit/patch1412683352']
Return code: 128
Expected return code: 0
Output: (u'', u'fatal: unrecognized input\n')
Check the log at ~/.pre-commit/pre-commit.log
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: foo.py
$ echo "x = 'This is a shorter line, its better'" > foo.py
$ git status
On branch master
Changes to be committed:
(use "git reset HEAD <file>..." to unstage)
modified: foo.py
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
modified: foo.py
$ git add foo.py
$ git commit -m "Fixed the error I got from the flake8 hook"
Flake8...............................................Passed
[master 78568e8] Fixed the error I got from the flake8 hook
1 file changed, 1 insertion(+), 1 deletion(-)
$ git status
On branch master
nothing to commit, working directory clean
$ cat unstaged.py
"""I am unstaged"""
```
## Log
```
$ cat ~/.pre-commit/pre-commit.log
Traceback (most recent call last):
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/error_handler.py", line 34, in error_handler
yield
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/main.py", line 108, in main
return run(runner, args)
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/commands/run.py", line 151, in run
return _run_hooks(runner, args, write=write, environ=environ)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/staged_files_only.py", line 55, in staged_files_only
cmd_runner.run(['git', 'apply', patch_filename])
File "/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/prefixed_command_runner.py", line 82, in run
returncode, replaced_cmd, retcode, output=(stdout, stderr),
CalledProcessError: Command: [u'git', u'apply', u'/home/k/.pre-commit/patch1412683352']
Return code: 128
Expected return code: 0
Output: (u'', u'fatal: unrecognized input\n')
```
## .pre-commit-config.yaml
```
$ cat .pre-commit-config.yaml
- repo: [email protected]:pre-commit/pre-commit-hooks
sha: 6343700aa063fe30acc319d2dc84353a35a3d6d0
hooks:
- id: flake8
args: ['--ignore=E712,F821']
```
</issue>
<code>
[start of pre_commit/staged_files_only.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import logging
6 import time
7
8 from pre_commit.util import CalledProcessError
9
10
11 logger = logging.getLogger('pre_commit')
12
13
14 @contextlib.contextmanager
15 def staged_files_only(cmd_runner):
16 """Clear any unstaged changes from the git working directory inside this
17 context.
18
19 Args:
20 cmd_runner - PrefixedCommandRunner
21 """
22 # Determine if there are unstaged files
23 retcode, diff_stdout_binary, _ = cmd_runner.run(
24 ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],
25 retcode=None,
26 encoding=None,
27 )
28 if retcode and diff_stdout_binary.strip():
29 patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))
30 logger.warning('Unstaged files detected.')
31 logger.info(
32 'Stashing unstaged files to {0}.'.format(patch_filename),
33 )
34 # Save the current unstaged changes as a patch
35 with io.open(patch_filename, 'wb') as patch_file:
36 patch_file.write(diff_stdout_binary)
37
38 # Clear the working directory of unstaged changes
39 cmd_runner.run(['git', 'checkout', '--', '.'])
40 try:
41 yield
42 finally:
43 # Try to apply the patch we saved
44 try:
45 cmd_runner.run(['git', 'apply', patch_filename])
46 except CalledProcessError:
47 logger.warning(
48 'Stashed changes conflicted with hook auto-fixes... '
49 'Rolling back fixes...'
50 )
51 # We failed to apply the patch, presumably due to fixes made
52 # by hooks.
53 # Roll back the changes made by hooks.
54 cmd_runner.run(['git', 'checkout', '--', '.'])
55 cmd_runner.run(['git', 'apply', patch_filename])
56 logger.info('Restored changes from {0}.'.format(patch_filename))
57 else:
58 # There weren't any staged files so we don't need to do anything
59 # special
60 yield
61
[end of pre_commit/staged_files_only.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -21,7 +21,10 @@
"""
# Determine if there are unstaged files
retcode, diff_stdout_binary, _ = cmd_runner.run(
- ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],
+ [
+ 'git', 'diff', '--ignore-submodules', '--binary', '--exit-code',
+ '--no-color',
+ ],
retcode=None,
encoding=None,
)
| {"golden_diff": "diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py\n--- a/pre_commit/staged_files_only.py\n+++ b/pre_commit/staged_files_only.py\n@@ -21,7 +21,10 @@\n \"\"\"\n # Determine if there are unstaged files\n retcode, diff_stdout_binary, _ = cmd_runner.run(\n- ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],\n+ [\n+ 'git', 'diff', '--ignore-submodules', '--binary', '--exit-code',\n+ '--no-color',\n+ ],\n retcode=None,\n encoding=None,\n )\n", "issue": "Stashed changes lost if hook fails\nI've run into this particular (in my eyes, critical) bug.\nIf I want to do a partial commit, e.g. I have 2 files but I only add 1 file to the staging area and the staged file will cause a hook to fail, I loose the changes in the 2nd file because pre-commit fails to reroll the patch it stashed before running.\n\nHere's my terminal log and the steps to reproduce:\n## Version\n\n$ pre-commit -V\npre-commit 0.3.0\n## Commands to reproduce\n\n```\n$ cat unstaged.py\n\"\"\"I am unstaged\"\"\"\n$ echo \"'''I am unstaged, but I have changes'''\" > unstaged.py\n$ echo \"x = 'This is the loooooooooooooooooooooooooooooooooooongest liiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiine eveeeeeeeeer'\" > foo.py \n$ git status\nOn branch master\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git checkout -- <file>...\" to discard changes in working directory)\n\n modified: unstaged.py\n modified: foo.py\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\n$ git add foo.py \n$ git commit -m \"Adding a long line\"\n[WARNING] Unstaged files detected.\n[INFO] Stashing unstaged files to /home/k/.pre-commit/patch1412683352.\nFlake8...............................................Failed\nhookid: flake8\n\nfoo.py:1:80: E501 line too long (112 > 79 characters)\n\n[WARNING] Stashed changes conflicted with hook auto-fixes... 
Rolling back fixes...\nAn unexpected error has occurred: CalledProcessError: Command: [u'git', u'apply', u'/home/k/.pre-commit/patch1412683352']\nReturn code: 128\nExpected return code: 0\nOutput: (u'', u'fatal: unrecognized input\\n')\n\nCheck the log at ~/.pre-commit/pre-commit.log\n$ git status\nOn branch master\nChanges to be committed:\n (use \"git reset HEAD <file>...\" to unstage)\n\n modified: foo.py\n\n$ echo \"x = 'This is a shorter line, its better'\" > foo.py\n$ git status\nOn branch master\nChanges to be committed:\n (use \"git reset HEAD <file>...\" to unstage)\n\n modified: foo.py\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git checkout -- <file>...\" to discard changes in working directory)\n\n modified: foo.py\n\n$ git add foo.py \n$ git commit -m \"Fixed the error I got from the flake8 hook\"\nFlake8...............................................Passed\n[master 78568e8] Fixed the error I got from the flake8 hook\n 1 file changed, 1 insertion(+), 1 deletion(-)\n$ git status\nOn branch master\nnothing to commit, working directory clean\n$ cat unstaged.py\n\"\"\"I am unstaged\"\"\"\n```\n## Log\n\n```\n$ cat ~/.pre-commit/pre-commit.log\nTraceback (most recent call last):\n File \"/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/error_handler.py\", line 34, in error_handler\n yield\n File \"/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/main.py\", line 108, in main\n return run(runner, args)\n File \"/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/commands/run.py\", line 151, in run\n return _run_hooks(runner, args, write=write, environ=environ)\n File \"/usr/lib/python2.7/contextlib.py\", line 24, in __exit__\n self.gen.next()\n File \"/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/staged_files_only.py\", line 55, in staged_files_only\n cmd_runner.run(['git', 'apply', patch_filename])\n File \"/home/k/.virtualenvs/pre-commit-test/local/lib/python2.7/site-packages/pre_commit/prefixed_command_runner.py\", line 82, in run\n returncode, replaced_cmd, retcode, output=(stdout, stderr),\nCalledProcessError: Command: [u'git', u'apply', u'/home/k/.pre-commit/patch1412683352']\nReturn code: 128\nExpected return code: 0\nOutput: (u'', u'fatal: unrecognized input\\n')\n```\n## .pre-commit-config.yaml\n\n```\n$ cat .pre-commit-config.yaml \n- repo: [email protected]:pre-commit/pre-commit-hooks\n sha: 6343700aa063fe30acc319d2dc84353a35a3d6d0\n hooks:\n - id: flake8\n args: ['--ignore=E712,F821']\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport time\n\nfrom pre_commit.util import CalledProcessError\n\n\nlogger = logging.getLogger('pre_commit')\n\n\[email protected]\ndef staged_files_only(cmd_runner):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n\n Args:\n cmd_runner - PrefixedCommandRunner\n \"\"\"\n # Determine if there are unstaged files\n retcode, diff_stdout_binary, _ = cmd_runner.run(\n ['git', 'diff', '--ignore-submodules', '--binary', '--exit-code'],\n retcode=None,\n encoding=None,\n )\n if retcode and diff_stdout_binary.strip():\n patch_filename = cmd_runner.path('patch{0}'.format(int(time.time())))\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {0}.'.format(patch_filename),\n )\n # Save the current unstaged 
changes as a patch\n with io.open(patch_filename, 'wb') as patch_file:\n patch_file.write(diff_stdout_binary)\n\n # Clear the working directory of unstaged changes\n cmd_runner.run(['git', 'checkout', '--', '.'])\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n cmd_runner.run(['git', 'apply', patch_filename])\n except CalledProcessError:\n logger.warning(\n 'Stashed changes conflicted with hook auto-fixes... '\n 'Rolling back fixes...'\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_runner.run(['git', 'checkout', '--', '.'])\n cmd_runner.run(['git', 'apply', patch_filename])\n logger.info('Restored changes from {0}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}]} | 2,243 | 142 |
gh_patches_debug_1628 | rasdani/github-patches | git_diff | apache__tvm-12178 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exercise TVM under minimal configuration in CI
We have seen a couple bugs due to microTVM being presumed-ON in config.cmake. Namely, you get python errors importing TVM right now when USE_MICRO is OFF. We should have a regression test that verifies basic functionality with everything (or nearly everything) OFF.
Context: apache/tvm#9617
And another micro-related issue of the same kind, which i don't have handy right now.
cc @gigiblender
</issue>
<code>
[start of ci/jenkins/generate.py]
1 #!/usr/bin/env python3
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 import jinja2
19 import argparse
20 import difflib
21 import re
22 import datetime
23 import textwrap
24
25 from pathlib import Path
26
27
28 REPO_ROOT = Path(__file__).resolve().parent.parent.parent
29 JENKINSFILE_TEMPLATE = REPO_ROOT / "ci" / "jenkins" / "Jenkinsfile.j2"
30 JENKINSFILE = REPO_ROOT / "Jenkinsfile"
31
32
33 data = {
34 "images": [
35 {
36 "name": "ci_arm",
37 "platform": "ARM",
38 },
39 {
40 "name": "ci_cpu",
41 "platform": "CPU",
42 },
43 {
44 "name": "ci_gpu",
45 "platform": "CPU",
46 },
47 {
48 "name": "ci_hexagon",
49 "platform": "CPU",
50 },
51 {
52 "name": "ci_i386",
53 "platform": "CPU",
54 },
55 {
56 "name": "ci_lint",
57 "platform": "CPU",
58 },
59 {
60 "name": "ci_cortexm",
61 "platform": "CPU",
62 },
63 {
64 "name": "ci_wasm",
65 "platform": "CPU",
66 },
67 ]
68 }
69
70
71 def lines_without_generated_tag(content):
72 return [
73 line for line in content.splitlines(keepends=True) if not line.startswith("// Generated at")
74 ]
75
76
77 if __name__ == "__main__":
78 help = "Regenerate Jenkinsfile from template"
79 parser = argparse.ArgumentParser(description=help)
80 parser.add_argument("--check", action="store_true", help="just verify the output didn't change")
81 args = parser.parse_args()
82
83 with open(JENKINSFILE) as f:
84 content = f.read()
85
86 data["generated_time"] = datetime.datetime.now().isoformat()
87
88 environment = jinja2.Environment(
89 loader=jinja2.FileSystemLoader(REPO_ROOT),
90 undefined=jinja2.StrictUndefined,
91 lstrip_blocks=True,
92 trim_blocks=True,
93 keep_trailing_newline=True,
94 )
95 template = environment.get_template(str(JENKINSFILE_TEMPLATE.relative_to(REPO_ROOT)))
96 new_content = template.render(**data)
97
98 diff = "".join(
99 difflib.unified_diff(
100 lines_without_generated_tag(content), lines_without_generated_tag(new_content)
101 )
102 )
103 if args.check:
104 if not diff:
105 print("Success, the newly generated Jenkinsfile matched the one on disk")
106 exit(0)
107 else:
108 print(
109 textwrap.dedent(
110 """
111 Newly generated Jenkinsfile did not match the one on disk! If you have made
112 edits to the Jenkinsfile, move them to 'jenkins/Jenkinsfile.j2' and
113 regenerate the Jenkinsfile from the template with
114
115 python3 -m pip install -r jenkins/requirements.txt
116 python3 jenkins/generate.py
117
118 Diffed changes:
119 """
120 ).strip()
121 )
122 print(diff)
123 exit(1)
124 else:
125 with open(JENKINSFILE, "w") as f:
126 f.write(new_content)
127 if not diff:
128 print(f"Wrote output to {JENKINSFILE.relative_to(REPO_ROOT)}, no changes made")
129 else:
130 print(f"Wrote output to {JENKINSFILE.relative_to(REPO_ROOT)}, changes:")
131 print(diff)
132
[end of ci/jenkins/generate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ci/jenkins/generate.py b/ci/jenkins/generate.py
--- a/ci/jenkins/generate.py
+++ b/ci/jenkins/generate.py
@@ -40,6 +40,10 @@
"name": "ci_cpu",
"platform": "CPU",
},
+ {
+ "name": "ci_minimal",
+ "platform": "CPU",
+ },
{
"name": "ci_gpu",
"platform": "CPU",
| {"golden_diff": "diff --git a/ci/jenkins/generate.py b/ci/jenkins/generate.py\n--- a/ci/jenkins/generate.py\n+++ b/ci/jenkins/generate.py\n@@ -40,6 +40,10 @@\n \"name\": \"ci_cpu\",\n \"platform\": \"CPU\",\n },\n+ {\n+ \"name\": \"ci_minimal\",\n+ \"platform\": \"CPU\",\n+ },\n {\n \"name\": \"ci_gpu\",\n \"platform\": \"CPU\",\n", "issue": "Exercise TVM under minimal configuration in CI\nWe have seen a couple bugs due to microTVM being presumed-ON in config.cmake. Namely, you get python errors importing TVM right now when USE_MICRO is OFF. We should have a regression test that verifies basic functionality with everything (or nearly everything) OFF.\r\n\r\nContext: apache/tvm#9617\r\nAnd another micro-related issue of the same kind, which i don't have handy right now.\r\n\r\ncc @gigiblender \n", "before_files": [{"content": "#!/usr/bin/env python3\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\nimport jinja2\nimport argparse\nimport difflib\nimport re\nimport datetime\nimport textwrap\n\nfrom pathlib import Path\n\n\nREPO_ROOT = Path(__file__).resolve().parent.parent.parent\nJENKINSFILE_TEMPLATE = REPO_ROOT / \"ci\" / \"jenkins\" / \"Jenkinsfile.j2\"\nJENKINSFILE = REPO_ROOT / \"Jenkinsfile\"\n\n\ndata = {\n \"images\": [\n {\n \"name\": \"ci_arm\",\n \"platform\": \"ARM\",\n },\n {\n \"name\": \"ci_cpu\",\n \"platform\": \"CPU\",\n },\n {\n \"name\": \"ci_gpu\",\n \"platform\": \"CPU\",\n },\n {\n \"name\": \"ci_hexagon\",\n \"platform\": \"CPU\",\n },\n {\n \"name\": \"ci_i386\",\n \"platform\": \"CPU\",\n },\n {\n \"name\": \"ci_lint\",\n \"platform\": \"CPU\",\n },\n {\n \"name\": \"ci_cortexm\",\n \"platform\": \"CPU\",\n },\n {\n \"name\": \"ci_wasm\",\n \"platform\": \"CPU\",\n },\n ]\n}\n\n\ndef lines_without_generated_tag(content):\n return [\n line for line in content.splitlines(keepends=True) if not line.startswith(\"// Generated at\")\n ]\n\n\nif __name__ == \"__main__\":\n help = \"Regenerate Jenkinsfile from template\"\n parser = argparse.ArgumentParser(description=help)\n parser.add_argument(\"--check\", action=\"store_true\", help=\"just verify the output didn't change\")\n args = parser.parse_args()\n\n with open(JENKINSFILE) as f:\n content = f.read()\n\n data[\"generated_time\"] = datetime.datetime.now().isoformat()\n\n environment = jinja2.Environment(\n loader=jinja2.FileSystemLoader(REPO_ROOT),\n undefined=jinja2.StrictUndefined,\n lstrip_blocks=True,\n trim_blocks=True,\n keep_trailing_newline=True,\n )\n template = environment.get_template(str(JENKINSFILE_TEMPLATE.relative_to(REPO_ROOT)))\n new_content = template.render(**data)\n\n diff = \"\".join(\n difflib.unified_diff(\n lines_without_generated_tag(content), lines_without_generated_tag(new_content)\n )\n )\n if args.check:\n if not diff:\n print(\"Success, the newly 
generated Jenkinsfile matched the one on disk\")\n exit(0)\n else:\n print(\n textwrap.dedent(\n \"\"\"\n Newly generated Jenkinsfile did not match the one on disk! If you have made\n edits to the Jenkinsfile, move them to 'jenkins/Jenkinsfile.j2' and\n regenerate the Jenkinsfile from the template with\n\n python3 -m pip install -r jenkins/requirements.txt\n python3 jenkins/generate.py\n\n Diffed changes:\n \"\"\"\n ).strip()\n )\n print(diff)\n exit(1)\n else:\n with open(JENKINSFILE, \"w\") as f:\n f.write(new_content)\n if not diff:\n print(f\"Wrote output to {JENKINSFILE.relative_to(REPO_ROOT)}, no changes made\")\n else:\n print(f\"Wrote output to {JENKINSFILE.relative_to(REPO_ROOT)}, changes:\")\n print(diff)\n", "path": "ci/jenkins/generate.py"}]} | 1,833 | 111 |
gh_patches_debug_19824 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-899 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make private project setting more clear
Currently the checkbox is kind of hidden and it is not instantly clear what is meant by "Access to Project". The setting should be more obvious and it's consequences more clear.
Opin has another way of showing the setting which we could/should adopt
</issue>
<code>
[start of meinberlin/apps/dashboard2/forms.py]
1 from django import forms
2 from django.contrib.auth import get_user_model
3 from django.forms import inlineformset_factory
4 from django.utils.translation import ugettext_lazy as _
5
6 from adhocracy4.categories import models as category_models
7 from adhocracy4.forms.fields import DateTimeField
8 from adhocracy4.maps import models as map_models
9 from adhocracy4.modules import models as module_models
10 from adhocracy4.phases import models as phase_models
11 from adhocracy4.projects import models as project_models
12 from meinberlin.apps.maps.widgets import MapChoosePolygonWithPresetWidget
13
14 from . import signals
15 from .components.forms import ModuleDashboardForm
16 from .components.forms import ModuleDashboardFormSet
17 from .components.forms import ProjectDashboardForm
18
19 User = get_user_model()
20
21
22 class ProjectCreateForm(forms.ModelForm):
23
24 class Meta:
25 model = project_models.Project
26 fields = ['name', 'description', 'image', 'image_copyright']
27
28 def __init__(self, organisation, creator,
29 *args, **kwargs):
30 super().__init__(*args, **kwargs)
31 self.organisation = organisation
32 self.creator = creator
33
34 def save(self, commit=True):
35 project = super().save(commit=False)
36
37 project.organisation = self.organisation
38 project.creator = self.creator
39
40 if commit:
41 project.save()
42 if hasattr(self, 'save_m2m'):
43 self.save_m2m()
44
45 return project
46
47
48 class ProjectBasicForm(ProjectDashboardForm):
49
50 class Meta:
51 model = project_models.Project
52 fields = ['name', 'description', 'image', 'image_copyright',
53 'tile_image', 'tile_image_copyright',
54 'is_archived', 'is_public']
55 required_for_project_publish = ['name', 'description']
56
57 def __init__(self, *args, **kwargs):
58 super().__init__(*args, **kwargs)
59 instance = kwargs.get('instance', None)
60 self._project_was_archived = instance and instance.is_archived
61
62 def save(self, commit=True):
63 project = super().save(commit)
64 if not self._project_was_archived and project.is_archived:
65 signals.project_archived.send(sender=None, project=project)
66 return project
67
68
69 class ProjectInformationForm(ProjectDashboardForm):
70
71 class Meta:
72 model = project_models.Project
73 fields = ['information']
74 required_for_project_publish = ['information']
75
76
77 class ProjectResultForm(ProjectDashboardForm):
78
79 class Meta:
80 model = project_models.Project
81 fields = ['result']
82 required_for_project_publish = []
83
84
85 class ModuleBasicForm(ModuleDashboardForm):
86
87 class Meta:
88 model = module_models.Module
89 fields = ['name', 'description']
90 required_for_project_publish = '__all__'
91
92
93 class PhaseForm(forms.ModelForm):
94 end_date = DateTimeField(
95 time_format='%H:%M',
96 required=False,
97 require_all_fields=False,
98 label=(_('End date'), _('End time'))
99 )
100 start_date = DateTimeField(
101 time_format='%H:%M',
102 required=False,
103 require_all_fields=False,
104 label=(_('Start date'), _('Start time'))
105 )
106
107 class Meta:
108 model = phase_models.Phase
109 fields = ['name', 'description', 'start_date', 'end_date',
110 'type', # required for get_phase_name in the tpl
111 ]
112 required_for_project_publish = ['name', 'description', 'start_date',
113 'end_date']
114 widgets = {
115 'type': forms.HiddenInput(),
116 'weight': forms.HiddenInput()
117 }
118
119
120 PhaseFormSet = inlineformset_factory(module_models.Module,
121 phase_models.Phase,
122 form=PhaseForm,
123 formset=ModuleDashboardFormSet,
124 extra=0,
125 can_delete=False,
126 )
127
128
129 class AreaSettingsForm(ModuleDashboardForm):
130
131 def __init__(self, *args, **kwargs):
132 self.module = kwargs['instance']
133 kwargs['instance'] = self.module.settings_instance
134 super().__init__(*args, **kwargs)
135
136 def save(self, commit=True):
137 super().save(commit)
138 return self.module
139
140 def get_project(self):
141 return self.module.project
142
143 class Meta:
144 model = map_models.AreaSettings
145 fields = ['polygon']
146 required_for_project_publish = ['polygon']
147 # widgets = map_models.AreaSettings.widgets()
148 widgets = {'polygon': MapChoosePolygonWithPresetWidget}
149
150
151 class CategoryForm(forms.ModelForm):
152 name = forms.CharField(widget=forms.TextInput(attrs={
153 'placeholder': _('Category')}
154 ))
155
156 @property
157 def media(self):
158 media = super().media
159 media.add_js(['js/formset.js'])
160 return media
161
162 class Meta:
163 model = category_models.Category
164 fields = ['name']
165
166
167 CategoryFormSet = inlineformset_factory(module_models.Module,
168 category_models.Category,
169 form=CategoryForm,
170 formset=ModuleDashboardFormSet,
171 extra=0,
172 )
173
[end of meinberlin/apps/dashboard2/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/dashboard2/forms.py b/meinberlin/apps/dashboard2/forms.py
--- a/meinberlin/apps/dashboard2/forms.py
+++ b/meinberlin/apps/dashboard2/forms.py
@@ -1,5 +1,6 @@
from django import forms
from django.contrib.auth import get_user_model
+from django.forms import RadioSelect
from django.forms import inlineformset_factory
from django.utils.translation import ugettext_lazy as _
@@ -53,6 +54,14 @@
'tile_image', 'tile_image_copyright',
'is_archived', 'is_public']
required_for_project_publish = ['name', 'description']
+ widgets = {
+ 'is_public': RadioSelect(
+ choices=[
+ (True, _('All users can participate (public).')),
+ (False, _('Only invited users can participate (private).'))
+ ]
+ ),
+ }
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
| {"golden_diff": "diff --git a/meinberlin/apps/dashboard2/forms.py b/meinberlin/apps/dashboard2/forms.py\n--- a/meinberlin/apps/dashboard2/forms.py\n+++ b/meinberlin/apps/dashboard2/forms.py\n@@ -1,5 +1,6 @@\n from django import forms\n from django.contrib.auth import get_user_model\n+from django.forms import RadioSelect\n from django.forms import inlineformset_factory\n from django.utils.translation import ugettext_lazy as _\n \n@@ -53,6 +54,14 @@\n 'tile_image', 'tile_image_copyright',\n 'is_archived', 'is_public']\n required_for_project_publish = ['name', 'description']\n+ widgets = {\n+ 'is_public': RadioSelect(\n+ choices=[\n+ (True, _('All users can participate (public).')),\n+ (False, _('Only invited users can participate (private).'))\n+ ]\n+ ),\n+ }\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n", "issue": "Make private project setting more clear\nCurrently the checkbox is kind of hidden and it is not instantly clear what is meant by \"Access to Project\". The setting should be more obvious and it's consequences more clear.\r\n\r\nOpin has another way of showing the setting which we could/should adopt\n", "before_files": [{"content": "from django import forms\nfrom django.contrib.auth import get_user_model\nfrom django.forms import inlineformset_factory\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4.categories import models as category_models\nfrom adhocracy4.forms.fields import DateTimeField\nfrom adhocracy4.maps import models as map_models\nfrom adhocracy4.modules import models as module_models\nfrom adhocracy4.phases import models as phase_models\nfrom adhocracy4.projects import models as project_models\nfrom meinberlin.apps.maps.widgets import MapChoosePolygonWithPresetWidget\n\nfrom . 
import signals\nfrom .components.forms import ModuleDashboardForm\nfrom .components.forms import ModuleDashboardFormSet\nfrom .components.forms import ProjectDashboardForm\n\nUser = get_user_model()\n\n\nclass ProjectCreateForm(forms.ModelForm):\n\n class Meta:\n model = project_models.Project\n fields = ['name', 'description', 'image', 'image_copyright']\n\n def __init__(self, organisation, creator,\n *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.organisation = organisation\n self.creator = creator\n\n def save(self, commit=True):\n project = super().save(commit=False)\n\n project.organisation = self.organisation\n project.creator = self.creator\n\n if commit:\n project.save()\n if hasattr(self, 'save_m2m'):\n self.save_m2m()\n\n return project\n\n\nclass ProjectBasicForm(ProjectDashboardForm):\n\n class Meta:\n model = project_models.Project\n fields = ['name', 'description', 'image', 'image_copyright',\n 'tile_image', 'tile_image_copyright',\n 'is_archived', 'is_public']\n required_for_project_publish = ['name', 'description']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n instance = kwargs.get('instance', None)\n self._project_was_archived = instance and instance.is_archived\n\n def save(self, commit=True):\n project = super().save(commit)\n if not self._project_was_archived and project.is_archived:\n signals.project_archived.send(sender=None, project=project)\n return project\n\n\nclass ProjectInformationForm(ProjectDashboardForm):\n\n class Meta:\n model = project_models.Project\n fields = ['information']\n required_for_project_publish = ['information']\n\n\nclass ProjectResultForm(ProjectDashboardForm):\n\n class Meta:\n model = project_models.Project\n fields = ['result']\n required_for_project_publish = []\n\n\nclass ModuleBasicForm(ModuleDashboardForm):\n\n class Meta:\n model = module_models.Module\n fields = ['name', 'description']\n required_for_project_publish = '__all__'\n\n\nclass PhaseForm(forms.ModelForm):\n end_date = DateTimeField(\n time_format='%H:%M',\n required=False,\n require_all_fields=False,\n label=(_('End date'), _('End time'))\n )\n start_date = DateTimeField(\n time_format='%H:%M',\n required=False,\n require_all_fields=False,\n label=(_('Start date'), _('Start time'))\n )\n\n class Meta:\n model = phase_models.Phase\n fields = ['name', 'description', 'start_date', 'end_date',\n 'type', # required for get_phase_name in the tpl\n ]\n required_for_project_publish = ['name', 'description', 'start_date',\n 'end_date']\n widgets = {\n 'type': forms.HiddenInput(),\n 'weight': forms.HiddenInput()\n }\n\n\nPhaseFormSet = inlineformset_factory(module_models.Module,\n phase_models.Phase,\n form=PhaseForm,\n formset=ModuleDashboardFormSet,\n extra=0,\n can_delete=False,\n )\n\n\nclass AreaSettingsForm(ModuleDashboardForm):\n\n def __init__(self, *args, **kwargs):\n self.module = kwargs['instance']\n kwargs['instance'] = self.module.settings_instance\n super().__init__(*args, **kwargs)\n\n def save(self, commit=True):\n super().save(commit)\n return self.module\n\n def get_project(self):\n return self.module.project\n\n class Meta:\n model = map_models.AreaSettings\n fields = ['polygon']\n required_for_project_publish = ['polygon']\n # widgets = map_models.AreaSettings.widgets()\n widgets = {'polygon': MapChoosePolygonWithPresetWidget}\n\n\nclass CategoryForm(forms.ModelForm):\n name = forms.CharField(widget=forms.TextInput(attrs={\n 'placeholder': _('Category')}\n ))\n\n @property\n def media(self):\n media = 
super().media\n media.add_js(['js/formset.js'])\n return media\n\n class Meta:\n model = category_models.Category\n fields = ['name']\n\n\nCategoryFormSet = inlineformset_factory(module_models.Module,\n category_models.Category,\n form=CategoryForm,\n formset=ModuleDashboardFormSet,\n extra=0,\n )\n", "path": "meinberlin/apps/dashboard2/forms.py"}]} | 2,079 | 226 |
gh_patches_debug_26793 | rasdani/github-patches | git_diff | pulp__pulpcore-5196 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Overwriting existing packages in backend storage can lead to caching issues
If an existing package is re-added to pulp, the default behavior will overwrite the existing file in backing storage. This is typically fine.
- If using Azure Blobstore, the timestamp of the blob is updated (Last-Modified time and ETag).
- Conversely, some CDN's (notably Azure Front Door) use Last-Modified Time as a signal that a file in origin has updated.
- This can lead to poor cache behavior, and in some cases, incomplete downloads as the CDN attempts to resolve disparate content.
- If we set `AZURE_OVERWRITE_FILES` to `false` this partially mitigates the issue (Last-Modified/ETag are unmodified). However, this results in duplicate copies written to storage (with a suffix to differentiate from the original).
- We should have an option that does "nothing" if the uploaded file already exists (don't overwrite, and don't write a new copy).
</issue>
<code>
[start of pulpcore/plugin/viewsets/content.py]
1 from drf_spectacular.utils import extend_schema
2
3 from django.db import DatabaseError
4 from django.db.utils import IntegrityError
5
6 from pulpcore.app import tasks
7 from pulpcore.plugin.serializers import (
8 ArtifactSerializer,
9 AsyncOperationResponseSerializer,
10 )
11 from pulpcore.plugin.models import Artifact, PulpTemporaryFile
12 from pulpcore.plugin.tasking import dispatch
13 from pulpcore.plugin.viewsets import (
14 ContentViewSet,
15 OperationPostponedResponse,
16 )
17
18
19 class DefaultDeferredContextMixin:
20 """A mixin that provides a method for retrieving the default deferred context."""
21
22 def get_deferred_context(self, request):
23 """
24 Supply context for deferred validation.
25
26 When overwriting this method, it must return a dict, that is JSON serializable by
27 and does _not_ contain 'request' as a key.
28 """
29 return {}
30
31
32 class NoArtifactContentViewSet(DefaultDeferredContextMixin, ContentViewSet):
33 """A ViewSet for content creation that does not require a file to be uploaded."""
34
35 @extend_schema(
36 description="Trigger an asynchronous task to create content,"
37 "optionally create new repository version.",
38 responses={202: AsyncOperationResponseSerializer},
39 )
40 def create(self, request):
41 """Create a content unit."""
42 serializer = self.get_serializer(data=request.data)
43 serializer.is_valid(raise_exception=True)
44
45 exclusive_resources = [
46 item for item in (serializer.validated_data.get(key) for key in ("repository",)) if item
47 ]
48
49 task = dispatch(
50 tasks.base.general_create,
51 exclusive_resources=exclusive_resources,
52 args=(self.queryset.model._meta.app_label, serializer.__class__.__name__),
53 kwargs={
54 "data": {k: v for k, v in request.data.items()},
55 "context": self.get_deferred_context(request),
56 },
57 )
58 return OperationPostponedResponse(task, request)
59
60
61 class NoArtifactContentUploadViewSet(DefaultDeferredContextMixin, ContentViewSet):
62 """A ViewSet for uploads that do not require to store an uploaded content as an Artifact."""
63
64 @extend_schema(
65 description="Trigger an asynchronous task to create content,"
66 "optionally create new repository version.",
67 responses={202: AsyncOperationResponseSerializer},
68 )
69 def create(self, request):
70 """Create a content unit."""
71 serializer = self.get_serializer(data=request.data)
72 serializer.is_valid(raise_exception=True)
73
74 task_payload = {k: v for k, v in request.data.items()}
75
76 file_content = task_payload.pop("file", None)
77 temp_file = PulpTemporaryFile.init_and_validate(file_content)
78 temp_file.save()
79
80 exclusive_resources = [
81 item for item in (serializer.validated_data.get(key) for key in ("repository",)) if item
82 ]
83
84 app_label = self.queryset.model._meta.app_label
85 context = self.get_deferred_context(request)
86 context["pulp_temp_file_pk"] = str(temp_file.pk)
87 task = dispatch(
88 tasks.base.general_create,
89 exclusive_resources=exclusive_resources,
90 args=(app_label, serializer.__class__.__name__),
91 kwargs={"data": task_payload, "context": context},
92 )
93 return OperationPostponedResponse(task, request)
94
95
96 class SingleArtifactContentUploadViewSet(DefaultDeferredContextMixin, ContentViewSet):
97 """A ViewSet which can be used to store an uploaded content as an Artifact."""
98
99 @extend_schema(
100 description="Trigger an asynchronous task to create content,"
101 "optionally create new repository version.",
102 responses={202: AsyncOperationResponseSerializer},
103 )
104 def create(self, request):
105 """Create a content unit."""
106 serializer = self.get_serializer(data=request.data)
107 serializer.is_valid(raise_exception=True)
108
109 task_payload = self.init_content_data(serializer, request)
110
111 exclusive_resources = [
112 item
113 for item in (serializer.validated_data.get(key) for key in ("upload", "repository"))
114 if item
115 ]
116
117 app_label = self.queryset.model._meta.app_label
118 task = dispatch(
119 tasks.base.general_create,
120 exclusive_resources=exclusive_resources,
121 args=(app_label, serializer.__class__.__name__),
122 kwargs={
123 "data": task_payload,
124 "context": self.get_deferred_context(request),
125 },
126 )
127 return OperationPostponedResponse(task, request)
128
129 def init_content_data(self, serializer, request):
130 """Initialize the reference to an Artifact along with relevant task's payload data."""
131 task_payload = {k: v for k, v in request.data.items()}
132 if "file" in task_payload:
133 # in the upload code path make sure, the artifact exists, and the 'file'
134 # parameter is replaced by 'artifact'
135 artifact = Artifact.init_and_validate(task_payload.pop("file"))
136 try:
137 artifact.save()
138 except IntegrityError:
139 # if artifact already exists, let's use it
140 try:
141 artifact = Artifact.objects.get(
142 sha256=artifact.sha256, pulp_domain=request.pulp_domain
143 )
144 artifact.touch()
145 except (Artifact.DoesNotExist, DatabaseError):
146 # the artifact has since been removed from when we first attempted to save it
147 artifact.save()
148
149 task_payload["artifact"] = ArtifactSerializer(
150 artifact, context={"request": request}
151 ).data["pulp_href"]
152 elif "artifact" in serializer.validated_data:
153 serializer.validated_data["artifact"].touch()
154 # In case of a provided upload object, there is no artifact to touch yet.
155
156 return task_payload
157
[end of pulpcore/plugin/viewsets/content.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/plugin/viewsets/content.py b/pulpcore/plugin/viewsets/content.py
--- a/pulpcore/plugin/viewsets/content.py
+++ b/pulpcore/plugin/viewsets/content.py
@@ -133,18 +133,20 @@
# in the upload code path make sure, the artifact exists, and the 'file'
# parameter is replaced by 'artifact'
artifact = Artifact.init_and_validate(task_payload.pop("file"))
+ # if artifact already exists, let's use it
try:
- artifact.save()
- except IntegrityError:
- # if artifact already exists, let's use it
+ artifact = Artifact.objects.get(
+ sha256=artifact.sha256, pulp_domain=request.pulp_domain
+ )
+ artifact.touch()
+ except (Artifact.DoesNotExist, DatabaseError):
try:
+ artifact.save()
+ except IntegrityError:
artifact = Artifact.objects.get(
sha256=artifact.sha256, pulp_domain=request.pulp_domain
)
artifact.touch()
- except (Artifact.DoesNotExist, DatabaseError):
- # the artifact has since been removed from when we first attempted to save it
- artifact.save()
task_payload["artifact"] = ArtifactSerializer(
artifact, context={"request": request}
| {"golden_diff": "diff --git a/pulpcore/plugin/viewsets/content.py b/pulpcore/plugin/viewsets/content.py\n--- a/pulpcore/plugin/viewsets/content.py\n+++ b/pulpcore/plugin/viewsets/content.py\n@@ -133,18 +133,20 @@\n # in the upload code path make sure, the artifact exists, and the 'file'\n # parameter is replaced by 'artifact'\n artifact = Artifact.init_and_validate(task_payload.pop(\"file\"))\n+ # if artifact already exists, let's use it\n try:\n- artifact.save()\n- except IntegrityError:\n- # if artifact already exists, let's use it\n+ artifact = Artifact.objects.get(\n+ sha256=artifact.sha256, pulp_domain=request.pulp_domain\n+ )\n+ artifact.touch()\n+ except (Artifact.DoesNotExist, DatabaseError):\n try:\n+ artifact.save()\n+ except IntegrityError:\n artifact = Artifact.objects.get(\n sha256=artifact.sha256, pulp_domain=request.pulp_domain\n )\n artifact.touch()\n- except (Artifact.DoesNotExist, DatabaseError):\n- # the artifact has since been removed from when we first attempted to save it\n- artifact.save()\n \n task_payload[\"artifact\"] = ArtifactSerializer(\n artifact, context={\"request\": request}\n", "issue": "Overwriting existing packages in backend storage can lead to caching issues\nIf an existing package is re-added to pulp, the default behavior will overwrite the existing file in backing storage. This is typically fine.\r\n- If using Azure Blobstore, the timestamp of the blob is updated (Last-Modified time and ETag).\r\n- Conversely, some CDN's (notably Azure Front Door) use Last-Modified Time as a signal that a file in origin has updated.\r\n- This can lead to poor cache behavior, and in some cases, incomplete downloads as the CDN attempts to resolve disparate content.\r\n- If we set `AZURE_OVERWRITE_FILES` to `false` this partially mitigates the issue (Last-Modified/ETag are unmodified). 
However, this results in duplicate copies written to storage (with a suffix to differentiate from the original).\r\n- We should have an option that does \"nothing\" if the uploaded file already exists (don't overwrite, and don't write a new copy).\n", "before_files": [{"content": "from drf_spectacular.utils import extend_schema\n\nfrom django.db import DatabaseError\nfrom django.db.utils import IntegrityError\n\nfrom pulpcore.app import tasks\nfrom pulpcore.plugin.serializers import (\n ArtifactSerializer,\n AsyncOperationResponseSerializer,\n)\nfrom pulpcore.plugin.models import Artifact, PulpTemporaryFile\nfrom pulpcore.plugin.tasking import dispatch\nfrom pulpcore.plugin.viewsets import (\n ContentViewSet,\n OperationPostponedResponse,\n)\n\n\nclass DefaultDeferredContextMixin:\n \"\"\"A mixin that provides a method for retrieving the default deferred context.\"\"\"\n\n def get_deferred_context(self, request):\n \"\"\"\n Supply context for deferred validation.\n\n When overwriting this method, it must return a dict, that is JSON serializable by\n and does _not_ contain 'request' as a key.\n \"\"\"\n return {}\n\n\nclass NoArtifactContentViewSet(DefaultDeferredContextMixin, ContentViewSet):\n \"\"\"A ViewSet for content creation that does not require a file to be uploaded.\"\"\"\n\n @extend_schema(\n description=\"Trigger an asynchronous task to create content,\"\n \"optionally create new repository version.\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request):\n \"\"\"Create a content unit.\"\"\"\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n\n exclusive_resources = [\n item for item in (serializer.validated_data.get(key) for key in (\"repository\",)) if item\n ]\n\n task = dispatch(\n tasks.base.general_create,\n exclusive_resources=exclusive_resources,\n args=(self.queryset.model._meta.app_label, serializer.__class__.__name__),\n kwargs={\n \"data\": {k: v for k, v in request.data.items()},\n \"context\": self.get_deferred_context(request),\n },\n )\n return OperationPostponedResponse(task, request)\n\n\nclass NoArtifactContentUploadViewSet(DefaultDeferredContextMixin, ContentViewSet):\n \"\"\"A ViewSet for uploads that do not require to store an uploaded content as an Artifact.\"\"\"\n\n @extend_schema(\n description=\"Trigger an asynchronous task to create content,\"\n \"optionally create new repository version.\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request):\n \"\"\"Create a content unit.\"\"\"\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n\n task_payload = {k: v for k, v in request.data.items()}\n\n file_content = task_payload.pop(\"file\", None)\n temp_file = PulpTemporaryFile.init_and_validate(file_content)\n temp_file.save()\n\n exclusive_resources = [\n item for item in (serializer.validated_data.get(key) for key in (\"repository\",)) if item\n ]\n\n app_label = self.queryset.model._meta.app_label\n context = self.get_deferred_context(request)\n context[\"pulp_temp_file_pk\"] = str(temp_file.pk)\n task = dispatch(\n tasks.base.general_create,\n exclusive_resources=exclusive_resources,\n args=(app_label, serializer.__class__.__name__),\n kwargs={\"data\": task_payload, \"context\": context},\n )\n return OperationPostponedResponse(task, request)\n\n\nclass SingleArtifactContentUploadViewSet(DefaultDeferredContextMixin, ContentViewSet):\n \"\"\"A ViewSet which can be used to store an uploaded content 
as an Artifact.\"\"\"\n\n @extend_schema(\n description=\"Trigger an asynchronous task to create content,\"\n \"optionally create new repository version.\",\n responses={202: AsyncOperationResponseSerializer},\n )\n def create(self, request):\n \"\"\"Create a content unit.\"\"\"\n serializer = self.get_serializer(data=request.data)\n serializer.is_valid(raise_exception=True)\n\n task_payload = self.init_content_data(serializer, request)\n\n exclusive_resources = [\n item\n for item in (serializer.validated_data.get(key) for key in (\"upload\", \"repository\"))\n if item\n ]\n\n app_label = self.queryset.model._meta.app_label\n task = dispatch(\n tasks.base.general_create,\n exclusive_resources=exclusive_resources,\n args=(app_label, serializer.__class__.__name__),\n kwargs={\n \"data\": task_payload,\n \"context\": self.get_deferred_context(request),\n },\n )\n return OperationPostponedResponse(task, request)\n\n def init_content_data(self, serializer, request):\n \"\"\"Initialize the reference to an Artifact along with relevant task's payload data.\"\"\"\n task_payload = {k: v for k, v in request.data.items()}\n if \"file\" in task_payload:\n # in the upload code path make sure, the artifact exists, and the 'file'\n # parameter is replaced by 'artifact'\n artifact = Artifact.init_and_validate(task_payload.pop(\"file\"))\n try:\n artifact.save()\n except IntegrityError:\n # if artifact already exists, let's use it\n try:\n artifact = Artifact.objects.get(\n sha256=artifact.sha256, pulp_domain=request.pulp_domain\n )\n artifact.touch()\n except (Artifact.DoesNotExist, DatabaseError):\n # the artifact has since been removed from when we first attempted to save it\n artifact.save()\n\n task_payload[\"artifact\"] = ArtifactSerializer(\n artifact, context={\"request\": request}\n ).data[\"pulp_href\"]\n elif \"artifact\" in serializer.validated_data:\n serializer.validated_data[\"artifact\"].touch()\n # In case of a provided upload object, there is no artifact to touch yet.\n\n return task_payload\n", "path": "pulpcore/plugin/viewsets/content.py"}]} | 2,285 | 286 |
gh_patches_debug_977 | rasdani/github-patches | git_diff | medtagger__MedTagger-442 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Getting random scan for non-existing task key results in 500
## Current Behavior
Providing non existing task key results in 500 HTTP code.
## Expected Behavior
Backend should handle this situation appropriate and return 404 HTTP code.
## Steps to Reproduce the Problem
1. Perform a GET `scans/random?task=<task_key>` and provide non existing key.
</issue>
<code>
[start of backend/medtagger/repositories/tasks.py]
1 """Module responsible for definition of TaskRepository."""
2 from typing import List
3
4 from medtagger.database import db_session
5 from medtagger.database.models import Task, LabelTag, Dataset
6 from medtagger.exceptions import InternalErrorException
7
8
9 def get_all_tasks(include_disabled: bool = False) -> List[Task]:
10 """Fetch all tasks from database ordered by key."""
11 query = Task.query
12 if not include_disabled:
13 query = query.filter(~Task.disabled)
14 return query.order_by(Task.key).all()
15
16
17 def get_task_by_key(key: str) -> Task:
18 """Fetch Task from database.
19
20 :param key: key for a Task
21 :return: Task object
22 """
23 with db_session() as session:
24 task = session.query(Task).filter(Task.key == key).one()
25 return task
26
27
28 def add_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:
29 """Add new Task to the database.
30
31 :param key: key that will identify such Task
32 :param name: name that will be used in the Use Interface for such Task
33 :param image_path: path to the image that represents such Task (used in User Interface)
34 :param datasets_keys: Keys of Datasets that Task takes Scans from
35 :param tags: Label Tags that will be created and assigned to Task
36 :return: Task object
37 """
38 with db_session() as session:
39 task = Task(key, name, image_path)
40 datasets = Dataset.query.filter(Dataset.key.in_(datasets_keys)).all() # type: ignore
41 task.datasets = datasets
42 task.available_tags = tags
43 session.add(task)
44 return task
45
46
47 def assign_label_tag(tag: LabelTag, task_key: str) -> None:
48 """Assign existing Label Tag to Task.
49
50 :param tag: tag that should be assigned to Task
51 :param task_key: key that will identify such Task
52 """
53 with db_session():
54 task = Task.query.filter(Task.key == task_key).one()
55 task.available_tags.append(tag)
56 task.save()
57
58
59 def unassign_label_tag(tag: LabelTag, task_key: str) -> None:
60 """Unassign Label Tag from Task.
61
62 :param tag: tag that should be unassigned from Task
63 :param task_key: key that will identify such Task
64 """
65 with db_session():
66 task = Task.query.filter(Task.key == task_key).one()
67 task.available_tags.remove(tag)
68 task.save()
69
70
71 def update(task_key: str, name: str = None, image_path: str = None, datasets_keys: List[str] = None) -> Task:
72 """Update Datasets where this Task will be available.
73
74 :param task_key: key that will identify such Task
75 :param name: (optional) new name for such Task
76 :param image_path: (optional) new path to the image which shows on the UI
77 :param datasets_keys: (optional) keys of Datasets which should have this Task
78 """
79 with db_session():
80 task = Task.query.filter(Task.key == task_key).one()
81 if name:
82 task.name = name
83 if image_path:
84 task.image_path = image_path
85 if datasets_keys:
86 datasets = Dataset.query.filter(Dataset.key.in_(datasets_keys)).all() # type: ignore
87 task.datasets = datasets
88 return task
89
90
91 def disable(task_key: str) -> None:
92 """Disable existing Task."""
93 disabling_query = Task.query.filter(Task.key == task_key)
94 updated = disabling_query.update({'disabled': True}, synchronize_session='fetch')
95 if not updated:
96 raise InternalErrorException(f'Task "{task_key}" was not disabled due to unknown database error.')
97
98
99 def enable(task_key: str) -> None:
100 """Enable existing Task."""
101 enabling_query = Task.query.filter(Task.key == task_key)
102 updated = enabling_query.update({'disabled': False}, synchronize_session='fetch')
103 if not updated:
104 raise InternalErrorException(f'Task "{task_key}" was not enabled due to unknown database error.')
105
[end of backend/medtagger/repositories/tasks.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/medtagger/repositories/tasks.py b/backend/medtagger/repositories/tasks.py
--- a/backend/medtagger/repositories/tasks.py
+++ b/backend/medtagger/repositories/tasks.py
@@ -21,7 +21,7 @@
:return: Task object
"""
with db_session() as session:
- task = session.query(Task).filter(Task.key == key).one()
+ task = session.query(Task).filter(Task.key == key).first()
return task
| {"golden_diff": "diff --git a/backend/medtagger/repositories/tasks.py b/backend/medtagger/repositories/tasks.py\n--- a/backend/medtagger/repositories/tasks.py\n+++ b/backend/medtagger/repositories/tasks.py\n@@ -21,7 +21,7 @@\n :return: Task object\n \"\"\"\n with db_session() as session:\n- task = session.query(Task).filter(Task.key == key).one()\n+ task = session.query(Task).filter(Task.key == key).first()\n return task\n", "issue": "Getting random scan for non-existing task key results in 500\n## Current Behavior\r\n\r\nProviding non existing task key results in 500 HTTP code.\r\n\r\n## Expected Behavior\r\n\r\nBackend should handle this situation appropriate and return 404 HTTP code.\r\n\r\n## Steps to Reproduce the Problem\r\n\r\n 1. Perform a GET `scans/random?task=<task_key>` and provide non existing key.\r\n\n", "before_files": [{"content": "\"\"\"Module responsible for definition of TaskRepository.\"\"\"\nfrom typing import List\n\nfrom medtagger.database import db_session\nfrom medtagger.database.models import Task, LabelTag, Dataset\nfrom medtagger.exceptions import InternalErrorException\n\n\ndef get_all_tasks(include_disabled: bool = False) -> List[Task]:\n \"\"\"Fetch all tasks from database ordered by key.\"\"\"\n query = Task.query\n if not include_disabled:\n query = query.filter(~Task.disabled)\n return query.order_by(Task.key).all()\n\n\ndef get_task_by_key(key: str) -> Task:\n \"\"\"Fetch Task from database.\n\n :param key: key for a Task\n :return: Task object\n \"\"\"\n with db_session() as session:\n task = session.query(Task).filter(Task.key == key).one()\n return task\n\n\ndef add_task(key: str, name: str, image_path: str, datasets_keys: List[str], tags: List[LabelTag]) -> Task:\n \"\"\"Add new Task to the database.\n\n :param key: key that will identify such Task\n :param name: name that will be used in the Use Interface for such Task\n :param image_path: path to the image that represents such Task (used in User Interface)\n :param datasets_keys: Keys of Datasets that Task takes Scans from\n :param tags: Label Tags that will be created and assigned to Task\n :return: Task object\n \"\"\"\n with db_session() as session:\n task = Task(key, name, image_path)\n datasets = Dataset.query.filter(Dataset.key.in_(datasets_keys)).all() # type: ignore\n task.datasets = datasets\n task.available_tags = tags\n session.add(task)\n return task\n\n\ndef assign_label_tag(tag: LabelTag, task_key: str) -> None:\n \"\"\"Assign existing Label Tag to Task.\n\n :param tag: tag that should be assigned to Task\n :param task_key: key that will identify such Task\n \"\"\"\n with db_session():\n task = Task.query.filter(Task.key == task_key).one()\n task.available_tags.append(tag)\n task.save()\n\n\ndef unassign_label_tag(tag: LabelTag, task_key: str) -> None:\n \"\"\"Unassign Label Tag from Task.\n\n :param tag: tag that should be unassigned from Task\n :param task_key: key that will identify such Task\n \"\"\"\n with db_session():\n task = Task.query.filter(Task.key == task_key).one()\n task.available_tags.remove(tag)\n task.save()\n\n\ndef update(task_key: str, name: str = None, image_path: str = None, datasets_keys: List[str] = None) -> Task:\n \"\"\"Update Datasets where this Task will be available.\n\n :param task_key: key that will identify such Task\n :param name: (optional) new name for such Task\n :param image_path: (optional) new path to the image which shows on the UI\n :param datasets_keys: (optional) keys of Datasets which should have this Task\n \"\"\"\n with db_session():\n 
task = Task.query.filter(Task.key == task_key).one()\n if name:\n task.name = name\n if image_path:\n task.image_path = image_path\n if datasets_keys:\n datasets = Dataset.query.filter(Dataset.key.in_(datasets_keys)).all() # type: ignore\n task.datasets = datasets\n return task\n\n\ndef disable(task_key: str) -> None:\n \"\"\"Disable existing Task.\"\"\"\n disabling_query = Task.query.filter(Task.key == task_key)\n updated = disabling_query.update({'disabled': True}, synchronize_session='fetch')\n if not updated:\n raise InternalErrorException(f'Task \"{task_key}\" was not disabled due to unknown database error.')\n\n\ndef enable(task_key: str) -> None:\n \"\"\"Enable existing Task.\"\"\"\n enabling_query = Task.query.filter(Task.key == task_key)\n updated = enabling_query.update({'disabled': False}, synchronize_session='fetch')\n if not updated:\n raise InternalErrorException(f'Task \"{task_key}\" was not enabled due to unknown database error.')\n", "path": "backend/medtagger/repositories/tasks.py"}]} | 1,724 | 113 |
gh_patches_debug_40038 | rasdani/github-patches | git_diff | dbt-labs__dbt-core-4223 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove py36 support
Python 3.6 end of life is December 23. We're planning to release major version v1.0 in December. For clarity and ease of support, let's remove official support for python 3.6 ahead of v1.0.
I think this looks like:
- Turning off py36 tests
- Bumping `python_requires` for `dbt-core`
Removal of related cruft (e.g. backported `dataclasses`) can come later
</issue>
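A sketch of the `setup.py` end of that checklist (a fragment for illustration — the classifier list and messages are assumptions, not the exact dbt values):

```python
#!/usr/bin/env python
# Fragment: refuse to install on Python 3.6 and advertise 3.7+ only.
import sys

from setuptools import setup

if sys.version_info < (3, 7):
    print("Error: this package no longer supports Python 3.6.")
    print("Please upgrade to Python 3.7 or higher.")
    sys.exit(1)

setup(
    name="dbt-core",
    # ... other metadata unchanged ...
    classifiers=[
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
    ],
    python_requires=">=3.7",
)
```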
<code>
[start of plugins/postgres/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 PSYCOPG2_MESSAGE = '''
23 No package name override was set.
24 Using 'psycopg2-binary' package to satisfy 'psycopg2'
25
26 If you experience segmentation faults, silent crashes, or installation errors,
27 consider retrying with the 'DBT_PSYCOPG2_NAME' environment variable set to
28 'psycopg2'. It may require a compiler toolchain and development libraries!
29 '''.strip()
30
31
32 def _dbt_psycopg2_name():
33 # if the user chose something, use that
34 package_name = os.getenv('DBT_PSYCOPG2_NAME', '')
35 if package_name:
36 return package_name
37
38 # default to psycopg2-binary for all OSes/versions
39 print(PSYCOPG2_MESSAGE)
40 return 'psycopg2-binary'
41
42
43 package_name = "dbt-postgres"
44 package_version = "1.0.0b2"
45 description = """The postgres adpter plugin for dbt (data build tool)"""
46
47 this_directory = os.path.abspath(os.path.dirname(__file__))
48 with open(os.path.join(this_directory, 'README.md')) as f:
49 long_description = f.read()
50
51 DBT_PSYCOPG2_NAME = _dbt_psycopg2_name()
52
53 setup(
54 name=package_name,
55 version=package_version,
56 description=description,
57 long_description=long_description,
58 long_description_content_type='text/markdown',
59 author="dbt Labs",
60 author_email="[email protected]",
61 url="https://github.com/dbt-labs/dbt-core",
62 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
63 package_data={
64 'dbt': [
65 'include/postgres/dbt_project.yml',
66 'include/postgres/sample_profiles.yml',
67 'include/postgres/macros/*.sql',
68 'include/postgres/macros/**/*.sql',
69 ]
70 },
71 install_requires=[
72 'dbt-core=={}'.format(package_version),
73 '{}~=2.8'.format(DBT_PSYCOPG2_NAME),
74 ],
75 zip_safe=False,
76 classifiers=[
77 'Development Status :: 5 - Production/Stable',
78
79 'License :: OSI Approved :: Apache Software License',
80
81 'Operating System :: Microsoft :: Windows',
82 'Operating System :: MacOS :: MacOS X',
83 'Operating System :: POSIX :: Linux',
84
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Programming Language :: Python :: 3.8',
88 'Programming Language :: Python :: 3.9',
89 ],
90 python_requires=">=3.6.2",
91 )
92
[end of plugins/postgres/setup.py]
[start of setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5
6 if 'sdist' not in sys.argv:
7 print('')
8 print('As of v1.0.0, `pip install dbt` is no longer supported.')
9 print('Instead, please use either:')
10 print(' - `pip install dbt-core`, for core functionality')
11 print(' - `pip install dbt-<adapter>`, to use dbt with your database, platform, or query engine')
12 print('See full list: https://docs.getdbt.com/docs/available-adapters')
13 print('')
14 sys.exit(1)
15
16
17 if sys.version_info < (3, 6):
18 print('Error: dbt does not support this version of Python.')
19 print('Please upgrade to Python 3.6 or higher.')
20 sys.exit(1)
21
22
23 from setuptools import setup
24 try:
25 from setuptools import find_namespace_packages
26 except ImportError:
27 # the user has a downlevel version of setuptools.
28 print('Error: dbt requires setuptools v40.1.0 or higher.')
29 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
30 'and try again')
31 sys.exit(1)
32
33 this_directory = os.path.abspath(os.path.dirname(__file__))
34 with open(os.path.join(this_directory, 'README.md')) as f:
35 long_description = f.read()
36
37
38 package_name = "dbt"
39 package_version = "1.0.0b2"
40 description = """With dbt, data analysts and engineers can build analytics \
41 the way engineers build applications."""
42
43
44 setup(
45 name=package_name,
46 version=package_version,
47
48 description=description,
49 long_description=long_description,
50 long_description_content_type='text/markdown',
51
52 author="dbt Labs",
53 author_email="[email protected]",
54 url="https://github.com/dbt-labs/dbt-core",
55 zip_safe=False,
56 classifiers=[
57 'Development Status :: 7 - Inactive',
58
59 'License :: OSI Approved :: Apache Software License',
60
61 'Operating System :: Microsoft :: Windows',
62 'Operating System :: MacOS :: MacOS X',
63 'Operating System :: POSIX :: Linux',
64
65 'Programming Language :: Python :: 3.6',
66 'Programming Language :: Python :: 3.7',
67 'Programming Language :: Python :: 3.8',
68 'Programming Language :: Python :: 3.9',
69 ],
70 python_requires=">=3.6.2",
71 )
72
[end of setup.py]
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 6):
6 print('Error: dbt does not support this version of Python.')
7 print('Please upgrade to Python 3.6 or higher.')
8 sys.exit(1)
9
10
11 from setuptools import setup
12 try:
13 from setuptools import find_namespace_packages
14 except ImportError:
15 # the user has a downlevel version of setuptools.
16 print('Error: dbt requires setuptools v40.1.0 or higher.')
17 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
18 'and try again')
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, 'README.md')) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.0.0b2"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type='text/markdown',
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
43 include_package_data = True,
44 test_suite='test',
45 entry_points={
46 'console_scripts': [
47 'dbt = dbt.main:main',
48 ],
49 },
50 scripts=[
51 'scripts/dbt',
52 ],
53 install_requires=[
54 'Jinja2==2.11.3',
55 'agate>=1.6,<1.6.4',
56 'click>=8,<9',
57 'colorama>=0.3.9,<0.4.5',
58 'dataclasses>=0.6,<0.9;python_version<"3.7"',
59 'hologram==0.0.14',
60 'isodate>=0.6,<0.7',
61 'logbook>=1.5,<1.6',
62 'mashumaro==2.9',
63 'minimal-snowplow-tracker==0.0.2',
64 'networkx>=2.3,<3',
65 'packaging>=20.9,<22.0',
66 'sqlparse>=0.2.3,<0.5',
67 'dbt-extractor==0.4.0',
68 'typing-extensions>=3.7.4,<3.11',
69 'werkzeug>=1,<3',
70 # the following are all to match snowflake-connector-python
71 'requests<3.0.0',
72 'idna>=2.5,<4',
73 'cffi>=1.9,<2.0.0',
74 ],
75 zip_safe=False,
76 classifiers=[
77 'Development Status :: 5 - Production/Stable',
78
79 'License :: OSI Approved :: Apache Software License',
80
81 'Operating System :: Microsoft :: Windows',
82 'Operating System :: MacOS :: MacOS X',
83 'Operating System :: POSIX :: Linux',
84
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Programming Language :: Python :: 3.8',
88 'Programming Language :: Python :: 3.9',
89 ],
90 python_requires=">=3.6.3",
91 )
92
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -2,9 +2,9 @@
import os
import sys
-if sys.version_info < (3, 6):
+if sys.version_info < (3, 7):
print('Error: dbt does not support this version of Python.')
- print('Please upgrade to Python 3.6 or higher.')
+ print('Please upgrade to Python 3.7 or higher.')
sys.exit(1)
@@ -55,7 +55,6 @@
'agate>=1.6,<1.6.4',
'click>=8,<9',
'colorama>=0.3.9,<0.4.5',
- 'dataclasses>=0.6,<0.9;python_version<"3.7"',
'hologram==0.0.14',
'isodate>=0.6,<0.7',
'logbook>=1.5,<1.6',
@@ -82,10 +81,9 @@
'Operating System :: MacOS :: MacOS X',
'Operating System :: POSIX :: Linux',
- 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
- python_requires=">=3.6.3",
+ python_requires=">=3.7",
)
diff --git a/plugins/postgres/setup.py b/plugins/postgres/setup.py
--- a/plugins/postgres/setup.py
+++ b/plugins/postgres/setup.py
@@ -2,9 +2,9 @@
import os
import sys
-if sys.version_info < (3, 6):
+if sys.version_info < (3, 7):
print('Error: dbt does not support this version of Python.')
- print('Please upgrade to Python 3.6 or higher.')
+ print('Please upgrade to Python 3.7 or higher.')
sys.exit(1)
@@ -82,10 +82,9 @@
'Operating System :: MacOS :: MacOS X',
'Operating System :: POSIX :: Linux',
- 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
- python_requires=">=3.6.2",
+ python_requires=">=3.7",
)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,9 +14,9 @@
sys.exit(1)
-if sys.version_info < (3, 6):
+if sys.version_info < (3, 7):
print('Error: dbt does not support this version of Python.')
- print('Please upgrade to Python 3.6 or higher.')
+ print('Please upgrade to Python 3.7 or higher.')
sys.exit(1)
@@ -62,10 +62,9 @@
'Operating System :: MacOS :: MacOS X',
'Operating System :: POSIX :: Linux',
- 'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.9',
],
- python_requires=">=3.6.2",
+ python_requires=">=3.7",
)
| {"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -2,9 +2,9 @@\n import os\n import sys\n \n-if sys.version_info < (3, 6):\n+if sys.version_info < (3, 7):\n print('Error: dbt does not support this version of Python.')\n- print('Please upgrade to Python 3.6 or higher.')\n+ print('Please upgrade to Python 3.7 or higher.')\n sys.exit(1)\n \n \n@@ -55,7 +55,6 @@\n 'agate>=1.6,<1.6.4',\n 'click>=8,<9',\n 'colorama>=0.3.9,<0.4.5',\n- 'dataclasses>=0.6,<0.9;python_version<\"3.7\"',\n 'hologram==0.0.14',\n 'isodate>=0.6,<0.7',\n 'logbook>=1.5,<1.6',\n@@ -82,10 +81,9 @@\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n \n- 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n- python_requires=\">=3.6.3\",\n+ python_requires=\">=3.7\",\n )\ndiff --git a/plugins/postgres/setup.py b/plugins/postgres/setup.py\n--- a/plugins/postgres/setup.py\n+++ b/plugins/postgres/setup.py\n@@ -2,9 +2,9 @@\n import os\n import sys\n \n-if sys.version_info < (3, 6):\n+if sys.version_info < (3, 7):\n print('Error: dbt does not support this version of Python.')\n- print('Please upgrade to Python 3.6 or higher.')\n+ print('Please upgrade to Python 3.7 or higher.')\n sys.exit(1)\n \n \n@@ -82,10 +82,9 @@\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n \n- 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n- python_requires=\">=3.6.2\",\n+ python_requires=\">=3.7\",\n )\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,9 +14,9 @@\n sys.exit(1)\n \n \n-if sys.version_info < (3, 6):\n+if sys.version_info < (3, 7):\n print('Error: dbt does not support this version of Python.')\n- print('Please upgrade to Python 3.6 or higher.')\n+ print('Please upgrade to Python 3.7 or higher.')\n sys.exit(1)\n \n \n@@ -62,10 +62,9 @@\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n \n- 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n- python_requires=\">=3.6.2\",\n+ python_requires=\">=3.7\",\n )\n", "issue": "Remove py36 support\nPython 3.6 end of life is December 23. We're planning to release major version v1.0 in December. For clarity and ease of support, let's remove official support for python 3.6 ahead of v1.0.\r\n\r\nI think this looks like:\r\n- Turning off py36 tests\r\n- Bumping `python_requires` for `dbt-core`\r\n\r\nRemoval of related cruft (e.g. 
backported `dataclasses`) can come later\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\nPSYCOPG2_MESSAGE = '''\nNo package name override was set.\nUsing 'psycopg2-binary' package to satisfy 'psycopg2'\n\nIf you experience segmentation faults, silent crashes, or installation errors,\nconsider retrying with the 'DBT_PSYCOPG2_NAME' environment variable set to\n'psycopg2'. It may require a compiler toolchain and development libraries!\n'''.strip()\n\n\ndef _dbt_psycopg2_name():\n # if the user chose something, use that\n package_name = os.getenv('DBT_PSYCOPG2_NAME', '')\n if package_name:\n return package_name\n\n # default to psycopg2-binary for all OSes/versions\n print(PSYCOPG2_MESSAGE)\n return 'psycopg2-binary'\n\n\npackage_name = \"dbt-postgres\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"The postgres adpter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nDBT_PSYCOPG2_NAME = _dbt_psycopg2_name()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/postgres/dbt_project.yml',\n 'include/postgres/sample_profiles.yml',\n 'include/postgres/macros/*.sql',\n 'include/postgres/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n '{}~=2.8'.format(DBT_PSYCOPG2_NAME),\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "plugins/postgres/setup.py"}, {"content": "#!/usr/bin/env python\nimport os\nimport sys\n\n\nif 'sdist' not in sys.argv:\n print('')\n print('As of v1.0.0, `pip install dbt` is no longer supported.')\n print('Instead, please use either:')\n print(' - `pip install dbt-core`, for core functionality')\n print(' - `pip install dbt-<adapter>`, to use dbt with your database, platform, or query engine')\n print('See full list: https://docs.getdbt.com/docs/available-adapters')\n print('')\n sys.exit(1)\n\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of 
setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n zip_safe=False,\n classifiers=[\n 'Development Status :: 7 - Inactive',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 6):\n print('Error: dbt does not support this version of Python.')\n print('Please upgrade to Python 3.6 or higher.')\n sys.exit(1)\n\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.0.0b2\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type='text/markdown',\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n include_package_data = True,\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.3',\n 'agate>=1.6,<1.6.4',\n 'click>=8,<9',\n 'colorama>=0.3.9,<0.4.5',\n 'dataclasses>=0.6,<0.9;python_version<\"3.7\"',\n 'hologram==0.0.14',\n 'isodate>=0.6,<0.7',\n 'logbook>=1.5,<1.6',\n 'mashumaro==2.9',\n 'minimal-snowplow-tracker==0.0.2',\n 'networkx>=2.3,<3',\n 'packaging>=20.9,<22.0',\n 'sqlparse>=0.2.3,<0.5',\n 'dbt-extractor==0.4.0',\n 'typing-extensions>=3.7.4,<3.11',\n 'werkzeug>=1,<3',\n # the following are all to match snowflake-connector-python\n 'requests<3.0.0',\n 'idna>=2.5,<4',\n 'cffi>=1.9,<2.0.0',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: 
Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n python_requires=\">=3.6.3\",\n)\n", "path": "core/setup.py"}]} | 3,133 | 784 |
gh_patches_debug_24127 | rasdani/github-patches | git_diff | deepset-ai__haystack-6358 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Installation] Import of GPTGenerator failing without torch being installed
**Describe the bug**
Installed haystack from latest main in a fresh env via
```
pip install -e '.[dev]'
pip install -U -e .[preview]
```
Import of GPTGenerator throws an error:
```
>>> from haystack.preview.components.generators.openai import GPTGenerator
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/maltepietsch/deepset/dev/haystack/haystack/preview/components/generators/__init__.py", line 1, in <module>
from haystack.preview.components.generators.hugging_face_local import HuggingFaceLocalGenerator
File "/Users/maltepietsch/deepset/dev/haystack/haystack/preview/components/generators/hugging_face_local.py", line 27, in <module>
class StopWordsCriteria(StoppingCriteria):
File "/Users/maltepietsch/deepset/dev/haystack/haystack/preview/components/generators/hugging_face_local.py", line 43, in StopWordsCriteria
device: Union[str, torch.device] = "cpu",
^^^^^
NameError: name 'torch' is not defined
```
**Expected behavior**
I can use GPTGenerator without any torch installation 🤔
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: MacOSX
- Haystack version (commit or version number): e905066
</issue>
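For context, the `NameError` happens because the class annotation dereferences `torch` at import time while `torch` sits behind a separate lazy-import guard that silently swallowed its `ImportError`. A simplified sketch of the pattern that avoids this, using the `LazyImport` helper visible in the module below (names trimmed for brevity; not the exact Haystack code):

```python
from typing import Union

from haystack.preview.lazy_imports import LazyImport

# One guard for torch *and* everything that references it: if torch is missing,
# the whole block -- including the class definition -- is skipped, so no
# module-level annotation ever touches the absent module.
with LazyImport(message="Run 'pip install transformers[torch]'") as torch_and_transformers_import:
    import torch
    from transformers import StoppingCriteria

    class StopWordsCriteria(StoppingCriteria):
        def __init__(self, device: Union[str, torch.device] = "cpu"):
            super().__init__()
            self.device = device
```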
<code>
[start of haystack/preview/components/generators/hugging_face_local.py]
1 import logging
2 from typing import Any, Dict, List, Literal, Optional, Union
3 from copy import deepcopy
4
5 from haystack.preview import component, default_to_dict
6 from haystack.preview.lazy_imports import LazyImport
7
8 logger = logging.getLogger(__name__)
9
10 SUPPORTED_TASKS = ["text-generation", "text2text-generation"]
11
12 with LazyImport(
13 message="PyTorch is needed to run this component. Please install it by following the instructions at https://pytorch.org/"
14 ) as torch_import:
15 import torch
16
17 with LazyImport(message="Run 'pip install transformers'") as transformers_import:
18 from huggingface_hub import model_info
19 from transformers import (
20 pipeline,
21 StoppingCriteriaList,
22 StoppingCriteria,
23 PreTrainedTokenizer,
24 PreTrainedTokenizerFast,
25 )
26
27 class StopWordsCriteria(StoppingCriteria):
28 """
29 Stops text generation if any one of the stop words is generated.
30
31 Note: When a stop word is encountered, the generation of new text is stopped.
32 However, if the stop word is in the prompt itself, it can stop generating new text
33 prematurely after the first token. This is particularly important for LLMs designed
34 for dialogue generation. For these models, like for example mosaicml/mpt-7b-chat,
35 the output includes both the new text and the original prompt. Therefore, it's important
36 to make sure your prompt has no stop words.
37 """
38
39 def __init__(
40 self,
41 tokenizer: Union[PreTrainedTokenizer, PreTrainedTokenizerFast],
42 stop_words: List[str],
43 device: Union[str, torch.device] = "cpu",
44 ):
45 super().__init__()
46 encoded_stop_words = tokenizer(stop_words, add_special_tokens=False, padding=True, return_tensors="pt")
47 self.stop_ids = encoded_stop_words.input_ids.to(device)
48
49 def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
50 for stop_id in self.stop_ids:
51 found_stop_word = self.is_stop_word_found(input_ids, stop_id)
52 if found_stop_word:
53 return True
54 return False
55
56 def is_stop_word_found(self, generated_text_ids: torch.Tensor, stop_id: torch.Tensor) -> bool:
57 generated_text_ids = generated_text_ids[-1]
58 len_generated_text_ids = generated_text_ids.size(0)
59 len_stop_id = stop_id.size(0)
60 result = all(generated_text_ids[len_generated_text_ids - len_stop_id :].eq(stop_id))
61 return result
62
63
64 @component
65 class HuggingFaceLocalGenerator:
66 """
67 Generator based on a Hugging Face model.
68 This component provides an interface to generate text using a Hugging Face model that runs locally.
69
70 Usage example:
71 ```python
72 from haystack.preview.components.generators import HuggingFaceLocalGenerator
73
74 generator = HuggingFaceLocalGenerator(model="google/flan-t5-large",
75 task="text2text-generation",
76 generation_kwargs={
77 "max_new_tokens": 100,
78 "temperature": 0.9,
79 })
80
81 print(generator.run("Who is the best American actor?"))
82 # {'replies': ['John Cusack']}
83 ```
84 """
85
86 def __init__(
87 self,
88 model_name_or_path: str = "google/flan-t5-base",
89 task: Optional[Literal["text-generation", "text2text-generation"]] = None,
90 device: Optional[str] = None,
91 token: Optional[Union[str, bool]] = None,
92 generation_kwargs: Optional[Dict[str, Any]] = None,
93 pipeline_kwargs: Optional[Dict[str, Any]] = None,
94 stop_words: Optional[List[str]] = None,
95 ):
96 """
97 :param model_name_or_path: The name or path of a Hugging Face model for text generation,
98 for example, "google/flan-t5-large".
99 If the model is also specified in the `pipeline_kwargs`, this parameter will be ignored.
100 :param task: The task for the Hugging Face pipeline.
101 Possible values are "text-generation" and "text2text-generation".
102 Generally, decoder-only models like GPT support "text-generation",
103 while encoder-decoder models like T5 support "text2text-generation".
104 If the task is also specified in the `pipeline_kwargs`, this parameter will be ignored.
105 If not specified, the component will attempt to infer the task from the model name,
106 calling the Hugging Face Hub API.
107 :param device: The device on which the model is loaded. (e.g., "cpu", "cuda:0").
108 If `device` or `device_map` is specified in the `pipeline_kwargs`, this parameter will be ignored.
109 :param token: The token to use as HTTP bearer authorization for remote files.
110 If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
111 If the token is also specified in the `pipeline_kwargs`, this parameter will be ignored.
112 :param generation_kwargs: A dictionary containing keyword arguments to customize text generation.
113 Some examples: `max_length`, `max_new_tokens`, `temperature`, `top_k`, `top_p`,...
114 See Hugging Face's documentation for more information:
115 - https://huggingface.co/docs/transformers/main/en/generation_strategies#customize-text-generation
116 - https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig
117 :param pipeline_kwargs: Dictionary containing keyword arguments used to initialize the pipeline.
118 These keyword arguments provide fine-grained control over the pipeline.
119 In case of duplication, these kwargs override `model_name_or_path`, `task`, `device`, and `token` init parameters.
120 See Hugging Face's [documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.pipeline.task)
121 for more information on the available kwargs.
122 In this dictionary, you can also include `model_kwargs` to specify the kwargs
123 for model initialization:
124 https://huggingface.co/docs/transformers/en/main_classes/model#transformers.PreTrainedModel.from_pretrained
125 :param stop_words: A list of stop words. If any one of the stop words is generated, the generation is stopped.
126 If you provide this parameter, you should not specify the `stopping_criteria` in `generation_kwargs`.
127 For some chat models, the output includes both the new text and the original prompt.
128 In these cases, it's important to make sure your prompt has no stop words.
129 """
130 transformers_import.check()
131 torch_import.check()
132
133 pipeline_kwargs = pipeline_kwargs or {}
134 generation_kwargs = generation_kwargs or {}
135
136 # check if the pipeline_kwargs contain the essential parameters
137 # otherwise, populate them with values from other init parameters
138 pipeline_kwargs.setdefault("model", model_name_or_path)
139 pipeline_kwargs.setdefault("token", token)
140 if device is not None and "device" not in pipeline_kwargs and "device_map" not in pipeline_kwargs:
141 pipeline_kwargs["device"] = device
142
143 # task identification and validation
144 if task is None:
145 if "task" in pipeline_kwargs:
146 task = pipeline_kwargs["task"]
147 elif isinstance(pipeline_kwargs["model"], str):
148 task = model_info(pipeline_kwargs["model"], token=pipeline_kwargs["token"]).pipeline_tag
149
150 if task not in SUPPORTED_TASKS:
151 raise ValueError(
152 f"Task '{task}' is not supported. " f"The supported tasks are: {', '.join(SUPPORTED_TASKS)}."
153 )
154 pipeline_kwargs["task"] = task
155
156 # if not specified, set return_full_text to False for text-generation
157 # only generated text is returned (excluding prompt)
158 if task == "text-generation":
159 generation_kwargs.setdefault("return_full_text", False)
160
161 if stop_words and "stopping_criteria" in generation_kwargs:
162 raise ValueError(
163 "Found both the `stop_words` init parameter and the `stopping_criteria` key in `generation_kwargs`. "
164 "Please specify only one of them."
165 )
166
167 self.pipeline_kwargs = pipeline_kwargs
168 self.generation_kwargs = generation_kwargs
169 self.stop_words = stop_words
170 self.pipeline = None
171 self.stopping_criteria_list = None
172
173 def _get_telemetry_data(self) -> Dict[str, Any]:
174 """
175 Data that is sent to Posthog for usage analytics.
176 """
177 if isinstance(self.pipeline_kwargs["model"], str):
178 return {"model": self.pipeline_kwargs["model"]}
179 return {"model": f"[object of type {type(self.pipeline_kwargs['model'])}]"}
180
181 def warm_up(self):
182 if self.pipeline is None:
183 self.pipeline = pipeline(**self.pipeline_kwargs)
184
185 if self.stop_words and self.stopping_criteria_list is None:
186 stop_words_criteria = StopWordsCriteria(
187 tokenizer=self.pipeline.tokenizer, stop_words=self.stop_words, device=self.pipeline.device
188 )
189 self.stopping_criteria_list = StoppingCriteriaList([stop_words_criteria])
190
191 def to_dict(self) -> Dict[str, Any]:
192 """
193 Serialize this component to a dictionary.
194 """
195 pipeline_kwargs_to_serialize = deepcopy(self.pipeline_kwargs)
196
197 # we don't want to serialize valid tokens
198 if isinstance(pipeline_kwargs_to_serialize["token"], str):
199 pipeline_kwargs_to_serialize["token"] = None
200
201 return default_to_dict(
202 self,
203 pipeline_kwargs=pipeline_kwargs_to_serialize,
204 generation_kwargs=self.generation_kwargs,
205 stop_words=self.stop_words,
206 )
207
208 @component.output_types(replies=List[str])
209 def run(self, prompt: str, generation_kwargs: Optional[Dict[str, Any]] = None):
210 """
211 Run the text generation model on the given prompt.
212
213 :param prompt: A string representing the prompt.
214 :param generation_kwargs: Additional keyword arguments for text generation.
215 :return: A dictionary containing the generated replies.
216 """
217 if self.pipeline is None:
218 raise RuntimeError("The generation model has not been loaded. Please call warm_up() before running.")
219
220 if not prompt:
221 return {"replies": []}
222
223 # merge generation kwargs from init method with those from run method
224 updated_generation_kwargs = {**self.generation_kwargs, **(generation_kwargs or {})}
225
226 output = self.pipeline(prompt, stopping_criteria=self.stopping_criteria_list, **updated_generation_kwargs)
227 replies = [o["generated_text"] for o in output if "generated_text" in o]
228
229 if self.stop_words:
230 # the output of the pipeline includes the stop word
231 replies = [reply.replace(stop_word, "").rstrip() for reply in replies for stop_word in self.stop_words]
232
233 return {"replies": replies}
234
[end of haystack/preview/components/generators/hugging_face_local.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/haystack/preview/components/generators/hugging_face_local.py b/haystack/preview/components/generators/hugging_face_local.py
--- a/haystack/preview/components/generators/hugging_face_local.py
+++ b/haystack/preview/components/generators/hugging_face_local.py
@@ -9,12 +9,8 @@
SUPPORTED_TASKS = ["text-generation", "text2text-generation"]
-with LazyImport(
- message="PyTorch is needed to run this component. Please install it by following the instructions at https://pytorch.org/"
-) as torch_import:
+with LazyImport(message="Run 'pip install transformers[torch]'") as torch_and_transformers_import:
import torch
-
-with LazyImport(message="Run 'pip install transformers'") as transformers_import:
from huggingface_hub import model_info
from transformers import (
pipeline,
@@ -127,8 +123,7 @@
For some chat models, the output includes both the new text and the original prompt.
In these cases, it's important to make sure your prompt has no stop words.
"""
- transformers_import.check()
- torch_import.check()
+ torch_and_transformers_import.check()
pipeline_kwargs = pipeline_kwargs or {}
generation_kwargs = generation_kwargs or {}
| {"golden_diff": "diff --git a/haystack/preview/components/generators/hugging_face_local.py b/haystack/preview/components/generators/hugging_face_local.py\n--- a/haystack/preview/components/generators/hugging_face_local.py\n+++ b/haystack/preview/components/generators/hugging_face_local.py\n@@ -9,12 +9,8 @@\n \n SUPPORTED_TASKS = [\"text-generation\", \"text2text-generation\"]\n \n-with LazyImport(\n- message=\"PyTorch is needed to run this component. Please install it by following the instructions at https://pytorch.org/\"\n-) as torch_import:\n+with LazyImport(message=\"Run 'pip install transformers[torch]'\") as torch_and_transformers_import:\n import torch\n-\n-with LazyImport(message=\"Run 'pip install transformers'\") as transformers_import:\n from huggingface_hub import model_info\n from transformers import (\n pipeline,\n@@ -127,8 +123,7 @@\n For some chat models, the output includes both the new text and the original prompt.\n In these cases, it's important to make sure your prompt has no stop words.\n \"\"\"\n- transformers_import.check()\n- torch_import.check()\n+ torch_and_transformers_import.check()\n \n pipeline_kwargs = pipeline_kwargs or {}\n generation_kwargs = generation_kwargs or {}\n", "issue": "[Installation] Import of GPTGenerator failing without torch being installed\n**Describe the bug**\r\nInstalled haystack from latest main in a fresh env via\r\n```\r\npip install -e '.[dev]'\r\npip install -U -e .[preview]\r\n```\r\nImport of GPTGenerator throws error:\r\n```\r\n>>> from haystack.preview.components.generators.openai import GPTGenerator\r\nNone of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/maltepietsch/deepset/dev/haystack/haystack/preview/components/generators/__init__.py\", line 1, in <module>\r\n from haystack.preview.components.generators.hugging_face_local import HuggingFaceLocalGenerator\r\n File \"/Users/maltepietsch/deepset/dev/haystack/haystack/preview/components/generators/hugging_face_local.py\", line 27, in <module>\r\n class StopWordsCriteria(StoppingCriteria):\r\n File \"/Users/maltepietsch/deepset/dev/haystack/haystack/preview/components/generators/hugging_face_local.py\", line 43, in StopWordsCriteria\r\n device: Union[str, torch.device] = \"cpu\",\r\n ^^^^^\r\nNameError: name 'torch' is not defined\r\n```\r\n**Expected behavior**\r\nI can use GPTGenerator without any torch installation \ud83e\udd14 \r\n\r\n\r\n**FAQ Check**\r\n- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?\r\n\r\n**System:**\r\n - OS: MacOSX\r\n - Haystack version (commit or version number): e905066\r\n\n", "before_files": [{"content": "import logging\nfrom typing import Any, Dict, List, Literal, Optional, Union\nfrom copy import deepcopy\n\nfrom haystack.preview import component, default_to_dict\nfrom haystack.preview.lazy_imports import LazyImport\n\nlogger = logging.getLogger(__name__)\n\nSUPPORTED_TASKS = [\"text-generation\", \"text2text-generation\"]\n\nwith LazyImport(\n message=\"PyTorch is needed to run this component. 
Please install it by following the instructions at https://pytorch.org/\"\n) as torch_import:\n import torch\n\nwith LazyImport(message=\"Run 'pip install transformers'\") as transformers_import:\n from huggingface_hub import model_info\n from transformers import (\n pipeline,\n StoppingCriteriaList,\n StoppingCriteria,\n PreTrainedTokenizer,\n PreTrainedTokenizerFast,\n )\n\n class StopWordsCriteria(StoppingCriteria):\n \"\"\"\n Stops text generation if any one of the stop words is generated.\n\n Note: When a stop word is encountered, the generation of new text is stopped.\n However, if the stop word is in the prompt itself, it can stop generating new text\n prematurely after the first token. This is particularly important for LLMs designed\n for dialogue generation. For these models, like for example mosaicml/mpt-7b-chat,\n the output includes both the new text and the original prompt. Therefore, it's important\n to make sure your prompt has no stop words.\n \"\"\"\n\n def __init__(\n self,\n tokenizer: Union[PreTrainedTokenizer, PreTrainedTokenizerFast],\n stop_words: List[str],\n device: Union[str, torch.device] = \"cpu\",\n ):\n super().__init__()\n encoded_stop_words = tokenizer(stop_words, add_special_tokens=False, padding=True, return_tensors=\"pt\")\n self.stop_ids = encoded_stop_words.input_ids.to(device)\n\n def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:\n for stop_id in self.stop_ids:\n found_stop_word = self.is_stop_word_found(input_ids, stop_id)\n if found_stop_word:\n return True\n return False\n\n def is_stop_word_found(self, generated_text_ids: torch.Tensor, stop_id: torch.Tensor) -> bool:\n generated_text_ids = generated_text_ids[-1]\n len_generated_text_ids = generated_text_ids.size(0)\n len_stop_id = stop_id.size(0)\n result = all(generated_text_ids[len_generated_text_ids - len_stop_id :].eq(stop_id))\n return result\n\n\n@component\nclass HuggingFaceLocalGenerator:\n \"\"\"\n Generator based on a Hugging Face model.\n This component provides an interface to generate text using a Hugging Face model that runs locally.\n\n Usage example:\n ```python\n from haystack.preview.components.generators import HuggingFaceLocalGenerator\n\n generator = HuggingFaceLocalGenerator(model=\"google/flan-t5-large\",\n task=\"text2text-generation\",\n generation_kwargs={\n \"max_new_tokens\": 100,\n \"temperature\": 0.9,\n })\n\n print(generator.run(\"Who is the best American actor?\"))\n # {'replies': ['John Cusack']}\n ```\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: str = \"google/flan-t5-base\",\n task: Optional[Literal[\"text-generation\", \"text2text-generation\"]] = None,\n device: Optional[str] = None,\n token: Optional[Union[str, bool]] = None,\n generation_kwargs: Optional[Dict[str, Any]] = None,\n pipeline_kwargs: Optional[Dict[str, Any]] = None,\n stop_words: Optional[List[str]] = None,\n ):\n \"\"\"\n :param model_name_or_path: The name or path of a Hugging Face model for text generation,\n for example, \"google/flan-t5-large\".\n If the model is also specified in the `pipeline_kwargs`, this parameter will be ignored.\n :param task: The task for the Hugging Face pipeline.\n Possible values are \"text-generation\" and \"text2text-generation\".\n Generally, decoder-only models like GPT support \"text-generation\",\n while encoder-decoder models like T5 support \"text2text-generation\".\n If the task is also specified in the `pipeline_kwargs`, this parameter will be ignored.\n If not specified, the component will attempt 
to infer the task from the model name,\n calling the Hugging Face Hub API.\n :param device: The device on which the model is loaded. (e.g., \"cpu\", \"cuda:0\").\n If `device` or `device_map` is specified in the `pipeline_kwargs`, this parameter will be ignored.\n :param token: The token to use as HTTP bearer authorization for remote files.\n If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).\n If the token is also specified in the `pipeline_kwargs`, this parameter will be ignored.\n :param generation_kwargs: A dictionary containing keyword arguments to customize text generation.\n Some examples: `max_length`, `max_new_tokens`, `temperature`, `top_k`, `top_p`,...\n See Hugging Face's documentation for more information:\n - https://huggingface.co/docs/transformers/main/en/generation_strategies#customize-text-generation\n - https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationConfig\n :param pipeline_kwargs: Dictionary containing keyword arguments used to initialize the pipeline.\n These keyword arguments provide fine-grained control over the pipeline.\n In case of duplication, these kwargs override `model_name_or_path`, `task`, `device`, and `token` init parameters.\n See Hugging Face's [documentation](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.pipeline.task)\n for more information on the available kwargs.\n In this dictionary, you can also include `model_kwargs` to specify the kwargs\n for model initialization:\n https://huggingface.co/docs/transformers/en/main_classes/model#transformers.PreTrainedModel.from_pretrained\n :param stop_words: A list of stop words. If any one of the stop words is generated, the generation is stopped.\n If you provide this parameter, you should not specify the `stopping_criteria` in `generation_kwargs`.\n For some chat models, the output includes both the new text and the original prompt.\n In these cases, it's important to make sure your prompt has no stop words.\n \"\"\"\n transformers_import.check()\n torch_import.check()\n\n pipeline_kwargs = pipeline_kwargs or {}\n generation_kwargs = generation_kwargs or {}\n\n # check if the pipeline_kwargs contain the essential parameters\n # otherwise, populate them with values from other init parameters\n pipeline_kwargs.setdefault(\"model\", model_name_or_path)\n pipeline_kwargs.setdefault(\"token\", token)\n if device is not None and \"device\" not in pipeline_kwargs and \"device_map\" not in pipeline_kwargs:\n pipeline_kwargs[\"device\"] = device\n\n # task identification and validation\n if task is None:\n if \"task\" in pipeline_kwargs:\n task = pipeline_kwargs[\"task\"]\n elif isinstance(pipeline_kwargs[\"model\"], str):\n task = model_info(pipeline_kwargs[\"model\"], token=pipeline_kwargs[\"token\"]).pipeline_tag\n\n if task not in SUPPORTED_TASKS:\n raise ValueError(\n f\"Task '{task}' is not supported. \" f\"The supported tasks are: {', '.join(SUPPORTED_TASKS)}.\"\n )\n pipeline_kwargs[\"task\"] = task\n\n # if not specified, set return_full_text to False for text-generation\n # only generated text is returned (excluding prompt)\n if task == \"text-generation\":\n generation_kwargs.setdefault(\"return_full_text\", False)\n\n if stop_words and \"stopping_criteria\" in generation_kwargs:\n raise ValueError(\n \"Found both the `stop_words` init parameter and the `stopping_criteria` key in `generation_kwargs`. 
\"\n \"Please specify only one of them.\"\n )\n\n self.pipeline_kwargs = pipeline_kwargs\n self.generation_kwargs = generation_kwargs\n self.stop_words = stop_words\n self.pipeline = None\n self.stopping_criteria_list = None\n\n def _get_telemetry_data(self) -> Dict[str, Any]:\n \"\"\"\n Data that is sent to Posthog for usage analytics.\n \"\"\"\n if isinstance(self.pipeline_kwargs[\"model\"], str):\n return {\"model\": self.pipeline_kwargs[\"model\"]}\n return {\"model\": f\"[object of type {type(self.pipeline_kwargs['model'])}]\"}\n\n def warm_up(self):\n if self.pipeline is None:\n self.pipeline = pipeline(**self.pipeline_kwargs)\n\n if self.stop_words and self.stopping_criteria_list is None:\n stop_words_criteria = StopWordsCriteria(\n tokenizer=self.pipeline.tokenizer, stop_words=self.stop_words, device=self.pipeline.device\n )\n self.stopping_criteria_list = StoppingCriteriaList([stop_words_criteria])\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n \"\"\"\n pipeline_kwargs_to_serialize = deepcopy(self.pipeline_kwargs)\n\n # we don't want to serialize valid tokens\n if isinstance(pipeline_kwargs_to_serialize[\"token\"], str):\n pipeline_kwargs_to_serialize[\"token\"] = None\n\n return default_to_dict(\n self,\n pipeline_kwargs=pipeline_kwargs_to_serialize,\n generation_kwargs=self.generation_kwargs,\n stop_words=self.stop_words,\n )\n\n @component.output_types(replies=List[str])\n def run(self, prompt: str, generation_kwargs: Optional[Dict[str, Any]] = None):\n \"\"\"\n Run the text generation model on the given prompt.\n\n :param prompt: A string representing the prompt.\n :param generation_kwargs: Additional keyword arguments for text generation.\n :return: A dictionary containing the generated replies.\n \"\"\"\n if self.pipeline is None:\n raise RuntimeError(\"The generation model has not been loaded. Please call warm_up() before running.\")\n\n if not prompt:\n return {\"replies\": []}\n\n # merge generation kwargs from init method with those from run method\n updated_generation_kwargs = {**self.generation_kwargs, **(generation_kwargs or {})}\n\n output = self.pipeline(prompt, stopping_criteria=self.stopping_criteria_list, **updated_generation_kwargs)\n replies = [o[\"generated_text\"] for o in output if \"generated_text\" in o]\n\n if self.stop_words:\n # the output of the pipeline includes the stop word\n replies = [reply.replace(stop_word, \"\").rstrip() for reply in replies for stop_word in self.stop_words]\n\n return {\"replies\": replies}\n", "path": "haystack/preview/components/generators/hugging_face_local.py"}]} | 3,825 | 284 |
gh_patches_debug_471 | rasdani/github-patches | git_diff | pytorch__rl-402 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Loggers registration
## Describe the bug
The [`__init__.py`](https://github.com/facebookresearch/rl/blob/main/torchrl/trainers/loggers/__init__.py) from loggers does not reference the implemented loggers. However, the [docstring from the trainer](https://github.com/facebookresearch/rl/blob/806733f27dfa9a878b75b079de9f18df83f54c2d/torchrl/trainers/helpers/trainers.py#L109) assumes that this kind of import can be run.
</issue>
<code>
[start of torchrl/trainers/loggers/__init__.py]
1 # Copyright (c) Meta Platforms, Inc. and affiliates.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5
6 from .common import Logger
7
[end of torchrl/trainers/loggers/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchrl/trainers/loggers/__init__.py b/torchrl/trainers/loggers/__init__.py
--- a/torchrl/trainers/loggers/__init__.py
+++ b/torchrl/trainers/loggers/__init__.py
@@ -4,3 +4,6 @@
# LICENSE file in the root directory of this source tree.
from .common import Logger
+from .csv import CSVLogger
+from .tensorboard import TensorboardLogger
+from .wandb import WandbLogger
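
For context, here is a minimal sketch of what the patch enables. The import path and class names are taken from the diff above; the backend lookup below is purely illustrative and assumes a torchrl checkout with this patch applied.

```python
# With the patch applied, the concrete loggers resolve from the package root,
# which is the import style the trainer helper docstring assumes.
from torchrl.trainers.loggers import CSVLogger, TensorboardLogger, WandbLogger

# Hypothetical backend -> class lookup a trainer helper could use; only the
# import line above comes from the patch, the mapping itself is made up.
LOGGER_BACKENDS = {
    "csv": CSVLogger,
    "tensorboard": TensorboardLogger,
    "wandb": WandbLogger,
}


def resolve_logger_cls(backend: str) -> type:
    try:
        return LOGGER_BACKENDS[backend]
    except KeyError:
        raise ValueError(f"unknown logger backend: {backend!r}") from None
```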
| {"golden_diff": "diff --git a/torchrl/trainers/loggers/__init__.py b/torchrl/trainers/loggers/__init__.py\n--- a/torchrl/trainers/loggers/__init__.py\n+++ b/torchrl/trainers/loggers/__init__.py\n@@ -4,3 +4,6 @@\n # LICENSE file in the root directory of this source tree.\n \n from .common import Logger\n+from .csv import CSVLogger\n+from .tensorboard import TensorboardLogger\n+from .wandb import WandbLogger\n", "issue": "[BUG] Loggers registration\n## Describe the bug\r\n\r\nThe [`__init__.py`](https://github.com/facebookresearch/rl/blob/main/torchrl/trainers/loggers/__init__.py) from loggers does not reference the implemented loggers. However, the [docstring from the trainer](https://github.com/facebookresearch/rl/blob/806733f27dfa9a878b75b079de9f18df83f54c2d/torchrl/trainers/helpers/trainers.py#L109) assumes that this kind of import can be run.\n", "before_files": [{"content": "# Copyright (c) Meta Platforms, Inc. and affiliates.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom .common import Logger\n", "path": "torchrl/trainers/loggers/__init__.py"}]} | 732 | 113 |
gh_patches_debug_16425 | rasdani/github-patches | git_diff | pantsbuild__pants-15979 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`./pants run` crashes when `remote_cache_eager_fetch = false`
**Describe the bug**
After setting `remote_cache_eager_fetch = false`, a CI step that `./pants run`s a `pex_binary` has started (intermittently) failing with errors like:
```
Exception: Was not present in either the local or remote store: Digest { hash: Fingerprint<46683dec8706b7ac7c4f6011f68b4b8c10ad423ae8ba57745a6f5e01ba5b2f7b>, size_bytes: 11827 }
```
**Pants version**
`PANTS_SHA=5d8a328d72209863986c8959b20305505bc068ba`
**OS**
Linux
**Additional info**
Some BuildSense links where we've seen the failure:
* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/
* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_31_09_277_3793f53b54914135885f2ac951faf210/
* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/
</issue>
<code>
[start of src/python/pants/base/exceptions.py]
1 # Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6
7 class TargetDefinitionException(Exception):
8 """Indicates an invalid target definition.
9
10 :API: public
11 """
12
13 def __init__(self, target, msg):
14 """
15 :param target: the target in question
16 :param string msg: a description of the target misconfiguration
17 """
18 super().__init__(f"Invalid target {target}: {msg}")
19
20
21 class BuildConfigurationError(Exception):
22 """Indicates an error in a pants installation's configuration."""
23
24
25 class BackendConfigurationError(BuildConfigurationError):
26 """Indicates a plugin backend with a missing or malformed register module."""
27
28
29 class MappingError(Exception):
30 """Indicates an error mapping addressable objects."""
31
[end of src/python/pants/base/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/python/pants/base/exceptions.py b/src/python/pants/base/exceptions.py
--- a/src/python/pants/base/exceptions.py
+++ b/src/python/pants/base/exceptions.py
@@ -3,6 +3,11 @@
from __future__ import annotations
+from typing import TYPE_CHECKING
+
+if TYPE_CHECKING:
+ from pants.engine.internals.native_engine import PyFailure
+
class TargetDefinitionException(Exception):
"""Indicates an invalid target definition.
@@ -28,3 +33,15 @@
class MappingError(Exception):
"""Indicates an error mapping addressable objects."""
+
+
+class NativeEngineFailure(Exception):
+ """A wrapper around a `Failure` instance.
+
+ TODO: This type is defined in Python because pyo3 doesn't support declaring Exceptions with
+ additional fields. See https://github.com/PyO3/pyo3/issues/295
+ """
+
+ def __init__(self, msg: str, failure: PyFailure) -> None:
+ super().__init__(msg)
+ self.failure = failure
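
As a purely hypothetical illustration of how the wrapper class added above would behave (the `PyFailureStub` below stands in for the Rust-side `PyFailure`, and its `details` attribute is an assumption, not a pants API):

```python
class PyFailureStub:
    """Stand-in for the native PyFailure object; the attribute name is made up."""

    def __init__(self, details: str) -> None:
        self.details = details


class NativeEngineFailure(Exception):
    """Local copy of the exception introduced by the patch: a message plus .failure."""

    def __init__(self, msg: str, failure: PyFailureStub) -> None:
        super().__init__(msg)
        self.failure = failure


try:
    raise NativeEngineFailure("remote store lookup failed", PyFailureStub("missing digest"))
except NativeEngineFailure as exc:
    print(exc)                  # message passed to the constructor
    print(exc.failure.details)  # wrapped failure object carried alongside it
```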
| {"golden_diff": "diff --git a/src/python/pants/base/exceptions.py b/src/python/pants/base/exceptions.py\n--- a/src/python/pants/base/exceptions.py\n+++ b/src/python/pants/base/exceptions.py\n@@ -3,6 +3,11 @@\n \n from __future__ import annotations\n \n+from typing import TYPE_CHECKING\n+\n+if TYPE_CHECKING:\n+ from pants.engine.internals.native_engine import PyFailure\n+\n \n class TargetDefinitionException(Exception):\n \"\"\"Indicates an invalid target definition.\n@@ -28,3 +33,15 @@\n \n class MappingError(Exception):\n \"\"\"Indicates an error mapping addressable objects.\"\"\"\n+\n+\n+class NativeEngineFailure(Exception):\n+ \"\"\"A wrapper around a `Failure` instance.\n+\n+ TODO: This type is defined in Python because pyo3 doesn't support declaring Exceptions with\n+ additional fields. See https://github.com/PyO3/pyo3/issues/295\n+ \"\"\"\n+\n+ def __init__(self, msg: str, failure: PyFailure) -> None:\n+ super().__init__(msg)\n+ self.failure = failure\n", "issue": "`./pants run` crashes when `remote_cache_eager_fetch = false`\n**Describe the bug**\r\n\r\nAfter setting `remote_cache_eager_fetch = false`, a CI step that `./pants run`s a `pex_binary` has started (intermittently) failing with errors like:\r\n```\r\n Exception: Was not present in either the local or remote store: Digest { hash: Fingerprint<46683dec8706b7ac7c4f6011f68b4b8c10ad423ae8ba57745a6f5e01ba5b2f7b>, size_bytes: 11827 }\r\n```\r\n\r\n**Pants version**\r\n\r\n`PANTS_SHA=5d8a328d72209863986c8959b20305505bc068ba`\r\n\r\n**OS**\r\n\r\nLinux\r\n\r\n**Additional info**\r\n\r\nSome BuildSense links where we've seen the failure:\r\n* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/\r\n* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_31_09_277_3793f53b54914135885f2ac951faf210/\r\n* https://app.toolchain.com/organizations/color/repos/color/builds/pants_run_2022_06_27_19_29_46_827_37a3a5f2d8e440cca373a7fb4a5d3b51/\r\n\n", "before_files": [{"content": "# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\n\nclass TargetDefinitionException(Exception):\n \"\"\"Indicates an invalid target definition.\n\n :API: public\n \"\"\"\n\n def __init__(self, target, msg):\n \"\"\"\n :param target: the target in question\n :param string msg: a description of the target misconfiguration\n \"\"\"\n super().__init__(f\"Invalid target {target}: {msg}\")\n\n\nclass BuildConfigurationError(Exception):\n \"\"\"Indicates an error in a pants installation's configuration.\"\"\"\n\n\nclass BackendConfigurationError(BuildConfigurationError):\n \"\"\"Indicates a plugin backend with a missing or malformed register module.\"\"\"\n\n\nclass MappingError(Exception):\n \"\"\"Indicates an error mapping addressable objects.\"\"\"\n", "path": "src/python/pants/base/exceptions.py"}]} | 1,221 | 243 |
gh_patches_debug_2986 | rasdani/github-patches | git_diff | coala__coala-3908 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Failure to install and run py.test in a Docker environment.
<!-- Hello! If you're filing a bug, please include every step so as to help us reproduce it on our machines. If you're unsure about how to file an issue, use the issue template. If you need any help regarding usage of coala, check out the documentation or hit us up on chat. You can ignore or delete this text, it is commented and won't appear when the issue is submitted or previewed.
Chat: https://coala.io/chat
Issue Template: https://github.com/coala/coala/blob/master/CONTRIBUTING.rst#filing-issues
Documentation: https://docs.coala.io
-->
When I try to install with `python setup.py install`, it fails with this message.
`UnicodeEncodeError: 'ascii' codec can't encode character '\xfc' in position 15224: ordinal not in range(128)`
The same happens when I try to run the unit tests locally.
It needs to be fixed.
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 import datetime
4 import locale
5 import platform
6 import sys
7 from os import getenv
8 from subprocess import call
9
10 import setuptools.command.build_py
11 from setuptools import find_packages, setup
12 from setuptools.command.test import test as TestCommand
13
14 from coalib import VERSION, assert_supported_version, get_version
15 from coalib.misc.BuildManPage import BuildManPage
16
17 try:
18 locale.getlocale()
19 except (ValueError, UnicodeError):
20 locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
21
22
23 assert_supported_version()
24
25
26 class BuildPyCommand(setuptools.command.build_py.build_py):
27
28 def run(self):
29 if platform.system() != 'Windows':
30 self.run_command('build_manpage')
31 setuptools.command.build_py.build_py.run(self)
32
33
34 class PyTestCommand(TestCommand):
35
36 def run_tests(self):
37 # import here, cause outside the eggs aren't loaded
38 import pytest
39 errno = pytest.main([])
40 sys.exit(errno)
41
42
43 class BuildDocsCommand(setuptools.command.build_py.build_py):
44 apidoc_command = (
45 'sphinx-apidoc', '-f', '-o', 'docs', '--no-toc', 'coalib'
46 )
47 doc_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')
48
49 def run(self):
50 errOne = call(self.apidoc_command)
51 errTwo = call(self.doc_command)
52 sys.exit(errOne or errTwo)
53
54
55 # Generate API documentation only if we are running on readthedocs.io
56 on_rtd = getenv('READTHEDOCS', None) is not None
57 if on_rtd:
58 call(BuildDocsCommand.apidoc_command)
59 if 'dev' in VERSION:
60 current_version = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
61 call(['python3', '.misc/adjust_version_number.py', 'coalib/VERSION',
62 '-b {}'.format(current_version)])
63 VERSION = get_version()
64
65 with open('requirements.txt') as requirements:
66 required = requirements.read().splitlines()
67
68 with open('test-requirements.txt') as requirements:
69 test_required = requirements.read().splitlines()
70
71 with open('README.rst') as readme:
72 long_description = readme.read()
73
74
75 if __name__ == '__main__':
76 if platform.system() != 'Windows':
77 data_files = [('.', ['coala.1'])]
78 else:
79 data_files = [('.', [])]
80
81 setup(name='coala',
82 version=VERSION,
83 description='Linting and Fixing Code for All Languages',
84 author='The coala developers',
85 author_email='[email protected]',
86 maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\xfcger'
87 if not on_rtd else 'L.S., F.N., M.K.',
88 maintainer_email=('[email protected], '
89 '[email protected], '
90 '[email protected]'),
91 url='http://coala.io/',
92 platforms='any',
93 packages=find_packages(exclude=['build.*', 'tests', 'tests.*']),
94 install_requires=required,
95 tests_require=test_required,
96 package_data={'coalib': ['default_coafile', 'VERSION',
97 'bearlib/languages/documentation/*.coalang']
98 },
99 license='AGPL-3.0',
100 data_files=data_files,
101 long_description=long_description,
102 entry_points={
103 'console_scripts': [
104 'coala = coalib.coala:main',
105 'coala-ci = coalib.coala_ci:main',
106 'coala-json = coalib.coala_json:main',
107 'coala-format = coalib.coala_format:main',
108 'coala-delete-orig = coalib.coala_delete_orig:main']},
109 # from http://pypi.python.org/pypi?%3Aaction=list_classifiers
110 classifiers=[
111 'Development Status :: 4 - Beta',
112
113 'Environment :: Console',
114 'Environment :: MacOS X',
115 'Environment :: Win32 (MS Windows)',
116 'Environment :: X11 Applications :: Gnome',
117
118 'Intended Audience :: Science/Research',
119 'Intended Audience :: Developers',
120
121 'License :: OSI Approved :: GNU Affero General Public License '
122 'v3 or later (AGPLv3+)',
123
124 'Operating System :: OS Independent',
125
126 'Programming Language :: Python :: Implementation :: CPython',
127 'Programming Language :: Python :: 3.4',
128 'Programming Language :: Python :: 3.5',
129 'Programming Language :: Python :: 3 :: Only',
130
131 'Topic :: Scientific/Engineering :: Information Analysis',
132 'Topic :: Software Development :: Quality Assurance',
133 'Topic :: Text Processing :: Linguistic'],
134 cmdclass={'build_manpage': BuildManPage,
135 'build_py': BuildPyCommand,
136 'docs': BuildDocsCommand,
137 'test': PyTestCommand})
138
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,7 +15,10 @@
from coalib.misc.BuildManPage import BuildManPage
try:
- locale.getlocale()
+ lc = locale.getlocale()
+ pf = platform.system()
+ if pf != 'Windows' and lc == (None, None):
+ locale.setlocale(locale.LC_ALL, 'C.UTF-8')
except (ValueError, UnicodeError):
locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
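
The condition the patch adds can be reproduced outside of setup.py. A small sketch of the same logic follows; the Docker detail is only the usual trigger, since minimal images often run under the C locale where `locale.getlocale()` quietly returns `(None, None)` instead of raising, and the non-ASCII maintainer name in setup.py later trips the ascii codec.

```python
import locale
import platform

# On a bare Linux container the POSIX/C locale is typically active, and
# locale.getlocale() returns (None, None) without raising, so the original
# try/except never switched to a UTF-8 locale.
lc = locale.getlocale()
if platform.system() != "Windows" and lc == (None, None):
    # C.UTF-8 is commonly available even when en_US.UTF-8 is not generated;
    # this may still raise locale.Error on systems without it.
    locale.setlocale(locale.LC_ALL, "C.UTF-8")

print(locale.getlocale())
```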
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,7 +15,10 @@\n from coalib.misc.BuildManPage import BuildManPage\n \n try:\n- locale.getlocale()\n+ lc = locale.getlocale()\n+ pf = platform.system()\n+ if pf != 'Windows' and lc == (None, None):\n+ locale.setlocale(locale.LC_ALL, 'C.UTF-8')\n except (ValueError, UnicodeError):\n locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n", "issue": "Fail to install and py.test on docker environment.\n<!-- Hello! If you're filing a bug, please include every step so as to help us reproduce it on our machines. If you're unsure about how to file an issue, use the issue template. If you need any help regarding usage of coala, check out the documentation or hit us up on chat. You can ignore or delete this text, it is commented and won't appear when the issue is submitted or previewed.\r\n\r\nChat: https://coala.io/chat\r\nIssue Template: https://github.com/coala/coala/blob/master/CONTRIBUTING.rst#filing-issues\r\nDocumentation: https://docs.coala.io\r\n-->\r\nWhen I try to install by `python setup.py install`, it is failed with this message.\r\n`UnicodeEncodeError: 'ascii' codec can't encode character '\\xfc' in position 15224: ordinal not in range(128)`\r\nAlso, the same happening when I try to run unit test on local.\r\nIt needs to be fixed.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport datetime\nimport locale\nimport platform\nimport sys\nfrom os import getenv\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\nfrom coalib import VERSION, assert_supported_version, get_version\nfrom coalib.misc.BuildManPage import BuildManPage\n\ntry:\n locale.getlocale()\nexcept (ValueError, UnicodeError):\n locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\n\nassert_supported_version()\n\n\nclass BuildPyCommand(setuptools.command.build_py.build_py):\n\n def run(self):\n if platform.system() != 'Windows':\n self.run_command('build_manpage')\n setuptools.command.build_py.build_py.run(self)\n\n\nclass PyTestCommand(TestCommand):\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main([])\n sys.exit(errno)\n\n\nclass BuildDocsCommand(setuptools.command.build_py.build_py):\n apidoc_command = (\n 'sphinx-apidoc', '-f', '-o', 'docs', '--no-toc', 'coalib'\n )\n doc_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')\n\n def run(self):\n errOne = call(self.apidoc_command)\n errTwo = call(self.doc_command)\n sys.exit(errOne or errTwo)\n\n\n# Generate API documentation only if we are running on readthedocs.io\non_rtd = getenv('READTHEDOCS', None) is not None\nif on_rtd:\n call(BuildDocsCommand.apidoc_command)\n if 'dev' in VERSION:\n current_version = datetime.datetime.now().strftime('%Y%m%d%H%M%S')\n call(['python3', '.misc/adjust_version_number.py', 'coalib/VERSION',\n '-b {}'.format(current_version)])\n VERSION = get_version()\n\nwith open('requirements.txt') as requirements:\n required = requirements.read().splitlines()\n\nwith open('test-requirements.txt') as requirements:\n test_required = requirements.read().splitlines()\n\nwith open('README.rst') as readme:\n long_description = readme.read()\n\n\nif __name__ == '__main__':\n if platform.system() != 'Windows':\n data_files = [('.', ['coala.1'])]\n else:\n data_files = [('.', [])]\n\n setup(name='coala',\n version=VERSION,\n description='Linting and Fixing Code for All 
Languages',\n author='The coala developers',\n author_email='[email protected]',\n maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\\xfcger'\n if not on_rtd else 'L.S., F.N., M.K.',\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n url='http://coala.io/',\n platforms='any',\n packages=find_packages(exclude=['build.*', 'tests', 'tests.*']),\n install_requires=required,\n tests_require=test_required,\n package_data={'coalib': ['default_coafile', 'VERSION',\n 'bearlib/languages/documentation/*.coalang']\n },\n license='AGPL-3.0',\n data_files=data_files,\n long_description=long_description,\n entry_points={\n 'console_scripts': [\n 'coala = coalib.coala:main',\n 'coala-ci = coalib.coala_ci:main',\n 'coala-json = coalib.coala_json:main',\n 'coala-format = coalib.coala_format:main',\n 'coala-delete-orig = coalib.coala_delete_orig:main']},\n # from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n 'Development Status :: 4 - Beta',\n\n 'Environment :: Console',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications :: Gnome',\n\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n\n 'License :: OSI Approved :: GNU Affero General Public License '\n 'v3 or later (AGPLv3+)',\n\n 'Operating System :: OS Independent',\n\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3 :: Only',\n\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Text Processing :: Linguistic'],\n cmdclass={'build_manpage': BuildManPage,\n 'build_py': BuildPyCommand,\n 'docs': BuildDocsCommand,\n 'test': PyTestCommand})\n", "path": "setup.py"}]} | 2,126 | 124 |
gh_patches_debug_28687 | rasdani/github-patches | git_diff | iterative__dvc-2853 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gs: support directories as external dependencies/outputs
I have a similar issue to #2678 but for GS.
I have a bucket with the following structure
```
my_bucket
├── data
│ ├── img1.png
│ ├── img2.png
│ ├── ...
└── cache
```
I have then created a clean project
```
$ git init
$ dvc init
$ dvc remote add gscache gs://my_bucket/cache
$ dvc config cache.gs gscache
$ dvc add gs://my_bucket/data
```
The output is as follows:
```
100%|██████████|Add 1/1 [00:00<00:00, 1.21file/s]
ERROR: output 'gs://my_bucket/data' does not exist
```
Adding a single file works (`dvc add gs://my_bucket/data/img1.png`).
A more verbose version:
```
$ dvc add gs://my_bucket/data -v
DEBUG: PRAGMA user_version;
DEBUG: fetched: [(3,)]
DEBUG: CREATE TABLE IF NOT EXISTS state (inode INTEGER PRIMARY KEY, mtime TEXT NOT NULL, size TEXT NOT NULL, md5 TEXT NOT NULL, timestamp TEXT NOT NULL)
DEBUG: CREATE TABLE IF NOT EXISTS state_info (count INTEGER)
DEBUG: CREATE TABLE IF NOT EXISTS link_state (path TEXT PRIMARY KEY, inode INTEGER NOT NULL, mtime TEXT NOT NULL)
DEBUG: INSERT OR IGNORE INTO state_info (count) SELECT 0 WHERE NOT EXISTS (SELECT * FROM state_info)
DEBUG: PRAGMA user_version = 3;
100%|██████████|Add 1/1 [00:01<00:00, 1.63s/file]
DEBUG: SELECT count from state_info WHERE rowid=?
DEBUG: fetched: [(0,)]
DEBUG: UPDATE state_info SET count = ? WHERE rowid = ?
ERROR: output 'gs://my_bucket/data' does not exist
------------------------------------------------------------
Traceback (most recent call last):
File "/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/command/add.py", line 25, in run
fname=self.args.file,
File "/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/repo/__init__.py", line 35, in wrapper
ret = f(repo, *args, **kwargs)
File "/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/repo/scm_context.py", line 4, in run
result = method(repo, *args, **kw)
File "/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/repo/add.py", line 53, in add
stage.save()
File "/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/stage.py", line 716, in save
out.save()
File "/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/output/base.py", line 219, in save
raise self.DoesNotExistError(self)
dvc.output.base.OutputDoesNotExistError: output 'gs://my_bucket/data' does not exist
------------------------------------------------------------
```
`dvc --version = 0.68.1`. I am using ubuntu, I installed using conda, python 3.7.5.
</issue>
<code>
[start of dvc/remote/gs.py]
1 from __future__ import unicode_literals, division
2
3 import logging
4 from datetime import timedelta
5 from functools import wraps
6 import io
7 import os.path
8 import threading
9
10 from funcy import cached_property, wrap_prop
11
12 from dvc.config import Config
13 from dvc.exceptions import DvcException
14 from dvc.path_info import CloudURLInfo
15 from dvc.progress import Tqdm
16 from dvc.remote.base import RemoteBASE
17 from dvc.scheme import Schemes
18 from dvc.utils.compat import FileNotFoundError # skipcq: PYL-W0622
19
20 logger = logging.getLogger(__name__)
21
22
23 def dynamic_chunk_size(func):
24 @wraps(func)
25 def wrapper(*args, **kwargs):
26 import requests
27 from google.cloud.storage.blob import Blob
28
29 # `ConnectionError` may be due to too large `chunk_size`
30 # (see [#2572]) so try halving on error.
31 # Note: start with 40 * [default: 256K] = 10M.
32 # Note: must be multiple of 256K.
33 #
34 # [#2572]: https://github.com/iterative/dvc/issues/2572
35
36 # skipcq: PYL-W0212
37 multiplier = 40
38 while True:
39 try:
40 # skipcq: PYL-W0212
41 chunk_size = Blob._CHUNK_SIZE_MULTIPLE * multiplier
42 return func(*args, chunk_size=chunk_size, **kwargs)
43 except requests.exceptions.ConnectionError:
44 multiplier //= 2
45 if not multiplier:
46 raise
47
48 return wrapper
49
50
51 @dynamic_chunk_size
52 def _upload_to_bucket(
53 bucket,
54 from_file,
55 to_info,
56 chunk_size=None,
57 name=None,
58 no_progress_bar=True,
59 ):
60 blob = bucket.blob(to_info.path, chunk_size=chunk_size)
61 with Tqdm(
62 desc=name or to_info.path,
63 total=os.path.getsize(from_file),
64 bytes=True,
65 disable=no_progress_bar,
66 ) as pbar:
67 with io.open(from_file, mode="rb") as fobj:
68 raw_read = fobj.read
69
70 def read(size=chunk_size):
71 res = raw_read(size)
72 if res:
73 pbar.update(len(res))
74 return res
75
76 fobj.read = read
77 blob.upload_from_file(fobj)
78
79
80 class RemoteGS(RemoteBASE):
81 scheme = Schemes.GS
82 path_cls = CloudURLInfo
83 REQUIRES = {"google-cloud-storage": "google.cloud.storage"}
84 PARAM_CHECKSUM = "md5"
85
86 def __init__(self, repo, config):
87 super(RemoteGS, self).__init__(repo, config)
88
89 url = config.get(Config.SECTION_REMOTE_URL, "gs:///")
90 self.path_info = self.path_cls(url)
91
92 self.projectname = config.get(Config.SECTION_GCP_PROJECTNAME, None)
93 self.credentialpath = config.get(Config.SECTION_GCP_CREDENTIALPATH)
94
95 @wrap_prop(threading.Lock())
96 @cached_property
97 def gs(self):
98 from google.cloud.storage import Client
99
100 return (
101 Client.from_service_account_json(self.credentialpath)
102 if self.credentialpath
103 else Client(self.projectname)
104 )
105
106 def get_file_checksum(self, path_info):
107 import base64
108 import codecs
109
110 bucket = path_info.bucket
111 path = path_info.path
112 blob = self.gs.bucket(bucket).get_blob(path)
113 if not blob:
114 return None
115
116 b64_md5 = blob.md5_hash
117 md5 = base64.b64decode(b64_md5)
118 return codecs.getencoder("hex")(md5)[0].decode("utf-8")
119
120 def copy(self, from_info, to_info):
121 from_bucket = self.gs.bucket(from_info.bucket)
122 blob = from_bucket.get_blob(from_info.path)
123 if not blob:
124 msg = "'{}' doesn't exist in the cloud".format(from_info.path)
125 raise DvcException(msg)
126
127 to_bucket = self.gs.bucket(to_info.bucket)
128 from_bucket.copy_blob(blob, to_bucket, new_name=to_info.path)
129
130 def remove(self, path_info):
131 if path_info.scheme != "gs":
132 raise NotImplementedError
133
134 logger.debug("Removing gs://{}".format(path_info))
135 blob = self.gs.bucket(path_info.bucket).get_blob(path_info.path)
136 if not blob:
137 return
138
139 blob.delete()
140
141 def _list_paths(self, bucket, prefix):
142 for blob in self.gs.bucket(bucket).list_blobs(prefix=prefix):
143 yield blob.name
144
145 def list_cache_paths(self):
146 return self._list_paths(self.path_info.bucket, self.path_info.path)
147
148 def exists(self, path_info):
149 paths = set(self._list_paths(path_info.bucket, path_info.path))
150 return any(path_info.path == path for path in paths)
151
152 def _upload(self, from_file, to_info, name=None, no_progress_bar=True):
153 bucket = self.gs.bucket(to_info.bucket)
154 _upload_to_bucket(
155 bucket,
156 from_file,
157 to_info,
158 name=name,
159 no_progress_bar=no_progress_bar,
160 )
161
162 def _download(self, from_info, to_file, name=None, no_progress_bar=True):
163 bucket = self.gs.bucket(from_info.bucket)
164 blob = bucket.get_blob(from_info.path)
165 with Tqdm(
166 desc=name or from_info.path,
167 total=blob.size,
168 bytes=True,
169 disable=no_progress_bar,
170 ) as pbar:
171 with io.open(to_file, mode="wb") as fobj:
172 raw_write = fobj.write
173
174 def write(byte_string):
175 raw_write(byte_string)
176 pbar.update(len(byte_string))
177
178 fobj.write = write
179 blob.download_to_file(fobj)
180
181 def _generate_download_url(self, path_info, expires=3600):
182 expiration = timedelta(seconds=int(expires))
183
184 bucket = self.gs.bucket(path_info.bucket)
185 blob = bucket.get_blob(path_info.path)
186 if blob is None:
187 raise FileNotFoundError
188 return blob.generate_signed_url(expiration=expiration)
189
[end of dvc/remote/gs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/remote/gs.py b/dvc/remote/gs.py
--- a/dvc/remote/gs.py
+++ b/dvc/remote/gs.py
@@ -138,16 +138,45 @@
blob.delete()
- def _list_paths(self, bucket, prefix):
- for blob in self.gs.bucket(bucket).list_blobs(prefix=prefix):
+ def _list_paths(self, path_info, max_items=None):
+ for blob in self.gs.bucket(path_info.bucket).list_blobs(
+ prefix=path_info.path, max_results=max_items
+ ):
yield blob.name
def list_cache_paths(self):
- return self._list_paths(self.path_info.bucket, self.path_info.path)
+ return self._list_paths(self.path_info)
+
+ def walk_files(self, path_info):
+ for fname in self._list_paths(path_info / ""):
+ # skip nested empty directories
+ if fname.endswith("/"):
+ continue
+ yield path_info.replace(fname)
+
+ def makedirs(self, path_info):
+ self.gs.bucket(path_info.bucket).blob(
+ (path_info / "").path
+ ).upload_from_string("")
+
+ def isdir(self, path_info):
+ dir_path = path_info / ""
+ return bool(list(self._list_paths(dir_path, max_items=1)))
+
+ def isfile(self, path_info):
+ if path_info.path.endswith("/"):
+ return False
+
+ blob = self.gs.bucket(path_info.bucket).blob(path_info.path)
+ return blob.exists()
def exists(self, path_info):
- paths = set(self._list_paths(path_info.bucket, path_info.path))
- return any(path_info.path == path for path in paths)
+ """Check if the blob exists. If it does not exist,
+ it could be a part of a directory path.
+
+ eg: if `data/file.txt` exists, check for `data` should return True
+ """
+ return self.isfile(path_info) or self.isdir(path_info)
def _upload(self, from_file, to_info, name=None, no_progress_bar=True):
bucket = self.gs.bucket(to_info.bucket)
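
The shape of the fix follows from how GCS models "directories": a bucket only stores object keys, so a directory exists exactly when some key starts with the `path/` prefix. Below is a standalone sketch of that check using the same client library; bucket and path names are placeholders and credentials are assumed to be configured.

```python
from google.cloud import storage


def gs_isdir(client: storage.Client, bucket_name: str, path: str) -> bool:
    # A "directory" on GCS is just a key prefix; asking for at most one blob
    # under "path/" is enough to decide whether anything lives below it.
    prefix = path.rstrip("/") + "/"
    blobs = client.bucket(bucket_name).list_blobs(prefix=prefix, max_results=1)
    return any(True for _ in blobs)


def gs_isfile(client: storage.Client, bucket_name: str, path: str) -> bool:
    # A file must be an exact key; a key ending in "/" is only a directory marker.
    if path.endswith("/"):
        return False
    return client.bucket(bucket_name).blob(path).exists()


# exists() in the patched remote is then isfile() or isdir(), so a prefix like
# "data" that only has nested objects no longer reports "does not exist".
```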
| {"golden_diff": "diff --git a/dvc/remote/gs.py b/dvc/remote/gs.py\n--- a/dvc/remote/gs.py\n+++ b/dvc/remote/gs.py\n@@ -138,16 +138,45 @@\n \n blob.delete()\n \n- def _list_paths(self, bucket, prefix):\n- for blob in self.gs.bucket(bucket).list_blobs(prefix=prefix):\n+ def _list_paths(self, path_info, max_items=None):\n+ for blob in self.gs.bucket(path_info.bucket).list_blobs(\n+ prefix=path_info.path, max_results=max_items\n+ ):\n yield blob.name\n \n def list_cache_paths(self):\n- return self._list_paths(self.path_info.bucket, self.path_info.path)\n+ return self._list_paths(self.path_info)\n+\n+ def walk_files(self, path_info):\n+ for fname in self._list_paths(path_info / \"\"):\n+ # skip nested empty directories\n+ if fname.endswith(\"/\"):\n+ continue\n+ yield path_info.replace(fname)\n+\n+ def makedirs(self, path_info):\n+ self.gs.bucket(path_info.bucket).blob(\n+ (path_info / \"\").path\n+ ).upload_from_string(\"\")\n+\n+ def isdir(self, path_info):\n+ dir_path = path_info / \"\"\n+ return bool(list(self._list_paths(dir_path, max_items=1)))\n+\n+ def isfile(self, path_info):\n+ if path_info.path.endswith(\"/\"):\n+ return False\n+\n+ blob = self.gs.bucket(path_info.bucket).blob(path_info.path)\n+ return blob.exists()\n \n def exists(self, path_info):\n- paths = set(self._list_paths(path_info.bucket, path_info.path))\n- return any(path_info.path == path for path in paths)\n+ \"\"\"Check if the blob exists. If it does not exist,\n+ it could be a part of a directory path.\n+\n+ eg: if `data/file.txt` exists, check for `data` should return True\n+ \"\"\"\n+ return self.isfile(path_info) or self.isdir(path_info)\n \n def _upload(self, from_file, to_info, name=None, no_progress_bar=True):\n bucket = self.gs.bucket(to_info.bucket)\n", "issue": "gs: support directories as external dependencies/outputs\nI have a similar issue to #2678 but for GS.\r\n\r\nI have a bucket with the following structure\r\n```\r\nmy_bucket\r\n \u251c\u2500\u2500 data\r\n \u2502 \u251c\u2500\u2500 img1.png\r\n \u2502 \u251c\u2500\u2500 img2.png\r\n \u2502 \u251c\u2500\u2500 ...\r\n \u2514\u2500\u2500 cache\r\n```\r\n\r\nI have then created a clean project\r\n\r\n```\r\n$ git init\r\n$ dvc init\r\n$ dvc remote add gscache gs://my_bucket/cache\r\n$ dvc config cache.gs gscache\r\n$ dvc add gs://my_bucket/data\r\n```\r\n\r\nThe output is as follows:\r\n```\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588|Add 1/1 [00:00<00:00, 1.21file/s]\r\nERROR: output 'gs://my_bucket/data' does not exist\r\n```\r\n\r\nAdding a single file works (`dvc add gs://my_bucket/data/img1.png`).\r\n\r\nA more verbose version:\r\n```\r\n$ dvc add gs://my_bucket/data -v \r\nDEBUG: PRAGMA user_version;\r\nDEBUG: fetched: [(3,)]\r\nDEBUG: CREATE TABLE IF NOT EXISTS state (inode INTEGER PRIMARY KEY, mtime TEXT NOT NULL, size TEXT NOT NULL, md5 TEXT NOT NULL, timestamp TEXT NOT NULL)\r\nDEBUG: CREATE TABLE IF NOT EXISTS state_info (count INTEGER)\r\nDEBUG: CREATE TABLE IF NOT EXISTS link_state (path TEXT PRIMARY KEY, inode INTEGER NOT NULL, mtime TEXT NOT NULL)\r\nDEBUG: INSERT OR IGNORE INTO state_info (count) SELECT 0 WHERE NOT EXISTS (SELECT * FROM state_info)\r\nDEBUG: PRAGMA user_version = 3;\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588|Add 1/1 [00:01<00:00, 1.63s/file]\r\nDEBUG: SELECT count from state_info WHERE rowid=?\r\nDEBUG: fetched: [(0,)]\r\nDEBUG: UPDATE state_info SET count = ? 
WHERE rowid = ?\r\nERROR: output 'gs://my_bucket/data' does not exist\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/command/add.py\", line 25, in run\r\n fname=self.args.file,\r\n File \"/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/repo/__init__.py\", line 35, in wrapper\r\n ret = f(repo, *args, **kwargs)\r\n File \"/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/repo/scm_context.py\", line 4, in run\r\n result = method(repo, *args, **kw)\r\n File \"/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/repo/add.py\", line 53, in add\r\n stage.save()\r\n File \"/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/stage.py\", line 716, in save\r\n out.save()\r\n File \"/home/egnyte/anaconda3/envs/dvc/lib/python3.7/site-packages/dvc/output/base.py\", line 219, in save\r\n raise self.DoesNotExistError(self)\r\ndvc.output.base.OutputDoesNotExistError: output 'gs://my_bucket/data' does not exist\r\n------------------------------------------------------------\r\n```\r\n\r\n\r\n`dvc --version = 0.68.1`. I am using ubuntu, I installed using conda, python 3.7.5.\n", "before_files": [{"content": "from __future__ import unicode_literals, division\n\nimport logging\nfrom datetime import timedelta\nfrom functools import wraps\nimport io\nimport os.path\nimport threading\n\nfrom funcy import cached_property, wrap_prop\n\nfrom dvc.config import Config\nfrom dvc.exceptions import DvcException\nfrom dvc.path_info import CloudURLInfo\nfrom dvc.progress import Tqdm\nfrom dvc.remote.base import RemoteBASE\nfrom dvc.scheme import Schemes\nfrom dvc.utils.compat import FileNotFoundError # skipcq: PYL-W0622\n\nlogger = logging.getLogger(__name__)\n\n\ndef dynamic_chunk_size(func):\n @wraps(func)\n def wrapper(*args, **kwargs):\n import requests\n from google.cloud.storage.blob import Blob\n\n # `ConnectionError` may be due to too large `chunk_size`\n # (see [#2572]) so try halving on error.\n # Note: start with 40 * [default: 256K] = 10M.\n # Note: must be multiple of 256K.\n #\n # [#2572]: https://github.com/iterative/dvc/issues/2572\n\n # skipcq: PYL-W0212\n multiplier = 40\n while True:\n try:\n # skipcq: PYL-W0212\n chunk_size = Blob._CHUNK_SIZE_MULTIPLE * multiplier\n return func(*args, chunk_size=chunk_size, **kwargs)\n except requests.exceptions.ConnectionError:\n multiplier //= 2\n if not multiplier:\n raise\n\n return wrapper\n\n\n@dynamic_chunk_size\ndef _upload_to_bucket(\n bucket,\n from_file,\n to_info,\n chunk_size=None,\n name=None,\n no_progress_bar=True,\n):\n blob = bucket.blob(to_info.path, chunk_size=chunk_size)\n with Tqdm(\n desc=name or to_info.path,\n total=os.path.getsize(from_file),\n bytes=True,\n disable=no_progress_bar,\n ) as pbar:\n with io.open(from_file, mode=\"rb\") as fobj:\n raw_read = fobj.read\n\n def read(size=chunk_size):\n res = raw_read(size)\n if res:\n pbar.update(len(res))\n return res\n\n fobj.read = read\n blob.upload_from_file(fobj)\n\n\nclass RemoteGS(RemoteBASE):\n scheme = Schemes.GS\n path_cls = CloudURLInfo\n REQUIRES = {\"google-cloud-storage\": \"google.cloud.storage\"}\n PARAM_CHECKSUM = \"md5\"\n\n def __init__(self, repo, config):\n super(RemoteGS, self).__init__(repo, config)\n\n url = config.get(Config.SECTION_REMOTE_URL, \"gs:///\")\n self.path_info = self.path_cls(url)\n\n self.projectname = config.get(Config.SECTION_GCP_PROJECTNAME, None)\n self.credentialpath = 
config.get(Config.SECTION_GCP_CREDENTIALPATH)\n\n @wrap_prop(threading.Lock())\n @cached_property\n def gs(self):\n from google.cloud.storage import Client\n\n return (\n Client.from_service_account_json(self.credentialpath)\n if self.credentialpath\n else Client(self.projectname)\n )\n\n def get_file_checksum(self, path_info):\n import base64\n import codecs\n\n bucket = path_info.bucket\n path = path_info.path\n blob = self.gs.bucket(bucket).get_blob(path)\n if not blob:\n return None\n\n b64_md5 = blob.md5_hash\n md5 = base64.b64decode(b64_md5)\n return codecs.getencoder(\"hex\")(md5)[0].decode(\"utf-8\")\n\n def copy(self, from_info, to_info):\n from_bucket = self.gs.bucket(from_info.bucket)\n blob = from_bucket.get_blob(from_info.path)\n if not blob:\n msg = \"'{}' doesn't exist in the cloud\".format(from_info.path)\n raise DvcException(msg)\n\n to_bucket = self.gs.bucket(to_info.bucket)\n from_bucket.copy_blob(blob, to_bucket, new_name=to_info.path)\n\n def remove(self, path_info):\n if path_info.scheme != \"gs\":\n raise NotImplementedError\n\n logger.debug(\"Removing gs://{}\".format(path_info))\n blob = self.gs.bucket(path_info.bucket).get_blob(path_info.path)\n if not blob:\n return\n\n blob.delete()\n\n def _list_paths(self, bucket, prefix):\n for blob in self.gs.bucket(bucket).list_blobs(prefix=prefix):\n yield blob.name\n\n def list_cache_paths(self):\n return self._list_paths(self.path_info.bucket, self.path_info.path)\n\n def exists(self, path_info):\n paths = set(self._list_paths(path_info.bucket, path_info.path))\n return any(path_info.path == path for path in paths)\n\n def _upload(self, from_file, to_info, name=None, no_progress_bar=True):\n bucket = self.gs.bucket(to_info.bucket)\n _upload_to_bucket(\n bucket,\n from_file,\n to_info,\n name=name,\n no_progress_bar=no_progress_bar,\n )\n\n def _download(self, from_info, to_file, name=None, no_progress_bar=True):\n bucket = self.gs.bucket(from_info.bucket)\n blob = bucket.get_blob(from_info.path)\n with Tqdm(\n desc=name or from_info.path,\n total=blob.size,\n bytes=True,\n disable=no_progress_bar,\n ) as pbar:\n with io.open(to_file, mode=\"wb\") as fobj:\n raw_write = fobj.write\n\n def write(byte_string):\n raw_write(byte_string)\n pbar.update(len(byte_string))\n\n fobj.write = write\n blob.download_to_file(fobj)\n\n def _generate_download_url(self, path_info, expires=3600):\n expiration = timedelta(seconds=int(expires))\n\n bucket = self.gs.bucket(path_info.bucket)\n blob = bucket.get_blob(path_info.path)\n if blob is None:\n raise FileNotFoundError\n return blob.generate_signed_url(expiration=expiration)\n", "path": "dvc/remote/gs.py"}]} | 3,124 | 489 |
gh_patches_debug_20763 | rasdani/github-patches | git_diff | nv-legate__cunumeric-278 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add some missing array methods
A few numpy array methods were discovered to be missing:
```
"loads",
"mafromtxt",
"matmul",
"ndfromtxt",
```
These should be copied over as necessary, and the comparison blocklist updated.
ref: https://github.com/nv-legate/cunumeric/pull/224#discussion_r830657282
</issue>
<code>
[start of docs/cunumeric/source/comparison/_comparison_generator.py]
1 import importlib
2
3 import numpy
4
5 blocklist = [
6 "abs",
7 "add_docstring",
8 "add_newdoc",
9 "add_newdoc_ufunc",
10 "alen",
11 "alltrue",
12 "bitwise_not",
13 "compare_chararrays",
14 "cumproduct",
15 "fastCopyAndTranspose",
16 "get_array_wrap",
17 "iterable",
18 "loads",
19 "mafromtxt",
20 "max",
21 "min",
22 "ndfromtxt",
23 "ndim",
24 "product",
25 "recfromcsv",
26 "recfromtxt",
27 "round",
28 "safe_eval",
29 "set_numeric_ops",
30 "size",
31 "sometrue",
32 "test",
33 ]
34
35
36 def check_ufunc(obj, n):
37 try:
38 return isinstance(getattr(obj, n), numpy.ufunc)
39 except: # noqa E722
40 return False
41
42
43 def _filter(obj, n, ufuncs=False):
44 is_ufunc = check_ufunc(obj, n)
45 if not ufuncs:
46 is_ufunc = not is_ufunc
47
48 try:
49 return (
50 n not in blocklist
51 and callable(getattr(obj, n)) # callable
52 and not isinstance(getattr(obj, n), type) # not class
53 and n[0].islower() # starts with lower char
54 and not n.startswith("__") # not special methods
55 and is_ufunc
56 )
57 except: # noqa: E722
58 return False
59
60
61 def _get_functions(obj, ufuncs=False):
62 return set([n for n in dir(obj) if (_filter(obj, n, ufuncs))])
63
64
65 def _import(mod, klass):
66 try:
67 obj = importlib.import_module(mod)
68 except ModuleNotFoundError:
69 return None, None
70
71 if klass:
72 obj = getattr(obj, klass)
73 return obj, ":meth:`{}.{}.{{}}`".format(mod, klass)
74 else:
75 # ufunc is not a function
76 return obj, ":obj:`{}.{{}}`".format(mod)
77
78
79 def _section(
80 header, mod_ext, other_lib, klass=None, exclude_mod=None, ufuncs=False
81 ):
82 base_mod = "numpy" + mod_ext
83 other_mod = other_lib + mod_ext
84
85 base_funcs = []
86 base_obj, base_fmt = _import(base_mod, klass)
87 base_funcs = _get_functions(base_obj, ufuncs)
88 lg_obj, lg_fmt = _import(other_mod, klass)
89
90 lg_funcs = []
91 for f in _get_functions(lg_obj):
92 obj = getattr(lg_obj, f)
93 if getattr(obj, "_cunumeric_implemented", False):
94 lg_funcs.append(f)
95 lg_funcs = set(lg_funcs)
96
97 if exclude_mod:
98 exclude_obj, _ = _import(exclude_mod, klass)
99 exclude_funcs = _get_functions(exclude_obj)
100 base_funcs -= exclude_funcs
101 lg_funcs -= exclude_funcs
102
103 buf = [
104 header,
105 "~" * len(header),
106 "",
107 ]
108
109 buf += [
110 ".. currentmodule:: cunumeric",
111 "",
112 ".. autosummary::",
113 " :toctree: generated/",
114 "",
115 ]
116
117 buf += [
118 ".. csv-table::",
119 " :header: NumPy, {}, single-GPU/CPU, multi-GPU/CPU".format(
120 other_mod
121 ),
122 "",
123 ]
124 for f in sorted(base_funcs):
125 base_cell = base_fmt.format(f)
126 lg_cell = r"\-"
127 single_gpu_cell = ""
128 multi_gpu_cell = ""
129 if f in lg_funcs:
130 lg_cell = lg_fmt.format(f)
131 obj = getattr(lg_obj, f)
132 if obj.__doc__ is not None and "Single GPU" in obj.__doc__:
133 multi_gpu_cell = "No"
134 single_gpu_cell = "Yes"
135 elif obj.__doc__ is not None and "Multiple GPUs" in obj.__doc__:
136 multi_gpu_cell = "Yes"
137 single_gpu_cell = "Yes"
138 if getattr(base_obj, f) is getattr(lg_obj, f):
139 lg_cell = "{} (*alias of* {})".format(lg_cell, base_cell)
140 line = " {}, {}, {}, {}".format(
141 base_cell, lg_cell, single_gpu_cell, multi_gpu_cell
142 )
143 buf.append(line)
144
145 buf += [
146 "",
147 ".. Summary:",
148 " Number of NumPy functions: {}".format(len(base_funcs)),
149 " Number of functions covered by "
150 f"{other_lib}: {len(lg_funcs & base_funcs)}",
151 ]
152 buf += [
153 "",
154 ]
155 return buf
156
157
158 def generate(other_lib):
159 buf = []
160 buf += [
161 "NumPy vs cuNumeric APIs",
162 "------------------------",
163 "",
164 ]
165 buf += _section("Module-Level", "", other_lib)
166 buf += _section("Ufuncs", "", other_lib, ufuncs=True)
167 buf += _section("Multi-Dimensional Array", "", other_lib, klass="ndarray")
168 buf += _section("Linear Algebra", ".linalg", other_lib)
169 buf += _section("Discrete Fourier Transform", ".fft", other_lib)
170 buf += _section("Random Sampling", ".random", other_lib)
171
172 return "\n".join(buf)
173
174
175 if __name__ == "__main__":
176 print(generate("cunumeric"))
177
[end of docs/cunumeric/source/comparison/_comparison_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/cunumeric/source/comparison/_comparison_generator.py b/docs/cunumeric/source/comparison/_comparison_generator.py
--- a/docs/cunumeric/source/comparison/_comparison_generator.py
+++ b/docs/cunumeric/source/comparison/_comparison_generator.py
@@ -15,11 +15,8 @@
"fastCopyAndTranspose",
"get_array_wrap",
"iterable",
- "loads",
- "mafromtxt",
"max",
"min",
- "ndfromtxt",
"ndim",
"product",
"recfromcsv",
@@ -32,6 +29,13 @@
"test",
]
+# these do not have valid intersphinx references
+missing_numpy_refs = {
+ "loads",
+ "mafromtxt",
+ "ndfromtxt",
+}
+
def check_ufunc(obj, n):
try:
@@ -122,7 +126,10 @@
"",
]
for f in sorted(base_funcs):
- base_cell = base_fmt.format(f)
+ if f not in missing_numpy_refs:
+ base_cell = base_fmt.format(f)
+ else:
+ base_cell = f"``numpy.{f}``"
lg_cell = r"\-"
single_gpu_cell = ""
multi_gpu_cell = ""
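
The effect of the change is easiest to see on a single table cell. A small self-contained sketch of the fallback formatting follows; the names and format string mirror the script above.

```python
# Names that exist in numpy's namespace but have no valid intersphinx target.
missing_numpy_refs = {"loads", "mafromtxt", "ndfromtxt"}

base_fmt = ":obj:`numpy.{}`"


def numpy_cell(name: str) -> str:
    # Fall back to literal text for names Sphinx cannot resolve, so the
    # comparison table still builds instead of emitting broken references.
    if name in missing_numpy_refs:
        return f"``numpy.{name}``"
    return base_fmt.format(name)


print(numpy_cell("matmul"))  # :obj:`numpy.matmul`
print(numpy_cell("loads"))   # ``numpy.loads``
```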
| {"golden_diff": "diff --git a/docs/cunumeric/source/comparison/_comparison_generator.py b/docs/cunumeric/source/comparison/_comparison_generator.py\n--- a/docs/cunumeric/source/comparison/_comparison_generator.py\n+++ b/docs/cunumeric/source/comparison/_comparison_generator.py\n@@ -15,11 +15,8 @@\n \"fastCopyAndTranspose\",\n \"get_array_wrap\",\n \"iterable\",\n- \"loads\",\n- \"mafromtxt\",\n \"max\",\n \"min\",\n- \"ndfromtxt\",\n \"ndim\",\n \"product\",\n \"recfromcsv\",\n@@ -32,6 +29,13 @@\n \"test\",\n ]\n \n+# these do not have valid intersphinx references\n+missing_numpy_refs = {\n+ \"loads\",\n+ \"mafromtxt\",\n+ \"ndfromtxt\",\n+}\n+\n \n def check_ufunc(obj, n):\n try:\n@@ -122,7 +126,10 @@\n \"\",\n ]\n for f in sorted(base_funcs):\n- base_cell = base_fmt.format(f)\n+ if f not in missing_numpy_refs:\n+ base_cell = base_fmt.format(f)\n+ else:\n+ base_cell = f\"``numpy.{f}``\"\n lg_cell = r\"\\-\"\n single_gpu_cell = \"\"\n multi_gpu_cell = \"\"\n", "issue": "Add some missing array methods \nA few numpy array methods were discovered to be missing:\r\n```\r\n \"loads\",\r\n \"mafromtxt\",\r\n \"matmul\",\r\n \"ndfromtxt\",\r\n```\r\n\r\nThese should be copied over as necessary, and the comparison blocklist updated. \r\n\r\nref: https://github.com/nv-legate/cunumeric/pull/224#discussion_r830657282\r\n\n", "before_files": [{"content": "import importlib\n\nimport numpy\n\nblocklist = [\n \"abs\",\n \"add_docstring\",\n \"add_newdoc\",\n \"add_newdoc_ufunc\",\n \"alen\",\n \"alltrue\",\n \"bitwise_not\",\n \"compare_chararrays\",\n \"cumproduct\",\n \"fastCopyAndTranspose\",\n \"get_array_wrap\",\n \"iterable\",\n \"loads\",\n \"mafromtxt\",\n \"max\",\n \"min\",\n \"ndfromtxt\",\n \"ndim\",\n \"product\",\n \"recfromcsv\",\n \"recfromtxt\",\n \"round\",\n \"safe_eval\",\n \"set_numeric_ops\",\n \"size\",\n \"sometrue\",\n \"test\",\n]\n\n\ndef check_ufunc(obj, n):\n try:\n return isinstance(getattr(obj, n), numpy.ufunc)\n except: # noqa E722\n return False\n\n\ndef _filter(obj, n, ufuncs=False):\n is_ufunc = check_ufunc(obj, n)\n if not ufuncs:\n is_ufunc = not is_ufunc\n\n try:\n return (\n n not in blocklist\n and callable(getattr(obj, n)) # callable\n and not isinstance(getattr(obj, n), type) # not class\n and n[0].islower() # starts with lower char\n and not n.startswith(\"__\") # not special methods\n and is_ufunc\n )\n except: # noqa: E722\n return False\n\n\ndef _get_functions(obj, ufuncs=False):\n return set([n for n in dir(obj) if (_filter(obj, n, ufuncs))])\n\n\ndef _import(mod, klass):\n try:\n obj = importlib.import_module(mod)\n except ModuleNotFoundError:\n return None, None\n\n if klass:\n obj = getattr(obj, klass)\n return obj, \":meth:`{}.{}.{{}}`\".format(mod, klass)\n else:\n # ufunc is not a function\n return obj, \":obj:`{}.{{}}`\".format(mod)\n\n\ndef _section(\n header, mod_ext, other_lib, klass=None, exclude_mod=None, ufuncs=False\n):\n base_mod = \"numpy\" + mod_ext\n other_mod = other_lib + mod_ext\n\n base_funcs = []\n base_obj, base_fmt = _import(base_mod, klass)\n base_funcs = _get_functions(base_obj, ufuncs)\n lg_obj, lg_fmt = _import(other_mod, klass)\n\n lg_funcs = []\n for f in _get_functions(lg_obj):\n obj = getattr(lg_obj, f)\n if getattr(obj, \"_cunumeric_implemented\", False):\n lg_funcs.append(f)\n lg_funcs = set(lg_funcs)\n\n if exclude_mod:\n exclude_obj, _ = _import(exclude_mod, klass)\n exclude_funcs = _get_functions(exclude_obj)\n base_funcs -= exclude_funcs\n lg_funcs -= exclude_funcs\n\n buf = [\n header,\n \"~\" * len(header),\n 
\"\",\n ]\n\n buf += [\n \".. currentmodule:: cunumeric\",\n \"\",\n \".. autosummary::\",\n \" :toctree: generated/\",\n \"\",\n ]\n\n buf += [\n \".. csv-table::\",\n \" :header: NumPy, {}, single-GPU/CPU, multi-GPU/CPU\".format(\n other_mod\n ),\n \"\",\n ]\n for f in sorted(base_funcs):\n base_cell = base_fmt.format(f)\n lg_cell = r\"\\-\"\n single_gpu_cell = \"\"\n multi_gpu_cell = \"\"\n if f in lg_funcs:\n lg_cell = lg_fmt.format(f)\n obj = getattr(lg_obj, f)\n if obj.__doc__ is not None and \"Single GPU\" in obj.__doc__:\n multi_gpu_cell = \"No\"\n single_gpu_cell = \"Yes\"\n elif obj.__doc__ is not None and \"Multiple GPUs\" in obj.__doc__:\n multi_gpu_cell = \"Yes\"\n single_gpu_cell = \"Yes\"\n if getattr(base_obj, f) is getattr(lg_obj, f):\n lg_cell = \"{} (*alias of* {})\".format(lg_cell, base_cell)\n line = \" {}, {}, {}, {}\".format(\n base_cell, lg_cell, single_gpu_cell, multi_gpu_cell\n )\n buf.append(line)\n\n buf += [\n \"\",\n \".. Summary:\",\n \" Number of NumPy functions: {}\".format(len(base_funcs)),\n \" Number of functions covered by \"\n f\"{other_lib}: {len(lg_funcs & base_funcs)}\",\n ]\n buf += [\n \"\",\n ]\n return buf\n\n\ndef generate(other_lib):\n buf = []\n buf += [\n \"NumPy vs cuNumeric APIs\",\n \"------------------------\",\n \"\",\n ]\n buf += _section(\"Module-Level\", \"\", other_lib)\n buf += _section(\"Ufuncs\", \"\", other_lib, ufuncs=True)\n buf += _section(\"Multi-Dimensional Array\", \"\", other_lib, klass=\"ndarray\")\n buf += _section(\"Linear Algebra\", \".linalg\", other_lib)\n buf += _section(\"Discrete Fourier Transform\", \".fft\", other_lib)\n buf += _section(\"Random Sampling\", \".random\", other_lib)\n\n return \"\\n\".join(buf)\n\n\nif __name__ == \"__main__\":\n print(generate(\"cunumeric\"))\n", "path": "docs/cunumeric/source/comparison/_comparison_generator.py"}]} | 2,252 | 294 |
gh_patches_debug_17995 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-3822 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The Plone site id is a reserved id and can't be used anywhere in Zope as a short name
## BUG/PROBLEM REPORT (OR OTHER COMMON ISSUE)
### What I did:
Add a Plone site with the id example.
Add another Plone site with any other name.
In the second Plone site, try to create a Document with the title example.
You'll get a trace:
```
Traceback (innermost last):
Module ZPublisher.WSGIPublisher, line 181, in transaction_pubevents
Module ZPublisher.WSGIPublisher, line 390, in publish_module
Module ZPublisher.WSGIPublisher, line 285, in publish
Module ZPublisher.mapply, line 85, in mapply
Module ZPublisher.WSGIPublisher, line 68, in call_object
Module plone.z3cform.layout, line 61, in __call__
Module plone.z3cform.layout, line 45, in update
Module plone.dexterity.browser.add, line 136, in update
Module plone.z3cform.fieldsets.extensible, line 62, in update
Module plone.z3cform.patch, line 31, in GroupForm_update
Module z3c.form.group, line 145, in update
Module plone.app.z3cform.csrf, line 21, in execute
Module z3c.form.action, line 98, in execute
Module z3c.form.button, line 301, in __call__
Module z3c.form.button, line 159, in __call__
Module plone.dexterity.browser.add, line 114, in handleAdd
Module z3c.form.form, line 265, in createAndAdd
Module plone.dexterity.browser.add, line 89, in add
Module plone.dexterity.utils, line 174, in addContentToContainer
Module Products.BTreeFolder2.BTreeFolder2, line 434, in _setObject
Module plone.folder.ordered, line 253, in _checkId
Module Products.CMFCore.PortalFolder, line 318, in _checkId
Module OFS.ObjectManager, line 130, in checkValidId
zExceptions.BadRequest: The id "example" is reserved.
```
### What I expect to happen:
A page with the id example is created.
### What actually happened:
Got the trace above.
A Plone site id should not be blocked from being used elsewhere as an object id.
### What version of Plone/ Addons I am using:
Plone 6.0.6
</issue>
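The traceback ends in `OFS.ObjectManager.checkValidId`, which suggests the following mechanism: `UniqueObject` (from `Products.CMFCore.utils`, one of `PloneSite`'s bases in the code below) marks an object as unique via `__replaceable__`, and `checkValidId` treats an id that resolves through acquisition to such an object as reserved everywhere beneath it. The sketch below is a paraphrase from memory, not verbatim Zope code, and the flag values are assumed:

```python
# Paraphrased sketch of the reserved-id check; flag values and messages are assumed.
from Acquisition import aq_base
from zExceptions import BadRequest

NOT_REPLACEABLE, REPLACEABLE, UNIQUE = 0, 1, 2  # assumed bit flags


def check_valid_id(container, id):
    obj = getattr(container, id, None)  # acquisition can find /example from a sibling site
    if obj is None:
        return
    flags = getattr(obj, "__replaceable__", NOT_REPLACEABLE)
    if hasattr(aq_base(container), id):
        # A real local object with that id already exists in this container.
        if not flags & REPLACEABLE:
            raise BadRequest('The id "%s" is invalid - it is already in use.' % id)
    elif flags & UNIQUE:
        # Only acquired, but flagged unique: UniqueObject sets this flag, and
        # PloneSite inherits UniqueObject, so the sibling site id is "reserved".
        raise BadRequest('The id "%s" is reserved.' % id)
```

If that reading is right, dropping `UniqueObject` from the `PloneSite` bases removes the flag and with it the instance-wide reservation, which is exactly what the accepted patch below does.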
<code>
[start of Products/CMFPlone/Portal.py]
1 from AccessControl import ClassSecurityInfo
2 from AccessControl import Unauthorized
3 from AccessControl.class_init import InitializeClass
4 from Acquisition import aq_base
5 from ComputedAttribute import ComputedAttribute
6 from five.localsitemanager.registry import PersistentComponents
7 from OFS.ObjectManager import REPLACEABLE
8 from plone.base.interfaces.siteroot import IPloneSiteRoot
9 from plone.base.interfaces.syndication import ISyndicatable
10 from plone.base.permissions import AddPortalContent
11 from plone.base.permissions import AddPortalFolders
12 from plone.base.permissions import ListPortalMembers
13 from plone.base.permissions import ModifyPortalContent
14 from plone.base.permissions import ReplyToItem
15 from plone.base.permissions import View
16 from plone.dexterity.content import Container
17 from Products.CMFCore import permissions
18 from Products.CMFCore.interfaces import IContentish
19 from Products.CMFCore.interfaces import ISiteRoot
20 from Products.CMFCore.permissions import AccessContentsInformation
21 from Products.CMFCore.permissions import AddPortalMember
22 from Products.CMFCore.permissions import MailForgottenPassword
23 from Products.CMFCore.permissions import RequestReview
24 from Products.CMFCore.permissions import ReviewPortalContent
25 from Products.CMFCore.permissions import SetOwnPassword
26 from Products.CMFCore.permissions import SetOwnProperties
27 from Products.CMFCore.PortalFolder import PortalFolderBase
28 from Products.CMFCore.PortalObject import PortalObjectBase
29 from Products.CMFCore.Skinnable import SkinnableObjectManager
30 from Products.CMFCore.utils import _checkPermission
31 from Products.CMFCore.utils import getToolByName
32 from Products.CMFCore.utils import UniqueObject
33 from Products.CMFPlone import bbb
34 from Products.Five.component.interfaces import IObjectManagerSite
35 from zope.event import notify
36 from zope.interface import classImplementsOnly
37 from zope.interface import implementedBy
38 from zope.interface import implementer
39 from zope.interface.interfaces import ComponentLookupError
40 from zope.traversing.interfaces import BeforeTraverseEvent
41
42
43 if bbb.HAS_ZSERVER:
44 from webdav.NullResource import NullResource
45
46
47 @implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)
48 class PloneSite(Container, SkinnableObjectManager, UniqueObject):
49 """The Plone site object."""
50
51 security = ClassSecurityInfo()
52 meta_type = portal_type = "Plone Site"
53
54 # Ensure certain attributes come from the correct base class.
55 _checkId = SkinnableObjectManager._checkId
56 manage_main = PortalFolderBase.manage_main
57
58 def __getattr__(self, name):
59 try:
60 # Try DX
61 return super().__getattr__(name)
62 except AttributeError:
63 # Check portal_skins
64 return SkinnableObjectManager.__getattr__(self, name)
65
66 def __setattr__(self, name, obj):
67 # handle re setting an item as an attribute
68 if self._tree is not None and name in self:
69 del self[name]
70 self[name] = obj
71 else:
72 super().__setattr__(name, obj)
73
74 def __delattr__(self, name):
75 try:
76 return super().__delattr__(name)
77 except AttributeError:
78 return self.__delitem__(name)
79
80 # Removes the 'Components Folder'
81
82 manage_options = Container.manage_options[:2] + Container.manage_options[3:]
83
84 __ac_permissions__ = (
85 (AccessContentsInformation, ()),
86 (AddPortalMember, ()),
87 (SetOwnPassword, ()),
88 (SetOwnProperties, ()),
89 (MailForgottenPassword, ()),
90 (RequestReview, ()),
91 (ReviewPortalContent, ()),
92 (AddPortalContent, ()),
93 (AddPortalFolders, ()),
94 (ListPortalMembers, ()),
95 (ReplyToItem, ()),
96 (View, ("isEffective",)),
97 (
98 ModifyPortalContent,
99 (
100 "manage_cutObjects",
101 "manage_pasteObjects",
102 "manage_renameForm",
103 "manage_renameObject",
104 "manage_renameObjects",
105 ),
106 ),
107 )
108
109 # Switch off ZMI ordering interface as it assumes a slightly
110 # different functionality
111 has_order_support = 0
112 management_page_charset = "utf-8"
113 _default_sort_key = "id"
114 _properties = (
115 {"id": "title", "type": "string", "mode": "w"},
116 {"id": "description", "type": "text", "mode": "w"},
117 )
118 title = ""
119 description = ""
120 icon = "misc_/CMFPlone/tool.gif"
121
122 # From PortalObjectBase
123 def __init__(self, id, title=""):
124 super().__init__(id, title=title)
125 components = PersistentComponents("++etc++site")
126 components.__parent__ = self
127 self.setSiteManager(components)
128
129 # From PortalObjectBase
130 def __before_publishing_traverse__(self, arg1, arg2=None):
131 """Pre-traversal hook."""
132 # XXX hack around a bug(?) in BeforeTraverse.MultiHook
133 REQUEST = arg2 or arg1
134
135 try:
136 notify(BeforeTraverseEvent(self, REQUEST))
137 except ComponentLookupError:
138 # allow ZMI access, even if the portal's site manager is missing
139 pass
140 self.setupCurrentSkin(REQUEST)
141
142 super().__before_publishing_traverse__(arg1, arg2)
143
144 # Concept from OFS.OrderSupport
145 @security.protected(permissions.AccessContentsInformation)
146 def tpValues(self):
147 # Return a list of subobjects, used by ZMI tree tag (and only there).
148 # see also https://github.com/plone/Products.CMFPlone/issues/3323
149 return sorted(
150 (
151 obj
152 for obj in self.objectValues()
153 if getattr(aq_base(obj), "isPrincipiaFolderish", False)
154 ),
155 key=lambda obj: obj.getId(),
156 )
157
158 def __browser_default__(self, request):
159 """Set default so we can return whatever we want instead
160 of index_html"""
161 return getToolByName(self, "plone_utils").browserDefault(self)
162
163 def index_html(self):
164 """Acquire if not present."""
165 request = getattr(self, "REQUEST", None)
166 if (
167 request is not None
168 and "REQUEST_METHOD" in request
169 and request.maybe_webdav_client
170 ):
171 method = request["REQUEST_METHOD"]
172 if bbb.HAS_ZSERVER and method in ("PUT",):
173 # Very likely a WebDAV client trying to create something
174 result = NullResource(self, "index_html")
175 setattr(result, "__replaceable__", REPLACEABLE)
176 return result
177 elif method not in ("GET", "HEAD", "POST"):
178 raise AttributeError("index_html")
179 # Acquire from skin.
180 _target = self.__getattr__("index_html")
181 result = aq_base(_target).__of__(self)
182 setattr(result, "__replaceable__", REPLACEABLE)
183 return result
184
185 index_html = ComputedAttribute(index_html, 1)
186
187 def manage_beforeDelete(self, container, item):
188 # Should send out an Event before Site is being deleted.
189 self.removal_inprogress = 1
190 PloneSite.inheritedAttribute("manage_beforeDelete")(self, container, item)
191
192 @security.protected(permissions.DeleteObjects)
193 def manage_delObjects(self, ids=None, REQUEST=None):
194 """We need to enforce security."""
195 if ids is None:
196 ids = []
197 if isinstance(ids, str):
198 ids = [ids]
199 for id in ids:
200 item = self._getOb(id)
201 if not _checkPermission(permissions.DeleteObjects, item):
202 raise Unauthorized("Do not have permissions to remove this object")
203 return PortalObjectBase.manage_delObjects(self, ids, REQUEST=REQUEST)
204
205 def view(self):
206 """Ensure that we get a plain view of the object, via a delegation to
207 __call__(), which is defined in BrowserDefaultMixin
208 """
209 return self()
210
211 @security.protected(permissions.AccessContentsInformation)
212 def folderlistingFolderContents(self, contentFilter=None):
213 """Calls listFolderContents in protected only by ACI so that
214 folder_listing can work without the List folder contents permission.
215
216 This is copied from Archetypes Basefolder and is needed by the
217 reference browser.
218 """
219 return self.listFolderContents(contentFilter)
220
221 def isEffective(self, date):
222 # Override DefaultDublinCoreImpl's test, since we are always viewable.
223 return 1
224
225
226 # Remove the IContentish interface so we don't listen to events that won't
227 # apply to the site root, ie handleUidAnnotationEvent
228 classImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)
229
230 InitializeClass(PloneSite)
231
[end of Products/CMFPlone/Portal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/Portal.py b/Products/CMFPlone/Portal.py
--- a/Products/CMFPlone/Portal.py
+++ b/Products/CMFPlone/Portal.py
@@ -29,7 +29,6 @@
from Products.CMFCore.Skinnable import SkinnableObjectManager
from Products.CMFCore.utils import _checkPermission
from Products.CMFCore.utils import getToolByName
-from Products.CMFCore.utils import UniqueObject
from Products.CMFPlone import bbb
from Products.Five.component.interfaces import IObjectManagerSite
from zope.event import notify
@@ -45,7 +44,7 @@
@implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)
-class PloneSite(Container, SkinnableObjectManager, UniqueObject):
+class PloneSite(Container, SkinnableObjectManager):
"""The Plone site object."""
security = ClassSecurityInfo()
| {"golden_diff": "diff --git a/Products/CMFPlone/Portal.py b/Products/CMFPlone/Portal.py\n--- a/Products/CMFPlone/Portal.py\n+++ b/Products/CMFPlone/Portal.py\n@@ -29,7 +29,6 @@\n from Products.CMFCore.Skinnable import SkinnableObjectManager\n from Products.CMFCore.utils import _checkPermission\n from Products.CMFCore.utils import getToolByName\n-from Products.CMFCore.utils import UniqueObject\n from Products.CMFPlone import bbb\n from Products.Five.component.interfaces import IObjectManagerSite\n from zope.event import notify\n@@ -45,7 +44,7 @@\n \n \n @implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)\n-class PloneSite(Container, SkinnableObjectManager, UniqueObject):\n+class PloneSite(Container, SkinnableObjectManager):\n \"\"\"The Plone site object.\"\"\"\n \n security = ClassSecurityInfo()\n", "issue": "Plone site id is a reserved id and can't be used any where in Zope as an short name \n## BUG/PROBLEM REPORT (OR OTHER COMMON ISSUE)\r\n\r\n### What I did:\r\n\r\nAdd a Plone site with the id example.\r\nAdd another Plone side with any other name.\r\nIn the second Plone side, try to create a Document with the title example.\r\n\r\nYou'll get a trace:\r\n\r\n```\r\nTraceback (innermost last):\r\n Module ZPublisher.WSGIPublisher, line 181, in transaction_pubevents\r\n Module ZPublisher.WSGIPublisher, line 390, in publish_module\r\n Module ZPublisher.WSGIPublisher, line 285, in publish\r\n Module ZPublisher.mapply, line 85, in mapply\r\n Module ZPublisher.WSGIPublisher, line 68, in call_object\r\n Module plone.z3cform.layout, line 61, in __call__\r\n Module plone.z3cform.layout, line 45, in update\r\n Module plone.dexterity.browser.add, line 136, in update\r\n Module plone.z3cform.fieldsets.extensible, line 62, in update\r\n Module plone.z3cform.patch, line 31, in GroupForm_update\r\n Module z3c.form.group, line 145, in update\r\n Module plone.app.z3cform.csrf, line 21, in execute\r\n Module z3c.form.action, line 98, in execute\r\n Module z3c.form.button, line 301, in __call__\r\n Module z3c.form.button, line 159, in __call__\r\n Module plone.dexterity.browser.add, line 114, in handleAdd\r\n Module z3c.form.form, line 265, in createAndAdd\r\n Module plone.dexterity.browser.add, line 89, in add\r\n Module plone.dexterity.utils, line 174, in addContentToContainer\r\n Module Products.BTreeFolder2.BTreeFolder2, line 434, in _setObject\r\n Module plone.folder.ordered, line 253, in _checkId\r\n Module Products.CMFCore.PortalFolder, line 318, in _checkId\r\n Module OFS.ObjectManager, line 130, in checkValidId\r\nzExceptions.BadRequest: The id \"example\" is reserved.\r\n```\r\n\r\n### What I expect to happen:\r\n\r\nHaving a page with the id example.\r\n\r\n### What actually happened:\r\n\r\nGot the trace above.\r\nA Plone site id should not be blocked to be used in other places for object id's.\r\n\r\n### What version of Plone/ Addons I am using:\r\n\r\nPlone 6.0.6\n", "before_files": [{"content": "from AccessControl import ClassSecurityInfo\nfrom AccessControl import Unauthorized\nfrom AccessControl.class_init import InitializeClass\nfrom Acquisition import aq_base\nfrom ComputedAttribute import ComputedAttribute\nfrom five.localsitemanager.registry import PersistentComponents\nfrom OFS.ObjectManager import REPLACEABLE\nfrom plone.base.interfaces.siteroot import IPloneSiteRoot\nfrom plone.base.interfaces.syndication import ISyndicatable\nfrom plone.base.permissions import AddPortalContent\nfrom plone.base.permissions import AddPortalFolders\nfrom plone.base.permissions 
import ListPortalMembers\nfrom plone.base.permissions import ModifyPortalContent\nfrom plone.base.permissions import ReplyToItem\nfrom plone.base.permissions import View\nfrom plone.dexterity.content import Container\nfrom Products.CMFCore import permissions\nfrom Products.CMFCore.interfaces import IContentish\nfrom Products.CMFCore.interfaces import ISiteRoot\nfrom Products.CMFCore.permissions import AccessContentsInformation\nfrom Products.CMFCore.permissions import AddPortalMember\nfrom Products.CMFCore.permissions import MailForgottenPassword\nfrom Products.CMFCore.permissions import RequestReview\nfrom Products.CMFCore.permissions import ReviewPortalContent\nfrom Products.CMFCore.permissions import SetOwnPassword\nfrom Products.CMFCore.permissions import SetOwnProperties\nfrom Products.CMFCore.PortalFolder import PortalFolderBase\nfrom Products.CMFCore.PortalObject import PortalObjectBase\nfrom Products.CMFCore.Skinnable import SkinnableObjectManager\nfrom Products.CMFCore.utils import _checkPermission\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFCore.utils import UniqueObject\nfrom Products.CMFPlone import bbb\nfrom Products.Five.component.interfaces import IObjectManagerSite\nfrom zope.event import notify\nfrom zope.interface import classImplementsOnly\nfrom zope.interface import implementedBy\nfrom zope.interface import implementer\nfrom zope.interface.interfaces import ComponentLookupError\nfrom zope.traversing.interfaces import BeforeTraverseEvent\n\n\nif bbb.HAS_ZSERVER:\n from webdav.NullResource import NullResource\n\n\n@implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)\nclass PloneSite(Container, SkinnableObjectManager, UniqueObject):\n \"\"\"The Plone site object.\"\"\"\n\n security = ClassSecurityInfo()\n meta_type = portal_type = \"Plone Site\"\n\n # Ensure certain attributes come from the correct base class.\n _checkId = SkinnableObjectManager._checkId\n manage_main = PortalFolderBase.manage_main\n\n def __getattr__(self, name):\n try:\n # Try DX\n return super().__getattr__(name)\n except AttributeError:\n # Check portal_skins\n return SkinnableObjectManager.__getattr__(self, name)\n\n def __setattr__(self, name, obj):\n # handle re setting an item as an attribute\n if self._tree is not None and name in self:\n del self[name]\n self[name] = obj\n else:\n super().__setattr__(name, obj)\n\n def __delattr__(self, name):\n try:\n return super().__delattr__(name)\n except AttributeError:\n return self.__delitem__(name)\n\n # Removes the 'Components Folder'\n\n manage_options = Container.manage_options[:2] + Container.manage_options[3:]\n\n __ac_permissions__ = (\n (AccessContentsInformation, ()),\n (AddPortalMember, ()),\n (SetOwnPassword, ()),\n (SetOwnProperties, ()),\n (MailForgottenPassword, ()),\n (RequestReview, ()),\n (ReviewPortalContent, ()),\n (AddPortalContent, ()),\n (AddPortalFolders, ()),\n (ListPortalMembers, ()),\n (ReplyToItem, ()),\n (View, (\"isEffective\",)),\n (\n ModifyPortalContent,\n (\n \"manage_cutObjects\",\n \"manage_pasteObjects\",\n \"manage_renameForm\",\n \"manage_renameObject\",\n \"manage_renameObjects\",\n ),\n ),\n )\n\n # Switch off ZMI ordering interface as it assumes a slightly\n # different functionality\n has_order_support = 0\n management_page_charset = \"utf-8\"\n _default_sort_key = \"id\"\n _properties = (\n {\"id\": \"title\", \"type\": \"string\", \"mode\": \"w\"},\n {\"id\": \"description\", \"type\": \"text\", \"mode\": \"w\"},\n )\n title = \"\"\n description = \"\"\n icon = 
\"misc_/CMFPlone/tool.gif\"\n\n # From PortalObjectBase\n def __init__(self, id, title=\"\"):\n super().__init__(id, title=title)\n components = PersistentComponents(\"++etc++site\")\n components.__parent__ = self\n self.setSiteManager(components)\n\n # From PortalObjectBase\n def __before_publishing_traverse__(self, arg1, arg2=None):\n \"\"\"Pre-traversal hook.\"\"\"\n # XXX hack around a bug(?) in BeforeTraverse.MultiHook\n REQUEST = arg2 or arg1\n\n try:\n notify(BeforeTraverseEvent(self, REQUEST))\n except ComponentLookupError:\n # allow ZMI access, even if the portal's site manager is missing\n pass\n self.setupCurrentSkin(REQUEST)\n\n super().__before_publishing_traverse__(arg1, arg2)\n\n # Concept from OFS.OrderSupport\n @security.protected(permissions.AccessContentsInformation)\n def tpValues(self):\n # Return a list of subobjects, used by ZMI tree tag (and only there).\n # see also https://github.com/plone/Products.CMFPlone/issues/3323\n return sorted(\n (\n obj\n for obj in self.objectValues()\n if getattr(aq_base(obj), \"isPrincipiaFolderish\", False)\n ),\n key=lambda obj: obj.getId(),\n )\n\n def __browser_default__(self, request):\n \"\"\"Set default so we can return whatever we want instead\n of index_html\"\"\"\n return getToolByName(self, \"plone_utils\").browserDefault(self)\n\n def index_html(self):\n \"\"\"Acquire if not present.\"\"\"\n request = getattr(self, \"REQUEST\", None)\n if (\n request is not None\n and \"REQUEST_METHOD\" in request\n and request.maybe_webdav_client\n ):\n method = request[\"REQUEST_METHOD\"]\n if bbb.HAS_ZSERVER and method in (\"PUT\",):\n # Very likely a WebDAV client trying to create something\n result = NullResource(self, \"index_html\")\n setattr(result, \"__replaceable__\", REPLACEABLE)\n return result\n elif method not in (\"GET\", \"HEAD\", \"POST\"):\n raise AttributeError(\"index_html\")\n # Acquire from skin.\n _target = self.__getattr__(\"index_html\")\n result = aq_base(_target).__of__(self)\n setattr(result, \"__replaceable__\", REPLACEABLE)\n return result\n\n index_html = ComputedAttribute(index_html, 1)\n\n def manage_beforeDelete(self, container, item):\n # Should send out an Event before Site is being deleted.\n self.removal_inprogress = 1\n PloneSite.inheritedAttribute(\"manage_beforeDelete\")(self, container, item)\n\n @security.protected(permissions.DeleteObjects)\n def manage_delObjects(self, ids=None, REQUEST=None):\n \"\"\"We need to enforce security.\"\"\"\n if ids is None:\n ids = []\n if isinstance(ids, str):\n ids = [ids]\n for id in ids:\n item = self._getOb(id)\n if not _checkPermission(permissions.DeleteObjects, item):\n raise Unauthorized(\"Do not have permissions to remove this object\")\n return PortalObjectBase.manage_delObjects(self, ids, REQUEST=REQUEST)\n\n def view(self):\n \"\"\"Ensure that we get a plain view of the object, via a delegation to\n __call__(), which is defined in BrowserDefaultMixin\n \"\"\"\n return self()\n\n @security.protected(permissions.AccessContentsInformation)\n def folderlistingFolderContents(self, contentFilter=None):\n \"\"\"Calls listFolderContents in protected only by ACI so that\n folder_listing can work without the List folder contents permission.\n\n This is copied from Archetypes Basefolder and is needed by the\n reference browser.\n \"\"\"\n return self.listFolderContents(contentFilter)\n\n def isEffective(self, date):\n # Override DefaultDublinCoreImpl's test, since we are always viewable.\n return 1\n\n\n# Remove the IContentish interface so we don't listen to events 
that won't\n# apply to the site root, ie handleUidAnnotationEvent\nclassImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)\n\nInitializeClass(PloneSite)\n", "path": "Products/CMFPlone/Portal.py"}]} | 3,635 | 219 |
gh_patches_debug_26080 | rasdani/github-patches | git_diff | mirumee__ariadne-246 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exceptions are never delivered to extensions
If you add an extension that implements the `request_finished(self, context: ContextValue, error: Optional[Exception] = None)` method and raise any kind of exception in a resolver, the extension's `request_finished` method is always called with `error=None`, i.e. without the exception.
A stack trace shows that the request context manager in `extensions.py` never catches the exception, so the `else` statement is always executed no matter what kind of exception I raise.
</issue>
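A minimal extension that reproduces the report is sketched below; the class name is made up and the wiring into the GraphQL server is assumed, only the hook bodies matter. The likely explanation for the dead `except` branch is that graphql-core already catches resolver exceptions and records them as `GraphQLError`s on the execution result, so nothing ever propagates out of the `yield` inside `ExtensionManager.request`.

```python
# Hypothetical reproduction; registration of the extension is assumed elsewhere.
from typing import List, Optional

from graphql import GraphQLError

from ariadne.types import ExtensionSync


class RequestLogger(ExtensionSync):
    def request_finished(self, context, error: Optional[Exception] = None):
        # Observed behaviour from the issue: error is always None, even when a
        # resolver raises, because the exception never escapes GraphQL execution.
        print("request finished; error =", error)

    def has_errors(self, errors: List[GraphQLError]):
        # The resolver failure does surface here, as a list of GraphQLErrors.
        print("resolver errors:", errors)
```

The accepted patch below leans into that: `ExtensionManager.request` switches to `try/finally`, and `request_finished` drops its `error` parameter, leaving `has_errors` as the channel for failures.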
<code>
[start of ariadne/contrib/tracing/opentracing.py]
1 from copy import deepcopy
2 from functools import partial
3 from inspect import isawaitable
4 from typing import Any, Callable, Dict, Optional
5
6 from graphql import GraphQLResolveInfo
7 from opentracing import Scope, Tracer, global_tracer
8 from opentracing.ext import tags
9
10 from ...types import ContextValue, Extension, Resolver
11 from .utils import format_path, should_trace
12
13 ArgFilter = Callable[[Dict[str, Any], GraphQLResolveInfo], Dict[str, Any]]
14
15
16 class OpenTracingExtension(Extension):
17 _arg_filter: Optional[ArgFilter]
18 _root_scope: Scope
19 _tracer: Tracer
20
21 def __init__(self, *, arg_filter: Optional[ArgFilter] = None):
22 self._arg_filter = arg_filter
23 self._tracer = global_tracer()
24 self._root_scope = None
25
26 def request_started(self, context: ContextValue):
27 self._root_scope = self._tracer.start_active_span("GraphQL Query")
28 self._root_scope.span.set_tag(tags.COMPONENT, "graphql")
29
30 def request_finished(
31 self, context: ContextValue, error: Optional[Exception] = None
32 ):
33 self._root_scope.close()
34
35 async def resolve(
36 self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs
37 ):
38 if not should_trace(info):
39 result = next_(parent, info, **kwargs)
40 if isawaitable(result):
41 result = await result
42 return result
43
44 with self._tracer.start_active_span(info.field_name) as scope:
45 span = scope.span
46 span.set_tag(tags.COMPONENT, "graphql")
47 span.set_tag("graphql.parentType", info.parent_type.name)
48
49 graphql_path = ".".join(
50 map(str, format_path(info.path)) # pylint: disable=bad-builtin
51 )
52 span.set_tag("graphql.path", graphql_path)
53
54 if kwargs:
55 filtered_kwargs = self.filter_resolver_args(kwargs, info)
56 for kwarg, value in filtered_kwargs.items():
57 span.set_tag(f"graphql.param.{kwarg}", value)
58
59 result = next_(parent, info, **kwargs)
60 if isawaitable(result):
61 result = await result
62 return result
63
64 def filter_resolver_args(
65 self, args: Dict[str, Any], info: GraphQLResolveInfo
66 ) -> Dict[str, Any]:
67 if not self._arg_filter:
68 return args
69
70 return self._arg_filter(deepcopy(args), info)
71
72
73 class OpenTracingExtensionSync(OpenTracingExtension):
74 def resolve(self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs):
75 if not should_trace(info):
76 result = next_(parent, info, **kwargs)
77 return result
78
79 with self._tracer.start_active_span(info.field_name) as scope:
80 span = scope.span
81 span.set_tag(tags.COMPONENT, "graphql")
82 span.set_tag("graphql.parentType", info.parent_type.name)
83
84 graphql_path = ".".join(
85 map(str, format_path(info.path)) # pylint: disable=bad-builtin
86 )
87 span.set_tag("graphql.path", graphql_path)
88
89 if kwargs:
90 filtered_kwargs = self.filter_resolver_args(kwargs, info)
91 for kwarg, value in filtered_kwargs.items():
92 span.set_tag(f"graphql.param.{kwarg}", value)
93
94 result = next_(parent, info, **kwargs)
95 return result
96
97
98 def opentracing_extension(*, arg_filter: Optional[ArgFilter] = None):
99 return partial(OpenTracingExtension, arg_filter=arg_filter)
100
101
102 def opentracing_extension_sync(*, arg_filter: Optional[ArgFilter] = None):
103 return partial(OpenTracingExtensionSync, arg_filter=arg_filter)
104
[end of ariadne/contrib/tracing/opentracing.py]
[start of ariadne/types.py]
1 from inspect import isawaitable
2 from typing import Any, AsyncGenerator, Callable, List, Optional, Tuple, Union
3 from typing_extensions import Protocol
4
5 from graphql import (
6 DocumentNode,
7 ExecutionResult,
8 GraphQLError,
9 GraphQLResolveInfo,
10 GraphQLSchema,
11 )
12
13 # Note: this should be [Any, GraphQLResolveInfo, **kwargs],
14 # but this is not achieveable with python types yet:
15 # https://github.com/mirumee/ariadne/pull/79
16 Resolver = Callable[..., Any]
17 GraphQLResult = Tuple[bool, dict]
18 SubscriptionResult = Tuple[
19 bool, Union[List[dict], AsyncGenerator[ExecutionResult, None]]
20 ]
21 Subscriber = Callable[..., AsyncGenerator]
22 ErrorFormatter = Callable[[GraphQLError, bool], dict]
23
24 ContextValue = Union[Any, Callable[[Any], Any]]
25 RootValue = Union[Any, Callable[[Optional[Any], DocumentNode], Any]]
26
27
28 class Extension(Protocol):
29 def request_started(self, context: ContextValue):
30 pass # pragma: no cover
31
32 def request_finished(
33 self, context: ContextValue, error: Optional[Exception] = None
34 ):
35 pass # pragma: no cover
36
37 async def resolve(
38 self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs
39 ):
40 result = next_(parent, info, **kwargs)
41 if isawaitable(result):
42 result = await result
43 return result
44
45 def has_errors(self, errors: List[GraphQLError]):
46 pass # pragma: no cover
47
48 def format(self) -> dict:
49 return {} # pragma: no cover
50
51
52 class ExtensionSync(Extension):
53 def resolve(self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs):
54 return next_(parent, info, **kwargs)
55
56
57 class SchemaBindable(Protocol):
58 def bind_to_schema(self, schema: GraphQLSchema) -> None:
59 pass # pragma: no cover
60
[end of ariadne/types.py]
[start of ariadne/extensions.py]
1 from contextlib import contextmanager
2 from typing import List, Optional, Type
3
4 from graphql import GraphQLError
5 from graphql.execution import MiddlewareManager
6
7 from .types import ContextValue, Extension
8
9
10 class ExtensionManager:
11 __slots__ = ("extensions", "extensions_reversed")
12
13 def __init__(self, extensions: Optional[List[Type[Extension]]] = None):
14 if extensions:
15 self.extensions = tuple(ext() for ext in extensions)
16 self.extensions_reversed = tuple(reversed(self.extensions))
17 else:
18 self.extensions_reversed = self.extensions = tuple()
19
20 def as_middleware_manager(
21 self, manager: Optional[MiddlewareManager]
22 ) -> MiddlewareManager:
23 if manager and manager.middlewares:
24 return MiddlewareManager(*manager.middlewares, *self.extensions)
25 return MiddlewareManager(*self.extensions)
26
27 @contextmanager
28 def request(self, context: ContextValue):
29 for ext in self.extensions:
30 ext.request_started(context)
31 try:
32 yield
33 except Exception as e:
34 for ext in self.extensions_reversed:
35 ext.request_finished(context, e)
36 raise
37 else:
38 for ext in self.extensions_reversed:
39 ext.request_finished(context)
40
41 def has_errors(self, errors: List[GraphQLError]):
42 for ext in self.extensions:
43 ext.has_errors(errors)
44
45 def format(self) -> dict:
46 data = {}
47 for ext in self.extensions:
48 ext_data = ext.format()
49 if ext_data:
50 data.update(ext_data)
51 return data
52
[end of ariadne/extensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ariadne/contrib/tracing/opentracing.py b/ariadne/contrib/tracing/opentracing.py
--- a/ariadne/contrib/tracing/opentracing.py
+++ b/ariadne/contrib/tracing/opentracing.py
@@ -27,9 +27,7 @@
self._root_scope = self._tracer.start_active_span("GraphQL Query")
self._root_scope.span.set_tag(tags.COMPONENT, "graphql")
- def request_finished(
- self, context: ContextValue, error: Optional[Exception] = None
- ):
+ def request_finished(self, context: ContextValue):
self._root_scope.close()
async def resolve(
diff --git a/ariadne/extensions.py b/ariadne/extensions.py
--- a/ariadne/extensions.py
+++ b/ariadne/extensions.py
@@ -30,11 +30,7 @@
ext.request_started(context)
try:
yield
- except Exception as e:
- for ext in self.extensions_reversed:
- ext.request_finished(context, e)
- raise
- else:
+ finally:
for ext in self.extensions_reversed:
ext.request_finished(context)
diff --git a/ariadne/types.py b/ariadne/types.py
--- a/ariadne/types.py
+++ b/ariadne/types.py
@@ -29,9 +29,7 @@
def request_started(self, context: ContextValue):
pass # pragma: no cover
- def request_finished(
- self, context: ContextValue, error: Optional[Exception] = None
- ):
+ def request_finished(self, context: ContextValue):
pass # pragma: no cover
async def resolve(
| {"golden_diff": "diff --git a/ariadne/contrib/tracing/opentracing.py b/ariadne/contrib/tracing/opentracing.py\n--- a/ariadne/contrib/tracing/opentracing.py\n+++ b/ariadne/contrib/tracing/opentracing.py\n@@ -27,9 +27,7 @@\n self._root_scope = self._tracer.start_active_span(\"GraphQL Query\")\n self._root_scope.span.set_tag(tags.COMPONENT, \"graphql\")\n \n- def request_finished(\n- self, context: ContextValue, error: Optional[Exception] = None\n- ):\n+ def request_finished(self, context: ContextValue):\n self._root_scope.close()\n \n async def resolve(\ndiff --git a/ariadne/extensions.py b/ariadne/extensions.py\n--- a/ariadne/extensions.py\n+++ b/ariadne/extensions.py\n@@ -30,11 +30,7 @@\n ext.request_started(context)\n try:\n yield\n- except Exception as e:\n- for ext in self.extensions_reversed:\n- ext.request_finished(context, e)\n- raise\n- else:\n+ finally:\n for ext in self.extensions_reversed:\n ext.request_finished(context)\n \ndiff --git a/ariadne/types.py b/ariadne/types.py\n--- a/ariadne/types.py\n+++ b/ariadne/types.py\n@@ -29,9 +29,7 @@\n def request_started(self, context: ContextValue):\n pass # pragma: no cover\n \n- def request_finished(\n- self, context: ContextValue, error: Optional[Exception] = None\n- ):\n+ def request_finished(self, context: ContextValue):\n pass # pragma: no cover\n \n async def resolve(\n", "issue": "Exceptions are never delivered to extensions\nIf you add an extension that implements the `def request_finished(self, context: ContextValue, error: Optional[Exception] = None)` method, and raise any kind of Exception in the resolver, the `request_finished` method of the extension always gets called without any errors.\r\n\r\nA stack trace shows that the request context manager in `extensions.py` never catches the exception, so the `else` statement is always executed no matter what kind of exception I raise.\n", "before_files": [{"content": "from copy import deepcopy\nfrom functools import partial\nfrom inspect import isawaitable\nfrom typing import Any, Callable, Dict, Optional\n\nfrom graphql import GraphQLResolveInfo\nfrom opentracing import Scope, Tracer, global_tracer\nfrom opentracing.ext import tags\n\nfrom ...types import ContextValue, Extension, Resolver\nfrom .utils import format_path, should_trace\n\nArgFilter = Callable[[Dict[str, Any], GraphQLResolveInfo], Dict[str, Any]]\n\n\nclass OpenTracingExtension(Extension):\n _arg_filter: Optional[ArgFilter]\n _root_scope: Scope\n _tracer: Tracer\n\n def __init__(self, *, arg_filter: Optional[ArgFilter] = None):\n self._arg_filter = arg_filter\n self._tracer = global_tracer()\n self._root_scope = None\n\n def request_started(self, context: ContextValue):\n self._root_scope = self._tracer.start_active_span(\"GraphQL Query\")\n self._root_scope.span.set_tag(tags.COMPONENT, \"graphql\")\n\n def request_finished(\n self, context: ContextValue, error: Optional[Exception] = None\n ):\n self._root_scope.close()\n\n async def resolve(\n self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs\n ):\n if not should_trace(info):\n result = next_(parent, info, **kwargs)\n if isawaitable(result):\n result = await result\n return result\n\n with self._tracer.start_active_span(info.field_name) as scope:\n span = scope.span\n span.set_tag(tags.COMPONENT, \"graphql\")\n span.set_tag(\"graphql.parentType\", info.parent_type.name)\n\n graphql_path = \".\".join(\n map(str, format_path(info.path)) # pylint: disable=bad-builtin\n )\n span.set_tag(\"graphql.path\", graphql_path)\n\n if kwargs:\n 
filtered_kwargs = self.filter_resolver_args(kwargs, info)\n for kwarg, value in filtered_kwargs.items():\n span.set_tag(f\"graphql.param.{kwarg}\", value)\n\n result = next_(parent, info, **kwargs)\n if isawaitable(result):\n result = await result\n return result\n\n def filter_resolver_args(\n self, args: Dict[str, Any], info: GraphQLResolveInfo\n ) -> Dict[str, Any]:\n if not self._arg_filter:\n return args\n\n return self._arg_filter(deepcopy(args), info)\n\n\nclass OpenTracingExtensionSync(OpenTracingExtension):\n def resolve(self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs):\n if not should_trace(info):\n result = next_(parent, info, **kwargs)\n return result\n\n with self._tracer.start_active_span(info.field_name) as scope:\n span = scope.span\n span.set_tag(tags.COMPONENT, \"graphql\")\n span.set_tag(\"graphql.parentType\", info.parent_type.name)\n\n graphql_path = \".\".join(\n map(str, format_path(info.path)) # pylint: disable=bad-builtin\n )\n span.set_tag(\"graphql.path\", graphql_path)\n\n if kwargs:\n filtered_kwargs = self.filter_resolver_args(kwargs, info)\n for kwarg, value in filtered_kwargs.items():\n span.set_tag(f\"graphql.param.{kwarg}\", value)\n\n result = next_(parent, info, **kwargs)\n return result\n\n\ndef opentracing_extension(*, arg_filter: Optional[ArgFilter] = None):\n return partial(OpenTracingExtension, arg_filter=arg_filter)\n\n\ndef opentracing_extension_sync(*, arg_filter: Optional[ArgFilter] = None):\n return partial(OpenTracingExtensionSync, arg_filter=arg_filter)\n", "path": "ariadne/contrib/tracing/opentracing.py"}, {"content": "from inspect import isawaitable\nfrom typing import Any, AsyncGenerator, Callable, List, Optional, Tuple, Union\nfrom typing_extensions import Protocol\n\nfrom graphql import (\n DocumentNode,\n ExecutionResult,\n GraphQLError,\n GraphQLResolveInfo,\n GraphQLSchema,\n)\n\n# Note: this should be [Any, GraphQLResolveInfo, **kwargs],\n# but this is not achieveable with python types yet:\n# https://github.com/mirumee/ariadne/pull/79\nResolver = Callable[..., Any]\nGraphQLResult = Tuple[bool, dict]\nSubscriptionResult = Tuple[\n bool, Union[List[dict], AsyncGenerator[ExecutionResult, None]]\n]\nSubscriber = Callable[..., AsyncGenerator]\nErrorFormatter = Callable[[GraphQLError, bool], dict]\n\nContextValue = Union[Any, Callable[[Any], Any]]\nRootValue = Union[Any, Callable[[Optional[Any], DocumentNode], Any]]\n\n\nclass Extension(Protocol):\n def request_started(self, context: ContextValue):\n pass # pragma: no cover\n\n def request_finished(\n self, context: ContextValue, error: Optional[Exception] = None\n ):\n pass # pragma: no cover\n\n async def resolve(\n self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs\n ):\n result = next_(parent, info, **kwargs)\n if isawaitable(result):\n result = await result\n return result\n\n def has_errors(self, errors: List[GraphQLError]):\n pass # pragma: no cover\n\n def format(self) -> dict:\n return {} # pragma: no cover\n\n\nclass ExtensionSync(Extension):\n def resolve(self, next_: Resolver, parent: Any, info: GraphQLResolveInfo, **kwargs):\n return next_(parent, info, **kwargs)\n\n\nclass SchemaBindable(Protocol):\n def bind_to_schema(self, schema: GraphQLSchema) -> None:\n pass # pragma: no cover\n", "path": "ariadne/types.py"}, {"content": "from contextlib import contextmanager\nfrom typing import List, Optional, Type\n\nfrom graphql import GraphQLError\nfrom graphql.execution import MiddlewareManager\n\nfrom .types import ContextValue, 
Extension\n\n\nclass ExtensionManager:\n __slots__ = (\"extensions\", \"extensions_reversed\")\n\n def __init__(self, extensions: Optional[List[Type[Extension]]] = None):\n if extensions:\n self.extensions = tuple(ext() for ext in extensions)\n self.extensions_reversed = tuple(reversed(self.extensions))\n else:\n self.extensions_reversed = self.extensions = tuple()\n\n def as_middleware_manager(\n self, manager: Optional[MiddlewareManager]\n ) -> MiddlewareManager:\n if manager and manager.middlewares:\n return MiddlewareManager(*manager.middlewares, *self.extensions)\n return MiddlewareManager(*self.extensions)\n\n @contextmanager\n def request(self, context: ContextValue):\n for ext in self.extensions:\n ext.request_started(context)\n try:\n yield\n except Exception as e:\n for ext in self.extensions_reversed:\n ext.request_finished(context, e)\n raise\n else:\n for ext in self.extensions_reversed:\n ext.request_finished(context)\n\n def has_errors(self, errors: List[GraphQLError]):\n for ext in self.extensions:\n ext.has_errors(errors)\n\n def format(self) -> dict:\n data = {}\n for ext in self.extensions:\n ext_data = ext.format()\n if ext_data:\n data.update(ext_data)\n return data\n", "path": "ariadne/extensions.py"}]} | 2,657 | 392 |
gh_patches_debug_13589 | rasdani/github-patches | git_diff | freedomofpress__securedrop-582 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Large files can be downloaded before the file is completely processed
Scenario:
As a source, upload a >250 MB file (something large)
While this is uploading, from the journalist view, reload the page of “files received from the source”
Once the file shows up, immediately click on the filename in order to download it.
Result:
You will download part of the encrypted file, but possibly not all of it. It can take a while for large files to be encrypted, and in the interim you will get confusing partial copies of the encrypted file.
Expected behavior:
We should either get the whole file or none of it (i.e. the file shouldn't show up until it is in a state available for downloading).
</issue>
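One way to close the race, and the shape the accepted patch below takes, is to stop trusting `os.listdir()` alone and only surface files that already have a row in the `submissions` table; journalist replies never get such a row, so they are allowed through by filename. A sketch reusing the models imported in `journalist.py` (the helper name is made up):

```python
# Sketch only; mirrors the filter that the golden diff inlines into col().
from db import Submission


def completed_docs(source, docs):
    """Drop filesystem entries that are still being processed."""
    recorded = {
        submission.filename
        for submission in Submission.query.filter(
            Submission.source_id == source.id
        ).all()
    }
    # Keep replies (which have no submissions row) and anything whose
    # submission row exists, i.e. whose processing has finished.
    return [
        doc for doc in docs
        if doc["name"] in recorded or doc["name"].endswith("reply.gpg")
    ]
```

`col()` would then pass its `get_docs(sid)` result through this filter before rendering, which is essentially what the accepted patch does inline.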
<code>
[start of securedrop/journalist.py]
1 # -*- coding: utf-8 -*-
2 import config
3 import version
4 import crypto_util
5 import store
6 from db import db_session, Source, Submission, SourceStar, get_one_or_else
7
8 import os
9 from datetime import datetime
10 from flask import (Flask, request, render_template, send_file, redirect, flash, url_for, g, abort)
11 from flask_wtf.csrf import CsrfProtect
12 from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
13
14 import background
15
16 app = Flask(__name__, template_folder=config.JOURNALIST_TEMPLATES_DIR)
17 app.config.from_object(config.FlaskConfig)
18 CsrfProtect(app)
19
20 app.jinja_env.globals['version'] = version.__version__
21 if getattr(config, 'CUSTOM_HEADER_IMAGE', None):
22 app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE
23 app.jinja_env.globals['use_custom_header_image'] = True
24 else:
25 app.jinja_env.globals['header_image'] = 'logo.png'
26 app.jinja_env.globals['use_custom_header_image'] = False
27
28
29 @app.template_filter('datetimeformat')
30 def _jinja2_datetimeformat(dt, fmt=None):
31 """Template filter for readable formatting of datetime.datetime"""
32 fmt = fmt or '%b %d, %Y %I:%M %p'
33 return dt.strftime(fmt)
34
35
36 @app.teardown_appcontext
37 def shutdown_session(exception=None):
38 """Automatically remove database sessions at the end of the request, or
39 when the application shuts down"""
40 db_session.remove()
41
42
43 def get_source(sid):
44 """Return a Source object, representing the database row, for the source
45 with id `sid`"""
46 source = None
47 query = Source.query.filter(Source.filesystem_id == sid)
48 source = get_one_or_else(query, app.logger, abort)
49
50 return source
51
52
53 @app.before_request
54 def setup_g():
55 """Store commonly used values in Flask's special g object"""
56 if request.method == 'POST':
57 sid = request.form.get('sid')
58 if sid:
59 g.sid = sid
60 g.source = get_source(sid)
61
62
63 def get_docs(sid):
64 """Get docs associated with source id `sid`, sorted by submission date"""
65 docs = []
66 for filename in os.listdir(store.path(sid)):
67 os_stat = os.stat(store.path(sid, filename))
68 docs.append(dict(
69 name=filename,
70 date=datetime.fromtimestamp(os_stat.st_mtime),
71 size=os_stat.st_size,
72 ))
73 # sort in chronological order
74 docs.sort(key=lambda x: int(x['name'].split('-')[0]))
75 return docs
76
77
78 def make_star_true(sid):
79 source = get_source(sid)
80 if source.star:
81 source.star.starred = True
82 else:
83 source_star = SourceStar(source)
84 db_session.add(source_star)
85
86
87 def make_star_false(sid):
88 source = get_source(sid)
89 source.star.starred = False
90
91
92 @app.route('/col/add_star/<sid>', methods=('POST',))
93 def add_star(sid):
94 make_star_true(sid)
95 db_session.commit()
96 return redirect(url_for('index'))
97
98
99 @app.route("/col/remove_star/<sid>", methods=('POST',))
100 def remove_star(sid):
101 make_star_false(sid)
102 db_session.commit()
103 return redirect(url_for('index'))
104
105
106 @app.route('/')
107 def index():
108 unstarred = []
109 starred = []
110 for source in Source.query.filter_by(pending=False).order_by(Source.last_updated.desc()).all():
111 star = SourceStar.query.filter(SourceStar.source_id == source.id).first()
112 if star and star.starred:
113 starred.append(source)
114 else:
115 unstarred.append(source)
116 source.num_unread = len(
117 Submission.query.filter(Submission.source_id == source.id, Submission.downloaded == False).all())
118
119 return render_template('index.html', unstarred=unstarred, starred=starred)
120
121
122 @app.route('/col/<sid>')
123 def col(sid):
124 source = get_source(sid)
125 docs = get_docs(sid)
126 haskey = crypto_util.getkey(sid)
127 return render_template("col.html", sid=sid,
128 codename=source.journalist_designation, docs=docs, haskey=haskey,
129 flagged=source.flagged)
130
131
132 def delete_collection(source_id):
133 # Delete the source's collection of submissions
134 store.delete_source_directory(source_id)
135
136 # Delete the source's reply keypair
137 crypto_util.delete_reply_keypair(source_id)
138
139 # Delete their entry in the db
140 source = get_source(source_id)
141 db_session.delete(source)
142 db_session.commit()
143
144
145 @app.route('/col/process', methods=('POST',))
146 def col_process():
147 actions = {'delete': col_delete, 'star': col_star, 'un-star': col_un_star}
148 if 'cols_selected' not in request.form:
149 return redirect(url_for('index'))
150
151 cols_selected = request.form.getlist('cols_selected') # getlist is cgi.FieldStorage.getlist
152 action = request.form['action']
153
154 if action not in actions:
155 return abort(500)
156
157 method = actions[action]
158 return method(cols_selected)
159
160
161 def col_star(cols_selected):
162 for sid in cols_selected:
163 make_star_true(sid)
164
165 db_session.commit()
166 return redirect(url_for('index'))
167
168
169 def col_un_star(cols_selected):
170 for source_id in cols_selected:
171 make_star_false(source_id)
172
173 db_session.commit()
174 return redirect(url_for('index'))
175
176
177 @app.route('/col/delete/<sid>', methods=('POST',))
178 def col_delete_single(sid):
179 """deleting a single collection from its /col page"""
180 source = get_source(sid)
181 delete_collection(sid)
182 flash("%s's collection deleted" % (source.journalist_designation,), "notification")
183 return redirect(url_for('index'))
184
185
186 def col_delete(cols_selected):
187 """deleting multiple collections from the index"""
188 if len(cols_selected) < 1:
189 flash("No collections selected to delete!", "warning")
190 else:
191 for source_id in cols_selected:
192 delete_collection(source_id)
193 flash("%s %s deleted" % (
194 len(cols_selected),
195 "collection" if len(cols_selected) == 1 else "collections"
196 ), "notification")
197
198 return redirect(url_for('index'))
199
200
201 @app.route('/col/<sid>/<fn>')
202 def doc(sid, fn):
203 if '..' in fn or fn.startswith('/'):
204 abort(404)
205 try:
206 Submission.query.filter(Submission.filename == fn).one().downloaded = True
207 except NoResultFound as e:
208 app.logger.error("Could not mark " + fn + " as downloaded: %s" % (e,))
209 db_session.commit()
210 return send_file(store.path(sid, fn), mimetype="application/pgp-encrypted")
211
212
213 @app.route('/reply', methods=('POST',))
214 def reply():
215 msg = request.form['msg']
216 g.source.interaction_count += 1
217 filename = "{0}-reply.gpg".format(g.source.interaction_count)
218
219 crypto_util.encrypt(crypto_util.getkey(g.sid), msg, output=
220 store.path(g.sid, filename))
221
222 db_session.commit()
223 return render_template('reply.html', sid=g.sid,
224 codename=g.source.journalist_designation)
225
226
227 @app.route('/regenerate-code', methods=('POST',))
228 def generate_code():
229 g.source.journalist_designation = crypto_util.display_id()
230 db_session.commit()
231 return redirect('/col/' + g.sid)
232
233
234 @app.route('/download_unread/<sid>')
235 def download_unread(sid):
236 id = Source.query.filter(Source.filesystem_id == sid).one().id
237 docs = [doc.filename for doc in
238 Submission.query.filter(Submission.source_id == id, Submission.downloaded == False).all()]
239 return bulk_download(sid, docs)
240
241
242 @app.route('/bulk', methods=('POST',))
243 def bulk():
244 action = request.form['action']
245
246 doc_names_selected = request.form.getlist('doc_names_selected')
247 docs_selected = [
248 doc for doc in get_docs(g.sid) if doc['name'] in doc_names_selected]
249 filenames_selected = [
250 doc['name'] for doc in docs_selected]
251
252 if action == 'download':
253 return bulk_download(g.sid, filenames_selected)
254 elif action == 'delete':
255 return bulk_delete(g.sid, docs_selected)
256 else:
257 abort(400)
258
259
260 def bulk_delete(sid, docs_selected):
261 source = get_source(sid)
262 confirm_delete = bool(request.form.get('confirm_delete', False))
263 if confirm_delete:
264 for doc in docs_selected:
265 db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())
266 fn = store.path(sid, doc['name'])
267 store.secure_unlink(fn)
268 db_session.commit()
269 return render_template('delete.html', sid=sid,
270 codename=source.journalist_designation,
271 docs_selected=docs_selected, confirm_delete=confirm_delete)
272
273
274 def bulk_download(sid, docs_selected):
275 source = get_source(sid)
276 filenames = []
277 for doc in docs_selected:
278 filenames.append(store.path(sid, doc))
279 try:
280 Submission.query.filter(Submission.filename == doc).one().downloaded = True
281 except NoResultFound as e:
282 app.logger.error("Could not mark " + doc + " as downloaded: %s" % (e,))
283 db_session.commit()
284 zip = store.get_bulk_archive(filenames)
285 return send_file(zip.name, mimetype="application/zip",
286 attachment_filename=source.journalist_designation + ".zip",
287 as_attachment=True)
288
289
290 @app.route('/flag', methods=('POST',))
291 def flag():
292 g.source.flagged = True
293 db_session.commit()
294 return render_template('flag.html', sid=g.sid,
295 codename=g.source.journalist_designation)
296 def write_pidfile():
297 pid = str(os.getpid())
298 with open(config.JOURNALIST_PIDFILE, 'w') as fp:
299 fp.write(pid)
300
301 if __name__ == "__main__":
302 write_pidfile()
303 # TODO make sure debug=False in production
304 app.run(debug=True, host='0.0.0.0', port=8081)
305
[end of securedrop/journalist.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/journalist.py b/securedrop/journalist.py
--- a/securedrop/journalist.py
+++ b/securedrop/journalist.py
@@ -123,6 +123,11 @@
def col(sid):
source = get_source(sid)
docs = get_docs(sid)
+ submissions = [submission.filename for submission in Submission.query.filter(Submission.source_id == source.id).all()]
+ # Only include documents loaded from the filesystem which are replies or which are also listed in the
+ # submissions table to avoid displaying partially uploaded files (#561).
+ docs = [doc for doc in docs if doc['name'] in submissions or doc['name'].endswith('reply.gpg')]
+
haskey = crypto_util.getkey(sid)
return render_template("col.html", sid=sid,
codename=source.journalist_designation, docs=docs, haskey=haskey,
| {"golden_diff": "diff --git a/securedrop/journalist.py b/securedrop/journalist.py\n--- a/securedrop/journalist.py\n+++ b/securedrop/journalist.py\n@@ -123,6 +123,11 @@\n def col(sid):\n source = get_source(sid)\n docs = get_docs(sid)\n+ submissions = [submission.filename for submission in Submission.query.filter(Submission.source_id == source.id).all()]\n+ # Only include documents loaded from the filesystem which are replies or which are also listed in the\n+ # submissions table to avoid displaying partially uploaded files (#561).\n+ docs = [doc for doc in docs if doc['name'] in submissions or doc['name'].endswith('reply.gpg')]\n+\n haskey = crypto_util.getkey(sid)\n return render_template(\"col.html\", sid=sid,\n codename=source.journalist_designation, docs=docs, haskey=haskey,\n", "issue": "Large files can be downloaded before the file is completely processed\nScenario:\n\nAs a source, upload a >250 MB file (something large)\nWhile this is uploading, from the journalist view, reload the page of \u201cfiles received from the source\u201d\nOnce the file shows up, immediately click on the filename in order to download it.\n\nResult:\n\nYou will download part of the encrypted file, but possibly not all of it. It can take a while for large files to be encrypted, and in the interim you will get confusing partial copies of the encrypted file.\n\nExpected behavior:\n\nWe should either get the whole file, or none of the file. (i.e.: it shouldn\u2019t show up if it is not in a state available for downloading).\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport config\nimport version\nimport crypto_util\nimport store\nfrom db import db_session, Source, Submission, SourceStar, get_one_or_else\n\nimport os\nfrom datetime import datetime\nfrom flask import (Flask, request, render_template, send_file, redirect, flash, url_for, g, abort)\nfrom flask_wtf.csrf import CsrfProtect\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\n\nimport background\n\napp = Flask(__name__, template_folder=config.JOURNALIST_TEMPLATES_DIR)\napp.config.from_object(config.FlaskConfig)\nCsrfProtect(app)\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\n\[email protected]_filter('datetimeformat')\ndef _jinja2_datetimeformat(dt, fmt=None):\n \"\"\"Template filter for readable formatting of datetime.datetime\"\"\"\n fmt = fmt or '%b %d, %Y %I:%M %p'\n return dt.strftime(fmt)\n\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef get_source(sid):\n \"\"\"Return a Source object, representing the database row, for the source\n with id `sid`\"\"\"\n source = None\n query = Source.query.filter(Source.filesystem_id == sid)\n source = get_one_or_else(query, app.logger, abort)\n\n return source\n\n\[email protected]_request\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n if request.method == 'POST':\n sid = request.form.get('sid')\n if sid:\n g.sid = sid\n g.source = get_source(sid)\n\n\ndef get_docs(sid):\n \"\"\"Get docs associated with source id `sid`, sorted by submission date\"\"\"\n docs = []\n 
for filename in os.listdir(store.path(sid)):\n os_stat = os.stat(store.path(sid, filename))\n docs.append(dict(\n name=filename,\n date=datetime.fromtimestamp(os_stat.st_mtime),\n size=os_stat.st_size,\n ))\n # sort in chronological order\n docs.sort(key=lambda x: int(x['name'].split('-')[0]))\n return docs\n\n\ndef make_star_true(sid):\n source = get_source(sid)\n if source.star:\n source.star.starred = True\n else:\n source_star = SourceStar(source)\n db_session.add(source_star)\n\n\ndef make_star_false(sid):\n source = get_source(sid)\n source.star.starred = False\n\n\[email protected]('/col/add_star/<sid>', methods=('POST',))\ndef add_star(sid):\n make_star_true(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected](\"/col/remove_star/<sid>\", methods=('POST',))\ndef remove_star(sid):\n make_star_false(sid)\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/')\ndef index():\n unstarred = []\n starred = []\n for source in Source.query.filter_by(pending=False).order_by(Source.last_updated.desc()).all():\n star = SourceStar.query.filter(SourceStar.source_id == source.id).first()\n if star and star.starred:\n starred.append(source)\n else:\n unstarred.append(source)\n source.num_unread = len(\n Submission.query.filter(Submission.source_id == source.id, Submission.downloaded == False).all())\n\n return render_template('index.html', unstarred=unstarred, starred=starred)\n\n\[email protected]('/col/<sid>')\ndef col(sid):\n source = get_source(sid)\n docs = get_docs(sid)\n haskey = crypto_util.getkey(sid)\n return render_template(\"col.html\", sid=sid,\n codename=source.journalist_designation, docs=docs, haskey=haskey,\n flagged=source.flagged)\n\n\ndef delete_collection(source_id):\n # Delete the source's collection of submissions\n store.delete_source_directory(source_id)\n\n # Delete the source's reply keypair\n crypto_util.delete_reply_keypair(source_id)\n\n # Delete their entry in the db\n source = get_source(source_id)\n db_session.delete(source)\n db_session.commit()\n\n\[email protected]('/col/process', methods=('POST',))\ndef col_process():\n actions = {'delete': col_delete, 'star': col_star, 'un-star': col_un_star}\n if 'cols_selected' not in request.form:\n return redirect(url_for('index'))\n\n cols_selected = request.form.getlist('cols_selected') # getlist is cgi.FieldStorage.getlist\n action = request.form['action']\n\n if action not in actions:\n return abort(500)\n\n method = actions[action]\n return method(cols_selected)\n\n\ndef col_star(cols_selected):\n for sid in cols_selected:\n make_star_true(sid)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\ndef col_un_star(cols_selected):\n for source_id in cols_selected:\n make_star_false(source_id)\n\n db_session.commit()\n return redirect(url_for('index'))\n\n\[email protected]('/col/delete/<sid>', methods=('POST',))\ndef col_delete_single(sid):\n \"\"\"deleting a single collection from its /col page\"\"\"\n source = get_source(sid)\n delete_collection(sid)\n flash(\"%s's collection deleted\" % (source.journalist_designation,), \"notification\")\n return redirect(url_for('index'))\n\n\ndef col_delete(cols_selected):\n \"\"\"deleting multiple collections from the index\"\"\"\n if len(cols_selected) < 1:\n flash(\"No collections selected to delete!\", \"warning\")\n else:\n for source_id in cols_selected:\n delete_collection(source_id)\n flash(\"%s %s deleted\" % (\n len(cols_selected),\n \"collection\" if len(cols_selected) == 1 else \"collections\"\n ), 
\"notification\")\n\n return redirect(url_for('index'))\n\n\[email protected]('/col/<sid>/<fn>')\ndef doc(sid, fn):\n if '..' in fn or fn.startswith('/'):\n abort(404)\n try:\n Submission.query.filter(Submission.filename == fn).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + fn + \" as downloaded: %s\" % (e,))\n db_session.commit()\n return send_file(store.path(sid, fn), mimetype=\"application/pgp-encrypted\")\n\n\[email protected]('/reply', methods=('POST',))\ndef reply():\n msg = request.form['msg']\n g.source.interaction_count += 1\n filename = \"{0}-reply.gpg\".format(g.source.interaction_count)\n\n crypto_util.encrypt(crypto_util.getkey(g.sid), msg, output=\n store.path(g.sid, filename))\n\n db_session.commit()\n return render_template('reply.html', sid=g.sid,\n codename=g.source.journalist_designation)\n\n\[email protected]('/regenerate-code', methods=('POST',))\ndef generate_code():\n g.source.journalist_designation = crypto_util.display_id()\n db_session.commit()\n return redirect('/col/' + g.sid)\n\n\[email protected]('/download_unread/<sid>')\ndef download_unread(sid):\n id = Source.query.filter(Source.filesystem_id == sid).one().id\n docs = [doc.filename for doc in\n Submission.query.filter(Submission.source_id == id, Submission.downloaded == False).all()]\n return bulk_download(sid, docs)\n\n\[email protected]('/bulk', methods=('POST',))\ndef bulk():\n action = request.form['action']\n\n doc_names_selected = request.form.getlist('doc_names_selected')\n docs_selected = [\n doc for doc in get_docs(g.sid) if doc['name'] in doc_names_selected]\n filenames_selected = [\n doc['name'] for doc in docs_selected]\n\n if action == 'download':\n return bulk_download(g.sid, filenames_selected)\n elif action == 'delete':\n return bulk_delete(g.sid, docs_selected)\n else:\n abort(400)\n\n\ndef bulk_delete(sid, docs_selected):\n source = get_source(sid)\n confirm_delete = bool(request.form.get('confirm_delete', False))\n if confirm_delete:\n for doc in docs_selected:\n db_session.delete(Submission.query.filter(Submission.filename == doc['name']).one())\n fn = store.path(sid, doc['name'])\n store.secure_unlink(fn)\n db_session.commit()\n return render_template('delete.html', sid=sid,\n codename=source.journalist_designation,\n docs_selected=docs_selected, confirm_delete=confirm_delete)\n\n\ndef bulk_download(sid, docs_selected):\n source = get_source(sid)\n filenames = []\n for doc in docs_selected:\n filenames.append(store.path(sid, doc))\n try:\n Submission.query.filter(Submission.filename == doc).one().downloaded = True\n except NoResultFound as e:\n app.logger.error(\"Could not mark \" + doc + \" as downloaded: %s\" % (e,))\n db_session.commit()\n zip = store.get_bulk_archive(filenames)\n return send_file(zip.name, mimetype=\"application/zip\",\n attachment_filename=source.journalist_designation + \".zip\",\n as_attachment=True)\n\n\[email protected]('/flag', methods=('POST',))\ndef flag():\n g.source.flagged = True\n db_session.commit()\n return render_template('flag.html', sid=g.sid,\n codename=g.source.journalist_designation)\ndef write_pidfile():\n pid = str(os.getpid())\n with open(config.JOURNALIST_PIDFILE, 'w') as fp:\n fp.write(pid)\n\nif __name__ == \"__main__\":\n write_pidfile()\n # TODO make sure debug=False in production\n app.run(debug=True, host='0.0.0.0', port=8081)\n", "path": "securedrop/journalist.py"}]} | 3,708 | 206 |
gh_patches_debug_17890 | rasdani/github-patches | git_diff | apluslms__a-plus-1218 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exercise API field html_url behaves inconsistently
If an exercise has a parent (i.e., it is embedded in a chapter or in another exercise), the html_url field in the API differs between what ExerciseBriefSerializer and ExerciseSerializer produce.
The brief serializer provides the desired URL format (a link to the parent page with an anchor to the exercise):

However, the exercise detail endpoint still returns the previous format (a link to the exercise's own page):

This is illogical, inconsistent, and causes problems for MOOC-Jutut.
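To make the mismatch concrete, here is a minimal, self-contained sketch (not the actual A+ code — the model class, the URL helpers, and the anchor format are invented for illustration) of how two serializers that resolve `html_url` through different helpers report different values for the same exercise:

```python
# Hypothetical stand-ins for the two URL helpers an exercise object exposes:
# the "absolute" URL points at the exercise's own page, while the "display"
# URL points at the parent chapter with an anchor to the embedded exercise.
class FakeExercise:
    def __init__(self, pk, parent_slug=None):
        self.pk = pk
        self.parent_slug = parent_slug

    def get_absolute_url(self):
        return f"/exercises/{self.pk}/"

    def get_display_url(self):
        if self.parent_slug:
            return f"/{self.parent_slug}/#chapter-exercise-{self.pk}"
        return self.get_absolute_url()


def brief_html_url(exercise):
    # What the brief serializer effectively reports (the wanted format).
    return exercise.get_display_url()


def detail_html_url(exercise):
    # What the detail serializer's HtmlViewField (see the listing below)
    # reports as long as it builds the link from the absolute URL.
    return exercise.get_absolute_url()


ex = FakeExercise(pk=42, parent_slug="module1/chapter2")
print(brief_html_url(ex))   # /module1/chapter2/#chapter-exercise-42
print(detail_html_url(ex))  # /exercises/42/  <- the inconsistent value
```

Routing both serializers through the same display-URL helper removes the inconsistency, which is what the accepted patch further below does for `HtmlViewField`.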
</issue>
<code>
[start of lib/api/serializers.py]
1 from functools import partial
2 from urllib.parse import urlencode
3
4 from django.db.models import Manager
5 from rest_framework import serializers
6 from rest_framework.serializers import HyperlinkedModelSerializer
7 from rest_framework.utils.field_mapping import get_nested_relation_kwargs
8 from rest_framework.fields import get_attribute
9
10 from .fields import NestedHyperlinkedIdentityField, NestedHyperlinkedRelatedField
11
12
13 class NestedHyperlinkedModelSerializer(HyperlinkedModelSerializer):
14 """
15 Extension of `HyperlinkedModelSerializer` that adds support for
16 nested resources.
17 """
18 serializer_related_field = NestedHyperlinkedRelatedField
19 serializer_url_field = NestedHyperlinkedIdentityField
20
21 def get_default_field_names(self, declared_fields, model_info):
22 """
23 Return the default list of field names that will be used if the
24 `Meta.fields` option is not specified.
25 """
26 return (
27 [self.url_field_name] +
28 list(declared_fields.keys()) +
29 list(model_info.fields.keys()) +
30 list(model_info.forward_relations.keys())
31 )
32
33 def build_nested_field(self, field_name, relation_info, nested_depth):
34 """
35 Create nested fields for forward and reverse relationships.
36 """
37 class NestedSerializer(NestedHyperlinkedModelSerializer):
38 class Meta:
39 model = relation_info.related_model
40 depth = nested_depth - 1
41
42 field_class = NestedSerializer
43 field_kwargs = get_nested_relation_kwargs(relation_info)
44
45 return field_class, field_kwargs
46
47
48 class AlwaysListSerializer:
49 def __new__(cls, *args, **kwargs):
50 if kwargs.pop('_many', True):
51 kwargs['many'] = True
52 return super(AlwaysListSerializer, cls).__new__(cls, *args, _many=False, **kwargs)
53
54 def __init__(self, *args, _many=False, **kwargs):
55 super().__init__(*args, **kwargs)
56
57
58 class HtmlViewField(serializers.ReadOnlyField):
59 def __init__(self, *args, **kwargs):
60 kwargs['source'] = '*'
61 super().__init__(*args, **kwargs)
62
63 def to_representation(self, obj): # pylint: disable=arguments-renamed
64 request = self.context['request']
65 url = obj.get_absolute_url()
66 return request.build_absolute_uri(url)
67
68
69 class NestedHyperlinkedIdentityFieldWithQuery(NestedHyperlinkedIdentityField):
70 def __init__(self, *args, query_params=None, **kwargs):
71 self.__query_params = query_params
72 super().__init__(*args, **kwargs)
73
74 def get_url(self, obj, view_name, request, format): # pylint: disable=redefined-builtin
75 url = super().get_url(obj, view_name, request, format)
76
77 if url and self.__query_params:
78 # pylint: disable-next=unnecessary-lambda-assignment
79 get = lambda x: x(obj) if callable(x) else get_attribute(obj, x.split('.'))
80 params = [(key, get(value)) for key, value in self.__query_params.items()]
81 url = url + '?' + urlencode(params)
82
83 return url
84
85
86 class AttributeProxy:
87 def __init__(self, obj, **kwargs):
88 self._obj = obj
89 self._kwargs = kwargs
90
91 def __getattr__(self, key):
92 try:
93 return self._kwargs[key]
94 except KeyError:
95 return getattr(self._obj, key)
96
97
98 def zip_instance_extra_with_iterable(instance, iterable, extra):
99 extra_attrs = dict(
100 (key, get_attribute(instance, attrs.split('.')))
101 for key, attrs in extra.items()
102 )
103 return (AttributeProxy(item, **extra_attrs) for item in iterable)
104
105
106 class CompositeListSerializer(serializers.ListSerializer):
107 @classmethod
108 def with_extra(cls, extra):
109 return partial(cls, extra=extra)
110
111 def __init__(self, instance=None, data=serializers.empty, extra=None, **kwargs):
112 self.__extra = extra
113 source = kwargs.get('source', None)
114 if instance and source:
115 iterable = instance[source]
116 instance = zip_instance_extra_with_iterable(instance, iterable, extra)
117 super().__init__(instance=instance, data=data, **kwargs)
118
119 def get_attribute(self, instance):
120 data = super().get_attribute(instance)
121 iterable = data.all() if isinstance(data, Manager) else data
122 return zip_instance_extra_with_iterable(instance, iterable, self.__extra)
123
124
125 class AplusSerializerMetaMetaclass(type):
126 def __new__(cls, name, bases, dict_):
127 new_cls = type.__new__(cls, name, bases, dict_)
128 for k, v in dict_.items():
129 if k[0] != '_' and not callable(v):
130 if isinstance(v, dict):
131 parent = getattr(super(new_cls, new_cls), k, {})
132 setattr(new_cls, k, dict(parent, **v))
133 elif isinstance(v, (tuple, list)):
134 parent = getattr(super(new_cls, new_cls), k, ())
135 seen = set()
136 seen_add = seen.add
137 res = [x for x in parent if not (x in seen or seen_add(x))]
138 res += (x for x in v if not (x in seen or seen_add(x)))
139 setattr(new_cls, k, type(v)(res))
140 return new_cls
141
142
143 class AplusSerializerMeta(metaclass=AplusSerializerMetaMetaclass):
144 pass
145
146
147 class AplusModelSerializerBase(NestedHyperlinkedModelSerializer):
148 url_field_name = 'url'
149 html_url_field_name = 'html_url'
150
151 def get_field_names(self, declared_fields, info):
152 fields = list(super().get_field_names(declared_fields, info))
153 extra_kwargs = getattr(self.Meta, 'extra_kwargs', {})
154 if self.url_field_name not in fields and self.url_field_name in extra_kwargs:
155 fields.insert(0, self.url_field_name)
156 return fields
157
158 def build_unknown_field(self, field_name, model_class):
159 if field_name == self.html_url_field_name:
160 return (HtmlViewField, {})
161 if field_name == self.url_field_name:
162 extra_kwargs = getattr(self.Meta, 'extra_kwargs', {})
163 kwargs = {'context': self.context}
164 kwargs.update(extra_kwargs[self.url_field_name])
165 return (NestedHyperlinkedIdentityField, kwargs)
166 return super().build_unknown_field(field_name, model_class)
167
168
169 class AplusModelSerializer(AplusModelSerializerBase):
170 id = serializers.IntegerField(source='pk', read_only=True)
171
172 class Meta(AplusSerializerMeta):
173 fields = (
174 'id',
175 'url',
176 )
177
178
179 class StatisticsSerializer(serializers.Serializer):
180 starttime = serializers.DateTimeField(allow_null=True)
181 endtime = serializers.DateTimeField(allow_null=True)
182 submission_count = serializers.IntegerField(read_only=True)
183 submitters = serializers.IntegerField(read_only=True)
184
[end of lib/api/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/api/serializers.py b/lib/api/serializers.py
--- a/lib/api/serializers.py
+++ b/lib/api/serializers.py
@@ -8,6 +8,7 @@
from rest_framework.fields import get_attribute
from .fields import NestedHyperlinkedIdentityField, NestedHyperlinkedRelatedField
+from lib.helpers import build_aplus_url
class NestedHyperlinkedModelSerializer(HyperlinkedModelSerializer):
@@ -61,9 +62,8 @@
super().__init__(*args, **kwargs)
def to_representation(self, obj): # pylint: disable=arguments-renamed
- request = self.context['request']
- url = obj.get_absolute_url()
- return request.build_absolute_uri(url)
+ url = obj.get_display_url()
+ return build_aplus_url(url, True)
class NestedHyperlinkedIdentityFieldWithQuery(NestedHyperlinkedIdentityField):
| {"golden_diff": "diff --git a/lib/api/serializers.py b/lib/api/serializers.py\n--- a/lib/api/serializers.py\n+++ b/lib/api/serializers.py\n@@ -8,6 +8,7 @@\n from rest_framework.fields import get_attribute\n \n from .fields import NestedHyperlinkedIdentityField, NestedHyperlinkedRelatedField\n+from lib.helpers import build_aplus_url\n \n \n class NestedHyperlinkedModelSerializer(HyperlinkedModelSerializer):\n@@ -61,9 +62,8 @@\n super().__init__(*args, **kwargs)\n \n def to_representation(self, obj): # pylint: disable=arguments-renamed\n- request = self.context['request']\n- url = obj.get_absolute_url()\n- return request.build_absolute_uri(url)\n+ url = obj.get_display_url()\n+ return build_aplus_url(url, True)\n \n \n class NestedHyperlinkedIdentityFieldWithQuery(NestedHyperlinkedIdentityField):\n", "issue": "Exercise API field html_url behaves inconsistently\nIf an exercise has a parent (i.e., is embedded in a chapter or an exercise), the html_url in the API differ between what the ExerciseBriefSerializer and ExerciseSerializer produce.\r\n\r\nThe brief serializer provides the wanted format of the url (link to the parent with anchor to exercise):\r\n\r\n\r\nHowever, the exercise detail page includes the previous format (to the exercise page): \r\n\r\n\r\nThis is illogical, inconsistent, and causes problems for MOOC-Jutut.\n", "before_files": [{"content": "from functools import partial\nfrom urllib.parse import urlencode\n\nfrom django.db.models import Manager\nfrom rest_framework import serializers\nfrom rest_framework.serializers import HyperlinkedModelSerializer\nfrom rest_framework.utils.field_mapping import get_nested_relation_kwargs\nfrom rest_framework.fields import get_attribute\n\nfrom .fields import NestedHyperlinkedIdentityField, NestedHyperlinkedRelatedField\n\n\nclass NestedHyperlinkedModelSerializer(HyperlinkedModelSerializer):\n \"\"\"\n Extension of `HyperlinkedModelSerializer` that adds support for\n nested resources.\n \"\"\"\n serializer_related_field = NestedHyperlinkedRelatedField\n serializer_url_field = NestedHyperlinkedIdentityField\n\n def get_default_field_names(self, declared_fields, model_info):\n \"\"\"\n Return the default list of field names that will be used if the\n `Meta.fields` option is not specified.\n \"\"\"\n return (\n [self.url_field_name] +\n list(declared_fields.keys()) +\n list(model_info.fields.keys()) +\n list(model_info.forward_relations.keys())\n )\n\n def build_nested_field(self, field_name, relation_info, nested_depth):\n \"\"\"\n Create nested fields for forward and reverse relationships.\n \"\"\"\n class NestedSerializer(NestedHyperlinkedModelSerializer):\n class Meta:\n model = relation_info.related_model\n depth = nested_depth - 1\n\n field_class = NestedSerializer\n field_kwargs = get_nested_relation_kwargs(relation_info)\n\n return field_class, field_kwargs\n\n\nclass AlwaysListSerializer:\n def __new__(cls, *args, **kwargs):\n if kwargs.pop('_many', True):\n kwargs['many'] = True\n return super(AlwaysListSerializer, cls).__new__(cls, *args, _many=False, **kwargs)\n\n def __init__(self, *args, _many=False, **kwargs):\n super().__init__(*args, **kwargs)\n\n\nclass HtmlViewField(serializers.ReadOnlyField):\n def __init__(self, *args, **kwargs):\n kwargs['source'] = '*'\n super().__init__(*args, **kwargs)\n\n def to_representation(self, obj): # pylint: disable=arguments-renamed\n request = self.context['request']\n url = obj.get_absolute_url()\n return request.build_absolute_uri(url)\n\n\nclass 
NestedHyperlinkedIdentityFieldWithQuery(NestedHyperlinkedIdentityField):\n def __init__(self, *args, query_params=None, **kwargs):\n self.__query_params = query_params\n super().__init__(*args, **kwargs)\n\n def get_url(self, obj, view_name, request, format): # pylint: disable=redefined-builtin\n url = super().get_url(obj, view_name, request, format)\n\n if url and self.__query_params:\n # pylint: disable-next=unnecessary-lambda-assignment\n get = lambda x: x(obj) if callable(x) else get_attribute(obj, x.split('.'))\n params = [(key, get(value)) for key, value in self.__query_params.items()]\n url = url + '?' + urlencode(params)\n\n return url\n\n\nclass AttributeProxy:\n def __init__(self, obj, **kwargs):\n self._obj = obj\n self._kwargs = kwargs\n\n def __getattr__(self, key):\n try:\n return self._kwargs[key]\n except KeyError:\n return getattr(self._obj, key)\n\n\ndef zip_instance_extra_with_iterable(instance, iterable, extra):\n extra_attrs = dict(\n (key, get_attribute(instance, attrs.split('.')))\n for key, attrs in extra.items()\n )\n return (AttributeProxy(item, **extra_attrs) for item in iterable)\n\n\nclass CompositeListSerializer(serializers.ListSerializer):\n @classmethod\n def with_extra(cls, extra):\n return partial(cls, extra=extra)\n\n def __init__(self, instance=None, data=serializers.empty, extra=None, **kwargs):\n self.__extra = extra\n source = kwargs.get('source', None)\n if instance and source:\n iterable = instance[source]\n instance = zip_instance_extra_with_iterable(instance, iterable, extra)\n super().__init__(instance=instance, data=data, **kwargs)\n\n def get_attribute(self, instance):\n data = super().get_attribute(instance)\n iterable = data.all() if isinstance(data, Manager) else data\n return zip_instance_extra_with_iterable(instance, iterable, self.__extra)\n\n\nclass AplusSerializerMetaMetaclass(type):\n def __new__(cls, name, bases, dict_):\n new_cls = type.__new__(cls, name, bases, dict_)\n for k, v in dict_.items():\n if k[0] != '_' and not callable(v):\n if isinstance(v, dict):\n parent = getattr(super(new_cls, new_cls), k, {})\n setattr(new_cls, k, dict(parent, **v))\n elif isinstance(v, (tuple, list)):\n parent = getattr(super(new_cls, new_cls), k, ())\n seen = set()\n seen_add = seen.add\n res = [x for x in parent if not (x in seen or seen_add(x))]\n res += (x for x in v if not (x in seen or seen_add(x)))\n setattr(new_cls, k, type(v)(res))\n return new_cls\n\n\nclass AplusSerializerMeta(metaclass=AplusSerializerMetaMetaclass):\n pass\n\n\nclass AplusModelSerializerBase(NestedHyperlinkedModelSerializer):\n url_field_name = 'url'\n html_url_field_name = 'html_url'\n\n def get_field_names(self, declared_fields, info):\n fields = list(super().get_field_names(declared_fields, info))\n extra_kwargs = getattr(self.Meta, 'extra_kwargs', {})\n if self.url_field_name not in fields and self.url_field_name in extra_kwargs:\n fields.insert(0, self.url_field_name)\n return fields\n\n def build_unknown_field(self, field_name, model_class):\n if field_name == self.html_url_field_name:\n return (HtmlViewField, {})\n if field_name == self.url_field_name:\n extra_kwargs = getattr(self.Meta, 'extra_kwargs', {})\n kwargs = {'context': self.context}\n kwargs.update(extra_kwargs[self.url_field_name])\n return (NestedHyperlinkedIdentityField, kwargs)\n return super().build_unknown_field(field_name, model_class)\n\n\nclass AplusModelSerializer(AplusModelSerializerBase):\n id = serializers.IntegerField(source='pk', read_only=True)\n\n class Meta(AplusSerializerMeta):\n 
fields = (\n 'id',\n 'url',\n )\n\n\nclass StatisticsSerializer(serializers.Serializer):\n starttime = serializers.DateTimeField(allow_null=True)\n endtime = serializers.DateTimeField(allow_null=True)\n submission_count = serializers.IntegerField(read_only=True)\n submitters = serializers.IntegerField(read_only=True)\n", "path": "lib/api/serializers.py"}]} | 2,649 | 196 |
gh_patches_debug_3154 | rasdani/github-patches | git_diff | ultrabug__py3status-1893 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mpd_status: self.current_status doesn't return (text, state)
self.current_status is set to a tuple of (text, dict_of_mpd_state), but the mpd_status method expects the second element of the tuple to be `play`, `pause` or `stop`. As a result, the module never colors its output.
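A stripped-down sketch of the mismatch (independent of mpd/py3status; the status dict contents are made up) — because the stored value is the whole status dict rather than the state string, none of the string comparisons in `mpd_status()` can ever match:

```python
# What _get_status() currently stores vs. what mpd_status() compares against.
status = {"state": "play", "volume": "85", "repeat": "0"}  # shape of MPDClient.status()
text = "[play] Artist - Title"

current_status = (text, status)   # second element is the whole dict
_, state = current_status

color = None
if state == "play":        # a dict never equals a string...
    color = "COLOR_PLAY"
elif state == "pause":
    color = "COLOR_PAUSE"
elif state == "stop":
    color = "COLOR_STOP"
print(color)  # None -> the output is never colored

# Storing the extracted state string instead (as the fix below does) restores coloring:
current_status = (text, status.get("state"))
```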
</issue>
<code>
[start of py3status/modules/mpd_status.py]
1 # -*- coding: utf-8 -*-
2 """
3 Display song currently playing in mpd.
4
5 Configuration parameters:
6 cache_timeout: how often we refresh this module in seconds (default 2)
7 format: template string (see below)
8 (default '{state} [[[{artist}] - {title}]|[{file}]]')
9 hide_on_error: hide the status if an error has occurred (default False)
10 hide_when_paused: hide the status if state is paused (default False)
11 hide_when_stopped: hide the status if state is stopped (default True)
12 host: mpd host (default 'localhost')
13 idle_subsystems: a space seperated string of subsystems to idle on.
14 player: changes in song information, play state
15 mixer: changes in volume
16 options: e.g. repeat mode
17 See the MPD protocol documentation for additional events.
18 (default 'player mixer options')
19 idle_timeout: force idle to reset every n seconds (default 3600)
20 max_width: maximum status length (default 120)
21 password: mpd password (default None)
22 port: mpd port (default '6600')
23 state_pause: label to display for "paused" state (default '[pause]')
24 state_play: label to display for "playing" state (default '[play]')
25 state_stop: label to display for "stopped" state (default '[stop]')
26 use_idle: whether to use idling instead of polling. None to autodetect
27 (default None)
28
29 Format placeholders:
30 {state} state (paused, playing. stopped) can be defined via `state_..`
31 configuration parameters
32 Refer to the mpc(1) manual page for the list of available placeholders to
33 be used in the format. Placeholders should use braces `{}` rather than
34 percent `%%` eg `{artist}`.
35 Every placeholder can also be prefixed with
36 `next_` to retrieve the data for the song following the one currently
37 playing.
38
39 Color options:
40 color_pause: Paused, default color_degraded
41 color_play: Playing, default color_good
42 color_stop: Stopped, default color_bad
43
44 Requires:
45 python-mpd2: (NOT python2-mpd2)
46
47 Examples:
48 ```
49 # Show state and (artist -) title, if no title fallback to file:
50 {state} [[[{artist} - ]{title}]|[{file}]]
51
52 # Show state, [duration], title (or file) and next song title (or file):
53 {state} \[{time}\] [{title}|{file}] → [{next_title}|{next_file}]
54 ```
55
56 @author shadowprince, zopieux
57 @license Eclipse Public License
58
59 SAMPLE OUTPUT
60 {'color': '#00ff00', 'full_text': '[play] Music For Programming - Idol Eyes'}
61
62 paused
63 {'color': '#ffff00', 'full_text': '[pause] Music For Programming - Idol Eyes'}
64
65 stopped
66 {'color': '#ff0000', 'full_text': '[stop] Music For Programming - Idol Eyes'}
67 """
68
69 import datetime
70 import re
71 import socket
72 from py3status.composite import Composite
73 from mpd import MPDClient, CommandError, ConnectionError
74 from threading import Thread
75 from time import sleep
76
77
78 def song_attr(song, attr):
79 def parse_mtime(date_str):
80 return datetime.datetime.strptime(date_str, "%Y-%m-%dT%H:%M:%SZ")
81
82 if attr == "time":
83 try:
84 duration = int(song["time"])
85 if duration > 0:
86 minutes, seconds = divmod(duration, 60)
87 return "{:d}:{:02d}".format(minutes, seconds)
88 raise ValueError
89 except (KeyError, ValueError):
90 return ""
91 elif attr == "position":
92 try:
93 return "{}".format(int(song["pos"]) + 1)
94 except (KeyError, ValueError):
95 return ""
96 elif attr == "mtime":
97 return parse_mtime(song["last-modified"]).strftime("%c")
98 elif attr == "mdate":
99 return parse_mtime(song["last-modified"]).strftime("%x")
100
101 return song.get(attr, "")
102
103
104 class Py3status:
105 """
106 """
107
108 # available configuration parameters
109 cache_timeout = 2
110 format = "{state} [[[{artist}] - {title}]|[{file}]]"
111 hide_on_error = False
112 hide_when_paused = False
113 hide_when_stopped = True
114 host = "localhost"
115 idle_subsystems = "player mixer options"
116 idle_timeout = 3600
117 max_width = 120
118 password = None
119 port = "6600"
120 state_pause = "[pause]"
121 state_play = "[play]"
122 state_stop = "[stop]"
123 use_idle = None
124
125 def post_config_hook(self):
126 # class variables:
127 self.current_status = None
128 self.idle_thread = Thread()
129 # Convert from %placeholder% to {placeholder}
130 # This is not perfect but should be good enough
131 if not self.py3.get_placeholders_list(self.format) and "%" in self.format:
132 self.format = re.sub("%([a-z]+)%", r"{\1}", self.format)
133 self.py3.log("Old % style format DEPRECATED use { style format")
134 # class variables:
135 self.client = None
136
137 def _get_mpd(self, disconnect=False):
138 if disconnect:
139 try:
140 self.client.disconnect()
141 finally:
142 self.client = None
143 return
144
145 try:
146 if self.client is None:
147 self.client = MPDClient()
148 self.client.connect(host=self.host, port=self.port)
149 if self.password:
150 self.client.password(self.password)
151 if self.use_idle is None:
152 self.use_idle = "idle" in self.client.commands()
153 if self.use_idle and self.idle_timeout:
154 self.client.idletimeout = self.idle_timeout
155 return self.client
156 except (socket.error, ConnectionError, CommandError) as e:
157 self.client = None
158 raise e
159
160 def _state_character(self, state):
161 if state == "play":
162 return self.state_play
163 elif state == "pause":
164 return self.state_pause
165 elif state == "stop":
166 return self.state_stop
167 return "?"
168
169 def mpd_status(self):
170 # I - get current mpd status (or wait until it changes)
171 # this writes into self.current_status
172 if self.use_idle is not False:
173 if not self.idle_thread.is_alive():
174 sleep(self.cache_timeout) # rate limit thread restarting
175 self.idle_thread = Thread(target=self._get_status)
176 self.idle_thread.daemon = True
177 self.idle_thread.start()
178 else:
179 self._get_status()
180
181 # II - format response
182 (text, state) = ("", "")
183 if self.current_status is not None:
184 (text, state) = self.current_status
185
186 if len(text) > self.max_width:
187 text = u"{}...".format(text[: self.max_width - 3])
188
189 response = {
190 "cached_until": self.py3.time_in(self.cache_timeout),
191 "full_text": text if state or not self.hide_on_error else "",
192 }
193
194 if state:
195 if state == "play":
196 response["color"] = self.py3.COLOR_PLAY or self.py3.COLOR_GOOD
197 elif state == "pause":
198 response["color"] = self.py3.COLOR_PAUSE or self.py3.COLOR_DEGRADED
199 elif state == "stop":
200 response["color"] = self.py3.COLOR_STOP or self.py3.COLOR_BAD
201
202 return response
203
204 def _get_status(self):
205 while True:
206 try:
207 status = self._get_mpd().status()
208 song = int(status.get("song", 0))
209 next_song = int(status.get("nextsong", 0))
210
211 state = status.get("state")
212
213 if (state == "pause" and self.hide_when_paused) or (
214 state == "stop" and self.hide_when_stopped
215 ):
216 text = ""
217
218 else:
219 playlist_info = self._get_mpd().playlistinfo()
220 try:
221 song = playlist_info[song]
222 except IndexError:
223 song = {}
224 try:
225 next_song = playlist_info[next_song]
226 except IndexError:
227 next_song = {}
228
229 song["state"] = next_song["state"] = self._state_character(state)
230
231 def attr_getter(attr):
232 if attr.startswith("next_"):
233 return song_attr(next_song, attr[5:])
234 return song_attr(song, attr)
235
236 text = self.py3.safe_format(self.format, attr_getter=attr_getter)
237 if isinstance(text, Composite):
238 text = text.text()
239
240 self.current_status = (text, status)
241
242 if self.use_idle:
243 self.py3.update()
244 # Note: mpd2 does not support more than 1 idle subsystem. so if
245 # the user wants to listen on more than one, we listen on all
246 # and loop until one we're interested in changed.
247 # https://github.com/Mic92/python-mpd2/issues/107
248 changed = self._get_mpd().idle()
249 while not any([c in self.idle_subsystems for c in changed]):
250 changed = self._get_mpd().idle()
251 else:
252 return
253
254 except (ValueError, socket.error, ConnectionError, CommandError) as e:
255 # ValueError can happen when status.get(...) returns None; e.g.
256 # during reversal of playlist
257 if isinstance(e, ValueError):
258 text = "No song information!"
259 if isinstance(e, socket.error):
260 text = "Failed to connect to mpd!"
261 if isinstance(e, ConnectionError):
262 text = "Error while connecting to mpd!"
263 self._get_mpd(disconnect=True)
264 if isinstance(e, CommandError):
265 text = "Failed to authenticate to mpd!"
266 self._get_mpd(disconnect=True)
267
268 state = None
269 self.current_status = (text, status)
270 return
271 finally:
272 self.py3.update() # to propagate error message
273
274 def kill(self):
275 self._get_mpd(disconnect=True)
276
277
278 if __name__ == "__main__":
279 """
280 Run module in test mode.
281 """
282 from py3status.module_test import module_test
283
284 module_test(Py3status)
285
[end of py3status/modules/mpd_status.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/py3status/modules/mpd_status.py b/py3status/modules/mpd_status.py
--- a/py3status/modules/mpd_status.py
+++ b/py3status/modules/mpd_status.py
@@ -237,7 +237,7 @@
if isinstance(text, Composite):
text = text.text()
- self.current_status = (text, status)
+ self.current_status = (text, state)
if self.use_idle:
self.py3.update()
| {"golden_diff": "diff --git a/py3status/modules/mpd_status.py b/py3status/modules/mpd_status.py\n--- a/py3status/modules/mpd_status.py\n+++ b/py3status/modules/mpd_status.py\n@@ -237,7 +237,7 @@\n if isinstance(text, Composite):\n text = text.text()\n \n- self.current_status = (text, status)\n+ self.current_status = (text, state)\n \n if self.use_idle:\n self.py3.update()\n", "issue": "mpd_status: self.current_status dosen't return (text, state)\nself.current_status command returns a tuple of (text, dict_of_mpd_state) but the mpd_status method expects the second value of the tuple to be `play`, `pause` or `stop`. this results in the module not coloring the output\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDisplay song currently playing in mpd.\n\nConfiguration parameters:\n cache_timeout: how often we refresh this module in seconds (default 2)\n format: template string (see below)\n (default '{state} [[[{artist}] - {title}]|[{file}]]')\n hide_on_error: hide the status if an error has occurred (default False)\n hide_when_paused: hide the status if state is paused (default False)\n hide_when_stopped: hide the status if state is stopped (default True)\n host: mpd host (default 'localhost')\n idle_subsystems: a space seperated string of subsystems to idle on.\n player: changes in song information, play state\n mixer: changes in volume\n options: e.g. repeat mode\n See the MPD protocol documentation for additional events.\n (default 'player mixer options')\n idle_timeout: force idle to reset every n seconds (default 3600)\n max_width: maximum status length (default 120)\n password: mpd password (default None)\n port: mpd port (default '6600')\n state_pause: label to display for \"paused\" state (default '[pause]')\n state_play: label to display for \"playing\" state (default '[play]')\n state_stop: label to display for \"stopped\" state (default '[stop]')\n use_idle: whether to use idling instead of polling. None to autodetect\n (default None)\n\nFormat placeholders:\n {state} state (paused, playing. stopped) can be defined via `state_..`\n configuration parameters\n Refer to the mpc(1) manual page for the list of available placeholders to\n be used in the format. 
Placeholders should use braces `{}` rather than\n percent `%%` eg `{artist}`.\n Every placeholder can also be prefixed with\n `next_` to retrieve the data for the song following the one currently\n playing.\n\nColor options:\n color_pause: Paused, default color_degraded\n color_play: Playing, default color_good\n color_stop: Stopped, default color_bad\n\nRequires:\n python-mpd2: (NOT python2-mpd2)\n\nExamples:\n```\n# Show state and (artist -) title, if no title fallback to file:\n{state} [[[{artist} - ]{title}]|[{file}]]\n\n# Show state, [duration], title (or file) and next song title (or file):\n{state} \\[{time}\\] [{title}|{file}] \u2192 [{next_title}|{next_file}]\n```\n\n@author shadowprince, zopieux\n@license Eclipse Public License\n\nSAMPLE OUTPUT\n{'color': '#00ff00', 'full_text': '[play] Music For Programming - Idol Eyes'}\n\npaused\n{'color': '#ffff00', 'full_text': '[pause] Music For Programming - Idol Eyes'}\n\nstopped\n{'color': '#ff0000', 'full_text': '[stop] Music For Programming - Idol Eyes'}\n\"\"\"\n\nimport datetime\nimport re\nimport socket\nfrom py3status.composite import Composite\nfrom mpd import MPDClient, CommandError, ConnectionError\nfrom threading import Thread\nfrom time import sleep\n\n\ndef song_attr(song, attr):\n def parse_mtime(date_str):\n return datetime.datetime.strptime(date_str, \"%Y-%m-%dT%H:%M:%SZ\")\n\n if attr == \"time\":\n try:\n duration = int(song[\"time\"])\n if duration > 0:\n minutes, seconds = divmod(duration, 60)\n return \"{:d}:{:02d}\".format(minutes, seconds)\n raise ValueError\n except (KeyError, ValueError):\n return \"\"\n elif attr == \"position\":\n try:\n return \"{}\".format(int(song[\"pos\"]) + 1)\n except (KeyError, ValueError):\n return \"\"\n elif attr == \"mtime\":\n return parse_mtime(song[\"last-modified\"]).strftime(\"%c\")\n elif attr == \"mdate\":\n return parse_mtime(song[\"last-modified\"]).strftime(\"%x\")\n\n return song.get(attr, \"\")\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n\n # available configuration parameters\n cache_timeout = 2\n format = \"{state} [[[{artist}] - {title}]|[{file}]]\"\n hide_on_error = False\n hide_when_paused = False\n hide_when_stopped = True\n host = \"localhost\"\n idle_subsystems = \"player mixer options\"\n idle_timeout = 3600\n max_width = 120\n password = None\n port = \"6600\"\n state_pause = \"[pause]\"\n state_play = \"[play]\"\n state_stop = \"[stop]\"\n use_idle = None\n\n def post_config_hook(self):\n # class variables:\n self.current_status = None\n self.idle_thread = Thread()\n # Convert from %placeholder% to {placeholder}\n # This is not perfect but should be good enough\n if not self.py3.get_placeholders_list(self.format) and \"%\" in self.format:\n self.format = re.sub(\"%([a-z]+)%\", r\"{\\1}\", self.format)\n self.py3.log(\"Old % style format DEPRECATED use { style format\")\n # class variables:\n self.client = None\n\n def _get_mpd(self, disconnect=False):\n if disconnect:\n try:\n self.client.disconnect()\n finally:\n self.client = None\n return\n\n try:\n if self.client is None:\n self.client = MPDClient()\n self.client.connect(host=self.host, port=self.port)\n if self.password:\n self.client.password(self.password)\n if self.use_idle is None:\n self.use_idle = \"idle\" in self.client.commands()\n if self.use_idle and self.idle_timeout:\n self.client.idletimeout = self.idle_timeout\n return self.client\n except (socket.error, ConnectionError, CommandError) as e:\n self.client = None\n raise e\n\n def _state_character(self, state):\n if state == \"play\":\n return 
self.state_play\n elif state == \"pause\":\n return self.state_pause\n elif state == \"stop\":\n return self.state_stop\n return \"?\"\n\n def mpd_status(self):\n # I - get current mpd status (or wait until it changes)\n # this writes into self.current_status\n if self.use_idle is not False:\n if not self.idle_thread.is_alive():\n sleep(self.cache_timeout) # rate limit thread restarting\n self.idle_thread = Thread(target=self._get_status)\n self.idle_thread.daemon = True\n self.idle_thread.start()\n else:\n self._get_status()\n\n # II - format response\n (text, state) = (\"\", \"\")\n if self.current_status is not None:\n (text, state) = self.current_status\n\n if len(text) > self.max_width:\n text = u\"{}...\".format(text[: self.max_width - 3])\n\n response = {\n \"cached_until\": self.py3.time_in(self.cache_timeout),\n \"full_text\": text if state or not self.hide_on_error else \"\",\n }\n\n if state:\n if state == \"play\":\n response[\"color\"] = self.py3.COLOR_PLAY or self.py3.COLOR_GOOD\n elif state == \"pause\":\n response[\"color\"] = self.py3.COLOR_PAUSE or self.py3.COLOR_DEGRADED\n elif state == \"stop\":\n response[\"color\"] = self.py3.COLOR_STOP or self.py3.COLOR_BAD\n\n return response\n\n def _get_status(self):\n while True:\n try:\n status = self._get_mpd().status()\n song = int(status.get(\"song\", 0))\n next_song = int(status.get(\"nextsong\", 0))\n\n state = status.get(\"state\")\n\n if (state == \"pause\" and self.hide_when_paused) or (\n state == \"stop\" and self.hide_when_stopped\n ):\n text = \"\"\n\n else:\n playlist_info = self._get_mpd().playlistinfo()\n try:\n song = playlist_info[song]\n except IndexError:\n song = {}\n try:\n next_song = playlist_info[next_song]\n except IndexError:\n next_song = {}\n\n song[\"state\"] = next_song[\"state\"] = self._state_character(state)\n\n def attr_getter(attr):\n if attr.startswith(\"next_\"):\n return song_attr(next_song, attr[5:])\n return song_attr(song, attr)\n\n text = self.py3.safe_format(self.format, attr_getter=attr_getter)\n if isinstance(text, Composite):\n text = text.text()\n\n self.current_status = (text, status)\n\n if self.use_idle:\n self.py3.update()\n # Note: mpd2 does not support more than 1 idle subsystem. so if\n # the user wants to listen on more than one, we listen on all\n # and loop until one we're interested in changed.\n # https://github.com/Mic92/python-mpd2/issues/107\n changed = self._get_mpd().idle()\n while not any([c in self.idle_subsystems for c in changed]):\n changed = self._get_mpd().idle()\n else:\n return\n\n except (ValueError, socket.error, ConnectionError, CommandError) as e:\n # ValueError can happen when status.get(...) returns None; e.g.\n # during reversal of playlist\n if isinstance(e, ValueError):\n text = \"No song information!\"\n if isinstance(e, socket.error):\n text = \"Failed to connect to mpd!\"\n if isinstance(e, ConnectionError):\n text = \"Error while connecting to mpd!\"\n self._get_mpd(disconnect=True)\n if isinstance(e, CommandError):\n text = \"Failed to authenticate to mpd!\"\n self._get_mpd(disconnect=True)\n\n state = None\n self.current_status = (text, status)\n return\n finally:\n self.py3.update() # to propagate error message\n\n def kill(self):\n self._get_mpd(disconnect=True)\n\n\nif __name__ == \"__main__\":\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n\n module_test(Py3status)\n", "path": "py3status/modules/mpd_status.py"}]} | 3,627 | 107 |
gh_patches_debug_21905 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-850 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error numpy.ndarray size changed
## 🐛 Bug
Hello,
I installed a clean Ubuntu 20.04 and the NVIDIA 440 drivers for an RTX 3080. Then I followed the instructions to train a model with NVIDIA NeMo Tacotron2.
I got the following error. It seems that the latest release of torchmetrics is not compatible with the latest release of numpy.
numpy fails with the following error:
`ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject`
I share the full stack trace:
```
pc@pc:~/Downloads$ sudo python3.8 tacotron2.py sample_rate=25050 train_dataset=/home/pc/Downloads/newcontent.json validation_datasets=/home/pc/Downloads/newcontent.json trainer.max_epochs=1 trainer.accelerator=null trainer.check_val_every_n_epoch=1
[sudo] password for pc:
Traceback (most recent call last):
File "tacotron2.py", line 24, in
import pytorch_lightning as pl
File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/init.py", line 20, in
from pytorch_lightning.callbacks import Callback # noqa: E402
File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/callbacks/init.py", line 14, in
from pytorch_lightning.callbacks.base import Callback
File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/callbacks/base.py", line 26, in
from pytorch_lightning.utilities.types import STEP_OUTPUT
File "/usr/local/lib/python3.8/site-packages/pytorch_lightning/utilities/types.py", line 25, in
from torchmetrics import Metric
File "/usr/local/lib/python3.8/site-packages/torchmetrics/init.py", line 14, in
from torchmetrics import functional # noqa: E402
File "/usr/local/lib/python3.8/site-packages/torchmetrics/functional/init.py", line 14, in
from torchmetrics.functional.audio.pit import permutation_invariant_training, pit, pit_permutate
File "/usr/local/lib/python3.8/site-packages/torchmetrics/functional/audio/init.py", line 26, in
from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality # noqa: F401
File "/usr/local/lib/python3.8/site-packages/torchmetrics/functional/audio/pesq.py", line 20, in
import pesq as pesq_backend
File "/usr/local/lib/python3.8/site-packages/pesq/init.py", line 6, in
from .cypesq import cypesq, cypesq_retvals, cypesq_error_message as pesq_error_message
File "pesq/cypesq.pyx", line 1, in init cypesq
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```
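The failure is an ABI mismatch: the compiled Cython extension inside `pesq` was built against an older numpy than the one now installed. A quick check and the usual remedy, as a sketch (the pip command in the comment is the standard way to rebuild `pesq`, and it matches the note added to the torchmetrics docstrings in the patch below):

```python
# Minimal check for the pesq / numpy ABI mismatch shown in the traceback above.
import numpy as np

print("numpy", np.__version__)
try:
    import pesq  # noqa: F401
    print("pesq imports cleanly")
except ValueError as err:
    # Typical message: "numpy.ndarray size changed, may indicate binary incompatibility."
    print("pesq/numpy ABI mismatch:", err)
    # Rebuilding pesq against the currently installed numpy usually resolves it:
    #   pip install --force-reinstall --no-cache-dir pesq
```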
### Environment
- PyTorch Version: 1.10.1
- OS: Ubuntu 20.10
- PyTorch installed with pip
- Python version: python3.8
- CUDA/cuDNN version: 11.3
- GPU models and configuration: RTX 3080
- Any other relevant information: MSI Creator P100X 637
Could you help me to solve this?
Thank you!
</issue>
<code>
[start of torchmetrics/functional/audio/pesq.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import numpy as np
15
16 from torchmetrics.utilities.imports import _PESQ_AVAILABLE
17
18 if _PESQ_AVAILABLE:
19 import pesq as pesq_backend
20 else:
21 pesq_backend = None
22 import torch
23 from torch import Tensor
24
25 from torchmetrics.utilities.checks import _check_same_shape
26
27 __doctest_requires__ = {("perceptual_evaluation_speech_quality",): ["pesq"]}
28
29
30 def perceptual_evaluation_speech_quality(
31 preds: Tensor, target: Tensor, fs: int, mode: str, keep_same_device: bool = False
32 ) -> Tensor:
33 r"""PESQ (Perceptual Evaluation of Speech Quality)
34
35 This is a wrapper for the ``pesq`` package [1]. Note that input will be moved to `cpu`
36 to perform the metric calculation.
37
38 .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
39 torchmetrics[audio]`` or ``pip install pesq``
40
41 Args:
42 preds:
43 shape ``[...,time]``
44 target:
45 shape ``[...,time]``
46 fs:
47 sampling frequency, should be 16000 or 8000 (Hz)
48 mode:
49 'wb' (wide-band) or 'nb' (narrow-band)
50 keep_same_device:
51 whether to move the pesq value to the device of preds
52
53 Returns:
54 pesq value of shape [...]
55
56 Raises:
57 ModuleNotFoundError:
58 If ``peqs`` package is not installed
59 ValueError:
60 If ``fs`` is not either ``8000`` or ``16000``
61 ValueError:
62 If ``mode`` is not either ``"wb"`` or ``"nb"``
63
64 Example:
65 >>> from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality
66 >>> import torch
67 >>> g = torch.manual_seed(1)
68 >>> preds = torch.randn(8000)
69 >>> target = torch.randn(8000)
70 >>> perceptual_evaluation_speech_quality(preds, target, 8000, 'nb')
71 tensor(2.2076)
72 >>> perceptual_evaluation_speech_quality(preds, target, 16000, 'wb')
73 tensor(1.7359)
74
75 References:
76 [1] https://github.com/ludlows/python-pesq
77 """
78 if not _PESQ_AVAILABLE:
79 raise ModuleNotFoundError(
80 "PESQ metric requires that pesq is installed."
81 " Either install as `pip install torchmetrics[audio]` or `pip install pesq`."
82 )
83 if fs not in (8000, 16000):
84 raise ValueError(f"Expected argument `fs` to either be 8000 or 16000 but got {fs}")
85 if mode not in ("wb", "nb"):
86 raise ValueError(f"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}")
87 _check_same_shape(preds, target)
88
89 if preds.ndim == 1:
90 pesq_val_np = pesq_backend.pesq(fs, target.detach().cpu().numpy(), preds.detach().cpu().numpy(), mode)
91 pesq_val = torch.tensor(pesq_val_np)
92 else:
93 preds_np = preds.reshape(-1, preds.shape[-1]).detach().cpu().numpy()
94 target_np = target.reshape(-1, preds.shape[-1]).detach().cpu().numpy()
95 pesq_val_np = np.empty(shape=(preds_np.shape[0]))
96 for b in range(preds_np.shape[0]):
97 pesq_val_np[b] = pesq_backend.pesq(fs, target_np[b, :], preds_np[b, :], mode)
98 pesq_val = torch.from_numpy(pesq_val_np)
99 pesq_val = pesq_val.reshape(preds.shape[:-1])
100
101 if keep_same_device:
102 pesq_val = pesq_val.to(preds.device)
103
104 return pesq_val
105
[end of torchmetrics/functional/audio/pesq.py]
[start of torchmetrics/audio/pesq.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Dict, Optional
15
16 from torch import Tensor, tensor
17
18 from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality
19 from torchmetrics.metric import Metric
20 from torchmetrics.utilities.imports import _PESQ_AVAILABLE
21
22 __doctest_requires__ = {("PerceptualEvaluationSpeechQuality"): ["pesq"]}
23
24
25 class PerceptualEvaluationSpeechQuality(Metric):
26 """Perceptual Evaluation of Speech Quality (PESQ)
27
28 This is a wrapper for the pesq package [1]. . Note that input will be moved to `cpu`
29 to perform the metric calculation.
30
31 .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
32 torchmetrics[audio]`` or ``pip install pesq``
33
34 Forward accepts
35
36 - ``preds``: ``shape [...,time]``
37 - ``target``: ``shape [...,time]``
38
39 Args:
40 fs:
41 sampling frequency, should be 16000 or 8000 (Hz)
42 mode:
43 'wb' (wide-band) or 'nb' (narrow-band)
44 keep_same_device:
45 whether to move the pesq value to the device of preds
46 compute_on_step:
47 Forward only calls ``update()`` and returns None if this is set to False.
48
49 .. deprecated:: v0.8
50 Argument has no use anymore and will be removed v0.9.
51
52 kwargs:
53 Additional keyword arguments, see :ref:`Metric kwargs` for more info.
54
55 Raises:
56 ModuleNotFoundError:
57 If ``peqs`` package is not installed
58 ValueError:
59 If ``fs`` is not either ``8000`` or ``16000``
60 ValueError:
61 If ``mode`` is not either ``"wb"`` or ``"nb"``
62
63 Example:
64 >>> from torchmetrics.audio.pesq import PerceptualEvaluationSpeechQuality
65 >>> import torch
66 >>> g = torch.manual_seed(1)
67 >>> preds = torch.randn(8000)
68 >>> target = torch.randn(8000)
69 >>> nb_pesq = PerceptualEvaluationSpeechQuality(8000, 'nb')
70 >>> nb_pesq(preds, target)
71 tensor(2.2076)
72 >>> wb_pesq = PerceptualEvaluationSpeechQuality(16000, 'wb')
73 >>> wb_pesq(preds, target)
74 tensor(1.7359)
75
76 References:
77 [1] https://github.com/ludlows/python-pesq
78 """
79
80 sum_pesq: Tensor
81 total: Tensor
82 is_differentiable = False
83 higher_is_better = True
84
85 def __init__(
86 self,
87 fs: int,
88 mode: str,
89 compute_on_step: Optional[bool] = None,
90 **kwargs: Dict[str, Any],
91 ) -> None:
92 super().__init__(compute_on_step=compute_on_step, **kwargs)
93 if not _PESQ_AVAILABLE:
94 raise ModuleNotFoundError(
95 "PerceptualEvaluationSpeechQuality metric requires that `pesq` is installed."
96 " Either install as `pip install torchmetrics[audio]` or `pip install pesq`."
97 )
98 if fs not in (8000, 16000):
99 raise ValueError(f"Expected argument `fs` to either be 8000 or 16000 but got {fs}")
100 self.fs = fs
101 if mode not in ("wb", "nb"):
102 raise ValueError(f"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}")
103 self.mode = mode
104
105 self.add_state("sum_pesq", default=tensor(0.0), dist_reduce_fx="sum")
106 self.add_state("total", default=tensor(0), dist_reduce_fx="sum")
107
108 def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore
109 """Update state with predictions and targets.
110
111 Args:
112 preds: Predictions from model
113 target: Ground truth values
114 """
115 pesq_batch = perceptual_evaluation_speech_quality(preds, target, self.fs, self.mode, False).to(
116 self.sum_pesq.device
117 )
118
119 self.sum_pesq += pesq_batch.sum()
120 self.total += pesq_batch.numel()
121
122 def compute(self) -> Tensor:
123 """Computes average PESQ."""
124 return self.sum_pesq / self.total
125
[end of torchmetrics/audio/pesq.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchmetrics/audio/pesq.py b/torchmetrics/audio/pesq.py
--- a/torchmetrics/audio/pesq.py
+++ b/torchmetrics/audio/pesq.py
@@ -29,7 +29,9 @@
to perform the metric calculation.
.. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
- torchmetrics[audio]`` or ``pip install pesq``
+ torchmetrics[audio]`` or ``pip install pesq``. Note that ``pesq`` will compile with your currently
+ installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
+ most likely have to reinstall ``pesq``.
Forward accepts
diff --git a/torchmetrics/functional/audio/pesq.py b/torchmetrics/functional/audio/pesq.py
--- a/torchmetrics/functional/audio/pesq.py
+++ b/torchmetrics/functional/audio/pesq.py
@@ -36,7 +36,9 @@
to perform the metric calculation.
.. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install
- torchmetrics[audio]`` or ``pip install pesq``
+ torchmetrics[audio]`` or ``pip install pesq``. Note that ``pesq`` will compile with your currently
+ installed version of numpy, meaning that if you upgrade numpy at some point in the future you will
+ most likely have to reinstall ``pesq``.
Args:
preds:
| {"golden_diff": "diff --git a/torchmetrics/audio/pesq.py b/torchmetrics/audio/pesq.py\n--- a/torchmetrics/audio/pesq.py\n+++ b/torchmetrics/audio/pesq.py\n@@ -29,7 +29,9 @@\n to perform the metric calculation.\n \n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n- torchmetrics[audio]`` or ``pip install pesq``\n+ torchmetrics[audio]`` or ``pip install pesq``. Note that ``pesq`` will compile with your currently\n+ installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n+ most likely have to reinstall ``pesq``.\n \n Forward accepts\n \ndiff --git a/torchmetrics/functional/audio/pesq.py b/torchmetrics/functional/audio/pesq.py\n--- a/torchmetrics/functional/audio/pesq.py\n+++ b/torchmetrics/functional/audio/pesq.py\n@@ -36,7 +36,9 @@\n to perform the metric calculation.\n \n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n- torchmetrics[audio]`` or ``pip install pesq``\n+ torchmetrics[audio]`` or ``pip install pesq``. Note that ``pesq`` will compile with your currently\n+ installed version of numpy, meaning that if you upgrade numpy at some point in the future you will\n+ most likely have to reinstall ``pesq``.\n \n Args:\n preds:\n", "issue": "Error numpy.ndarray size changed\n## \ud83d\udc1b Bug\r\n\r\nHello,\r\n\r\nI Installed a clean Ubuntu 20.04, installed nvidia drivers 440 for RTX 3080. Then I followed the instructions to train a model with NVIDIA NeMo Tacotron2.\r\n\r\nI got the next error. It seems that the latest release of metrics is not compatible with the latest release of numpy.\r\n\r\nnumpy fails with the next error:\r\nvalueerror: numpy.ndarray size changed, may indicate binary incompatibility. 
expected 96 from c header, got 88 from pyobject\r\n\r\nI share the full stack trace:\r\n```\r\npc@pc:~/Downloads$ sudo python3.8 tacotron2.py sample_rate=25050 train_dataset=/home/pc/Downloads/newcontent.json validation_datasets=/home/pc/Downloads/newcontent.json trainer.max_epochs=1 trainer.accelerator=null trainer.check_val_every_n_epoch=1\r\n[sudo] password for pc:\r\nTraceback (most recent call last):\r\nFile \"tacotron2.py\", line 24, in\r\nimport pytorch_lightning as pl\r\nFile \"/usr/local/lib/python3.8/site-packages/pytorch_lightning/init.py\", line 20, in\r\nfrom pytorch_lightning.callbacks import Callback # noqa: E402\r\nFile \"/usr/local/lib/python3.8/site-packages/pytorch_lightning/callbacks/init.py\", line 14, in\r\nfrom pytorch_lightning.callbacks.base import Callback\r\nFile \"/usr/local/lib/python3.8/site-packages/pytorch_lightning/callbacks/base.py\", line 26, in\r\nfrom pytorch_lightning.utilities.types import STEP_OUTPUT\r\nFile \"/usr/local/lib/python3.8/site-packages/pytorch_lightning/utilities/types.py\", line 25, in\r\nfrom torchmetrics import Metric\r\nFile \"/usr/local/lib/python3.8/site-packages/torchmetrics/init.py\", line 14, in\r\nfrom torchmetrics import functional # noqa: E402\r\nFile \"/usr/local/lib/python3.8/site-packages/torchmetrics/functional/init.py\", line 14, in\r\nfrom torchmetrics.functional.audio.pit import permutation_invariant_training, pit, pit_permutate\r\nFile \"/usr/local/lib/python3.8/site-packages/torchmetrics/functional/audio/init.py\", line 26, in\r\nfrom torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality # noqa: F401\r\nFile \"/usr/local/lib/python3.8/site-packages/torchmetrics/functional/audio/pesq.py\", line 20, in\r\nimport pesq as pesq_backend\r\nFile \"/usr/local/lib/python3.8/site-packages/pesq/init.py\", line 6, in\r\nfrom .cypesq import cypesq, cypesq_retvals, cypesq_error_message as pesq_error_message\r\nFile \"pesq/cypesq.pyx\", line 1, in init cypesq\r\nValueError: numpy.ndarray size changed, may indicate binary incompatibility. 
Expected 96 from C header, got 88 from PyObject\r\n```\r\n\r\n### Environment\r\n\r\n- PyTorch Version: 1.10.1\r\n- OS: Ubuntu 20.10\r\n- PyTorch installed with pip\r\n- Python version: python3.8\r\n- CUDA/cuDNN version: 11.3\r\n- GPU models and configuration: RTX 3080\r\n- Any other relevant information: MSI Creator P100X 637\r\n\r\nCould you help me to solve this?\r\n\r\nThank you!\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport numpy as np\n\nfrom torchmetrics.utilities.imports import _PESQ_AVAILABLE\n\nif _PESQ_AVAILABLE:\n import pesq as pesq_backend\nelse:\n pesq_backend = None\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_same_shape\n\n__doctest_requires__ = {(\"perceptual_evaluation_speech_quality\",): [\"pesq\"]}\n\n\ndef perceptual_evaluation_speech_quality(\n preds: Tensor, target: Tensor, fs: int, mode: str, keep_same_device: bool = False\n) -> Tensor:\n r\"\"\"PESQ (Perceptual Evaluation of Speech Quality)\n\n This is a wrapper for the ``pesq`` package [1]. Note that input will be moved to `cpu`\n to perform the metric calculation.\n\n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``\n\n Args:\n preds:\n shape ``[...,time]``\n target:\n shape ``[...,time]``\n fs:\n sampling frequency, should be 16000 or 8000 (Hz)\n mode:\n 'wb' (wide-band) or 'nb' (narrow-band)\n keep_same_device:\n whether to move the pesq value to the device of preds\n\n Returns:\n pesq value of shape [...]\n\n Raises:\n ModuleNotFoundError:\n If ``peqs`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n\n Example:\n >>> from torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality\n >>> import torch\n >>> g = torch.manual_seed(1)\n >>> preds = torch.randn(8000)\n >>> target = torch.randn(8000)\n >>> perceptual_evaluation_speech_quality(preds, target, 8000, 'nb')\n tensor(2.2076)\n >>> perceptual_evaluation_speech_quality(preds, target, 16000, 'wb')\n tensor(1.7359)\n\n References:\n [1] https://github.com/ludlows/python-pesq\n \"\"\"\n if not _PESQ_AVAILABLE:\n raise ModuleNotFoundError(\n \"PESQ metric requires that pesq is installed.\"\n \" Either install as `pip install torchmetrics[audio]` or `pip install pesq`.\"\n )\n if fs not in (8000, 16000):\n raise ValueError(f\"Expected argument `fs` to either be 8000 or 16000 but got {fs}\")\n if mode not in (\"wb\", \"nb\"):\n raise ValueError(f\"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}\")\n _check_same_shape(preds, target)\n\n if preds.ndim == 1:\n pesq_val_np = pesq_backend.pesq(fs, target.detach().cpu().numpy(), preds.detach().cpu().numpy(), mode)\n pesq_val = torch.tensor(pesq_val_np)\n else:\n preds_np = preds.reshape(-1, preds.shape[-1]).detach().cpu().numpy()\n target_np = 
target.reshape(-1, preds.shape[-1]).detach().cpu().numpy()\n pesq_val_np = np.empty(shape=(preds_np.shape[0]))\n for b in range(preds_np.shape[0]):\n pesq_val_np[b] = pesq_backend.pesq(fs, target_np[b, :], preds_np[b, :], mode)\n pesq_val = torch.from_numpy(pesq_val_np)\n pesq_val = pesq_val.reshape(preds.shape[:-1])\n\n if keep_same_device:\n pesq_val = pesq_val.to(preds.device)\n\n return pesq_val\n", "path": "torchmetrics/functional/audio/pesq.py"}, {"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, Optional\n\nfrom torch import Tensor, tensor\n\nfrom torchmetrics.functional.audio.pesq import perceptual_evaluation_speech_quality\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities.imports import _PESQ_AVAILABLE\n\n__doctest_requires__ = {(\"PerceptualEvaluationSpeechQuality\"): [\"pesq\"]}\n\n\nclass PerceptualEvaluationSpeechQuality(Metric):\n \"\"\"Perceptual Evaluation of Speech Quality (PESQ)\n\n This is a wrapper for the pesq package [1]. . Note that input will be moved to `cpu`\n to perform the metric calculation.\n\n .. note:: using this metrics requires you to have ``pesq`` install. Either install as ``pip install\n torchmetrics[audio]`` or ``pip install pesq``\n\n Forward accepts\n\n - ``preds``: ``shape [...,time]``\n - ``target``: ``shape [...,time]``\n\n Args:\n fs:\n sampling frequency, should be 16000 or 8000 (Hz)\n mode:\n 'wb' (wide-band) or 'nb' (narrow-band)\n keep_same_device:\n whether to move the pesq value to the device of preds\n compute_on_step:\n Forward only calls ``update()`` and returns None if this is set to False.\n\n .. 
deprecated:: v0.8\n Argument has no use anymore and will be removed v0.9.\n\n kwargs:\n Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Raises:\n ModuleNotFoundError:\n If ``peqs`` package is not installed\n ValueError:\n If ``fs`` is not either ``8000`` or ``16000``\n ValueError:\n If ``mode`` is not either ``\"wb\"`` or ``\"nb\"``\n\n Example:\n >>> from torchmetrics.audio.pesq import PerceptualEvaluationSpeechQuality\n >>> import torch\n >>> g = torch.manual_seed(1)\n >>> preds = torch.randn(8000)\n >>> target = torch.randn(8000)\n >>> nb_pesq = PerceptualEvaluationSpeechQuality(8000, 'nb')\n >>> nb_pesq(preds, target)\n tensor(2.2076)\n >>> wb_pesq = PerceptualEvaluationSpeechQuality(16000, 'wb')\n >>> wb_pesq(preds, target)\n tensor(1.7359)\n\n References:\n [1] https://github.com/ludlows/python-pesq\n \"\"\"\n\n sum_pesq: Tensor\n total: Tensor\n is_differentiable = False\n higher_is_better = True\n\n def __init__(\n self,\n fs: int,\n mode: str,\n compute_on_step: Optional[bool] = None,\n **kwargs: Dict[str, Any],\n ) -> None:\n super().__init__(compute_on_step=compute_on_step, **kwargs)\n if not _PESQ_AVAILABLE:\n raise ModuleNotFoundError(\n \"PerceptualEvaluationSpeechQuality metric requires that `pesq` is installed.\"\n \" Either install as `pip install torchmetrics[audio]` or `pip install pesq`.\"\n )\n if fs not in (8000, 16000):\n raise ValueError(f\"Expected argument `fs` to either be 8000 or 16000 but got {fs}\")\n self.fs = fs\n if mode not in (\"wb\", \"nb\"):\n raise ValueError(f\"Expected argument `mode` to either be 'wb' or 'nb' but got {mode}\")\n self.mode = mode\n\n self.add_state(\"sum_pesq\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0), dist_reduce_fx=\"sum\")\n\n def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore\n \"\"\"Update state with predictions and targets.\n\n Args:\n preds: Predictions from model\n target: Ground truth values\n \"\"\"\n pesq_batch = perceptual_evaluation_speech_quality(preds, target, self.fs, self.mode, False).to(\n self.sum_pesq.device\n )\n\n self.sum_pesq += pesq_batch.sum()\n self.total += pesq_batch.numel()\n\n def compute(self) -> Tensor:\n \"\"\"Computes average PESQ.\"\"\"\n return self.sum_pesq / self.total\n", "path": "torchmetrics/audio/pesq.py"}]} | 3,928 | 350 |
gh_patches_debug_8688 | rasdani/github-patches | git_diff | NVIDIA__NVFlare-2547 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] aws default vpc should not be required.
**Describe the bug**
nvflare commands bomb out, if no default VPC is defined in my AWS account. Many AWS accounts (per AWS recommendation) have no default VPC assigned, however they may have exactly one VPC assigned. In that case (check `aws ec2 describe-vpcs`) you do not have to ask the user and just pick that one. If there is none you can tell the user to create a default vpc if they have the permissions to do so: `aws ec2 create-default-vpc` or have their admin create one
**To Reproduce**
```
dp@grammy:~/NVFlare$ nvflare dashboard --cloud aws
Dashboard launch script for cloud is written at /home/dp/NVFlare/aws_start_dsb.sh. Now running the script.
This script requires aws (AWS CLI), sshpass, dig and jq. Now checking if they are installed.
Checking if aws exists. => found
Checking if sshpass exists. => found
Checking if dig exists. => found
Checking if jq exists. => found
One initial user will be created when starting dashboard.
Please enter the email address for this user.
[email protected]
Generating key pair for VM
An error occurred (VPCIdNotSpecified) when calling the CreateSecurityGroup operation: No default VPC for this user
Security group id:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --group-id: expected one argument
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --group-id: expected one argument
```
**Desktop (please complete the following information):**
- OS: [e.g. wsl windows 11]
- Python Version [e.g. 3.9]
- NVFlare Version [e.g. 2.4]
**Other comments:**
Ask ChatGPT/Claude:
Does aws recommend to not create a default vpc for enterprise aws accounts?
Yes, AWS generally recommends not creating a default VPC for enterprise AWS accounts. Instead, they recommend creating custom VPCs tailored to your specific network requirements.
Here are a few reasons why:
Security: Default VPCs have predetermined settings, which might not align with your organization's security policies. Creating a custom VPC allows you to configure security groups, network ACLs, and other settings as per your needs.
IP Address Range: Default VPCs come with a predetermined IP address range. This might lead to IP address conflicts if you have multiple AWS accounts or if you need to connect your AWS resources with your on-premises network. With a custom VPC, you can choose your IP address range.
VPC Peering: If you need to set up VPC peering between multiple VPCs, using default VPCs can make the process more complex due to potential overlapping IP address ranges.
Compliance: Depending on your industry, you might have to adhere to certain compliance requirements. Default VPCs might not be set up to meet these requirements out of the box.
Customization: Custom VPCs give you complete control over your network setup, including subnets, route tables, gateways, and more. This level of customization is essential for many enterprise setups.
For these reasons, AWS recommends creating custom VPCs for enterprise accounts, as it provides more flexibility, security, and control over your network infrastructure. Cop
</issue>
<code>
[start of nvflare/dashboard/cli.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import argparse
16 import os
17 import signal
18 import subprocess
19 import sys
20
21 import docker
22 import nvflare
23 from nvflare.apis.utils.format_check import name_check
24 from nvflare.lighter import tplt_utils, utils
25
26 supported_csp = ("azure", "aws")
27
28
29 def start(args):
30 cwd = os.getcwd()
31 if not args.folder:
32 folder = cwd
33 else:
34 folder = os.path.join(cwd, args.folder)
35 environment = dict()
36 env_vars = args.env
37 if env_vars:
38 for e in env_vars:
39 splitted = e.split("=")
40 environment[splitted[0]] = splitted[1]
41 passphrase = args.passphrase
42 if passphrase:
43 environment["NVFL_DASHBOARD_PP"] = passphrase
44 if args.cred:
45 environment.update({"NVFL_CREDENTIAL": args.cred})
46 elif not os.path.exists(os.path.join(folder, ".db_init_done")):
47 need_email = True
48 while need_email:
49 answer = input(
50 "Please provide project admin email address. This person will be the super user of the dashboard and this project.\n"
51 )
52 error, reason = name_check(answer, "email")
53 if error:
54 print(f"Expecting an email address, but got one in an invalid format. Reason: {reason}")
55 else:
56 need_email = False
57 print("generating random password")
58 pwd = utils.generate_password(8)
59 print(f"Project admin credential is {answer} and the password is {pwd}")
60 environment.update({"NVFL_CREDENTIAL": f"{answer}:{pwd}"})
61 if args.local:
62 return start_local(environment)
63 try:
64 client = docker.from_env()
65 except docker.errors.DockerException:
66 print("Unable to communicate to docker daemon/socket. Please make sure your docker is up and running.")
67 exit(0)
68 version = nvflare.__version__
69 dashboard_image = f"nvflare/nvflare:{version}"
70 if args.image:
71 if dashboard_image != args.image:
72 print(
73 f"Current dashboard container image is nvflare/nvflare:{version}, but requesting to use {args.image}. Use it at your own risk."
74 )
75 dashboard_image = args.image
76 try:
77 print(f"Pulling {dashboard_image}, may take some time to finish.")
78 _ = client.images.pull(dashboard_image)
79 except docker.errors.APIError:
80 print(f"unable to pull {dashboard_image}")
81 exit(1)
82 print(f"Launching {dashboard_image}")
83 print(f"Dashboard will listen to port {args.port}")
84 print(f"{folder} on host mounted to /var/tmp/nvflare/dashboard in container")
85 if environment:
86 print(f"environment vars set to {environment}")
87 else:
88 print("No additional environment variables set to the launched container.")
89 try:
90 container_obj = client.containers.run(
91 dashboard_image,
92 entrypoint=["/usr/local/bin/python3", "nvflare/dashboard/wsgi.py"],
93 detach=True,
94 auto_remove=True,
95 name="nvflare-dashboard",
96 ports={8443: args.port},
97 volumes={folder: {"bind": "/var/tmp/nvflare/dashboard", "model": "rw"}},
98 environment=environment,
99 )
100 except docker.errors.APIError as e:
101 print(f"Either {dashboard_image} image does not exist or another nvflare-dashboard instance is still running.")
102 print("Please either provide an existing container image or stop the running container instance.")
103 print(e)
104 exit(1)
105 if container_obj:
106 print("Dashboard container started")
107 print("Container name nvflare-dashboard")
108 print(f"id is {container_obj.id}")
109 else:
110 print("Container failed to start")
111
112
113 def start_local(env):
114 print("Local dashboard without docker is for development and test only")
115 file_dir_path = os.path.dirname(__file__)
116 wsgi_location = os.path.join(file_dir_path, "wsgi.py")
117 cmd = [sys.executable, wsgi_location]
118 env.update({"NVFL_WEB_ROOT": os.path.dirname(os.path.abspath(__file__))})
119 process_status = subprocess.run(args=cmd, env=env)
120 return process_status
121
122
123 def stop():
124 try:
125 client = docker.from_env()
126 except docker.errors.DockerException:
127 print("Unable to communicate to docker daemon/socket. Please make sure your docker is up and running.")
128 exit(0)
129 try:
130 container_obj = client.containers.get("nvflare-dashboard")
131 except docker.errors.NotFound:
132 print("No nvflare-dashboard container found")
133 exit(0)
134 container_obj.kill(signal=signal.SIGINT)
135 print("nvflare-dashboard exited")
136
137
138 def cloud(args):
139 lighter_folder = os.path.dirname(utils.__file__)
140 template = utils.load_yaml(os.path.join(lighter_folder, "impl", "master_template.yml"))
141 tplt = tplt_utils.Template(template)
142 cwd = os.getcwd()
143 csp = args.cloud
144 dest = os.path.join(cwd, f"{csp}_start_dsb.sh")
145 dsb_start = template[f"{csp}_start_dsb_sh"]
146 version = nvflare.__version__
147 replacement_dict = {"NVFLARE": f"nvflare=={version}", "START_OPT": f"-i {args.image}" if args.image else ""}
148 utils._write(
149 dest,
150 utils.sh_replace(tplt.get_cloud_script_header() + dsb_start, replacement_dict),
151 "t",
152 exe=True,
153 )
154 print(f"Dashboard launch script for cloud is written at {dest}. Now running the script.")
155 _ = subprocess.run(dest)
156 os.remove(dest)
157
158
159 def has_no_arguments() -> bool:
160 last_item = sys.argv[-1]
161 return (
162 last_item.endswith("dashboard.cli") or last_item.endswith("dashboard/cli.py") or last_item.endswith("dashboard")
163 )
164
165
166 def main():
167 parser = argparse.ArgumentParser()
168 define_dashboard_parser(parser)
169 args = parser.parse_args()
170 handle_dashboard(args)
171
172
173 def define_dashboard_parser(parser):
174 parser.add_argument(
175 "--cloud",
176 type=str,
177 default="",
178 help="launch dashboard on cloud service provider (ex: --cloud azure or --cloud aws)",
179 )
180 parser.add_argument("--start", action="store_true", help="start dashboard")
181 parser.add_argument("--stop", action="store_true", help="stop dashboard")
182 parser.add_argument("-p", "--port", type=str, default="443", help="port to listen")
183 parser.add_argument(
184 "-f", "--folder", type=str, help="folder containing necessary info (default: current working directory)"
185 )
186 parser.add_argument(
187 "--passphrase", help="Passphrase to encrypt/decrypt root CA private key. !!! Do not share it with others. !!!"
188 )
189 parser.add_argument("-e", "--env", action="append", help="additonal environment variables: var1=value1")
190 parser.add_argument("--cred", help="set credential directly in the form of USER_EMAIL:PASSWORD")
191 parser.add_argument("-i", "--image", help="set the container image name")
192 parser.add_argument("--local", action="store_true", help="start dashboard locally without docker image")
193
194
195 def handle_dashboard(args):
196 support_csp_string = ", ".join(supported_csp)
197 if args.stop:
198 stop()
199 elif args.start or args.local:
200 start(args)
201 elif args.cloud:
202 if args.cloud in supported_csp:
203 cloud(args)
204 else:
205 print(
206 f"Currently --cloud support the following options: {support_csp_string}. However, {args.cloud} is requested."
207 )
208 else:
209 print("Please use -h option to see usage")
210
211
212 if __name__ == "__main__":
213 main()
214
[end of nvflare/dashboard/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nvflare/dashboard/cli.py b/nvflare/dashboard/cli.py
--- a/nvflare/dashboard/cli.py
+++ b/nvflare/dashboard/cli.py
@@ -138,6 +138,8 @@
def cloud(args):
lighter_folder = os.path.dirname(utils.__file__)
template = utils.load_yaml(os.path.join(lighter_folder, "impl", "master_template.yml"))
+ template.update(utils.load_yaml(os.path.join(lighter_folder, "impl", "aws_template.yml")))
+ template.update(utils.load_yaml(os.path.join(lighter_folder, "impl", "azure_template.yml")))
tplt = tplt_utils.Template(template)
cwd = os.getcwd()
csp = args.cloud
| {"golden_diff": "diff --git a/nvflare/dashboard/cli.py b/nvflare/dashboard/cli.py\n--- a/nvflare/dashboard/cli.py\n+++ b/nvflare/dashboard/cli.py\n@@ -138,6 +138,8 @@\n def cloud(args):\n lighter_folder = os.path.dirname(utils.__file__)\n template = utils.load_yaml(os.path.join(lighter_folder, \"impl\", \"master_template.yml\"))\n+ template.update(utils.load_yaml(os.path.join(lighter_folder, \"impl\", \"aws_template.yml\")))\n+ template.update(utils.load_yaml(os.path.join(lighter_folder, \"impl\", \"azure_template.yml\")))\n tplt = tplt_utils.Template(template)\n cwd = os.getcwd()\n csp = args.cloud\n", "issue": "[BUG] aws default vpc should not be required. \n**Describe the bug**\r\n\r\nnvflare commands bomb out, if no default VPC is defined in my AWS account. Many AWS accounts (per AWS recommendation) have no default VPC assigned, however they may have exactly one VPC assigned. In that case (check `aws ec2 describe-vpcs`) you do not have to ask the user and just pick that one. If there is none you can tell the user to create a default vpc if they have the permissions to do so: `aws ec2 create-default-vpc` or have their admin create one \r\n\r\n**To Reproduce**\r\n\r\n```\r\n dp@grammy:~/NVFlare$ nvflare dashboard --cloud aws\r\nDashboard launch script for cloud is written at /home/dp/NVFlare/aws_start_dsb.sh. Now running the script.\r\nThis script requires aws (AWS CLI), sshpass, dig and jq. Now checking if they are installed.\r\nChecking if aws exists. => found\r\nChecking if sshpass exists. => found\r\nChecking if dig exists. => found\r\nChecking if jq exists. => found\r\nOne initial user will be created when starting dashboard.\r\nPlease enter the email address for this user.\r\[email protected]\r\nGenerating key pair for VM\r\n\r\nAn error occurred (VPCIdNotSpecified) when calling the CreateSecurityGroup operation: No default VPC for this user\r\nSecurity group id:\r\n\r\nusage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]\r\nTo see help text, you can run:\r\n\r\n aws help\r\n aws <command> help\r\n aws <command> <subcommand> help\r\n\r\naws: error: argument --group-id: expected one argument\r\n\r\n\r\nusage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]\r\nTo see help text, you can run:\r\n\r\n aws help\r\n aws <command> help\r\n aws <command> <subcommand> help\r\n\r\naws: error: argument --group-id: expected one argument\r\n\r\n```\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: [e.g. wsl windows 11]\r\n - Python Version [e.g. 3.9]\r\n - NVFlare Version [e.g. 2.4]\r\n\r\n**Other comments:**\r\n\r\nAsk ChatGPT/Claude: \r\n\r\nDoes aws recommend to not create a default vpc for enterprise aws accounts? \r\n\r\nYes, AWS generally recommends not creating a default VPC for enterprise AWS accounts. Instead, they recommend creating custom VPCs tailored to your specific network requirements.\r\nHere are a few reasons why:\r\n\r\nSecurity: Default VPCs have predetermined settings, which might not align with your organization's security policies. Creating a custom VPC allows you to configure security groups, network ACLs, and other settings as per your needs.\r\nIP Address Range: Default VPCs come with a predetermined IP address range. This might lead to IP address conflicts if you have multiple AWS accounts or if you need to connect your AWS resources with your on-premises network. 
With a custom VPC, you can choose your IP address range.\r\nVPC Peering: If you need to set up VPC peering between multiple VPCs, using default VPCs can make the process more complex due to potential overlapping IP address ranges.\r\nCompliance: Depending on your industry, you might have to adhere to certain compliance requirements. Default VPCs might not be set up to meet these requirements out of the box.\r\nCustomization: Custom VPCs give you complete control over your network setup, including subnets, route tables, gateways, and more. This level of customization is essential for many enterprise setups.\r\n\r\nFor these reasons, AWS recommends creating custom VPCs for enterprise accounts, as it provides more flexibility, security, and control over your network infrastructure. Cop\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport os\nimport signal\nimport subprocess\nimport sys\n\nimport docker\nimport nvflare\nfrom nvflare.apis.utils.format_check import name_check\nfrom nvflare.lighter import tplt_utils, utils\n\nsupported_csp = (\"azure\", \"aws\")\n\n\ndef start(args):\n cwd = os.getcwd()\n if not args.folder:\n folder = cwd\n else:\n folder = os.path.join(cwd, args.folder)\n environment = dict()\n env_vars = args.env\n if env_vars:\n for e in env_vars:\n splitted = e.split(\"=\")\n environment[splitted[0]] = splitted[1]\n passphrase = args.passphrase\n if passphrase:\n environment[\"NVFL_DASHBOARD_PP\"] = passphrase\n if args.cred:\n environment.update({\"NVFL_CREDENTIAL\": args.cred})\n elif not os.path.exists(os.path.join(folder, \".db_init_done\")):\n need_email = True\n while need_email:\n answer = input(\n \"Please provide project admin email address. This person will be the super user of the dashboard and this project.\\n\"\n )\n error, reason = name_check(answer, \"email\")\n if error:\n print(f\"Expecting an email address, but got one in an invalid format. Reason: {reason}\")\n else:\n need_email = False\n print(\"generating random password\")\n pwd = utils.generate_password(8)\n print(f\"Project admin credential is {answer} and the password is {pwd}\")\n environment.update({\"NVFL_CREDENTIAL\": f\"{answer}:{pwd}\"})\n if args.local:\n return start_local(environment)\n try:\n client = docker.from_env()\n except docker.errors.DockerException:\n print(\"Unable to communicate to docker daemon/socket. Please make sure your docker is up and running.\")\n exit(0)\n version = nvflare.__version__\n dashboard_image = f\"nvflare/nvflare:{version}\"\n if args.image:\n if dashboard_image != args.image:\n print(\n f\"Current dashboard container image is nvflare/nvflare:{version}, but requesting to use {args.image}. 
Use it at your own risk.\"\n )\n dashboard_image = args.image\n try:\n print(f\"Pulling {dashboard_image}, may take some time to finish.\")\n _ = client.images.pull(dashboard_image)\n except docker.errors.APIError:\n print(f\"unable to pull {dashboard_image}\")\n exit(1)\n print(f\"Launching {dashboard_image}\")\n print(f\"Dashboard will listen to port {args.port}\")\n print(f\"{folder} on host mounted to /var/tmp/nvflare/dashboard in container\")\n if environment:\n print(f\"environment vars set to {environment}\")\n else:\n print(\"No additional environment variables set to the launched container.\")\n try:\n container_obj = client.containers.run(\n dashboard_image,\n entrypoint=[\"/usr/local/bin/python3\", \"nvflare/dashboard/wsgi.py\"],\n detach=True,\n auto_remove=True,\n name=\"nvflare-dashboard\",\n ports={8443: args.port},\n volumes={folder: {\"bind\": \"/var/tmp/nvflare/dashboard\", \"model\": \"rw\"}},\n environment=environment,\n )\n except docker.errors.APIError as e:\n print(f\"Either {dashboard_image} image does not exist or another nvflare-dashboard instance is still running.\")\n print(\"Please either provide an existing container image or stop the running container instance.\")\n print(e)\n exit(1)\n if container_obj:\n print(\"Dashboard container started\")\n print(\"Container name nvflare-dashboard\")\n print(f\"id is {container_obj.id}\")\n else:\n print(\"Container failed to start\")\n\n\ndef start_local(env):\n print(\"Local dashboard without docker is for development and test only\")\n file_dir_path = os.path.dirname(__file__)\n wsgi_location = os.path.join(file_dir_path, \"wsgi.py\")\n cmd = [sys.executable, wsgi_location]\n env.update({\"NVFL_WEB_ROOT\": os.path.dirname(os.path.abspath(__file__))})\n process_status = subprocess.run(args=cmd, env=env)\n return process_status\n\n\ndef stop():\n try:\n client = docker.from_env()\n except docker.errors.DockerException:\n print(\"Unable to communicate to docker daemon/socket. Please make sure your docker is up and running.\")\n exit(0)\n try:\n container_obj = client.containers.get(\"nvflare-dashboard\")\n except docker.errors.NotFound:\n print(\"No nvflare-dashboard container found\")\n exit(0)\n container_obj.kill(signal=signal.SIGINT)\n print(\"nvflare-dashboard exited\")\n\n\ndef cloud(args):\n lighter_folder = os.path.dirname(utils.__file__)\n template = utils.load_yaml(os.path.join(lighter_folder, \"impl\", \"master_template.yml\"))\n tplt = tplt_utils.Template(template)\n cwd = os.getcwd()\n csp = args.cloud\n dest = os.path.join(cwd, f\"{csp}_start_dsb.sh\")\n dsb_start = template[f\"{csp}_start_dsb_sh\"]\n version = nvflare.__version__\n replacement_dict = {\"NVFLARE\": f\"nvflare=={version}\", \"START_OPT\": f\"-i {args.image}\" if args.image else \"\"}\n utils._write(\n dest,\n utils.sh_replace(tplt.get_cloud_script_header() + dsb_start, replacement_dict),\n \"t\",\n exe=True,\n )\n print(f\"Dashboard launch script for cloud is written at {dest}. 
Now running the script.\")\n _ = subprocess.run(dest)\n os.remove(dest)\n\n\ndef has_no_arguments() -> bool:\n last_item = sys.argv[-1]\n return (\n last_item.endswith(\"dashboard.cli\") or last_item.endswith(\"dashboard/cli.py\") or last_item.endswith(\"dashboard\")\n )\n\n\ndef main():\n parser = argparse.ArgumentParser()\n define_dashboard_parser(parser)\n args = parser.parse_args()\n handle_dashboard(args)\n\n\ndef define_dashboard_parser(parser):\n parser.add_argument(\n \"--cloud\",\n type=str,\n default=\"\",\n help=\"launch dashboard on cloud service provider (ex: --cloud azure or --cloud aws)\",\n )\n parser.add_argument(\"--start\", action=\"store_true\", help=\"start dashboard\")\n parser.add_argument(\"--stop\", action=\"store_true\", help=\"stop dashboard\")\n parser.add_argument(\"-p\", \"--port\", type=str, default=\"443\", help=\"port to listen\")\n parser.add_argument(\n \"-f\", \"--folder\", type=str, help=\"folder containing necessary info (default: current working directory)\"\n )\n parser.add_argument(\n \"--passphrase\", help=\"Passphrase to encrypt/decrypt root CA private key. !!! Do not share it with others. !!!\"\n )\n parser.add_argument(\"-e\", \"--env\", action=\"append\", help=\"additonal environment variables: var1=value1\")\n parser.add_argument(\"--cred\", help=\"set credential directly in the form of USER_EMAIL:PASSWORD\")\n parser.add_argument(\"-i\", \"--image\", help=\"set the container image name\")\n parser.add_argument(\"--local\", action=\"store_true\", help=\"start dashboard locally without docker image\")\n\n\ndef handle_dashboard(args):\n support_csp_string = \", \".join(supported_csp)\n if args.stop:\n stop()\n elif args.start or args.local:\n start(args)\n elif args.cloud:\n if args.cloud in supported_csp:\n cloud(args)\n else:\n print(\n f\"Currently --cloud support the following options: {support_csp_string}. However, {args.cloud} is requested.\"\n )\n else:\n print(\"Please use -h option to see usage\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "nvflare/dashboard/cli.py"}]} | 3,683 | 153 |
gh_patches_debug_25379 | rasdani/github-patches | git_diff | freedomofpress__securedrop-335 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Timestamps
The issue of having timestamps on submitted files, and the disadvantages of retaining that metadata which can be correlated with other known variables, has been brought up.
One of the solutions which has been posited is scrambling/randomizing or nullifying the timestamps, but the problem with that is journalists would lose track of the order in which submissions arrived. With document flagging implemented in the future (#124) that would be less of an issue.
I believe @Hainish proposed an idea that would employ cryptographic hashes to retain the ordering of documents, while ditching the timestamps, but I don't know much about the specifics. Would love to hear more. So far I've done a preliminary search for possible filesystem/kernel or mount option-based solutions and didn't come up with anything.
</issue>
<code>
[start of securedrop/source.py]
1 # -*- coding: utf-8 -*-
2 import os
3 from datetime import datetime
4 import uuid
5 from functools import wraps
6 import zipfile
7 from cStringIO import StringIO
8
9 import logging
10 # This module's logger is explicitly labeled so the correct logger is used,
11 # even when this is run from the command line (e.g. during development)
12 log = logging.getLogger('source')
13
14 from flask import (Flask, request, render_template, session, redirect, url_for,
15 flash, abort, g, send_file)
16 from flask_wtf.csrf import CsrfProtect
17
18 from sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound
19
20 import config
21 import version
22 import crypto_util
23 import store
24 import background
25 from db import db_session, Source, Submission
26
27 app = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)
28 app.config.from_object(config.FlaskConfig)
29 CsrfProtect(app)
30
31 app.jinja_env.globals['version'] = version.__version__
32 if getattr(config, 'CUSTOM_HEADER_IMAGE', None):
33 app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE
34 app.jinja_env.globals['use_custom_header_image'] = True
35 else:
36 app.jinja_env.globals['header_image'] = 'securedrop.png'
37 app.jinja_env.globals['use_custom_header_image'] = False
38
39
40 @app.teardown_appcontext
41 def shutdown_session(exception=None):
42 """Automatically remove database sessions at the end of the request, or
43 when the application shuts down"""
44 db_session.remove()
45
46
47 def logged_in():
48 if 'logged_in' in session:
49 return True
50
51
52 def login_required(f):
53 @wraps(f)
54 def decorated_function(*args, **kwargs):
55 if not logged_in():
56 return redirect(url_for('lookup'))
57 return f(*args, **kwargs)
58 return decorated_function
59
60
61 def ignore_static(f):
62 """Only executes the wrapped function if we're not loading a static resource."""
63 @wraps(f)
64 def decorated_function(*args, **kwargs):
65 if request.path.startswith('/static'):
66 return # don't execute the decorated function
67 return f(*args, **kwargs)
68 return decorated_function
69
70
71 @app.before_request
72 @ignore_static
73 def setup_g():
74 """Store commonly used values in Flask's special g object"""
75 # ignore_static here because `crypto_util.hash_codename` is scrypt (very
76 # time consuming), and we don't need to waste time running if we're just
77 # serving a static resource that won't need to access these common values.
78 if logged_in():
79 g.codename = session['codename']
80 g.sid = crypto_util.hash_codename(g.codename)
81 try:
82 g.source = Source.query.filter(Source.filesystem_id == g.sid).one()
83 except MultipleResultsFound as e:
84 app.logger.error("Found multiple Sources when one was expected: %s" % (e,))
85 abort(500)
86 except NoResultFound as e:
87 app.logger.error("Found no Sources when one was expected: %s" % (e,))
88 abort(404)
89 g.loc = store.path(g.sid)
90
91
92 @app.before_request
93 @ignore_static
94 def check_tor2web():
95 # ignore_static here so we only flash a single message warning about Tor2Web,
96 # corresponding to the intial page load.
97 if 'X-tor2web' in request.headers:
98 flash('<strong>WARNING:</strong> You appear to be using Tor2Web. '
99 'This <strong>does not</strong> provide anonymity. '
100 '<a href="/tor2web-warning">Why is this dangerous?</a>',
101 "header-warning")
102
103
104 @app.after_request
105 def no_cache(response):
106 """Minimize potential traces of site access by telling the browser not to
107 cache anything"""
108 no_cache_headers = {
109 'Cache-Control': 'no-cache, no-store, must-revalidate',
110 'Pragma': 'no-cache',
111 'Expires': '-1',
112 }
113 for header, header_value in no_cache_headers.iteritems():
114 response.headers.add(header, header_value)
115 return response
116
117
118 @app.route('/')
119 def index():
120 return render_template('index.html')
121
122
123 @app.route('/generate', methods=('GET', 'POST'))
124 def generate():
125 number_words = 8
126 if request.method == 'POST':
127 number_words = int(request.form['number-words'])
128 if number_words not in range(7, 11):
129 abort(403)
130 session['codename'] = crypto_util.genrandomid(number_words)
131 # TODO: make sure this codename isn't a repeat
132 return render_template('generate.html', codename=session['codename'])
133
134
135 @app.route('/create', methods=['POST'])
136 def create():
137 sid = crypto_util.hash_codename(session['codename'])
138
139 source = Source(sid, crypto_util.display_id())
140 db_session.add(source)
141 db_session.commit()
142
143 if os.path.exists(store.path(sid)):
144 # if this happens, we're not using very secure crypto
145 log.warning("Got a duplicate ID '%s'" % sid)
146 else:
147 os.mkdir(store.path(sid))
148
149 session['logged_in'] = True
150 return redirect(url_for('lookup'))
151
152
153 @app.route('/lookup', methods=('GET',))
154 @login_required
155 def lookup():
156 replies = []
157 for fn in os.listdir(g.loc):
158 if fn.startswith('reply-'):
159 try:
160 msg = crypto_util.decrypt(g.sid, g.codename,
161 file(store.path(g.sid, fn)).read()).decode("utf-8")
162 except UnicodeDecodeError:
163 app.logger.error("Could not decode reply %s" % fn)
164 else:
165 date = str(datetime.fromtimestamp(
166 os.stat(store.path(g.sid, fn)).st_mtime))
167 replies.append(dict(id=fn, date=date, msg=msg))
168
169 def async_genkey(sid, codename):
170 with app.app_context():
171 background.execute(lambda: crypto_util.genkeypair(sid, codename))
172
173 # Generate a keypair to encrypt replies from the journalist
174 # Only do this if the journalist has flagged the source as one
175 # that they would like to reply to. (Issue #140.)
176 if not crypto_util.getkey(g.sid) and g.source.flagged:
177 async_genkey(g.sid, g.codename)
178
179 return render_template('lookup.html', codename=g.codename, msgs=replies,
180 flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))
181
182
183 @app.route('/submit', methods=('POST',))
184 @login_required
185 def submit():
186 msg = request.form['msg']
187 fh = request.files['fh']
188 strip_metadata = True if 'notclean' in request.form else False
189
190 fnames = []
191
192 if msg:
193 fnames.append(store.save_message_submission(g.sid, msg))
194 flash("Thanks! We received your message.", "notification")
195 if fh:
196 fnames.append(store.save_file_submission(g.sid, fh.filename,
197 fh.stream, fh.content_type, strip_metadata))
198 flash("Thanks! We received your document '%s'."
199 % fh.filename or '[unnamed]', "notification")
200
201 for fname in fnames:
202 submission = Submission(g.source, fname)
203 db_session.add(submission)
204
205 g.source.last_updated = datetime.now()
206 db_session.commit()
207
208 return redirect(url_for('lookup'))
209
210
211 @app.route('/delete', methods=('POST',))
212 @login_required
213 def delete():
214 msgid = request.form['msgid']
215 assert '/' not in msgid
216 potential_files = os.listdir(g.loc)
217 if msgid not in potential_files:
218 abort(404) # TODO are the checks necessary?
219 store.secure_unlink(store.path(g.sid, msgid))
220 flash("Reply deleted.", "notification")
221
222 return redirect(url_for('lookup'))
223
224
225 def valid_codename(codename):
226 return os.path.exists(store.path(crypto_util.hash_codename(codename)))
227
228 @app.route('/login', methods=('GET', 'POST'))
229 def login():
230 if request.method == 'POST':
231 codename = request.form['codename']
232 if valid_codename(codename):
233 session.update(codename=codename, logged_in=True)
234 return redirect(url_for('lookup'))
235 else:
236 flash("Sorry, that is not a recognized codename.", "error")
237 return render_template('login.html')
238
239
240 @app.route('/howto-disable-js')
241 def howto_disable_js():
242 return render_template("howto-disable-js.html")
243
244
245 @app.route('/tor2web-warning')
246 def tor2web_warning():
247 return render_template("tor2web-warning.html")
248
249
250 @app.route('/journalist-key')
251 def download_journalist_pubkey():
252 journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)
253 return send_file(StringIO(journalist_pubkey),
254 mimetype="application/pgp-keys",
255 attachment_filename=config.JOURNALIST_KEY + ".asc",
256 as_attachment=True)
257
258
259 @app.route('/why-journalist-key')
260 def why_download_journalist_pubkey():
261 return render_template("why-journalist-key.html")
262
263
264 _REDIRECT_URL_WHITELIST = ["http://tor2web.org/",
265 "https://www.torproject.org/download.html.en",
266 "https://tails.boum.org/",
267 "http://www.wired.com/threatlevel/2013/09/freedom-hosting-fbi/",
268 "http://www.theguardian.com/world/interactive/2013/oct/04/egotistical-giraffe-nsa-tor-document",
269 "https://addons.mozilla.org/en-US/firefox/addon/noscript/",
270 "http://noscript.net"]
271
272
273 @app.route('/redirect/<path:redirect_url>')
274 def redirect_hack(redirect_url):
275 # A hack to avoid referer leakage when a user clicks on an external link.
276 # TODO: Most likely will want to share this between source.py and
277 # journalist.py in the future.
278 if redirect_url not in _REDIRECT_URL_WHITELIST:
279 return 'Redirect not allowed'
280 else:
281 return render_template("redirect.html", redirect_url=redirect_url)
282
283
284 @app.errorhandler(404)
285 def page_not_found(error):
286 return render_template('notfound.html'), 404
287
288 @app.errorhandler(500)
289 def internal_error(error):
290 return render_template('error.html'), 500
291
292 if __name__ == "__main__":
293 # TODO make sure debug is not on in production
294 app.run(debug=True, host='0.0.0.0', port=8080)
295
[end of securedrop/source.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/securedrop/source.py b/securedrop/source.py
--- a/securedrop/source.py
+++ b/securedrop/source.py
@@ -5,6 +5,7 @@
from functools import wraps
import zipfile
from cStringIO import StringIO
+import subprocess
import logging
# This module's logger is explicitly labeled so the correct logger is used,
@@ -180,6 +181,22 @@
flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))
+def normalize_timestamps(sid):
+ """
+ Update the timestamps on all of the source's submissions to match that of
+ the latest submission. This minimizes metadata that could be useful to
+ investigators. See #301.
+ """
+ sub_paths = [ store.path(sid, submission.filename)
+ for submission in g.source.submissions ]
+ if len(sub_paths) > 1:
+ args = ["touch"]
+ args.extend(sub_paths[:-1])
+ rc = subprocess.call(args)
+ if rc != 0:
+ app.logger.warning("Couldn't normalize submission timestamps (touch exited with %d)" % rc)
+
+
@app.route('/submit', methods=('POST',))
@login_required
def submit():
@@ -204,6 +221,7 @@
g.source.last_updated = datetime.now()
db_session.commit()
+ normalize_timestamps(g.sid)
return redirect(url_for('lookup'))
| {"golden_diff": "diff --git a/securedrop/source.py b/securedrop/source.py\n--- a/securedrop/source.py\n+++ b/securedrop/source.py\n@@ -5,6 +5,7 @@\n from functools import wraps\n import zipfile\n from cStringIO import StringIO\n+import subprocess\n \n import logging\n # This module's logger is explicitly labeled so the correct logger is used,\n@@ -180,6 +181,22 @@\n flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))\n \n \n+def normalize_timestamps(sid):\n+ \"\"\"\n+ Update the timestamps on all of the source's submissions to match that of\n+ the latest submission. This minimizes metadata that could be useful to\n+ investigators. See #301.\n+ \"\"\"\n+ sub_paths = [ store.path(sid, submission.filename)\n+ for submission in g.source.submissions ]\n+ if len(sub_paths) > 1:\n+ args = [\"touch\"]\n+ args.extend(sub_paths[:-1])\n+ rc = subprocess.call(args)\n+ if rc != 0:\n+ app.logger.warning(\"Couldn't normalize submission timestamps (touch exited with %d)\" % rc)\n+\n+\n @app.route('/submit', methods=('POST',))\n @login_required\n def submit():\n@@ -204,6 +221,7 @@\n \n g.source.last_updated = datetime.now()\n db_session.commit()\n+ normalize_timestamps(g.sid)\n \n return redirect(url_for('lookup'))\n", "issue": "Timestamps\nThe issue of having timestamps on submitted files, and the disadvantages of retaining that metadata which can be correlated with other known variables, has been brought up.\n\nOne of the solutions which has been posited is scrambling/randomizing or nullifying the timestamps, but the problem with that is journalists would lose track of the order in which submissions arrived. With document flagging implemented in the future (#124) that would be less of an issue.\n\nI believe @Hainish proposed an idea that would employ cryptographic hashes to retain the ordering of documents, while ditching the timestamps, but I don't know much about the specifics. Would love to hear more. So far I've done a preliminary search for possible filesystem/kernel or mount option-based solutions and didn't come up with anything.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport os\nfrom datetime import datetime\nimport uuid\nfrom functools import wraps\nimport zipfile\nfrom cStringIO import StringIO\n\nimport logging\n# This module's logger is explicitly labeled so the correct logger is used,\n# even when this is run from the command line (e.g. 
during development)\nlog = logging.getLogger('source')\n\nfrom flask import (Flask, request, render_template, session, redirect, url_for,\n flash, abort, g, send_file)\nfrom flask_wtf.csrf import CsrfProtect\n\nfrom sqlalchemy.orm.exc import MultipleResultsFound, NoResultFound\n\nimport config\nimport version\nimport crypto_util\nimport store\nimport background\nfrom db import db_session, Source, Submission\n\napp = Flask(__name__, template_folder=config.SOURCE_TEMPLATES_DIR)\napp.config.from_object(config.FlaskConfig)\nCsrfProtect(app)\n\napp.jinja_env.globals['version'] = version.__version__\nif getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = config.CUSTOM_HEADER_IMAGE\n app.jinja_env.globals['use_custom_header_image'] = True\nelse:\n app.jinja_env.globals['header_image'] = 'securedrop.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\n\[email protected]_appcontext\ndef shutdown_session(exception=None):\n \"\"\"Automatically remove database sessions at the end of the request, or\n when the application shuts down\"\"\"\n db_session.remove()\n\n\ndef logged_in():\n if 'logged_in' in session:\n return True\n\n\ndef login_required(f):\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if not logged_in():\n return redirect(url_for('lookup'))\n return f(*args, **kwargs)\n return decorated_function\n\n\ndef ignore_static(f):\n \"\"\"Only executes the wrapped function if we're not loading a static resource.\"\"\"\n @wraps(f)\n def decorated_function(*args, **kwargs):\n if request.path.startswith('/static'):\n return # don't execute the decorated function\n return f(*args, **kwargs)\n return decorated_function\n\n\[email protected]_request\n@ignore_static\ndef setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n # ignore_static here because `crypto_util.hash_codename` is scrypt (very\n # time consuming), and we don't need to waste time running if we're just\n # serving a static resource that won't need to access these common values.\n if logged_in():\n g.codename = session['codename']\n g.sid = crypto_util.hash_codename(g.codename)\n try:\n g.source = Source.query.filter(Source.filesystem_id == g.sid).one()\n except MultipleResultsFound as e:\n app.logger.error(\"Found multiple Sources when one was expected: %s\" % (e,))\n abort(500)\n except NoResultFound as e:\n app.logger.error(\"Found no Sources when one was expected: %s\" % (e,))\n abort(404)\n g.loc = store.path(g.sid)\n\n\[email protected]_request\n@ignore_static\ndef check_tor2web():\n # ignore_static here so we only flash a single message warning about Tor2Web,\n # corresponding to the intial page load.\n if 'X-tor2web' in request.headers:\n flash('<strong>WARNING:</strong> You appear to be using Tor2Web. '\n 'This <strong>does not</strong> provide anonymity. 
'\n '<a href=\"/tor2web-warning\">Why is this dangerous?</a>',\n \"header-warning\")\n\n\[email protected]_request\ndef no_cache(response):\n \"\"\"Minimize potential traces of site access by telling the browser not to\n cache anything\"\"\"\n no_cache_headers = {\n 'Cache-Control': 'no-cache, no-store, must-revalidate',\n 'Pragma': 'no-cache',\n 'Expires': '-1',\n }\n for header, header_value in no_cache_headers.iteritems():\n response.headers.add(header, header_value)\n return response\n\n\[email protected]('/')\ndef index():\n return render_template('index.html')\n\n\[email protected]('/generate', methods=('GET', 'POST'))\ndef generate():\n number_words = 8\n if request.method == 'POST':\n number_words = int(request.form['number-words'])\n if number_words not in range(7, 11):\n abort(403)\n session['codename'] = crypto_util.genrandomid(number_words)\n # TODO: make sure this codename isn't a repeat\n return render_template('generate.html', codename=session['codename'])\n\n\[email protected]('/create', methods=['POST'])\ndef create():\n sid = crypto_util.hash_codename(session['codename'])\n\n source = Source(sid, crypto_util.display_id())\n db_session.add(source)\n db_session.commit()\n\n if os.path.exists(store.path(sid)):\n # if this happens, we're not using very secure crypto\n log.warning(\"Got a duplicate ID '%s'\" % sid)\n else:\n os.mkdir(store.path(sid))\n\n session['logged_in'] = True\n return redirect(url_for('lookup'))\n\n\[email protected]('/lookup', methods=('GET',))\n@login_required\ndef lookup():\n replies = []\n for fn in os.listdir(g.loc):\n if fn.startswith('reply-'):\n try:\n msg = crypto_util.decrypt(g.sid, g.codename,\n file(store.path(g.sid, fn)).read()).decode(\"utf-8\")\n except UnicodeDecodeError:\n app.logger.error(\"Could not decode reply %s\" % fn)\n else:\n date = str(datetime.fromtimestamp(\n os.stat(store.path(g.sid, fn)).st_mtime))\n replies.append(dict(id=fn, date=date, msg=msg))\n\n def async_genkey(sid, codename):\n with app.app_context():\n background.execute(lambda: crypto_util.genkeypair(sid, codename))\n\n # Generate a keypair to encrypt replies from the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. (Issue #140.)\n if not crypto_util.getkey(g.sid) and g.source.flagged:\n async_genkey(g.sid, g.codename)\n\n return render_template('lookup.html', codename=g.codename, msgs=replies,\n flagged=g.source.flagged, haskey=crypto_util.getkey(g.sid))\n\n\[email protected]('/submit', methods=('POST',))\n@login_required\ndef submit():\n msg = request.form['msg']\n fh = request.files['fh']\n strip_metadata = True if 'notclean' in request.form else False\n\n fnames = []\n\n if msg:\n fnames.append(store.save_message_submission(g.sid, msg))\n flash(\"Thanks! We received your message.\", \"notification\")\n if fh:\n fnames.append(store.save_file_submission(g.sid, fh.filename,\n fh.stream, fh.content_type, strip_metadata))\n flash(\"Thanks! 
We received your document '%s'.\"\n % fh.filename or '[unnamed]', \"notification\")\n\n for fname in fnames:\n submission = Submission(g.source, fname)\n db_session.add(submission)\n\n g.source.last_updated = datetime.now()\n db_session.commit()\n\n return redirect(url_for('lookup'))\n\n\[email protected]('/delete', methods=('POST',))\n@login_required\ndef delete():\n msgid = request.form['msgid']\n assert '/' not in msgid\n potential_files = os.listdir(g.loc)\n if msgid not in potential_files:\n abort(404) # TODO are the checks necessary?\n store.secure_unlink(store.path(g.sid, msgid))\n flash(\"Reply deleted.\", \"notification\")\n\n return redirect(url_for('lookup'))\n\n\ndef valid_codename(codename):\n return os.path.exists(store.path(crypto_util.hash_codename(codename)))\n\[email protected]('/login', methods=('GET', 'POST'))\ndef login():\n if request.method == 'POST':\n codename = request.form['codename']\n if valid_codename(codename):\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('lookup'))\n else:\n flash(\"Sorry, that is not a recognized codename.\", \"error\")\n return render_template('login.html')\n\n\[email protected]('/howto-disable-js')\ndef howto_disable_js():\n return render_template(\"howto-disable-js.html\")\n\n\[email protected]('/tor2web-warning')\ndef tor2web_warning():\n return render_template(\"tor2web-warning.html\")\n\n\[email protected]('/journalist-key')\ndef download_journalist_pubkey():\n journalist_pubkey = crypto_util.gpg.export_keys(config.JOURNALIST_KEY)\n return send_file(StringIO(journalist_pubkey),\n mimetype=\"application/pgp-keys\",\n attachment_filename=config.JOURNALIST_KEY + \".asc\",\n as_attachment=True)\n\n\[email protected]('/why-journalist-key')\ndef why_download_journalist_pubkey():\n return render_template(\"why-journalist-key.html\")\n\n\n_REDIRECT_URL_WHITELIST = [\"http://tor2web.org/\",\n \"https://www.torproject.org/download.html.en\",\n \"https://tails.boum.org/\",\n \"http://www.wired.com/threatlevel/2013/09/freedom-hosting-fbi/\",\n \"http://www.theguardian.com/world/interactive/2013/oct/04/egotistical-giraffe-nsa-tor-document\",\n \"https://addons.mozilla.org/en-US/firefox/addon/noscript/\",\n \"http://noscript.net\"]\n\n\[email protected]('/redirect/<path:redirect_url>')\ndef redirect_hack(redirect_url):\n # A hack to avoid referer leakage when a user clicks on an external link.\n # TODO: Most likely will want to share this between source.py and\n # journalist.py in the future.\n if redirect_url not in _REDIRECT_URL_WHITELIST:\n return 'Redirect not allowed'\n else:\n return render_template(\"redirect.html\", redirect_url=redirect_url)\n\n\[email protected](404)\ndef page_not_found(error):\n return render_template('notfound.html'), 404\n\[email protected](500)\ndef internal_error(error):\n return render_template('error.html'), 500\n\nif __name__ == \"__main__\":\n # TODO make sure debug is not on in production\n app.run(debug=True, host='0.0.0.0', port=8080)\n", "path": "securedrop/source.py"}]} | 3,781 | 321 |
gh_patches_debug_8286 | rasdani/github-patches | git_diff | liberapay__liberapay.com-1724 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Automatic renewal failure due to currency mismatch
Stripe error message: `The currency specified (gbp) does not match the currency on the source (eur). A payment can only be created for the same currency as the corresponding source. (request ID: req_PSmPQ85CKvW5kz)`.
SEPA Direct Debit failures
A European donor who tried to send money to a Canadian got the following error message from Stripe: `For 'sepa_debit' payments, we currently require your account to have a bank account in one of the following currencies: eur, usd, gbp, sek, nok, dkk, chf`.
</issue>
<code>
[start of liberapay/payin/cron.py]
1 from collections import defaultdict
2 from datetime import date
3
4 from pando import json
5
6 from ..cron import logger
7 from ..i18n.currencies import Money
8 from ..website import website
9 from ..utils import utcnow
10 from .common import prepare_donation, prepare_payin
11 from .stripe import charge
12
13
14 def send_donation_reminder_notifications():
15 """This function reminds donors to renew their donations.
16
17 The notifications are sent two weeks before the due date.
18 """
19 db = website.db
20 counts = defaultdict(int)
21 rows = db.all("""
22 SELECT (SELECT p FROM participants p WHERE p.id = sp.payer) AS payer
23 , json_agg((SELECT a FROM (
24 SELECT sp.id, sp.execution_date, sp.amount, sp.transfers
25 ) a ORDER BY a.execution_date)) AS payins
26 FROM scheduled_payins sp
27 WHERE sp.execution_date <= (current_date + interval '14 days')
28 AND sp.automatic IS NOT true
29 AND sp.payin IS NULL
30 AND sp.ctime < (current_timestamp - interval '6 hours')
31 GROUP BY sp.payer
32 HAVING count(*) FILTER (
33 WHERE sp.notifs_count = 0
34 OR sp.notifs_count = 1 AND sp.last_notif_ts <= (current_date - interval '4 weeks')
35 OR sp.notifs_count = 2 AND sp.last_notif_ts <= (current_date - interval '26 weeks')
36 ) > 0
37 """)
38 for payer, payins in rows:
39 if payer.is_suspended or payer.status != 'active':
40 continue
41 _check_scheduled_payins(db, payer, payins, automatic=False)
42 if not payins:
43 continue
44 donations = []
45 for sp in payins:
46 for tr in sp['transfers']:
47 donations.append({
48 'periodic_amount': tr['tip'].periodic_amount,
49 'tippee_username': tr['tippee_username'],
50 })
51 payer.notify('donate_reminder', donations=donations, email_unverified_address=True)
52 counts['donate_reminder'] += 1
53 db.run("""
54 UPDATE scheduled_payins
55 SET notifs_count = notifs_count + 1
56 , last_notif_ts = now()
57 WHERE payer = %s
58 AND id IN %s
59 """, (payer.id, tuple(sp['id'] for sp in payins)))
60 for k, n in sorted(counts.items()):
61 logger.info("Sent %i %s notifications." % (n, k))
62
63
64 def send_upcoming_debit_notifications():
65 """This daily cron job notifies donors who are about to be debited.
66
67 The notifications are sent at most once a month, 14 days before the first
68 payment of the "month" (31 days, not the calendar month).
69 """
70 db = website.db
71 counts = defaultdict(int)
72 rows = db.all("""
73 SELECT (SELECT p FROM participants p WHERE p.id = sp.payer) AS payer
74 , json_agg((SELECT a FROM (
75 SELECT sp.id, sp.execution_date, sp.amount, sp.transfers
76 ) a ORDER BY a.execution_date)) AS payins
77 FROM scheduled_payins sp
78 WHERE sp.execution_date <= (current_date + interval '45 days')
79 AND sp.automatic
80 AND sp.notifs_count = 0
81 AND sp.payin IS NULL
82 AND sp.ctime < (current_timestamp - interval '6 hours')
83 GROUP BY sp.payer, (sp.amount).currency
84 HAVING min(sp.execution_date) <= (current_date + interval '14 days')
85 """)
86 for payer, payins in rows:
87 if payer.is_suspended or payer.status != 'active':
88 continue
89 _check_scheduled_payins(db, payer, payins, automatic=True)
90 if not payins:
91 continue
92 context = {
93 'payins': payins,
94 'total_amount': sum(sp['amount'] for sp in payins),
95 }
96 for sp in context['payins']:
97 for tr in sp['transfers']:
98 del tr['tip'], tr['beneficiary']
99 if len(payins) > 1:
100 context['ndays'] = (payins[-1]['execution_date'] - utcnow().date()).days
101 while True:
102 route = db.one("""
103 SELECT r
104 FROM exchange_routes r
105 WHERE r.participant = %s
106 AND r.status = 'chargeable'
107 AND r.network::text LIKE 'stripe-%%'
108 ORDER BY r.is_default NULLS LAST
109 , r.network = 'stripe-sdd' DESC
110 , r.ctime DESC
111 LIMIT 1
112 """, (payer.id,))
113 if route is None:
114 break
115 route.sync_status()
116 if route.status == 'chargeable':
117 break
118 if route:
119 event = 'upcoming_debit'
120 context['instrument_brand'] = route.get_brand()
121 context['instrument_partial_number'] = route.get_partial_number()
122 if route.network == 'stripe-sdd':
123 source = route.stripe_source
124 context.update({
125 'creditor_identifier': website.app_conf.sepa_creditor_identifier,
126 'mandate_creation_date': route.ctime.date(),
127 'mandate_id': source.sepa_debit.mandate_reference,
128 'mandate_url': source.sepa_debit.mandate_url,
129 })
130 else:
131 event = 'missing_route'
132 payer.notify(event, email_unverified_address=True, **context)
133 counts[event] += 1
134 db.run("""
135 UPDATE scheduled_payins
136 SET notifs_count = notifs_count + 1
137 , last_notif_ts = now()
138 WHERE payer = %s
139 AND id IN %s
140 """, (payer.id, tuple(sp['id'] for sp in payins)))
141 for k, n in sorted(counts.items()):
142 logger.info("Sent %i %s notifications." % (n, k))
143
144
145 def execute_scheduled_payins():
146 """This daily cron job initiates scheduled payments.
147 """
148 db = website.db
149 counts = defaultdict(int)
150 retry = False
151 rows = db.all("""
152 SELECT sp.id, sp.execution_date, sp.transfers
153 , p AS payer, r.*::exchange_routes AS route
154 FROM scheduled_payins sp
155 JOIN participants p ON p.id = sp.payer
156 JOIN LATERAL (
157 SELECT r.*
158 FROM exchange_routes r
159 WHERE r.participant = sp.payer
160 AND r.status = 'chargeable'
161 AND r.network::text LIKE 'stripe-%%'
162 ORDER BY r.is_default NULLS LAST
163 , r.network = 'stripe-sdd' DESC
164 , r.ctime DESC
165 LIMIT 1
166 ) r ON true
167 WHERE ( r.network = 'stripe-sdd' AND sp.execution_date <= (current_date + interval '5 days') OR
168 r.network = 'stripe-card' AND sp.execution_date <= current_date )
169 AND sp.last_notif_ts < (current_date - interval '2 days')
170 AND sp.automatic
171 AND sp.payin IS NULL
172 AND p.is_suspended IS NOT TRUE
173 """)
174 for sp_id, execution_date, transfers, payer, route in rows:
175 route.__dict__['participant'] = payer
176 route.sync_status()
177 if route.status != 'chargeable':
178 retry = True
179 continue
180 transfers, canceled, impossible = _filter_transfers(payer, transfers, automatic=True)
181 if impossible:
182 for tr in impossible:
183 tr['execution_date'] = execution_date
184 del tr['beneficiary'], tr['tip']
185 payer.notify('renewal_aborted', transfers=impossible)
186 counts['renewal_aborted'] += 1
187 if transfers:
188 payin_amount = sum(tr['amount'] for tr in transfers)
189 payin = prepare_payin(db, payer, payin_amount, route, off_session=True)
190 for tr in transfers:
191 prepare_donation(
192 db, payin, tr['tip'], tr['beneficiary'], 'stripe',
193 payer, route.country, tr['amount']
194 )
195 payin = charge(db, payin, payer)
196 if payin.status in ('failed', 'succeeded'):
197 payer.notify('payin_' + payin.status, payin=payin._asdict(), provider='Stripe')
198 counts['payin_' + payin.status] += 1
199 db.run("""
200 UPDATE scheduled_payins
201 SET payin = %s
202 , mtime = current_timestamp
203 WHERE id = %s
204 """, (payin.id, sp_id))
205 else:
206 db.run("DELETE FROM scheduled_payins WHERE id = %s", (sp_id,))
207 for k, n in sorted(counts.items()):
208 logger.info("Sent %i %s notifications." % (n, k))
209 if retry:
210 execute_scheduled_payins()
211
212
213 def _check_scheduled_payins(db, payer, payins, automatic):
214 """Check scheduled payins before they're acted upon.
215
216 A scheduled payin can be out of sync with the state of the donor's tips or
217 the status of the recipient's account if the `Participant.schedule_renewals()`
218 method wasn't successfully called.
219 """
220 for sp in list(payins):
221 if isinstance(sp['amount'], dict):
222 sp['amount'] = Money(**sp['amount'])
223 sp['execution_date'] = date(*map(int, sp['execution_date'].split('-')))
224 canceled, impossible = _filter_transfers(payer, sp['transfers'], automatic)[1:]
225 if canceled:
226 if len(canceled) == len(sp['transfers']):
227 payins.remove(sp)
228 db.run("DELETE FROM scheduled_payins WHERE id = %(id)s", sp)
229 else:
230 old_tippee_ids = set(tr['tippee_id'] for tr in canceled)
231 sp['transfers'] = [
232 tr for tr in sp['transfers'] if tr['tippee_id'] not in old_tippee_ids
233 ]
234 sp['amount'] = sum(tr['amount'] for tr in sp['transfers'])
235 db.run("""
236 UPDATE scheduled_payins
237 SET amount = %(amount)s
238 , transfers = %(transfers)s
239 , mtime = current_timestamp
240 WHERE id = %(id)s
241 """, dict(sp, transfers=json.dumps([
242 {k: v for k, v in tr.items() if k not in ('tip', 'beneficiary')}
243 for tr in sp['transfers']
244 ])))
245 for tr in impossible:
246 tr['impossible'] = True
247
248
249 def _filter_transfers(payer, transfers, automatic):
250 """Splits scheduled transfers into 3 lists: "okay", "canceled" and "impossible".
251 """
252 canceled_transfers = []
253 impossible_transfers = []
254 okay_transfers = []
255 for tr in transfers:
256 if isinstance(tr['amount'], dict):
257 tr['amount'] = Money(**tr['amount'])
258 beneficiary = tr['beneficiary'] = website.db.Participant.from_id(tr['tippee_id'])
259 tip = tr['tip'] = payer.get_tip_to(beneficiary)
260 if tip.renewal_mode < 1 or (tip.renewal_mode == 2) != automatic:
261 canceled_transfers.append(tr)
262 elif beneficiary.status != 'active' or beneficiary.is_suspended or \
263 beneficiary.payment_providers & 1 == 0:
264 impossible_transfers.append(tr)
265 else:
266 okay_transfers.append(tr)
267 return okay_transfers, canceled_transfers, impossible_transfers
268
[end of liberapay/payin/cron.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/liberapay/payin/cron.py b/liberapay/payin/cron.py
--- a/liberapay/payin/cron.py
+++ b/liberapay/payin/cron.py
@@ -159,6 +159,7 @@
WHERE r.participant = sp.payer
AND r.status = 'chargeable'
AND r.network::text LIKE 'stripe-%%'
+ AND ( sp.amount::currency = 'EUR' OR r.network <> 'stripe-sdd' )
ORDER BY r.is_default NULLS LAST
, r.network = 'stripe-sdd' DESC
, r.ctime DESC
| {"golden_diff": "diff --git a/liberapay/payin/cron.py b/liberapay/payin/cron.py\n--- a/liberapay/payin/cron.py\n+++ b/liberapay/payin/cron.py\n@@ -159,6 +159,7 @@\n WHERE r.participant = sp.payer\n AND r.status = 'chargeable'\n AND r.network::text LIKE 'stripe-%%'\n+ AND ( sp.amount::currency = 'EUR' OR r.network <> 'stripe-sdd' )\n ORDER BY r.is_default NULLS LAST\n , r.network = 'stripe-sdd' DESC\n , r.ctime DESC\n", "issue": "Automatic renewal failure due to currency mismatch\nStripe error message: `The currency specified (gbp) does not match the currency on the source (eur). A payment can only be created for the same currency as the corresponding source. (request ID: req_PSmPQ85CKvW5kz)`.\nSEPA Direct Debit failures\nA European donor who tried to send money to a Canadian got the following error message from Stripe: `For 'sepa_debit' payments, we currently require your account to have a bank account in one of the following currencies: eur, usd, gbp, sek, nok, dkk, chf`.\nSEPA Direct Debit failures\nA European donor who tried to send money to a Canadian got the following error message from Stripe: `For 'sepa_debit' payments, we currently require your account to have a bank account in one of the following currencies: eur, usd, gbp, sek, nok, dkk, chf`.\n", "before_files": [{"content": "from collections import defaultdict\nfrom datetime import date\n\nfrom pando import json\n\nfrom ..cron import logger\nfrom ..i18n.currencies import Money\nfrom ..website import website\nfrom ..utils import utcnow\nfrom .common import prepare_donation, prepare_payin\nfrom .stripe import charge\n\n\ndef send_donation_reminder_notifications():\n \"\"\"This function reminds donors to renew their donations.\n\n The notifications are sent two weeks before the due date.\n \"\"\"\n db = website.db\n counts = defaultdict(int)\n rows = db.all(\"\"\"\n SELECT (SELECT p FROM participants p WHERE p.id = sp.payer) AS payer\n , json_agg((SELECT a FROM (\n SELECT sp.id, sp.execution_date, sp.amount, sp.transfers\n ) a ORDER BY a.execution_date)) AS payins\n FROM scheduled_payins sp\n WHERE sp.execution_date <= (current_date + interval '14 days')\n AND sp.automatic IS NOT true\n AND sp.payin IS NULL\n AND sp.ctime < (current_timestamp - interval '6 hours')\n GROUP BY sp.payer\n HAVING count(*) FILTER (\n WHERE sp.notifs_count = 0\n OR sp.notifs_count = 1 AND sp.last_notif_ts <= (current_date - interval '4 weeks')\n OR sp.notifs_count = 2 AND sp.last_notif_ts <= (current_date - interval '26 weeks')\n ) > 0\n \"\"\")\n for payer, payins in rows:\n if payer.is_suspended or payer.status != 'active':\n continue\n _check_scheduled_payins(db, payer, payins, automatic=False)\n if not payins:\n continue\n donations = []\n for sp in payins:\n for tr in sp['transfers']:\n donations.append({\n 'periodic_amount': tr['tip'].periodic_amount,\n 'tippee_username': tr['tippee_username'],\n })\n payer.notify('donate_reminder', donations=donations, email_unverified_address=True)\n counts['donate_reminder'] += 1\n db.run(\"\"\"\n UPDATE scheduled_payins\n SET notifs_count = notifs_count + 1\n , last_notif_ts = now()\n WHERE payer = %s\n AND id IN %s\n \"\"\", (payer.id, tuple(sp['id'] for sp in payins)))\n for k, n in sorted(counts.items()):\n logger.info(\"Sent %i %s notifications.\" % (n, k))\n\n\ndef send_upcoming_debit_notifications():\n \"\"\"This daily cron job notifies donors who are about to be debited.\n\n The notifications are sent at most once a month, 14 days before the first\n payment of the \"month\" (31 days, not the calendar 
month).\n \"\"\"\n db = website.db\n counts = defaultdict(int)\n rows = db.all(\"\"\"\n SELECT (SELECT p FROM participants p WHERE p.id = sp.payer) AS payer\n , json_agg((SELECT a FROM (\n SELECT sp.id, sp.execution_date, sp.amount, sp.transfers\n ) a ORDER BY a.execution_date)) AS payins\n FROM scheduled_payins sp\n WHERE sp.execution_date <= (current_date + interval '45 days')\n AND sp.automatic\n AND sp.notifs_count = 0\n AND sp.payin IS NULL\n AND sp.ctime < (current_timestamp - interval '6 hours')\n GROUP BY sp.payer, (sp.amount).currency\n HAVING min(sp.execution_date) <= (current_date + interval '14 days')\n \"\"\")\n for payer, payins in rows:\n if payer.is_suspended or payer.status != 'active':\n continue\n _check_scheduled_payins(db, payer, payins, automatic=True)\n if not payins:\n continue\n context = {\n 'payins': payins,\n 'total_amount': sum(sp['amount'] for sp in payins),\n }\n for sp in context['payins']:\n for tr in sp['transfers']:\n del tr['tip'], tr['beneficiary']\n if len(payins) > 1:\n context['ndays'] = (payins[-1]['execution_date'] - utcnow().date()).days\n while True:\n route = db.one(\"\"\"\n SELECT r\n FROM exchange_routes r\n WHERE r.participant = %s\n AND r.status = 'chargeable'\n AND r.network::text LIKE 'stripe-%%'\n ORDER BY r.is_default NULLS LAST\n , r.network = 'stripe-sdd' DESC\n , r.ctime DESC\n LIMIT 1\n \"\"\", (payer.id,))\n if route is None:\n break\n route.sync_status()\n if route.status == 'chargeable':\n break\n if route:\n event = 'upcoming_debit'\n context['instrument_brand'] = route.get_brand()\n context['instrument_partial_number'] = route.get_partial_number()\n if route.network == 'stripe-sdd':\n source = route.stripe_source\n context.update({\n 'creditor_identifier': website.app_conf.sepa_creditor_identifier,\n 'mandate_creation_date': route.ctime.date(),\n 'mandate_id': source.sepa_debit.mandate_reference,\n 'mandate_url': source.sepa_debit.mandate_url,\n })\n else:\n event = 'missing_route'\n payer.notify(event, email_unverified_address=True, **context)\n counts[event] += 1\n db.run(\"\"\"\n UPDATE scheduled_payins\n SET notifs_count = notifs_count + 1\n , last_notif_ts = now()\n WHERE payer = %s\n AND id IN %s\n \"\"\", (payer.id, tuple(sp['id'] for sp in payins)))\n for k, n in sorted(counts.items()):\n logger.info(\"Sent %i %s notifications.\" % (n, k))\n\n\ndef execute_scheduled_payins():\n \"\"\"This daily cron job initiates scheduled payments.\n \"\"\"\n db = website.db\n counts = defaultdict(int)\n retry = False\n rows = db.all(\"\"\"\n SELECT sp.id, sp.execution_date, sp.transfers\n , p AS payer, r.*::exchange_routes AS route\n FROM scheduled_payins sp\n JOIN participants p ON p.id = sp.payer\n JOIN LATERAL (\n SELECT r.*\n FROM exchange_routes r\n WHERE r.participant = sp.payer\n AND r.status = 'chargeable'\n AND r.network::text LIKE 'stripe-%%'\n ORDER BY r.is_default NULLS LAST\n , r.network = 'stripe-sdd' DESC\n , r.ctime DESC\n LIMIT 1\n ) r ON true\n WHERE ( r.network = 'stripe-sdd' AND sp.execution_date <= (current_date + interval '5 days') OR\n r.network = 'stripe-card' AND sp.execution_date <= current_date )\n AND sp.last_notif_ts < (current_date - interval '2 days')\n AND sp.automatic\n AND sp.payin IS NULL\n AND p.is_suspended IS NOT TRUE\n \"\"\")\n for sp_id, execution_date, transfers, payer, route in rows:\n route.__dict__['participant'] = payer\n route.sync_status()\n if route.status != 'chargeable':\n retry = True\n continue\n transfers, canceled, impossible = _filter_transfers(payer, transfers, automatic=True)\n 
if impossible:\n for tr in impossible:\n tr['execution_date'] = execution_date\n del tr['beneficiary'], tr['tip']\n payer.notify('renewal_aborted', transfers=impossible)\n counts['renewal_aborted'] += 1\n if transfers:\n payin_amount = sum(tr['amount'] for tr in transfers)\n payin = prepare_payin(db, payer, payin_amount, route, off_session=True)\n for tr in transfers:\n prepare_donation(\n db, payin, tr['tip'], tr['beneficiary'], 'stripe',\n payer, route.country, tr['amount']\n )\n payin = charge(db, payin, payer)\n if payin.status in ('failed', 'succeeded'):\n payer.notify('payin_' + payin.status, payin=payin._asdict(), provider='Stripe')\n counts['payin_' + payin.status] += 1\n db.run(\"\"\"\n UPDATE scheduled_payins\n SET payin = %s\n , mtime = current_timestamp\n WHERE id = %s\n \"\"\", (payin.id, sp_id))\n else:\n db.run(\"DELETE FROM scheduled_payins WHERE id = %s\", (sp_id,))\n for k, n in sorted(counts.items()):\n logger.info(\"Sent %i %s notifications.\" % (n, k))\n if retry:\n execute_scheduled_payins()\n\n\ndef _check_scheduled_payins(db, payer, payins, automatic):\n \"\"\"Check scheduled payins before they're acted upon.\n\n A scheduled payin can be out of sync with the state of the donor's tips or\n the status of the recipient's account if the `Participant.schedule_renewals()`\n method wasn't successfully called.\n \"\"\"\n for sp in list(payins):\n if isinstance(sp['amount'], dict):\n sp['amount'] = Money(**sp['amount'])\n sp['execution_date'] = date(*map(int, sp['execution_date'].split('-')))\n canceled, impossible = _filter_transfers(payer, sp['transfers'], automatic)[1:]\n if canceled:\n if len(canceled) == len(sp['transfers']):\n payins.remove(sp)\n db.run(\"DELETE FROM scheduled_payins WHERE id = %(id)s\", sp)\n else:\n old_tippee_ids = set(tr['tippee_id'] for tr in canceled)\n sp['transfers'] = [\n tr for tr in sp['transfers'] if tr['tippee_id'] not in old_tippee_ids\n ]\n sp['amount'] = sum(tr['amount'] for tr in sp['transfers'])\n db.run(\"\"\"\n UPDATE scheduled_payins\n SET amount = %(amount)s\n , transfers = %(transfers)s\n , mtime = current_timestamp\n WHERE id = %(id)s\n \"\"\", dict(sp, transfers=json.dumps([\n {k: v for k, v in tr.items() if k not in ('tip', 'beneficiary')}\n for tr in sp['transfers']\n ])))\n for tr in impossible:\n tr['impossible'] = True\n\n\ndef _filter_transfers(payer, transfers, automatic):\n \"\"\"Splits scheduled transfers into 3 lists: \"okay\", \"canceled\" and \"impossible\".\n \"\"\"\n canceled_transfers = []\n impossible_transfers = []\n okay_transfers = []\n for tr in transfers:\n if isinstance(tr['amount'], dict):\n tr['amount'] = Money(**tr['amount'])\n beneficiary = tr['beneficiary'] = website.db.Participant.from_id(tr['tippee_id'])\n tip = tr['tip'] = payer.get_tip_to(beneficiary)\n if tip.renewal_mode < 1 or (tip.renewal_mode == 2) != automatic:\n canceled_transfers.append(tr)\n elif beneficiary.status != 'active' or beneficiary.is_suspended or \\\n beneficiary.payment_providers & 1 == 0:\n impossible_transfers.append(tr)\n else:\n okay_transfers.append(tr)\n return okay_transfers, canceled_transfers, impossible_transfers\n", "path": "liberapay/payin/cron.py"}]} | 3,953 | 144 |
gh_patches_debug_2672 | rasdani/github-patches | git_diff | e-valuation__EvaP-1666 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make Typescript code Prettier
We should add automated formatting for our typescript files. I think https://prettier.io/ is pretty good, but the choice is open for discussion. The formatting should be done in `manage.py format` and be checked in CI.
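A minimal sketch of what the `manage.py format` hook could look like, assuming Prettier is installed through npm and the TypeScript sources live under `evap/static/ts/src` (both of these are assumptions, not settled decisions):

```python
import subprocess  # nosec

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Runs the code formatters"

    def handle(self, *args, **options):
        subprocess.run(["black", "evap"], check=False)  # nosec
        subprocess.run(["isort", "."], check=False)  # nosec
        # Assumed invocation: let npx resolve the locally installed prettier
        # and rewrite the TypeScript sources in place.
        subprocess.run(["npx", "prettier", "--write", "evap/static/ts/src"], check=False)  # nosec
```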
</issue>
<code>
[start of evap/evaluation/management/commands/format.py]
1 import subprocess # nosec
2
3 from django.core.management.base import BaseCommand
4
5
6 class Command(BaseCommand):
7 args = ""
8 help = "Runs the code formatter"
9 requires_migrations_checks = False
10
11 def handle(self, *args, **options):
12 subprocess.run(["black", "evap"], check=False) # nosec
13 subprocess.run(["isort", "."], check=False) # nosec
14
[end of evap/evaluation/management/commands/format.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/evaluation/management/commands/format.py b/evap/evaluation/management/commands/format.py
--- a/evap/evaluation/management/commands/format.py
+++ b/evap/evaluation/management/commands/format.py
@@ -11,3 +11,4 @@
def handle(self, *args, **options):
subprocess.run(["black", "evap"], check=False) # nosec
subprocess.run(["isort", "."], check=False) # nosec
+ subprocess.run(["npx", "prettier", "--write", "evap/static/ts/src"], check=False) # nosec
| {"golden_diff": "diff --git a/evap/evaluation/management/commands/format.py b/evap/evaluation/management/commands/format.py\n--- a/evap/evaluation/management/commands/format.py\n+++ b/evap/evaluation/management/commands/format.py\n@@ -11,3 +11,4 @@\n def handle(self, *args, **options):\n subprocess.run([\"black\", \"evap\"], check=False) # nosec\n subprocess.run([\"isort\", \".\"], check=False) # nosec\n+ subprocess.run([\"npx\", \"prettier\", \"--write\", \"evap/static/ts/src\"], check=False) # nosec\n", "issue": "Make Typescript code Prettier\nWe should add automated formatting for our typescript files. I think https://prettier.io/ is pretty good, but the choice is open for discussion. The formatting should be done in `manage.py format` and be checked in CI.\n", "before_files": [{"content": "import subprocess # nosec\n\nfrom django.core.management.base import BaseCommand\n\n\nclass Command(BaseCommand):\n args = \"\"\n help = \"Runs the code formatter\"\n requires_migrations_checks = False\n\n def handle(self, *args, **options):\n subprocess.run([\"black\", \"evap\"], check=False) # nosec\n subprocess.run([\"isort\", \".\"], check=False) # nosec\n", "path": "evap/evaluation/management/commands/format.py"}]} | 713 | 145 |
gh_patches_debug_29734 | rasdani/github-patches | git_diff | bridgecrewio__checkov-3007 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AZURE_116 fails with Terraform Azure provider >= v.2.97.0
**Describe the issue**
This issue is regarding CKV_AZURE_116.
Since v2.97.0 of the Azure Terraform provider, the Azure policies add-on is enabled by setting
```terraform
azure_policy_enabled = true
```
as a top-level property in a `azurerm_kubernetes_cluster` resource definition, instead of the following
```terraform
addon_profile {
azure_policy {
enabled = true
}
}
```
The check should be updated to reflect the change.
**Examples**
```terraform
resource "azurerm_kubernetes_cluster" "example" {
azure_policy_enabled = true
}
```
should pass the check for CKV_AZURE_116.
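A rough sketch of how the check could accept both schemas, based on the two examples above. The helper name is purely illustrative, and the shape of the parsed `conf` dictionary (each property mapped to a list of values) is an assumption about checkov's internals:

```python
from checkov.common.models.enums import CheckResult


def scan_kubernetes_cluster(conf):
    # Azure provider >= v2.97.0: top-level boolean property.
    if conf.get("azure_policy_enabled", [None])[0]:
        return CheckResult.PASSED
    # Azure provider <= v2.96.0: nested addon_profile -> azure_policy block.
    addon_profile = conf.get("addon_profile", [None])[0]
    if isinstance(addon_profile, dict):
        azure_policy = addon_profile.get("azure_policy", [None])[0]
        if isinstance(azure_policy, dict) and azure_policy.get("enabled", [None])[0]:
            return CheckResult.PASSED
    return CheckResult.FAILED
```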
**Version:**
2.0.1160
**Additional context**
None
I will submit a PR to fix this.
</issue>
<code>
[start of checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py]
1 from checkov.common.models.enums import CheckCategories
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class AKSUsesAzurePoliciesAddon(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure that AKS uses Azure Policies Add-on"
8 id = "CKV_AZURE_116"
9 supported_resources = ['azurerm_kubernetes_cluster']
10 categories = [CheckCategories.NETWORKING]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return "addon_profile/[0]/azure_policy/[0]/enabled"
15
16
17 check = AKSUsesAzurePoliciesAddon()
18
[end of checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py b/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py
--- a/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py
+++ b/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py
@@ -1,8 +1,10 @@
-from checkov.common.models.enums import CheckCategories
-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
+from typing import Dict, List, Any
+from checkov.common.models.enums import CheckCategories, CheckResult
+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
-class AKSUsesAzurePoliciesAddon(BaseResourceValueCheck):
+
+class AKSUsesAzurePoliciesAddon(BaseResourceCheck):
def __init__(self):
name = "Ensure that AKS uses Azure Policies Add-on"
id = "CKV_AZURE_116"
@@ -10,8 +12,20 @@
categories = [CheckCategories.NETWORKING]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def get_inspected_key(self):
- return "addon_profile/[0]/azure_policy/[0]/enabled"
+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:
+ # since Azure provider v2.97.0
+ azure_policy_enabled = conf.get("azure_policy_enabled", [None])[0]
+ if azure_policy_enabled:
+ self.evaluated_keys = ["azure_policy_enabled"]
+ return CheckResult.PASSED
+ # up to and including Azure provider v2.96.0
+ self.evaluated_keys = ["addon_profile/[0]/azure_policy/[0]/enabled"]
+ addon_profile = conf.get("addon_profile", [None])[0]
+ if addon_profile and isinstance(addon_profile, dict):
+ azure_policy = addon_profile.get("azure_policy", [None])[0]
+ if azure_policy and isinstance(azure_policy, dict) and azure_policy.get("enabled", [None])[0]:
+ return CheckResult.PASSED
+ return CheckResult.FAILED
check = AKSUsesAzurePoliciesAddon()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py b/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py\n--- a/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py\n+++ b/checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py\n@@ -1,8 +1,10 @@\n-from checkov.common.models.enums import CheckCategories\n-from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n+from typing import Dict, List, Any\n \n+from checkov.common.models.enums import CheckCategories, CheckResult\n+from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n \n-class AKSUsesAzurePoliciesAddon(BaseResourceValueCheck):\n+\n+class AKSUsesAzurePoliciesAddon(BaseResourceCheck):\n def __init__(self):\n name = \"Ensure that AKS uses Azure Policies Add-on\"\n id = \"CKV_AZURE_116\"\n@@ -10,8 +12,20 @@\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def get_inspected_key(self):\n- return \"addon_profile/[0]/azure_policy/[0]/enabled\"\n+ def scan_resource_conf(self, conf: Dict[str, List[Any]]) -> CheckResult:\n+ # since Azure provider v2.97.0\n+ azure_policy_enabled = conf.get(\"azure_policy_enabled\", [None])[0]\n+ if azure_policy_enabled:\n+ self.evaluated_keys = [\"azure_policy_enabled\"]\n+ return CheckResult.PASSED\n+ # up to and including Azure provider v2.96.0\n+ self.evaluated_keys = [\"addon_profile/[0]/azure_policy/[0]/enabled\"]\n+ addon_profile = conf.get(\"addon_profile\", [None])[0]\n+ if addon_profile and isinstance(addon_profile, dict):\n+ azure_policy = addon_profile.get(\"azure_policy\", [None])[0]\n+ if azure_policy and isinstance(azure_policy, dict) and azure_policy.get(\"enabled\", [None])[0]:\n+ return CheckResult.PASSED\n+ return CheckResult.FAILED\n \n \n check = AKSUsesAzurePoliciesAddon()\n", "issue": "CKV_AZURE_116 fails with Terraform Azure provider >= v.2.97.0\n**Describe the issue**\r\nThis issue is regarding CKV_AZURE_116.\r\nSince v2.97.0 of the Azure Terraform provider, the Azure policies add-on is enabled by setting\r\n\r\n```terraform\r\nazure_policy_enabled = true\r\n```\r\n\r\nas a top-level property in a `azurerm_kubernetes_cluster` resource definition, instead of the following\r\n\r\n```terraform\r\naddon_profile {\r\n azure_policy {\r\n enabled = true\r\n }\r\n}\r\n```\r\nThe check should be updated to reflect the change.\r\n\r\n**Examples**\r\n```terraform\r\nresource \"azurerm_kubernetes_cluster\" \"example\" {\r\n azure_policy_enabled = true\r\n}\r\n```\r\nshould pass the check for CKV_AZURE_116.\r\n\r\n**Version:**\r\n2.0.1160\r\n\r\n**Additional context**\r\nNone\r\n\r\nI will submit a PR to fix this.\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AKSUsesAzurePoliciesAddon(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure that AKS uses Azure Policies Add-on\"\n id = \"CKV_AZURE_116\"\n supported_resources = ['azurerm_kubernetes_cluster']\n categories = [CheckCategories.NETWORKING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"addon_profile/[0]/azure_policy/[0]/enabled\"\n\n\ncheck = AKSUsesAzurePoliciesAddon()\n", "path": 
"checkov/terraform/checks/resource/azure/AKSUsesAzurePoliciesAddon.py"}]} | 948 | 509 |
gh_patches_debug_14134 | rasdani/github-patches | git_diff | pypa__cibuildwheel-88 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'wheel' missing from setup.py install_requires list?
While trying to use a vanilla cibuildwheel configuration with AppVeyor, I kept running into this error,
```
cibuildwheel: Failed to get name of the package. Command was ['c:\\python27\\python.exe', '.\\setup.py', '--name']
```
It was only after running that command (`python .\\setup.py --name`) ahead of the `cibuildwheel --output-dir wheelhouse` that I finally got a clue of what was going on,
```
setup requires that the Python package 'wheel' be installed. Try the command 'pip install wheel'.
```
To resolve this problem I've added `'wheel'` to [setup.py `install_requires` list](https://github.com/joerick/cibuildwheel/blob/master/setup.py#L12) in our fork.
I think that two changes are necessary:
1. On [line 111 of `__main__.py`](https://github.com/joerick/cibuildwheel/blob/master/cibuildwheel/__main__.py#L111), print the **`output`** of [CalledProcessError](https://docs.python.org/2/library/subprocess.html#subprocess.CalledProcessError), and exit with the value of **`returncode`** (see the sketch after this list).
2. Add `'wheel'` to [setup.py `install_requires` list](https://github.com/joerick/cibuildwheel/blob/master/setup.py#L12)
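A minimal sketch of the first change, keeping the surrounding `try`/`except` in `__main__.py` as it is today (only the `else:` branch changes):

```python
    except subprocess.CalledProcessError as err:
        if not os.path.exists(project_setup_py):
            print('cibuildwheel: Could not find setup.py at root of project', file=sys.stderr)
            exit(2)
        else:
            # Show what setup.py actually printed (e.g. "setup requires that the
            # Python package 'wheel' be installed") and propagate its exit code
            # instead of a generic 2.
            print(err.output)
            print('cibuildwheel: Failed to get name of the package. Command was %s' % err.cmd,
                  file=sys.stderr)
            exit(err.returncode)
```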
Do you want a PR?
</issue>
<code>
[start of cibuildwheel/__main__.py]
1 from __future__ import print_function
2 import argparse, os, subprocess, sys, textwrap
3
4 import cibuildwheel
5 import cibuildwheel.linux, cibuildwheel.windows, cibuildwheel.macos
6 from cibuildwheel.environment import parse_environment, EnvironmentParseError
7 from cibuildwheel.util import BuildSkipper
8
9 def get_option_from_environment(option_name, platform=None, default=None):
10 '''
11 Returns an option from the environment, optionally scoped by the platform.
12
13 Example:
14 get_option_from_environment('CIBW_COLOR', platform='macos')
15
16 This will return the value of CIBW_COLOR_MACOS if it exists, otherwise the value of
17 CIBW_COLOR.
18 '''
19 if platform:
20 option = os.environ.get('%s_%s' % (option_name, platform.upper()))
21 if option is not None:
22 return option
23
24 return os.environ.get(option_name, default)
25
26
27 def main():
28 parser = argparse.ArgumentParser(
29 description='Build wheels for all the platforms.',
30 epilog=('Most options are supplied via environment variables. '
31 'See https://github.com/joerick/cibuildwheel#options for info.'))
32
33 parser.add_argument('--platform',
34 choices=['auto', 'linux', 'macos', 'windows'],
35 default=os.environ.get('CIBW_PLATFORM', 'auto'),
36 help=('Platform to build for. For "linux" you need docker running, on Mac '
37 'or Linux. For "macos", you need a Mac machine, and note that this '
38 'script is going to automatically install MacPython on your system, '
39 'so don\'t run on your development machine. For "windows", you need to '
40 'run in Windows, and it will build and test for all versions of '
41 'Python at C:\\PythonXX[-x64]. Default: auto.'))
42 parser.add_argument('--output-dir',
43 default=os.environ.get('CIBW_OUTPUT_DIR', 'wheelhouse'),
44 help='Destination folder for the wheels.')
45 parser.add_argument('project_dir',
46 default='.',
47 nargs='?',
48 help=('Path to the project that you want wheels for. Default: the current '
49 'directory.'))
50
51 args = parser.parse_args()
52
53 if args.platform != 'auto':
54 platform = args.platform
55 else:
56 if os.environ.get('TRAVIS_OS_NAME') == 'linux':
57 platform = 'linux'
58 elif os.environ.get('TRAVIS_OS_NAME') == 'osx':
59 platform = 'macos'
60 elif 'APPVEYOR' in os.environ:
61 platform = 'windows'
62 elif 'BITRISE_BUILD_NUMBER' in os.environ:
63 platform = 'macos'
64 else:
65 print('cibuildwheel: Unable to detect platform. cibuildwheel should run on your CI server, '
66 'Travis CI and Appveyor are supported. You can run on your development '
67 'machine using the --platform argument. Check --help output for more '
68 'information.',
69 file=sys.stderr)
70 exit(2)
71
72 output_dir = args.output_dir
73 test_command = get_option_from_environment('CIBW_TEST_COMMAND', platform=platform)
74 test_requires = get_option_from_environment('CIBW_TEST_REQUIRES', platform=platform, default='').split()
75 project_dir = args.project_dir
76 before_build = get_option_from_environment('CIBW_BEFORE_BUILD', platform=platform)
77 build_verbosity = get_option_from_environment('CIBW_BUILD_VERBOSITY', platform=platform, default='')
78 skip_config = os.environ.get('CIBW_SKIP', '')
79 environment_config = get_option_from_environment('CIBW_ENVIRONMENT', platform=platform, default='')
80
81 try:
82 build_verbosity = min(3, max(-3, int(build_verbosity)))
83 except ValueError:
84 build_verbosity = 0
85
86 try:
87 environment = parse_environment(environment_config)
88 except (EnvironmentParseError, ValueError) as e:
89 print('cibuildwheel: Malformed environment option "%s"' % environment_config, file=sys.stderr)
90 import traceback
91 traceback.print_exc(None, sys.stderr)
92 exit(2)
93
94 skip = BuildSkipper(skip_config)
95
96 # Add CIBUILDWHEEL environment variable
97 # This needs to be passed on to the docker container in linux.py
98 os.environ['CIBUILDWHEEL'] = '1'
99
100 try:
101 project_setup_py = os.path.join(project_dir, 'setup.py')
102 name_output = subprocess.check_output([sys.executable, project_setup_py, '--name'],
103 universal_newlines=True)
104 # the last line of output is the name
105 package_name = name_output.strip().splitlines()[-1]
106 except subprocess.CalledProcessError as err:
107 if not os.path.exists(project_setup_py):
108 print('cibuildwheel: Could not find setup.py at root of project', file=sys.stderr)
109 exit(2)
110 else:
111 print('cibuildwheel: Failed to get name of the package. Command was %s' % err.cmd,
112 file=sys.stderr)
113 exit(2)
114
115 if package_name == '' or package_name == 'UNKNOWN':
116 print('cibuildwheel: Invalid package name "%s". Check your setup.py' % package_name,
117 file=sys.stderr)
118 exit(2)
119
120 build_options = dict(
121 project_dir=project_dir,
122 package_name=package_name,
123 output_dir=output_dir,
124 test_command=test_command,
125 test_requires=test_requires,
126 before_build=before_build,
127 build_verbosity=build_verbosity,
128 skip=skip,
129 environment=environment,
130 )
131
132 if platform == 'linux':
133 manylinux1_x86_64_image = os.environ.get('CIBW_MANYLINUX1_X86_64_IMAGE', None)
134 manylinux1_i686_image = os.environ.get('CIBW_MANYLINUX1_I686_IMAGE', None)
135
136 build_options.update(
137 manylinux1_images={'x86_64': manylinux1_x86_64_image, 'i686': manylinux1_i686_image},
138 )
139 elif platform == 'macos':
140 pass
141 elif platform == 'windows':
142 pass
143
144 print_preamble(platform, build_options)
145
146 if not os.path.exists(output_dir):
147 os.makedirs(output_dir)
148
149 if platform == 'linux':
150 cibuildwheel.linux.build(**build_options)
151 elif platform == 'windows':
152 cibuildwheel.windows.build(**build_options)
153 elif platform == 'macos':
154 cibuildwheel.macos.build(**build_options)
155 else:
156 raise Exception('Unsupported platform')
157
158 def print_preamble(platform, build_options):
159 print(textwrap.dedent('''
160 _ _ _ _ _ _ _
161 ___|_| |_ _ _|_| |_| |_ _ _| |_ ___ ___| |
162 | _| | . | | | | | . | | | | | -_| -_| |
163 |___|_|___|___|_|_|___|_____|_|_|___|___|_|
164 '''))
165
166 print('cibuildwheel version %s\n' % cibuildwheel.__version__)
167
168
169 print('Build options:')
170 print(' platform: %r' % platform)
171 for option, value in build_options.items():
172 print(' %s: %r' % (option, value))
173
174 warnings = detect_warnings(platform, build_options)
175 if warnings:
176 print('\nWarnings:')
177 for warning in warnings:
178 print(' ' + warning)
179
180 print('\nHere we go!\n')
181
182 def detect_warnings(platform, build_options):
183 warnings = []
184
185 # warn about deprecated {python} and {pip}
186 for option_name in ['test_command', 'before_build']:
187 option_value = build_options.get(option_name)
188
189 if option_value:
190 if '{python}' in option_value or '{pip}' in option_value:
191 warnings.append(option_name + ": '{python}' and '{pip}' are no longer needed, and will be removed in a future release. Simply use 'python' or 'pip' instead.")
192
193 return warnings
194
195
196 if __name__ == '__main__':
197 main()
198
[end of cibuildwheel/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cibuildwheel/__main__.py b/cibuildwheel/__main__.py
--- a/cibuildwheel/__main__.py
+++ b/cibuildwheel/__main__.py
@@ -108,9 +108,10 @@
print('cibuildwheel: Could not find setup.py at root of project', file=sys.stderr)
exit(2)
else:
+ print(err.output)
print('cibuildwheel: Failed to get name of the package. Command was %s' % err.cmd,
file=sys.stderr)
- exit(2)
+ exit(err.returncode)
if package_name == '' or package_name == 'UNKNOWN':
print('cibuildwheel: Invalid package name "%s". Check your setup.py' % package_name,
| {"golden_diff": "diff --git a/cibuildwheel/__main__.py b/cibuildwheel/__main__.py\n--- a/cibuildwheel/__main__.py\n+++ b/cibuildwheel/__main__.py\n@@ -108,9 +108,10 @@\n print('cibuildwheel: Could not find setup.py at root of project', file=sys.stderr)\n exit(2)\n else:\n+ print(err.output)\n print('cibuildwheel: Failed to get name of the package. Command was %s' % err.cmd,\n file=sys.stderr)\n- exit(2)\n+ exit(err.returncode)\n \n if package_name == '' or package_name == 'UNKNOWN':\n print('cibuildwheel: Invalid package name \"%s\". Check your setup.py' % package_name,\n", "issue": "'wheel' missing from setup.py install_requires list?\nWhile trying to use a vanilla cibuildwheel configuration with AppVeyor, I kept running into this error,\r\n```\r\ncibuildwheel: Failed to get name of the package. Command was ['c:\\\\python27\\\\python.exe', '.\\\\setup.py', '--name']\r\n```\r\n\r\nIt was only after running that command (`python .\\\\setup.py --name`) ahead of the `cibuildwheel --output-dir wheelhouse` that I finally got a clue of what was going on,\r\n```\r\nsetup requires that the Python package 'wheel' be installed. Try the command 'pip install wheel'.\r\n```\r\n\r\nTo resolve this problem I've added `'wheel'` to [setup.py `install_requires` list](https://github.com/joerick/cibuildwheel/blob/master/setup.py#L12) in our fork.\r\n\r\nI think that two changes are necessary:\r\n1. On [line 111 of `__main__.py`](https://github.com/joerick/cibuildwheel/blob/master/cibuildwheel/__main__.py#L111), print the **`output`** of [CalledProcessError](https://docs.python.org/2/library/subprocess.html#subprocess.CalledProcessError), and exit with the value of **`returncode`**.\r\n2. Add `'wheel'` to [setup.py `install_requires` list](https://github.com/joerick/cibuildwheel/blob/master/setup.py#L12)\r\n\r\nDo you want a PR?\n", "before_files": [{"content": "from __future__ import print_function\nimport argparse, os, subprocess, sys, textwrap\n\nimport cibuildwheel\nimport cibuildwheel.linux, cibuildwheel.windows, cibuildwheel.macos\nfrom cibuildwheel.environment import parse_environment, EnvironmentParseError\nfrom cibuildwheel.util import BuildSkipper\n\ndef get_option_from_environment(option_name, platform=None, default=None):\n '''\n Returns an option from the environment, optionally scoped by the platform.\n\n Example:\n get_option_from_environment('CIBW_COLOR', platform='macos')\n\n This will return the value of CIBW_COLOR_MACOS if it exists, otherwise the value of\n CIBW_COLOR.\n '''\n if platform:\n option = os.environ.get('%s_%s' % (option_name, platform.upper()))\n if option is not None:\n return option\n\n return os.environ.get(option_name, default)\n\n\ndef main():\n parser = argparse.ArgumentParser(\n description='Build wheels for all the platforms.',\n epilog=('Most options are supplied via environment variables. '\n 'See https://github.com/joerick/cibuildwheel#options for info.'))\n\n parser.add_argument('--platform',\n choices=['auto', 'linux', 'macos', 'windows'],\n default=os.environ.get('CIBW_PLATFORM', 'auto'),\n help=('Platform to build for. For \"linux\" you need docker running, on Mac '\n 'or Linux. For \"macos\", you need a Mac machine, and note that this '\n 'script is going to automatically install MacPython on your system, '\n 'so don\\'t run on your development machine. For \"windows\", you need to '\n 'run in Windows, and it will build and test for all versions of '\n 'Python at C:\\\\PythonXX[-x64]. 
Default: auto.'))\n parser.add_argument('--output-dir',\n default=os.environ.get('CIBW_OUTPUT_DIR', 'wheelhouse'),\n help='Destination folder for the wheels.')\n parser.add_argument('project_dir',\n default='.',\n nargs='?',\n help=('Path to the project that you want wheels for. Default: the current '\n 'directory.'))\n\n args = parser.parse_args()\n\n if args.platform != 'auto':\n platform = args.platform\n else:\n if os.environ.get('TRAVIS_OS_NAME') == 'linux':\n platform = 'linux'\n elif os.environ.get('TRAVIS_OS_NAME') == 'osx':\n platform = 'macos'\n elif 'APPVEYOR' in os.environ:\n platform = 'windows'\n elif 'BITRISE_BUILD_NUMBER' in os.environ:\n platform = 'macos'\n else:\n print('cibuildwheel: Unable to detect platform. cibuildwheel should run on your CI server, '\n 'Travis CI and Appveyor are supported. You can run on your development '\n 'machine using the --platform argument. Check --help output for more '\n 'information.',\n file=sys.stderr)\n exit(2)\n\n output_dir = args.output_dir\n test_command = get_option_from_environment('CIBW_TEST_COMMAND', platform=platform)\n test_requires = get_option_from_environment('CIBW_TEST_REQUIRES', platform=platform, default='').split()\n project_dir = args.project_dir\n before_build = get_option_from_environment('CIBW_BEFORE_BUILD', platform=platform)\n build_verbosity = get_option_from_environment('CIBW_BUILD_VERBOSITY', platform=platform, default='')\n skip_config = os.environ.get('CIBW_SKIP', '')\n environment_config = get_option_from_environment('CIBW_ENVIRONMENT', platform=platform, default='')\n\n try:\n build_verbosity = min(3, max(-3, int(build_verbosity)))\n except ValueError:\n build_verbosity = 0\n\n try:\n environment = parse_environment(environment_config)\n except (EnvironmentParseError, ValueError) as e:\n print('cibuildwheel: Malformed environment option \"%s\"' % environment_config, file=sys.stderr)\n import traceback\n traceback.print_exc(None, sys.stderr)\n exit(2)\n\n skip = BuildSkipper(skip_config)\n\n # Add CIBUILDWHEEL environment variable\n # This needs to be passed on to the docker container in linux.py\n os.environ['CIBUILDWHEEL'] = '1'\n\n try:\n project_setup_py = os.path.join(project_dir, 'setup.py')\n name_output = subprocess.check_output([sys.executable, project_setup_py, '--name'],\n universal_newlines=True)\n # the last line of output is the name\n package_name = name_output.strip().splitlines()[-1]\n except subprocess.CalledProcessError as err:\n if not os.path.exists(project_setup_py):\n print('cibuildwheel: Could not find setup.py at root of project', file=sys.stderr)\n exit(2)\n else:\n print('cibuildwheel: Failed to get name of the package. Command was %s' % err.cmd,\n file=sys.stderr)\n exit(2)\n\n if package_name == '' or package_name == 'UNKNOWN':\n print('cibuildwheel: Invalid package name \"%s\". 
Check your setup.py' % package_name,\n file=sys.stderr)\n exit(2)\n\n build_options = dict(\n project_dir=project_dir,\n package_name=package_name,\n output_dir=output_dir,\n test_command=test_command,\n test_requires=test_requires,\n before_build=before_build,\n build_verbosity=build_verbosity,\n skip=skip,\n environment=environment,\n )\n\n if platform == 'linux':\n manylinux1_x86_64_image = os.environ.get('CIBW_MANYLINUX1_X86_64_IMAGE', None)\n manylinux1_i686_image = os.environ.get('CIBW_MANYLINUX1_I686_IMAGE', None)\n\n build_options.update(\n manylinux1_images={'x86_64': manylinux1_x86_64_image, 'i686': manylinux1_i686_image},\n )\n elif platform == 'macos':\n pass\n elif platform == 'windows':\n pass\n\n print_preamble(platform, build_options)\n\n if not os.path.exists(output_dir):\n os.makedirs(output_dir)\n\n if platform == 'linux':\n cibuildwheel.linux.build(**build_options)\n elif platform == 'windows':\n cibuildwheel.windows.build(**build_options)\n elif platform == 'macos':\n cibuildwheel.macos.build(**build_options)\n else:\n raise Exception('Unsupported platform')\n\ndef print_preamble(platform, build_options):\n print(textwrap.dedent('''\n _ _ _ _ _ _ _\n ___|_| |_ _ _|_| |_| |_ _ _| |_ ___ ___| |\n | _| | . | | | | | . | | | | | -_| -_| |\n |___|_|___|___|_|_|___|_____|_|_|___|___|_|\n '''))\n\n print('cibuildwheel version %s\\n' % cibuildwheel.__version__)\n\n\n print('Build options:')\n print(' platform: %r' % platform)\n for option, value in build_options.items():\n print(' %s: %r' % (option, value))\n\n warnings = detect_warnings(platform, build_options)\n if warnings:\n print('\\nWarnings:')\n for warning in warnings:\n print(' ' + warning)\n\n print('\\nHere we go!\\n')\n\ndef detect_warnings(platform, build_options):\n warnings = []\n\n # warn about deprecated {python} and {pip}\n for option_name in ['test_command', 'before_build']:\n option_value = build_options.get(option_name)\n\n if option_value:\n if '{python}' in option_value or '{pip}' in option_value:\n warnings.append(option_name + \": '{python}' and '{pip}' are no longer needed, and will be removed in a future release. Simply use 'python' or 'pip' instead.\")\n\n return warnings\n\n\nif __name__ == '__main__':\n main()\n", "path": "cibuildwheel/__main__.py"}]} | 3,141 | 175 |
gh_patches_debug_8536 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1648 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Found an embedded parameter "${AWS::Partition}" outside of an "Fn::Sub" at Resources
*cfn-lint version: (`cfn-lint --version`)*
cfn-lint 0.29.5
*Description of issue.*
I have a CloudFormation template embedded into an AWS Systems Manager Automation document.
Please provide as much information as possible:
* Template linting issues:
`E1029 Found an embedded parameter "${AWS::Partition}" outside of an "Fn::Sub" at Resources/DNSForwarderSetup/Properties/Content/mainSteps/0/inputs/TemplateBody
ad-3.template:433:13`
This is actually okay; I am using !Sub correctly in an embedded CloudFormation template.
* Please provide a CloudFormation sample that generated the issue.
https://github.com/aws-quickstart/quickstart-microsoft-activedirectory/blob/b071c3def8d9a30fd055571c9603cad10a753f40/templates/ad-3.template#L433
* If present, please add links to the (official) documentation for clarification.
* Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files
* Feature request:
* Please provide argumentation about the missing feature. Context is key!
This errors if I use !Sub or Fn::Sub: when the usage is actually correct.
Cfn-lint uses the [CloudFormation Resource Specifications](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-resource-specification.html) as the base to do validation. These files are included as part of the application version. Please update to the latest version of `cfn-lint` or update the spec files manually (`cfn-lint -u`)
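One possible direction, sketched below, would be to treat `TemplateBody` (the property name in the error path above) as one of the free-form text properties that E1029 already skips; whether this is the approach the maintainers actually take is an open question:

```python
    # Hypothetical addition: embedded CloudFormation templates (e.g. a
    # TemplateBody inside an SSM Automation document) are free-form text, so
    # ${...} variables inside them should not require an enclosing Fn::Sub.
    excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',
                'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec',
                'RequestMappingTemplate', 'LogFormat', 'TemplateBody']
```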
</issue>
<code>
[start of src/cfnlint/rules/functions/SubNeeded.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 from functools import reduce # pylint: disable=redefined-builtin
6 import re
7 import six
8 from cfnlint.rules import CloudFormationLintRule
9 from cfnlint.rules import RuleMatch
10
11
12 class SubNeeded(CloudFormationLintRule):
13 """Check if a substitution string exists without a substitution function"""
14 id = 'E1029'
15 shortdesc = 'Sub is required if a variable is used in a string'
16 description = 'If a substitution variable exists in a string but isn\'t wrapped with the Fn::Sub function the deployment will fail.'
17 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
18 tags = ['functions', 'sub']
19
20 # Free-form text properties to exclude from this rule
21 excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',
22 'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec',
23 'RequestMappingTemplate', 'LogFormat']
24 api_excludes = ['Uri', 'Body', 'ConnectionId']
25
26
27 # IAM Policy has special variables that don't require !Sub, Check for these
28 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html
29 # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html
30 # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html
31 # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down
32 # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html
33 resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',
34 '${aws:TokenIssueTime}', '${aws:principaltype}',
35 '${aws:SecureTransport}', '${aws:SourceIp}',
36 '${aws:UserAgent}', '${aws:userid}',
37 '${aws:username}', '${ec2:SourceInstanceARN}',
38 '${iot:Connection.Thing.ThingName}',
39 '${iot:Connection.Thing.ThingTypeName}',
40 '${iot:Connection.Thing.IsAttached}',
41 '${iot:ClientId}', '${transfer:HomeBucket}',
42 '${transfer:HomeDirectory}', '${transfer:HomeFolder}',
43 '${transfer:UserName}', '${redshift:DbUser}',
44 '${cognito-identity.amazonaws.com:aud}',
45 '${cognito-identity.amazonaws.com:sub}',
46 '${cognito-identity.amazonaws.com:amr}']
47
48 # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html
49 condition_excludes = [
50 '${redshift:DbUser}',
51 ]
52
53 def __init__(self):
54 """Init"""
55 super(SubNeeded, self).__init__()
56 self.config_definition = {
57 'custom_excludes': {
58 'default': '',
59 'type': 'string'
60 }
61 }
62 self.configure()
63 self.subParameterRegex = re.compile(r'(\$\{[A-Za-z0-9_:\.]+\})')
64
65 def _match_values(self, cfnelem, path):
66 """Recursively search for values matching the searchRegex"""
67 values = []
68 if isinstance(cfnelem, dict):
69 for key in cfnelem:
70 pathprop = path[:]
71 pathprop.append(key)
72 values.extend(self._match_values(cfnelem[key], pathprop))
73 elif isinstance(cfnelem, list):
74 for index, item in enumerate(cfnelem):
75 pathprop = path[:]
76 pathprop.append(index)
77 values.extend(self._match_values(item, pathprop))
78 else:
79 # Leaf node
80 if isinstance(cfnelem, six.string_types): # and re.match(searchRegex, cfnelem):
81 for variable in re.findall(self.subParameterRegex, cfnelem):
82 values.append(path + [variable])
83
84 return values
85
86 def match_values(self, cfn):
87 """
88 Search for values in all parts of the templates that match the searchRegex
89 """
90 results = []
91 results.extend(self._match_values(cfn.template, []))
92 # Globals are removed during a transform. They need to be checked manually
93 results.extend(self._match_values(cfn.template.get('Globals', {}), []))
94 return results
95
96 def _api_exceptions(self, value):
97 """ Key value exceptions """
98 parameter_search = re.compile(r'^\$\{stageVariables\..*\}$')
99 return re.match(parameter_search, value)
100
101 def _variable_custom_excluded(self, value):
102 """ User-defined exceptions for variables, anywhere in the file """
103 custom_excludes = self.config['custom_excludes']
104 if custom_excludes:
105 custom_search = re.compile(custom_excludes)
106 return re.match(custom_search, value)
107 return False
108
109 def match(self, cfn):
110 matches = []
111
112 # Get a list of paths to every leaf node string containing at least one ${parameter}
113 parameter_string_paths = self.match_values(cfn)
114 # We want to search all of the paths to check if each one contains an 'Fn::Sub'
115 for parameter_string_path in parameter_string_paths:
116 if parameter_string_path[0] in ['Parameters']:
117 continue
118 # Exclude the special IAM variables
119 variable = parameter_string_path[-1]
120
121 if 'Resource' in parameter_string_path:
122 if variable in self.resource_excludes:
123 continue
124 if 'NotResource' in parameter_string_path:
125 if variable in self.resource_excludes:
126 continue
127 if 'Condition' in parameter_string_path:
128 if variable in self.condition_excludes:
129 continue
130
131 # Step Function State Machine has a Definition Substitution that allows usage of special variables outside of a !Sub
132 # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stepfunctions-statemachine-definitionsubstitutions.html
133
134 if 'DefinitionString' in parameter_string_path:
135 modified_parameter_string_path = parameter_string_path
136 index = parameter_string_path.index('DefinitionString')
137 modified_parameter_string_path[index] = 'DefinitionSubstitutions'
138 modified_parameter_string_path = modified_parameter_string_path[:index+1]
139 modified_parameter_string_path.append(variable[2:-1])
140 if reduce(lambda c, k: c.get(k, {}), modified_parameter_string_path, cfn.template):
141 continue
142
143 # Exclude variables that match custom exclude filters, if configured
144 # (for third-party tools that pre-process templates before uploading them to AWS)
145 if self._variable_custom_excluded(variable):
146 continue
147
148 # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)
149 if variable.startswith('${!'):
150 continue
151
152 found_sub = False
153 # Does the path contain an 'Fn::Sub'?
154 for step in parameter_string_path:
155 if step in self.api_excludes:
156 if self._api_exceptions(parameter_string_path[-1]):
157 found_sub = True
158 elif step == 'Fn::Sub' or step in self.excludes:
159 found_sub = True
160
161 # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly
162 if not found_sub:
163 # Remove the last item (the variable) to prevent multiple errors on 1 line errors
164 path = parameter_string_path[:-1]
165 message = 'Found an embedded parameter "{}" outside of an "Fn::Sub" at {}'.format(
166 variable, '/'.join(map(str, path)))
167 matches.append(RuleMatch(path, message))
168
169 return matches
170
[end of src/cfnlint/rules/functions/SubNeeded.py]
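To make the reported false positive concrete, here is a minimal sketch (not the rule's actual code) of the decision the `excludes` walk in `SubNeeded.match()` above makes for a single `${...}` variable; treating `TemplateBody` as a free-form text property (a hypothetical addition in this sketch) is what would suppress the finding for the embedded template in the issue.

```python
# Minimal sketch of the path walk performed in SubNeeded.match() above.
EXCLUDES = {"UserData", "ZipFile", "CloudWatchAlarmDefinition", "TemplateBody"}
# "TemplateBody" is the hypothetical addition covering embedded templates in
# SSM Automation documents (mainSteps/.../inputs/TemplateBody).


def should_report(path_to_variable):
    """Return True if the ${...} variable at this path would be flagged as E1029."""
    for step in path_to_variable:
        if step == "Fn::Sub" or step in EXCLUDES:
            return False  # wrapped in Fn::Sub, or inside a free-form text property
    return True


# Path taken from the error message quoted in the issue above.
path = ["Resources", "DNSForwarderSetup", "Properties", "Content",
        "mainSteps", 0, "inputs", "TemplateBody", "${AWS::Partition}"]
print(should_report(path))  # False once TemplateBody is treated as free-form text
```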
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py
--- a/src/cfnlint/rules/functions/SubNeeded.py
+++ b/src/cfnlint/rules/functions/SubNeeded.py
@@ -20,7 +20,7 @@
# Free-form text properties to exclude from this rule
excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',
'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec',
- 'RequestMappingTemplate', 'LogFormat']
+ 'RequestMappingTemplate', 'LogFormat', 'TemplateBody']
api_excludes = ['Uri', 'Body', 'ConnectionId']
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/SubNeeded.py b/src/cfnlint/rules/functions/SubNeeded.py\n--- a/src/cfnlint/rules/functions/SubNeeded.py\n+++ b/src/cfnlint/rules/functions/SubNeeded.py\n@@ -20,7 +20,7 @@\n # Free-form text properties to exclude from this rule\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',\n 'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec',\n- 'RequestMappingTemplate', 'LogFormat']\n+ 'RequestMappingTemplate', 'LogFormat', 'TemplateBody']\n api_excludes = ['Uri', 'Body', 'ConnectionId']\n", "issue": "Found an embedded parameter \"${AWS::Partition}\" outside of an \"Fn::Sub\" at Resources\n*cfn-lint version: (`cfn-lint --version`)*\r\ncfn-lint 0.29.5\r\n\r\n*Description of issue.*\r\nI have a CloudFormation Template embedded into a AWS Systems Manager Automation Document. \r\n\r\nPlease provide as much information as possible:\r\n* Template linting issues:\r\n\r\n`E1029 Found an embedded parameter \"${AWS::Partition}\" outside of an \"Fn::Sub\" at Resources/DNSForwarderSetup/Properties/Content/mainSteps/0/inputs/TemplateBody\r\nad-3.template:433:13`\r\n\r\nThis is actually okay, I am using !Sub correctly in an Embedded CloudFormation.... \r\n\r\n * Please provide a CloudFormation sample that generated the issue.\r\n\r\nhttps://github.com/aws-quickstart/quickstart-microsoft-activedirectory/blob/b071c3def8d9a30fd055571c9603cad10a753f40/templates/ad-3.template#L433\r\n\r\n * If present, please add links to the (official) documentation for clarification.\r\n * Validate if the issue still exists with the latest version of `cfn-lint` and/or the latest Spec files\r\n* Feature request:\r\n * Please provide argumentation about the missing feature. Context is key!\r\nThis errors if I use !Sub or Fn::Sub: when it actually correct.. \r\n\r\n\r\nCfn-lint uses the [CloudFormation Resource Specifications](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-resource-specification.html) as the base to do validation. These files are included as part of the application version. Please update to the latest version of `cfn-lint` or update the spec files manually (`cfn-lint -u`)\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom functools import reduce # pylint: disable=redefined-builtin\nimport re\nimport six\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass SubNeeded(CloudFormationLintRule):\n \"\"\"Check if a substitution string exists without a substitution function\"\"\"\n id = 'E1029'\n shortdesc = 'Sub is required if a variable is used in a string'\n description = 'If a substitution variable exists in a string but isn\\'t wrapped with the Fn::Sub function the deployment will fail.'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n # Free-form text properties to exclude from this rule\n excludes = ['UserData', 'ZipFile', 'Condition', 'AWS::CloudFormation::Init',\n 'CloudWatchAlarmDefinition', 'TopicRulePayload', 'BuildSpec',\n 'RequestMappingTemplate', 'LogFormat']\n api_excludes = ['Uri', 'Body', 'ConnectionId']\n\n\n # IAM Policy has special variables that don't require !Sub, Check for these\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/basic-policy-variables.html\n # https://docs.aws.amazon.com/iot/latest/developerguide/thing-policy-variables.html\n # https://docs.aws.amazon.com/transfer/latest/userguide/users.html#users-policies-scope-down\n # https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html\n resource_excludes = ['${aws:CurrentTime}', '${aws:EpochTime}',\n '${aws:TokenIssueTime}', '${aws:principaltype}',\n '${aws:SecureTransport}', '${aws:SourceIp}',\n '${aws:UserAgent}', '${aws:userid}',\n '${aws:username}', '${ec2:SourceInstanceARN}',\n '${iot:Connection.Thing.ThingName}',\n '${iot:Connection.Thing.ThingTypeName}',\n '${iot:Connection.Thing.IsAttached}',\n '${iot:ClientId}', '${transfer:HomeBucket}',\n '${transfer:HomeDirectory}', '${transfer:HomeFolder}',\n '${transfer:UserName}', '${redshift:DbUser}',\n '${cognito-identity.amazonaws.com:aud}',\n '${cognito-identity.amazonaws.com:sub}',\n '${cognito-identity.amazonaws.com:amr}']\n\n # https://docs.aws.amazon.com/redshift/latest/mgmt/redshift-iam-access-control-identity-based.html\n condition_excludes = [\n '${redshift:DbUser}',\n ]\n\n def __init__(self):\n \"\"\"Init\"\"\"\n super(SubNeeded, self).__init__()\n self.config_definition = {\n 'custom_excludes': {\n 'default': '',\n 'type': 'string'\n }\n }\n self.configure()\n self.subParameterRegex = re.compile(r'(\\$\\{[A-Za-z0-9_:\\.]+\\})')\n\n def _match_values(self, cfnelem, path):\n \"\"\"Recursively search for values matching the searchRegex\"\"\"\n values = []\n if isinstance(cfnelem, dict):\n for key in cfnelem:\n pathprop = path[:]\n pathprop.append(key)\n values.extend(self._match_values(cfnelem[key], pathprop))\n elif isinstance(cfnelem, list):\n for index, item in enumerate(cfnelem):\n pathprop = path[:]\n pathprop.append(index)\n values.extend(self._match_values(item, pathprop))\n else:\n # Leaf node\n if isinstance(cfnelem, six.string_types): # and re.match(searchRegex, cfnelem):\n for variable in re.findall(self.subParameterRegex, cfnelem):\n values.append(path + [variable])\n\n return values\n\n def match_values(self, cfn):\n \"\"\"\n Search for values in all parts of the templates that match the searchRegex\n \"\"\"\n results = []\n results.extend(self._match_values(cfn.template, []))\n # Globals are removed during a transform. 
They need to be checked manually\n results.extend(self._match_values(cfn.template.get('Globals', {}), []))\n return results\n\n def _api_exceptions(self, value):\n \"\"\" Key value exceptions \"\"\"\n parameter_search = re.compile(r'^\\$\\{stageVariables\\..*\\}$')\n return re.match(parameter_search, value)\n\n def _variable_custom_excluded(self, value):\n \"\"\" User-defined exceptions for variables, anywhere in the file \"\"\"\n custom_excludes = self.config['custom_excludes']\n if custom_excludes:\n custom_search = re.compile(custom_excludes)\n return re.match(custom_search, value)\n return False\n\n def match(self, cfn):\n matches = []\n\n # Get a list of paths to every leaf node string containing at least one ${parameter}\n parameter_string_paths = self.match_values(cfn)\n # We want to search all of the paths to check if each one contains an 'Fn::Sub'\n for parameter_string_path in parameter_string_paths:\n if parameter_string_path[0] in ['Parameters']:\n continue\n # Exclude the special IAM variables\n variable = parameter_string_path[-1]\n\n if 'Resource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'NotResource' in parameter_string_path:\n if variable in self.resource_excludes:\n continue\n if 'Condition' in parameter_string_path:\n if variable in self.condition_excludes:\n continue\n\n # Step Function State Machine has a Definition Substitution that allows usage of special variables outside of a !Sub\n # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stepfunctions-statemachine-definitionsubstitutions.html\n\n if 'DefinitionString' in parameter_string_path:\n modified_parameter_string_path = parameter_string_path\n index = parameter_string_path.index('DefinitionString')\n modified_parameter_string_path[index] = 'DefinitionSubstitutions'\n modified_parameter_string_path = modified_parameter_string_path[:index+1]\n modified_parameter_string_path.append(variable[2:-1])\n if reduce(lambda c, k: c.get(k, {}), modified_parameter_string_path, cfn.template):\n continue\n\n # Exclude variables that match custom exclude filters, if configured\n # (for third-party tools that pre-process templates before uploading them to AWS)\n if self._variable_custom_excluded(variable):\n continue\n\n # Exclude literals (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html)\n if variable.startswith('${!'):\n continue\n\n found_sub = False\n # Does the path contain an 'Fn::Sub'?\n for step in parameter_string_path:\n if step in self.api_excludes:\n if self._api_exceptions(parameter_string_path[-1]):\n found_sub = True\n elif step == 'Fn::Sub' or step in self.excludes:\n found_sub = True\n\n # If we didn't find an 'Fn::Sub' it means a string containing a ${parameter} may not be evaluated correctly\n if not found_sub:\n # Remove the last item (the variable) to prevent multiple errors on 1 line errors\n path = parameter_string_path[:-1]\n message = 'Found an embedded parameter \"{}\" outside of an \"Fn::Sub\" at {}'.format(\n variable, '/'.join(map(str, path)))\n matches.append(RuleMatch(path, message))\n\n return matches\n", "path": "src/cfnlint/rules/functions/SubNeeded.py"}]} | 2,983 | 148 |
gh_patches_debug_15245 | rasdani/github-patches | git_diff | kornia__kornia-2232 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cleanup on setup metadata
_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2225#discussion_r1117693700_
-----
Right now I think we have some dead lines in the setup, for example:
- https://github.com/kornia/kornia/blob/master/setup.py#L16
- https://github.com/kornia/kornia/blob/master/setup.cfg#L57
- This isn't used anymore: https://github.com/kornia/kornia/tree/master/packaging
</issue>
<code>
[start of kornia/__init__.py]
1 # NOTE: kornia filters and geometry must go first since are the core of the library
2 # and by changing the import order you might get into a circular dependencies issue.
3 from . import filters
4 from . import geometry
5 from . import grad_estimator
6
7 # import the other modules for convenience
8 from . import augmentation, color, contrib, core, enhance, feature, io, losses, metrics, morphology, tracking, utils, x
9
10 # NOTE: we are going to expose to top level very few things
11 from kornia.constants import pi
12 from kornia.testing import xla_is_available
13 from kornia.utils import eye_like, vec_like, create_meshgrid, image_to_tensor, tensor_to_image
14
15 # Version variable
16 import sys
17
18 if sys.version_info >= (3, 8): # pragma: >=3.8 cover
19 import importlib.metadata as importlib_metadata
20 else: # pragma: <3.8 cover
21 import importlib_metadata
22
23 __version__ = importlib_metadata.version('kornia')
24
[end of kornia/__init__.py]
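One part of the cleanup (see the patch further down) replaces the metadata lookup above with a literal kept in the source tree. A small comparison sketch, purely illustrative, with the literal value taken from that patch:

```python
# Option A (current code above): resolve the version from installed package metadata.
# Needs the importlib_metadata backport on Python < 3.8 and only works when kornia
# is actually installed.
try:
    import importlib.metadata as importlib_metadata  # stdlib on Python >= 3.8

    version_from_metadata = importlib_metadata.version("kornia")
except Exception:  # not installed, or running from a bare checkout
    version_from_metadata = None

# Option B (what the patch further down switches to): a literal in the source tree,
# so kornia.__version__ works from a plain checkout with no metadata lookup at all.
version_from_source = "0.6.11.dev"
```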
[start of kornia/utils/_compat.py]
1 from typing import TYPE_CHECKING, Callable, ContextManager, List, Optional, Tuple, TypeVar
2
3 import torch
4 from torch import Tensor
5
6 from packaging import version
7
8
9 def torch_version() -> str:
10 """Parse the `torch.__version__` variable and removes +cu*/cpu."""
11 return torch.__version__.split('+')[0]
12
13
14 def torch_version_lt(major: int, minor: int, patch: int) -> bool:
15 _version = version.parse(torch_version())
16 return _version < version.parse(f"{major}.{minor}.{patch}")
17
18
19 def torch_version_le(major: int, minor: int, patch: int) -> bool:
20 _version = version.parse(torch_version())
21 return _version <= version.parse(f"{major}.{minor}.{patch}")
22
23
24 def torch_version_ge(major: int, minor: int, patch: Optional[int] = None) -> bool:
25 _version = version.parse(torch_version())
26 if patch is None:
27 return _version >= version.parse(f"{major}.{minor}")
28 else:
29 return _version >= version.parse(f"{major}.{minor}.{patch}")
30
31
32 if TYPE_CHECKING:
33 # TODO: remove this branch when kornia relies on torch >= 1.10.0
34 def torch_meshgrid(tensors: List[Tensor], indexing: Optional[str] = None) -> Tuple[Tensor, ...]:
35 ...
36
37 else:
38 if torch_version_ge(1, 10, 0):
39
40 def torch_meshgrid(tensors: List[Tensor], indexing: str):
41 return torch.meshgrid(tensors, indexing=indexing)
42
43 else:
44 # TODO: remove this branch when kornia relies on torch >= 1.10.0
45 def torch_meshgrid(tensors: List[Tensor], indexing: str):
46 return torch.meshgrid(tensors)
47
48
49 if TYPE_CHECKING:
50 # TODO: remove this branch when kornia relies on torch >= 1.10.0
51 _T = TypeVar('_T')
52 torch_inference_mode: Callable[..., ContextManager[_T]]
53 else:
54 if torch_version_ge(1, 10, 0):
55 torch_inference_mode = torch.inference_mode
56 else:
57 # TODO: remove this branch when kornia relies on torch >= 1.10.0
58 torch_inference_mode = torch.no_grad
59
[end of kornia/utils/_compat.py]
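For reference, a small usage sketch of the compatibility helpers defined above (it assumes torch and kornia are importable; the point is that the same call works across torch versions):

```python
import torch

from kornia.utils._compat import torch_meshgrid, torch_version_ge

ys = torch.arange(4, dtype=torch.float32)
xs = torch.arange(3, dtype=torch.float32)

# Same call on torch < 1.10 (no `indexing` argument) and on newer releases.
grid_y, grid_x = torch_meshgrid([ys, xs], indexing="ij")
print(grid_y.shape, grid_x.shape)  # torch.Size([4, 3]) torch.Size([4, 3])

# The version gate used throughout the module above.
print(torch_version_ge(1, 10, 0))  # True when the installed torch is >= 1.10.0
```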
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kornia/__init__.py b/kornia/__init__.py
--- a/kornia/__init__.py
+++ b/kornia/__init__.py
@@ -13,11 +13,4 @@
from kornia.utils import eye_like, vec_like, create_meshgrid, image_to_tensor, tensor_to_image
# Version variable
-import sys
-
-if sys.version_info >= (3, 8): # pragma: >=3.8 cover
- import importlib.metadata as importlib_metadata
-else: # pragma: <3.8 cover
- import importlib_metadata
-
-__version__ = importlib_metadata.version('kornia')
+__version__ = "0.6.11.dev"
diff --git a/kornia/utils/_compat.py b/kornia/utils/_compat.py
--- a/kornia/utils/_compat.py
+++ b/kornia/utils/_compat.py
@@ -1,9 +1,8 @@
from typing import TYPE_CHECKING, Callable, ContextManager, List, Optional, Tuple, TypeVar
import torch
-from torch import Tensor
-
from packaging import version
+from torch import Tensor
def torch_version() -> str:
| {"golden_diff": "diff --git a/kornia/__init__.py b/kornia/__init__.py\n--- a/kornia/__init__.py\n+++ b/kornia/__init__.py\n@@ -13,11 +13,4 @@\n from kornia.utils import eye_like, vec_like, create_meshgrid, image_to_tensor, tensor_to_image\n \n # Version variable\n-import sys\n-\n-if sys.version_info >= (3, 8): # pragma: >=3.8 cover\n- import importlib.metadata as importlib_metadata\n-else: # pragma: <3.8 cover\n- import importlib_metadata\n-\n-__version__ = importlib_metadata.version('kornia')\n+__version__ = \"0.6.11.dev\"\ndiff --git a/kornia/utils/_compat.py b/kornia/utils/_compat.py\n--- a/kornia/utils/_compat.py\n+++ b/kornia/utils/_compat.py\n@@ -1,9 +1,8 @@\n from typing import TYPE_CHECKING, Callable, ContextManager, List, Optional, Tuple, TypeVar\n \n import torch\n-from torch import Tensor\n-\n from packaging import version\n+from torch import Tensor\n \n \n def torch_version() -> str:\n", "issue": "Cleanup on setup metadata\n_Originally posted by @johnnv1 in https://github.com/kornia/kornia/pull/2225#discussion_r1117693700_\r\n\r\n\r\n-----\r\nright now I think we have some deadlines in the setup, for example:\r\n- https://github.com/kornia/kornia/blob/master/setup.py#L16\r\n- https://github.com/kornia/kornia/blob/master/setup.cfg#L57\r\n- This isn't used anymore https://github.com/kornia/kornia/tree/master/packaging \n", "before_files": [{"content": "# NOTE: kornia filters and geometry must go first since are the core of the library\n# and by changing the import order you might get into a circular dependencies issue.\nfrom . import filters\nfrom . import geometry\nfrom . import grad_estimator\n\n# import the other modules for convenience\nfrom . import augmentation, color, contrib, core, enhance, feature, io, losses, metrics, morphology, tracking, utils, x\n\n# NOTE: we are going to expose to top level very few things\nfrom kornia.constants import pi\nfrom kornia.testing import xla_is_available\nfrom kornia.utils import eye_like, vec_like, create_meshgrid, image_to_tensor, tensor_to_image\n\n# Version variable\nimport sys\n\nif sys.version_info >= (3, 8): # pragma: >=3.8 cover\n import importlib.metadata as importlib_metadata\nelse: # pragma: <3.8 cover\n import importlib_metadata\n\n__version__ = importlib_metadata.version('kornia')\n", "path": "kornia/__init__.py"}, {"content": "from typing import TYPE_CHECKING, Callable, ContextManager, List, Optional, Tuple, TypeVar\n\nimport torch\nfrom torch import Tensor\n\nfrom packaging import version\n\n\ndef torch_version() -> str:\n \"\"\"Parse the `torch.__version__` variable and removes +cu*/cpu.\"\"\"\n return torch.__version__.split('+')[0]\n\n\ndef torch_version_lt(major: int, minor: int, patch: int) -> bool:\n _version = version.parse(torch_version())\n return _version < version.parse(f\"{major}.{minor}.{patch}\")\n\n\ndef torch_version_le(major: int, minor: int, patch: int) -> bool:\n _version = version.parse(torch_version())\n return _version <= version.parse(f\"{major}.{minor}.{patch}\")\n\n\ndef torch_version_ge(major: int, minor: int, patch: Optional[int] = None) -> bool:\n _version = version.parse(torch_version())\n if patch is None:\n return _version >= version.parse(f\"{major}.{minor}\")\n else:\n return _version >= version.parse(f\"{major}.{minor}.{patch}\")\n\n\nif TYPE_CHECKING:\n # TODO: remove this branch when kornia relies on torch >= 1.10.0\n def torch_meshgrid(tensors: List[Tensor], indexing: Optional[str] = None) -> Tuple[Tensor, ...]:\n ...\n\nelse:\n if torch_version_ge(1, 10, 0):\n\n def 
torch_meshgrid(tensors: List[Tensor], indexing: str):\n return torch.meshgrid(tensors, indexing=indexing)\n\n else:\n # TODO: remove this branch when kornia relies on torch >= 1.10.0\n def torch_meshgrid(tensors: List[Tensor], indexing: str):\n return torch.meshgrid(tensors)\n\n\nif TYPE_CHECKING:\n # TODO: remove this branch when kornia relies on torch >= 1.10.0\n _T = TypeVar('_T')\n torch_inference_mode: Callable[..., ContextManager[_T]]\nelse:\n if torch_version_ge(1, 10, 0):\n torch_inference_mode = torch.inference_mode\n else:\n # TODO: remove this branch when kornia relies on torch >= 1.10.0\n torch_inference_mode = torch.no_grad\n", "path": "kornia/utils/_compat.py"}]} | 1,565 | 265 |
gh_patches_debug_37891 | rasdani/github-patches | git_diff | feast-dev__feast-536 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extend feature set and/or feature metadata
This issue tracks the addition of new fields to the current feature set specification that allow a user to add metadata to either the feature set or features. These fields are optional and are intended to provide users with the flexibility to include feature set level or feature level information.
The current proposal is to only add a single string field called `description` to `FeatureSpec`.
</issue>
<code>
[start of sdk/python/feast/loaders/ingest.py]
1 import logging
2 from functools import partial
3 from multiprocessing import Pool
4 from typing import Iterable, List
5
6 import pandas as pd
7 import pyarrow.parquet as pq
8
9 from feast.constants import DATETIME_COLUMN
10 from feast.feature_set import FeatureSet
11 from feast.type_map import (
12 pa_column_to_proto_column,
13 pa_column_to_timestamp_proto_column,
14 )
15 from feast.types import Field_pb2 as FieldProto
16 from feast.types.FeatureRow_pb2 import FeatureRow
17
18 _logger = logging.getLogger(__name__)
19
20 GRPC_CONNECTION_TIMEOUT_DEFAULT = 3 # type: int
21 GRPC_CONNECTION_TIMEOUT_APPLY = 300 # type: int
22 FEAST_SERVING_URL_ENV_KEY = "FEAST_SERVING_URL" # type: str
23 FEAST_CORE_URL_ENV_KEY = "FEAST_CORE_URL" # type: str
24 BATCH_FEATURE_REQUEST_WAIT_TIME_SECONDS = 300
25 KAFKA_CHUNK_PRODUCTION_TIMEOUT = 120 # type: int
26
27
28 def _encode_pa_tables(file: str, fs: FeatureSet, row_group_idx: int) -> List[bytes]:
29 """
30 Helper function to encode a PyArrow table(s) read from parquet file(s) into
31 FeatureRows.
32
33 This function accepts a list of file directory pointing to many parquet
34 files. All parquet files must have the same schema.
35
36 Each parquet file will be read into as a table and encoded into FeatureRows
37 using a pool of max_workers workers.
38
39 Args:
40 file (str):
41 File directory of all the parquet file to encode.
42 Parquet file must have more than one row group.
43
44 fs (feast.feature_set.FeatureSet):
45 FeatureSet describing parquet files.
46
47 row_group_idx(int):
48 Row group index to read and encode into byte like FeatureRow
49 protobuf objects.
50
51 Returns:
52 List[bytes]:
53 List of byte encoded FeatureRows from the parquet file.
54 """
55 pq_file = pq.ParquetFile(file)
56 # Read parquet file as a PyArrow table
57 table = pq_file.read_row_group(row_group_idx)
58
59 # Add datetime column
60 datetime_col = pa_column_to_timestamp_proto_column(table.column(DATETIME_COLUMN))
61
62 # Preprocess the columns by converting all its values to Proto values
63 proto_columns = {
64 field_name: pa_column_to_proto_column(field.dtype, table.column(field_name))
65 for field_name, field in fs.fields.items()
66 }
67
68 feature_set = f"{fs.project}/{fs.name}:{fs.version}"
69
70 # List to store result
71 feature_rows = []
72
73 # Loop optimization declaration(s)
74 field = FieldProto.Field
75 proto_items = proto_columns.items()
76 append = feature_rows.append
77
78 # Iterate through the rows
79 for row_idx in range(table.num_rows):
80 feature_row = FeatureRow(
81 event_timestamp=datetime_col[row_idx], feature_set=feature_set
82 )
83 # Loop optimization declaration
84 ext = feature_row.fields.extend
85
86 # Insert field from each column
87 for k, v in proto_items:
88 ext([field(name=k, value=v[row_idx])])
89
90 # Append FeatureRow in byte string form
91 append(feature_row.SerializeToString())
92
93 return feature_rows
94
95
96 def get_feature_row_chunks(
97 file: str, row_groups: List[int], fs: FeatureSet, max_workers: int
98 ) -> Iterable[List[bytes]]:
99 """
100 Iterator function to encode a PyArrow table read from a parquet file to
101 FeatureRow(s).
102
103 Args:
104 file (str):
105 File directory of the parquet file. The parquet file must have more
106 than one row group.
107
108 row_groups (List[int]):
109 Specific row group indexes to be read and transformed in the parquet
110 file.
111
112 fs (feast.feature_set.FeatureSet):
113 FeatureSet describing parquet files.
114
115 max_workers (int):
116 Maximum number of workers to spawn.
117
118 Returns:
119 Iterable[List[bytes]]:
120 Iterable list of byte encoded FeatureRow(s).
121 """
122
123 pool = Pool(max_workers)
124 func = partial(_encode_pa_tables, file, fs)
125 for chunk in pool.imap(func, row_groups):
126 yield chunk
127 return
128
129
130 def validate_dataframe(dataframe: pd.DataFrame, feature_set: FeatureSet):
131 if "datetime" not in dataframe.columns:
132 raise ValueError(
133 f'Dataframe does not contain entity "datetime" in columns {dataframe.columns}'
134 )
135
136 for entity in feature_set.entities:
137 if entity.name not in dataframe.columns:
138 raise ValueError(
139 f"Dataframe does not contain entity {entity.name} in columns {dataframe.columns}"
140 )
141
142 for feature in feature_set.features:
143 if feature.name not in dataframe.columns:
144 raise ValueError(
145 f"Dataframe does not contain feature {feature.name} in columns {dataframe.columns}"
146 )
147
[end of sdk/python/feast/loaders/ingest.py]
[start of sdk/python/feast/feature.py]
1 # Copyright 2019 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from feast.core.FeatureSet_pb2 import FeatureSpec as FeatureProto
16 from feast.field import Field
17 from feast.types import Value_pb2 as ValueTypeProto
18 from feast.value_type import ValueType
19
20
21 class Feature(Field):
22 """Feature field type"""
23
24 def to_proto(self) -> FeatureProto:
25 """Converts Feature object to its Protocol Buffer representation"""
26 value_type = ValueTypeProto.ValueType.Enum.Value(self.dtype.name)
27 return FeatureProto(
28 name=self.name,
29 value_type=value_type,
30 presence=self.presence,
31 group_presence=self.group_presence,
32 shape=self.shape,
33 value_count=self.value_count,
34 domain=self.domain,
35 int_domain=self.int_domain,
36 float_domain=self.float_domain,
37 string_domain=self.string_domain,
38 bool_domain=self.bool_domain,
39 struct_domain=self.struct_domain,
40 natural_language_domain=self.natural_language_domain,
41 image_domain=self.image_domain,
42 mid_domain=self.mid_domain,
43 url_domain=self.url_domain,
44 time_domain=self.time_domain,
45 time_of_day_domain=self.time_of_day_domain,
46 )
47
48 @classmethod
49 def from_proto(cls, feature_proto: FeatureProto):
50 """
51
52 Args:
53 feature_proto: FeatureSpec protobuf object
54
55 Returns:
56 Feature object
57 """
58 feature = cls(
59 name=feature_proto.name, dtype=ValueType(feature_proto.value_type)
60 )
61 feature.update_presence_constraints(feature_proto)
62 feature.update_shape_type(feature_proto)
63 feature.update_domain_info(feature_proto)
64 return feature
65
[end of sdk/python/feast/feature.py]
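The issue above proposes adding only a single `description` string to `FeatureSpec`. The sketch below is one hypothetical way that could surface on the SDK class just shown; it assumes the protobuf is extended with that field, which the current code does not yet support.

```python
from feast.field import Field
from feast.value_type import ValueType


class DescribedFeature(Field):
    """Hypothetical Feature carrying the proposed free-form metadata."""

    def __init__(self, name: str, dtype: ValueType, description: str = ""):
        super().__init__(name=name, dtype=dtype)
        self.description = description  # optional, defaults to empty

    def describe(self) -> str:
        # Serialisation to/from FeatureSpec would follow once the proto change lands.
        return f"{self.name} ({self.dtype.name}): {self.description or 'no description'}"
```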
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py
--- a/sdk/python/feast/feature.py
+++ b/sdk/python/feast/feature.py
@@ -56,7 +56,7 @@
Feature object
"""
feature = cls(
- name=feature_proto.name, dtype=ValueType(feature_proto.value_type)
+ name=feature_proto.name, dtype=ValueType(feature_proto.value_type),
)
feature.update_presence_constraints(feature_proto)
feature.update_shape_type(feature_proto)
diff --git a/sdk/python/feast/loaders/ingest.py b/sdk/python/feast/loaders/ingest.py
--- a/sdk/python/feast/loaders/ingest.py
+++ b/sdk/python/feast/loaders/ingest.py
@@ -25,7 +25,9 @@
KAFKA_CHUNK_PRODUCTION_TIMEOUT = 120 # type: int
-def _encode_pa_tables(file: str, fs: FeatureSet, row_group_idx: int) -> List[bytes]:
+def _encode_pa_tables(
+ file: str, feature_set: str, fields: dict, row_group_idx: int
+) -> List[bytes]:
"""
Helper function to encode a PyArrow table(s) read from parquet file(s) into
FeatureRows.
@@ -41,8 +43,11 @@
File directory of all the parquet file to encode.
Parquet file must have more than one row group.
- fs (feast.feature_set.FeatureSet):
- FeatureSet describing parquet files.
+ feature_set (str):
+ Feature set reference in the format f"{project}/{name}:{version}".
+
+ fields (dict[str, enum.Enum.ValueType]):
+ A mapping of field names to their value types.
row_group_idx(int):
Row group index to read and encode into byte like FeatureRow
@@ -61,12 +66,10 @@
# Preprocess the columns by converting all its values to Proto values
proto_columns = {
- field_name: pa_column_to_proto_column(field.dtype, table.column(field_name))
- for field_name, field in fs.fields.items()
+ field_name: pa_column_to_proto_column(dtype, table.column(field_name))
+ for field_name, dtype in fields.items()
}
- feature_set = f"{fs.project}/{fs.name}:{fs.version}"
-
# List to store result
feature_rows = []
@@ -120,8 +123,12 @@
Iterable list of byte encoded FeatureRow(s).
"""
+ feature_set = f"{fs.project}/{fs.name}:{fs.version}"
+
+ field_map = {field.name: field.dtype for field in fs.fields.values()}
+
pool = Pool(max_workers)
- func = partial(_encode_pa_tables, file, fs)
+ func = partial(_encode_pa_tables, file, feature_set, field_map)
for chunk in pool.imap(func, row_groups):
yield chunk
return
| {"golden_diff": "diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py\n--- a/sdk/python/feast/feature.py\n+++ b/sdk/python/feast/feature.py\n@@ -56,7 +56,7 @@\n Feature object\n \"\"\"\n feature = cls(\n- name=feature_proto.name, dtype=ValueType(feature_proto.value_type)\n+ name=feature_proto.name, dtype=ValueType(feature_proto.value_type),\n )\n feature.update_presence_constraints(feature_proto)\n feature.update_shape_type(feature_proto)\ndiff --git a/sdk/python/feast/loaders/ingest.py b/sdk/python/feast/loaders/ingest.py\n--- a/sdk/python/feast/loaders/ingest.py\n+++ b/sdk/python/feast/loaders/ingest.py\n@@ -25,7 +25,9 @@\n KAFKA_CHUNK_PRODUCTION_TIMEOUT = 120 # type: int\n \n \n-def _encode_pa_tables(file: str, fs: FeatureSet, row_group_idx: int) -> List[bytes]:\n+def _encode_pa_tables(\n+ file: str, feature_set: str, fields: dict, row_group_idx: int\n+) -> List[bytes]:\n \"\"\"\n Helper function to encode a PyArrow table(s) read from parquet file(s) into\n FeatureRows.\n@@ -41,8 +43,11 @@\n File directory of all the parquet file to encode.\n Parquet file must have more than one row group.\n \n- fs (feast.feature_set.FeatureSet):\n- FeatureSet describing parquet files.\n+ feature_set (str):\n+ Feature set reference in the format f\"{project}/{name}:{version}\".\n+\n+ fields (dict[str, enum.Enum.ValueType]):\n+ A mapping of field names to their value types.\n \n row_group_idx(int):\n Row group index to read and encode into byte like FeatureRow\n@@ -61,12 +66,10 @@\n \n # Preprocess the columns by converting all its values to Proto values\n proto_columns = {\n- field_name: pa_column_to_proto_column(field.dtype, table.column(field_name))\n- for field_name, field in fs.fields.items()\n+ field_name: pa_column_to_proto_column(dtype, table.column(field_name))\n+ for field_name, dtype in fields.items()\n }\n \n- feature_set = f\"{fs.project}/{fs.name}:{fs.version}\"\n-\n # List to store result\n feature_rows = []\n \n@@ -120,8 +123,12 @@\n Iterable list of byte encoded FeatureRow(s).\n \"\"\"\n \n+ feature_set = f\"{fs.project}/{fs.name}:{fs.version}\"\n+\n+ field_map = {field.name: field.dtype for field in fs.fields.values()}\n+\n pool = Pool(max_workers)\n- func = partial(_encode_pa_tables, file, fs)\n+ func = partial(_encode_pa_tables, file, feature_set, field_map)\n for chunk in pool.imap(func, row_groups):\n yield chunk\n return\n", "issue": "Extend feature set and/or feature metadata\nThis issue tracks the addition of new fields to the current feature set specification that allow a user to add metadata to either the feature set or features. 
These fields are optional and are intended to provide users with the flexibility to include feature level or feature level information.\r\n\r\nThe current proposal is to only add a single string field called `description` to `FeatureSpec`\n", "before_files": [{"content": "import logging\nfrom functools import partial\nfrom multiprocessing import Pool\nfrom typing import Iterable, List\n\nimport pandas as pd\nimport pyarrow.parquet as pq\n\nfrom feast.constants import DATETIME_COLUMN\nfrom feast.feature_set import FeatureSet\nfrom feast.type_map import (\n pa_column_to_proto_column,\n pa_column_to_timestamp_proto_column,\n)\nfrom feast.types import Field_pb2 as FieldProto\nfrom feast.types.FeatureRow_pb2 import FeatureRow\n\n_logger = logging.getLogger(__name__)\n\nGRPC_CONNECTION_TIMEOUT_DEFAULT = 3 # type: int\nGRPC_CONNECTION_TIMEOUT_APPLY = 300 # type: int\nFEAST_SERVING_URL_ENV_KEY = \"FEAST_SERVING_URL\" # type: str\nFEAST_CORE_URL_ENV_KEY = \"FEAST_CORE_URL\" # type: str\nBATCH_FEATURE_REQUEST_WAIT_TIME_SECONDS = 300\nKAFKA_CHUNK_PRODUCTION_TIMEOUT = 120 # type: int\n\n\ndef _encode_pa_tables(file: str, fs: FeatureSet, row_group_idx: int) -> List[bytes]:\n \"\"\"\n Helper function to encode a PyArrow table(s) read from parquet file(s) into\n FeatureRows.\n\n This function accepts a list of file directory pointing to many parquet\n files. All parquet files must have the same schema.\n\n Each parquet file will be read into as a table and encoded into FeatureRows\n using a pool of max_workers workers.\n\n Args:\n file (str):\n File directory of all the parquet file to encode.\n Parquet file must have more than one row group.\n\n fs (feast.feature_set.FeatureSet):\n FeatureSet describing parquet files.\n\n row_group_idx(int):\n Row group index to read and encode into byte like FeatureRow\n protobuf objects.\n\n Returns:\n List[bytes]:\n List of byte encoded FeatureRows from the parquet file.\n \"\"\"\n pq_file = pq.ParquetFile(file)\n # Read parquet file as a PyArrow table\n table = pq_file.read_row_group(row_group_idx)\n\n # Add datetime column\n datetime_col = pa_column_to_timestamp_proto_column(table.column(DATETIME_COLUMN))\n\n # Preprocess the columns by converting all its values to Proto values\n proto_columns = {\n field_name: pa_column_to_proto_column(field.dtype, table.column(field_name))\n for field_name, field in fs.fields.items()\n }\n\n feature_set = f\"{fs.project}/{fs.name}:{fs.version}\"\n\n # List to store result\n feature_rows = []\n\n # Loop optimization declaration(s)\n field = FieldProto.Field\n proto_items = proto_columns.items()\n append = feature_rows.append\n\n # Iterate through the rows\n for row_idx in range(table.num_rows):\n feature_row = FeatureRow(\n event_timestamp=datetime_col[row_idx], feature_set=feature_set\n )\n # Loop optimization declaration\n ext = feature_row.fields.extend\n\n # Insert field from each column\n for k, v in proto_items:\n ext([field(name=k, value=v[row_idx])])\n\n # Append FeatureRow in byte string form\n append(feature_row.SerializeToString())\n\n return feature_rows\n\n\ndef get_feature_row_chunks(\n file: str, row_groups: List[int], fs: FeatureSet, max_workers: int\n) -> Iterable[List[bytes]]:\n \"\"\"\n Iterator function to encode a PyArrow table read from a parquet file to\n FeatureRow(s).\n\n Args:\n file (str):\n File directory of the parquet file. 
The parquet file must have more\n than one row group.\n\n row_groups (List[int]):\n Specific row group indexes to be read and transformed in the parquet\n file.\n\n fs (feast.feature_set.FeatureSet):\n FeatureSet describing parquet files.\n\n max_workers (int):\n Maximum number of workers to spawn.\n\n Returns:\n Iterable[List[bytes]]:\n Iterable list of byte encoded FeatureRow(s).\n \"\"\"\n\n pool = Pool(max_workers)\n func = partial(_encode_pa_tables, file, fs)\n for chunk in pool.imap(func, row_groups):\n yield chunk\n return\n\n\ndef validate_dataframe(dataframe: pd.DataFrame, feature_set: FeatureSet):\n if \"datetime\" not in dataframe.columns:\n raise ValueError(\n f'Dataframe does not contain entity \"datetime\" in columns {dataframe.columns}'\n )\n\n for entity in feature_set.entities:\n if entity.name not in dataframe.columns:\n raise ValueError(\n f\"Dataframe does not contain entity {entity.name} in columns {dataframe.columns}\"\n )\n\n for feature in feature_set.features:\n if feature.name not in dataframe.columns:\n raise ValueError(\n f\"Dataframe does not contain feature {feature.name} in columns {dataframe.columns}\"\n )\n", "path": "sdk/python/feast/loaders/ingest.py"}, {"content": "# Copyright 2019 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom feast.core.FeatureSet_pb2 import FeatureSpec as FeatureProto\nfrom feast.field import Field\nfrom feast.types import Value_pb2 as ValueTypeProto\nfrom feast.value_type import ValueType\n\n\nclass Feature(Field):\n \"\"\"Feature field type\"\"\"\n\n def to_proto(self) -> FeatureProto:\n \"\"\"Converts Feature object to its Protocol Buffer representation\"\"\"\n value_type = ValueTypeProto.ValueType.Enum.Value(self.dtype.name)\n return FeatureProto(\n name=self.name,\n value_type=value_type,\n presence=self.presence,\n group_presence=self.group_presence,\n shape=self.shape,\n value_count=self.value_count,\n domain=self.domain,\n int_domain=self.int_domain,\n float_domain=self.float_domain,\n string_domain=self.string_domain,\n bool_domain=self.bool_domain,\n struct_domain=self.struct_domain,\n natural_language_domain=self.natural_language_domain,\n image_domain=self.image_domain,\n mid_domain=self.mid_domain,\n url_domain=self.url_domain,\n time_domain=self.time_domain,\n time_of_day_domain=self.time_of_day_domain,\n )\n\n @classmethod\n def from_proto(cls, feature_proto: FeatureProto):\n \"\"\"\n\n Args:\n feature_proto: FeatureSpec protobuf object\n\n Returns:\n Feature object\n \"\"\"\n feature = cls(\n name=feature_proto.name, dtype=ValueType(feature_proto.value_type)\n )\n feature.update_presence_constraints(feature_proto)\n feature.update_shape_type(feature_proto)\n feature.update_domain_info(feature_proto)\n return feature\n", "path": "sdk/python/feast/feature.py"}]} | 2,597 | 665 |
gh_patches_debug_39661 | rasdani/github-patches | git_diff | pytorch__TensorRT-166 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Module tests fail because of segfault in cuDNN destructor
The tests pass functionality-wise, but in the cleanup, the test segfaults. It seems like this is an issue others have seen in libtorch: https://github.com/pytorch/pytorch/issues/17658
</issue>
<code>
[start of py/setup.py]
1 import os
2 import sys
3 import glob
4 import setuptools
5 from setuptools import setup, Extension, find_packages
6 from setuptools.command.build_ext import build_ext
7 from setuptools.command.develop import develop
8 from setuptools.command.install import install
9 from distutils.cmd import Command
10 from wheel.bdist_wheel import bdist_wheel
11
12 from torch.utils import cpp_extension
13 from shutil import copyfile, rmtree
14
15 import subprocess
16
17 dir_path = os.path.dirname(os.path.realpath(__file__))
18
19 __version__ = '0.0.3'
20
21 CXX11_ABI = False
22
23 if "--use-cxx11-abi" in sys.argv:
24 sys.argv.remove("--use-cxx11-abi")
25 CXX11_ABI = True
26
27 def which(program):
28 import os
29 def is_exe(fpath):
30 return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
31
32 fpath, fname = os.path.split(program)
33 if fpath:
34 if is_exe(program):
35 return program
36 else:
37 for path in os.environ["PATH"].split(os.pathsep):
38 exe_file = os.path.join(path, program)
39 if is_exe(exe_file):
40 return exe_file
41
42 return None
43
44 BAZEL_EXE = which("bazel")
45
46 def build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=False):
47 cmd = [BAZEL_EXE, "build"]
48 cmd.append("//cpp/api/lib:libtrtorch.so")
49 if develop:
50 cmd.append("--compilation_mode=dbg")
51 else:
52 cmd.append("--compilation_mode=opt")
53 if use_dist_dir:
54 cmd.append("--distdir=third_party/dist_dir/x86_64-linux-gnu")
55 if not cxx11_abi:
56 cmd.append("--config=python")
57 else:
58 print("using CXX11 ABI build")
59
60 print("building libtrtorch")
61 status_code = subprocess.run(cmd).returncode
62
63 if status_code != 0:
64 sys.exit(status_code)
65
66
67 def gen_version_file():
68 if not os.path.exists(dir_path + '/trtorch/_version.py'):
69 os.mknod(dir_path + '/trtorch/_version.py')
70
71 with open(dir_path + '/trtorch/_version.py', 'w') as f:
72 print("creating version file")
73 f.write("__version__ = \"" + __version__ + '\"')
74
75 def copy_libtrtorch(multilinux=False):
76 if not os.path.exists(dir_path + '/trtorch/lib'):
77 os.makedirs(dir_path + '/trtorch/lib')
78
79 print("copying library into module")
80 if multilinux:
81 copyfile(dir_path + "/build/libtrtorch_build/libtrtorch.so", dir_path + '/trtorch/lib/libtrtorch.so')
82 else:
83 copyfile(dir_path + "/../bazel-bin/cpp/api/lib/libtrtorch.so", dir_path + '/trtorch/lib/libtrtorch.so')
84
85 class DevelopCommand(develop):
86 description = "Builds the package and symlinks it into the PYTHONPATH"
87
88 def initialize_options(self):
89 develop.initialize_options(self)
90
91 def finalize_options(self):
92 develop.finalize_options(self)
93
94 def run(self):
95 global CXX11_ABI
96 build_libtrtorch_pre_cxx11_abi(develop=True, cxx11_abi=CXX11_ABI)
97 gen_version_file()
98 copy_libtrtorch()
99 develop.run(self)
100
101
102 class InstallCommand(install):
103 description = "Builds the package"
104
105 def initialize_options(self):
106 install.initialize_options(self)
107
108 def finalize_options(self):
109 install.finalize_options(self)
110
111 def run(self):
112 global CXX11_ABI
113 build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)
114 gen_version_file()
115 copy_libtrtorch()
116 install.run(self)
117
118 class BdistCommand(bdist_wheel):
119 description = "Builds the package"
120
121 def initialize_options(self):
122 bdist_wheel.initialize_options(self)
123
124 def finalize_options(self):
125 bdist_wheel.finalize_options(self)
126
127 def run(self):
128 global CXX11_ABI
129 build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)
130 gen_version_file()
131 copy_libtrtorch()
132 bdist_wheel.run(self)
133
134 class CleanCommand(Command):
135 """Custom clean command to tidy up the project root."""
136 PY_CLEAN_FILES = ['./build', './dist', './trtorch/__pycache__', './trtorch/lib', './*.pyc', './*.tgz', './*.egg-info']
137 description = "Command to tidy up the project root"
138 user_options = []
139
140 def initialize_options(self):
141 pass
142
143 def finalize_options(self):
144 pass
145
146 def run(self):
147 for path_spec in self.PY_CLEAN_FILES:
148 # Make paths absolute and relative to this path
149 abs_paths = glob.glob(os.path.normpath(os.path.join(dir_path, path_spec)))
150 for path in [str(p) for p in abs_paths]:
151 if not path.startswith(dir_path):
152 # Die if path in CLEAN_FILES is absolute + outside this directory
153 raise ValueError("%s is not a path inside %s" % (path, dir_path))
154 print('Removing %s' % os.path.relpath(path))
155 rmtree(path)
156
157 ext_modules = [
158 cpp_extension.CUDAExtension('trtorch._C',
159 ['trtorch/csrc/trtorch_py.cpp'],
160 library_dirs=[
161 dir_path + '/trtorch/lib/'
162 ],
163 libraries=[
164 "trtorch"
165 ],
166 include_dirs=[
167 dir_path + "/../",
168 dir_path + "/../bazel-TRTorch/external/tensorrt/include",
169 ],
170 extra_compile_args=[
171 "-Wno-deprecated",
172 "-Wno-deprecated-declarations",
173 ] + (["-D_GLIBCXX_USE_CXX11_ABI=1"] if CXX11_ABI else ["-D_GLIBCXX_USE_CXX11_ABI=0"]),
174 extra_link_args=[
175 "-Wno-deprecated",
176 "-Wno-deprecated-declarations",
177 "-Wl,--no-as-needed",
178 "-ltrtorch",
179 "-Wl,-rpath,$ORIGIN/lib"
180 ] + (["-D_GLIBCXX_USE_CXX11_ABI=1"] if CXX11_ABI else ["-D_GLIBCXX_USE_CXX11_ABI=0"]),
181 undef_macros=[ "NDEBUG" ]
182 )
183 ]
184
185 with open("README.md", "r") as fh:
186 long_description = fh.read()
187
188 setup(
189 name='trtorch',
190 version=__version__,
191 author='NVIDIA',
192 author_email='[email protected]',
193 url='https://nvidia.github.io/TRTorch',
194 description='A compiler backend for PyTorch JIT targeting NVIDIA GPUs',
195 long_description_content_type='text/markdown',
196 long_description=long_description,
197 ext_modules=ext_modules,
198 install_requires=[
199 'torch==1.5.1',
200 ],
201 setup_requires=[],
202 cmdclass={
203 'install': InstallCommand,
204 'clean': CleanCommand,
205 'develop': DevelopCommand,
206 'build_ext': cpp_extension.BuildExtension,
207 'bdist_wheel': BdistCommand,
208 },
209 zip_safe=False,
210 license="BSD",
211 packages=find_packages(),
212 classifiers=[
213 "Development Status :: 3 - Alpha",
214 "Environment :: GPU :: NVIDIA CUDA",
215 "License :: OSI Approved :: BSD License",
216 "Intended Audience :: Developers",
217 "Intended Audience :: Science/Research",
218 "Operating System :: POSIX :: Linux",
219 "Programming Language :: C++",
220 "Programming Language :: Python",
221 "Programming Language :: Python :: Implementation :: CPython",
222 "Topic :: Scientific/Engineering",
223 "Topic :: Scientific/Engineering :: Artificial Intelligence",
224 "Topic :: Software Development",
225 "Topic :: Software Development :: Libraries"
226 ],
227 python_requires='>=3.5',
228 include_package_data=True,
229 package_data={
230 'trtorch': ['lib/*.so'],
231 },
232 exclude_package_data={
233 '': ['*.cpp', '*.h'],
234 'trtorch': ['csrc/*.cpp'],
235 }
236 )
237
[end of py/setup.py]
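For reference, a small illustrative helper (not part of the repository) showing how the two build flavours handled at the top of setup.py above are driven:

```python
# Illustrative only: drives the same entry points defined in py/setup.py above.
import subprocess
import sys


def build_wheel(use_cxx11_abi: bool = False) -> int:
    """Build the trtorch wheel, optionally building libtrtorch with the CXX11 ABI."""
    cmd = [sys.executable, "setup.py", "bdist_wheel"]
    if use_cxx11_abi:
        # Consumed and removed from sys.argv by setup.py, which then skips the
        # "--config=python" bazel config when building libtrtorch.so.
        cmd.append("--use-cxx11-abi")
    return subprocess.run(cmd).returncode
```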
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/py/setup.py b/py/setup.py
--- a/py/setup.py
+++ b/py/setup.py
@@ -16,7 +16,7 @@
dir_path = os.path.dirname(os.path.realpath(__file__))
-__version__ = '0.0.3'
+__version__ = '0.1.0a0'
CXX11_ABI = False
@@ -158,7 +158,8 @@
cpp_extension.CUDAExtension('trtorch._C',
['trtorch/csrc/trtorch_py.cpp'],
library_dirs=[
- dir_path + '/trtorch/lib/'
+ (dir_path + '/trtorch/lib/'),
+ "/opt/conda/lib/python3.6/config-3.6m-x86_64-linux-gnu"
],
libraries=[
"trtorch"
@@ -176,7 +177,14 @@
"-Wno-deprecated-declarations",
"-Wl,--no-as-needed",
"-ltrtorch",
- "-Wl,-rpath,$ORIGIN/lib"
+ "-Wl,-rpath,$ORIGIN/lib",
+ "-lpthread",
+ "-ldl",
+ "-lutil",
+ "-lrt",
+ "-lm",
+ "-Xlinker",
+ "-export-dynamic"
] + (["-D_GLIBCXX_USE_CXX11_ABI=1"] if CXX11_ABI else ["-D_GLIBCXX_USE_CXX11_ABI=0"]),
undef_macros=[ "NDEBUG" ]
)
@@ -196,7 +204,7 @@
long_description=long_description,
ext_modules=ext_modules,
install_requires=[
- 'torch==1.5.1',
+ 'torch==1.6.0',
],
setup_requires=[],
cmdclass={
@@ -210,7 +218,7 @@
license="BSD",
packages=find_packages(),
classifiers=[
- "Development Status :: 3 - Alpha",
+ "Development Status :: 4 - Beta",
"Environment :: GPU :: NVIDIA CUDA",
"License :: OSI Approved :: BSD License",
"Intended Audience :: Developers",
@@ -224,7 +232,7 @@
"Topic :: Software Development",
"Topic :: Software Development :: Libraries"
],
- python_requires='>=3.5',
+ python_requires='>=3.6',
include_package_data=True,
package_data={
'trtorch': ['lib/*.so'],
| {"golden_diff": "diff --git a/py/setup.py b/py/setup.py\n--- a/py/setup.py\n+++ b/py/setup.py\n@@ -16,7 +16,7 @@\n \n dir_path = os.path.dirname(os.path.realpath(__file__))\n \n-__version__ = '0.0.3'\n+__version__ = '0.1.0a0'\n \n CXX11_ABI = False\n \n@@ -158,7 +158,8 @@\n cpp_extension.CUDAExtension('trtorch._C',\n ['trtorch/csrc/trtorch_py.cpp'],\n library_dirs=[\n- dir_path + '/trtorch/lib/'\n+ (dir_path + '/trtorch/lib/'),\n+ \"/opt/conda/lib/python3.6/config-3.6m-x86_64-linux-gnu\"\n ],\n libraries=[\n \"trtorch\"\n@@ -176,7 +177,14 @@\n \"-Wno-deprecated-declarations\",\n \"-Wl,--no-as-needed\",\n \"-ltrtorch\",\n- \"-Wl,-rpath,$ORIGIN/lib\"\n+ \"-Wl,-rpath,$ORIGIN/lib\",\n+ \"-lpthread\",\n+ \"-ldl\",\n+ \"-lutil\",\n+ \"-lrt\",\n+ \"-lm\",\n+ \"-Xlinker\",\n+ \"-export-dynamic\"\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n undef_macros=[ \"NDEBUG\" ]\n )\n@@ -196,7 +204,7 @@\n long_description=long_description,\n ext_modules=ext_modules,\n install_requires=[\n- 'torch==1.5.1',\n+ 'torch==1.6.0',\n ],\n setup_requires=[],\n cmdclass={\n@@ -210,7 +218,7 @@\n license=\"BSD\",\n packages=find_packages(),\n classifiers=[\n- \"Development Status :: 3 - Alpha\",\n+ \"Development Status :: 4 - Beta\",\n \"Environment :: GPU :: NVIDIA CUDA\",\n \"License :: OSI Approved :: BSD License\",\n \"Intended Audience :: Developers\",\n@@ -224,7 +232,7 @@\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\"\n ],\n- python_requires='>=3.5',\n+ python_requires='>=3.6',\n include_package_data=True,\n package_data={\n 'trtorch': ['lib/*.so'],\n", "issue": "Module tests fail because of segfault in cuDNN destructor\nThe tests pass functionality wise, but in the clean up, the test segfaults. 
It seems like this is a issue others have seen in libtorch https://github.com/pytorch/pytorch/issues/17658\n", "before_files": [{"content": "import os\nimport sys\nimport glob\nimport setuptools\nfrom setuptools import setup, Extension, find_packages\nfrom setuptools.command.build_ext import build_ext\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom distutils.cmd import Command\nfrom wheel.bdist_wheel import bdist_wheel\n\nfrom torch.utils import cpp_extension\nfrom shutil import copyfile, rmtree\n\nimport subprocess\n\ndir_path = os.path.dirname(os.path.realpath(__file__))\n\n__version__ = '0.0.3'\n\nCXX11_ABI = False\n\nif \"--use-cxx11-abi\" in sys.argv:\n sys.argv.remove(\"--use-cxx11-abi\")\n CXX11_ABI = True\n\ndef which(program):\n import os\n def is_exe(fpath):\n return os.path.isfile(fpath) and os.access(fpath, os.X_OK)\n\n fpath, fname = os.path.split(program)\n if fpath:\n if is_exe(program):\n return program\n else:\n for path in os.environ[\"PATH\"].split(os.pathsep):\n exe_file = os.path.join(path, program)\n if is_exe(exe_file):\n return exe_file\n\n return None\n\nBAZEL_EXE = which(\"bazel\")\n\ndef build_libtrtorch_pre_cxx11_abi(develop=True, use_dist_dir=True, cxx11_abi=False):\n cmd = [BAZEL_EXE, \"build\"]\n cmd.append(\"//cpp/api/lib:libtrtorch.so\")\n if develop:\n cmd.append(\"--compilation_mode=dbg\")\n else:\n cmd.append(\"--compilation_mode=opt\")\n if use_dist_dir:\n cmd.append(\"--distdir=third_party/dist_dir/x86_64-linux-gnu\")\n if not cxx11_abi:\n cmd.append(\"--config=python\")\n else:\n print(\"using CXX11 ABI build\")\n\n print(\"building libtrtorch\")\n status_code = subprocess.run(cmd).returncode\n\n if status_code != 0:\n sys.exit(status_code)\n\n\ndef gen_version_file():\n if not os.path.exists(dir_path + '/trtorch/_version.py'):\n os.mknod(dir_path + '/trtorch/_version.py')\n\n with open(dir_path + '/trtorch/_version.py', 'w') as f:\n print(\"creating version file\")\n f.write(\"__version__ = \\\"\" + __version__ + '\\\"')\n\ndef copy_libtrtorch(multilinux=False):\n if not os.path.exists(dir_path + '/trtorch/lib'):\n os.makedirs(dir_path + '/trtorch/lib')\n\n print(\"copying library into module\")\n if multilinux:\n copyfile(dir_path + \"/build/libtrtorch_build/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n else:\n copyfile(dir_path + \"/../bazel-bin/cpp/api/lib/libtrtorch.so\", dir_path + '/trtorch/lib/libtrtorch.so')\n\nclass DevelopCommand(develop):\n description = \"Builds the package and symlinks it into the PYTHONPATH\"\n\n def initialize_options(self):\n develop.initialize_options(self)\n\n def finalize_options(self):\n develop.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=True, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n develop.run(self)\n\n\nclass InstallCommand(install):\n description = \"Builds the package\"\n\n def initialize_options(self):\n install.initialize_options(self)\n\n def finalize_options(self):\n install.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n install.run(self)\n\nclass BdistCommand(bdist_wheel):\n description = \"Builds the package\"\n\n def initialize_options(self):\n bdist_wheel.initialize_options(self)\n\n def finalize_options(self):\n bdist_wheel.finalize_options(self)\n\n def run(self):\n global CXX11_ABI\n 
build_libtrtorch_pre_cxx11_abi(develop=False, cxx11_abi=CXX11_ABI)\n gen_version_file()\n copy_libtrtorch()\n bdist_wheel.run(self)\n\nclass CleanCommand(Command):\n \"\"\"Custom clean command to tidy up the project root.\"\"\"\n PY_CLEAN_FILES = ['./build', './dist', './trtorch/__pycache__', './trtorch/lib', './*.pyc', './*.tgz', './*.egg-info']\n description = \"Command to tidy up the project root\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n for path_spec in self.PY_CLEAN_FILES:\n # Make paths absolute and relative to this path\n abs_paths = glob.glob(os.path.normpath(os.path.join(dir_path, path_spec)))\n for path in [str(p) for p in abs_paths]:\n if not path.startswith(dir_path):\n # Die if path in CLEAN_FILES is absolute + outside this directory\n raise ValueError(\"%s is not a path inside %s\" % (path, dir_path))\n print('Removing %s' % os.path.relpath(path))\n rmtree(path)\n\next_modules = [\n cpp_extension.CUDAExtension('trtorch._C',\n ['trtorch/csrc/trtorch_py.cpp'],\n library_dirs=[\n dir_path + '/trtorch/lib/'\n ],\n libraries=[\n \"trtorch\"\n ],\n include_dirs=[\n dir_path + \"/../\",\n dir_path + \"/../bazel-TRTorch/external/tensorrt/include\",\n ],\n extra_compile_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n extra_link_args=[\n \"-Wno-deprecated\",\n \"-Wno-deprecated-declarations\",\n \"-Wl,--no-as-needed\",\n \"-ltrtorch\",\n \"-Wl,-rpath,$ORIGIN/lib\"\n ] + ([\"-D_GLIBCXX_USE_CXX11_ABI=1\"] if CXX11_ABI else [\"-D_GLIBCXX_USE_CXX11_ABI=0\"]),\n undef_macros=[ \"NDEBUG\" ]\n )\n]\n\nwith open(\"README.md\", \"r\") as fh:\n long_description = fh.read()\n\nsetup(\n name='trtorch',\n version=__version__,\n author='NVIDIA',\n author_email='[email protected]',\n url='https://nvidia.github.io/TRTorch',\n description='A compiler backend for PyTorch JIT targeting NVIDIA GPUs',\n long_description_content_type='text/markdown',\n long_description=long_description,\n ext_modules=ext_modules,\n install_requires=[\n 'torch==1.5.1',\n ],\n setup_requires=[],\n cmdclass={\n 'install': InstallCommand,\n 'clean': CleanCommand,\n 'develop': DevelopCommand,\n 'build_ext': cpp_extension.BuildExtension,\n 'bdist_wheel': BdistCommand,\n },\n zip_safe=False,\n license=\"BSD\",\n packages=find_packages(),\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Environment :: GPU :: NVIDIA CUDA\",\n \"License :: OSI Approved :: BSD License\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: C++\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development\",\n \"Topic :: Software Development :: Libraries\"\n ],\n python_requires='>=3.5',\n include_package_data=True,\n package_data={\n 'trtorch': ['lib/*.so'],\n },\n exclude_package_data={\n '': ['*.cpp', '*.h'],\n 'trtorch': ['csrc/*.cpp'],\n }\n)\n", "path": "py/setup.py"}]} | 2,999 | 558 |
gh_patches_debug_297 | rasdani/github-patches | git_diff | beeware__toga-267 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GTK+: TypeError: on_close() takes 2 positional arguments but 3 were given
When creating a new Window and then closing it a TypeError is created in Linux. Once you have created an App, calling the following will reproduce the error:
window = toga.Window()
window.app = app
window.show()
Then close the window and you get:
TypeError: on_close() takes 2 positional arguments but 3 were given
</issue>
<code>
[start of src/gtk/toga_gtk/app.py]
1 import asyncio
2 import os
3 import signal
4 import sys
5
6 try:
7 import gi
8 except ImportError:
9 # app.py is the first module that will be imported when you import toga_gtk.
10 #
11 # If Gtk can't be imported, it may be because we're in a virtualenv,
12 # and the system python libraries aren't visible. This can be fixed by
13 # creating a symlink into the site-packages
14 # Try creating a symlink to the system library location.
15 # base_packages_dir is where the packages installed by the package manager
16 # can be found.
17 # gi_system_install_path is where gi can be found in the packages dir.
18 # installer_command is the command the user can run to install gi.
19 py_version = "%d.%d" % (sys.version_info.major, sys.version_info.minor)
20
21 if sys.version_info.major == 3:
22 if os.path.isdir('/usr/lib64/python%s/site-packages/' % (py_version,)):
23 # Fedora
24 base_packages_dir = '/usr/lib64/python%s/site-packages/' % (py_version,)
25 gi_system_install_path = '/usr/lib64/python%s/site-packages/gi' % (py_version,)
26 installer_command = 'dnf install pygobject3 python3-gobject'
27 elif os.path.isdir('/usr/lib/python3/dist-packages/'):
28 # Ubuntu, Debian
29 base_packages_dir = '/usr/lib/python3/dist-packages/'
30 gi_system_install_path = '/usr/local/lib/python3/dist-packages/gi'
31 installer_command = 'apt-get install python3-gi'
32 elif os.path.isdir('/usr/lib/python%s/site-packages/' % (py_version,)):
33 # Arch
34 base_packages_dir = '/usr/lib/python%s/site-packages/' % (py_version,)
35 gi_system_install_path = '/usr/lib/python%s/site-packages/gi' % (py_version,)
36 installer_command = 'pacman -S python-gobject'
37 else:
38 raise RuntimeError("Unable to locate your Python packages dir.")
39 else:
40 raise RuntimeError("Toga requires Python 3.")
41
42 # Use the location of this package to guide us to
43 # the location of the virtualenv.
44 gi_symlink_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'gi')
45 pygtkcompat_symlink_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'pygtkcompat')
46
47 if gi_symlink_path == gi_system_install_path:
48 # If we're not in a virtualenv, just raise the original import error.
49 raise
50 else:
51 gi_path = os.path.join(base_packages_dir, 'gi')
52 pygtkcompat_path = os.path.join(base_packages_dir, 'pygtkcompat')
53 if os.path.exists(gi_path) and os.path.isdir(gi_path):
54 # If we can identify the gi library, create a symlink to it.
55 try:
56 print("Creating symlink (%s & %s) to system GTK+ libraries..." % (gi_symlink_path, pygtkcompat_symlink_path))
57 os.symlink(gi_path, gi_symlink_path)
58 os.symlink(pygtkcompat_path, pygtkcompat_symlink_path)
59
60 # The call to os.symlink will return almost immediately,
61 # but for some reason, it may not be fully flushed to
62 # the file system. One way to fix this is to start
63 # the process again. This call to os.execl restarts the
64 # program with the same arguments, replacing the original
65 # operating system process.
66 os.execl(sys.executable, sys.executable, *sys.argv)
67 except OSError:
68 raise RuntimeError("Unable to automatically create symlink to system Python GTK+ bindings.")
69 else:
70 raise RuntimeError("Unable to locate the Python GTK+ bindings. Have you run '%s'?" % installer_command)
71
72 gi.require_version('Gtk', '3.0')
73 from gi.repository import Gtk, Gio, GLib
74
75
76 from toga.command import GROUP_BREAK, SECTION_BREAK, Command, Group
77 # from .command import Command, Group
78 import toga
79 from .window import Window
80 from toga import Icon
81 from toga.utils import wrapped_handler
82
83 import gbulb
84
85
86 class MainWindow(Window):
87 _IMPL_CLASS = Gtk.ApplicationWindow
88
89 def on_close(self, widget):
90 pass
91
92
93 class App:
94 """
95 Todo:
96 * Creation of Menus is not working.
97 * Disabling of menu items is not working.
98 * App Icon is not showing up
99 """
100 def __init__(self, interface):
101 self.interface = interface
102 self.interface._impl = self
103
104 gbulb.install(gtk=True)
105 self.loop = asyncio.get_event_loop()
106
107 self.create()
108
109 def create(self):
110 Icon.app_icon = Icon.load(self.interface.icon, default=Icon.TIBERIUS_ICON)
111 # Stimulate the build of the app
112 self.native = Gtk.Application(application_id=self.interface.app_id, flags=Gio.ApplicationFlags.FLAGS_NONE)
113
114 # Connect the GTK signal that will cause app startup to occur
115 self.native.connect('startup', self.startup)
116 self.native.connect('activate', self.activate)
117 # self.native.connect('shutdown', self.shutdown)
118
119 self.actions = None
120
121 def startup(self, data=None):
122 self.interface.commands.add(
123 Command(None, 'About ' + self.interface.name, group=toga.Group.APP),
124 Command(None, 'Preferences', group=toga.Group.APP),
125 # Quit should always be the last item, in a section on it's own
126 Command(lambda s: self.exit(), 'Quit ' + self.interface.name, shortcut='q', group=toga.Group.APP, section=sys.maxsize),
127 Command(None, 'Visit homepage', group=toga.Group.HELP)
128 )
129
130 self.interface.startup()
131
132 # Create the lookup table of menu items,
133 # then force the creation of the menus.
134 self._actions = {}
135 self.create_menus()
136 # self.interface.main_window._impl.create_toolbar()
137
138 def activate(self, data=None):
139 pass
140
141 def open_document(self, fileURL):
142 '''Add a new document to this app.'''
143 print("STUB: If you want to handle opening documents, implement App.open_document(fileURL)")
144
145 def create_menus(self):
146 # Only create the menu if the menu item index has been created.
147 if hasattr(self, '_actions'):
148 self._actions = {}
149 menubar = Gio.Menu()
150 label = None
151 submenu = None
152 section = None
153 for cmd in self.interface.commands:
154 if cmd == GROUP_BREAK:
155 if section:
156 submenu.append_section(None, section)
157
158 if label == '*':
159 self.native.set_app_menu(submenu)
160 else:
161 menubar.append_submenu(label, submenu)
162
163 label = None
164 submenu = None
165 section = None
166 elif cmd == SECTION_BREAK:
167 submenu.append_section(None, section)
168 section = None
169
170 else:
171 if submenu is None:
172 label = cmd.group.label
173 submenu = Gio.Menu()
174
175 if section is None:
176 section = Gio.Menu()
177
178 try:
179 action = self._actions[cmd]
180 except KeyError:
181 cmd_id = "command-%s" % id(cmd)
182 action = Gio.SimpleAction.new(cmd_id, None)
183 if cmd.action:
184 action.connect("activate", wrapped_handler(cmd, cmd.action))
185 cmd._widgets.append(action)
186 self._actions[cmd] = action
187 self.native.add_action(action)
188
189 cmd._impl._set_enabled(cmd.enabled)
190
191 item = Gio.MenuItem.new(cmd.label, 'app.' + cmd_id)
192 if cmd.shortcut:
193 item.set_attribute_value('accel', GLib.Variant('s', '<Primary>%s' % cmd.shortcut.upper()))
194
195 # item.set_attribute_value('accel', GLib.Variant(cmd.shortcut, '<Primary>%s' % cmd.shortcut.upper()))
196
197 section.append_item(item)
198
199 if section:
200 submenu.append_section(None, section)
201
202 if submenu:
203 if label == '*':
204 self.native.set_app_menu(submenu)
205 else:
206 menubar.append_submenu(label, submenu)
207
208 # Set the menu for the app.
209 self.native.set_menubar(menubar)
210
211 def main_loop(self):
212 # Modify signal handlers to make sure Ctrl-C is caught and handled.
213 signal.signal(signal.SIGINT, signal.SIG_DFL)
214
215 self.loop.run_forever(application=self.native)
216
217 def exit(self):
218 self.native.quit()
219
[end of src/gtk/toga_gtk/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/gtk/toga_gtk/app.py b/src/gtk/toga_gtk/app.py
--- a/src/gtk/toga_gtk/app.py
+++ b/src/gtk/toga_gtk/app.py
@@ -86,7 +86,7 @@
class MainWindow(Window):
_IMPL_CLASS = Gtk.ApplicationWindow
- def on_close(self, widget):
+ def on_close(self, widget, data):
pass
| {"golden_diff": "diff --git a/src/gtk/toga_gtk/app.py b/src/gtk/toga_gtk/app.py\n--- a/src/gtk/toga_gtk/app.py\n+++ b/src/gtk/toga_gtk/app.py\n@@ -86,7 +86,7 @@\n class MainWindow(Window):\n _IMPL_CLASS = Gtk.ApplicationWindow\n \n- def on_close(self, widget):\n+ def on_close(self, widget, data):\n pass\n", "issue": "GTK+: TypeError: on_close() takes 2 positional arguments but 3 were given\nWhen creating a new Window and then closing it a TypeError is created in Linux. Once you have created an App, calling the following will reproduce the error:\r\n \r\n window = toga.Window()\r\n window.app = app\r\n window.show()\r\n\r\nThen close the window and you get:\r\n\r\n TypeError: on_close() takes 2 positional arguments but 3 were given\n", "before_files": [{"content": "import asyncio\nimport os\nimport signal\nimport sys\n\ntry:\n import gi\nexcept ImportError:\n # app.py is the first module that will be imported when you import toga_gtk.\n #\n # If Gtk can't be imported, it may be because we're in a virtualenv,\n # and the system python libraries aren't visible. This can be fixed by\n # creating a symlink into the site-packages\n # Try creating a symlink to the system library location.\n # base_packages_dir is where the packages installed by the package manager\n # can be found.\n # gi_system_install_path is where gi can be found in the packages dir.\n # installer_command is the command the user can run to install gi.\n py_version = \"%d.%d\" % (sys.version_info.major, sys.version_info.minor)\n\n if sys.version_info.major == 3:\n if os.path.isdir('/usr/lib64/python%s/site-packages/' % (py_version,)):\n # Fedora\n base_packages_dir = '/usr/lib64/python%s/site-packages/' % (py_version,)\n gi_system_install_path = '/usr/lib64/python%s/site-packages/gi' % (py_version,)\n installer_command = 'dnf install pygobject3 python3-gobject'\n elif os.path.isdir('/usr/lib/python3/dist-packages/'):\n # Ubuntu, Debian\n base_packages_dir = '/usr/lib/python3/dist-packages/'\n gi_system_install_path = '/usr/local/lib/python3/dist-packages/gi'\n installer_command = 'apt-get install python3-gi'\n elif os.path.isdir('/usr/lib/python%s/site-packages/' % (py_version,)):\n # Arch\n base_packages_dir = '/usr/lib/python%s/site-packages/' % (py_version,)\n gi_system_install_path = '/usr/lib/python%s/site-packages/gi' % (py_version,)\n installer_command = 'pacman -S python-gobject'\n else:\n raise RuntimeError(\"Unable to locate your Python packages dir.\")\n else:\n raise RuntimeError(\"Toga requires Python 3.\")\n\n # Use the location of this package to guide us to\n # the location of the virtualenv.\n gi_symlink_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'gi')\n pygtkcompat_symlink_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'pygtkcompat')\n\n if gi_symlink_path == gi_system_install_path:\n # If we're not in a virtualenv, just raise the original import error.\n raise\n else:\n gi_path = os.path.join(base_packages_dir, 'gi')\n pygtkcompat_path = os.path.join(base_packages_dir, 'pygtkcompat')\n if os.path.exists(gi_path) and os.path.isdir(gi_path):\n # If we can identify the gi library, create a symlink to it.\n try:\n print(\"Creating symlink (%s & %s) to system GTK+ libraries...\" % (gi_symlink_path, pygtkcompat_symlink_path))\n os.symlink(gi_path, gi_symlink_path)\n os.symlink(pygtkcompat_path, pygtkcompat_symlink_path)\n\n # The call to os.symlink will return almost immediately,\n # but for some reason, it may not be fully flushed to\n # the file system. 
One way to fix this is to start\n # the process again. This call to os.execl restarts the\n # program with the same arguments, replacing the original\n # operating system process.\n os.execl(sys.executable, sys.executable, *sys.argv)\n except OSError:\n raise RuntimeError(\"Unable to automatically create symlink to system Python GTK+ bindings.\")\n else:\n raise RuntimeError(\"Unable to locate the Python GTK+ bindings. Have you run '%s'?\" % installer_command)\n\ngi.require_version('Gtk', '3.0')\nfrom gi.repository import Gtk, Gio, GLib\n\n\nfrom toga.command import GROUP_BREAK, SECTION_BREAK, Command, Group\n# from .command import Command, Group\nimport toga\nfrom .window import Window\nfrom toga import Icon\nfrom toga.utils import wrapped_handler\n\nimport gbulb\n\n\nclass MainWindow(Window):\n _IMPL_CLASS = Gtk.ApplicationWindow\n\n def on_close(self, widget):\n pass\n\n\nclass App:\n \"\"\"\n Todo:\n * Creation of Menus is not working.\n * Disabling of menu items is not working.\n * App Icon is not showing up\n \"\"\"\n def __init__(self, interface):\n self.interface = interface\n self.interface._impl = self\n\n gbulb.install(gtk=True)\n self.loop = asyncio.get_event_loop()\n\n self.create()\n\n def create(self):\n Icon.app_icon = Icon.load(self.interface.icon, default=Icon.TIBERIUS_ICON)\n # Stimulate the build of the app\n self.native = Gtk.Application(application_id=self.interface.app_id, flags=Gio.ApplicationFlags.FLAGS_NONE)\n\n # Connect the GTK signal that will cause app startup to occur\n self.native.connect('startup', self.startup)\n self.native.connect('activate', self.activate)\n # self.native.connect('shutdown', self.shutdown)\n\n self.actions = None\n\n def startup(self, data=None):\n self.interface.commands.add(\n Command(None, 'About ' + self.interface.name, group=toga.Group.APP),\n Command(None, 'Preferences', group=toga.Group.APP),\n # Quit should always be the last item, in a section on it's own\n Command(lambda s: self.exit(), 'Quit ' + self.interface.name, shortcut='q', group=toga.Group.APP, section=sys.maxsize),\n Command(None, 'Visit homepage', group=toga.Group.HELP)\n )\n\n self.interface.startup()\n\n # Create the lookup table of menu items,\n # then force the creation of the menus.\n self._actions = {}\n self.create_menus()\n # self.interface.main_window._impl.create_toolbar()\n\n def activate(self, data=None):\n pass\n\n def open_document(self, fileURL):\n '''Add a new document to this app.'''\n print(\"STUB: If you want to handle opening documents, implement App.open_document(fileURL)\")\n\n def create_menus(self):\n # Only create the menu if the menu item index has been created.\n if hasattr(self, '_actions'):\n self._actions = {}\n menubar = Gio.Menu()\n label = None\n submenu = None\n section = None\n for cmd in self.interface.commands:\n if cmd == GROUP_BREAK:\n if section:\n submenu.append_section(None, section)\n\n if label == '*':\n self.native.set_app_menu(submenu)\n else:\n menubar.append_submenu(label, submenu)\n\n label = None\n submenu = None\n section = None\n elif cmd == SECTION_BREAK:\n submenu.append_section(None, section)\n section = None\n\n else:\n if submenu is None:\n label = cmd.group.label\n submenu = Gio.Menu()\n\n if section is None:\n section = Gio.Menu()\n\n try:\n action = self._actions[cmd]\n except KeyError:\n cmd_id = \"command-%s\" % id(cmd)\n action = Gio.SimpleAction.new(cmd_id, None)\n if cmd.action:\n action.connect(\"activate\", wrapped_handler(cmd, cmd.action))\n cmd._widgets.append(action)\n self._actions[cmd] = action\n 
self.native.add_action(action)\n\n cmd._impl._set_enabled(cmd.enabled)\n\n item = Gio.MenuItem.new(cmd.label, 'app.' + cmd_id)\n if cmd.shortcut:\n item.set_attribute_value('accel', GLib.Variant('s', '<Primary>%s' % cmd.shortcut.upper()))\n\n # item.set_attribute_value('accel', GLib.Variant(cmd.shortcut, '<Primary>%s' % cmd.shortcut.upper()))\n\n section.append_item(item)\n\n if section:\n submenu.append_section(None, section)\n\n if submenu:\n if label == '*':\n self.native.set_app_menu(submenu)\n else:\n menubar.append_submenu(label, submenu)\n\n # Set the menu for the app.\n self.native.set_menubar(menubar)\n\n def main_loop(self):\n # Modify signal handlers to make sure Ctrl-C is caught and handled.\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n\n self.loop.run_forever(application=self.native)\n\n def exit(self):\n self.native.quit()\n", "path": "src/gtk/toga_gtk/app.py"}]} | 3,014 | 97 |
gh_patches_debug_32151 | rasdani/github-patches | git_diff | apache__airflow-31140 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Restrict allowed characters in connection ids
### Description
I bumped into a bug where a connection id was suffixed with a whitespace e.g. "myconn ". When referencing the connection id "myconn" (without whitespace), you get a connection not found error.
To avoid such human errors, I suggest restricting the characters allowed for connection ids.
Some suggestions:
- There's an `airflow.utils.helpers.validate_key` function for validating the DAG id. Probably a good idea to reuse this.
- I believe variable ids are also not validated, would be good to check those too.
### Use case/motivation
_No response_
### Related issues
_No response_
### Are you willing to submit a PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of airflow/www/validators.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 from __future__ import annotations
19
20 import json
21 from json import JSONDecodeError
22
23 from wtforms.validators import EqualTo, ValidationError
24
25
26 class GreaterEqualThan(EqualTo):
27 """Compares the values of two fields.
28
29 :param fieldname:
30 The name of the other field to compare to.
31 :param message:
32 Error message to raise in case of a validation error. Can be
33 interpolated with `%(other_label)s` and `%(other_name)s` to provide a
34 more helpful error.
35 """
36
37 def __call__(self, form, field):
38 try:
39 other = form[self.fieldname]
40 except KeyError:
41 raise ValidationError(field.gettext(f"Invalid field name '{self.fieldname}'."))
42
43 if field.data is None or other.data is None:
44 return
45
46 if field.data < other.data:
47 message_args = {
48 "other_label": hasattr(other, "label") and other.label.text or self.fieldname,
49 "other_name": self.fieldname,
50 }
51 message = self.message
52 if message is None:
53 message = field.gettext(
54 f"Field must be greater than or equal to {message_args['other_label']}."
55 )
56 else:
57 message = message % message_args
58
59 raise ValidationError(message)
60
61
62 class ValidJson:
63 """Validates data is valid JSON.
64
65 :param message:
66 Error message to raise in case of a validation error.
67 """
68
69 def __init__(self, message=None):
70 self.message = message
71
72 def __call__(self, form, field):
73 if field.data:
74 try:
75 json.loads(field.data)
76 except JSONDecodeError as ex:
77 message = self.message or f"JSON Validation Error: {ex}"
78 raise ValidationError(message=field.gettext(message.format(field.data)))
79
[end of airflow/www/validators.py]
[start of airflow/www/forms.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18 from __future__ import annotations
19
20 import json
21 import operator
22 from datetime import datetime as dt
23 from typing import Iterator
24
25 import pendulum
26 from flask_appbuilder.fieldwidgets import (
27 BS3PasswordFieldWidget,
28 BS3TextAreaFieldWidget,
29 BS3TextFieldWidget,
30 Select2Widget,
31 )
32 from flask_appbuilder.forms import DynamicForm
33 from flask_babel import lazy_gettext
34 from flask_wtf import FlaskForm
35 from wtforms import widgets
36 from wtforms.fields import Field, IntegerField, PasswordField, SelectField, StringField, TextAreaField
37 from wtforms.validators import InputRequired, Optional
38
39 from airflow.compat.functools import cache
40 from airflow.configuration import conf
41 from airflow.providers_manager import ProvidersManager
42 from airflow.utils import timezone
43 from airflow.utils.types import DagRunType
44 from airflow.www.widgets import (
45 AirflowDateTimePickerROWidget,
46 AirflowDateTimePickerWidget,
47 BS3TextAreaROWidget,
48 BS3TextFieldROWidget,
49 )
50
51
52 class DateTimeWithTimezoneField(Field):
53 """A text field which stores a `datetime.datetime` matching a format."""
54
55 widget = widgets.TextInput()
56
57 def __init__(self, label=None, validators=None, datetime_format="%Y-%m-%d %H:%M:%S%Z", **kwargs):
58 super().__init__(label, validators, **kwargs)
59 self.format = datetime_format
60 self.data = None
61
62 def _value(self):
63 if self.raw_data:
64 return " ".join(self.raw_data)
65 if self.data:
66 return self.data.strftime(self.format)
67 return ""
68
69 def process_formdata(self, valuelist):
70 if not valuelist:
71 return
72 date_str = " ".join(valuelist)
73 try:
74 # Check if the datetime string is in the format without timezone, if so convert it to the
75 # default timezone
76 if len(date_str) == 19:
77 parsed_datetime = dt.strptime(date_str, "%Y-%m-%d %H:%M:%S")
78 default_timezone = self._get_default_timezone()
79 self.data = default_timezone.convert(parsed_datetime)
80 else:
81 self.data = pendulum.parse(date_str)
82 except ValueError:
83 self.data = None
84 raise ValueError(self.gettext("Not a valid datetime value"))
85
86 def _get_default_timezone(self):
87 current_timezone = conf.get("core", "default_timezone")
88 if current_timezone == "system":
89 default_timezone = pendulum.local_timezone()
90 else:
91 default_timezone = pendulum.timezone(current_timezone)
92 return default_timezone
93
94
95 class DateTimeForm(FlaskForm):
96 """Date filter form needed for task views"""
97
98 execution_date = DateTimeWithTimezoneField("Logical date", widget=AirflowDateTimePickerWidget())
99
100
101 class DateTimeWithNumRunsForm(FlaskForm):
102 """
103 Date time and number of runs form for tree view, task duration
104 and landing times
105 """
106
107 base_date = DateTimeWithTimezoneField(
108 "Anchor date", widget=AirflowDateTimePickerWidget(), default=timezone.utcnow()
109 )
110 num_runs = SelectField(
111 "Number of runs",
112 default=25,
113 choices=(
114 (5, "5"),
115 (25, "25"),
116 (50, "50"),
117 (100, "100"),
118 (365, "365"),
119 ),
120 )
121
122
123 class DateTimeWithNumRunsWithDagRunsForm(DateTimeWithNumRunsForm):
124 """Date time and number of runs and dag runs form for graph and gantt view"""
125
126 execution_date = SelectField("DAG run")
127
128
129 class DagRunEditForm(DynamicForm):
130 """Form for editing DAG Run.
131
132 We don't actually want to allow editing, so everything is read-only here.
133 """
134
135 dag_id = StringField(lazy_gettext("Dag Id"), widget=BS3TextFieldROWidget())
136 start_date = DateTimeWithTimezoneField(lazy_gettext("Start Date"), widget=AirflowDateTimePickerROWidget())
137 end_date = DateTimeWithTimezoneField(lazy_gettext("End Date"), widget=AirflowDateTimePickerROWidget())
138 run_id = StringField(lazy_gettext("Run Id"), widget=BS3TextFieldROWidget())
139 state = StringField(lazy_gettext("State"), widget=BS3TextFieldROWidget())
140 execution_date = DateTimeWithTimezoneField(
141 lazy_gettext("Logical Date"),
142 widget=AirflowDateTimePickerROWidget(),
143 )
144 conf = TextAreaField(lazy_gettext("Conf"), widget=BS3TextAreaROWidget())
145 note = TextAreaField(lazy_gettext("User Note"), widget=BS3TextAreaFieldWidget())
146
147 def populate_obj(self, item):
148 """Populates the attributes of the passed obj with data from the form's fields."""
149 super().populate_obj(item)
150 item.run_type = DagRunType.from_run_id(item.run_id)
151 if item.conf:
152 item.conf = json.loads(item.conf)
153
154
155 class TaskInstanceEditForm(DynamicForm):
156 """Form for editing TaskInstance"""
157
158 dag_id = StringField(lazy_gettext("Dag Id"), validators=[InputRequired()], widget=BS3TextFieldROWidget())
159 task_id = StringField(
160 lazy_gettext("Task Id"), validators=[InputRequired()], widget=BS3TextFieldROWidget()
161 )
162 start_date = DateTimeWithTimezoneField(lazy_gettext("Start Date"), widget=AirflowDateTimePickerROWidget())
163 end_date = DateTimeWithTimezoneField(lazy_gettext("End Date"), widget=AirflowDateTimePickerROWidget())
164 state = SelectField(
165 lazy_gettext("State"),
166 choices=(
167 ("success", "success"),
168 ("running", "running"),
169 ("failed", "failed"),
170 ("up_for_retry", "up_for_retry"),
171 ),
172 widget=Select2Widget(),
173 validators=[InputRequired()],
174 )
175 execution_date = DateTimeWithTimezoneField(
176 lazy_gettext("Logical Date"),
177 widget=AirflowDateTimePickerROWidget(),
178 validators=[InputRequired()],
179 )
180 note = TextAreaField(lazy_gettext("User Note"), widget=BS3TextAreaFieldWidget())
181
182
183 @cache
184 def create_connection_form_class() -> type[DynamicForm]:
185 """Create a form class for editing and adding Connection.
186
187 This class is created dynamically because it relies heavily on run-time
188 provider discovery, which slows down webserver startup a lot.
189 By creating the class at runtime, we can delay loading the providers until
190 when the connection form is first used, which may as well be never for a
191 short-lived server.
192 """
193 providers_manager = ProvidersManager()
194
195 def _iter_connection_types() -> Iterator[tuple[str, str]]:
196 """List available connection types."""
197 yield ("email", "Email")
198 yield ("fs", "File (path)")
199 yield ("generic", "Generic")
200 yield ("mesos_framework-id", "Mesos Framework ID")
201 for connection_type, provider_info in providers_manager.hooks.items():
202 if provider_info:
203 yield (connection_type, provider_info.hook_name)
204
205 class ConnectionForm(DynamicForm):
206 conn_id = StringField(
207 lazy_gettext("Connection Id"),
208 validators=[InputRequired()],
209 widget=BS3TextFieldWidget(),
210 )
211 conn_type = SelectField(
212 lazy_gettext("Connection Type"),
213 choices=sorted(_iter_connection_types(), key=operator.itemgetter(1)),
214 widget=Select2Widget(),
215 validators=[InputRequired()],
216 description=(
217 "Connection Type missing? Make sure you've installed the "
218 "corresponding Airflow Provider Package."
219 ),
220 )
221 description = StringField(lazy_gettext("Description"), widget=BS3TextAreaFieldWidget())
222 host = StringField(lazy_gettext("Host"), widget=BS3TextFieldWidget())
223 schema = StringField(lazy_gettext("Schema"), widget=BS3TextFieldWidget())
224 login = StringField(lazy_gettext("Login"), widget=BS3TextFieldWidget())
225 password = PasswordField(lazy_gettext("Password"), widget=BS3PasswordFieldWidget())
226 port = IntegerField(lazy_gettext("Port"), validators=[Optional()], widget=BS3TextFieldWidget())
227 extra = TextAreaField(lazy_gettext("Extra"), widget=BS3TextAreaFieldWidget())
228
229 for key, value in providers_manager.connection_form_widgets.items():
230 setattr(ConnectionForm, key, value.field)
231
232 return ConnectionForm
233
[end of airflow/www/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/airflow/www/forms.py b/airflow/www/forms.py
--- a/airflow/www/forms.py
+++ b/airflow/www/forms.py
@@ -41,6 +41,7 @@
from airflow.providers_manager import ProvidersManager
from airflow.utils import timezone
from airflow.utils.types import DagRunType
+from airflow.www.validators import ValidKey
from airflow.www.widgets import (
AirflowDateTimePickerROWidget,
AirflowDateTimePickerWidget,
@@ -205,7 +206,7 @@
class ConnectionForm(DynamicForm):
conn_id = StringField(
lazy_gettext("Connection Id"),
- validators=[InputRequired()],
+ validators=[InputRequired(), ValidKey()],
widget=BS3TextFieldWidget(),
)
conn_type = SelectField(
diff --git a/airflow/www/validators.py b/airflow/www/validators.py
--- a/airflow/www/validators.py
+++ b/airflow/www/validators.py
@@ -22,6 +22,8 @@
from wtforms.validators import EqualTo, ValidationError
+from airflow.utils import helpers
+
class GreaterEqualThan(EqualTo):
"""Compares the values of two fields.
@@ -76,3 +78,22 @@
except JSONDecodeError as ex:
message = self.message or f"JSON Validation Error: {ex}"
raise ValidationError(message=field.gettext(message.format(field.data)))
+
+
+class ValidKey:
+ """
+ Validates values that will be used as keys
+
+ :param max_length:
+ The maximum length of the given key
+ """
+
+ def __init__(self, max_length=200):
+ self.max_length = max_length
+
+ def __call__(self, form, field):
+ if field.data:
+ try:
+ helpers.validate_key(field.data, self.max_length)
+ except Exception as e:
+ raise ValidationError(str(e))
| {"golden_diff": "diff --git a/airflow/www/forms.py b/airflow/www/forms.py\n--- a/airflow/www/forms.py\n+++ b/airflow/www/forms.py\n@@ -41,6 +41,7 @@\n from airflow.providers_manager import ProvidersManager\n from airflow.utils import timezone\n from airflow.utils.types import DagRunType\n+from airflow.www.validators import ValidKey\n from airflow.www.widgets import (\n AirflowDateTimePickerROWidget,\n AirflowDateTimePickerWidget,\n@@ -205,7 +206,7 @@\n class ConnectionForm(DynamicForm):\n conn_id = StringField(\n lazy_gettext(\"Connection Id\"),\n- validators=[InputRequired()],\n+ validators=[InputRequired(), ValidKey()],\n widget=BS3TextFieldWidget(),\n )\n conn_type = SelectField(\ndiff --git a/airflow/www/validators.py b/airflow/www/validators.py\n--- a/airflow/www/validators.py\n+++ b/airflow/www/validators.py\n@@ -22,6 +22,8 @@\n \n from wtforms.validators import EqualTo, ValidationError\n \n+from airflow.utils import helpers\n+\n \n class GreaterEqualThan(EqualTo):\n \"\"\"Compares the values of two fields.\n@@ -76,3 +78,22 @@\n except JSONDecodeError as ex:\n message = self.message or f\"JSON Validation Error: {ex}\"\n raise ValidationError(message=field.gettext(message.format(field.data)))\n+\n+\n+class ValidKey:\n+ \"\"\"\n+ Validates values that will be used as keys\n+\n+ :param max_length:\n+ The maximum length of the given key\n+ \"\"\"\n+\n+ def __init__(self, max_length=200):\n+ self.max_length = max_length\n+\n+ def __call__(self, form, field):\n+ if field.data:\n+ try:\n+ helpers.validate_key(field.data, self.max_length)\n+ except Exception as e:\n+ raise ValidationError(str(e))\n", "issue": "Restrict allowed characters in connection ids\n### Description\r\n\r\nI bumped into a bug where a connection id was suffixed with a whitespace e.g. \"myconn \". When referencing the connection id \"myconn\" (without whitespace), you get a connection not found error.\r\n\r\nTo avoid such human errors, I suggest restricting the characters allowed for connection ids.\r\n\r\nSome suggestions:\r\n- There's an `airflow.utils.helpers.validate_key` function for validating the DAG id. Probably a good idea to reuse this.\r\n- I believe variable ids are also not validated, would be good to check those too.\r\n\r\n### Use case/motivation\r\n\r\n_No response_\r\n\r\n### Related issues\r\n\r\n_No response_\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [X] Yes I am willing to submit a PR!\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\r\n\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\nfrom __future__ import annotations\n\nimport json\nfrom json import JSONDecodeError\n\nfrom wtforms.validators import EqualTo, ValidationError\n\n\nclass GreaterEqualThan(EqualTo):\n \"\"\"Compares the values of two fields.\n\n :param fieldname:\n The name of the other field to compare to.\n :param message:\n Error message to raise in case of a validation error. Can be\n interpolated with `%(other_label)s` and `%(other_name)s` to provide a\n more helpful error.\n \"\"\"\n\n def __call__(self, form, field):\n try:\n other = form[self.fieldname]\n except KeyError:\n raise ValidationError(field.gettext(f\"Invalid field name '{self.fieldname}'.\"))\n\n if field.data is None or other.data is None:\n return\n\n if field.data < other.data:\n message_args = {\n \"other_label\": hasattr(other, \"label\") and other.label.text or self.fieldname,\n \"other_name\": self.fieldname,\n }\n message = self.message\n if message is None:\n message = field.gettext(\n f\"Field must be greater than or equal to {message_args['other_label']}.\"\n )\n else:\n message = message % message_args\n\n raise ValidationError(message)\n\n\nclass ValidJson:\n \"\"\"Validates data is valid JSON.\n\n :param message:\n Error message to raise in case of a validation error.\n \"\"\"\n\n def __init__(self, message=None):\n self.message = message\n\n def __call__(self, form, field):\n if field.data:\n try:\n json.loads(field.data)\n except JSONDecodeError as ex:\n message = self.message or f\"JSON Validation Error: {ex}\"\n raise ValidationError(message=field.gettext(message.format(field.data)))\n", "path": "airflow/www/validators.py"}, {"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\nfrom __future__ import annotations\n\nimport json\nimport operator\nfrom datetime import datetime as dt\nfrom typing import Iterator\n\nimport pendulum\nfrom flask_appbuilder.fieldwidgets import (\n BS3PasswordFieldWidget,\n BS3TextAreaFieldWidget,\n BS3TextFieldWidget,\n Select2Widget,\n)\nfrom flask_appbuilder.forms import DynamicForm\nfrom flask_babel import lazy_gettext\nfrom flask_wtf import FlaskForm\nfrom wtforms import widgets\nfrom wtforms.fields import Field, IntegerField, PasswordField, SelectField, StringField, TextAreaField\nfrom wtforms.validators import InputRequired, Optional\n\nfrom airflow.compat.functools import cache\nfrom airflow.configuration import conf\nfrom airflow.providers_manager import ProvidersManager\nfrom airflow.utils import timezone\nfrom airflow.utils.types import DagRunType\nfrom airflow.www.widgets import (\n AirflowDateTimePickerROWidget,\n AirflowDateTimePickerWidget,\n BS3TextAreaROWidget,\n BS3TextFieldROWidget,\n)\n\n\nclass DateTimeWithTimezoneField(Field):\n \"\"\"A text field which stores a `datetime.datetime` matching a format.\"\"\"\n\n widget = widgets.TextInput()\n\n def __init__(self, label=None, validators=None, datetime_format=\"%Y-%m-%d %H:%M:%S%Z\", **kwargs):\n super().__init__(label, validators, **kwargs)\n self.format = datetime_format\n self.data = None\n\n def _value(self):\n if self.raw_data:\n return \" \".join(self.raw_data)\n if self.data:\n return self.data.strftime(self.format)\n return \"\"\n\n def process_formdata(self, valuelist):\n if not valuelist:\n return\n date_str = \" \".join(valuelist)\n try:\n # Check if the datetime string is in the format without timezone, if so convert it to the\n # default timezone\n if len(date_str) == 19:\n parsed_datetime = dt.strptime(date_str, \"%Y-%m-%d %H:%M:%S\")\n default_timezone = self._get_default_timezone()\n self.data = default_timezone.convert(parsed_datetime)\n else:\n self.data = pendulum.parse(date_str)\n except ValueError:\n self.data = None\n raise ValueError(self.gettext(\"Not a valid datetime value\"))\n\n def _get_default_timezone(self):\n current_timezone = conf.get(\"core\", \"default_timezone\")\n if current_timezone == \"system\":\n default_timezone = pendulum.local_timezone()\n else:\n default_timezone = pendulum.timezone(current_timezone)\n return default_timezone\n\n\nclass DateTimeForm(FlaskForm):\n \"\"\"Date filter form needed for task views\"\"\"\n\n execution_date = DateTimeWithTimezoneField(\"Logical date\", widget=AirflowDateTimePickerWidget())\n\n\nclass DateTimeWithNumRunsForm(FlaskForm):\n \"\"\"\n Date time and number of runs form for tree view, task duration\n and landing times\n \"\"\"\n\n base_date = DateTimeWithTimezoneField(\n \"Anchor date\", widget=AirflowDateTimePickerWidget(), default=timezone.utcnow()\n )\n num_runs = SelectField(\n \"Number of runs\",\n default=25,\n choices=(\n (5, \"5\"),\n (25, \"25\"),\n (50, \"50\"),\n (100, \"100\"),\n (365, \"365\"),\n ),\n )\n\n\nclass DateTimeWithNumRunsWithDagRunsForm(DateTimeWithNumRunsForm):\n \"\"\"Date time and number of runs and dag runs form for graph and gantt view\"\"\"\n\n execution_date = SelectField(\"DAG run\")\n\n\nclass DagRunEditForm(DynamicForm):\n \"\"\"Form for editing DAG Run.\n\n We don't actually want to allow editing, so everything is read-only here.\n \"\"\"\n\n dag_id = StringField(lazy_gettext(\"Dag Id\"), widget=BS3TextFieldROWidget())\n start_date = 
DateTimeWithTimezoneField(lazy_gettext(\"Start Date\"), widget=AirflowDateTimePickerROWidget())\n end_date = DateTimeWithTimezoneField(lazy_gettext(\"End Date\"), widget=AirflowDateTimePickerROWidget())\n run_id = StringField(lazy_gettext(\"Run Id\"), widget=BS3TextFieldROWidget())\n state = StringField(lazy_gettext(\"State\"), widget=BS3TextFieldROWidget())\n execution_date = DateTimeWithTimezoneField(\n lazy_gettext(\"Logical Date\"),\n widget=AirflowDateTimePickerROWidget(),\n )\n conf = TextAreaField(lazy_gettext(\"Conf\"), widget=BS3TextAreaROWidget())\n note = TextAreaField(lazy_gettext(\"User Note\"), widget=BS3TextAreaFieldWidget())\n\n def populate_obj(self, item):\n \"\"\"Populates the attributes of the passed obj with data from the form's fields.\"\"\"\n super().populate_obj(item)\n item.run_type = DagRunType.from_run_id(item.run_id)\n if item.conf:\n item.conf = json.loads(item.conf)\n\n\nclass TaskInstanceEditForm(DynamicForm):\n \"\"\"Form for editing TaskInstance\"\"\"\n\n dag_id = StringField(lazy_gettext(\"Dag Id\"), validators=[InputRequired()], widget=BS3TextFieldROWidget())\n task_id = StringField(\n lazy_gettext(\"Task Id\"), validators=[InputRequired()], widget=BS3TextFieldROWidget()\n )\n start_date = DateTimeWithTimezoneField(lazy_gettext(\"Start Date\"), widget=AirflowDateTimePickerROWidget())\n end_date = DateTimeWithTimezoneField(lazy_gettext(\"End Date\"), widget=AirflowDateTimePickerROWidget())\n state = SelectField(\n lazy_gettext(\"State\"),\n choices=(\n (\"success\", \"success\"),\n (\"running\", \"running\"),\n (\"failed\", \"failed\"),\n (\"up_for_retry\", \"up_for_retry\"),\n ),\n widget=Select2Widget(),\n validators=[InputRequired()],\n )\n execution_date = DateTimeWithTimezoneField(\n lazy_gettext(\"Logical Date\"),\n widget=AirflowDateTimePickerROWidget(),\n validators=[InputRequired()],\n )\n note = TextAreaField(lazy_gettext(\"User Note\"), widget=BS3TextAreaFieldWidget())\n\n\n@cache\ndef create_connection_form_class() -> type[DynamicForm]:\n \"\"\"Create a form class for editing and adding Connection.\n\n This class is created dynamically because it relies heavily on run-time\n provider discovery, which slows down webserver startup a lot.\n By creating the class at runtime, we can delay loading the providers until\n when the connection form is first used, which may as well be never for a\n short-lived server.\n \"\"\"\n providers_manager = ProvidersManager()\n\n def _iter_connection_types() -> Iterator[tuple[str, str]]:\n \"\"\"List available connection types.\"\"\"\n yield (\"email\", \"Email\")\n yield (\"fs\", \"File (path)\")\n yield (\"generic\", \"Generic\")\n yield (\"mesos_framework-id\", \"Mesos Framework ID\")\n for connection_type, provider_info in providers_manager.hooks.items():\n if provider_info:\n yield (connection_type, provider_info.hook_name)\n\n class ConnectionForm(DynamicForm):\n conn_id = StringField(\n lazy_gettext(\"Connection Id\"),\n validators=[InputRequired()],\n widget=BS3TextFieldWidget(),\n )\n conn_type = SelectField(\n lazy_gettext(\"Connection Type\"),\n choices=sorted(_iter_connection_types(), key=operator.itemgetter(1)),\n widget=Select2Widget(),\n validators=[InputRequired()],\n description=(\n \"Connection Type missing? 
Make sure you've installed the \"\n \"corresponding Airflow Provider Package.\"\n ),\n )\n description = StringField(lazy_gettext(\"Description\"), widget=BS3TextAreaFieldWidget())\n host = StringField(lazy_gettext(\"Host\"), widget=BS3TextFieldWidget())\n schema = StringField(lazy_gettext(\"Schema\"), widget=BS3TextFieldWidget())\n login = StringField(lazy_gettext(\"Login\"), widget=BS3TextFieldWidget())\n password = PasswordField(lazy_gettext(\"Password\"), widget=BS3PasswordFieldWidget())\n port = IntegerField(lazy_gettext(\"Port\"), validators=[Optional()], widget=BS3TextFieldWidget())\n extra = TextAreaField(lazy_gettext(\"Extra\"), widget=BS3TextAreaFieldWidget())\n\n for key, value in providers_manager.connection_form_widgets.items():\n setattr(ConnectionForm, key, value.field)\n\n return ConnectionForm\n", "path": "airflow/www/forms.py"}]} | 4,015 | 425 |
gh_patches_debug_20585 | rasdani/github-patches | git_diff | e-valuation__EvaP-794 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deal with update.sh
update.sh is out of date. We can either
- remove it
- update it
- replace it with something else
having a script would be pretty cool to document what needs to be done when updating a production server.
maybe this can go into a management command.
an idea for additional automation would be a git post-commit-hook that checks out the release branch and updates everything when someone pushes to the production server via ssh. logs of the update could be sent via email to the admins.
Deal with update.sh
update.sh is out of date. We can either
- remove it
- update it
- replace it with something else
having a script would be pretty cool to document what needs to be done when updating a production server.
maybe this can go into a management command.
an idea for additional automation would be a git post-commit-hook that checks out the release branch and updates everything when someone pushes to the production server via ssh. logs of the update could be sent via email to the admins.
</issue>
<code>
[start of evap/evaluation/management/commands/refresh_results_cache.py]
1 from django.core.management.base import BaseCommand
2 from django.core.cache import cache
3
4 from evap.evaluation.models import Course
5 from evap.evaluation.tools import calculate_results
6
7
8 class Command(BaseCommand):
9 args = ''
10 help = 'Clears the cache and pre-warms it with the results of all courses'
11
12 def handle(self, *args, **options):
13 self.stdout.write("Clearing cache...")
14 cache.clear()
15
16 self.stdout.write("Calculating results for all courses...")
17 for course in Course.objects.all():
18 calculate_results(course)
19
20 self.stdout.write("Done with updating cache.")
21
[end of evap/evaluation/management/commands/refresh_results_cache.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evap/evaluation/management/commands/refresh_results_cache.py b/evap/evaluation/management/commands/refresh_results_cache.py
--- a/evap/evaluation/management/commands/refresh_results_cache.py
+++ b/evap/evaluation/management/commands/refresh_results_cache.py
@@ -1,4 +1,5 @@
from django.core.management.base import BaseCommand
+from django.core.serializers.base import ProgressBar
from django.core.cache import cache
from evap.evaluation.models import Course
@@ -12,9 +13,15 @@
def handle(self, *args, **options):
self.stdout.write("Clearing cache...")
cache.clear()
+ total_count = Course.objects.count()
self.stdout.write("Calculating results for all courses...")
- for course in Course.objects.all():
+
+ self.stdout.ending = None
+ progress_bar = ProgressBar(self.stdout, total_count)
+
+ for counter, course in enumerate(Course.objects.all()):
+ progress_bar.update(counter + 1)
calculate_results(course)
- self.stdout.write("Done with updating cache.")
+ self.stdout.write("Done with updating cache.\n")
| {"golden_diff": "diff --git a/evap/evaluation/management/commands/refresh_results_cache.py b/evap/evaluation/management/commands/refresh_results_cache.py\n--- a/evap/evaluation/management/commands/refresh_results_cache.py\n+++ b/evap/evaluation/management/commands/refresh_results_cache.py\n@@ -1,4 +1,5 @@\n from django.core.management.base import BaseCommand\n+from django.core.serializers.base import ProgressBar\n from django.core.cache import cache\n \n from evap.evaluation.models import Course\n@@ -12,9 +13,15 @@\n def handle(self, *args, **options):\n self.stdout.write(\"Clearing cache...\")\n cache.clear()\n+ total_count = Course.objects.count()\n \n self.stdout.write(\"Calculating results for all courses...\")\n- for course in Course.objects.all():\n+\n+ self.stdout.ending = None\n+ progress_bar = ProgressBar(self.stdout, total_count)\n+\n+ for counter, course in enumerate(Course.objects.all()):\n+ progress_bar.update(counter + 1)\n calculate_results(course)\n \n- self.stdout.write(\"Done with updating cache.\")\n+ self.stdout.write(\"Done with updating cache.\\n\")\n", "issue": "Deal with update.sh\nupdate.sh is out of date. We can either\n- remove it\n- update it\n- replace it with something else\n\nhaving a script would be pretty cool to document what needs to be done when updating a production server.\n\nmaybe this can go into a management command.\n\nan idea for additional automation would be a git post-commit-hook that checks out the release branch and updates everything when someone pushes to the production server via ssh. logs of the update could be sent via email to the admins.\n\nDeal with update.sh\nupdate.sh is out of date. We can either\n- remove it\n- update it\n- replace it with something else\n\nhaving a script would be pretty cool to document what needs to be done when updating a production server.\n\nmaybe this can go into a management command.\n\nan idea for additional automation would be a git post-commit-hook that checks out the release branch and updates everything when someone pushes to the production server via ssh. logs of the update could be sent via email to the admins.\n\n", "before_files": [{"content": "from django.core.management.base import BaseCommand\nfrom django.core.cache import cache\n\nfrom evap.evaluation.models import Course\nfrom evap.evaluation.tools import calculate_results\n\n\nclass Command(BaseCommand):\n args = ''\n help = 'Clears the cache and pre-warms it with the results of all courses'\n\n def handle(self, *args, **options):\n self.stdout.write(\"Clearing cache...\")\n cache.clear()\n\n self.stdout.write(\"Calculating results for all courses...\")\n for course in Course.objects.all():\n calculate_results(course)\n\n self.stdout.write(\"Done with updating cache.\")\n", "path": "evap/evaluation/management/commands/refresh_results_cache.py"}]} | 921 | 257 |
gh_patches_debug_19815 | rasdani/github-patches | git_diff | e2nIEE__pandapower-870 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in rundcpp failed to initialize result table
```python
import pandapower as pp
import pandapower.networks as nw
net = nw.case57()
pp.rundcpp(net)
net.pop("res_gen")
# Problem by executing this
pp.rundcpp(net)
# No problem doing this
pp.runpp(net)
```
Can anyone recreate this? I tried pp 2.2.2 and pp 2.3.0.
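
Until this is fixed, a possible workaround sketch (not verified) is to rebuild the result tables by hand, assuming `init_results` from `pandapower/results.py` below can be called directly:

```python
import pandapower as pp
import pandapower.networks as nw
from pandapower.results import init_results  # assumption: callable directly from this module

net = nw.case57()
pp.rundcpp(net)
net.pop("res_gen")

# re-create all result tables (including the dropped res_gen) before the next run
init_results(net, mode="pf")
pp.rundcpp(net)
```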
</issue>
<code>
[start of pandapower/results.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics
4 # and Energy System Technology (IEE), Kassel. All rights reserved.
5
6
7 import numpy as np
8 import pandas as pd
9
10 from pandapower.results_branch import _get_branch_results, _get_branch_results_3ph
11 from pandapower.results_bus import _get_bus_results, _set_buses_out_of_service, \
12 _get_shunt_results, _get_p_q_results, _get_bus_v_results, _get_bus_v_results_3ph, _get_p_q_results_3ph, \
13 _get_bus_results_3ph
14 from pandapower.results_gen import _get_gen_results, _get_gen_results_3ph
15
16 suffix_mode = {"sc": "sc", "se": "est", "pf_3ph": "3ph"}
17
18
19 def _extract_results(net, ppc):
20 _set_buses_out_of_service(ppc)
21 bus_lookup_aranged = _get_aranged_lookup(net)
22 _get_bus_v_results(net, ppc)
23 bus_pq = _get_p_q_results(net, ppc, bus_lookup_aranged)
24 _get_shunt_results(net, ppc, bus_lookup_aranged, bus_pq)
25 _get_branch_results(net, ppc, bus_lookup_aranged, bus_pq)
26 _get_gen_results(net, ppc, bus_lookup_aranged, bus_pq)
27 _get_bus_results(net, ppc, bus_pq)
28 if net._options["mode"] == "opf":
29 _get_costs(net, ppc)
30
31
32 def _extract_results_3ph(net, ppc0, ppc1, ppc2):
33 # reset_results(net, False)
34 _set_buses_out_of_service(ppc0)
35 _set_buses_out_of_service(ppc1)
36 _set_buses_out_of_service(ppc2)
37 bus_lookup_aranged = _get_aranged_lookup(net)
38
39 _get_bus_v_results_3ph(net, ppc0, ppc1, ppc2)
40 bus_pq = _get_p_q_results_3ph(net, bus_lookup_aranged)
41 # _get_shunt_results(net, ppc, bus_lookup_aranged, bus_pq)
42 _get_branch_results_3ph(net, ppc0, ppc1, ppc2, bus_lookup_aranged, bus_pq)
43 _get_gen_results_3ph(net, ppc0, ppc1, ppc2, bus_lookup_aranged, bus_pq)
44 _get_bus_results_3ph(net, bus_pq)
45
46
47 def _extract_results_se(net, ppc):
48 _set_buses_out_of_service(ppc)
49 bus_lookup_aranged = _get_aranged_lookup(net)
50 _get_bus_v_results(net, ppc, suffix="_est")
51 bus_pq = np.zeros(shape=(len(net["bus"].index), 2), dtype=np.float)
52 _get_branch_results(net, ppc, bus_lookup_aranged, bus_pq, suffix="_est")
53
54
55 def _get_costs(net, ppc):
56 net.res_cost = ppc['obj']
57
58
59 def _get_aranged_lookup(net):
60 # generate bus_lookup net -> consecutive ordering
61 maxBus = max(net["bus"].index.values)
62 bus_lookup_aranged = -np.ones(maxBus + 1, dtype=int)
63 bus_lookup_aranged[net["bus"].index.values] = np.arange(len(net["bus"].index.values))
64
65 return bus_lookup_aranged
66
67
68 def verify_results(net, mode="pf"):
69 elements = get_relevant_elements(mode)
70 suffix = suffix_mode.get(mode, None)
71 for element in elements:
72 res_element, res_empty_element = get_result_tables(element, suffix)
73 if len(net[element]) != len(net[res_element]):
74 init_element(net, element)
75 if element == "bus":
76 net._options["init_vm_pu"] = "auto"
77 net._options["init_va_degree"] = "auto"
78
79
80 def get_result_tables(element, suffix=None):
81 res_element = "res_" + element
82 res_element_with_suffix = res_element if suffix is None else res_element + "_%s" % suffix
83
84 if suffix == suffix_mode.get("se", None):
85 # State estimation used default result table
86 return res_element_with_suffix, "_empty_%s" % res_element
87 else:
88 return res_element_with_suffix, "_empty_%s" % res_element_with_suffix
89
90
91 def empty_res_element(net, element, suffix=None):
92 res_element, res_empty_element = get_result_tables(element, suffix)
93 if res_empty_element in net:
94 net[res_element] = net[res_empty_element].copy()
95 else:
96 net[res_element] = pd.DataFrame()
97
98
99 def init_element(net, element, suffix=None):
100 res_element, res_empty_element = get_result_tables(element, suffix)
101 index = net[element].index
102 if len(index):
103 # init empty dataframe
104 if res_empty_element in net:
105 columns = net[res_empty_element].columns
106 net[res_element] = pd.DataFrame(np.nan, index=index,
107 columns=columns, dtype='float')
108 else:
109 net[res_element] = pd.DataFrame(index=index, dtype='float')
110 else:
111 empty_res_element(net, element, suffix)
112
113
114 def get_relevant_elements(mode="pf"):
115 if mode == "pf" or mode == "opf":
116 return ["bus", "line", "trafo", "trafo3w", "impedance", "ext_grid",
117 "load", "sgen", "storage", "shunt", "gen", "ward", "xward",
118 "dcline"]
119 elif mode == "sc":
120 return ["bus", "line", "trafo", "trafo3w", "ext_grid", "gen", "sgen"]
121 elif mode == "se":
122 return ["bus", "line", "trafo", "trafo3w"]
123 elif mode == "pf_3ph":
124 return ["bus", "line", "trafo", "ext_grid", "shunt",
125 "load", "sgen", "storage", "asymmetric_load", "asymmetric_sgen"]
126
127
128 def init_results(net, mode="pf"):
129 elements = get_relevant_elements(mode)
130 suffix = suffix_mode.get(mode, None)
131 for element in elements:
132 init_element(net, element, suffix)
133
134
135 def reset_results(net, mode="pf"):
136 elements = get_relevant_elements(mode)
137 suffix = suffix_mode.get(mode, None)
138 for element in elements:
139 empty_res_element(net, element, suffix)
140
141
142 def _ppci_bus_to_ppc(result, ppc):
143 # result is the ppci (ppc without out of service buses)
144 # busses are sorted (REF, PV, PQ, NONE) -> results are the first 3 types
145 n_buses, bus_cols = np.shape(ppc['bus'])
146 n_rows_result, bus_cols_result = np.shape(result['bus'])
147 # create matrix of proper size
148 updated_bus = np.empty((n_buses, bus_cols_result))
149 # fill in results (first 3 types)
150 updated_bus[:n_rows_result, :] = result['bus']
151 if n_buses > n_rows_result:
152 # keep rows for busses of type NONE
153 updated_bus[n_rows_result:, :bus_cols] = ppc['bus'][n_rows_result:, :]
154 ppc['bus'] = updated_bus
155
156
157 def _ppci_branch_to_ppc(result, ppc):
158 # in service branches and gens are taken from 'internal'
159 branch_cols = np.shape(ppc['branch'])[1]
160 ppc['branch'][result["internal"]['branch_is'], :branch_cols] = result['branch'][:, :branch_cols]
161
162
163 def _ppci_gen_to_ppc(result, ppc):
164 gen_cols = np.shape(ppc['gen'])[1]
165 ppc['gen'][result["internal"]['gen_is'], :gen_cols] = result['gen'][:, :gen_cols]
166
167
168 def _ppci_other_to_ppc(result, ppc, mode):
169 ppc['internal'] = result['internal']
170
171 if mode != "sc" and mode != "se":
172 ppc['success'] = result['success']
173 ppc['et'] = result['et']
174
175 if mode == 'opf':
176 ppc['obj'] = result['f']
177 ppc['internal_gencost'] = result['gencost']
178
179 if "iterations" in result:
180 ppc["iterations"] = result["iterations"]
181
182
183 def _copy_results_ppci_to_ppc(result, ppc, mode):
184 """
185 result contains results for all in service elements
186 ppc gets the results for in- and out of service elements
187 -> results must be copied
188
189 ppc and ppci are structured as follows:
190 [in_service elements]
191 ppc = [out_of_service elements]
192 result = [in_service elements]
193
194 Parameters
195 ----------
196 result - ppci with results
197 ppc - ppc without results
198 mode - "pf","opf", "sc"...
199
200 Returns
201 -------
202 ppc with results
203 """
204
205 # copy the results for bus, gen and branch and some additional values like "success"
206 _ppci_bus_to_ppc(result, ppc)
207 _ppci_branch_to_ppc(result, ppc)
208 _ppci_gen_to_ppc(result, ppc)
209 _ppci_other_to_ppc(result, ppc, mode)
210
211 result = ppc
212 return result
213
[end of pandapower/results.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandapower/results.py b/pandapower/results.py
--- a/pandapower/results.py
+++ b/pandapower/results.py
@@ -70,7 +70,15 @@
suffix = suffix_mode.get(mode, None)
for element in elements:
res_element, res_empty_element = get_result_tables(element, suffix)
- if len(net[element]) != len(net[res_element]):
+
+ index_equal = False if res_element not in net else net[element].index.equals(net[res_element].index)
+ if not index_equal:
+ if net["_options"]["init_results"] and element == "bus":
+ # if the indices of bus and res_bus are not equal, but init_results is set, the voltage vector
+ # is wrong. A UserWarning is raised in this case. For all other elements the result table is emptied.
+ raise UserWarning("index of result table '{}' is not equal to the element table '{}'. The init result"
+ " option may lead to a non-converged power flow.".format(res_element, element))
+ # init result table for
init_element(net, element)
if element == "bus":
net._options["init_vm_pu"] = "auto"
| {"golden_diff": "diff --git a/pandapower/results.py b/pandapower/results.py\n--- a/pandapower/results.py\n+++ b/pandapower/results.py\n@@ -70,7 +70,15 @@\n suffix = suffix_mode.get(mode, None)\n for element in elements:\n res_element, res_empty_element = get_result_tables(element, suffix)\n- if len(net[element]) != len(net[res_element]):\n+\n+ index_equal = False if res_element not in net else net[element].index.equals(net[res_element].index)\n+ if not index_equal:\n+ if net[\"_options\"][\"init_results\"] and element == \"bus\":\n+ # if the indices of bus and res_bus are not equal, but init_results is set, the voltage vector\n+ # is wrong. A UserWarning is raised in this case. For all other elements the result table is emptied.\n+ raise UserWarning(\"index of result table '{}' is not equal to the element table '{}'. The init result\"\n+ \" option may lead to a non-converged power flow.\".format(res_element, element))\n+ # init result table for\n init_element(net, element)\n if element == \"bus\":\n net._options[\"init_vm_pu\"] = \"auto\"\n", "issue": "Bug in rundcpp failed to initialize result table\n```python\r\nimport pandapower as pp\r\nimport pandapower.networks as nw\r\n\r\nnet = nw.case57()\r\npp.rundcpp(net) \r\nnet.pop(\"res_gen\")\r\n\r\n# Problem by executing this\r\npp.rundcpp(net)\r\n\r\n# No problem doing this\r\npp.runpp(net) \r\n```\r\n\r\nCan anyone recreate this? I tries pp 2.2.2, pp 2.3.0.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright (c) 2016-2020 by University of Kassel and Fraunhofer Institute for Energy Economics\n# and Energy System Technology (IEE), Kassel. All rights reserved.\n\n\nimport numpy as np\nimport pandas as pd\n\nfrom pandapower.results_branch import _get_branch_results, _get_branch_results_3ph\nfrom pandapower.results_bus import _get_bus_results, _set_buses_out_of_service, \\\n _get_shunt_results, _get_p_q_results, _get_bus_v_results, _get_bus_v_results_3ph, _get_p_q_results_3ph, \\\n _get_bus_results_3ph\nfrom pandapower.results_gen import _get_gen_results, _get_gen_results_3ph\n\nsuffix_mode = {\"sc\": \"sc\", \"se\": \"est\", \"pf_3ph\": \"3ph\"}\n\n\ndef _extract_results(net, ppc):\n _set_buses_out_of_service(ppc)\n bus_lookup_aranged = _get_aranged_lookup(net)\n _get_bus_v_results(net, ppc)\n bus_pq = _get_p_q_results(net, ppc, bus_lookup_aranged)\n _get_shunt_results(net, ppc, bus_lookup_aranged, bus_pq)\n _get_branch_results(net, ppc, bus_lookup_aranged, bus_pq)\n _get_gen_results(net, ppc, bus_lookup_aranged, bus_pq)\n _get_bus_results(net, ppc, bus_pq)\n if net._options[\"mode\"] == \"opf\":\n _get_costs(net, ppc)\n\n\ndef _extract_results_3ph(net, ppc0, ppc1, ppc2):\n # reset_results(net, False)\n _set_buses_out_of_service(ppc0)\n _set_buses_out_of_service(ppc1)\n _set_buses_out_of_service(ppc2)\n bus_lookup_aranged = _get_aranged_lookup(net)\n\n _get_bus_v_results_3ph(net, ppc0, ppc1, ppc2)\n bus_pq = _get_p_q_results_3ph(net, bus_lookup_aranged)\n # _get_shunt_results(net, ppc, bus_lookup_aranged, bus_pq)\n _get_branch_results_3ph(net, ppc0, ppc1, ppc2, bus_lookup_aranged, bus_pq)\n _get_gen_results_3ph(net, ppc0, ppc1, ppc2, bus_lookup_aranged, bus_pq)\n _get_bus_results_3ph(net, bus_pq)\n\n\ndef _extract_results_se(net, ppc):\n _set_buses_out_of_service(ppc)\n bus_lookup_aranged = _get_aranged_lookup(net)\n _get_bus_v_results(net, ppc, suffix=\"_est\")\n bus_pq = np.zeros(shape=(len(net[\"bus\"].index), 2), dtype=np.float)\n _get_branch_results(net, ppc, bus_lookup_aranged, bus_pq, 
suffix=\"_est\")\n\n\ndef _get_costs(net, ppc):\n net.res_cost = ppc['obj']\n\n\ndef _get_aranged_lookup(net):\n # generate bus_lookup net -> consecutive ordering\n maxBus = max(net[\"bus\"].index.values)\n bus_lookup_aranged = -np.ones(maxBus + 1, dtype=int)\n bus_lookup_aranged[net[\"bus\"].index.values] = np.arange(len(net[\"bus\"].index.values))\n\n return bus_lookup_aranged\n\n\ndef verify_results(net, mode=\"pf\"):\n elements = get_relevant_elements(mode)\n suffix = suffix_mode.get(mode, None)\n for element in elements:\n res_element, res_empty_element = get_result_tables(element, suffix)\n if len(net[element]) != len(net[res_element]):\n init_element(net, element)\n if element == \"bus\":\n net._options[\"init_vm_pu\"] = \"auto\"\n net._options[\"init_va_degree\"] = \"auto\"\n\n\ndef get_result_tables(element, suffix=None):\n res_element = \"res_\" + element\n res_element_with_suffix = res_element if suffix is None else res_element + \"_%s\" % suffix\n\n if suffix == suffix_mode.get(\"se\", None):\n # State estimation used default result table\n return res_element_with_suffix, \"_empty_%s\" % res_element\n else:\n return res_element_with_suffix, \"_empty_%s\" % res_element_with_suffix\n\n\ndef empty_res_element(net, element, suffix=None):\n res_element, res_empty_element = get_result_tables(element, suffix)\n if res_empty_element in net:\n net[res_element] = net[res_empty_element].copy()\n else:\n net[res_element] = pd.DataFrame()\n\n\ndef init_element(net, element, suffix=None):\n res_element, res_empty_element = get_result_tables(element, suffix)\n index = net[element].index\n if len(index):\n # init empty dataframe\n if res_empty_element in net:\n columns = net[res_empty_element].columns\n net[res_element] = pd.DataFrame(np.nan, index=index,\n columns=columns, dtype='float')\n else:\n net[res_element] = pd.DataFrame(index=index, dtype='float')\n else:\n empty_res_element(net, element, suffix)\n\n\ndef get_relevant_elements(mode=\"pf\"):\n if mode == \"pf\" or mode == \"opf\":\n return [\"bus\", \"line\", \"trafo\", \"trafo3w\", \"impedance\", \"ext_grid\",\n \"load\", \"sgen\", \"storage\", \"shunt\", \"gen\", \"ward\", \"xward\",\n \"dcline\"]\n elif mode == \"sc\":\n return [\"bus\", \"line\", \"trafo\", \"trafo3w\", \"ext_grid\", \"gen\", \"sgen\"]\n elif mode == \"se\":\n return [\"bus\", \"line\", \"trafo\", \"trafo3w\"] \n elif mode == \"pf_3ph\":\n return [\"bus\", \"line\", \"trafo\", \"ext_grid\", \"shunt\",\n \"load\", \"sgen\", \"storage\", \"asymmetric_load\", \"asymmetric_sgen\"]\n\n\ndef init_results(net, mode=\"pf\"):\n elements = get_relevant_elements(mode)\n suffix = suffix_mode.get(mode, None)\n for element in elements:\n init_element(net, element, suffix)\n\n\ndef reset_results(net, mode=\"pf\"):\n elements = get_relevant_elements(mode)\n suffix = suffix_mode.get(mode, None)\n for element in elements:\n empty_res_element(net, element, suffix)\n\n\ndef _ppci_bus_to_ppc(result, ppc):\n # result is the ppci (ppc without out of service buses)\n # busses are sorted (REF, PV, PQ, NONE) -> results are the first 3 types\n n_buses, bus_cols = np.shape(ppc['bus'])\n n_rows_result, bus_cols_result = np.shape(result['bus'])\n # create matrix of proper size\n updated_bus = np.empty((n_buses, bus_cols_result))\n # fill in results (first 3 types)\n updated_bus[:n_rows_result, :] = result['bus']\n if n_buses > n_rows_result:\n # keep rows for busses of type NONE\n updated_bus[n_rows_result:, :bus_cols] = ppc['bus'][n_rows_result:, :]\n ppc['bus'] = updated_bus\n\n\ndef 
_ppci_branch_to_ppc(result, ppc):\n # in service branches and gens are taken from 'internal'\n branch_cols = np.shape(ppc['branch'])[1]\n ppc['branch'][result[\"internal\"]['branch_is'], :branch_cols] = result['branch'][:, :branch_cols]\n\n\ndef _ppci_gen_to_ppc(result, ppc):\n gen_cols = np.shape(ppc['gen'])[1]\n ppc['gen'][result[\"internal\"]['gen_is'], :gen_cols] = result['gen'][:, :gen_cols]\n\n\ndef _ppci_other_to_ppc(result, ppc, mode):\n ppc['internal'] = result['internal']\n\n if mode != \"sc\" and mode != \"se\":\n ppc['success'] = result['success']\n ppc['et'] = result['et']\n\n if mode == 'opf':\n ppc['obj'] = result['f']\n ppc['internal_gencost'] = result['gencost']\n\n if \"iterations\" in result:\n ppc[\"iterations\"] = result[\"iterations\"]\n\n\ndef _copy_results_ppci_to_ppc(result, ppc, mode):\n \"\"\"\n result contains results for all in service elements\n ppc gets the results for in- and out of service elements\n -> results must be copied\n\n ppc and ppci are structured as follows:\n [in_service elements]\n ppc = [out_of_service elements]\n result = [in_service elements]\n\n Parameters\n ----------\n result - ppci with results\n ppc - ppc without results\n mode - \"pf\",\"opf\", \"sc\"...\n\n Returns\n -------\n ppc with results\n \"\"\"\n\n # copy the results for bus, gen and branch and some additional values like \"success\"\n _ppci_bus_to_ppc(result, ppc)\n _ppci_branch_to_ppc(result, ppc)\n _ppci_gen_to_ppc(result, ppc)\n _ppci_other_to_ppc(result, ppc, mode)\n\n result = ppc\n return result\n", "path": "pandapower/results.py"}]} | 3,276 | 272 |
gh_patches_debug_3099 | rasdani/github-patches | git_diff | MongoEngine__mongoengine-2431 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version restriction on pillow
Do we still need to restrict pillow to less than 7.0.0? This restriction appears to have been added because of Python 2 support, which mongoengine dropped with version 0.20.0.
https://github.com/MongoEngine/mongoengine/blob/277b827d4dab4630145bc747fdab0df48a045273/setup.py#L118
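
A minimal sketch of what lifting the pin could look like in `setup.py` (only the `tests_require` entry changes; the exact lower bound is an assumption):

```python
from setuptools import find_packages

extra_opts = {
    "packages": find_packages(exclude=["tests", "tests.*"]),
    "tests_require": [
        "pytest<5.0",
        "pytest-cov",
        "coverage<5.0",
        "blinker",
        "Pillow>=7.0.0",  # no upper pin needed once Python 2 support is gone
    ],
}
```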
</issue>
<code>
[start of setup.py]
1 import os
2 import sys
3
4 from pkg_resources import normalize_path
5 from setuptools import find_packages, setup
6 from setuptools.command.test import test as TestCommand
7
8 # Hack to silence atexit traceback in newer python versions
9 try:
10 import multiprocessing
11 except ImportError:
12 pass
13
14 DESCRIPTION = "MongoEngine is a Python Object-Document Mapper for working with MongoDB."
15
16 try:
17 with open("README.rst") as fin:
18 LONG_DESCRIPTION = fin.read()
19 except Exception:
20 LONG_DESCRIPTION = None
21
22
23 def get_version(version_tuple):
24 """Return the version tuple as a string, e.g. for (0, 10, 7),
25 return '0.10.7'.
26 """
27 return ".".join(map(str, version_tuple))
28
29
30 class PyTest(TestCommand):
31 """Will force pytest to search for tests inside the build directory
32 for 2to3 converted code (used by tox), instead of the current directory.
33 Required as long as we need 2to3
34
35 Known Limitation: https://tox.readthedocs.io/en/latest/example/pytest.html#known-issues-and-limitations
36 Source: https://www.hackzine.org/python-testing-with-pytest-and-2to3-plus-tox-and-travis-ci.html
37 """
38
39 # https://pytest.readthedocs.io/en/2.7.3/goodpractises.html#integration-with-setuptools-test-commands
40 # Allows to provide pytest command argument through the test runner command `python setup.py test`
41 # e.g: `python setup.py test -a "-k=test"`
42 # This only works for 1 argument though
43 user_options = [("pytest-args=", "a", "Arguments to pass to py.test")]
44
45 def initialize_options(self):
46 TestCommand.initialize_options(self)
47 self.pytest_args = ""
48
49 def finalize_options(self):
50 TestCommand.finalize_options(self)
51 self.test_args = ["tests"]
52 self.test_suite = True
53
54 def run_tests(self):
55 # import here, cause outside the eggs aren't loaded
56 from pkg_resources import _namespace_packages
57 import pytest
58
59 # Purge modules under test from sys.modules. The test loader will
60 # re-import them from the build location. Required when 2to3 is used
61 # with namespace packages.
62 if sys.version_info >= (3,) and getattr(self.distribution, "use_2to3", False):
63 module = self.test_args[-1].split(".")[0]
64 if module in _namespace_packages:
65 del_modules = []
66 if module in sys.modules:
67 del_modules.append(module)
68 module += "."
69 for name in sys.modules:
70 if name.startswith(module):
71 del_modules.append(name)
72 map(sys.modules.__delitem__, del_modules)
73
74 # Run on the build directory for 2to3-built code
75 # This will prevent the old 2.x code from being found
76 # by py.test discovery mechanism, that apparently
77 # ignores sys.path..
78 ei_cmd = self.get_finalized_command("egg_info")
79 self.test_args = [normalize_path(ei_cmd.egg_base)]
80
81 cmd_args = self.test_args + ([self.pytest_args] if self.pytest_args else [])
82 errno = pytest.main(cmd_args)
83 sys.exit(errno)
84
85
86 # Dirty hack to get version number from monogengine/__init__.py - we can't
87 # import it as it depends on PyMongo and PyMongo isn't installed until this
88 # file is read
89 init = os.path.join(os.path.dirname(__file__), "mongoengine", "__init__.py")
90 version_line = list(filter(lambda l: l.startswith("VERSION"), open(init)))[0]
91
92 VERSION = get_version(eval(version_line.split("=")[-1]))
93
94 CLASSIFIERS = [
95 "Development Status :: 5 - Production/Stable",
96 "Intended Audience :: Developers",
97 "License :: OSI Approved :: MIT License",
98 "Operating System :: OS Independent",
99 "Programming Language :: Python",
100 "Programming Language :: Python :: 3",
101 "Programming Language :: Python :: 3.5",
102 "Programming Language :: Python :: 3.6",
103 "Programming Language :: Python :: 3.7",
104 "Programming Language :: Python :: 3.8",
105 "Programming Language :: Python :: Implementation :: CPython",
106 "Programming Language :: Python :: Implementation :: PyPy",
107 "Topic :: Database",
108 "Topic :: Software Development :: Libraries :: Python Modules",
109 ]
110
111 extra_opts = {
112 "packages": find_packages(exclude=["tests", "tests.*"]),
113 "tests_require": [
114 "pytest<5.0",
115 "pytest-cov",
116 "coverage<5.0", # recent coverage switched to sqlite format for the .coverage file which isn't handled properly by coveralls
117 "blinker",
118 "Pillow>=2.0.0, <7.0.0", # 7.0.0 dropped Python2 support
119 ],
120 }
121
122 if "test" in sys.argv:
123 extra_opts["packages"] = find_packages()
124 extra_opts["package_data"] = {
125 "tests": ["fields/mongoengine.png", "fields/mongodb_leaf.png"]
126 }
127
128 setup(
129 name="mongoengine",
130 version=VERSION,
131 author="Harry Marr",
132 author_email="[email protected]",
133 maintainer="Stefan Wojcik",
134 maintainer_email="[email protected]",
135 url="http://mongoengine.org/",
136 download_url="https://github.com/MongoEngine/mongoengine/tarball/master",
137 license="MIT",
138 include_package_data=True,
139 description=DESCRIPTION,
140 long_description=LONG_DESCRIPTION,
141 platforms=["any"],
142 classifiers=CLASSIFIERS,
143 python_requires=">=3.5",
144 install_requires=["pymongo>=3.4, <4.0"],
145 cmdclass={"test": PyTest},
146 **extra_opts
147 )
148
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -115,7 +115,7 @@
"pytest-cov",
"coverage<5.0", # recent coverage switched to sqlite format for the .coverage file which isn't handled properly by coveralls
"blinker",
- "Pillow>=2.0.0, <7.0.0", # 7.0.0 dropped Python2 support
+ "Pillow>=7.0.0",
],
}
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -115,7 +115,7 @@\n \"pytest-cov\",\n \"coverage<5.0\", # recent coverage switched to sqlite format for the .coverage file which isn't handled properly by coveralls\n \"blinker\",\n- \"Pillow>=2.0.0, <7.0.0\", # 7.0.0 dropped Python2 support\n+ \"Pillow>=7.0.0\",\n ],\n }\n", "issue": "Version restriction on pillow\nDo we still need to restrict pillow to less then 7.0.0? This looks to have been implemented because of python2 support, which mongoengine dropped with version 0.20.0\r\n\r\nhttps://github.com/MongoEngine/mongoengine/blob/277b827d4dab4630145bc747fdab0df48a045273/setup.py#L118\n", "before_files": [{"content": "import os\nimport sys\n\nfrom pkg_resources import normalize_path\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\n# Hack to silence atexit traceback in newer python versions\ntry:\n import multiprocessing\nexcept ImportError:\n pass\n\nDESCRIPTION = \"MongoEngine is a Python Object-Document Mapper for working with MongoDB.\"\n\ntry:\n with open(\"README.rst\") as fin:\n LONG_DESCRIPTION = fin.read()\nexcept Exception:\n LONG_DESCRIPTION = None\n\n\ndef get_version(version_tuple):\n \"\"\"Return the version tuple as a string, e.g. for (0, 10, 7),\n return '0.10.7'.\n \"\"\"\n return \".\".join(map(str, version_tuple))\n\n\nclass PyTest(TestCommand):\n \"\"\"Will force pytest to search for tests inside the build directory\n for 2to3 converted code (used by tox), instead of the current directory.\n Required as long as we need 2to3\n\n Known Limitation: https://tox.readthedocs.io/en/latest/example/pytest.html#known-issues-and-limitations\n Source: https://www.hackzine.org/python-testing-with-pytest-and-2to3-plus-tox-and-travis-ci.html\n \"\"\"\n\n # https://pytest.readthedocs.io/en/2.7.3/goodpractises.html#integration-with-setuptools-test-commands\n # Allows to provide pytest command argument through the test runner command `python setup.py test`\n # e.g: `python setup.py test -a \"-k=test\"`\n # This only works for 1 argument though\n user_options = [(\"pytest-args=\", \"a\", \"Arguments to pass to py.test\")]\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.pytest_args = \"\"\n\n def finalize_options(self):\n TestCommand.finalize_options(self)\n self.test_args = [\"tests\"]\n self.test_suite = True\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n from pkg_resources import _namespace_packages\n import pytest\n\n # Purge modules under test from sys.modules. The test loader will\n # re-import them from the build location. 
Required when 2to3 is used\n # with namespace packages.\n if sys.version_info >= (3,) and getattr(self.distribution, \"use_2to3\", False):\n module = self.test_args[-1].split(\".\")[0]\n if module in _namespace_packages:\n del_modules = []\n if module in sys.modules:\n del_modules.append(module)\n module += \".\"\n for name in sys.modules:\n if name.startswith(module):\n del_modules.append(name)\n map(sys.modules.__delitem__, del_modules)\n\n # Run on the build directory for 2to3-built code\n # This will prevent the old 2.x code from being found\n # by py.test discovery mechanism, that apparently\n # ignores sys.path..\n ei_cmd = self.get_finalized_command(\"egg_info\")\n self.test_args = [normalize_path(ei_cmd.egg_base)]\n\n cmd_args = self.test_args + ([self.pytest_args] if self.pytest_args else [])\n errno = pytest.main(cmd_args)\n sys.exit(errno)\n\n\n# Dirty hack to get version number from monogengine/__init__.py - we can't\n# import it as it depends on PyMongo and PyMongo isn't installed until this\n# file is read\ninit = os.path.join(os.path.dirname(__file__), \"mongoengine\", \"__init__.py\")\nversion_line = list(filter(lambda l: l.startswith(\"VERSION\"), open(init)))[0]\n\nVERSION = get_version(eval(version_line.split(\"=\")[-1]))\n\nCLASSIFIERS = [\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Database\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n]\n\nextra_opts = {\n \"packages\": find_packages(exclude=[\"tests\", \"tests.*\"]),\n \"tests_require\": [\n \"pytest<5.0\",\n \"pytest-cov\",\n \"coverage<5.0\", # recent coverage switched to sqlite format for the .coverage file which isn't handled properly by coveralls\n \"blinker\",\n \"Pillow>=2.0.0, <7.0.0\", # 7.0.0 dropped Python2 support\n ],\n}\n\nif \"test\" in sys.argv:\n extra_opts[\"packages\"] = find_packages()\n extra_opts[\"package_data\"] = {\n \"tests\": [\"fields/mongoengine.png\", \"fields/mongodb_leaf.png\"]\n }\n\nsetup(\n name=\"mongoengine\",\n version=VERSION,\n author=\"Harry Marr\",\n author_email=\"[email protected]\",\n maintainer=\"Stefan Wojcik\",\n maintainer_email=\"[email protected]\",\n url=\"http://mongoengine.org/\",\n download_url=\"https://github.com/MongoEngine/mongoengine/tarball/master\",\n license=\"MIT\",\n include_package_data=True,\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n platforms=[\"any\"],\n classifiers=CLASSIFIERS,\n python_requires=\">=3.5\",\n install_requires=[\"pymongo>=3.4, <4.0\"],\n cmdclass={\"test\": PyTest},\n **extra_opts\n)\n", "path": "setup.py"}]} | 2,242 | 122 |
gh_patches_debug_22111 | rasdani/github-patches | git_diff | internetarchive__openlibrary-7946 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unique visitors graph on homepage numbers appear too large
<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->
### Evidence / Screenshot (if possible)

### Relevant url?
https://openlibrary.org/
### Proposal & Constraints
Mek thinks these might be off by some factor. Drini thinks they might be correct and simply include a ton of requests we weren't counting before (images, script tags).
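
For reference, a sketch of the graphite target currently used and a possible alternative, using the same render call as `openlibrary/core/admin.py` below (which target is correct is exactly the open question):

```python
import requests

# current target: hitcount() treats the series as a per-second rate and scales it
# up by the interval, which can inflate a counter that is already per-day
current = "hitcount(stats.uniqueips.openlibrary, '1d')"

# alternative: summarize() just sums the datapoints falling in each 1-day bucket
alternative = "summarize(stats.uniqueips.openlibrary, '1d')"

response = requests.get(
    "http://graphite.us.archive.org/render/",
    params={"target": alternative, "from": "-28days", "tz": "UTC", "format": "json"},
)
print(response.json()[0]["datapoints"])
```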
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@mekarpeles @cclauss
</issue>
<code>
[start of openlibrary/core/admin.py]
1 """Admin functionality."""
2
3 import calendar
4 import datetime
5
6 import requests
7 import web
8 from infogami import config
9
10 from openlibrary.core import cache
11
12
13 class Stats:
14 def __init__(self, docs, key, total_key):
15 self.key = key
16 self.docs = docs
17 try:
18 self.latest = docs[-1].get(key, 0)
19 except IndexError:
20 self.latest = 0
21
22 try:
23 self.previous = docs[-2].get(key, 0)
24 except IndexError:
25 self.previous = 0
26
27 try:
28 # Last available total count
29 self.total = next(x for x in reversed(docs) if total_key in x)[total_key]
30 except (KeyError, StopIteration):
31 self.total = ""
32
33 def get_counts(self, ndays=28, times=False):
34 """Returns the stats for last n days as an array useful for
35 plotting. i.e. an array of [x, y] tuples where y is the value
36 and `x` the x coordinate.
37
38 If times is True, the x coordinate in the tuple will be
39 timestamps for the day.
40 """
41
42 def _convert_to_milli_timestamp(d):
43 """Uses the `_id` of the document `d` to create a UNIX
44 timestamp and coverts it to milliseconds"""
45 t = datetime.datetime.strptime(d, "counts-%Y-%m-%d")
46 return calendar.timegm(t.timetuple()) * 1000
47
48 if times:
49 return [
50 [_convert_to_milli_timestamp(x['_key']), x.get(self.key, 0)]
51 for x in self.docs[-ndays:]
52 ]
53 else:
54 return zip(
55 range(0, ndays * 5, 5), (x.get(self.key, 0) for x in self.docs[-ndays:])
56 ) # The *5 and 5 are for the bar widths
57
58 def get_summary(self, ndays=28):
59 """Returns the summary of counts for past n days.
60
61 Summary can be either sum or average depending on the type of stats.
62 This is used to find counts for last 7 days and last 28 days.
63 """
64 return sum(x[1] for x in self.get_counts(ndays))
65
66
67 @cache.memoize(
68 engine="memcache", key="admin._get_loan_counts_from_graphite", expires=5 * 60
69 )
70 def _get_loan_counts_from_graphite(ndays: int) -> list[list[int]] | None:
71 try:
72 r = requests.get(
73 'http://graphite.us.archive.org/render',
74 params={
75 'target': 'hitcount(stats.ol.loans.bookreader, "1d")',
76 'from': '-%ddays' % ndays,
77 'tz': 'UTC',
78 'format': 'json',
79 },
80 )
81 return r.json()[0]['datapoints']
82 except (requests.exceptions.RequestException, ValueError, AttributeError):
83 return None
84
85
86 class LoanStats(Stats):
87 """
88 Temporary (2020-03-19) override of Stats for loans, due to bug
89 which caused 1mo of loans stats to be missing from regular
90 stats db. This implementation uses graphite, but only on prod,
91 so that we don't forget.
92 """
93
94 def get_counts(self, ndays=28, times=False):
95 # Let dev.openlibrary.org show the true state of things
96 if 'dev' in config.features:
97 return Stats.get_counts(self, ndays, times)
98
99 if graphite_data := _get_loan_counts_from_graphite(ndays):
100 # convert timestamp seconds to ms (as required by API)
101 return [[timestamp * 1000, count] for [count, timestamp] in graphite_data]
102 else:
103 return Stats.get_counts(self, ndays, times)
104
105
106 @cache.memoize(
107 engine="memcache", key="admin._get_visitor_counts_from_graphite", expires=5 * 60
108 )
109 def _get_visitor_counts_from_graphite(self, ndays: int = 28) -> list[list[int]]:
110 """
111 Read the unique visitors (IP addresses) per day for the last ndays from graphite.
112 :param ndays: number of days to read
113 :return: list containing [count, timestamp] for ndays
114 """
115 try:
116 response = requests.get(
117 "http://graphite.us.archive.org/render/",
118 params={
119 "target": "hitcount(stats.uniqueips.openlibrary, '1d')",
120 "from": f"-{ndays}days",
121 "tz": "UTC",
122 "format": "json",
123 },
124 )
125 response.raise_for_status()
126 visitors = response.json()[0]['datapoints']
127 except requests.exceptions.RequestException:
128 visitors = []
129 return visitors
130
131
132 class VisitorStats(Stats):
133 def get_counts(self, ndays: int = 28, times: bool = False) -> list[tuple[int, int]]:
134 visitors = _get_visitor_counts_from_graphite(ndays)
135 # Flip the order, convert timestamp to msec and convert count==None to zero
136 return [
137 (int(timestamp * 1000), int(count or 0)) for count, timestamp in visitors
138 ]
139
140
141 @cache.memoize(engine="memcache", key="admin._get_count_docs", expires=5 * 60)
142 def _get_count_docs(ndays):
143 """Returns the count docs from admin stats database.
144
145 This function is memoized to avoid accessing the db for every request.
146 """
147 today = datetime.datetime.utcnow().date()
148 dates = [today - datetime.timedelta(days=i) for i in range(ndays)]
149
150 # we want the dates in reverse order
151 dates = dates[::-1]
152
153 docs = [web.ctx.site.store.get(d.strftime("counts-%Y-%m-%d")) for d in dates]
154 return [d for d in docs if d]
155
156
157 def get_stats(ndays=30):
158 """Returns the stats for the past `ndays`"""
159 docs = _get_count_docs(ndays)
160 return {
161 'human_edits': Stats(docs, "human_edits", "human_edits"),
162 'bot_edits': Stats(docs, "bot_edits", "bot_edits"),
163 'lists': Stats(docs, "lists", "total_lists"),
164 'visitors': VisitorStats(docs, "visitors", "visitors"),
165 'loans': LoanStats(docs, "loans", "loans"),
166 'members': Stats(docs, "members", "total_members"),
167 'works': Stats(docs, "works", "total_works"),
168 'editions': Stats(docs, "editions", "total_editions"),
169 'ebooks': Stats(docs, "ebooks", "total_ebooks"),
170 'covers': Stats(docs, "covers", "total_covers"),
171 'authors': Stats(docs, "authors", "total_authors"),
172 'subjects': Stats(docs, "subjects", "total_subjects"),
173 }
174
[end of openlibrary/core/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/openlibrary/core/admin.py b/openlibrary/core/admin.py
--- a/openlibrary/core/admin.py
+++ b/openlibrary/core/admin.py
@@ -116,7 +116,7 @@
response = requests.get(
"http://graphite.us.archive.org/render/",
params={
- "target": "hitcount(stats.uniqueips.openlibrary, '1d')",
+ "target": "summarize(stats.uniqueips.openlibrary, '1d')",
"from": f"-{ndays}days",
"tz": "UTC",
"format": "json",
@@ -132,7 +132,7 @@
class VisitorStats(Stats):
def get_counts(self, ndays: int = 28, times: bool = False) -> list[tuple[int, int]]:
visitors = _get_visitor_counts_from_graphite(ndays)
- # Flip the order, convert timestamp to msec and convert count==None to zero
+ # Flip the order, convert timestamp to msec, and convert count==None to zero
return [
(int(timestamp * 1000), int(count or 0)) for count, timestamp in visitors
]
| {"golden_diff": "diff --git a/openlibrary/core/admin.py b/openlibrary/core/admin.py\n--- a/openlibrary/core/admin.py\n+++ b/openlibrary/core/admin.py\n@@ -116,7 +116,7 @@\n response = requests.get(\n \"http://graphite.us.archive.org/render/\",\n params={\n- \"target\": \"hitcount(stats.uniqueips.openlibrary, '1d')\",\n+ \"target\": \"summarize(stats.uniqueips.openlibrary, '1d')\",\n \"from\": f\"-{ndays}days\",\n \"tz\": \"UTC\",\n \"format\": \"json\",\n@@ -132,7 +132,7 @@\n class VisitorStats(Stats):\n def get_counts(self, ndays: int = 28, times: bool = False) -> list[tuple[int, int]]:\n visitors = _get_visitor_counts_from_graphite(ndays)\n- # Flip the order, convert timestamp to msec and convert count==None to zero\n+ # Flip the order, convert timestamp to msec, and convert count==None to zero\n return [\n (int(timestamp * 1000), int(count or 0)) for count, timestamp in visitors\n ]\n", "issue": "Unique visitors graph on homepage numbers appear too large\n<!-- What problem are we solving? What does the experience look like today? What are the symptoms? -->\r\n\r\n### Evidence / Screenshot (if possible)\r\n\r\n\r\n### Relevant url?\r\nhttps://openlibrary.org/\r\n\r\n### Proposal & Constraints\r\nMek thinks these might be off by a factor of something? Drini thinks they might be correct just include a tooooon of requests we weren't including before (images, script tags).\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->\r\n\r\n### Stakeholders\r\n@mekarpeles @cclauss \n", "before_files": [{"content": "\"\"\"Admin functionality.\"\"\"\n\nimport calendar\nimport datetime\n\nimport requests\nimport web\nfrom infogami import config\n\nfrom openlibrary.core import cache\n\n\nclass Stats:\n def __init__(self, docs, key, total_key):\n self.key = key\n self.docs = docs\n try:\n self.latest = docs[-1].get(key, 0)\n except IndexError:\n self.latest = 0\n\n try:\n self.previous = docs[-2].get(key, 0)\n except IndexError:\n self.previous = 0\n\n try:\n # Last available total count\n self.total = next(x for x in reversed(docs) if total_key in x)[total_key]\n except (KeyError, StopIteration):\n self.total = \"\"\n\n def get_counts(self, ndays=28, times=False):\n \"\"\"Returns the stats for last n days as an array useful for\n plotting. i.e. 
an array of [x, y] tuples where y is the value\n and `x` the x coordinate.\n\n If times is True, the x coordinate in the tuple will be\n timestamps for the day.\n \"\"\"\n\n def _convert_to_milli_timestamp(d):\n \"\"\"Uses the `_id` of the document `d` to create a UNIX\n timestamp and coverts it to milliseconds\"\"\"\n t = datetime.datetime.strptime(d, \"counts-%Y-%m-%d\")\n return calendar.timegm(t.timetuple()) * 1000\n\n if times:\n return [\n [_convert_to_milli_timestamp(x['_key']), x.get(self.key, 0)]\n for x in self.docs[-ndays:]\n ]\n else:\n return zip(\n range(0, ndays * 5, 5), (x.get(self.key, 0) for x in self.docs[-ndays:])\n ) # The *5 and 5 are for the bar widths\n\n def get_summary(self, ndays=28):\n \"\"\"Returns the summary of counts for past n days.\n\n Summary can be either sum or average depending on the type of stats.\n This is used to find counts for last 7 days and last 28 days.\n \"\"\"\n return sum(x[1] for x in self.get_counts(ndays))\n\n\[email protected](\n engine=\"memcache\", key=\"admin._get_loan_counts_from_graphite\", expires=5 * 60\n)\ndef _get_loan_counts_from_graphite(ndays: int) -> list[list[int]] | None:\n try:\n r = requests.get(\n 'http://graphite.us.archive.org/render',\n params={\n 'target': 'hitcount(stats.ol.loans.bookreader, \"1d\")',\n 'from': '-%ddays' % ndays,\n 'tz': 'UTC',\n 'format': 'json',\n },\n )\n return r.json()[0]['datapoints']\n except (requests.exceptions.RequestException, ValueError, AttributeError):\n return None\n\n\nclass LoanStats(Stats):\n \"\"\"\n Temporary (2020-03-19) override of Stats for loans, due to bug\n which caused 1mo of loans stats to be missing from regular\n stats db. This implementation uses graphite, but only on prod,\n so that we don't forget.\n \"\"\"\n\n def get_counts(self, ndays=28, times=False):\n # Let dev.openlibrary.org show the true state of things\n if 'dev' in config.features:\n return Stats.get_counts(self, ndays, times)\n\n if graphite_data := _get_loan_counts_from_graphite(ndays):\n # convert timestamp seconds to ms (as required by API)\n return [[timestamp * 1000, count] for [count, timestamp] in graphite_data]\n else:\n return Stats.get_counts(self, ndays, times)\n\n\[email protected](\n engine=\"memcache\", key=\"admin._get_visitor_counts_from_graphite\", expires=5 * 60\n)\ndef _get_visitor_counts_from_graphite(self, ndays: int = 28) -> list[list[int]]:\n \"\"\"\n Read the unique visitors (IP addresses) per day for the last ndays from graphite.\n :param ndays: number of days to read\n :return: list containing [count, timestamp] for ndays\n \"\"\"\n try:\n response = requests.get(\n \"http://graphite.us.archive.org/render/\",\n params={\n \"target\": \"hitcount(stats.uniqueips.openlibrary, '1d')\",\n \"from\": f\"-{ndays}days\",\n \"tz\": \"UTC\",\n \"format\": \"json\",\n },\n )\n response.raise_for_status()\n visitors = response.json()[0]['datapoints']\n except requests.exceptions.RequestException:\n visitors = []\n return visitors\n\n\nclass VisitorStats(Stats):\n def get_counts(self, ndays: int = 28, times: bool = False) -> list[tuple[int, int]]:\n visitors = _get_visitor_counts_from_graphite(ndays)\n # Flip the order, convert timestamp to msec and convert count==None to zero\n return [\n (int(timestamp * 1000), int(count or 0)) for count, timestamp in visitors\n ]\n\n\[email protected](engine=\"memcache\", key=\"admin._get_count_docs\", expires=5 * 60)\ndef _get_count_docs(ndays):\n \"\"\"Returns the count docs from admin stats database.\n\n This function is memoized to avoid accessing the 
db for every request.\n \"\"\"\n today = datetime.datetime.utcnow().date()\n dates = [today - datetime.timedelta(days=i) for i in range(ndays)]\n\n # we want the dates in reverse order\n dates = dates[::-1]\n\n docs = [web.ctx.site.store.get(d.strftime(\"counts-%Y-%m-%d\")) for d in dates]\n return [d for d in docs if d]\n\n\ndef get_stats(ndays=30):\n \"\"\"Returns the stats for the past `ndays`\"\"\"\n docs = _get_count_docs(ndays)\n return {\n 'human_edits': Stats(docs, \"human_edits\", \"human_edits\"),\n 'bot_edits': Stats(docs, \"bot_edits\", \"bot_edits\"),\n 'lists': Stats(docs, \"lists\", \"total_lists\"),\n 'visitors': VisitorStats(docs, \"visitors\", \"visitors\"),\n 'loans': LoanStats(docs, \"loans\", \"loans\"),\n 'members': Stats(docs, \"members\", \"total_members\"),\n 'works': Stats(docs, \"works\", \"total_works\"),\n 'editions': Stats(docs, \"editions\", \"total_editions\"),\n 'ebooks': Stats(docs, \"ebooks\", \"total_ebooks\"),\n 'covers': Stats(docs, \"covers\", \"total_covers\"),\n 'authors': Stats(docs, \"authors\", \"total_authors\"),\n 'subjects': Stats(docs, \"subjects\", \"total_subjects\"),\n }\n", "path": "openlibrary/core/admin.py"}]} | 2,682 | 261 |
gh_patches_debug_13428 | rasdani/github-patches | git_diff | pallets__werkzeug-2074 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
User agents not being detected
While using your utility to parse user agents for calls to my site, I have a few that don't get parsed correctly:
Version not getting parsed on the following user agents:
* `Mozilla/5.0 (Linux; Android 10; motorola edge) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36`
* `Mozilla/5.0 (Linux; Android 10; motorola edge) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.81 Mobile Safari/537.36`
* `Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322)`
Detected as 'mozilla' but is actually msie v11:
* `Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko`
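
A minimal reproduction sketch, assuming the `UserAgent` class from `src/werkzeug/useragents.py` below; the comments restate the observed results, not the expected ones:

```python
from werkzeug.useragents import UserAgent

# IE 11 only announces itself via the Trident token
ua = UserAgent("Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko")
print(ua.browser, ua.version)  # comes back as 'mozilla', not 'msie' 11

# Chrome on Android ("motorola edge" device name)
ua = UserAgent(
    "Mozilla/5.0 (Linux; Android 10; motorola edge) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36"
)
print(ua.browser, ua.version)  # version comes back empty
```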
</issue>
<code>
[start of src/werkzeug/useragents.py]
1 import re
2 import typing as t
3
4 if t.TYPE_CHECKING:
5 from wsgiref.types import WSGIEnvironment
6
7
8 class UserAgentParser:
9 """A simple user agent parser. Used by the `UserAgent`."""
10
11 platform_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (
12 (" cros ", "chromeos"),
13 ("iphone|ios", "iphone"),
14 ("ipad", "ipad"),
15 (r"darwin\b|mac\b|os\s*x", "macos"),
16 ("win", "windows"),
17 (r"android", "android"),
18 ("netbsd", "netbsd"),
19 ("openbsd", "openbsd"),
20 ("freebsd", "freebsd"),
21 ("dragonfly", "dragonflybsd"),
22 ("(sun|i86)os", "solaris"),
23 (r"x11\b|lin(\b|ux)?", "linux"),
24 (r"nintendo\s+wii", "wii"),
25 ("irix", "irix"),
26 ("hp-?ux", "hpux"),
27 ("aix", "aix"),
28 ("sco|unix_sv", "sco"),
29 ("bsd", "bsd"),
30 ("amiga", "amiga"),
31 ("blackberry|playbook", "blackberry"),
32 ("symbian", "symbian"),
33 )
34 browser_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (
35 ("googlebot", "google"),
36 ("msnbot", "msn"),
37 ("yahoo", "yahoo"),
38 ("ask jeeves", "ask"),
39 (r"aol|america\s+online\s+browser", "aol"),
40 (r"opera|opr", "opera"),
41 ("edge|edg", "edge"),
42 ("chrome|crios", "chrome"),
43 ("seamonkey", "seamonkey"),
44 ("firefox|firebird|phoenix|iceweasel", "firefox"),
45 ("galeon", "galeon"),
46 ("safari|version", "safari"),
47 ("webkit", "webkit"),
48 ("camino", "camino"),
49 ("konqueror", "konqueror"),
50 ("k-meleon", "kmeleon"),
51 ("netscape", "netscape"),
52 (r"msie|microsoft\s+internet\s+explorer|trident/.+? rv:", "msie"),
53 ("lynx", "lynx"),
54 ("links", "links"),
55 ("Baiduspider", "baidu"),
56 ("bingbot", "bing"),
57 ("mozilla", "mozilla"),
58 )
59
60 _browser_version_re = r"(?:{pattern})[/\sa-z(]*(\d+[.\da-z]+)?"
61 _language_re = re.compile(
62 r"(?:;\s*|\s+)(\b\w{2}\b(?:-\b\w{2}\b)?)\s*;|"
63 r"(?:\(|\[|;)\s*(\b\w{2}\b(?:-\b\w{2}\b)?)\s*(?:\]|\)|;)"
64 )
65
66 def __init__(self) -> None:
67 self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platform_rules]
68 self.browsers = [
69 (b, re.compile(self._browser_version_re.format(pattern=a), re.I))
70 for a, b in self.browser_rules
71 ]
72
73 def __call__(
74 self, user_agent: str
75 ) -> t.Tuple[t.Optional[str], t.Optional[str], t.Optional[str], t.Optional[str]]:
76 platform: t.Optional[str]
77 browser: t.Optional[str]
78 version: t.Optional[str]
79 language: t.Optional[str]
80
81 for platform, regex in self.platforms: # noqa: B007
82 match = regex.search(user_agent)
83 if match is not None:
84 break
85 else:
86 platform = None
87 for browser, regex in self.browsers: # noqa: B007
88 match = regex.search(user_agent)
89 if match is not None:
90 version = match.group(1)
91 break
92 else:
93 browser = version = None
94 match = self._language_re.search(user_agent)
95 if match is not None:
96 language = match.group(1) or match.group(2)
97 else:
98 language = None
99 return platform, browser, version, language
100
101
102 class UserAgent:
103 """Represents a user agent. Pass it a WSGI environment or a user agent
104 string and you can inspect some of the details from the user agent
105 string via the attributes. The following attributes exist:
106
107 .. attribute:: string
108
109 the raw user agent string
110
111 .. attribute:: platform
112
113 the browser platform. ``None`` if not recognized.
114 The following platforms are currently recognized:
115
116 - `aix`
117 - `amiga`
118 - `android`
119 - `blackberry`
120 - `bsd`
121 - `chromeos`
122 - `dragonflybsd`
123 - `freebsd`
124 - `hpux`
125 - `ipad`
126 - `iphone`
127 - `irix`
128 - `linux`
129 - `macos`
130 - `netbsd`
131 - `openbsd`
132 - `sco`
133 - `solaris`
134 - `symbian`
135 - `wii`
136 - `windows`
137
138 .. attribute:: browser
139
140 the name of the browser. ``None`` if not recognized.
141 The following browsers are currently recognized:
142
143 - `aol` *
144 - `ask` *
145 - `baidu` *
146 - `bing` *
147 - `camino`
148 - `chrome`
149 - `edge`
150 - `firefox`
151 - `galeon`
152 - `google` *
153 - `kmeleon`
154 - `konqueror`
155 - `links`
156 - `lynx`
157 - `mozilla`
158 - `msie`
159 - `msn`
160 - `netscape`
161 - `opera`
162 - `safari`
163 - `seamonkey`
164 - `webkit`
165 - `yahoo` *
166
167 (Browsers marked with a star (``*``) are crawlers.)
168
169 .. attribute:: version
170
171 the version of the browser. ``None`` if not recognized.
172
173 .. attribute:: language
174
175 the language of the browser. ``None`` if not recognized.
176 """
177
178 _parser = UserAgentParser()
179
180 def __init__(self, environ_or_string: t.Union["WSGIEnvironment", str]) -> None:
181 if isinstance(environ_or_string, dict):
182 self.string = environ_or_string.get("HTTP_USER_AGENT", "")
183 else:
184 self.string = environ_or_string
185
186 self.platform, self.browser, self.version, self.language = self._parser(
187 self.string
188 )
189
190 def to_header(self) -> str:
191 return self.string
192
193 def __str__(self) -> str:
194 return self.string
195
196 def __bool__(self) -> bool:
197 return bool(self.browser)
198
199 def __repr__(self):
200 return f"<{type(self).__name__} {self.browser!r}/{self.version}>"
201
[end of src/werkzeug/useragents.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/werkzeug/useragents.py b/src/werkzeug/useragents.py
--- a/src/werkzeug/useragents.py
+++ b/src/werkzeug/useragents.py
@@ -84,8 +84,18 @@
break
else:
platform = None
+
+ # Except for Trident, all browser key words come after the last ')'
+ last_closing_paren = 0
+ if (
+ not re.compile(r"trident/.+? rv:", re.I).search(user_agent)
+ and ")" in user_agent
+ and user_agent[-1] != ")"
+ ):
+ last_closing_paren = user_agent.rindex(")")
+
for browser, regex in self.browsers: # noqa: B007
- match = regex.search(user_agent)
+ match = regex.search(user_agent[last_closing_paren:])
if match is not None:
version = match.group(1)
break
| {"golden_diff": "diff --git a/src/werkzeug/useragents.py b/src/werkzeug/useragents.py\n--- a/src/werkzeug/useragents.py\n+++ b/src/werkzeug/useragents.py\n@@ -84,8 +84,18 @@\n break\n else:\n platform = None\n+\n+ # Except for Trident, all browser key words come after the last ')'\n+ last_closing_paren = 0\n+ if (\n+ not re.compile(r\"trident/.+? rv:\", re.I).search(user_agent)\n+ and \")\" in user_agent\n+ and user_agent[-1] != \")\"\n+ ):\n+ last_closing_paren = user_agent.rindex(\")\")\n+\n for browser, regex in self.browsers: # noqa: B007\n- match = regex.search(user_agent)\n+ match = regex.search(user_agent[last_closing_paren:])\n if match is not None:\n version = match.group(1)\n break\n", "issue": "User agents not being detected\nWhile using your utility to parse user agents for calls to my site, I have a few that don't get parsed correctly:\r\n\r\nVersion not getting parsed on the following user agents:\r\n\r\n* `Mozilla/5.0 (Linux; Android 10; motorola edge) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.101 Mobile Safari/537.36`\r\n* `Mozilla/5.0 (Linux; Android 10; motorola edge) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.81 Mobile Safari/537.36`\r\n* `Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322)`\r\n\r\nDetected as 'mozilla' but is actually msie v11:\r\n\r\n* `Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko`\n", "before_files": [{"content": "import re\nimport typing as t\n\nif t.TYPE_CHECKING:\n from wsgiref.types import WSGIEnvironment\n\n\nclass UserAgentParser:\n \"\"\"A simple user agent parser. Used by the `UserAgent`.\"\"\"\n\n platform_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (\n (\" cros \", \"chromeos\"),\n (\"iphone|ios\", \"iphone\"),\n (\"ipad\", \"ipad\"),\n (r\"darwin\\b|mac\\b|os\\s*x\", \"macos\"),\n (\"win\", \"windows\"),\n (r\"android\", \"android\"),\n (\"netbsd\", \"netbsd\"),\n (\"openbsd\", \"openbsd\"),\n (\"freebsd\", \"freebsd\"),\n (\"dragonfly\", \"dragonflybsd\"),\n (\"(sun|i86)os\", \"solaris\"),\n (r\"x11\\b|lin(\\b|ux)?\", \"linux\"),\n (r\"nintendo\\s+wii\", \"wii\"),\n (\"irix\", \"irix\"),\n (\"hp-?ux\", \"hpux\"),\n (\"aix\", \"aix\"),\n (\"sco|unix_sv\", \"sco\"),\n (\"bsd\", \"bsd\"),\n (\"amiga\", \"amiga\"),\n (\"blackberry|playbook\", \"blackberry\"),\n (\"symbian\", \"symbian\"),\n )\n browser_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (\n (\"googlebot\", \"google\"),\n (\"msnbot\", \"msn\"),\n (\"yahoo\", \"yahoo\"),\n (\"ask jeeves\", \"ask\"),\n (r\"aol|america\\s+online\\s+browser\", \"aol\"),\n (r\"opera|opr\", \"opera\"),\n (\"edge|edg\", \"edge\"),\n (\"chrome|crios\", \"chrome\"),\n (\"seamonkey\", \"seamonkey\"),\n (\"firefox|firebird|phoenix|iceweasel\", \"firefox\"),\n (\"galeon\", \"galeon\"),\n (\"safari|version\", \"safari\"),\n (\"webkit\", \"webkit\"),\n (\"camino\", \"camino\"),\n (\"konqueror\", \"konqueror\"),\n (\"k-meleon\", \"kmeleon\"),\n (\"netscape\", \"netscape\"),\n (r\"msie|microsoft\\s+internet\\s+explorer|trident/.+? 
rv:\", \"msie\"),\n (\"lynx\", \"lynx\"),\n (\"links\", \"links\"),\n (\"Baiduspider\", \"baidu\"),\n (\"bingbot\", \"bing\"),\n (\"mozilla\", \"mozilla\"),\n )\n\n _browser_version_re = r\"(?:{pattern})[/\\sa-z(]*(\\d+[.\\da-z]+)?\"\n _language_re = re.compile(\n r\"(?:;\\s*|\\s+)(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*;|\"\n r\"(?:\\(|\\[|;)\\s*(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*(?:\\]|\\)|;)\"\n )\n\n def __init__(self) -> None:\n self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platform_rules]\n self.browsers = [\n (b, re.compile(self._browser_version_re.format(pattern=a), re.I))\n for a, b in self.browser_rules\n ]\n\n def __call__(\n self, user_agent: str\n ) -> t.Tuple[t.Optional[str], t.Optional[str], t.Optional[str], t.Optional[str]]:\n platform: t.Optional[str]\n browser: t.Optional[str]\n version: t.Optional[str]\n language: t.Optional[str]\n\n for platform, regex in self.platforms: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n break\n else:\n platform = None\n for browser, regex in self.browsers: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n version = match.group(1)\n break\n else:\n browser = version = None\n match = self._language_re.search(user_agent)\n if match is not None:\n language = match.group(1) or match.group(2)\n else:\n language = None\n return platform, browser, version, language\n\n\nclass UserAgent:\n \"\"\"Represents a user agent. Pass it a WSGI environment or a user agent\n string and you can inspect some of the details from the user agent\n string via the attributes. The following attributes exist:\n\n .. attribute:: string\n\n the raw user agent string\n\n .. attribute:: platform\n\n the browser platform. ``None`` if not recognized.\n The following platforms are currently recognized:\n\n - `aix`\n - `amiga`\n - `android`\n - `blackberry`\n - `bsd`\n - `chromeos`\n - `dragonflybsd`\n - `freebsd`\n - `hpux`\n - `ipad`\n - `iphone`\n - `irix`\n - `linux`\n - `macos`\n - `netbsd`\n - `openbsd`\n - `sco`\n - `solaris`\n - `symbian`\n - `wii`\n - `windows`\n\n .. attribute:: browser\n\n the name of the browser. ``None`` if not recognized.\n The following browsers are currently recognized:\n\n - `aol` *\n - `ask` *\n - `baidu` *\n - `bing` *\n - `camino`\n - `chrome`\n - `edge`\n - `firefox`\n - `galeon`\n - `google` *\n - `kmeleon`\n - `konqueror`\n - `links`\n - `lynx`\n - `mozilla`\n - `msie`\n - `msn`\n - `netscape`\n - `opera`\n - `safari`\n - `seamonkey`\n - `webkit`\n - `yahoo` *\n\n (Browsers marked with a star (``*``) are crawlers.)\n\n .. attribute:: version\n\n the version of the browser. ``None`` if not recognized.\n\n .. attribute:: language\n\n the language of the browser. ``None`` if not recognized.\n \"\"\"\n\n _parser = UserAgentParser()\n\n def __init__(self, environ_or_string: t.Union[\"WSGIEnvironment\", str]) -> None:\n if isinstance(environ_or_string, dict):\n self.string = environ_or_string.get(\"HTTP_USER_AGENT\", \"\")\n else:\n self.string = environ_or_string\n\n self.platform, self.browser, self.version, self.language = self._parser(\n self.string\n )\n\n def to_header(self) -> str:\n return self.string\n\n def __str__(self) -> str:\n return self.string\n\n def __bool__(self) -> bool:\n return bool(self.browser)\n\n def __repr__(self):\n return f\"<{type(self).__name__} {self.browser!r}/{self.version}>\"\n", "path": "src/werkzeug/useragents.py"}]} | 2,930 | 214 |
gh_patches_debug_14107 | rasdani/github-patches | git_diff | spack__spack-17427 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Errors untarring source tarballs in Docker
Some source tarballs are created in a way that tries to preserve user and group perms. 
Yes, it's a bad idea to create a source tarball for release with these properties -- but I suspect it's easy for this to go undetected until you run in a security-constrained Linux.
Cython is one example I hit when trying to build in a Docker container using ubuntu:latest
```
/bin/tar: Cython-0.25.2/.gitrev: Cannot change ownership to uid 96258, gid 5000: Invalid argument
.... (many many more errors)
/bin/tar: Cython-0.25.2/USAGE.txt: Cannot change ownership to uid 96258, gid 5000: Invalid argument
/bin/tar: Cython-0.25.2: Cannot change ownership to uid 96258, gid 5000: Invalid argument
/bin/tar: Exiting with failure status due to previous errors
```
The tar errors cause spack to stop. I am not sure if there is any way to defensively avoid this, but if it's possible it would be a good enhancement to spack. 
</issue>
<code>
[start of lib/spack/spack/util/compression.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 import re
7 import os
8 from itertools import product
9 from spack.util.executable import which
10
11 # Supported archive extensions.
12 PRE_EXTS = ["tar", "TAR"]
13 EXTS = ["gz", "bz2", "xz", "Z"]
14 NOTAR_EXTS = ["zip", "tgz", "tbz2", "txz"]
15
16 # Add PRE_EXTS and EXTS last so that .tar.gz is matched *before* .tar or .gz
17 ALLOWED_ARCHIVE_TYPES = [".".join(l) for l in product(
18 PRE_EXTS, EXTS)] + PRE_EXTS + EXTS + NOTAR_EXTS
19
20
21 def allowed_archive(path):
22 return any(path.endswith(t) for t in ALLOWED_ARCHIVE_TYPES)
23
24
25 def decompressor_for(path, extension=None):
26 """Get the appropriate decompressor for a path."""
27 if ((extension and re.match(r'\.?zip$', extension)) or
28 path.endswith('.zip')):
29 unzip = which('unzip', required=True)
30 unzip.add_default_arg('-q')
31 return unzip
32 if extension and re.match(r'gz', extension):
33 gunzip = which('gunzip', required=True)
34 return gunzip
35 if extension and re.match(r'bz2', extension):
36 bunzip2 = which('bunzip2', required=True)
37 return bunzip2
38 tar = which('tar', required=True)
39 tar.add_default_arg('-xf')
40 return tar
41
42
43 def strip_extension(path):
44 """Get the part of a path that does not include its compressed
45 type extension."""
46 for type in ALLOWED_ARCHIVE_TYPES:
47 suffix = r'\.%s$' % type
48 if re.search(suffix, path):
49 return re.sub(suffix, "", path)
50 return path
51
52
53 def extension(path):
54 """Get the archive extension for a path."""
55 if path is None:
56 raise ValueError("Can't call extension() on None")
57
58 # Strip sourceforge suffix.
59 if re.search(r'((?:sourceforge.net|sf.net)/.*)/download$', path):
60 path = os.path.dirname(path)
61
62 for t in ALLOWED_ARCHIVE_TYPES:
63 suffix = r'\.%s$' % t
64 if re.search(suffix, path):
65 return t
66 return None
67
[end of lib/spack/spack/util/compression.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/spack/spack/util/compression.py b/lib/spack/spack/util/compression.py
--- a/lib/spack/spack/util/compression.py
+++ b/lib/spack/spack/util/compression.py
@@ -14,7 +14,7 @@
NOTAR_EXTS = ["zip", "tgz", "tbz2", "txz"]
# Add PRE_EXTS and EXTS last so that .tar.gz is matched *before* .tar or .gz
-ALLOWED_ARCHIVE_TYPES = [".".join(l) for l in product(
+ALLOWED_ARCHIVE_TYPES = [".".join(ext) for ext in product(
PRE_EXTS, EXTS)] + PRE_EXTS + EXTS + NOTAR_EXTS
@@ -36,7 +36,7 @@
bunzip2 = which('bunzip2', required=True)
return bunzip2
tar = which('tar', required=True)
- tar.add_default_arg('-xf')
+ tar.add_default_arg('-oxf')
return tar
| {"golden_diff": "diff --git a/lib/spack/spack/util/compression.py b/lib/spack/spack/util/compression.py\n--- a/lib/spack/spack/util/compression.py\n+++ b/lib/spack/spack/util/compression.py\n@@ -14,7 +14,7 @@\n NOTAR_EXTS = [\"zip\", \"tgz\", \"tbz2\", \"txz\"]\n \n # Add PRE_EXTS and EXTS last so that .tar.gz is matched *before* .tar or .gz\n-ALLOWED_ARCHIVE_TYPES = [\".\".join(l) for l in product(\n+ALLOWED_ARCHIVE_TYPES = [\".\".join(ext) for ext in product(\n PRE_EXTS, EXTS)] + PRE_EXTS + EXTS + NOTAR_EXTS\n \n \n@@ -36,7 +36,7 @@\n bunzip2 = which('bunzip2', required=True)\n return bunzip2\n tar = which('tar', required=True)\n- tar.add_default_arg('-xf')\n+ tar.add_default_arg('-oxf')\n return tar\n", "issue": "Errors untaring source tarballs Docker\nSome source tarballs are created in a way that tries to preserve user and group perms. \r\nYes, it's a bad to create a source tarball for release with these properties -- but I suspect it's easy to go undetected until you run in a security constrained linux.\r\n\r\nCython is one example example I hit when trying to build in a Docker container using ubuntu:latest\r\n\r\n```\r\n/bin/tar: Cython-0.25.2/.gitrev: Cannot change ownership to uid 96258, gid 5000: Invalid argument\r\n.... (many many more errors)\r\n/bin/tar: Cython-0.25.2/USAGE.txt: Cannot change ownership to uid 96258, gid 5000: Invalid argument\r\n/bin/tar: Cython-0.25.2: Cannot change ownership to uid 96258, gid 5000: Invalid argument\r\n/bin/tar: Exiting with failure status due to previous errors\r\n```\r\nThe tar errors cause spack to stop. I am not sure if there is anyway to defensively avoid this, but if its possible it would be a good enhancement to spack. \r\n\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. 
See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nimport re\nimport os\nfrom itertools import product\nfrom spack.util.executable import which\n\n# Supported archive extensions.\nPRE_EXTS = [\"tar\", \"TAR\"]\nEXTS = [\"gz\", \"bz2\", \"xz\", \"Z\"]\nNOTAR_EXTS = [\"zip\", \"tgz\", \"tbz2\", \"txz\"]\n\n# Add PRE_EXTS and EXTS last so that .tar.gz is matched *before* .tar or .gz\nALLOWED_ARCHIVE_TYPES = [\".\".join(l) for l in product(\n PRE_EXTS, EXTS)] + PRE_EXTS + EXTS + NOTAR_EXTS\n\n\ndef allowed_archive(path):\n return any(path.endswith(t) for t in ALLOWED_ARCHIVE_TYPES)\n\n\ndef decompressor_for(path, extension=None):\n \"\"\"Get the appropriate decompressor for a path.\"\"\"\n if ((extension and re.match(r'\\.?zip$', extension)) or\n path.endswith('.zip')):\n unzip = which('unzip', required=True)\n unzip.add_default_arg('-q')\n return unzip\n if extension and re.match(r'gz', extension):\n gunzip = which('gunzip', required=True)\n return gunzip\n if extension and re.match(r'bz2', extension):\n bunzip2 = which('bunzip2', required=True)\n return bunzip2\n tar = which('tar', required=True)\n tar.add_default_arg('-xf')\n return tar\n\n\ndef strip_extension(path):\n \"\"\"Get the part of a path that does not include its compressed\n type extension.\"\"\"\n for type in ALLOWED_ARCHIVE_TYPES:\n suffix = r'\\.%s$' % type\n if re.search(suffix, path):\n return re.sub(suffix, \"\", path)\n return path\n\n\ndef extension(path):\n \"\"\"Get the archive extension for a path.\"\"\"\n if path is None:\n raise ValueError(\"Can't call extension() on None\")\n\n # Strip sourceforge suffix.\n if re.search(r'((?:sourceforge.net|sf.net)/.*)/download$', path):\n path = os.path.dirname(path)\n\n for t in ALLOWED_ARCHIVE_TYPES:\n suffix = r'\\.%s$' % t\n if re.search(suffix, path):\n return t\n return None\n", "path": "lib/spack/spack/util/compression.py"}]} | 1,474 | 227 |
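Editor's note on the record above: the one-character fix (`-xf` to `-oxf`) relies on GNU tar treating `-o` as `--no-same-owner` during extraction, so tar stops trying to chown extracted files to the archive's recorded uid/gid. A standalone sketch of the same idea, outside Spack's `which()`/`Executable` wrapper, is below; the helper name, paths, and the use of `subprocess` are my own illustrative choices.

```python
import subprocess


def extract_tarball(archive, dest="."):
    """Extract a tarball without trying to restore the archive's uid/gid.

    Passing -o (equivalent to --no-same-owner for GNU tar when extracting)
    avoids 'Cannot change ownership to uid ..., gid ...' failures inside
    containers and other permission-restricted environments.
    """
    subprocess.run(["tar", "-oxf", archive, "-C", dest], check=True)


if __name__ == "__main__":
    extract_tarball("Cython-0.25.2.tar.gz", dest="/tmp/build")  # hypothetical paths
```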
gh_patches_debug_27257 | rasdani/github-patches | git_diff | 3cn-ecn__nantralPlatform-177 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Frontend: planned events and archived events don't work
Archived events return a 500 error.
Planned events return nothing
</issue>
<code>
[start of server/apps/event/api_views.py]
1 from datetime import datetime
2
3 from rest_framework import generics
4
5 from .models import BaseEvent
6 from .serializers import BaseEventSerializer
7
8
9 class ListEventsGroupAPIView(generics.ListAPIView):
10 """List events for a group depending on the chosen
11 time window. By default only returns current events."""
12 serializer_class = BaseEventSerializer
13
14 def get_queryset(self):
15 if self.request.method == 'GET':
16 if self.request.GET.get('view') == 'archives':
17 return BaseEvent.objects.filter(group=self.kwargs['group'], date__lt=datetime.today())
18 elif self.request.get('view') == 'all':
19 return BaseEvent.objects.filter(group=self.kwargs['group'])
20 return BaseEvent.objects.filter(group=self.kwargs['group'], date__gte=datetime.today())
21
22
23 class UpdateEventAPIView(generics.RetrieveDestroyAPIView):
24 serializer_class = BaseEventSerializer
25 lookup_field = 'slug'
26 lookup_url_kwarg = 'event_slug'
27
28 def get_queryset(self):
29 return BaseEvent.objects.filter(slug=self.kwargs['event_slug'])
30
[end of server/apps/event/api_views.py]
[start of server/apps/event/views.py]
1 from datetime import date
2
3 from django.shortcuts import redirect, render
4 from django.contrib import messages
5 from django.contrib.auth.models import User
6 from django.views.generic.base import TemplateView, View
7 from django.views.generic import UpdateView, FormView
8 from django.contrib.auth.decorators import login_required
9
10 from .models import *
11 from .forms import EventForm, EventFormSet
12
13 from apps.group.models import Group
14 from apps.utils.accessMixins import LoginRequiredAccessMixin, UserIsAdmin
15
16
17 class EventDetailView(LoginRequiredAccessMixin, TemplateView):
18 template_name = 'event/detail.html'
19
20 def get_context_data(self, **kwargs):
21 context = super().get_context_data(**kwargs)
22 self.object = BaseEvent.get_event_by_slug(self.kwargs['event_slug'])
23 context['object'] = self.object
24 context['group'] = self.object.get_group
25 context['is_participating'] = self.object.is_participating(
26 self.request.user)
27 return context
28
29
30 class UpdateGroupCreateEventView(UserIsAdmin, FormView):
31 """In the context of a group, create event view."""
32 template_name = 'group/event/create.html'
33 form_class = EventForm
34
35 def get_context_data(self, **kwargs):
36 context = super().get_context_data(**kwargs)
37 context['object'] = Group.get_group_by_slug(self.kwargs['group_slug'])
38 return context
39
40 def form_valid(self, form, **kwargs):
41 event = form.save(commit=False)
42 event.group = Group.get_group_by_slug(
43 slug=self.kwargs['group_slug']).slug
44 event.save()
45 return redirect('group:create-event', self.kwargs['group_slug'])
46
47
48 class EventUpdateView(UserIsAdmin, UpdateView):
49 template_name = 'event/update.html'
50 fields = ['title', 'description', 'location',
51 'date', 'publicity', 'color', 'image']
52
53 def test_func(self) -> bool:
54 self.kwargs['group_slug'] = self.object.get_group.slug
55 return super().test_func()
56
57 def get_context_data(self, **kwargs):
58 context = super().get_context_data(**kwargs)
59 context['object'] = self.object.get_group
60 context['event'] = self.object
61 return context
62
63 def get_object(self, **kwargs):
64 return BaseEvent.get_event_by_slug(self.kwargs['event_slug'])
65
66 def dispatch(self, request, *args, **kwargs):
67 self.object = BaseEvent.get_event_by_slug(self.kwargs['event_slug'])
68 self.kwargs['group_slug'] = self.object.get_group.slug
69 if isinstance(self.object, EatingEvent):
70 self.fields = ['title', 'description', 'location',
71 'date', 'publicity', 'color', 'image', 'menu']
72 return super().dispatch(request, *args, **kwargs)
73
74
75 class UpdateGroupEventsView(UserIsAdmin, View):
76 template_name = 'group/event/planned_edit.html'
77
78 def get_context_data(self, **kwargs):
79 context = {}
80 context['object'] = Group.get_group_by_slug(kwargs['group_slug'])
81 context['events'] = BaseEvent.objects.filter(
82 group=kwargs['group_slug'], date__gte=date.today())
83 context['form'] = EventFormSet(queryset=context['events'])
84 return context
85
86 def get(self, request, group_slug):
87 return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))
88
89 def post(self, request, group_slug):
90 return edit_events(request, group_slug)
91
92
93 class UpdateGroupArchivedEventsView(UserIsAdmin, View):
94 template_name = 'group/archived_edit.html'
95
96 def get_context_data(self, **kwargs):
97 context = {}
98 context['object'] = Group.get_group_by_slug(kwargs['group_slug'])
99 context['events'] = BaseEvent.objects.filter(
100 group=kwargs['group_slug'], date__lt=date.today())
101 context['form'] = EventFormSet(queryset=context['events'])
102 return context
103
104 def get(self, request, group_slug):
105 return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))
106
107 def post(self, request, group_slug):
108 return edit_events(request, group_slug)
109
110
111 @login_required
112 def add_participant(request, event_slug):
113 """Adds the user to the list of participants."""
114 event = BaseEvent.get_event_by_slug(event_slug)
115 event.participants.add(request.user.student)
116 if request.GET.get('redirect'):
117 return redirect('home:home')
118 return redirect(event.get_absolute_url())
119
120
121 @login_required
122 def remove_participant(request, event_slug):
123 """Removes the user from the list of participants."""
124 event = BaseEvent.get_event_by_slug(event_slug)
125 event.participants.remove(request.user.student)
126 if request.GET.get('redirect'):
127 return redirect('home:home')
128 return redirect(event.get_absolute_url())
129
130
131 @login_required
132 def edit_events(request, group_slug):
133 group = Group.get_group_by_slug(group_slug)
134 form = EventFormSet(request.POST)
135 if form.is_valid():
136 events = form.save(commit=False)
137 # Link each event to the group
138 for event in events:
139 event.group = group.slug
140 event.save()
141 # Delete missing events
142 for event in form.deleted_objects:
143 event.delete()
144 messages.success(request, 'Events modifies')
145 return redirect('group:update-events', group_slug)
146 else:
147 messages.warning(request, form.errors)
148 return redirect('group:update-events', group_slug)
149
[end of server/apps/event/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/server/apps/event/api_views.py b/server/apps/event/api_views.py
--- a/server/apps/event/api_views.py
+++ b/server/apps/event/api_views.py
@@ -15,7 +15,7 @@
if self.request.method == 'GET':
if self.request.GET.get('view') == 'archives':
return BaseEvent.objects.filter(group=self.kwargs['group'], date__lt=datetime.today())
- elif self.request.get('view') == 'all':
+ elif self.request.GET.get('view') == 'all':
return BaseEvent.objects.filter(group=self.kwargs['group'])
return BaseEvent.objects.filter(group=self.kwargs['group'], date__gte=datetime.today())
diff --git a/server/apps/event/views.py b/server/apps/event/views.py
--- a/server/apps/event/views.py
+++ b/server/apps/event/views.py
@@ -91,13 +91,13 @@
class UpdateGroupArchivedEventsView(UserIsAdmin, View):
- template_name = 'group/archived_edit.html'
+ template_name = 'group/event/archived_edit.html'
def get_context_data(self, **kwargs):
context = {}
context['object'] = Group.get_group_by_slug(kwargs['group_slug'])
context['events'] = BaseEvent.objects.filter(
- group=kwargs['group_slug'], date__lt=date.today())
+ group=kwargs['group_slug'], date__lte=date.today())
context['form'] = EventFormSet(queryset=context['events'])
return context
| {"golden_diff": "diff --git a/server/apps/event/api_views.py b/server/apps/event/api_views.py\n--- a/server/apps/event/api_views.py\n+++ b/server/apps/event/api_views.py\n@@ -15,7 +15,7 @@\n if self.request.method == 'GET':\n if self.request.GET.get('view') == 'archives':\n return BaseEvent.objects.filter(group=self.kwargs['group'], date__lt=datetime.today())\n- elif self.request.get('view') == 'all':\n+ elif self.request.GET.get('view') == 'all':\n return BaseEvent.objects.filter(group=self.kwargs['group'])\n return BaseEvent.objects.filter(group=self.kwargs['group'], date__gte=datetime.today())\n \ndiff --git a/server/apps/event/views.py b/server/apps/event/views.py\n--- a/server/apps/event/views.py\n+++ b/server/apps/event/views.py\n@@ -91,13 +91,13 @@\n \n \n class UpdateGroupArchivedEventsView(UserIsAdmin, View):\n- template_name = 'group/archived_edit.html'\n+ template_name = 'group/event/archived_edit.html'\n \n def get_context_data(self, **kwargs):\n context = {}\n context['object'] = Group.get_group_by_slug(kwargs['group_slug'])\n context['events'] = BaseEvent.objects.filter(\n- group=kwargs['group_slug'], date__lt=date.today())\n+ group=kwargs['group_slug'], date__lte=date.today())\n context['form'] = EventFormSet(queryset=context['events'])\n return context\n", "issue": "Frontend events planif\u00e9s et events archiv\u00e9s ne fonctionnent pas\nEvents archiv\u00e9s renvoit une erreur 500.\r\nEvent planifi\u00e9s ne renvoit rien\n", "before_files": [{"content": "from datetime import datetime\n\nfrom rest_framework import generics\n\nfrom .models import BaseEvent\nfrom .serializers import BaseEventSerializer\n\n\nclass ListEventsGroupAPIView(generics.ListAPIView):\n \"\"\"List events for a group depending on the chosen\n time window. By default only returns current events.\"\"\"\n serializer_class = BaseEventSerializer\n\n def get_queryset(self):\n if self.request.method == 'GET':\n if self.request.GET.get('view') == 'archives':\n return BaseEvent.objects.filter(group=self.kwargs['group'], date__lt=datetime.today())\n elif self.request.get('view') == 'all':\n return BaseEvent.objects.filter(group=self.kwargs['group'])\n return BaseEvent.objects.filter(group=self.kwargs['group'], date__gte=datetime.today())\n\n\nclass UpdateEventAPIView(generics.RetrieveDestroyAPIView):\n serializer_class = BaseEventSerializer\n lookup_field = 'slug'\n lookup_url_kwarg = 'event_slug'\n\n def get_queryset(self):\n return BaseEvent.objects.filter(slug=self.kwargs['event_slug'])\n", "path": "server/apps/event/api_views.py"}, {"content": "from datetime import date\n\nfrom django.shortcuts import redirect, render\nfrom django.contrib import messages\nfrom django.contrib.auth.models import User\nfrom django.views.generic.base import TemplateView, View\nfrom django.views.generic import UpdateView, FormView\nfrom django.contrib.auth.decorators import login_required\n\nfrom .models import *\nfrom .forms import EventForm, EventFormSet\n\nfrom apps.group.models import Group\nfrom apps.utils.accessMixins import LoginRequiredAccessMixin, UserIsAdmin\n\n\nclass EventDetailView(LoginRequiredAccessMixin, TemplateView):\n template_name = 'event/detail.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n self.object = BaseEvent.get_event_by_slug(self.kwargs['event_slug'])\n context['object'] = self.object\n context['group'] = self.object.get_group\n context['is_participating'] = self.object.is_participating(\n self.request.user)\n return context\n\n\nclass 
UpdateGroupCreateEventView(UserIsAdmin, FormView):\n \"\"\"In the context of a group, create event view.\"\"\"\n template_name = 'group/event/create.html'\n form_class = EventForm\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['object'] = Group.get_group_by_slug(self.kwargs['group_slug'])\n return context\n\n def form_valid(self, form, **kwargs):\n event = form.save(commit=False)\n event.group = Group.get_group_by_slug(\n slug=self.kwargs['group_slug']).slug\n event.save()\n return redirect('group:create-event', self.kwargs['group_slug'])\n\n\nclass EventUpdateView(UserIsAdmin, UpdateView):\n template_name = 'event/update.html'\n fields = ['title', 'description', 'location',\n 'date', 'publicity', 'color', 'image']\n\n def test_func(self) -> bool:\n self.kwargs['group_slug'] = self.object.get_group.slug\n return super().test_func()\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['object'] = self.object.get_group\n context['event'] = self.object\n return context\n\n def get_object(self, **kwargs):\n return BaseEvent.get_event_by_slug(self.kwargs['event_slug'])\n\n def dispatch(self, request, *args, **kwargs):\n self.object = BaseEvent.get_event_by_slug(self.kwargs['event_slug'])\n self.kwargs['group_slug'] = self.object.get_group.slug\n if isinstance(self.object, EatingEvent):\n self.fields = ['title', 'description', 'location',\n 'date', 'publicity', 'color', 'image', 'menu']\n return super().dispatch(request, *args, **kwargs)\n\n\nclass UpdateGroupEventsView(UserIsAdmin, View):\n template_name = 'group/event/planned_edit.html'\n\n def get_context_data(self, **kwargs):\n context = {}\n context['object'] = Group.get_group_by_slug(kwargs['group_slug'])\n context['events'] = BaseEvent.objects.filter(\n group=kwargs['group_slug'], date__gte=date.today())\n context['form'] = EventFormSet(queryset=context['events'])\n return context\n\n def get(self, request, group_slug):\n return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))\n\n def post(self, request, group_slug):\n return edit_events(request, group_slug)\n\n\nclass UpdateGroupArchivedEventsView(UserIsAdmin, View):\n template_name = 'group/archived_edit.html'\n\n def get_context_data(self, **kwargs):\n context = {}\n context['object'] = Group.get_group_by_slug(kwargs['group_slug'])\n context['events'] = BaseEvent.objects.filter(\n group=kwargs['group_slug'], date__lt=date.today())\n context['form'] = EventFormSet(queryset=context['events'])\n return context\n\n def get(self, request, group_slug):\n return render(request, self.template_name, context=self.get_context_data(group_slug=group_slug))\n\n def post(self, request, group_slug):\n return edit_events(request, group_slug)\n\n\n@login_required\ndef add_participant(request, event_slug):\n \"\"\"Adds the user to the list of participants.\"\"\"\n event = BaseEvent.get_event_by_slug(event_slug)\n event.participants.add(request.user.student)\n if request.GET.get('redirect'):\n return redirect('home:home')\n return redirect(event.get_absolute_url())\n\n\n@login_required\ndef remove_participant(request, event_slug):\n \"\"\"Removes the user from the list of participants.\"\"\"\n event = BaseEvent.get_event_by_slug(event_slug)\n event.participants.remove(request.user.student)\n if request.GET.get('redirect'):\n return redirect('home:home')\n return redirect(event.get_absolute_url())\n\n\n@login_required\ndef edit_events(request, group_slug):\n group = 
Group.get_group_by_slug(group_slug)\n form = EventFormSet(request.POST)\n if form.is_valid():\n events = form.save(commit=False)\n # Link each event to the group\n for event in events:\n event.group = group.slug\n event.save()\n # Delete missing events\n for event in form.deleted_objects:\n event.delete()\n messages.success(request, 'Events modifies')\n return redirect('group:update-events', group_slug)\n else:\n messages.warning(request, form.errors)\n return redirect('group:update-events', group_slug)\n", "path": "server/apps/event/views.py"}]} | 2,386 | 322 |
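Editor's note on the record above: the API bug boils down to calling `.get()` on the request object itself; the query string lives in the `request.GET` QueryDict, and the request object has no such `get()` method, so `self.request.get('view')` errors out instead of returning the full event list. A framework-free sketch of the corrected branching, with a hypothetical group slug and a plain dict standing in for `request.GET`, might look like this:

```python
from datetime import date


def event_filter_kwargs(query_params, group_slug):
    """Build ORM filter kwargs for a group's events.

    `query_params.get('view')` stands in for `self.request.GET.get('view')`;
    calling `.get()` on the request object itself is what broke the view.
    """
    view = query_params.get("view")
    if view == "archives":
        return {"group": group_slug, "date__lt": date.today()}
    if view == "all":
        return {"group": group_slug}
    return {"group": group_slug, "date__gte": date.today()}


if __name__ == "__main__":
    print(event_filter_kwargs({"view": "all"}, "bde"))  # hypothetical slug
    print(event_filter_kwargs({}, "bde"))               # default: upcoming events only
```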
gh_patches_debug_4645 | rasdani/github-patches | git_diff | vispy__vispy-1676 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Window transparency issue in molecular_viewer.py
On my Windows 10 machine with a GeForce GTX 1080 Ti (driver version 430.39), there are serious issues in the visualization of the molecular_viewer.py example code (/examples/demo/gloo/molecular_viewer.py).
The background of the image is black no matter what, but wherever there is any color it takes on a strange transparency: you can see through the display window wherever it is superimposed over a non-black background.
Additionally, if you introduce code to take screenshots, the screenshots always show a black background with pure white molecule images.

The image exhibits the undesirable behavior. You can see the shape of my (black) console window placed *behind* the display window.
</issue>
<code>
[start of examples/demo/gloo/molecular_viewer.py]
1 # -*- coding: utf-8 -*-
2 # vispy: gallery 30
3 # -----------------------------------------------------------------------------
4 # 2014, Aurore Deschildre, Gael Goret, Cyrille Rossant, Nicolas P. Rougier.
5 # Distributed under the terms of the new BSD License.
6 # -----------------------------------------------------------------------------
7 import numpy as np
8
9 from vispy import gloo
10 from vispy import app
11 from vispy.util.transforms import perspective, translate, rotate
12 from vispy.io import load_data_file
13
14 vertex = """
15 #version 120
16
17 uniform mat4 u_model;
18 uniform mat4 u_view;
19 uniform mat4 u_projection;
20 uniform vec3 u_light_position;
21 uniform vec3 u_light_spec_position;
22
23 attribute vec3 a_position;
24 attribute vec3 a_color;
25 attribute float a_radius;
26
27 varying vec3 v_color;
28 varying vec4 v_eye_position;
29 varying float v_radius;
30 varying vec3 v_light_direction;
31
32 void main (void) {
33 v_radius = a_radius;
34 v_color = a_color;
35
36 v_eye_position = u_view * u_model * vec4(a_position,1.0);
37 v_light_direction = normalize(u_light_position);
38 float dist = length(v_eye_position.xyz);
39
40 gl_Position = u_projection * v_eye_position;
41
42 // stackoverflow.com/questions/8608844/...
43 // ... resizing-point-sprites-based-on-distance-from-the-camera
44 vec4 proj_corner = u_projection * vec4(a_radius, a_radius, v_eye_position.z, v_eye_position.w); // # noqa
45 gl_PointSize = 512.0 * proj_corner.x / proj_corner.w;
46 }
47 """
48
49 fragment = """
50 #version 120
51
52 uniform mat4 u_model;
53 uniform mat4 u_view;
54 uniform mat4 u_projection;
55 uniform vec3 u_light_position;
56 uniform vec3 u_light_spec_position;
57
58 varying vec3 v_color;
59 varying vec4 v_eye_position;
60 varying float v_radius;
61 varying vec3 v_light_direction;
62 void main()
63 {
64 // r^2 = (x - x0)^2 + (y - y0)^2 + (z - z0)^2
65 vec2 texcoord = gl_PointCoord* 2.0 - vec2(1.0);
66 float x = texcoord.x;
67 float y = texcoord.y;
68 float d = 1.0 - x*x - y*y;
69 if (d <= 0.0)
70 discard;
71
72 float z = sqrt(d);
73 vec4 pos = v_eye_position;
74 pos.z += v_radius*z;
75 vec3 pos2 = pos.xyz;
76 pos = u_projection * pos;
77 // gl_FragDepth = 0.5*(pos.z / pos.w)+0.5;
78 vec3 normal = vec3(x,y,z);
79 float diffuse = clamp(dot(normal, v_light_direction), 0.0, 1.0);
80
81 // Specular lighting.
82 vec3 M = pos2.xyz;
83 vec3 O = v_eye_position.xyz;
84 vec3 L = u_light_spec_position;
85 vec3 K = normalize(normalize(L - M) + normalize(O - M));
86 // WARNING: abs() is necessary, otherwise weird bugs may appear with some
87 // GPU drivers...
88 float specular = clamp(pow(abs(dot(normal, K)), 40.), 0.0, 1.0);
89 vec3 v_light = vec3(1., 1., 1.);
90 gl_FragColor.rgb = (.15*v_color + .55*diffuse * v_color
91 + .35*specular * v_light);
92 }
93 """
94
95
96 class Canvas(app.Canvas):
97
98 def __init__(self):
99 app.Canvas.__init__(self, title='Molecular viewer',
100 keys='interactive', size=(1200, 800))
101 self.ps = self.pixel_scale
102
103 self.translate = 40
104 self.program = gloo.Program(vertex, fragment)
105 self.view = translate((0, 0, -self.translate))
106 self.model = np.eye(4, dtype=np.float32)
107 self.projection = np.eye(4, dtype=np.float32)
108
109 self.apply_zoom()
110
111 fname = load_data_file('molecular_viewer/micelle.npz')
112 self.load_molecule(fname)
113 self.load_data()
114
115 self.theta = 0
116 self.phi = 0
117
118 gloo.set_state(depth_test=True, clear_color='black')
119 self._timer = app.Timer('auto', connect=self.on_timer, start=True)
120
121 self.show()
122
123 def load_molecule(self, fname):
124 molecule = np.load(fname)['molecule']
125 self._nAtoms = molecule.shape[0]
126
127 # The x,y,z values store in one array
128 self.coords = molecule[:, :3]
129
130 # The array that will store the color and alpha scale for all the atoms
131 self.atomsColours = molecule[:, 3:6]
132
133 # The array that will store the scale for all the atoms.
134 self.atomsScales = molecule[:, 6]
135
136 def load_data(self):
137 n = self._nAtoms
138
139 data = np.zeros(n, [('a_position', np.float32, 3),
140 ('a_color', np.float32, 3),
141 ('a_radius', np.float32, 1)])
142
143 data['a_position'] = self.coords
144 data['a_color'] = self.atomsColours
145 data['a_radius'] = self.atomsScales*self.ps
146
147 self.program.bind(gloo.VertexBuffer(data))
148
149 self.program['u_model'] = self.model
150 self.program['u_view'] = self.view
151 self.program['u_light_position'] = 0., 0., 2.
152 self.program['u_light_spec_position'] = -5., 5., -5.
153
154 def on_key_press(self, event):
155 if event.text == ' ':
156 if self.timer.running:
157 self.timer.stop()
158 else:
159 self.timer.start()
160 # if event.text == 'A':
161 # self.
162
163 def on_timer(self, event):
164 self.theta += .25
165 self.phi += .25
166 self.model = np.dot(rotate(self.theta, (0, 0, 1)),
167 rotate(self.phi, (0, 1, 0)))
168 self.program['u_model'] = self.model
169 self.update()
170
171 def on_resize(self, event):
172 width, height = event.size
173
174 def apply_zoom(self):
175 width, height = self.physical_size
176 gloo.set_viewport(0, 0, width, height)
177 self.projection = perspective(25.0, width / float(height), 2.0, 100.0)
178 self.program['u_projection'] = self.projection
179
180 def on_mouse_wheel(self, event):
181 self.translate -= event.delta[1]
182 self.translate = max(-1, self.translate)
183 self.view = translate((0, 0, -self.translate))
184
185 self.program['u_view'] = self.view
186 self.update()
187
188 def on_draw(self, event):
189 gloo.clear()
190 self.program.draw('points')
191
192
193 if __name__ == '__main__':
194 mvc = Canvas()
195 app.run()
196
[end of examples/demo/gloo/molecular_viewer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/demo/gloo/molecular_viewer.py b/examples/demo/gloo/molecular_viewer.py
--- a/examples/demo/gloo/molecular_viewer.py
+++ b/examples/demo/gloo/molecular_viewer.py
@@ -87,8 +87,8 @@
// GPU drivers...
float specular = clamp(pow(abs(dot(normal, K)), 40.), 0.0, 1.0);
vec3 v_light = vec3(1., 1., 1.);
- gl_FragColor.rgb = (.15*v_color + .55*diffuse * v_color
- + .35*specular * v_light);
+ gl_FragColor.rgba = vec4(.15*v_color + .55*diffuse * v_color
+ + .35*specular * v_light, 1.0);
}
"""
| {"golden_diff": "diff --git a/examples/demo/gloo/molecular_viewer.py b/examples/demo/gloo/molecular_viewer.py\n--- a/examples/demo/gloo/molecular_viewer.py\n+++ b/examples/demo/gloo/molecular_viewer.py\n@@ -87,8 +87,8 @@\n // GPU drivers...\n float specular = clamp(pow(abs(dot(normal, K)), 40.), 0.0, 1.0);\n vec3 v_light = vec3(1., 1., 1.);\n- gl_FragColor.rgb = (.15*v_color + .55*diffuse * v_color\n- + .35*specular * v_light);\n+ gl_FragColor.rgba = vec4(.15*v_color + .55*diffuse * v_color\n+ + .35*specular * v_light, 1.0);\n }\n \"\"\"\n", "issue": "Window transparency issue in molecular_viewer.py\nOn my Windows 10 machine with a GeForce GTX 1080 Ti (Driver version 430.39) there are serious issues in the visualization of the molecular_viewer.py example code (/examples/demo/gloo/molecular_viewer.py).\r\n\r\nThe background of the image is black no matter what; but wherever there is any color, the color takes on a strange transparency, where you can see through the display window if the window is superimposed over non-black backgrounds.\r\n\r\nAdditionally, if you introduce code to take screenshots, the screenshots always show a black background with pure white molecule images.\r\n\r\nThe image exhibits the undesirable behavior. You can see the shape of my (black) console window placed *behind* the display window.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# vispy: gallery 30\n# -----------------------------------------------------------------------------\n# 2014, Aurore Deschildre, Gael Goret, Cyrille Rossant, Nicolas P. Rougier.\n# Distributed under the terms of the new BSD License.\n# -----------------------------------------------------------------------------\nimport numpy as np\n\nfrom vispy import gloo\nfrom vispy import app\nfrom vispy.util.transforms import perspective, translate, rotate\nfrom vispy.io import load_data_file\n\nvertex = \"\"\"\n#version 120\n\nuniform mat4 u_model;\nuniform mat4 u_view;\nuniform mat4 u_projection;\nuniform vec3 u_light_position;\nuniform vec3 u_light_spec_position;\n\nattribute vec3 a_position;\nattribute vec3 a_color;\nattribute float a_radius;\n\nvarying vec3 v_color;\nvarying vec4 v_eye_position;\nvarying float v_radius;\nvarying vec3 v_light_direction;\n\nvoid main (void) {\n v_radius = a_radius;\n v_color = a_color;\n\n v_eye_position = u_view * u_model * vec4(a_position,1.0);\n v_light_direction = normalize(u_light_position);\n float dist = length(v_eye_position.xyz);\n\n gl_Position = u_projection * v_eye_position;\n\n // stackoverflow.com/questions/8608844/...\n // ... 
resizing-point-sprites-based-on-distance-from-the-camera\n vec4 proj_corner = u_projection * vec4(a_radius, a_radius, v_eye_position.z, v_eye_position.w); // # noqa\n gl_PointSize = 512.0 * proj_corner.x / proj_corner.w;\n}\n\"\"\"\n\nfragment = \"\"\"\n#version 120\n\nuniform mat4 u_model;\nuniform mat4 u_view;\nuniform mat4 u_projection;\nuniform vec3 u_light_position;\nuniform vec3 u_light_spec_position;\n\nvarying vec3 v_color;\nvarying vec4 v_eye_position;\nvarying float v_radius;\nvarying vec3 v_light_direction;\nvoid main()\n{\n // r^2 = (x - x0)^2 + (y - y0)^2 + (z - z0)^2\n vec2 texcoord = gl_PointCoord* 2.0 - vec2(1.0);\n float x = texcoord.x;\n float y = texcoord.y;\n float d = 1.0 - x*x - y*y;\n if (d <= 0.0)\n discard;\n\n float z = sqrt(d);\n vec4 pos = v_eye_position;\n pos.z += v_radius*z;\n vec3 pos2 = pos.xyz;\n pos = u_projection * pos;\n// gl_FragDepth = 0.5*(pos.z / pos.w)+0.5;\n vec3 normal = vec3(x,y,z);\n float diffuse = clamp(dot(normal, v_light_direction), 0.0, 1.0);\n\n // Specular lighting.\n vec3 M = pos2.xyz;\n vec3 O = v_eye_position.xyz;\n vec3 L = u_light_spec_position;\n vec3 K = normalize(normalize(L - M) + normalize(O - M));\n // WARNING: abs() is necessary, otherwise weird bugs may appear with some\n // GPU drivers...\n float specular = clamp(pow(abs(dot(normal, K)), 40.), 0.0, 1.0);\n vec3 v_light = vec3(1., 1., 1.);\n gl_FragColor.rgb = (.15*v_color + .55*diffuse * v_color\n + .35*specular * v_light);\n}\n\"\"\"\n\n\nclass Canvas(app.Canvas):\n\n def __init__(self):\n app.Canvas.__init__(self, title='Molecular viewer',\n keys='interactive', size=(1200, 800))\n self.ps = self.pixel_scale\n\n self.translate = 40\n self.program = gloo.Program(vertex, fragment)\n self.view = translate((0, 0, -self.translate))\n self.model = np.eye(4, dtype=np.float32)\n self.projection = np.eye(4, dtype=np.float32)\n\n self.apply_zoom()\n\n fname = load_data_file('molecular_viewer/micelle.npz')\n self.load_molecule(fname)\n self.load_data()\n\n self.theta = 0\n self.phi = 0\n\n gloo.set_state(depth_test=True, clear_color='black')\n self._timer = app.Timer('auto', connect=self.on_timer, start=True)\n\n self.show()\n\n def load_molecule(self, fname):\n molecule = np.load(fname)['molecule']\n self._nAtoms = molecule.shape[0]\n\n # The x,y,z values store in one array\n self.coords = molecule[:, :3]\n\n # The array that will store the color and alpha scale for all the atoms\n self.atomsColours = molecule[:, 3:6]\n\n # The array that will store the scale for all the atoms.\n self.atomsScales = molecule[:, 6]\n\n def load_data(self):\n n = self._nAtoms\n\n data = np.zeros(n, [('a_position', np.float32, 3),\n ('a_color', np.float32, 3),\n ('a_radius', np.float32, 1)])\n\n data['a_position'] = self.coords\n data['a_color'] = self.atomsColours\n data['a_radius'] = self.atomsScales*self.ps\n\n self.program.bind(gloo.VertexBuffer(data))\n\n self.program['u_model'] = self.model\n self.program['u_view'] = self.view\n self.program['u_light_position'] = 0., 0., 2.\n self.program['u_light_spec_position'] = -5., 5., -5.\n\n def on_key_press(self, event):\n if event.text == ' ':\n if self.timer.running:\n self.timer.stop()\n else:\n self.timer.start()\n # if event.text == 'A':\n # self.\n\n def on_timer(self, event):\n self.theta += .25\n self.phi += .25\n self.model = np.dot(rotate(self.theta, (0, 0, 1)),\n rotate(self.phi, (0, 1, 0)))\n self.program['u_model'] = self.model\n self.update()\n\n def on_resize(self, event):\n width, height = event.size\n\n def apply_zoom(self):\n width, 
height = self.physical_size\n gloo.set_viewport(0, 0, width, height)\n self.projection = perspective(25.0, width / float(height), 2.0, 100.0)\n self.program['u_projection'] = self.projection\n\n def on_mouse_wheel(self, event):\n self.translate -= event.delta[1]\n self.translate = max(-1, self.translate)\n self.view = translate((0, 0, -self.translate))\n\n self.program['u_view'] = self.view\n self.update()\n\n def on_draw(self, event):\n gloo.clear()\n self.program.draw('points')\n\n\nif __name__ == '__main__':\n mvc = Canvas()\n app.run()\n", "path": "examples/demo/gloo/molecular_viewer.py"}]} | 2,831 | 188 |
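Editor's note on the record above: the see-through window is the usual symptom of a fragment shader that assigns `gl_FragColor.rgb` but never writes the alpha channel, leaving it undefined; on drivers and compositors that honour that alpha, whatever is behind the canvas bleeds through. The patched shader simply emits an opaque `vec4`. Below is just the tail of the fragment shader with that change, kept as a Python string the way the example stores its GLSL; the preceding sphere-impostor and lighting code is unchanged and omitted.

```python
# Tail of the fragment shader after the fix; everything above this point
# (diffuse/specular lighting) is as in the example's `fragment` string.
FRAGMENT_TAIL = """
    float specular = clamp(pow(abs(dot(normal, K)), 40.), 0.0, 1.0);
    vec3 v_light = vec3(1., 1., 1.);
    // Write all four channels: leaving .a undefined is what let the
    // desktop show through and made screenshots come out wrong.
    gl_FragColor.rgba = vec4(.15*v_color + .55*diffuse * v_color
                             + .35*specular * v_light, 1.0);
}
"""
```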
gh_patches_debug_23113 | rasdani/github-patches | git_diff | sopel-irc__sopel-1848 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xkcd outputs link even if triggered by one
The `xkcd` plugin shouldn't output a link to the comic if it was triggered by a link.
See `wikipedia`, for example. This is an anti-cycling measure in case multiple bots handling the same link(s) are present in the channel.
</issue>
<code>
[start of sopel/modules/xkcd.py]
1 # coding=utf-8
2 """
3 xkcd.py - Sopel xkcd Module
4 Copyright 2010, Michael Yanovich (yanovich.net), and Morgan Goose
5 Copyright 2012, Lior Ramati
6 Copyright 2013, Elsie Powell (embolalia.com)
7 Licensed under the Eiffel Forum License 2.
8
9 https://sopel.chat
10 """
11 from __future__ import unicode_literals, absolute_import, print_function, division
12
13 import random
14 import re
15
16 import requests
17
18 from sopel.modules.search import bing_search
19 from sopel.module import commands, url
20
21
22 ignored_sites = [
23 # For searching the web
24 'almamater.xkcd.com',
25 'blog.xkcd.com',
26 'blag.xkcd.com',
27 'forums.xkcd.com',
28 'fora.xkcd.com',
29 'forums3.xkcd.com',
30 'store.xkcd.com',
31 'wiki.xkcd.com',
32 'what-if.xkcd.com',
33 ]
34 sites_query = ' site:xkcd.com -site:' + ' -site:'.join(ignored_sites)
35
36
37 def get_info(number=None):
38 if number:
39 url = 'https://xkcd.com/{}/info.0.json'.format(number)
40 else:
41 url = 'https://xkcd.com/info.0.json'
42 data = requests.get(url).json()
43 data['url'] = 'https://xkcd.com/' + str(data['num'])
44 return data
45
46
47 def web_search(query):
48 url = bing_search(query + sites_query)
49 if not url:
50 return None
51 match = re.match(r'(?:https?://)?xkcd.com/(\d+)/?', url)
52 if match:
53 return match.group(1)
54
55
56 @commands('xkcd')
57 def xkcd(bot, trigger):
58 """.xkcd - Finds an xkcd comic strip.
59
60 Takes one of 3 inputs:
61
62 * If no input is provided it will return a random comic
63 * If numeric input is provided it will return that comic, or the
64 nth-latest comic if the number is non-positive
65 * If non-numeric input is provided it will return the first search result
66 for those keywords on the xkcd.com site
67 """
68 # get latest comic for rand function and numeric input
69 latest = get_info()
70 max_int = latest['num']
71
72 # if no input is given (pre - lior's edits code)
73 if not trigger.group(2): # get rand comic
74 random.seed()
75 requested = get_info(random.randint(1, max_int + 1))
76 else:
77 query = trigger.group(2).strip()
78
79 numbered = re.match(r"^(#|\+|-)?(\d+)$", query)
80 if numbered:
81 query = int(numbered.group(2))
82 if numbered.group(1) == "-":
83 query = -query
84 return numbered_result(bot, query, latest)
85 else:
86 # Non-number: search the web.
87 if (query.lower() == "latest" or query.lower() == "newest"):
88 requested = latest
89 else:
90 number = web_search(query)
91 if not number:
92 bot.say('Could not find any comics for that query.')
93 return
94 requested = get_info(number)
95
96 say_result(bot, requested)
97
98
99 def numbered_result(bot, query, latest):
100 max_int = latest['num']
101 if query > max_int:
102 bot.say(("Sorry, comic #{} hasn't been posted yet. "
103 "The last comic was #{}").format(query, max_int))
104 return
105 elif query <= -max_int:
106 bot.say(("Sorry, but there were only {} comics "
107 "released yet so far").format(max_int))
108 return
109 elif abs(query) == 0:
110 requested = latest
111 elif query == 404 or max_int + query == 404:
112 bot.say("404 - Not Found") # don't error on that one
113 return
114 elif query > 0:
115 requested = get_info(query)
116 else:
117 # Negative: go back that many from current
118 requested = get_info(max_int + query)
119
120 say_result(bot, requested)
121
122
123 def say_result(bot, result):
124 message = '{} | {} | Alt-text: {}'.format(result['url'], result['title'],
125 result['alt'])
126 bot.say(message)
127
128
129 @url(r'xkcd.com/(\d+)')
130 def get_url(bot, trigger, match):
131 latest = get_info()
132 numbered_result(bot, int(match.group(1)), latest)
133
[end of sopel/modules/xkcd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sopel/modules/xkcd.py b/sopel/modules/xkcd.py
--- a/sopel/modules/xkcd.py
+++ b/sopel/modules/xkcd.py
@@ -96,7 +96,7 @@
say_result(bot, requested)
-def numbered_result(bot, query, latest):
+def numbered_result(bot, query, latest, commanded=True):
max_int = latest['num']
if query > max_int:
bot.say(("Sorry, comic #{} hasn't been posted yet. "
@@ -117,16 +117,18 @@
# Negative: go back that many from current
requested = get_info(max_int + query)
- say_result(bot, requested)
+ say_result(bot, requested, commanded)
-def say_result(bot, result):
- message = '{} | {} | Alt-text: {}'.format(result['url'], result['title'],
- result['alt'])
+def say_result(bot, result, commanded=True):
+ message = '{}{} | Alt-text: {}'.format(
+ result['url'] + ' | ' if commanded else '',
+ result['title'], result['alt']
+ )
bot.say(message)
@url(r'xkcd.com/(\d+)')
def get_url(bot, trigger, match):
latest = get_info()
- numbered_result(bot, int(match.group(1)), latest)
+ numbered_result(bot, int(match.group(1)), latest, commanded=False)
| {"golden_diff": "diff --git a/sopel/modules/xkcd.py b/sopel/modules/xkcd.py\n--- a/sopel/modules/xkcd.py\n+++ b/sopel/modules/xkcd.py\n@@ -96,7 +96,7 @@\n say_result(bot, requested)\n \n \n-def numbered_result(bot, query, latest):\n+def numbered_result(bot, query, latest, commanded=True):\n max_int = latest['num']\n if query > max_int:\n bot.say((\"Sorry, comic #{} hasn't been posted yet. \"\n@@ -117,16 +117,18 @@\n # Negative: go back that many from current\n requested = get_info(max_int + query)\n \n- say_result(bot, requested)\n+ say_result(bot, requested, commanded)\n \n \n-def say_result(bot, result):\n- message = '{} | {} | Alt-text: {}'.format(result['url'], result['title'],\n- result['alt'])\n+def say_result(bot, result, commanded=True):\n+ message = '{}{} | Alt-text: {}'.format(\n+ result['url'] + ' | ' if commanded else '',\n+ result['title'], result['alt']\n+ )\n bot.say(message)\n \n \n @url(r'xkcd.com/(\\d+)')\n def get_url(bot, trigger, match):\n latest = get_info()\n- numbered_result(bot, int(match.group(1)), latest)\n+ numbered_result(bot, int(match.group(1)), latest, commanded=False)\n", "issue": "xkcd outputs link even if triggered by one\nThe `xkcd` plugin shouldn't output a link to the comic if it was triggered by a link.\r\n\r\nSee `wikipedia`, for example. This is an anti-cycling measure in case multiple bots handling the same link(s) are present in the channel.\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nxkcd.py - Sopel xkcd Module\nCopyright 2010, Michael Yanovich (yanovich.net), and Morgan Goose\nCopyright 2012, Lior Ramati\nCopyright 2013, Elsie Powell (embolalia.com)\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport random\nimport re\n\nimport requests\n\nfrom sopel.modules.search import bing_search\nfrom sopel.module import commands, url\n\n\nignored_sites = [\n # For searching the web\n 'almamater.xkcd.com',\n 'blog.xkcd.com',\n 'blag.xkcd.com',\n 'forums.xkcd.com',\n 'fora.xkcd.com',\n 'forums3.xkcd.com',\n 'store.xkcd.com',\n 'wiki.xkcd.com',\n 'what-if.xkcd.com',\n]\nsites_query = ' site:xkcd.com -site:' + ' -site:'.join(ignored_sites)\n\n\ndef get_info(number=None):\n if number:\n url = 'https://xkcd.com/{}/info.0.json'.format(number)\n else:\n url = 'https://xkcd.com/info.0.json'\n data = requests.get(url).json()\n data['url'] = 'https://xkcd.com/' + str(data['num'])\n return data\n\n\ndef web_search(query):\n url = bing_search(query + sites_query)\n if not url:\n return None\n match = re.match(r'(?:https?://)?xkcd.com/(\\d+)/?', url)\n if match:\n return match.group(1)\n\n\n@commands('xkcd')\ndef xkcd(bot, trigger):\n \"\"\".xkcd - Finds an xkcd comic strip.\n\n Takes one of 3 inputs:\n\n * If no input is provided it will return a random comic\n * If numeric input is provided it will return that comic, or the\n nth-latest comic if the number is non-positive\n * If non-numeric input is provided it will return the first search result\n for those keywords on the xkcd.com site\n \"\"\"\n # get latest comic for rand function and numeric input\n latest = get_info()\n max_int = latest['num']\n\n # if no input is given (pre - lior's edits code)\n if not trigger.group(2): # get rand comic\n random.seed()\n requested = get_info(random.randint(1, max_int + 1))\n else:\n query = trigger.group(2).strip()\n\n numbered = re.match(r\"^(#|\\+|-)?(\\d+)$\", query)\n if numbered:\n query = int(numbered.group(2))\n if numbered.group(1) == 
\"-\":\n query = -query\n return numbered_result(bot, query, latest)\n else:\n # Non-number: search the web.\n if (query.lower() == \"latest\" or query.lower() == \"newest\"):\n requested = latest\n else:\n number = web_search(query)\n if not number:\n bot.say('Could not find any comics for that query.')\n return\n requested = get_info(number)\n\n say_result(bot, requested)\n\n\ndef numbered_result(bot, query, latest):\n max_int = latest['num']\n if query > max_int:\n bot.say((\"Sorry, comic #{} hasn't been posted yet. \"\n \"The last comic was #{}\").format(query, max_int))\n return\n elif query <= -max_int:\n bot.say((\"Sorry, but there were only {} comics \"\n \"released yet so far\").format(max_int))\n return\n elif abs(query) == 0:\n requested = latest\n elif query == 404 or max_int + query == 404:\n bot.say(\"404 - Not Found\") # don't error on that one\n return\n elif query > 0:\n requested = get_info(query)\n else:\n # Negative: go back that many from current\n requested = get_info(max_int + query)\n\n say_result(bot, requested)\n\n\ndef say_result(bot, result):\n message = '{} | {} | Alt-text: {}'.format(result['url'], result['title'],\n result['alt'])\n bot.say(message)\n\n\n@url(r'xkcd.com/(\\d+)')\ndef get_url(bot, trigger, match):\n latest = get_info()\n numbered_result(bot, int(match.group(1)), latest)\n", "path": "sopel/modules/xkcd.py"}]} | 1,913 | 324 |
gh_patches_debug_1564 | rasdani/github-patches | git_diff | python-trio__trio-1041 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
trio.Cancelled missing readable str()
Trying to print a trio.Cancelled exception e.g. with `f'{exc}'` yields an empty string. I usually resort to `type(exc)` when I need to see what's going on.
</issue>
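A short editorial aside, not part of the original report: the empty f-string falls straight out of `BaseException.__str__`, which renders the constructor arguments, and `Cancelled` is always created via `super().__init__()` with no arguments, so `str(exc)` is `''`. The stand-in below reproduces that behaviour and shows one obvious, hypothetical remedy; it is not necessarily the fix the project adopted.

```python
class FakeCancelled(BaseException):
    """Stand-in for trio.Cancelled: constructed with no arguments."""


class FakeCancelledWithStr(FakeCancelled):
    def __str__(self):
        return "Cancelled"


if __name__ == "__main__":
    exc = FakeCancelled()
    print(repr(f"{exc}"))                     # '' -- the behaviour the issue reports
    print(repr(f"{FakeCancelledWithStr()}"))  # 'Cancelled' -- one possible remedy
```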
<code>
[start of trio/_core/_exceptions.py]
1 import attr
2
3
4 class TrioInternalError(Exception):
5 """Raised by :func:`run` if we encounter a bug in trio, or (possibly) a
6 misuse of one of the low-level :mod:`trio.hazmat` APIs.
7
8 This should never happen! If you get this error, please file a bug.
9
10 Unfortunately, if you get this error it also means that all bets are off –
11 trio doesn't know what is going on and its normal invariants may be void.
12 (For example, we might have "lost track" of a task. Or lost track of all
13 tasks.) Again, though, this shouldn't happen.
14
15 """
16 pass
17
18
19 class RunFinishedError(RuntimeError):
20 """Raised by ``run_in_trio_thread`` and similar functions if the
21 corresponding call to :func:`trio.run` has already finished.
22
23 """
24 pass
25
26
27 class WouldBlock(Exception):
28 """Raised by ``X_nowait`` functions if ``X`` would block.
29
30 """
31 pass
32
33
34 class Cancelled(BaseException):
35 """Raised by blocking calls if the surrounding scope has been cancelled.
36
37 You should let this exception propagate, to be caught by the relevant
38 cancel scope. To remind you of this, it inherits from :exc:`BaseException`
39 instead of :exc:`Exception`, just like :exc:`KeyboardInterrupt` and
40 :exc:`SystemExit` do. This means that if you write something like::
41
42 try:
43 ...
44 except Exception:
45 ...
46
47 then this *won't* catch a :exc:`Cancelled` exception.
48
49 You cannot raise :exc:`Cancelled` yourself. Attempting to do so
50 will produce a :exc:`RuntimeError`. Use :meth:`cancel_scope.cancel()
51 <trio.CancelScope.cancel>` instead.
52
53 .. note::
54
55 In the US it's also common to see this word spelled "canceled", with
56 only one "l". This is a `recent
57 <https://books.google.com/ngrams/graph?content=canceled%2Ccancelled&year_start=1800&year_end=2000&corpus=5&smoothing=3&direct_url=t1%3B%2Ccanceled%3B%2Cc0%3B.t1%3B%2Ccancelled%3B%2Cc0>`__
58 and `US-specific
59 <https://books.google.com/ngrams/graph?content=canceled%2Ccancelled&year_start=1800&year_end=2000&corpus=18&smoothing=3&share=&direct_url=t1%3B%2Ccanceled%3B%2Cc0%3B.t1%3B%2Ccancelled%3B%2Cc0>`__
60 innovation, and even in the US both forms are still commonly used. So
61 for consistency with the rest of the world and with "cancellation"
62 (which always has two "l"s), trio uses the two "l" spelling
63 everywhere.
64
65 """
66 __marker = object()
67
68 def __init__(self, _marker=None):
69 if _marker is not self.__marker:
70 raise RuntimeError(
71 'Cancelled should not be raised directly. Use the cancel() '
72 'method on your cancel scope.'
73 )
74 super().__init__()
75
76 @classmethod
77 def _init(cls):
78 """A private constructor so that a user-created instance of Cancelled
79 can raise an appropriate error. see `issue #342
80 <https://github.com/python-trio/trio/issues/342>`__.
81 """
82 return cls(_marker=cls.__marker)
83
84
85 class BusyResourceError(Exception):
86 """Raised when a task attempts to use a resource that some other task is
87 already using, and this would lead to bugs and nonsense.
88
89 For example, if two tasks try to send data through the same socket at the
90 same time, trio will raise :class:`BusyResourceError` instead of letting
91 the data get scrambled.
92
93 """
94
95
96 class ClosedResourceError(Exception):
97 """Raised when attempting to use a resource after it has been closed.
98
99 Note that "closed" here means that *your* code closed the resource,
100 generally by calling a method with a name like ``close`` or ``aclose``, or
101 by exiting a context manager. If a problem arises elsewhere – for example,
102 because of a network failure, or because a remote peer closed their end of
103 a connection – then that should be indicated by a different exception
104 class, like :exc:`BrokenResourceError` or an :exc:`OSError` subclass.
105
106 """
107
108
109 class BrokenResourceError(Exception):
110 """Raised when an attempt to use a resource fails due to external
111 circumstances.
112
113 For example, you might get this if you try to send data on a stream where
114 the remote side has already closed the connection.
115
116 You *don't* get this error if *you* closed the resource – in that case you
117 get :class:`ClosedResourceError`.
118
119 This exception's ``__cause__`` attribute will often contain more
120 information about the underlying error.
121
122 """
123
124
125 class EndOfChannel(Exception):
126 """Raised when trying to receive from a :class:`trio.abc.ReceiveChannel`
127 that has no more data to receive.
128
129 This is analogous to an "end-of-file" condition, but for channels.
130
131 """
132
[end of trio/_core/_exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/trio/_core/_exceptions.py b/trio/_core/_exceptions.py
--- a/trio/_core/_exceptions.py
+++ b/trio/_core/_exceptions.py
@@ -73,6 +73,9 @@
)
super().__init__()
+ def __str__(self):
+ return "Cancelled"
+
@classmethod
def _init(cls):
"""A private constructor so that a user-created instance of Cancelled
| {"golden_diff": "diff --git a/trio/_core/_exceptions.py b/trio/_core/_exceptions.py\n--- a/trio/_core/_exceptions.py\n+++ b/trio/_core/_exceptions.py\n@@ -73,6 +73,9 @@\n )\n super().__init__()\n \n+ def __str__(self):\n+ return \"Cancelled\"\n+\n @classmethod\n def _init(cls):\n \"\"\"A private constructor so that a user-created instance of Cancelled\n", "issue": "trio.Cancelled missing readable str()\nTrying to print a trio.Cancelled exception e.g. with `f'{exc}'` yields an empty string. I usually resort to `type(exc)` when I need to see what's going on.\n", "before_files": [{"content": "import attr\n\n\nclass TrioInternalError(Exception):\n \"\"\"Raised by :func:`run` if we encounter a bug in trio, or (possibly) a\n misuse of one of the low-level :mod:`trio.hazmat` APIs.\n\n This should never happen! If you get this error, please file a bug.\n\n Unfortunately, if you get this error it also means that all bets are off \u2013\n trio doesn't know what is going on and its normal invariants may be void.\n (For example, we might have \"lost track\" of a task. Or lost track of all\n tasks.) Again, though, this shouldn't happen.\n\n \"\"\"\n pass\n\n\nclass RunFinishedError(RuntimeError):\n \"\"\"Raised by ``run_in_trio_thread`` and similar functions if the\n corresponding call to :func:`trio.run` has already finished.\n\n \"\"\"\n pass\n\n\nclass WouldBlock(Exception):\n \"\"\"Raised by ``X_nowait`` functions if ``X`` would block.\n\n \"\"\"\n pass\n\n\nclass Cancelled(BaseException):\n \"\"\"Raised by blocking calls if the surrounding scope has been cancelled.\n\n You should let this exception propagate, to be caught by the relevant\n cancel scope. To remind you of this, it inherits from :exc:`BaseException`\n instead of :exc:`Exception`, just like :exc:`KeyboardInterrupt` and\n :exc:`SystemExit` do. This means that if you write something like::\n\n try:\n ...\n except Exception:\n ...\n\n then this *won't* catch a :exc:`Cancelled` exception.\n\n You cannot raise :exc:`Cancelled` yourself. Attempting to do so\n will produce a :exc:`RuntimeError`. Use :meth:`cancel_scope.cancel()\n <trio.CancelScope.cancel>` instead.\n\n .. note::\n\n In the US it's also common to see this word spelled \"canceled\", with\n only one \"l\". This is a `recent\n <https://books.google.com/ngrams/graph?content=canceled%2Ccancelled&year_start=1800&year_end=2000&corpus=5&smoothing=3&direct_url=t1%3B%2Ccanceled%3B%2Cc0%3B.t1%3B%2Ccancelled%3B%2Cc0>`__\n and `US-specific\n <https://books.google.com/ngrams/graph?content=canceled%2Ccancelled&year_start=1800&year_end=2000&corpus=18&smoothing=3&share=&direct_url=t1%3B%2Ccanceled%3B%2Cc0%3B.t1%3B%2Ccancelled%3B%2Cc0>`__\n innovation, and even in the US both forms are still commonly used. So\n for consistency with the rest of the world and with \"cancellation\"\n (which always has two \"l\"s), trio uses the two \"l\" spelling\n everywhere.\n\n \"\"\"\n __marker = object()\n\n def __init__(self, _marker=None):\n if _marker is not self.__marker:\n raise RuntimeError(\n 'Cancelled should not be raised directly. Use the cancel() '\n 'method on your cancel scope.'\n )\n super().__init__()\n\n @classmethod\n def _init(cls):\n \"\"\"A private constructor so that a user-created instance of Cancelled\n can raise an appropriate error. 
see `issue #342\n <https://github.com/python-trio/trio/issues/342>`__.\n \"\"\"\n return cls(_marker=cls.__marker)\n\n\nclass BusyResourceError(Exception):\n \"\"\"Raised when a task attempts to use a resource that some other task is\n already using, and this would lead to bugs and nonsense.\n\n For example, if two tasks try to send data through the same socket at the\n same time, trio will raise :class:`BusyResourceError` instead of letting\n the data get scrambled.\n\n \"\"\"\n\n\nclass ClosedResourceError(Exception):\n \"\"\"Raised when attempting to use a resource after it has been closed.\n\n Note that \"closed\" here means that *your* code closed the resource,\n generally by calling a method with a name like ``close`` or ``aclose``, or\n by exiting a context manager. If a problem arises elsewhere \u2013 for example,\n because of a network failure, or because a remote peer closed their end of\n a connection \u2013 then that should be indicated by a different exception\n class, like :exc:`BrokenResourceError` or an :exc:`OSError` subclass.\n\n \"\"\"\n\n\nclass BrokenResourceError(Exception):\n \"\"\"Raised when an attempt to use a resource fails due to external\n circumstances.\n\n For example, you might get this if you try to send data on a stream where\n the remote side has already closed the connection.\n\n You *don't* get this error if *you* closed the resource \u2013 in that case you\n get :class:`ClosedResourceError`.\n\n This exception's ``__cause__`` attribute will often contain more\n information about the underlying error.\n\n \"\"\"\n\n\nclass EndOfChannel(Exception):\n \"\"\"Raised when trying to receive from a :class:`trio.abc.ReceiveChannel`\n that has no more data to receive.\n\n This is analogous to an \"end-of-file\" condition, but for channels.\n\n \"\"\"\n", "path": "trio/_core/_exceptions.py"}]} | 2,059 | 100 |
gh_patches_debug_14423 | rasdani/github-patches | git_diff | translate__pootle-4087 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Elasticsearch 1.7.0 breaks Pootle if ES not set up
With `elasticsearch==1.7.0`, calling `es.ping()` against a server that has not been set up raises `ConnectionError`.
This breaks the default install here:
https://github.com/translate/pootle/blob/master/pootle/core/search/backends/elasticsearch.py#L29
</issue>
<code>
[start of pootle/core/search/backends/elasticsearch.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 from __future__ import absolute_import
11
12 __all__ = ('ElasticSearchBackend',)
13
14 try:
15 from elasticsearch import Elasticsearch
16 except:
17 Elasticsearch = None
18
19 from ..base import SearchBackend
20
21
22 class ElasticSearchBackend(SearchBackend):
23 def __init__(self, config_name):
24 super(ElasticSearchBackend, self).__init__(config_name)
25 self._es = self._get_es_server()
26 self._create_index_if_missing()
27
28 def _server_setup_and_alive(self):
29 return self._es is not None and self._es.ping()
30
31 def _get_es_server(self):
32 if self._settings is None or Elasticsearch is None:
33 return None
34 return Elasticsearch([
35 {'host': self._settings['HOST'],
36 'port': self._settings['PORT']},
37 ])
38
39 def _create_index_if_missing(self):
40 if self._server_setup_and_alive():
41 if not self._es.indices.exists(self._settings['INDEX_NAME']):
42 self._es.indices.create(self._settings['INDEX_NAME'])
43
44 def _is_valuable_hit(self, unit, hit):
45 return str(unit.id) != hit['_id']
46
47 def search(self, unit):
48 if not self._server_setup_and_alive():
49 return []
50
51 counter = {}
52 res = []
53 language = unit.store.translation_project.language.code
54 es_res = self._es.search(
55 index=self._settings['INDEX_NAME'],
56 doc_type=language,
57 body={
58 "query": {
59 "match": {
60 "source": {
61 "query": unit.source,
62 "fuzziness": self._settings['MIN_SCORE'],
63 }
64 }
65 }
66 }
67 )
68
69 for hit in es_res['hits']['hits']:
70 if self._is_valuable_hit(unit, hit):
71 translation_pair = hit['_source']['source'] + hit['_source']['target']
72 if translation_pair not in counter:
73 counter[translation_pair] = 1
74 res.append({
75 'unit_id': hit['_id'],
76 'source': hit['_source']['source'],
77 'target': hit['_source']['target'],
78 'project': hit['_source']['project'],
79 'path': hit['_source']['path'],
80 'username': hit['_source']['username'],
81 'fullname': hit['_source']['fullname'],
82 'email_md5': hit['_source']['email_md5'],
83 })
84 else:
85 counter[translation_pair] += 1
86
87 for item in res:
88 item['count'] = counter[item['source']+item['target']]
89
90 return res
91
92 def update(self, language, obj):
93 if self._server_setup_and_alive():
94 self._es.index(
95 index=self._settings['INDEX_NAME'],
96 doc_type=language,
97 body=obj,
98 id=obj['id']
99 )
100
[end of pootle/core/search/backends/elasticsearch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/core/search/backends/elasticsearch.py b/pootle/core/search/backends/elasticsearch.py
--- a/pootle/core/search/backends/elasticsearch.py
+++ b/pootle/core/search/backends/elasticsearch.py
@@ -13,6 +13,7 @@
try:
from elasticsearch import Elasticsearch
+ from elasticsearch.exceptions import ConnectionError
except:
Elasticsearch = None
@@ -26,7 +27,12 @@
self._create_index_if_missing()
def _server_setup_and_alive(self):
- return self._es is not None and self._es.ping()
+ if self._es is None:
+ return False
+ try:
+ return self._es.ping()
+ except ConnectionError:
+ return False
def _get_es_server(self):
if self._settings is None or Elasticsearch is None:
| {"golden_diff": "diff --git a/pootle/core/search/backends/elasticsearch.py b/pootle/core/search/backends/elasticsearch.py\n--- a/pootle/core/search/backends/elasticsearch.py\n+++ b/pootle/core/search/backends/elasticsearch.py\n@@ -13,6 +13,7 @@\n \n try:\n from elasticsearch import Elasticsearch\n+ from elasticsearch.exceptions import ConnectionError\n except:\n Elasticsearch = None\n \n@@ -26,7 +27,12 @@\n self._create_index_if_missing()\n \n def _server_setup_and_alive(self):\n- return self._es is not None and self._es.ping()\n+ if self._es is None:\n+ return False\n+ try:\n+ return self._es.ping()\n+ except ConnectionError:\n+ return False\n \n def _get_es_server(self):\n if self._settings is None or Elasticsearch is None:\n", "issue": "Elasticsearch 1.7.0 breaks Pootle if ES not set up\nWith `elasticsearch==1.7.0` doing `es.ping()` on a not-setup server raises `ConnectionError`.\n\nThis breaks the default install here:\n\nhttps://github.com/translate/pootle/blob/master/pootle/core/search/backends/elasticsearch.py#L29\n\nElasticsearch 1.7.0 breaks Pootle if ES not set up\nWith `elasticsearch==1.7.0` doing `es.ping()` on a not-setup server raises `ConnectionError`.\n\nThis breaks the default install here:\n\nhttps://github.com/translate/pootle/blob/master/pootle/core/search/backends/elasticsearch.py#L29\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom __future__ import absolute_import\n\n__all__ = ('ElasticSearchBackend',)\n\ntry:\n from elasticsearch import Elasticsearch\nexcept:\n Elasticsearch = None\n\nfrom ..base import SearchBackend\n\n\nclass ElasticSearchBackend(SearchBackend):\n def __init__(self, config_name):\n super(ElasticSearchBackend, self).__init__(config_name)\n self._es = self._get_es_server()\n self._create_index_if_missing()\n\n def _server_setup_and_alive(self):\n return self._es is not None and self._es.ping()\n\n def _get_es_server(self):\n if self._settings is None or Elasticsearch is None:\n return None\n return Elasticsearch([\n {'host': self._settings['HOST'],\n 'port': self._settings['PORT']},\n ])\n\n def _create_index_if_missing(self):\n if self._server_setup_and_alive():\n if not self._es.indices.exists(self._settings['INDEX_NAME']):\n self._es.indices.create(self._settings['INDEX_NAME'])\n\n def _is_valuable_hit(self, unit, hit):\n return str(unit.id) != hit['_id']\n\n def search(self, unit):\n if not self._server_setup_and_alive():\n return []\n\n counter = {}\n res = []\n language = unit.store.translation_project.language.code\n es_res = self._es.search(\n index=self._settings['INDEX_NAME'],\n doc_type=language,\n body={\n \"query\": {\n \"match\": {\n \"source\": {\n \"query\": unit.source,\n \"fuzziness\": self._settings['MIN_SCORE'],\n }\n }\n }\n }\n )\n\n for hit in es_res['hits']['hits']:\n if self._is_valuable_hit(unit, hit):\n translation_pair = hit['_source']['source'] + hit['_source']['target']\n if translation_pair not in counter:\n counter[translation_pair] = 1\n res.append({\n 'unit_id': hit['_id'],\n 'source': hit['_source']['source'],\n 'target': hit['_source']['target'],\n 'project': hit['_source']['project'],\n 'path': hit['_source']['path'],\n 'username': hit['_source']['username'],\n 'fullname': hit['_source']['fullname'],\n 'email_md5': 
hit['_source']['email_md5'],\n })\n else:\n counter[translation_pair] += 1\n\n for item in res:\n item['count'] = counter[item['source']+item['target']]\n\n return res\n\n def update(self, language, obj):\n if self._server_setup_and_alive():\n self._es.index(\n index=self._settings['INDEX_NAME'],\n doc_type=language,\n body=obj,\n id=obj['id']\n )\n", "path": "pootle/core/search/backends/elasticsearch.py"}]} | 1,565 | 194 |
gh_patches_debug_199 | rasdani/github-patches | git_diff | twisted__twisted-1695 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 22.2.0
|[<img alt="adiroiban's avatar" src="https://avatars.githubusercontent.com/u/204609?s=50" width="50" height="50">](https://github.com/adiroiban)| @adiroiban reported|
|-|-|
|Trac ID|trac#10306|
|Type|enhancement|
|Created|2022-02-08 14:05:11Z|
<details><summary>Searchable metadata</summary>
```
trac-id__10306 10306
type__enhancement enhancement
reporter__adiroiban adiroiban
priority__normal normal
milestone__None None
branch__
branch_author__
status__closed closed
resolution__fixed fixed
component__core core
keywords__None None
time__1644329111193403 1644329111193403
changetime__1646513115841857 1646513115841857
version__None None
owner__None None
```
</details>
</issue>
<code>
[start of src/twisted/_version.py]
1 """
2 Provides Twisted version information.
3 """
4
5 # This file is auto-generated! Do not edit!
6 # Use `python -m incremental.update Twisted` to change this file.
7
8 from incremental import Version
9
10 __version__ = Version("Twisted", 22, 1, 0, post=0)
11 __all__ = ["__version__"]
12
[end of src/twisted/_version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/twisted/_version.py b/src/twisted/_version.py
--- a/src/twisted/_version.py
+++ b/src/twisted/_version.py
@@ -7,5 +7,5 @@
from incremental import Version
-__version__ = Version("Twisted", 22, 1, 0, post=0)
+__version__ = Version("Twisted", 22, 2, 0, post=0)
__all__ = ["__version__"]
| {"golden_diff": "diff --git a/src/twisted/_version.py b/src/twisted/_version.py\n--- a/src/twisted/_version.py\n+++ b/src/twisted/_version.py\n@@ -7,5 +7,5 @@\n \n from incremental import Version\n \n-__version__ = Version(\"Twisted\", 22, 1, 0, post=0)\n+__version__ = Version(\"Twisted\", 22, 2, 0, post=0)\n __all__ = [\"__version__\"]\n", "issue": "Release 22.2.0\n|[<img alt=\"adiroiban's avatar\" src=\"https://avatars.githubusercontent.com/u/204609?s=50\" width=\"50\" height=\"50\">](https://github.com/adiroiban)| @adiroiban reported|\n|-|-|\n|Trac ID|trac#10306|\n|Type|enhancement|\n|Created|2022-02-08 14:05:11Z|\n\n\n\n<details><summary>Searchable metadata</summary>\n\n```\ntrac-id__10306 10306\ntype__enhancement enhancement\nreporter__adiroiban adiroiban\npriority__normal normal\nmilestone__None None\nbranch__ \nbranch_author__ \nstatus__closed closed\nresolution__fixed fixed\ncomponent__core core\nkeywords__None None\ntime__1644329111193403 1644329111193403\nchangetime__1646513115841857 1646513115841857\nversion__None None\nowner__None None\n\n```\n</details>\n\n", "before_files": [{"content": "\"\"\"\nProvides Twisted version information.\n\"\"\"\n\n# This file is auto-generated! Do not edit!\n# Use `python -m incremental.update Twisted` to change this file.\n\nfrom incremental import Version\n\n__version__ = Version(\"Twisted\", 22, 1, 0, post=0)\n__all__ = [\"__version__\"]\n", "path": "src/twisted/_version.py"}]} | 915 | 113 |
gh_patches_debug_39189 | rasdani/github-patches | git_diff | cupy__cupy-2915 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error with _ufunc_config.config
When dispatching using CuPy from Dask (multithreaded python) there is a failure due to `_ufunc_config.config`
* Conditions (you can just paste the output of `python -c 'import cupy; cupy.show_config()'`)
```
CuPy Version : 7.0.0
CUDA Root : /usr/local/cuda-10.1/
CUDA Build Version : 10010
CUDA Driver Version : 10010
CUDA Runtime Version : 10010
cuBLAS Version : 10201
cuFFT Version : 10101
cuRAND Version : 10101
cuSOLVER Version : (10, 2, 0)
cuSPARSE Version : 10300
NVRTC Version : (10, 1)
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
```
* Code to reproduce
```
import dask, cupy
import dask.array as da
rs = dask.array.random.RandomState(RandomState=cupy.random.RandomState)
x = rs.random((100, 100))
x = x.persist()
u, s, v = da.linalg.svd(x)
u.compute()
```
* Error messages, stack traces, or logs
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "cpu-svd.py", line 10, in <module>
u.compute()
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/base.py", line 165, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/base.py", line 436, in compute
results = schedule(dsk, keys, **kwargs)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/threaded.py", line 81, in get
**kwargs
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/local.py", line 486, in get_async
raise_exception(exc, tb)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/local.py", line 316, in reraise
raise exc
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/local.py", line 222, in execute_task
result = _execute_task(task, data)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/core.py", line 119, in _execute_task
return func(*args2)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/optimization.py", line 982, in __call__
return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/core.py", line 149, in get
result = _execute_task(task, cache)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/core.py", line 119, in _execute_task
return func(*args2)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/array/linalg.py", line 48, in _wrapped_qr
return np.linalg.qr(a)
File "<__array_function__ internals>", line 6, in qr
File "cupy/core/core.pyx", line 1344, in cupy.core.core.ndarray.__array_function__
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/cupy/linalg/decomposition.py", line 248, in qr
geqrf, dev_info)
File "/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/cupy/linalg/util.py", line 42, in _check_cusolver_dev_info_if_synchronization_allowed
print(cupyx._ufunc_config.config.linalg)
AttributeError: '_thread._local' object has no attribute 'linalg'
```
</details>
Rather than using `threading.local()` here
https://github.com/cupy/cupy/blob/f06f509167957240d17924965238cb0901b700ab/cupyx/_ufunc_config.py#L4-L13
We could use `types` for a two line code change:
```python
import types
config = types.SimpleNamespace()
```
But I may be missing the reason for using threads here
</issue>
<code>
[start of cupy/linalg/util.py]
1 from numpy import linalg
2
3 import cupy
4 from cupy import core
5 import cupyx
6
7
8 def _assert_cupy_array(*arrays):
9 for a in arrays:
10 if not isinstance(a, cupy.core.ndarray):
11 raise linalg.LinAlgError(
12 'cupy.linalg only supports cupy.core.ndarray')
13
14
15 def _assert_rank2(*arrays):
16 for a in arrays:
17 if a.ndim != 2:
18 raise linalg.LinAlgError(
19 '{}-dimensional array given. Array must be '
20 'two-dimensional'.format(a.ndim))
21
22
23 def _assert_nd_squareness(*arrays):
24 for a in arrays:
25 if max(a.shape[-2:]) != min(a.shape[-2:]):
26 raise linalg.LinAlgError(
27 'Last 2 dimensions of the array must be square')
28
29
30 def _check_cusolver_dev_info_if_synchronization_allowed(routine, dev_info):
31 # `dev_info` contains a single integer, the status code of a cuSOLVER
32 # routine call. It is referred to as "devInfo" in the official cuSOLVER
33 # documentation.
34 assert isinstance(dev_info, core.ndarray)
35 assert dev_info.size == 1
36
37 config_linalg = cupyx._ufunc_config.config.linalg
38 # Only 'ignore' and 'raise' are currently supported.
39 if config_linalg == 'ignore':
40 return
41
42 assert config_linalg == 'raise'
43 dev_info_host = dev_info.item()
44 if dev_info_host != 0:
45 raise linalg.LinAlgError(
46 'Error reported by {} in cuSOLVER. devInfo = {}. Please refer'
47 ' to the cuSOLVER documentation.'.format(
48 routine.__name__, dev_info_host))
49
50
51 def _check_cublas_info_array_if_synchronization_allowed(routine, info_array):
52 # `info_array` contains integers, the status codes of a cuBLAS routine
53 # call. It is referrd to as "infoArray" or "devInfoArray" in the official
54 # cuBLAS documentation.
55 assert isinstance(info_array, core.ndarray)
56 assert info_array.ndim == 1
57
58 config_linalg = cupyx._ufunc_config.config.linalg
59 # Only 'ignore' and 'raise' are currently supported.
60 if config_linalg == 'ignore':
61 return
62
63 assert config_linalg == 'raise'
64 if (info_array != 0).any():
65 raise linalg.LinAlgError(
66 'Error reported by {} in cuBLAS. infoArray/devInfoArray = {}.'
67 ' Please refer to the cuBLAS documentation.'.format(
68 routine.__name__, info_array))
69
70
71 _tril_kernel = core.ElementwiseKernel(
72 'int64 k', 'S x',
73 'x = (_ind.get()[1] - _ind.get()[0] <= k) ? x : 0',
74 'tril_kernel',
75 reduce_dims=False
76 )
77
78
79 def _tril(x, k=0):
80 _tril_kernel(k, x)
81 return x
82
83
84 _triu_kernel = core.ElementwiseKernel(
85 'int64 k', 'S x',
86 'x = (_ind.get()[1] - _ind.get()[0] >= k) ? x : 0',
87 'triu_kernel',
88 reduce_dims=False
89 )
90
91
92 def _triu(x, k=0):
93 _triu_kernel(k, x)
94 return x
95
[end of cupy/linalg/util.py]
[start of cupyx/_ufunc_config.py]
1 import contextlib
2 import threading
3
4 config = threading.local()
5 config.divide = None
6 config.over = None
7 config.under = None
8 config.invalid = None
9 # In favor of performance, the `devInfo` input/output from cuSOLVER routine
10 # calls that is necessary to check the validity of the other outputs, are
11 # ignored, as D2H copy incurring device synchronizations would otherwise be
12 # required.
13 config.linalg = 'ignore'
14
15
16 @contextlib.contextmanager
17 def errstate(*, divide=None, over=None, under=None, invalid=None, linalg=None):
18 """
19 TODO(hvy): Write docs.
20 """
21 old_state = seterr(
22 divide=divide, over=over, under=under, invalid=invalid, linalg=linalg)
23 try:
24 yield # Return `None` similar to `numpy.errstate`.
25 finally:
26 seterr(**old_state)
27
28
29 def seterr(*, divide=None, over=None, under=None, invalid=None, linalg=None):
30 """
31 TODO(hvy): Write docs.
32 """
33 if divide is not None:
34 raise NotImplementedError()
35 if over is not None:
36 raise NotImplementedError()
37 if under is not None:
38 raise NotImplementedError()
39 if invalid is not None:
40 raise NotImplementedError()
41 if linalg is not None:
42 if linalg not in ('ignore', 'raise'):
43 raise NotImplementedError()
44
45 old_state = geterr()
46
47 config.divide = divide
48 config.under = under
49 config.over = over
50 config.invalid = invalid
51 config.linalg = linalg
52
53 return old_state
54
55
56 def geterr():
57 """
58 TODO(hvy): Write docs.
59 """
60 return dict(
61 divide=config.divide,
62 over=config.over,
63 under=config.under,
64 invalid=config.invalid,
65 linalg=config.linalg,
66 )
67
[end of cupyx/_ufunc_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/linalg/util.py b/cupy/linalg/util.py
--- a/cupy/linalg/util.py
+++ b/cupy/linalg/util.py
@@ -33,8 +33,7 @@
# documentation.
assert isinstance(dev_info, core.ndarray)
assert dev_info.size == 1
-
- config_linalg = cupyx._ufunc_config.config.linalg
+ config_linalg = cupyx._ufunc_config.get_config_linalg()
# Only 'ignore' and 'raise' are currently supported.
if config_linalg == 'ignore':
return
@@ -55,7 +54,7 @@
assert isinstance(info_array, core.ndarray)
assert info_array.ndim == 1
- config_linalg = cupyx._ufunc_config.config.linalg
+ config_linalg = cupyx._ufunc_config.get_config_linalg()
# Only 'ignore' and 'raise' are currently supported.
if config_linalg == 'ignore':
return
diff --git a/cupyx/_ufunc_config.py b/cupyx/_ufunc_config.py
--- a/cupyx/_ufunc_config.py
+++ b/cupyx/_ufunc_config.py
@@ -1,16 +1,51 @@
import contextlib
import threading
-config = threading.local()
-config.divide = None
-config.over = None
-config.under = None
-config.invalid = None
-# In favor of performance, the `devInfo` input/output from cuSOLVER routine
-# calls that is necessary to check the validity of the other outputs, are
-# ignored, as D2H copy incurring device synchronizations would otherwise be
-# required.
-config.linalg = 'ignore'
+_config = threading.local()
+
+
+def get_config_divide():
+ try:
+ value = _config.divide
+ except AttributeError:
+ value = _config.divide = None
+ return value
+
+
+def get_config_over():
+ try:
+ value = _config.over
+ except AttributeError:
+ value = _config.over = None
+ return value
+
+
+def get_config_under():
+ try:
+ value = _config.under
+ except AttributeError:
+ value = _config.under = None
+ return value
+
+
+def get_config_invalid():
+ try:
+ value = _config.invalid
+ except AttributeError:
+ value = _config.invalid = None
+ return value
+
+
+def get_config_linalg():
+ # In favor of performance, the `devInfo` input/output from cuSOLVER routine
+ # calls that is necessary to check the validity of the other outputs, are
+ # ignored, as D2H copy incurring device synchronizations would otherwise be
+ # required.
+ try:
+ value = _config.linalg
+ except AttributeError:
+ value = _config.linalg = 'ignore'
+ return value
@contextlib.contextmanager
@@ -44,11 +79,11 @@
old_state = geterr()
- config.divide = divide
- config.under = under
- config.over = over
- config.invalid = invalid
- config.linalg = linalg
+ _config.divide = divide
+ _config.under = under
+ _config.over = over
+ _config.invalid = invalid
+ _config.linalg = linalg
return old_state
@@ -58,9 +93,9 @@
TODO(hvy): Write docs.
"""
return dict(
- divide=config.divide,
- over=config.over,
- under=config.under,
- invalid=config.invalid,
- linalg=config.linalg,
+ divide=get_config_divide(),
+ over=get_config_over(),
+ under=get_config_under(),
+ invalid=get_config_invalid(),
+ linalg=get_config_linalg(),
)
| {"golden_diff": "diff --git a/cupy/linalg/util.py b/cupy/linalg/util.py\n--- a/cupy/linalg/util.py\n+++ b/cupy/linalg/util.py\n@@ -33,8 +33,7 @@\n # documentation.\n assert isinstance(dev_info, core.ndarray)\n assert dev_info.size == 1\n-\n- config_linalg = cupyx._ufunc_config.config.linalg\n+ config_linalg = cupyx._ufunc_config.get_config_linalg()\n # Only 'ignore' and 'raise' are currently supported.\n if config_linalg == 'ignore':\n return\n@@ -55,7 +54,7 @@\n assert isinstance(info_array, core.ndarray)\n assert info_array.ndim == 1\n \n- config_linalg = cupyx._ufunc_config.config.linalg\n+ config_linalg = cupyx._ufunc_config.get_config_linalg()\n # Only 'ignore' and 'raise' are currently supported.\n if config_linalg == 'ignore':\n return\ndiff --git a/cupyx/_ufunc_config.py b/cupyx/_ufunc_config.py\n--- a/cupyx/_ufunc_config.py\n+++ b/cupyx/_ufunc_config.py\n@@ -1,16 +1,51 @@\n import contextlib\n import threading\n \n-config = threading.local()\n-config.divide = None\n-config.over = None\n-config.under = None\n-config.invalid = None\n-# In favor of performance, the `devInfo` input/output from cuSOLVER routine\n-# calls that is necessary to check the validity of the other outputs, are\n-# ignored, as D2H copy incurring device synchronizations would otherwise be\n-# required.\n-config.linalg = 'ignore'\n+_config = threading.local()\n+\n+\n+def get_config_divide():\n+ try:\n+ value = _config.divide\n+ except AttributeError:\n+ value = _config.divide = None\n+ return value\n+\n+\n+def get_config_over():\n+ try:\n+ value = _config.over\n+ except AttributeError:\n+ value = _config.over = None\n+ return value\n+\n+\n+def get_config_under():\n+ try:\n+ value = _config.under\n+ except AttributeError:\n+ value = _config.under = None\n+ return value\n+\n+\n+def get_config_invalid():\n+ try:\n+ value = _config.invalid\n+ except AttributeError:\n+ value = _config.invalid = None\n+ return value\n+\n+\n+def get_config_linalg():\n+ # In favor of performance, the `devInfo` input/output from cuSOLVER routine\n+ # calls that is necessary to check the validity of the other outputs, are\n+ # ignored, as D2H copy incurring device synchronizations would otherwise be\n+ # required.\n+ try:\n+ value = _config.linalg\n+ except AttributeError:\n+ value = _config.linalg = 'ignore'\n+ return value\n \n \n @contextlib.contextmanager\n@@ -44,11 +79,11 @@\n \n old_state = geterr()\n \n- config.divide = divide\n- config.under = under\n- config.over = over\n- config.invalid = invalid\n- config.linalg = linalg\n+ _config.divide = divide\n+ _config.under = under\n+ _config.over = over\n+ _config.invalid = invalid\n+ _config.linalg = linalg\n \n return old_state\n \n@@ -58,9 +93,9 @@\n TODO(hvy): Write docs.\n \"\"\"\n return dict(\n- divide=config.divide,\n- over=config.over,\n- under=config.under,\n- invalid=config.invalid,\n- linalg=config.linalg,\n+ divide=get_config_divide(),\n+ over=get_config_over(),\n+ under=get_config_under(),\n+ invalid=get_config_invalid(),\n+ linalg=get_config_linalg(),\n )\n", "issue": "Error with _ufunc_config.config\nWhen dispatching using CuPy from Dask (multithreaded python) there is a failure due to `_ufunc_config.config`\r\n* Conditions (you can just paste the output of `python -c 'import cupy; cupy.show_config()'`)\r\n\r\n```\r\nCuPy Version : 7.0.0\r\nCUDA Root : /usr/local/cuda-10.1/\r\nCUDA Build Version : 10010\r\nCUDA Driver Version : 10010\r\nCUDA Runtime Version : 10010\r\ncuBLAS Version : 10201\r\ncuFFT Version : 10101\r\ncuRAND Version : 10101\r\ncuSOLVER Version : (10, 2, 
0)\r\ncuSPARSE Version : 10300\r\nNVRTC Version : (10, 1)\r\ncuDNN Build Version : None\r\ncuDNN Version : None\r\nNCCL Build Version : None\r\nNCCL Runtime Version : None\r\n```\r\n* Code to reproduce\r\n```\r\nimport dask, cupy\r\nimport dask.array as da\r\n\r\nrs = dask.array.random.RandomState(RandomState=cupy.random.RandomState)\r\nx = rs.random((100, 100))\r\nx = x.persist()\r\nu, s, v = da.linalg.svd(x)\r\nu.compute()\r\n```\r\n* Error messages, stack traces, or logs\r\n<details>\r\n<summary>Traceback</summary>\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"cpu-svd.py\", line 10, in <module>\r\n u.compute()\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/base.py\", line 165, in compute\r\n (result,) = compute(self, traverse=False, **kwargs)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/base.py\", line 436, in compute\r\n results = schedule(dsk, keys, **kwargs)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/threaded.py\", line 81, in get\r\n **kwargs\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/local.py\", line 486, in get_async\r\n raise_exception(exc, tb)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/local.py\", line 316, in reraise\r\n raise exc\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/local.py\", line 222, in execute_task\r\n result = _execute_task(task, data)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/core.py\", line 119, in _execute_task\r\n return func(*args2)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/optimization.py\", line 982, in __call__\r\n return core.get(self.dsk, self.outkey, dict(zip(self.inkeys, args)))\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/core.py\", line 149, in get\r\n result = _execute_task(task, cache)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/core.py\", line 119, in _execute_task\r\n return func(*args2)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/dask/array/linalg.py\", line 48, in _wrapped_qr\r\n return np.linalg.qr(a)\r\n File \"<__array_function__ internals>\", line 6, in qr\r\n File \"cupy/core/core.pyx\", line 1344, in cupy.core.core.ndarray.__array_function__\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/cupy/linalg/decomposition.py\", line 248, in qr\r\n geqrf, dev_info)\r\n File \"/datasets/bzaitlen/miniconda3/envs/rapids-12/lib/python3.7/site-packages/cupy/linalg/util.py\", line 42, in _check_cusolver_dev_info_if_synchronization_allowed\r\n print(cupyx._ufunc_config.config.linalg)\r\nAttributeError: '_thread._local' object has no attribute 'linalg'\r\n```\r\n</details>\r\n\r\n\r\nRather than using `threading.local()` here \r\nhttps://github.com/cupy/cupy/blob/f06f509167957240d17924965238cb0901b700ab/cupyx/_ufunc_config.py#L4-L13\r\n\r\nWe could use `types` for a two line code change:\r\n\r\n```python\r\nimport types\r\n\r\nconfig = types.SimpleNamespace()\r\n```\r\n\r\nBut I may be missing the reason for using threads here\n", "before_files": [{"content": "from numpy import linalg\n\nimport cupy\nfrom cupy import core\nimport cupyx\n\n\ndef _assert_cupy_array(*arrays):\n for a in arrays:\n if not isinstance(a, 
cupy.core.ndarray):\n raise linalg.LinAlgError(\n 'cupy.linalg only supports cupy.core.ndarray')\n\n\ndef _assert_rank2(*arrays):\n for a in arrays:\n if a.ndim != 2:\n raise linalg.LinAlgError(\n '{}-dimensional array given. Array must be '\n 'two-dimensional'.format(a.ndim))\n\n\ndef _assert_nd_squareness(*arrays):\n for a in arrays:\n if max(a.shape[-2:]) != min(a.shape[-2:]):\n raise linalg.LinAlgError(\n 'Last 2 dimensions of the array must be square')\n\n\ndef _check_cusolver_dev_info_if_synchronization_allowed(routine, dev_info):\n # `dev_info` contains a single integer, the status code of a cuSOLVER\n # routine call. It is referred to as \"devInfo\" in the official cuSOLVER\n # documentation.\n assert isinstance(dev_info, core.ndarray)\n assert dev_info.size == 1\n\n config_linalg = cupyx._ufunc_config.config.linalg\n # Only 'ignore' and 'raise' are currently supported.\n if config_linalg == 'ignore':\n return\n\n assert config_linalg == 'raise'\n dev_info_host = dev_info.item()\n if dev_info_host != 0:\n raise linalg.LinAlgError(\n 'Error reported by {} in cuSOLVER. devInfo = {}. Please refer'\n ' to the cuSOLVER documentation.'.format(\n routine.__name__, dev_info_host))\n\n\ndef _check_cublas_info_array_if_synchronization_allowed(routine, info_array):\n # `info_array` contains integers, the status codes of a cuBLAS routine\n # call. It is referrd to as \"infoArray\" or \"devInfoArray\" in the official\n # cuBLAS documentation.\n assert isinstance(info_array, core.ndarray)\n assert info_array.ndim == 1\n\n config_linalg = cupyx._ufunc_config.config.linalg\n # Only 'ignore' and 'raise' are currently supported.\n if config_linalg == 'ignore':\n return\n\n assert config_linalg == 'raise'\n if (info_array != 0).any():\n raise linalg.LinAlgError(\n 'Error reported by {} in cuBLAS. infoArray/devInfoArray = {}.'\n ' Please refer to the cuBLAS documentation.'.format(\n routine.__name__, info_array))\n\n\n_tril_kernel = core.ElementwiseKernel(\n 'int64 k', 'S x',\n 'x = (_ind.get()[1] - _ind.get()[0] <= k) ? x : 0',\n 'tril_kernel',\n reduce_dims=False\n)\n\n\ndef _tril(x, k=0):\n _tril_kernel(k, x)\n return x\n\n\n_triu_kernel = core.ElementwiseKernel(\n 'int64 k', 'S x',\n 'x = (_ind.get()[1] - _ind.get()[0] >= k) ? 
x : 0',\n 'triu_kernel',\n reduce_dims=False\n)\n\n\ndef _triu(x, k=0):\n _triu_kernel(k, x)\n return x\n", "path": "cupy/linalg/util.py"}, {"content": "import contextlib\nimport threading\n\nconfig = threading.local()\nconfig.divide = None\nconfig.over = None\nconfig.under = None\nconfig.invalid = None\n# In favor of performance, the `devInfo` input/output from cuSOLVER routine\n# calls that is necessary to check the validity of the other outputs, are\n# ignored, as D2H copy incurring device synchronizations would otherwise be\n# required.\nconfig.linalg = 'ignore'\n\n\[email protected]\ndef errstate(*, divide=None, over=None, under=None, invalid=None, linalg=None):\n \"\"\"\n TODO(hvy): Write docs.\n \"\"\"\n old_state = seterr(\n divide=divide, over=over, under=under, invalid=invalid, linalg=linalg)\n try:\n yield # Return `None` similar to `numpy.errstate`.\n finally:\n seterr(**old_state)\n\n\ndef seterr(*, divide=None, over=None, under=None, invalid=None, linalg=None):\n \"\"\"\n TODO(hvy): Write docs.\n \"\"\"\n if divide is not None:\n raise NotImplementedError()\n if over is not None:\n raise NotImplementedError()\n if under is not None:\n raise NotImplementedError()\n if invalid is not None:\n raise NotImplementedError()\n if linalg is not None:\n if linalg not in ('ignore', 'raise'):\n raise NotImplementedError()\n\n old_state = geterr()\n\n config.divide = divide\n config.under = under\n config.over = over\n config.invalid = invalid\n config.linalg = linalg\n\n return old_state\n\n\ndef geterr():\n \"\"\"\n TODO(hvy): Write docs.\n \"\"\"\n return dict(\n divide=config.divide,\n over=config.over,\n under=config.under,\n invalid=config.invalid,\n linalg=config.linalg,\n )\n", "path": "cupyx/_ufunc_config.py"}]} | 3,240 | 859 |
gh_patches_debug_16220 | rasdani/github-patches | git_diff | pypa__pipenv-1426 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update internal dotenv (or make it a dependency)
Variables in a `.env` file whose values are single-quoted and contain spaces are not loaded into the environment. If double quotes are used, the variables are loaded into the environment.
##### Describe your environment
1. ubuntu 17.10
1. Python version: `$ python -V` - 3.6.4
1. Pipenv version: `$ pipenv --version` - 9.0.3
##### Expected result
All variables in `.env` would be loaded
##### Actual result
Variables in `.env` that are single-quoted and contain spaces are not loaded
##### Steps to replicate
1. Given the following `.env` file:
```
FOO='Hello World'
BAR='Test'
BAZ="Hello World"
```
2. Running `pipenv shell`
3. `FOO` will not be set in the environment.
4. This happens because the `FOO` line doesn't match the regex used (`'^\s*(\w*)\s*=\s*("[^"]*"|[^\s]*)\s*$'`) in the parsing algorithm.
##### Additional information
I tracked this issue down by stepping through the dotenv loading code used by pipenv. When I found this line of code, I learned that only double quotes would allow spaces:
https://github.com/pypa/pipenv/blob/master/pipenv/patched/dotenv/main.py#L100
This parsing algorithm diverges a bit from the official dotenv logic:
https://github.com/theskumar/python-dotenv/blob/master/dotenv/main.py#L98
</issue>
<code>
[start of pipenv/patched/dotenv/main.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import
3
4 import codecs
5 import os
6 import sys
7 import warnings
8 import re
9 from collections import OrderedDict
10
11 __escape_decoder = codecs.getdecoder('unicode_escape')
12 __posix_variable = re.compile('\$\{[^\}]*\}')
13
14
15 def decode_escaped(escaped):
16 return __escape_decoder(escaped)[0]
17
18
19 def load_dotenv(dotenv_path, verbose=False, override=False):
20 """
21 Read a .env file and load into os.environ.
22 """
23 if not os.path.exists(dotenv_path):
24 if verbose:
25 warnings.warn("Not loading %s - it doesn't exist." % dotenv_path)
26 return None
27 for k, v in dotenv_values(dotenv_path).items():
28 if override:
29 os.environ[k] = v
30 else:
31 os.environ.setdefault(k, v)
32 return True
33
34
35 def get_key(dotenv_path, key_to_get):
36 """
37 Gets the value of a given key from the given .env
38
39 If the .env path given doesn't exist, fails
40 """
41 key_to_get = str(key_to_get)
42 if not os.path.exists(dotenv_path):
43 warnings.warn("can't read %s - it doesn't exist." % dotenv_path)
44 return None
45 dotenv_as_dict = dotenv_values(dotenv_path)
46 if key_to_get in dotenv_as_dict:
47 return dotenv_as_dict[key_to_get]
48 else:
49 warnings.warn("key %s not found in %s." % (key_to_get, dotenv_path))
50 return None
51
52
53 def set_key(dotenv_path, key_to_set, value_to_set, quote_mode="always"):
54 """
55 Adds or Updates a key/value to the given .env
56
57 If the .env path given doesn't exist, fails instead of risking creating
58 an orphan .env somewhere in the filesystem
59 """
60 key_to_set = str(key_to_set)
61 value_to_set = str(value_to_set).strip("'").strip('"')
62 if not os.path.exists(dotenv_path):
63 warnings.warn("can't write to %s - it doesn't exist." % dotenv_path)
64 return None, key_to_set, value_to_set
65 dotenv_as_dict = OrderedDict(parse_dotenv(dotenv_path))
66 dotenv_as_dict[key_to_set] = value_to_set
67 success = flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode)
68 return success, key_to_set, value_to_set
69
70
71 def unset_key(dotenv_path, key_to_unset, quote_mode="always"):
72 """
73 Removes a given key from the given .env
74
75 If the .env path given doesn't exist, fails
76 If the given key doesn't exist in the .env, fails
77 """
78 key_to_unset = str(key_to_unset)
79 if not os.path.exists(dotenv_path):
80 warnings.warn("can't delete from %s - it doesn't exist." % dotenv_path)
81 return None, key_to_unset
82 dotenv_as_dict = dotenv_values(dotenv_path)
83 if key_to_unset in dotenv_as_dict:
84 dotenv_as_dict.pop(key_to_unset, None)
85 else:
86 warnings.warn("key %s not removed from %s - key doesn't exist." % (key_to_unset, dotenv_path))
87 return None, key_to_unset
88 success = flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode)
89 return success, key_to_unset
90
91
92 def dotenv_values(dotenv_path):
93 values = OrderedDict(parse_dotenv(dotenv_path))
94 values = resolve_nested_variables(values)
95 return values
96
97
98 def parse_dotenv(dotenv_path):
99 with open(dotenv_path) as f:
100 for k, v in re.findall('^\s*(\w*)\s*=\s*("[^"]*"|[^\s]*)\s*$', f.read(), flags=re.MULTILINE):
101 if len(v) > 0:
102 quoted = v[0] == v[len(v) - 1] in ['"', "'"]
103
104 if quoted:
105 v = decode_escaped(v[1:-1])
106
107 yield k, v
108
109
110 def resolve_nested_variables(values):
111 def _replacement(name):
112 """
113 get appropriate value for a variable name.
114 first search in environ, if not found,
115 then look into the dotenv variables
116 """
117 ret = os.getenv(name, values.get(name, ""))
118 return ret
119
120 def _re_sub_callback(match_object):
121 """
122 From a match object gets the variable name and returns
123 the correct replacement
124 """
125 return _replacement(match_object.group()[2:-1])
126
127 for k, v in values.items():
128 values[k] = __posix_variable.sub(_re_sub_callback, v)
129
130 return values
131
132
133 def flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode="always"):
134 with open(dotenv_path, "w") as f:
135 for k, v in dotenv_as_dict.items():
136 _mode = quote_mode
137 if _mode == "auto" and " " in v:
138 _mode = "always"
139 str_format = '%s="%s"\n' if _mode == "always" else '%s=%s\n'
140 f.write(str_format % (k, v))
141 return True
142
143
144 def _walk_to_root(path):
145 """
146 Yield directories starting from the given directory up to the root
147 """
148 if not os.path.exists(path):
149 raise IOError('Starting path not found')
150
151 if os.path.isfile(path):
152 path = os.path.dirname(path)
153
154 last_dir = None
155 current_dir = os.path.abspath(path)
156 while last_dir != current_dir:
157 yield current_dir
158 parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))
159 last_dir, current_dir = current_dir, parent_dir
160
161
162 def find_dotenv(filename='.env', raise_error_if_not_found=False, usecwd=False):
163 """
164 Search in increasingly higher folders for the given file
165
166 Returns path to the file if found, or an empty string otherwise
167 """
168 if usecwd or '__file__' not in globals():
169 # should work without __file__, e.g. in REPL or IPython notebook
170 path = os.getcwd()
171 else:
172 # will work for .py files
173 frame_filename = sys._getframe().f_back.f_code.co_filename
174 path = os.path.dirname(os.path.abspath(frame_filename))
175
176 for dirname in _walk_to_root(path):
177 check_path = os.path.join(dirname, filename)
178 if os.path.exists(check_path):
179 return check_path
180
181 if raise_error_if_not_found:
182 raise IOError('File not found')
183
184 return ''
185
[end of pipenv/patched/dotenv/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pipenv/patched/dotenv/main.py b/pipenv/patched/dotenv/main.py
--- a/pipenv/patched/dotenv/main.py
+++ b/pipenv/patched/dotenv/main.py
@@ -10,6 +10,8 @@
__escape_decoder = codecs.getdecoder('unicode_escape')
__posix_variable = re.compile('\$\{[^\}]*\}')
+__variable_declaration = re.compile('^\s*(\w*)\s*=\s*("[^"]*"|\'[^\']*\'|[^\s]*)\s*$',
+ flags=re.MULTILINE)
def decode_escaped(escaped):
@@ -97,7 +99,7 @@
def parse_dotenv(dotenv_path):
with open(dotenv_path) as f:
- for k, v in re.findall('^\s*(\w*)\s*=\s*("[^"]*"|[^\s]*)\s*$', f.read(), flags=re.MULTILINE):
+ for k, v in __variable_declaration.findall(f.read()):
if len(v) > 0:
quoted = v[0] == v[len(v) - 1] in ['"', "'"]
| {"golden_diff": "diff --git a/pipenv/patched/dotenv/main.py b/pipenv/patched/dotenv/main.py\n--- a/pipenv/patched/dotenv/main.py\n+++ b/pipenv/patched/dotenv/main.py\n@@ -10,6 +10,8 @@\n \n __escape_decoder = codecs.getdecoder('unicode_escape')\n __posix_variable = re.compile('\\$\\{[^\\}]*\\}')\n+__variable_declaration = re.compile('^\\s*(\\w*)\\s*=\\s*(\"[^\"]*\"|\\'[^\\']*\\'|[^\\s]*)\\s*$',\n+ flags=re.MULTILINE)\n \n \n def decode_escaped(escaped):\n@@ -97,7 +99,7 @@\n \n def parse_dotenv(dotenv_path):\n with open(dotenv_path) as f:\n- for k, v in re.findall('^\\s*(\\w*)\\s*=\\s*(\"[^\"]*\"|[^\\s]*)\\s*$', f.read(), flags=re.MULTILINE):\n+ for k, v in __variable_declaration.findall(f.read()):\n if len(v) > 0:\n quoted = v[0] == v[len(v) - 1] in ['\"', \"'\"]\n", "issue": "Update internal dotenv (or make it a dependency)\nVariables in a `.env` file that have single-quoted spaces are not loaded into the environment. If double quotes are used, then the variables are loaded into the environment.\r\n\r\n##### Describe your environment\r\n\r\n1. ubuntu 17.10\r\n1. Python version: `$ python -V` - 3.6.4\r\n1. Pipenv version: `$ pipenv --version` - 9.0.3\r\n\r\n##### Expected result\r\n\r\nAll variables in `.env` would be loaded\r\n\r\n##### Actual result\r\n\r\nVariables in `.env` that are single-quoted and contain spaces are not loaded\r\n\r\n##### Steps to replicate\r\n\r\n1. Given the following `.env` file:\r\n\r\n```\r\nFOO='Hello World'\r\nBAR='Test'\r\nBAZ=\"Hello World\"\r\n```\r\n\r\n2. Running `pipenv shell`\r\n3. `FOO` will not be set in the environment.\r\n4. This happens because the `FOO` line doesn't match the regex used (`'^\\s*(\\w*)\\s*=\\s*(\"[^\"]*\"|[^\\s]*)\\s*$'`) in the parsing algorithm. \r\n\r\n##### Additional information\r\n\r\nI tracked this issue down by stepping through the dotenv loading code used by pipenv. 
When I found this line of code, I learned that only double quotes would allow spaces:\r\n\r\nhttps://github.com/pypa/pipenv/blob/master/pipenv/patched/dotenv/main.py#L100\r\n\r\nThis parsing algorithm diverges a bit from the official dotenv logic:\r\n\r\nhttps://github.com/theskumar/python-dotenv/blob/master/dotenv/main.py#L98\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import\n\nimport codecs\nimport os\nimport sys\nimport warnings\nimport re\nfrom collections import OrderedDict\n\n__escape_decoder = codecs.getdecoder('unicode_escape')\n__posix_variable = re.compile('\\$\\{[^\\}]*\\}')\n\n\ndef decode_escaped(escaped):\n return __escape_decoder(escaped)[0]\n\n\ndef load_dotenv(dotenv_path, verbose=False, override=False):\n \"\"\"\n Read a .env file and load into os.environ.\n \"\"\"\n if not os.path.exists(dotenv_path):\n if verbose:\n warnings.warn(\"Not loading %s - it doesn't exist.\" % dotenv_path)\n return None\n for k, v in dotenv_values(dotenv_path).items():\n if override:\n os.environ[k] = v\n else:\n os.environ.setdefault(k, v)\n return True\n\n\ndef get_key(dotenv_path, key_to_get):\n \"\"\"\n Gets the value of a given key from the given .env\n\n If the .env path given doesn't exist, fails\n \"\"\"\n key_to_get = str(key_to_get)\n if not os.path.exists(dotenv_path):\n warnings.warn(\"can't read %s - it doesn't exist.\" % dotenv_path)\n return None\n dotenv_as_dict = dotenv_values(dotenv_path)\n if key_to_get in dotenv_as_dict:\n return dotenv_as_dict[key_to_get]\n else:\n warnings.warn(\"key %s not found in %s.\" % (key_to_get, dotenv_path))\n return None\n\n\ndef set_key(dotenv_path, key_to_set, value_to_set, quote_mode=\"always\"):\n \"\"\"\n Adds or Updates a key/value to the given .env\n\n If the .env path given doesn't exist, fails instead of risking creating\n an orphan .env somewhere in the filesystem\n \"\"\"\n key_to_set = str(key_to_set)\n value_to_set = str(value_to_set).strip(\"'\").strip('\"')\n if not os.path.exists(dotenv_path):\n warnings.warn(\"can't write to %s - it doesn't exist.\" % dotenv_path)\n return None, key_to_set, value_to_set\n dotenv_as_dict = OrderedDict(parse_dotenv(dotenv_path))\n dotenv_as_dict[key_to_set] = value_to_set\n success = flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode)\n return success, key_to_set, value_to_set\n\n\ndef unset_key(dotenv_path, key_to_unset, quote_mode=\"always\"):\n \"\"\"\n Removes a given key from the given .env\n\n If the .env path given doesn't exist, fails\n If the given key doesn't exist in the .env, fails\n \"\"\"\n key_to_unset = str(key_to_unset)\n if not os.path.exists(dotenv_path):\n warnings.warn(\"can't delete from %s - it doesn't exist.\" % dotenv_path)\n return None, key_to_unset\n dotenv_as_dict = dotenv_values(dotenv_path)\n if key_to_unset in dotenv_as_dict:\n dotenv_as_dict.pop(key_to_unset, None)\n else:\n warnings.warn(\"key %s not removed from %s - key doesn't exist.\" % (key_to_unset, dotenv_path))\n return None, key_to_unset\n success = flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode)\n return success, key_to_unset\n\n\ndef dotenv_values(dotenv_path):\n values = OrderedDict(parse_dotenv(dotenv_path))\n values = resolve_nested_variables(values)\n return values\n\n\ndef parse_dotenv(dotenv_path):\n with open(dotenv_path) as f:\n for k, v in re.findall('^\\s*(\\w*)\\s*=\\s*(\"[^\"]*\"|[^\\s]*)\\s*$', f.read(), flags=re.MULTILINE):\n if len(v) > 0:\n quoted = v[0] == v[len(v) - 1] in ['\"', \"'\"]\n\n if quoted:\n v = 
decode_escaped(v[1:-1])\n\n yield k, v\n\n\ndef resolve_nested_variables(values):\n def _replacement(name):\n \"\"\"\n get appropriate value for a variable name.\n first search in environ, if not found,\n then look into the dotenv variables\n \"\"\"\n ret = os.getenv(name, values.get(name, \"\"))\n return ret\n\n def _re_sub_callback(match_object):\n \"\"\"\n From a match object gets the variable name and returns\n the correct replacement\n \"\"\"\n return _replacement(match_object.group()[2:-1])\n\n for k, v in values.items():\n values[k] = __posix_variable.sub(_re_sub_callback, v)\n\n return values\n\n\ndef flatten_and_write(dotenv_path, dotenv_as_dict, quote_mode=\"always\"):\n with open(dotenv_path, \"w\") as f:\n for k, v in dotenv_as_dict.items():\n _mode = quote_mode\n if _mode == \"auto\" and \" \" in v:\n _mode = \"always\"\n str_format = '%s=\"%s\"\\n' if _mode == \"always\" else '%s=%s\\n'\n f.write(str_format % (k, v))\n return True\n\n\ndef _walk_to_root(path):\n \"\"\"\n Yield directories starting from the given directory up to the root\n \"\"\"\n if not os.path.exists(path):\n raise IOError('Starting path not found')\n\n if os.path.isfile(path):\n path = os.path.dirname(path)\n\n last_dir = None\n current_dir = os.path.abspath(path)\n while last_dir != current_dir:\n yield current_dir\n parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))\n last_dir, current_dir = current_dir, parent_dir\n\n\ndef find_dotenv(filename='.env', raise_error_if_not_found=False, usecwd=False):\n \"\"\"\n Search in increasingly higher folders for the given file\n\n Returns path to the file if found, or an empty string otherwise\n \"\"\"\n if usecwd or '__file__' not in globals():\n # should work without __file__, e.g. in REPL or IPython notebook\n path = os.getcwd()\n else:\n # will work for .py files\n frame_filename = sys._getframe().f_back.f_code.co_filename\n path = os.path.dirname(os.path.abspath(frame_filename))\n\n for dirname in _walk_to_root(path):\n check_path = os.path.join(dirname, filename)\n if os.path.exists(check_path):\n return check_path\n\n if raise_error_if_not_found:\n raise IOError('File not found')\n\n return ''\n", "path": "pipenv/patched/dotenv/main.py"}]} | 2,768 | 265 |
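The golden diff above only changes how `.env` lines are tokenised: the precompiled `__variable_declaration` pattern adds a `'[^']*'` alternative so single-quoted values may contain spaces. A small illustrative sketch (not taken from the repository; the `OLD`/`NEW` names are ours) comparing the two patterns on the `.env` sample from the issue:

```python
import re

# Pattern used before the patch: only double-quoted values may contain spaces.
OLD = re.compile(r'''^\s*(\w*)\s*=\s*("[^"]*"|[^\s]*)\s*$''', flags=re.MULTILINE)
# Pattern introduced by the patch: adds the single-quoted alternative.
NEW = re.compile(r'''^\s*(\w*)\s*=\s*("[^"]*"|'[^']*'|[^\s]*)\s*$''', flags=re.MULTILINE)

sample = """FOO='Hello World'
BAR='Test'
BAZ="Hello World"
"""

print(OLD.findall(sample))  # [('BAR', "'Test'"), ('BAZ', '"Hello World"')] -- FOO is silently dropped
print(NEW.findall(sample))  # [('FOO', "'Hello World'"), ('BAR', "'Test'"), ('BAZ', '"Hello World"')]
```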
gh_patches_debug_4921 | rasdani/github-patches | git_diff | ocadotechnology__aimmo-123 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Users should see which avatar is theirs
</issue>
<code>
[start of players/views.py]
1 import logging
2
3 from django.http import JsonResponse
4 from django.contrib.auth.decorators import login_required
5 from django.http import HttpResponse
6 from django.views.generic import TemplateView
7
8 import os
9
10 from models import Player
11 from . import app_settings
12
13
14 def _post_code_success_response(message):
15 return create_response("SUCCESS", message)
16
17
18 def create_response(status, message):
19 response = {
20 "status": status,
21 "message": message
22 }
23 return JsonResponse(response)
24
25
26 @login_required
27 def code(request):
28 try:
29 player = request.user.player
30 except Player.DoesNotExist:
31 initial_code_file_name = os.path.join(
32 os.path.abspath(os.path.dirname(__file__)),
33 'avatar_examples/dumb_avatar.py',
34 )
35 with open(initial_code_file_name) as initial_code_file:
36 initial_code = initial_code_file.read()
37 player = Player.objects.create(user=request.user, code=initial_code)
38 if request.method == 'POST':
39 player.code = request.POST['code']
40 player.save()
41
42 return _post_code_success_response("Your code was saved!")
43 else:
44 return HttpResponse(player.code)
45
46
47 def games(request):
48 response = {
49 'main': {
50 'parameters': [],
51 'users': [
52 {
53 'id': player.user.pk,
54 'code': player.code,
55 } for player in Player.objects.all()
56 ]
57 }
58 }
59 return JsonResponse(response)
60
61
62 class WatchView(TemplateView):
63 template_name = 'players/watch.html'
64
65 def get_context_data(self, **kwargs):
66 context = super(WatchView, self).get_context_data(**kwargs)
67 context['game_url_base'], context['game_url_path'] = app_settings.GAME_SERVER_LOCATION_FUNCTION('main')
68 return context
69
[end of players/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/players/views.py b/players/views.py
--- a/players/views.py
+++ b/players/views.py
@@ -65,4 +65,5 @@
def get_context_data(self, **kwargs):
context = super(WatchView, self).get_context_data(**kwargs)
context['game_url_base'], context['game_url_path'] = app_settings.GAME_SERVER_LOCATION_FUNCTION('main')
+ context['current_user_player_key'] = self.request.user.pk
return context
| {"golden_diff": "diff --git a/players/views.py b/players/views.py\n--- a/players/views.py\n+++ b/players/views.py\n@@ -65,4 +65,5 @@\n def get_context_data(self, **kwargs):\n context = super(WatchView, self).get_context_data(**kwargs)\n context['game_url_base'], context['game_url_path'] = app_settings.GAME_SERVER_LOCATION_FUNCTION('main')\n+ context['current_user_player_key'] = self.request.user.pk\n return context\n", "issue": "Users should see which avatar is theirs\n\n", "before_files": [{"content": "import logging\n\nfrom django.http import JsonResponse\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import HttpResponse\nfrom django.views.generic import TemplateView\n\nimport os\n\nfrom models import Player\nfrom . import app_settings\n\n\ndef _post_code_success_response(message):\n return create_response(\"SUCCESS\", message)\n\n\ndef create_response(status, message):\n response = {\n \"status\": status,\n \"message\": message\n }\n return JsonResponse(response)\n\n\n@login_required\ndef code(request):\n try:\n player = request.user.player\n except Player.DoesNotExist:\n initial_code_file_name = os.path.join(\n os.path.abspath(os.path.dirname(__file__)),\n 'avatar_examples/dumb_avatar.py',\n )\n with open(initial_code_file_name) as initial_code_file:\n initial_code = initial_code_file.read()\n player = Player.objects.create(user=request.user, code=initial_code)\n if request.method == 'POST':\n player.code = request.POST['code']\n player.save()\n\n return _post_code_success_response(\"Your code was saved!\")\n else:\n return HttpResponse(player.code)\n\n\ndef games(request):\n response = {\n 'main': {\n 'parameters': [],\n 'users': [\n {\n 'id': player.user.pk,\n 'code': player.code,\n } for player in Player.objects.all()\n ]\n }\n }\n return JsonResponse(response)\n\n\nclass WatchView(TemplateView):\n template_name = 'players/watch.html'\n\n def get_context_data(self, **kwargs):\n context = super(WatchView, self).get_context_data(**kwargs)\n context['game_url_base'], context['game_url_path'] = app_settings.GAME_SERVER_LOCATION_FUNCTION('main')\n return context\n", "path": "players/views.py"}]} | 1,041 | 109 |