problem_id
stringlengths
18
22
source
stringclasses
1 value
task_type
stringclasses
1 value
in_source_id
stringlengths
13
58
prompt
stringlengths
1.71k
18.9k
golden_diff
stringlengths
145
5.13k
verification_info
stringlengths
465
23.6k
num_tokens_prompt
int64
556
4.1k
num_tokens_diff
int64
47
1.02k
gh_patches_debug_43380
rasdani/github-patches
git_diff
arviz-devs__arviz-1856
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add docs references to plot_joint docstring Add links to the related functions and classes to all the places where they are mentioned in the [plot_joint](https://arviz-devs.github.io/arviz/api/generated/arviz.plot_joint.html) docstring. ## Sample PR: You can see https://github.com/arviz-devs/arviz/pull/1816 for the function, `plot_autocorr`. ## Source File Source file: https://github.com/arviz-devs/arviz/blob/main/arviz/plots/jointplot.py ## How to add links Add the links in the docstring of doc. You can also check the docstring by going to the `[source]`. * Add links to the ArviZ functions and classes using [Cross-referencing with Sphinx](https://docs.readthedocs.io/en/stable/guides/cross-referencing-with-sphinx.html). * Add links to the xarray functions and classes using [Intersphinx](https://docs.readthedocs.io/en/stable/guides/intersphinx.html). * Add the rest of the external links using this [syntax](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#links-to-external-web-pages). ## `backend_kwargs`, `marginal_kwargs` and `joint_kwargs` For adding `backend_kwargs` as they have been added in #1816, see [Adding backend_kwargs guide](https://github.com/arviz-devs/arviz/wiki/ArviZ-Hacktoberfest-2021#adding-backend_kwargs). Follow the same process for `marginal_kwargs ` and `joint_kwargs`. ## See also section Add the following functions in the "See also" section: 1. [plot_pair](https://arviz-devs.github.io/arviz/api/generated/arviz.plot_pair.html) See [Adding "see also" section](https://github.com/arviz-devs/arviz/wiki/ArviZ-Hacktoberfest-2021#adding-see-also-section) for more details. ## Checking the preview of docs For checking the preview of docs, you don't need t set up the project on your local machine. Just follow the [checking the preview guide](https://github.com/arviz-devs/arviz/wiki/ArviZ-Hacktoberfest-2021#how-to-check-the-preview-of-docs). </issue> <code> [start of arviz/plots/jointplot.py] 1 """Joint scatter plot of two variables.""" 2 import warnings 3 4 from ..data import convert_to_dataset 5 from ..sel_utils import xarray_var_iter 6 from ..rcparams import rcParams 7 from ..utils import _var_names, get_coords 8 from .plot_utils import get_plotting_function 9 10 11 def plot_joint( 12 data, 13 group="posterior", 14 var_names=None, 15 filter_vars=None, 16 transform=None, 17 coords=None, 18 figsize=None, 19 textsize=None, 20 kind="scatter", 21 gridsize="auto", 22 contour=True, 23 fill_last=True, 24 joint_kwargs=None, 25 marginal_kwargs=None, 26 ax=None, 27 backend=None, 28 backend_kwargs=None, 29 show=None, 30 ): 31 """ 32 Plot a scatter or hexbin of two variables with their respective marginals distributions. 33 34 Parameters 35 ---------- 36 data: obj 37 Any object that can be converted to an az.InferenceData object 38 Refer to documentation of az.convert_to_dataset for details 39 group: str, optional 40 Specifies which InferenceData group should be plotted. Defaults to ‘posterior’. 41 var_names: str or iterable of str 42 Variables to be plotted. Iterable of two variables or one variable (with subset 43 having exactly 2 dimensions) are required. Prefix the variables by `~` when you 44 want to exclude them from the plot. 45 filter_vars: {None, "like", "regex"}, optional, default=None 46 If `None` (default), interpret var_names as the real variables names. If "like", 47 interpret var_names as substrings of the real variables names. If "regex", 48 interpret var_names as regular expressions on the real variables names. A la 49 `pandas.filter`. 50 transform: callable 51 Function to transform data (defaults to None i.e. the identity function) 52 coords: mapping, optional 53 Coordinates of var_names to be plotted. Passed to `Dataset.sel` 54 figsize: tuple 55 Figure size. If None it will be defined automatically. 56 textsize: float 57 Text size scaling factor for labels, titles and lines. If None it will be autoscaled based 58 on figsize. 59 kind: str 60 Type of plot to display (scatter, kde or hexbin) 61 gridsize: int or (int, int), optional. 62 The number of hexagons in the x-direction. Ignored when hexbin is False. See `plt.hexbin` 63 for details 64 contour: bool 65 If True plot the 2D KDE using contours, otherwise plot a smooth 2D KDE. Defaults to True. 66 fill_last: bool 67 If True fill the last contour of the 2D KDE plot. Defaults to True. 68 joint_kwargs: dicts, optional 69 Additional keywords modifying the join distribution (central subplot) 70 marginal_kwargs: dicts, optional 71 Additional keywords modifying the marginals distributions (top and right subplot) 72 ax: tuple of axes, optional 73 Tuple containing (ax_joint, ax_hist_x, ax_hist_y). If None, a new figure and axes 74 will be created. Matplotlib axes or bokeh figures. 75 backend: str, optional 76 Select plotting backend {"matplotlib","bokeh"}. Default "matplotlib". 77 backend_kwargs: bool, optional 78 These are kwargs specific to the backend being used. For additional documentation 79 check the plotting method of the backend. 80 show: bool, optional 81 Call backend show function. 82 83 Returns 84 ------- 85 axes: matplotlib axes or bokeh figures 86 ax_joint: joint (central) distribution 87 ax_hist_x: x (top) distribution 88 ax_hist_y: y (right) distribution 89 90 Examples 91 -------- 92 Scatter Joint plot 93 94 .. plot:: 95 :context: close-figs 96 97 >>> import arviz as az 98 >>> data = az.load_arviz_data('non_centered_eight') 99 >>> az.plot_joint(data, 100 >>> var_names=['theta'], 101 >>> coords={'school': ['Choate', 'Phillips Andover']}, 102 >>> kind='scatter', 103 >>> figsize=(6, 6)) 104 105 Hexbin Joint plot 106 107 .. plot:: 108 :context: close-figs 109 110 >>> az.plot_joint(data, 111 >>> var_names=['theta'], 112 >>> coords={'school': ['Choate', 'Phillips Andover']}, 113 >>> kind='hexbin', 114 >>> figsize=(6, 6)) 115 116 KDE Joint plot 117 118 .. plot:: 119 :context: close-figs 120 121 >>> az.plot_joint(data, 122 >>> var_names=['theta'], 123 >>> coords={'school': ['Choate', 'Phillips Andover']}, 124 >>> kind='kde', 125 >>> figsize=(6, 6)) 126 127 Overlaid plots: 128 129 .. plot:: 130 :context: close-figs 131 132 >>> data2 = az.load_arviz_data("centered_eight") 133 >>> kde_kwargs = {"contourf_kwargs": {"alpha": 0}, "contour_kwargs": {"colors": "k"}} 134 >>> ax = az.plot_joint( 135 ... data, var_names=("mu", "tau"), kind="kde", fill_last=False, 136 ... joint_kwargs=kde_kwargs, marginal_kwargs={"color": "k"} 137 ... ) 138 >>> kde_kwargs["contour_kwargs"]["colors"] = "r" 139 >>> az.plot_joint( 140 ... data2, var_names=("mu", "tau"), kind="kde", fill_last=False, 141 ... joint_kwargs=kde_kwargs, marginal_kwargs={"color": "r"}, ax=ax 142 ... ) 143 144 """ 145 warnings.warn("plot_joint will be deprecated. Please use plot_pair instead.") 146 147 valid_kinds = ["scatter", "kde", "hexbin"] 148 if kind not in valid_kinds: 149 raise ValueError( 150 ("Plot type {} not recognized." "Plot type must be in {}").format(kind, valid_kinds) 151 ) 152 153 data = convert_to_dataset(data, group=group) 154 155 if transform is not None: 156 data = transform(data) 157 158 if coords is None: 159 coords = {} 160 161 var_names = _var_names(var_names, data, filter_vars) 162 163 plotters = list(xarray_var_iter(get_coords(data, coords), var_names=var_names, combined=True)) 164 165 if len(plotters) != 2: 166 raise Exception(f"Number of variables to be plotted must 2 (you supplied {len(plotters)})") 167 168 plot_joint_kwargs = dict( 169 ax=ax, 170 figsize=figsize, 171 plotters=plotters, 172 kind=kind, 173 contour=contour, 174 fill_last=fill_last, 175 joint_kwargs=joint_kwargs, 176 gridsize=gridsize, 177 textsize=textsize, 178 marginal_kwargs=marginal_kwargs, 179 backend_kwargs=backend_kwargs, 180 show=show, 181 ) 182 183 if backend is None: 184 backend = rcParams["plot.backend"] 185 backend = backend.lower() 186 187 # TODO: Add backend kwargs 188 plot = get_plotting_function("plot_joint", "jointplot", backend) 189 axes = plot(**plot_joint_kwargs) 190 return axes 191 [end of arviz/plots/jointplot.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/arviz/plots/jointplot.py b/arviz/plots/jointplot.py --- a/arviz/plots/jointplot.py +++ b/arviz/plots/jointplot.py @@ -31,36 +31,40 @@ """ Plot a scatter or hexbin of two variables with their respective marginals distributions. + .. deprecated:: 0.9.0 + Use :func:`arviz.plot_pair` instead. + Parameters ---------- data: obj - Any object that can be converted to an az.InferenceData object - Refer to documentation of az.convert_to_dataset for details + Any object that can be converted to an :class:`arviz.InferenceData` object + Refer to documentation of :func:`arviz.convert_to_dataset` for details. group: str, optional - Specifies which InferenceData group should be plotted. Defaults to ‘posterior’. + Specifies which :class:`arviz.InferenceData` group should be plotted. + Defaults to "posterior". var_names: str or iterable of str Variables to be plotted. Iterable of two variables or one variable (with subset - having exactly 2 dimensions) are required. Prefix the variables by `~` when you + having exactly 2 dimensions) are required. Prefix the variables by ``~`` when you want to exclude them from the plot. filter_vars: {None, "like", "regex"}, optional, default=None If `None` (default), interpret var_names as the real variables names. If "like", interpret var_names as substrings of the real variables names. If "regex", interpret var_names as regular expressions on the real variables names. A la - `pandas.filter`. + ``pandas.filter``. transform: callable Function to transform data (defaults to None i.e. the identity function) coords: mapping, optional - Coordinates of var_names to be plotted. Passed to `Dataset.sel` + Coordinates of var_names to be plotted, passed to :meth:`xarray.Dataset.sel` figsize: tuple Figure size. If None it will be defined automatically. textsize: float Text size scaling factor for labels, titles and lines. If None it will be autoscaled based on figsize. kind: str - Type of plot to display (scatter, kde or hexbin) + Type of plot to display ("scatter", "kde" or "hexbin") gridsize: int or (int, int), optional. - The number of hexagons in the x-direction. Ignored when hexbin is False. See `plt.hexbin` - for details + The number of hexagons in the x-direction. Ignored when hexbin is False. + See :func:`matplotlib.pyplot.hexbin` for details. contour: bool If True plot the 2D KDE using contours, otherwise plot a smooth 2D KDE. Defaults to True. fill_last: bool @@ -73,10 +77,11 @@ Tuple containing (ax_joint, ax_hist_x, ax_hist_y). If None, a new figure and axes will be created. Matplotlib axes or bokeh figures. backend: str, optional - Select plotting backend {"matplotlib","bokeh"}. Default "matplotlib". + Select plotting backend {"matplotlib","bokeh"}. Defaults to "matplotlib". backend_kwargs: bool, optional - These are kwargs specific to the backend being used. For additional documentation - check the plotting method of the backend. + These are kwargs specific to the backend being used, passed to + :func:`matplotlib.pyplot.figure` or + :func:`bokeh.plotting.figure`. show: bool, optional Call backend show function. @@ -87,6 +92,10 @@ ax_hist_x: x (top) distribution ax_hist_y: y (right) distribution + See Also + -------- + plot_pair : Plot a scatter, kde and/or hexbin matrix with (optional) marginals on the diagonal. + Examples -------- Scatter Joint plot
{"golden_diff": "diff --git a/arviz/plots/jointplot.py b/arviz/plots/jointplot.py\n--- a/arviz/plots/jointplot.py\n+++ b/arviz/plots/jointplot.py\n@@ -31,36 +31,40 @@\n \"\"\"\n Plot a scatter or hexbin of two variables with their respective marginals distributions.\n \n+ .. deprecated:: 0.9.0\n+ Use :func:`arviz.plot_pair` instead.\n+\n Parameters\n ----------\n data: obj\n- Any object that can be converted to an az.InferenceData object\n- Refer to documentation of az.convert_to_dataset for details\n+ Any object that can be converted to an :class:`arviz.InferenceData` object\n+ Refer to documentation of :func:`arviz.convert_to_dataset` for details.\n group: str, optional\n- Specifies which InferenceData group should be plotted. Defaults to \u2018posterior\u2019.\n+ Specifies which :class:`arviz.InferenceData` group should be plotted.\n+ Defaults to \"posterior\".\n var_names: str or iterable of str\n Variables to be plotted. Iterable of two variables or one variable (with subset\n- having exactly 2 dimensions) are required. Prefix the variables by `~` when you\n+ having exactly 2 dimensions) are required. Prefix the variables by ``~`` when you\n want to exclude them from the plot.\n filter_vars: {None, \"like\", \"regex\"}, optional, default=None\n If `None` (default), interpret var_names as the real variables names. If \"like\",\n interpret var_names as substrings of the real variables names. If \"regex\",\n interpret var_names as regular expressions on the real variables names. A la\n- `pandas.filter`.\n+ ``pandas.filter``.\n transform: callable\n Function to transform data (defaults to None i.e. the identity function)\n coords: mapping, optional\n- Coordinates of var_names to be plotted. Passed to `Dataset.sel`\n+ Coordinates of var_names to be plotted, passed to :meth:`xarray.Dataset.sel`\n figsize: tuple\n Figure size. If None it will be defined automatically.\n textsize: float\n Text size scaling factor for labels, titles and lines. If None it will be autoscaled based\n on figsize.\n kind: str\n- Type of plot to display (scatter, kde or hexbin)\n+ Type of plot to display (\"scatter\", \"kde\" or \"hexbin\")\n gridsize: int or (int, int), optional.\n- The number of hexagons in the x-direction. Ignored when hexbin is False. See `plt.hexbin`\n- for details\n+ The number of hexagons in the x-direction. Ignored when hexbin is False.\n+ See :func:`matplotlib.pyplot.hexbin` for details.\n contour: bool\n If True plot the 2D KDE using contours, otherwise plot a smooth 2D KDE. Defaults to True.\n fill_last: bool\n@@ -73,10 +77,11 @@\n Tuple containing (ax_joint, ax_hist_x, ax_hist_y). If None, a new figure and axes\n will be created. Matplotlib axes or bokeh figures.\n backend: str, optional\n- Select plotting backend {\"matplotlib\",\"bokeh\"}. Default \"matplotlib\".\n+ Select plotting backend {\"matplotlib\",\"bokeh\"}. Defaults to \"matplotlib\".\n backend_kwargs: bool, optional\n- These are kwargs specific to the backend being used. For additional documentation\n- check the plotting method of the backend.\n+ These are kwargs specific to the backend being used, passed to\n+ :func:`matplotlib.pyplot.figure` or\n+ :func:`bokeh.plotting.figure`.\n show: bool, optional\n Call backend show function.\n \n@@ -87,6 +92,10 @@\n ax_hist_x: x (top) distribution\n ax_hist_y: y (right) distribution\n \n+ See Also\n+ --------\n+ plot_pair : Plot a scatter, kde and/or hexbin matrix with (optional) marginals on the diagonal.\n+\n Examples\n --------\n Scatter Joint plot\n", "issue": "Add docs references to plot_joint docstring\nAdd links to the related functions and classes to all the places where they are mentioned in the [plot_joint](https://arviz-devs.github.io/arviz/api/generated/arviz.plot_joint.html) docstring.\r\n\r\n## Sample PR:\r\nYou can see https://github.com/arviz-devs/arviz/pull/1816 for the function, `plot_autocorr`. \r\n\r\n## Source File\r\nSource file: https://github.com/arviz-devs/arviz/blob/main/arviz/plots/jointplot.py\r\n\r\n## How to add links\r\nAdd the links in the docstring of doc. You can also check the docstring by going to the `[source]`. \r\n\r\n* Add links to the ArviZ functions and classes using [Cross-referencing with Sphinx](https://docs.readthedocs.io/en/stable/guides/cross-referencing-with-sphinx.html).\r\n* Add links to the xarray functions and classes using [Intersphinx](https://docs.readthedocs.io/en/stable/guides/intersphinx.html).\r\n* Add the rest of the external links using this [syntax](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#links-to-external-web-pages).\r\n\r\n## `backend_kwargs`, `marginal_kwargs` and `joint_kwargs`\r\nFor adding `backend_kwargs` as they have been added in #1816, see [Adding backend_kwargs guide](https://github.com/arviz-devs/arviz/wiki/ArviZ-Hacktoberfest-2021#adding-backend_kwargs). Follow the same process for `marginal_kwargs ` and `joint_kwargs`.\r\n\r\n## See also section\r\nAdd the following functions in the \"See also\" section:\r\n1. [plot_pair](https://arviz-devs.github.io/arviz/api/generated/arviz.plot_pair.html)\r\n\r\nSee [Adding \"see also\" section](https://github.com/arviz-devs/arviz/wiki/ArviZ-Hacktoberfest-2021#adding-see-also-section) for more details. \r\n\r\n## Checking the preview of docs\r\nFor checking the preview of docs, you don't need t set up the project on your local machine. Just follow the [checking the preview guide](https://github.com/arviz-devs/arviz/wiki/ArviZ-Hacktoberfest-2021#how-to-check-the-preview-of-docs). \n", "before_files": [{"content": "\"\"\"Joint scatter plot of two variables.\"\"\"\nimport warnings\n\nfrom ..data import convert_to_dataset\nfrom ..sel_utils import xarray_var_iter\nfrom ..rcparams import rcParams\nfrom ..utils import _var_names, get_coords\nfrom .plot_utils import get_plotting_function\n\n\ndef plot_joint(\n data,\n group=\"posterior\",\n var_names=None,\n filter_vars=None,\n transform=None,\n coords=None,\n figsize=None,\n textsize=None,\n kind=\"scatter\",\n gridsize=\"auto\",\n contour=True,\n fill_last=True,\n joint_kwargs=None,\n marginal_kwargs=None,\n ax=None,\n backend=None,\n backend_kwargs=None,\n show=None,\n):\n \"\"\"\n Plot a scatter or hexbin of two variables with their respective marginals distributions.\n\n Parameters\n ----------\n data: obj\n Any object that can be converted to an az.InferenceData object\n Refer to documentation of az.convert_to_dataset for details\n group: str, optional\n Specifies which InferenceData group should be plotted. Defaults to \u2018posterior\u2019.\n var_names: str or iterable of str\n Variables to be plotted. Iterable of two variables or one variable (with subset\n having exactly 2 dimensions) are required. Prefix the variables by `~` when you\n want to exclude them from the plot.\n filter_vars: {None, \"like\", \"regex\"}, optional, default=None\n If `None` (default), interpret var_names as the real variables names. If \"like\",\n interpret var_names as substrings of the real variables names. If \"regex\",\n interpret var_names as regular expressions on the real variables names. A la\n `pandas.filter`.\n transform: callable\n Function to transform data (defaults to None i.e. the identity function)\n coords: mapping, optional\n Coordinates of var_names to be plotted. Passed to `Dataset.sel`\n figsize: tuple\n Figure size. If None it will be defined automatically.\n textsize: float\n Text size scaling factor for labels, titles and lines. If None it will be autoscaled based\n on figsize.\n kind: str\n Type of plot to display (scatter, kde or hexbin)\n gridsize: int or (int, int), optional.\n The number of hexagons in the x-direction. Ignored when hexbin is False. See `plt.hexbin`\n for details\n contour: bool\n If True plot the 2D KDE using contours, otherwise plot a smooth 2D KDE. Defaults to True.\n fill_last: bool\n If True fill the last contour of the 2D KDE plot. Defaults to True.\n joint_kwargs: dicts, optional\n Additional keywords modifying the join distribution (central subplot)\n marginal_kwargs: dicts, optional\n Additional keywords modifying the marginals distributions (top and right subplot)\n ax: tuple of axes, optional\n Tuple containing (ax_joint, ax_hist_x, ax_hist_y). If None, a new figure and axes\n will be created. Matplotlib axes or bokeh figures.\n backend: str, optional\n Select plotting backend {\"matplotlib\",\"bokeh\"}. Default \"matplotlib\".\n backend_kwargs: bool, optional\n These are kwargs specific to the backend being used. For additional documentation\n check the plotting method of the backend.\n show: bool, optional\n Call backend show function.\n\n Returns\n -------\n axes: matplotlib axes or bokeh figures\n ax_joint: joint (central) distribution\n ax_hist_x: x (top) distribution\n ax_hist_y: y (right) distribution\n\n Examples\n --------\n Scatter Joint plot\n\n .. plot::\n :context: close-figs\n\n >>> import arviz as az\n >>> data = az.load_arviz_data('non_centered_eight')\n >>> az.plot_joint(data,\n >>> var_names=['theta'],\n >>> coords={'school': ['Choate', 'Phillips Andover']},\n >>> kind='scatter',\n >>> figsize=(6, 6))\n\n Hexbin Joint plot\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_joint(data,\n >>> var_names=['theta'],\n >>> coords={'school': ['Choate', 'Phillips Andover']},\n >>> kind='hexbin',\n >>> figsize=(6, 6))\n\n KDE Joint plot\n\n .. plot::\n :context: close-figs\n\n >>> az.plot_joint(data,\n >>> var_names=['theta'],\n >>> coords={'school': ['Choate', 'Phillips Andover']},\n >>> kind='kde',\n >>> figsize=(6, 6))\n\n Overlaid plots:\n\n .. plot::\n :context: close-figs\n\n >>> data2 = az.load_arviz_data(\"centered_eight\")\n >>> kde_kwargs = {\"contourf_kwargs\": {\"alpha\": 0}, \"contour_kwargs\": {\"colors\": \"k\"}}\n >>> ax = az.plot_joint(\n ... data, var_names=(\"mu\", \"tau\"), kind=\"kde\", fill_last=False,\n ... joint_kwargs=kde_kwargs, marginal_kwargs={\"color\": \"k\"}\n ... )\n >>> kde_kwargs[\"contour_kwargs\"][\"colors\"] = \"r\"\n >>> az.plot_joint(\n ... data2, var_names=(\"mu\", \"tau\"), kind=\"kde\", fill_last=False,\n ... joint_kwargs=kde_kwargs, marginal_kwargs={\"color\": \"r\"}, ax=ax\n ... )\n\n \"\"\"\n warnings.warn(\"plot_joint will be deprecated. Please use plot_pair instead.\")\n\n valid_kinds = [\"scatter\", \"kde\", \"hexbin\"]\n if kind not in valid_kinds:\n raise ValueError(\n (\"Plot type {} not recognized.\" \"Plot type must be in {}\").format(kind, valid_kinds)\n )\n\n data = convert_to_dataset(data, group=group)\n\n if transform is not None:\n data = transform(data)\n\n if coords is None:\n coords = {}\n\n var_names = _var_names(var_names, data, filter_vars)\n\n plotters = list(xarray_var_iter(get_coords(data, coords), var_names=var_names, combined=True))\n\n if len(plotters) != 2:\n raise Exception(f\"Number of variables to be plotted must 2 (you supplied {len(plotters)})\")\n\n plot_joint_kwargs = dict(\n ax=ax,\n figsize=figsize,\n plotters=plotters,\n kind=kind,\n contour=contour,\n fill_last=fill_last,\n joint_kwargs=joint_kwargs,\n gridsize=gridsize,\n textsize=textsize,\n marginal_kwargs=marginal_kwargs,\n backend_kwargs=backend_kwargs,\n show=show,\n )\n\n if backend is None:\n backend = rcParams[\"plot.backend\"]\n backend = backend.lower()\n\n # TODO: Add backend kwargs\n plot = get_plotting_function(\"plot_joint\", \"jointplot\", backend)\n axes = plot(**plot_joint_kwargs)\n return axes\n", "path": "arviz/plots/jointplot.py"}]}
3,056
923
gh_patches_debug_18311
rasdani/github-patches
git_diff
pytorch__ignite-474
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Version selector on gh-pages like in pytorch docs I think that many of us are installing the ignite using the master branch. It would be nice if the ignite documentation also contains the master docs like https://pytorch.org/docs/versions.html. What do you think? Best, Igor. </issue> <code> [start of docs/source/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # Configuration file for the Sphinx documentation builder. 4 # 5 # This file does only contain a selection of the most common options. For a 6 # full list see the documentation: 7 # http://www.sphinx-doc.org/en/stable/config 8 9 # -- Path setup -------------------------------------------------------------- 10 11 # If extensions (or modules to document with autodoc) are in another directory, 12 # add these directories to sys.path here. If the directory is relative to the 13 # documentation root, use os.path.abspath to make it absolute, like shown here. 14 # 15 # import os 16 # import sys 17 # sys.path.insert(0, os.path.abspath('.')) 18 import ignite 19 import sphinx_rtd_theme 20 21 # -- Project information ----------------------------------------------------- 22 23 project = 'ignite' 24 copyright = '2018, Torch Contributors' 25 author = 'Torch Contributors' 26 27 # The short X.Y version 28 version = 'master (' + ignite.__version__ + ' )' 29 # The full version, including alpha/beta/rc tags 30 release = 'master' 31 32 33 # -- General configuration --------------------------------------------------- 34 35 # If your documentation needs a minimal Sphinx version, state it here. 36 # 37 # needs_sphinx = '1.0' 38 39 # Add any Sphinx extension module names here, as strings. They can be 40 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 41 # ones. 42 extensions = [ 43 'sphinx.ext.autosummary', 44 'sphinx.ext.doctest', 45 'sphinx.ext.intersphinx', 46 'sphinx.ext.todo', 47 'sphinx.ext.coverage', 48 'sphinx.ext.mathjax', 49 'sphinx.ext.napoleon', 50 'sphinx.ext.viewcode' 51 ] 52 53 # Add any paths that contain templates here, relative to this directory. 54 templates_path = ['_templates'] 55 56 # The suffix(es) of source filenames. 57 # You can specify multiple suffix as a list of string: 58 # 59 # source_suffix = ['.rst', '.md'] 60 source_suffix = '.rst' 61 62 # The master toctree document. 63 master_doc = 'index' 64 65 # The language for content autogenerated by Sphinx. Refer to documentation 66 # for a list of supported languages. 67 # 68 # This is also used if you do content translation via gettext catalogs. 69 # Usually you set "language" from the command line for these cases. 70 language = None 71 72 # List of patterns, relative to source directory, that match files and 73 # directories to ignore when looking for source files. 74 # This pattern also affects html_static_path and html_extra_path . 75 exclude_patterns = [] 76 77 # The name of the Pygments (syntax highlighting) style to use. 78 pygments_style = 'sphinx' 79 80 81 # -- Options for HTML output ------------------------------------------------- 82 83 # The theme to use for HTML and HTML Help pages. See the documentation for 84 # a list of builtin themes. 85 # 86 html_theme = 'sphinx_rtd_theme' 87 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] 88 89 html_theme_options = { 90 'collapse_navigation': False, 91 'display_version': True, 92 'logo_only': True, 93 } 94 95 html_logo = '_static/img/ignite-logo-dark.svg' 96 97 # Theme options are theme-specific and customize the look and feel of a theme 98 # further. For a list of options available for each theme, see the 99 # documentation. 100 # 101 # html_theme_options = {} 102 103 # Add any paths that contain custom static files (such as style sheets) here, 104 # relative to this directory. They are copied after the builtin static files, 105 # so a file named "default.css" will overwrite the builtin "default.css". 106 html_static_path = ['_static'] 107 108 html_context = { 109 'css_files': [ 110 'https://fonts.googleapis.com/css?family=Lato', 111 '_static/css/pytorch_theme.css' 112 ], 113 } 114 115 116 # -- Options for HTMLHelp output --------------------------------------------- 117 118 # Output file base name for HTML help builder. 119 htmlhelp_basename = 'ignitedoc' 120 121 122 # -- Options for LaTeX output ------------------------------------------------ 123 124 latex_elements = { 125 # The paper size ('letterpaper' or 'a4paper'). 126 # 127 # 'papersize': 'letterpaper', 128 129 # The font size ('10pt', '11pt' or '12pt'). 130 # 131 # 'pointsize': '10pt', 132 133 # Additional stuff for the LaTeX preamble. 134 # 135 # 'preamble': '', 136 137 # Latex figure (float) alignment 138 # 139 # 'figure_align': 'htbp', 140 } 141 142 # Grouping the document tree into LaTeX files. List of tuples 143 # (source start file, target name, title, 144 # author, documentclass [howto, manual, or own class]). 145 latex_documents = [ 146 (master_doc, 'ignite.tex', 'ignite Documentation', 147 'Torch Contributors', 'manual'), 148 ] 149 150 151 # -- Options for manual page output ------------------------------------------ 152 153 # One entry per manual page. List of tuples 154 # (source start file, name, description, authors, manual section). 155 man_pages = [ 156 (master_doc, 'ignite', 'ignite Documentation', 157 [author], 1) 158 ] 159 160 161 # -- Options for Texinfo output ---------------------------------------------- 162 163 # Grouping the document tree into Texinfo files. List of tuples 164 # (source start file, target name, title, author, 165 # dir menu entry, description, category) 166 texinfo_documents = [ 167 (master_doc, 'ignite', 'ignite Documentation', 168 author, 'ignite', 'One line description of project.', 169 'Miscellaneous'), 170 ] 171 172 173 # -- Extension configuration ------------------------------------------------- 174 175 # -- Options for intersphinx extension --------------------------------------- 176 177 # Example configuration for intersphinx: refer to the Python standard library. 178 intersphinx_mapping = {'https://docs.python.org/': None} 179 180 # -- Options for todo extension ---------------------------------------------- 181 182 # If true, `todo` and `todoList` produce output, else they produce nothing. 183 todo_include_todos = True 184 [end of docs/source/conf.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/docs/source/conf.py b/docs/source/conf.py --- a/docs/source/conf.py +++ b/docs/source/conf.py @@ -12,9 +12,9 @@ # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # -# import os -# import sys -# sys.path.insert(0, os.path.abspath('.')) +import os +import sys +sys.path.insert(0, os.path.abspath('../..')) import ignite import sphinx_rtd_theme @@ -25,7 +25,15 @@ author = 'Torch Contributors' # The short X.Y version -version = 'master (' + ignite.__version__ + ' )' +try: + version = os.environ['code_version'] + if 'master' in version: + version = 'master (' + ignite.__version__ + ')' + else: + version = version.replace('v', '') +except KeyError: + version = ignite.__version__ + # The full version, including alpha/beta/rc tags release = 'master'
{"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -12,9 +12,9 @@\n # add these directories to sys.path here. If the directory is relative to the\n # documentation root, use os.path.abspath to make it absolute, like shown here.\n #\n-# import os\n-# import sys\n-# sys.path.insert(0, os.path.abspath('.'))\n+import os\n+import sys\n+sys.path.insert(0, os.path.abspath('../..'))\n import ignite\n import sphinx_rtd_theme\n \n@@ -25,7 +25,15 @@\n author = 'Torch Contributors'\n \n # The short X.Y version\n-version = 'master (' + ignite.__version__ + ' )'\n+try:\n+ version = os.environ['code_version']\n+ if 'master' in version:\n+ version = 'master (' + ignite.__version__ + ')'\n+ else:\n+ version = version.replace('v', '')\n+except KeyError:\n+ version = ignite.__version__\n+\n # The full version, including alpha/beta/rc tags\n release = 'master'\n", "issue": "Version selector on gh-pages like in pytorch docs\nI think that many of us are installing the ignite using the master branch.\r\n\r\nIt would be nice if the ignite documentation also contains the master docs like https://pytorch.org/docs/versions.html. \r\n\r\n\r\nWhat do you think?\r\n\r\nBest,\r\nIgor.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nimport ignite\nimport sphinx_rtd_theme\n\n# -- Project information -----------------------------------------------------\n\nproject = 'ignite'\ncopyright = '2018, Torch Contributors'\nauthor = 'Torch Contributors'\n\n# The short X.Y version\nversion = 'master (' + ignite.__version__ + ' )'\n# The full version, including alpha/beta/rc tags\nrelease = 'master'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.viewcode'\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': True,\n}\n\nhtml_logo = '_static/img/ignite-logo-dark.svg'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_context = {\n 'css_files': [\n 'https://fonts.googleapis.com/css?family=Lato',\n '_static/css/pytorch_theme.css'\n ],\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ignitedoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'ignite.tex', 'ignite Documentation',\n 'Torch Contributors', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'ignite', 'ignite Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'ignite', 'ignite Documentation',\n author, 'ignite', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/source/conf.py"}]}
2,290
252
gh_patches_debug_5902
rasdani/github-patches
git_diff
ivy-llc__ivy-18140
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> conjugate </issue> <code> [start of ivy/functional/frontends/jax/devicearray.py] 1 # global 2 3 # local 4 import ivy 5 import ivy.functional.frontends.jax as jax_frontend 6 7 8 class DeviceArray: 9 def __init__(self, array, weak_type=False): 10 self._ivy_array = array if isinstance(array, ivy.Array) else ivy.array(array) 11 self.weak_type = weak_type 12 13 def __repr__(self): 14 main = ( 15 str(self.ivy_array.__repr__()) 16 .replace("ivy.array", "ivy.frontends.jax.DeviceArray") 17 .replace(")", "") 18 + ", dtype=" 19 + str(self.ivy_array.dtype) 20 ) 21 if self.weak_type: 22 return main + ", weak_type=True)" 23 return main + ")" 24 25 # Properties # 26 # ---------- # 27 28 @property 29 def ivy_array(self): 30 return self._ivy_array 31 32 @property 33 def dtype(self): 34 return self.ivy_array.dtype 35 36 @property 37 def shape(self): 38 return self.ivy_array.shape 39 40 @property 41 def at(self): 42 return jax_frontend._src.numpy.lax_numpy._IndexUpdateHelper(self.ivy_array) 43 44 # Instance Methods # 45 # ---------------- # 46 47 def all(self, *, axis=None, out=None, keepdims=False): 48 return jax_frontend.numpy.all( 49 self._ivy_array, axis=axis, keepdims=keepdims, out=out 50 ) 51 52 def argmax( 53 self, 54 /, 55 *, 56 axis=None, 57 out=None, 58 keepdims=False, 59 ): 60 return jax_frontend.numpy.argmax( 61 self, 62 axis=axis, 63 out=out, 64 keepdims=keepdims, 65 ) 66 67 def conj(self, /): 68 return jax_frontend.numpy.conj(self._ivy_array) 69 70 def mean(self, *, axis=None, dtype=None, out=None, keepdims=False, where=None): 71 return jax_frontend.numpy.mean( 72 self._ivy_array, 73 axis=axis, 74 dtype=dtype, 75 out=out, 76 keepdims=keepdims, 77 where=where, 78 ) 79 80 def cumprod(self, axis=None, dtype=None, out=None): 81 return jax_frontend.numpy.cumprod( 82 self, 83 axis=axis, 84 dtype=dtype, 85 out=out, 86 ) 87 88 def cumsum(self, axis=None, dtype=None, out=None): 89 return jax_frontend.numpy.cumsum( 90 self, 91 axis=axis, 92 dtype=dtype, 93 out=out, 94 ) 95 96 def nonzero(self, *, size=None, fill_value=None): 97 return jax_frontend.numpy.nonzero( 98 self, 99 size=size, 100 fill_value=fill_value, 101 ) 102 103 def ravel(self, order="C"): 104 return jax_frontend.numpy.ravel( 105 self, 106 order=order, 107 ) 108 109 def sort(self, axis=-1, order=None): 110 return jax_frontend.numpy.sort( 111 self, 112 axis=axis, 113 order=order, 114 ) 115 116 def __add__(self, other): 117 return jax_frontend.numpy.add(self, other) 118 119 def __radd__(self, other): 120 return jax_frontend.numpy.add(other, self) 121 122 def __sub__(self, other): 123 return jax_frontend.lax.sub(self, other) 124 125 def __rsub__(self, other): 126 return jax_frontend.lax.sub(other, self) 127 128 def __mul__(self, other): 129 return jax_frontend.lax.mul(self, other) 130 131 def __rmul__(self, other): 132 return jax_frontend.lax.mul(other, self) 133 134 def __div__(self, other): 135 return jax_frontend.numpy.divide(self, other) 136 137 def __rdiv__(self, other): 138 return jax_frontend.numpy.divide(other, self) 139 140 def __mod__(self, other): 141 return jax_frontend.numpy.mod(self, other) 142 143 def __rmod__(self, other): 144 return jax_frontend.numpy.mod(other, self) 145 146 def __truediv__(self, other): 147 return jax_frontend.numpy.divide(self, other) 148 149 def __rtruediv__(self, other): 150 return jax_frontend.numpy.divide(other, self) 151 152 def __matmul__(self, other): 153 return jax_frontend.numpy.dot(self, other) 154 155 def __rmatmul__(self, other): 156 return jax_frontend.numpy.dot(other, self) 157 158 def __pos__(self): 159 return self 160 161 def __neg__(self): 162 return jax_frontend.lax.neg(self) 163 164 def __eq__(self, other): 165 return jax_frontend.lax.eq(self, other) 166 167 def __ne__(self, other): 168 return jax_frontend.lax.ne(self, other) 169 170 def __lt__(self, other): 171 return jax_frontend.lax.lt(self, other) 172 173 def __le__(self, other): 174 return jax_frontend.lax.le(self, other) 175 176 def __gt__(self, other): 177 return jax_frontend.lax.gt(self, other) 178 179 def __ge__(self, other): 180 return jax_frontend.lax.ge(self, other) 181 182 def __abs__(self): 183 return jax_frontend.numpy.abs(self) 184 185 def __pow__(self, other): 186 return jax_frontend.lax.pow(self, other) 187 188 def __rpow__(self, other): 189 other = ivy.asarray(other) 190 return jax_frontend.lax.pow(other, self) 191 192 def __and__(self, other): 193 return jax_frontend.numpy.bitwise_and(self, other) 194 195 def __rand__(self, other): 196 return jax_frontend.numpy.bitwise_and(other, self) 197 198 def __or__(self, other): 199 return jax_frontend.numpy.bitwise_or(self, other) 200 201 def __ror__(self, other): 202 return jax_frontend.numpy.bitwise_or(other, self) 203 204 def __xor__(self, other): 205 return jax_frontend.lax.bitwise_xor(self, other) 206 207 def __rxor__(self, other): 208 return jax_frontend.lax.bitwise_xor(other, self) 209 210 def __invert__(self): 211 return jax_frontend.lax.bitwise_not(self) 212 213 def __lshift__(self, other): 214 return jax_frontend.lax.shift_left(self, other) 215 216 def __rlshift__(self, other): 217 return jax_frontend.lax.shift_left(other, self) 218 219 def __rshift__(self, other): 220 return jax_frontend.lax.shift_right_logical(self, other) 221 222 def __rrshift__(self, other): 223 return jax_frontend.lax.shift_right_logical(other, self) 224 225 def __getitem__(self, idx): 226 return self.at[idx].get() 227 228 def __setitem__(self, idx, val): 229 raise ivy.utils.exceptions.IvyException( 230 "ivy.functional.frontends.jax.DeviceArray object doesn't support assignment" 231 ) 232 233 def __iter__(self): 234 ndim = len(self.shape) 235 if ndim == 0: 236 raise TypeError("iteration over a 0-d devicearray not supported") 237 for i in range(self.shape[0]): 238 yield self[i] 239 240 def round(self, decimals=0): 241 return jax_frontend.numpy.round(self, decimals) 242 [end of ivy/functional/frontends/jax/devicearray.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/ivy/functional/frontends/jax/devicearray.py b/ivy/functional/frontends/jax/devicearray.py --- a/ivy/functional/frontends/jax/devicearray.py +++ b/ivy/functional/frontends/jax/devicearray.py @@ -67,6 +67,9 @@ def conj(self, /): return jax_frontend.numpy.conj(self._ivy_array) + def conjugate(self, /): + return jax_frontend.numpy.conjugate(self._ivy_array) + def mean(self, *, axis=None, dtype=None, out=None, keepdims=False, where=None): return jax_frontend.numpy.mean( self._ivy_array,
{"golden_diff": "diff --git a/ivy/functional/frontends/jax/devicearray.py b/ivy/functional/frontends/jax/devicearray.py\n--- a/ivy/functional/frontends/jax/devicearray.py\n+++ b/ivy/functional/frontends/jax/devicearray.py\n@@ -67,6 +67,9 @@\n def conj(self, /):\n return jax_frontend.numpy.conj(self._ivy_array)\n \n+ def conjugate(self, /):\n+ return jax_frontend.numpy.conjugate(self._ivy_array)\n+\n def mean(self, *, axis=None, dtype=None, out=None, keepdims=False, where=None):\n return jax_frontend.numpy.mean(\n self._ivy_array,\n", "issue": "conjugate\n\n", "before_files": [{"content": "# global\n\n# local\nimport ivy\nimport ivy.functional.frontends.jax as jax_frontend\n\n\nclass DeviceArray:\n def __init__(self, array, weak_type=False):\n self._ivy_array = array if isinstance(array, ivy.Array) else ivy.array(array)\n self.weak_type = weak_type\n\n def __repr__(self):\n main = (\n str(self.ivy_array.__repr__())\n .replace(\"ivy.array\", \"ivy.frontends.jax.DeviceArray\")\n .replace(\")\", \"\")\n + \", dtype=\"\n + str(self.ivy_array.dtype)\n )\n if self.weak_type:\n return main + \", weak_type=True)\"\n return main + \")\"\n\n # Properties #\n # ---------- #\n\n @property\n def ivy_array(self):\n return self._ivy_array\n\n @property\n def dtype(self):\n return self.ivy_array.dtype\n\n @property\n def shape(self):\n return self.ivy_array.shape\n\n @property\n def at(self):\n return jax_frontend._src.numpy.lax_numpy._IndexUpdateHelper(self.ivy_array)\n\n # Instance Methods #\n # ---------------- #\n\n def all(self, *, axis=None, out=None, keepdims=False):\n return jax_frontend.numpy.all(\n self._ivy_array, axis=axis, keepdims=keepdims, out=out\n )\n\n def argmax(\n self,\n /,\n *,\n axis=None,\n out=None,\n keepdims=False,\n ):\n return jax_frontend.numpy.argmax(\n self,\n axis=axis,\n out=out,\n keepdims=keepdims,\n )\n\n def conj(self, /):\n return jax_frontend.numpy.conj(self._ivy_array)\n\n def mean(self, *, axis=None, dtype=None, out=None, keepdims=False, where=None):\n return jax_frontend.numpy.mean(\n self._ivy_array,\n axis=axis,\n dtype=dtype,\n out=out,\n keepdims=keepdims,\n where=where,\n )\n\n def cumprod(self, axis=None, dtype=None, out=None):\n return jax_frontend.numpy.cumprod(\n self,\n axis=axis,\n dtype=dtype,\n out=out,\n )\n\n def cumsum(self, axis=None, dtype=None, out=None):\n return jax_frontend.numpy.cumsum(\n self,\n axis=axis,\n dtype=dtype,\n out=out,\n )\n\n def nonzero(self, *, size=None, fill_value=None):\n return jax_frontend.numpy.nonzero(\n self,\n size=size,\n fill_value=fill_value,\n )\n\n def ravel(self, order=\"C\"):\n return jax_frontend.numpy.ravel(\n self,\n order=order,\n )\n\n def sort(self, axis=-1, order=None):\n return jax_frontend.numpy.sort(\n self,\n axis=axis,\n order=order,\n )\n\n def __add__(self, other):\n return jax_frontend.numpy.add(self, other)\n\n def __radd__(self, other):\n return jax_frontend.numpy.add(other, self)\n\n def __sub__(self, other):\n return jax_frontend.lax.sub(self, other)\n\n def __rsub__(self, other):\n return jax_frontend.lax.sub(other, self)\n\n def __mul__(self, other):\n return jax_frontend.lax.mul(self, other)\n\n def __rmul__(self, other):\n return jax_frontend.lax.mul(other, self)\n\n def __div__(self, other):\n return jax_frontend.numpy.divide(self, other)\n\n def __rdiv__(self, other):\n return jax_frontend.numpy.divide(other, self)\n\n def __mod__(self, other):\n return jax_frontend.numpy.mod(self, other)\n\n def __rmod__(self, other):\n return jax_frontend.numpy.mod(other, self)\n\n def __truediv__(self, other):\n return jax_frontend.numpy.divide(self, other)\n\n def __rtruediv__(self, other):\n return jax_frontend.numpy.divide(other, self)\n\n def __matmul__(self, other):\n return jax_frontend.numpy.dot(self, other)\n\n def __rmatmul__(self, other):\n return jax_frontend.numpy.dot(other, self)\n\n def __pos__(self):\n return self\n\n def __neg__(self):\n return jax_frontend.lax.neg(self)\n\n def __eq__(self, other):\n return jax_frontend.lax.eq(self, other)\n\n def __ne__(self, other):\n return jax_frontend.lax.ne(self, other)\n\n def __lt__(self, other):\n return jax_frontend.lax.lt(self, other)\n\n def __le__(self, other):\n return jax_frontend.lax.le(self, other)\n\n def __gt__(self, other):\n return jax_frontend.lax.gt(self, other)\n\n def __ge__(self, other):\n return jax_frontend.lax.ge(self, other)\n\n def __abs__(self):\n return jax_frontend.numpy.abs(self)\n\n def __pow__(self, other):\n return jax_frontend.lax.pow(self, other)\n\n def __rpow__(self, other):\n other = ivy.asarray(other)\n return jax_frontend.lax.pow(other, self)\n\n def __and__(self, other):\n return jax_frontend.numpy.bitwise_and(self, other)\n\n def __rand__(self, other):\n return jax_frontend.numpy.bitwise_and(other, self)\n\n def __or__(self, other):\n return jax_frontend.numpy.bitwise_or(self, other)\n\n def __ror__(self, other):\n return jax_frontend.numpy.bitwise_or(other, self)\n\n def __xor__(self, other):\n return jax_frontend.lax.bitwise_xor(self, other)\n\n def __rxor__(self, other):\n return jax_frontend.lax.bitwise_xor(other, self)\n\n def __invert__(self):\n return jax_frontend.lax.bitwise_not(self)\n\n def __lshift__(self, other):\n return jax_frontend.lax.shift_left(self, other)\n\n def __rlshift__(self, other):\n return jax_frontend.lax.shift_left(other, self)\n\n def __rshift__(self, other):\n return jax_frontend.lax.shift_right_logical(self, other)\n\n def __rrshift__(self, other):\n return jax_frontend.lax.shift_right_logical(other, self)\n\n def __getitem__(self, idx):\n return self.at[idx].get()\n\n def __setitem__(self, idx, val):\n raise ivy.utils.exceptions.IvyException(\n \"ivy.functional.frontends.jax.DeviceArray object doesn't support assignment\"\n )\n\n def __iter__(self):\n ndim = len(self.shape)\n if ndim == 0:\n raise TypeError(\"iteration over a 0-d devicearray not supported\")\n for i in range(self.shape[0]):\n yield self[i]\n\n def round(self, decimals=0):\n return jax_frontend.numpy.round(self, decimals)\n", "path": "ivy/functional/frontends/jax/devicearray.py"}]}
2,745
150
gh_patches_debug_7238
rasdani/github-patches
git_diff
freedomofpress__securedrop-4496
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Expired Source Interface session causes server error on new session ## Description Creating a new codename after a session has expired causes a server error. ## Steps to Reproduce 1. Visit the codename generation page (`/generate`) but do not press 'Submit Documents' 2. Wait for the session to expire 3. Press 'Submit Documents' Diff to force session expiry at 1 minute: ```diff diff --git a/securedrop/source_app/__init__.py b/securedrop/source_app/__init__.py index d2679b737..60df38ece 100644 --- a/securedrop/source_app/__init__.py +++ b/securedrop/source_app/__init__.py @@ -148,9 +148,7 @@ def create_app(config): flash(Markup(msg), "important") session['expires'] = datetime.utcnow() + \ - timedelta(minutes=getattr(config, - 'SESSION_EXPIRATION_MINUTES', - 120)) + timedelta(minutes=1) # ignore_static here because `crypto_util.hash_codename` is scrypt # (very time consuming), and we don't need to waste time running if ``` ## Expected Behavior A new codename is generated or an appropriate error message is shown. ## Actual Behavior The Source Interface displays: > Server error > Sorry, the website encountered an error and was unable to complete your request. </issue> <code> [start of securedrop/source_app/__init__.py] 1 from datetime import datetime, timedelta 2 from flask import (Flask, render_template, flash, Markup, request, g, session, 3 url_for, redirect) 4 from flask_babel import gettext 5 from flask_assets import Environment 6 from flask_wtf.csrf import CSRFProtect, CSRFError 7 from jinja2 import evalcontextfilter 8 from os import path 9 from sqlalchemy.orm.exc import NoResultFound 10 11 import i18n 12 import template_filters 13 import version 14 15 from crypto_util import CryptoUtil 16 from db import db 17 from models import Source 18 from request_that_secures_file_uploads import RequestThatSecuresFileUploads 19 from source_app import main, info, api 20 from source_app.decorators import ignore_static 21 from source_app.utils import logged_in 22 from store import Storage 23 from worker import rq_worker_queue 24 25 import typing 26 # https://www.python.org/dev/peps/pep-0484/#runtime-or-type-checking 27 if typing.TYPE_CHECKING: 28 # flake8 can not understand type annotation yet. 29 # That is why all type annotation relative import 30 # statements has to be marked as noqa. 31 # http://flake8.pycqa.org/en/latest/user/error-codes.html?highlight=f401 32 from sdconfig import SDConfig # noqa: F401 33 34 35 def create_app(config): 36 # type: (SDConfig) -> Flask 37 app = Flask(__name__, 38 template_folder=config.SOURCE_TEMPLATES_DIR, 39 static_folder=path.join(config.SECUREDROP_ROOT, 'static')) 40 app.request_class = RequestThatSecuresFileUploads 41 app.config.from_object(config.SourceInterfaceFlaskConfig) # type: ignore 42 app.sdconfig = config 43 44 # The default CSRF token expiration is 1 hour. Since large uploads can 45 # take longer than an hour over Tor, we increase the valid window to 24h. 46 app.config['WTF_CSRF_TIME_LIMIT'] = 60 * 60 * 24 47 CSRFProtect(app) 48 49 if config.DATABASE_ENGINE == "sqlite": 50 db_uri = (config.DATABASE_ENGINE + ":///" + 51 config.DATABASE_FILE) 52 else: 53 db_uri = ( 54 config.DATABASE_ENGINE + '://' + 55 config.DATABASE_USERNAME + ':' + 56 config.DATABASE_PASSWORD + '@' + 57 config.DATABASE_HOST + '/' + 58 config.DATABASE_NAME 59 ) 60 app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False 61 app.config['SQLALCHEMY_DATABASE_URI'] = db_uri 62 db.init_app(app) 63 64 app.storage = Storage(config.STORE_DIR, 65 config.TEMP_DIR, 66 config.JOURNALIST_KEY) 67 68 app.crypto_util = CryptoUtil( 69 scrypt_params=config.SCRYPT_PARAMS, 70 scrypt_id_pepper=config.SCRYPT_ID_PEPPER, 71 scrypt_gpg_pepper=config.SCRYPT_GPG_PEPPER, 72 securedrop_root=config.SECUREDROP_ROOT, 73 word_list=config.WORD_LIST, 74 nouns_file=config.NOUNS, 75 adjectives_file=config.ADJECTIVES, 76 gpg_key_dir=config.GPG_KEY_DIR, 77 ) 78 79 app.config['RQ_WORKER_NAME'] = config.RQ_WORKER_NAME 80 rq_worker_queue.init_app(app) 81 82 @app.errorhandler(CSRFError) 83 def handle_csrf_error(e): 84 msg = render_template('session_timeout.html') 85 session.clear() 86 flash(Markup(msg), "important") 87 return redirect(url_for('main.index')) 88 89 assets = Environment(app) 90 app.config['assets'] = assets 91 92 i18n.setup_app(config, app) 93 94 app.jinja_env.trim_blocks = True 95 app.jinja_env.lstrip_blocks = True 96 app.jinja_env.globals['version'] = version.__version__ 97 if getattr(config, 'CUSTOM_HEADER_IMAGE', None): 98 app.jinja_env.globals['header_image'] = \ 99 config.CUSTOM_HEADER_IMAGE # type: ignore 100 app.jinja_env.globals['use_custom_header_image'] = True 101 else: 102 app.jinja_env.globals['header_image'] = 'logo.png' 103 app.jinja_env.globals['use_custom_header_image'] = False 104 105 app.jinja_env.filters['rel_datetime_format'] = \ 106 template_filters.rel_datetime_format 107 app.jinja_env.filters['nl2br'] = evalcontextfilter(template_filters.nl2br) 108 app.jinja_env.filters['filesizeformat'] = template_filters.filesizeformat 109 110 for module in [main, info, api]: 111 app.register_blueprint(module.make_blueprint(config)) # type: ignore 112 113 @app.before_request 114 @ignore_static 115 def setup_i18n(): 116 """Store i18n-related values in Flask's special g object""" 117 g.locale = i18n.get_locale(config) 118 g.text_direction = i18n.get_text_direction(g.locale) 119 g.html_lang = i18n.locale_to_rfc_5646(g.locale) 120 g.locales = i18n.get_locale2name() 121 122 @app.before_request 123 @ignore_static 124 def check_tor2web(): 125 # ignore_static here so we only flash a single message warning 126 # about Tor2Web, corresponding to the initial page load. 127 if 'X-tor2web' in request.headers: 128 flash(Markup(gettext( 129 '<strong>WARNING:&nbsp;</strong> ' 130 'You appear to be using Tor2Web. ' 131 'This <strong>&nbsp;does not&nbsp;</strong> ' 132 'provide anonymity. ' 133 '<a href="{url}">Why is this dangerous?</a>') 134 .format(url=url_for('info.tor2web_warning'))), 135 "banner-warning") 136 137 @app.before_request 138 @ignore_static 139 def setup_g(): 140 """Store commonly used values in Flask's special g object""" 141 142 if 'expires' in session and datetime.utcnow() >= session['expires']: 143 msg = render_template('session_timeout.html') 144 145 # clear the session after we render the message so it's localized 146 session.clear() 147 148 flash(Markup(msg), "important") 149 150 session['expires'] = datetime.utcnow() + \ 151 timedelta(minutes=getattr(config, 152 'SESSION_EXPIRATION_MINUTES', 153 120)) 154 155 # ignore_static here because `crypto_util.hash_codename` is scrypt 156 # (very time consuming), and we don't need to waste time running if 157 # we're just serving a static resource that won't need to access 158 # these common values. 159 if logged_in(): 160 g.codename = session['codename'] 161 g.filesystem_id = app.crypto_util.hash_codename(g.codename) 162 try: 163 g.source = Source.query \ 164 .filter(Source.filesystem_id == g.filesystem_id) \ 165 .one() 166 except NoResultFound as e: 167 app.logger.error( 168 "Found no Sources when one was expected: %s" % 169 (e,)) 170 del session['logged_in'] 171 del session['codename'] 172 return redirect(url_for('main.index')) 173 g.loc = app.storage.path(g.filesystem_id) 174 175 @app.errorhandler(404) 176 def page_not_found(error): 177 return render_template('notfound.html'), 404 178 179 @app.errorhandler(500) 180 def internal_error(error): 181 return render_template('error.html'), 500 182 183 return app 184 [end of securedrop/source_app/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/securedrop/source_app/__init__.py b/securedrop/source_app/__init__.py --- a/securedrop/source_app/__init__.py +++ b/securedrop/source_app/__init__.py @@ -145,7 +145,9 @@ # clear the session after we render the message so it's localized session.clear() + # Redirect to index with flashed message flash(Markup(msg), "important") + return redirect(url_for('main.index')) session['expires'] = datetime.utcnow() + \ timedelta(minutes=getattr(config,
{"golden_diff": "diff --git a/securedrop/source_app/__init__.py b/securedrop/source_app/__init__.py\n--- a/securedrop/source_app/__init__.py\n+++ b/securedrop/source_app/__init__.py\n@@ -145,7 +145,9 @@\n # clear the session after we render the message so it's localized\n session.clear()\n \n+ # Redirect to index with flashed message\n flash(Markup(msg), \"important\")\n+ return redirect(url_for('main.index'))\n \n session['expires'] = datetime.utcnow() + \\\n timedelta(minutes=getattr(config,\n", "issue": "Expired Source Interface session causes server error on new session\n## Description\r\n\r\nCreating a new codename after a session has expired causes a server error.\r\n\r\n## Steps to Reproduce\r\n\r\n1. Visit the codename generation page (`/generate`) but do not press 'Submit Documents'\r\n2. Wait for the session to expire\r\n3. Press 'Submit Documents'\r\n\r\nDiff to force session expiry at 1 minute:\r\n\r\n```diff\r\ndiff --git a/securedrop/source_app/__init__.py b/securedrop/source_app/__init__.py\r\nindex d2679b737..60df38ece 100644\r\n--- a/securedrop/source_app/__init__.py\r\n+++ b/securedrop/source_app/__init__.py\r\n@@ -148,9 +148,7 @@ def create_app(config):\r\n flash(Markup(msg), \"important\")\r\n \r\n session['expires'] = datetime.utcnow() + \\\r\n- timedelta(minutes=getattr(config,\r\n- 'SESSION_EXPIRATION_MINUTES',\r\n- 120))\r\n+ timedelta(minutes=1)\r\n \r\n # ignore_static here because `crypto_util.hash_codename` is scrypt\r\n # (very time consuming), and we don't need to waste time running if\r\n```\r\n\r\n## Expected Behavior\r\n\r\nA new codename is generated or an appropriate error message is shown.\r\n\r\n## Actual Behavior\r\n\r\nThe Source Interface displays:\r\n\r\n> Server error\r\n> Sorry, the website encountered an error and was unable to complete your request.\n", "before_files": [{"content": "from datetime import datetime, timedelta\nfrom flask import (Flask, render_template, flash, Markup, request, g, session,\n url_for, redirect)\nfrom flask_babel import gettext\nfrom flask_assets import Environment\nfrom flask_wtf.csrf import CSRFProtect, CSRFError\nfrom jinja2 import evalcontextfilter\nfrom os import path\nfrom sqlalchemy.orm.exc import NoResultFound\n\nimport i18n\nimport template_filters\nimport version\n\nfrom crypto_util import CryptoUtil\nfrom db import db\nfrom models import Source\nfrom request_that_secures_file_uploads import RequestThatSecuresFileUploads\nfrom source_app import main, info, api\nfrom source_app.decorators import ignore_static\nfrom source_app.utils import logged_in\nfrom store import Storage\nfrom worker import rq_worker_queue\n\nimport typing\n# https://www.python.org/dev/peps/pep-0484/#runtime-or-type-checking\nif typing.TYPE_CHECKING:\n # flake8 can not understand type annotation yet.\n # That is why all type annotation relative import\n # statements has to be marked as noqa.\n # http://flake8.pycqa.org/en/latest/user/error-codes.html?highlight=f401\n from sdconfig import SDConfig # noqa: F401\n\n\ndef create_app(config):\n # type: (SDConfig) -> Flask\n app = Flask(__name__,\n template_folder=config.SOURCE_TEMPLATES_DIR,\n static_folder=path.join(config.SECUREDROP_ROOT, 'static'))\n app.request_class = RequestThatSecuresFileUploads\n app.config.from_object(config.SourceInterfaceFlaskConfig) # type: ignore\n app.sdconfig = config\n\n # The default CSRF token expiration is 1 hour. Since large uploads can\n # take longer than an hour over Tor, we increase the valid window to 24h.\n app.config['WTF_CSRF_TIME_LIMIT'] = 60 * 60 * 24\n CSRFProtect(app)\n\n if config.DATABASE_ENGINE == \"sqlite\":\n db_uri = (config.DATABASE_ENGINE + \":///\" +\n config.DATABASE_FILE)\n else:\n db_uri = (\n config.DATABASE_ENGINE + '://' +\n config.DATABASE_USERNAME + ':' +\n config.DATABASE_PASSWORD + '@' +\n config.DATABASE_HOST + '/' +\n config.DATABASE_NAME\n )\n app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False\n app.config['SQLALCHEMY_DATABASE_URI'] = db_uri\n db.init_app(app)\n\n app.storage = Storage(config.STORE_DIR,\n config.TEMP_DIR,\n config.JOURNALIST_KEY)\n\n app.crypto_util = CryptoUtil(\n scrypt_params=config.SCRYPT_PARAMS,\n scrypt_id_pepper=config.SCRYPT_ID_PEPPER,\n scrypt_gpg_pepper=config.SCRYPT_GPG_PEPPER,\n securedrop_root=config.SECUREDROP_ROOT,\n word_list=config.WORD_LIST,\n nouns_file=config.NOUNS,\n adjectives_file=config.ADJECTIVES,\n gpg_key_dir=config.GPG_KEY_DIR,\n )\n\n app.config['RQ_WORKER_NAME'] = config.RQ_WORKER_NAME\n rq_worker_queue.init_app(app)\n\n @app.errorhandler(CSRFError)\n def handle_csrf_error(e):\n msg = render_template('session_timeout.html')\n session.clear()\n flash(Markup(msg), \"important\")\n return redirect(url_for('main.index'))\n\n assets = Environment(app)\n app.config['assets'] = assets\n\n i18n.setup_app(config, app)\n\n app.jinja_env.trim_blocks = True\n app.jinja_env.lstrip_blocks = True\n app.jinja_env.globals['version'] = version.__version__\n if getattr(config, 'CUSTOM_HEADER_IMAGE', None):\n app.jinja_env.globals['header_image'] = \\\n config.CUSTOM_HEADER_IMAGE # type: ignore\n app.jinja_env.globals['use_custom_header_image'] = True\n else:\n app.jinja_env.globals['header_image'] = 'logo.png'\n app.jinja_env.globals['use_custom_header_image'] = False\n\n app.jinja_env.filters['rel_datetime_format'] = \\\n template_filters.rel_datetime_format\n app.jinja_env.filters['nl2br'] = evalcontextfilter(template_filters.nl2br)\n app.jinja_env.filters['filesizeformat'] = template_filters.filesizeformat\n\n for module in [main, info, api]:\n app.register_blueprint(module.make_blueprint(config)) # type: ignore\n\n @app.before_request\n @ignore_static\n def setup_i18n():\n \"\"\"Store i18n-related values in Flask's special g object\"\"\"\n g.locale = i18n.get_locale(config)\n g.text_direction = i18n.get_text_direction(g.locale)\n g.html_lang = i18n.locale_to_rfc_5646(g.locale)\n g.locales = i18n.get_locale2name()\n\n @app.before_request\n @ignore_static\n def check_tor2web():\n # ignore_static here so we only flash a single message warning\n # about Tor2Web, corresponding to the initial page load.\n if 'X-tor2web' in request.headers:\n flash(Markup(gettext(\n '<strong>WARNING:&nbsp;</strong> '\n 'You appear to be using Tor2Web. '\n 'This <strong>&nbsp;does not&nbsp;</strong> '\n 'provide anonymity. '\n '<a href=\"{url}\">Why is this dangerous?</a>')\n .format(url=url_for('info.tor2web_warning'))),\n \"banner-warning\")\n\n @app.before_request\n @ignore_static\n def setup_g():\n \"\"\"Store commonly used values in Flask's special g object\"\"\"\n\n if 'expires' in session and datetime.utcnow() >= session['expires']:\n msg = render_template('session_timeout.html')\n\n # clear the session after we render the message so it's localized\n session.clear()\n\n flash(Markup(msg), \"important\")\n\n session['expires'] = datetime.utcnow() + \\\n timedelta(minutes=getattr(config,\n 'SESSION_EXPIRATION_MINUTES',\n 120))\n\n # ignore_static here because `crypto_util.hash_codename` is scrypt\n # (very time consuming), and we don't need to waste time running if\n # we're just serving a static resource that won't need to access\n # these common values.\n if logged_in():\n g.codename = session['codename']\n g.filesystem_id = app.crypto_util.hash_codename(g.codename)\n try:\n g.source = Source.query \\\n .filter(Source.filesystem_id == g.filesystem_id) \\\n .one()\n except NoResultFound as e:\n app.logger.error(\n \"Found no Sources when one was expected: %s\" %\n (e,))\n del session['logged_in']\n del session['codename']\n return redirect(url_for('main.index'))\n g.loc = app.storage.path(g.filesystem_id)\n\n @app.errorhandler(404)\n def page_not_found(error):\n return render_template('notfound.html'), 404\n\n @app.errorhandler(500)\n def internal_error(error):\n return render_template('error.html'), 500\n\n return app\n", "path": "securedrop/source_app/__init__.py"}]}
2,899
131
gh_patches_debug_10249
rasdani/github-patches
git_diff
tensorflow__addons-248
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Build fails because of optimizers Some tests for optimizers fail because of this commit https://github.com/tensorflow/tensorflow/commit/2cb745ef1e0b4082a618c81274fca39be0cb4fc6. It can be fixed by replacing `self._get_hyper('epsilon')` with `self.epsilon` when trying to access the value of epsilon. For more details, please refer to https://github.com/tensorflow/tensorflow/commit/2cb745ef1e0b4082a618c81274fca39be0cb4fc6. </issue> <code> [start of tensorflow_addons/optimizers/lazy_adam.py] 1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 # ============================================================================== 15 """Variant of the Adam optimizer that handles sparse updates more efficiently. 16 17 Compared with the original Adam optimizer, the one in this file can 18 provide a large improvement in model training throughput for some 19 applications. However, it provides slightly different semantics than the 20 original Adam algorithm, and may lead to different empirical results. 21 """ 22 23 from __future__ import absolute_import 24 from __future__ import division 25 from __future__ import print_function 26 27 import tensorflow as tf 28 from tensorflow_addons.utils import keras_utils 29 30 31 @keras_utils.register_keras_custom_object 32 class LazyAdam(tf.keras.optimizers.Adam): 33 """Variant of the Adam optimizer that handles sparse updates more 34 efficiently. 35 36 The original Adam algorithm maintains two moving-average accumulators for 37 each trainable variable; the accumulators are updated at every step. 38 This class provides lazier handling of gradient updates for sparse 39 variables. It only updates moving-average accumulators for sparse variable 40 indices that appear in the current batch, rather than updating the 41 accumulators for all indices. Compared with the original Adam optimizer, 42 it can provide large improvements in model training throughput for some 43 applications. However, it provides slightly different semantics than the 44 original Adam algorithm, and may lead to different empirical results. 45 46 Note, amsgrad is currently not supported and the argument can only be 47 False. 48 """ 49 50 def _resource_apply_sparse(self, grad, var, indices): 51 var_dtype = var.dtype.base_dtype 52 lr_t = self._decayed_lr(var_dtype) 53 beta_1_t = self._get_hyper('beta_1', var_dtype) 54 beta_2_t = self._get_hyper('beta_2', var_dtype) 55 local_step = tf.cast(self.iterations + 1, var_dtype) 56 beta_1_power = tf.math.pow(beta_1_t, local_step) 57 beta_2_power = tf.math.pow(beta_2_t, local_step) 58 epsilon_t = self._get_hyper('epsilon', var_dtype) 59 lr = (lr_t * tf.math.sqrt(1 - beta_2_power) / (1 - beta_1_power)) 60 61 # \\(m := beta1 * m + (1 - beta1) * g_t\\) 62 m = self.get_slot(var, "m") 63 m_t_slice = beta_1_t * tf.gather(m, indices) + (1 - beta_1_t) * grad 64 65 m_update_kwargs = { 66 'resource': m.handle, 67 'indices': indices, 68 'updates': m_t_slice 69 } 70 m_update_op = tf.raw_ops.ResourceScatterUpdate(**m_update_kwargs) 71 72 # \\(v := beta2 * v + (1 - beta2) * (g_t * g_t)\\) 73 v = self.get_slot(var, "v") 74 v_t_slice = (beta_2_t * tf.gather(v, indices) + 75 (1 - beta_2_t) * tf.math.square(grad)) 76 77 v_update_kwargs = { 78 'resource': v.handle, 79 'indices': indices, 80 'updates': v_t_slice 81 } 82 v_update_op = tf.raw_ops.ResourceScatterUpdate(**v_update_kwargs) 83 84 # \\(variable -= learning_rate * m_t / (epsilon_t + sqrt(v_t))\\) 85 var_slice = lr * m_t_slice / (tf.math.sqrt(v_t_slice) + epsilon_t) 86 87 var_update_kwargs = { 88 'resource': var.handle, 89 'indices': indices, 90 'updates': var_slice 91 } 92 var_update_op = tf.raw_ops.ResourceScatterSub(**var_update_kwargs) 93 94 return tf.group(*[var_update_op, m_update_op, v_update_op]) 95 [end of tensorflow_addons/optimizers/lazy_adam.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/tensorflow_addons/optimizers/lazy_adam.py b/tensorflow_addons/optimizers/lazy_adam.py --- a/tensorflow_addons/optimizers/lazy_adam.py +++ b/tensorflow_addons/optimizers/lazy_adam.py @@ -55,7 +55,7 @@ local_step = tf.cast(self.iterations + 1, var_dtype) beta_1_power = tf.math.pow(beta_1_t, local_step) beta_2_power = tf.math.pow(beta_2_t, local_step) - epsilon_t = self._get_hyper('epsilon', var_dtype) + epsilon_t = tf.convert_to_tensor(self.epsilon, var_dtype) lr = (lr_t * tf.math.sqrt(1 - beta_2_power) / (1 - beta_1_power)) # \\(m := beta1 * m + (1 - beta1) * g_t\\)
{"golden_diff": "diff --git a/tensorflow_addons/optimizers/lazy_adam.py b/tensorflow_addons/optimizers/lazy_adam.py\n--- a/tensorflow_addons/optimizers/lazy_adam.py\n+++ b/tensorflow_addons/optimizers/lazy_adam.py\n@@ -55,7 +55,7 @@\n local_step = tf.cast(self.iterations + 1, var_dtype)\n beta_1_power = tf.math.pow(beta_1_t, local_step)\n beta_2_power = tf.math.pow(beta_2_t, local_step)\n- epsilon_t = self._get_hyper('epsilon', var_dtype)\n+ epsilon_t = tf.convert_to_tensor(self.epsilon, var_dtype)\n lr = (lr_t * tf.math.sqrt(1 - beta_2_power) / (1 - beta_1_power))\n \n # \\\\(m := beta1 * m + (1 - beta1) * g_t\\\\)\n", "issue": "Build fails because of optimizers\nSome tests for optimizers fail because of this commit https://github.com/tensorflow/tensorflow/commit/2cb745ef1e0b4082a618c81274fca39be0cb4fc6. It can be fixed by replacing `self._get_hyper('epsilon')` with `self.epsilon` when trying to access the value of epsilon. For more details, please refer to https://github.com/tensorflow/tensorflow/commit/2cb745ef1e0b4082a618c81274fca39be0cb4fc6.\r\n\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Variant of the Adam optimizer that handles sparse updates more efficiently.\n\nCompared with the original Adam optimizer, the one in this file can\nprovide a large improvement in model training throughput for some\napplications. However, it provides slightly different semantics than the\noriginal Adam algorithm, and may lead to different empirical results.\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport tensorflow as tf\nfrom tensorflow_addons.utils import keras_utils\n\n\n@keras_utils.register_keras_custom_object\nclass LazyAdam(tf.keras.optimizers.Adam):\n \"\"\"Variant of the Adam optimizer that handles sparse updates more\n efficiently.\n\n The original Adam algorithm maintains two moving-average accumulators for\n each trainable variable; the accumulators are updated at every step.\n This class provides lazier handling of gradient updates for sparse\n variables. It only updates moving-average accumulators for sparse variable\n indices that appear in the current batch, rather than updating the\n accumulators for all indices. Compared with the original Adam optimizer,\n it can provide large improvements in model training throughput for some\n applications. However, it provides slightly different semantics than the\n original Adam algorithm, and may lead to different empirical results.\n\n Note, amsgrad is currently not supported and the argument can only be\n False.\n \"\"\"\n\n def _resource_apply_sparse(self, grad, var, indices):\n var_dtype = var.dtype.base_dtype\n lr_t = self._decayed_lr(var_dtype)\n beta_1_t = self._get_hyper('beta_1', var_dtype)\n beta_2_t = self._get_hyper('beta_2', var_dtype)\n local_step = tf.cast(self.iterations + 1, var_dtype)\n beta_1_power = tf.math.pow(beta_1_t, local_step)\n beta_2_power = tf.math.pow(beta_2_t, local_step)\n epsilon_t = self._get_hyper('epsilon', var_dtype)\n lr = (lr_t * tf.math.sqrt(1 - beta_2_power) / (1 - beta_1_power))\n\n # \\\\(m := beta1 * m + (1 - beta1) * g_t\\\\)\n m = self.get_slot(var, \"m\")\n m_t_slice = beta_1_t * tf.gather(m, indices) + (1 - beta_1_t) * grad\n\n m_update_kwargs = {\n 'resource': m.handle,\n 'indices': indices,\n 'updates': m_t_slice\n }\n m_update_op = tf.raw_ops.ResourceScatterUpdate(**m_update_kwargs)\n\n # \\\\(v := beta2 * v + (1 - beta2) * (g_t * g_t)\\\\)\n v = self.get_slot(var, \"v\")\n v_t_slice = (beta_2_t * tf.gather(v, indices) +\n (1 - beta_2_t) * tf.math.square(grad))\n\n v_update_kwargs = {\n 'resource': v.handle,\n 'indices': indices,\n 'updates': v_t_slice\n }\n v_update_op = tf.raw_ops.ResourceScatterUpdate(**v_update_kwargs)\n\n # \\\\(variable -= learning_rate * m_t / (epsilon_t + sqrt(v_t))\\\\)\n var_slice = lr * m_t_slice / (tf.math.sqrt(v_t_slice) + epsilon_t)\n\n var_update_kwargs = {\n 'resource': var.handle,\n 'indices': indices,\n 'updates': var_slice\n }\n var_update_op = tf.raw_ops.ResourceScatterSub(**var_update_kwargs)\n\n return tf.group(*[var_update_op, m_update_op, v_update_op])\n", "path": "tensorflow_addons/optimizers/lazy_adam.py"}]}
1,806
206
gh_patches_debug_28701
rasdani/github-patches
git_diff
PaddlePaddle__PaddleDetection-33
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> 运行检测模型卷积通道剪裁示例时报错 当我运行python compress.py \ -s yolov3_mobilenet_v1_slim.yaml \ -c ../../configs/yolov3_mobilenet_v1.yml \ -o max_iters=20 \ num_classes=4 \ YoloTrainFeed.batch_size=32 \ pretrain_weights=/home/aistudio/PaddleDetection/output/yolov3_mobilenet_v1/best_model \ -d "/home/aistudio/work/coco" 这个命令时一直报错以下错误,经定位是eval_utils.py将gt_box的值转换成im_id异常,异常信息如下 loading annotations into memory... Done (t=0.01s) creating index... index created! [ 16. 115. 218. 374.] Traceback (most recent call last): File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/contrib/slim/core/compressor.py", line 593, in run self._eval(context) File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/contrib/slim/core/compressor.py", line 542, in _eval func(self.eval_graph.program, self.scope)) File "compress.py", line 207, in eval_func FLAGS.output_eval) File "../../ppdet/utils/eval_utils.py", line 205, in eval_results is_bbox_normalized=is_bbox_normalized) File "../../ppdet/utils/coco_eval.py", line 86, in bbox_eval results, clsid2catid, is_bbox_normalized=is_bbox_normalized) File "../../ppdet/utils/coco_eval.py", line 215, in bbox2out im_id = int(im_ids[i][0]) TypeError: only size-1 arrays can be converted to Python scalars 2019-11-20 20:32:00,491-ERROR: None 2019-11-20 20:32:00,491-ERROR: None 2019-11-20 20:32:01,633-INFO: epoch:1; batch_id:0; odict_keys(['loss', 'lr']) = [117.678, 0.0] 后面通过排查发现compress.py 将gt_box的值当成im_id出入导致报错, 在第79行新增outs.append(data['im_id']) 解决问题 ![QQ图片20191120204048](https://user-images.githubusercontent.com/56528375/69239822-0e5b6900-0bd6-11ea-9fa2-ee8d95eb183d.png) </issue> <code> [start of slim/prune/compress.py] 1 # Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 from __future__ import absolute_import 16 from __future__ import division 17 from __future__ import print_function 18 19 import os 20 import time 21 import multiprocessing 22 import numpy as np 23 import sys 24 sys.path.append("../../") 25 from paddle.fluid.contrib.slim import Compressor 26 27 28 def set_paddle_flags(**kwargs): 29 for key, value in kwargs.items(): 30 if os.environ.get(key, None) is None: 31 os.environ[key] = str(value) 32 33 34 # NOTE(paddle-dev): All of these flags should be set before 35 # `import paddle`. Otherwise, it would not take any effect. 36 set_paddle_flags( 37 FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory 38 ) 39 40 from paddle import fluid 41 from ppdet.core.workspace import load_config, merge_config, create 42 from ppdet.data.data_feed import create_reader 43 from ppdet.utils.eval_utils import parse_fetches, eval_results 44 from ppdet.utils.cli import ArgsParser 45 from ppdet.utils.check import check_gpu 46 import ppdet.utils.checkpoint as checkpoint 47 from ppdet.modeling.model_input import create_feed 48 49 import logging 50 FORMAT = '%(asctime)s-%(levelname)s: %(message)s' 51 logging.basicConfig(level=logging.INFO, format=FORMAT) 52 logger = logging.getLogger(__name__) 53 54 55 def eval_run(exe, compile_program, reader, keys, values, cls, test_feed): 56 """ 57 Run evaluation program, return program outputs. 58 """ 59 iter_id = 0 60 results = [] 61 if len(cls) != 0: 62 values = [] 63 for i in range(len(cls)): 64 _, accum_map = cls[i].get_map_var() 65 cls[i].reset(exe) 66 values.append(accum_map) 67 68 images_num = 0 69 start_time = time.time() 70 has_bbox = 'bbox' in keys 71 for data in reader(): 72 data = test_feed.feed(data) 73 feed_data = {'image': data['image'], 'im_size': data['im_size']} 74 outs = exe.run(compile_program, 75 feed=feed_data, 76 fetch_list=[values[0]], 77 return_numpy=False) 78 outs.append(data['gt_box']) 79 outs.append(data['gt_label']) 80 outs.append(data['is_difficult']) 81 res = { 82 k: (np.array(v), v.recursive_sequence_lengths()) 83 for k, v in zip(keys, outs) 84 } 85 results.append(res) 86 if iter_id % 100 == 0: 87 logger.info('Test iter {}'.format(iter_id)) 88 iter_id += 1 89 images_num += len(res['bbox'][1][0]) if has_bbox else 1 90 logger.info('Test finish iter {}'.format(iter_id)) 91 92 end_time = time.time() 93 fps = images_num / (end_time - start_time) 94 if has_bbox: 95 logger.info('Total number of images: {}, inference time: {} fps.'. 96 format(images_num, fps)) 97 else: 98 logger.info('Total iteration: {}, inference time: {} batch/s.'.format( 99 images_num, fps)) 100 101 return results 102 103 104 def main(): 105 cfg = load_config(FLAGS.config) 106 if 'architecture' in cfg: 107 main_arch = cfg.architecture 108 else: 109 raise ValueError("'architecture' not specified in config file.") 110 111 merge_config(FLAGS.opt) 112 if 'log_iter' not in cfg: 113 cfg.log_iter = 20 114 115 # check if set use_gpu=True in paddlepaddle cpu version 116 check_gpu(cfg.use_gpu) 117 118 if cfg.use_gpu: 119 devices_num = fluid.core.get_cuda_device_count() 120 else: 121 devices_num = int( 122 os.environ.get('CPU_NUM', multiprocessing.cpu_count())) 123 124 if 'train_feed' not in cfg: 125 train_feed = create(main_arch + 'TrainFeed') 126 else: 127 train_feed = create(cfg.train_feed) 128 129 if 'eval_feed' not in cfg: 130 eval_feed = create(main_arch + 'EvalFeed') 131 else: 132 eval_feed = create(cfg.eval_feed) 133 134 place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace() 135 exe = fluid.Executor(place) 136 137 lr_builder = create('LearningRate') 138 optim_builder = create('OptimizerBuilder') 139 140 # build program 141 startup_prog = fluid.Program() 142 train_prog = fluid.Program() 143 with fluid.program_guard(train_prog, startup_prog): 144 with fluid.unique_name.guard(): 145 model = create(main_arch) 146 _, feed_vars = create_feed(train_feed, True) 147 train_fetches = model.train(feed_vars) 148 loss = train_fetches['loss'] 149 lr = lr_builder() 150 optimizer = optim_builder(lr) 151 optimizer.minimize(loss) 152 153 train_reader = create_reader(train_feed, cfg.max_iters, FLAGS.dataset_dir) 154 155 # parse train fetches 156 train_keys, train_values, _ = parse_fetches(train_fetches) 157 train_keys.append("lr") 158 train_values.append(lr.name) 159 160 train_fetch_list = [] 161 for k, v in zip(train_keys, train_values): 162 train_fetch_list.append((k, v)) 163 164 eval_prog = fluid.Program() 165 with fluid.program_guard(eval_prog, startup_prog): 166 with fluid.unique_name.guard(): 167 model = create(main_arch) 168 _, test_feed_vars = create_feed(eval_feed, True) 169 fetches = model.eval(test_feed_vars) 170 171 eval_prog = eval_prog.clone(True) 172 173 eval_reader = create_reader(eval_feed, args_path=FLAGS.dataset_dir) 174 test_data_feed = fluid.DataFeeder(test_feed_vars.values(), place) 175 176 # parse eval fetches 177 extra_keys = [] 178 if cfg.metric == 'COCO': 179 extra_keys = ['im_info', 'im_id', 'im_shape'] 180 if cfg.metric == 'VOC': 181 extra_keys = ['gt_box', 'gt_label', 'is_difficult'] 182 eval_keys, eval_values, eval_cls = parse_fetches(fetches, eval_prog, 183 extra_keys) 184 eval_fetch_list = [] 185 for k, v in zip(eval_keys, eval_values): 186 eval_fetch_list.append((k, v)) 187 188 exe.run(startup_prog) 189 checkpoint.load_params(exe, train_prog, cfg.pretrain_weights) 190 191 best_box_ap_list = [] 192 193 def eval_func(program, scope): 194 195 #place = fluid.CPUPlace() 196 #exe = fluid.Executor(place) 197 results = eval_run(exe, program, eval_reader, eval_keys, eval_values, 198 eval_cls, test_data_feed) 199 200 resolution = None 201 if 'mask' in results[0]: 202 resolution = model.mask_head.resolution 203 box_ap_stats = eval_results(results, eval_feed, cfg.metric, 204 cfg.num_classes, resolution, False, 205 FLAGS.output_eval) 206 if len(best_box_ap_list) == 0: 207 best_box_ap_list.append(box_ap_stats[0]) 208 elif box_ap_stats[0] > best_box_ap_list[0]: 209 best_box_ap_list[0] = box_ap_stats[0] 210 logger.info("Best test box ap: {}".format(best_box_ap_list[0])) 211 return best_box_ap_list[0] 212 213 test_feed = [('image', test_feed_vars['image'].name), 214 ('im_size', test_feed_vars['im_size'].name)] 215 216 com = Compressor( 217 place, 218 fluid.global_scope(), 219 train_prog, 220 train_reader=train_reader, 221 train_feed_list=[(key, value.name) for key, value in feed_vars.items()], 222 train_fetch_list=train_fetch_list, 223 eval_program=eval_prog, 224 eval_reader=eval_reader, 225 eval_feed_list=test_feed, 226 eval_func={'map': eval_func}, 227 eval_fetch_list=[eval_fetch_list[0]], 228 save_eval_model=True, 229 prune_infer_model=[["image", "im_size"], ["multiclass_nms_0.tmp_0"]], 230 train_optimizer=None) 231 com.config(FLAGS.slim_file) 232 com.run() 233 234 235 if __name__ == '__main__': 236 parser = ArgsParser() 237 parser.add_argument( 238 "-s", 239 "--slim_file", 240 default=None, 241 type=str, 242 help="Config file of PaddleSlim.") 243 parser.add_argument( 244 "--output_eval", 245 default=None, 246 type=str, 247 help="Evaluation directory, default is current directory.") 248 parser.add_argument( 249 "-d", 250 "--dataset_dir", 251 default=None, 252 type=str, 253 help="Dataset path, same as DataFeed.dataset.dataset_dir") 254 FLAGS = parser.parse_args() 255 main() 256 [end of slim/prune/compress.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/slim/prune/compress.py b/slim/prune/compress.py --- a/slim/prune/compress.py +++ b/slim/prune/compress.py @@ -52,7 +52,7 @@ logger = logging.getLogger(__name__) -def eval_run(exe, compile_program, reader, keys, values, cls, test_feed): +def eval_run(exe, compile_program, reader, keys, values, cls, test_feed, cfg): """ Run evaluation program, return program outputs. """ @@ -75,9 +75,16 @@ feed=feed_data, fetch_list=[values[0]], return_numpy=False) - outs.append(data['gt_box']) - outs.append(data['gt_label']) - outs.append(data['is_difficult']) + + if cfg.metric == 'VOC': + outs.append(data['gt_box']) + outs.append(data['gt_label']) + outs.append(data['is_difficult']) + elif cfg.metric == 'COCO': + outs.append(data['im_info']) + outs.append(data['im_id']) + outs.append(data['im_shape']) + res = { k: (np.array(v), v.recursive_sequence_lengths()) for k, v in zip(keys, outs) @@ -195,7 +202,7 @@ #place = fluid.CPUPlace() #exe = fluid.Executor(place) results = eval_run(exe, program, eval_reader, eval_keys, eval_values, - eval_cls, test_data_feed) + eval_cls, test_data_feed, cfg) resolution = None if 'mask' in results[0]:
{"golden_diff": "diff --git a/slim/prune/compress.py b/slim/prune/compress.py\n--- a/slim/prune/compress.py\n+++ b/slim/prune/compress.py\n@@ -52,7 +52,7 @@\n logger = logging.getLogger(__name__)\n \n \n-def eval_run(exe, compile_program, reader, keys, values, cls, test_feed):\n+def eval_run(exe, compile_program, reader, keys, values, cls, test_feed, cfg):\n \"\"\"\n Run evaluation program, return program outputs.\n \"\"\"\n@@ -75,9 +75,16 @@\n feed=feed_data,\n fetch_list=[values[0]],\n return_numpy=False)\n- outs.append(data['gt_box'])\n- outs.append(data['gt_label'])\n- outs.append(data['is_difficult'])\n+\n+ if cfg.metric == 'VOC':\n+ outs.append(data['gt_box'])\n+ outs.append(data['gt_label'])\n+ outs.append(data['is_difficult'])\n+ elif cfg.metric == 'COCO':\n+ outs.append(data['im_info'])\n+ outs.append(data['im_id'])\n+ outs.append(data['im_shape'])\n+\n res = {\n k: (np.array(v), v.recursive_sequence_lengths())\n for k, v in zip(keys, outs)\n@@ -195,7 +202,7 @@\n #place = fluid.CPUPlace()\n #exe = fluid.Executor(place)\n results = eval_run(exe, program, eval_reader, eval_keys, eval_values,\n- eval_cls, test_data_feed)\n+ eval_cls, test_data_feed, cfg)\n \n resolution = None\n if 'mask' in results[0]:\n", "issue": "\u8fd0\u884c\u68c0\u6d4b\u6a21\u578b\u5377\u79ef\u901a\u9053\u526a\u88c1\u793a\u4f8b\u65f6\u62a5\u9519\n\u5f53\u6211\u8fd0\u884cpython compress.py \\\r\n -s yolov3_mobilenet_v1_slim.yaml \\\r\n -c ../../configs/yolov3_mobilenet_v1.yml \\\r\n -o max_iters=20 \\\r\n num_classes=4 \\\r\n YoloTrainFeed.batch_size=32 \\\r\n pretrain_weights=/home/aistudio/PaddleDetection/output/yolov3_mobilenet_v1/best_model \\\r\n -d \"/home/aistudio/work/coco\"\r\n\u8fd9\u4e2a\u547d\u4ee4\u65f6\u4e00\u76f4\u62a5\u9519\u4ee5\u4e0b\u9519\u8bef\uff0c\u7ecf\u5b9a\u4f4d\u662feval_utils.py\u5c06gt_box\u7684\u503c\u8f6c\u6362\u6210im_id\u5f02\u5e38\uff0c\u5f02\u5e38\u4fe1\u606f\u5982\u4e0b\r\n\r\n\r\nloading annotations into memory...\r\nDone (t=0.01s)\r\ncreating index...\r\nindex created!\r\n[ 16. 115. 218. 374.]\r\nTraceback (most recent call last):\r\n File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/contrib/slim/core/compressor.py\", line 593, in run\r\n self._eval(context)\r\n File \"/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/contrib/slim/core/compressor.py\", line 542, in _eval\r\n func(self.eval_graph.program, self.scope))\r\n File \"compress.py\", line 207, in eval_func\r\n FLAGS.output_eval)\r\n File \"../../ppdet/utils/eval_utils.py\", line 205, in eval_results\r\n is_bbox_normalized=is_bbox_normalized)\r\n File \"../../ppdet/utils/coco_eval.py\", line 86, in bbox_eval\r\n results, clsid2catid, is_bbox_normalized=is_bbox_normalized)\r\n File \"../../ppdet/utils/coco_eval.py\", line 215, in bbox2out\r\n im_id = int(im_ids[i][0])\r\nTypeError: only size-1 arrays can be converted to Python scalars\r\n2019-11-20 20:32:00,491-ERROR: None\r\n2019-11-20 20:32:00,491-ERROR: None\r\n2019-11-20 20:32:01,633-INFO: epoch:1; batch_id:0; odict_keys(['loss', 'lr']) = [117.678, 0.0]\r\n\r\n\r\n\u540e\u9762\u901a\u8fc7\u6392\u67e5\u53d1\u73b0compress.py \u5c06gt_box\u7684\u503c\u5f53\u6210im_id\u51fa\u5165\u5bfc\u81f4\u62a5\u9519\uff0c\r\n\u5728\u7b2c79\u884c\u65b0\u589eouts.append(data['im_id']) \u89e3\u51b3\u95ee\u9898\r\n\r\n![QQ\u56fe\u724720191120204048](https://user-images.githubusercontent.com/56528375/69239822-0e5b6900-0bd6-11ea-9fa2-ee8d95eb183d.png)\r\n\n", "before_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport time\nimport multiprocessing\nimport numpy as np\nimport sys\nsys.path.append(\"../../\")\nfrom paddle.fluid.contrib.slim import Compressor\n\n\ndef set_paddle_flags(**kwargs):\n for key, value in kwargs.items():\n if os.environ.get(key, None) is None:\n os.environ[key] = str(value)\n\n\n# NOTE(paddle-dev): All of these flags should be set before\n# `import paddle`. Otherwise, it would not take any effect.\nset_paddle_flags(\n FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory\n)\n\nfrom paddle import fluid\nfrom ppdet.core.workspace import load_config, merge_config, create\nfrom ppdet.data.data_feed import create_reader\nfrom ppdet.utils.eval_utils import parse_fetches, eval_results\nfrom ppdet.utils.cli import ArgsParser\nfrom ppdet.utils.check import check_gpu\nimport ppdet.utils.checkpoint as checkpoint\nfrom ppdet.modeling.model_input import create_feed\n\nimport logging\nFORMAT = '%(asctime)s-%(levelname)s: %(message)s'\nlogging.basicConfig(level=logging.INFO, format=FORMAT)\nlogger = logging.getLogger(__name__)\n\n\ndef eval_run(exe, compile_program, reader, keys, values, cls, test_feed):\n \"\"\"\n Run evaluation program, return program outputs.\n \"\"\"\n iter_id = 0\n results = []\n if len(cls) != 0:\n values = []\n for i in range(len(cls)):\n _, accum_map = cls[i].get_map_var()\n cls[i].reset(exe)\n values.append(accum_map)\n\n images_num = 0\n start_time = time.time()\n has_bbox = 'bbox' in keys\n for data in reader():\n data = test_feed.feed(data)\n feed_data = {'image': data['image'], 'im_size': data['im_size']}\n outs = exe.run(compile_program,\n feed=feed_data,\n fetch_list=[values[0]],\n return_numpy=False)\n outs.append(data['gt_box'])\n outs.append(data['gt_label'])\n outs.append(data['is_difficult'])\n res = {\n k: (np.array(v), v.recursive_sequence_lengths())\n for k, v in zip(keys, outs)\n }\n results.append(res)\n if iter_id % 100 == 0:\n logger.info('Test iter {}'.format(iter_id))\n iter_id += 1\n images_num += len(res['bbox'][1][0]) if has_bbox else 1\n logger.info('Test finish iter {}'.format(iter_id))\n\n end_time = time.time()\n fps = images_num / (end_time - start_time)\n if has_bbox:\n logger.info('Total number of images: {}, inference time: {} fps.'.\n format(images_num, fps))\n else:\n logger.info('Total iteration: {}, inference time: {} batch/s.'.format(\n images_num, fps))\n\n return results\n\n\ndef main():\n cfg = load_config(FLAGS.config)\n if 'architecture' in cfg:\n main_arch = cfg.architecture\n else:\n raise ValueError(\"'architecture' not specified in config file.\")\n\n merge_config(FLAGS.opt)\n if 'log_iter' not in cfg:\n cfg.log_iter = 20\n\n # check if set use_gpu=True in paddlepaddle cpu version\n check_gpu(cfg.use_gpu)\n\n if cfg.use_gpu:\n devices_num = fluid.core.get_cuda_device_count()\n else:\n devices_num = int(\n os.environ.get('CPU_NUM', multiprocessing.cpu_count()))\n\n if 'train_feed' not in cfg:\n train_feed = create(main_arch + 'TrainFeed')\n else:\n train_feed = create(cfg.train_feed)\n\n if 'eval_feed' not in cfg:\n eval_feed = create(main_arch + 'EvalFeed')\n else:\n eval_feed = create(cfg.eval_feed)\n\n place = fluid.CUDAPlace(0) if cfg.use_gpu else fluid.CPUPlace()\n exe = fluid.Executor(place)\n\n lr_builder = create('LearningRate')\n optim_builder = create('OptimizerBuilder')\n\n # build program\n startup_prog = fluid.Program()\n train_prog = fluid.Program()\n with fluid.program_guard(train_prog, startup_prog):\n with fluid.unique_name.guard():\n model = create(main_arch)\n _, feed_vars = create_feed(train_feed, True)\n train_fetches = model.train(feed_vars)\n loss = train_fetches['loss']\n lr = lr_builder()\n optimizer = optim_builder(lr)\n optimizer.minimize(loss)\n\n train_reader = create_reader(train_feed, cfg.max_iters, FLAGS.dataset_dir)\n\n # parse train fetches\n train_keys, train_values, _ = parse_fetches(train_fetches)\n train_keys.append(\"lr\")\n train_values.append(lr.name)\n\n train_fetch_list = []\n for k, v in zip(train_keys, train_values):\n train_fetch_list.append((k, v))\n\n eval_prog = fluid.Program()\n with fluid.program_guard(eval_prog, startup_prog):\n with fluid.unique_name.guard():\n model = create(main_arch)\n _, test_feed_vars = create_feed(eval_feed, True)\n fetches = model.eval(test_feed_vars)\n\n eval_prog = eval_prog.clone(True)\n\n eval_reader = create_reader(eval_feed, args_path=FLAGS.dataset_dir)\n test_data_feed = fluid.DataFeeder(test_feed_vars.values(), place)\n\n # parse eval fetches\n extra_keys = []\n if cfg.metric == 'COCO':\n extra_keys = ['im_info', 'im_id', 'im_shape']\n if cfg.metric == 'VOC':\n extra_keys = ['gt_box', 'gt_label', 'is_difficult']\n eval_keys, eval_values, eval_cls = parse_fetches(fetches, eval_prog,\n extra_keys)\n eval_fetch_list = []\n for k, v in zip(eval_keys, eval_values):\n eval_fetch_list.append((k, v))\n\n exe.run(startup_prog)\n checkpoint.load_params(exe, train_prog, cfg.pretrain_weights)\n\n best_box_ap_list = []\n\n def eval_func(program, scope):\n\n #place = fluid.CPUPlace()\n #exe = fluid.Executor(place)\n results = eval_run(exe, program, eval_reader, eval_keys, eval_values,\n eval_cls, test_data_feed)\n\n resolution = None\n if 'mask' in results[0]:\n resolution = model.mask_head.resolution\n box_ap_stats = eval_results(results, eval_feed, cfg.metric,\n cfg.num_classes, resolution, False,\n FLAGS.output_eval)\n if len(best_box_ap_list) == 0:\n best_box_ap_list.append(box_ap_stats[0])\n elif box_ap_stats[0] > best_box_ap_list[0]:\n best_box_ap_list[0] = box_ap_stats[0]\n logger.info(\"Best test box ap: {}\".format(best_box_ap_list[0]))\n return best_box_ap_list[0]\n\n test_feed = [('image', test_feed_vars['image'].name),\n ('im_size', test_feed_vars['im_size'].name)]\n\n com = Compressor(\n place,\n fluid.global_scope(),\n train_prog,\n train_reader=train_reader,\n train_feed_list=[(key, value.name) for key, value in feed_vars.items()],\n train_fetch_list=train_fetch_list,\n eval_program=eval_prog,\n eval_reader=eval_reader,\n eval_feed_list=test_feed,\n eval_func={'map': eval_func},\n eval_fetch_list=[eval_fetch_list[0]],\n save_eval_model=True,\n prune_infer_model=[[\"image\", \"im_size\"], [\"multiclass_nms_0.tmp_0\"]],\n train_optimizer=None)\n com.config(FLAGS.slim_file)\n com.run()\n\n\nif __name__ == '__main__':\n parser = ArgsParser()\n parser.add_argument(\n \"-s\",\n \"--slim_file\",\n default=None,\n type=str,\n help=\"Config file of PaddleSlim.\")\n parser.add_argument(\n \"--output_eval\",\n default=None,\n type=str,\n help=\"Evaluation directory, default is current directory.\")\n parser.add_argument(\n \"-d\",\n \"--dataset_dir\",\n default=None,\n type=str,\n help=\"Dataset path, same as DataFeed.dataset.dataset_dir\")\n FLAGS = parser.parse_args()\n main()\n", "path": "slim/prune/compress.py"}]}
3,821
369
gh_patches_debug_2467
rasdani/github-patches
git_diff
secdev__scapy-1417
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> No /dev/bpf handle is available ! I'm running on mac high sierra 10.13.4 after downloading from https://github.com/secdev/scapy/archive/v2.4.0.zip I unzipping and ran each of the following as root: run_scapy, run_scapy2 and run_scapy_py3 within each repl I ran: ```send(IP(dst="2.2.2.2", src="1.1.1.1"))``` and the traceback was the same: ``` Traceback (most recent call last): File "<console>", line 1, in <module> File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py", line 302, in send realtime=realtime, return_packets=return_packets) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py", line 276, in __gen_send s.send(p) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/arch/bpf/supersocket.py", line 345, in send frame = raw(self.guessed_cls()/pkt) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/compat.py", line 96, in raw return bytes(x) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py", line 345, in __bytes__ return self.build() File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py", line 444, in build p = self.do_build() File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py", line 426, in do_build pkt = self.self_build() File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py", line 407, in self_build p = f.addfield(self, p, val) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/fields.py", line 80, in addfield return s+struct.pack(self.fmt, self.i2m(pkt,val)) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py", line 109, in i2m return MACField.i2m(self, pkt, self.i2h(pkt, x)) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py", line 101, in i2h x = conf.neighbor.resolve(pkt,pkt.payload) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py", line 49, in resolve return self.resolvers[k](l2inst,l3inst) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/inet.py", line 821, in inet_register_l3 return getmacbyip(l3.dst) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py", line 84, in getmacbyip nofilter=1) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py", line 434, in srp1 ans, _ = srp(*args, **kargs) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py", line 416, in srp s = conf.L2socket(promisc=promisc, iface=iface, filter=filter, nofilter=nofilter, type=type) File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/arch/bpf/supersocket.py", line 58, in __init__ (self.ins, self.dev_bpf) = get_dev_bpf() File "/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/arch/bpf/core.py", line 98, in get_dev_bpf raise Scapy_Exception("No /dev/bpf handle is available !") scapy.error.Scapy_Exception: No /dev/bpf handle is available ! ``` after looking at some of the past issues it appears similar to this one: [#1015](https://github.com/secdev/scapy/issues/1015) however it was solved some time ago... Update: The above was ran while I had wireshark running, after quitting wireshark the error stopped. </issue> <code> [start of scapy/arch/bpf/core.py] 1 # Guillaume Valadon <[email protected]> 2 3 """ 4 Scapy *BSD native support - core 5 """ 6 7 from __future__ import absolute_import 8 from scapy.config import conf 9 from scapy.error import Scapy_Exception, warning 10 from scapy.data import ARPHDR_LOOPBACK, ARPHDR_ETHER 11 from scapy.arch.common import get_if, get_bpf_pointer 12 from scapy.consts import LOOPBACK_NAME 13 14 from scapy.arch.bpf.consts import * 15 16 import os 17 import socket 18 import fcntl 19 import struct 20 21 from ctypes import cdll, cast, pointer, POINTER, Structure 22 from ctypes import c_int, c_ulong, c_char_p 23 from ctypes.util import find_library 24 from scapy.modules.six.moves import range 25 26 27 # ctypes definitions 28 29 LIBC = cdll.LoadLibrary(find_library("libc")) 30 LIBC.ioctl.argtypes = [c_int, c_ulong, c_char_p] 31 LIBC.ioctl.restype = c_int 32 33 34 # Addresses manipulation functions 35 36 def get_if_raw_addr(ifname): 37 """Returns the IPv4 address configured on 'ifname', packed with inet_pton.""" 38 39 # Get ifconfig output 40 try: 41 fd = os.popen("%s %s" % (conf.prog.ifconfig, ifname)) 42 except OSError as msg: 43 warning("Failed to execute ifconfig: (%s)", msg) 44 return b"\0\0\0\0" 45 46 # Get IPv4 addresses 47 addresses = [l for l in fd if l.find("netmask") >= 0] 48 if not addresses: 49 warning("No IPv4 address found on %s !", ifname) 50 return b"\0\0\0\0" 51 52 # Pack the first address 53 address = addresses[0].split(' ')[1] 54 return socket.inet_pton(socket.AF_INET, address) 55 56 57 def get_if_raw_hwaddr(ifname): 58 """Returns the packed MAC address configured on 'ifname'.""" 59 60 NULL_MAC_ADDRESS = b'\x00' * 6 61 62 # Handle the loopback interface separately 63 if ifname == LOOPBACK_NAME: 64 return (ARPHDR_LOOPBACK, NULL_MAC_ADDRESS) 65 66 # Get ifconfig output 67 try: 68 fd = os.popen("%s %s" % (conf.prog.ifconfig, ifname)) 69 except OSError as msg: 70 raise Scapy_Exception("Failed to execute ifconfig: (%s)" % msg) 71 72 # Get MAC addresses 73 addresses = [l for l in fd.readlines() if l.find("ether") >= 0 or 74 l.find("lladdr") >= 0 or 75 l.find("address") >= 0] 76 if not addresses: 77 raise Scapy_Exception("No MAC address found on %s !" % ifname) 78 79 # Pack and return the MAC address 80 mac = addresses[0].split(' ')[1] 81 mac = [chr(int(b, 16)) for b in mac.split(':')] 82 return (ARPHDR_ETHER, ''.join(mac)) 83 84 85 # BPF specific functions 86 87 def get_dev_bpf(): 88 """Returns an opened BPF file object""" 89 90 # Get the first available BPF handle 91 for bpf in range(0, 8): 92 try: 93 fd = os.open("/dev/bpf%i" % bpf, os.O_RDWR) 94 return (fd, bpf) 95 except OSError: 96 continue 97 98 raise Scapy_Exception("No /dev/bpf handle is available !") 99 100 101 def attach_filter(fd, iface, bpf_filter_string): 102 """Attach a BPF filter to the BPF file descriptor""" 103 104 # Retrieve the BPF byte code in decimal 105 command = "%s -i %s -ddd -s 1600 '%s'" % (conf.prog.tcpdump, iface, bpf_filter_string) 106 try: 107 f = os.popen(command) 108 except OSError as msg: 109 raise Scapy_Exception("Failed to execute tcpdump: (%s)" % msg) 110 111 # Convert the byte code to a BPF program structure 112 lines = f.readlines() 113 if lines == []: 114 raise Scapy_Exception("Got an empty BPF filter from tcpdump !") 115 116 bp = get_bpf_pointer(lines) 117 # Assign the BPF program to the interface 118 ret = LIBC.ioctl(c_int(fd), BIOCSETF, cast(pointer(bp), c_char_p)) 119 if ret < 0: 120 raise Scapy_Exception("Can't attach the BPF filter !") 121 122 123 # Interface manipulation functions 124 125 def get_if_list(): 126 """Returns a list containing all network interfaces.""" 127 128 # Get ifconfig output 129 try: 130 fd = os.popen("%s -a" % conf.prog.ifconfig) 131 except OSError as msg: 132 raise Scapy_Exception("Failed to execute ifconfig: (%s)" % msg) 133 134 # Get interfaces 135 interfaces = [line[:line.find(':')] for line in fd.readlines() 136 if ": flags" in line.lower()] 137 return interfaces 138 139 140 def get_working_ifaces(): 141 """ 142 Returns an ordered list of interfaces that could be used with BPF. 143 Note: the order mimics pcap_findalldevs() behavior 144 """ 145 146 # Only root is allowed to perform the following ioctl() call 147 if os.getuid() != 0: 148 return [] 149 150 # Test all network interfaces 151 interfaces = [] 152 for ifname in get_if_list(): 153 154 # Unlike pcap_findalldevs(), we do not care of loopback interfaces. 155 if ifname == LOOPBACK_NAME: 156 continue 157 158 # Get interface flags 159 try: 160 result = get_if(ifname, SIOCGIFFLAGS) 161 except IOError: 162 warning("ioctl(SIOCGIFFLAGS) failed on %s !", ifname) 163 continue 164 165 # Convert flags 166 ifflags = struct.unpack("16xH14x", result)[0] 167 if ifflags & 0x1: # IFF_UP 168 169 # Get a BPF handle 170 fd, _ = get_dev_bpf() 171 if fd is None: 172 raise Scapy_Exception("No /dev/bpf are available !") 173 174 # Check if the interface can be used 175 try: 176 fcntl.ioctl(fd, BIOCSETIF, struct.pack("16s16x", ifname.encode())) 177 interfaces.append((ifname, int(ifname[-1]))) 178 except IOError: 179 pass 180 181 # Close the file descriptor 182 os.close(fd) 183 184 # Sort to mimic pcap_findalldevs() order 185 interfaces.sort(key=lambda elt: elt[1]) 186 187 return interfaces 188 189 190 def get_working_if(): 191 """Returns the first interface than can be used with BPF""" 192 193 ifaces = get_working_ifaces() 194 if not ifaces: 195 # A better interface will be selected later using the routing table 196 return LOOPBACK_NAME 197 return ifaces[0][0] 198 [end of scapy/arch/bpf/core.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/scapy/arch/bpf/core.py b/scapy/arch/bpf/core.py --- a/scapy/arch/bpf/core.py +++ b/scapy/arch/bpf/core.py @@ -88,7 +88,7 @@ """Returns an opened BPF file object""" # Get the first available BPF handle - for bpf in range(0, 8): + for bpf in range(256): try: fd = os.open("/dev/bpf%i" % bpf, os.O_RDWR) return (fd, bpf)
{"golden_diff": "diff --git a/scapy/arch/bpf/core.py b/scapy/arch/bpf/core.py\n--- a/scapy/arch/bpf/core.py\n+++ b/scapy/arch/bpf/core.py\n@@ -88,7 +88,7 @@\n \"\"\"Returns an opened BPF file object\"\"\"\n \n # Get the first available BPF handle\n- for bpf in range(0, 8):\n+ for bpf in range(256):\n try:\n fd = os.open(\"/dev/bpf%i\" % bpf, os.O_RDWR)\n return (fd, bpf)\n", "issue": "No /dev/bpf handle is available !\nI'm running on mac high sierra 10.13.4\r\nafter downloading from https://github.com/secdev/scapy/archive/v2.4.0.zip I unzipping and ran each of the following as root: \r\nrun_scapy, run_scapy2 and run_scapy_py3\r\nwithin each repl I ran:\r\n```send(IP(dst=\"2.2.2.2\", src=\"1.1.1.1\"))```\r\nand the traceback was the same:\r\n```\r\nTraceback (most recent call last):\r\n File \"<console>\", line 1, in <module>\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py\", line 302, in send\r\n realtime=realtime, return_packets=return_packets)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py\", line 276, in __gen_send\r\n s.send(p)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/arch/bpf/supersocket.py\", line 345, in send\r\n frame = raw(self.guessed_cls()/pkt)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/compat.py\", line 96, in raw\r\n return bytes(x)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py\", line 345, in __bytes__\r\n return self.build()\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py\", line 444, in build\r\n p = self.do_build()\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py\", line 426, in do_build\r\n pkt = self.self_build()\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/packet.py\", line 407, in self_build\r\n p = f.addfield(self, p, val)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/fields.py\", line 80, in addfield\r\n return s+struct.pack(self.fmt, self.i2m(pkt,val))\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py\", line 109, in i2m\r\n return MACField.i2m(self, pkt, self.i2h(pkt, x))\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py\", line 101, in i2h\r\n x = conf.neighbor.resolve(pkt,pkt.payload)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py\", line 49, in resolve\r\n return self.resolvers[k](l2inst,l3inst)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/inet.py\", line 821, in inet_register_l3\r\n return getmacbyip(l3.dst)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/layers/l2.py\", line 84, in getmacbyip\r\n nofilter=1)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py\", line 434, in srp1\r\n ans, _ = srp(*args, **kargs)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/sendrecv.py\", line 416, in srp\r\n s = conf.L2socket(promisc=promisc, iface=iface, filter=filter, nofilter=nofilter, type=type)\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/arch/bpf/supersocket.py\", line 58, in __init__\r\n (self.ins, self.dev_bpf) = get_dev_bpf()\r\n File \"/Users/idobn/dev/research/something/playground/tmp/scapy-2.4.0/scapy/arch/bpf/core.py\", line 98, in get_dev_bpf\r\n raise Scapy_Exception(\"No /dev/bpf handle is available !\")\r\nscapy.error.Scapy_Exception: No /dev/bpf handle is available !\r\n```\r\n\r\nafter looking at some of the past issues it appears similar to this one: [#1015](https://github.com/secdev/scapy/issues/1015)\r\nhowever it was solved some time ago...\r\n\r\nUpdate:\r\nThe above was ran while I had wireshark running, after quitting wireshark the error stopped.\n", "before_files": [{"content": "# Guillaume Valadon <[email protected]>\n\n\"\"\"\nScapy *BSD native support - core\n\"\"\"\n\nfrom __future__ import absolute_import\nfrom scapy.config import conf\nfrom scapy.error import Scapy_Exception, warning\nfrom scapy.data import ARPHDR_LOOPBACK, ARPHDR_ETHER\nfrom scapy.arch.common import get_if, get_bpf_pointer\nfrom scapy.consts import LOOPBACK_NAME\n\nfrom scapy.arch.bpf.consts import *\n\nimport os\nimport socket\nimport fcntl\nimport struct\n\nfrom ctypes import cdll, cast, pointer, POINTER, Structure\nfrom ctypes import c_int, c_ulong, c_char_p\nfrom ctypes.util import find_library\nfrom scapy.modules.six.moves import range\n\n\n# ctypes definitions\n\nLIBC = cdll.LoadLibrary(find_library(\"libc\"))\nLIBC.ioctl.argtypes = [c_int, c_ulong, c_char_p]\nLIBC.ioctl.restype = c_int\n\n\n# Addresses manipulation functions\n\ndef get_if_raw_addr(ifname):\n \"\"\"Returns the IPv4 address configured on 'ifname', packed with inet_pton.\"\"\"\n\n # Get ifconfig output\n try:\n fd = os.popen(\"%s %s\" % (conf.prog.ifconfig, ifname))\n except OSError as msg:\n warning(\"Failed to execute ifconfig: (%s)\", msg)\n return b\"\\0\\0\\0\\0\"\n\n # Get IPv4 addresses\n addresses = [l for l in fd if l.find(\"netmask\") >= 0]\n if not addresses:\n warning(\"No IPv4 address found on %s !\", ifname)\n return b\"\\0\\0\\0\\0\"\n\n # Pack the first address\n address = addresses[0].split(' ')[1]\n return socket.inet_pton(socket.AF_INET, address)\n\n\ndef get_if_raw_hwaddr(ifname):\n \"\"\"Returns the packed MAC address configured on 'ifname'.\"\"\"\n\n NULL_MAC_ADDRESS = b'\\x00' * 6\n\n # Handle the loopback interface separately\n if ifname == LOOPBACK_NAME:\n return (ARPHDR_LOOPBACK, NULL_MAC_ADDRESS)\n\n # Get ifconfig output\n try:\n fd = os.popen(\"%s %s\" % (conf.prog.ifconfig, ifname))\n except OSError as msg:\n raise Scapy_Exception(\"Failed to execute ifconfig: (%s)\" % msg)\n\n # Get MAC addresses\n addresses = [l for l in fd.readlines() if l.find(\"ether\") >= 0 or\n l.find(\"lladdr\") >= 0 or\n l.find(\"address\") >= 0]\n if not addresses:\n raise Scapy_Exception(\"No MAC address found on %s !\" % ifname)\n\n # Pack and return the MAC address\n mac = addresses[0].split(' ')[1]\n mac = [chr(int(b, 16)) for b in mac.split(':')]\n return (ARPHDR_ETHER, ''.join(mac))\n\n\n# BPF specific functions\n\ndef get_dev_bpf():\n \"\"\"Returns an opened BPF file object\"\"\"\n\n # Get the first available BPF handle\n for bpf in range(0, 8):\n try:\n fd = os.open(\"/dev/bpf%i\" % bpf, os.O_RDWR)\n return (fd, bpf)\n except OSError:\n continue\n\n raise Scapy_Exception(\"No /dev/bpf handle is available !\")\n\n\ndef attach_filter(fd, iface, bpf_filter_string):\n \"\"\"Attach a BPF filter to the BPF file descriptor\"\"\"\n\n # Retrieve the BPF byte code in decimal\n command = \"%s -i %s -ddd -s 1600 '%s'\" % (conf.prog.tcpdump, iface, bpf_filter_string)\n try:\n f = os.popen(command)\n except OSError as msg:\n raise Scapy_Exception(\"Failed to execute tcpdump: (%s)\" % msg)\n\n # Convert the byte code to a BPF program structure\n lines = f.readlines()\n if lines == []:\n raise Scapy_Exception(\"Got an empty BPF filter from tcpdump !\")\n\n bp = get_bpf_pointer(lines)\n # Assign the BPF program to the interface\n ret = LIBC.ioctl(c_int(fd), BIOCSETF, cast(pointer(bp), c_char_p))\n if ret < 0:\n raise Scapy_Exception(\"Can't attach the BPF filter !\")\n\n\n# Interface manipulation functions\n\ndef get_if_list():\n \"\"\"Returns a list containing all network interfaces.\"\"\"\n\n # Get ifconfig output\n try:\n fd = os.popen(\"%s -a\" % conf.prog.ifconfig)\n except OSError as msg:\n raise Scapy_Exception(\"Failed to execute ifconfig: (%s)\" % msg)\n\n # Get interfaces\n interfaces = [line[:line.find(':')] for line in fd.readlines()\n if \": flags\" in line.lower()]\n return interfaces\n\n\ndef get_working_ifaces():\n \"\"\"\n Returns an ordered list of interfaces that could be used with BPF.\n Note: the order mimics pcap_findalldevs() behavior\n \"\"\"\n\n # Only root is allowed to perform the following ioctl() call\n if os.getuid() != 0:\n return []\n\n # Test all network interfaces\n interfaces = []\n for ifname in get_if_list():\n\n # Unlike pcap_findalldevs(), we do not care of loopback interfaces.\n if ifname == LOOPBACK_NAME:\n continue\n\n # Get interface flags\n try:\n result = get_if(ifname, SIOCGIFFLAGS)\n except IOError:\n warning(\"ioctl(SIOCGIFFLAGS) failed on %s !\", ifname)\n continue\n\n # Convert flags\n ifflags = struct.unpack(\"16xH14x\", result)[0]\n if ifflags & 0x1: # IFF_UP\n\n # Get a BPF handle\n fd, _ = get_dev_bpf()\n if fd is None:\n raise Scapy_Exception(\"No /dev/bpf are available !\")\n\n # Check if the interface can be used\n try:\n fcntl.ioctl(fd, BIOCSETIF, struct.pack(\"16s16x\", ifname.encode()))\n interfaces.append((ifname, int(ifname[-1])))\n except IOError:\n pass\n\n # Close the file descriptor\n os.close(fd)\n\n # Sort to mimic pcap_findalldevs() order\n interfaces.sort(key=lambda elt: elt[1])\n\n return interfaces\n\n\ndef get_working_if():\n \"\"\"Returns the first interface than can be used with BPF\"\"\"\n\n ifaces = get_working_ifaces()\n if not ifaces:\n # A better interface will be selected later using the routing table\n return LOOPBACK_NAME\n return ifaces[0][0]\n", "path": "scapy/arch/bpf/core.py"}]}
3,710
128
gh_patches_debug_20695
rasdani/github-patches
git_diff
mne-tools__mne-bids-pipeline-699
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> DOC: Website <->README sync Following @hoechenberger's PRs I think our README.md (which ends up on PyPI) looks nicer / does better advertising than our docs landing page does | https://pypi.org/project/mne-bids-pipeline/#description | https://mne.tools/mne-bids-pipeline/1.0/index.html | | -- | -- | | ![Screenshot from 2022-12-08 13-01-47](https://user-images.githubusercontent.com/2365790/206529981-f1808ad5-49bf-4236-b904-ff72f9e12e70.png) | ![Screenshot from 2022-12-08 13-01-53](https://user-images.githubusercontent.com/2365790/206530007-2bbf52c3-229a-47bf-b2c9-b23812fdc977.png) | I propose that we should add the tag line and bullet points from the README.md to the landing page for mne-bids-pipeline, and also double check that our project description matches. This would also make it a bit more like MNE-Python, where our landing page tries to quickly convey "This package can do a lot of cool stuff that you want!". I propose that we should modify `build_docs.sh` to have a `gen_index.py` that takes the bullet points of README.md and pastes them into the `index.md` for the website. @hoechenberger @drammock WDYT? </issue> <code> [start of docs/hooks.py] 1 import logging 2 from typing import Dict, Any 3 4 from mkdocs.config.defaults import MkDocsConfig 5 6 logger = logging.getLogger("mkdocs") 7 8 config_updated = False 9 10 11 # Ideally there would be a better hook, but it's unclear if context can 12 # be obtained any earlier 13 def on_template_context( 14 context: Dict[str, Any], 15 template_name: str, 16 config: MkDocsConfig, 17 ) -> None: 18 """Update the copyright in the footer.""" 19 global config_updated 20 if not config_updated: 21 config_updated = True 22 now = context["build_date_utc"].strftime("%Y/%m/%d") 23 config.copyright = f"{config.copyright}, last updated {now}" 24 logger.info(f"Updated copyright to {config.copyright}") 25 [end of docs/hooks.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/docs/hooks.py b/docs/hooks.py --- a/docs/hooks.py +++ b/docs/hooks.py @@ -2,6 +2,8 @@ from typing import Dict, Any from mkdocs.config.defaults import MkDocsConfig +from mkdocs.structure.pages import Page +from mkdocs.structure.files import Files logger = logging.getLogger("mkdocs") @@ -22,3 +24,29 @@ now = context["build_date_utc"].strftime("%Y/%m/%d") config.copyright = f"{config.copyright}, last updated {now}" logger.info(f"Updated copyright to {config.copyright}") + + +_EMOJI_MAP = { + "🏆": ":trophy:", + "🛠️": ":tools:", + "📘": ":blue_book:", + "🧑‍🤝‍🧑": ":people_holding_hands_tone1:", + "💻": ":computer:", + "🆘": ":sos:", + "👣": ":footprints:", + "⏩": ":fast_forward:", + "⏏️": ":eject:", + "☁️": ":cloud:", +} + + +def on_page_markdown( + markdown: str, + page: Page, + config: MkDocsConfig, + files: Files, +) -> str: + if page.file.name == "index" and page.title == "Home": + for rd, md in _EMOJI_MAP.items(): + markdown = markdown.replace(rd, md) + return markdown
{"golden_diff": "diff --git a/docs/hooks.py b/docs/hooks.py\n--- a/docs/hooks.py\n+++ b/docs/hooks.py\n@@ -2,6 +2,8 @@\n from typing import Dict, Any\n \n from mkdocs.config.defaults import MkDocsConfig\n+from mkdocs.structure.pages import Page\n+from mkdocs.structure.files import Files\n \n logger = logging.getLogger(\"mkdocs\")\n \n@@ -22,3 +24,29 @@\n now = context[\"build_date_utc\"].strftime(\"%Y/%m/%d\")\n config.copyright = f\"{config.copyright}, last updated {now}\"\n logger.info(f\"Updated copyright to {config.copyright}\")\n+\n+\n+_EMOJI_MAP = {\n+ \"\ud83c\udfc6\": \":trophy:\",\n+ \"\ud83d\udee0\ufe0f\": \":tools:\",\n+ \"\ud83d\udcd8\": \":blue_book:\",\n+ \"\ud83e\uddd1\u200d\ud83e\udd1d\u200d\ud83e\uddd1\": \":people_holding_hands_tone1:\",\n+ \"\ud83d\udcbb\": \":computer:\",\n+ \"\ud83c\udd98\": \":sos:\",\n+ \"\ud83d\udc63\": \":footprints:\",\n+ \"\u23e9\": \":fast_forward:\",\n+ \"\u23cf\ufe0f\": \":eject:\",\n+ \"\u2601\ufe0f\": \":cloud:\",\n+}\n+\n+\n+def on_page_markdown(\n+ markdown: str,\n+ page: Page,\n+ config: MkDocsConfig,\n+ files: Files,\n+) -> str:\n+ if page.file.name == \"index\" and page.title == \"Home\":\n+ for rd, md in _EMOJI_MAP.items():\n+ markdown = markdown.replace(rd, md)\n+ return markdown\n", "issue": "DOC: Website <->README sync\nFollowing @hoechenberger's PRs I think our README.md (which ends up on PyPI) looks nicer / does better advertising than our docs landing page does\r\n\r\n| https://pypi.org/project/mne-bids-pipeline/#description | https://mne.tools/mne-bids-pipeline/1.0/index.html |\r\n| -- | -- |\r\n| ![Screenshot from 2022-12-08 13-01-47](https://user-images.githubusercontent.com/2365790/206529981-f1808ad5-49bf-4236-b904-ff72f9e12e70.png) | ![Screenshot from 2022-12-08 13-01-53](https://user-images.githubusercontent.com/2365790/206530007-2bbf52c3-229a-47bf-b2c9-b23812fdc977.png) |\r\n\r\nI propose that we should add the tag line and bullet points from the README.md to the landing page for mne-bids-pipeline, and also double check that our project description matches. This would also make it a bit more like MNE-Python, where our landing page tries to quickly convey \"This package can do a lot of cool stuff that you want!\".\r\n\r\nI propose that we should modify `build_docs.sh` to have a `gen_index.py` that takes the bullet points of README.md and pastes them into the `index.md` for the website.\r\n\r\n@hoechenberger @drammock WDYT?\n", "before_files": [{"content": "import logging\nfrom typing import Dict, Any\n\nfrom mkdocs.config.defaults import MkDocsConfig\n\nlogger = logging.getLogger(\"mkdocs\")\n\nconfig_updated = False\n\n\n# Ideally there would be a better hook, but it's unclear if context can\n# be obtained any earlier\ndef on_template_context(\n context: Dict[str, Any],\n template_name: str,\n config: MkDocsConfig,\n) -> None:\n \"\"\"Update the copyright in the footer.\"\"\"\n global config_updated\n if not config_updated:\n config_updated = True\n now = context[\"build_date_utc\"].strftime(\"%Y/%m/%d\")\n config.copyright = f\"{config.copyright}, last updated {now}\"\n logger.info(f\"Updated copyright to {config.copyright}\")\n", "path": "docs/hooks.py"}]}
1,109
335
gh_patches_debug_48777
rasdani/github-patches
git_diff
kymatio__kymatio-288
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG 3D benchmark fails on GPU It doesn't call the `cuda()` function (now required) when giving a CUDA tensor input, so the scattering transform errors. </issue> <code> [start of examples/3d/compute_speed.py] 1 """ 2 3D scattering transform benchmark 3 ================================= 4 We compute scattering transforms for volume maps of size `128`-by-`128`-by- 5 `128`, with averaging scale `2**2 = 4` and maximum spherical harmonic 6 order `L = 2`. The volumes are stacked into batches of size `batch_size = 8` 7 and the transforms are computed `10` times to get an average running time. 8 """ 9 10 ############################################################################### 11 # Preliminaries 12 # ------------- 13 # Since kymatio handles PyTorch arrays, we first import `torch`. 14 15 import torch 16 17 ############################################################################### 18 # To measure the running time of the implementation, we use the `time` package. 19 20 import time 21 22 ############################################################################### 23 # The performance of the implementation depends on which "backend" is used. We 24 # therefore want to report the name of the backend when presenting the results. 25 # Certain backends are also GPU-only, we we want to detect that before running 26 # the benchmark. 27 28 import kymatio.scattering3d.backend as backend 29 30 ############################################################################### 31 # Finally, we import the `HarmonicScattering3D` class that computes the scattering 32 # transform. 33 34 from kymatio import HarmonicScattering3D 35 ############################################################################### 36 # Benchmark setup 37 # -------------------- 38 # First, we set up some basic parameters: the volume width `M`, height `N`, 39 # and depth 'O', the maximum number of the spherical harmonics `L`, and the 40 # maximum scale `2**J`. Here, we consider cubic volumes of size `128`, with 41 # a maximum scale of `2**2 = 4` and maximum spherical harmonic order of `2`. 42 43 M, N, O = 128, 128, 128 44 J = 2 45 L = 2 46 47 integral_powers = [1., 2.] 48 sigma_0 = 1 49 50 ############################################################################### 51 # To squeeze the maximum performance out of the implementation, we apply it to 52 # a batch of `8` volumes. Larger batch sizes do not yield increased efficiency, 53 # but smaller values increases the influence of overhead on the running time. 54 55 batch_size = 8 56 57 ############################################################################### 58 # We repeat the benchmark `10` times and compute the average running time to 59 # get a reasonable estimate. 60 61 times = 10 62 63 ############################################################################### 64 # Determine which devices (CPU or GPU) that are supported by the current 65 # backend. 66 67 if backend.NAME == 'torch': 68 devices = ['cpu', 'gpu'] 69 elif backend.NAME == 'skcuda': 70 devices = ['gpu'] 71 72 ############################################################################### 73 # Set up the scattering object and the test data 74 # ---------------------------------------------- 75 76 ############################################################################### 77 # Create the `HarmonicScattering3D` object using the given parameters and generate 78 # some compatible test data with the specified batch size. 79 80 scattering = HarmonicScattering3D(J, shape=(M, N, O), L=L, sigma_0=sigma_0) 81 82 x = torch.randn(batch_size, M, N, O, dtype=torch.float32) 83 84 ############################################################################### 85 # Run the benchmark 86 # ----------------- 87 # For each device, we need to convert the Tensor `x` to the appropriate type, 88 # invoke `times` calls to `scattering.forward` and print the running times. 89 # Before the timer starts, we add an extra `scattering.forward` call to ensure 90 # any first-time overhead, such as memory allocation and CUDA kernel 91 # compilation, is not counted. If the benchmark is running on the GPU, we also 92 # need to call `torch.cuda.synchronize()` before and after the benchmark to 93 # make sure that all CUDA kernels have finished executing. 94 95 for device in devices: 96 fmt_str = '==> Testing Float32 with {} backend, on {}, forward' 97 print(fmt_str.format(backend.NAME, device.upper())) 98 99 if device == 'gpu': 100 x = x.cuda() 101 else: 102 x = x.cpu() 103 104 scattering.method = 'integral' 105 scattering.integral_powers = integral_powers 106 107 scattering.forward(x) 108 109 if device == 'gpu': 110 torch.cuda.synchronize() 111 112 t_start = time.time() 113 for _ in range(times): 114 scattering.forward(x) 115 116 if device == 'gpu': 117 torch.cuda.synchronize() 118 119 t_elapsed = time.time() - t_start 120 121 fmt_str = 'Elapsed time: {:2f} [s / {:d} evals], avg: {:.2f} (s/batch)' 122 print(fmt_str.format(t_elapsed, times, t_elapsed/times)) 123 124 ############################################################################### 125 # The resulting output should be something like 126 # 127 # .. code-block:: text 128 # 129 # ==> Testing Float32 with torch backend, on CPU, forward 130 # Elapsed time: 109.739110 [s / 10 evals], avg: 10.97 (s/batch) 131 # ==> Testing Float32 with torch backend, on GPU, forward 132 # Elapsed time: 60.476041 [s / 10 evals], avg: 6.05 (s/batch) 133 [end of examples/3d/compute_speed.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/examples/3d/compute_speed.py b/examples/3d/compute_speed.py --- a/examples/3d/compute_speed.py +++ b/examples/3d/compute_speed.py @@ -97,8 +97,10 @@ print(fmt_str.format(backend.NAME, device.upper())) if device == 'gpu': + scattering.cuda() x = x.cuda() else: + scattering.cpu() x = x.cpu() scattering.method = 'integral'
{"golden_diff": "diff --git a/examples/3d/compute_speed.py b/examples/3d/compute_speed.py\n--- a/examples/3d/compute_speed.py\n+++ b/examples/3d/compute_speed.py\n@@ -97,8 +97,10 @@\n print(fmt_str.format(backend.NAME, device.upper()))\n \n if device == 'gpu':\n+ scattering.cuda()\n x = x.cuda()\n else:\n+ scattering.cpu()\n x = x.cpu()\n \n scattering.method = 'integral'\n", "issue": "BUG 3D benchmark fails on GPU\nIt doesn't call the `cuda()` function (now required) when giving a CUDA tensor input, so the scattering transform errors.\n", "before_files": [{"content": "\"\"\"\n3D scattering transform benchmark\n=================================\nWe compute scattering transforms for volume maps of size `128`-by-`128`-by-\n`128`, with averaging scale `2**2 = 4` and maximum spherical harmonic\norder `L = 2`. The volumes are stacked into batches of size `batch_size = 8`\nand the transforms are computed `10` times to get an average running time.\n\"\"\"\n\n###############################################################################\n# Preliminaries\n# -------------\n# Since kymatio handles PyTorch arrays, we first import `torch`.\n\nimport torch\n\n###############################################################################\n# To measure the running time of the implementation, we use the `time` package.\n\nimport time\n\n###############################################################################\n# The performance of the implementation depends on which \"backend\" is used. We\n# therefore want to report the name of the backend when presenting the results.\n# Certain backends are also GPU-only, we we want to detect that before running\n# the benchmark.\n\nimport kymatio.scattering3d.backend as backend\n\n###############################################################################\n# Finally, we import the `HarmonicScattering3D` class that computes the scattering\n# transform.\n\nfrom kymatio import HarmonicScattering3D\n###############################################################################\n# Benchmark setup\n# --------------------\n# First, we set up some basic parameters: the volume width `M`, height `N`,\n# and depth 'O', the maximum number of the spherical harmonics `L`, and the\n# maximum scale `2**J`. Here, we consider cubic volumes of size `128`, with\n# a maximum scale of `2**2 = 4` and maximum spherical harmonic order of `2`.\n\nM, N, O = 128, 128, 128\nJ = 2\nL = 2\n\nintegral_powers = [1., 2.]\nsigma_0 = 1\n\n###############################################################################\n# To squeeze the maximum performance out of the implementation, we apply it to\n# a batch of `8` volumes. Larger batch sizes do not yield increased efficiency,\n# but smaller values increases the influence of overhead on the running time.\n\nbatch_size = 8\n\n###############################################################################\n# We repeat the benchmark `10` times and compute the average running time to\n# get a reasonable estimate.\n\ntimes = 10\n\n###############################################################################\n# Determine which devices (CPU or GPU) that are supported by the current\n# backend.\n\nif backend.NAME == 'torch':\n devices = ['cpu', 'gpu']\nelif backend.NAME == 'skcuda':\n devices = ['gpu']\n\n###############################################################################\n# Set up the scattering object and the test data\n# ----------------------------------------------\n\n###############################################################################\n# Create the `HarmonicScattering3D` object using the given parameters and generate\n# some compatible test data with the specified batch size.\n\nscattering = HarmonicScattering3D(J, shape=(M, N, O), L=L, sigma_0=sigma_0)\n\nx = torch.randn(batch_size, M, N, O, dtype=torch.float32)\n\n###############################################################################\n# Run the benchmark\n# -----------------\n# For each device, we need to convert the Tensor `x` to the appropriate type,\n# invoke `times` calls to `scattering.forward` and print the running times.\n# Before the timer starts, we add an extra `scattering.forward` call to ensure\n# any first-time overhead, such as memory allocation and CUDA kernel\n# compilation, is not counted. If the benchmark is running on the GPU, we also\n# need to call `torch.cuda.synchronize()` before and after the benchmark to\n# make sure that all CUDA kernels have finished executing.\n\nfor device in devices:\n fmt_str = '==> Testing Float32 with {} backend, on {}, forward'\n print(fmt_str.format(backend.NAME, device.upper()))\n\n if device == 'gpu':\n x = x.cuda()\n else:\n x = x.cpu()\n\n scattering.method = 'integral'\n scattering.integral_powers = integral_powers\n\n scattering.forward(x)\n\n if device == 'gpu':\n torch.cuda.synchronize()\n\n t_start = time.time()\n for _ in range(times):\n scattering.forward(x)\n\n if device == 'gpu':\n torch.cuda.synchronize()\n\n t_elapsed = time.time() - t_start\n\n fmt_str = 'Elapsed time: {:2f} [s / {:d} evals], avg: {:.2f} (s/batch)'\n print(fmt_str.format(t_elapsed, times, t_elapsed/times))\n\n###############################################################################\n# The resulting output should be something like\n#\n# .. code-block:: text\n#\n# ==> Testing Float32 with torch backend, on CPU, forward\n# Elapsed time: 109.739110 [s / 10 evals], avg: 10.97 (s/batch)\n# ==> Testing Float32 with torch backend, on GPU, forward\n# Elapsed time: 60.476041 [s / 10 evals], avg: 6.05 (s/batch)\n", "path": "examples/3d/compute_speed.py"}]}
1,968
110
gh_patches_debug_1004
rasdani/github-patches
git_diff
cloudtools__troposphere-2238
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Update DLM Interval Rule Values Update DLM valid intervals. `1` has been added. [DLM interval rule allows ](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dlm-lifecyclepolicy-createrule.html) </issue> <code> [start of troposphere/validators/dlm.py] 1 # Copyright (c) 2012-2022, Mark Peek <[email protected]> 2 # All rights reserved. 3 # 4 # See LICENSE file for full license. 5 6 7 from . import tags_or_list 8 9 10 def validate_tags_or_list(x): 11 """ 12 Property: LifecyclePolicy.Tags 13 Property: PolicyDetails.TargetTags 14 Property: Schedule.TagsToAdd 15 """ 16 return tags_or_list(x) 17 18 19 def validate_interval(interval): 20 """ 21 Interval validation rule. 22 Property: CreateRule.Interval 23 """ 24 25 VALID_INTERVALS = (2, 3, 4, 6, 8, 12, 24) 26 27 if interval not in VALID_INTERVALS: 28 raise ValueError( 29 "Interval must be one of : %s" 30 % ", ".join([str(i) for i in VALID_INTERVALS]) 31 ) 32 return interval 33 34 35 def validate_interval_unit(interval_unit): 36 """ 37 Interval unit validation rule. 38 Property: CreateRule.IntervalUnit 39 """ 40 41 VALID_INTERVAL_UNITS = ("HOURS",) 42 43 if interval_unit not in VALID_INTERVAL_UNITS: 44 raise ValueError( 45 "Interval unit must be one of : %s" % ", ".join(VALID_INTERVAL_UNITS) 46 ) 47 return interval_unit 48 49 50 def validate_state(state): 51 """ 52 State validation rule. 53 Property: LifecyclePolicy.State 54 """ 55 56 VALID_STATES = ("ENABLED", "DISABLED") 57 58 if state not in VALID_STATES: 59 raise ValueError("State must be one of : %s" % ", ".join(VALID_STATES)) 60 return state 61 [end of troposphere/validators/dlm.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/troposphere/validators/dlm.py b/troposphere/validators/dlm.py --- a/troposphere/validators/dlm.py +++ b/troposphere/validators/dlm.py @@ -22,7 +22,7 @@ Property: CreateRule.Interval """ - VALID_INTERVALS = (2, 3, 4, 6, 8, 12, 24) + VALID_INTERVALS = (1, 2, 3, 4, 6, 8, 12, 24) if interval not in VALID_INTERVALS: raise ValueError(
{"golden_diff": "diff --git a/troposphere/validators/dlm.py b/troposphere/validators/dlm.py\n--- a/troposphere/validators/dlm.py\n+++ b/troposphere/validators/dlm.py\n@@ -22,7 +22,7 @@\n Property: CreateRule.Interval\n \"\"\"\n \n- VALID_INTERVALS = (2, 3, 4, 6, 8, 12, 24)\n+ VALID_INTERVALS = (1, 2, 3, 4, 6, 8, 12, 24)\n \n if interval not in VALID_INTERVALS:\n raise ValueError(\n", "issue": "Update DLM Interval Rule Values\nUpdate DLM valid intervals. `1` has been added.\r\n\r\n[DLM interval rule allows ](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-dlm-lifecyclepolicy-createrule.html)\n", "before_files": [{"content": "# Copyright (c) 2012-2022, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\n\nfrom . import tags_or_list\n\n\ndef validate_tags_or_list(x):\n \"\"\"\n Property: LifecyclePolicy.Tags\n Property: PolicyDetails.TargetTags\n Property: Schedule.TagsToAdd\n \"\"\"\n return tags_or_list(x)\n\n\ndef validate_interval(interval):\n \"\"\"\n Interval validation rule.\n Property: CreateRule.Interval\n \"\"\"\n\n VALID_INTERVALS = (2, 3, 4, 6, 8, 12, 24)\n\n if interval not in VALID_INTERVALS:\n raise ValueError(\n \"Interval must be one of : %s\"\n % \", \".join([str(i) for i in VALID_INTERVALS])\n )\n return interval\n\n\ndef validate_interval_unit(interval_unit):\n \"\"\"\n Interval unit validation rule.\n Property: CreateRule.IntervalUnit\n \"\"\"\n\n VALID_INTERVAL_UNITS = (\"HOURS\",)\n\n if interval_unit not in VALID_INTERVAL_UNITS:\n raise ValueError(\n \"Interval unit must be one of : %s\" % \", \".join(VALID_INTERVAL_UNITS)\n )\n return interval_unit\n\n\ndef validate_state(state):\n \"\"\"\n State validation rule.\n Property: LifecyclePolicy.State\n \"\"\"\n\n VALID_STATES = (\"ENABLED\", \"DISABLED\")\n\n if state not in VALID_STATES:\n raise ValueError(\"State must be one of : %s\" % \", \".join(VALID_STATES))\n return state\n", "path": "troposphere/validators/dlm.py"}]}
1,045
141
gh_patches_debug_23227
rasdani/github-patches
git_diff
deepchecks__deepchecks-405
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: model info check fails when model is sklearn pipeline to reproduce: https://www.kaggle.com/itay94/notebook6f16624759 ![image](https://user-images.githubusercontent.com/20860465/147880701-f8edc031-bc19-4a1b-a30b-6f9f8b89b782.png) </issue> <code> [start of deepchecks/checks/overview/model_info.py] 1 # ---------------------------------------------------------------------------- 2 # Copyright (C) 2021 Deepchecks (https://www.deepchecks.com) 3 # 4 # This file is part of Deepchecks. 5 # Deepchecks is distributed under the terms of the GNU Affero General 6 # Public License (version 3 or later). 7 # You should have received a copy of the GNU Affero General Public License 8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>. 9 # ---------------------------------------------------------------------------- 10 # 11 """Module contains model_info check.""" 12 import pandas as pd 13 from sklearn.base import BaseEstimator 14 15 from deepchecks import ModelOnlyBaseCheck, CheckResult 16 from deepchecks.utils.validation import model_type_validation 17 18 19 __all__ = ['ModelInfo'] 20 21 22 class ModelInfo(ModelOnlyBaseCheck): 23 """Summarize given model parameters.""" 24 25 def run(self, model: BaseEstimator) -> CheckResult: 26 """Run check. 27 28 Args: 29 model (BaseEstimator): A scikit-learn-compatible fitted estimator instance 30 31 Returns: 32 CheckResult: value is dictionary in format {type: <model_type>, params: <model_params_dict>} 33 """ 34 return self._model_info(model) 35 36 def _model_info(self, model: BaseEstimator): 37 model_type_validation(model) 38 model_type = type(model).__name__ 39 model_params = model.get_params() 40 default_params = type(model)().get_params() 41 42 # Create dataframe to show 43 model_param_df = pd.DataFrame(model_params.items(), columns=['Parameter', 'Value']) 44 model_param_df['Default'] = model_param_df['Parameter'].map(lambda x: default_params.get(x, '')) 45 46 def highlight_not_default(data): 47 n = len(data) 48 if data['Value'] != data['Default']: 49 return n * ['background-color: lightblue'] 50 else: 51 return n * [''] 52 53 model_param_df = model_param_df.style.apply(highlight_not_default, axis=1).hide_index() 54 55 value = {'type': model_type, 'params': model_params} 56 footnote = '<p style="font-size:0.7em"><i>Colored rows are parameters with non-default values</i></p>' 57 display = [f'Model Type: {model_type}', model_param_df, footnote] 58 59 return CheckResult(value, header='Model Info', display=display) 60 [end of deepchecks/checks/overview/model_info.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/deepchecks/checks/overview/model_info.py b/deepchecks/checks/overview/model_info.py --- a/deepchecks/checks/overview/model_info.py +++ b/deepchecks/checks/overview/model_info.py @@ -14,6 +14,7 @@ from deepchecks import ModelOnlyBaseCheck, CheckResult from deepchecks.utils.validation import model_type_validation +from deepchecks.utils.model import get_model_of_pipeline __all__ = ['ModelInfo'] @@ -31,13 +32,11 @@ Returns: CheckResult: value is dictionary in format {type: <model_type>, params: <model_params_dict>} """ - return self._model_info(model) - - def _model_info(self, model: BaseEstimator): model_type_validation(model) - model_type = type(model).__name__ - model_params = model.get_params() - default_params = type(model)().get_params() + estimator = get_model_of_pipeline(model) + model_type = type(estimator).__name__ + model_params = estimator.get_params() + default_params = type(estimator)().get_params() # Create dataframe to show model_param_df = pd.DataFrame(model_params.items(), columns=['Parameter', 'Value'])
{"golden_diff": "diff --git a/deepchecks/checks/overview/model_info.py b/deepchecks/checks/overview/model_info.py\n--- a/deepchecks/checks/overview/model_info.py\n+++ b/deepchecks/checks/overview/model_info.py\n@@ -14,6 +14,7 @@\n \n from deepchecks import ModelOnlyBaseCheck, CheckResult\n from deepchecks.utils.validation import model_type_validation\n+from deepchecks.utils.model import get_model_of_pipeline\n \n \n __all__ = ['ModelInfo']\n@@ -31,13 +32,11 @@\n Returns:\n CheckResult: value is dictionary in format {type: <model_type>, params: <model_params_dict>}\n \"\"\"\n- return self._model_info(model)\n-\n- def _model_info(self, model: BaseEstimator):\n model_type_validation(model)\n- model_type = type(model).__name__\n- model_params = model.get_params()\n- default_params = type(model)().get_params()\n+ estimator = get_model_of_pipeline(model)\n+ model_type = type(estimator).__name__\n+ model_params = estimator.get_params()\n+ default_params = type(estimator)().get_params()\n \n # Create dataframe to show\n model_param_df = pd.DataFrame(model_params.items(), columns=['Parameter', 'Value'])\n", "issue": "BUG: model info check fails when model is sklearn pipeline\nto reproduce:\r\nhttps://www.kaggle.com/itay94/notebook6f16624759\r\n\r\n![image](https://user-images.githubusercontent.com/20860465/147880701-f8edc031-bc19-4a1b-a30b-6f9f8b89b782.png)\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"Module contains model_info check.\"\"\"\nimport pandas as pd\nfrom sklearn.base import BaseEstimator\n\nfrom deepchecks import ModelOnlyBaseCheck, CheckResult\nfrom deepchecks.utils.validation import model_type_validation\n\n\n__all__ = ['ModelInfo']\n\n\nclass ModelInfo(ModelOnlyBaseCheck):\n \"\"\"Summarize given model parameters.\"\"\"\n\n def run(self, model: BaseEstimator) -> CheckResult:\n \"\"\"Run check.\n\n Args:\n model (BaseEstimator): A scikit-learn-compatible fitted estimator instance\n\n Returns:\n CheckResult: value is dictionary in format {type: <model_type>, params: <model_params_dict>}\n \"\"\"\n return self._model_info(model)\n\n def _model_info(self, model: BaseEstimator):\n model_type_validation(model)\n model_type = type(model).__name__\n model_params = model.get_params()\n default_params = type(model)().get_params()\n\n # Create dataframe to show\n model_param_df = pd.DataFrame(model_params.items(), columns=['Parameter', 'Value'])\n model_param_df['Default'] = model_param_df['Parameter'].map(lambda x: default_params.get(x, ''))\n\n def highlight_not_default(data):\n n = len(data)\n if data['Value'] != data['Default']:\n return n * ['background-color: lightblue']\n else:\n return n * ['']\n\n model_param_df = model_param_df.style.apply(highlight_not_default, axis=1).hide_index()\n\n value = {'type': model_type, 'params': model_params}\n footnote = '<p style=\"font-size:0.7em\"><i>Colored rows are parameters with non-default values</i></p>'\n display = [f'Model Type: {model_type}', model_param_df, footnote]\n\n return CheckResult(value, header='Model Info', display=display)\n", "path": "deepchecks/checks/overview/model_info.py"}]}
1,253
280
gh_patches_debug_25992
rasdani/github-patches
git_diff
joke2k__faker-314
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Param switches on faker.password() don't guarantee valid password The format switches on `faker.password()` (`special_chars, digits, upper_case, lower_case`) don't always return passwords matching those rules. This is problematic as when using generated passwords in unit tests, where passwords must conform to validity rules (e.g. "must contain numbers"), tests can randomly fail. I expected that these switches would guarantee the function returns a conforming password. e.g. `faker.password(digits=True)` always returns a password containing digits, but this is not the case. </issue> <code> [start of faker/providers/misc/__init__.py] 1 # coding=utf-8 2 3 from __future__ import unicode_literals 4 import hashlib 5 import string 6 import uuid 7 8 from faker.generator import random 9 from faker.providers.date_time import Provider as DatetimeProvider 10 11 from .. import BaseProvider 12 13 14 class Provider(BaseProvider): 15 language_codes = ('cn', 'de', 'el', 'en', 'es', 'fr', 'it', 'pt', 'ru') 16 17 @classmethod 18 def boolean(cls, chance_of_getting_true=50): 19 return random.randint(1, 100) <= chance_of_getting_true 20 21 @classmethod 22 def null_boolean(cls): 23 return { 24 0: None, 25 1: True, 26 -1: False 27 }[random.randint(-1, 1)] 28 29 @classmethod 30 def md5(cls, raw_output=False): 31 """ 32 Calculates the md5 hash of a given string 33 :example 'cfcd208495d565ef66e7dff9f98764da' 34 """ 35 res = hashlib.md5(str(random.random()).encode('utf-8')) 36 if raw_output: 37 return res.digest() 38 return res.hexdigest() 39 40 @classmethod 41 def sha1(cls, raw_output=False): 42 """ 43 Calculates the sha1 hash of a given string 44 :example 'b5d86317c2a144cd04d0d7c03b2b02666fafadf2' 45 """ 46 res = hashlib.sha1(str(random.random()).encode('utf-8')) 47 if raw_output: 48 return res.digest() 49 return res.hexdigest() 50 51 @classmethod 52 def sha256(cls, raw_output=False): 53 """ 54 Calculates the sha256 hash of a given string 55 :example '85086017559ccc40638fcde2fecaf295e0de7ca51b7517b6aebeaaf75b4d4654' 56 """ 57 res = hashlib.sha256(str(random.random()).encode('utf-8')) 58 if raw_output: 59 return res.digest() 60 return res.hexdigest() 61 62 def locale(self): 63 return self.language_code() + '_' + self.country_code() 64 65 @classmethod 66 def country_code(cls): 67 return cls.random_element(DatetimeProvider.countries)['code'] 68 69 @classmethod 70 def language_code(cls): 71 return cls.random_element(cls.language_codes) 72 73 @classmethod 74 def uuid4(cls): 75 """ 76 Generates a random UUID4 string. 77 """ 78 return str(uuid.uuid4()) 79 80 @classmethod 81 def password(cls, length=10, special_chars=True, digits=True, upper_case=True, lower_case=True): 82 """ 83 Generates a random password. 84 @param length: Integer. Length of a password 85 @param special_chars: Boolean. Whether to use special characters !@#$%^&*()_+ 86 @param digits: Boolean. Whether to use digits 87 @param upper_case: Boolean. Whether to use upper letters 88 @param lower_case: Boolean. Whether to use lower letters 89 @return: String. Random password 90 """ 91 chars = "" 92 if special_chars: 93 chars += "!@#$%^&*()_+" 94 if digits: 95 chars += string.digits 96 if upper_case: 97 chars += string.ascii_uppercase 98 if lower_case: 99 chars += string.ascii_lowercase 100 return ''.join(random.choice(chars) for x in range(length)) 101 [end of faker/providers/misc/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/faker/providers/misc/__init__.py b/faker/providers/misc/__init__.py --- a/faker/providers/misc/__init__.py +++ b/faker/providers/misc/__init__.py @@ -88,13 +88,33 @@ @param lower_case: Boolean. Whether to use lower letters @return: String. Random password """ - chars = "" + choices = "" + required_tokens = [] if special_chars: - chars += "!@#$%^&*()_+" + required_tokens.append(random.choice("!@#$%^&*()_+")) + choices += "!@#$%^&*()_+" if digits: - chars += string.digits + required_tokens.append(random.choice(string.digits)) + choices += string.digits if upper_case: - chars += string.ascii_uppercase + required_tokens.append(random.choice(string.ascii_uppercase)) + choices += string.ascii_uppercase if lower_case: - chars += string.ascii_lowercase - return ''.join(random.choice(chars) for x in range(length)) + required_tokens.append(random.choice(string.ascii_lowercase)) + choices += string.ascii_lowercase + + assert len(required_tokens) <= length, "Required length is shorter than required characters" + + # Generate a first version of the password + chars = [random.choice(choices) for x in range(length)] + + # Pick some unique locations + random_indexes = set() + while len(random_indexes) < len(required_tokens): + random_indexes.add(random.randint(0, len(chars) - 1)) + + # Replace them with the required characters + for i, index in enumerate(random_indexes): + chars[index] = required_tokens[i] + + return ''.join(chars)
{"golden_diff": "diff --git a/faker/providers/misc/__init__.py b/faker/providers/misc/__init__.py\n--- a/faker/providers/misc/__init__.py\n+++ b/faker/providers/misc/__init__.py\n@@ -88,13 +88,33 @@\n @param lower_case: Boolean. Whether to use lower letters\n @return: String. Random password\n \"\"\"\n- chars = \"\"\n+ choices = \"\"\n+ required_tokens = []\n if special_chars:\n- chars += \"!@#$%^&*()_+\"\n+ required_tokens.append(random.choice(\"!@#$%^&*()_+\"))\n+ choices += \"!@#$%^&*()_+\"\n if digits:\n- chars += string.digits\n+ required_tokens.append(random.choice(string.digits))\n+ choices += string.digits\n if upper_case:\n- chars += string.ascii_uppercase\n+ required_tokens.append(random.choice(string.ascii_uppercase))\n+ choices += string.ascii_uppercase\n if lower_case:\n- chars += string.ascii_lowercase\n- return ''.join(random.choice(chars) for x in range(length))\n+ required_tokens.append(random.choice(string.ascii_lowercase))\n+ choices += string.ascii_lowercase\n+\n+ assert len(required_tokens) <= length, \"Required length is shorter than required characters\"\n+\n+ # Generate a first version of the password\n+ chars = [random.choice(choices) for x in range(length)]\n+\n+ # Pick some unique locations\n+ random_indexes = set()\n+ while len(random_indexes) < len(required_tokens):\n+ random_indexes.add(random.randint(0, len(chars) - 1))\n+\n+ # Replace them with the required characters\n+ for i, index in enumerate(random_indexes):\n+ chars[index] = required_tokens[i]\n+\n+ return ''.join(chars)\n", "issue": "Param switches on faker.password() don't guarantee valid password\nThe format switches on `faker.password()` (`special_chars, digits, upper_case, lower_case`) don't always return passwords matching those rules.\n\nThis is problematic as when using generated passwords in unit tests, where passwords must conform to validity rules (e.g. \"must contain numbers\"), tests can randomly fail.\n\nI expected that these switches would guarantee the function returns a conforming password. e.g. `faker.password(digits=True)` always returns a password containing digits, but this is not the case.\n\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\nimport hashlib\nimport string\nimport uuid\n\nfrom faker.generator import random\nfrom faker.providers.date_time import Provider as DatetimeProvider\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n language_codes = ('cn', 'de', 'el', 'en', 'es', 'fr', 'it', 'pt', 'ru')\n\n @classmethod\n def boolean(cls, chance_of_getting_true=50):\n return random.randint(1, 100) <= chance_of_getting_true\n\n @classmethod\n def null_boolean(cls):\n return {\n 0: None,\n 1: True,\n -1: False\n }[random.randint(-1, 1)]\n\n @classmethod\n def md5(cls, raw_output=False):\n \"\"\"\n Calculates the md5 hash of a given string\n :example 'cfcd208495d565ef66e7dff9f98764da'\n \"\"\"\n res = hashlib.md5(str(random.random()).encode('utf-8'))\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n @classmethod\n def sha1(cls, raw_output=False):\n \"\"\"\n Calculates the sha1 hash of a given string\n :example 'b5d86317c2a144cd04d0d7c03b2b02666fafadf2'\n \"\"\"\n res = hashlib.sha1(str(random.random()).encode('utf-8'))\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n @classmethod\n def sha256(cls, raw_output=False):\n \"\"\"\n Calculates the sha256 hash of a given string\n :example '85086017559ccc40638fcde2fecaf295e0de7ca51b7517b6aebeaaf75b4d4654'\n \"\"\"\n res = hashlib.sha256(str(random.random()).encode('utf-8'))\n if raw_output:\n return res.digest()\n return res.hexdigest()\n\n def locale(self):\n return self.language_code() + '_' + self.country_code()\n\n @classmethod\n def country_code(cls):\n return cls.random_element(DatetimeProvider.countries)['code']\n\n @classmethod\n def language_code(cls):\n return cls.random_element(cls.language_codes)\n\n @classmethod\n def uuid4(cls):\n \"\"\"\n Generates a random UUID4 string.\n \"\"\"\n return str(uuid.uuid4())\n\n @classmethod\n def password(cls, length=10, special_chars=True, digits=True, upper_case=True, lower_case=True):\n \"\"\"\n Generates a random password.\n @param length: Integer. Length of a password\n @param special_chars: Boolean. Whether to use special characters !@#$%^&*()_+\n @param digits: Boolean. Whether to use digits\n @param upper_case: Boolean. Whether to use upper letters\n @param lower_case: Boolean. Whether to use lower letters\n @return: String. Random password\n \"\"\"\n chars = \"\"\n if special_chars:\n chars += \"!@#$%^&*()_+\"\n if digits:\n chars += string.digits\n if upper_case:\n chars += string.ascii_uppercase\n if lower_case:\n chars += string.ascii_lowercase\n return ''.join(random.choice(chars) for x in range(length))\n", "path": "faker/providers/misc/__init__.py"}]}
1,623
394
gh_patches_debug_1336
rasdani/github-patches
git_diff
projectmesa__mesa-112
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> DataCollector bug Found a minor bug in DataCollector, where some variables are not initialized in the instance, and become class variables instead. Fixing. </issue> <code> [start of mesa/datacollection.py] 1 ''' 2 Mesa Data Collection Module 3 ===================================================== 4 5 DataCollector is meant to provide a simple, standard way to collect data 6 generated by a Mesa model. It collects three types of data: model-level data, 7 agent-level data, and tables. 8 9 A DataCollector is instantiated with two dictionaries of reporter names and 10 associated functions for each, one for model-level data and one for 11 agent-level data; a third dictionary provides table names and columns. 12 13 When the collect() method is called, each model-level function is called, with 14 the model as the argument, and the results associated with the relevant 15 variable. Then the agent-level functions are called on each 16 agent in the model scheduler. 17 18 Additionally, other objects can write directly to tables by passing in an 19 appropriate dictionary object for a table row. 20 21 The DataCollector then stores the data it collects in dictionaries: 22 * model_vars maps each reporter to a list of its values 23 * agent_vars maps each reporter to a list of lists, where each nested list 24 stores (agent_id, value) pairs. 25 * tables maps each table to a dictionary, with each column as a key with a 26 list as its value. 27 28 Finally, DataCollector can create a pandas DataFrame from each collection. 29 30 The default DataCollector here makes several assumptions: 31 * The model has a schedule object called 'schedule' 32 * The schedule has an agent list called agents 33 * For collecting agent-level variables, agents must have a unique_id 34 ''' 35 from collections import defaultdict 36 import pandas as pd 37 38 39 class DataCollector(object): 40 ''' 41 Class for collecting data generated by a Mesa model. 42 43 A DataCollector is instantiated with dictionaries of names of model- and 44 agent-level variables to collect, associated with functions which actually 45 collect them. When the collect(...) method is called, it executes these 46 functions one by one and stores the results. 47 ''' 48 model_reporters = {} 49 agent_reporters = {} 50 51 model_vars = {} 52 agent_vars = {} 53 tables = {} 54 55 model = None 56 57 def __init__(self, model_reporters={}, agent_reporters={}, tables={}): 58 ''' 59 Instantiate a DataCollector with lists of model and agent reporters. 60 61 Both model_reporters and agent_reporters accept a dictionary mapping a 62 variable name to a method used to collect it. 63 For example, if there was only one model-level reporter for number of 64 agents, it might look like: 65 {"agent_count": lambda m: m.schedule.get_agent_count() } 66 If there was only one agent-level reporter (e.g. the agent's energy), 67 it might look like this: 68 {"energy": lambda a: a.energy} 69 70 The tables arg accepts a dictionary mapping names of tables to lists of 71 columns. For example, if we want to allow agents to write their age 72 when they are destroyed (to keep track of lifespans), it might look 73 like: 74 {"Lifespan": ["unique_id", "age"]} 75 76 Args: 77 model_reporters: Dictionary of reporter names and functions. 78 agent_reporters: Dictionary of reporter names and functions. 79 ''' 80 81 self.model_reporters = {} 82 self.agent_reporters = {} 83 self.tables = {} 84 85 for name, func in model_reporters.items(): 86 self._new_model_reporter(name, func) 87 88 for name, func in agent_reporters.items(): 89 self._new_agent_reporter(name, func) 90 91 for name, columns in tables.items(): 92 self._new_table(name, columns) 93 94 def _new_model_reporter(self, reporter_name, reporter_function): 95 ''' 96 Add a new model-level reporter to collect. 97 Args: 98 reporter_name: Name of the model-level variable to collect. 99 reporter_function: Function object that returns the variable when 100 given a model instance. 101 ''' 102 103 self.model_reporters[reporter_name] = reporter_function 104 self.model_vars[reporter_name] = [] 105 106 def _new_agent_reporter(self, reporter_name, reporter_function): 107 ''' 108 Add a new agent-level reporter to collect. 109 Args: 110 reporter_name: Name of the agent-level variable to collect. 111 reporter_function: Function object that returns the variable when 112 given an agent object. 113 ''' 114 self.agent_reporters[reporter_name] = reporter_function 115 self.agent_vars[reporter_name] = [] 116 117 def _new_table(self, table_name, table_columns): 118 ''' 119 Add a new table that objects can write to. 120 Args: 121 table_name: Name of the new table. 122 table_columns: List of columns to add to the table. 123 ''' 124 new_table = {column: [] for column in table_columns} 125 self.tables[table_name] = new_table 126 127 def collect(self, model): 128 ''' 129 Collect all the data for the given model object. 130 ''' 131 if self.model_reporters: 132 for var, reporter in self.model_reporters.items(): 133 self.model_vars[var].append(reporter(model)) 134 135 if self.agent_reporters: 136 for var, reporter in self.agent_reporters.items(): 137 agent_records = [] 138 for agent in model.schedule.agents: 139 agent_records.append((agent.unique_id, reporter(agent))) 140 self.agent_vars[var].append(agent_records) 141 142 def add_table_row(self, table_name, row, ignore_missing=False): 143 ''' 144 Add a row dictionary to a specific table. 145 146 Args: 147 table_name: Name of the table to append a row to. 148 row: A dictionary of the form {column_name: value...} 149 ignore_missing: If True, fill any missing columns with Nones; 150 if False, throw an error if any columns are missing 151 ''' 152 if table_name not in self.tables: 153 raise Exception("Table does not exist.") 154 155 for column in self.tables[table_name]: 156 if column in row: 157 self.tables[table_name][column].append(row[column]) 158 elif ignore_missing: 159 self.tables[table_name][column].append(None) 160 else: 161 raise Exception("Could not insert row with missing column") 162 163 def get_model_vars_dataframe(self): 164 ''' 165 Create a pandas DataFrame from the model variables. 166 The DataFrame has one column for each model variable, and the index is 167 (implicitly) the model tick. 168 ''' 169 return pd.DataFrame(self.model_vars) 170 171 def get_agent_vars_dataframe(self): 172 ''' 173 Create a pandas DataFrame from the agent variables. 174 The DataFrame has one column for each variable, with two additional 175 columns for tick and agent_id. 176 ''' 177 178 data = defaultdict(dict) 179 for var, records in self.agent_vars.items(): 180 for step, entries in enumerate(records): 181 for entry in entries: 182 agent_id = entry[0] 183 val = entry[1] 184 data[(step, agent_id)][var] = val 185 df = pd.DataFrame.from_dict(data, orient="index") 186 df.index.names = ["Step", "AgentID"] 187 return df 188 189 def get_table_dataframe(self, table_name): 190 ''' 191 Create a pandas DataFrame from a particular table. 192 193 Args: 194 table_name: The name of the table to convert. 195 ''' 196 if table_name not in self.tables: 197 raise Exception("No such table.") 198 return pd.DataFrame(self.tables[table_name]) 199 [end of mesa/datacollection.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/mesa/datacollection.py b/mesa/datacollection.py --- a/mesa/datacollection.py +++ b/mesa/datacollection.py @@ -80,6 +80,9 @@ self.model_reporters = {} self.agent_reporters = {} + + self.model_vars = {} + self.agent_vars = {} self.tables = {} for name, func in model_reporters.items():
{"golden_diff": "diff --git a/mesa/datacollection.py b/mesa/datacollection.py\n--- a/mesa/datacollection.py\n+++ b/mesa/datacollection.py\n@@ -80,6 +80,9 @@\n \n self.model_reporters = {}\n self.agent_reporters = {}\n+\n+ self.model_vars = {}\n+ self.agent_vars = {}\n self.tables = {}\n \n for name, func in model_reporters.items():\n", "issue": "DataCollector bug\nFound a minor bug in DataCollector, where some variables are not initialized in the instance, and become class variables instead. Fixing.\n\n", "before_files": [{"content": "'''\nMesa Data Collection Module\n=====================================================\n\nDataCollector is meant to provide a simple, standard way to collect data\ngenerated by a Mesa model. It collects three types of data: model-level data,\nagent-level data, and tables.\n\nA DataCollector is instantiated with two dictionaries of reporter names and\nassociated functions for each, one for model-level data and one for\nagent-level data; a third dictionary provides table names and columns.\n\nWhen the collect() method is called, each model-level function is called, with\nthe model as the argument, and the results associated with the relevant\nvariable. Then the agent-level functions are called on each\nagent in the model scheduler.\n\nAdditionally, other objects can write directly to tables by passing in an\nappropriate dictionary object for a table row.\n\nThe DataCollector then stores the data it collects in dictionaries:\n * model_vars maps each reporter to a list of its values\n * agent_vars maps each reporter to a list of lists, where each nested list\n stores (agent_id, value) pairs.\n * tables maps each table to a dictionary, with each column as a key with a\n list as its value.\n\nFinally, DataCollector can create a pandas DataFrame from each collection.\n\nThe default DataCollector here makes several assumptions:\n * The model has a schedule object called 'schedule'\n * The schedule has an agent list called agents\n * For collecting agent-level variables, agents must have a unique_id\n'''\nfrom collections import defaultdict\nimport pandas as pd\n\n\nclass DataCollector(object):\n '''\n Class for collecting data generated by a Mesa model.\n\n A DataCollector is instantiated with dictionaries of names of model- and\n agent-level variables to collect, associated with functions which actually\n collect them. When the collect(...) method is called, it executes these\n functions one by one and stores the results.\n '''\n model_reporters = {}\n agent_reporters = {}\n\n model_vars = {}\n agent_vars = {}\n tables = {}\n\n model = None\n\n def __init__(self, model_reporters={}, agent_reporters={}, tables={}):\n '''\n Instantiate a DataCollector with lists of model and agent reporters.\n\n Both model_reporters and agent_reporters accept a dictionary mapping a\n variable name to a method used to collect it.\n For example, if there was only one model-level reporter for number of\n agents, it might look like:\n {\"agent_count\": lambda m: m.schedule.get_agent_count() }\n If there was only one agent-level reporter (e.g. the agent's energy),\n it might look like this:\n {\"energy\": lambda a: a.energy}\n\n The tables arg accepts a dictionary mapping names of tables to lists of\n columns. For example, if we want to allow agents to write their age\n when they are destroyed (to keep track of lifespans), it might look\n like:\n {\"Lifespan\": [\"unique_id\", \"age\"]}\n\n Args:\n model_reporters: Dictionary of reporter names and functions.\n agent_reporters: Dictionary of reporter names and functions.\n '''\n\n self.model_reporters = {}\n self.agent_reporters = {}\n self.tables = {}\n\n for name, func in model_reporters.items():\n self._new_model_reporter(name, func)\n\n for name, func in agent_reporters.items():\n self._new_agent_reporter(name, func)\n\n for name, columns in tables.items():\n self._new_table(name, columns)\n\n def _new_model_reporter(self, reporter_name, reporter_function):\n '''\n Add a new model-level reporter to collect.\n Args:\n reporter_name: Name of the model-level variable to collect.\n reporter_function: Function object that returns the variable when\n given a model instance.\n '''\n\n self.model_reporters[reporter_name] = reporter_function\n self.model_vars[reporter_name] = []\n\n def _new_agent_reporter(self, reporter_name, reporter_function):\n '''\n Add a new agent-level reporter to collect.\n Args:\n reporter_name: Name of the agent-level variable to collect.\n reporter_function: Function object that returns the variable when\n given an agent object.\n '''\n self.agent_reporters[reporter_name] = reporter_function\n self.agent_vars[reporter_name] = []\n\n def _new_table(self, table_name, table_columns):\n '''\n Add a new table that objects can write to.\n Args:\n table_name: Name of the new table.\n table_columns: List of columns to add to the table.\n '''\n new_table = {column: [] for column in table_columns}\n self.tables[table_name] = new_table\n\n def collect(self, model):\n '''\n Collect all the data for the given model object.\n '''\n if self.model_reporters:\n for var, reporter in self.model_reporters.items():\n self.model_vars[var].append(reporter(model))\n\n if self.agent_reporters:\n for var, reporter in self.agent_reporters.items():\n agent_records = []\n for agent in model.schedule.agents:\n agent_records.append((agent.unique_id, reporter(agent)))\n self.agent_vars[var].append(agent_records)\n\n def add_table_row(self, table_name, row, ignore_missing=False):\n '''\n Add a row dictionary to a specific table.\n\n Args:\n table_name: Name of the table to append a row to.\n row: A dictionary of the form {column_name: value...}\n ignore_missing: If True, fill any missing columns with Nones;\n if False, throw an error if any columns are missing\n '''\n if table_name not in self.tables:\n raise Exception(\"Table does not exist.\")\n\n for column in self.tables[table_name]:\n if column in row:\n self.tables[table_name][column].append(row[column])\n elif ignore_missing:\n self.tables[table_name][column].append(None)\n else:\n raise Exception(\"Could not insert row with missing column\")\n\n def get_model_vars_dataframe(self):\n '''\n Create a pandas DataFrame from the model variables.\n The DataFrame has one column for each model variable, and the index is\n (implicitly) the model tick.\n '''\n return pd.DataFrame(self.model_vars)\n\n def get_agent_vars_dataframe(self):\n '''\n Create a pandas DataFrame from the agent variables.\n The DataFrame has one column for each variable, with two additional\n columns for tick and agent_id.\n '''\n\n data = defaultdict(dict)\n for var, records in self.agent_vars.items():\n for step, entries in enumerate(records):\n for entry in entries:\n agent_id = entry[0]\n val = entry[1]\n data[(step, agent_id)][var] = val\n df = pd.DataFrame.from_dict(data, orient=\"index\")\n df.index.names = [\"Step\", \"AgentID\"]\n return df\n\n def get_table_dataframe(self, table_name):\n '''\n Create a pandas DataFrame from a particular table.\n\n Args:\n table_name: The name of the table to convert.\n '''\n if table_name not in self.tables:\n raise Exception(\"No such table.\")\n return pd.DataFrame(self.tables[table_name])\n", "path": "mesa/datacollection.py"}]}
2,588
91
gh_patches_debug_26091
rasdani/github-patches
git_diff
readthedocs__readthedocs.org-5644
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Serve custom 404 pages from Django Currently, we are serving the `404.html` from NGINX returning a `Response` that includes the `X-Accel-Redirect` header: https://github.com/rtfd/readthedocs.org/blob/537f7b8d2e8d81f8243d1d4a562968e472bc6337/readthedocs/core/views/__init__.py#L208-L210 We found that even marking the `response.status_code = 404`, the response is 200. ``` $ curl -IL https://docs.readthedocs.io/notfound HTTP/1.1 200 OK Content-Length: 14167 Content-Type: text/html Last-Modified: Mon, 22 Apr 2019 15:30:57 GMT Accept-Ranges: bytes ETag: "5cbdde31-3757" Vary: Accept-Encoding Server: nginx X-Served: Nginx-Sendfile X-Deity: web01 Strict-Transport-Security: max-age=1800; includeSubDomains Date: Mon, 29 Apr 2019 07:59:21 GMT ``` To fix this, we have to serve this file directly from Django instead of using this header. </issue> <code> [start of readthedocs/core/views/__init__.py] 1 # -*- coding: utf-8 -*- 2 3 """ 4 Core views, including the main homepage, 5 6 documentation and header rendering, and server errors. 7 """ 8 9 import os 10 import logging 11 from urllib.parse import urlparse 12 13 from django.conf import settings 14 from django.http import HttpResponseRedirect, Http404, JsonResponse 15 from django.shortcuts import render, get_object_or_404, redirect 16 from django.views.generic import TemplateView 17 18 19 from readthedocs.builds.models import Version 20 from readthedocs.core.utils.general import wipe_version_via_slugs 21 from readthedocs.core.resolver import resolve_path 22 from readthedocs.core.symlink import PrivateSymlink, PublicSymlink 23 from readthedocs.core.views.serve import _serve_file 24 from readthedocs.projects.constants import PRIVATE 25 from readthedocs.projects.models import HTMLFile, Project 26 from readthedocs.redirects.utils import ( 27 get_redirect_response, 28 project_and_path_from_request, 29 language_and_version_from_path 30 ) 31 32 log = logging.getLogger(__name__) 33 34 35 class NoProjectException(Exception): 36 pass 37 38 39 class HomepageView(TemplateView): 40 41 template_name = 'homepage.html' 42 43 def get_context_data(self, **kwargs): 44 """Add latest builds and featured projects.""" 45 context = super().get_context_data(**kwargs) 46 context['featured_list'] = Project.objects.filter(featured=True) 47 context['projects_count'] = Project.objects.count() 48 return context 49 50 51 class SupportView(TemplateView): 52 template_name = 'support.html' 53 54 def get_context_data(self, **kwargs): 55 context = super().get_context_data(**kwargs) 56 support_email = settings.SUPPORT_EMAIL 57 if not support_email: 58 support_email = 'support@{domain}'.format( 59 domain=settings.PRODUCTION_DOMAIN 60 ) 61 62 context['support_email'] = support_email 63 return context 64 65 66 def random_page(request, project_slug=None): # pylint: disable=unused-argument 67 html_file = HTMLFile.objects.order_by('?') 68 if project_slug: 69 html_file = html_file.filter(project__slug=project_slug) 70 html_file = html_file.first() 71 if html_file is None: 72 raise Http404 73 url = html_file.get_absolute_url() 74 return HttpResponseRedirect(url) 75 76 77 def wipe_version(request, project_slug, version_slug): 78 version = get_object_or_404( 79 Version, 80 project__slug=project_slug, 81 slug=version_slug, 82 ) 83 # We need to check by ``for_admin_user`` here to allow members of the 84 # ``Admin`` team (which doesn't own the project) under the corporate site. 85 if version.project not in Project.objects.for_admin_user(user=request.user): 86 raise Http404('You must own this project to wipe it.') 87 88 if request.method == 'POST': 89 wipe_version_via_slugs( 90 version_slug=version_slug, 91 project_slug=project_slug 92 ) 93 return redirect('project_version_list', project_slug) 94 return render( 95 request, 96 'wipe_version.html', 97 {'version': version, 'project': version.project}, 98 ) 99 100 101 def server_error_500(request, template_name='500.html'): 102 """A simple 500 handler so we get media.""" 103 r = render(request, template_name) 104 r.status_code = 500 105 return r 106 107 108 def server_error_404(request, exception=None, template_name='404.html'): # pylint: disable=unused-argument # noqa 109 """ 110 A simple 404 handler so we get media. 111 112 .. note:: 113 114 Marking exception as optional to make /404/ testing page to work. 115 """ 116 response = get_redirect_response(request, full_path=request.get_full_path()) 117 118 # Return a redirect response if there is one 119 if response: 120 if response.url == request.build_absolute_uri(): 121 # check that we do have a response and avoid infinite redirect 122 log.warning( 123 'Infinite Redirect: FROM URL is the same than TO URL. url=%s', 124 response.url, 125 ) 126 else: 127 return response 128 129 # Try to serve custom 404 pages if it's a subdomain/cname 130 if getattr(request, 'subdomain', False) or getattr(request, 'cname', False): 131 return server_error_404_subdomain(request, template_name) 132 133 # Return the default 404 page generated by Read the Docs 134 r = render(request, template_name) 135 r.status_code = 404 136 return r 137 138 139 def server_error_404_subdomain(request, template_name='404.html'): 140 """ 141 Handler for 404 pages on subdomains. 142 143 Check if the project associated has a custom ``404.html`` and serve this 144 page. First search for a 404 page in the current version, then continues 145 with the default version and finally, if none of them are found, the Read 146 the Docs default page (Maze Found) is rendered by Django and served. 147 """ 148 149 def resolve_404_path(project, version_slug=None, language=None): 150 """ 151 Helper to resolve the path of ``404.html`` for project. 152 153 The resolution is based on ``project`` object, version slug and 154 language. 155 156 :returns: tuple containing the (basepath, filename) 157 :rtype: tuple 158 """ 159 filename = resolve_path( 160 project, 161 version_slug=version_slug, 162 language=language, 163 filename='404.html', 164 subdomain=True, # subdomain will make it a "full" path without a URL prefix 165 ) 166 167 # This breaks path joining, by ignoring the root when given an "absolute" path 168 if filename[0] == '/': 169 filename = filename[1:] 170 171 version = None 172 if version_slug: 173 version_qs = project.versions.filter(slug=version_slug) 174 if version_qs.exists(): 175 version = version_qs.first() 176 177 private = any([ 178 version and version.privacy_level == PRIVATE, 179 not version and project.privacy_level == PRIVATE, 180 ]) 181 if private: 182 symlink = PrivateSymlink(project) 183 else: 184 symlink = PublicSymlink(project) 185 basepath = symlink.project_root 186 fullpath = os.path.join(basepath, filename) 187 return (basepath, filename, fullpath) 188 189 project, full_path = project_and_path_from_request(request, request.get_full_path()) 190 191 if project: 192 language = None 193 version_slug = None 194 schema, netloc, path, params, query, fragments = urlparse(full_path) 195 if not project.single_version: 196 language, version_slug, path = language_and_version_from_path(path) 197 198 # Firstly, attempt to serve the 404 of the current version (version_slug) 199 # Secondly, try to serve the 404 page for the default version (project.get_default_version()) 200 for slug in (version_slug, project.get_default_version()): 201 basepath, filename, fullpath = resolve_404_path(project, slug, language) 202 if os.path.exists(fullpath): 203 log.debug( 204 'serving 404.html page current version: [project: %s] [version: %s]', 205 project.slug, 206 slug, 207 ) 208 r = _serve_file(request, filename, basepath) 209 r.status_code = 404 210 return r 211 212 # Finally, return the default 404 page generated by Read the Docs 213 r = render(request, template_name) 214 r.status_code = 404 215 return r 216 217 218 def do_not_track(request): 219 dnt_header = request.META.get('HTTP_DNT') 220 221 # https://w3c.github.io/dnt/drafts/tracking-dnt.html#status-representation 222 return JsonResponse( # pylint: disable=redundant-content-type-for-json-response 223 { 224 'policy': 'https://docs.readthedocs.io/en/latest/privacy-policy.html', 225 'same-party': [ 226 'readthedocs.org', 227 'readthedocs.com', 228 'readthedocs.io', # .org Documentation Sites 229 'readthedocs-hosted.com', # .com Documentation Sites 230 ], 231 'tracking': 'N' if dnt_header == '1' else 'T', 232 }, content_type='application/tracking-status+json', 233 ) 234 [end of readthedocs/core/views/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/readthedocs/core/views/__init__.py b/readthedocs/core/views/__init__.py --- a/readthedocs/core/views/__init__.py +++ b/readthedocs/core/views/__init__.py @@ -14,13 +14,12 @@ from django.http import HttpResponseRedirect, Http404, JsonResponse from django.shortcuts import render, get_object_or_404, redirect from django.views.generic import TemplateView - +from django.views.static import serve as static_serve from readthedocs.builds.models import Version from readthedocs.core.utils.general import wipe_version_via_slugs from readthedocs.core.resolver import resolve_path from readthedocs.core.symlink import PrivateSymlink, PublicSymlink -from readthedocs.core.views.serve import _serve_file from readthedocs.projects.constants import PRIVATE from readthedocs.projects.models import HTMLFile, Project from readthedocs.redirects.utils import ( @@ -205,7 +204,7 @@ project.slug, slug, ) - r = _serve_file(request, filename, basepath) + r = static_serve(request, filename, basepath) r.status_code = 404 return r
{"golden_diff": "diff --git a/readthedocs/core/views/__init__.py b/readthedocs/core/views/__init__.py\n--- a/readthedocs/core/views/__init__.py\n+++ b/readthedocs/core/views/__init__.py\n@@ -14,13 +14,12 @@\n from django.http import HttpResponseRedirect, Http404, JsonResponse\n from django.shortcuts import render, get_object_or_404, redirect\n from django.views.generic import TemplateView\n-\n+from django.views.static import serve as static_serve\n \n from readthedocs.builds.models import Version\n from readthedocs.core.utils.general import wipe_version_via_slugs\n from readthedocs.core.resolver import resolve_path\n from readthedocs.core.symlink import PrivateSymlink, PublicSymlink\n-from readthedocs.core.views.serve import _serve_file\n from readthedocs.projects.constants import PRIVATE\n from readthedocs.projects.models import HTMLFile, Project\n from readthedocs.redirects.utils import (\n@@ -205,7 +204,7 @@\n project.slug,\n slug,\n )\n- r = _serve_file(request, filename, basepath)\n+ r = static_serve(request, filename, basepath)\n r.status_code = 404\n return r\n", "issue": "Serve custom 404 pages from Django\nCurrently, we are serving the `404.html` from NGINX returning a `Response` that includes the `X-Accel-Redirect` header:\r\n\r\nhttps://github.com/rtfd/readthedocs.org/blob/537f7b8d2e8d81f8243d1d4a562968e472bc6337/readthedocs/core/views/__init__.py#L208-L210\r\n\r\nWe found that even marking the `response.status_code = 404`, the response is 200.\r\n\r\n```\r\n$ curl -IL https://docs.readthedocs.io/notfound\r\nHTTP/1.1 200 OK\r\nContent-Length: 14167\r\nContent-Type: text/html\r\nLast-Modified: Mon, 22 Apr 2019 15:30:57 GMT\r\nAccept-Ranges: bytes\r\nETag: \"5cbdde31-3757\"\r\nVary: Accept-Encoding\r\nServer: nginx\r\nX-Served: Nginx-Sendfile\r\nX-Deity: web01\r\nStrict-Transport-Security: max-age=1800; includeSubDomains\r\nDate: Mon, 29 Apr 2019 07:59:21 GMT\r\n```\r\n\r\nTo fix this, we have to serve this file directly from Django instead of using this header.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nCore views, including the main homepage,\n\ndocumentation and header rendering, and server errors.\n\"\"\"\n\nimport os\nimport logging\nfrom urllib.parse import urlparse\n\nfrom django.conf import settings\nfrom django.http import HttpResponseRedirect, Http404, JsonResponse\nfrom django.shortcuts import render, get_object_or_404, redirect\nfrom django.views.generic import TemplateView\n\n\nfrom readthedocs.builds.models import Version\nfrom readthedocs.core.utils.general import wipe_version_via_slugs\nfrom readthedocs.core.resolver import resolve_path\nfrom readthedocs.core.symlink import PrivateSymlink, PublicSymlink\nfrom readthedocs.core.views.serve import _serve_file\nfrom readthedocs.projects.constants import PRIVATE\nfrom readthedocs.projects.models import HTMLFile, Project\nfrom readthedocs.redirects.utils import (\n get_redirect_response,\n project_and_path_from_request,\n language_and_version_from_path\n)\n\nlog = logging.getLogger(__name__)\n\n\nclass NoProjectException(Exception):\n pass\n\n\nclass HomepageView(TemplateView):\n\n template_name = 'homepage.html'\n\n def get_context_data(self, **kwargs):\n \"\"\"Add latest builds and featured projects.\"\"\"\n context = super().get_context_data(**kwargs)\n context['featured_list'] = Project.objects.filter(featured=True)\n context['projects_count'] = Project.objects.count()\n return context\n\n\nclass SupportView(TemplateView):\n template_name = 'support.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n support_email = settings.SUPPORT_EMAIL\n if not support_email:\n support_email = 'support@{domain}'.format(\n domain=settings.PRODUCTION_DOMAIN\n )\n\n context['support_email'] = support_email\n return context\n\n\ndef random_page(request, project_slug=None): # pylint: disable=unused-argument\n html_file = HTMLFile.objects.order_by('?')\n if project_slug:\n html_file = html_file.filter(project__slug=project_slug)\n html_file = html_file.first()\n if html_file is None:\n raise Http404\n url = html_file.get_absolute_url()\n return HttpResponseRedirect(url)\n\n\ndef wipe_version(request, project_slug, version_slug):\n version = get_object_or_404(\n Version,\n project__slug=project_slug,\n slug=version_slug,\n )\n # We need to check by ``for_admin_user`` here to allow members of the\n # ``Admin`` team (which doesn't own the project) under the corporate site.\n if version.project not in Project.objects.for_admin_user(user=request.user):\n raise Http404('You must own this project to wipe it.')\n\n if request.method == 'POST':\n wipe_version_via_slugs(\n version_slug=version_slug,\n project_slug=project_slug\n )\n return redirect('project_version_list', project_slug)\n return render(\n request,\n 'wipe_version.html',\n {'version': version, 'project': version.project},\n )\n\n\ndef server_error_500(request, template_name='500.html'):\n \"\"\"A simple 500 handler so we get media.\"\"\"\n r = render(request, template_name)\n r.status_code = 500\n return r\n\n\ndef server_error_404(request, exception=None, template_name='404.html'): # pylint: disable=unused-argument # noqa\n \"\"\"\n A simple 404 handler so we get media.\n\n .. note::\n\n Marking exception as optional to make /404/ testing page to work.\n \"\"\"\n response = get_redirect_response(request, full_path=request.get_full_path())\n\n # Return a redirect response if there is one\n if response:\n if response.url == request.build_absolute_uri():\n # check that we do have a response and avoid infinite redirect\n log.warning(\n 'Infinite Redirect: FROM URL is the same than TO URL. url=%s',\n response.url,\n )\n else:\n return response\n\n # Try to serve custom 404 pages if it's a subdomain/cname\n if getattr(request, 'subdomain', False) or getattr(request, 'cname', False):\n return server_error_404_subdomain(request, template_name)\n\n # Return the default 404 page generated by Read the Docs\n r = render(request, template_name)\n r.status_code = 404\n return r\n\n\ndef server_error_404_subdomain(request, template_name='404.html'):\n \"\"\"\n Handler for 404 pages on subdomains.\n\n Check if the project associated has a custom ``404.html`` and serve this\n page. First search for a 404 page in the current version, then continues\n with the default version and finally, if none of them are found, the Read\n the Docs default page (Maze Found) is rendered by Django and served.\n \"\"\"\n\n def resolve_404_path(project, version_slug=None, language=None):\n \"\"\"\n Helper to resolve the path of ``404.html`` for project.\n\n The resolution is based on ``project`` object, version slug and\n language.\n\n :returns: tuple containing the (basepath, filename)\n :rtype: tuple\n \"\"\"\n filename = resolve_path(\n project,\n version_slug=version_slug,\n language=language,\n filename='404.html',\n subdomain=True, # subdomain will make it a \"full\" path without a URL prefix\n )\n\n # This breaks path joining, by ignoring the root when given an \"absolute\" path\n if filename[0] == '/':\n filename = filename[1:]\n\n version = None\n if version_slug:\n version_qs = project.versions.filter(slug=version_slug)\n if version_qs.exists():\n version = version_qs.first()\n\n private = any([\n version and version.privacy_level == PRIVATE,\n not version and project.privacy_level == PRIVATE,\n ])\n if private:\n symlink = PrivateSymlink(project)\n else:\n symlink = PublicSymlink(project)\n basepath = symlink.project_root\n fullpath = os.path.join(basepath, filename)\n return (basepath, filename, fullpath)\n\n project, full_path = project_and_path_from_request(request, request.get_full_path())\n\n if project:\n language = None\n version_slug = None\n schema, netloc, path, params, query, fragments = urlparse(full_path)\n if not project.single_version:\n language, version_slug, path = language_and_version_from_path(path)\n\n # Firstly, attempt to serve the 404 of the current version (version_slug)\n # Secondly, try to serve the 404 page for the default version (project.get_default_version())\n for slug in (version_slug, project.get_default_version()):\n basepath, filename, fullpath = resolve_404_path(project, slug, language)\n if os.path.exists(fullpath):\n log.debug(\n 'serving 404.html page current version: [project: %s] [version: %s]',\n project.slug,\n slug,\n )\n r = _serve_file(request, filename, basepath)\n r.status_code = 404\n return r\n\n # Finally, return the default 404 page generated by Read the Docs\n r = render(request, template_name)\n r.status_code = 404\n return r\n\n\ndef do_not_track(request):\n dnt_header = request.META.get('HTTP_DNT')\n\n # https://w3c.github.io/dnt/drafts/tracking-dnt.html#status-representation\n return JsonResponse( # pylint: disable=redundant-content-type-for-json-response\n {\n 'policy': 'https://docs.readthedocs.io/en/latest/privacy-policy.html',\n 'same-party': [\n 'readthedocs.org',\n 'readthedocs.com',\n 'readthedocs.io', # .org Documentation Sites\n 'readthedocs-hosted.com', # .com Documentation Sites\n ],\n 'tracking': 'N' if dnt_header == '1' else 'T',\n }, content_type='application/tracking-status+json',\n )\n", "path": "readthedocs/core/views/__init__.py"}]}
3,274
271
gh_patches_debug_9023
rasdani/github-patches
git_diff
ESMCI__cime-2482
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Unsupported testlist versions should trigger an error #2316 (included in [cime5.4.0-alpha.25](https://github.com/ESMCI/cime/releases/tag/cime5.4.0-alpha.25)) dropped support for v1 of the testlist XML file, but trying to run a test suite from a v1.0 testlist file results in no tests being run instead of a traceable error. For example, POP's test list in both `cesm2_0_alpha10b` and `cesm2_0_alpha10c` is still v1.0, and running `aux_pop` on cheyenne results in: ``` $ qcmd -- ./create_test --parallel-jobs 4 --proc-pool 36 --xml-machine cheyenne --xml-compiler intel --xml-category aux_pop --queue economy --walltime 2:00 --project P93300070 --test-root /glade/scratch/mlevy/tests/empty_testlist Submitting command to PBS using account P93300070: ./create_test --parallel-jobs 4 --proc-pool 36 --xml-machine cheyenne --xml-compiler intel --xml-category aux_pop --queue economy --walltime 2:00 --project P93300070 --test-root /glade/scratch/mlevy/tests/empty_testlist Waiting for job 7572427.chadmin1 to start ... Testnames: [] RUNNING TESTS: Due to presence of batch system, create_test will exit before tests are complete. To force create_test to wait for full completion, use --wait At test-scheduler close, state is: test-scheduler took 0.01305103302 seconds ``` Even though there are 35 tests defined in the no-longer-supported XML format. </issue> <code> [start of scripts/lib/CIME/XML/testlist.py] 1 """ 2 Interface to the config_files.xml file. This class inherits from generic_xml.py 3 It supports version 2.0 of the testlist.xml file 4 5 In version 2 of the file options can be specified to further refine a test or 6 set of tests. They can be specified either at the top level, in which case they 7 apply to all machines/compilers for this test: 8 9 <test ...> 10 <options> 11 <option name="wallclock">00:20</option> 12 </options> 13 ... 14 </test> 15 16 or at the level of a particular machine/compiler: 17 18 <test ...> 19 <machines> 20 <machine ...> 21 <options> 22 <option name="wallclock">00:20</option> 23 </options> 24 </machine> 25 </machines> 26 </test> 27 28 Currently supported options are: 29 30 - walltime: sets the wallclock limit in the queuing system 31 32 - memleak_tolerance: specifies the relative memory growth expected for this test 33 34 - comment: has no effect, but is written out when printing the test list 35 36 """ 37 from CIME.XML.standard_module_setup import * 38 39 from CIME.XML.generic_xml import GenericXML 40 from CIME.XML.files import Files 41 42 logger = logging.getLogger(__name__) 43 44 class Testlist(GenericXML): 45 46 def __init__(self,infile, files=None): 47 """ 48 initialize an object 49 """ 50 schema = None 51 if files is None: 52 files = Files() 53 schema = files.get_schema("TESTS_SPEC_FILE") 54 GenericXML.__init__(self, infile, schema=schema) 55 56 def get_tests(self, machine=None, category=None, compiler=None, compset=None, grid=None, supported_only=False): 57 tests = [] 58 attributes = {} 59 if compset is not None: 60 attributes['compset'] = compset 61 if grid is not None: 62 attributes['grid'] = grid 63 64 testnodes = self.get_children("test", attributes=attributes) 65 66 machatts = {} 67 if machine is not None: 68 machatts["name"] = machine 69 if category is not None: 70 machatts["category"] = category 71 if compiler is not None: 72 machatts["compiler"] = compiler 73 74 75 for tnode in testnodes: 76 if supported_only and self.has(tnode, "supported") and self.get(tnode, "supported") == 'false': 77 continue 78 79 machnode = self.get_optional_child("machines", root=tnode) 80 machnodes = None if machnode is None else self.get_children("machine",machatts,root=machnode) 81 if machnodes: 82 this_test_node = {} 83 for key, value in self.attrib(tnode).items(): 84 if key == "name": 85 this_test_node["testname"] = value 86 else: 87 this_test_node[key] = value 88 89 90 91 # Get options that apply to all machines/compilers for this test 92 options = self.get_children("options", root=tnode) 93 if len(options) > 0: 94 optionnodes = self.get_children("option", root=options[0]) 95 else: 96 optionnodes = [] 97 for mach in machnodes: 98 # this_test_node can include multiple tests 99 this_test = dict(this_test_node) 100 for key, value in self.attrib(mach).items(): 101 if key == "name": 102 this_test["machine"] = value 103 else: 104 this_test[key] = value 105 this_test["options"] = {} 106 107 for onode in optionnodes: 108 this_test['options'][self.get(onode, 'name')] = self.text(onode) 109 110 # Now get options specific to this machine/compiler 111 options = self.get_optional_child("options", root=mach) 112 optionnodes = [] if options is None else self.get_children("option", root=options) 113 for onode in optionnodes: 114 this_test['options'][self.get(onode, 'name')] = self.text(onode) 115 116 tests.append(this_test) 117 118 return tests 119 [end of scripts/lib/CIME/XML/testlist.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/scripts/lib/CIME/XML/testlist.py b/scripts/lib/CIME/XML/testlist.py --- a/scripts/lib/CIME/XML/testlist.py +++ b/scripts/lib/CIME/XML/testlist.py @@ -52,6 +52,8 @@ files = Files() schema = files.get_schema("TESTS_SPEC_FILE") GenericXML.__init__(self, infile, schema=schema) + expect(self.get_version() >= 2.0, + "{} is an unsupported version of the testfile format and will be ignored".format(infile)) def get_tests(self, machine=None, category=None, compiler=None, compset=None, grid=None, supported_only=False): tests = []
{"golden_diff": "diff --git a/scripts/lib/CIME/XML/testlist.py b/scripts/lib/CIME/XML/testlist.py\n--- a/scripts/lib/CIME/XML/testlist.py\n+++ b/scripts/lib/CIME/XML/testlist.py\n@@ -52,6 +52,8 @@\n files = Files()\n schema = files.get_schema(\"TESTS_SPEC_FILE\")\n GenericXML.__init__(self, infile, schema=schema)\n+ expect(self.get_version() >= 2.0,\n+ \"{} is an unsupported version of the testfile format and will be ignored\".format(infile))\n \n def get_tests(self, machine=None, category=None, compiler=None, compset=None, grid=None, supported_only=False):\n tests = []\n", "issue": "Unsupported testlist versions should trigger an error\n#2316 (included in [cime5.4.0-alpha.25](https://github.com/ESMCI/cime/releases/tag/cime5.4.0-alpha.25)) dropped support for v1 of the testlist XML file, but trying to run a test suite from a v1.0 testlist file results in no tests being run instead of a traceable error.\r\n\r\nFor example, POP's test list in both `cesm2_0_alpha10b` and `cesm2_0_alpha10c` is still v1.0, and running `aux_pop` on cheyenne results in:\r\n\r\n```\r\n$ qcmd -- ./create_test --parallel-jobs 4 --proc-pool 36 --xml-machine cheyenne --xml-compiler intel --xml-category aux_pop --queue economy --walltime 2:00 --project P93300070 --test-root /glade/scratch/mlevy/tests/empty_testlist\r\nSubmitting command to PBS using account P93300070:\r\n ./create_test --parallel-jobs 4 --proc-pool 36 --xml-machine cheyenne --xml-compiler intel --xml-category aux_pop --queue economy --walltime 2:00 --project P93300070 --test-root /glade/scratch/mlevy/tests/empty_testlist\r\n\r\nWaiting for job 7572427.chadmin1 to start ...\r\n\r\n\r\nTestnames: []\r\nRUNNING TESTS:\r\nDue to presence of batch system, create_test will exit before tests are complete.\r\nTo force create_test to wait for full completion, use --wait\r\nAt test-scheduler close, state is:\r\ntest-scheduler took 0.01305103302 seconds\r\n```\r\n\r\nEven though there are 35 tests defined in the no-longer-supported XML format.\n", "before_files": [{"content": "\"\"\"\nInterface to the config_files.xml file. This class inherits from generic_xml.py\nIt supports version 2.0 of the testlist.xml file\n\nIn version 2 of the file options can be specified to further refine a test or\nset of tests. They can be specified either at the top level, in which case they\napply to all machines/compilers for this test:\n\n<test ...>\n <options>\n <option name=\"wallclock\">00:20</option>\n </options>\n ...\n</test>\n\nor at the level of a particular machine/compiler:\n\n<test ...>\n <machines>\n <machine ...>\n <options>\n <option name=\"wallclock\">00:20</option>\n </options>\n </machine>\n </machines>\n</test>\n\nCurrently supported options are:\n\n- walltime: sets the wallclock limit in the queuing system\n\n- memleak_tolerance: specifies the relative memory growth expected for this test\n\n- comment: has no effect, but is written out when printing the test list\n\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.XML.generic_xml import GenericXML\nfrom CIME.XML.files import Files\n\nlogger = logging.getLogger(__name__)\n\nclass Testlist(GenericXML):\n\n def __init__(self,infile, files=None):\n \"\"\"\n initialize an object\n \"\"\"\n schema = None\n if files is None:\n files = Files()\n schema = files.get_schema(\"TESTS_SPEC_FILE\")\n GenericXML.__init__(self, infile, schema=schema)\n\n def get_tests(self, machine=None, category=None, compiler=None, compset=None, grid=None, supported_only=False):\n tests = []\n attributes = {}\n if compset is not None:\n attributes['compset'] = compset\n if grid is not None:\n attributes['grid'] = grid\n\n testnodes = self.get_children(\"test\", attributes=attributes)\n\n machatts = {}\n if machine is not None:\n machatts[\"name\"] = machine\n if category is not None:\n machatts[\"category\"] = category\n if compiler is not None:\n machatts[\"compiler\"] = compiler\n\n\n for tnode in testnodes:\n if supported_only and self.has(tnode, \"supported\") and self.get(tnode, \"supported\") == 'false':\n continue\n\n machnode = self.get_optional_child(\"machines\", root=tnode)\n machnodes = None if machnode is None else self.get_children(\"machine\",machatts,root=machnode)\n if machnodes:\n this_test_node = {}\n for key, value in self.attrib(tnode).items():\n if key == \"name\":\n this_test_node[\"testname\"] = value\n else:\n this_test_node[key] = value\n\n\n\n # Get options that apply to all machines/compilers for this test\n options = self.get_children(\"options\", root=tnode)\n if len(options) > 0:\n optionnodes = self.get_children(\"option\", root=options[0])\n else:\n optionnodes = []\n for mach in machnodes:\n # this_test_node can include multiple tests\n this_test = dict(this_test_node)\n for key, value in self.attrib(mach).items():\n if key == \"name\":\n this_test[\"machine\"] = value\n else:\n this_test[key] = value\n this_test[\"options\"] = {}\n\n for onode in optionnodes:\n this_test['options'][self.get(onode, 'name')] = self.text(onode)\n\n # Now get options specific to this machine/compiler\n options = self.get_optional_child(\"options\", root=mach)\n optionnodes = [] if options is None else self.get_children(\"option\", root=options)\n for onode in optionnodes:\n this_test['options'][self.get(onode, 'name')] = self.text(onode)\n\n tests.append(this_test)\n\n return tests\n", "path": "scripts/lib/CIME/XML/testlist.py"}]}
2,081
151
gh_patches_debug_56553
rasdani/github-patches
git_diff
pytorch__torchdynamo-394
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> isinstance test on dtype causes graph break ``` diff --git a/tests/test_repros.py b/tests/test_repros.py index 4d590f7..3ede478 100755 --- a/tests/test_repros.py +++ b/tests/test_repros.py @@ -1246,3 +1246,11 @@ class ReproTests(torchdynamo.testing.TestCase): self.assertTrue(same(ref0, res0)) self.assertTrue(same(ref1, res1)) + + def test_isinstance_dtype(self): + @torchdynamo.optimize("eager", nopython=True) + def fn(x): + isinstance(torch.bfloat16, torch.dtype) + return x + + fn(torch.randn(3)) ``` you get ``` Traceback (most recent call last): File "/raid/ezyang/torchdynamo/torchdynamo/convert_frame.py", line 278, in _convert_frame_assert code = transform_code_object(frame.f_code, transform) File "/raid/ezyang/torchdynamo/torchdynamo/bytecode_transformation.py", line 338, in transform_code_object transformations(instructions, code_options) File "/raid/ezyang/torchdynamo/torchdynamo/convert_frame.py", line 254, in transform tracer.run() File "/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py", line 306, in run and self.step() File "/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py", line 284, in step getattr(self, inst.opname)(inst) File "/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py", line 145, in wrapper return inner_fn(self, inst) File "/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py", line 619, in CALL_FUNCTION self.call_function(fn, args, {}) File "/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py", line 220, in call_function self.push(fn.call_function(self, args, kwargs)) File "/raid/ezyang/torchdynamo/torchdynamo/variables/builtin.py", line 220, in call_function result = handler(tx, *args, **kwargs) File "/raid/ezyang/torchdynamo/torchdynamo/variables/builtin.py", line 354, in call_isinstance arg_type = arg.python_type() File "/raid/ezyang/torchdynamo/torchdynamo/variables/torch.py", line 67, in python_type return super().python_type() File "/raid/ezyang/torchdynamo/torchdynamo/variables/base.py", line 137, in python_type raise NotImplementedError(f"{self} has no type") NotImplementedError: TorchVariable() has no type ``` </issue> <code> [start of torchdynamo/allowed_functions.py] 1 import builtins 2 import collections 3 import copy 4 import functools 5 import inspect 6 import itertools 7 import math 8 import operator 9 import types 10 import warnings 11 from typing import Dict 12 from typing import Optional 13 from typing import Set 14 15 import numpy 16 import torch 17 18 from . import config 19 from .utils import is_safe_constant 20 21 22 def make_function_id_set(lazy_initializer): 23 """ 24 Track a set of `id()`s of objects which are either allowed or not 25 allowed to go into the generated FX graph. Use to test for torch.*, 26 numpy.*, builtins.*, etc. 27 28 Support user modification to permit customization of what can be 29 added to the graph and what will cause a graph break. 30 """ 31 32 class FunctionIdSet: 33 function_ids: Optional[Set[int]] = None 34 function_names: Optional[Dict[int, str]] = None 35 36 def __call__(self): 37 if self.function_ids is None: 38 value = lazy_initializer() 39 if isinstance(value, dict): 40 self.function_ids = set(value.keys()) 41 self.function_names = value 42 else: 43 assert isinstance(value, set) 44 self.function_ids = value 45 return self.function_ids 46 47 def get_name(self, idx: int, default: str): 48 self() # lazy init 49 return self.function_names.get(idx, default) 50 51 def add(self, idx: int): 52 self() # lazy init 53 self.function_ids.add(idx) 54 55 def remove(self, idx: int): 56 if idx in self(): 57 self.function_ids.remove(idx) 58 59 def __contains__(self, idx: int): 60 return idx in self() 61 62 return FunctionIdSet() 63 64 65 @make_function_id_set 66 def _disallowed_function_ids(): 67 remove = [ 68 True, 69 False, 70 None, 71 collections.OrderedDict, 72 copy.copy, 73 copy.deepcopy, 74 inspect.signature, 75 math.__package__, 76 torch.__builtins__, 77 torch.autocast_decrement_nesting, 78 torch.autocast_increment_nesting, 79 torch.autograd.grad, 80 torch.clear_autocast_cache, 81 torch.cuda.current_device, 82 torch.distributions.constraints.is_dependent, 83 torch.distributions.normal.Normal, 84 torch.inference_mode, 85 torch.set_anomaly_enabled, 86 torch.set_autocast_cache_enabled, 87 torch.set_autocast_cpu_dtype, 88 torch.set_autocast_cpu_enabled, 89 torch.set_autocast_enabled, 90 torch.set_autocast_gpu_dtype, 91 torch.autograd.profiler.profile, 92 warnings.warn, 93 ] 94 return {id(x) for x in remove} 95 96 97 @make_function_id_set 98 def _allowed_function_ids(): 99 """ 100 Walk torch.* and get the ids of all the stuff in it 101 """ 102 warnings.filterwarnings("ignore", category=UserWarning, module="torch.distributed") 103 torch_object_ids = dict() 104 105 def _is_allowed_module_prefix(obj): 106 allowed_modules = ("torch", "math") 107 allowed_modules_dot = tuple([x + "." for x in allowed_modules]) 108 module = inspect.getmodule(obj) 109 if module is None: 110 return False 111 112 mod_name = module.__name__ 113 return mod_name in allowed_modules or mod_name.startswith(allowed_modules_dot) 114 115 def _find_torch_objects(module): 116 if any( 117 module.__name__.startswith(mod_name) 118 for mod_name in config.allowed_functions_module_string_ignorelist 119 ): 120 return 121 torch_object_ids[id(module)] = module.__name__ 122 for name, obj in list(module.__dict__.items()): 123 if id(obj) not in torch_object_ids: 124 if isinstance(obj, types.ModuleType): 125 if obj.__name__.startswith("torch."): 126 torch_object_ids[id(obj)] = f"{module.__name__}.{name}" 127 _find_torch_objects(obj) 128 elif _is_allowed_module_prefix(obj): 129 torch_object_ids[id(obj)] = f"{module.__name__}.{name}" 130 elif inspect.getmodule(obj) is None and not is_safe_constant(obj): 131 torch_object_ids[id(obj)] = f"{module.__name__}.{name}" 132 133 _find_torch_objects(torch) 134 _find_torch_objects(math) 135 136 for idx in _disallowed_function_ids(): 137 if idx in torch_object_ids: 138 del torch_object_ids[idx] 139 140 return torch_object_ids 141 142 143 @make_function_id_set 144 def _builtin_function_ids(): 145 rv = { 146 id(v): f"builtins.{k}" 147 for k, v in builtins.__dict__.items() 148 if not k.startswith("_") and callable(v) 149 } 150 rv.update( 151 { 152 id(v): f"operator.{k}" 153 for k, v in operator.__dict__.items() 154 if not k.startswith("_") and callable(v) 155 } 156 ) 157 rv.update( 158 {id(v): f"functools.{v.__name__}" for v in (itertools.chain, itertools.islice)} 159 ) 160 rv[id(functools.reduce)] = "functools.reduce" 161 return rv 162 163 164 @make_function_id_set 165 def _numpy_function_ids(): 166 rv = dict() 167 for mod in (numpy, numpy.random): 168 rv.update( 169 { 170 id(v): f"{mod.__name__}.{k}" 171 for k, v in mod.__dict__.items() 172 if callable(v) 173 and (getattr(v, "__module__", None) or mod.__name__) == mod.__name__ 174 } 175 ) 176 return rv 177 178 179 def is_allowed(obj): 180 """Is this safe to trace like torch.add ?""" 181 # torch.ops is populated lazily so we don't necessarily have them in 182 # _allowed_function_ids. Figure it out by testing the type instead 183 # in those cases 184 return id(obj) in _allowed_function_ids or isinstance( 185 obj, 186 (torch._ops.OpOverloadPacket, torch._ops.OpOverload, torch._ops._OpNamespace), 187 ) 188 189 190 def torch_get_name(obj, default): 191 """Convert a torch.* funcion to a string""" 192 return _allowed_function_ids.get_name(id(obj), default) 193 194 195 def is_builtin(obj): 196 return id(obj) in _builtin_function_ids 197 198 199 def is_numpy(obj): 200 return isinstance(obj, numpy.ndarray) or id(obj) in _numpy_function_ids 201 [end of torchdynamo/allowed_functions.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/torchdynamo/allowed_functions.py b/torchdynamo/allowed_functions.py --- a/torchdynamo/allowed_functions.py +++ b/torchdynamo/allowed_functions.py @@ -91,6 +91,11 @@ torch.autograd.profiler.profile, warnings.warn, ] + # extract all dtypes from torch + dtypes = [ + obj for obj in torch.__dict__.values() if isinstance(obj, type(torch.float32)) + ] + remove += dtypes return {id(x) for x in remove}
{"golden_diff": "diff --git a/torchdynamo/allowed_functions.py b/torchdynamo/allowed_functions.py\n--- a/torchdynamo/allowed_functions.py\n+++ b/torchdynamo/allowed_functions.py\n@@ -91,6 +91,11 @@\n torch.autograd.profiler.profile,\n warnings.warn,\n ]\n+ # extract all dtypes from torch\n+ dtypes = [\n+ obj for obj in torch.__dict__.values() if isinstance(obj, type(torch.float32))\n+ ]\n+ remove += dtypes\n return {id(x) for x in remove}\n", "issue": "isinstance test on dtype causes graph break\n```\r\ndiff --git a/tests/test_repros.py b/tests/test_repros.py\r\nindex 4d590f7..3ede478 100755\r\n--- a/tests/test_repros.py\r\n+++ b/tests/test_repros.py\r\n@@ -1246,3 +1246,11 @@ class ReproTests(torchdynamo.testing.TestCase):\r\n \r\n self.assertTrue(same(ref0, res0))\r\n self.assertTrue(same(ref1, res1))\r\n+\r\n+ def test_isinstance_dtype(self):\r\n+ @torchdynamo.optimize(\"eager\", nopython=True)\r\n+ def fn(x):\r\n+ isinstance(torch.bfloat16, torch.dtype)\r\n+ return x\r\n+\r\n+ fn(torch.randn(3))\r\n```\r\n\r\nyou get\r\n\r\n```\r\nTraceback (most recent call last): \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/convert_frame.py\", line 278, in _convert_frame_assert \r\n code = transform_code_object(frame.f_code, transform) \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/bytecode_transformation.py\", line 338, in transform_code_object \r\n transformations(instructions, code_options) \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/convert_frame.py\", line 254, in transform \r\n tracer.run() \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py\", line 306, in run \r\n and self.step() \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py\", line 284, in step \r\n getattr(self, inst.opname)(inst) \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py\", line 145, in wrapper \r\n return inner_fn(self, inst) \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py\", line 619, in CALL_FUNCTION \r\n self.call_function(fn, args, {}) \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/symbolic_convert.py\", line 220, in call_function \r\n self.push(fn.call_function(self, args, kwargs)) \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/variables/builtin.py\", line 220, in call_function \r\n result = handler(tx, *args, **kwargs) \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/variables/builtin.py\", line 354, in call_isinstance \r\n arg_type = arg.python_type() \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/variables/torch.py\", line 67, in python_type \r\n return super().python_type() \r\n File \"/raid/ezyang/torchdynamo/torchdynamo/variables/base.py\", line 137, in python_type \r\n raise NotImplementedError(f\"{self} has no type\") \r\nNotImplementedError: TorchVariable() has no type \r\n```\n", "before_files": [{"content": "import builtins\nimport collections\nimport copy\nimport functools\nimport inspect\nimport itertools\nimport math\nimport operator\nimport types\nimport warnings\nfrom typing import Dict\nfrom typing import Optional\nfrom typing import Set\n\nimport numpy\nimport torch\n\nfrom . import config\nfrom .utils import is_safe_constant\n\n\ndef make_function_id_set(lazy_initializer):\n \"\"\"\n Track a set of `id()`s of objects which are either allowed or not\n allowed to go into the generated FX graph. Use to test for torch.*,\n numpy.*, builtins.*, etc.\n\n Support user modification to permit customization of what can be\n added to the graph and what will cause a graph break.\n \"\"\"\n\n class FunctionIdSet:\n function_ids: Optional[Set[int]] = None\n function_names: Optional[Dict[int, str]] = None\n\n def __call__(self):\n if self.function_ids is None:\n value = lazy_initializer()\n if isinstance(value, dict):\n self.function_ids = set(value.keys())\n self.function_names = value\n else:\n assert isinstance(value, set)\n self.function_ids = value\n return self.function_ids\n\n def get_name(self, idx: int, default: str):\n self() # lazy init\n return self.function_names.get(idx, default)\n\n def add(self, idx: int):\n self() # lazy init\n self.function_ids.add(idx)\n\n def remove(self, idx: int):\n if idx in self():\n self.function_ids.remove(idx)\n\n def __contains__(self, idx: int):\n return idx in self()\n\n return FunctionIdSet()\n\n\n@make_function_id_set\ndef _disallowed_function_ids():\n remove = [\n True,\n False,\n None,\n collections.OrderedDict,\n copy.copy,\n copy.deepcopy,\n inspect.signature,\n math.__package__,\n torch.__builtins__,\n torch.autocast_decrement_nesting,\n torch.autocast_increment_nesting,\n torch.autograd.grad,\n torch.clear_autocast_cache,\n torch.cuda.current_device,\n torch.distributions.constraints.is_dependent,\n torch.distributions.normal.Normal,\n torch.inference_mode,\n torch.set_anomaly_enabled,\n torch.set_autocast_cache_enabled,\n torch.set_autocast_cpu_dtype,\n torch.set_autocast_cpu_enabled,\n torch.set_autocast_enabled,\n torch.set_autocast_gpu_dtype,\n torch.autograd.profiler.profile,\n warnings.warn,\n ]\n return {id(x) for x in remove}\n\n\n@make_function_id_set\ndef _allowed_function_ids():\n \"\"\"\n Walk torch.* and get the ids of all the stuff in it\n \"\"\"\n warnings.filterwarnings(\"ignore\", category=UserWarning, module=\"torch.distributed\")\n torch_object_ids = dict()\n\n def _is_allowed_module_prefix(obj):\n allowed_modules = (\"torch\", \"math\")\n allowed_modules_dot = tuple([x + \".\" for x in allowed_modules])\n module = inspect.getmodule(obj)\n if module is None:\n return False\n\n mod_name = module.__name__\n return mod_name in allowed_modules or mod_name.startswith(allowed_modules_dot)\n\n def _find_torch_objects(module):\n if any(\n module.__name__.startswith(mod_name)\n for mod_name in config.allowed_functions_module_string_ignorelist\n ):\n return\n torch_object_ids[id(module)] = module.__name__\n for name, obj in list(module.__dict__.items()):\n if id(obj) not in torch_object_ids:\n if isinstance(obj, types.ModuleType):\n if obj.__name__.startswith(\"torch.\"):\n torch_object_ids[id(obj)] = f\"{module.__name__}.{name}\"\n _find_torch_objects(obj)\n elif _is_allowed_module_prefix(obj):\n torch_object_ids[id(obj)] = f\"{module.__name__}.{name}\"\n elif inspect.getmodule(obj) is None and not is_safe_constant(obj):\n torch_object_ids[id(obj)] = f\"{module.__name__}.{name}\"\n\n _find_torch_objects(torch)\n _find_torch_objects(math)\n\n for idx in _disallowed_function_ids():\n if idx in torch_object_ids:\n del torch_object_ids[idx]\n\n return torch_object_ids\n\n\n@make_function_id_set\ndef _builtin_function_ids():\n rv = {\n id(v): f\"builtins.{k}\"\n for k, v in builtins.__dict__.items()\n if not k.startswith(\"_\") and callable(v)\n }\n rv.update(\n {\n id(v): f\"operator.{k}\"\n for k, v in operator.__dict__.items()\n if not k.startswith(\"_\") and callable(v)\n }\n )\n rv.update(\n {id(v): f\"functools.{v.__name__}\" for v in (itertools.chain, itertools.islice)}\n )\n rv[id(functools.reduce)] = \"functools.reduce\"\n return rv\n\n\n@make_function_id_set\ndef _numpy_function_ids():\n rv = dict()\n for mod in (numpy, numpy.random):\n rv.update(\n {\n id(v): f\"{mod.__name__}.{k}\"\n for k, v in mod.__dict__.items()\n if callable(v)\n and (getattr(v, \"__module__\", None) or mod.__name__) == mod.__name__\n }\n )\n return rv\n\n\ndef is_allowed(obj):\n \"\"\"Is this safe to trace like torch.add ?\"\"\"\n # torch.ops is populated lazily so we don't necessarily have them in\n # _allowed_function_ids. Figure it out by testing the type instead\n # in those cases\n return id(obj) in _allowed_function_ids or isinstance(\n obj,\n (torch._ops.OpOverloadPacket, torch._ops.OpOverload, torch._ops._OpNamespace),\n )\n\n\ndef torch_get_name(obj, default):\n \"\"\"Convert a torch.* funcion to a string\"\"\"\n return _allowed_function_ids.get_name(id(obj), default)\n\n\ndef is_builtin(obj):\n return id(obj) in _builtin_function_ids\n\n\ndef is_numpy(obj):\n return isinstance(obj, numpy.ndarray) or id(obj) in _numpy_function_ids\n", "path": "torchdynamo/allowed_functions.py"}]}
3,107
130
gh_patches_debug_34847
rasdani/github-patches
git_diff
Flexget__Flexget-1107
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [subliminal] Support downloading subtitles to different directory Already supported by the subliminal API, patch just adds a config option for it. ``` --- subtitles_subliminal.py.orig 2016-04-20 06:07:33.873933998 -0300 +++ subtitles_subliminal.py 2016-04-20 06:12:29.070504077 -0300 @@ -42,6 +42,7 @@ exact_match: no providers: addic7ed, opensubtitles single: no + directory: /disk/subtitles """ schema = { @@ -52,6 +53,7 @@ 'exact_match': {'type': 'boolean', 'default': True}, 'providers': {'type': 'array', 'items': {'type': 'string', 'enum': PROVIDERS}}, 'single': {'type': 'boolean', 'default': True}, + 'directory': {'type:': 'string'}, }, 'required': ['languages'], 'additionalProperties': False @@ -80,6 +82,7 @@ exact_match: Use file hash only to search for subs, otherwise Subliminal will try to guess by filename. providers: List of providers from where to download subtitles. single: Download subtitles in single mode (no language code added to subtitle filename). + directory: Path to directory where to save the subtitles, default is next to the video. """ if not task.accepted: log.debug('nothing accepted, aborting') @@ -163,7 +166,7 @@ # save subtitles to disk for video, subtitle in downloaded_subtitles.items(): if subtitle: - subliminal.save_subtitles(video, subtitle, single=single_mode) + subliminal.save_subtitles(video, subtitle, single=single_mode, directory=config.get('directory', None)) @event('plugin.register') ``` </issue> <code> [start of flexget/plugins/output/subtitles_subliminal.py] 1 from __future__ import unicode_literals, division, absolute_import 2 from builtins import * 3 4 import collections 5 import logging 6 import os 7 import sys 8 import tempfile 9 10 from flexget import plugin 11 from flexget.event import event 12 13 log = logging.getLogger('subtitles') 14 15 PROVIDERS = [ 16 'opensubtitles', 17 'thesubdb', 18 'podnapisi', 19 'addic7ed', 20 'tvsubtitles' 21 ] 22 23 24 class PluginSubliminal(object): 25 """ 26 Search and download subtitles using Subliminal by Antoine Bertin 27 (https://pypi.python.org/pypi/subliminal). 28 29 Example (complete task):: 30 31 subs: 32 find: 33 path: 34 - d:\media\incoming 35 regexp: '.*\.(avi|mkv|mp4)$' 36 recursive: yes 37 accept_all: yes 38 subliminal: 39 languages: 40 - ita 41 alternatives: 42 - eng 43 exact_match: no 44 providers: addic7ed, opensubtitles 45 single: no 46 """ 47 48 schema = { 49 'type': 'object', 50 'properties': { 51 'languages': {'type': 'array', 'items': {'type': 'string'}, 'minItems': 1}, 52 'alternatives': {'type': 'array', 'items': {'type': 'string'}}, 53 'exact_match': {'type': 'boolean', 'default': True}, 54 'providers': {'type': 'array', 'items': {'type': 'string', 'enum': PROVIDERS}}, 55 'single': {'type': 'boolean', 'default': True}, 56 }, 57 'required': ['languages'], 58 'additionalProperties': False 59 } 60 61 def on_task_start(self, task, config): 62 if list(sys.version_info) < [2, 7]: 63 raise plugin.DependencyError('subliminal', 'Python 2.7', 'Subliminal plugin requires python 2.7.') 64 try: 65 import babelfish 66 except ImportError as e: 67 log.debug('Error importing Babelfish: %s', e) 68 raise plugin.DependencyError('subliminal', 'babelfish', 'Babelfish module required. ImportError: %s', e) 69 try: 70 import subliminal 71 except ImportError as e: 72 log.debug('Error importing Subliminal: %s', e) 73 raise plugin.DependencyError('subliminal', 'subliminal', 'Subliminal module required. ImportError: %s', e) 74 75 def on_task_output(self, task, config): 76 """ 77 Configuration:: 78 subliminal: 79 languages: List of languages (as IETF codes) in order of preference. At least one is required. 80 alternatives: List of second-choice languages; subs will be downloaded but entries rejected. 81 exact_match: Use file hash only to search for subs, otherwise Subliminal will try to guess by filename. 82 providers: List of providers from where to download subtitles. 83 single: Download subtitles in single mode (no language code added to subtitle filename). 84 """ 85 if not task.accepted: 86 log.debug('nothing accepted, aborting') 87 return 88 from babelfish import Language 89 from dogpile.cache.exception import RegionAlreadyConfigured 90 import subliminal 91 from subliminal.cli import MutexLock 92 from subliminal.score import episode_scores, movie_scores 93 try: 94 subliminal.region.configure('dogpile.cache.dbm', 95 arguments={ 96 'filename': os.path.join(tempfile.gettempdir(), 'cachefile.dbm'), 97 'lock_factory': MutexLock, 98 }) 99 except RegionAlreadyConfigured: 100 pass 101 logging.getLogger("subliminal").setLevel(logging.CRITICAL) 102 logging.getLogger("enzyme").setLevel(logging.WARNING) 103 languages = set([Language.fromietf(s) for s in config.get('languages', [])]) 104 alternative_languages = set([Language.fromietf(s) for s in config.get('alternatives', [])]) 105 # keep all downloaded subtitles and save to disk when done (no need to write every time) 106 downloaded_subtitles = collections.defaultdict(list) 107 providers_list = config.get('providers', None) 108 # test if only one language was provided, if so we will download in single mode 109 # (aka no language code added to subtitle filename) 110 # unless we are forced not to by configuration 111 # if we pass 'yes' for single in configuration but choose more than one language 112 # we ignore the configuration and add the language code to the 113 # potentially downloaded files 114 single_mode = config.get('single', '') and len(languages | alternative_languages) <= 1 115 for entry in task.accepted: 116 if 'location' not in entry: 117 log.warning('Cannot act on entries that do not represent a local file.') 118 elif not os.path.exists(entry['location']): 119 entry.fail('file not found: %s' % entry['location']) 120 elif '$RECYCLE.BIN' not in entry['location']: # ignore deleted files in Windows shares 121 try: 122 entry_languages = entry.get('subtitle_languages') or languages 123 124 video = subliminal.scan_video(entry['location']) 125 existing_subtitles = set(subliminal.core.search_external_subtitles(entry['location']).values()) 126 video.subtitle_languages = existing_subtitles 127 if isinstance(video, subliminal.Episode): 128 title = video.series 129 hash_scores = episode_scores['hash'] 130 else: 131 title = video.title 132 hash_scores = movie_scores['hash'] 133 log.info('Name computed for %s was %s', entry['location'], title) 134 msc = hash_scores if config['exact_match'] else 0 135 if entry_languages & existing_subtitles: 136 log.debug('All preferred languages already exist for "%s"', entry['title']) 137 entry['subtitles_missing'] = set() 138 continue # subs for preferred lang(s) already exists 139 else: 140 subtitle = subliminal.download_best_subtitles([video], entry_languages, 141 providers=providers_list, min_score=msc) 142 if subtitle and any(subtitle.values()): 143 downloaded_subtitles.update(subtitle) 144 log.info('Subtitles found for %s', entry['location']) 145 else: 146 # only try to download for alternatives that aren't alread downloaded 147 subtitle = subliminal.download_best_subtitles([video], alternative_languages, 148 providers=providers_list, min_score=msc) 149 150 if subtitle and any(subtitle.values()): 151 downloaded_subtitles.update(subtitle) 152 entry.fail('subtitles found for a second-choice language.') 153 else: 154 entry.fail('cannot find any subtitles for now.') 155 downloaded_languages = set([Language.fromietf(str(l.language)) 156 for l in subtitle[video]]) 157 if entry_languages: 158 entry['subtitles_missing'] = entry_languages - downloaded_languages 159 except ValueError as e: 160 log.error('subliminal error: %s', e) 161 entry.fail() 162 163 if downloaded_subtitles: 164 # save subtitles to disk 165 for video, subtitle in downloaded_subtitles.items(): 166 if subtitle: 167 subliminal.save_subtitles(video, subtitle, single=single_mode) 168 169 170 @event('plugin.register') 171 def register_plugin(): 172 plugin.register(PluginSubliminal, 'subliminal', api_ver=2) 173 [end of flexget/plugins/output/subtitles_subliminal.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/flexget/plugins/output/subtitles_subliminal.py b/flexget/plugins/output/subtitles_subliminal.py --- a/flexget/plugins/output/subtitles_subliminal.py +++ b/flexget/plugins/output/subtitles_subliminal.py @@ -43,6 +43,7 @@ exact_match: no providers: addic7ed, opensubtitles single: no + directory: /disk/subtitles """ schema = { @@ -53,6 +54,7 @@ 'exact_match': {'type': 'boolean', 'default': True}, 'providers': {'type': 'array', 'items': {'type': 'string', 'enum': PROVIDERS}}, 'single': {'type': 'boolean', 'default': True}, + 'directory': {'type:': 'string'}, }, 'required': ['languages'], 'additionalProperties': False @@ -81,6 +83,7 @@ exact_match: Use file hash only to search for subs, otherwise Subliminal will try to guess by filename. providers: List of providers from where to download subtitles. single: Download subtitles in single mode (no language code added to subtitle filename). + directory: Path to directory where to save the subtitles, default is next to the video. """ if not task.accepted: log.debug('nothing accepted, aborting') @@ -164,7 +167,10 @@ # save subtitles to disk for video, subtitle in downloaded_subtitles.items(): if subtitle: - subliminal.save_subtitles(video, subtitle, single=single_mode) + _directory = config.get('directory', None) + if _directory: + _directory = os.path.expanduser(_directory) + subliminal.save_subtitles(video, subtitle, single=single_mode, directory=_directory) @event('plugin.register')
{"golden_diff": "diff --git a/flexget/plugins/output/subtitles_subliminal.py b/flexget/plugins/output/subtitles_subliminal.py\n--- a/flexget/plugins/output/subtitles_subliminal.py\n+++ b/flexget/plugins/output/subtitles_subliminal.py\n@@ -43,6 +43,7 @@\n exact_match: no\n providers: addic7ed, opensubtitles\n single: no\n+ directory: /disk/subtitles\n \"\"\"\n \n schema = {\n@@ -53,6 +54,7 @@\n 'exact_match': {'type': 'boolean', 'default': True},\n 'providers': {'type': 'array', 'items': {'type': 'string', 'enum': PROVIDERS}},\n 'single': {'type': 'boolean', 'default': True},\n+ 'directory': {'type:': 'string'},\n },\n 'required': ['languages'],\n 'additionalProperties': False\n@@ -81,6 +83,7 @@\n exact_match: Use file hash only to search for subs, otherwise Subliminal will try to guess by filename.\n providers: List of providers from where to download subtitles.\n single: Download subtitles in single mode (no language code added to subtitle filename).\n+ directory: Path to directory where to save the subtitles, default is next to the video.\n \"\"\"\n if not task.accepted:\n log.debug('nothing accepted, aborting')\n@@ -164,7 +167,10 @@\n # save subtitles to disk\n for video, subtitle in downloaded_subtitles.items():\n if subtitle:\n- subliminal.save_subtitles(video, subtitle, single=single_mode)\n+ _directory = config.get('directory', None)\n+ if _directory:\n+ _directory = os.path.expanduser(_directory)\n+ subliminal.save_subtitles(video, subtitle, single=single_mode, directory=_directory)\n \n \n @event('plugin.register')\n", "issue": "[subliminal] Support downloading subtitles to different directory\nAlready supported by the subliminal API, patch just adds a config option for it.\n\n```\n--- subtitles_subliminal.py.orig 2016-04-20 06:07:33.873933998 -0300\n+++ subtitles_subliminal.py 2016-04-20 06:12:29.070504077 -0300\n@@ -42,6 +42,7 @@\n exact_match: no\n providers: addic7ed, opensubtitles\n single: no\n+ directory: /disk/subtitles\n \"\"\"\n\n schema = {\n@@ -52,6 +53,7 @@\n 'exact_match': {'type': 'boolean', 'default': True},\n 'providers': {'type': 'array', 'items': {'type': 'string', 'enum': PROVIDERS}},\n 'single': {'type': 'boolean', 'default': True},\n+ 'directory': {'type:': 'string'},\n },\n 'required': ['languages'],\n 'additionalProperties': False\n@@ -80,6 +82,7 @@\n exact_match: Use file hash only to search for subs, otherwise Subliminal will try to guess by filename.\n providers: List of providers from where to download subtitles.\n single: Download subtitles in single mode (no language code added to subtitle filename).\n+ directory: Path to directory where to save the subtitles, default is next to the video.\n \"\"\"\n if not task.accepted:\n log.debug('nothing accepted, aborting')\n@@ -163,7 +166,7 @@\n # save subtitles to disk\n for video, subtitle in downloaded_subtitles.items():\n if subtitle:\n- subliminal.save_subtitles(video, subtitle, single=single_mode)\n+ subliminal.save_subtitles(video, subtitle, single=single_mode, directory=config.get('directory', None))\n\n\n @event('plugin.register')\n```\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import *\n\nimport collections\nimport logging\nimport os\nimport sys\nimport tempfile\n\nfrom flexget import plugin\nfrom flexget.event import event\n\nlog = logging.getLogger('subtitles')\n\nPROVIDERS = [\n 'opensubtitles',\n 'thesubdb',\n 'podnapisi',\n 'addic7ed',\n 'tvsubtitles'\n]\n\n\nclass PluginSubliminal(object):\n \"\"\"\n Search and download subtitles using Subliminal by Antoine Bertin\n (https://pypi.python.org/pypi/subliminal).\n \n Example (complete task)::\n\n subs:\n find:\n path: \n - d:\\media\\incoming\n regexp: '.*\\.(avi|mkv|mp4)$'\n recursive: yes\n accept_all: yes\n subliminal:\n languages:\n - ita\n alternatives:\n - eng\n exact_match: no\n providers: addic7ed, opensubtitles\n single: no\n \"\"\"\n \n schema = {\n 'type': 'object',\n 'properties': {\n 'languages': {'type': 'array', 'items': {'type': 'string'}, 'minItems': 1},\n 'alternatives': {'type': 'array', 'items': {'type': 'string'}},\n 'exact_match': {'type': 'boolean', 'default': True},\n 'providers': {'type': 'array', 'items': {'type': 'string', 'enum': PROVIDERS}},\n 'single': {'type': 'boolean', 'default': True},\n },\n 'required': ['languages'],\n 'additionalProperties': False\n }\n\n def on_task_start(self, task, config):\n if list(sys.version_info) < [2, 7]:\n raise plugin.DependencyError('subliminal', 'Python 2.7', 'Subliminal plugin requires python 2.7.')\n try:\n import babelfish\n except ImportError as e:\n log.debug('Error importing Babelfish: %s', e)\n raise plugin.DependencyError('subliminal', 'babelfish', 'Babelfish module required. ImportError: %s', e)\n try:\n import subliminal\n except ImportError as e:\n log.debug('Error importing Subliminal: %s', e)\n raise plugin.DependencyError('subliminal', 'subliminal', 'Subliminal module required. ImportError: %s', e)\n \n def on_task_output(self, task, config):\n \"\"\"\n Configuration::\n subliminal:\n languages: List of languages (as IETF codes) in order of preference. At least one is required.\n alternatives: List of second-choice languages; subs will be downloaded but entries rejected.\n exact_match: Use file hash only to search for subs, otherwise Subliminal will try to guess by filename.\n providers: List of providers from where to download subtitles.\n single: Download subtitles in single mode (no language code added to subtitle filename).\n \"\"\"\n if not task.accepted:\n log.debug('nothing accepted, aborting')\n return\n from babelfish import Language\n from dogpile.cache.exception import RegionAlreadyConfigured\n import subliminal\n from subliminal.cli import MutexLock\n from subliminal.score import episode_scores, movie_scores\n try:\n subliminal.region.configure('dogpile.cache.dbm',\n arguments={\n 'filename': os.path.join(tempfile.gettempdir(), 'cachefile.dbm'),\n 'lock_factory': MutexLock,\n })\n except RegionAlreadyConfigured:\n pass\n logging.getLogger(\"subliminal\").setLevel(logging.CRITICAL)\n logging.getLogger(\"enzyme\").setLevel(logging.WARNING)\n languages = set([Language.fromietf(s) for s in config.get('languages', [])])\n alternative_languages = set([Language.fromietf(s) for s in config.get('alternatives', [])])\n # keep all downloaded subtitles and save to disk when done (no need to write every time)\n downloaded_subtitles = collections.defaultdict(list)\n providers_list = config.get('providers', None)\n # test if only one language was provided, if so we will download in single mode\n # (aka no language code added to subtitle filename)\n # unless we are forced not to by configuration\n # if we pass 'yes' for single in configuration but choose more than one language\n # we ignore the configuration and add the language code to the\n # potentially downloaded files\n single_mode = config.get('single', '') and len(languages | alternative_languages) <= 1\n for entry in task.accepted:\n if 'location' not in entry:\n log.warning('Cannot act on entries that do not represent a local file.')\n elif not os.path.exists(entry['location']):\n entry.fail('file not found: %s' % entry['location'])\n elif '$RECYCLE.BIN' not in entry['location']: # ignore deleted files in Windows shares\n try:\n entry_languages = entry.get('subtitle_languages') or languages\n\n video = subliminal.scan_video(entry['location'])\n existing_subtitles = set(subliminal.core.search_external_subtitles(entry['location']).values())\n video.subtitle_languages = existing_subtitles\n if isinstance(video, subliminal.Episode):\n title = video.series\n hash_scores = episode_scores['hash']\n else:\n title = video.title\n hash_scores = movie_scores['hash']\n log.info('Name computed for %s was %s', entry['location'], title)\n msc = hash_scores if config['exact_match'] else 0\n if entry_languages & existing_subtitles:\n log.debug('All preferred languages already exist for \"%s\"', entry['title'])\n entry['subtitles_missing'] = set()\n continue # subs for preferred lang(s) already exists\n else:\n subtitle = subliminal.download_best_subtitles([video], entry_languages,\n providers=providers_list, min_score=msc)\n if subtitle and any(subtitle.values()):\n downloaded_subtitles.update(subtitle)\n log.info('Subtitles found for %s', entry['location'])\n else:\n # only try to download for alternatives that aren't alread downloaded\n subtitle = subliminal.download_best_subtitles([video], alternative_languages,\n providers=providers_list, min_score=msc)\n\n if subtitle and any(subtitle.values()):\n downloaded_subtitles.update(subtitle)\n entry.fail('subtitles found for a second-choice language.')\n else:\n entry.fail('cannot find any subtitles for now.')\n downloaded_languages = set([Language.fromietf(str(l.language))\n for l in subtitle[video]])\n if entry_languages:\n entry['subtitles_missing'] = entry_languages - downloaded_languages\n except ValueError as e:\n log.error('subliminal error: %s', e)\n entry.fail()\n\n if downloaded_subtitles:\n # save subtitles to disk\n for video, subtitle in downloaded_subtitles.items():\n if subtitle:\n subliminal.save_subtitles(video, subtitle, single=single_mode)\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(PluginSubliminal, 'subliminal', api_ver=2)\n", "path": "flexget/plugins/output/subtitles_subliminal.py"}]}
2,974
419
gh_patches_debug_5655
rasdani/github-patches
git_diff
google__mobly-842
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Refactor Snippet Client Mobly supports testing devices of multiple platforms, including Android, iOS and Windows. Mobly has one snippet client for each platform, and these clients share common patterns. So we want to consolidate current clients that they inherit from the same base class. Currently, the father class of Android snippet client has too much Android-specific code, so it's better to add a new general base class. We will add a base class and one client V2 for each platform and we won't modify existing clients. This issue is also a prerequisite of making iOS related code open source. </issue> <code> [start of mobly/controllers/android_device_lib/services/snippet_management_service.py] 1 # Copyright 2018 Google Inc. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 """Module for the snippet management service.""" 15 from mobly.controllers.android_device_lib import errors 16 from mobly.controllers.android_device_lib import snippet_client_v2 17 from mobly.controllers.android_device_lib.services import base_service 18 19 MISSING_SNIPPET_CLIENT_MSG = 'No snippet client is registered with name "%s".' 20 21 # This config is transient and we will remove it after completing the migration 22 # from v1 to v2. 23 _CLIENT_V2_CONFIG_KEY = 'use_mobly_snippet_client_v2' 24 25 26 class Error(errors.ServiceError): 27 """Root error type for snippet management service.""" 28 SERVICE_TYPE = 'SnippetManagementService' 29 30 31 class SnippetManagementService(base_service.BaseService): 32 """Management service of snippet clients. 33 34 This service manages all the snippet clients associated with an Android 35 device. 36 """ 37 38 def __init__(self, device, configs=None): 39 del configs # Unused param. 40 self._device = device 41 self._is_alive = False 42 self._snippet_clients = {} 43 super().__init__(device) 44 45 @property 46 def is_alive(self): 47 """True if any client is running, False otherwise.""" 48 return any([client.is_alive for client in self._snippet_clients.values()]) 49 50 def get_snippet_client(self, name): 51 """Gets the snippet client managed under a given name. 52 53 Args: 54 name: string, the name of the snippet client under management. 55 56 Returns: 57 SnippetClient. 58 """ 59 if name in self._snippet_clients: 60 return self._snippet_clients[name] 61 62 def add_snippet_client(self, name, package): 63 """Adds a snippet client to the management. 64 65 Args: 66 name: string, the attribute name to which to attach the snippet 67 client. E.g. `name='maps'` attaches the snippet client to 68 `ad.maps`. 69 package: string, the package name of the snippet apk to connect to. 70 71 Raises: 72 Error, if a duplicated name or package is passed in. 73 """ 74 # Should not load snippet with the same name more than once. 75 if name in self._snippet_clients: 76 raise Error( 77 self, 'Name "%s" is already registered with package "%s", it cannot ' 78 'be used again.' % (name, self._snippet_clients[name].client.package)) 79 # Should not load the same snippet package more than once. 80 for snippet_name, client in self._snippet_clients.items(): 81 if package == client.package: 82 raise Error( 83 self, 'Snippet package "%s" has already been loaded under name' 84 ' "%s".' % (package, snippet_name)) 85 86 client = snippet_client_v2.SnippetClientV2(package=package, ad=self._device) 87 client.initialize() 88 self._snippet_clients[name] = client 89 90 def remove_snippet_client(self, name): 91 """Removes a snippet client from management. 92 93 Args: 94 name: string, the name of the snippet client to remove. 95 96 Raises: 97 Error: if no snippet client is managed under the specified name. 98 """ 99 if name not in self._snippet_clients: 100 raise Error(self._device, MISSING_SNIPPET_CLIENT_MSG % name) 101 client = self._snippet_clients.pop(name) 102 client.stop() 103 104 def start(self): 105 """Starts all the snippet clients under management.""" 106 for client in self._snippet_clients.values(): 107 if not client.is_alive: 108 self._device.log.debug('Starting SnippetClient<%s>.', client.package) 109 client.initialize() 110 else: 111 self._device.log.debug( 112 'Not startng SnippetClient<%s> because it is already alive.', 113 client.package) 114 115 def stop(self): 116 """Stops all the snippet clients under management.""" 117 for client in self._snippet_clients.values(): 118 if client.is_alive: 119 self._device.log.debug('Stopping SnippetClient<%s>.', client.package) 120 client.stop() 121 else: 122 self._device.log.debug( 123 'Not stopping SnippetClient<%s> because it is not alive.', 124 client.package) 125 126 def pause(self): 127 """Pauses all the snippet clients under management. 128 129 This clears the host port of a client because a new port will be 130 allocated in `resume`. 131 """ 132 for client in self._snippet_clients.values(): 133 self._device.log.debug('Pausing SnippetClient<%s>.', client.package) 134 client.close_connection() 135 136 def resume(self): 137 """Resumes all paused snippet clients.""" 138 for client in self._snippet_clients.values(): 139 if not client.is_alive: 140 self._device.log.debug('Resuming SnippetClient<%s>.', client.package) 141 client.restore_server_connection() 142 else: 143 self._device.log.debug('Not resuming SnippetClient<%s>.', 144 client.package) 145 146 def __getattr__(self, name): 147 client = self.get_snippet_client(name) 148 if client: 149 return client 150 return self.__getattribute__(name) 151 [end of mobly/controllers/android_device_lib/services/snippet_management_service.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/mobly/controllers/android_device_lib/services/snippet_management_service.py b/mobly/controllers/android_device_lib/services/snippet_management_service.py --- a/mobly/controllers/android_device_lib/services/snippet_management_service.py +++ b/mobly/controllers/android_device_lib/services/snippet_management_service.py @@ -18,10 +18,6 @@ MISSING_SNIPPET_CLIENT_MSG = 'No snippet client is registered with name "%s".' -# This config is transient and we will remove it after completing the migration -# from v1 to v2. -_CLIENT_V2_CONFIG_KEY = 'use_mobly_snippet_client_v2' - class Error(errors.ServiceError): """Root error type for snippet management service."""
{"golden_diff": "diff --git a/mobly/controllers/android_device_lib/services/snippet_management_service.py b/mobly/controllers/android_device_lib/services/snippet_management_service.py\n--- a/mobly/controllers/android_device_lib/services/snippet_management_service.py\n+++ b/mobly/controllers/android_device_lib/services/snippet_management_service.py\n@@ -18,10 +18,6 @@\n \n MISSING_SNIPPET_CLIENT_MSG = 'No snippet client is registered with name \"%s\".'\n \n-# This config is transient and we will remove it after completing the migration\n-# from v1 to v2.\n-_CLIENT_V2_CONFIG_KEY = 'use_mobly_snippet_client_v2'\n-\n \n class Error(errors.ServiceError):\n \"\"\"Root error type for snippet management service.\"\"\"\n", "issue": "Refactor Snippet Client\nMobly supports testing devices of multiple platforms, including Android, iOS and Windows. Mobly has one snippet client for each platform, and these clients share common patterns. So we want to consolidate current clients that they inherit from the same base class.\r\n\r\nCurrently, the father class of Android snippet client has too much Android-specific code, so it's better to add a new general base class. We will add a base class and one client V2 for each platform and we won't modify existing clients.\r\n\r\nThis issue is also a prerequisite of making iOS related code open source.\r\n\n", "before_files": [{"content": "# Copyright 2018 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Module for the snippet management service.\"\"\"\nfrom mobly.controllers.android_device_lib import errors\nfrom mobly.controllers.android_device_lib import snippet_client_v2\nfrom mobly.controllers.android_device_lib.services import base_service\n\nMISSING_SNIPPET_CLIENT_MSG = 'No snippet client is registered with name \"%s\".'\n\n# This config is transient and we will remove it after completing the migration\n# from v1 to v2.\n_CLIENT_V2_CONFIG_KEY = 'use_mobly_snippet_client_v2'\n\n\nclass Error(errors.ServiceError):\n \"\"\"Root error type for snippet management service.\"\"\"\n SERVICE_TYPE = 'SnippetManagementService'\n\n\nclass SnippetManagementService(base_service.BaseService):\n \"\"\"Management service of snippet clients.\n\n This service manages all the snippet clients associated with an Android\n device.\n \"\"\"\n\n def __init__(self, device, configs=None):\n del configs # Unused param.\n self._device = device\n self._is_alive = False\n self._snippet_clients = {}\n super().__init__(device)\n\n @property\n def is_alive(self):\n \"\"\"True if any client is running, False otherwise.\"\"\"\n return any([client.is_alive for client in self._snippet_clients.values()])\n\n def get_snippet_client(self, name):\n \"\"\"Gets the snippet client managed under a given name.\n\n Args:\n name: string, the name of the snippet client under management.\n\n Returns:\n SnippetClient.\n \"\"\"\n if name in self._snippet_clients:\n return self._snippet_clients[name]\n\n def add_snippet_client(self, name, package):\n \"\"\"Adds a snippet client to the management.\n\n Args:\n name: string, the attribute name to which to attach the snippet\n client. E.g. `name='maps'` attaches the snippet client to\n `ad.maps`.\n package: string, the package name of the snippet apk to connect to.\n\n Raises:\n Error, if a duplicated name or package is passed in.\n \"\"\"\n # Should not load snippet with the same name more than once.\n if name in self._snippet_clients:\n raise Error(\n self, 'Name \"%s\" is already registered with package \"%s\", it cannot '\n 'be used again.' % (name, self._snippet_clients[name].client.package))\n # Should not load the same snippet package more than once.\n for snippet_name, client in self._snippet_clients.items():\n if package == client.package:\n raise Error(\n self, 'Snippet package \"%s\" has already been loaded under name'\n ' \"%s\".' % (package, snippet_name))\n\n client = snippet_client_v2.SnippetClientV2(package=package, ad=self._device)\n client.initialize()\n self._snippet_clients[name] = client\n\n def remove_snippet_client(self, name):\n \"\"\"Removes a snippet client from management.\n\n Args:\n name: string, the name of the snippet client to remove.\n\n Raises:\n Error: if no snippet client is managed under the specified name.\n \"\"\"\n if name not in self._snippet_clients:\n raise Error(self._device, MISSING_SNIPPET_CLIENT_MSG % name)\n client = self._snippet_clients.pop(name)\n client.stop()\n\n def start(self):\n \"\"\"Starts all the snippet clients under management.\"\"\"\n for client in self._snippet_clients.values():\n if not client.is_alive:\n self._device.log.debug('Starting SnippetClient<%s>.', client.package)\n client.initialize()\n else:\n self._device.log.debug(\n 'Not startng SnippetClient<%s> because it is already alive.',\n client.package)\n\n def stop(self):\n \"\"\"Stops all the snippet clients under management.\"\"\"\n for client in self._snippet_clients.values():\n if client.is_alive:\n self._device.log.debug('Stopping SnippetClient<%s>.', client.package)\n client.stop()\n else:\n self._device.log.debug(\n 'Not stopping SnippetClient<%s> because it is not alive.',\n client.package)\n\n def pause(self):\n \"\"\"Pauses all the snippet clients under management.\n\n This clears the host port of a client because a new port will be\n allocated in `resume`.\n \"\"\"\n for client in self._snippet_clients.values():\n self._device.log.debug('Pausing SnippetClient<%s>.', client.package)\n client.close_connection()\n\n def resume(self):\n \"\"\"Resumes all paused snippet clients.\"\"\"\n for client in self._snippet_clients.values():\n if not client.is_alive:\n self._device.log.debug('Resuming SnippetClient<%s>.', client.package)\n client.restore_server_connection()\n else:\n self._device.log.debug('Not resuming SnippetClient<%s>.',\n client.package)\n\n def __getattr__(self, name):\n client = self.get_snippet_client(name)\n if client:\n return client\n return self.__getattribute__(name)\n", "path": "mobly/controllers/android_device_lib/services/snippet_management_service.py"}]}
2,212
158
gh_patches_debug_25007
rasdani/github-patches
git_diff
fedora-infra__bodhi-1334
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> The docs on readthedocs are missing the Python bindings Our RTD page is missing the Python bindings: http://bodhi.readthedocs.io/en/latest/python_bindings.html Very likely this is due to our ```docs/conf.py``` file not injecting the root of our repo into sys.path. </issue> <code> [start of docs/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # bodhi documentation build configuration file, created by 4 # sphinx-quickstart on Sat Aug 10 09:29:50 2013. 5 # 6 # This file is execfile()d with the current directory set to its containing dir. 7 # 8 # Note that not all possible configuration values are present in this 9 # autogenerated file. 10 # 11 # All configuration values have a default; values that are commented out 12 # serve to show the default. 13 import datetime 14 15 16 # If extensions (or modules to document with autodoc) are in another directory, 17 # add these directories to sys.path here. If the directory is relative to the 18 # documentation root, use os.path.abspath to make it absolute, like shown here. 19 # sys.path.insert(0, os.path.abspath('.')) 20 21 # -- General configuration ----------------------------------------------------- 22 23 # If your documentation needs a minimal Sphinx version, state it here. 24 # needs_sphinx = '1.0' 25 26 # Add any Sphinx extension module names here, as strings. They can be extensions 27 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 28 extensions = ['sphinx.ext.autodoc', 29 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.viewcode'] 30 31 # Add any paths that contain templates here, relative to this directory. 32 templates_path = ['_templates'] 33 34 # The suffix of source filenames. 35 source_suffix = '.rst' 36 37 # The encoding of source files. 38 # source_encoding = 'utf-8-sig' 39 40 # The master toctree document. 41 master_doc = 'index' 42 43 # General information about the project. 44 project = u'bodhi' 45 copyright = u'2007-{}, Red Hat, Inc.'.format(datetime.datetime.utcnow().year) 46 47 # The version info for the project you're documenting, acts as replacement for 48 # |version| and |release|, also used in various other places throughout the 49 # built documents. 50 # 51 # The short X.Y version. 52 version = '2.4' 53 # The full version, including alpha/beta/rc tags. 54 release = '2.4.0' 55 56 # The language for content autogenerated by Sphinx. Refer to documentation 57 # for a list of supported languages. 58 # language = None 59 60 # There are two options for replacing |today|: either, you set today to some 61 # non-false value, then it is used: 62 # today = '' 63 # Else, today_fmt is used as the format for a strftime call. 64 # today_fmt = '%B %d, %Y' 65 66 # List of patterns, relative to source directory, that match files and 67 # directories to ignore when looking for source files. 68 exclude_patterns = ['_build'] 69 70 # The reST default role (used for this markup: `text`) to use for all documents. 71 # default_role = None 72 73 # If true, '()' will be appended to :func: etc. cross-reference text. 74 # add_function_parentheses = True 75 76 # If true, the current module name will be prepended to all description 77 # unit titles (such as .. function::). 78 # add_module_names = True 79 80 # If true, sectionauthor and moduleauthor directives will be shown in the 81 # output. They are ignored by default. 82 # show_authors = False 83 84 # The name of the Pygments (syntax highlighting) style to use. 85 pygments_style = 'sphinx' 86 87 # A list of ignored prefixes for module index sorting. 88 # modindex_common_prefix = [] 89 90 91 # -- Options for HTML output --------------------------------------------------- 92 93 # The theme to use for HTML and HTML Help pages. See the documentation for 94 # a list of builtin themes. 95 html_theme = 'default' 96 97 # Theme options are theme-specific and customize the look and feel of a theme 98 # further. For a list of options available for each theme, see the 99 # documentation. 100 # html_theme_options = {} 101 102 # Add any paths that contain custom themes here, relative to this directory. 103 # html_theme_path = [] 104 105 # The name for this set of Sphinx documents. If None, it defaults to 106 # "<project> v<release> documentation". 107 # html_title = None 108 109 # A shorter title for the navigation bar. Default is the same as html_title. 110 # html_short_title = None 111 112 # The name of an image file (relative to this directory) to place at the top 113 # of the sidebar. 114 # html_logo = None 115 116 # The name of an image file (within the static path) to use as favicon of the 117 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 118 # pixels large. 119 # html_favicon = None 120 121 # Add any paths that contain custom static files (such as style sheets) here, 122 # relative to this directory. They are copied after the builtin static files, 123 # so a file named "default.css" will overwrite the builtin "default.css". 124 html_static_path = ['_static'] 125 126 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 127 # using the given strftime format. 128 # html_last_updated_fmt = '%b %d, %Y' 129 130 # If true, SmartyPants will be used to convert quotes and dashes to 131 # typographically correct entities. 132 # html_use_smartypants = True 133 134 # Custom sidebar templates, maps document names to template names. 135 # html_sidebars = {} 136 137 # Additional templates that should be rendered to pages, maps page names to 138 # template names. 139 # html_additional_pages = {} 140 141 # If false, no module index is generated. 142 # html_domain_indices = True 143 144 # If false, no index is generated. 145 # html_use_index = True 146 147 # If true, the index is split into individual pages for each letter. 148 # html_split_index = False 149 150 # If true, links to the reST sources are added to the pages. 151 # html_show_sourcelink = True 152 153 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 154 # html_show_sphinx = True 155 156 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 157 # html_show_copyright = True 158 159 # If true, an OpenSearch description file will be output, and all pages will 160 # contain a <link> tag referring to it. The value of this option must be the 161 # base URL from which the finished HTML is served. 162 # html_use_opensearch = '' 163 164 # This is the file name suffix for HTML files (e.g. ".xhtml"). 165 # html_file_suffix = None 166 167 # Output file base name for HTML help builder. 168 htmlhelp_basename = 'bodhidoc' 169 170 171 # -- Options for LaTeX output -------------------------------------------------- 172 173 latex_elements = { 174 # The paper size ('letterpaper' or 'a4paper'). 175 # 'papersize': 'letterpaper', 176 177 # The font size ('10pt', '11pt' or '12pt'). 178 # 'pointsize': '10pt', 179 180 # Additional stuff for the LaTeX preamble. 181 # 'preamble': '', 182 } 183 184 # Grouping the document tree into LaTeX files. List of tuples 185 # (source start file, target name, title, author, documentclass [howto/manual]). 186 latex_documents = [ 187 ('index', 'bodhi.tex', u'bodhi Documentation', 188 u'Luke Macken', 'manual'), 189 ] 190 191 # The name of an image file (relative to this directory) to place at the top of 192 # the title page. 193 # latex_logo = None 194 195 # For "manual" documents, if this is true, then toplevel headings are parts, 196 # not chapters. 197 # latex_use_parts = False 198 199 # If true, show page references after internal links. 200 # latex_show_pagerefs = False 201 202 # If true, show URL addresses after external links. 203 # latex_show_urls = False 204 205 # Documents to append as an appendix to all manuals. 206 # latex_appendices = [] 207 208 # If false, no module index is generated. 209 # latex_domain_indices = True 210 211 212 # -- Options for manual page output -------------------------------------------- 213 214 # One entry per manual page. List of tuples 215 # (source start file, name, description, authors, manual section). 216 man_pages = [ 217 ('man_bodhi', 'bodhi', u'manage Fedora updates', ['Randy Barlow', 'Luke Macken'], 1), 218 ('man_bodhi-push', 'bodhi-push', u'push Fedora updates', ['Randy Barlow'], 1), 219 ('man_initialize_bodhi_db', 'initialize_bodhi_db', u'intialize bodhi\'s database', 220 ['Randy Barlow'], 1) 221 ] 222 223 # If true, show URL addresses after external links. 224 # man_show_urls = False 225 226 227 # -- Options for Texinfo output ------------------------------------------------ 228 229 # Grouping the document tree into Texinfo files. List of tuples 230 # (source start file, target name, title, author, 231 # dir menu entry, description, category) 232 texinfo_documents = [ 233 ('index', 'bodhi', u'bodhi Documentation', 234 u'Luke Macken', 'bodhi', 'One line description of project.', 235 'Miscellaneous'), 236 ] 237 238 # Documents to append as an appendix to all manuals. 239 # texinfo_appendices = [] 240 241 # If false, no module index is generated. 242 # texinfo_domain_indices = True 243 244 # How to display URL addresses: 'footnote', 'no', or 'inline'. 245 # texinfo_show_urls = 'footnote' 246 247 248 # Example configuration for intersphinx: refer to the Python standard library. 249 intersphinx_mapping = {'http://docs.python.org/': None} 250 [end of docs/conf.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/docs/conf.py b/docs/conf.py --- a/docs/conf.py +++ b/docs/conf.py @@ -11,12 +11,14 @@ # All configuration values have a default; values that are commented out # serve to show the default. import datetime +import os +import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. -# sys.path.insert(0, os.path.abspath('.')) +sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../'))) # -- General configuration ----------------------------------------------------- @@ -92,7 +94,7 @@ # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. -html_theme = 'default' +html_theme = 'alabaster' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -11,12 +11,14 @@\n # All configuration values have a default; values that are commented out\n # serve to show the default.\n import datetime\n+import os\n+import sys\n \n \n # If extensions (or modules to document with autodoc) are in another directory,\n # add these directories to sys.path here. If the directory is relative to the\n # documentation root, use os.path.abspath to make it absolute, like shown here.\n-# sys.path.insert(0, os.path.abspath('.'))\n+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../')))\n \n # -- General configuration -----------------------------------------------------\n \n@@ -92,7 +94,7 @@\n \n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n-html_theme = 'default'\n+html_theme = 'alabaster'\n \n # Theme options are theme-specific and customize the look and feel of a theme\n # further. For a list of options available for each theme, see the\n", "issue": "The docs on readthedocs are missing the Python bindings\nOur RTD page is missing the Python bindings:\r\n\r\nhttp://bodhi.readthedocs.io/en/latest/python_bindings.html\r\n\r\nVery likely this is due to our ```docs/conf.py``` file not injecting the root of our repo into sys.path.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# bodhi documentation build configuration file, created by\n# sphinx-quickstart on Sat Aug 10 09:29:50 2013.\n#\n# This file is execfile()d with the current directory set to its containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\nimport datetime\n\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration -----------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be extensions\n# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.\nextensions = ['sphinx.ext.autodoc',\n 'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.viewcode']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'bodhi'\ncopyright = u'2007-{}, Red Hat, Inc.'.format(datetime.datetime.utcnow().year)\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '2.4'\n# The full version, including alpha/beta/rc tags.\nrelease = '2.4.0'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n# language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n\n# -- Options for HTML output ---------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'default'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n# html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'bodhidoc'\n\n\n# -- Options for LaTeX output --------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n # 'preamble': '',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title, author, documentclass [howto/manual]).\nlatex_documents = [\n ('index', 'bodhi.tex', u'bodhi Documentation',\n u'Luke Macken', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n# latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output --------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n ('man_bodhi', 'bodhi', u'manage Fedora updates', ['Randy Barlow', 'Luke Macken'], 1),\n ('man_bodhi-push', 'bodhi-push', u'push Fedora updates', ['Randy Barlow'], 1),\n ('man_initialize_bodhi_db', 'initialize_bodhi_db', u'intialize bodhi\\'s database',\n ['Randy Barlow'], 1)\n]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output ------------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n ('index', 'bodhi', u'bodhi Documentation',\n u'Luke Macken', 'bodhi', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n# texinfo_show_urls = 'footnote'\n\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'http://docs.python.org/': None}\n", "path": "docs/conf.py"}]}
3,312
252
gh_patches_debug_33906
rasdani/github-patches
git_diff
intel__dffml-526
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> operation: io: Examples not being run Due to the lack of newline between the sentence in the examples section and the python prompt. The sphinx doctest plugin isn't running the examples for the IO operations. https://github.com/intel/dffml/blob/b20e40ea444e9e6091d7702895ab242e33312da8/dffml/operation/io.py#L54-L55 https://github.com/intel/dffml/blob/b20e40ea444e9e6091d7702895ab242e33312da8/dffml/operation/io.py#L108-L109 Also, to make the call to `input` return the desired value, we probably need to modify `docs/doctest_header.py` to use `uinttest.mock.patch` in a similar way to the tests, but instead use the `.start()` call (no need to use `.stop()`) https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch ``` >>> Original = Class >>> patcher = patch('__main__.Class', spec=True) >>> MockClass = patcher.start() >>> instance = MockClass() >>> assert isinstance(instance, Original) >>> patcher.stop() ``` </issue> <code> [start of docs/doctest_header.py] 1 # This file is used as a header in every file that is created to run each 2 # example when the doctests are run. 3 import os 4 import sys 5 import shutil 6 import atexit 7 import inspect 8 import asyncio 9 import tempfile 10 import functools 11 12 # Create a temporary directory for test to run in 13 DOCTEST_TEMPDIR = tempfile.mkdtemp() 14 # Remove it when the test exits 15 atexit.register(functools.partial(shutil.rmtree, DOCTEST_TEMPDIR)) 16 # Change the current working directory to the temporary directory 17 os.chdir(DOCTEST_TEMPDIR) 18 19 from dffml_model_scikit import * 20 from dffml import * 21 from dffml.base import * 22 from dffml.record import * 23 from dffml.df.base import * 24 from dffml.df.types import * 25 from dffml.df.memory import * 26 from dffml.util.net import * 27 from dffml.operation.output import * 28 from dffml.operation.dataflow import * 29 from dffml.source.memory import * 30 [end of docs/doctest_header.py] [start of dffml/operation/io.py] 1 import asyncio 2 import concurrent.futures 3 from typing import Dict, Any 4 5 from dffml.df.types import Operation, Definition 6 from dffml.df.base import ( 7 op, 8 OperationImplementationContext, 9 OperationImplementation, 10 ) 11 12 13 # Definitions 14 UserInput = Definition(name="UserInput", primitive="str") 15 DataToPrint = Definition(name="DataToPrint", primitive="str") 16 17 AcceptUserInput = Operation( 18 name="AcceptUserInput", 19 inputs={}, 20 outputs={"InputData": UserInput}, 21 conditions=[], 22 ) 23 24 25 class AcceptUserInputContext(OperationImplementationContext): 26 @staticmethod 27 def receive_input(): 28 return input() 29 30 async def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]: 31 user_input = await self.parent.loop.run_in_executor( 32 self.parent.pool, self.receive_input 33 ) 34 return {"InputData": {"data": user_input}} 35 36 37 class AcceptUserInput(OperationImplementation): 38 """ 39 Accept input from stdin using python input() 40 41 Parameters 42 ++++++++++ 43 inputs : dict 44 A dictionary with a key and empty list as value. 45 46 Returns 47 +++++++ 48 dict 49 A dictionary containing user input. 50 51 Examples 52 ++++++++ 53 54 The following example shows how to use AcceptUserInput. 55 >>> dataflow = DataFlow.auto(AcceptUserInput, GetSingle) 56 >>> dataflow.seed.append( 57 ... Input( 58 ... value=[AcceptUserInput.op.outputs["InputData"].name], 59 ... definition=GetSingle.op.inputs["spec"] 60 ... ) 61 ... ) 62 >>> 63 >>> async def main(): 64 ... async for ctx, results in MemoryOrchestrator.run(dataflow, {"input":[]}): 65 ... print(results) 66 >>> 67 >>> asyncio.run(main()) 68 {'UserInput': {'data': 'Data flow is awesome'}} 69 """ 70 71 op = AcceptUserInput 72 CONTEXT = AcceptUserInputContext 73 74 def __init__(self, *args, **kwargs): 75 super().__init__(*args, **kwargs) 76 self.loop = None 77 self.pool = None 78 self.__pool = None 79 80 async def __aenter__(self) -> "OperationImplementationContext": 81 self.loop = asyncio.get_event_loop() 82 self.pool = concurrent.futures.ThreadPoolExecutor() 83 self.__pool = self.pool.__enter__() 84 return self 85 86 async def __aexit__(self, exc_type, exc_value, traceback): 87 self.__pool.__exit__(exc_type, exc_value, traceback) 88 self.__pool = None 89 self.pool = None 90 self.loop = None 91 92 93 @op( 94 inputs={"data": DataToPrint}, outputs={}, conditions=[], 95 ) 96 async def print_output(data: str): 97 """ 98 Print the output on stdout using python print() 99 100 Parameters 101 ++++++++++ 102 inputs : list 103 A list of Inputs whose value is to be printed. 104 105 Examples 106 ++++++++ 107 108 The following example shows how to use print_output. 109 >>> dataflow = DataFlow.auto(print_output, GetSingle) 110 >>> inputs = [ 111 ... Input( 112 ... value="print_output example", 113 ... definition=dataflow.definitions["DataToPrint"], 114 ... parents=None,)] 115 >>> 116 >>> async def main(): 117 ... async for ctx, results in MemoryOrchestrator.run(dataflow, inputs): 118 ... print("String to be printed is 'print_output example'") 119 >>> 120 >>> asyncio.run(main()) 121 print_output example 122 String to be printed is 'print_output example' 123 """ 124 print("\n" + data) 125 [end of dffml/operation/io.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/dffml/operation/io.py b/dffml/operation/io.py --- a/dffml/operation/io.py +++ b/dffml/operation/io.py @@ -52,6 +52,7 @@ ++++++++ The following example shows how to use AcceptUserInput. + >>> dataflow = DataFlow.auto(AcceptUserInput, GetSingle) >>> dataflow.seed.append( ... Input( @@ -106,6 +107,7 @@ ++++++++ The following example shows how to use print_output. + >>> dataflow = DataFlow.auto(print_output, GetSingle) >>> inputs = [ ... Input( @@ -121,4 +123,4 @@ print_output example String to be printed is 'print_output example' """ - print("\n" + data) + print(data) diff --git a/docs/doctest_header.py b/docs/doctest_header.py --- a/docs/doctest_header.py +++ b/docs/doctest_header.py @@ -7,7 +7,9 @@ import inspect import asyncio import tempfile +import builtins import functools +from unittest import mock # Create a temporary directory for test to run in DOCTEST_TEMPDIR = tempfile.mkdtemp() @@ -16,14 +18,18 @@ # Change the current working directory to the temporary directory os.chdir(DOCTEST_TEMPDIR) -from dffml_model_scikit import * from dffml import * from dffml.base import * from dffml.record import * from dffml.df.base import * from dffml.df.types import * -from dffml.df.memory import * from dffml.util.net import * +from dffml.df.memory import * +from dffml_model_scikit import * +from dffml.operation.io import * +from dffml.source.memory import * from dffml.operation.output import * from dffml.operation.dataflow import * -from dffml.source.memory import * + +# Used for mocking input() for AcceptUserInput operation. +mock.patch("builtins.input", return_value="Data flow is awesome").start()
{"golden_diff": "diff --git a/dffml/operation/io.py b/dffml/operation/io.py\n--- a/dffml/operation/io.py\n+++ b/dffml/operation/io.py\n@@ -52,6 +52,7 @@\n ++++++++\n \n The following example shows how to use AcceptUserInput.\n+\n >>> dataflow = DataFlow.auto(AcceptUserInput, GetSingle)\n >>> dataflow.seed.append(\n ... Input(\n@@ -106,6 +107,7 @@\n ++++++++\n \n The following example shows how to use print_output.\n+\n >>> dataflow = DataFlow.auto(print_output, GetSingle)\n >>> inputs = [\n ... Input(\n@@ -121,4 +123,4 @@\n print_output example\n String to be printed is 'print_output example'\n \"\"\"\n- print(\"\\n\" + data)\n+ print(data)\ndiff --git a/docs/doctest_header.py b/docs/doctest_header.py\n--- a/docs/doctest_header.py\n+++ b/docs/doctest_header.py\n@@ -7,7 +7,9 @@\n import inspect\n import asyncio\n import tempfile\n+import builtins\n import functools\n+from unittest import mock\n \n # Create a temporary directory for test to run in\n DOCTEST_TEMPDIR = tempfile.mkdtemp()\n@@ -16,14 +18,18 @@\n # Change the current working directory to the temporary directory\n os.chdir(DOCTEST_TEMPDIR)\n \n-from dffml_model_scikit import *\n from dffml import *\n from dffml.base import *\n from dffml.record import *\n from dffml.df.base import *\n from dffml.df.types import *\n-from dffml.df.memory import *\n from dffml.util.net import *\n+from dffml.df.memory import *\n+from dffml_model_scikit import *\n+from dffml.operation.io import *\n+from dffml.source.memory import *\n from dffml.operation.output import *\n from dffml.operation.dataflow import *\n-from dffml.source.memory import *\n+\n+# Used for mocking input() for AcceptUserInput operation.\n+mock.patch(\"builtins.input\", return_value=\"Data flow is awesome\").start()\n", "issue": "operation: io: Examples not being run\nDue to the lack of newline between the sentence in the examples section and the python prompt. The sphinx doctest plugin isn't running the examples for the IO operations.\r\n\r\nhttps://github.com/intel/dffml/blob/b20e40ea444e9e6091d7702895ab242e33312da8/dffml/operation/io.py#L54-L55\r\n\r\nhttps://github.com/intel/dffml/blob/b20e40ea444e9e6091d7702895ab242e33312da8/dffml/operation/io.py#L108-L109\r\n\r\nAlso, to make the call to `input` return the desired value, we probably need to modify `docs/doctest_header.py` to use `uinttest.mock.patch` in a similar way to the tests, but instead use the `.start()` call (no need to use `.stop()`)\r\n\r\nhttps://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch\r\n\r\n```\r\n>>> Original = Class\r\n>>> patcher = patch('__main__.Class', spec=True)\r\n>>> MockClass = patcher.start()\r\n>>> instance = MockClass()\r\n>>> assert isinstance(instance, Original)\r\n>>> patcher.stop()\r\n```\n", "before_files": [{"content": "# This file is used as a header in every file that is created to run each\n# example when the doctests are run.\nimport os\nimport sys\nimport shutil\nimport atexit\nimport inspect\nimport asyncio\nimport tempfile\nimport functools\n\n# Create a temporary directory for test to run in\nDOCTEST_TEMPDIR = tempfile.mkdtemp()\n# Remove it when the test exits\natexit.register(functools.partial(shutil.rmtree, DOCTEST_TEMPDIR))\n# Change the current working directory to the temporary directory\nos.chdir(DOCTEST_TEMPDIR)\n\nfrom dffml_model_scikit import *\nfrom dffml import *\nfrom dffml.base import *\nfrom dffml.record import *\nfrom dffml.df.base import *\nfrom dffml.df.types import *\nfrom dffml.df.memory import *\nfrom dffml.util.net import *\nfrom dffml.operation.output import *\nfrom dffml.operation.dataflow import *\nfrom dffml.source.memory import *\n", "path": "docs/doctest_header.py"}, {"content": "import asyncio\nimport concurrent.futures\nfrom typing import Dict, Any\n\nfrom dffml.df.types import Operation, Definition\nfrom dffml.df.base import (\n op,\n OperationImplementationContext,\n OperationImplementation,\n)\n\n\n# Definitions\nUserInput = Definition(name=\"UserInput\", primitive=\"str\")\nDataToPrint = Definition(name=\"DataToPrint\", primitive=\"str\")\n\nAcceptUserInput = Operation(\n name=\"AcceptUserInput\",\n inputs={},\n outputs={\"InputData\": UserInput},\n conditions=[],\n)\n\n\nclass AcceptUserInputContext(OperationImplementationContext):\n @staticmethod\n def receive_input():\n return input()\n\n async def run(self, inputs: Dict[str, Any]) -> Dict[str, Any]:\n user_input = await self.parent.loop.run_in_executor(\n self.parent.pool, self.receive_input\n )\n return {\"InputData\": {\"data\": user_input}}\n\n\nclass AcceptUserInput(OperationImplementation):\n \"\"\"\n Accept input from stdin using python input()\n\n Parameters\n ++++++++++\n inputs : dict\n A dictionary with a key and empty list as value.\n\n Returns\n +++++++\n dict\n A dictionary containing user input.\n\n Examples\n ++++++++\n\n The following example shows how to use AcceptUserInput.\n >>> dataflow = DataFlow.auto(AcceptUserInput, GetSingle)\n >>> dataflow.seed.append(\n ... Input(\n ... value=[AcceptUserInput.op.outputs[\"InputData\"].name],\n ... definition=GetSingle.op.inputs[\"spec\"]\n ... )\n ... )\n >>>\n >>> async def main():\n ... async for ctx, results in MemoryOrchestrator.run(dataflow, {\"input\":[]}):\n ... print(results)\n >>>\n >>> asyncio.run(main())\n {'UserInput': {'data': 'Data flow is awesome'}}\n \"\"\"\n\n op = AcceptUserInput\n CONTEXT = AcceptUserInputContext\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.loop = None\n self.pool = None\n self.__pool = None\n\n async def __aenter__(self) -> \"OperationImplementationContext\":\n self.loop = asyncio.get_event_loop()\n self.pool = concurrent.futures.ThreadPoolExecutor()\n self.__pool = self.pool.__enter__()\n return self\n\n async def __aexit__(self, exc_type, exc_value, traceback):\n self.__pool.__exit__(exc_type, exc_value, traceback)\n self.__pool = None\n self.pool = None\n self.loop = None\n\n\n@op(\n inputs={\"data\": DataToPrint}, outputs={}, conditions=[],\n)\nasync def print_output(data: str):\n \"\"\"\n Print the output on stdout using python print()\n\n Parameters\n ++++++++++\n inputs : list\n A list of Inputs whose value is to be printed.\n\n Examples\n ++++++++\n\n The following example shows how to use print_output.\n >>> dataflow = DataFlow.auto(print_output, GetSingle)\n >>> inputs = [\n ... Input(\n ... value=\"print_output example\",\n ... definition=dataflow.definitions[\"DataToPrint\"],\n ... parents=None,)]\n >>>\n >>> async def main():\n ... async for ctx, results in MemoryOrchestrator.run(dataflow, inputs):\n ... print(\"String to be printed is 'print_output example'\")\n >>>\n >>> asyncio.run(main())\n print_output example\n String to be printed is 'print_output example'\n \"\"\"\n print(\"\\n\" + data)\n", "path": "dffml/operation/io.py"}]}
2,159
478
gh_patches_debug_42011
rasdani/github-patches
git_diff
freedomofpress__securedrop-3724
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Update OSSEC to v3.0 ## Description OSSEC 3.0 was released on July 17th 2018[0], containing a large amount of bug fixes (including 2 security fixes) as well as new major functionality. Of note, it supports whitelisting syscheck md5 hashes in a sqlite database, potentially reducing notification noise. ## User Research Evidence Users like up-to-date packages ## User Stories As a SecureDrop administrator, I would like to have all packages updated and would like to minimize alerts/noise. [0] https://github.com/ossec/ossec-hids/releases </issue> <code> [start of install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py] 1 #!/usr/bin/env python 2 DOCUMENTATION = ''' 3 --- 4 module: ossec_urls 5 short_description: Gather facts for OSSEC download URLs 6 description: 7 - Gather version, checksum, and URL info for OSSEC downloads 8 author: 9 - Conor Schaefer (@conorsch) 10 - Freedom of the Press Foundation (@freedomofpress) 11 requirements: 12 - requests 13 options: 14 ossec_version: 15 description: 16 - version number of release to download 17 default: "2.8.2" 18 required: no 19 notes: 20 - The OSSEC version to download is hardcoded to avoid surprises. 21 If you want a newer version than the current default, you should 22 pass the version in via I(ossec_version). 23 ''' 24 EXAMPLES = ''' 25 - ossec_urls: 26 ossec_version: "2.8.2" 27 ''' 28 29 import re # noqa: E402 30 31 32 HAS_REQUESTS = True 33 try: 34 import requests 35 except ImportError: 36 HAS_REQUESTS = False 37 38 39 class OSSECURLs(): 40 41 def __init__(self, ossec_version): 42 self.ossec_version = ossec_version 43 44 checksums = self.parse_checksums() 45 46 self.ansible_facts = dict( 47 ossec_version=self.ossec_version, 48 ossec_tarball_filename=self.ossec_tarball_filename, 49 ossec_tarball_url=self.ossec_tarball_url, 50 ossec_checksum_filename=self.ossec_checksum_filename, 51 ossec_checksum_url=self.ossec_checksum_url, 52 ) 53 54 self.ansible_facts.update(checksums) 55 56 @property 57 def ossec_tarball_filename(self): 58 return "ossec-hids-{}.tar.gz".format(self.ossec_version) 59 60 @property 61 def ossec_tarball_url(self): 62 return "https://github.com/ossec/ossec-hids/archive/{}.tar.gz".format( 63 self.ossec_version) 64 65 @property 66 def ossec_checksum_url(self): 67 return "https://github.com/ossec/ossec-hids/releases/download/{}/{}".format( # noqa: E501 68 self.ossec_version, self.ossec_checksum_filename) 69 70 @property 71 def ossec_checksum_filename(self): 72 return "{}-checksum.txt".format(self.ossec_tarball_filename) 73 74 def parse_checksums(self): 75 r = requests.get(self.ossec_checksum_url) 76 checksum_regex = re.compile(r''' 77 ^MD5\( 78 ''' 79 + re.escape(self.ossec_tarball_filename) + 80 r'''\)=\s+(?P<ossec_md5_checksum>[0-9a-f]{32})\s+ 81 SHA1\( 82 ''' 83 + re.escape(self.ossec_tarball_filename) + 84 r'''\)=\s+(?P<ossec_sha1_checksum>[0-9a-f]{40})$ 85 ''', re.VERBOSE | re.MULTILINE 86 ) 87 checksum_list = r.content.rstrip() 88 results = re.match(checksum_regex, checksum_list).groupdict() 89 return results 90 91 92 def main(): 93 module = AnsibleModule( # noqa: F405 94 argument_spec=dict( 95 ossec_version=dict(default="2.8.2"), 96 ), 97 supports_check_mode=False 98 ) 99 if not HAS_REQUESTS: 100 module.fail_json(msg='requests required for this module') 101 102 ossec_version = module.params['ossec_version'] 103 try: 104 ossec_config = OSSECURLs(ossec_version=ossec_version) 105 except: # noqa: E722 106 msg = ("Failed to find checksum information for OSSEC v{}." 107 "Ensure you have the proper release specified, " 108 "and check the download page to confirm: " 109 "http://www.ossec.net/?page_id=19".format(ossec_version)) 110 module.fail_json(msg=msg) 111 112 results = ossec_config.ansible_facts 113 114 if results: 115 module.exit_json(changed=False, ansible_facts=results) 116 else: 117 msg = "Failed to fetch OSSEC URL facts." 118 module.fail_json(msg=msg) 119 120 121 from ansible.module_utils.basic import * # noqa E402,F403 122 main() 123 [end of install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py b/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py --- a/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py +++ b/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py @@ -14,7 +14,7 @@ ossec_version: description: - version number of release to download - default: "2.8.2" + default: "3.0.0" required: no notes: - The OSSEC version to download is hardcoded to avoid surprises. @@ -23,15 +23,15 @@ ''' EXAMPLES = ''' - ossec_urls: - ossec_version: "2.8.2" + ossec_version: "3.0.0" ''' -import re # noqa: E402 +import re # noqa: F401 HAS_REQUESTS = True try: - import requests + import requests # noqa: F401 except ImportError: HAS_REQUESTS = False @@ -39,60 +39,38 @@ class OSSECURLs(): def __init__(self, ossec_version): + self.REPO_URL = "https://github.com/ossec/ossec-hids" self.ossec_version = ossec_version - - checksums = self.parse_checksums() - self.ansible_facts = dict( ossec_version=self.ossec_version, ossec_tarball_filename=self.ossec_tarball_filename, ossec_tarball_url=self.ossec_tarball_url, - ossec_checksum_filename=self.ossec_checksum_filename, - ossec_checksum_url=self.ossec_checksum_url, + ossec_signature_filename=self.ossec_signature_filename, + ossec_signature_url=self.ossec_signature_url, ) - self.ansible_facts.update(checksums) - @property def ossec_tarball_filename(self): return "ossec-hids-{}.tar.gz".format(self.ossec_version) @property def ossec_tarball_url(self): - return "https://github.com/ossec/ossec-hids/archive/{}.tar.gz".format( - self.ossec_version) + return self.REPO_URL + "/archive/{}.tar.gz".format(self.ossec_version) @property - def ossec_checksum_url(self): - return "https://github.com/ossec/ossec-hids/releases/download/{}/{}".format( # noqa: E501 - self.ossec_version, self.ossec_checksum_filename) + def ossec_signature_url(self): + return self.REPO_URL + "/releases/download/{}/{}".format( + self.ossec_version, self.ossec_signature_filename) @property - def ossec_checksum_filename(self): - return "{}-checksum.txt".format(self.ossec_tarball_filename) - - def parse_checksums(self): - r = requests.get(self.ossec_checksum_url) - checksum_regex = re.compile(r''' - ^MD5\( - ''' - + re.escape(self.ossec_tarball_filename) + - r'''\)=\s+(?P<ossec_md5_checksum>[0-9a-f]{32})\s+ - SHA1\( - ''' - + re.escape(self.ossec_tarball_filename) + - r'''\)=\s+(?P<ossec_sha1_checksum>[0-9a-f]{40})$ - ''', re.VERBOSE | re.MULTILINE - ) - checksum_list = r.content.rstrip() - results = re.match(checksum_regex, checksum_list).groupdict() - return results + def ossec_signature_filename(self): + return "ossec-hids-{}.tar.gz.asc".format(self.ossec_version) def main(): module = AnsibleModule( # noqa: F405 argument_spec=dict( - ossec_version=dict(default="2.8.2"), + ossec_version=dict(default="3.0.0"), ), supports_check_mode=False )
{"golden_diff": "diff --git a/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py b/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py\n--- a/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py\n+++ b/install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py\n@@ -14,7 +14,7 @@\n ossec_version:\n description:\n - version number of release to download\n- default: \"2.8.2\"\n+ default: \"3.0.0\"\n required: no\n notes:\n - The OSSEC version to download is hardcoded to avoid surprises.\n@@ -23,15 +23,15 @@\n '''\n EXAMPLES = '''\n - ossec_urls:\n- ossec_version: \"2.8.2\"\n+ ossec_version: \"3.0.0\"\n '''\n \n-import re # noqa: E402\n+import re # noqa: F401\n \n \n HAS_REQUESTS = True\n try:\n- import requests\n+ import requests # noqa: F401\n except ImportError:\n HAS_REQUESTS = False\n \n@@ -39,60 +39,38 @@\n class OSSECURLs():\n \n def __init__(self, ossec_version):\n+ self.REPO_URL = \"https://github.com/ossec/ossec-hids\"\n self.ossec_version = ossec_version\n-\n- checksums = self.parse_checksums()\n-\n self.ansible_facts = dict(\n ossec_version=self.ossec_version,\n ossec_tarball_filename=self.ossec_tarball_filename,\n ossec_tarball_url=self.ossec_tarball_url,\n- ossec_checksum_filename=self.ossec_checksum_filename,\n- ossec_checksum_url=self.ossec_checksum_url,\n+ ossec_signature_filename=self.ossec_signature_filename,\n+ ossec_signature_url=self.ossec_signature_url,\n )\n \n- self.ansible_facts.update(checksums)\n-\n @property\n def ossec_tarball_filename(self):\n return \"ossec-hids-{}.tar.gz\".format(self.ossec_version)\n \n @property\n def ossec_tarball_url(self):\n- return \"https://github.com/ossec/ossec-hids/archive/{}.tar.gz\".format(\n- self.ossec_version)\n+ return self.REPO_URL + \"/archive/{}.tar.gz\".format(self.ossec_version)\n \n @property\n- def ossec_checksum_url(self):\n- return \"https://github.com/ossec/ossec-hids/releases/download/{}/{}\".format( # noqa: E501\n- self.ossec_version, self.ossec_checksum_filename)\n+ def ossec_signature_url(self):\n+ return self.REPO_URL + \"/releases/download/{}/{}\".format(\n+ self.ossec_version, self.ossec_signature_filename)\n \n @property\n- def ossec_checksum_filename(self):\n- return \"{}-checksum.txt\".format(self.ossec_tarball_filename)\n-\n- def parse_checksums(self):\n- r = requests.get(self.ossec_checksum_url)\n- checksum_regex = re.compile(r'''\n- ^MD5\\(\n- '''\n- + re.escape(self.ossec_tarball_filename) +\n- r'''\\)=\\s+(?P<ossec_md5_checksum>[0-9a-f]{32})\\s+\n- SHA1\\(\n- '''\n- + re.escape(self.ossec_tarball_filename) +\n- r'''\\)=\\s+(?P<ossec_sha1_checksum>[0-9a-f]{40})$\n- ''', re.VERBOSE | re.MULTILINE\n- )\n- checksum_list = r.content.rstrip()\n- results = re.match(checksum_regex, checksum_list).groupdict()\n- return results\n+ def ossec_signature_filename(self):\n+ return \"ossec-hids-{}.tar.gz.asc\".format(self.ossec_version)\n \n \n def main():\n module = AnsibleModule( # noqa: F405\n argument_spec=dict(\n- ossec_version=dict(default=\"2.8.2\"),\n+ ossec_version=dict(default=\"3.0.0\"),\n ),\n supports_check_mode=False\n )\n", "issue": "Update OSSEC to v3.0\n## Description\r\n\r\nOSSEC 3.0 was released on July 17th 2018[0], containing a large amount of bug fixes (including 2 security fixes) as well as new major functionality. Of note, it supports whitelisting syscheck md5 hashes in a sqlite database, potentially reducing notification noise.\r\n\r\n## User Research Evidence\r\n\r\nUsers like up-to-date packages\r\n\r\n## User Stories\r\n\r\nAs a SecureDrop administrator, I would like to have all packages updated and would like to minimize alerts/noise.\r\n\r\n[0] https://github.com/ossec/ossec-hids/releases\n", "before_files": [{"content": "#!/usr/bin/env python\nDOCUMENTATION = '''\n---\nmodule: ossec_urls\nshort_description: Gather facts for OSSEC download URLs\ndescription:\n - Gather version, checksum, and URL info for OSSEC downloads\nauthor:\n - Conor Schaefer (@conorsch)\n - Freedom of the Press Foundation (@freedomofpress)\nrequirements:\n - requests\noptions:\n ossec_version:\n description:\n - version number of release to download\n default: \"2.8.2\"\n required: no\nnotes:\n - The OSSEC version to download is hardcoded to avoid surprises.\n If you want a newer version than the current default, you should\n pass the version in via I(ossec_version).\n'''\nEXAMPLES = '''\n- ossec_urls:\n ossec_version: \"2.8.2\"\n'''\n\nimport re # noqa: E402\n\n\nHAS_REQUESTS = True\ntry:\n import requests\nexcept ImportError:\n HAS_REQUESTS = False\n\n\nclass OSSECURLs():\n\n def __init__(self, ossec_version):\n self.ossec_version = ossec_version\n\n checksums = self.parse_checksums()\n\n self.ansible_facts = dict(\n ossec_version=self.ossec_version,\n ossec_tarball_filename=self.ossec_tarball_filename,\n ossec_tarball_url=self.ossec_tarball_url,\n ossec_checksum_filename=self.ossec_checksum_filename,\n ossec_checksum_url=self.ossec_checksum_url,\n )\n\n self.ansible_facts.update(checksums)\n\n @property\n def ossec_tarball_filename(self):\n return \"ossec-hids-{}.tar.gz\".format(self.ossec_version)\n\n @property\n def ossec_tarball_url(self):\n return \"https://github.com/ossec/ossec-hids/archive/{}.tar.gz\".format(\n self.ossec_version)\n\n @property\n def ossec_checksum_url(self):\n return \"https://github.com/ossec/ossec-hids/releases/download/{}/{}\".format( # noqa: E501\n self.ossec_version, self.ossec_checksum_filename)\n\n @property\n def ossec_checksum_filename(self):\n return \"{}-checksum.txt\".format(self.ossec_tarball_filename)\n\n def parse_checksums(self):\n r = requests.get(self.ossec_checksum_url)\n checksum_regex = re.compile(r'''\n ^MD5\\(\n '''\n + re.escape(self.ossec_tarball_filename) +\n r'''\\)=\\s+(?P<ossec_md5_checksum>[0-9a-f]{32})\\s+\n SHA1\\(\n '''\n + re.escape(self.ossec_tarball_filename) +\n r'''\\)=\\s+(?P<ossec_sha1_checksum>[0-9a-f]{40})$\n ''', re.VERBOSE | re.MULTILINE\n )\n checksum_list = r.content.rstrip()\n results = re.match(checksum_regex, checksum_list).groupdict()\n return results\n\n\ndef main():\n module = AnsibleModule( # noqa: F405\n argument_spec=dict(\n ossec_version=dict(default=\"2.8.2\"),\n ),\n supports_check_mode=False\n )\n if not HAS_REQUESTS:\n module.fail_json(msg='requests required for this module')\n\n ossec_version = module.params['ossec_version']\n try:\n ossec_config = OSSECURLs(ossec_version=ossec_version)\n except: # noqa: E722\n msg = (\"Failed to find checksum information for OSSEC v{}.\"\n \"Ensure you have the proper release specified, \"\n \"and check the download page to confirm: \"\n \"http://www.ossec.net/?page_id=19\".format(ossec_version))\n module.fail_json(msg=msg)\n\n results = ossec_config.ansible_facts\n\n if results:\n module.exit_json(changed=False, ansible_facts=results)\n else:\n msg = \"Failed to fetch OSSEC URL facts.\"\n module.fail_json(msg=msg)\n\n\nfrom ansible.module_utils.basic import * # noqa E402,F403\nmain()\n", "path": "install_files/ansible-base/roles/build-ossec-deb-pkg/library/ossec_urls.py"}]}
1,856
942
gh_patches_debug_62140
rasdani/github-patches
git_diff
searx__searx-801
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Tags <xml> are hidden from result titles </issue> <code> [start of searx/engines/xpath.py] 1 from lxml import html 2 from urllib import urlencode, unquote 3 from urlparse import urlparse, urljoin 4 from lxml.etree import _ElementStringResult, _ElementUnicodeResult 5 from searx.utils import html_to_text 6 7 search_url = None 8 url_xpath = None 9 content_xpath = None 10 title_xpath = None 11 suggestion_xpath = '' 12 results_xpath = '' 13 14 # parameters for engines with paging support 15 # 16 # number of results on each page 17 # (only needed if the site requires not a page number, but an offset) 18 page_size = 1 19 # number of the first page (usually 0 or 1) 20 first_page_num = 1 21 22 23 ''' 24 if xpath_results is list, extract the text from each result and concat the list 25 if xpath_results is a xml element, extract all the text node from it 26 ( text_content() method from lxml ) 27 if xpath_results is a string element, then it's already done 28 ''' 29 30 31 def extract_text(xpath_results): 32 if type(xpath_results) == list: 33 # it's list of result : concat everything using recursive call 34 if not xpath_results: 35 raise Exception('Empty url resultset') 36 result = '' 37 for e in xpath_results: 38 result = result + extract_text(e) 39 return result.strip() 40 elif type(xpath_results) in [_ElementStringResult, _ElementUnicodeResult]: 41 # it's a string 42 return ''.join(xpath_results) 43 else: 44 # it's a element 45 return html_to_text(xpath_results.text_content()).strip() 46 47 48 def extract_url(xpath_results, search_url): 49 url = extract_text(xpath_results) 50 51 if url.startswith('//'): 52 # add http or https to this kind of url //example.com/ 53 parsed_search_url = urlparse(search_url) 54 url = parsed_search_url.scheme + url 55 elif url.startswith('/'): 56 # fix relative url to the search engine 57 url = urljoin(search_url, url) 58 59 # normalize url 60 url = normalize_url(url) 61 62 return url 63 64 65 def normalize_url(url): 66 parsed_url = urlparse(url) 67 68 # add a / at this end of the url if there is no path 69 if not parsed_url.netloc: 70 raise Exception('Cannot parse url') 71 if not parsed_url.path: 72 url += '/' 73 74 # FIXME : hack for yahoo 75 if parsed_url.hostname == 'search.yahoo.com'\ 76 and parsed_url.path.startswith('/r'): 77 p = parsed_url.path 78 mark = p.find('/**') 79 if mark != -1: 80 return unquote(p[mark + 3:]).decode('utf-8') 81 82 return url 83 84 85 def request(query, params): 86 query = urlencode({'q': query})[2:] 87 88 fp = {'query': query} 89 if paging and search_url.find('{pageno}') >= 0: 90 fp['pageno'] = (params['pageno'] - 1) * page_size + first_page_num 91 92 params['url'] = search_url.format(**fp) 93 params['query'] = query 94 95 return params 96 97 98 def response(resp): 99 results = [] 100 dom = html.fromstring(resp.text) 101 if results_xpath: 102 for result in dom.xpath(results_xpath): 103 url = extract_url(result.xpath(url_xpath), search_url) 104 title = extract_text(result.xpath(title_xpath)[0]) 105 content = extract_text(result.xpath(content_xpath)[0]) 106 results.append({'url': url, 'title': title, 'content': content}) 107 else: 108 for url, title, content in zip( 109 (extract_url(x, search_url) for 110 x in dom.xpath(url_xpath)), 111 map(extract_text, dom.xpath(title_xpath)), 112 map(extract_text, dom.xpath(content_xpath)) 113 ): 114 results.append({'url': url, 'title': title, 'content': content}) 115 116 if not suggestion_xpath: 117 return results 118 for suggestion in dom.xpath(suggestion_xpath): 119 results.append({'suggestion': extract_text(suggestion)}) 120 return results 121 [end of searx/engines/xpath.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/searx/engines/xpath.py b/searx/engines/xpath.py --- a/searx/engines/xpath.py +++ b/searx/engines/xpath.py @@ -42,7 +42,9 @@ return ''.join(xpath_results) else: # it's a element - return html_to_text(xpath_results.text_content()).strip() + text = html.tostring(xpath_results, encoding='unicode', method='text', with_tail=False) + text = text.strip().replace('\n', ' ') + return ' '.join(text.split()) def extract_url(xpath_results, search_url):
{"golden_diff": "diff --git a/searx/engines/xpath.py b/searx/engines/xpath.py\n--- a/searx/engines/xpath.py\n+++ b/searx/engines/xpath.py\n@@ -42,7 +42,9 @@\n return ''.join(xpath_results)\n else:\n # it's a element\n- return html_to_text(xpath_results.text_content()).strip()\n+ text = html.tostring(xpath_results, encoding='unicode', method='text', with_tail=False)\n+ text = text.strip().replace('\\n', ' ')\n+ return ' '.join(text.split())\n \n \n def extract_url(xpath_results, search_url):\n", "issue": "Tags <xml> are hidden from result titles\n\n", "before_files": [{"content": "from lxml import html\nfrom urllib import urlencode, unquote\nfrom urlparse import urlparse, urljoin\nfrom lxml.etree import _ElementStringResult, _ElementUnicodeResult\nfrom searx.utils import html_to_text\n\nsearch_url = None\nurl_xpath = None\ncontent_xpath = None\ntitle_xpath = None\nsuggestion_xpath = ''\nresults_xpath = ''\n\n# parameters for engines with paging support\n#\n# number of results on each page\n# (only needed if the site requires not a page number, but an offset)\npage_size = 1\n# number of the first page (usually 0 or 1)\nfirst_page_num = 1\n\n\n'''\nif xpath_results is list, extract the text from each result and concat the list\nif xpath_results is a xml element, extract all the text node from it\n ( text_content() method from lxml )\nif xpath_results is a string element, then it's already done\n'''\n\n\ndef extract_text(xpath_results):\n if type(xpath_results) == list:\n # it's list of result : concat everything using recursive call\n if not xpath_results:\n raise Exception('Empty url resultset')\n result = ''\n for e in xpath_results:\n result = result + extract_text(e)\n return result.strip()\n elif type(xpath_results) in [_ElementStringResult, _ElementUnicodeResult]:\n # it's a string\n return ''.join(xpath_results)\n else:\n # it's a element\n return html_to_text(xpath_results.text_content()).strip()\n\n\ndef extract_url(xpath_results, search_url):\n url = extract_text(xpath_results)\n\n if url.startswith('//'):\n # add http or https to this kind of url //example.com/\n parsed_search_url = urlparse(search_url)\n url = parsed_search_url.scheme + url\n elif url.startswith('/'):\n # fix relative url to the search engine\n url = urljoin(search_url, url)\n\n # normalize url\n url = normalize_url(url)\n\n return url\n\n\ndef normalize_url(url):\n parsed_url = urlparse(url)\n\n # add a / at this end of the url if there is no path\n if not parsed_url.netloc:\n raise Exception('Cannot parse url')\n if not parsed_url.path:\n url += '/'\n\n # FIXME : hack for yahoo\n if parsed_url.hostname == 'search.yahoo.com'\\\n and parsed_url.path.startswith('/r'):\n p = parsed_url.path\n mark = p.find('/**')\n if mark != -1:\n return unquote(p[mark + 3:]).decode('utf-8')\n\n return url\n\n\ndef request(query, params):\n query = urlencode({'q': query})[2:]\n\n fp = {'query': query}\n if paging and search_url.find('{pageno}') >= 0:\n fp['pageno'] = (params['pageno'] - 1) * page_size + first_page_num\n\n params['url'] = search_url.format(**fp)\n params['query'] = query\n\n return params\n\n\ndef response(resp):\n results = []\n dom = html.fromstring(resp.text)\n if results_xpath:\n for result in dom.xpath(results_xpath):\n url = extract_url(result.xpath(url_xpath), search_url)\n title = extract_text(result.xpath(title_xpath)[0])\n content = extract_text(result.xpath(content_xpath)[0])\n results.append({'url': url, 'title': title, 'content': content})\n else:\n for url, title, content in zip(\n (extract_url(x, search_url) for\n x in dom.xpath(url_xpath)),\n map(extract_text, dom.xpath(title_xpath)),\n map(extract_text, dom.xpath(content_xpath))\n ):\n results.append({'url': url, 'title': title, 'content': content})\n\n if not suggestion_xpath:\n return results\n for suggestion in dom.xpath(suggestion_xpath):\n results.append({'suggestion': extract_text(suggestion)})\n return results\n", "path": "searx/engines/xpath.py"}]}
1,661
144
gh_patches_debug_40192
rasdani/github-patches
git_diff
conan-io__conan-center-index-18702
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [package] tree-sitter-c/*: tree-sitter-c needs to be updated to conan v2 ### Description tree-sitter-c needs to wrok well with conan v2. ### Package and Environment Details * Package Name/Version: **tree-sitter-c/*** * Operating System+version: **Arch Linux** * Compiler+version: **GCC 13** * Docker image: **conanio/gcc8** * Conan version: **conan 2.0.4** * Python version: **Python 3.11.3** ### Conan profile ```shell Configuration for profile default: [settings] os=Linux os_build=Linux arch=x86_64 arch_build=x86_64 compiler=gcc compiler.version=13 compiler.libcxx=libstdc++11 build_type=Release [options] [conf] [build_requires] [env] ``` ### Steps to reproduce ```shell conan create all/conanfile.py --version 0.20.2 -pr:b=default -pr:h=default -s build_type=Release -o "tree-sitter-c/0.20.2:shared=False" ``` ### Logs <details><summary>Click to expand log</summary> ```shell ERROR: Error loading conanfile at '/home/whalien/codebase/misc/conan-center-index/recipes/tree-sitter-c/all/conanfile.py': Unable to load conanfile in /home/whalien/codebase/misc/conan-center-index/recipes/tree-sitter-c/all/conanfile.py File "<frozen importlib._bootstrap_external>", line 940, in exec_module File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed File "/home/whalien/codebase/misc/conan-center-index/recipes/tree-sitter-c/all/conanfile.py", line 1, in <module> from conans import CMake, ConanFile, tools ImportError: cannot import name 'CMake' from 'conans' (/usr/lib/python3.11/site-packages/conans/__init__.py) ``` </details> </issue> <code> [start of recipes/tree-sitter-c/all/conanfile.py] 1 from conans import CMake, ConanFile, tools 2 import functools 3 import os 4 5 required_conan_version = ">=1.33.0" 6 7 8 class TreeSitterCConan(ConanFile): 9 name = "tree-sitter-c" 10 description = "C grammar for tree-sitter." 11 topics = ("parser", "grammar", "tree", "c", "ide") 12 url = "https://github.com/conan-io/conan-center-index" 13 homepage = "https://github.com/tree-sitter/tree-sitter-c" 14 license = "MIT" 15 settings = "os", "arch", "compiler", "build_type" 16 options = { 17 "fPIC": [True, False], 18 "shared": [True, False], 19 } 20 default_options = { 21 "fPIC": True, 22 "shared": False, 23 } 24 25 generators = "cmake", "cmake_find_package_multi" 26 exports_sources = "CMakeLists.txt" 27 28 @property 29 def _source_subfolder(self): 30 return "source_subfolder" 31 32 def config_options(self): 33 if self.settings.os == "Windows": 34 del self.options.fPIC 35 36 def configure(self): 37 if self.options.shared: 38 del self.options.fPIC 39 del self.settings.compiler.libcxx 40 del self.settings.compiler.cppstd 41 42 def requirements(self): 43 self.requires("tree-sitter/0.20.0") 44 45 def source(self): 46 tools.get(**self.conan_data["sources"][self.version], 47 destination=self._source_subfolder, strip_root=True) 48 49 @functools.lru_cache(1) 50 def _configure_cmake(self): 51 cmake = CMake(self) 52 cmake.configure() 53 return cmake 54 55 def _patch_sources(self): 56 if not self.options.shared: 57 tools.replace_in_file( 58 os.path.join(self._source_subfolder, "src", "parser.c"), 59 "__declspec(dllexport)", "" 60 ) 61 62 def build(self): 63 self._patch_sources() 64 cmake = self._configure_cmake() 65 cmake.build() 66 67 def package(self): 68 self.copy("LICENSE", src=self._source_subfolder, dst="licenses") 69 cmake = self._configure_cmake() 70 cmake.install() 71 72 def package_info(self): 73 self.cpp_info.libs = ["tree-sitter-c"] 74 [end of recipes/tree-sitter-c/all/conanfile.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/recipes/tree-sitter-c/all/conanfile.py b/recipes/tree-sitter-c/all/conanfile.py --- a/recipes/tree-sitter-c/all/conanfile.py +++ b/recipes/tree-sitter-c/all/conanfile.py @@ -1,8 +1,10 @@ -from conans import CMake, ConanFile, tools -import functools +from conan import ConanFile +from conan.tools.cmake import CMake +from conan.tools.files import get, replace_in_file, copy +from conan.tools.layout import basic_layout import os -required_conan_version = ">=1.33.0" +required_conan_version = ">=1.53.0" class TreeSitterCConan(ConanFile): @@ -13,21 +15,22 @@ homepage = "https://github.com/tree-sitter/tree-sitter-c" license = "MIT" settings = "os", "arch", "compiler", "build_type" + package_type = "library" + generators = "CMakeToolchain", "CMakeDeps" options = { - "fPIC": [True, False], "shared": [True, False], + "fPIC": [True, False], } default_options = { - "fPIC": True, "shared": False, + "fPIC": True, } - generators = "cmake", "cmake_find_package_multi" + generators = "CMakeToolchain", "CMakeDeps" exports_sources = "CMakeLists.txt" - @property - def _source_subfolder(self): - return "source_subfolder" + def layout(self): + basic_layout(self, src_folder="src") def config_options(self): if self.settings.os == "Windows": @@ -35,38 +38,38 @@ def configure(self): if self.options.shared: - del self.options.fPIC - del self.settings.compiler.libcxx - del self.settings.compiler.cppstd - - def requirements(self): - self.requires("tree-sitter/0.20.0") + self.options.rm_safe("fPIC") + self.settings.rm_safe("compiler.cppstd") + self.settings.rm_safe("compiler.libcxx") def source(self): - tools.get(**self.conan_data["sources"][self.version], - destination=self._source_subfolder, strip_root=True) + get(self, **self.conan_data["sources"][self.version], strip_root=True) - @functools.lru_cache(1) - def _configure_cmake(self): - cmake = CMake(self) - cmake.configure() - return cmake + def requirements(self): + self.requires("tree-sitter/0.20.8", transitive_headers=True, transitive_libs=True) def _patch_sources(self): if not self.options.shared: - tools.replace_in_file( - os.path.join(self._source_subfolder, "src", "parser.c"), + replace_in_file( + self, + os.path.join(self.source_folder, "src", "parser.c"), "__declspec(dllexport)", "" ) def build(self): self._patch_sources() - cmake = self._configure_cmake() + cmake = CMake(self) + cmake.configure(build_script_folder=os.path.join(self.source_folder, os.pardir)) cmake.build() def package(self): - self.copy("LICENSE", src=self._source_subfolder, dst="licenses") - cmake = self._configure_cmake() + copy( + self, + "LICENSE", + src=self.source_folder, + dst=os.path.join(self.package_folder, "licenses"), + ) + cmake = CMake(self) cmake.install() def package_info(self):
{"golden_diff": "diff --git a/recipes/tree-sitter-c/all/conanfile.py b/recipes/tree-sitter-c/all/conanfile.py\n--- a/recipes/tree-sitter-c/all/conanfile.py\n+++ b/recipes/tree-sitter-c/all/conanfile.py\n@@ -1,8 +1,10 @@\n-from conans import CMake, ConanFile, tools\n-import functools\n+from conan import ConanFile\n+from conan.tools.cmake import CMake\n+from conan.tools.files import get, replace_in_file, copy\n+from conan.tools.layout import basic_layout\n import os\n \n-required_conan_version = \">=1.33.0\"\n+required_conan_version = \">=1.53.0\"\n \n \n class TreeSitterCConan(ConanFile):\n@@ -13,21 +15,22 @@\n homepage = \"https://github.com/tree-sitter/tree-sitter-c\"\n license = \"MIT\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n+ package_type = \"library\"\n+ generators = \"CMakeToolchain\", \"CMakeDeps\"\n options = {\n- \"fPIC\": [True, False],\n \"shared\": [True, False],\n+ \"fPIC\": [True, False],\n }\n default_options = {\n- \"fPIC\": True,\n \"shared\": False,\n+ \"fPIC\": True,\n }\n \n- generators = \"cmake\", \"cmake_find_package_multi\"\n+ generators = \"CMakeToolchain\", \"CMakeDeps\"\n exports_sources = \"CMakeLists.txt\"\n \n- @property\n- def _source_subfolder(self):\n- return \"source_subfolder\"\n+ def layout(self):\n+ basic_layout(self, src_folder=\"src\")\n \n def config_options(self):\n if self.settings.os == \"Windows\":\n@@ -35,38 +38,38 @@\n \n def configure(self):\n if self.options.shared:\n- del self.options.fPIC\n- del self.settings.compiler.libcxx\n- del self.settings.compiler.cppstd\n-\n- def requirements(self):\n- self.requires(\"tree-sitter/0.20.0\")\n+ self.options.rm_safe(\"fPIC\")\n+ self.settings.rm_safe(\"compiler.cppstd\")\n+ self.settings.rm_safe(\"compiler.libcxx\")\n \n def source(self):\n- tools.get(**self.conan_data[\"sources\"][self.version],\n- destination=self._source_subfolder, strip_root=True)\n+ get(self, **self.conan_data[\"sources\"][self.version], strip_root=True)\n \n- @functools.lru_cache(1)\n- def _configure_cmake(self):\n- cmake = CMake(self)\n- cmake.configure()\n- return cmake\n+ def requirements(self):\n+ self.requires(\"tree-sitter/0.20.8\", transitive_headers=True, transitive_libs=True)\n \n def _patch_sources(self):\n if not self.options.shared:\n- tools.replace_in_file(\n- os.path.join(self._source_subfolder, \"src\", \"parser.c\"),\n+ replace_in_file(\n+ self,\n+ os.path.join(self.source_folder, \"src\", \"parser.c\"),\n \"__declspec(dllexport)\", \"\"\n )\n \n def build(self):\n self._patch_sources()\n- cmake = self._configure_cmake()\n+ cmake = CMake(self)\n+ cmake.configure(build_script_folder=os.path.join(self.source_folder, os.pardir))\n cmake.build()\n \n def package(self):\n- self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n- cmake = self._configure_cmake()\n+ copy(\n+ self,\n+ \"LICENSE\",\n+ src=self.source_folder,\n+ dst=os.path.join(self.package_folder, \"licenses\"),\n+ )\n+ cmake = CMake(self)\n cmake.install()\n \n def package_info(self):\n", "issue": "[package] tree-sitter-c/*: tree-sitter-c needs to be updated to conan v2\n### Description\r\n\r\ntree-sitter-c needs to wrok well with conan v2.\r\n\r\n### Package and Environment Details\r\n\r\n* Package Name/Version: **tree-sitter-c/***\r\n* Operating System+version: **Arch Linux**\r\n* Compiler+version: **GCC 13**\r\n* Docker image: **conanio/gcc8**\r\n* Conan version: **conan 2.0.4**\r\n* Python version: **Python 3.11.3**\r\n\r\n\r\n### Conan profile\r\n\r\n```shell\r\nConfiguration for profile default:\r\n\r\n[settings]\r\nos=Linux\r\nos_build=Linux\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=gcc\r\ncompiler.version=13\r\ncompiler.libcxx=libstdc++11\r\nbuild_type=Release\r\n[options]\r\n[conf]\r\n[build_requires]\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n```shell\r\nconan create all/conanfile.py --version 0.20.2 -pr:b=default -pr:h=default -s build_type=Release -o \"tree-sitter-c/0.20.2:shared=False\"\r\n```\r\n\r\n### Logs\r\n\r\n<details><summary>Click to expand log</summary>\r\n\r\n```shell\r\nERROR: Error loading conanfile at '/home/whalien/codebase/misc/conan-center-index/recipes/tree-sitter-c/all/conanfile.py': Unable to load conanfile in /home/whalien/codebase/misc/conan-center-index/recipes/tree-sitter-c/all/conanfile.py\r\n File \"<frozen importlib._bootstrap_external>\", line 940, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"/home/whalien/codebase/misc/conan-center-index/recipes/tree-sitter-c/all/conanfile.py\", line 1, in <module>\r\n from conans import CMake, ConanFile, tools\r\nImportError: cannot import name 'CMake' from 'conans' (/usr/lib/python3.11/site-packages/conans/__init__.py)\r\n```\r\n\r\n</details>\r\n\n", "before_files": [{"content": "from conans import CMake, ConanFile, tools\nimport functools\nimport os\n\nrequired_conan_version = \">=1.33.0\"\n\n\nclass TreeSitterCConan(ConanFile):\n name = \"tree-sitter-c\"\n description = \"C grammar for tree-sitter.\"\n topics = (\"parser\", \"grammar\", \"tree\", \"c\", \"ide\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/tree-sitter/tree-sitter-c\"\n license = \"MIT\"\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"fPIC\": [True, False],\n \"shared\": [True, False],\n }\n default_options = {\n \"fPIC\": True,\n \"shared\": False,\n }\n\n generators = \"cmake\", \"cmake_find_package_multi\"\n exports_sources = \"CMakeLists.txt\"\n\n @property\n def _source_subfolder(self):\n return \"source_subfolder\"\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def configure(self):\n if self.options.shared:\n del self.options.fPIC\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def requirements(self):\n self.requires(\"tree-sitter/0.20.0\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version],\n destination=self._source_subfolder, strip_root=True)\n\n @functools.lru_cache(1)\n def _configure_cmake(self):\n cmake = CMake(self)\n cmake.configure()\n return cmake\n\n def _patch_sources(self):\n if not self.options.shared:\n tools.replace_in_file(\n os.path.join(self._source_subfolder, \"src\", \"parser.c\"),\n \"__declspec(dllexport)\", \"\"\n )\n\n def build(self):\n self._patch_sources()\n cmake = self._configure_cmake()\n cmake.build()\n\n def package(self):\n self.copy(\"LICENSE\", src=self._source_subfolder, dst=\"licenses\")\n cmake = self._configure_cmake()\n cmake.install()\n\n def package_info(self):\n self.cpp_info.libs = [\"tree-sitter-c\"]\n", "path": "recipes/tree-sitter-c/all/conanfile.py"}]}
1,657
861
gh_patches_debug_21282
rasdani/github-patches
git_diff
cowrie__cowrie-1397
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> MySQL error handling I have created several honeypots for other protocols (Android Debug Bridge, Citrix Netscaler, Elasticsearch, Internet Printing Protocol). In all of them I use Cowrie's idea of output plugins. However, the attack traffic for these protocols is much much lower than for Telnet/SSH. Probably for this reason, the MySQL plugin often fails with the dreaded "(2006, 'MySQL server has gone away')" error. (Yes, sometimes more than 8 hours can pass between attacks.) What bothered me is that the plugin failed with a stack trace. This isn't supposed to happen, because it intercepts this error just like Cowrie's MySQL plugin and tries to re-establish the connection. Then I noticed that the error that is being reported is not `MySQLdb.OperationalError`, as the code intercepts. Instead, it is `MySQLdb._exceptions.OperationalError`. And, indeed, linting the code complains that the `MySLdb` module does not have a member named `OperationalError`. Similarly, there is no `MySQLdb.Error` - but there seems to be `MySQLdb._exceptons.Error` instead. Shouldn't these be changed in Cowrie's MySQL module? Also, from the [documentation of MySQLdb](https://mysqlclient.readthedocs.io/FAQ.html#my-data-disappeared-or-won-t-go-away): > Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you’ll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database. Shouldn't the `stop()` method of Cowrie's MySQL plugin issue a `self.db.commit()` before closing the connection? Or maybe set `self.db.autocommit(True)` when opening it? </issue> <code> [start of src/cowrie/output/mysql.py] 1 """ 2 MySQL output connector. Writes audit logs to MySQL database 3 """ 4 5 from __future__ import absolute_import, division 6 7 import MySQLdb 8 9 from twisted.enterprise import adbapi 10 from twisted.internet import defer 11 from twisted.python import log 12 13 import cowrie.core.output 14 from cowrie.core.config import CowrieConfig 15 16 17 class ReconnectingConnectionPool(adbapi.ConnectionPool): 18 """ 19 Reconnecting adbapi connection pool for MySQL. 20 21 This class improves on the solution posted at 22 http://www.gelens.org/2008/09/12/reinitializing-twisted-connectionpool/ 23 by checking exceptions by error code and only disconnecting the current 24 connection instead of all of them. 25 26 Also see: 27 http://twistedmatrix.com/pipermail/twisted-python/2009-July/020007.html 28 """ 29 30 def _runInteraction(self, interaction, *args, **kw): 31 try: 32 return adbapi.ConnectionPool._runInteraction( 33 self, interaction, *args, **kw) 34 except MySQLdb.OperationalError as e: 35 if e.args[0] not in (2003, 2006, 2013): 36 raise e 37 log.msg("RCP: got error {0}, retrying operation".format(e)) 38 conn = self.connections.get(self.threadID()) 39 self.disconnect(conn) 40 # Try the interaction again 41 return adbapi.ConnectionPool._runInteraction( 42 self, interaction, *args, **kw) 43 44 45 class Output(cowrie.core.output.Output): 46 """ 47 mysql output 48 """ 49 db = None 50 51 def start(self): 52 self.debug = CowrieConfig().getboolean('output_mysql', 'debug', fallback=False) 53 port = CowrieConfig().getint('output_mysql', 'port', fallback=3306) 54 try: 55 self.db = ReconnectingConnectionPool( 56 'MySQLdb', 57 host=CowrieConfig().get('output_mysql', 'host'), 58 db=CowrieConfig().get('output_mysql', 'database'), 59 user=CowrieConfig().get('output_mysql', 'username'), 60 passwd=CowrieConfig().get('output_mysql', 'password', raw=True), 61 port=port, 62 cp_min=1, 63 cp_max=1, 64 charset='utf8mb4', 65 cp_reconnect=True, 66 use_unicode=True 67 ) 68 except MySQLdb.Error as e: 69 log.msg("output_mysql: Error %d: %s" % (e.args[0], e.args[1])) 70 71 def stop(self): 72 self.db.close() 73 74 def sqlerror(self, error): 75 """ 76 1146, "Table '...' doesn't exist" 77 1406, "Data too long for column '...' at row ..." 78 """ 79 if error.value[0] in (1146, 1406): 80 log.msg("output_mysql: MySQL Error: {}".format(error.value)) 81 log.msg("MySQL schema maybe misconfigured, doublecheck database!") 82 else: 83 log.err("output_mysql: MySQL Error: {}".format(error.value)) 84 85 def simpleQuery(self, sql, args): 86 """ 87 Just run a deferred sql query, only care about errors 88 """ 89 if self.debug: 90 log.msg("output_mysql: MySQL query: {} {}".format(sql, repr(args))) 91 d = self.db.runQuery(sql, args) 92 d.addErrback(self.sqlerror) 93 94 @defer.inlineCallbacks 95 def write(self, entry): 96 if entry["eventid"] == 'cowrie.session.connect': 97 r = yield self.db.runQuery( 98 "SELECT `id`" 99 "FROM `sensors`" 100 "WHERE `ip` = %s", 101 (self.sensor,)) 102 103 if r: 104 sensorid = r[0][0] 105 else: 106 yield self.db.runQuery( 107 'INSERT INTO `sensors` (`ip`) ' 108 'VALUES (%s)', 109 (self.sensor,)) 110 111 r = yield self.db.runQuery('SELECT LAST_INSERT_ID()') 112 sensorid = int(r[0][0]) 113 self.simpleQuery( 114 "INSERT INTO `sessions` (`id`, `starttime`, `sensor`, `ip`) " 115 "VALUES (%s, FROM_UNIXTIME(%s), %s, %s)", 116 (entry["session"], entry["time"], sensorid, entry["src_ip"])) 117 118 elif entry["eventid"] == 'cowrie.login.success': 119 self.simpleQuery('INSERT INTO `auth` (`session`, `success`, `username`, `password`, `timestamp`) ' 120 'VALUES (%s, %s, %s, %s, FROM_UNIXTIME(%s))', 121 (entry["session"], 1, entry['username'], entry['password'], entry["time"])) 122 123 elif entry["eventid"] == 'cowrie.login.failed': 124 self.simpleQuery('INSERT INTO `auth` (`session`, `success`, `username`, `password`, `timestamp`) ' 125 'VALUES (%s, %s, %s, %s, FROM_UNIXTIME(%s))', 126 (entry["session"], 0, entry['username'], entry['password'], entry["time"])) 127 128 elif entry["eventid"] == 'cowrie.session.params': 129 self.simpleQuery('INSERT INTO `params` (`session`, `arch`) ' 130 'VALUES (%s, %s)', 131 (entry["session"], entry["arch"])) 132 133 elif entry["eventid"] == 'cowrie.command.input': 134 self.simpleQuery('INSERT INTO `input` (`session`, `timestamp`, `success`, `input`) ' 135 'VALUES (%s, FROM_UNIXTIME(%s), %s , %s)', 136 (entry["session"], entry["time"], 1, entry["input"])) 137 138 elif entry["eventid"] == 'cowrie.command.failed': 139 self.simpleQuery('INSERT INTO `input` (`session`, `timestamp`, `success`, `input`) ' 140 'VALUES (%s, FROM_UNIXTIME(%s), %s , %s)', 141 (entry["session"], entry["time"], 0, entry["input"])) 142 143 elif entry["eventid"] == 'cowrie.session.file_download': 144 self.simpleQuery('INSERT INTO `downloads` (`session`, `timestamp`, `url`, `outfile`, `shasum`) ' 145 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)', 146 (entry["session"], entry["time"], entry['url'], entry['outfile'], entry['shasum'])) 147 148 elif entry["eventid"] == 'cowrie.session.file_download.failed': 149 self.simpleQuery('INSERT INTO `downloads` (`session`, `timestamp`, `url`, `outfile`, `shasum`) ' 150 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)', 151 (entry["session"], entry["time"], entry['url'], 'NULL', 'NULL')) 152 153 elif entry["eventid"] == 'cowrie.session.file_upload': 154 self.simpleQuery('INSERT INTO `downloads` (`session`, `timestamp`, `url`, `outfile`, `shasum`) ' 155 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)', 156 (entry["session"], entry["time"], '', entry['outfile'], entry['shasum'])) 157 158 elif entry["eventid"] == 'cowrie.session.input': 159 self.simpleQuery('INSERT INTO `input` (`session`, `timestamp`, `realm`, `input`) ' 160 'VALUES (%s, FROM_UNIXTIME(%s), %s , %s)', 161 (entry["session"], entry["time"], entry["realm"], entry["input"])) 162 163 elif entry["eventid"] == 'cowrie.client.version': 164 r = yield self.db.runQuery( 165 'SELECT `id` FROM `clients` ' 166 'WHERE `version` = %s', 167 (entry['version'],)) 168 169 if r: 170 id = int(r[0][0]) 171 else: 172 yield self.db.runQuery( 173 'INSERT INTO `clients` (`version`) ' 174 'VALUES (%s)', 175 (entry['version'],)) 176 177 r = yield self.db.runQuery('SELECT LAST_INSERT_ID()') 178 id = int(r[0][0]) 179 self.simpleQuery( 180 'UPDATE `sessions` ' 181 'SET `client` = %s ' 182 'WHERE `id` = %s', 183 (id, entry["session"])) 184 185 elif entry["eventid"] == 'cowrie.client.size': 186 self.simpleQuery( 187 'UPDATE `sessions` ' 188 'SET `termsize` = %s ' 189 'WHERE `id` = %s', 190 ('%sx%s' % (entry['width'], entry['height']), entry["session"])) 191 192 elif entry["eventid"] == 'cowrie.session.closed': 193 self.simpleQuery( 194 'UPDATE `sessions` ' 195 'SET `endtime` = FROM_UNIXTIME(%s) ' 196 'WHERE `id` = %s', 197 (entry["time"], entry["session"])) 198 199 elif entry["eventid"] == 'cowrie.log.closed': 200 self.simpleQuery( 201 'INSERT INTO `ttylog` (`session`, `ttylog`, `size`) ' 202 'VALUES (%s, %s, %s)', 203 (entry["session"], entry["ttylog"], entry["size"])) 204 205 elif entry["eventid"] == 'cowrie.client.fingerprint': 206 self.simpleQuery( 207 'INSERT INTO `keyfingerprints` (`session`, `username`, `fingerprint`) ' 208 'VALUES (%s, %s, %s)', 209 (entry["session"], entry["username"], entry["fingerprint"])) 210 211 elif entry["eventid"] == 'cowrie.direct-tcpip.request': 212 self.simpleQuery( 213 'INSERT INTO `ipforwards` (`session`, `timestamp`, `dst_ip`, `dst_port`) ' 214 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s)', 215 (entry["session"], entry["time"], entry["dst_ip"], entry["dst_port"])) 216 217 elif entry["eventid"] == 'cowrie.direct-tcpip.data': 218 self.simpleQuery( 219 'INSERT INTO `ipforwardsdata` (`session`, `timestamp`, `dst_ip`, `dst_port`, `data`) ' 220 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)', 221 (entry["session"], entry["time"], entry["dst_ip"], entry["dst_port"], entry["data"])) 222 [end of src/cowrie/output/mysql.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/src/cowrie/output/mysql.py b/src/cowrie/output/mysql.py --- a/src/cowrie/output/mysql.py +++ b/src/cowrie/output/mysql.py @@ -31,7 +31,7 @@ try: return adbapi.ConnectionPool._runInteraction( self, interaction, *args, **kw) - except MySQLdb.OperationalError as e: + except (MySQLdb.OperationalError, MySQLdb._exceptions.OperationalError) as e: if e.args[0] not in (2003, 2006, 2013): raise e log.msg("RCP: got error {0}, retrying operation".format(e)) @@ -65,10 +65,11 @@ cp_reconnect=True, use_unicode=True ) - except MySQLdb.Error as e: + except (MySQLdb.Error, MySQLdb._exceptons.Error) as e: log.msg("output_mysql: Error %d: %s" % (e.args[0], e.args[1])) def stop(self): + self.db.commit() self.db.close() def sqlerror(self, error):
{"golden_diff": "diff --git a/src/cowrie/output/mysql.py b/src/cowrie/output/mysql.py\n--- a/src/cowrie/output/mysql.py\n+++ b/src/cowrie/output/mysql.py\n@@ -31,7 +31,7 @@\n try:\n return adbapi.ConnectionPool._runInteraction(\n self, interaction, *args, **kw)\n- except MySQLdb.OperationalError as e:\n+ except (MySQLdb.OperationalError, MySQLdb._exceptions.OperationalError) as e:\n if e.args[0] not in (2003, 2006, 2013):\n raise e\n log.msg(\"RCP: got error {0}, retrying operation\".format(e))\n@@ -65,10 +65,11 @@\n cp_reconnect=True,\n use_unicode=True\n )\n- except MySQLdb.Error as e:\n+ except (MySQLdb.Error, MySQLdb._exceptons.Error) as e:\n log.msg(\"output_mysql: Error %d: %s\" % (e.args[0], e.args[1]))\n \n def stop(self):\n+ self.db.commit()\n self.db.close()\n \n def sqlerror(self, error):\n", "issue": "MySQL error handling\nI have created several honeypots for other protocols (Android Debug Bridge, Citrix Netscaler, Elasticsearch, Internet Printing Protocol). In all of them I use Cowrie's idea of output plugins. However, the attack traffic for these protocols is much much lower than for Telnet/SSH. Probably for this reason, the MySQL plugin often fails with the dreaded \"(2006, 'MySQL server has gone away')\" error. (Yes, sometimes more than 8 hours can pass between attacks.)\r\n\r\nWhat bothered me is that the plugin failed with a stack trace. This isn't supposed to happen, because it intercepts this error just like Cowrie's MySQL plugin and tries to re-establish the connection. Then I noticed that the error that is being reported is not `MySQLdb.OperationalError`, as the code intercepts. Instead, it is `MySQLdb._exceptions.OperationalError`.\r\n\r\nAnd, indeed, linting the code complains that the `MySLdb` module does not have a member named `OperationalError`. Similarly, there is no `MySQLdb.Error` - but there seems to be `MySQLdb._exceptons.Error` instead.\r\n\r\nShouldn't these be changed in Cowrie's MySQL module?\r\n\r\nAlso, from the [documentation of MySQLdb](https://mysqlclient.readthedocs.io/FAQ.html#my-data-disappeared-or-won-t-go-away):\r\n\r\n> Starting with 1.2.0, MySQLdb disables autocommit by default, as required by the DB-API standard (PEP-249). If you are using InnoDB tables or some other type of transactional table type, you\u2019ll need to do connection.commit() before closing the connection, or else none of your changes will be written to the database.\r\n\r\nShouldn't the `stop()` method of Cowrie's MySQL plugin issue a `self.db.commit()` before closing the connection? Or maybe set `self.db.autocommit(True)` when opening it?\n", "before_files": [{"content": "\"\"\"\nMySQL output connector. Writes audit logs to MySQL database\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport MySQLdb\n\nfrom twisted.enterprise import adbapi\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nimport cowrie.core.output\nfrom cowrie.core.config import CowrieConfig\n\n\nclass ReconnectingConnectionPool(adbapi.ConnectionPool):\n \"\"\"\n Reconnecting adbapi connection pool for MySQL.\n\n This class improves on the solution posted at\n http://www.gelens.org/2008/09/12/reinitializing-twisted-connectionpool/\n by checking exceptions by error code and only disconnecting the current\n connection instead of all of them.\n\n Also see:\n http://twistedmatrix.com/pipermail/twisted-python/2009-July/020007.html\n \"\"\"\n\n def _runInteraction(self, interaction, *args, **kw):\n try:\n return adbapi.ConnectionPool._runInteraction(\n self, interaction, *args, **kw)\n except MySQLdb.OperationalError as e:\n if e.args[0] not in (2003, 2006, 2013):\n raise e\n log.msg(\"RCP: got error {0}, retrying operation\".format(e))\n conn = self.connections.get(self.threadID())\n self.disconnect(conn)\n # Try the interaction again\n return adbapi.ConnectionPool._runInteraction(\n self, interaction, *args, **kw)\n\n\nclass Output(cowrie.core.output.Output):\n \"\"\"\n mysql output\n \"\"\"\n db = None\n\n def start(self):\n self.debug = CowrieConfig().getboolean('output_mysql', 'debug', fallback=False)\n port = CowrieConfig().getint('output_mysql', 'port', fallback=3306)\n try:\n self.db = ReconnectingConnectionPool(\n 'MySQLdb',\n host=CowrieConfig().get('output_mysql', 'host'),\n db=CowrieConfig().get('output_mysql', 'database'),\n user=CowrieConfig().get('output_mysql', 'username'),\n passwd=CowrieConfig().get('output_mysql', 'password', raw=True),\n port=port,\n cp_min=1,\n cp_max=1,\n charset='utf8mb4',\n cp_reconnect=True,\n use_unicode=True\n )\n except MySQLdb.Error as e:\n log.msg(\"output_mysql: Error %d: %s\" % (e.args[0], e.args[1]))\n\n def stop(self):\n self.db.close()\n\n def sqlerror(self, error):\n \"\"\"\n 1146, \"Table '...' doesn't exist\"\n 1406, \"Data too long for column '...' at row ...\"\n \"\"\"\n if error.value[0] in (1146, 1406):\n log.msg(\"output_mysql: MySQL Error: {}\".format(error.value))\n log.msg(\"MySQL schema maybe misconfigured, doublecheck database!\")\n else:\n log.err(\"output_mysql: MySQL Error: {}\".format(error.value))\n\n def simpleQuery(self, sql, args):\n \"\"\"\n Just run a deferred sql query, only care about errors\n \"\"\"\n if self.debug:\n log.msg(\"output_mysql: MySQL query: {} {}\".format(sql, repr(args)))\n d = self.db.runQuery(sql, args)\n d.addErrback(self.sqlerror)\n\n @defer.inlineCallbacks\n def write(self, entry):\n if entry[\"eventid\"] == 'cowrie.session.connect':\n r = yield self.db.runQuery(\n \"SELECT `id`\"\n \"FROM `sensors`\"\n \"WHERE `ip` = %s\",\n (self.sensor,))\n\n if r:\n sensorid = r[0][0]\n else:\n yield self.db.runQuery(\n 'INSERT INTO `sensors` (`ip`) '\n 'VALUES (%s)',\n (self.sensor,))\n\n r = yield self.db.runQuery('SELECT LAST_INSERT_ID()')\n sensorid = int(r[0][0])\n self.simpleQuery(\n \"INSERT INTO `sessions` (`id`, `starttime`, `sensor`, `ip`) \"\n \"VALUES (%s, FROM_UNIXTIME(%s), %s, %s)\",\n (entry[\"session\"], entry[\"time\"], sensorid, entry[\"src_ip\"]))\n\n elif entry[\"eventid\"] == 'cowrie.login.success':\n self.simpleQuery('INSERT INTO `auth` (`session`, `success`, `username`, `password`, `timestamp`) '\n 'VALUES (%s, %s, %s, %s, FROM_UNIXTIME(%s))',\n (entry[\"session\"], 1, entry['username'], entry['password'], entry[\"time\"]))\n\n elif entry[\"eventid\"] == 'cowrie.login.failed':\n self.simpleQuery('INSERT INTO `auth` (`session`, `success`, `username`, `password`, `timestamp`) '\n 'VALUES (%s, %s, %s, %s, FROM_UNIXTIME(%s))',\n (entry[\"session\"], 0, entry['username'], entry['password'], entry[\"time\"]))\n\n elif entry[\"eventid\"] == 'cowrie.session.params':\n self.simpleQuery('INSERT INTO `params` (`session`, `arch`) '\n 'VALUES (%s, %s)',\n (entry[\"session\"], entry[\"arch\"]))\n\n elif entry[\"eventid\"] == 'cowrie.command.input':\n self.simpleQuery('INSERT INTO `input` (`session`, `timestamp`, `success`, `input`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s , %s)',\n (entry[\"session\"], entry[\"time\"], 1, entry[\"input\"]))\n\n elif entry[\"eventid\"] == 'cowrie.command.failed':\n self.simpleQuery('INSERT INTO `input` (`session`, `timestamp`, `success`, `input`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s , %s)',\n (entry[\"session\"], entry[\"time\"], 0, entry[\"input\"]))\n\n elif entry[\"eventid\"] == 'cowrie.session.file_download':\n self.simpleQuery('INSERT INTO `downloads` (`session`, `timestamp`, `url`, `outfile`, `shasum`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)',\n (entry[\"session\"], entry[\"time\"], entry['url'], entry['outfile'], entry['shasum']))\n\n elif entry[\"eventid\"] == 'cowrie.session.file_download.failed':\n self.simpleQuery('INSERT INTO `downloads` (`session`, `timestamp`, `url`, `outfile`, `shasum`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)',\n (entry[\"session\"], entry[\"time\"], entry['url'], 'NULL', 'NULL'))\n\n elif entry[\"eventid\"] == 'cowrie.session.file_upload':\n self.simpleQuery('INSERT INTO `downloads` (`session`, `timestamp`, `url`, `outfile`, `shasum`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)',\n (entry[\"session\"], entry[\"time\"], '', entry['outfile'], entry['shasum']))\n\n elif entry[\"eventid\"] == 'cowrie.session.input':\n self.simpleQuery('INSERT INTO `input` (`session`, `timestamp`, `realm`, `input`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s , %s)',\n (entry[\"session\"], entry[\"time\"], entry[\"realm\"], entry[\"input\"]))\n\n elif entry[\"eventid\"] == 'cowrie.client.version':\n r = yield self.db.runQuery(\n 'SELECT `id` FROM `clients` '\n 'WHERE `version` = %s',\n (entry['version'],))\n\n if r:\n id = int(r[0][0])\n else:\n yield self.db.runQuery(\n 'INSERT INTO `clients` (`version`) '\n 'VALUES (%s)',\n (entry['version'],))\n\n r = yield self.db.runQuery('SELECT LAST_INSERT_ID()')\n id = int(r[0][0])\n self.simpleQuery(\n 'UPDATE `sessions` '\n 'SET `client` = %s '\n 'WHERE `id` = %s',\n (id, entry[\"session\"]))\n\n elif entry[\"eventid\"] == 'cowrie.client.size':\n self.simpleQuery(\n 'UPDATE `sessions` '\n 'SET `termsize` = %s '\n 'WHERE `id` = %s',\n ('%sx%s' % (entry['width'], entry['height']), entry[\"session\"]))\n\n elif entry[\"eventid\"] == 'cowrie.session.closed':\n self.simpleQuery(\n 'UPDATE `sessions` '\n 'SET `endtime` = FROM_UNIXTIME(%s) '\n 'WHERE `id` = %s',\n (entry[\"time\"], entry[\"session\"]))\n\n elif entry[\"eventid\"] == 'cowrie.log.closed':\n self.simpleQuery(\n 'INSERT INTO `ttylog` (`session`, `ttylog`, `size`) '\n 'VALUES (%s, %s, %s)',\n (entry[\"session\"], entry[\"ttylog\"], entry[\"size\"]))\n\n elif entry[\"eventid\"] == 'cowrie.client.fingerprint':\n self.simpleQuery(\n 'INSERT INTO `keyfingerprints` (`session`, `username`, `fingerprint`) '\n 'VALUES (%s, %s, %s)',\n (entry[\"session\"], entry[\"username\"], entry[\"fingerprint\"]))\n\n elif entry[\"eventid\"] == 'cowrie.direct-tcpip.request':\n self.simpleQuery(\n 'INSERT INTO `ipforwards` (`session`, `timestamp`, `dst_ip`, `dst_port`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s)',\n (entry[\"session\"], entry[\"time\"], entry[\"dst_ip\"], entry[\"dst_port\"]))\n\n elif entry[\"eventid\"] == 'cowrie.direct-tcpip.data':\n self.simpleQuery(\n 'INSERT INTO `ipforwardsdata` (`session`, `timestamp`, `dst_ip`, `dst_port`, `data`) '\n 'VALUES (%s, FROM_UNIXTIME(%s), %s, %s, %s)',\n (entry[\"session\"], entry[\"time\"], entry[\"dst_ip\"], entry[\"dst_port\"], entry[\"data\"]))\n", "path": "src/cowrie/output/mysql.py"}]}
3,771
267
gh_patches_debug_2407
rasdani/github-patches
git_diff
liberapay__liberapay.com-195
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Twitter API chokes on at-sign https://liberapay.com/on/twitter/@korben/ returns a 500. sentry#35, public link: https://sentry.changaco.oy.lc/share/issue/322e3335/. </issue> <code> [start of liberapay/models/account_elsewhere.py] 1 from __future__ import absolute_import, division, print_function, unicode_literals 2 3 from datetime import timedelta 4 import json 5 import uuid 6 import xml.etree.ElementTree as ET 7 8 from six.moves.urllib.parse import urlsplit, urlunsplit 9 10 from aspen import Response 11 from aspen.utils import utcnow 12 from postgres.orm import Model 13 from psycopg2 import IntegrityError 14 import xmltodict 15 16 from liberapay.constants import AVATAR_QUERY 17 from liberapay.security.crypto import constant_time_compare 18 from liberapay.website import website 19 20 21 CONNECT_TOKEN_TIMEOUT = timedelta(hours=24) 22 23 24 class UnknownAccountElsewhere(Exception): pass 25 26 27 class AccountElsewhere(Model): 28 29 typname = "elsewhere_with_participant" 30 31 def __init__(self, *args, **kwargs): 32 super(AccountElsewhere, self).__init__(*args, **kwargs) 33 self.platform_data = getattr(website.platforms, self.platform) 34 35 36 # Constructors 37 # ============ 38 39 @classmethod 40 def from_id(cls, id): 41 """Return an existing AccountElsewhere based on id. 42 """ 43 return cls.db.one(""" 44 SELECT elsewhere.*::elsewhere_with_participant 45 FROM elsewhere 46 WHERE id = %s 47 """, (id,)) 48 49 @classmethod 50 def from_user_id(cls, platform, user_id): 51 """Return an existing AccountElsewhere based on platform and user_id. 52 """ 53 return cls._from_thing('user_id', platform, user_id) 54 55 @classmethod 56 def from_user_name(cls, platform, user_name): 57 """Return an existing AccountElsewhere based on platform and user_name. 58 """ 59 return cls._from_thing('user_name', platform, user_name) 60 61 @classmethod 62 def _from_thing(cls, thing, platform, value): 63 assert thing in ('user_id', 'user_name') 64 if thing == 'user_name': 65 thing = 'lower(user_name)' 66 value = value.lower() 67 exception = UnknownAccountElsewhere(thing, platform, value) 68 return cls.db.one(""" 69 70 SELECT elsewhere.*::elsewhere_with_participant 71 FROM elsewhere 72 WHERE platform = %s 73 AND {} = %s 74 75 """.format(thing), (platform, value), default=exception) 76 77 @classmethod 78 def get_many(cls, platform, user_infos): 79 accounts = [] 80 found = cls.db.all("""\ 81 82 SELECT elsewhere.*::elsewhere_with_participant 83 FROM elsewhere 84 WHERE platform = %s 85 AND user_id = any(%s) 86 87 """, (platform, [i.user_id for i in user_infos])) 88 found = {a.user_id: a for a in found} 89 for i in user_infos: 90 if i.user_id in found: 91 accounts.append(found[i.user_id]) 92 else: 93 accounts.append(cls.upsert(i)) 94 return accounts 95 96 @classmethod 97 def upsert(cls, i): 98 """Insert or update a user's info. 99 """ 100 101 # Clean up avatar_url 102 if i.avatar_url: 103 scheme, netloc, path, query, fragment = urlsplit(i.avatar_url) 104 fragment = '' 105 if netloc.endswith('githubusercontent.com') or \ 106 netloc.endswith('gravatar.com'): 107 query = AVATAR_QUERY 108 i.avatar_url = urlunsplit((scheme, netloc, path, query, fragment)) 109 110 # Serialize extra_info 111 if isinstance(i.extra_info, ET.Element): 112 i.extra_info = xmltodict.parse(ET.tostring(i.extra_info)) 113 i.extra_info = json.dumps(i.extra_info) 114 115 cols, vals = zip(*i.__dict__.items()) 116 cols = ', '.join(cols) 117 placeholders = ', '.join(['%s']*len(vals)) 118 119 try: 120 # Try to insert the account 121 # We do this with a transaction so that if the insert fails, the 122 # participant we reserved for them is rolled back as well. 123 with cls.db.get_cursor() as cursor: 124 id = cursor.one(""" 125 INSERT INTO participants DEFAULT VALUES RETURNING id 126 """) 127 account = cursor.one(""" 128 INSERT INTO elsewhere 129 (participant, {0}) 130 VALUES (%s, {1}) 131 RETURNING elsewhere.*::elsewhere_with_participant 132 """.format(cols, placeholders), (id,)+vals) 133 except IntegrityError: 134 # The account is already in the DB, update it instead 135 account = cls.db.one(""" 136 UPDATE elsewhere 137 SET ({0}) = ({1}) 138 WHERE platform=%s AND user_id=%s 139 RETURNING elsewhere.*::elsewhere_with_participant 140 """.format(cols, placeholders), vals+(i.platform, i.user_id)) 141 if not account: 142 raise 143 144 # Return account after propagating avatar_url to participant 145 account.participant.update_avatar() 146 return account 147 148 149 # Connect tokens 150 # ============== 151 152 def check_connect_token(self, token): 153 return ( 154 self.connect_token and 155 constant_time_compare(self.connect_token, token) and 156 self.connect_expires > utcnow() 157 ) 158 159 def make_connect_token(self): 160 token = uuid.uuid4().hex 161 expires = utcnow() + CONNECT_TOKEN_TIMEOUT 162 return self.save_connect_token(token, expires) 163 164 def save_connect_token(self, token, expires): 165 return self.db.one(""" 166 UPDATE elsewhere 167 SET connect_token = %s 168 , connect_expires = %s 169 WHERE id = %s 170 RETURNING connect_token, connect_expires 171 """, (token, expires, self.id)) 172 173 174 # Random Stuff 175 # ============ 176 177 def get_auth_session(self): 178 if not self.token: 179 return 180 params = dict(token=self.token) 181 if 'refresh_token' in self.token: 182 params['token_updater'] = self.save_token 183 return self.platform_data.get_auth_session(**params) 184 185 @property 186 def liberapay_slug(self): 187 return self.user_name or ('~' + self.user_id) 188 189 @property 190 def liberapay_url(self): 191 scheme = website.canonical_scheme 192 host = website.canonical_host 193 platform = self.platform 194 slug = self.liberapay_slug 195 return "{scheme}://{host}/on/{platform}/{slug}/".format(**locals()) 196 197 @property 198 def html_url(self): 199 return self.platform_data.account_url.format( 200 user_id=self.user_id, 201 user_name=self.user_name, 202 platform_data=self.platform_data 203 ) 204 205 @property 206 def friendly_name(self): 207 if getattr(self.platform, 'optional_user_name', False): 208 return self.display_name or self.user_name or self.user_id 209 else: 210 return self.user_name or self.display_name or self.user_id 211 212 @property 213 def friendly_name_long(self): 214 r = self.friendly_name 215 display_name = self.display_name 216 if display_name and display_name != r: 217 return '%s (%s)' % (r, display_name) 218 user_name = self.user_name 219 if user_name and user_name != r: 220 return '%s (%s)' % (r, user_name) 221 return r 222 223 def save_token(self, token): 224 """Saves the given access token in the database. 225 """ 226 self.db.run(""" 227 UPDATE elsewhere 228 SET token = %s 229 WHERE id=%s 230 """, (token, self.id)) 231 self.set_attributes(token=token) 232 233 234 def get_account_elsewhere(website, state, api_lookup=True): 235 path = state['request'].line.uri.path 236 platform = getattr(website.platforms, path['platform'], None) 237 if platform is None: 238 raise Response(404) 239 uid = path['user_name'] 240 if uid[:1] == '~': 241 key = 'user_id' 242 uid = uid[1:] 243 else: 244 key = 'user_name' 245 try: 246 account = AccountElsewhere._from_thing(key, platform.name, uid) 247 except UnknownAccountElsewhere: 248 account = None 249 if not account: 250 if not api_lookup: 251 raise Response(404) 252 try: 253 user_info = platform.get_user_info(key, uid) 254 except Response as r: 255 if r.code == 404: 256 _ = state['_'] 257 err = _("There doesn't seem to be a user named {0} on {1}.", 258 uid, platform.display_name) 259 raise Response(404, err) 260 raise 261 account = AccountElsewhere.upsert(user_info) 262 return platform, account 263 [end of liberapay/models/account_elsewhere.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/liberapay/models/account_elsewhere.py b/liberapay/models/account_elsewhere.py --- a/liberapay/models/account_elsewhere.py +++ b/liberapay/models/account_elsewhere.py @@ -242,6 +242,8 @@ uid = uid[1:] else: key = 'user_name' + if uid[:1] == '@': + uid = uid[1:] try: account = AccountElsewhere._from_thing(key, platform.name, uid) except UnknownAccountElsewhere:
{"golden_diff": "diff --git a/liberapay/models/account_elsewhere.py b/liberapay/models/account_elsewhere.py\n--- a/liberapay/models/account_elsewhere.py\n+++ b/liberapay/models/account_elsewhere.py\n@@ -242,6 +242,8 @@\n uid = uid[1:]\n else:\n key = 'user_name'\n+ if uid[:1] == '@':\n+ uid = uid[1:]\n try:\n account = AccountElsewhere._from_thing(key, platform.name, uid)\n except UnknownAccountElsewhere:\n", "issue": "Twitter API chokes on at-sign\nhttps://liberapay.com/on/twitter/@korben/ returns a 500. sentry#35, public link: https://sentry.changaco.oy.lc/share/issue/322e3335/.\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom datetime import timedelta\nimport json\nimport uuid\nimport xml.etree.ElementTree as ET\n\nfrom six.moves.urllib.parse import urlsplit, urlunsplit\n\nfrom aspen import Response\nfrom aspen.utils import utcnow\nfrom postgres.orm import Model\nfrom psycopg2 import IntegrityError\nimport xmltodict\n\nfrom liberapay.constants import AVATAR_QUERY\nfrom liberapay.security.crypto import constant_time_compare\nfrom liberapay.website import website\n\n\nCONNECT_TOKEN_TIMEOUT = timedelta(hours=24)\n\n\nclass UnknownAccountElsewhere(Exception): pass\n\n\nclass AccountElsewhere(Model):\n\n typname = \"elsewhere_with_participant\"\n\n def __init__(self, *args, **kwargs):\n super(AccountElsewhere, self).__init__(*args, **kwargs)\n self.platform_data = getattr(website.platforms, self.platform)\n\n\n # Constructors\n # ============\n\n @classmethod\n def from_id(cls, id):\n \"\"\"Return an existing AccountElsewhere based on id.\n \"\"\"\n return cls.db.one(\"\"\"\n SELECT elsewhere.*::elsewhere_with_participant\n FROM elsewhere\n WHERE id = %s\n \"\"\", (id,))\n\n @classmethod\n def from_user_id(cls, platform, user_id):\n \"\"\"Return an existing AccountElsewhere based on platform and user_id.\n \"\"\"\n return cls._from_thing('user_id', platform, user_id)\n\n @classmethod\n def from_user_name(cls, platform, user_name):\n \"\"\"Return an existing AccountElsewhere based on platform and user_name.\n \"\"\"\n return cls._from_thing('user_name', platform, user_name)\n\n @classmethod\n def _from_thing(cls, thing, platform, value):\n assert thing in ('user_id', 'user_name')\n if thing == 'user_name':\n thing = 'lower(user_name)'\n value = value.lower()\n exception = UnknownAccountElsewhere(thing, platform, value)\n return cls.db.one(\"\"\"\n\n SELECT elsewhere.*::elsewhere_with_participant\n FROM elsewhere\n WHERE platform = %s\n AND {} = %s\n\n \"\"\".format(thing), (platform, value), default=exception)\n\n @classmethod\n def get_many(cls, platform, user_infos):\n accounts = []\n found = cls.db.all(\"\"\"\\\n\n SELECT elsewhere.*::elsewhere_with_participant\n FROM elsewhere\n WHERE platform = %s\n AND user_id = any(%s)\n\n \"\"\", (platform, [i.user_id for i in user_infos]))\n found = {a.user_id: a for a in found}\n for i in user_infos:\n if i.user_id in found:\n accounts.append(found[i.user_id])\n else:\n accounts.append(cls.upsert(i))\n return accounts\n\n @classmethod\n def upsert(cls, i):\n \"\"\"Insert or update a user's info.\n \"\"\"\n\n # Clean up avatar_url\n if i.avatar_url:\n scheme, netloc, path, query, fragment = urlsplit(i.avatar_url)\n fragment = ''\n if netloc.endswith('githubusercontent.com') or \\\n netloc.endswith('gravatar.com'):\n query = AVATAR_QUERY\n i.avatar_url = urlunsplit((scheme, netloc, path, query, fragment))\n\n # Serialize extra_info\n if isinstance(i.extra_info, ET.Element):\n i.extra_info = xmltodict.parse(ET.tostring(i.extra_info))\n i.extra_info = json.dumps(i.extra_info)\n\n cols, vals = zip(*i.__dict__.items())\n cols = ', '.join(cols)\n placeholders = ', '.join(['%s']*len(vals))\n\n try:\n # Try to insert the account\n # We do this with a transaction so that if the insert fails, the\n # participant we reserved for them is rolled back as well.\n with cls.db.get_cursor() as cursor:\n id = cursor.one(\"\"\"\n INSERT INTO participants DEFAULT VALUES RETURNING id\n \"\"\")\n account = cursor.one(\"\"\"\n INSERT INTO elsewhere\n (participant, {0})\n VALUES (%s, {1})\n RETURNING elsewhere.*::elsewhere_with_participant\n \"\"\".format(cols, placeholders), (id,)+vals)\n except IntegrityError:\n # The account is already in the DB, update it instead\n account = cls.db.one(\"\"\"\n UPDATE elsewhere\n SET ({0}) = ({1})\n WHERE platform=%s AND user_id=%s\n RETURNING elsewhere.*::elsewhere_with_participant\n \"\"\".format(cols, placeholders), vals+(i.platform, i.user_id))\n if not account:\n raise\n\n # Return account after propagating avatar_url to participant\n account.participant.update_avatar()\n return account\n\n\n # Connect tokens\n # ==============\n\n def check_connect_token(self, token):\n return (\n self.connect_token and\n constant_time_compare(self.connect_token, token) and\n self.connect_expires > utcnow()\n )\n\n def make_connect_token(self):\n token = uuid.uuid4().hex\n expires = utcnow() + CONNECT_TOKEN_TIMEOUT\n return self.save_connect_token(token, expires)\n\n def save_connect_token(self, token, expires):\n return self.db.one(\"\"\"\n UPDATE elsewhere\n SET connect_token = %s\n , connect_expires = %s\n WHERE id = %s\n RETURNING connect_token, connect_expires\n \"\"\", (token, expires, self.id))\n\n\n # Random Stuff\n # ============\n\n def get_auth_session(self):\n if not self.token:\n return\n params = dict(token=self.token)\n if 'refresh_token' in self.token:\n params['token_updater'] = self.save_token\n return self.platform_data.get_auth_session(**params)\n\n @property\n def liberapay_slug(self):\n return self.user_name or ('~' + self.user_id)\n\n @property\n def liberapay_url(self):\n scheme = website.canonical_scheme\n host = website.canonical_host\n platform = self.platform\n slug = self.liberapay_slug\n return \"{scheme}://{host}/on/{platform}/{slug}/\".format(**locals())\n\n @property\n def html_url(self):\n return self.platform_data.account_url.format(\n user_id=self.user_id,\n user_name=self.user_name,\n platform_data=self.platform_data\n )\n\n @property\n def friendly_name(self):\n if getattr(self.platform, 'optional_user_name', False):\n return self.display_name or self.user_name or self.user_id\n else:\n return self.user_name or self.display_name or self.user_id\n\n @property\n def friendly_name_long(self):\n r = self.friendly_name\n display_name = self.display_name\n if display_name and display_name != r:\n return '%s (%s)' % (r, display_name)\n user_name = self.user_name\n if user_name and user_name != r:\n return '%s (%s)' % (r, user_name)\n return r\n\n def save_token(self, token):\n \"\"\"Saves the given access token in the database.\n \"\"\"\n self.db.run(\"\"\"\n UPDATE elsewhere\n SET token = %s\n WHERE id=%s\n \"\"\", (token, self.id))\n self.set_attributes(token=token)\n\n\ndef get_account_elsewhere(website, state, api_lookup=True):\n path = state['request'].line.uri.path\n platform = getattr(website.platforms, path['platform'], None)\n if platform is None:\n raise Response(404)\n uid = path['user_name']\n if uid[:1] == '~':\n key = 'user_id'\n uid = uid[1:]\n else:\n key = 'user_name'\n try:\n account = AccountElsewhere._from_thing(key, platform.name, uid)\n except UnknownAccountElsewhere:\n account = None\n if not account:\n if not api_lookup:\n raise Response(404)\n try:\n user_info = platform.get_user_info(key, uid)\n except Response as r:\n if r.code == 404:\n _ = state['_']\n err = _(\"There doesn't seem to be a user named {0} on {1}.\",\n uid, platform.display_name)\n raise Response(404, err)\n raise\n account = AccountElsewhere.upsert(user_info)\n return platform, account\n", "path": "liberapay/models/account_elsewhere.py"}]}
3,132
124
gh_patches_debug_1854
rasdani/github-patches
git_diff
urllib3__urllib3-2424
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Remove integration tests for Botocore with Python 2.7 Botocore dropped support for Python 2.7 in July so we don't have to do integration testing with Botocore+Python 2.7 on the 1.26.x branch any longer. Reference: https://github.com/urllib3/urllib3/pull/2422 </issue> <code> [start of noxfile.py] 1 import os 2 import shutil 3 import subprocess 4 5 import nox 6 7 SOURCE_FILES = [ 8 "docs/", 9 "dummyserver/", 10 "src/", 11 "test/", 12 "noxfile.py", 13 "setup.py", 14 ] 15 16 17 def tests_impl(session, extras="socks,secure,brotli"): 18 # Install deps and the package itself. 19 session.install("-r", "dev-requirements.txt") 20 session.install(".[{extras}]".format(extras=extras)) 21 22 # Show the pip version. 23 session.run("pip", "--version") 24 # Print the Python version and bytesize. 25 session.run("python", "--version") 26 session.run("python", "-c", "import struct; print(struct.calcsize('P') * 8)") 27 # Print OpenSSL information. 28 session.run("python", "-m", "OpenSSL.debug") 29 30 # Inspired from https://github.com/pyca/cryptography 31 # We use parallel mode and then combine here so that coverage.py will take 32 # the paths like .tox/pyXY/lib/pythonX.Y/site-packages/urllib3/__init__.py 33 # and collapse them into src/urllib3/__init__.py. 34 35 session.run( 36 "coverage", 37 "run", 38 "--parallel-mode", 39 "-m", 40 "pytest", 41 "-r", 42 "a", 43 "--tb=native", 44 "--no-success-flaky-report", 45 *(session.posargs or ("test/",)), 46 env={"PYTHONWARNINGS": "always::DeprecationWarning"}, 47 ) 48 session.run("coverage", "combine") 49 session.run("coverage", "report", "-m") 50 session.run("coverage", "xml") 51 52 53 @nox.session(python=["2.7", "3.5", "3.6", "3.7", "3.8", "3.9", "3.10", "pypy"]) 54 def test(session): 55 tests_impl(session) 56 57 58 @nox.session(python=["2", "3"]) 59 def google_brotli(session): 60 # https://pypi.org/project/Brotli/ is the Google version of brotli, so 61 # install it separately and don't install our brotli extra (which installs 62 # brotlipy). 63 session.install("brotli") 64 tests_impl(session, extras="socks,secure") 65 66 67 @nox.session(python="2.7") 68 def app_engine(session): 69 session.install("-r", "dev-requirements.txt") 70 session.install(".") 71 session.run( 72 "coverage", 73 "run", 74 "--parallel-mode", 75 "-m", 76 "pytest", 77 "-r", 78 "sx", 79 "test/appengine", 80 *session.posargs, 81 ) 82 session.run("coverage", "combine") 83 session.run("coverage", "report", "-m") 84 session.run("coverage", "xml") 85 86 87 def git_clone(session, git_url): 88 session.run("git", "clone", "--depth", "1", git_url, external=True) 89 90 91 @nox.session(python=["2.7", "3.9"]) 92 def downstream_botocore(session): 93 root = os.getcwd() 94 tmp_dir = session.create_tmp() 95 96 session.cd(tmp_dir) 97 git_clone(session, "https://github.com/boto/botocore") 98 session.chdir("botocore") 99 session.run("git", "rev-parse", "HEAD", external=True) 100 session.run("python", "scripts/ci/install") 101 102 session.cd(root) 103 session.install(".", silent=False) 104 session.cd(f"{tmp_dir}/botocore") 105 106 session.run("python", "scripts/ci/run-tests") 107 108 109 @nox.session(python=["2.7", "3.9"]) 110 def downstream_requests(session): 111 root = os.getcwd() 112 tmp_dir = session.create_tmp() 113 114 session.cd(tmp_dir) 115 git_clone(session, "https://github.com/psf/requests") 116 session.chdir("requests") 117 session.run("git", "apply", f"{root}/ci/requests.patch", external=True) 118 session.run("git", "rev-parse", "HEAD", external=True) 119 session.install(".[socks]", silent=False) 120 session.install("-r", "requirements-dev.txt", silent=False) 121 122 session.cd(root) 123 session.install(".", silent=False) 124 session.cd(f"{tmp_dir}/requests") 125 126 session.run("pytest", "tests") 127 128 129 @nox.session() 130 def format(session): 131 """Run code formatters.""" 132 session.install("pre-commit") 133 session.run("pre-commit", "--version") 134 135 process = subprocess.run( 136 ["pre-commit", "run", "--all-files"], 137 env=session.env, 138 text=True, 139 stdout=subprocess.PIPE, 140 stderr=subprocess.STDOUT, 141 ) 142 # Ensure that pre-commit itself ran successfully 143 assert process.returncode in (0, 1) 144 145 lint(session) 146 147 148 @nox.session 149 def lint(session): 150 session.install("pre-commit") 151 session.run("pre-commit", "run", "--all-files") 152 153 154 @nox.session 155 def docs(session): 156 session.install("-r", "docs/requirements.txt") 157 session.install(".[socks,secure,brotli]") 158 159 session.chdir("docs") 160 if os.path.exists("_build"): 161 shutil.rmtree("_build") 162 session.run("sphinx-build", "-b", "html", "-W", ".", "_build/html") 163 [end of noxfile.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/noxfile.py b/noxfile.py --- a/noxfile.py +++ b/noxfile.py @@ -88,7 +88,7 @@ session.run("git", "clone", "--depth", "1", git_url, external=True) [email protected](python=["2.7", "3.9"]) [email protected](python=["3.9"]) def downstream_botocore(session): root = os.getcwd() tmp_dir = session.create_tmp()
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -88,7 +88,7 @@\n session.run(\"git\", \"clone\", \"--depth\", \"1\", git_url, external=True)\n \n \[email protected](python=[\"2.7\", \"3.9\"])\[email protected](python=[\"3.9\"])\n def downstream_botocore(session):\n root = os.getcwd()\n tmp_dir = session.create_tmp()\n", "issue": "Remove integration tests for Botocore with Python 2.7\nBotocore dropped support for Python 2.7 in July so we don't have to do integration testing with Botocore+Python 2.7 on the 1.26.x branch any longer.\r\n\r\nReference: https://github.com/urllib3/urllib3/pull/2422\n", "before_files": [{"content": "import os\nimport shutil\nimport subprocess\n\nimport nox\n\nSOURCE_FILES = [\n \"docs/\",\n \"dummyserver/\",\n \"src/\",\n \"test/\",\n \"noxfile.py\",\n \"setup.py\",\n]\n\n\ndef tests_impl(session, extras=\"socks,secure,brotli\"):\n # Install deps and the package itself.\n session.install(\"-r\", \"dev-requirements.txt\")\n session.install(\".[{extras}]\".format(extras=extras))\n\n # Show the pip version.\n session.run(\"pip\", \"--version\")\n # Print the Python version and bytesize.\n session.run(\"python\", \"--version\")\n session.run(\"python\", \"-c\", \"import struct; print(struct.calcsize('P') * 8)\")\n # Print OpenSSL information.\n session.run(\"python\", \"-m\", \"OpenSSL.debug\")\n\n # Inspired from https://github.com/pyca/cryptography\n # We use parallel mode and then combine here so that coverage.py will take\n # the paths like .tox/pyXY/lib/pythonX.Y/site-packages/urllib3/__init__.py\n # and collapse them into src/urllib3/__init__.py.\n\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel-mode\",\n \"-m\",\n \"pytest\",\n \"-r\",\n \"a\",\n \"--tb=native\",\n \"--no-success-flaky-report\",\n *(session.posargs or (\"test/\",)),\n env={\"PYTHONWARNINGS\": \"always::DeprecationWarning\"},\n )\n session.run(\"coverage\", \"combine\")\n session.run(\"coverage\", \"report\", \"-m\")\n session.run(\"coverage\", \"xml\")\n\n\[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\", \"3.9\", \"3.10\", \"pypy\"])\ndef test(session):\n tests_impl(session)\n\n\[email protected](python=[\"2\", \"3\"])\ndef google_brotli(session):\n # https://pypi.org/project/Brotli/ is the Google version of brotli, so\n # install it separately and don't install our brotli extra (which installs\n # brotlipy).\n session.install(\"brotli\")\n tests_impl(session, extras=\"socks,secure\")\n\n\[email protected](python=\"2.7\")\ndef app_engine(session):\n session.install(\"-r\", \"dev-requirements.txt\")\n session.install(\".\")\n session.run(\n \"coverage\",\n \"run\",\n \"--parallel-mode\",\n \"-m\",\n \"pytest\",\n \"-r\",\n \"sx\",\n \"test/appengine\",\n *session.posargs,\n )\n session.run(\"coverage\", \"combine\")\n session.run(\"coverage\", \"report\", \"-m\")\n session.run(\"coverage\", \"xml\")\n\n\ndef git_clone(session, git_url):\n session.run(\"git\", \"clone\", \"--depth\", \"1\", git_url, external=True)\n\n\[email protected](python=[\"2.7\", \"3.9\"])\ndef downstream_botocore(session):\n root = os.getcwd()\n tmp_dir = session.create_tmp()\n\n session.cd(tmp_dir)\n git_clone(session, \"https://github.com/boto/botocore\")\n session.chdir(\"botocore\")\n session.run(\"git\", \"rev-parse\", \"HEAD\", external=True)\n session.run(\"python\", \"scripts/ci/install\")\n\n session.cd(root)\n session.install(\".\", silent=False)\n session.cd(f\"{tmp_dir}/botocore\")\n\n session.run(\"python\", \"scripts/ci/run-tests\")\n\n\[email protected](python=[\"2.7\", \"3.9\"])\ndef downstream_requests(session):\n root = os.getcwd()\n tmp_dir = session.create_tmp()\n\n session.cd(tmp_dir)\n git_clone(session, \"https://github.com/psf/requests\")\n session.chdir(\"requests\")\n session.run(\"git\", \"apply\", f\"{root}/ci/requests.patch\", external=True)\n session.run(\"git\", \"rev-parse\", \"HEAD\", external=True)\n session.install(\".[socks]\", silent=False)\n session.install(\"-r\", \"requirements-dev.txt\", silent=False)\n\n session.cd(root)\n session.install(\".\", silent=False)\n session.cd(f\"{tmp_dir}/requests\")\n\n session.run(\"pytest\", \"tests\")\n\n\[email protected]()\ndef format(session):\n \"\"\"Run code formatters.\"\"\"\n session.install(\"pre-commit\")\n session.run(\"pre-commit\", \"--version\")\n\n process = subprocess.run(\n [\"pre-commit\", \"run\", \"--all-files\"],\n env=session.env,\n text=True,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n )\n # Ensure that pre-commit itself ran successfully\n assert process.returncode in (0, 1)\n\n lint(session)\n\n\[email protected]\ndef lint(session):\n session.install(\"pre-commit\")\n session.run(\"pre-commit\", \"run\", \"--all-files\")\n\n\[email protected]\ndef docs(session):\n session.install(\"-r\", \"docs/requirements.txt\")\n session.install(\".[socks,secure,brotli]\")\n\n session.chdir(\"docs\")\n if os.path.exists(\"_build\"):\n shutil.rmtree(\"_build\")\n session.run(\"sphinx-build\", \"-b\", \"html\", \"-W\", \".\", \"_build/html\")\n", "path": "noxfile.py"}]}
2,145
110
gh_patches_debug_24601
rasdani/github-patches
git_diff
fossasia__open-event-server-835
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Session Form: Make Title and Name always required In session forms in step 5 of the wizard make "Title" and "Name" always required. Maybe we should make those in another color, so it becomes clear that they cannot be edited or change the UI somehow? ![screenshot from 2016-06-15 08 17 55](https://cloud.githubusercontent.com/assets/1583873/16069896/257ff8d4-32d2-11e6-840d-53ea58adb955.png) </issue> <code> [start of open_event/views/admin/models_views/events.py] 1 import os 2 3 from flask import request, url_for, redirect 4 from flask_admin import expose 5 from flask_admin.contrib.sqla import ModelView 6 from flask.ext import login 7 from ....helpers.data import DataManager, save_to_db 8 from ....helpers.data_getter import DataGetter 9 from datetime import datetime 10 from werkzeug.utils import secure_filename 11 from werkzeug.datastructures import ImmutableMultiDict 12 13 class EventsView(ModelView): 14 def is_accessible(self): 15 return login.current_user.is_authenticated 16 17 def _handle_view(self, name, **kwargs): 18 if not self.is_accessible(): 19 return redirect(url_for('admin.login_view', next=request.url)) 20 21 @expose('/') 22 def index_view(self): 23 live_events = DataGetter.get_live_events() 24 draft_events = DataGetter.get_draft_events() 25 past_events = DataGetter.get_past_events() 26 all_events = DataGetter.get_all_events() 27 return self.render('/gentelella/admin/event/index.html', 28 live_events=live_events, draft_events=draft_events, past_events=past_events, 29 all_events=all_events) 30 31 @expose('/create/', methods=('GET', 'POST')) 32 def create_view(self): 33 session_columns = DataGetter.get_session_columns() 34 speaker_columns = DataGetter.get_speaker_columns() 35 if request.method == 'POST': 36 imd = ImmutableMultiDict(request.files) 37 for img_file in imd.getlist('sponsors[logo]'): 38 filename = secure_filename(img_file.filename) 39 img_file.save(os.path.join(os.path.realpath('.') + '/static/media/image/', filename)) 40 event = DataManager.create_event(request.form, imd) 41 if event: 42 return redirect(url_for('.details_view', event_id=event.id)) 43 return redirect(url_for('.index_view')) 44 return self.render('/gentelella/admin/event/new/new.html', 45 session_columns=session_columns, 46 speaker_columns=speaker_columns, 47 event_types=DataGetter.get_event_types(), 48 event_topics=DataGetter.get_event_topics()) 49 50 @expose('/<int:event_id>/', methods=('GET', 'POST')) 51 def details_view(self, event_id): 52 event = DataGetter.get_event(event_id) 53 54 return self.render('/gentelella/admin/event/details/details.html', event=event) 55 56 @expose('/<int:event_id>/edit/', methods=('GET', 'POST')) 57 def edit_view(self, event_id): 58 event = DataGetter.get_event(event_id) 59 session_types = DataGetter.get_session_types_by_event_id(event_id) 60 tracks = DataGetter.get_tracks(event_id) 61 social_links = DataGetter.get_social_links_by_event_id(event_id) 62 microlocations = DataGetter.get_microlocations(event_id) 63 call_for_speakers = DataGetter.get_call_for_papers(event_id).first() 64 sponsors = DataGetter.get_sponsors(event_id) 65 session_columns = DataGetter.get_session_columns() 66 speaker_columns = DataGetter.get_speaker_columns() 67 if request.method == 'GET': 68 return self.render('/gentelella/admin/event/edit/edit.html', event=event, session_types=session_types, 69 tracks=tracks, social_links=social_links, microlocations=microlocations, 70 call_for_speakers=call_for_speakers, sponsors=sponsors, session_columns=session_columns, 71 speaker_columns=speaker_columns, event_types=DataGetter.get_event_types(), 72 event_topics=DataGetter.get_event_topics()) 73 if request.method == "POST": 74 event = DataManager.edit_event(request.form, event_id, event, session_types, tracks, social_links, 75 microlocations, call_for_speakers, sponsors) 76 return self.render('/gentelella/admin/event/details/details.html', event=event) 77 78 @expose('/<event_id>/delete/', methods=('GET',)) 79 def delete_view(self, event_id): 80 if request.method == "GET": 81 DataManager.delete_event(event_id) 82 return redirect(url_for('.index_view')) 83 84 @expose('/<int:event_id>/update/', methods=('POST',)) 85 def save_closing_date(self, event_id): 86 event = DataGetter.get_event(event_id) 87 event.closing_datetime = request.form['closing_datetime'] 88 save_to_db(event, 'Closing Datetime Updated') 89 return self.render('/gentelella/admin/event/details/details.html', event=event) 90 [end of open_event/views/admin/models_views/events.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/open_event/views/admin/models_views/events.py b/open_event/views/admin/models_views/events.py --- a/open_event/views/admin/models_views/events.py +++ b/open_event/views/admin/models_views/events.py @@ -29,6 +29,9 @@ def create_view(self): session_columns = DataGetter.get_session_columns() speaker_columns = DataGetter.get_speaker_columns() + speaker_columns = list(speaker_columns) + speaker_columns.insert(2, speaker_columns.pop(4)) # Moving email to the top + if request.method == 'POST': event = DataManager.create_event(request.form) if event: @@ -57,6 +60,7 @@ sponsors = DataGetter.get_sponsors(event_id) session_columns = DataGetter.get_session_columns() speaker_columns = DataGetter.get_speaker_columns() + if request.method == 'GET': return self.render('/gentelella/admin/event/edit/edit.html', event=event, session_types=session_types, tracks=tracks, social_links=social_links, microlocations=microlocations,
{"golden_diff": "diff --git a/open_event/views/admin/models_views/events.py b/open_event/views/admin/models_views/events.py\n--- a/open_event/views/admin/models_views/events.py\n+++ b/open_event/views/admin/models_views/events.py\n@@ -29,6 +29,9 @@\n def create_view(self):\n session_columns = DataGetter.get_session_columns()\n speaker_columns = DataGetter.get_speaker_columns()\n+ speaker_columns = list(speaker_columns)\n+ speaker_columns.insert(2, speaker_columns.pop(4)) # Moving email to the top\n+\n if request.method == 'POST':\n event = DataManager.create_event(request.form)\n if event:\n@@ -57,6 +60,7 @@\n sponsors = DataGetter.get_sponsors(event_id)\n session_columns = DataGetter.get_session_columns()\n speaker_columns = DataGetter.get_speaker_columns()\n+\n if request.method == 'GET':\n return self.render('/gentelella/admin/event/edit/edit.html', event=event, session_types=session_types,\n tracks=tracks, social_links=social_links, microlocations=microlocations,\n", "issue": "Session Form: Make Title and Name always required\nIn session forms in step 5 of the wizard make \"Title\" and \"Name\" always required. Maybe we should make those in another color, so it becomes clear that they cannot be edited or change the UI somehow?\n\n![screenshot from 2016-06-15 08 17 55](https://cloud.githubusercontent.com/assets/1583873/16069896/257ff8d4-32d2-11e6-840d-53ea58adb955.png)\n\n", "before_files": [{"content": "import os\n\nfrom flask import request, url_for, redirect\nfrom flask_admin import expose\nfrom flask_admin.contrib.sqla import ModelView\nfrom flask.ext import login\nfrom ....helpers.data import DataManager, save_to_db\nfrom ....helpers.data_getter import DataGetter\nfrom datetime import datetime\nfrom werkzeug.utils import secure_filename\nfrom werkzeug.datastructures import ImmutableMultiDict\n\nclass EventsView(ModelView):\n def is_accessible(self):\n return login.current_user.is_authenticated\n\n def _handle_view(self, name, **kwargs):\n if not self.is_accessible():\n return redirect(url_for('admin.login_view', next=request.url))\n\n @expose('/')\n def index_view(self):\n live_events = DataGetter.get_live_events()\n draft_events = DataGetter.get_draft_events()\n past_events = DataGetter.get_past_events()\n all_events = DataGetter.get_all_events()\n return self.render('/gentelella/admin/event/index.html',\n live_events=live_events, draft_events=draft_events, past_events=past_events,\n all_events=all_events)\n\n @expose('/create/', methods=('GET', 'POST'))\n def create_view(self):\n session_columns = DataGetter.get_session_columns()\n speaker_columns = DataGetter.get_speaker_columns()\n if request.method == 'POST':\n imd = ImmutableMultiDict(request.files)\n for img_file in imd.getlist('sponsors[logo]'):\n filename = secure_filename(img_file.filename)\n img_file.save(os.path.join(os.path.realpath('.') + '/static/media/image/', filename))\n event = DataManager.create_event(request.form, imd)\n if event:\n return redirect(url_for('.details_view', event_id=event.id))\n return redirect(url_for('.index_view'))\n return self.render('/gentelella/admin/event/new/new.html',\n session_columns=session_columns,\n speaker_columns=speaker_columns,\n event_types=DataGetter.get_event_types(),\n event_topics=DataGetter.get_event_topics())\n\n @expose('/<int:event_id>/', methods=('GET', 'POST'))\n def details_view(self, event_id):\n event = DataGetter.get_event(event_id)\n\n return self.render('/gentelella/admin/event/details/details.html', event=event)\n\n @expose('/<int:event_id>/edit/', methods=('GET', 'POST'))\n def edit_view(self, event_id):\n event = DataGetter.get_event(event_id)\n session_types = DataGetter.get_session_types_by_event_id(event_id)\n tracks = DataGetter.get_tracks(event_id)\n social_links = DataGetter.get_social_links_by_event_id(event_id)\n microlocations = DataGetter.get_microlocations(event_id)\n call_for_speakers = DataGetter.get_call_for_papers(event_id).first()\n sponsors = DataGetter.get_sponsors(event_id)\n session_columns = DataGetter.get_session_columns()\n speaker_columns = DataGetter.get_speaker_columns()\n if request.method == 'GET':\n return self.render('/gentelella/admin/event/edit/edit.html', event=event, session_types=session_types,\n tracks=tracks, social_links=social_links, microlocations=microlocations,\n call_for_speakers=call_for_speakers, sponsors=sponsors, session_columns=session_columns,\n speaker_columns=speaker_columns, event_types=DataGetter.get_event_types(),\n event_topics=DataGetter.get_event_topics())\n if request.method == \"POST\":\n event = DataManager.edit_event(request.form, event_id, event, session_types, tracks, social_links,\n microlocations, call_for_speakers, sponsors)\n return self.render('/gentelella/admin/event/details/details.html', event=event)\n\n @expose('/<event_id>/delete/', methods=('GET',))\n def delete_view(self, event_id):\n if request.method == \"GET\":\n DataManager.delete_event(event_id)\n return redirect(url_for('.index_view'))\n\n @expose('/<int:event_id>/update/', methods=('POST',))\n def save_closing_date(self, event_id):\n event = DataGetter.get_event(event_id)\n event.closing_datetime = request.form['closing_datetime']\n save_to_db(event, 'Closing Datetime Updated')\n return self.render('/gentelella/admin/event/details/details.html', event=event)\n", "path": "open_event/views/admin/models_views/events.py"}]}
1,752
230
gh_patches_debug_27663
rasdani/github-patches
git_diff
interlegis__sapl-2580
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Corrigir caixa de busca do lexml com link quebrado http://sapl3.interlegis.leg.br/generico/lexml_pesquisar Verificar com @edwardoliveira qual caminho tomar quanto a isso! ![image](https://user-images.githubusercontent.com/9630623/53385017-be5bb880-395b-11e9-8e41-945cacc49a69.png) </issue> <code> [start of sapl/lexml/views.py] 1 from django.http import HttpResponse 2 3 from sapl.crud.base import CrudAux 4 from sapl.lexml.OAIServer import OAIServerFactory, get_config 5 6 from .models import LexmlProvedor, LexmlPublicador 7 8 LexmlProvedorCrud = CrudAux.build(LexmlProvedor, 'lexml_provedor') 9 LexmlPublicadorCrud = CrudAux.build(LexmlPublicador, 'lexml_publicador') 10 11 12 def lexml_request(request): 13 config = get_config(request.get_raw_uri(), int(request.GET.get('batch_size', 10))) 14 oai_server = OAIServerFactory(config) 15 r = oai_server.handleRequest(request.GET) 16 response = r.decode('UTF-8') 17 return HttpResponse(response, content_type='text/xml') 18 [end of sapl/lexml/views.py] [start of sapl/lexml/urls.py] 1 from django.conf.urls import include, url 2 3 from sapl.lexml.views import LexmlProvedorCrud, LexmlPublicadorCrud, lexml_request 4 5 from .apps import AppConfig 6 7 app_name = AppConfig.name 8 9 urlpatterns = [ 10 url(r'^sistema/lexml/provedor/', 11 include(LexmlProvedorCrud.get_urls())), 12 url(r'^sistema/lexml/publicador/', 13 include(LexmlPublicadorCrud.get_urls())), 14 url(r'^sistema/lexml', lexml_request, name='lexml_endpoint') 15 ] 16 [end of sapl/lexml/urls.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/sapl/lexml/urls.py b/sapl/lexml/urls.py --- a/sapl/lexml/urls.py +++ b/sapl/lexml/urls.py @@ -1,6 +1,6 @@ from django.conf.urls import include, url -from sapl.lexml.views import LexmlProvedorCrud, LexmlPublicadorCrud, lexml_request +from sapl.lexml.views import LexmlProvedorCrud, LexmlPublicadorCrud, lexml_request, request_search from .apps import AppConfig @@ -11,5 +11,7 @@ include(LexmlProvedorCrud.get_urls())), url(r'^sistema/lexml/publicador/', include(LexmlPublicadorCrud.get_urls())), - url(r'^sistema/lexml', lexml_request, name='lexml_endpoint') + url(r'^sistema/lexml/request_search/(?P<keyword>[\w\-]+)/', request_search, name='lexml_search'), + url(r'^sistema/lexml', lexml_request, name='lexml_endpoint'), + ] diff --git a/sapl/lexml/views.py b/sapl/lexml/views.py --- a/sapl/lexml/views.py +++ b/sapl/lexml/views.py @@ -1,4 +1,5 @@ from django.http import HttpResponse +from django.shortcuts import render from sapl.crud.base import CrudAux from sapl.lexml.OAIServer import OAIServerFactory, get_config @@ -16,3 +17,6 @@ 'metadataPrefix': request.GET.get('metadataPrefix', 'oai_lexml')}) response = r.decode('UTF-8') return HttpResponse(response, content_type='text/xml') + +def request_search(request, keyword): + return render(request,"lexml/resultado-pesquisa.html",{"keyword":keyword}) \ No newline at end of file
{"golden_diff": "diff --git a/sapl/lexml/urls.py b/sapl/lexml/urls.py\n--- a/sapl/lexml/urls.py\n+++ b/sapl/lexml/urls.py\n@@ -1,6 +1,6 @@\n from django.conf.urls import include, url\n \n-from sapl.lexml.views import LexmlProvedorCrud, LexmlPublicadorCrud, lexml_request\n+from sapl.lexml.views import LexmlProvedorCrud, LexmlPublicadorCrud, lexml_request, request_search\n \n from .apps import AppConfig\n \n@@ -11,5 +11,7 @@\n include(LexmlProvedorCrud.get_urls())),\n url(r'^sistema/lexml/publicador/',\n include(LexmlPublicadorCrud.get_urls())),\n- url(r'^sistema/lexml', lexml_request, name='lexml_endpoint')\n+ url(r'^sistema/lexml/request_search/(?P<keyword>[\\w\\-]+)/', request_search, name='lexml_search'),\n+ url(r'^sistema/lexml', lexml_request, name='lexml_endpoint'),\n+\n ]\ndiff --git a/sapl/lexml/views.py b/sapl/lexml/views.py\n--- a/sapl/lexml/views.py\n+++ b/sapl/lexml/views.py\n@@ -1,4 +1,5 @@\n from django.http import HttpResponse\n+from django.shortcuts import render\n \n from sapl.crud.base import CrudAux\n from sapl.lexml.OAIServer import OAIServerFactory, get_config\n@@ -16,3 +17,6 @@\n 'metadataPrefix': request.GET.get('metadataPrefix', 'oai_lexml')})\n response = r.decode('UTF-8')\n return HttpResponse(response, content_type='text/xml')\n+\n+def request_search(request, keyword):\n+ return render(request,\"lexml/resultado-pesquisa.html\",{\"keyword\":keyword})\n\\ No newline at end of file\n", "issue": "Corrigir caixa de busca do lexml com link quebrado\nhttp://sapl3.interlegis.leg.br/generico/lexml_pesquisar\r\nVerificar com @edwardoliveira qual caminho tomar quanto a isso!\r\n\r\n![image](https://user-images.githubusercontent.com/9630623/53385017-be5bb880-395b-11e9-8e41-945cacc49a69.png)\r\n\n", "before_files": [{"content": "from django.http import HttpResponse\n\nfrom sapl.crud.base import CrudAux\nfrom sapl.lexml.OAIServer import OAIServerFactory, get_config\n\nfrom .models import LexmlProvedor, LexmlPublicador\n\nLexmlProvedorCrud = CrudAux.build(LexmlProvedor, 'lexml_provedor')\nLexmlPublicadorCrud = CrudAux.build(LexmlPublicador, 'lexml_publicador')\n\n\ndef lexml_request(request):\n config = get_config(request.get_raw_uri(), int(request.GET.get('batch_size', 10)))\n oai_server = OAIServerFactory(config)\n r = oai_server.handleRequest(request.GET)\n response = r.decode('UTF-8')\n return HttpResponse(response, content_type='text/xml')\n", "path": "sapl/lexml/views.py"}, {"content": "from django.conf.urls import include, url\n\nfrom sapl.lexml.views import LexmlProvedorCrud, LexmlPublicadorCrud, lexml_request\n\nfrom .apps import AppConfig\n\napp_name = AppConfig.name\n\nurlpatterns = [\n url(r'^sistema/lexml/provedor/',\n include(LexmlProvedorCrud.get_urls())),\n url(r'^sistema/lexml/publicador/',\n include(LexmlPublicadorCrud.get_urls())),\n url(r'^sistema/lexml', lexml_request, name='lexml_endpoint')\n]\n", "path": "sapl/lexml/urls.py"}]}
1,019
423
gh_patches_debug_22765
rasdani/github-patches
git_diff
ietf-tools__datatracker-5858
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> chore: Replace deprecated bootstrap features with alternatives ### Description Throughout the code. As suggested by @NGPixel. ### Code of Conduct - [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md) </issue> <code> [start of ietf/ietfauth/widgets.py] 1 from django.forms import PasswordInput 2 from django.utils.safestring import mark_safe 3 from django.utils.translation import gettext as _ 4 5 # The PasswordStrengthInput and PasswordConfirmationInput widgets come from the 6 # django-password-strength project, https://pypi.org/project/django-password-strength/ 7 # 8 # Original license: 9 # 10 # Copyright &copy; 2015 A.J. May and individual contributors. All rights reserved. 11 # 12 # Redistribution and use in source and binary forms, with or without modification, are permitted provided that the 13 # following conditions are met: 14 # 15 # 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following 16 # disclaimer. 17 # 18 # 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the 19 # following disclaimer in the documentation and/or other materials provided with the distribution. 20 # 21 # 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote 22 # products derived from this software without specific prior written permission. 23 # 24 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, 25 # INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 26 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 27 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 28 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, 29 # WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE 30 # USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 31 # 32 33 class PasswordStrengthInput(PasswordInput): 34 """ 35 Form widget to show the user how strong his/her password is. 36 """ 37 38 def render(self, name, value, attrs=None, renderer=None): 39 strength_markup = """ 40 <div style="margin-top: 10px;"> 41 <div class="progress" style="margin-bottom: 10px;"> 42 <div class="progress-bar progress-bar-warning password_strength_bar" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="5" style="width: 0%%"></div> 43 </div> 44 <p class="text-muted password_strength_info hidden"> 45 <span class="label label-danger"> 46 %s 47 </span> 48 <span style="margin-left:5px;"> 49 %s 50 </span> 51 </p> 52 </div> 53 """ % ( 54 _("Warning"), 55 _( 56 'This password would take <em class="password_strength_time"></em> to crack.' 57 ), 58 ) 59 60 try: 61 self.attrs["class"] = "%s password_strength".strip() % self.attrs["class"] 62 except KeyError: 63 self.attrs["class"] = "password_strength" 64 65 return mark_safe( 66 super(PasswordInput, self).render(name, value, attrs, renderer) 67 + strength_markup 68 ) 69 70 class Media: 71 js = ( 72 "ietf/js/zxcvbn.js", 73 "ietf/js/password_strength.js", 74 ) 75 76 77 class PasswordConfirmationInput(PasswordInput): 78 """ 79 Form widget to confirm the users password by letting him/her type it again. 80 """ 81 82 def __init__(self, confirm_with=None, attrs=None, render_value=False): 83 super(PasswordConfirmationInput, self).__init__(attrs, render_value) 84 self.confirm_with = confirm_with 85 86 def render(self, name, value, attrs=None, renderer=None): 87 if self.confirm_with: 88 self.attrs["data-confirm-with"] = "id_%s" % self.confirm_with 89 90 confirmation_markup = """ 91 <div style="margin-top: 10px;" class="hidden password_strength_info"> 92 <p class="text-muted"> 93 <span class="label label-danger"> 94 %s 95 </span> 96 <span style="margin-left:5px;">%s</span> 97 </p> 98 </div> 99 """ % ( 100 _("Warning"), 101 _("Your passwords don't match."), 102 ) 103 104 try: 105 self.attrs["class"] = ( 106 "%s password_confirmation".strip() % self.attrs["class"] 107 ) 108 except KeyError: 109 self.attrs["class"] = "password_confirmation" 110 111 return mark_safe( 112 super(PasswordInput, self).render(name, value, attrs, renderer) 113 + confirmation_markup 114 ) 115 [end of ietf/ietfauth/widgets.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/ietf/ietfauth/widgets.py b/ietf/ietfauth/widgets.py --- a/ietf/ietfauth/widgets.py +++ b/ietf/ietfauth/widgets.py @@ -41,7 +41,7 @@ <div class="progress" style="margin-bottom: 10px;"> <div class="progress-bar progress-bar-warning password_strength_bar" role="progressbar" aria-valuenow="0" aria-valuemin="0" aria-valuemax="5" style="width: 0%%"></div> </div> - <p class="text-muted password_strength_info hidden"> + <p class="text-body-secondary password_strength_info hidden"> <span class="label label-danger"> %s </span> @@ -89,7 +89,7 @@ confirmation_markup = """ <div style="margin-top: 10px;" class="hidden password_strength_info"> - <p class="text-muted"> + <p class="text-body-secondary"> <span class="label label-danger"> %s </span>
{"golden_diff": "diff --git a/ietf/ietfauth/widgets.py b/ietf/ietfauth/widgets.py\n--- a/ietf/ietfauth/widgets.py\n+++ b/ietf/ietfauth/widgets.py\n@@ -41,7 +41,7 @@\n <div class=\"progress\" style=\"margin-bottom: 10px;\">\n <div class=\"progress-bar progress-bar-warning password_strength_bar\" role=\"progressbar\" aria-valuenow=\"0\" aria-valuemin=\"0\" aria-valuemax=\"5\" style=\"width: 0%%\"></div>\n </div>\n- <p class=\"text-muted password_strength_info hidden\">\n+ <p class=\"text-body-secondary password_strength_info hidden\">\n <span class=\"label label-danger\">\n %s\n </span>\n@@ -89,7 +89,7 @@\n \n confirmation_markup = \"\"\"\n <div style=\"margin-top: 10px;\" class=\"hidden password_strength_info\">\n- <p class=\"text-muted\">\n+ <p class=\"text-body-secondary\">\n <span class=\"label label-danger\">\n %s\n </span>\n", "issue": "chore: Replace deprecated bootstrap features with alternatives\n### Description\n\nThroughout the code. As suggested by @NGPixel.\n\n### Code of Conduct\n\n- [X] I agree to follow the [IETF's Code of Conduct](https://github.com/ietf-tools/.github/blob/main/CODE_OF_CONDUCT.md)\n", "before_files": [{"content": "from django.forms import PasswordInput\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import gettext as _\n\n# The PasswordStrengthInput and PasswordConfirmationInput widgets come from the\n# django-password-strength project, https://pypi.org/project/django-password-strength/\n#\n# Original license:\n#\n# Copyright &copy; 2015 A.J. May and individual contributors. All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without modification, are permitted provided that the \n# following conditions are met:\n#\n# 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following\n# disclaimer.\n# \n# 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the \n# following disclaimer in the documentation and/or other materials provided with the distribution.\n# \n# 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote \n# products derived from this software without specific prior written permission.\n# \n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, \n# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE \n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, \n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR \n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, \n# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE \n# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n#\n\nclass PasswordStrengthInput(PasswordInput):\n \"\"\"\n Form widget to show the user how strong his/her password is.\n \"\"\"\n\n def render(self, name, value, attrs=None, renderer=None):\n strength_markup = \"\"\"\n <div style=\"margin-top: 10px;\">\n <div class=\"progress\" style=\"margin-bottom: 10px;\">\n <div class=\"progress-bar progress-bar-warning password_strength_bar\" role=\"progressbar\" aria-valuenow=\"0\" aria-valuemin=\"0\" aria-valuemax=\"5\" style=\"width: 0%%\"></div>\n </div>\n <p class=\"text-muted password_strength_info hidden\">\n <span class=\"label label-danger\">\n %s\n </span>\n <span style=\"margin-left:5px;\">\n %s\n </span>\n </p>\n </div>\n \"\"\" % (\n _(\"Warning\"),\n _(\n 'This password would take <em class=\"password_strength_time\"></em> to crack.'\n ),\n )\n\n try:\n self.attrs[\"class\"] = \"%s password_strength\".strip() % self.attrs[\"class\"]\n except KeyError:\n self.attrs[\"class\"] = \"password_strength\"\n\n return mark_safe(\n super(PasswordInput, self).render(name, value, attrs, renderer)\n + strength_markup\n )\n\n class Media:\n js = (\n \"ietf/js/zxcvbn.js\",\n \"ietf/js/password_strength.js\",\n )\n\n\nclass PasswordConfirmationInput(PasswordInput):\n \"\"\"\n Form widget to confirm the users password by letting him/her type it again.\n \"\"\"\n\n def __init__(self, confirm_with=None, attrs=None, render_value=False):\n super(PasswordConfirmationInput, self).__init__(attrs, render_value)\n self.confirm_with = confirm_with\n\n def render(self, name, value, attrs=None, renderer=None):\n if self.confirm_with:\n self.attrs[\"data-confirm-with\"] = \"id_%s\" % self.confirm_with\n \n confirmation_markup = \"\"\"\n <div style=\"margin-top: 10px;\" class=\"hidden password_strength_info\">\n <p class=\"text-muted\">\n <span class=\"label label-danger\">\n %s\n </span>\n <span style=\"margin-left:5px;\">%s</span>\n </p>\n </div>\n \"\"\" % (\n _(\"Warning\"),\n _(\"Your passwords don't match.\"),\n )\n\n try:\n self.attrs[\"class\"] = (\n \"%s password_confirmation\".strip() % self.attrs[\"class\"]\n )\n except KeyError:\n self.attrs[\"class\"] = \"password_confirmation\"\n\n return mark_safe(\n super(PasswordInput, self).render(name, value, attrs, renderer)\n + confirmation_markup\n )\n", "path": "ietf/ietfauth/widgets.py"}]}
1,817
238
gh_patches_debug_26425
rasdani/github-patches
git_diff
openstates__openstates-scrapers-1998
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> WV failing since at least 2017-12-07 WV has been failing since 2017-12-07 Based on automated runs it appears that WV has not run successfully in 2 days (2017-12-07). ``` 23:01:25 CRITICAL pupa: Session(s) 2018 were reported by WestVirginia.get_session_list() but were not found in WestVirginia.legislative_sessions or WestVirginia.ignored_scraped_sessions. no pupa_settings on path, using defaults wv (scrape, import) people: {} bills: {} ``` Visit http://bobsled.openstates.org for more info. </issue> <code> [start of openstates/wv/__init__.py] 1 from pupa.scrape import Jurisdiction, Organization 2 3 from .people import WVPersonScraper 4 from .committees import WVCommitteeScraper 5 from .bills import WVBillScraper 6 7 8 class WestVirginia(Jurisdiction): 9 division_id = "ocd-division/country:us/state:wv" 10 classification = "government" 11 name = "West Virginia" 12 url = "http://www.legis.state.wv.us/" 13 scrapers = { 14 'people': WVPersonScraper, 15 'committees': WVCommitteeScraper, 16 'bills': WVBillScraper 17 } 18 parties = [ 19 {'name': 'Republican'}, 20 {'name': 'Democratic'} 21 ] 22 legislative_sessions = [ 23 { 24 "_scraped_name": "2011", 25 "classification": "primary", 26 "identifier": "2011", 27 "name": "2011 Regular Session" 28 }, 29 { 30 "_scraped_name": "2012", 31 "classification": "primary", 32 "identifier": "2012", 33 "name": "2012 Regular Session" 34 }, 35 { 36 "_scraped_name": "2013", 37 "classification": "primary", 38 "identifier": "2013", 39 "name": "2013 Regular Session" 40 }, 41 { 42 "_scraped_name": "2014", 43 "classification": "primary", 44 "identifier": "2014", 45 "name": "2014 Regular Session" 46 }, 47 { 48 "_scraped_name": "2015", 49 "classification": "primary", 50 "identifier": "2015", 51 "name": "2015 Regular Session" 52 }, 53 { 54 "_scraped_name": "2016", 55 "classification": "primary", 56 "identifier": "2016", 57 "name": "2016 Regular Session" 58 }, 59 { 60 "_scraped_name": "2016", 61 "classification": "special", 62 "identifier": "20161S", 63 "name": "2016 First Special Session" 64 }, 65 { 66 "_scraped_name": "2017", 67 "classification": "primary", 68 "identifier": "2017", 69 "name": "2017 Regular Session", 70 "start_date": "2017-02-08", 71 "end_date": "2017-04-09", 72 }, 73 { 74 "_scraped_name": "2017", 75 "classification": "special", 76 "identifier": "20171S", 77 "name": "2017 First Special Session", 78 }, 79 { 80 "_scraped_name": "2017", 81 "classification": "special", 82 "identifier": "20172S", 83 "name": "2017 Second Special Session", 84 } 85 ] 86 ignored_scraped_sessions = [ 87 "2010", 88 "2009", 89 "2008", 90 "2007", 91 "2006", 92 "2005", 93 "2004", 94 "2003", 95 "2002", 96 "2001", 97 "2000", 98 "1999", 99 "1998", 100 "1997", 101 "1996", 102 "1995", 103 "1994", 104 "1993" 105 ] 106 107 def get_organizations(self): 108 legislature_name = "West Virginia Legislature" 109 lower_chamber_name = "House" 110 lower_seats = 67 111 lower_title = "Delegate" 112 upper_chamber_name = "Senate" 113 upper_seats = 17 114 upper_title = "Senator" 115 116 legislature = Organization(name=legislature_name, 117 classification="legislature") 118 upper = Organization(upper_chamber_name, classification='upper', 119 parent_id=legislature._id) 120 lower = Organization(lower_chamber_name, classification='lower', 121 parent_id=legislature._id) 122 123 for n in range(1, upper_seats+1): 124 upper.add_post( 125 label=str(n), role=upper_title, 126 division_id='{}/sldu:{}'.format(self.division_id, n)) 127 for n in range(1, lower_seats+1): 128 lower.add_post( 129 label=str(n), role=lower_title, 130 division_id='{}/sldl:{}'.format(self.division_id, n)) 131 132 yield legislature 133 yield upper 134 yield lower 135 136 def get_session_list(self): 137 from openstates.utils import url_xpath 138 return url_xpath('http://www.legis.state.wv.us/Bill_Status/Bill_Status.cfm', 139 '//select[@name="year"]/option/text()') 140 [end of openstates/wv/__init__.py] [start of billy_metadata/wv.py] 1 2 metadata = { 3 'abbreviation': 'wv', 4 'capitol_timezone': 'America/New_York', 5 'name': 'West Virginia', 6 'legislature_name': 'West Virginia Legislature', 7 'legislature_url': 'http://www.legis.state.wv.us/', 8 'chambers': { 9 'upper': {'name': 'Senate', 'title': 'Senator'}, 10 'lower': {'name': 'House', 'title': 'Delegate'}, 11 }, 12 'terms': [ 13 {'name': '2011-2012', 14 'start_year': 2011, 'end_year': 2012, 15 'sessions': ['2011', '2012'], 16 }, 17 {'name': '2013-2014', 18 'start_year': 2013, 'end_year': 2014, 19 'sessions': ['2013', '2014'], 20 }, 21 {'name': '2015-2016', 22 'start_year': 2015, 'end_year': 2016, 23 'sessions': ['2015', '2016', '20161S'], 24 }, 25 {'name': '2017-2018', 26 'start_year': 2017, 'end_year': 2018, 27 'sessions': ['2017', '20171S', '20172S'], 28 }, 29 ], 30 'session_details': { 31 '2011': {'display_name': '2011 Regular Session', 32 'type': 'primary', 33 '_scraped_name': '2011' 34 }, 35 '2012': {'display_name': '2012 Regular Session', 36 'type': 'primary', 37 '_scraped_name': '2012' 38 }, 39 '2013': {'display_name': '2013 Regular Session', 40 'type': 'primary', 41 '_scraped_name': '2013' 42 }, 43 '2014': {'display_name': '2014 Regular Session', 44 'type': 'primary', 45 '_scraped_name': '2014' 46 }, 47 '2015': {'display_name': '2015 Regular Session', 48 'type': 'primary', 49 '_scraped_name': '2015' 50 }, 51 '2016': {'display_name': '2016 Regular Session', 52 'type': 'primary', 53 '_scraped_name': '2016' 54 }, 55 '20161S': {'display_name': '2016 First Special Session', 56 'type': 'special', 57 '_scraped_name': '2016', 58 '_special_name': '1X' 59 }, 60 '2017': {'display_name': '2017 Regular Session', 61 'type': 'primary', 62 '_scraped_name': '2017' 63 }, 64 '20171S': {'display_name': '2017 First Special Session', 65 'type': 'special', 66 }, 67 '20172S': {'display_name': '2017 Second Special Session', 68 'type': 'special', 69 }, 70 }, 71 'feature_flags': ['subjects', 'influenceexplorer'], 72 '_ignored_scraped_sessions': [ 73 '2010', '2009', '2008', '2007', '2006', 74 '2005', '2004', '2003', '2002', '2001', 75 '2000', '1999', '1998', '1997', '1996', 76 '1995', '1994', '1993', 77 ] 78 } 79 [end of billy_metadata/wv.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/billy_metadata/wv.py b/billy_metadata/wv.py --- a/billy_metadata/wv.py +++ b/billy_metadata/wv.py @@ -24,7 +24,7 @@ }, {'name': '2017-2018', 'start_year': 2017, 'end_year': 2018, - 'sessions': ['2017', '20171S', '20172S'], + 'sessions': ['2017', '20171S', '20172S', '2018'], }, ], 'session_details': { @@ -67,6 +67,10 @@ '20172S': {'display_name': '2017 Second Special Session', 'type': 'special', }, + '2018': {'display_name': '2018 Regular Session', + 'type': 'primary', + '_scraped_name': '2018' + }, }, 'feature_flags': ['subjects', 'influenceexplorer'], '_ignored_scraped_sessions': [ diff --git a/openstates/wv/__init__.py b/openstates/wv/__init__.py --- a/openstates/wv/__init__.py +++ b/openstates/wv/__init__.py @@ -81,6 +81,14 @@ "classification": "special", "identifier": "20172S", "name": "2017 Second Special Session", + }, + { + "_scraped_name": "2018", + "classification": "primary", + "identifier": "2018", + "name": "2018 Regular Session", + "start_date": "2018-01-10", + "end_date": "2018-03-10", } ] ignored_scraped_sessions = [
{"golden_diff": "diff --git a/billy_metadata/wv.py b/billy_metadata/wv.py\n--- a/billy_metadata/wv.py\n+++ b/billy_metadata/wv.py\n@@ -24,7 +24,7 @@\n },\n {'name': '2017-2018',\n 'start_year': 2017, 'end_year': 2018,\n- 'sessions': ['2017', '20171S', '20172S'],\n+ 'sessions': ['2017', '20171S', '20172S', '2018'],\n },\n ],\n 'session_details': {\n@@ -67,6 +67,10 @@\n '20172S': {'display_name': '2017 Second Special Session',\n 'type': 'special',\n },\n+ '2018': {'display_name': '2018 Regular Session',\n+ 'type': 'primary',\n+ '_scraped_name': '2018'\n+ },\n },\n 'feature_flags': ['subjects', 'influenceexplorer'],\n '_ignored_scraped_sessions': [\ndiff --git a/openstates/wv/__init__.py b/openstates/wv/__init__.py\n--- a/openstates/wv/__init__.py\n+++ b/openstates/wv/__init__.py\n@@ -81,6 +81,14 @@\n \"classification\": \"special\",\n \"identifier\": \"20172S\",\n \"name\": \"2017 Second Special Session\",\n+ },\n+ {\n+ \"_scraped_name\": \"2018\",\n+ \"classification\": \"primary\",\n+ \"identifier\": \"2018\",\n+ \"name\": \"2018 Regular Session\",\n+ \"start_date\": \"2018-01-10\",\n+ \"end_date\": \"2018-03-10\",\n }\n ]\n ignored_scraped_sessions = [\n", "issue": "WV failing since at least 2017-12-07\nWV has been failing since 2017-12-07\n\nBased on automated runs it appears that WV has not run successfully in 2 days (2017-12-07).\n\n\n```\n 23:01:25 CRITICAL pupa: Session(s) 2018 were reported by WestVirginia.get_session_list() but were not found in WestVirginia.legislative_sessions or WestVirginia.ignored_scraped_sessions.\nno pupa_settings on path, using defaults\nwv (scrape, import)\n people: {}\n bills: {}\n```\n\nVisit http://bobsled.openstates.org for more info.\n\n", "before_files": [{"content": "from pupa.scrape import Jurisdiction, Organization\n\nfrom .people import WVPersonScraper\nfrom .committees import WVCommitteeScraper\nfrom .bills import WVBillScraper\n\n\nclass WestVirginia(Jurisdiction):\n division_id = \"ocd-division/country:us/state:wv\"\n classification = \"government\"\n name = \"West Virginia\"\n url = \"http://www.legis.state.wv.us/\"\n scrapers = {\n 'people': WVPersonScraper,\n 'committees': WVCommitteeScraper,\n 'bills': WVBillScraper\n }\n parties = [\n {'name': 'Republican'},\n {'name': 'Democratic'}\n ]\n legislative_sessions = [\n {\n \"_scraped_name\": \"2011\",\n \"classification\": \"primary\",\n \"identifier\": \"2011\",\n \"name\": \"2011 Regular Session\"\n },\n {\n \"_scraped_name\": \"2012\",\n \"classification\": \"primary\",\n \"identifier\": \"2012\",\n \"name\": \"2012 Regular Session\"\n },\n {\n \"_scraped_name\": \"2013\",\n \"classification\": \"primary\",\n \"identifier\": \"2013\",\n \"name\": \"2013 Regular Session\"\n },\n {\n \"_scraped_name\": \"2014\",\n \"classification\": \"primary\",\n \"identifier\": \"2014\",\n \"name\": \"2014 Regular Session\"\n },\n {\n \"_scraped_name\": \"2015\",\n \"classification\": \"primary\",\n \"identifier\": \"2015\",\n \"name\": \"2015 Regular Session\"\n },\n {\n \"_scraped_name\": \"2016\",\n \"classification\": \"primary\",\n \"identifier\": \"2016\",\n \"name\": \"2016 Regular Session\"\n },\n {\n \"_scraped_name\": \"2016\",\n \"classification\": \"special\",\n \"identifier\": \"20161S\",\n \"name\": \"2016 First Special Session\"\n },\n {\n \"_scraped_name\": \"2017\",\n \"classification\": \"primary\",\n \"identifier\": \"2017\",\n \"name\": \"2017 Regular Session\",\n \"start_date\": \"2017-02-08\",\n \"end_date\": \"2017-04-09\",\n },\n {\n \"_scraped_name\": \"2017\",\n \"classification\": \"special\",\n \"identifier\": \"20171S\",\n \"name\": \"2017 First Special Session\",\n },\n {\n \"_scraped_name\": \"2017\",\n \"classification\": \"special\",\n \"identifier\": \"20172S\",\n \"name\": \"2017 Second Special Session\",\n }\n ]\n ignored_scraped_sessions = [\n \"2010\",\n \"2009\",\n \"2008\",\n \"2007\",\n \"2006\",\n \"2005\",\n \"2004\",\n \"2003\",\n \"2002\",\n \"2001\",\n \"2000\",\n \"1999\",\n \"1998\",\n \"1997\",\n \"1996\",\n \"1995\",\n \"1994\",\n \"1993\"\n ]\n\n def get_organizations(self):\n legislature_name = \"West Virginia Legislature\"\n lower_chamber_name = \"House\"\n lower_seats = 67\n lower_title = \"Delegate\"\n upper_chamber_name = \"Senate\"\n upper_seats = 17\n upper_title = \"Senator\"\n\n legislature = Organization(name=legislature_name,\n classification=\"legislature\")\n upper = Organization(upper_chamber_name, classification='upper',\n parent_id=legislature._id)\n lower = Organization(lower_chamber_name, classification='lower',\n parent_id=legislature._id)\n\n for n in range(1, upper_seats+1):\n upper.add_post(\n label=str(n), role=upper_title,\n division_id='{}/sldu:{}'.format(self.division_id, n))\n for n in range(1, lower_seats+1):\n lower.add_post(\n label=str(n), role=lower_title,\n division_id='{}/sldl:{}'.format(self.division_id, n))\n\n yield legislature\n yield upper\n yield lower\n\n def get_session_list(self):\n from openstates.utils import url_xpath\n return url_xpath('http://www.legis.state.wv.us/Bill_Status/Bill_Status.cfm',\n '//select[@name=\"year\"]/option/text()')\n", "path": "openstates/wv/__init__.py"}, {"content": "\nmetadata = {\n 'abbreviation': 'wv',\n 'capitol_timezone': 'America/New_York',\n 'name': 'West Virginia',\n 'legislature_name': 'West Virginia Legislature',\n 'legislature_url': 'http://www.legis.state.wv.us/',\n 'chambers': {\n 'upper': {'name': 'Senate', 'title': 'Senator'},\n 'lower': {'name': 'House', 'title': 'Delegate'},\n },\n 'terms': [\n {'name': '2011-2012',\n 'start_year': 2011, 'end_year': 2012,\n 'sessions': ['2011', '2012'],\n },\n {'name': '2013-2014',\n 'start_year': 2013, 'end_year': 2014,\n 'sessions': ['2013', '2014'],\n },\n {'name': '2015-2016',\n 'start_year': 2015, 'end_year': 2016,\n 'sessions': ['2015', '2016', '20161S'],\n },\n {'name': '2017-2018',\n 'start_year': 2017, 'end_year': 2018,\n 'sessions': ['2017', '20171S', '20172S'],\n },\n ],\n 'session_details': {\n '2011': {'display_name': '2011 Regular Session',\n 'type': 'primary',\n '_scraped_name': '2011'\n },\n '2012': {'display_name': '2012 Regular Session',\n 'type': 'primary',\n '_scraped_name': '2012'\n },\n '2013': {'display_name': '2013 Regular Session',\n 'type': 'primary',\n '_scraped_name': '2013'\n },\n '2014': {'display_name': '2014 Regular Session',\n 'type': 'primary',\n '_scraped_name': '2014'\n },\n '2015': {'display_name': '2015 Regular Session',\n 'type': 'primary',\n '_scraped_name': '2015'\n },\n '2016': {'display_name': '2016 Regular Session',\n 'type': 'primary',\n '_scraped_name': '2016'\n },\n '20161S': {'display_name': '2016 First Special Session',\n 'type': 'special',\n '_scraped_name': '2016',\n '_special_name': '1X'\n },\n '2017': {'display_name': '2017 Regular Session',\n 'type': 'primary',\n '_scraped_name': '2017'\n },\n '20171S': {'display_name': '2017 First Special Session',\n 'type': 'special',\n },\n '20172S': {'display_name': '2017 Second Special Session',\n 'type': 'special',\n },\n },\n 'feature_flags': ['subjects', 'influenceexplorer'],\n '_ignored_scraped_sessions': [\n '2010', '2009', '2008', '2007', '2006',\n '2005', '2004', '2003', '2002', '2001',\n '2000', '1999', '1998', '1997', '1996',\n '1995', '1994', '1993',\n ]\n}\n", "path": "billy_metadata/wv.py"}]}
3,168
459
gh_patches_debug_4910
rasdani/github-patches
git_diff
Flexget__Flexget-3470
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Console formatting causing crash ### Expected behaviour: Output the entry-list's entries ### Actual behaviour: Output crash traceback #### Config: N\A #### Log: <details> <summary>(click to expand)</summary> ``` $ flexget entry-list list manga Traceback (most recent call last): File "C:\Python37\Scripts\flexget-script.py", line 33, in <module> sys.exit(load_entry_point('FlexGet', 'console_scripts', 'flexget')()) File "~\GitHub\Flexget\flexget\__init__.py", line 44, in main manager.start() File "~\GitHub\Flexget\flexget\manager.py", line 383, in start self.handle_cli() File "~\GitHub\Flexget\flexget\manager.py", line 412, in handle_cli options.cli_command_callback(self, command_options) File "~\GitHub\Flexget\flexget\components\managed_lists\lists\entry_list\cli.py", line 30, in do_cli entry_list_list(options) File "~\GitHub\Flexget\flexget\components\managed_lists\lists\entry_list\cli.py", line 74, in entry_list_list console(table) File "~\GitHub\Flexget\flexget\terminal.py", line 35, in __call__ self.print(text, *args, **kwargs) File "~\GitHub\Flexget\flexget\terminal.py", line 39, in print _patchable_console(*args, **kwargs) File "~\GitHub\Flexget\flexget\terminal.py", line 236, in _patchable_console console._print(*args, **kwargs) File "~\GitHub\Flexget\flexget\terminal.py", line 43, in _print super().print(*args, **kwargs) File "~\GitHub\Flexget\lib\site-packages\rich\console.py", line 1634, in print extend(render(renderable, render_options)) File "~\GitHub\Flexget\lib\site-packages\rich\console.py", line 1272, in render for render_output in iter_render: File "~\GitHub\Flexget\flexget\terminal.py", line 160, in __rich_console__ yield from segments File "~\GitHub\Flexget\lib\site-packages\rich\table.py", line 479, in __rich_console__ console, options.update_width(max_width - extra_width) File "~\GitHub\Flexget\lib\site-packages\rich\table.py", line 520, in _calculate_column_widths self._measure_column(console, options, column) for column in columns File "~\GitHub\Flexget\lib\site-packages\rich\table.py", line 520, in <listcomp> self._measure_column(console, options, column) for column in columns File "~\GitHub\Flexget\lib\site-packages\rich\table.py", line 723, in _measure_column _min, _max = get_render_width(console, options, cell.renderable) File "~\GitHub\Flexget\lib\site-packages\rich\measure.py", line 109, in get get_console_width(console, options) File "~\GitHub\Flexget\lib\site-packages\rich\padding.py", line 132, in __rich_measure__ measure_min, measure_max = Measurement.get(console, options, self.renderable) File "~\GitHub\Flexget\lib\site-packages\rich\measure.py", line 100, in get renderable, markup=options.markup, highlight=False File "~\GitHub\Flexget\lib\site-packages\rich\console.py", line 1368, in render_str emoji_variant=self._emoji_variant, File "~\GitHub\Flexget\lib\site-packages\rich\markup.py", line 162, in render ) from None rich.errors.MarkupError: closing tag '[/yuri/ scanlations]' at position 92 doesn't match any open tag ``` </details> ### Additional information: - FlexGet version: 3.3.15 - Python version: 3.7 - Installation method: git - Using daemon (yes/no): both - OS and version: w10 </issue> <code> [start of flexget/terminal.py] 1 import contextlib 2 import os 3 import threading 4 from textwrap import wrap 5 from typing import Any, Iterator, Optional, TextIO, Union 6 7 import rich 8 import rich.box 9 import rich.console 10 import rich.rule 11 import rich.segment 12 import rich.table 13 import rich.text 14 15 from flexget.options import ArgumentParser 16 17 local_context = threading.local() 18 19 20 class _Console(rich.console.Console): 21 def __init__(self, *args, **kwargs): 22 if "PYCHARM_HOSTED" in os.environ: 23 kwargs.setdefault('color_system', 'truecolor') 24 super().__init__(*args, **kwargs) 25 26 def __call__(self, text: Any, *args, **kwargs) -> None: 27 """ 28 Print to console safely. Output is able to be captured by different streams in different contexts. 29 30 Any plugin wishing to output to the user's console should use this function instead of print so that 31 output can be redirected when FlexGet is invoked from another process. 32 33 Accepts arguments like the `rich.console.Console.print` function does. 34 """ 35 self.print(text, *args, **kwargs) 36 37 def print(self, *args, **kwargs) -> None: 38 # Also capture calls directly to console.print 39 _patchable_console(*args, **kwargs) 40 41 def _print(self, *args, **kwargs): 42 """The real parent print function, which can be called internally.""" 43 super().print(*args, **kwargs) 44 45 def rule( 46 self, 47 title: rich.text.TextType = "", 48 *, 49 characters: str = "─", 50 style: Union[str, rich.console.Style] = "rule.line", 51 align: rich.text.AlignMethod = "left", 52 # This is a custom FlexGet argument 53 indent: int = 3, 54 ) -> None: 55 rule = rich.rule.Rule(title, characters=characters, style=style, align=align) 56 if indent and title: 57 if not isinstance(rule.title, rich.text.Text): 58 rule.title = self.render_str(rule.title, style="rule.text") 59 text = rich.text.Text() 60 if rule.align == 'left': 61 text = text.append(rule.characters * indent + ' ', style=rule.style) 62 text.append(rule.title) 63 if rule.align == 'right': 64 text = text.append(' ' + rule.characters * indent, style=rule.style) 65 rule.title = text 66 self.print(rule) 67 68 69 # This is used to print (rich) text to the console, as well as expose all the other features of rich's console 70 # Unlike rich, can be called directly to print (for backwards compat purposes.) 71 console = _Console() 72 73 74 PORCELAIN_BOX: rich.box.Box = rich.box.Box( 75 """\ 76 77 | 78 79 | 80 81 82 | 83 84 """, 85 ascii=True, 86 ) 87 88 GITHUB_BOX: rich.box.Box = rich.box.Box( 89 """\ 90 91 | || 92 |-|| 93 | || 94 |-|| 95 |-|| 96 | || 97 98 """, 99 ascii=True, 100 ) 101 102 103 class TerminalTable(rich.table.Table): 104 """ 105 A data table suited for CLI output, created via its sent parameters. For example:: 106 107 header = ['Col1', 'Col2'] 108 table_data = [header] 109 for item in iterable: 110 table_data.append([item.attribute1, item.attribute2]) 111 table = TerminalTable('plain', table_data) 112 print table.output 113 114 Optional values are setting table title, and supplying wrap_columns list and 115 drop_column list. If table does not fit into terminal any columns listed in 116 wrap_columns will be tried to wrap and if resulting columns are below MIN_WIDTH(10) 117 columns listed in drop_column will be removed from output. 118 119 Example:: 120 121 header = ['Col1', 'Col2'] 122 table_data = [header] 123 for item in iterable: 124 table_data.append([item.attribute1, item.attribute2]) 125 table = TerminalTable('plain', table_data, 'Table title', wrap_columns=[1,2], 126 drop_columns=[4,2]) 127 print table.output 128 129 :param table_type: A string matching TABLE_TYPES keys. 130 """ 131 132 # Easy access for our plugins without importing rich 133 Column = rich.table.Column 134 135 # TODO: Add other new types 136 TABLE_TYPES = { 137 'plain': {'box': rich.box.ASCII}, 138 'porcelain': { 139 'box': PORCELAIN_BOX, 140 'show_edge': False, 141 'pad_edge': False, 142 'title': None, 143 'padding': 0, 144 }, 145 'single': {'box': rich.box.SQUARE}, 146 'double': {'box': rich.box.DOUBLE}, 147 'github': {'box': GITHUB_BOX}, 148 'heavy-head': {'box': rich.box.HEAVY_HEAD}, 149 } 150 151 def __init__(self, *args, table_type: str = None, **kwargs) -> None: 152 self.table_type = table_type 153 if table_type: 154 kwargs = {**kwargs, **self.TABLE_TYPES[table_type]} 155 super().__init__(*args, **kwargs) 156 157 def __rich_console__(self, console, options): 158 segments = super().__rich_console__(console, options) 159 if self.table_type not in ['porcelain', 'github']: 160 yield from segments 161 return 162 # Strips out blank lines from our custom types 163 lines = rich.segment.Segment.split_lines(segments) 164 for line in lines: 165 if any(seg.text.strip() for seg in line): 166 yield from line 167 yield rich.segment.Segment.line() 168 169 170 table_parser = ArgumentParser(add_help=False) 171 table_parser.add_argument( 172 '--table-type', 173 choices=list(TerminalTable.TABLE_TYPES), 174 default='heavy-head', 175 help='Select output table style', 176 ) 177 table_parser.add_argument( 178 '--porcelain', 179 dest='table_type', 180 action='store_const', 181 const='porcelain', 182 help='Make the output parseable. Similar to using `--table-type porcelain`', 183 ) 184 185 186 def word_wrap(text: str, max_length: int) -> str: 187 """A helper method designed to return a wrapped string. 188 189 :param text: Text to wrap 190 :param max_length: Maximum allowed string length 191 :return: Wrapped text or original text 192 """ 193 if len(text) >= max_length: 194 return '\n'.join(wrap(text, max_length)) 195 return text 196 197 198 def colorize(color: str, text: str) -> str: 199 """ 200 A simple override of Color.colorize which sets the default auto colors value to True, since it's the more common 201 use case. When output isn't TTY just return text 202 203 :param color: Color tag to use 204 :param text: Text to color 205 206 :return: Colored text or text 207 """ 208 return f'[{color}]{text}[/]' 209 210 211 def disable_colors(): 212 """ 213 Disables colors to the terminal. 214 """ 215 console.no_color = True 216 217 218 @contextlib.contextmanager 219 def capture_console(filelike: TextIO) -> Iterator: 220 old_output = get_console_output() 221 local_context.output = filelike 222 try: 223 yield 224 finally: 225 local_context.output = old_output 226 227 228 def get_console_output() -> Optional[TextIO]: 229 return getattr(local_context, 'output', None) 230 231 232 def _patchable_console(*args, **kwargs): 233 # Nobody will import this directly, so we can monkeypatch it for IPC calls 234 console.file = get_console_output() 235 try: 236 console._print(*args, **kwargs) 237 finally: 238 console.file = None 239 [end of flexget/terminal.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/flexget/terminal.py b/flexget/terminal.py --- a/flexget/terminal.py +++ b/flexget/terminal.py @@ -21,6 +21,7 @@ def __init__(self, *args, **kwargs): if "PYCHARM_HOSTED" in os.environ: kwargs.setdefault('color_system', 'truecolor') + kwargs.setdefault("markup", True) super().__init__(*args, **kwargs) def __call__(self, text: Any, *args, **kwargs) -> None:
{"golden_diff": "diff --git a/flexget/terminal.py b/flexget/terminal.py\n--- a/flexget/terminal.py\n+++ b/flexget/terminal.py\n@@ -21,6 +21,7 @@\n def __init__(self, *args, **kwargs):\n if \"PYCHARM_HOSTED\" in os.environ:\n kwargs.setdefault('color_system', 'truecolor')\n+ kwargs.setdefault(\"markup\", True)\n super().__init__(*args, **kwargs)\n \n def __call__(self, text: Any, *args, **kwargs) -> None:\n", "issue": "Console formatting causing crash\n### Expected behaviour:\r\nOutput the entry-list's entries\r\n\r\n### Actual behaviour:\r\nOutput crash traceback\r\n\r\n#### Config:\r\nN\\A\r\n \r\n#### Log:\r\n<details>\r\n <summary>(click to expand)</summary>\r\n\r\n```\r\n$ flexget entry-list list manga\r\nTraceback (most recent call last):\r\n File \"C:\\Python37\\Scripts\\flexget-script.py\", line 33, in <module>\r\n sys.exit(load_entry_point('FlexGet', 'console_scripts', 'flexget')())\r\n File \"~\\GitHub\\Flexget\\flexget\\__init__.py\", line 44, in main\r\n manager.start()\r\n File \"~\\GitHub\\Flexget\\flexget\\manager.py\", line 383, in start\r\n self.handle_cli()\r\n File \"~\\GitHub\\Flexget\\flexget\\manager.py\", line 412, in handle_cli\r\n options.cli_command_callback(self, command_options)\r\n File \"~\\GitHub\\Flexget\\flexget\\components\\managed_lists\\lists\\entry_list\\cli.py\", line 30, in do_cli\r\n entry_list_list(options)\r\n File \"~\\GitHub\\Flexget\\flexget\\components\\managed_lists\\lists\\entry_list\\cli.py\", line 74, in entry_list_list\r\n console(table)\r\n File \"~\\GitHub\\Flexget\\flexget\\terminal.py\", line 35, in __call__\r\n self.print(text, *args, **kwargs)\r\n File \"~\\GitHub\\Flexget\\flexget\\terminal.py\", line 39, in print\r\n _patchable_console(*args, **kwargs)\r\n File \"~\\GitHub\\Flexget\\flexget\\terminal.py\", line 236, in _patchable_console\r\n console._print(*args, **kwargs)\r\n File \"~\\GitHub\\Flexget\\flexget\\terminal.py\", line 43, in _print\r\n super().print(*args, **kwargs)\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\console.py\", line 1634, in print\r\n extend(render(renderable, render_options))\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\console.py\", line 1272, in render\r\n for render_output in iter_render:\r\n File \"~\\GitHub\\Flexget\\flexget\\terminal.py\", line 160, in __rich_console__\r\n yield from segments\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\table.py\", line 479, in __rich_console__\r\n console, options.update_width(max_width - extra_width)\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\table.py\", line 520, in _calculate_column_widths\r\n self._measure_column(console, options, column) for column in columns\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\table.py\", line 520, in <listcomp>\r\n self._measure_column(console, options, column) for column in columns\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\table.py\", line 723, in _measure_column\r\n _min, _max = get_render_width(console, options, cell.renderable)\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\measure.py\", line 109, in get\r\n get_console_width(console, options)\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\padding.py\", line 132, in __rich_measure__\r\n measure_min, measure_max = Measurement.get(console, options, self.renderable)\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\measure.py\", line 100, in get\r\n renderable, markup=options.markup, highlight=False\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\console.py\", line 1368, in render_str\r\n emoji_variant=self._emoji_variant,\r\n File \"~\\GitHub\\Flexget\\lib\\site-packages\\rich\\markup.py\", line 162, in render\r\n ) from None\r\nrich.errors.MarkupError: closing tag '[/yuri/ scanlations]' at position 92 doesn't match any open tag\r\n```\r\n</details>\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 3.3.15\r\n- Python version: 3.7\r\n- Installation method: git\r\n- Using daemon (yes/no): both\r\n- OS and version: w10\r\n\n", "before_files": [{"content": "import contextlib\nimport os\nimport threading\nfrom textwrap import wrap\nfrom typing import Any, Iterator, Optional, TextIO, Union\n\nimport rich\nimport rich.box\nimport rich.console\nimport rich.rule\nimport rich.segment\nimport rich.table\nimport rich.text\n\nfrom flexget.options import ArgumentParser\n\nlocal_context = threading.local()\n\n\nclass _Console(rich.console.Console):\n def __init__(self, *args, **kwargs):\n if \"PYCHARM_HOSTED\" in os.environ:\n kwargs.setdefault('color_system', 'truecolor')\n super().__init__(*args, **kwargs)\n\n def __call__(self, text: Any, *args, **kwargs) -> None:\n \"\"\"\n Print to console safely. Output is able to be captured by different streams in different contexts.\n\n Any plugin wishing to output to the user's console should use this function instead of print so that\n output can be redirected when FlexGet is invoked from another process.\n\n Accepts arguments like the `rich.console.Console.print` function does.\n \"\"\"\n self.print(text, *args, **kwargs)\n\n def print(self, *args, **kwargs) -> None:\n # Also capture calls directly to console.print\n _patchable_console(*args, **kwargs)\n\n def _print(self, *args, **kwargs):\n \"\"\"The real parent print function, which can be called internally.\"\"\"\n super().print(*args, **kwargs)\n\n def rule(\n self,\n title: rich.text.TextType = \"\",\n *,\n characters: str = \"\u2500\",\n style: Union[str, rich.console.Style] = \"rule.line\",\n align: rich.text.AlignMethod = \"left\",\n # This is a custom FlexGet argument\n indent: int = 3,\n ) -> None:\n rule = rich.rule.Rule(title, characters=characters, style=style, align=align)\n if indent and title:\n if not isinstance(rule.title, rich.text.Text):\n rule.title = self.render_str(rule.title, style=\"rule.text\")\n text = rich.text.Text()\n if rule.align == 'left':\n text = text.append(rule.characters * indent + ' ', style=rule.style)\n text.append(rule.title)\n if rule.align == 'right':\n text = text.append(' ' + rule.characters * indent, style=rule.style)\n rule.title = text\n self.print(rule)\n\n\n# This is used to print (rich) text to the console, as well as expose all the other features of rich's console\n# Unlike rich, can be called directly to print (for backwards compat purposes.)\nconsole = _Console()\n\n\nPORCELAIN_BOX: rich.box.Box = rich.box.Box(\n \"\"\"\\\n \n | \n \n | \n \n \n | \n \n\"\"\",\n ascii=True,\n)\n\nGITHUB_BOX: rich.box.Box = rich.box.Box(\n \"\"\"\\\n \n| ||\n|-||\n| ||\n|-||\n|-||\n| ||\n \n\"\"\",\n ascii=True,\n)\n\n\nclass TerminalTable(rich.table.Table):\n \"\"\"\n A data table suited for CLI output, created via its sent parameters. For example::\n\n header = ['Col1', 'Col2']\n table_data = [header]\n for item in iterable:\n table_data.append([item.attribute1, item.attribute2])\n table = TerminalTable('plain', table_data)\n print table.output\n\n Optional values are setting table title, and supplying wrap_columns list and\n drop_column list. If table does not fit into terminal any columns listed in\n wrap_columns will be tried to wrap and if resulting columns are below MIN_WIDTH(10)\n columns listed in drop_column will be removed from output.\n\n Example::\n\n header = ['Col1', 'Col2']\n table_data = [header]\n for item in iterable:\n table_data.append([item.attribute1, item.attribute2])\n table = TerminalTable('plain', table_data, 'Table title', wrap_columns=[1,2],\n drop_columns=[4,2])\n print table.output\n\n :param table_type: A string matching TABLE_TYPES keys.\n \"\"\"\n\n # Easy access for our plugins without importing rich\n Column = rich.table.Column\n\n # TODO: Add other new types\n TABLE_TYPES = {\n 'plain': {'box': rich.box.ASCII},\n 'porcelain': {\n 'box': PORCELAIN_BOX,\n 'show_edge': False,\n 'pad_edge': False,\n 'title': None,\n 'padding': 0,\n },\n 'single': {'box': rich.box.SQUARE},\n 'double': {'box': rich.box.DOUBLE},\n 'github': {'box': GITHUB_BOX},\n 'heavy-head': {'box': rich.box.HEAVY_HEAD},\n }\n\n def __init__(self, *args, table_type: str = None, **kwargs) -> None:\n self.table_type = table_type\n if table_type:\n kwargs = {**kwargs, **self.TABLE_TYPES[table_type]}\n super().__init__(*args, **kwargs)\n\n def __rich_console__(self, console, options):\n segments = super().__rich_console__(console, options)\n if self.table_type not in ['porcelain', 'github']:\n yield from segments\n return\n # Strips out blank lines from our custom types\n lines = rich.segment.Segment.split_lines(segments)\n for line in lines:\n if any(seg.text.strip() for seg in line):\n yield from line\n yield rich.segment.Segment.line()\n\n\ntable_parser = ArgumentParser(add_help=False)\ntable_parser.add_argument(\n '--table-type',\n choices=list(TerminalTable.TABLE_TYPES),\n default='heavy-head',\n help='Select output table style',\n)\ntable_parser.add_argument(\n '--porcelain',\n dest='table_type',\n action='store_const',\n const='porcelain',\n help='Make the output parseable. Similar to using `--table-type porcelain`',\n)\n\n\ndef word_wrap(text: str, max_length: int) -> str:\n \"\"\"A helper method designed to return a wrapped string.\n\n :param text: Text to wrap\n :param max_length: Maximum allowed string length\n :return: Wrapped text or original text\n \"\"\"\n if len(text) >= max_length:\n return '\\n'.join(wrap(text, max_length))\n return text\n\n\ndef colorize(color: str, text: str) -> str:\n \"\"\"\n A simple override of Color.colorize which sets the default auto colors value to True, since it's the more common\n use case. When output isn't TTY just return text\n\n :param color: Color tag to use\n :param text: Text to color\n\n :return: Colored text or text\n \"\"\"\n return f'[{color}]{text}[/]'\n\n\ndef disable_colors():\n \"\"\"\n Disables colors to the terminal.\n \"\"\"\n console.no_color = True\n\n\[email protected]\ndef capture_console(filelike: TextIO) -> Iterator:\n old_output = get_console_output()\n local_context.output = filelike\n try:\n yield\n finally:\n local_context.output = old_output\n\n\ndef get_console_output() -> Optional[TextIO]:\n return getattr(local_context, 'output', None)\n\n\ndef _patchable_console(*args, **kwargs):\n # Nobody will import this directly, so we can monkeypatch it for IPC calls\n console.file = get_console_output()\n try:\n console._print(*args, **kwargs)\n finally:\n console.file = None\n", "path": "flexget/terminal.py"}]}
3,766
125
gh_patches_debug_21502
rasdani/github-patches
git_diff
NVIDIA-Merlin__NVTabular-1380
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [FEA] Add DistributedClient API and use global client objects in Workflow and Dataset **Is your feature request related to a problem? Please describe.** In order to perform distributed ETL and/or spill data between device/host/disk, NVTabular currently requires the user to provide a Dask-Distributed (or Dask-CUDA) cluster. Furthermore, the `nvt.Workflow` (and sometimes `nvt.Dataset`) need to be defined with an explicit `client=` argument in order for the distributed cluster to be used. Although I feel strongly that it would be dangerous and messy for NVTabular to automatically spin up a distributed cluster by default, I do suspect that the user experience could be much better. **Describe the solution you'd like** To improve the user experience of distributed ETL with NVTabular, I propose: 1. Simple `LocalCluster`/`LocalCUDACluster`-wrapper APIs be added to NVTabular so that users can enable multi-GPU processing and/or spilling without interacting with distributed/dask_cuda. I am not yet sure of the ideal API to expose in NVTabular, but perhaps something like `DistributedClient` (wrapping `distributed.Client`). This API could be used to automatically generate a local cluster (if the address of an existing cluster is not provided), and we could add a `cpu=False` kwarg to toggle between gpu and cpu mode. 2. [**DONE** in #1318] Automatically detect and **use** an existing Dask client object. NVTabular already [checks for a global dask client](https://github.com/NVIDIA-Merlin/NVTabular/blob/34d01d7e6090d6029ac40010ed79e1558f18759c/nvtabular/workflow/workflow.py#L88) in both `Workflow ` and `Dataset`. However, these checks result in a UserWarning whenever a global client is detected. Instead of warning the user, I propose that NVTabular automatically **use** the client object when it is detected (which is actually the defult behavior in `dask.dataframe` and `dask.array` anyway). **Describe alternatives you've considered** The alternative is to continue requiring the user to deploy Dask clusters/clients with `distributed`/`dask_cuda`, and require them to include an explicit `client` argument when defining an `Workflow` objects. </issue> <code> [start of nvtabular/__init__.py] 1 # 2 # Copyright (c) 2022, NVIDIA CORPORATION. 3 # 4 # Licensed under the Apache License, Version 2.0 (the "License"); 5 # you may not use this file except in compliance with the License. 6 # You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 15 # 16 import warnings 17 18 from merlin.graph import ColumnSelector 19 from merlin.schema import ColumnSchema, Schema 20 21 from . import graph, io, workflow # noqa 22 from ._version import get_versions 23 24 # suppress some warnings with cudf warning about column ordering with dlpack 25 # and numba warning about deprecated environment variables 26 warnings.filterwarnings("ignore", module="cudf.io.dlpack") 27 warnings.filterwarnings("ignore", module="numba.cuda.envvars") 28 29 30 WorkflowNode = workflow.WorkflowNode 31 Workflow = workflow.Workflow 32 Dataset = io.dataset.Dataset 33 34 35 # Provides an alias of ColumnSelector so that old usages of ColumnGroup to 36 # select columns at the beginning of an operator chain don't break 37 def ColumnGroup(columns): 38 warnings.warn("ColumnGroup is deprecated, use ColumnSelector instead", DeprecationWarning) 39 return ColumnSelector(columns) 40 41 42 __all__ = [ 43 "Workflow", 44 "Dataset", 45 "WorkflowNode", 46 "ColumnGroup", 47 "ColumnSelector", 48 "ColumnSchema", 49 "Schema", 50 ] 51 52 # cudf warns about column ordering with dlpack methods, ignore it 53 warnings.filterwarnings("ignore", module="cudf.io.dlpack") 54 55 56 __version__ = get_versions()["version"] 57 del get_versions 58 [end of nvtabular/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/nvtabular/__init__.py b/nvtabular/__init__.py --- a/nvtabular/__init__.py +++ b/nvtabular/__init__.py @@ -18,7 +18,7 @@ from merlin.graph import ColumnSelector from merlin.schema import ColumnSchema, Schema -from . import graph, io, workflow # noqa +from . import graph, io, utils, workflow # noqa from ._version import get_versions # suppress some warnings with cudf warning about column ordering with dlpack @@ -30,6 +30,8 @@ WorkflowNode = workflow.WorkflowNode Workflow = workflow.Workflow Dataset = io.dataset.Dataset +Distributed = utils.Distributed +Serial = utils.Serial # Provides an alias of ColumnSelector so that old usages of ColumnGroup to @@ -47,6 +49,8 @@ "ColumnSelector", "ColumnSchema", "Schema", + "Distributed", + "Serial", ] # cudf warns about column ordering with dlpack methods, ignore it
{"golden_diff": "diff --git a/nvtabular/__init__.py b/nvtabular/__init__.py\n--- a/nvtabular/__init__.py\n+++ b/nvtabular/__init__.py\n@@ -18,7 +18,7 @@\n from merlin.graph import ColumnSelector\n from merlin.schema import ColumnSchema, Schema\n \n-from . import graph, io, workflow # noqa\n+from . import graph, io, utils, workflow # noqa\n from ._version import get_versions\n \n # suppress some warnings with cudf warning about column ordering with dlpack\n@@ -30,6 +30,8 @@\n WorkflowNode = workflow.WorkflowNode\n Workflow = workflow.Workflow\n Dataset = io.dataset.Dataset\n+Distributed = utils.Distributed\n+Serial = utils.Serial\n \n \n # Provides an alias of ColumnSelector so that old usages of ColumnGroup to\n@@ -47,6 +49,8 @@\n \"ColumnSelector\",\n \"ColumnSchema\",\n \"Schema\",\n+ \"Distributed\",\n+ \"Serial\",\n ]\n \n # cudf warns about column ordering with dlpack methods, ignore it\n", "issue": "[FEA] Add DistributedClient API and use global client objects in Workflow and Dataset\n**Is your feature request related to a problem? Please describe.**\r\nIn order to perform distributed ETL and/or spill data between device/host/disk, NVTabular currently requires the user to provide a Dask-Distributed (or Dask-CUDA) cluster. Furthermore, the `nvt.Workflow` (and sometimes `nvt.Dataset`) need to be defined with an explicit `client=` argument in order for the distributed cluster to be used. Although I feel strongly that it would be dangerous and messy for NVTabular to automatically spin up a distributed cluster by default, I do suspect that the user experience could be much better. \r\n\r\n**Describe the solution you'd like**\r\nTo improve the user experience of distributed ETL with NVTabular, I propose:\r\n\r\n1. Simple `LocalCluster`/`LocalCUDACluster`-wrapper APIs be added to NVTabular so that users can enable multi-GPU processing and/or spilling without interacting with distributed/dask_cuda. I am not yet sure of the ideal API to expose in NVTabular, but perhaps something like `DistributedClient` (wrapping `distributed.Client`). This API could be used to automatically generate a local cluster (if the address of an existing cluster is not provided), and we could add a `cpu=False` kwarg to toggle between gpu and cpu mode.\r\n2. [**DONE** in #1318] Automatically detect and **use** an existing Dask client object. NVTabular already [checks for a global dask client](https://github.com/NVIDIA-Merlin/NVTabular/blob/34d01d7e6090d6029ac40010ed79e1558f18759c/nvtabular/workflow/workflow.py#L88) in both `Workflow ` and `Dataset`. However, these checks result in a UserWarning whenever a global client is detected. Instead of warning the user, I propose that NVTabular automatically **use** the client object when it is detected (which is actually the defult behavior in `dask.dataframe` and `dask.array` anyway).\r\n\r\n**Describe alternatives you've considered**\r\nThe alternative is to continue requiring the user to deploy Dask clusters/clients with `distributed`/`dask_cuda`, and require them to include an explicit `client` argument when defining an `Workflow` objects.\r\n\n", "before_files": [{"content": "#\n# Copyright (c) 2022, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport warnings\n\nfrom merlin.graph import ColumnSelector\nfrom merlin.schema import ColumnSchema, Schema\n\nfrom . import graph, io, workflow # noqa\nfrom ._version import get_versions\n\n# suppress some warnings with cudf warning about column ordering with dlpack\n# and numba warning about deprecated environment variables\nwarnings.filterwarnings(\"ignore\", module=\"cudf.io.dlpack\")\nwarnings.filterwarnings(\"ignore\", module=\"numba.cuda.envvars\")\n\n\nWorkflowNode = workflow.WorkflowNode\nWorkflow = workflow.Workflow\nDataset = io.dataset.Dataset\n\n\n# Provides an alias of ColumnSelector so that old usages of ColumnGroup to\n# select columns at the beginning of an operator chain don't break\ndef ColumnGroup(columns):\n warnings.warn(\"ColumnGroup is deprecated, use ColumnSelector instead\", DeprecationWarning)\n return ColumnSelector(columns)\n\n\n__all__ = [\n \"Workflow\",\n \"Dataset\",\n \"WorkflowNode\",\n \"ColumnGroup\",\n \"ColumnSelector\",\n \"ColumnSchema\",\n \"Schema\",\n]\n\n# cudf warns about column ordering with dlpack methods, ignore it\nwarnings.filterwarnings(\"ignore\", module=\"cudf.io.dlpack\")\n\n\n__version__ = get_versions()[\"version\"]\ndel get_versions\n", "path": "nvtabular/__init__.py"}]}
1,580
241
gh_patches_debug_10839
rasdani/github-patches
git_diff
pretix__pretix-3037
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Line breaks in location field of ICS file needs to be converted ### Problem and impact When I insert an address with multiple lines in the event location field, the linebreak `\n` in the downloaded ICS file gets ignored by the calendar apps and the address is written without any space. In most apps the location field is only one line. If I insert the following address in the event location: ``` Party Location Partystreet 4 12345 Party ``` the address in Outlook or Thunderbird looks like this: ``` Party LocationPartystreet 412345 Party ``` So the apps can't find this address in a navigation app. E.g. Google replaces `\n` with `\, ` in their ICS files. ### Expected behaviour I insert the following address in the event location: ``` Party Location Partystreet 4 12345 Party ``` so I expect the address in Outlook or Thunderbird of the ICS file looks like this: ``` Party Location, Partystreet 4, 12345 Party ``` ### Steps to reproduce 1. Insert event location with multiple lines 2. Download ICS file of event 3. Open ICS file in calendar app (e.g. Outlook or Thunderbird) ### Screenshots _No response_ ### Link _No response_ ### Browser (software, desktop or mobile?) and version _No response_ ### Operating system, dependency versions _No response_ ### Version 4.15.0 </issue> <code> [start of src/pretix/presale/ical.py] 1 # 2 # This file is part of pretix (Community Edition). 3 # 4 # Copyright (C) 2014-2020 Raphael Michel and contributors 5 # Copyright (C) 2020-2021 rami.io GmbH and contributors 6 # 7 # This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General 8 # Public License as published by the Free Software Foundation in version 3 of the License. 9 # 10 # ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are 11 # applicable granting you additional permissions and placing additional restrictions on your usage of this software. 12 # Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive 13 # this file, see <https://pretix.eu/about/en/license>. 14 # 15 # This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied 16 # warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more 17 # details. 18 # 19 # You should have received a copy of the GNU Affero General Public License along with this program. If not, see 20 # <https://www.gnu.org/licenses/>. 21 # 22 import datetime 23 from urllib.parse import urlparse 24 25 import pytz 26 import vobject 27 from django.conf import settings 28 from django.utils.formats import date_format 29 from django.utils.translation import gettext as _ 30 31 from pretix.base.email import get_email_context 32 from pretix.base.models import Event 33 from pretix.helpers.format import format_map 34 from pretix.multidomain.urlreverse import build_absolute_uri 35 36 37 def get_public_ical(events): 38 """ 39 Return an ical feed for a sequence of events or subevents. The calendar files will only include public 40 information. 41 """ 42 cal = vobject.iCalendar() 43 cal.add('prodid').value = '-//pretix//{}//'.format(settings.PRETIX_INSTANCE_NAME.replace(" ", "_")) 44 creation_time = datetime.datetime.now(pytz.utc) 45 46 for ev in events: 47 event = ev if isinstance(ev, Event) else ev.event 48 tz = pytz.timezone(event.settings.timezone) 49 if isinstance(ev, Event): 50 url = build_absolute_uri(event, 'presale:event.index') 51 else: 52 url = build_absolute_uri(event, 'presale:event.index', { 53 'subevent': ev.pk 54 }) 55 56 vevent = cal.add('vevent') 57 vevent.add('summary').value = str(ev.name) 58 vevent.add('dtstamp').value = creation_time 59 if ev.location: 60 vevent.add('location').value = str(ev.location) 61 vevent.add('uid').value = 'pretix-{}-{}-{}@{}'.format( 62 event.organizer.slug, event.slug, 63 ev.pk if not isinstance(ev, Event) else '0', 64 urlparse(url).netloc 65 ) 66 67 if event.settings.show_times: 68 vevent.add('dtstart').value = ev.date_from.astimezone(tz) 69 else: 70 vevent.add('dtstart').value = ev.date_from.astimezone(tz).date() 71 72 if event.settings.show_date_to and ev.date_to: 73 if event.settings.show_times: 74 vevent.add('dtend').value = ev.date_to.astimezone(tz) 75 else: 76 # with full-day events date_to in pretix is included (e.g. last day) 77 # whereas dtend in vcalendar is non-inclusive => add one day for export 78 vevent.add('dtend').value = ev.date_to.astimezone(tz).date() + datetime.timedelta(days=1) 79 80 descr = [] 81 descr.append(_('Tickets: {url}').format(url=url)) 82 83 if ev.date_admission: 84 descr.append(str(_('Admission: {datetime}')).format( 85 datetime=date_format(ev.date_admission.astimezone(tz), 'SHORT_DATETIME_FORMAT') 86 )) 87 88 descr.append(_('Organizer: {organizer}').format(organizer=event.organizer.name)) 89 90 vevent.add('description').value = '\n'.join(descr) 91 return cal 92 93 94 def get_private_icals(event, positions): 95 """ 96 Return a list of ical objects based on a sequence of positions. 97 98 Unlike get_public_ical, this will 99 100 - Generate multiple ical files instead of one (but with deduplication applied) 101 - Respect the mail_attach_ical_description setting 102 103 It is private in the sense that mail_attach_ical_description may contain content not suited for 104 public display. 105 106 We however intentionally do not allow using placeholders based on the order and position 107 specifically. This is for two reasons: 108 109 - In reality, many people will add their invite to their calendar which is shared with a larger 110 team. People are probably not aware that they're sharing sensitive information such as their 111 secret ticket link with everyone they share their calendar with. 112 113 - It would be pretty hard to implement it in a way that doesn't require us to use distinct 114 settings fields for emails to customers and to attendees, which feels like an overcomplication. 115 """ 116 tz = pytz.timezone(event.settings.timezone) 117 118 creation_time = datetime.datetime.now(pytz.utc) 119 calobjects = [] 120 121 evs = set(p.subevent or event for p in positions) 122 for ev in evs: 123 if isinstance(ev, Event): 124 url = build_absolute_uri(event, 'presale:event.index') 125 else: 126 url = build_absolute_uri(event, 'presale:event.index', { 127 'subevent': ev.pk 128 }) 129 130 if event.settings.mail_attach_ical_description: 131 ctx = get_email_context(event=event, event_or_subevent=ev) 132 description = format_map(str(event.settings.mail_attach_ical_description), ctx) 133 else: 134 # Default description 135 descr = [] 136 descr.append(_('Tickets: {url}').format(url=url)) 137 if ev.date_admission: 138 descr.append(str(_('Admission: {datetime}')).format( 139 datetime=date_format(ev.date_admission.astimezone(tz), 'SHORT_DATETIME_FORMAT') 140 )) 141 142 descr.append(_('Organizer: {organizer}').format(organizer=event.organizer.name)) 143 description = '\n'.join(descr) 144 145 cal = vobject.iCalendar() 146 cal.add('prodid').value = '-//pretix//{}//'.format(settings.PRETIX_INSTANCE_NAME.replace(" ", "_")) 147 148 vevent = cal.add('vevent') 149 vevent.add('summary').value = str(ev.name) 150 vevent.add('description').value = description 151 vevent.add('dtstamp').value = creation_time 152 if ev.location: 153 vevent.add('location').value = str(ev.location) 154 155 vevent.add('uid').value = 'pretix-{}-{}-{}@{}'.format( 156 event.organizer.slug, 157 event.slug, 158 ev.pk if not isinstance(ev, Event) else '0', 159 urlparse(url).netloc 160 ) 161 162 if event.settings.show_times: 163 vevent.add('dtstart').value = ev.date_from.astimezone(tz) 164 else: 165 vevent.add('dtstart').value = ev.date_from.astimezone(tz).date() 166 167 if event.settings.show_date_to and ev.date_to: 168 if event.settings.show_times: 169 vevent.add('dtend').value = ev.date_to.astimezone(tz) 170 else: 171 # with full-day events date_to in pretix is included (e.g. last day) 172 # whereas dtend in vcalendar is non-inclusive => add one day for export 173 vevent.add('dtend').value = ev.date_to.astimezone(tz).date() + datetime.timedelta(days=1) 174 175 calobjects.append(cal) 176 return calobjects 177 [end of src/pretix/presale/ical.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/src/pretix/presale/ical.py b/src/pretix/presale/ical.py --- a/src/pretix/presale/ical.py +++ b/src/pretix/presale/ical.py @@ -57,7 +57,7 @@ vevent.add('summary').value = str(ev.name) vevent.add('dtstamp').value = creation_time if ev.location: - vevent.add('location').value = str(ev.location) + vevent.add('location').value = ", ".join(l.strip() for l in str(ev.location).splitlines() if l.strip()) vevent.add('uid').value = 'pretix-{}-{}-{}@{}'.format( event.organizer.slug, event.slug, ev.pk if not isinstance(ev, Event) else '0',
{"golden_diff": "diff --git a/src/pretix/presale/ical.py b/src/pretix/presale/ical.py\n--- a/src/pretix/presale/ical.py\n+++ b/src/pretix/presale/ical.py\n@@ -57,7 +57,7 @@\n vevent.add('summary').value = str(ev.name)\n vevent.add('dtstamp').value = creation_time\n if ev.location:\n- vevent.add('location').value = str(ev.location)\n+ vevent.add('location').value = \", \".join(l.strip() for l in str(ev.location).splitlines() if l.strip())\n vevent.add('uid').value = 'pretix-{}-{}-{}@{}'.format(\n event.organizer.slug, event.slug,\n ev.pk if not isinstance(ev, Event) else '0',\n", "issue": "Line breaks in location field of ICS file needs to be converted\n### Problem and impact\n\nWhen I insert an address with multiple lines in the event location field, the linebreak `\\n` in the downloaded ICS file gets ignored by the calendar apps and the address is written without any space.\r\nIn most apps the location field is only one line.\r\n\r\nIf I insert the following address in the event location:\r\n```\r\nParty Location\r\nPartystreet 4\r\n12345 Party\r\n```\r\nthe address in Outlook or Thunderbird looks like this:\r\n```\r\nParty LocationPartystreet 412345 Party\r\n```\r\nSo the apps can't find this address in a navigation app.\r\n\r\nE.g. Google replaces `\\n` with `\\, ` in their ICS files.\n\n### Expected behaviour\n\nI insert the following address in the event location:\r\n```\r\nParty Location\r\nPartystreet 4\r\n12345 Party\r\n```\r\nso I expect the address in Outlook or Thunderbird of the ICS file looks like this:\r\n```\r\nParty Location, Partystreet 4, 12345 Party\r\n```\n\n### Steps to reproduce\n\n1. Insert event location with multiple lines\r\n2. Download ICS file of event\r\n3. Open ICS file in calendar app (e.g. Outlook or Thunderbird)\n\n### Screenshots\n\n_No response_\n\n### Link\n\n_No response_\n\n### Browser (software, desktop or mobile?) and version\n\n_No response_\n\n### Operating system, dependency versions\n\n_No response_\n\n### Version\n\n4.15.0\n", "before_files": [{"content": "#\n# This file is part of pretix (Community Edition).\n#\n# Copyright (C) 2014-2020 Raphael Michel and contributors\n# Copyright (C) 2020-2021 rami.io GmbH and contributors\n#\n# This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General\n# Public License as published by the Free Software Foundation in version 3 of the License.\n#\n# ADDITIONAL TERMS APPLY: Pursuant to Section 7 of the GNU Affero General Public License, additional terms are\n# applicable granting you additional permissions and placing additional restrictions on your usage of this software.\n# Please refer to the pretix LICENSE file to obtain the full terms applicable to this work. If you did not receive\n# this file, see <https://pretix.eu/about/en/license>.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied\n# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more\n# details.\n#\n# You should have received a copy of the GNU Affero General Public License along with this program. If not, see\n# <https://www.gnu.org/licenses/>.\n#\nimport datetime\nfrom urllib.parse import urlparse\n\nimport pytz\nimport vobject\nfrom django.conf import settings\nfrom django.utils.formats import date_format\nfrom django.utils.translation import gettext as _\n\nfrom pretix.base.email import get_email_context\nfrom pretix.base.models import Event\nfrom pretix.helpers.format import format_map\nfrom pretix.multidomain.urlreverse import build_absolute_uri\n\n\ndef get_public_ical(events):\n \"\"\"\n Return an ical feed for a sequence of events or subevents. The calendar files will only include public\n information.\n \"\"\"\n cal = vobject.iCalendar()\n cal.add('prodid').value = '-//pretix//{}//'.format(settings.PRETIX_INSTANCE_NAME.replace(\" \", \"_\"))\n creation_time = datetime.datetime.now(pytz.utc)\n\n for ev in events:\n event = ev if isinstance(ev, Event) else ev.event\n tz = pytz.timezone(event.settings.timezone)\n if isinstance(ev, Event):\n url = build_absolute_uri(event, 'presale:event.index')\n else:\n url = build_absolute_uri(event, 'presale:event.index', {\n 'subevent': ev.pk\n })\n\n vevent = cal.add('vevent')\n vevent.add('summary').value = str(ev.name)\n vevent.add('dtstamp').value = creation_time\n if ev.location:\n vevent.add('location').value = str(ev.location)\n vevent.add('uid').value = 'pretix-{}-{}-{}@{}'.format(\n event.organizer.slug, event.slug,\n ev.pk if not isinstance(ev, Event) else '0',\n urlparse(url).netloc\n )\n\n if event.settings.show_times:\n vevent.add('dtstart').value = ev.date_from.astimezone(tz)\n else:\n vevent.add('dtstart').value = ev.date_from.astimezone(tz).date()\n\n if event.settings.show_date_to and ev.date_to:\n if event.settings.show_times:\n vevent.add('dtend').value = ev.date_to.astimezone(tz)\n else:\n # with full-day events date_to in pretix is included (e.g. last day)\n # whereas dtend in vcalendar is non-inclusive => add one day for export\n vevent.add('dtend').value = ev.date_to.astimezone(tz).date() + datetime.timedelta(days=1)\n\n descr = []\n descr.append(_('Tickets: {url}').format(url=url))\n\n if ev.date_admission:\n descr.append(str(_('Admission: {datetime}')).format(\n datetime=date_format(ev.date_admission.astimezone(tz), 'SHORT_DATETIME_FORMAT')\n ))\n\n descr.append(_('Organizer: {organizer}').format(organizer=event.organizer.name))\n\n vevent.add('description').value = '\\n'.join(descr)\n return cal\n\n\ndef get_private_icals(event, positions):\n \"\"\"\n Return a list of ical objects based on a sequence of positions.\n\n Unlike get_public_ical, this will\n\n - Generate multiple ical files instead of one (but with deduplication applied)\n - Respect the mail_attach_ical_description setting\n\n It is private in the sense that mail_attach_ical_description may contain content not suited for\n public display.\n\n We however intentionally do not allow using placeholders based on the order and position\n specifically. This is for two reasons:\n\n - In reality, many people will add their invite to their calendar which is shared with a larger\n team. People are probably not aware that they're sharing sensitive information such as their\n secret ticket link with everyone they share their calendar with.\n\n - It would be pretty hard to implement it in a way that doesn't require us to use distinct\n settings fields for emails to customers and to attendees, which feels like an overcomplication.\n \"\"\"\n tz = pytz.timezone(event.settings.timezone)\n\n creation_time = datetime.datetime.now(pytz.utc)\n calobjects = []\n\n evs = set(p.subevent or event for p in positions)\n for ev in evs:\n if isinstance(ev, Event):\n url = build_absolute_uri(event, 'presale:event.index')\n else:\n url = build_absolute_uri(event, 'presale:event.index', {\n 'subevent': ev.pk\n })\n\n if event.settings.mail_attach_ical_description:\n ctx = get_email_context(event=event, event_or_subevent=ev)\n description = format_map(str(event.settings.mail_attach_ical_description), ctx)\n else:\n # Default description\n descr = []\n descr.append(_('Tickets: {url}').format(url=url))\n if ev.date_admission:\n descr.append(str(_('Admission: {datetime}')).format(\n datetime=date_format(ev.date_admission.astimezone(tz), 'SHORT_DATETIME_FORMAT')\n ))\n\n descr.append(_('Organizer: {organizer}').format(organizer=event.organizer.name))\n description = '\\n'.join(descr)\n\n cal = vobject.iCalendar()\n cal.add('prodid').value = '-//pretix//{}//'.format(settings.PRETIX_INSTANCE_NAME.replace(\" \", \"_\"))\n\n vevent = cal.add('vevent')\n vevent.add('summary').value = str(ev.name)\n vevent.add('description').value = description\n vevent.add('dtstamp').value = creation_time\n if ev.location:\n vevent.add('location').value = str(ev.location)\n\n vevent.add('uid').value = 'pretix-{}-{}-{}@{}'.format(\n event.organizer.slug,\n event.slug,\n ev.pk if not isinstance(ev, Event) else '0',\n urlparse(url).netloc\n )\n\n if event.settings.show_times:\n vevent.add('dtstart').value = ev.date_from.astimezone(tz)\n else:\n vevent.add('dtstart').value = ev.date_from.astimezone(tz).date()\n\n if event.settings.show_date_to and ev.date_to:\n if event.settings.show_times:\n vevent.add('dtend').value = ev.date_to.astimezone(tz)\n else:\n # with full-day events date_to in pretix is included (e.g. last day)\n # whereas dtend in vcalendar is non-inclusive => add one day for export\n vevent.add('dtend').value = ev.date_to.astimezone(tz).date() + datetime.timedelta(days=1)\n\n calobjects.append(cal)\n return calobjects\n", "path": "src/pretix/presale/ical.py"}]}
2,977
184
gh_patches_debug_26077
rasdani/github-patches
git_diff
cupy__cupy-4203
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [Bug, Enhancement] array_equal returns False on identical arrays with NaN values * Conditions (you can just paste the output of `python -c 'import cupy; cupy.show_config()'`) - CuPy Version : 8.0.0 - OS/Platform : Pop OS - CUDA Version : 10000 - cuDNN Version : 7605 - NCCL Version : 2604 * Code to reproduce ```python import cupy as cp a = cp.array([0, 1, cp.nan]) print(cp.array_equal(a, a)) # False ``` This relates exactly with https://github.com/numpy/numpy/issues/9229 This could be resolved by following the same procedure, resolved by https://github.com/numpy/numpy/pull/16128, which would essentially add an `equal_nan` kwarg to let user decide if NaN values should be considered equal or not, which is consistent with: https://github.com/cupy/cupy/blob/0dc6bbafb0cf75bb873531cbf69c6995a72c61a3/cupy/_logic/comparison.py#L60 https://github.com/cupy/cupy/blob/0dc6bbafb0cf75bb873531cbf69c6995a72c61a3/cupy/_logic/comparison.py#L89 * Error messages, stack traces, or logs `<null>` </issue> <code> [start of cupy/_logic/comparison.py] 1 import numpy 2 3 import cupy 4 from cupy import core 5 6 7 _is_close = core.create_ufunc( 8 'cupy_is_close', 9 ('eeee?->?', 'ffff?->?', 'dddd?->?'), 10 ''' 11 bool equal_nan = in4; 12 if (isfinite(in0) && isfinite(in1)) { 13 out0 = fabs(in0 - in1) <= in3 + in2 * fabs(in1); 14 } else if (equal_nan) { 15 out0 = (in0 == in1) || (isnan(in0) && isnan(in1)); 16 } else { 17 out0 = (in0 == in1); 18 } 19 ''' 20 ) 21 22 # Note that in cupy/core/include/cupy/complex.cuh, we already got isfinite and 23 # isnan working for complex numbers, so just replace fabs above by abs (from 24 # thrust) and we are ready to go 25 _is_close_complex = core.create_ufunc( 26 'cupy_is_close_complex', 27 ('FFff?->?', 'DDdd?->?'), 28 ''' 29 bool equal_nan = in4; 30 if (isfinite(in0) && isfinite(in1)) { 31 out0 = abs(in0 - in1) <= in3 + in2 * abs(in1); 32 } else if (equal_nan) { 33 out0 = (in0 == in1) || (isnan(in0) && isnan(in1)); 34 } else { 35 out0 = (in0 == in1); 36 } 37 ''' 38 ) 39 40 41 def array_equal(a1, a2): 42 """Returns ``True`` if two arrays are element-wise exactly equal. 43 44 Args: 45 a1 (cupy.ndarray): Input array to compare. 46 a2 (cupy.ndarray): Input array to compare. 47 48 Returns: 49 cupy.ndarray: A boolean 0-dim array. 50 If its value is ``True``, two arrays are element-wise equal. 51 52 .. seealso:: :func:`numpy.array_equal` 53 54 """ 55 if a1.shape != a2.shape: 56 return cupy.array(False) 57 return (a1 == a2).all() 58 59 60 def allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False): 61 """Returns True if two arrays are element-wise equal within a tolerance. 62 63 Two values in ``a`` and ``b`` are considiered equal when the following 64 equation is satisfied. 65 66 .. math:: 67 68 |a - b| \\le \\mathrm{atol} + \\mathrm{rtol} |b| 69 70 Args: 71 a (cupy.ndarray): Input array to compare. 72 b (cupy.ndarray): Input array to compare. 73 rtol (float): The relative tolerance. 74 atol (float): The absolute tolerance. 75 equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal 76 to NaN's in ``b``. 77 78 Returns: 79 cupy.ndarray: A boolean 0-dim array. 80 If its value is ``True``, two arrays are element-wise equal within 81 a tolerance. 82 83 .. seealso:: :func:`numpy.allclose` 84 85 """ 86 return isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan).all() 87 88 89 def isclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False): 90 """Returns a boolean array where two arrays are equal within a tolerance. 91 92 Two values in ``a`` and ``b`` are considiered equal when the following 93 equation is satisfied. 94 95 .. math:: 96 97 |a - b| \\le \\mathrm{atol} + \\mathrm{rtol} |b| 98 99 Args: 100 a (cupy.ndarray): Input array to compare. 101 b (cupy.ndarray): Input array to compare. 102 rtol (float): The relative tolerance. 103 atol (float): The absolute tolerance. 104 equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal 105 to NaN's in ``b``. 106 107 Returns: 108 cupy.ndarray: A boolean array storing where ``a`` and ``b`` are equal. 109 110 .. seealso:: :func:`numpy.isclose` 111 112 """ 113 a = cupy.asanyarray(a) 114 b = cupy.asanyarray(b) 115 if (a.dtype in [numpy.complex64, numpy.complex128]) or \ 116 (b.dtype in [numpy.complex64, numpy.complex128]): 117 return _is_close_complex(a, b, rtol, atol, equal_nan) 118 else: 119 return _is_close(a, b, rtol, atol, equal_nan) 120 121 122 # TODO(okuta): Implement array_equal 123 124 125 # TODO(okuta): Implement array_equiv 126 127 128 greater = core.greater 129 130 131 greater_equal = core.greater_equal 132 133 134 less = core.less 135 136 137 less_equal = core.less_equal 138 139 140 equal = core.equal 141 142 143 not_equal = core.not_equal 144 [end of cupy/_logic/comparison.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/cupy/_logic/comparison.py b/cupy/_logic/comparison.py --- a/cupy/_logic/comparison.py +++ b/cupy/_logic/comparison.py @@ -2,6 +2,7 @@ import cupy from cupy import core +from cupy._logic import content _is_close = core.create_ufunc( @@ -38,12 +39,14 @@ ) -def array_equal(a1, a2): +def array_equal(a1, a2, equal_nan=False): """Returns ``True`` if two arrays are element-wise exactly equal. Args: a1 (cupy.ndarray): Input array to compare. a2 (cupy.ndarray): Input array to compare. + equal_nan (bool): If ``True``, NaN's in ``a1`` will be considered equal + to NaN's in ``a2``. Returns: cupy.ndarray: A boolean 0-dim array. @@ -54,7 +57,15 @@ """ if a1.shape != a2.shape: return cupy.array(False) - return (a1 == a2).all() + if not equal_nan: + return (a1 == a2).all() + # Handling NaN values if equal_nan is True + a1nan, a2nan = content.isnan(a1), content.isnan(a2) + # NaN's occur at different locations + if not (a1nan == a2nan).all(): + return cupy.array(False) + # Shapes of a1, a2 and masks are guaranteed to be consistent by this point + return (a1[~a1nan] == a2[~a1nan]).all() def allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):
{"golden_diff": "diff --git a/cupy/_logic/comparison.py b/cupy/_logic/comparison.py\n--- a/cupy/_logic/comparison.py\n+++ b/cupy/_logic/comparison.py\n@@ -2,6 +2,7 @@\n \n import cupy\n from cupy import core\n+from cupy._logic import content\n \n \n _is_close = core.create_ufunc(\n@@ -38,12 +39,14 @@\n )\n \n \n-def array_equal(a1, a2):\n+def array_equal(a1, a2, equal_nan=False):\n \"\"\"Returns ``True`` if two arrays are element-wise exactly equal.\n \n Args:\n a1 (cupy.ndarray): Input array to compare.\n a2 (cupy.ndarray): Input array to compare.\n+ equal_nan (bool): If ``True``, NaN's in ``a1`` will be considered equal\n+ to NaN's in ``a2``.\n \n Returns:\n cupy.ndarray: A boolean 0-dim array.\n@@ -54,7 +57,15 @@\n \"\"\"\n if a1.shape != a2.shape:\n return cupy.array(False)\n- return (a1 == a2).all()\n+ if not equal_nan:\n+ return (a1 == a2).all()\n+ # Handling NaN values if equal_nan is True\n+ a1nan, a2nan = content.isnan(a1), content.isnan(a2)\n+ # NaN's occur at different locations\n+ if not (a1nan == a2nan).all():\n+ return cupy.array(False)\n+ # Shapes of a1, a2 and masks are guaranteed to be consistent by this point\n+ return (a1[~a1nan] == a2[~a1nan]).all()\n \n \n def allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n", "issue": "[Bug, Enhancement] array_equal returns False on identical arrays with NaN values\n* Conditions (you can just paste the output of `python -c 'import cupy; cupy.show_config()'`)\r\n - CuPy Version : 8.0.0\r\n - OS/Platform : Pop OS\r\n - CUDA Version : 10000\r\n - cuDNN Version : 7605\r\n - NCCL Version : 2604\r\n\r\n* Code to reproduce\r\n```python\r\nimport cupy as cp\r\na = cp.array([0, 1, cp.nan])\r\nprint(cp.array_equal(a, a)) # False\r\n```\r\nThis relates exactly with https://github.com/numpy/numpy/issues/9229\r\nThis could be resolved by following the same procedure, resolved by https://github.com/numpy/numpy/pull/16128, which would essentially add an `equal_nan` kwarg to let user decide if NaN values should be considered equal or not, which is consistent with: https://github.com/cupy/cupy/blob/0dc6bbafb0cf75bb873531cbf69c6995a72c61a3/cupy/_logic/comparison.py#L60\r\nhttps://github.com/cupy/cupy/blob/0dc6bbafb0cf75bb873531cbf69c6995a72c61a3/cupy/_logic/comparison.py#L89\r\n\r\n* Error messages, stack traces, or logs\r\n`<null>`\n", "before_files": [{"content": "import numpy\n\nimport cupy\nfrom cupy import core\n\n\n_is_close = core.create_ufunc(\n 'cupy_is_close',\n ('eeee?->?', 'ffff?->?', 'dddd?->?'),\n '''\n bool equal_nan = in4;\n if (isfinite(in0) && isfinite(in1)) {\n out0 = fabs(in0 - in1) <= in3 + in2 * fabs(in1);\n } else if (equal_nan) {\n out0 = (in0 == in1) || (isnan(in0) && isnan(in1));\n } else {\n out0 = (in0 == in1);\n }\n '''\n)\n\n# Note that in cupy/core/include/cupy/complex.cuh, we already got isfinite and\n# isnan working for complex numbers, so just replace fabs above by abs (from\n# thrust) and we are ready to go\n_is_close_complex = core.create_ufunc(\n 'cupy_is_close_complex',\n ('FFff?->?', 'DDdd?->?'),\n '''\n bool equal_nan = in4;\n if (isfinite(in0) && isfinite(in1)) {\n out0 = abs(in0 - in1) <= in3 + in2 * abs(in1);\n } else if (equal_nan) {\n out0 = (in0 == in1) || (isnan(in0) && isnan(in1));\n } else {\n out0 = (in0 == in1);\n }\n '''\n)\n\n\ndef array_equal(a1, a2):\n \"\"\"Returns ``True`` if two arrays are element-wise exactly equal.\n\n Args:\n a1 (cupy.ndarray): Input array to compare.\n a2 (cupy.ndarray): Input array to compare.\n\n Returns:\n cupy.ndarray: A boolean 0-dim array.\n If its value is ``True``, two arrays are element-wise equal.\n\n .. seealso:: :func:`numpy.array_equal`\n\n \"\"\"\n if a1.shape != a2.shape:\n return cupy.array(False)\n return (a1 == a2).all()\n\n\ndef allclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n \"\"\"Returns True if two arrays are element-wise equal within a tolerance.\n\n Two values in ``a`` and ``b`` are considiered equal when the following\n equation is satisfied.\n\n .. math::\n\n |a - b| \\\\le \\\\mathrm{atol} + \\\\mathrm{rtol} |b|\n\n Args:\n a (cupy.ndarray): Input array to compare.\n b (cupy.ndarray): Input array to compare.\n rtol (float): The relative tolerance.\n atol (float): The absolute tolerance.\n equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal\n to NaN's in ``b``.\n\n Returns:\n cupy.ndarray: A boolean 0-dim array.\n If its value is ``True``, two arrays are element-wise equal within\n a tolerance.\n\n .. seealso:: :func:`numpy.allclose`\n\n \"\"\"\n return isclose(a, b, rtol=rtol, atol=atol, equal_nan=equal_nan).all()\n\n\ndef isclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):\n \"\"\"Returns a boolean array where two arrays are equal within a tolerance.\n\n Two values in ``a`` and ``b`` are considiered equal when the following\n equation is satisfied.\n\n .. math::\n\n |a - b| \\\\le \\\\mathrm{atol} + \\\\mathrm{rtol} |b|\n\n Args:\n a (cupy.ndarray): Input array to compare.\n b (cupy.ndarray): Input array to compare.\n rtol (float): The relative tolerance.\n atol (float): The absolute tolerance.\n equal_nan (bool): If ``True``, NaN's in ``a`` will be considered equal\n to NaN's in ``b``.\n\n Returns:\n cupy.ndarray: A boolean array storing where ``a`` and ``b`` are equal.\n\n .. seealso:: :func:`numpy.isclose`\n\n \"\"\"\n a = cupy.asanyarray(a)\n b = cupy.asanyarray(b)\n if (a.dtype in [numpy.complex64, numpy.complex128]) or \\\n (b.dtype in [numpy.complex64, numpy.complex128]):\n return _is_close_complex(a, b, rtol, atol, equal_nan)\n else:\n return _is_close(a, b, rtol, atol, equal_nan)\n\n\n# TODO(okuta): Implement array_equal\n\n\n# TODO(okuta): Implement array_equiv\n\n\ngreater = core.greater\n\n\ngreater_equal = core.greater_equal\n\n\nless = core.less\n\n\nless_equal = core.less_equal\n\n\nequal = core.equal\n\n\nnot_equal = core.not_equal\n", "path": "cupy/_logic/comparison.py"}]}
2,321
415
gh_patches_debug_8791
rasdani/github-patches
git_diff
deepchecks__deepchecks-536
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Wrong BSD license tag in the distribution Although you clearly state in the repository that the software is licensed under AGPLv3, you do not explicitly specify the license type in the `setup.py` script. The license metadata is then derived from a wrong classifier: https://github.com/deepchecks/deepchecks/blob/7974e66d95358f4bccdd2ea16591b98e1ae74529/setup.py#L114 It's reflected in the [PyPI](https://pypi.org/project/deepchecks/) distribution: ![image](https://user-images.githubusercontent.com/36337649/148455950-abdd6865-9bf6-473a-88f4-3e86efa925a8.png) </issue> <code> [start of setup.py] 1 # ---------------------------------------------------------------------------- 2 # Copyright (C) 2021 Deepchecks (https://www.deepchecks.com) 3 # 4 # This file is part of Deepchecks. 5 # Deepchecks is distributed under the terms of the GNU Affero General 6 # Public License (version 3 or later). 7 # You should have received a copy of the GNU Affero General Public License 8 # along with Deepchecks. If not, see <http://www.gnu.org/licenses/>. 9 # ---------------------------------------------------------------------------- 10 # 11 """ 12 13 |build| |Documentation Status| |pkgVersion| |pyVersions| 14 |Maintainability| |Coverage Status| 15 16 .. image:: https://raw.githubusercontent.com/deepchecks/deepchecks/main/docs/images/deepchecks-logo-with-white-wide-back.png 17 :target: https://github.com/deepchecks/deepchecks 18 19 Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort. 20 This includes checks related to various types of issues, such as model performance, data integrity, 21 distribution mismatches, and more. 22 23 What Do You Need in Order to Start Validating? 24 ---------------------------------------------- 25 26 Depending on your phase and what you wise to validate, you'll need a 27 subset of the following: 28 29 - Raw data (before pre-processing such as OHE, string processing, 30 etc.), with optional labels 31 32 - The model's training data with labels 33 34 - Test data (which the model isn't exposed to) with labels 35 36 - A model compatible with scikit-learn API that you wish to validate 37 (e.g. RandomForest, XGBoost) 38 39 Deepchecks validation accompanies you from the initial phase when you 40 have only raw data, through the data splits, and to the final stage of 41 having a trained model that you wish to evaluate. Accordingly, each 42 phase requires different assets for the validation. See more about 43 typical usage scenarios and the built-in suites in the 44 `docs <https://docs.deepchecks.com/?utm_source=pypi.org&utm_medium=referral&utm_campaign=readme>`__. 45 46 Installation 47 ------------ 48 49 Using pip 50 ~~~~~~~~~ 51 52 .. code:: bash 53 54 pip install deepchecks #--upgrade --user 55 56 Using conda 57 ~~~~~~~~~~~ 58 59 .. code:: bash 60 61 conda install -c deepchecks deepchecks 62 63 .. |build| image:: https://github.com/deepchecks/deepchecks/actions/workflows/build.yml/badge.svg 64 .. |Documentation Status| image:: https://readthedocs.org/projects/deepchecks/badge/?version=latest 65 :target: https://docs.deepchecks.com/en/latest/?badge=latest 66 .. |pkgVersion| image:: https://img.shields.io/pypi/v/deepchecks 67 .. |pyVersions| image:: https://img.shields.io/pypi/pyversions/deepchecks 68 .. |Maintainability| image:: https://api.codeclimate.com/v1/badges/970b11794144139975fa/maintainability 69 :target: https://codeclimate.com/github/deepchecks/deepchecks/maintainability 70 .. |Coverage Status| image:: https://coveralls.io/repos/github/deepchecks/deepchecks/badge.svg?branch=main 71 :target: https://coveralls.io/github/deepchecks/deepchecks?branch=main 72 73 """ 74 75 import setuptools 76 from setuptools import setup 77 from distutils.util import convert_path 78 import os 79 80 main_ns = {} 81 DOCLINES = (__doc__ or '').split("\n") 82 83 with open(os.path.join('./', 'VERSION')) as version_file: 84 VER = version_file.read().strip() 85 86 requirementPath = os.path.dirname(os.path.realpath(__file__)) + '/requirements.txt' 87 install_requires = [] 88 if os.path.isfile(requirementPath): 89 with open(requirementPath) as f: 90 install_requires = f.read().splitlines() 91 92 93 94 95 setup( 96 name='deepchecks', 97 version=VER, 98 packages=setuptools.find_packages(), 99 install_requires=install_requires, 100 license_files = ('LICENSE', ), 101 description = DOCLINES[0], 102 long_description="\n".join(DOCLINES[2:]), 103 author = 'deepchecks', 104 author_email = '[email protected]', 105 url = 'https://github.com/deepchecks/deepchecks', 106 download_url = "https://github.com/deepchecks/deepchecks/releases/download/{0}/deepchecks-{0}.tar.gz".format(VER), 107 keywords = ['Software Development', 'Machine Learning'], 108 include_package_data=True, 109 classifiers = [ 110 'Intended Audience :: Developers', 111 'Intended Audience :: Science/Research', 112 'Topic :: Software Development', 113 'Topic :: Scientific/Engineering', 114 'License :: OSI Approved :: BSD License', 115 'Programming Language :: Python :: 3', 116 'Programming Language :: Python :: 3.6', 117 'Programming Language :: Python :: 3.7', 118 'Programming Language :: Python :: 3.8', 119 'Programming Language :: Python :: 3.9', 120 'Programming Language :: Python :: 3.10', 121 ], 122 ) 123 [end of setup.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -111,7 +111,7 @@ 'Intended Audience :: Science/Research', 'Topic :: Software Development', 'Topic :: Scientific/Engineering', - 'License :: OSI Approved :: BSD License', + 'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7',
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -111,7 +111,7 @@\n 'Intended Audience :: Science/Research',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n- 'License :: OSI Approved :: BSD License',\n+ 'License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n", "issue": "Wrong BSD license tag in the distribution\nAlthough you clearly state in the repository that the software is licensed under AGPLv3, you do not explicitly specify the license type in the `setup.py` script. The license metadata is then derived from a wrong classifier: \r\nhttps://github.com/deepchecks/deepchecks/blob/7974e66d95358f4bccdd2ea16591b98e1ae74529/setup.py#L114\r\n\r\nIt's reflected in the [PyPI](https://pypi.org/project/deepchecks/) distribution:\r\n![image](https://user-images.githubusercontent.com/36337649/148455950-abdd6865-9bf6-473a-88f4-3e86efa925a8.png)\r\n\r\n\n", "before_files": [{"content": "# ----------------------------------------------------------------------------\n# Copyright (C) 2021 Deepchecks (https://www.deepchecks.com)\n#\n# This file is part of Deepchecks.\n# Deepchecks is distributed under the terms of the GNU Affero General\n# Public License (version 3 or later).\n# You should have received a copy of the GNU Affero General Public License\n# along with Deepchecks. If not, see <http://www.gnu.org/licenses/>.\n# ----------------------------------------------------------------------------\n#\n\"\"\"\n\n|build| |Documentation Status| |pkgVersion| |pyVersions|\n|Maintainability| |Coverage Status|\n\n.. image:: https://raw.githubusercontent.com/deepchecks/deepchecks/main/docs/images/deepchecks-logo-with-white-wide-back.png\n :target: https://github.com/deepchecks/deepchecks\n\nDeepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort.\nThis includes checks related to various types of issues, such as model performance, data integrity,\ndistribution mismatches, and more.\n\nWhat Do You Need in Order to Start Validating?\n----------------------------------------------\n\nDepending on your phase and what you wise to validate, you'll need a\nsubset of the following:\n\n- Raw data (before pre-processing such as OHE, string processing,\n etc.), with optional labels\n\n- The model's training data with labels\n\n- Test data (which the model isn't exposed to) with labels\n\n- A model compatible with scikit-learn API that you wish to validate\n (e.g. RandomForest, XGBoost)\n\nDeepchecks validation accompanies you from the initial phase when you\nhave only raw data, through the data splits, and to the final stage of\nhaving a trained model that you wish to evaluate. Accordingly, each\nphase requires different assets for the validation. See more about\ntypical usage scenarios and the built-in suites in the\n`docs <https://docs.deepchecks.com/?utm_source=pypi.org&utm_medium=referral&utm_campaign=readme>`__.\n\nInstallation\n------------\n\nUsing pip\n~~~~~~~~~\n\n.. code:: bash\n\n pip install deepchecks #--upgrade --user\n\nUsing conda\n~~~~~~~~~~~\n\n.. code:: bash\n\n conda install -c deepchecks deepchecks\n\n.. |build| image:: https://github.com/deepchecks/deepchecks/actions/workflows/build.yml/badge.svg\n.. |Documentation Status| image:: https://readthedocs.org/projects/deepchecks/badge/?version=latest\n :target: https://docs.deepchecks.com/en/latest/?badge=latest\n.. |pkgVersion| image:: https://img.shields.io/pypi/v/deepchecks\n.. |pyVersions| image:: https://img.shields.io/pypi/pyversions/deepchecks\n.. |Maintainability| image:: https://api.codeclimate.com/v1/badges/970b11794144139975fa/maintainability\n :target: https://codeclimate.com/github/deepchecks/deepchecks/maintainability\n.. |Coverage Status| image:: https://coveralls.io/repos/github/deepchecks/deepchecks/badge.svg?branch=main\n :target: https://coveralls.io/github/deepchecks/deepchecks?branch=main\n\n\"\"\"\n\nimport setuptools\nfrom setuptools import setup\nfrom distutils.util import convert_path\nimport os\n\nmain_ns = {}\nDOCLINES = (__doc__ or '').split(\"\\n\")\n\nwith open(os.path.join('./', 'VERSION')) as version_file:\n VER = version_file.read().strip()\n\nrequirementPath = os.path.dirname(os.path.realpath(__file__)) + '/requirements.txt'\ninstall_requires = []\nif os.path.isfile(requirementPath):\n with open(requirementPath) as f:\n install_requires = f.read().splitlines()\n\n\n\n\nsetup(\n name='deepchecks',\n version=VER,\n packages=setuptools.find_packages(),\n install_requires=install_requires,\n license_files = ('LICENSE', ),\n description = DOCLINES[0],\n long_description=\"\\n\".join(DOCLINES[2:]),\n author = 'deepchecks', \n author_email = '[email protected]', \n url = 'https://github.com/deepchecks/deepchecks',\n download_url = \"https://github.com/deepchecks/deepchecks/releases/download/{0}/deepchecks-{0}.tar.gz\".format(VER),\n keywords = ['Software Development', 'Machine Learning'],\n include_package_data=True,\n classifiers = [\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'License :: OSI Approved :: BSD License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n ],\n)\n", "path": "setup.py"}]}
2,064
128
gh_patches_debug_2410
rasdani/github-patches
git_diff
alltheplaces__alltheplaces-2638
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Spider costco is broken During the global build at 2021-08-18-14-42-26, spider **costco** failed with **0 features** and **2 errors**. Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/costco.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/costco.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/costco.geojson)) </issue> <code> [start of locations/spiders/costco.py] 1 # -*- coding: utf-8 -*- 2 import scrapy 3 import json 4 import re 5 from urllib.parse import urlencode 6 7 from locations.items import GeojsonPointItem 8 9 DAYS_NAME = { 10 'm': 'Mo', 11 'mon': 'Mo', 12 't': 'Tu', 13 'w': 'We', 14 's': 'Th', 15 'f': 'Fr', 16 'f ': 'Fr', 17 'sun': 'Su', 18 'sat': 'Sa', 19 'daily': '', 20 } 21 22 23 class CostcoSpider(scrapy.Spider): 24 name = "costco" 25 item_attributes = {'brand': 'Costco', 'brand_wikidata': 'Q715583'} 26 allowed_domains = ['www.costco.com'] 27 start_urls = ( 28 'https://www.costco.com/warehouse-locations', 29 ) 30 custom_settings = { 31 'USER_AGENT': 'Mozilla/5.0', 32 } 33 34 download_delay = 0.5 35 36 def parse(self, response): 37 url = 'https://www.costco.com/AjaxWarehouseBrowseLookupView?' 38 39 params = { 40 "langId": "-1", 41 # "storeId": "10301", 42 "numOfWarehouses": "50", # max allowed 43 "hasGas": "false", 44 "hasTires": "false", 45 "hasFood": "false", 46 "hasHearing": "false", 47 "hasPharmacy": "false", 48 "hasOptical": "false", 49 "hasBusiness": "false", 50 "hasPhotoCenter": "false", 51 "tiresCheckout": "0", 52 "isTransferWarehouse": "false", 53 "populateWarehouseDetails": "true", 54 "warehousePickupCheckout": "false", 55 "countryCode": "US", 56 } 57 58 with open('./locations/searchable_points/us_centroids_100mile_radius.csv') as points: 59 next(points) 60 for point in points: 61 _, lat, lon = point.strip().split(',') 62 params.update({"latitude": lat, "longitude": lon}) 63 yield scrapy.Request(url=url + urlencode(params), callback=self.parse_ajax) 64 65 def store_hours(self, store_hours): 66 opening_hours = [] 67 68 if not store_hours: 69 return None 70 71 for day_info in store_hours: 72 if day_info.lower().find('close') > -1: 73 continue 74 75 match = re.match( 76 r'^(\w+)-?[\.:]?([A-Za-z]*)\.? *(\d{1,2}):(\d{2}) ?(am|pm|) *- +(\d{1,2}):(\d{2}) ?(am|pm|hrs\.)$', day_info) 77 if not match: 78 self.logger.warn("Couldn't match hours: %s", day_info) 79 80 try: 81 day_from, day_to, fr_hr, fr_min, fr_ampm, to_hr, to_min, to_ampm = match.groups() 82 except ValueError: 83 self.logger.warn("Couldn't match hours: %s", day_info) 84 raise 85 86 day_from = DAYS_NAME[day_from.lower()] 87 day_to = DAYS_NAME[day_to.lower()] if day_to else day_from 88 89 if day_from != day_to: 90 day_str = '{}-{}'.format(day_from, day_to) 91 else: 92 day_str = '{}'.format(day_from) 93 94 day_hours = '%s %02d:%02d-%02d:%02d' % ( 95 day_str, 96 int(fr_hr) + 12 if fr_ampm == 'pm' else int(fr_hr), 97 int(fr_min), 98 int(to_hr) + 12 if to_ampm == 'pm' else int(to_hr), 99 int(to_min), 100 ) 101 102 opening_hours.append(day_hours.strip()) 103 104 return '; '.join(opening_hours) 105 106 def _clean_text(self, text): 107 return re.sub("[\r\n\t]", "", text).strip() 108 109 def parse_ajax(self, response): 110 body = json.loads(response.body_as_unicode()) 111 112 for store in body[1:]: 113 if store["distance"] < 110: 114 # only process stores that are within 110 miles of query point 115 # (to reduce processing a ton of duplicates) 116 ref = store['identifier'] 117 department = store['specialtyDepartments'] 118 119 fuels = {} 120 if 'gasPrices' in store: 121 fuels = { 122 'fuel:diesel': 'diesel' in store['gasPrices'], 123 'fuel:octane_87': 'regular' in store['gasPrices'], 124 'fuel:octane_91': 'premium' in store['gasPrices'] 125 } 126 127 properties = { 128 'lat': store.get('latitude'), 129 'lon': store.get('longitude'), 130 'ref': ref, 131 'phone': self._clean_text(store.get('phone')), 132 'name': f"Costco {store['locationName']}", 133 'addr_full': store['address1'], 134 'city': store['city'], 135 'state': store['state'], 136 'postcode': store.get('zipCode'), 137 'country': store.get('country'), 138 'website': 'https://www.costco.com/warehouse-locations/store-{}.html'.format(ref), 139 'extras': { 140 'shop': 'supermarket', 141 'number': store["displayName"], 142 'amenity:fuel': store['hasGasDepartment'], 143 'amenity:pharmacy': store['hasPharmacyDepartment'], 144 'atm': any('ATM' == d['name'] for d in department) or None, 145 'fuel:propane': any('Propane' == d['name'] for d in department) or None, 146 **fuels 147 } 148 } 149 150 hours = store.get('warehouseHours') 151 if hours: 152 try: 153 properties["opening_hours"] = self.store_hours(hours) 154 except: 155 pass 156 157 yield GeojsonPointItem(**properties) 158 [end of locations/spiders/costco.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/locations/spiders/costco.py b/locations/spiders/costco.py --- a/locations/spiders/costco.py +++ b/locations/spiders/costco.py @@ -28,7 +28,7 @@ 'https://www.costco.com/warehouse-locations', ) custom_settings = { - 'USER_AGENT': 'Mozilla/5.0', + 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36', } download_delay = 0.5
{"golden_diff": "diff --git a/locations/spiders/costco.py b/locations/spiders/costco.py\n--- a/locations/spiders/costco.py\n+++ b/locations/spiders/costco.py\n@@ -28,7 +28,7 @@\n 'https://www.costco.com/warehouse-locations',\n )\n custom_settings = {\n- 'USER_AGENT': 'Mozilla/5.0',\n+ 'USER_AGENT': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',\n }\n \n download_delay = 0.5\n", "issue": "Spider costco is broken\nDuring the global build at 2021-08-18-14-42-26, spider **costco** failed with **0 features** and **2 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/logs/costco.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/costco.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-18-14-42-26/output/costco.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nimport re\nfrom urllib.parse import urlencode\n\nfrom locations.items import GeojsonPointItem\n\nDAYS_NAME = {\n 'm': 'Mo',\n 'mon': 'Mo',\n 't': 'Tu',\n 'w': 'We',\n 's': 'Th',\n 'f': 'Fr',\n 'f ': 'Fr',\n 'sun': 'Su',\n 'sat': 'Sa',\n 'daily': '',\n}\n\n\nclass CostcoSpider(scrapy.Spider):\n name = \"costco\"\n item_attributes = {'brand': 'Costco', 'brand_wikidata': 'Q715583'}\n allowed_domains = ['www.costco.com']\n start_urls = (\n 'https://www.costco.com/warehouse-locations',\n )\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0',\n }\n\n download_delay = 0.5\n\n def parse(self, response):\n url = 'https://www.costco.com/AjaxWarehouseBrowseLookupView?'\n\n params = {\n \"langId\": \"-1\",\n # \"storeId\": \"10301\",\n \"numOfWarehouses\": \"50\", # max allowed\n \"hasGas\": \"false\",\n \"hasTires\": \"false\",\n \"hasFood\": \"false\",\n \"hasHearing\": \"false\",\n \"hasPharmacy\": \"false\",\n \"hasOptical\": \"false\",\n \"hasBusiness\": \"false\",\n \"hasPhotoCenter\": \"false\",\n \"tiresCheckout\": \"0\",\n \"isTransferWarehouse\": \"false\",\n \"populateWarehouseDetails\": \"true\",\n \"warehousePickupCheckout\": \"false\",\n \"countryCode\": \"US\",\n }\n\n with open('./locations/searchable_points/us_centroids_100mile_radius.csv') as points:\n next(points)\n for point in points:\n _, lat, lon = point.strip().split(',')\n params.update({\"latitude\": lat, \"longitude\": lon})\n yield scrapy.Request(url=url + urlencode(params), callback=self.parse_ajax)\n\n def store_hours(self, store_hours):\n opening_hours = []\n\n if not store_hours:\n return None\n\n for day_info in store_hours:\n if day_info.lower().find('close') > -1:\n continue\n\n match = re.match(\n r'^(\\w+)-?[\\.:]?([A-Za-z]*)\\.? *(\\d{1,2}):(\\d{2}) ?(am|pm|) *- +(\\d{1,2}):(\\d{2}) ?(am|pm|hrs\\.)$', day_info)\n if not match:\n self.logger.warn(\"Couldn't match hours: %s\", day_info)\n\n try:\n day_from, day_to, fr_hr, fr_min, fr_ampm, to_hr, to_min, to_ampm = match.groups()\n except ValueError:\n self.logger.warn(\"Couldn't match hours: %s\", day_info)\n raise\n\n day_from = DAYS_NAME[day_from.lower()]\n day_to = DAYS_NAME[day_to.lower()] if day_to else day_from\n\n if day_from != day_to:\n day_str = '{}-{}'.format(day_from, day_to)\n else:\n day_str = '{}'.format(day_from)\n\n day_hours = '%s %02d:%02d-%02d:%02d' % (\n day_str,\n int(fr_hr) + 12 if fr_ampm == 'pm' else int(fr_hr),\n int(fr_min),\n int(to_hr) + 12 if to_ampm == 'pm' else int(to_hr),\n int(to_min),\n )\n\n opening_hours.append(day_hours.strip())\n\n return '; '.join(opening_hours)\n\n def _clean_text(self, text):\n return re.sub(\"[\\r\\n\\t]\", \"\", text).strip()\n\n def parse_ajax(self, response):\n body = json.loads(response.body_as_unicode())\n\n for store in body[1:]:\n if store[\"distance\"] < 110:\n # only process stores that are within 110 miles of query point\n # (to reduce processing a ton of duplicates)\n ref = store['identifier']\n department = store['specialtyDepartments']\n\n fuels = {}\n if 'gasPrices' in store:\n fuels = {\n 'fuel:diesel': 'diesel' in store['gasPrices'],\n 'fuel:octane_87': 'regular' in store['gasPrices'],\n 'fuel:octane_91': 'premium' in store['gasPrices']\n }\n\n properties = {\n 'lat': store.get('latitude'),\n 'lon': store.get('longitude'),\n 'ref': ref,\n 'phone': self._clean_text(store.get('phone')),\n 'name': f\"Costco {store['locationName']}\",\n 'addr_full': store['address1'],\n 'city': store['city'],\n 'state': store['state'],\n 'postcode': store.get('zipCode'),\n 'country': store.get('country'),\n 'website': 'https://www.costco.com/warehouse-locations/store-{}.html'.format(ref),\n 'extras': {\n 'shop': 'supermarket',\n 'number': store[\"displayName\"],\n 'amenity:fuel': store['hasGasDepartment'],\n 'amenity:pharmacy': store['hasPharmacyDepartment'],\n 'atm': any('ATM' == d['name'] for d in department) or None,\n 'fuel:propane': any('Propane' == d['name'] for d in department) or None,\n **fuels\n }\n }\n\n hours = store.get('warehouseHours')\n if hours:\n try:\n properties[\"opening_hours\"] = self.store_hours(hours)\n except:\n pass\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/costco.py"}]}
2,386
168
gh_patches_debug_32431
rasdani/github-patches
git_diff
engnadeau__pybotics-13
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Robot Model-Specific Parameters Should be in External Files Robot Model-Specific Parameters (e.g., MDH, joint limits, etc) Should be in External Files (e.g., csv, json, etc). This way, new models and modifications to models are not tied to the codebase. `robot_model.py` should be deprecated and example models should be saved in a separate `example` folder in the repo. </issue> <code> [start of pybotics/__init__.py] 1 # make the follow accessible from pybotics namespace 2 from . import geometry 3 from . import kinematics 4 from .robot import * 5 from . import robot_model 6 [end of pybotics/__init__.py] [start of pybotics/robot_model.py] 1 import numpy as np 2 3 ''' 4 Modified DH parameter matrices of various robots (rx, tx, rz, tz) aka (alpha, a, theta, d) 5 ''' 6 7 8 def ur10(): 9 model = np.array([ 10 [0, 0, 0, 118.0], 11 [np.pi / 2.0, 0, np.pi, 0], 12 [0, 612.7, 0, 0], 13 [0, 571.6, 0, 163.9], 14 [-np.pi / 2.0, 0, 0, 115.7], 15 [np.pi / 2.0, 0, np.pi, 92.2] 16 ]) 17 return model 18 19 20 def puma_560(): 21 model = np.array([ 22 [0, 0, 0, 0], 23 [-np.pi / 2.0, 0, 0, 0], 24 [0, 612.7, 0, 0], 25 [0, 571.6, 0, 163.9], 26 [-np.pi / 2.0, 0, 0, 115.7], 27 [np.pi / 2.0, 0, np.pi, 92.2] 28 ]) 29 return model 30 31 32 def kuka_lbr_iiwa_7(): 33 model = np.array([ 34 [0, 0, 0, 340], 35 [-np.pi / 2.0, 0, 0, 0], 36 [np.pi / 2.0, 0, 0, 400], 37 [np.pi / 2.0, 0, 0, 0], 38 [-np.pi / 2.0, 0, 0, 400], 39 [-np.pi / 2.0, 0, 0, 0], 40 [np.pi / 2.0, 0, 0, 126] 41 ]) 42 return model 43 [end of pybotics/robot_model.py] [start of examples/example_robot.py] 1 import copy 2 3 import pybotics as py 4 import numpy as np 5 6 # set numpy print options 7 np.set_printoptions(precision=3) 8 np.set_printoptions(suppress=True) 9 10 # create robot 11 ideal_robot = py.Robot() 12 ideal_robot.robot_model = py.robot_model.ur10() 13 14 # create pseudo-realistic robot with kinematic errors 15 real_robot = copy.deepcopy(ideal_robot) 16 real_robot.impair_robot_model() 17 18 print('Ideal Robot Model:\n', ideal_robot.robot_model, '\n') 19 print('Pseudo-Real Robot Model:\n', real_robot.robot_model, '\n') 20 21 # demonstrate forward kinematics 22 joints = [0, 0, 0, 0, 0, 0] 23 24 print('Ideal Pose:\n', ideal_robot.fk(joints), '\n') 25 print('Pseudo-Real Pose:\n', real_robot.fk(joints), '\n') 26 [end of examples/example_robot.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/examples/example_robot.py b/examples/example_robot.py --- a/examples/example_robot.py +++ b/examples/example_robot.py @@ -8,18 +8,17 @@ np.set_printoptions(suppress=True) # create robot -ideal_robot = py.Robot() -ideal_robot.robot_model = py.robot_model.ur10() +model = np.loadtxt('ur10-mdh.csv', delimiter=',') +robot = py.Robot(model) -# create pseudo-realistic robot with kinematic errors -real_robot = copy.deepcopy(ideal_robot) -real_robot.impair_robot_model() - -print('Ideal Robot Model:\n', ideal_robot.robot_model, '\n') -print('Pseudo-Real Robot Model:\n', real_robot.robot_model, '\n') +print('Robot Model:\n{}\n'.format(robot.robot_model)) # demonstrate forward kinematics -joints = [0, 0, 0, 0, 0, 0] +joints = [0] * robot.num_dof() +pose = robot.fk(joints) + +print('Pose:\n{}\n'.format(pose)) -print('Ideal Pose:\n', ideal_robot.fk(joints), '\n') -print('Pseudo-Real Pose:\n', real_robot.fk(joints), '\n') +# demonstrate inverse kinematics +new_joints = robot.ik(pose) +print('Solved Joints:\n{}\n'.format(new_joints)) diff --git a/pybotics/__init__.py b/pybotics/__init__.py --- a/pybotics/__init__.py +++ b/pybotics/__init__.py @@ -1,5 +1,4 @@ # make the follow accessible from pybotics namespace from . import geometry from . import kinematics -from .robot import * -from . import robot_model +from .robot import Robot diff --git a/pybotics/robot_model.py b/pybotics/robot_model.py deleted file mode 100644 --- a/pybotics/robot_model.py +++ /dev/null @@ -1,42 +0,0 @@ -import numpy as np - -''' -Modified DH parameter matrices of various robots (rx, tx, rz, tz) aka (alpha, a, theta, d) -''' - - -def ur10(): - model = np.array([ - [0, 0, 0, 118.0], - [np.pi / 2.0, 0, np.pi, 0], - [0, 612.7, 0, 0], - [0, 571.6, 0, 163.9], - [-np.pi / 2.0, 0, 0, 115.7], - [np.pi / 2.0, 0, np.pi, 92.2] - ]) - return model - - -def puma_560(): - model = np.array([ - [0, 0, 0, 0], - [-np.pi / 2.0, 0, 0, 0], - [0, 612.7, 0, 0], - [0, 571.6, 0, 163.9], - [-np.pi / 2.0, 0, 0, 115.7], - [np.pi / 2.0, 0, np.pi, 92.2] - ]) - return model - - -def kuka_lbr_iiwa_7(): - model = np.array([ - [0, 0, 0, 340], - [-np.pi / 2.0, 0, 0, 0], - [np.pi / 2.0, 0, 0, 400], - [np.pi / 2.0, 0, 0, 0], - [-np.pi / 2.0, 0, 0, 400], - [-np.pi / 2.0, 0, 0, 0], - [np.pi / 2.0, 0, 0, 126] - ]) - return model
{"golden_diff": "diff --git a/examples/example_robot.py b/examples/example_robot.py\n--- a/examples/example_robot.py\n+++ b/examples/example_robot.py\n@@ -8,18 +8,17 @@\n np.set_printoptions(suppress=True)\n \n # create robot\n-ideal_robot = py.Robot()\n-ideal_robot.robot_model = py.robot_model.ur10()\n+model = np.loadtxt('ur10-mdh.csv', delimiter=',')\n+robot = py.Robot(model)\n \n-# create pseudo-realistic robot with kinematic errors\n-real_robot = copy.deepcopy(ideal_robot)\n-real_robot.impair_robot_model()\n-\n-print('Ideal Robot Model:\\n', ideal_robot.robot_model, '\\n')\n-print('Pseudo-Real Robot Model:\\n', real_robot.robot_model, '\\n')\n+print('Robot Model:\\n{}\\n'.format(robot.robot_model))\n \n # demonstrate forward kinematics\n-joints = [0, 0, 0, 0, 0, 0]\n+joints = [0] * robot.num_dof()\n+pose = robot.fk(joints)\n+\n+print('Pose:\\n{}\\n'.format(pose))\n \n-print('Ideal Pose:\\n', ideal_robot.fk(joints), '\\n')\n-print('Pseudo-Real Pose:\\n', real_robot.fk(joints), '\\n')\n+# demonstrate inverse kinematics\n+new_joints = robot.ik(pose)\n+print('Solved Joints:\\n{}\\n'.format(new_joints))\ndiff --git a/pybotics/__init__.py b/pybotics/__init__.py\n--- a/pybotics/__init__.py\n+++ b/pybotics/__init__.py\n@@ -1,5 +1,4 @@\n # make the follow accessible from pybotics namespace\n from . import geometry\n from . import kinematics\n-from .robot import *\n-from . import robot_model\n+from .robot import Robot\ndiff --git a/pybotics/robot_model.py b/pybotics/robot_model.py\ndeleted file mode 100644\n--- a/pybotics/robot_model.py\n+++ /dev/null\n@@ -1,42 +0,0 @@\n-import numpy as np\n-\n-'''\n-Modified DH parameter matrices of various robots (rx, tx, rz, tz) aka (alpha, a, theta, d)\n-'''\n-\n-\n-def ur10():\n- model = np.array([\n- [0, 0, 0, 118.0],\n- [np.pi / 2.0, 0, np.pi, 0],\n- [0, 612.7, 0, 0],\n- [0, 571.6, 0, 163.9],\n- [-np.pi / 2.0, 0, 0, 115.7],\n- [np.pi / 2.0, 0, np.pi, 92.2]\n- ])\n- return model\n-\n-\n-def puma_560():\n- model = np.array([\n- [0, 0, 0, 0],\n- [-np.pi / 2.0, 0, 0, 0],\n- [0, 612.7, 0, 0],\n- [0, 571.6, 0, 163.9],\n- [-np.pi / 2.0, 0, 0, 115.7],\n- [np.pi / 2.0, 0, np.pi, 92.2]\n- ])\n- return model\n-\n-\n-def kuka_lbr_iiwa_7():\n- model = np.array([\n- [0, 0, 0, 340],\n- [-np.pi / 2.0, 0, 0, 0],\n- [np.pi / 2.0, 0, 0, 400],\n- [np.pi / 2.0, 0, 0, 0],\n- [-np.pi / 2.0, 0, 0, 400],\n- [-np.pi / 2.0, 0, 0, 0],\n- [np.pi / 2.0, 0, 0, 126]\n- ])\n- return model\n", "issue": "Robot Model-Specific Parameters Should be in External Files\nRobot Model-Specific Parameters (e.g., MDH, joint limits, etc) Should be in External Files (e.g., csv, json, etc). This way, new models and modifications to models are not tied to the codebase. `robot_model.py` should be deprecated and example models should be saved in a separate `example` folder in the repo.\n", "before_files": [{"content": "# make the follow accessible from pybotics namespace\nfrom . import geometry\nfrom . import kinematics\nfrom .robot import *\nfrom . import robot_model\n", "path": "pybotics/__init__.py"}, {"content": "import numpy as np\n\n'''\nModified DH parameter matrices of various robots (rx, tx, rz, tz) aka (alpha, a, theta, d)\n'''\n\n\ndef ur10():\n model = np.array([\n [0, 0, 0, 118.0],\n [np.pi / 2.0, 0, np.pi, 0],\n [0, 612.7, 0, 0],\n [0, 571.6, 0, 163.9],\n [-np.pi / 2.0, 0, 0, 115.7],\n [np.pi / 2.0, 0, np.pi, 92.2]\n ])\n return model\n\n\ndef puma_560():\n model = np.array([\n [0, 0, 0, 0],\n [-np.pi / 2.0, 0, 0, 0],\n [0, 612.7, 0, 0],\n [0, 571.6, 0, 163.9],\n [-np.pi / 2.0, 0, 0, 115.7],\n [np.pi / 2.0, 0, np.pi, 92.2]\n ])\n return model\n\n\ndef kuka_lbr_iiwa_7():\n model = np.array([\n [0, 0, 0, 340],\n [-np.pi / 2.0, 0, 0, 0],\n [np.pi / 2.0, 0, 0, 400],\n [np.pi / 2.0, 0, 0, 0],\n [-np.pi / 2.0, 0, 0, 400],\n [-np.pi / 2.0, 0, 0, 0],\n [np.pi / 2.0, 0, 0, 126]\n ])\n return model\n", "path": "pybotics/robot_model.py"}, {"content": "import copy\n\nimport pybotics as py\nimport numpy as np\n\n# set numpy print options\nnp.set_printoptions(precision=3)\nnp.set_printoptions(suppress=True)\n\n# create robot\nideal_robot = py.Robot()\nideal_robot.robot_model = py.robot_model.ur10()\n\n# create pseudo-realistic robot with kinematic errors\nreal_robot = copy.deepcopy(ideal_robot)\nreal_robot.impair_robot_model()\n\nprint('Ideal Robot Model:\\n', ideal_robot.robot_model, '\\n')\nprint('Pseudo-Real Robot Model:\\n', real_robot.robot_model, '\\n')\n\n# demonstrate forward kinematics\njoints = [0, 0, 0, 0, 0, 0]\n\nprint('Ideal Pose:\\n', ideal_robot.fk(joints), '\\n')\nprint('Pseudo-Real Pose:\\n', real_robot.fk(joints), '\\n')\n", "path": "examples/example_robot.py"}]}
1,482
961
gh_patches_debug_169
rasdani/github-patches
git_diff
joke2k__faker-1235
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> French IBAN should be 27 char of length * Faker version: 4.1.1 ### Steps to reproduce ``` import faker from faker import Faker fake = Faker('fr_FR') fr_iban = fake.iban() fr_iban 'FR96505438725498141631455686' len(fr_iban) 28 ``` ### Expected behavior ``` >>> len(fr_iban) 27 ``` [As stated on wikipedia in France row](https://en.wikipedia.org/wiki/International_Bank_Account_Number#IBAN_formats_by_country) ### Actual behavior ``` >>> len(fr_iban) 28 ``` </issue> <code> [start of faker/providers/bank/fr_FR/__init__.py] 1 from .. import Provider as BankProvider 2 3 4 class Provider(BankProvider): 5 bban_format = '########################' 6 country_code = 'FR' 7 [end of faker/providers/bank/fr_FR/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/faker/providers/bank/fr_FR/__init__.py b/faker/providers/bank/fr_FR/__init__.py --- a/faker/providers/bank/fr_FR/__init__.py +++ b/faker/providers/bank/fr_FR/__init__.py @@ -2,5 +2,5 @@ class Provider(BankProvider): - bban_format = '########################' + bban_format = '#######################' country_code = 'FR'
{"golden_diff": "diff --git a/faker/providers/bank/fr_FR/__init__.py b/faker/providers/bank/fr_FR/__init__.py\n--- a/faker/providers/bank/fr_FR/__init__.py\n+++ b/faker/providers/bank/fr_FR/__init__.py\n@@ -2,5 +2,5 @@\n \n \n class Provider(BankProvider):\n- bban_format = '########################'\n+ bban_format = '#######################'\n country_code = 'FR'\n", "issue": "French IBAN should be 27 char of length\n* Faker version: 4.1.1\r\n\r\n### Steps to reproduce\r\n\r\n```\r\nimport faker\r\nfrom faker import Faker\r\nfake = Faker('fr_FR')\r\nfr_iban = fake.iban()\r\nfr_iban\r\n'FR96505438725498141631455686'\r\nlen(fr_iban)\r\n28\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\n```\r\n>>> len(fr_iban)\r\n27\r\n```\r\n\r\n[As stated on wikipedia in France row](https://en.wikipedia.org/wiki/International_Bank_Account_Number#IBAN_formats_by_country)\r\n\r\n### Actual behavior\r\n\r\n```\r\n>>> len(fr_iban)\r\n28\r\n```\r\n\n", "before_files": [{"content": "from .. import Provider as BankProvider\n\n\nclass Provider(BankProvider):\n bban_format = '########################'\n country_code = 'FR'\n", "path": "faker/providers/bank/fr_FR/__init__.py"}]}
742
99
gh_patches_debug_12956
rasdani/github-patches
git_diff
hpcaitech__ColossalAI-5392
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [BUG]: Wrong import in ColossalAuto's meta_registry/binary_elementwise_ops.py ### 🐛 Describe the bug # Problem description The file `colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py` contains the following line: ```python from ..constants import BCAST_FUNC_OP ``` However, the file `colossalai/auto_parallel/meta_profiler/constants.py` which this import refers to does not contain any `BCAST_FUNC_OP`. This leads to an `ImportError` when running ColossalAuto in release 0.3.3 and newer. This constant can be found in the file `colossalai/auto_parallel/tensor_shard/constants.py`. The last commit to `colossalai/auto_parallel/meta_profiler/constants.py` (commit ID `079bf3cb`) removes the import of tensor_shard's `constants.py` from meta_profiler's `constants.py` (seemingly due to an automated refactoring). # Solution Since no other file in the `meta_registry` module uses constants from the `tensor_shard/constants.py` and to avoid automated removal of "unused" imports in the future, the import statement in question in above-mentioned `binary_elementwise_ops.py` could be changed to: ```python from colossalai.auto_parallel.tensor_shard.constants import BCAST_FUNC_OP ``` ### Environment - Python 3.8 - Torch 1.12.0 - no CUDA </issue> <code> [start of colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py] 1 from typing import List, Tuple 2 3 import torch 4 5 from colossalai._analyzer._subclasses.flop_tensor import flop_mapping 6 from colossalai._analyzer.fx.node_util import compute_size_in_bytes as activation_size 7 from colossalai.auto_parallel.tensor_shard.sharding_strategy import MemoryCost, OperationDataType, TrainCycleItem 8 9 from ..constants import BCAST_FUNC_OP 10 from ..registry import meta_register 11 12 __all__ = ["binary_elementwise_meta_info"] 13 14 15 @meta_register.register(BCAST_FUNC_OP) 16 def binary_elementwise_meta_info(*args, **kwargs) -> Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]: 17 """Meta information generator for binary elementwise operations 18 NOTE: Some of the binary elementwise operations will discard the input activation after computation, as they 19 don't need those tensors for back propagation, for example, if there are two tensors being sent for `torch.add`, 20 they will be discarded right after add operation is done. We create a simple API in `ShardMetaInfo` class to identify 21 this behavior, it is critical for better memory estimation. 22 23 Returns: 24 Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]: compute cost, memory cost and forward inputs 25 """ 26 27 input_op_data = [arg for arg in args if arg.type != OperationDataType.OUTPUT] 28 output_op_data = next(filter(lambda arg: arg.type == OperationDataType.OUTPUT, args)) 29 30 # construct forward args for flop mapping 31 fwd_in_args = [opdata.data for opdata in input_op_data] 32 fwd_out_args = [output_op_data.data] 33 34 # calculate cost 35 36 # calculate compute cost 37 # NOTE: we set bwd_compute_cost two times of fwd_compute_cost in this case 38 fwd_compute_cost = flop_mapping[torch.ops.aten.add.Tensor](fwd_in_args, fwd_out_args) 39 bwd_compute_cost = fwd_compute_cost * 2 40 compute_cost = TrainCycleItem(fwd=fwd_compute_cost, bwd=bwd_compute_cost, total=fwd_compute_cost + bwd_compute_cost) 41 42 # calculate memory cost 43 param_mem_cost = activation_size([arg.data for arg in input_op_data if arg.type == OperationDataType.PARAM]) 44 fwd_mem_cost = MemoryCost( 45 activation=activation_size(output_op_data.data), 46 parameter=param_mem_cost, 47 ) 48 bwd_mem_cost = MemoryCost( 49 activation=activation_size(fwd_in_args), 50 parameter=param_mem_cost, 51 ) 52 53 # total cost 54 total_mem_cost = MemoryCost( 55 activation=fwd_mem_cost.activation + bwd_mem_cost.activation, 56 parameter=fwd_mem_cost.parameter + bwd_mem_cost.parameter, 57 ) 58 59 memory_cost = TrainCycleItem(fwd=fwd_mem_cost, bwd=bwd_mem_cost, total=total_mem_cost) 60 61 # store fwd_in, fwd_buffer, fwd_out 62 fwd_in = [] 63 fwd_buffer = [] 64 fwd_out = [torch.zeros_like(output_op_data.data, device="meta")] 65 66 return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out 67 [end of colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py b/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py --- a/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py +++ b/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py @@ -4,9 +4,9 @@ from colossalai._analyzer._subclasses.flop_tensor import flop_mapping from colossalai._analyzer.fx.node_util import compute_size_in_bytes as activation_size +from colossalai.auto_parallel.tensor_shard.constants import BCAST_FUNC_OP from colossalai.auto_parallel.tensor_shard.sharding_strategy import MemoryCost, OperationDataType, TrainCycleItem -from ..constants import BCAST_FUNC_OP from ..registry import meta_register __all__ = ["binary_elementwise_meta_info"]
{"golden_diff": "diff --git a/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py b/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py\n--- a/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py\n+++ b/colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py\n@@ -4,9 +4,9 @@\n \n from colossalai._analyzer._subclasses.flop_tensor import flop_mapping\n from colossalai._analyzer.fx.node_util import compute_size_in_bytes as activation_size\n+from colossalai.auto_parallel.tensor_shard.constants import BCAST_FUNC_OP\n from colossalai.auto_parallel.tensor_shard.sharding_strategy import MemoryCost, OperationDataType, TrainCycleItem\n \n-from ..constants import BCAST_FUNC_OP\n from ..registry import meta_register\n \n __all__ = [\"binary_elementwise_meta_info\"]\n", "issue": "[BUG]: Wrong import in ColossalAuto's meta_registry/binary_elementwise_ops.py\n### \ud83d\udc1b Describe the bug\n\n# Problem description\r\n\r\nThe file `colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py` contains the following line: \r\n\r\n```python\r\nfrom ..constants import BCAST_FUNC_OP\r\n```\r\n\r\nHowever, the file `colossalai/auto_parallel/meta_profiler/constants.py` which this import refers to does not contain any `BCAST_FUNC_OP`. This leads to an `ImportError` when running ColossalAuto in release 0.3.3 and newer. \r\n\r\nThis constant can be found in the file `colossalai/auto_parallel/tensor_shard/constants.py`. The last commit to `colossalai/auto_parallel/meta_profiler/constants.py` (commit ID `079bf3cb`) removes the import of tensor_shard's `constants.py` from meta_profiler's `constants.py` (seemingly due to an automated refactoring).\r\n\r\n# Solution\r\n\r\nSince no other file in the `meta_registry` module uses constants from the `tensor_shard/constants.py` and to avoid automated removal of \"unused\" imports in the future, the import statement in question in above-mentioned `binary_elementwise_ops.py` could be changed to: \r\n\r\n```python\r\nfrom colossalai.auto_parallel.tensor_shard.constants import BCAST_FUNC_OP\r\n```\n\n### Environment\n\n- Python 3.8\r\n- Torch 1.12.0\r\n- no CUDA\n", "before_files": [{"content": "from typing import List, Tuple\n\nimport torch\n\nfrom colossalai._analyzer._subclasses.flop_tensor import flop_mapping\nfrom colossalai._analyzer.fx.node_util import compute_size_in_bytes as activation_size\nfrom colossalai.auto_parallel.tensor_shard.sharding_strategy import MemoryCost, OperationDataType, TrainCycleItem\n\nfrom ..constants import BCAST_FUNC_OP\nfrom ..registry import meta_register\n\n__all__ = [\"binary_elementwise_meta_info\"]\n\n\n@meta_register.register(BCAST_FUNC_OP)\ndef binary_elementwise_meta_info(*args, **kwargs) -> Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]:\n \"\"\"Meta information generator for binary elementwise operations\n NOTE: Some of the binary elementwise operations will discard the input activation after computation, as they\n don't need those tensors for back propagation, for example, if there are two tensors being sent for `torch.add`,\n they will be discarded right after add operation is done. We create a simple API in `ShardMetaInfo` class to identify\n this behavior, it is critical for better memory estimation.\n\n Returns:\n Tuple[TrainCycleItem, TrainCycleItem, List[torch.Tensor]]: compute cost, memory cost and forward inputs\n \"\"\"\n\n input_op_data = [arg for arg in args if arg.type != OperationDataType.OUTPUT]\n output_op_data = next(filter(lambda arg: arg.type == OperationDataType.OUTPUT, args))\n\n # construct forward args for flop mapping\n fwd_in_args = [opdata.data for opdata in input_op_data]\n fwd_out_args = [output_op_data.data]\n\n # calculate cost\n\n # calculate compute cost\n # NOTE: we set bwd_compute_cost two times of fwd_compute_cost in this case\n fwd_compute_cost = flop_mapping[torch.ops.aten.add.Tensor](fwd_in_args, fwd_out_args)\n bwd_compute_cost = fwd_compute_cost * 2\n compute_cost = TrainCycleItem(fwd=fwd_compute_cost, bwd=bwd_compute_cost, total=fwd_compute_cost + bwd_compute_cost)\n\n # calculate memory cost\n param_mem_cost = activation_size([arg.data for arg in input_op_data if arg.type == OperationDataType.PARAM])\n fwd_mem_cost = MemoryCost(\n activation=activation_size(output_op_data.data),\n parameter=param_mem_cost,\n )\n bwd_mem_cost = MemoryCost(\n activation=activation_size(fwd_in_args),\n parameter=param_mem_cost,\n )\n\n # total cost\n total_mem_cost = MemoryCost(\n activation=fwd_mem_cost.activation + bwd_mem_cost.activation,\n parameter=fwd_mem_cost.parameter + bwd_mem_cost.parameter,\n )\n\n memory_cost = TrainCycleItem(fwd=fwd_mem_cost, bwd=bwd_mem_cost, total=total_mem_cost)\n\n # store fwd_in, fwd_buffer, fwd_out\n fwd_in = []\n fwd_buffer = []\n fwd_out = [torch.zeros_like(output_op_data.data, device=\"meta\")]\n\n return compute_cost, memory_cost, fwd_in, fwd_buffer, fwd_out\n", "path": "colossalai/auto_parallel/meta_profiler/meta_registry/binary_elementwise_ops.py"}]}
1,656
190
gh_patches_debug_20586
rasdani/github-patches
git_diff
TheAlgorithms__Python-9178
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> sorts/random_normal_distribution_quicksort.py has no tests ### Repository commit 3 ### Python version (python --version) Python 3.11.5 ### Dependencies version (pip freeze) Numpy ### Expected behavior Tests. ### Actual behavior No tests. </issue> <code> [start of sorts/random_pivot_quick_sort.py] 1 """ 2 Picks the random index as the pivot 3 """ 4 import random 5 6 7 def partition(a, left_index, right_index): 8 pivot = a[left_index] 9 i = left_index + 1 10 for j in range(left_index + 1, right_index): 11 if a[j] < pivot: 12 a[j], a[i] = a[i], a[j] 13 i += 1 14 a[left_index], a[i - 1] = a[i - 1], a[left_index] 15 return i - 1 16 17 18 def quick_sort_random(a, left, right): 19 if left < right: 20 pivot = random.randint(left, right - 1) 21 a[pivot], a[left] = ( 22 a[left], 23 a[pivot], 24 ) # switches the pivot with the left most bound 25 pivot_index = partition(a, left, right) 26 quick_sort_random( 27 a, left, pivot_index 28 ) # recursive quicksort to the left of the pivot point 29 quick_sort_random( 30 a, pivot_index + 1, right 31 ) # recursive quicksort to the right of the pivot point 32 33 34 def main(): 35 user_input = input("Enter numbers separated by a comma:\n").strip() 36 arr = [int(item) for item in user_input.split(",")] 37 38 quick_sort_random(arr, 0, len(arr)) 39 40 print(arr) 41 42 43 if __name__ == "__main__": 44 main() 45 [end of sorts/random_pivot_quick_sort.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/sorts/random_pivot_quick_sort.py b/sorts/random_pivot_quick_sort.py deleted file mode 100644 --- a/sorts/random_pivot_quick_sort.py +++ /dev/null @@ -1,44 +0,0 @@ -""" -Picks the random index as the pivot -""" -import random - - -def partition(a, left_index, right_index): - pivot = a[left_index] - i = left_index + 1 - for j in range(left_index + 1, right_index): - if a[j] < pivot: - a[j], a[i] = a[i], a[j] - i += 1 - a[left_index], a[i - 1] = a[i - 1], a[left_index] - return i - 1 - - -def quick_sort_random(a, left, right): - if left < right: - pivot = random.randint(left, right - 1) - a[pivot], a[left] = ( - a[left], - a[pivot], - ) # switches the pivot with the left most bound - pivot_index = partition(a, left, right) - quick_sort_random( - a, left, pivot_index - ) # recursive quicksort to the left of the pivot point - quick_sort_random( - a, pivot_index + 1, right - ) # recursive quicksort to the right of the pivot point - - -def main(): - user_input = input("Enter numbers separated by a comma:\n").strip() - arr = [int(item) for item in user_input.split(",")] - - quick_sort_random(arr, 0, len(arr)) - - print(arr) - - -if __name__ == "__main__": - main()
{"golden_diff": "diff --git a/sorts/random_pivot_quick_sort.py b/sorts/random_pivot_quick_sort.py\ndeleted file mode 100644\n--- a/sorts/random_pivot_quick_sort.py\n+++ /dev/null\n@@ -1,44 +0,0 @@\n-\"\"\"\r\n-Picks the random index as the pivot\r\n-\"\"\"\r\n-import random\r\n-\r\n-\r\n-def partition(a, left_index, right_index):\r\n- pivot = a[left_index]\r\n- i = left_index + 1\r\n- for j in range(left_index + 1, right_index):\r\n- if a[j] < pivot:\r\n- a[j], a[i] = a[i], a[j]\r\n- i += 1\r\n- a[left_index], a[i - 1] = a[i - 1], a[left_index]\r\n- return i - 1\r\n-\r\n-\r\n-def quick_sort_random(a, left, right):\r\n- if left < right:\r\n- pivot = random.randint(left, right - 1)\r\n- a[pivot], a[left] = (\r\n- a[left],\r\n- a[pivot],\r\n- ) # switches the pivot with the left most bound\r\n- pivot_index = partition(a, left, right)\r\n- quick_sort_random(\r\n- a, left, pivot_index\r\n- ) # recursive quicksort to the left of the pivot point\r\n- quick_sort_random(\r\n- a, pivot_index + 1, right\r\n- ) # recursive quicksort to the right of the pivot point\r\n-\r\n-\r\n-def main():\r\n- user_input = input(\"Enter numbers separated by a comma:\\n\").strip()\r\n- arr = [int(item) for item in user_input.split(\",\")]\r\n-\r\n- quick_sort_random(arr, 0, len(arr))\r\n-\r\n- print(arr)\r\n-\r\n-\r\n-if __name__ == \"__main__\":\r\n- main()\n", "issue": "sorts/random_normal_distribution_quicksort.py has no tests\n### Repository commit\n\n3\n\n### Python version (python --version)\n\nPython 3.11.5\n\n### Dependencies version (pip freeze)\n\nNumpy\n\n### Expected behavior\n\nTests.\n\n### Actual behavior\n\nNo tests.\n", "before_files": [{"content": "\"\"\"\r\nPicks the random index as the pivot\r\n\"\"\"\r\nimport random\r\n\r\n\r\ndef partition(a, left_index, right_index):\r\n pivot = a[left_index]\r\n i = left_index + 1\r\n for j in range(left_index + 1, right_index):\r\n if a[j] < pivot:\r\n a[j], a[i] = a[i], a[j]\r\n i += 1\r\n a[left_index], a[i - 1] = a[i - 1], a[left_index]\r\n return i - 1\r\n\r\n\r\ndef quick_sort_random(a, left, right):\r\n if left < right:\r\n pivot = random.randint(left, right - 1)\r\n a[pivot], a[left] = (\r\n a[left],\r\n a[pivot],\r\n ) # switches the pivot with the left most bound\r\n pivot_index = partition(a, left, right)\r\n quick_sort_random(\r\n a, left, pivot_index\r\n ) # recursive quicksort to the left of the pivot point\r\n quick_sort_random(\r\n a, pivot_index + 1, right\r\n ) # recursive quicksort to the right of the pivot point\r\n\r\n\r\ndef main():\r\n user_input = input(\"Enter numbers separated by a comma:\\n\").strip()\r\n arr = [int(item) for item in user_input.split(\",\")]\r\n\r\n quick_sort_random(arr, 0, len(arr))\r\n\r\n print(arr)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n", "path": "sorts/random_pivot_quick_sort.py"}]}
992
402
gh_patches_debug_22354
rasdani/github-patches
git_diff
sublimelsp__LSP-491
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> LSP needs a LspJumpBackCommand When executing LSP goto definition, and then executing the built in `jump_back` command, the cursor won't be placed in the previous place, as I expect. To fix this, we can introduce a `LspJumpBackCommand`. We can do the same thing as TernForSublime [did](https://github.com/ternjs/tern_for_sublime/blob/91a27a39b1b0a33a9043aa685e1ee48c64a58274/tern.py#L564). </issue> <code> [start of plugin/definition.py] 1 import sublime 2 3 from .core.registry import client_for_view, LspTextCommand 4 from .core.protocol import Request, Point 5 from .core.documents import get_document_position, get_position, is_at_word 6 from .core.url import uri_to_filename 7 from .core.logging import debug 8 try: 9 from typing import List, Dict, Optional, Any 10 assert List and Dict and Optional and Any 11 except ImportError: 12 pass 13 14 15 class LspSymbolDefinitionCommand(LspTextCommand): 16 def __init__(self, view): 17 super().__init__(view) 18 19 def is_enabled(self, event=None): 20 if self.has_client_with_capability('definitionProvider'): 21 return is_at_word(self.view, event) 22 return False 23 24 def run(self, edit, event=None) -> None: 25 client = client_for_view(self.view) 26 if client: 27 pos = get_position(self.view, event) 28 document_position = get_document_position(self.view, pos) 29 if document_position: 30 request = Request.definition(document_position) 31 client.send_request( 32 request, lambda response: self.handle_response(response, pos)) 33 34 def handle_response(self, response: 'Optional[Any]', position) -> None: 35 window = sublime.active_window() 36 if response: 37 location = response if isinstance(response, dict) else response[0] 38 file_path = uri_to_filename(location.get("uri")) 39 start = Point.from_lsp(location['range']['start']) 40 file_location = "{}:{}:{}".format(file_path, start.row + 1, start.col + 1) 41 debug("opening location", location) 42 window.open_file(file_location, sublime.ENCODED_POSITION) 43 # TODO: can add region here. 44 else: 45 window.run_command("goto_definition") 46 47 def want_event(self): 48 return True 49 [end of plugin/definition.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/plugin/definition.py b/plugin/definition.py --- a/plugin/definition.py +++ b/plugin/definition.py @@ -5,6 +5,7 @@ from .core.documents import get_document_position, get_position, is_at_word from .core.url import uri_to_filename from .core.logging import debug +from Default.history_list import get_jump_history_for_view # type: ignore try: from typing import List, Dict, Optional, Any assert List and Dict and Optional and Any @@ -34,6 +35,9 @@ def handle_response(self, response: 'Optional[Any]', position) -> None: window = sublime.active_window() if response: + # save to jump back history + get_jump_history_for_view(self.view).push_selection(self.view) + location = response if isinstance(response, dict) else response[0] file_path = uri_to_filename(location.get("uri")) start = Point.from_lsp(location['range']['start'])
{"golden_diff": "diff --git a/plugin/definition.py b/plugin/definition.py\n--- a/plugin/definition.py\n+++ b/plugin/definition.py\n@@ -5,6 +5,7 @@\n from .core.documents import get_document_position, get_position, is_at_word\n from .core.url import uri_to_filename\n from .core.logging import debug\n+from Default.history_list import get_jump_history_for_view # type: ignore\n try:\n from typing import List, Dict, Optional, Any\n assert List and Dict and Optional and Any\n@@ -34,6 +35,9 @@\n def handle_response(self, response: 'Optional[Any]', position) -> None:\n window = sublime.active_window()\n if response:\n+ # save to jump back history\n+ get_jump_history_for_view(self.view).push_selection(self.view)\n+\n location = response if isinstance(response, dict) else response[0]\n file_path = uri_to_filename(location.get(\"uri\"))\n start = Point.from_lsp(location['range']['start'])\n", "issue": "LSP needs a LspJumpBackCommand\nWhen executing LSP goto definition, \r\nand then executing the built in `jump_back` command, \r\nthe cursor won't be placed in the previous place, as I expect.\r\n\r\nTo fix this, we can introduce a `LspJumpBackCommand`.\r\nWe can do the same thing as TernForSublime [did](https://github.com/ternjs/tern_for_sublime/blob/91a27a39b1b0a33a9043aa685e1ee48c64a58274/tern.py#L564). \n", "before_files": [{"content": "import sublime\n\nfrom .core.registry import client_for_view, LspTextCommand\nfrom .core.protocol import Request, Point\nfrom .core.documents import get_document_position, get_position, is_at_word\nfrom .core.url import uri_to_filename\nfrom .core.logging import debug\ntry:\n from typing import List, Dict, Optional, Any\n assert List and Dict and Optional and Any\nexcept ImportError:\n pass\n\n\nclass LspSymbolDefinitionCommand(LspTextCommand):\n def __init__(self, view):\n super().__init__(view)\n\n def is_enabled(self, event=None):\n if self.has_client_with_capability('definitionProvider'):\n return is_at_word(self.view, event)\n return False\n\n def run(self, edit, event=None) -> None:\n client = client_for_view(self.view)\n if client:\n pos = get_position(self.view, event)\n document_position = get_document_position(self.view, pos)\n if document_position:\n request = Request.definition(document_position)\n client.send_request(\n request, lambda response: self.handle_response(response, pos))\n\n def handle_response(self, response: 'Optional[Any]', position) -> None:\n window = sublime.active_window()\n if response:\n location = response if isinstance(response, dict) else response[0]\n file_path = uri_to_filename(location.get(\"uri\"))\n start = Point.from_lsp(location['range']['start'])\n file_location = \"{}:{}:{}\".format(file_path, start.row + 1, start.col + 1)\n debug(\"opening location\", location)\n window.open_file(file_location, sublime.ENCODED_POSITION)\n # TODO: can add region here.\n else:\n window.run_command(\"goto_definition\")\n\n def want_event(self):\n return True\n", "path": "plugin/definition.py"}]}
1,139
216
gh_patches_debug_6705
rasdani/github-patches
git_diff
ansible__molecule-2826
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Docs contains template option for init role that actually does not exists There are option documented here https://molecule.readthedocs.io/en/latest/usage.html#cmdoption-molecule-init-role-foo-template-path-arg-molecule ``` % molecule --version molecule 3.0.8 ansible==2.9.11 python==3.8 ``` but missed in command line ``` % molecule init role --template 111 Usage: molecule init role [OPTIONS] ROLE_NAME Try 'molecule init role --help' for help. Error: no such option: --template zsh: exit 2 molecule init role --template 111 ``` </issue> <code> [start of molecule/command/init/role.py] 1 # Copyright (c) 2015-2018 Cisco Systems, Inc. 2 # 3 # Permission is hereby granted, free of charge, to any person obtaining a copy 4 # of this software and associated documentation files (the "Software"), to 5 # deal in the Software without restriction, including without limitation the 6 # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or 7 # sell copies of the Software, and to permit persons to whom the Software is 8 # furnished to do so, subject to the following conditions: 9 # 10 # The above copyright notice and this permission notice shall be included in 11 # all copies or substantial portions of the Software. 12 # 13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 18 # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER 19 # DEALINGS IN THE SOFTWARE. 20 """Base class used by init role command.""" 21 22 import os 23 import subprocess 24 from subprocess import check_output 25 26 import click 27 28 from molecule import api, logger, util 29 from molecule.command import base as command_base 30 from molecule.command.init import base 31 from molecule.config import DEFAULT_DRIVER 32 33 LOG = logger.get_logger(__name__) 34 35 36 class Role(base.Base): 37 """ 38 Init Role Command Class. 39 40 .. program:: molecule init role foo 41 42 .. option:: molecule init role foo 43 44 Initialize a new role. 45 46 .. program:: molecule init role foo --template path 47 48 .. option:: molecule init role foo --template path 49 50 Initialize a new role using ansible-galaxy and include default 51 molecule directory. Please refer to the ``init scenario`` 52 command in order to generate a custom ``molecule`` scenario. 53 """ 54 55 def __init__(self, command_args): 56 """Construct Role.""" 57 self._command_args = command_args 58 59 def execute(self): 60 """ 61 Execute the actions necessary to perform a `molecule init role` and \ 62 returns None. 63 64 :return: None 65 """ 66 role_name = self._command_args["role_name"] 67 role_directory = os.getcwd() 68 msg = "Initializing new role {}...".format(role_name) 69 LOG.info(msg) 70 71 if os.path.isdir(role_name): 72 msg = ("The directory {} exists. " "Cannot create new role.").format( 73 role_name 74 ) 75 util.sysexit_with_message(msg) 76 77 try: 78 cmd = ["ansible-galaxy", "init", "-v", "--offline", role_name] 79 check_output(cmd, stderr=subprocess.STDOUT, universal_newlines=True) 80 except Exception as e: 81 util.sysexit_with_message( 82 "Galaxy failed to create role: %s: %s" % (e, e.output) 83 ) 84 85 scenario_base_directory = os.path.join(role_directory, role_name) 86 templates = [ 87 api.drivers()[self._command_args["driver_name"]].template_dir(), 88 api.verifiers()[self._command_args["verifier_name"]].template_dir(), 89 ] 90 self._process_templates("molecule", self._command_args, role_directory) 91 for template in templates: 92 self._process_templates( 93 template, self._command_args, scenario_base_directory 94 ) 95 96 role_directory = os.path.join(role_directory, role_name) 97 msg = "Initialized role in {} successfully.".format(role_directory) 98 LOG.success(msg) 99 100 101 @command_base.click_command_ex() 102 @click.pass_context 103 @click.option( 104 "--dependency-name", 105 type=click.Choice(["galaxy"]), 106 default="galaxy", 107 help="Name of dependency to initialize. (galaxy)", 108 ) 109 @click.option( 110 "--driver-name", 111 "-d", 112 type=click.Choice([str(s) for s in api.drivers()]), 113 default=DEFAULT_DRIVER, 114 help=f"Name of driver to initialize. ({DEFAULT_DRIVER})", 115 ) 116 @click.option( 117 "--lint-name", 118 type=click.Choice(["yamllint"]), 119 default="yamllint", 120 help="Name of lint to initialize. (yamllint)", 121 ) 122 @click.option( 123 "--provisioner-name", 124 type=click.Choice(["ansible"]), 125 default="ansible", 126 help="Name of provisioner to initialize. (ansible)", 127 ) 128 @click.argument("ROLE-NAME", required=True) 129 @click.option( 130 "--verifier-name", 131 type=click.Choice([str(s) for s in api.verifiers()]), 132 default="ansible", 133 help="Name of verifier to initialize. (ansible)", 134 ) 135 def role( 136 ctx, 137 dependency_name, 138 driver_name, 139 lint_name, 140 provisioner_name, 141 role_name, 142 verifier_name, 143 ): # pragma: no cover 144 """Initialize a new role for use with Molecule.""" 145 command_args = { 146 "dependency_name": dependency_name, 147 "driver_name": driver_name, 148 "lint_name": lint_name, 149 "provisioner_name": provisioner_name, 150 "role_name": role_name, 151 "scenario_name": command_base.MOLECULE_DEFAULT_SCENARIO_NAME, 152 "subcommand": __name__, 153 "verifier_name": verifier_name, 154 } 155 156 r = Role(command_args) 157 r.execute() 158 [end of molecule/command/init/role.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/molecule/command/init/role.py b/molecule/command/init/role.py --- a/molecule/command/init/role.py +++ b/molecule/command/init/role.py @@ -43,10 +43,6 @@ Initialize a new role. - .. program:: molecule init role foo --template path - - .. option:: molecule init role foo --template path - Initialize a new role using ansible-galaxy and include default molecule directory. Please refer to the ``init scenario`` command in order to generate a custom ``molecule`` scenario.
{"golden_diff": "diff --git a/molecule/command/init/role.py b/molecule/command/init/role.py\n--- a/molecule/command/init/role.py\n+++ b/molecule/command/init/role.py\n@@ -43,10 +43,6 @@\n \n Initialize a new role.\n \n- .. program:: molecule init role foo --template path\n-\n- .. option:: molecule init role foo --template path\n-\n Initialize a new role using ansible-galaxy and include default\n molecule directory. Please refer to the ``init scenario``\n command in order to generate a custom ``molecule`` scenario.\n", "issue": "Docs contains template option for init role that actually does not exists\nThere are option documented here https://molecule.readthedocs.io/en/latest/usage.html#cmdoption-molecule-init-role-foo-template-path-arg-molecule\r\n```\r\n% molecule --version \r\nmolecule 3.0.8\r\n ansible==2.9.11 python==3.8\r\n\r\n```\r\nbut missed in command line\r\n```\r\n% molecule init role --template 111\r\nUsage: molecule init role [OPTIONS] ROLE_NAME\r\nTry 'molecule init role --help' for help.\r\n\r\nError: no such option: --template\r\nzsh: exit 2 molecule init role --template 111\r\n```\n", "before_files": [{"content": "# Copyright (c) 2015-2018 Cisco Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to\n# deal in the Software without restriction, including without limitation the\n# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n# sell copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.\n\"\"\"Base class used by init role command.\"\"\"\n\nimport os\nimport subprocess\nfrom subprocess import check_output\n\nimport click\n\nfrom molecule import api, logger, util\nfrom molecule.command import base as command_base\nfrom molecule.command.init import base\nfrom molecule.config import DEFAULT_DRIVER\n\nLOG = logger.get_logger(__name__)\n\n\nclass Role(base.Base):\n \"\"\"\n Init Role Command Class.\n\n .. program:: molecule init role foo\n\n .. option:: molecule init role foo\n\n Initialize a new role.\n\n .. program:: molecule init role foo --template path\n\n .. option:: molecule init role foo --template path\n\n Initialize a new role using ansible-galaxy and include default\n molecule directory. Please refer to the ``init scenario``\n command in order to generate a custom ``molecule`` scenario.\n \"\"\"\n\n def __init__(self, command_args):\n \"\"\"Construct Role.\"\"\"\n self._command_args = command_args\n\n def execute(self):\n \"\"\"\n Execute the actions necessary to perform a `molecule init role` and \\\n returns None.\n\n :return: None\n \"\"\"\n role_name = self._command_args[\"role_name\"]\n role_directory = os.getcwd()\n msg = \"Initializing new role {}...\".format(role_name)\n LOG.info(msg)\n\n if os.path.isdir(role_name):\n msg = (\"The directory {} exists. \" \"Cannot create new role.\").format(\n role_name\n )\n util.sysexit_with_message(msg)\n\n try:\n cmd = [\"ansible-galaxy\", \"init\", \"-v\", \"--offline\", role_name]\n check_output(cmd, stderr=subprocess.STDOUT, universal_newlines=True)\n except Exception as e:\n util.sysexit_with_message(\n \"Galaxy failed to create role: %s: %s\" % (e, e.output)\n )\n\n scenario_base_directory = os.path.join(role_directory, role_name)\n templates = [\n api.drivers()[self._command_args[\"driver_name\"]].template_dir(),\n api.verifiers()[self._command_args[\"verifier_name\"]].template_dir(),\n ]\n self._process_templates(\"molecule\", self._command_args, role_directory)\n for template in templates:\n self._process_templates(\n template, self._command_args, scenario_base_directory\n )\n\n role_directory = os.path.join(role_directory, role_name)\n msg = \"Initialized role in {} successfully.\".format(role_directory)\n LOG.success(msg)\n\n\n@command_base.click_command_ex()\[email protected]_context\[email protected](\n \"--dependency-name\",\n type=click.Choice([\"galaxy\"]),\n default=\"galaxy\",\n help=\"Name of dependency to initialize. (galaxy)\",\n)\[email protected](\n \"--driver-name\",\n \"-d\",\n type=click.Choice([str(s) for s in api.drivers()]),\n default=DEFAULT_DRIVER,\n help=f\"Name of driver to initialize. ({DEFAULT_DRIVER})\",\n)\[email protected](\n \"--lint-name\",\n type=click.Choice([\"yamllint\"]),\n default=\"yamllint\",\n help=\"Name of lint to initialize. (yamllint)\",\n)\[email protected](\n \"--provisioner-name\",\n type=click.Choice([\"ansible\"]),\n default=\"ansible\",\n help=\"Name of provisioner to initialize. (ansible)\",\n)\[email protected](\"ROLE-NAME\", required=True)\[email protected](\n \"--verifier-name\",\n type=click.Choice([str(s) for s in api.verifiers()]),\n default=\"ansible\",\n help=\"Name of verifier to initialize. (ansible)\",\n)\ndef role(\n ctx,\n dependency_name,\n driver_name,\n lint_name,\n provisioner_name,\n role_name,\n verifier_name,\n): # pragma: no cover\n \"\"\"Initialize a new role for use with Molecule.\"\"\"\n command_args = {\n \"dependency_name\": dependency_name,\n \"driver_name\": driver_name,\n \"lint_name\": lint_name,\n \"provisioner_name\": provisioner_name,\n \"role_name\": role_name,\n \"scenario_name\": command_base.MOLECULE_DEFAULT_SCENARIO_NAME,\n \"subcommand\": __name__,\n \"verifier_name\": verifier_name,\n }\n\n r = Role(command_args)\n r.execute()\n", "path": "molecule/command/init/role.py"}]}
2,235
129
gh_patches_debug_37295
rasdani/github-patches
git_diff
DataDog__dd-agent-2022
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [go_expvar] add ability to override default metric namespace By default, the go_expvar check uses the `go_expvar` namespace. However, I'd much rather have these metrics available under the same namespace as my application metrics. To accomplish this, I'm thinking it would just be a `namespace` key within `init_config` that allows you to override the default value. Looking at the `go_expvar` package it seems fairly simple, however I thought I'd open an issue before trying to submit a patch. </issue> <code> [start of checks.d/go_expvar.py] 1 # stdlib 2 from collections import defaultdict 3 import re 4 5 # 3rd party 6 import requests 7 8 # project 9 from checks import AgentCheck 10 11 DEFAULT_MAX_METRICS = 350 12 PATH = "path" 13 ALIAS = "alias" 14 TYPE = "type" 15 TAGS = "tags" 16 17 GAUGE = "gauge" 18 RATE = "rate" 19 DEFAULT_TYPE = GAUGE 20 21 22 SUPPORTED_TYPES = { 23 GAUGE: AgentCheck.gauge, 24 RATE: AgentCheck.rate, 25 } 26 27 METRIC_NAMESPACE = "go_expvar" 28 29 30 # See http://golang.org/pkg/runtime/#MemStats 31 DEFAULT_GAUGE_MEMSTAT_METRICS = [ 32 # General statistics 33 "Alloc", "TotalAlloc", 34 35 # Main allocation heap statistics 36 "HeapAlloc", "HeapSys", "HeapIdle", "HeapInuse", 37 "HeapReleased", "HeapObjects", 38 39 ] 40 41 DEFAULT_RATE_MEMSTAT_METRICS = [ 42 # General statistics 43 "Lookups", "Mallocs", "Frees", 44 45 # Garbage collector statistics 46 "PauseTotalNs", "NumGC", 47 ] 48 49 DEFAULT_METRICS = [{PATH: "memstats/%s" % path, TYPE: GAUGE} for path in DEFAULT_GAUGE_MEMSTAT_METRICS] +\ 50 [{PATH: "memstats/%s" % path, TYPE: RATE} for path in DEFAULT_RATE_MEMSTAT_METRICS] 51 52 53 class GoExpvar(AgentCheck): 54 55 def __init__(self, name, init_config, agentConfig, instances=None): 56 AgentCheck.__init__(self, name, init_config, agentConfig, instances) 57 self._last_gc_count = defaultdict(int) 58 59 def _get_data(self, url): 60 r = requests.get(url) 61 r.raise_for_status() 62 return r.json() 63 64 def _load(self, instance): 65 url = instance.get('expvar_url') 66 if not url: 67 raise Exception('GoExpvar instance missing "expvar_url" value.') 68 69 tags = instance.get('tags', []) 70 tags.append("expvar_url:%s" % url) 71 data = self._get_data(url) 72 metrics = DEFAULT_METRICS + instance.get("metrics", []) 73 max_metrics = instance.get("max_returned_metrics", DEFAULT_MAX_METRICS) 74 return data, tags, metrics, max_metrics, url 75 76 def get_gc_collection_histogram(self, data, tags, url): 77 num_gc = data.get("memstats", {}).get("NumGC") 78 pause_hist = data.get("memstats", {}).get("PauseNs") 79 last_gc_count = self._last_gc_count[url] 80 if last_gc_count == num_gc: 81 # No GC has run. Do nothing 82 return 83 start = last_gc_count % 256 84 end = (num_gc + 255) % 256 + 1 85 if start < end: 86 values = pause_hist[start:end] 87 else: 88 values = pause_hist[start:] + pause_hist[:end] 89 90 self._last_gc_count[url] = num_gc 91 92 for value in values: 93 self.histogram( 94 self.normalize("memstats.PauseNs", METRIC_NAMESPACE, fix_case=True), 95 value, tags=tags) 96 97 def check(self, instance): 98 data, tags, metrics, max_metrics, url = self._load(instance) 99 self.get_gc_collection_histogram(data, tags, url) 100 self.parse_expvar_data(data, tags, metrics, max_metrics) 101 102 def parse_expvar_data(self, data, tags, metrics, max_metrics): 103 ''' 104 Report all the metrics based on the configuration in instance 105 If a metric is not well configured or is not present in the payload, 106 continue processing metrics but log the information to the info page 107 ''' 108 count = 0 109 for metric in metrics: 110 path = metric.get(PATH) 111 metric_type = metric.get(TYPE, DEFAULT_TYPE) 112 metric_tags = list(metric.get(TAGS, [])) 113 metric_tags += tags 114 alias = metric.get(ALIAS) 115 116 if not path: 117 self.warning("Metric %s has no path" % metric) 118 continue 119 120 if metric_type not in SUPPORTED_TYPES: 121 self.warning("Metric type %s not supported for this check" % metric_type) 122 continue 123 124 keys = path.split("/") 125 values = self.deep_get(data, keys) 126 127 if len(values) == 0: 128 self.warning("No results matching path %s" % path) 129 continue 130 131 tag_by_path = alias is not None 132 133 for traversed_path, value in values: 134 actual_path = ".".join(traversed_path) 135 if tag_by_path: 136 metric_tags.append("path:%s" % actual_path) 137 138 metric_name = alias or self.normalize(actual_path, METRIC_NAMESPACE, fix_case=True) 139 140 try: 141 float(value) 142 except ValueError: 143 self.log.warning("Unreportable value for path %s: %s" % (path, value)) 144 continue 145 146 if count >= max_metrics: 147 self.warning("Reporting more metrics than the allowed maximum. " 148 "Please contact [email protected] for more information.") 149 return 150 151 SUPPORTED_TYPES[metric_type](self, metric_name, value, metric_tags) 152 count += 1 153 154 def deep_get(self, content, keys, traversed_path=None): 155 ''' 156 Allow to retrieve content nested inside a several layers deep dict/list 157 158 Examples: -content: { 159 "key1": { 160 "key2" : [ 161 { 162 "name" : "object1", 163 "value" : 42 164 }, 165 { 166 "name" : "object2", 167 "value" : 72 168 } 169 ] 170 } 171 } 172 -keys: ["key1", "key2", "1", "value"] would return [(["key1", "key2", "1", "value"], 72)] 173 -keys: ["key1", "key2", "1", "*"] would return [(["key1", "key2", "1", "value"], 72), (["key1", "key2", "1", "name"], "object2")] 174 -keys: ["key1", "key2", "*", "value"] would return [(["key1", "key2", "1", "value"], 72), (["key1", "key2", "0", "value"], 42)] 175 ''' 176 177 if traversed_path is None: 178 traversed_path = [] 179 180 if keys == []: 181 return [(traversed_path, content)] 182 183 key = keys[0] 184 regex = "".join(["^", key, "$"]) 185 try: 186 key_rex = re.compile(regex) 187 except Exception: 188 self.warning("Cannot compile regex: %s" % regex) 189 return [] 190 191 results = [] 192 for new_key, new_content in self.items(content): 193 if key_rex.match(new_key): 194 results.extend(self.deep_get(new_content, keys[1:], traversed_path + [str(new_key)])) 195 return results 196 197 def items(self, object): 198 if isinstance(object, list): 199 for new_key, new_content in enumerate(object): 200 yield str(new_key), new_content 201 elif isinstance(object, dict): 202 for new_key, new_content in object.iteritems(): 203 yield str(new_key), new_content 204 else: 205 self.log.warning("Could not parse this object, check the json" 206 "served by the expvar") 207 [end of checks.d/go_expvar.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/checks.d/go_expvar.py b/checks.d/go_expvar.py --- a/checks.d/go_expvar.py +++ b/checks.d/go_expvar.py @@ -24,7 +24,7 @@ RATE: AgentCheck.rate, } -METRIC_NAMESPACE = "go_expvar" +DEFAULT_METRIC_NAMESPACE = "go_expvar" # See http://golang.org/pkg/runtime/#MemStats @@ -71,9 +71,10 @@ data = self._get_data(url) metrics = DEFAULT_METRICS + instance.get("metrics", []) max_metrics = instance.get("max_returned_metrics", DEFAULT_MAX_METRICS) - return data, tags, metrics, max_metrics, url + namespace = instance.get('namespace', DEFAULT_METRIC_NAMESPACE) + return data, tags, metrics, max_metrics, url, namespace - def get_gc_collection_histogram(self, data, tags, url): + def get_gc_collection_histogram(self, data, tags, url, namespace): num_gc = data.get("memstats", {}).get("NumGC") pause_hist = data.get("memstats", {}).get("PauseNs") last_gc_count = self._last_gc_count[url] @@ -91,15 +92,15 @@ for value in values: self.histogram( - self.normalize("memstats.PauseNs", METRIC_NAMESPACE, fix_case=True), + self.normalize("memstats.PauseNs", namespace, fix_case=True), value, tags=tags) def check(self, instance): - data, tags, metrics, max_metrics, url = self._load(instance) - self.get_gc_collection_histogram(data, tags, url) - self.parse_expvar_data(data, tags, metrics, max_metrics) + data, tags, metrics, max_metrics, url, namespace = self._load(instance) + self.get_gc_collection_histogram(data, tags, url, namespace) + self.parse_expvar_data(data, tags, metrics, max_metrics, namespace) - def parse_expvar_data(self, data, tags, metrics, max_metrics): + def parse_expvar_data(self, data, tags, metrics, max_metrics, namespace): ''' Report all the metrics based on the configuration in instance If a metric is not well configured or is not present in the payload, @@ -135,7 +136,7 @@ if tag_by_path: metric_tags.append("path:%s" % actual_path) - metric_name = alias or self.normalize(actual_path, METRIC_NAMESPACE, fix_case=True) + metric_name = alias or self.normalize(actual_path, namespace, fix_case=True) try: float(value)
{"golden_diff": "diff --git a/checks.d/go_expvar.py b/checks.d/go_expvar.py\n--- a/checks.d/go_expvar.py\n+++ b/checks.d/go_expvar.py\n@@ -24,7 +24,7 @@\n RATE: AgentCheck.rate,\n }\n \n-METRIC_NAMESPACE = \"go_expvar\"\n+DEFAULT_METRIC_NAMESPACE = \"go_expvar\"\n \n \n # See http://golang.org/pkg/runtime/#MemStats\n@@ -71,9 +71,10 @@\n data = self._get_data(url)\n metrics = DEFAULT_METRICS + instance.get(\"metrics\", [])\n max_metrics = instance.get(\"max_returned_metrics\", DEFAULT_MAX_METRICS)\n- return data, tags, metrics, max_metrics, url\n+ namespace = instance.get('namespace', DEFAULT_METRIC_NAMESPACE)\n+ return data, tags, metrics, max_metrics, url, namespace\n \n- def get_gc_collection_histogram(self, data, tags, url):\n+ def get_gc_collection_histogram(self, data, tags, url, namespace):\n num_gc = data.get(\"memstats\", {}).get(\"NumGC\")\n pause_hist = data.get(\"memstats\", {}).get(\"PauseNs\")\n last_gc_count = self._last_gc_count[url]\n@@ -91,15 +92,15 @@\n \n for value in values:\n self.histogram(\n- self.normalize(\"memstats.PauseNs\", METRIC_NAMESPACE, fix_case=True),\n+ self.normalize(\"memstats.PauseNs\", namespace, fix_case=True),\n value, tags=tags)\n \n def check(self, instance):\n- data, tags, metrics, max_metrics, url = self._load(instance)\n- self.get_gc_collection_histogram(data, tags, url)\n- self.parse_expvar_data(data, tags, metrics, max_metrics)\n+ data, tags, metrics, max_metrics, url, namespace = self._load(instance)\n+ self.get_gc_collection_histogram(data, tags, url, namespace)\n+ self.parse_expvar_data(data, tags, metrics, max_metrics, namespace)\n \n- def parse_expvar_data(self, data, tags, metrics, max_metrics):\n+ def parse_expvar_data(self, data, tags, metrics, max_metrics, namespace):\n '''\n Report all the metrics based on the configuration in instance\n If a metric is not well configured or is not present in the payload,\n@@ -135,7 +136,7 @@\n if tag_by_path:\n metric_tags.append(\"path:%s\" % actual_path)\n \n- metric_name = alias or self.normalize(actual_path, METRIC_NAMESPACE, fix_case=True)\n+ metric_name = alias or self.normalize(actual_path, namespace, fix_case=True)\n \n try:\n float(value)\n", "issue": "[go_expvar] add ability to override default metric namespace\nBy default, the go_expvar check uses the `go_expvar` namespace. However, I'd much rather have these metrics available under the same namespace as my application metrics. To accomplish this, I'm thinking it would just be a `namespace` key within `init_config` that allows you to override the default value.\n\nLooking at the `go_expvar` package it seems fairly simple, however I thought I'd open an issue before trying to submit a patch.\n\n", "before_files": [{"content": "# stdlib\nfrom collections import defaultdict\nimport re\n\n# 3rd party\nimport requests\n\n# project\nfrom checks import AgentCheck\n\nDEFAULT_MAX_METRICS = 350\nPATH = \"path\"\nALIAS = \"alias\"\nTYPE = \"type\"\nTAGS = \"tags\"\n\nGAUGE = \"gauge\"\nRATE = \"rate\"\nDEFAULT_TYPE = GAUGE\n\n\nSUPPORTED_TYPES = {\n GAUGE: AgentCheck.gauge,\n RATE: AgentCheck.rate,\n}\n\nMETRIC_NAMESPACE = \"go_expvar\"\n\n\n# See http://golang.org/pkg/runtime/#MemStats\nDEFAULT_GAUGE_MEMSTAT_METRICS = [\n # General statistics\n \"Alloc\", \"TotalAlloc\",\n\n # Main allocation heap statistics\n \"HeapAlloc\", \"HeapSys\", \"HeapIdle\", \"HeapInuse\",\n \"HeapReleased\", \"HeapObjects\",\n\n]\n\nDEFAULT_RATE_MEMSTAT_METRICS = [\n # General statistics\n \"Lookups\", \"Mallocs\", \"Frees\",\n\n # Garbage collector statistics\n \"PauseTotalNs\", \"NumGC\",\n]\n\nDEFAULT_METRICS = [{PATH: \"memstats/%s\" % path, TYPE: GAUGE} for path in DEFAULT_GAUGE_MEMSTAT_METRICS] +\\\n [{PATH: \"memstats/%s\" % path, TYPE: RATE} for path in DEFAULT_RATE_MEMSTAT_METRICS]\n\n\nclass GoExpvar(AgentCheck):\n\n def __init__(self, name, init_config, agentConfig, instances=None):\n AgentCheck.__init__(self, name, init_config, agentConfig, instances)\n self._last_gc_count = defaultdict(int)\n\n def _get_data(self, url):\n r = requests.get(url)\n r.raise_for_status()\n return r.json()\n\n def _load(self, instance):\n url = instance.get('expvar_url')\n if not url:\n raise Exception('GoExpvar instance missing \"expvar_url\" value.')\n\n tags = instance.get('tags', [])\n tags.append(\"expvar_url:%s\" % url)\n data = self._get_data(url)\n metrics = DEFAULT_METRICS + instance.get(\"metrics\", [])\n max_metrics = instance.get(\"max_returned_metrics\", DEFAULT_MAX_METRICS)\n return data, tags, metrics, max_metrics, url\n\n def get_gc_collection_histogram(self, data, tags, url):\n num_gc = data.get(\"memstats\", {}).get(\"NumGC\")\n pause_hist = data.get(\"memstats\", {}).get(\"PauseNs\")\n last_gc_count = self._last_gc_count[url]\n if last_gc_count == num_gc:\n # No GC has run. Do nothing\n return\n start = last_gc_count % 256\n end = (num_gc + 255) % 256 + 1\n if start < end:\n values = pause_hist[start:end]\n else:\n values = pause_hist[start:] + pause_hist[:end]\n\n self._last_gc_count[url] = num_gc\n\n for value in values:\n self.histogram(\n self.normalize(\"memstats.PauseNs\", METRIC_NAMESPACE, fix_case=True),\n value, tags=tags)\n\n def check(self, instance):\n data, tags, metrics, max_metrics, url = self._load(instance)\n self.get_gc_collection_histogram(data, tags, url)\n self.parse_expvar_data(data, tags, metrics, max_metrics)\n\n def parse_expvar_data(self, data, tags, metrics, max_metrics):\n '''\n Report all the metrics based on the configuration in instance\n If a metric is not well configured or is not present in the payload,\n continue processing metrics but log the information to the info page\n '''\n count = 0\n for metric in metrics:\n path = metric.get(PATH)\n metric_type = metric.get(TYPE, DEFAULT_TYPE)\n metric_tags = list(metric.get(TAGS, []))\n metric_tags += tags\n alias = metric.get(ALIAS)\n\n if not path:\n self.warning(\"Metric %s has no path\" % metric)\n continue\n\n if metric_type not in SUPPORTED_TYPES:\n self.warning(\"Metric type %s not supported for this check\" % metric_type)\n continue\n\n keys = path.split(\"/\")\n values = self.deep_get(data, keys)\n\n if len(values) == 0:\n self.warning(\"No results matching path %s\" % path)\n continue\n\n tag_by_path = alias is not None\n\n for traversed_path, value in values:\n actual_path = \".\".join(traversed_path)\n if tag_by_path:\n metric_tags.append(\"path:%s\" % actual_path)\n\n metric_name = alias or self.normalize(actual_path, METRIC_NAMESPACE, fix_case=True)\n\n try:\n float(value)\n except ValueError:\n self.log.warning(\"Unreportable value for path %s: %s\" % (path, value))\n continue\n\n if count >= max_metrics:\n self.warning(\"Reporting more metrics than the allowed maximum. \"\n \"Please contact [email protected] for more information.\")\n return\n\n SUPPORTED_TYPES[metric_type](self, metric_name, value, metric_tags)\n count += 1\n\n def deep_get(self, content, keys, traversed_path=None):\n '''\n Allow to retrieve content nested inside a several layers deep dict/list\n\n Examples: -content: {\n \"key1\": {\n \"key2\" : [\n {\n \"name\" : \"object1\",\n \"value\" : 42\n },\n {\n \"name\" : \"object2\",\n \"value\" : 72\n }\n ]\n }\n }\n -keys: [\"key1\", \"key2\", \"1\", \"value\"] would return [([\"key1\", \"key2\", \"1\", \"value\"], 72)]\n -keys: [\"key1\", \"key2\", \"1\", \"*\"] would return [([\"key1\", \"key2\", \"1\", \"value\"], 72), ([\"key1\", \"key2\", \"1\", \"name\"], \"object2\")]\n -keys: [\"key1\", \"key2\", \"*\", \"value\"] would return [([\"key1\", \"key2\", \"1\", \"value\"], 72), ([\"key1\", \"key2\", \"0\", \"value\"], 42)]\n '''\n\n if traversed_path is None:\n traversed_path = []\n\n if keys == []:\n return [(traversed_path, content)]\n\n key = keys[0]\n regex = \"\".join([\"^\", key, \"$\"])\n try:\n key_rex = re.compile(regex)\n except Exception:\n self.warning(\"Cannot compile regex: %s\" % regex)\n return []\n\n results = []\n for new_key, new_content in self.items(content):\n if key_rex.match(new_key):\n results.extend(self.deep_get(new_content, keys[1:], traversed_path + [str(new_key)]))\n return results\n\n def items(self, object):\n if isinstance(object, list):\n for new_key, new_content in enumerate(object):\n yield str(new_key), new_content\n elif isinstance(object, dict):\n for new_key, new_content in object.iteritems():\n yield str(new_key), new_content\n else:\n self.log.warning(\"Could not parse this object, check the json\"\n \"served by the expvar\")\n", "path": "checks.d/go_expvar.py"}]}
2,783
593
gh_patches_debug_21569
rasdani/github-patches
git_diff
Lightning-AI__pytorch-lightning-1630
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add support to log hparams and metrics to tensorboard? How can I log metrics (_e.g._ validation loss of best epoch) together with the set of hyperparameters? I have looked through the docs and through the code. It seems like an obvious thing, so maybe I'm just not getting it. Currently, the only way that I found was to extend the logger class: ```python class MyTensorBoardLogger(TensorBoardLogger): def __init__(self, *args, **kwargs): super(MyTensorBoardLogger, self).__init__(*args, **kwargs) def log_hyperparams(self, *args, **kwargs): pass @rank_zero_only def log_hyperparams_metrics(self, params: dict, metrics: dict) -> None: params = self._convert_params(params) exp, ssi, sei = hparams(params, metrics) writer = self.experiment._get_file_writer() writer.add_summary(exp) writer.add_summary(ssi) writer.add_summary(sei) # some alternative should be added self.tags.update(params) ``` And then I'm writing the hparams with metrics in a callback: ``` def on_train_end(self, trainer, module): module.logger.log_hyperparams_metrics(module.hparams, {'val_loss': self.best_val_loss}) ``` But that doesn't seem right. Is there a better way to write some metric together with the hparams as well? #### Environment - OS: Ubuntu18.04 - conda4.8.3 - pytorch-lightning==0.7.1 - torch==1.4.0 </issue> <code> [start of pytorch_lightning/loggers/tensorboard.py] 1 """ 2 TensorBoard 3 ----------- 4 """ 5 6 import csv 7 import os 8 from argparse import Namespace 9 from typing import Optional, Dict, Union, Any 10 from warnings import warn 11 12 import torch 13 from pkg_resources import parse_version 14 from torch.utils.tensorboard import SummaryWriter 15 16 from pytorch_lightning import _logger as log 17 from pytorch_lightning.loggers.base import LightningLoggerBase 18 from pytorch_lightning.utilities import rank_zero_only 19 20 21 class TensorBoardLogger(LightningLoggerBase): 22 r""" 23 Log to local file system in `TensorBoard <https://www.tensorflow.org/tensorboard>`_ format. 24 Implemented using :class:`~torch.utils.tensorboard.SummaryWriter`. Logs are saved to 25 ``os.path.join(save_dir, name, version)``. This is the default logger in Lightning, it comes 26 preinstalled. 27 28 Example: 29 >>> from pytorch_lightning import Trainer 30 >>> from pytorch_lightning.loggers import TensorBoardLogger 31 >>> logger = TensorBoardLogger("tb_logs", name="my_model") 32 >>> trainer = Trainer(logger=logger) 33 34 Args: 35 save_dir: Save directory 36 name: Experiment name. Defaults to ``'default'``. If it is the empty string then no per-experiment 37 subdirectory is used. 38 version: Experiment version. If version is not specified the logger inspects the save 39 directory for existing versions, then automatically assigns the next available version. 40 If it is a string then it is used as the run-specific subdirectory name, 41 otherwise ``'version_${version}'`` is used. 42 \**kwargs: Other arguments are passed directly to the :class:`SummaryWriter` constructor. 43 44 """ 45 NAME_CSV_TAGS = 'meta_tags.csv' 46 47 def __init__(self, 48 save_dir: str, 49 name: Optional[str] = "default", 50 version: Optional[Union[int, str]] = None, 51 **kwargs): 52 super().__init__() 53 self.save_dir = save_dir 54 self._name = name 55 self._version = version 56 57 self._experiment = None 58 self.tags = {} 59 self._kwargs = kwargs 60 61 @property 62 def root_dir(self) -> str: 63 """ 64 Parent directory for all tensorboard checkpoint subdirectories. 65 If the experiment name parameter is ``None`` or the empty string, no experiment subdirectory is used 66 and the checkpoint will be saved in "save_dir/version_dir" 67 """ 68 if self.name is None or len(self.name) == 0: 69 return self.save_dir 70 else: 71 return os.path.join(self.save_dir, self.name) 72 73 @property 74 def log_dir(self) -> str: 75 """ 76 The directory for this run's tensorboard checkpoint. By default, it is named 77 ``'version_${self.version}'`` but it can be overridden by passing a string value 78 for the constructor's version parameter instead of ``None`` or an int. 79 """ 80 # create a pseudo standard path ala test-tube 81 version = self.version if isinstance(self.version, str) else f"version_{self.version}" 82 log_dir = os.path.join(self.root_dir, version) 83 return log_dir 84 85 @property 86 def experiment(self) -> SummaryWriter: 87 r""" 88 Actual tensorboard object. To use TensorBoard features in your 89 :class:`~pytorch_lightning.core.lightning.LightningModule` do the following. 90 91 Example:: 92 93 self.logger.experiment.some_tensorboard_function() 94 95 """ 96 if self._experiment is not None: 97 return self._experiment 98 99 os.makedirs(self.root_dir, exist_ok=True) 100 self._experiment = SummaryWriter(log_dir=self.log_dir, **self._kwargs) 101 return self._experiment 102 103 @rank_zero_only 104 def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None: 105 params = self._convert_params(params) 106 params = self._flatten_dict(params) 107 sanitized_params = self._sanitize_params(params) 108 109 if parse_version(torch.__version__) < parse_version("1.3.0"): 110 warn( 111 f"Hyperparameter logging is not available for Torch version {torch.__version__}." 112 " Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable" 113 " hyperparameter logging." 114 ) 115 else: 116 from torch.utils.tensorboard.summary import hparams 117 exp, ssi, sei = hparams(sanitized_params, {}) 118 writer = self.experiment._get_file_writer() 119 writer.add_summary(exp) 120 writer.add_summary(ssi) 121 writer.add_summary(sei) 122 123 # some alternative should be added 124 self.tags.update(sanitized_params) 125 126 @rank_zero_only 127 def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None: 128 for k, v in metrics.items(): 129 if isinstance(v, torch.Tensor): 130 v = v.item() 131 self.experiment.add_scalar(k, v, step) 132 133 @rank_zero_only 134 def save(self) -> None: 135 super().save() 136 try: 137 self.experiment.flush() 138 except AttributeError: 139 # you are using PT version (<v1.2) which does not have implemented flush 140 self.experiment._get_file_writer().flush() 141 142 dir_path = self.log_dir 143 if not os.path.isdir(dir_path): 144 dir_path = self.save_dir 145 146 # prepare the file path 147 meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS) 148 149 # save the metatags file 150 with open(meta_tags_path, 'w', newline='') as csvfile: 151 fieldnames = ['key', 'value'] 152 writer = csv.DictWriter(csvfile, fieldnames=fieldnames) 153 writer.writerow({'key': 'key', 'value': 'value'}) 154 for k, v in self.tags.items(): 155 writer.writerow({'key': k, 'value': v}) 156 157 @rank_zero_only 158 def finalize(self, status: str) -> None: 159 self.save() 160 161 @property 162 def name(self) -> str: 163 return self._name 164 165 @property 166 def version(self) -> int: 167 if self._version is None: 168 self._version = self._get_next_version() 169 return self._version 170 171 def _get_next_version(self): 172 root_dir = os.path.join(self.save_dir, self.name) 173 174 if not os.path.isdir(root_dir): 175 log.warning('Missing logger folder: %s', root_dir) 176 return 0 177 178 existing_versions = [] 179 for d in os.listdir(root_dir): 180 if os.path.isdir(os.path.join(root_dir, d)) and d.startswith("version_"): 181 existing_versions.append(int(d.split("_")[1])) 182 183 if len(existing_versions) == 0: 184 return 0 185 186 return max(existing_versions) + 1 187 [end of pytorch_lightning/loggers/tensorboard.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/pytorch_lightning/loggers/tensorboard.py b/pytorch_lightning/loggers/tensorboard.py --- a/pytorch_lightning/loggers/tensorboard.py +++ b/pytorch_lightning/loggers/tensorboard.py @@ -101,7 +101,8 @@ return self._experiment @rank_zero_only - def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None: + def log_hyperparams(self, params: Union[Dict[str, Any], Namespace], + metrics: Optional[Dict[str, Any]] = None) -> None: params = self._convert_params(params) params = self._flatten_dict(params) sanitized_params = self._sanitize_params(params) @@ -114,7 +115,9 @@ ) else: from torch.utils.tensorboard.summary import hparams - exp, ssi, sei = hparams(sanitized_params, {}) + if metrics is None: + metrics = {} + exp, ssi, sei = hparams(sanitized_params, metrics) writer = self.experiment._get_file_writer() writer.add_summary(exp) writer.add_summary(ssi)
{"golden_diff": "diff --git a/pytorch_lightning/loggers/tensorboard.py b/pytorch_lightning/loggers/tensorboard.py\n--- a/pytorch_lightning/loggers/tensorboard.py\n+++ b/pytorch_lightning/loggers/tensorboard.py\n@@ -101,7 +101,8 @@\n return self._experiment\n \n @rank_zero_only\n- def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:\n+ def log_hyperparams(self, params: Union[Dict[str, Any], Namespace],\n+ metrics: Optional[Dict[str, Any]] = None) -> None:\n params = self._convert_params(params)\n params = self._flatten_dict(params)\n sanitized_params = self._sanitize_params(params)\n@@ -114,7 +115,9 @@\n )\n else:\n from torch.utils.tensorboard.summary import hparams\n- exp, ssi, sei = hparams(sanitized_params, {})\n+ if metrics is None:\n+ metrics = {}\n+ exp, ssi, sei = hparams(sanitized_params, metrics)\n writer = self.experiment._get_file_writer()\n writer.add_summary(exp)\n writer.add_summary(ssi)\n", "issue": "Add support to log hparams and metrics to tensorboard?\nHow can I log metrics (_e.g._ validation loss of best epoch) together with the set of hyperparameters?\r\n\r\nI have looked through the docs and through the code.\r\nIt seems like an obvious thing, so maybe I'm just not getting it.\r\n\r\nCurrently, the only way that I found was to extend the logger class:\r\n```python\r\nclass MyTensorBoardLogger(TensorBoardLogger):\r\n\r\n def __init__(self, *args, **kwargs):\r\n super(MyTensorBoardLogger, self).__init__(*args, **kwargs)\r\n\r\n def log_hyperparams(self, *args, **kwargs):\r\n pass\r\n\r\n @rank_zero_only\r\n def log_hyperparams_metrics(self, params: dict, metrics: dict) -> None:\r\n params = self._convert_params(params)\r\n exp, ssi, sei = hparams(params, metrics)\r\n writer = self.experiment._get_file_writer()\r\n writer.add_summary(exp)\r\n writer.add_summary(ssi)\r\n writer.add_summary(sei)\r\n # some alternative should be added\r\n self.tags.update(params)\r\n```\r\nAnd then I'm writing the hparams with metrics in a callback:\r\n```\r\ndef on_train_end(self, trainer, module):\r\n module.logger.log_hyperparams_metrics(module.hparams, {'val_loss': self.best_val_loss})\r\n```\r\nBut that doesn't seem right. \r\nIs there a better way to write some metric together with the hparams as well?\r\n\r\n#### Environment\r\n\r\n - OS: Ubuntu18.04\r\n - conda4.8.3\r\n - pytorch-lightning==0.7.1\r\n- torch==1.4.0\r\n\n", "before_files": [{"content": "\"\"\"\nTensorBoard\n-----------\n\"\"\"\n\nimport csv\nimport os\nfrom argparse import Namespace\nfrom typing import Optional, Dict, Union, Any\nfrom warnings import warn\n\nimport torch\nfrom pkg_resources import parse_version\nfrom torch.utils.tensorboard import SummaryWriter\n\nfrom pytorch_lightning import _logger as log\nfrom pytorch_lightning.loggers.base import LightningLoggerBase\nfrom pytorch_lightning.utilities import rank_zero_only\n\n\nclass TensorBoardLogger(LightningLoggerBase):\n r\"\"\"\n Log to local file system in `TensorBoard <https://www.tensorflow.org/tensorboard>`_ format.\n Implemented using :class:`~torch.utils.tensorboard.SummaryWriter`. Logs are saved to\n ``os.path.join(save_dir, name, version)``. This is the default logger in Lightning, it comes\n preinstalled.\n\n Example:\n >>> from pytorch_lightning import Trainer\n >>> from pytorch_lightning.loggers import TensorBoardLogger\n >>> logger = TensorBoardLogger(\"tb_logs\", name=\"my_model\")\n >>> trainer = Trainer(logger=logger)\n\n Args:\n save_dir: Save directory\n name: Experiment name. Defaults to ``'default'``. If it is the empty string then no per-experiment\n subdirectory is used.\n version: Experiment version. If version is not specified the logger inspects the save\n directory for existing versions, then automatically assigns the next available version.\n If it is a string then it is used as the run-specific subdirectory name,\n otherwise ``'version_${version}'`` is used.\n \\**kwargs: Other arguments are passed directly to the :class:`SummaryWriter` constructor.\n\n \"\"\"\n NAME_CSV_TAGS = 'meta_tags.csv'\n\n def __init__(self,\n save_dir: str,\n name: Optional[str] = \"default\",\n version: Optional[Union[int, str]] = None,\n **kwargs):\n super().__init__()\n self.save_dir = save_dir\n self._name = name\n self._version = version\n\n self._experiment = None\n self.tags = {}\n self._kwargs = kwargs\n\n @property\n def root_dir(self) -> str:\n \"\"\"\n Parent directory for all tensorboard checkpoint subdirectories.\n If the experiment name parameter is ``None`` or the empty string, no experiment subdirectory is used\n and the checkpoint will be saved in \"save_dir/version_dir\"\n \"\"\"\n if self.name is None or len(self.name) == 0:\n return self.save_dir\n else:\n return os.path.join(self.save_dir, self.name)\n\n @property\n def log_dir(self) -> str:\n \"\"\"\n The directory for this run's tensorboard checkpoint. By default, it is named\n ``'version_${self.version}'`` but it can be overridden by passing a string value\n for the constructor's version parameter instead of ``None`` or an int.\n \"\"\"\n # create a pseudo standard path ala test-tube\n version = self.version if isinstance(self.version, str) else f\"version_{self.version}\"\n log_dir = os.path.join(self.root_dir, version)\n return log_dir\n\n @property\n def experiment(self) -> SummaryWriter:\n r\"\"\"\n Actual tensorboard object. To use TensorBoard features in your\n :class:`~pytorch_lightning.core.lightning.LightningModule` do the following.\n\n Example::\n\n self.logger.experiment.some_tensorboard_function()\n\n \"\"\"\n if self._experiment is not None:\n return self._experiment\n\n os.makedirs(self.root_dir, exist_ok=True)\n self._experiment = SummaryWriter(log_dir=self.log_dir, **self._kwargs)\n return self._experiment\n\n @rank_zero_only\n def log_hyperparams(self, params: Union[Dict[str, Any], Namespace]) -> None:\n params = self._convert_params(params)\n params = self._flatten_dict(params)\n sanitized_params = self._sanitize_params(params)\n\n if parse_version(torch.__version__) < parse_version(\"1.3.0\"):\n warn(\n f\"Hyperparameter logging is not available for Torch version {torch.__version__}.\"\n \" Skipping log_hyperparams. Upgrade to Torch 1.3.0 or above to enable\"\n \" hyperparameter logging.\"\n )\n else:\n from torch.utils.tensorboard.summary import hparams\n exp, ssi, sei = hparams(sanitized_params, {})\n writer = self.experiment._get_file_writer()\n writer.add_summary(exp)\n writer.add_summary(ssi)\n writer.add_summary(sei)\n\n # some alternative should be added\n self.tags.update(sanitized_params)\n\n @rank_zero_only\n def log_metrics(self, metrics: Dict[str, float], step: Optional[int] = None) -> None:\n for k, v in metrics.items():\n if isinstance(v, torch.Tensor):\n v = v.item()\n self.experiment.add_scalar(k, v, step)\n\n @rank_zero_only\n def save(self) -> None:\n super().save()\n try:\n self.experiment.flush()\n except AttributeError:\n # you are using PT version (<v1.2) which does not have implemented flush\n self.experiment._get_file_writer().flush()\n\n dir_path = self.log_dir\n if not os.path.isdir(dir_path):\n dir_path = self.save_dir\n\n # prepare the file path\n meta_tags_path = os.path.join(dir_path, self.NAME_CSV_TAGS)\n\n # save the metatags file\n with open(meta_tags_path, 'w', newline='') as csvfile:\n fieldnames = ['key', 'value']\n writer = csv.DictWriter(csvfile, fieldnames=fieldnames)\n writer.writerow({'key': 'key', 'value': 'value'})\n for k, v in self.tags.items():\n writer.writerow({'key': k, 'value': v})\n\n @rank_zero_only\n def finalize(self, status: str) -> None:\n self.save()\n\n @property\n def name(self) -> str:\n return self._name\n\n @property\n def version(self) -> int:\n if self._version is None:\n self._version = self._get_next_version()\n return self._version\n\n def _get_next_version(self):\n root_dir = os.path.join(self.save_dir, self.name)\n\n if not os.path.isdir(root_dir):\n log.warning('Missing logger folder: %s', root_dir)\n return 0\n\n existing_versions = []\n for d in os.listdir(root_dir):\n if os.path.isdir(os.path.join(root_dir, d)) and d.startswith(\"version_\"):\n existing_versions.append(int(d.split(\"_\")[1]))\n\n if len(existing_versions) == 0:\n return 0\n\n return max(existing_versions) + 1\n", "path": "pytorch_lightning/loggers/tensorboard.py"}]}
2,824
265
gh_patches_debug_5516
rasdani/github-patches
git_diff
kivy__kivy-4047
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Pygame text provider does not render text opacity properly Text renders as if opacity is set to 100% regardless of what's set in a Label's color attribute. SDL2 works just fine. Tested with master pulled today on Linux. Pygame text provider does not render text opacity properly Text renders as if opacity is set to 100% regardless of what's set in a Label's color attribute. SDL2 works just fine. Tested with master pulled today on Linux. </issue> <code> [start of kivy/core/text/text_pygame.py] 1 ''' 2 Text Pygame: Draw text with pygame 3 ''' 4 5 __all__ = ('LabelPygame', ) 6 7 from kivy.compat import PY2 8 from kivy.core.text import LabelBase 9 from kivy.core.image import ImageData 10 11 try: 12 import pygame 13 except: 14 raise 15 16 pygame_cache = {} 17 pygame_font_handles = {} 18 pygame_cache_order = [] 19 20 # init pygame font 21 try: 22 pygame.ftfont.init() 23 except: 24 pygame.font.init() 25 26 27 class LabelPygame(LabelBase): 28 29 def _get_font_id(self): 30 if PY2: 31 try: 32 return '|'.join([unicode(self.options[x]) for x in 33 ('font_size', 'font_name_r', 34 'bold', 'italic')]) 35 except UnicodeDecodeError: 36 pass 37 return '|'.join([str(self.options[x]) for x in 38 ('font_size', 'font_name_r', 'bold', 'italic')]) 39 40 def _get_font(self): 41 fontid = self._get_font_id() 42 if fontid not in pygame_cache: 43 # try first the file if it's a filename 44 font_handle = fontobject = None 45 fontname = self.options['font_name_r'] 46 ext = fontname.rsplit('.', 1) 47 if len(ext) == 2: 48 # try to open the font if it has an extension 49 font_handle = open(fontname, 'rb') 50 fontobject = pygame.font.Font(font_handle, 51 int(self.options['font_size'])) 52 53 # fallback to search a system font 54 if fontobject is None: 55 # try to search the font 56 font = pygame.font.match_font( 57 self.options['font_name_r'].replace(' ', ''), 58 bold=self.options['bold'], 59 italic=self.options['italic']) 60 61 # fontobject 62 fontobject = pygame.font.Font(font, 63 int(self.options['font_size'])) 64 pygame_cache[fontid] = fontobject 65 pygame_font_handles[fontid] = font_handle 66 pygame_cache_order.append(fontid) 67 68 # to prevent too much file open, limit the number of opened fonts to 64 69 while len(pygame_cache_order) > 64: 70 popid = pygame_cache_order.pop(0) 71 del pygame_cache[popid] 72 font_handle = pygame_font_handles.pop(popid) 73 if font_handle is not None: 74 font_handle.close() 75 76 return pygame_cache[fontid] 77 78 def get_ascent(self): 79 return self._get_font().get_ascent() 80 81 def get_descent(self): 82 return self._get_font().get_descent() 83 84 def get_extents(self, text): 85 return self._get_font().size(text) 86 87 def get_cached_extents(self): 88 return self._get_font().size 89 90 def _render_begin(self): 91 self._pygame_surface = pygame.Surface(self._size, pygame.SRCALPHA, 32) 92 self._pygame_surface.fill((0, 0, 0, 0)) 93 94 def _render_text(self, text, x, y): 95 font = self._get_font() 96 color = [c * 255 for c in self.options['color']] 97 color[0], color[2] = color[2], color[0] 98 try: 99 text = font.render(text, True, color) 100 self._pygame_surface.blit(text, (x, y), None, 101 pygame.BLEND_RGBA_ADD) 102 except pygame.error: 103 pass 104 105 def _render_end(self): 106 w, h = self._size 107 data = ImageData(w, h, 108 'rgba', self._pygame_surface.get_buffer().raw) 109 110 del self._pygame_surface 111 112 return data 113 [end of kivy/core/text/text_pygame.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/kivy/core/text/text_pygame.py b/kivy/core/text/text_pygame.py --- a/kivy/core/text/text_pygame.py +++ b/kivy/core/text/text_pygame.py @@ -97,6 +97,7 @@ color[0], color[2] = color[2], color[0] try: text = font.render(text, True, color) + text.set_colorkey(color) self._pygame_surface.blit(text, (x, y), None, pygame.BLEND_RGBA_ADD) except pygame.error:
{"golden_diff": "diff --git a/kivy/core/text/text_pygame.py b/kivy/core/text/text_pygame.py\n--- a/kivy/core/text/text_pygame.py\n+++ b/kivy/core/text/text_pygame.py\n@@ -97,6 +97,7 @@\n color[0], color[2] = color[2], color[0]\n try:\n text = font.render(text, True, color)\n+ text.set_colorkey(color)\n self._pygame_surface.blit(text, (x, y), None,\n pygame.BLEND_RGBA_ADD)\n except pygame.error:\n", "issue": "Pygame text provider does not render text opacity properly\nText renders as if opacity is set to 100% regardless of what's set in a Label's color attribute.\n\nSDL2 works just fine.\n\nTested with master pulled today on Linux.\n\nPygame text provider does not render text opacity properly\nText renders as if opacity is set to 100% regardless of what's set in a Label's color attribute.\n\nSDL2 works just fine.\n\nTested with master pulled today on Linux.\n\n", "before_files": [{"content": "'''\nText Pygame: Draw text with pygame\n'''\n\n__all__ = ('LabelPygame', )\n\nfrom kivy.compat import PY2\nfrom kivy.core.text import LabelBase\nfrom kivy.core.image import ImageData\n\ntry:\n import pygame\nexcept:\n raise\n\npygame_cache = {}\npygame_font_handles = {}\npygame_cache_order = []\n\n# init pygame font\ntry:\n pygame.ftfont.init()\nexcept:\n pygame.font.init()\n\n\nclass LabelPygame(LabelBase):\n\n def _get_font_id(self):\n if PY2:\n try:\n return '|'.join([unicode(self.options[x]) for x in\n ('font_size', 'font_name_r',\n 'bold', 'italic')])\n except UnicodeDecodeError:\n pass\n return '|'.join([str(self.options[x]) for x in\n ('font_size', 'font_name_r', 'bold', 'italic')])\n\n def _get_font(self):\n fontid = self._get_font_id()\n if fontid not in pygame_cache:\n # try first the file if it's a filename\n font_handle = fontobject = None\n fontname = self.options['font_name_r']\n ext = fontname.rsplit('.', 1)\n if len(ext) == 2:\n # try to open the font if it has an extension\n font_handle = open(fontname, 'rb')\n fontobject = pygame.font.Font(font_handle,\n int(self.options['font_size']))\n\n # fallback to search a system font\n if fontobject is None:\n # try to search the font\n font = pygame.font.match_font(\n self.options['font_name_r'].replace(' ', ''),\n bold=self.options['bold'],\n italic=self.options['italic'])\n\n # fontobject\n fontobject = pygame.font.Font(font,\n int(self.options['font_size']))\n pygame_cache[fontid] = fontobject\n pygame_font_handles[fontid] = font_handle\n pygame_cache_order.append(fontid)\n\n # to prevent too much file open, limit the number of opened fonts to 64\n while len(pygame_cache_order) > 64:\n popid = pygame_cache_order.pop(0)\n del pygame_cache[popid]\n font_handle = pygame_font_handles.pop(popid)\n if font_handle is not None:\n font_handle.close()\n\n return pygame_cache[fontid]\n\n def get_ascent(self):\n return self._get_font().get_ascent()\n\n def get_descent(self):\n return self._get_font().get_descent()\n\n def get_extents(self, text):\n return self._get_font().size(text)\n\n def get_cached_extents(self):\n return self._get_font().size\n\n def _render_begin(self):\n self._pygame_surface = pygame.Surface(self._size, pygame.SRCALPHA, 32)\n self._pygame_surface.fill((0, 0, 0, 0))\n\n def _render_text(self, text, x, y):\n font = self._get_font()\n color = [c * 255 for c in self.options['color']]\n color[0], color[2] = color[2], color[0]\n try:\n text = font.render(text, True, color)\n self._pygame_surface.blit(text, (x, y), None,\n pygame.BLEND_RGBA_ADD)\n except pygame.error:\n pass\n\n def _render_end(self):\n w, h = self._size\n data = ImageData(w, h,\n 'rgba', self._pygame_surface.get_buffer().raw)\n\n del self._pygame_surface\n\n return data\n", "path": "kivy/core/text/text_pygame.py"}]}
1,668
124
gh_patches_debug_421
rasdani/github-patches
git_diff
rucio__rucio-2062
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Documentation build fails Motivation ---------- Currently the documentation builds fail with ``` Running Sphinx v1.6.5 making output directory... loading translations [en]... done Traceback (most recent call last): File "/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/cmdline.py", line 305, in main opts.warningiserror, opts.tags, opts.verbosity, opts.jobs) File "/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/application.py", line 196, in __init__ self.setup_extension(extension) File "/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/application.py", line 456, in setup_extension self.registry.load_extension(self, extname) File "/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/registry.py", line 199, in load_extension raise ExtensionError(__('Could not import extension %s') % extname, err) ExtensionError: Could not import extension rucio.common.doc.argparse.ext (exception: No module named rucio.common.doc.argparse.ext) Extension error: Could not import extension rucio.common.doc.argparse.ext (exception: No module named rucio.common.doc.argparse.ext) ``` I did not look too much into the issue yet, but I wonder why we added this `rucio.common.doc.argparse.ext` to the repository instead of using `sphinx-argparse`? @vingar do you maybe remember? </issue> <code> [start of doc/source/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # Rucio documentation build configuration file, created by 4 # sphinx-quickstart on Fri Oct 27 14:25:40 2017. 5 # 6 # This file is execfile()d with the current directory set to its 7 # containing dir. 8 # 9 # Note that not all possible configuration values are present in this 10 # autogenerated file. 11 # 12 # All configuration values have a default; values that are commented out 13 # serve to show the default. 14 15 # If extensions (or modules to document with autodoc) are in another directory, 16 # add these directories to sys.path here. If the directory is relative to the 17 # documentation root, use os.path.abspath to make it absolute, like shown here. 18 # 19 import os 20 import sys 21 from mock import Mock as MagicMock 22 23 sys.path.insert(len(sys.path) - 1, os.path.abspath('.')) 24 25 26 class Mock(MagicMock): 27 @classmethod 28 def __getattr__(cls, name): 29 return Mock() 30 31 @classmethod 32 def __getitem__(cls, name): 33 return Mock() 34 35 36 MOCK_MODULES = ['pycurl', 'M2Crypto'] 37 sys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES) 38 39 # -- General configuration ------------------------------------------------ 40 41 # If your documentation needs a minimal Sphinx version, state it here. 42 # 43 # needs_sphinx = '1.0' 44 45 # Add any Sphinx extension module names here, as strings. They can be 46 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 47 # ones. 48 extensions = ['sphinx.ext.autodoc', 49 'sphinx.ext.doctest', 50 'sphinx.ext.todo', 51 'rucio.common.doc.argparse.ext', 52 'sphinxcontrib.httpdomain', 53 'sphinxcontrib.autohttp.flask', 54 'sphinxcontrib.autohttp.flaskqref'] 55 56 57 on_rtd = os.environ.get('READTHEDOCS', None) == 'True' 58 59 if not on_rtd: # only import and set the theme if we're building docs locally 60 import sphinx_rtd_theme 61 html_theme = 'sphinx_rtd_theme' 62 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] 63 64 65 # Add any paths that contain templates here, relative to this directory. 66 templates_path = ['_templates'] 67 68 # The suffix(es) of source filenames. 69 # You can specify multiple suffix as a list of string: 70 # 71 # source_suffix = ['.rst', '.md'] 72 source_suffix = '.rst' 73 74 # The master toctree document. 75 master_doc = 'index' 76 77 # General information about the project. 78 project = u'Rucio' 79 copyright = u'2012-2018 CERN for the benefit of the ATLAS collaboration' 80 author = u'[email protected]' 81 82 # The version info for the project you're documenting, acts as replacement for 83 # |version| and |release|, also used in various other places throughout the 84 # built documents. 85 # 86 # The short X.Y version. 87 version = u'' 88 # The full version, including alpha/beta/rc tags. 89 release = u'1.2' 90 91 # The language for content autogenerated by Sphinx. Refer to documentation 92 # for a list of supported languages. 93 # 94 # This is also used if you do content translation via gettext catalogs. 95 # Usually you set "language" from the command line for these cases. 96 language = None 97 98 # List of patterns, relative to source directory, that match files and 99 # directories to ignore when looking for source files. 100 # This patterns also effect to html_static_path and html_extra_path 101 exclude_patterns = [] 102 103 # The name of the Pygments (syntax highlighting) style to use. 104 pygments_style = 'sphinx' 105 106 # If true, `todo` and `todoList` produce output, else they produce nothing. 107 todo_include_todos = True 108 109 110 # -- Options for HTML output ---------------------------------------------- 111 112 # The theme to use for HTML and HTML Help pages. See the documentation for 113 # a list of builtin themes. 114 # 115 # html_theme = 'alabaster' 116 117 # Theme options are theme-specific and customize the look and feel of a theme 118 # further. For a list of options available for each theme, see the 119 # documentation. 120 # 121 # html_theme_options = {} 122 123 # Add any paths that contain custom static files (such as style sheets) here, 124 # relative to this directory. They are copied after the builtin static files, 125 # so a file named "default.css" will overwrite the builtin "default.css". 126 # html_static_path = ['_static'] 127 128 # Custom sidebar templates, must be a dictionary that maps document names 129 # to template names. 130 # 131 # This is required for the alabaster theme 132 # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars 133 # html_sidebars = { 134 # '**': [ 135 # 'about.html', 136 # 'navigation.html', 137 # 'relations.html', # needs 'show_related': True theme option to display 138 # 'searchbox.html', 139 # 'donate.html', 140 # ] 141 # } 142 143 144 # -- Options for HTMLHelp output ------------------------------------------ 145 146 # Output file base name for HTML help builder. 147 htmlhelp_basename = 'Ruciodoc' 148 149 150 # -- Options for LaTeX output --------------------------------------------- 151 152 latex_elements = { 153 # The paper size ('letterpaper' or 'a4paper'). 154 # 155 # 'papersize': 'letterpaper', 156 157 # The font size ('10pt', '11pt' or '12pt'). 158 # 159 # 'pointsize': '10pt', 160 161 # Additional stuff for the LaTeX preamble. 162 # 163 # 'preamble': '', 164 165 # Latex figure (float) alignment 166 # 167 # 'figure_align': 'htbp', 168 } 169 170 # Grouping the document tree into LaTeX files. List of tuples 171 # (source start file, target name, title, 172 # author, documentclass [howto, manual, or own class]). 173 latex_documents = [ 174 (master_doc, 'Rucio.tex', u'Rucio Documentation', 175 u'[email protected]', 'manual'), 176 ] 177 178 179 # -- Options for manual page output --------------------------------------- 180 181 # One entry per manual page. List of tuples 182 # (source start file, name, description, authors, manual section). 183 man_pages = [ 184 (master_doc, 'rucio', u'Rucio Documentation', 185 [author], 1) 186 ] 187 188 189 # -- Options for Texinfo output ------------------------------------------- 190 191 # Grouping the document tree into Texinfo files. List of tuples 192 # (source start file, target name, title, author, 193 # dir menu entry, description, category) 194 texinfo_documents = [ 195 (master_doc, 'Rucio', u'Rucio Documentation', 196 author, 'Rucio', 'One line description of project.', 197 'Miscellaneous'), 198 ] 199 [end of doc/source/conf.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/doc/source/conf.py b/doc/source/conf.py --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -20,7 +20,7 @@ import sys from mock import Mock as MagicMock -sys.path.insert(len(sys.path) - 1, os.path.abspath('.')) +sys.path.insert(len(sys.path), os.path.abspath('../../lib')) class Mock(MagicMock):
{"golden_diff": "diff --git a/doc/source/conf.py b/doc/source/conf.py\n--- a/doc/source/conf.py\n+++ b/doc/source/conf.py\n@@ -20,7 +20,7 @@\n import sys\n from mock import Mock as MagicMock\n \n-sys.path.insert(len(sys.path) - 1, os.path.abspath('.'))\n+sys.path.insert(len(sys.path), os.path.abspath('../../lib'))\n \n \n class Mock(MagicMock):\n", "issue": "Documentation build fails\nMotivation\r\n----------\r\nCurrently the documentation builds fail with \r\n\r\n```\r\nRunning Sphinx v1.6.5\r\nmaking output directory...\r\nloading translations [en]... done\r\n\r\nTraceback (most recent call last):\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/cmdline.py\", line 305, in main\r\n opts.warningiserror, opts.tags, opts.verbosity, opts.jobs)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/application.py\", line 196, in __init__\r\n self.setup_extension(extension)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/application.py\", line 456, in setup_extension\r\n self.registry.load_extension(self, extname)\r\n File \"/home/docs/checkouts/readthedocs.org/user_builds/rucio/envs/latest/local/lib/python2.7/site-packages/sphinx/registry.py\", line 199, in load_extension\r\n raise ExtensionError(__('Could not import extension %s') % extname, err)\r\nExtensionError: Could not import extension rucio.common.doc.argparse.ext (exception: No module named rucio.common.doc.argparse.ext)\r\n\r\nExtension error:\r\nCould not import extension rucio.common.doc.argparse.ext (exception: No module named rucio.common.doc.argparse.ext)\r\n```\r\n\r\nI did not look too much into the issue yet, but I wonder why we added this `rucio.common.doc.argparse.ext` to the repository instead of using `sphinx-argparse`? @vingar do you maybe remember?\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Rucio documentation build configuration file, created by\n# sphinx-quickstart on Fri Oct 27 14:25:40 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nfrom mock import Mock as MagicMock\n\nsys.path.insert(len(sys.path) - 1, os.path.abspath('.'))\n\n\nclass Mock(MagicMock):\n @classmethod\n def __getattr__(cls, name):\n return Mock()\n\n @classmethod\n def __getitem__(cls, name):\n return Mock()\n\n\nMOCK_MODULES = ['pycurl', 'M2Crypto']\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.todo',\n 'rucio.common.doc.argparse.ext',\n 'sphinxcontrib.httpdomain',\n 'sphinxcontrib.autohttp.flask',\n 'sphinxcontrib.autohttp.flaskqref']\n\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nif not on_rtd: # only import and set the theme if we're building docs locally\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Rucio'\ncopyright = u'2012-2018 CERN for the benefit of the ATLAS collaboration'\nauthor = u'[email protected]'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u''\n# The full version, including alpha/beta/rc tags.\nrelease = u'1.2'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\n# html_sidebars = {\n# '**': [\n# 'about.html',\n# 'navigation.html',\n# 'relations.html', # needs 'show_related': True theme option to display\n# 'searchbox.html',\n# 'donate.html',\n# ]\n# }\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Ruciodoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Rucio.tex', u'Rucio Documentation',\n u'[email protected]', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'rucio', u'Rucio Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Rucio', u'Rucio Documentation',\n author, 'Rucio', 'One line description of project.',\n 'Miscellaneous'),\n]\n", "path": "doc/source/conf.py"}]}
2,881
88
gh_patches_debug_13813
rasdani/github-patches
git_diff
cloud-custodian__cloud-custodian-9500
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add support for kms-key in timestream service ### Describe the feature Add support for kms-key in timestream service ### Extra information or context _No response_ </issue> <code> [start of c7n/resources/timestream.py] 1 from c7n.manager import resources 2 from c7n.actions import Action 3 from c7n.query import DescribeSource, QueryResourceManager, TypeInfo 4 from c7n.utils import local_session, type_schema 5 from c7n.tags import ( 6 TagDelayedAction, 7 TagActionFilter, 8 Tag as TagAction, 9 RemoveTag as RemoveTagAction 10 ) 11 12 13 class DescribeTimestream(DescribeSource): 14 def augment(self, resources): 15 for r in resources: 16 client = local_session(self.manager.session_factory).client('timestream-write') 17 r['Tags'] = client.list_tags_for_resource(ResourceARN=r['Arn'])['Tags'] 18 return resources 19 20 21 @resources.register('timestream-database') 22 class TimestreamDatabase(QueryResourceManager): 23 class resource_type(TypeInfo): 24 service = 'timestream-write' 25 arn_type = '' 26 name = 'DatabaseName' 27 id = arn = 'Arn' 28 enum_spec = ('list_databases', 'Databases', {}) 29 permission_prefix = 'timestream' 30 permissions = ('timestream:ListDatabases', ) 31 permissions_augment = ("timestream:ListTagsForResource",) 32 source_mapping = { 33 'describe': DescribeTimestream, 34 } 35 36 37 @resources.register('timestream-table') 38 class TimestreamTable(QueryResourceManager): 39 class resource_type(TypeInfo): 40 service = 'timestream-write' 41 arn_type = '' 42 name = 'TableName' 43 id = arn = 'Arn' 44 enum_spec = ('list_tables', 'Tables', {}) 45 permission_prefix = 'timestream' 46 permissions = ('timestream:ListTables', ) 47 48 source_mapping = { 49 'describe': DescribeTimestream, 50 } 51 52 53 @TimestreamDatabase.action_registry.register('tag') 54 @TimestreamTable.action_registry.register('tag') 55 class TimestreamTag(TagAction): 56 57 permissions = ('timestream:TagResource', ) 58 59 def process_resource_set(self, client, resource_set, tags): 60 for r in resource_set: 61 client.tag_resource(ResourceARN=r['Arn'], Tags=tags) 62 63 64 @TimestreamDatabase.action_registry.register('remove-tag') 65 @TimestreamTable.action_registry.register('remove-tag') 66 class TimestreamRemoveTag(RemoveTagAction): 67 68 permissions = ('timestream:UntagResource', ) 69 70 def process_resource_set(self, client, resource_set, tag_keys): 71 for r in resource_set: 72 client.untag_resource(ResourceARN=r['Arn'], TagKeys=tag_keys) 73 74 75 TimestreamDatabase.action_registry.register('mark-for-op', TagDelayedAction) 76 TimestreamTable.action_registry.register('mark-for-op', TagDelayedAction) 77 78 TimestreamDatabase.filter_registry.register('marked-for-op', TagActionFilter) 79 TimestreamTable.filter_registry.register('marked-for-op', TagActionFilter) 80 81 82 @TimestreamTable.action_registry.register('delete') 83 class TimestreamTableDelete(Action): 84 """ 85 Deletes a timestream table 86 """ 87 88 schema = type_schema('delete') 89 permissions = ('timestream:DeleteTable', ) 90 91 def process(self, resources): 92 client = local_session(self.manager.session_factory).client('timestream-write') 93 for r in resources: 94 try: 95 client.delete_table( 96 DatabaseName=r['DatabaseName'], 97 TableName=r['TableName'] 98 ) 99 except client.exceptions.ResourceNotFoundException: 100 continue 101 102 103 @TimestreamDatabase.action_registry.register('delete') 104 class TimestreamDatabaseDelete(Action): 105 """ 106 Deletes a timestream database 107 """ 108 109 schema = type_schema('delete', force={'type': 'boolean', 'default': False}) 110 permissions = ( 111 'timestream:DeleteDatabase', 112 'timestream:ListTables', 'timestream:DeleteTable', ) 113 114 def process(self, resources): 115 client = local_session(self.manager.session_factory).client('timestream-write') 116 for r in resources: 117 try: 118 client.delete_database( 119 DatabaseName=r['DatabaseName'], 120 ) 121 except client.exceptions.ResourceNotFoundException: 122 continue 123 except client.exceptions.ValidationException: 124 if not self.data.get('force', False): 125 self.log.error( 126 f'Unable to delete database:{r["DatabaseName"]}, ' 127 'tables must be deleted first') 128 continue 129 tables = client.list_tables(DatabaseName=r['DatabaseName'])['Tables'] 130 TimestreamTableDelete( 131 data={'type': 'delete'}, 132 manager=self.manager, 133 log_dir=self.log_dir 134 ).process(tables) 135 client.delete_database( 136 DatabaseName=r['DatabaseName'], 137 ) 138 [end of c7n/resources/timestream.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/c7n/resources/timestream.py b/c7n/resources/timestream.py --- a/c7n/resources/timestream.py +++ b/c7n/resources/timestream.py @@ -1,5 +1,6 @@ from c7n.manager import resources from c7n.actions import Action +from c7n.filters.kms import KmsRelatedFilter from c7n.query import DescribeSource, QueryResourceManager, TypeInfo from c7n.utils import local_session, type_schema from c7n.tags import ( @@ -135,3 +136,8 @@ client.delete_database( DatabaseName=r['DatabaseName'], ) + + [email protected]_registry.register('kms-key') +class KmsFilter(KmsRelatedFilter): + RelatedIdsExpression = 'KmsKeyId'
{"golden_diff": "diff --git a/c7n/resources/timestream.py b/c7n/resources/timestream.py\n--- a/c7n/resources/timestream.py\n+++ b/c7n/resources/timestream.py\n@@ -1,5 +1,6 @@\n from c7n.manager import resources\n from c7n.actions import Action\n+from c7n.filters.kms import KmsRelatedFilter\n from c7n.query import DescribeSource, QueryResourceManager, TypeInfo\n from c7n.utils import local_session, type_schema\n from c7n.tags import (\n@@ -135,3 +136,8 @@\n client.delete_database(\n DatabaseName=r['DatabaseName'],\n )\n+\n+\[email protected]_registry.register('kms-key')\n+class KmsFilter(KmsRelatedFilter):\n+ RelatedIdsExpression = 'KmsKeyId'\n", "issue": "Add support for kms-key in timestream service\n### Describe the feature\n\nAdd support for kms-key in timestream service\n\n### Extra information or context\n\n_No response_\n", "before_files": [{"content": "from c7n.manager import resources\nfrom c7n.actions import Action\nfrom c7n.query import DescribeSource, QueryResourceManager, TypeInfo\nfrom c7n.utils import local_session, type_schema\nfrom c7n.tags import (\n TagDelayedAction,\n TagActionFilter,\n Tag as TagAction,\n RemoveTag as RemoveTagAction\n)\n\n\nclass DescribeTimestream(DescribeSource):\n def augment(self, resources):\n for r in resources:\n client = local_session(self.manager.session_factory).client('timestream-write')\n r['Tags'] = client.list_tags_for_resource(ResourceARN=r['Arn'])['Tags']\n return resources\n\n\[email protected]('timestream-database')\nclass TimestreamDatabase(QueryResourceManager):\n class resource_type(TypeInfo):\n service = 'timestream-write'\n arn_type = ''\n name = 'DatabaseName'\n id = arn = 'Arn'\n enum_spec = ('list_databases', 'Databases', {})\n permission_prefix = 'timestream'\n permissions = ('timestream:ListDatabases', )\n permissions_augment = (\"timestream:ListTagsForResource\",)\n source_mapping = {\n 'describe': DescribeTimestream,\n }\n\n\[email protected]('timestream-table')\nclass TimestreamTable(QueryResourceManager):\n class resource_type(TypeInfo):\n service = 'timestream-write'\n arn_type = ''\n name = 'TableName'\n id = arn = 'Arn'\n enum_spec = ('list_tables', 'Tables', {})\n permission_prefix = 'timestream'\n permissions = ('timestream:ListTables', )\n\n source_mapping = {\n 'describe': DescribeTimestream,\n }\n\n\[email protected]_registry.register('tag')\[email protected]_registry.register('tag')\nclass TimestreamTag(TagAction):\n\n permissions = ('timestream:TagResource', )\n\n def process_resource_set(self, client, resource_set, tags):\n for r in resource_set:\n client.tag_resource(ResourceARN=r['Arn'], Tags=tags)\n\n\[email protected]_registry.register('remove-tag')\[email protected]_registry.register('remove-tag')\nclass TimestreamRemoveTag(RemoveTagAction):\n\n permissions = ('timestream:UntagResource', )\n\n def process_resource_set(self, client, resource_set, tag_keys):\n for r in resource_set:\n client.untag_resource(ResourceARN=r['Arn'], TagKeys=tag_keys)\n\n\nTimestreamDatabase.action_registry.register('mark-for-op', TagDelayedAction)\nTimestreamTable.action_registry.register('mark-for-op', TagDelayedAction)\n\nTimestreamDatabase.filter_registry.register('marked-for-op', TagActionFilter)\nTimestreamTable.filter_registry.register('marked-for-op', TagActionFilter)\n\n\[email protected]_registry.register('delete')\nclass TimestreamTableDelete(Action):\n \"\"\"\n Deletes a timestream table\n \"\"\"\n\n schema = type_schema('delete')\n permissions = ('timestream:DeleteTable', )\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('timestream-write')\n for r in resources:\n try:\n client.delete_table(\n DatabaseName=r['DatabaseName'],\n TableName=r['TableName']\n )\n except client.exceptions.ResourceNotFoundException:\n continue\n\n\[email protected]_registry.register('delete')\nclass TimestreamDatabaseDelete(Action):\n \"\"\"\n Deletes a timestream database\n \"\"\"\n\n schema = type_schema('delete', force={'type': 'boolean', 'default': False})\n permissions = (\n 'timestream:DeleteDatabase',\n 'timestream:ListTables', 'timestream:DeleteTable', )\n\n def process(self, resources):\n client = local_session(self.manager.session_factory).client('timestream-write')\n for r in resources:\n try:\n client.delete_database(\n DatabaseName=r['DatabaseName'],\n )\n except client.exceptions.ResourceNotFoundException:\n continue\n except client.exceptions.ValidationException:\n if not self.data.get('force', False):\n self.log.error(\n f'Unable to delete database:{r[\"DatabaseName\"]}, '\n 'tables must be deleted first')\n continue\n tables = client.list_tables(DatabaseName=r['DatabaseName'])['Tables']\n TimestreamTableDelete(\n data={'type': 'delete'},\n manager=self.manager,\n log_dir=self.log_dir\n ).process(tables)\n client.delete_database(\n DatabaseName=r['DatabaseName'],\n )\n", "path": "c7n/resources/timestream.py"}]}
1,870
183
gh_patches_debug_35152
rasdani/github-patches
git_diff
blaze__blaze-1046
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Deprecate columns in Data Not a huge deal, but having both `columns` and `fields` in `Data` could potentially confuse new users. </issue> <code> [start of blaze/interactive.py] 1 from __future__ import absolute_import, division, print_function 2 3 import datashape 4 import datetime 5 import operator 6 import itertools 7 import warnings 8 9 from collections import Iterator 10 from functools import reduce 11 12 from datashape import discover, Tuple, Record, DataShape, var 13 from datashape.predicates import iscollection, isscalar, isrecord, istabular 14 15 from pandas import DataFrame, Series 16 17 import numpy as np 18 19 from odo import resource, odo 20 from odo.utils import ignoring 21 22 from .expr import Expr, Symbol, ndim 23 from .dispatch import dispatch 24 from .compatibility import _strtypes 25 26 27 __all__ = ['Data', 'Table', 'into', 'to_html'] 28 29 30 names = ('_%d' % i for i in itertools.count(1)) 31 not_an_iterator = [] 32 33 34 with ignoring(ImportError): 35 import bcolz 36 not_an_iterator.append(bcolz.carray) 37 38 39 with ignoring(ImportError): 40 import pymongo 41 not_an_iterator.append(pymongo.collection.Collection) 42 not_an_iterator.append(pymongo.database.Database) 43 44 45 def Data(data, dshape=None, name=None, fields=None, columns=None, schema=None, 46 **kwargs): 47 sub_uri = '' 48 if isinstance(data, _strtypes): 49 if '::' in data: 50 data, sub_uri = data.split('::') 51 data = resource(data, schema=schema, dshape=dshape, columns=columns, 52 **kwargs) 53 if (isinstance(data, Iterator) and 54 not isinstance(data, tuple(not_an_iterator))): 55 data = tuple(data) 56 if columns: 57 warnings.warn("columns kwarg deprecated. Use fields instead", 58 DeprecationWarning) 59 if columns and not fields: 60 fields = columns 61 if schema and dshape: 62 raise ValueError("Please specify one of schema= or dshape= keyword" 63 " arguments") 64 if schema and not dshape: 65 dshape = var * schema 66 if dshape and isinstance(dshape, _strtypes): 67 dshape = datashape.dshape(dshape) 68 if not dshape: 69 dshape = discover(data) 70 types = None 71 if isinstance(dshape.measure, Tuple) and fields: 72 types = dshape[1].dshapes 73 schema = Record(list(zip(fields, types))) 74 dshape = DataShape(*(dshape.shape + (schema,))) 75 elif isscalar(dshape.measure) and fields: 76 types = (dshape.measure,) * int(dshape[-2]) 77 schema = Record(list(zip(fields, types))) 78 dshape = DataShape(*(dshape.shape[:-1] + (schema,))) 79 elif isrecord(dshape.measure) and fields: 80 ds = discover(data) 81 assert isrecord(ds.measure) 82 names = ds.measure.names 83 if names != fields: 84 raise ValueError('data column names %s\n' 85 '\tnot equal to fields parameter %s,\n' 86 '\tuse Data(data).relabel(%s) to rename fields' 87 % (names, 88 fields, 89 ', '.join('%s=%r' % (k, v) 90 for k, v in zip(names, fields)))) 91 types = dshape.measure.types 92 schema = Record(list(zip(fields, types))) 93 dshape = DataShape(*(dshape.shape + (schema,))) 94 95 ds = datashape.dshape(dshape) 96 result = InteractiveSymbol(data, ds, name) 97 98 if sub_uri: 99 for field in sub_uri.split('/'): 100 if field: 101 result = result[field] 102 103 return result 104 105 106 class InteractiveSymbol(Symbol): 107 """Interactive data. 108 109 The ``Data`` object presents a familiar view onto a variety of forms of 110 data. This user-level object provides an interactive experience to using 111 Blaze's abstract expressions. 112 113 Parameters 114 ---------- 115 data : object 116 Any type with ``discover`` and ``compute`` implementations 117 fields : list, optional 118 Field or column names, will be inferred from datasource if possible 119 dshape : str or DataShape, optional 120 DataShape describing input data 121 name : str, optional 122 A name for the data. 123 124 Examples 125 -------- 126 >>> t = Data([(1, 'Alice', 100), 127 ... (2, 'Bob', -200), 128 ... (3, 'Charlie', 300), 129 ... (4, 'Denis', 400), 130 ... (5, 'Edith', -500)], 131 ... fields=['id', 'name', 'balance']) 132 >>> t[t.balance < 0].name 133 name 134 0 Bob 135 1 Edith 136 """ 137 __slots__ = 'data', 'dshape', '_name' 138 139 def __init__(self, data, dshape, name=None): 140 self.data = data 141 self.dshape = dshape 142 self._name = name or (next(names) 143 if isrecord(dshape.measure) 144 else None) 145 146 def _resources(self): 147 return {self: self.data} 148 149 @property 150 def _args(self): 151 return id(self.data), self.dshape, self._name 152 153 def __setstate__(self, state): 154 for slot, arg in zip(self.__slots__, state): 155 setattr(self, slot, arg) 156 157 158 Data.__doc__ = InteractiveSymbol.__doc__ 159 160 161 def Table(*args, **kwargs): 162 """ Deprecated, see Data instead """ 163 warnings.warn("Table is deprecated, use Data instead", 164 DeprecationWarning) 165 return Data(*args, **kwargs) 166 167 168 @dispatch(InteractiveSymbol, dict) 169 def _subs(o, d): 170 return o 171 172 173 @dispatch(Expr) 174 def compute(expr, **kwargs): 175 resources = expr._resources() 176 if not resources: 177 raise ValueError("No data resources found") 178 else: 179 return compute(expr, resources, **kwargs) 180 181 182 def concrete_head(expr, n=10): 183 """ Return head of computed expression """ 184 if not expr._resources(): 185 raise ValueError("Expression does not contain data resources") 186 if not iscollection(expr.dshape): 187 return compute(expr) 188 189 head = expr.head(n + 1) 190 191 if not iscollection(expr.dshape): 192 return odo(head, object) 193 elif isrecord(expr.dshape.measure): 194 return odo(head, DataFrame) 195 else: 196 df = odo(head, DataFrame) 197 df.columns = [expr._name] 198 return df 199 result = compute(head) 200 201 if len(result) == 0: 202 return DataFrame(columns=expr.fields) 203 if isrecord(expr.dshape.measure): 204 return odo(result, DataFrame, dshape=expr.dshape) 205 else: 206 df = odo(result, DataFrame, dshape=expr.dshape) 207 df.columns = [expr._name] 208 return df 209 210 211 def repr_tables(expr, n=10): 212 result = concrete_head(expr, n).rename(columns={None: ''}) 213 214 if isinstance(result, (DataFrame, Series)): 215 s = repr(result) 216 if len(result) > 10: 217 s = '\n'.join(s.split('\n')[:-1]) + '\n...' 218 return s 219 else: 220 return repr(result) # pragma: no cover 221 222 223 def numel(shape): 224 if var in shape: 225 return None 226 if not shape: 227 return 1 228 return reduce(operator.mul, shape, 1) 229 230 231 def short_dshape(ds, nlines=5): 232 s = datashape.coretypes.pprint(ds) 233 lines = s.split('\n') 234 if len(lines) > 5: 235 s = '\n'.join(lines[:nlines]) + '\n ...' 236 return s 237 238 239 def coerce_to(typ, x): 240 try: 241 return typ(x) 242 except TypeError: 243 return odo(x, typ) 244 245 246 def coerce_scalar(result, dshape): 247 if 'float' in dshape: 248 return coerce_to(float, result) 249 elif 'int' in dshape: 250 return coerce_to(int, result) 251 elif 'bool' in dshape: 252 return coerce_to(bool, result) 253 elif 'datetime' in dshape: 254 return coerce_to(datetime.datetime, result) 255 elif 'date' in dshape: 256 return coerce_to(datetime.date, result) 257 else: 258 return result 259 260 261 def expr_repr(expr, n=10): 262 # Pure Expressions, not interactive 263 if not expr._resources(): 264 return str(expr) 265 266 # Scalars 267 if ndim(expr) == 0 and isscalar(expr.dshape): 268 return repr(coerce_scalar(compute(expr), str(expr.dshape))) 269 270 # Tables 271 if (ndim(expr) == 1 and (istabular(expr.dshape) or 272 isscalar(expr.dshape.measure))): 273 return repr_tables(expr, 10) 274 275 # Smallish arrays 276 if ndim(expr) >= 2 and numel(expr.shape) and numel(expr.shape) < 1000000: 277 return repr(compute(expr)) 278 279 # Other 280 dat = expr._resources().values() 281 if len(dat) == 1: 282 dat = list(dat)[0] # may be dict_values 283 284 s = 'Data: %s' % dat 285 if not isinstance(expr, Symbol): 286 s += '\nExpr: %s' % str(expr) 287 s += '\nDataShape: %s' % short_dshape(expr.dshape, nlines=7) 288 289 return s 290 291 292 @dispatch(DataFrame) 293 def to_html(df): 294 return df.to_html() 295 296 297 @dispatch(Expr) 298 def to_html(expr): 299 # Tables 300 if not expr._resources() or ndim(expr) != 1: 301 return to_html(repr(expr)) 302 return to_html(concrete_head(expr)) 303 304 305 @dispatch(object) 306 def to_html(o): 307 return repr(o) 308 309 310 @dispatch(_strtypes) 311 def to_html(o): 312 return o.replace('\n', '<br>') 313 314 315 @dispatch((object, type, str), Expr) 316 def into(a, b, **kwargs): 317 result = compute(b, **kwargs) 318 kwargs['dshape'] = b.dshape 319 return into(a, result, **kwargs) 320 321 322 def table_length(expr): 323 try: 324 return expr._len() 325 except ValueError: 326 return compute(expr.count()) 327 328 329 Expr.__repr__ = expr_repr 330 Expr._repr_html_ = lambda x: to_html(x) 331 Expr.__len__ = table_length 332 333 334 def intonumpy(data, dtype=None, **kwargs): 335 # TODO: Don't ignore other kwargs like copy 336 result = odo(data, np.ndarray) 337 if dtype and result.dtype != dtype: 338 result = result.astype(dtype) 339 return result 340 341 342 def convert_base(typ, x): 343 x = compute(x) 344 try: 345 return typ(x) 346 except: 347 return typ(odo(x, typ)) 348 349 Expr.__array__ = intonumpy 350 Expr.__int__ = lambda x: convert_base(int, x) 351 Expr.__float__ = lambda x: convert_base(float, x) 352 Expr.__complex__ = lambda x: convert_base(complex, x) 353 Expr.__bool__ = lambda x: convert_base(bool, x) 354 Expr.__nonzero__ = lambda x: convert_base(bool, x) 355 Expr.__iter__ = into(Iterator) 356 [end of blaze/interactive.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/blaze/interactive.py b/blaze/interactive.py --- a/blaze/interactive.py +++ b/blaze/interactive.py @@ -44,6 +44,12 @@ def Data(data, dshape=None, name=None, fields=None, columns=None, schema=None, **kwargs): + if columns: + raise ValueError("columns argument deprecated, use fields instead") + if schema and dshape: + raise ValueError("Please specify one of schema= or dshape= keyword" + " arguments") + sub_uri = '' if isinstance(data, _strtypes): if '::' in data: @@ -53,14 +59,6 @@ if (isinstance(data, Iterator) and not isinstance(data, tuple(not_an_iterator))): data = tuple(data) - if columns: - warnings.warn("columns kwarg deprecated. Use fields instead", - DeprecationWarning) - if columns and not fields: - fields = columns - if schema and dshape: - raise ValueError("Please specify one of schema= or dshape= keyword" - " arguments") if schema and not dshape: dshape = var * schema if dshape and isinstance(dshape, _strtypes): @@ -83,11 +81,12 @@ if names != fields: raise ValueError('data column names %s\n' '\tnot equal to fields parameter %s,\n' - '\tuse Data(data).relabel(%s) to rename fields' - % (names, - fields, - ', '.join('%s=%r' % (k, v) - for k, v in zip(names, fields)))) + '\tuse Data(data).relabel(%s) to rename ' + 'fields' % (names, + fields, + ', '.join('%s=%r' % (k, v) + for k, v in + zip(names, fields)))) types = dshape.measure.types schema = Record(list(zip(fields, types))) dshape = DataShape(*(dshape.shape + (schema,)))
{"golden_diff": "diff --git a/blaze/interactive.py b/blaze/interactive.py\n--- a/blaze/interactive.py\n+++ b/blaze/interactive.py\n@@ -44,6 +44,12 @@\n \n def Data(data, dshape=None, name=None, fields=None, columns=None, schema=None,\n **kwargs):\n+ if columns:\n+ raise ValueError(\"columns argument deprecated, use fields instead\")\n+ if schema and dshape:\n+ raise ValueError(\"Please specify one of schema= or dshape= keyword\"\n+ \" arguments\")\n+\n sub_uri = ''\n if isinstance(data, _strtypes):\n if '::' in data:\n@@ -53,14 +59,6 @@\n if (isinstance(data, Iterator) and\n not isinstance(data, tuple(not_an_iterator))):\n data = tuple(data)\n- if columns:\n- warnings.warn(\"columns kwarg deprecated. Use fields instead\",\n- DeprecationWarning)\n- if columns and not fields:\n- fields = columns\n- if schema and dshape:\n- raise ValueError(\"Please specify one of schema= or dshape= keyword\"\n- \" arguments\")\n if schema and not dshape:\n dshape = var * schema\n if dshape and isinstance(dshape, _strtypes):\n@@ -83,11 +81,12 @@\n if names != fields:\n raise ValueError('data column names %s\\n'\n '\\tnot equal to fields parameter %s,\\n'\n- '\\tuse Data(data).relabel(%s) to rename fields'\n- % (names,\n- fields,\n- ', '.join('%s=%r' % (k, v)\n- for k, v in zip(names, fields))))\n+ '\\tuse Data(data).relabel(%s) to rename '\n+ 'fields' % (names,\n+ fields,\n+ ', '.join('%s=%r' % (k, v)\n+ for k, v in\n+ zip(names, fields))))\n types = dshape.measure.types\n schema = Record(list(zip(fields, types)))\n dshape = DataShape(*(dshape.shape + (schema,)))\n", "issue": "Deprecate columns in Data\nNot a huge deal, but having both `columns` and `fields` in `Data` could potentially confuse new users.\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport datashape\nimport datetime\nimport operator\nimport itertools\nimport warnings\n\nfrom collections import Iterator\nfrom functools import reduce\n\nfrom datashape import discover, Tuple, Record, DataShape, var\nfrom datashape.predicates import iscollection, isscalar, isrecord, istabular\n\nfrom pandas import DataFrame, Series\n\nimport numpy as np\n\nfrom odo import resource, odo\nfrom odo.utils import ignoring\n\nfrom .expr import Expr, Symbol, ndim\nfrom .dispatch import dispatch\nfrom .compatibility import _strtypes\n\n\n__all__ = ['Data', 'Table', 'into', 'to_html']\n\n\nnames = ('_%d' % i for i in itertools.count(1))\nnot_an_iterator = []\n\n\nwith ignoring(ImportError):\n import bcolz\n not_an_iterator.append(bcolz.carray)\n\n\nwith ignoring(ImportError):\n import pymongo\n not_an_iterator.append(pymongo.collection.Collection)\n not_an_iterator.append(pymongo.database.Database)\n\n\ndef Data(data, dshape=None, name=None, fields=None, columns=None, schema=None,\n **kwargs):\n sub_uri = ''\n if isinstance(data, _strtypes):\n if '::' in data:\n data, sub_uri = data.split('::')\n data = resource(data, schema=schema, dshape=dshape, columns=columns,\n **kwargs)\n if (isinstance(data, Iterator) and\n not isinstance(data, tuple(not_an_iterator))):\n data = tuple(data)\n if columns:\n warnings.warn(\"columns kwarg deprecated. Use fields instead\",\n DeprecationWarning)\n if columns and not fields:\n fields = columns\n if schema and dshape:\n raise ValueError(\"Please specify one of schema= or dshape= keyword\"\n \" arguments\")\n if schema and not dshape:\n dshape = var * schema\n if dshape and isinstance(dshape, _strtypes):\n dshape = datashape.dshape(dshape)\n if not dshape:\n dshape = discover(data)\n types = None\n if isinstance(dshape.measure, Tuple) and fields:\n types = dshape[1].dshapes\n schema = Record(list(zip(fields, types)))\n dshape = DataShape(*(dshape.shape + (schema,)))\n elif isscalar(dshape.measure) and fields:\n types = (dshape.measure,) * int(dshape[-2])\n schema = Record(list(zip(fields, types)))\n dshape = DataShape(*(dshape.shape[:-1] + (schema,)))\n elif isrecord(dshape.measure) and fields:\n ds = discover(data)\n assert isrecord(ds.measure)\n names = ds.measure.names\n if names != fields:\n raise ValueError('data column names %s\\n'\n '\\tnot equal to fields parameter %s,\\n'\n '\\tuse Data(data).relabel(%s) to rename fields'\n % (names,\n fields,\n ', '.join('%s=%r' % (k, v)\n for k, v in zip(names, fields))))\n types = dshape.measure.types\n schema = Record(list(zip(fields, types)))\n dshape = DataShape(*(dshape.shape + (schema,)))\n\n ds = datashape.dshape(dshape)\n result = InteractiveSymbol(data, ds, name)\n\n if sub_uri:\n for field in sub_uri.split('/'):\n if field:\n result = result[field]\n\n return result\n\n\nclass InteractiveSymbol(Symbol):\n \"\"\"Interactive data.\n\n The ``Data`` object presents a familiar view onto a variety of forms of\n data. This user-level object provides an interactive experience to using\n Blaze's abstract expressions.\n\n Parameters\n ----------\n data : object\n Any type with ``discover`` and ``compute`` implementations\n fields : list, optional\n Field or column names, will be inferred from datasource if possible\n dshape : str or DataShape, optional\n DataShape describing input data\n name : str, optional\n A name for the data.\n\n Examples\n --------\n >>> t = Data([(1, 'Alice', 100),\n ... (2, 'Bob', -200),\n ... (3, 'Charlie', 300),\n ... (4, 'Denis', 400),\n ... (5, 'Edith', -500)],\n ... fields=['id', 'name', 'balance'])\n >>> t[t.balance < 0].name\n name\n 0 Bob\n 1 Edith\n \"\"\"\n __slots__ = 'data', 'dshape', '_name'\n\n def __init__(self, data, dshape, name=None):\n self.data = data\n self.dshape = dshape\n self._name = name or (next(names)\n if isrecord(dshape.measure)\n else None)\n\n def _resources(self):\n return {self: self.data}\n\n @property\n def _args(self):\n return id(self.data), self.dshape, self._name\n\n def __setstate__(self, state):\n for slot, arg in zip(self.__slots__, state):\n setattr(self, slot, arg)\n\n\nData.__doc__ = InteractiveSymbol.__doc__\n\n\ndef Table(*args, **kwargs):\n \"\"\" Deprecated, see Data instead \"\"\"\n warnings.warn(\"Table is deprecated, use Data instead\",\n DeprecationWarning)\n return Data(*args, **kwargs)\n\n\n@dispatch(InteractiveSymbol, dict)\ndef _subs(o, d):\n return o\n\n\n@dispatch(Expr)\ndef compute(expr, **kwargs):\n resources = expr._resources()\n if not resources:\n raise ValueError(\"No data resources found\")\n else:\n return compute(expr, resources, **kwargs)\n\n\ndef concrete_head(expr, n=10):\n \"\"\" Return head of computed expression \"\"\"\n if not expr._resources():\n raise ValueError(\"Expression does not contain data resources\")\n if not iscollection(expr.dshape):\n return compute(expr)\n\n head = expr.head(n + 1)\n\n if not iscollection(expr.dshape):\n return odo(head, object)\n elif isrecord(expr.dshape.measure):\n return odo(head, DataFrame)\n else:\n df = odo(head, DataFrame)\n df.columns = [expr._name]\n return df\n result = compute(head)\n\n if len(result) == 0:\n return DataFrame(columns=expr.fields)\n if isrecord(expr.dshape.measure):\n return odo(result, DataFrame, dshape=expr.dshape)\n else:\n df = odo(result, DataFrame, dshape=expr.dshape)\n df.columns = [expr._name]\n return df\n\n\ndef repr_tables(expr, n=10):\n result = concrete_head(expr, n).rename(columns={None: ''})\n\n if isinstance(result, (DataFrame, Series)):\n s = repr(result)\n if len(result) > 10:\n s = '\\n'.join(s.split('\\n')[:-1]) + '\\n...'\n return s\n else:\n return repr(result) # pragma: no cover\n\n\ndef numel(shape):\n if var in shape:\n return None\n if not shape:\n return 1\n return reduce(operator.mul, shape, 1)\n\n\ndef short_dshape(ds, nlines=5):\n s = datashape.coretypes.pprint(ds)\n lines = s.split('\\n')\n if len(lines) > 5:\n s = '\\n'.join(lines[:nlines]) + '\\n ...'\n return s\n\n\ndef coerce_to(typ, x):\n try:\n return typ(x)\n except TypeError:\n return odo(x, typ)\n\n\ndef coerce_scalar(result, dshape):\n if 'float' in dshape:\n return coerce_to(float, result)\n elif 'int' in dshape:\n return coerce_to(int, result)\n elif 'bool' in dshape:\n return coerce_to(bool, result)\n elif 'datetime' in dshape:\n return coerce_to(datetime.datetime, result)\n elif 'date' in dshape:\n return coerce_to(datetime.date, result)\n else:\n return result\n\n\ndef expr_repr(expr, n=10):\n # Pure Expressions, not interactive\n if not expr._resources():\n return str(expr)\n\n # Scalars\n if ndim(expr) == 0 and isscalar(expr.dshape):\n return repr(coerce_scalar(compute(expr), str(expr.dshape)))\n\n # Tables\n if (ndim(expr) == 1 and (istabular(expr.dshape) or\n isscalar(expr.dshape.measure))):\n return repr_tables(expr, 10)\n\n # Smallish arrays\n if ndim(expr) >= 2 and numel(expr.shape) and numel(expr.shape) < 1000000:\n return repr(compute(expr))\n\n # Other\n dat = expr._resources().values()\n if len(dat) == 1:\n dat = list(dat)[0] # may be dict_values\n\n s = 'Data: %s' % dat\n if not isinstance(expr, Symbol):\n s += '\\nExpr: %s' % str(expr)\n s += '\\nDataShape: %s' % short_dshape(expr.dshape, nlines=7)\n\n return s\n\n\n@dispatch(DataFrame)\ndef to_html(df):\n return df.to_html()\n\n\n@dispatch(Expr)\ndef to_html(expr):\n # Tables\n if not expr._resources() or ndim(expr) != 1:\n return to_html(repr(expr))\n return to_html(concrete_head(expr))\n\n\n@dispatch(object)\ndef to_html(o):\n return repr(o)\n\n\n@dispatch(_strtypes)\ndef to_html(o):\n return o.replace('\\n', '<br>')\n\n\n@dispatch((object, type, str), Expr)\ndef into(a, b, **kwargs):\n result = compute(b, **kwargs)\n kwargs['dshape'] = b.dshape\n return into(a, result, **kwargs)\n\n\ndef table_length(expr):\n try:\n return expr._len()\n except ValueError:\n return compute(expr.count())\n\n\nExpr.__repr__ = expr_repr\nExpr._repr_html_ = lambda x: to_html(x)\nExpr.__len__ = table_length\n\n\ndef intonumpy(data, dtype=None, **kwargs):\n # TODO: Don't ignore other kwargs like copy\n result = odo(data, np.ndarray)\n if dtype and result.dtype != dtype:\n result = result.astype(dtype)\n return result\n\n\ndef convert_base(typ, x):\n x = compute(x)\n try:\n return typ(x)\n except:\n return typ(odo(x, typ))\n\nExpr.__array__ = intonumpy\nExpr.__int__ = lambda x: convert_base(int, x)\nExpr.__float__ = lambda x: convert_base(float, x)\nExpr.__complex__ = lambda x: convert_base(complex, x)\nExpr.__bool__ = lambda x: convert_base(bool, x)\nExpr.__nonzero__ = lambda x: convert_base(bool, x)\nExpr.__iter__ = into(Iterator)\n", "path": "blaze/interactive.py"}]}
3,996
472
gh_patches_debug_28719
rasdani/github-patches
git_diff
ansible__ansible-lint-533
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add type checking to role metadata in rule 701 MetaMainHasInfoRule </issue> <code> [start of lib/ansiblelint/rules/MetaMainHasInfoRule.py] 1 # Copyright (c) 2016, Will Thames and contributors 2 # Copyright (c) 2018, Ansible Project 3 4 from ansiblelint import AnsibleLintRule 5 6 7 class MetaMainHasInfoRule(AnsibleLintRule): 8 id = '701' 9 shortdesc = 'meta/main.yml should contain relevant info' 10 info = [ 11 'author', 12 'description', 13 'license', 14 'min_ansible_version', 15 'platforms', 16 ] 17 description = ( 18 'meta/main.yml should contain: ``{}``'.format(', '.join(info)) 19 ) 20 severity = 'HIGH' 21 tags = ['metadata'] 22 version_added = 'v4.0.0' 23 24 def matchplay(self, file, data): 25 if file['type'] != 'meta': 26 return False 27 28 galaxy_info = data.get('galaxy_info', None) 29 if not galaxy_info: 30 return [({'meta/main.yml': data}, 31 "No 'galaxy_info' found")] 32 33 results = [] 34 for info in self.info: 35 if not galaxy_info.get(info, None): 36 results.append(({'meta/main.yml': data}, 37 'Role info should contain %s' % info)) 38 39 platforms = galaxy_info.get('platforms', None) 40 if platforms: 41 for platform in platforms: 42 if not platform.get('name', None): 43 results.append(({'meta/main.yml': data}, 44 'Platform should contain name')) 45 46 return results 47 [end of lib/ansiblelint/rules/MetaMainHasInfoRule.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/lib/ansiblelint/rules/MetaMainHasInfoRule.py b/lib/ansiblelint/rules/MetaMainHasInfoRule.py --- a/lib/ansiblelint/rules/MetaMainHasInfoRule.py +++ b/lib/ansiblelint/rules/MetaMainHasInfoRule.py @@ -1,6 +1,8 @@ # Copyright (c) 2016, Will Thames and contributors # Copyright (c) 2018, Ansible Project +import six + from ansiblelint import AnsibleLintRule @@ -36,11 +38,29 @@ results.append(({'meta/main.yml': data}, 'Role info should contain %s' % info)) + for info in ['author', 'description']: + if not galaxy_info.get(info): + continue + if not isinstance(galaxy_info.get(info), six.string_types): + results.append(({'meta/main.yml': data}, + '%s should be a string' % info)) + platforms = galaxy_info.get('platforms', None) - if platforms: - for platform in platforms: - if not platform.get('name', None): - results.append(({'meta/main.yml': data}, - 'Platform should contain name')) + if not platforms: + return results + + if not isinstance(platforms, list): + results.append(({'meta/main.yml': data}, + 'Platforms should be a list of dictionaries')) + return results + + for platform in platforms: + if not isinstance(platform, dict): + results.append(({'meta/main.yml': data}, + 'Platforms should be a list of dictionaries')) + continue + if not platform.get('name', None): + results.append(({'meta/main.yml': data}, + 'Platform should contain name')) return results
{"golden_diff": "diff --git a/lib/ansiblelint/rules/MetaMainHasInfoRule.py b/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n--- a/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n+++ b/lib/ansiblelint/rules/MetaMainHasInfoRule.py\n@@ -1,6 +1,8 @@\n # Copyright (c) 2016, Will Thames and contributors\n # Copyright (c) 2018, Ansible Project\n \n+import six\n+\n from ansiblelint import AnsibleLintRule\n \n \n@@ -36,11 +38,29 @@\n results.append(({'meta/main.yml': data},\n 'Role info should contain %s' % info))\n \n+ for info in ['author', 'description']:\n+ if not galaxy_info.get(info):\n+ continue\n+ if not isinstance(galaxy_info.get(info), six.string_types):\n+ results.append(({'meta/main.yml': data},\n+ '%s should be a string' % info))\n+\n platforms = galaxy_info.get('platforms', None)\n- if platforms:\n- for platform in platforms:\n- if not platform.get('name', None):\n- results.append(({'meta/main.yml': data},\n- 'Platform should contain name'))\n+ if not platforms:\n+ return results\n+\n+ if not isinstance(platforms, list):\n+ results.append(({'meta/main.yml': data},\n+ 'Platforms should be a list of dictionaries'))\n+ return results\n+\n+ for platform in platforms:\n+ if not isinstance(platform, dict):\n+ results.append(({'meta/main.yml': data},\n+ 'Platforms should be a list of dictionaries'))\n+ continue\n+ if not platform.get('name', None):\n+ results.append(({'meta/main.yml': data},\n+ 'Platform should contain name'))\n \n return results\n", "issue": "Add type checking to role metadata in rule 701 MetaMainHasInfoRule\n\n", "before_files": [{"content": "# Copyright (c) 2016, Will Thames and contributors\n# Copyright (c) 2018, Ansible Project\n\nfrom ansiblelint import AnsibleLintRule\n\n\nclass MetaMainHasInfoRule(AnsibleLintRule):\n id = '701'\n shortdesc = 'meta/main.yml should contain relevant info'\n info = [\n 'author',\n 'description',\n 'license',\n 'min_ansible_version',\n 'platforms',\n ]\n description = (\n 'meta/main.yml should contain: ``{}``'.format(', '.join(info))\n )\n severity = 'HIGH'\n tags = ['metadata']\n version_added = 'v4.0.0'\n\n def matchplay(self, file, data):\n if file['type'] != 'meta':\n return False\n\n galaxy_info = data.get('galaxy_info', None)\n if not galaxy_info:\n return [({'meta/main.yml': data},\n \"No 'galaxy_info' found\")]\n\n results = []\n for info in self.info:\n if not galaxy_info.get(info, None):\n results.append(({'meta/main.yml': data},\n 'Role info should contain %s' % info))\n\n platforms = galaxy_info.get('platforms', None)\n if platforms:\n for platform in platforms:\n if not platform.get('name', None):\n results.append(({'meta/main.yml': data},\n 'Platform should contain name'))\n\n return results\n", "path": "lib/ansiblelint/rules/MetaMainHasInfoRule.py"}]}
973
404
gh_patches_debug_30718
rasdani/github-patches
git_diff
Lightning-Universe__lightning-flash-1163
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Trainer.finetune broken for Keypoint detection module ## 🐛 Bug Finetuning the `KeypointDetector` with freeze strategy does not freeze the parameters. ### To Reproduce Follow the [Keypoint detection example]( https://lightning-flash.readthedocs.io/en/latest/reference/keypoint_detection.html) #### Code sample ```python import flash print(flash.__version__) from flash.core.utilities.imports import example_requires from flash.image import KeypointDetectionData, KeypointDetector example_requires("image") import icedata # noqa: E402 data_dir = icedata.biwi.load_data() datamodule = KeypointDetectionData.from_icedata( train_folder=data_dir, val_split=0.1, batch_size=4, parser=icedata.biwi.parser, ) # 2. Build the task model = KeypointDetector( head="keypoint_rcnn", backbone="resnet18_fpn", num_keypoints=1, num_classes=datamodule.num_classes, ) # 3. Create the trainer and finetune the model trainer = flash.Trainer(max_epochs=100, gpus=1) trainer.finetune(model, datamodule=datamodule, strategy="freeze") trainable = 0 for name, param in model.adapter.model.named_parameters(): # or try from torchsummary import summary? if param.requires_grad: o = (name, np.product(param.shape), param.requires_grad) if o[2]: trainable += o[1] print(*o) print(trainable) ``` ### Expected behavior The number of trainable parameters should be smaller than 45.8M. Also, `trainer.fit` takes about as much time as `trainer.finetune`: ``` | Name | Type | Params -------------------------------------------------------------------- 0 | train_metrics | ModuleDict | 0 1 | val_metrics | ModuleDict | 0 2 | test_metrics | ModuleDict | 0 3 | adapter | IceVisionKeypointDetectionAdapter | 46.0 M -------------------------------------------------------------------- 45.8 M Trainable params 156 K Non-trainable params 46.0 M Total params 183.924 Total estimated model params size (MB) ``` ### Environment ``` efficientnet-pytorch 0.6.3 icevision 0.11.0 lightning-bolts 0.4.0 lightning-flash 0.7.0.dev0 pytorch-lightning 1.5.6 segmentation-models-pytorch 0.2.1 torch 1.10.0+cu111 torchaudio 0.10.0+cu111 torchmetrics 0.6.2 torchsummary 1.5.1 torchtext 0.11.0 torchvision 0.11.1+cu111 ``` ### Additional context <!-- Add any other context about the problem here. --> </issue> <code> [start of flash/core/integrations/icevision/backbones.py] 1 # Copyright The PyTorch Lightning team. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 from inspect import getmembers 15 16 from torch import nn 17 18 from flash.core.registry import FlashRegistry 19 from flash.core.utilities.imports import _ICEVISION_AVAILABLE 20 21 if _ICEVISION_AVAILABLE: 22 from icevision.backbones import BackboneConfig 23 24 25 def icevision_model_adapter(model_type): 26 class IceVisionModelAdapter(model_type.lightning.ModelAdapter): 27 def log(self, name, value, **kwargs): 28 if "prog_bar" not in kwargs: 29 kwargs["prog_bar"] = True 30 return super().log(name.split("/")[-1], value, **kwargs) 31 32 return IceVisionModelAdapter 33 34 35 def load_icevision(adapter, model_type, backbone, num_classes, **kwargs): 36 model = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs) 37 38 backbone = nn.Module() 39 params = model.param_groups()[0] 40 for i, param in enumerate(params): 41 backbone.register_parameter(f"backbone_{i}", param) 42 43 return model_type, model, adapter(model_type), backbone 44 45 46 def load_icevision_ignore_image_size(adapter, model_type, backbone, num_classes, image_size=None, **kwargs): 47 return load_icevision(adapter, model_type, backbone, num_classes, **kwargs) 48 49 50 def load_icevision_with_image_size(adapter, model_type, backbone, num_classes, image_size=None, **kwargs): 51 kwargs["img_size"] = image_size 52 return load_icevision(adapter, model_type, backbone, num_classes, **kwargs) 53 54 55 def get_backbones(model_type): 56 _BACKBONES = FlashRegistry("backbones") 57 58 for backbone_name, backbone_config in getmembers(model_type.backbones, lambda x: isinstance(x, BackboneConfig)): 59 _BACKBONES( 60 backbone_config, 61 name=backbone_name, 62 ) 63 return _BACKBONES 64 [end of flash/core/integrations/icevision/backbones.py] [start of flash/image/detection/model.py] 1 # Copyright The PyTorch Lightning team. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 from typing import Any, Dict, List, Optional 15 16 from flash.core.adapter import AdapterTask 17 from flash.core.data.output import PredsOutput 18 from flash.core.registry import FlashRegistry 19 from flash.core.utilities.types import LR_SCHEDULER_TYPE, OPTIMIZER_TYPE, OUTPUT_TYPE 20 from flash.image.detection.backbones import OBJECT_DETECTION_HEADS 21 22 23 class ObjectDetector(AdapterTask): 24 """The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. For more details, see 25 :ref:`object_detection`. 26 27 Args: 28 num_classes: The number of object classes. 29 backbone: String indicating the backbone CNN architecture to use. 30 head: String indicating the head module to use ontop of the backbone. 31 pretrained: Whether the model should be loaded with it's pretrained weights. 32 optimizer: Optimizer to use for training. 33 lr_scheduler: The LR scheduler to use during training. 34 learning_rate: The learning rate to use for training. 35 output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs. 36 predict_kwargs: dictionary containing parameters that will be used during the prediction phase. 37 kwargs: additional kwargs nessesary for initializing the backbone task 38 """ 39 40 heads: FlashRegistry = OBJECT_DETECTION_HEADS 41 42 required_extras: List[str] = ["image", "icevision", "effdet"] 43 44 def __init__( 45 self, 46 num_classes: int, 47 backbone: Optional[str] = "resnet18_fpn", 48 head: Optional[str] = "retinanet", 49 pretrained: bool = True, 50 optimizer: OPTIMIZER_TYPE = "Adam", 51 lr_scheduler: LR_SCHEDULER_TYPE = None, 52 learning_rate: float = 5e-3, 53 output: OUTPUT_TYPE = None, 54 predict_kwargs: Dict = None, 55 **kwargs: Any, 56 ): 57 self.save_hyperparameters() 58 59 predict_kwargs = predict_kwargs if predict_kwargs else {} 60 metadata = self.heads.get(head, with_metadata=True) 61 adapter = metadata["metadata"]["adapter"].from_task( 62 self, 63 num_classes=num_classes, 64 backbone=backbone, 65 head=head, 66 pretrained=pretrained, 67 predict_kwargs=predict_kwargs, 68 **kwargs, 69 ) 70 71 super().__init__( 72 adapter, 73 learning_rate=learning_rate, 74 optimizer=optimizer, 75 lr_scheduler=lr_scheduler, 76 output=output or PredsOutput(), 77 ) 78 79 def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None: 80 """This function is used only for debugging usage with CI.""" 81 # todo 82 83 @property 84 def predict_kwargs(self) -> Dict[str, Any]: 85 """The kwargs used for the prediction step.""" 86 return self.adapter.predict_kwargs 87 88 @predict_kwargs.setter 89 def predict_kwargs(self, predict_kwargs: Dict[str, Any]): 90 self.adapter.predict_kwargs = predict_kwargs 91 [end of flash/image/detection/model.py] [start of flash_examples/integrations/fiftyone/object_detection.py] 1 # Copyright The PyTorch Lightning team. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 import os 15 from itertools import chain 16 17 import flash 18 from flash.core.integrations.fiftyone import visualize 19 from flash.core.utilities.imports import example_requires 20 from flash.image import ObjectDetectionData, ObjectDetector 21 from flash.image.detection.output import FiftyOneDetectionLabelsOutput 22 23 example_requires("image") 24 25 import icedata # noqa: E402 26 27 # 1. Create the DataModule 28 data_dir = icedata.fridge.load_data() 29 30 datamodule = ObjectDetectionData.from_icedata( 31 train_folder=data_dir, 32 predict_folder=os.path.join(data_dir, "odFridgeObjects", "images"), 33 val_split=0.1, 34 transform_kwargs={"image_size": 128}, 35 parser=icedata.fridge.parser, 36 batch_size=4, 37 ) 38 39 # 2. Build the task 40 model = ObjectDetector(head="efficientdet", backbone="d0", num_classes=datamodule.num_classes, image_size=128) 41 42 # 3. Create the trainer and finetune the model 43 trainer = flash.Trainer(max_epochs=1) 44 trainer.finetune(model, datamodule=datamodule, strategy="freeze") 45 46 # 4. Set the output and get some predictions 47 model.output = FiftyOneDetectionLabelsOutput(return_filepath=True) # output FiftyOne format 48 predictions = trainer.predict(model, datamodule=datamodule) 49 predictions = list(chain.from_iterable(predictions)) # flatten batches 50 51 # 5. Visualize predictions in FiftyOne app 52 # Optional: pass `wait=True` to block execution until App is closed 53 session = visualize(predictions, wait=True) 54 [end of flash_examples/integrations/fiftyone/object_detection.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/flash/core/integrations/icevision/backbones.py b/flash/core/integrations/icevision/backbones.py --- a/flash/core/integrations/icevision/backbones.py +++ b/flash/core/integrations/icevision/backbones.py @@ -36,7 +36,7 @@ model = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs) backbone = nn.Module() - params = model.param_groups()[0] + params = sum(model.param_groups()[:-1], []) for i, param in enumerate(params): backbone.register_parameter(f"backbone_{i}", param) diff --git a/flash/image/detection/model.py b/flash/image/detection/model.py --- a/flash/image/detection/model.py +++ b/flash/image/detection/model.py @@ -49,7 +49,7 @@ pretrained: bool = True, optimizer: OPTIMIZER_TYPE = "Adam", lr_scheduler: LR_SCHEDULER_TYPE = None, - learning_rate: float = 5e-3, + learning_rate: float = 1e-2, output: OUTPUT_TYPE = None, predict_kwargs: Dict = None, **kwargs: Any, diff --git a/flash_examples/integrations/fiftyone/object_detection.py b/flash_examples/integrations/fiftyone/object_detection.py --- a/flash_examples/integrations/fiftyone/object_detection.py +++ b/flash_examples/integrations/fiftyone/object_detection.py @@ -33,14 +33,20 @@ val_split=0.1, transform_kwargs={"image_size": 128}, parser=icedata.fridge.parser, - batch_size=4, + batch_size=8, ) # 2. Build the task -model = ObjectDetector(head="efficientdet", backbone="d0", num_classes=datamodule.num_classes, image_size=128) +model = ObjectDetector( + head="efficientdet", + backbone="d0", + num_classes=datamodule.num_classes, + image_size=128, + lr_scheduler=("multisteplr", {"milestones": [20]}), +) # 3. Create the trainer and finetune the model -trainer = flash.Trainer(max_epochs=1) +trainer = flash.Trainer(max_epochs=30) trainer.finetune(model, datamodule=datamodule, strategy="freeze") # 4. Set the output and get some predictions
{"golden_diff": "diff --git a/flash/core/integrations/icevision/backbones.py b/flash/core/integrations/icevision/backbones.py\n--- a/flash/core/integrations/icevision/backbones.py\n+++ b/flash/core/integrations/icevision/backbones.py\n@@ -36,7 +36,7 @@\n model = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs)\n \n backbone = nn.Module()\n- params = model.param_groups()[0]\n+ params = sum(model.param_groups()[:-1], [])\n for i, param in enumerate(params):\n backbone.register_parameter(f\"backbone_{i}\", param)\n \ndiff --git a/flash/image/detection/model.py b/flash/image/detection/model.py\n--- a/flash/image/detection/model.py\n+++ b/flash/image/detection/model.py\n@@ -49,7 +49,7 @@\n pretrained: bool = True,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n- learning_rate: float = 5e-3,\n+ learning_rate: float = 1e-2,\n output: OUTPUT_TYPE = None,\n predict_kwargs: Dict = None,\n **kwargs: Any,\ndiff --git a/flash_examples/integrations/fiftyone/object_detection.py b/flash_examples/integrations/fiftyone/object_detection.py\n--- a/flash_examples/integrations/fiftyone/object_detection.py\n+++ b/flash_examples/integrations/fiftyone/object_detection.py\n@@ -33,14 +33,20 @@\n val_split=0.1,\n transform_kwargs={\"image_size\": 128},\n parser=icedata.fridge.parser,\n- batch_size=4,\n+ batch_size=8,\n )\n \n # 2. Build the task\n-model = ObjectDetector(head=\"efficientdet\", backbone=\"d0\", num_classes=datamodule.num_classes, image_size=128)\n+model = ObjectDetector(\n+ head=\"efficientdet\",\n+ backbone=\"d0\",\n+ num_classes=datamodule.num_classes,\n+ image_size=128,\n+ lr_scheduler=(\"multisteplr\", {\"milestones\": [20]}),\n+)\n \n # 3. Create the trainer and finetune the model\n-trainer = flash.Trainer(max_epochs=1)\n+trainer = flash.Trainer(max_epochs=30)\n trainer.finetune(model, datamodule=datamodule, strategy=\"freeze\")\n \n # 4. Set the output and get some predictions\n", "issue": "Trainer.finetune broken for Keypoint detection module\n## \ud83d\udc1b Bug\r\n\r\nFinetuning the `KeypointDetector` with freeze strategy does not freeze the parameters.\r\n\r\n\r\n### To Reproduce\r\n\r\nFollow the [Keypoint detection example]( https://lightning-flash.readthedocs.io/en/latest/reference/keypoint_detection.html)\r\n\r\n\r\n#### Code sample\r\n\r\n```python\r\nimport flash\r\nprint(flash.__version__)\r\nfrom flash.core.utilities.imports import example_requires\r\nfrom flash.image import KeypointDetectionData, KeypointDetector\r\n\r\nexample_requires(\"image\")\r\n\r\nimport icedata # noqa: E402\r\n\r\ndata_dir = icedata.biwi.load_data()\r\n\r\ndatamodule = KeypointDetectionData.from_icedata(\r\n train_folder=data_dir,\r\n val_split=0.1,\r\n batch_size=4,\r\n parser=icedata.biwi.parser,\r\n)\r\n\r\n# 2. Build the task\r\nmodel = KeypointDetector(\r\n head=\"keypoint_rcnn\",\r\n backbone=\"resnet18_fpn\",\r\n num_keypoints=1,\r\n num_classes=datamodule.num_classes,\r\n)\r\n\r\n# 3. Create the trainer and finetune the model\r\ntrainer = flash.Trainer(max_epochs=100, gpus=1)\r\ntrainer.finetune(model, datamodule=datamodule, strategy=\"freeze\")\r\n\r\ntrainable = 0\r\nfor name, param in model.adapter.model.named_parameters(): # or try from torchsummary import summary?\r\n if param.requires_grad:\r\n o = (name, np.product(param.shape), param.requires_grad)\r\n if o[2]:\r\n trainable += o[1]\r\n print(*o)\r\nprint(trainable)\r\n \r\n```\r\n\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\nThe number of trainable parameters should be smaller than 45.8M. Also, `trainer.fit` takes about as much time as `trainer.finetune`:\r\n\r\n```\r\n | Name | Type | Params\r\n--------------------------------------------------------------------\r\n0 | train_metrics | ModuleDict | 0 \r\n1 | val_metrics | ModuleDict | 0 \r\n2 | test_metrics | ModuleDict | 0 \r\n3 | adapter | IceVisionKeypointDetectionAdapter | 46.0 M\r\n--------------------------------------------------------------------\r\n45.8 M Trainable params\r\n156 K Non-trainable params\r\n46.0 M Total params\r\n183.924 Total estimated model params size (MB)\r\n```\r\n\r\n### Environment\r\n\r\n```\r\nefficientnet-pytorch 0.6.3\r\nicevision 0.11.0\r\nlightning-bolts 0.4.0\r\nlightning-flash 0.7.0.dev0\r\npytorch-lightning 1.5.6\r\nsegmentation-models-pytorch 0.2.1\r\ntorch 1.10.0+cu111\r\ntorchaudio 0.10.0+cu111\r\ntorchmetrics 0.6.2\r\ntorchsummary 1.5.1\r\ntorchtext 0.11.0\r\ntorchvision 0.11.1+cu111\r\n```\r\n\r\n### Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom inspect import getmembers\n\nfrom torch import nn\n\nfrom flash.core.registry import FlashRegistry\nfrom flash.core.utilities.imports import _ICEVISION_AVAILABLE\n\nif _ICEVISION_AVAILABLE:\n from icevision.backbones import BackboneConfig\n\n\ndef icevision_model_adapter(model_type):\n class IceVisionModelAdapter(model_type.lightning.ModelAdapter):\n def log(self, name, value, **kwargs):\n if \"prog_bar\" not in kwargs:\n kwargs[\"prog_bar\"] = True\n return super().log(name.split(\"/\")[-1], value, **kwargs)\n\n return IceVisionModelAdapter\n\n\ndef load_icevision(adapter, model_type, backbone, num_classes, **kwargs):\n model = model_type.model(backbone=backbone, num_classes=num_classes, **kwargs)\n\n backbone = nn.Module()\n params = model.param_groups()[0]\n for i, param in enumerate(params):\n backbone.register_parameter(f\"backbone_{i}\", param)\n\n return model_type, model, adapter(model_type), backbone\n\n\ndef load_icevision_ignore_image_size(adapter, model_type, backbone, num_classes, image_size=None, **kwargs):\n return load_icevision(adapter, model_type, backbone, num_classes, **kwargs)\n\n\ndef load_icevision_with_image_size(adapter, model_type, backbone, num_classes, image_size=None, **kwargs):\n kwargs[\"img_size\"] = image_size\n return load_icevision(adapter, model_type, backbone, num_classes, **kwargs)\n\n\ndef get_backbones(model_type):\n _BACKBONES = FlashRegistry(\"backbones\")\n\n for backbone_name, backbone_config in getmembers(model_type.backbones, lambda x: isinstance(x, BackboneConfig)):\n _BACKBONES(\n backbone_config,\n name=backbone_name,\n )\n return _BACKBONES\n", "path": "flash/core/integrations/icevision/backbones.py"}, {"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, List, Optional\n\nfrom flash.core.adapter import AdapterTask\nfrom flash.core.data.output import PredsOutput\nfrom flash.core.registry import FlashRegistry\nfrom flash.core.utilities.types import LR_SCHEDULER_TYPE, OPTIMIZER_TYPE, OUTPUT_TYPE\nfrom flash.image.detection.backbones import OBJECT_DETECTION_HEADS\n\n\nclass ObjectDetector(AdapterTask):\n \"\"\"The ``ObjectDetector`` is a :class:`~flash.Task` for detecting objects in images. For more details, see\n :ref:`object_detection`.\n\n Args:\n num_classes: The number of object classes.\n backbone: String indicating the backbone CNN architecture to use.\n head: String indicating the head module to use ontop of the backbone.\n pretrained: Whether the model should be loaded with it's pretrained weights.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n learning_rate: The learning rate to use for training.\n output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.\n predict_kwargs: dictionary containing parameters that will be used during the prediction phase.\n kwargs: additional kwargs nessesary for initializing the backbone task\n \"\"\"\n\n heads: FlashRegistry = OBJECT_DETECTION_HEADS\n\n required_extras: List[str] = [\"image\", \"icevision\", \"effdet\"]\n\n def __init__(\n self,\n num_classes: int,\n backbone: Optional[str] = \"resnet18_fpn\",\n head: Optional[str] = \"retinanet\",\n pretrained: bool = True,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n learning_rate: float = 5e-3,\n output: OUTPUT_TYPE = None,\n predict_kwargs: Dict = None,\n **kwargs: Any,\n ):\n self.save_hyperparameters()\n\n predict_kwargs = predict_kwargs if predict_kwargs else {}\n metadata = self.heads.get(head, with_metadata=True)\n adapter = metadata[\"metadata\"][\"adapter\"].from_task(\n self,\n num_classes=num_classes,\n backbone=backbone,\n head=head,\n pretrained=pretrained,\n predict_kwargs=predict_kwargs,\n **kwargs,\n )\n\n super().__init__(\n adapter,\n learning_rate=learning_rate,\n optimizer=optimizer,\n lr_scheduler=lr_scheduler,\n output=output or PredsOutput(),\n )\n\n def _ci_benchmark_fn(self, history: List[Dict[str, Any]]) -> None:\n \"\"\"This function is used only for debugging usage with CI.\"\"\"\n # todo\n\n @property\n def predict_kwargs(self) -> Dict[str, Any]:\n \"\"\"The kwargs used for the prediction step.\"\"\"\n return self.adapter.predict_kwargs\n\n @predict_kwargs.setter\n def predict_kwargs(self, predict_kwargs: Dict[str, Any]):\n self.adapter.predict_kwargs = predict_kwargs\n", "path": "flash/image/detection/model.py"}, {"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nfrom itertools import chain\n\nimport flash\nfrom flash.core.integrations.fiftyone import visualize\nfrom flash.core.utilities.imports import example_requires\nfrom flash.image import ObjectDetectionData, ObjectDetector\nfrom flash.image.detection.output import FiftyOneDetectionLabelsOutput\n\nexample_requires(\"image\")\n\nimport icedata # noqa: E402\n\n# 1. Create the DataModule\ndata_dir = icedata.fridge.load_data()\n\ndatamodule = ObjectDetectionData.from_icedata(\n train_folder=data_dir,\n predict_folder=os.path.join(data_dir, \"odFridgeObjects\", \"images\"),\n val_split=0.1,\n transform_kwargs={\"image_size\": 128},\n parser=icedata.fridge.parser,\n batch_size=4,\n)\n\n# 2. Build the task\nmodel = ObjectDetector(head=\"efficientdet\", backbone=\"d0\", num_classes=datamodule.num_classes, image_size=128)\n\n# 3. Create the trainer and finetune the model\ntrainer = flash.Trainer(max_epochs=1)\ntrainer.finetune(model, datamodule=datamodule, strategy=\"freeze\")\n\n# 4. Set the output and get some predictions\nmodel.output = FiftyOneDetectionLabelsOutput(return_filepath=True) # output FiftyOne format\npredictions = trainer.predict(model, datamodule=datamodule)\npredictions = list(chain.from_iterable(predictions)) # flatten batches\n\n# 5. Visualize predictions in FiftyOne app\n# Optional: pass `wait=True` to block execution until App is closed\nsession = visualize(predictions, wait=True)\n", "path": "flash_examples/integrations/fiftyone/object_detection.py"}]}
3,436
559
gh_patches_debug_19057
rasdani/github-patches
git_diff
microsoft__torchgeo-991
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add CSVLogger to train.py ### Summary Our current `train.py` script only logs to TensorBoard. We should add a CSV logger. ### Rationale In order to determine the optimal hyperparams in a hyperparam sweep, we need an easy way of finding the final val accuracy of each run. Currently, the only way to get this is to convert the TensorBoard logger output to a CSV. If we log directly to CSV this becomes much easier, especially for those who don't use TensorBoard. ### Implementation _No response_ ### Alternatives _No response_ ### Additional information _No response_ </issue> <code> [start of train.py] 1 #!/usr/bin/env python3 2 3 # Copyright (c) Microsoft Corporation. All rights reserved. 4 # Licensed under the MIT License. 5 6 """torchgeo model training script.""" 7 8 import os 9 from typing import Any, Dict, Tuple, Type, cast 10 11 import pytorch_lightning as pl 12 from omegaconf import DictConfig, OmegaConf 13 from pytorch_lightning import loggers as pl_loggers 14 from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint 15 16 from torchgeo.datamodules import ( 17 BigEarthNetDataModule, 18 ChesapeakeCVPRDataModule, 19 COWCCountingDataModule, 20 ETCI2021DataModule, 21 EuroSATDataModule, 22 InriaAerialImageLabelingDataModule, 23 LandCoverAIDataModule, 24 NAIPChesapeakeDataModule, 25 NASAMarineDebrisDataModule, 26 RESISC45DataModule, 27 SEN12MSDataModule, 28 So2SatDataModule, 29 SpaceNet1DataModule, 30 TropicalCycloneDataModule, 31 UCMercedDataModule, 32 ) 33 from torchgeo.trainers import ( 34 BYOLTask, 35 ClassificationTask, 36 MultiLabelClassificationTask, 37 ObjectDetectionTask, 38 RegressionTask, 39 SemanticSegmentationTask, 40 ) 41 42 TASK_TO_MODULES_MAPPING: Dict[ 43 str, Tuple[Type[pl.LightningModule], Type[pl.LightningDataModule]] 44 ] = { 45 "bigearthnet": (MultiLabelClassificationTask, BigEarthNetDataModule), 46 "byol": (BYOLTask, ChesapeakeCVPRDataModule), 47 "chesapeake_cvpr": (SemanticSegmentationTask, ChesapeakeCVPRDataModule), 48 "cowc_counting": (RegressionTask, COWCCountingDataModule), 49 "cyclone": (RegressionTask, TropicalCycloneDataModule), 50 "eurosat": (ClassificationTask, EuroSATDataModule), 51 "etci2021": (SemanticSegmentationTask, ETCI2021DataModule), 52 "inria": (SemanticSegmentationTask, InriaAerialImageLabelingDataModule), 53 "landcoverai": (SemanticSegmentationTask, LandCoverAIDataModule), 54 "naipchesapeake": (SemanticSegmentationTask, NAIPChesapeakeDataModule), 55 "nasa_marine_debris": (ObjectDetectionTask, NASAMarineDebrisDataModule), 56 "resisc45": (ClassificationTask, RESISC45DataModule), 57 "sen12ms": (SemanticSegmentationTask, SEN12MSDataModule), 58 "so2sat": (ClassificationTask, So2SatDataModule), 59 "spacenet1": (SemanticSegmentationTask, SpaceNet1DataModule), 60 "ucmerced": (ClassificationTask, UCMercedDataModule), 61 } 62 63 64 def set_up_omegaconf() -> DictConfig: 65 """Loads program arguments from either YAML config files or command line arguments. 66 67 This method loads defaults/a schema from "conf/defaults.yaml" as well as potential 68 arguments from the command line. If one of the command line arguments is 69 "config_file", then we additionally read arguments from that YAML file. One of the 70 config file based arguments or command line arguments must specify task.name. The 71 task.name value is used to grab a task specific defaults from its respective 72 trainer. The final configuration is given as merge(task_defaults, defaults, 73 config file, command line). The merge() works from the first argument to the last, 74 replacing existing values with newer values. Additionally, if any values are 75 merged into task_defaults without matching types, then there will be a runtime 76 error. 77 78 Returns: 79 an OmegaConf DictConfig containing all the validated program arguments 80 81 Raises: 82 FileNotFoundError: when ``config_file`` does not exist 83 ValueError: when ``task.name`` is not a valid task 84 """ 85 conf = OmegaConf.load("conf/defaults.yaml") 86 command_line_conf = OmegaConf.from_cli() 87 88 if "config_file" in command_line_conf: 89 config_fn = command_line_conf.config_file 90 if not os.path.isfile(config_fn): 91 raise FileNotFoundError(f"config_file={config_fn} is not a valid file") 92 93 user_conf = OmegaConf.load(config_fn) 94 conf = OmegaConf.merge(conf, user_conf) 95 96 conf = OmegaConf.merge( # Merge in any arguments passed via the command line 97 conf, command_line_conf 98 ) 99 100 # These OmegaConf structured configs enforce a schema at runtime, see: 101 # https://omegaconf.readthedocs.io/en/2.0_branch/structured_config.html#merging-with-other-configs 102 task_name = conf.experiment.task 103 task_config_fn = os.path.join("conf", f"{task_name}.yaml") 104 if task_name == "test": 105 task_conf = OmegaConf.create() 106 elif os.path.exists(task_config_fn): 107 task_conf = cast(DictConfig, OmegaConf.load(task_config_fn)) 108 else: 109 raise ValueError( 110 f"experiment.task={task_name} is not recognized as a valid task" 111 ) 112 113 conf = OmegaConf.merge(task_conf, conf) 114 conf = cast(DictConfig, conf) # convince mypy that everything is alright 115 116 return conf 117 118 119 def main(conf: DictConfig) -> None: 120 """Main training loop.""" 121 ###################################### 122 # Setup output directory 123 ###################################### 124 125 experiment_name = conf.experiment.name 126 task_name = conf.experiment.task 127 if os.path.isfile(conf.program.output_dir): 128 raise NotADirectoryError("`program.output_dir` must be a directory") 129 os.makedirs(conf.program.output_dir, exist_ok=True) 130 131 experiment_dir = os.path.join(conf.program.output_dir, experiment_name) 132 os.makedirs(experiment_dir, exist_ok=True) 133 134 if len(os.listdir(experiment_dir)) > 0: 135 if conf.program.overwrite: 136 print( 137 f"WARNING! The experiment directory, {experiment_dir}, already exists, " 138 + "we might overwrite data in it!" 139 ) 140 else: 141 raise FileExistsError( 142 f"The experiment directory, {experiment_dir}, already exists and isn't " 143 + "empty. We don't want to overwrite any existing results, exiting..." 144 ) 145 146 with open(os.path.join(experiment_dir, "experiment_config.yaml"), "w") as f: 147 OmegaConf.save(config=conf, f=f) 148 149 ###################################### 150 # Choose task to run based on arguments or configuration 151 ###################################### 152 # Convert the DictConfig into a dictionary so that we can pass as kwargs. 153 task_args = cast(Dict[str, Any], OmegaConf.to_object(conf.experiment.module)) 154 datamodule_args = cast( 155 Dict[str, Any], OmegaConf.to_object(conf.experiment.datamodule) 156 ) 157 158 datamodule: pl.LightningDataModule 159 task: pl.LightningModule 160 if task_name in TASK_TO_MODULES_MAPPING: 161 task_class, datamodule_class = TASK_TO_MODULES_MAPPING[task_name] 162 task = task_class(**task_args) 163 datamodule = datamodule_class(**datamodule_args) 164 else: 165 raise ValueError( 166 f"experiment.task={task_name} is not recognized as a valid task" 167 ) 168 169 ###################################### 170 # Setup trainer 171 ###################################### 172 tb_logger = pl_loggers.TensorBoardLogger(conf.program.log_dir, name=experiment_name) 173 174 if isinstance(task, ObjectDetectionTask): 175 monitor_metric = "val_map" 176 mode = "max" 177 else: 178 monitor_metric = "val_loss" 179 mode = "min" 180 181 checkpoint_callback = ModelCheckpoint( 182 monitor=monitor_metric, 183 filename="checkpoint-epoch{epoch:02d}-val_loss{val_loss:.2f}", 184 dirpath=experiment_dir, 185 save_top_k=1, 186 save_last=True, 187 ) 188 early_stopping_callback = EarlyStopping( 189 monitor=monitor_metric, min_delta=0.00, patience=18, mode=mode 190 ) 191 192 trainer_args = cast(Dict[str, Any], OmegaConf.to_object(conf.trainer)) 193 194 trainer_args["callbacks"] = [checkpoint_callback, early_stopping_callback] 195 trainer_args["logger"] = tb_logger 196 trainer_args["default_root_dir"] = experiment_dir 197 trainer = pl.Trainer(**trainer_args) 198 199 if trainer_args.get("auto_lr_find"): 200 trainer.tune(model=task, datamodule=datamodule) 201 202 ###################################### 203 # Run experiment 204 ###################################### 205 trainer.fit(model=task, datamodule=datamodule) 206 trainer.test(ckpt_path="best", datamodule=datamodule) 207 208 209 if __name__ == "__main__": 210 # Taken from https://github.com/pangeo-data/cog-best-practices 211 _rasterio_best_practices = { 212 "GDAL_DISABLE_READDIR_ON_OPEN": "EMPTY_DIR", 213 "AWS_NO_SIGN_REQUEST": "YES", 214 "GDAL_MAX_RAW_BLOCK_CACHE_SIZE": "200000000", 215 "GDAL_SWATH_SIZE": "200000000", 216 "VSI_CURL_CACHE_SIZE": "200000000", 217 } 218 os.environ.update(_rasterio_best_practices) 219 220 conf = set_up_omegaconf() 221 222 # Set random seed for reproducibility 223 # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.utilities.seed.html#pytorch_lightning.utilities.seed.seed_everything 224 pl.seed_everything(conf.program.seed) 225 226 # Main training procedure 227 main(conf) 228 [end of train.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/train.py b/train.py --- a/train.py +++ b/train.py @@ -170,6 +170,7 @@ # Setup trainer ###################################### tb_logger = pl_loggers.TensorBoardLogger(conf.program.log_dir, name=experiment_name) + csv_logger = pl_loggers.CSVLogger(conf.program.log_dir, name=experiment_name) if isinstance(task, ObjectDetectionTask): monitor_metric = "val_map" @@ -192,7 +193,7 @@ trainer_args = cast(Dict[str, Any], OmegaConf.to_object(conf.trainer)) trainer_args["callbacks"] = [checkpoint_callback, early_stopping_callback] - trainer_args["logger"] = tb_logger + trainer_args["logger"] = [tb_logger, csv_logger] trainer_args["default_root_dir"] = experiment_dir trainer = pl.Trainer(**trainer_args)
{"golden_diff": "diff --git a/train.py b/train.py\n--- a/train.py\n+++ b/train.py\n@@ -170,6 +170,7 @@\n # Setup trainer\n ######################################\n tb_logger = pl_loggers.TensorBoardLogger(conf.program.log_dir, name=experiment_name)\n+ csv_logger = pl_loggers.CSVLogger(conf.program.log_dir, name=experiment_name)\n \n if isinstance(task, ObjectDetectionTask):\n monitor_metric = \"val_map\"\n@@ -192,7 +193,7 @@\n trainer_args = cast(Dict[str, Any], OmegaConf.to_object(conf.trainer))\n \n trainer_args[\"callbacks\"] = [checkpoint_callback, early_stopping_callback]\n- trainer_args[\"logger\"] = tb_logger\n+ trainer_args[\"logger\"] = [tb_logger, csv_logger]\n trainer_args[\"default_root_dir\"] = experiment_dir\n trainer = pl.Trainer(**trainer_args)\n", "issue": "Add CSVLogger to train.py\n### Summary\n\nOur current `train.py` script only logs to TensorBoard. We should add a CSV logger.\n\n### Rationale\n\nIn order to determine the optimal hyperparams in a hyperparam sweep, we need an easy way of finding the final val accuracy of each run. Currently, the only way to get this is to convert the TensorBoard logger output to a CSV. If we log directly to CSV this becomes much easier, especially for those who don't use TensorBoard.\n\n### Implementation\n\n_No response_\n\n### Alternatives\n\n_No response_\n\n### Additional information\n\n_No response_\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"torchgeo model training script.\"\"\"\n\nimport os\nfrom typing import Any, Dict, Tuple, Type, cast\n\nimport pytorch_lightning as pl\nfrom omegaconf import DictConfig, OmegaConf\nfrom pytorch_lightning import loggers as pl_loggers\nfrom pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint\n\nfrom torchgeo.datamodules import (\n BigEarthNetDataModule,\n ChesapeakeCVPRDataModule,\n COWCCountingDataModule,\n ETCI2021DataModule,\n EuroSATDataModule,\n InriaAerialImageLabelingDataModule,\n LandCoverAIDataModule,\n NAIPChesapeakeDataModule,\n NASAMarineDebrisDataModule,\n RESISC45DataModule,\n SEN12MSDataModule,\n So2SatDataModule,\n SpaceNet1DataModule,\n TropicalCycloneDataModule,\n UCMercedDataModule,\n)\nfrom torchgeo.trainers import (\n BYOLTask,\n ClassificationTask,\n MultiLabelClassificationTask,\n ObjectDetectionTask,\n RegressionTask,\n SemanticSegmentationTask,\n)\n\nTASK_TO_MODULES_MAPPING: Dict[\n str, Tuple[Type[pl.LightningModule], Type[pl.LightningDataModule]]\n] = {\n \"bigearthnet\": (MultiLabelClassificationTask, BigEarthNetDataModule),\n \"byol\": (BYOLTask, ChesapeakeCVPRDataModule),\n \"chesapeake_cvpr\": (SemanticSegmentationTask, ChesapeakeCVPRDataModule),\n \"cowc_counting\": (RegressionTask, COWCCountingDataModule),\n \"cyclone\": (RegressionTask, TropicalCycloneDataModule),\n \"eurosat\": (ClassificationTask, EuroSATDataModule),\n \"etci2021\": (SemanticSegmentationTask, ETCI2021DataModule),\n \"inria\": (SemanticSegmentationTask, InriaAerialImageLabelingDataModule),\n \"landcoverai\": (SemanticSegmentationTask, LandCoverAIDataModule),\n \"naipchesapeake\": (SemanticSegmentationTask, NAIPChesapeakeDataModule),\n \"nasa_marine_debris\": (ObjectDetectionTask, NASAMarineDebrisDataModule),\n \"resisc45\": (ClassificationTask, RESISC45DataModule),\n \"sen12ms\": (SemanticSegmentationTask, SEN12MSDataModule),\n \"so2sat\": (ClassificationTask, So2SatDataModule),\n \"spacenet1\": (SemanticSegmentationTask, SpaceNet1DataModule),\n \"ucmerced\": (ClassificationTask, UCMercedDataModule),\n}\n\n\ndef set_up_omegaconf() -> DictConfig:\n \"\"\"Loads program arguments from either YAML config files or command line arguments.\n\n This method loads defaults/a schema from \"conf/defaults.yaml\" as well as potential\n arguments from the command line. If one of the command line arguments is\n \"config_file\", then we additionally read arguments from that YAML file. One of the\n config file based arguments or command line arguments must specify task.name. The\n task.name value is used to grab a task specific defaults from its respective\n trainer. The final configuration is given as merge(task_defaults, defaults,\n config file, command line). The merge() works from the first argument to the last,\n replacing existing values with newer values. Additionally, if any values are\n merged into task_defaults without matching types, then there will be a runtime\n error.\n\n Returns:\n an OmegaConf DictConfig containing all the validated program arguments\n\n Raises:\n FileNotFoundError: when ``config_file`` does not exist\n ValueError: when ``task.name`` is not a valid task\n \"\"\"\n conf = OmegaConf.load(\"conf/defaults.yaml\")\n command_line_conf = OmegaConf.from_cli()\n\n if \"config_file\" in command_line_conf:\n config_fn = command_line_conf.config_file\n if not os.path.isfile(config_fn):\n raise FileNotFoundError(f\"config_file={config_fn} is not a valid file\")\n\n user_conf = OmegaConf.load(config_fn)\n conf = OmegaConf.merge(conf, user_conf)\n\n conf = OmegaConf.merge( # Merge in any arguments passed via the command line\n conf, command_line_conf\n )\n\n # These OmegaConf structured configs enforce a schema at runtime, see:\n # https://omegaconf.readthedocs.io/en/2.0_branch/structured_config.html#merging-with-other-configs\n task_name = conf.experiment.task\n task_config_fn = os.path.join(\"conf\", f\"{task_name}.yaml\")\n if task_name == \"test\":\n task_conf = OmegaConf.create()\n elif os.path.exists(task_config_fn):\n task_conf = cast(DictConfig, OmegaConf.load(task_config_fn))\n else:\n raise ValueError(\n f\"experiment.task={task_name} is not recognized as a valid task\"\n )\n\n conf = OmegaConf.merge(task_conf, conf)\n conf = cast(DictConfig, conf) # convince mypy that everything is alright\n\n return conf\n\n\ndef main(conf: DictConfig) -> None:\n \"\"\"Main training loop.\"\"\"\n ######################################\n # Setup output directory\n ######################################\n\n experiment_name = conf.experiment.name\n task_name = conf.experiment.task\n if os.path.isfile(conf.program.output_dir):\n raise NotADirectoryError(\"`program.output_dir` must be a directory\")\n os.makedirs(conf.program.output_dir, exist_ok=True)\n\n experiment_dir = os.path.join(conf.program.output_dir, experiment_name)\n os.makedirs(experiment_dir, exist_ok=True)\n\n if len(os.listdir(experiment_dir)) > 0:\n if conf.program.overwrite:\n print(\n f\"WARNING! The experiment directory, {experiment_dir}, already exists, \"\n + \"we might overwrite data in it!\"\n )\n else:\n raise FileExistsError(\n f\"The experiment directory, {experiment_dir}, already exists and isn't \"\n + \"empty. We don't want to overwrite any existing results, exiting...\"\n )\n\n with open(os.path.join(experiment_dir, \"experiment_config.yaml\"), \"w\") as f:\n OmegaConf.save(config=conf, f=f)\n\n ######################################\n # Choose task to run based on arguments or configuration\n ######################################\n # Convert the DictConfig into a dictionary so that we can pass as kwargs.\n task_args = cast(Dict[str, Any], OmegaConf.to_object(conf.experiment.module))\n datamodule_args = cast(\n Dict[str, Any], OmegaConf.to_object(conf.experiment.datamodule)\n )\n\n datamodule: pl.LightningDataModule\n task: pl.LightningModule\n if task_name in TASK_TO_MODULES_MAPPING:\n task_class, datamodule_class = TASK_TO_MODULES_MAPPING[task_name]\n task = task_class(**task_args)\n datamodule = datamodule_class(**datamodule_args)\n else:\n raise ValueError(\n f\"experiment.task={task_name} is not recognized as a valid task\"\n )\n\n ######################################\n # Setup trainer\n ######################################\n tb_logger = pl_loggers.TensorBoardLogger(conf.program.log_dir, name=experiment_name)\n\n if isinstance(task, ObjectDetectionTask):\n monitor_metric = \"val_map\"\n mode = \"max\"\n else:\n monitor_metric = \"val_loss\"\n mode = \"min\"\n\n checkpoint_callback = ModelCheckpoint(\n monitor=monitor_metric,\n filename=\"checkpoint-epoch{epoch:02d}-val_loss{val_loss:.2f}\",\n dirpath=experiment_dir,\n save_top_k=1,\n save_last=True,\n )\n early_stopping_callback = EarlyStopping(\n monitor=monitor_metric, min_delta=0.00, patience=18, mode=mode\n )\n\n trainer_args = cast(Dict[str, Any], OmegaConf.to_object(conf.trainer))\n\n trainer_args[\"callbacks\"] = [checkpoint_callback, early_stopping_callback]\n trainer_args[\"logger\"] = tb_logger\n trainer_args[\"default_root_dir\"] = experiment_dir\n trainer = pl.Trainer(**trainer_args)\n\n if trainer_args.get(\"auto_lr_find\"):\n trainer.tune(model=task, datamodule=datamodule)\n\n ######################################\n # Run experiment\n ######################################\n trainer.fit(model=task, datamodule=datamodule)\n trainer.test(ckpt_path=\"best\", datamodule=datamodule)\n\n\nif __name__ == \"__main__\":\n # Taken from https://github.com/pangeo-data/cog-best-practices\n _rasterio_best_practices = {\n \"GDAL_DISABLE_READDIR_ON_OPEN\": \"EMPTY_DIR\",\n \"AWS_NO_SIGN_REQUEST\": \"YES\",\n \"GDAL_MAX_RAW_BLOCK_CACHE_SIZE\": \"200000000\",\n \"GDAL_SWATH_SIZE\": \"200000000\",\n \"VSI_CURL_CACHE_SIZE\": \"200000000\",\n }\n os.environ.update(_rasterio_best_practices)\n\n conf = set_up_omegaconf()\n\n # Set random seed for reproducibility\n # https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.utilities.seed.html#pytorch_lightning.utilities.seed.seed_everything\n pl.seed_everything(conf.program.seed)\n\n # Main training procedure\n main(conf)\n", "path": "train.py"}]}
3,299
199
gh_patches_debug_21337
rasdani/github-patches
git_diff
jupyterhub__jupyterhub-768
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Cull_Idle Instructions I'm having trouble using the [cull_idle example](https://github.com/jupyterhub/jupyterhub/tree/master/examples/cull-idle#configure-cull-idle-to-run-as-a-hub-managed-service) as a managed service. I added the [example config section](https://github.com/jupyterhub/jupyterhub/blob/master/examples/cull-idle/jupyterhub_config.py) to the bottom of my existing `jupyterhub_config.py` file: ``` python c = get_config() docker_ip = '172.17.0.1' hub_name='127.0.0.1' c.JupyterHub.ip = hub_name c.JupyterHub.hub_ip = docker_ip c.JupyterHub.debug_proxy = True c.JupyterHub.port = 8000 c.JupyterHub.cleanup_servers = False c.JupyterHub.cleanup_proxy = True c.JupyterHub.ssl_key = '/etc/jupyterhub/ssl/jhub.key' c.JupyterHub.ssl_cert = '/etc/jupyterhub/ssl/jhub.crt' c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner' c.DockerSpawner.hub_ip_connect = docker_ip c.DockerSpawner.container_ip = docker_ip c.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret' c.JupyterHub.log_level = 'DEBUG' #CULL_IDLE SECTION c.JupyterHub.services = [{ 'name': 'cull-idle', 'admin': True, 'command': 'python cull_idle_servers.py --timeout=3600'.split()}] ``` I see the resulting error: ``` [C 2016-09-20 02:28:37.026 JupyterHub app:1444] Failed to start service cull-idle Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/jupyterhub/app.py", line 1442, in start yield service.start() File "/usr/local/lib/python3.5/dist-packages/jupyterhub/services/service.py", line 228, in start env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url AttributeError: 'NoneType' object has no attribute 'base_url' ``` Any ideas of what could be causing this? Thank you! </issue> <code> [start of jupyterhub/services/service.py] 1 """A service is a process that talks to JupyterHub 2 3 Cases: 4 5 Managed: 6 - managed by JuyterHub (always subprocess, no custom Spawners) 7 - always a long-running process 8 - managed services are restarted automatically if they exit unexpectedly 9 Unmanaged: 10 - managed by external service (docker, systemd, etc.) 11 - do not need to be long-running processes, or processes at all 12 13 14 URL: needs a route added to the proxy. 15 - Public route will always be /services/service-name 16 - url specified in config 17 - if port is 0, Hub will select a port 18 19 API access: 20 - admin: tokens will have admin-access to the API 21 - not admin: tokens will only have non-admin access 22 (not much they can do other than defer to Hub for auth) 23 24 An externally managed service running on a URL:: 25 26 { 27 'name': 'my-service', 28 'url': 'https://host:8888', 29 'admin': True, 30 'token': 'super-secret', 31 } 32 33 A hub-managed service with no URL: 34 35 { 36 'name': 'cull-idle', 37 'command': ['python', '/path/to/cull-idle'] 38 'admin': True, 39 } 40 """ 41 42 from getpass import getuser 43 import pipes 44 import shutil 45 from subprocess import Popen 46 from urllib.parse import urlparse 47 48 from tornado import gen 49 50 from traitlets import ( 51 HasTraits, 52 Any, Bool, Dict, Unicode, Instance, 53 default, observe, 54 ) 55 from traitlets.config import LoggingConfigurable 56 57 from .. import orm 58 from ..traitlets import Command 59 from ..spawner import LocalProcessSpawner 60 from ..utils import url_path_join 61 62 class _MockUser(HasTraits): 63 name = Unicode() 64 server = Instance(orm.Server, allow_none=True) 65 state = Dict() 66 service = Instance(__module__ + '.Service') 67 68 # We probably shouldn't use a Spawner here, 69 # but there are too many concepts to share. 70 71 class _ServiceSpawner(LocalProcessSpawner): 72 """Subclass of LocalProcessSpawner 73 74 Removes notebook-specific-ness from LocalProcessSpawner. 75 """ 76 cwd = Unicode() 77 cmd = Command(minlen=0) 78 79 def make_preexec_fn(self, name): 80 if not name or name == getuser(): 81 # no setuid if no name 82 return 83 return super().make_preexec_fn(name) 84 85 def start(self): 86 """Start the process""" 87 env = self.get_env() 88 cmd = self.cmd 89 90 self.log.info("Spawning %s", ' '.join(pipes.quote(s) for s in cmd)) 91 try: 92 self.proc = Popen(self.cmd, env=env, 93 preexec_fn=self.make_preexec_fn(self.user.name), 94 start_new_session=True, # don't forward signals 95 cwd=self.cwd or None, 96 ) 97 except PermissionError: 98 # use which to get abspath 99 script = shutil.which(cmd[0]) or cmd[0] 100 self.log.error("Permission denied trying to run %r. Does %s have access to this file?", 101 script, self.user.name, 102 ) 103 raise 104 105 self.pid = self.proc.pid 106 107 class Service(LoggingConfigurable): 108 """An object wrapping a service specification for Hub API consumers. 109 110 A service has inputs: 111 112 - name: str 113 the name of the service 114 - admin: bool(false) 115 whether the service should have administrative privileges 116 - url: str (None) 117 The URL where the service is/should be. 118 If specified, the service will be added to the proxy at /services/:name 119 120 If a service is to be managed by the Hub, it has a few extra options: 121 122 - command: (str/Popen list) 123 Command for JupyterHub to spawn the service. 124 Only use this if the service should be a subprocess. 125 If command is not specified, it is assumed to be managed 126 by a 127 - env: dict 128 environment variables to add to the current env 129 - user: str 130 The name of a system user to become. 131 If unspecified, run as the same user as the Hub. 132 """ 133 134 # inputs: 135 name = Unicode( 136 help="""The name of the service. 137 138 If the service has an http endpoint, it 139 """ 140 ).tag(input=True) 141 admin = Bool(False, 142 help="Does the service need admin-access to the Hub API?" 143 ).tag(input=True) 144 url = Unicode( 145 help="""URL of the service. 146 147 Only specify if the service runs an HTTP(s) endpoint that. 148 If managed, will be passed as JUPYTERHUB_SERVICE_URL env. 149 """ 150 ).tag(input=True) 151 api_token = Unicode( 152 help="""The API token to use for the service. 153 154 If unspecified, an API token will be generated for managed services. 155 """ 156 ).tag(input=True) 157 # Managed service API: 158 159 @property 160 def managed(self): 161 """Am I managed by the Hub?""" 162 return bool(self.command) 163 164 command = Command(minlen=0, 165 help="Command to spawn this service, if managed." 166 ).tag(input=True) 167 cwd = Unicode( 168 help="""The working directory in which to run the service.""" 169 ).tag(input=True) 170 environment = Dict( 171 help="""Environment variables to pass to the service. 172 Only used if the Hub is spawning the service. 173 """ 174 ).tag(input=True) 175 user = Unicode(getuser(), 176 help="""The user to become when launching the service. 177 178 If unspecified, run the service as the same user as the Hub. 179 """ 180 ).tag(input=True) 181 182 domain = Unicode() 183 host = Unicode() 184 proc = Any() 185 186 # handles on globals: 187 proxy = Any() 188 base_url = Unicode() 189 db = Any() 190 orm = Any() 191 192 @property 193 def server(self): 194 return self.orm.server 195 196 @property 197 def prefix(self): 198 return url_path_join(self.base_url, 'services', self.name) 199 200 @property 201 def proxy_path(self): 202 if not self.server: 203 return '' 204 if self.domain: 205 return url_path_join('/' + self.domain, self.server.base_url) 206 else: 207 return self.server.base_url 208 209 def __repr__(self): 210 return "<{cls}(name={name}{managed})>".format( 211 cls=self.__class__.__name__, 212 name=self.name, 213 managed=' managed' if self.managed else '', 214 ) 215 216 def start(self): 217 """Start a managed service""" 218 if not self.managed: 219 raise RuntimeError("Cannot start unmanaged service %s" % self) 220 self.log.info("Starting service %r: %r", self.name, self.command) 221 env = {} 222 env.update(self.environment) 223 224 env['JUPYTERHUB_SERVICE_NAME'] = self.name 225 env['JUPYTERHUB_API_TOKEN'] = self.api_token 226 env['JUPYTERHUB_API_URL'] = self.hub_api_url 227 env['JUPYTERHUB_BASE_URL'] = self.base_url 228 env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url 229 env['JUPYTERHUB_SERVICE_URL'] = self.url 230 231 self.spawner = _ServiceSpawner( 232 cmd=self.command, 233 environment=env, 234 api_token=self.api_token, 235 cwd=self.cwd, 236 user=_MockUser( 237 name=self.user, 238 service=self, 239 server=self.orm.server, 240 ), 241 ) 242 self.spawner.start() 243 self.proc = self.spawner.proc 244 self.spawner.add_poll_callback(self._proc_stopped) 245 self.spawner.start_polling() 246 247 def _proc_stopped(self): 248 """Called when the service process unexpectedly exits""" 249 self.log.error("Service %s exited with status %i", self.name, self.proc.returncode) 250 self.start() 251 252 def stop(self): 253 """Stop a managed service""" 254 if not self.managed: 255 raise RuntimeError("Cannot start unmanaged service %s" % self) 256 self.spawner.stop_polling() 257 return self.spawner.stop() 258 [end of jupyterhub/services/service.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/jupyterhub/services/service.py b/jupyterhub/services/service.py --- a/jupyterhub/services/service.py +++ b/jupyterhub/services/service.py @@ -225,8 +225,9 @@ env['JUPYTERHUB_API_TOKEN'] = self.api_token env['JUPYTERHUB_API_URL'] = self.hub_api_url env['JUPYTERHUB_BASE_URL'] = self.base_url - env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url - env['JUPYTERHUB_SERVICE_URL'] = self.url + if self.url: + env['JUPYTERHUB_SERVICE_URL'] = self.url + env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url self.spawner = _ServiceSpawner( cmd=self.command, @@ -248,7 +249,7 @@ """Called when the service process unexpectedly exits""" self.log.error("Service %s exited with status %i", self.name, self.proc.returncode) self.start() - + def stop(self): """Stop a managed service""" if not self.managed:
{"golden_diff": "diff --git a/jupyterhub/services/service.py b/jupyterhub/services/service.py\n--- a/jupyterhub/services/service.py\n+++ b/jupyterhub/services/service.py\n@@ -225,8 +225,9 @@\n env['JUPYTERHUB_API_TOKEN'] = self.api_token\n env['JUPYTERHUB_API_URL'] = self.hub_api_url\n env['JUPYTERHUB_BASE_URL'] = self.base_url\n- env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url\n- env['JUPYTERHUB_SERVICE_URL'] = self.url\n+ if self.url:\n+ env['JUPYTERHUB_SERVICE_URL'] = self.url\n+ env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url\n \n self.spawner = _ServiceSpawner(\n cmd=self.command,\n@@ -248,7 +249,7 @@\n \"\"\"Called when the service process unexpectedly exits\"\"\"\n self.log.error(\"Service %s exited with status %i\", self.name, self.proc.returncode)\n self.start()\n- \n+\n def stop(self):\n \"\"\"Stop a managed service\"\"\"\n if not self.managed:\n", "issue": "Cull_Idle Instructions\nI'm having trouble using the [cull_idle example](https://github.com/jupyterhub/jupyterhub/tree/master/examples/cull-idle#configure-cull-idle-to-run-as-a-hub-managed-service) as a managed service. I added the [example config section](https://github.com/jupyterhub/jupyterhub/blob/master/examples/cull-idle/jupyterhub_config.py) to the bottom of my existing `jupyterhub_config.py` file:\n\n``` python\nc = get_config()\ndocker_ip = '172.17.0.1' \nhub_name='127.0.0.1'\n\nc.JupyterHub.ip = hub_name\nc.JupyterHub.hub_ip = docker_ip\nc.JupyterHub.debug_proxy = True\nc.JupyterHub.port = 8000\n\nc.JupyterHub.cleanup_servers = False\nc.JupyterHub.cleanup_proxy = True\nc.JupyterHub.ssl_key = '/etc/jupyterhub/ssl/jhub.key'\nc.JupyterHub.ssl_cert = '/etc/jupyterhub/ssl/jhub.crt'\n\n\nc.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'\nc.DockerSpawner.hub_ip_connect = docker_ip\nc.DockerSpawner.container_ip = docker_ip \n\n\nc.JupyterHub.cookie_secret_file = '/srv/jupyterhub/jupyterhub_cookie_secret'\nc.JupyterHub.log_level = 'DEBUG'\n\n#CULL_IDLE SECTION\nc.JupyterHub.services = [{\n 'name': 'cull-idle',\n 'admin': True,\n 'command': 'python cull_idle_servers.py --timeout=3600'.split()}]\n```\n\nI see the resulting error:\n\n```\n[C 2016-09-20 02:28:37.026 JupyterHub app:1444] Failed to start service cull-idle\n Traceback (most recent call last):\n File \"/usr/local/lib/python3.5/dist-packages/jupyterhub/app.py\", line 1442, in start\n yield service.start()\n File \"/usr/local/lib/python3.5/dist-packages/jupyterhub/services/service.py\", line 228, in start\n env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url\n AttributeError: 'NoneType' object has no attribute 'base_url'\n```\n\nAny ideas of what could be causing this? Thank you!\n\n", "before_files": [{"content": "\"\"\"A service is a process that talks to JupyterHub\n\nCases:\n\nManaged:\n - managed by JuyterHub (always subprocess, no custom Spawners)\n - always a long-running process\n - managed services are restarted automatically if they exit unexpectedly\nUnmanaged:\n - managed by external service (docker, systemd, etc.)\n - do not need to be long-running processes, or processes at all\n\n\nURL: needs a route added to the proxy.\n - Public route will always be /services/service-name\n - url specified in config\n - if port is 0, Hub will select a port\n\nAPI access:\n - admin: tokens will have admin-access to the API\n - not admin: tokens will only have non-admin access\n (not much they can do other than defer to Hub for auth)\n\nAn externally managed service running on a URL::\n\n {\n 'name': 'my-service',\n 'url': 'https://host:8888',\n 'admin': True,\n 'token': 'super-secret',\n }\n\nA hub-managed service with no URL:\n\n {\n 'name': 'cull-idle',\n 'command': ['python', '/path/to/cull-idle']\n 'admin': True,\n }\n\"\"\"\n\nfrom getpass import getuser\nimport pipes\nimport shutil\nfrom subprocess import Popen\nfrom urllib.parse import urlparse\n\nfrom tornado import gen\n\nfrom traitlets import (\n HasTraits,\n Any, Bool, Dict, Unicode, Instance,\n default, observe,\n)\nfrom traitlets.config import LoggingConfigurable\n\nfrom .. import orm\nfrom ..traitlets import Command\nfrom ..spawner import LocalProcessSpawner\nfrom ..utils import url_path_join\n\nclass _MockUser(HasTraits):\n name = Unicode()\n server = Instance(orm.Server, allow_none=True)\n state = Dict()\n service = Instance(__module__ + '.Service')\n\n# We probably shouldn't use a Spawner here,\n# but there are too many concepts to share.\n\nclass _ServiceSpawner(LocalProcessSpawner):\n \"\"\"Subclass of LocalProcessSpawner\n \n Removes notebook-specific-ness from LocalProcessSpawner.\n \"\"\"\n cwd = Unicode()\n cmd = Command(minlen=0)\n \n def make_preexec_fn(self, name):\n if not name or name == getuser():\n # no setuid if no name\n return\n return super().make_preexec_fn(name)\n\n def start(self):\n \"\"\"Start the process\"\"\"\n env = self.get_env()\n cmd = self.cmd\n\n self.log.info(\"Spawning %s\", ' '.join(pipes.quote(s) for s in cmd))\n try:\n self.proc = Popen(self.cmd, env=env,\n preexec_fn=self.make_preexec_fn(self.user.name),\n start_new_session=True, # don't forward signals\n cwd=self.cwd or None,\n )\n except PermissionError:\n # use which to get abspath\n script = shutil.which(cmd[0]) or cmd[0]\n self.log.error(\"Permission denied trying to run %r. Does %s have access to this file?\",\n script, self.user.name,\n )\n raise\n\n self.pid = self.proc.pid\n\nclass Service(LoggingConfigurable):\n \"\"\"An object wrapping a service specification for Hub API consumers.\n\n A service has inputs:\n\n - name: str\n the name of the service\n - admin: bool(false)\n whether the service should have administrative privileges\n - url: str (None)\n The URL where the service is/should be.\n If specified, the service will be added to the proxy at /services/:name\n \n If a service is to be managed by the Hub, it has a few extra options:\n \n - command: (str/Popen list)\n Command for JupyterHub to spawn the service.\n Only use this if the service should be a subprocess.\n If command is not specified, it is assumed to be managed\n by a\n - env: dict\n environment variables to add to the current env\n - user: str\n The name of a system user to become.\n If unspecified, run as the same user as the Hub.\n \"\"\"\n \n # inputs:\n name = Unicode(\n help=\"\"\"The name of the service.\n \n If the service has an http endpoint, it\n \"\"\"\n ).tag(input=True)\n admin = Bool(False,\n help=\"Does the service need admin-access to the Hub API?\"\n ).tag(input=True)\n url = Unicode(\n help=\"\"\"URL of the service.\n \n Only specify if the service runs an HTTP(s) endpoint that.\n If managed, will be passed as JUPYTERHUB_SERVICE_URL env.\n \"\"\"\n ).tag(input=True)\n api_token = Unicode(\n help=\"\"\"The API token to use for the service.\n \n If unspecified, an API token will be generated for managed services.\n \"\"\"\n ).tag(input=True)\n # Managed service API:\n\n @property\n def managed(self):\n \"\"\"Am I managed by the Hub?\"\"\"\n return bool(self.command)\n\n command = Command(minlen=0,\n help=\"Command to spawn this service, if managed.\"\n ).tag(input=True)\n cwd = Unicode(\n help=\"\"\"The working directory in which to run the service.\"\"\"\n ).tag(input=True)\n environment = Dict(\n help=\"\"\"Environment variables to pass to the service.\n Only used if the Hub is spawning the service.\n \"\"\"\n ).tag(input=True)\n user = Unicode(getuser(),\n help=\"\"\"The user to become when launching the service.\n\n If unspecified, run the service as the same user as the Hub.\n \"\"\"\n ).tag(input=True)\n\n domain = Unicode()\n host = Unicode()\n proc = Any()\n\n # handles on globals:\n proxy = Any()\n base_url = Unicode()\n db = Any()\n orm = Any()\n\n @property\n def server(self):\n return self.orm.server\n\n @property\n def prefix(self):\n return url_path_join(self.base_url, 'services', self.name)\n\n @property\n def proxy_path(self):\n if not self.server:\n return ''\n if self.domain:\n return url_path_join('/' + self.domain, self.server.base_url)\n else:\n return self.server.base_url\n\n def __repr__(self):\n return \"<{cls}(name={name}{managed})>\".format(\n cls=self.__class__.__name__,\n name=self.name,\n managed=' managed' if self.managed else '',\n )\n\n def start(self):\n \"\"\"Start a managed service\"\"\"\n if not self.managed:\n raise RuntimeError(\"Cannot start unmanaged service %s\" % self)\n self.log.info(\"Starting service %r: %r\", self.name, self.command)\n env = {}\n env.update(self.environment)\n\n env['JUPYTERHUB_SERVICE_NAME'] = self.name\n env['JUPYTERHUB_API_TOKEN'] = self.api_token\n env['JUPYTERHUB_API_URL'] = self.hub_api_url\n env['JUPYTERHUB_BASE_URL'] = self.base_url\n env['JUPYTERHUB_SERVICE_PREFIX'] = self.server.base_url\n env['JUPYTERHUB_SERVICE_URL'] = self.url\n\n self.spawner = _ServiceSpawner(\n cmd=self.command,\n environment=env,\n api_token=self.api_token,\n cwd=self.cwd,\n user=_MockUser(\n name=self.user,\n service=self,\n server=self.orm.server,\n ),\n )\n self.spawner.start()\n self.proc = self.spawner.proc\n self.spawner.add_poll_callback(self._proc_stopped)\n self.spawner.start_polling()\n\n def _proc_stopped(self):\n \"\"\"Called when the service process unexpectedly exits\"\"\"\n self.log.error(\"Service %s exited with status %i\", self.name, self.proc.returncode)\n self.start()\n \n def stop(self):\n \"\"\"Stop a managed service\"\"\"\n if not self.managed:\n raise RuntimeError(\"Cannot start unmanaged service %s\" % self)\n self.spawner.stop_polling()\n return self.spawner.stop()\n", "path": "jupyterhub/services/service.py"}]}
3,511
267
gh_patches_debug_1497
rasdani/github-patches
git_diff
CiviWiki__OpenCiviWiki-1060
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Move user/account-related templates out of `threads` app ### Idea summary There are several user- and account-related templates in the `threads` app. They should reside in the `accounts` app instead. ### Further details Move all of the following templates from the `threads` app to the `accounts` app: - [ ] `threads/templates/threads/base/less_headers/account_less.html` -> `accounts/templates/accounts/base/less_headers/account_less.html` - [ ] `threads/templates/threads/base/less_headers/login_less.html` -> `accounts/templates/accounts/base/less_headers/login_less.html` - [ ] `threads/templates/threads/partials/account/*` to `accounts/templates/accounts/partials/account/*` - [ ] `threads/templates/threads/partials/feed/*` to `accounts/templates/accounts/partials/feed/*` - [ ] `threads/templates/threads/partials/login/*` to `accounts/templates/accounts/partials/login/*` - [ ] `threads/templates/threads/partials/user-setup/*` to `accounts/templates/accounts/partials/user-setup/*` - [ ] `threads/templates/threads/user/*` -> `accounts/templates/accounts/*` - [ ] `threads/templates/threads/account.html` -> `accounts/templates/accounts/account.html` - [ ] `threads/templates/threads/feed.html` -> `accounts/templates/accounts/feed.html` - [ ] `threads/templates/threads/invite.html` -> `accounts/templates/accounts/invite.html` - [ ] `threads/templates/threads/user-setup.html` -> `accounts/templates/accounts/user-setup.html` - [ ] make sure to fix all imports related to the moved files </issue> <code> [start of project/accounts/views.py] 1 """ 2 Class based views. 3 4 This module will include views for the accounts app. 5 """ 6 7 from django.conf import settings 8 from django.contrib.auth.mixins import LoginRequiredMixin 9 from django.views.generic.edit import FormView, UpdateView 10 from django.views import View 11 from django.contrib.auth import views as auth_views 12 from django.contrib.auth import login 13 from django.contrib.sites.shortcuts import get_current_site 14 from django.urls import reverse_lazy 15 from django.contrib.auth import get_user_model 16 from django.utils.encoding import force_str 17 from django.utils.http import urlsafe_base64_decode 18 from django.template.response import TemplateResponse 19 from accounts.models import Profile 20 from accounts.forms import UserRegistrationForm, ProfileEditForm 21 from accounts.authentication import send_activation_email, account_activation_token 22 from django.http import HttpResponseRedirect 23 24 25 class RegisterView(FormView): 26 """ 27 A form view that handles user registration. 28 """ 29 30 template_name = "accounts/register/register.html" 31 form_class = UserRegistrationForm 32 success_url = "/" 33 34 def _create_user(self, form): 35 username = form.cleaned_data["username"] 36 password = form.cleaned_data["password"] 37 email = form.cleaned_data["email"] 38 39 user = get_user_model().objects.create_user(username, email, password) 40 Profile.objects.create(user=user) 41 42 return user 43 44 def _send_email(self, user): 45 domain = get_current_site(self.request).domain 46 send_activation_email(user, domain) 47 48 def _login(self, user): 49 login(self.request, user) 50 51 def form_valid(self, form): 52 user = self._create_user(form) 53 54 self._send_email(user) 55 self._login(user) 56 57 return super(RegisterView, self).form_valid(form) 58 59 60 class PasswordResetView(auth_views.PasswordResetView): 61 template_name = "accounts/users/password_reset.html" 62 email_template_name = "accounts/users/password_reset_email.html" 63 subject_template_name = "accounts/users/password_reset_subject.txt" 64 from_email = settings.EMAIL_HOST_USER 65 success_url = reverse_lazy("accounts_password_reset_done") 66 67 68 class PasswordResetDoneView(auth_views.PasswordResetDoneView): 69 template_name = "accounts/users/password_reset_done.html" 70 71 72 class PasswordResetConfirmView(auth_views.PasswordResetConfirmView): 73 template_name = "accounts/users/password_reset_confirm.html" 74 success_url = reverse_lazy("accounts_password_reset_complete") 75 76 77 class PasswordResetCompleteView(auth_views.PasswordResetCompleteView): 78 template_name = "accounts/users/password_reset_complete.html" 79 80 81 class SettingsView(LoginRequiredMixin, UpdateView): 82 """A form view to edit Profile""" 83 84 login_url = 'accounts_login' 85 form_class = ProfileEditForm 86 success_url = reverse_lazy('accounts_settings') 87 template_name = 'accounts/utils/update_settings.html' 88 89 def get_object(self, queryset=None): 90 return Profile.objects.get(user=self.request.user) 91 92 def get_initial(self): 93 profile = Profile.objects.get(user=self.request.user) 94 self.initial.update({ 95 "username": profile.user.username, 96 "email": profile.user.email, 97 "first_name": profile.first_name or None, 98 "last_name": profile.last_name or None, 99 "about_me": profile.about_me or None, 100 }) 101 return super(SettingsView, self).get_initial() 102 103 104 class ProfileActivationView(View): 105 """ 106 This shows different views to the user when they are verifying 107 their account based on whether they are already verified or not. 108 """ 109 110 def get(self, request, uidb64, token): 111 112 User = get_user_model() 113 try: 114 uid = force_str(urlsafe_base64_decode(uidb64)) 115 user = User.objects.get(pk=uid) 116 117 except (TypeError, ValueError, OverflowError, User.DoesNotExist): 118 user = None 119 120 if user is not None and account_activation_token.check_token(user, token): 121 profile = Profile.objects.get(user=user) 122 if profile.is_verified: 123 redirect_link = {"href": "/", "label": "Back to Main"} 124 template_var = { 125 "title": "Email Already Verified", 126 "content": "You have already verified your email", 127 "link": redirect_link, 128 } 129 return TemplateResponse(request, "general-message.html", template_var) 130 else: 131 profile.is_verified = True 132 profile.save() 133 134 redirect_link = {"href": "/", "label": "Back to Main"} 135 template_var = { 136 "title": "Email Verification Successful", 137 "content": "Thank you for verifying your email with CiviWiki", 138 "link": redirect_link, 139 } 140 return TemplateResponse(request, "general-message.html", template_var) 141 else: 142 # invalid link 143 redirect_link = {"href": "/", "label": "Back to Main"} 144 template_var = { 145 "title": "Email Verification Error", 146 "content": "Email could not be verified", 147 "link": redirect_link, 148 } 149 return TemplateResponse(request, "general-message.html", template_var) 150 151 152 class ProfileSetupView(LoginRequiredMixin, View): 153 """A view to make the user profile full_profile""" 154 155 login_url = 'accounts_login' 156 157 def get(self, request): 158 profile = Profile.objects.get(user=request.user) 159 if profile.full_profile: 160 return HttpResponseRedirect("/") 161 # start temp rep rendering TODO: REMOVE THIS 162 else: 163 data = { 164 "username": request.user.username, 165 "email": request.user.email, 166 } 167 return TemplateResponse(request, "user-setup.html", data) 168 [end of project/accounts/views.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/project/accounts/views.py b/project/accounts/views.py --- a/project/accounts/views.py +++ b/project/accounts/views.py @@ -164,4 +164,4 @@ "username": request.user.username, "email": request.user.email, } - return TemplateResponse(request, "user-setup.html", data) + return TemplateResponse(request, "accounts/user-setup.html", data)
{"golden_diff": "diff --git a/project/accounts/views.py b/project/accounts/views.py\n--- a/project/accounts/views.py\n+++ b/project/accounts/views.py\n@@ -164,4 +164,4 @@\n \"username\": request.user.username,\n \"email\": request.user.email,\n }\n- return TemplateResponse(request, \"user-setup.html\", data)\n+ return TemplateResponse(request, \"accounts/user-setup.html\", data)\n", "issue": "Move user/account-related templates out of `threads` app\n### Idea summary\r\n\r\nThere are several user- and account-related templates in the `threads` app. They should reside in the `accounts` app instead.\r\n\r\n### Further details\r\n\r\nMove all of the following templates from the `threads` app to the `accounts` app:\r\n\r\n- [ ] `threads/templates/threads/base/less_headers/account_less.html` -> `accounts/templates/accounts/base/less_headers/account_less.html`\r\n- [ ] `threads/templates/threads/base/less_headers/login_less.html` -> `accounts/templates/accounts/base/less_headers/login_less.html`\r\n- [ ] `threads/templates/threads/partials/account/*` to `accounts/templates/accounts/partials/account/*`\r\n- [ ] `threads/templates/threads/partials/feed/*` to `accounts/templates/accounts/partials/feed/*`\r\n- [ ] `threads/templates/threads/partials/login/*` to `accounts/templates/accounts/partials/login/*`\r\n- [ ] `threads/templates/threads/partials/user-setup/*` to `accounts/templates/accounts/partials/user-setup/*`\r\n- [ ] `threads/templates/threads/user/*` -> `accounts/templates/accounts/*`\r\n- [ ] `threads/templates/threads/account.html` -> `accounts/templates/accounts/account.html`\r\n- [ ] `threads/templates/threads/feed.html` -> `accounts/templates/accounts/feed.html`\r\n- [ ] `threads/templates/threads/invite.html` -> `accounts/templates/accounts/invite.html`\r\n- [ ] `threads/templates/threads/user-setup.html` -> `accounts/templates/accounts/user-setup.html`\r\n- [ ] make sure to fix all imports related to the moved files\n", "before_files": [{"content": "\"\"\"\nClass based views.\n\nThis module will include views for the accounts app.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.views.generic.edit import FormView, UpdateView\nfrom django.views import View\nfrom django.contrib.auth import views as auth_views\nfrom django.contrib.auth import login\nfrom django.contrib.sites.shortcuts import get_current_site\nfrom django.urls import reverse_lazy\nfrom django.contrib.auth import get_user_model\nfrom django.utils.encoding import force_str\nfrom django.utils.http import urlsafe_base64_decode\nfrom django.template.response import TemplateResponse\nfrom accounts.models import Profile\nfrom accounts.forms import UserRegistrationForm, ProfileEditForm\nfrom accounts.authentication import send_activation_email, account_activation_token\nfrom django.http import HttpResponseRedirect\n\n\nclass RegisterView(FormView):\n \"\"\"\n A form view that handles user registration.\n \"\"\"\n\n template_name = \"accounts/register/register.html\"\n form_class = UserRegistrationForm\n success_url = \"/\"\n\n def _create_user(self, form):\n username = form.cleaned_data[\"username\"]\n password = form.cleaned_data[\"password\"]\n email = form.cleaned_data[\"email\"]\n\n user = get_user_model().objects.create_user(username, email, password)\n Profile.objects.create(user=user)\n\n return user\n\n def _send_email(self, user):\n domain = get_current_site(self.request).domain\n send_activation_email(user, domain)\n\n def _login(self, user):\n login(self.request, user)\n\n def form_valid(self, form):\n user = self._create_user(form)\n\n self._send_email(user)\n self._login(user)\n\n return super(RegisterView, self).form_valid(form)\n\n\nclass PasswordResetView(auth_views.PasswordResetView):\n template_name = \"accounts/users/password_reset.html\"\n email_template_name = \"accounts/users/password_reset_email.html\"\n subject_template_name = \"accounts/users/password_reset_subject.txt\"\n from_email = settings.EMAIL_HOST_USER\n success_url = reverse_lazy(\"accounts_password_reset_done\")\n\n\nclass PasswordResetDoneView(auth_views.PasswordResetDoneView):\n template_name = \"accounts/users/password_reset_done.html\"\n\n\nclass PasswordResetConfirmView(auth_views.PasswordResetConfirmView):\n template_name = \"accounts/users/password_reset_confirm.html\"\n success_url = reverse_lazy(\"accounts_password_reset_complete\")\n\n\nclass PasswordResetCompleteView(auth_views.PasswordResetCompleteView):\n template_name = \"accounts/users/password_reset_complete.html\"\n\n\nclass SettingsView(LoginRequiredMixin, UpdateView):\n \"\"\"A form view to edit Profile\"\"\"\n\n login_url = 'accounts_login'\n form_class = ProfileEditForm\n success_url = reverse_lazy('accounts_settings')\n template_name = 'accounts/utils/update_settings.html'\n\n def get_object(self, queryset=None):\n return Profile.objects.get(user=self.request.user)\n\n def get_initial(self):\n profile = Profile.objects.get(user=self.request.user)\n self.initial.update({\n \"username\": profile.user.username,\n \"email\": profile.user.email,\n \"first_name\": profile.first_name or None,\n \"last_name\": profile.last_name or None,\n \"about_me\": profile.about_me or None,\n })\n return super(SettingsView, self).get_initial()\n\n\nclass ProfileActivationView(View):\n \"\"\"\n This shows different views to the user when they are verifying\n their account based on whether they are already verified or not.\n \"\"\"\n\n def get(self, request, uidb64, token):\n\n User = get_user_model()\n try:\n uid = force_str(urlsafe_base64_decode(uidb64))\n user = User.objects.get(pk=uid)\n\n except (TypeError, ValueError, OverflowError, User.DoesNotExist):\n user = None\n\n if user is not None and account_activation_token.check_token(user, token):\n profile = Profile.objects.get(user=user)\n if profile.is_verified:\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Already Verified\",\n \"content\": \"You have already verified your email\",\n \"link\": redirect_link,\n }\n return TemplateResponse(request, \"general-message.html\", template_var)\n else:\n profile.is_verified = True\n profile.save()\n\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Verification Successful\",\n \"content\": \"Thank you for verifying your email with CiviWiki\",\n \"link\": redirect_link,\n }\n return TemplateResponse(request, \"general-message.html\", template_var)\n else:\n # invalid link\n redirect_link = {\"href\": \"/\", \"label\": \"Back to Main\"}\n template_var = {\n \"title\": \"Email Verification Error\",\n \"content\": \"Email could not be verified\",\n \"link\": redirect_link,\n }\n return TemplateResponse(request, \"general-message.html\", template_var)\n\n\nclass ProfileSetupView(LoginRequiredMixin, View):\n \"\"\"A view to make the user profile full_profile\"\"\"\n\n login_url = 'accounts_login'\n\n def get(self, request):\n profile = Profile.objects.get(user=request.user)\n if profile.full_profile:\n return HttpResponseRedirect(\"/\")\n # start temp rep rendering TODO: REMOVE THIS\n else:\n data = {\n \"username\": request.user.username,\n \"email\": request.user.email,\n }\n return TemplateResponse(request, \"user-setup.html\", data)\n", "path": "project/accounts/views.py"}]}
2,419
89
gh_patches_debug_10113
rasdani/github-patches
git_diff
Mailu__Mailu-2654
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> plus mail in auto-forward destination The email address in `admin -> User settings -> Auto-forward -> Destination` cannot contains a '+' character (error: Invalid email address). But the '+' in email address is valid and admin should accept it. </issue> <code> [start of core/admin/mailu/ui/forms.py] 1 from wtforms import validators, fields, widgets 2 from wtforms_components import fields as fields_ 3 from flask_babel import lazy_gettext as _ 4 5 import flask_login 6 import flask_wtf 7 import re 8 9 LOCALPART_REGEX = "^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$" 10 11 class DestinationField(fields.SelectMultipleField): 12 """ Allow for multiple emails selection from current user choices and 13 additional email addresses. 14 """ 15 16 validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE) 17 18 def iter_choices(self): 19 managed = [ 20 str(email) 21 for email in flask_login.current_user.get_managed_emails() 22 ] 23 for email in managed: 24 selected = self.data is not None and self.coerce(email) in self.data 25 yield (email, email, selected) 26 for email in self.data or (): 27 if email not in managed: 28 yield (email, email, True) 29 30 def pre_validate(self, form): 31 for item in self.data: 32 if not self.validator.match(item): 33 raise validators.ValidationError(_('Invalid email address.')) 34 35 class MultipleEmailAddressesVerify(object): 36 def __init__(self,message=_('Invalid email address.')): 37 self.message = message 38 39 def __call__(self, form, field): 40 pattern = re.compile(r'^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,})(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,}))*$') 41 if not pattern.match(field.data.replace(" ", "")): 42 raise validators.ValidationError(self.message) 43 44 class ConfirmationForm(flask_wtf.FlaskForm): 45 submit = fields.SubmitField(_('Confirm')) 46 47 class DomainForm(flask_wtf.FlaskForm): 48 name = fields.StringField(_('Domain name'), [validators.DataRequired()]) 49 max_users = fields_.IntegerField(_('Maximum user count'), [validators.NumberRange(min=-1)], default=10) 50 max_aliases = fields_.IntegerField(_('Maximum alias count'), [validators.NumberRange(min=-1)], default=10) 51 max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0) 52 signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False) 53 comment = fields.StringField(_('Comment')) 54 submit = fields.SubmitField(_('Save')) 55 56 57 class DomainSignupForm(flask_wtf.FlaskForm): 58 name = fields.StringField(_('Domain name'), [validators.DataRequired()]) 59 localpart = fields.StringField(_('Initial admin'), [validators.DataRequired()]) 60 pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()]) 61 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')]) 62 captcha = flask_wtf.RecaptchaField() 63 submit = fields.SubmitField(_('Create')) 64 65 66 class AlternativeForm(flask_wtf.FlaskForm): 67 name = fields.StringField(_('Alternative name'), [validators.DataRequired()]) 68 submit = fields.SubmitField(_('Save')) 69 70 71 class RelayForm(flask_wtf.FlaskForm): 72 name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()]) 73 smtp = fields.StringField(_('Remote host')) 74 comment = fields.StringField(_('Comment')) 75 submit = fields.SubmitField(_('Save')) 76 77 78 class UserForm(flask_wtf.FlaskForm): 79 localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)]) 80 pw = fields.PasswordField(_('Password')) 81 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')]) 82 quota_bytes = fields_.IntegerSliderField(_('Quota'), default=10**9) 83 enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True) 84 enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True) 85 displayed_name = fields.StringField(_('Displayed name')) 86 comment = fields.StringField(_('Comment')) 87 enabled = fields.BooleanField(_('Enabled'), default=True) 88 submit = fields.SubmitField(_('Save')) 89 90 91 class UserSignupForm(flask_wtf.FlaskForm): 92 localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)]) 93 pw = fields.PasswordField(_('Password'), [validators.DataRequired()]) 94 pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')]) 95 submit = fields.SubmitField(_('Sign up')) 96 97 class UserSignupFormCaptcha(UserSignupForm): 98 captcha = flask_wtf.RecaptchaField() 99 100 class UserSettingsForm(flask_wtf.FlaskForm): 101 displayed_name = fields.StringField(_('Displayed name')) 102 spam_enabled = fields.BooleanField(_('Enable spam filter')) 103 spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance')) 104 forward_enabled = fields.BooleanField(_('Enable forwarding')) 105 forward_keep = fields.BooleanField(_('Keep a copy of the emails')) 106 forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()]) 107 submit = fields.SubmitField(_('Save settings')) 108 109 110 class UserPasswordForm(flask_wtf.FlaskForm): 111 pw = fields.PasswordField(_('Password'), [validators.DataRequired()]) 112 pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()]) 113 submit = fields.SubmitField(_('Update password')) 114 115 116 class UserReplyForm(flask_wtf.FlaskForm): 117 reply_enabled = fields.BooleanField(_('Enable automatic reply')) 118 reply_subject = fields.StringField(_('Reply subject')) 119 reply_body = fields.StringField(_('Reply body'), 120 widget=widgets.TextArea()) 121 reply_startdate = fields.html5.DateField(_('Start of vacation')) 122 reply_enddate = fields.html5.DateField(_('End of vacation')) 123 submit = fields.SubmitField(_('Update')) 124 125 126 class TokenForm(flask_wtf.FlaskForm): 127 displayed_password = fields.StringField( 128 _('Your token (write it down, as it will never be displayed again)') 129 ) 130 raw_password = fields.HiddenField([validators.DataRequired()]) 131 comment = fields.StringField(_('Comment')) 132 ip = fields.StringField( 133 _('Authorized IP'), [validators.Optional(), validators.IPAddress(ipv6=True)] 134 ) 135 submit = fields.SubmitField(_('Save')) 136 137 138 class AliasForm(flask_wtf.FlaskForm): 139 localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)]) 140 wildcard = fields.BooleanField( 141 _('Use SQL LIKE Syntax (e.g. for catch-all aliases)')) 142 destination = DestinationField(_('Destination')) 143 comment = fields.StringField(_('Comment')) 144 submit = fields.SubmitField(_('Save')) 145 146 147 class AdminForm(flask_wtf.FlaskForm): 148 admin = fields.SelectField(_('Admin email'), choices=[]) 149 submit = fields.SubmitField(_('Submit')) 150 151 152 class ManagerForm(flask_wtf.FlaskForm): 153 manager = fields.SelectField(_('Manager email')) 154 submit = fields.SubmitField(_('Submit')) 155 156 157 class FetchForm(flask_wtf.FlaskForm): 158 protocol = fields.SelectField(_('Protocol'), choices=[ 159 ('imap', 'IMAP'), ('pop3', 'POP3') 160 ]) 161 host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()]) 162 port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)]) 163 tls = fields.BooleanField(_('Enable TLS')) 164 username = fields.StringField(_('Username'), [validators.DataRequired()]) 165 password = fields.PasswordField(_('Password')) 166 keep = fields.BooleanField(_('Keep emails on the server')) 167 submit = fields.SubmitField(_('Submit')) 168 169 170 class AnnouncementForm(flask_wtf.FlaskForm): 171 announcement_subject = fields.StringField(_('Announcement subject'), 172 [validators.DataRequired()]) 173 announcement_body = fields.StringField(_('Announcement body'), 174 [validators.DataRequired()], widget=widgets.TextArea()) 175 submit = fields.SubmitField(_('Send')) 176 [end of core/admin/mailu/ui/forms.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/core/admin/mailu/ui/forms.py b/core/admin/mailu/ui/forms.py --- a/core/admin/mailu/ui/forms.py +++ b/core/admin/mailu/ui/forms.py @@ -37,7 +37,7 @@ self.message = message def __call__(self, form, field): - pattern = re.compile(r'^([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,})(,([_a-z0-9\-]+)(\.[_a-z0-9\-]+)*@([a-z0-9\-]{2,}\.)*([a-z]{2,}))*$') + pattern = re.compile(r'^([_a-z0-9\-\+]+)(\.[_a-z0-9\-\+]+)*@([a-z0-9\-]{1,}\.)*([a-z]{1,})(,([_a-z0-9\-\+]+)(\.[_a-z0-9\-\+]+)*@([a-z0-9\-]{1,}\.)*([a-z]{2,}))*$') if not pattern.match(field.data.replace(" ", "")): raise validators.ValidationError(self.message)
{"golden_diff": "diff --git a/core/admin/mailu/ui/forms.py b/core/admin/mailu/ui/forms.py\n--- a/core/admin/mailu/ui/forms.py\n+++ b/core/admin/mailu/ui/forms.py\n@@ -37,7 +37,7 @@\n self.message = message\n \n def __call__(self, form, field):\n- pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,}))*$')\n+ pattern = re.compile(r'^([_a-z0-9\\-\\+]+)(\\.[_a-z0-9\\-\\+]+)*@([a-z0-9\\-]{1,}\\.)*([a-z]{1,})(,([_a-z0-9\\-\\+]+)(\\.[_a-z0-9\\-\\+]+)*@([a-z0-9\\-]{1,}\\.)*([a-z]{2,}))*$')\n if not pattern.match(field.data.replace(\" \", \"\")):\n raise validators.ValidationError(self.message)\n", "issue": "plus mail in auto-forward destination\nThe email address in `admin -> User settings -> Auto-forward -> Destination` cannot contains a '+' character (error: Invalid email address). But the '+' in email address is valid and admin should accept it.\n", "before_files": [{"content": "from wtforms import validators, fields, widgets\nfrom wtforms_components import fields as fields_\nfrom flask_babel import lazy_gettext as _\n\nimport flask_login\nimport flask_wtf\nimport re\n\nLOCALPART_REGEX = \"^[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+(?:\\.[a-zA-Z0-9!#$%&'*+/=?^_`{|}~-]+)*$\"\n\nclass DestinationField(fields.SelectMultipleField):\n \"\"\" Allow for multiple emails selection from current user choices and\n additional email addresses.\n \"\"\"\n\n validator = re.compile(r'^.+@([^.@][^@]+)$', re.IGNORECASE)\n\n def iter_choices(self):\n managed = [\n str(email)\n for email in flask_login.current_user.get_managed_emails()\n ]\n for email in managed:\n selected = self.data is not None and self.coerce(email) in self.data\n yield (email, email, selected)\n for email in self.data or ():\n if email not in managed:\n yield (email, email, True)\n\n def pre_validate(self, form):\n for item in self.data:\n if not self.validator.match(item):\n raise validators.ValidationError(_('Invalid email address.'))\n\nclass MultipleEmailAddressesVerify(object):\n def __init__(self,message=_('Invalid email address.')):\n self.message = message\n\n def __call__(self, form, field):\n pattern = re.compile(r'^([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,})(,([_a-z0-9\\-]+)(\\.[_a-z0-9\\-]+)*@([a-z0-9\\-]{2,}\\.)*([a-z]{2,}))*$')\n if not pattern.match(field.data.replace(\" \", \"\")):\n raise validators.ValidationError(self.message)\n\nclass ConfirmationForm(flask_wtf.FlaskForm):\n submit = fields.SubmitField(_('Confirm'))\n\nclass DomainForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n max_users = fields_.IntegerField(_('Maximum user count'), [validators.NumberRange(min=-1)], default=10)\n max_aliases = fields_.IntegerField(_('Maximum alias count'), [validators.NumberRange(min=-1)], default=10)\n max_quota_bytes = fields_.IntegerSliderField(_('Maximum user quota'), default=0)\n signup_enabled = fields.BooleanField(_('Enable sign-up'), default=False)\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass DomainSignupForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Domain name'), [validators.DataRequired()])\n localpart = fields.StringField(_('Initial admin'), [validators.DataRequired()])\n pw = fields.PasswordField(_('Admin password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n captcha = flask_wtf.RecaptchaField()\n submit = fields.SubmitField(_('Create'))\n\n\nclass AlternativeForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Alternative name'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Save'))\n\n\nclass RelayForm(flask_wtf.FlaskForm):\n name = fields.StringField(_('Relayed domain name'), [validators.DataRequired()])\n smtp = fields.StringField(_('Remote host'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('E-mail'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'))\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n quota_bytes = fields_.IntegerSliderField(_('Quota'), default=10**9)\n enable_imap = fields.BooleanField(_('Allow IMAP access'), default=True)\n enable_pop = fields.BooleanField(_('Allow POP3 access'), default=True)\n displayed_name = fields.StringField(_('Displayed name'))\n comment = fields.StringField(_('Comment'))\n enabled = fields.BooleanField(_('Enabled'), default=True)\n submit = fields.SubmitField(_('Save'))\n\n\nclass UserSignupForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Email address'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Confirm password'), [validators.EqualTo('pw')])\n submit = fields.SubmitField(_('Sign up'))\n\nclass UserSignupFormCaptcha(UserSignupForm):\n captcha = flask_wtf.RecaptchaField()\n\nclass UserSettingsForm(flask_wtf.FlaskForm):\n displayed_name = fields.StringField(_('Displayed name'))\n spam_enabled = fields.BooleanField(_('Enable spam filter'))\n spam_threshold = fields_.IntegerSliderField(_('Spam filter tolerance'))\n forward_enabled = fields.BooleanField(_('Enable forwarding'))\n forward_keep = fields.BooleanField(_('Keep a copy of the emails'))\n forward_destination = fields.StringField(_('Destination'), [validators.Optional(), MultipleEmailAddressesVerify()])\n submit = fields.SubmitField(_('Save settings'))\n\n\nclass UserPasswordForm(flask_wtf.FlaskForm):\n pw = fields.PasswordField(_('Password'), [validators.DataRequired()])\n pw2 = fields.PasswordField(_('Password check'), [validators.DataRequired()])\n submit = fields.SubmitField(_('Update password'))\n\n\nclass UserReplyForm(flask_wtf.FlaskForm):\n reply_enabled = fields.BooleanField(_('Enable automatic reply'))\n reply_subject = fields.StringField(_('Reply subject'))\n reply_body = fields.StringField(_('Reply body'),\n widget=widgets.TextArea())\n reply_startdate = fields.html5.DateField(_('Start of vacation'))\n reply_enddate = fields.html5.DateField(_('End of vacation'))\n submit = fields.SubmitField(_('Update'))\n\n\nclass TokenForm(flask_wtf.FlaskForm):\n displayed_password = fields.StringField(\n _('Your token (write it down, as it will never be displayed again)')\n )\n raw_password = fields.HiddenField([validators.DataRequired()])\n comment = fields.StringField(_('Comment'))\n ip = fields.StringField(\n _('Authorized IP'), [validators.Optional(), validators.IPAddress(ipv6=True)]\n )\n submit = fields.SubmitField(_('Save'))\n\n\nclass AliasForm(flask_wtf.FlaskForm):\n localpart = fields.StringField(_('Alias'), [validators.DataRequired(), validators.Regexp(LOCALPART_REGEX)])\n wildcard = fields.BooleanField(\n _('Use SQL LIKE Syntax (e.g. for catch-all aliases)'))\n destination = DestinationField(_('Destination'))\n comment = fields.StringField(_('Comment'))\n submit = fields.SubmitField(_('Save'))\n\n\nclass AdminForm(flask_wtf.FlaskForm):\n admin = fields.SelectField(_('Admin email'), choices=[])\n submit = fields.SubmitField(_('Submit'))\n\n\nclass ManagerForm(flask_wtf.FlaskForm):\n manager = fields.SelectField(_('Manager email'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass FetchForm(flask_wtf.FlaskForm):\n protocol = fields.SelectField(_('Protocol'), choices=[\n ('imap', 'IMAP'), ('pop3', 'POP3')\n ])\n host = fields.StringField(_('Hostname or IP'), [validators.DataRequired()])\n port = fields.IntegerField(_('TCP port'), [validators.DataRequired(), validators.NumberRange(min=0, max=65535)])\n tls = fields.BooleanField(_('Enable TLS'))\n username = fields.StringField(_('Username'), [validators.DataRequired()])\n password = fields.PasswordField(_('Password'))\n keep = fields.BooleanField(_('Keep emails on the server'))\n submit = fields.SubmitField(_('Submit'))\n\n\nclass AnnouncementForm(flask_wtf.FlaskForm):\n announcement_subject = fields.StringField(_('Announcement subject'),\n [validators.DataRequired()])\n announcement_body = fields.StringField(_('Announcement body'),\n [validators.DataRequired()], widget=widgets.TextArea())\n submit = fields.SubmitField(_('Send'))\n", "path": "core/admin/mailu/ui/forms.py"}]}
2,721
288
gh_patches_debug_12272
rasdani/github-patches
git_diff
zestedesavoir__zds-site-3664
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Notification permanante Bonjour, Depuis le forum, > Je me suis désabonné à un sujet (ne plus suivre) alors que j'avais une notification de se sujet. > > Désormais j'ai en permanence 1 notification. Même si je me réabonne, elle ne disparaît plus ^^" > Ce bug se combine également avec celui-ci [[v18] Date incorrecte dans les notifications](https://github.com/zestedesavoir/zds-site/issues/3628) . > > D'ailleurs on peut voir ce bug comme l'opposé de celui-ci [Suivre un sujet, messages oubliés](https://github.com/zestedesavoir/zds-site/issues/3654) ». > > Source: [ache](https://zestedesavoir.com/forums/sujet/6300/notification-permanante/) </issue> <code> [start of zds/notification/managers.py] 1 # -*- coding: utf-8 -*- 2 from django.contrib.contenttypes.models import ContentType 3 from django.core.exceptions import ObjectDoesNotExist 4 from django.db import models 5 6 from zds.forum.models import Topic 7 from zds.utils import get_current_user 8 9 10 class SubscriptionManager(models.Manager): 11 """ 12 Custom subscription manager 13 """ 14 15 def get_existing(self, user, content_object, is_active=None, by_email=None): 16 """ 17 If exists, return the existing subscription for the given user and content object. 18 19 :param user: concerned user. 20 :type user: django.contrib.auth.models.User 21 :param content_object: Generic content concerned. 22 :type content_object: instance concerned by notifications 23 :param is_active: Boolean to know if we want a subscription active or not. 24 :type is_active: Boolean 25 :param by_email: Boolean to know if we want a subscription for email or not. 26 :type by_email: Boolean 27 :return: subscription or None 28 """ 29 content_type = ContentType.objects.get_for_model(content_object) 30 try: 31 if is_active is None and by_email is None: 32 existing = self.get( 33 object_id=content_object.pk, 34 content_type__pk=content_type.pk, 35 user=user) 36 elif is_active is not None and by_email is None: 37 existing = self.get( 38 object_id=content_object.pk, 39 content_type__pk=content_type.pk, 40 user=user, is_active=is_active) 41 elif is_active is None and by_email is not None: 42 existing = self.get( 43 object_id=content_object.pk, 44 content_type__pk=content_type.pk, 45 user=user, by_email=by_email) 46 else: 47 existing = self.get( 48 object_id=content_object.pk, 49 content_type__pk=content_type.pk, 50 user=user, is_active=is_active, 51 by_email=by_email) 52 except ObjectDoesNotExist: 53 existing = None 54 return existing 55 56 def get_or_create_active(self, user, content_object): 57 """ 58 Gets (or create if it doesn't exist) the subscription for the content object given. 59 60 :param user: concerned user. 61 :type user: django.contrib.auth.models.User 62 :param content_object: Generic content concerned. 63 :type content_object: instance concerned by notifications 64 :return: subscription 65 """ 66 content_type = ContentType.objects.get_for_model(content_object) 67 try: 68 subscription = self.get( 69 object_id=content_object.pk, 70 content_type__pk=content_type.pk, 71 user=user) 72 if not subscription.is_active: 73 subscription.activate() 74 except ObjectDoesNotExist: 75 subscription = self.model(user=user, content_object=content_object) 76 subscription.save() 77 78 return subscription 79 80 def get_subscriptions(self, content_object, is_active=True): 81 """ 82 Gets subscriptions of the content object. 83 84 :param content_object: Generic content concerned. 85 :type content_object: instance concerned by notifications 86 :param is_active: Boolean to know if we want a subscription active or not. 87 :type is_active: Boolean 88 :return: an iterable list of subscriptions 89 """ 90 content_type = ContentType.objects.get_for_model(content_object) 91 return self.filter(object_id=content_object.pk, 92 content_type__pk=content_type.pk, 93 is_active=is_active) 94 95 def get_subscribers(self, content_object, only_by_email=False): 96 """ 97 Gets all subscribers of a content object. 98 99 :param content_object: Generic content concerned. 100 :type content_object: instance concerned by notifications 101 :param only_by_email: Boolean to know if we want a subscription for email or not. 102 :type only_by_email: Boolean 103 :return: users 104 """ 105 content_type = ContentType.objects.get_for_model(content_object) 106 if only_by_email: 107 # if I'm only interested by the email subscription 108 subscription_list = self.filter( 109 object_id=content_object.pk, 110 content_type__pk=content_type.pk, 111 by_email=True) 112 else: 113 subscription_list = self.filter( 114 object_id=content_object.pk, 115 content_type__pk=content_type.pk) 116 117 return [subscription.user for subscription in subscription_list] 118 119 def toggle_follow(self, content_object, user=None, by_email=False): 120 """ 121 Toggle following of a resource notifiable for a user. 122 123 :param content_object: A resource notifiable. 124 :param user: A user. If undefined, the current user is used. 125 :param by_email: Get subscription by email or not. 126 :return: subscription of the user for the content. 127 """ 128 if not user: 129 user = get_current_user() 130 if by_email: 131 existing = self.get_existing(user, content_object, is_active=True, by_email=True) 132 else: 133 existing = self.get_existing(user, content_object, is_active=True) 134 if not existing: 135 subscription = self.get_or_create_active(user, content_object) 136 if by_email: 137 subscription.activate_email() 138 return subscription 139 if by_email: 140 existing.deactivate_email() 141 else: 142 existing.deactivate() 143 return existing 144 145 146 class TopicAnswerSubscriptionManager(SubscriptionManager): 147 """ 148 Custom topic answer subscription manager. 149 """ 150 151 def get_objects_followed_by(self, user): 152 """ 153 Gets objects followed by the given user. 154 155 :param user: concerned user. 156 :type user: django.contrib.auth.models.User 157 :return: All objects followed by given user. 158 """ 159 topic_list = self.filter(user=user, is_active=True, content_type=ContentType.objects.get_for_model(Topic)) \ 160 .values_list('object_id', flat=True) 161 162 return Topic.objects.filter(id__in=topic_list).order_by('-last_message__pubdate') 163 164 def unfollow_and_mark_read_everybody_at(self, topic): 165 """ 166 Deactivate a subscription at a topic and mark read the notification associated if exist. 167 168 :param topic: topic concerned. 169 :type topic: zds.forum.models.Topic 170 """ 171 subscriptions = self.get_subscriptions(topic) 172 for subscription in subscriptions: 173 if not topic.forum.can_read(subscription.user): 174 subscription.deactivate() 175 subscription.mark_notification_read() 176 177 178 class NotificationManager(models.Manager): 179 """ 180 Custom notification manager. 181 """ 182 183 def get_unread_notifications_of(self, user): 184 """ 185 Gets all notifications for a user whose user is passed as argument. 186 187 :param user: user object 188 :type user: django.contrib.auth.models.User 189 :return: an iterable over notifications with user data already loaded 190 :rtype: an iterable list of notifications 191 """ 192 return self.filter(subscription__user=user, is_read=False) \ 193 .select_related("sender") 194 195 def filter_content_type_of(self, model): 196 """ 197 Applies a filter on the content_type. 198 199 :param model: Model concerned for the ContentType 200 :type model: Model concerned by notifications 201 :return: an iterable list of notifications 202 """ 203 content_subscription_type = ContentType.objects.get_for_model(model) 204 return self.filter(subscription__content_type__pk=content_subscription_type.pk) 205 206 def get_users_for_unread_notification_on(self, content_object): 207 """ 208 Gets all users who have an notification unread on the given content object. 209 210 :param content_object: generic content object. 211 :type content_object: instance concerned by notifications 212 :return: an iterable list of users. 213 """ 214 content_type = ContentType.objects.get_for_model(content_object) 215 notifications = self.filter(object_id=content_object.pk, content_type__pk=content_type.pk) \ 216 .select_related("subscription") \ 217 .select_related("subscription__user") 218 return [notification.subscription.user for notification in notifications] 219 220 221 class TopicFollowedManager(models.Manager): 222 def get_followers_by_email(self, topic): 223 """ 224 :return: the set of users who follow this topic by email. 225 """ 226 return self.filter(topic=topic, email=True).select_related("user") 227 228 def is_followed(self, topic, user=None): 229 """ 230 Checks if the user follows this topic. 231 :param user: A user. If undefined, the current user is used. 232 :return: `True` if the user follows this topic, `False` otherwise. 233 """ 234 if user is None: 235 user = get_current_user() 236 237 return self.filter(topic=topic, user=user).exists() 238 [end of zds/notification/managers.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/zds/notification/managers.py b/zds/notification/managers.py --- a/zds/notification/managers.py +++ b/zds/notification/managers.py @@ -4,6 +4,7 @@ from django.db import models from zds.forum.models import Topic +from zds.notification import signals from zds.utils import get_current_user @@ -136,6 +137,7 @@ if by_email: subscription.activate_email() return subscription + signals.content_read.send(sender=content_object.__class__, instance=content_object, user=user) if by_email: existing.deactivate_email() else:
{"golden_diff": "diff --git a/zds/notification/managers.py b/zds/notification/managers.py\n--- a/zds/notification/managers.py\n+++ b/zds/notification/managers.py\n@@ -4,6 +4,7 @@\n from django.db import models\n \n from zds.forum.models import Topic\n+from zds.notification import signals\n from zds.utils import get_current_user\n \n \n@@ -136,6 +137,7 @@\n if by_email:\n subscription.activate_email()\n return subscription\n+ signals.content_read.send(sender=content_object.__class__, instance=content_object, user=user)\n if by_email:\n existing.deactivate_email()\n else:\n", "issue": "Notification permanante\nBonjour,\n\nDepuis le forum, \n\n> Je me suis d\u00e9sabonn\u00e9 \u00e0 un sujet (ne plus suivre) alors que j'avais une notification de se sujet.\n> \n> D\u00e9sormais j'ai en permanence 1 notification. M\u00eame si je me r\u00e9abonne, elle ne dispara\u00eet plus ^^\"\n> Ce bug se combine \u00e9galement avec celui-ci [[v18] Date incorrecte dans les notifications](https://github.com/zestedesavoir/zds-site/issues/3628) .\n> \n> D'ailleurs on peut voir ce bug comme l'oppos\u00e9 de celui-ci [Suivre un sujet, messages oubli\u00e9s](https://github.com/zestedesavoir/zds-site/issues/3654) \u00bb.\n> \n> Source: [ache](https://zestedesavoir.com/forums/sujet/6300/notification-permanante/)\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db import models\n\nfrom zds.forum.models import Topic\nfrom zds.utils import get_current_user\n\n\nclass SubscriptionManager(models.Manager):\n \"\"\"\n Custom subscription manager\n \"\"\"\n\n def get_existing(self, user, content_object, is_active=None, by_email=None):\n \"\"\"\n If exists, return the existing subscription for the given user and content object.\n\n :param user: concerned user.\n :type user: django.contrib.auth.models.User\n :param content_object: Generic content concerned.\n :type content_object: instance concerned by notifications\n :param is_active: Boolean to know if we want a subscription active or not.\n :type is_active: Boolean\n :param by_email: Boolean to know if we want a subscription for email or not.\n :type by_email: Boolean\n :return: subscription or None\n \"\"\"\n content_type = ContentType.objects.get_for_model(content_object)\n try:\n if is_active is None and by_email is None:\n existing = self.get(\n object_id=content_object.pk,\n content_type__pk=content_type.pk,\n user=user)\n elif is_active is not None and by_email is None:\n existing = self.get(\n object_id=content_object.pk,\n content_type__pk=content_type.pk,\n user=user, is_active=is_active)\n elif is_active is None and by_email is not None:\n existing = self.get(\n object_id=content_object.pk,\n content_type__pk=content_type.pk,\n user=user, by_email=by_email)\n else:\n existing = self.get(\n object_id=content_object.pk,\n content_type__pk=content_type.pk,\n user=user, is_active=is_active,\n by_email=by_email)\n except ObjectDoesNotExist:\n existing = None\n return existing\n\n def get_or_create_active(self, user, content_object):\n \"\"\"\n Gets (or create if it doesn't exist) the subscription for the content object given.\n\n :param user: concerned user.\n :type user: django.contrib.auth.models.User\n :param content_object: Generic content concerned.\n :type content_object: instance concerned by notifications\n :return: subscription\n \"\"\"\n content_type = ContentType.objects.get_for_model(content_object)\n try:\n subscription = self.get(\n object_id=content_object.pk,\n content_type__pk=content_type.pk,\n user=user)\n if not subscription.is_active:\n subscription.activate()\n except ObjectDoesNotExist:\n subscription = self.model(user=user, content_object=content_object)\n subscription.save()\n\n return subscription\n\n def get_subscriptions(self, content_object, is_active=True):\n \"\"\"\n Gets subscriptions of the content object.\n\n :param content_object: Generic content concerned.\n :type content_object: instance concerned by notifications\n :param is_active: Boolean to know if we want a subscription active or not.\n :type is_active: Boolean\n :return: an iterable list of subscriptions\n \"\"\"\n content_type = ContentType.objects.get_for_model(content_object)\n return self.filter(object_id=content_object.pk,\n content_type__pk=content_type.pk,\n is_active=is_active)\n\n def get_subscribers(self, content_object, only_by_email=False):\n \"\"\"\n Gets all subscribers of a content object.\n\n :param content_object: Generic content concerned.\n :type content_object: instance concerned by notifications\n :param only_by_email: Boolean to know if we want a subscription for email or not.\n :type only_by_email: Boolean\n :return: users\n \"\"\"\n content_type = ContentType.objects.get_for_model(content_object)\n if only_by_email:\n # if I'm only interested by the email subscription\n subscription_list = self.filter(\n object_id=content_object.pk,\n content_type__pk=content_type.pk,\n by_email=True)\n else:\n subscription_list = self.filter(\n object_id=content_object.pk,\n content_type__pk=content_type.pk)\n\n return [subscription.user for subscription in subscription_list]\n\n def toggle_follow(self, content_object, user=None, by_email=False):\n \"\"\"\n Toggle following of a resource notifiable for a user.\n\n :param content_object: A resource notifiable.\n :param user: A user. If undefined, the current user is used.\n :param by_email: Get subscription by email or not.\n :return: subscription of the user for the content.\n \"\"\"\n if not user:\n user = get_current_user()\n if by_email:\n existing = self.get_existing(user, content_object, is_active=True, by_email=True)\n else:\n existing = self.get_existing(user, content_object, is_active=True)\n if not existing:\n subscription = self.get_or_create_active(user, content_object)\n if by_email:\n subscription.activate_email()\n return subscription\n if by_email:\n existing.deactivate_email()\n else:\n existing.deactivate()\n return existing\n\n\nclass TopicAnswerSubscriptionManager(SubscriptionManager):\n \"\"\"\n Custom topic answer subscription manager.\n \"\"\"\n\n def get_objects_followed_by(self, user):\n \"\"\"\n Gets objects followed by the given user.\n\n :param user: concerned user.\n :type user: django.contrib.auth.models.User\n :return: All objects followed by given user.\n \"\"\"\n topic_list = self.filter(user=user, is_active=True, content_type=ContentType.objects.get_for_model(Topic)) \\\n .values_list('object_id', flat=True)\n\n return Topic.objects.filter(id__in=topic_list).order_by('-last_message__pubdate')\n\n def unfollow_and_mark_read_everybody_at(self, topic):\n \"\"\"\n Deactivate a subscription at a topic and mark read the notification associated if exist.\n\n :param topic: topic concerned.\n :type topic: zds.forum.models.Topic\n \"\"\"\n subscriptions = self.get_subscriptions(topic)\n for subscription in subscriptions:\n if not topic.forum.can_read(subscription.user):\n subscription.deactivate()\n subscription.mark_notification_read()\n\n\nclass NotificationManager(models.Manager):\n \"\"\"\n Custom notification manager.\n \"\"\"\n\n def get_unread_notifications_of(self, user):\n \"\"\"\n Gets all notifications for a user whose user is passed as argument.\n\n :param user: user object\n :type user: django.contrib.auth.models.User\n :return: an iterable over notifications with user data already loaded\n :rtype: an iterable list of notifications\n \"\"\"\n return self.filter(subscription__user=user, is_read=False) \\\n .select_related(\"sender\")\n\n def filter_content_type_of(self, model):\n \"\"\"\n Applies a filter on the content_type.\n\n :param model: Model concerned for the ContentType\n :type model: Model concerned by notifications\n :return: an iterable list of notifications\n \"\"\"\n content_subscription_type = ContentType.objects.get_for_model(model)\n return self.filter(subscription__content_type__pk=content_subscription_type.pk)\n\n def get_users_for_unread_notification_on(self, content_object):\n \"\"\"\n Gets all users who have an notification unread on the given content object.\n\n :param content_object: generic content object.\n :type content_object: instance concerned by notifications\n :return: an iterable list of users.\n \"\"\"\n content_type = ContentType.objects.get_for_model(content_object)\n notifications = self.filter(object_id=content_object.pk, content_type__pk=content_type.pk) \\\n .select_related(\"subscription\") \\\n .select_related(\"subscription__user\")\n return [notification.subscription.user for notification in notifications]\n\n\nclass TopicFollowedManager(models.Manager):\n def get_followers_by_email(self, topic):\n \"\"\"\n :return: the set of users who follow this topic by email.\n \"\"\"\n return self.filter(topic=topic, email=True).select_related(\"user\")\n\n def is_followed(self, topic, user=None):\n \"\"\"\n Checks if the user follows this topic.\n :param user: A user. If undefined, the current user is used.\n :return: `True` if the user follows this topic, `False` otherwise.\n \"\"\"\n if user is None:\n user = get_current_user()\n\n return self.filter(topic=topic, user=user).exists()\n", "path": "zds/notification/managers.py"}]}
3,131
142
gh_patches_debug_21784
rasdani/github-patches
git_diff
tensorflow__tfx-3870
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Update tensorflow-hub requirement to allow 0.12.0? If the feature is related to a specific library below, please raise an issue in the respective repo directly: [TensorFlow Data Validation Repo](https://github.com/tensorflow/data-validation/issues) [TensorFlow Model Analysis Repo](https://github.com/tensorflow/model-analysis/issues) [TensorFlow Transform Repo](https://github.com/tensorflow/transform/issues) [TensorFlow Serving Repo](https://github.com/tensorflow/serving/issues) **System information** - TFX Version (you are using): 1.0.0-rc0 - Environment in which you plan to use the feature (e.g., Local (Linux/MacOS/Windows), Interactive Notebook, Google Cloud, etc..): MacOS, AWS - Are you willing to contribute it (Yes/No): Yes **Describe the feature and the current behavior/state.** tfx (1.0.0-rc0) currently depends on tensorflow-hub (>=0.9.0,<0.10) I was wondering if we could update tensorflow-hub dependancy for tfx to allow tf-hub 0.12.0, so something like (>=0.9.0,<=0.12.0)? I am not sure if that would break anything in tfx, but I am happy to investigate and contribute to this change **Will this change the current API? How?** No **Who will benefit with this feature?** tensorflow-hub has added some new features in 0.10.0 and beyond (specifically the one I'm interested in "`compute_output_shape` in `hub.KerasLayer`" which they added in 0.12.0). It would be cool to be able to take advantage of those while still being able to use tfx **Do you have a workaround or are completely blocked by this?** : Blocked **Name of your Organization (Optional)** **Any Other info.** </issue> <code> [start of tfx/dependencies.py] 1 # Copyright 2019 Google LLC. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 """Package dependencies for TFX. 15 16 tfx and family libraries (such as tensorflow-model-analysis) adopts environment 17 variable (TFX_DEPENDENCY_SELECTOR) based dependency version selection. This 18 dependency will be baked in to the wheel, in other words you cannot change 19 dependency string once wheel is built. 20 21 - UNCONSTRAINED uses dependency without any version constraint string, which is 22 useful when you manually build wheels of parent library (e.g. tfx-bsl) of 23 arbitrary version, and install it without dependency constraints conflict. 24 - NIGHTLY uses x.(y+1).0.dev version as a lower version constraint. tfx nightly 25 will transitively depend on nightly versions of other TFX family libraries, 26 and this version constraint is required. 27 - GIT_MASTER uses github master branch URL of the dependency, which is useful 28 during development, or when depending on the github master HEAD version of 29 tfx. This is because tfx github master HEAD version is actually using github 30 master HEAD version of parent libraries. 31 Caveat: URL dependency is not upgraded with --upgrade flag, and you have to 32 specify --force-reinstall flag to fetch the latest change from each master 33 branch HEAD. 34 - For the release, we use a range of version, which is also used as a default. 35 """ 36 import os 37 38 39 def select_constraint(default, nightly=None, git_master=None): 40 """Select dependency constraint based on TFX_DEPENDENCY_SELECTOR env var.""" 41 selector = os.environ.get('TFX_DEPENDENCY_SELECTOR') 42 if selector == 'UNCONSTRAINED': 43 return '' 44 elif selector == 'NIGHTLY' and nightly is not None: 45 return nightly 46 elif selector == 'GIT_MASTER' and git_master is not None: 47 return git_master 48 else: 49 return default 50 51 52 def make_pipeline_sdk_required_install_packages(): 53 return [ 54 'absl-py>=0.9,<0.13', 55 'ml-metadata' + select_constraint( 56 # LINT.IfChange 57 default='>=1.0.0,<1.1.0', 58 # LINT.ThenChange(tfx/workspace.bzl) 59 nightly='>=1.1.0.dev', 60 git_master='@git+https://github.com/google/ml-metadata@master'), 61 'packaging>=20,<21', 62 'portpicker>=1.3.1,<2', 63 'protobuf>=3.12.2,<4', 64 'docker>=4.1,<5', 65 # TODO(b/176812386): Deprecate usage of jinja2 for placeholders. 66 'jinja2>=2.7.3,<3', 67 ] 68 69 70 def make_required_install_packages(): 71 # Make sure to sync the versions of common dependencies (absl-py, numpy, 72 # and protobuf) with TF. 73 return make_pipeline_sdk_required_install_packages() + [ 74 'apache-beam[gcp]>=2.29,<3', 75 'attrs>=19.3.0,<21', 76 'click>=7,<8', 77 'google-api-python-client>=1.7.8,<2', 78 'google-cloud-aiplatform>=0.5.0,<0.8', 79 'google-cloud-bigquery>=1.28.0,<3', 80 'grpcio>=1.28.1,<2', 81 # TODO(b/173976603): remove pinned keras-tuner upperbound when its 82 # dependency expecatation with TensorFlow is sorted out. 83 'keras-tuner>=1,<1.0.2', 84 'kubernetes>=10.0.1,<12', 85 # TODO(b/179195488): remove numpy dependency after 1.20 migration. 86 # This dependency was added only to limit numpy 1.20 installation. 87 'numpy>=1.16,<1.20', 88 'pyarrow>=1,<3', 89 'pyyaml>=3.12,<6', 90 'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3', 91 'tensorflow-hub>=0.9.0,<0.10', 92 'tensorflow-data-validation' + select_constraint( 93 default='>=1.0.0,<1.1.0', 94 nightly='>=1.1.0.dev', 95 git_master='@git+https://github.com/tensorflow/data-validation@master' 96 ), 97 'tensorflow-model-analysis' + select_constraint( 98 default='>=0.31,<0.32', 99 nightly='>=0.32.0.dev', 100 git_master='@git+https://github.com/tensorflow/model-analysis@master' 101 ), 102 'tensorflow-serving-api>=1.15,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3', 103 'tensorflow-transform' + select_constraint( 104 default='>=1.0.0,<1.1.0', 105 nightly='>=1.1.0.dev', 106 git_master='@git+https://github.com/tensorflow/transform@master'), 107 'tfx-bsl' + select_constraint( 108 default='>=1.0.0,<1.1.0', 109 nightly='>=1.1.0.dev', 110 git_master='@git+https://github.com/tensorflow/tfx-bsl@master'), 111 ] 112 113 114 def make_extra_packages_airflow(): 115 """Prepare extra packages needed for Apache Airflow orchestrator.""" 116 return [ 117 # TODO(b/188940096): update supported version. 118 'apache-airflow[mysql]>=1.10.14,<3', 119 # TODO(b/182848576): Delete pinned sqlalchemy after apache-airflow 2.0.2 120 # or later.(github.com/apache/airflow/issues/14811) 121 'sqlalchemy>=1.3,<1.4', 122 ] 123 124 125 def make_extra_packages_kfp(): 126 """Prepare extra packages needed for Kubeflow Pipelines orchestrator.""" 127 return [ 128 'kfp>=1.6.1,<2', 129 'kfp-pipeline-spec>=0.1.7,<0.2', 130 ] 131 132 133 def make_extra_packages_test(): 134 """Prepare extra packages needed for running unit tests.""" 135 # Note: It is okay to pin packages to exact versions in this list to minimize 136 # conflicts. 137 return make_extra_packages_airflow() + make_extra_packages_kfp() + [ 138 'pytest>=5,<6', 139 ] 140 141 142 def make_extra_packages_docker_image(): 143 # Packages needed for tfx docker image. 144 return [ 145 'kfp-pipeline-spec>=0.1.7,<0.2', 146 'mmh>=2.2,<3', 147 'python-snappy>=0.5,<0.6', 148 ] 149 150 151 def make_extra_packages_tfjs(): 152 # Packages needed for tfjs. 153 return [ 154 'tensorflowjs>=3.6.0,<4', 155 ] 156 157 158 def make_extra_packages_tf_ranking(): 159 # Packages needed for tf-ranking which is used in tfx/examples/ranking. 160 return [ 161 'tensorflow-ranking>=0.3.3,<0.4', 162 'struct2tensor' + select_constraint( 163 default='>=0.31,<0.32', 164 nightly='>=0.32.0.dev', 165 git_master='@git+https://github.com/google/struct2tensor@master'), 166 ] 167 168 169 def make_extra_packages_examples(): 170 # Extra dependencies required for tfx/examples. 171 return [ 172 # Required for presto ExampleGen custom component in 173 # tfx/examples/custom_components/presto_example_gen 174 'presto-python-client>=0.7,<0.8', 175 # Required for slack custom component in 176 # tfx/examples/custom_components/slack 177 'slackclient>=2.8.2,<3', 178 'websocket-client>=0.57,<1', 179 # Required for bert examples in tfx/examples/bert 180 'tensorflow-text>=1.15.1,<3', 181 # Required for tfx/examples/cifar10 182 'flatbuffers>=1.12,<2', 183 'tflite-support>=0.1.0a1,<0.1.1', 184 # Required for tfx/examples/penguin/experimental 185 # LINT.IfChange 186 'scikit-learn>=0.23,<0.24', 187 # LINT.ThenChange( 188 # examples/penguin/experimental/penguin_pipeline_sklearn_gcp.py) 189 # Required for the experimental tfx/examples using Flax, e.g., 190 # tfx/examples/penguin. 191 'jax>=0.2.13,<0.3', 192 'jaxlib>=0.1.64,<0.2', 193 'flax>=0.3.3,<0.4', 194 # Required for tfx/examples/penguin/penguin_utils_cloud_tuner.py 195 'tensorflow-cloud>=0.1,<0.2', 196 ] 197 198 199 def make_extra_packages_all(): 200 # All extra dependencies. 201 return [ 202 *make_extra_packages_test(), 203 *make_extra_packages_tfjs(), 204 *make_extra_packages_tf_ranking(), 205 *make_extra_packages_examples(), 206 ] 207 [end of tfx/dependencies.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/tfx/dependencies.py b/tfx/dependencies.py --- a/tfx/dependencies.py +++ b/tfx/dependencies.py @@ -88,7 +88,7 @@ 'pyarrow>=1,<3', 'pyyaml>=3.12,<6', 'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3', - 'tensorflow-hub>=0.9.0,<0.10', + 'tensorflow-hub>=0.9.0,<=0.12.0', 'tensorflow-data-validation' + select_constraint( default='>=1.0.0,<1.1.0', nightly='>=1.1.0.dev', @@ -97,8 +97,7 @@ 'tensorflow-model-analysis' + select_constraint( default='>=0.31,<0.32', nightly='>=0.32.0.dev', - git_master='@git+https://github.com/tensorflow/model-analysis@master' - ), + git_master='@git+https://github.com/tensorflow/model-analysis@master'), 'tensorflow-serving-api>=1.15,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3', 'tensorflow-transform' + select_constraint( default='>=1.0.0,<1.1.0',
{"golden_diff": "diff --git a/tfx/dependencies.py b/tfx/dependencies.py\n--- a/tfx/dependencies.py\n+++ b/tfx/dependencies.py\n@@ -88,7 +88,7 @@\n 'pyarrow>=1,<3',\n 'pyyaml>=3.12,<6',\n 'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',\n- 'tensorflow-hub>=0.9.0,<0.10',\n+ 'tensorflow-hub>=0.9.0,<=0.12.0',\n 'tensorflow-data-validation' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n@@ -97,8 +97,7 @@\n 'tensorflow-model-analysis' + select_constraint(\n default='>=0.31,<0.32',\n nightly='>=0.32.0.dev',\n- git_master='@git+https://github.com/tensorflow/model-analysis@master'\n- ),\n+ git_master='@git+https://github.com/tensorflow/model-analysis@master'),\n 'tensorflow-serving-api>=1.15,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',\n 'tensorflow-transform' + select_constraint(\n default='>=1.0.0,<1.1.0',\n", "issue": "Update tensorflow-hub requirement to allow 0.12.0?\nIf the feature is related to a specific library below, please raise an issue in\r\nthe respective repo directly:\r\n\r\n[TensorFlow Data Validation Repo](https://github.com/tensorflow/data-validation/issues)\r\n\r\n[TensorFlow Model Analysis Repo](https://github.com/tensorflow/model-analysis/issues)\r\n\r\n[TensorFlow Transform Repo](https://github.com/tensorflow/transform/issues)\r\n\r\n[TensorFlow Serving Repo](https://github.com/tensorflow/serving/issues)\r\n\r\n**System information**\r\n\r\n- TFX Version (you are using): 1.0.0-rc0\r\n- Environment in which you plan to use the feature (e.g., Local\r\n (Linux/MacOS/Windows), Interactive Notebook, Google Cloud, etc..): MacOS, AWS\r\n- Are you willing to contribute it (Yes/No): Yes\r\n\r\n**Describe the feature and the current behavior/state.**\r\ntfx (1.0.0-rc0) currently depends on tensorflow-hub (>=0.9.0,<0.10)\r\n\r\nI was wondering if we could update tensorflow-hub dependancy for tfx to allow tf-hub 0.12.0, so something like (>=0.9.0,<=0.12.0)?\r\n\r\nI am not sure if that would break anything in tfx, but I am happy to investigate and contribute to this change\r\n\r\n**Will this change the current API? How?**\r\nNo\r\n\r\n**Who will benefit with this feature?**\r\ntensorflow-hub has added some new features in 0.10.0 and beyond (specifically the one I'm interested in \"`compute_output_shape` in `hub.KerasLayer`\" which they added in 0.12.0). It would be cool to be able to take advantage of those while still being able to use tfx\r\n\r\n**Do you have a workaround or are completely blocked by this?** :\r\nBlocked\r\n\r\n**Name of your Organization (Optional)**\r\n\r\n\r\n**Any Other info.**\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Package dependencies for TFX.\n\ntfx and family libraries (such as tensorflow-model-analysis) adopts environment\nvariable (TFX_DEPENDENCY_SELECTOR) based dependency version selection. This\ndependency will be baked in to the wheel, in other words you cannot change\ndependency string once wheel is built.\n\n- UNCONSTRAINED uses dependency without any version constraint string, which is\n useful when you manually build wheels of parent library (e.g. tfx-bsl) of\n arbitrary version, and install it without dependency constraints conflict.\n- NIGHTLY uses x.(y+1).0.dev version as a lower version constraint. tfx nightly\n will transitively depend on nightly versions of other TFX family libraries,\n and this version constraint is required.\n- GIT_MASTER uses github master branch URL of the dependency, which is useful\n during development, or when depending on the github master HEAD version of\n tfx. This is because tfx github master HEAD version is actually using github\n master HEAD version of parent libraries.\n Caveat: URL dependency is not upgraded with --upgrade flag, and you have to\n specify --force-reinstall flag to fetch the latest change from each master\n branch HEAD.\n- For the release, we use a range of version, which is also used as a default.\n\"\"\"\nimport os\n\n\ndef select_constraint(default, nightly=None, git_master=None):\n \"\"\"Select dependency constraint based on TFX_DEPENDENCY_SELECTOR env var.\"\"\"\n selector = os.environ.get('TFX_DEPENDENCY_SELECTOR')\n if selector == 'UNCONSTRAINED':\n return ''\n elif selector == 'NIGHTLY' and nightly is not None:\n return nightly\n elif selector == 'GIT_MASTER' and git_master is not None:\n return git_master\n else:\n return default\n\n\ndef make_pipeline_sdk_required_install_packages():\n return [\n 'absl-py>=0.9,<0.13',\n 'ml-metadata' + select_constraint(\n # LINT.IfChange\n default='>=1.0.0,<1.1.0',\n # LINT.ThenChange(tfx/workspace.bzl)\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/google/ml-metadata@master'),\n 'packaging>=20,<21',\n 'portpicker>=1.3.1,<2',\n 'protobuf>=3.12.2,<4',\n 'docker>=4.1,<5',\n # TODO(b/176812386): Deprecate usage of jinja2 for placeholders.\n 'jinja2>=2.7.3,<3',\n ]\n\n\ndef make_required_install_packages():\n # Make sure to sync the versions of common dependencies (absl-py, numpy,\n # and protobuf) with TF.\n return make_pipeline_sdk_required_install_packages() + [\n 'apache-beam[gcp]>=2.29,<3',\n 'attrs>=19.3.0,<21',\n 'click>=7,<8',\n 'google-api-python-client>=1.7.8,<2',\n 'google-cloud-aiplatform>=0.5.0,<0.8',\n 'google-cloud-bigquery>=1.28.0,<3',\n 'grpcio>=1.28.1,<2',\n # TODO(b/173976603): remove pinned keras-tuner upperbound when its\n # dependency expecatation with TensorFlow is sorted out.\n 'keras-tuner>=1,<1.0.2',\n 'kubernetes>=10.0.1,<12',\n # TODO(b/179195488): remove numpy dependency after 1.20 migration.\n # This dependency was added only to limit numpy 1.20 installation.\n 'numpy>=1.16,<1.20',\n 'pyarrow>=1,<3',\n 'pyyaml>=3.12,<6',\n 'tensorflow>=1.15.2,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',\n 'tensorflow-hub>=0.9.0,<0.10',\n 'tensorflow-data-validation' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/tensorflow/data-validation@master'\n ),\n 'tensorflow-model-analysis' + select_constraint(\n default='>=0.31,<0.32',\n nightly='>=0.32.0.dev',\n git_master='@git+https://github.com/tensorflow/model-analysis@master'\n ),\n 'tensorflow-serving-api>=1.15,!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,<3',\n 'tensorflow-transform' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/tensorflow/transform@master'),\n 'tfx-bsl' + select_constraint(\n default='>=1.0.0,<1.1.0',\n nightly='>=1.1.0.dev',\n git_master='@git+https://github.com/tensorflow/tfx-bsl@master'),\n ]\n\n\ndef make_extra_packages_airflow():\n \"\"\"Prepare extra packages needed for Apache Airflow orchestrator.\"\"\"\n return [\n # TODO(b/188940096): update supported version.\n 'apache-airflow[mysql]>=1.10.14,<3',\n # TODO(b/182848576): Delete pinned sqlalchemy after apache-airflow 2.0.2\n # or later.(github.com/apache/airflow/issues/14811)\n 'sqlalchemy>=1.3,<1.4',\n ]\n\n\ndef make_extra_packages_kfp():\n \"\"\"Prepare extra packages needed for Kubeflow Pipelines orchestrator.\"\"\"\n return [\n 'kfp>=1.6.1,<2',\n 'kfp-pipeline-spec>=0.1.7,<0.2',\n ]\n\n\ndef make_extra_packages_test():\n \"\"\"Prepare extra packages needed for running unit tests.\"\"\"\n # Note: It is okay to pin packages to exact versions in this list to minimize\n # conflicts.\n return make_extra_packages_airflow() + make_extra_packages_kfp() + [\n 'pytest>=5,<6',\n ]\n\n\ndef make_extra_packages_docker_image():\n # Packages needed for tfx docker image.\n return [\n 'kfp-pipeline-spec>=0.1.7,<0.2',\n 'mmh>=2.2,<3',\n 'python-snappy>=0.5,<0.6',\n ]\n\n\ndef make_extra_packages_tfjs():\n # Packages needed for tfjs.\n return [\n 'tensorflowjs>=3.6.0,<4',\n ]\n\n\ndef make_extra_packages_tf_ranking():\n # Packages needed for tf-ranking which is used in tfx/examples/ranking.\n return [\n 'tensorflow-ranking>=0.3.3,<0.4',\n 'struct2tensor' + select_constraint(\n default='>=0.31,<0.32',\n nightly='>=0.32.0.dev',\n git_master='@git+https://github.com/google/struct2tensor@master'),\n ]\n\n\ndef make_extra_packages_examples():\n # Extra dependencies required for tfx/examples.\n return [\n # Required for presto ExampleGen custom component in\n # tfx/examples/custom_components/presto_example_gen\n 'presto-python-client>=0.7,<0.8',\n # Required for slack custom component in\n # tfx/examples/custom_components/slack\n 'slackclient>=2.8.2,<3',\n 'websocket-client>=0.57,<1',\n # Required for bert examples in tfx/examples/bert\n 'tensorflow-text>=1.15.1,<3',\n # Required for tfx/examples/cifar10\n 'flatbuffers>=1.12,<2',\n 'tflite-support>=0.1.0a1,<0.1.1',\n # Required for tfx/examples/penguin/experimental\n # LINT.IfChange\n 'scikit-learn>=0.23,<0.24',\n # LINT.ThenChange(\n # examples/penguin/experimental/penguin_pipeline_sklearn_gcp.py)\n # Required for the experimental tfx/examples using Flax, e.g.,\n # tfx/examples/penguin.\n 'jax>=0.2.13,<0.3',\n 'jaxlib>=0.1.64,<0.2',\n 'flax>=0.3.3,<0.4',\n # Required for tfx/examples/penguin/penguin_utils_cloud_tuner.py\n 'tensorflow-cloud>=0.1,<0.2',\n ]\n\n\ndef make_extra_packages_all():\n # All extra dependencies.\n return [\n *make_extra_packages_test(),\n *make_extra_packages_tfjs(),\n *make_extra_packages_tf_ranking(),\n *make_extra_packages_examples(),\n ]\n", "path": "tfx/dependencies.py"}]}
3,643
335
gh_patches_debug_3237
rasdani/github-patches
git_diff
DistrictDataLabs__yellowbrick-545
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Joint Plot Viz has messed up (overlapping labels on) axes **Describe the bug** If you look at the x and y axis on http://www.scikit-yb.org/en/latest/api/features/scatter.html#joint-plot-visualization you will see that the labels are overlapping. **To Reproduce** Create a joint plot as shown in the docs **Expected behavior** Labels on axes should be clear. **Desktop (please complete the following information):** - OS: macOS - Python Version 3.6.4 - Yellowbrick Version 0.8 Joint Plot Viz has messed up (overlapping labels on) axes **Describe the bug** If you look at the x and y axis on http://www.scikit-yb.org/en/latest/api/features/scatter.html#joint-plot-visualization you will see that the labels are overlapping. **To Reproduce** Create a joint plot as shown in the docs **Expected behavior** Labels on axes should be clear. **Desktop (please complete the following information):** - OS: macOS - Python Version 3.6.4 - Yellowbrick Version 0.8 </issue> <code> [start of docs/api/features/jointplot.py] 1 import pandas as pd 2 import matplotlib.pyplot as plt 3 4 from yellowbrick.features import JointPlotVisualizer 5 6 7 def jointplot(X, y, outpath, **kwargs): 8 # Create a new figure and axes 9 fig = plt.figure() 10 ax = fig.add_subplot(111) 11 12 # Create the visualizer 13 visualizer = JointPlotVisualizer(ax=ax, **kwargs) 14 visualizer.fit(X, y) 15 visualizer.transform(X) 16 17 # Save to disk 18 visualizer.poof(outpath=outpath) 19 plt.savefig(outpath) 20 21 22 if __name__ == '__main__': 23 24 # Load the regression data set 25 data = pd.read_csv("../../../examples/data/concrete/concrete.csv") 26 27 feature = 'cement' 28 target = 'strength' 29 30 # Get the X and y data from the DataFrame 31 Xs = data[feature] 32 ys = data[target] 33 34 # Draw the joint plot visualizer 35 jointplot(Xs, ys, "images/jointplot.png", feature=feature, target=target) 36 37 # Draw the joint plot visualizer with hexadecimal scatter plot 38 jointplot(Xs, ys, "images/jointplot_hex.png", feature=feature, target=target, joint_plot='hex') 39 [end of docs/api/features/jointplot.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/docs/api/features/jointplot.py b/docs/api/features/jointplot.py --- a/docs/api/features/jointplot.py +++ b/docs/api/features/jointplot.py @@ -5,12 +5,8 @@ def jointplot(X, y, outpath, **kwargs): - # Create a new figure and axes - fig = plt.figure() - ax = fig.add_subplot(111) - # Create the visualizer - visualizer = JointPlotVisualizer(ax=ax, **kwargs) + visualizer = JointPlotVisualizer(**kwargs) visualizer.fit(X, y) visualizer.transform(X)
{"golden_diff": "diff --git a/docs/api/features/jointplot.py b/docs/api/features/jointplot.py\n--- a/docs/api/features/jointplot.py\n+++ b/docs/api/features/jointplot.py\n@@ -5,12 +5,8 @@\n \n \n def jointplot(X, y, outpath, **kwargs):\n- # Create a new figure and axes\n- fig = plt.figure()\n- ax = fig.add_subplot(111)\n-\n # Create the visualizer\n- visualizer = JointPlotVisualizer(ax=ax, **kwargs)\n+ visualizer = JointPlotVisualizer(**kwargs)\n visualizer.fit(X, y)\n visualizer.transform(X)\n", "issue": "Joint Plot Viz has messed up (overlapping labels on) axes\n**Describe the bug**\r\n\r\nIf you look at the x and y axis on http://www.scikit-yb.org/en/latest/api/features/scatter.html#joint-plot-visualization you will see that the labels are overlapping.\r\n\r\n**To Reproduce**\r\n\r\nCreate a joint plot as shown in the docs\r\n\r\n\r\n\r\n**Expected behavior**\r\n\r\nLabels on axes should be clear.\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS\r\n - Python Version 3.6.4\r\n - Yellowbrick Version 0.8\r\n\r\n\nJoint Plot Viz has messed up (overlapping labels on) axes\n**Describe the bug**\r\n\r\nIf you look at the x and y axis on http://www.scikit-yb.org/en/latest/api/features/scatter.html#joint-plot-visualization you will see that the labels are overlapping.\r\n\r\n**To Reproduce**\r\n\r\nCreate a joint plot as shown in the docs\r\n\r\n\r\n\r\n**Expected behavior**\r\n\r\nLabels on axes should be clear.\r\n\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: macOS\r\n - Python Version 3.6.4\r\n - Yellowbrick Version 0.8\r\n\r\n\n", "before_files": [{"content": "import pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom yellowbrick.features import JointPlotVisualizer\n\n\ndef jointplot(X, y, outpath, **kwargs):\n # Create a new figure and axes\n fig = plt.figure()\n ax = fig.add_subplot(111)\n\n # Create the visualizer\n visualizer = JointPlotVisualizer(ax=ax, **kwargs)\n visualizer.fit(X, y)\n visualizer.transform(X)\n\n # Save to disk\n visualizer.poof(outpath=outpath)\n plt.savefig(outpath)\n\n\nif __name__ == '__main__':\n\n # Load the regression data set\n data = pd.read_csv(\"../../../examples/data/concrete/concrete.csv\")\n\n feature = 'cement'\n target = 'strength'\n\n # Get the X and y data from the DataFrame\n Xs = data[feature]\n ys = data[target]\n\n # Draw the joint plot visualizer\n jointplot(Xs, ys, \"images/jointplot.png\", feature=feature, target=target)\n\n # Draw the joint plot visualizer with hexadecimal scatter plot\n jointplot(Xs, ys, \"images/jointplot_hex.png\", feature=feature, target=target, joint_plot='hex')\n", "path": "docs/api/features/jointplot.py"}]}
1,131
143
gh_patches_debug_17565
rasdani/github-patches
git_diff
beeware__toga-530
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> ImageView Example Crashes when Image Not Found ## Expected Behavior <!--- If you're describing a bug, tell us what you expect to happen. --> The image view example to render If the imageview widget does not find an image file, it should print a warning instead of crashing. Also, test coverage shows 100% for imageview widget `toga/widgets/imageview.py`, consider adding a test case for image not found. <!--- If you're requesting a new feature, tell us why you'd like this feature. --> ## Current Behavior <!--- If you're describing a bug, what currently happens? --> ``` (venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ python -m imageview [GTK+] Not implemented: Widget.set_hidden() resources/brutus.png [GTK+] Not implemented: Widget.set_hidden() Traceback (most recent call last): File "/home/robert/staging/toga/src/gtk/toga_gtk/app.py", line 67, in startup self.interface.startup() File "/home/robert/staging/toga/examples/imageview/imageview/app.py", line 19, in startup imageview_from_path = toga.ImageView(image_from_path) File "/home/robert/staging/toga/src/core/toga/widgets/imageview.py", line 23, in __init__ self.image = image File "/home/robert/staging/toga/src/core/toga/widgets/imageview.py", line 33, in image self._impl.set_image(self._image) File "/home/robert/staging/toga/src/gtk/toga_gtk/widgets/imageview.py", line 33, in set_image self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(os.path.join(toga.App.app_dir, self.image.path)) GLib.Error: g-file-error-quark: Failed to open file '/home/robert/staging/toga/examples/imageview/imageview/resources/brutus.png': No such file or directory (4) ^C ``` I have my local checkout of toga installed the following and seems to have the new location of brutus.png from 2d95b16 (#521) pip install -e src/core pip install -e src/dummy pip install -e src/gtk ``` (venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ ls -la /home/robert/staging/toga/examples/imageview/imageview/resources/brutus.png ls: cannot access '/home/robert/staging/toga/examples/imageview/imageview/resources/brutus.png': No such file or directory (venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ ls -la /home/robert/staging/toga/examples/imageview/resources/brutus.png -rw-r--r-- 1 robert robert 5469 May 28 07:26 /home/robert/staging/toga/examples/imageview/resources/brutus.png (venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ ``` ## Steps to reproduce <!--- Provide a set of steps describing how to reproduce this bug. If you have a live example, provide the link below --> 1.Setup Linux environment following instructions here: https://toga.readthedocs.io/en/latest/how-to/contribute.html 2. cd imageview 3. python -m imageview ## Your Environment <!--- Provide details on your current environment you found the bug in --> * Python Version (list the specific version number) Python 3.6.5 * Operating System and Version (select from the following and list the specific version number; if your OS is not listed, list that as well) - [ ] macOS - version: - [ x] Linux - distro: - version: Ubuntu_18_04 - [ ] Windows - version: - [ ] Other - name: - version: * Toga Target (the type of app you are trying to generate) - [ ] android - [ ] cocoa - [ ] django - [x ] gtk - [ ] iOS - [ ] tvOS - [ ] watchOS - [ ] winforms - [ ] win32 - [ ] Other (please specify) </issue> <code> [start of src/gtk/toga_gtk/widgets/imageview.py] 1 import os 2 from urllib.request import Request, urlopen 3 4 from gi.repository import GdkPixbuf, Gio, Gtk 5 6 import toga 7 8 from .base import Widget 9 10 11 class ImageView(Widget): 12 13 def create(self): 14 self.native = Gtk.Box(orientation=Gtk.Orientation.VERTICAL) 15 self._image = Gtk.Image() 16 self.native.add(self._image) 17 self.native.interface = self.interface 18 19 def get_image(self): 20 return self.image 21 22 def set_image(self, image): 23 self.image = image 24 25 if self.image.path.startswith(('http://', 'https://')): 26 request = Request(self.image.path, headers={'User-Agent': ''}) 27 with urlopen(request) as result: 28 input_stream = Gio.MemoryInputStream.new_from_data(result.read(), None) 29 self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_stream(input_stream, None) 30 elif os.path.isabs(self.image.path): 31 self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(self.image.path) 32 else: 33 self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(os.path.join(toga.App.app_dir, self.image.path)) 34 35 self.rehint() 36 37 def rehint(self): 38 height, width = self._resize_max( 39 original_height=self._original_pixbuf.get_height(), 40 original_width=self._original_pixbuf.get_width(), 41 max_height=self.native.get_allocated_height(), 42 max_width=self.native.get_allocated_width()) 43 44 pixbuf = self._original_pixbuf.scale_simple(width, height, GdkPixbuf.InterpType.BILINEAR) 45 self._image.set_from_pixbuf(pixbuf) 46 47 @staticmethod 48 def _resize_max(original_height, original_width, max_height, max_width): 49 50 # Check to make sure all dimensions have valid sizes 51 if min(original_height, original_width, max_height, max_width) <= 0: 52 return 1, 1 53 54 width_ratio = max_width/original_width 55 height_ratio = max_height/original_height 56 57 height = original_height * width_ratio 58 if height <= max_height: 59 width = original_width * width_ratio 60 else: 61 height = original_height * height_ratio 62 width = original_width * height_ratio 63 64 return int(height), int(width) 65 [end of src/gtk/toga_gtk/widgets/imageview.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/src/gtk/toga_gtk/widgets/imageview.py b/src/gtk/toga_gtk/widgets/imageview.py --- a/src/gtk/toga_gtk/widgets/imageview.py +++ b/src/gtk/toga_gtk/widgets/imageview.py @@ -27,11 +27,11 @@ with urlopen(request) as result: input_stream = Gio.MemoryInputStream.new_from_data(result.read(), None) self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_stream(input_stream, None) - elif os.path.isabs(self.image.path): - self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(self.image.path) + full_image_path = self.image.path if os.path.isabs(self.image.path) else os.path.join(toga.App.app_dir, self.image.path) + if os.path.isfile(full_image_path): + self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file else: - self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(os.path.join(toga.App.app_dir, self.image.path)) - + raise ValueError("No image file available at ", path) self.rehint() def rehint(self):
{"golden_diff": "diff --git a/src/gtk/toga_gtk/widgets/imageview.py b/src/gtk/toga_gtk/widgets/imageview.py\n--- a/src/gtk/toga_gtk/widgets/imageview.py\n+++ b/src/gtk/toga_gtk/widgets/imageview.py\n@@ -27,11 +27,11 @@\n with urlopen(request) as result:\n input_stream = Gio.MemoryInputStream.new_from_data(result.read(), None)\n self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_stream(input_stream, None)\n- elif os.path.isabs(self.image.path):\n- self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(self.image.path)\n+ full_image_path = self.image.path if os.path.isabs(self.image.path) else os.path.join(toga.App.app_dir, self.image.path)\n+ if os.path.isfile(full_image_path):\n+ self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file\n else:\n- self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(os.path.join(toga.App.app_dir, self.image.path))\n-\n+ raise ValueError(\"No image file available at \", path)\n self.rehint()\n \n def rehint(self):\n", "issue": "ImageView Example Crashes when Image Not Found\n## Expected Behavior\r\n<!--- If you're describing a bug, tell us what you expect to happen. -->\r\nThe image view example to render\r\nIf the imageview widget does not find an image file, it should print a warning instead of crashing.\r\nAlso, test coverage shows 100% for imageview widget `toga/widgets/imageview.py`, consider adding a test case for image not found.\r\n\r\n<!--- If you're requesting a new feature, tell us why you'd like this feature. -->\r\n\r\n## Current Behavior\r\n<!--- If you're describing a bug, what currently happens? -->\r\n```\r\n(venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ python -m imageview\r\n[GTK+] Not implemented: Widget.set_hidden()\r\nresources/brutus.png\r\n[GTK+] Not implemented: Widget.set_hidden()\r\nTraceback (most recent call last):\r\n File \"/home/robert/staging/toga/src/gtk/toga_gtk/app.py\", line 67, in startup\r\n self.interface.startup()\r\n File \"/home/robert/staging/toga/examples/imageview/imageview/app.py\", line 19, in startup\r\n imageview_from_path = toga.ImageView(image_from_path)\r\n File \"/home/robert/staging/toga/src/core/toga/widgets/imageview.py\", line 23, in __init__\r\n self.image = image\r\n File \"/home/robert/staging/toga/src/core/toga/widgets/imageview.py\", line 33, in image\r\n self._impl.set_image(self._image)\r\n File \"/home/robert/staging/toga/src/gtk/toga_gtk/widgets/imageview.py\", line 33, in set_image\r\n self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(os.path.join(toga.App.app_dir, self.image.path))\r\nGLib.Error: g-file-error-quark: Failed to open file '/home/robert/staging/toga/examples/imageview/imageview/resources/brutus.png': No such file or directory (4)\r\n^C\r\n```\r\nI have my local checkout of toga installed the following and seems to have the new location of brutus.png from 2d95b16 (#521) \r\npip install -e src/core\r\npip install -e src/dummy\r\npip install -e src/gtk\r\n\r\n```\r\n(venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ ls -la /home/robert/staging/toga/examples/imageview/imageview/resources/brutus.png\r\nls: cannot access '/home/robert/staging/toga/examples/imageview/imageview/resources/brutus.png': No such file or directory\r\n(venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ ls -la /home/robert/staging/toga/examples/imageview/resources/brutus.png\r\n-rw-r--r-- 1 robert robert 5469 May 28 07:26 /home/robert/staging/toga/examples/imageview/resources/brutus.png\r\n(venv) robert@robert-VirtualBox:~/staging/toga/examples/imageview$ \r\n```\r\n## Steps to reproduce\r\n<!--- Provide a set of steps describing how to reproduce this bug. If you have a live example, provide the link below -->\r\n1.Setup Linux environment following instructions here:\r\nhttps://toga.readthedocs.io/en/latest/how-to/contribute.html\r\n\r\n2. cd imageview\r\n\r\n3. python -m imageview\r\n\r\n## Your Environment\r\n<!--- Provide details on your current environment you found the bug in -->\r\n\r\n* Python Version (list the specific version number)\r\nPython 3.6.5\r\n\r\n* Operating System and Version (select from the following and list the specific version number; if your OS is not listed, list that as well)\r\n\r\n - [ ] macOS - version: \r\n - [ x] Linux - distro: - version: Ubuntu_18_04\r\n - [ ] Windows - version:\r\n - [ ] Other - name: - version:\r\n\r\n* Toga Target (the type of app you are trying to generate)\r\n \r\n - [ ] android\r\n - [ ] cocoa\r\n - [ ] django \r\n - [x ] gtk\r\n - [ ] iOS\r\n - [ ] tvOS\r\n - [ ] watchOS\r\n - [ ] winforms \r\n - [ ] win32\r\n - [ ] Other (please specify)\r\n\n", "before_files": [{"content": "import os\nfrom urllib.request import Request, urlopen\n\nfrom gi.repository import GdkPixbuf, Gio, Gtk\n\nimport toga\n\nfrom .base import Widget\n\n\nclass ImageView(Widget):\n\n def create(self):\n self.native = Gtk.Box(orientation=Gtk.Orientation.VERTICAL)\n self._image = Gtk.Image()\n self.native.add(self._image)\n self.native.interface = self.interface\n\n def get_image(self):\n return self.image\n\n def set_image(self, image):\n self.image = image\n\n if self.image.path.startswith(('http://', 'https://')):\n request = Request(self.image.path, headers={'User-Agent': ''})\n with urlopen(request) as result:\n input_stream = Gio.MemoryInputStream.new_from_data(result.read(), None)\n self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_stream(input_stream, None)\n elif os.path.isabs(self.image.path):\n self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(self.image.path)\n else:\n self._original_pixbuf = GdkPixbuf.Pixbuf.new_from_file(os.path.join(toga.App.app_dir, self.image.path))\n\n self.rehint()\n\n def rehint(self):\n height, width = self._resize_max(\n original_height=self._original_pixbuf.get_height(),\n original_width=self._original_pixbuf.get_width(),\n max_height=self.native.get_allocated_height(),\n max_width=self.native.get_allocated_width())\n\n pixbuf = self._original_pixbuf.scale_simple(width, height, GdkPixbuf.InterpType.BILINEAR)\n self._image.set_from_pixbuf(pixbuf)\n\n @staticmethod\n def _resize_max(original_height, original_width, max_height, max_width):\n\n # Check to make sure all dimensions have valid sizes\n if min(original_height, original_width, max_height, max_width) <= 0:\n return 1, 1\n\n width_ratio = max_width/original_width\n height_ratio = max_height/original_height\n\n height = original_height * width_ratio\n if height <= max_height:\n width = original_width * width_ratio\n else:\n height = original_height * height_ratio\n width = original_width * height_ratio\n\n return int(height), int(width)\n", "path": "src/gtk/toga_gtk/widgets/imageview.py"}]}
2,128
266
gh_patches_debug_2948
rasdani/github-patches
git_diff
espnet__espnet-3022
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Error with using compute-fbank-feats.py Hello! I tried to use script compute-fbank-feats.py to compute fbank features from wav, and tried to use it according to its documentation https://espnet.github.io/espnet/apis/utils_py.html#compute-fbank-feats-py as folllows: ``` python3.7 utils/compute-fbank-feats.py scp:wav.scp ark:out.ark ``` but got an error: ``` File "utils/compute-fbank-feats.py", line 134, in <module> main() File "utils/compute-fbank-feats.py", line 128, in main fmax=args.fmax, File "/home/karina/.local/lib/python3.7/site-packages/espnet/transform/spectrogram.py", line 116, in logmelspectrogram x_stft, fs=fs, n_mels=n_mels, n_fft=n_fft, fmin=fmin, fmax=fmax, eps=eps File "/home/karina/.local/lib/python3.7/site-packages/espnet/transform/spectrogram.py", line 74, in stft2logmelspectrogram fmax = fs / 2 if fmax is None else fmax TypeError: unsupported operand type(s) for /: 'NoneType' and 'int' ``` wav.scp contains this text: ``` 0 test.wav ``` Does anyone have ideas how to solve this error? </issue> <code> [start of utils/compute-fbank-feats.py] 1 #!/usr/bin/env python3 2 3 # Copyright 2018 Nagoya University (Tomoki Hayashi) 4 # Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) 5 6 import argparse 7 from distutils.util import strtobool 8 import logging 9 10 import kaldiio 11 import numpy 12 import resampy 13 14 from espnet.transform.spectrogram import logmelspectrogram 15 from espnet.utils.cli_utils import get_commandline_args 16 from espnet.utils.cli_writers import file_writer_helper 17 from espnet2.utils.types import int_or_none 18 19 20 def get_parser(): 21 parser = argparse.ArgumentParser( 22 description="compute FBANK feature from WAV", 23 formatter_class=argparse.ArgumentDefaultsHelpFormatter, 24 ) 25 parser.add_argument("--fs", type=int_or_none, help="Sampling frequency") 26 parser.add_argument( 27 "--fmax", type=int_or_none, default=None, nargs="?", help="Maximum frequency" 28 ) 29 parser.add_argument( 30 "--fmin", type=int_or_none, default=None, nargs="?", help="Minimum frequency" 31 ) 32 parser.add_argument("--n_mels", type=int, default=80, help="Number of mel basis") 33 parser.add_argument("--n_fft", type=int, default=1024, help="FFT length in point") 34 parser.add_argument( 35 "--n_shift", type=int, default=512, help="Shift length in point" 36 ) 37 parser.add_argument( 38 "--win_length", 39 type=int_or_none, 40 default=None, 41 nargs="?", 42 help="Analisys window length in point", 43 ) 44 parser.add_argument( 45 "--window", 46 type=str, 47 default="hann", 48 choices=["hann", "hamming"], 49 help="Type of window", 50 ) 51 parser.add_argument( 52 "--write-num-frames", type=str, help="Specify wspecifer for utt2num_frames" 53 ) 54 parser.add_argument( 55 "--filetype", 56 type=str, 57 default="mat", 58 choices=["mat", "hdf5"], 59 help="Specify the file format for output. " 60 '"mat" is the matrix format in kaldi', 61 ) 62 parser.add_argument( 63 "--compress", type=strtobool, default=False, help="Save in compressed format" 64 ) 65 parser.add_argument( 66 "--compression-method", 67 type=int, 68 default=2, 69 help="Specify the method(if mat) or " "gzip-level(if hdf5)", 70 ) 71 parser.add_argument("--verbose", "-V", default=0, type=int, help="Verbose option") 72 parser.add_argument( 73 "--normalize", 74 choices=[1, 16, 24, 32], 75 type=int, 76 default=None, 77 help="Give the bit depth of the PCM, " 78 "then normalizes data to scale in [-1,1]", 79 ) 80 parser.add_argument("rspecifier", type=str, help="WAV scp file") 81 parser.add_argument( 82 "--segments", 83 type=str, 84 help="segments-file format: each line is either" 85 "<segment-id> <recording-id> <start-time> <end-time>" 86 "e.g. call-861225-A-0050-0065 call-861225-A 5.0 6.5", 87 ) 88 parser.add_argument("wspecifier", type=str, help="Write specifier") 89 return parser 90 91 92 def main(): 93 parser = get_parser() 94 args = parser.parse_args() 95 96 logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s" 97 if args.verbose > 0: 98 logging.basicConfig(level=logging.INFO, format=logfmt) 99 else: 100 logging.basicConfig(level=logging.WARN, format=logfmt) 101 logging.info(get_commandline_args()) 102 103 with kaldiio.ReadHelper( 104 args.rspecifier, segments=args.segments 105 ) as reader, file_writer_helper( 106 args.wspecifier, 107 filetype=args.filetype, 108 write_num_frames=args.write_num_frames, 109 compress=args.compress, 110 compression_method=args.compression_method, 111 ) as writer: 112 for utt_id, (rate, array) in reader: 113 array = array.astype(numpy.float32) 114 if args.fs is not None and rate != args.fs: 115 array = resampy.resample(array, rate, args.fs, axis=0) 116 if args.normalize is not None and args.normalize != 1: 117 array = array / (1 << (args.normalize - 1)) 118 119 lmspc = logmelspectrogram( 120 x=array, 121 fs=args.fs, 122 n_mels=args.n_mels, 123 n_fft=args.n_fft, 124 n_shift=args.n_shift, 125 win_length=args.win_length, 126 window=args.window, 127 fmin=args.fmin, 128 fmax=args.fmax, 129 ) 130 writer[utt_id] = lmspc 131 132 133 if __name__ == "__main__": 134 main() 135 [end of utils/compute-fbank-feats.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/utils/compute-fbank-feats.py b/utils/compute-fbank-feats.py --- a/utils/compute-fbank-feats.py +++ b/utils/compute-fbank-feats.py @@ -118,7 +118,7 @@ lmspc = logmelspectrogram( x=array, - fs=args.fs, + fs=args.fs if args.fs is not None else rate, n_mels=args.n_mels, n_fft=args.n_fft, n_shift=args.n_shift,
{"golden_diff": "diff --git a/utils/compute-fbank-feats.py b/utils/compute-fbank-feats.py\n--- a/utils/compute-fbank-feats.py\n+++ b/utils/compute-fbank-feats.py\n@@ -118,7 +118,7 @@\n \n lmspc = logmelspectrogram(\n x=array,\n- fs=args.fs,\n+ fs=args.fs if args.fs is not None else rate,\n n_mels=args.n_mels,\n n_fft=args.n_fft,\n n_shift=args.n_shift,\n", "issue": "Error with using compute-fbank-feats.py\nHello! I tried to use script compute-fbank-feats.py to compute fbank features from wav, and tried to use it according to its documentation https://espnet.github.io/espnet/apis/utils_py.html#compute-fbank-feats-py\r\n\r\nas folllows:\r\n```\r\npython3.7 utils/compute-fbank-feats.py scp:wav.scp ark:out.ark\r\n```\r\n\r\nbut got an error:\r\n\r\n```\r\n File \"utils/compute-fbank-feats.py\", line 134, in <module>\r\n main()\r\n File \"utils/compute-fbank-feats.py\", line 128, in main\r\n fmax=args.fmax,\r\n File \"/home/karina/.local/lib/python3.7/site-packages/espnet/transform/spectrogram.py\", line 116, in logmelspectrogram\r\n x_stft, fs=fs, n_mels=n_mels, n_fft=n_fft, fmin=fmin, fmax=fmax, eps=eps\r\n File \"/home/karina/.local/lib/python3.7/site-packages/espnet/transform/spectrogram.py\", line 74, in stft2logmelspectrogram\r\n fmax = fs / 2 if fmax is None else fmax\r\nTypeError: unsupported operand type(s) for /: 'NoneType' and 'int'\r\n```\r\n\r\nwav.scp contains this text:\r\n```\r\n0 test.wav\r\n```\r\n\r\nDoes anyone have ideas how to solve this error?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright 2018 Nagoya University (Tomoki Hayashi)\n# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)\n\nimport argparse\nfrom distutils.util import strtobool\nimport logging\n\nimport kaldiio\nimport numpy\nimport resampy\n\nfrom espnet.transform.spectrogram import logmelspectrogram\nfrom espnet.utils.cli_utils import get_commandline_args\nfrom espnet.utils.cli_writers import file_writer_helper\nfrom espnet2.utils.types import int_or_none\n\n\ndef get_parser():\n parser = argparse.ArgumentParser(\n description=\"compute FBANK feature from WAV\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter,\n )\n parser.add_argument(\"--fs\", type=int_or_none, help=\"Sampling frequency\")\n parser.add_argument(\n \"--fmax\", type=int_or_none, default=None, nargs=\"?\", help=\"Maximum frequency\"\n )\n parser.add_argument(\n \"--fmin\", type=int_or_none, default=None, nargs=\"?\", help=\"Minimum frequency\"\n )\n parser.add_argument(\"--n_mels\", type=int, default=80, help=\"Number of mel basis\")\n parser.add_argument(\"--n_fft\", type=int, default=1024, help=\"FFT length in point\")\n parser.add_argument(\n \"--n_shift\", type=int, default=512, help=\"Shift length in point\"\n )\n parser.add_argument(\n \"--win_length\",\n type=int_or_none,\n default=None,\n nargs=\"?\",\n help=\"Analisys window length in point\",\n )\n parser.add_argument(\n \"--window\",\n type=str,\n default=\"hann\",\n choices=[\"hann\", \"hamming\"],\n help=\"Type of window\",\n )\n parser.add_argument(\n \"--write-num-frames\", type=str, help=\"Specify wspecifer for utt2num_frames\"\n )\n parser.add_argument(\n \"--filetype\",\n type=str,\n default=\"mat\",\n choices=[\"mat\", \"hdf5\"],\n help=\"Specify the file format for output. \"\n '\"mat\" is the matrix format in kaldi',\n )\n parser.add_argument(\n \"--compress\", type=strtobool, default=False, help=\"Save in compressed format\"\n )\n parser.add_argument(\n \"--compression-method\",\n type=int,\n default=2,\n help=\"Specify the method(if mat) or \" \"gzip-level(if hdf5)\",\n )\n parser.add_argument(\"--verbose\", \"-V\", default=0, type=int, help=\"Verbose option\")\n parser.add_argument(\n \"--normalize\",\n choices=[1, 16, 24, 32],\n type=int,\n default=None,\n help=\"Give the bit depth of the PCM, \"\n \"then normalizes data to scale in [-1,1]\",\n )\n parser.add_argument(\"rspecifier\", type=str, help=\"WAV scp file\")\n parser.add_argument(\n \"--segments\",\n type=str,\n help=\"segments-file format: each line is either\"\n \"<segment-id> <recording-id> <start-time> <end-time>\"\n \"e.g. call-861225-A-0050-0065 call-861225-A 5.0 6.5\",\n )\n parser.add_argument(\"wspecifier\", type=str, help=\"Write specifier\")\n return parser\n\n\ndef main():\n parser = get_parser()\n args = parser.parse_args()\n\n logfmt = \"%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s\"\n if args.verbose > 0:\n logging.basicConfig(level=logging.INFO, format=logfmt)\n else:\n logging.basicConfig(level=logging.WARN, format=logfmt)\n logging.info(get_commandline_args())\n\n with kaldiio.ReadHelper(\n args.rspecifier, segments=args.segments\n ) as reader, file_writer_helper(\n args.wspecifier,\n filetype=args.filetype,\n write_num_frames=args.write_num_frames,\n compress=args.compress,\n compression_method=args.compression_method,\n ) as writer:\n for utt_id, (rate, array) in reader:\n array = array.astype(numpy.float32)\n if args.fs is not None and rate != args.fs:\n array = resampy.resample(array, rate, args.fs, axis=0)\n if args.normalize is not None and args.normalize != 1:\n array = array / (1 << (args.normalize - 1))\n\n lmspc = logmelspectrogram(\n x=array,\n fs=args.fs,\n n_mels=args.n_mels,\n n_fft=args.n_fft,\n n_shift=args.n_shift,\n win_length=args.win_length,\n window=args.window,\n fmin=args.fmin,\n fmax=args.fmax,\n )\n writer[utt_id] = lmspc\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "utils/compute-fbank-feats.py"}]}
2,248
117
gh_patches_debug_26151
rasdani/github-patches
git_diff
python-gitlab__python-gitlab-2771
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Allow update of protected branches In gitlab 15.6 gitlab finally added api support to update protected branch settings, so ProjectProtectedBranch should be updated accordingly https://gitlab.com/gitlab-org/gitlab/-/issues/20229/ </issue> <code> [start of gitlab/v4/objects/branches.py] 1 from typing import Any, cast, Union 2 3 from gitlab.base import RESTManager, RESTObject 4 from gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin 5 from gitlab.types import RequiredOptional 6 7 __all__ = [ 8 "ProjectBranch", 9 "ProjectBranchManager", 10 "ProjectProtectedBranch", 11 "ProjectProtectedBranchManager", 12 ] 13 14 15 class ProjectBranch(ObjectDeleteMixin, RESTObject): 16 _id_attr = "name" 17 18 19 class ProjectBranchManager(NoUpdateMixin, RESTManager): 20 _path = "/projects/{project_id}/repository/branches" 21 _obj_cls = ProjectBranch 22 _from_parent_attrs = {"project_id": "id"} 23 _create_attrs = RequiredOptional(required=("branch", "ref")) 24 25 def get( 26 self, id: Union[str, int], lazy: bool = False, **kwargs: Any 27 ) -> ProjectBranch: 28 return cast(ProjectBranch, super().get(id=id, lazy=lazy, **kwargs)) 29 30 31 class ProjectProtectedBranch(ObjectDeleteMixin, RESTObject): 32 _id_attr = "name" 33 34 35 class ProjectProtectedBranchManager(NoUpdateMixin, RESTManager): 36 _path = "/projects/{project_id}/protected_branches" 37 _obj_cls = ProjectProtectedBranch 38 _from_parent_attrs = {"project_id": "id"} 39 _create_attrs = RequiredOptional( 40 required=("name",), 41 optional=( 42 "push_access_level", 43 "merge_access_level", 44 "unprotect_access_level", 45 "allow_force_push", 46 "allowed_to_push", 47 "allowed_to_merge", 48 "allowed_to_unprotect", 49 "code_owner_approval_required", 50 ), 51 ) 52 53 def get( 54 self, id: Union[str, int], lazy: bool = False, **kwargs: Any 55 ) -> ProjectProtectedBranch: 56 return cast(ProjectProtectedBranch, super().get(id=id, lazy=lazy, **kwargs)) 57 [end of gitlab/v4/objects/branches.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/gitlab/v4/objects/branches.py b/gitlab/v4/objects/branches.py --- a/gitlab/v4/objects/branches.py +++ b/gitlab/v4/objects/branches.py @@ -1,7 +1,13 @@ from typing import Any, cast, Union from gitlab.base import RESTManager, RESTObject -from gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin +from gitlab.mixins import ( + CRUDMixin, + NoUpdateMixin, + ObjectDeleteMixin, + SaveMixin, + UpdateMethod, +) from gitlab.types import RequiredOptional __all__ = [ @@ -28,11 +34,11 @@ return cast(ProjectBranch, super().get(id=id, lazy=lazy, **kwargs)) -class ProjectProtectedBranch(ObjectDeleteMixin, RESTObject): +class ProjectProtectedBranch(SaveMixin, ObjectDeleteMixin, RESTObject): _id_attr = "name" -class ProjectProtectedBranchManager(NoUpdateMixin, RESTManager): +class ProjectProtectedBranchManager(CRUDMixin, RESTManager): _path = "/projects/{project_id}/protected_branches" _obj_cls = ProjectProtectedBranch _from_parent_attrs = {"project_id": "id"} @@ -49,6 +55,7 @@ "code_owner_approval_required", ), ) + _update_method = UpdateMethod.PATCH def get( self, id: Union[str, int], lazy: bool = False, **kwargs: Any
{"golden_diff": "diff --git a/gitlab/v4/objects/branches.py b/gitlab/v4/objects/branches.py\n--- a/gitlab/v4/objects/branches.py\n+++ b/gitlab/v4/objects/branches.py\n@@ -1,7 +1,13 @@\n from typing import Any, cast, Union\n \n from gitlab.base import RESTManager, RESTObject\n-from gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin\n+from gitlab.mixins import (\n+ CRUDMixin,\n+ NoUpdateMixin,\n+ ObjectDeleteMixin,\n+ SaveMixin,\n+ UpdateMethod,\n+)\n from gitlab.types import RequiredOptional\n \n __all__ = [\n@@ -28,11 +34,11 @@\n return cast(ProjectBranch, super().get(id=id, lazy=lazy, **kwargs))\n \n \n-class ProjectProtectedBranch(ObjectDeleteMixin, RESTObject):\n+class ProjectProtectedBranch(SaveMixin, ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n \n \n-class ProjectProtectedBranchManager(NoUpdateMixin, RESTManager):\n+class ProjectProtectedBranchManager(CRUDMixin, RESTManager):\n _path = \"/projects/{project_id}/protected_branches\"\n _obj_cls = ProjectProtectedBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n@@ -49,6 +55,7 @@\n \"code_owner_approval_required\",\n ),\n )\n+ _update_method = UpdateMethod.PATCH\n \n def get(\n self, id: Union[str, int], lazy: bool = False, **kwargs: Any\n", "issue": "Allow update of protected branches\nIn gitlab 15.6 gitlab finally added api support to update protected branch settings, so ProjectProtectedBranch should be updated accordingly\r\n\r\nhttps://gitlab.com/gitlab-org/gitlab/-/issues/20229/\r\n\r\n\r\n\n", "before_files": [{"content": "from typing import Any, cast, Union\n\nfrom gitlab.base import RESTManager, RESTObject\nfrom gitlab.mixins import NoUpdateMixin, ObjectDeleteMixin\nfrom gitlab.types import RequiredOptional\n\n__all__ = [\n \"ProjectBranch\",\n \"ProjectBranchManager\",\n \"ProjectProtectedBranch\",\n \"ProjectProtectedBranchManager\",\n]\n\n\nclass ProjectBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n\nclass ProjectBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/{project_id}/repository/branches\"\n _obj_cls = ProjectBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(required=(\"branch\", \"ref\"))\n\n def get(\n self, id: Union[str, int], lazy: bool = False, **kwargs: Any\n ) -> ProjectBranch:\n return cast(ProjectBranch, super().get(id=id, lazy=lazy, **kwargs))\n\n\nclass ProjectProtectedBranch(ObjectDeleteMixin, RESTObject):\n _id_attr = \"name\"\n\n\nclass ProjectProtectedBranchManager(NoUpdateMixin, RESTManager):\n _path = \"/projects/{project_id}/protected_branches\"\n _obj_cls = ProjectProtectedBranch\n _from_parent_attrs = {\"project_id\": \"id\"}\n _create_attrs = RequiredOptional(\n required=(\"name\",),\n optional=(\n \"push_access_level\",\n \"merge_access_level\",\n \"unprotect_access_level\",\n \"allow_force_push\",\n \"allowed_to_push\",\n \"allowed_to_merge\",\n \"allowed_to_unprotect\",\n \"code_owner_approval_required\",\n ),\n )\n\n def get(\n self, id: Union[str, int], lazy: bool = False, **kwargs: Any\n ) -> ProjectProtectedBranch:\n return cast(ProjectProtectedBranch, super().get(id=id, lazy=lazy, **kwargs))\n", "path": "gitlab/v4/objects/branches.py"}]}
1,110
335
gh_patches_debug_28117
rasdani/github-patches
git_diff
open-telemetry__opentelemetry-python-contrib-2573
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add xray propagators that prioritizes xray environment variable Similar to [this](https://github.com/open-telemetry/opentelemetry-java-contrib/pull/1032). </issue> <code> [start of propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py] 1 # Copyright The OpenTelemetry Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """ 16 AWS X-Ray Propagator 17 -------------------- 18 19 The **AWS X-Ray Propagator** provides a propagator that when used, adds a `trace 20 header`_ to outgoing traces that is compatible with the AWS X-Ray backend 21 service. This allows the trace context to be propagated when a trace spans 22 multiple AWS services. 23 24 The same propagator setup is used to extract a context sent by external systems 25 so that child span have the correct parent context. 26 27 **NOTE**: Because the parent context parsed from the ``X-Amzn-Trace-Id`` header 28 assumes the context is _not_ sampled by default, users should make sure to add 29 ``Sampled=1`` to their ``X-Amzn-Trace-Id`` headers so that the child spans are 30 sampled. 31 32 Usage 33 ----- 34 35 Use the provided AWS X-Ray Propagator to inject the necessary context into 36 traces sent to external systems. 37 38 This can be done by either setting this environment variable: 39 40 :: 41 42 export OTEL_PROPAGATORS = xray 43 44 45 Or by setting this propagator in your instrumented application: 46 47 .. code-block:: python 48 49 from opentelemetry.propagate import set_global_textmap 50 from opentelemetry.propagators.aws import AwsXRayPropagator 51 52 set_global_textmap(AwsXRayPropagator()) 53 54 API 55 --- 56 .. _trace header: https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader 57 """ 58 59 import logging 60 import typing 61 62 from opentelemetry import trace 63 from opentelemetry.context import Context 64 from opentelemetry.propagators.textmap import ( 65 CarrierT, 66 Getter, 67 Setter, 68 TextMapPropagator, 69 default_getter, 70 default_setter, 71 ) 72 73 TRACE_HEADER_KEY = "X-Amzn-Trace-Id" 74 KV_PAIR_DELIMITER = ";" 75 KEY_AND_VALUE_DELIMITER = "=" 76 77 TRACE_ID_KEY = "Root" 78 TRACE_ID_LENGTH = 35 79 TRACE_ID_VERSION = "1" 80 TRACE_ID_DELIMITER = "-" 81 TRACE_ID_DELIMITER_INDEX_1 = 1 82 TRACE_ID_DELIMITER_INDEX_2 = 10 83 TRACE_ID_FIRST_PART_LENGTH = 8 84 85 PARENT_ID_KEY = "Parent" 86 PARENT_ID_LENGTH = 16 87 88 SAMPLED_FLAG_KEY = "Sampled" 89 SAMPLED_FLAG_LENGTH = 1 90 IS_SAMPLED = "1" 91 NOT_SAMPLED = "0" 92 93 94 _logger = logging.getLogger(__name__) 95 96 97 class AwsParseTraceHeaderError(Exception): 98 def __init__(self, message): 99 super().__init__() 100 self.message = message 101 102 103 class AwsXRayPropagator(TextMapPropagator): 104 """Propagator for the AWS X-Ray Trace Header propagation protocol. 105 106 See: 107 https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader 108 """ 109 110 # AWS 111 112 def extract( 113 self, 114 carrier: CarrierT, 115 context: typing.Optional[Context] = None, 116 getter: Getter[CarrierT] = default_getter, 117 ) -> Context: 118 if context is None: 119 context = Context() 120 121 trace_header_list = getter.get(carrier, TRACE_HEADER_KEY) 122 123 if not trace_header_list or len(trace_header_list) != 1: 124 return context 125 126 trace_header = trace_header_list[0] 127 128 if not trace_header: 129 return context 130 131 try: 132 ( 133 trace_id, 134 span_id, 135 sampled, 136 ) = AwsXRayPropagator._extract_span_properties(trace_header) 137 except AwsParseTraceHeaderError as err: 138 _logger.debug(err.message) 139 return context 140 141 options = 0 142 if sampled: 143 options |= trace.TraceFlags.SAMPLED 144 145 span_context = trace.SpanContext( 146 trace_id=trace_id, 147 span_id=span_id, 148 is_remote=True, 149 trace_flags=trace.TraceFlags(options), 150 trace_state=trace.TraceState(), 151 ) 152 153 if not span_context.is_valid: 154 _logger.debug( 155 "Invalid Span Extracted. Inserting INVALID span into provided context." 156 ) 157 return context 158 159 return trace.set_span_in_context( 160 trace.NonRecordingSpan(span_context), context=context 161 ) 162 163 @staticmethod 164 def _extract_span_properties(trace_header): 165 trace_id = trace.INVALID_TRACE_ID 166 span_id = trace.INVALID_SPAN_ID 167 sampled = False 168 169 for kv_pair_str in trace_header.split(KV_PAIR_DELIMITER): 170 try: 171 key_str, value_str = kv_pair_str.split(KEY_AND_VALUE_DELIMITER) 172 key, value = key_str.strip(), value_str.strip() 173 except ValueError as ex: 174 raise AwsParseTraceHeaderError( 175 ( 176 "Error parsing X-Ray trace header. Invalid key value pair: %s. Returning INVALID span context.", 177 kv_pair_str, 178 ) 179 ) from ex 180 if key == TRACE_ID_KEY: 181 if not AwsXRayPropagator._validate_trace_id(value): 182 raise AwsParseTraceHeaderError( 183 ( 184 "Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.", 185 TRACE_HEADER_KEY, 186 trace_header, 187 ) 188 ) 189 190 try: 191 trace_id = AwsXRayPropagator._parse_trace_id(value) 192 except ValueError as ex: 193 raise AwsParseTraceHeaderError( 194 ( 195 "Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.", 196 TRACE_HEADER_KEY, 197 trace_header, 198 ) 199 ) from ex 200 elif key == PARENT_ID_KEY: 201 if not AwsXRayPropagator._validate_span_id(value): 202 raise AwsParseTraceHeaderError( 203 ( 204 "Invalid ParentId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.", 205 TRACE_HEADER_KEY, 206 trace_header, 207 ) 208 ) 209 210 try: 211 span_id = AwsXRayPropagator._parse_span_id(value) 212 except ValueError as ex: 213 raise AwsParseTraceHeaderError( 214 ( 215 "Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.", 216 TRACE_HEADER_KEY, 217 trace_header, 218 ) 219 ) from ex 220 elif key == SAMPLED_FLAG_KEY: 221 if not AwsXRayPropagator._validate_sampled_flag(value): 222 raise AwsParseTraceHeaderError( 223 ( 224 "Invalid Sampling flag in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.", 225 TRACE_HEADER_KEY, 226 trace_header, 227 ) 228 ) 229 230 sampled = AwsXRayPropagator._parse_sampled_flag(value) 231 232 return trace_id, span_id, sampled 233 234 @staticmethod 235 def _validate_trace_id(trace_id_str): 236 return ( 237 len(trace_id_str) == TRACE_ID_LENGTH 238 and trace_id_str.startswith(TRACE_ID_VERSION) 239 and trace_id_str[TRACE_ID_DELIMITER_INDEX_1] == TRACE_ID_DELIMITER 240 and trace_id_str[TRACE_ID_DELIMITER_INDEX_2] == TRACE_ID_DELIMITER 241 ) 242 243 @staticmethod 244 def _parse_trace_id(trace_id_str): 245 timestamp_subset = trace_id_str[ 246 TRACE_ID_DELIMITER_INDEX_1 + 1 : TRACE_ID_DELIMITER_INDEX_2 247 ] 248 unique_id_subset = trace_id_str[ 249 TRACE_ID_DELIMITER_INDEX_2 + 1 : TRACE_ID_LENGTH 250 ] 251 return int(timestamp_subset + unique_id_subset, 16) 252 253 @staticmethod 254 def _validate_span_id(span_id_str): 255 return len(span_id_str) == PARENT_ID_LENGTH 256 257 @staticmethod 258 def _parse_span_id(span_id_str): 259 return int(span_id_str, 16) 260 261 @staticmethod 262 def _validate_sampled_flag(sampled_flag_str): 263 return len( 264 sampled_flag_str 265 ) == SAMPLED_FLAG_LENGTH and sampled_flag_str in ( 266 IS_SAMPLED, 267 NOT_SAMPLED, 268 ) 269 270 @staticmethod 271 def _parse_sampled_flag(sampled_flag_str): 272 return sampled_flag_str[0] == IS_SAMPLED 273 274 def inject( 275 self, 276 carrier: CarrierT, 277 context: typing.Optional[Context] = None, 278 setter: Setter[CarrierT] = default_setter, 279 ) -> None: 280 span = trace.get_current_span(context=context) 281 282 span_context = span.get_span_context() 283 if not span_context.is_valid: 284 return 285 286 otel_trace_id = f"{span_context.trace_id:032x}" 287 xray_trace_id = TRACE_ID_DELIMITER.join( 288 [ 289 TRACE_ID_VERSION, 290 otel_trace_id[:TRACE_ID_FIRST_PART_LENGTH], 291 otel_trace_id[TRACE_ID_FIRST_PART_LENGTH:], 292 ] 293 ) 294 295 parent_id = f"{span_context.span_id:016x}" 296 297 sampling_flag = ( 298 IS_SAMPLED 299 if span_context.trace_flags & trace.TraceFlags.SAMPLED 300 else NOT_SAMPLED 301 ) 302 303 # TODO: Add OT trace state to the X-Ray trace header 304 305 trace_header = KV_PAIR_DELIMITER.join( 306 [ 307 KEY_AND_VALUE_DELIMITER.join([key, value]) 308 for key, value in [ 309 (TRACE_ID_KEY, xray_trace_id), 310 (PARENT_ID_KEY, parent_id), 311 (SAMPLED_FLAG_KEY, sampling_flag), 312 ] 313 ] 314 ) 315 316 setter.set( 317 carrier, 318 TRACE_HEADER_KEY, 319 trace_header, 320 ) 321 322 @property 323 def fields(self): 324 """Returns a set with the fields set in `inject`.""" 325 326 return {TRACE_HEADER_KEY} 327 [end of propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py b/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py --- a/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py +++ b/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py @@ -58,6 +58,7 @@ import logging import typing +from os import environ from opentelemetry import trace from opentelemetry.context import Context @@ -71,6 +72,7 @@ ) TRACE_HEADER_KEY = "X-Amzn-Trace-Id" +AWS_TRACE_HEADER_ENV_KEY = "_X_AMZN_TRACE_ID" KV_PAIR_DELIMITER = ";" KEY_AND_VALUE_DELIMITER = "=" @@ -324,3 +326,33 @@ """Returns a set with the fields set in `inject`.""" return {TRACE_HEADER_KEY} + + +class AwsXrayLambdaPropagator(AwsXRayPropagator): + """Implementation of the AWS X-Ray Trace Header propagation protocol but + with special handling for Lambda's ``_X_AMZN_TRACE_ID` environment + variable. + """ + + def extract( + self, + carrier: CarrierT, + context: typing.Optional[Context] = None, + getter: Getter[CarrierT] = default_getter, + ) -> Context: + + xray_context = super().extract(carrier, context=context, getter=getter) + + if trace.get_current_span(context=context).get_span_context().is_valid: + return xray_context + + trace_header = environ.get(AWS_TRACE_HEADER_ENV_KEY) + + if trace_header is None: + return xray_context + + return super().extract( + {TRACE_HEADER_KEY: trace_header}, + context=xray_context, + getter=getter, + )
{"golden_diff": "diff --git a/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py b/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py\n--- a/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py\n+++ b/propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py\n@@ -58,6 +58,7 @@\n \n import logging\n import typing\n+from os import environ\n \n from opentelemetry import trace\n from opentelemetry.context import Context\n@@ -71,6 +72,7 @@\n )\n \n TRACE_HEADER_KEY = \"X-Amzn-Trace-Id\"\n+AWS_TRACE_HEADER_ENV_KEY = \"_X_AMZN_TRACE_ID\"\n KV_PAIR_DELIMITER = \";\"\n KEY_AND_VALUE_DELIMITER = \"=\"\n \n@@ -324,3 +326,33 @@\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n \n return {TRACE_HEADER_KEY}\n+\n+\n+class AwsXrayLambdaPropagator(AwsXRayPropagator):\n+ \"\"\"Implementation of the AWS X-Ray Trace Header propagation protocol but\n+ with special handling for Lambda's ``_X_AMZN_TRACE_ID` environment\n+ variable.\n+ \"\"\"\n+\n+ def extract(\n+ self,\n+ carrier: CarrierT,\n+ context: typing.Optional[Context] = None,\n+ getter: Getter[CarrierT] = default_getter,\n+ ) -> Context:\n+\n+ xray_context = super().extract(carrier, context=context, getter=getter)\n+\n+ if trace.get_current_span(context=context).get_span_context().is_valid:\n+ return xray_context\n+\n+ trace_header = environ.get(AWS_TRACE_HEADER_ENV_KEY)\n+\n+ if trace_header is None:\n+ return xray_context\n+\n+ return super().extract(\n+ {TRACE_HEADER_KEY: trace_header},\n+ context=xray_context,\n+ getter=getter,\n+ )\n", "issue": "Add xray propagators that prioritizes xray environment variable\nSimilar to [this](https://github.com/open-telemetry/opentelemetry-java-contrib/pull/1032).\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAWS X-Ray Propagator\n--------------------\n\nThe **AWS X-Ray Propagator** provides a propagator that when used, adds a `trace\nheader`_ to outgoing traces that is compatible with the AWS X-Ray backend\nservice. This allows the trace context to be propagated when a trace spans\nmultiple AWS services.\n\nThe same propagator setup is used to extract a context sent by external systems\nso that child span have the correct parent context.\n\n**NOTE**: Because the parent context parsed from the ``X-Amzn-Trace-Id`` header\nassumes the context is _not_ sampled by default, users should make sure to add\n``Sampled=1`` to their ``X-Amzn-Trace-Id`` headers so that the child spans are\nsampled.\n\nUsage\n-----\n\nUse the provided AWS X-Ray Propagator to inject the necessary context into\ntraces sent to external systems.\n\nThis can be done by either setting this environment variable:\n\n::\n\n export OTEL_PROPAGATORS = xray\n\n\nOr by setting this propagator in your instrumented application:\n\n.. code-block:: python\n\n from opentelemetry.propagate import set_global_textmap\n from opentelemetry.propagators.aws import AwsXRayPropagator\n\n set_global_textmap(AwsXRayPropagator())\n\nAPI\n---\n.. _trace header: https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader\n\"\"\"\n\nimport logging\nimport typing\n\nfrom opentelemetry import trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.propagators.textmap import (\n CarrierT,\n Getter,\n Setter,\n TextMapPropagator,\n default_getter,\n default_setter,\n)\n\nTRACE_HEADER_KEY = \"X-Amzn-Trace-Id\"\nKV_PAIR_DELIMITER = \";\"\nKEY_AND_VALUE_DELIMITER = \"=\"\n\nTRACE_ID_KEY = \"Root\"\nTRACE_ID_LENGTH = 35\nTRACE_ID_VERSION = \"1\"\nTRACE_ID_DELIMITER = \"-\"\nTRACE_ID_DELIMITER_INDEX_1 = 1\nTRACE_ID_DELIMITER_INDEX_2 = 10\nTRACE_ID_FIRST_PART_LENGTH = 8\n\nPARENT_ID_KEY = \"Parent\"\nPARENT_ID_LENGTH = 16\n\nSAMPLED_FLAG_KEY = \"Sampled\"\nSAMPLED_FLAG_LENGTH = 1\nIS_SAMPLED = \"1\"\nNOT_SAMPLED = \"0\"\n\n\n_logger = logging.getLogger(__name__)\n\n\nclass AwsParseTraceHeaderError(Exception):\n def __init__(self, message):\n super().__init__()\n self.message = message\n\n\nclass AwsXRayPropagator(TextMapPropagator):\n \"\"\"Propagator for the AWS X-Ray Trace Header propagation protocol.\n\n See:\n https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader\n \"\"\"\n\n # AWS\n\n def extract(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n getter: Getter[CarrierT] = default_getter,\n ) -> Context:\n if context is None:\n context = Context()\n\n trace_header_list = getter.get(carrier, TRACE_HEADER_KEY)\n\n if not trace_header_list or len(trace_header_list) != 1:\n return context\n\n trace_header = trace_header_list[0]\n\n if not trace_header:\n return context\n\n try:\n (\n trace_id,\n span_id,\n sampled,\n ) = AwsXRayPropagator._extract_span_properties(trace_header)\n except AwsParseTraceHeaderError as err:\n _logger.debug(err.message)\n return context\n\n options = 0\n if sampled:\n options |= trace.TraceFlags.SAMPLED\n\n span_context = trace.SpanContext(\n trace_id=trace_id,\n span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n\n if not span_context.is_valid:\n _logger.debug(\n \"Invalid Span Extracted. Inserting INVALID span into provided context.\"\n )\n return context\n\n return trace.set_span_in_context(\n trace.NonRecordingSpan(span_context), context=context\n )\n\n @staticmethod\n def _extract_span_properties(trace_header):\n trace_id = trace.INVALID_TRACE_ID\n span_id = trace.INVALID_SPAN_ID\n sampled = False\n\n for kv_pair_str in trace_header.split(KV_PAIR_DELIMITER):\n try:\n key_str, value_str = kv_pair_str.split(KEY_AND_VALUE_DELIMITER)\n key, value = key_str.strip(), value_str.strip()\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Error parsing X-Ray trace header. Invalid key value pair: %s. Returning INVALID span context.\",\n kv_pair_str,\n )\n ) from ex\n if key == TRACE_ID_KEY:\n if not AwsXRayPropagator._validate_trace_id(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n try:\n trace_id = AwsXRayPropagator._parse_trace_id(value)\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n ) from ex\n elif key == PARENT_ID_KEY:\n if not AwsXRayPropagator._validate_span_id(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid ParentId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n try:\n span_id = AwsXRayPropagator._parse_span_id(value)\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n ) from ex\n elif key == SAMPLED_FLAG_KEY:\n if not AwsXRayPropagator._validate_sampled_flag(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid Sampling flag in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n sampled = AwsXRayPropagator._parse_sampled_flag(value)\n\n return trace_id, span_id, sampled\n\n @staticmethod\n def _validate_trace_id(trace_id_str):\n return (\n len(trace_id_str) == TRACE_ID_LENGTH\n and trace_id_str.startswith(TRACE_ID_VERSION)\n and trace_id_str[TRACE_ID_DELIMITER_INDEX_1] == TRACE_ID_DELIMITER\n and trace_id_str[TRACE_ID_DELIMITER_INDEX_2] == TRACE_ID_DELIMITER\n )\n\n @staticmethod\n def _parse_trace_id(trace_id_str):\n timestamp_subset = trace_id_str[\n TRACE_ID_DELIMITER_INDEX_1 + 1 : TRACE_ID_DELIMITER_INDEX_2\n ]\n unique_id_subset = trace_id_str[\n TRACE_ID_DELIMITER_INDEX_2 + 1 : TRACE_ID_LENGTH\n ]\n return int(timestamp_subset + unique_id_subset, 16)\n\n @staticmethod\n def _validate_span_id(span_id_str):\n return len(span_id_str) == PARENT_ID_LENGTH\n\n @staticmethod\n def _parse_span_id(span_id_str):\n return int(span_id_str, 16)\n\n @staticmethod\n def _validate_sampled_flag(sampled_flag_str):\n return len(\n sampled_flag_str\n ) == SAMPLED_FLAG_LENGTH and sampled_flag_str in (\n IS_SAMPLED,\n NOT_SAMPLED,\n )\n\n @staticmethod\n def _parse_sampled_flag(sampled_flag_str):\n return sampled_flag_str[0] == IS_SAMPLED\n\n def inject(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n setter: Setter[CarrierT] = default_setter,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n span_context = span.get_span_context()\n if not span_context.is_valid:\n return\n\n otel_trace_id = f\"{span_context.trace_id:032x}\"\n xray_trace_id = TRACE_ID_DELIMITER.join(\n [\n TRACE_ID_VERSION,\n otel_trace_id[:TRACE_ID_FIRST_PART_LENGTH],\n otel_trace_id[TRACE_ID_FIRST_PART_LENGTH:],\n ]\n )\n\n parent_id = f\"{span_context.span_id:016x}\"\n\n sampling_flag = (\n IS_SAMPLED\n if span_context.trace_flags & trace.TraceFlags.SAMPLED\n else NOT_SAMPLED\n )\n\n # TODO: Add OT trace state to the X-Ray trace header\n\n trace_header = KV_PAIR_DELIMITER.join(\n [\n KEY_AND_VALUE_DELIMITER.join([key, value])\n for key, value in [\n (TRACE_ID_KEY, xray_trace_id),\n (PARENT_ID_KEY, parent_id),\n (SAMPLED_FLAG_KEY, sampling_flag),\n ]\n ]\n )\n\n setter.set(\n carrier,\n TRACE_HEADER_KEY,\n trace_header,\n )\n\n @property\n def fields(self):\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n\n return {TRACE_HEADER_KEY}\n", "path": "propagator/opentelemetry-propagator-aws-xray/src/opentelemetry/propagators/aws/aws_xray_propagator.py"}]}
3,697
484
gh_patches_debug_51489
rasdani/github-patches
git_diff
kivy__kivy-4728
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Error: fromstring() in core/image/img_pil.py Platform: Linux (OpenSuse, Ubuntu) [INFO ] [Kivy ] v1.9.1 [INFO ] [Python ] v2.7.12 (default, Jul 01 2016, 15:36:53) [GCC] Error: File "/usr/lib64/python2.7/site-packages/kivy/core/image/img_pil.py", line 105, in save image = PILImage.fromstring(fmt.upper(), (width, height), pixels) File "/usr/lib64/python2.7/site-packages/PIL/Image.py", line 2063, in fromstring "Please call frombytes() instead.") Exception: fromstring() has been removed. Please call frombytes() instead. In File "/usr/lib64/python2.7/site-packages/kivy/core/image/img_pil.py" Line 105: image = PILImage.fromstring(fmt.upper(), (width, height), pixels) use... image = PILImage.frombytes(fmt.upper(), (width, height), pixels) </issue> <code> [start of kivy/core/image/img_pil.py] 1 ''' 2 PIL: PIL image loader 3 ''' 4 5 __all__ = ('ImageLoaderPIL', ) 6 7 try: 8 from PIL import Image as PILImage 9 except: 10 import Image as PILImage 11 12 from kivy.logger import Logger 13 from kivy.core.image import ImageLoaderBase, ImageData, ImageLoader 14 15 16 class ImageLoaderPIL(ImageLoaderBase): 17 '''Image loader based on the PIL library. 18 19 .. versionadded:: 1.0.8 20 21 Support for GIF animation added. 22 23 Gif animation has a lot of issues(transparency/color depths... etc). 24 In order to keep it simple, what is implemented here is what is 25 natively supported by the PIL library. 26 27 As a general rule, try to use gifs that have no transparency. 28 Gif's with transparency will work but be prepared for some 29 artifacts until transparency support is improved. 30 31 ''' 32 33 @staticmethod 34 def can_save(): 35 return True 36 37 @staticmethod 38 def can_load_memory(): 39 return True 40 41 @staticmethod 42 def extensions(): 43 '''Return accepted extensions for this loader''' 44 # See http://www.pythonware.com/library/pil/handbook/index.htm 45 return ('bmp', 'bufr', 'cur', 'dcx', 'fits', 'fl', 'fpx', 'gbr', 46 'gd', 'gif', 'grib', 'hdf5', 'ico', 'im', 'imt', 'iptc', 47 'jpeg', 'jpg', 'jpe', 'mcidas', 'mic', 'mpeg', 'msp', 48 'pcd', 'pcx', 'pixar', 'png', 'ppm', 'psd', 'sgi', 49 'spider', 'tga', 'tiff', 'wal', 'wmf', 'xbm', 'xpm', 50 'xv') 51 52 def _img_correct(self, _img_tmp): 53 '''Convert image to the correct format and orientation. 54 ''' 55 # image loader work only with rgb/rgba image 56 if _img_tmp.mode.lower() not in ('rgb', 'rgba'): 57 try: 58 imc = _img_tmp.convert('RGBA') 59 except: 60 Logger.warning( 61 'Image: Unable to convert image to rgba (was %s)' % 62 (_img_tmp.mode.lower())) 63 raise 64 _img_tmp = imc 65 66 return _img_tmp 67 68 def _img_read(self, im): 69 '''Read images from an animated file. 70 ''' 71 im.seek(0) 72 73 # Read all images inside 74 try: 75 img_ol = None 76 while True: 77 img_tmp = im 78 img_tmp = self._img_correct(img_tmp) 79 if img_ol and (hasattr(im, 'dispose') and not im.dispose): 80 # paste new frame over old so as to handle 81 # transparency properly 82 img_ol.paste(img_tmp, (0, 0), img_tmp) 83 img_tmp = img_ol 84 img_ol = img_tmp 85 yield ImageData(img_tmp.size[0], img_tmp.size[1], 86 img_tmp.mode.lower(), img_tmp.tobytes()) 87 im.seek(im.tell() + 1) 88 except EOFError: 89 pass 90 91 def load(self, filename): 92 try: 93 im = PILImage.open(filename) 94 except: 95 Logger.warning('Image: Unable to load image <%s>' % filename) 96 raise 97 # update internals 98 if not self._inline: 99 self.filename = filename 100 # returns an array of type ImageData len 1 if not a sequence image 101 return list(self._img_read(im)) 102 103 @staticmethod 104 def save(filename, width, height, fmt, pixels, flipped=False): 105 image = PILImage.fromstring(fmt.upper(), (width, height), pixels) 106 if flipped: 107 image = image.transpose(PILImage.FLIP_TOP_BOTTOM) 108 image.save(filename) 109 return True 110 111 112 # register 113 ImageLoader.register(ImageLoaderPIL) 114 [end of kivy/core/image/img_pil.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/kivy/core/image/img_pil.py b/kivy/core/image/img_pil.py --- a/kivy/core/image/img_pil.py +++ b/kivy/core/image/img_pil.py @@ -102,7 +102,8 @@ @staticmethod def save(filename, width, height, fmt, pixels, flipped=False): - image = PILImage.fromstring(fmt.upper(), (width, height), pixels) + image = PILImage.frombytes(fmt.upper(), (width, height), pixels) + if flipped: image = image.transpose(PILImage.FLIP_TOP_BOTTOM) image.save(filename)
{"golden_diff": "diff --git a/kivy/core/image/img_pil.py b/kivy/core/image/img_pil.py\n--- a/kivy/core/image/img_pil.py\n+++ b/kivy/core/image/img_pil.py\n@@ -102,7 +102,8 @@\n \n @staticmethod\n def save(filename, width, height, fmt, pixels, flipped=False):\n- image = PILImage.fromstring(fmt.upper(), (width, height), pixels)\n+ image = PILImage.frombytes(fmt.upper(), (width, height), pixels)\n+\n if flipped:\n image = image.transpose(PILImage.FLIP_TOP_BOTTOM)\n image.save(filename)\n", "issue": "Error: fromstring() in core/image/img_pil.py\nPlatform: Linux (OpenSuse, Ubuntu)\r\n\r\n[INFO ] [Kivy ] v1.9.1\r\n[INFO ] [Python ] v2.7.12 (default, Jul 01 2016, 15:36:53) [GCC]\r\n\r\nError:\r\n File \"/usr/lib64/python2.7/site-packages/kivy/core/image/img_pil.py\", line 105, in save\r\n image = PILImage.fromstring(fmt.upper(), (width, height), pixels)\r\n File \"/usr/lib64/python2.7/site-packages/PIL/Image.py\", line 2063, in fromstring\r\n \"Please call frombytes() instead.\")\r\nException: fromstring() has been removed. Please call frombytes() instead.\r\n\r\n\r\nIn File \"/usr/lib64/python2.7/site-packages/kivy/core/image/img_pil.py\"\r\nLine 105:\r\nimage = PILImage.fromstring(fmt.upper(), (width, height), pixels)\r\n\r\nuse...\r\n\r\nimage = PILImage.frombytes(fmt.upper(), (width, height), pixels)\n", "before_files": [{"content": "'''\nPIL: PIL image loader\n'''\n\n__all__ = ('ImageLoaderPIL', )\n\ntry:\n from PIL import Image as PILImage\nexcept:\n import Image as PILImage\n\nfrom kivy.logger import Logger\nfrom kivy.core.image import ImageLoaderBase, ImageData, ImageLoader\n\n\nclass ImageLoaderPIL(ImageLoaderBase):\n '''Image loader based on the PIL library.\n\n .. versionadded:: 1.0.8\n\n Support for GIF animation added.\n\n Gif animation has a lot of issues(transparency/color depths... etc).\n In order to keep it simple, what is implemented here is what is\n natively supported by the PIL library.\n\n As a general rule, try to use gifs that have no transparency.\n Gif's with transparency will work but be prepared for some\n artifacts until transparency support is improved.\n\n '''\n\n @staticmethod\n def can_save():\n return True\n\n @staticmethod\n def can_load_memory():\n return True\n\n @staticmethod\n def extensions():\n '''Return accepted extensions for this loader'''\n # See http://www.pythonware.com/library/pil/handbook/index.htm\n return ('bmp', 'bufr', 'cur', 'dcx', 'fits', 'fl', 'fpx', 'gbr',\n 'gd', 'gif', 'grib', 'hdf5', 'ico', 'im', 'imt', 'iptc',\n 'jpeg', 'jpg', 'jpe', 'mcidas', 'mic', 'mpeg', 'msp',\n 'pcd', 'pcx', 'pixar', 'png', 'ppm', 'psd', 'sgi',\n 'spider', 'tga', 'tiff', 'wal', 'wmf', 'xbm', 'xpm',\n 'xv')\n\n def _img_correct(self, _img_tmp):\n '''Convert image to the correct format and orientation.\n '''\n # image loader work only with rgb/rgba image\n if _img_tmp.mode.lower() not in ('rgb', 'rgba'):\n try:\n imc = _img_tmp.convert('RGBA')\n except:\n Logger.warning(\n 'Image: Unable to convert image to rgba (was %s)' %\n (_img_tmp.mode.lower()))\n raise\n _img_tmp = imc\n\n return _img_tmp\n\n def _img_read(self, im):\n '''Read images from an animated file.\n '''\n im.seek(0)\n\n # Read all images inside\n try:\n img_ol = None\n while True:\n img_tmp = im\n img_tmp = self._img_correct(img_tmp)\n if img_ol and (hasattr(im, 'dispose') and not im.dispose):\n # paste new frame over old so as to handle\n # transparency properly\n img_ol.paste(img_tmp, (0, 0), img_tmp)\n img_tmp = img_ol\n img_ol = img_tmp\n yield ImageData(img_tmp.size[0], img_tmp.size[1],\n img_tmp.mode.lower(), img_tmp.tobytes())\n im.seek(im.tell() + 1)\n except EOFError:\n pass\n\n def load(self, filename):\n try:\n im = PILImage.open(filename)\n except:\n Logger.warning('Image: Unable to load image <%s>' % filename)\n raise\n # update internals\n if not self._inline:\n self.filename = filename\n # returns an array of type ImageData len 1 if not a sequence image\n return list(self._img_read(im))\n\n @staticmethod\n def save(filename, width, height, fmt, pixels, flipped=False):\n image = PILImage.fromstring(fmt.upper(), (width, height), pixels)\n if flipped:\n image = image.transpose(PILImage.FLIP_TOP_BOTTOM)\n image.save(filename)\n return True\n\n\n# register\nImageLoader.register(ImageLoaderPIL)\n", "path": "kivy/core/image/img_pil.py"}]}
1,883
139
gh_patches_debug_40407
rasdani/github-patches
git_diff
Theano__Theano-5398
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add `conv2d_transpose` method to `theano.tensor.nnet.abstract_conv` ... and make it available in `theano.tensor.nnet`. The semantics and name of the parameters was discussed IRL with @vdumoulin and @fvisin . This is a follow-up to #3681 and #3872. </issue> <code> [start of theano/tensor/nnet/__init__.py] 1 from __future__ import absolute_import, print_function, division 2 from .nnet import ( 3 CrossentropyCategorical1Hot, CrossentropyCategorical1HotGrad, 4 CrossentropySoftmax1HotWithBiasDx, CrossentropySoftmaxArgmax1HotWithBias, 5 LogSoftmax, Prepend_scalar_constant_to_each_row, 6 Prepend_scalar_to_each_row, Softmax, 7 SoftmaxGrad, SoftmaxWithBias, binary_crossentropy, 8 categorical_crossentropy, crossentropy_categorical_1hot, 9 crossentropy_categorical_1hot_grad, crossentropy_softmax_1hot, 10 crossentropy_softmax_1hot_with_bias, 11 crossentropy_softmax_1hot_with_bias_dx, 12 crossentropy_softmax_argmax_1hot_with_bias, 13 crossentropy_softmax_max_and_argmax_1hot, 14 crossentropy_softmax_max_and_argmax_1hot_with_bias, 15 crossentropy_to_crossentropy_with_softmax, 16 crossentropy_to_crossentropy_with_softmax_with_bias, 17 graph_merge_softmax_with_crossentropy_softmax, h_softmax, 18 logsoftmax, logsoftmax_op, prepend_0_to_each_row, prepend_1_to_each_row, 19 prepend_scalar_to_each_row, relu, softmax, softmax_grad, softmax_graph, 20 softmax_op, softmax_simplifier, softmax_with_bias, elu, 21 confusion_matrix) 22 from . import opt 23 from .conv import ConvOp 24 from .Conv3D import * 25 from .ConvGrad3D import * 26 from .ConvTransp3D import * 27 from .sigm import (softplus, sigmoid, sigmoid_inplace, 28 scalar_sigmoid, ultra_fast_sigmoid, 29 hard_sigmoid) 30 from .bn import batch_normalization 31 32 33 import warnings 34 from .abstract_conv import conv2d as abstract_conv2d 35 from .abstract_conv import conv3d 36 37 38 def conv2d(input, filters, input_shape=None, filter_shape=None, 39 border_mode='valid', subsample=(1, 1), filter_flip=True, 40 image_shape=None, filter_dilation=(1, 1), **kwargs): 41 """ 42 This function will build the symbolic graph for convolving a mini-batch of a 43 stack of 2D inputs with a set of 2D filters. The implementation is modelled 44 after Convolutional Neural Networks (CNN). 45 46 47 Parameters 48 ---------- 49 input: symbolic 4D tensor 50 Mini-batch of feature map stacks, of shape 51 (batch size, input channels, input rows, input columns). 52 See the optional parameter ``input_shape``. 53 54 filters: symbolic 4D tensor 55 Set of filters used in CNN layer of shape 56 (output channels, input channels, filter rows, filter columns). 57 See the optional parameter ``filter_shape``. 58 59 input_shape: None, tuple/list of len 4 of int or Constant variable 60 The shape of the input parameter. 61 Optional, possibly used to choose an optimal implementation. 62 You can give ``None`` for any element of the list to specify that this 63 element is not known at compile time. 64 65 filter_shape: None, tuple/list of len 4 of int or Constant variable 66 The shape of the filters parameter. 67 Optional, possibly used to choose an optimal implementation. 68 You can give ``None`` for any element of the list to specify that this 69 element is not known at compile time. 70 71 border_mode: str, int or tuple of two int 72 Either of the following: 73 74 ``'valid'``: apply filter wherever it completely overlaps with the 75 input. Generates output of shape: input shape - filter shape + 1 76 ``'full'``: apply filter wherever it partly overlaps with the input. 77 Generates output of shape: input shape + filter shape - 1 78 ``'half'``: pad input with a symmetric border of ``filter rows // 2`` 79 rows and ``filter columns // 2`` columns, then perform a valid 80 convolution. For filters with an odd number of rows and columns, this 81 leads to the output shape being equal to the input shape. 82 ``int``: pad input with a symmetric border of zeros of the given 83 width, then perform a valid convolution. 84 ``(int1, int2)``: pad input with a symmetric border of ``int1`` rows 85 and ``int2`` columns, then perform a valid convolution. 86 87 subsample: tuple of len 2 88 Factor by which to subsample the output. 89 Also called strides elsewhere. 90 91 filter_flip: bool 92 If ``True``, will flip the filter rows and columns 93 before sliding them over the input. This operation is normally referred 94 to as a convolution, and this is the default. If ``False``, the filters 95 are not flipped and the operation is referred to as a cross-correlation. 96 97 image_shape: None, tuple/list of len 4 of int or Constant variable 98 Deprecated alias for input_shape. 99 100 filter_dilation: tuple of len 2 101 Factor by which to subsample (stride) the input. 102 Also called dilation elsewhere. 103 104 kwargs: Any other keyword arguments are accepted for backwards 105 compatibility, but will be ignored. 106 107 Returns 108 ------- 109 Symbolic 4D tensor 110 Set of feature maps generated by convolutional layer. Tensor is 111 of shape (batch size, output channels, output rows, output columns) 112 113 Notes 114 ----- 115 If cuDNN is available, it will be used on the 116 GPU. Otherwise, it is the *CorrMM* convolution that will be used 117 "caffe style convolution". 118 119 This is only supported in Theano 0.8 or the development 120 version until it is released. 121 122 The parameter filter_dilation is an implementation of `dilated 123 convolution <https://arxiv.org/pdf/1511.07122v3.pdf>`_. 124 125 """ 126 127 if 'imshp_logical' in kwargs or 'kshp_logical' in kwargs: 128 raise ValueError( 129 "Keyword arguments 'imshp_logical' and 'kshp_logical' for conv2d " 130 "are not supported anymore (and have not been a reliable way to " 131 "perform upsampling). That feature is still available by calling " 132 "theano.tensor.nnet.conv.conv2d() for the time being.") 133 if len(kwargs.keys()) > 0: 134 warnings.warn(str(kwargs.keys()) + 135 " are now deprecated in " 136 "`tensor.nnet.abstract_conv.conv2d` interface" 137 " and will be ignored.", 138 stacklevel=2) 139 140 if image_shape is not None: 141 warnings.warn("The `image_shape` keyword argument to " 142 "`tensor.nnet.conv2d` is deprecated, it has been " 143 "renamed to `input_shape`.", 144 stacklevel=2) 145 if input_shape is None: 146 input_shape = image_shape 147 else: 148 raise ValueError("input_shape and image_shape should not" 149 " be provided at the same time.") 150 151 return abstract_conv2d(input, filters, input_shape, filter_shape, 152 border_mode, subsample, filter_flip, 153 filter_dilation) 154 [end of theano/tensor/nnet/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/theano/tensor/nnet/__init__.py b/theano/tensor/nnet/__init__.py --- a/theano/tensor/nnet/__init__.py +++ b/theano/tensor/nnet/__init__.py @@ -32,6 +32,7 @@ import warnings from .abstract_conv import conv2d as abstract_conv2d +from .abstract_conv import conv2d_grad_wrt_inputs from .abstract_conv import conv3d @@ -151,3 +152,86 @@ return abstract_conv2d(input, filters, input_shape, filter_shape, border_mode, subsample, filter_flip, filter_dilation) + + +def conv2d_transpose(input, filters, output_shape, filter_shape=None, + border_mode='valid', input_dilation=(1, 1), + filter_flip=True, filter_dilation=(1, 1)): + """ + This function will build the symbolic graph for applying a transposed + convolution over a mini-batch of a stack of 2D inputs with a set of 2D + filters. + + + Parameters + ---------- + input: symbolic 4D tensor + Mini-batch of feature map stacks, of shape + (batch size, input channels, input rows, input columns). + See the optional parameter ``input_shape``. + + filters: symbolic 4D tensor + Set of filters used in CNN layer of shape + (input channels, output channels, filter rows, filter columns). + See the optional parameter ``filter_shape``. **Note: the order for + ``output_channels`` and ``input_channels`` is reversed with respect to + ``conv2d``.** + + output_shape: tuple/list of len 4 of int or Constant variable + The shape of the output of ``conv2d_transpose``. The last two elements + are allowed to be ``tensor.scalar`` variables. + + filter_shape: None, tuple/list of len 4 of int or Constant variable + The shape of the filters parameter. + Optional, possibly used to choose an optimal implementation. + You can give ``None`` for any element of the list to specify that this + element is not known at compile time. + + border_mode: str, int or tuple of two int + Refers to the ``border_mode`` argument of the corresponding forward + (non-transposed) convolution. See the argument description in + ``conv2d``. What was ``padding`` for the forward convolution means + ``cropping`` the output of the transposed one. ``valid`` corresponds to + no cropping, ``full`` to maximal cropping. + + input_dilation: tuple of len 2 + Corresponds to ``subsample`` (also called strides elsewhere) in the + non-transposed convolution. + + filter_flip: bool + If ``True``, will flip the filter rows and columns + before sliding them over the input. This operation is normally referred + to as a convolution, and this is the default. If ``False``, the filters + are not flipped and the operation is referred to as a cross-correlation. + + filter_dilation: tuple of len 2 + Factor by which to subsample (stride) the input. + Also called dilation elsewhere. + + Returns + ------- + Symbolic 4D tensor + Set of feature maps generated by the transposed convolution. Tensor is + of shape (batch size, output channels, output rows, output columns) + + Notes + ----- + If cuDNN is available, it will be used on the + GPU. Otherwise, it is the *CorrMM* convolution that will be used + "caffe style convolution". + + This operation is also sometimes called "deconvolution". + + The parameter filter_dilation is an implementation of `dilated + convolution <https://arxiv.org/pdf/1511.07122v3.pdf>`_. + + """ + + return conv2d_grad_wrt_inputs(output_grad=input, + filters=filters, + input_shape=output_shape, + filter_shape=filter_shape, + border_mode=border_mode, + subsample=input_dilation, + filter_flip=filter_flip, + filter_dilation=filter_dilation)
{"golden_diff": "diff --git a/theano/tensor/nnet/__init__.py b/theano/tensor/nnet/__init__.py\n--- a/theano/tensor/nnet/__init__.py\n+++ b/theano/tensor/nnet/__init__.py\n@@ -32,6 +32,7 @@\n \n import warnings\n from .abstract_conv import conv2d as abstract_conv2d\n+from .abstract_conv import conv2d_grad_wrt_inputs\n from .abstract_conv import conv3d\n \n \n@@ -151,3 +152,86 @@\n return abstract_conv2d(input, filters, input_shape, filter_shape,\n border_mode, subsample, filter_flip,\n filter_dilation)\n+\n+\n+def conv2d_transpose(input, filters, output_shape, filter_shape=None,\n+ border_mode='valid', input_dilation=(1, 1),\n+ filter_flip=True, filter_dilation=(1, 1)):\n+ \"\"\"\n+ This function will build the symbolic graph for applying a transposed\n+ convolution over a mini-batch of a stack of 2D inputs with a set of 2D\n+ filters.\n+\n+\n+ Parameters\n+ ----------\n+ input: symbolic 4D tensor\n+ Mini-batch of feature map stacks, of shape\n+ (batch size, input channels, input rows, input columns).\n+ See the optional parameter ``input_shape``.\n+\n+ filters: symbolic 4D tensor\n+ Set of filters used in CNN layer of shape\n+ (input channels, output channels, filter rows, filter columns).\n+ See the optional parameter ``filter_shape``. **Note: the order for\n+ ``output_channels`` and ``input_channels`` is reversed with respect to\n+ ``conv2d``.**\n+\n+ output_shape: tuple/list of len 4 of int or Constant variable\n+ The shape of the output of ``conv2d_transpose``. The last two elements\n+ are allowed to be ``tensor.scalar`` variables.\n+\n+ filter_shape: None, tuple/list of len 4 of int or Constant variable\n+ The shape of the filters parameter.\n+ Optional, possibly used to choose an optimal implementation.\n+ You can give ``None`` for any element of the list to specify that this\n+ element is not known at compile time.\n+\n+ border_mode: str, int or tuple of two int\n+ Refers to the ``border_mode`` argument of the corresponding forward\n+ (non-transposed) convolution. See the argument description in\n+ ``conv2d``. What was ``padding`` for the forward convolution means\n+ ``cropping`` the output of the transposed one. ``valid`` corresponds to\n+ no cropping, ``full`` to maximal cropping.\n+\n+ input_dilation: tuple of len 2\n+ Corresponds to ``subsample`` (also called strides elsewhere) in the\n+ non-transposed convolution.\n+\n+ filter_flip: bool\n+ If ``True``, will flip the filter rows and columns\n+ before sliding them over the input. This operation is normally referred\n+ to as a convolution, and this is the default. If ``False``, the filters\n+ are not flipped and the operation is referred to as a cross-correlation.\n+\n+ filter_dilation: tuple of len 2\n+ Factor by which to subsample (stride) the input.\n+ Also called dilation elsewhere.\n+\n+ Returns\n+ -------\n+ Symbolic 4D tensor\n+ Set of feature maps generated by the transposed convolution. Tensor is\n+ of shape (batch size, output channels, output rows, output columns)\n+\n+ Notes\n+ -----\n+ If cuDNN is available, it will be used on the\n+ GPU. Otherwise, it is the *CorrMM* convolution that will be used\n+ \"caffe style convolution\".\n+\n+ This operation is also sometimes called \"deconvolution\".\n+\n+ The parameter filter_dilation is an implementation of `dilated\n+ convolution <https://arxiv.org/pdf/1511.07122v3.pdf>`_.\n+\n+ \"\"\"\n+\n+ return conv2d_grad_wrt_inputs(output_grad=input,\n+ filters=filters,\n+ input_shape=output_shape,\n+ filter_shape=filter_shape,\n+ border_mode=border_mode,\n+ subsample=input_dilation,\n+ filter_flip=filter_flip,\n+ filter_dilation=filter_dilation)\n", "issue": "Add `conv2d_transpose` method to `theano.tensor.nnet.abstract_conv`\n... and make it available in `theano.tensor.nnet`.\nThe semantics and name of the parameters was discussed IRL with @vdumoulin and @fvisin .\n\nThis is a follow-up to #3681 and #3872.\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function, division\nfrom .nnet import (\n CrossentropyCategorical1Hot, CrossentropyCategorical1HotGrad,\n CrossentropySoftmax1HotWithBiasDx, CrossentropySoftmaxArgmax1HotWithBias,\n LogSoftmax, Prepend_scalar_constant_to_each_row,\n Prepend_scalar_to_each_row, Softmax,\n SoftmaxGrad, SoftmaxWithBias, binary_crossentropy,\n categorical_crossentropy, crossentropy_categorical_1hot,\n crossentropy_categorical_1hot_grad, crossentropy_softmax_1hot,\n crossentropy_softmax_1hot_with_bias,\n crossentropy_softmax_1hot_with_bias_dx,\n crossentropy_softmax_argmax_1hot_with_bias,\n crossentropy_softmax_max_and_argmax_1hot,\n crossentropy_softmax_max_and_argmax_1hot_with_bias,\n crossentropy_to_crossentropy_with_softmax,\n crossentropy_to_crossentropy_with_softmax_with_bias,\n graph_merge_softmax_with_crossentropy_softmax, h_softmax,\n logsoftmax, logsoftmax_op, prepend_0_to_each_row, prepend_1_to_each_row,\n prepend_scalar_to_each_row, relu, softmax, softmax_grad, softmax_graph,\n softmax_op, softmax_simplifier, softmax_with_bias, elu,\n confusion_matrix)\nfrom . import opt\nfrom .conv import ConvOp\nfrom .Conv3D import *\nfrom .ConvGrad3D import *\nfrom .ConvTransp3D import *\nfrom .sigm import (softplus, sigmoid, sigmoid_inplace,\n scalar_sigmoid, ultra_fast_sigmoid,\n hard_sigmoid)\nfrom .bn import batch_normalization\n\n\nimport warnings\nfrom .abstract_conv import conv2d as abstract_conv2d\nfrom .abstract_conv import conv3d\n\n\ndef conv2d(input, filters, input_shape=None, filter_shape=None,\n border_mode='valid', subsample=(1, 1), filter_flip=True,\n image_shape=None, filter_dilation=(1, 1), **kwargs):\n \"\"\"\n This function will build the symbolic graph for convolving a mini-batch of a\n stack of 2D inputs with a set of 2D filters. The implementation is modelled\n after Convolutional Neural Networks (CNN).\n\n\n Parameters\n ----------\n input: symbolic 4D tensor\n Mini-batch of feature map stacks, of shape\n (batch size, input channels, input rows, input columns).\n See the optional parameter ``input_shape``.\n\n filters: symbolic 4D tensor\n Set of filters used in CNN layer of shape\n (output channels, input channels, filter rows, filter columns).\n See the optional parameter ``filter_shape``.\n\n input_shape: None, tuple/list of len 4 of int or Constant variable\n The shape of the input parameter.\n Optional, possibly used to choose an optimal implementation.\n You can give ``None`` for any element of the list to specify that this\n element is not known at compile time.\n\n filter_shape: None, tuple/list of len 4 of int or Constant variable\n The shape of the filters parameter.\n Optional, possibly used to choose an optimal implementation.\n You can give ``None`` for any element of the list to specify that this\n element is not known at compile time.\n\n border_mode: str, int or tuple of two int\n Either of the following:\n\n ``'valid'``: apply filter wherever it completely overlaps with the\n input. Generates output of shape: input shape - filter shape + 1\n ``'full'``: apply filter wherever it partly overlaps with the input.\n Generates output of shape: input shape + filter shape - 1\n ``'half'``: pad input with a symmetric border of ``filter rows // 2``\n rows and ``filter columns // 2`` columns, then perform a valid\n convolution. For filters with an odd number of rows and columns, this\n leads to the output shape being equal to the input shape.\n ``int``: pad input with a symmetric border of zeros of the given\n width, then perform a valid convolution.\n ``(int1, int2)``: pad input with a symmetric border of ``int1`` rows\n and ``int2`` columns, then perform a valid convolution.\n\n subsample: tuple of len 2\n Factor by which to subsample the output.\n Also called strides elsewhere.\n\n filter_flip: bool\n If ``True``, will flip the filter rows and columns\n before sliding them over the input. This operation is normally referred\n to as a convolution, and this is the default. If ``False``, the filters\n are not flipped and the operation is referred to as a cross-correlation.\n\n image_shape: None, tuple/list of len 4 of int or Constant variable\n Deprecated alias for input_shape.\n\n filter_dilation: tuple of len 2\n Factor by which to subsample (stride) the input.\n Also called dilation elsewhere.\n\n kwargs: Any other keyword arguments are accepted for backwards\n compatibility, but will be ignored.\n\n Returns\n -------\n Symbolic 4D tensor\n Set of feature maps generated by convolutional layer. Tensor is\n of shape (batch size, output channels, output rows, output columns)\n\n Notes\n -----\n If cuDNN is available, it will be used on the\n GPU. Otherwise, it is the *CorrMM* convolution that will be used\n \"caffe style convolution\".\n\n This is only supported in Theano 0.8 or the development\n version until it is released.\n\n The parameter filter_dilation is an implementation of `dilated\n convolution <https://arxiv.org/pdf/1511.07122v3.pdf>`_.\n\n \"\"\"\n\n if 'imshp_logical' in kwargs or 'kshp_logical' in kwargs:\n raise ValueError(\n \"Keyword arguments 'imshp_logical' and 'kshp_logical' for conv2d \"\n \"are not supported anymore (and have not been a reliable way to \"\n \"perform upsampling). That feature is still available by calling \"\n \"theano.tensor.nnet.conv.conv2d() for the time being.\")\n if len(kwargs.keys()) > 0:\n warnings.warn(str(kwargs.keys()) +\n \" are now deprecated in \"\n \"`tensor.nnet.abstract_conv.conv2d` interface\"\n \" and will be ignored.\",\n stacklevel=2)\n\n if image_shape is not None:\n warnings.warn(\"The `image_shape` keyword argument to \"\n \"`tensor.nnet.conv2d` is deprecated, it has been \"\n \"renamed to `input_shape`.\",\n stacklevel=2)\n if input_shape is None:\n input_shape = image_shape\n else:\n raise ValueError(\"input_shape and image_shape should not\"\n \" be provided at the same time.\")\n\n return abstract_conv2d(input, filters, input_shape, filter_shape,\n border_mode, subsample, filter_flip,\n filter_dilation)\n", "path": "theano/tensor/nnet/__init__.py"}]}
2,490
986
gh_patches_debug_31223
rasdani/github-patches
git_diff
Cog-Creators__Red-DiscordBot-4738
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Properly rotate logging # Other bugs <!-- Did you find a bug with something other than a command? Fill out the following: --> #### What were you trying to do? Check log files due to no disk space. 12gb for 2 of reds files. The logs are only labelled as part 1 and 9 with nothing in between. #### What were you expecting to happen? Log files to correctly rotate and not be just labelled as 1 and 9 #### What actually happened? <!-- Replace this line with a description of what actually happened. Include any error messages --> #### How can we reproduce this issue? Just running red with ~100 cogs. ![](https://i.flaree.xyz/WindowsTerminal_0UK1SRb72M.png) </issue> <code> [start of redbot/logging.py] 1 import logging.handlers 2 import pathlib 3 import re 4 import sys 5 6 from typing import List, Tuple, Optional, Union 7 from logging import LogRecord 8 from datetime import datetime # This clearly never leads to confusion... 9 from os import isatty 10 11 from rich._log_render import LogRender 12 from rich.containers import Renderables 13 from rich.logging import RichHandler 14 from rich.table import Table 15 from rich.text import Text 16 from rich.traceback import Traceback 17 18 19 MAX_OLD_LOGS = 8 20 21 22 class RotatingFileHandler(logging.handlers.RotatingFileHandler): 23 """Custom rotating file handler. 24 25 This file handler rotates a bit differently to the one in stdlib. 26 27 For a start, this works off of a "stem" and a "directory". The stem 28 is the base name of the log file, without the extension. The 29 directory is where all log files (including backups) will be placed. 30 31 Secondly, this logger rotates files downwards, and new logs are 32 *started* with the backup number incremented. The stdlib handler 33 rotates files upwards, and this leaves the logs in reverse order. 34 35 Thirdly, naming conventions are not customisable with this class. 36 Logs will initially be named in the format "{stem}.log", and after 37 rotating, the first log file will be renamed "{stem}-part1.log", 38 and a new file "{stem}-part2.log" will be created for logging to 39 continue. 40 41 A few things can't be modified in this handler: it must use append 42 mode, it doesn't support use of the `delay` arg, and it will ignore 43 custom namers and rotators. 44 45 When this handler is instantiated, it will search through the 46 directory for logs from previous runtimes, and will open the file 47 with the highest backup number to append to. 48 """ 49 50 def __init__( 51 self, 52 stem: str, 53 directory: pathlib.Path, 54 maxBytes: int = 0, 55 backupCount: int = 0, 56 encoding: Optional[str] = None, 57 ) -> None: 58 self.baseStem = stem 59 self.directory = directory.resolve() 60 # Scan for existing files in directory, append to last part of existing log 61 log_part_re = re.compile(rf"{stem}-part(?P<partnum>\d+).log") 62 highest_part = 0 63 for path in directory.iterdir(): 64 match = log_part_re.match(path.name) 65 if match and int(match["partnum"]) > highest_part: 66 highest_part = int(match["partnum"]) 67 if highest_part: 68 filename = directory / f"{stem}-part{highest_part}.log" 69 else: 70 filename = directory / f"{stem}.log" 71 super().__init__( 72 filename, 73 mode="a", 74 maxBytes=maxBytes, 75 backupCount=backupCount, 76 encoding=encoding, 77 delay=False, 78 ) 79 80 def doRollover(self): 81 if self.stream: 82 self.stream.close() 83 self.stream = None 84 initial_path = self.directory / f"{self.baseStem}.log" 85 if self.backupCount > 0 and initial_path.exists(): 86 initial_path.replace(self.directory / f"{self.baseStem}-part1.log") 87 88 match = re.match( 89 rf"{self.baseStem}(?:-part(?P<part>\d+)?)?.log", pathlib.Path(self.baseFilename).name 90 ) 91 latest_part_num = int(match.groupdict(default="1").get("part", "1")) 92 if self.backupCount < 1: 93 # No backups, just delete the existing log and start again 94 pathlib.Path(self.baseFilename).unlink() 95 elif latest_part_num > self.backupCount: 96 # Rotate files down one 97 # red-part2.log becomes red-part1.log etc, a new log is added at the end. 98 for i in range(1, self.backupCount): 99 next_log = self.directory / f"{self.baseStem}-part{i + 1}.log" 100 if next_log.exists(): 101 prev_log = self.directory / f"{self.baseStem}-part{i}.log" 102 next_log.replace(prev_log) 103 else: 104 # Simply start a new file 105 self.baseFilename = str( 106 self.directory / f"{self.baseStem}-part{latest_part_num + 1}.log" 107 ) 108 109 self.stream = self._open() 110 111 112 class RedLogRender(LogRender): 113 def __call__( 114 self, 115 console, 116 renderables, 117 log_time=None, 118 time_format=None, 119 level="", 120 path=None, 121 line_no=None, 122 link_path=None, 123 logger_name=None, 124 ): 125 output = Table.grid(padding=(0, 1)) 126 output.expand = True 127 if self.show_time: 128 output.add_column(style="log.time") 129 if self.show_level: 130 output.add_column(style="log.level", width=self.level_width) 131 output.add_column(ratio=1, style="log.message", overflow="fold") 132 if self.show_path and path: 133 output.add_column(style="log.path") 134 if logger_name: 135 output.add_column() 136 row = [] 137 if self.show_time: 138 log_time = log_time or console.get_datetime() 139 log_time_display = log_time.strftime(time_format or self.time_format) 140 if log_time_display == self._last_time: 141 row.append(Text(" " * len(log_time_display))) 142 else: 143 row.append(Text(log_time_display)) 144 self._last_time = log_time_display 145 if self.show_level: 146 row.append(level) 147 148 row.append(Renderables(renderables)) 149 if self.show_path and path: 150 path_text = Text() 151 path_text.append(path, style=f"link file://{link_path}" if link_path else "") 152 if line_no: 153 path_text.append(f":{line_no}") 154 row.append(path_text) 155 156 if logger_name: 157 logger_name_text = Text() 158 logger_name_text.append(f"[{logger_name}]") 159 row.append(logger_name_text) 160 161 output.add_row(*row) 162 return output 163 164 165 class RedRichHandler(RichHandler): 166 """Adaptation of Rich's RichHandler to manually adjust the path to a logger name""" 167 168 def __init__(self, *args, **kwargs): 169 super().__init__(*args, **kwargs) 170 self._log_render = RedLogRender( 171 show_time=self._log_render.show_time, 172 show_level=self._log_render.show_level, 173 show_path=self._log_render.show_path, 174 level_width=self._log_render.level_width, 175 ) 176 177 def emit(self, record: LogRecord) -> None: 178 """Invoked by logging.""" 179 path = pathlib.Path(record.pathname).name 180 level = self.get_level_text(record) 181 message = self.format(record) 182 time_format = None if self.formatter is None else self.formatter.datefmt 183 log_time = datetime.fromtimestamp(record.created) 184 185 traceback = None 186 if self.rich_tracebacks and record.exc_info and record.exc_info != (None, None, None): 187 exc_type, exc_value, exc_traceback = record.exc_info 188 assert exc_type is not None 189 assert exc_value is not None 190 traceback = Traceback.from_exception( 191 exc_type, 192 exc_value, 193 exc_traceback, 194 width=self.tracebacks_width, 195 extra_lines=self.tracebacks_extra_lines, 196 theme=self.tracebacks_theme, 197 word_wrap=self.tracebacks_word_wrap, 198 show_locals=self.tracebacks_show_locals, 199 locals_max_length=self.locals_max_length, 200 locals_max_string=self.locals_max_string, 201 ) 202 message = record.getMessage() 203 204 use_markup = getattr(record, "markup") if hasattr(record, "markup") else self.markup 205 if use_markup: 206 message_text = Text.from_markup(message) 207 else: 208 message_text = Text(message) 209 210 if self.highlighter: 211 message_text = self.highlighter(message_text) 212 if self.KEYWORDS: 213 message_text.highlight_words(self.KEYWORDS, "logging.keyword") 214 215 self.console.print( 216 self._log_render( 217 self.console, 218 [message_text] if not traceback else [message_text, traceback], 219 log_time=log_time, 220 time_format=time_format, 221 level=level, 222 path=path, 223 line_no=record.lineno, 224 link_path=record.pathname if self.enable_link_path else None, 225 logger_name=record.name, 226 ) 227 ) 228 229 230 def init_logging( 231 level: int, location: pathlib.Path, force_rich_logging: Union[bool, None] 232 ) -> None: 233 root_logger = logging.getLogger() 234 235 base_logger = logging.getLogger("red") 236 base_logger.setLevel(level) 237 dpy_logger = logging.getLogger("discord") 238 dpy_logger.setLevel(logging.WARNING) 239 warnings_logger = logging.getLogger("py.warnings") 240 warnings_logger.setLevel(logging.WARNING) 241 242 enable_rich_logging = False 243 244 if isatty(0) and force_rich_logging is None: 245 # Check if the bot thinks it has a active terminal. 246 enable_rich_logging = True 247 elif force_rich_logging is True: 248 enable_rich_logging = True 249 250 file_formatter = logging.Formatter( 251 "[{asctime}] [{levelname}] {name}: {message}", datefmt="%Y-%m-%d %H:%M:%S", style="{" 252 ) 253 if enable_rich_logging is True: 254 rich_formatter = logging.Formatter("{message}", datefmt="[%X]", style="{") 255 256 stdout_handler = RedRichHandler(rich_tracebacks=True, show_path=False) 257 stdout_handler.setFormatter(rich_formatter) 258 else: 259 stdout_handler = logging.StreamHandler(sys.stdout) 260 stdout_handler.setFormatter(file_formatter) 261 262 root_logger.addHandler(stdout_handler) 263 logging.captureWarnings(True) 264 265 if not location.exists(): 266 location.mkdir(parents=True, exist_ok=True) 267 # Rotate latest logs to previous logs 268 previous_logs: List[pathlib.Path] = [] 269 latest_logs: List[Tuple[pathlib.Path, str]] = [] 270 for path in location.iterdir(): 271 match = re.match(r"latest(?P<part>-part\d+)?\.log", path.name) 272 if match: 273 part = match.groupdict(default="")["part"] 274 latest_logs.append((path, part)) 275 match = re.match(r"previous(?:-part\d+)?.log", path.name) 276 if match: 277 previous_logs.append(path) 278 # Delete all previous.log files 279 for path in previous_logs: 280 path.unlink() 281 # Rename latest.log files to previous.log 282 for path, part in latest_logs: 283 path.replace(location / f"previous{part}.log") 284 285 latest_fhandler = RotatingFileHandler( 286 stem="latest", 287 directory=location, 288 maxBytes=1_000_000, # About 1MB per logfile 289 backupCount=MAX_OLD_LOGS, 290 encoding="utf-8", 291 ) 292 all_fhandler = RotatingFileHandler( 293 stem="red", 294 directory=location, 295 maxBytes=1_000_000, 296 backupCount=MAX_OLD_LOGS, 297 encoding="utf-8", 298 ) 299 300 for fhandler in (latest_fhandler, all_fhandler): 301 fhandler.setFormatter(file_formatter) 302 root_logger.addHandler(fhandler) 303 [end of redbot/logging.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/redbot/logging.py b/redbot/logging.py --- a/redbot/logging.py +++ b/redbot/logging.py @@ -58,7 +58,7 @@ self.baseStem = stem self.directory = directory.resolve() # Scan for existing files in directory, append to last part of existing log - log_part_re = re.compile(rf"{stem}-part(?P<partnum>\d+).log") + log_part_re = re.compile(rf"{stem}-part(?P<partnum>\d)\.log") highest_part = 0 for path in directory.iterdir(): match = log_part_re.match(path.name) @@ -86,7 +86,7 @@ initial_path.replace(self.directory / f"{self.baseStem}-part1.log") match = re.match( - rf"{self.baseStem}(?:-part(?P<part>\d+)?)?.log", pathlib.Path(self.baseFilename).name + rf"{self.baseStem}(?:-part(?P<part>\d))?\.log", pathlib.Path(self.baseFilename).name ) latest_part_num = int(match.groupdict(default="1").get("part", "1")) if self.backupCount < 1: @@ -95,7 +95,7 @@ elif latest_part_num > self.backupCount: # Rotate files down one # red-part2.log becomes red-part1.log etc, a new log is added at the end. - for i in range(1, self.backupCount): + for i in range(1, self.backupCount + 1): next_log = self.directory / f"{self.baseStem}-part{i + 1}.log" if next_log.exists(): prev_log = self.directory / f"{self.baseStem}-part{i}.log"
{"golden_diff": "diff --git a/redbot/logging.py b/redbot/logging.py\n--- a/redbot/logging.py\n+++ b/redbot/logging.py\n@@ -58,7 +58,7 @@\n self.baseStem = stem\n self.directory = directory.resolve()\n # Scan for existing files in directory, append to last part of existing log\n- log_part_re = re.compile(rf\"{stem}-part(?P<partnum>\\d+).log\")\n+ log_part_re = re.compile(rf\"{stem}-part(?P<partnum>\\d)\\.log\")\n highest_part = 0\n for path in directory.iterdir():\n match = log_part_re.match(path.name)\n@@ -86,7 +86,7 @@\n initial_path.replace(self.directory / f\"{self.baseStem}-part1.log\")\n \n match = re.match(\n- rf\"{self.baseStem}(?:-part(?P<part>\\d+)?)?.log\", pathlib.Path(self.baseFilename).name\n+ rf\"{self.baseStem}(?:-part(?P<part>\\d))?\\.log\", pathlib.Path(self.baseFilename).name\n )\n latest_part_num = int(match.groupdict(default=\"1\").get(\"part\", \"1\"))\n if self.backupCount < 1:\n@@ -95,7 +95,7 @@\n elif latest_part_num > self.backupCount:\n # Rotate files down one\n # red-part2.log becomes red-part1.log etc, a new log is added at the end.\n- for i in range(1, self.backupCount):\n+ for i in range(1, self.backupCount + 1):\n next_log = self.directory / f\"{self.baseStem}-part{i + 1}.log\"\n if next_log.exists():\n prev_log = self.directory / f\"{self.baseStem}-part{i}.log\"\n", "issue": "Properly rotate logging\n# Other bugs\r\n\r\n<!-- \r\nDid you find a bug with something other than a command? Fill out the following:\r\n-->\r\n\r\n#### What were you trying to do?\r\n\r\nCheck log files due to no disk space. 12gb for 2 of reds files. The logs are only labelled as part 1 and 9 with nothing in between.\r\n\r\n#### What were you expecting to happen?\r\n\r\nLog files to correctly rotate and not be just labelled as 1 and 9\r\n\r\n#### What actually happened?\r\n\r\n<!-- Replace this line with a description of what actually happened. Include any error messages -->\r\n\r\n#### How can we reproduce this issue?\r\n\r\nJust running red with ~100 cogs.\r\n\r\n\r\n![](https://i.flaree.xyz/WindowsTerminal_0UK1SRb72M.png)\r\n\r\n\n", "before_files": [{"content": "import logging.handlers\nimport pathlib\nimport re\nimport sys\n\nfrom typing import List, Tuple, Optional, Union\nfrom logging import LogRecord\nfrom datetime import datetime # This clearly never leads to confusion...\nfrom os import isatty\n\nfrom rich._log_render import LogRender\nfrom rich.containers import Renderables\nfrom rich.logging import RichHandler\nfrom rich.table import Table\nfrom rich.text import Text\nfrom rich.traceback import Traceback\n\n\nMAX_OLD_LOGS = 8\n\n\nclass RotatingFileHandler(logging.handlers.RotatingFileHandler):\n \"\"\"Custom rotating file handler.\n\n This file handler rotates a bit differently to the one in stdlib.\n\n For a start, this works off of a \"stem\" and a \"directory\". The stem\n is the base name of the log file, without the extension. The\n directory is where all log files (including backups) will be placed.\n\n Secondly, this logger rotates files downwards, and new logs are\n *started* with the backup number incremented. The stdlib handler\n rotates files upwards, and this leaves the logs in reverse order.\n\n Thirdly, naming conventions are not customisable with this class.\n Logs will initially be named in the format \"{stem}.log\", and after\n rotating, the first log file will be renamed \"{stem}-part1.log\",\n and a new file \"{stem}-part2.log\" will be created for logging to\n continue.\n\n A few things can't be modified in this handler: it must use append\n mode, it doesn't support use of the `delay` arg, and it will ignore\n custom namers and rotators.\n\n When this handler is instantiated, it will search through the\n directory for logs from previous runtimes, and will open the file\n with the highest backup number to append to.\n \"\"\"\n\n def __init__(\n self,\n stem: str,\n directory: pathlib.Path,\n maxBytes: int = 0,\n backupCount: int = 0,\n encoding: Optional[str] = None,\n ) -> None:\n self.baseStem = stem\n self.directory = directory.resolve()\n # Scan for existing files in directory, append to last part of existing log\n log_part_re = re.compile(rf\"{stem}-part(?P<partnum>\\d+).log\")\n highest_part = 0\n for path in directory.iterdir():\n match = log_part_re.match(path.name)\n if match and int(match[\"partnum\"]) > highest_part:\n highest_part = int(match[\"partnum\"])\n if highest_part:\n filename = directory / f\"{stem}-part{highest_part}.log\"\n else:\n filename = directory / f\"{stem}.log\"\n super().__init__(\n filename,\n mode=\"a\",\n maxBytes=maxBytes,\n backupCount=backupCount,\n encoding=encoding,\n delay=False,\n )\n\n def doRollover(self):\n if self.stream:\n self.stream.close()\n self.stream = None\n initial_path = self.directory / f\"{self.baseStem}.log\"\n if self.backupCount > 0 and initial_path.exists():\n initial_path.replace(self.directory / f\"{self.baseStem}-part1.log\")\n\n match = re.match(\n rf\"{self.baseStem}(?:-part(?P<part>\\d+)?)?.log\", pathlib.Path(self.baseFilename).name\n )\n latest_part_num = int(match.groupdict(default=\"1\").get(\"part\", \"1\"))\n if self.backupCount < 1:\n # No backups, just delete the existing log and start again\n pathlib.Path(self.baseFilename).unlink()\n elif latest_part_num > self.backupCount:\n # Rotate files down one\n # red-part2.log becomes red-part1.log etc, a new log is added at the end.\n for i in range(1, self.backupCount):\n next_log = self.directory / f\"{self.baseStem}-part{i + 1}.log\"\n if next_log.exists():\n prev_log = self.directory / f\"{self.baseStem}-part{i}.log\"\n next_log.replace(prev_log)\n else:\n # Simply start a new file\n self.baseFilename = str(\n self.directory / f\"{self.baseStem}-part{latest_part_num + 1}.log\"\n )\n\n self.stream = self._open()\n\n\nclass RedLogRender(LogRender):\n def __call__(\n self,\n console,\n renderables,\n log_time=None,\n time_format=None,\n level=\"\",\n path=None,\n line_no=None,\n link_path=None,\n logger_name=None,\n ):\n output = Table.grid(padding=(0, 1))\n output.expand = True\n if self.show_time:\n output.add_column(style=\"log.time\")\n if self.show_level:\n output.add_column(style=\"log.level\", width=self.level_width)\n output.add_column(ratio=1, style=\"log.message\", overflow=\"fold\")\n if self.show_path and path:\n output.add_column(style=\"log.path\")\n if logger_name:\n output.add_column()\n row = []\n if self.show_time:\n log_time = log_time or console.get_datetime()\n log_time_display = log_time.strftime(time_format or self.time_format)\n if log_time_display == self._last_time:\n row.append(Text(\" \" * len(log_time_display)))\n else:\n row.append(Text(log_time_display))\n self._last_time = log_time_display\n if self.show_level:\n row.append(level)\n\n row.append(Renderables(renderables))\n if self.show_path and path:\n path_text = Text()\n path_text.append(path, style=f\"link file://{link_path}\" if link_path else \"\")\n if line_no:\n path_text.append(f\":{line_no}\")\n row.append(path_text)\n\n if logger_name:\n logger_name_text = Text()\n logger_name_text.append(f\"[{logger_name}]\")\n row.append(logger_name_text)\n\n output.add_row(*row)\n return output\n\n\nclass RedRichHandler(RichHandler):\n \"\"\"Adaptation of Rich's RichHandler to manually adjust the path to a logger name\"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._log_render = RedLogRender(\n show_time=self._log_render.show_time,\n show_level=self._log_render.show_level,\n show_path=self._log_render.show_path,\n level_width=self._log_render.level_width,\n )\n\n def emit(self, record: LogRecord) -> None:\n \"\"\"Invoked by logging.\"\"\"\n path = pathlib.Path(record.pathname).name\n level = self.get_level_text(record)\n message = self.format(record)\n time_format = None if self.formatter is None else self.formatter.datefmt\n log_time = datetime.fromtimestamp(record.created)\n\n traceback = None\n if self.rich_tracebacks and record.exc_info and record.exc_info != (None, None, None):\n exc_type, exc_value, exc_traceback = record.exc_info\n assert exc_type is not None\n assert exc_value is not None\n traceback = Traceback.from_exception(\n exc_type,\n exc_value,\n exc_traceback,\n width=self.tracebacks_width,\n extra_lines=self.tracebacks_extra_lines,\n theme=self.tracebacks_theme,\n word_wrap=self.tracebacks_word_wrap,\n show_locals=self.tracebacks_show_locals,\n locals_max_length=self.locals_max_length,\n locals_max_string=self.locals_max_string,\n )\n message = record.getMessage()\n\n use_markup = getattr(record, \"markup\") if hasattr(record, \"markup\") else self.markup\n if use_markup:\n message_text = Text.from_markup(message)\n else:\n message_text = Text(message)\n\n if self.highlighter:\n message_text = self.highlighter(message_text)\n if self.KEYWORDS:\n message_text.highlight_words(self.KEYWORDS, \"logging.keyword\")\n\n self.console.print(\n self._log_render(\n self.console,\n [message_text] if not traceback else [message_text, traceback],\n log_time=log_time,\n time_format=time_format,\n level=level,\n path=path,\n line_no=record.lineno,\n link_path=record.pathname if self.enable_link_path else None,\n logger_name=record.name,\n )\n )\n\n\ndef init_logging(\n level: int, location: pathlib.Path, force_rich_logging: Union[bool, None]\n) -> None:\n root_logger = logging.getLogger()\n\n base_logger = logging.getLogger(\"red\")\n base_logger.setLevel(level)\n dpy_logger = logging.getLogger(\"discord\")\n dpy_logger.setLevel(logging.WARNING)\n warnings_logger = logging.getLogger(\"py.warnings\")\n warnings_logger.setLevel(logging.WARNING)\n\n enable_rich_logging = False\n\n if isatty(0) and force_rich_logging is None:\n # Check if the bot thinks it has a active terminal.\n enable_rich_logging = True\n elif force_rich_logging is True:\n enable_rich_logging = True\n\n file_formatter = logging.Formatter(\n \"[{asctime}] [{levelname}] {name}: {message}\", datefmt=\"%Y-%m-%d %H:%M:%S\", style=\"{\"\n )\n if enable_rich_logging is True:\n rich_formatter = logging.Formatter(\"{message}\", datefmt=\"[%X]\", style=\"{\")\n\n stdout_handler = RedRichHandler(rich_tracebacks=True, show_path=False)\n stdout_handler.setFormatter(rich_formatter)\n else:\n stdout_handler = logging.StreamHandler(sys.stdout)\n stdout_handler.setFormatter(file_formatter)\n\n root_logger.addHandler(stdout_handler)\n logging.captureWarnings(True)\n\n if not location.exists():\n location.mkdir(parents=True, exist_ok=True)\n # Rotate latest logs to previous logs\n previous_logs: List[pathlib.Path] = []\n latest_logs: List[Tuple[pathlib.Path, str]] = []\n for path in location.iterdir():\n match = re.match(r\"latest(?P<part>-part\\d+)?\\.log\", path.name)\n if match:\n part = match.groupdict(default=\"\")[\"part\"]\n latest_logs.append((path, part))\n match = re.match(r\"previous(?:-part\\d+)?.log\", path.name)\n if match:\n previous_logs.append(path)\n # Delete all previous.log files\n for path in previous_logs:\n path.unlink()\n # Rename latest.log files to previous.log\n for path, part in latest_logs:\n path.replace(location / f\"previous{part}.log\")\n\n latest_fhandler = RotatingFileHandler(\n stem=\"latest\",\n directory=location,\n maxBytes=1_000_000, # About 1MB per logfile\n backupCount=MAX_OLD_LOGS,\n encoding=\"utf-8\",\n )\n all_fhandler = RotatingFileHandler(\n stem=\"red\",\n directory=location,\n maxBytes=1_000_000,\n backupCount=MAX_OLD_LOGS,\n encoding=\"utf-8\",\n )\n\n for fhandler in (latest_fhandler, all_fhandler):\n fhandler.setFormatter(file_formatter)\n root_logger.addHandler(fhandler)\n", "path": "redbot/logging.py"}]}
3,925
403
gh_patches_debug_752
rasdani/github-patches
git_diff
CTPUG__wafer-657
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> icalendar 5.0 breaks the tests With icalendar 5.0, the test_ics_view test fails with ``` File "/home/runner/work/wafer/wafer/wafer/schedule/tests/test_views.py", line 1526, in test_ics_view 20 self.assertEqual(event['dtstart'].params['value'], 'DATE-TIME') 21 File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/icalendar/caselessdict.py", line 40, in __getitem__ 22 return super().__getitem__(key.upper()) 23 KeyError: 'VALUE' ``` but it works fine with 4.1 There's nothing obvious in the icalendar changelog about this behaviour change, so more investriagtion is needed. </issue> <code> [start of setup.py] 1 from glob import glob 2 import subprocess 3 4 from setuptools import find_packages, setup 5 6 REQUIRES = [ 7 'Django>=2.2,<5', 8 'bleach', 9 'bleach-allowlist', 10 'diff-match-patch', 11 'django-bakery>=0.13.0', 12 'django-crispy-forms', 13 'django-markitup>=4.0.0', 14 'django-registration-redux', 15 'django-reversion', 16 'django-select2', 17 'djangorestframework', 18 'drf-extensions>=0.5.0', 19 'icalendar>=4.0,<5.0', 20 'jsonfield', 21 'markdown>=2.5', 22 'pillow', 23 'py3dns', 24 'pyLibravatar', 25 'pytz', 26 'requests', 27 ] 28 29 SOURCES = [] 30 31 32 with open('README.rst', 'r') as f: 33 long_description = f.read() 34 35 36 def compile_translations(): 37 try: 38 subprocess.check_call(['./manage.py', 'compilemessages']) 39 except subprocess.CalledProcessError: 40 print("WARNING: cannot compile translations.") 41 return glob('wafer/locale/*/LC_MESSAGES/django.mo') 42 43 44 setup( 45 name="wafer", 46 version="0.14.1a", 47 url='http://github.com/CTPUG/wafer', 48 license='ISC', 49 description="A wafer-thin Django library for running small conferences.", 50 long_description=long_description, 51 long_description_content_type="text/x-rst", 52 author='CTPUG', 53 author_email='[email protected]', 54 packages=find_packages(), 55 include_package_data=True, 56 install_requires=REQUIRES, 57 dependency_links=SOURCES, 58 data_files=[ 59 ('locale', compile_translations()), 60 ], 61 setup_requires=[ 62 # Add setuptools-git, so we get correct behaviour for 63 # include_package_data 64 'setuptools_git >= 1.0', 65 ], 66 classifiers=[ 67 'Development Status :: 4 - Beta', 68 'Intended Audience :: Developers', 69 'License :: OSI Approved :: ISC License (ISCL)', 70 'Operating System :: POSIX', 71 'Programming Language :: Python :: 3', 72 'Programming Language :: Python :: 3.6', 73 'Programming Language :: Python :: 3.7', 74 'Programming Language :: Python :: 3.8', 75 'Framework :: Django', 76 'Topic :: Software Development :: Libraries :: Python Modules', 77 'Topic :: Internet :: WWW/HTTP', 78 ], 79 ) 80 [end of setup.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -16,7 +16,7 @@ 'django-select2', 'djangorestframework', 'drf-extensions>=0.5.0', - 'icalendar>=4.0,<5.0', + 'icalendar>=4.0', 'jsonfield', 'markdown>=2.5', 'pillow',
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -16,7 +16,7 @@\n 'django-select2',\n 'djangorestframework',\n 'drf-extensions>=0.5.0',\n- 'icalendar>=4.0,<5.0',\n+ 'icalendar>=4.0',\n 'jsonfield',\n 'markdown>=2.5',\n 'pillow',\n", "issue": "icalendar 5.0 breaks the tests\nWith icalendar 5.0, the test_ics_view test fails with\r\n```\r\nFile \"/home/runner/work/wafer/wafer/wafer/schedule/tests/test_views.py\", line 1526, in test_ics_view\r\n20\r\n self.assertEqual(event['dtstart'].params['value'], 'DATE-TIME')\r\n21\r\n File \"/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/icalendar/caselessdict.py\", line 40, in __getitem__\r\n22\r\n return super().__getitem__(key.upper())\r\n23\r\nKeyError: 'VALUE'\r\n```\r\n\r\nbut it works fine with 4.1\r\n\r\nThere's nothing obvious in the icalendar changelog about this behaviour change, so more investriagtion is needed.\r\n\n", "before_files": [{"content": "from glob import glob\nimport subprocess\n\nfrom setuptools import find_packages, setup\n\nREQUIRES = [\n 'Django>=2.2,<5',\n 'bleach',\n 'bleach-allowlist',\n 'diff-match-patch',\n 'django-bakery>=0.13.0',\n 'django-crispy-forms',\n 'django-markitup>=4.0.0',\n 'django-registration-redux',\n 'django-reversion',\n 'django-select2',\n 'djangorestframework',\n 'drf-extensions>=0.5.0',\n 'icalendar>=4.0,<5.0',\n 'jsonfield',\n 'markdown>=2.5',\n 'pillow',\n 'py3dns',\n 'pyLibravatar',\n 'pytz',\n 'requests',\n]\n\nSOURCES = []\n\n\nwith open('README.rst', 'r') as f:\n long_description = f.read()\n\n\ndef compile_translations():\n try:\n subprocess.check_call(['./manage.py', 'compilemessages'])\n except subprocess.CalledProcessError:\n print(\"WARNING: cannot compile translations.\")\n return glob('wafer/locale/*/LC_MESSAGES/django.mo')\n\n\nsetup(\n name=\"wafer\",\n version=\"0.14.1a\",\n url='http://github.com/CTPUG/wafer',\n license='ISC',\n description=\"A wafer-thin Django library for running small conferences.\",\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n author='CTPUG',\n author_email='[email protected]',\n packages=find_packages(),\n include_package_data=True,\n install_requires=REQUIRES,\n dependency_links=SOURCES,\n data_files=[\n ('locale', compile_translations()),\n ],\n setup_requires=[\n # Add setuptools-git, so we get correct behaviour for\n # include_package_data\n 'setuptools_git >= 1.0',\n ],\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: ISC License (ISCL)',\n 'Operating System :: POSIX',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Framework :: Django',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Internet :: WWW/HTTP',\n ],\n)\n", "path": "setup.py"}]}
1,420
100
gh_patches_debug_3145
rasdani/github-patches
git_diff
beetbox__beets-2196
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> beetsplug/fromfilename.py python3 compatability https://github.com/beetbox/beets/blob/e8afcbe7ec4bc37bb080f6f466b98807d020753b/beetsplug/fromfilename.py#L104 python2 ``` python2 d.values() ``` for python 3 this should be ``` python3 list(d.values()) ``` </issue> <code> [start of beetsplug/fromfilename.py] 1 # -*- coding: utf-8 -*- 2 # This file is part of beets. 3 # Copyright 2016, Jan-Erik Dahlin 4 # 5 # Permission is hereby granted, free of charge, to any person obtaining 6 # a copy of this software and associated documentation files (the 7 # "Software"), to deal in the Software without restriction, including 8 # without limitation the rights to use, copy, modify, merge, publish, 9 # distribute, sublicense, and/or sell copies of the Software, and to 10 # permit persons to whom the Software is furnished to do so, subject to 11 # the following conditions: 12 # 13 # The above copyright notice and this permission notice shall be 14 # included in all copies or substantial portions of the Software. 15 16 """If the title is empty, try to extract track and title from the 17 filename. 18 """ 19 from __future__ import division, absolute_import, print_function 20 21 from beets import plugins 22 from beets.util import displayable_path 23 import os 24 import re 25 import six 26 27 28 # Filename field extraction patterns. 29 PATTERNS = [ 30 # "01 - Track 01" and "01": do nothing 31 r'^(\d+)\s*-\s*track\s*\d$', 32 r'^\d+$', 33 34 # Useful patterns. 35 r'^(?P<artist>.+)-(?P<title>.+)-(?P<tag>.*)$', 36 r'^(?P<track>\d+)\s*-(?P<artist>.+)-(?P<title>.+)-(?P<tag>.*)$', 37 r'^(?P<track>\d+)\s(?P<artist>.+)-(?P<title>.+)-(?P<tag>.*)$', 38 r'^(?P<artist>.+)-(?P<title>.+)$', 39 r'^(?P<track>\d+)\.\s*(?P<artist>.+)-(?P<title>.+)$', 40 r'^(?P<track>\d+)\s*-\s*(?P<artist>.+)-(?P<title>.+)$', 41 r'^(?P<track>\d+)\s*-(?P<artist>.+)-(?P<title>.+)$', 42 r'^(?P<track>\d+)\s(?P<artist>.+)-(?P<title>.+)$', 43 r'^(?P<title>.+)$', 44 r'^(?P<track>\d+)\.\s*(?P<title>.+)$', 45 r'^(?P<track>\d+)\s*-\s*(?P<title>.+)$', 46 r'^(?P<track>\d+)\s(?P<title>.+)$', 47 r'^(?P<title>.+) by (?P<artist>.+)$', 48 ] 49 50 # Titles considered "empty" and in need of replacement. 51 BAD_TITLE_PATTERNS = [ 52 r'^$', 53 r'\d+?\s?-?\s*track\s*\d+', 54 ] 55 56 57 def equal(seq): 58 """Determine whether a sequence holds identical elements. 59 """ 60 return len(set(seq)) <= 1 61 62 63 def equal_fields(matchdict, field): 64 """Do all items in `matchdict`, whose values are dictionaries, have 65 the same value for `field`? (If they do, the field is probably not 66 the title.) 67 """ 68 return equal(m[field] for m in matchdict.values()) 69 70 71 def all_matches(names, pattern): 72 """If all the filenames in the item/filename mapping match the 73 pattern, return a dictionary mapping the items to dictionaries 74 giving the value for each named subpattern in the match. Otherwise, 75 return None. 76 """ 77 matches = {} 78 for item, name in names.items(): 79 m = re.match(pattern, name, re.IGNORECASE) 80 if m and m.groupdict(): 81 # Only yield a match when the regex applies *and* has 82 # capture groups. Otherwise, no information can be extracted 83 # from the filename. 84 matches[item] = m.groupdict() 85 else: 86 return None 87 return matches 88 89 90 def bad_title(title): 91 """Determine whether a given title is "bad" (empty or otherwise 92 meaningless) and in need of replacement. 93 """ 94 for pat in BAD_TITLE_PATTERNS: 95 if re.match(pat, title, re.IGNORECASE): 96 return True 97 return False 98 99 100 def apply_matches(d): 101 """Given a mapping from items to field dicts, apply the fields to 102 the objects. 103 """ 104 some_map = d.values()[0] 105 keys = some_map.keys() 106 107 # Only proceed if the "tag" field is equal across all filenames. 108 if 'tag' in keys and not equal_fields(d, 'tag'): 109 return 110 111 # Given both an "artist" and "title" field, assume that one is 112 # *actually* the artist, which must be uniform, and use the other 113 # for the title. This, of course, won't work for VA albums. 114 if 'artist' in keys: 115 if equal_fields(d, 'artist'): 116 artist = some_map['artist'] 117 title_field = 'title' 118 elif equal_fields(d, 'title'): 119 artist = some_map['title'] 120 title_field = 'artist' 121 else: 122 # Both vary. Abort. 123 return 124 125 for item in d: 126 if not item.artist: 127 item.artist = artist 128 129 # No artist field: remaining field is the title. 130 else: 131 title_field = 'title' 132 133 # Apply the title and track. 134 for item in d: 135 if bad_title(item.title): 136 item.title = six.text_type(d[item][title_field]) 137 if 'track' in d[item] and item.track == 0: 138 item.track = int(d[item]['track']) 139 140 141 # Plugin structure and hook into import process. 142 143 class FromFilenamePlugin(plugins.BeetsPlugin): 144 def __init__(self): 145 super(FromFilenamePlugin, self).__init__() 146 self.register_listener('import_task_start', filename_task) 147 148 149 def filename_task(task, session): 150 """Examine each item in the task to see if we can extract a title 151 from the filename. Try to match all filenames to a number of 152 regexps, starting with the most complex patterns and successively 153 trying less complex patterns. As soon as all filenames match the 154 same regex we can make an educated guess of which part of the 155 regex that contains the title. 156 """ 157 items = task.items if task.is_album else [task.item] 158 159 # Look for suspicious (empty or meaningless) titles. 160 missing_titles = sum(bad_title(i.title) for i in items) 161 162 if missing_titles: 163 # Get the base filenames (no path or extension). 164 names = {} 165 for item in items: 166 path = displayable_path(item.path) 167 name, _ = os.path.splitext(os.path.basename(path)) 168 names[item] = name 169 170 # Look for useful information in the filenames. 171 for pattern in PATTERNS: 172 d = all_matches(names, pattern) 173 if d: 174 apply_matches(d) 175 [end of beetsplug/fromfilename.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/beetsplug/fromfilename.py b/beetsplug/fromfilename.py --- a/beetsplug/fromfilename.py +++ b/beetsplug/fromfilename.py @@ -101,7 +101,7 @@ """Given a mapping from items to field dicts, apply the fields to the objects. """ - some_map = d.values()[0] + some_map = list(d.values())[0] keys = some_map.keys() # Only proceed if the "tag" field is equal across all filenames.
{"golden_diff": "diff --git a/beetsplug/fromfilename.py b/beetsplug/fromfilename.py\n--- a/beetsplug/fromfilename.py\n+++ b/beetsplug/fromfilename.py\n@@ -101,7 +101,7 @@\n \"\"\"Given a mapping from items to field dicts, apply the fields to\n the objects.\n \"\"\"\n- some_map = d.values()[0]\n+ some_map = list(d.values())[0]\n keys = some_map.keys()\n \n # Only proceed if the \"tag\" field is equal across all filenames.\n", "issue": "beetsplug/fromfilename.py python3 compatability\nhttps://github.com/beetbox/beets/blob/e8afcbe7ec4bc37bb080f6f466b98807d020753b/beetsplug/fromfilename.py#L104\n\npython2\n\n``` python2\nd.values()\n```\n\nfor python 3 this should be\n\n``` python3\nlist(d.values())\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Jan-Erik Dahlin\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"If the title is empty, try to extract track and title from the\nfilename.\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nfrom beets import plugins\nfrom beets.util import displayable_path\nimport os\nimport re\nimport six\n\n\n# Filename field extraction patterns.\nPATTERNS = [\n # \"01 - Track 01\" and \"01\": do nothing\n r'^(\\d+)\\s*-\\s*track\\s*\\d$',\n r'^\\d+$',\n\n # Useful patterns.\n r'^(?P<artist>.+)-(?P<title>.+)-(?P<tag>.*)$',\n r'^(?P<track>\\d+)\\s*-(?P<artist>.+)-(?P<title>.+)-(?P<tag>.*)$',\n r'^(?P<track>\\d+)\\s(?P<artist>.+)-(?P<title>.+)-(?P<tag>.*)$',\n r'^(?P<artist>.+)-(?P<title>.+)$',\n r'^(?P<track>\\d+)\\.\\s*(?P<artist>.+)-(?P<title>.+)$',\n r'^(?P<track>\\d+)\\s*-\\s*(?P<artist>.+)-(?P<title>.+)$',\n r'^(?P<track>\\d+)\\s*-(?P<artist>.+)-(?P<title>.+)$',\n r'^(?P<track>\\d+)\\s(?P<artist>.+)-(?P<title>.+)$',\n r'^(?P<title>.+)$',\n r'^(?P<track>\\d+)\\.\\s*(?P<title>.+)$',\n r'^(?P<track>\\d+)\\s*-\\s*(?P<title>.+)$',\n r'^(?P<track>\\d+)\\s(?P<title>.+)$',\n r'^(?P<title>.+) by (?P<artist>.+)$',\n]\n\n# Titles considered \"empty\" and in need of replacement.\nBAD_TITLE_PATTERNS = [\n r'^$',\n r'\\d+?\\s?-?\\s*track\\s*\\d+',\n]\n\n\ndef equal(seq):\n \"\"\"Determine whether a sequence holds identical elements.\n \"\"\"\n return len(set(seq)) <= 1\n\n\ndef equal_fields(matchdict, field):\n \"\"\"Do all items in `matchdict`, whose values are dictionaries, have\n the same value for `field`? (If they do, the field is probably not\n the title.)\n \"\"\"\n return equal(m[field] for m in matchdict.values())\n\n\ndef all_matches(names, pattern):\n \"\"\"If all the filenames in the item/filename mapping match the\n pattern, return a dictionary mapping the items to dictionaries\n giving the value for each named subpattern in the match. Otherwise,\n return None.\n \"\"\"\n matches = {}\n for item, name in names.items():\n m = re.match(pattern, name, re.IGNORECASE)\n if m and m.groupdict():\n # Only yield a match when the regex applies *and* has\n # capture groups. Otherwise, no information can be extracted\n # from the filename.\n matches[item] = m.groupdict()\n else:\n return None\n return matches\n\n\ndef bad_title(title):\n \"\"\"Determine whether a given title is \"bad\" (empty or otherwise\n meaningless) and in need of replacement.\n \"\"\"\n for pat in BAD_TITLE_PATTERNS:\n if re.match(pat, title, re.IGNORECASE):\n return True\n return False\n\n\ndef apply_matches(d):\n \"\"\"Given a mapping from items to field dicts, apply the fields to\n the objects.\n \"\"\"\n some_map = d.values()[0]\n keys = some_map.keys()\n\n # Only proceed if the \"tag\" field is equal across all filenames.\n if 'tag' in keys and not equal_fields(d, 'tag'):\n return\n\n # Given both an \"artist\" and \"title\" field, assume that one is\n # *actually* the artist, which must be uniform, and use the other\n # for the title. This, of course, won't work for VA albums.\n if 'artist' in keys:\n if equal_fields(d, 'artist'):\n artist = some_map['artist']\n title_field = 'title'\n elif equal_fields(d, 'title'):\n artist = some_map['title']\n title_field = 'artist'\n else:\n # Both vary. Abort.\n return\n\n for item in d:\n if not item.artist:\n item.artist = artist\n\n # No artist field: remaining field is the title.\n else:\n title_field = 'title'\n\n # Apply the title and track.\n for item in d:\n if bad_title(item.title):\n item.title = six.text_type(d[item][title_field])\n if 'track' in d[item] and item.track == 0:\n item.track = int(d[item]['track'])\n\n\n# Plugin structure and hook into import process.\n\nclass FromFilenamePlugin(plugins.BeetsPlugin):\n def __init__(self):\n super(FromFilenamePlugin, self).__init__()\n self.register_listener('import_task_start', filename_task)\n\n\ndef filename_task(task, session):\n \"\"\"Examine each item in the task to see if we can extract a title\n from the filename. Try to match all filenames to a number of\n regexps, starting with the most complex patterns and successively\n trying less complex patterns. As soon as all filenames match the\n same regex we can make an educated guess of which part of the\n regex that contains the title.\n \"\"\"\n items = task.items if task.is_album else [task.item]\n\n # Look for suspicious (empty or meaningless) titles.\n missing_titles = sum(bad_title(i.title) for i in items)\n\n if missing_titles:\n # Get the base filenames (no path or extension).\n names = {}\n for item in items:\n path = displayable_path(item.path)\n name, _ = os.path.splitext(os.path.basename(path))\n names[item] = name\n\n # Look for useful information in the filenames.\n for pattern in PATTERNS:\n d = all_matches(names, pattern)\n if d:\n apply_matches(d)\n", "path": "beetsplug/fromfilename.py"}]}
2,577
117
gh_patches_debug_32113
rasdani/github-patches
git_diff
opendatacube__datacube-core-1457
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> BUG: Scale in rectilinear reproject produces different results (without dask) Related: - #1448 - #1450 ### Expected behaviour reproject behaves similar to GDAL defaults ### Actual behaviour The behavior has slight differences with GDAL defaults ### Steps to reproduce the behaviour - Original CRS = EPSG:32615 - Original Transform = Affine(5.0, 0.0, 451560.0, 0.0, -5.0, 4640690.0) ```python import datacube import rioxarray import xarray from rasterio.enum import Resampling dc = datacube.Datacube() old = xarray.open_dataset("old_demsm.nc", decode_coords="all") # loaded with datacube==1.8.12 new = dc.load( # datacube==1.8.13 product=..., measurements=["demsm"], geom=geom, output_crs="EPSG:6933", resolution=[-10, 10], resampling="cubic_spline", ).squeeze() ``` ![image](https://github.com/opendatacube/datacube-core/assets/8699967/28737a10-f813-4b6d-a374-22801875ef36) ```python orignal_rasters = dc.find_datasets( product=..., measurements=["demsm"], geom=geom, ) original_demsm = rioxarray.open_rasterio( orignal_rasters[0].raster_path, masked=True, ).rio.reproject_match( match_data_array=new["demsm"], resampling=Resampling.cubic_spline, ).rio.clip( [geom], crs="EPSG:4326", ).squeeze() ``` ![image](https://github.com/opendatacube/datacube-core/assets/8699967/c9430328-60d4-414e-a17e-2a2c17035bc0) ### Environment information * Which ``datacube --version`` are you using? 1.8.13 cc @alfredoahds </issue> <code> [start of datacube/storage/_read.py] 1 # This file is part of the Open Data Cube, see https://opendatacube.org for more information 2 # 3 # Copyright (c) 2015-2023 ODC Contributors 4 # SPDX-License-Identifier: Apache-2.0 5 """ Dataset -> Raster 6 """ 7 from affine import Affine 8 import numpy as np 9 from typing import Optional, Tuple 10 11 from ..utils.math import is_almost_int, valid_mask 12 13 from ..utils.geometry import ( 14 roi_shape, 15 roi_is_empty, 16 roi_is_full, 17 roi_pad, 18 GeoBox, 19 w_, 20 warp_affine, 21 rio_reproject, 22 compute_reproject_roi) 23 24 from ..utils.geometry._warp import is_resampling_nn, Resampling, Nodata 25 from ..utils.geometry import gbox as gbx 26 27 28 def rdr_geobox(rdr) -> GeoBox: 29 """ Construct GeoBox from opened dataset reader. 30 """ 31 h, w = rdr.shape 32 return GeoBox(w, h, rdr.transform, rdr.crs) 33 34 35 def can_paste(rr, stol=1e-3, ttol=1e-2): 36 """ 37 Take result of compute_reproject_roi and check if can read(possibly with scale) and paste, 38 or do we need to read then reproject. 39 40 :returns: (True, None) if one can just read and paste 41 :returns: (False, Reason) if pasting is not possible, so need to reproject after reading 42 """ 43 if not rr.is_st: # not linear or not Scale + Translation 44 return False, "not ST" 45 46 scale = rr.scale 47 if not is_almost_int(scale, stol): # non-integer scaling 48 return False, "non-integer scale" 49 50 scale = np.round(scale) 51 A = rr.transform.linear # src -> dst 52 A = A*Affine.scale(scale, scale) # src.overview[scale] -> dst 53 54 (sx, _, tx, # tx, ty are in dst pixel space 55 _, sy, ty, 56 *_) = A 57 58 if any(abs(abs(s) - 1) > stol 59 for s in (sx, sy)): # not equal scaling across axis? 60 return False, "sx!=sy, probably" 61 62 ny, nx = (n/scale 63 for n in roi_shape(rr.roi_src)) 64 65 # src_roi doesn't divide by scale properly: 66 # example 3x7 scaled down by factor of 2 67 if not all(is_almost_int(n, stol) for n in (nx, ny)): 68 return False, "src_roi doesn't align for scale" 69 70 # TODO: probably need to deal with sub-pixel translation here, if we want 71 # to ignore sub-pixel translation and dst roi is 1 pixel bigger than src it 72 # should still be ok to paste after cropping dst roi by one pixel on the 73 # appropriate side. As it stands sub-pixel translation will be ignored only 74 # in some cases. 75 76 # scaled down shape doesn't match dst shape 77 s_shape = (int(ny), int(nx)) 78 if s_shape != roi_shape(rr.roi_dst): 79 return False, "src_roi/scale != dst_roi" 80 81 # final check: sub-pixel translation 82 if not all(is_almost_int(t, ttol) for t in (tx, ty)): 83 return False, "sub-pixel translation" 84 85 return True, None 86 87 88 def pick_read_scale(scale: float, rdr=None, tol=1e-3): 89 assert scale > 0 90 # First find nearest integer scale 91 # Scale down to nearest integer, unless we can scale up by less than tol 92 # 93 # 2.999999 -> 3 94 # 2.8 -> 2 95 # 0.3 -> 1 96 97 if scale < 1: 98 return 1 99 100 if is_almost_int(scale, tol): 101 scale = np.round(scale) 102 103 scale = int(scale) 104 105 if rdr is not None: 106 # TODO: check available overviews in rdr 107 pass 108 109 return scale 110 111 112 def read_time_slice(rdr, 113 dst: np.ndarray, 114 dst_gbox: GeoBox, 115 resampling: Resampling, 116 dst_nodata: Nodata, 117 extra_dim_index: Optional[int] = None) -> Tuple[slice, slice]: 118 """ From opened reader object read into `dst` 119 120 :returns: affected destination region 121 """ 122 assert dst.shape == dst_gbox.shape 123 src_gbox = rdr_geobox(rdr) 124 125 rr = compute_reproject_roi(src_gbox, dst_gbox) 126 127 if roi_is_empty(rr.roi_dst): 128 return rr.roi_dst 129 130 is_nn = is_resampling_nn(resampling) 131 scale = pick_read_scale(rr.scale, rdr) 132 scale_x, scale_y = [pick_read_scale(s) for s in rr.scale2] 133 134 paste_ok, _ = can_paste(rr, ttol=0.9 if is_nn else 0.01) 135 136 def norm_read_args(roi, shape, extra_dim_index): 137 if roi_is_full(roi, rdr.shape): 138 roi = None 139 140 if roi is None and shape == rdr.shape: 141 shape = None 142 143 w = w_[roi] 144 145 # Build 3D read window 146 # Note: Might be a good idea to natively support nD read windows. 147 if extra_dim_index is not None: 148 if w is None: 149 w = () 150 return (extra_dim_index,) + w, shape 151 else: 152 # 2D read window 153 return w, shape 154 155 if paste_ok: 156 A = rr.transform.linear 157 sx, sy = A.a, A.e 158 159 dst = dst[rr.roi_dst] 160 pix = rdr.read(*norm_read_args(rr.roi_src, dst.shape, extra_dim_index)) 161 162 if sx < 0: 163 pix = pix[:, ::-1] 164 if sy < 0: 165 pix = pix[::-1, :] 166 167 if rdr.nodata is None: 168 np.copyto(dst, pix) 169 else: 170 np.copyto(dst, pix, where=valid_mask(pix, rdr.nodata)) 171 else: 172 if rr.is_st: 173 # add padding on src/dst ROIs, it was set to tight bounds 174 # TODO: this should probably happen inside compute_reproject_roi 175 rr.roi_dst = roi_pad(rr.roi_dst, 1, dst_gbox.shape) 176 rr.roi_src = roi_pad(rr.roi_src, 1, src_gbox.shape) 177 178 dst = dst[rr.roi_dst] 179 dst_gbox = dst_gbox[rr.roi_dst] 180 src_gbox = src_gbox[rr.roi_src] 181 if scale > 1: 182 src_gbox = gbx.zoom_out(src_gbox, scale) 183 184 pix = rdr.read(*norm_read_args(rr.roi_src, src_gbox.shape, extra_dim_index)) 185 186 if rr.transform.linear is not None: 187 A = (~src_gbox.transform)*dst_gbox.transform 188 warp_affine(pix, dst, A, resampling, 189 src_nodata=rdr.nodata, dst_nodata=dst_nodata) 190 else: 191 rio_reproject(pix, dst, src_gbox, dst_gbox, resampling, 192 src_nodata=rdr.nodata, dst_nodata=dst_nodata, 193 XSCALE=scale_x, YSCALE=scale_y) 194 195 return rr.roi_dst 196 197 198 def read_time_slice_v2(rdr, 199 dst_gbox: GeoBox, 200 resampling: Resampling, 201 dst_nodata: Nodata) -> Tuple[Optional[np.ndarray], 202 Tuple[slice, slice]]: 203 """ From opened reader object read into `dst` 204 205 :returns: pixels read and ROI of dst_gbox that was affected 206 """ 207 # pylint: disable=too-many-locals 208 src_gbox = rdr_geobox(rdr) 209 210 rr = compute_reproject_roi(src_gbox, dst_gbox) 211 212 if roi_is_empty(rr.roi_dst): 213 return None, rr.roi_dst 214 215 is_nn = is_resampling_nn(resampling) 216 scale = pick_read_scale(rr.scale, rdr) 217 218 paste_ok, _ = can_paste(rr, ttol=0.9 if is_nn else 0.01) 219 220 def norm_read_args(roi, shape): 221 if roi_is_full(roi, rdr.shape): 222 roi = None 223 224 if roi is None and shape == rdr.shape: 225 shape = None 226 227 return roi, shape 228 229 if paste_ok: 230 read_shape = roi_shape(rr.roi_dst) 231 A = rr.transform.linear 232 sx, sy = A.a, A.e 233 234 pix = rdr.read(*norm_read_args(rr.roi_src, read_shape)).result() 235 236 if sx < 0: 237 pix = pix[:, ::-1] 238 if sy < 0: 239 pix = pix[::-1, :] 240 241 # normalise nodata to be equal to `dst_nodata` 242 if rdr.nodata is not None and rdr.nodata != dst_nodata: 243 pix[pix == rdr.nodata] = dst_nodata 244 245 dst = pix 246 else: 247 if rr.is_st: 248 # add padding on src/dst ROIs, it was set to tight bounds 249 # TODO: this should probably happen inside compute_reproject_roi 250 rr.roi_dst = roi_pad(rr.roi_dst, 1, dst_gbox.shape) 251 rr.roi_src = roi_pad(rr.roi_src, 1, src_gbox.shape) 252 253 dst_gbox = dst_gbox[rr.roi_dst] 254 src_gbox = src_gbox[rr.roi_src] 255 if scale > 1: 256 src_gbox = gbx.zoom_out(src_gbox, scale) 257 258 dst = np.full(dst_gbox.shape, dst_nodata, dtype=rdr.dtype) 259 pix = rdr.read(*norm_read_args(rr.roi_src, src_gbox.shape)).result() 260 261 if rr.transform.linear is not None: 262 A = (~src_gbox.transform)*dst_gbox.transform 263 warp_affine(pix, dst, A, resampling, 264 src_nodata=rdr.nodata, dst_nodata=dst_nodata) 265 else: 266 rio_reproject(pix, dst, src_gbox, dst_gbox, resampling, 267 src_nodata=rdr.nodata, dst_nodata=dst_nodata) 268 269 return dst, rr.roi_dst 270 [end of datacube/storage/_read.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/datacube/storage/_read.py b/datacube/storage/_read.py --- a/datacube/storage/_read.py +++ b/datacube/storage/_read.py @@ -129,7 +129,6 @@ is_nn = is_resampling_nn(resampling) scale = pick_read_scale(rr.scale, rdr) - scale_x, scale_y = [pick_read_scale(s) for s in rr.scale2] paste_ok, _ = can_paste(rr, ttol=0.9 if is_nn else 0.01) @@ -183,14 +182,32 @@ pix = rdr.read(*norm_read_args(rr.roi_src, src_gbox.shape, extra_dim_index)) + # XSCALE and YSCALE are (currently) undocumented arguments that rasterio passed through to + # GDAL. Not using them results in very inaccurate warping in images with highly + # non-square (i.e. long and thin) aspect ratios. + # + # See https://github.com/OSGeo/gdal/issues/7750 as well as + # https://github.com/opendatacube/datacube-core/pull/1450 and + # https://github.com/opendatacube/datacube-core/issues/1456 + # + # In theory we might be able to get better results for queries with significantly + # different vertical and horizontal scales, but explicitly using XSCALE=1, YSCALE=1 + # appears to be most appropriate for most requests, and is demonstrably better + # than not setting them at all. + gdal_scale_params = { + "XSCALE": 1, + "YSCALE": 1, + } if rr.transform.linear is not None: A = (~src_gbox.transform)*dst_gbox.transform warp_affine(pix, dst, A, resampling, - src_nodata=rdr.nodata, dst_nodata=dst_nodata) + src_nodata=rdr.nodata, dst_nodata=dst_nodata, + **gdal_scale_params) else: + rio_reproject(pix, dst, src_gbox, dst_gbox, resampling, src_nodata=rdr.nodata, dst_nodata=dst_nodata, - XSCALE=scale_x, YSCALE=scale_y) + **gdal_scale_params) return rr.roi_dst
{"golden_diff": "diff --git a/datacube/storage/_read.py b/datacube/storage/_read.py\n--- a/datacube/storage/_read.py\n+++ b/datacube/storage/_read.py\n@@ -129,7 +129,6 @@\n \n is_nn = is_resampling_nn(resampling)\n scale = pick_read_scale(rr.scale, rdr)\n- scale_x, scale_y = [pick_read_scale(s) for s in rr.scale2]\n \n paste_ok, _ = can_paste(rr, ttol=0.9 if is_nn else 0.01)\n \n@@ -183,14 +182,32 @@\n \n pix = rdr.read(*norm_read_args(rr.roi_src, src_gbox.shape, extra_dim_index))\n \n+ # XSCALE and YSCALE are (currently) undocumented arguments that rasterio passed through to\n+ # GDAL. Not using them results in very inaccurate warping in images with highly\n+ # non-square (i.e. long and thin) aspect ratios.\n+ #\n+ # See https://github.com/OSGeo/gdal/issues/7750 as well as\n+ # https://github.com/opendatacube/datacube-core/pull/1450 and\n+ # https://github.com/opendatacube/datacube-core/issues/1456\n+ #\n+ # In theory we might be able to get better results for queries with significantly\n+ # different vertical and horizontal scales, but explicitly using XSCALE=1, YSCALE=1\n+ # appears to be most appropriate for most requests, and is demonstrably better\n+ # than not setting them at all.\n+ gdal_scale_params = {\n+ \"XSCALE\": 1,\n+ \"YSCALE\": 1,\n+ }\n if rr.transform.linear is not None:\n A = (~src_gbox.transform)*dst_gbox.transform\n warp_affine(pix, dst, A, resampling,\n- src_nodata=rdr.nodata, dst_nodata=dst_nodata)\n+ src_nodata=rdr.nodata, dst_nodata=dst_nodata,\n+ **gdal_scale_params)\n else:\n+\n rio_reproject(pix, dst, src_gbox, dst_gbox, resampling,\n src_nodata=rdr.nodata, dst_nodata=dst_nodata,\n- XSCALE=scale_x, YSCALE=scale_y)\n+ **gdal_scale_params)\n \n return rr.roi_dst\n", "issue": "BUG: Scale in rectilinear reproject produces different results (without dask)\nRelated:\r\n- #1448\r\n- #1450\r\n\r\n### Expected behaviour\r\n\r\nreproject behaves similar to GDAL defaults\r\n\r\n### Actual behaviour\r\n\r\nThe behavior has slight differences with GDAL defaults\r\n\r\n### Steps to reproduce the behaviour\r\n- Original CRS = EPSG:32615\r\n- Original Transform = Affine(5.0, 0.0, 451560.0, 0.0, -5.0, 4640690.0)\r\n\r\n```python\r\nimport datacube\r\nimport rioxarray\r\nimport xarray\r\nfrom rasterio.enum import Resampling\r\n\r\ndc = datacube.Datacube()\r\nold = xarray.open_dataset(\"old_demsm.nc\", decode_coords=\"all\") # loaded with datacube==1.8.12 \r\nnew = dc.load( # datacube==1.8.13 \r\n product=...,\r\n measurements=[\"demsm\"],\r\n geom=geom,\r\n output_crs=\"EPSG:6933\",\r\n resolution=[-10, 10],\r\n resampling=\"cubic_spline\",\r\n).squeeze()\r\n```\r\n\r\n![image](https://github.com/opendatacube/datacube-core/assets/8699967/28737a10-f813-4b6d-a374-22801875ef36)\r\n\r\n```python\r\norignal_rasters = dc.find_datasets(\r\n product=...,\r\n measurements=[\"demsm\"],\r\n geom=geom,\r\n)\r\noriginal_demsm = rioxarray.open_rasterio(\r\n orignal_rasters[0].raster_path,\r\n masked=True,\r\n).rio.reproject_match(\r\n match_data_array=new[\"demsm\"],\r\n resampling=Resampling.cubic_spline,\r\n).rio.clip(\r\n [geom],\r\n crs=\"EPSG:4326\",\r\n).squeeze()\r\n```\r\n![image](https://github.com/opendatacube/datacube-core/assets/8699967/c9430328-60d4-414e-a17e-2a2c17035bc0)\r\n\r\n### Environment information\r\n\r\n* Which ``datacube --version`` are you using? 1.8.13\r\n\r\ncc @alfredoahds \n", "before_files": [{"content": "# This file is part of the Open Data Cube, see https://opendatacube.org for more information\n#\n# Copyright (c) 2015-2023 ODC Contributors\n# SPDX-License-Identifier: Apache-2.0\n\"\"\" Dataset -> Raster\n\"\"\"\nfrom affine import Affine\nimport numpy as np\nfrom typing import Optional, Tuple\n\nfrom ..utils.math import is_almost_int, valid_mask\n\nfrom ..utils.geometry import (\n roi_shape,\n roi_is_empty,\n roi_is_full,\n roi_pad,\n GeoBox,\n w_,\n warp_affine,\n rio_reproject,\n compute_reproject_roi)\n\nfrom ..utils.geometry._warp import is_resampling_nn, Resampling, Nodata\nfrom ..utils.geometry import gbox as gbx\n\n\ndef rdr_geobox(rdr) -> GeoBox:\n \"\"\" Construct GeoBox from opened dataset reader.\n \"\"\"\n h, w = rdr.shape\n return GeoBox(w, h, rdr.transform, rdr.crs)\n\n\ndef can_paste(rr, stol=1e-3, ttol=1e-2):\n \"\"\"\n Take result of compute_reproject_roi and check if can read(possibly with scale) and paste,\n or do we need to read then reproject.\n\n :returns: (True, None) if one can just read and paste\n :returns: (False, Reason) if pasting is not possible, so need to reproject after reading\n \"\"\"\n if not rr.is_st: # not linear or not Scale + Translation\n return False, \"not ST\"\n\n scale = rr.scale\n if not is_almost_int(scale, stol): # non-integer scaling\n return False, \"non-integer scale\"\n\n scale = np.round(scale)\n A = rr.transform.linear # src -> dst\n A = A*Affine.scale(scale, scale) # src.overview[scale] -> dst\n\n (sx, _, tx, # tx, ty are in dst pixel space\n _, sy, ty,\n *_) = A\n\n if any(abs(abs(s) - 1) > stol\n for s in (sx, sy)): # not equal scaling across axis?\n return False, \"sx!=sy, probably\"\n\n ny, nx = (n/scale\n for n in roi_shape(rr.roi_src))\n\n # src_roi doesn't divide by scale properly:\n # example 3x7 scaled down by factor of 2\n if not all(is_almost_int(n, stol) for n in (nx, ny)):\n return False, \"src_roi doesn't align for scale\"\n\n # TODO: probably need to deal with sub-pixel translation here, if we want\n # to ignore sub-pixel translation and dst roi is 1 pixel bigger than src it\n # should still be ok to paste after cropping dst roi by one pixel on the\n # appropriate side. As it stands sub-pixel translation will be ignored only\n # in some cases.\n\n # scaled down shape doesn't match dst shape\n s_shape = (int(ny), int(nx))\n if s_shape != roi_shape(rr.roi_dst):\n return False, \"src_roi/scale != dst_roi\"\n\n # final check: sub-pixel translation\n if not all(is_almost_int(t, ttol) for t in (tx, ty)):\n return False, \"sub-pixel translation\"\n\n return True, None\n\n\ndef pick_read_scale(scale: float, rdr=None, tol=1e-3):\n assert scale > 0\n # First find nearest integer scale\n # Scale down to nearest integer, unless we can scale up by less than tol\n #\n # 2.999999 -> 3\n # 2.8 -> 2\n # 0.3 -> 1\n\n if scale < 1:\n return 1\n\n if is_almost_int(scale, tol):\n scale = np.round(scale)\n\n scale = int(scale)\n\n if rdr is not None:\n # TODO: check available overviews in rdr\n pass\n\n return scale\n\n\ndef read_time_slice(rdr,\n dst: np.ndarray,\n dst_gbox: GeoBox,\n resampling: Resampling,\n dst_nodata: Nodata,\n extra_dim_index: Optional[int] = None) -> Tuple[slice, slice]:\n \"\"\" From opened reader object read into `dst`\n\n :returns: affected destination region\n \"\"\"\n assert dst.shape == dst_gbox.shape\n src_gbox = rdr_geobox(rdr)\n\n rr = compute_reproject_roi(src_gbox, dst_gbox)\n\n if roi_is_empty(rr.roi_dst):\n return rr.roi_dst\n\n is_nn = is_resampling_nn(resampling)\n scale = pick_read_scale(rr.scale, rdr)\n scale_x, scale_y = [pick_read_scale(s) for s in rr.scale2]\n\n paste_ok, _ = can_paste(rr, ttol=0.9 if is_nn else 0.01)\n\n def norm_read_args(roi, shape, extra_dim_index):\n if roi_is_full(roi, rdr.shape):\n roi = None\n\n if roi is None and shape == rdr.shape:\n shape = None\n\n w = w_[roi]\n\n # Build 3D read window\n # Note: Might be a good idea to natively support nD read windows.\n if extra_dim_index is not None:\n if w is None:\n w = ()\n return (extra_dim_index,) + w, shape\n else:\n # 2D read window\n return w, shape\n\n if paste_ok:\n A = rr.transform.linear\n sx, sy = A.a, A.e\n\n dst = dst[rr.roi_dst]\n pix = rdr.read(*norm_read_args(rr.roi_src, dst.shape, extra_dim_index))\n\n if sx < 0:\n pix = pix[:, ::-1]\n if sy < 0:\n pix = pix[::-1, :]\n\n if rdr.nodata is None:\n np.copyto(dst, pix)\n else:\n np.copyto(dst, pix, where=valid_mask(pix, rdr.nodata))\n else:\n if rr.is_st:\n # add padding on src/dst ROIs, it was set to tight bounds\n # TODO: this should probably happen inside compute_reproject_roi\n rr.roi_dst = roi_pad(rr.roi_dst, 1, dst_gbox.shape)\n rr.roi_src = roi_pad(rr.roi_src, 1, src_gbox.shape)\n\n dst = dst[rr.roi_dst]\n dst_gbox = dst_gbox[rr.roi_dst]\n src_gbox = src_gbox[rr.roi_src]\n if scale > 1:\n src_gbox = gbx.zoom_out(src_gbox, scale)\n\n pix = rdr.read(*norm_read_args(rr.roi_src, src_gbox.shape, extra_dim_index))\n\n if rr.transform.linear is not None:\n A = (~src_gbox.transform)*dst_gbox.transform\n warp_affine(pix, dst, A, resampling,\n src_nodata=rdr.nodata, dst_nodata=dst_nodata)\n else:\n rio_reproject(pix, dst, src_gbox, dst_gbox, resampling,\n src_nodata=rdr.nodata, dst_nodata=dst_nodata,\n XSCALE=scale_x, YSCALE=scale_y)\n\n return rr.roi_dst\n\n\ndef read_time_slice_v2(rdr,\n dst_gbox: GeoBox,\n resampling: Resampling,\n dst_nodata: Nodata) -> Tuple[Optional[np.ndarray],\n Tuple[slice, slice]]:\n \"\"\" From opened reader object read into `dst`\n\n :returns: pixels read and ROI of dst_gbox that was affected\n \"\"\"\n # pylint: disable=too-many-locals\n src_gbox = rdr_geobox(rdr)\n\n rr = compute_reproject_roi(src_gbox, dst_gbox)\n\n if roi_is_empty(rr.roi_dst):\n return None, rr.roi_dst\n\n is_nn = is_resampling_nn(resampling)\n scale = pick_read_scale(rr.scale, rdr)\n\n paste_ok, _ = can_paste(rr, ttol=0.9 if is_nn else 0.01)\n\n def norm_read_args(roi, shape):\n if roi_is_full(roi, rdr.shape):\n roi = None\n\n if roi is None and shape == rdr.shape:\n shape = None\n\n return roi, shape\n\n if paste_ok:\n read_shape = roi_shape(rr.roi_dst)\n A = rr.transform.linear\n sx, sy = A.a, A.e\n\n pix = rdr.read(*norm_read_args(rr.roi_src, read_shape)).result()\n\n if sx < 0:\n pix = pix[:, ::-1]\n if sy < 0:\n pix = pix[::-1, :]\n\n # normalise nodata to be equal to `dst_nodata`\n if rdr.nodata is not None and rdr.nodata != dst_nodata:\n pix[pix == rdr.nodata] = dst_nodata\n\n dst = pix\n else:\n if rr.is_st:\n # add padding on src/dst ROIs, it was set to tight bounds\n # TODO: this should probably happen inside compute_reproject_roi\n rr.roi_dst = roi_pad(rr.roi_dst, 1, dst_gbox.shape)\n rr.roi_src = roi_pad(rr.roi_src, 1, src_gbox.shape)\n\n dst_gbox = dst_gbox[rr.roi_dst]\n src_gbox = src_gbox[rr.roi_src]\n if scale > 1:\n src_gbox = gbx.zoom_out(src_gbox, scale)\n\n dst = np.full(dst_gbox.shape, dst_nodata, dtype=rdr.dtype)\n pix = rdr.read(*norm_read_args(rr.roi_src, src_gbox.shape)).result()\n\n if rr.transform.linear is not None:\n A = (~src_gbox.transform)*dst_gbox.transform\n warp_affine(pix, dst, A, resampling,\n src_nodata=rdr.nodata, dst_nodata=dst_nodata)\n else:\n rio_reproject(pix, dst, src_gbox, dst_gbox, resampling,\n src_nodata=rdr.nodata, dst_nodata=dst_nodata)\n\n return dst, rr.roi_dst\n", "path": "datacube/storage/_read.py"}]}
4,072
544
gh_patches_debug_16333
rasdani/github-patches
git_diff
ckan__ckan-8186
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> package_patch breaks uploaded resources ## CKAN version 2.10.4 ## Describe the bug When using package_patch to update a dataset, intending **not** to update an uploaded resource the resource filename appears to be lost ### Steps to reproduce 1. Create a dataset containing at least one uploaded file resource 2. Call package_patch to update the dataset, but for the resources only pass the resource ids like specified in the documentation (https://docs.ckan.org/en/2.10/api/#ckan.logic.action.patch.package_patch) like: ``` package_patch(**{'id':'b6cc8622-3334-4cdb-8960-e2a3c4269a8d', 'description':'Updated description', 'resources':[{'id':'a97b8889-5efb-440c-b6ad-fa9a9e4d7659'}, {'id':'bdbb977f-9faa-4715-88d3-c5e9042e69a4', 'description':'Updated resource description'}]}) ``` 3. Browse to the updated resource, the download link is missing the file name probably replaced by ___ and the mimetype is lost ### Expected behavior The resources should be unchanged when no changes are requested ### Additional details I think part of this issue arises out of url being used to store the filename. It may be cleaner to just add filename as another field in uploaded resources which could also make renaming uploaded files easier. </issue> <code> [start of ckan/logic/action/patch.py] 1 # encoding: utf-8 2 3 '''API functions for partial updates of existing data in CKAN''' 4 5 from ckan.logic import ( 6 get_action as _get_action, 7 check_access as _check_access, 8 get_or_bust as _get_or_bust, 9 fresh_context as _fresh_context 10 ) 11 from ckan.types import Context, DataDict 12 from ckan.types.logic import ActionResult 13 14 15 def package_patch( 16 context: Context, data_dict: DataDict) -> ActionResult.PackagePatch: 17 '''Patch a dataset (package). 18 19 :param id: the id or name of the dataset 20 :type id: string 21 22 The difference between the update and patch methods is that the patch will 23 perform an update of the provided parameters, while leaving all other 24 parameters unchanged, whereas the update methods deletes all parameters 25 not explicitly provided in the data_dict. 26 27 You are able to partially update and/or create resources with 28 package_patch. If you are updating existing resources be sure to provide 29 the resource id. Existing resources excluded from the package_patch 30 data_dict will be removed. Resources in the package data_dict without 31 an id will be treated as new resources and will be added. New resources 32 added with the patch method do not create the default views. 33 34 You must be authorized to edit the dataset and the groups that it belongs 35 to. 36 ''' 37 _check_access('package_patch', context, data_dict) 38 39 show_context: Context = { 40 'model': context['model'], 41 'session': context['session'], 42 'user': context['user'], 43 'auth_user_obj': context['auth_user_obj'], 44 'ignore_auth': context.get('ignore_auth', False), 45 'for_update': True 46 } 47 48 package_dict = _get_action('package_show')( 49 show_context, 50 {'id': _get_or_bust(data_dict, 'id')}) 51 52 patched = dict(package_dict) 53 patched.update(data_dict) 54 patched['id'] = package_dict['id'] 55 return _get_action('package_update')(context, patched) 56 57 58 def resource_patch(context: Context, 59 data_dict: DataDict) -> ActionResult.ResourcePatch: 60 '''Patch a resource 61 62 :param id: the id of the resource 63 :type id: string 64 65 The difference between the update and patch methods is that the patch will 66 perform an update of the provided parameters, while leaving all other 67 parameters unchanged, whereas the update methods deletes all parameters 68 not explicitly provided in the data_dict 69 ''' 70 _check_access('resource_patch', context, data_dict) 71 72 show_context: Context = _fresh_context(context) 73 show_context.update({'for_update': True}) 74 75 resource_dict = _get_action('resource_show')( 76 show_context, 77 {'id': _get_or_bust(data_dict, 'id')}) 78 79 patched = dict(resource_dict) 80 patched.update(data_dict) 81 return _get_action('resource_update')(context, patched) 82 83 84 def group_patch(context: Context, 85 data_dict: DataDict) -> ActionResult.GroupPatch: 86 '''Patch a group 87 88 :param id: the id or name of the group 89 :type id: string 90 91 The difference between the update and patch methods is that the patch will 92 perform an update of the provided parameters, while leaving all other 93 parameters unchanged, whereas the update methods deletes all parameters 94 not explicitly provided in the data_dict 95 ''' 96 _check_access('group_patch', context, data_dict) 97 98 show_context: Context = _fresh_context(context) 99 100 group_dict = _get_action('group_show')( 101 show_context, 102 {'id': _get_or_bust(data_dict, 'id')}) 103 104 patched = dict(group_dict) 105 patched.pop('display_name', None) 106 patched.update(data_dict) 107 108 patch_context = context.copy() 109 patch_context['allow_partial_update'] = True 110 return _get_action('group_update')(patch_context, patched) 111 112 113 def organization_patch( 114 context: Context, 115 data_dict: DataDict) -> ActionResult.OrganizationPatch: 116 '''Patch an organization 117 118 :param id: the id or name of the organization 119 :type id: string 120 121 The difference between the update and patch methods is that the patch will 122 perform an update of the provided parameters, while leaving all other 123 parameters unchanged, whereas the update methods deletes all parameters 124 not explicitly provided in the data_dict 125 ''' 126 _check_access('organization_patch', context, data_dict) 127 128 show_context: Context = _fresh_context(context) 129 130 organization_dict = _get_action('organization_show')( 131 show_context, 132 {'id': _get_or_bust(data_dict, 'id')}) 133 134 patched = dict(organization_dict) 135 patched.pop('display_name', None) 136 patched.update(data_dict) 137 138 patch_context = context.copy() 139 patch_context['allow_partial_update'] = True 140 return _get_action('organization_update')(patch_context, patched) 141 142 143 def user_patch(context: Context, 144 data_dict: DataDict) -> ActionResult.UserPatch: 145 '''Patch a user 146 147 :param id: the id or name of the user 148 :type id: string 149 150 The difference between the update and patch methods is that the patch will 151 perform an update of the provided parameters, while leaving all other 152 parameters unchanged, whereas the update methods deletes all parameters 153 not explicitly provided in the data_dict 154 ''' 155 _check_access('user_patch', context, data_dict) 156 157 show_context: Context = _fresh_context(context) 158 159 user_dict = _get_action('user_show')( 160 show_context, 161 {'id': _get_or_bust(data_dict, 'id')}) 162 163 patched = dict(user_dict) 164 patched.pop('display_name', None) 165 patched.update(data_dict) 166 return _get_action('user_update')(context, patched) 167 [end of ckan/logic/action/patch.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/ckan/logic/action/patch.py b/ckan/logic/action/patch.py --- a/ckan/logic/action/patch.py +++ b/ckan/logic/action/patch.py @@ -24,12 +24,10 @@ parameters unchanged, whereas the update methods deletes all parameters not explicitly provided in the data_dict. - You are able to partially update and/or create resources with - package_patch. If you are updating existing resources be sure to provide - the resource id. Existing resources excluded from the package_patch - data_dict will be removed. Resources in the package data_dict without - an id will be treated as new resources and will be added. New resources - added with the patch method do not create the default views. + To partially update resources or other metadata not at the top level + of a package use + :py:func:`~ckan.logic.action.update.package_revise` instead to maintain + existing nested values. You must be authorized to edit the dataset and the groups that it belongs to.
{"golden_diff": "diff --git a/ckan/logic/action/patch.py b/ckan/logic/action/patch.py\n--- a/ckan/logic/action/patch.py\n+++ b/ckan/logic/action/patch.py\n@@ -24,12 +24,10 @@\n parameters unchanged, whereas the update methods deletes all parameters\n not explicitly provided in the data_dict.\n \n- You are able to partially update and/or create resources with\n- package_patch. If you are updating existing resources be sure to provide\n- the resource id. Existing resources excluded from the package_patch\n- data_dict will be removed. Resources in the package data_dict without\n- an id will be treated as new resources and will be added. New resources\n- added with the patch method do not create the default views.\n+ To partially update resources or other metadata not at the top level\n+ of a package use\n+ :py:func:`~ckan.logic.action.update.package_revise` instead to maintain\n+ existing nested values.\n \n You must be authorized to edit the dataset and the groups that it belongs\n to.\n", "issue": "package_patch breaks uploaded resources\n## CKAN version\r\n2.10.4\r\n\r\n## Describe the bug\r\nWhen using package_patch to update a dataset, intending **not** to update an uploaded resource the resource filename appears to be lost\r\n\r\n### Steps to reproduce\r\n1. Create a dataset containing at least one uploaded file resource\r\n2. Call package_patch to update the dataset, but for the resources only pass the resource ids like specified in the documentation \r\n(https://docs.ckan.org/en/2.10/api/#ckan.logic.action.patch.package_patch) like:\r\n```\r\npackage_patch(**{'id':'b6cc8622-3334-4cdb-8960-e2a3c4269a8d', \r\n 'description':'Updated description', \r\n 'resources':[{'id':'a97b8889-5efb-440c-b6ad-fa9a9e4d7659'},\r\n {'id':'bdbb977f-9faa-4715-88d3-c5e9042e69a4', \r\n 'description':'Updated resource description'}]})\r\n```\r\n3. Browse to the updated resource, the download link is missing the file name probably replaced by ___ and the mimetype is lost\r\n\r\n\r\n### Expected behavior\r\nThe resources should be unchanged when no changes are requested\r\n\r\n### Additional details\r\nI think part of this issue arises out of url being used to store the filename. It may be cleaner to just add filename as another field in uploaded resources which could also make renaming uploaded files easier.\r\n\n", "before_files": [{"content": "# encoding: utf-8\n\n'''API functions for partial updates of existing data in CKAN'''\n\nfrom ckan.logic import (\n get_action as _get_action,\n check_access as _check_access,\n get_or_bust as _get_or_bust,\n fresh_context as _fresh_context\n)\nfrom ckan.types import Context, DataDict\nfrom ckan.types.logic import ActionResult\n\n\ndef package_patch(\n context: Context, data_dict: DataDict) -> ActionResult.PackagePatch:\n '''Patch a dataset (package).\n\n :param id: the id or name of the dataset\n :type id: string\n\n The difference between the update and patch methods is that the patch will\n perform an update of the provided parameters, while leaving all other\n parameters unchanged, whereas the update methods deletes all parameters\n not explicitly provided in the data_dict.\n\n You are able to partially update and/or create resources with\n package_patch. If you are updating existing resources be sure to provide\n the resource id. Existing resources excluded from the package_patch\n data_dict will be removed. Resources in the package data_dict without\n an id will be treated as new resources and will be added. New resources\n added with the patch method do not create the default views.\n\n You must be authorized to edit the dataset and the groups that it belongs\n to.\n '''\n _check_access('package_patch', context, data_dict)\n\n show_context: Context = {\n 'model': context['model'],\n 'session': context['session'],\n 'user': context['user'],\n 'auth_user_obj': context['auth_user_obj'],\n 'ignore_auth': context.get('ignore_auth', False),\n 'for_update': True\n }\n\n package_dict = _get_action('package_show')(\n show_context,\n {'id': _get_or_bust(data_dict, 'id')})\n\n patched = dict(package_dict)\n patched.update(data_dict)\n patched['id'] = package_dict['id']\n return _get_action('package_update')(context, patched)\n\n\ndef resource_patch(context: Context,\n data_dict: DataDict) -> ActionResult.ResourcePatch:\n '''Patch a resource\n\n :param id: the id of the resource\n :type id: string\n\n The difference between the update and patch methods is that the patch will\n perform an update of the provided parameters, while leaving all other\n parameters unchanged, whereas the update methods deletes all parameters\n not explicitly provided in the data_dict\n '''\n _check_access('resource_patch', context, data_dict)\n\n show_context: Context = _fresh_context(context)\n show_context.update({'for_update': True})\n\n resource_dict = _get_action('resource_show')(\n show_context,\n {'id': _get_or_bust(data_dict, 'id')})\n\n patched = dict(resource_dict)\n patched.update(data_dict)\n return _get_action('resource_update')(context, patched)\n\n\ndef group_patch(context: Context,\n data_dict: DataDict) -> ActionResult.GroupPatch:\n '''Patch a group\n\n :param id: the id or name of the group\n :type id: string\n\n The difference between the update and patch methods is that the patch will\n perform an update of the provided parameters, while leaving all other\n parameters unchanged, whereas the update methods deletes all parameters\n not explicitly provided in the data_dict\n '''\n _check_access('group_patch', context, data_dict)\n\n show_context: Context = _fresh_context(context)\n\n group_dict = _get_action('group_show')(\n show_context,\n {'id': _get_or_bust(data_dict, 'id')})\n\n patched = dict(group_dict)\n patched.pop('display_name', None)\n patched.update(data_dict)\n\n patch_context = context.copy()\n patch_context['allow_partial_update'] = True\n return _get_action('group_update')(patch_context, patched)\n\n\ndef organization_patch(\n context: Context,\n data_dict: DataDict) -> ActionResult.OrganizationPatch:\n '''Patch an organization\n\n :param id: the id or name of the organization\n :type id: string\n\n The difference between the update and patch methods is that the patch will\n perform an update of the provided parameters, while leaving all other\n parameters unchanged, whereas the update methods deletes all parameters\n not explicitly provided in the data_dict\n '''\n _check_access('organization_patch', context, data_dict)\n\n show_context: Context = _fresh_context(context)\n\n organization_dict = _get_action('organization_show')(\n show_context,\n {'id': _get_or_bust(data_dict, 'id')})\n\n patched = dict(organization_dict)\n patched.pop('display_name', None)\n patched.update(data_dict)\n\n patch_context = context.copy()\n patch_context['allow_partial_update'] = True\n return _get_action('organization_update')(patch_context, patched)\n\n\ndef user_patch(context: Context,\n data_dict: DataDict) -> ActionResult.UserPatch:\n '''Patch a user\n\n :param id: the id or name of the user\n :type id: string\n\n The difference between the update and patch methods is that the patch will\n perform an update of the provided parameters, while leaving all other\n parameters unchanged, whereas the update methods deletes all parameters\n not explicitly provided in the data_dict\n '''\n _check_access('user_patch', context, data_dict)\n\n show_context: Context = _fresh_context(context)\n\n user_dict = _get_action('user_show')(\n show_context,\n {'id': _get_or_bust(data_dict, 'id')})\n\n patched = dict(user_dict)\n patched.pop('display_name', None)\n patched.update(data_dict)\n return _get_action('user_update')(context, patched)\n", "path": "ckan/logic/action/patch.py"}]}
2,545
244
gh_patches_debug_48555
rasdani/github-patches
git_diff
Flexget__Flexget-3648
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Error on scheduler: Only timezones from the pytz library are supported ### Steps to reproduce: - Step 1: `flexget -L verbose daemon start` #### Config: ```yaml schedules: - tasks: ['some-task'] interval: hours: 1 ``` #### Backtrace: ``` File "/home/pi/.local/lib/python3.9/site-packages/flexget/__init__.py", line 44, in main manager.start() File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 383, in start self.handle_cli() File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 409, in handle_cli self.daemon_command(command_options) File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 506, in daemon_command run_daemon() File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 487, in run_daemon fire_event('manager.daemon.started', self) File "/home/pi/.local/lib/python3.9/site-packages/flexget/event.py", line 109, in fire_event result = event(*args, **kwargs) File "/home/pi/.local/lib/python3.9/site-packages/flexget/event.py", line 20, in __call__ return self.func(*args, **kwargs) File "/home/pi/.local/lib/python3.9/site-packages/flexget/components/scheduler/scheduler.py", line 126, in setup_scheduler scheduler = BackgroundScheduler( File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 87, in __init__ self.configure(gconfig, **options) File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 126, in configure self._configure(config) File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/background.py", line 29, in _configure super(BackgroundScheduler, self)._configure(config) File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 697, in _configure self.timezone = astimezone(config.pop('timezone', None)) or get_localzone() File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/util.py", line 93, in astimezone raise TypeError('Only timezones from the pytz library are supported') TypeError: Only timezones from the pytz library are supported ``` ### Additional information: - FlexGet version: 3.5.2 - Python version: 3.9.2 - Installation method: - Using daemon (yes/no): yes It seems to have started after https://github.com/Flexget/Flexget/pull/3453 that change the timezone argument to a non-pytz compatible object. Error on scheduler: Only timezones from the pytz library are supported ### Steps to reproduce: - Step 1: `flexget -L verbose daemon start` #### Config: ```yaml schedules: - tasks: ['some-task'] interval: hours: 1 ``` #### Backtrace: ``` File "/home/pi/.local/lib/python3.9/site-packages/flexget/__init__.py", line 44, in main manager.start() File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 383, in start self.handle_cli() File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 409, in handle_cli self.daemon_command(command_options) File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 506, in daemon_command run_daemon() File "/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py", line 487, in run_daemon fire_event('manager.daemon.started', self) File "/home/pi/.local/lib/python3.9/site-packages/flexget/event.py", line 109, in fire_event result = event(*args, **kwargs) File "/home/pi/.local/lib/python3.9/site-packages/flexget/event.py", line 20, in __call__ return self.func(*args, **kwargs) File "/home/pi/.local/lib/python3.9/site-packages/flexget/components/scheduler/scheduler.py", line 126, in setup_scheduler scheduler = BackgroundScheduler( File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 87, in __init__ self.configure(gconfig, **options) File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 126, in configure self._configure(config) File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/background.py", line 29, in _configure super(BackgroundScheduler, self)._configure(config) File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py", line 697, in _configure self.timezone = astimezone(config.pop('timezone', None)) or get_localzone() File "/home/pi/.local/lib/python3.9/site-packages/apscheduler/util.py", line 93, in astimezone raise TypeError('Only timezones from the pytz library are supported') TypeError: Only timezones from the pytz library are supported ``` ### Additional information: - FlexGet version: 3.5.2 - Python version: 3.9.2 - Installation method: - Using daemon (yes/no): yes It seems to have started after https://github.com/Flexget/Flexget/pull/3453 that change the timezone argument to a non-pytz compatible object. </issue> <code> [start of flexget/components/scheduler/scheduler.py] 1 import datetime 2 import hashlib 3 import logging 4 import os 5 import struct 6 7 from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore 8 from apscheduler.schedulers.background import BackgroundScheduler 9 from apscheduler.triggers.cron import CronTrigger 10 from loguru import logger 11 12 from flexget.config_schema import format_checker, register_config_key, register_schema 13 from flexget.event import event 14 from flexget.manager import manager 15 from flexget.utils import json 16 17 logger = logger.bind(name='scheduler') 18 19 20 # Add a format checker for more detailed errors on cron type schedules 21 @format_checker.checks('cron_schedule', raises=ValueError) 22 def is_cron_schedule(instance): 23 if not isinstance(instance, dict): 24 return True 25 try: 26 return CronTrigger(**instance) 27 except TypeError: 28 # A more specific error message about which key will also be shown by properties schema keyword 29 raise ValueError('Invalid key for schedule.') 30 31 32 DEFAULT_SCHEDULES = [{'tasks': ['*'], 'interval': {'hours': 1}}] 33 34 UNITS = ['minutes', 'hours', 'days', 'weeks'] 35 interval_schema = { 36 'type': 'object', 37 'title': 'Simple Interval', 38 'properties': { 39 'minutes': {'type': 'number'}, 40 'hours': {'type': 'number'}, 41 'days': {'type': 'number'}, 42 'weeks': {'type': 'number'}, 43 'jitter': {'type': 'integer'}, 44 }, 45 'anyOf': [{'required': [unit]} for unit in UNITS], 46 'error_anyOf': 'Interval must be specified as one or more of %s' % ', '.join(UNITS), 47 'additionalProperties': False, 48 } 49 50 cron_schema = { 51 'type': 'object', 52 'title': 'Advanced Cron Interval', 53 'properties': { 54 'year': {'type': ['integer', 'string']}, 55 'month': {'type': ['integer', 'string']}, 56 'day': {'type': ['integer', 'string']}, 57 'week': {'type': ['integer', 'string']}, 58 'day_of_week': {'type': ['integer', 'string']}, 59 'hour': {'type': ['integer', 'string']}, 60 'minute': {'type': ['integer', 'string']}, 61 'jitter': {'type': 'integer'}, 62 }, 63 'additionalProperties': False, 64 } 65 66 schedule_schema = { 67 'type': 'object', 68 'title': 'Schedule', 69 'description': 'A schedule which runs specified tasks periodically.', 70 'properties': { 71 'tasks': {'type': ['array', 'string'], 'items': {'type': 'string'}}, 72 'interval': interval_schema, 73 'schedule': cron_schema, 74 }, 75 'required': ['tasks'], 76 'minProperties': 2, 77 'maxProperties': 2, 78 'error_minProperties': 'Either `cron` or `interval` must be defined.', 79 'error_maxProperties': 'Either `cron` or `interval` must be defined.', 80 'additionalProperties': False, 81 } 82 83 main_schema = { 84 'oneOf': [ 85 {'type': 'array', 'title': 'Enable', 'items': schedule_schema}, 86 {'type': 'boolean', 'title': 'Disable', 'description': 'Disable task schedules'}, 87 ] 88 } 89 90 scheduler = None 91 scheduler_job_map = {} 92 93 94 def job_id(conf): 95 """Create a unique id for a schedule item in config.""" 96 return hashlib.sha1(json.dumps(conf, sort_keys=True).encode('utf-8')).hexdigest() 97 98 99 def run_job(tasks): 100 """Add the execution to the queue and waits until it is finished""" 101 logger.debug('executing tasks: {}', tasks) 102 finished_events = manager.execute( 103 options={'tasks': tasks, 'cron': True, 'allow_manual': False}, priority=5 104 ) 105 for _, task_name, event_ in finished_events: 106 logger.debug('task finished executing: {}', task_name) 107 event_.wait() 108 logger.debug('all tasks in schedule finished executing') 109 110 111 @event('manager.daemon.started') 112 def setup_scheduler(manager): 113 """Configure and start apscheduler""" 114 global scheduler 115 if logger.level(manager.options.loglevel).no > logger.level('DEBUG').no: 116 logging.getLogger('apscheduler').setLevel(logging.WARNING) 117 # Since APScheduler runs in a separate thread, slower devices can sometimes get a DB lock, so use a separate db 118 # for the jobs to avoid this 119 db_filename = os.path.join(manager.config_base, 'db-%s-jobs.sqlite' % manager.config_name) 120 # in case running on windows, needs double \\ 121 db_filename = db_filename.replace('\\', '\\\\') 122 database_uri = 'sqlite:///%s' % db_filename 123 jobstores = {'default': SQLAlchemyJobStore(url=database_uri)} 124 # If job was meant to run within last day while daemon was shutdown, run it once when continuing 125 job_defaults = {'coalesce': True, 'misfire_grace_time': 60 * 60 * 24} 126 scheduler = BackgroundScheduler( 127 jobstores=jobstores, 128 job_defaults=job_defaults, 129 timezone=datetime.datetime.now().astimezone().tzinfo, 130 ) 131 setup_jobs(manager) 132 133 134 @event('manager.config_updated') 135 def setup_jobs(manager): 136 """Set up the jobs for apscheduler to run.""" 137 if not manager.is_daemon: 138 return 139 140 global scheduler_job_map 141 scheduler_job_map = {} 142 143 if 'schedules' not in manager.config: 144 logger.info( 145 'No schedules defined in config. Defaulting to run all tasks on a 1 hour interval.' 146 ) 147 config = manager.config.get('schedules', True) 148 if config is True: 149 config = DEFAULT_SCHEDULES 150 elif not config: # Schedules are disabled with `schedules: no` 151 if scheduler.running: 152 logger.info('Shutting down scheduler') 153 scheduler.shutdown() 154 return 155 if not scheduler.running: 156 logger.info('Starting scheduler') 157 scheduler.start(paused=True) 158 existing_job_ids = [job.id for job in scheduler.get_jobs()] 159 configured_job_ids = [] 160 for job_config in config: 161 jid = job_id(job_config) 162 configured_job_ids.append(jid) 163 scheduler_job_map[id(job_config)] = jid 164 if jid in existing_job_ids: 165 continue 166 if 'interval' in job_config: 167 trigger, trigger_args = 'interval', job_config['interval'] 168 else: 169 trigger, trigger_args = 'cron', job_config['schedule'] 170 tasks = job_config['tasks'] 171 if not isinstance(tasks, list): 172 tasks = [tasks] 173 name = ','.join(tasks) 174 scheduler.add_job( 175 run_job, args=(tasks,), id=jid, name=name, trigger=trigger, **trigger_args 176 ) 177 # Remove jobs no longer in config 178 for jid in existing_job_ids: 179 if jid not in configured_job_ids: 180 scheduler.remove_job(jid) 181 scheduler.resume() 182 183 184 @event('manager.shutdown_requested') 185 def shutdown_requested(manager): 186 if scheduler and scheduler.running: 187 scheduler.shutdown(wait=True) 188 189 190 @event('manager.shutdown') 191 def stop_scheduler(manager): 192 if scheduler and scheduler.running: 193 scheduler.shutdown(wait=False) 194 195 196 @event('config.register') 197 def register_config(): 198 register_config_key('schedules', main_schema) 199 register_schema('/schema/config/schedule', schedule_schema) 200 [end of flexget/components/scheduler/scheduler.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/flexget/components/scheduler/scheduler.py b/flexget/components/scheduler/scheduler.py --- a/flexget/components/scheduler/scheduler.py +++ b/flexget/components/scheduler/scheduler.py @@ -1,4 +1,3 @@ -import datetime import hashlib import logging import os @@ -126,7 +125,6 @@ scheduler = BackgroundScheduler( jobstores=jobstores, job_defaults=job_defaults, - timezone=datetime.datetime.now().astimezone().tzinfo, ) setup_jobs(manager)
{"golden_diff": "diff --git a/flexget/components/scheduler/scheduler.py b/flexget/components/scheduler/scheduler.py\n--- a/flexget/components/scheduler/scheduler.py\n+++ b/flexget/components/scheduler/scheduler.py\n@@ -1,4 +1,3 @@\n-import datetime\n import hashlib\n import logging\n import os\n@@ -126,7 +125,6 @@\n scheduler = BackgroundScheduler(\n jobstores=jobstores,\n job_defaults=job_defaults,\n- timezone=datetime.datetime.now().astimezone().tzinfo,\n )\n setup_jobs(manager)\n", "issue": "Error on scheduler: Only timezones from the pytz library are supported\n### Steps to reproduce:\r\n- Step 1: `flexget -L verbose daemon start`\r\n\r\n#### Config:\r\n\r\n```yaml\r\nschedules:\r\n - tasks: ['some-task']\r\n interval:\r\n hours: 1\r\n```\r\n \r\n#### Backtrace:\r\n\r\n```\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/__init__.py\", line 44, in main\r\n manager.start()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 383, in start\r\n self.handle_cli()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 409, in handle_cli\r\n self.daemon_command(command_options)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 506, in daemon_command\r\n run_daemon()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 487, in run_daemon\r\n fire_event('manager.daemon.started', self)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/event.py\", line 109, in fire_event\r\n result = event(*args, **kwargs)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/event.py\", line 20, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/components/scheduler/scheduler.py\", line 126, in setup_scheduler\r\n scheduler = BackgroundScheduler(\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py\", line 87, in __init__\r\n self.configure(gconfig, **options)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py\", line 126, in configure\r\n self._configure(config)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/background.py\", line 29, in _configure\r\n super(BackgroundScheduler, self)._configure(config)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py\", line 697, in _configure\r\n self.timezone = astimezone(config.pop('timezone', None)) or get_localzone()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/util.py\", line 93, in astimezone\r\n raise TypeError('Only timezones from the pytz library are supported')\r\nTypeError: Only timezones from the pytz library are supported\r\n```\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 3.5.2\r\n- Python version: 3.9.2\r\n- Installation method:\r\n- Using daemon (yes/no): yes\r\n\r\nIt seems to have started after https://github.com/Flexget/Flexget/pull/3453 that change the timezone argument to a non-pytz compatible object.\nError on scheduler: Only timezones from the pytz library are supported\n### Steps to reproduce:\r\n- Step 1: `flexget -L verbose daemon start`\r\n\r\n#### Config:\r\n\r\n```yaml\r\nschedules:\r\n - tasks: ['some-task']\r\n interval:\r\n hours: 1\r\n```\r\n \r\n#### Backtrace:\r\n\r\n```\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/__init__.py\", line 44, in main\r\n manager.start()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 383, in start\r\n self.handle_cli()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 409, in handle_cli\r\n self.daemon_command(command_options)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 506, in daemon_command\r\n run_daemon()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/manager.py\", line 487, in run_daemon\r\n fire_event('manager.daemon.started', self)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/event.py\", line 109, in fire_event\r\n result = event(*args, **kwargs)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/event.py\", line 20, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/flexget/components/scheduler/scheduler.py\", line 126, in setup_scheduler\r\n scheduler = BackgroundScheduler(\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py\", line 87, in __init__\r\n self.configure(gconfig, **options)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py\", line 126, in configure\r\n self._configure(config)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/background.py\", line 29, in _configure\r\n super(BackgroundScheduler, self)._configure(config)\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/schedulers/base.py\", line 697, in _configure\r\n self.timezone = astimezone(config.pop('timezone', None)) or get_localzone()\r\n File \"/home/pi/.local/lib/python3.9/site-packages/apscheduler/util.py\", line 93, in astimezone\r\n raise TypeError('Only timezones from the pytz library are supported')\r\nTypeError: Only timezones from the pytz library are supported\r\n```\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 3.5.2\r\n- Python version: 3.9.2\r\n- Installation method:\r\n- Using daemon (yes/no): yes\r\n\r\nIt seems to have started after https://github.com/Flexget/Flexget/pull/3453 that change the timezone argument to a non-pytz compatible object.\n", "before_files": [{"content": "import datetime\nimport hashlib\nimport logging\nimport os\nimport struct\n\nfrom apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore\nfrom apscheduler.schedulers.background import BackgroundScheduler\nfrom apscheduler.triggers.cron import CronTrigger\nfrom loguru import logger\n\nfrom flexget.config_schema import format_checker, register_config_key, register_schema\nfrom flexget.event import event\nfrom flexget.manager import manager\nfrom flexget.utils import json\n\nlogger = logger.bind(name='scheduler')\n\n\n# Add a format checker for more detailed errors on cron type schedules\n@format_checker.checks('cron_schedule', raises=ValueError)\ndef is_cron_schedule(instance):\n if not isinstance(instance, dict):\n return True\n try:\n return CronTrigger(**instance)\n except TypeError:\n # A more specific error message about which key will also be shown by properties schema keyword\n raise ValueError('Invalid key for schedule.')\n\n\nDEFAULT_SCHEDULES = [{'tasks': ['*'], 'interval': {'hours': 1}}]\n\nUNITS = ['minutes', 'hours', 'days', 'weeks']\ninterval_schema = {\n 'type': 'object',\n 'title': 'Simple Interval',\n 'properties': {\n 'minutes': {'type': 'number'},\n 'hours': {'type': 'number'},\n 'days': {'type': 'number'},\n 'weeks': {'type': 'number'},\n 'jitter': {'type': 'integer'},\n },\n 'anyOf': [{'required': [unit]} for unit in UNITS],\n 'error_anyOf': 'Interval must be specified as one or more of %s' % ', '.join(UNITS),\n 'additionalProperties': False,\n}\n\ncron_schema = {\n 'type': 'object',\n 'title': 'Advanced Cron Interval',\n 'properties': {\n 'year': {'type': ['integer', 'string']},\n 'month': {'type': ['integer', 'string']},\n 'day': {'type': ['integer', 'string']},\n 'week': {'type': ['integer', 'string']},\n 'day_of_week': {'type': ['integer', 'string']},\n 'hour': {'type': ['integer', 'string']},\n 'minute': {'type': ['integer', 'string']},\n 'jitter': {'type': 'integer'},\n },\n 'additionalProperties': False,\n}\n\nschedule_schema = {\n 'type': 'object',\n 'title': 'Schedule',\n 'description': 'A schedule which runs specified tasks periodically.',\n 'properties': {\n 'tasks': {'type': ['array', 'string'], 'items': {'type': 'string'}},\n 'interval': interval_schema,\n 'schedule': cron_schema,\n },\n 'required': ['tasks'],\n 'minProperties': 2,\n 'maxProperties': 2,\n 'error_minProperties': 'Either `cron` or `interval` must be defined.',\n 'error_maxProperties': 'Either `cron` or `interval` must be defined.',\n 'additionalProperties': False,\n}\n\nmain_schema = {\n 'oneOf': [\n {'type': 'array', 'title': 'Enable', 'items': schedule_schema},\n {'type': 'boolean', 'title': 'Disable', 'description': 'Disable task schedules'},\n ]\n}\n\nscheduler = None\nscheduler_job_map = {}\n\n\ndef job_id(conf):\n \"\"\"Create a unique id for a schedule item in config.\"\"\"\n return hashlib.sha1(json.dumps(conf, sort_keys=True).encode('utf-8')).hexdigest()\n\n\ndef run_job(tasks):\n \"\"\"Add the execution to the queue and waits until it is finished\"\"\"\n logger.debug('executing tasks: {}', tasks)\n finished_events = manager.execute(\n options={'tasks': tasks, 'cron': True, 'allow_manual': False}, priority=5\n )\n for _, task_name, event_ in finished_events:\n logger.debug('task finished executing: {}', task_name)\n event_.wait()\n logger.debug('all tasks in schedule finished executing')\n\n\n@event('manager.daemon.started')\ndef setup_scheduler(manager):\n \"\"\"Configure and start apscheduler\"\"\"\n global scheduler\n if logger.level(manager.options.loglevel).no > logger.level('DEBUG').no:\n logging.getLogger('apscheduler').setLevel(logging.WARNING)\n # Since APScheduler runs in a separate thread, slower devices can sometimes get a DB lock, so use a separate db\n # for the jobs to avoid this\n db_filename = os.path.join(manager.config_base, 'db-%s-jobs.sqlite' % manager.config_name)\n # in case running on windows, needs double \\\\\n db_filename = db_filename.replace('\\\\', '\\\\\\\\')\n database_uri = 'sqlite:///%s' % db_filename\n jobstores = {'default': SQLAlchemyJobStore(url=database_uri)}\n # If job was meant to run within last day while daemon was shutdown, run it once when continuing\n job_defaults = {'coalesce': True, 'misfire_grace_time': 60 * 60 * 24}\n scheduler = BackgroundScheduler(\n jobstores=jobstores,\n job_defaults=job_defaults,\n timezone=datetime.datetime.now().astimezone().tzinfo,\n )\n setup_jobs(manager)\n\n\n@event('manager.config_updated')\ndef setup_jobs(manager):\n \"\"\"Set up the jobs for apscheduler to run.\"\"\"\n if not manager.is_daemon:\n return\n\n global scheduler_job_map\n scheduler_job_map = {}\n\n if 'schedules' not in manager.config:\n logger.info(\n 'No schedules defined in config. Defaulting to run all tasks on a 1 hour interval.'\n )\n config = manager.config.get('schedules', True)\n if config is True:\n config = DEFAULT_SCHEDULES\n elif not config: # Schedules are disabled with `schedules: no`\n if scheduler.running:\n logger.info('Shutting down scheduler')\n scheduler.shutdown()\n return\n if not scheduler.running:\n logger.info('Starting scheduler')\n scheduler.start(paused=True)\n existing_job_ids = [job.id for job in scheduler.get_jobs()]\n configured_job_ids = []\n for job_config in config:\n jid = job_id(job_config)\n configured_job_ids.append(jid)\n scheduler_job_map[id(job_config)] = jid\n if jid in existing_job_ids:\n continue\n if 'interval' in job_config:\n trigger, trigger_args = 'interval', job_config['interval']\n else:\n trigger, trigger_args = 'cron', job_config['schedule']\n tasks = job_config['tasks']\n if not isinstance(tasks, list):\n tasks = [tasks]\n name = ','.join(tasks)\n scheduler.add_job(\n run_job, args=(tasks,), id=jid, name=name, trigger=trigger, **trigger_args\n )\n # Remove jobs no longer in config\n for jid in existing_job_ids:\n if jid not in configured_job_ids:\n scheduler.remove_job(jid)\n scheduler.resume()\n\n\n@event('manager.shutdown_requested')\ndef shutdown_requested(manager):\n if scheduler and scheduler.running:\n scheduler.shutdown(wait=True)\n\n\n@event('manager.shutdown')\ndef stop_scheduler(manager):\n if scheduler and scheduler.running:\n scheduler.shutdown(wait=False)\n\n\n@event('config.register')\ndef register_config():\n register_config_key('schedules', main_schema)\n register_schema('/schema/config/schedule', schedule_schema)\n", "path": "flexget/components/scheduler/scheduler.py"}]}
3,980
125
gh_patches_debug_13282
rasdani/github-patches
git_diff
great-expectations__great_expectations-1870
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Use cleaner solution for non-truncating division in python 2 Prefer `from __future__ import division` to `1.*x/y` </issue> <code> [start of docs/conf.py] 1 # -*- coding: utf-8 -*- 2 # 3 # great_expectations documentation build configuration file, created by 4 # sphinx-quickstart on Thu Jun 8 23:00:19 2017. 5 # 6 # This file is execfile()d with the current directory set to its 7 # containing dir. 8 # 9 # Note that not all possible configuration values are present in this 10 # autogenerated file. 11 # 12 # All configuration values have a default; values that are commented out 13 # serve to show the default. 14 15 # If extensions (or modules to document with autodoc) are in another directory, 16 # add these directories to sys.path here. If the directory is relative to the 17 # documentation root, use os.path.abspath to make it absolute, like shown here. 18 # 19 20 import os 21 import re 22 import sys 23 import uuid 24 from collections import namedtuple 25 26 from sphinx.ext.autodoc import between 27 28 sys.path.insert(0, os.path.abspath("../")) 29 30 31 # -- General configuration ------------------------------------------------ 32 33 # If your documentation needs a minimal Sphinx version, state it here. 34 # 35 # needs_sphinx = '1.0' 36 37 # Add any Sphinx extension module names here, as strings. They can be 38 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 39 # ones. 40 extensions = [ 41 # 'sphinx_rtd_theme', 42 # "sphinx.ext.autodoc", 43 "sphinx.ext.todo", 44 "sphinx.ext.coverage", 45 # 'sphinx.ext.mathjax' 46 "sphinx.ext.napoleon", 47 "sphinxcontrib.contentui", 48 "sphinx_gitstamp", 49 "sphinx.ext.autosectionlabel", 50 "sphinxcontrib.discourse", 51 "autoapi.extension", 52 ] 53 54 autoapi_type = "python" 55 autoapi_dirs = ["../great_expectations"] 56 autoapi_add_toctree_entry = False 57 58 # Add any paths that contain templates here, relative to this directory. 59 templates_path = ["_templates"] 60 61 # discourse url connect 62 discourse_url = "https://discuss.greatexpectations.io/" 63 64 # The suffix(es) of source filenames. 65 # You can specify multiple suffix as a list of string: 66 # 67 # source_suffix = ['.rst', '.md'] 68 source_suffix = ".rst" 69 70 # The master toctree document. 71 index_doc = "index" 72 73 # General information about the project. 74 project = "great_expectations" 75 copyright = "2020, The Great Expectations Team. " 76 author = "The Great Expectations Team" 77 gitstamp_fmt = "%d %b %Y" 78 79 # The version info for the project you're documenting, acts as replacement for 80 # |version| and |release|, also used in various other places throughout the 81 # built documents. 82 # 83 # The short X.Y version. 84 version = "" 85 # The full version, including alpha/beta/rc tags. 86 release = "" 87 88 # The language for content autogenerated by Sphinx. Refer to documentation 89 # for a list of supported languages. 90 # 91 # This is also used if you do content translation via gettext catalogs. 92 # Usually you set "language" from the command line for these cases. 93 language = None 94 95 # List of patterns, relative to source directory, that match files and 96 # directories to ignore when looking for source files. 97 # This patterns also effect to html_static_path and html_extra_path 98 exclude_patterns = [] 99 100 # The name of the Pygments (syntax highlighting) style to use. 101 pygments_style = "paraiso-dark" 102 103 # If true, `todo` and `todoList` produce output, else they produce nothing. 104 todo_include_todos = True 105 106 # -- Options for HTML output ---------------------------------------------- 107 108 # The theme to use for HTML and HTML Help pages. See the documentation for 109 # a list of builtin themes. 110 # 111 # html_theme = 'alabaster' 112 html_theme = "sphinx_rtd_theme" 113 114 # Theme options are theme-specific and customize the look and feel of a theme 115 # further. For a list of options available for each theme, see the 116 # documentation. 117 # 118 html_theme_options = { 119 "logo_only": True, 120 "collapse_navigation": False, 121 "navigation_depth": 4, 122 } 123 124 html_static_path = [ 125 "_static", 126 "_static/style.css", 127 "_static/hk-grotesk-pro/HKGroteskPro-Bold.woff2", 128 "_static/hk-grotesk-pro/HKGroteskPro-Regular.woff2", 129 "_static/hk-grotesk-pro/HKGroteskPro-SemiBold.woff2", 130 "_static/hk-grotesk-pro/HKGroteskPro-Medium.woff2", 131 "_static/header-logo.png", 132 "_static/discuss-logo.png", 133 ] 134 html_css_files = ["style.css"] 135 136 # html_logo = '../pip-logo.png' 137 138 # Add any paths that contain custom static files (such as style sheets) here, 139 # relative to this directory. They are copied after the builtin static files, 140 # so a file named "default.css" will overwrite the builtin "default.css". 141 142 143 # -- Options for Napoleon Extension -------------------------------------------- 144 145 # Parse Google style docstrings. 146 # See http://google.github.io/styleguide/pyguide.html 147 napoleon_google_docstring = True 148 149 # Parse NumPy style docstrings. 150 # See https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt 151 napoleon_numpy_docstring = True 152 153 # Should special members (like __membername__) and private members 154 # (like _membername) members be included in the documentation if they 155 # have docstrings. 156 napoleon_include_private_with_doc = False 157 napoleon_include_special_with_doc = True 158 159 # If True, docstring sections will use the ".. admonition::" directive. 160 # If False, docstring sections will use the ".. rubric::" directive. 161 # One may look better than the other depending on what HTML theme is used. 162 napoleon_use_admonition_for_examples = False 163 napoleon_use_admonition_for_notes = False 164 napoleon_use_admonition_for_references = False 165 166 # If True, use Sphinx :ivar: directive for instance variables: 167 # :ivar attr1: Description of attr1. 168 # :type attr1: type 169 # If False, use Sphinx .. attribute:: directive for instance variables: 170 # .. attribute:: attr1 171 # 172 # *type* 173 # 174 # Description of attr1. 175 napoleon_use_ivar = False 176 177 # If True, use Sphinx :param: directive for function parameters: 178 # :param arg1: Description of arg1. 179 # :type arg1: type 180 # If False, output function parameters using the :parameters: field: 181 # :parameters: **arg1** (*type*) -- Description of arg1. 182 napoleon_use_param = True 183 184 # If True, use Sphinx :rtype: directive for the return type: 185 # :returns: Description of return value. 186 # :rtype: type 187 # If False, output the return type inline with the return description: 188 # :returns: *type* -- Description of return value. 189 napoleon_use_rtype = True 190 191 192 # -- Options for HTMLHelp output ------------------------------------------ 193 194 # Output file base name for HTML help builder. 195 htmlhelp_basename = "great_expectationsdoc" 196 197 198 # -- Options for LaTeX output --------------------------------------------- 199 200 latex_elements = { 201 # The paper size ('letterpaper' or 'a4paper'). 202 # 203 # 'papersize': 'letterpaper', 204 # The font size ('10pt', '11pt' or '12pt'). 205 # 206 # 'pointsize': '10pt', 207 # Additional stuff for the LaTeX preamble. 208 # 209 # 'preamble': '', 210 # Latex figure (float) alignment 211 # 212 # 'figure_align': 'htbp', 213 } 214 215 # Grouping the document tree into LaTeX files. List of tuples 216 # (source start file, target name, title, 217 # author, documentclass [howto, manual, or own class]). 218 latex_documents = [ 219 ( 220 index_doc, 221 "great_expectations.tex", 222 "great_expectations Documentation", 223 "The Great Expectations Team", 224 "manual", 225 ), 226 ] 227 228 229 # -- Options for manual page output --------------------------------------- 230 231 # One entry per manual page. List of tuples 232 # (source start file, name, description, authors, manual section). 233 man_pages = [ 234 (index_doc, "great_expectations", "great_expectations Documentation", [author], 1) 235 ] 236 237 238 # -- Options for Texinfo output ------------------------------------------- 239 240 # Grouping the document tree into Texinfo files. List of tuples 241 # (source start file, target name, title, author, 242 # dir menu entry, description, category) 243 texinfo_documents = [ 244 ( 245 index_doc, 246 "great_expectations", 247 "great_expectations Documentation", 248 author, 249 "great_expectations", 250 "Always know what to expect from your data.", 251 "Miscellaneous", 252 ), 253 ] 254 255 256 autodoc_member_order = "bysource" 257 258 259 def process_docstring(app, what, name, obj, options, lines): 260 261 from docs.feature_annotation_parser import parse_feature_annotation 262 263 docstring = "\n".join(lines) 264 annotation_list = parse_feature_annotation(docstring) 265 266 process_between = between(marker="--ge-feature-maturity-info--", exclude=True) 267 process_between(app, what, name, obj, options, lines) 268 269 if not annotation_list: 270 return 271 272 feature_annotation_admonition = """ 273 .. admonition:: `Feature Maturity <https://docs.greatexpectations.io/en/latest/features>`_ 274 275 """ 276 277 feature_annotation_template = """ 278 | |icon-{icon_hash}| **{title}** - `How-to Guide <{how_to_guide_url}>`_ 279 | {description} 280 | **Maturity**: {maturity} 281 | **Details**: 282 | **API Stability**: {maturity_details[api_stability]} 283 | **Implementation Completeness**: {maturity_details[implementation_completeness]} 284 | **Unit Test Coverage**: {maturity_details[unit_test_coverage]} 285 | **Integration Infrastructure/Test Coverage**: {maturity_details[integration_infrastructure_test_coverage]} 286 | **Documentation Completeness**: {maturity_details[documentation_completeness]} 287 | **Bug Risk**: {maturity_details[bug_risk]}\ 288 """ 289 expectation_completeness_template = """ 290 | **Expectation Completeness**: {maturity_details[expectation_completeness]}\ 291 """ 292 icon_template = """ 293 .. |icon-{icon_hash}| image:: {icon} 294 :height: 15px 295 """ 296 297 for annotation in annotation_list: 298 icon_hash = uuid.uuid1().hex 299 annotation["icon_hash"] = icon_hash 300 description = ( 301 annotation.get("description") 302 or annotation.get("short_description") 303 or f"*TODO: {annotation.get('title')} Description*" 304 ) 305 how_to_guide_url = ( 306 annotation.get("how_to_guide_url") 307 or "https://docs.greatexpectations.io/en/latest/how_to_guides.html" 308 ) 309 annotation["how_to_guide_url"] = how_to_guide_url 310 annotation["description"] = description 311 312 if annotation["maturity_details"].get("expectation_completeness"): 313 feature_annotation_admonition += ( 314 feature_annotation_template 315 + expectation_completeness_template 316 + icon_template 317 ).format(**annotation) 318 else: 319 feature_annotation_admonition += ( 320 feature_annotation_template + icon_template 321 ).format(**annotation) 322 323 lines += feature_annotation_admonition.splitlines() 324 325 326 def setup(app): 327 app.connect("autodoc-process-docstring", process_docstring) 328 [end of docs/conf.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/docs/conf.py b/docs/conf.py --- a/docs/conf.py +++ b/docs/conf.py @@ -46,7 +46,7 @@ "sphinx.ext.napoleon", "sphinxcontrib.contentui", "sphinx_gitstamp", - "sphinx.ext.autosectionlabel", + # "sphinx.ext.autosectionlabel", "sphinxcontrib.discourse", "autoapi.extension", ] @@ -54,6 +54,7 @@ autoapi_type = "python" autoapi_dirs = ["../great_expectations"] autoapi_add_toctree_entry = False +# autoapi_keep_files = True # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"]
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -46,7 +46,7 @@\n \"sphinx.ext.napoleon\",\n \"sphinxcontrib.contentui\",\n \"sphinx_gitstamp\",\n- \"sphinx.ext.autosectionlabel\",\n+ # \"sphinx.ext.autosectionlabel\",\n \"sphinxcontrib.discourse\",\n \"autoapi.extension\",\n ]\n@@ -54,6 +54,7 @@\n autoapi_type = \"python\"\n autoapi_dirs = [\"../great_expectations\"]\n autoapi_add_toctree_entry = False\n+# autoapi_keep_files = True\n \n # Add any paths that contain templates here, relative to this directory.\n templates_path = [\"_templates\"]\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# great_expectations documentation build configuration file, created by\n# sphinx-quickstart on Thu Jun 8 23:00:19 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport re\nimport sys\nimport uuid\nfrom collections import namedtuple\n\nfrom sphinx.ext.autodoc import between\n\nsys.path.insert(0, os.path.abspath(\"../\"))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n # 'sphinx_rtd_theme',\n # \"sphinx.ext.autodoc\",\n \"sphinx.ext.todo\",\n \"sphinx.ext.coverage\",\n # 'sphinx.ext.mathjax'\n \"sphinx.ext.napoleon\",\n \"sphinxcontrib.contentui\",\n \"sphinx_gitstamp\",\n \"sphinx.ext.autosectionlabel\",\n \"sphinxcontrib.discourse\",\n \"autoapi.extension\",\n]\n\nautoapi_type = \"python\"\nautoapi_dirs = [\"../great_expectations\"]\nautoapi_add_toctree_entry = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# discourse url connect\ndiscourse_url = \"https://discuss.greatexpectations.io/\"\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = \".rst\"\n\n# The master toctree document.\nindex_doc = \"index\"\n\n# General information about the project.\nproject = \"great_expectations\"\ncopyright = \"2020, The Great Expectations Team. \"\nauthor = \"The Great Expectations Team\"\ngitstamp_fmt = \"%d %b %Y\"\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \"\"\n# The full version, including alpha/beta/rc tags.\nrelease = \"\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"paraiso-dark\"\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\n# html_theme = 'alabaster'\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n \"logo_only\": True,\n \"collapse_navigation\": False,\n \"navigation_depth\": 4,\n}\n\nhtml_static_path = [\n \"_static\",\n \"_static/style.css\",\n \"_static/hk-grotesk-pro/HKGroteskPro-Bold.woff2\",\n \"_static/hk-grotesk-pro/HKGroteskPro-Regular.woff2\",\n \"_static/hk-grotesk-pro/HKGroteskPro-SemiBold.woff2\",\n \"_static/hk-grotesk-pro/HKGroteskPro-Medium.woff2\",\n \"_static/header-logo.png\",\n \"_static/discuss-logo.png\",\n]\nhtml_css_files = [\"style.css\"]\n\n# html_logo = '../pip-logo.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n\n\n# -- Options for Napoleon Extension --------------------------------------------\n\n# Parse Google style docstrings.\n# See http://google.github.io/styleguide/pyguide.html\nnapoleon_google_docstring = True\n\n# Parse NumPy style docstrings.\n# See https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt\nnapoleon_numpy_docstring = True\n\n# Should special members (like __membername__) and private members\n# (like _membername) members be included in the documentation if they\n# have docstrings.\nnapoleon_include_private_with_doc = False\nnapoleon_include_special_with_doc = True\n\n# If True, docstring sections will use the \".. admonition::\" directive.\n# If False, docstring sections will use the \".. rubric::\" directive.\n# One may look better than the other depending on what HTML theme is used.\nnapoleon_use_admonition_for_examples = False\nnapoleon_use_admonition_for_notes = False\nnapoleon_use_admonition_for_references = False\n\n# If True, use Sphinx :ivar: directive for instance variables:\n# :ivar attr1: Description of attr1.\n# :type attr1: type\n# If False, use Sphinx .. attribute:: directive for instance variables:\n# .. attribute:: attr1\n#\n# *type*\n#\n# Description of attr1.\nnapoleon_use_ivar = False\n\n# If True, use Sphinx :param: directive for function parameters:\n# :param arg1: Description of arg1.\n# :type arg1: type\n# If False, output function parameters using the :parameters: field:\n# :parameters: **arg1** (*type*) -- Description of arg1.\nnapoleon_use_param = True\n\n# If True, use Sphinx :rtype: directive for the return type:\n# :returns: Description of return value.\n# :rtype: type\n# If False, output the return type inline with the return description:\n# :returns: *type* -- Description of return value.\nnapoleon_use_rtype = True\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"great_expectationsdoc\"\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n index_doc,\n \"great_expectations.tex\",\n \"great_expectations Documentation\",\n \"The Great Expectations Team\",\n \"manual\",\n ),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (index_doc, \"great_expectations\", \"great_expectations Documentation\", [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n index_doc,\n \"great_expectations\",\n \"great_expectations Documentation\",\n author,\n \"great_expectations\",\n \"Always know what to expect from your data.\",\n \"Miscellaneous\",\n ),\n]\n\n\nautodoc_member_order = \"bysource\"\n\n\ndef process_docstring(app, what, name, obj, options, lines):\n\n from docs.feature_annotation_parser import parse_feature_annotation\n\n docstring = \"\\n\".join(lines)\n annotation_list = parse_feature_annotation(docstring)\n\n process_between = between(marker=\"--ge-feature-maturity-info--\", exclude=True)\n process_between(app, what, name, obj, options, lines)\n\n if not annotation_list:\n return\n\n feature_annotation_admonition = \"\"\"\n.. admonition:: `Feature Maturity <https://docs.greatexpectations.io/en/latest/features>`_\n\n \"\"\"\n\n feature_annotation_template = \"\"\"\n | |icon-{icon_hash}| **{title}** - `How-to Guide <{how_to_guide_url}>`_\n | {description}\n | **Maturity**: {maturity}\n | **Details**:\n | **API Stability**: {maturity_details[api_stability]}\n | **Implementation Completeness**: {maturity_details[implementation_completeness]}\n | **Unit Test Coverage**: {maturity_details[unit_test_coverage]}\n | **Integration Infrastructure/Test Coverage**: {maturity_details[integration_infrastructure_test_coverage]}\n | **Documentation Completeness**: {maturity_details[documentation_completeness]}\n | **Bug Risk**: {maturity_details[bug_risk]}\\\n\"\"\"\n expectation_completeness_template = \"\"\"\n | **Expectation Completeness**: {maturity_details[expectation_completeness]}\\\n\"\"\"\n icon_template = \"\"\"\n .. |icon-{icon_hash}| image:: {icon}\n :height: 15px\n\"\"\"\n\n for annotation in annotation_list:\n icon_hash = uuid.uuid1().hex\n annotation[\"icon_hash\"] = icon_hash\n description = (\n annotation.get(\"description\")\n or annotation.get(\"short_description\")\n or f\"*TODO: {annotation.get('title')} Description*\"\n )\n how_to_guide_url = (\n annotation.get(\"how_to_guide_url\")\n or \"https://docs.greatexpectations.io/en/latest/how_to_guides.html\"\n )\n annotation[\"how_to_guide_url\"] = how_to_guide_url\n annotation[\"description\"] = description\n\n if annotation[\"maturity_details\"].get(\"expectation_completeness\"):\n feature_annotation_admonition += (\n feature_annotation_template\n + expectation_completeness_template\n + icon_template\n ).format(**annotation)\n else:\n feature_annotation_admonition += (\n feature_annotation_template + icon_template\n ).format(**annotation)\n\n lines += feature_annotation_admonition.splitlines()\n\n\ndef setup(app):\n app.connect(\"autodoc-process-docstring\", process_docstring)\n", "path": "docs/conf.py"}]}
3,938
166
gh_patches_debug_22334
rasdani/github-patches
git_diff
wagtail__wagtail-10913
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Login template does not respect non_fields_errors display When using a custom `WAGTAILADMIN_USER_LOGIN_FORM`, you can't set form-wide errors as they will always be displayed as `Your {{ username_field }} and password didn't match. Please try again.` from `"wagtailadmin/login.html"` As the default LoginForm (`wagtail.admin.forms.auth.LoginForm`) subclasses `django.contrib.auth.forms.AuthenticationForm` which already has an `'invalid_login'` error message (that is usename_field-aware) we could just use that, but if we want this particular message we could just override this error message" </issue> <code> [start of wagtail/admin/forms/auth.py] 1 from django import forms 2 from django.contrib.auth.forms import AuthenticationForm 3 from django.contrib.auth.forms import PasswordChangeForm as DjangoPasswordChangeForm 4 from django.contrib.auth.forms import PasswordResetForm as DjangoPasswordResetForm 5 from django.utils.translation import gettext_lazy 6 7 8 class LoginForm(AuthenticationForm): 9 username = forms.CharField(max_length=254, widget=forms.TextInput()) 10 11 password = forms.CharField( 12 widget=forms.PasswordInput( 13 attrs={ 14 "placeholder": gettext_lazy("Enter password"), 15 } 16 ) 17 ) 18 19 remember = forms.BooleanField(required=False) 20 21 def __init__(self, request=None, *args, **kwargs): 22 super().__init__(request=request, *args, **kwargs) 23 self.fields["username"].widget.attrs["placeholder"] = gettext_lazy( 24 "Enter your %(username_field_name)s" 25 ) % {"username_field_name": self.username_field.verbose_name} 26 self.fields["username"].widget.attrs["autofocus"] = "" 27 28 @property 29 def extra_fields(self): 30 for field_name in self.fields.keys(): 31 if field_name not in ["username", "password", "remember"]: 32 yield field_name, self[field_name] 33 34 35 class PasswordResetForm(DjangoPasswordResetForm): 36 email = forms.EmailField( 37 label=gettext_lazy("Enter your email address to reset your password"), 38 max_length=254, 39 required=True, 40 ) 41 42 @property 43 def extra_fields(self): 44 for field_name in self.fields.keys(): 45 if field_name not in ["email"]: 46 yield field_name, self[field_name] 47 48 49 class PasswordChangeForm(DjangoPasswordChangeForm): 50 """ 51 Since this is displayed as part of a larger form, this differs from the vanilla Django 52 PasswordChangeForm as follows: 53 * the old-password field is not auto-focused 54 * Fields are not marked as required 55 """ 56 57 def __init__(self, *args, **kwargs): 58 super().__init__(*args, **kwargs) 59 try: 60 del self.fields["old_password"].widget.attrs["autofocus"] 61 except KeyError: 62 pass 63 64 self.fields["old_password"].required = False 65 self.fields["new_password1"].required = False 66 self.fields["new_password2"].required = False 67 [end of wagtail/admin/forms/auth.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/wagtail/admin/forms/auth.py b/wagtail/admin/forms/auth.py --- a/wagtail/admin/forms/auth.py +++ b/wagtail/admin/forms/auth.py @@ -18,6 +18,13 @@ remember = forms.BooleanField(required=False) + error_messages = { + **AuthenticationForm.error_messages, + "invalid_login": gettext_lazy( + "Your %(username_field)s and password didn't match. Please try again." + ), + } + def __init__(self, request=None, *args, **kwargs): super().__init__(request=request, *args, **kwargs) self.fields["username"].widget.attrs["placeholder"] = gettext_lazy( @@ -31,6 +38,13 @@ if field_name not in ["username", "password", "remember"]: yield field_name, self[field_name] + def get_invalid_login_error(self): + return forms.ValidationError( + self.error_messages["invalid_login"], + code="invalid_login", + params={"username_field": self.username_field.verbose_name}, + ) + class PasswordResetForm(DjangoPasswordResetForm): email = forms.EmailField(
{"golden_diff": "diff --git a/wagtail/admin/forms/auth.py b/wagtail/admin/forms/auth.py\n--- a/wagtail/admin/forms/auth.py\n+++ b/wagtail/admin/forms/auth.py\n@@ -18,6 +18,13 @@\n \n remember = forms.BooleanField(required=False)\n \n+ error_messages = {\n+ **AuthenticationForm.error_messages,\n+ \"invalid_login\": gettext_lazy(\n+ \"Your %(username_field)s and password didn't match. Please try again.\"\n+ ),\n+ }\n+\n def __init__(self, request=None, *args, **kwargs):\n super().__init__(request=request, *args, **kwargs)\n self.fields[\"username\"].widget.attrs[\"placeholder\"] = gettext_lazy(\n@@ -31,6 +38,13 @@\n if field_name not in [\"username\", \"password\", \"remember\"]:\n yield field_name, self[field_name]\n \n+ def get_invalid_login_error(self):\n+ return forms.ValidationError(\n+ self.error_messages[\"invalid_login\"],\n+ code=\"invalid_login\",\n+ params={\"username_field\": self.username_field.verbose_name},\n+ )\n+\n \n class PasswordResetForm(DjangoPasswordResetForm):\n email = forms.EmailField(\n", "issue": "Login template does not respect non_fields_errors display\nWhen using a custom `WAGTAILADMIN_USER_LOGIN_FORM`, you can't set form-wide errors as they will always be displayed as `Your {{ username_field }} and password didn't match. Please try again.` from `\"wagtailadmin/login.html\"`\r\n\r\nAs the default LoginForm (`wagtail.admin.forms.auth.LoginForm`) subclasses `django.contrib.auth.forms.AuthenticationForm` which already has an `'invalid_login'` error message (that is usename_field-aware) we could just use that, but if we want this particular message we could just override this error message\"\n", "before_files": [{"content": "from django import forms\nfrom django.contrib.auth.forms import AuthenticationForm\nfrom django.contrib.auth.forms import PasswordChangeForm as DjangoPasswordChangeForm\nfrom django.contrib.auth.forms import PasswordResetForm as DjangoPasswordResetForm\nfrom django.utils.translation import gettext_lazy\n\n\nclass LoginForm(AuthenticationForm):\n username = forms.CharField(max_length=254, widget=forms.TextInput())\n\n password = forms.CharField(\n widget=forms.PasswordInput(\n attrs={\n \"placeholder\": gettext_lazy(\"Enter password\"),\n }\n )\n )\n\n remember = forms.BooleanField(required=False)\n\n def __init__(self, request=None, *args, **kwargs):\n super().__init__(request=request, *args, **kwargs)\n self.fields[\"username\"].widget.attrs[\"placeholder\"] = gettext_lazy(\n \"Enter your %(username_field_name)s\"\n ) % {\"username_field_name\": self.username_field.verbose_name}\n self.fields[\"username\"].widget.attrs[\"autofocus\"] = \"\"\n\n @property\n def extra_fields(self):\n for field_name in self.fields.keys():\n if field_name not in [\"username\", \"password\", \"remember\"]:\n yield field_name, self[field_name]\n\n\nclass PasswordResetForm(DjangoPasswordResetForm):\n email = forms.EmailField(\n label=gettext_lazy(\"Enter your email address to reset your password\"),\n max_length=254,\n required=True,\n )\n\n @property\n def extra_fields(self):\n for field_name in self.fields.keys():\n if field_name not in [\"email\"]:\n yield field_name, self[field_name]\n\n\nclass PasswordChangeForm(DjangoPasswordChangeForm):\n \"\"\"\n Since this is displayed as part of a larger form, this differs from the vanilla Django\n PasswordChangeForm as follows:\n * the old-password field is not auto-focused\n * Fields are not marked as required\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n try:\n del self.fields[\"old_password\"].widget.attrs[\"autofocus\"]\n except KeyError:\n pass\n\n self.fields[\"old_password\"].required = False\n self.fields[\"new_password1\"].required = False\n self.fields[\"new_password2\"].required = False\n", "path": "wagtail/admin/forms/auth.py"}]}
1,267
260
gh_patches_debug_21367
rasdani/github-patches
git_diff
learningequality__kolibri-8341
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Brand new Setup Wizard infinitely redirects <!-- Instructions: * Fill out the sections below, replace …'s with information about your issue * Use the 'preview' function above this text box to verify formatting before submitting --> ## Observed behavior <!-- Description of the behavior that was observed, including screenshots or other references when applicable --> On 0.15 branch, with a fresh KOLIBRI_HOME folder, going to the server causes infinite redirects and does not allow the user to proceed with the Setup Wizard (it doesn't load at all - tries to request the page like 10 times and then gives up). ## Context <!-- Tell us about your environment, including: * Kolibri version * Operating system * Browser --> Kolibri: release-v0.15.x branch OS: Fedora Browser: Chrome </issue> <code> [start of kolibri/core/kolibri_plugin.py] 1 from __future__ import absolute_import 2 from __future__ import print_function 3 from __future__ import unicode_literals 4 5 from django.conf import settings 6 from django.contrib.staticfiles.templatetags.staticfiles import static 7 from django.core.urlresolvers import get_resolver 8 from django.core.urlresolvers import reverse 9 from django.template.loader import render_to_string 10 from django.utils.html import mark_safe 11 from django.utils.translation import get_language 12 from django.utils.translation import get_language_bidi 13 from django.utils.translation import get_language_info 14 from django_js_reverse.core import _safe_json 15 from django_js_reverse.core import generate_json 16 from django_js_reverse.rjsmin import jsmin 17 18 import kolibri 19 from kolibri.core.content.utils.paths import get_content_storage_url 20 from kolibri.core.content.utils.paths import get_hashi_path 21 from kolibri.core.content.utils.paths import get_zip_content_base_path 22 from kolibri.core.content.utils.paths import get_zip_content_config 23 from kolibri.core.device.models import ContentCacheKey 24 from kolibri.core.device.utils import allow_other_browsers_to_connect 25 from kolibri.core.device.utils import get_device_setting 26 from kolibri.core.hooks import NavigationHook 27 from kolibri.core.oidc_provider_hook import OIDCProviderHook 28 from kolibri.core.theme_hook import ThemeHook 29 from kolibri.core.webpack.hooks import WebpackBundleHook 30 from kolibri.plugins.app.utils import interface 31 from kolibri.plugins.hooks import register_hook 32 from kolibri.utils import i18n 33 from kolibri.utils.conf import OPTIONS 34 35 36 @register_hook 37 class FrontEndCoreAppAssetHook(WebpackBundleHook): 38 bundle_id = "default_frontend" 39 40 def url_tag(self): 41 # Modified from: 42 # https://github.com/ierror/django-js-reverse/blob/master/django_js_reverse/core.py#L101 43 js_name = "window.kolibriPluginDataGlobal['{bundle}'].urls".format( 44 bundle=self.unique_id 45 ) 46 default_urlresolver = get_resolver(None) 47 48 data = generate_json(default_urlresolver) 49 50 # Generate the JS that exposes functions to reverse all Django URLs 51 # in the frontend. 52 js = render_to_string( 53 "django_js_reverse/urls_js.tpl", 54 {"data": _safe_json(data), "js_name": "__placeholder__"}, 55 # For some reason the js_name gets escaped going into the template 56 # so this was the easiest way to inject it. 57 ).replace("__placeholder__", js_name) 58 zip_content_origin, zip_content_port = get_zip_content_config() 59 return [ 60 mark_safe( 61 """<script type="text/javascript">""" 62 # Minify the generated Javascript 63 + jsmin(js) 64 # Add URL references for our base static URL, the Django media URL 65 # and our content storage URL - this allows us to calculate 66 # the path at which to access a local file on the frontend if needed. 67 + """ 68 {js_name}.__staticUrl = '{static_url}'; 69 {js_name}.__mediaUrl = '{media_url}'; 70 {js_name}.__contentUrl = '{content_url}'; 71 {js_name}.__zipContentUrl = '{zip_content_url}'; 72 {js_name}.__hashiUrl = '{hashi_url}'; 73 {js_name}.__zipContentOrigin = '{zip_content_origin}'; 74 {js_name}.__zipContentPort = {zip_content_port}; 75 </script> 76 """.format( 77 js_name=js_name, 78 static_url=settings.STATIC_URL, 79 media_url=settings.MEDIA_URL, 80 content_url=get_content_storage_url( 81 baseurl=OPTIONS["Deployment"]["URL_PATH_PREFIX"] 82 ), 83 zip_content_url=get_zip_content_base_path(), 84 hashi_url=get_hashi_path(), 85 zip_content_origin=zip_content_origin, 86 zip_content_port=zip_content_port, 87 ) 88 ) 89 ] 90 91 def navigation_tags(self): 92 return [ 93 hook.render_to_page_load_sync_html() 94 for hook in NavigationHook.registered_hooks 95 ] 96 97 def render_to_page_load_sync_html(self): 98 """ 99 Don't render the frontend message files in the usual way 100 as the global object to register them does not exist yet. 101 Instead they are loaded through plugin data. 102 """ 103 tags = ( 104 self.plugin_data_tag() 105 + self.url_tag() 106 + list(self.js_and_css_tags()) 107 + self.navigation_tags() 108 ) 109 110 return mark_safe("\n".join(tags)) 111 112 @property 113 def plugin_data(self): 114 language_code = get_language() 115 static_root = static("assets/fonts/noto-full") 116 full_file = "{}.{}.{}.css?v={}" 117 return { 118 "fullCSSFileModern": full_file.format( 119 static_root, language_code, "modern", kolibri.__version__ 120 ), 121 "fullCSSFileBasic": full_file.format( 122 static_root, language_code, "basic", kolibri.__version__ 123 ), 124 "allowRemoteAccess": allow_other_browsers_to_connect() 125 or not interface.enabled, 126 "appCapabilities": interface.capabilities, 127 "contentCacheKey": ContentCacheKey.get_cache_key(), 128 "languageGlobals": self.language_globals(), 129 "oidcProviderEnabled": OIDCProviderHook.is_enabled(), 130 "kolibriTheme": ThemeHook.get_theme(), 131 "isSubsetOfUsersDevice": get_device_setting("subset_of_users_device"), 132 } 133 134 def language_globals(self): 135 language_code = get_language() 136 lang_dir = "rtl" if get_language_bidi() else "ltr" 137 138 languages = {} 139 for code, language_name in settings.LANGUAGES: 140 lang_info = i18n.KOLIBRI_LANGUAGE_INFO[code] 141 languages[code] = { 142 # Format to match the schema of the content Language model 143 "id": code, 144 "lang_name": language_name, 145 "english_name": lang_info["english_name"] 146 if lang_info 147 else get_language_info(code)["name"], 148 "lang_direction": get_language_info(code)["bidi"], 149 } 150 return { 151 "coreLanguageMessages": self.frontend_messages(), 152 "languageCode": language_code, 153 "languageDir": lang_dir, 154 "languages": languages, 155 } 156 157 158 @register_hook 159 class FrontendHeadAssetsHook(WebpackBundleHook): 160 """ 161 Render these assets in the <head> tag of base.html, before other JS and assets. 162 """ 163 164 bundle_id = "frontend_head_assets" 165 inline = True 166 167 def render_to_page_load_sync_html(self): 168 """ 169 Add in the extra language font file tags needed 170 for preloading our custom font files. 171 """ 172 tags = ( 173 self.plugin_data_tag() 174 + self.language_font_file_tags() 175 + self.frontend_message_tag() 176 + list(self.js_and_css_tags()) 177 ) 178 179 return mark_safe("\n".join(tags)) 180 181 def language_font_file_tags(self): 182 language_code = get_language() 183 common_file = static("assets/fonts/noto-common.css") 184 subset_file = static("assets/fonts/noto-subset.{}.css".format(language_code)) 185 return [ 186 '<link type="text/css" href="{common_css_file}?v={version}" rel="stylesheet"/>'.format( 187 common_css_file=common_file, version=kolibri.__version__ 188 ), 189 '<link type="text/css" href="{subset_css_file}?v={version}" rel="stylesheet"/>'.format( 190 subset_css_file=subset_file, version=kolibri.__version__ 191 ), 192 ] 193 194 @property 195 def plugin_data(self): 196 return {"unsupportedUrl": reverse("kolibri:core:unsupported")} 197 [end of kolibri/core/kolibri_plugin.py] [start of kolibri/core/device/middleware.py] 1 from django.conf import settings 2 from django.http import HttpResponseRedirect 3 from django.shortcuts import redirect 4 from django.urls import is_valid_path 5 from django.utils import translation 6 7 from .translation import get_language_from_request_and_is_from_path 8 from kolibri.core.device.hooks import SetupHook 9 from kolibri.core.device.utils import DeviceNotProvisioned 10 from kolibri.utils.conf import OPTIONS 11 12 13 class KolibriLocaleMiddleware(object): 14 """ 15 Copied and then modified into a new style middleware from: 16 https://github.com/django/django/blob/stable/1.11.x/django/middleware/locale.py 17 Also has several other changes to suit our purposes. 18 The principal concern of this middleware is to activate translation for the current 19 language, so that throughout the lifecycle of this request, any translation or language 20 related functionality is set to the appropriate locale. 21 Unlike the Django middleware, this middleware only runs on requests to URLs that are 22 prefixed by a language code. Other URLs, such as for untranslated API endpoints do not 23 have a language code set on them. 24 """ 25 26 def __init__(self, get_response): 27 # Standard boilerplate for a new style Django middleware. 28 self.get_response = get_response 29 30 def __call__(self, request): 31 # First get the language code, and whether this was calculated from the path 32 # i.e. was this a language-prefixed URL. 33 language, language_from_path = get_language_from_request_and_is_from_path( 34 request 35 ) 36 # If this URL has been resolved to a view, and the view is not on a language prefixed 37 # URL, then the function above will return None for the language code to indicate that 38 # no translation is necessary. 39 if language is not None: 40 # Only activate translation if there is a language code returned. 41 translation.activate(language) 42 request.LANGUAGE_CODE = translation.get_language() 43 44 response = self.get_response(request) 45 46 if language is not None: 47 48 language = translation.get_language() 49 50 if response.status_code == 404 and not language_from_path: 51 # Maybe the language code is missing in the URL? Try adding the 52 # language prefix and redirecting to that URL. 53 # First get any global prefix that is being used. 54 script_prefix = OPTIONS["Deployment"]["URL_PATH_PREFIX"] 55 # Replace the global prefix with the global prefix and the language prefix. 56 language_path = request.path_info.replace( 57 script_prefix, "%s%s/" % (script_prefix, language), 1 58 ) 59 60 # Get the urlconf from the request, default to the global settings ROOT_URLCONF 61 urlconf = getattr(request, "urlconf", settings.ROOT_URLCONF) 62 # Check if this is a valid path 63 path_valid = is_valid_path(language_path, urlconf) 64 # Check if the path is only invalid because it is missing a trailing slash 65 path_needs_slash = not path_valid and ( 66 settings.APPEND_SLASH 67 and not language_path.endswith("/") 68 and is_valid_path("%s/" % language_path, urlconf) 69 ) 70 # If the constructed path is valid, or it would be valid with a trailing slash 71 # then redirect to the prefixed path, with a trailing slash added if needed. 72 if path_valid or path_needs_slash: 73 # Insert language after the script prefix and before the 74 # rest of the URL 75 language_url = request.get_full_path( 76 force_append_slash=path_needs_slash 77 ).replace(script_prefix, "%s%s/" % (script_prefix, language), 1) 78 return HttpResponseRedirect(language_url) 79 80 # Add a content language header to the response if not already present. 81 if "Content-Language" not in response: 82 response["Content-Language"] = language 83 84 return response 85 86 87 class ProvisioningErrorHandler(object): 88 def __init__(self, get_response): 89 self.get_response = get_response 90 91 def process_exception(self, request, exception): 92 if isinstance(exception, DeviceNotProvisioned) and SetupHook.provision_url(): 93 return redirect(SetupHook.provision_url()) 94 return None 95 96 def __call__(self, request): 97 return self.get_response(request) 98 [end of kolibri/core/device/middleware.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/kolibri/core/device/middleware.py b/kolibri/core/device/middleware.py --- a/kolibri/core/device/middleware.py +++ b/kolibri/core/device/middleware.py @@ -89,7 +89,11 @@ self.get_response = get_response def process_exception(self, request, exception): - if isinstance(exception, DeviceNotProvisioned) and SetupHook.provision_url(): + if ( + isinstance(exception, DeviceNotProvisioned) + and SetupHook.provision_url() + and not request.path.startswith(SetupHook.provision_url()) + ): return redirect(SetupHook.provision_url()) return None diff --git a/kolibri/core/kolibri_plugin.py b/kolibri/core/kolibri_plugin.py --- a/kolibri/core/kolibri_plugin.py +++ b/kolibri/core/kolibri_plugin.py @@ -128,7 +128,9 @@ "languageGlobals": self.language_globals(), "oidcProviderEnabled": OIDCProviderHook.is_enabled(), "kolibriTheme": ThemeHook.get_theme(), - "isSubsetOfUsersDevice": get_device_setting("subset_of_users_device"), + "isSubsetOfUsersDevice": get_device_setting( + "subset_of_users_device", False + ), } def language_globals(self):
{"golden_diff": "diff --git a/kolibri/core/device/middleware.py b/kolibri/core/device/middleware.py\n--- a/kolibri/core/device/middleware.py\n+++ b/kolibri/core/device/middleware.py\n@@ -89,7 +89,11 @@\n self.get_response = get_response\n \n def process_exception(self, request, exception):\n- if isinstance(exception, DeviceNotProvisioned) and SetupHook.provision_url():\n+ if (\n+ isinstance(exception, DeviceNotProvisioned)\n+ and SetupHook.provision_url()\n+ and not request.path.startswith(SetupHook.provision_url())\n+ ):\n return redirect(SetupHook.provision_url())\n return None\n \ndiff --git a/kolibri/core/kolibri_plugin.py b/kolibri/core/kolibri_plugin.py\n--- a/kolibri/core/kolibri_plugin.py\n+++ b/kolibri/core/kolibri_plugin.py\n@@ -128,7 +128,9 @@\n \"languageGlobals\": self.language_globals(),\n \"oidcProviderEnabled\": OIDCProviderHook.is_enabled(),\n \"kolibriTheme\": ThemeHook.get_theme(),\n- \"isSubsetOfUsersDevice\": get_device_setting(\"subset_of_users_device\"),\n+ \"isSubsetOfUsersDevice\": get_device_setting(\n+ \"subset_of_users_device\", False\n+ ),\n }\n \n def language_globals(self):\n", "issue": "Brand new Setup Wizard infinitely redirects\n<!--\r\nInstructions:\r\n * Fill out the sections below, replace \u2026's with information about your issue\r\n * Use the 'preview' function above this text box to verify formatting before submitting\r\n-->\r\n\r\n## Observed behavior\r\n<!--\r\nDescription of the behavior that was observed, including screenshots or other references when applicable\r\n-->\r\n\r\nOn 0.15 branch, with a fresh KOLIBRI_HOME folder, going to the server causes infinite redirects and does not allow the user to proceed with the Setup Wizard (it doesn't load at all - tries to request the page like 10 times and then gives up).\r\n\r\n## Context\r\n<!--\r\nTell us about your environment, including:\r\n * Kolibri version\r\n * Operating system\r\n * Browser\r\n-->\r\n\r\nKolibri: release-v0.15.x branch\r\nOS: Fedora\r\nBrowser: Chrome\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\nfrom django.conf import settings\nfrom django.contrib.staticfiles.templatetags.staticfiles import static\nfrom django.core.urlresolvers import get_resolver\nfrom django.core.urlresolvers import reverse\nfrom django.template.loader import render_to_string\nfrom django.utils.html import mark_safe\nfrom django.utils.translation import get_language\nfrom django.utils.translation import get_language_bidi\nfrom django.utils.translation import get_language_info\nfrom django_js_reverse.core import _safe_json\nfrom django_js_reverse.core import generate_json\nfrom django_js_reverse.rjsmin import jsmin\n\nimport kolibri\nfrom kolibri.core.content.utils.paths import get_content_storage_url\nfrom kolibri.core.content.utils.paths import get_hashi_path\nfrom kolibri.core.content.utils.paths import get_zip_content_base_path\nfrom kolibri.core.content.utils.paths import get_zip_content_config\nfrom kolibri.core.device.models import ContentCacheKey\nfrom kolibri.core.device.utils import allow_other_browsers_to_connect\nfrom kolibri.core.device.utils import get_device_setting\nfrom kolibri.core.hooks import NavigationHook\nfrom kolibri.core.oidc_provider_hook import OIDCProviderHook\nfrom kolibri.core.theme_hook import ThemeHook\nfrom kolibri.core.webpack.hooks import WebpackBundleHook\nfrom kolibri.plugins.app.utils import interface\nfrom kolibri.plugins.hooks import register_hook\nfrom kolibri.utils import i18n\nfrom kolibri.utils.conf import OPTIONS\n\n\n@register_hook\nclass FrontEndCoreAppAssetHook(WebpackBundleHook):\n bundle_id = \"default_frontend\"\n\n def url_tag(self):\n # Modified from:\n # https://github.com/ierror/django-js-reverse/blob/master/django_js_reverse/core.py#L101\n js_name = \"window.kolibriPluginDataGlobal['{bundle}'].urls\".format(\n bundle=self.unique_id\n )\n default_urlresolver = get_resolver(None)\n\n data = generate_json(default_urlresolver)\n\n # Generate the JS that exposes functions to reverse all Django URLs\n # in the frontend.\n js = render_to_string(\n \"django_js_reverse/urls_js.tpl\",\n {\"data\": _safe_json(data), \"js_name\": \"__placeholder__\"},\n # For some reason the js_name gets escaped going into the template\n # so this was the easiest way to inject it.\n ).replace(\"__placeholder__\", js_name)\n zip_content_origin, zip_content_port = get_zip_content_config()\n return [\n mark_safe(\n \"\"\"<script type=\"text/javascript\">\"\"\"\n # Minify the generated Javascript\n + jsmin(js)\n # Add URL references for our base static URL, the Django media URL\n # and our content storage URL - this allows us to calculate\n # the path at which to access a local file on the frontend if needed.\n + \"\"\"\n {js_name}.__staticUrl = '{static_url}';\n {js_name}.__mediaUrl = '{media_url}';\n {js_name}.__contentUrl = '{content_url}';\n {js_name}.__zipContentUrl = '{zip_content_url}';\n {js_name}.__hashiUrl = '{hashi_url}';\n {js_name}.__zipContentOrigin = '{zip_content_origin}';\n {js_name}.__zipContentPort = {zip_content_port};\n </script>\n \"\"\".format(\n js_name=js_name,\n static_url=settings.STATIC_URL,\n media_url=settings.MEDIA_URL,\n content_url=get_content_storage_url(\n baseurl=OPTIONS[\"Deployment\"][\"URL_PATH_PREFIX\"]\n ),\n zip_content_url=get_zip_content_base_path(),\n hashi_url=get_hashi_path(),\n zip_content_origin=zip_content_origin,\n zip_content_port=zip_content_port,\n )\n )\n ]\n\n def navigation_tags(self):\n return [\n hook.render_to_page_load_sync_html()\n for hook in NavigationHook.registered_hooks\n ]\n\n def render_to_page_load_sync_html(self):\n \"\"\"\n Don't render the frontend message files in the usual way\n as the global object to register them does not exist yet.\n Instead they are loaded through plugin data.\n \"\"\"\n tags = (\n self.plugin_data_tag()\n + self.url_tag()\n + list(self.js_and_css_tags())\n + self.navigation_tags()\n )\n\n return mark_safe(\"\\n\".join(tags))\n\n @property\n def plugin_data(self):\n language_code = get_language()\n static_root = static(\"assets/fonts/noto-full\")\n full_file = \"{}.{}.{}.css?v={}\"\n return {\n \"fullCSSFileModern\": full_file.format(\n static_root, language_code, \"modern\", kolibri.__version__\n ),\n \"fullCSSFileBasic\": full_file.format(\n static_root, language_code, \"basic\", kolibri.__version__\n ),\n \"allowRemoteAccess\": allow_other_browsers_to_connect()\n or not interface.enabled,\n \"appCapabilities\": interface.capabilities,\n \"contentCacheKey\": ContentCacheKey.get_cache_key(),\n \"languageGlobals\": self.language_globals(),\n \"oidcProviderEnabled\": OIDCProviderHook.is_enabled(),\n \"kolibriTheme\": ThemeHook.get_theme(),\n \"isSubsetOfUsersDevice\": get_device_setting(\"subset_of_users_device\"),\n }\n\n def language_globals(self):\n language_code = get_language()\n lang_dir = \"rtl\" if get_language_bidi() else \"ltr\"\n\n languages = {}\n for code, language_name in settings.LANGUAGES:\n lang_info = i18n.KOLIBRI_LANGUAGE_INFO[code]\n languages[code] = {\n # Format to match the schema of the content Language model\n \"id\": code,\n \"lang_name\": language_name,\n \"english_name\": lang_info[\"english_name\"]\n if lang_info\n else get_language_info(code)[\"name\"],\n \"lang_direction\": get_language_info(code)[\"bidi\"],\n }\n return {\n \"coreLanguageMessages\": self.frontend_messages(),\n \"languageCode\": language_code,\n \"languageDir\": lang_dir,\n \"languages\": languages,\n }\n\n\n@register_hook\nclass FrontendHeadAssetsHook(WebpackBundleHook):\n \"\"\"\n Render these assets in the <head> tag of base.html, before other JS and assets.\n \"\"\"\n\n bundle_id = \"frontend_head_assets\"\n inline = True\n\n def render_to_page_load_sync_html(self):\n \"\"\"\n Add in the extra language font file tags needed\n for preloading our custom font files.\n \"\"\"\n tags = (\n self.plugin_data_tag()\n + self.language_font_file_tags()\n + self.frontend_message_tag()\n + list(self.js_and_css_tags())\n )\n\n return mark_safe(\"\\n\".join(tags))\n\n def language_font_file_tags(self):\n language_code = get_language()\n common_file = static(\"assets/fonts/noto-common.css\")\n subset_file = static(\"assets/fonts/noto-subset.{}.css\".format(language_code))\n return [\n '<link type=\"text/css\" href=\"{common_css_file}?v={version}\" rel=\"stylesheet\"/>'.format(\n common_css_file=common_file, version=kolibri.__version__\n ),\n '<link type=\"text/css\" href=\"{subset_css_file}?v={version}\" rel=\"stylesheet\"/>'.format(\n subset_css_file=subset_file, version=kolibri.__version__\n ),\n ]\n\n @property\n def plugin_data(self):\n return {\"unsupportedUrl\": reverse(\"kolibri:core:unsupported\")}\n", "path": "kolibri/core/kolibri_plugin.py"}, {"content": "from django.conf import settings\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import redirect\nfrom django.urls import is_valid_path\nfrom django.utils import translation\n\nfrom .translation import get_language_from_request_and_is_from_path\nfrom kolibri.core.device.hooks import SetupHook\nfrom kolibri.core.device.utils import DeviceNotProvisioned\nfrom kolibri.utils.conf import OPTIONS\n\n\nclass KolibriLocaleMiddleware(object):\n \"\"\"\n Copied and then modified into a new style middleware from:\n https://github.com/django/django/blob/stable/1.11.x/django/middleware/locale.py\n Also has several other changes to suit our purposes.\n The principal concern of this middleware is to activate translation for the current\n language, so that throughout the lifecycle of this request, any translation or language\n related functionality is set to the appropriate locale.\n Unlike the Django middleware, this middleware only runs on requests to URLs that are\n prefixed by a language code. Other URLs, such as for untranslated API endpoints do not\n have a language code set on them.\n \"\"\"\n\n def __init__(self, get_response):\n # Standard boilerplate for a new style Django middleware.\n self.get_response = get_response\n\n def __call__(self, request):\n # First get the language code, and whether this was calculated from the path\n # i.e. was this a language-prefixed URL.\n language, language_from_path = get_language_from_request_and_is_from_path(\n request\n )\n # If this URL has been resolved to a view, and the view is not on a language prefixed\n # URL, then the function above will return None for the language code to indicate that\n # no translation is necessary.\n if language is not None:\n # Only activate translation if there is a language code returned.\n translation.activate(language)\n request.LANGUAGE_CODE = translation.get_language()\n\n response = self.get_response(request)\n\n if language is not None:\n\n language = translation.get_language()\n\n if response.status_code == 404 and not language_from_path:\n # Maybe the language code is missing in the URL? Try adding the\n # language prefix and redirecting to that URL.\n # First get any global prefix that is being used.\n script_prefix = OPTIONS[\"Deployment\"][\"URL_PATH_PREFIX\"]\n # Replace the global prefix with the global prefix and the language prefix.\n language_path = request.path_info.replace(\n script_prefix, \"%s%s/\" % (script_prefix, language), 1\n )\n\n # Get the urlconf from the request, default to the global settings ROOT_URLCONF\n urlconf = getattr(request, \"urlconf\", settings.ROOT_URLCONF)\n # Check if this is a valid path\n path_valid = is_valid_path(language_path, urlconf)\n # Check if the path is only invalid because it is missing a trailing slash\n path_needs_slash = not path_valid and (\n settings.APPEND_SLASH\n and not language_path.endswith(\"/\")\n and is_valid_path(\"%s/\" % language_path, urlconf)\n )\n # If the constructed path is valid, or it would be valid with a trailing slash\n # then redirect to the prefixed path, with a trailing slash added if needed.\n if path_valid or path_needs_slash:\n # Insert language after the script prefix and before the\n # rest of the URL\n language_url = request.get_full_path(\n force_append_slash=path_needs_slash\n ).replace(script_prefix, \"%s%s/\" % (script_prefix, language), 1)\n return HttpResponseRedirect(language_url)\n\n # Add a content language header to the response if not already present.\n if \"Content-Language\" not in response:\n response[\"Content-Language\"] = language\n\n return response\n\n\nclass ProvisioningErrorHandler(object):\n def __init__(self, get_response):\n self.get_response = get_response\n\n def process_exception(self, request, exception):\n if isinstance(exception, DeviceNotProvisioned) and SetupHook.provision_url():\n return redirect(SetupHook.provision_url())\n return None\n\n def __call__(self, request):\n return self.get_response(request)\n", "path": "kolibri/core/device/middleware.py"}]}
3,924
298
gh_patches_debug_6770
rasdani/github-patches
git_diff
celery__celery-5355
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Django fixup appends to PYTHONPATH instead of prepending Hi, ## Environment & Settings **Celery version**: 4.2.1 (windowlicker) # Steps to Reproduce We are using Celery + Django in [dissemin](https://github.com/dissemin/dissemin/). We have a Django app named "statistics" which is actually a name conflicting with a Python 3 package from standard library. This should be fine in principle as long as the `PYTHONPATH` is set so that the local modules have precedence over the system ones. When running Celery CLI, the system wide module has precedence over the local ones apparently. I traced this issue back to [this `sys.path` tweak](https://github.com/celery/celery/blob/072dab85261599234341cc714b0d6f0caca20f00/celery/fixups/django.py#L60-L61), which is actually **appending** local path instead of prepending it. I may have missed something, but is it important for some reason to append it and not prepend it in this context? # Expected Behavior Celery should load the local module as expected. # Actual Behavior ``` # When going through celery CLI sys.path == ['/Users/lverney/.local/share/virtualenvs/dissemin3/bin', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python36.zip', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/site-packages', '/Users/lverney/.local/share/virtualenvs/dissemin3/src/django-allauth', '/Users/lverney/tmp/dissemin'] # Without celery sys.path == ['', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python36.zip', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/site-packages', '/Users/lverney/.local/share/virtualenvs/dissemin3/src/django-allauth'] ``` We can see that current path is actually appended, not prepended. Therefore, "system" modules have precedence on the ones from the local project. # Workaround For people experiencing this issue, `PYTHONPATH=$(pwd) celery …` is a workaround. Best, /cc @wetneb who first noticed this issue. </issue> <code> [start of celery/fixups/django.py] 1 """Django-specific customization.""" 2 from __future__ import absolute_import, unicode_literals 3 4 import os 5 import sys 6 import warnings 7 from datetime import datetime 8 from importlib import import_module 9 10 from kombu.utils.imports import symbol_by_name 11 from kombu.utils.objects import cached_property 12 13 from celery import _state, signals 14 from celery.exceptions import FixupWarning, ImproperlyConfigured 15 16 __all__ = ('DjangoFixup', 'fixup') 17 18 ERR_NOT_INSTALLED = """\ 19 Environment variable DJANGO_SETTINGS_MODULE is defined 20 but Django isn't installed. Won't apply Django fix-ups! 21 """ 22 23 24 def _maybe_close_fd(fh): 25 try: 26 os.close(fh.fileno()) 27 except (AttributeError, OSError, TypeError): 28 # TypeError added for celery#962 29 pass 30 31 32 def _verify_django_version(django): 33 if django.VERSION < (1, 11): 34 raise ImproperlyConfigured('Celery 4.x requires Django 1.11 or later.') 35 36 37 def fixup(app, env='DJANGO_SETTINGS_MODULE'): 38 """Install Django fixup if settings module environment is set.""" 39 SETTINGS_MODULE = os.environ.get(env) 40 if SETTINGS_MODULE and 'django' not in app.loader_cls.lower(): 41 try: 42 import django # noqa 43 except ImportError: 44 warnings.warn(FixupWarning(ERR_NOT_INSTALLED)) 45 else: 46 _verify_django_version(django) 47 return DjangoFixup(app).install() 48 49 50 class DjangoFixup(object): 51 """Fixup installed when using Django.""" 52 53 def __init__(self, app): 54 self.app = app 55 if _state.default_app is None: 56 self.app.set_default() 57 self._worker_fixup = None 58 59 def install(self): 60 # Need to add project directory to path 61 sys.path.append(os.getcwd()) 62 63 self._settings = symbol_by_name('django.conf:settings') 64 self.app.loader.now = self.now 65 66 signals.import_modules.connect(self.on_import_modules) 67 signals.worker_init.connect(self.on_worker_init) 68 return self 69 70 @property 71 def worker_fixup(self): 72 if self._worker_fixup is None: 73 self._worker_fixup = DjangoWorkerFixup(self.app) 74 return self._worker_fixup 75 76 @worker_fixup.setter 77 def worker_fixup(self, value): 78 self._worker_fixup = value 79 80 def on_import_modules(self, **kwargs): 81 # call django.setup() before task modules are imported 82 self.worker_fixup.validate_models() 83 84 def on_worker_init(self, **kwargs): 85 self.worker_fixup.install() 86 87 def now(self, utc=False): 88 return datetime.utcnow() if utc else self._now() 89 90 def autodiscover_tasks(self): 91 from django.apps import apps 92 return [config.name for config in apps.get_app_configs()] 93 94 @cached_property 95 def _now(self): 96 return symbol_by_name('django.utils.timezone:now') 97 98 99 class DjangoWorkerFixup(object): 100 _db_recycles = 0 101 102 def __init__(self, app): 103 self.app = app 104 self.db_reuse_max = self.app.conf.get('CELERY_DB_REUSE_MAX', None) 105 self._db = import_module('django.db') 106 self._cache = import_module('django.core.cache') 107 self._settings = symbol_by_name('django.conf:settings') 108 109 self.interface_errors = ( 110 symbol_by_name('django.db.utils.InterfaceError'), 111 ) 112 self.DatabaseError = symbol_by_name('django.db:DatabaseError') 113 114 def django_setup(self): 115 import django 116 django.setup() 117 118 def validate_models(self): 119 from django.core.checks import run_checks 120 self.django_setup() 121 run_checks() 122 123 def install(self): 124 signals.beat_embedded_init.connect(self.close_database) 125 signals.worker_ready.connect(self.on_worker_ready) 126 signals.task_prerun.connect(self.on_task_prerun) 127 signals.task_postrun.connect(self.on_task_postrun) 128 signals.worker_process_init.connect(self.on_worker_process_init) 129 self.close_database() 130 self.close_cache() 131 return self 132 133 def on_worker_process_init(self, **kwargs): 134 # Child process must validate models again if on Windows, 135 # or if they were started using execv. 136 if os.environ.get('FORKED_BY_MULTIPROCESSING'): 137 self.validate_models() 138 139 # close connections: 140 # the parent process may have established these, 141 # so need to close them. 142 143 # calling db.close() on some DB connections will cause 144 # the inherited DB conn to also get broken in the parent 145 # process so we need to remove it without triggering any 146 # network IO that close() might cause. 147 for c in self._db.connections.all(): 148 if c and c.connection: 149 self._maybe_close_db_fd(c.connection) 150 151 # use the _ version to avoid DB_REUSE preventing the conn.close() call 152 self._close_database() 153 self.close_cache() 154 155 def _maybe_close_db_fd(self, fd): 156 try: 157 _maybe_close_fd(fd) 158 except self.interface_errors: 159 pass 160 161 def on_task_prerun(self, sender, **kwargs): 162 """Called before every task.""" 163 if not getattr(sender.request, 'is_eager', False): 164 self.close_database() 165 166 def on_task_postrun(self, sender, **kwargs): 167 # See https://groups.google.com/group/django-users/ 168 # browse_thread/thread/78200863d0c07c6d/ 169 if not getattr(sender.request, 'is_eager', False): 170 self.close_database() 171 self.close_cache() 172 173 def close_database(self, **kwargs): 174 if not self.db_reuse_max: 175 return self._close_database() 176 if self._db_recycles >= self.db_reuse_max * 2: 177 self._db_recycles = 0 178 self._close_database() 179 self._db_recycles += 1 180 181 def _close_database(self): 182 for conn in self._db.connections.all(): 183 try: 184 conn.close_if_unusable_or_obsolete() 185 except self.interface_errors: 186 pass 187 except self.DatabaseError as exc: 188 str_exc = str(exc) 189 if 'closed' not in str_exc and 'not connected' not in str_exc: 190 raise 191 192 def close_cache(self): 193 try: 194 self._cache.close_caches() 195 except (TypeError, AttributeError): 196 pass 197 198 def on_worker_ready(self, **kwargs): 199 if self._settings.DEBUG: 200 warnings.warn('Using settings.DEBUG leads to a memory leak, never ' 201 'use this setting in production environments!') 202 [end of celery/fixups/django.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/celery/fixups/django.py b/celery/fixups/django.py --- a/celery/fixups/django.py +++ b/celery/fixups/django.py @@ -57,8 +57,10 @@ self._worker_fixup = None def install(self): - # Need to add project directory to path - sys.path.append(os.getcwd()) + # Need to add project directory to path. + # The project directory has precedence over system modules, + # so we prepend it to the path. + sys.path.prepend(os.getcwd()) self._settings = symbol_by_name('django.conf:settings') self.app.loader.now = self.now
{"golden_diff": "diff --git a/celery/fixups/django.py b/celery/fixups/django.py\n--- a/celery/fixups/django.py\n+++ b/celery/fixups/django.py\n@@ -57,8 +57,10 @@\n self._worker_fixup = None\n \n def install(self):\n- # Need to add project directory to path\n- sys.path.append(os.getcwd())\n+ # Need to add project directory to path.\n+ # The project directory has precedence over system modules,\n+ # so we prepend it to the path.\n+ sys.path.prepend(os.getcwd())\n \n self._settings = symbol_by_name('django.conf:settings')\n self.app.loader.now = self.now\n", "issue": "Django fixup appends to PYTHONPATH instead of prepending\nHi,\r\n\r\n## Environment & Settings\r\n**Celery version**: 4.2.1 (windowlicker)\r\n\r\n# Steps to Reproduce\r\n\r\nWe are using Celery + Django in [dissemin](https://github.com/dissemin/dissemin/). We have a Django app named \"statistics\" which is actually a name conflicting with a Python 3 package from standard library. This should be fine in principle as long as the `PYTHONPATH` is set so that the local modules have precedence over the system ones.\r\n\r\nWhen running Celery CLI, the system wide module has precedence over the local ones apparently.\r\n\r\nI traced this issue back to [this `sys.path` tweak](https://github.com/celery/celery/blob/072dab85261599234341cc714b0d6f0caca20f00/celery/fixups/django.py#L60-L61), which is actually **appending** local path instead of prepending it.\r\n\r\nI may have missed something, but is it important for some reason to append it and not prepend it in this context?\r\n\r\n# Expected Behavior\r\n\r\nCelery should load the local module as expected.\r\n\r\n# Actual Behavior\r\n\r\n```\r\n# When going through celery CLI\r\nsys.path == ['/Users/lverney/.local/share/virtualenvs/dissemin3/bin', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python36.zip', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/site-packages', '/Users/lverney/.local/share/virtualenvs/dissemin3/src/django-allauth', '/Users/lverney/tmp/dissemin']\r\n\r\n# Without celery\r\nsys.path == ['', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python36.zip', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6', '/Users/lverney/.local/share/virtualenvs/dissemin3/lib/python3.6/site-packages', '/Users/lverney/.local/share/virtualenvs/dissemin3/src/django-allauth']\r\n```\r\n\r\nWe can see that current path is actually appended, not prepended. Therefore, \"system\" modules have precedence on the ones from the local project.\r\n\r\n\r\n# Workaround\r\n\r\nFor people experiencing this issue, `PYTHONPATH=$(pwd) celery \u2026` is a workaround.\r\n\r\n\r\nBest,\r\n\r\n/cc @wetneb who first noticed this issue.\n", "before_files": [{"content": "\"\"\"Django-specific customization.\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport os\nimport sys\nimport warnings\nfrom datetime import datetime\nfrom importlib import import_module\n\nfrom kombu.utils.imports import symbol_by_name\nfrom kombu.utils.objects import cached_property\n\nfrom celery import _state, signals\nfrom celery.exceptions import FixupWarning, ImproperlyConfigured\n\n__all__ = ('DjangoFixup', 'fixup')\n\nERR_NOT_INSTALLED = \"\"\"\\\nEnvironment variable DJANGO_SETTINGS_MODULE is defined\nbut Django isn't installed. Won't apply Django fix-ups!\n\"\"\"\n\n\ndef _maybe_close_fd(fh):\n try:\n os.close(fh.fileno())\n except (AttributeError, OSError, TypeError):\n # TypeError added for celery#962\n pass\n\n\ndef _verify_django_version(django):\n if django.VERSION < (1, 11):\n raise ImproperlyConfigured('Celery 4.x requires Django 1.11 or later.')\n\n\ndef fixup(app, env='DJANGO_SETTINGS_MODULE'):\n \"\"\"Install Django fixup if settings module environment is set.\"\"\"\n SETTINGS_MODULE = os.environ.get(env)\n if SETTINGS_MODULE and 'django' not in app.loader_cls.lower():\n try:\n import django # noqa\n except ImportError:\n warnings.warn(FixupWarning(ERR_NOT_INSTALLED))\n else:\n _verify_django_version(django)\n return DjangoFixup(app).install()\n\n\nclass DjangoFixup(object):\n \"\"\"Fixup installed when using Django.\"\"\"\n\n def __init__(self, app):\n self.app = app\n if _state.default_app is None:\n self.app.set_default()\n self._worker_fixup = None\n\n def install(self):\n # Need to add project directory to path\n sys.path.append(os.getcwd())\n\n self._settings = symbol_by_name('django.conf:settings')\n self.app.loader.now = self.now\n\n signals.import_modules.connect(self.on_import_modules)\n signals.worker_init.connect(self.on_worker_init)\n return self\n\n @property\n def worker_fixup(self):\n if self._worker_fixup is None:\n self._worker_fixup = DjangoWorkerFixup(self.app)\n return self._worker_fixup\n\n @worker_fixup.setter\n def worker_fixup(self, value):\n self._worker_fixup = value\n\n def on_import_modules(self, **kwargs):\n # call django.setup() before task modules are imported\n self.worker_fixup.validate_models()\n\n def on_worker_init(self, **kwargs):\n self.worker_fixup.install()\n\n def now(self, utc=False):\n return datetime.utcnow() if utc else self._now()\n\n def autodiscover_tasks(self):\n from django.apps import apps\n return [config.name for config in apps.get_app_configs()]\n\n @cached_property\n def _now(self):\n return symbol_by_name('django.utils.timezone:now')\n\n\nclass DjangoWorkerFixup(object):\n _db_recycles = 0\n\n def __init__(self, app):\n self.app = app\n self.db_reuse_max = self.app.conf.get('CELERY_DB_REUSE_MAX', None)\n self._db = import_module('django.db')\n self._cache = import_module('django.core.cache')\n self._settings = symbol_by_name('django.conf:settings')\n\n self.interface_errors = (\n symbol_by_name('django.db.utils.InterfaceError'),\n )\n self.DatabaseError = symbol_by_name('django.db:DatabaseError')\n\n def django_setup(self):\n import django\n django.setup()\n\n def validate_models(self):\n from django.core.checks import run_checks\n self.django_setup()\n run_checks()\n\n def install(self):\n signals.beat_embedded_init.connect(self.close_database)\n signals.worker_ready.connect(self.on_worker_ready)\n signals.task_prerun.connect(self.on_task_prerun)\n signals.task_postrun.connect(self.on_task_postrun)\n signals.worker_process_init.connect(self.on_worker_process_init)\n self.close_database()\n self.close_cache()\n return self\n\n def on_worker_process_init(self, **kwargs):\n # Child process must validate models again if on Windows,\n # or if they were started using execv.\n if os.environ.get('FORKED_BY_MULTIPROCESSING'):\n self.validate_models()\n\n # close connections:\n # the parent process may have established these,\n # so need to close them.\n\n # calling db.close() on some DB connections will cause\n # the inherited DB conn to also get broken in the parent\n # process so we need to remove it without triggering any\n # network IO that close() might cause.\n for c in self._db.connections.all():\n if c and c.connection:\n self._maybe_close_db_fd(c.connection)\n\n # use the _ version to avoid DB_REUSE preventing the conn.close() call\n self._close_database()\n self.close_cache()\n\n def _maybe_close_db_fd(self, fd):\n try:\n _maybe_close_fd(fd)\n except self.interface_errors:\n pass\n\n def on_task_prerun(self, sender, **kwargs):\n \"\"\"Called before every task.\"\"\"\n if not getattr(sender.request, 'is_eager', False):\n self.close_database()\n\n def on_task_postrun(self, sender, **kwargs):\n # See https://groups.google.com/group/django-users/\n # browse_thread/thread/78200863d0c07c6d/\n if not getattr(sender.request, 'is_eager', False):\n self.close_database()\n self.close_cache()\n\n def close_database(self, **kwargs):\n if not self.db_reuse_max:\n return self._close_database()\n if self._db_recycles >= self.db_reuse_max * 2:\n self._db_recycles = 0\n self._close_database()\n self._db_recycles += 1\n\n def _close_database(self):\n for conn in self._db.connections.all():\n try:\n conn.close_if_unusable_or_obsolete()\n except self.interface_errors:\n pass\n except self.DatabaseError as exc:\n str_exc = str(exc)\n if 'closed' not in str_exc and 'not connected' not in str_exc:\n raise\n\n def close_cache(self):\n try:\n self._cache.close_caches()\n except (TypeError, AttributeError):\n pass\n\n def on_worker_ready(self, **kwargs):\n if self._settings.DEBUG:\n warnings.warn('Using settings.DEBUG leads to a memory leak, never '\n 'use this setting in production environments!')\n", "path": "celery/fixups/django.py"}]}
3,144
160
gh_patches_debug_36983
rasdani/github-patches
git_diff
liberapay__liberapay.com-1717
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> `Cache-Control: immutable` It's [a new experimental HTTP feature](https://bitsup.blogspot.fr/2016/05/cache-control-immutable.html) that we should probably start using. It's low priority though. </issue> <code> [start of liberapay/utils/http_caching.py] 1 """ 2 Handles HTTP caching. 3 """ 4 5 import atexit 6 from hashlib import md5 7 import os 8 from tempfile import mkstemp 9 10 from aspen.request_processor.dispatcher import DispatchResult, DispatchStatus 11 from pando import Response 12 13 from liberapay.utils import b64encode_s, find_files 14 15 16 ETAGS = {} 17 18 19 def compile_assets(website): 20 cleanup = [] 21 for spt in find_files(website.www_root+'/assets/', '*.spt'): 22 filepath = spt[:-4] # /path/to/www/assets/foo.css 23 if not os.path.exists(filepath): 24 cleanup.append(filepath) 25 dispatch_result = DispatchResult(DispatchStatus.okay, spt, None, None, None) 26 state = dict(dispatch_result=dispatch_result, response=Response()) 27 state['state'] = state 28 resource = website.request_processor.resources.get(spt) 29 content = resource.render(state, dispatch_result, None).body 30 if not isinstance(content, bytes): 31 content = content.encode('utf8') 32 tmpfd, tmpfpath = mkstemp(dir='.') 33 os.write(tmpfd, content) 34 os.close(tmpfd) 35 os.rename(tmpfpath, filepath) 36 if website.env.clean_assets: 37 atexit.register(lambda: rm_f(*cleanup)) 38 39 40 def rm_f(*paths): 41 for path in paths: 42 try: 43 os.unlink(path) 44 except Exception: 45 pass 46 47 48 def clean_assets(www_root): 49 rm_f(*[spt[:-4] for spt in find_files(www_root+'/assets/', '*.spt')]) 50 51 52 def asset_etag(path): 53 if path.endswith('.spt'): 54 return '' 55 if path in ETAGS: 56 return ETAGS[path] 57 with open(path, 'rb') as f: 58 h = b64encode_s(md5(f.read()).digest()) 59 ETAGS[path] = h 60 return h 61 62 63 # algorithm functions 64 65 def get_etag_for_file(dispatch_result, website, state): 66 if dispatch_result.status != DispatchStatus.okay: 67 return {'etag': None} 68 try: 69 return {'etag': asset_etag(dispatch_result.match)} 70 except Exception as e: 71 website.tell_sentry(e, state) 72 return {'etag': None} 73 74 75 def try_to_serve_304(dispatch_result, request, response, etag): 76 """Try to serve a 304 for static resources. 77 """ 78 if not etag: 79 # This is a request for a dynamic resource. 80 return 81 82 qs_etag = request.qs.get('etag') 83 if qs_etag and qs_etag != etag: 84 # Don't serve one version of a file as if it were another. 85 raise response.error(410) 86 87 headers_etag = request.headers.get(b'If-None-Match', b'').decode('ascii', 'replace') 88 if not headers_etag: 89 # This client doesn't want a 304. 90 return 91 92 if headers_etag != etag: 93 # Cache miss, the client sent an old or invalid etag. 94 return 95 96 # Huzzah! 97 # ======= 98 # We can serve a 304! :D 99 100 raise response.success(304) 101 102 103 def add_caching_to_response(response, request=None, etag=None): 104 """Set caching headers. 105 """ 106 if not etag: 107 # This is a dynamic resource, disable caching by default 108 if b'Cache-Control' not in response.headers: 109 response.headers[b'Cache-Control'] = b'no-cache' 110 return 111 112 assert request is not None # sanity check 113 114 if response.code not in (200, 304): 115 return 116 117 # https://developers.google.com/speed/docs/best-practices/caching 118 response.headers[b'Etag'] = etag.encode('ascii') 119 120 if request.qs.get('etag'): 121 # We can cache "indefinitely" when the querystring contains the etag. 122 response.headers[b'Cache-Control'] = b'public, max-age=31536000' 123 else: 124 # Otherwise we cache for 1 hour 125 response.headers[b'Cache-Control'] = b'public, max-age=3600' 126 [end of liberapay/utils/http_caching.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/liberapay/utils/http_caching.py b/liberapay/utils/http_caching.py --- a/liberapay/utils/http_caching.py +++ b/liberapay/utils/http_caching.py @@ -79,47 +79,41 @@ # This is a request for a dynamic resource. return + # Compare the etag in the request's querystring to the one we have. qs_etag = request.qs.get('etag') if qs_etag and qs_etag != etag: # Don't serve one version of a file as if it were another. raise response.error(410) + # Compare the etag in the request's headers to the one we have. headers_etag = request.headers.get(b'If-None-Match', b'').decode('ascii', 'replace') - if not headers_etag: - # This client doesn't want a 304. - return - - if headers_etag != etag: - # Cache miss, the client sent an old or invalid etag. - return - - # Huzzah! - # ======= - # We can serve a 304! :D + if headers_etag and headers_etag == etag: + # Success! We can serve a 304. + raise response.success(304) - raise response.success(304) - -def add_caching_to_response(response, request=None, etag=None): +def add_caching_to_response(state, website, response, request=None, etag=None): """Set caching headers. """ - if not etag: - # This is a dynamic resource, disable caching by default - if b'Cache-Control' not in response.headers: - response.headers[b'Cache-Control'] = b'no-cache' - return - - assert request is not None # sanity check - if response.code not in (200, 304): return - - # https://developers.google.com/speed/docs/best-practices/caching - response.headers[b'Etag'] = etag.encode('ascii') - - if request.qs.get('etag'): - # We can cache "indefinitely" when the querystring contains the etag. - response.headers[b'Cache-Control'] = b'public, max-age=31536000' + if b'Cache-Control' in response.headers: + # The caching policy has already been defined somewhere else + return + if etag: + try: + assert not response.headers.cookie + except Exception as e: + website.tell_sentry(e, state) + response.headers.cookie.clear() + # https://developers.google.com/speed/docs/best-practices/caching + response.headers[b'Etag'] = etag.encode('ascii') + if request.qs.get('etag'): + # We can cache "indefinitely" when the querystring contains the etag. + response.headers[b'Cache-Control'] = b'public, max-age=31536000, immutable' + else: + # Otherwise we cache for 1 hour + response.headers[b'Cache-Control'] = b'public, max-age=3600' else: - # Otherwise we cache for 1 hour - response.headers[b'Cache-Control'] = b'public, max-age=3600' + # This is a dynamic resource, disable caching by default + response.headers[b'Cache-Control'] = b'no-cache'
{"golden_diff": "diff --git a/liberapay/utils/http_caching.py b/liberapay/utils/http_caching.py\n--- a/liberapay/utils/http_caching.py\n+++ b/liberapay/utils/http_caching.py\n@@ -79,47 +79,41 @@\n # This is a request for a dynamic resource.\n return\n \n+ # Compare the etag in the request's querystring to the one we have.\n qs_etag = request.qs.get('etag')\n if qs_etag and qs_etag != etag:\n # Don't serve one version of a file as if it were another.\n raise response.error(410)\n \n+ # Compare the etag in the request's headers to the one we have.\n headers_etag = request.headers.get(b'If-None-Match', b'').decode('ascii', 'replace')\n- if not headers_etag:\n- # This client doesn't want a 304.\n- return\n-\n- if headers_etag != etag:\n- # Cache miss, the client sent an old or invalid etag.\n- return\n-\n- # Huzzah!\n- # =======\n- # We can serve a 304! :D\n+ if headers_etag and headers_etag == etag:\n+ # Success! We can serve a 304.\n+ raise response.success(304)\n \n- raise response.success(304)\n \n-\n-def add_caching_to_response(response, request=None, etag=None):\n+def add_caching_to_response(state, website, response, request=None, etag=None):\n \"\"\"Set caching headers.\n \"\"\"\n- if not etag:\n- # This is a dynamic resource, disable caching by default\n- if b'Cache-Control' not in response.headers:\n- response.headers[b'Cache-Control'] = b'no-cache'\n- return\n-\n- assert request is not None # sanity check\n-\n if response.code not in (200, 304):\n return\n-\n- # https://developers.google.com/speed/docs/best-practices/caching\n- response.headers[b'Etag'] = etag.encode('ascii')\n-\n- if request.qs.get('etag'):\n- # We can cache \"indefinitely\" when the querystring contains the etag.\n- response.headers[b'Cache-Control'] = b'public, max-age=31536000'\n+ if b'Cache-Control' in response.headers:\n+ # The caching policy has already been defined somewhere else\n+ return\n+ if etag:\n+ try:\n+ assert not response.headers.cookie\n+ except Exception as e:\n+ website.tell_sentry(e, state)\n+ response.headers.cookie.clear()\n+ # https://developers.google.com/speed/docs/best-practices/caching\n+ response.headers[b'Etag'] = etag.encode('ascii')\n+ if request.qs.get('etag'):\n+ # We can cache \"indefinitely\" when the querystring contains the etag.\n+ response.headers[b'Cache-Control'] = b'public, max-age=31536000, immutable'\n+ else:\n+ # Otherwise we cache for 1 hour\n+ response.headers[b'Cache-Control'] = b'public, max-age=3600'\n else:\n- # Otherwise we cache for 1 hour\n- response.headers[b'Cache-Control'] = b'public, max-age=3600'\n+ # This is a dynamic resource, disable caching by default\n+ response.headers[b'Cache-Control'] = b'no-cache'\n", "issue": "`Cache-Control: immutable`\nIt's [a new experimental HTTP feature](https://bitsup.blogspot.fr/2016/05/cache-control-immutable.html) that we should probably start using. It's low priority though.\n\n", "before_files": [{"content": "\"\"\"\nHandles HTTP caching.\n\"\"\"\n\nimport atexit\nfrom hashlib import md5\nimport os\nfrom tempfile import mkstemp\n\nfrom aspen.request_processor.dispatcher import DispatchResult, DispatchStatus\nfrom pando import Response\n\nfrom liberapay.utils import b64encode_s, find_files\n\n\nETAGS = {}\n\n\ndef compile_assets(website):\n cleanup = []\n for spt in find_files(website.www_root+'/assets/', '*.spt'):\n filepath = spt[:-4] # /path/to/www/assets/foo.css\n if not os.path.exists(filepath):\n cleanup.append(filepath)\n dispatch_result = DispatchResult(DispatchStatus.okay, spt, None, None, None)\n state = dict(dispatch_result=dispatch_result, response=Response())\n state['state'] = state\n resource = website.request_processor.resources.get(spt)\n content = resource.render(state, dispatch_result, None).body\n if not isinstance(content, bytes):\n content = content.encode('utf8')\n tmpfd, tmpfpath = mkstemp(dir='.')\n os.write(tmpfd, content)\n os.close(tmpfd)\n os.rename(tmpfpath, filepath)\n if website.env.clean_assets:\n atexit.register(lambda: rm_f(*cleanup))\n\n\ndef rm_f(*paths):\n for path in paths:\n try:\n os.unlink(path)\n except Exception:\n pass\n\n\ndef clean_assets(www_root):\n rm_f(*[spt[:-4] for spt in find_files(www_root+'/assets/', '*.spt')])\n\n\ndef asset_etag(path):\n if path.endswith('.spt'):\n return ''\n if path in ETAGS:\n return ETAGS[path]\n with open(path, 'rb') as f:\n h = b64encode_s(md5(f.read()).digest())\n ETAGS[path] = h\n return h\n\n\n# algorithm functions\n\ndef get_etag_for_file(dispatch_result, website, state):\n if dispatch_result.status != DispatchStatus.okay:\n return {'etag': None}\n try:\n return {'etag': asset_etag(dispatch_result.match)}\n except Exception as e:\n website.tell_sentry(e, state)\n return {'etag': None}\n\n\ndef try_to_serve_304(dispatch_result, request, response, etag):\n \"\"\"Try to serve a 304 for static resources.\n \"\"\"\n if not etag:\n # This is a request for a dynamic resource.\n return\n\n qs_etag = request.qs.get('etag')\n if qs_etag and qs_etag != etag:\n # Don't serve one version of a file as if it were another.\n raise response.error(410)\n\n headers_etag = request.headers.get(b'If-None-Match', b'').decode('ascii', 'replace')\n if not headers_etag:\n # This client doesn't want a 304.\n return\n\n if headers_etag != etag:\n # Cache miss, the client sent an old or invalid etag.\n return\n\n # Huzzah!\n # =======\n # We can serve a 304! :D\n\n raise response.success(304)\n\n\ndef add_caching_to_response(response, request=None, etag=None):\n \"\"\"Set caching headers.\n \"\"\"\n if not etag:\n # This is a dynamic resource, disable caching by default\n if b'Cache-Control' not in response.headers:\n response.headers[b'Cache-Control'] = b'no-cache'\n return\n\n assert request is not None # sanity check\n\n if response.code not in (200, 304):\n return\n\n # https://developers.google.com/speed/docs/best-practices/caching\n response.headers[b'Etag'] = etag.encode('ascii')\n\n if request.qs.get('etag'):\n # We can cache \"indefinitely\" when the querystring contains the etag.\n response.headers[b'Cache-Control'] = b'public, max-age=31536000'\n else:\n # Otherwise we cache for 1 hour\n response.headers[b'Cache-Control'] = b'public, max-age=3600'\n", "path": "liberapay/utils/http_caching.py"}]}
1,788
810
gh_patches_debug_67222
rasdani/github-patches
git_diff
svthalia__concrexit-3642
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Unexpected description/clarification on form field in registration form ### Describe the bug On the page to register as a Thalia member, there is a form field with an explanation/clarification saying 'Warning: changing this in the admin does not update the contribution.' Please see the screenshot below. ### How to reproduce Go to the ['Become a Member'](https://thalia.nu/association/register/member/) page. You'll see it :) ### Expected behaviour That the text would not be there. ### Screenshots ![image](https://github.com/svthalia/concrexit/assets/9899175/8dfcdc16-e248-4872-a4af-86563bcc81dd) ### Additional context N/A </issue> <code> [start of website/registrations/forms.py] 1 from django import forms 2 from django.conf import settings 3 from django.core.exceptions import NON_FIELD_ERRORS, ValidationError 4 from django.forms import HiddenInput, TypedChoiceField 5 from django.urls import reverse_lazy 6 from django.utils import timezone 7 from django.utils.safestring import mark_safe 8 from django.utils.text import capfirst 9 from django.utils.translation import gettext_lazy as _ 10 11 from members.models import Membership 12 from payments.widgets import SignatureWidget 13 from registrations import services 14 15 from .models import Reference, Registration, Renewal 16 17 18 class BaseRegistrationForm(forms.ModelForm): 19 """Base form for membership registrations. 20 21 Subclasses must implement setting the right contribution. 22 """ 23 24 birthday = forms.DateField( 25 label=capfirst(_("birthday")), 26 ) 27 28 privacy_policy = forms.BooleanField( 29 required=True, 30 ) 31 32 direct_debit = forms.BooleanField( 33 required=False, 34 label=_("Pay via direct debit"), 35 help_text=_( 36 "This will allow you to sign a Direct Debit mandate, allowing Thalia to withdraw the membership fees from your bank account. Also, you will be able to use this bank account for future payments to Thalia via Thalia Pay." 37 ), 38 ) 39 40 contribution = forms.DecimalField(required=False, widget=HiddenInput()) 41 42 def __init__(self, *args, **kwargs): 43 super().__init__(*args, **kwargs) 44 self.fields["privacy_policy"].label = mark_safe( 45 _('I accept the <a href="{}">privacy policy</a>.').format( 46 reverse_lazy("singlepages:privacy-policy") 47 ) 48 ) 49 self.fields["birthday"].widget.input_type = "date" 50 51 def clean(self): 52 if self.cleaned_data.get("phone_number") is not None: # pragma: no cover 53 self.cleaned_data["phone_number"] = self.cleaned_data[ 54 "phone_number" 55 ].replace(" ", "") 56 super().clean() 57 58 59 class RegistrationAdminForm(forms.ModelForm): 60 """Custom admin form for Registration model to add the widget for the signature.""" 61 62 class Meta: 63 fields = "__all__" 64 model = Registration 65 widgets = { 66 "signature": SignatureWidget(), 67 } 68 69 70 class MemberRegistrationForm(BaseRegistrationForm): 71 """Form for member registrations.""" 72 73 this_year = timezone.now().year 74 years = reversed( 75 [(x, f"{x} - {x + 1}") for x in range(this_year - 20, this_year + 1)] 76 ) 77 78 starting_year = TypedChoiceField( 79 choices=years, 80 coerce=int, 81 empty_value=this_year, 82 required=False, 83 help_text=_("What lecture year did you start studying at Radboud University?"), 84 ) 85 86 def __init__(self, *args, **kwargs): 87 super().__init__(*args, **kwargs) 88 self.fields["student_number"].required = True 89 self.fields["programme"].required = True 90 self.fields["starting_year"].required = True 91 92 class Meta: 93 model = Registration 94 widgets = {"signature": SignatureWidget()} 95 fields = ( 96 "length", 97 "first_name", 98 "last_name", 99 "birthday", 100 "email", 101 "phone_number", 102 "student_number", 103 "programme", 104 "starting_year", 105 "address_street", 106 "address_street2", 107 "address_postal_code", 108 "address_city", 109 "address_country", 110 "optin_birthday", 111 "optin_mailinglist", 112 "contribution", 113 "membership_type", 114 "direct_debit", 115 "initials", 116 "iban", 117 "bic", 118 "signature", 119 ) 120 121 def clean(self): 122 super().clean() 123 self.cleaned_data["contribution"] = settings.MEMBERSHIP_PRICES[ 124 self.cleaned_data["length"] 125 ] 126 127 return self.cleaned_data 128 129 130 class BenefactorRegistrationForm(BaseRegistrationForm): 131 """Form for benefactor registrations.""" 132 133 icis_employee = forms.BooleanField( 134 required=False, label=_("I am an employee of iCIS") 135 ) 136 137 contribution = forms.DecimalField( 138 required=True, 139 max_digits=5, 140 decimal_places=2, 141 ) 142 143 class Meta: 144 model = Registration 145 widgets = { 146 "signature": SignatureWidget(), 147 } 148 fields = ( 149 "length", 150 "first_name", 151 "last_name", 152 "birthday", 153 "email", 154 "phone_number", 155 "student_number", 156 "address_street", 157 "address_street2", 158 "address_postal_code", 159 "address_city", 160 "address_country", 161 "optin_birthday", 162 "optin_mailinglist", 163 "contribution", 164 "membership_type", 165 "direct_debit", 166 "initials", 167 "iban", 168 "bic", 169 "signature", 170 ) 171 172 173 class RenewalForm(forms.ModelForm): 174 """Form for membership renewals.""" 175 176 privacy_policy = forms.BooleanField( 177 required=True, 178 ) 179 180 icis_employee = forms.BooleanField( 181 required=False, label=_("I am an employee of iCIS") 182 ) 183 184 contribution = forms.DecimalField( 185 required=False, 186 max_digits=5, 187 decimal_places=2, 188 ) 189 190 def __init__(self, *args, **kwargs): 191 super().__init__(*args, **kwargs) 192 self.fields["privacy_policy"].label = mark_safe( 193 _('I accept the <a href="{}">privacy policy</a>.').format( 194 reverse_lazy("singlepages:privacy-policy") 195 ) 196 ) 197 self.fields["length"].help_text = ( 198 "A discount of €7,50 will be applied if you upgrade your (active) year membership " 199 "to a membership until graduation. You will only have to pay €22,50 in that case." 200 ) 201 202 class Meta: 203 model = Renewal 204 fields = ( 205 "member", 206 "length", 207 "contribution", 208 "membership_type", 209 "no_references", 210 "remarks", 211 ) 212 213 def clean(self): 214 super().clean() 215 if self.cleaned_data["member"].profile.is_minimized: 216 raise ValidationError( 217 "It's not possible to renew a membership using an incomplete profile." 218 ) 219 220 if self.cleaned_data["length"] == Renewal.MEMBERSHIP_STUDY: 221 now = timezone.now() 222 if Membership.objects.filter( 223 user=self.cleaned_data["member"], 224 type=Membership.MEMBER, 225 until__gte=now, 226 since__lte=now, 227 ).exists(): 228 # The membership upgrade discount applies if, at the time a Renewal is 229 # created, the user has an active 'member' type membership for a year. 230 self.cleaned_data["contribution"] = ( 231 settings.MEMBERSHIP_PRICES[Renewal.MEMBERSHIP_STUDY] 232 - settings.MEMBERSHIP_PRICES[Renewal.MEMBERSHIP_YEAR] 233 ) 234 else: 235 self.cleaned_data["contribution"] = settings.MEMBERSHIP_PRICES[ 236 Renewal.MEMBERSHIP_STUDY 237 ] 238 elif self.cleaned_data["membership_type"] == Membership.MEMBER: 239 self.cleaned_data["contribution"] = settings.MEMBERSHIP_PRICES[ 240 self.cleaned_data["length"] 241 ] 242 243 return self.cleaned_data 244 245 246 class ReferenceForm(forms.ModelForm): 247 def clean(self): 248 super().clean() 249 membership = self.cleaned_data["member"].current_membership 250 if membership and membership.type == Membership.BENEFACTOR: 251 raise ValidationError(_("Benefactors cannot give references.")) 252 253 membership = self.cleaned_data["member"].latest_membership 254 if ( 255 membership 256 and membership.until 257 and membership.until < services.calculate_membership_since() 258 ): 259 raise ValidationError( 260 "It's not possible to give references for memberships " 261 "that start after your own membership's end." 262 ) 263 264 class Meta: 265 model = Reference 266 fields = "__all__" 267 error_messages = { 268 NON_FIELD_ERRORS: { 269 "unique_together": _( 270 "You've already given a reference for this person." 271 ), 272 } 273 } 274 [end of website/registrations/forms.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/website/registrations/forms.py b/website/registrations/forms.py --- a/website/registrations/forms.py +++ b/website/registrations/forms.py @@ -47,6 +47,7 @@ ) ) self.fields["birthday"].widget.input_type = "date" + self.fields["length"].help_text = None def clean(self): if self.cleaned_data.get("phone_number") is not None: # pragma: no cover
{"golden_diff": "diff --git a/website/registrations/forms.py b/website/registrations/forms.py\n--- a/website/registrations/forms.py\n+++ b/website/registrations/forms.py\n@@ -47,6 +47,7 @@\n )\n )\n self.fields[\"birthday\"].widget.input_type = \"date\"\n+ self.fields[\"length\"].help_text = None\n \n def clean(self):\n if self.cleaned_data.get(\"phone_number\") is not None: # pragma: no cover\n", "issue": "Unexpected description/clarification on form field in registration form\n### Describe the bug\r\n\r\nOn the page to register as a Thalia member, there is a form field with an explanation/clarification saying 'Warning: changing this in the admin does not update the contribution.' Please see the screenshot below.\r\n\r\n### How to reproduce\r\n\r\nGo to the ['Become a Member'](https://thalia.nu/association/register/member/) page. You'll see it :)\r\n\r\n### Expected behaviour\r\n\r\nThat the text would not be there.\r\n\r\n### Screenshots\r\n\r\n![image](https://github.com/svthalia/concrexit/assets/9899175/8dfcdc16-e248-4872-a4af-86563bcc81dd)\r\n\r\n### Additional context\r\n\r\nN/A\r\n\n", "before_files": [{"content": "from django import forms\nfrom django.conf import settings\nfrom django.core.exceptions import NON_FIELD_ERRORS, ValidationError\nfrom django.forms import HiddenInput, TypedChoiceField\nfrom django.urls import reverse_lazy\nfrom django.utils import timezone\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import capfirst\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Membership\nfrom payments.widgets import SignatureWidget\nfrom registrations import services\n\nfrom .models import Reference, Registration, Renewal\n\n\nclass BaseRegistrationForm(forms.ModelForm):\n \"\"\"Base form for membership registrations.\n\n Subclasses must implement setting the right contribution.\n \"\"\"\n\n birthday = forms.DateField(\n label=capfirst(_(\"birthday\")),\n )\n\n privacy_policy = forms.BooleanField(\n required=True,\n )\n\n direct_debit = forms.BooleanField(\n required=False,\n label=_(\"Pay via direct debit\"),\n help_text=_(\n \"This will allow you to sign a Direct Debit mandate, allowing Thalia to withdraw the membership fees from your bank account. Also, you will be able to use this bank account for future payments to Thalia via Thalia Pay.\"\n ),\n )\n\n contribution = forms.DecimalField(required=False, widget=HiddenInput())\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n self.fields[\"birthday\"].widget.input_type = \"date\"\n\n def clean(self):\n if self.cleaned_data.get(\"phone_number\") is not None: # pragma: no cover\n self.cleaned_data[\"phone_number\"] = self.cleaned_data[\n \"phone_number\"\n ].replace(\" \", \"\")\n super().clean()\n\n\nclass RegistrationAdminForm(forms.ModelForm):\n \"\"\"Custom admin form for Registration model to add the widget for the signature.\"\"\"\n\n class Meta:\n fields = \"__all__\"\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n\n\nclass MemberRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for member registrations.\"\"\"\n\n this_year = timezone.now().year\n years = reversed(\n [(x, f\"{x} - {x + 1}\") for x in range(this_year - 20, this_year + 1)]\n )\n\n starting_year = TypedChoiceField(\n choices=years,\n coerce=int,\n empty_value=this_year,\n required=False,\n help_text=_(\"What lecture year did you start studying at Radboud University?\"),\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"student_number\"].required = True\n self.fields[\"programme\"].required = True\n self.fields[\"starting_year\"].required = True\n\n class Meta:\n model = Registration\n widgets = {\"signature\": SignatureWidget()}\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"programme\",\n \"starting_year\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"contribution\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n def clean(self):\n super().clean()\n self.cleaned_data[\"contribution\"] = settings.MEMBERSHIP_PRICES[\n self.cleaned_data[\"length\"]\n ]\n\n return self.cleaned_data\n\n\nclass BenefactorRegistrationForm(BaseRegistrationForm):\n \"\"\"Form for benefactor registrations.\"\"\"\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n contribution = forms.DecimalField(\n required=True,\n max_digits=5,\n decimal_places=2,\n )\n\n class Meta:\n model = Registration\n widgets = {\n \"signature\": SignatureWidget(),\n }\n fields = (\n \"length\",\n \"first_name\",\n \"last_name\",\n \"birthday\",\n \"email\",\n \"phone_number\",\n \"student_number\",\n \"address_street\",\n \"address_street2\",\n \"address_postal_code\",\n \"address_city\",\n \"address_country\",\n \"optin_birthday\",\n \"optin_mailinglist\",\n \"contribution\",\n \"membership_type\",\n \"direct_debit\",\n \"initials\",\n \"iban\",\n \"bic\",\n \"signature\",\n )\n\n\nclass RenewalForm(forms.ModelForm):\n \"\"\"Form for membership renewals.\"\"\"\n\n privacy_policy = forms.BooleanField(\n required=True,\n )\n\n icis_employee = forms.BooleanField(\n required=False, label=_(\"I am an employee of iCIS\")\n )\n\n contribution = forms.DecimalField(\n required=False,\n max_digits=5,\n decimal_places=2,\n )\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"privacy_policy\"].label = mark_safe(\n _('I accept the <a href=\"{}\">privacy policy</a>.').format(\n reverse_lazy(\"singlepages:privacy-policy\")\n )\n )\n self.fields[\"length\"].help_text = (\n \"A discount of \u20ac7,50 will be applied if you upgrade your (active) year membership \"\n \"to a membership until graduation. You will only have to pay \u20ac22,50 in that case.\"\n )\n\n class Meta:\n model = Renewal\n fields = (\n \"member\",\n \"length\",\n \"contribution\",\n \"membership_type\",\n \"no_references\",\n \"remarks\",\n )\n\n def clean(self):\n super().clean()\n if self.cleaned_data[\"member\"].profile.is_minimized:\n raise ValidationError(\n \"It's not possible to renew a membership using an incomplete profile.\"\n )\n\n if self.cleaned_data[\"length\"] == Renewal.MEMBERSHIP_STUDY:\n now = timezone.now()\n if Membership.objects.filter(\n user=self.cleaned_data[\"member\"],\n type=Membership.MEMBER,\n until__gte=now,\n since__lte=now,\n ).exists():\n # The membership upgrade discount applies if, at the time a Renewal is\n # created, the user has an active 'member' type membership for a year.\n self.cleaned_data[\"contribution\"] = (\n settings.MEMBERSHIP_PRICES[Renewal.MEMBERSHIP_STUDY]\n - settings.MEMBERSHIP_PRICES[Renewal.MEMBERSHIP_YEAR]\n )\n else:\n self.cleaned_data[\"contribution\"] = settings.MEMBERSHIP_PRICES[\n Renewal.MEMBERSHIP_STUDY\n ]\n elif self.cleaned_data[\"membership_type\"] == Membership.MEMBER:\n self.cleaned_data[\"contribution\"] = settings.MEMBERSHIP_PRICES[\n self.cleaned_data[\"length\"]\n ]\n\n return self.cleaned_data\n\n\nclass ReferenceForm(forms.ModelForm):\n def clean(self):\n super().clean()\n membership = self.cleaned_data[\"member\"].current_membership\n if membership and membership.type == Membership.BENEFACTOR:\n raise ValidationError(_(\"Benefactors cannot give references.\"))\n\n membership = self.cleaned_data[\"member\"].latest_membership\n if (\n membership\n and membership.until\n and membership.until < services.calculate_membership_since()\n ):\n raise ValidationError(\n \"It's not possible to give references for memberships \"\n \"that start after your own membership's end.\"\n )\n\n class Meta:\n model = Reference\n fields = \"__all__\"\n error_messages = {\n NON_FIELD_ERRORS: {\n \"unique_together\": _(\n \"You've already given a reference for this person.\"\n ),\n }\n }\n", "path": "website/registrations/forms.py"}]}
3,138
107
gh_patches_debug_11742
rasdani/github-patches
git_diff
kserve__kserve-832
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Calling kfserving.Storage.download(src_uri, dest_path) for an S3 URI without having the environment variable AWS_ENDPOINT_URL set will always fail. /kind bug Calling kfserving.Storage.download(src_uri, dest_path) for an S3 URI *without* having the environment variable `AWS_ENDPOINT_URL` set will always fail. **What steps did you take and what happened:** The bug manifested from a use of an MLFLOW_SERVER prepackaged Seldon server where the model resided in S3 and the `envSecretRefName` included the `AWS_KEY`, `AWS_SECRET_KEY` and `S3_ENDPOINT` all properly defined with valid values. **What did you expect to happen:** The S3 file should be downloaded (in this use case above this means the Seldon model server loads the model file from S3 during the container intitialization). **Anything else you would like to add:** The root cause appears to be the bad default used when creating the Minio client: ``` def _create_minio_client(): # Remove possible http scheme for Minio url = urlparse(os.getenv("AWS_ENDPOINT_URL", "s3.amazonaws.com")) use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv("S3_USE_HTTPS", "true")) return Minio(url.netloc,... ``` The `netloc` will *always* be None _unless_ you happen to set the environment variable `AWS_ENDPOINT_URL`. This is because the default value is bad (missing the scheme), so the "s3.amazonaws.com" gets interpreted as the *path* instead of the *netloc* ``` >>> import os >>> from urllib.parse import urlparse >>> url = urlparse(os.getenv("AWS_ENDPOINT_URL", "s3.amazonaws.com")) >>> print(url) ParseResult(scheme='', netloc='', path='s3.amazonaws.com', params='', query='', fragment='') ``` **Environment:** - Istio Version: - Knative Version: - KFServing Version: - Kubeflow version: - Minikube version: - Kubernetes version: (use `kubectl version`): v1.14.9-eks-c0eccc - OS (e.g. from `/etc/os-release`): </issue> <code> [start of python/kfserving/kfserving/storage.py] 1 # Copyright 2020 kubeflow.org. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import glob 16 import logging 17 import tempfile 18 import os 19 import re 20 from urllib.parse import urlparse 21 from azure.storage.blob import BlockBlobService 22 from google.auth import exceptions 23 from google.cloud import storage 24 from minio import Minio 25 26 _GCS_PREFIX = "gs://" 27 _S3_PREFIX = "s3://" 28 _BLOB_RE = "https://(.+?).blob.core.windows.net/(.+)" 29 _LOCAL_PREFIX = "file://" 30 31 32 class Storage(object): # pylint: disable=too-few-public-methods 33 @staticmethod 34 def download(uri: str, out_dir: str = None) -> str: 35 logging.info("Copying contents of %s to local", uri) 36 37 is_local = False 38 if uri.startswith(_LOCAL_PREFIX) or os.path.exists(uri): 39 is_local = True 40 41 if out_dir is None: 42 if is_local: 43 # noop if out_dir is not set and the path is local 44 return Storage._download_local(uri) 45 out_dir = tempfile.mkdtemp() 46 47 if uri.startswith(_GCS_PREFIX): 48 Storage._download_gcs(uri, out_dir) 49 elif uri.startswith(_S3_PREFIX): 50 Storage._download_s3(uri, out_dir) 51 elif re.search(_BLOB_RE, uri): 52 Storage._download_blob(uri, out_dir) 53 elif is_local: 54 return Storage._download_local(uri, out_dir) 55 else: 56 raise Exception("Cannot recognize storage type for " + uri + 57 "\n'%s', '%s', and '%s' are the current available storage type." % 58 (_GCS_PREFIX, _S3_PREFIX, _LOCAL_PREFIX)) 59 60 logging.info("Successfully copied %s to %s", uri, out_dir) 61 return out_dir 62 63 @staticmethod 64 def _download_s3(uri, temp_dir: str): 65 client = Storage._create_minio_client() 66 bucket_args = uri.replace(_S3_PREFIX, "", 1).split("/", 1) 67 bucket_name = bucket_args[0] 68 bucket_path = bucket_args[1] if len(bucket_args) > 1 else "" 69 objects = client.list_objects(bucket_name, prefix=bucket_path, recursive=True) 70 count = 0 71 for obj in objects: 72 # Replace any prefix from the object key with temp_dir 73 subdir_object_key = obj.object_name.replace(bucket_path, "", 1).strip("/") 74 # fget_object handles directory creation if does not exist 75 if not obj.is_dir: 76 if subdir_object_key == "": 77 subdir_object_key = obj.object_name 78 client.fget_object(bucket_name, obj.object_name, 79 os.path.join(temp_dir, subdir_object_key)) 80 count = count + 1 81 if count == 0: 82 raise RuntimeError("Failed to fetch model. \ 83 The path or model %s does not exist." % (uri)) 84 85 @staticmethod 86 def _download_gcs(uri, temp_dir: str): 87 try: 88 storage_client = storage.Client() 89 except exceptions.DefaultCredentialsError: 90 storage_client = storage.Client.create_anonymous_client() 91 bucket_args = uri.replace(_GCS_PREFIX, "", 1).split("/", 1) 92 bucket_name = bucket_args[0] 93 bucket_path = bucket_args[1] if len(bucket_args) > 1 else "" 94 bucket = storage_client.bucket(bucket_name) 95 prefix = bucket_path 96 if not prefix.endswith("/"): 97 prefix = prefix + "/" 98 blobs = bucket.list_blobs(prefix=prefix) 99 count = 0 100 for blob in blobs: 101 # Replace any prefix from the object key with temp_dir 102 subdir_object_key = blob.name.replace(bucket_path, "", 1).strip("/") 103 104 # Create necessary subdirectory to store the object locally 105 if "/" in subdir_object_key: 106 local_object_dir = os.path.join(temp_dir, subdir_object_key.rsplit("/", 1)[0]) 107 if not os.path.isdir(local_object_dir): 108 os.makedirs(local_object_dir, exist_ok=True) 109 if subdir_object_key.strip() != "": 110 dest_path = os.path.join(temp_dir, subdir_object_key) 111 logging.info("Downloading: %s", dest_path) 112 blob.download_to_filename(dest_path) 113 count = count + 1 114 if count == 0: 115 raise RuntimeError("Failed to fetch model. \ 116 The path or model %s does not exist." % (uri)) 117 118 @staticmethod 119 def _download_blob(uri, out_dir: str): # pylint: disable=too-many-locals 120 match = re.search(_BLOB_RE, uri) 121 account_name = match.group(1) 122 storage_url = match.group(2) 123 container_name, prefix = storage_url.split("/", 1) 124 125 logging.info("Connecting to BLOB account: [%s], container: [%s], prefix: [%s]", 126 account_name, 127 container_name, 128 prefix) 129 try: 130 block_blob_service = BlockBlobService(account_name=account_name) 131 blobs = block_blob_service.list_blobs(container_name, prefix=prefix) 132 except Exception: # pylint: disable=broad-except 133 token = Storage._get_azure_storage_token() 134 if token is None: 135 logging.warning("Azure credentials not found, retrying anonymous access") 136 block_blob_service = BlockBlobService(account_name=account_name, token_credential=token) 137 blobs = block_blob_service.list_blobs(container_name, prefix=prefix) 138 count = 0 139 for blob in blobs: 140 dest_path = os.path.join(out_dir, blob.name) 141 if "/" in blob.name: 142 head, tail = os.path.split(blob.name) 143 if prefix is not None: 144 head = head[len(prefix):] 145 if head.startswith('/'): 146 head = head[1:] 147 dir_path = os.path.join(out_dir, head) 148 dest_path = os.path.join(dir_path, tail) 149 if not os.path.isdir(dir_path): 150 os.makedirs(dir_path) 151 152 logging.info("Downloading: %s to %s", blob.name, dest_path) 153 block_blob_service.get_blob_to_path(container_name, blob.name, dest_path) 154 count = count + 1 155 if count == 0: 156 raise RuntimeError("Failed to fetch model. \ 157 The path or model %s does not exist." % (uri)) 158 159 @staticmethod 160 def _get_azure_storage_token(): 161 tenant_id = os.getenv("AZ_TENANT_ID", "") 162 client_id = os.getenv("AZ_CLIENT_ID", "") 163 client_secret = os.getenv("AZ_CLIENT_SECRET", "") 164 subscription_id = os.getenv("AZ_SUBSCRIPTION_ID", "") 165 166 if tenant_id == "" or client_id == "" or client_secret == "" or subscription_id == "": 167 return None 168 169 # note the SP must have "Storage Blob Data Owner" perms for this to work 170 import adal 171 from azure.storage.common import TokenCredential 172 173 authority_url = "https://login.microsoftonline.com/" + tenant_id 174 175 context = adal.AuthenticationContext(authority_url) 176 177 token = context.acquire_token_with_client_credentials( 178 "https://storage.azure.com/", 179 client_id, 180 client_secret) 181 182 token_credential = TokenCredential(token["accessToken"]) 183 184 logging.info("Retrieved SP token credential for client_id: %s", client_id) 185 186 return token_credential 187 188 @staticmethod 189 def _download_local(uri, out_dir=None): 190 local_path = uri.replace(_LOCAL_PREFIX, "", 1) 191 if not os.path.exists(local_path): 192 raise RuntimeError("Local path %s does not exist." % (uri)) 193 194 if out_dir is None: 195 return local_path 196 elif not os.path.isdir(out_dir): 197 os.makedirs(out_dir) 198 199 if os.path.isdir(local_path): 200 local_path = os.path.join(local_path, "*") 201 202 for src in glob.glob(local_path): 203 _, tail = os.path.split(src) 204 dest_path = os.path.join(out_dir, tail) 205 logging.info("Linking: %s to %s", src, dest_path) 206 os.symlink(src, dest_path) 207 return out_dir 208 209 @staticmethod 210 def _create_minio_client(): 211 # Remove possible http scheme for Minio 212 url = urlparse(os.getenv("AWS_ENDPOINT_URL", "s3.amazonaws.com")) 213 use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv("S3_USE_HTTPS", "true")) 214 return Minio(url.netloc, 215 access_key=os.getenv("AWS_ACCESS_KEY_ID", ""), 216 secret_key=os.getenv("AWS_SECRET_ACCESS_KEY", ""), 217 region=os.getenv("AWS_REGION", ""), 218 secure=use_ssl) 219 [end of python/kfserving/kfserving/storage.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/python/kfserving/kfserving/storage.py b/python/kfserving/kfserving/storage.py --- a/python/kfserving/kfserving/storage.py +++ b/python/kfserving/kfserving/storage.py @@ -208,8 +208,8 @@ @staticmethod def _create_minio_client(): - # Remove possible http scheme for Minio - url = urlparse(os.getenv("AWS_ENDPOINT_URL", "s3.amazonaws.com")) + # Adding prefixing "http" in urlparse is necessary for it to be the netloc + url = urlparse(os.getenv("AWS_ENDPOINT_URL", "http://s3.amazonaws.com")) use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv("S3_USE_HTTPS", "true")) return Minio(url.netloc, access_key=os.getenv("AWS_ACCESS_KEY_ID", ""),
{"golden_diff": "diff --git a/python/kfserving/kfserving/storage.py b/python/kfserving/kfserving/storage.py\n--- a/python/kfserving/kfserving/storage.py\n+++ b/python/kfserving/kfserving/storage.py\n@@ -208,8 +208,8 @@\n \n @staticmethod\n def _create_minio_client():\n- # Remove possible http scheme for Minio\n- url = urlparse(os.getenv(\"AWS_ENDPOINT_URL\", \"s3.amazonaws.com\"))\n+ # Adding prefixing \"http\" in urlparse is necessary for it to be the netloc\n+ url = urlparse(os.getenv(\"AWS_ENDPOINT_URL\", \"http://s3.amazonaws.com\"))\n use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv(\"S3_USE_HTTPS\", \"true\"))\n return Minio(url.netloc,\n access_key=os.getenv(\"AWS_ACCESS_KEY_ID\", \"\"),\n", "issue": "Calling kfserving.Storage.download(src_uri, dest_path) for an S3 URI without having the environment variable AWS_ENDPOINT_URL set will always fail.\n/kind bug\r\n\r\nCalling kfserving.Storage.download(src_uri, dest_path) for an S3 URI *without* having the environment variable `AWS_ENDPOINT_URL` set will always fail.\r\n\r\n**What steps did you take and what happened:**\r\n\r\nThe bug manifested from a use of an MLFLOW_SERVER prepackaged Seldon server where the model resided in S3 and the `envSecretRefName` included the `AWS_KEY`, `AWS_SECRET_KEY` and `S3_ENDPOINT` all properly defined with valid values.\r\n\r\n\r\n**What did you expect to happen:**\r\n\r\nThe S3 file should be downloaded (in this use case above this means the Seldon model server loads the model file from S3 during the container intitialization).\r\n\r\n**Anything else you would like to add:**\r\n\r\nThe root cause appears to be the bad default used when creating the Minio client:\r\n\r\n```\r\n def _create_minio_client():\r\n # Remove possible http scheme for Minio\r\n url = urlparse(os.getenv(\"AWS_ENDPOINT_URL\", \"s3.amazonaws.com\"))\r\n use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv(\"S3_USE_HTTPS\", \"true\"))\r\n return Minio(url.netloc,...\r\n```\r\n\r\nThe `netloc` will *always* be None _unless_ you happen to set the environment variable `AWS_ENDPOINT_URL`. This is because the default value is bad (missing the scheme), so the \"s3.amazonaws.com\" gets interpreted as the *path* instead of the *netloc*\r\n\r\n```\r\n>>> import os\r\n>>> from urllib.parse import urlparse\r\n>>> url = urlparse(os.getenv(\"AWS_ENDPOINT_URL\", \"s3.amazonaws.com\"))\r\n>>> print(url)\r\nParseResult(scheme='', netloc='', path='s3.amazonaws.com', params='', query='', fragment='')\r\n```\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KFServing Version:\r\n- Kubeflow version:\r\n- Minikube version:\r\n- Kubernetes version: (use `kubectl version`): v1.14.9-eks-c0eccc\r\n- OS (e.g. from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2020 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport glob\nimport logging\nimport tempfile\nimport os\nimport re\nfrom urllib.parse import urlparse\nfrom azure.storage.blob import BlockBlobService\nfrom google.auth import exceptions\nfrom google.cloud import storage\nfrom minio import Minio\n\n_GCS_PREFIX = \"gs://\"\n_S3_PREFIX = \"s3://\"\n_BLOB_RE = \"https://(.+?).blob.core.windows.net/(.+)\"\n_LOCAL_PREFIX = \"file://\"\n\n\nclass Storage(object): # pylint: disable=too-few-public-methods\n @staticmethod\n def download(uri: str, out_dir: str = None) -> str:\n logging.info(\"Copying contents of %s to local\", uri)\n\n is_local = False\n if uri.startswith(_LOCAL_PREFIX) or os.path.exists(uri):\n is_local = True\n\n if out_dir is None:\n if is_local:\n # noop if out_dir is not set and the path is local\n return Storage._download_local(uri)\n out_dir = tempfile.mkdtemp()\n\n if uri.startswith(_GCS_PREFIX):\n Storage._download_gcs(uri, out_dir)\n elif uri.startswith(_S3_PREFIX):\n Storage._download_s3(uri, out_dir)\n elif re.search(_BLOB_RE, uri):\n Storage._download_blob(uri, out_dir)\n elif is_local:\n return Storage._download_local(uri, out_dir)\n else:\n raise Exception(\"Cannot recognize storage type for \" + uri +\n \"\\n'%s', '%s', and '%s' are the current available storage type.\" %\n (_GCS_PREFIX, _S3_PREFIX, _LOCAL_PREFIX))\n\n logging.info(\"Successfully copied %s to %s\", uri, out_dir)\n return out_dir\n\n @staticmethod\n def _download_s3(uri, temp_dir: str):\n client = Storage._create_minio_client()\n bucket_args = uri.replace(_S3_PREFIX, \"\", 1).split(\"/\", 1)\n bucket_name = bucket_args[0]\n bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n objects = client.list_objects(bucket_name, prefix=bucket_path, recursive=True)\n count = 0\n for obj in objects:\n # Replace any prefix from the object key with temp_dir\n subdir_object_key = obj.object_name.replace(bucket_path, \"\", 1).strip(\"/\")\n # fget_object handles directory creation if does not exist\n if not obj.is_dir:\n if subdir_object_key == \"\":\n subdir_object_key = obj.object_name\n client.fget_object(bucket_name, obj.object_name,\n os.path.join(temp_dir, subdir_object_key))\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. \\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _download_gcs(uri, temp_dir: str):\n try:\n storage_client = storage.Client()\n except exceptions.DefaultCredentialsError:\n storage_client = storage.Client.create_anonymous_client()\n bucket_args = uri.replace(_GCS_PREFIX, \"\", 1).split(\"/\", 1)\n bucket_name = bucket_args[0]\n bucket_path = bucket_args[1] if len(bucket_args) > 1 else \"\"\n bucket = storage_client.bucket(bucket_name)\n prefix = bucket_path\n if not prefix.endswith(\"/\"):\n prefix = prefix + \"/\"\n blobs = bucket.list_blobs(prefix=prefix)\n count = 0\n for blob in blobs:\n # Replace any prefix from the object key with temp_dir\n subdir_object_key = blob.name.replace(bucket_path, \"\", 1).strip(\"/\")\n\n # Create necessary subdirectory to store the object locally\n if \"/\" in subdir_object_key:\n local_object_dir = os.path.join(temp_dir, subdir_object_key.rsplit(\"/\", 1)[0])\n if not os.path.isdir(local_object_dir):\n os.makedirs(local_object_dir, exist_ok=True)\n if subdir_object_key.strip() != \"\":\n dest_path = os.path.join(temp_dir, subdir_object_key)\n logging.info(\"Downloading: %s\", dest_path)\n blob.download_to_filename(dest_path)\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. \\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _download_blob(uri, out_dir: str): # pylint: disable=too-many-locals\n match = re.search(_BLOB_RE, uri)\n account_name = match.group(1)\n storage_url = match.group(2)\n container_name, prefix = storage_url.split(\"/\", 1)\n\n logging.info(\"Connecting to BLOB account: [%s], container: [%s], prefix: [%s]\",\n account_name,\n container_name,\n prefix)\n try:\n block_blob_service = BlockBlobService(account_name=account_name)\n blobs = block_blob_service.list_blobs(container_name, prefix=prefix)\n except Exception: # pylint: disable=broad-except\n token = Storage._get_azure_storage_token()\n if token is None:\n logging.warning(\"Azure credentials not found, retrying anonymous access\")\n block_blob_service = BlockBlobService(account_name=account_name, token_credential=token)\n blobs = block_blob_service.list_blobs(container_name, prefix=prefix)\n count = 0\n for blob in blobs:\n dest_path = os.path.join(out_dir, blob.name)\n if \"/\" in blob.name:\n head, tail = os.path.split(blob.name)\n if prefix is not None:\n head = head[len(prefix):]\n if head.startswith('/'):\n head = head[1:]\n dir_path = os.path.join(out_dir, head)\n dest_path = os.path.join(dir_path, tail)\n if not os.path.isdir(dir_path):\n os.makedirs(dir_path)\n\n logging.info(\"Downloading: %s to %s\", blob.name, dest_path)\n block_blob_service.get_blob_to_path(container_name, blob.name, dest_path)\n count = count + 1\n if count == 0:\n raise RuntimeError(\"Failed to fetch model. \\\nThe path or model %s does not exist.\" % (uri))\n\n @staticmethod\n def _get_azure_storage_token():\n tenant_id = os.getenv(\"AZ_TENANT_ID\", \"\")\n client_id = os.getenv(\"AZ_CLIENT_ID\", \"\")\n client_secret = os.getenv(\"AZ_CLIENT_SECRET\", \"\")\n subscription_id = os.getenv(\"AZ_SUBSCRIPTION_ID\", \"\")\n\n if tenant_id == \"\" or client_id == \"\" or client_secret == \"\" or subscription_id == \"\":\n return None\n\n # note the SP must have \"Storage Blob Data Owner\" perms for this to work\n import adal\n from azure.storage.common import TokenCredential\n\n authority_url = \"https://login.microsoftonline.com/\" + tenant_id\n\n context = adal.AuthenticationContext(authority_url)\n\n token = context.acquire_token_with_client_credentials(\n \"https://storage.azure.com/\",\n client_id,\n client_secret)\n\n token_credential = TokenCredential(token[\"accessToken\"])\n\n logging.info(\"Retrieved SP token credential for client_id: %s\", client_id)\n\n return token_credential\n\n @staticmethod\n def _download_local(uri, out_dir=None):\n local_path = uri.replace(_LOCAL_PREFIX, \"\", 1)\n if not os.path.exists(local_path):\n raise RuntimeError(\"Local path %s does not exist.\" % (uri))\n\n if out_dir is None:\n return local_path\n elif not os.path.isdir(out_dir):\n os.makedirs(out_dir)\n\n if os.path.isdir(local_path):\n local_path = os.path.join(local_path, \"*\")\n\n for src in glob.glob(local_path):\n _, tail = os.path.split(src)\n dest_path = os.path.join(out_dir, tail)\n logging.info(\"Linking: %s to %s\", src, dest_path)\n os.symlink(src, dest_path)\n return out_dir\n\n @staticmethod\n def _create_minio_client():\n # Remove possible http scheme for Minio\n url = urlparse(os.getenv(\"AWS_ENDPOINT_URL\", \"s3.amazonaws.com\"))\n use_ssl = url.scheme == 'https' if url.scheme else bool(os.getenv(\"S3_USE_HTTPS\", \"true\"))\n return Minio(url.netloc,\n access_key=os.getenv(\"AWS_ACCESS_KEY_ID\", \"\"),\n secret_key=os.getenv(\"AWS_SECRET_ACCESS_KEY\", \"\"),\n region=os.getenv(\"AWS_REGION\", \"\"),\n secure=use_ssl)\n", "path": "python/kfserving/kfserving/storage.py"}]}
3,534
192
gh_patches_debug_11815
rasdani/github-patches
git_diff
vyperlang__vyper-1800
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Private functions can't have duplicate function selectors This is an unnecessary constraint. Label names can be made unique. ```python @private @constant def gfah(): pass @private @constant def eexo(): pass # error: Label with name priv_236395036 already exists! ``` </issue> <code> [start of vyper/signatures/interface.py] 1 import copy 2 import importlib 3 from pathlib import ( 4 Path, 5 ) 6 import pkgutil 7 from typing import ( 8 Sequence, 9 Tuple, 10 ) 11 12 from vyper import ast 13 from vyper.exceptions import ( 14 ParserException, 15 StructureException, 16 ) 17 import vyper.interfaces 18 from vyper.parser import ( 19 parser, 20 ) 21 from vyper.parser.constants import ( 22 Constants, 23 ) 24 from vyper.signatures import ( 25 sig_utils, 26 ) 27 from vyper.signatures.event_signature import ( 28 EventSignature, 29 ) 30 from vyper.signatures.function_signature import ( 31 FunctionSignature, 32 ) 33 from vyper.typing import ( 34 InterfaceImports, 35 SourceCode, 36 ) 37 38 39 # Populate built-in interfaces. 40 def get_builtin_interfaces(): 41 interface_names = [x.name for x in pkgutil.iter_modules(vyper.interfaces.__path__)] 42 return { 43 name: extract_sigs({ 44 'type': 'vyper', 45 'code': importlib.import_module( 46 f'vyper.interfaces.{name}', 47 ).interface_code, 48 }) 49 for name in interface_names 50 } 51 52 53 def render_return(sig): 54 if sig.output_type: 55 return " -> " + str(sig.output_type) 56 return "" 57 58 59 def abi_type_to_ast(atype): 60 if atype in ('int128', 'uint256', 'bool', 'address', 'bytes32'): 61 return ast.Name(id=atype) 62 elif atype == 'decimal': 63 return ast.Name(id='int128') 64 elif atype == 'bytes': 65 return ast.Subscript( 66 value=ast.Name(id='bytes'), 67 slice=ast.Index(256) 68 ) 69 elif atype == 'string': 70 return ast.Subscript( 71 value=ast.Name(id='string'), 72 slice=ast.Index(256) 73 ) 74 else: 75 raise ParserException(f'Type {atype} not supported by vyper.') 76 77 78 def mk_full_signature_from_json(abi): 79 funcs = [func for func in abi if func['type'] == 'function'] 80 sigs = [] 81 82 for func in funcs: 83 args = [] 84 returns = None 85 for a in func['inputs']: 86 arg = ast.arg( 87 arg=a['name'], 88 annotation=abi_type_to_ast(a['type']), 89 lineno=0, 90 col_offset=0 91 ) 92 args.append(arg) 93 94 if len(func['outputs']) == 1: 95 returns = abi_type_to_ast(func['outputs'][0]['type']) 96 elif len(func['outputs']) > 1: 97 returns = ast.Tuple( 98 elts=[ 99 abi_type_to_ast(a['type']) 100 for a in func['outputs'] 101 ] 102 ) 103 104 decorator_list = [ast.Name(id='public')] 105 if func['constant']: 106 decorator_list.append(ast.Name(id='constant')) 107 if func['payable']: 108 decorator_list.append(ast.Name(id='payable')) 109 110 sig = FunctionSignature.from_definition( 111 code=ast.FunctionDef( 112 name=func['name'], 113 args=ast.arguments(args=args), 114 decorator_list=decorator_list, 115 returns=returns, 116 ), 117 custom_units=set(), 118 custom_structs=dict(), 119 constants=Constants() 120 ) 121 sigs.append(sig) 122 return sigs 123 124 125 def extract_sigs(sig_code): 126 if sig_code['type'] == 'vyper': 127 interface_ast = [ 128 i for i in parser.parse_to_ast(sig_code['code']) if 129 isinstance(i, ast.FunctionDef) or 130 (isinstance(i, ast.AnnAssign) and i.target.id != "implements") 131 ] 132 return sig_utils.mk_full_signature(interface_ast, sig_formatter=lambda x, y: x) 133 elif sig_code['type'] == 'json': 134 return mk_full_signature_from_json(sig_code['code']) 135 else: 136 raise Exception( 137 (f"Unknown interface signature type '{sig_code['type']}' supplied. " 138 "'vyper' & 'json' are supported") 139 ) 140 141 142 def extract_interface_str(code, contract_name, interface_codes=None): 143 sigs = sig_utils.mk_full_signature( 144 parser.parse_to_ast(code), 145 sig_formatter=lambda x, y: (x, y), 146 interface_codes=interface_codes, 147 ) 148 events = [sig for sig, _ in sigs if isinstance(sig, EventSignature)] 149 functions = [sig for sig, _ in sigs if isinstance(sig, FunctionSignature)] 150 out = "" 151 # Print events. 152 for idx, event in enumerate(events): 153 if idx == 0: 154 out += "# Events\n\n" 155 event_args_str = ', '.join([arg.name + ': ' + str(arg.typ) for arg in event.args]) 156 out += f"{event.name}: event({{{event_args_str}}})\n" 157 158 # Print functions. 159 def render_decorator(sig): 160 o = "\n" 161 if sig.const: 162 o += "@constant\n" 163 if not sig.private: 164 o += "@public\n" 165 return o 166 167 for idx, func in enumerate(functions): 168 if idx == 0: 169 out += "\n# Functions\n" 170 if not func.private and func.name != '__init__': 171 args = ", ".join([arg.name + ": " + str(arg.typ) for arg in func.args]) 172 out += f"{render_decorator(func)}def {func.name}({args}){render_return(func)}:\n pass\n" # noqa: E501 173 out += "\n" 174 175 return out 176 177 178 def extract_external_interface(code, contract_name, interface_codes=None): 179 sigs = sig_utils.mk_full_signature( 180 parser.parse_to_ast(code), 181 sig_formatter=lambda x, y: (x, y), 182 interface_codes=interface_codes, 183 ) 184 functions = [sig for sig, _ in sigs if isinstance(sig, FunctionSignature)] 185 cname = Path(contract_name).stem.capitalize() 186 187 out = "" 188 offset = 4 * " " 189 for idx, func in enumerate(functions): 190 if idx == 0: 191 out += f"\n# External Contracts\ncontract {cname}:\n" 192 if not func.private and func.name != '__init__': 193 args = ", ".join([arg.name + ": " + str(arg.typ) for arg in func.args]) 194 func_type = "constant" if func.const else "modifying" 195 out += offset + f"def {func.name}({args}){render_return(func)}: {func_type}\n" 196 out += "\n" 197 return out 198 199 200 def extract_file_interface_imports(code: SourceCode) -> InterfaceImports: 201 ast_tree = parser.parse_to_ast(code) 202 203 imports_dict: InterfaceImports = {} 204 for item in ast_tree: 205 if isinstance(item, ast.Import): 206 for a_name in item.names: # type: ignore 207 if not a_name.asname: 208 raise StructureException( 209 'Interface statement requires an accompanying `as` statement.', 210 item, 211 ) 212 if a_name.asname in imports_dict: 213 raise StructureException( 214 f'Interface with alias {a_name.asname} already exists', 215 item, 216 ) 217 imports_dict[a_name.asname] = a_name.name.replace('.', '/') 218 elif isinstance(item, ast.ImportFrom): 219 for a_name in item.names: # type: ignore 220 if a_name.asname: 221 raise StructureException("From imports cannot use aliases", item) 222 level = item.level # type: ignore 223 module = item.module or "" # type: ignore 224 if not level and module == 'vyper.interfaces': 225 continue 226 227 base_path = "" 228 if level > 1: 229 base_path = "../" * (level-1) 230 elif level == 1: 231 base_path = "./" 232 base_path = f"{base_path}{module.replace('.','/')}/" 233 234 for a_name in item.names: # type: ignore 235 if a_name.name in imports_dict: 236 raise StructureException( 237 f'Interface with name {a_name.name} already exists', 238 item, 239 ) 240 imports_dict[a_name.name] = f"{base_path}{a_name.name}" 241 242 return imports_dict 243 244 245 Conflict = Tuple[FunctionSignature, FunctionSignature] 246 Conflicts = Tuple[Conflict, ...] 247 248 249 def find_signature_conflicts(sigs: Sequence[FunctionSignature]) -> Conflicts: 250 """ 251 Takes a sequence of function signature records and returns a tuple of 252 pairs of signatures from that sequence that produce the same internal 253 method id. 254 """ 255 # Consider self-comparisons as having been seen by default (they will be 256 # skipped) 257 comparisons_seen = set([frozenset((sig.sig,)) for sig in sigs]) 258 conflicts = [] 259 260 for sig in sigs: 261 method_id = sig.method_id 262 263 for other_sig in sigs: 264 comparison_id = frozenset((sig.sig, other_sig.sig)) 265 if comparison_id in comparisons_seen: 266 continue # Don't make redundant or useless comparisons 267 268 other_method_id = other_sig.method_id 269 if method_id == other_method_id: 270 conflicts.append((sig, other_sig)) 271 272 comparisons_seen.add(comparison_id) 273 274 return tuple(conflicts) 275 276 277 def check_valid_contract_interface(global_ctx, contract_sigs): 278 public_func_sigs = [ 279 sig for sig in contract_sigs.values() 280 if isinstance(sig, FunctionSignature) and not sig.private 281 ] 282 func_conflicts = find_signature_conflicts(public_func_sigs) 283 284 if len(func_conflicts) > 0: 285 sig_1, sig_2 = func_conflicts[0] 286 287 raise StructureException( 288 f'Methods {sig_1.sig} and {sig_2.sig} have conflicting IDs ' 289 f'(id {sig_1.method_id})', 290 sig_1.func_ast_code, 291 ) 292 293 if global_ctx._interface: 294 funcs_left = global_ctx._interface.copy() 295 296 for sig, func_sig in contract_sigs.items(): 297 if isinstance(func_sig, FunctionSignature): 298 # Remove units, as inteface signatures should not enforce units. 299 clean_sig_output_type = func_sig.output_type 300 if func_sig.output_type: 301 clean_sig_output_type = copy.deepcopy(func_sig.output_type) 302 clean_sig_output_type.unit = {} 303 if ( 304 sig in funcs_left and # noqa: W504 305 not func_sig.private and # noqa: W504 306 funcs_left[sig].output_type == clean_sig_output_type 307 ): 308 del funcs_left[sig] 309 if isinstance(func_sig, EventSignature) and func_sig.sig in funcs_left: 310 del funcs_left[func_sig.sig] 311 312 if funcs_left: 313 error_message = 'Contract does not comply to supplied Interface(s).\n' 314 missing_functions = [ 315 str(func_sig) 316 for sig_name, func_sig 317 in funcs_left.items() 318 if isinstance(func_sig, FunctionSignature) 319 ] 320 missing_events = [ 321 sig_name 322 for sig_name, func_sig 323 in funcs_left.items() 324 if isinstance(func_sig, EventSignature) 325 ] 326 if missing_functions: 327 err_join = "\n\t".join(missing_functions) 328 error_message += f'Missing interface functions:\n\t{err_join}' 329 if missing_events: 330 err_join = "\n\t".join(missing_events) 331 error_message += f'Missing interface events:\n\t{err_join}' 332 raise StructureException(error_message) 333 [end of vyper/signatures/interface.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/vyper/signatures/interface.py b/vyper/signatures/interface.py --- a/vyper/signatures/interface.py +++ b/vyper/signatures/interface.py @@ -275,11 +275,10 @@ def check_valid_contract_interface(global_ctx, contract_sigs): - public_func_sigs = [ - sig for sig in contract_sigs.values() - if isinstance(sig, FunctionSignature) and not sig.private - ] - func_conflicts = find_signature_conflicts(public_func_sigs) + # the check for private function collisions is made to prevent future + # breaking changes if we switch to internal calls (@iamdefinitelyahuman) + func_sigs = [sig for sig in contract_sigs.values() if isinstance(sig, FunctionSignature)] + func_conflicts = find_signature_conflicts(func_sigs) if len(func_conflicts) > 0: sig_1, sig_2 = func_conflicts[0]
{"golden_diff": "diff --git a/vyper/signatures/interface.py b/vyper/signatures/interface.py\n--- a/vyper/signatures/interface.py\n+++ b/vyper/signatures/interface.py\n@@ -275,11 +275,10 @@\n \n \n def check_valid_contract_interface(global_ctx, contract_sigs):\n- public_func_sigs = [\n- sig for sig in contract_sigs.values()\n- if isinstance(sig, FunctionSignature) and not sig.private\n- ]\n- func_conflicts = find_signature_conflicts(public_func_sigs)\n+ # the check for private function collisions is made to prevent future\n+ # breaking changes if we switch to internal calls (@iamdefinitelyahuman)\n+ func_sigs = [sig for sig in contract_sigs.values() if isinstance(sig, FunctionSignature)]\n+ func_conflicts = find_signature_conflicts(func_sigs)\n \n if len(func_conflicts) > 0:\n sig_1, sig_2 = func_conflicts[0]\n", "issue": "Private functions can't have duplicate function selectors\nThis is an unnecessary constraint. Label names can be made unique.\r\n\r\n```python\r\n@private\r\n@constant\r\ndef gfah(): pass\r\n\r\n@private\r\n@constant\r\ndef eexo(): pass\r\n\r\n# error: Label with name priv_236395036 already exists!\r\n```\n", "before_files": [{"content": "import copy\nimport importlib\nfrom pathlib import (\n Path,\n)\nimport pkgutil\nfrom typing import (\n Sequence,\n Tuple,\n)\n\nfrom vyper import ast\nfrom vyper.exceptions import (\n ParserException,\n StructureException,\n)\nimport vyper.interfaces\nfrom vyper.parser import (\n parser,\n)\nfrom vyper.parser.constants import (\n Constants,\n)\nfrom vyper.signatures import (\n sig_utils,\n)\nfrom vyper.signatures.event_signature import (\n EventSignature,\n)\nfrom vyper.signatures.function_signature import (\n FunctionSignature,\n)\nfrom vyper.typing import (\n InterfaceImports,\n SourceCode,\n)\n\n\n# Populate built-in interfaces.\ndef get_builtin_interfaces():\n interface_names = [x.name for x in pkgutil.iter_modules(vyper.interfaces.__path__)]\n return {\n name: extract_sigs({\n 'type': 'vyper',\n 'code': importlib.import_module(\n f'vyper.interfaces.{name}',\n ).interface_code,\n })\n for name in interface_names\n }\n\n\ndef render_return(sig):\n if sig.output_type:\n return \" -> \" + str(sig.output_type)\n return \"\"\n\n\ndef abi_type_to_ast(atype):\n if atype in ('int128', 'uint256', 'bool', 'address', 'bytes32'):\n return ast.Name(id=atype)\n elif atype == 'decimal':\n return ast.Name(id='int128')\n elif atype == 'bytes':\n return ast.Subscript(\n value=ast.Name(id='bytes'),\n slice=ast.Index(256)\n )\n elif atype == 'string':\n return ast.Subscript(\n value=ast.Name(id='string'),\n slice=ast.Index(256)\n )\n else:\n raise ParserException(f'Type {atype} not supported by vyper.')\n\n\ndef mk_full_signature_from_json(abi):\n funcs = [func for func in abi if func['type'] == 'function']\n sigs = []\n\n for func in funcs:\n args = []\n returns = None\n for a in func['inputs']:\n arg = ast.arg(\n arg=a['name'],\n annotation=abi_type_to_ast(a['type']),\n lineno=0,\n col_offset=0\n )\n args.append(arg)\n\n if len(func['outputs']) == 1:\n returns = abi_type_to_ast(func['outputs'][0]['type'])\n elif len(func['outputs']) > 1:\n returns = ast.Tuple(\n elts=[\n abi_type_to_ast(a['type'])\n for a in func['outputs']\n ]\n )\n\n decorator_list = [ast.Name(id='public')]\n if func['constant']:\n decorator_list.append(ast.Name(id='constant'))\n if func['payable']:\n decorator_list.append(ast.Name(id='payable'))\n\n sig = FunctionSignature.from_definition(\n code=ast.FunctionDef(\n name=func['name'],\n args=ast.arguments(args=args),\n decorator_list=decorator_list,\n returns=returns,\n ),\n custom_units=set(),\n custom_structs=dict(),\n constants=Constants()\n )\n sigs.append(sig)\n return sigs\n\n\ndef extract_sigs(sig_code):\n if sig_code['type'] == 'vyper':\n interface_ast = [\n i for i in parser.parse_to_ast(sig_code['code']) if\n isinstance(i, ast.FunctionDef) or\n (isinstance(i, ast.AnnAssign) and i.target.id != \"implements\")\n ]\n return sig_utils.mk_full_signature(interface_ast, sig_formatter=lambda x, y: x)\n elif sig_code['type'] == 'json':\n return mk_full_signature_from_json(sig_code['code'])\n else:\n raise Exception(\n (f\"Unknown interface signature type '{sig_code['type']}' supplied. \"\n \"'vyper' & 'json' are supported\")\n )\n\n\ndef extract_interface_str(code, contract_name, interface_codes=None):\n sigs = sig_utils.mk_full_signature(\n parser.parse_to_ast(code),\n sig_formatter=lambda x, y: (x, y),\n interface_codes=interface_codes,\n )\n events = [sig for sig, _ in sigs if isinstance(sig, EventSignature)]\n functions = [sig for sig, _ in sigs if isinstance(sig, FunctionSignature)]\n out = \"\"\n # Print events.\n for idx, event in enumerate(events):\n if idx == 0:\n out += \"# Events\\n\\n\"\n event_args_str = ', '.join([arg.name + ': ' + str(arg.typ) for arg in event.args])\n out += f\"{event.name}: event({{{event_args_str}}})\\n\"\n\n # Print functions.\n def render_decorator(sig):\n o = \"\\n\"\n if sig.const:\n o += \"@constant\\n\"\n if not sig.private:\n o += \"@public\\n\"\n return o\n\n for idx, func in enumerate(functions):\n if idx == 0:\n out += \"\\n# Functions\\n\"\n if not func.private and func.name != '__init__':\n args = \", \".join([arg.name + \": \" + str(arg.typ) for arg in func.args])\n out += f\"{render_decorator(func)}def {func.name}({args}){render_return(func)}:\\n pass\\n\" # noqa: E501\n out += \"\\n\"\n\n return out\n\n\ndef extract_external_interface(code, contract_name, interface_codes=None):\n sigs = sig_utils.mk_full_signature(\n parser.parse_to_ast(code),\n sig_formatter=lambda x, y: (x, y),\n interface_codes=interface_codes,\n )\n functions = [sig for sig, _ in sigs if isinstance(sig, FunctionSignature)]\n cname = Path(contract_name).stem.capitalize()\n\n out = \"\"\n offset = 4 * \" \"\n for idx, func in enumerate(functions):\n if idx == 0:\n out += f\"\\n# External Contracts\\ncontract {cname}:\\n\"\n if not func.private and func.name != '__init__':\n args = \", \".join([arg.name + \": \" + str(arg.typ) for arg in func.args])\n func_type = \"constant\" if func.const else \"modifying\"\n out += offset + f\"def {func.name}({args}){render_return(func)}: {func_type}\\n\"\n out += \"\\n\"\n return out\n\n\ndef extract_file_interface_imports(code: SourceCode) -> InterfaceImports:\n ast_tree = parser.parse_to_ast(code)\n\n imports_dict: InterfaceImports = {}\n for item in ast_tree:\n if isinstance(item, ast.Import):\n for a_name in item.names: # type: ignore\n if not a_name.asname:\n raise StructureException(\n 'Interface statement requires an accompanying `as` statement.',\n item,\n )\n if a_name.asname in imports_dict:\n raise StructureException(\n f'Interface with alias {a_name.asname} already exists',\n item,\n )\n imports_dict[a_name.asname] = a_name.name.replace('.', '/')\n elif isinstance(item, ast.ImportFrom):\n for a_name in item.names: # type: ignore\n if a_name.asname:\n raise StructureException(\"From imports cannot use aliases\", item)\n level = item.level # type: ignore\n module = item.module or \"\" # type: ignore\n if not level and module == 'vyper.interfaces':\n continue\n\n base_path = \"\"\n if level > 1:\n base_path = \"../\" * (level-1)\n elif level == 1:\n base_path = \"./\"\n base_path = f\"{base_path}{module.replace('.','/')}/\"\n\n for a_name in item.names: # type: ignore\n if a_name.name in imports_dict:\n raise StructureException(\n f'Interface with name {a_name.name} already exists',\n item,\n )\n imports_dict[a_name.name] = f\"{base_path}{a_name.name}\"\n\n return imports_dict\n\n\nConflict = Tuple[FunctionSignature, FunctionSignature]\nConflicts = Tuple[Conflict, ...]\n\n\ndef find_signature_conflicts(sigs: Sequence[FunctionSignature]) -> Conflicts:\n \"\"\"\n Takes a sequence of function signature records and returns a tuple of\n pairs of signatures from that sequence that produce the same internal\n method id.\n \"\"\"\n # Consider self-comparisons as having been seen by default (they will be\n # skipped)\n comparisons_seen = set([frozenset((sig.sig,)) for sig in sigs])\n conflicts = []\n\n for sig in sigs:\n method_id = sig.method_id\n\n for other_sig in sigs:\n comparison_id = frozenset((sig.sig, other_sig.sig))\n if comparison_id in comparisons_seen:\n continue # Don't make redundant or useless comparisons\n\n other_method_id = other_sig.method_id\n if method_id == other_method_id:\n conflicts.append((sig, other_sig))\n\n comparisons_seen.add(comparison_id)\n\n return tuple(conflicts)\n\n\ndef check_valid_contract_interface(global_ctx, contract_sigs):\n public_func_sigs = [\n sig for sig in contract_sigs.values()\n if isinstance(sig, FunctionSignature) and not sig.private\n ]\n func_conflicts = find_signature_conflicts(public_func_sigs)\n\n if len(func_conflicts) > 0:\n sig_1, sig_2 = func_conflicts[0]\n\n raise StructureException(\n f'Methods {sig_1.sig} and {sig_2.sig} have conflicting IDs '\n f'(id {sig_1.method_id})',\n sig_1.func_ast_code,\n )\n\n if global_ctx._interface:\n funcs_left = global_ctx._interface.copy()\n\n for sig, func_sig in contract_sigs.items():\n if isinstance(func_sig, FunctionSignature):\n # Remove units, as inteface signatures should not enforce units.\n clean_sig_output_type = func_sig.output_type\n if func_sig.output_type:\n clean_sig_output_type = copy.deepcopy(func_sig.output_type)\n clean_sig_output_type.unit = {}\n if (\n sig in funcs_left and # noqa: W504\n not func_sig.private and # noqa: W504\n funcs_left[sig].output_type == clean_sig_output_type\n ):\n del funcs_left[sig]\n if isinstance(func_sig, EventSignature) and func_sig.sig in funcs_left:\n del funcs_left[func_sig.sig]\n\n if funcs_left:\n error_message = 'Contract does not comply to supplied Interface(s).\\n'\n missing_functions = [\n str(func_sig)\n for sig_name, func_sig\n in funcs_left.items()\n if isinstance(func_sig, FunctionSignature)\n ]\n missing_events = [\n sig_name\n for sig_name, func_sig\n in funcs_left.items()\n if isinstance(func_sig, EventSignature)\n ]\n if missing_functions:\n err_join = \"\\n\\t\".join(missing_functions)\n error_message += f'Missing interface functions:\\n\\t{err_join}'\n if missing_events:\n err_join = \"\\n\\t\".join(missing_events)\n error_message += f'Missing interface events:\\n\\t{err_join}'\n raise StructureException(error_message)\n", "path": "vyper/signatures/interface.py"}]}
3,944
213
gh_patches_debug_19982
rasdani/github-patches
git_diff
sopel-irc__sopel-1386
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [dice] Ugly SyntaxErrors for invalid die rolls People really like trying to break bots… Several users got in on the fun this weekend, trying to roll dice like `0x1d0x3`, `0dNaN`, `1d3d4d5`, `onedone`, and variations on that sort of thing. The `dice` module, I guess, doesn't catch `SyntaxError` exceptions. If parsing the given dice expression fails, some generic error ("I'm sorry, you have to give me real dice." or something like that) should get output rather than the likes of `Ast.Node 'Call' not implemented`. Probably an easy fix, which I'll get to soon enough if nobody else beats me to it. </issue> <code> [start of sopel/modules/dice.py] 1 # coding=utf-8 2 """ 3 dice.py - Dice Module 4 Copyright 2010-2013, Dimitri "Tyrope" Molenaars, TyRope.nl 5 Copyright 2013, Ari Koivula, <[email protected]> 6 Licensed under the Eiffel Forum License 2. 7 8 https://sopel.chat/ 9 """ 10 from __future__ import unicode_literals, absolute_import, print_function, division 11 12 import random 13 import re 14 import operator 15 16 import sopel.module 17 from sopel.tools.calculation import eval_equation 18 19 20 class DicePouch: 21 def __init__(self, num_of_die, type_of_die, addition): 22 """Initialize dice pouch and roll the dice. 23 24 Args: 25 num_of_die: number of dice in the pouch. 26 type_of_die: how many faces the dice have. 27 addition: how much is added to the result of the dice. 28 """ 29 self.num = num_of_die 30 self.type = type_of_die 31 self.addition = addition 32 33 self.dice = {} 34 self.dropped = {} 35 36 self.roll_dice() 37 38 def roll_dice(self): 39 """Roll all the dice in the pouch.""" 40 self.dice = {} 41 self.dropped = {} 42 for __ in range(self.num): 43 number = random.randint(1, self.type) 44 count = self.dice.setdefault(number, 0) 45 self.dice[number] = count + 1 46 47 def drop_lowest(self, n): 48 """Drop n lowest dice from the result. 49 50 Args: 51 n: the number of dice to drop. 52 """ 53 54 sorted_x = sorted(self.dice.items(), key=operator.itemgetter(0)) 55 56 for i, count in sorted_x: 57 count = self.dice[i] 58 if n == 0: 59 break 60 elif n < count: 61 self.dice[i] = count - n 62 self.dropped[i] = n 63 break 64 else: 65 self.dice[i] = 0 66 self.dropped[i] = count 67 n = n - count 68 69 for i, count in self.dropped.items(): 70 if self.dice[i] == 0: 71 del self.dice[i] 72 73 def get_simple_string(self): 74 """Return the values of the dice like (2+2+2[+1+1])+1.""" 75 dice = self.dice.items() 76 faces = ("+".join([str(face)] * times) for face, times in dice) 77 dice_str = "+".join(faces) 78 79 dropped_str = "" 80 if self.dropped: 81 dropped = self.dropped.items() 82 dfaces = ("+".join([str(face)] * times) for face, times in dropped) 83 dropped_str = "[+%s]" % ("+".join(dfaces),) 84 85 plus_str = "" 86 if self.addition: 87 plus_str = "{:+d}".format(self.addition) 88 89 return "(%s%s)%s" % (dice_str, dropped_str, plus_str) 90 91 def get_compressed_string(self): 92 """Return the values of the dice like (3x2[+2x1])+1.""" 93 dice = self.dice.items() 94 faces = ("%dx%d" % (times, face) for face, times in dice) 95 dice_str = "+".join(faces) 96 97 dropped_str = "" 98 if self.dropped: 99 dropped = self.dropped.items() 100 dfaces = ("%dx%d" % (times, face) for face, times in dropped) 101 dropped_str = "[+%s]" % ("+".join(dfaces),) 102 103 plus_str = "" 104 if self.addition: 105 plus_str = "{:+d}".format(self.addition) 106 107 return "(%s%s)%s" % (dice_str, dropped_str, plus_str) 108 109 def get_sum(self): 110 """Get the sum of non-dropped dice and the addition.""" 111 result = self.addition 112 for face, times in self.dice.items(): 113 result += face * times 114 return result 115 116 def get_number_of_faces(self): 117 """Returns sum of different faces for dropped and not dropped dice 118 119 This can be used to estimate, whether the result can be shown in 120 compressed form in a reasonable amount of space. 121 """ 122 return len(self.dice) + len(self.dropped) 123 124 125 def _roll_dice(bot, dice_expression): 126 result = re.search( 127 r""" 128 (?P<dice_num>-?\d*) 129 d 130 (?P<dice_type>-?\d+) 131 (v(?P<drop_lowest>-?\d+))? 132 $""", 133 dice_expression, 134 re.IGNORECASE | re.VERBOSE) 135 136 dice_num = int(result.group('dice_num') or 1) 137 dice_type = int(result.group('dice_type')) 138 139 # Dice can't have zero or a negative number of sides. 140 if dice_type <= 0: 141 bot.reply("I don't have any dice with %d sides. =(" % dice_type) 142 return None # Signal there was a problem 143 144 # Can't roll a negative number of dice. 145 if dice_num < 0: 146 bot.reply("I'd rather not roll a negative amount of dice. =(") 147 return None # Signal there was a problem 148 149 # Upper limit for dice should be at most a million. Creating a dict with 150 # more than a million elements already takes a noticeable amount of time 151 # on a fast computer and ~55kB of memory. 152 if dice_num > 1000: 153 bot.reply('I only have 1000 dice. =(') 154 return None # Signal there was a problem 155 156 dice = DicePouch(dice_num, dice_type, 0) 157 158 if result.group('drop_lowest'): 159 drop = int(result.group('drop_lowest')) 160 if drop >= 0: 161 dice.drop_lowest(drop) 162 else: 163 bot.reply("I can't drop the lowest %d dice. =(" % drop) 164 165 return dice 166 167 168 @sopel.module.commands("roll") 169 @sopel.module.commands("dice") 170 @sopel.module.commands("d") 171 @sopel.module.priority("medium") 172 @sopel.module.example(".roll 3d1+1", 'You roll 3d1+1: (1+1+1)+1 = 4') 173 @sopel.module.example(".roll 3d1v2+1", 'You roll 3d1v2+1: (1[+1+1])+1 = 2') 174 @sopel.module.example(".roll 2d4", 'You roll 2d4: \(\d\+\d\) = \d', re=True) 175 @sopel.module.example(".roll 100d1", '[^:]*: \(100x1\) = 100', re=True) 176 @sopel.module.example(".roll 1001d1", 'I only have 1000 dice. =(') 177 @sopel.module.example(".roll 1d1 + 1d1", 'You roll 1d1 + 1d1: (1) + (1) = 2') 178 @sopel.module.example(".roll 1d1+1d1", 'You roll 1d1+1d1: (1)+(1) = 2') 179 def roll(bot, trigger): 180 """.dice XdY[vZ][+N], rolls dice and reports the result. 181 182 X is the number of dice. Y is the number of faces in the dice. Z is the 183 number of lowest dice to be dropped from the result. N is the constant to 184 be applied to the end result. 185 """ 186 # This regexp is only allowed to have one captured group, because having 187 # more would alter the output of re.findall. 188 dice_regexp = r"-?\d*[dD]-?\d+(?:[vV]-?\d+)?" 189 190 # Get a list of all dice expressions, evaluate them and then replace the 191 # expressions in the original string with the results. Replacing is done 192 # using string formatting, so %-characters must be escaped. 193 if not trigger.group(2): 194 return bot.reply("No dice to roll.") 195 arg_str = trigger.group(2) 196 dice_expressions = re.findall(dice_regexp, arg_str) 197 arg_str = arg_str.replace("%", "%%") 198 arg_str = re.sub(dice_regexp, "%s", arg_str) 199 200 f = lambda dice_expr: _roll_dice(bot, dice_expr) 201 dice = list(map(f, dice_expressions)) 202 203 if None in dice: 204 # Stop computing roll if there was a problem rolling dice. 205 return 206 207 def _get_eval_str(dice): 208 return "(%d)" % (dice.get_sum(),) 209 210 def _get_pretty_str(dice): 211 if dice.num <= 10: 212 return dice.get_simple_string() 213 elif dice.get_number_of_faces() <= 10: 214 return dice.get_compressed_string() 215 else: 216 return "(...)" 217 218 eval_str = arg_str % (tuple(map(_get_eval_str, dice))) 219 pretty_str = arg_str % (tuple(map(_get_pretty_str, dice))) 220 221 # Showing the actual error will hopefully give a better hint of what is 222 # wrong with the syntax than a generic error message. 223 try: 224 result = eval_equation(eval_str) 225 except Exception as e: 226 bot.reply("SyntaxError, eval(%s), %s" % (eval_str, e)) 227 return 228 229 bot.reply("You roll %s: %s = %d" % ( 230 trigger.group(2), pretty_str, result)) 231 232 233 @sopel.module.commands("choice") 234 @sopel.module.commands("ch") 235 @sopel.module.commands("choose") 236 @sopel.module.priority("medium") 237 def choose(bot, trigger): 238 """ 239 .choice option1|option2|option3 - Makes a difficult choice easy. 240 """ 241 if not trigger.group(2): 242 return bot.reply('I\'d choose an option, but you didn\'t give me any.') 243 choices = [trigger.group(2)] 244 for delim in '|\\/,': 245 choices = trigger.group(2).split(delim) 246 if len(choices) > 1: 247 break 248 # Use a different delimiter in the output, to prevent ambiguity. 249 for show_delim in ',|/\\': 250 if show_delim not in trigger.group(2): 251 show_delim += ' ' 252 break 253 254 pick = random.choice(choices) 255 return bot.reply('Your options: %s. My choice: %s' % (show_delim.join(choices), pick)) 256 257 258 if __name__ == "__main__": 259 from sopel.test_tools import run_example_tests 260 run_example_tests(__file__) 261 [end of sopel/modules/dice.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/sopel/modules/dice.py b/sopel/modules/dice.py --- a/sopel/modules/dice.py +++ b/sopel/modules/dice.py @@ -218,12 +218,20 @@ eval_str = arg_str % (tuple(map(_get_eval_str, dice))) pretty_str = arg_str % (tuple(map(_get_pretty_str, dice))) - # Showing the actual error will hopefully give a better hint of what is - # wrong with the syntax than a generic error message. try: result = eval_equation(eval_str) - except Exception as e: - bot.reply("SyntaxError, eval(%s), %s" % (eval_str, e)) + except TypeError: + bot.reply("The type of this equation is, apparently, not a string. " + + "How did you do that, anyway?") + except ValueError: + # As it seems that ValueError is raised if the resulting equation would + # be too big, give a semi-serious answer to reflect on this. + bot.reply("You roll %s: %s = very big" % ( + trigger.group(2), pretty_str)) + return + except (SyntaxError, eval_equation.Error): + bot.reply("I don't know how to process that. " + + "Are the dice as well as the algorithms correct?") return bot.reply("You roll %s: %s = %d" % (
{"golden_diff": "diff --git a/sopel/modules/dice.py b/sopel/modules/dice.py\n--- a/sopel/modules/dice.py\n+++ b/sopel/modules/dice.py\n@@ -218,12 +218,20 @@\n eval_str = arg_str % (tuple(map(_get_eval_str, dice)))\n pretty_str = arg_str % (tuple(map(_get_pretty_str, dice)))\n \n- # Showing the actual error will hopefully give a better hint of what is\n- # wrong with the syntax than a generic error message.\n try:\n result = eval_equation(eval_str)\n- except Exception as e:\n- bot.reply(\"SyntaxError, eval(%s), %s\" % (eval_str, e))\n+ except TypeError:\n+ bot.reply(\"The type of this equation is, apparently, not a string. \" +\n+ \"How did you do that, anyway?\")\n+ except ValueError:\n+ # As it seems that ValueError is raised if the resulting equation would\n+ # be too big, give a semi-serious answer to reflect on this.\n+ bot.reply(\"You roll %s: %s = very big\" % (\n+ trigger.group(2), pretty_str))\n+ return\n+ except (SyntaxError, eval_equation.Error):\n+ bot.reply(\"I don't know how to process that. \" +\n+ \"Are the dice as well as the algorithms correct?\")\n return\n \n bot.reply(\"You roll %s: %s = %d\" % (\n", "issue": "[dice] Ugly SyntaxErrors for invalid die rolls\nPeople really like trying to break bots\u2026 Several users got in on the fun this weekend, trying to roll dice like `0x1d0x3`, `0dNaN`, `1d3d4d5`, `onedone`, and variations on that sort of thing.\r\n\r\nThe `dice` module, I guess, doesn't catch `SyntaxError` exceptions. If parsing the given dice expression fails, some generic error (\"I'm sorry, you have to give me real dice.\" or something like that) should get output rather than the likes of `Ast.Node 'Call' not implemented`.\r\n\r\nProbably an easy fix, which I'll get to soon enough if nobody else beats me to it.\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\ndice.py - Dice Module\nCopyright 2010-2013, Dimitri \"Tyrope\" Molenaars, TyRope.nl\nCopyright 2013, Ari Koivula, <[email protected]>\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat/\n\"\"\"\nfrom __future__ import unicode_literals, absolute_import, print_function, division\n\nimport random\nimport re\nimport operator\n\nimport sopel.module\nfrom sopel.tools.calculation import eval_equation\n\n\nclass DicePouch:\n def __init__(self, num_of_die, type_of_die, addition):\n \"\"\"Initialize dice pouch and roll the dice.\n\n Args:\n num_of_die: number of dice in the pouch.\n type_of_die: how many faces the dice have.\n addition: how much is added to the result of the dice.\n \"\"\"\n self.num = num_of_die\n self.type = type_of_die\n self.addition = addition\n\n self.dice = {}\n self.dropped = {}\n\n self.roll_dice()\n\n def roll_dice(self):\n \"\"\"Roll all the dice in the pouch.\"\"\"\n self.dice = {}\n self.dropped = {}\n for __ in range(self.num):\n number = random.randint(1, self.type)\n count = self.dice.setdefault(number, 0)\n self.dice[number] = count + 1\n\n def drop_lowest(self, n):\n \"\"\"Drop n lowest dice from the result.\n\n Args:\n n: the number of dice to drop.\n \"\"\"\n\n sorted_x = sorted(self.dice.items(), key=operator.itemgetter(0))\n\n for i, count in sorted_x:\n count = self.dice[i]\n if n == 0:\n break\n elif n < count:\n self.dice[i] = count - n\n self.dropped[i] = n\n break\n else:\n self.dice[i] = 0\n self.dropped[i] = count\n n = n - count\n\n for i, count in self.dropped.items():\n if self.dice[i] == 0:\n del self.dice[i]\n\n def get_simple_string(self):\n \"\"\"Return the values of the dice like (2+2+2[+1+1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"+\".join([str(face)] * times) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"+\".join([str(face)] * times) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_compressed_string(self):\n \"\"\"Return the values of the dice like (3x2[+2x1])+1.\"\"\"\n dice = self.dice.items()\n faces = (\"%dx%d\" % (times, face) for face, times in dice)\n dice_str = \"+\".join(faces)\n\n dropped_str = \"\"\n if self.dropped:\n dropped = self.dropped.items()\n dfaces = (\"%dx%d\" % (times, face) for face, times in dropped)\n dropped_str = \"[+%s]\" % (\"+\".join(dfaces),)\n\n plus_str = \"\"\n if self.addition:\n plus_str = \"{:+d}\".format(self.addition)\n\n return \"(%s%s)%s\" % (dice_str, dropped_str, plus_str)\n\n def get_sum(self):\n \"\"\"Get the sum of non-dropped dice and the addition.\"\"\"\n result = self.addition\n for face, times in self.dice.items():\n result += face * times\n return result\n\n def get_number_of_faces(self):\n \"\"\"Returns sum of different faces for dropped and not dropped dice\n\n This can be used to estimate, whether the result can be shown in\n compressed form in a reasonable amount of space.\n \"\"\"\n return len(self.dice) + len(self.dropped)\n\n\ndef _roll_dice(bot, dice_expression):\n result = re.search(\n r\"\"\"\n (?P<dice_num>-?\\d*)\n d\n (?P<dice_type>-?\\d+)\n (v(?P<drop_lowest>-?\\d+))?\n $\"\"\",\n dice_expression,\n re.IGNORECASE | re.VERBOSE)\n\n dice_num = int(result.group('dice_num') or 1)\n dice_type = int(result.group('dice_type'))\n\n # Dice can't have zero or a negative number of sides.\n if dice_type <= 0:\n bot.reply(\"I don't have any dice with %d sides. =(\" % dice_type)\n return None # Signal there was a problem\n\n # Can't roll a negative number of dice.\n if dice_num < 0:\n bot.reply(\"I'd rather not roll a negative amount of dice. =(\")\n return None # Signal there was a problem\n\n # Upper limit for dice should be at most a million. Creating a dict with\n # more than a million elements already takes a noticeable amount of time\n # on a fast computer and ~55kB of memory.\n if dice_num > 1000:\n bot.reply('I only have 1000 dice. =(')\n return None # Signal there was a problem\n\n dice = DicePouch(dice_num, dice_type, 0)\n\n if result.group('drop_lowest'):\n drop = int(result.group('drop_lowest'))\n if drop >= 0:\n dice.drop_lowest(drop)\n else:\n bot.reply(\"I can't drop the lowest %d dice. =(\" % drop)\n\n return dice\n\n\[email protected](\"roll\")\[email protected](\"dice\")\[email protected](\"d\")\[email protected](\"medium\")\[email protected](\".roll 3d1+1\", 'You roll 3d1+1: (1+1+1)+1 = 4')\[email protected](\".roll 3d1v2+1\", 'You roll 3d1v2+1: (1[+1+1])+1 = 2')\[email protected](\".roll 2d4\", 'You roll 2d4: \\(\\d\\+\\d\\) = \\d', re=True)\[email protected](\".roll 100d1\", '[^:]*: \\(100x1\\) = 100', re=True)\[email protected](\".roll 1001d1\", 'I only have 1000 dice. =(')\[email protected](\".roll 1d1 + 1d1\", 'You roll 1d1 + 1d1: (1) + (1) = 2')\[email protected](\".roll 1d1+1d1\", 'You roll 1d1+1d1: (1)+(1) = 2')\ndef roll(bot, trigger):\n \"\"\".dice XdY[vZ][+N], rolls dice and reports the result.\n\n X is the number of dice. Y is the number of faces in the dice. Z is the\n number of lowest dice to be dropped from the result. N is the constant to\n be applied to the end result.\n \"\"\"\n # This regexp is only allowed to have one captured group, because having\n # more would alter the output of re.findall.\n dice_regexp = r\"-?\\d*[dD]-?\\d+(?:[vV]-?\\d+)?\"\n\n # Get a list of all dice expressions, evaluate them and then replace the\n # expressions in the original string with the results. Replacing is done\n # using string formatting, so %-characters must be escaped.\n if not trigger.group(2):\n return bot.reply(\"No dice to roll.\")\n arg_str = trigger.group(2)\n dice_expressions = re.findall(dice_regexp, arg_str)\n arg_str = arg_str.replace(\"%\", \"%%\")\n arg_str = re.sub(dice_regexp, \"%s\", arg_str)\n\n f = lambda dice_expr: _roll_dice(bot, dice_expr)\n dice = list(map(f, dice_expressions))\n\n if None in dice:\n # Stop computing roll if there was a problem rolling dice.\n return\n\n def _get_eval_str(dice):\n return \"(%d)\" % (dice.get_sum(),)\n\n def _get_pretty_str(dice):\n if dice.num <= 10:\n return dice.get_simple_string()\n elif dice.get_number_of_faces() <= 10:\n return dice.get_compressed_string()\n else:\n return \"(...)\"\n\n eval_str = arg_str % (tuple(map(_get_eval_str, dice)))\n pretty_str = arg_str % (tuple(map(_get_pretty_str, dice)))\n\n # Showing the actual error will hopefully give a better hint of what is\n # wrong with the syntax than a generic error message.\n try:\n result = eval_equation(eval_str)\n except Exception as e:\n bot.reply(\"SyntaxError, eval(%s), %s\" % (eval_str, e))\n return\n\n bot.reply(\"You roll %s: %s = %d\" % (\n trigger.group(2), pretty_str, result))\n\n\[email protected](\"choice\")\[email protected](\"ch\")\[email protected](\"choose\")\[email protected](\"medium\")\ndef choose(bot, trigger):\n \"\"\"\n .choice option1|option2|option3 - Makes a difficult choice easy.\n \"\"\"\n if not trigger.group(2):\n return bot.reply('I\\'d choose an option, but you didn\\'t give me any.')\n choices = [trigger.group(2)]\n for delim in '|\\\\/,':\n choices = trigger.group(2).split(delim)\n if len(choices) > 1:\n break\n # Use a different delimiter in the output, to prevent ambiguity.\n for show_delim in ',|/\\\\':\n if show_delim not in trigger.group(2):\n show_delim += ' '\n break\n\n pick = random.choice(choices)\n return bot.reply('Your options: %s. My choice: %s' % (show_delim.join(choices), pick))\n\n\nif __name__ == \"__main__\":\n from sopel.test_tools import run_example_tests\n run_example_tests(__file__)\n", "path": "sopel/modules/dice.py"}]}
3,741
331
gh_patches_debug_55168
rasdani/github-patches
git_diff
spack__spack-6617
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> spack find : always prompt 0 installed packages On a clean `develop` checkout : ``` $ git clone https://github.com/LLNL/spack.git Cloning into 'spack'... remote: Counting objects: 25613, done. remote: Compressing objects: 100% (42/42), done. remote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557 Receiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done. Resolving deltas: 100% (13031/13031), done. Checking connectivity... done. $ cd spack $ . share/spack/setup-env.sh $ spack compilers ==> Available compilers -- gcc ---------------------------------------------------------- [email protected] $ spack install zlib ==> Installing zlib ==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz ######################################################################## 100,0% ==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz ==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix ==> No patches needed for zlib ==> Building zlib ==> Successfully installed zlib Fetch: 0.01s. Build: 3.69s. Total: 3.70s. [+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix $ spack find ==> 0 installed packages. $ spack install szip ==> Installing szip ==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz ######################################################################## 100,0% ==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz ==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq ==> No patches needed for szip ==> Building szip ==> Successfully installed szip Fetch: 0.01s. Build: 8.09s. Total: 8.10s. [+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq $ spack find ==> 0 installed packages. ``` The db seems to be written correctly : ``` database: installs: d6pdl6xvnvap6ihrqcqtgvweghbszmix: explicit: true installed: true path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix ref_count: 0 spec: zlib: arch: linux-x86_64 compiler: name: gcc version: '4.8' dependencies: {} namespace: builtin parameters: cflags: [] cppflags: [] cxxflags: [] fflags: [] ldflags: [] ldlibs: [] version: 1.2.8 esfmhl54wbdb7nnnip6y6jbxlbmxs2jq: explicit: true installed: true path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq ref_count: 0 spec: szip: arch: linux-x86_64 compiler: name: gcc version: '4.8' dependencies: {} namespace: builtin parameters: cflags: [] cppflags: [] cxxflags: [] fflags: [] ldflags: [] ldlibs: [] version: '2.1' version: 0.9.1 ``` xrootd requires zlib to be installed on system CMake can't find zlib when installing xrootd. zlib is not listed as a dependency fro xrootd, so CMake looks for it on the system. </issue> <code> [start of var/spack/repos/builtin/packages/globus-toolkit/package.py] 1 ############################################################################## 2 # Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC. 3 # Produced at the Lawrence Livermore National Laboratory. 4 # 5 # This file is part of Spack. 6 # Created by Todd Gamblin, [email protected], All rights reserved. 7 # LLNL-CODE-647188 8 # 9 # For details, see https://github.com/spack/spack 10 # Please also see the NOTICE and LICENSE files for our notice and the LGPL. 11 # 12 # This program is free software; you can redistribute it and/or modify 13 # it under the terms of the GNU Lesser General Public License (as 14 # published by the Free Software Foundation) version 2.1, February 1999. 15 # 16 # This program is distributed in the hope that it will be useful, but 17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF 18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and 19 # conditions of the GNU Lesser General Public License for more details. 20 # 21 # You should have received a copy of the GNU Lesser General Public 22 # License along with this program; if not, write to the Free Software 23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 24 ############################################################################## 25 from spack import * 26 27 28 class GlobusToolkit(AutotoolsPackage): 29 """The Globus Toolkit is an open source software toolkit used for building 30 grids""" 31 32 homepage = "http://toolkit.globus.org" 33 url = "http://toolkit.globus.org/ftppub/gt6/installers/src/globus_toolkit-6.0.1506371041.tar.gz" 34 35 version('6.0.1506371041', 'e17146f68e03b3482aaea3874d4087a5') 36 [end of var/spack/repos/builtin/packages/globus-toolkit/package.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/var/spack/repos/builtin/packages/globus-toolkit/package.py b/var/spack/repos/builtin/packages/globus-toolkit/package.py --- a/var/spack/repos/builtin/packages/globus-toolkit/package.py +++ b/var/spack/repos/builtin/packages/globus-toolkit/package.py @@ -33,3 +33,6 @@ url = "http://toolkit.globus.org/ftppub/gt6/installers/src/globus_toolkit-6.0.1506371041.tar.gz" version('6.0.1506371041', 'e17146f68e03b3482aaea3874d4087a5') + version('6.0.1493989444', '9e9298b61d045e65732e12c9727ceaa8') + + depends_on('openssl')
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/globus-toolkit/package.py b/var/spack/repos/builtin/packages/globus-toolkit/package.py\n--- a/var/spack/repos/builtin/packages/globus-toolkit/package.py\n+++ b/var/spack/repos/builtin/packages/globus-toolkit/package.py\n@@ -33,3 +33,6 @@\n url = \"http://toolkit.globus.org/ftppub/gt6/installers/src/globus_toolkit-6.0.1506371041.tar.gz\"\n \n version('6.0.1506371041', 'e17146f68e03b3482aaea3874d4087a5')\n+ version('6.0.1493989444', '9e9298b61d045e65732e12c9727ceaa8')\n+\n+ depends_on('openssl')\n", "issue": "spack find : always prompt 0 installed packages\nOn a clean `develop` checkout : \n\n```\n$ git clone https://github.com/LLNL/spack.git\nCloning into 'spack'...\nremote: Counting objects: 25613, done.\nremote: Compressing objects: 100% (42/42), done.\nremote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557\nReceiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done.\nResolving deltas: 100% (13031/13031), done.\nChecking connectivity... done.\n\n$ cd spack\n$ . share/spack/setup-env.sh \n$ spack compilers\n==> Available compilers\n-- gcc ----------------------------------------------------------\[email protected]\n\n$ spack install zlib\n==> Installing zlib\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n==> No patches needed for zlib\n==> Building zlib\n==> Successfully installed zlib\n Fetch: 0.01s. Build: 3.69s. Total: 3.70s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n\n$ spack find\n==> 0 installed packages.\n\n$ spack install szip\n==> Installing szip\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n==> No patches needed for szip\n==> Building szip\n==> Successfully installed szip\n Fetch: 0.01s. Build: 8.09s. Total: 8.10s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n\n$ spack find \n==> 0 installed packages.\n```\n\nThe db seems to be written correctly : \n\n```\ndatabase:\n installs:\n d6pdl6xvnvap6ihrqcqtgvweghbszmix:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n ref_count: 0\n spec:\n zlib:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: 1.2.8\n esfmhl54wbdb7nnnip6y6jbxlbmxs2jq:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n ref_count: 0\n spec:\n szip:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: '2.1'\n version: 0.9.1\n```\n\nxrootd requires zlib to be installed on system\nCMake can't find zlib when installing xrootd. zlib is not listed as a dependency fro xrootd, so CMake looks for it on the system.\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2017, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/spack/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass GlobusToolkit(AutotoolsPackage):\n \"\"\"The Globus Toolkit is an open source software toolkit used for building\n grids\"\"\"\n\n homepage = \"http://toolkit.globus.org\"\n url = \"http://toolkit.globus.org/ftppub/gt6/installers/src/globus_toolkit-6.0.1506371041.tar.gz\"\n\n version('6.0.1506371041', 'e17146f68e03b3482aaea3874d4087a5')\n", "path": "var/spack/repos/builtin/packages/globus-toolkit/package.py"}]}
2,213
235
gh_patches_debug_26075
rasdani/github-patches
git_diff
openmc-dev__openmc-2825
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> _DECAY_ENERGIES not cleared when changing chain in openmc.config <!-- If you are a user of OpenMC and are running into trouble with the code or are seeking general user support, we highly recommend posting on the OpenMC discourse forum first. GitHub issues should be used specifically for bug reports and feature requests. https://openmc.discourse.group/ --> ## Bug Description This causes incorrect sets of decay energies to be used. The PR which solves this has a more in-depth description of the problem. ## Steps to Reproduce Run two depletion calcs, each with a different chain. Then, try to postprocess the decay heats from each chain within the same python script by changing `openmc.config['chain_file']`. The decay heats will use energies from the first one we loaded, but not the second. This is because the decay heats are cached in `openmc/data/decay.py` and we're not clearing that dictionary upon changing the chain. </issue> <code> [start of openmc/config.py] 1 from collections.abc import MutableMapping 2 import os 3 from pathlib import Path 4 import warnings 5 6 from openmc.data import DataLibrary 7 from openmc.data.decay import _DECAY_PHOTON_ENERGY 8 9 __all__ = ["config"] 10 11 12 class _Config(MutableMapping): 13 def __init__(self, data=()): 14 self._mapping = {} 15 self.update(data) 16 17 def __getitem__(self, key): 18 return self._mapping[key] 19 20 def __delitem__(self, key): 21 del self._mapping[key] 22 if key == 'cross_sections': 23 del os.environ['OPENMC_CROSS_SECTIONS'] 24 elif key == 'mg_cross_sections': 25 del os.environ['OPENMC_MG_CROSS_SECTIONS'] 26 elif key == 'chain_file': 27 del os.environ['OPENMC_CHAIN_FILE'] 28 # Reset photon source data since it relies on chain file 29 _DECAY_PHOTON_ENERGY.clear() 30 31 def __setitem__(self, key, value): 32 if key == 'cross_sections': 33 # Force environment variable to match 34 self._set_path(key, value) 35 os.environ['OPENMC_CROSS_SECTIONS'] = str(value) 36 elif key == 'mg_cross_sections': 37 self._set_path(key, value) 38 os.environ['OPENMC_MG_CROSS_SECTIONS'] = str(value) 39 elif key == 'chain_file': 40 self._set_path(key, value) 41 os.environ['OPENMC_CHAIN_FILE'] = str(value) 42 # Reset photon source data since it relies on chain file 43 _DECAY_PHOTON_ENERGY.clear() 44 else: 45 raise KeyError(f'Unrecognized config key: {key}. Acceptable keys ' 46 'are "cross_sections", "mg_cross_sections" and ' 47 '"chain_file"') 48 49 def __iter__(self): 50 return iter(self._mapping) 51 52 def __len__(self): 53 return len(self._mapping) 54 55 def __repr__(self): 56 return repr(self._mapping) 57 58 def _set_path(self, key, value): 59 self._mapping[key] = p = Path(value) 60 if not p.exists(): 61 warnings.warn(f"'{value}' does not exist.") 62 63 64 def _default_config(): 65 """Return default configuration""" 66 config = _Config() 67 68 # Set cross sections using environment variable 69 if "OPENMC_CROSS_SECTIONS" in os.environ: 70 config['cross_sections'] = os.environ["OPENMC_CROSS_SECTIONS"] 71 if "OPENMC_MG_CROSS_SECTIONS" in os.environ: 72 config['mg_cross_sections'] = os.environ["OPENMC_MG_CROSS_SECTIONS"] 73 74 # Set depletion chain 75 chain_file = os.environ.get("OPENMC_CHAIN_FILE") 76 if (chain_file is None and 77 config.get('cross_sections') is not None and 78 config['cross_sections'].exists() 79 ): 80 # Check for depletion chain in cross_sections.xml 81 data = DataLibrary.from_xml(config['cross_sections']) 82 for lib in reversed(data.libraries): 83 if lib['type'] == 'depletion_chain': 84 chain_file = lib['path'] 85 break 86 if chain_file is not None: 87 config['chain_file'] = chain_file 88 89 return config 90 91 92 config = _default_config() 93 [end of openmc/config.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/openmc/config.py b/openmc/config.py --- a/openmc/config.py +++ b/openmc/config.py @@ -4,7 +4,7 @@ import warnings from openmc.data import DataLibrary -from openmc.data.decay import _DECAY_PHOTON_ENERGY +from openmc.data.decay import _DECAY_ENERGY, _DECAY_PHOTON_ENERGY __all__ = ["config"] @@ -41,6 +41,7 @@ os.environ['OPENMC_CHAIN_FILE'] = str(value) # Reset photon source data since it relies on chain file _DECAY_PHOTON_ENERGY.clear() + _DECAY_ENERGY.clear() else: raise KeyError(f'Unrecognized config key: {key}. Acceptable keys ' 'are "cross_sections", "mg_cross_sections" and ' @@ -76,7 +77,7 @@ if (chain_file is None and config.get('cross_sections') is not None and config['cross_sections'].exists() - ): + ): # Check for depletion chain in cross_sections.xml data = DataLibrary.from_xml(config['cross_sections']) for lib in reversed(data.libraries):
{"golden_diff": "diff --git a/openmc/config.py b/openmc/config.py\n--- a/openmc/config.py\n+++ b/openmc/config.py\n@@ -4,7 +4,7 @@\n import warnings\n \n from openmc.data import DataLibrary\n-from openmc.data.decay import _DECAY_PHOTON_ENERGY\n+from openmc.data.decay import _DECAY_ENERGY, _DECAY_PHOTON_ENERGY\n \n __all__ = [\"config\"]\n \n@@ -41,6 +41,7 @@\n os.environ['OPENMC_CHAIN_FILE'] = str(value)\n # Reset photon source data since it relies on chain file\n _DECAY_PHOTON_ENERGY.clear()\n+ _DECAY_ENERGY.clear()\n else:\n raise KeyError(f'Unrecognized config key: {key}. Acceptable keys '\n 'are \"cross_sections\", \"mg_cross_sections\" and '\n@@ -76,7 +77,7 @@\n if (chain_file is None and\n config.get('cross_sections') is not None and\n config['cross_sections'].exists()\n- ):\n+ ):\n # Check for depletion chain in cross_sections.xml\n data = DataLibrary.from_xml(config['cross_sections'])\n for lib in reversed(data.libraries):\n", "issue": "_DECAY_ENERGIES not cleared when changing chain in openmc.config\n<!--\r\nIf you are a user of OpenMC and are running into trouble with the code or are\r\nseeking general user support, we highly recommend posting on the OpenMC\r\ndiscourse forum first. GitHub issues should be used specifically for bug reports\r\nand feature requests.\r\n\r\nhttps://openmc.discourse.group/\r\n\r\n-->\r\n\r\n## Bug Description\r\nThis causes incorrect sets of decay energies to be used. The PR which solves this has a more in-depth description of the problem.\r\n\r\n\r\n## Steps to Reproduce\r\nRun two depletion calcs, each with a different chain. Then, try to postprocess the decay heats from each chain within the same python script by changing `openmc.config['chain_file']`. The decay heats will use energies from the first one we loaded, but not the second. This is because the decay heats are cached in `openmc/data/decay.py` and we're not clearing that dictionary upon changing the chain.\r\n\n", "before_files": [{"content": "from collections.abc import MutableMapping\nimport os\nfrom pathlib import Path\nimport warnings\n\nfrom openmc.data import DataLibrary\nfrom openmc.data.decay import _DECAY_PHOTON_ENERGY\n\n__all__ = [\"config\"]\n\n\nclass _Config(MutableMapping):\n def __init__(self, data=()):\n self._mapping = {}\n self.update(data)\n\n def __getitem__(self, key):\n return self._mapping[key]\n\n def __delitem__(self, key):\n del self._mapping[key]\n if key == 'cross_sections':\n del os.environ['OPENMC_CROSS_SECTIONS']\n elif key == 'mg_cross_sections':\n del os.environ['OPENMC_MG_CROSS_SECTIONS']\n elif key == 'chain_file':\n del os.environ['OPENMC_CHAIN_FILE']\n # Reset photon source data since it relies on chain file\n _DECAY_PHOTON_ENERGY.clear()\n\n def __setitem__(self, key, value):\n if key == 'cross_sections':\n # Force environment variable to match\n self._set_path(key, value)\n os.environ['OPENMC_CROSS_SECTIONS'] = str(value)\n elif key == 'mg_cross_sections':\n self._set_path(key, value)\n os.environ['OPENMC_MG_CROSS_SECTIONS'] = str(value)\n elif key == 'chain_file':\n self._set_path(key, value)\n os.environ['OPENMC_CHAIN_FILE'] = str(value)\n # Reset photon source data since it relies on chain file\n _DECAY_PHOTON_ENERGY.clear()\n else:\n raise KeyError(f'Unrecognized config key: {key}. Acceptable keys '\n 'are \"cross_sections\", \"mg_cross_sections\" and '\n '\"chain_file\"')\n\n def __iter__(self):\n return iter(self._mapping)\n\n def __len__(self):\n return len(self._mapping)\n\n def __repr__(self):\n return repr(self._mapping)\n\n def _set_path(self, key, value):\n self._mapping[key] = p = Path(value)\n if not p.exists():\n warnings.warn(f\"'{value}' does not exist.\")\n\n\ndef _default_config():\n \"\"\"Return default configuration\"\"\"\n config = _Config()\n\n # Set cross sections using environment variable\n if \"OPENMC_CROSS_SECTIONS\" in os.environ:\n config['cross_sections'] = os.environ[\"OPENMC_CROSS_SECTIONS\"]\n if \"OPENMC_MG_CROSS_SECTIONS\" in os.environ:\n config['mg_cross_sections'] = os.environ[\"OPENMC_MG_CROSS_SECTIONS\"]\n\n # Set depletion chain\n chain_file = os.environ.get(\"OPENMC_CHAIN_FILE\")\n if (chain_file is None and\n config.get('cross_sections') is not None and\n config['cross_sections'].exists()\n ):\n # Check for depletion chain in cross_sections.xml\n data = DataLibrary.from_xml(config['cross_sections'])\n for lib in reversed(data.libraries):\n if lib['type'] == 'depletion_chain':\n chain_file = lib['path']\n break\n if chain_file is not None:\n config['chain_file'] = chain_file\n\n return config\n\n\nconfig = _default_config()\n", "path": "openmc/config.py"}]}
1,621
267
gh_patches_debug_27835
rasdani/github-patches
git_diff
zenml-io__zenml-65
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> [ENHANCEMENT] There should be a way to list all integrations with dependencies **Is your enhancement request related to a problem? Please describe.** Not easy to see the list of supported integrations -> For example: is it `zenml[torch]` or `zenml[pytorch]` **Describe the enhancement you'd like** A way to list all integrations with dependencies they would install. **How do you solve your current problem with the current status-quo of ZenML?** Have to look at source code **Additional context** Thank you @JoyZhou for pointing it out </issue> <code> [start of zenml/cli/base.py] 1 # Copyright (c) maiot GmbH 2020. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at: 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express 12 # or implied. See the License for the specific language governing 13 # permissions and limitations under the License. 14 15 import os 16 from typing import Text 17 18 import click 19 import git 20 21 from zenml.cli.cli import cli 22 from zenml.cli.utils import confirmation 23 from zenml.cli.utils import pass_repo 24 from zenml.repo import Repository 25 from zenml.utils.analytics_utils import track, INITIALIZE 26 27 28 @cli.command('init') 29 @click.option('--repo_path', type=click.Path(exists=True)) 30 @click.option('--pipelines_dir', type=click.Path(exists=True)) 31 @click.option('--analytics_opt_in', '-a', type=click.BOOL) 32 @track(event=INITIALIZE) 33 def init(repo_path: Text, pipelines_dir: Text = None, 34 analytics_opt_in: bool = None): 35 """Initialize ZenML on given path.""" 36 if repo_path is None: 37 repo_path = os.getcwd() 38 39 if analytics_opt_in is None: 40 analytics_opt_in = confirmation( 41 "ZenML collects anonymized usage information. This data helps us " 42 "create a better product and understand the needs of the " 43 "community better. You can find more information about exactly " 44 "why, what and how we collect usage analytics statistics at: " 45 "https://docs.zenml.io/misc/usage-analytics.html. " 46 "Would you like to opt-in to usage analytics?") 47 48 try: 49 Repository.init_repo( 50 repo_path, 51 None, 52 None, 53 pipelines_dir, 54 analytics_opt_in, 55 ) 56 click.echo(f'ZenML repo initialized at {repo_path}') 57 except git.InvalidGitRepositoryError: 58 click.echo(f'{repo_path} is not a valid git repository! Please ' 59 f'initialize ZenML within a git repository.') 60 61 62 @cli.command('clean') 63 @click.option('--yes', '-y', type=click.BOOL, default=False) 64 @pass_repo 65 def clean(repo: Repository, yes: bool = False): 66 """Clean everything in repository.""" 67 if not yes: 68 confirm = confirmation( 69 "This will completely delete all pipelines, their associated " 70 "artifacts and metadata ever created in this ZenML repository. " 71 "Are you sure you want to proceed?") 72 else: 73 confirm = True 74 75 click.echo("Not implemented for this version") 76 # if confirm: 77 # repo.clean() 78 [end of zenml/cli/base.py] [start of zenml/utils/requirement_utils.py] 1 import re 2 3 import subprocess 4 import sys 5 6 ######## 7 # BASE # 8 ######## 9 BASE_REQUIREMENTS = ["absl-py==0.10.0", 10 "pip-check-reqs>=2.0.1,<3", 11 "click>=7.0,<8", 12 "setuptools>=38.4.0", 13 "nbformat>=5.0.4", 14 "panel==0.8.3", 15 "plotly==4.0.0", 16 "tabulate==0.8.7", 17 "numpy==1.18.0", 18 "httplib2==0.17.0", 19 "tfx==0.26.1", 20 "fire==0.3.1", 21 "gitpython==3.1.11", 22 "analytics-python==1.2.9", 23 "distro==1.5.0", 24 "tensorflow>=2.3.0,<2.4.0", 25 "tensorflow-serving-api==2.3.0"] 26 27 ##################### 28 # EXTRAS: PROVIDERS # 29 ##################### 30 GCP_INTEGRATION = 'gcp' 31 GCP_REQUIREMENTS = ["apache-beam[gcp]==2.27.0", 32 "apache-beam==2.27.0", 33 "google-apitools==0.5.31"] 34 35 AWS_INTEGRATION = 'aws' 36 AWS_REQUIREMENTS = ["boto3==1.16.62"] 37 38 AZURE_INTEGRATION = 'azure' 39 AZURE_REQUIREMENTS = [] 40 41 ################### 42 # EXTRAS: TOOLING # 43 ################### 44 PYTORCH_INTEGRATION = 'pytorch' 45 PYTORCH_REQUIREMENTS = ['torch==1.7.0'] 46 47 CORTEX_INTEGRATION = 'cortex' 48 CORTEX_REQUIREMENTS = ['cortex==0.29.0'] 49 50 ############### 51 # DATASOURCES # 52 ############### 53 POSTGRES_INTEGRATION = 'postgres' 54 POSTGRES_REQUIREMENTS = ['beam-nuggets==0.17.0', 'pg8000==1.16.5', 55 'sqlalchemy==1.3.22'] 56 57 ####################### 58 # NLP via HuggingFace # 59 ####################### 60 HUGGINGFACE_INTEGRATION = 'huggingface' 61 HUGGINGFACE_REQUIREMENTS = ['transformers==4.3.3', 'tokenizers==0.10.1'] 62 63 ############### 64 # EXTRAS: ALL # 65 ############### 66 ALL_INTEGRATION = 'all' 67 ALL_REQUIREMENTS = BASE_REQUIREMENTS + \ 68 GCP_REQUIREMENTS + \ 69 PYTORCH_REQUIREMENTS + \ 70 AZURE_REQUIREMENTS + \ 71 AWS_REQUIREMENTS + \ 72 POSTGRES_REQUIREMENTS + \ 73 CORTEX_REQUIREMENTS + \ 74 HUGGINGFACE_REQUIREMENTS 75 76 EXTRAS_REQUIRE = {GCP_INTEGRATION: GCP_REQUIREMENTS, 77 AWS_INTEGRATION: AWS_REQUIREMENTS, 78 # AZURE_INTEGRATION: AZURE_REQUIREMENTS, 79 PYTORCH_INTEGRATION: PYTORCH_REQUIREMENTS, 80 CORTEX_INTEGRATION: CORTEX_REQUIREMENTS, 81 POSTGRES_INTEGRATION: POSTGRES_REQUIREMENTS, 82 HUGGINGFACE_INTEGRATION: HUGGINGFACE_REQUIREMENTS, 83 ALL_INTEGRATION: ALL_REQUIREMENTS} 84 85 86 ################## 87 # UTIL FUNCTIONS # 88 ################## 89 def check_integration(integration): 90 # Get the installed packages 91 reqs = subprocess.check_output([sys.executable, '-m', 'pip', 'freeze']) 92 installed_packages = [r.decode().split('==')[0] for r in reqs.split()] 93 94 # Get the required extra packages for the integration 95 assert integration in EXTRAS_REQUIRE, \ 96 f'At this moment, there is no integration for {integration}. ' \ 97 f'Possible integrations for ZenML ' \ 98 f'include: {list(EXTRAS_REQUIRE.keys())}.' 99 100 specs = EXTRAS_REQUIRE[integration] 101 102 for s in specs: 103 # Decouple from the version 104 pattern = r"([a-zA-Z0-9\-]+)(\[.+\])*(.*)" 105 s = re.search(pattern, s)[1] 106 107 # TODO: We can also validate the version 108 if s not in installed_packages: 109 raise ModuleNotFoundError( 110 f"{integration} integration not installed. " 111 f"Please install zenml[{integration}] via " 112 f"`pip install zenml[{integration}]`") 113 [end of zenml/utils/requirement_utils.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/zenml/cli/base.py b/zenml/cli/base.py --- a/zenml/cli/base.py +++ b/zenml/cli/base.py @@ -31,20 +31,11 @@ @click.option('--analytics_opt_in', '-a', type=click.BOOL) @track(event=INITIALIZE) def init(repo_path: Text, pipelines_dir: Text = None, - analytics_opt_in: bool = None): + analytics_opt_in: bool = True): """Initialize ZenML on given path.""" if repo_path is None: repo_path = os.getcwd() - if analytics_opt_in is None: - analytics_opt_in = confirmation( - "ZenML collects anonymized usage information. This data helps us " - "create a better product and understand the needs of the " - "community better. You can find more information about exactly " - "why, what and how we collect usage analytics statistics at: " - "https://docs.zenml.io/misc/usage-analytics.html. " - "Would you like to opt-in to usage analytics?") - try: Repository.init_repo( repo_path, diff --git a/zenml/utils/requirement_utils.py b/zenml/utils/requirement_utils.py --- a/zenml/utils/requirement_utils.py +++ b/zenml/utils/requirement_utils.py @@ -110,3 +110,11 @@ f"{integration} integration not installed. " f"Please install zenml[{integration}] via " f"`pip install zenml[{integration}]`") + + +def list_integrations(): + """Prints integrations in an easy to read format.""" + for k, v in EXTRAS_REQUIRE.items(): + print("*********") + print(f"The integration {k} has the following dependencies: {v}") + print(f'To install: pip install zenml[{k}]')
{"golden_diff": "diff --git a/zenml/cli/base.py b/zenml/cli/base.py\n--- a/zenml/cli/base.py\n+++ b/zenml/cli/base.py\n@@ -31,20 +31,11 @@\n @click.option('--analytics_opt_in', '-a', type=click.BOOL)\n @track(event=INITIALIZE)\n def init(repo_path: Text, pipelines_dir: Text = None,\n- analytics_opt_in: bool = None):\n+ analytics_opt_in: bool = True):\n \"\"\"Initialize ZenML on given path.\"\"\"\n if repo_path is None:\n repo_path = os.getcwd()\n \n- if analytics_opt_in is None:\n- analytics_opt_in = confirmation(\n- \"ZenML collects anonymized usage information. This data helps us \"\n- \"create a better product and understand the needs of the \"\n- \"community better. You can find more information about exactly \"\n- \"why, what and how we collect usage analytics statistics at: \"\n- \"https://docs.zenml.io/misc/usage-analytics.html. \"\n- \"Would you like to opt-in to usage analytics?\")\n-\n try:\n Repository.init_repo(\n repo_path,\ndiff --git a/zenml/utils/requirement_utils.py b/zenml/utils/requirement_utils.py\n--- a/zenml/utils/requirement_utils.py\n+++ b/zenml/utils/requirement_utils.py\n@@ -110,3 +110,11 @@\n f\"{integration} integration not installed. \"\n f\"Please install zenml[{integration}] via \"\n f\"`pip install zenml[{integration}]`\")\n+\n+\n+def list_integrations():\n+ \"\"\"Prints integrations in an easy to read format.\"\"\"\n+ for k, v in EXTRAS_REQUIRE.items():\n+ print(\"*********\")\n+ print(f\"The integration {k} has the following dependencies: {v}\")\n+ print(f'To install: pip install zenml[{k}]')\n", "issue": "[ENHANCEMENT] There should be a way to list all integrations with dependencies\n**Is your enhancement request related to a problem? Please describe.**\r\nNot easy to see the list of supported integrations -> For example: is it `zenml[torch]` or `zenml[pytorch]`\r\n\r\n**Describe the enhancement you'd like**\r\nA way to list all integrations with dependencies they would install.\r\n\r\n**How do you solve your current problem with the current status-quo of ZenML?**\r\nHave to look at source code\r\n\r\n**Additional context**\r\nThank you @JoyZhou for pointing it out\n", "before_files": [{"content": "# Copyright (c) maiot GmbH 2020. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at:\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express\n# or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\nimport os\nfrom typing import Text\n\nimport click\nimport git\n\nfrom zenml.cli.cli import cli\nfrom zenml.cli.utils import confirmation\nfrom zenml.cli.utils import pass_repo\nfrom zenml.repo import Repository\nfrom zenml.utils.analytics_utils import track, INITIALIZE\n\n\[email protected]('init')\[email protected]('--repo_path', type=click.Path(exists=True))\[email protected]('--pipelines_dir', type=click.Path(exists=True))\[email protected]('--analytics_opt_in', '-a', type=click.BOOL)\n@track(event=INITIALIZE)\ndef init(repo_path: Text, pipelines_dir: Text = None,\n analytics_opt_in: bool = None):\n \"\"\"Initialize ZenML on given path.\"\"\"\n if repo_path is None:\n repo_path = os.getcwd()\n\n if analytics_opt_in is None:\n analytics_opt_in = confirmation(\n \"ZenML collects anonymized usage information. This data helps us \"\n \"create a better product and understand the needs of the \"\n \"community better. You can find more information about exactly \"\n \"why, what and how we collect usage analytics statistics at: \"\n \"https://docs.zenml.io/misc/usage-analytics.html. \"\n \"Would you like to opt-in to usage analytics?\")\n\n try:\n Repository.init_repo(\n repo_path,\n None,\n None,\n pipelines_dir,\n analytics_opt_in,\n )\n click.echo(f'ZenML repo initialized at {repo_path}')\n except git.InvalidGitRepositoryError:\n click.echo(f'{repo_path} is not a valid git repository! Please '\n f'initialize ZenML within a git repository.')\n\n\[email protected]('clean')\[email protected]('--yes', '-y', type=click.BOOL, default=False)\n@pass_repo\ndef clean(repo: Repository, yes: bool = False):\n \"\"\"Clean everything in repository.\"\"\"\n if not yes:\n confirm = confirmation(\n \"This will completely delete all pipelines, their associated \"\n \"artifacts and metadata ever created in this ZenML repository. \"\n \"Are you sure you want to proceed?\")\n else:\n confirm = True\n\n click.echo(\"Not implemented for this version\")\n # if confirm:\n # repo.clean()\n", "path": "zenml/cli/base.py"}, {"content": "import re\n\nimport subprocess\nimport sys\n\n########\n# BASE #\n########\nBASE_REQUIREMENTS = [\"absl-py==0.10.0\",\n \"pip-check-reqs>=2.0.1,<3\",\n \"click>=7.0,<8\",\n \"setuptools>=38.4.0\",\n \"nbformat>=5.0.4\",\n \"panel==0.8.3\",\n \"plotly==4.0.0\",\n \"tabulate==0.8.7\",\n \"numpy==1.18.0\",\n \"httplib2==0.17.0\",\n \"tfx==0.26.1\",\n \"fire==0.3.1\",\n \"gitpython==3.1.11\",\n \"analytics-python==1.2.9\",\n \"distro==1.5.0\",\n \"tensorflow>=2.3.0,<2.4.0\",\n \"tensorflow-serving-api==2.3.0\"]\n\n#####################\n# EXTRAS: PROVIDERS #\n#####################\nGCP_INTEGRATION = 'gcp'\nGCP_REQUIREMENTS = [\"apache-beam[gcp]==2.27.0\",\n \"apache-beam==2.27.0\",\n \"google-apitools==0.5.31\"]\n\nAWS_INTEGRATION = 'aws'\nAWS_REQUIREMENTS = [\"boto3==1.16.62\"]\n\nAZURE_INTEGRATION = 'azure'\nAZURE_REQUIREMENTS = []\n\n###################\n# EXTRAS: TOOLING #\n###################\nPYTORCH_INTEGRATION = 'pytorch'\nPYTORCH_REQUIREMENTS = ['torch==1.7.0']\n\nCORTEX_INTEGRATION = 'cortex'\nCORTEX_REQUIREMENTS = ['cortex==0.29.0']\n\n###############\n# DATASOURCES #\n###############\nPOSTGRES_INTEGRATION = 'postgres'\nPOSTGRES_REQUIREMENTS = ['beam-nuggets==0.17.0', 'pg8000==1.16.5',\n 'sqlalchemy==1.3.22']\n\n#######################\n# NLP via HuggingFace #\n#######################\nHUGGINGFACE_INTEGRATION = 'huggingface'\nHUGGINGFACE_REQUIREMENTS = ['transformers==4.3.3', 'tokenizers==0.10.1']\n\n###############\n# EXTRAS: ALL #\n###############\nALL_INTEGRATION = 'all'\nALL_REQUIREMENTS = BASE_REQUIREMENTS + \\\n GCP_REQUIREMENTS + \\\n PYTORCH_REQUIREMENTS + \\\n AZURE_REQUIREMENTS + \\\n AWS_REQUIREMENTS + \\\n POSTGRES_REQUIREMENTS + \\\n CORTEX_REQUIREMENTS + \\\n HUGGINGFACE_REQUIREMENTS\n\nEXTRAS_REQUIRE = {GCP_INTEGRATION: GCP_REQUIREMENTS,\n AWS_INTEGRATION: AWS_REQUIREMENTS,\n # AZURE_INTEGRATION: AZURE_REQUIREMENTS,\n PYTORCH_INTEGRATION: PYTORCH_REQUIREMENTS,\n CORTEX_INTEGRATION: CORTEX_REQUIREMENTS,\n POSTGRES_INTEGRATION: POSTGRES_REQUIREMENTS,\n HUGGINGFACE_INTEGRATION: HUGGINGFACE_REQUIREMENTS,\n ALL_INTEGRATION: ALL_REQUIREMENTS}\n\n\n##################\n# UTIL FUNCTIONS #\n##################\ndef check_integration(integration):\n # Get the installed packages\n reqs = subprocess.check_output([sys.executable, '-m', 'pip', 'freeze'])\n installed_packages = [r.decode().split('==')[0] for r in reqs.split()]\n\n # Get the required extra packages for the integration\n assert integration in EXTRAS_REQUIRE, \\\n f'At this moment, there is no integration for {integration}. ' \\\n f'Possible integrations for ZenML ' \\\n f'include: {list(EXTRAS_REQUIRE.keys())}.'\n\n specs = EXTRAS_REQUIRE[integration]\n\n for s in specs:\n # Decouple from the version\n pattern = r\"([a-zA-Z0-9\\-]+)(\\[.+\\])*(.*)\"\n s = re.search(pattern, s)[1]\n\n # TODO: We can also validate the version\n if s not in installed_packages:\n raise ModuleNotFoundError(\n f\"{integration} integration not installed. \"\n f\"Please install zenml[{integration}] via \"\n f\"`pip install zenml[{integration}]`\")\n", "path": "zenml/utils/requirement_utils.py"}]}
2,639
426
gh_patches_debug_15300
rasdani/github-patches
git_diff
systemd__mkosi-2303
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> `mkosi qemu` crashes when `--directory` has only one path component Starting with version 20 (`20.1-1~bpo12+1` from Debian bookworm-backports), `mkosi qemu` seems to assume that its config directory path always has at least 2 components. This leads to crashes like the following: ``` root@mkosi:~# mkdir /foo; mkosi --directory=/foo qemu [...] mke2fs 1.47.0 (5-Feb-2023) Traceback (most recent call last): File "/usr/lib/python3/dist-packages/mkosi/run.py", line 162, in uncaught_exception_handler yield File "/usr/lib/python3.11/contextlib.py", line 81, in inner return func(*args, **kwds) ^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/mkosi/__main__.py", line 34, in main run_verb(args, images) File "/usr/lib/python3/dist-packages/mkosi/__init__.py", line 3468, in run_verb { File "/usr/lib/python3/dist-packages/mkosi/qemu.py", line 830, in run_qemu sandbox=config.sandbox(network=True, devices=True, relaxed=True), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/mkosi/config.py", line 1386, in sandbox return sandbox_cmd( ^^^^^^^^^^^^ File "/usr/lib/python3/dist-packages/mkosi/sandbox.py", line 124, in sandbox_cmd if (d := os.fspath(list(Path.cwd().parents)[-2])) not in (*dirs, "/home", "/usr", "/nix", "/tmp"): ~~~~~~~~~~~~~~~~~~~~~~~~^^^^ IndexError: list index out of range ``` I ran into this problem by using mkosi inside a Vagrant VM, which mounts the project directory to `/vagrant` by default. Changing that mountpoint to `/media/vagrant` works as a workaround. </issue> <code> [start of mkosi/sandbox.py] 1 # SPDX-License-Identifier: LGPL-2.1+ 2 import enum 3 import logging 4 import os 5 import uuid 6 from collections.abc import Sequence 7 from pathlib import Path 8 from typing import Optional 9 10 from mkosi.types import PathString 11 from mkosi.util import INVOKING_USER, flatten, one_zero 12 13 14 # https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h 15 class Capability(enum.Enum): 16 CAP_NET_ADMIN = 12 17 18 19 def have_effective_cap(capability: Capability) -> bool: 20 for line in Path("/proc/self/status").read_text().splitlines(): 21 if line.startswith("CapEff:"): 22 hexcap = line.removeprefix("CapEff:").strip() 23 break 24 else: 25 logging.warning(f"\"CapEff:\" not found in /proc/self/status, assuming we don't have {capability}") 26 return False 27 28 return (int(hexcap, 16) & (1 << capability.value)) != 0 29 30 31 def finalize_passwd_mounts(root: Path) -> list[PathString]: 32 """ 33 If passwd or a related file exists in the apivfs directory, bind mount it over the host files while we 34 run the command, to make sure that the command we run uses user/group information from the apivfs 35 directory instead of from the host. 36 """ 37 options: list[PathString] = [] 38 39 for f in ("passwd", "group", "shadow", "gshadow"): 40 options += ["--ro-bind-try", root / "etc" / f, f"/etc/{f}"] 41 42 return options 43 44 45 def finalize_crypto_mounts(tools: Path = Path("/")) -> list[PathString]: 46 mounts = [ 47 (tools / subdir, Path("/") / subdir) 48 for subdir in ( 49 Path("etc/pki"), 50 Path("etc/ssl"), 51 Path("etc/crypto-policies"), 52 Path("etc/ca-certificates"), 53 Path("etc/pacman.d/gnupg"), 54 Path("var/lib/ca-certificates"), 55 ) 56 if (tools / subdir).exists() 57 ] 58 59 return flatten( 60 ["--ro-bind", src, target] 61 for src, target 62 in sorted(set(mounts), key=lambda s: s[1]) 63 ) 64 65 66 def sandbox_cmd( 67 *, 68 network: bool = False, 69 devices: bool = False, 70 scripts: Optional[Path] = None, 71 tools: Path = Path("/"), 72 relaxed: bool = False, 73 options: Sequence[PathString] = (), 74 ) -> list[PathString]: 75 cmdline: list[PathString] = [] 76 77 if not relaxed: 78 # We want to use an empty subdirectory in the host's /var/tmp as the sandbox's /var/tmp. To make sure it only 79 # gets created when we run the sandboxed command and cleaned up when the sandboxed command exits, we create it 80 # using shell. 81 vartmp = f"/var/tmp/mkosi-var-tmp-{uuid.uuid4().hex[:16]}" 82 cmdline += ["sh", "-c", f"trap 'rm -rf {vartmp}' EXIT && mkdir --mode 1777 {vartmp} && $0 \"$@\""] 83 else: 84 vartmp = None 85 86 cmdline += [ 87 "bwrap", 88 "--ro-bind", tools / "usr", "/usr", 89 *(["--unshare-net"] if not network and have_effective_cap(Capability.CAP_NET_ADMIN) else []), 90 "--die-with-parent", 91 "--proc", "/proc", 92 "--setenv", "SYSTEMD_OFFLINE", one_zero(network), 93 ] 94 95 if relaxed: 96 cmdline += ["--bind", "/tmp", "/tmp"] 97 else: 98 cmdline += [ 99 "--tmpfs", "/tmp", 100 "--unshare-ipc", 101 ] 102 103 if (tools / "nix/store").exists(): 104 cmdline += ["--bind", tools / "nix/store", "/nix/store"] 105 106 if devices or relaxed: 107 cmdline += [ 108 "--bind", "/sys", "/sys", 109 "--bind", "/run", "/run", 110 "--dev-bind", "/dev", "/dev", 111 ] 112 else: 113 cmdline += ["--dev", "/dev"] 114 115 if relaxed: 116 dirs = ("/etc", "/opt", "/srv", "/media", "/mnt", "/var", os.fspath(INVOKING_USER.home())) 117 118 for d in dirs: 119 if Path(d).exists(): 120 cmdline += ["--bind", d, d] 121 122 # `Path.parents` only supports slices and negative indexing from Python 3.10 onwards. 123 # TODO: Remove list() when we depend on Python 3.10 or newer. 124 if (d := os.fspath(list(Path.cwd().parents)[-2])) not in (*dirs, "/home", "/usr", "/nix", "/tmp"): 125 cmdline += ["--bind", d, d] 126 127 if vartmp: 128 cmdline += ["--bind", vartmp, "/var/tmp"] 129 130 for d in ("bin", "sbin", "lib", "lib32", "lib64"): 131 if (p := tools / d).is_symlink(): 132 cmdline += ["--symlink", p.readlink(), Path("/") / p.relative_to(tools)] 133 134 path = "/usr/bin:/usr/sbin" if tools != Path("/") else os.environ["PATH"] 135 136 cmdline += [ 137 "--setenv", "PATH", f"{scripts or ''}:{path}", 138 *options, 139 ] 140 141 if not relaxed: 142 cmdline += ["--symlink", "../proc/self/mounts", "/etc/mtab"] 143 144 # If we're using /usr from a tools tree, we have to use /etc/alternatives from the tools tree as well if it 145 # exists since that points directly back to /usr. Apply this after the options so the caller can mount 146 # something else to /etc without overriding this mount. In relaxed mode, we only do this if /etc/alternatives 147 # already exists on the host as otherwise we'd modify the host's /etc by creating the mountpoint ourselves (or 148 # fail when trying to create it). 149 if (tools / "etc/alternatives").exists() and (not relaxed or Path("/etc/alternatives").exists()): 150 cmdline += ["--ro-bind", tools / "etc/alternatives", "/etc/alternatives"] 151 152 if scripts: 153 cmdline += ["--ro-bind", scripts, scripts] 154 155 if network and not relaxed: 156 cmdline += ["--bind", "/etc/resolv.conf", "/etc/resolv.conf"] 157 158 # bubblewrap creates everything with a restricted mode so relax stuff as needed. 159 ops = [] 160 if not devices: 161 ops += ["chmod 1777 /dev/shm"] 162 if not relaxed: 163 ops += ["chmod 755 /etc"] 164 ops += ["exec $0 \"$@\""] 165 166 cmdline += ["sh", "-c", " && ".join(ops)] 167 168 return cmdline 169 170 171 def apivfs_cmd(root: Path) -> list[PathString]: 172 return [ 173 "bwrap", 174 "--dev-bind", "/", "/", 175 "--tmpfs", root / "run", 176 "--tmpfs", root / "tmp", 177 "--bind", "/var/tmp", root / "var/tmp", 178 "--proc", root / "proc", 179 "--dev", root / "dev", 180 # APIVFS generally means chrooting is going to happen so unset TMPDIR just to be safe. 181 "--unsetenv", "TMPDIR", 182 # Make sure /etc/machine-id is not overwritten by any package manager post install scripts. 183 "--ro-bind-try", root / "etc/machine-id", root / "etc/machine-id", 184 *finalize_passwd_mounts(root), 185 "sh", "-c", 186 f"chmod 1777 {root / 'tmp'} {root / 'var/tmp'} {root / 'dev/shm'} && " 187 f"chmod 755 {root / 'run'} && " 188 # Make sure anything running in the root directory thinks it's in a container. $container can't always be 189 # accessed so we write /run/host/container-manager as well which is always accessible. 190 f"mkdir -m 755 {root}/run/host && echo mkosi >{root}/run/host/container-manager && " 191 "exec $0 \"$@\"", 192 ] 193 194 195 def chroot_cmd(root: Path, *, resolve: bool = False, options: Sequence[PathString] = ()) -> list[PathString]: 196 cmdline: list[PathString] = [ 197 "sh", "-c", 198 f"trap 'rm -rf {root / 'work'}' EXIT && " 199 # /etc/resolv.conf can be a dangling symlink to /run/systemd/resolve/stub-resolv.conf. Bubblewrap tries to call 200 # mkdir() on each component of the path which means it will try to call 201 # mkdir(/run/systemd/resolve/stub-resolv.conf) which will fail unless /run/systemd/resolve exists already so 202 # we make sure that it already exists. 203 f"mkdir -p -m 755 {root / 'work'} {root / 'run/systemd'} {root / 'run/systemd/resolve'} && " 204 # No exec here because we need to clean up the /work directory afterwards. 205 f"$0 \"$@\"", 206 "bwrap", 207 "--dev-bind", root, "/", 208 "--setenv", "container", "mkosi", 209 "--setenv", "HOME", "/", 210 "--setenv", "PATH", "/work/scripts:/usr/bin:/usr/sbin", 211 ] 212 213 if resolve: 214 cmdline += ["--ro-bind-try", "/etc/resolv.conf", "/etc/resolv.conf"] 215 216 cmdline += options 217 218 return apivfs_cmd(root) + cmdline 219 [end of mkosi/sandbox.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/mkosi/sandbox.py b/mkosi/sandbox.py --- a/mkosi/sandbox.py +++ b/mkosi/sandbox.py @@ -119,9 +119,16 @@ if Path(d).exists(): cmdline += ["--bind", d, d] - # `Path.parents` only supports slices and negative indexing from Python 3.10 onwards. - # TODO: Remove list() when we depend on Python 3.10 or newer. - if (d := os.fspath(list(Path.cwd().parents)[-2])) not in (*dirs, "/home", "/usr", "/nix", "/tmp"): + if len(Path.cwd().parents) >= 2: + # `Path.parents` only supports slices and negative indexing from Python 3.10 onwards. + # TODO: Remove list() when we depend on Python 3.10 or newer. + d = os.fspath(list(Path.cwd().parents)[-2]) + elif len(Path.cwd().parents) == 1: + d = os.fspath(Path.cwd()) + else: + d = "" + + if d and d not in (*dirs, "/home", "/usr", "/nix", "/tmp"): cmdline += ["--bind", d, d] if vartmp:
{"golden_diff": "diff --git a/mkosi/sandbox.py b/mkosi/sandbox.py\n--- a/mkosi/sandbox.py\n+++ b/mkosi/sandbox.py\n@@ -119,9 +119,16 @@\n if Path(d).exists():\n cmdline += [\"--bind\", d, d]\n \n- # `Path.parents` only supports slices and negative indexing from Python 3.10 onwards.\n- # TODO: Remove list() when we depend on Python 3.10 or newer.\n- if (d := os.fspath(list(Path.cwd().parents)[-2])) not in (*dirs, \"/home\", \"/usr\", \"/nix\", \"/tmp\"):\n+ if len(Path.cwd().parents) >= 2:\n+ # `Path.parents` only supports slices and negative indexing from Python 3.10 onwards.\n+ # TODO: Remove list() when we depend on Python 3.10 or newer.\n+ d = os.fspath(list(Path.cwd().parents)[-2])\n+ elif len(Path.cwd().parents) == 1:\n+ d = os.fspath(Path.cwd())\n+ else:\n+ d = \"\"\n+\n+ if d and d not in (*dirs, \"/home\", \"/usr\", \"/nix\", \"/tmp\"):\n cmdline += [\"--bind\", d, d]\n \n if vartmp:\n", "issue": "`mkosi qemu` crashes when `--directory` has only one path component\nStarting with version 20 (`20.1-1~bpo12+1` from Debian bookworm-backports), `mkosi qemu` seems to assume that its config directory path always has at least 2 components. This leads to crashes like the following:\r\n\r\n```\r\nroot@mkosi:~# mkdir /foo; mkosi --directory=/foo qemu\r\n[...]\r\nmke2fs 1.47.0 (5-Feb-2023)\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3/dist-packages/mkosi/run.py\", line 162, in uncaught_exception_handler\r\n yield\r\n File \"/usr/lib/python3.11/contextlib.py\", line 81, in inner\r\n return func(*args, **kwds)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3/dist-packages/mkosi/__main__.py\", line 34, in main\r\n run_verb(args, images)\r\n File \"/usr/lib/python3/dist-packages/mkosi/__init__.py\", line 3468, in run_verb\r\n {\r\n File \"/usr/lib/python3/dist-packages/mkosi/qemu.py\", line 830, in run_qemu\r\n sandbox=config.sandbox(network=True, devices=True, relaxed=True),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3/dist-packages/mkosi/config.py\", line 1386, in sandbox\r\n return sandbox_cmd(\r\n ^^^^^^^^^^^^\r\n File \"/usr/lib/python3/dist-packages/mkosi/sandbox.py\", line 124, in sandbox_cmd\r\n if (d := os.fspath(list(Path.cwd().parents)[-2])) not in (*dirs, \"/home\", \"/usr\", \"/nix\", \"/tmp\"):\r\n ~~~~~~~~~~~~~~~~~~~~~~~~^^^^\r\nIndexError: list index out of range\r\n```\r\n\r\nI ran into this problem by using mkosi inside a Vagrant VM, which mounts the project directory to `/vagrant` by default. Changing that mountpoint to `/media/vagrant` works as a workaround.\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: LGPL-2.1+\nimport enum\nimport logging\nimport os\nimport uuid\nfrom collections.abc import Sequence\nfrom pathlib import Path\nfrom typing import Optional\n\nfrom mkosi.types import PathString\nfrom mkosi.util import INVOKING_USER, flatten, one_zero\n\n\n# https://github.com/torvalds/linux/blob/master/include/uapi/linux/capability.h\nclass Capability(enum.Enum):\n CAP_NET_ADMIN = 12\n\n\ndef have_effective_cap(capability: Capability) -> bool:\n for line in Path(\"/proc/self/status\").read_text().splitlines():\n if line.startswith(\"CapEff:\"):\n hexcap = line.removeprefix(\"CapEff:\").strip()\n break\n else:\n logging.warning(f\"\\\"CapEff:\\\" not found in /proc/self/status, assuming we don't have {capability}\")\n return False\n\n return (int(hexcap, 16) & (1 << capability.value)) != 0\n\n\ndef finalize_passwd_mounts(root: Path) -> list[PathString]:\n \"\"\"\n If passwd or a related file exists in the apivfs directory, bind mount it over the host files while we\n run the command, to make sure that the command we run uses user/group information from the apivfs\n directory instead of from the host.\n \"\"\"\n options: list[PathString] = []\n\n for f in (\"passwd\", \"group\", \"shadow\", \"gshadow\"):\n options += [\"--ro-bind-try\", root / \"etc\" / f, f\"/etc/{f}\"]\n\n return options\n\n\ndef finalize_crypto_mounts(tools: Path = Path(\"/\")) -> list[PathString]:\n mounts = [\n (tools / subdir, Path(\"/\") / subdir)\n for subdir in (\n Path(\"etc/pki\"),\n Path(\"etc/ssl\"),\n Path(\"etc/crypto-policies\"),\n Path(\"etc/ca-certificates\"),\n Path(\"etc/pacman.d/gnupg\"),\n Path(\"var/lib/ca-certificates\"),\n )\n if (tools / subdir).exists()\n ]\n\n return flatten(\n [\"--ro-bind\", src, target]\n for src, target\n in sorted(set(mounts), key=lambda s: s[1])\n )\n\n\ndef sandbox_cmd(\n *,\n network: bool = False,\n devices: bool = False,\n scripts: Optional[Path] = None,\n tools: Path = Path(\"/\"),\n relaxed: bool = False,\n options: Sequence[PathString] = (),\n) -> list[PathString]:\n cmdline: list[PathString] = []\n\n if not relaxed:\n # We want to use an empty subdirectory in the host's /var/tmp as the sandbox's /var/tmp. To make sure it only\n # gets created when we run the sandboxed command and cleaned up when the sandboxed command exits, we create it\n # using shell.\n vartmp = f\"/var/tmp/mkosi-var-tmp-{uuid.uuid4().hex[:16]}\"\n cmdline += [\"sh\", \"-c\", f\"trap 'rm -rf {vartmp}' EXIT && mkdir --mode 1777 {vartmp} && $0 \\\"$@\\\"\"]\n else:\n vartmp = None\n\n cmdline += [\n \"bwrap\",\n \"--ro-bind\", tools / \"usr\", \"/usr\",\n *([\"--unshare-net\"] if not network and have_effective_cap(Capability.CAP_NET_ADMIN) else []),\n \"--die-with-parent\",\n \"--proc\", \"/proc\",\n \"--setenv\", \"SYSTEMD_OFFLINE\", one_zero(network),\n ]\n\n if relaxed:\n cmdline += [\"--bind\", \"/tmp\", \"/tmp\"]\n else:\n cmdline += [\n \"--tmpfs\", \"/tmp\",\n \"--unshare-ipc\",\n ]\n\n if (tools / \"nix/store\").exists():\n cmdline += [\"--bind\", tools / \"nix/store\", \"/nix/store\"]\n\n if devices or relaxed:\n cmdline += [\n \"--bind\", \"/sys\", \"/sys\",\n \"--bind\", \"/run\", \"/run\",\n \"--dev-bind\", \"/dev\", \"/dev\",\n ]\n else:\n cmdline += [\"--dev\", \"/dev\"]\n\n if relaxed:\n dirs = (\"/etc\", \"/opt\", \"/srv\", \"/media\", \"/mnt\", \"/var\", os.fspath(INVOKING_USER.home()))\n\n for d in dirs:\n if Path(d).exists():\n cmdline += [\"--bind\", d, d]\n\n # `Path.parents` only supports slices and negative indexing from Python 3.10 onwards.\n # TODO: Remove list() when we depend on Python 3.10 or newer.\n if (d := os.fspath(list(Path.cwd().parents)[-2])) not in (*dirs, \"/home\", \"/usr\", \"/nix\", \"/tmp\"):\n cmdline += [\"--bind\", d, d]\n\n if vartmp:\n cmdline += [\"--bind\", vartmp, \"/var/tmp\"]\n\n for d in (\"bin\", \"sbin\", \"lib\", \"lib32\", \"lib64\"):\n if (p := tools / d).is_symlink():\n cmdline += [\"--symlink\", p.readlink(), Path(\"/\") / p.relative_to(tools)]\n\n path = \"/usr/bin:/usr/sbin\" if tools != Path(\"/\") else os.environ[\"PATH\"]\n\n cmdline += [\n \"--setenv\", \"PATH\", f\"{scripts or ''}:{path}\",\n *options,\n ]\n\n if not relaxed:\n cmdline += [\"--symlink\", \"../proc/self/mounts\", \"/etc/mtab\"]\n\n # If we're using /usr from a tools tree, we have to use /etc/alternatives from the tools tree as well if it\n # exists since that points directly back to /usr. Apply this after the options so the caller can mount\n # something else to /etc without overriding this mount. In relaxed mode, we only do this if /etc/alternatives\n # already exists on the host as otherwise we'd modify the host's /etc by creating the mountpoint ourselves (or\n # fail when trying to create it).\n if (tools / \"etc/alternatives\").exists() and (not relaxed or Path(\"/etc/alternatives\").exists()):\n cmdline += [\"--ro-bind\", tools / \"etc/alternatives\", \"/etc/alternatives\"]\n\n if scripts:\n cmdline += [\"--ro-bind\", scripts, scripts]\n\n if network and not relaxed:\n cmdline += [\"--bind\", \"/etc/resolv.conf\", \"/etc/resolv.conf\"]\n\n # bubblewrap creates everything with a restricted mode so relax stuff as needed.\n ops = []\n if not devices:\n ops += [\"chmod 1777 /dev/shm\"]\n if not relaxed:\n ops += [\"chmod 755 /etc\"]\n ops += [\"exec $0 \\\"$@\\\"\"]\n\n cmdline += [\"sh\", \"-c\", \" && \".join(ops)]\n\n return cmdline\n\n\ndef apivfs_cmd(root: Path) -> list[PathString]:\n return [\n \"bwrap\",\n \"--dev-bind\", \"/\", \"/\",\n \"--tmpfs\", root / \"run\",\n \"--tmpfs\", root / \"tmp\",\n \"--bind\", \"/var/tmp\", root / \"var/tmp\",\n \"--proc\", root / \"proc\",\n \"--dev\", root / \"dev\",\n # APIVFS generally means chrooting is going to happen so unset TMPDIR just to be safe.\n \"--unsetenv\", \"TMPDIR\",\n # Make sure /etc/machine-id is not overwritten by any package manager post install scripts.\n \"--ro-bind-try\", root / \"etc/machine-id\", root / \"etc/machine-id\",\n *finalize_passwd_mounts(root),\n \"sh\", \"-c\",\n f\"chmod 1777 {root / 'tmp'} {root / 'var/tmp'} {root / 'dev/shm'} && \"\n f\"chmod 755 {root / 'run'} && \"\n # Make sure anything running in the root directory thinks it's in a container. $container can't always be\n # accessed so we write /run/host/container-manager as well which is always accessible.\n f\"mkdir -m 755 {root}/run/host && echo mkosi >{root}/run/host/container-manager && \"\n \"exec $0 \\\"$@\\\"\",\n ]\n\n\ndef chroot_cmd(root: Path, *, resolve: bool = False, options: Sequence[PathString] = ()) -> list[PathString]:\n cmdline: list[PathString] = [\n \"sh\", \"-c\",\n f\"trap 'rm -rf {root / 'work'}' EXIT && \"\n # /etc/resolv.conf can be a dangling symlink to /run/systemd/resolve/stub-resolv.conf. Bubblewrap tries to call\n # mkdir() on each component of the path which means it will try to call\n # mkdir(/run/systemd/resolve/stub-resolv.conf) which will fail unless /run/systemd/resolve exists already so\n # we make sure that it already exists.\n f\"mkdir -p -m 755 {root / 'work'} {root / 'run/systemd'} {root / 'run/systemd/resolve'} && \"\n # No exec here because we need to clean up the /work directory afterwards.\n f\"$0 \\\"$@\\\"\",\n \"bwrap\",\n \"--dev-bind\", root, \"/\",\n \"--setenv\", \"container\", \"mkosi\",\n \"--setenv\", \"HOME\", \"/\",\n \"--setenv\", \"PATH\", \"/work/scripts:/usr/bin:/usr/sbin\",\n ]\n\n if resolve:\n cmdline += [\"--ro-bind-try\", \"/etc/resolv.conf\", \"/etc/resolv.conf\"]\n\n cmdline += options\n\n return apivfs_cmd(root) + cmdline\n", "path": "mkosi/sandbox.py"}]}
3,712
294
gh_patches_debug_7284
rasdani/github-patches
git_diff
scrapy__scrapy-3381
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> scrapy check command ignores contracts with the same URL ### Description When testing callbacks with the same URL, only one callback is tested. ### Reproduce ```python class DemoSpider(Spider): name = 'demo_spider' def returns_item_with_url(self, response): """ method which returns request @url http://scrapy.org @returns items 1 1 """ return TestItem(url=response.url) def returns_item_with_name(self, response): """ method which returns request @url http://scrapy.org @returns items 1 1 """ return TestItem(name='scrapy') ``` Then run `scrapy check`. You'll get the following output: ``` . ---------------------------------------------------------------------- Ran 1 contract in 0.894s OK ``` ### Reason This is default behavior for crawlers to filter same URLs. ### Solution Use `dont_filter` in requests returned by `ContractsManager`. </issue> <code> [start of scrapy/contracts/__init__.py] 1 import sys 2 import re 3 from functools import wraps 4 from inspect import getmembers 5 from unittest import TestCase 6 7 from scrapy.http import Request 8 from scrapy.utils.spider import iterate_spider_output 9 from scrapy.utils.python import get_spec 10 11 12 class ContractsManager(object): 13 contracts = {} 14 15 def __init__(self, contracts): 16 for contract in contracts: 17 self.contracts[contract.name] = contract 18 19 def tested_methods_from_spidercls(self, spidercls): 20 methods = [] 21 for key, value in getmembers(spidercls): 22 if (callable(value) and value.__doc__ and 23 re.search(r'^\s*@', value.__doc__, re.MULTILINE)): 24 methods.append(key) 25 26 return methods 27 28 def extract_contracts(self, method): 29 contracts = [] 30 for line in method.__doc__.split('\n'): 31 line = line.strip() 32 33 if line.startswith('@'): 34 name, args = re.match(r'@(\w+)\s*(.*)', line).groups() 35 args = re.split(r'\s+', args) 36 37 contracts.append(self.contracts[name](method, *args)) 38 39 return contracts 40 41 def from_spider(self, spider, results): 42 requests = [] 43 for method in self.tested_methods_from_spidercls(type(spider)): 44 bound_method = spider.__getattribute__(method) 45 requests.append(self.from_method(bound_method, results)) 46 47 return requests 48 49 def from_method(self, method, results): 50 contracts = self.extract_contracts(method) 51 if contracts: 52 request_cls = Request 53 for contract in contracts: 54 if contract.request_cls is not None: 55 request_cls = contract.request_cls 56 57 # calculate request args 58 args, kwargs = get_spec(request_cls.__init__) 59 kwargs['callback'] = method 60 for contract in contracts: 61 kwargs = contract.adjust_request_args(kwargs) 62 63 args.remove('self') 64 65 # check if all positional arguments are defined in kwargs 66 if set(args).issubset(set(kwargs)): 67 request = request_cls(**kwargs) 68 69 # execute pre and post hooks in order 70 for contract in reversed(contracts): 71 request = contract.add_pre_hook(request, results) 72 for contract in contracts: 73 request = contract.add_post_hook(request, results) 74 75 self._clean_req(request, method, results) 76 return request 77 78 def _clean_req(self, request, method, results): 79 """ stop the request from returning objects and records any errors """ 80 81 cb = request.callback 82 83 @wraps(cb) 84 def cb_wrapper(response): 85 try: 86 output = cb(response) 87 output = list(iterate_spider_output(output)) 88 except: 89 case = _create_testcase(method, 'callback') 90 results.addError(case, sys.exc_info()) 91 92 def eb_wrapper(failure): 93 case = _create_testcase(method, 'errback') 94 exc_info = failure.type, failure.value, failure.getTracebackObject() 95 results.addError(case, exc_info) 96 97 request.callback = cb_wrapper 98 request.errback = eb_wrapper 99 100 101 class Contract(object): 102 """ Abstract class for contracts """ 103 request_cls = None 104 105 def __init__(self, method, *args): 106 self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name) 107 self.testcase_post = _create_testcase(method, '@%s post-hook' % self.name) 108 self.args = args 109 110 def add_pre_hook(self, request, results): 111 if hasattr(self, 'pre_process'): 112 cb = request.callback 113 114 @wraps(cb) 115 def wrapper(response): 116 try: 117 results.startTest(self.testcase_pre) 118 self.pre_process(response) 119 results.stopTest(self.testcase_pre) 120 except AssertionError: 121 results.addFailure(self.testcase_pre, sys.exc_info()) 122 except Exception: 123 results.addError(self.testcase_pre, sys.exc_info()) 124 else: 125 results.addSuccess(self.testcase_pre) 126 finally: 127 return list(iterate_spider_output(cb(response))) 128 129 request.callback = wrapper 130 131 return request 132 133 def add_post_hook(self, request, results): 134 if hasattr(self, 'post_process'): 135 cb = request.callback 136 137 @wraps(cb) 138 def wrapper(response): 139 output = list(iterate_spider_output(cb(response))) 140 try: 141 results.startTest(self.testcase_post) 142 self.post_process(output) 143 results.stopTest(self.testcase_post) 144 except AssertionError: 145 results.addFailure(self.testcase_post, sys.exc_info()) 146 except Exception: 147 results.addError(self.testcase_post, sys.exc_info()) 148 else: 149 results.addSuccess(self.testcase_post) 150 finally: 151 return output 152 153 request.callback = wrapper 154 155 return request 156 157 def adjust_request_args(self, args): 158 return args 159 160 161 def _create_testcase(method, desc): 162 spider = method.__self__.name 163 164 class ContractTestCase(TestCase): 165 def __str__(_self): 166 return "[%s] %s (%s)" % (spider, method.__name__, desc) 167 168 name = '%s_%s' % (spider, method.__name__) 169 setattr(ContractTestCase, name, lambda x: x) 170 return ContractTestCase(name) 171 [end of scrapy/contracts/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py --- a/scrapy/contracts/__init__.py +++ b/scrapy/contracts/__init__.py @@ -56,7 +56,12 @@ # calculate request args args, kwargs = get_spec(request_cls.__init__) + + # Don't filter requests to allow + # testing different callbacks on the same URL. + kwargs['dont_filter'] = True kwargs['callback'] = method + for contract in contracts: kwargs = contract.adjust_request_args(kwargs)
{"golden_diff": "diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py\n--- a/scrapy/contracts/__init__.py\n+++ b/scrapy/contracts/__init__.py\n@@ -56,7 +56,12 @@\n \n # calculate request args\n args, kwargs = get_spec(request_cls.__init__)\n+\n+ # Don't filter requests to allow\n+ # testing different callbacks on the same URL.\n+ kwargs['dont_filter'] = True\n kwargs['callback'] = method\n+\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n", "issue": "scrapy check command ignores contracts with the same URL\n### Description\r\nWhen testing callbacks with the same URL, only one callback is tested.\r\n\r\n### Reproduce\r\n```python\r\nclass DemoSpider(Spider):\r\n name = 'demo_spider'\r\n\r\n def returns_item_with_url(self, response):\r\n \"\"\" method which returns request\r\n @url http://scrapy.org\r\n @returns items 1 1\r\n \"\"\"\r\n return TestItem(url=response.url)\r\n\r\n def returns_item_with_name(self, response):\r\n \"\"\" method which returns request\r\n @url http://scrapy.org\r\n @returns items 1 1\r\n \"\"\"\r\n return TestItem(name='scrapy')\r\n```\r\n\r\nThen run `scrapy check`.\r\n\r\nYou'll get the following output:\r\n\r\n```\r\n.\r\n----------------------------------------------------------------------\r\nRan 1 contract in 0.894s\r\n\r\nOK\r\n```\r\n\r\n### Reason\r\nThis is default behavior for crawlers to filter same URLs.\r\n\r\n### Solution\r\nUse `dont_filter` in requests returned by `ContractsManager`.\n", "before_files": [{"content": "import sys\nimport re\nfrom functools import wraps\nfrom inspect import getmembers\nfrom unittest import TestCase\n\nfrom scrapy.http import Request\nfrom scrapy.utils.spider import iterate_spider_output\nfrom scrapy.utils.python import get_spec\n\n\nclass ContractsManager(object):\n contracts = {}\n\n def __init__(self, contracts):\n for contract in contracts:\n self.contracts[contract.name] = contract\n\n def tested_methods_from_spidercls(self, spidercls):\n methods = []\n for key, value in getmembers(spidercls):\n if (callable(value) and value.__doc__ and\n re.search(r'^\\s*@', value.__doc__, re.MULTILINE)):\n methods.append(key)\n\n return methods\n\n def extract_contracts(self, method):\n contracts = []\n for line in method.__doc__.split('\\n'):\n line = line.strip()\n\n if line.startswith('@'):\n name, args = re.match(r'@(\\w+)\\s*(.*)', line).groups()\n args = re.split(r'\\s+', args)\n\n contracts.append(self.contracts[name](method, *args))\n\n return contracts\n\n def from_spider(self, spider, results):\n requests = []\n for method in self.tested_methods_from_spidercls(type(spider)):\n bound_method = spider.__getattribute__(method)\n requests.append(self.from_method(bound_method, results))\n\n return requests\n\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n request_cls = Request\n for contract in contracts:\n if contract.request_cls is not None:\n request_cls = contract.request_cls\n\n # calculate request args\n args, kwargs = get_spec(request_cls.__init__)\n kwargs['callback'] = method\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n\n args.remove('self')\n\n # check if all positional arguments are defined in kwargs\n if set(args).issubset(set(kwargs)):\n request = request_cls(**kwargs)\n\n # execute pre and post hooks in order\n for contract in reversed(contracts):\n request = contract.add_pre_hook(request, results)\n for contract in contracts:\n request = contract.add_post_hook(request, results)\n\n self._clean_req(request, method, results)\n return request\n\n def _clean_req(self, request, method, results):\n \"\"\" stop the request from returning objects and records any errors \"\"\"\n\n cb = request.callback\n\n @wraps(cb)\n def cb_wrapper(response):\n try:\n output = cb(response)\n output = list(iterate_spider_output(output))\n except:\n case = _create_testcase(method, 'callback')\n results.addError(case, sys.exc_info())\n\n def eb_wrapper(failure):\n case = _create_testcase(method, 'errback')\n exc_info = failure.type, failure.value, failure.getTracebackObject()\n results.addError(case, exc_info)\n\n request.callback = cb_wrapper\n request.errback = eb_wrapper\n\n\nclass Contract(object):\n \"\"\" Abstract class for contracts \"\"\"\n request_cls = None\n\n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)\n self.testcase_post = _create_testcase(method, '@%s post-hook' % self.name)\n self.args = args\n\n def add_pre_hook(self, request, results):\n if hasattr(self, 'pre_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n try:\n results.startTest(self.testcase_pre)\n self.pre_process(response)\n results.stopTest(self.testcase_pre)\n except AssertionError:\n results.addFailure(self.testcase_pre, sys.exc_info())\n except Exception:\n results.addError(self.testcase_pre, sys.exc_info())\n else:\n results.addSuccess(self.testcase_pre)\n finally:\n return list(iterate_spider_output(cb(response)))\n\n request.callback = wrapper\n\n return request\n\n def add_post_hook(self, request, results):\n if hasattr(self, 'post_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n output = list(iterate_spider_output(cb(response)))\n try:\n results.startTest(self.testcase_post)\n self.post_process(output)\n results.stopTest(self.testcase_post)\n except AssertionError:\n results.addFailure(self.testcase_post, sys.exc_info())\n except Exception:\n results.addError(self.testcase_post, sys.exc_info())\n else:\n results.addSuccess(self.testcase_post)\n finally:\n return output\n\n request.callback = wrapper\n\n return request\n\n def adjust_request_args(self, args):\n return args\n\n\ndef _create_testcase(method, desc):\n spider = method.__self__.name\n\n class ContractTestCase(TestCase):\n def __str__(_self):\n return \"[%s] %s (%s)\" % (spider, method.__name__, desc)\n\n name = '%s_%s' % (spider, method.__name__)\n setattr(ContractTestCase, name, lambda x: x)\n return ContractTestCase(name)\n", "path": "scrapy/contracts/__init__.py"}]}
2,283
132
gh_patches_debug_19742
rasdani/github-patches
git_diff
aws__aws-cli-1994
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Using `--no-paginate` with `aws cloudformation describe-stacks` throws error Version: `aws-cli/1.10.34 Python/2.7.9 Linux/3.16.0-4-amd64 botocore/1.4.24` When I attempt to run `aws cloudformation describe-stacks --no-paginate --region us-west-2` I receive: ``` 'Namespace' object has no attribute 'page_size' ``` This does not happen if I do not use the `--no-paginate` flag. </issue> <code> [start of awscli/customizations/paginate.py] 1 # Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"). You 4 # may not use this file except in compliance with the License. A copy of 5 # the License is located at 6 # 7 # http://aws.amazon.com/apache2.0/ 8 # 9 # or in the "license" file accompanying this file. This file is 10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF 11 # ANY KIND, either express or implied. See the License for the specific 12 # language governing permissions and limitations under the License. 13 """This module has customizations to unify paging paramters. 14 15 For any operation that can be paginated, we will: 16 17 * Hide the service specific pagination params. This can vary across 18 services and we're going to replace them with a consistent set of 19 arguments. The arguments will still work, but they are not 20 documented. This allows us to add a pagination config after 21 the fact and still remain backwards compatible with users that 22 were manually doing pagination. 23 * Add a ``--starting-token`` and a ``--max-items`` argument. 24 25 """ 26 import logging 27 from functools import partial 28 29 from botocore import xform_name 30 from botocore.exceptions import DataNotFoundError, PaginationError 31 from botocore import model 32 33 from awscli.arguments import BaseCLIArgument 34 35 36 logger = logging.getLogger(__name__) 37 38 39 STARTING_TOKEN_HELP = """ 40 <p>A token to specify where to start paginating. This is the 41 <code>NextToken</code> from a previously truncated response.</p> 42 """ 43 44 MAX_ITEMS_HELP = """ 45 <p>The total number of items to return. If the total number 46 of items available is more than the value specified in 47 max-items then a <code>NextToken</code> will 48 be provided in the output that you can use to resume pagination. 49 This <code>NextToken</code> response element should <b>not</b> be 50 used directly outside of the AWS CLI.</p> 51 """ 52 53 PAGE_SIZE_HELP = """ 54 <p>The size of each page.<p> 55 """ 56 57 58 def register_pagination(event_handlers): 59 event_handlers.register('building-argument-table', unify_paging_params) 60 event_handlers.register_last('doc-description', add_paging_description) 61 62 63 def get_paginator_config(session, service_name, operation_name): 64 try: 65 paginator_model = session.get_paginator_model(service_name) 66 except DataNotFoundError: 67 return None 68 try: 69 operation_paginator_config = paginator_model.get_paginator( 70 operation_name) 71 except ValueError: 72 return None 73 return operation_paginator_config 74 75 76 def add_paging_description(help_command, **kwargs): 77 # This customization is only applied to the description of 78 # Operations, so we must filter out all other events. 79 if not isinstance(help_command.obj, model.OperationModel): 80 return 81 service_name = help_command.obj.service_model.service_name 82 paginator_config = get_paginator_config( 83 help_command.session, service_name, help_command.obj.name) 84 if not paginator_config: 85 return 86 help_command.doc.style.new_paragraph() 87 help_command.doc.writeln( 88 ('``%s`` is a paginated operation. Multiple API calls may be issued ' 89 'in order to retrieve the entire data set of results. You can ' 90 'disable pagination by providing the ``--no-paginate`` argument.') 91 % help_command.name) 92 # Only include result key information if it is present. 93 if paginator_config.get('result_key'): 94 queries = paginator_config['result_key'] 95 if type(queries) is not list: 96 queries = [queries] 97 queries = ", ".join([('``%s``' % s) for s in queries]) 98 help_command.doc.writeln( 99 ('When using ``--output text`` and the ``--query`` argument on a ' 100 'paginated response, the ``--query`` argument must extract data ' 101 'from the results of the following query expressions: %s') 102 % queries) 103 104 105 def unify_paging_params(argument_table, operation_model, event_name, 106 session, **kwargs): 107 paginator_config = get_paginator_config( 108 session, operation_model.service_model.service_name, 109 operation_model.name) 110 if paginator_config is None: 111 # We only apply these customizations to paginated responses. 112 return 113 logger.debug("Modifying paging parameters for operation: %s", 114 operation_model.name) 115 _remove_existing_paging_arguments(argument_table, paginator_config) 116 parsed_args_event = event_name.replace('building-argument-table.', 117 'operation-args-parsed.') 118 shadowed_args = {} 119 add_paging_argument(argument_table, 'starting-token', 120 PageArgument('starting-token', STARTING_TOKEN_HELP, 121 parse_type='string', 122 serialized_name='StartingToken'), 123 shadowed_args) 124 input_members = operation_model.input_shape.members 125 type_name = 'integer' 126 if 'limit_key' in paginator_config: 127 limit_key_shape = input_members[paginator_config['limit_key']] 128 type_name = limit_key_shape.type_name 129 if type_name not in PageArgument.type_map: 130 raise TypeError( 131 ('Unsupported pagination type {0} for operation {1}' 132 ' and parameter {2}').format( 133 type_name, operation_model.name, 134 paginator_config['limit_key'])) 135 add_paging_argument(argument_table, 'page-size', 136 PageArgument('page-size', PAGE_SIZE_HELP, 137 parse_type=type_name, 138 serialized_name='PageSize'), 139 shadowed_args) 140 141 add_paging_argument(argument_table, 'max-items', 142 PageArgument('max-items', MAX_ITEMS_HELP, 143 parse_type=type_name, 144 serialized_name='MaxItems'), 145 shadowed_args) 146 session.register( 147 parsed_args_event, 148 partial(check_should_enable_pagination, 149 list(_get_all_cli_input_tokens(paginator_config)), 150 shadowed_args, argument_table)) 151 152 153 def add_paging_argument(argument_table, arg_name, argument, shadowed_args): 154 if arg_name in argument_table: 155 # If there's already an entry in the arg table for this argument, 156 # this means we're shadowing an argument for this operation. We 157 # need to store this later in case pagination is turned off because 158 # we put these arguments back. 159 # See the comment in check_should_enable_pagination() for more info. 160 shadowed_args[arg_name] = argument_table[arg_name] 161 argument_table[arg_name] = argument 162 163 164 def check_should_enable_pagination(input_tokens, shadowed_args, argument_table, 165 parsed_args, parsed_globals, **kwargs): 166 normalized_paging_args = ['start_token', 'max_items'] 167 for token in input_tokens: 168 py_name = token.replace('-', '_') 169 if getattr(parsed_args, py_name) is not None and \ 170 py_name not in normalized_paging_args: 171 # The user has specified a manual (undocumented) pagination arg. 172 # We need to automatically turn pagination off. 173 logger.debug("User has specified a manual pagination arg. " 174 "Automatically setting --no-paginate.") 175 parsed_globals.paginate = False 176 177 # Because pagination is now disabled, there's a chance that 178 # we were shadowing arguments. For example, we inject a 179 # --max-items argument in unify_paging_params(). If the 180 # the operation also provides its own MaxItems (which we 181 # expose as --max-items) then our custom pagination arg 182 # was shadowing the customers arg. When we turn pagination 183 # off we need to put back the original argument which is 184 # what we're doing here. 185 for key, value in shadowed_args.items(): 186 argument_table[key] = value 187 188 if not parsed_globals.paginate: 189 ensure_paging_params_not_set(parsed_args, shadowed_args) 190 191 192 def ensure_paging_params_not_set(parsed_args, shadowed_args): 193 paging_params = ['starting_token', 'page_size', 'max_items'] 194 shadowed_params = [p.replace('-', '_') for p in shadowed_args.keys()] 195 params_used = [p for p in paging_params if 196 p not in shadowed_params and getattr(parsed_args, p)] 197 198 if len(params_used) > 0: 199 converted_params = ', '.join( 200 ["--" + p.replace('_', '-') for p in params_used]) 201 raise PaginationError( 202 message="Cannot specify --no-paginate along with pagination " 203 "arguments: %s" % converted_params) 204 205 206 def _remove_existing_paging_arguments(argument_table, pagination_config): 207 for cli_name in _get_all_cli_input_tokens(pagination_config): 208 argument_table[cli_name]._UNDOCUMENTED = True 209 210 211 def _get_all_cli_input_tokens(pagination_config): 212 # Get all input tokens including the limit_key 213 # if it exists. 214 tokens = _get_input_tokens(pagination_config) 215 for token_name in tokens: 216 cli_name = xform_name(token_name, '-') 217 yield cli_name 218 if 'limit_key' in pagination_config: 219 key_name = pagination_config['limit_key'] 220 cli_name = xform_name(key_name, '-') 221 yield cli_name 222 223 224 def _get_input_tokens(pagination_config): 225 tokens = pagination_config['input_token'] 226 if not isinstance(tokens, list): 227 return [tokens] 228 return tokens 229 230 231 def _get_cli_name(param_objects, token_name): 232 for param in param_objects: 233 if param.name == token_name: 234 return param.cli_name.lstrip('-') 235 236 237 class PageArgument(BaseCLIArgument): 238 type_map = { 239 'string': str, 240 'integer': int, 241 } 242 243 def __init__(self, name, documentation, parse_type, serialized_name): 244 self.argument_model = model.Shape('PageArgument', {'type': 'string'}) 245 self._name = name 246 self._serialized_name = serialized_name 247 self._documentation = documentation 248 self._parse_type = parse_type 249 self._required = False 250 251 @property 252 def cli_name(self): 253 return '--' + self._name 254 255 @property 256 def cli_type_name(self): 257 return self._parse_type 258 259 @property 260 def required(self): 261 return self._required 262 263 @required.setter 264 def required(self, value): 265 self._required = value 266 267 @property 268 def documentation(self): 269 return self._documentation 270 271 def add_to_parser(self, parser): 272 parser.add_argument(self.cli_name, dest=self.py_name, 273 type=self.type_map[self._parse_type]) 274 275 def add_to_params(self, parameters, value): 276 if value is not None: 277 pagination_config = parameters.get('PaginationConfig', {}) 278 pagination_config[self._serialized_name] = value 279 parameters['PaginationConfig'] = pagination_config 280 [end of awscli/customizations/paginate.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py --- a/awscli/customizations/paginate.py +++ b/awscli/customizations/paginate.py @@ -184,7 +184,7 @@ # what we're doing here. for key, value in shadowed_args.items(): argument_table[key] = value - + if not parsed_globals.paginate: ensure_paging_params_not_set(parsed_args, shadowed_args) @@ -193,7 +193,7 @@ paging_params = ['starting_token', 'page_size', 'max_items'] shadowed_params = [p.replace('-', '_') for p in shadowed_args.keys()] params_used = [p for p in paging_params if - p not in shadowed_params and getattr(parsed_args, p)] + p not in shadowed_params and getattr(parsed_args, p, None)] if len(params_used) > 0: converted_params = ', '.join(
{"golden_diff": "diff --git a/awscli/customizations/paginate.py b/awscli/customizations/paginate.py\n--- a/awscli/customizations/paginate.py\n+++ b/awscli/customizations/paginate.py\n@@ -184,7 +184,7 @@\n # what we're doing here.\n for key, value in shadowed_args.items():\n argument_table[key] = value\n- \n+\n if not parsed_globals.paginate:\n ensure_paging_params_not_set(parsed_args, shadowed_args)\n \n@@ -193,7 +193,7 @@\n paging_params = ['starting_token', 'page_size', 'max_items']\n shadowed_params = [p.replace('-', '_') for p in shadowed_args.keys()]\n params_used = [p for p in paging_params if\n- p not in shadowed_params and getattr(parsed_args, p)]\n+ p not in shadowed_params and getattr(parsed_args, p, None)]\n \n if len(params_used) > 0:\n converted_params = ', '.join(\n", "issue": "Using `--no-paginate` with `aws cloudformation describe-stacks` throws error\nVersion: `aws-cli/1.10.34 Python/2.7.9 Linux/3.16.0-4-amd64 botocore/1.4.24`\n\nWhen I attempt to run `aws cloudformation describe-stacks --no-paginate --region us-west-2` I receive:\n\n```\n'Namespace' object has no attribute 'page_size'\n```\n\nThis does not happen if I do not use the `--no-paginate` flag.\n\n", "before_files": [{"content": "# Copyright 2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\n\"\"\"This module has customizations to unify paging paramters.\n\nFor any operation that can be paginated, we will:\n\n * Hide the service specific pagination params. This can vary across\n services and we're going to replace them with a consistent set of\n arguments. The arguments will still work, but they are not\n documented. This allows us to add a pagination config after\n the fact and still remain backwards compatible with users that\n were manually doing pagination.\n * Add a ``--starting-token`` and a ``--max-items`` argument.\n\n\"\"\"\nimport logging\nfrom functools import partial\n\nfrom botocore import xform_name\nfrom botocore.exceptions import DataNotFoundError, PaginationError\nfrom botocore import model\n\nfrom awscli.arguments import BaseCLIArgument\n\n\nlogger = logging.getLogger(__name__)\n\n\nSTARTING_TOKEN_HELP = \"\"\"\n<p>A token to specify where to start paginating. This is the\n<code>NextToken</code> from a previously truncated response.</p>\n\"\"\"\n\nMAX_ITEMS_HELP = \"\"\"\n<p>The total number of items to return. If the total number\nof items available is more than the value specified in\nmax-items then a <code>NextToken</code> will\nbe provided in the output that you can use to resume pagination.\nThis <code>NextToken</code> response element should <b>not</b> be\nused directly outside of the AWS CLI.</p>\n\"\"\"\n\nPAGE_SIZE_HELP = \"\"\"\n<p>The size of each page.<p>\n\"\"\"\n\n\ndef register_pagination(event_handlers):\n event_handlers.register('building-argument-table', unify_paging_params)\n event_handlers.register_last('doc-description', add_paging_description)\n\n\ndef get_paginator_config(session, service_name, operation_name):\n try:\n paginator_model = session.get_paginator_model(service_name)\n except DataNotFoundError:\n return None\n try:\n operation_paginator_config = paginator_model.get_paginator(\n operation_name)\n except ValueError:\n return None\n return operation_paginator_config\n\n\ndef add_paging_description(help_command, **kwargs):\n # This customization is only applied to the description of\n # Operations, so we must filter out all other events.\n if not isinstance(help_command.obj, model.OperationModel):\n return\n service_name = help_command.obj.service_model.service_name\n paginator_config = get_paginator_config(\n help_command.session, service_name, help_command.obj.name)\n if not paginator_config:\n return\n help_command.doc.style.new_paragraph()\n help_command.doc.writeln(\n ('``%s`` is a paginated operation. Multiple API calls may be issued '\n 'in order to retrieve the entire data set of results. You can '\n 'disable pagination by providing the ``--no-paginate`` argument.')\n % help_command.name)\n # Only include result key information if it is present.\n if paginator_config.get('result_key'):\n queries = paginator_config['result_key']\n if type(queries) is not list:\n queries = [queries]\n queries = \", \".join([('``%s``' % s) for s in queries])\n help_command.doc.writeln(\n ('When using ``--output text`` and the ``--query`` argument on a '\n 'paginated response, the ``--query`` argument must extract data '\n 'from the results of the following query expressions: %s')\n % queries)\n\n\ndef unify_paging_params(argument_table, operation_model, event_name,\n session, **kwargs):\n paginator_config = get_paginator_config(\n session, operation_model.service_model.service_name,\n operation_model.name)\n if paginator_config is None:\n # We only apply these customizations to paginated responses.\n return\n logger.debug(\"Modifying paging parameters for operation: %s\",\n operation_model.name)\n _remove_existing_paging_arguments(argument_table, paginator_config)\n parsed_args_event = event_name.replace('building-argument-table.',\n 'operation-args-parsed.')\n shadowed_args = {}\n add_paging_argument(argument_table, 'starting-token',\n PageArgument('starting-token', STARTING_TOKEN_HELP,\n parse_type='string',\n serialized_name='StartingToken'),\n shadowed_args)\n input_members = operation_model.input_shape.members\n type_name = 'integer'\n if 'limit_key' in paginator_config:\n limit_key_shape = input_members[paginator_config['limit_key']]\n type_name = limit_key_shape.type_name\n if type_name not in PageArgument.type_map:\n raise TypeError(\n ('Unsupported pagination type {0} for operation {1}'\n ' and parameter {2}').format(\n type_name, operation_model.name,\n paginator_config['limit_key']))\n add_paging_argument(argument_table, 'page-size',\n PageArgument('page-size', PAGE_SIZE_HELP,\n parse_type=type_name,\n serialized_name='PageSize'),\n shadowed_args)\n\n add_paging_argument(argument_table, 'max-items',\n PageArgument('max-items', MAX_ITEMS_HELP,\n parse_type=type_name,\n serialized_name='MaxItems'),\n shadowed_args)\n session.register(\n parsed_args_event,\n partial(check_should_enable_pagination,\n list(_get_all_cli_input_tokens(paginator_config)),\n shadowed_args, argument_table))\n\n\ndef add_paging_argument(argument_table, arg_name, argument, shadowed_args):\n if arg_name in argument_table:\n # If there's already an entry in the arg table for this argument,\n # this means we're shadowing an argument for this operation. We\n # need to store this later in case pagination is turned off because\n # we put these arguments back.\n # See the comment in check_should_enable_pagination() for more info.\n shadowed_args[arg_name] = argument_table[arg_name]\n argument_table[arg_name] = argument\n\n\ndef check_should_enable_pagination(input_tokens, shadowed_args, argument_table,\n parsed_args, parsed_globals, **kwargs):\n normalized_paging_args = ['start_token', 'max_items']\n for token in input_tokens:\n py_name = token.replace('-', '_')\n if getattr(parsed_args, py_name) is not None and \\\n py_name not in normalized_paging_args:\n # The user has specified a manual (undocumented) pagination arg.\n # We need to automatically turn pagination off.\n logger.debug(\"User has specified a manual pagination arg. \"\n \"Automatically setting --no-paginate.\")\n parsed_globals.paginate = False\n\n # Because pagination is now disabled, there's a chance that\n # we were shadowing arguments. For example, we inject a\n # --max-items argument in unify_paging_params(). If the\n # the operation also provides its own MaxItems (which we\n # expose as --max-items) then our custom pagination arg\n # was shadowing the customers arg. When we turn pagination\n # off we need to put back the original argument which is\n # what we're doing here.\n for key, value in shadowed_args.items():\n argument_table[key] = value\n \n if not parsed_globals.paginate:\n ensure_paging_params_not_set(parsed_args, shadowed_args)\n\n\ndef ensure_paging_params_not_set(parsed_args, shadowed_args):\n paging_params = ['starting_token', 'page_size', 'max_items']\n shadowed_params = [p.replace('-', '_') for p in shadowed_args.keys()]\n params_used = [p for p in paging_params if\n p not in shadowed_params and getattr(parsed_args, p)]\n\n if len(params_used) > 0:\n converted_params = ', '.join(\n [\"--\" + p.replace('_', '-') for p in params_used])\n raise PaginationError(\n message=\"Cannot specify --no-paginate along with pagination \"\n \"arguments: %s\" % converted_params)\n\n\ndef _remove_existing_paging_arguments(argument_table, pagination_config):\n for cli_name in _get_all_cli_input_tokens(pagination_config):\n argument_table[cli_name]._UNDOCUMENTED = True\n\n\ndef _get_all_cli_input_tokens(pagination_config):\n # Get all input tokens including the limit_key\n # if it exists.\n tokens = _get_input_tokens(pagination_config)\n for token_name in tokens:\n cli_name = xform_name(token_name, '-')\n yield cli_name\n if 'limit_key' in pagination_config:\n key_name = pagination_config['limit_key']\n cli_name = xform_name(key_name, '-')\n yield cli_name\n\n\ndef _get_input_tokens(pagination_config):\n tokens = pagination_config['input_token']\n if not isinstance(tokens, list):\n return [tokens]\n return tokens\n\n\ndef _get_cli_name(param_objects, token_name):\n for param in param_objects:\n if param.name == token_name:\n return param.cli_name.lstrip('-')\n\n\nclass PageArgument(BaseCLIArgument):\n type_map = {\n 'string': str,\n 'integer': int,\n }\n\n def __init__(self, name, documentation, parse_type, serialized_name):\n self.argument_model = model.Shape('PageArgument', {'type': 'string'})\n self._name = name\n self._serialized_name = serialized_name\n self._documentation = documentation\n self._parse_type = parse_type\n self._required = False\n\n @property\n def cli_name(self):\n return '--' + self._name\n\n @property\n def cli_type_name(self):\n return self._parse_type\n\n @property\n def required(self):\n return self._required\n\n @required.setter\n def required(self, value):\n self._required = value\n\n @property\n def documentation(self):\n return self._documentation\n\n def add_to_parser(self, parser):\n parser.add_argument(self.cli_name, dest=self.py_name,\n type=self.type_map[self._parse_type])\n\n def add_to_params(self, parameters, value):\n if value is not None:\n pagination_config = parameters.get('PaginationConfig', {})\n pagination_config[self._serialized_name] = value\n parameters['PaginationConfig'] = pagination_config\n", "path": "awscli/customizations/paginate.py"}]}
3,710
224
gh_patches_debug_3701
rasdani/github-patches
git_diff
huggingface__transformers-10531
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Typo in deberta_v2/__init__.py https://github.com/huggingface/transformers/blob/c503a1c15ec1b11e69a3eaaf06edfa87c05a2849/src/transformers/models/deberta_v2/__init__.py#L31 Should be '' DEBERTA_V2_PRETRAINED_MODEL_ARCHIVE_LIST ''. </issue> <code> [start of src/transformers/models/deberta_v2/__init__.py] 1 # flake8: noqa 2 # There's no way to ignore "F401 '...' imported but unused" warnings in this 3 # module, but to preserve other warnings. So, don't check this module at all. 4 5 # Copyright 2020 The HuggingFace Team. All rights reserved. 6 # 7 # Licensed under the Apache License, Version 2.0 (the "License"); 8 # you may not use this file except in compliance with the License. 9 # You may obtain a copy of the License at 10 # 11 # http://www.apache.org/licenses/LICENSE-2.0 12 # 13 # Unless required by applicable law or agreed to in writing, software 14 # distributed under the License is distributed on an "AS IS" BASIS, 15 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 16 # See the License for the specific language governing permissions and 17 # limitations under the License. 18 19 from typing import TYPE_CHECKING 20 21 from ...file_utils import _BaseLazyModule, is_torch_available 22 23 24 _import_structure = { 25 "configuration_deberta_v2": ["DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP", "DebertaV2Config"], 26 "tokenization_deberta_v2": ["DebertaV2Tokenizer"], 27 } 28 29 if is_torch_available(): 30 _import_structure["modeling_deberta_v2"] = [ 31 "DEBERTA_PRETRAINED_MODEL_ARCHIVE_LIST", 32 "DebertaV2ForSequenceClassification", 33 "DebertaV2Model", 34 "DebertaV2ForMaskedLM", 35 "DebertaV2PreTrainedModel", 36 "DebertaV2ForTokenClassification", 37 "DebertaV2ForQuestionAnswering", 38 ] 39 40 41 if TYPE_CHECKING: 42 from .configuration_deberta_v2 import DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP, DebertaV2Config 43 from .tokenization_deberta_v2 import DebertaV2Tokenizer 44 45 if is_torch_available(): 46 from .modeling_deberta_v2 import ( 47 DEBERTA_V2_PRETRAINED_MODEL_ARCHIVE_LIST, 48 DebertaV2ForMaskedLM, 49 DebertaV2ForQuestionAnswering, 50 DebertaV2ForSequenceClassification, 51 DebertaV2ForTokenClassification, 52 DebertaV2Model, 53 DebertaV2PreTrainedModel, 54 ) 55 56 else: 57 import importlib 58 import os 59 import sys 60 61 class _LazyModule(_BaseLazyModule): 62 """ 63 Module class that surfaces all objects but only performs associated imports when the objects are requested. 64 """ 65 66 __file__ = globals()["__file__"] 67 __path__ = [os.path.dirname(__file__)] 68 69 def _get_module(self, module_name: str): 70 return importlib.import_module("." + module_name, self.__name__) 71 72 sys.modules[__name__] = _LazyModule(__name__, _import_structure) 73 [end of src/transformers/models/deberta_v2/__init__.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/src/transformers/models/deberta_v2/__init__.py b/src/transformers/models/deberta_v2/__init__.py --- a/src/transformers/models/deberta_v2/__init__.py +++ b/src/transformers/models/deberta_v2/__init__.py @@ -28,7 +28,7 @@ if is_torch_available(): _import_structure["modeling_deberta_v2"] = [ - "DEBERTA_PRETRAINED_MODEL_ARCHIVE_LIST", + "DEBERTA_V2_PRETRAINED_MODEL_ARCHIVE_LIST", "DebertaV2ForSequenceClassification", "DebertaV2Model", "DebertaV2ForMaskedLM",
{"golden_diff": "diff --git a/src/transformers/models/deberta_v2/__init__.py b/src/transformers/models/deberta_v2/__init__.py\n--- a/src/transformers/models/deberta_v2/__init__.py\n+++ b/src/transformers/models/deberta_v2/__init__.py\n@@ -28,7 +28,7 @@\n \n if is_torch_available():\n _import_structure[\"modeling_deberta_v2\"] = [\n- \"DEBERTA_PRETRAINED_MODEL_ARCHIVE_LIST\",\n+ \"DEBERTA_V2_PRETRAINED_MODEL_ARCHIVE_LIST\",\n \"DebertaV2ForSequenceClassification\",\n \"DebertaV2Model\",\n \"DebertaV2ForMaskedLM\",\n", "issue": "Typo in deberta_v2/__init__.py\nhttps://github.com/huggingface/transformers/blob/c503a1c15ec1b11e69a3eaaf06edfa87c05a2849/src/transformers/models/deberta_v2/__init__.py#L31\r\nShould be '' DEBERTA_V2_PRETRAINED_MODEL_ARCHIVE_LIST ''.\n", "before_files": [{"content": "# flake8: noqa\n# There's no way to ignore \"F401 '...' imported but unused\" warnings in this\n# module, but to preserve other warnings. So, don't check this module at all.\n\n# Copyright 2020 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import TYPE_CHECKING\n\nfrom ...file_utils import _BaseLazyModule, is_torch_available\n\n\n_import_structure = {\n \"configuration_deberta_v2\": [\"DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP\", \"DebertaV2Config\"],\n \"tokenization_deberta_v2\": [\"DebertaV2Tokenizer\"],\n}\n\nif is_torch_available():\n _import_structure[\"modeling_deberta_v2\"] = [\n \"DEBERTA_PRETRAINED_MODEL_ARCHIVE_LIST\",\n \"DebertaV2ForSequenceClassification\",\n \"DebertaV2Model\",\n \"DebertaV2ForMaskedLM\",\n \"DebertaV2PreTrainedModel\",\n \"DebertaV2ForTokenClassification\",\n \"DebertaV2ForQuestionAnswering\",\n ]\n\n\nif TYPE_CHECKING:\n from .configuration_deberta_v2 import DEBERTA_V2_PRETRAINED_CONFIG_ARCHIVE_MAP, DebertaV2Config\n from .tokenization_deberta_v2 import DebertaV2Tokenizer\n\n if is_torch_available():\n from .modeling_deberta_v2 import (\n DEBERTA_V2_PRETRAINED_MODEL_ARCHIVE_LIST,\n DebertaV2ForMaskedLM,\n DebertaV2ForQuestionAnswering,\n DebertaV2ForSequenceClassification,\n DebertaV2ForTokenClassification,\n DebertaV2Model,\n DebertaV2PreTrainedModel,\n )\n\nelse:\n import importlib\n import os\n import sys\n\n class _LazyModule(_BaseLazyModule):\n \"\"\"\n Module class that surfaces all objects but only performs associated imports when the objects are requested.\n \"\"\"\n\n __file__ = globals()[\"__file__\"]\n __path__ = [os.path.dirname(__file__)]\n\n def _get_module(self, module_name: str):\n return importlib.import_module(\".\" + module_name, self.__name__)\n\n sys.modules[__name__] = _LazyModule(__name__, _import_structure)\n", "path": "src/transformers/models/deberta_v2/__init__.py"}]}
1,423
162
gh_patches_debug_44212
rasdani/github-patches
git_diff
python-discord__bot-978
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Use scheduler for silencing I was looking through the code of [`silencer.py`](https://github.com/python-discord/bot/blob/1d3acb0a222a885c5ef42e91570a00f91976fb5b/bot/cogs/moderation/silence.py) and I found this: https://github.com/python-discord/bot/blob/1d3acb0a222a885c5ef42e91570a00f91976fb5b/bot/cogs/moderation/silence.py#L93 With this approach, in case the channel was unsilenced and re-silenced back, it might get unsilenced by this timer. This is a relatively rare occurrence as this would have to happen within the 15 minutes limit, but nevertheless it seems more logical to use `Scheduler` in here, which can be canceled when the channel is manually unsilenced, to prevent this. </issue> <code> [start of bot/cogs/moderation/silence.py] 1 import asyncio 2 import logging 3 from contextlib import suppress 4 from typing import Optional 5 6 from discord import TextChannel 7 from discord.ext import commands, tasks 8 from discord.ext.commands import Context 9 10 from bot.bot import Bot 11 from bot.constants import Channels, Emojis, Guild, MODERATION_ROLES, Roles 12 from bot.converters import HushDurationConverter 13 from bot.utils.checks import with_role_check 14 15 log = logging.getLogger(__name__) 16 17 18 class SilenceNotifier(tasks.Loop): 19 """Loop notifier for posting notices to `alert_channel` containing added channels.""" 20 21 def __init__(self, alert_channel: TextChannel): 22 super().__init__(self._notifier, seconds=1, minutes=0, hours=0, count=None, reconnect=True, loop=None) 23 self._silenced_channels = {} 24 self._alert_channel = alert_channel 25 26 def add_channel(self, channel: TextChannel) -> None: 27 """Add channel to `_silenced_channels` and start loop if not launched.""" 28 if not self._silenced_channels: 29 self.start() 30 log.info("Starting notifier loop.") 31 self._silenced_channels[channel] = self._current_loop 32 33 def remove_channel(self, channel: TextChannel) -> None: 34 """Remove channel from `_silenced_channels` and stop loop if no channels remain.""" 35 with suppress(KeyError): 36 del self._silenced_channels[channel] 37 if not self._silenced_channels: 38 self.stop() 39 log.info("Stopping notifier loop.") 40 41 async def _notifier(self) -> None: 42 """Post notice of `_silenced_channels` with their silenced duration to `_alert_channel` periodically.""" 43 # Wait for 15 minutes between notices with pause at start of loop. 44 if self._current_loop and not self._current_loop/60 % 15: 45 log.debug( 46 f"Sending notice with channels: " 47 f"{', '.join(f'#{channel} ({channel.id})' for channel in self._silenced_channels)}." 48 ) 49 channels_text = ', '.join( 50 f"{channel.mention} for {(self._current_loop-start)//60} min" 51 for channel, start in self._silenced_channels.items() 52 ) 53 await self._alert_channel.send(f"<@&{Roles.moderators}> currently silenced channels: {channels_text}") 54 55 56 class Silence(commands.Cog): 57 """Commands for stopping channel messages for `verified` role in a channel.""" 58 59 def __init__(self, bot: Bot): 60 self.bot = bot 61 self.muted_channels = set() 62 self._get_instance_vars_task = self.bot.loop.create_task(self._get_instance_vars()) 63 self._get_instance_vars_event = asyncio.Event() 64 65 async def _get_instance_vars(self) -> None: 66 """Get instance variables after they're available to get from the guild.""" 67 await self.bot.wait_until_guild_available() 68 guild = self.bot.get_guild(Guild.id) 69 self._verified_role = guild.get_role(Roles.verified) 70 self._mod_alerts_channel = self.bot.get_channel(Channels.mod_alerts) 71 self._mod_log_channel = self.bot.get_channel(Channels.mod_log) 72 self.notifier = SilenceNotifier(self._mod_log_channel) 73 self._get_instance_vars_event.set() 74 75 @commands.command(aliases=("hush",)) 76 async def silence(self, ctx: Context, duration: HushDurationConverter = 10) -> None: 77 """ 78 Silence the current channel for `duration` minutes or `forever`. 79 80 Duration is capped at 15 minutes, passing forever makes the silence indefinite. 81 Indefinitely silenced channels get added to a notifier which posts notices every 15 minutes from the start. 82 """ 83 await self._get_instance_vars_event.wait() 84 log.debug(f"{ctx.author} is silencing channel #{ctx.channel}.") 85 if not await self._silence(ctx.channel, persistent=(duration is None), duration=duration): 86 await ctx.send(f"{Emojis.cross_mark} current channel is already silenced.") 87 return 88 if duration is None: 89 await ctx.send(f"{Emojis.check_mark} silenced current channel indefinitely.") 90 return 91 92 await ctx.send(f"{Emojis.check_mark} silenced current channel for {duration} minute(s).") 93 await asyncio.sleep(duration*60) 94 log.info("Unsilencing channel after set delay.") 95 await ctx.invoke(self.unsilence) 96 97 @commands.command(aliases=("unhush",)) 98 async def unsilence(self, ctx: Context) -> None: 99 """ 100 Unsilence the current channel. 101 102 If the channel was silenced indefinitely, notifications for the channel will stop. 103 """ 104 await self._get_instance_vars_event.wait() 105 log.debug(f"Unsilencing channel #{ctx.channel} from {ctx.author}'s command.") 106 if await self._unsilence(ctx.channel): 107 await ctx.send(f"{Emojis.check_mark} unsilenced current channel.") 108 109 async def _silence(self, channel: TextChannel, persistent: bool, duration: Optional[int]) -> bool: 110 """ 111 Silence `channel` for `self._verified_role`. 112 113 If `persistent` is `True` add `channel` to notifier. 114 `duration` is only used for logging; if None is passed `persistent` should be True to not log None. 115 Return `True` if channel permissions were changed, `False` otherwise. 116 """ 117 current_overwrite = channel.overwrites_for(self._verified_role) 118 if current_overwrite.send_messages is False: 119 log.info(f"Tried to silence channel #{channel} ({channel.id}) but the channel was already silenced.") 120 return False 121 await channel.set_permissions(self._verified_role, **dict(current_overwrite, send_messages=False)) 122 self.muted_channels.add(channel) 123 if persistent: 124 log.info(f"Silenced #{channel} ({channel.id}) indefinitely.") 125 self.notifier.add_channel(channel) 126 return True 127 128 log.info(f"Silenced #{channel} ({channel.id}) for {duration} minute(s).") 129 return True 130 131 async def _unsilence(self, channel: TextChannel) -> bool: 132 """ 133 Unsilence `channel`. 134 135 Check if `channel` is silenced through a `PermissionOverwrite`, 136 if it is unsilence it and remove it from the notifier. 137 Return `True` if channel permissions were changed, `False` otherwise. 138 """ 139 current_overwrite = channel.overwrites_for(self._verified_role) 140 if current_overwrite.send_messages is False: 141 await channel.set_permissions(self._verified_role, **dict(current_overwrite, send_messages=None)) 142 log.info(f"Unsilenced channel #{channel} ({channel.id}).") 143 self.notifier.remove_channel(channel) 144 self.muted_channels.discard(channel) 145 return True 146 log.info(f"Tried to unsilence channel #{channel} ({channel.id}) but the channel was not silenced.") 147 return False 148 149 def cog_unload(self) -> None: 150 """Send alert with silenced channels on unload.""" 151 if self.muted_channels: 152 channels_string = ''.join(channel.mention for channel in self.muted_channels) 153 message = f"<@&{Roles.moderators}> channels left silenced on cog unload: {channels_string}" 154 asyncio.create_task(self._mod_alerts_channel.send(message)) 155 156 # This cannot be static (must have a __func__ attribute). 157 def cog_check(self, ctx: Context) -> bool: 158 """Only allow moderators to invoke the commands in this cog.""" 159 return with_role_check(ctx, *MODERATION_ROLES) 160 [end of bot/cogs/moderation/silence.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/bot/cogs/moderation/silence.py b/bot/cogs/moderation/silence.py --- a/bot/cogs/moderation/silence.py +++ b/bot/cogs/moderation/silence.py @@ -1,7 +1,7 @@ import asyncio import logging from contextlib import suppress -from typing import Optional +from typing import NamedTuple, Optional from discord import TextChannel from discord.ext import commands, tasks @@ -11,10 +11,18 @@ from bot.constants import Channels, Emojis, Guild, MODERATION_ROLES, Roles from bot.converters import HushDurationConverter from bot.utils.checks import with_role_check +from bot.utils.scheduling import Scheduler log = logging.getLogger(__name__) +class TaskData(NamedTuple): + """Data for a scheduled task.""" + + delay: int + ctx: Context + + class SilenceNotifier(tasks.Loop): """Loop notifier for posting notices to `alert_channel` containing added channels.""" @@ -53,15 +61,25 @@ await self._alert_channel.send(f"<@&{Roles.moderators}> currently silenced channels: {channels_text}") -class Silence(commands.Cog): +class Silence(Scheduler, commands.Cog): """Commands for stopping channel messages for `verified` role in a channel.""" def __init__(self, bot: Bot): + super().__init__() self.bot = bot self.muted_channels = set() self._get_instance_vars_task = self.bot.loop.create_task(self._get_instance_vars()) self._get_instance_vars_event = asyncio.Event() + async def _scheduled_task(self, task: TaskData) -> None: + """Calls `self.unsilence` on expired silenced channel to unsilence it.""" + await asyncio.sleep(task.delay) + log.info("Unsilencing channel after set delay.") + + # Because `self.unsilence` explicitly cancels this scheduled task, it is shielded + # to avoid prematurely cancelling itself + await asyncio.shield(task.ctx.invoke(self.unsilence)) + async def _get_instance_vars(self) -> None: """Get instance variables after they're available to get from the guild.""" await self.bot.wait_until_guild_available() @@ -90,9 +108,13 @@ return await ctx.send(f"{Emojis.check_mark} silenced current channel for {duration} minute(s).") - await asyncio.sleep(duration*60) - log.info("Unsilencing channel after set delay.") - await ctx.invoke(self.unsilence) + + task_data = TaskData( + delay=duration*60, + ctx=ctx + ) + + self.schedule_task(ctx.channel.id, task_data) @commands.command(aliases=("unhush",)) async def unsilence(self, ctx: Context) -> None: @@ -103,7 +125,9 @@ """ await self._get_instance_vars_event.wait() log.debug(f"Unsilencing channel #{ctx.channel} from {ctx.author}'s command.") - if await self._unsilence(ctx.channel): + if not await self._unsilence(ctx.channel): + await ctx.send(f"{Emojis.cross_mark} current channel was not silenced.") + else: await ctx.send(f"{Emojis.check_mark} unsilenced current channel.") async def _silence(self, channel: TextChannel, persistent: bool, duration: Optional[int]) -> bool: @@ -140,6 +164,7 @@ if current_overwrite.send_messages is False: await channel.set_permissions(self._verified_role, **dict(current_overwrite, send_messages=None)) log.info(f"Unsilenced channel #{channel} ({channel.id}).") + self.cancel_task(channel.id) self.notifier.remove_channel(channel) self.muted_channels.discard(channel) return True
{"golden_diff": "diff --git a/bot/cogs/moderation/silence.py b/bot/cogs/moderation/silence.py\n--- a/bot/cogs/moderation/silence.py\n+++ b/bot/cogs/moderation/silence.py\n@@ -1,7 +1,7 @@\n import asyncio\n import logging\n from contextlib import suppress\n-from typing import Optional\n+from typing import NamedTuple, Optional\n \n from discord import TextChannel\n from discord.ext import commands, tasks\n@@ -11,10 +11,18 @@\n from bot.constants import Channels, Emojis, Guild, MODERATION_ROLES, Roles\n from bot.converters import HushDurationConverter\n from bot.utils.checks import with_role_check\n+from bot.utils.scheduling import Scheduler\n \n log = logging.getLogger(__name__)\n \n \n+class TaskData(NamedTuple):\n+ \"\"\"Data for a scheduled task.\"\"\"\n+\n+ delay: int\n+ ctx: Context\n+\n+\n class SilenceNotifier(tasks.Loop):\n \"\"\"Loop notifier for posting notices to `alert_channel` containing added channels.\"\"\"\n \n@@ -53,15 +61,25 @@\n await self._alert_channel.send(f\"<@&{Roles.moderators}> currently silenced channels: {channels_text}\")\n \n \n-class Silence(commands.Cog):\n+class Silence(Scheduler, commands.Cog):\n \"\"\"Commands for stopping channel messages for `verified` role in a channel.\"\"\"\n \n def __init__(self, bot: Bot):\n+ super().__init__()\n self.bot = bot\n self.muted_channels = set()\n self._get_instance_vars_task = self.bot.loop.create_task(self._get_instance_vars())\n self._get_instance_vars_event = asyncio.Event()\n \n+ async def _scheduled_task(self, task: TaskData) -> None:\n+ \"\"\"Calls `self.unsilence` on expired silenced channel to unsilence it.\"\"\"\n+ await asyncio.sleep(task.delay)\n+ log.info(\"Unsilencing channel after set delay.\")\n+\n+ # Because `self.unsilence` explicitly cancels this scheduled task, it is shielded\n+ # to avoid prematurely cancelling itself\n+ await asyncio.shield(task.ctx.invoke(self.unsilence))\n+\n async def _get_instance_vars(self) -> None:\n \"\"\"Get instance variables after they're available to get from the guild.\"\"\"\n await self.bot.wait_until_guild_available()\n@@ -90,9 +108,13 @@\n return\n \n await ctx.send(f\"{Emojis.check_mark} silenced current channel for {duration} minute(s).\")\n- await asyncio.sleep(duration*60)\n- log.info(\"Unsilencing channel after set delay.\")\n- await ctx.invoke(self.unsilence)\n+\n+ task_data = TaskData(\n+ delay=duration*60,\n+ ctx=ctx\n+ )\n+\n+ self.schedule_task(ctx.channel.id, task_data)\n \n @commands.command(aliases=(\"unhush\",))\n async def unsilence(self, ctx: Context) -> None:\n@@ -103,7 +125,9 @@\n \"\"\"\n await self._get_instance_vars_event.wait()\n log.debug(f\"Unsilencing channel #{ctx.channel} from {ctx.author}'s command.\")\n- if await self._unsilence(ctx.channel):\n+ if not await self._unsilence(ctx.channel):\n+ await ctx.send(f\"{Emojis.cross_mark} current channel was not silenced.\")\n+ else:\n await ctx.send(f\"{Emojis.check_mark} unsilenced current channel.\")\n \n async def _silence(self, channel: TextChannel, persistent: bool, duration: Optional[int]) -> bool:\n@@ -140,6 +164,7 @@\n if current_overwrite.send_messages is False:\n await channel.set_permissions(self._verified_role, **dict(current_overwrite, send_messages=None))\n log.info(f\"Unsilenced channel #{channel} ({channel.id}).\")\n+ self.cancel_task(channel.id)\n self.notifier.remove_channel(channel)\n self.muted_channels.discard(channel)\n return True\n", "issue": "Use scheduler for silencing\nI was looking through the code of [`silencer.py`](https://github.com/python-discord/bot/blob/1d3acb0a222a885c5ef42e91570a00f91976fb5b/bot/cogs/moderation/silence.py) and I found this:\r\n\r\nhttps://github.com/python-discord/bot/blob/1d3acb0a222a885c5ef42e91570a00f91976fb5b/bot/cogs/moderation/silence.py#L93\r\n\r\nWith this approach, in case the channel was unsilenced and re-silenced back, it might get unsilenced by this timer. This is a relatively rare occurrence as this would have to happen within the 15 minutes limit, but nevertheless it seems more logical to use `Scheduler` in here, which can be canceled when the channel is manually unsilenced, to prevent this.\n", "before_files": [{"content": "import asyncio\nimport logging\nfrom contextlib import suppress\nfrom typing import Optional\n\nfrom discord import TextChannel\nfrom discord.ext import commands, tasks\nfrom discord.ext.commands import Context\n\nfrom bot.bot import Bot\nfrom bot.constants import Channels, Emojis, Guild, MODERATION_ROLES, Roles\nfrom bot.converters import HushDurationConverter\nfrom bot.utils.checks import with_role_check\n\nlog = logging.getLogger(__name__)\n\n\nclass SilenceNotifier(tasks.Loop):\n \"\"\"Loop notifier for posting notices to `alert_channel` containing added channels.\"\"\"\n\n def __init__(self, alert_channel: TextChannel):\n super().__init__(self._notifier, seconds=1, minutes=0, hours=0, count=None, reconnect=True, loop=None)\n self._silenced_channels = {}\n self._alert_channel = alert_channel\n\n def add_channel(self, channel: TextChannel) -> None:\n \"\"\"Add channel to `_silenced_channels` and start loop if not launched.\"\"\"\n if not self._silenced_channels:\n self.start()\n log.info(\"Starting notifier loop.\")\n self._silenced_channels[channel] = self._current_loop\n\n def remove_channel(self, channel: TextChannel) -> None:\n \"\"\"Remove channel from `_silenced_channels` and stop loop if no channels remain.\"\"\"\n with suppress(KeyError):\n del self._silenced_channels[channel]\n if not self._silenced_channels:\n self.stop()\n log.info(\"Stopping notifier loop.\")\n\n async def _notifier(self) -> None:\n \"\"\"Post notice of `_silenced_channels` with their silenced duration to `_alert_channel` periodically.\"\"\"\n # Wait for 15 minutes between notices with pause at start of loop.\n if self._current_loop and not self._current_loop/60 % 15:\n log.debug(\n f\"Sending notice with channels: \"\n f\"{', '.join(f'#{channel} ({channel.id})' for channel in self._silenced_channels)}.\"\n )\n channels_text = ', '.join(\n f\"{channel.mention} for {(self._current_loop-start)//60} min\"\n for channel, start in self._silenced_channels.items()\n )\n await self._alert_channel.send(f\"<@&{Roles.moderators}> currently silenced channels: {channels_text}\")\n\n\nclass Silence(commands.Cog):\n \"\"\"Commands for stopping channel messages for `verified` role in a channel.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n self.muted_channels = set()\n self._get_instance_vars_task = self.bot.loop.create_task(self._get_instance_vars())\n self._get_instance_vars_event = asyncio.Event()\n\n async def _get_instance_vars(self) -> None:\n \"\"\"Get instance variables after they're available to get from the guild.\"\"\"\n await self.bot.wait_until_guild_available()\n guild = self.bot.get_guild(Guild.id)\n self._verified_role = guild.get_role(Roles.verified)\n self._mod_alerts_channel = self.bot.get_channel(Channels.mod_alerts)\n self._mod_log_channel = self.bot.get_channel(Channels.mod_log)\n self.notifier = SilenceNotifier(self._mod_log_channel)\n self._get_instance_vars_event.set()\n\n @commands.command(aliases=(\"hush\",))\n async def silence(self, ctx: Context, duration: HushDurationConverter = 10) -> None:\n \"\"\"\n Silence the current channel for `duration` minutes or `forever`.\n\n Duration is capped at 15 minutes, passing forever makes the silence indefinite.\n Indefinitely silenced channels get added to a notifier which posts notices every 15 minutes from the start.\n \"\"\"\n await self._get_instance_vars_event.wait()\n log.debug(f\"{ctx.author} is silencing channel #{ctx.channel}.\")\n if not await self._silence(ctx.channel, persistent=(duration is None), duration=duration):\n await ctx.send(f\"{Emojis.cross_mark} current channel is already silenced.\")\n return\n if duration is None:\n await ctx.send(f\"{Emojis.check_mark} silenced current channel indefinitely.\")\n return\n\n await ctx.send(f\"{Emojis.check_mark} silenced current channel for {duration} minute(s).\")\n await asyncio.sleep(duration*60)\n log.info(\"Unsilencing channel after set delay.\")\n await ctx.invoke(self.unsilence)\n\n @commands.command(aliases=(\"unhush\",))\n async def unsilence(self, ctx: Context) -> None:\n \"\"\"\n Unsilence the current channel.\n\n If the channel was silenced indefinitely, notifications for the channel will stop.\n \"\"\"\n await self._get_instance_vars_event.wait()\n log.debug(f\"Unsilencing channel #{ctx.channel} from {ctx.author}'s command.\")\n if await self._unsilence(ctx.channel):\n await ctx.send(f\"{Emojis.check_mark} unsilenced current channel.\")\n\n async def _silence(self, channel: TextChannel, persistent: bool, duration: Optional[int]) -> bool:\n \"\"\"\n Silence `channel` for `self._verified_role`.\n\n If `persistent` is `True` add `channel` to notifier.\n `duration` is only used for logging; if None is passed `persistent` should be True to not log None.\n Return `True` if channel permissions were changed, `False` otherwise.\n \"\"\"\n current_overwrite = channel.overwrites_for(self._verified_role)\n if current_overwrite.send_messages is False:\n log.info(f\"Tried to silence channel #{channel} ({channel.id}) but the channel was already silenced.\")\n return False\n await channel.set_permissions(self._verified_role, **dict(current_overwrite, send_messages=False))\n self.muted_channels.add(channel)\n if persistent:\n log.info(f\"Silenced #{channel} ({channel.id}) indefinitely.\")\n self.notifier.add_channel(channel)\n return True\n\n log.info(f\"Silenced #{channel} ({channel.id}) for {duration} minute(s).\")\n return True\n\n async def _unsilence(self, channel: TextChannel) -> bool:\n \"\"\"\n Unsilence `channel`.\n\n Check if `channel` is silenced through a `PermissionOverwrite`,\n if it is unsilence it and remove it from the notifier.\n Return `True` if channel permissions were changed, `False` otherwise.\n \"\"\"\n current_overwrite = channel.overwrites_for(self._verified_role)\n if current_overwrite.send_messages is False:\n await channel.set_permissions(self._verified_role, **dict(current_overwrite, send_messages=None))\n log.info(f\"Unsilenced channel #{channel} ({channel.id}).\")\n self.notifier.remove_channel(channel)\n self.muted_channels.discard(channel)\n return True\n log.info(f\"Tried to unsilence channel #{channel} ({channel.id}) but the channel was not silenced.\")\n return False\n\n def cog_unload(self) -> None:\n \"\"\"Send alert with silenced channels on unload.\"\"\"\n if self.muted_channels:\n channels_string = ''.join(channel.mention for channel in self.muted_channels)\n message = f\"<@&{Roles.moderators}> channels left silenced on cog unload: {channels_string}\"\n asyncio.create_task(self._mod_alerts_channel.send(message))\n\n # This cannot be static (must have a __func__ attribute).\n def cog_check(self, ctx: Context) -> bool:\n \"\"\"Only allow moderators to invoke the commands in this cog.\"\"\"\n return with_role_check(ctx, *MODERATION_ROLES)\n", "path": "bot/cogs/moderation/silence.py"}]}
2,752
872
gh_patches_debug_12268
rasdani/github-patches
git_diff
ManimCommunity__manim-1363
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Suggestion to improve user experience w.r.t InCodeTexTemplate example ## Enhancement proposal At present on macOS (with MacTex latest or even older installations) if you run InCodeTexTemplate from example_scenes/customtex.py it would fail for quite a lot of people with the following error ![dvisvgm](https://user-images.githubusercontent.com/127006/113961056-14fa7880-97eb-11eb-82b9-db34c4115d0f.png) As you can see that I have version 2.11. The error message suggests having version 2.4 or greater. Unfortunately here the issue is not because of the version rather it is because of `ghostscript` installation which is required by `dvisvgm`. Interesting I do have the `ghostscript` installed on my machine yet `dvisvgm` is not able to see. In order for `dvisvgm` to see it, I have to make sure that a certain environment variable is set. See image below: ![Screenink_1](https://user-images.githubusercontent.com/127006/113961559-ddd89700-97eb-11eb-8c04-6bb8c5a5f845.png) After this, the `example_scenes/customtex.py` would work fine. I have described the resolution (which could be classified as an installation issue) in case others are facing it. The issue with Ghostscript and mactex is not unique to manim hence should not be considered a bug related to manim. That said, the error message is a bit misleading and handling errors properly is always a challenging problem in s/w engineering. That said, my suggestion here is to modify the `example_scenes/customtex.py` to not advertise the `pdf` output. Here is the snippet from the example: ``` class InCodeTexTemplate(Scene): """This example scene demonstrates how to modify the tex template for a particular scene from the code for the scene itself. """ def construct(self): # Create a new template myTemplate = TexTemplate() # Add packages to the template myTemplate.add_to_preamble(r"\usepackage{esvect}") # Set the compiler and output format (default: latex and .dvi) # possible tex compilers: "latex", "pdflatex", "xelatex", "lualatex", "luatex" # possible output formats: ".dvi", ".pdf", and ".xdv" myTemplate.tex_compiler = "pdflatex" myTemplate.output_format = ".pdf" # To use this template in a Tex() or MathTex() object # use the keyword argument tex_template text = MathTex(r"\vv{vb}", tex_template=myTemplate) self.play(Write(text)) self.wait(1) ``` Here is my rationale for not advertising the `pdf` output - a) The example is first and foremost about the ability to add imports for additional packages in the preamble of tex document b) Some of the imports indeed require a custom compiler so it is okay to suggest that you could change the compiler c) Forgive me for making a bit biased opinion here as I may be ignoring some use case but I am not able to see the use of generating the `pdf` as the output as ultimately the goal is to get the `svg` anyways. Usage of `pdf` will invite issues with `ghostscript` described above leading to a bad first user experience w.r.t usage of the feature of `TexTemplate`. I think it is a great feature to add preamble dynamically! I have been using latex for many years and even I got stumbled by this Ghostscript issue and had to manually print the output of `dvisvgm` to figure it therefore I have a reason to believe that the beginners would have a bad experience because of this. IMHO, modifying the example would be a better thing to do given the audience and primary functionality of manim. </issue> <code> [start of example_scenes/customtex.py] 1 from manim import * 2 3 4 class TexTemplateFromCLI(Scene): 5 """This scene uses a custom TexTemplate file. 6 The path of the TexTemplate _must_ be passed with the command line 7 argument `--tex_template <path to template>`. 8 For this scene, you can use the custom_template.tex file next to it. 9 This scene will fail to render if a tex_template.tex that doesn't 10 import esvect is passed, and will throw a LaTeX error in that case. 11 """ 12 13 def construct(self): 14 text = MathTex(r"\vv{vb}") 15 self.play(Write(text)) 16 self.wait(1) 17 18 19 class InCodeTexTemplate(Scene): 20 """This example scene demonstrates how to modify the tex template 21 for a particular scene from the code for the scene itself. 22 """ 23 24 def construct(self): 25 # Create a new template 26 myTemplate = TexTemplate() 27 28 # Add packages to the template 29 myTemplate.add_to_preamble(r"\usepackage{esvect}") 30 31 # Set the compiler and output format (default: latex and .dvi) 32 # possible tex compilers: "latex", "pdflatex", "xelatex", "lualatex", "luatex" 33 # possible output formats: ".dvi", ".pdf", and ".xdv" 34 myTemplate.tex_compiler = "pdflatex" 35 myTemplate.output_format = ".pdf" 36 37 # To use this template in a Tex() or MathTex() object 38 # use the keyword argument tex_template 39 text = MathTex(r"\vv{vb}", tex_template=myTemplate) 40 self.play(Write(text)) 41 self.wait(1) 42 [end of example_scenes/customtex.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/example_scenes/customtex.py b/example_scenes/customtex.py --- a/example_scenes/customtex.py +++ b/example_scenes/customtex.py @@ -31,8 +31,8 @@ # Set the compiler and output format (default: latex and .dvi) # possible tex compilers: "latex", "pdflatex", "xelatex", "lualatex", "luatex" # possible output formats: ".dvi", ".pdf", and ".xdv" - myTemplate.tex_compiler = "pdflatex" - myTemplate.output_format = ".pdf" + myTemplate.tex_compiler = "latex" + myTemplate.output_format = ".dvi" # To use this template in a Tex() or MathTex() object # use the keyword argument tex_template
{"golden_diff": "diff --git a/example_scenes/customtex.py b/example_scenes/customtex.py\n--- a/example_scenes/customtex.py\n+++ b/example_scenes/customtex.py\n@@ -31,8 +31,8 @@\n # Set the compiler and output format (default: latex and .dvi)\n # possible tex compilers: \"latex\", \"pdflatex\", \"xelatex\", \"lualatex\", \"luatex\"\n # possible output formats: \".dvi\", \".pdf\", and \".xdv\"\n- myTemplate.tex_compiler = \"pdflatex\"\n- myTemplate.output_format = \".pdf\"\n+ myTemplate.tex_compiler = \"latex\"\n+ myTemplate.output_format = \".dvi\"\n \n # To use this template in a Tex() or MathTex() object\n # use the keyword argument tex_template\n", "issue": "Suggestion to improve user experience w.r.t InCodeTexTemplate example\n## Enhancement proposal\r\n\r\nAt present on macOS (with MacTex latest or even older installations) if you run InCodeTexTemplate from example_scenes/customtex.py it would fail for quite a lot of people with the following error\r\n\r\n![dvisvgm](https://user-images.githubusercontent.com/127006/113961056-14fa7880-97eb-11eb-82b9-db34c4115d0f.png)\r\n\r\nAs you can see that I have version 2.11. The error message suggests having version 2.4 or greater.\r\n\r\nUnfortunately here the issue is not because of the version rather it is because of `ghostscript` installation which is required by `dvisvgm`. Interesting I do have the `ghostscript` installed on my machine yet `dvisvgm` is not able to see.\r\n\r\nIn order for `dvisvgm` to see it, I have to make sure that a certain environment variable is set. See image below:\r\n\r\n![Screenink_1](https://user-images.githubusercontent.com/127006/113961559-ddd89700-97eb-11eb-8c04-6bb8c5a5f845.png)\r\n\r\nAfter this, the `example_scenes/customtex.py` would work fine.\r\n\r\nI have described the resolution (which could be classified as an installation issue) in case others are facing it. The issue with Ghostscript and mactex is not unique to manim hence should not be considered a bug related to manim. That said, the error message is a bit misleading and handling errors properly is always a challenging problem in s/w engineering.\r\n\r\nThat said, my suggestion here is to modify the `example_scenes/customtex.py` to not advertise the `pdf` output. Here is the snippet from the example:\r\n\r\n```\r\nclass InCodeTexTemplate(Scene):\r\n \"\"\"This example scene demonstrates how to modify the tex template\r\n for a particular scene from the code for the scene itself.\r\n \"\"\"\r\n\r\n def construct(self):\r\n # Create a new template\r\n myTemplate = TexTemplate()\r\n\r\n # Add packages to the template\r\n myTemplate.add_to_preamble(r\"\\usepackage{esvect}\")\r\n\r\n # Set the compiler and output format (default: latex and .dvi)\r\n # possible tex compilers: \"latex\", \"pdflatex\", \"xelatex\", \"lualatex\", \"luatex\"\r\n # possible output formats: \".dvi\", \".pdf\", and \".xdv\"\r\n myTemplate.tex_compiler = \"pdflatex\"\r\n myTemplate.output_format = \".pdf\"\r\n\r\n # To use this template in a Tex() or MathTex() object\r\n # use the keyword argument tex_template\r\n text = MathTex(r\"\\vv{vb}\", tex_template=myTemplate)\r\n self.play(Write(text))\r\n self.wait(1)\r\n```\r\n\r\nHere is my rationale for not advertising the `pdf` output -\r\n\r\na) The example is first and foremost about the ability to add imports for additional packages in the preamble of tex document\r\n\r\nb) Some of the imports indeed require a custom compiler so it is okay to suggest that you could change the compiler\r\n\r\nc) Forgive me for making a bit biased opinion here as I may be ignoring some use case but I am not able to see the use of generating the `pdf` as the output as ultimately the goal is to get the `svg` anyways.\r\n\r\nUsage of `pdf` will invite issues with `ghostscript` described above leading to a bad first user experience w.r.t usage of the feature of `TexTemplate`. I think it is a great feature to add preamble dynamically!\r\n\r\nI have been using latex for many years and even I got stumbled by this Ghostscript issue and had to manually print the output of `dvisvgm` to figure it therefore I have a reason to believe that the beginners would have a bad experience because of this.\r\n\r\nIMHO, modifying the example would be a better thing to do given the audience and primary functionality of manim.\r\n\r\n\n", "before_files": [{"content": "from manim import *\n\n\nclass TexTemplateFromCLI(Scene):\n \"\"\"This scene uses a custom TexTemplate file.\n The path of the TexTemplate _must_ be passed with the command line\n argument `--tex_template <path to template>`.\n For this scene, you can use the custom_template.tex file next to it.\n This scene will fail to render if a tex_template.tex that doesn't\n import esvect is passed, and will throw a LaTeX error in that case.\n \"\"\"\n\n def construct(self):\n text = MathTex(r\"\\vv{vb}\")\n self.play(Write(text))\n self.wait(1)\n\n\nclass InCodeTexTemplate(Scene):\n \"\"\"This example scene demonstrates how to modify the tex template\n for a particular scene from the code for the scene itself.\n \"\"\"\n\n def construct(self):\n # Create a new template\n myTemplate = TexTemplate()\n\n # Add packages to the template\n myTemplate.add_to_preamble(r\"\\usepackage{esvect}\")\n\n # Set the compiler and output format (default: latex and .dvi)\n # possible tex compilers: \"latex\", \"pdflatex\", \"xelatex\", \"lualatex\", \"luatex\"\n # possible output formats: \".dvi\", \".pdf\", and \".xdv\"\n myTemplate.tex_compiler = \"pdflatex\"\n myTemplate.output_format = \".pdf\"\n\n # To use this template in a Tex() or MathTex() object\n # use the keyword argument tex_template\n text = MathTex(r\"\\vv{vb}\", tex_template=myTemplate)\n self.play(Write(text))\n self.wait(1)\n", "path": "example_scenes/customtex.py"}]}
1,858
184
gh_patches_debug_3173
rasdani/github-patches
git_diff
mathesar-foundation__mathesar-1157
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> URI type does not allow NULL value to be set. ## Description A column with the `URI` type does not allow `null` value to be set after creation even though the column `nullable` option is enabled ## Expected behaviour Allow setting a null value to a nullable `URI` type column without throwing up an error ## To Reproduce 1. Create a new column with the type `URI`. 2. Try setting a null value to the column by creating a new row and not entering any data. 3. New row does not get created due to an error with `URI` type not accepting a null value </issue> <code> [start of db/types/uri.py] 1 from enum import Enum 2 import os 3 from sqlalchemy import text, Text, Table, Column, String, MetaData 4 from sqlalchemy.sql import quoted_name 5 from sqlalchemy.sql.functions import GenericFunction 6 from sqlalchemy.types import UserDefinedType 7 8 from db.functions import hints 9 from db.functions.base import DBFunction, Contains, sa_call_sql_function, Equal 10 from db.functions.packed import DBFunctionPacked 11 12 from db.types import base 13 14 URI_STR = base.MathesarCustomType.URI.value 15 DB_TYPE = base.get_qualified_name(URI_STR) 16 17 TLDS_PATH = os.path.join( 18 os.path.join(os.path.abspath(os.path.dirname(__file__)), "resources"), 19 "tlds.txt" 20 ) 21 22 TLDS_TABLE_NAME = "top_level_domains" 23 QUALIFIED_TLDS = base.get_qualified_name(TLDS_TABLE_NAME) 24 25 26 class URIFunction(Enum): 27 PARTS = URI_STR + "_parts" 28 SCHEME = URI_STR + "_scheme" 29 AUTHORITY = URI_STR + "_authority" 30 PATH = URI_STR + "_path" 31 QUERY = URI_STR + "_query" 32 FRAGMENT = URI_STR + "_fragment" 33 34 35 QualifiedURIFunction = Enum( 36 "QualifiedURIFunction", 37 { 38 func_name.name: base.get_qualified_name(func_name.value) 39 for func_name in URIFunction 40 } 41 ) 42 43 44 # This regex and the use of it are based on the one given in RFC 3986. 45 URI_REGEX_STR = r"'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?'" 46 47 48 class URI(UserDefinedType): 49 def get_col_spec(self, **_): 50 # This results in the type name being upper case when viewed. 51 # Actual usage in the DB is case-insensitive. 52 return DB_TYPE.upper() 53 54 55 # This function lets us avoid having to define repetitive classes for 56 # adding custom SQL functions to SQLAlchemy 57 def build_generic_function_def_class(name): 58 class_dict = { 59 "type": Text, 60 "name": quoted_name(QualifiedURIFunction[name].value, False), 61 "identifier": URIFunction[name].value 62 } 63 return type(class_dict["identifier"], (GenericFunction,), class_dict) 64 65 66 # We need to add these classes to the globals() dict so they get picked 67 # up by SQLAlchemy 68 globals().update( 69 {f.name: build_generic_function_def_class(f.name) for f in URIFunction} 70 ) 71 72 73 def install(engine): 74 drop_domain_query = f""" 75 DROP DOMAIN IF EXISTS {DB_TYPE}; 76 """ 77 78 create_uri_parts_query = f""" 79 CREATE OR REPLACE FUNCTION {QualifiedURIFunction.PARTS.value}({base.PostgresType.TEXT.value}) 80 RETURNS {base.PostgresType.TEXT.value}[] AS $$ 81 SELECT regexp_match($1, {URI_REGEX_STR}); 82 $$ 83 LANGUAGE SQL IMMUTABLE RETURNS NULL ON NULL INPUT; 84 """ 85 uri_parts_map = { 86 QualifiedURIFunction.SCHEME.value: 2, 87 QualifiedURIFunction.AUTHORITY.value: 4, 88 QualifiedURIFunction.PATH.value: 5, 89 QualifiedURIFunction.QUERY.value: 7, 90 QualifiedURIFunction.FRAGMENT.value: 9, 91 } 92 93 create_domain_query = f""" 94 CREATE DOMAIN {DB_TYPE} AS text CHECK ( 95 {QualifiedURIFunction.SCHEME.value}(value) IS NOT NULL 96 AND {QualifiedURIFunction.PATH.value}(value) IS NOT NULL 97 ); 98 """ 99 100 with engine.begin() as conn: 101 conn.execute(text(drop_domain_query)) 102 conn.execute(text(create_uri_parts_query)) 103 for part, index in uri_parts_map.items(): 104 create_uri_part_getter_query = f""" 105 CREATE OR REPLACE FUNCTION {part}({base.PostgresType.TEXT.value}) 106 RETURNS {base.PostgresType.TEXT.value} AS $$ 107 SELECT ({QualifiedURIFunction.PARTS.value}($1))[{index}]; 108 $$ 109 LANGUAGE SQL IMMUTABLE RETURNS NULL ON NULL INPUT; 110 """ 111 conn.execute(text(create_uri_part_getter_query)) 112 conn.execute(text(create_domain_query)) 113 conn.commit() 114 115 116 def install_tld_lookup_table(engine): 117 tlds_table = Table( 118 TLDS_TABLE_NAME, 119 MetaData(bind=engine), 120 Column("tld", String, primary_key=True), 121 schema=base.preparer.quote_schema(base.SCHEMA) 122 ) 123 tlds_table.create() 124 with engine.begin() as conn, open(TLDS_PATH) as f: 125 conn.execute( 126 tlds_table.insert(), 127 [{"tld": tld.strip().lower()} for tld in f if tld[:2] != "# "], 128 ) 129 130 131 class ExtractURIAuthority(DBFunction): 132 id = 'extract_uri_authority' 133 name = 'extract URI authority' 134 hints = tuple([ 135 hints.parameter_count(1), 136 hints.parameter(1, hints.uri), 137 ]) 138 depends_on = tuple([URIFunction.AUTHORITY]) 139 140 @staticmethod 141 def to_sa_expression(uri): 142 return sa_call_sql_function(URIFunction.AUTHORITY.value, uri) 143 144 145 class ExtractURIScheme(DBFunction): 146 id = 'extract_uri_scheme' 147 name = 'extract URI scheme' 148 hints = tuple([ 149 hints.parameter_count(1), 150 hints.parameter(1, hints.uri), 151 ]) 152 depends_on = tuple([URIFunction.SCHEME]) 153 154 @staticmethod 155 def to_sa_expression(uri): 156 return sa_call_sql_function(URIFunction.SCHEME.value, uri) 157 158 159 class URIAuthorityContains(DBFunctionPacked): 160 id = 'uri_authority_contains' 161 name = 'URI authority contains' 162 hints = tuple([ 163 hints.returns(hints.boolean), 164 hints.parameter_count(2), 165 hints.parameter(0, hints.uri), 166 hints.parameter(1, hints.string_like), 167 hints.mathesar_filter, 168 ]) 169 depends_on = tuple([URIFunction.AUTHORITY]) 170 171 def unpack(self): 172 param0 = self.parameters[0] 173 param1 = self.parameters[1] 174 return Contains([ 175 ExtractURIAuthority([param0]), 176 param1, 177 ]) 178 179 180 class URISchemeEquals(DBFunctionPacked): 181 id = 'uri_scheme_equals' 182 name = 'URI scheme is' 183 hints = tuple([ 184 hints.returns(hints.boolean), 185 hints.parameter_count(2), 186 hints.parameter(0, hints.uri), 187 hints.parameter(1, hints.string_like), 188 hints.mathesar_filter, 189 ]) 190 depends_on = tuple([URIFunction.SCHEME]) 191 192 def unpack(self): 193 param0 = self.parameters[0] 194 param1 = self.parameters[1] 195 return Equal([ 196 ExtractURIScheme([param0]), 197 param1, 198 ]) 199 [end of db/types/uri.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/db/types/uri.py b/db/types/uri.py --- a/db/types/uri.py +++ b/db/types/uri.py @@ -92,8 +92,8 @@ create_domain_query = f""" CREATE DOMAIN {DB_TYPE} AS text CHECK ( - {QualifiedURIFunction.SCHEME.value}(value) IS NOT NULL - AND {QualifiedURIFunction.PATH.value}(value) IS NOT NULL + (value IS NULL) OR ({QualifiedURIFunction.SCHEME.value}(value) IS NOT NULL + AND {QualifiedURIFunction.PATH.value}(value) IS NOT NULL) ); """
{"golden_diff": "diff --git a/db/types/uri.py b/db/types/uri.py\n--- a/db/types/uri.py\n+++ b/db/types/uri.py\n@@ -92,8 +92,8 @@\n \n create_domain_query = f\"\"\"\n CREATE DOMAIN {DB_TYPE} AS text CHECK (\n- {QualifiedURIFunction.SCHEME.value}(value) IS NOT NULL\n- AND {QualifiedURIFunction.PATH.value}(value) IS NOT NULL\n+ (value IS NULL) OR ({QualifiedURIFunction.SCHEME.value}(value) IS NOT NULL\n+ AND {QualifiedURIFunction.PATH.value}(value) IS NOT NULL)\n );\n \"\"\"\n", "issue": "URI type does not allow NULL value to be set.\n## Description\r\nA column with the `URI` type does not allow `null` value to be set after creation even though the column `nullable` option is enabled\r\n\r\n## Expected behaviour\r\nAllow setting a null value to a nullable `URI` type column without throwing up an error \r\n\r\n## To Reproduce\r\n1. Create a new column with the type `URI`.\r\n2. Try setting a null value to the column by creating a new row and not entering any data.\r\n3. New row does not get created due to an error with `URI` type not accepting a null value\r\n\r\n\r\n\n", "before_files": [{"content": "from enum import Enum\nimport os\nfrom sqlalchemy import text, Text, Table, Column, String, MetaData\nfrom sqlalchemy.sql import quoted_name\nfrom sqlalchemy.sql.functions import GenericFunction\nfrom sqlalchemy.types import UserDefinedType\n\nfrom db.functions import hints\nfrom db.functions.base import DBFunction, Contains, sa_call_sql_function, Equal\nfrom db.functions.packed import DBFunctionPacked\n\nfrom db.types import base\n\nURI_STR = base.MathesarCustomType.URI.value\nDB_TYPE = base.get_qualified_name(URI_STR)\n\nTLDS_PATH = os.path.join(\n os.path.join(os.path.abspath(os.path.dirname(__file__)), \"resources\"),\n \"tlds.txt\"\n)\n\nTLDS_TABLE_NAME = \"top_level_domains\"\nQUALIFIED_TLDS = base.get_qualified_name(TLDS_TABLE_NAME)\n\n\nclass URIFunction(Enum):\n PARTS = URI_STR + \"_parts\"\n SCHEME = URI_STR + \"_scheme\"\n AUTHORITY = URI_STR + \"_authority\"\n PATH = URI_STR + \"_path\"\n QUERY = URI_STR + \"_query\"\n FRAGMENT = URI_STR + \"_fragment\"\n\n\nQualifiedURIFunction = Enum(\n \"QualifiedURIFunction\",\n {\n func_name.name: base.get_qualified_name(func_name.value)\n for func_name in URIFunction\n }\n)\n\n\n# This regex and the use of it are based on the one given in RFC 3986.\nURI_REGEX_STR = r\"'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\\?([^#]*))?(#(.*))?'\"\n\n\nclass URI(UserDefinedType):\n def get_col_spec(self, **_):\n # This results in the type name being upper case when viewed.\n # Actual usage in the DB is case-insensitive.\n return DB_TYPE.upper()\n\n\n# This function lets us avoid having to define repetitive classes for\n# adding custom SQL functions to SQLAlchemy\ndef build_generic_function_def_class(name):\n class_dict = {\n \"type\": Text,\n \"name\": quoted_name(QualifiedURIFunction[name].value, False),\n \"identifier\": URIFunction[name].value\n }\n return type(class_dict[\"identifier\"], (GenericFunction,), class_dict)\n\n\n# We need to add these classes to the globals() dict so they get picked\n# up by SQLAlchemy\nglobals().update(\n {f.name: build_generic_function_def_class(f.name) for f in URIFunction}\n)\n\n\ndef install(engine):\n drop_domain_query = f\"\"\"\n DROP DOMAIN IF EXISTS {DB_TYPE};\n \"\"\"\n\n create_uri_parts_query = f\"\"\"\n CREATE OR REPLACE FUNCTION {QualifiedURIFunction.PARTS.value}({base.PostgresType.TEXT.value})\n RETURNS {base.PostgresType.TEXT.value}[] AS $$\n SELECT regexp_match($1, {URI_REGEX_STR});\n $$\n LANGUAGE SQL IMMUTABLE RETURNS NULL ON NULL INPUT;\n \"\"\"\n uri_parts_map = {\n QualifiedURIFunction.SCHEME.value: 2,\n QualifiedURIFunction.AUTHORITY.value: 4,\n QualifiedURIFunction.PATH.value: 5,\n QualifiedURIFunction.QUERY.value: 7,\n QualifiedURIFunction.FRAGMENT.value: 9,\n }\n\n create_domain_query = f\"\"\"\n CREATE DOMAIN {DB_TYPE} AS text CHECK (\n {QualifiedURIFunction.SCHEME.value}(value) IS NOT NULL\n AND {QualifiedURIFunction.PATH.value}(value) IS NOT NULL\n );\n \"\"\"\n\n with engine.begin() as conn:\n conn.execute(text(drop_domain_query))\n conn.execute(text(create_uri_parts_query))\n for part, index in uri_parts_map.items():\n create_uri_part_getter_query = f\"\"\"\n CREATE OR REPLACE FUNCTION {part}({base.PostgresType.TEXT.value})\n RETURNS {base.PostgresType.TEXT.value} AS $$\n SELECT ({QualifiedURIFunction.PARTS.value}($1))[{index}];\n $$\n LANGUAGE SQL IMMUTABLE RETURNS NULL ON NULL INPUT;\n \"\"\"\n conn.execute(text(create_uri_part_getter_query))\n conn.execute(text(create_domain_query))\n conn.commit()\n\n\ndef install_tld_lookup_table(engine):\n tlds_table = Table(\n TLDS_TABLE_NAME,\n MetaData(bind=engine),\n Column(\"tld\", String, primary_key=True),\n schema=base.preparer.quote_schema(base.SCHEMA)\n )\n tlds_table.create()\n with engine.begin() as conn, open(TLDS_PATH) as f:\n conn.execute(\n tlds_table.insert(),\n [{\"tld\": tld.strip().lower()} for tld in f if tld[:2] != \"# \"],\n )\n\n\nclass ExtractURIAuthority(DBFunction):\n id = 'extract_uri_authority'\n name = 'extract URI authority'\n hints = tuple([\n hints.parameter_count(1),\n hints.parameter(1, hints.uri),\n ])\n depends_on = tuple([URIFunction.AUTHORITY])\n\n @staticmethod\n def to_sa_expression(uri):\n return sa_call_sql_function(URIFunction.AUTHORITY.value, uri)\n\n\nclass ExtractURIScheme(DBFunction):\n id = 'extract_uri_scheme'\n name = 'extract URI scheme'\n hints = tuple([\n hints.parameter_count(1),\n hints.parameter(1, hints.uri),\n ])\n depends_on = tuple([URIFunction.SCHEME])\n\n @staticmethod\n def to_sa_expression(uri):\n return sa_call_sql_function(URIFunction.SCHEME.value, uri)\n\n\nclass URIAuthorityContains(DBFunctionPacked):\n id = 'uri_authority_contains'\n name = 'URI authority contains'\n hints = tuple([\n hints.returns(hints.boolean),\n hints.parameter_count(2),\n hints.parameter(0, hints.uri),\n hints.parameter(1, hints.string_like),\n hints.mathesar_filter,\n ])\n depends_on = tuple([URIFunction.AUTHORITY])\n\n def unpack(self):\n param0 = self.parameters[0]\n param1 = self.parameters[1]\n return Contains([\n ExtractURIAuthority([param0]),\n param1,\n ])\n\n\nclass URISchemeEquals(DBFunctionPacked):\n id = 'uri_scheme_equals'\n name = 'URI scheme is'\n hints = tuple([\n hints.returns(hints.boolean),\n hints.parameter_count(2),\n hints.parameter(0, hints.uri),\n hints.parameter(1, hints.string_like),\n hints.mathesar_filter,\n ])\n depends_on = tuple([URIFunction.SCHEME])\n\n def unpack(self):\n param0 = self.parameters[0]\n param1 = self.parameters[1]\n return Equal([\n ExtractURIScheme([param0]),\n param1,\n ])\n", "path": "db/types/uri.py"}]}
2,606
143
gh_patches_debug_17781
rasdani/github-patches
git_diff
openstates__openstates-scrapers-1428
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> SD: legislators have no email addresses Example legislator: http://sdlegislature.gov/Legislators/Legislators/MemberDetail.aspx?Session=2017&Member=1125&Cleaned=True Looking at him in openstates, his phone number is there but his email address is missing: https://openstates.org/api/v1/legislators/SDL000286/ The data is there, the scraper just needs to be updated to capture the email property. Thanks! </issue> <code> [start of openstates/sd/legislators.py] 1 import re 2 3 from billy.scrape import NoDataForPeriod 4 from billy.scrape.legislators import LegislatorScraper, Legislator 5 6 import lxml.html 7 8 9 class SDLegislatorScraper(LegislatorScraper): 10 jurisdiction = 'sd' 11 latest_only = True 12 13 def scrape(self, chamber, term): 14 url = 'http://www.sdlegislature.gov/Legislators/default.aspx' \ 15 '?CurrentSession=True' 16 17 if chamber == 'upper': 18 search = 'Senate Members' 19 else: 20 search = 'House Members' 21 22 page = self.get(url).text 23 page = lxml.html.fromstring(page) 24 page.make_links_absolute(url) 25 26 for link in page.xpath("//h4[text()='{}']/../div/a".format(search)): 27 name = link.text.strip() 28 29 self.scrape_legislator(name, chamber, term, 30 '{}&Cleaned=True'.format( 31 link.attrib['href'])) 32 33 def scrape_legislator(self, name, chamber, term, url): 34 page = self.get(url).text 35 page = lxml.html.fromstring(page) 36 page.make_links_absolute(url) 37 38 party = page.xpath("string(//span[contains(@id, 'Party')])") 39 party = party.strip() 40 41 if party == 'Democrat': 42 party = 'Democratic' 43 44 district = page.xpath("string(//span[contains(@id, 'District')])") 45 district = district.strip().lstrip('0') 46 47 occupation = page.xpath( 48 "string(//span[contains(@id, 'Occupation')])") 49 occupation = occupation.strip() 50 51 (photo_url, ) = page.xpath('//img[contains(@id, "_imgMember")]/@src') 52 53 office_phone = page.xpath( 54 "string(//span[contains(@id, 'CapitolPhone')])").strip() 55 56 email = None 57 58 email_link = page.xpath('//a[@id="lnkMail"]') 59 60 if email_link: 61 email = email_link[0].attrib['href'].split(":")[1] 62 63 legislator = Legislator(term, chamber, district, name, 64 party=party, 65 occupation=occupation, 66 photo_url=photo_url, 67 url=url) 68 kwargs = {} 69 if office_phone.strip() != "": 70 kwargs['phone'] = office_phone 71 72 if email and email.strip() != "": 73 # South Dakota protects their email addresses from scraping using 74 # some JS code that runs on page load 75 # Until that code is run, all their email addresses are listed as 76 # *@example.com; so, fix this 77 kwargs['email'] = re.sub(r'@example\.com$', '@sdlegislature.gov', email) 78 79 if kwargs: 80 legislator.add_office('capitol', 'Capitol Office', **kwargs) 81 82 home_address = [ 83 x.strip() for x in 84 page.xpath('//td/span[contains(@id, "HomeAddress")]/text()') 85 if x.strip() 86 ] 87 if home_address: 88 home_address = "\n".join(home_address) 89 home_phone = page.xpath( 90 "string(//span[contains(@id, 'HomePhone')])").strip() 91 legislator.add_office( 92 'district', 93 'District Office', 94 address=home_address, 95 phone=home_phone or None 96 ) 97 98 legislator.add_source(url) 99 100 comm_url = page.xpath("//a[. = 'Committees']")[0].attrib['href'] 101 self.scrape_committees(legislator, comm_url) 102 103 self.save_legislator(legislator) 104 105 def scrape_committees(self, leg, url): 106 page = self.get(url).text 107 page = lxml.html.fromstring(page) 108 leg.add_source(url) 109 110 term = leg['roles'][0]['term'] 111 112 for link in page.xpath("//a[contains(@href, 'CommitteeMem')]"): 113 comm = link.text.strip() 114 115 role = link.xpath('../following-sibling::td')[0]\ 116 .text_content().lower() 117 118 if comm.startswith('Joint'): 119 chamber = 'joint' 120 else: 121 chamber = leg['roles'][0]['chamber'] 122 123 leg.add_role('committee member', term=term, chamber=chamber, 124 committee=comm, position=role) 125 [end of openstates/sd/legislators.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/openstates/sd/legislators.py b/openstates/sd/legislators.py --- a/openstates/sd/legislators.py +++ b/openstates/sd/legislators.py @@ -69,12 +69,14 @@ if office_phone.strip() != "": kwargs['phone'] = office_phone - if email and email.strip() != "": - # South Dakota protects their email addresses from scraping using - # some JS code that runs on page load - # Until that code is run, all their email addresses are listed as - # *@example.com; so, fix this - kwargs['email'] = re.sub(r'@example\.com$', '@sdlegislature.gov', email) + # SD is hiding their email addresses entirely in JS now, so + # search through <script> blocks looking for them + for script in page.xpath('//script'): + if script.text: + match = re.search(r'([\w.]+@sdlegislature\.gov)', script.text) + if match: + kwargs['email'] = match.group(0) + break if kwargs: legislator.add_office('capitol', 'Capitol Office', **kwargs)
{"golden_diff": "diff --git a/openstates/sd/legislators.py b/openstates/sd/legislators.py\n--- a/openstates/sd/legislators.py\n+++ b/openstates/sd/legislators.py\n@@ -69,12 +69,14 @@\n if office_phone.strip() != \"\":\n kwargs['phone'] = office_phone\n \n- if email and email.strip() != \"\":\n- # South Dakota protects their email addresses from scraping using\n- # some JS code that runs on page load\n- # Until that code is run, all their email addresses are listed as\n- # *@example.com; so, fix this\n- kwargs['email'] = re.sub(r'@example\\.com$', '@sdlegislature.gov', email)\n+ # SD is hiding their email addresses entirely in JS now, so\n+ # search through <script> blocks looking for them\n+ for script in page.xpath('//script'):\n+ if script.text:\n+ match = re.search(r'([\\w.]+@sdlegislature\\.gov)', script.text)\n+ if match:\n+ kwargs['email'] = match.group(0)\n+ break\n \n if kwargs:\n legislator.add_office('capitol', 'Capitol Office', **kwargs)\n", "issue": "SD: legislators have no email addresses\nExample legislator: http://sdlegislature.gov/Legislators/Legislators/MemberDetail.aspx?Session=2017&Member=1125&Cleaned=True\r\n\r\nLooking at him in openstates, his phone number is there but his email address is missing: https://openstates.org/api/v1/legislators/SDL000286/\r\n\r\nThe data is there, the scraper just needs to be updated to capture the email property.\r\n\r\nThanks!\n", "before_files": [{"content": "import re\n\nfrom billy.scrape import NoDataForPeriod\nfrom billy.scrape.legislators import LegislatorScraper, Legislator\n\nimport lxml.html\n\n\nclass SDLegislatorScraper(LegislatorScraper):\n jurisdiction = 'sd'\n latest_only = True\n\n def scrape(self, chamber, term):\n url = 'http://www.sdlegislature.gov/Legislators/default.aspx' \\\n '?CurrentSession=True'\n\n if chamber == 'upper':\n search = 'Senate Members'\n else:\n search = 'House Members'\n\n page = self.get(url).text\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n for link in page.xpath(\"//h4[text()='{}']/../div/a\".format(search)):\n name = link.text.strip()\n\n self.scrape_legislator(name, chamber, term,\n '{}&Cleaned=True'.format(\n link.attrib['href']))\n\n def scrape_legislator(self, name, chamber, term, url):\n page = self.get(url).text\n page = lxml.html.fromstring(page)\n page.make_links_absolute(url)\n\n party = page.xpath(\"string(//span[contains(@id, 'Party')])\")\n party = party.strip()\n\n if party == 'Democrat':\n party = 'Democratic'\n\n district = page.xpath(\"string(//span[contains(@id, 'District')])\")\n district = district.strip().lstrip('0')\n\n occupation = page.xpath(\n \"string(//span[contains(@id, 'Occupation')])\")\n occupation = occupation.strip()\n\n (photo_url, ) = page.xpath('//img[contains(@id, \"_imgMember\")]/@src')\n\n office_phone = page.xpath(\n \"string(//span[contains(@id, 'CapitolPhone')])\").strip()\n\n email = None\n\n email_link = page.xpath('//a[@id=\"lnkMail\"]')\n\n if email_link:\n email = email_link[0].attrib['href'].split(\":\")[1]\n\n legislator = Legislator(term, chamber, district, name,\n party=party,\n occupation=occupation,\n photo_url=photo_url,\n url=url)\n kwargs = {}\n if office_phone.strip() != \"\":\n kwargs['phone'] = office_phone\n\n if email and email.strip() != \"\":\n # South Dakota protects their email addresses from scraping using\n # some JS code that runs on page load\n # Until that code is run, all their email addresses are listed as\n # *@example.com; so, fix this\n kwargs['email'] = re.sub(r'@example\\.com$', '@sdlegislature.gov', email)\n\n if kwargs:\n legislator.add_office('capitol', 'Capitol Office', **kwargs)\n\n home_address = [\n x.strip() for x in\n page.xpath('//td/span[contains(@id, \"HomeAddress\")]/text()')\n if x.strip()\n ]\n if home_address:\n home_address = \"\\n\".join(home_address)\n home_phone = page.xpath(\n \"string(//span[contains(@id, 'HomePhone')])\").strip()\n legislator.add_office(\n 'district',\n 'District Office',\n address=home_address,\n phone=home_phone or None\n )\n\n legislator.add_source(url)\n\n comm_url = page.xpath(\"//a[. = 'Committees']\")[0].attrib['href']\n self.scrape_committees(legislator, comm_url)\n\n self.save_legislator(legislator)\n\n def scrape_committees(self, leg, url):\n page = self.get(url).text\n page = lxml.html.fromstring(page)\n leg.add_source(url)\n\n term = leg['roles'][0]['term']\n\n for link in page.xpath(\"//a[contains(@href, 'CommitteeMem')]\"):\n comm = link.text.strip()\n\n role = link.xpath('../following-sibling::td')[0]\\\n .text_content().lower()\n\n if comm.startswith('Joint'):\n chamber = 'joint'\n else:\n chamber = leg['roles'][0]['chamber']\n\n leg.add_role('committee member', term=term, chamber=chamber,\n committee=comm, position=role)\n", "path": "openstates/sd/legislators.py"}]}
1,855
278
gh_patches_debug_6207
rasdani/github-patches
git_diff
getsentry__sentry-23499
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> AttributeError /api/0/organizations/{organization_slug}/sentry-apps/ ## Important Details How are you running Sentry? <!-- Please pick one of the following --> On-Premise w/ Docker, Sentry 21.1.0486d790, based on https://github.com/Rungutan/sentry-performance-monitoring ## Description Opening of these sub items in "Organization" settings fails: * Audit Log * Integrations * Developer settings This seems to be the resulting issue in the "Internal" Sentry project: ``` AttributeError: 'NoneType' object has no attribute 'get_allowed_origins' File "sentry/api/base.py", line 124, in handle_exception response = super(Endpoint, self).handle_exception(exc) File "rest_framework/views.py", line 449, in handle_exception self.raise_uncaught_exception(exc) File "sentry/api/base.py", line 237, in dispatch response = handler(request, *args, **kwargs) File "sentry/api/bases/sentryapps.py", line 59, in wrapped return func(self, *args, **kwargs) File "sentry/api/endpoints/organization_sentry_apps.py", line 19, in get on_results=lambda x: serialize(x, request.user, access=request.access), File "sentry/api/base.py", line 330, in paginate results = on_results(cursor_result.results) File "sentry/api/endpoints/organization_sentry_apps.py", line 19, in <lambda> on_results=lambda x: serialize(x, request.user, access=request.access), File "sentry/api/serializers/base.py", line 45, in serialize return [serializer(o, attrs=attrs.get(o, {}), user=user, **kwargs) for o in objects] File "sentry/api/serializers/base.py", line 45, in <listcomp> return [serializer(o, attrs=attrs.get(o, {}), user=user, **kwargs) for o in objects] File "sentry/api/serializers/base.py", line 60, in __call__ return self.serialize(obj, attrs, user, **kwargs) File "sentry/api/serializers/models/sentry_app.py", line 31, in serialize "allowedOrigins": obj.application.get_allowed_origins(), ``` Last SQL before exception ``` SELECT "sentry_organization"."id", "sentry_organization"."name", "sentry_organization"."slug", "sentry_organization"."status", "sentry_organization"."date_added", "sentry_organization"."default_role", "sentry_organization"."flags" FROM "sentry_organization" WHERE "sentry_organization"."id" = %s ``` ### What you expected to happen Menus should work. ### Possible Solution [If you have an idea on how this could be solved include that detail here.] </issue> <code> [start of src/sentry/api/endpoints/organization_sentry_apps.py] 1 from sentry.api.bases import OrganizationEndpoint, add_integration_platform_metric_tag 2 from sentry.api.paginator import OffsetPaginator 3 from sentry.api.serializers import serialize 4 from sentry.models import SentryApp 5 6 7 class OrganizationSentryAppsEndpoint(OrganizationEndpoint): 8 @add_integration_platform_metric_tag 9 def get(self, request, organization): 10 queryset = SentryApp.objects.filter(owner=organization) 11 12 return self.paginate( 13 request=request, 14 queryset=queryset, 15 order_by="-date_added", 16 paginator_cls=OffsetPaginator, 17 on_results=lambda x: serialize(x, request.user, access=request.access), 18 ) 19 [end of src/sentry/api/endpoints/organization_sentry_apps.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/src/sentry/api/endpoints/organization_sentry_apps.py b/src/sentry/api/endpoints/organization_sentry_apps.py --- a/src/sentry/api/endpoints/organization_sentry_apps.py +++ b/src/sentry/api/endpoints/organization_sentry_apps.py @@ -7,7 +7,7 @@ class OrganizationSentryAppsEndpoint(OrganizationEndpoint): @add_integration_platform_metric_tag def get(self, request, organization): - queryset = SentryApp.objects.filter(owner=organization) + queryset = SentryApp.objects.filter(owner=organization, application__isnull=False) return self.paginate( request=request,
{"golden_diff": "diff --git a/src/sentry/api/endpoints/organization_sentry_apps.py b/src/sentry/api/endpoints/organization_sentry_apps.py\n--- a/src/sentry/api/endpoints/organization_sentry_apps.py\n+++ b/src/sentry/api/endpoints/organization_sentry_apps.py\n@@ -7,7 +7,7 @@\n class OrganizationSentryAppsEndpoint(OrganizationEndpoint):\n @add_integration_platform_metric_tag\n def get(self, request, organization):\n- queryset = SentryApp.objects.filter(owner=organization)\n+ queryset = SentryApp.objects.filter(owner=organization, application__isnull=False)\n \n return self.paginate(\n request=request,\n", "issue": "AttributeError /api/0/organizations/{organization_slug}/sentry-apps/\n## Important Details\r\n\r\nHow are you running Sentry?\r\n\r\n<!-- Please pick one of the following -->\r\nOn-Premise w/ Docker, Sentry 21.1.0486d790, based on https://github.com/Rungutan/sentry-performance-monitoring\r\n\r\n## Description\r\n\r\nOpening of these sub items in \"Organization\" settings fails:\r\n* Audit Log\r\n* Integrations\r\n* Developer settings\r\n\r\nThis seems to be the resulting issue in the \"Internal\" Sentry project:\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'get_allowed_origins'\r\n File \"sentry/api/base.py\", line 124, in handle_exception\r\n response = super(Endpoint, self).handle_exception(exc)\r\n File \"rest_framework/views.py\", line 449, in handle_exception\r\n self.raise_uncaught_exception(exc)\r\n File \"sentry/api/base.py\", line 237, in dispatch\r\n response = handler(request, *args, **kwargs)\r\n File \"sentry/api/bases/sentryapps.py\", line 59, in wrapped\r\n return func(self, *args, **kwargs)\r\n File \"sentry/api/endpoints/organization_sentry_apps.py\", line 19, in get\r\n on_results=lambda x: serialize(x, request.user, access=request.access),\r\n File \"sentry/api/base.py\", line 330, in paginate\r\n results = on_results(cursor_result.results)\r\n File \"sentry/api/endpoints/organization_sentry_apps.py\", line 19, in <lambda>\r\n on_results=lambda x: serialize(x, request.user, access=request.access),\r\n File \"sentry/api/serializers/base.py\", line 45, in serialize\r\n return [serializer(o, attrs=attrs.get(o, {}), user=user, **kwargs) for o in objects]\r\n File \"sentry/api/serializers/base.py\", line 45, in <listcomp>\r\n return [serializer(o, attrs=attrs.get(o, {}), user=user, **kwargs) for o in objects]\r\n File \"sentry/api/serializers/base.py\", line 60, in __call__\r\n return self.serialize(obj, attrs, user, **kwargs)\r\n File \"sentry/api/serializers/models/sentry_app.py\", line 31, in serialize\r\n \"allowedOrigins\": obj.application.get_allowed_origins(),\r\n```\r\n\r\nLast SQL before exception\r\n\r\n```\r\nSELECT \"sentry_organization\".\"id\", \"sentry_organization\".\"name\", \"sentry_organization\".\"slug\", \"sentry_organization\".\"status\", \"sentry_organization\".\"date_added\", \"sentry_organization\".\"default_role\", \"sentry_organization\".\"flags\" FROM \"sentry_organization\" WHERE \"sentry_organization\".\"id\" = %s\r\n```\r\n\r\n### What you expected to happen\r\nMenus should work.\r\n\r\n### Possible Solution\r\n\r\n[If you have an idea on how this could be solved include that detail here.]\r\n\n", "before_files": [{"content": "from sentry.api.bases import OrganizationEndpoint, add_integration_platform_metric_tag\nfrom sentry.api.paginator import OffsetPaginator\nfrom sentry.api.serializers import serialize\nfrom sentry.models import SentryApp\n\n\nclass OrganizationSentryAppsEndpoint(OrganizationEndpoint):\n @add_integration_platform_metric_tag\n def get(self, request, organization):\n queryset = SentryApp.objects.filter(owner=organization)\n\n return self.paginate(\n request=request,\n queryset=queryset,\n order_by=\"-date_added\",\n paginator_cls=OffsetPaginator,\n on_results=lambda x: serialize(x, request.user, access=request.access),\n )\n", "path": "src/sentry/api/endpoints/organization_sentry_apps.py"}]}
1,347
138
gh_patches_debug_55582
rasdani/github-patches
git_diff
wagtail__wagtail-997
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Password reset request generates email with URL "example.com" Received: ``` Please follow the link below to reset your password http://example.com/admin/password_reset/confirm/NA/3x7-cfc1f37209f0c04d1ee1/ ``` This time `BASE_URL` _is_ configured, but as this view is from django.contrib this is perhaps due to some other missing setting. Related to #693 #826 </issue> <code> [start of wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py] 1 from __future__ import unicode_literals 2 3 import re 4 5 from django.conf import settings 6 from django import template 7 from django.contrib.humanize.templatetags.humanize import intcomma 8 9 from wagtail.wagtailcore import hooks 10 from wagtail.wagtailcore.models import get_navigation_menu_items, UserPagePermissionsProxy, PageViewRestriction 11 from wagtail.wagtailcore.utils import camelcase_to_underscore 12 from wagtail.wagtailadmin.menu import admin_menu 13 14 15 register = template.Library() 16 17 register.filter('intcomma', intcomma) 18 19 @register.inclusion_tag('wagtailadmin/shared/explorer_nav.html') 20 def explorer_nav(): 21 return { 22 'nodes': get_navigation_menu_items() 23 } 24 25 26 @register.inclusion_tag('wagtailadmin/shared/explorer_nav_child.html') 27 def explorer_subnav(nodes): 28 return { 29 'nodes': nodes 30 } 31 32 33 @register.inclusion_tag('wagtailadmin/shared/main_nav.html', takes_context=True) 34 def main_nav(context): 35 request = context['request'] 36 37 return { 38 'menu_html': admin_menu.render_html(request), 39 'request': request, 40 } 41 42 @register.simple_tag 43 def main_nav_js(): 44 return admin_menu.media['js'] 45 46 47 @register.filter("ellipsistrim") 48 def ellipsistrim(value, max_length): 49 if len(value) > max_length: 50 truncd_val = value[:max_length] 51 if not len(value) == max_length+1 and value[max_length+1] != " ": 52 truncd_val = truncd_val[:truncd_val.rfind(" ")] 53 return truncd_val + "..." 54 return value 55 56 57 @register.filter 58 def fieldtype(bound_field): 59 try: 60 return camelcase_to_underscore(bound_field.field.__class__.__name__) 61 except AttributeError: 62 try: 63 return camelcase_to_underscore(bound_field.__class__.__name__) 64 except AttributeError: 65 return "" 66 67 68 @register.filter 69 def widgettype(bound_field): 70 try: 71 return camelcase_to_underscore(bound_field.field.widget.__class__.__name__) 72 except AttributeError: 73 return "" 74 75 76 @register.filter 77 def meta_description(model): 78 try: 79 return model.model_class()._meta.description 80 except: 81 return "" 82 83 84 @register.assignment_tag(takes_context=True) 85 def page_permissions(context, page): 86 """ 87 Usage: {% page_permissions page as page_perms %} 88 Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out 89 what actions the current logged-in user can perform on the given page. 90 """ 91 # Create a UserPagePermissionsProxy object to represent the user's global permissions, and 92 # cache it in the context for the duration of the page request, if one does not exist already 93 if 'user_page_permissions' not in context: 94 context['user_page_permissions'] = UserPagePermissionsProxy(context['request'].user) 95 96 # Now retrieve a PagePermissionTester from it, specific to the given page 97 return context['user_page_permissions'].for_page(page) 98 99 100 @register.assignment_tag(takes_context=True) 101 def test_page_is_public(context, page): 102 """ 103 Usage: {% test_page_is_public page as is_public %} 104 Sets 'is_public' to True iff there are no page view restrictions in place on 105 this page. 106 Caches the list of page view restrictions in the context, to avoid repeated 107 DB queries on repeated calls. 108 """ 109 if 'all_page_view_restriction_paths' not in context: 110 context['all_page_view_restriction_paths'] = PageViewRestriction.objects.select_related('page').values_list('page__path', flat=True) 111 112 is_private = any([ 113 page.path.startswith(restricted_path) 114 for restricted_path in context['all_page_view_restriction_paths'] 115 ]) 116 117 return not is_private 118 119 120 @register.simple_tag 121 def hook_output(hook_name): 122 """ 123 Example: {% hook_output 'insert_editor_css' %} 124 Whenever we have a hook whose functions take no parameters and return a string, this tag can be used 125 to output the concatenation of all of those return values onto the page. 126 Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content. 127 """ 128 snippets = [fn() for fn in hooks.get_hooks(hook_name)] 129 return ''.join(snippets) 130 131 132 @register.assignment_tag 133 def usage_count_enabled(): 134 return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False) 135 136 137 class EscapeScriptNode(template.Node): 138 TAG_NAME = 'escapescript' 139 SCRIPT_RE = re.compile(r'<(-*)/script>') 140 141 def __init__(self, nodelist): 142 super(EscapeScriptNode, self).__init__() 143 self.nodelist = nodelist 144 145 def render(self, context): 146 out = self.nodelist.render(context) 147 escaped_out = self.SCRIPT_RE.sub(r'<-\1/script>', out) 148 return escaped_out 149 150 @classmethod 151 def handle(cls, parser, token): 152 nodelist = parser.parse(('end' + EscapeScriptNode.TAG_NAME,)) 153 parser.delete_first_token() 154 return cls(nodelist) 155 156 register.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle) 157 [end of wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py --- a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py +++ b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py @@ -134,6 +134,11 @@ return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False) [email protected]_tag +def base_url_setting(): + return getattr(settings, 'BASE_URL', None) + + class EscapeScriptNode(template.Node): TAG_NAME = 'escapescript' SCRIPT_RE = re.compile(r'<(-*)/script>')
{"golden_diff": "diff --git a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py\n--- a/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py\n+++ b/wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py\n@@ -134,6 +134,11 @@\n return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False)\n \n \[email protected]_tag\n+def base_url_setting():\n+ return getattr(settings, 'BASE_URL', None)\n+\n+\n class EscapeScriptNode(template.Node):\n TAG_NAME = 'escapescript'\n SCRIPT_RE = re.compile(r'<(-*)/script>')\n", "issue": "Password reset request generates email with URL \"example.com\"\nReceived:\n\n```\nPlease follow the link below to reset your password\nhttp://example.com/admin/password_reset/confirm/NA/3x7-cfc1f37209f0c04d1ee1/\n```\n\nThis time `BASE_URL` _is_ configured, but as this view is from django.contrib this is perhaps due to some other missing setting. \n\nRelated to #693 #826\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport re\n\nfrom django.conf import settings\nfrom django import template\nfrom django.contrib.humanize.templatetags.humanize import intcomma\n\nfrom wagtail.wagtailcore import hooks\nfrom wagtail.wagtailcore.models import get_navigation_menu_items, UserPagePermissionsProxy, PageViewRestriction\nfrom wagtail.wagtailcore.utils import camelcase_to_underscore\nfrom wagtail.wagtailadmin.menu import admin_menu\n\n\nregister = template.Library()\n\nregister.filter('intcomma', intcomma)\n\[email protected]_tag('wagtailadmin/shared/explorer_nav.html')\ndef explorer_nav():\n return {\n 'nodes': get_navigation_menu_items()\n }\n\n\[email protected]_tag('wagtailadmin/shared/explorer_nav_child.html')\ndef explorer_subnav(nodes):\n return {\n 'nodes': nodes\n }\n\n\[email protected]_tag('wagtailadmin/shared/main_nav.html', takes_context=True)\ndef main_nav(context):\n request = context['request']\n\n return {\n 'menu_html': admin_menu.render_html(request),\n 'request': request,\n }\n\[email protected]_tag\ndef main_nav_js():\n return admin_menu.media['js']\n\n\[email protected](\"ellipsistrim\")\ndef ellipsistrim(value, max_length):\n if len(value) > max_length:\n truncd_val = value[:max_length]\n if not len(value) == max_length+1 and value[max_length+1] != \" \":\n truncd_val = truncd_val[:truncd_val.rfind(\" \")]\n return truncd_val + \"...\"\n return value\n\n\[email protected]\ndef fieldtype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.__class__.__name__)\n except AttributeError:\n try:\n return camelcase_to_underscore(bound_field.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef widgettype(bound_field):\n try:\n return camelcase_to_underscore(bound_field.field.widget.__class__.__name__)\n except AttributeError:\n return \"\"\n\n\[email protected]\ndef meta_description(model):\n try:\n return model.model_class()._meta.description\n except:\n return \"\"\n\n\[email protected]_tag(takes_context=True)\ndef page_permissions(context, page):\n \"\"\"\n Usage: {% page_permissions page as page_perms %}\n Sets the variable 'page_perms' to a PagePermissionTester object that can be queried to find out\n what actions the current logged-in user can perform on the given page.\n \"\"\"\n # Create a UserPagePermissionsProxy object to represent the user's global permissions, and\n # cache it in the context for the duration of the page request, if one does not exist already\n if 'user_page_permissions' not in context:\n context['user_page_permissions'] = UserPagePermissionsProxy(context['request'].user)\n\n # Now retrieve a PagePermissionTester from it, specific to the given page\n return context['user_page_permissions'].for_page(page)\n\n\[email protected]_tag(takes_context=True)\ndef test_page_is_public(context, page):\n \"\"\"\n Usage: {% test_page_is_public page as is_public %}\n Sets 'is_public' to True iff there are no page view restrictions in place on\n this page.\n Caches the list of page view restrictions in the context, to avoid repeated\n DB queries on repeated calls.\n \"\"\"\n if 'all_page_view_restriction_paths' not in context:\n context['all_page_view_restriction_paths'] = PageViewRestriction.objects.select_related('page').values_list('page__path', flat=True)\n\n is_private = any([\n page.path.startswith(restricted_path)\n for restricted_path in context['all_page_view_restriction_paths']\n ])\n\n return not is_private\n\n\[email protected]_tag\ndef hook_output(hook_name):\n \"\"\"\n Example: {% hook_output 'insert_editor_css' %}\n Whenever we have a hook whose functions take no parameters and return a string, this tag can be used\n to output the concatenation of all of those return values onto the page.\n Note that the output is not escaped - it is the hook function's responsibility to escape unsafe content.\n \"\"\"\n snippets = [fn() for fn in hooks.get_hooks(hook_name)]\n return ''.join(snippets)\n\n\[email protected]_tag\ndef usage_count_enabled():\n return getattr(settings, 'WAGTAIL_USAGE_COUNT_ENABLED', False)\n\n\nclass EscapeScriptNode(template.Node):\n TAG_NAME = 'escapescript'\n SCRIPT_RE = re.compile(r'<(-*)/script>')\n\n def __init__(self, nodelist):\n super(EscapeScriptNode, self).__init__()\n self.nodelist = nodelist\n\n def render(self, context):\n out = self.nodelist.render(context)\n escaped_out = self.SCRIPT_RE.sub(r'<-\\1/script>', out)\n return escaped_out\n\n @classmethod\n def handle(cls, parser, token):\n nodelist = parser.parse(('end' + EscapeScriptNode.TAG_NAME,))\n parser.delete_first_token()\n return cls(nodelist)\n\nregister.tag(EscapeScriptNode.TAG_NAME, EscapeScriptNode.handle)\n", "path": "wagtail/wagtailadmin/templatetags/wagtailadmin_tags.py"}]}
2,146
170
gh_patches_debug_16973
rasdani/github-patches
git_diff
coala__coala-1081
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> coala-ci doesn't use fail_aqcuire_settings </issue> <code> [start of coalib/coala_main.py] 1 from itertools import chain 2 from pyprint.ConsolePrinter import ConsolePrinter 3 import os 4 5 from coalib.output.printers.LogPrinter import LogPrinter 6 from coalib.processes.Processing import execute_section 7 from coalib.results.HiddenResult import HiddenResult 8 from coalib.settings.ConfigurationGathering import gather_configuration 9 from coalib.misc.Exceptions import get_exitcode 10 from coalib.bears.BEAR_KIND import BEAR_KIND 11 from coalib.collecting.Collectors import collect_bears 12 from coalib.output.Tagging import tag_results, delete_tagged_results 13 14 15 do_nothing = lambda *args: True 16 17 18 def run_coala(log_printer=None, 19 print_results=do_nothing, 20 acquire_settings=do_nothing, 21 print_section_beginning=do_nothing, 22 nothing_done=do_nothing, 23 show_bears=do_nothing): 24 """ 25 This is a main method that should be usable for almost all purposes and 26 reduces executing coala to one function call. 27 28 :param log_printer: A LogPrinter object to use for logging. 29 :param print_results: A callback that takes a LogPrinter, a 30 section, a list of results to be printed, 31 the file dict and the mutable file diff 32 dict. 33 :param acquire_settings: The method to use for requesting settings. 34 It will get a parameter which is a 35 dictionary with the settings name as key 36 and a list containing a description in [0] 37 and the names of the bears who need this 38 setting in all following indexes. 39 :param print_section_beginning: A callback that will be called with a 40 section name string whenever analysis of a 41 new section is started. 42 :param nothing_done: A callback that will be called without 43 parameters if nothing was done. 44 :param show_bears: A callback that will be called with first 45 a list of local bears, second a list of 46 global bears to output them. A third bool 47 parameter may be used to indicate if a 48 compressed output (True) or a normal output 49 (False) is desired, the former being used 50 for showing all available bears to the user. 51 :return: A dictionary containing a list of results 52 for all analyzed sections as key. 53 """ 54 log_printer = log_printer or LogPrinter(ConsolePrinter()) 55 56 exitcode = 0 57 results = None 58 try: 59 yielded_results = False 60 did_nothing = True 61 (sections, 62 local_bears, 63 global_bears, 64 targets) = gather_configuration(acquire_settings, log_printer) 65 66 tag = str(sections['default'].get('tag', None)) 67 dtag = str(sections['default'].get('dtag', None)) 68 69 show_all_bears = bool(sections['default'].get('show_all_bears', False)) 70 show_bears_ = bool(sections["default"].get("show_bears", "False")) 71 if show_all_bears: 72 show_bears_ = True 73 for section in sections: 74 bear_dirs = sections[section].bear_dirs() 75 local_bears[section] = collect_bears(bear_dirs, 76 ["**"], 77 [BEAR_KIND.LOCAL], 78 log_printer) 79 global_bears[section] = collect_bears(bear_dirs, 80 ["**"], 81 [BEAR_KIND.GLOBAL], 82 log_printer) 83 84 if dtag != "None": 85 delete_tagged_results( 86 dtag, 87 os.path.abspath(str(sections["default"].get("config")))) 88 89 if show_bears_: 90 show_bears(local_bears, 91 global_bears, 92 show_all_bears) 93 did_nothing = False 94 else: 95 results = {} 96 for section_name in sections: 97 section = sections[section_name] 98 if not section.is_enabled(targets): 99 continue 100 101 print_section_beginning(section) 102 section_result = execute_section( 103 section=section, 104 global_bear_list=global_bears[section_name], 105 local_bear_list=local_bears[section_name], 106 print_results=print_results, 107 log_printer=log_printer) 108 yielded_results = yielded_results or section_result[0] 109 110 results_for_section = [] 111 for value in chain(section_result[1].values(), 112 section_result[2].values()): 113 if value is None: 114 continue 115 116 for result in value: 117 if not isinstance(result, HiddenResult): 118 results_for_section.append(result) 119 120 results[section_name] = results_for_section 121 did_nothing = False 122 123 if tag != "None": 124 tag_results( 125 tag, 126 os.path.abspath(str(sections["default"].get("config"))), 127 results) 128 129 if did_nothing: 130 nothing_done(log_printer) 131 132 if yielded_results: 133 exitcode = 1 134 except BaseException as exception: # pylint: disable=broad-except 135 exitcode = exitcode or get_exitcode(exception, log_printer) 136 137 return results, exitcode 138 [end of coalib/coala_main.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/coalib/coala_main.py b/coalib/coala_main.py --- a/coalib/coala_main.py +++ b/coalib/coala_main.py @@ -9,6 +9,7 @@ from coalib.misc.Exceptions import get_exitcode from coalib.bears.BEAR_KIND import BEAR_KIND from coalib.collecting.Collectors import collect_bears +from coalib.output.Interactions import fail_acquire_settings from coalib.output.Tagging import tag_results, delete_tagged_results @@ -17,7 +18,7 @@ def run_coala(log_printer=None, print_results=do_nothing, - acquire_settings=do_nothing, + acquire_settings=fail_acquire_settings, print_section_beginning=do_nothing, nothing_done=do_nothing, show_bears=do_nothing):
{"golden_diff": "diff --git a/coalib/coala_main.py b/coalib/coala_main.py\n--- a/coalib/coala_main.py\n+++ b/coalib/coala_main.py\n@@ -9,6 +9,7 @@\n from coalib.misc.Exceptions import get_exitcode\n from coalib.bears.BEAR_KIND import BEAR_KIND\n from coalib.collecting.Collectors import collect_bears\n+from coalib.output.Interactions import fail_acquire_settings\n from coalib.output.Tagging import tag_results, delete_tagged_results\n \n \n@@ -17,7 +18,7 @@\n \n def run_coala(log_printer=None,\n print_results=do_nothing,\n- acquire_settings=do_nothing,\n+ acquire_settings=fail_acquire_settings,\n print_section_beginning=do_nothing,\n nothing_done=do_nothing,\n show_bears=do_nothing):\n", "issue": "coala-ci doesn't use fail_aqcuire_settings\n\n", "before_files": [{"content": "from itertools import chain\nfrom pyprint.ConsolePrinter import ConsolePrinter\nimport os\n\nfrom coalib.output.printers.LogPrinter import LogPrinter\nfrom coalib.processes.Processing import execute_section\nfrom coalib.results.HiddenResult import HiddenResult\nfrom coalib.settings.ConfigurationGathering import gather_configuration\nfrom coalib.misc.Exceptions import get_exitcode\nfrom coalib.bears.BEAR_KIND import BEAR_KIND\nfrom coalib.collecting.Collectors import collect_bears\nfrom coalib.output.Tagging import tag_results, delete_tagged_results\n\n\ndo_nothing = lambda *args: True\n\n\ndef run_coala(log_printer=None,\n print_results=do_nothing,\n acquire_settings=do_nothing,\n print_section_beginning=do_nothing,\n nothing_done=do_nothing,\n show_bears=do_nothing):\n \"\"\"\n This is a main method that should be usable for almost all purposes and\n reduces executing coala to one function call.\n\n :param log_printer: A LogPrinter object to use for logging.\n :param print_results: A callback that takes a LogPrinter, a\n section, a list of results to be printed,\n the file dict and the mutable file diff\n dict.\n :param acquire_settings: The method to use for requesting settings.\n It will get a parameter which is a\n dictionary with the settings name as key\n and a list containing a description in [0]\n and the names of the bears who need this\n setting in all following indexes.\n :param print_section_beginning: A callback that will be called with a\n section name string whenever analysis of a\n new section is started.\n :param nothing_done: A callback that will be called without\n parameters if nothing was done.\n :param show_bears: A callback that will be called with first\n a list of local bears, second a list of\n global bears to output them. A third bool\n parameter may be used to indicate if a\n compressed output (True) or a normal output\n (False) is desired, the former being used\n for showing all available bears to the user.\n :return: A dictionary containing a list of results\n for all analyzed sections as key.\n \"\"\"\n log_printer = log_printer or LogPrinter(ConsolePrinter())\n\n exitcode = 0\n results = None\n try:\n yielded_results = False\n did_nothing = True\n (sections,\n local_bears,\n global_bears,\n targets) = gather_configuration(acquire_settings, log_printer)\n\n tag = str(sections['default'].get('tag', None))\n dtag = str(sections['default'].get('dtag', None))\n\n show_all_bears = bool(sections['default'].get('show_all_bears', False))\n show_bears_ = bool(sections[\"default\"].get(\"show_bears\", \"False\"))\n if show_all_bears:\n show_bears_ = True\n for section in sections:\n bear_dirs = sections[section].bear_dirs()\n local_bears[section] = collect_bears(bear_dirs,\n [\"**\"],\n [BEAR_KIND.LOCAL],\n log_printer)\n global_bears[section] = collect_bears(bear_dirs,\n [\"**\"],\n [BEAR_KIND.GLOBAL],\n log_printer)\n\n if dtag != \"None\":\n delete_tagged_results(\n dtag,\n os.path.abspath(str(sections[\"default\"].get(\"config\"))))\n\n if show_bears_:\n show_bears(local_bears,\n global_bears,\n show_all_bears)\n did_nothing = False\n else:\n results = {}\n for section_name in sections:\n section = sections[section_name]\n if not section.is_enabled(targets):\n continue\n\n print_section_beginning(section)\n section_result = execute_section(\n section=section,\n global_bear_list=global_bears[section_name],\n local_bear_list=local_bears[section_name],\n print_results=print_results,\n log_printer=log_printer)\n yielded_results = yielded_results or section_result[0]\n\n results_for_section = []\n for value in chain(section_result[1].values(),\n section_result[2].values()):\n if value is None:\n continue\n\n for result in value:\n if not isinstance(result, HiddenResult):\n results_for_section.append(result)\n\n results[section_name] = results_for_section\n did_nothing = False\n\n if tag != \"None\":\n tag_results(\n tag,\n os.path.abspath(str(sections[\"default\"].get(\"config\"))),\n results)\n\n if did_nothing:\n nothing_done(log_printer)\n\n if yielded_results:\n exitcode = 1\n except BaseException as exception: # pylint: disable=broad-except\n exitcode = exitcode or get_exitcode(exception, log_printer)\n\n return results, exitcode\n", "path": "coalib/coala_main.py"}]}
1,920
186
gh_patches_debug_3060
rasdani/github-patches
git_diff
dotkom__onlineweb4-1920
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> PaymentPrice description is not required by manager but required for __str__ ## What kind of an issue is this? - [x] Bug report ## What is the expected behaviour? That you can create a PaymentPrice by following the guidelines, which are a required price and an optional description. ## What is the current behaviour? By creating a PaymentPrice with a required price but no description the system fails with the error message ``` TypeError at /admin/payment/payment/add/ unsupported operand type(s) for +: 'NoneType' and 'str' ``` ## How do you reproduce this problem? * Create an event with an attached attendance * Navigate to /admin/payment/payment/add/ * Add a payment to the attendance event you created, without a description (but with a price) * The system crashes with the error message `unsupported operand type(s) for +: 'NoneType' and 'str'` </issue> <code> [start of apps/payment/models.py] 1 # -*- coding: utf-8 -*- 2 3 import uuid 4 5 from django.conf import settings 6 from django.contrib.contenttypes.fields import GenericForeignKey 7 from django.contrib.contenttypes.models import ContentType 8 from django.db import models 9 from django.utils import timezone 10 from django.utils.translation import ugettext as _ 11 from rest_framework.exceptions import NotAcceptable 12 13 from apps.events.models import AttendanceEvent, Attendee 14 from apps.marks.models import Suspension 15 16 User = settings.AUTH_USER_MODEL 17 18 19 class Payment(models.Model): 20 21 TYPE_CHOICES = ( 22 (1, _('Umiddelbar')), 23 (2, _('Frist')), 24 (3, _('Utsettelse')), 25 ) 26 27 # Make sure these exist in settings if they are to be used. 28 STRIPE_KEY_CHOICES = ( 29 ('arrkom', 'arrkom'), 30 ('prokom', 'prokom'), 31 ('trikom', 'trikom'), 32 ('fagkom', 'fagkom'), 33 ) 34 35 content_type = models.ForeignKey(ContentType) 36 object_id = models.PositiveIntegerField() 37 content_object = GenericForeignKey() 38 stripe_key = models.CharField( 39 _('stripe key'), 40 max_length=10, 41 choices=STRIPE_KEY_CHOICES, 42 default="arrkom" 43 ) 44 45 payment_type = models.SmallIntegerField(_('type'), choices=TYPE_CHOICES) 46 47 # Optional fields depending on payment type 48 deadline = models.DateTimeField(_("frist"), blank=True, null=True) 49 active = models.BooleanField(default=True) 50 delay = models.SmallIntegerField(_('utsettelse'), blank=True, null=True, default=2) 51 52 # For logging and history 53 added_date = models.DateTimeField(_("opprettet dato"), auto_now=True) 54 changed_date = models.DateTimeField(auto_now=True, editable=False) 55 last_changed_by = models.ForeignKey(User, editable=False, null=True) # Blank and null is temperarly 56 57 def paid_users(self): 58 return [payment_relation.user for payment_relation in self.paymentrelation_set.filter(refunded=False)] 59 60 def payment_delays(self): 61 return self.paymentdelay_set.filter(active=True) 62 63 def payment_delay_users(self): 64 return [payment_delay.user for payment_delay in self.payment_delays()] 65 66 def create_payment_delay(self, user, deadline): 67 payment_delays = self.paymentdelay_set.filter(payment=self, user=user) 68 69 if payment_delays: 70 for payment_delay in payment_delays: 71 payment_delay.valid_to = deadline 72 payment_delay.save() 73 else: 74 PaymentDelay.objects.create(payment=self, user=user, valid_to=deadline) 75 76 def description(self): 77 if self._is_type(AttendanceEvent): 78 return self.content_object.event.title 79 80 def get_receipt_description(self): 81 receipt_description = "" 82 description = [' '] * 30 83 temp = self.description()[0:25] 84 description[0:len(temp)+1] = list(temp) 85 for c in description: 86 receipt_description += c 87 return receipt_description 88 89 def responsible_mail(self): 90 if self._is_type(AttendanceEvent): 91 event_type = self.content_object.event.event_type 92 if event_type == 1 or event_type == 4: # Sosialt & Utflukt 93 return settings.EMAIL_ARRKOM 94 elif event_type == 2: # Bedpres 95 return settings.EMAIL_BEDKOM 96 elif event_type == 3: # Kurs 97 return settings.EMAIL_FAGKOM 98 elif event_type == 5: # Ekskursjon 99 return settings.EMAIL_EKSKOM 100 else: 101 return settings.DEFAULT_FROM_EMAIL 102 else: 103 return settings.DEFAULT_FROM_EMAIL 104 105 def handle_payment(self, user): 106 if self._is_type(AttendanceEvent): 107 attendee = Attendee.objects.filter(event=self.content_object, user=user) 108 109 # Delete payment delay objects for the user if there are any 110 delays = PaymentDelay.objects.filter(payment=self, user=user) 111 for delay in delays: 112 delay.delete() 113 114 # If the user is suspended because of a lack of payment the suspension is deactivated. 115 suspensions = Suspension.objects.filter(payment_id=self.id, user=user) 116 for suspension in suspensions: 117 suspension.active = False 118 suspension.save() 119 120 if attendee: 121 attendee[0].paid = True 122 attendee[0].save() 123 else: 124 Attendee.objects.create(event=self.content_object, user=user, paid=True) 125 126 def handle_refund(self, host, payment_relation): 127 payment_relation.refunded = True 128 payment_relation.save() 129 130 if self._is_type(AttendanceEvent): 131 self.content_object.notify_waiting_list( 132 host=host, unattended_user=payment_relation.user) 133 Attendee.objects.get(event=self.content_object, 134 user=payment_relation.user).delete() 135 136 def check_refund(self, payment_relation): 137 if self._is_type(AttendanceEvent): 138 attendance_event = self.content_object 139 if attendance_event.unattend_deadline < timezone.now(): 140 return False, _("Fristen for og melde seg av har utgått") 141 if len(Attendee.objects.filter(event=attendance_event, user=payment_relation.user)) == 0: 142 return False, _("Du er ikke påmeldt dette arrangementet.") 143 if attendance_event.event.event_start < timezone.now(): 144 return False, _("Dette arrangementet har allerede startet.") 145 146 return True, '' 147 148 return False, 'Refund checks not implemented' 149 150 def prices(self): 151 return self.paymentprice_set.all() 152 153 def price(self): 154 # TODO implement group based pricing 155 if self.paymentprice_set.count() > 0: 156 return self.paymentprice_set.all()[0] 157 return None 158 159 def _is_type(self, model_type): 160 return ContentType.objects.get_for_model(model_type) == self.content_type 161 162 def __str__(self): 163 return self.description() 164 165 class Meta(object): 166 unique_together = ('content_type', 'object_id') 167 168 verbose_name = _("betaling") 169 verbose_name_plural = _("betalinger") 170 171 172 class PaymentPrice(models.Model): 173 payment = models.ForeignKey(Payment) 174 price = models.IntegerField(_("pris")) 175 description = models.CharField(max_length=128, null=True, blank=True) 176 177 def __str__(self): 178 return self.description + " (" + str(self.price) + "kr)" 179 180 class Meta(object): 181 verbose_name = _("pris") 182 verbose_name_plural = _("priser") 183 184 185 class PaymentRelation(models.Model): 186 payment = models.ForeignKey(Payment) 187 payment_price = models.ForeignKey(PaymentPrice) 188 user = models.ForeignKey(User) 189 datetime = models.DateTimeField(auto_now=True) 190 refunded = models.BooleanField(default=False) 191 192 unique_id = models.CharField(max_length=128, null=True, blank=True) 193 stripe_id = models.CharField(max_length=128) 194 195 def save(self, *args, **kwargs): 196 if not self.unique_id: 197 self.unique_id = str(uuid.uuid4()) 198 super(PaymentRelation, self).save(*args, **kwargs) 199 200 def __str__(self): 201 return self.payment.description() + " - " + str(self.user) 202 203 class Meta(object): 204 verbose_name = _("betalingsrelasjon") 205 verbose_name_plural = _("betalingsrelasjoner") 206 207 208 class PaymentDelay(models.Model): 209 payment = models.ForeignKey(Payment) 210 user = models.ForeignKey(User) 211 valid_to = models.DateTimeField() 212 213 active = models.BooleanField(default=True) 214 215 def __str__(self): 216 return self.payment.description() + " - " + str(self.user) 217 218 class Meta(object): 219 unique_together = ('payment', 'user') 220 221 verbose_name = _('betalingsutsettelse') 222 verbose_name_plural = _('betalingsutsettelser') 223 224 225 class PaymentTransaction(models.Model): 226 user = models.ForeignKey(User) 227 amount = models.IntegerField(null=True, blank=True) 228 used_stripe = models.BooleanField(default=False) 229 230 datetime = models.DateTimeField(auto_now=True) 231 232 def __str__(self): 233 return str(self.user) + " - " + str(self.amount) + "(" + str(self.datetime) + ")" 234 235 def save(self, *args, **kwargs): 236 if not self.pk: 237 self.user.saldo = self.user.saldo + self.amount 238 239 if self.user.saldo < 0: 240 raise NotAcceptable("Insufficient funds") 241 242 self.user.save() 243 super(PaymentTransaction, self).save(*args, **kwargs) 244 245 class Meta: 246 ordering = ['-datetime'] 247 verbose_name = _('transaksjon') 248 verbose_name_plural = _('transaksjoner') 249 [end of apps/payment/models.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/apps/payment/models.py b/apps/payment/models.py --- a/apps/payment/models.py +++ b/apps/payment/models.py @@ -175,6 +175,8 @@ description = models.CharField(max_length=128, null=True, blank=True) def __str__(self): + if not self.description: + return str(self.price) + "kr" return self.description + " (" + str(self.price) + "kr)" class Meta(object):
{"golden_diff": "diff --git a/apps/payment/models.py b/apps/payment/models.py\n--- a/apps/payment/models.py\n+++ b/apps/payment/models.py\n@@ -175,6 +175,8 @@\n description = models.CharField(max_length=128, null=True, blank=True)\n \n def __str__(self):\n+ if not self.description:\n+ return str(self.price) + \"kr\"\n return self.description + \" (\" + str(self.price) + \"kr)\"\n \n class Meta(object):\n", "issue": "PaymentPrice description is not required by manager but required for __str__\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n\r\n\r\n## What is the expected behaviour?\r\n\r\nThat you can create a PaymentPrice by following the guidelines, which are a required price and an optional description.\r\n\r\n## What is the current behaviour?\r\n\r\nBy creating a PaymentPrice with a required price but no description the system fails with the error message\r\n\r\n```\r\nTypeError at /admin/payment/payment/add/\r\nunsupported operand type(s) for +: 'NoneType' and 'str'\r\n```\r\n\r\n\r\n## How do you reproduce this problem? \r\n\r\n* Create an event with an attached attendance\r\n* Navigate to /admin/payment/payment/add/\r\n* Add a payment to the attendance event you created, without a description (but with a price)\r\n* The system crashes with the error message `unsupported operand type(s) for +: 'NoneType' and 'str'`\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport uuid\n\nfrom django.conf import settings\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.db import models\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext as _\nfrom rest_framework.exceptions import NotAcceptable\n\nfrom apps.events.models import AttendanceEvent, Attendee\nfrom apps.marks.models import Suspension\n\nUser = settings.AUTH_USER_MODEL\n\n\nclass Payment(models.Model):\n\n TYPE_CHOICES = (\n (1, _('Umiddelbar')),\n (2, _('Frist')),\n (3, _('Utsettelse')),\n )\n\n # Make sure these exist in settings if they are to be used.\n STRIPE_KEY_CHOICES = (\n ('arrkom', 'arrkom'),\n ('prokom', 'prokom'),\n ('trikom', 'trikom'),\n ('fagkom', 'fagkom'),\n )\n\n content_type = models.ForeignKey(ContentType)\n object_id = models.PositiveIntegerField()\n content_object = GenericForeignKey()\n stripe_key = models.CharField(\n _('stripe key'),\n max_length=10,\n choices=STRIPE_KEY_CHOICES,\n default=\"arrkom\"\n )\n\n payment_type = models.SmallIntegerField(_('type'), choices=TYPE_CHOICES)\n\n # Optional fields depending on payment type\n deadline = models.DateTimeField(_(\"frist\"), blank=True, null=True)\n active = models.BooleanField(default=True)\n delay = models.SmallIntegerField(_('utsettelse'), blank=True, null=True, default=2)\n\n # For logging and history\n added_date = models.DateTimeField(_(\"opprettet dato\"), auto_now=True)\n changed_date = models.DateTimeField(auto_now=True, editable=False)\n last_changed_by = models.ForeignKey(User, editable=False, null=True) # Blank and null is temperarly\n\n def paid_users(self):\n return [payment_relation.user for payment_relation in self.paymentrelation_set.filter(refunded=False)]\n\n def payment_delays(self):\n return self.paymentdelay_set.filter(active=True)\n\n def payment_delay_users(self):\n return [payment_delay.user for payment_delay in self.payment_delays()]\n\n def create_payment_delay(self, user, deadline):\n payment_delays = self.paymentdelay_set.filter(payment=self, user=user)\n\n if payment_delays:\n for payment_delay in payment_delays:\n payment_delay.valid_to = deadline\n payment_delay.save()\n else:\n PaymentDelay.objects.create(payment=self, user=user, valid_to=deadline)\n\n def description(self):\n if self._is_type(AttendanceEvent):\n return self.content_object.event.title\n\n def get_receipt_description(self):\n receipt_description = \"\"\n description = [' '] * 30\n temp = self.description()[0:25]\n description[0:len(temp)+1] = list(temp)\n for c in description:\n receipt_description += c\n return receipt_description\n\n def responsible_mail(self):\n if self._is_type(AttendanceEvent):\n event_type = self.content_object.event.event_type\n if event_type == 1 or event_type == 4: # Sosialt & Utflukt\n return settings.EMAIL_ARRKOM\n elif event_type == 2: # Bedpres\n return settings.EMAIL_BEDKOM\n elif event_type == 3: # Kurs\n return settings.EMAIL_FAGKOM\n elif event_type == 5: # Ekskursjon\n return settings.EMAIL_EKSKOM\n else:\n return settings.DEFAULT_FROM_EMAIL\n else:\n return settings.DEFAULT_FROM_EMAIL\n\n def handle_payment(self, user):\n if self._is_type(AttendanceEvent):\n attendee = Attendee.objects.filter(event=self.content_object, user=user)\n\n # Delete payment delay objects for the user if there are any\n delays = PaymentDelay.objects.filter(payment=self, user=user)\n for delay in delays:\n delay.delete()\n\n # If the user is suspended because of a lack of payment the suspension is deactivated.\n suspensions = Suspension.objects.filter(payment_id=self.id, user=user)\n for suspension in suspensions:\n suspension.active = False\n suspension.save()\n\n if attendee:\n attendee[0].paid = True\n attendee[0].save()\n else:\n Attendee.objects.create(event=self.content_object, user=user, paid=True)\n\n def handle_refund(self, host, payment_relation):\n payment_relation.refunded = True\n payment_relation.save()\n\n if self._is_type(AttendanceEvent):\n self.content_object.notify_waiting_list(\n host=host, unattended_user=payment_relation.user)\n Attendee.objects.get(event=self.content_object,\n user=payment_relation.user).delete()\n\n def check_refund(self, payment_relation):\n if self._is_type(AttendanceEvent):\n attendance_event = self.content_object\n if attendance_event.unattend_deadline < timezone.now():\n return False, _(\"Fristen for og melde seg av har utg\u00e5tt\")\n if len(Attendee.objects.filter(event=attendance_event, user=payment_relation.user)) == 0:\n return False, _(\"Du er ikke p\u00e5meldt dette arrangementet.\")\n if attendance_event.event.event_start < timezone.now():\n return False, _(\"Dette arrangementet har allerede startet.\")\n\n return True, ''\n\n return False, 'Refund checks not implemented'\n\n def prices(self):\n return self.paymentprice_set.all()\n\n def price(self):\n # TODO implement group based pricing\n if self.paymentprice_set.count() > 0:\n return self.paymentprice_set.all()[0]\n return None\n\n def _is_type(self, model_type):\n return ContentType.objects.get_for_model(model_type) == self.content_type\n\n def __str__(self):\n return self.description()\n\n class Meta(object):\n unique_together = ('content_type', 'object_id')\n\n verbose_name = _(\"betaling\")\n verbose_name_plural = _(\"betalinger\")\n\n\nclass PaymentPrice(models.Model):\n payment = models.ForeignKey(Payment)\n price = models.IntegerField(_(\"pris\"))\n description = models.CharField(max_length=128, null=True, blank=True)\n\n def __str__(self):\n return self.description + \" (\" + str(self.price) + \"kr)\"\n\n class Meta(object):\n verbose_name = _(\"pris\")\n verbose_name_plural = _(\"priser\")\n\n\nclass PaymentRelation(models.Model):\n payment = models.ForeignKey(Payment)\n payment_price = models.ForeignKey(PaymentPrice)\n user = models.ForeignKey(User)\n datetime = models.DateTimeField(auto_now=True)\n refunded = models.BooleanField(default=False)\n\n unique_id = models.CharField(max_length=128, null=True, blank=True)\n stripe_id = models.CharField(max_length=128)\n\n def save(self, *args, **kwargs):\n if not self.unique_id:\n self.unique_id = str(uuid.uuid4())\n super(PaymentRelation, self).save(*args, **kwargs)\n\n def __str__(self):\n return self.payment.description() + \" - \" + str(self.user)\n\n class Meta(object):\n verbose_name = _(\"betalingsrelasjon\")\n verbose_name_plural = _(\"betalingsrelasjoner\")\n\n\nclass PaymentDelay(models.Model):\n payment = models.ForeignKey(Payment)\n user = models.ForeignKey(User)\n valid_to = models.DateTimeField()\n\n active = models.BooleanField(default=True)\n\n def __str__(self):\n return self.payment.description() + \" - \" + str(self.user)\n\n class Meta(object):\n unique_together = ('payment', 'user')\n\n verbose_name = _('betalingsutsettelse')\n verbose_name_plural = _('betalingsutsettelser')\n\n\nclass PaymentTransaction(models.Model):\n user = models.ForeignKey(User)\n amount = models.IntegerField(null=True, blank=True)\n used_stripe = models.BooleanField(default=False)\n\n datetime = models.DateTimeField(auto_now=True)\n\n def __str__(self):\n return str(self.user) + \" - \" + str(self.amount) + \"(\" + str(self.datetime) + \")\"\n\n def save(self, *args, **kwargs):\n if not self.pk:\n self.user.saldo = self.user.saldo + self.amount\n\n if self.user.saldo < 0:\n raise NotAcceptable(\"Insufficient funds\")\n\n self.user.save()\n super(PaymentTransaction, self).save(*args, **kwargs)\n\n class Meta:\n ordering = ['-datetime']\n verbose_name = _('transaksjon')\n verbose_name_plural = _('transaksjoner')\n", "path": "apps/payment/models.py"}]}
3,218
108
gh_patches_debug_12678
rasdani/github-patches
git_diff
pymodbus-dev__pymodbus-1384
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> pymodbus.server does not start with its default parameters <!-- Before opening a new issue, make sure you do the following: * check that your issue isn't already filed: https://github.com/pymodbus-dev/pymodbus/issues * check the discussions forum https://github.com/pymodbus-dev/pymodbus/discussions * prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus Before opening a new issue, make sure you do the following --> ### Versions * Python: 3.10.9 * OS: Fedora 36 * Pymodbus: 3.1.3 ### Description - starting "pymodbus.server run" fails, but starting it with "pymodbus.server run -s tcp" works ("-s tcp" should not be needed, it is the default) - might be related to #1118 and #1139 - it looks like that in https://github.com/pymodbus-dev/pymodbus/blob/dev/pymodbus/repl/server/main.py#L124 there is a mismatch between the expected type for that option ("str") and the used default parameter (the enum ModbusServerTypes.tcp (which evaluates to "ModbusServerTypes.tcp" in a string context)) ### Code and Logs ``` $ pymodbus.server run 20:21:10 ERROR logging:92 Invalid server ModbusServerTypes.tcp $ $ pymodbus.server run -s tcp __________ .______. _________ \______ \___.__. _____ ____ __| _/\_ |__ __ __ ______ / _____/ ______________ __ ___________ | ___< | |/ \ / _ \ / __ | | __ \| | \/ ___/ \_____ \_/ __ \_ __ \ \/ // __ \_ __ \\ | | \___ | Y Y ( <_> ) /_/ | | \_\ \ | /\___ \ / \ ___/| | \/\ /\ ___/| | \/ |____| / ____|__|_| /\____/\____ | |___ /____//____ > /_______ /\___ >__| \_/ \___ >__| \/ \/ \/ \/ \/ \/ \/ \/ SERVER > ``` </issue> <code> [start of pymodbus/repl/server/main.py] 1 """Repl server main.""" 2 import asyncio 3 import json 4 import sys 5 from enum import Enum 6 from pathlib import Path 7 from typing import List 8 9 import typer 10 11 from pymodbus import pymodbus_apply_logging_config 12 from pymodbus.framer.socket_framer import ModbusSocketFramer 13 from pymodbus.logging import Log 14 from pymodbus.repl.server.cli import run_repl 15 from pymodbus.server.reactive.default_config import DEFAULT_CONFIG 16 from pymodbus.server.reactive.main import ( 17 DEFAULT_FRAMER, 18 DEFUALT_HANDLERS, 19 ReactiveServer, 20 ) 21 22 23 CANCELLED_ERROR = asyncio.exceptions.CancelledError 24 CONTEXT_SETTING = {"allow_extra_args": True, "ignore_unknown_options": True} 25 26 27 # TBD class ModbusServerConfig: 28 29 30 class ModbusServerTypes(str, Enum): 31 """Server types.""" 32 33 # ["tcp", "serial", "tls", "udp"] 34 tcp = "tcp" # pylint: disable=invalid-name 35 serial = "serial" # pylint: disable=invalid-name 36 tls = "tls" # pylint: disable=invalid-name 37 udp = "udp" # pylint: disable=invalid-name 38 39 40 class ModbusFramerTypes(str, Enum): 41 """Framer types.""" 42 43 # ["socket", "rtu", "tls", "ascii", "binary"] 44 socket = "socket" # pylint: disable=invalid-name 45 rtu = "rtu" # pylint: disable=invalid-name 46 tls = "tls" # pylint: disable=invalid-name 47 ascii = "ascii" # pylint: disable=invalid-name 48 binary = "binary" # pylint: disable=invalid-name 49 50 51 def _completer(incomplete: str, valid_values: List[str]) -> List[str]: 52 """Complete value.""" 53 completion = [] 54 for name in valid_values: 55 if name.startswith(incomplete): 56 completion.append(name) 57 return completion 58 59 60 def framers(incomplete: str) -> List[str]: 61 """Return an autocompleted list of supported clouds.""" 62 _framers = ["socket", "rtu", "tls", "ascii", "binary"] 63 return _completer(incomplete, _framers) 64 65 66 def servers(incomplete: str) -> List[str]: 67 """Return an autocompleted list of supported clouds.""" 68 _servers = ["tcp", "serial", "tls", "udp"] 69 return _completer(incomplete, _servers) 70 71 72 def process_extra_args(extra_args: List[str], modbus_config: dict) -> dict: 73 """Process extra args passed to server.""" 74 options_stripped = [x.strip().replace("--", "") for x in extra_args[::2]] 75 extra_args_dict = dict(list(zip(options_stripped, extra_args[1::2]))) 76 for option, value in extra_args_dict.items(): 77 if option in modbus_config: 78 try: 79 modbus_config[option] = type(modbus_config[option])(value) 80 except ValueError as err: 81 Log.error( 82 "Error parsing extra arg {} with value '{}'. {}", option, value, err 83 ) 84 sys.exit(1) 85 return modbus_config 86 87 88 app = typer.Typer( 89 no_args_is_help=True, 90 context_settings=CONTEXT_SETTING, 91 help="Reactive Modbus server", 92 ) 93 94 95 @app.callback() 96 def server( 97 ctx: typer.Context, 98 host: str = typer.Option("localhost", "--host", help="Host address"), 99 web_port: int = typer.Option(8080, "--web-port", help="Web app port"), 100 broadcast_support: bool = typer.Option( 101 False, "-b", help="Support broadcast messages" 102 ), 103 repl: bool = typer.Option(True, help="Enable/Disable repl for server"), 104 verbose: bool = typer.Option( 105 False, help="Run with debug logs enabled for pymodbus" 106 ), 107 ): 108 """Run server code.""" 109 log_level = Log.DEBUG if verbose else Log.ERROR 110 pymodbus_apply_logging_config(log_level) 111 112 ctx.obj = { 113 "repl": repl, 114 "host": host, 115 "web_port": web_port, 116 "broadcast": broadcast_support, 117 } 118 119 120 @app.command("run", context_settings=CONTEXT_SETTING) 121 def run( 122 ctx: typer.Context, 123 modbus_server: str = typer.Option( 124 ModbusServerTypes.tcp, 125 "--modbus-server", 126 "-s", 127 case_sensitive=False, 128 autocompletion=servers, 129 help="Modbus Server", 130 ), 131 modbus_framer: str = typer.Option( 132 ModbusFramerTypes.socket, 133 "--framer", 134 "-f", 135 case_sensitive=False, 136 autocompletion=framers, 137 help="Modbus framer to use", 138 ), 139 modbus_port: int = typer.Option(5020, "--modbus-port", "-p", help="Modbus port"), 140 modbus_unit_id: List[int] = typer.Option( 141 None, "--unit-id", "-u", help="Supported Modbus unit id's" 142 ), 143 modbus_config_path: Path = typer.Option( 144 None, help="Path to additional modbus server config" 145 ), 146 randomize: int = typer.Option( 147 0, 148 "--random", 149 "-r", 150 help="Randomize every `r` reads. 0=never, 1=always,2=every-second-read" 151 ", and so on. Applicable IR and DI.", 152 ), 153 change_rate: int = typer.Option( 154 0, 155 "--change-rate", 156 "-c", 157 help="Rate in % registers to change. 0=none, 100=all, 12=12% of registers" 158 ", and so on. Applicable IR and DI.", 159 ), 160 ): 161 """Run Reactive Modbus server. 162 163 Exposing REST endpoint for response manipulation. 164 """ 165 repl = ctx.obj.pop("repl") 166 # TBD extra_args = ctx.args 167 web_app_config = ctx.obj 168 loop = asyncio.get_event_loop() 169 framer = DEFAULT_FRAMER.get(modbus_framer, ModbusSocketFramer) 170 if modbus_config_path: 171 with open(modbus_config_path, encoding="utf-8") as my_file: 172 modbus_config = json.load(my_file) 173 else: 174 modbus_config = DEFAULT_CONFIG 175 176 extra_args = ctx.args 177 data_block_settings = modbus_config.pop("data_block_settings", {}) 178 modbus_config = modbus_config.get(modbus_server, {}) 179 modbus_config = process_extra_args(extra_args, modbus_config) 180 if modbus_server != "serial": 181 handler = modbus_config.pop("handler", "ModbusConnectedRequestHandler") 182 else: 183 handler = modbus_config.pop("handler", "ModbusSingleRequestHandler") 184 handler = DEFUALT_HANDLERS.get(handler.strip()) 185 186 modbus_config["handler"] = handler 187 modbus_config["randomize"] = randomize 188 modbus_config["change_rate"] = change_rate 189 app = ReactiveServer.factory( 190 modbus_server, 191 framer, 192 modbus_port=modbus_port, 193 unit=modbus_unit_id, 194 loop=loop, 195 single=False, 196 data_block_settings=data_block_settings, 197 **web_app_config, 198 **modbus_config, 199 ) 200 try: 201 loop.run_until_complete(app.run_async(repl)) 202 if repl: 203 loop.run_until_complete(run_repl(app)) 204 loop.run_forever() 205 206 except CANCELLED_ERROR: 207 print("Done!!!!!") 208 209 210 if __name__ == "__main__": 211 app() 212 [end of pymodbus/repl/server/main.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/pymodbus/repl/server/main.py b/pymodbus/repl/server/main.py --- a/pymodbus/repl/server/main.py +++ b/pymodbus/repl/server/main.py @@ -121,7 +121,7 @@ def run( ctx: typer.Context, modbus_server: str = typer.Option( - ModbusServerTypes.tcp, + ModbusServerTypes.tcp.value, "--modbus-server", "-s", case_sensitive=False, @@ -129,7 +129,7 @@ help="Modbus Server", ), modbus_framer: str = typer.Option( - ModbusFramerTypes.socket, + ModbusFramerTypes.socket.value, "--framer", "-f", case_sensitive=False,
{"golden_diff": "diff --git a/pymodbus/repl/server/main.py b/pymodbus/repl/server/main.py\n--- a/pymodbus/repl/server/main.py\n+++ b/pymodbus/repl/server/main.py\n@@ -121,7 +121,7 @@\n def run(\n ctx: typer.Context,\n modbus_server: str = typer.Option(\n- ModbusServerTypes.tcp,\n+ ModbusServerTypes.tcp.value,\n \"--modbus-server\",\n \"-s\",\n case_sensitive=False,\n@@ -129,7 +129,7 @@\n help=\"Modbus Server\",\n ),\n modbus_framer: str = typer.Option(\n- ModbusFramerTypes.socket,\n+ ModbusFramerTypes.socket.value,\n \"--framer\",\n \"-f\",\n case_sensitive=False,\n", "issue": "pymodbus.server does not start with its default parameters\n<!--\r\n\r\nBefore opening a new issue, make sure you do the following:\r\n * check that your issue isn't already filed: https://github.com/pymodbus-dev/pymodbus/issues\r\n * check the discussions forum https://github.com/pymodbus-dev/pymodbus/discussions\r\n * prepare a short, runnable example that reproduce the issue with the latest development version of Pymodbus\r\n\r\n Before opening a new issue, make sure you do the following\r\n-->\r\n\r\n### Versions\r\n\r\n* Python: 3.10.9\r\n* OS: Fedora 36\r\n* Pymodbus: 3.1.3\r\n\r\n### Description\r\n\r\n- starting \"pymodbus.server run\" fails, but starting it with \"pymodbus.server run -s tcp\" works (\"-s tcp\" should not be needed, it is the default)\r\n- might be related to #1118 and #1139 \r\n- it looks like that in https://github.com/pymodbus-dev/pymodbus/blob/dev/pymodbus/repl/server/main.py#L124 there is a mismatch between the expected type for that option (\"str\") and the used default parameter (the enum ModbusServerTypes.tcp (which evaluates to \"ModbusServerTypes.tcp\" in a string context))\r\n\r\n\r\n\r\n\r\n### Code and Logs\r\n```\r\n$ pymodbus.server run\r\n20:21:10 ERROR logging:92 Invalid server ModbusServerTypes.tcp\r\n$\r\n$ pymodbus.server run -s tcp\r\n\r\n__________ .______. _________\r\n\\______ \\___.__. _____ ____ __| _/\\_ |__ __ __ ______ / _____/ ______________ __ ___________\r\n | ___< | |/ \\ / _ \\ / __ | | __ \\| | \\/ ___/ \\_____ \\_/ __ \\_ __ \\ \\/ // __ \\_ __ \\\\\r\n | | \\___ | Y Y ( <_> ) /_/ | | \\_\\ \\ | /\\___ \\ / \\ ___/| | \\/\\ /\\ ___/| | \\/\r\n |____| / ____|__|_| /\\____/\\____ | |___ /____//____ > /_______ /\\___ >__| \\_/ \\___ >__|\r\n \\/ \\/ \\/ \\/ \\/ \\/ \\/ \\/\r\n\r\n\r\nSERVER >\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Repl server main.\"\"\"\nimport asyncio\nimport json\nimport sys\nfrom enum import Enum\nfrom pathlib import Path\nfrom typing import List\n\nimport typer\n\nfrom pymodbus import pymodbus_apply_logging_config\nfrom pymodbus.framer.socket_framer import ModbusSocketFramer\nfrom pymodbus.logging import Log\nfrom pymodbus.repl.server.cli import run_repl\nfrom pymodbus.server.reactive.default_config import DEFAULT_CONFIG\nfrom pymodbus.server.reactive.main import (\n DEFAULT_FRAMER,\n DEFUALT_HANDLERS,\n ReactiveServer,\n)\n\n\nCANCELLED_ERROR = asyncio.exceptions.CancelledError\nCONTEXT_SETTING = {\"allow_extra_args\": True, \"ignore_unknown_options\": True}\n\n\n# TBD class ModbusServerConfig:\n\n\nclass ModbusServerTypes(str, Enum):\n \"\"\"Server types.\"\"\"\n\n # [\"tcp\", \"serial\", \"tls\", \"udp\"]\n tcp = \"tcp\" # pylint: disable=invalid-name\n serial = \"serial\" # pylint: disable=invalid-name\n tls = \"tls\" # pylint: disable=invalid-name\n udp = \"udp\" # pylint: disable=invalid-name\n\n\nclass ModbusFramerTypes(str, Enum):\n \"\"\"Framer types.\"\"\"\n\n # [\"socket\", \"rtu\", \"tls\", \"ascii\", \"binary\"]\n socket = \"socket\" # pylint: disable=invalid-name\n rtu = \"rtu\" # pylint: disable=invalid-name\n tls = \"tls\" # pylint: disable=invalid-name\n ascii = \"ascii\" # pylint: disable=invalid-name\n binary = \"binary\" # pylint: disable=invalid-name\n\n\ndef _completer(incomplete: str, valid_values: List[str]) -> List[str]:\n \"\"\"Complete value.\"\"\"\n completion = []\n for name in valid_values:\n if name.startswith(incomplete):\n completion.append(name)\n return completion\n\n\ndef framers(incomplete: str) -> List[str]:\n \"\"\"Return an autocompleted list of supported clouds.\"\"\"\n _framers = [\"socket\", \"rtu\", \"tls\", \"ascii\", \"binary\"]\n return _completer(incomplete, _framers)\n\n\ndef servers(incomplete: str) -> List[str]:\n \"\"\"Return an autocompleted list of supported clouds.\"\"\"\n _servers = [\"tcp\", \"serial\", \"tls\", \"udp\"]\n return _completer(incomplete, _servers)\n\n\ndef process_extra_args(extra_args: List[str], modbus_config: dict) -> dict:\n \"\"\"Process extra args passed to server.\"\"\"\n options_stripped = [x.strip().replace(\"--\", \"\") for x in extra_args[::2]]\n extra_args_dict = dict(list(zip(options_stripped, extra_args[1::2])))\n for option, value in extra_args_dict.items():\n if option in modbus_config:\n try:\n modbus_config[option] = type(modbus_config[option])(value)\n except ValueError as err:\n Log.error(\n \"Error parsing extra arg {} with value '{}'. {}\", option, value, err\n )\n sys.exit(1)\n return modbus_config\n\n\napp = typer.Typer(\n no_args_is_help=True,\n context_settings=CONTEXT_SETTING,\n help=\"Reactive Modbus server\",\n)\n\n\[email protected]()\ndef server(\n ctx: typer.Context,\n host: str = typer.Option(\"localhost\", \"--host\", help=\"Host address\"),\n web_port: int = typer.Option(8080, \"--web-port\", help=\"Web app port\"),\n broadcast_support: bool = typer.Option(\n False, \"-b\", help=\"Support broadcast messages\"\n ),\n repl: bool = typer.Option(True, help=\"Enable/Disable repl for server\"),\n verbose: bool = typer.Option(\n False, help=\"Run with debug logs enabled for pymodbus\"\n ),\n):\n \"\"\"Run server code.\"\"\"\n log_level = Log.DEBUG if verbose else Log.ERROR\n pymodbus_apply_logging_config(log_level)\n\n ctx.obj = {\n \"repl\": repl,\n \"host\": host,\n \"web_port\": web_port,\n \"broadcast\": broadcast_support,\n }\n\n\[email protected](\"run\", context_settings=CONTEXT_SETTING)\ndef run(\n ctx: typer.Context,\n modbus_server: str = typer.Option(\n ModbusServerTypes.tcp,\n \"--modbus-server\",\n \"-s\",\n case_sensitive=False,\n autocompletion=servers,\n help=\"Modbus Server\",\n ),\n modbus_framer: str = typer.Option(\n ModbusFramerTypes.socket,\n \"--framer\",\n \"-f\",\n case_sensitive=False,\n autocompletion=framers,\n help=\"Modbus framer to use\",\n ),\n modbus_port: int = typer.Option(5020, \"--modbus-port\", \"-p\", help=\"Modbus port\"),\n modbus_unit_id: List[int] = typer.Option(\n None, \"--unit-id\", \"-u\", help=\"Supported Modbus unit id's\"\n ),\n modbus_config_path: Path = typer.Option(\n None, help=\"Path to additional modbus server config\"\n ),\n randomize: int = typer.Option(\n 0,\n \"--random\",\n \"-r\",\n help=\"Randomize every `r` reads. 0=never, 1=always,2=every-second-read\"\n \", and so on. Applicable IR and DI.\",\n ),\n change_rate: int = typer.Option(\n 0,\n \"--change-rate\",\n \"-c\",\n help=\"Rate in % registers to change. 0=none, 100=all, 12=12% of registers\"\n \", and so on. Applicable IR and DI.\",\n ),\n):\n \"\"\"Run Reactive Modbus server.\n\n Exposing REST endpoint for response manipulation.\n \"\"\"\n repl = ctx.obj.pop(\"repl\")\n # TBD extra_args = ctx.args\n web_app_config = ctx.obj\n loop = asyncio.get_event_loop()\n framer = DEFAULT_FRAMER.get(modbus_framer, ModbusSocketFramer)\n if modbus_config_path:\n with open(modbus_config_path, encoding=\"utf-8\") as my_file:\n modbus_config = json.load(my_file)\n else:\n modbus_config = DEFAULT_CONFIG\n\n extra_args = ctx.args\n data_block_settings = modbus_config.pop(\"data_block_settings\", {})\n modbus_config = modbus_config.get(modbus_server, {})\n modbus_config = process_extra_args(extra_args, modbus_config)\n if modbus_server != \"serial\":\n handler = modbus_config.pop(\"handler\", \"ModbusConnectedRequestHandler\")\n else:\n handler = modbus_config.pop(\"handler\", \"ModbusSingleRequestHandler\")\n handler = DEFUALT_HANDLERS.get(handler.strip())\n\n modbus_config[\"handler\"] = handler\n modbus_config[\"randomize\"] = randomize\n modbus_config[\"change_rate\"] = change_rate\n app = ReactiveServer.factory(\n modbus_server,\n framer,\n modbus_port=modbus_port,\n unit=modbus_unit_id,\n loop=loop,\n single=False,\n data_block_settings=data_block_settings,\n **web_app_config,\n **modbus_config,\n )\n try:\n loop.run_until_complete(app.run_async(repl))\n if repl:\n loop.run_until_complete(run_repl(app))\n loop.run_forever()\n\n except CANCELLED_ERROR:\n print(\"Done!!!!!\")\n\n\nif __name__ == \"__main__\":\n app()\n", "path": "pymodbus/repl/server/main.py"}]}
3,287
183
gh_patches_debug_7932
rasdani/github-patches
git_diff
chainer__chainer-240
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Add type check to Parameter function Related to #123 </issue> <code> [start of chainer/functions/parameter.py] 1 import numpy 2 3 from chainer import function 4 5 6 class Parameter(function.Function): 7 8 """Function that outputs its weight array. 9 10 This is a parameterized function that takes no input and returns a variable 11 holding a shallow copy of the parameter array. 12 13 Args: 14 array: Initial parameter array. 15 16 """ 17 parameter_names = 'W', 18 gradient_names = 'gW', 19 20 def __init__(self, array): 21 self.W = array 22 self.gW = numpy.empty_like(array) 23 24 def forward(self, x): 25 return self.W, 26 27 def backward(self, x, gy): 28 self.gW += gy[0] 29 return () 30 [end of chainer/functions/parameter.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/chainer/functions/parameter.py b/chainer/functions/parameter.py --- a/chainer/functions/parameter.py +++ b/chainer/functions/parameter.py @@ -1,6 +1,7 @@ import numpy from chainer import function +from chainer.utils import type_check class Parameter(function.Function): @@ -21,6 +22,9 @@ self.W = array self.gW = numpy.empty_like(array) + def check_type_forward(self, in_types): + type_check.expect(in_types.size() == 0) + def forward(self, x): return self.W,
{"golden_diff": "diff --git a/chainer/functions/parameter.py b/chainer/functions/parameter.py\n--- a/chainer/functions/parameter.py\n+++ b/chainer/functions/parameter.py\n@@ -1,6 +1,7 @@\n import numpy\n \n from chainer import function\n+from chainer.utils import type_check\n \n \n class Parameter(function.Function):\n@@ -21,6 +22,9 @@\n self.W = array\n self.gW = numpy.empty_like(array)\n \n+ def check_type_forward(self, in_types):\n+ type_check.expect(in_types.size() == 0)\n+\n def forward(self, x):\n return self.W,\n", "issue": "Add type check to Parameter function\nRelated to #123 \n\n", "before_files": [{"content": "import numpy\n\nfrom chainer import function\n\n\nclass Parameter(function.Function):\n\n \"\"\"Function that outputs its weight array.\n\n This is a parameterized function that takes no input and returns a variable\n holding a shallow copy of the parameter array.\n\n Args:\n array: Initial parameter array.\n\n \"\"\"\n parameter_names = 'W',\n gradient_names = 'gW',\n\n def __init__(self, array):\n self.W = array\n self.gW = numpy.empty_like(array)\n\n def forward(self, x):\n return self.W,\n\n def backward(self, x, gy):\n self.gW += gy[0]\n return ()\n", "path": "chainer/functions/parameter.py"}]}
746
136
gh_patches_debug_22175
rasdani/github-patches
git_diff
cloud-custodian__cloud-custodian-663
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> handle throttles on log group output </issue> <code> [start of c7n/log.py] 1 # Copyright 2016 Capital One Services, LLC 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 """ 15 Python Standard Logging integration with CloudWatch Logs 16 17 Double Buffered with background thread delivery. 18 19 We do an initial buffering on the log handler directly, to avoid 20 some of the overhead of pushing to the queue (albeit dubious as 21 std logging does default lock acquisition around handler emit). 22 also uses a single thread for all outbound. Background thread 23 uses a separate session. 24 """ 25 26 import boto3 27 from botocore.exceptions import ClientError 28 29 import itertools 30 import logging 31 from operator import itemgetter 32 import threading 33 import time 34 35 try: 36 import Queue 37 except ImportError: 38 import queue as Queue 39 40 41 FLUSH_MARKER = object() 42 SHUTDOWN_MARKER = object() 43 44 EMPTY = Queue.Empty 45 46 47 class Error(object): 48 49 AlreadyAccepted = "DataAlreadyAcceptedException" 50 InvalidToken = "InvalidSequenceTokenException" 51 ResourceExists = "ResourceAlreadyExistsException" 52 53 @staticmethod 54 def code(e): 55 return e.response.get('Error', {}).get('Code') 56 57 58 class CloudWatchLogHandler(logging.Handler): 59 """Python Log Handler to Send to Cloud Watch Logs 60 61 http://goo.gl/eZGAEK 62 """ 63 64 batch_size = 20 65 batch_interval = 40 66 batch_min_buffer = 10 67 68 def __init__(self, log_group=__name__, log_stream=None, 69 session_factory=None): 70 super(CloudWatchLogHandler, self).__init__() 71 self.log_group = log_group 72 self.log_stream = log_stream 73 self.session_factory = session_factory or boto3.Session 74 self.transport = None 75 self.queue = Queue.Queue() 76 self.threads = [] 77 # do some basic buffering before sending to transport to minimize 78 # queue/threading overhead 79 self.buf = [] 80 self.last_seen = time.time() 81 # Logging module internally is tracking all handlers, for final 82 # cleanup atexit, custodian is a bit more explicitly scoping shutdown to 83 # each policy, so use a sentinel value to avoid deadlocks. 84 self.shutdown = False 85 try: 86 self.session_factory().client( 87 'logs').create_log_group(logGroupName=self.log_group) 88 except ClientError as e: 89 if Error.code(e) != Error.ResourceExists: 90 raise 91 92 # Begin logging.Handler API 93 def emit(self, message): 94 """Send logs""" 95 # We're sending messages asynchronously, bubble to caller when 96 # we've detected an error on the message. This isn't great, 97 # but options once we've gone async without a deferred/promise 98 # aren't great. 99 if self.transport and self.transport.error: 100 raise self.transport.error 101 102 # Sanity safety, people do like to recurse by attaching to 103 # root log :-( 104 if message.name.startswith('boto'): 105 return 106 107 msg = self.format_message(message) 108 if not self.transport: 109 self.start_transports() 110 self.buf.append(msg) 111 self.flush_buffers( 112 (message.created - self.last_seen >= self.batch_interval)) 113 114 self.last_seen = message.created 115 116 def flush(self): 117 """Ensure all logging output has been flushed.""" 118 if self.shutdown: 119 return 120 self.flush_buffers(force=True) 121 self.queue.put(FLUSH_MARKER) 122 self.queue.join() 123 124 def close(self): 125 if self.shutdown: 126 return 127 self.shutdown = True 128 self.queue.put(SHUTDOWN_MARKER) 129 self.queue.join() 130 for t in self.threads: 131 t.join() 132 self.threads = [] 133 134 # End logging.Handler API 135 136 def format_message(self, msg): 137 """format message.""" 138 return {'timestamp': int(msg.created * 1000), 139 'message': self.format(msg), 140 'stream': self.log_stream or msg.name, 141 'group': self.log_group} 142 143 def start_transports(self): 144 """start thread transports.""" 145 self.transport = Transport( 146 self.queue, self.batch_size, self.batch_interval, 147 self.session_factory) 148 thread = threading.Thread(target=self.transport.loop) 149 self.threads.append(thread) 150 thread.daemon = True 151 thread.start() 152 153 def flush_buffers(self, force=False): 154 if not force and len(self.buf) < self.batch_min_buffer: 155 return 156 self.queue.put(self.buf) 157 self.buf = [] 158 159 160 class Transport(object): 161 162 def __init__(self, queue, batch_size, batch_interval, session_factory): 163 self.queue = queue 164 self.batch_size = batch_size 165 self.batch_interval = batch_interval 166 self.client = session_factory().client('logs') 167 self.sequences = {} 168 self.buffers = {} 169 self.error = None 170 171 def create_stream(self, group, stream): 172 try: 173 self.client.create_log_stream( 174 logGroupName=group, logStreamName=stream) 175 except ClientError as e: 176 if Error.code(e) != Error.ResourceExists: 177 self.error = e 178 return False 179 return True 180 181 def send(self): 182 for k, messages in self.buffers.items(): 183 self.send_group(k, messages) 184 self.buffers = {} 185 186 def send_group(self, k, messages): 187 group, stream = k.split('=', 1) 188 if stream not in self.sequences: 189 if not self.create_stream(group, stream): 190 return 191 self.sequences[stream] = None 192 params = dict( 193 logGroupName=group, logStreamName=stream, 194 logEvents=sorted( 195 messages, key=itemgetter('timestamp'), reverse=False)) 196 if self.sequences[stream]: 197 params['sequenceToken'] = self.sequences[stream] 198 try: 199 response = self.client.put_log_events(**params) 200 except ClientError as e: 201 if Error.code(e) in (Error.AlreadyAccepted, Error.InvalidToken): 202 self.sequences[stream] = e.response['Error']['Message'].rsplit( 203 " ", 1)[-1] 204 return self.send_group(k, messages) 205 self.error = e 206 return 207 self.sequences[stream] = response['nextSequenceToken'] 208 209 def loop(self): 210 def keyed(datum): 211 return "%s=%s" % ( 212 datum.pop('group'), datum.pop('stream')) 213 214 while True: 215 try: 216 datum = self.queue.get(block=True, timeout=self.batch_interval) 217 except EMPTY: 218 if Queue is None: 219 return 220 datum = None 221 if datum is None: 222 # Timeout reached, flush 223 self.send() 224 continue 225 elif datum == FLUSH_MARKER: 226 self.send() 227 elif datum == SHUTDOWN_MARKER: 228 self.queue.task_done() 229 return 230 else: 231 for k, group in itertools.groupby(datum, keyed): 232 self.buffers.setdefault(k, []).extend(group) 233 self.queue.task_done() 234 [end of c7n/log.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/c7n/log.py b/c7n/log.py --- a/c7n/log.py +++ b/c7n/log.py @@ -37,6 +37,7 @@ except ImportError: import queue as Queue +from c7n.utils import get_retry FLUSH_MARKER = object() SHUTDOWN_MARKER = object() @@ -82,9 +83,15 @@ # cleanup atexit, custodian is a bit more explicitly scoping shutdown to # each policy, so use a sentinel value to avoid deadlocks. self.shutdown = False + retry = get_retry(('ThrottlingException',)) try: - self.session_factory().client( - 'logs').create_log_group(logGroupName=self.log_group) + client = self.session_factory().client('logs') + logs = retry( + client.describe_log_groups, + logGroupNamePrefix=self.log_group)['logGroups'] + if [l for l in logs if l['logGroupName'] == self.log_group]: + retry(client.create_log_group, + logGroupName=self.log_group) except ClientError as e: if Error.code(e) != Error.ResourceExists: raise
{"golden_diff": "diff --git a/c7n/log.py b/c7n/log.py\n--- a/c7n/log.py\n+++ b/c7n/log.py\n@@ -37,6 +37,7 @@\n except ImportError:\n import queue as Queue\n \n+from c7n.utils import get_retry\n \n FLUSH_MARKER = object()\n SHUTDOWN_MARKER = object()\n@@ -82,9 +83,15 @@\n # cleanup atexit, custodian is a bit more explicitly scoping shutdown to\n # each policy, so use a sentinel value to avoid deadlocks.\n self.shutdown = False\n+ retry = get_retry(('ThrottlingException',))\n try:\n- self.session_factory().client(\n- 'logs').create_log_group(logGroupName=self.log_group)\n+ client = self.session_factory().client('logs')\n+ logs = retry(\n+ client.describe_log_groups,\n+ logGroupNamePrefix=self.log_group)['logGroups']\n+ if [l for l in logs if l['logGroupName'] == self.log_group]:\n+ retry(client.create_log_group,\n+ logGroupName=self.log_group)\n except ClientError as e:\n if Error.code(e) != Error.ResourceExists:\n raise\n", "issue": "handle throttles on log group output\n\n", "before_files": [{"content": "# Copyright 2016 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nPython Standard Logging integration with CloudWatch Logs\n\nDouble Buffered with background thread delivery.\n\nWe do an initial buffering on the log handler directly, to avoid\nsome of the overhead of pushing to the queue (albeit dubious as\nstd logging does default lock acquisition around handler emit).\nalso uses a single thread for all outbound. Background thread\nuses a separate session.\n\"\"\"\n\nimport boto3\nfrom botocore.exceptions import ClientError\n\nimport itertools\nimport logging\nfrom operator import itemgetter\nimport threading\nimport time\n\ntry:\n import Queue\nexcept ImportError:\n import queue as Queue\n\n\nFLUSH_MARKER = object()\nSHUTDOWN_MARKER = object()\n\nEMPTY = Queue.Empty\n\n\nclass Error(object):\n\n AlreadyAccepted = \"DataAlreadyAcceptedException\"\n InvalidToken = \"InvalidSequenceTokenException\"\n ResourceExists = \"ResourceAlreadyExistsException\"\n\n @staticmethod\n def code(e):\n return e.response.get('Error', {}).get('Code')\n\n\nclass CloudWatchLogHandler(logging.Handler):\n \"\"\"Python Log Handler to Send to Cloud Watch Logs\n\n http://goo.gl/eZGAEK\n \"\"\"\n\n batch_size = 20\n batch_interval = 40\n batch_min_buffer = 10\n\n def __init__(self, log_group=__name__, log_stream=None,\n session_factory=None):\n super(CloudWatchLogHandler, self).__init__()\n self.log_group = log_group\n self.log_stream = log_stream\n self.session_factory = session_factory or boto3.Session\n self.transport = None\n self.queue = Queue.Queue()\n self.threads = []\n # do some basic buffering before sending to transport to minimize\n # queue/threading overhead\n self.buf = []\n self.last_seen = time.time()\n # Logging module internally is tracking all handlers, for final\n # cleanup atexit, custodian is a bit more explicitly scoping shutdown to\n # each policy, so use a sentinel value to avoid deadlocks.\n self.shutdown = False\n try:\n self.session_factory().client(\n 'logs').create_log_group(logGroupName=self.log_group)\n except ClientError as e:\n if Error.code(e) != Error.ResourceExists:\n raise\n\n # Begin logging.Handler API\n def emit(self, message):\n \"\"\"Send logs\"\"\"\n # We're sending messages asynchronously, bubble to caller when\n # we've detected an error on the message. This isn't great,\n # but options once we've gone async without a deferred/promise\n # aren't great.\n if self.transport and self.transport.error:\n raise self.transport.error\n\n # Sanity safety, people do like to recurse by attaching to\n # root log :-(\n if message.name.startswith('boto'):\n return\n\n msg = self.format_message(message)\n if not self.transport:\n self.start_transports()\n self.buf.append(msg)\n self.flush_buffers(\n (message.created - self.last_seen >= self.batch_interval))\n\n self.last_seen = message.created\n\n def flush(self):\n \"\"\"Ensure all logging output has been flushed.\"\"\"\n if self.shutdown:\n return\n self.flush_buffers(force=True)\n self.queue.put(FLUSH_MARKER)\n self.queue.join()\n\n def close(self):\n if self.shutdown:\n return\n self.shutdown = True\n self.queue.put(SHUTDOWN_MARKER)\n self.queue.join()\n for t in self.threads:\n t.join()\n self.threads = []\n\n # End logging.Handler API\n\n def format_message(self, msg):\n \"\"\"format message.\"\"\"\n return {'timestamp': int(msg.created * 1000),\n 'message': self.format(msg),\n 'stream': self.log_stream or msg.name,\n 'group': self.log_group}\n\n def start_transports(self):\n \"\"\"start thread transports.\"\"\"\n self.transport = Transport(\n self.queue, self.batch_size, self.batch_interval,\n self.session_factory)\n thread = threading.Thread(target=self.transport.loop)\n self.threads.append(thread)\n thread.daemon = True\n thread.start()\n\n def flush_buffers(self, force=False):\n if not force and len(self.buf) < self.batch_min_buffer:\n return\n self.queue.put(self.buf)\n self.buf = []\n\n\nclass Transport(object):\n\n def __init__(self, queue, batch_size, batch_interval, session_factory):\n self.queue = queue\n self.batch_size = batch_size\n self.batch_interval = batch_interval\n self.client = session_factory().client('logs')\n self.sequences = {}\n self.buffers = {}\n self.error = None\n\n def create_stream(self, group, stream):\n try:\n self.client.create_log_stream(\n logGroupName=group, logStreamName=stream)\n except ClientError as e:\n if Error.code(e) != Error.ResourceExists:\n self.error = e\n return False\n return True\n\n def send(self):\n for k, messages in self.buffers.items():\n self.send_group(k, messages)\n self.buffers = {}\n\n def send_group(self, k, messages):\n group, stream = k.split('=', 1)\n if stream not in self.sequences:\n if not self.create_stream(group, stream):\n return\n self.sequences[stream] = None\n params = dict(\n logGroupName=group, logStreamName=stream,\n logEvents=sorted(\n messages, key=itemgetter('timestamp'), reverse=False))\n if self.sequences[stream]:\n params['sequenceToken'] = self.sequences[stream]\n try:\n response = self.client.put_log_events(**params)\n except ClientError as e:\n if Error.code(e) in (Error.AlreadyAccepted, Error.InvalidToken):\n self.sequences[stream] = e.response['Error']['Message'].rsplit(\n \" \", 1)[-1]\n return self.send_group(k, messages)\n self.error = e\n return\n self.sequences[stream] = response['nextSequenceToken']\n\n def loop(self):\n def keyed(datum):\n return \"%s=%s\" % (\n datum.pop('group'), datum.pop('stream'))\n\n while True:\n try:\n datum = self.queue.get(block=True, timeout=self.batch_interval)\n except EMPTY:\n if Queue is None:\n return\n datum = None\n if datum is None:\n # Timeout reached, flush\n self.send()\n continue\n elif datum == FLUSH_MARKER:\n self.send()\n elif datum == SHUTDOWN_MARKER:\n self.queue.task_done()\n return\n else:\n for k, group in itertools.groupby(datum, keyed):\n self.buffers.setdefault(k, []).extend(group)\n self.queue.task_done()\n", "path": "c7n/log.py"}]}
2,717
259
gh_patches_debug_17546
rasdani/github-patches
git_diff
mesonbuild__meson-5531
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> apple clang does not have visibility "protected" This results in the attribute check function returning false on apple clang, but symbol visibility otherwise working. so something like: ```meson args = [] if cc.has_function_attribute('visibility') args += '-DEXPORT=__attribute__((visibility("default"))) endif library( 'foo.c', c_args : args, gnu_symbol_visibility : 'hidden', ) ``` will fail with missing symbols because args is `[]` </issue> <code> [start of mesonbuild/compilers/c_function_attributes.py] 1 # These functions are based on the following code: 2 # https://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_gcc_func_attribute.m4, 3 # which is licensed under the following terms: 4 # 5 # Copyright (c) 2013 Gabriele Svelto <[email protected]> 6 # 7 # Copying and distribution of this file, with or without modification, are 8 # permitted in any medium without royalty provided the copyright notice 9 # and this notice are preserved. This file is offered as-is, without any 10 # warranty. 11 # 12 13 C_FUNC_ATTRIBUTES = { 14 'alias': ''' 15 int foo(void) { return 0; } 16 int bar(void) __attribute__((alias("foo")));''', 17 'aligned': 18 'int foo(void) __attribute__((aligned(32)));', 19 'alloc_size': 20 'void *foo(int a) __attribute__((alloc_size(1)));', 21 'always_inline': 22 'inline __attribute__((always_inline)) int foo(void) { return 0; }', 23 'artificial': 24 'inline __attribute__((artificial)) int foo(void) { return 0; }', 25 'cold': 26 'int foo(void) __attribute__((cold));', 27 'const': 28 'int foo(void) __attribute__((const));', 29 'constructor': 30 'int foo(void) __attribute__((constructor));', 31 'constructor_priority': 32 'int foo( void ) __attribute__((__constructor__(65535/2)));', 33 'deprecated': 34 'int foo(void) __attribute__((deprecated("")));', 35 'destructor': 36 'int foo(void) __attribute__((destructor));', 37 'dllexport': 38 '__declspec(dllexport) int foo(void) { return 0; }', 39 'dllimport': 40 '__declspec(dllimport) int foo(void);', 41 'error': 42 'int foo(void) __attribute__((error("")));', 43 'externally_visible': 44 'int foo(void) __attribute__((externally_visible));', 45 'fallthrough': ''' 46 int foo( void ) { 47 switch (0) { 48 case 1: __attribute__((fallthrough)); 49 case 2: break; 50 } 51 return 0; 52 };''', 53 'flatten': 54 'int foo(void) __attribute__((flatten));', 55 'format': 56 'int foo(const char * p, ...) __attribute__((format(printf, 1, 2)));', 57 'format_arg': 58 'char * foo(const char * p) __attribute__((format_arg(1)));', 59 'gnu_inline': 60 'inline __attribute__((gnu_inline)) int foo(void) { return 0; }', 61 'hot': 62 'int foo(void) __attribute__((hot));', 63 'ifunc': 64 ('int my_foo(void) { return 0; }' 65 'static int (*resolve_foo(void))(void) { return my_foo; }' 66 'int foo(void) __attribute__((ifunc("resolve_foo")));'), 67 'leaf': 68 '__attribute__((leaf)) int foo(void) { return 0; }', 69 'malloc': 70 'int *foo(void) __attribute__((malloc));', 71 'noclone': 72 'int foo(void) __attribute__((noclone));', 73 'noinline': 74 '__attribute__((noinline)) int foo(void) { return 0; }', 75 'nonnull': 76 'int foo(char * p) __attribute__((nonnull(1)));', 77 'noreturn': 78 'int foo(void) __attribute__((noreturn));', 79 'nothrow': 80 'int foo(void) __attribute__((nothrow));', 81 'optimize': 82 '__attribute__((optimize(3))) int foo(void) { return 0; }', 83 'packed': 84 'struct __attribute__((packed)) foo { int bar; };', 85 'pure': 86 'int foo(void) __attribute__((pure));', 87 'returns_nonnull': 88 'int *foo(void) __attribute__((returns_nonnull));', 89 'unused': 90 'int foo(void) __attribute__((unused));', 91 'used': 92 'int foo(void) __attribute__((used));', 93 'visibility': ''' 94 int foo_def(void) __attribute__((visibility("default"))); 95 int foo_hid(void) __attribute__((visibility("hidden"))); 96 int foo_int(void) __attribute__((visibility("internal"))); 97 int foo_pro(void) __attribute__((visibility("protected")));''', 98 'warning': 99 'int foo(void) __attribute__((warning("")));', 100 'warn_unused_result': 101 'int foo(void) __attribute__((warn_unused_result));', 102 'weak': 103 'int foo(void) __attribute__((weak));', 104 'weakref': ''' 105 static int foo(void) { return 0; } 106 static int var(void) __attribute__((weakref("foo")));''', 107 } 108 109 CXX_FUNC_ATTRIBUTES = { 110 # Alias must be applied to the mangled name in C++ 111 'alias': 112 ('extern "C" {' 113 'int foo(void) { return 0; }' 114 '}' 115 'int bar(void) __attribute__((alias("foo")));' 116 ), 117 'ifunc': 118 ('extern "C" {' 119 'int my_foo(void) { return 0; }' 120 'static int (*resolve_foo(void))(void) { return my_foo; }' 121 '}' 122 'int foo(void) __attribute__((ifunc("resolve_foo")));'), 123 } 124 [end of mesonbuild/compilers/c_function_attributes.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/mesonbuild/compilers/c_function_attributes.py b/mesonbuild/compilers/c_function_attributes.py --- a/mesonbuild/compilers/c_function_attributes.py +++ b/mesonbuild/compilers/c_function_attributes.py @@ -93,8 +93,15 @@ 'visibility': ''' int foo_def(void) __attribute__((visibility("default"))); int foo_hid(void) __attribute__((visibility("hidden"))); - int foo_int(void) __attribute__((visibility("internal"))); - int foo_pro(void) __attribute__((visibility("protected")));''', + int foo_int(void) __attribute__((visibility("internal")));''', + 'visibility:default': + 'int foo(void) __attribute__((visibility("default")));', + 'visibility:hidden': + 'int foo(void) __attribute__((visibility("hidden")));', + 'visibility:internal': + 'int foo(void) __attribute__((visibility("internal")));', + 'visibility:protected': + 'int foo(void) __attribute__((visibility("protected")));', 'warning': 'int foo(void) __attribute__((warning("")));', 'warn_unused_result':
{"golden_diff": "diff --git a/mesonbuild/compilers/c_function_attributes.py b/mesonbuild/compilers/c_function_attributes.py\n--- a/mesonbuild/compilers/c_function_attributes.py\n+++ b/mesonbuild/compilers/c_function_attributes.py\n@@ -93,8 +93,15 @@\n 'visibility': '''\n int foo_def(void) __attribute__((visibility(\"default\")));\n int foo_hid(void) __attribute__((visibility(\"hidden\")));\n- int foo_int(void) __attribute__((visibility(\"internal\")));\n- int foo_pro(void) __attribute__((visibility(\"protected\")));''',\n+ int foo_int(void) __attribute__((visibility(\"internal\")));''',\n+ 'visibility:default':\n+ 'int foo(void) __attribute__((visibility(\"default\")));',\n+ 'visibility:hidden':\n+ 'int foo(void) __attribute__((visibility(\"hidden\")));',\n+ 'visibility:internal':\n+ 'int foo(void) __attribute__((visibility(\"internal\")));',\n+ 'visibility:protected':\n+ 'int foo(void) __attribute__((visibility(\"protected\")));',\n 'warning':\n 'int foo(void) __attribute__((warning(\"\")));',\n 'warn_unused_result':\n", "issue": "apple clang does not have visibility \"protected\"\nThis results in the attribute check function returning false on apple clang, but symbol visibility otherwise working.\r\n\r\nso something like:\r\n```meson\r\nargs = []\r\nif cc.has_function_attribute('visibility')\r\n args += '-DEXPORT=__attribute__((visibility(\"default\")))\r\nendif\r\n\r\nlibrary(\r\n 'foo.c',\r\n c_args : args,\r\n gnu_symbol_visibility : 'hidden',\r\n)\r\n```\r\n\r\nwill fail with missing symbols because args is `[]`\n", "before_files": [{"content": "# These functions are based on the following code:\n# https://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_gcc_func_attribute.m4,\n# which is licensed under the following terms:\n#\n# Copyright (c) 2013 Gabriele Svelto <[email protected]>\n#\n# Copying and distribution of this file, with or without modification, are\n# permitted in any medium without royalty provided the copyright notice\n# and this notice are preserved. This file is offered as-is, without any\n# warranty.\n#\n\nC_FUNC_ATTRIBUTES = {\n 'alias': '''\n int foo(void) { return 0; }\n int bar(void) __attribute__((alias(\"foo\")));''',\n 'aligned':\n 'int foo(void) __attribute__((aligned(32)));',\n 'alloc_size':\n 'void *foo(int a) __attribute__((alloc_size(1)));',\n 'always_inline':\n 'inline __attribute__((always_inline)) int foo(void) { return 0; }',\n 'artificial':\n 'inline __attribute__((artificial)) int foo(void) { return 0; }',\n 'cold':\n 'int foo(void) __attribute__((cold));',\n 'const':\n 'int foo(void) __attribute__((const));',\n 'constructor':\n 'int foo(void) __attribute__((constructor));',\n 'constructor_priority':\n 'int foo( void ) __attribute__((__constructor__(65535/2)));',\n 'deprecated':\n 'int foo(void) __attribute__((deprecated(\"\")));',\n 'destructor':\n 'int foo(void) __attribute__((destructor));',\n 'dllexport':\n '__declspec(dllexport) int foo(void) { return 0; }',\n 'dllimport':\n '__declspec(dllimport) int foo(void);',\n 'error':\n 'int foo(void) __attribute__((error(\"\")));',\n 'externally_visible':\n 'int foo(void) __attribute__((externally_visible));',\n 'fallthrough': '''\n int foo( void ) {\n switch (0) {\n case 1: __attribute__((fallthrough));\n case 2: break;\n }\n return 0;\n };''',\n 'flatten':\n 'int foo(void) __attribute__((flatten));',\n 'format':\n 'int foo(const char * p, ...) __attribute__((format(printf, 1, 2)));',\n 'format_arg':\n 'char * foo(const char * p) __attribute__((format_arg(1)));',\n 'gnu_inline':\n 'inline __attribute__((gnu_inline)) int foo(void) { return 0; }',\n 'hot':\n 'int foo(void) __attribute__((hot));',\n 'ifunc':\n ('int my_foo(void) { return 0; }'\n 'static int (*resolve_foo(void))(void) { return my_foo; }'\n 'int foo(void) __attribute__((ifunc(\"resolve_foo\")));'),\n 'leaf':\n '__attribute__((leaf)) int foo(void) { return 0; }',\n 'malloc':\n 'int *foo(void) __attribute__((malloc));',\n 'noclone':\n 'int foo(void) __attribute__((noclone));',\n 'noinline':\n '__attribute__((noinline)) int foo(void) { return 0; }',\n 'nonnull':\n 'int foo(char * p) __attribute__((nonnull(1)));',\n 'noreturn':\n 'int foo(void) __attribute__((noreturn));',\n 'nothrow':\n 'int foo(void) __attribute__((nothrow));',\n 'optimize':\n '__attribute__((optimize(3))) int foo(void) { return 0; }',\n 'packed':\n 'struct __attribute__((packed)) foo { int bar; };',\n 'pure':\n 'int foo(void) __attribute__((pure));',\n 'returns_nonnull':\n 'int *foo(void) __attribute__((returns_nonnull));',\n 'unused':\n 'int foo(void) __attribute__((unused));',\n 'used':\n 'int foo(void) __attribute__((used));',\n 'visibility': '''\n int foo_def(void) __attribute__((visibility(\"default\")));\n int foo_hid(void) __attribute__((visibility(\"hidden\")));\n int foo_int(void) __attribute__((visibility(\"internal\")));\n int foo_pro(void) __attribute__((visibility(\"protected\")));''',\n 'warning':\n 'int foo(void) __attribute__((warning(\"\")));',\n 'warn_unused_result':\n 'int foo(void) __attribute__((warn_unused_result));',\n 'weak':\n 'int foo(void) __attribute__((weak));',\n 'weakref': '''\n static int foo(void) { return 0; }\n static int var(void) __attribute__((weakref(\"foo\")));''',\n}\n\nCXX_FUNC_ATTRIBUTES = {\n # Alias must be applied to the mangled name in C++\n 'alias':\n ('extern \"C\" {'\n 'int foo(void) { return 0; }'\n '}'\n 'int bar(void) __attribute__((alias(\"foo\")));'\n ),\n 'ifunc':\n ('extern \"C\" {'\n 'int my_foo(void) { return 0; }'\n 'static int (*resolve_foo(void))(void) { return my_foo; }'\n '}'\n 'int foo(void) __attribute__((ifunc(\"resolve_foo\")));'),\n}\n", "path": "mesonbuild/compilers/c_function_attributes.py"}]}
2,100
252
gh_patches_debug_17876
rasdani/github-patches
git_diff
ray-project__ray-4493
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Convert build system to Bazel. It seems like using Bazel (https://github.com/bazelbuild/bazel) would improve our build system in a lot of ways. @rshin did an early version of this a long time ago in https://github.com/ray-project/ray-legacy/pull/408. @ericl, @concretevitamin, @pcmoritz have all used Bazel I believe and are proponents. @rsepassi and @dbieber have recently worked on this and have a good sense of what is involved. Any thoughts about this? Any drawbacks to be aware of? cc @chuxi @guoyuhong @raulchen </issue> <code> [start of python/setup.py] 1 from __future__ import absolute_import 2 from __future__ import division 3 from __future__ import print_function 4 5 import os 6 import re 7 import shutil 8 import subprocess 9 import sys 10 11 from setuptools import setup, find_packages, Distribution 12 import setuptools.command.build_ext as _build_ext 13 14 # Ideally, we could include these files by putting them in a 15 # MANIFEST.in or using the package_data argument to setup, but the 16 # MANIFEST.in gets applied at the very beginning when setup.py runs 17 # before these files have been created, so we have to move the files 18 # manually. 19 20 # NOTE: The lists below must be kept in sync with ray/CMakeLists.txt. 21 22 ray_files = [ 23 "ray/core/src/ray/thirdparty/redis/src/redis-server", 24 "ray/core/src/ray/gcs/redis_module/libray_redis_module.so", 25 "ray/core/src/plasma/plasma_store_server", "ray/_raylet.so", 26 "ray/core/src/ray/raylet/raylet_monitor", "ray/core/src/ray/raylet/raylet", 27 "ray/dashboard/dashboard.py", "ray/dashboard/index.html", 28 "ray/dashboard/res/main.css", "ray/dashboard/res/main.js" 29 ] 30 31 # These are the directories where automatically generated Python flatbuffer 32 # bindings are created. 33 generated_python_directories = [ 34 "ray/core/generated", "ray/core/generated/ray", 35 "ray/core/generated/ray/protocol" 36 ] 37 38 optional_ray_files = [] 39 40 ray_autoscaler_files = [ 41 "ray/autoscaler/aws/example-full.yaml", 42 "ray/autoscaler/gcp/example-full.yaml", 43 "ray/autoscaler/local/example-full.yaml", 44 ] 45 46 if "RAY_USE_NEW_GCS" in os.environ and os.environ["RAY_USE_NEW_GCS"] == "on": 47 ray_files += [ 48 "ray/core/src/credis/build/src/libmember.so", 49 "ray/core/src/credis/build/src/libmaster.so", 50 "ray/core/src/credis/redis/src/redis-server" 51 ] 52 53 optional_ray_files += ray_autoscaler_files 54 55 extras = { 56 "rllib": [ 57 "pyyaml", "gym[atari]", "opencv-python-headless", "lz4", "scipy" 58 ], 59 "debug": ["psutil", "setproctitle", "py-spy"], 60 "dashboard": ["psutil", "aiohttp"], 61 } 62 63 64 class build_ext(_build_ext.build_ext): 65 def run(self): 66 # Note: We are passing in sys.executable so that we use the same 67 # version of Python to build pyarrow inside the build.sh script. Note 68 # that certain flags will not be passed along such as --user or sudo. 69 # TODO(rkn): Fix this. 70 command = ["../build.sh", "-p", sys.executable] 71 if os.getenv("RAY_INSTALL_JAVA") == "1": 72 # Also build binaries for Java if the above env variable exists. 73 command += ["-l", "python,java"] 74 subprocess.check_call(command) 75 76 # We also need to install pyarrow along with Ray, so make sure that the 77 # relevant non-Python pyarrow files get copied. 78 pyarrow_files = [] 79 for (root, dirs, filenames) in os.walk("./ray/pyarrow_files/pyarrow"): 80 for name in filenames: 81 pyarrow_files.append(os.path.join(root, name)) 82 83 # Make sure the relevant files for modin get copied. 84 modin_files = [] 85 for (root, dirs, filenames) in os.walk("./ray/modin"): 86 for name in filenames: 87 modin_files.append(os.path.join(root, name)) 88 89 files_to_include = ray_files + pyarrow_files + modin_files 90 91 # Copy over the autogenerated flatbuffer Python bindings. 92 for directory in generated_python_directories: 93 for filename in os.listdir(directory): 94 if filename[-3:] == ".py": 95 files_to_include.append(os.path.join(directory, filename)) 96 97 for filename in files_to_include: 98 self.move_file(filename) 99 100 # Try to copy over the optional files. 101 for filename in optional_ray_files: 102 try: 103 self.move_file(filename) 104 except Exception: 105 print("Failed to copy optional file {}. This is ok." 106 .format(filename)) 107 108 def move_file(self, filename): 109 # TODO(rkn): This feels very brittle. It may not handle all cases. See 110 # https://github.com/apache/arrow/blob/master/python/setup.py for an 111 # example. 112 source = filename 113 destination = os.path.join(self.build_lib, filename) 114 # Create the target directory if it doesn't already exist. 115 parent_directory = os.path.dirname(destination) 116 if not os.path.exists(parent_directory): 117 os.makedirs(parent_directory) 118 if not os.path.exists(destination): 119 print("Copying {} to {}.".format(source, destination)) 120 shutil.copy(source, destination) 121 122 123 class BinaryDistribution(Distribution): 124 def has_ext_modules(self): 125 return True 126 127 128 def find_version(*filepath): 129 # Extract version information from filepath 130 here = os.path.abspath(os.path.dirname(__file__)) 131 with open(os.path.join(here, *filepath)) as fp: 132 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]", 133 fp.read(), re.M) 134 if version_match: 135 return version_match.group(1) 136 raise RuntimeError("Unable to find version string.") 137 138 139 requires = [ 140 "numpy >= 1.14", 141 "filelock", 142 "funcsigs", 143 "click", 144 "colorama", 145 "pytest", 146 "pyyaml", 147 "redis", 148 # NOTE: Don't upgrade the version of six! Doing so causes installation 149 # problems. See https://github.com/ray-project/ray/issues/4169. 150 "six >= 1.0.0", 151 # The typing module is required by modin. 152 "typing", 153 "flatbuffers", 154 "faulthandler;python_version<'3.3'", 155 ] 156 157 setup( 158 name="ray", 159 version=find_version("ray", "__init__.py"), 160 author="Ray Team", 161 author_email="[email protected]", 162 description=("A system for parallel and distributed Python that unifies " 163 "the ML ecosystem."), 164 long_description=open("../README.rst").read(), 165 url="https://github.com/ray-project/ray", 166 keywords=("ray distributed parallel machine-learning " 167 "reinforcement-learning deep-learning python"), 168 packages=find_packages(), 169 cmdclass={"build_ext": build_ext}, 170 # The BinaryDistribution argument triggers build_ext. 171 distclass=BinaryDistribution, 172 install_requires=requires, 173 setup_requires=["cython >= 0.29"], 174 extras_require=extras, 175 entry_points={ 176 "console_scripts": [ 177 "ray=ray.scripts.scripts:main", 178 "rllib=ray.rllib.scripts:cli [rllib]", "tune=ray.tune.scripts:cli" 179 ] 180 }, 181 include_package_data=True, 182 zip_safe=False, 183 license="Apache 2.0") 184 [end of python/setup.py] [start of python/ray/autoscaler/docker.py] 1 from __future__ import absolute_import 2 from __future__ import division 3 from __future__ import print_function 4 5 import os 6 import logging 7 try: # py3 8 from shlex import quote 9 except ImportError: # py2 10 from pipes import quote 11 12 logger = logging.getLogger(__name__) 13 14 15 def dockerize_if_needed(config): 16 if "docker" not in config: 17 return config 18 docker_image = config["docker"].get("image") 19 cname = config["docker"].get("container_name") 20 run_options = config["docker"].get("run_options", []) 21 ssh_user = config["auth"]["ssh_user"] 22 if not docker_image: 23 if cname: 24 logger.warning( 25 "dockerize_if_needed: " 26 "Container name given but no Docker image - continuing...") 27 return config 28 else: 29 assert cname, "Must provide container name!" 30 docker_mounts = {dst: dst for dst in config["file_mounts"]} 31 32 config["setup_commands"] = ( 33 docker_start_cmds(ssh_user, docker_image, docker_mounts, cname, 34 run_options) + with_docker_exec( 35 config["setup_commands"], container_name=cname)) 36 37 config["head_setup_commands"] = with_docker_exec( 38 config["head_setup_commands"], container_name=cname) 39 config["head_start_ray_commands"] = ( 40 docker_autoscaler_setup(cname) + with_docker_exec( 41 config["head_start_ray_commands"], container_name=cname)) 42 43 config["worker_setup_commands"] = with_docker_exec( 44 config["worker_setup_commands"], container_name=cname) 45 config["worker_start_ray_commands"] = with_docker_exec( 46 config["worker_start_ray_commands"], 47 container_name=cname, 48 env_vars=["RAY_HEAD_IP"]) 49 50 return config 51 52 53 def with_docker_exec(cmds, container_name, env_vars=None): 54 env_str = "" 55 if env_vars: 56 env_str = " ".join( 57 ["-e {env}=${env}".format(env=env) for env in env_vars]) 58 return [ 59 "docker exec {} {} /bin/sh -c {} ".format(env_str, container_name, 60 quote(cmd)) for cmd in cmds 61 ] 62 63 64 def aptwait_cmd(): 65 return ("while sudo fuser" 66 " /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock" 67 " >/dev/null 2>&1; " 68 "do echo 'Waiting for release of dpkg/apt locks'; sleep 5; done") 69 70 71 def docker_start_cmds(user, image, mount, cname, user_options): 72 cmds = [] 73 74 # create flags 75 # ports for the redis, object manager, and tune client 76 port_flags = " ".join([ 77 "-p {port}:{port}".format(port=port) 78 for port in ["6379", "8076", "4321"] 79 ]) 80 mount_flags = " ".join( 81 ["-v {src}:{dest}".format(src=k, dest=v) for k, v in mount.items()]) 82 83 # for click, used in ray cli 84 env_vars = {"LC_ALL": "C.UTF-8", "LANG": "C.UTF-8"} 85 env_flags = " ".join( 86 ["-e {name}={val}".format(name=k, val=v) for k, v in env_vars.items()]) 87 88 user_options_str = " ".join(user_options) 89 # docker run command 90 docker_check = [ 91 "docker", "inspect", "-f", "'{{.State.Running}}'", cname, "||" 92 ] 93 docker_run = [ 94 "docker", "run", "--rm", "--name {}".format(cname), "-d", "-it", 95 port_flags, mount_flags, env_flags, user_options_str, "--net=host", 96 image, "bash" 97 ] 98 cmds.append(" ".join(docker_check + docker_run)) 99 docker_update = [ 100 " && ".join(("apt-get -y update", "apt-get -y upgrade", 101 "apt-get install -y git wget cmake psmisc")) 102 ] 103 cmds.extend(with_docker_exec(docker_update, container_name=cname)) 104 return cmds 105 106 107 def docker_autoscaler_setup(cname): 108 cmds = [] 109 for path in ["~/ray_bootstrap_config.yaml", "~/ray_bootstrap_key.pem"]: 110 # needed because docker doesn't allow relative paths 111 base_path = os.path.basename(path) 112 cmds.append("docker cp {path} {cname}:{dpath}".format( 113 path=path, dpath=base_path, cname=cname)) 114 cmds.extend( 115 with_docker_exec( 116 ["cp {} {}".format("/" + base_path, path)], 117 container_name=cname)) 118 return cmds 119 [end of python/ray/autoscaler/docker.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/python/ray/autoscaler/docker.py b/python/ray/autoscaler/docker.py --- a/python/ray/autoscaler/docker.py +++ b/python/ray/autoscaler/docker.py @@ -98,7 +98,7 @@ cmds.append(" ".join(docker_check + docker_run)) docker_update = [ " && ".join(("apt-get -y update", "apt-get -y upgrade", - "apt-get install -y git wget cmake psmisc")) + "apt-get install -y git wget psmisc")) ] cmds.extend(with_docker_exec(docker_update, container_name=cname)) return cmds diff --git a/python/setup.py b/python/setup.py --- a/python/setup.py +++ b/python/setup.py @@ -17,7 +17,7 @@ # before these files have been created, so we have to move the files # manually. -# NOTE: The lists below must be kept in sync with ray/CMakeLists.txt. +# NOTE: The lists below must be kept in sync with ray/BUILD.bazel. ray_files = [ "ray/core/src/ray/thirdparty/redis/src/redis-server",
{"golden_diff": "diff --git a/python/ray/autoscaler/docker.py b/python/ray/autoscaler/docker.py\n--- a/python/ray/autoscaler/docker.py\n+++ b/python/ray/autoscaler/docker.py\n@@ -98,7 +98,7 @@\n cmds.append(\" \".join(docker_check + docker_run))\n docker_update = [\n \" && \".join((\"apt-get -y update\", \"apt-get -y upgrade\",\n- \"apt-get install -y git wget cmake psmisc\"))\n+ \"apt-get install -y git wget psmisc\"))\n ]\n cmds.extend(with_docker_exec(docker_update, container_name=cname))\n return cmds\ndiff --git a/python/setup.py b/python/setup.py\n--- a/python/setup.py\n+++ b/python/setup.py\n@@ -17,7 +17,7 @@\n # before these files have been created, so we have to move the files\n # manually.\n \n-# NOTE: The lists below must be kept in sync with ray/CMakeLists.txt.\n+# NOTE: The lists below must be kept in sync with ray/BUILD.bazel.\n \n ray_files = [\n \"ray/core/src/ray/thirdparty/redis/src/redis-server\",\n", "issue": "Convert build system to Bazel.\nIt seems like using Bazel (https://github.com/bazelbuild/bazel) would improve our build system in a lot of ways.\r\n\r\n@rshin did an early version of this a long time ago in https://github.com/ray-project/ray-legacy/pull/408. @ericl, @concretevitamin, @pcmoritz have all used Bazel I believe and are proponents. @rsepassi and @dbieber have recently worked on this and have a good sense of what is involved.\r\n\r\nAny thoughts about this? Any drawbacks to be aware of?\r\n\r\ncc @chuxi @guoyuhong @raulchen\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport re\nimport shutil\nimport subprocess\nimport sys\n\nfrom setuptools import setup, find_packages, Distribution\nimport setuptools.command.build_ext as _build_ext\n\n# Ideally, we could include these files by putting them in a\n# MANIFEST.in or using the package_data argument to setup, but the\n# MANIFEST.in gets applied at the very beginning when setup.py runs\n# before these files have been created, so we have to move the files\n# manually.\n\n# NOTE: The lists below must be kept in sync with ray/CMakeLists.txt.\n\nray_files = [\n \"ray/core/src/ray/thirdparty/redis/src/redis-server\",\n \"ray/core/src/ray/gcs/redis_module/libray_redis_module.so\",\n \"ray/core/src/plasma/plasma_store_server\", \"ray/_raylet.so\",\n \"ray/core/src/ray/raylet/raylet_monitor\", \"ray/core/src/ray/raylet/raylet\",\n \"ray/dashboard/dashboard.py\", \"ray/dashboard/index.html\",\n \"ray/dashboard/res/main.css\", \"ray/dashboard/res/main.js\"\n]\n\n# These are the directories where automatically generated Python flatbuffer\n# bindings are created.\ngenerated_python_directories = [\n \"ray/core/generated\", \"ray/core/generated/ray\",\n \"ray/core/generated/ray/protocol\"\n]\n\noptional_ray_files = []\n\nray_autoscaler_files = [\n \"ray/autoscaler/aws/example-full.yaml\",\n \"ray/autoscaler/gcp/example-full.yaml\",\n \"ray/autoscaler/local/example-full.yaml\",\n]\n\nif \"RAY_USE_NEW_GCS\" in os.environ and os.environ[\"RAY_USE_NEW_GCS\"] == \"on\":\n ray_files += [\n \"ray/core/src/credis/build/src/libmember.so\",\n \"ray/core/src/credis/build/src/libmaster.so\",\n \"ray/core/src/credis/redis/src/redis-server\"\n ]\n\noptional_ray_files += ray_autoscaler_files\n\nextras = {\n \"rllib\": [\n \"pyyaml\", \"gym[atari]\", \"opencv-python-headless\", \"lz4\", \"scipy\"\n ],\n \"debug\": [\"psutil\", \"setproctitle\", \"py-spy\"],\n \"dashboard\": [\"psutil\", \"aiohttp\"],\n}\n\n\nclass build_ext(_build_ext.build_ext):\n def run(self):\n # Note: We are passing in sys.executable so that we use the same\n # version of Python to build pyarrow inside the build.sh script. Note\n # that certain flags will not be passed along such as --user or sudo.\n # TODO(rkn): Fix this.\n command = [\"../build.sh\", \"-p\", sys.executable]\n if os.getenv(\"RAY_INSTALL_JAVA\") == \"1\":\n # Also build binaries for Java if the above env variable exists.\n command += [\"-l\", \"python,java\"]\n subprocess.check_call(command)\n\n # We also need to install pyarrow along with Ray, so make sure that the\n # relevant non-Python pyarrow files get copied.\n pyarrow_files = []\n for (root, dirs, filenames) in os.walk(\"./ray/pyarrow_files/pyarrow\"):\n for name in filenames:\n pyarrow_files.append(os.path.join(root, name))\n\n # Make sure the relevant files for modin get copied.\n modin_files = []\n for (root, dirs, filenames) in os.walk(\"./ray/modin\"):\n for name in filenames:\n modin_files.append(os.path.join(root, name))\n\n files_to_include = ray_files + pyarrow_files + modin_files\n\n # Copy over the autogenerated flatbuffer Python bindings.\n for directory in generated_python_directories:\n for filename in os.listdir(directory):\n if filename[-3:] == \".py\":\n files_to_include.append(os.path.join(directory, filename))\n\n for filename in files_to_include:\n self.move_file(filename)\n\n # Try to copy over the optional files.\n for filename in optional_ray_files:\n try:\n self.move_file(filename)\n except Exception:\n print(\"Failed to copy optional file {}. This is ok.\"\n .format(filename))\n\n def move_file(self, filename):\n # TODO(rkn): This feels very brittle. It may not handle all cases. See\n # https://github.com/apache/arrow/blob/master/python/setup.py for an\n # example.\n source = filename\n destination = os.path.join(self.build_lib, filename)\n # Create the target directory if it doesn't already exist.\n parent_directory = os.path.dirname(destination)\n if not os.path.exists(parent_directory):\n os.makedirs(parent_directory)\n if not os.path.exists(destination):\n print(\"Copying {} to {}.\".format(source, destination))\n shutil.copy(source, destination)\n\n\nclass BinaryDistribution(Distribution):\n def has_ext_modules(self):\n return True\n\n\ndef find_version(*filepath):\n # Extract version information from filepath\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *filepath)) as fp:\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n fp.read(), re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n \"numpy >= 1.14\",\n \"filelock\",\n \"funcsigs\",\n \"click\",\n \"colorama\",\n \"pytest\",\n \"pyyaml\",\n \"redis\",\n # NOTE: Don't upgrade the version of six! Doing so causes installation\n # problems. See https://github.com/ray-project/ray/issues/4169.\n \"six >= 1.0.0\",\n # The typing module is required by modin.\n \"typing\",\n \"flatbuffers\",\n \"faulthandler;python_version<'3.3'\",\n]\n\nsetup(\n name=\"ray\",\n version=find_version(\"ray\", \"__init__.py\"),\n author=\"Ray Team\",\n author_email=\"[email protected]\",\n description=(\"A system for parallel and distributed Python that unifies \"\n \"the ML ecosystem.\"),\n long_description=open(\"../README.rst\").read(),\n url=\"https://github.com/ray-project/ray\",\n keywords=(\"ray distributed parallel machine-learning \"\n \"reinforcement-learning deep-learning python\"),\n packages=find_packages(),\n cmdclass={\"build_ext\": build_ext},\n # The BinaryDistribution argument triggers build_ext.\n distclass=BinaryDistribution,\n install_requires=requires,\n setup_requires=[\"cython >= 0.29\"],\n extras_require=extras,\n entry_points={\n \"console_scripts\": [\n \"ray=ray.scripts.scripts:main\",\n \"rllib=ray.rllib.scripts:cli [rllib]\", \"tune=ray.tune.scripts:cli\"\n ]\n },\n include_package_data=True,\n zip_safe=False,\n license=\"Apache 2.0\")\n", "path": "python/setup.py"}, {"content": "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport logging\ntry: # py3\n from shlex import quote\nexcept ImportError: # py2\n from pipes import quote\n\nlogger = logging.getLogger(__name__)\n\n\ndef dockerize_if_needed(config):\n if \"docker\" not in config:\n return config\n docker_image = config[\"docker\"].get(\"image\")\n cname = config[\"docker\"].get(\"container_name\")\n run_options = config[\"docker\"].get(\"run_options\", [])\n ssh_user = config[\"auth\"][\"ssh_user\"]\n if not docker_image:\n if cname:\n logger.warning(\n \"dockerize_if_needed: \"\n \"Container name given but no Docker image - continuing...\")\n return config\n else:\n assert cname, \"Must provide container name!\"\n docker_mounts = {dst: dst for dst in config[\"file_mounts\"]}\n\n config[\"setup_commands\"] = (\n docker_start_cmds(ssh_user, docker_image, docker_mounts, cname,\n run_options) + with_docker_exec(\n config[\"setup_commands\"], container_name=cname))\n\n config[\"head_setup_commands\"] = with_docker_exec(\n config[\"head_setup_commands\"], container_name=cname)\n config[\"head_start_ray_commands\"] = (\n docker_autoscaler_setup(cname) + with_docker_exec(\n config[\"head_start_ray_commands\"], container_name=cname))\n\n config[\"worker_setup_commands\"] = with_docker_exec(\n config[\"worker_setup_commands\"], container_name=cname)\n config[\"worker_start_ray_commands\"] = with_docker_exec(\n config[\"worker_start_ray_commands\"],\n container_name=cname,\n env_vars=[\"RAY_HEAD_IP\"])\n\n return config\n\n\ndef with_docker_exec(cmds, container_name, env_vars=None):\n env_str = \"\"\n if env_vars:\n env_str = \" \".join(\n [\"-e {env}=${env}\".format(env=env) for env in env_vars])\n return [\n \"docker exec {} {} /bin/sh -c {} \".format(env_str, container_name,\n quote(cmd)) for cmd in cmds\n ]\n\n\ndef aptwait_cmd():\n return (\"while sudo fuser\"\n \" /var/{lib/{dpkg,apt/lists},cache/apt/archives}/lock\"\n \" >/dev/null 2>&1; \"\n \"do echo 'Waiting for release of dpkg/apt locks'; sleep 5; done\")\n\n\ndef docker_start_cmds(user, image, mount, cname, user_options):\n cmds = []\n\n # create flags\n # ports for the redis, object manager, and tune client\n port_flags = \" \".join([\n \"-p {port}:{port}\".format(port=port)\n for port in [\"6379\", \"8076\", \"4321\"]\n ])\n mount_flags = \" \".join(\n [\"-v {src}:{dest}\".format(src=k, dest=v) for k, v in mount.items()])\n\n # for click, used in ray cli\n env_vars = {\"LC_ALL\": \"C.UTF-8\", \"LANG\": \"C.UTF-8\"}\n env_flags = \" \".join(\n [\"-e {name}={val}\".format(name=k, val=v) for k, v in env_vars.items()])\n\n user_options_str = \" \".join(user_options)\n # docker run command\n docker_check = [\n \"docker\", \"inspect\", \"-f\", \"'{{.State.Running}}'\", cname, \"||\"\n ]\n docker_run = [\n \"docker\", \"run\", \"--rm\", \"--name {}\".format(cname), \"-d\", \"-it\",\n port_flags, mount_flags, env_flags, user_options_str, \"--net=host\",\n image, \"bash\"\n ]\n cmds.append(\" \".join(docker_check + docker_run))\n docker_update = [\n \" && \".join((\"apt-get -y update\", \"apt-get -y upgrade\",\n \"apt-get install -y git wget cmake psmisc\"))\n ]\n cmds.extend(with_docker_exec(docker_update, container_name=cname))\n return cmds\n\n\ndef docker_autoscaler_setup(cname):\n cmds = []\n for path in [\"~/ray_bootstrap_config.yaml\", \"~/ray_bootstrap_key.pem\"]:\n # needed because docker doesn't allow relative paths\n base_path = os.path.basename(path)\n cmds.append(\"docker cp {path} {cname}:{dpath}\".format(\n path=path, dpath=base_path, cname=cname))\n cmds.extend(\n with_docker_exec(\n [\"cp {} {}\".format(\"/\" + base_path, path)],\n container_name=cname))\n return cmds\n", "path": "python/ray/autoscaler/docker.py"}]}
3,935
259
gh_patches_debug_10864
rasdani/github-patches
git_diff
apache__airflow-33404
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> Fix lost runs in `CronTriggerTimetable` closes: #27399 Fix lost runs in `CronTriggerTimetable` by resetting seconds and microseconds in `start_time_candidates`. <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Thank you for contributing! Please make sure that your code changes are covered with tests. And in case of new features or big changes remember to adjust the documentation. Feel free to ping committers for the review! In case of an existing issue, reference it using one of the following: closes: #ISSUE related: #ISSUE How to write a good git commit message: http://chris.beams.io/posts/git-commit/ --> <!-- Please keep an empty line above the dashes. --> --- **^ Add meaningful description above** Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#pull-request-guidelines)** for more information. In case of fundamental code changes, an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals)) is needed. In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x). In case of backwards incompatible changes please leave a note in a newsfragment file, named `{pr_number}.significant.rst` or `{issue_number}.significant.rst`, in [newsfragments](https://github.com/apache/airflow/tree/main/newsfragments). </issue> <code> [start of airflow/timetables/trigger.py] 1 # Licensed to the Apache Software Foundation (ASF) under one 2 # or more contributor license agreements. See the NOTICE file 3 # distributed with this work for additional information 4 # regarding copyright ownership. The ASF licenses this file 5 # to you under the Apache License, Version 2.0 (the 6 # "License"); you may not use this file except in compliance 7 # with the License. You may obtain a copy of the License at 8 # 9 # http://www.apache.org/licenses/LICENSE-2.0 10 # 11 # Unless required by applicable law or agreed to in writing, 12 # software distributed under the License is distributed on an 13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 # KIND, either express or implied. See the License for the 15 # specific language governing permissions and limitations 16 # under the License. 17 from __future__ import annotations 18 19 import datetime 20 from typing import Any 21 22 from dateutil.relativedelta import relativedelta 23 from pendulum import DateTime 24 from pendulum.tz.timezone import Timezone 25 26 from airflow.timetables._cron import CronMixin 27 from airflow.timetables.base import DagRunInfo, DataInterval, TimeRestriction, Timetable 28 29 30 class CronTriggerTimetable(CronMixin, Timetable): 31 """Timetable that triggers DAG runs according to a cron expression. 32 33 This is different from ``CronDataIntervalTimetable``, where the cron 34 expression specifies the *data interval* of a DAG run. With this timetable, 35 the data intervals are specified independently from the cron expression. 36 Also for the same reason, this timetable kicks off a DAG run immediately at 37 the start of the period (similar to POSIX cron), instead of needing to wait 38 for one data interval to pass. 39 40 Don't pass ``@once`` in here; use ``OnceTimetable`` instead. 41 """ 42 43 def __init__( 44 self, 45 cron: str, 46 *, 47 timezone: str | Timezone, 48 interval: datetime.timedelta | relativedelta = datetime.timedelta(), 49 ) -> None: 50 super().__init__(cron, timezone) 51 self._interval = interval 52 53 @classmethod 54 def deserialize(cls, data: dict[str, Any]) -> Timetable: 55 from airflow.serialization.serialized_objects import decode_relativedelta, decode_timezone 56 57 interval: datetime.timedelta | relativedelta 58 if isinstance(data["interval"], dict): 59 interval = decode_relativedelta(data["interval"]) 60 else: 61 interval = datetime.timedelta(seconds=data["interval"]) 62 return cls(data["expression"], timezone=decode_timezone(data["timezone"]), interval=interval) 63 64 def serialize(self) -> dict[str, Any]: 65 from airflow.serialization.serialized_objects import encode_relativedelta, encode_timezone 66 67 interval: float | dict[str, Any] 68 if isinstance(self._interval, datetime.timedelta): 69 interval = self._interval.total_seconds() 70 else: 71 interval = encode_relativedelta(self._interval) 72 timezone = encode_timezone(self._timezone) 73 return {"expression": self._expression, "timezone": timezone, "interval": interval} 74 75 def infer_manual_data_interval(self, *, run_after: DateTime) -> DataInterval: 76 return DataInterval(run_after - self._interval, run_after) 77 78 def next_dagrun_info( 79 self, 80 *, 81 last_automated_data_interval: DataInterval | None, 82 restriction: TimeRestriction, 83 ) -> DagRunInfo | None: 84 if restriction.catchup: 85 if last_automated_data_interval is not None: 86 next_start_time = self._get_next(last_automated_data_interval.end) 87 elif restriction.earliest is None: 88 return None # Don't know where to catch up from, give up. 89 else: 90 next_start_time = self._align_to_next(restriction.earliest) 91 else: 92 start_time_candidates = [self._align_to_next(DateTime.utcnow())] 93 if last_automated_data_interval is not None: 94 start_time_candidates.append(self._get_next(last_automated_data_interval.end)) 95 if restriction.earliest is not None: 96 start_time_candidates.append(self._align_to_next(restriction.earliest)) 97 next_start_time = max(start_time_candidates) 98 if restriction.latest is not None and restriction.latest < next_start_time: 99 return None 100 return DagRunInfo.interval(next_start_time - self._interval, next_start_time) 101 [end of airflow/timetables/trigger.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/airflow/timetables/trigger.py b/airflow/timetables/trigger.py --- a/airflow/timetables/trigger.py +++ b/airflow/timetables/trigger.py @@ -89,7 +89,7 @@ else: next_start_time = self._align_to_next(restriction.earliest) else: - start_time_candidates = [self._align_to_next(DateTime.utcnow())] + start_time_candidates = [self._align_to_prev(DateTime.utcnow())] if last_automated_data_interval is not None: start_time_candidates.append(self._get_next(last_automated_data_interval.end)) if restriction.earliest is not None:
{"golden_diff": "diff --git a/airflow/timetables/trigger.py b/airflow/timetables/trigger.py\n--- a/airflow/timetables/trigger.py\n+++ b/airflow/timetables/trigger.py\n@@ -89,7 +89,7 @@\n else:\n next_start_time = self._align_to_next(restriction.earliest)\n else:\n- start_time_candidates = [self._align_to_next(DateTime.utcnow())]\n+ start_time_candidates = [self._align_to_prev(DateTime.utcnow())]\n if last_automated_data_interval is not None:\n start_time_candidates.append(self._get_next(last_automated_data_interval.end))\n if restriction.earliest is not None:\n", "issue": "Fix lost runs in `CronTriggerTimetable`\ncloses: #27399\r\n\r\nFix lost runs in `CronTriggerTimetable` by resetting seconds and microseconds in `start_time_candidates`.\r\n<!--\r\n Licensed to the Apache Software Foundation (ASF) under one\r\n or more contributor license agreements. See the NOTICE file\r\n distributed with this work for additional information\r\n regarding copyright ownership. The ASF licenses this file\r\n to you under the Apache License, Version 2.0 (the\r\n \"License\"); you may not use this file except in compliance\r\n with the License. You may obtain a copy of the License at\r\n\r\n http://www.apache.org/licenses/LICENSE-2.0\r\n\r\n Unless required by applicable law or agreed to in writing,\r\n software distributed under the License is distributed on an\r\n \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\r\n KIND, either express or implied. See the License for the\r\n specific language governing permissions and limitations\r\n under the License.\r\n -->\r\n\r\n<!--\r\nThank you for contributing! Please make sure that your code changes\r\nare covered with tests. And in case of new features or big changes\r\nremember to adjust the documentation.\r\n\r\nFeel free to ping committers for the review!\r\n\r\nIn case of an existing issue, reference it using one of the following:\r\n\r\ncloses: #ISSUE\r\nrelated: #ISSUE\r\n\r\nHow to write a good git commit message:\r\nhttp://chris.beams.io/posts/git-commit/\r\n-->\r\n\r\n\r\n\r\n<!-- Please keep an empty line above the dashes. -->\r\n---\r\n**^ Add meaningful description above**\r\n\r\nRead the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/CONTRIBUTING.rst#pull-request-guidelines)** for more information.\r\nIn case of fundamental code changes, an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals)) is needed.\r\nIn case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).\r\nIn case of backwards incompatible changes please leave a note in a newsfragment file, named `{pr_number}.significant.rst` or `{issue_number}.significant.rst`, in [newsfragments](https://github.com/apache/airflow/tree/main/newsfragments).\r\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\nfrom __future__ import annotations\n\nimport datetime\nfrom typing import Any\n\nfrom dateutil.relativedelta import relativedelta\nfrom pendulum import DateTime\nfrom pendulum.tz.timezone import Timezone\n\nfrom airflow.timetables._cron import CronMixin\nfrom airflow.timetables.base import DagRunInfo, DataInterval, TimeRestriction, Timetable\n\n\nclass CronTriggerTimetable(CronMixin, Timetable):\n \"\"\"Timetable that triggers DAG runs according to a cron expression.\n\n This is different from ``CronDataIntervalTimetable``, where the cron\n expression specifies the *data interval* of a DAG run. With this timetable,\n the data intervals are specified independently from the cron expression.\n Also for the same reason, this timetable kicks off a DAG run immediately at\n the start of the period (similar to POSIX cron), instead of needing to wait\n for one data interval to pass.\n\n Don't pass ``@once`` in here; use ``OnceTimetable`` instead.\n \"\"\"\n\n def __init__(\n self,\n cron: str,\n *,\n timezone: str | Timezone,\n interval: datetime.timedelta | relativedelta = datetime.timedelta(),\n ) -> None:\n super().__init__(cron, timezone)\n self._interval = interval\n\n @classmethod\n def deserialize(cls, data: dict[str, Any]) -> Timetable:\n from airflow.serialization.serialized_objects import decode_relativedelta, decode_timezone\n\n interval: datetime.timedelta | relativedelta\n if isinstance(data[\"interval\"], dict):\n interval = decode_relativedelta(data[\"interval\"])\n else:\n interval = datetime.timedelta(seconds=data[\"interval\"])\n return cls(data[\"expression\"], timezone=decode_timezone(data[\"timezone\"]), interval=interval)\n\n def serialize(self) -> dict[str, Any]:\n from airflow.serialization.serialized_objects import encode_relativedelta, encode_timezone\n\n interval: float | dict[str, Any]\n if isinstance(self._interval, datetime.timedelta):\n interval = self._interval.total_seconds()\n else:\n interval = encode_relativedelta(self._interval)\n timezone = encode_timezone(self._timezone)\n return {\"expression\": self._expression, \"timezone\": timezone, \"interval\": interval}\n\n def infer_manual_data_interval(self, *, run_after: DateTime) -> DataInterval:\n return DataInterval(run_after - self._interval, run_after)\n\n def next_dagrun_info(\n self,\n *,\n last_automated_data_interval: DataInterval | None,\n restriction: TimeRestriction,\n ) -> DagRunInfo | None:\n if restriction.catchup:\n if last_automated_data_interval is not None:\n next_start_time = self._get_next(last_automated_data_interval.end)\n elif restriction.earliest is None:\n return None # Don't know where to catch up from, give up.\n else:\n next_start_time = self._align_to_next(restriction.earliest)\n else:\n start_time_candidates = [self._align_to_next(DateTime.utcnow())]\n if last_automated_data_interval is not None:\n start_time_candidates.append(self._get_next(last_automated_data_interval.end))\n if restriction.earliest is not None:\n start_time_candidates.append(self._align_to_next(restriction.earliest))\n next_start_time = max(start_time_candidates)\n if restriction.latest is not None and restriction.latest < next_start_time:\n return None\n return DagRunInfo.interval(next_start_time - self._interval, next_start_time)\n", "path": "airflow/timetables/trigger.py"}]}
2,152
153
gh_patches_debug_17412
rasdani/github-patches
git_diff
ESMCI__cime-4411
You will be provided with a partial code base and an issue statement explaining a problem to resolve. <issue> case.build --clean-build will remove the CASEDIR contents if EXEDIR points to the CASEDIR I'm reporting this from @dmleung. He setup some cases where EXEDIR was set to be the same as CASEDIR (so NOT a bld subdirectory of the CASEDIR). If he then runs "./case.build --clean-build" it then erases the contents of the CASEDIR. Outside of that it actually functions fine, but if you clean your build it will cause problems. Note, that he did this with older cime versions and it seemed to work OK, but with cime6.0.45 it will remove the case directory contents. The important thing to note is that this is something you should NOT do! So we can either close this as a WONTFIX as a degenerate case that you shouldn't do. Or a simple thing to do would be to add a simple check that EXEDIR should not be the same as CASEDIR, or there should be an error reported. </issue> <code> [start of CIME/XML/env_build.py] 1 """ 2 Interface to the env_build.xml file. This class inherits from EnvBase 3 """ 4 from CIME.XML.standard_module_setup import * 5 6 from CIME import utils 7 from CIME.XML.env_base import EnvBase 8 9 logger = logging.getLogger(__name__) 10 11 12 class EnvBuild(EnvBase): 13 # pylint: disable=unused-argument 14 def __init__( 15 self, case_root=None, infile="env_build.xml", components=None, read_only=False 16 ): 17 """ 18 initialize an object interface to file env_build.xml in the case directory 19 """ 20 schema = os.path.join(utils.get_schema_path(), "env_entry_id.xsd") 21 EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only) 22 [end of CIME/XML/env_build.py] </code> I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format. <patch> diff --git a/file.py b/file.py --- a/file.py +++ b/file.py @@ -1,27 +1,35 @@ def euclidean(a, b): - while b: - a, b = b, a % b - return a + if b == 0: + return a + return euclidean(b, a % b) def bresenham(x0, y0, x1, y1): points = [] dx = abs(x1 - x0) dy = abs(y1 - y0) - sx = 1 if x0 < x1 else -1 - sy = 1 if y0 < y1 else -1 - err = dx - dy + x, y = x0, y0 + sx = -1 if x0 > x1 else 1 + sy = -1 if y0 > y1 else 1 - while True: - points.append((x0, y0)) - if x0 == x1 and y0 == y1: - break - e2 = 2 * err - if e2 > -dy: - err -= dy - x0 += sx - if e2 < dx: - err += dx - y0 += sy + if dx > dy: + err = dx / 2.0 + while x != x1: + points.append((x, y)) + err -= dy + if err < 0: + y += sy + err += dx + x += sx + else: + err = dy / 2.0 + while y != y1: + points.append((x, y)) + err -= dx + if err < 0: + x += sx + err += dy + y += sy + + points.append((x, y)) return points </patch>
diff --git a/CIME/XML/env_build.py b/CIME/XML/env_build.py --- a/CIME/XML/env_build.py +++ b/CIME/XML/env_build.py @@ -18,4 +18,19 @@ initialize an object interface to file env_build.xml in the case directory """ schema = os.path.join(utils.get_schema_path(), "env_entry_id.xsd") + self._caseroot = case_root EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only) + + def set_value(self, vid, value, subgroup=None, ignore_type=False): + """ + Set the value of an entry-id field to value + Returns the value or None if not found + subgroup is ignored in the general routine and applied in specific methods + """ + # Do not allow any of these to be the same as CASEROOT + if vid in ("EXEROOT", "OBJDIR", "LIBROOT"): + utils.expect(value != self._caseroot, f"Cannot set {vid} to CASEROOT") + + return super(EnvBuild, self).set_value( + vid, value, subgroup=subgroup, ignore_type=ignore_type + )
{"golden_diff": "diff --git a/CIME/XML/env_build.py b/CIME/XML/env_build.py\n--- a/CIME/XML/env_build.py\n+++ b/CIME/XML/env_build.py\n@@ -18,4 +18,19 @@\n initialize an object interface to file env_build.xml in the case directory\n \"\"\"\n schema = os.path.join(utils.get_schema_path(), \"env_entry_id.xsd\")\n+ self._caseroot = case_root\n EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only)\n+\n+ def set_value(self, vid, value, subgroup=None, ignore_type=False):\n+ \"\"\"\n+ Set the value of an entry-id field to value\n+ Returns the value or None if not found\n+ subgroup is ignored in the general routine and applied in specific methods\n+ \"\"\"\n+ # Do not allow any of these to be the same as CASEROOT\n+ if vid in (\"EXEROOT\", \"OBJDIR\", \"LIBROOT\"):\n+ utils.expect(value != self._caseroot, f\"Cannot set {vid} to CASEROOT\")\n+\n+ return super(EnvBuild, self).set_value(\n+ vid, value, subgroup=subgroup, ignore_type=ignore_type\n+ )\n", "issue": "case.build --clean-build will remove the CASEDIR contents if EXEDIR points to the CASEDIR\nI'm reporting this from @dmleung. He setup some cases where EXEDIR was set to be the same as CASEDIR (so NOT a bld subdirectory of the CASEDIR). If he then runs \"./case.build --clean-build\" it then erases the contents of the CASEDIR. Outside of that it actually functions fine, but if you clean your build it will cause problems. Note, that he did this with older cime versions and it seemed to work OK, but with cime6.0.45 it will remove the case directory contents.\r\n\r\nThe important thing to note is that this is something you should NOT do! So we can either close this as a WONTFIX as a degenerate case that you shouldn't do. Or a simple thing to do would be to add a simple check that EXEDIR should not be the same as CASEDIR, or there should be an error reported.\n", "before_files": [{"content": "\"\"\"\nInterface to the env_build.xml file. This class inherits from EnvBase\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME import utils\nfrom CIME.XML.env_base import EnvBase\n\nlogger = logging.getLogger(__name__)\n\n\nclass EnvBuild(EnvBase):\n # pylint: disable=unused-argument\n def __init__(\n self, case_root=None, infile=\"env_build.xml\", components=None, read_only=False\n ):\n \"\"\"\n initialize an object interface to file env_build.xml in the case directory\n \"\"\"\n schema = os.path.join(utils.get_schema_path(), \"env_entry_id.xsd\")\n EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only)\n", "path": "CIME/XML/env_build.py"}]}
939
274